Can't scrape all the comments

Problem description · Votes: 0 · Answers: 1

I'm trying to scrape this website to collect the reviews, but I've run into a problem:

  • The page only loads 50 comments at a time.

  • To load more, you have to click "Show more reviews". I don't know how to get all the data, because there are no page links, and the "Show more reviews" button has no URL to explore; the address stays the same.

    import requests
    from bs4 import BeautifulSoup
    import pandas as pd

    reviews = []

    url = "https://www.capterra.com/p/134048/HiMama-Preschool-Child-Care-App/#reviews"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")

    # Only the first 50 reviews are present in the initial HTML
    for comment in soup.find_all("div", {"class": "review-comments"}):
        reviews.append(comment.text)

    df = pd.DataFrame(reviews)
    df.to_csv("review.csv", sep='\t')
    
python python-3.x beautifulsoup request
1 Answer
1 vote

Looking at the site, the "Show more reviews" button makes an AJAX call that returns the additional reviews. All you have to do is find that endpoint and send a GET request to it (I did this with some simple regex):

import requests
import re
from bs4 import BeautifulSoup

headers = {
    "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) snap Chromium/74.0.3729.169 Chrome/74.0.3729.169 Safari/537.36"
}
url = "https://www.capterra.com/p/134048/HiMama-Preschool-Child-Care-App/#reviews"
Data = []
# Each page is equivalent to 50 comments:
MaximumCommentPages = 3

with requests.Session() as session:
    info = session.get(url, headers=headers)
    # Get the product ID, needed for requesting more comments
    productID = re.search(r'"product_id":(\w*)', info.text).group(1)
    # Extract the reviews embedded in the main page
    soup = BeautifulSoup(info.content, "html.parser")
    table = soup.findAll("div", {"class": "review-comments"})
    for x in table:
        Data.append(x)
    # Request the additional pages:
    params = {
        "page": "",
        "product_id": productID
    }
    while MaximumCommentPages > 1:  # stop at 1 because page 1 is the main page we already extracted
        MaximumCommentPages -= 1
        params["page"] = str(MaximumCommentPages)
        additionalInfo = session.get("https://www.capterra.com/gdm_reviews", params=params, headers=headers)
        print(additionalInfo.url)
        # Extract the reviews from the AJAX response:
        soup = BeautifulSoup(additionalInfo.content, "html.parser")
        table = soup.findAll("div", {"class": "review-comments"})
        for x in table:
            Data.append(x)

# Save the data the old-fashioned way:
counter = 1
with open('review.csv', 'w') as f:
    for one in Data:
        f.write(str(counter))
        f.write(one.text)
        f.write('\n')
        counter += 1

Note how I use a Session so the cookies are kept for the AJAX call.
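As a minimal offline sketch of that point (the domain, path, and cookie name below are made-up stand-ins), a `requests.Session` attaches any cookies it has stored to every subsequent request it prepares:

```python
import requests

session = requests.Session()
# Pretend a first response set this cookie; in real use, session.get(url)
# stores whatever Set-Cookie headers the server returns.
session.cookies.set("session_token", "abc123", domain="example.com")

# Prepare a follow-up request without actually sending it:
prepared = session.prepare_request(
    requests.Request("GET", "http://example.com/gdm_reviews")
)
print(prepared.headers["Cookie"])  # the stored cookie rides along automatically
```

A plain `requests.get()` would start from an empty cookie jar each time, which is why the AJAX call is made through the same session as the initial page load.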

Edit 1: You can reload the page multiple times and call the AJAX endpoint again to get even more data.

Edit 2: Save the data using your own preferred method.
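For instance, sticking with the pandas approach from the question (the review strings below are hypothetical stand-ins for the scraped `one.text` values):

```python
import pandas as pd

# Hypothetical review texts, standing in for the .text of each scraped element
reviews = ["Great app for daily reports.", "Support could be faster."]

# One labelled column is easier to work with later than an unnamed one
df = pd.DataFrame({"review": reviews})
df.to_csv("review.csv", index=False)
```

Writing through a DataFrame also handles quoting of commas and newlines inside the review text, which the manual `f.write` loop does not.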

Edit 3: Changed a few things; it now fetches as many pages as you want, and saves to a file with good ol' open().
