Scraping pages until the "Next" button is disabled

import requests
from bs4 import BeautifulSoup

url = 'https://www.tripadvisor.ie/Attraction_Review-g295424-d2038312-Reviews-Global_Village-Dubai_Emirate_of_Dubai.html'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

def get_links():
    # Collect the href of every review-title anchor on the page
    review_links = []
    for review_link in soup.find_all('a', {'class': 'title'}, href=True):
        review_links.append(review_link['href'])
    return review_links

link = 'https://www.tripadvisor.ie'
review_urls = []
for i in get_links():
    review_url = link + i
    print(review_url)
    review_urls.append(review_url)  # append inside the loop so every link is kept

This code saves all of the review links on this one page, but I want to scrape the review links from every page, up to page 319. I can't work out how to do that by detecting when the "Next" pagination button is disabled.
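For reference, here is a minimal sketch of the approach the title describes: keep following the "Next" link until it is missing or disabled. It assumes the pagination control is an anchor whose class list contains 'next' and that a 'disabled' class appears on the last page; both are guesses about TripAdvisor's markup, so the selectors may need adjusting.

import requests
from bs4 import BeautifulSoup

def scrape_until_next_disabled(start_url):
    # Follow the 'Next' pagination link until it is absent or marked disabled
    base = 'https://www.tripadvisor.ie'
    review_links = []
    url = start_url
    while True:
        page = BeautifulSoup(requests.get(url).text, 'html.parser')
        for a in page.find_all('a', {'class': 'title'}, href=True):
            review_links.append(base + a['href'])
        # Hypothetical selector: the 'Next' anchor gains a 'disabled' class on the last page
        next_link = page.find('a', class_='next')
        if next_link is None or 'disabled' in next_link.get('class', []):
            break
        url = base + next_link['href']
    return review_links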

python web-scraping beautifulsoup pagination
1 Answer

You can change one parameter in the URL to page through and fetch all of the reviews, so I just added a loop and requested every URL:

def get_page(index):
    # The or{} segment of the URL is the review offset; each page shows 10 reviews
    url = "https://www.tripadvisor.ie/Attraction_Review-g295424-d2038312-Reviews-or{}-Global_Village-Dubai_Emirate_of_Dubai.html".format(index)
    html = requests.get(url)
    page = soup(html.text, 'html.parser')
    return page

nb_review = 3187
for i in range(0, nb_review, 10):  # one request per page of 10 reviews
    page = get_page(i)

The full code, using your snippet, is:

from bs4 import BeautifulSoup as soup
import requests

def get_page(index):
    # The or{} segment of the URL is the review offset; each page shows 10 reviews
    url = "https://www.tripadvisor.ie/Attraction_Review-g295424-d2038312-Reviews-or{}-Global_Village-Dubai_Emirate_of_Dubai.html".format(index)
    html = requests.get(url)
    page = soup(html.text, 'html.parser')
    return page

def get_links(page):
    # Collect the href of every review-title anchor on the page
    review_links = []
    for review_link in page.find_all('a', {'class': 'title'}, href=True):
        review_links.append(review_link['href'])
    return review_links

link = 'https://www.tripadvisor.ie'
review_urls = []
nb_review = 3187
for offset in range(0, nb_review, 10):
    page = get_page(offset)
    for href in get_links(page):  # separate name so it doesn't shadow the outer loop variable
        review_urls.append(link + href)
print(len(review_urls))

OUTPUT:

3187

EDIT:

Obviously, you could scrape the first page to get the total review count and use it to upgrade the code, making it more adaptable, as sketched below.
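A minimal sketch of that idea: fetch the first page, read the total review count from it, and feed that into the loop instead of hardcoding 3187. The selector used here ('span' with class 'reviews_header_count', showing text like "(3,187)") is an assumption about TripAdvisor's markup and should be treated as a placeholder; it reuses get_page, get_links, link and review_urls from the code above.

import re

def get_review_count(page):
    # Hypothetical selector: the reviews header shows the total count, e.g. "(3,187)"
    counter = page.find('span', class_='reviews_header_count')
    if counter is None:
        return 0
    return int(re.sub(r'[^\d]', '', counter.text))  # strip parentheses and commas

first_page = get_page(0)
nb_review = get_review_count(first_page)  # replaces the hardcoded 3187
for offset in range(0, nb_review, 10):
    for href in get_links(get_page(offset)):
        review_urls.append(link + href)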
