Scraping h3 from a div using Python


I want to scrape the H3 titles from within a DIV, using Python 3.6, from the page:

https://player.bfi.org.uk/search/rentals?q=&sort=title&page=1

Note that the page number changes, incrementing by 1.

I'm struggling to return or identify the titles.

from requests import get
from bs4 import BeautifulSoup

url = 'https://player.bfi.org.uk/search/rentals?q=&sort=title&page=1'
response = get(url)

html_soup = BeautifulSoup(response.text, 'lxml')
# find every rental card on the page
movie_containers = html_soup.find_all('div', class_='card card--rentals')
print(type(movie_containers))
print(len(movie_containers))

I have also tried looping over them:

for div in html_soup.select("div.card__content"):
    print(div.select_one("h3.card__title").text.strip())

Any help would be great.

Thanks,

I'm expecting results for the title of each film on every page, including the link to the film, e.g. https://player.bfi.org.uk/rentals/film/watch-akenfield-1975-online

python html web-scraping beautifulsoup
2 Answers

1 vote

The page loads its content from another url via xhr, which is what you are missing. You can mimic the xhr POST request the page uses and alter the posted json; if you change the "size" value you get more results back.

import requests

# post json mimicking the xhr request the page makes; "size" controls how many hits are returned
data = {"size":1480,"from":0,"sort":"sort_title","aggregations":{"genre":{"terms":{"field":"genre.raw","size":10}},"captions":{"terms":{"field":"captions"}},"decade":{"terms":{"field":"decade.raw","order":{"_term":"asc"},"size":20}},"bbfc":{"terms":{"field":"bbfc_rating","size":10}},"english":{"terms":{"field":"english"}},"audio_desc":{"terms":{"field":"audio_desc"}},"colour":{"terms":{"field":"colour"}},"mono":{"terms":{"field":"mono"}},"fiction":{"terms":{"field":"fiction"}}},"min_score":0.5,"query":{"bool":{"must":{"match_all":{}},"must_not":[],"should":[],"filter":{"term":{"pillar.raw":"rentals"}}}}}
r = requests.post('https://search-es.player.bfi.org.uk/prod-films/_search', json = data).json()
for film in r['hits']['hits']:
    print(film['_source']['title'], 'https://player.bfi.org.uk' + film['_source']['url'])

The actual result count for rentals is in the json, at r['hits']['total'], so you can do an initial request starting with a size much higher than you expect, check whether another request is needed, and then mop up anything outstanding by changing "from" and "size" in a follow-up request.

import requests
import pandas as pd

# start with a size well above the expected number of results
initial_count = 10000
results = []

def add_results(r):
    for film in r['hits']['hits']:
        results.append([film['_source']['title'], 'https://player.bfi.org.uk' + film['_source']['url']])

with requests.Session() as s:
    data = {"size": initial_count,"from":0,"sort":"sort_title","aggregations":{"genre":{"terms":{"field":"genre.raw","size":10}},"captions":{"terms":{"field":"captions"}},"decade":{"terms":{"field":"decade.raw","order":{"_term":"asc"},"size":20}},"bbfc":{"terms":{"field":"bbfc_rating","size":10}},"english":{"terms":{"field":"english"}},"audio_desc":{"terms":{"field":"audio_desc"}},"colour":{"terms":{"field":"colour"}},"mono":{"terms":{"field":"mono"}},"fiction":{"terms":{"field":"fiction"}}},"min_score":0.5,"query":{"bool":{"must":{"match_all":{}},"must_not":[],"should":[],"filter":{"term":{"pillar.raw":"rentals"}}}}}
    r = s.post('https://search-es.player.bfi.org.uk/prod-films/_search', json = data).json()
    total_results = int(r['hits']['total'])
    add_results(r)

    # if more results exist than the first request asked for, fetch the remainder
    if total_results > initial_count:
        data['size'] = total_results - initial_count
        data['from'] = initial_count
        r = s.post('https://search-es.player.bfi.org.uk/prod-films/_search', json = data).json()
        add_results(r)

df = pd.DataFrame(results, columns = ['Title', 'Link'])
print(df.head())
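
If you want to keep the results rather than just preview them, a short follow-up using the df built above could write them out (the bfi_rentals.csv filename here is just an example, not part of the original answer):

df.to_csv('bfi_rentals.csv', index=False)  # save the Title/Link rows to a csv file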

0 votes

The problem you're facing isn't actually finding the div - I think you are doing that correctly. However, when you try to access the site with

from requests import get
url = 'https://player.bfi.org.uk/search/rentals?q=&sort=title&page=1'
response = get(url)

the response doesn't actually contain everything you see in the browser. You can check that this is the case with 'card' in response.text, which comes back False. This is most likely because all of the cards are loaded via javascript after the site loads, so just fetching the base content with the requests library isn't enough to get all the information you want to scrape.
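
As a quick illustration of that check (a minimal sketch reusing the URL and parser from the question - the exact output is an assumption about what the unrendered HTML contains):

from requests import get
from bs4 import BeautifulSoup

url = 'https://player.bfi.org.uk/search/rentals?q=&sort=title&page=1'
response = get(url)

# the rendered page shows the cards, but the raw HTML returned by requests likely does not
print('card' in response.text)
print(len(BeautifulSoup(response.text, 'lxml').select('div.card--rentals')))  # expected to be 0 if cards are added client-side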

I'd suggest you look into how the website loads all the cards - the Network tab in your browser's developer tools can be helpful there.
