I am trying to scrape multiple pages of a football website. All of the links are in the teamLinks list; one example link is 'http://www.premierleague.com//clubs/1/Arsenal/squad?se=79'. I just want to know whether it is possible to make the request wait until the page has fully updated before proceeding. If you open the link in a browser, it first shows the 2018/2019 squad and then refreshes to the 2017/2018 squad, which is the one I want.
import requests
from lxml import html

playerLink1 = []
playerLink2 = []

for i in range(len(teamLinks)):
    # Request the squad page.
    squadPage = requests.get(teamLinks[i])
    squadTree = html.fromstring(squadPage.content)

    # Extract the player links.
    playerLocation = squadTree.cssselect('.playerOverviewCard')

    # For each player link within the team page.
    for j in range(len(playerLocation)):
        # Save the link, complete with domain.
        playerLink1.append("http://www.premierleague.com/" +
                           playerLocation[j].attrib['href'] + '?se=79')
        # For the second link, change the page from player overview to stats.
        playerLink2.append(playerLink1[-1].replace("overview", "stats"))
The page you are scraping uses JavaScript to load the player list you want.
Option 1: You could use the newer requests-html module (I have never tried it myself), which claims to support JavaScript.
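A minimal sketch of what Option 1 might look like, assuming requests-html is installed (pip install requests-html; the first render() call downloads a Chromium build). The extraction helper is kept separate so it works on any rendered HTML text; the regex assumes the class attribute appears before href, which is a simplification:

```python
import re

def extract_player_hrefs(html_text):
    """Pull href values from anchors carrying the playerOverviewCard class."""
    pattern = re.compile(
        r'<a[^>]*class="[^"]*playerOverviewCard[^"]*"[^>]*href="([^"]+)"')
    return pattern.findall(html_text)

def fetch_rendered_squad(url):
    """Fetch a squad page and run its JavaScript (assumes requests-html is installed)."""
    from requests_html import HTMLSession
    session = HTMLSession()
    r = session.get(url)
    r.html.render()  # executes the page's JavaScript via a bundled Chromium
    return r.html.html
```

You would call fetch_rendered_squad on each team link and pass the result to extract_player_hrefs instead of parsing squadPage.content directly.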
Option 2: Using Chrome's devtools, I found the actual XHR request the page makes to fetch the player list. The code below gets the desired output using only the requests module.
import json
import requests

playerLink1 = []
playerLink2 = []

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36',
           'Origin': 'https://www.premierleague.com',
           'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
           'Referer': 'https://www.premierleague.com//clubs/1/Arsenal/squad?se=79'}

res = requests.get('https://footballapi.pulselive.com/football/teams/1/compseasons/79/staff?altIds=true&compCodeForActivePlayer=EN_PR', headers=headers)
player_data = json.loads(res.content.decode('utf-8'))

for player in player_data['players']:
    # Build the absolute player URL from the id and display name in the JSON.
    href = 'https://www.premierleague.com/players/{}/{}/'.format(player['id'], player['name']['display'].replace(' ', '-'))
    playerLink1.append(href + 'overview?se=79')
    playerLink2.append(href + 'stats')
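The link-building step can be isolated and checked without hitting the API. Assuming a player record shaped like the response above (an id plus a name.display field), a hypothetical helper (build_player_links is my name, not part of the answer's code) would derive the overview/stats pair like this:

```python
def build_player_links(player, season='79'):
    """Build the overview and stats URLs for one player record."""
    slug = player['name']['display'].replace(' ', '-')
    base = 'https://www.premierleague.com/players/{}/{}/'.format(player['id'], slug)
    overview = base + 'overview?se=' + season
    stats = base + 'stats'
    return overview, stats

# Sample record mirroring the fields the loop above reads from the JSON.
sample = {'id': 4316, 'name': {'display': 'Mesut Ozil'}}
print(build_player_links(sample))
# → ('https://www.premierleague.com/players/4316/Mesut-Ozil/overview?se=79',
#    'https://www.premierleague.com/players/4316/Mesut-Ozil/stats')
```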
I found a solution. You have to use selenium with a headless webdriver, get the page_source from the driver, and allow some time.sleep() for the page to finish refreshing. I checked, and it shows the expected data. I don't know your full list of URLs, so build your own list and try it. Let me know if you need further help.
import re
import time

from bs4 import BeautifulSoup
from selenium import webdriver

teamlinks = ['http://www.premierleague.com//clubs/1/Arsenal/squad?se=79',
             'http://www.premierleague.com//clubs/1/Arsenal/squad?se=54']
playerLink1 = []
playerLink2 = []

for i in range(len(teamlinks)):
    chrome_options = webdriver.ChromeOptions()
    chrome_options.add_argument('--headless')
    chrome_options.add_argument('window-size=1920x1080')
    driver = webdriver.Chrome(options=chrome_options)

    driver.get(teamlinks[i])
    time.sleep(10)  # give the page time to refresh to the requested season
    squadPage = driver.page_source
    driver.quit()

    soup = BeautifulSoup(squadPage, 'html.parser')
    playerLocation = soup.findAll('a', class_=re.compile("playerOverviewCard"))

    for j in range(len(playerLocation)):
        # Save the link, complete with domain.
        playerLink1.append("http://www.premierleague.com/" +
                           playerLocation[j]['href'] + '?se=79')
        # For the second link, change the page from player overview to stats.
        playerLink2.append(playerLink1[-1].replace("overview", "stats"))

print(playerLink2)
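One small cleanup that applies to both answers: plain string concatenation is what produces the double slash visible in the question's URLs ('//clubs'). The standard library's urljoin joins the domain and a scraped href cleanly. A short sketch, using a sample root-relative href of the shape the squad page returns:

```python
from urllib.parse import urljoin

BASE = 'https://www.premierleague.com/'

# Sample href as scraped from a playerOverviewCard anchor (root-relative).
href = '/players/4316/Mesut-Ozil/overview'

overview_link = urljoin(BASE, href) + '?se=79'
stats_link = overview_link.replace('overview', 'stats')

print(overview_link)  # → https://www.premierleague.com/players/4316/Mesut-Ozil/overview?se=79
print(stats_link)     # → https://www.premierleague.com/players/4316/Mesut-Ozil/stats?se=79
```

urljoin normalises the join whether the href is root-relative or already absolute, so no stray slashes end up in the saved links.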