Multi-level web scraping with Python

Question · Votes: 3 · Answers: 1

I want to iterate over every URL in this listing (https://express-press-release.net/Industries/Automotive-press-releases.php), copy the data from each page, and then return to the listing to get the next URL. I can scrape a single page, but I can't follow multiple links.

python web-scraping beautifulsoup scrapy pycharm
1 Answer

0 votes

You can find all the <a> tags that have an href and pull them into a list, then simply iterate over that list. You may need to add some extra filtering, since you probably only want certain links, but this should get you started:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

url = 'https://express-press-release.net/Industries/Automotive-press-releases.php'

response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

# Collect every <a> tag that carries an href attribute
links = soup.find_all('a', href=True)

# The press-release links are relative paths starting with '..', so resolve
# them against the listing page's URL rather than concatenating strings
link_list = [urljoin(url, a['href']) for a in links if '..' in a['href']]

for link in link_list:
    # Fetch and parse each linked page, then extract whatever data you need
    page_soup = BeautifulSoup(requests.get(link).text, 'html.parser')
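To make the "copy the data and return for the next URL" part concrete, here is a minimal sketch. It assumes the link extraction shown above (relative hrefs containing '..') and guesses that the data you want is each page's title and paragraph text; adjust the selectors to the actual page structure. The `delay` pause is just a politeness measure, not a requirement of the site.

```python
import time
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

BASE = 'https://express-press-release.net/Industries/Automotive-press-releases.php'

def extract_links(html, base_url):
    """Parse listing-page HTML and return absolute URLs for its relative ('..') links."""
    soup = BeautifulSoup(html, 'html.parser')
    return [urljoin(base_url, a['href'])
            for a in soup.find_all('a', href=True) if '..' in a['href']]

def scrape_pages(urls, delay=1.0):
    """Fetch each URL and collect its title and paragraph text into one record per page."""
    records = []
    for link in urls:
        soup = BeautifulSoup(requests.get(link).text, 'html.parser')
        records.append({
            'url': link,
            'title': soup.title.get_text(strip=True) if soup.title else '',
            'text': ' '.join(p.get_text(strip=True) for p in soup.find_all('p')),
        })
        time.sleep(delay)  # be polite to the server between requests
    return records

# Usage: links = extract_links(requests.get(BASE).text, BASE)
#        data = scrape_pages(links)
```

Separating `extract_links` (pure parsing) from `scrape_pages` (network I/O) also makes the parsing logic easy to unit-test against a saved HTML snippet.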