How to improve this web crawler logic?


I am working on a web crawler that will scrape only internal links, using requests and bs4.

I have a rough working version, but I am not sure how to properly handle checking whether a link has already been crawled.

import re
import time
import requests
import argparse
from bs4 import BeautifulSoup


internal_links = set()

def crawler(new_link):

    html = requests.get(new_link).text
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.find_all('a', attrs={'href': re.compile("^http://")}):
        if "href" in link.attrs:
            print(link)
            if link.attrs["href"] not in internal_links:
                new_link = link.attrs["href"]
                print(new_link)
                internal_links.add(new_link)
                print("All links found so far, ", internal_links)
                time.sleep(6)
                crawler(new_link)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('url', help='Pass the website url you wish to crawl')
    args = parser.parse_args()

    url = args.url

    #Check full url has been passed otherwise requests will throw error later

    try:
        crawler(url)

    except:
        if url[0:4] != 'http':
            print('Please try again and pass the full url eg http://example.com')



if __name__ == '__main__':
    main()

These are the last few lines of the output:

All links found so far,  {'http://quotes.toscrape.com/tableful', 'http://quotes.toscrape.com', 'http://quotes.toscrape.com/js', 'http://quotes.toscrape.com/scroll', 'http://quotes.toscrape.com/login', 'http://books.toscrape.com', 'http://quotes.toscrape.com/'}
<a href="http://quotes.toscrape.com/search.aspx">ViewState</a>
http://quotes.toscrape.com/search.aspx
All links found so far,  {'http://quotes.toscrape.com/tableful', 'http://quotes.toscrape.com', 'http://quotes.toscrape.com/js', 'http://quotes.toscrape.com/search.aspx', 'http://quotes.toscrape.com/scroll', 'http://quotes.toscrape.com/login', 'http://books.toscrape.com', 'http://quotes.toscrape.com/'}
<a href="http://quotes.toscrape.com/random">Random</a>
http://quotes.toscrape.com/random
All links found so far,  {'http://quotes.toscrape.com/tableful', 'http://quotes.toscrape.com', 'http://quotes.toscrape.com/js', 'http://quotes.toscrape.com/search.aspx', 'http://quotes.toscrape.com/scroll', 'http://quotes.toscrape.com/random', 'http://quotes.toscrape.com/login', 'http://books.toscrape.com', 'http://quotes.toscrape.com/'}

So it is working, but only up to a point, and then it doesn't seem to follow the links any further.

I am sure it is because of this line:

for link in soup.find_all('a', attrs={'href': re.compile("^http://")}):

since that only finds links that start with http://, and many of the links on the internal pages don't start with that. But when I try something like

for link in soup.find_all('a'):

the program runs very briefly and then ends:

http://books.toscrape.com
{'href': 'http://books.toscrape.com'}
http://books.toscrape.com
All links found so far,  {'http://books.toscrape.com'}
index.html
{'href': 'index.html'}
index.html
All links found so far,  {'index.html', 'http://books.toscrape.com'}
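For reference, relative hrefs such as index.html can be resolved against the page they came from with urllib.parse.urljoin from the standard library. A minimal sketch of that idea (the helper name internal_hrefs is made up for illustration):

from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def internal_hrefs(base_url):
    # Fetch and parse the page, then yield every same-site link as an absolute URL
    soup = BeautifulSoup(requests.get(base_url).text, "html.parser")
    for a in soup.find_all('a', href=True):
        # urljoin resolves relative hrefs like "index.html" against base_url
        # and leaves absolute hrefs like "http://books.toscrape.com" untouched
        absolute = urljoin(base_url, a['href'])
        # keep only links on the same host, i.e. internal links
        if urlparse(absolute).netloc == urlparse(base_url).netloc:
            yield absolute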
python web-scraping beautifulsoup
1 Answer

You can reduce this:

for link in soup.find_all('a', attrs={'href': re.compile("^http://")}):
    if "href" in link.attrs:
        print(link)
        if link.attrs["href"] not in internal_links:
            new_link = link.attrs["href"]
            print(new_link)
            internal_links.add(new_link)

to

links = {link['href'] for link in soup.select("a[href^='http:']")}
internal_links.update(links)  

This grabs only the anchor tag elements whose href uses the http protocol, and the set comprehension ensures there are no duplicates. It then updates the existing set with any new links. I don't know enough Python to comment on the efficiency of .update, but I believe it modifies the existing set in place rather than creating a new one. More ways of combining sets are listed here: How to join two sets in one line without using "|"
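To illustrate both points, here is a small self-contained check (the HTML snippet below is made up): the a[href^='http:'] selector keeps only hrefs that start with http:, and .update mutates the existing set in place rather than binding a new one.

from bs4 import BeautifulSoup

html = """
<a href="http://quotes.toscrape.com/login">Login</a>
<a href="index.html">Home</a>
<a href="http://books.toscrape.com">Books</a>
"""
soup = BeautifulSoup(html, "html.parser")

internal_links = {'http://quotes.toscrape.com/login'}
before = id(internal_links)

# CSS attribute selector: match <a> tags whose href starts with "http:"
links = {link['href'] for link in soup.select("a[href^='http:']")}
print(links)  # the two http: hrefs; 'index.html' is filtered out

internal_links.update(links)         # in-place union, equivalent to internal_links |= links
print(id(internal_links) == before)  # True: the existing set was modified, not replaced
print(internal_links)

Note that, like the original regex, this selector still only matches absolute http: links, so relative hrefs such as index.html are not picked up, which is the other issue raised in the question.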
