Concurrent futures web scraping

Question · votes: 0 · answers: 1

Hello to whoever is reading this, and thank you for taking the time to look at it!

I am currently trying to write a fast web-scraping function so that I can scrape a large batch of documents.

Here is my current code:

import time
import requests
from bs4 import BeautifulSoup
from concurrent.futures import ProcessPoolExecutor, as_completed

# URLs is a list of page URLs, defined elsewhere
def parse(url):
    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'lxml')
    return soup.find_all('a')

with ProcessPoolExecutor(max_workers=4) as executor:
    start = time.time()
    futures = [executor.submit(parse, url) for url in URLs]
    results = []
    for result in as_completed(futures):
        results.append(result)
    end = time.time()
    print("Time Taken: {:.6f}s".format(end - start))

This runs against the target site (www.google.com), but my problem is that I don't know how to view the data it brings back: all I get are Future objects, not the parsed links.

Could someone please explain or show me how to do this?

I appreciate any help you can offer.

python web-scraping concurrent.futures
1 Answer

1 vote

You can also achieve this with a dict comprehension that maps each future back to the URL it was submitted for, as shown below. Calling .result() on each completed future is what unwraps the Future object into the value your parse() function returned.

with ProcessPoolExecutor(max_workers=4) as executor:
    start = time.time()
    # Map each Future to the URL it was created for
    futures = { executor.submit(parse, url): url for url in URLs }
    for result in as_completed(futures):
        link = futures[result]
        try:
            # .result() returns parse()'s return value, or re-raises
            # any exception the worker raised
            data = result.result()
        except Exception as e:
            print(e)
        else:
            print("Link: {}, data: {}".format(link, data))
    end = time.time()
    print("Time Taken: {:.6f}s".format(end - start))
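The future-to-key mapping shown above works for any callable, not just network requests. Here is a minimal self-contained sketch of the same pattern, using a local square() function in place of parse() so it runs without network access (square and numbers are illustrative names, not from the original code). It uses ThreadPoolExecutor, which is generally the better fit for I/O-bound work like HTTP requests anyway, since threads avoid process-spawn and pickling overhead:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(n):
    # Stand-in for parse(): any callable works the same way.
    return n * n

numbers = [1, 2, 3, 4]
results = {}
with ThreadPoolExecutor(max_workers=2) as executor:
    # Map each Future back to its input, mirroring the URL dict above.
    futures = {executor.submit(square, n): n for n in numbers}
    for future in as_completed(futures):
        # .result() unwraps the Future into the callable's return value
        results[futures[future]] = future.result()

print(results)  # contains {1: 1, 2: 4, 3: 9, 4: 16}; insertion order varies
```

Because as_completed yields futures in completion order, not submission order, the dict lets you recover which input each result belongs to.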