Web scraping with multiprocessing not working

Problem description · Votes: 0 · Answers: 1

I am trying to scrape a large number of URLs, and I applied multiprocessing to speed it up, but I can't figure out why it isn't any faster at all. Here is part of my code:

from urllib.request import urlopen
from bs4 import BeautifulSoup
import pandas as pd
import multiprocessing
import sys

def scrap(url, output_path):
    page = urlopen(url)
    soup = BeautifulSoup(page, 'html.parser')
    item_text = soup.select('#scatter6001 script')[0].text
    table = soup.find_all('table', {'class': 'noborder dark'})
    df1 = pd.read_html(str(table), header=0)
    df1 = pd.DataFrame(df1[0])
    ...
# function for scraping the data from a URL

rootPath = '...' 
urlp1 = "https://www.proteinatlas.org/"

try:
    df1 = pd.read_csv(rootPath + "cancer_list1_2(1).csv", header=0)
except Exception as e:
    print("File cancer_list1_2(1).csv doesn't exist")
    print(str(e))
    sys.exit()

cancer_list = df1.values.tolist()

URLs = []
for cancer in cancer_list:

    urlp2 = "/pathology/tissue/" + cancer[1]
    f = cancer[0]

    try:
        df1 = pd.read_csv(rootPath + f + ".csv", header=0)
    except Exception as e:
        print("File " + f + " doesn't exist")
        print(str(e))
        sys.exit()
    ...
# list of URLs

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=6)
    records = pool.map(scrap(url, output_path))

    pool.terminate()
    pool.join()

I'm not sure how to use multiprocessing to speed up the web scraping.

python multithreading web-scraping multiprocessing
1 Answer
0 votes

You are not actually using multiprocessing here. You run the scrap function once and pass its return value as the argument to pool.map(). Instead, you need to pass map() a callable together with an iterable of URLs, for example:

from functools import partial

func = partial(scrap, output_path=output_path)
records = pool.map(func, URLs)

(A lambda such as lambda url: scrap(url, output_path) would not work here: Pool.map has to pickle the callable in order to ship it to the worker processes, and lambdas cannot be pickled.)
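For completeness, here is a minimal, self-contained sketch of that pattern. The page-parsing body is reduced to a stub, and the URLs list and output_path value are hypothetical stand-ins for the ones the question builds from its CSV files:

import multiprocessing
from functools import partial
from urllib.request import urlopen
from bs4 import BeautifulSoup

def scrap(url, output_path):
    # stub: fetch one page and return something small and picklable
    page = urlopen(url)
    soup = BeautifulSoup(page, 'html.parser')
    return (url, soup.title.text if soup.title else None)

if __name__ == '__main__':
    # hypothetical stand-ins for the question's URL list and output path
    URLs = ["https://www.proteinatlas.org/" + path for path in ("about", "news")]
    output_path = "output/"

    # partial() binds output_path, so the mapped callable takes one argument
    func = partial(scrap, output_path=output_path)

    # the context manager closes and joins the pool automatically
    with multiprocessing.Pool(processes=6) as pool:
        # one scrap() call per URL, spread across the 6 worker processes
        records = pool.map(func, URLs)
    print(records)

Since scraping is network-bound rather than CPU-bound, a thread pool (e.g. multiprocessing.dummy.Pool, which has the same map() interface) would likely give a similar speedup with less overhead.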