Multiprocessing not working in Python web scraping


I scraped the pages with BeautifulSoup and successfully saved the parsed data to CSV files, but I want to speed the process up, so I used multiprocessing. However, after applying multiprocessing in the script there is no difference. Here is my code:

import sys
from urllib.request import urlopen

import pandas as pd
from bs4 import BeautifulSoup
from joblib import Parallel

rootPath = '....'
urlp1 = "https://www.proteinatlas.org/"

try:
    df1 = pd.read_csv(rootPath + "cancer_list1_2(1).csv", header=0)
except Exception as e:
    print("File cancer_list1_2(1).csv doesn't exist")
    print(str(e))
    sys.exit()

cancer_list = df1.values.tolist()
# [["bcla_gene","beast+cancer"], ...]

URLs = []
for cancer in cancer_list:

    urlp2 = "/pathology/tissue/" + cancer[1]
    f = cancer[0]

    try:
        df1 = pd.read_csv(rootPath + f + ".csv", header=0)
    except Exception as e:
        print("File " + f + " doesn't exist")
        print(str(e))
        sys.exit()
    ... # list of urls

def scrap(url, output_path):
    page = urlopen(url)
    soup = BeautifulSoup(page, 'html.parser')
    # script block that holds the scatter-plot data
    item_text = soup.select('#scatter6001 script')[0].text
    # staining/expression table on the page
    table = soup.find_all('table', {'class': 'noborder dark'})
    df1 = pd.read_html(str(table), header=0)
    df1 = pd.DataFrame(df1[0])
    # sample count sits in the cell next to the "Number of samples" header
    Number = soup.find('th', text="Number of samples").find_next_sibling("td").text


...  # rest of the scraping function



if __name__ == "__main__":

    Parallel(n_jobs=-1)(scrap(url,output_path) for url in URLs)

I have just updated the code. The problem now is that CPU utilization only reaches 100% at the very beginning and then quickly drops to about 1%. I'm confused.

python multithreading web-scraping beautifulsoup multiprocessing
1 Answer

Without looking at any details of your code: you could benefit from looking at the joblib module.

Pseudocode:

from joblib import Parallel, delayed

if __name__ == "__main__":
      URLs = ["URL1", "URL2", "URL3", ...]
      # delayed(scrap) records each call so it runs in a worker process
      Parallel(n_jobs=-1)(delayed(scrap)(url, output_path) for url in URLs)

Refactoring your code may be necessary, because joblib only works if no code runs outside of any def: or the if __name__ == "__main__": branch.

n_jobs=-1 will start as many processes as there are cores on your machine. For more details, see joblib's documentation.
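A minimal sketch of how this could look with the scrap function from the question (the function body and the URL list below are placeholders, not the real scraping logic). The important part is wrapping each call in joblib's delayed: without it, the generator expression runs scrap one URL at a time in the parent process and Parallel never gets any work to distribute, which would explain the CPU briefly spiking and then idling.

from joblib import Parallel, delayed

def scrap(url, output_path):
    # placeholder body; the real function downloads the page, parses it
    # with BeautifulSoup and writes the result to a CSV under output_path
    print("scraping", url)

if __name__ == "__main__":
    # illustrative values; build URLs and output_path as in the question
    URLs = ["https://www.proteinatlas.org/"]
    output_path = "."
    # delayed(scrap)(url, output_path) records the call instead of running it,
    # so the worker processes started by Parallel do the actual work
    Parallel(n_jobs=-1)(delayed(scrap)(url, output_path) for url in URLs)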

Using this approach together with selenium/geckodriver, and depending on your machine, a pool of 10k URLs can be scraped in less than an hour (I usually open 40-50 processes on an octacore machine with 64 GB of RAM).
