Python: web-scraping multiple pages of tables into a CSV and DataFrame for analysis

Problem description

When I try to scrape through the pages, only the table from page 10 ends up in the csv file; I would like to send the results of every page to the file. I know I'm probably making a very simple mistake here. Could anyone point me in the right direction? Thanks, I appreciate any input.

import pandas as pd
import requests
from bs4 import BeautifulSoup
from tabulate import tabulate

# transactions over the last 17 hrs
# looping through page numbers using URL manipulation
#for i in range(1,100,1):

dfs = []

url = "https://etherscan.io/txs?p="
for index in range(1, 10, 1):
    res = requests.get(url+str(index))
    soup = BeautifulSoup(res.content,'lxml')
    table = soup.find_all('table')[0] 
    df = pd.read_html(str(table))

    dfs.append(df)
    #df[0].to_csv('Desktop/scrape.csv')

final_df[0] = pd.concat(dfs)
final_df[0].to_csv('Desktop/scrape.csv')
print( tabulate(df[0], headers='keys', tablefmt='psql'))

I get the following TypeError:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-10-c6a3a8b0cd1d> in <module>()
     20     #df[0].to_csv('Desktop/scrape.csv')
     21 
---> 22 final_df[0] = pd.concat(dfs)
     23 final_df[0].to_csv('Desktop/scrape.csv')
     24 print( tabulate(df[0], headers='keys', tablefmt='psql'))

~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/concat.py in concat(objs, axis, join, join_axes, ignore_index, keys, levels, names, verify_integrity, copy)
    204                        keys=keys, levels=levels, names=names,
    205                        verify_integrity=verify_integrity,
--> 206                        copy=copy)
    207     return op.get_result()
    208 

~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/concat.py in __init__(self, objs, axis, join, join_axes, keys, levels, names, ignore_index, verify_integrity, copy)
    261         for obj in objs:
    262             if not isinstance(obj, NDFrame):
--> 263                 raise TypeError("cannot concatenate a non-NDFrame object")
    264 
    265             # consolidate

TypeError: cannot concatenate a non-NDFrame object
Tags: python, pandas, web-scraping, beautifulsoup
1 Answer

You're just missing one line in your code: pd.read_html returns a list of DataFrames, not a single one, so dfs ends up as a list of lists, and pd.concat rejects the inner lists because they are not DataFrames. Concat each page's list into a DataFrame before appending it to dfs.
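A minimal check makes the shape of the return value obvious (the inline HTML string is a hypothetical stand-in, independent of the scraper):

import pandas as pd

# read_html returns a *list* of DataFrames, one per <table> it finds
tables = pd.read_html("<table><tr><th>a</th></tr><tr><td>1</td></tr></table>")
print(type(tables))     # <class 'list'>
print(type(tables[0]))  # <class 'pandas.core.frame.DataFrame'>

# appending that list instead of a DataFrame reproduces the error above
try:
    pd.concat([tables])
except TypeError as exc:
    print(exc)  # "cannot concatenate a non-NDFrame object" (wording varies by pandas version)

With that in mind, the corrected loop: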

dfs = []

url = "https://etherscan.io/txs?p="
for index in range(1, 10):
    res = requests.get(url+str(index))
    soup = BeautifulSoup(res.content, 'lxml')
    table = soup.find_all('table')[0]
    df_list = pd.read_html(str(table))
    df = pd.concat(df_list)  # this line is what you're missing
    dfs.append(df)

final_df = pd.concat(dfs)
final_df.to_csv('Desktop/scrape.csv')
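
If the goal is literally to write each page out to the file as soon as it is scraped (the commented-out to_csv inside the question's loop hints at that), appending to the CSV inside the loop also works. A sketch under the same assumptions as above; the header row is written only on the first pass, and out_path is an illustrative name:

import os

import pandas as pd
import requests
from bs4 import BeautifulSoup

url = "https://etherscan.io/txs?p="
out_path = 'Desktop/scrape.csv'

for index in range(1, 10):
    res = requests.get(url+str(index))
    soup = BeautifulSoup(res.content, 'lxml')
    table = soup.find_all('table')[0]
    page_df = pd.concat(pd.read_html(str(table)))
    # append each page; only the first write includes the header row
    page_df.to_csv(out_path, mode='a', header=not os.path.exists(out_path), index=False)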