How to scrape multiple links stored in a list


I am trying to scrape several pages of a site by appending each page number to the URL and storing the resulting URLs in a list. When I iterate over the list, only the content from one page ends up in the result instead of all of them. Where is the fault?

import requests
import pandas as pd
from bs4 import BeautifulSoup

df = pd.DataFrame()
list_of_links = []
url = 'https://marknadssok.fi.se/publiceringsklient?Page='
for link in range(1, 10):
    urls = url + str(link)
    list_of_links.append(urls)

# Establish connection
for i in list_of_links:
    r = requests.get(i)
    html = BeautifulSoup(r.content, "html.parser")

    # Append each column to its attribute
    table_body = html.find('tbody')
    rows = table_body.find_all('tr')
    data = []
    for row in rows:
        cols = row.find_all('td')
        cols = [x.text.strip() for x in cols]
        data.append(cols)

df = pd.DataFrame(data, columns=['Publiceringsdatum', 'utgivare', 'person', 'befattning',
                                 'Närstående', 'karaktär', 'Instrumentnamn', 'ISIN', 'transaktionsdatum',
                                 'volym', 'volymsenhet', 'pris', 'valuta', 'handelsplats',
                                 'status', 'detaljer'])
python python-3.x list loops web-scraping
1 Answer

The problem is that the data variable, which collects the rows scraped from each URL, is initialised inside the for-loop, so every iteration throws away the rows gathered from the previous page and the DataFrame built after the loop only holds one page. It is solved by moving that initialisation out of the for-loop.
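
A minimal sketch of the corrected loop, assuming the same site layout and column names as in the question (variable names here are illustrative):

import requests
import pandas as pd
from bs4 import BeautifulSoup

base_url = 'https://marknadssok.fi.se/publiceringsklient?Page='
list_of_links = [base_url + str(page) for page in range(1, 10)]

data = []  # initialised once, outside the loop, so rows from every page accumulate
for link in list_of_links:
    r = requests.get(link)
    html = BeautifulSoup(r.content, "html.parser")
    table_body = html.find('tbody')
    for row in table_body.find_all('tr'):
        cols = [td.text.strip() for td in row.find_all('td')]
        data.append(cols)

# Column names taken from the question; the table is assumed to have 16 columns
df = pd.DataFrame(data, columns=['Publiceringsdatum', 'utgivare', 'person', 'befattning',
                                 'Närstående', 'karaktär', 'Instrumentnamn', 'ISIN',
                                 'transaktionsdatum', 'volym', 'volymsenhet', 'pris',
                                 'valuta', 'handelsplats', 'status', 'detaljer'])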
