How do I scrape multiple divs (and put them in a csv)?


I have this code that scrapes the IDs of the users tagged in the media on Twitter:

from bs4 import BeautifulSoup
from selenium import webdriver
import time
import csv
import re

# Create a new instance of the Firefox driver
driver = webdriver.Firefox()

# go to page
driver.get("http://twitter.com/RussiaUN/media")

#You can adjust it but this works fine
SCROLL_PAUSE_TIME = 2

# Get scroll height
last_height = driver.execute_script("return document.body.scrollHeight")

while True:
    # Scroll down to bottom
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")

    # Wait to load page
    time.sleep(SCROLL_PAUSE_TIME)

    # Calculate new scroll height and compare with last scroll height
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height


# Now that the page is fully scrolled, grab the source code.
src = driver.page_source

# Parse it with BeautifulSoup
soup = BeautifulSoup(src, 'html.parser')
#divs = soup.find_all('div',class_='account')
divs = soup.find_all('div', {"data-user-id" : re.compile(r".*")})


#PRINT RESULT
#print('printing results')
#for div in divs:
#    print(div['data-user-id'])


#SAVE IN FILE
print('Saving results')    
#with open('file2.csv','w') as f:
 #  for div in divs:
  #      f.write(div['data-user-id']+'\n')    

with open('file.csv','w', newline='') as f:
    writer = csv.writer(f)
    for div in divs:
        writer.writerow([div['data-user-id']])

- But I also want to scrape the usernames, and then organize all of this data in a csv with one column of IDS and one column of USERNAMES.

So my guess is that I first have to modify this line:

divs = soup.find_all('div', {"data-user-id" : re.compile(r".*")})

But I can't find a way to do it...

- And then I also have a duplicates problem. As you can see in the code, there are two ways to scrape the data:

1 #divs = soup.find_all('div',class_='account')

2 divs = soup.find_all('div', {"data-user-id" : re.compile(r".*")})

The first one seems to work but isn't very efficient. Number 2 works fine, but it seems to end up giving me duplicates, because it goes through all the divs and not just the ones with class_='account' (a sketch combining both filters is shown below).
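For reference, BeautifulSoup can apply both filters in a single call, which restricts the match to the account divs while still letting you read the attributes. A minimal sketch, assuming each account div carries both a data-user-id and a data-screen-name attribute:

# Sketch: only divs whose class includes "account" and that also have a
# data-user-id attribute (both attribute names assumed from the markup above).
divs = soup.find_all('div', class_='account', attrs={'data-user-id': True})
rows = [[div['data-user-id'], div['data-screen-name']] for div in divs]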

I apologize if anyone feels I'm being a bit spammy here, as I've posted 3 questions in 24 hours... Thanks to those who have helped and will help.

python selenium csv twitter web-scraping
1 Answer

Python has a built-in csv module for writing csv files.

Also, the scroll script you are using doesn't seem to work, in the sense that it doesn't scroll all the way down and stops after a certain time. Using your script I only got around 1400 records in the csv file. I have replaced it with the Page Down key. You may want to adjust no_of_pagedowns to control how far you want to scroll down. Even with 200 page downs I got around 2200 records. Note that this figure is without removing duplicates.
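If you would rather keep the original scroll-to-bottom approach, a common workaround is to stop only after the page height has stayed the same for several consecutive checks rather than a single one. A minimal sketch (the scroll_to_end name and the retry count are my own, not part of the answer's code):

import time

def scroll_to_end(driver, pause=2, max_retries=3):
    # Scroll until the page height stops growing for max_retries checks in a row.
    last_height = driver.execute_script("return document.body.scrollHeight")
    retries = 0
    while retries < max_retries:
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:
            retries += 1   # height unchanged, maybe still loading: check again
        else:
            retries = 0    # new content appeared, reset the counter
        last_height = new_height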

I have added some extra modifications to write only the unique data to the file.

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
import csv
driver = webdriver.Firefox()
driver.get("http://twitter.com/RussiaUN/media")
time.sleep(1)
elem = driver.find_element_by_tag_name("html")
no_of_pagedowns = 200
while no_of_pagedowns:
    elem.send_keys(Keys.PAGE_DOWN)
    time.sleep(2)
    no_of_pagedowns-=1


src = driver.page_source

soup = BeautifulSoup(src, 'html.parser')
divs = soup.find_all('div',class_='account')
all_data=[]
#get only unique data
for div in divs:
    single=[div['data-user-id'],div['data-screen-name']]
    if single not in all_data:
        all_data.append(single)
with open('file.csv', 'w', newline='') as f:  # newline='' avoids blank rows on Windows
    writer = csv.writer(f, delimiter=",")
    #headers
    writer.writerow(["ID","USERNAME"])
    writer.writerows(all_data)

Output

ID,USERNAME
255493944,MID_RF
2230446228,Rus_Emb_Sudan
1024596885661802496,ambrus_drc
2905424987,Russie_au_Congo
2174261359,RusEmbUganda
285532415,tass_agency
34200559,rianru
40807205,kpru
177502586,nezavisimaya_g
23936177,vzglyad
255471924,mfa_russia
453639812,pass_blue
...

If you want the duplicates, just remove the if condition:

for div in divs:
    single=[div['data-user-id'],div['data-screen-name']]
    all_data.append(single)
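Conversely, when the page yields a lot of rows, the single not in all_data check rescans the whole list for every div. A minimal sketch of a faster variant that deduplicates on the user ID with a set (same div attributes assumed as above):

seen_ids = set()
all_data = []
for div in divs:
    user_id = div['data-user-id']
    if user_id not in seen_ids:      # O(1) membership test instead of a list scan
        seen_ids.add(user_id)
        all_data.append([user_id, div['data-screen-name']])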