Downloading PDF links from an ASPX page with Beautiful Soup or Selenium (Python)

Problem description — votes: 0, answers: 2

The site I'm trying to scrape is: http://www.imperial.courts.ca.gov/CourtCalendars/Public/MCalendars.aspx

It uses ASPX to generate the PDF links I want.

The old code I'm trying to adapt is:

import requests, bs4, os

# v1 - this finds the links but, because of the ASP.NET page, does not click through
print('Checking for Calendars')
res = requests.get('https://imperial.courts.ca.gov/CourtCalendars/Public/MCalendars.aspx')
res.raise_for_status()

soup = bs4.BeautifulSoup(res.text, 'html.parser')

os.makedirs('Calendars', exist_ok=True)

for link in soup.find_all('a', href=True):
    if link.string == 'Misdemeanor':
        linkUrl = 'http:' + link.get('href')

        res = requests.get(linkUrl)  # this line errors because of the aspx page:
        # the href in the HTML is not the same as the address after clicking

        res.raise_for_status()

        pdfFile = open(os.path.join('Calendars', os.path.basename(linkUrl)), 'wb')
        for chunk in res.iter_content(100000):
            pdfFile.write(chunk)
        pdfFile.close()

That code worked on another site, where the link address shown on the first page was the actual link address, but that site had no dynamic ASPX links.

I was considering using Keys to right-click each link, open it in a new tab, and download it, but that seems like overkill. (And I'm not sure how to manage several tabs in Selenium.)

Is there a way to simply download each link inside the if loop?

The other option I started on was:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
browser = webdriver.Firefox()
browser.get('https://imperial.courts.ca.gov/CourtCalendars/Public/MCalendars.aspx')

# using the singular find_element, then click, gets one of the links but not all;
# need to use find_elements and loop through instead

# the loop beneath opens 0 new tabs
linkElems = browser.find_elements_by_link_text('Misdemeanor')
totalLinks = len(linkElems)

for i in linkElems:
    i.send_keys(Keys.CONTROL + 't')

But basically, I'm still not sure how to click and download (or open, download, and close) each one.
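For reference, the open/download/close pattern would look roughly like the sketch below. This is a minimal, untested outline assuming the same Selenium 3-era API as the attempt above; it opens each link with execute_script("window.open(...)") because sending Ctrl+T keystrokes to a link element does not open a tab, and it switches tabs through window_handles and switch_to.window.

from selenium import webdriver

browser = webdriver.Firefox()
browser.get('https://imperial.courts.ca.gov/CourtCalendars/Public/MCalendars.aspx')

# collect the hrefs up front so tab switching can't invalidate the elements
urls = [e.get_attribute('href') for e in browser.find_elements_by_link_text('Misdemeanor')]

main_handle = browser.current_window_handle
for url in urls:
    browser.execute_script("window.open(arguments[0]);", url)  # open the link in a new tab
    browser.switch_to.window(browser.window_handles[-1])       # focus the newest tab
    # ... wait for / save the PDF here ...
    browser.close()                                            # close the tab
    browser.switch_to.window(main_handle)                      # return to the main tab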

Thanks in advance.

python asp.net selenium web-scraping beautifulsoup
2 Answers
1 vote

Use Chrome options.

from selenium import webdriver

chromeOptions = webdriver.ChromeOptions()
# have Chrome download PDFs instead of opening them in the built-in viewer
prefs = {"plugins.always_open_pdf_externally": True}
chromeOptions.add_experimental_option("prefs", prefs)
driver = webdriver.Chrome(chrome_options=chromeOptions)
driver.get("https://imperial.courts.ca.gov/CourtCalendars/Public/MCalendars.aspx")

linkElems = driver.find_elements_by_link_text('Misdemeanor')

for i in linkElems:
    driver.get(i.get_attribute('href'))  # navigating to a PDF URL now triggers a download
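If the files should land in a specific folder rather than the browser's default Downloads directory, the prefs dict in the snippet above can be extended. download.default_directory and download.prompt_for_download are standard Chrome preferences; the Calendars path here is just an example:

import os

prefs = {
    "plugins.always_open_pdf_externally": True,                   # download PDFs, don't open the viewer
    "download.default_directory": os.path.abspath("Calendars"),   # save into ./Calendars
    "download.prompt_for_download": False,                        # never show the save-as dialog
}
chromeOptions.add_experimental_option("prefs", prefs)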

1 vote

I'd bet it's failing not because it's an ASPX file but because the href is a relative path. It should work if you do:

linkUrl = 'https://imperial.courts.ca.gov/CourtCalendars/Public/' + link.get('href')
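More generally, urllib.parse.urljoin from the standard library resolves a relative href against the URL the page was actually served from, so the base path doesn't have to be hard-coded. A sketch against the requests/BeautifulSoup code from the question:

from urllib.parse import urljoin

for link in soup.find_all('a', href=True):
    if link.string == 'Misdemeanor':
        # resolve relative hrefs like 'Calendar.pdf' against the page URL;
        # absolute hrefs pass through unchanged
        linkUrl = urljoin(res.url, link.get('href'))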