Scraping an ASPX form without using Selenium


I previously asked (see here) how to scrape results from an ASPX form. The form renders its output in a new tab (via the JavaScript function window.open). In my earlier post I wasn't sending the correct POST request; I have since fixed that.

The code below successfully retrieves the form's HTML using the correct request headers, and it matches exactly the POST response I see in the Chrome inspector. However (...) I cannot retrieve the data. Once the user makes a selection, a new popup window opens, but I cannot capture it. The popup has a new URL, and its contents are not part of the request's response body.

Request URL: https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx

Popup URL [the data I want to download]: https://apps.neb-one.gc.ca/CommodityStatistics/ViewReport.aspx

import requests
from bs4 import BeautifulSoup

url = 'https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx'

with requests.Session() as s:
    s.headers = {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.115 Safari/537.36",
        "Content-Type": "application/x-www-form-urlencoded",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Referer": "https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx",
        "Accept-Encoding": "gzip, deflate, br",
        "Accept-Language": "en-US,en;q=0.9"
    }

    # Initial GET: collect the form fields and the ASP.NET state inputs.
    response = s.get(url)
    soup = BeautifulSoup(response.content, 'html5lib')

    data = {tag['name']: tag['value']
            for tag in soup.select('input[name^=ctl00]') if tag.get('value')}
    state = {tag['name']: tag['value']
             for tag in soup.select('input[name^=__]')}

    payload = data.copy()
    payload.update(state)

    # First postback: select the ELEC commodity system.
    payload.update({
        "ctl00$MainContent$rdoCommoditySystem": "ELEC",
        "ctl00$MainContent$lbReportName": '76',
        "ctl00$MainContent$rdoReportFormat": 'PDF',
        "ctl00$MainContent$ddlStartYear": "2008",
        "__EVENTTARGET": "ctl00$MainContent$rdoCommoditySystem$2"
    })

    print(payload['__EVENTTARGET'])
    print(payload['__VIEWSTATE'][-20:])

    response = s.post(url, data=payload, allow_redirects=True)
    soup = BeautifulSoup(response.content, 'html5lib')

    # Refresh the ASP.NET state fields from the new response.
    state = {tag['name']: tag['value']
             for tag in soup.select('input[name^=__]')}

    payload.pop("ctl00$MainContent$ddlStartYear")
    payload.update(state)
    # Second postback: pick the report name.
    payload.update({
        "__EVENTTARGET": "ctl00$MainContent$lbReportName",
        "ctl00$MainContent$lbReportName": "171",
        "ctl00$MainContent$ddlFrom": "01/12/2018 12:00:00 AM"
    })

    print(payload['__EVENTTARGET'])
    print(payload['__VIEWSTATE'][-20:])

    response = s.post(url, data=payload, allow_redirects=True)
    soup = BeautifulSoup(response.content, 'html5lib')

    state = {tag['name']: tag['value']
             for tag in soup.select('input[name^=__]')}

    payload.update(state)
    # Final postback: click "View" to generate the HTML report.
    payload.update({
        "ctl00$MainContent$ddlFrom": "01/10/1990 12:00:00 AM",
        "ctl00$MainContent$rdoReportFormat": "HTML",
        "ctl00$MainContent$btnView": "View"
    })

    print(payload['__VIEWSTATE'])

    response = s.post(url, data=payload, allow_redirects=True)
    print(response.text)

Is there any way to retrieve the data from the popup window using requests and bs4? I noticed that requests-html can parse and render JS, but none of my attempts with it succeeded.

The page source of the URL shows this JS code, which I assume is what opens the popup containing the data:


//<![CDATA[
window.open("ViewReport.aspx", "_blank");Sys.Application.initialize();
//]]>

But I can't access it.
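Since the popup is just window.open("ViewReport.aspx") on the same host, one thing worth trying (an assumption, not verified against this site) is requesting that URL directly with the same requests.Session immediately after the final POST, so the server can tie the GET to the state built up by the form submissions. A minimal sketch:

```python
# Assumption: after the final POST, the server prepares the report for this
# session, and the popup merely fetches it from the relative URL
# "ViewReport.aspx". The same session cookies should then be enough.
from urllib.parse import urljoin

FORM_URL = 'https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx'

def popup_url(form_url: str, target: str = 'ViewReport.aspx') -> str:
    # Resolve the relative target exactly as the browser's window.open would.
    return urljoin(form_url, target)

# Inside the `with requests.Session() as s:` block, after the last s.post():
#     report = s.get(popup_url(FORM_URL))
#     with open('report.html', 'wb') as f:
#         f.write(report.content)
```

Whether this works depends on the server keying the report to the session rather than to hidden parameters passed to the popup, which only an actual attempt can confirm.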

python selenium web-scraping python-requests python-requests-html
1 Answer

See this Scrapy blog post: https://blog.scrapinghub.com/2016/04/20/scrapy-tips-from-the-pros-april-2016-edition

I have used the concept it describes to scrape ASPX pages in the past.
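The core of that approach is carrying the ASP.NET hidden state fields (__VIEWSTATE, __EVENTVALIDATION, and friends) from each response into the next POST, which Scrapy's FormRequest.from_response does automatically. A stdlib-only sketch of the same idea, so it can be seen without installing Scrapy (the sample HTML is illustrative, not taken from this site):

```python
# Collect every hidden <input> from a page so its name/value pairs can be
# merged into the next POST payload, mimicking FormRequest.from_response.
from html.parser import HTMLParser

class HiddenInputs(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag == 'input':
            a = dict(attrs)
            if a.get('type') == 'hidden' and 'name' in a:
                self.fields[a['name']] = a.get('value') or ''

def hidden_fields(html: str) -> dict:
    parser = HiddenInputs()
    parser.feed(html)
    return parser.fields

# Illustrative usage with made-up markup:
sample = ('<form><input type="hidden" name="__VIEWSTATE" value="abc"/>'
          '<input type="text" name="q" value="x"/></form>')
print(hidden_fields(sample))  # only the hidden field survives
```

In the question's code this is what the `input[name^=__]` selector does after every response; the important part is repeating it before each POST, never reusing stale state.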
