Selenium Click() not working in a scrapy spider

Problem description (0 votes, 2 answers)

I am trying to scrape links to product pages from a listing page with a Scrapy spider. The page shows the first 10 machines and has a "show all machines" button that invokes some JavaScript. The JavaScript is fairly complex, so I can't just read the function and see which URL the button points to. I am trying to use Selenium WebDriver to simulate a click on the button, but for some reason it isn't working: when I scrape the product links I only get the first 10, not the full list.

Can anyone tell me why it isn't working?

The page I want to scrape is http://www.ncservice.com/en/second-hand-milling-machines

The spider is:

from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.http import Request, FormRequest
from scrapy import log
from scrapy.exceptions import DropItem
from scrapy import signals
from mtispider.items import MachineItem

import urlparse
import time
import MySQLdb
import unicodedata
import re
from mtispider import tools
from selenium import webdriver


class MachineSpider(CrawlSpider):
    name = 'nc-spider'
    allowed_domains = ['ncservice.com']

    def start_requests(self):
        requests = list(super(MachineSpider, self).start_requests())
        requests.append(Request('http://www.ncservice.com/en/second-hand-milling-machines', callback=self.parsencmilllist))
        return requests

    def parsencmilllist(self, response):
        hxs = HtmlXPathSelector(response)
        driver = webdriver.Firefox()
        driver.get(response.url)
        try:
            driver.FindElement(By.Id("mas-resultados-fresadoras")).Click()
        except:
            log.msg("Couldnt get all the machines", level=log.INFO)
        ncmachs = hxs.select('//div[@id="resultados"]//a/@href').extract()
        for ncmach in ncmachs:
            yield Request(ncmach,
                          meta={'type': 'Milling'},
                          callback=self.parsencmachine)
        driver.quit()

    def parsencmachine(self, response):
        # scrape the machine
        return item

Thanks!

javascript selenium-webdriver click web-crawler scrapy
2 Answers

1 vote

The main problem is that you need to initialize the Selector from the webdriver's page_source, not from the response passed into the callback:

from scrapy.contrib.spiders import CrawlSpider
from scrapy.http import Request
from scrapy import Selector

from selenium import webdriver

class MachineSpider(CrawlSpider):
    name = 'nc-spider'
    allowed_domains = ['ncservice.com']

    def start_requests(self):
        yield Request('http://www.ncservice.com/en/second-hand-milling-machines',
                      callback=self.parsencmilllist)

    def parsencmilllist(self, response):
        driver = webdriver.Firefox()

        driver.get(response.url)
        driver.find_element_by_id("mas-resultados-fresadoras").click()

        sel = Selector(text=driver.page_source)
        driver.quit()

        links = sel.xpath('//div[@id="resultados"]//a/@href').extract()
        for link in links:
            yield Request(link,
                          meta={'type': 'Milling'},
                          callback=self.parsencmachine)

    def parsencmachine(self, response):
        print(response.url)
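Independent of Selenium itself, the key pattern in this answer is that the link extraction must run over the rendered HTML string (driver.page_source), not over the response Scrapy originally downloaded. A minimal sketch of that idea using only the standard library's html.parser in place of Scrapy's Selector (the sample HTML strings are invented for illustration):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags once inside <div id="resultados">."""
    def __init__(self):
        super().__init__()
        self.in_results = False
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div" and attrs.get("id") == "resultados":
            self.in_results = True
        if self.in_results and tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])

# downloaded_html simulates the response Scrapy received (short list);
# rendered_html simulates driver.page_source after the click (full list).
downloaded_html = '<div id="resultados"><a href="/m1">m1</a></div>'
rendered_html = '<div id="resultados"><a href="/m1">m1</a><a href="/m2">m2</a></div>'

for html in (downloaded_html, rendered_html):
    parser = LinkExtractor()
    parser.feed(html)
    print(parser.links)
# Only the rendered HTML yields the full link list.
```

In practice you would also want to wait (for example with WebDriverWait) for the button's JavaScript to finish loading the extra machines before reading page_source, since grabbing it immediately after the click can still return the short list.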

0 votes

If you see the error AttributeError: 'WebDriver' object has no attribute 'find_element_by_name', you are using Selenium >= 4.3.0.

Replace methods such as find_element_by_name and find_element_by_id with find_element.
参考:https://aleshativadar.medium.com/attributeerror-webdriver-object-has-no-attribute-find-element-by-name-e7cf3b271227
