BeautifulSoup parses pages without the authenticated session

Problem description

I am trying to scrape data from several pages using Scrapy and Selenium. I log in successfully with the Selenium driver, but when my spider starts crawling it does not use the Selenium login session, and it only scrapes the data that is visible to any (unauthenticated) user.

    import requests
    import scrapy
    from bs4 import BeautifulSoup
    from scrapy import Selector, Spider
    from selenium import webdriver

    # TestItem is defined in the project's items.py

    class Brother(Spider):
        name = "spiderbrother"
        allowed_domains = ["mywebsite"]
        start_urls = ['https://mywebsite../']
        custom_settings = {
            'ITEM_PIPELINES': {
                'Equipe.pipelines.Brother': 500
            },
            'COOKIES_ENABLED': True
        }

        def parse(self, response):
            # Log in with Selenium. Note: this browser session is never handed
            # over to Scrapy, so the Requests below go out unauthenticated.
            driver = webdriver.Firefox()
            driver.get("https://mywebsite../login")
            username = driver.find_element_by_id("email")
            password = driver.find_element_by_id("passwd")
            username.send_keys("myEmail")
            password.send_keys("MyPWD")
            driver.find_element_by_name("SubmitLogin").click()
            categories = Selector(response).xpath('//*[@id="leo-top-menu"]/ul/li/a')
            for categorie in categories:
                page_url = categorie.xpath('@href').extract_first()
                next_page = response.urljoin(page_url)
                if next_page:
                    yield scrapy.Request(url=next_page, callback=self.types)

        def types(self, response):
            sub_categories = Selector(response).xpath('//*[@id="subcategories"]/div/div/div/h5/a')
            for sub_categorie in sub_categories:
                page_url = sub_categorie.xpath('@href').extract_first()
                next_page = response.urljoin(page_url)
                if next_page:
                    yield scrapy.Request(url=next_page, callback=self.products)

        def products(self, response):
            products = Selector(response).xpath('//div[@class="product-image-container image"]/a')
            for product in products:
                url = product.xpath('@href').extract_first()
                # requests.get() opens a fresh HTTP session with no cookies
                page = requests.get(url).text
                soup = BeautifulSoup(page, 'html.parser')
                item = TestItem()
                item["title"] = soup.find("h1").text
                item['image_url'] = soup.find("div", {"id": "image-block"}).img["src"]
                item['price'] = soup.find("span", {"id": "our_price_display"}).text
                try:
                    item['availability'] = soup.find("span", {"id": "availability_value"}).text
                except AttributeError:
                    item['availability'] = "Available"
                try:
                    item['description'] = soup.find("div", {"itemprop": "description"}).text.strip()
                except AttributeError:
                    item['description'] = "no description found"
                yield item

            next_page = response.xpath('//li[@class="pagination_next"]/a/@href').extract_first()
            next_page = response.urljoin(next_page)
            if next_page:
                yield scrapy.Request(url=next_page, callback=self.products)
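(One way to carry the Selenium session over into Scrapy, not part of the original post, is to copy the driver's cookies into each Request. A minimal sketch; the cookie conversion is the only assumption here:)

```python
def selenium_cookies_to_scrapy(selenium_cookies):
    """Convert the dicts returned by driver.get_cookies() into the
    {name: value} mapping accepted by scrapy.Request(cookies=...)."""
    return {c["name"]: c["value"] for c in selenium_cookies}

# Usage inside parse(), after the SubmitLogin click:
#     cookies = selenium_cookies_to_scrapy(driver.get_cookies())
#     yield scrapy.Request(next_page, cookies=cookies, callback=self.types)
```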

I get all of the data except "price", which is only available when logged in.

I tried logging in with FormRequest instead of Selenium and still ran into the same problem. I also tried retrieving the data (the price only) before visiting the product's page, parsing it with BeautifulSoup, and that worked.. so BeautifulSoup seems to be the problem here.

I log in using FormRequest:

    def parse(self, response):
        return FormRequest.from_response(
            response,
            formxpath="//*[@id='login_form']",
            formdata={'email': 'MyEmail', 'passwd': 'myPWD'},
            callback=self.after_login)

    def after_login(self, response):
        categories = Selector(response).xpath('//*[@id="leo-top-menu"]/ul/li/a')
        for categorie in categories:
            page_url = categorie.xpath('@href').extract_first()
            next_page = response.urljoin(page_url)
            if next_page:
                yield Request(url=next_page, callback=self.types)
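(A quick way to confirm that the FormRequest login actually succeeded before crawling is to check whether the site served the login form again. This is a heuristic sketch, not from the original post; it assumes the site re-renders the form with the same id on failure:)

```python
def login_failed(response_text):
    # Heuristic (assumption): on a failed login the site renders the
    # login form again, so its id reappears in the HTML.
    return 'id="login_form"' in response_text

# In after_login() one might bail out early:
#     if login_failed(response.text):
#         self.logger.error("Login failed; price data will be missing")
#         return
```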
python selenium web-scraping scrapy

1 Answer

It seems that requests.get() opens the URL without the login session, so I fetched the page with a Request instead and used a callback to a new method, parse_item(), so that BeautifulSoup parses from the response. That works.

Updated code:

def products(self, response):
    products = Selector(response).xpath('//div[@class="product-image-container image"]/a')
    for product in products:
        url = product.xpath('@href').extract_first()
        page = response.urljoin(url)
        yield Request(url=page, callback=self.parse_item)
    next_page = response.xpath('//li[@class="pagination_next"]/a/@href').extract_first()
    next_page = response.urljoin(next_page)
    if next_page:
        yield Request(url=next_page, callback=self.products)

def parse_item(self, response):
    soup = BeautifulSoup(response.text, 'lxml')
    item = TestItem()
    item["title"] = soup.find("h1").text
    item['image_url'] = soup.find("div", {"id": "image-block"}).img["src"]
    item['price'] = soup.find("span", {"id": "our_price_display"}).text
    try:
        item['availability'] = soup.find("span", {"id": "availability_value"}).text
    except AttributeError:
        item['availability'] = "Available"
    try:
        item['description'] = soup.find("div", {"itemprop": "description"}).text.strip().replace(u'\xa0', u' ')
    except AttributeError:
        print("no description found")
    yield item
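(The root cause is that requests.get() starts a brand-new HTTP session with no cookies, while Scrapy's downloader, with COOKIES_ENABLED, carries the session cookies on every Request. If fetching outside Scrapy were still desired, the session cookies would have to be forwarded explicitly. A sketch; the cookie name below is hypothetical:)

```python
import requests

def build_authenticated_request(url, cookies):
    """Prepare a GET that carries the session cookies a plain
    requests.get() would otherwise lack."""
    req = requests.Request("GET", url, cookies=cookies)
    return req.prepare()

prepared = build_authenticated_request(
    "https://example.com/product", {"PHPSESSID": "abc123"})
# prepared.headers now includes a Cookie header with the session id,
# and requests.Session().send(prepared) would fetch the page with it.
```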