Can't parse certain content with a custom method


I've written a script in scrapy to grab the name, phone number, and email from a website. The content I'm after sits behind two different links: the name and phone are on one page, and the email is on another. I'm using yellowpages.com here as an example and trying to build the logic so that I can parse the email even while I'm still on the landing page. The requirement is that I can't use meta. I did get the job done under those conditions by combining requests and BeautifulSoup with scrapy, but it's really slow.

Working version (with requests and BeautifulSoup):

import scrapy
import requests
from bs4 import BeautifulSoup
from scrapy.crawler import CrawlerProcess

def get_email(target_link):
    # Fetch the business detail page with a blocking requests call and
    # pull the mailto link out of it with BeautifulSoup.
    res = requests.get(target_link)
    soup = BeautifulSoup(res.text, "lxml")
    email = soup.select_one("a.email-business[href^='mailto:']")
    if email:
        return email.get("href")
    return None

class YellowpagesSpider(scrapy.Spider):
    name = "yellowpages"
    start_urls = ["https://www.yellowpages.com/search?search_terms=Coffee+Shops&geo_location_terms=San+Francisco%2C+CA"]

    def parse(self, response):
        for items in response.css("div.v-card .info"):
            name = items.css("a.business-name > span::text").get()
            phone = items.css("div.phones::text").get()
            # A blocking requests.get is issued for every listing's detail
            # page, which is what makes this version slow.
            email = get_email(response.urljoin(items.css("a.business-name::attr(href)").get()))
            yield {"Name": name, "Phone": phone, "Email": email}

if __name__ == "__main__":
    c = CrawlerProcess({
        'USER_AGENT': 'Mozilla/5.0',
    })
    c.crawl(YellowpagesSpider)
    c.start()

I tried to mimic the concept above without requests and BeautifulSoup, but I couldn't make it work.

import scrapy
from scrapy.crawler import CrawlerProcess

class YellowpagesSpider(scrapy.Spider):
    name = "yellowpages"
    start_urls = ["https://www.yellowpages.com/search?search_terms=Coffee+Shops&geo_location_terms=San+Francisco%2C+CA"]

    def parse(self,response):
        for items in response.css("div.v-card .info"):
            name = items.css("a.business-name > span::text").get()
            phone = items.css("div.phones::text").get()
            email_link = response.urljoin(items.css("a.business-name::attr(href)").get())

            # CAN'T APPLY THE LOGIC IN THE FOLLOWING LINE

            email = self.get_email(email_link)
            yield {"Name":name,"Phone":phone,"Email":email}

    def get_email(self, link):
        # `response` is not defined in this scope and no request is ever made
        # to `link`, so this is where the approach breaks down.
        email = response.css("a.email-business[href^='mailto:']::attr(href)").get()
        return email

if __name__ == "__main__":
    c = CrawlerProcess({
        'USER_AGENT': 'Mozilla/5.0',
    })
    c.crawl(YellowpagesSpider)
    c.start()

How can I make my second script work so that it mimics the first one?

python python-3.x web-scraping scrapy
1 Answer

I would use response.meta, but if you need to avoid it, then let's try another way: check out the lib https://pypi.org/project/scrapy-inline-requests/. (For comparison, a sketch of the plain response.meta approach appears after the code below.)

import scrapy
from inline_requests import inline_requests


class YellowpagesSpider(scrapy.Spider):
    name = "yellowpages"
    start_urls = ["https://www.yellowpages.com/search?search_terms=Coffee+Shops&geo_location_terms=San+Francisco%2C+CA"]

    @inline_requests
    def parse(self, response):
        for items in response.css("div.v-card .info"):
            name = items.css("a.business-name > span::text").get()
            phone = items.css("div.phones::text").get()

            email_url = items.css("a.business-name::attr(href)").get()
            # The @inline_requests decorator lets us yield a Request and get
            # its response back right here, without a separate callback or meta.
            email_resp = yield scrapy.Request(response.urljoin(email_url), meta={'handle_httpstatus_all': True})
            email = email_resp.css("a.email-business[href^='mailto:']::attr(href)").get() if email_resp.status == 200 else None
            yield {"Name": name, "Phone": phone, "Email": email}