Turning Scrapy spiders into part of my own program (I don't want to invoke Scrapy from the command line)

Problem description · 8 votes · 2 answers

Similar to this question: stackoverflow: running-multiple-spiders-in-scrapy

I am wondering: can I run an entire Scrapy project from inside another Python program? Let's say I want to build a program that needs to scrape several different sites, and I build an entire Scrapy project for each site.

Instead of running them from the command line, I want to run these spiders and get the information back from them.

I can already use MongoDB from Python, and I can already build Scrapy projects that contain spiders; now I just need to merge it all into one application.

I want to run the application once and be able to control multiple spiders from my own program.

Why do it this way? The application will also connect to other sites through an API and needs to compare the results from the API sites and the scraped sites in real time. I don't want to call Scrapy from the command line; it is all meant to be self-contained.

(I have been asking a lot of questions about scraping lately because I am trying to find the right solution to build on.)

Thanks :)

python web-scraping scrapy
2 Answers

8 votes

Yes, of course you can ;)

The idea (inspired by this blog post) is to create a worker and then use it in your own Python script:

from scrapy import project, signals
from scrapy.conf import settings
from scrapy.crawler import CrawlerProcess
from scrapy.xlib.pydispatch import dispatcher
from multiprocessing.queues import Queue
import multiprocessing

# Worker that runs a single crawl in its own process (so each crawl gets a
# fresh Twisted reactor) and collects the scraped items for the caller.
class CrawlerWorker(multiprocessing.Process):

    def __init__(self, spider, result_queue):
        multiprocessing.Process.__init__(self)
        self.result_queue = result_queue

        self.crawler = CrawlerProcess(settings)
        if not hasattr(project, 'crawler'):
            self.crawler.install()
        self.crawler.configure()

        self.items = []
        self.spider = spider
        # Collect every item the spider produces via the item_passed signal
        dispatcher.connect(self._item_passed, signals.item_passed)

    def _item_passed(self, item):
        self.items.append(item)

    def run(self):
        # Run the crawl to completion, then hand the collected items back
        # to the parent process through the result queue
        self.crawler.crawl(self.spider)
        self.crawler.start()
        self.crawler.stop()
        self.result_queue.put(self.items)

Usage example:

result_queue = Queue()
crawler = CrawlerWorker(MySpider(myArgs), result_queue)
crawler.start()
for item in result_queue.get():
    yield item

Another approach is to use system() to execute the scrapy crawl command.
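A minimal sketch of that approach, using Python's subprocess module rather than os.system and assuming a hypothetical project directory and spider name; the items come back through an output file rather than as Python objects:

import subprocess

# Run "scrapy crawl" as a child process from inside the Scrapy project
# directory (path and spider name are placeholders) and export the items
# to a JSON file that the calling program can load afterwards.
result = subprocess.run(
    ["scrapy", "crawl", "myspider", "-o", "items.json"],
    cwd="/path/to/my_scrapy_project",
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    print(result.stderr)

The downside is exactly what the question wants to avoid: the crawl is not self-contained in the program, and the results have to be read back from a file or stdout instead of being returned directly.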


0 votes

Maxime Lorant's answer finally solved my problem of running a Scrapy spider from my own script. It solves two problems I had:

  1. It allows calling the spider twice in a row (with the simple example from the Scrapy tutorial this crashes, because you cannot start the Twisted reactor twice).
  2. It allows returning variables from the spider back into the script.

There is just one thing: the example does not work with the Scrapy version I am using now (Scrapy 1.5.2) and Python 3.7.

After playing with the code for a while, I got a working example that I would like to share. I also have a question, see below the script. It is a standalone script, so I have included a spider as well.

import logging
import multiprocessing as mp

import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.signals import item_passed
from scrapy.utils.project import get_project_settings
from scrapy.xlib.pydispatch import dispatcher


class CrawlerWorker(mp.Process):
    name = "crawlerworker"

    def __init__(self, spider, result_queue):
        mp.Process.__init__(self)
        self.result_queue = result_queue
        self.items = list()
        self.spider = spider
        self.logger = logging.getLogger(self.name)

        self.settings = get_project_settings()
        self.logger.setLevel(logging.DEBUG)
        self.logger.debug("Create CrawlerProcess with settings {}".format(self.settings))
        self.crawler = CrawlerProcess(self.settings)

        dispatcher.connect(self._item_passed, item_passed)

    def _item_passed(self, item):
        self.logger.debug("Adding Item {} to {}".format(item, self.items))
        self.items.append(item)

    def run(self):
        self.logger.info("Start here with {}".format(self.spider.urls))
        self.crawler.crawl(self.spider, urls=self.spider.urls)
        self.crawler.start()
        self.crawler.stop()
        self.result_queue.put(self.items)


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def __init__(self, **kw):
        super(QuotesSpider, self).__init__(**kw)

        self.urls = kw.get("urls", [])

    def start_requests(self):
        # Warn only when no urls were passed (a for/else here would log the
        # message after every crawl, even when urls were given)
        if not self.urls:
            self.log('Nothing to scrape. Please pass the urls')
        for url in self.urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        """ Count number of The's on the page """
        the_count = len(response.xpath("//body//text()").re(r"The\s"))
        self.log("found {} time 'The'".format(the_count))
        yield {response.url: the_count}


def report_items(message, item_list):
    print(message)
    if item_list:
        for cnt, item in enumerate(item_list):
            print("item {:2d}: {}".format(cnt, item))
    else:
        print("No items found")


url_list = [
    'http://quotes.toscrape.com/page/1/',
    'http://quotes.toscrape.com/page/2/',
    'http://quotes.toscrape.com/page/3/',
    'http://quotes.toscrape.com/page/4/',
]

result_queue1 = mp.Queue()
crawler = CrawlerWorker(QuotesSpider(urls=url_list[:2]), result_queue1)
crawler.start()
# wait until we are done with the crawl
crawler.join()

# crawl again
result_queue2 = mp.Queue()
crawler = CrawlerWorker(QuotesSpider(urls=url_list[2:]), result_queue2)
crawler.start()
crawler.join()
#
report_items("First result", result_queue1.get())
report_items("Second result", result_queue2.get())

As you can see, the code is almost identical to the original, except that some imports have changed because of changes in the Scrapy API.

One thing though: I get a deprecation warning for the pydispatch import:

 ScrapyDeprecationWarning: Importing from scrapy.xlib.pydispatch is deprecated and will no longer be supported in future Scrapy versions. If you just want to connect signals use the from_crawler class method, otherwise import pydispatch directly if needed. See: https://github.com/scrapy/scrapy/issues/1762
  module = self._system_import(name, *args, **kwargs)

I found here how to fix this. However, I could not get it to work. Does anyone know how to apply the from_crawler class method to get rid of the deprecation warning?
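For what it is worth, here is a rough sketch of how I understand the from_crawler approach could look for this spider, connecting the handler through the crawler's signal manager instead of scrapy.xlib.pydispatch. This is my own assumption and I have not verified it in the multiprocessing setup above:

import scrapy
from scrapy import signals


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def __init__(self, **kw):
        super(QuotesSpider, self).__init__(**kw)
        self.urls = kw.get("urls", [])
        self.items = []

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        # Create the spider as usual, then register the signal handler on the
        # crawler's own signal manager instead of the deprecated pydispatch.
        spider = super(QuotesSpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.item_scraped_handler,
                                signal=signals.item_scraped)
        return spider

    def item_scraped_handler(self, item, response, spider):
        # Same role as CrawlerWorker._item_passed, but kept on the spider.
        self.items.append(item)

Note that from_crawler is only called when Scrapy creates the spider itself, so the spider class (plus its arguments) would have to be passed to crawl(), e.g. crawler.crawl(QuotesSpider, urls=...), rather than a ready-made instance, and CrawlerWorker would then read the items from the spider instead of collecting them itself.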
