Scrapy-playwright: KeyError: 'playwright_page'


I am trying to scrape a web page that loads more articles as you scroll down. To achieve this, I am using scrapy in combination with playwright. This is the Python code of my spider:

import json

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.http import Request
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from scrapy.utils.response import open_in_browser
from scrapy_playwright.page import PageMethod
from scrapy.selector import Selector



class SiftedCrawler(CrawlSpider):
    name = "SiftedCrawler"
    allowed_domains = ["sifted.eu"]
    start_urls = ["https://sifted.eu/sectors/"]

    categories = ["artificial-intelligence"]

    rules = (
        Rule(LinkExtractor(allow=categories), callback="parse_page", follow=True),
    )

    def parse_page(self, response):
        # Find total number of loadable pages for category
        res = response.xpath("//script[@type='application/json']/text()").get()
        res = json.loads(res)
        num_pages = res["props"]["pageProps"]["articles"]["pages"]

        # xpath of articles' url
        xpath_articles = "//a[@class='ga-article-list-card articleListCard__link peer absolute inset-0 -z-0']/@href"

        kwargs = {"num_pages": num_pages, "xpath_articles": xpath_articles}
        yield Request(
                      url=response.url,
                      meta={
                          "playwright": True,
                          "playwright_include_page": True
                      },
                      errback=self.close_page,
                      cb_kwargs=kwargs,
                      callback=self.parse_articles
        )

    async def parse_articles(self, response, num_pages, xpath_articles):
        page = response.meta["playwright_page"]
        # Playwright selectors must resolve to elements, so drop the
        # trailing /@href before waiting for the article cards
        xpath_cards = xpath_articles.removesuffix("/@href")
        for _ in range(num_pages):
            await page.wait_for_selector(f"xpath={xpath_cards}")
            await page.evaluate("window.scrollBy(0, document.body.scrollHeight)")
        # Grab the DOM of the fully scrolled page, then release the page
        html = await page.content()
        await page.close()

        # Extract the article URLs and follow them
        articles = Selector(text=html).xpath(xpath_articles).getall()
        for article in articles:
            yield Request(url=response.urljoin(article), callback=self.parse_content)

    def parse_content(self, response):
        pass

    async def close_page(self, error):
        page = error.request.meta['playwright_page']
        await page.close()


def run_spider():
    process = CrawlerProcess(get_project_settings())
    process.crawl(SiftedCrawler)
    process.start()


if __name__ == "__main__":
    run_spider()

These are the spider's settings:

BOT_NAME = "SiftedCrawling"

SPIDER_MODULES = ["SiftedCrawling.spiders"]
NEWSPIDER_MODULE = "SiftedCrawling.spiders"
DUPEFILTER_CLASS = 'scrapy.dupefilters.BaseDupeFilter'
ROBOTSTXT_OBEY = True
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 0
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
FEED_EXPORT_ENCODING = "utf-8"
DOWNLOAD_HANDLERS = {
    "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
    "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
}
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"

The problem is that, even though I set

"playwright_include_page": True

the playwright_page key is not passed in the meta dict, giving me the following error:

2023-08-29 12:14:48 [scrapy.utils.log] INFO: Scrapy 2.10.0 started (bot: SiftedCrawling)
2023-08-29 12:14:48 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.3, cssselect 1.2.0, parsel 1.8.1, w3lib 2.1.2, Twisted 22.10.0, Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0], pyOpenSSL 23.2.0 (OpenSSL 3.1.2 1 Aug 2023), cryptography 41.0.3, Platform Linux-5.19.0-50-generic-x86_64-with-glibc2.35
2023-08-29 12:14:48 [scrapy.addons] INFO: Enabled addons:
[]
2023-08-29 12:14:48 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'SiftedCrawling',
 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter',
 'FEED_EXPORT_ENCODING': 'utf-8',
 'HTTPCACHE_ENABLED': True,
 'NEWSPIDER_MODULE': 'SiftedCrawling.spiders',
 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['SiftedCrawling.spiders'],
 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2023-08-29 12:14:48 [asyncio] DEBUG: Using selector: EpollSelector
2023-08-29 12:14:48 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2023-08-29 12:14:48 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.unix_events._UnixSelectorEventLoop
2023-08-29 12:14:48 [scrapy.extensions.telnet] INFO: Telnet Password: 0cf51f631c97cc16
2023-08-29 12:14:48 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2023-08-29 12:14:48 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats',
 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware']
2023-08-29 12:14:48 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2023-08-29 12:14:48 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2023-08-29 12:14:48 [scrapy.core.engine] INFO: Spider opened
2023-08-29 12:14:48 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2023-08-29 12:14:48 [scrapy.extensions.httpcache] DEBUG: Using filesystem cache storage in /home/iliamous/PycharmProjects/sifted_crawler/SiftedCrawling/.scrapy/httpcache
2023-08-29 12:14:48 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2023-08-29 12:14:48 [scrapy-playwright] INFO: Starting download handler
2023-08-29 12:14:48 [scrapy-playwright] INFO: Starting download handler
2023-08-29 12:14:53 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://sifted.eu/robots.txt> (referer: None) ['cached']
2023-08-29 12:14:53 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://sifted.eu/sectors/> (referer: None) ['cached']
2023-08-29 12:14:53 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (308) to <GET https://sifted.eu/sector/artificial-intelligence> from <GET https://sifted.eu/sector/artificial-intelligence/>
2023-08-29 12:14:53 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://sifted.eu/sector/artificial-intelligence> (referer: https://sifted.eu/sectors/) ['cached']
2023-08-29 12:14:54 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://sifted.eu/sector/artificial-intelligence> (referer: https://sifted.eu/sector/artificial-intelligence) ['cached']
2023-08-29 12:14:54 [scrapy.core.scraper] ERROR: Spider error processing <GET https://sifted.eu/sector/artificial-intelligence> (referer: https://sifted.eu/sector/artificial-intelligence)
Traceback (most recent call last):
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/utils/defer.py", line 293, in aiter_errback
    yield await it.__anext__()
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/utils/python.py", line 374, in __anext__
    return await self.data.__anext__()
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/utils/python.py", line 355, in _async_chain
    async for o in as_async_generator(it):
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/utils/asyncgen.py", line 14, in as_async_generator
    async for r in it:
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/utils/python.py", line 374, in __anext__
    return await self.data.__anext__()
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/utils/python.py", line 355, in _async_chain
    async for o in as_async_generator(it):
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/utils/asyncgen.py", line 14, in as_async_generator
    async for r in it:
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 118, in process_async
    async for r in iterable:
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/spidermiddlewares/offsite.py", line 31, in process_spider_output_async
    async for r in result or ():
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 118, in process_async
    async for r in iterable:
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/spidermiddlewares/referer.py", line 355, in process_spider_output_async
    async for r in result or ():
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 118, in process_async
    async for r in iterable:
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/spidermiddlewares/urllength.py", line 30, in process_spider_output_async
    async for r in result or ():
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 118, in process_async
    async for r in iterable:
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/spidermiddlewares/depth.py", line 35, in process_spider_output_async
    async for r in result or ():
  File "/home/iliamous/PycharmProjects/sifted_crawler/venv/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 118, in process_async
    async for r in iterable:
  File "/home/iliamous/PycharmProjects/sifted_crawler/SiftedCrawling/SiftedCrawling/spiders/crawling_spider.py", line 53, in parse_articles
    page = response.meta['playwright_page']
KeyError: 'playwright_page'
2023-08-29 12:14:54 [scrapy.core.engine] INFO: Closing spider (finished)
2023-08-29 12:14:54 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1292,
 'downloader/request_count': 5,
 'downloader/request_method_count/GET': 5,
 'downloader/response_bytes': 78474,
 'downloader/response_count': 5,
 'downloader/response_status_count/200': 4,
 'downloader/response_status_count/308': 1,
 'elapsed_time_seconds': 5.359041,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2023, 8, 29, 9, 14, 54, 182998),
 'httpcache/hit': 5,
 'httpcompression/response_bytes': 478554,
 'httpcompression/response_count': 4,
 'log_count/DEBUG': 9,
 'log_count/ERROR': 1,
 'log_count/INFO': 12,
 'memusage/max': 3083468800,
 'memusage/startup': 3083468800,
 'request_depth_max': 2,
 'response_received_count': 4,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 4,
 'scheduler/dequeued/memory': 4,
 'scheduler/enqueued': 4,
 'scheduler/enqueued/memory': 4,
 'spider_exceptions/KeyError': 1,
 'start_time': datetime.datetime(2023, 8, 29, 9, 14, 48, 823957)}
2023-08-29 12:14:54 [scrapy.core.engine] INFO: Spider closed (finished)
2023-08-29 12:14:54 [scrapy-playwright] INFO: Closing download handler
2023-08-29 12:14:54 [scrapy-playwright] INFO: Closing download handler
1 Answer

Set

HTTPCACHE_ENABLED = False

in your project settings. With the HTTP cache enabled, the request is answered by HttpCacheMiddleware (note the ['cached'] flags in your log) before it ever reaches the Playwright download handler, so playwright_page is never added to the response meta. Alternatively, use the custom_settings attribute:

class SiftedCrawler(CrawlSpider):
    custom_settings = {
        "HTTPCACHE_ENABLED": False,
        "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
        "DOWNLOAD_HANDLERS": {
            "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
            "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
        },
        "ITEM_PIPELINES": {},
    }
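
If you want to keep the HTTP cache for ordinary requests and only bypass it for the Playwright ones, Scrapy's HttpCacheMiddleware also honours the dont_cache meta key, which makes it skip both the cache lookup and the cache write for a single request. A minimal sketch of the request from parse_page with that key added:

# Inside parse_page(), reusing the kwargs and callbacks from the question
yield Request(
    url=response.url,
    meta={
        "playwright": True,
        "playwright_include_page": True,
        # dont_cache tells HttpCacheMiddleware to skip both the cache
        # lookup and the cache write for this request only
        "dont_cache": True,
    },
    errback=self.close_page,
    cb_kwargs=kwargs,
    callback=self.parse_articles,
)

As a side note on the scrolling itself: scrapy-playwright can also run the scrolls for you through the playwright_page_methods meta key, so the page object never needs to be exposed to the callback. This is a sketch under the assumption that a fixed one-second pause (wait_for_timeout) is enough for each batch of articles to load; the final response body then already contains the scrolled DOM, so the callback can extract the links with response.xpath(xpath_articles).getall() instead of touching the page object:

# PageMethod is already imported at the top of the spider
scrolls = []
for _ in range(num_pages):
    scrolls.append(PageMethod("evaluate", "window.scrollBy(0, document.body.scrollHeight)"))
    scrolls.append(PageMethod("wait_for_timeout", 1000))  # assumed load pause

yield Request(
    url=response.url,
    meta={
        "playwright": True,
        "playwright_page_methods": scrolls,
        "dont_cache": True,
    },
    cb_kwargs=kwargs,
    callback=self.parse_articles,
)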