Scrapy Crawl ValueError


I am new to Python and still getting the hang of it. I followed a tutorial on using Scrapy to crawl quotes.toscrape.com.

I typed the code exactly as it appears in the tutorial, but I keep getting ValueError: invalid hostname: when I run scrapy crawl quotes. I am doing this in PyCharm on a Mac.

I have tried wrapping the URL inside start_urls = [] in both single and double quotes, but that does not fix the error.

Here is what the code looks like:

import scrapy

class QuoteSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = [
        'http: // quotes.toscrape.com /'
    ]

    def parse(self, response):
        title = response.css('title').extract()
        yield {'titletext':title}

It is supposed to output the title of the website.

Here is what the error looks like:

2019-11-08 12:52:42 [scrapy.core.engine] INFO: Spider opened
2019-11-08 12:52:42 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-11-08 12:52:42 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-11-08 12:52:42 [scrapy.downloadermiddlewares.robotstxt] ERROR: Error downloading <GET http:///robots.txt>: invalid hostname: 
Traceback (most recent call last):
  File "/Users/newuser/PycharmProjects/ScrapyTutorial/venv/lib/python2.7/site-packages/scrapy/core/downloader/middleware.py", line 44, in process_request
    defer.returnValue((yield download_func(request=request, spider=spider)))
ValueError: invalid hostname: 
2019-11-08 12:52:42 [scrapy.core.scraper] ERROR: Error downloading <GET http:///%20//%20quotes.toscrape.com%20/>
Traceback (most recent call last):
  File "/Users/newuser/PycharmProjects/ScrapyTutorial/venv/lib/python2.7/site-packages/scrapy/core/downloader/middleware.py", line 44, in process_request
    defer.returnValue((yield download_func(request=request, spider=spider)))
ValueError: invalid hostname: 
2019-11-08 12:52:42 [scrapy.core.engine] INFO: Closing spider (finished)
python-3.x scrapy pycharm
1 Answer

Do not put spaces in the URL:

start_urls = [
    'http://quotes.toscrape.com/'
]
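For completeness, a minimal sketch of the full spider with the corrected URL (class, spider name, and field names taken from the question) might look like this; running scrapy crawl quotes with it should yield the page title instead of the invalid-hostname error:

import scrapy

class QuoteSpider(scrapy.Spider):
    name = 'quotes'
    # The URL must be one unbroken string: no spaces around '://' or the trailing slash
    start_urls = [
        'http://quotes.toscrape.com/'
    ]

    def parse(self, response):
        # Select the <title> element and yield it as a one-field item
        title = response.css('title').extract()
        yield {'titletext': title}

With the spaces present, the downloader cannot find a hostname in the URL, which is why the log shows requests for http:///%20//%20quotes.toscrape.com%20/ with an empty host.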