Parse functions not being called in Scrapy web crawler

Question · Votes: 0 · Answers: 1

Can anyone tell me why ParseLinks and ParseContent are not being called? The rest of the code runs and prints/appends as expected, but I get tumbleweeds from the two parse functions. Any suggestions for getting more error information are also welcome.

import scrapy
import scrapy.shell
from scrapy.crawler import CrawlerProcess


Websites = ("https://www.flylevel.com/", "https://www.latam.com/en_us/")
links = []
D = {}
#D = {main website: links: content}
def dictlayout():
    for W in Websites:
        D[W] = []

dictlayout()

class spider(scrapy.Spider):
    name = "spider"
    start_urls = Websites
    print("request level 1")
    def start_requests(self):
        print("request level 2")
        for U in self.start_urls:
            print("request level 3")
            yield scrapy.Request(U, callback = self.ParseLinks)
            print("links: ")
            print(links)


    def ParseLinks(self, response):
        Link = response.xpath("/html//@href")
        Links = link.extract()
        print("parser print")
        print(link)
        for L in Links:
            link.append(L)
            D[W]=L
            yield response.follow(url=L, callback=self.ParseContent)

    def ParseContent(self, response):
        content = ParseLinks.extract_first().strip()
        D[W][L] = content
        print("content")
        print(content)

print(D)
print(links)


process = CrawlerProcess()
process.crawl(spider)
process.start()
Tags: function, web-scraping, callback, scrapy
1 Answer

1 vote

I think ParseLinks actually is being called. The problem is that you are trying to extract href attributes starting from the html tag. This line, Link = response.xpath("/html//@href"), may be breaking your code. Try Link = response.xpath("//a/@href") instead.
