I'm building a scraper to crawl a page and return multiple items (h3 and p tags) from a div. For some reason the scraper prints all the "name" fields when it runs, but only saves the information for the last item on the page.
Here's my code:
import scrapy

class FoodSpider(scrapy.Spider):
    name = 'food'
    allowed_domains = ['https://blog.feedspot.com/food_blogs/']
    start_urls = ['https://blog.feedspot.com/food_blogs/']

    def parse(self, response):
        blogs = response.xpath("//div[@class='fsb v4']")
        for blog in blogs:
            names = blog.xpath('.//h3/a[@class="tlink"]/text()'[0:]).extract()
            links = blog.xpath('.//p/a[@class="ext"]/@href'[0:]).extract()
            locations = blog.xpath('.//p/span[@class="location"]/text()'[0:]).extract()
            abouts = blog.xpath('.//p[@class="trow trow-wrap"]/text()[4]'[0:]).extract()
            post_freqs = blog.xpath('.//p[@class="trow trow-wrap"]/text()[6]'[0:]).extract()
            networks = blog.xpath('.//p[@class="trow trow-wrap"]/text()[9]'[0:]).extract()
            for name in names:
                name.split(',')
                # print(name)
            for link in links:
                link.split(',')
            for location in locations:
                location.split(',')
            for about in abouts:
                about.split(',')
            for post_freq in post_freqs:
                post_freq.split(',')
            for network in networks:
                network.split(',')
            yield {'name': name,
                   'link': link,
                   'location': location,
                   'about': about,
                   'post_freq': post_freq,
                   'network': network
                   }
Does anyone know what I'm doing wrong?
If you run //div[@class='fsb v4'] in DevTools, it returns only a single element, so your outer loop runs just once. You need a selector that matches each of the profile divs individually:
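To see why that matters, here is a toy illustration using the stdlib ElementTree (a real page needs a proper HTML parser such as Scrapy's own selectors, but the counting logic is the same; the markup below is invented to mirror the class names from the question):

```python
import xml.etree.ElementTree as ET

# Invented markup mimicking the page: ONE wrapper div, MANY profile rows.
html = """<html><body>
<div class="fsb v4">
  <p class="trow trow-wrap">profile 1</p>
  <p class="trow trow-wrap">profile 2</p>
  <p class="trow trow-wrap">profile 3</p>
</div>
</body></html>"""

root = ET.fromstring(html)
wrappers = root.findall(".//div[@class='fsb v4']")
profiles = root.findall(".//p[@class='trow trow-wrap']")
print(len(wrappers))   # 1 -- so the original outer loop iterates only once
print(len(profiles))   # 3 -- one match per profile, which is what you want to loop over
```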
import scrapy

class FoodSpider(scrapy.Spider):
    name = 'food'
    allowed_domains = ['blog.feedspot.com']  # domain names only, not full URLs
    start_urls = ['https://blog.feedspot.com/food_blogs/']

    def parse(self, response):
        for blog in response.css("p.trow.trow-wrap"):
            yield {'name': blog.css(".thumb.alignnone::attr(alt)").extract_first(),
                   'link': "https://www.feedspot.com/?followfeedid=%s" % blog.css("::attr(data)").extract_first(),
                   'location': blog.css(".location::text").extract_first(),
                   }
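As a footnote on why the original spider saved only the last item: the yield sits after, not inside, the inner for loops, so by the time it runs each loop variable holds only its final value (the name.split(',') calls are also no-ops, since their results are never assigned). A stripped-down sketch, with invented example values:

```python
# Hypothetical scraped values standing in for the extracted names.
names = ['Smitten Kitchen', 'Serious Eats', 'Minimalist Baker']

for name in names:
    name.split(',')  # returns a new list that is immediately discarded

# After the loop, `name` is simply the last element it was bound to:
print(name)  # Minimalist Baker
```

Yielding inside the loop (as the answer's spider does) emits one item per profile instead of one item per page.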