Extracting links and item names from a website and printing the lists

Question (votes: -2, answers: 2)

I am a beginner in Python programming and I am practising web scraping in Python with the bs4 module. I am trying to extract some information from a website, as shown below.

Every list that gets printed comes out empty. Please tell me where I am going wrong.

import requests
from bs4 import BeautifulSoup as bs    

res = requests.get('https://www.flipkart.com/samsung-mobile-store?otracker=nmenu_sub_Electronics_0_Samsung')
soup = bs(res.content, 'lxml')

names = [item['title'] for item in soup.select('._2cLu-1 a')]

links = [item['href'] for item in soup.select('._2cLu-l a')]

ratings = [item.text for item in soup.select('.hGSR34 div')]

print(names)
print(links)
print(ratings)
Tags: python, python-3.x, web-scraping, beautifulsoup
2 Answers
0 votes

Yes, you can do this easily with select. Note that one item has no rating. You don't need to access the same element twice over to generate the names and the links.

import requests
from bs4 import BeautifulSoup as bs

url = 'https://www.flipkart.com/samsung-mobile-store?otracker=nmenu_sub_Electronics_0_Samsung'
r = requests.get(url)
soup = bs(r.content, 'lxml')

names, links = zip(*[(item['title'], 'https://www.flipkart.com' + item['href']) for item in soup.select('._2cLu-l')])
ratings = [item.text for item in soup.select('.niH0FQ  .hGSR34')]  # 1 rating missing for a product

print(list(names))
print(list(links))
print(ratings)

If you want to combine them into a DataFrame and account for the missing rating, you can use something like the following (you can extend the if/else to the first two items if required; see the sketch after the code below):

import requests
from bs4 import BeautifulSoup as bs
import pandas as pd

url = 'https://www.flipkart.com/samsung-mobile-store?otracker=nmenu_sub_Electronics_0_Samsung'
r = requests.get(url)
soup = bs(r.content, 'lxml')

products = soup.select('._3liAhj')
names = []
links = []
ratings = []
for product in products:
    names.append(product.select_one('._2cLu-l').text)
    links.append('https://www.flipkart.com' + product.select_one('._2cLu-l')['href'])
    ratings.append(product.select_one('.hGSR34').text if product.select_one('.hGSR34') is not None else 'No rating')

df = pd.DataFrame(list(zip(names, links, ratings)), columns = ['Name', 'Link', 'Rating'])
print(df)
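
Extending the if/else to the first two items, as the parenthetical above suggests, could look roughly like this; a sketch using the same selectors, where the placeholder fallback strings ('No name', 'No link') are just assumptions:

import requests
from bs4 import BeautifulSoup as bs
import pandas as pd

url = 'https://www.flipkart.com/samsung-mobile-store?otracker=nmenu_sub_Electronics_0_Samsung'
r = requests.get(url)
soup = bs(r.content, 'lxml')

rows = []
for product in soup.select('._3liAhj'):
    name_el = product.select_one('._2cLu-l')
    rating_el = product.select_one('.hGSR34')
    rows.append({
        # the fallback strings used when an element is missing are only placeholders
        'Name': name_el.text if name_el is not None else 'No name',
        'Link': 'https://www.flipkart.com' + name_el['href'] if name_el is not None else 'No link',
        'Rating': rating_el.text if rating_el is not None else 'No rating',
    })

df = pd.DataFrame(rows, columns=['Name', 'Link', 'Rating'])
print(df)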

0 votes

To get separate lists for the names, links and ratings, create a list for each and append to it accordingly:

import requests
from bs4 import BeautifulSoup as bs

res = requests.get('https://www.flipkart.com/samsung-mobile-store?otracker=nmenu_sub_Electronics_0_Samsung')
soup = bs(res.content, 'html.parser')

namesList = []
linksList = []
ratingsList = []

namesLinks = soup.find_all('a', class_='Zhf2z-')
ratings = soup.find_all('div', class_='hGSR34')

for rat in ratings:
    ratingsList.append(rat.text)

for nameLnk in namesLinks:
    namesList.append(nameLnk.get('title', 'No title available'))
    linksList.append(nameLnk.get('href', 'No href available'))

print(namesList)
print(linksList)
print(ratingsList)

OUTPUT:

['Samsung Galaxy A30 (Black, 64 GB)', 'Samsung Galaxy M20 (Ocean Blue, 32 GB)', 'Samsung Galaxy M10 (Blue, 16 GB)', ... ]

['/samsung-galaxy-a30-black-64-gb/p/itmfec2hqbxcmbzn?pid=MOBFE4CSBDN9XETN&lid=L ...]

['4.4', '4.1', '4.1', '4.6', '4.3', '4.2', '4.3', '4.1', '4.2', '4.2', '4.2', '4.4', ... ]

EDIT:

I'll also show a way to print the device name, link and rating together:

Using zip():

import requests
from bs4 import BeautifulSoup as bs

res = requests.get('https://www.flipkart.com/samsung-mobile-store?otracker=nmenu_sub_Electronics_0_Samsung')
soup = bs(res.content, 'html.parser')

names = soup.find_all('a', class_='Zhf2z-')
ratings = soup.find_all('div', class_='hGSR34')

for nm, rat in zip(names, ratings):
    print("Device: {}, Link: {}, Rating: {}".format(nm.get('title', 'no title avialable'), nm.get('href', 'href not available'), rat.text))

OUTPUT:

Device: Samsung Galaxy A30 (Black, 64 GB) Link: /samsung-galaxy-a30-black-64-gb/pN&lid= .. .. cid=MOBFE4CSBDN9XETN Rating: 4.4
Device: Samsung Galaxy M20 (Ocean Blue, 32 GB) Link: /samsung-galaxy-m20-ocean-blue-32-gb/p/.. .. JGFRTYMC Rating: 4.1
Device: Samsung Galaxy M10 (Blue, 16 GB) Link: /samsung-galaxy-m10-blue-16-gb/p/.. .. 6JYE8YG Rating: 4.1
Device: Samsung Galaxy M30 (Gradation Black, 64 GB) Link: /samsung-galaxy-m30-gradation-black-64-gb/p/.. .. CDPXGUP Rating: 4.6
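
One caveat with zip(): it pairs elements purely by position and stops at the shorter list, so with one rating missing the last product is silently dropped (and pairs after the gap can drift out of step). A minimal sketch, assuming the same selectors, that at least avoids the truncation by padding with itertools.zip_longest; matching names and ratings inside each product container, as in the first answer, is the more robust fix:

import requests
from itertools import zip_longest
from bs4 import BeautifulSoup as bs

res = requests.get('https://www.flipkart.com/samsung-mobile-store?otracker=nmenu_sub_Electronics_0_Samsung')
soup = bs(res.content, 'html.parser')

names = soup.find_all('a', class_='Zhf2z-')
ratings = soup.find_all('div', class_='hGSR34')

# zip_longest runs until the longer list is exhausted, filling gaps with None
for nm, rat in zip_longest(names, ratings, fillvalue=None):
    title = nm.get('title', 'no title available') if nm is not None else 'no title available'
    href = nm.get('href', 'href not available') if nm is not None else 'href not available'
    rating = rat.text if rat is not None else 'No rating'
    print("Device: {}, Link: {}, Rating: {}".format(title, href, rating))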