I'm new to Python, so I'm trying this with Visual Studio on Windows 7.
import csv
from bs4 import BeautifulSoup
import requests
contents = []
with open('websupplies.csv', 'r') as csvf:  # Open file in read mode
    urls = csv.reader(csvf)
    for url in urls:
        contents.append(url)  # Add each url to list contents

for url in contents:  # Parse through each url in the list.
    page = requests.get(url).content
    soup = BeautifulSoup(page, "html.parser")
    price = soup.find('span', attrs={'itemprop': 'price'})
    availability = soup.find('div', attrs={'class': 'product-availability'})
But I get: No connection adapters were found for '['url']'
Why?
The csv is structured like this:
https://www.websupplies.gr/epeksergastis-intel-core-i5-8400-9mb-2-80ghz-bx80684i58400
https://www.websupplies.gr/epeksergastis-intel-celeron-g3930-2mb-2-90ghz-bx80677g3930
https://www.websupplies.gr/epeksergastis-amd-a6-9500-bristol-ridge-dual-core-3-5ghz-socket-am4-65w-ad9500agabbox
There are no semicolons at the end of the lines.
Your file is a plain list of URLs. It isn't really a CSV.
The CSV reader reads each line into its own list, so the loaded data will be structured like this:
[
["https://www.websupplies.gr/epeksergastis-intel-core-i5-8400-9mb-2-80ghz-bx80684i58400"],
["https://www.websupplies.gr/epeksergastis-intel-celeron-g3930-2mb-2-90ghz-bx80677g3930"],
["https://www.websupplies.gr/epeksergastis-amd-a6-9500-bristol-ridge-dual-core-3-5ghz-socket-am4-65w-ad9500agabbox"],
]
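You can reproduce this structure without touching the real file by feeding `csv.reader` an in-memory buffer (the two URLs below are shortened stand-ins, not the ones from the question):

```python
import csv
import io

# Stand-in data: two example URLs, one per line, just like websupplies.csv
data = "https://example.com/a\nhttps://example.com/b\n"

rows = list(csv.reader(io.StringIO(data)))
print(rows)  # each row is a one-element list, not a plain string
# → [['https://example.com/a'], ['https://example.com/b']]
```

Because each `url` in the loop is a list, `requests.get(url)` receives `['https://...']` instead of a string, which is exactly what the "No connection adapters" error complains about.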
One way around this is to pass url[0] as the argument to requests.get, but the real fix is to not use CSV at all. Since each line holds only a single piece of data, you can read it directly and pass it to requests:
with open('websupplies.csv', 'r') as csvf:  # Open file in read mode
    for line in csvf:
        contents.append(line.strip('\n'))  # Add each url to list contents
In this question it says that requests needs the http scheme; maybe that's the problem? You also have to strip the \n when you read lines from a file.
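Putting the two points together, here is a self-contained sketch of the line-based approach (the file is created in a temp directory and the item URLs are hypothetical stand-ins for the real ones, so nothing here depends on the network):

```python
import os
import tempfile

# Hypothetical sample URLs standing in for the real file contents
sample = [
    "https://www.websupplies.gr/item-one",
    "https://www.websupplies.gr/item-two",
]

# Write a stand-in for websupplies.csv so the sketch is self-contained
path = os.path.join(tempfile.mkdtemp(), "websupplies.csv")
with open(path, "w") as f:
    f.write("\n".join(sample) + "\n")

contents = []
with open(path) as f:
    for line in f:
        url = line.strip()  # removes the trailing '\n'
        if url:             # skip blank lines
            contents.append(url)

print(contents)
# each entry is now a plain string like 'https://...',
# which is a valid argument for requests.get(url)
```

Each URL already starts with https://, so the scheme requirement mentioned above is satisfied; the only thing that was missing was stripping the newline.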