I am trying to scrape data from the PGA website to get a list of all golf courses in the United States. I want to scrape the data and write it to a CSV file. My problem is that I get the error below after running my script. Can anyone help fix this error and show me how to extract the data?

Here is the error message:

  File "/Users/AGB/Final_PGA2.py", line 44, in <module>
    writer.writerow(row)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u201c' in position 35: ordinal not in range(128)

The script is below:
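The offending character, u'\u201c', is a left curly double quote (“), which the ASCII codec cannot represent. The failure can be reproduced in isolation; a minimal sketch, shown here in Python 3 syntax (the mechanism is the same in Python 2 when a unicode string hits an ASCII-only encoder):

```python
# U+201C is a left curly double quote: "
ch = '\u201c'

# Encoding it to ASCII fails, exactly as in the traceback above.
try:
    ch.encode('ascii')
except UnicodeEncodeError as exc:
    print(exc.encoding)  # 'ascii'

# Encoding it to UTF-8 succeeds.
print(ch.encode('utf-8'))  # b'\xe2\x80\x9c'
```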
import csv
import requests
from bs4 import BeautifulSoup

courses_list = []

for i in range(906):  # Number of pages plus one
    url = "http://www.pga.com/golf-courses/search?page={}&searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0&course_type=both&has_events=0".format(i)
    r = requests.get(url)
    soup = BeautifulSoup(r.content)

    g_data2 = soup.find_all("div", {"class": "views-field-nothing"})

    for item in g_data2:
        try:
            name = item.contents[1].find_all("div", {"class": "views-field-title"})[0].text
            print name
        except:
            name = ''
        try:
            address1 = item.contents[1].find_all("div", {"class": "views-field-address"})[0].text
        except:
            address1 = ''
        try:
            address2 = item.contents[1].find_all("div", {"class": "views-field-city-state-zip"})[0].text
        except:
            address2 = ''
        try:
            website = item.contents[1].find_all("div", {"class": "views-field-website"})[0].text
        except:
            website = ''
        try:
            Phonenumber = item.contents[1].find_all("div", {"class": "views-field-work-phone"})[0].text
        except:
            Phonenumber = ''

        course = [name, address1, address2, website, Phonenumber]
        courses_list.append(course)

with open('PGA_Final.csv', 'a') as file:
    writer = csv.writer(file)
    for row in courses_list:
        writer.writerow(row)
You should not get the error on Python 3. The code example below fixes some unrelated issues in your code. It parses the specified fields on a given web page and saves them as CSV:
#!/usr/bin/env python3
import csv
from urllib.request import urlopen

import bs4  # $ pip install beautifulsoup4

page = 905
url = ("http://www.pga.com/golf-courses/search?page=" + str(page) +
       "&searchbox=Course+Name&searchbox_zip=ZIP&distance=50&price_range=0"
       "&course_type=both&has_events=0")
with urlopen(url) as response:
    field_content = bs4.SoupStrainer('div', 'views-field-nothing')
    soup = bs4.BeautifulSoup(response, parse_only=field_content)

fields = [bs4.SoupStrainer('div', 'views-field-' + suffix)
          for suffix in ['title', 'address', 'city-state-zip', 'website', 'work-phone']]

def get_text(tag, default=''):
    return tag.get_text().strip() if tag is not None else default

with open('pga.csv', 'w', newline='') as output_file:
    writer = csv.writer(output_file)
    for div in soup.find_all(field_content):
        writer.writerow([get_text(div.find(field)) for field in fields])
with open('PGA_Final.csv', 'a') as file:
    writer = csv.writer(file)
    for row in courses_list:
        writer.writerow(row)
Change it to:
with open('PGA_Final.csv', 'a') as file:
    writer = csv.writer(file)
    for row in courses_list:
        # row is a list, so encode each field (a list has no .encode method)
        writer.writerow([field.encode('utf-8') for field in row])
Or:
import codecs
....
with codecs.open('PGA_Final.csv', 'a', encoding='utf-8') as file:
    writer = csv.writer(file)
    for row in courses_list:
        writer.writerow(row)
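On Python 3 neither workaround is needed: the built-in open() accepts an encoding argument directly, and csv.writer handles unicode text natively. A minimal self-contained sketch (the sample row is made up for illustration; real data would come from the scraper above):

```python
import csv

# Hypothetical sample row containing the curly quotes that broke the
# original script; in practice this would come from the scraper.
courses_list = [
    ['\u201cThe Bluffs\u201d Golf Course', '123 Main St',
     'Austin, TX 78701', 'http://example.com', '(555) 555-0100'],
]

# newline='' is required by the csv module on Python 3;
# encoding='utf-8' avoids the UnicodeEncodeError entirely.
with open('PGA_Final.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    for row in courses_list:
        writer.writerow(row)
```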