I am new to Python and working on a scraping-based project - I am supposed to extract all the content from links containing a particular search term and put it in a CSV file. As a first step, I wrote this code to extract all the links from the website based on an entered search term. I only get a blank screen as output and cannot find my mistake.
import urllib
import mechanize
from bs4 import BeautifulSoup
import datetime

def searchAP(searchterm):
    newlinks = []
    browser = mechanize.Browser()
    browser.set_handle_robots(False)
    browser.addheaders = [('User-agent', 'Firefox')]
    text = ""
    start = 0
    while "There were no matches for your search" not in text:
        url = "http://www.marketing-interactive.com/" + "?s=" + searchterm
        text = urllib.urlopen(url).read()
        soup = BeautifulSoup(text, "lxml")
        results = soup.findAll('a')
        for r in results:
            if "rel=bookmark" in r['href']:
                newlinks.append("http://www.marketing-interactive.com" + str(r["href"]))
        start += 10
    return newlinks

print searchAP("digital marketing")
The following script extracts all the links from the web page for a given search keyword, but it does not go beyond the first page. (The code below could easily be modified to collect results from multiple pages by manipulating the page number in the URL, as Rutger de Knijf described in the other answer.)
from pprint import pprint
import requests
from BeautifulSoup import BeautifulSoup

def get_url_for_search_key(search_key):
    base_url = 'http://www.marketing-interactive.com/'
    response = requests.get(base_url + '?s=' + search_key)
    soup = BeautifulSoup(response.content)
    return [url['href'] for url in soup.findAll('a', {'rel': 'bookmark'})]
Usage:
pprint(get_url_for_search_key('digital marketing'))
Output:
[u'http://www.marketing-interactive.com/astro-launches-digital-marketing-arm-blaze-digital/',
u'http://www.marketing-interactive.com/singapore-polytechnic-on-the-hunt-for-digital-marketing-agency/',
u'http://www.marketing-interactive.com/how-to-get-your-bosses-on-board-your-digital-marketing-plan/',
u'http://www.marketing-interactive.com/digital-marketing-institute-launches-brand-refresh/',
u'http://www.marketing-interactive.com/entropia-highlights-the-7-original-sins-of-digital-marketing/',
u'http://www.marketing-interactive.com/features/futurist-right-mindset-digital-marketing/',
u'http://www.marketing-interactive.com/lenovo-brings-board-new-digital-marketing-head/',
u'http://www.marketing-interactive.com/video/discussing-digital-marketing-indonesia-video/',
u'http://www.marketing-interactive.com/ubs-melvin-kwek-joins-credit-suisse-as-apac-digital-marketing-lead/',
u'http://www.marketing-interactive.com/linkedins-top-10-digital-marketing-predictions-2017/']
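Since the original goal also mentions writing the results to a CSV file, here is a minimal sketch of that next step using Python's built-in csv module (Python 2 style, to match the code above; the filename results.csv is only an example):

import csv

# Minimal sketch: write the scraped URLs to a CSV file.
# The filename "results.csv" is only an example.
links = get_url_for_search_key('digital marketing')
with open('results.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerow(['url'])                      # header row
    for link in links:
        writer.writerow([link.encode('utf-8')])   # csv in Python 2 expects bytes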
Hopefully this gives you the first step you need for your project.
You made four mistakes:

1. You define start but never use it. (Nor could you, as far as I can see on http://www.marketing-interactive.com/?s=something - there is no URL-based pagination.) So you loop over the first set of results endlessly.

2. "There were no matches for your search" is not the no-results string that site returns, so the loop would keep going forever anyway.

3. You prepend http://www.marketing-interactive.com to links that already start with http://www.marketing-interactive.com, so you end up with URLs like http://www.marketing-interactive.comhttp://www.marketing-interactive.com/astro-launches-digital-marketing-arm-blaze-digital/

4. Concerning the rel=bookmark selection: arifs solution is the proper way to go. But if you really want to do it this way, you would need something like this:
for r in results:
    if r.attrs.get('rel') and r.attrs['rel'][0] == 'bookmark':
        newlinks.append(r["href"])
This first checks whether rel exists and then checks whether its first element is "bookmark", because r['href'] simply does not contain the rel - that is not how BeautifulSoup structures things.
"Load more"
按钮。但这非常麻烦。http://www.marketing-interactive.com/wp-content/themes/MI/library/inc/loop_handler.php?pageNumber=1&postType=search&searchValue=digital+marketing
这是提供列表的网址。它具有分页功能,因此您可以轻松遍历所有结果。
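A rough sketch of how that endpoint could be iterated (untested against the live site; it assumes the endpoint returns HTML fragments containing rel="bookmark" links and an empty page once the results run out):

import requests
from bs4 import BeautifulSoup

def get_all_search_links(search_value):
    # Sketch only: walk the loop_handler.php endpoint page by page.
    # The assumption that an empty page marks the end of the results is mine.
    base = ("http://www.marketing-interactive.com/wp-content/themes/MI/"
            "library/inc/loop_handler.php")
    links = []
    page = 1
    while True:
        resp = requests.get(base, params={
            'pageNumber': page,
            'postType': 'search',
            'searchValue': search_value,
        })
        soup = BeautifulSoup(resp.content, "lxml")
        anchors = soup.findAll('a', {'rel': 'bookmark'})
        if not anchors:   # assumed stop condition: no more results
            break
        links.extend(a['href'] for a in anchors)
        page += 1
    return links

print get_all_search_links('digital marketing')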