How to scrape all Test match details from cricinfo

Question · 2 votes · 2 answers

I am trying to scrape all the Test match details, but it shows HTTP Error 504: Gateway Timeout. I do get some Test match details, but then nothing is displayed. I have used bs4 to scrape the Test match details from cricinfo.

I need to scrape the details of 2000 pages of Test matches. Here is my code:

import os
import time
import unicodedata
import urllib.request as req
from urllib.parse import urljoin

from bs4 import BeautifulSoup

BASE_URL = 'http://www.espncricinfo.com'

if not os.path.exists('./espncricinfo-fc'):
    os.mkdir('./espncricinfo-fc')

for i in range(0, 2000):

    soupy = BeautifulSoup(req.urlopen('http://search.espncricinfo.com/ci/content/match/search.html?search=test;all=1;page=' + str(i)).read(), 'html.parser')

    time.sleep(1)
    for new_host in soupy.findAll('a', {'class' : 'srchPlyrNmTxt'}):
        try:
            new_host = new_host['href']
        except KeyError:
            continue
        odiurl = urljoin(BASE_URL, new_host)
        new_host = unicodedata.normalize('NFKD', new_host).encode('ascii', 'ignore').decode('ascii')
        print(new_host)
        html = req.urlopen(odiurl).read()
        if html:
            with open('espncricinfo-fc/{0!s}'.format(new_host.split("/")[4]), "wb") as f:
                f.write(html)
                print(html)
        else:
            print("no html")
python-3.x beautifulsoup
2 Answers
0 votes

This usually happens when you make too many requests too quickly; the server may be down, or your connection may be blocked by the server's firewall. Try increasing your sleep() or adding a random sleep:

import random

.....
for i in range(0, 2000):
    soupy = BeautifulSoup(....)

    time.sleep(random.randint(2,6))
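Building on that suggestion, one way to cope with an intermittent 504 is to retry with an exponential backoff instead of a fixed sleep. The sketch below is my own illustration (the function names `backoff_delays` and `fetch_with_retries` are hypothetical, not part of the original answer):

```python
import random
import time
import urllib.request as req
from urllib.error import HTTPError

def backoff_delays(attempts, base=2.0, jitter=4.0):
    # Seconds to wait before each retry: base, 2*base, 4*base, ... plus random jitter.
    return [base * (2 ** n) + random.uniform(0, jitter) for n in range(attempts)]

def fetch_with_retries(url, attempts=4):
    # Retry only on server-side errors (5xx, e.g. 504 Gateway Timeout).
    for delay in backoff_delays(attempts):
        try:
            return req.urlopen(url).read()
        except HTTPError as err:
            if err.code < 500:
                raise          # a 4xx will not be fixed by waiting
            time.sleep(delay)
    raise RuntimeError('server kept failing for ' + url)
```

Replacing the bare `req.urlopen(odiurl).read()` call with `fetch_with_retries(odiurl)` keeps the loop alive across occasional timeouts while still failing fast on genuine client errors.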

0 votes

Not sure why, but this seems to work for me.

I made a few changes to how the links are built in the loop. I wasn't sure how you wanted the output to look when writing it to your file, so I left that part alone. But like I said, it seems to work fine for me.

import bs4
import requests  
import os
import time
import urllib.request as req

BASE_URL = 'http://www.espncricinfo.com'

if not os.path.exists('C:/espncricinfo-fc'):
    os.mkdir('C:/espncricinfo-fc')

for i in range(0, 2000):

    url = 'http://search.espncricinfo.com/ci/content/match/search.html?search=test;all=1;page=%s' % i
    html = requests.get(url)

    print('Checking page %s of 2000' % (i + 1))

    soupy = bs4.BeautifulSoup(html.text, 'html.parser')

    time.sleep(1)
    for new_host in soupy.findAll('a', {'class' : 'srchPlyrNmTxt'}):
        try:
            new_host = new_host['href']
        except KeyError:
            continue
        odiurl = BASE_URL + new_host
        new_host = odiurl
        print(new_host)
        html = req.urlopen(odiurl).read()

        if html:
            with open('C:/espncricinfo-fc/{0!s}'.format('_'.join(new_host.split("/")[4:])), "wb") as f:
                f.write(html)
                #print(html)
        else:
            print("no html")
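Since this answer already imports requests, a possible tidy-up (my own sketch, not part of the original answer; `save_match` and `match_filename` are hypothetical helper names) is to reuse a single requests.Session for the match pages instead of switching back to urllib, and to let `raise_for_status()` surface a 504 instead of silently writing an empty file:

```python
import os
from urllib.parse import urljoin

import requests

BASE_URL = 'http://www.espncricinfo.com'

def match_filename(url):
    # Same naming scheme as the answer: join the path segments after the domain.
    return '_'.join(url.split('/')[4:])

def save_match(session, href, out_dir='espncricinfo-fc'):
    # Download one match page over the shared session and write it to disk.
    url = urljoin(BASE_URL, href)
    resp = session.get(url)
    resp.raise_for_status()        # raises on 504 instead of saving nothing
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, match_filename(url)), 'wb') as f:
        f.write(resp.content)
```

A Session reuses the underlying TCP connection across the 2000 requests, which is both faster and gentler on the server than opening a fresh connection per match.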
© www.soinside.com 2019 - 2024. All rights reserved.