No schema supplied and other errors when using requests.get()

Problem description · Votes: 0 · Answers: 7

I'm learning Python from Automate the Boring Stuff. The program is supposed to visit http://xkcd.com/ and download all the images for offline viewing.

I'm using Python 2.7 on a Mac.

For some reason, I keep getting errors like "No schema supplied", along with other errors from requests.get() itself.

Here is my code:

# Saves the XKCD comic page for offline read

import requests, os, bs4, shutil

url = 'http://xkcd.com/'

if os.path.isdir('xkcd') == True: # If xkcd folder already exists
    shutil.rmtree('xkcd') # delete it
else: # otherwise
    os.makedirs('xkcd') # Create the xkcd folder.


while not url.endswith('#'): # When there are no more posts, the url will end with '#'; exit the while loop
    # Download the page
    print 'Downloading %s page...' % url
    res = requests.get(url) # Get the page
    res.raise_for_status() # Check for errors

    soup = bs4.BeautifulSoup(res.text) # Parse the page
    # Find the URL of the comic image
    comicElem = soup.select('#comic img') # Any #comic img it finds will be saved as a list in comicElem
    if comicElem == []: # if the list is empty
        print 'Couldn\'t find the image!'
    else:
        comicUrl = comicElem[0].get('src') # Get the src of the first element in comicElem (the image) and save it as comicUrl

        # Download the image
        print 'Downloading the %s image...' % (comicUrl)
        res = requests.get(comicUrl) # Get the image. Getting something will always use requests.get()
        res.raise_for_status() # Check for errors

        # Save image to ./xkcd
        imageFile = open(os.path.join('xkcd', os.path.basename(comicUrl)), 'wb')
        for chunk in res.iter_content(10000):
            imageFile.write(chunk)
        imageFile.close()
    # Get the Prev btn's URL
    prevLink = soup.select('a[rel="prev"]')[0]
    # The Previous button is first <a rel="prev" href="/1535/" accesskey="p">&lt; Prev</a>
    url = 'http://xkcd.com/' + prevLink.get('href')
    # adds /1535/ to http://xkcd.com/

print 'Done!'

Here is the error:

Traceback (most recent call last):
  File "/Users/XKCD.py", line 30, in <module>
    res = requests.get(comicUrl) # Get the image. Getting something will always use requests.get()
  File "/Library/Python/2.7/site-packages/requests/api.py", line 69, in get
    return request('get', url, params=params, **kwargs)
  File "/Library/Python/2.7/site-packages/requests/api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "/Library/Python/2.7/site-packages/requests/sessions.py", line 451, in request
    prep = self.prepare_request(req)
  File "/Library/Python/2.7/site-packages/requests/sessions.py", line 382, in prepare_request
    hooks=merge_hooks(request.hooks, self.hooks),
  File "/Library/Python/2.7/site-packages/requests/models.py", line 304, in prepare
    self.prepare_url(url, params)
  File "/Library/Python/2.7/site-packages/requests/models.py", line 362, in prepare_url
    to_native_string(url, 'utf8')))
requests.exceptions.MissingSchema: Invalid URL '//imgs.xkcd.com/comics/the_martian.png': No schema supplied. Perhaps you meant http:////imgs.xkcd.com/comics/the_martian.png?

The thing is, I've read the section of the book about this program several times, read the requests documentation, and looked at other questions here. My syntax looks correct.

Thanks for any help!

Edit:

This didn't work:

comicUrl = ("http:"+comicElem[0].get('src')) 

I thought adding http: in front would get rid of the "No schema supplied" error.

python python-2.7 request
7 Answers
30 votes

"No schema" means you haven't supplied http:// or https://. Supply one of these and it will work.

Edit: look at this URL string!:

URL '//imgs.xkcd.com/comics/the_martian.png':
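
A minimal sketch of the fix (assuming Python 2.7, as in the question): the scraped src is protocol-relative, so joining it against the page URL with urlparse.urljoin fills in the scheme:

# Minimal sketch (Python 2.7): urljoin takes the scheme from the base URL
# when the second argument is a protocol-relative ('//...') reference.
from urlparse import urljoin

page_url = 'http://xkcd.com/'
src = '//imgs.xkcd.com/comics/the_martian.png'  # what .get('src') returned

comicUrl = urljoin(page_url, src)
print comicUrl  # http://imgs.xkcd.com/comics/the_martian.png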


17 votes

Change your comicUrl to this:

comicUrl = comicElem[0].get('src').strip("http://")  # strip any leading/trailing 'h','t','p',':','/' characters
comicUrl = "http://" + comicUrl                      # then prepend a clean scheme
if 'xkcd' not in comicUrl:                           # some srcs are relative to xkcd.com
    comicUrl = comicUrl[:7] + 'xkcd.com/' + comicUrl[7:]

print "comic url", comicUrl

3 votes

Explanation:

Some XKCD pages have special content that is not a simple image file. That's fine; you can just skip those. If your selector doesn't find any elements, soup.select('#comic img') will return an empty list.

Working code:

import requests,os,bs4,shutil

url='http://xkcd.com'

#making new folder
if os.path.isdir('xkcd') == True:
    shutil.rmtree('xkcd')
else:
    os.makedirs('xkcd')


#scrapiing information
while not url.endswith('#'):
    print('Downloading Page %s.....' %(url))
    res = requests.get(url)          #getting page
    res.raise_for_status()
    soup = bs4.BeautifulSoup(res.text, 'html.parser')   #specify a parser explicitly

    comicElem = soup.select('#comic img')     #getting img tag under the comic division
    if comicElem == []:                        #if not found print error
        print('could not find comic image')

    else:
        try:
            comicUrl = 'http:' + comicElem[0].get('src')             #getting comic url and then downloading its image
            print('Downloading image %s.....' %(comicUrl))
            res = requests.get(comicUrl)
            res.raise_for_status()

        except requests.exceptions.MissingSchema:
            #skip if not a normal image file
            prev = soup.select('a[rel="prev"]')[0]
            url = 'http://xkcd.com' + prev.get('href')
            continue

        imageFile = open(os.path.join('xkcd',os.path.basename(comicUrl)),'wb')     #write  downloaded image to hard disk
        for chunk in res.iter_content(10000):
            imageFile.write(chunk)
        imageFile.close()

        #get previous link and update url
        prev = soup.select('a[rel="prev"]')[0]
        url = "http://xkcd.com" + prev.get('href')


print('Done...')

1 vote

Actually it's not a big deal. You can see that the comic URL looks something like this:

//imgs.xkcd.com/comics/acceptable_risk.png

The only thing you need to add is http:. Remember, it is http: and not http:// as some said earlier, because the URL already contains the double slash. So change the code to:

res = requests.get('http:' + comicElem[0].get('src'))

or, equivalently:

comicUrl = 'http:' + comicElem[0].get('src')
res = requests.get(comicUrl)
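
If you want to guard against both cases, here is a small hedged sketch (absolutize is a hypothetical helper, not part of the original answer; comicElem comes from the question's code) that prepends http: only when the src is actually protocol-relative:

def absolutize(src):
    # Prepend a scheme only for protocol-relative URLs like '//imgs.xkcd.com/...'
    if src.startswith('//'):
        return 'http:' + src
    return src  # already absolute; leave it alone

comicUrl = absolutize(comicElem[0].get('src'))
res = requests.get(comicUrl)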

Happy coding!


0 votes

Just want to chime in here: I ran into this exact same error and used the answer recommended by @Ajay above, but even after adding that I was still having problems. Right after the program downloaded the first image, it would stop and return this error:

ValueError: Unsupported or invalid CSS selector: "a[rel"

This refers to the last line in the program, which uses the "Prev" button to go to the next image to download.

Anyway, after going through the bs4 documentation, I made the slight change below and it seems to work fine now:

prevLink = soup.select('a[rel^="prev"]')[0]

Someone else may run into the same problem, so I wanted to add this note.
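
For anyone who wants to compare the two selectors, here is a minimal hedged sketch (a made-up snippet, assuming a bs4 version whose CSS engine supports both forms):

import bs4

html = '<a rel="prev" href="/1535/" accesskey="p">&lt; Prev</a>'
soup = bs4.BeautifulSoup(html, 'html.parser')

# a[rel^="prev"] matches any rel value that *starts with* "prev"; per the
# note above, older bs4 selector engines accepted this form where the
# exact-match form a[rel="prev"] raised the ValueError quoted here.
print(soup.select('a[rel^="prev"]')[0].get('href'))  # prints: /1535/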


0 votes

I had a similar problem. It somehow took a 400 response object as the URL to parse, so obviously the URL was invalid. Here is my code and the error:

import cloudscraper  # to bypass Cloudflare, which is blocking requests made with the requests module
import time
import random
import json
import socket
from collections import OrderedDict
from requests import Session
 
 
with open("conf.json") as conf:
    config = json.load(conf)
    addon_api = config.get("Addon API")
    addonapi_url = config.get("Addon URL")
    addonapi_ip = config.get("Addon IP")
    addonapi_agent = config.get("Addon User-agent")
 
 
    # getip = socket.getaddrinfo("https://my.url.com", 443)
    # (family, type, proto, canonname, (address, port)) = getip[0]
    # family, type, proto, canonname, (address, port)) = getip[0]
 
    session = Session()
    headers = OrderedDict({
        'Accept-Encoding': 'gzip, deflate, br',
        'Host': addonapi_ip,
        'User-Agent': addonapi_agent
    })
    session.headers = headers
 
    # define the Data we will post to the Website
    data = {
        "apikey": addon_api,
        "action": "get_user_info",
        "value": "username"
    }
 
    try:  # try-block to handle exceptions if the request Failed
        randomsleep1 = random.randint(10, 30)
        randomsleep2 = random.randint(10, 30)
        randomsleep_total = randomsleep1 + randomsleep2
 
 
        data_variable = data
        headers_variable = headers
        payload = {"key1": addonapi_ip, "key2": data_variable, "key3": headers_variable}
 
        getrequest = session.get(url=addonapi_ip, data=data_variable, headers=headers_variable, params = payload)
        postrequest = session.get(url=addonapi_ip, data=data_variable, headers=headers_variable, params = payload)  # sending Data to the Website
        print(addonapi_ip)
 
        scraper = cloudscraper.create_scraper()  # returns a CloudScraper instance
        print(f"Sleeping for {randomsleep1} Seconds before posting Data to API!")
        time.sleep(randomsleep1)
        session.get(postrequest)  # BUG: postrequest is a Response object, not a URL
        print(f"Sleeping for {randomsleep2} Seconds before getting Data from API!")
        time.sleep(randomsleep2)
        print(f"Total Seconds i slept during the Request: {randomsleep_total}")
        session.post(postrequest)  # same mistake: a Response object where a URL belongs
        print(f"Data sent: {postrequest}")
        print(f"Data recived: {getrequest}")  # printing the output from the Request into our Terminal
 
 
    #    post = requests.post(addonapi_url, data=data, headers=headers)
    #    print(post.status_code)
    #    print(post.text)
 
    except Exception as e:
        raise e
        # print(e)  # print the error if one occurred
# =========================================== #
Sleeping for 15 Seconds before posting Data to API!
Traceback (most recent call last):
  File "C:\Users\You.Dont.See.My.Name\PythonProjects\addon_bot\addon.py", line 69, in <module>
    raise e
  File "C:\Users\You.Dont.See.My.Name\PythonProjects\addon_bot\addon.py", line 55, in <module>
    session.get(postrequest)  # sending Data to the Website
  File "P:\Documents\IT\Python\lib\site-packages\requests\sessions.py", line 546, in get
    return self.request('GET', url, **kwargs)
  File "P:\Documents\IT\Python\lib\site-packages\requests\sessions.py", line 519, in request
    prep = self.prepare_request(req)
  File "P:\Documents\IT\Python\lib\site-packages\requests\sessions.py", line 452, in prepare_request
    p.prepare(
  File "P:\Documents\IT\Python\lib\site-packages\requests\models.py", line 313, in prepare
    self.prepare_url(url, params)
  File "P:\Documents\IT\Python\lib\site-packages\requests\models.py", line 387, in prepare_url
    raise MissingSchema(error)
requests.exceptions.MissingSchema: Invalid URL '<Response [400]>': No schema supplied. Perhaps you meant http://<Response [400]>?
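
The traceback shows the real mistake: session.get() was handed a Response object (hence the '<Response [400]>' repr) instead of a URL string. A minimal hedged sketch of the fix, assuming addonapi_url from conf.json is the actual 'https://...' endpoint:

# Sketch: give the session a URL string, then inspect the returned Response.
postrequest = session.post(addonapi_url, data=data, headers=headers)
print(postrequest.status_code)  # e.g. 200, 400, ...
print(postrequest.text)         # the response body, not the Response repr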

0 votes

I also ran into this error, but in the context of a class I was writing. In my case, I had forgotten to include "self" as the first parameter of the method I was defining, so when it expected a URL it actually got an object.
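
For illustration, a minimal sketch of that mistake (a hypothetical class, not from the answer): without self, the instance itself lands in the url parameter, and requests raises MissingSchema when it stringifies the object:

import requests

class Fetcher(object):
    def fetch(url):  # BUG: missing `self`; calling Fetcher().fetch() puts the instance in `url`
        return requests.get(url)

class FixedFetcher(object):
    def fetch(self, url):  # `self` first, then the real URL
        return requests.get(url)

# Fetcher().fetch()                         # -> MissingSchema: Invalid URL '<__main__.Fetcher object ...>'
# FixedFetcher().fetch('http://xkcd.com/')  # works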
