requests.get() inside a loop: "No connection adapters were found"


So, I'm trying to scrape multiple pages through their JSON version. When I run the code for a single URL (the first part of the attached code), I get the desired output. However, when I try to do the same thing inside a for loop over multiple URLs, I get "No connection adapters were found" from requests, which makes no sense to me, since the same URLs work outside the for loop.

# Import package
import requests
from pandas import json_normalize
import pandas as pd

# Assign URL to variable: url
url = 'https://www.olx.com.gt/api/relevance/search?category=367&facet_limit=100&location=4168811&location_facet_limit=20&page=1&sorting=desc-creation&user=16c20011d0fx61aada41'

# Package the request, send the request and catch the response: r
r = requests.get(url)

# Decode the JSON data into a dictionary: json_data
json_data = r.json()

# Extract data from the Json file
json_data_2 = json_data['data']

#normalize json data into a dataframe
df = json_normalize(json_data_2)
df.head()

With this script, everything runs smoothly. Here is where I get the error:

%%time

n_paginas = 0

all_urls = pd.DataFrame()

for paginas in range(0,20):
    n_paginas += 1
    olx_url = 'https://www.olx.com.gt/api/relevance/search?category=367&facet_limit=100&location=4168811&location_facet_limit=20&page=%s&sorting=desc-creation&user=16c20011d0fx61aada41'
    start_urls = [olx_url % n_paginas]
    r = requests.get(start_urls)
    #json_data = r.json()
    #json_data_2 = json_data['data']
    #df = json_normalize(json_data_2)
    #all_urls.apped(df)

Here is the traceback:

---------------------------------------------------------------------------
InvalidSchema                             Traceback (most recent call last)
<timed exec> in <module>

~/anaconda3/lib/python3.7/site-packages/requests/api.py in get(url, params, **kwargs)
     74 
     75     kwargs.setdefault('allow_redirects', True)
---> 76     return request('get', url, params=params, **kwargs)
     77 
     78 

~/anaconda3/lib/python3.7/site-packages/requests/api.py in request(method, url, **kwargs)
     59     # cases, and look like a memory leak in others.
     60     with sessions.Session() as session:
---> 61         return session.request(method=method, url=url, **kwargs)
     62 
     63 

~/anaconda3/lib/python3.7/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
    528         }
    529         send_kwargs.update(settings)
--> 530         resp = self.send(prep, **send_kwargs)
    531 
    532         return resp

~/anaconda3/lib/python3.7/site-packages/requests/sessions.py in send(self, request, **kwargs)
    635 
    636         # Get the appropriate adapter to use
--> 637         adapter = self.get_adapter(url=request.url)
    638 
    639         # Start time (approximately) of the request

~/anaconda3/lib/python3.7/site-packages/requests/sessions.py in get_adapter(self, url)
    726 
    727         # Nothing matches :-/
--> 728         raise InvalidSchema("No connection adapters were found for {!r}".format(url))
    729 
    730     def close(self):

InvalidSchema: No connection adapters were found for "['https://www.olx.com.gt/api/relevance/search?category=367&facet_limit=100&location=4168811&location_facet_limit=20&page=1&sorting=desc-creation&user=16c20011d0fx61aada41']"

The new URLs built from the page number are generated correctly, and if I plug any one of them into the first script above, it works fine as well.

Any help would be greatly appreciated.

Thank you.

python loops python-requests web-crawler
1 Answer

You probably don't need the start_urls = [olx_url % n_paginas] part. requests.get() expects the URL as a string, but start_urls is a one-element list, so requests stringifies it and then cannot match the resulting "['https://...']" to any connection adapter; that is exactly the quoted, bracketed URL you can see in the traceback. Either way, the modified for loop below (shown after the sketch) gets results.
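To see why, here is a minimal sketch that reproduces the exception; httpbin.org is just a stand-in endpoint for illustration:

import requests

url = 'https://httpbin.org/get'  # stand-in endpoint, any valid URL works

# A plain string works: requests matches the https:// prefix to an adapter.
print(requests.get(url).status_code)

# A one-element list does not: requests stringifies it, and
# "['https://...']" starts with no recognized URL scheme.
try:
    requests.get([url])
except requests.exceptions.InvalidSchema as exc:
    print(exc)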

# Import package
import requests
from pandas import json_normalize
import pandas as pd

# Assign URL to variable: url
url = 'https://www.olx.com.gt/api/relevance/search?category=367&facet_limit=100&location=4168811&location_facet_limit=20&page=1&sorting=desc-creation&user=16c20011d0fx61aada41'

# Package the request, send the request and catch the response: r
r = requests.get(url)

# Decode the JSON data into a dictionary: json_data
json_data = r.json()

# Extract data from the Json file
json_data_2 = json_data['data']

#normalize json data into a dataframe
df = json_normalize(json_data_2)
df.head()

all_pages = []

for pagina in range(1, 21):
    olx_url = 'https://www.olx.com.gt/api/relevance/search?category=367&facet_limit=100&location=4168811&location_facet_limit=20&page={}&sorting=desc-creation&user=16c20011d0fx61aada41'.format(pagina)
    r = requests.get(olx_url)
    # json_normalize already returns a DataFrame, so no pd.DataFrame() wrapper is needed
    all_pages.append(json_normalize(r.json()['data']))

# Concatenate once at the end; DataFrame.append was removed in pandas 2.0
all_urls = pd.concat(all_pages, ignore_index=True)

all_urls.shape

(400, 60)
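As a side note, here is a slightly tidier sketch that passes the page number through the params argument of requests.get (assuming the endpoint accepts the same query parameters either way) and reuses one Session across the twenty requests:

import pandas as pd
import requests
from pandas import json_normalize

base_url = 'https://www.olx.com.gt/api/relevance/search'
common_params = {
    'category': 367,
    'facet_limit': 100,
    'location': 4168811,
    'location_facet_limit': 20,
    'sorting': 'desc-creation',
    'user': '16c20011d0fx61aada41',
}

pages = []
with requests.Session() as session:
    for pagina in range(1, 21):
        # requests URL-encodes the dict and appends it as the query string
        r = session.get(base_url, params={**common_params, 'page': pagina})
        r.raise_for_status()
        pages.append(json_normalize(r.json()['data']))

all_urls = pd.concat(pages, ignore_index=True)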