GITHUB link to the script
Problem description
Basically, I made a script that downloads manga images from https://mangadex.org.
The script technically works, but at the start of the second iteration through the loop it returns "Max retries exceeded"... which makes no sense to me: the URL is updated on every iteration and requested only once, so how can there be multiple retries when it is only called once?
The problem doesn't seem to be on the client side but on the server side, since the images download just fine on the first iteration, but it's weird...
Here are the steps the script takes:
Works fine, as expected
However, after the first full loop cycle (after downloading all the pages of the current chapter and looping on to the next chapter), I get an exception. This happens with different IP addresses and different titles every time I run the script, and it always fully downloads whichever first chapter I specify.
Beginning with the first loop, the error message below is returned at the line where Selenium loads the first chapter.
I have a NordVPN subscription, so I rerouted my IP several times, but I still get the same error.
Also, if the images have already been downloaded into the folders they belong in, the script simply skips the current chapter and starts downloading the next one, so I still get this error message even when NOTHING is actually downloaded.
Any ideas what might be causing this?
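One detail in the traceback below worth noting: the MaxRetryError targets host='127.0.0.1', port=51139, which is the local chromedriver port, not mangadex.org. That usually means the driver process or session has died between iterations. As a hedged sketch only, assuming a hypothetical make_driver() factory standing in for however the script builds its WebDriver, the call that fails could be guarded like this:

```python
# Sketch: keep one WebDriver for the whole run and rebuild it only when
# a call to driver.get() fails (e.g. because the local chromedriver
# process has gone away). make_driver() is a hypothetical factory, not
# part of the original script.

def get_with_recovery(driver, make_driver, url, attempts=2):
    """Call driver.get(url); on failure, rebuild the driver and retry."""
    for attempt in range(attempts):
        try:
            driver.get(url)
            return driver  # hand back the (possibly rebuilt) driver
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            try:
                driver.quit()  # best-effort cleanup of the dead session
            except Exception:
                pass
            driver = make_driver()  # fresh driver for the next attempt
    return driver
```

The caller should keep using the returned driver object, since a rebuild replaces the original one.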
Error
DevTools listening on ws://127.0.0.1:51146/devtools/browser/b6d08910-ea23-4279-b9d4-6492e6b865d0
Traceback (most recent call last):
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connection.py", line 159, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\util\connection.py", line 80, in create_connection
    raise err
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\util\connection.py", line 70, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 354, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 1229, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 1275, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 1224, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 1016, in _send_output
    self.send(msg)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 956, in send
    self.connect()
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connection.py", line 181, in connect
    conn = self._new_conn()
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connection.py", line 168, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x000002128FCDD518>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:/Programming/Python/Projects/Mangadex.downloader/main.py", line 154, in <module>
    driver.get(chapter_start_url)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 333, in get
    self.execute(Command.GET, {'url': url})
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 319, in execute
    response = self.command_executor.execute(driver_command, params)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\remote_connection.py", line 374, in execute
    return self._request(command_info[0], url, body=data)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\remote_connection.py", line 397, in _request
    resp = self._conn.request(method, url, body=body, headers=headers)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\request.py", line 72, in request
    **urlopen_kw)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\request.py", line 150, in request_encode_body
    return self.urlopen(method, url, **extra_kw)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\poolmanager.py", line 323, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 667, in urlopen
    **response_kw)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 667, in urlopen
    **response_kw)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 667, in urlopen
    **response_kw)
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 638, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "C:\Users\alexT\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\util\retry.py", line 398, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=51139): Max retries exceeded with url: /session/4f72fba8650ac3ead558cb25172b4b38/url (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000002128FCDD518>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Purpose
I'm creating a script that parses the manga titles out of an exported MyAnimeList XML list (it may also work for Anilist) and downloads every listed title that exists on https://mangadex.org.
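For illustration, pulling the titles out of the export can also be sketched with the stdlib XML parser instead of regex. The <manga>/<manga_title> element names here are an assumption about the export format, not something confirmed in this post:

```python
# Hedged sketch: extract manga titles from a MyAnimeList-style XML
# export. Assumes each entry is a <manga> element with a <manga_title>
# child; adjust the tag names if the real export differs.
import xml.etree.ElementTree as ET

def manga_titles(xml_text):
    root = ET.fromstring(xml_text)
    # collect the text of every <manga_title> under a <manga> entry
    return [m.findtext('manga_title') for m in root.iter('manga')]
```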
Modules I'm using: requests, re, Beautiful Soup, json, os, selenium, time, and urllib
requests - used to fetch the source code of the pages that contain the information I need
re - used with regular expressions to parse the ".xml" file containing the manga list exported from https://myanimelist.net, and to change the link of the current image to download inside a chapter. (The links always end in ".jpg" or ".png", with a number before the extension that is the number of the current page, and a random letter before that number.)
Beautiful Soup - used to parse the responses from requests: titles, title links, chapter titles, chapter links, etc...
json - used to store and parse the data from the parsed manga list to/from "index.json"
os - used to check whether files/directories exist.
selenium - used only inside chapters, since the reader uses JavaScript to load the images (which are what gets downloaded) and the number of pages in the current chapter (used as the basis for looping over the images, since they share the same title and the only thing that changes in the URL is the current page).
time - used only once, after Selenium loads a chapter page, to let the page load completely.
urllib - used to download the chapter images.
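The page-number swap described for re above can be sketched like this. The exact URL shape is an assumption based only on the description (digits right before the ".jpg"/".png" extension, a letter before the digits):

```python
# Hedged sketch of updating the current-page number in an image URL.
# Assumes the page number is the run of digits immediately before the
# .jpg/.png extension, as described in the module list above.
import re

def set_page(url, page):
    # replace the trailing digits before .jpg/.png with the new number
    return re.sub(r'\d+(\.(?:jpg|png))$', str(page) + r'\1', url)
```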
PS - MyAnimeList and Anilist are indexes of anime and manga series, where you keep lists of manga and anime series and can set a tag for each item on the list (whether you plan to read the manga or watch the anime, whether it's already finished, etc...).
I'm not sure if this is 100% relevant, but I ran into a similar error recently. The cause I found was that cookies couldn't be stored, so the site was essentially bouncing my request between two of its servers: one would try to assign my browser a cookie while the other expected that cookie, but my request wasn't sent with it, so it redirected me back to server 1. The code I found that fixed it was:
import requests

s = requests.session()
s.headers['User-Agent'] = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36'
I think you should copy/paste the lines above... I did :) Then fetch the URL:
import bs4

res = s.get(my_URL)
soup = bs4.BeautifulSoup(res.text, 'html.parser')
Using requests.session() like this allows the cookies to be saved and then sent on to the other internal servers and handled correctly.
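To make the point concrete, here is a small offline illustration of what the Session carries across calls. No real request is made; the cookie value is made up for the demo:

```python
# A requests.Session keeps one cookie jar and one set of default
# headers, so a cookie received from one response is automatically
# re-sent on later requests to the same domain.
import requests

s = requests.Session()
s.headers['User-Agent'] = ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) '
                           'AppleWebKit/537.36 (KHTML, like Gecko) '
                           'Chrome/34.0.1847.131 Safari/537.36')
# simulate a Set-Cookie from "server 1"; subsequent s.get() calls to
# that domain would send it back automatically
s.cookies.set('session_id', 'abc123', domain='mangadex.org')
```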