Getting "RuntimeError: Session is closed" from aiohttp.ClientSession.get(), even after creating a new context manager


I am writing a web crawler with aiohttp, and my program is crashing with a "RuntimeError: Session is closed" error.

The main loop completes its first iteration without a problem, fetching and processing every page in the URL queue. But when it enters fetch_pages() on the second iteration of the main loop and makes its first call to session.get() on the aiohttp.ClientSession, it throws "RuntimeError: Session is closed".

I don't understand why I'm getting this error, because as far as I can tell the code below should create a new aiohttp.ClientSession() context manager each time get_batch() is called, and close the session when the function call ends. But that isn't happening. Can someone explain why I'm getting this error?

I've posted the relevant parts of the code below (trimmed as much as I could, with a link to the full source included below).


Here is the main loop:

class Crawler():

    ((...))

    def __init__(self):
        self.loop = asyncio.get_event_loop()
        self.url_queue = URLQueue(maxsize=10000)        # urls are popped from URL queue
        self.page_queue = asyncio.PriorityQueue()       # when fetched, they are placed on page queue for html processing  

    ((...))

    async def fetch_pages(self):
        print("Entering fetch_page()")
        pages, errors = [], []
        if self.url_queue.empty():    
            await asyncio.sleep(1)

        else:
            await self.fetcher.get_batch(self.BATCH_SIZE, self.url_queue, self.page_queue, self.error_queue)

    ((...))

    async def process_html(self): ...
    async def analyze_content(self): ...
    async def extract_links(self): ...
    async def index_content(self): ...
    async def handle_errors(self): ...

    ((...))

    async def main(self):

        try:
            while True:
                tasks = [self.loop.create_task(self.fetch_pages()),
                         self.loop.create_task(self.process_html()),
                         self.loop.create_task(self.analyze_content()),
                         self.loop.create_task(self.index_content()),
                         self.loop.create_task(self.handle_errors())]

                await asyncio.gather(*tasks)

        except KeyboardInterrupt:
            print("shutting down")

        finally:
            print("Pretending to save the URL queue, etc ... ")   

t = Crawler()

if __name__ == "__main__":
    #asyncio.run(crawler.crawl(index), debug=True)
    t.loop.run_until_complete(t.main())

(Full code here) ...

Here is the code for the fetch loop:

class Fetcher():

    ((...))

    def __init__(self, domain_manager=None, http_headers = None, dns_cache_lifetime = 300, request_timeout = 30, 
                 connection_timeout = 5, max_connections = 20, max_connections_per_host = 5, obey_robots = False,
                 verify_ssl_certs = False):

        self.loop = asyncio.get_event_loop()

        self.domain_manager = domain_manager    # rate limit requests / robots.txt on per-domain basis

        self.timeout = aiohttp.ClientTimeout(total=request_timeout,
                                             connect=connection_timeout)

        # created once per Fetcher and handed to every ClientSession in get_batch()
        self.connector = aiohttp.TCPConnector(ttl_dns_cache=dns_cache_lifetime,
                                              limit=max_connections,
                                              limit_per_host=max_connections_per_host,
                                              ssl=verify_ssl_certs)


    async def fetch(self, url, session):
        try:
            async with session.get(url) as resp:                
                status = int(resp.status)
                headers = dict(resp.headers)        

                if self.check_response_headers(url, status, headers):

                    html = await resp.text()

                    return {'url': url,
                            'headers': headers,
                            'html': html,
                            'last_visit': datetime.now()}
                else:
                    raise FetchError(f"Fetch failed for url {url}: Header check failed (but why did we make it here?)",
                                     url=url, exception=None, fetch_stage="GET")

        except UnicodeDecodeError as e:
            ((...))


    def check_response_headers(self, url, status, headers):
        """Given a response from fetch(), return a (Page object, error object) pair"""

        ((...))


    async def fetch_with_dm(self, url, session, i):
        """fetches next url from queue until successfully fetches a page"""

        domain = self.domain_manager.domain_from_url(url)

        ((...))

        async with self.domain_manager.locks[domain]:

            ((...))

            fetch_result = await self.fetch(url, session)

            return fetch_result


    async def get_batch(self, batch_size, url_queue, page_queue, error_queue):
        start_time = datetime.now()

        async with aiohttp.ClientSession(timeout=self.timeout, connector=self.connector) as session:
            tasks = []
            for i in range(batch_size):
                url = None          
                score = None

                if url_queue.empty():
                    break

                else:
                    score, url = url_queue.get_nowait()  # should we be blocking here / await / sleeping if no urls in queue?

                    if url is None:
                        raise ValueError("Received empty URL")

                    if score is None:
                        raise ValueError("Received empty URL score")

                    tasks.append(self.loop.create_task(self.fetch_with_dm(url, session, i)))


            for p in asyncio.as_completed(tasks):
                try:
                    page = await p
                    page['url_score'] = score    # note: `score` holds whatever the last queue pop left behind, not necessarily this page's score
                    await page_queue.put((score, id(page), page))

                except FetchError as fe:
                    await error_queue.put(fe)

(Full code here)

... and the "Session is closed" error occurs again when fetch calls session.get(url), but only on the second iteration of the main loop ...
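
For reference, here is a condensed, self-contained sketch of the same pattern (connector created once in __init__, a fresh session per batch) that reproduces the error on the second batch; MiniFetcher and the URL are placeholders, not my real code:

import asyncio
import aiohttp

class MiniFetcher:
    def __init__(self):
        # created once and reused for every batch, like Fetcher.connector above
        self.connector = aiohttp.TCPConnector(limit=20)

    async def get_batch(self, urls):
        # a fresh session per batch, wrapping the shared connector
        async with aiohttp.ClientSession(connector=self.connector) as session:
            for url in urls:
                async with session.get(url) as resp:
                    print(url, resp.status)

async def main():
    f = MiniFetcher()
    await f.get_batch(["https://example.com"])  # first batch: works
    await f.get_batch(["https://example.com"])  # second batch: RuntimeError: Session is closed

asyncio.run(main())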

1 Answer

The following worked for me. I had to add the parameter connector_owner to the ClientSession and set it to False, per the documentation (https://docs.aiohttp.org/en/v3.8.1/client_advanced.html#connectors):

session = aiohttp.ClientSession(connector_owner=False)
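
Why this helps, as far as I can tell from the docs and the aiohttp source: Fetcher creates its TCPConnector once in __init__ and passes it to every ClientSession. By default the session owns the connector (connector_owner=True) and closes it when the async with block exits, so the second call to get_batch() wraps a session around an already-closed connector. ClientSession.closed is derived from the connector's state, which is why the first session.get() of the second batch raises "RuntimeError: Session is closed". Applied to the get_batch() code above, the change would look like this (a sketch, keeping the existing timeout and connector):

async with aiohttp.ClientSession(timeout=self.timeout,
                                 connector=self.connector,
                                 connector_owner=False) as session:
    ...

Since no session owns the connector anymore, close it yourself when the crawler shuts down, e.g. await self.connector.close() in the cleanup path.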