Reading streamed output from an asynchronous subprocess

Problem description

I'm trying to read URLs from a program running in a subprocess and then schedule asynchronous HTTP requests for them, but the requests appear to run synchronously. Is that because the subprocess and the requests run in the same coroutine function?

test.py

import random
import time

URLS = ['http://example.com', 'http://example.com/sleep5s']

def main():
    for url in random.choices(URLS, weights=(1, 1), k=5):
        print(url)
        time.sleep(random.uniform(0.5, 1))


if __name__ == '__main__':
    main()

main.py

import asyncio
import sys

import httpx

from httpx import TimeoutException


async def req(url):
    async with httpx.AsyncClient() as client:
        try:
            r = await client.get(url, timeout=2)
            print(f'Response {url}: {r.status_code}')
        except TimeoutException:
            print(f'TIMEOUT - {url}')
        except Exception as exc:
            print(f'ERROR - {url}: {exc!r}')


async def run():
    proc = await asyncio.create_subprocess_exec(
        sys.executable,
        '-u',
        'test.py',
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )

    while True:
        line = await proc.stdout.readline()
        if not line:
            break

        url = line.decode().rstrip()
        print(f'Found URL: {url}')

        await req(url)

    await proc.wait()


async def main():
    await run()


if __name__ == '__main__':
    asyncio.run(main())

Test

$ python main.py
Found URL: http://example.com
Response http://example.com: 200
Found URL: http://example.com/sleep5s
TIMEOUT - http://example.com/sleep5s
Found URL: http://example.com/sleep5s
TIMEOUT - http://example.com/sleep5s
Found URL: http://example.com
Response http://example.com: 200
Found URL: http://example.com/sleep5s
TIMEOUT - http://example.com/sleep5s
Tags: python, subprocess, python-asyncio
1 Answer

it looks like the requests are running synchronously. Is that because the subprocess and the requests run in the same coroutine function?

Your diagnosis is correct: await does what it says on the tin, meaning the coroutine does not proceed until the awaited call has a result for you, so each request finishes before the next line is read. Fortunately, asyncio makes it easy to run a coroutine in the background:

    tasks = []
    while True:
        line = await proc.stdout.readline()
        if not line:
            break

        url = line.decode().rstrip()
        print(f'Found URL: {url}')

        tasks.append(asyncio.create_task(req(url)))

    resps = await asyncio.gather(*tasks)
    await proc.wait()

Notes:

  • asyncio.create_task() ensures that the requests start being processed even while we are still reading lines.
  • asyncio.gather() ensures that all tasks are actually awaited before the coroutine finishes. It also provides access to the responses and propagates exceptions, if any (see the sketch below).
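For completeness, here is a minimal end-to-end sketch of main.py with the pieces combined. It assumes req() is changed to return the response object (in the original it returns None), so that the list returned by asyncio.gather() actually contains the responses:

import asyncio
import sys

import httpx
from httpx import TimeoutException


async def req(url):
    async with httpx.AsyncClient() as client:
        try:
            r = await client.get(url, timeout=2)
            print(f'Response {url}: {r.status_code}')
            return r  # returning the response lets gather() collect it
        except TimeoutException:
            print(f'TIMEOUT - {url}')
        except Exception as exc:
            print(f'ERROR - {url}: {exc!r}')


async def run():
    proc = await asyncio.create_subprocess_exec(
        sys.executable,
        '-u',
        'test.py',
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )

    tasks = []
    while True:
        line = await proc.stdout.readline()
        if not line:
            break

        url = line.decode().rstrip()
        print(f'Found URL: {url}')
        # schedule the request in the background instead of awaiting it here
        tasks.append(asyncio.create_task(req(url)))

    # wait for all in-flight requests; results come back in scheduling order
    resps = await asyncio.gather(*tasks)
    await proc.wait()
    return resps


if __name__ == '__main__':
    asyncio.run(run())

With this version, the fast requests to http://example.com can complete while the slow /sleep5s requests are still pending, instead of each request blocking the next readline().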