I'm currently launching a program using:

subprocess.Popen(cmd, shell=True)

I'm fairly new to Python, but it feels like there should be some API that lets me do something like:

subprocess.Popen(cmd, shell=True, postexec_fn=function_to_call_on_exit)

I'm doing this so that function_to_call_on_exit can do something based on knowing that cmd has exited (for example, keeping a count of the number of external processes currently running).

I assume I could fairly trivially wrap subprocess in a class that combines threading with the Popen.wait() method, but as I've not done threading in Python yet and it seems like this might be common enough for an API to exist, I thought I'd try to find one first.

Thanks in advance :)
You're right - there is no nice API for this. You're also right on your second point - it's trivially easy to design a function that does this for you using threading.
import threading
import subprocess

def popen_and_call(on_exit, popen_args):
    """
    Runs the given args in a subprocess.Popen, and then calls the function
    on_exit when the subprocess completes.

    on_exit is a callable object, and popen_args is a list/tuple of args
    that you would give to subprocess.Popen.
    """
    def run_in_thread(on_exit, popen_args):
        proc = subprocess.Popen(*popen_args)
        proc.wait()
        on_exit()
        return

    thread = threading.Thread(target=run_in_thread, args=(on_exit, popen_args))
    thread.start()
    # returns immediately after the thread starts
    return thread
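As a quick sanity check, here is a hypothetical usage sketch: the command and the on_exit callable are placeholders, and a threading.Event stands in for the process counter mentioned in the question.

```python
import subprocess
import sys
import threading

# compact copy of popen_and_call from above, so this snippet runs on its own
def popen_and_call(on_exit, popen_args):
    def run_in_thread(on_exit, popen_args):
        proc = subprocess.Popen(*popen_args)
        proc.wait()
        on_exit()
    thread = threading.Thread(target=run_in_thread, args=(on_exit, popen_args))
    thread.start()
    return thread

done = threading.Event()
# popen_args mirrors the argument tuple you would hand to subprocess.Popen
t = popen_and_call(done.set, ([sys.executable, "-c", "pass"],))
t.join()  # join() here only so the demo can assert; normally you would not block
assert done.is_set()
```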
Even threading is pretty easy in Python, but be aware that if on_exit() is computationally expensive, you'll want to put it in a separate process instead, using multiprocessing (so that the GIL doesn't slow your program down). It's actually very simple - you can basically just replace all calls to threading.Thread with multiprocessing.Process, since they follow (almost) the same API.
There is the concurrent.futures module in Python 3.2 (for older Python < 3.2, it is available via pip install futures):

pool = Pool(max_workers=1)
f = pool.submit(subprocess.call, "sleep 2; echo done", shell=True)
f.add_done_callback(callback)

The callback will be called in the same process that called f.add_done_callback().

Full program:
import logging
import subprocess
# to install, run `pip install futures` on Python < 3.2
from concurrent.futures import ThreadPoolExecutor as Pool

info = logging.getLogger(__name__).info

def callback(future):
    if future.exception() is not None:
        info("got exception: %s" % future.exception())
    else:
        info("process returned %d" % future.result())

def main():
    logging.basicConfig(
        level=logging.INFO,
        format=("%(relativeCreated)04d %(process)05d %(threadName)-10s "
                "%(levelname)-5s %(message)s"))

    # wait for the process completion asynchronously
    info("begin waiting")
    pool = Pool(max_workers=1)
    f = pool.submit(subprocess.call, "sleep 2; echo done", shell=True)
    f.add_done_callback(callback)
    pool.shutdown(wait=False)  # no .submit() calls after that point
    info("continue waiting asynchronously")

if __name__ == "__main__":
    main()
Output:

$ python . && python3 .
0013 05382 MainThread INFO begin waiting
0021 05382 MainThread INFO continue waiting asynchronously
done
2025 05382 Thread-1 INFO process returned 0
0007 05402 MainThread INFO begin waiting
0014 05402 MainThread INFO continue waiting asynchronously
done
2018 05402 Thread-1 INFO process returned 0

I modified Daniel G's answer to simply pass the subprocess.Popen args and kwargs as themselves, instead of as a separate tuple/list, since I wanted to use keyword arguments with subprocess.Popen.
In my case, I had a method postExec() that I wanted to run after subprocess.Popen('exe', cwd=WORKING_DIR).

With the code below, it simply becomes popenAndCall(postExec, 'exe', cwd=WORKING_DIR).
import threading
import subprocess

def popenAndCall(onExit, *popenArgs, **popenKWArgs):
    """
    Runs a subprocess.Popen, and then calls the function onExit when the
    subprocess completes.

    Use it exactly the way you'd normally use subprocess.Popen, except include
    a callable to execute as the first argument. onExit is a callable object,
    and *popenArgs and **popenKWArgs are simply passed up to subprocess.Popen.
    """
    def runInThread(onExit, popenArgs, popenKWArgs):
        proc = subprocess.Popen(*popenArgs, **popenKWArgs)
        proc.wait()
        onExit()
        return

    thread = threading.Thread(target=runInThread,
                              args=(onExit, popenArgs, popenKWArgs))
    thread.start()
    return thread  # returns immediately after the thread starts
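For example, keyword arguments flow straight through to Popen. In this hypothetical sketch, a trivial interpreter call stands in for 'exe', and tempfile.gettempdir() stands in for WORKING_DIR.

```python
import subprocess
import sys
import tempfile
import threading

# compact copy of popenAndCall from above, so this snippet runs on its own
def popenAndCall(onExit, *popenArgs, **popenKWArgs):
    def runInThread():
        proc = subprocess.Popen(*popenArgs, **popenKWArgs)
        proc.wait()
        onExit()
    thread = threading.Thread(target=runInThread)
    thread.start()
    return thread

finished = threading.Event()
# keyword arguments such as cwd are passed straight through to subprocess.Popen
t = popenAndCall(finished.set, [sys.executable, "-c", "pass"],
                 cwd=tempfile.gettempdir())
t.join()  # demo only; the call itself returns immediately
assert finished.is_set()
```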
I had the same problem, and solved it using multiprocessing.Pool. There are two small tricks involved, both visible in the code below:

1. make the size of the pool 1
2. pass the arguments within an iterable of length 1

The result is one function executed, with a callback on completion.

import multiprocessing

def sub(arg):
    print(arg)            # prints [1,2,3,4,5]
    return "hello"

def cb(arg):
    print(arg)            # prints "hello"

pool = multiprocessing.Pool(1)
rval = pool.map_async(sub, ([[1,2,3,4,5]]), callback=cb)
# (do stuff)
pool.close()

In my case, I wanted the invocation to be non-blocking as well. It works beautifully.

I was inspired by Daniel G's answer and implemented a very simple use case - in my work I often need to make repeated calls to the same (external) process with different arguments. I had hacked together a way to determine when each particular call was done, but now I have a much cleaner way to issue callbacks.

I like this implementation because it is very simple, yet it allows me to issue asynchronous calls to multiple processors (note that I use multiprocessing instead of threading) and to receive notification upon completion.

I tested the sample program and it works great. Please feel free to edit it and/or provide feedback.
import multiprocessing
import subprocess

class Process(object):
    """This class spawns a subprocess asynchronously and calls a
    `callback` upon completion; it is not meant to be instantiated
    directly (derived classes are called instead)"""
    def __call__(self, *args):
        # store the arguments for later retrieval
        self.args = args
        # define the target function to be called by
        # `multiprocessing.Process`
        def target():
            cmd = [self.command] + [str(arg) for arg in self.args]
            process = subprocess.Popen(cmd)
            # the `multiprocessing.Process` process will wait until
            # the call to the `subprocess.Popen` object is completed
            process.wait()
            # upon completion, call `callback`
            return self.callback()
        mp_process = multiprocessing.Process(target=target)
        # this call issues the call to `target`, but returns immediately
        mp_process.start()
        return mp_process

if __name__ == "__main__":

    def squeal(who):
        """this serves as the callback function; its argument is the
        instance of a subclass of Process making the call"""
        print("finished %s calling %s with arguments %s" % (
            who.__class__.__name__, who.command, who.args))

    class Sleeper(Process):
        """Sample implementation of an asynchronous process - define
        the command name (available in the system path) and a callback
        function (previously defined)"""
        command = "./sleeper"
        callback = squeal

    # create an instance of Sleeper - this is the Process object that
    # can be called repeatedly in an asynchronous manner
    sleeper_run = Sleeper()

    # spawn three sleeper runs with different arguments
    sleeper_run(5)
    sleeper_run(2)
    sleeper_run(1)

    # the user should see the following message immediately (even
    # though the Sleeper calls are not done yet)
    print("program continued")
Sample output:

program continued
finished Sleeper calling ./sleeper with arguments (1,)
finished Sleeper calling ./sleeper with arguments (2,)
finished Sleeper calling ./sleeper with arguments (5,)

Below is the source code of sleeper.c - my sample "time-consuming" external process:
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    unsigned int t = atoi(argv[1]);
    sleep(t);
    return EXIT_SUCCESS;
}
Compile it as:

gcc -o sleeper sleeper.c

AFAIK there is no such API, at least not in the subprocess module. You need to roll something on your own, possibly using threads.