I am trying to test Python multiprocessing inside a Docker container, but even though the processes are created successfully (I have 8 CPUs and 8 processes are spawned), they always occupy only one physical CPU. Here is my code:
from sklearn.externals.joblib.parallel import Parallel, delayed
import multiprocessing
import pandas
import numpy
from scipy.stats import linregress
import random
import logging

def applyParallel(dfGrouped, func):
    retLst = Parallel(n_jobs=multiprocessing.cpu_count())(delayed(func)(group) for name, group in dfGrouped)
    return pandas.concat(retLst)

def compute_regression(df):
    result = {}
    (slope, intercept, rvalue, pvalue, stderr) = linregress(df.date, df.value)
    result["slope"] = [slope]
    result["intercept"] = [intercept]
    return pandas.DataFrame(result)

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    logging.info("start")
    random_list = []
    for i in range(1, 10000):
        for j in range(1, 100):
            random_list.append({"id": i, "date": j, "value": random.random()})
    df = pandas.DataFrame(random_list)
    df = applyParallel(df.groupby('id'), compute_regression)
    logging.info("end")
I tried several Docker options at startup, such as --cpus or --cpuset, but it always uses only 1 physical CPU. Is this a problem with Docker, Python, or the operating system? The Docker version is 1.13.1.
The result of cpu_count():
>>> import multiprocessing
>>> multiprocessing.cpu_count()
8
multiprocessing.cpu_count() gives 2 on my machine without the --cpu option.
For more information on Docker container resources, see https://docs.docker.com/engine/admin/resource_constraints/#cpu
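Note that multiprocessing.cpu_count() reports the host's CPU count and does not honor cpuset restrictions such as --cpuset-cpus. On Linux, os.sched_getaffinity(0) returns the set of CPUs the current process is actually allowed to run on, so comparing the two can reveal whether the container is pinned to a subset of cores. A small sketch:

```python
import multiprocessing
import os

# cpu_count() reports all CPUs on the host, even inside a container
# restricted with --cpuset-cpus.
print("cpu_count:", multiprocessing.cpu_count())

# sched_getaffinity(0) (Linux-only) returns the set of CPUs this process
# may actually be scheduled on, which does honor cpusets.
usable = os.sched_getaffinity(0)
print("usable CPUs:", len(usable), sorted(usable))
```

If the usable set is smaller than cpu_count(), spawning cpu_count() workers will oversubscribe the few cores the container is actually allowed to use.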