Implementing concurrency/parallelism on Windows with Python

Problem description (votes: 3, answers: 2)

I developed a simple program to solve the eight queens puzzle. Now I want to do more testing with different meta-parameters, so I would like it to run fast. I went through a few profiling iterations and was able to cut the running time significantly, but I have reached the point where I believe only running parts of the computation concurrently could make it faster. I tried the multiprocessing and concurrent.futures modules, but neither improved the running time, and in some cases they even slowed execution down. That is just to give some context.

I was able to boil it down to a similar code structure in which the sequential version beats the concurrent one:

import numpy as np
import concurrent.futures
import math
import time
import multiprocessing

def is_prime(n):
    # Trial division up to sqrt(n); assumes n > 2, which holds for the generated data.
    if n % 2 == 0:
        return False

    sqrt_n = int(math.floor(math.sqrt(n)))
    for i in range(3, sqrt_n + 1, 2):
        if n % i == 0:
            return False
    return True

def generate_data(seed):
    # 5000 random integers in [50000, 100000); the seed keeps each trial reproducible.
    np.random.seed(seed)
    numbers = []
    for _ in range(5000):
        nbr = np.random.randint(50000, 100000)
        numbers.append(nbr)
    return numbers

def run_test_concurrent(numbers):
    print("Concurrent test")
    start_tm = time.time()
    chunk = len(numbers)//3
    primes = None
    with concurrent.futures.ProcessPoolExecutor(max_workers=3) as pool:
        primes = list(pool.map(is_prime, numbers, chunksize=chunk))
    print("Time: {:.6f}".format(time.time() - start_tm))
    print("Number of primes: {}\n".format(np.sum(primes)))


def run_test_sequential(numbers):
    print("Sequential test")
    start_tm = time.time()
    primes = [is_prime(nbr) for nbr in numbers]
    print("Time: {:.6f}".format(time.time() - start_tm))
    print("Number of primes: {}\n".format(np.sum(primes)))


def run_test_multiprocessing(numbers):
    print("Multiprocessing test")
    start_tm = time.time()
    chunk = len(numbers)//3
    primes = None
    with multiprocessing.Pool(processes=3) as pool:
        primes = list(pool.map(is_prime, numbers, chunksize=chunk))
    print("Time: {:.6f}".format(time.time() - start_tm))
    print("Number of primes: {}\n".format(np.sum(primes)))


def main():
    nbr_trials = 5
    for trial in range(nbr_trials):
        numbers = generate_data(trial*10)
        run_test_concurrent(numbers)
        run_test_sequential(numbers)
        run_test_multiprocessing(numbers)
        print("--\n")


if __name__ == '__main__':
    main()

When I run it on my machine (Windows 7, Intel Core i5 with four cores), I get the following output:

Concurrent test
Time: 2.006006
Number of primes: 431

Sequential test
Time: 0.010000
Number of primes: 431

Multiprocessing test
Time: 1.412003
Number of primes: 431
--

Concurrent test
Time: 1.302003
Number of primes: 447

Sequential test
Time: 0.010000
Number of primes: 447

Multiprocessing test
Time: 1.252003
Number of primes: 447
--

Concurrent test
Time: 1.280002
Number of primes: 446

Sequential test
Time: 0.010000
Number of primes: 446

Multiprocessing test
Time: 1.250002
Number of primes: 446
--

Concurrent test
Time: 1.260002
Number of primes: 446

Sequential test
Time: 0.010000
Number of primes: 446

Multiprocessing test
Time: 1.250002
Number of primes: 446
--

Concurrent test
Time: 1.282003
Number of primes: 473

Sequential test
Time: 0.010000
Number of primes: 473

Multiprocessing test
Time: 1.260002
Number of primes: 473
--

My question is whether I can make this faster by running it concurrently on Windows with Python 3.6.4 |Anaconda, Inc.|. I read here on SO (Why is creating a new process more expensive on Windows than Linux?) that creating new processes on Windows is expensive. Is there anything I can do to speed things up? Am I missing something obvious?
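
A rough way to see how much of the measured time is just process start-up would be something like this (noop here is only a placeholder task for the measurement, not part of the actual program):

import time
import multiprocessing

def noop(x):
    # The map does essentially no work, so the measured time is dominated
    # by spawning and tearing down the worker processes.
    return x

if __name__ == '__main__':
    start_tm = time.time()
    with multiprocessing.Pool(processes=3) as pool:
        pool.map(noop, range(3))
    print("Pool start-up + trivial map: {:.6f}".format(time.time() - start_tm))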

I also tried creating the Pool only once, but it did not seem to help much.


Edit:

My original code structure looks more or less like this:

class Foo(object):

    def g(self) -> int:
        # function performing simple calculations
        # single function call is fast (~500 ms)
        pass


def run(self):
    nbr_processes = multiprocessing.cpu_count() - 1

    with multiprocessing.Pool(processes=nbr_processes) as pool:
        foos = get_initial_foos()

        solution_found = False
        while not solution_found:
            # one iteration
            chunk = len(foos)//nbr_processes
            vals = list(pool.map(Foo.g, foos, chunksize=chunk))

            foos = modify_foos()

foos has 1000 elements. It is impossible to tell in advance how quickly the algorithm converges and how many iterations it will run; it can be thousands.

python windows multiprocessing
2 Answers
0 votes

Your setup is not fair to multiprocessing. You even include the unnecessary primes = None assignment. ;)

A few points:


Data size

The data you generate is far too small for the overhead of process creation to pay off. Try range(1_000_000) instead of range(5000). On Linux, with the multiprocessing start method set to "spawn" (the default on Windows), this draws a different picture:

Concurrent test
Time: 0.957883
Number of primes: 89479

Sequential test
Time: 1.235785
Number of primes: 89479

Multiprocessing test
Time: 0.714775
Number of primes: 89479
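
Concretely, the two changes amount to something like this sketch, dropped into the question's script (the larger count and the explicit start method are the only differences):

import multiprocessing
import numpy as np

def generate_data(seed, count=1_000_000):
    # Same generator as in the question, just with a much larger sample so the
    # per-worker chunks are big enough to amortize process start-up.
    np.random.seed(seed)
    return [np.random.randint(50000, 100000) for _ in range(count)]

if __name__ == '__main__':
    # Force the 'spawn' start method on Linux/macOS too, i.e. the behaviour
    # Windows always uses; the numbers above were measured this way.
    multiprocessing.set_start_method("spawn")
    main()  # the benchmark loop from the question, unchanged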

Reuse your pool

Don't leave the pool's with-block as long as there is code left in your program that you want to parallelize later. If you create the pool only once at the start, there is little point in including pool creation in the benchmark at all.
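
Applied to the benchmark from the question, that looks roughly like this (a sketch reusing the question's is_prime and generate_data; only the pooled variant is shown, and only the map itself is timed):

def main():
    nbr_trials = 5
    # The pool is created once, outside the timed section, and reused for
    # every trial instead of being rebuilt each time.
    with multiprocessing.Pool(processes=3) as pool:
        for trial in range(nbr_trials):
            numbers = generate_data(trial * 10)
            chunk = len(numbers) // 3
            start_tm = time.time()
            primes = pool.map(is_prime, numbers, chunksize=chunk)
            print("Time: {:.6f}".format(time.time() - start_tm))
            print("Number of primes: {}\n".format(np.sum(primes)))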


NumPy

NumPy is able to release the global interpreter lock (GIL) for parts of its work. That means you can benefit from multi-core parallelism without the overhead of process creation. If you are doing math anyway, try to use NumPy as much as possible. Try concurrent.futures.ThreadPoolExecutor together with a NumPy-based version of is_prime.
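
A rough sketch of that idea, batching the numbers into NumPy arrays so that most of the time is spent inside NumPy (count_primes_np and the worker count are illustrative choices, not code from the original post):

import numpy as np
import concurrent.futures

def count_primes_np(batch):
    # Vectorized trial division over a whole batch; assumes every value is > 2,
    # which holds for the question's data (50000..100000).
    batch = np.asarray(batch)
    limit = int(np.sqrt(batch.max())) + 1
    prime = batch % 2 != 0
    for d in range(3, limit, 2):
        prime &= (batch % d != 0) | (batch == d)
    return int(prime.sum())

def count_primes_threaded(numbers, workers=3):
    # NumPy releases the GIL inside its element-wise loops, so the batches can
    # run on several cores without the cost of spawning processes.
    batches = np.array_split(np.asarray(numbers), workers)
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes_np, batches))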


0 votes

Processes are much more lightweight under UNIX variants. Windows processes are heavy and need much more time to start up. Threads are the recommended way of doing multiprocessing on Windows. You can also look at multiprocessing.dummy.Pool.
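
multiprocessing.dummy exposes the same Pool API backed by threads, so it is nearly a drop-in replacement in the question's code. A sketch (using the question's is_prime; note that for pure-Python CPU-bound work the GIL still serializes the threads, so this mainly pays off when the work releases the GIL, as NumPy or I/O does):

import multiprocessing.dummy

def run_test_thread_pool(numbers):
    # Same interface as multiprocessing.Pool, but the workers are threads,
    # so there is no process start-up cost on Windows.
    chunk = len(numbers) // 3
    with multiprocessing.dummy.Pool(processes=3) as pool:
        return pool.map(is_prime, numbers, chunksize=chunk)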
