How do I use Parallel/delayed so that the parallelized for loop stops once the output drops below a threshold?


Suppose I have the following code:

from scipy import *
import multiprocessing as mp
num_cores = mp.cpu_count()
from joblib import Parallel, delayed
import matplotlib.pyplot as plt

def func(x,y):
    return y/x
def main(y, xmin,xmax, dx):
    x = arange(xmin,xmax,dx)
    output = Parallel(n_jobs=num_cores)(delayed(func)(i, y) for i in x)
    return x, asarray(output)
def demo():
    x,z = main(2.,1.,30.,.1)
    plt.plot(x,z, label='All values')
    plt.plot(x[z>.1],z[z>.1], label='desired range') ## This is better to do in main()
    plt.show()

demo()

I only want to calculate the output until output > a given number (one can assume the elements of output decrease monotonically as x increases) and then stop, rather than computing all values of x and sorting them afterwards, which is inefficient for my purpose. Is there any way to do this with Parallel, delayed, or any other multiprocessing tool?


You did not specify exactly what output > a given number should be, so I made one up. After testing, I had to reverse the condition for it to behave properly: output < a given number.

I would use a Pool and launch the tasks with a callback function that checks the stop condition, then terminate the pool once it is met. However, this creates a race condition: results from tasks that are still running when the pool is terminated can be lost. This approach requires very few changes to your code and is easy to read, but there is no guarantee of the order of the results list.

Pros: very little overhead
Cons: results may be missed.

Method 1)

from scipy import *
import multiprocessing

import matplotlib.pyplot as plt


def stop_condition_callback(ret):
    output.append(ret)
    if ret < stop_condition:
        worker_pool.terminate()


def func(x, y, ):
    return y / x


def main(y, xmin, xmax, dx):
    x = arange(xmin, xmax, dx)
    print("Number of calculations: %d" % (len(x)))

    # add calculations to the pool
    for i in x:
        worker_pool.apply_async(func, (i, y,), callback=stop_condition_callback)

    # wait for the pool to finish/terminate
    worker_pool.close()
    worker_pool.join()

    print("Number of results: %d" % (len(output)))
    return x, asarray(output)


def demo():
    x, z_list = main(2., 1., 30., .1)
    plt.plot(z_list, label='desired range')
    plt.show()


output = []
stop_condition = 0.1

worker_pool = multiprocessing.Pool()
demo()
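Because the callback appends results in whatever order the workers finish, the output list is not aligned with x. A small variation on Method 1 (a sketch, not part of the original answer): have func return the pair (x, y/x) and sort the collected pairs by x once the pool has joined. It reuses the module-level output, stop_condition and worker_pool from above.

def func(x, y):
    # return the input along with the result so the pairs can be re-ordered later
    return x, y / x


def stop_condition_callback(ret):
    # ret is now an (x, y/x) pair
    output.append(ret)
    if ret[1] < stop_condition:
        worker_pool.terminate()


def main(y, xmin, xmax, dx):
    x = arange(xmin, xmax, dx)
    for i in x:
        worker_pool.apply_async(func, (i, y,), callback=stop_condition_callback)
    worker_pool.close()
    worker_pool.join()
    # sort by x so the returned arrays are in the expected order
    pairs = sorted(output)
    return asarray([p[0] for p in pairs]), asarray([p[1] for p in pairs])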

This method has more overhead, but allows tasks that have already started to finish.

Method 2)

from scipy import *
import multiprocessing

import matplotlib.pyplot as plt


def stop_condition_callback(ret):
    if ret is not None:
        if ret < stop_condition:
            worker_stop.value = 1
        else:
            output.append(ret)


def func(x, y, ):
    if worker_stop.value != 0:
        return None
    return y / x


def main(y, xmin, xmax, dx):
    x = arange(xmin, xmax, dx)
    print("Number of calculations: %d" % (len(x)))

    # add calculations to the pool
    for i in x:
        worker_pool.apply_async(func, (i, y,), callback=stop_condition_callback)

    # wait for the pool to finish/terminate
    worker_pool.close()
    worker_pool.join()

    print("Number of results: %d" % (len(output)))
    return x, asarray(output)


def demo():
    x, z_list = main(2., 1., 30., .1)
    plt.plot(z_list, label='desired range')
    plt.show()


output = []
worker_stop = multiprocessing.Value('i', 0)
stop_condition = 0.1

worker_pool = multiprocessing.Pool()
demo()
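Note that Method 2 relies on the worker processes inheriting the module-level worker_stop, which works with the default fork start method on Linux. A hedged sketch, in case you need the spawn start method (e.g. on Windows): pass the shared Value to the workers through the Pool initializer instead.

def init_worker(stop_flag):
    # make the shared stop flag available as a global inside each worker process
    global worker_stop
    worker_stop = stop_flag


worker_stop = multiprocessing.Value('i', 0)
worker_pool = multiprocessing.Pool(initializer=init_worker, initargs=(worker_stop,))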

Method 3)
Pros: no results are missed.
Cons: it steps outside what you would normally do.

Take Method 1 and add:

def stopPoolButLetRunningTaskFinish(pool):
    # Stop new tasks from being started by emptying the queue that all worker processes draw from
    while pool._task_handler.is_alive() and pool._inqueue._reader.poll():
        pool._inqueue._reader.recv()
    # Send a sentinel to each worker process so it exits once its current task is done
    for a in range(len(pool._pool)):
        pool._inqueue.put(None)

Then change stop_condition_callback to:

def stop_condition_callback(ret):
    if ret < stop_condition:
        #worker_pool.terminate()
        stopPoolButLetRunningTaskFinish(worker_pool)
    else:
        output.append(ret)
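For context (this is not spelled out in the code above): the sentinel trick works because each worker in multiprocessing.Pool runs a loop that exits when it reads None from the shared task queue, which is the same mechanism Pool.close() and join() ultimately use to shut workers down. The while loop simply drains any tasks that have not started yet, so running tasks finish and their results are still delivered to the callback, while queued tasks are discarded.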