Python multiprocessing blocks indefinitely in waiter.acquire()

Por*_*use 6 python concurrency freeze multiprocessing

Can someone explain why this code blocks and never finishes?

I followed several multiprocessing examples and wrote some very similar code that does not block. But apparently I cannot see any difference between that working code and the code below; as far as I can tell it is all fine. It gets all the way to the .get(), but none of the processes ever finish.

The problem is that python3 blocks indefinitely in waiter.acquire(), which you can tell by interrupting it and reading the traceback.

$ python3 ./try415.py
^CTraceback (most recent call last):
  File "./try415.py", line 43, in <module>
    ps = [ res.get() for res in proclist ]
  File "./try415.py", line 43, in <listcomp>
    ps = [ res.get() for res in proclist ]
  File "/usr/lib64/python3.6/multiprocessing/pool.py", line 638, in get
    self.wait(timeout)
  File "/usr/lib64/python3.6/multiprocessing/pool.py", line 635, in wait
    self._event.wait(timeout)
  File "/usr/lib64/python3.6/threading.py", line 551, in wait
    signaled = self._cond.wait(timeout)
  File "/usr/lib64/python3.6/threading.py", line 295, in wait
    waiter.acquire()
KeyboardInterrupt

Here is the code:

from multiprocessing import Pool
from scipy import optimize
import numpy as np

def func(t, a, b, c):
    return 0.5*a*t**2 + b*t + c

def funcwrap(t, params):
    return func(t, *params)

def fitWithErr(procid, yFitValues, simga, func, p0, args, bounds):
    np.random.seed() # force new seed
    randomDelta = np.random.normal(0., sigma, len(yFitValues))
    randomdataY = yFitValues + randomDelta
    errfunc = lambda p, x, y: func(p, x) -y
    optResult = optimize.least_squares(errfunc, p0, args=args, bounds=bounds)
    return optResult.x

def fit_bootstrap(function, datax, datay, p0, bounds, aprioriUnc):
    errfunc = lambda p, x, y: function(x,p) - y
    optResult = optimize.least_squares(errfunc, x0=p0, args=(datax, datay), bounds=bounds)
    pfit = optResult.x
    residuals = optResult.fun
    fity = function(datax, pfit)

    numParallelProcesses = 2**2 # should be equal to number of ALUs
    numTrials = 2**2 # this many random data sets are generated and fitted
    trialParameterList = list()
    for i in range(0,numTrials):
        trialParameterList.append( [i, fity, aprioriUnc, function, p0, (datax, datay), bounds] )

    with Pool(processes=numParallelProcesses) as pool:
        proclist = [ pool.apply_async(fitWithErr, args) for args in trialParameterList ]

    ps = [ res.get() for res in proclist ]
    ps = np.array(ps)
    mean_pfit = np.mean(ps,0)

    return mean_pfit

if __name__ == '__main__':
    x = np.linspace(0,3,2000)
    p0 = [-9.81, 1., 0.]
    y = funcwrap(x, p0)
    bounds = [ (-20,-1., -1E-6),(20,3,1E-6) ]
    fit_bootstrap(funcwrap, x, y, p0, bounds=bounds, aprioriUnc=0.1)

Por*_*use 2

Indentation

After all, it was just that I had not realized some code was not inside the with clause where it should have been. (Plus a few typos and other errors, which I have since fixed.) Interruptions strike again!

Thanks to Snowy for making me go through it a different way until I found my mistake; I simply had not been clear about what I was trying to do. Snowy's code is perfectly valid and equivalent code. For the record, though, the timeout is not necessary. And, more importantly, with is perfectly valid for Process if you use it correctly, as shown in the very first paragraph of the Python 3.6.6 multiprocessing documentation, which is where I got it from. Somehow I just messed things up. The code I was trying to write was simply:

with Pool(processes=numParallelProcesses) as pool:
    proclist = [ pool.apply_async(fitWithErr, args) for args in trialParameterList ]

    ps = [ res.get() for res in proclist ]
    ps = np.array(ps)
    mean_pfit = np.mean(ps,0)

It works just as I expected.
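The same pattern can be reduced to a minimal, self-contained sketch. Here a trivial square function stands in for the scipy fitting worker (the function name and the toy inputs are illustrative assumptions, not part of the original code):

```python
from multiprocessing import Pool

def square(x):
    # placeholder for the real fitWithErr worker
    return x * x

if __name__ == '__main__':
    with Pool(processes=2) as pool:
        # both the submission AND the result collection stay inside
        # the with block, so the pool is still alive when .get() runs
        results = [pool.apply_async(square, (i,)) for i in range(4)]
        values = [res.get() for res in results]
    print(values)  # [0, 1, 4, 9]
```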

  • But why does that code section have to be enclosed in the with clause? (2 upvotes)
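Because leaving the with block calls pool.terminate(), which stops the workers immediately without waiting for outstanding tasks, so a later res.get() waits forever on results that will never arrive. If you do want to collect results after the block, the explicit close()/join() sequence works instead, since join() only returns once every pending task has finished. A sketch (square is again a placeholder worker, an assumption for illustration):

```python
from multiprocessing import Pool

def square(x):
    # placeholder worker
    return x * x

if __name__ == '__main__':
    pool = Pool(processes=2)
    results = [pool.apply_async(square, (i,)) for i in range(4)]
    pool.close()  # no more tasks will be submitted
    pool.join()   # wait until the workers have finished every pending task
    # safe: by the time join() returns, all results have been delivered
    values = [res.get() for res in results]
    print(values)  # [0, 1, 4, 9]
```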