Jus*_*tin 30 python multiprocessing
I have been reading through various tutorials on the Python multiprocessing module, and I can't understand why/when process.join() should be called. For example, I stumbled across this example:
nums = range(100000)
nprocs = 4

def worker(nums, out_q):
    """ The worker function, invoked in a process. 'nums' is a
    list of numbers to factor. The results are placed in
    a dictionary that's pushed to a queue.
    """
    outdict = {}
    for n in nums:
        outdict[n] = factorize_naive(n)
    out_q.put(outdict)

# Each process will get 'chunksize' nums and a queue to put its out
# dict into
out_q = Queue()
chunksize = int(math.ceil(len(nums) / float(nprocs)))
procs = []

for i in range(nprocs):
    p = multiprocessing.Process(
            target=worker,
            args=(nums[chunksize * i:chunksize * (i + 1)],
                  out_q))
    procs.append(p)
    p.start()

# Collect all results into a single result dict. We know how many dicts
# with results to expect.
resultdict = {}
for i in range(nprocs):
    resultdict.update(out_q.get())

# Wait for all worker processes to finish
for p in procs:
    p.join()

print(resultdict)
From what I understand, process.join() blocks the calling process until the process whose join method was called has finished executing. I also believe that the child processes started in the code sample above finish executing once they complete the target function, that is, after they push their results onto out_q. Lastly, I believe out_q.get() blocks the calling process until there is a result to be pulled. Thus, if you consider the code:
resultdict = {}
for i in range(nprocs):
    resultdict.update(out_q.get())

# Wait for all worker processes to finish
for p in procs:
    p.join()
the main process is blocked by the out_q.get() calls until every worker process has finished pushing its results onto the queue. So, by the time the main process exits the for loop, every child process should already have finished executing, correct?
If that is the case, is there any reason to call the p.join() methods at this point? Haven't all the worker processes already finished, so how does that cause the main process to "wait for all the worker processes to finish"? I ask mainly because I have seen this in multiple different examples, and I am curious whether I am failing to understand something.
oef*_*efe 18
By the time you call join, all the workers have put their results into the queue, but they have not necessarily returned, and their processes may not have terminated yet. Depending on timing, they may or may not have done so.
Calling join makes sure that all the processes are given the time to terminate properly.
Bak*_*riu 18
Try running this:
import math
import time
from multiprocessing import Queue
import multiprocessing

def factorize_naive(n):
    factors = []
    for div in range(2, int(n**.5)+1):
        while not n % div:
            factors.append(div)
            n //= div
    if n != 1:
        factors.append(n)
    return factors

nums = range(100000)
nprocs = 4

def worker(nums, out_q):
    """ The worker function, invoked in a process. 'nums' is a
    list of numbers to factor. The results are placed in
    a dictionary that's pushed to a queue.
    """
    outdict = {}
    for n in nums:
        outdict[n] = factorize_naive(n)
    out_q.put(outdict)

# Each process will get 'chunksize' nums and a queue to put its out
# dict into
out_q = Queue()
chunksize = int(math.ceil(len(nums) / float(nprocs)))
procs = []

for i in range(nprocs):
    p = multiprocessing.Process(
            target=worker,
            args=(nums[chunksize * i:chunksize * (i + 1)],
                  out_q))
    procs.append(p)
    p.start()

# Collect all results into a single result dict. We know how many dicts
# with results to expect.
resultdict = {}
for i in range(nprocs):
    resultdict.update(out_q.get())

time.sleep(5)

# Wait for all worker processes to finish
for p in procs:
    p.join()

print(resultdict)

time.sleep(15)
and open your task manager. You should be able to see the 4 child processes sit in a zombie state for a few seconds before being terminated by the OS (thanks to the join calls):

[Task manager screenshot: the four child processes shown in a zombie state]
In more complex situations, the child processes can stay in a zombie state forever (as in the situation you were asking about in another question), and if you create enough of them you can fill the process table, causing trouble for the OS (which may kill your main process in order to avoid failure).
Viewed 32,846 times.