I have N independent tasks that are executed in a multiprocessing.Pool of size os.cpu_count() (8 in my case), with maxtasksperchild=1 (i.e., a fresh worker process is created for each new task).
The main script can be boiled down to:
import os
import subprocess as sp
import multiprocessing as mp

def do_work(task: dict) -> dict:
    res = {}
    # ... work ...
    for i in range(5):
        # 'cmd' is built as part of the elided work above
        out = sp.run(cmd, stdout=sp.PIPE, stderr=sp.PIPE, check=False, timeout=60)
        res[i] = out.stdout.decode('utf-8')
    # ... some more work ...
    return res

if __name__ == '__main__':
    tasks = load_tasks_from_file(...)  # list of dicts
    logger = mp.get_logger()
    results = []
    with mp.Pool(processes=os.cpu_count(), maxtasksperchild=1) as pool:
        for i, res in enumerate(pool.imap_unordered(do_work, tasks), start=1):
            results.append(res)
            logger.info('PROGRESS: %3d/%3d', i, len(tasks))
    dump_results_to_file(results)
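A side note on the snippet above: mp.get_logger() returns a logger with no handler attached, and the effective level defaults to WARNING, so the PROGRESS lines are only visible if logging is configured first, e.g. with mp.log_to_stderr (this setup is my assumption, it is not shown in the original script):

import logging
import multiprocessing as mp

# Attach a stderr handler to the multiprocessing logger and lower its level;
# without this, mp.get_logger().info(...) produces no visible output.
logger = mp.log_to_stderr(logging.INFO)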
The pool sometimes gets stuck. The traceback obtained when I issue a KeyboardInterrupt shows that the pool is not picking up new tasks and/or that worker processes are stuck in a queue/pipe recv() call. I could not reproduce this deterministically, even after varying several configurations of my experiments; if I run the same code again, it may well finish normally.
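One low-overhead way to see where a hung process is stuck at the Python level is the faulthandler module; the sketch below is my own addition (enable_stack_dumps is a made-up name), not part of the original script:

import faulthandler
import signal

def enable_stack_dumps() -> None:
    # On SIGUSR1 (Unix only), dump the Python traceback of every thread
    # in this process to stderr: run `kill -USR1 <pid>` from a shell.
    faulthandler.register(signal.SIGUSR1)

Calling this in the main script and also passing it as the Pool's initializer= would make both the parent and each worker inspectable while they hang.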
Further observations:

- The start method is fork (using spawn does not solve the problem).
- strace shows the process is stuck in a futex wait; gdb's backtrace likewise ends in do_futex_wait.constprop.
- Update: the deadlock seems to occur even with a pool size of 1.
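For completeness, switching the start method per pool is done through a context object; a minimal sketch under that assumption (the abs call is just a stand-in for do_work):

import multiprocessing as mp
import os

if __name__ == '__main__':
    # A dedicated context limits the start-method change to this pool.
    ctx = mp.get_context('spawn')
    with ctx.Pool(processes=os.cpu_count(), maxtasksperchild=1) as pool:
        print(pool.map(abs, [-1, -2, -3]))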
strace reports that the process is blocked trying to acquire a lock located at 0x564c5dbcd000:
futex(0x564c5dbcd000, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY
and gdb confirms it:
(gdb) bt
#0 0x00007fcb16f5d014 in do_futex_wait.constprop () from /usr/lib/libpthread.so.0
#1 0x00007fcb16f5d118 in __new_sem_wait_slow.constprop.0 () from /usr/lib/libpthread.so.0
#2 0x0000564c5cec4ad9 in PyThread_acquire_lock_timed (lock=0x564c5dbcd000, microseconds=-1, intr_flag=0)
at /tmp/build/80754af9/python_1598874792229/work/Python/thread_pthread.h:372
#3 0x0000564c5ce4d9e2 in _enter_buffered_busy (self=self@entry=0x7fcafe1e7e90)
at /tmp/build/80754af9/python_1598874792229/work/Modules/_io/bufferedio.c:282
#4 0x0000564c5cf50a7e in _io_BufferedWriter_write_impl.isra.2 (self=0x7fcafe1e7e90)
at /tmp/build/80754af9/python_1598874792229/work/Modules/_io/bufferedio.c:1929
#5 _io_BufferedWriter_write (self=0x7fcafe1e7e90, arg=<optimized out>)
at /tmp/build/80754af9/python_1598874792229/work/Modules/_io/clinic/bufferedio.c.h:396
The deadlock turned out to be caused by high memory usage in the workers: this triggered the OOM killer, which abruptly terminated the worker subprocesses and left the pool in a broken state.
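If the OOM killer is the culprit, one possible mitigation (my own suggestion, Linux-specific, with an arbitrary 2 GiB budget) is to cap each worker's address space so an over-allocating task fails with a MemoryError inside do_work instead of being SIGKILLed:

import resource

def limit_worker_memory(max_bytes: int = 2 * 1024**3) -> None:
    # Cap this process's virtual address space; pass this function as the
    # Pool's initializer (with initargs=(budget,)) so each worker is limited.
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))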
This script reproduces my original issue.
At the moment I am considering switching to ProcessPoolExecutor, which raises a BrokenProcessPool exception when a worker terminates abruptly.
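A minimal sketch of that alternative (my own, with placeholder tasks and work; note that ProcessPoolExecutor only gained a max_tasks_per_child parameter in Python 3.11):

import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

def do_work(task: dict) -> dict:
    return task  # placeholder for the real work

if __name__ == '__main__':
    tasks = [{'id': i} for i in range(10)]  # placeholder task list
    results = []
    try:
        with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
            for res in pool.map(do_work, tasks):
                results.append(res)
    except BrokenProcessPool as exc:
        # Raised when a worker dies abruptly (e.g. killed by the OOM killer)
        # instead of the pool hanging the way multiprocessing.Pool can.
        print(f'worker pool broke: {exc}')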