Dar*_*ish (5) · Tags: python, subprocess, python-asyncio, python-multiprocessing, process-pool
A similar question (but the answer does not work for me): How to cancel long-running subprocesses running using concurrent.futures.ProcessPoolExecutor?

Unlike the question linked above and the solution provided there, in my case the computation itself is quite long (CPU-bound) and cannot be run in a loop to check whether some event has occurred.

A simplified version of the code:
import asyncio
import concurrent.futures as futures
import time


class Simulator:
    def __init__(self):
        self._loop = None
        self._lmz_executor = None
        self._tasks = []
        self._max_execution_time = time.monotonic() + 60
        self._long_running_tasks = []

    def initialise(self):
        # Initialise the main asyncio loop
        self._loop = asyncio.get_event_loop()
        self._loop.set_default_executor(
            futures.ThreadPoolExecutor(max_workers=3))
        # Run separate processes of long computation task
        self._lmz_executor = futures.ProcessPoolExecutor(max_workers=3)

    def run(self):
        self._tasks.extend(
            [self.bot_reasoning_loop(bot_id) for bot_id in [1, 2, 3]]
        )
        try:
            # Gather bot reasoner tasks
            _reasoner_tasks = asyncio.gather(*self._tasks)
            # Send the reasoner tasks to main monitor task
            asyncio.gather(self.sample_main_loop(_reasoner_tasks))
            self._loop.run_forever()
        except KeyboardInterrupt:
            pass
        finally:
            self._loop.close()

    async def sample_main_loop(self, reasoner_tasks):
        """This is the main monitor task"""
        await asyncio.wait_for(reasoner_tasks, None)
        for task in self._long_running_tasks:
            try:
                await asyncio.wait_for(task, 10)
            except asyncio.TimeoutError:
                print("Oops. Some long operation timed out.")
                task.cancel()  # Doesn't cancel and has no effect
                task.set_result(None)  # Doesn't seem to have an effect
        self._lmz_executor.shutdown()
        self._loop.stop()
        print('And now I am done. Yay!')

    async def bot_reasoning_loop(self, bot):
        import math

        _exec_count = 0
        _sleepy_time = 15
        _max_runs = math.floor(self._max_execution_time / _sleepy_time)

        self._long_running_tasks.append(
            self._loop.run_in_executor(
                    self._lmz_executor, really_long_process, _sleepy_time))

        while time.monotonic() < self._max_execution_time:
            print("Bot#{}: thinking for {}s. Run {}/{}".format(
                    bot, _sleepy_time, _exec_count, _max_runs))
            await asyncio.sleep(_sleepy_time)
            _exec_count += 1

        print("Bot#{} Finished Thinking".format(bot))


def really_long_process(sleepy_time):
    print("I am a really long computation.....")
    _large_val = 9729379273492397293479237492734 ** 344323
    print("I finally computed this large value: {}".format(_large_val))


if __name__ == "__main__":
    sim = Simulator()
    sim.initialise()
    sim.run()
The idea is to have a main simulation loop that runs and monitors three bot threads. Each of these bot threads performs some reasoning, but also starts a really long background process using the ProcessPoolExecutor, which may end up running longer than the bot's own threshold/max execution time for reasoning about things.

As you can see in the code above, I attempt to .cancel() these tasks when a timeout occurs. This doesn't really cancel the actual computation, though, which keeps happening in the background, and the asyncio loop doesn't terminate until all the long-running computations have finished.

How can I terminate such long-running CPU-bound computations within a method?

Other similar SO questions, but not necessarily related or helpful:
"How can I terminate such long-running CPU-bound computations within a method?"
The approach you tried doesn't work because the futures returned by ProcessPoolExecutor are not cancellable. Although asyncio's run_in_executor tries to propagate the cancellation, it is simply ignored by Future.cancel once the task starts executing.

There is no fundamental reason for that. Unlike threads, processes can be safely terminated, so it would be perfectly possible for ProcessPoolExecutor.submit to return a future whose cancel terminated the corresponding process. Asyncio coroutines have well-defined cancellation semantics and could make use of it automatically. Unfortunately, ProcessPoolExecutor.submit returns a regular concurrent.futures.Future, which assumes the lowest common denominator and treats a running future as untouchable.
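For example, here is a minimal sketch (not from the original post; the busy function and timings are made up purely for illustration) showing that once a worker has picked up a job, cancel() reports failure and the computation runs to completion anyway:

import concurrent.futures as futures
import time

def busy():
    # Stand-in for a CPU-bound job; sleep keeps the example cheap to run.
    time.sleep(5)
    return "done"

if __name__ == "__main__":
    with futures.ProcessPoolExecutor(max_workers=1) as ex:
        fut = ex.submit(busy)
        time.sleep(1)        # give the worker a moment to start executing
        print(fut.cancel())  # False: a running future cannot be cancelled
        print(fut.result())  # "done": the work was never interrupted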
As a result, to cancel tasks executed in the subprocesses of a ProcessPoolExecutor, one must bypass it entirely and manage one's own processes. The challenge is how to do this without reimplementing half of multiprocessing. One option offered by the standard library is to (ab)use multiprocessing.Pool for this purpose, since it supports reliable shutdown of worker processes. A CancellablePool could work as follows:
- Instead of spawning a fixed number of processes, spawn a fixed number of one-worker pools.
- Assign tasks to pools from an asyncio coroutine. If the coroutine is cancelled while waiting for the task to finish in the other process, terminate the single-process pool and create a new one.
- Since everything is coordinated from the single asyncio thread, don't worry about race conditions such as accidentally killing a process that has already started executing another task. (This would need to be prevented if one were to support cancellation in the ProcessPoolExecutor.)

Here is a sample implementation of that idea:
import asyncio
import multiprocessing


class CancellablePool:
    def __init__(self, max_workers=3):
        self._free = {self._new_pool() for _ in range(max_workers)}
        self._working = set()
        self._change = asyncio.Event()

    def _new_pool(self):
        return multiprocessing.Pool(1)

    async def apply(self, fn, *args):
        """
        Like multiprocessing.Pool.apply_async, but:
         * is an asyncio coroutine
         * terminates the process if cancelled
        """
        while not self._free:
            await self._change.wait()
            self._change.clear()
        pool = usable_pool = self._free.pop()
        self._working.add(pool)

        loop = asyncio.get_event_loop()
        fut = loop.create_future()
        def _on_done(obj):
            loop.call_soon_threadsafe(fut.set_result, obj)
        def _on_err(err):
            loop.call_soon_threadsafe(fut.set_exception, err)
        pool.apply_async(fn, args, callback=_on_done, error_callback=_on_err)

        try:
            return await fut
        except asyncio.CancelledError:
            pool.terminate()
            usable_pool = self._new_pool()
        finally:
            self._working.remove(pool)
            self._free.add(usable_pool)
            self._change.set()

    def shutdown(self):
        for p in self._working | self._free:
            p.terminate()
        self._free.clear()
A minimalistic test case showing the cancellation:
def really_long_process():
    print("I am a really long computation.....")
    large_val = 9729379273492397293479237492734 ** 344323
    print("I finally computed this large value: {}".format(large_val))


async def main():
    loop = asyncio.get_event_loop()
    pool = CancellablePool()

    tasks = [loop.create_task(pool.apply(really_long_process))
             for _ in range(5)]
    for t in tasks:
        try:
            await asyncio.wait_for(t, 1)
        except asyncio.TimeoutError:
            print('task timed out and cancelled')
    pool.shutdown()


asyncio.get_event_loop().run_until_complete(main())
Note how the CPU usage never exceeds 3 cores, and how it starts dropping near the end of the test, which indicates that the processes are being terminated as expected.
To apply this to the code in the question, make self._lmz_executor an instance of CancellablePool and change self._loop.run_in_executor(...) to self._loop.create_task(self._lmz_executor.apply(...)).
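Concretely, the wiring would look roughly like this (a sketch assuming CancellablePool is defined as above and available where Simulator lives):

# In Simulator.initialise(), replace the ProcessPoolExecutor:
self._lmz_executor = CancellablePool(max_workers=3)

# In Simulator.bot_reasoning_loop(), schedule the work through the pool:
self._long_running_tasks.append(
    self._loop.create_task(
        self._lmz_executor.apply(really_long_process, _sleepy_time)))

With that change, cancelling one of these tasks (which asyncio.wait_for already does on timeout) terminates the underlying worker process instead of being silently ignored.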