I am running into memory problems while downloading large data sets from a paginated API in Python. When I try to download multiple pages in parallel with a ThreadPoolExecutor, I notice that futures which have completed and been resolved do not release their memory footprint.
I tried to reduce the problem to the two examples below. The first one downloads all pages using a ThreadPoolExecutor with max_workers set to 1 (which, as far as I understand, should have the same memory footprint as a simple loop):
from random import random
from concurrent.futures import ThreadPoolExecutor, as_completed
import gc

TOTAL_PAGES = 60

def download_data(page: int = 1) -> list[float]:
    # Send a request to some resource to get data
    print(f"Downloading page {page}.")
    return [random() for _ in range(1000000)]  # mock some large data sets
def threadpool_memory_test():
    processed_pages = 0
    with ThreadPoolExecutor(max_workers=1) as executor:
        future_to_page = {
            executor.submit(download_data, page): page for page in range(1, TOTAL_PAGES + 1)
        }
        for future in as_completed(future_to_page):
            future.result()
            processed_pages += 1
            print(f"Processed {processed_pages}/{TOTAL_PAGES} pages.")