Why does memory consumption increase so drastically with `Pool.map()` multiprocessing?


I am multiprocessing a pandas dataframe by splitting it into several smaller dataframes, which are stored in a list. Then, using Pool.map(), I pass each small dataframe to a worker function. My input file is about 300 MB, so each small dataframe is roughly 75 MB. However, while the pool is running, memory consumption grows by about 7 GB, with each worker process using roughly 1–2 GB of memory. Why is this happening?

import resource
from multiprocessing import Pool

import pandas as pd


def main():

    my_df = pd.read_table("my_file.txt", sep="\t")
    my_df = my_df.groupby('someCol')

    my_df_list = []
    for colID, colData in my_df:
        my_df_list.append(colData)

    # now, multiprocess each small dataframe individually    
    p = Pool(3)
    result = p.map(process_df, my_df_list)

    p.close()
    p.join()

    print('Global maximum memory usage: %.2f (mb)' % current_mem_usage())

    result_merged = pd.concat(result)

    # write merged data to file


def process_df(my_df):
    my_new_df = ...  # do something with "my_df"

    print('\tWorker maximum memory usage: %.2f (mb)' % (current_mem_usage()))

    del my_df
    return my_new_df


# to monitor memory usage; ru_maxrss is the process's high-water RSS (in KB on Linux)
def current_mem_usage():
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.


if __name__ == '__main__':
    main()
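Side note on the measurement: I assume ru_maxrss above is the per-process high-water mark. If it helps, the parent could also report the high-water mark of its largest finished child via RUSAGE_CHILDREN; a small hypothetical helper (not part of the code above):

def children_mem_usage():
    # high-water RSS of the largest child process that has terminated and
    # been waited for (value is in KB on Linux, hence the division)
    return resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss / 1024.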

The results come out fine, but memory consumption is very high for chunks of only ~75 MB each. Why is that? Is it a memory leak? What are possible remedies? (I have sketched one idea I am considering below the output.)

Memory usage output:

Worker maximum memory usage: 2182.84 (mb)
Worker maximum memory usage: 2182.84 (mb)
Worker maximum memory usage: 2837.69 (mb)
Worker maximum memory usage: 2849.84 (mb)
Global maximum memory usage: 3106.00 (mb)
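One mitigation I am considering (an untested sketch, so I am not sure it actually lowers the peak): recycle each worker after a single task with maxtasksperchild, and feed the groups lazily through imap instead of materialising my_df_list up front:

import pandas as pd
from multiprocessing import Pool


def main():
    my_df = pd.read_table("my_file.txt", sep="\t")

    # generator: each group is produced only when a worker is ready for it,
    # so the parent never holds the full list of sub-dataframes at once
    groups = (colData for _, colData in my_df.groupby('someCol'))

    # maxtasksperchild=1 recycles a worker after every task, so whatever
    # memory it grew to is released between tasks
    with Pool(processes=3, maxtasksperchild=1) as p:
        # process_df is the same worker function as above
        result = list(p.imap(process_df, groups, chunksize=1))

    result_merged = pd.concat(result)

Would something along these lines be expected to help here?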