Python 3: How do I write to the same file from multiple processes without corrupting it?

use*_*204 4 python parallel-processing multithreading multiprocessing

I have a program that can be started or stopped at any time. The program downloads data from web pages. First, the user defines a set of web pages in a .csv file, saves that .csv file, and then starts the program. The program reads the .csv file and turns it into a list of jobs. Next, the jobs are split among 5 separate downloader functions that work in parallel but may take different amounts of time to finish.

After a downloader (there are 5 of them) finishes downloading a web page, I need it to open the .csv file and remove the link. That way, the .csv file keeps getting smaller over time. The problem is that sometimes two download functions try to update the .csv file at the same time, which crashes the program. How do I handle this?

zwe*_*wer 5

If this is a continuation of your project from yesterday, you already have the download list in memory - just remove entries from the loaded list as their processes finish downloading, and write the whole list back to the input file only when you exit your 'downloader'. There is no reason to constantly write out every change.
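A minimal sketch of that first idea might look like the following. The download(url) worker function and the input_links.csv file name are placeholders standing in for whatever your project actually uses:

from multiprocessing import Pool

def download(url):
    # placeholder for the real page download; return the url so the main
    # process knows which job just finished
    return url

if __name__ == "__main__":
    with open("input_links.csv") as f:
        pending = [line.strip() for line in f if line.strip()]  # jobs still to do

    jobs = list(pending)  # snapshot handed to the pool, so `pending` can be mutated safely
    try:
        with Pool(processes=5) as pool:
            # collect results as soon as each worker finishes
            for finished in pool.imap_unordered(download, jobs):
                pending.remove(finished)  # drop it from the in-memory list only
    finally:
        # the main process writes the remaining links back exactly once, on exit
        with open("input_links.csv", "w") as f:
            f.writelines(link + "\n" for link in pending)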

If you want to know (e.g. from an external process) when a URL has been downloaded even while your 'downloader' is running, write a new line to downloaded.dat every time a process reports a successful download.

In both cases, of course, write from your main process/thread so you don't have to worry about mutexes.
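A minimal sketch of that second idea, again with a placeholder download(url) worker and input file name, where only the main process ever touches downloaded.dat:

from multiprocessing import Pool

def download(url):
    # placeholder for the real page download
    return url  # report back which link was completed

if __name__ == "__main__":
    with open("input_links.csv") as f:
        links = [line.strip() for line in f if line.strip()]

    # only the main process writes to downloaded.dat, so no locking is needed
    with Pool(processes=5) as pool, open("downloaded.dat", "a") as progress:
        for finished in pool.imap_unordered(download, links):
            progress.write(finished + "\n")  # one line per successful download
            progress.flush()  # let external processes see progress immediately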

UPDATE - here is how to do it with an additional file, using the same code base as yesterday:

from itertools import cycle  # round-robin distribution of downloader params
from multiprocessing import Pool  # process pool for the parallel downloads
# Downloader is assumed to be the class from yesterday's code base

def init_downloader(params):  # our downloader initializer
    downloader = Downloader(**params[0])  # instantiate our downloader
    downloader.run(params[1])  # run our downloader
    return params  # job finished, return the same params for identification

if __name__ == "__main__":  # important protection for cross-platform use

    downloader_params = [  # Downloaders will be initialized using these params
        {"port_number": 7751},
        {"port_number": 7851},
        {"port_number": 7951}
    ]
    downloader_cycle = cycle(downloader_params)  # use a cycle for round-robin distribution

    with open("downloaded_links.dat", "a+") as diff_file:  # open your diff file
        diff_file.seek(0)  # rewind the diff file to the beginning to capture all lines
        diff_links = {row.strip() for row in diff_file}  # load downloaded links into a set
        with open("input_links.dat", "r+") as input_file:  # open your input file
            available_links = []
            download_jobs = []  # store our downloader parameters + a link here
            # read our file line by line and filter out downloaded links
            for row in input_file:  # loop through our file
                link = row.strip()  # remove the extra whitespace to get the link
                if link not in diff_links:  # make sure link is not already downloaded
                    available_links.append(row)
                    download_jobs.append([next(downloader_cycle), link])
            input_file.seek(0)  # rewind our input file
            input_file.truncate()  # clear out the input file
            input_file.writelines(available_links)  # store back the available links
            diff_file.seek(0)  # rewind the diff file
            diff_file.truncate()  # blank out the diff file now that the input is updated
        # and now let's get to business...
        if download_jobs:
            download_pool = Pool(processes=5)  # make our pool use 5 processes
            # run asynchronously so we can capture results as soon as they are available
            for response in download_pool.imap_unordered(init_downloader, download_jobs):
                # since it returns the same parameters, the second item is a link
                # add the link to our `diff` file so it doesn't get downloaded again
                diff_file.write(response[1] + "\n")
        else:
            print("Nothing left to download...")

The whole idea, as I wrote in the comments, is to use a file to store the downloaded links as they are downloaded, and then on the next run to filter out the already-downloaded links and update the input file. That way, even if you force-kill it, it will always resume where it left off (except for partial downloads).