What is the most efficient way to check for duplicate data between multiple files?

Blo*_*b X 4 python checksum

Let's say you have a folder with hundreds or thousands of .csv or .txt files that presumably contain different information, but you want to make sure that joe041.txt doesn't accidentally contain the same data as joe526.txt.

Rather than loading everything into a single file to compare against itself (which could be a nightmare if every file has thousands of lines), I've opted to use a Python script that basically reads every file in the directory and calculates a checksum, which you can then compare between your thousands of files.

Is there a more efficient way to do this?

Even using filecmp doesn't seem very efficient, since the module only offers file vs. file and dir vs. dir comparisons, but no file vs. dir command. That means that even with it, you would have to iterate through x² comparisons (every file in the dir against every other file in the dir); see the pairwise sketch after my script below.

import os
import hashlib

folder = "D:/Testing/New folder"
outputfile = []

# read every file in the folder and record its MD5 checksum
for x in os.listdir(folder):
    with open(os.path.join(folder, x), "rb") as openfile:
        text = openfile.read()
        outputfile.append(x)
        outputfile.append(",")
        outputfile.append(hashlib.md5(text).hexdigest())
        outputfile.append("\n")

print(outputfile)

# write "filename,checksum" lines so the checksums can be compared later
with open(os.path.join(folder, "output.csv"), "w") as openfile:
    for x in outputfile:
        openfile.write(x)
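For comparison, the pairwise filecmp approach mentioned above would look roughly like the sketch below (illustration only: filecmp.cmp with shallow=False compares file contents, and itertools.combinations yields each pair of files once, so the number of comparisons still grows quadratically with the number of files):

import filecmp
import itertools
import os

folder = "D:/Testing/New folder"
paths = [os.path.join(folder, name)
         for name in os.listdir(folder)
         if os.path.isfile(os.path.join(folder, name))]

# every file is compared against every other file: about n**2 / 2 comparisons
for a, b in itertools.combinations(paths, 2):
    if filecmp.cmp(a, b, shallow=False):
        print(a, "and", b, "contain the same data")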

Ral*_*alf 5

Inspired by @s?un???q?p's comment, you can try an iterative approach that first performs a cheap operation on all files (getting the file size) and then does a more in-depth comparison only on the files that have equal sizes.

This code first compares the sizes, then the beginning of the files (the first few characters), and finally the md5 hash of the whole file. You can adapt it in any way you see fit for your use case.

I used long variable names to make the steps explicit; don't get distracted by that.

import os
import hashlib

def calc_md5(file_path):
    hash_md5 = hashlib.md5()
    with open(file_path, 'rb') as f:
        for chunk in iter(lambda: f.read(4096), b''):
            hash_md5.update(chunk)
    return hash_md5.hexdigest()

def get_duplicates_by_size(dir_path):
    files_by_size = {}

    for elem in os.listdir(dir_path):
        file_path = os.path.join(dir_path, elem)
        if os.path.isfile(file_path):
            size = os.stat(file_path).st_size

            if size not in files_by_size:
                files_by_size[size] = []
            files_by_size[size].append(file_path)

    # keep only entries with more than one file;
    # the others don't need to be kept in memory
    return {
        size: file_list
        for size, file_list in files_by_size.items()
        if len(file_list) > 1}

def get_duplicates_by_first_content(files_by_size, n_chars):
    files_by_size_and_first_content = {}

    for size, file_list in files_by_size.items():
        d = {}
        for file_path in file_list:
            with open(file_path) as f:
                first_content = f.read(n_chars)

            if first_content not in d:
                d[first_content] = []
            d[first_content].append(file_path)

        # keep only entries with more than one file;
        # the others don't need to be kept in memory
        d = {
            (size, first_content): file_list_2
            for first_content, file_list_2 in d.items()
            if len(file_list_2) > 1}
        files_by_size_and_first_content.update(d)

    return files_by_size_and_first_content

def get_duplicates_by_hash(files_by_size_and_first_content):
    files_by_size_and_first_content_and_hash = {}

    for (size, first_content), file_list in files_by_size_and_first_content.items():
        d = {}
        for file_path in file_list:
            file_hash = calc_md5(file_path)

            if file_hash not in d:
                d[file_hash] = []
            d[file_hash].append(file_path)

        # keep only entries with more than one file;
        # the others don't need to be kept in memory
        d = {
            (size, first_content, file_hash): file_list_2
            for file_hash, file_list_2 in d.items()
            if len(file_list_2) > 1}
        files_by_size_and_first_content_and_hash.update(d)

    return files_by_size_and_first_content_and_hash

if __name__ == '__main__':
    r = get_duplicates_by_size('D:/Testing/New folder')
    r = get_duplicates_by_first_content(r, 20)  # customize the number of chars to read
    r = get_duplicates_by_hash(r)

    for k, v in r.items():
        print('Key:', k)
        print('  Files:', v)
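A possible follow-up (my own sketch, not part of the answer above): all three stages follow the same pattern of grouping files by a key and keeping only the groups with more than one member, so the chain can also be written once with a generic helper. The names group_files, size_key, first_chars_key and md5_key below are hypothetical, chosen just for illustration:

import os
import hashlib

def group_files(groups, key_func):
    # regroup every existing group by key_func and keep only the groups
    # that still contain more than one file
    new_groups = {}
    for old_key, file_list in groups.items():
        for file_path in file_list:
            new_key = old_key + (key_func(file_path),)
            new_groups.setdefault(new_key, []).append(file_path)
    return {k: v for k, v in new_groups.items() if len(v) > 1}

def size_key(file_path):
    return os.stat(file_path).st_size

def first_chars_key(file_path, n_chars=20):
    with open(file_path) as f:
        return f.read(n_chars)

def md5_key(file_path):
    hash_md5 = hashlib.md5()
    with open(file_path, 'rb') as f:
        for chunk in iter(lambda: f.read(4096), b''):
            hash_md5.update(chunk)
    return hash_md5.hexdigest()

if __name__ == '__main__':
    dir_path = 'D:/Testing/New folder'
    all_files = [os.path.join(dir_path, elem)
                 for elem in os.listdir(dir_path)
                 if os.path.isfile(os.path.join(dir_path, elem))]

    # start with a single group keyed by the empty tuple, then refine it
    # with successively more expensive keys
    groups = {(): all_files}
    for key_func in (size_key, first_chars_key, md5_key):
        groups = group_files(groups, key_func)

    for k, v in groups.items():
        print('Key:', k)
        print('  Files:', v)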