python - improving the efficiency of a large-file search with readlines(size)

dan*_*man 7 python dictionary enumerate multidimensional-array readlines

I am new to Python and currently using Python 2. I have several source files, each containing a huge amount of data (about 19 million lines). The files look like this:

apple   \t N   \t apple
n&apos
garden  \t N   \t garden
b\ta\md 
great   \t Adj \t great
nice    \t Adj \t (unknown)
etc

My task is to search the third column of each file for certain target words, and every time a target word is found in the corpus, the 10 words before and after it have to be added to a multidimensional dictionary.

EDIT: lines containing '&', '\' or the string '(unknown)' should be excluded.
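To make the intended structure concrete, the multidimensional dictionary should end up looking roughly like this (the entries here are only illustrative):

# targets[lemma][pos][context_lemma][context_pos] = co-occurrence count
targets = {
    'apple': {                    # target word found in column 3
        'N': {                    # its POS tag from column 2
            'garden': {'N': 2},   # context word -> {context POS: count}
            'great': {'Adj': 1},
        }
    }
}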

I tried to solve this with readlines() and enumerate(), as shown in the code below. The code does what it is supposed to do, but it is obviously not efficient enough for the amount of data in the source files.

I know that readlines() and read() should not be used on huge datasets, because they load the whole file into memory. Still, reading the file line by line, I did not manage to use the enumerate approach to get the 10 words before and after a target word. I also cannot use mmap, since I do not have permission to use it on these files.

So I think the readlines method with some size limit would be the most efficient solution. However, wouldn't that introduce errors, since every time the end of a chunk is reached, the 10 words around a target word near the cut would not be captured because the code simply breaks off there?
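(For illustration only, one way to keep a bounded amount of the file in memory would be a fixed-size sliding window, e.g. with collections.deque; the file name below is a placeholder, and the first and last 10 lines of a file would still need special handling:)

from collections import deque

window = deque(maxlen=21)              # 10 lines before + current line + 10 lines after
with open('corpus_file') as f:         # placeholder file name
    for new_line in f:
        window.append(new_line.strip())
        if len(window) < window.maxlen:
            continue                   # window not yet full
        current = window[10]           # the line in the middle of the window
        before = list(window)[:10]     # the 10 lines before it
        after = list(window)[11:]      # the 10 lines after it
        # ... check `current` against the targets here and count `before`/`after`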

import os
import re
import csv
import gzip

def get_target_to_dict(file):
    targets_dict = {}
    with open(file) as f:
        for line in f:
            targets_dict[line.strip()] = {}  # one empty sub-dict per target word (one target per line)
    return targets_dict

targets_dict = get_target_to_dict('targets_uniq.txt')
# browse directory and process each file 
# find the target words to include the 10 words before and after to the dictionary
# exclude lines starting with <,-,; to just have raw text

def get_co_occurence(path_file_dir, targets, results):
    lines = []
    for file in os.listdir(path_file_dir):
        if file.startswith('corpus'):
            path_file = os.path.join(path_file_dir, file)
            with gzip.open(path_file) as corpusfile:
                # PROBLEMATIC CODE HERE
                # lines = corpusfile.readlines()
                for line in corpusfile:
                    if re.match('[A-Z]|[a-z]', line):
                        if '(unknown)' in line:
                            continue
                        elif '\\' in line:
                            continue
                        elif '&' in line:
                            continue
                        lines.append(line)
                for i, line in enumerate(lines):
                    line = line.strip()
                    if re.match('[A-Z]|[a-z]', line):
                        parts = line.split('\t')
                        lemma = parts[2]
                        if lemma in targets:
                            pos = parts[1]
                            if pos not in targets[lemma]:
                                targets[lemma][pos] = {}
                            counts = targets[lemma][pos]
                            context = []
                            # look at the 10 previous lines
                            for j in range(max(0, i - 10), i):
                                context.append(lines[j])
                            # look at the next 10 lines
                            for j in range(i + 1, min(i + 11, len(lines))):
                                context.append(lines[j])
                            # END OF PROBLEMATIC CODE
                            for context_line in context:
                                context_line = context_line.strip()
                                parts_context = context_line.split('\t')
                                context_lemma = parts_context[2]
                                if context_lemma not in counts:
                                    counts[context_lemma] = {}
                                context_pos = parts_context[1]
                                if context_pos not in counts[context_lemma]:
                                    counts[context_lemma][context_pos] = 0
                                counts[context_lemma][context_pos] += 1
                csvwriter = csv.writer(results, delimiter='\t')
                for k, v in targets.iteritems():
                    for k2, v2 in v.iteritems():
                        for k3, v3 in v2.iteritems():
                            for k4, v4 in v3.iteritems():
                                csvwriter.writerow([str(k), str(k2), str(k3), str(k4), str(v4)])
                                # print(str(k) + "\t" + str(k2) + "\t" + str(k3) + "\t" + str(k4) + "\t" + str(v4))

results = open('results_corpus.csv', 'wb')
word_occurrence = get_co_occurence(path_file_dir, targets_dict, results)

I copied this whole part of the code for the sake of completeness, since it all belongs to one function that builds a multidimensional dictionary out of the extracted information and then writes it to a csv file.

I would really appreciate any hints or suggestions for making this code more efficient.

EDIT: I corrected the code so that it takes exactly the 10 words before and after the target word into account.

Sky*_*ycc 3

My idea was to create one buffer to store the 10 lines before the current one and another buffer to store the 10 lines after it. As the file is read, each line is pushed into the before-buffer, and the oldest line is popped once the buffer grows beyond 10 lines.

For the after-buffer, I clone a second iterator from the file iterator (itertools.tee). Both iterators then run in parallel inside the loop, with the cloned iterator running 10 iterations ahead to provide the next 10 lines.

This avoids readlines() loading the whole file into memory. Hope it works well for your actual case.
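A stripped-down illustration of that look-ahead pattern (toy input and a look-ahead of 2 instead of 10, just to show the mechanics):

import itertools

lines = iter(['l1', 'l2', 'l3', 'l4', 'l5'])
lines, lookahead = itertools.tee(lines)
next(lookahead, None)                  # move the clone past the current position

after_buf = []
for line in lines:
    while len(after_buf) < 2:          # top up the look-ahead buffer
        nxt = next(lookahead, None)
        if nxt is None:                # the clone is exhausted near the end of the input
            break
        after_buf.append(nxt)
    print line, after_buf              # e.g. "l1 ['l2', 'l3']"
    if after_buf:
        after_buf.pop(0)               # the first buffered line becomes the current one next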

EDIT: the before/after buffers are only filled if column 3 does not contain any of '&', '\', '(unknown)'. Also changed split('\t') to split() so that it handles any mix of spaces and tabs.
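For example, with a made-up line:

line = 'apple \t N \t apple'
print line.split('\t')                 # ['apple ', ' N ', ' apple'] -- fields keep the padding spaces
print line.split()                     # ['apple', 'N', 'apple']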

import itertools
import os
import re
def get_co_occurence(path_file_dir, targets, results):
    excluded_words = ['&', '\\', '(unknown)'] # modify excluded words here 
    for file in os.listdir(path_file_dir): 
        if file.startswith('testset'): 
            path_file = os.path.join(path_file_dir, file) 
            with open(path_file) as corpusfile: 
                # CHANGED CODE HERE
                before_buf = [] # buffer to store before 10 lines 
                after_buf = []  # buffer to store after 10 lines 
                corpusfile, corpusfile_clone = itertools.tee(corpusfile) # clone file iterator to access next 10 lines 
                for line in corpusfile: 
                    line = line.strip() 
                    if re.match('[A-Z]|[a-z]', line): 
                        parts = line.split() 
                        lemma = parts[2]

                        # before-buffer handling: only fill the buffer if the line contains none of the excluded words
                        if not any(w in line for w in excluded_words): 
                            before_buf.append(line) # append to before buffer 
                        if len(before_buf) > 11:
                            before_buf.pop(0)  # keep the current line plus the 10 lines before it
                        # next buffer handling
                        while len(after_buf)<=10: 
                            try: 
                                after = next(corpusfile_clone) # advance 1 iterator 
                                after_lemma = '' 
                                after_tmp = after.split()
                                if re.match('[A-Z]|[a-z]', after) and len(after_tmp)>2: 
                                    after_lemma = after_tmp[2]
                            except StopIteration: 
                                break  # the cloned iterator hits the end first since it runs 10 iterations ahead
                            if after_lemma and not any(w in after for w in excluded_words): 
                                after_buf.append(after) # append to buffer
                                # print 'after',z,after, ' - ',after_lemma
                        if (after_buf and line in after_buf[0]):
                            after_buf.pop(0) # pop off one ready for next

                        if lemma in targets: 
                            pos = parts[1] 
                            if pos not in targets[lemma]: 
                                targets[lemma][pos] = {} 
                            counts = targets[lemma][pos] 
                            # context = [] 
                            # look at 10 previous lines 
                            context = before_buf[:-1]  # exclude the current line itself
                            # look at the next 10 lines 
                            context.extend(after_buf) 

                            # END OF CHANGED CODE
                            # CONTINUE YOUR STUFF HERE WITH CONTEXT