How can I iterate through this text file faster?

Col*_*lin 6 python parsing dictionary

I have a file containing many sections in this format:

section_name_1 <attribute_1:value> <attribute_2:value> ... <attribute_n:value> {
    field_1 finish_num:start_num some_text ;
    field_2 finish_num:start_num some_text ;
    ...
    field_n finish_num:start_num some_text;
};

section_name_2 ...
... and so on

The file can be hundreds of thousands of lines long, and the number of attributes and fields can differ between sections. I want to build some dictionaries to hold these values. I already have a separate dictionary that holds all the possible 'attribute' values.

import re
from collections import defaultdict

def mapFile(myFile, attributeMap_d):
    valueMap_d = {}
    fieldMap_d = defaultdict(dict)

    for attributeName in attributeMap_d:
        valueMap_d[attributeName] = {}

    # open in text mode: the 'in' and regex tests below work on str,
    # so opening with "rb" (bytes) would raise a TypeError on Python 3
    with open(myFile, "r") as fh:
        for line in fh:
            # only look at section-header lines, which contain <
            if '<' in line:
                # match all attribute:value pairs inside <> brackets
                attributeAllMatch = re.findall(r'<(\S+):(\S+)>', line)
                attributeAllMatchLen = len(attributeAllMatch)
                count = 0

                sectionNameMatch = re.match(r'(\S+)\s+<', line)
                sectionName = sectionNameMatch.group(1)

                # store each section name and its associated attributes and values
                for attributeName in attributeMap_d:
                    for element in attributeAllMatch:
                        if element[0] == attributeName:
                            valueMap_d[attributeName][sectionName] = element[1].rstrip()
                            count += 1
                    # stop searching once all attributes in the section are matched
                    if count == attributeAllMatchLen:
                        break

                # between the curly brackets, store all the field names and
                # start/stop nums into the dict -- this while loop is very slow...
                nextLine = next(fh, '};')  # default guards against a truncated file
                while "};" not in nextLine:
                    fieldMatch = re.search(r'(\S+)\s+(\d+):(\d+)', nextLine)
                    if fieldMatch:
                        fieldMap_d[sectionName][fieldMatch.group(1)] = [fieldMatch.group(2), fieldMatch.group(3)]
                    nextLine = next(fh, '};')

    # return both maps; returning only valueMap_d would silently drop fieldMap_d
    return valueMap_d, fieldMap_d

My problem is that the while loop matching all the field values is noticeably slower than the rest of the code: 0.5s versus 2.2s according to cProfile if I remove the while loop. I'd like to know what I can do to speed it up.
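For reference, one way to get per-call timings like those is with `cProfile`'s `Profile` object plus `pstats`. This is a sketch using a toy stand-in function (the `parse` helper and the generated `lines` are illustrative, not the question's actual `mapFile` or data):

```python
import cProfile
import io
import pstats

def parse(lines):
    # toy stand-in for the field-parsing loop being profiled
    out = {}
    for line in lines:
        parts = line.split(maxsplit=2)
        if len(parts) >= 2 and ':' in parts[1]:
            out[parts[0]] = parts[1].split(':')
    return out

lines = ['field_{:03} {}:{} text ;'.format(i, i * 100, i * 100 - 50)
         for i in range(1, 1000)]

profiler = cProfile.Profile()
profiler.enable()
result = parse(lines)
profiler.disable()

# dump the 5 most expensive entries, sorted by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats('cumulative').print_stats(5)
print(stream.getvalue())
```

Profiling the real function the same way (wrapping the `mapFile` call between `enable()` and `disable()`) shows which lines of the loop dominate.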

PM *_*ing 2

Regexes are great when you need fancy pattern matching, but when you don't, parsing text with str methods can be faster. Below is some code that compares doing your field parsing with a regex versus with str.split.

First, I create some fake test data and store it in the rows list. Doing this makes my demo code simpler than it would be if I were reading the data from a file, but more importantly, it eliminates the overhead of file reading, so we can compare the parsing speeds more accurately.

BTW, you should save sectionNameMatch.group(1) outside your field-parsing loop, rather than having to make that call on every field line.
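A minimal sketch of that hoisting, using the question's variable names (the sample `lines` list here is made up for illustration):

```python
import re

lines = ['header <attr:1> {',
         'field_1 200:100 x ;',
         'field_2 300:250 y ;',
         '};']
fieldMap_d = {}

sectionNameMatch = re.match(r'(\S+)\s+<', lines[0])
sectionName = sectionNameMatch.group(1)  # hoisted: computed once per section

for nextLine in lines[1:]:
    if '};' in nextLine:
        break
    fieldMatch = re.search(r'(\S+)\s+(\d+):(\d+)', nextLine)
    if fieldMatch:
        # reuse the saved name instead of calling .group(1) on every field line
        fieldMap_d.setdefault(sectionName, {})[fieldMatch.group(1)] = \
            [fieldMatch.group(2), fieldMatch.group(3)]

print(fieldMap_d)
```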

First, I'll show that my code parses the data correctly. :)

import re
from pprint import pprint
from time import perf_counter

# Make some test data
num = 10
rows = []
for i in range(1, num):
    j = 100 * i
    rows.append(' field_{:03} {}:{} some_text here ;'.format(i, j, j - 50))
rows.append('};')
print('\n'.join(rows))

# Select whether to use regex to do the parsing or `str.split`
use_regex = True
print('Testing {}'.format(('str.split', 'regex')[use_regex]))

fh = iter(rows)
fieldMap = {}

nextLine = next(fh)
start = perf_counter()
if use_regex:
    while "};" not in nextLine:
        fieldMatch = re.search(r'(\S+)\s+(\d+):(\d+)', nextLine)
        if fieldMatch:
            fieldMap[fieldMatch.group(1)] = [fieldMatch.group(2), fieldMatch.group(3)]
        nextLine = next(fh)
else:
    while "};" not in nextLine:
        if nextLine:
            data = nextLine.split(maxsplit=2)
            fieldMap[data[0]] = data[1].split(':')
        nextLine = next(fh)

print('time: {:.6f}'.format(perf_counter() - start))
pprint(fieldMap)

Output

 field_001 100:50 some_text here ;
 field_002 200:150 some_text here ;
 field_003 300:250 some_text here ;
 field_004 400:350 some_text here ;
 field_005 500:450 some_text here ;
 field_006 600:550 some_text here ;
 field_007 700:650 some_text here ;
 field_008 800:750 some_text here ;
 field_009 900:850 some_text here ;
};
Testing regex
time: 0.001946
{'field_001': ['100', '50'],
 'field_002': ['200', '150'],
 'field_003': ['300', '250'],
 'field_004': ['400', '350'],
 'field_005': ['500', '450'],
 'field_006': ['600', '550'],
 'field_007': ['700', '650'],
 'field_008': ['800', '750'],
 'field_009': ['900', '850']}

Here's the output with use_regex = False; I won't bother re-printing the input data.

Testing str.split
time: 0.000100
{'field_001': ['100', '50'],
 'field_002': ['200', '150'],
 'field_003': ['300', '250'],
 'field_004': ['400', '350'],
 'field_005': ['500', '450'],
 'field_006': ['600', '550'],
 'field_007': ['700', '650'],
 'field_008': ['800', '750'],
 'field_009': ['900', '850']}

Now for the real test. I'll set num = 200000 and comment out the lines that print the input and output data.

Testing regex
time: 3.640832

Testing str.split
time: 2.480094

As you can see, the regex version is roughly 50% slower.

These timings were obtained on my ancient 2GHz 32-bit machine running Python 3.6.0, so your speed may vary. ;) If your Python doesn't have time.perf_counter, you can use time.time instead.