mar*_*cog 13 python performance file-io large-files
I have a 384 MB text file with 50 million lines. Each line contains two space-separated integers: a key and a value. The file is sorted by key. I need an efficient way to look up the values for a list of about 200 keys in Python.

My current approach is included below. It takes 30 seconds. There must be more efficient Python foo to get this down to a reasonable couple of seconds at most.
# keys holds a sorted list of the keys we need to look up
# (renamed from "list" so the builtin is not shadowed);
# there is a sentinel at the end of keys to simplify the code.
# pointer iterates through the list of keys.
for line in fin:
    key, value = map(int, line.split())
    while key == keys[pointer].key:
        keys[pointer].value = value
        pointer += 1
    while key > keys[pointer].key:
        pointer += 1
        if pointer >= len(keys) - 1:
            break  # end of keys; -1 is due to the sentinel
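For reference, here is a self-contained sketch of the same single-pass merge idea, using plain tuples and a result dict instead of objects with `.key`/`.value` attributes (the `merge_scan` name and the toy data are illustrative, not the asker's actual file):

```python
import io

def merge_scan(fin, search_keys):
    """Single pass over a key-sorted file; search_keys must be sorted."""
    results = {}
    pointer = 0
    for line in fin:
        key, value = map(int, line.split())
        # skip search keys smaller than the current file key
        while pointer < len(search_keys) and search_keys[pointer] < key:
            pointer += 1
        if pointer == len(search_keys):
            break  # all search keys handled
        if search_keys[pointer] == key:
            results[key] = value
            pointer += 1
    return results

# toy example: a "file" of 10 sorted key/value pairs
data = "".join(f"{k} {k * k}\n" for k in range(0, 20, 2))
print(merge_scan(io.StringIO(data), [4, 10, 18]))  # {4: 16, 10: 100, 18: 324}
```

This still reads every line, so it explains the 30-second runtime: the cost is dominated by scanning 50 million lines, not by the lookup logic.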
Coded up the binary search + seek solution (thanks kigurai!):
entries = 24935502  # number of entries
width = 18          # fixed width of an entry in the file, padded with spaces
                    # at the end of each line
for search in search_keys:  # search_keys contains the list of search keys
    left, right = 0, entries - 1
    key = None
    while key != search and left <= right:
        mid = (left + right) // 2  # integer division
        fin.seek(mid * width)
        key, value = map(int, fin.readline().split())
        if search > key:
            left = mid + 1
        else:
            right = mid - 1
    if key != search:
        value = None  # for when the search key is not found
    search.result = value  # store the result of the search
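As a sanity check of this approach, here is a runnable sketch of seek-based binary search over a throwaway fixed-width file (the `WIDTH`, file path, and helper names are illustrative, not the asker's actual data):

```python
import os
import tempfile

WIDTH = 18  # each record padded to exactly 18 bytes, newline included

def write_fixed_width(path, pairs):
    """Write sorted (key, value) pairs as fixed-width lines."""
    with open(path, "wb") as f:
        for k, v in pairs:
            f.write(f"{k} {v}".ljust(WIDTH - 1).encode() + b"\n")

def lookup(fin, entries, target):
    """Binary search a fixed-width, key-sorted file opened in binary mode."""
    left, right = 0, entries - 1
    while left <= right:
        mid = (left + right) // 2
        fin.seek(mid * WIDTH)  # jump straight to record `mid`
        key, value = map(int, fin.readline().split())
        if key == target:
            return value
        if target > key:
            left = mid + 1
        else:
            right = mid - 1
    return None  # target not present

pairs = [(k, k * 10) for k in range(100)]  # already sorted by key
path = os.path.join(tempfile.mkdtemp(), "table.txt")
write_fixed_width(path, pairs)
with open(path, "rb") as fin:
    print(lookup(fin, len(pairs), 42))   # 420
    print(lookup(fin, len(pairs), 999))  # None
```

Opening the file in binary mode matters: arbitrary `seek()` offsets are only well-defined for byte positions, and `int()` happily parses the `bytes` tokens that `split()` returns.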
Han*_*rén 11
If you only need 200 of 50 million lines, then reading all of them into memory is a waste. I would sort the list of search keys and then apply binary search to the file using seek() or something similar. That way you don't read the entire file into memory, which I think should speed things up.
A slight optimization of S.Lott's answer:
from collections import defaultdict
keyValues = defaultdict(list)
targetKeys = # some list of keys as strings
for line in fin:
    key, value = line.split()
    if key in targetKeys:
        keyValues[key].append(value)
Since we are using a dictionary rather than a list, the keys don't have to be numbers. This saves the map() operation and the string-to-integer conversion on every line. If you want the keys to be numeric, do the conversion once per key, rather than once for each of the 50 million lines.
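A minimal runnable sketch of this filtering pass, with targetKeys held in a set so each membership test is O(1) rather than a scan of a list (the toy data and variable names are illustrative):

```python
import io
from collections import defaultdict

target_keys = {"3", "7"}          # keys kept as strings, no int() per line
key_values = defaultdict(list)

data = "1 10\n3 30\n5 50\n7 70\n7 71\n"
for line in io.StringIO(data):
    key, value = line.split()
    if key in target_keys:        # O(1) membership test with a set
        key_values[key].append(value)

print(dict(key_values))  # {'3': ['30'], '7': ['70', '71']}
```

Note this still scans every line of the file; it only avoids the per-line integer conversion, so the binary-search approach above should remain much faster for a handful of keys.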