Gia*_*ear · 4 · python, optimization, performance, split
I am splitting a text file, using the number of lines as the split variable. I wrote the function below to save the split files in a temporary directory. Every file has 4 million lines except the last one.
import os
import tempfile
from itertools import count, groupby

temp_dir = tempfile.mkdtemp()

def tempfile_split(filename, temp_dir, chunk=4000000):
    with open(filename, 'r') as datafile:
        # count() keeps running across calls, so integer division by
        # chunk gives every block of `chunk` consecutive lines the same key
        groups = groupby(datafile, key=lambda line, counter=count(): next(counter) // chunk)
        for k, group in groups:
            output_name = os.path.normpath(os.path.join(temp_dir, "tempfile_%s.tmp" % k))
            for line in group:
                with open(output_name, 'a') as outfile:
                    outfile.write(line)
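For reference, the grouping trick relies on the count() instance bound as a default argument keeping its state across calls, so each line's index divided by chunk becomes the group key. A minimal sketch of just that mechanism, with a toy chunk size of 3:

from itertools import count, groupby

lines = ["line %d\n" % i for i in range(7)]
counter = count()
# Lines 0-2 get key 0, lines 3-5 get key 1, line 6 gets key 2
for key, group in groupby(lines, key=lambda line: next(counter) // 3):
    print("%s %s" % (key, list(group)))
# 0 ['line 0\n', 'line 1\n', 'line 2\n']
# 1 ['line 3\n', 'line 4\n', 'line 5\n']
# 2 ['line 6\n']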
The main problem is the speed of this function. Splitting one file of 8 million lines into two files of 4 million lines each takes more than 30 minutes on my Windows machine with Python 2.7.
for line in group:
    with open(output_name, 'a') as outfile:
        outfile.write(line)
opens the output file and writes a single line, once for every line in the group. That is slow.
Instead, write once per group:
with open(output_name, 'a') as outfile:
    outfile.write(''.join(group))
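Putting the fix into the original function, a revised version might look like the sketch below. Two details go beyond the answer above and are my assumptions: it uses writelines, which streams the group instead of first building one multi-million-line string with ''.join(group), and it opens in 'w' mode, which is safe because each group key occurs exactly once.

import os
import tempfile
from itertools import count, groupby

def tempfile_split(filename, temp_dir, chunk=4000000):
    with open(filename, 'r') as datafile:
        groups = groupby(datafile, key=lambda line, counter=count(): next(counter) // chunk)
        for k, group in groups:
            output_name = os.path.join(temp_dir, "tempfile_%s.tmp" % k)
            # Open the file a single time per group and stream all of
            # its lines out at once, instead of reopening per line.
            with open(output_name, 'w') as outfile:
                outfile.writelines(group)

# Example use (the input path is hypothetical):
# temp_dir = tempfile.mkdtemp()
# tempfile_split('bigfile.txt', temp_dir)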