Pra*_*are 266 python file-io generator
I have a very large file (4 GB), and when I try to read it my computer hangs. So I want to read it piece by piece, and after each piece has been processed, store the processed part in another file and read the next piece.

Is there any way to yield these pieces?

I would love to have a lazy method.
nos*_*klo 391
To write a lazy function, just use yield:
def read_in_chunks(file_object, chunk_size=1024):
    """Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1k."""
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data


f = open('really_big_file.dat')
for piece in read_in_chunks(f):
    process_data(piece)
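To match the use case in the question (process each chunk and then write the result to another file), a minimal sketch building on read_in_chunks could look like this; process_data and the file names here are placeholder assumptions:

def process_data(piece):
    # placeholder transformation; substitute your real processing
    return piece.upper()

with open('really_big_file.dat') as f_in, open('processed_file.dat', 'w') as f_out:
    for piece in read_in_chunks(f_in):
        f_out.write(process_data(piece))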
Another option is to use iter and a helper function:
f = open('really_big_file.dat')

def read1k():
    return f.read(1024)

for piece in iter(read1k, ''):
    process_data(piece)
If the file is line-based, the file object is already a lazy generator of lines:
for line in open('really_big_file.dat'):
    process_data(line)
小智 40
If your machine, operating system and Python are all 64-bit, you can use the mmap module to map the contents of the file into memory and access it with indexing and slicing. Here is an example from the documentation:
import mmap

with open("hello.txt", "r+b") as f:
    # memory-map the file, size 0 means whole file
    mm = mmap.mmap(f.fileno(), 0)
    # read content via standard file methods
    print(mm.readline())  # prints b"Hello Python!\n"
    # read content via slice notation
    print(mm[:5])  # prints b"Hello"
    # update content using slice notation;
    # note that new content must have same size
    mm[6:] = b" world!\n"
    # ... and read again using standard file methods
    mm.seek(0)
    print(mm.readline())  # prints b"Hello  world!\n"
    # close the map
    mm.close()
If your machine, operating system or Python are 32-bit, then mmap-ing a large file can reserve a large part of the address space and starve your program of memory.
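If you still want memory mapping on such a system, one workaround (a hedged sketch, not part of the original answer) is to map the file in fixed-size windows via mmap's length and offset parameters; the offset must stay a multiple of mmap.ALLOCATIONGRANULARITY, and the window size, file name and process_data below are illustrative assumptions:

import mmap

WINDOW = 64 * 1024 * 1024  # 64 MB; keep it a multiple of mmap.ALLOCATIONGRANULARITY

with open('really_big_file.dat', 'rb') as f:
    f.seek(0, 2)                  # seek to the end to find the file size
    size = f.tell()
    offset = 0
    while offset < size:
        length = min(WINDOW, size - offset)
        window = mmap.mmap(f.fileno(), length, access=mmap.ACCESS_READ, offset=offset)
        try:
            process_data(window)  # placeholder; read the mapping via slices, e.g. window[:100]
        finally:
            window.close()
        offset += length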
小智 32
file.readlines() takes an optional size argument that roughly limits how much is read: it returns complete lines whose combined size adds up to approximately that many bytes.
BUF_SIZE = 65536  # read roughly 64 kB worth of lines at a time

bigfile = open('bigfilename', 'r')
tmp_lines = bigfile.readlines(BUF_SIZE)
while tmp_lines:
    process([line for line in tmp_lines])
    tmp_lines = bigfile.readlines(BUF_SIZE)
use*_*678 24
There are already many good answers, but I recently ran into a similar problem, and the solution I needed is not listed here, so I figured I would add to this thread.
80% of the time I need to read a file line by line. In that case, as suggested in this answer, you want to use the file object itself as a lazy generator:
with open('big.csv') as f:
    for line in f:
        process(line)
However, I recently ran into a very, very large (almost) single-line CSV where the row separator was in fact not '\n' but '|'. Replacing '|' with '\n' before processing was not an option either, because some fields of this CSV contain '\n' (free-text user input). I came up with the following snippet:
def rows(f, chunksize=1024, sep='|'):
    """
    Read a file where the row separator is '|' lazily.

    Usage:

    >>> with open('big.csv') as f:
    >>>     for row in rows(f):
    >>>         process(row)
    """
    curr_row = ''
    while True:
        chunk = f.read(chunksize)
        if chunk == '':  # End of file
            yield curr_row
            break
        while True:
            i = chunk.find(sep)
            if i == -1:
                break
            yield curr_row + chunk[:i]
            curr_row = ''
            chunk = chunk[i+1:]
        curr_row += chunk
I have tested it successfully on big files and with different chunk sizes (I even tried a 1-byte chunk, just to make sure the algorithm does not depend on the chunk size).
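As a quick way to reproduce that kind of test, here is a hedged sketch of a sanity check against an in-memory file with a deliberately tiny chunk size (the sample data is made up for illustration):

import io

sample = io.StringIO('first row|second|row with\nnewline|last')
assert list(rows(sample, chunksize=1)) == ['first row', 'second', 'row with\nnewline', 'last']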
小智 14
In Python 3.8+ you can call .read() in a while loop using the walrus operator (:=):
with open("somefile.txt") as f:
while chunk := f.read(8192):
do_something(chunk)
Of course, you can use any chunk size you want; it does not have to be 8192 (2**13) bytes. Unless your file size happens to be a multiple of the chunk size, the last chunk will be smaller than your chunk size.
myr*_*lav 10
f = ...  # file-like object, i.e. supporting a read(size) method and
         # returning the empty string '' when there is nothing left to read

def chunked(file, chunk_size):
    return iter(lambda: file.read(chunk_size), '')

for data in chunked(f, 65536):
    process_data(data)  # process the data
Update: this approach is best explained in /sf/answers/319656641/
Refer to Python's official documentation: https://docs.python.org/3/library/functions.html#iter

Perhaps this approach is more pythonic:
from functools import partial

# A file object returned by open() is an iterator whose read() method
# lets you specify the block size of the current read.
with open('mydata.db', 'rb') as f_in:
    part_read = partial(f_in.read, 1024 * 1024)
    iterator = iter(part_read, b'')
    for index, block in enumerate(iterator, start=1):
        block = process_block(block)  # process your block data
        with open(f'{index}.txt', 'wb') as f_out:
            f_out.write(block)
小智 5
I think we can write it like this:
def read_file(path, block_size=1024):
    with open(path, 'rb') as f:
        while True:
            piece = f.read(block_size)
            if piece:
                yield piece
            else:
                return


for piece in read_file(path):
    process_piece(piece)