Mar*_*sar 18 python csv list chunks
Basically, I have the following process:
import csv
reader = csv.reader(open('huge_file.csv', 'rb'))

for line in reader:
    process_line(line)
See this related question. I want to send the lines in batches of 100 rows, to implement batch sharding.
The problem with applying the related answer is that the csv reader object is not subscriptable and does not support len():
>>> import csv
>>> reader = csv.reader(open('dataimport/tests/financial_sample.csv', 'rb'))
>>> len(reader)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: object of type '_csv.reader' has no len()
>>> reader[10:]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: '_csv.reader' object is unsubscriptable
>>> reader[10]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: '_csv.reader' object is unsubscriptable
How can I solve this?
mik*_*iku 23
Just make your reader subscriptable by wrapping it in a list. Obviously this will break on really large files (see the alternatives in the updates below):
>>> reader = csv.reader(open('big.csv', 'rb'))
>>> lines = list(reader)
>>> print lines[:100]
...
Further reading: How do you split a list into evenly sized chunks in Python?
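To make that approach concrete, here is a minimal sketch (in the same Python 2 style as the rest of the answer) that slices the materialized list into fixed-size pieces; process_chunk stands in for whatever batch handler you use:

import csv

reader = csv.reader(open('big.csv', 'rb'))
lines = list(reader)  # materializes the entire file in memory
chunksize = 100

# walk the list in steps of 100 and slice out one batch at a time
for start in range(0, len(lines), chunksize):
    process_chunk(lines[start:start + chunksize])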
Update 1 (list version): Another possible approach is to process each chunk as it arrives while iterating over the rows:
#!/usr/bin/env python

import csv
reader = csv.reader(open('4956984.csv', 'rb'))

chunk, chunksize = [], 100

def process_chunk(chunk):
    print len(chunk)
    # do something useful ...

for i, line in enumerate(reader):
    if (i % chunksize == 0 and i > 0):
        process_chunk(chunk)
        del chunk[:]  # or: chunk = []
    chunk.append(line)

# process the remainder
process_chunk(chunk)
Update 2 (generator version): I haven't benchmarked it, but maybe you can improve performance by using a chunk generator:
#!/usr/bin/env python

import csv
reader = csv.reader(open('4956984.csv', 'rb'))

def gen_chunks(reader, chunksize=100):
    """
    Chunk generator. Take a CSV `reader` and yield
    `chunksize` sized slices.
    """
    chunk = []
    for i, line in enumerate(reader):
        if (i % chunksize == 0 and i > 0):
            yield chunk
            # start a fresh list; `del chunk[:]` would clear the very list
            # the consumer just received
            chunk = []
        chunk.append(line)
    yield chunk

for chunk in gen_chunks(reader):
    print chunk  # process chunk

# test gen_chunks on some dummy sequence:
for chunk in gen_chunks(range(10), chunksize=3):
    print chunk  # process chunk

# => yields
# [0, 1, 2]
# [3, 4, 5]
# [6, 7, 8]
# [9]
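Not from the original answer, but for comparison: the same generator can be built on itertools.islice from the standard library, which removes the index bookkeeping entirely. Note that the input must be an iterator (csv.reader is one); wrap a plain sequence with iter() first:

from itertools import islice

def gen_chunks_islice(reader, chunksize=100):
    """Yield lists of up to `chunksize` rows from an iterator."""
    while True:
        chunk = list(islice(reader, chunksize))
        if not chunk:  # the iterator is exhausted
            break
        yield chunk

# same dummy-sequence test as above; note the iter() wrapper
for chunk in gen_chunks_islice(iter(range(10)), chunksize=3):
    print chunk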
We can use the pandas module to handle these large csv files.
import pandas as pd

# read the file lazily in 1000-row chunks, then concatenate into one frame
temp = pd.read_csv('BIG_File.csv', iterator=True, chunksize=1000)
df = pd.concat(temp, ignore_index=True)
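Note that pd.concat materializes the whole file back into a single DataFrame, which gives up the memory savings of chunked reading. If the goal is batch processing, a minimal sketch that keeps the chunks separate (process_chunk here is a placeholder for your own per-batch handler):

import pandas as pd

# pd.read_csv with chunksize yields one DataFrame per 1000 rows,
# so the full file is never held in memory at once
for chunk in pd.read_csv('BIG_File.csv', chunksize=1000):
    process_chunk(chunk)  # placeholder: handle one batch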