When parsing larger files, I need to write a large number of parquet files in a loop. However, the memory consumed by this task seems to grow with every iteration, whereas I would expect it to stay constant (nothing should be kept appended in memory). This makes it hard to scale.
I have added a minimal reproducible example below that creates 10 000 parquet files and appends to them in a loop.
import resource
import random
import string
import pyarrow as pa
import pyarrow.parquet as pq
import pandas as pd
def id_generator(size=6, chars=string.ascii_uppercase + string.digits):
    return ''.join(random.choice(chars) for _ in range(size))

schema = pa.schema([
    pa.field('test', pa.string()),
])
resource.setrlimit(resource.RLIMIT_NOFILE, (1000000, 1000000))
number_files = 10000
number_rows_increment = 1000
number_iterations = 100
writers = [pq.ParquetWriter('test_'+id_generator()+'.parquet', schema) for i in range(number_files)]
for i in range(number_iterations):
    for writer in writers:
        table_to_write = pa.Table.from_pandas(
            pd.DataFrame({'test': [id_generator() for _ in range(number_rows_increment)]}),
            preserve_index=False,
            schema=schema)
        writer.write_table(table_to_write)
    # report peak resident set size after each full pass over the writers (kilobytes on Linux)
    print(i, '/', number_iterations, 'max RSS:', resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)

for writer in writers:
    writer.close()
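To narrow down where the growth comes from, it can help to log pyarrow's own allocation counter next to the process RSS: if pa.total_allocated_bytes() stays flat while RSS keeps climbing, the retained memory is held outside Arrow's memory pool (for example, each open ParquetWriter accumulates per-row-group footer metadata until it is closed). Below is a minimal instrumentation sketch; the helper name log_memory is just illustrative, not part of any API.

import resource
import pyarrow as pa

def log_memory(step):
    # bytes currently allocated by Arrow's default memory pool
    arrow_bytes = pa.total_allocated_bytes()
    # peak resident set size of the process (kilobytes on Linux)
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f'{step}: arrow pool = {arrow_bytes} B, max RSS = {rss}')

# called once per outer iteration in the loop above, e.g. log_memory(i)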