Cha*_*lon 95 python csv file python-2.7
I am currently trying to read data from .csv files in Python 2.7 with up to 1 million rows and 200 columns (the files range from 100 MB to 1.6 GB). I can do this (very slowly) for files with under 300,000 rows, but once I go above that I get memory errors. My code looks like this:
def getdata(filename, criteria):
    data=[]
    for criterion in criteria:
        data.append(getstuff(filename, criterion))
    return data

def getstuff(filename, criterion):
    import csv
    data=[]
    with open(filename, "rb") as csvfile:
        datareader=csv.reader(csvfile)
        for row in datareader:
            if row[3]=="column header":
                data.append(row)
            elif len(data)<2 and row[3]!=criterion:
                pass
            elif row[3]==criterion:
                data.append(row)
            else:
                return data
The reason for the else clause in the getstuff function is that all the rows matching the criterion are listed together in the csv file, so I leave the loop once I have passed them to save time.
My questions are:

How can I get this to work with the larger files?

Is there any way I can make it faster?

My computer has 8 GB of RAM, runs 64-bit Windows 7, and the processor is 3.40 GHz (not sure what information you need).

Thanks a lot for any help!
Mar*_*ers 140
You are reading all rows into a list, then processing that list. Don't do that.

Process your rows as you produce them. If you need to filter the data first, use a generator function:
import csv

def getstuff(filename, criterion):
    with open(filename, "rb") as csvfile:
        datareader = csv.reader(csvfile)
        yield next(datareader)  # yield the header row
        count = 0
        for row in datareader:
            if row[3] == criterion:
                yield row
                count += 1
            elif count:
                # done when having read a consecutive series of rows
                return
I also simplified your filter test; the logic is the same but more concise.

Because you are only matching a single consecutive series of rows matching the criterion, you could also use:
import csv
from itertools import dropwhile, takewhile

def getstuff(filename, criterion):
    with open(filename, "rb") as csvfile:
        datareader = csv.reader(csvfile)
        yield next(datareader)  # yield the header row
        # first row, plus any subsequent rows that match, then stop
        # reading altogether
        # Python 2: use `for row in takewhile(...): yield row` instead
        # of `yield from takewhile(...)`.
        yield from takewhile(
            lambda r: r[3] == criterion,
            dropwhile(lambda r: r[3] != criterion, datareader))
        return
You can now loop over getstuff() directly. Do the same in getdata():
def getdata(filename, criteria):
    for criterion in criteria:
        for row in getstuff(filename, criterion):
            yield row
Now loop directly over getdata() in your code:
for row in getdata(somefilename, sequence_of_criteria):
    # process row
You now hold only one row in memory, instead of thousands of rows per criterion.

yield makes a function a generator function, which means it won't do any work until you start looping over it.
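As a minimal sketch of that laziness (not part of the original answer): calling a generator function only builds the generator object, and the body runs only as the caller iterates.

def numbers():
    print("generator body started")  # runs only once iteration begins
    for n in range(3):
        yield n

gen = numbers()   # nothing is printed yet; no work has been done
for n in gen:     # "generator body started" prints here, then 0, 1, 2
    print(n)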
mma*_*123 31
While Martijn's answer is probably the best, here is a more intuitive way for beginners to process large csv files. It lets you process groups of rows, or chunks, at a time.
import pandas as pd
chunksize = 10 ** 8
for chunk in pd.read_csv(filename, chunksize=chunksize):
    process(chunk)
小智 13
I do a fair amount of vibration analysis and look at very large data sets (trillions of data points). My testing showed the pandas.read_csv() function to be 20 times faster than numpy.genfromtxt(), and the genfromtxt() function to be 3 times faster than numpy.loadtxt(). It seems you need pandas for large data sets.

I posted the code and the data sets I used in this testing on my blog discussing MATLAB vs Python for vibration analysis.
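As a rough, hypothetical sketch of how such a comparison could be timed (this is not the answer's own benchmark; 'test.csv' is a placeholder name for a large, header-less numeric CSV, and the measured ratios will depend on your data), the standard timeit module can drive each reader:

import timeit

# Imports happen in the timed setup so only the reads themselves are measured.
setup = "import numpy as np; import pandas as pd"

readers = {
    "pandas.read_csv": "pd.read_csv('test.csv', header=None)",
    "numpy.genfromtxt": "np.genfromtxt('test.csv', delimiter=',')",
    "numpy.loadtxt": "np.loadtxt('test.csv', delimiter=',')",
}

for name, stmt in readers.items():
    seconds = timeit.timeit(stmt, setup=setup, number=3) / 3
    print(name, seconds, "seconds per read")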
For anyone who lands on this question: using pandas with 'chunksize' and 'usecols' helped me read a huge zipped file faster than the other proposed options.
import pandas as pd

sample_cols_to_keep = ['col_1', 'col_2', 'col_3', 'col_4', 'col_5']

# First set up the dataframe iterator; 'usecols' filters the columns and
# 'chunksize' sets the number of rows per chunk of the csv
# (you can change these parameters as you wish)
df_iter = pd.read_csv('../data/huge_csv_file.csv.gz', compression='gzip',
                      chunksize=20000, usecols=sample_cols_to_keep)

# this list will store the filtered dataframes for later concatenation
df_lst = []

# Iterate over the file based on the criteria and append to the list
for df_ in df_iter:
    tmp_df = (df_.rename(columns={col: col.lower() for col in df_.columns})
                 .pipe(lambda x: x[x.col_1 > 0]))  # filter, e.g. rows where 'col_1' is greater than zero
    df_lst += [tmp_df.copy()]

# And finally combine the filtered chunks in df_lst into the final, larger output, say the 'df_final' dataframe
df_final = pd.concat(df_lst)
What worked for me, and was super fast, is:
import pandas as pd
import dask.dataframe as dd
import time

t = time.clock()
df_train = dd.read_csv('../data/train.csv', usecols=['col1', 'col2'])
df_train = df_train.compute()
print("load train: ", time.clock() - t)
Another solution that works is:
import pandas as pd
from tqdm import tqdm

PATH = '../data/train.csv'
chunksize = 500000
traintypes = {
    'col1': 'category',
    'col2': 'str'}

cols = list(traintypes.keys())

df_list = []  # list to hold the batch dataframes

for df_chunk in tqdm(pd.read_csv(PATH, usecols=cols, dtype=traintypes, chunksize=chunksize)):
    # Can process each chunk of dataframe here
    # clean_data(), feature_engineer(), fit()

    # Alternatively, append the chunk to the list and merge all chunks afterwards
    df_list.append(df_chunk)

# Merge all dataframes into one dataframe
X = pd.concat(df_list)

# Delete the dataframe list to release memory
del df_list
del df_chunk