What is the fastest way to merge 100 CSV files with headers into one, given the following setup:
(The detailed setup is included to make the question's scope more specific; changes were made based on feedback here.)
File 1.csv:
a,b
1,2
File 2.csv:
a,b
3,4
Final out.csv:
a,b
1,2
3,4
According to my benchmarks, the fastest of all the proposed methods is pure Python. Is there a faster way?
Benchmarks (updated with methods from comments and posts):

Method                          Time
pure python                     0.298s
sed                             1.9s
awk                             2.5s
R data.table                    4.4s
R data.table with colClasses    4.4s
Spark 2                         40.2s
python pandas                   1min 11.0s
Tool versions:
sed 4.2.2
awk: mawk 1.3.3 Nov 1996
Python 3.6.1
Pandas 0.20.1
R 3.4.0
data.table 1.10.4
Spark 2.1.1
Code in the Jupyter notebook:
sed (grab the header once, then append every file with its first line deleted by sed 1d):
%%time
!head -1 temp/in/1.csv > temp/merged_sed.csv
!sed 1d temp/in/*.csv >> temp/merged_sed.csv
Pure Python, all binary reads and writes, relying on the undocumented behavior of "next" on binary file objects:
%%time
with open("temp/merged_pure_python2.csv","wb") as fout:
# first file:
with open("temp/in/1.csv", "rb") as f:
fout.write(f.read())
# now the rest:
for num in range(2,101):
with open("temp/in/"+str(num)+".csv", "rb") as f:
next(f) # skip the header
fout.write(f.read())
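The next(f) call leans on undocumented behavior: advancing a binary file's line iterator and then calling read() happens to leave the file position where you'd hope, but this is not guaranteed. A minimal sketch of a documented alternative, assuming the same file layout (the output name merged_pure_python3.csv is illustrative): skip the header with readline() and stream the remainder with shutil.copyfileobj, which also avoids holding each file fully in memory:

import shutil

with open("temp/merged_pure_python3.csv", "wb") as fout:
    # first file: copy it whole, header included
    with open("temp/in/1.csv", "rb") as f:
        shutil.copyfileobj(f, fout)
    # remaining files: readline() moves the position past the header,
    # then copyfileobj streams the body in fixed-size chunks
    for num in range(2, 101):
        with open("temp/in/" + str(num) + ".csv", "rb") as f:
            f.readline()  # skip the header (documented, unlike next())
            shutil.copyfileobj(f, fout)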
awk (NR==1 prints the very first input line, i.e. the header once; FNR==1{next} skips the header of every file; the trailing 1 prints all remaining lines; the doubled braces are escaping for Jupyter's ! shell syntax):
%%time
!awk 'NR==1; FNR==1{{next}} 1' temp/in/*.csv > temp/merged_awk.csv
R data.table (assumes library(data.table) is already loaded in the %%R session; use.names=F binds columns by position rather than by name):
%%time
%%R
filenames <- paste0("temp/in/",list.files(path="temp/in/",pattern="*.csv"))
files <- lapply(filenames, fread)
merged_data <- rbindlist(files, use.names=F)
fwrite(merged_data, file="temp/merged_R_fwrite.csv", row.names=FALSE)
R data.table with colClasses (the generated files have ten integer columns, V1 through V10):
%%time
%%R
filenames <- paste0("temp/in/",list.files(path="temp/in/",pattern="*.csv"))
files <- lapply(filenames, fread, colClasses=c(
    V1="integer", V2="integer", V3="integer", V4="integer", V5="integer",
    V6="integer", V7="integer", V8="integer", V9="integer", V10="integer"))
merged_data <- rbindlist(files, use.names=F)
fwrite(merged_data, file="temp/merged_R_fwrite.csv", row.names=FALSE)
Spark (pyspark). Note that .csv(path) writes a directory of part files rather than a single CSV; coalesce(1) merely ensures there is a single part file inside that directory:
%%time
df = spark.read.format("csv").option("header", "true").load("temp/in/*.csv")
df.coalesce(1).write.option("header", "true").csv("temp/merged_pyspark.csv")
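Because Spark writes temp/merged_pyspark.csv as a directory of part files, a small post-processing step is needed if a single plain CSV is required. A minimal sketch, assuming Spark's default part-*.csv naming and the output path above (the target filename is illustrative):

import glob
import shutil

# coalesce(1) left exactly one part file inside the output directory;
# move it out as an ordinary single CSV file.
part = glob.glob("temp/merged_pyspark.csv/part-*.csv")[0]
shutil.move(part, "temp/merged_pyspark_single.csv")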
Python pandas:
%%time
import glob
import pandas as pd

interesting_files = glob.glob("temp/in/*.csv")
df_list = []
for filename in sorted(interesting_files):
    df_list.append(pd.read_csv(filename))
full_df = pd.concat(df_list)
full_df.to_csv("temp/merged_pandas.csv", index=False)
Data generator (writes 100 files, each with 100,000 rows of ten integer columns):
%%R
library(data.table)  # also assumed to be loaded for the timed cells above
df <- data.table(replicate(10, sample(0:9, 100000, rep=TRUE)))
for (i in 1:100) {
    write.csv(df, paste0("temp/in/", i, ".csv"), row.names=FALSE)
}
According to the benchmarks in the question, the fastest method is pure Python exploiting the undocumented behavior of next() on binary files. The method was proposed by Stefan Pochmann.
Benchmarks (updated with methods from comments and posts):
Method                          Time
pure python                     0.298s
sed                             1.9s
awk                             2.5s
R data.table                    4.4s
R data.table with colClasses    4.4s
Spark 2                         40.2s
python pandas                   1min 11.0s
Tool versions:
sed 4.2.2
awk: mawk 1.3.3 Nov 1996
Python 3.6.1
Pandas 0.20.1
R 3.4.0
data.table 1.10.4
Spark 2.1.1
Pure Python code:
with open("temp/merged_pure_python2.csv","wb") as fout:
# first file:
with open("temp/in/1.csv", "rb") as f:
fout.write(f.read())
# now the rest:
for num in range(2,101):
with open("temp/in/"+str(num)+".csv", "rb") as f:
next(f) # skip the header
fout.write(f.read())
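As a quick sanity check on the merged output: with the generator settings above, the file should contain 100 files × 100,000 data rows plus a single header line. A minimal sketch (path and expected count assume the cells above):

# count lines in the merged file; expect 100 * 100000 + 1 = 10,000,001
with open("temp/merged_pure_python2.csv", "rb") as f:
    line_count = sum(1 for _ in f)
print(line_count)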