I have a large file with a lot of data that I want to read into a data frame, but it contains some invalid rows. These invalid rows cause read.table to break. I tried the following approach to skip the invalid rows, but it seems to perform very badly.
counts<-count.fields(textConnection(lines),sep="\001")
raw_data<-read.table(textConnection(lines[counts == 34]), sep="\001")
Is there a better way to achieve this? Thanks.
Paolo's answer:
Using @PaulHiemstra's sample data:
read.table("test.csv", sep = ";", fill=TRUE)
Then you can deal with the NAs as you wish.
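For example, a minimal sketch of cleaning up afterwards, assuming the short rows show up as rows padded with NAs (the variable names here are just for illustration):

raw <- read.table("test.csv", sep = ";", fill = TRUE)
clean <- raw[complete.cases(raw), ]   # keep only the rows without NAs

This is essentially what the benchmark further down does with complete.cases.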
Paul Hiemstra's answer:

What you could do is iterate over the lines in the file, and only add the lines that have the correct length.
I defined the following test csv file:
1;2;3;4
1;2;3;4
1;2;3
1;2;3;4
Reading it with read.table fails:
> read.table("test.csv", sep = ";")
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, :
line 3 did not have 4 elements
And now for an iterative approach:
require(plyr)
no_lines = 4
correct_length = 4
file_con = file("test.csv", "r")
result = ldply(1:no_lines, function(line) {
   dum = strsplit(readLines(file_con, n = 1), split = ";")[[1]]
   if(length(dum) == correct_length) {
     return(dum)
   } else {
     cat(sprintf("Skipped line %s\n", line))
     return(NULL)
   }
 })
close(file_con)
> result
V1 V2 V3 V4
1 1 2 3 4
2 1 2 3 4
3 1 2 3 4
Of course this is a trivial example, as the file is really small. Let's create a more challenging example to serve as a benchmark.
# First file with invalid rows
norow = 10e5 # number of rows
no_lines = round(runif(norow, min = 3, max = 4))
no_lines[1] = correct_length
file_content = ldply(no_lines, function(line) paste(1:line, collapse = ";"))
writeLines(paste(file_content[[1]], sep = "\n"), "big_test.csv")
# Same length with valid rows
file_content = ldply(rep(4, norow), function(line) paste(1:line, collapse = ";"))
writeLines(paste(file_content[[1]], sep = "\n"), "big_normal.csv")
And now for the benchmark:
# Iterative approach
system.time({file_con <- file("big_test.csv", "r")
 result_test <- ldply(1:norow, function(line) {
   dum = strsplit(readLines(file_con, n = 1), split = ";")[[1]]
   if(length(dum) == correct_length) {
     return(dum)
   } else {
     # Commenting this out speeds things up by 30%
     #cat(sprintf("Skipped line %s\n", line))
     return(NULL)
   }
 })
 close(file_con)})
user system elapsed
20.559 0.047 20.775
# Normal read.table
system.time(result_normal <- read.table("big_normal.csv", sep = ";"))
user system elapsed
1.060 0.015 1.079
# read.table with fill = TRUE
system.time({result_fill <- read.table("big_test.csv", sep = ";", fill=TRUE)
na_rows <- complete.cases(result_fill)
result_fill <- result_fill[-na_rows,]})
user system elapsed
1.161 0.033 1.203
# Specifying which type the columns are (e.g. character or numeric)
# using the colClasses argument.
system.time({result_fill <- read.table("big_test.csv", sep = ";", fill=TRUE,
colClasses = rep("numeric", 4))
na_rows <- complete.cases(result_fill)
result_fill <- result_fill[-na_rows,]})
user system elapsed
0.933 0.064 1.001
So the iterative approach is considerably slower, but 20 seconds for 1 million rows might be acceptable (although this depends on your definition of acceptable). Especially if you only have to do this once and then save the result using save for later retrieval. The solution suggested by @Paolo is almost as fast as a normal call to read.table. Rows that contain the wrong number of columns (and therefore have NAs) are eliminated using complete.cases. Specifying which classes the columns are further improves performance, and I think this effect becomes bigger as the number of columns and rows grows.
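As a small sketch of that read-once-then-save idea (the file name is just for illustration):

save(result_fill, file = "big_data.RData")   # store the cleaned data frame once
# in a later R session:
load("big_data.RData")                       # restores result_fill without re-parsing the csv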
In conclusion, the best option is to use read.table with fill = TRUE, while specifying the classes of the columns. The iterative approach using ldply is only a good option if you want more flexibility in choosing how to read the lines, e.g. only reading a line if a certain value is above a threshold. But that could probably be done faster by reading all the data into R and then creating a subset. Only when the data is bigger than your RAM can I imagine the iterative approach having its merits.
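For illustration, a minimal sketch of such selective reading with the same ldply pattern; the threshold and the choice of the first field are assumptions made up for this example, not part of the answer:

require(plyr)
threshold <- 0   # hypothetical cut-off, for illustration only
file_con <- file("big_test.csv", "r")
result_subset <- ldply(1:norow, function(line) {
  fields <- strsplit(readLines(file_con, n = 1), split = ";")[[1]]
  # keep the line only if it has the right length and its first field exceeds the threshold
  if(length(fields) == correct_length && as.numeric(fields[1]) > threshold) {
    return(fields)
  }
  return(NULL)
})
close(file_con)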