I wanted to have a look at the Julia language, so I wrote a small script to import a dataset I'm working with. When I ran and profiled the script, it turned out to be much slower than a similar script in R, and the profiler pointed at all the cat commands as the performance problem.
The file looks like this:
#
#Metadata
#
Identifier1 data_string1
Identifier2 data_string2
Identifier3 data_string3
Identifier4 data_string4
//
Essentially I want to take the data_strings and split them into a matrix of single characters. Here is a minimal code example:
function loadfile()
    f = open("/file1")
    first = true
    m = Array(Any, 1, 0)
    for ln in eachline(f)
        # skip comment lines, blank lines, and the // terminator
        if ln[1] != '#' && ln[1] != '\n' && ln[1] != '/'
            s = split(ln[1:end-1])
            s = split(s[2], "")
            if first
                m = reshape(s, 1, length(s))
                first = false
            else
                s = reshape(s, 1, length(s))
                println(size(m))
                println(size(s))
                # grow the matrix one row at a time -- this is where the
                # profiler says the time goes
                m = vcat(m, s)
            end
        end
    end
end
Any idea why Julia might be slow here because of the cat commands, or how I could do this differently?
Thanks for any suggestions!
Using cat like this is slow because it needs a lot of memory allocation. Every time we call vcat we allocate a whole new array m that is mostly the same as the old m. Here is how I would rewrite your code in a more Julian way, where m is only created at the end:
function loadfile2()
    f = open("./sotest.txt", "r")
    lines = Any[]
    for ln in eachline(f)
        if ln[1] == '#' || ln[1] == '\n' || ln[1] == '/'
            continue
        end
        data_str = split(ln[1:end-1], " ")[2]
        data_chars = split(data_str, "")
        # Can make this even faster (2x in my tests) with
        # data_chars = [data_str[i] for i in 1:length(data_str)]
        # but that inherently assumes ASCII data
        push!(lines, data_chars)
    end
    m = hcat(lines...)'  # stick the column vectors together, then transpose
end
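For a quick sanity check, calling it might look like this (a minimal sketch; if ./sotest.txt contains just the four sample lines from the question, each data_string is 12 characters, so size(m) should come out as (4,12)):
m = loadfile2()
println(size(m))   # (number of data lines, characters per data_string)
println(m[1, :])   # the first data_string, split into single characters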
I made a 10,000-line version of your sample data and measured the following performance:
Old version:
elapsed time: 3.937826405 seconds (3900659448 bytes allocated, 43.81% gc time)
elapsed time: 3.581752309 seconds (3900645648 bytes allocated, 36.02% gc time)
elapsed time: 3.57753696 seconds (3900645648 bytes allocated, 37.52% gc time)
New version:
elapsed time: 0.010351067 seconds (11568448 bytes allocated)
elapsed time: 0.011136188 seconds (11568448 bytes allocated)
elapsed time: 0.010654002 seconds (11568448 bytes allocated)
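The allocation counts are the real story: growing m with vcat copies every existing row on each iteration, so the total copying is roughly quadratic in the number of lines, while push! only appends a reference and hcat does a single big copy at the end. Here is a minimal sketch that isolates just that pattern (the grow_by_vcat/grow_by_push names and the 10,000 x 12 size are made up here to mirror the test above):
function grow_by_vcat(n, width)
    m = fill('a', 1, width)
    for i in 2:n
        m = vcat(m, fill('a', 1, width))   # reallocates and copies all existing rows
    end
    return m                               # n x width
end

function grow_by_push(n, width)
    rows = Any[]
    for i in 1:n
        push!(rows, fill('a', width))      # just appends a reference
    end
    return hcat(rows...)                   # one allocation at the end
end                                        # (width x n; the shape isn't the point, the allocation pattern is)

grow_by_vcat(10, 12); grow_by_push(10, 12)  # warm up so compilation isn't timed
@time grow_by_vcat(10000, 12)
@time grow_by_push(10000, 12)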