I ran some benchmarks:
require 'benchmark'

words = File.open('/usr/share/dict/words', 'r') do |file|
  file.each_line.take(1_000_000).map(&:chomp)
end

Benchmark.bmbm(20) do |x|
  GC.start
  x.report(:map) do
    words.map do |word|
      word.size if word.size > 5
    end.compact
  end

  GC.start
  x.report(:each_with_object) do
    words.each_with_object([]) do |word, long_sizes|
      long_sizes << word.size if word.size > 5
    end
  end
end
Output (ruby 2.3.0):
Rehearsal --------------------------------------------------------
map                    0.020000   0.000000   0.020000 (  0.016906)
each_with_object       0.020000   0.000000   0.020000 (  0.024695)
----------------------------------------------- total: 0.040000sec

                           user     system      total        real
map                    0.010000   0.000000   0.010000 (  0.015004)
each_with_object       0.020000   0.000000   0.020000 (  0.024183)
This is what I can't understand: I would expect each_with_object to be faster, since it needs only one loop and creates one new object, whereas combining map and compact takes two loops and creates two new arrays. Any ideas?
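For reference, the two pipelines are behaviorally equivalent; a quick sanity check (the word list here is made up):

```ruby
words = %w[apple fig banana kiwi elderberry date]

# Build the result once with map + compact, once with each_with_object.
via_map = words.map { |word| word.size if word.size > 5 }.compact
via_ewo = words.each_with_object([]) do |word, long_sizes|
  long_sizes << word.size if word.size > 5
end

p via_map            # => [6, 10]  (sizes of "banana" and "elderberry")
p via_map == via_ewo # => true
```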
Aet*_*rus 10
Array#<< has to reallocate memory whenever the backing store doesn't have room for the new item. See the implementation, in particular this line:
VALUE target_ary = ary_ensure_room_for_push(ary, 1);
Array#map, by contrast, never has to reallocate, because it knows the size of the result array up front. See in particular this line of the implementation:
collect = rb_ary_new2(RARRAY_LEN(ary));
which allocates a result array with the same capacity as the original, so every element written by the block already has a slot waiting for it.
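The effect is easy to reproduce in plain Ruby by comparing a preallocated array (what map effectively gets via rb_ary_new2) against one grown with <<. A minimal sketch, assuming timings on your machine behave like the ones above:

```ruby
require 'benchmark'

n = 1_000_000
Benchmark.bm(15) do |x|
  x.report('preallocated') do
    out = Array.new(n)        # capacity known up front, like map's result
    n.times { |i| out[i] = i }
  end
  x.report('push') do
    out = []
    n.times { |i| out << i }  # may reallocate repeatedly as it grows
  end
end
```

Both loops build the same array; only the allocation strategy differs, which mirrors the map vs. each_with_object gap in the question's benchmark.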