Asked by bur*_*bma. Tags: performance, clojure, channels, transducer
At a high level, I understand that transducers don't create any intermediate data structures, whereas a long chain of ->> operations does, so I expect the transducer approach to perform better. That proves true in my first example below. But when I add a clojure.core.async/chan to the mix, I don't get the performance improvement I expect. Clearly there is something I don't understand, and I'd appreciate an explanation.
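To make the "intermediate data structures" point concrete, here is a minimal sketch (a much shorter pipeline than the real one below): each ->> step wraps the previous lazy seq in a new one, while the transducer version folds the composed transformation into a single pass through into.

```clojure
;; Thread-last: every (map ...) materializes another lazy seq.
;; Transducer: one pass, no intermediate sequences.
(let [xs ["1" "2" "3"]
      threaded   (->> xs
                      (map #(Integer. %))
                      (map inc))
      transduced (into [] (comp (map #(Integer. %)) (map inc)) xs)]
  [(vec threaded) transduced])
;; => [[2 3 4] [2 3 4]]
```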
(ns dev
  (:require [clojure.core.async :as async]
            [criterium.core :as crit]))

;; Set up some toy data.
(def n 1e6)
(def data (repeat n "1"))

;; Reusable thread-last operation (the "slower" one).
(defn tx [x]
  (->> x
       (map #(Integer. %))
       (map inc) (map inc) (map inc) (map inc) (map inc) (map inc)
       (map inc) (map inc) (map inc) (map inc) (map inc)))

;; Reusable transducer (the "faster" one).
(def xf
  (comp
    (map #(Integer. %))
    (map inc) (map inc) (map inc) (map inc) (map inc) (map inc)
    (map inc) (map inc) (map inc) (map inc) (map inc)))
;; For these first two I expect the second to be faster, and it is.
(defn nested []
  (last (tx data)))

(defn into-xf []
  (last (into [] xf data)))
;; For the next two I again expect the second to be faster, but it is NOT.
(defn chan-then-nested []
  (let [c (async/chan n)]
    (async/onto-chan! c data)
    (->> c
         (async/into [])
         async/<!!
         tx
         last)))

(defn chan-xf []
  (let [c (async/chan n xf)]
    (async/onto-chan! c data)
    (->> c
         (async/into [])
         async/<!!
         last)))
(comment
  (crit/quick-bench (nested))           ; 1787.672 ms
  (crit/quick-bench (into-xf))          ; 822.8626 ms
  (crit/quick-bench (chan-then-nested)) ; 1535.628 ms
  (crit/quick-bench (chan-xf))          ; 2072.626 ms

  ;; Expected ranking, fastest to slowest:
  ;; into-xf
  ;; nested
  ;; chan-xf
  ;; chan-then-nested

  ;; Actual ranking, fastest to slowest:
  ;; into-xf
  ;; chan-then-nested
  ;; nested
  ;; chan-xf
  )
There are two results at the end that I don't understand. First, why is using a transducer with a channel slower than reading everything off the channel and then doing the nested maps? It seems the "overhead", or whatever it is, of using a transducer with a channel is so large that it swamps the gain from not creating intermediate data structures. Second, and this one was really unexpected, why is putting the data onto a channel, taking it back off, and then using the nested-map technique faster than skipping the channel dance and just using the nested-map technique directly? (Put more briefly: why is chan-then-nested faster than nested?) Could all of this just be an artifact of benchmarking or randomness? (I ran quick-bench several times on each, with the same results.) I wonder if it is related to into calling transduce, while the channel version isn't implemented the same way at all. Transducers present the same interface for applying a transformation across a vector or a channel, but the way the transformation is applied differs; and that difference makes all the difference.
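For reference, here is a minimal sketch of that difference in how the transformation is applied: as I understand it, a transducer attached to a channel is applied to each element individually as it enters the channel's buffer during a put, rather than as one transduce call over the whole collection.

```clojure
(require '[clojure.core.async :as async])

;; The transducer runs per element, at put time, inside the channel's
;; buffering machinery, not in one batch pass over the input.
(def c (async/chan 10 (map inc)))
(async/>!! c 1)   ; inc is applied here, during the put
(async/<!! c)     ; => 2
```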
A few comments on your approach:
- Reduce n: if your functions run faster, criterium can take more samples and so estimate their mean time more accurately. n = 100 is plenty. After making these changes, here is the benchmark data I see:
Evaluation count : 14688 in 6 samples of 2448 calls.
 Execution time mean : 39.978735 µs
 Execution time std-deviation : 1.238587 µs
 Execution time lower quantile : 38.870558 µs ( 2.5%)
 Execution time upper quantile : 41.779784 µs (97.5%)
 Overhead used : 10.162171 ns

Evaluation count : 20094 in 6 samples of 3349 calls.
 Execution time mean : 30.557295 µs
 Execution time std-deviation : 562.641738 ns
 Execution time lower quantile : 29.936152 µs ( 2.5%)
 Execution time upper quantile : 31.330094 µs (97.5%)
 Overhead used : 10.162171 ns

Evaluation count : 762 in 6 samples of 127 calls.
 Execution time mean : 740.642963 µs
 Execution time std-deviation : 176.879454 µs
 Execution time lower quantile : 515.588780 µs ( 2.5%)
 Execution time upper quantile : 949.109898 µs (97.5%)
 Overhead used : 10.162171 ns

Found 2 outliers in 6 samples (33.3333 %)
 low-severe 1 (16.6667 %)
 low-mild 1 (16.6667 %)
 Variance from outliers : 64.6374 % Variance is severely inflated by outliers

Evaluation count : 816 in 6 samples of 136 calls.
 Execution time mean : 748.782942 µs
 Execution time std-deviation : 7.157018 µs
 Execution time lower quantile : 740.139618 µs ( 2.5%)
 Execution time upper quantile : 756.102312 µs (97.5%)
 Overhead used : 10.162171 ns

The key takeaways are:
The difference between chan-then-nested and chan-xf is much smaller than in your version. chan-xf is still a little slower, but comfortably within one standard deviation: that is not a remarkable result.
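For anyone wanting to reproduce these numbers, a sketch of the revised setup, which is just the question's pipeline with n lowered to 100 so each quick-bench run is short enough for criterium to collect many samples (criterium assumed on the classpath):

```clojure
;; Same pipeline shape as the question, with n = 100.
(def n 100)
(def data (repeat n "1"))

(def xf (comp (map #(Integer. %))
              (map inc) (map inc) (map inc) (map inc) (map inc) (map inc)
              (map inc) (map inc) (map inc) (map inc) (map inc)))

(comment
  ;; requires [criterium.core :as crit] on the classpath
  (crit/quick-bench (last (into [] xf data))))
```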