Mus*_*afa 10
Tags: performance, file-io, pipe, stream, node.js
I am copying a file with Node on an SSD under VMWare, but performance is very low. The benchmark I ran to measure the disk's actual speed is as follows:
$ hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 12004 MB in 1.99 seconds = 6025.64 MB/sec
Timing buffered disk reads: 1370 MB in 3.00 seconds = 456.29 MB/sec
However, the following Node code that copies the file is very slow, and subsequent runs do not make it any faster:
var fs = require("fs");
fs.createReadStream("bigfile").pipe(fs.createWriteStream("tempbigfile"));
Run like this:
$ seq 1 10000000 > bigfile
$ ll bigfile -h
-rw-rw-r-- 1 mustafa mustafa 848M Jun 3 03:30 bigfile
$ time node test.js
real 0m4.973s
user 0m2.621s
sys 0m7.236s
$ time node test.js
real 0m5.370s
user 0m2.496s
sys 0m7.190s
What is the problem here, and how can I speed it up? I believe that just by tuning the buffer size I could write a faster version in C. What confuses me is that when I wrote a simple, almost pv-equivalent program that pipes stdin to stdout, as below, it was very fast.
process.stdin.pipe(process.stdout);
Run like this:
$ dd if=/dev/zero bs=8M count=128 | pv | dd of=/dev/null
128+0 records in 174MB/s] [ <=> ]
128+0 records out
1073741824 bytes (1.1 GB) copied, 5.78077 s, 186 MB/s
1GB 0:00:05 [ 177MB/s] [ <=> ]
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 5.78131 s, 186 MB/s
$ dd if=/dev/zero bs=8M count=128 | dd of=/dev/null
128+0 records in
128+0 records out
1073741824 bytes (1.1 GB) copied, 5.57005 s, 193 MB/s
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 5.5704 s, 193 MB/s
$ dd if=/dev/zero bs=8M count=128 | node test.js | dd of=/dev/null
128+0 records in
128+0 records out
1073741824 bytes (1.1 GB) copied, 4.61734 s, 233 MB/s
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 4.62766 s, 232 MB/s
$ dd if=/dev/zero bs=8M count=128 | node test.js | dd of=/dev/null
128+0 records in
128+0 records out
1073741824 bytes (1.1 GB) copied, 4.22107 s, 254 MB/s
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 4.23231 s, 254 MB/s
$ dd if=/dev/zero bs=8M count=128 | dd of=/dev/null
128+0 records in
128+0 records out
1073741824 bytes (1.1 GB) copied, 5.70124 s, 188 MB/s
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 5.70144 s, 188 MB/s
$ dd if=/dev/zero bs=8M count=128 | node test.js | dd of=/dev/null
128+0 records in
128+0 records out
1073741824 bytes (1.1 GB) copied, 4.51055 s, 238 MB/s
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 4.52087 s, 238 MB/s
Edw*_*rzo 24
I don't know the answer to your question, but perhaps this helps you investigate it.
In the Node.js documentation on streams, under "Streams Under the Hood: Buffering", it says:
Both Writable and Readable streams will buffer data on an internal object called _writableState.buffer or _readableState.buffer, respectively.
The amount of data that will potentially be buffered depends on the highWaterMark option which is passed into the constructor.
[...]
The purpose of streams, especially with the pipe() method, is to limit the buffering of data to acceptable levels, so that sources and destinations of varying speed will not overwhelm the available memory.
So you could play with the buffer sizes to improve the speed:
var fs = require('fs');
var path = require('path');
var from = path.normalize(process.argv[2]);
var to = path.normalize(process.argv[3]);
var readOpts = {highWaterMark: Math.pow(2,16)}; // 65536
var writeOpts = {highWaterMark: Math.pow(2,16)}; // 65536
var source = fs.createReadStream(from, readOpts);
var destiny = fs.createWriteStream(to, writeOpts);
source.pipe(destiny);