I have a Linux machine used as an iperf3 client, and I tested two identically equipped Windows 2012 R2 server boxes with Broadcom BCM5721 1Gb adapters (2 ports, but only 1 used for the tests). All machines are connected through a single 1Gb switch.
Testing UDP at, e.g., 300Mbit:
iperf3 -uZVc 192.168.30.161 -b300m -t5 --get-server-output -l8192
results in the loss of 14% of all packets sent (for the other server box with exactly the same hardware but older NIC drivers, loss is around 2%), and loss occurs even at 50Mbit, albeit less severely. TCP performance with equivalent settings:
iperf3 -ZVc 192.168.30.161 -t5 --get-server-output -l8192
yields transmission speeds north of 800Mbit with no reported retransmits.
The server is always started with the following options:
iperf3 -sB192.168.30.161
Who is at fault?
Edit:
I have now tried the other direction, Windows -> Linux. Result: packet loss is always 0, while throughput tops out below line rate and depends on datagram size: it is higher with -l8192 (fragmented IP packets) than with -l1472 (unfragmented IP packets). I guess flow control caps the throughput and prevents packet loss. Especially the latter, unfragmented figure is nowhere near TCP throughput (unfragmented TCP yields figures similar to fragmented TCP), but in terms of packet loss it is a huge improvement over Linux -> Windows.
And how do I find out?
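One way to pin down the fragmentation boundary mentioned above is ping with the don't-fragment flag. A minimal sketch, assuming Linux iputils ping, a standard 1500-byte Ethernet MTU, and the 192.168.30.161 target from the runs above (1472 bytes of payload plus 8 bytes of ICMP header plus 20 bytes of IP header is exactly 1500):

$ ping -M do -s 1472 -c 3 192.168.30.161   # fits in one frame; should succeed
$ ping -M do -s 1473 -c 3 192.168.30.161   # one byte too many; should fail with "message too long"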
I did follow the usual advice for driver settings on the server to maximize performance, and tried enabling/disabling/maximizing/minimizing/changing the various parameters. All offload features are enabled.
Edit: I also tried enabling/disabling further settings; loss rates were similar.
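On the Linux end the corresponding knobs can be inspected with ethtool. A sketch, assuming the interface is named eth0 (the interface name and the gro example are assumptions, not settings the question tested):

$ ethtool -k eth0                              # list offload features (GRO, GSO, TSO, checksums)
$ sudo ethtool -K eth0 gro off                 # example: toggle generic receive offload
$ ethtool -S eth0 | grep -iE 'drop|discard'    # NIC-level drop counters, where the driver exposes them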
Complete output of a UDP run:
$ iperf3 -uZVc 192.168.30.161 -b300m -t5 --get-server-output -l8192
iperf 3.0.7
Linux mybox 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt4-3 (2015-02-03) x86_64 GNU/Linux
Time: Wed, 13 May 2015 13:10:39 GMT
Connecting to host 192.168.30.161, port 5201
Cookie: mybox.1431522639.098587.3451f174
[ 4] local 192.168.30.202 port 50851 connected to 192.168.30.161 port 5201
Starting Test: protocol: UDP, 1 streams, 8192 byte blocks, omitting 0 seconds, 5 second test
[ ID] Interval Transfer Bandwidth Total Datagrams
[ 4] 0.00-1.00 sec 33.3 MBytes 279 Mbits/sec 4262
[ 4] 1.00-2.00 sec 35.8 MBytes 300 Mbits/sec 4577
[ 4] 2.00-3.00 sec 35.8 MBytes 300 Mbits/sec 4578
[ 4] 3.00-4.00 sec 35.8 MBytes 300 Mbits/sec 4578
[ 4] 4.00-5.00 sec 35.8 MBytes 300 Mbits/sec 4577
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-5.00 sec 176 MBytes 296 Mbits/sec 0.053 ms 3216/22571 (14%)
[ 4] Sent 22571 datagrams
CPU Utilization: local/sender 4.7% (0.4%u/4.3%s), remote/receiver 1.7% (0.8%u/0.9%s)
Server output:
-----------------------------------------------------------
Accepted connection from 192.168.30.202, port 44770
[ 5] local 192.168.30.161 port 5201 connected to 192.168.30.202 port 50851
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 5] 0.00-1.01 sec 27.2 MBytes 226 Mbits/sec 0.043 ms 781/4261 (18%)
[ 5] 1.01-2.01 sec 30.0 MBytes 252 Mbits/sec 0.058 ms 734/4577 (16%)
[ 5] 2.01-3.01 sec 29.0 MBytes 243 Mbits/sec 0.045 ms 870/4578 (19%)
[ 5] 3.01-4.01 sec 32.1 MBytes 269 Mbits/sec 0.037 ms 469/4579 (10%)
[ 5] 4.01-5.01 sec 32.9 MBytes 276 Mbits/sec 0.053 ms 362/4576 (7.9%)
A TCP run:
$ iperf3 -ZVc 192.168.30.161 -t5 --get-server-output -l8192
iperf 3.0.7
Linux mybox 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt4-3 (2015-02-03) x86_64 GNU/Linux
Time: Wed, 13 May 2015 13:13:53 GMT
Connecting to host 192.168.30.161, port 5201
Cookie: mybox.1431522833.505583.4078fcc1
TCP MSS: 1448 (default)
[ 4] local 192.168.30.202 port 44782 connected to 192.168.30.161 port 5201
Starting Test: protocol: TCP, 1 streams, 8192 byte blocks, omitting 0 seconds, 5 second test
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 109 MBytes 910 Mbits/sec 0 91.9 KBytes
[ 4] 1.00-2.00 sec 97.3 MBytes 816 Mbits/sec 0 91.9 KBytes
[ 4] 2.00-3.00 sec 97.5 MBytes 818 Mbits/sec 0 91.9 KBytes
[ 4] 3.00-4.00 sec 98.0 MBytes 822 Mbits/sec 0 91.9 KBytes
[ 4] 4.00-5.00 sec 97.6 MBytes 819 Mbits/sec 0 91.9 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-5.00 sec 499 MBytes 837 Mbits/sec 0 sender
[ 4] 0.00-5.00 sec 498 MBytes 836 Mbits/sec receiver
CPU Utilization: local/sender 3.5% (0.5%u/3.0%s), remote/receiver 4.5% (2.0%u/2.5%s)
Server output:
-----------------------------------------------------------
Accepted connection from 192.168.30.202, port 44781
[ 5] local 192.168.30.161 port 5201 connected to 192.168.30.202 port 44782
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 105 MBytes 878 Mbits/sec
[ 5] 1.00-2.00 sec 97.5 MBytes 818 Mbits/sec
[ 5] 2.00-3.00 sec 97.6 MBytes 819 Mbits/sec
[ 5] 3.00-4.00 sec 97.8 MBytes 820 Mbits/sec
[ 5] 4.00-5.00 sec 97.7 MBytes 820 Mbits/sec
There is no problem here. This is normal and expected behavior.
The reason for the packet loss is that UDP has no congestion control whatsoever. With TCP, when the congestion-control algorithm kicks in, it tells the sending end to slow down, in order to maximize throughput and minimize loss.
So this is entirely normal behavior for UDP. UDP does not guarantee delivery: if the receive queue is overloaded, packets get dropped. If you want higher UDP transfer rates, you need to increase the receive buffers.
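On a Linux receiver the kernel-side ceiling for socket receive buffers is set via sysctl, and iperf3's -w option requests a larger socket buffer for the test itself. A sketch with illustrative values (the ~25 MB and 4M figures are assumptions, not recommendations):

$ sudo sysctl -w net.core.rmem_max=26214400      # allow SO_RCVBUF up to ~25 MB
$ sudo sysctl -w net.core.rmem_default=26214400
$ iperf3 -uZVc 192.168.30.161 -b300m -t5 -l8192 -w4M   # -w requests a bigger socket buffer for the test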
The -l or --len iperf option should do the trick, possibly together with the target-bandwidth setting -b on the client.
-l, --len n[KM] 将长度读/写缓冲区设置为 n(默认 8 KB)
8 KB?? That is a little on the small side when there is no congestion control.
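For a sense of scale: at 300 Mbit/s with 8192-byte datagrams the receiver has to drain several thousand datagrams per second, which matches the per-second counts in the runs above:

$ echo $((300000000 / (8192 * 8)))   # datagrams per second at 300 Mbit/s with 8 KB payloads
4577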
For example, on the server side:
~$ iperf -l 1M -U -s
This is what I get Linux to Linux with TCP:
Client connecting to ostore, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.0.107 port 35399 connected with 192.168.0.10 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.10 GBytes 943 Mbits/sec
But for UDP with the default settings I only get:
~$ iperf -u -c ostore
------------------------------------------------------------
Client connecting to ostore, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.0.107 port 52898 connected with 192.168.0.10 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec
[ 3] Sent 893 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec 0.027 ms 0/ 893 (0%)
What?!
After some experimentation I found I had to set both the length and the bandwidth target.
~$ iperf -u -c ostore -l 8192 -b 1G
------------------------------------------------------------
Client connecting to ostore, UDP port 5001
Sending 8192 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.0.107 port 60237 connected with 192.168.0.10 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.12 GBytes 958 Mbits/sec
[ 3] Sent 146243 datagrams
[ 3] WARNING: did not receive ack of last datagram after 10 tries.
Server side:
~$ iperf -s -u -l 5M
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 5242880 byte datagrams
UDP buffer size: 224 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.0.10 port 5001 connected with 192.168.0.107 port 36448
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0-10.1 sec 1008 KBytes 819 Kbits/sec 0.018 ms 0/ 126 (0%)
[ 4] local 192.168.0.10 port 5001 connected with 192.168.0.107 port 60237
[ 4] 0.0-10.0 sec 1.12 GBytes 958 Mbits/sec 0.078 ms 0/146242 (0%)
[ 4] 0.0-10.0 sec 1 datagrams received out-of-order
To demonstrate packet loss I tried with a small buffer. Honestly, it is not as extreme as I expected. Where is a reliable source for iperf3 that I can test with between Linux and Windows?
~$ iperf -u -c ostore -l 1K -b 1G
------------------------------------------------------------
Client connecting to ostore, UDP port 5001
Sending 1024 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.0.107 port 45061 connected with 192.168.0.10 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 674 MBytes 565 Mbits/sec
[ 3] Sent 689777 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 670 MBytes 562 Mbits/sec 0.013 ms 3936/689776 (0.57%)
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order
Server side:
~$ iperf -s -u -l 1K
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1024 byte datagrams
UDP buffer size: 224 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.0.10 port 5001 connected with 192.168.0.107 port 45061
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0-10.0 sec 670 MBytes 562 Mbits/sec 0.013 ms 3936/689776 (0.57%)
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order
Have you also looked at the iperf3 GitHub page README?
Known Issues
UDP performance: Some problems have been noticed with iperf3 at high UDP rates (above 10Gbps) on the ESnet 100G testbed. The symptom is that on any particular run of iperf3 the receiver reports a loss rate of about 20%, regardless of the -b option used on the client side. This issue appears not to be iperf3-specific, and may be due to the placement of the iperf3 process on a CPU and its relation to the inbound NIC. In some cases this problem can be mitigated by an appropriate use of the CPU affinity (-A) option. (Issue 55)
You are using a slower NIC, but I wonder whether it is related.
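If CPU placement plays a role here as well, iperf3's affinity option can pin the processes. A sketch based on the client command from the question, with core numbers chosen arbitrarily:

$ iperf3 -uZVc 192.168.30.161 -b300m -t5 --get-server-output -l8192 -A 2,3   # pin the client to core 2 and ask the server to pin to core 3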