After upgrading our machines from RHEL 6.6 to RHEL 6.7, we have noticed a problem: 4 out of 30 machines only receive multicast traffic on one of their two slave interfaces. It is unclear whether the upgrade itself is related or whether the reboot it involved triggered the behaviour - reboots are rare here.
We expect a lot of multicast packets to group 239.0.10.200 on 4 different ports. If we check the ethtool statistics on one of the problematic machines, we see the following output:
The healthy interface:
# ethtool -S eth0 |grep mcast
[0]: rx_mcast_packets: 294
[0]: tx_mcast_packets: 0
[1]: rx_mcast_packets: 68
[1]: tx_mcast_packets: 0
[2]: rx_mcast_packets: 2612869
[2]: tx_mcast_packets: 305
[3]: rx_mcast_packets: 0
[3]: tx_mcast_packets: 0
[4]: rx_mcast_packets: 2585571
[4]: tx_mcast_packets: 0
[5]: rx_mcast_packets: 2571341
[5]: tx_mcast_packets: 0
[6]: rx_mcast_packets: 0
[6]: tx_mcast_packets: 8
[7]: rx_mcast_packets: 9
[7]: tx_mcast_packets: 0
rx_mcast_packets: 7770152
tx_mcast_packets: 313
The broken interface:
# ethtool -S eth1 |grep mcast
[0]: rx_mcast_packets: 451
[0]: tx_mcast_packets: 0
[1]: rx_mcast_packets: 0
[1]: tx_mcast_packets: 0
[2]: rx_mcast_packets: 5
[2]: tx_mcast_packets: 304
[3]: rx_mcast_packets: 0
[3]: tx_mcast_packets: 0
[4]: rx_mcast_packets: 5
[4]: tx_mcast_packets: 145
[5]: rx_mcast_packets: 0
[5]: tx_mcast_packets: 0
[6]: rx_mcast_packets: 5
[6]: tx_mcast_packets: 10
[7]: rx_mcast_packets: 0
[7]: tx_mcast_packets: 0
rx_mcast_packets: 466
tx_mcast_packets: 459
Multicast works on the other 10 machines. If we check (using tcpdump) which hosts the broken machine receives multicast from, it only receives from a subset (3-6) of the expected hosts.
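For reference, one way to compare what actually reaches each slave is to capture on the slaves directly instead of on bond0 (a sketch of the kind of capture used; the UDP port filter is omitted here):

# tcpdump -n -i eth0 udp and host 239.0.10.200
# tcpdump -n -i eth1 udp and host 239.0.10.200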
Linux version:
# uname -a
Linux ab31 2.6.32-573.3.1.el6.x86_64 #1 SMP Mon Aug 10 09:44:54 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux
ifconfig:
# ifconfig -a
bond0 Link encap:Ethernet HWaddr 4C:76:25:97:B1:75
inet addr:10.91.20.231 Bcast:10.91.255.255 Mask:255.255.0.0
inet6 addr: fe80::4e76:25ff:fe97:b175/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:18005156 errors:0 dropped:0 overruns:0 frame:0
TX packets:11407592 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:10221086569 (9.5 GiB) TX bytes:2574472468 (2.3 GiB)
eth0 Link encap:Ethernet HWaddr 4C:76:25:97:B1:75
inet6 addr: fe80::4e76:25ff:fe97:b175/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:13200915 errors:0 dropped:0 overruns:0 frame:0
TX packets:3514446 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:9386669124 (8.7 GiB) TX bytes:339950822 (324.2 MiB)
Interrupt:34 Memory:d9000000-d97fffff
eth1 Link encap:Ethernet HWaddr 4C:76:25:97:B1:75
inet6 addr: fe80::4e76:25ff:fe97:b175/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:4804241 errors:0 dropped:0 overruns:0 frame:0
TX packets:7893146 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:834417445 (795.7 MiB) TX bytes:2234521646 (2.0 GiB)
Interrupt:36 Memory:da000000-da7fffff
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:139908 errors:0 dropped:0 overruns:0 frame:0
TX packets:139908 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:210503939 (200.7 MiB) TX bytes:210503939 (200.7 MiB)
Network configuration:
# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=10.91.20.231
NETMASK=255.255.0.0
GATEWAY=10.91.1.25
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="miimon=100 mode=802.3ad"
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
HWADDR="4C:76:25:97:B1:75"
BOOTPROTO=none
ONBOOT="yes"
USERCTL=no
MASTER=bond0
SLAVE=yes
# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
HWADDR="4C:76:25:97:B1:78"
BOOTPROTO=none
ONBOOT="yes"
USERCTL=no
MASTER=bond0
SLAVE=yes
Driver information (identical for eth1):
# ethtool -i eth0
driver: bnx2x
version: 1.710.51-0
firmware-version: FFV7.10.17 bc 7.10.11
bus-info: 0000:01:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
Adapters:
# lspci|grep Ether
01:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)
01:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)
/proc/net/bonding/bond0:
$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 33
Partner Key: 5
Partner Mac Address: 00:01:09:06:09:07
Slave Interface: eth0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 4c:76:25:97:b1:75
Aggregator ID: 1
Slave queue ID: 0
Slave Interface: eth1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 4c:76:25:97:b1:78
Aggregator ID: 1
Slave queue ID: 0
Restarting the broken interface (ifconfig down, ifconfig up) fixes the problem.
Sometimes during startup we see the following message in the syslog (we do not use IPv6). However, the problem occurs even when this message is not logged:
Oct 2 11:27:51 ab30 kernel: bond0: IPv6 duplicate address fe80::4e76:25ff:fe87:9d75 detected!
Syslog output while the interfaces are being configured:
Oct 5 07:44:31 ab31 kernel: bonding: bond0 is being created...
Oct 5 07:44:31 ab31 kernel: bonding: bond0 already exists
Oct 5 07:44:31 ab31 kernel: bond0: Setting MII monitoring interval to 100
Oct 5 07:44:31 ab31 kernel: bond0: Setting MII monitoring interval to 100
Oct 5 07:44:31 ab31 kernel: ADDRCONF(NETDEV_UP): bond0: link is not ready
Oct 5 07:44:31 ab31 kernel: bond0: Setting MII monitoring interval to 100
Oct 5 07:44:31 ab31 kernel: bond0: Adding slave eth0
Oct 5 07:44:31 ab31 kernel: bnx2x 0000:01:00.0: firmware: requesting bnx2x/bnx2x-e2-7.10.51.0.fw
Oct 5 07:44:31 ab31 kernel: bnx2x 0000:01:00.0: eth0: using MSI-X IRQs: sp 120 fp[0] 122 ... fp[7] 129
Oct 5 07:44:31 ab31 kernel: bnx2x 0000:01:00.0: eth0: NIC Link is Up, 10000 Mbps full duplex, Flow control: none
Oct 5 07:44:31 ab31 kernel: bond0: Enslaving eth0 as a backup interface with an up link
Oct 5 07:44:31 ab31 kernel: bond0: Adding slave eth1
Oct 5 07:44:31 ab31 kernel: bnx2x 0000:01:00.1: firmware: requesting bnx2x/bnx2x-e2-7.10.51.0.fw
Oct 5 07:44:31 ab31 kernel: bnx2x 0000:01:00.1: eth1: using MSI-X IRQs: sp 130 fp[0] 132 ... fp[7] 139
Oct 5 07:44:31 ab31 kernel: bnx2x 0000:01:00.1: eth1: NIC Link is Up, 10000 Mbps full duplex, Flow control: none
Oct 5 07:44:31 ab31 kernel: bond0: Enslaving eth1 as a backup interface with an up link
Oct 5 07:44:31 ab31 kernel: ADDRCONF(NETDEV_UP): bond0: link is not ready
Oct 5 07:44:31 ab31 kernel: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
The bond0 interface has joined the multicast group, as shown by ip maddr:
...
4: bond0
inet 239.0.10.200 users 16
...
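Since the NIC's hardware multicast filter is programmed per slave device, it can also be worth comparing the slaves themselves; the group 239.0.10.200 maps to the link-layer multicast address 01:00:5e:00:0a:c8, which should appear on both:

# ip maddr show dev eth0
# ip maddr show dev eth1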
Everything works on other machines on the same network. However, it seems (not 100% verified) that the working machines have a different network adapter:
01:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
When checking the statistics on the switch, we can see data being sent to both interfaces.
As suggested in "Linux kernel not passing through multicast UDP packets", we investigated whether rp_filter could be the problem. Changing those flags, however, did not make any difference for us.
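Changing the flags amounted to something along these lines (the standard rp_filter sysctls, toggled for the bond and its slaves):

# sysctl -w net.ipv4.conf.all.rp_filter=0
# sysctl -w net.ipv4.conf.bond0.rp_filter=0
# sysctl -w net.ipv4.conf.eth0.rp_filter=0
# sysctl -w net.ipv4.conf.eth1.rp_filter=0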
Downgrading the kernel to the one used before the RedHat upgrade did not change anything either.
Any hints on how to troubleshoot this further would be appreciated. Let me know if additional information is needed.
We were seeing this problem on the Dell blade servers we use. After working with Dell support, it appears that we were using IGMPv3 EXCLUDE filtering when joining the multicast group. Apparently the switch in the blade chassis does not support exclude mode, so we were advised to switch to IGMPv3 INCLUDE filtering mode.
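The requested filter mode can be observed on the wire: IGMPv3 membership reports are sent to 224.0.0.22, and tcpdump decodes each group record as to_ex (EXCLUDE) or to_in (INCLUDE). Something like the following, with the capture running while the application (re)joins the group:

# tcpdump -vv -i bond0 igmp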
However, we have since stopped using multicast on our platform, so we will probably never get around to trying this change. Therefore I cannot say for certain that this was the root cause.
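For anyone who does want to try it: the filter mode is chosen by the joining application, not by kernel configuration. A plain IP_ADD_MEMBERSHIP join is announced as EXCLUDE mode (receive from all sources), while a source-specific join with IP_ADD_SOURCE_MEMBERSHIP is announced as INCLUDE mode. A minimal sketch with placeholder addresses (10.91.20.50 is an assumed sender, not from our setup):

/* Join 239.0.10.200 in IGMPv3 INCLUDE mode by naming the source explicitly. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    /* EXCLUDE mode (what we had been doing) would be the usual any-source
     * join: setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, ...). */

    /* INCLUDE mode: join the group, but only for one named sender. */
    struct ip_mreq_source mreqs;
    memset(&mreqs, 0, sizeof(mreqs));
    mreqs.imr_multiaddr.s_addr  = inet_addr("239.0.10.200"); /* group        */
    mreqs.imr_sourceaddr.s_addr = inet_addr("10.91.20.50");  /* placeholder  */
    mreqs.imr_interface.s_addr  = inet_addr("10.91.20.231"); /* bond0 addr   */

    if (setsockopt(sock, IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
                   &mreqs, sizeof(mreqs)) < 0) {
        perror("IP_ADD_SOURCE_MEMBERSHIP");
        return 1;
    }

    /* ... bind() to the port and recvfrom() as usual ... */
    pause();
    close(sock);
    return 0;
}

The trade-off is that every expected sender has to be listed with its own join, which is presumably why applications default to the EXCLUDE-mode any-source join.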