I'm running a fresh install of Proxmox 3. For those familiar with it, I'm using the OVH vRack 1.5 (and previously vRack 1.0).
My server has two interfaces, eth0 and eth1. I have successfully configured the private and public IPs on the host node, and I can ping every server on the VLAN.
I have now created an OpenVZ container and assigned it a public and a private IP in the Proxmox GUI (plain venet).
Let's say I'm using 172.16.0.129 for the internal network.
Once logged into the container, I can ping my whole private network without any problem, but I cannot reach any public IP.
Here is the host node configuration:
ifconfig
dummy0 Link encap:Ethernet HWaddr 8a:ee:41:c1:ec:53
inet6 addr: fe80::84ed:41ff:fec1:ec53/64 Scope:Link
UP BROADCAST RUNNING NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:1950 (1.9 KiB)
eth0 Link encap:Ethernet HWaddr 00:32:90:a7:43:48
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:111570 errors:0 dropped:0 overruns:0 frame:0
TX packets:58220 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:140197486 (133.7 MiB) TX bytes:8647245 (8.2 MiB)
eth1 Link encap:Ethernet HWaddr 00:25:90:54:43:49
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:421 errors:0 dropped:0 overruns:0 frame:0
TX packets:93 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:43258 (42.2 KiB) TX bytes:6322 (6.1 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:3879 errors:0 dropped:0 overruns:0 frame:0
TX packets:3879 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2507778 (2.3 MiB) TX bytes:2507778 (2.3 MiB)
venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet6 addr: fe80::1/128 Scope:Link
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:49 errors:0 dropped:0 overruns:0 frame:0
TX packets:28 errors:0 dropped:3 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3535 (3.4 KiB) TX bytes:2236 (2.1 KiB)
vmbr0 Link encap:Ethernet HWaddr 00:25:90:a7:43:48
inet addr:5.135.14.28 Bcast:5.135.14.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:103047 errors:0 dropped:0 overruns:0 frame:0
TX packets:54482 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:137374926 (131.0 MiB) TX bytes:6823790 (6.5 MiB)
vmbr1 Link encap:Ethernet HWaddr 86:ed:41:c1:ec:53
inet6 addr: fe80::84ed:41ff:fec1:ec53/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:578 (578.0 B)
vmbr2 Link encap:Ethernet HWaddr 00:25:90:a7:43:49
inet addr:172.16.0.128 Bcast:172.31.255.255 Mask:255.240.0.0
inet6 addr: fe80::225:90ff:fea7:4349/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:349 errors:0 dropped:0 overruns:0 frame:0
TX packets:69 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:30789 (30.0 KiB) TX bytes:4794 (4.6 KiB)
/etc/network/interfaces:
auto lo
iface lo inet loopback

# for Routing
auto vmbr1
iface vmbr1 inet manual
    post-up /etc/pve/kvm-networking.sh
    bridge_ports dummy0
    bridge_stp off
    bridge_fd 0

# vmbr0: Bridging. Make sure to use only MAC addresses that were assigned to you.
auto vmbr0
iface vmbr0 inet static
    address 5.135.14.28
    netmask 255.255.255.0
    network 5.135.14.0
    broadcast 5.135.14.255
    gateway 5.135.14.254
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# bridge vrack 1.5
auto vmbr2
iface vmbr2 inet static
    address 172.16.0.128
    netmask 255.240.0.0
    broadcast 172.31.255.255
    gateway 172.31.255.254
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
And the routing table:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.16.0.129 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
4.1.5.13 0.0.0.0 255.255.255.255 UH 0 0 0 venet0
5.135.14.0 0.0.0.0 255.255.255.0 U 0 0 0 vmbr0
172.16.0.0 0.0.0.0 255.240.0.0 U 0 0 0 vmbr2
0.0.0.0 5.135.14.254 0.0.0.0 UG 0 0 0 vmbr0
The container's routing table is as follows:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 0.0.0.0 0.0.0.0 U 0 0 0 venet0
And its ifconfig:
venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:127.0.0.2 P-t-P:127.0.0.2 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:3 errors:0 dropped:0 overruns:0 frame:0
TX packets:21 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:252 (252.0 B) TX bytes:1594 (1.5 KB)
venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:172.16.0.129 P-t-P:172.16.0.129 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
venet0:1 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:4.1.5.173 P-t-P:4.1.5.173 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
To sum up:
I compared this with some existing Proxmox setups that work fine, but I cannot find any difference.
Any help would be greatly appreciated. Thanks.
To summarize the relevant thread on the Proxmox forum - http://forum.proxmox.com/threads/5008-Network-issue-setting-up-two-networks-(OpenVZ-container)
You need to use veth (bridged) networking instead of the default venet (routed) networking.
Create two bridge interfaces through the Proxmox GUI (one per bridge/network), then you can configure two network interfaces inside the container, one per network, just as you would on any other kind of server (see the sketches below).
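In this particular setup the bridges already exist (vmbr0 for the public network, vmbr2 for the vRack), so what remains is attaching veth devices to them. Something along these lines should also work from the host's shell instead of the GUI, assuming a vzctl build with bridge support; the container ID 101 is just a placeholder:

# Attach two bridged veth interfaces to container 101 (placeholder CTID).
# Field format: <ifname>[,<mac>,<host_ifname>,<host_mac>,<bridge>];
# fields left empty are auto-generated.
vzctl set 101 --netif_add eth0,,,,vmbr0 --save
vzctl set 101 --netif_add eth1,,,,vmbr2 --save

On OVH, the public-facing interface should use the virtual MAC assigned to the failover IP (see the comment in the host config above); it can be passed as the second field.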
For more information on the differences between venet and veth, see the OpenVZ wiki - http://openvz.org/Differences_Between_venet_and_veth
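To illustrate the second half of the answer, here is a minimal sketch of the container's /etc/network/interfaces once it uses veth. It assumes eth0 is the interface bridged to vmbr0 and eth1 the one bridged to vmbr2, and that 4.1.5.173 is an OVH failover IP routed through the host's gateway 5.135.14.254 (the usual OVH bridged setup; adjust if your public IP is routed differently):

auto lo
iface lo inet loopback

# public side: OVH failover IPs are normally configured as /32,
# with the host's gateway reached through an on-link route
auto eth0
iface eth0 inet static
    address 4.1.5.173
    netmask 255.255.255.255
    post-up route add 5.135.14.254 dev eth0
    post-up route add default gw 5.135.14.254
    post-down route del default gw 5.135.14.254
    post-down route del 5.135.14.254 dev eth0

# private vRack side
auto eth1
iface eth1 inet static
    address 172.16.0.129
    netmask 255.240.0.0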