[[sysadmin_network_configuration]]
Network Configuration
---------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

Network configuration can be done either via the GUI, or by manually
editing the file `/etc/network/interfaces`, which contains the
whole network configuration. The `interfaces(5)` manual page contains the
complete format description. All {pve} tools try hard to preserve direct
user modifications, but using the GUI is still preferable, because it
protects you from errors.

Once the network is configured, you can use the traditional Debian tools
`ifup` and `ifdown` to bring interfaces up and down.

NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/`, and commit those changes when
you reboot the node.

Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: en*, systemd network interface names. This naming scheme is
used for new {pve} installations since version 5.0.

* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...) This naming
scheme is used for {pve} hosts which were installed before the 5.0
release. When upgrading to 5.0, the names are kept as-is.

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
separated by a period (`eno1.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.

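
The conventions above can be sketched as a small classifier. The
`classify_iface` helper below is purely illustrative and not part of any
{pve} tooling:

```shell
# Illustrative only: map a device name to its type per the conventions above.
# The VLAN pattern is checked first, since a VLAN name like bond1.30 would
# otherwise also match the bond pattern.
classify_iface() {
    case "$1" in
        *.*)        echo "vlan" ;;               # eno1.50, bond1.30
        en*)        echo "ethernet (systemd)" ;; # eno1, enp3s0f1
        eth[0-9]*)  echo "ethernet (legacy)" ;;  # eth0, eth1
        vmbr[0-9]*) echo "bridge" ;;             # vmbr0 - vmbr4094
        bond[0-9]*) echo "bond" ;;               # bond0, bond1
        *)          echo "unknown" ;;
    esac
}

classify_iface bond1.30   # -> vlan
classify_iface vmbr0      # -> bridge
```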
Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Systemd uses the two-character prefix 'en' for Ethernet network
devices. The next characters depend on the device driver and on
which schema matches first:

* o<index>[n<phys_port_name>|d<dev_port>] — devices on board

* s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by hotplug id

* [P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id

* x<MAC> — devices by MAC address

The most common patterns are:

* `eno1` — the first on-board NIC

* `enp3s0f1` — the NIC on PCI bus 3, slot 0, using NIC function 1

For more information see [Predictable Network Interface Names].

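
A bus-based name can be decoded mechanically. The following sketch (our
own helper, not part of systemd) splits an `enp<bus>s<slot>f<function>`
name into its components with `sed`:

```shell
# Illustrative only: decode an enp<bus>s<slot>f<function> style name.
# Prints nothing if the name does not follow that pattern.
parse_bus_name() {
    printf '%s\n' "$1" |
        sed -n 's/^enp\([0-9]\{1,\}\)s\([0-9]\{1,\}\)f\([0-9]\{1,\}\)$/bus=\1 slot=\2 function=\3/p'
}

parse_bus_name enp3s0f1   # -> bus=3 slot=0 function=1
```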
Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Depending on your current network organization and your resources, you can
choose either a bridged, routed, or masquerading networking setup.

{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.

Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing
the role of the switch.

{pve} server at hosting provider, with public IP ranges for Guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For this setup, you can use either a *Bridged* or *Routed* model, depending on
what your provider allows.

{pve} server at hosting provider, with a single public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In that case the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding*.

For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="default-network-setup-bridge.svg"]
Bridges are like physical network switches implemented in software.
All virtual guests can share a single bridge, or you can create multiple
bridges to separate network domains. Each host can have up to 4094 bridges.

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card. The corresponding
configuration in `/etc/network/interfaces` might look like this:

----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address
        netmask
        gateway
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.

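
If you manage many hosts, a stanza like the one above can be rendered from
a script. The helper below is a hypothetical sketch: the function name, its
arguments, and the `192.0.2.10` documentation-range address are our own,
not {pve} tooling:

```shell
# Hypothetical helper: render an /etc/network/interfaces bridge stanza.
bridge_stanza() {
    # $1 = bridge name, $2 = address, $3 = netmask, $4 = slave port
    cat <<EOF
auto $1
iface $1 inet static
        address $2
        netmask $3
        bridge_ports $4
        bridge_stp off
        bridge_fd 0
EOF
}

bridge_stanza vmbr0 192.0.2.10 255.255.255.0 eno1
```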
Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

[thumbnail="default-network-setup-routed.svg"]
A common scenario is that you have a public IP (assume ``
for this example), and an additional IP block for your VMs
(``). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address
        netmask
        gateway
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address
        netmask
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----

Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Masquerading allows guests having only a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
packet is rewritten by `iptables` to appear as originating from the host,
and responses are rewritten accordingly to be routed to the original sender.

----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address
        netmask
        gateway

auto vmbr0
#private sub network
iface vmbr0 inet static
        address
        netmask
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '' -o eno1 -j MASQUERADE
----

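
The `post-up`/`post-down` pair above must stay symmetric: `-A` appends the
rule when the interface comes up, and `-D` deletes the identical rule when
it goes down. A small sketch that builds both rules from one template, with
a made-up `10.10.10.0/24` subnet standing in for your private bridge
network:

```shell
# Sketch: generate matching add/delete MASQUERADE rules so the
# post-up and post-down lines can never drift apart.
nat_rule() {
    # $1 = -A (append) or -D (delete), $2 = private subnet, $3 = uplink
    printf 'iptables -t nat %s POSTROUTING -s %s -o %s -j MASQUERADE\n' "$1" "$2" "$3"
}

nat_rule -A 10.10.10.0/24 eno1
nat_rule -D 10.10.10.0/24 eno1
```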
Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It can serve
different goals, such as making the network fault-tolerant,
increasing performance, or both.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and improve the
speed of data replication between Proxmox VE Cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPV4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface, such that different
network peers use different MAC addresses for their network packet
traffic.

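
The balance-xor selection rule can be checked with a bit of shell
arithmetic. This simplified sketch XORs only the last octet of each MAC
(the kernel's layer2 hash policy XORs all six octets before taking the
modulo):

```shell
# Simplified balance-xor slave selection:
# (src XOR dst) mod slave_count, here on the last MAC octet only.
xor_slave() {
    # $1/$2 = last octet of source/destination MAC, $3 = number of slaves
    echo $(( ($1 ^ $2) % $3 ))
}

xor_slave 0x1a 0x2f 2   # 0x1a ^ 0x2f = 0x35 = 53; 53 % 2 -> slave 1
xor_slave 0x1a 0x2e 2   # 0x1a ^ 0x2e = 0x34 = 52; 52 % 2 -> slave 0
```

Note that because the destination MAC fully determines the result, a
single peer always uses one slave; the aggregate speedup only appears
across many peers.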
If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend using
the corresponding bonding mode (802.3ad). Otherwise you should generally use the
active-backup mode. +
//
If you intend to run your cluster network on the bonding interfaces, then you
have to use active-passive mode on the bonding interfaces; other modes are
unsupported.

The following bond configuration can be used as a distributed/shared
storage network. The benefit would be that you get more speed and the
network will be fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet static
        slaves eno1 eno2
        address
        netmask
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address
        netmask
        gateway
        bridge_ports eno3
        bridge_stp off
        bridge_fd 0
----

[thumbnail="default-network-setup-bond.svg"]
Another possibility is to use the bond directly as a bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        slaves eno1 eno2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address
        netmask
        gateway
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----

VLAN 802.1Q
~~~~~~~~~~~

A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated in the network at layer two. So it is possible to have
multiple networks (up to 4096) on a physical network, each independent
of the other ones.

Each VLAN network is identified by a number often called 'tag'.
Network packets are then 'tagged' to identify which virtual network
they belong to.

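
The tag is a 12-bit field in the 802.1Q header, which is where the limit
of 4096 comes from; IDs 0 and 4095 are reserved, leaving 4094 usable tags
(matching the `vmbr0` - `vmbr4094` bridge naming range above):

```shell
# 802.1Q VLAN ID: a 12-bit field gives 4096 values; subtracting the two
# reserved IDs (0 and 4095) leaves 4094 usable tags.
total=$(( 1 << 12 ))
usable=$(( total - 2 ))
echo "$total $usable"   # -> 4096 4094
```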
VLAN for Guest Networks
^^^^^^^^^^^^^^^^^^^^^^^

{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:

* *VLAN awareness on the Linux bridge:*
In this case, each guest's virtual network card is assigned to a VLAN tag,
which is transparently supported by the Linux bridge.
Trunk mode is also possible, but that makes configuration
in the guest necessary.

* *"traditional" VLAN on the Linux bridge:*
In contrast to the VLAN awareness method, this method is not transparent
and creates a VLAN device with an associated bridge for each VLAN.
That is, creating a guest on VLAN 5, for example, would create two
interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs.

* *Open vSwitch VLAN:*
This mode uses the OVS VLAN feature.

* *Guest configured VLAN:*
VLANs are assigned inside the guest. In this case, the setup is
completely done inside the guest and can not be influenced from the
outside. The benefit is that you can use more than one VLAN on a
single virtual NIC.

VLAN on the Host
^^^^^^^^^^^^^^^^

To allow host communication with an isolated network, it is possible
to apply VLAN tags to any network device (NIC, bond, bridge). In
general, you should configure the VLAN on the interface with the least
abstraction layers between itself and the physical NIC.

For example, suppose you start from a default configuration and want to
place the host management address on a separate VLAN.

.Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address
        netmask
        gateway
        bridge_ports eno1.5
        bridge_stp off
        bridge_fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
----

.Example: Use VLAN 5 for the {pve} management IP with VLAN aware Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0.5
iface vmbr0.5 inet static
        address
        netmask
        gateway

auto vmbr0
iface vmbr0 inet manual
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
        bridge_vlan_aware yes
----

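
For quick experiments, a VLAN interface can also be created at runtime with
`ip(8)` instead of editing `/etc/network/interfaces` (such changes do not
survive a reboot). The helper below only prints the commands rather than
executing them, so they can be reviewed first; the function name is our own:

```shell
# Print (not execute) the ip(8) commands that create VLAN $2 on device $1.
vlan_cmds() {
    printf 'ip link add link %s name %s.%s type vlan id %s\n' "$1" "$1" "$2" "$2"
    printf 'ip link set %s.%s up\n' "$1" "$2"
}

vlan_cmds eno1 5
# ip link add link eno1 name eno1.5 type vlan id 5
# ip link set eno1.5 up
```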
The next example is the same setup, but a bond is used to
make this network fail-safe.

.Example: Use VLAN 5 with bond0 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        slaves eno1 eno2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address
        netmask
        gateway
        bridge_ports bond0.5
        bridge_stp off
        bridge_fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----

////
TODO: explain IPv6 support?
TODO: explain OVS
////