[[sysadmin_network_configuration]]
Network Configuration
---------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

Network configuration can be done either via the GUI, or by manually
editing the file `/etc/network/interfaces`, which contains the
whole network configuration. The `interfaces(5)` manual page contains the
complete format description. All {pve} tools try hard to preserve direct
user modifications, but using the GUI is still preferable, because it
protects you from errors.

Once the network is configured, you can use the traditional Debian tools
`ifup` and `ifdown` to bring interfaces up and down.
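For example, to restart a single interface after editing its stanza (a sketch; the interface name `eno1` is a placeholder, adjust it to your hardware):

----
# bring the interface down, then up again with the new configuration
ifdown eno1
ifup eno1
----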
Apply Network Changes
~~~~~~~~~~~~~~~~~~~~~

{pve} does not write changes directly to `/etc/network/interfaces`. Instead, we
write into a temporary file called `/etc/network/`, this way you
can do many related changes at once. This also allows you to ensure that your
changes are correct before applying them, as a wrong network configuration may
render a node inaccessible.
Reboot Node to apply
^^^^^^^^^^^^^^^^^^^^

With the default installed `ifupdown` network managing package, you need to
reboot to commit any pending network changes. Most of the time, the basic {pve}
network setup is stable and does not change often, so rebooting should not be
required often.
Reload Network with ifupdown2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

With the optional `ifupdown2` network managing package, you can also reload the
network configuration live, without requiring a reboot.

NOTE: 'ifupdown2' cannot understand 'OpenVSwitch' syntax, so reloading is *not*
possible if OVS interfaces are configured.

Since {pve} 6.1 you can apply pending network changes over the web interface,
using the 'Apply Configuration' button in the 'Network' panel of a node.

To install 'ifupdown2', ensure you have the latest {pve} updates installed.

WARNING: installing 'ifupdown2' will remove 'ifupdown', but as the removal
scripts of 'ifupdown' before version '0.8.35+pve1' have an issue where the
network is fully stopped on removal footnote:[Introduced with Debian Buster:
] you *must* ensure
that you have an up-to-date 'ifupdown' package version.

For the installation itself, you can then simply do:

----
apt install ifupdown2
----

With that, you're all set. You can also switch back to the 'ifupdown' variant
at any time, if you run into issues.
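Once 'ifupdown2' is installed, pending changes can then be applied live from the shell as well (a sketch; `ifreload -a` reloads all interfaces whose configuration has changed):

----
# apply the current /etc/network/interfaces configuration without a reboot
ifreload -a
----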
Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: en*, systemd network interface names. This naming scheme is
used for new {pve} installations since version 5.0.

* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...) This naming
scheme is used for {pve} hosts which were installed before the 5.0
release. When upgrading to 5.0, the names are kept as-is.

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
separated by a period (`eno1.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.
Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Systemd uses the two-character prefix 'en' for Ethernet network
devices. The next characters depend on the device driver and on which
schema matches first:

* o<index>[n<phys_port_name>|d<dev_port>] — devices on board

* s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by hotplug id

* [P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id

* x<MAC> — devices by MAC address

The most common patterns are:

* eno1 — is the first on-board NIC

* enp3s0f1 — is the NIC on PCI bus 3, slot 0, using NIC function 1.

For more information, see [Predictable Network Interface Names].
Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Depending on your current network organization and your resources, you can
choose either a bridged, routed, or masquerading networking setup.

{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.

Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing
the role of the switch.

{pve} server at hosting provider, with public IP ranges for Guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For this setup, you can use either a *Bridged* or *Routed* model, depending on
what your provider allows.

{pve} server at hosting provider, with a single public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In that case the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding*.
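As a sketch, such port forwarding can be done with an `iptables` DNAT rule. The interface name `eno1`, the host port `2222`, and the guest address `192.0.2.10` below are hypothetical placeholders:

----
# forward TCP port 2222 on the host to SSH (port 22) on a guest
# (hypothetical guest address 192.0.2.10 behind the bridge)
iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 \
    -j DNAT --to-destination 192.0.2.10:22
----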
For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.
Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="default-network-setup-bridge.svg"]
Bridges are like physical network switches implemented in software.
All virtual guests can share a single bridge, or you can create multiple
bridges to separate network domains. Each host can have up to 4094 bridges.

The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card. The corresponding
configuration in `/etc/network/interfaces` might look like this:

----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address
        netmask
        gateway
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
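A guest's virtual NIC is then attached to the bridge. For example (a sketch using the hypothetical VM ID `100`), via the `qm` CLI:

----
# attach the first network device of VM 100 to bridge vmbr0
qm set 100 --net0 virtio,bridge=vmbr0
----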
Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure, because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

[thumbnail="default-network-setup-routed.svg"]
A common scenario is that you have a public IP (assume ``
for this example), and an additional IP block for your VMs
(``). We recommend the following setup for such
situations:
----
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address
        netmask
        gateway
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address
        netmask
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Masquerading allows guests having only a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
packet is rewritten by `iptables` to appear as originating from the host,
and responses are rewritten accordingly to be routed to the original sender.

----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address
        netmask
        gateway

auto vmbr0
#private sub network
iface vmbr0 inet static
        address
        netmask
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '' -o eno1 -j MASQUERADE
----
Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It is possible
to achieve different goals, like making the network fault-tolerant,
increasing performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.

Aggregated links can improve live-migration delays and improve the
speed of replication of data between Proxmox VE Cluster nodes.
There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPV4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface, such that different
network peers use different MAC addresses for their network packet
traffic.
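The balance-xor slave selection above can be illustrated with simple shell arithmetic. This is a sketch using hypothetical last octets of the two MAC addresses; the kernel's layer2 hash policy XORs the full addresses:

```shell
# slave index = (source MAC XOR destination MAC) modulo slave count
src=0x5e     # hypothetical last octet of the source MAC
dst=0xa7     # hypothetical last octet of the destination MAC
slaves=2     # two NICs in the bond
echo $(( (src ^ dst) % slaves ))   # → 1
```

Because the hash depends only on the two addresses, traffic for a given destination MAC always leaves through the same slave.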
If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend
using the corresponding bonding mode (802.3ad). Otherwise you should generally
use the active-backup mode. +
//
If you intend to run your cluster network on the bonding interfaces, then you
have to use active-passive mode on the bonding interfaces; other modes are
unsupported.

The following bond configuration can be used as a distributed/shared
storage network. The benefit would be that you get more speed and the
network will be fault-tolerant.
.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet static
        slaves eno1 eno2
        address
        netmask
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address
        netmask
        gateway
        bridge_ports eno3
        bridge_stp off
        bridge_fd 0
----
[thumbnail="default-network-setup-bond.svg"]
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        slaves eno1 eno2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address
        netmask
        gateway
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----
VLAN 802.1Q
~~~~~~~~~~~

A virtual LAN (VLAN) is a broadcast domain that is partitioned and
isolated in the network at layer two. So it is possible to have
multiple networks (4096) in a physical network, each independent of
the other ones.

Each VLAN network is identified by a number often called 'tag'.
Network packets are then 'tagged' to identify which virtual network
they belong to.
VLAN for Guest Networks
^^^^^^^^^^^^^^^^^^^^^^^

{pve} supports this setup out of the box. You can specify the VLAN tag
when you create a VM. The VLAN tag is part of the guest network
configuration. The networking layer supports different modes to
implement VLANs, depending on the bridge configuration:

* *VLAN awareness on the Linux bridge:*
In this case, each guest's virtual network card is assigned to a VLAN tag,
which is transparently supported by the Linux bridge.
Trunk mode is also possible, but that makes configuration
in the guest necessary.

* *"traditional" VLAN on the Linux bridge:*
In contrast to the VLAN awareness method, this method is not transparent
and creates a VLAN device with associated bridge for each VLAN.
That is, creating a guest on VLAN 5 for example, would create two
interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs.

* *Open vSwitch VLAN:*
This mode uses the OVS VLAN feature.

* *Guest configured VLAN:*
VLANs are assigned inside the guest. In this case, the setup is
completely done inside the guest and can not be influenced from the
outside. The benefit is that you can use more than one VLAN on a
single virtual NIC.
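As a sketch of assigning a guest VLAN tag from the host side, the `qm` CLI accepts a `tag` option on the network device (hypothetical VM ID `100` and tag `5`):

----
# put the first network device of VM 100 on VLAN 5 of bridge vmbr0
qm set 100 --net0 virtio,bridge=vmbr0,tag=5
----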
VLAN on the Host
^^^^^^^^^^^^^^^^

To allow host communication with an isolated network, it is possible
to apply VLAN tags to any network device (NIC, Bond, Bridge). In
general, you should configure the VLAN on the interface with the least
abstraction layers between itself and the physical NIC.

The following examples show a default configuration where the host
management address is placed on a separate VLAN.
.Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address
        netmask
        gateway
        bridge_ports eno1.5
        bridge_stp off
        bridge_fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
----
.Example: Use VLAN 5 for the {pve} management IP with VLAN aware Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0.5
iface vmbr0.5 inet static
        address
        netmask
        gateway

auto vmbr0
iface vmbr0 inet manual
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
        bridge_vlan_aware yes
----
The next example is the same setup, but a bond is used to
make this network fail-safe.

.Example: Use VLAN 5 with bond0 for the {pve} management IP with traditional Linux bridge
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        slaves eno1 eno2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address
        netmask
        gateway
        bridge_ports bond0.5
        bridge_stp off
        bridge_fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----
////
TODO: explain IPv6 support?
TODO: explain OVS
////