[[chapter_zfs]]
ZFS on Linux
------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux kernel port
of the ZFS file system is introduced as an optional file system, and
also as an additional selection for the root file system. There is no
need to manually compile ZFS modules - all packages are included.

By using ZFS, it is possible to achieve maximum enterprise features
with low-budget hardware, and also high-performance systems by
leveraging SSD caching or even SSD-only setups. ZFS can replace
cost-intensive hardware RAID cards with moderate CPU and memory load,
combined with easy management.
.General ZFS advantages

* Easy configuration and management with {pve} GUI and CLI.
* Reliable
* Protection against data corruption
* Data compression on file system level
* Snapshots
* Copy-on-write clone
* Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3
* Can use SSD for cache
* Self healing
* Continuous integrity checking
* Designed for high storage capacities
* Asynchronous replication over network
* Open Source
* Encryption
* ...
Hardware
~~~~~~~~

ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To
prevent data corruption, we recommend the use of high quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.

IMPORTANT: Do not use ZFS on top of a hardware controller which has its
own cache management. ZFS needs to communicate directly with the disks.
An HBA adapter is the way to go, or something like an LSI controller
flashed in ``IT'' mode.

If you are experimenting with an installation of {pve} inside a VM
(Nested Virtualization), don't use `virtio` for the disks of that VM,
as they are not supported by ZFS. Use IDE or SCSI instead (this also
works with the `virtio` SCSI controller type).
Installation as Root File System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When you install using the {pve} installer, you can choose ZFS for the
root file system. You need to select the RAID type at installation
time:

[horizontal]
RAID0:: Also called ``striping''. The capacity of such a volume is the
sum of the capacities of all disks. But RAID0 does not add any
redundancy, so the failure of a single drive makes the volume unusable.

RAID1:: Also called ``mirroring''. Data is written identically to all
disks. This mode requires at least 2 disks of the same size. The
resulting capacity is that of a single disk.

RAID10:: A combination of RAID0 and RAID1. Requires at least 4 disks.

RAIDZ-1:: A variation on RAID-5, single parity. Requires at least 3 disks.

RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.

RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.
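As a rough rule of thumb, the usable capacity of a RAIDZ vdev is the number of disks minus the parity level, multiplied by the size of a single disk; real pools lose some additional space to padding and metadata. A minimal sketch of that estimate (the helper name is ours, not a ZFS tool):

```shell
# Rough usable-capacity estimate for a RAIDZ vdev, ignoring padding and
# metadata overhead.
# Arguments: number of disks, parity level (1-3), single-disk size in GB.
raidz_usable_gb() {
    echo $(( ($1 - $2) * $3 ))
}

raidz_usable_gb 4 2 1000   # 4-disk RAIDZ-2 with 1TB disks → 2000
```

The same formula covers RAID1/RAID10 mirrors if you treat each mirror pair as one disk of capacity.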
The installer automatically partitions the disks, creates a ZFS pool
called `rpool`, and installs the root file system on the ZFS subvolume
`rpool/ROOT/pve-1`.

Another subvolume called `rpool/data` is created to store VM
images. In order to use that with the {pve} tools, the installer
creates the following configuration entry in `/etc/pve/storage.cfg`:

----
zfspool: local-zfs
	pool rpool/data
	sparse
	content images,rootdir
----
After installation, you can view your ZFS pool status using the
`zpool` command:

----
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
----
The `zfs` command is used to configure and manage your ZFS file
systems. The following command lists all file systems after
installation:

----
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
----
Bootloader
~~~~~~~~~~

Depending on whether the system is booted in EFI or legacy BIOS mode,
the {pve} installer sets up either `grub` or `systemd-boot` as the main
bootloader. See the chapter on xref:sysboot[{pve} host bootloaders] for
details.
ZFS Administration
~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:

----
# man zpool
# man zfs
----
.Create a new zpool
To create a new pool, at least one disk is needed. The `ashift` value
should match the sector size of the underlying disk (2 to the power of
`ashift` is the sector size in bytes), or be larger:

 zpool create -f -o ashift=12 <pool> <device>
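For reference, `ashift=9` corresponds to 512-byte sectors and `ashift=12` to 4K sectors. A small sketch (our own helper, not part of ZFS) that derives the minimum suitable `ashift` for a given sector size, which on Linux you can read from `/sys/class/block/<dev>/queue/physical_block_size`:

```shell
# Smallest ashift such that 2^ashift >= the disk's physical sector size.
ashift_for() {
    local size=$1 a=0
    while [ $((1 << a)) -lt "$size" ]; do
        a=$((a + 1))
    done
    echo "$a"
}

ashift_for 512    # → 9
ashift_for 4096   # → 12
```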
To activate compression:

 zfs set compression=lz4 <pool>
.Create a new pool with RAID-0
Minimum 1 disk

 zpool create -f -o ashift=12 <pool> <device1> <device2>

.Create a new pool with RAID-1
Minimum 2 disks

 zpool create -f -o ashift=12 <pool> mirror <device1> <device2>

.Create a new pool with RAID-10
Minimum 4 disks

 zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>

.Create a new pool with RAIDZ-1
Minimum 3 disks

 zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>

.Create a new pool with RAIDZ-2
Minimum 4 disks

 zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
.Create a new pool with cache (L2ARC)
It is possible to use a dedicated cache drive partition to increase
the performance (use an SSD).

As `<device>`, it is possible to use more devices, as shown in
"Create a new pool with RAID*".

 zpool create -f -o ashift=12 <pool> <device> cache <cache_device>

.Create a new pool with log (ZIL)
It is possible to use a dedicated drive partition for the ZFS Intent
Log to increase the performance (use an SSD).

As `<device>`, it is possible to use more devices, as shown in
"Create a new pool with RAID*".

 zpool create -f -o ashift=12 <pool> <device> log <log_device>
.Add cache and log to an existing pool
If you have a pool without cache and log, first partition the SSD into
two partitions with `parted` or `gdisk`.

IMPORTANT: Always use GPT partition tables.

The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.

 zpool add -f <pool> log <device-part1> cache <device-part2>
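The half-of-physical-memory sizing rule can be sketched as a small helper (the function name is ours; on Linux, total memory in kB is available as `MemTotal` in `/proc/meminfo`):

```shell
# Half of the given memory size (in kB), expressed in GiB, rounded down --
# an upper bound for the log partition per the rule above.
log_size_gib() {
    echo $(( $1 / 2 / 1024 / 1024 ))
}

# On a live system you could feed it the real value:
#   log_size_gib "$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)"
log_size_gib 16777216   # host with 16 GiB of RAM → 8
```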
.Changing a failed device

 zpool replace -f <pool> <old device> <new device>

.Changing a failed bootable device when using systemd-boot

 sgdisk <healthy bootable device> -R <new device>
 sgdisk -G <new device>
 zpool replace -f <pool> <old zfs partition> <new zfs partition>
 pve-efiboot-tool format <new disk's ESP>
 pve-efiboot-tool init <new disk's ESP>

NOTE: `ESP` stands for EFI System Partition, which is set up as partition #2 on
bootable disks set up by the {pve} installer since version 5.4. For details, see
xref:sysboot_systemd_boot_setup[Setting up a new partition for use as synced ESP].
Activate E-Mail Notification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors. Newer ZFS packages ship the daemon in a separate package,
and you can install it using `apt-get`:

----
# apt-get install zfs-zed
----

To activate the daemon, it is necessary to edit `/etc/zfs/zed.d/zed.rc` with
your favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:

--------
ZED_EMAIL_ADDR="root"
--------

Please note that {pve} forwards mails to `root` to the email address
configured for the root user.

IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.
Limit ZFS Memory Usage
~~~~~~~~~~~~~~~~~~~~~~

It is good to use at most 50 percent (which is the default) of the
system memory for the ZFS ARC, to prevent performance degradation of
the host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:

--------
options zfs zfs_arc_max=8589934592
--------

This example setting limits the usage to 8GB.
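The `zfs_arc_max` value is given in bytes. A one-liner sketch to compute it from a GiB figure (the helper name is ours):

```shell
# Convert a GiB limit to the byte value expected by zfs_arc_max.
arc_max_bytes() {
    echo $(( $1 * 1024 * 1024 * 1024 ))
}

arc_max_bytes 8   # → 8589934592, the value used above
```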
[IMPORTANT]
====
If your root file system is ZFS, you must update your initramfs every
time this value changes:

 update-initramfs -u
====
[[zfs_swap]]
SWAP on ZFS
~~~~~~~~~~~

Swap space created on a zvol may cause some problems, such as blocking
the server or generating a high IO load, often seen when starting a
backup to an external storage.

We strongly recommend using enough memory, so that you normally do not
run into low memory situations. Should you need or want to add swap, it
is preferred to create a partition on a physical disk and use it as a
swap device. You can leave some space free for this purpose in the
advanced options of the installer. Additionally, you can lower the
``swappiness'' value. A good value for servers is 10:

 sysctl -w vm.swappiness=10

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

--------
vm.swappiness = 10
--------
.Linux kernel `swappiness` parameter values
[width="100%",cols="<m,2d",options="header"]
|===========================================================
| Value               | Strategy
| vm.swappiness = 0   | The kernel will swap only to avoid an 'out of memory' condition
| vm.swappiness = 1   | Minimum amount of swapping without disabling it entirely.
| vm.swappiness = 10  | This value is sometimes recommended to improve performance when sufficient memory exists in a system.
| vm.swappiness = 60  | The default value.
| vm.swappiness = 100 | The kernel will swap aggressively.
|===========================================================
[[zfs_encryption]]
Encrypted ZFS Datasets
~~~~~~~~~~~~~~~~~~~~~~

ZFS on Linux version 0.8.0 introduced support for native encryption of
datasets. After an upgrade from previous ZFS on Linux versions, the
encryption feature can be enabled per pool:

----
# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  disabled  local

# zpool set feature@encryption=enabled tank

# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  enabled   local
----
WARNING: There is currently no support for booting from pools with encrypted
datasets using Grub, and only limited support for automatically unlocking
encrypted datasets on boot. Older versions of ZFS without encryption support
will not be able to decrypt stored data.

NOTE: It is recommended to either unlock storage datasets manually after
booting, or to write a custom unit to pass the key material needed for
unlocking on boot to `zfs load-key`.

WARNING: Establish and test a backup procedure before enabling encryption of
production data. If the associated key material/passphrase/keyfile has been
lost, accessing the encrypted data is no longer possible.
Encryption needs to be set up when creating datasets/zvols, and is
inherited by default by child datasets. For example, to create an
encrypted dataset `tank/encrypted_data` and configure it as storage in
{pve}, run the following commands:

----
# zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
Enter passphrase:
Re-enter passphrase:
# pvesm add zfspool encrypted_zfs -pool tank/encrypted_data
----

All guest volumes/disks created on this storage will be encrypted with
the shared key material of the parent dataset.

To actually use the storage, the associated key material needs to be
loaded with `zfs load-key`:

----
# zfs load-key tank/encrypted_data
Enter passphrase for 'tank/encrypted_data':
----
It is also possible to use a (random) keyfile instead of prompting for a
passphrase by setting the `keylocation` and `keyformat` properties, either at
creation time or with `zfs change-key` on existing datasets:

----
# dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1
# zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data
----
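With `keyformat=raw`, ZFS expects exactly 32 bytes of key material, which is why the `dd` invocation above uses `bs=32 count=1`. A quick sanity check you could run before `zfs change-key` (our own helper, assuming GNU `stat` as found on Debian-based systems):

```shell
# Verify that a raw keyfile is exactly 32 bytes long.
keyfile_ok() {
    [ "$(stat -c %s "$1")" -eq 32 ]
}

# Example: generate a keyfile and check it.
dd if=/dev/urandom of=/tmp/keyfile bs=32 count=1 2>/dev/null
keyfile_ok /tmp/keyfile && echo "keyfile OK"
```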
WARNING: When using a keyfile, special care needs to be taken to secure the
keyfile against unauthorized access or accidental loss. Without the keyfile, it
is not possible to access the plaintext data!

A guest volume created underneath an encrypted dataset will have its
`encryptionroot` property set accordingly. The key material only needs to be
loaded once per encryptionroot to be available to all encrypted datasets
underneath it.

See the `encryptionroot`, `encryption`, `keylocation`, `keyformat` and
`keystatus` properties, the `zfs load-key`, `zfs unload-key` and `zfs
change-key` commands, and the `Encryption` section of `man zfs` for more
details and advanced usage.
ZFS Special Device
~~~~~~~~~~~~~~~~~~

Since version 0.8.0, ZFS supports `special` devices. A `special` device
in a pool is used to store metadata, deduplication tables, and
optionally small file blocks.

A `special` device can improve the speed of a pool consisting of slow
spinning hard disks with a lot of metadata changes. For example,
workloads that involve creating, updating or deleting a large number of
files will benefit from the presence of a `special` device. ZFS datasets
can also be configured to store whole small files on the `special`
device, which can further improve the performance. Use fast SSDs for the
`special` device.

IMPORTANT: The redundancy of the `special` device should match that of the
pool, since the `special` device is a point of failure for the whole pool.

WARNING: Adding a `special` device to a pool cannot be undone!
.Create a pool with `special` device and RAID-1:

 zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>

.Add a `special` device to an existing pool with RAID-1:

 zpool add <pool> special mirror <device1> <device2>
ZFS datasets expose the `special_small_blocks=<size>` property. `size`
can be `0` to disable storing small file blocks on the `special` device,
or a power of two in the range between `512B` and `128K`. After setting
the property, new file blocks smaller than `size` will be allocated on
the `special` device.

IMPORTANT: If the value for `special_small_blocks` is greater than or equal to
the `recordsize` (default `128K`) of the dataset, *all* data will be written to
the `special` device, so be careful!
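The accepted values described above can be summarized as: `0`, or a power of two from `512` bytes to `131072` bytes (128K). A small sketch (hypothetical helper, mirroring that rule rather than any ZFS-internal check):

```shell
# True if the value is acceptable for special_small_blocks per the text above:
# 0, or a power of two between 512 and 131072 (128K) inclusive.
valid_small_blocks() {
    local v=$1
    [ "$v" -eq 0 ] && return 0
    # v & (v - 1) is zero exactly when v is a power of two
    [ "$v" -ge 512 ] && [ "$v" -le 131072 ] && [ $(( v & (v - 1) )) -eq 0 ]
}

valid_small_blocks 4096 && echo "4096 is valid"
```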
Setting the `special_small_blocks` property on a pool will change the
default value of that property for all child ZFS datasets (for example,
all containers in the pool will opt in for small file blocks).

.Opt in for all files smaller than 4K-blocks pool-wide:

 zfs set special_small_blocks=4K <pool>

.Opt in for small file blocks for a single dataset:

 zfs set special_small_blocks=4K <pool>/<filesystem>

.Opt out from small file blocks for a single dataset:

 zfs set special_small_blocks=0 <pool>/<filesystem>