[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be extended to include further storage types in the future.

Storage Types
-------------

There are two fundamentally different classes of storage types:

File level storage::

File level based storage technologies allow access to a fully featured (POSIX)
file system. They are in general more flexible than any block level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.

Block level storage::

This class allows you to store large 'raw' images. It is usually not possible
to store other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.

.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no^1^    |yes
|NFS            |nfs         |file  |yes   |no^1^    |yes
|CIFS           |cifs        |file  |yes   |no^1^    |yes
|GlusterFS      |glusterfs   |file  |yes   |no^1^    |yes
|CephFS         |cephfs      |file  |yes   |yes      |yes
|LVM            |lvm         |block |no^2^ |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

^1^: On file based storages, snapshots are possible with the 'qcow2' format.

^2^: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.
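
As a sketch of how note ^2^ could be put into practice: first add the
iSCSI storage without using its LUNs directly, then, after creating a
volume group on the LUN, add that volume group as a shared LVM storage.
The storage IDs, portal address, target and volume group name below are
placeholders; adapt them to your setup.

 pvesm add iscsi san1 --portal 192.168.0.100 --target iqn.2003-01.org.example:storage --content none
 pvesm add lvm lvm-san --vgname vg-san --shared 1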

Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

All storage types which have the ``Snapshots'' feature also support thin
provisioning.
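
To observe thin provisioning in action you can, for example, allocate a
`qcow2` volume on the default `local` directory storage and inspect it
with `qemu-img` (the VMID `100` and the volume name are arbitrary
examples):

 pvesm alloc local 100 vm-100-disk-1.qcow2 32G --format qcow2
 qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2

`qemu-img info` reports the virtual size (32G) separately from the disk
size actually used on the storage, which stays small until the guest
writes data.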

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. It is therefore advisable to avoid
over-provisioning your storage resources, or to carefully monitor
free space to avoid such conditions.
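
You can check the current usage of all configured storage pools at any
time with:

 pvesm status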

Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.

Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        <property>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows you to store container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).
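
As an illustration, the following hypothetical NFS pool combines several
of these properties: it is only usable on two nodes, stores backups, ISO
images and container templates, and keeps at most three backup files per
VM. All names and addresses are placeholders.

----
nfs: nfs-backup
        path /mnt/pve/nfs-backup
        server 10.0.0.10
        export /space/backup
        content backup,iso,vztmpl
        maxfiles 3
        nodes node1,node2
----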

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.

Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
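
For example, on the default `local` directory storage, where ISO images
are kept below `/var/lib/vz/template/iso`, the second volume identifier
above resolves to
`/var/lib/vz/template/iso/debian-501-amd64-netinst.iso`.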

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.

Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.

Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>
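
A complete invocation adding a hypothetical NFS pool for ISO images and
container templates might look like this (storage ID, server address and
export path are placeholders):

 pvesm add nfs iso-templates --path /mnt/pve/iso-templates --server 10.0.0.10 --export /space/iso-templates --content iso,vztmpl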

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]
* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]
* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]
* link:/wiki/Storage:_iSCSI[Storage: iSCSI]
* link:/wiki/Storage:_LVM[Storage: LVM]
* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]
* link:/wiki/Storage:_NFS[Storage: NFS]
* link:/wiki/Storage:_CIFS[Storage: CIFS]
* link:/wiki/Storage:_RBD[Storage: RBD]
* link:/wiki/Storage:_CephFS[Storage: CephFS]
* link:/wiki/Storage:_ZFS[Storage: ZFS]
* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]