[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]

Storage pool type: `rbd`

http://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich block-level
storage, and you get the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty of
CPU power and RAM, so running storage services and VMs on the same
node is possible.
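
For such a hyper-converged setup, the basic Ceph services can be set up with
the 'pveceph' tool. A minimal sketch of the first steps, where the network
address is purely illustrative and exact subcommand names may differ between
{pve} releases:

----
# install the Ceph packages on the node
pveceph install

# create the initial Ceph configuration, using the given network
pveceph init --network 10.10.10.0/24

# create the first monitor daemon
pveceph createmon
----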

[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd` specific properties:

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
PVE cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster.

krbd::

Enforce access to RADOS block devices through the `krbd` kernel module.
Optional.

NOTE: Containers will use `krbd` independent of the option value.

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----
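
Instead of editing `/etc/pve/storage.cfg` by hand, an equivalent entry can be
added on the command line with `pvesm`. A sketch, reusing the values from the
example above:

----
pvesm add rbd ceph-external --monhost "10.1.1.20 10.1.1.21 10.1.1.22" \
    --pool ceph-external --content images --username admin
----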

TIP: You can use the `rbd` utility to do low-level management tasks.
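
For instance, to inspect the images on the pool from the example above (the
image name is purely illustrative):

----
# list all images in the pool
rbd ls ceph-external

# show details (size, features, ...) of one image
rbd info ceph-external/vm-100-disk-1
----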

Authentication
~~~~~~~~~~~~~~

If you use `cephx` authentication, you need to copy the keyfile from your
external Ceph cluster to a Proxmox VE host.

Create the directory `/etc/pve/priv/ceph` with

 mkdir /etc/pve/priv/ceph

Then copy the keyring

 scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring

The keyring must be named to match your `<STORAGE_ID>`. Copying the
keyring generally requires root privileges.
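
For example, for the `ceph-external` storage defined above, the keyring file
would be:

 /etc/pve/priv/ceph/ceph-external.keyring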

If Ceph is installed locally on the PVE cluster, this is done automatically by
'pveceph' or in the GUI.

Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend provides block-level storage and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================
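
To quickly verify a working configuration, a test volume can be allocated and
removed again with `pvesm`. A sketch, where the storage name matches the
example above and the VM ID `100` and disk name are purely illustrative:

----
# allocate a 4 GiB raw image, owned by VM 100
pvesm alloc ceph-external 100 vm-100-disk-1 4G

# list volumes on the storage, then remove the test image again
pvesm list ceph-external
pvesm free ceph-external:vm-100-disk-1
----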

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]