Introduction
============

{pve} is a platform to run virtual machines and containers. It is
based on Debian Linux, and completely open source. For maximum
flexibility, we implemented two virtualization technologies -
Kernel-based Virtual Machine (KVM) and container-based virtualization
(LXC).

One main design goal was to make administration as easy as
possible. You can use {pve} on a single node, or assemble a cluster of
many nodes. All management tasks can be done using our web-based
management interface, and even a novice user can install and set up
{pve} within minutes.

image::images/pve-software-stack.svg["Proxmox Software Stack",align="center"]

Central Management
------------------

While many people start with a single node, {pve} can scale out to a
large set of clustered nodes. The cluster stack is fully integrated
and ships with the default installation.

Unique Multi-Master Design::

The integrated web-based management interface gives you a clean
overview of all your KVM guests and Linux containers and even of your
whole cluster. You can easily manage your VMs and containers, storage
or cluster from the GUI. There is no need to install a separate,
complex, and pricey management server.

Proxmox Cluster File System (pmxcfs)::

Proxmox VE uses the unique Proxmox Cluster File System (pmxcfs), a
database-driven file system for storing configuration files. This
enables you to store the configuration of thousands of virtual
machines. By using corosync, these files are replicated in real time
on all cluster nodes. The file system stores all data inside a
persistent database on disk; nonetheless, a copy of the data resides
in RAM. This limits the maximum storage size to 30 MB - more than
enough for thousands of VMs.
+
Proxmox VE is the only virtualization platform using this unique
cluster file system.
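+
For a quick impression, pmxcfs is mounted at `/etc/pve` on every node,
so cluster-wide configuration can be inspected with ordinary file
tools. The listing below is abridged and indicative only; the node
name and VMIDs are placeholders, and the exact content depends on your
setup:
+
----
# the same view is available on every cluster node
ls /etc/pve
corosync.conf  datacenter.cfg  nodes/  storage.cfg  user.cfg

# per-guest configuration lives below nodes/<nodename>/
ls /etc/pve/nodes/pve1/qemu-server/
100.conf  101.conf
----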

Web-based Management Interface::

Proxmox VE is simple to use. Management tasks can be done via the
included web-based management interface - there is no need to install a
separate management tool or any additional management node with huge
databases. The multi-master tool allows you to manage your whole
cluster from any node of your cluster. The central web-based
management - based on the ExtJS JavaScript framework - empowers
you to control all functionalities from the GUI and to review the
history and syslogs of each single node. This includes running backup
or restore jobs, live-migration or HA triggered activities.

Command Line::

For advanced users who are used to the comfort of the Unix shell or
Windows PowerShell, Proxmox VE provides a command line interface to
manage all the components of your virtual environment. This command
line interface has intelligent tab completion and full documentation
in the form of UNIX man pages.
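+
A few typical invocations, shown here only as a quick sketch of what
these command line tools look like:
+
----
# list all virtual machines and containers on this node
qm list
pct list

# show the cluster status
pvecm status

# query the REST API directly from the shell
pvesh get /version
----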

REST API::

Proxmox VE uses a RESTful API. We chose JSON as the primary data format,
and the whole API is formally defined using JSON Schema. This enables
fast and easy integration for third-party management tools like custom
hosting environments.
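+
As a minimal sketch, the API can be queried with any HTTP client. The
host name, token ID and secret below are placeholders for an API token
created beforehand:
+
----
# read the cluster resource list as JSON via the REST API
curl -k -H "Authorization: PVEAPIToken=root@pam!monitoring=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" \
    https://pve-node.example.com:8006/api2/json/cluster/resources
----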

Role-based Administration::

You can define granular access for all objects (like VMs, storages,
nodes, etc.) by using the role-based user and permission
management. This allows you to define privileges and helps you to
control access to objects. This concept is also known as access
control lists: each permission specifies a subject (a user or group)
and a role (set of privileges) on a specific path.
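+
For illustration, the `pveum` user management tool can create such
permissions from the shell. The user name and VMID are placeholders;
`man pveum` documents the exact syntax for your version:
+
----
# create a user in the built-in Proxmox VE realm
pveum user add alice@pve

# grant the PVEVMAdmin role on a single guest (ACL path /vms/100)
pveum acl modify /vms/100 --users alice@pve --roles PVEVMAdmin
----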

Authentication Realms::

Proxmox VE supports multiple authentication sources like Microsoft
Active Directory, LDAP, Linux PAM standard authentication or the
built-in Proxmox VE authentication server.

Flexible Storage
----------------

The Proxmox VE storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or SAN. There are no limits; you may configure as
many storage definitions as you like. You can use all storage
technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images.

We currently support the following network storage types:

* LVM Group (network backing with iSCSI targets)
* iSCSI target
* NFS Share
* CIFS Share
* Ceph RBD
* Directly use iSCSI LUNs
* GlusterFS

Local storage types supported are:

* LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
* Directory (storage on existing filesystem)
* ZFS
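
As a small sketch, a storage can be added either through the GUI or
with the `pvesm` storage manager; the storage name, server and export
path below are placeholders:

----
# add an NFS share as shared storage for disk images and backups
pvesm add nfs nfs-store --server 192.0.2.10 --export /export/pve --content images,backup

# show the status of all configured storages
pvesm status
----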

Integrated Backup and Restore
-----------------------------

The integrated backup tool (`vzdump`) creates consistent snapshots of
running containers and KVM guests. It basically creates an archive of
the VM or CT data which includes the VM/CT configuration files.

KVM live backup works for all storage types including VM images on
NFS, CIFS, iSCSI LUN, Ceph RBD. The new backup format is optimized for storing
VM backups quickly and effectively (sparse files, out-of-order data, minimized I/O).
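
A hedged example of backup and restore from the command line; the
VMIDs, storage name and archive path are placeholders:

----
# snapshot-mode backup of guest 100 to the storage named 'backup-nfs'
vzdump 100 --mode snapshot --compress zstd --storage backup-nfs

# restore a VM archive to a new VMID (container archives use 'pct restore' instead)
qmrestore /path/to/vzdump-qemu-100.vma.zst 101
----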

High Availability Cluster
-------------------------

A multi-node Proxmox VE HA Cluster enables the definition of highly
available virtual servers. The Proxmox VE HA Cluster is based on
proven Linux HA technologies, providing stable and reliable HA
services.
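
For illustration, guests are put under HA control with the
`ha-manager` tool; the VMID is a placeholder:

----
# manage VM 100 as a highly available resource and keep it started
ha-manager add vm:100 --state started

# show the current HA manager status
ha-manager status
----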

Flexible Networking
-------------------

Proxmox VE uses a bridged networking model. All VMs can share one
bridge as if virtual network cables from each guest were all plugged
into the same switch. For connecting VMs to the outside world, bridges
are attached to physical network cards and assigned a TCP/IP
configuration.

For further flexibility, VLANs (IEEE 802.1q) and network
bonding/aggregation are possible. In this way it is possible to build
complex, flexible virtual networks for the Proxmox VE hosts,
leveraging the full power of the Linux network stack.
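
A minimal sketch of such a bridge in `/etc/network/interfaces`; the
interface name and addresses are placeholders:

----
# bridge vmbr0 attached to the physical NIC eno1
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----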

Integrated Firewall
-------------------

The integrated firewall allows you to filter network packets on
any VM or Container interface. Common sets of firewall rules can
be grouped into ``security groups''.
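
As a hedged sketch, a security group can be defined in the cluster-wide
firewall configuration (`/etc/pve/firewall/cluster.fw`); the group name
and ports are examples only:

----
[group webserver]
IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443
----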

Why Open Source
---------------

Proxmox VE uses a Linux kernel and is based on the Debian GNU/Linux
Distribution. The source code of Proxmox VE is released under the
http://www.gnu.org/licenses/agpl-3.0.html[GNU Affero General Public
License, version 3]. This means that you are free to inspect the
source code at any time or contribute to the project yourself.

At Proxmox we are committed to using open source software whenever
possible. Using open source software guarantees full access to all
functionality - as well as high security and reliability. We think
that everybody should have the right to access the source code of the
software they run, to build on it, or to submit changes back to the
project. Everybody is encouraged to contribute, while Proxmox ensures
the product always meets professional quality criteria.

Open source software also helps to keep your costs low and makes your
core infrastructure independent from a single vendor.

Your benefits with {pve}
------------------------

* Open source software
* No vendor lock-in
* Linux kernel
* Fast installation and easy to use
* Web-based management interface
* REST API
* Huge active community
* Low administration costs and simple deployment

include::getting-help.adoc[]

Project History
---------------

The project started in 2007, followed by a first stable version in
2008. At the time we used OpenVZ for containers, and KVM for virtual
machines. The clustering features were limited, and the user interface
was simple (a server-generated web page).

But we quickly developed new features using the
http://corosync.github.io/corosync/[Corosync] cluster stack, and the
introduction of the new Proxmox cluster file system (pmxcfs) was a big
step forward, because it completely hides the cluster complexity from
the user. Managing a cluster of 16 nodes is as simple as managing a
single node.

We also introduced a new REST API, with a complete declarative
specification written in JSON-Schema. This enabled other people to
integrate {pve} into their infrastructure, and made it easy to provide
additional services.

Also, the new REST API made it possible to replace the original user
interface with a modern HTML5 application using JavaScript. We also
replaced the old Java-based VNC console code with
https://kanaka.github.io/noVNC/[noVNC]. So you only need a web browser
to manage your VMs.

Support for various storage types has been another big task. Notably,
{pve} was the first distribution to ship ZFS on Linux by default in
2014. Another milestone was the ability to run and manage
http://ceph.com/[Ceph] storage on the hypervisor nodes. Such setups
are extremely cost effective.

When we started we were among the first companies providing
commercial support for KVM. The KVM project itself continuously
evolved, and is now a widely used hypervisor. New features arrive
with each release. We developed the KVM live backup feature, which
makes it possible to create snapshot backups on any storage type.

The most notable change with version 4.0 was the move from OpenVZ to
https://linuxcontainers.org/[LXC]. Containers are now deeply
integrated, and they can use the same storage and network features
as virtual machines.

include::howto-improve-pve-docs.adoc[]

include::translation.adoc[]