[[chapter_pvesr]]
ifdef::manvolnum[]
pvesr(1)
========
:pve-toplevel:

NAME
----

pvesr - Proxmox VE Storage Replication

SYNOPSIS
--------

include::pvesr.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Storage Replication
===================
:pve-toplevel:
endif::manvolnum[]

The `pvesr` command-line tool manages the {PVE} storage replication
framework. Storage replication brings redundancy for guests using
local storage and reduces migration time.

It replicates guest volumes to another node so that all data is available
without using shared storage. Replication uses snapshots to minimize traffic
sent over the network. Therefore, new data is sent only incrementally after
an initial full sync. In the case of a node failure, your guest data is
still available on the replicated node.

The replication is done automatically at configurable intervals.
The minimum replication interval is one minute, and the maximum interval is
once a week. The format used to specify those intervals is a subset of
`systemd` calendar events, see the
xref:pvesr_schedule_time_format[Schedule Format] section.

Every guest can be replicated to multiple target nodes, but a guest cannot
get replicated twice to the same target node.

Each replication job's bandwidth can be limited, to avoid overloading a
storage or server.

A virtual guest with active replication cannot currently use online
migration. Offline migration is supported in general. If you migrate to a
node where the guest's data is already replicated, only the changes since
the last synchronization (the so-called `delta`) must be sent, which reduces
the required time significantly. In this case, the replication direction is
also switched automatically after the migration has finished.

For example: VM100 is currently on `nodeA` and gets replicated to `nodeB`.
You migrate it to `nodeB`, so now it gets automatically replicated back from
`nodeB` to `nodeA`.

If you migrate to a node where the guest is not replicated, the whole disk
data must be sent over. After the migration, the replication job continues
to replicate this guest to the configured nodes.

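For example, an offline migration of the VM from above could look like the
following. This is a minimal sketch; it assumes that VM 100 is shut down and
that `nodeB` is a valid target node in your cluster:

----
# qm migrate 100 nodeB
----
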
[IMPORTANT]
====
High-Availability is allowed in combination with storage replication, but it
has the following implications:

* as live-migrations are currently not possible, redistributing services after
a more preferred node comes online does not work. Keep that in mind when
configuring your HA groups and their priorities for replicated guests.

* recovery works, but there may be some data loss between the last synced
time and the time a node failed.
====

Supported Storage Types
-----------------------

.Storage Types
[width="100%",options="header"]
|============================================
|Description |PVE type |Snapshots|Stable
|ZFS (local) |zfspool |yes |yes
|============================================

[[pvesr_schedule_time_format]]
Schedule Format
---------------

{pve} has a very flexible replication scheduler. It is based on the systemd
time calendar event format.footnote:[see `man 7 systemd.time` for more information]
Calendar events may be used to refer to one or more points in time in a
single expression.

Such a calendar event uses the following format:

----
[day(s)] [[start-time(s)][/repetition-time(s)]]
----

This allows you to configure a set of days on which the job should run.
You can also set one or more start times, which tell the replication
scheduler the moments in time when a job should start.

With this information, we could create a job which runs every workday at 10
PM: `'mon,tue,wed,thu,fri 22'`, which could be abbreviated to `'mon..fri 22'`.
Most reasonable schedules can be written quite intuitively this way.

NOTE: Hours are set in 24-hour format.

To allow for easier and shorter configuration, one or more repetition times
can be set. They indicate that replications are done on the start-time(s)
itself and on the start-time(s) plus all multiples of the repetition value.
If you want to start replication at 8 AM and repeat it every 15 minutes
until 9 AM, you would use: `'8:00/15'`

Here you also see that, if no hour separator (`:`) is used, the value gets
interpreted as minutes. If such a separator is used, the value on the left
denotes the hour(s), and the value on the right denotes the minute(s).
Further, you can use `*` to match all possible values.

To get additional ideas, look at
xref:pvesr_schedule_format_examples[more examples below].

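For example, to apply the `'mon..fri 22'` schedule from above to an existing
replication job, something like the following should work (a sketch,
assuming a job with the ID `100-0`; the ID format is explained in the
CLI examples below):

----
# pvesr update 100-0 --schedule 'mon..fri 22'
----
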
Detailed Specification
~~~~~~~~~~~~~~~~~~~~~~

days:: Days are specified with an abbreviated English version: `sun, mon,
tue, wed, thu, fri and sat`. You may use multiple days as a comma-separated
list. A range of days can also be set by specifying the start and end day
separated by ``..'', for example `mon..fri`. These formats can also be
mixed. If omitted, `'*'` is assumed.

time-format:: A time format consists of hours and minutes interval lists.
Hours and minutes are separated by `':'`. Both hours and minutes can be
lists and ranges of values, using the same format as days.
First come hours, then minutes. Hours can be omitted if not needed; in this
case `'*'` is assumed for the value of hours.
The valid range for values is `0-23` for hours and `0-59` for minutes.

[[pvesr_schedule_format_examples]]
Examples:
~~~~~~~~~

.Schedule Examples
[width="100%",options="header"]
|==============================================================================
|Schedule String |Alternative |Meaning
|mon,tue,wed,thu,fri |mon..fri |Every working day at 0:00
|sat,sun |sat..sun |Only on weekends at 0:00
|mon,wed,fri |-- |Only on Monday, Wednesday and Friday at 0:00
|12:05 |12:05 |Every day at 12:05 PM
|*/5 |0/5 |Every five minutes
|mon..wed 30/10 |mon,tue,wed 30/10 |Monday, Tuesday, Wednesday 30, 40 and 50 minutes after every full hour
|mon..fri 8..17,22:0/15 |-- |Every working day, every 15 minutes between 8 AM and 6 PM and between 10 PM and 11 PM
|fri 12..13:5/20 |fri 12,13:5/20 |Friday at 12:05, 12:25, 12:45, 13:05, 13:25 and 13:45
|12,14,16,18,20,22:5 |12/2:5 |Every day starting at 12:05 until 22:05, every 2 hours
|* |*/1 |Every minute (minimum interval)
|==============================================================================

Error Handling
--------------

If a replication job encounters problems, it is placed in an error state.
In this state, the configured replication intervals get suspended
temporarily. The failed replication is then retried at 30-minute intervals.
Once this succeeds, the original schedule gets activated again.

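To check whether any job is currently in the error state, you can query the
replication status from the command line, for example (the `--guest` filter
is optional and limits the output to a single guest):

----
# pvesr status
# pvesr status --guest 100
----
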
Possible issues
~~~~~~~~~~~~~~~

Some of the most common issues are listed below; depending on your setup,
there may be other causes.

* Network is not working.

* No free space left on the replication target storage.

* No storage with the same storage ID is available on the target node.

NOTE: You can always use the replication log to find hints about a problem's
cause.

Migrating a guest in case of Error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// FIXME: move this to better fitting chapter (sysadmin ?) and only link to
// it here

In the case of a grave error, a virtual guest may get stuck on a failed
node. You then need to move it manually to a working node again.

Example
~~~~~~~

Let's assume that you have two guests (VM 100 and CT 200) running on node A
and replicated to node B.
Node A failed and cannot get back online. Now you have to migrate the guests
to node B manually.

- connect to node B over ssh or open its shell via the web UI

- check that the cluster is quorate
+
----
# pvecm status
----

- If you have no quorum, we strongly advise fixing this first and making the
node operable again. Only if this is not currently possible should you use
the following command to enforce quorum on the current node:
+
----
# pvecm expected 1
----

WARNING: If expected votes are set, avoid changes which affect the cluster
(for example adding/removing nodes, storages, or virtual guests) at all
costs. Only use it to get vital guests up and running again or to resolve
the quorum issue itself.

- move both guest configuration files from the origin node A to node B:
+
----
# mv /etc/pve/nodes/A/qemu-server/100.conf /etc/pve/nodes/B/qemu-server/100.conf
# mv /etc/pve/nodes/A/lxc/200.conf /etc/pve/nodes/B/lxc/200.conf
----

- Now you can start the guests again:
+
----
# qm start 100
# pct start 200
----

Remember to replace the VMIDs and node names with your respective values.

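If you want to verify that the guests actually came up, a quick check could
look like this (again with the example VMIDs from above):

----
# qm status 100
# pct status 200
----
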
Managing Jobs
-------------

[thumbnail="screenshot/gui-qemu-add-replication-job.png"]

You can use the web GUI to create, modify, and remove replication jobs
easily. Additionally, the command-line interface (CLI) tool `pvesr` can be
used to do this.

You can find the replication panel on all levels (datacenter, node, virtual
guest) in the web GUI. They differ in which jobs get shown: all, only
node-specific, or only guest-specific jobs.

When adding a new job, you need to specify the virtual guest (if not
already selected) and the target node. The replication
xref:pvesr_schedule_time_format[schedule] can be set if the default of `all
15 minutes` is not desired. You may also impose a rate limit on a
replication job; this can help to keep the load on the storage acceptable.

A replication job is identified by a cluster-wide unique ID. This ID is
composed of the VMID in addition to a job number.
This ID must only be specified manually if the CLI tool is used.

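To find out which IDs are currently in use, you can list all replication
jobs configured for guests on the local node; a simple check could look
like this:

----
# pvesr list
----
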
Command Line Interface Examples
-------------------------------

Create a replication job which runs every 5 minutes, with a limited
bandwidth of 10 MB/s (megabytes per second), for the guest with ID 100:

----
# pvesr create-local-job 100-0 pve1 --schedule "*/5" --rate 10
----

Disable an active job with ID `100-0`:

----
# pvesr disable 100-0
----

Enable a deactivated job with ID `100-0`:

----
# pvesr enable 100-0
----

Change the schedule interval of the job with ID `100-0` to once per hour:

----
# pvesr update 100-0 --schedule '*/00'
----

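Remove the job with ID `100-0` again once it is no longer needed (the
`pvesr delete` subcommand should take care of this):

----
# pvesr delete 100-0
----
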
ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]