
pveceph: add attribute ceph_codename

To change the codename for Ceph in one place, this patch adds the
asciidoc attribute 'ceph_codename'. It also replaces the outdated
references to luminous and switches the links in pveceph.adoc from
http to https.

Signed-off-by: Alwin Antreich <>
2 changed files with 16 additions and 15 deletions
asciidoc/asciidoc-pve.conf

@@ -15,4 +15,5 @@ author=Proxmox Server Solutions Gmbh
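The mechanism the patch introduces can be sketched as follows. This is only an illustration of how an asciidoc attribute centralizes the codename; the attribute value shown (`luminous`) is a placeholder, since the actual value added to asciidoc-pve.conf is not visible in this diff:

```asciidoc
// In asciidoc/asciidoc-pve.conf, the codename is defined once
// (placeholder value for illustration):
ceph_codename=luminous

// In pveceph.adoc, links reference the attribute instead of a
// hard-coded release name; asciidoc substitutes it at render time:
footnote:[Ceph intro https://docs.ceph.com/docs/{ceph_codename}/start/intro/]
```

On a future release bump, only the single attribute line needs to change and every rendered link picks up the new codename.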

pveceph.adoc

@@ -58,15 +58,15 @@ and VMs on the same node is possible.
To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.

-.Ceph consists of a couple of Daemons footnote:[Ceph intro http://docs.ceph.com/docs/luminous/start/intro/], for use as a RBD storage:
+.Ceph consists of a couple of Daemons footnote:[Ceph intro https://docs.ceph.com/docs/{ceph_codename}/start/intro/], for use as a RBD storage:
- Ceph Monitor (ceph-mon)
- Ceph Manager (ceph-mgr)
- Ceph OSD (ceph-osd; Object Storage Daemon)

TIP: We highly recommend to get familiar with Ceph's architecture
-footnote:[Ceph architecture http://docs.ceph.com/docs/luminous/architecture/]
+footnote:[Ceph architecture https://docs.ceph.com/docs/{ceph_codename}/architecture/]
and vocabulary
-footnote:[Ceph glossary http://docs.ceph.com/docs/luminous/glossary].
+footnote:[Ceph glossary https://docs.ceph.com/docs/{ceph_codename}/glossary].

@@ -76,7 +76,7 @@ To build a hyper-converged Proxmox + Ceph Cluster there should be at least
three (preferably) identical servers for the setup.

-Check also the recommendations from http://docs.ceph.com/docs/luminous/start/hardware-recommendations/[Ceph's website].
+Check also the recommendations from https://docs.ceph.com/docs/{ceph_codename}/start/hardware-recommendations/[Ceph's website].

Higher CPU core frequency reduce latency and should be preferred. As a simple
@@ -237,7 +237,7 @@ configuration file.
Ceph Monitor
The Ceph Monitor (MON)
-footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
+footnote:[Ceph Monitor https://docs.ceph.com/docs/{ceph_codename}/start/intro/]
maintains a master copy of the cluster map. For high availability you need to
have at least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors as long
@@ -282,7 +282,7 @@ Ceph Manager
The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the Ceph luminous release at least one ceph-mgr
-footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon is
+footnote:[Ceph Manager https://docs.ceph.com/docs/{ceph_codename}/mgr/] daemon is
required.

Create Manager
@@ -355,7 +355,7 @@ WARNING: The above command will destroy data on the disk!

Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced, the so called Bluestore
-footnote:[Ceph Bluestore http://ceph.com/community/new-luminous-bluestore/].
+footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs since Ceph Luminous.

@@ -452,7 +452,7 @@ NOTE: The default number of PGs works for 2-5 disks. Ceph throws a

It is advised to calculate the PG number depending on your setup, you can find
-the formula and the PG calculator footnote:[PG calculator http://ceph.com/pgcalc/] online. While PGs can be increased later on, they can
+the formula and the PG calculator footnote:[PG calculator https://ceph.com/pgcalc/] online. While PGs can be increased later on, they can
never be decreased.

@@ -470,7 +470,7 @@ mark the checkbox "Add storages" in the GUI or use the command line option

Further information on Ceph pool handling can be found in the Ceph pool
-operation footnote:[Ceph pool operation http://docs.ceph.com/docs/luminous/rados/operations/pools/]
+operation footnote:[Ceph pool operation https://docs.ceph.com/docs/{ceph_codename}/rados/operations/pools/]

@@ -503,7 +503,7 @@ advantage that no central index service is needed. CRUSH works with a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.

NOTE: Further information can be found in the Ceph documentation, under the
-section CRUSH map footnote:[CRUSH map http://docs.ceph.com/docs/luminous/rados/operations/crush-map/].
+section CRUSH map footnote:[CRUSH map https://docs.ceph.com/docs/{ceph_codename}/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (eg. failure domains), while maintaining the desired
@@ -649,7 +649,7 @@ Since Luminous (12.2.x) you can also have multiple active metadata servers
running, but this is normally only useful for a high count on parallel clients,
as else the `MDS` seldom is the bottleneck. If you want to set this up please
refer to the ceph documentation. footnote:[Configuring multiple active MDS

Create CephFS
@@ -681,7 +681,7 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding a fitting placement group
-number (`pg_num`) for your setup footnote:[Ceph Placement Groups http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/].
+number (`pg_num`) for your setup footnote:[Ceph Placement Groups https://docs.ceph.com/docs/{ceph_codename}/rados/operations/placement-groups/].
Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
storage configuration after it was created successfully.

@@ -763,7 +763,7 @@ object in a PG for its health. There are two forms of Scrubbing, daily
(metadata compare) and weekly. The weekly reads the objects and uses checksums
to ensure data integrity. If a running scrub interferes with business needs,
-you can adjust the time when scrubs footnote:[Ceph scrubbing http://docs.ceph.com/docs/luminous/rados/configuration/osd-config-ref/#scrubbing]
+you can adjust the time when scrubs footnote:[Ceph scrubbing https://docs.ceph.com/docs/{ceph_codename}/rados/configuration/osd-config-ref/#scrubbing]
are executed.

@@ -787,10 +787,10 @@ pve# ceph -w

To get a more detailed view, every ceph service has a log file under
`/var/log/ceph/` and if there is not enough detail, the log level can be
-adjusted footnote:[Ceph log and debugging http://docs.ceph.com/docs/luminous/rados/troubleshooting/log-and-debug/].
+adjusted footnote:[Ceph log and debugging https://docs.ceph.com/docs/{ceph_codename}/rados/troubleshooting/log-and-debug/].

You can find more information about troubleshooting
-footnote:[Ceph troubleshooting http://docs.ceph.com/docs/luminous/rados/troubleshooting/]
+footnote:[Ceph troubleshooting https://docs.ceph.com/docs/{ceph_codename}/rados/troubleshooting/]
a Ceph cluster on the official website.