Ceph/Administration

As the title suggests, this article focuses on the administration of various services inside a Ceph cluster.

File system
A Ceph file system requires two pools to start with: one pool contains the data, while the other holds the metadata. During the installation these pools should already have been created:
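Whether the pools exist can be checked by listing the pools; the pool names cephfs_data and cephfs_metadata used below are common conventions, not mandated names:

```shell
# List all pools and look for the data and metadata pools
ceph osd lspools
```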

If this is not the case, create the pools:
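A minimal sketch, assuming the pool names cephfs_data and cephfs_metadata and a placement group count of 64 (tune the count to the size of the cluster):

```shell
# Create the data and metadata pools with 64 placement groups each
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
```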

With these pools available, a file system can be created. First make sure that no file system already exists:
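Existing file systems can be listed as follows; if none exist, the command reports that no file systems are enabled:

```shell
# List the currently defined Ceph file systems
ceph fs ls
```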

If it already exists, then no further action is needed. Otherwise, create the file system:
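For instance, assuming the file system is to be called cephfs and uses the pools created earlier (all three names are examples):

```shell
# Create the file system on top of the metadata and data pools
ceph fs new cephfs cephfs_metadata cephfs_data
```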

To remove a file system, it is necessary to first fail the MDS service:
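A sketch of the removal, assuming a single MDS with rank 0 and a file system named cephfs:

```shell
# Mark the MDS as failed, then remove the file system
ceph mds fail 0
ceph fs rm cephfs --yes-i-really-mean-it
```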

When a file system exists, it can be mounted on the Linux clients that participate in the cluster. With Cephx authentication it is necessary to pass the client name (the client.admin user created alongside the cluster can be used, but a less privileged user works as well) together with its key, usually by referring to the secret file created when the user key was generated:
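Using the kernel client, a mount might look as follows; the monitor address, mount point, client name, and secret file path are all examples and must match the local setup:

```shell
# Mount the Ceph file system via the kernel client, authenticating
# as client.admin with the key stored in the given secret file
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
```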

Pools
Pools can be created and manipulated immediately. However, when removing a pool, make sure that the pool is no longer in use, as all its data is irrevocably removed from the cluster.

To list the current set of pools, use ceph osd lspools:
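```shell
# Print the id and name of every pool in the cluster
ceph osd lspools
```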

To get some more information, such as the number of objects, size, etc. use the rados command:
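```shell
# Show per-pool statistics: object count, stored size, I/O activity
rados df
```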

To create a pool:
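Assuming a pool named mypool and 64 placement groups (both example values):

```shell
# Create a pool with 64 placement groups
ceph osd pool create mypool 64
```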

To remove it:
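As a safeguard against accidental data loss, the pool name must be given twice together with a confirmation flag:

```shell
# Remove the pool and all data stored in it
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
```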

Administrators can take a snapshot of an existing pool, for instance to simplify backup operations: the backup can then take its time, since the snapshot is guaranteed to remain constant.
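The pool and snapshot names below are examples:

```shell
# Take a snapshot called "mypool-snap" of the pool "mypool"
ceph osd pool mksnap mypool mypool-snap
```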

To remove the snapshot again, use rmsnap.
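```shell
# Remove the snapshot taken earlier
ceph osd pool rmsnap mypool mypool-snap
```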

Placement groups
To get placement group information, use ceph pg dump:
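```shell
# Dump the state of all placement groups (the output can be large)
ceph pg dump
```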

To get the placement information for a particular placement group, use ceph pg map:
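The placement group id below (pool id, a dot, then the pg id within the pool) is an example:

```shell
# Show on which OSDs placement group 0.1f is stored
ceph pg map 0.1f
```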

Authentication keys
Authentication keys are best managed through ceph auth. During the key generation, the capabilities are added so that the cluster knows which operations are supported through the key and which ones aren't.

For instance, to create a key for a user (say client.chris) which can manipulate the s3data pool:
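A sketch using ceph auth get-or-create; the capability strings grant read access to the monitors and read/write access limited to the s3data pool, and the keyring output path is an example:

```shell
# Create (or fetch) the key for client.chris and write it to a keyring file
ceph auth get-or-create client.chris mon 'allow r' \
  osd 'allow rw pool=s3data' -o /etc/ceph/ceph.client.chris.keyring
```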

The user can use this keyring for cluster operations. Note that the user still requires read access to the Ceph configuration file, but does not need (and in fact should not have) access to the secret files or keyrings of other users.

To list the current authentication keys:
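```shell
# List all registered keys together with their capabilities
ceph auth list
```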

To remove all privileges from a key:
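One possible approach is to overwrite the key's capabilities with empty ones using ceph auth caps (the key itself remains registered; use ceph auth del to remove it entirely):

```shell
# Replace the capabilities of client.chris with empty ones
ceph auth caps client.chris mon ' ' osd ' '
```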