Ceph/Cluster

A Ceph cluster is the complete set of Ceph components, working together to provide highly available and reliable shared object storage and a file system. All components interact with each other over the network, as Ceph provides a "shared-nothing" storage environment - there is no need for a central, shared storage platform such as a SAN.

Structure
A Ceph cluster needs a number of components to function properly.


 * A number of monitors, which provide quorum handling: an operation against the cluster must be accepted by a majority of the monitors, otherwise it fails
 * A number of object store devices (OSDs), which represent the storage areas (usually mapped one-to-one onto mounted file systems) in which Ceph stores its objects
 * An optional metadata server (or set of metadata servers), which allows a POSIX-compliant Ceph file system to be mounted
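The monitor majority rule means that a cluster with N monitors keeps accepting operations only while more than half of them agree. As a minimal sketch (the quorum helper below is an illustration, not a Ceph command):

```shell
# Smallest majority needed among N monitors (hypothetical helper,
# not part of Ceph): integer division rounds down, so add 1.
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # a 3-monitor cluster keeps quorum with 1 monitor down
quorum 5   # a 5-monitor cluster keeps quorum with 2 monitors down
```

This is also why monitor counts are usually odd: going from 4 monitors to 5 raises the failure tolerance, while going from 3 to 4 does not.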

It is also possible to have gateway services that provide a compatibility layer between Ceph and popular cloud services / APIs such as Amazon S3 or OpenStack Swift. This allows users to host their own S3-compatible or Swift-compatible storage.

Configuration
The configuration file for a Ceph cluster is hosted in /etc/ceph and is named after the cluster name. Most users will call their Ceph cluster ceph, so the resulting configuration file is /etc/ceph/ceph.conf. It is possible to manage multiple Ceph clusters from the same host if the administrator takes care to use either different IP addresses (multi-homed systems or interface aliases) or different ports for the various Ceph services.
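When more than one cluster configuration is present, the Ceph command-line tools select one by name. As a sketch (the cluster name backup is a placeholder; the command requires that cluster to actually exist and be reachable):

```shell
# Reads the configuration for the cluster named "backup"
# instead of the default cluster named "ceph".
ceph --cluster backup -s
```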

A configuration file contains global settings in the [global] section (which are applicable to the entire cluster), component-specific settings in sections such as [mon] or [osd] (which are applicable to all instances of that particular component), and instance-specific settings in sections such as [osd.0] (which are applicable to that particular instance).
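As a sketch, a minimal configuration file might look like the following (the fsid, monitor address, OSD id, and host name are placeholders):

```ini
[global]
    ; applies to every daemon and client in the cluster
    fsid = a7f64266-0894-4f1e-a635-d0aedacbe6af
    mon host = 10.0.0.1

[osd]
    ; applies to all object store devices
    osd journal size = 1024

[osd.0]
    ; applies only to the OSD with id 0
    host = storage1
```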

Alongside the cluster configuration file, most clusters will also have the authentication keyrings and/or secrets stored in this location. These keyrings are necessary for services to be able to interact with a Cephx-enabled cluster (cephx is the authentication and authorization implementation of Ceph).

Every host that participates in the Ceph cluster should have the same contents in this directory. The keyrings can be restricted to just those hosts that need them, but it is generally simpler to keep the entire directory synchronized across all hosts.
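One common way to keep the directory identical everywhere is to push it from an administration host to the other cluster members. A minimal sketch (the host name ceph2 and the choice of rsync are assumptions, not requirements of Ceph):

```shell
# Push the Ceph configuration and keyrings to another cluster host
# ("ceph2" is a placeholder). -a preserves the ownership and
# permissions of the keyring files.
rsync -a --delete /etc/ceph/ ceph2:/etc/ceph/
```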

Querying
To query the state of the cluster, use ceph -s.

This shows, for example, a cluster with 3 monitor services up and running, one metadata server, and 203 object store devices.
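Beyond ceph -s, a few other subcommands give more targeted views. These require a running cluster and a valid keyring, and their exact output differs per cluster:

```shell
ceph health        # one-line cluster health summary (e.g. HEALTH_OK)
ceph mon stat      # monitor status and current quorum membership
ceph osd stat      # how many OSDs exist and how many are up and in
ceph df            # cluster-wide and per-pool storage utilization
```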