Ceph "Mimic" – Evolution of our Storage Cluster

Recently, we updated our Ceph storage cluster to the latest version: "Mimic". Ceph Mimic lays the foundation for the future development of our storage cluster, but also brings tangible improvements for the continuous management of our storage systems. And last but not least, Mimic also incorporates numerous minor improvements, e.g. in the area of our S3-compatible object storage.

How Mimic simplifies storage cluster management

Even though Ceph does many things automatically, some administrative tasks still remain. Mimic supports our sysadmins with a new dashboard, which summarizes all important information about the current cluster status at a glance. The command line tools of Ceph now consistently format their output as JSON so that this data can be processed in scripts more easily.
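The JSON output is straightforward to consume from scripts. A minimal sketch in Python, using an abbreviated sample of what `ceph status --format json` returns (the real output contains many more fields than the `health.status` shown here):

```python
import json

def cluster_health(raw_json: str) -> str:
    """Extract the overall health status from `ceph status` JSON output."""
    status = json.loads(raw_json)
    return status["health"]["status"]

# Abbreviated, illustrative sample of the JSON a Mimic cluster reports.
sample = '{"health": {"status": "HEALTH_OK"}}'

print(cluster_health(sample))  # HEALTH_OK
```

In practice you would feed the function the output of `subprocess.run(["ceph", "status", "--format", "json"], ...)` instead of a canned string.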

Improvements were also made to the upmap mechanism. This feature, which was first introduced in the previous version "Luminous", makes it possible to distribute the data evenly across all OSDs if an imbalance has accumulated during ongoing storage usage. This way, selective space and performance bottlenecks can be avoided. Finally, Mimic receives (security) updates in a timely manner and is supported by the latest Ceph-Ansible playbooks, which already hold a great deal of know-how from the Ceph community.
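For administrators running their own cluster, enabling the upmap-based balancer comes down to a few CLI calls (this assumes the balancer manager module available since Luminous and that all clients speak at least the Luminous protocol):

```shell
# upmap requires every client to support at least the Luminous feature set
ceph osd set-require-min-compat-client luminous

# switch the balancer module to upmap mode and activate it
ceph balancer mode upmap
ceph balancer on

# inspect what the balancer is currently doing
ceph balancer status
```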

Three helpful features for objects you should know about

One of the latest features of our S3-compatible object storage is "bucket lifecycle": You can define when objects should expire, e.g. so that backups stored in the object storage are automatically removed after a certain period of time. Using a user-definable prefix, you can specify the set of objects for which a certain lifecycle should apply. The system then processes the defined lifecycles daily between midnight and 6:00 AM (CET/CEST).
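With an S3 client library such as boto3, an expiry rule like this is applied via `put_bucket_lifecycle_configuration`. The sketch below only builds the rule document; the bucket name, rule ID, and prefix are made-up examples:

```python
import json

# Lifecycle rule: expire all objects under the "backups/" prefix after 30 days.
# The structure matches what boto3's put_bucket_lifecycle_configuration expects.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-old-backups",        # illustrative rule name
            "Filter": {"Prefix": "backups/"},  # illustrative prefix
            "Status": "Enabled",
            "Expiration": {"Days": 30},
        }
    ]
}

# With a configured boto3 client this would be applied as:
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle)
print(json.dumps(lifecycle, indent=2))
```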

"Server-side encryption" ensures that your data is stored encrypted in the object storage. With the supported "SSE-C" mode, key management remains completely in your hands: you decide which objects are protected by which key. Retrieving these objects later is then only possible with the respective key.
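The SSE-C protocol expects the client to send the key with every request, base64-encoded and accompanied by an MD5 digest so the server can detect transmission errors. A sketch of building these headers (client libraries like boto3 take the same values via the `SSECustomerAlgorithm`/`SSECustomerKey` parameters):

```python
import base64
import hashlib
import os

# Generate a random 256-bit key. You are responsible for storing it safely:
# with SSE-C, the server never keeps a copy of the key.
key = os.urandom(32)

# Request headers defined by the SSE-C protocol.
sse_c_headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
    "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
        hashlib.md5(key).digest()
    ).decode(),
}
```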

Finally, "bucket policies" allow you to set detailed permissions for your buckets and objects. Define which other users should have access and which actions are allowed. If, for example, you want to make objects available to someone for download, simply create an additional objects user and grant them the necessary read rights using a bucket policy.
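A policy of this kind is a small JSON document. The sketch below grants a hypothetical user read access to every object in a bucket; the bucket name "my-bucket" and the user "download-user" are made-up examples, and the ARN form shown is the one the Ceph Object Gateway accepts for local users:

```python
import json

# Bucket policy: allow "download-user" to fetch all objects in "my-bucket".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDownload",
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam:::user/download-user"]},
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::my-bucket/*"],
        }
    ],
}

policy_json = json.dumps(policy)
# Applied with boto3: s3.put_bucket_policy(Bucket="my-bucket", Policy=policy_json)
```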

What further improvements we are planning based on Mimic

In the background, our Ceph storage cluster distributes all data across a series of storage systems, replicated three times. The individual data fragments are stored in an XFS file system on the physical disks. Following the upgrade to Ceph Mimic, we are now planning to switch to the new "BlueStore" storage backend, which was officially introduced with Luminous. The POSIX file system as an intermediate layer is no longer necessary, since BlueStore stores the data directly as objects on the block device. Another advantage of BlueStore is its integrated checksums for all data and metadata. This ensures that retrieved data is actually correct every time it is read.
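The idea behind checksum-on-read can be sketched in a few lines. This is a conceptual illustration only, using Python's `zlib.crc32`; BlueStore itself defaults to crc32c and verifies checksums per block within the storage backend, not on whole application objects:

```python
import zlib

def store(data: bytes) -> tuple[bytes, int]:
    """Persist data together with a checksum over its contents."""
    return data, zlib.crc32(data)

def read(data: bytes, checksum: int) -> bytes:
    """Verify the checksum on every read before handing the data out."""
    if zlib.crc32(data) != checksum:
        raise IOError("checksum mismatch: stored data is corrupt")
    return data

blob, crc = store(b"hello ceph")
assert read(blob, crc) == b"hello ceph"
```

If a disk silently flips a bit in `blob`, the next `read` raises instead of returning wrong data, which is exactly the guarantee BlueStore adds at the storage layer.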

We will use the successive re-creation of the Ceph OSDs during the migration to BlueStore to implement one further improvement at the same time: the encryption of all data disks in our storage cluster. This will provide an extra layer of security to protect your data, e.g. in the event that we have to dispose of a defective disk.
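When an OSD is re-created, encryption can be requested at creation time. A sketch using `ceph-volume`, which supports dm-crypt encryption of the underlying device (`/dev/sdX` is a placeholder for the actual disk):

```shell
# Re-create an OSD with BlueStore and dm-crypt encryption of the data device.
ceph-volume lvm create --bluestore --dmcrypt --data /dev/sdX
```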

With us, you can take full advantage of a distributed and replicated storage cluster. And thanks to the ongoing development of Ceph by its active open-source community, you benefit from new features as well as performance and reliability improvements with every upgrade. Without lifting a finger.

Up to date with Ceph Mimic,
your team
