30 August 2021

Bulk and Object Storage – Upgrade to "All Flash"

At cloudscale.ch, we place great value on performance – even when it is not the main focus. This is why, during the most recent expansion of our bulk and object storage, we switched to "all flash" in this area, too. Consequently, our customers automatically benefit from improved access times to their bulk volumes and objects at no additional cost.

What we changed

From the outset, cloudscale.ch has relied on separate Ceph-based storage clusters. These store data on dedicated storage servers, independently of the compute hardware, distributing the data across multiple servers and keeping three replicas of everything. During the expansion of our bulk and object storage, we also decided to replace the existing servers and, going forward, to use fast SSDs exclusively for the disks.
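For the technically curious, here is a minimal sketch of how a cluster operator might verify the replication factor of a pool using the official Ceph Python bindings (python3-rados). The pool name and configuration path are purely illustrative, and our customers never talk to Ceph directly – they work with volumes and buckets.

```python
import json
import rados  # python3-rados, the official Ceph Python bindings

# Connect as a Ceph client (operators only; paths and names here
# are illustrative, not our actual cluster configuration).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

# Ask the monitors for the replication factor of a hypothetical pool.
cmd = json.dumps({"prefix": "osd pool get", "pool": "bulk-volumes",
                  "var": "size", "format": "json"})
ret, outbuf, outs = cluster.mon_command(cmd, b"")
print(outbuf.decode())  # e.g. {"pool":"bulk-volumes","pool_id":1,"size":3}

cluster.shutdown()
```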

The new storage servers not only have faster disks, but also significantly higher-performance AMD CPUs and more RAM than the previous generation. This allows Ceph, which handles data distribution and replication in the cluster, to operate at its full potential. Our system engineers also significantly enlarged the caches for object storage. As we continue to use particularly high-performance NVMe SSDs for these caches, this provides an additional benefit for write operations and frequently read objects.

The advantages

One of the greatest disadvantages of conventional hard drives is their mechanical way of working. To write data to a certain location on the drive or to read it from there, the read/write head has to be physically moved to the correct position and the magnetic platter rotated until the desired location sits underneath the head. On many hard drives, this process takes about 8.5 ms on average, and when several accesses hit the disk at the same time, a request further back in the queue can wait considerably longer. With the switch to "all flash" for our bulk and object storage, this mechanical latency is eliminated, and the mutual performance impact when several customers access the storage simultaneously is minimized.
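As a rough back-of-the-envelope calculation (the figures are assumptions for a typical 7,200 rpm drive, not measurements of specific hardware), the 8.5 ms breaks down into seek time plus rotational latency:

```python
# Where ~8.5 ms per random access comes from on a typical 7200 rpm
# drive (assumed figures; exact values vary by model).
rpm = 7200
avg_rotational_ms = (60_000 / rpm) / 2  # half a revolution on average, ~4.2 ms
avg_seek_ms = 4.3                       # assumed typical average seek time
access_ms = avg_seek_ms + avg_rotational_ms
print(f"~{access_ms:.1f} ms per random access")  # ~8.5 ms

# An SSD needs no head movement: random reads typically complete in
# well under 0.5 ms, i.e. more than an order of magnitude faster.
```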

A typical advantage of Ceph clusters, which store data across multiple storage servers and disks, becomes apparent with simultaneous data access: there is no need for queues and consecutive processing, as requests can be executed in parallel. Previously, however, this did not hold when two operations had to access the same physical hard drive: due to the mechanics, only one read or write operation could take place at any one time. Thanks to the SSDs, parallel access is now possible even at the disk level, and our customers automatically benefit from the higher overall performance of our bulk and object storage clusters.
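A simplified model (ignoring the command queueing and scheduler optimizations of real drives) shows how quickly waiting times used to add up on a single mechanical disk:

```python
# With a single mechanical disk, concurrent requests serialize: the
# request at queue position n waits for all n-1 requests ahead of it.
access_ms = 8.5  # average random access time from the estimate above
for position in (1, 5, 10):
    print(f"request #{position}: ~{position * access_ms:.1f} ms until completion")
# request #1: ~8.5 ms
# request #5: ~42.5 ms
# request #10: ~85.0 ms
```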

A few tips

You do not have to do anything to benefit from our all-flash clusters for bulk and object storage. The switch took place in May at the Rümlang (RMA) site and in mid-August at the Lupfig (LPG) site, and included all existing volumes and buckets. Please note that the rate limit of 500 IOPS remains in place for bulk volumes. This is to prevent excessive use of our clusters by individual customers with particularly disk-intensive applications to the detriment of other customers. For database applications, in particular, we still recommend our NVMe-SSD volumes, which have no limit of this kind and are specifically designed for the highest level of performance.
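If you manage your infrastructure programmatically, both volume types can be requested via our REST API. The following sketch reflects our reading of the API documentation (the "type" parameter being "ssd" or "bulk", the token environment variable, and the sizes are assumptions for illustration); please consult the current API reference for authoritative details.

```python
import os
import requests

API = "https://api.cloudscale.ch/v1"
headers = {"Authorization": f"Bearer {os.environ['CLOUDSCALE_API_TOKEN']}"}

# A bulk volume for archives and large datasets (500 IOPS rate limit)...
requests.post(f"{API}/volumes", headers=headers, json={
    "name": "archive-data", "size_gb": 600, "type": "bulk",
}).raise_for_status()

# ...and an NVMe-SSD volume for a database (no such limit).
requests.post(f"{API}/volumes", headers=headers, json={
    "name": "db-data", "size_gb": 50, "type": "ssd",
}).raise_for_status()
```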

While the new setup means that the typical access time of rotating hard drives is no longer an issue, there is still a certain degree of latency: as our storage clusters are operated separately from the compute servers on different hardware, all disk access takes place via the network. For this reason, we recommend that you enable any caching options that the software you use offers and that you select a flavor with sufficient RAM, which Linux can automatically use as a disk cache.
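To see the page cache at work on your server, you can read the relevant fields from /proc/meminfo; the "Cached" value shows how much RAM Linux is currently using to hold file data in memory, so repeated reads are served locally instead of crossing the network to the storage cluster:

```python
# Report total RAM and the current size of the Linux page cache.
fields = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":", 1)
        fields[key] = int(value.split()[0])  # values are in kB

print(f"Total RAM:  {fields['MemTotal'] / 1024:.0f} MiB")
print(f"Page cache: {fields['Cached'] / 1024:.0f} MiB")
```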


Even if you only rarely need certain data, access to them should still be as fast as possible. By switching our bulk and object storage to "all flash", we have combined unchanged, reasonable costs with the considerably higher performance of SSDs – especially when multiple accesses occur simultaneously. See for yourself!

Farewell, mechanical limitations – welcome, all flash!
Your cloudscale.ch team
