Ceph: properly remove an OSD

Removing an OSD, if not done properly, can result in the data being rebalanced twice. The best practice for removing an OSD is to set its CRUSH weight to 0.0 as the first step.
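As a rough sketch, assuming the OSD to remove is osd.4 (a hypothetical ID), the sequence looks like this:

ceph osd crush reweight osd.4 0.0
# wait for the rebalancing to finish, then:
ceph osd out 4
# stop the ceph-osd daemon on its host, then:
ceph osd crush remove osd.4
ceph auth del osd.4
ceph osd rm 4

Because the CRUSH weight is already 0.0 when the OSD is taken out and removed from the CRUSH map, the data only moves once.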

Read On...

Ceph is moving outside DevStack core to a plugin

Ceph support just moved out of DevStack core in order to comply with DevStack's new plugin policy. The code can be found on GitHub. We are now on OpenStack Gerrit as well, which brings all the good things from the OpenStack infrastructure (such as CI).

To use it, simply create a localrc file with the following:

enable_plugin ceph https://github.com/openstack/devstack-plugin-ceph

A more complete localrc file can be found on GitHub.
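For illustration only, a minimal localrc might look something like this (the passwords are placeholders, not taken from the actual file):

ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
enable_plugin ceph https://github.com/openstack/devstack-plugin-ceph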

Read On...

Ceph: find an OSD location and restart it

When you manage a large cluster, you do not always know where your OSDs are located. Sometimes you have issues with PGs, such as unclean ones, or with OSDs, such as slow requests. Looking at ceph health detail only tells you which OSDs the PGs are acting on or which OSDs have slow requests. Given that you might have tons of OSDs spread across many nodes, it is not straightforward to find and restart them.
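As a quick sketch, assuming osd.23 is the slow OSD reported by ceph health detail (the ID is hypothetical), you can locate and restart it like this:

# ask the cluster on which host (and at which address) the OSD lives
ceph osd find 23
# then, on the host returned above, restart the daemon; depending on your init system:
service ceph restart osd.23
# or
systemctl restart ceph-osd@23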

Read On...

The Ceph and TCMalloc performance story

This article simply relays some recent discoveries made around Ceph performance. The finding behind this story is one of the biggest improvements in Ceph performance that has been seen in years. I will highlight and summarize the study in case you do not want to read it entirely.

Read On...

Ceph: get the best of your SSD with primary affinity

Using SSD drives in some parts of your cluster might be useful, especially under read-oriented workloads. Ceph has a mechanism called primary affinity, which allows you to give certain OSDs a higher affinity so they are more likely to be the primary on some PGs. Since reads are served by the primary OSD of a PG, the idea is to make the SSD-backed OSDs the primaries so clients get faster reads.
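As a small sketch, assuming osd.3 is SSD-backed and osd.12 is a spinning disk (both IDs are hypothetical), and that the monitors allow it (older releases need mon osd allow primary affinity = true in ceph.conf):

# keep the SSD-backed OSD fully eligible to be primary
ceph osd primary-affinity osd.3 1.0
# make the spinning OSD less likely to be chosen as primary
ceph osd primary-affinity osd.12 0.5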

Read On...