Sébastien Han

Stacker! Cepher! What's next?

Ceph: Mix SATA and SSD Within the Same Box


The use case is simple: I want to use both SSD and SATA disks within the same machine, and ultimately create pools that point to either the SSD or the SATA disks. To achieve this, we need to modify the CRUSH map. In my example, each host has 2 SATA disks and 2 SSD disks, and there are 3 hosts in total.
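As a rough sketch of the idea, the modified CRUSH map could define one root bucket per disk type and a rule that draws from each root. Bucket and host names below are illustrative, not taken from the article:

```
# Illustrative CRUSH map fragment: one root per disk type,
# each aggregating the per-host buckets of that type.
root sata {
    id -2
    alg straw
    hash 0  # rjenkins1
    item host1-sata weight 2.000
    item host2-sata weight 2.000
    item host3-sata weight 2.000
}
root ssd {
    id -3
    alg straw
    hash 0  # rjenkins1
    item host1-ssd weight 2.000
    item host2-ssd weight 2.000
    item host3-ssd weight 2.000
}

# One rule per root: replicas spread across hosts.
rule sata {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take sata
    step chooseleaf firstn 0 type host
    step emit
}
rule ssd {
    ruleset 2
    type replicated
    min_size 1
    max_size 10
    step take ssd
    step chooseleaf firstn 0 type host
    step emit
}
```

A pool can then be pointed at one of these rules, e.g. with `ceph osd pool set <pool> crush_ruleset 2` (the ruleset-based syntax used by Ceph releases of that era).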

Start Considering Ceph as a Backend for OpenStack Cinder (to Replace LVM)


Just back from the Juno summit: I attended most of the storage sessions and was extremely surprised at how Ceph was avoided by storage vendors. LVM, the reference storage backend for Cinder, was mentioned constantly, however. Could that be a sign that Ceph is taking over? Speaking of LVM, the last OpenStack survey showed it was the most used backend.

Ceph Maintenance With Ansible


Following up on this article.

This playbook was made to automate maintenance on Ceph servers. The typical use case is a hardware change. Running this playbook sets the noout flag on your cluster, which means that OSDs cannot be marked out of the CRUSH map; they will only be marked down, and thus will not receive any data. Basically, we tell the cluster not to move any data around, since the operation will not last long.
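As a minimal sketch of the noout dance in Ansible (the task names and the `mons` host group are assumptions for illustration, not lifted from the actual playbook):

```yaml
# Hypothetical playbook: freeze data movement, do maintenance, unfreeze.
- hosts: mons[0]
  tasks:
    - name: Set the noout flag so down OSDs are not marked out
      command: ceph osd set noout

# ... hardware maintenance happens here ...

- hosts: mons[0]
  tasks:
    - name: Unset the noout flag once the OSDs are back
      command: ceph osd unset noout
```

Once noout is unset, any OSDs that came back simply resync the writes they missed, instead of the cluster having rebalanced their data elsewhere in the meantime.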