Sébastien Han

Stacker! Cepher! What's next?

OpenStack at the CephDays Paris


Save the date (September 18, 2014) and join us at the new edition of the Ceph Days in Paris. I will be talking about the amazing new things that happened during this (not yet finished) Juno cycle. I've never seen so many patch sets in a single cycle :D. Things are going well for Ceph in OpenStack! Deploying Ceph with Ansible will be part of the talk as well.

The full schedule is available, so don't forget to register for the event.


Hope to see you there!

OpenStack: Use Ephemeral and Persistent Root Storage for Different Hypervisors


Compute nodes with a Ceph image backend, and compute nodes with a local image backend. At some point, you might want to build hypervisors that use their local storage for virtual machine root disks. Local storage maximizes your IOPS and keeps IO latency to a minimum (compared to network block storage). However, you lose handy features such as live-migration (block migration remains an option, but it is slower), and data on the hypervisors does not have a good availability level either: if a compute node crashes, users cannot access their virtual machines for a certain amount of time.

On the other hand, you also want to build hypervisors where virtual machine root disks live in Ceph. You can then seamlessly move virtual machines with live-migration, and since the disks are highly available, if a compute node crashes you can quickly evacuate the virtual machines to another compute node.

Ultimately, your goal is to dissociate the two. Fortunately, OpenStack provides a mechanism based on host aggregates that helps you achieve this: thanks to aggregate filters, you can expose these two types of hypervisors separately, as in the sketch below.
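As a rough sketch of the idea (not the exact commands from the full article), one could create two host aggregates and tie flavors to them via the AggregateInstanceExtraSpecsFilter. The aggregate names, hostnames, flavor names, and the storage metadata key below are illustrative assumptions:

    # In nova.conf, make sure the aggregate filter is enabled, e.g.:
    # scheduler_default_filters = AggregateInstanceExtraSpecsFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter

    # Aggregate for hypervisors using local (ephemeral) root disks
    nova aggregate-create local-storage
    nova aggregate-set-metadata local-storage storage=local
    nova aggregate-add-host local-storage compute-01

    # Aggregate for hypervisors backed by Ceph
    nova aggregate-create ceph-storage
    nova aggregate-set-metadata ceph-storage storage=ceph
    nova aggregate-add-host ceph-storage compute-02

    # Flavors that target each aggregate
    nova flavor-key m1.local set aggregate_instance_extra_specs:storage=local
    nova flavor-key m1.ceph set aggregate_instance_extra_specs:storage=ceph

(Depending on your novaclient version, aggregate commands may expect the numeric aggregate ID returned by aggregate-create rather than its name.) Booting from a flavor then lands the instance only on hypervisors whose aggregate metadata matches.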

Ceph: Mix SATA and SSD Within the Same Box


The use case is simple: I want to use both SSD disks and SATA disks within the same machine and ultimately create pools pointing to either SSD or SATA disks. In order to achieve this, we need to modify the CRUSH map. In my example, each host has 2 SATA disks and 2 SSD disks, and I have 3 hosts in total.
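At a high level, and as a sketch of the generic workflow rather than the article's exact steps, this means decompiling the CRUSH map, adding separate SSD and SATA roots and rules, recompiling it, and pointing one pool at each rule. The pool names and ruleset IDs below are illustrative:

    # Dump and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # Edit crushmap.txt: add e.g. a "root ssd" and a "root sata" hierarchy
    # listing the right OSDs per host, plus one rule per root; then
    # recompile and inject the new map
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin

    # Create one pool per disk type and point each at its rule
    ceph osd pool create ssd 128 128
    ceph osd pool create sata 128 128
    ceph osd pool set ssd crush_ruleset 3     # ruleset id taken from crushmap.txt
    ceph osd pool set sata crush_ruleset 4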

Start Considering Ceph as a Backend for OpenStack Cinder (to Replace LVM)


Just back from the Juno summit: I attended most of the storage sessions and was struck by how much storage vendors avoided mentioning Ceph, while LVM, the reference storage backend for Cinder, came up constantly. Could it be a sign that Ceph is taking over? Speaking of LVM, the latest OpenStack user survey showed that it is the most used backend.
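For reference, wiring Ceph RBD into Cinder essentially comes down to a backend section in cinder.conf. This is a minimal sketch: the pool name, CephX user, and secret UUID are placeholders, and option details may vary with your release:

    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes                       # Ceph pool holding the volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf      # path to the Ceph config file
    rbd_user = cinder                        # CephX user Cinder authenticates as
    rbd_secret_uuid = <libvirt-secret-uuid>  # secret registered with libvirt on the computes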