Sébastien Han

Stacker! Cepher! What's next?

Quick and Efficient Ceph DevStacking

Recently I created a little repository on github.com/ceph where I put two files to help you build your DevStack with Ceph.

$ git clone https://git.openstack.org/openstack-dev/devstack
$ git clone https://github.com/ceph/ceph-devstack.git
$ cp ceph-devstack/local* devstack
$ cd devstack
$ ./stack.sh

Happy DevStacking!

OpenStack Summit Vancouver Talks: Ceph and OpenStack Upgrades

Self-promotion ahead :) I have submitted two talks for the next OpenStack summit.

The first one is about Ceph and OpenStack (yet again!). In this session Josh Durgin and I will focus on the roadmap for the integration of Ceph into OpenStack. People might think that we are almost done; this is not true. Even if we have achieved really good coverage, many things still need to be addressed.


The next talk is about an OpenStack upgrade, a particularly challenging one that I am working on with Cisco, since it goes from Havana on Ubuntu Precise to Icehouse on RHEL 7. It is basically both a migration and an upgrade. We have already started this process, so John Dewey from Cisco and I would love to share our experience so far.


Thanks a lot in advance for your votes :). See you in Vancouver!

OpenStack: Perform Consistent Snapshots With Qemu Guest Agent

A while back, I wrote an article about taking consistent snapshots of your virtual machines in an OpenStack environment. However, that method was quite intrusive, since it required being inside the virtual machine and manually triggering a filesystem freeze. In this article, I will use a different approach to achieve the same goal without needing to be inside the virtual machine. The only requirement is that the virtual machine runs the qemu-guest-agent.
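To give you an idea of what this looks like, here is a minimal sketch (the image name, libvirt domain and snapshot name are made up): the hw_qemu_guest_agent image property exposes an agent channel to the guest, and libvirt can then ask the agent to freeze the filesystems while the snapshot is taken.

$ glance image-update --property hw_qemu_guest_agent=yes trusty-cloudimg
$ virsh qemu-agent-command instance-0000002e '{"execute": "guest-fsfreeze-freeze"}'
$ nova image-create vm1 vm1-consistent-snap
$ virsh qemu-agent-command instance-0000002e '{"execute": "guest-fsfreeze-thaw"}'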

OpenStack and Ceph: RBD Discard

Only Magic: The Gathering players might recognize this post's picture :) (if you're interested).


I have been waiting for this for quite a while now. Discard, also called trim (in SSD parlance), is a space reclamation mechanism that allows you to reclaim unused blocks on a disk. RBD images are sparse by default, which means that the space they occupy grows as you write data (the opposite of preallocation). So while writing to your filesystem you might eventually hit the end of your device. On the Ceph side, nothing knows what is happening inside the filesystem, so blocks end up fully allocated… In the end the cluster believes that the RBD images are fully allocated. From an operator's perspective, having the ability to reclaim the space your running instances no longer use is really handy.
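As a hedged example of the wiring involved (the image ID below is a placeholder): discard requests only flow over the virtio-scsi disk bus, and Nova must be told to pass unmap requests down to RBD.

$ glance image-update --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi <image-id>

# nova.conf on the compute nodes
[libvirt]
hw_disk_discard = unmap

Once an instance boots from such an image, running fstrim inside the guest (or mounting with -o discard) hands the unused blocks back to the Ceph cluster.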

Ceph: Recover an RBD Image From a Dead Cluster

Many years ago I came across a script written by Shawn Moore and Rodney Rymer from Catawba University. The purpose of this tool is to reconstruct an RBD image from its raw objects. Imagine your cluster is dead: all the monitors got wiped out and you don't have a backup (I know, what could possibly happen?). However, all your objects remain intact.
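The core idea is simple enough to sketch in a few lines of shell. This is heavily simplified and purely illustrative: it assumes a format 2 image whose 4 MB chunks, named rbd_data.<prefix>.<hex index>, have already been gathered from the OSD data directories into the current directory (the prefix below is made up).

$ BS=4194304                       # default RBD object size: 4 MB
$ for obj in rbd_data.1f2a3b.*; do
>   idx=$((16#${obj##*.}))         # the chunk index is hex-encoded in the object name
>   dd conv=notrunc if="$obj" of=image.raw bs=$BS seek=$idx
> done

Missing chunks simply remain holes in the sparse image.raw, which is exactly what an unwritten RBD block would have read back as anyway.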

I’ve always wanted to blog about this tool, simply to advocate for it and make sure that people can use it. Hopefully this will be good publicity for it :-).

Ceph and KRBD Discard

A space reclamation mechanism for the kernel RBD module. Having this kind of support is really crucial for operators and eases capacity planning. RBD images are sparse, thus their size right after creation is 0 MB. The main issue with sparse images is that they grow until they eventually reach their full provisioned size. The thing is, Ceph does not know anything about what happens on top of that block device, especially if there is a filesystem. You can easily fill the entire filesystem and then delete everything; Ceph will still believe the blocks are fully used and will keep reporting that. However, thanks to discard support on the block device, the filesystem can send discard commands down to the block layer, and the storage will eventually free up the unused blocks.
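Here is what this looks like in practice, assuming a client kernel recent enough to have krbd discard support, and with made-up pool and image names:

$ rbd create rbd/leseb --size 10240
$ rbd map rbd/leseb
$ mkfs.xfs /dev/rbd0
$ mount -o discard /dev/rbd0 /mnt   # online discard on every delete
$ fstrim -v /mnt                    # or reclaim in batches with fstrim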

OpenStack: Disable a Compute Node During Its First Bootstrap

For operational reasons, you might not want your compute nodes to become available automatically. With the following flag set in nova.conf, a compute node will still register itself in the service list during its first bootstrap, but it will be disabled, so virtual machines cannot be scheduled on it:

enable_new_services=False
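
Once the node is ready to receive workloads, you can enable it by hand; the hostname below is hypothetical:

$ nova service-enable compute01 nova-compute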