Sébastien Han

Stacker! Cepher! What's next?

OpenStack: Reserve Memory on Your Hypervisors

One major use case for operators is being able to reserve a certain amount of memory on the hypervisor. This is extremely useful when you have to recover from failures. Imagine that you run all your virtual machines on shared storage (Ceph RBD, Sheepdog or NFS). The major benefit of running your instances on shared storage is that it eases live migration and evacuation. However, if a compute node dies you want to make sure that you have enough capacity on the other compute nodes to relaunch your instances. Given that the nova host-evacuate call goes through the scheduler again, you should get an even distribution.

But how do you make sure that you have enough memory on the other hypervisors? Unfortunately, there is no real memory restriction mechanism. In this article I will explain how we can mimic such behavior.
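As a quick, hedged illustration of the idea (not necessarily the exact trick described in the full article): Nova's reserved_host_memory_mb option tells the scheduler to keep that amount of RAM out of reach of instances on each compute node. The 16384 value and the use of crudini below are just examples.

# Reserve 16 GB of RAM on this compute node as recovery headroom
$ crudini --set /etc/nova/nova.conf DEFAULT reserved_host_memory_mb 16384
$ service nova-compute restart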

OpenStack Glance NFS and Compute Local Direct Fetch

This feature has been around for quite a while now; if I remember correctly it was introduced in the Grizzly release. However, I never really got the chance to play around with it. Let’s assume that you use NFS to store Glance images. The default boot mechanism fetches the instance image from Glance to the Nova compute node, basically streaming the image, which consumes network throughput and makes the boot process longer. OpenStack Nova can be configured to access Glance images directly from a local filesystem path, which is ideal for our NFS scenario.
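For reference, a minimal sketch of the Grizzly-era configuration, assuming the Glance filesystem store and the compute nodes share the same NFS mount; exact option names vary between releases.

# Let Glance expose the image location and let Nova follow file:// URLs
$ crudini --set /etc/glance/glance-api.conf DEFAULT show_image_direct_url True
$ crudini --set /etc/nova/nova.conf DEFAULT allowed_direct_url_schemes file
$ service glance-api restart
$ service nova-compute restart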

OpenStack Guest and Watchdog

Libvirt has the ability to configure a watchdog device for QEMU guests. When the guest operating system hangs or crashes, the watchdog device is used to automatically trigger an action. Watchdog support was added to OpenStack in the Icehouse release.
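As a quick sketch, the watchdog is driven by an image property or a flavor extra spec; the action (reset, poweroff, pause, none) decides what happens when it fires. The image ID and flavor name below are placeholders.

# Enable the watchdog device and reset the guest when it fires
$ glance image-update --property hw_watchdog_action=reset <image-id>
$ nova flavor-key m1.small set hw:watchdog_action=reset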

Quick and Efficient Ceph DevStacking

Recently I built a little repository on github.com/ceph where I put two files to help you build your DevStack with Ceph.

# Grab DevStack and the Ceph DevStack helper files
$ git clone https://git.openstack.org/openstack-dev/devstack
$ git clone https://github.com/ceph/ceph-devstack.git
# Drop the local* files into the DevStack tree and stack
$ cp ceph-devstack/local* devstack
$ cd devstack
$ ./stack.sh

Happy DevStacking!

OpenStack Summit Vancouver Talks: Ceph and OpenStack Upgrades

Self promotion ahead :) For the next OpenStack summit I have submitted two talks.

The first one is about Ceph and OpenStack (yet again!). In this session Josh Durgin and I will focus on the roadmap for the integration of Ceph into OpenStack. People might think that we are almost done; this is not true. Even though we have achieved really good coverage, many things still need to be addressed.


The next talk is about an OpenStack upgrade, a particularly challenging one that I am working on with Cisco, since it goes from Havana on Ubuntu Precise to Icehouse on RHEL 7. Basically it is both a migration and an upgrade. We have already started this process, and John Dewey from Cisco and I would love to share our experience so far.


Thanks a lot in advance for your votes :). See you in Vancouver!

OpenStack: Perform Consistent Snapshots With Qemu Guest Agent

A while back, I wrote an article about taking consistent snapshots of your virtual machines in your OpenStack environment. However, that method was really intrusive since it required being inside the virtual machine and manually triggering a filesystem freeze. In this article, I will use a different approach to achieve the same goal without the need to be inside the virtual machine. The only requirement is that the virtual machine runs the qemu-guest-agent.
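A hedged sketch of the moving parts (the instance name and IDs are placeholders): the image must carry the hw_qemu_guest_agent property so Nova wires up the agent channel, and the freeze/thaw can then be driven from the hypervisor through the agent.

# Tell Nova the image ships qemu-guest-agent
$ glance image-update --property hw_qemu_guest_agent=yes <image-id>
# Freeze the guest filesystems, snapshot, then thaw (run on the compute node)
$ virsh qemu-agent-command instance-0000001e '{"execute": "guest-fsfreeze-freeze"}'
$ nova image-create <instance-id> consistent-snap
$ virsh qemu-agent-command instance-0000001e '{"execute": "guest-fsfreeze-thaw"}'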

OpenStack and Ceph: RBD Discard

Only Magic card players might recognize that post picture :) (if you’re interested)


I have been waiting for this for quite a while now. Discard, also called trim (on SSDs), is a space reclamation mechanism that allows you to reclaim unused blocks on a disk. RBD images are sparse by default, which means that the space they occupy increases the more data you write (the opposite of preallocation). So while writing on your filesystem you might eventually fill up your device. On the Ceph side, nothing knows what is happening on the filesystem, so we actually end up with fully allocated blocks… In the end the cluster believes that the RBD images are fully allocated. From an operator’s perspective, having the ability to reclaim the space unused by your running instances is really handy.
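A hedged sketch of what this typically requires (option availability depends on your Nova release): the libvirt driver must pass discard requests down, and the guest disk must sit on a bus that supports them, such as virtio-scsi. The image ID is a placeholder.

# Let the libvirt driver unmap discarded blocks
$ crudini --set /etc/nova/nova.conf libvirt hw_disk_discard unmap
# Attach the disk through virtio-scsi, which supports discard
$ glance image-update --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi <image-id>
# Inside the guest, reclaim the free space
$ fstrim -v /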

Ceph: Recover an RBD Image From a Dead Cluster

Many years ago I came across a script written by Shawn Moore and Rodney Rymer from Catawba University. The purpose of this tool is to reconstruct an RBD image. Imagine your cluster is dead: all the monitors got wiped out and you don’t have a backup (I know, what could possibly happen?). However, all your objects remain intact.

I’ve always wanted to blog about this tool, simply to advocate for it and make sure that people can use it. Hopefully this will be good publicity for it :-).
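To give a rough idea of the principle only (the real script does much more, such as locating the objects on the OSD filestores), here is a minimal sketch assuming the image’s 4 MB objects have already been gathered into ./objects/ and that each file name ends with its hexadecimal extent index.

# Stitch the objects back together at their original offsets
obj_size=$((4 * 1024 * 1024))
out=recovered.img
for f in ./objects/*; do
    idx=$((16#${f##*.}))   # hex extent suffix -> decimal offset index
    dd if="$f" of="$out" bs=$obj_size seek=$idx conv=notrunc 2>/dev/null
done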