Sébastien Han

Stacker! Cepher! What's next?

Stretching Ceph Networks


This is a quick note about Ceph networks, so do not expect anything lengthy here :).

Usually, Ceph networks are presented as two: the public network and the cluster network. However, it is rarely mentioned that you can also use a separate network for the monitors. This might sound obvious to some people, but it is completely possible. The only requirement, of course, is that this monitor network be reachable from all the Ceph nodes.

We can then easily imagine 3 VLANs (see the sketch after the list):

  • Ceph monitor
  • Ceph public
  • Ceph cluster
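To make this concrete, here is a minimal ceph.conf sketch. The subnets are made up for the example: 10.10.0.0/24 for the monitor VLAN, 10.20.0.0/24 for the public VLAN and 10.30.0.0/24 for the cluster VLAN.

[global]
# monitors sit on their own VLAN, reachable from every Ceph node
mon host = 10.10.0.1,10.10.0.2,10.10.0.3
# client-facing traffic
public network = 10.20.0.0/24
# OSD replication and heartbeat traffic
cluster network = 10.30.0.0/24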

I know this does not sound like much, but I have been asked this question so many times :).

Feel the Awk Power


Some of my favorite awk expressions:

# ports the local ceph-osd daemons are listening on, deduplicated
OSD_LISTEN_PORTS=$(netstat -tlpn | awk -F ":" '/ceph-osd/ { sub (" .*", "", $2); print $2 }' | sort -u)
# eth0 address in CIDR form, e.g. 192.168.0.10/24
NETWORK=$(ip -4 -o a | awk '/eth0/ {print $4}')
# the same address with the prefix length stripped, e.g. 192.168.0.10
IP=$(ip -4 -o a | awk '/eth0/ { sub ("/.*", "", $4); print $4 }')
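As a hypothetical follow-up (assuming eth0 is your storage-facing NIC), these variables combine nicely, for instance to open the discovered OSD ports to the local subnet:

# allow OSD traffic from the local network on every listening port
for port in $OSD_LISTEN_PORTS; do
    iptables -A INPUT -p tcp -s "$NETWORK" --dport "$port" -j ACCEPT
done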

OpenStack Summit Vancouver: Thanks for Your Votes


Bonjour, bonjour! Quick post to let you know that my talk submission has been accepted, so I would like to thank you all for voting. As a reminder, our talk (Josh Durgin and I) is scheduled for Tuesday, May 19, from 11:15am to 11:55am.

Also note that the summit has other Ceph talks!


See you in Vancouver!

OpenStack: Reserve Memory on Your Hypervisors


One major use case for operators is to be able to reserve a certain amount of memory on each hypervisor. This is extremely useful when you have to recover from failures. Imagine that you run all your virtual machines on shared storage (Ceph RBD, Sheepdog or NFS). The major benefit of running your instances on shared storage is that it eases live migration and evacuation. However, if a compute node dies, you want to make sure that you have enough capacity on the other compute nodes to relaunch your instances. Given that the nova host-evacuate call goes through the scheduler again, you should get an even distribution.

But how do you make sure that you have enough memory on the other hypervisors? Unfortunately, there is no real memory reservation mechanism. In this article I will explain how we can mimic such behavior.
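For context, here is a nova.conf sketch of two existing knobs that approximate a reserve; this is not necessarily the exact recipe the article builds on:

# nova.conf on a compute node
[DEFAULT]
# keep 4 GB (a made-up value) out of the scheduler's reach on this host
reserved_host_memory_mb = 4096
# or only let the scheduler commit 90% of the physical RAM
ram_allocation_ratio = 0.9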

OpenStack Glance NFS and Compute Local Direct Fetch


This feature has been around for quite a while now; if I remember correctly, it was introduced in the Grizzly release. However, I never really got the chance to play with it. Let's assume that you use NFS to store Glance images. We know that the default boot mechanism fetches the instance image from Glance to the Nova compute node. This basically streams the image, which consumes network throughput and makes the boot process longer. OpenStack Nova can be configured to access Glance images directly from a local filesystem path, which is ideal for our NFS scenario.
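As a sketch of the setup (option names as I recall them from that era, so double-check them against your release): Glance must advertise the image's direct location, each Nova compute must be allowed to follow file:// URLs, and the NFS share must be mounted at the same path on both sides.

# glance-api.conf: expose the image's direct location to Nova
show_image_direct_url = True

# nova.conf on each compute node: allow direct file:// fetches
allowed_direct_url_schemes = file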

OpenStack Guest and Watchdog


Libvirt has the ability to configure a watchdog device for QEMU guests. When the guest operating system hangs or crashes, the watchdog device can automatically trigger an action. Watchdog support was added to OpenStack in the Icehouse release.
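For instance (a minimal sketch using the Icehouse-era clients; the flavor name is made up), the behavior is driven by an image property or a flavor extra spec:

# tag an image: guests booted from it get a watchdog that resets them on hang
glance image-update --property hw_watchdog_action=reset <image-id>
# or enable it per flavor
nova flavor-key m1.watchdog set hw:watchdog_action=reset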

Quick and Efficient Ceph DevStacking


Recently I put together a little repository under github.com/ceph with two files to help you build your DevStack with Ceph.

$ git clone https://git.openstack.org/openstack-dev/devstack
$ git clone https://github.com/ceph/ceph-devstack.git
$ cp ceph-devstack/local* devstack
$ cd devstack
$ ./stack.sh

Happy DevStacking!