Sébastien Han

Stacker! Cepher! What's next?

OpenStack Glance: A First Glimpse at Image Conversion


Following my selection of the best Kilo additions, today I will be introducing Glance image conversion. This feature was discussed at the last OpenStack summit in Paris; you can access the etherpad discussion. Before you get all excited, let me tell you first that the patch introduced during the Kilo cycle is the first of a series, so do not be disappointed if it does not fit your needs yet (and it probably won't…). Now, if you are still inclined to read the article, let's jump in!
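In Kilo the conversion hooks into Glance's task-based import flow. Here is a minimal sketch of how this might be wired up, assuming the Kilo-era option names (task_executor and the conversion_format option of the [taskflow_executor] section) and a made-up image URL:

# glance-api.conf: run tasks through taskflow and convert imports to raw
[DEFAULT]
task_executor = taskflow

[taskflow_executor]
conversion_format = raw

An import task submitted through the images v2 tasks API then goes through the conversion step:

# submit an import task (endpoint, token and source URL are hypothetical)
curl -s -X POST http://glance.example.com:9292/v2/tasks \
     -H "X-Auth-Token: $TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"type": "import",
          "input": {"import_from": "http://example.com/cirros.qcow2",
                    "import_from_format": "qcow2",
                    "image_properties": {"name": "cirros-converted"}}}'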

OpenStack Glance: Deactivate an Image


Kilo was released last week. This blog post is the first of a series that will demonstrate some nifty new features.

Managing the cloud image life cycle is a real pain for public cloud providers. Since users have the ability to import their own images, they can potentially introduce vulnerabilities with them. Cloud operators should therefore be able to temporarily deactivate an image in order to inspect it. Later, operators can reactivate it or simply remove it if they believe the image is a threat to the cloud environment.

Another use case is cloud image updates: while performing the update of an image, the operator might want to hide it from all the users. Then, when the update is complete, he can reactivate the image so the users can boot virtual machines from it.
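Under the hood these are two new actions on the images v2 API. A quick sketch, assuming a v2 endpoint, a valid admin token and an $IMAGE_ID variable holding the image UUID:

# take the image out of circulation: it stays in the catalog,
# but its data can no longer be downloaded or booted from
curl -i -X POST "http://glance.example.com:9292/v2/images/$IMAGE_ID/actions/deactivate" \
     -H "X-Auth-Token: $TOKEN"

# bring it back once the inspection or the update is done
curl -i -X POST "http://glance.example.com:9292/v2/images/$IMAGE_ID/actions/reactivate" \
     -H "X-Auth-Token: $TOKEN"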

Ceph Using Monitor Key/value Store


Ceph monitors make use of leveldb to store cluster maps, users and keys. Since the store was already there, Ceph developers thought about exposing it through the monitor interface. So monitors have a built-in capability that allows you to store blobs of data in a key/value fashion. This feature has been around for quite some time now (something like two years) but has not received any particular attention since then. I even noticed that I never blogged about it :).
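The store is driven through the config-key subcommand of the ceph CLI. A quick sketch with a made-up key (on later releases, del and list were renamed rm and ls):

# store, read back, enumerate and delete a small blob
ceph config-key put my/key "some blob of data"
ceph config-key get my/key
ceph config-key list
ceph config-key del my/key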

Stretching Ceph Networks


This is a quick note about Ceph networks, so do not expect anything lengthy here :).

Usually, Ceph networks are presented as the public network and the cluster (private) network. However, it is never mentioned that you can use a separate network for the monitors. This might sound obvious to some people, but it is completely possible. The only requirement, of course, is that this monitor network be reachable from all the Ceph nodes.

We can then easily imagine 4 VLANs:

  • Ceph monitor
  • Ceph public
  • Ceph cluster
  • Ceph heartbeat

I know this does not sound like much, but I have been hearing this question so many times :).
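For the record, here is what the split might look like in ceph.conf; the subnets and hostnames are made up:

[global]
# client-facing traffic
public network = 10.0.1.0/24
# OSD replication and recovery traffic
cluster network = 10.0.2.0/24

[mon.mon0]
host = mon0
# monitors can sit on their own subnet, as long as
# every Ceph node can reach it
mon addr = 10.0.0.10:6789

# OSD heartbeats can similarly be pinned to a dedicated
# network, per daemon, with the osd heartbeat addr option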

Feel the Awk Power


Some of my favorite AWK expressions:

# ports the local ceph-osd daemons are listening on
OSD_LISTEN_PORTS=$(netstat -tlpn | awk -F ":" '/ceph-osd/ { sub (" .*", "", $2); print $2 }' | uniq)
# CIDR address of eth0, e.g. 192.168.0.1/24
NETWORK=$(ip -4 -o a | awk '/eth0/ {print $4}')
# the same address with the prefix length stripped
IP=$(ip -4 -o a | awk '/eth0/ { sub ("/.*", "", $4); print $4 }')

Because grep foo | awk '{print $1}' is not elegant!
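Awk matches patterns by itself, so the grep is redundant; the same result in a single process:

# pattern matching and field extraction in one awk invocation
awk '/foo/ {print $1}' file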