Long time no see!
I know, tough times at the office; this probably won’t stop until the Christmas break, so I’ll do my best to keep up the pace.
In this article, I will explain how to take over an existing bare-metal Ceph deployment and move it to containers.
Spoiler: Ansible baby, Ansible!
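To give a taste of what the article covers, the migration boils down to running ceph-ansible's switch playbook against your inventory (a sketch; the playbook name and the `ireallymeanit` safety flag are as I recall them from ceph-ansible, so verify them against your checkout):

```shell
# Sketch: migrate running bare-metal Ceph daemons to containerized ones,
# one node at a time, using ceph-ansible's infrastructure playbook.
ansible-playbook \
  infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml \
  -i hosts \
  -e ireallymeanit=yes
```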
Over the last three months, we have added some really nifty new features to the Ceph DevStack plugin.
Red Hat is having a forum on October 11, 2016 in Seoul, South Korea.
I’ll be presenting the tight integration of Ceph into OpenStack along with my colleague Sean Cohen.
See you there!
ceph-ansible is quickly catching up with ceph-deploy in terms of features.
Last week, I discussed dm-crypt support.
The ability to shrink a Ceph cluster (removing one or N monitors or OSDs) wasn’t possible until very recently.
Let’s have a look at this new feature.
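For flavor, shrinking with ceph-ansible looks roughly like this (the host name and OSD IDs are placeholders, and the shrink playbooks live under `infrastructure-playbooks/` in the versions I have used; double-check the variable names against yours):

```shell
# Remove one monitor; mon_to_kill is the inventory name of the monitor host.
ansible-playbook infrastructure-playbooks/shrink-mon.yml -e mon_to_kill=ceph-mon-03

# Remove OSDs by ID (comma-separated list).
ansible-playbook infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=1,2
```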
I recently worked on a new feature that ceph-ansible was lacking: support for dmcrypt.
This dmcrypt scenario basically allows you to deploy OSDs with encrypted data directories.
The encryption key is stored in the monitors’ key/value store.
Until recently, ceph-ansible wasn’t capable of deploying such a configuration.
Let’s see how this can be configured.
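To give an idea of what the article walks through, enabling the scenario comes down to a couple of `group_vars` toggles (a sketch with hypothetical device names; the variable names are from that era of ceph-ansible, so check them against your version):

```yaml
# group_vars/osds.yml (sketch)
devices:
  - /dev/sdb
  - /dev/sdc

# Encrypt OSD data with dm-crypt, journal collocated on the same device.
dmcrypt_journal_collocation: true
```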
The next summit will happen in October this year, and it’s already time to vote for your favorite talks!
This article is co-authored with Gregory Charot (author of the tool).
Have you ever found yourself chaining a long series of pipes to get a particular value that no Ceph CLI command provides directly, or stripping surrounding text to isolate a value?
This situation often results in quick-and-dirty awk pipelines that end up (best case) as an alias, or forgotten in your shell history until the next time you need them.
Here comes ceph-lazy, a shell toolkit that bundles these queries that would otherwise require multiple processing or text-manipulation steps.
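To make the pain concrete, here is the kind of throwaway pipeline ceph-lazy is meant to replace: digging the `%RAW USED` figure out of `ceph df` output (the sample output below is canned and the numbers are made up):

```shell
# Stand-in for real `ceph df` output (hypothetical numbers).
ceph_df_output='GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    92093M   81950M       10143M         11.01'

# Quick-and-dirty one-liner: find the header line, then grab the
# fourth field of the line that follows it.
raw_used=$(echo "$ceph_df_output" | awk '/%RAW USED/{getline; print $4}')
echo "$raw_used"   # 11.01
```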
Some use cases require zapping a device (destroying its partition tables) before running your Ceph OSD container with a dedicated disk.
This is particularly handy in development environments, as it allows us to quickly bootstrap and tear down sandboxes.
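If you want to do the zap by hand, something along these lines does the trick (destructive, so mind the device name; `/dev/sdb` is a placeholder and `sgdisk` ships with the gdisk package):

```shell
# DANGER: wipes filesystem signatures and the GPT/MBR partition tables.
sudo wipefs --all /dev/sdb
sudo sgdisk --zap-all /dev/sdb
```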
I recently pushed support for the RBD mirror daemon into ceph-docker.
This daemon is responsible for asynchronously replicating RBD images from one cluster to another.
Its main purpose is to address disaster recovery use cases.
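As a sketch of what enabling mirroring looks like on the Ceph side (the pool name `rbd` and the peer cluster name `remote` are placeholders; verify the syntax against the rbd CLI for your release):

```shell
# On both clusters: enable mirroring for the whole pool.
rbd mirror pool enable rbd pool

# Register the other cluster as a replication peer.
rbd mirror pool peer add rbd client.admin@remote
```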
I use tmux on a daily basis and I’ve had many requests regarding the configuration I’m using.
So here it is.
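For context, a minimal `~/.tmux.conf` illustrating the kind of settings such a configuration covers (these are standard tmux options, and the values are illustrative rather than my exact config):

```shell
# ~/.tmux.conf (sketch)

# Use Ctrl-a as the prefix, screen-style.
unbind C-b
set -g prefix C-a

# Start window numbering at 1 and use vi keys in copy mode.
set -g base-index 1
setw -g mode-keys vi

# Bigger scrollback buffer.
set -g history-limit 10000
```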