See you at the first Cephalocon
Tomorrow, the first conference fully dedicated to Ceph will start in Beijing, China. I’m attending and super excited. I will see you there!
Learning Ceph - Second Edition was published in October 2017.
This is a special post to highlight a new book I’ve been helping with. Good colleagues of mine wrote it, and I encourage anyone who wants to learn Ceph to get a copy. The book is available on Amazon.
A couple of releases ago, in order to minimize changes within the ceph.conf.j2 Jinja template, we introduced a new module borrowed from the OpenStack-Ansible project. This module is called config_template and allows us to declare Ceph configuration options as variables in your group_vars files. This is extremely useful for us.
Based on that work, and as part of the big ceph-ansible 3.0 release, we added a profile directory that guides users on how to properly inject new configuration options. All of it is use-case driven. For instance, we currently have profile examples for configuring the Ceph RADOS Gateway with OpenStack Keystone.
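As a rough sketch of what this looks like in practice: the ceph_conf_overrides structure comes from ceph-ansible, but the sections and values below are purely illustrative.

```yaml
# group_vars/all.yml -- sections and values are illustrative only
ceph_conf_overrides:
  global:
    osd_pool_default_size: 3
    osd_pool_default_pg_num: 64
  client.rgw.rgw0:
    # Keystone integration options, as a Keystone profile might set them
    rgw_keystone_url: "http://192.168.0.1:35357"
    rgw_keystone_accepted_roles: "Member, admin"
```

config_template then merges these sections into the rendered ceph.conf without requiring any change to the Jinja template itself.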
Here is the current list of profiles:
More profiles are on the way, and we expect the list to grow over the lifetime of the project. One profile we might create is a performance-oriented one for running BlueStore on NVMe drives.
Thanks to this recent pull request, you can now bootstrap the Ceph Manager daemon. This new daemon was added during the Kraken development cycle; its main goal is to act as a hook for existing systems to collect monitoring information from the Ceph cluster. It normally runs alongside the monitor daemons but can be deployed on any node. Using your preferred method, you can deploy it in a non-containerized or containerized fashion with ceph-ansible.
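As a minimal sketch, deploying the manager with ceph-ansible comes down to adding hosts to the mgrs inventory group (the hostnames below are made up):

```ini
# Ansible inventory sketch -- hostnames are placeholders
[mons]
ceph-mon0
ceph-mon1
ceph-mon2

[mgrs]
ceph-mon0   # typically colocated with a monitor, but any node works
```

Then run your usual site playbook (or its containerized counterpart) and the manager role should take care of creating the keyring and starting the daemon.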
Also, we just released a new tag for ceph-ansible, 2.2.0, which will go through heavy testing during the next couple of weeks. This will result in a new stable version branched as stable-2.2. I will write a new blog post once stable-2.2 is out to highlight some of the best features and functionality we added.
During the last CDM (Ceph Developer Monthly), I presented a blueprint that will help Ceph play nicely when it is being containerized.
I’ve been getting a lot of questions about the RBD mirroring daemon, so I thought I would write a blog post structured like an FAQ. Most of the features described in this article will likely be released with Ceph Luminous. Luminous should land this spring, so be patient :).
As promised last Monday, this article is the first in a series of informative blog posts about upcoming Ceph features.
Today, I’m cheating a little bit, because I will dissect one particular feature that went a bit unnoticed in Jewel.
So we are discussing something that is already available but will see follow-ups in new Ceph releases.
The feature doesn’t really have a name, but it boils down to iSCSI support on top of the RBD protocol.
With it, we can connect Ceph storage to hypervisors and/or operating systems that don’t have native Ceph support but do understand iSCSI.
Technically speaking, this targets non-Linux users who cannot use librbd with QEMU, or krbd, directly.
Happy New Year! Bonne Année ! Best wishes to my readers :).
The future is here (c’est le turfu!): Ceph is moving fast, really fast, and you won’t believe how many awesome features are currently in the pipeline. So to start the year off with a bang, I’m planning to publish a set of articles to tease you a little bit. But don’t get too excited; this is ongoing work that will only mature next year. The series starts this Friday and continues every following Friday for an undetermined period of time. See you this Friday for the first blog post!
I guess you got lucky, or maybe I felt bad about not posting anything for more than a month, but here it is: the last blog post of the year :).
With the latest release of Ceph, Jewel, a new RADOS Gateway feature came out. This feature hasn’t really been advertised yet, so I thought I would write a blog post about it. Obviously, this is an initial implementation that will be improved in subsequent Ceph releases.
As this requires a couple of components, it is quite difficult at the moment to get it working easily.
So even though we support it in ceph-ansible, it is not that stable yet.
For example, on Ubuntu we need Ceph v10.2.5 shipped on Xenial so that nfs-ganesha 2.4 can build a working RADOS Gateway FSAL.
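To give an idea of the moving parts, here is a sketch of the nfs-ganesha side of the setup. The option names follow ganesha.conf conventions, but the export ID and credentials are placeholders you would replace with a real RGW user:

```
# /etc/ganesha/ganesha.conf -- credentials are placeholders
EXPORT {
    Export_ID = 1;
    Path = "/";
    Pseudo = "/";
    Access_Type = RW;
    Protocols = 4;

    FSAL {
        Name = RGW;                      # the RADOS Gateway FSAL
        User_Id = "nfsuser";             # an existing RGW user
        Access_Key_Id = "ACCESSKEY";
        Secret_Access_Key = "SECRETKEY";
    }
}

RGW {
    ceph_conf = "/etc/ceph/ceph.conf";
}
```

With something like this in place, ganesha serves the RGW user’s buckets over NFS instead of S3/Swift.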
Long time no see! I know, tough times at the office, and this probably won’t stop until the Christmas break, so I’ll do my best to keep up the pace. In this article, I will explain how to take over an existing bare-metal Ceph deployment and move it to containers. Spoiler: Ansible baby, Ansible!