Disabling scenarios in ceph-docker

I recently completed a full resync from Kraken to Jewel in ceph-docker, in which I introduced a new feature to disable scenarios. Running an application on bleeding-edge technology can be tough and challenging, for individuals as well as for companies. Even I, as a developer, am tempted to release unstable features for bleeding-edge testers (understand: not recommended for production). So sometimes it’s handy to have the ability to restrict the use of a piece of software by disabling some of its functionality. This is exactly what I did for ceph-docker; in this article I’ll explain how it works.

Read On...

Test Ceph Luminous pre-release with ceph-docker

/!\ DISCLAIMER /!\

/!\ DO NOT GET TOO EXCITED, AT THE TIME OF THE WRITING LUMINOUS IS NOT OFFICIALLY RELEASED AS STABLE YET /!\

/!\ USE AT YOUR OWN RISK, DO NOT PUT PRODUCTION DATA ON THIS /!\

Luminous is just around the corner, but packages have already been available for a couple of weeks. That’s why I recently thought: “how come we don’t have any Ceph container image for Luminous yet?”. And I know a lot of you are eager to test the latest developments of BlueStore (the new backend that stores objects directly on a raw device).

Now it’s done, you can fetch the ceph/daemon image using one of these two tags:

  • tag-build-master-luminous-centos-7
  • tag-build-master-luminous-ubuntu-16.04

And you will get a running Ceph cluster on Luminous.
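
As a quick sketch, pulling one of these tags and bootstrapping a monitor could look like the following. The `mon` invocation follows the usual ceph/daemon pattern; the IP, network, and paths below are placeholder values to adapt to your own environment:

```shell
# Pull the Luminous build of the ceph/daemon image (CentOS 7 flavor)
docker pull ceph/daemon:tag-build-master-luminous-centos-7

# Bootstrap a monitor from it; MON_IP and CEPH_PUBLIC_NETWORK are
# example values -- replace them with your own network settings
docker run -d --net=host \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph:/var/lib/ceph \
  -e MON_IP=192.168.0.20 \
  -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
  ceph/daemon:tag-build-master-luminous-centos-7 mon
```

From there, OSDs and the other daemons can be started from the same image in the usual ceph/daemon fashion.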

Read On...

Ceph manager support in ceph-ansible and ceph-docker

Thanks to this recent pull request, you can now bootstrap the Ceph Manager daemon. This new daemon was added during the Kraken development cycle; its main goal is to act as a hook for existing systems to collect monitoring information from the Ceph cluster. It normally runs alongside the monitor daemons but can be deployed on any node. Using your preferred method, you can deploy it in a non-containerized or containerized fashion with ceph-ansible.
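
As an illustrative sketch (not the exact commands from the pull request), starting a containerized Ceph Manager with the ceph/daemon image would look something like this, assuming the image exposes an `mgr` scenario like it does for the other daemons, and that the cluster configuration and keyrings are already in place:

```shell
# Run a containerized ceph-mgr; /etc/ceph and /var/lib/ceph must already
# hold the cluster configuration and keyrings from your monitors
docker run -d --net=host \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph:/var/lib/ceph \
  ceph/daemon mgr
```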

Also, we just released a new tag for ceph-ansible, 2.2.0, which will go through heavy testing during the next couple of weeks. This will result in a new stable version branched as stable-2.2. I will write a new blog post once stable-2.2 is out to highlight some of the best features and functionality we added.

Read On...

No more privileged containers for Ceph OSDs

I’m really sorry for being so quiet lately; I know I promised to release articles more regularly and I clearly failed… Many things are going on, and since motivation is key to writing articles, I’ve been having a hard time finding the right motivation to write :/

However, I am not giving up, and I finally found the time to write a little bit about the things we improved in ceph-docker, our Ceph-in-containers project.

Read On...

Ceph and RBD mirroring, upcoming enhancements

I’ve been getting a lot of questions about the RBD mirroring daemon, so I thought I would do a blog post similar to an FAQ. Most of the features described in this article will likely be released with Ceph Luminous. Luminous should land this spring, so be patient :).

Read On...

Blog post on InformationWeek

Recently, I wrote an article that was published on InformationWeek. The article helps you understand the benefits of Software-Defined Storage combined with containers. In case you’re interested in reading it, click here.

Read On...

Ceph RBD and iSCSI

Just like I promised last Monday, this article is the first in a series of informative blog posts about upcoming Ceph features. Today, I’m cheating a little bit because I will dissect a particular feature that went a bit unnoticed with Jewel. So we are discussing something that is already available but will have follow-ups in new Ceph releases. The feature doesn’t really have a name, but it is along the lines of having iSCSI support on top of the RBD protocol. With that, we can connect Ceph storage to hypervisors and/or operating systems that don’t have native Ceph support but do understand iSCSI. Technically speaking, this targets non-Linux users who cannot use librbd with QEMU or krbd directly.
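
To give a rough idea of what this looks like in practice, here is a generic krbd + LIO sketch: map an RBD image on a Linux gateway node, then re-export the block device over iSCSI with targetcli. The pool, image, and IQN names are made up for the example:

```shell
# On the iSCSI gateway node: create an RBD image and map it through krbd
rbd create iscsi-pool/lun0 --size 10240
rbd map iscsi-pool/lun0              # maps to e.g. /dev/rbd0

# Export the mapped device over iSCSI with LIO/targetcli
targetcli /backstores/block create name=lun0 dev=/dev/rbd0
targetcli /iscsi create iqn.2017-01.com.example:rbd-gw
targetcli /iscsi/iqn.2017-01.com.example:rbd-gw/tpg1/luns \
  create /backstores/block/lun0
```

The non-Linux initiator then logs into the target and sees the RBD image as a plain iSCSI LUN.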

Read On...