<h1><a href="https://sebastien-han.fr/blog/2021/11/18/Devops-D-Day-Rook-Ceph-a-storage-Orchestrator-for-Kubernetes/">Devops D-Day: Rook-Ceph a storage Orchestrator for Kubernetes</a></h1>
<p>Date: 18/11/21</p>
<p>If the slides don’t render properly in the web viewer, please download them:</p>
<div class="row">
<iframe src="http://sebastien-han.fr/viewer/web/viewer.html?val=http://www.sebastien-han.fr/down/devops-d-day-18-11-2021.pdf" style="width:100%; height:550px"></iframe>
</div>
<h1><a href="https://sebastien-han.fr/blog/2021/10/14/KubeCon-North-America-Virtual-Storage-and-Networking-Rook-on-Multus/">KubeCon North America Virtual: Storage and Networking: Rook on Multus</a></h1>
<p>Date: 14/10/21</p>
<div class="video-container"><iframe src="//www.youtube.com/embed/zIS5qaG_HRw" frameborder="0" allowfullscreen></iframe></div>
<p>If the slides don’t render properly in the web viewer, please download them:</p>
<div class="row">
<iframe src="http://sebastien-han.fr/viewer/web/viewer.html?val=http://www.sebastien-han.fr/down/Storage_and_networking_Rook-Ceph_on_Multus_SebastienHan_RohanGupta_101321_v1.pdf" style="width:100%; height:550px"></iframe>
</div>
<h1><a href="https://sebastien-han.fr/blog/2020/08/24/KubeCon-Amsterdam-Virtual-Rook-Ceph-Deep-Dive/">KubeCon Europe Virtual: Rook Ceph Deep Dive</a></h1>
<p>Date: 20/08/20</p>
<div class="video-container"><iframe src="//www.youtube.com/embed/eTSokJ3-c-A" frameborder="0" allowfullscreen></iframe></div>
<p>If the slides don’t render properly in the web viewer, please download them:</p>
<div class="row">
<iframe src="http://sebastien-han.fr/viewer/web/viewer.html?val=http://www.sebastien-han.fr/down/Virtual_KubeCon_2020_Rook_Ceph_Deep_Dive.pdf" style="width:100%; height:550px"></iframe>
</div>
<h1><a href="https://sebastien-han.fr/blog/2019/11/25/KubeCon-San-Diego-Rook-Deep-Dive/">KubeCon San Diego: Rook Deep Dive</a></h1>
<p>Date: 21/11/19</p>
<p>Video, my talk starts at 22 minutes:</p>
<div class="video-container"><iframe src="//www.youtube.com/embed/f3Wyk968VR8" frameborder="0" allowfullscreen></iframe></div>
<p>If the slides don’t render properly in the web viewer, please download them:</p>
<div class="row">
<iframe src="http://sebastien-han.fr/viewer/web/viewer.html?val=http://www.sebastien-han.fr/down/KubeCon_San_Diego_Ceph_Deep_Dive.pdf" style="width:100%; height:550px"></iframe>
</div>
<h1><a href="https://sebastien-han.fr/blog/2019/05/25/KubeCon-Barcelona-Rook-Ceph-and-ARM-A-Caffeinated-Tutorial/">KubeCon Barcelona: Rook, Ceph, and ARM: A Caffeinated Tutorial</a></h1>
<p>Date: 22/05/19</p>
<p>Video:</p>
<div class="video-container"><iframe src="//www.youtube.com/embed/pNz0UyaqlE8" frameborder="0" allowfullscreen></iframe></div>
<div class="row">
<iframe src="http://sebastien-han.fr/viewer/web/viewer.html?val=http://www.sebastien-han.fr/down/Rook_Ceph_and_ARM_The_caffeinated_tutorial.pdf" style="width:100%; height:550px"></iframe>
</div>
<h1><a href="https://sebastien-han.fr/blog/2019/05/17/Rook-just-landed-on-operatorhub-io/">Rook just landed on operatorhub.io</a></h1>
<p><img src="http://sebastien-han.fr/blog/images/rook-operator-framework.png" alt="Rook just landed on operatorhub.io"></p>
<p>I’m excited to announce that, just in time for Cephalocon and KubeCon Europe, Rook has landed on <a href="https://operatorhub.io" target="_blank" rel="external">operatorhub.io</a>.
Getting it in was quite a challenge, but in the end <a href="https://github.com/operator-framework/community-operators/pull/348" target="_blank" rel="external">my pull request</a> got merged :).
If you want to know what this means for upstream, have a look at this <a href="https://www.redhat.com/en/blog/rook-ceph-storage-operator-now-operatorhubio" target="_blank" rel="external">article</a>.</p>
<h1><a href="https://sebastien-han.fr/blog/2019/05/09/hey-whats-up/">Hey! What's up?!</a></h1>
<p><img src="https://raw.githubusercontent.com/rook/rook/master/Documentation/media/logo.svg?sanitize=true" alt="Title"></p>
<p>It has been a long time since I last gave any updates or even blogged.
Let’s take some time (while on a plane) to update you on what I’m doing these days.</p>
<h2 id="Moving-away-from-ceph-ansible-container"><a href="#Moving-away-from-ceph-ansible-container" class="headerlink" title="Moving away from ceph-ansible/container"></a>Moving away from ceph-ansible/container</h2><p>In 2014, I was launching <a href="https://github.com/ceph/ceph-ansible" target="_blank" rel="external">ceph-ansible</a>, a set of playbooks to deploy, manage and upgrade Ceph with the help of Ansible. <br>
In 2015, I was launching <a href="https://github.com/ceph/ceph-container" target="_blank" rel="external">ceph-container</a>, the very first iteration of containerized Ceph with the help of Docker.</p>
<p>Since that, I’ve never stopped contributing to them, but almost a year ago ago things started moving in a different direction.
As much as I love both projects, I realized it was time, to move to something else.
Two years ago, I was attending my very first KubeCon, at this time, we (Ceph team) deciced our strategy when it comes to deploying Ceph in Kubernetes environments, our choice was to use <a href="https://rook.io/" target="_blank" rel="external">Rook</a>. That’s where my involvment started.</p>
<h2 id="New-focus-Rook"><a href="#New-focus-Rook" class="headerlink" title="New focus: Rook"></a>New focus: Rook</h2><p>Back then, even though Rook wasn’t perfect, we decided to go with it and improve it over time.</p>
<p>For the record, Rook is a storage orchestrator for Kubernetes; you can read more about it <a href="link">in this announcement blog post</a>.
Rook allows us to deploy, manage, and upgrade Ceph in Kubernetes. It can deploy more storage technologies than just Ceph, such as EdgeFS and CockroachDB to name a few, but my focus is obviously on Ceph.</p>
<p>For almost a year now, I’ve been looking at Rook and contributing to it.
Today, I’m one of the maintainers of the Ceph part and actively committed to its success.
Last week was an important milestone for us, as we released the 1.0 version with Ceph Nautilus support.
Stay tuned for more blogging on Rook.</p>
<h2 id="Give-Rook-a-try"><a href="#Give-Rook-a-try" class="headerlink" title="Give Rook a try"></a>Give Rook a try</h2><p>With a few commands, you can start playing and getting familiar with it, first download <a href="link">minikube</a>, once minikube is installed run the following commands:</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">minikube start</span><br><span class="line">git <span class="built_in">clone</span> https://github.com/rook/rook</span><br><span class="line"><span class="built_in">cd</span> cluster/examples/kubernetes/ceph</span><br><span class="line">kubectl create <span class="_">-f</span> common.yaml operator.yaml</span><br><span class="line">kubectl create <span class="_">-f</span> cluster.yaml</span><br></pre></td></tr></table></figure>
<p>In less than 5 minutes, you will be up and running!</p>
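<p>As a quick sanity check, you can watch the pods come up (a minimal sketch; the <code>rook-ceph</code> namespace is the one the example manifests above create):</p>
<pre><code># Watch the operator and cluster pods until they are all Running
kubectl -n rook-ceph get pods --watch
</code></pre>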
<p><br></p>
<blockquote>
<p>I kinda miss blogging; I feel it’s important for me and for you readers. You wouldn’t believe how happy I am when I meet some of you at conferences and you reward me for the content of the blog. I’ve been hearing a ton of “hey, thanks for your blog, it’s been really helpful”. I feel so proud about it. Unfortunately, I’ve realized that the content is getting old, and I now always redirect you to the official documentation. I hope I’ll be able to give more attention to the blog in the second half of the year. Thanks again for your support!</p>
</blockquote>
<p><img src="https://raw.githubusercontent.com/rook/rook/master/Documentation/media/logo.svg?sanitize=true" alt="Title"></p>
<p>It has been a long time since I’ve been giving updates or even blogging.
Let’s take some time here (while being on the plane) to update you on what I’m doing these days.</p>
<h1><a href="https://sebastien-han.fr/blog/2019/04/30/OpenStack-Summit-Denver-Rook-101/">Open Infrastructure Summit Denver: Rook 101</a></h1>
<p>Date: 30/04/19</p>
<p>Video:</p>
<div class="video-container"><iframe src="//www.youtube.com/embed/mo-u9rxuM2Y" frameborder="0" allowfullscreen></iframe></div>
<div class="row">
<iframe src="http://sebastien-han.fr/viewer/web/viewer.html?val=http://www.sebastien-han.fr/down/Rook_and_Ceph_101-Open_Infrastructure_Denver.pdf" style="width:100%; height:550px"></iframe>
</div>
<h1><a href="https://sebastien-han.fr/blog/2019/03/12/OpenStack-and-Ceph-for-Distributed-Hyperconverged-Edge-Deployments/">OpenStack and Ceph for Distributed Hyperconverged Edge Deployments</a></h1>
<p>I’m simply relaying an article I reviewed and helped write.
It reflects my talk from the last OpenStack Summit in Berlin.</p>
<p>You can <a href="https://thenewstack.io/openstack-and-ceph-for-distributed-hyperconverged-edge-deployments/" target="_blank" rel="external">read it here</a>.
Thanks to the author for capturing the essence of the talk.</p>
<h1><a href="https://sebastien-han.fr/blog/2019/02/24/Ceph-nano-is-getting-better-and-better/">Ceph nano is getting better and better</a></h1>
<p><img src="http://sebastien-han.fr/blog/images/introducing-ceph-nano.png" alt="cn big updates"></p>
<p>Long time no blog, I know, I know…
Soon, I will write another entry to explain a little why I am not blogging as much as I used to, but if you’re still around and reading this, thank you!
For the past few months, <code>cn</code> has grown in functionality, so let’s explore what’s new and what’s coming.</p>
<p>To get up to speed on the project and some of its main features, I encourage you to read my <a href="http://www.sebastien-han.fr/blog/2018/11/05/Ceph-meetup-Paris/" target="_blank" rel="external">last presentation</a>.</p>
<h1 id="Config-file-and-templates"><a href="#Config-file-and-templates" class="headerlink" title="Config file and templates"></a>Config file and templates</h1><p><code>cn</code> now has a configuration file that can be used to create <em>flavors</em> of your <code>cn</code> clusters. They represent different classes of a cluster where CPU, memory, the image can be tuned.
<a href="https://github.com/ceph/cn/pull/100" target="_blank" rel="external">The pull request 100 was dope</a>, thanks to the excellent work of my buddy <a href="https://github.com/ErwanAliasr1" target="_blank" rel="external">Erwan Velu</a>.</p>
<p>These flavors can be used via the <code>--flavor</code> argument to the <code>cn cluster start</code> CLI call.</p>
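<p>For example (a hypothetical invocation; the cluster name is arbitrary and <code>huge</code> is one of the pre-defined flavors listed below):</p>
<pre><code># Start a cluster named "mycluster" with the resources defined by the "huge" flavor
cn cluster start mycluster --flavor huge
</code></pre>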
<p>Here is an example in <code>$HOME/.cn/cn.toml</code>, which defines a new image entry pointing to a specific image you built:</p>
<figure class="highlight plain"><table><tr><td class="code"><pre><span class="line">[images]</span><br><span class="line"> [images.complex]</span><br><span class="line"> image_name="this.url.is.complex/cool/for-a-test"</span><br></pre></td></tr></table></figure>
<p><code>cn</code> comes with some pre-defined flavors you can use as well:</p>
<figure class="highlight plain"><table><tr><td class="code"><pre><span class="line">$ cn flavors ls</span><br><span class="line">+---------+-------------+-----------+</span><br><span class="line">| NAME | MEMORY_SIZE | CPU_COUNT |</span><br><span class="line">+---------+-------------+-----------+</span><br><span class="line">| large | 1GB | 1 |</span><br><span class="line">| huge | 4GB | 2 |</span><br><span class="line">| default | 512MB | 1 |</span><br><span class="line">| medium | 768MB | 1 |</span><br></pre></td></tr></table></figure>
<p>For images, we also implemented aliases for the most commonly used images:</p>
<figure class="highlight plain"><table><tr><td class="code"><pre><span class="line">$ cn image show-aliases</span><br><span class="line">+----------+--------------------------------------------------+</span><br><span class="line">| ALIAS | IMAGE_NAME |</span><br><span class="line">+----------+--------------------------------------------------+</span><br><span class="line">| redhat | registry.access.redhat.com/rhceph/rhceph-3-rhel7 |</span><br><span class="line">| mimic | ceph/daemon:latest-mimic |</span><br><span class="line">| luminous | ceph/daemon:latest-luminous |</span><br><span class="line">+----------+--------------------------------------------------+</span><br></pre></td></tr></table></figure>
<p>For more examples of possible configurations, see the <a href="https://github.com/ceph/cn/blob/master/cmd/cn-test.toml" target="_blank" rel="external">example file</a>.</p>
<h1 id="Container-memory-auto-tuning"><a href="#Container-memory-auto-tuning" class="headerlink" title="Container memory auto-tuning"></a>Container memory auto-tuning</h1><p>Another goodness from <a href="https://github.com/ErwanAliasr1" target="_blank" rel="external">Erwan</a>, this specific work happened in the container image via the <a href="https://github.com/ceph/ceph-container" target="_blank" rel="external">ceph-container project</a>, in this <a href="https://github.com/ceph/ceph-container/pull/1283" target="_blank" rel="external">pull request</a>.
With recent versions of Ceph, Bluestore has implemented its cache in its own memory space.
The default values are not meant to run on a small restricted environment such as <code>cn</code> where the memory limit is usually low.
So we had to adapt these Bluestore flags on the fly by detecting the memory available and whether or not the memory is capped.
Based on several data we are capable of tuning these value, so <code>ceph-osd</code> does not consume too much memory.</p>
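<p>The idea looks roughly like this (a simplified sketch, not the actual ceph-container code; the 1GB threshold and the one-quarter ratio are made up for illustration):</p>
<pre><code># Read the memory limit the container runtime applied (cgroup v1 path)
LIMIT=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)

# If the container is capped below 1GB, shrink the Bluestore cache accordingly
if [ "$LIMIT" -lt $((1024 * 1024 * 1024)) ]; then
  CACHE=$((LIMIT / 4))  # keep the cache well below the cap
  CEPH_ARGS="$CEPH_ARGS --bluestore-cache-size=$CACHE"
fi
</code></pre>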
<p>This work was crucial for <code>cn</code> reliability, since after some time the processes were getting OOM-killed by the kernel.
You had to restart the container, but it would eventually die again and again.</p>
<h1 id="New-core-incoming"><a href="#New-core-incoming" class="headerlink" title="New core incoming"></a>New core incoming</h1><p>Initially, the scenario that is used to bootstrap <code>cn</code> is by using a bash script from the <a href="https://github.com/ceph/ceph-container" target="_blank" rel="external">ceph-container project</a>.
When <code>cn</code> starts, it instantiates a particular scenario called <code>demo</code> which deploys all the ceph daemons, the UI etc.</p>
<p>Bash is good, I love it, but it has its limitations. For instance, proper logging, error handling, and unit testing all become increasingly difficult as the project grows.
So I decided to switch to Golang, with some hope of pushing the bootstrap time below 15 seconds too.</p>
<p>Typically, <code>cn</code> needs around 20 seconds to bootstrap on my laptop; with <code>cn-core</code> we have been able to go below 15 seconds, see for yourself:</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ time cn cluster start -i quay.io/ceph/cn-core:v0.4 core</span><br><span class="line">2019/02/24 14:52:47 Running cluster core | image quay.io/ceph/cn-core:v0.4 | flavor default {512MB Memory, 1 CPU} ...</span><br><span class="line"></span><br><span class="line">Endpoint: http://10.36.117.68:8001</span><br><span class="line">Dashboard: http://10.36.117.68:5001</span><br><span class="line">Access key: PCJEU83FCKAZGM3NO609</span><br><span class="line">Secret key: Ie1fRQuJMqoFI9dis2fOYKIf2Yg08H8R1PeZB8QI</span><br><span class="line">Working directory: /usr/share/ceph-nano</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">real 0m14.730s</span><br><span class="line">user 0m0.032s</span><br><span class="line">sys 0m0.018s</span><br></pre></td></tr></table></figure>
<p>The new core is still in beta, but it’s a matter of weeks before we switch to <code>cn-core</code> by default.
We have almost reached feature parity with <code>demo.sh</code>, and there are just a few bugs left to fix.</p>
<p>You can contribute to this new core as much as you want in the <a href="https://github.com/ceph/cn-core" target="_blank" rel="external">cn-core repository</a>.</p>
<p><br></p>
<blockquote>
<p>Voilà voilà, I hope you still like the project. If there is anything you would like to see, tell us; if there is something you hate, tell us too!</p>
</blockquote>
<p><img src="http://sebastien-han.fr/blog/images/introducing-ceph-nano.png" alt="cn big updates"></p>
<p>Long time no blog, I know, I know…
Soon, I will do another blog entry to “explain” a little why I am not blogging as much I used too but if you’re still around and reading this then thank you!
For the past few months, <code>cn</code> has grown in functionality so let’s explore what’s new and what’s coming.</p>
<h1><a href="https://sebastien-han.fr/blog/2018/11/13/OpenStack-Summit-Berlin-Distributed-Hyperconvergence-Pushing-Openstack-and-Ceph-to-the-Edge/">OpenStack Summit Berlin: Distributed Hyperconvergence Pushing Openstack and Ceph to the Edge</a></h1>
<p>Date: 13/11/18</p>
<p>Video:</p>
<div class="video-container"><iframe src="//www.youtube.com/embed/A4l3vPMaJew" frameborder="0" allowfullscreen></iframe></div>
<div class="row">
<iframe src="http://sebastien-han.fr/viewer/web/viewer.html?val=http://www.sebastien-han.fr/down/Distributed_HyperConvergence_Pushing_Openstack_and_Ceph_to_the_Edge.pdf" style="width:100%; height:550px"></iframe>
</div>
<h1><a href="https://sebastien-han.fr/blog/2018/11/05/Ceph-meetup-Paris/">Ceph meetup Paris</a></h1>
<p>My latest presentation of <code>cn</code> (ceph nano) that I gave at the French Ceph Meetup in Paris.</p>
<div class="row">
<iframe src="http://sebastien-han.fr/viewer/web/viewer.html?val=http://www.sebastien-han.fr/down/cn-ceph-meetup-paris.pdf" style="width:100%; height:550px"></iframe>
</div>
<h1><a href="https://sebastien-han.fr/blog/2018/05/22/OpenStack-Summit-Vancouver-How-to-Survive-an-OpenStack-Cloud-Meltdown-with-Ceph/">OpenStack Summit Vancouver: How to Survive an OpenStack Cloud Meltdown with Ceph</a></h1>
<p>Date: 22/05/18</p>
<p>Video:</p>
<div class="video-container"><iframe src="//www.youtube.com/embed/n2S7uNC_KMw" frameborder="0" allowfullscreen></iframe></div>
<div class="row">
<iframe src="http://sebastien-han.fr/viewer/web/viewer.html?val=http://www.sebastien-han.fr/down/How_to_Survive_an_OpenStack_Cloud_Meltdown_with_Ceph_-_Vancouver_Summit_2018.pdf" style="width:100%; height:550px"></iframe>
</div>
<h1><a href="https://sebastien-han.fr/blog/2018/05/17/See-you-at-the-OpenStack-Summit/">See you at the OpenStack Summit</a></h1>
<p><img src="http://sebastien-han.fr/blog/images/openstack-summit-vancouver18.jpg" alt="Title"></p>
<p>Next week is the <a href="https://www.openstack.org/summit/vancouver-2018/" target="_blank" rel="external">OpenStack Summit</a>.
I will be attending and giving a talk: <a href="https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=how+to+survive" target="_blank" rel="external">How to Survive an OpenStack Cloud Meltdown with Ceph</a>.</p>
<p>See you there!</p>
<h1><a href="https://sebastien-han.fr/blog/2018/05/06/See-you-at-the-RedHat-summit/">See you at the Red Hat summit</a></h1>
<p><img src="http://sebastien-han.fr/blog/images/red-hat-summit-2018-sf.png" alt="Red Hat Summit San Francisco"></p>
<p>I will be attending the Red Hat Summit, as I’m co-presenting a lab.
The goal of the lab is to deploy an OpenStack hyperconverged (HCI) environment with Ceph.</p>
<p><br></p>
<blockquote>
<p>See you in San Francisco!</p>
</blockquote>
<p><img src="http://sebastien-han.fr/blog/images/red-hat-summit-2018-sf.png" alt="Red Hat Summit San Francisco"></p>
<p>I will be attending
<h1><a href="https://sebastien-han.fr/blog/2018/04/30/Ceph-Nano-big-updates/">Ceph Nano big updates</a></h1>
<p><img src="http://sebastien-han.fr/blog/images/introducing-ceph-nano.png" alt="Title"></p>
<p>With its two latest versions (v1.3.0 and v1.4.0), Ceph Nano brought some nifty new functionality that I’d like to highlight in this article.</p>
<a id="more"></a>
<h2 id="Multi-cluster-support"><a href="#Multi-cluster-support" class="headerlink" title="Multi cluster support"></a>Multi cluster support</h2><p>This is feature is available since v1.3.0.</p>
<p>You can now run more than a single instance of cn, you can run as many as your system allows it (CPU and memory wise). This is how you run a new cluster:</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"> $ ./cn cluster start s3 <span class="_">-d</span> /tmp</span><br><span class="line">2018/04/30 16:12:07 Running cluster s3...</span><br><span class="line"></span><br><span class="line">HEALTH_OK is the Ceph status</span><br><span class="line">S3 object server address is: http://10.36.116.231:8001</span><br><span class="line">S3 user is: nano</span><br><span class="line">S3 access key is: JZYOITC0BDLPB0K6E5WX</span><br><span class="line">S3 secret key is: sF0Vu6seb64hhlsmtxKT6BSrs2KY8cAB8la8kni1</span><br><span class="line">Your working directory is: /tmp</span><br></pre></td></tr></table></figure>
<p>And how you can retrieve the list of running clusters:</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ ./cn cluster ls</span><br><span class="line">+--------+---------+--------------------+----------------+--------------------------------+</span><br><span class="line">| NAME | STATUS | IMAGE | IMAGE RELEASE | IMAGE CREATION TIME |</span><br><span class="line">+--------+---------+--------------------+----------------+--------------------------------+</span><br><span class="line">| s3 | running | ceph/daemon:latest | master<span class="_">-d</span>0d98c4 | 2018-04-20T13:37:06.933085171Z |</span><br><span class="line">| trolol | exited | ceph/daemon:latest | master<span class="_">-d</span>0d98c4 | 2018-04-20T13:37:06.933085171Z |</span><br><span class="line">| e | running | ceph/daemon:latest | master<span class="_">-d</span>0d98c4 | 2018-04-20T13:37:06.933085171Z |</span><br><span class="line">+--------+---------+--------------------+----------------+--------------------------------+</span><br></pre></td></tr></table></figure>
<p>This feature works well in conjunction with the image support.
You can run any container using any container image available on the Docker Hub. You can even use your own if you want to test a fix.</p>
<p>You can list the available images like this:</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ ./cn image ls</span><br><span class="line">latest-bislatest</span><br><span class="line">latest-luminouslatest-kraken</span><br><span class="line">latest-jewelmaster-da37788-kraken-centos-7-x86_64</span><br><span class="line">master-da37788-jewel-centos-7-x86_64master-da37788-kraken-ubuntu-16.04-x86_64</span><br><span class="line">master-da37788-jewel-ubuntu-14.04-x86_64master-da37788-jewel-ubuntu-16.04-x86_64</span><br></pre></td></tr></table></figure>
<p>Use <code>-a</code> to list <strong>all</strong> our images.
Then use the <code>-i</code> option when starting a cluster to run the image you want, as shown below.</p>
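<p>For instance (an illustrative invocation; the cluster name is arbitrary and the tag must exist on the Docker Hub):</p>
<pre><code># Run a cluster named "lumi" from the latest-luminous image
./cn cluster start lumi -i ceph/daemon:latest-luminous
</code></pre>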
<h2 id="Dedicated-device-or-directory-support"><a href="#Dedicated-device-or-directory-support" class="headerlink" title="Dedicated device or directory support"></a>Dedicated device or directory support</h2><p>This feature is available since v1.4.0.</p>
<p>You might be after providing more persistent and fast storage for cn. This is possible by specifying either a dedicated block device (a partition works too) or a directory that you might have configured on a particular device.</p>
<p>You have to run cn with <code>sudo</code> here since it performs a couple of checks on that device to make sure its eligible for usage. Thus higher privileges to run cn are required.</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">sudo ./cn cluster start -b /dev/disk/by-id/wwn-0x600508b1001c4257dacb9870dbc6b1c8 block</span><br></pre></td></tr></table></figure>
<p>Using a directory is identical, just run with <code>-b /srv/cn/</code> for instance.</p>
<p><br></p>
<blockquote>
<p>I’m so glad to see how cn has evolved. I’m proud of this little tool that I use on a daily basis for so many things. I hope you are enjoying it as much as I do.</p>
</blockquote>
<p><img src="http://sebastien-han.fr/blog/images/introducing-ceph-nano.png" alt="Title"></p>
<p>With its two latest versions (v1.3.0 and v1.4.0) Ceph Nano brought some nifty new functionalities that I’d like to highlight in the article.</p>
<h1><a href="https://sebastien-han.fr/blog/2018/04/02/Ansible-module-to-manage-CephX-Keys/">Ansible module to manage CephX Keys</a></h1>
<p><img src="http://sebastien-han.fr/blog/images/ceph-ansible-cephx-module.jpg" alt="Title"></p>
<p>Following our recent initiative on writing more Ceph modules for Ceph Ansible, I’d like to introduce one that I recently wrote: <strong>ceph_key</strong>.</p>
<p>The module is pretty straightforward to use and will ease your day two operations for managing CephX keys. It has several capabilities such as:</p>
<ul>
<li>create: will create the key on the filesystem with the right permissions (supports <code>mode</code>/<code>owner</code>) and will import it into Ceph (can be enabled/disabled) with the given capabilities</li>
<li>update: will update the capabilities of a particular key</li>
<li>delete: will delete the key from Ceph</li>
<li>info: will get all the information about a particular key</li>
<li>list: will list all the available keys</li>
</ul>
<p>The module also works on containerized Ceph clusters.</p>
<p>See the following examples:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="comment"># This playbook is used to manage CephX Keys</span></span><br><span class="line"><span class="comment"># You will find examples below on how the module can be used on daily operations</span></span><br><span class="line"><span class="comment">#</span></span><br><span class="line"><span class="comment"># It currently runs on localhost</span></span><br><span class="line"></span><br><span class="line"><span class="attr">- hosts:</span> localhost</span><br><span class="line"><span class="attr"> gather_facts:</span> <span class="literal">false</span></span><br><span class="line"><span class="attr"> vars:</span></span><br><span class="line"><span class="attr"> cluster:</span> ceph</span><br><span class="line"><span class="attr"> keys_to_info:</span></span><br><span class="line"><span class="bullet"> -</span> client.admin</span><br><span class="line"><span class="bullet"> -</span> mds<span class="number">.0</span></span><br><span class="line"><span class="attr"> keys_to_delete:</span></span><br><span class="line"><span class="bullet"> -</span> client.leseb</span><br><span class="line"><span class="bullet"> -</span> client.leseb1</span><br><span class="line"><span class="bullet"> -</span> client.pythonnnn</span><br><span class="line"><span class="attr"> keys_to_create:</span></span><br><span class="line"><span class="bullet"> -</span> { name: client.pythonnnn, caps: { mon: <span class="string">"allow rwx"</span>, mds: <span class="string">"allow *"</span> } , mode: <span class="string">"0600"</span> }</span><br><span class="line"><span class="bullet"> -</span> { name: client.existpassss, caps: { mon: <span class="string">"allow r"</span>, osd: <span class="string">"allow *"</span> } , mode: <span class="string">"0600"</span> }</span><br><span class="line"><span class="bullet"> -</span> { name: client.path, caps: { mon: <span class="string">"allow r"</span>, osd: <span class="string">"allow *"</span> } , mode: <span class="string">"0600"</span> }</span><br><span class="line"></span><br><span class="line"><span class="attr"> tasks:</span></span><br><span class="line"><span class="attr"> - name:</span> create ceph key(s) module</span><br><span class="line"><span class="attr"> ceph_key:</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">"<span class="template-variable">{{ item.name }}</span>"</span></span><br><span class="line"><span class="attr"> state:</span> present</span><br><span class="line"><span class="attr"> caps:</span> <span class="string">"<span class="template-variable">{{ item.caps }}</span>"</span></span><br><span class="line"><span class="attr"> cluster:</span> <span class="string">"<span class="template-variable">{{ cluster }}</span>"</span></span><br><span class="line"><span class="attr"> secret:</span> <span class="string">"<span class="template-variable">{{ item.key | default('') }}</span>"</span></span><br><span class="line"><span class="attr"> with_items:</span> <span class="string">"<span class="template-variable">{{ keys_to_create }}</span>"</span></span><br><span class="line"></span><br><span class="line"><span class="attr"> - name:</span> update ceph key(s)</span><br><span class="line"><span class="attr"> ceph_key:</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">"<span class="template-variable">{{ item.name 
}}</span>"</span></span><br><span class="line"><span class="attr"> state:</span> update</span><br><span class="line"><span class="attr"> caps:</span> <span class="string">"<span class="template-variable">{{ item.caps }}</span>"</span></span><br><span class="line"><span class="attr"> cluster:</span> <span class="string">"<span class="template-variable">{{ cluster }}</span>"</span></span><br><span class="line"><span class="attr"> with_items:</span> <span class="string">"<span class="template-variable">{{ keys_to_create }}</span>"</span></span><br><span class="line"></span><br><span class="line"><span class="attr"> - name:</span> delete ceph key(s)</span><br><span class="line"><span class="attr"> ceph_key:</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">"<span class="template-variable">{{ item }}</span>"</span></span><br><span class="line"><span class="attr"> state:</span> absent</span><br><span class="line"><span class="attr"> cluster:</span> <span class="string">"<span class="template-variable">{{ cluster }}</span>"</span></span><br><span class="line"><span class="attr"> with_items:</span> <span class="string">"<span class="template-variable">{{ keys_to_delete }}</span>"</span></span><br><span class="line"></span><br><span class="line"><span class="attr"> - name:</span> info ceph key(s)</span><br><span class="line"><span class="attr"> ceph_key:</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">"<span class="template-variable">{{ item }}</span>"</span></span><br><span class="line"><span class="attr"> state:</span> info</span><br><span class="line"><span class="attr"> cluster:</span> <span class="string">"<span class="template-variable">{{ cluster }}</span>"</span></span><br><span class="line"><span class="attr"> register:</span> key_info</span><br><span class="line"><span class="attr"> ignore_errors:</span> <span class="literal">true</span></span><br><span class="line"><span class="attr"> with_items:</span> <span class="string">"<span class="template-variable">{{ keys_to_info }}</span>"</span></span><br><span class="line"></span><br><span class="line"><span class="attr"> - name:</span> list ceph key(s)</span><br><span class="line"><span class="attr"> ceph_key:</span></span><br><span class="line"><span class="attr"> state:</span> list</span><br><span class="line"><span class="attr"> cluster:</span> <span class="string">"<span class="template-variable">{{ cluster }}</span>"</span></span><br><span class="line"><span class="attr"> register:</span> list_keys</span><br><span class="line"><span class="attr"> ignore_errors:</span> <span class="literal">true</span></span><br></pre></td></tr></table></figure>
<p><br></p>
<blockquote>
<p>The goal is to have all of our Ceph modules included by default in Ansible. Stay tuned, more modules to come!</p>
</blockquote>
<p><img src="http://sebastien-han.fr/blog/images/ceph-ansible-cephx-module.jpg" alt="Title"></p>
<p>Following our recent initiative on writing more Ceph modules for Ceph Ansible, I’d like to introduce one that I recently wrote: <strong>ceph_key</strong>.</p>
<h1><a href="https://sebastien-han.fr/blog/2018/03/26/Handling-app-signals-in-containers/">Handling app signals in containers</a></h1>
<p><img src="http://sebastien-han.fr/blog/images/ceph-container-handling-signals.jpg" alt="Title"></p>
<p>A year ago, I described how we were debugging our Ceph containers; today I’m back with yet another great thing we wrote :).
Sometimes, when a process running within a container receives a signal, you might want to do something before or after its termination.
That’s what we are going to discuss.</p>
<h2 id="Running-actions-before-or-after-terminating-a-process"><a href="#Running-actions-before-or-after-terminating-a-process" class="headerlink" title="Running actions before or after terminating a process"></a>Running actions before or after terminating a process</h2><p>Performing actions before or after a process get terminated on a host is easy because you don’t lose its environment.
In the micro-services world, your application runs in a container, and this application is PID 1, which means if it exits then your container goes away.</p>
<p>However, sometimes you want to gracefully terminate your programs, just as if they were running on a host with systemd doing this for you.
For example, on ceph-container we realized at some point that stopping an OSD running on an encrypted partition (dmcrypt + LUKS) was causing issues.
Indeed, the LUKS device was not being closed after the OSD process exited, which caused us a lot of trouble when merely trying to restart that container.</p>
<p>Typically, what we are looking at here is unmounting OSD partitions and closing LUKS devices, <strong>but after</strong> the OSD termination.
Remember the lines above: how can you perform that action if the container stops? Well, your LUKS device remains open, stuck in your dead container’s namespace… Not appealing, right?</p>
<p>Fortunately, we came up with a solution that supersedes our debugging mechanism.</p>
<p>As explained in the previous article, we remapped the <code>exec</code> function.
Traditionally, we start our container process with <code>exec</code>, so we fork the entrypoint process, breaking any relationship with it. Our new <a href="https://github.com/ceph/ceph-container/blob/master/src/daemon/docker_exec.sh#L47-L64" target="_blank" rel="external"><code>exec</code> function</a> contains a <a href="https://github.com/ceph/ceph-container/blob/master/src/daemon/docker_exec.sh#L51" target="_blank" rel="external"><code>trap</code></a> that ‘traps’ signals; we look for <code>SIGTERM</code> here. If the container receives a <code>SIGTERM</code> from, say, <code>docker stop</code>, then our trap gets activated. The trap calls a <a href="https://github.com/ceph/ceph-container/blob/master/src/daemon/docker_exec.sh#L35-L45" target="_blank" rel="external">function</a> that has two capabilities:</p>
<ul>
<li>run a <a href="https://github.com/ceph/ceph-container/blob/master/src/daemon/docker_exec.sh#L42" target="_blank" rel="external">pre task function</a> before <a href="https://github.com/ceph/ceph-container/blob/master/src/daemon/docker_exec.sh#L43" target="_blank" rel="external">sending <code>SIGTERM</code></a> to the process</li>
<li>run a <a href="https://github.com/ceph/ceph-container/blob/master/src/daemon/docker_exec.sh#L44" target="_blank" rel="external">post task function</a> after <code>SIGTERM</code> was sent to the process</li>
</ul>
<p>In our scenario, this is our <a href="https://github.com/ceph/ceph-container/blob/master/src/daemon/osd_scenarios/osd_disk_activate.sh#L77-L84" target="_blank" rel="external"><code>sigterm_cleanup_post</code> function</a>.</p>
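<p>Stripped of the ceph-container specifics, the pattern looks like this (a minimal sketch; <code>pre_task</code> and <code>post_task</code> are hypothetical placeholders for your own cleanup logic):</p>
<pre><code>#!/bin/bash
# Placeholder hooks: replace with your own logic (e.g. umount, luksClose)
pre_task()  { echo "running before SIGTERM is forwarded"; }
post_task() { echo "running after the daemon exited"; }

handle_term() {
  pre_task
  kill -TERM "$pid"   # forward SIGTERM to the real daemon
  wait "$pid"         # wait for it to actually terminate
  post_task
  exit 0
}

trap handle_term TERM

# Run the daemon in the background so this shell (PID 1) stays
# alive to catch the signal sent by "docker stop"
"$@" &
pid=$!
wait "$pid"
</code></pre>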
<p>Et voilà, that’s how you handle signals for your containers.</p>
<p><br></p>
<blockquote>
<p>More articles to follow on containers!</p>
</blockquote>
<p><img src="http://sebastien-han.fr/blog/images/ceph-container-handling-signals.jpg" alt="Title"></p>
<p>A year ago, I was describing how we were debugging our ceph containers; today I’m back with yet another great thing we wrote :).
Sometimes, when a process receives a signal and if that process runs within a container, you might want to do something before or after its termination.
That’s what we are going to discuss.</p>
<h1><a href="https://sebastien-han.fr/blog/2018/03/20/See-you-at-the-first-Cephalocon/">See you at the first Cephalocon</a></h1>
<p><img src="http://sebastien-han.fr/blog/images/cephalocon-beijing.png" alt="Title"></p>
<p>Tomorrow, the first conference fully dedicated to Ceph will start in Beijing, China.
I’m attending and super excited.
I will see you there!</p>
<h1><a href="https://sebastien-han.fr/blog/2018/03/19/Huge-changes-in-ceph-container/">Huge changes in ceph-container</a></h1>
<p><img src="http://sebastien-han.fr/blog/images/ceph-container-huge-change.jpg" alt="Title"></p>
<p>A massive refactor landed a week ago on <a href="https://github.com/ceph/ceph-container" target="_blank" rel="external">ceph-container</a>.
And yes, I’m saying ceph-container, not ceph-docker anymore.
We don’t have anything against Docker, we believe it’s excellent and we use it extensively.
However, having the ceph-docker name does not reflect the content of the repository.
Docker is only the <code>Dockerfile</code>, the rest is either entrypoints or examples.
In the end, we believe ceph-container is a better match for the repository name.</p>
<p><br></p>
<h2 id="I-We-were-doing-it-wrong…"><a href="#I-We-were-doing-it-wrong…" class="headerlink" title="I. We were doing it wrong…"></a>I. We were doing it wrong…</h2><p>Hosting and building images from the Docker Hub made us do things wrong.
The old structure we came up with was mostly to workaround the Docker Hub’s limitation which is basically:</p>
<blockquote>
<p>You cannot easily build more than one image from a single repository. We have multiple Linux distributions and Ceph releases to support. This was a show-stopper for us.</p>
</blockquote>
<p>To work around this, we designed a branching strategy in which each branch carried a specific version of the code (distribution and Ceph release), and at the root of the repository we had a <code>daemon</code> directory so the Docker Hub would fetch all of that and build our images.</p>
<p>The master branch, the one containing all the distributions and Ceph releases, had a bunch of symlinks everywhere, making the whole structure hard to maintain or modify without impacting the rest. Moreover, we had sooo much code duplication, terrible.</p>
<p>With that, we also lost traceability of the code inside the images:
since the image name (the tag) was always the same and got overwritten for each new commit on master (or a stable branch),
we only had a single version of a particular distribution and Ceph release.
This made rollbacks pretty hard to achieve for anyone who had removed the previous image…</p>
<p><br></p>
<h2 id="II-New-structure-the-matriochka-approach"><a href="#II-New-structure-the-matriochka-approach" class="headerlink" title="II. New structure: the matriochka approach"></a>II. New structure: the matriochka approach</h2><p>The new structure allows us to isolate each portion of the code, from distribution to Ceph release.
One can maintain its distribution; this eases cont maintainer’s life. Importantly, symlinks and code duplication are no more.
The code base has dropped too, 2,204 additions and 8,315 deletions.</p>
<p>For an in-depth description of this approach, please refer to the slides at the end of the blog post.</p>
<p><br></p>
<h2 id="III-Make-make-make"><a href="#III-Make-make-make" class="headerlink" title="III. Make make make!"></a>III. Make make make!</h2><p>Some would say “Old School,” I’d say, we don’t need to re-invent the wheel and clearly <code>make</code> has demonstrated to be robust.
Our entire image build process relies on <code>make</code>.</p>
<p>So the <code>make</code> approach lets you do a bunch of things, see the list:</p>
<pre><code>Usage: make [OPTIONS] ... &lt;TARGETS&gt;

TARGETS:

  Building:
    stage             Form staging dirs for all images. Dirs are reformed if they exist.
    build             Build all images. Staging dirs are reformed if they exist.
    build.parallel    Build default flavors in parallel.
    build.all         Build all buildable flavors with build.parallel
    push              Push release images to registry.
    push.parallel     Push release images to registry in parallel.

  Clean:
    clean             Remove images and staging dirs for the current flavors.
    clean.nones       Remove all image artifacts tagged &lt;none&gt;.
    clean.all         Remove all images and all staging dirs. Implies "clean.nones".
                      Will only delete images in the specified REGISTRY for safety.
    clean.nuke        Same as "clean.all" but will not be limited to specified REGISTRY.
                      USE AT YOUR OWN RISK! This may remove non-project images.

  Testing:
    lint              Lint the source code.
    test.staging      Perform staging integration test.

  Help:
    help              Print this help message.
    show.flavors      Show all flavor options to FLAVORS.
    flavors.modified  Show the flavors impacted by this branch's changes vs origin/master.
                      All buildable flavors are staged for this test.
                      The env var VS_BRANCH can be set to compare vs a different branch.

OPTIONS:

  FLAVORS - ceph-container images to operate on in the form
            &lt;ceph rel&gt;,&lt;arch&gt;,&lt;os name&gt;,&lt;os version&gt;,&lt;base registry&gt;,&lt;base repo&gt;,&lt;base tag&gt;
            and multiple forms may be separated by spaces.
              ceph rel      - named ceph version (e.g., luminous, mimic)
              arch          - architecture of Ceph packages used (e.g., x86_64, aarch64)
              os name       - directory name for the os used by ceph-container (e.g., ubuntu)
              os version    - directory name for the os version used by ceph-container (e.g., 16.04)
              base registry - registry to get base image from (e.g., "_" ~ x86_64, "arm64v8" ~ aarch64)
              base repo     - The base image to use for the daemon-base container. Generally this is
                              also the os name (e.g., ubuntu) but could be something like "alpine".
              base tag      - Tagged version of the base os to use (e.g., ubuntu:"16.04", alpine:"3.6")
            e.g., FLAVORS_TO_BUILD="luminous,x86_64,ubuntu,16.04,_,ubuntu,16.04 \
                                    luminous,aarch64,ubuntu,16.04,arm64v8,alpine,3.6"

  REGISTRY - The name of the registry to tag images with and to push images to.
             Defaults to "ceph".
             e.g., REGISTRY="myreg" will tag images "myreg/daemon{,-base}" and push to "myreg".

  RELEASE  - The release version to integrate in the tag. If omitted, set to the branch name.
</code></pre>
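<p>For example, building a single flavor against your own registry looks like this (an illustrative invocation assembled from the help text above; adjust the flavor tuple to your needs):</p>
<pre><code># Build the luminous/x86_64/Ubuntu 16.04 flavor and tag it under "myreg"
make FLAVORS="luminous,x86_64,ubuntu,16.04,_,ubuntu,16.04" REGISTRY="myreg" build
</code></pre>
<p><br></p>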
<h2 id="IV-We-are-back-to-two-images"><a href="#IV-We-are-back-to-two-images" class="headerlink" title="IV. We are back to two images"></a>IV. We are back to two images</h2><p><code>daemon-base</code> is back!
For a while we used to have <code>daemon</code> and <code>base</code>, then we dropped <code>base</code> to include everything in <code>daemon</code>.
However, we recently started to work on <a href="https://rook.io" target="_blank" rel="external">Rook</a>.
Rook was having its own Ceph container image; they shouldn’t have to build a Ceph image, <strong>we</strong> should be providing one.</p>
<p>So now, we have two images:</p>
<ul>
<li><code>daemon-base</code>, contains Ceph packages</li>
<li><code>daemon</code>, contains <code>daemon-base</code> plus ceph-container’s entrypoint / specific packages</li>
</ul>
<p>So now Rook can build its Rook image from <code>daemon-base</code> and then add the Rook binary on top of it.
This is not only true for Rook but for any project that would like to use a Ceph container image, as sketched below.</p>
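<p>Conceptually, layering on top of <code>daemon-base</code> looks like this (a hypothetical sketch, not Rook’s actual build; the tag and the binary are placeholders):</p>
<pre><code># Build a project image on top of the Ceph base image
cat &gt; Dockerfile &lt;&lt;'EOF'
FROM ceph/daemon-base:latest-luminous
COPY my-binary /usr/local/bin/my-binary
EOF
docker build -t myorg/my-ceph-app .
</code></pre>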
<p><br></p>
<h2 id="V-Moving-away-from-automated-builds"><a href="#V-Moving-away-from-automated-builds" class="headerlink" title="V. Moving away from automated builds"></a>V. Moving away from automated builds</h2><p>We spent too much time workarounding Docker Hub’s limitation. This even caused us to go with our previous terrible approach.
Now things are different. We are no longer using automated builds from the Docker Hub; we just use it as a registry to store our Ceph images.
Each time a pull request is merged into Github, our CI runs a job that builds and push images to the Docker Hub.
We also have a similar mechanism we stable releases, each time we tag a new release our CI runs triggers a job that builds that our stable container images.</p>
<p>Current images can be found on this <a href="https://hub.docker.com/r/ceph/daemon/tags/" target="_blank" rel="external">Docker Hub page</a>.</p>
<p>Later, we are planning on pushing our images to <a href="http://quay.io" target="_blank" rel="external">Quay</a>. Before we do, I’d just like to find out who’s using the Ceph organization or the Ceph username, as I can’t create either… Once this is solved, we will have a Ceph organization on Quay, and we will start pushing Ceph container images to it.</p>
<p><br></p>
<h2 id="VI-Lightweight-baby-container-images"><a href="#VI-Lightweight-baby-container-images" class="headerlink" title="VI. Lightweight baby! (container images)"></a>VI. Lightweight baby! (container images)</h2><p>We now have smaller container images; we went from almost 1GB unzipped to 600MB.
The build mechanism shrinks all the layers to a single one; this drastically reduces the size of the final container image.
Compressed the images went from 320 MB to 231 MB. So this 100MB saved, which is nice.
We could go further, but we decided it was too time-consuming and the value versus the risk is low.</p>
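<p>For the curious, one generic way to flatten an image into a single layer (an illustration of the technique, not necessarily our exact build step) is to export a container and re-import its filesystem:</p>
<pre><code># Flatten an image's layers by round-tripping through export/import
docker create --name tmp ceph/daemon:latest
docker export tmp | docker import - ceph/daemon:flat
docker rm tmp
</code></pre>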
<p><br></p>
<p>These are just a couple of highlights; if you want to learn more, look into this presentation.
You will learn more about the new project structure, our templating mechanism, and other benefits.</p>
<div class="row">
<iframe src="http://sebastien-han.fr/viewer/web/viewer.html?val=http://www.sebastien-han.fr/down/ceph-container-reorg.pdf" style="width:100%; height:550px"></iframe>
</div>
<p><br></p>
<blockquote>
<p>This is huge for ceph-container and I’m so proud of what we achieved. Big shout out to <a href="https://github.com/BlaineEXE" target="_blank" rel="external">Blaine Gardner</a> and <a href="https://github.com/ErwanAliasr1" target="_blank" rel="external">Erwan Velu</a> who did this refactoring work.</p>
</blockquote>
<p><img src="http://sebastien-han.fr/blog/images/ceph-container-huge-change.jpg" alt="Title"></p>
<p>A massive refactor done a week ago on <a href="https://github.com/ceph/ceph-container">ceph-container</a>.
And yes, I’m saying ceph-container, not ceph-docker anymore.
We don’t have anything against Docker, we believe it’s excellent and we use it extensively.
However, having the ceph-docker name does not reflect the content of the repository.
Docker is only the <code>Dockerfile</code>, the rest is either entrypoints or examples.
In the end, we believe ceph-container is a better match for the repository name.</p>