RBD image bigger than your Ceph cluster
A quick experiment with gigantic, overprovisioned RBD images.
First, create a large image, let's say 1 PB:
$ rbd create --size 1073741824 huge
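The `--size` argument is expressed in megabytes, so the arithmetic behind that number is simply:

```shell
# rbd's --size is in MB; 1 PB = 1024 TB = 1024^2 GB = 1024^3 MB
size_mb=$(( 1024 * 1024 * 1024 ))
echo "$size_mb"   # → 1073741824
```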
Problems arise as soon as you attempt to delete the image. Now try to remove it:
$ time rbd rm huge
Keeping an index of every existing object would be terribly inefficient, since maintaining it would kill performance. The major downside of this technique is that when shrinking or deleting an image, RBD must look for all the objects above the shrink size; for a full delete, that means every possible object in the image.
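A back-of-the-envelope count shows the scale, assuming the default 4 MB object size (images can be created with other object sizes, so this is just the common case):

```shell
# A 1 PB image striped into default 4 MB objects:
# 1073741824 MB / 4 MB per object = possible objects rbd rm has to probe
objects=$(( 1073741824 / 4 ))
echo "$objects"   # → 268435456
```

That is over 268 million object lookups for an image that may contain no data at all.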
Since Dumpling, RBD can issue these operations in parallel, controlled by
--rbd-concurrent-management-ops (an undocumented option), which defaults to 10.
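As a loose analogy for what that option does (not what librbd actually executes internally), think of a bounded worker pool like `xargs -P`: at most N removals in flight at once. Here `echo` stands in for the real `rados` removal and the object prefix is made up:

```shell
# Simulate removing 100 objects with at most 10 in flight,
# analogous to rbd's default of 10 concurrent management ops.
# 'echo' stands in for: rados -p rbd rm <object>
seq 0 99 | xargs -P 10 -I{} echo "removing rb.0.1234.{}"
```

Raising the value speeds up deletion at the cost of more simultaneous load on the cluster.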
You still have another option: if you've never written to the image, you can simply delete its header object.
You can find it by listing the objects in the pool;
rados -p <your-pool> ls | grep <image-name> will do the trick.
After that, removing the RBD image only takes a second.
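For an old-style format 1 image, the header object is named `<image>.rbd`, so grepping the pool listing for the image name finds it. A simulated listing (object names made up for illustration, since the real one depends on your pool):

```shell
# Simulated `rados -p rbd ls` output for a pool holding our empty image
cat <<'EOF' | grep huge
rbd_directory
huge.rbd
EOF
# → huge.rbd
```

Once `huge.rbd` is removed with `rados rm`, `rbd rm huge` has no header left to walk and returns almost immediately.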