Shared Storage (Ceph)

While Docker Swarm is great for keeping containers running (and restarting those that fail), it does nothing for persistent storage. This means that if you want your containers to persist any data across restarts (hint: you do!), you need to provide shared storage to every Docker node.
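
For anyone skimming, the end state looks roughly like this: CephFS mounted at the same path on every node, with services bind-mounting directories under it. A minimal sketch (the `/mnt/cephfs` mount point, monitor names, and example service are my assumptions, not the guide’s exact values):

```bash
# On every swarm node: mount CephFS at boot via the kernel client.
sudo mkdir -p /mnt/cephfs
echo "mon1,mon2,mon3:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0" \
  | sudo tee -a /etc/fstab
sudo mount /mnt/cephfs

# Bind-mount a subdirectory into a swarm service; the data is then visible
# no matter which node the container gets (re)scheduled onto.
docker service create --name whoami \
  --mount type=bind,source=/mnt/cephfs/whoami,target=/data \
  traefik/whoami
```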


This is a companion discussion topic for the original entry at https://geek-cookbook.funkypenguin.co.nz/docker-swarm/shared-storage-ceph/

Thank you for the great guide!

Is this configuration expandable? Is it possible to add nodes and change the redundancy configuration on a live cluster?

And what was the problem with GlusterFS? There’s an anchor to an explanation, but it doesn’t exist. Perhaps something has changed since 2019?

Sort of. Ceph is happy to extend storage space when you add OSDs, but changing the redundancy configuration is usually not as easy. You could, for example, change your failure domain from host to rack, but once you’ve created a pool of 3 replicas, you’re stuck with it. Any other configuration will require a new pool, and a long and painful (ask me how I know!) migration…
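
To make that concrete, here’s roughly what the easy cases look like (hostnames, device paths, and pool/rule names are placeholders):

```bash
# Grow capacity: add an OSD on a new disk (cephadm orchestrator syntax).
ceph orch daemon add osd node4:/dev/sdb

# Change the failure domain of a replicated pool from host to rack by
# pointing it at a new CRUSH rule; Ceph rebalances the data live.
ceph osd crush rule create-replicated replicated_rack default rack
ceph osd pool set my_pool crush_rule replicated_rack

# Switching a pool to (say) erasure coding has no in-place conversion:
# that's the new-pool-plus-migration scenario described above.
```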

Kia ora! Do you have any experience with utilising Docker volume plugins/drivers that connect to Ceph RBDs, rather than using CephFS mounts that are then bind-mounted into the containers? Potentially it’s unnecessary faff, but I wondered whether it would provide better performance and reliability, particularly with database applications (especially those that run on SQLite).

I’m currently running GlusterFS and it is awful (so much so that I don’t run most database workloads off the shared storage), so I just wanted to check whether moving away from bind mounts (as I use today with Gluster) would also be helpful.

Only noticed this message now, sorry for the delayed response!

I’ve got no direct experience using RBD with Docker volumes, but I can relay that in Kubernetes, there’s a massive performance improvement to be gained by using RBD instead of CephFS. We switched the ElfHosted Plex volumes to RBD for this reason - it sucks a bit to lose the RWX nature of the volume, but performance for millions of little files / SQLite is much better :slight_smile:
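
For context, in Kubernetes that switch is mostly a StorageClass change, and the volume comes back ReadWriteOnce instead of ReadWriteMany. A rough sketch using ceph-csi (the cluster ID, pool, and secret names are placeholders, not ElfHosted’s actual config):

```bash
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 00000000-0000-0000-0000-000000000000   # your cluster fsid
  pool: rbd_pool
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: plex-config
spec:
  accessModes: ["ReadWriteOnce"]   # the RWX trade-off mentioned above
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 10Gi
EOF
```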

Yeah, I’m mostly thinking about it for the config / database storage volumes of a container, so it’s good to hear you’ve experienced significant performance benefits in a similar situation.

My problem now is finding a definitive way to connect to Ceph in RBD mode at the Docker volume level, which has so far proven difficult. It’s like all the Docker storage plugins are abandonware, because Docker themselves have abandoned any serious support for anything.
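
For the record, the plugin-free fallback I’m considering is just mapping the RBD image on the host and bind-mounting it, something like this (pool/image names and sizes are made up):

```bash
# Create and map an RBD image with the kernel client, then format and
# mount it like any other block device.
rbd create swarm_pool/app-db --size 10G
sudo rbd map swarm_pool/app-db          # prints e.g. /dev/rbd0
sudo mkfs.ext4 /dev/rbd0
sudo mkdir -p /mnt/rbd/app-db
sudo mount /dev/rbd0 /mnt/rbd/app-db

# Bind-mount into the container as usual. Caveat: an RBD image should only
# be mapped on one host at a time, so the service must be pinned to this node.
docker run -d --name app -v /mnt/rbd/app-db:/var/lib/app some/image
```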

You might not like my answer here, but it rhymes with “Goobernetfleas” :slight_smile:

Hahaha :sweat_smile: it feels like such a massive jump…

Yeah - I’m staring down the barrel of a learning curve myself :person_shrugging: - first containers, then Docker, then Swarm, now k8s? It’s karma for doing tech architecture and not getting to play with the toys ourselves before now - lucky we have the smartest best friends here to help us along the way! So grateful! :pray::smiling_face_with_three_hearts::v: