8 replies
June 2023

v-bulynkin

Thank you for the great guide!

Is this configuration expandable? Is it possible to add nodes and change the redundancy configuration on a live cluster?

And what was the problem with GlusterFS? There’s an anchor to an explanation, but it doesn’t exist. Perhaps something has changed since 2019?

1 reply
June 2023 ▶ v-bulynkin

funkypenguin Chef

Sort of. Ceph is happy to extend storage space by adding OSDs, but changing the redundancy configuration is usually not as easy. You could, for example, change your failure domain from host to rack, but once you’ve created a pool of 3 replicas, you’re stuck with it. Any other configuration will require a new pool, and a long and painful (ask me how I know!) migration…
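Rough sketch of what that looks like on the CLI (assuming a cephadm-managed cluster, and made-up host/pool names) - growing the cluster and moving the failure domain are in-place changes, but changing the redundancy scheme itself means a new pool and a copy:

```bash
# Adding capacity is the easy part: turn a new disk into an OSD (cephadm)
ceph orch daemon add osd node4:/dev/sdb

# Moving the failure domain from host to rack: create a new CRUSH rule, point the pool at it
ceph osd crush rule create-replicated replicated_rack default rack
ceph osd pool set mypool crush_rule replicated_rack

# Changing the redundancy scheme (e.g. 3x replica -> erasure coding) is the painful bit:
# you need a brand-new pool, then a long migration of the data into it
ceph osd erasure-code-profile set ec-4-2 k=4 m=2
ceph osd pool create mypool-ec 64 64 erasure ec-4-2
rados cppool mypool mypool-ec   # slow, and has caveats (snapshots aren’t preserved, etc.)
```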

July 2024

WebAshNZ

Kia ora! Do you have any experience with utilising Docker Volume plugins/drivers that connect to Ceph RBDs, rather than using CephFS mounts that are then bind-mounted to the containers? Potentially it’s unnecessary faff, but I wondered whether it would provide better performance and reliability, particularly with database applications (especially those that run on SQLite).

I’m currently running GlusterFS and it is awful (so much so that I don’t run most database workloads off the shared storage), so I just wanted to check whether also moving away from bind mounts (as I use today with Gluster) would be helpful.

1 reply
August 2024 ▶ WebAshNZ

funkypenguin Chef

Only noticed this message now, sorry for the delayed response!

I’ve got no direct experience using RBD with Docker Volumes, but I can relay that in Kubernetes, there’s a massive performance improvement to be gained by using RBD instead of CephFS. We switched the ElfHosted Plex volumes to RBD for this reason - it sucks a bit to lose the RWX nature of the volume, but performance for millions of little files / SQLite is much better :slight_smile:
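For reference, in Kubernetes the switch is mostly a one-line change on the PVC - a rough sketch below, where the storage class names (ceph-block / ceph-filesystem) and the PVC name are assumptions and will depend on how your cluster (e.g. Rook-Ceph) is set up:

```bash
# RBD-backed PVC: block volume, ReadWriteOnce only, but much better for
# metadata-heavy workloads (millions of small files, SQLite databases)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: plex-config
spec:
  accessModes:
    - ReadWriteOnce              # the trade-off: you lose RWX
  storageClassName: ceph-block   # RBD; a CephFS class (e.g. ceph-filesystem) gives RWX back
  resources:
    requests:
      storage: 10Gi
EOF
```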

August 2024

WebAshNZ

Yeah, I’m mostly thinking about it for the config / database storage volumes of a container, so it’s good to hear you’ve experienced significant performance benefits in a similar situation.

My problem now is finding a definitive way to connect to Ceph RBD at the Docker volume level, as so far it has proven difficult. It’s like all the Docker storage plugins are abandonware, because Docker themselves have abandoned any serious support for anything.

1 reply
August 2024 ▶ WebAshNZ

funkypenguin Chef

You might not like my answer here, but it rhymes with “Goobernetfleas” :slight_smile:

August 2024

WebAshNZ

Hahaha :sweat_smile: it feels like such a massive jump…

1 reply
August 2024 ▶ WebAshNZ

chefboyrdave2.1

Yeah - I’m staring down the barrel of a learning curve myself :person_shrugging: - first containers, then Docker, then Swarm, now k8s? It’s karma for doing tech architecture and not getting to play with the toys ourselves before now. Lucky we have the smartest best friends here to help us along the way - so grateful! :pray::smiling_face_with_three_hearts::v: