This is currently work-in-progress documentation - rough notes for me, maybe missing a lot or wrong.
The idea is to replace GlusterFS running inside the VMs with storage on my CephFS cluster. This is my Proxmox cluster: it provides the Ceph storage and is also the hypervisor for my Docker VMs.
Other possible approaches:
- ceph-fuse client in the VM to mount CephFS, or Ceph RBD over IP
- use the Ceph Docker volume plugin (no usable version of this exists yet, but it is being worked on)
Assumptions:
- I already have a working Ceph cluster - this will not be documented in this gist. See my Proxmox gist for a working example.
- this is for Proxmox as a combined hypervisor + Ceph cluster, i.e. the VMs are hosted on the same Proxmox cluster that runs Ceph
I created a CephFS called docker.
The storage ID is docker-cephFS (I chose this name as I will play with Ceph in a variety of other ways too).
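For reference, the host-side setup was roughly as follows (a sketch - I did most of this in the Proxmox GUI, and the host mount path is an assumption, so check your own setup):

```bash
# on a Proxmox node: create the CephFS named "docker"
pveceph fs create --name docker --add-storage

# the virtiofs directory mapping "docker-cephFS" was then created in the GUI
# (Datacenter -> Resource Mappings -> Directory Mappings), pointing at the
# host-side CephFS mount (path is an assumption - check under /mnt/pve/):
#   /mnt/pve/docker
# and attached to each VM as a Virtiofs device under the VM's Hardware tab
```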
In each VM:

```bash
sudo mkdir /mnt/docker-cephFS/
sudo nano /etc/fstab
```

- add:

```
# for virtiofs mapping
docker-cephFS /mnt/docker-cephFS virtiofs defaults 0 0
```

- save the file
- then run:

```bash
sudo systemctl daemon-reload
sudo mount -a
```
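A quick sanity check that the share actually mounted and is writable:

```bash
# confirm the virtiofs share is mounted
findmnt /mnt/docker-cephFS
# confirm we can write to it
sudo touch /mnt/docker-cephFS/.write-test && sudo rm /mnt/docker-cephFS/.write-test
```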
Basically it's:

- stop the stack
- mv the data from /mnt/gluster-vol1/dirname to /mnt/docker-cephFS/dirname (see the sketch after this list)
- edit the stack to change the volume definitions from my gluster definition to a local volume - this means no editing of the service volume lines
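A sketch of the first two steps for my wordpress stack (the directory names are assumptions matching the volume names in the example below):

```bash
# stop the stack and wait for its tasks to exit
docker stack rm wordpress

# move the data from the old gluster mount onto the CephFS mount
sudo mv /mnt/gluster-vol1/wordpress_dbdata /mnt/docker-cephFS/wordpress_dbdata
sudo mv /mnt/gluster-vol1/wordpress_www /mnt/docker-cephFS/wordpress_www
```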
Example from my wordpress stack:

```yaml
volumes:
  dbdata:
    driver: gluster-vol1
  www:
    driver: gluster-vol1
```

to

```yaml
volumes:
  dbdata:
    driver: local
    driver_opts:
      type: none
      device: "/mnt/docker-cephFS/wordpress_dbdata"
      o: bind
  www:
    driver: local
    driver_opts:
      type: none
      device: "/mnt/docker-cephFS/wordpress_www"
      o: bind
```
- triple check everything
- restart the stack
If you get an error about the volume already being defined, you may need to delete the old volume definition by hand - this can easily be done in Portainer or using the docker volume command.
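For example (volume names assume the stack was deployed as wordpress - swarm prefixes volume names with the stack name, so check docker volume ls first):

```bash
# run on each node that still has the stale gluster-backed volume
docker volume ls | grep wordpress
docker volume rm wordpress_dbdata wordpress_www

# then redeploy the stack
docker stack deploy -c wordpress.yml wordpress
```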
I haven't figured out an ideal strategy for backing up the CephFS from the host or from the VM - with Gluster the bricks were stored on a dedicated vdisk, and that was backed up as part of the PBS backup of the VM.
As the virtioFS share is not presented as a disk, this doesn't happen (which is reasonable, as the CephFS is not VM specific).
Helpful, thanks - got it, Backrest is configured on each device that will back itself up to the 'repo'. Any opinion on using the REST server for restic? It seems to have append functionality that could reduce backup sizes.
I already back up all my VMs and CTs to PBS every 2 hours. The PBS is a VM on a 2015 Synology (i.e. not powerful). I don't care about the containers in the swarm - everything in the swarm is ephemeral except for the bind mounts (which today get backed up in the VMs incidentally, because the gluster volume bricks live there).
Now my bind data is going to move onto a CephFS volume - so I need to figure out how to back that up... given there are 3 copies, one on each Ceph node - hmm, maybe a restic client in a HA CT, that way only one process is ever trying to back up the CephFS to a CephFS endpoint....
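If I do go the restic-in-a-CT route, the job might look something like this (repo URL, password handling, and paths are all placeholders):

```bash
# one-time: initialise the repo on the restic REST server
# (RESTIC_PASSWORD or --password-file must be set in the environment)
restic -r rest:http://backup-host:8000/docker-cephfs init

# recurring job in the HA CT: back up the CephFS mount
restic -r rest:http://backup-host:8000/docker-cephfs backup /mnt/docker-cephFS
```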
I need to go noodle... lots to think about and tinker with....
oh, do you replicate your PBS backup data elsewhere, so in effect