
@scyto
Last active April 16, 2025 22:50
Hypervisor Host Based CephFS Pass-Through with VirtioFS

Using VirtioFS backed by CephFS for bind mounts

This is currently work-in-progress documentation - rough notes for me, maybe missing a lot or wrong in places

The idea is to replace GlusterFS running inside the VMs with storage on my cephFS cluster. This is my proxmox cluster, which provides both the storage and the hypervisor for my docker VMs.

Other possible approaches:

  • ceph fuse client in the VM to mount CephFS, or CephRBD over IP
  • use of a ceph docker volume plugin (no usable version of this exists yet, but it is being worked on)

Assumptions:

  • I already have a working Ceph cluster - this will not be documented in this gist. See my proxmox gist for a working example.
  • this is for Proxmox as a hypervisor+ceph cluster, where the VMs are hosted on the same Proxmox cluster that runs ceph

Workflow

Create a new CephFS on the Proxmox cluster

I created one called docker
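
For reference, the same thing can be done from the shell on a cluster node; a minimal sketch (pveceph is Proxmox's ceph wrapper, and --add-storage also registers the new filesystem as a storage entry):

pveceph fs create --name docker --add-storage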


The storage ID is docker-cephFS (I chose this name as I will play with ceph in a variety of other ways too)
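
The resulting entry lands in /etc/pve/storage.cfg and looks roughly like this (a sketch - the content types shown are illustrative and yours may differ):

cephfs: docker-cephFS
        path /mnt/pve/docker-cephFS
        content backup,iso,vztmpl
        fs-name docker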


Add this to directory mappings

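Directory mappings live under Datacenter > Resource Mappings. Proxmox mounts CephFS storage on each node at /mnt/pve/<storage-id>, so that is the host path to map. This can presumably also be scripted; a hypothetical sketch (I'm assuming the /cluster/mapping/dir API path from the 8.4 release - verify with pvesh ls /cluster/mapping before relying on it):

pvesh create /cluster/mapping/dir --id docker-cephFS --map node=pve1,path=/mnt/pve/docker-cephFS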

Configure the docker host VMs to pass it through

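Each VM gets a virtiofs entry pointing at the directory mapping. The CLI equivalent should be along these lines (a sketch - I'm assuming qm takes the mapping ID directly as the value, check man qm on PVE 8.4; 101 is a made-up VMID):

qm set 101 --virtiofs0 docker-cephFS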

In each VM

  • sudo mkdir /mnt/docker-cephFS/
  • sudo nano /etc/fstab
    • add the line docker-cephFS /mnt/docker-cephFS virtiofs defaults 0 0 (with a #for virtiofs mapping comment above it)
    • save the file
  • sudo systemctl daemon-reload
  • sudo mount -a
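
To confirm the share actually mounted, something like:

findmnt -t virtiofs
df -h /mnt/docker-cephFS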

Migrating existing Docker Swarm stacks

basically it's:

  • stop the stack

  • mv the data from /mnt/gluster-vol1/dirname to /mnt/docker-cephFS/dirname

  • edit the stack to change the volume definitions from my gluster definition to a local volume - this means no editing of the service volume lines (a full command sketch follows the example below)

Example from my wordpress stack

volumes:
  dbdata:
    driver: gluster-vol1
  www:
    driver: gluster-vol1

to

volumes:
  dbdata:
    driver: local
    driver_opts:
      type: none
      device: "/mnt/docker-cephFS/wordpress_dbdata"
      o: bind

  www:
    driver: local
    driver_opts:
      type: none
      device: "/mnt/docker-cephFS/wordpress_www"
      o: bind

  • triple check everything
  • restart the stack
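
Put together, the whole migration is roughly this (a sketch using my wordpress stack names; wordpress.yml is a placeholder for your stack file - note the local driver with o: bind fails at deploy if the target directory does not exist, which the mv takes care of):

docker stack rm wordpress
mv /mnt/gluster-vol1/wordpress_dbdata /mnt/docker-cephFS/wordpress_dbdata
mv /mnt/gluster-vol1/wordpress_www /mnt/docker-cephFS/wordpress_www
docker stack deploy -c wordpress.yml wordpress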

if you get an error about the volume already being defined you may need to delete the old volume definition by hand - this can easily be done in Portainer or using the docker volume command
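
For example (volume names follow docker's <stack>_<volume> convention; run this on each node that still has the stale definition):

docker volume ls | grep wordpress
docker volume rm wordpress_dbdata wordpress_www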

Backup

haven't figured out an ideal strategy for backing up the cephFS from the host or from the VM - with gluster the bricks were stored on a dedicated vdisk, and that was backed up as part of the PBS backup of the VM

As the VirtioFS share is not presented as a disk, this doesn't happen (which is reasonable, as the cephFS is not VM specific)

@Drallas commented Apr 15, 2025

This is much better, when did they add this functionality? No more Hookscript needed, if I understand it correctly?

@mico28 commented Apr 16, 2025

Now I use Docker compose and use Backrest to back up data to my Qnap and then to the cloud.

I will migrate to Swarm now that Proxmox has announced VirtioFS, and I will use the same backup approach as below.
In the VM I have VirtioFS mounted at /mnt/DockerData/ and containers like Sonarr at /mnt/DockerData/Sonarr.
In Backrest I create a job to back up Sonarr every hour to a CIFS location on the Qnap.

@scyto (author) commented Apr 16, 2025

This is much better, when did they add this functionality? No more Hookscript needed, if I understand it correctly?

yup, in the new proxmox release

only downside is they block snapshotting of the VM, even though it is supported by QEMU - I understand why they have done that

I moved one stack in my swarm to this (my wordpress) as my gluster volume plugin helpfully trashed the whole replicated volume (thank god for PBS backup - I had to restore all 3 swarm VMs and then restore my wordpress application-level backup.... as the db seemed to be broken/some other weird issue)

only thing I haven't figured out - how to back up the cephFS volume......

@scyto (author) commented Apr 16, 2025

Now I use Docker compose and use Backrest to back up data to my Qnap and then to the cloud.

I will migrate to Swarm now that Proxmox has announced VirtioFS, and I will use the same backup approach as below. In the VM I have VirtioFS mounted at /mnt/DockerData/ and containers like Sonarr at /mnt/DockerData/Sonarr. In Backrest I create a job to back up Sonarr every hour to a CIFS location on the Qnap.

I would love to know more about Backrest - I haven't figured out how to back up the cephFS volume; I thought about using the community edition of Veeam, or restic (but I know nothing about those, so friction is high)

@scyto (author) commented Apr 16, 2025

@mico28 ok, I now realize Backrest is a GUI for restic - I hope you will let me ask these questions :-)

questions:

  1. does the Backrest docker container include restic?
  2. if not, do I need a 'restic server'?
  3. do I need a restic agent on each thing to be backed up (e.g. inside the VMs etc.)?

tl;dr where do I get started - I want to back up VMs, physical hosts, and specific volumes, do this as incremental backups (not full daily backups), oh and I want a 3-2-1 strategy where I have one backup copy on one NAS, a second backup copy on another NAS, and a last copy of only the critical data in Azure (I do this with Synology today but want to get rid of that)

do you think Backrest/restic can meet all these needs? do you know of a good getting-started guide?

my proxmox is not my NAS
I have a physical TrueNAS server I am commissioning for NAS duty and a secondary ZimaCube I haven't decided what to put on it

I don't need to back up movies and shit like that - just a few terabytes.

@mico28 commented Apr 16, 2025

  1. I'm not sure if it includes it.
  2. Here https://github.com/garethgeorge/backrest is an example compose
  3. The disadvantage is that you have to create Backrest separately on each VM if you use separate VMs and docker containers
  • Now with Swarm and VirtioFS I need only one instance of Backrest, because all VMs have the same data over VirtioFS.

  • Backrest/restic cannot meet all your needs.

  • First, you need to create a repository where you will store your backup data

  • Second, you create a Plan where you select what you want to back up
    This is my Jellyfin VM where I back up the Jellyfin config
    (screenshots of the repository and plan configuration)
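
The compose example linked above boils down to one container; a rough docker run equivalent (a sketch based on that repo's README - the image name and port 9898 are the project's, the host paths and the read-only /mnt/docker-cephFS mount are illustrative):

docker run -d --name backrest \
  -p 9898:9898 \
  -v /opt/backrest/data:/data \
  -v /opt/backrest/config:/config \
  -v /opt/backrest/cache:/cache \
  -v /mnt/docker-cephFS:/userdata:ro \
  garethgeorge/backrest:latest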

My backup plan is:

  • back up VMs once a day, every 24h, with Proxmox Backup Server
  • back up docker container volumes with Backrest to my NAS every hour
  • back up from the NAS to the cloud every hour
  • I don't back up physical hosts.

I have data on Proxmox, the NAS, and the cloud.
There is one copy at each location.
It's not an ideal strategy, but it works for me.

@scyto (author) commented Apr 16, 2025

Helpful, thanks - got it, Backrest is configured on each device that will back itself up to the 'repo'. Any opinion on using the REST server for restic? It seems to have added functionality that could reduce backup sizes

I already back up all my VMs and CTs to PBS every 2 hours; the PBS is a VM on a 2015 Synology (i.e. not powerful). I don't care about the containers in the swarm - everything in the swarm is ephemeral, except for the bind mounts (which today get backed up in the VMs accidentally, because the gluster volume bricks are there).

Now my bind data is going to move into a CephFS volume - so I need to figure out how to back that up... given there are 3 copies... one on each ceph node - hmm, maybe a restic client in an HA CT, that way only one process is ever trying to back up the cephFS to an endpoint....
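
If I try that restic-in-a-CT idea, the shape would be something like this (a hypothetical sketch - backup-nas and the repo name are made up; rest: is restic's REST-server backend syntax and 8000 is rest-server's default port):

# one-time: initialise the repository on the REST server
restic -r rest:http://backup-nas:8000/docker-cephfs init
# hourly (cron or systemd timer): incremental backup of the mounted share
restic -r rest:http://backup-nas:8000/docker-cephfs backup /mnt/docker-cephFS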

I need to go noodle... lots to think about and tinker with....

oh, do you replicate your PBS backup data elsewhere? so in effect

PVE VMs & CT > PBS Machine > NAS
                           > CLOUD
