A Comprehensive Guide to Fedora 42 with Tiered Btrfs/ZFS Backups

Introduction

This guide provides a complete, end-to-end walkthrough for setting up an advanced Fedora 42 Workstation. The goal is to create a highly resilient and performant system built on a custom Btrfs layout, featuring:

  • Automated Local Snapshots: Using Snapper for high-frequency, local snapshots of critical system and user subvolumes.
  • Bootable Rollbacks: Integrating grub-btrfs to allow booting directly into a previous system state from the GRUB menu, making system recovery trivial.
  • A 3-Tier Backup Strategy: A sophisticated, automated backup workflow that dispatches snapshots to different storage tiers based on retention requirements:
    • Tier 1 (Local SSD): Instant, high-frequency snapshots for immediate recovery.
    • Tier 2 (Dedicated Btrfs Drive): Short-term (hourly, daily) snapshot replication for rapid data access.
    • Tier 3 (ZFS Mirror): Long-term (weekly, monthly, yearly) archival on a resilient ZFS mirror for ultimate data integrity.
  • (Optional) High-Performance KVM Setup: Instructions for creating an optimized Btrfs subvolume for KVM virtual machines with maximum disk I/O.

⚠️ Important Note on Disk Identifiers

This guide uses /dev/sda, /dev/sdb, and /dev/sdc as examples for the backup and archive drives. Your system's disk identifiers will likely be different (e.g., /dev/nvme0n1, /dev/sdd).

Before running any disk-related commands, always verify your disk layout. Use the following command to list your disks and their properties. Identify the correct disks for your OS, backup, and archive purposes based on their size and model.

lsblk -o NAME,SIZE,MODEL,FSTYPE,MOUNTPOINT

Using incorrect disk names can lead to permanent data loss. Proceed with caution.

By the end of this guide, you will have a robust, "set-it-and-forget-it" system that provides comprehensive protection against software failures, configuration errors, and data loss.


Table of Contents

  • Part 0: Initial Fedora Installation
  • Part 1: (Optional) KVM Performance Setup
  • Part 2: Prepare Backup & Archival Storage
  • Part 3: Configure Snapper for Local Snapshots
  • Part 4: Install and Configure grub-btrfs
  • Part 5: Automate Tiered Backups
  • Part 6: Troubleshooting
  • Part 7: Verifying Your Setup

Part 0: Initial Fedora Installation 💿

This section covers the installation of Fedora Workstation with the specific manual partitioning required for a robust snapshot and rollback system.

1. Boot the Fedora Installer

Start your computer from the Fedora Workstation live USB in UEFI mode.

2. Launch the Installer

Select Install Fedora. Choose your language and keyboard layout.

3. Installation Destination

  • Navigate to the Installation Destination screen.
  • Select your Main SSD.
  • Under "Storage Configuration", click the three-dot menu (⋮) at the top-right and select Launch Storage Editor. Acknowledge the warning.

4. Partitioning

  • If the drive has old partitions, delete them.
  • Create EFI Partition: Click the three-dot menu (⋮) next to the free space, select Create partition, and use these settings:
    • Name: ESP
    • Mount Point: /boot/efi
    • Type: EFI System Partition
    • Size: 1 GiB
  • Create Btrfs Partition: Use the remaining free space to create another partition:
    • Name: FEDORA
    • Mount Point: (Leave empty)
    • Type: BTRFS
    • Size: Use all remaining space.

5. Creating Btrfs Subvolumes

  • Click the three-dot menu (⋮) next to top-level (btrfs subvolume) and select Create subvolume.
  • Create the subvolumes according to the table below. The root and home subvolumes are critical. The others are highly recommended for preventing snapshot bloat.
Name       Mount Point
root       /
home       /home
opt        /opt
cache      /var/cache
gdm        /var/lib/gdm
libvirt    /var/lib/libvirt
log        /var/log
spool      /var/spool
tmp        /var/tmp

6. Finalize Installation

  • Click Return to Installation.
  • Proceed with the installation, create your user account, and reboot when finished.

7. Post-Installation Configuration

After rebooting and logging in for the first time, complete these essential setup steps.

Address GRUB 'sparse file' Message

You may see a harmless "error: sparse file not allowed" message during boot. It appears because GRUB cannot write to its environment file on a Btrfs filesystem, which also prevents GRUB from automatically hiding the boot menu. For a snapshot-enabled system that is actually the behavior you want, since the menu must be visible to select snapshots. To make the always-visible menu explicit, run the following command:

sudo grub2-editenv - unset menu_auto_hide
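
To confirm the change took effect, list the GRUB environment block; menu_auto_hide should no longer appear in the output:

sudo grub2-editenv list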

Verify Filesystem Layout

Run these commands to confirm that your partitions and subvolumes were created correctly.

# Check the block device layout (replace /dev/vda with your OS disk, e.g. /dev/nvme0n1 or /dev/sda)
lsblk -p /dev/vda

# Show details of the Btrfs filesystem
sudo btrfs filesystem show /

# List all created Btrfs subvolumes
sudo btrfs subvolume list /

Perform Initial System Update

Finally, bring your new system completely up to date.

sudo dnf update -y
sudo reboot

Part 1: (Optional) KVM Performance Setup 🖥️

Note: This entire section is optional and only required if you plan to host high-performance KVM virtual machines on this system.

This sets up a dedicated, high-performance subvolume for your virtual machines.

  1. Ensure Btrfs Tools are Installed:
    sudo dnf install btrfs-progs -y
  2. Create KVM Parent Directory:
    mkdir /home/$USER/.kvm
  3. Create VM Btrfs Subvolume:
    sudo btrfs subvolume create /home/$USER/.kvm/VMs
  4. Set NoCoW (No-Copy-on-Write) Attribute: This is critical for VM disk performance and must be done while the directory is still empty (a quick verification command follows this list).
    sudo chattr +C /home/$USER/.kvm/VMs
  5. Create VM Disk & Set Ownership:
    sudo qemu-img create -f raw /home/$USER/.kvm/VMs/win11.raw 250G
    sudo chown -R $USER:$USER /home/$USER/.kvm
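
To confirm step 4 worked, check the directory's attributes; the output should include the C (No_COW) flag:

lsattr -d /home/$USER/.kvm/VMs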

Part 2: Prepare Backup & Archival Storage 🗄️

Reminder: Before you begin, re-verify your disk identifiers with lsblk -o NAME,SIZE,MODEL to ensure you are targeting the correct drives for your Btrfs backups (e.g., /dev/sda) and ZFS archives (e.g., /dev/sdb, /dev/sdc).

1. Install ZFS Tools
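
Note: ZFS is not shipped in the official Fedora repositories, so the install command below only works after the OpenZFS repository has been added. A minimal sketch, assuming the DKMS-based OpenZFS packages (the zfs-release package URL changes between releases, so copy the current one from the OpenZFS documentation for Fedora):

# Add the OpenZFS repository (verify the current zfs-release URL in the OpenZFS docs)
sudo dnf install -y https://zfsonlinux.org/fedora/zfs-release-2-8$(rpm --eval "%{dist}").noarch.rpm
# Kernel headers are needed so DKMS can build the zfs kernel module
sudo dnf install -y kernel-devel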

sudo dnf install zfs -y

2. Format and Mount Dedicated Btrfs Drive (/dev/sda)

# Format the drive
sudo mkfs.btrfs -L BTRFS_SNAPSHOTS /dev/sda
# Create the mount point
sudo mkdir /mnt/btrfs_snapshots
# Add to /etc/fstab for automounting
UUID=$(sudo blkid -s UUID -o value /dev/sda)
echo "UUID=$UUID /mnt/btrfs_snapshots btrfs defaults,compress=zstd 0 0" | sudo tee -a /etc/fstab

# Mount and create subvolumes to receive backups for each source
sudo mount /mnt/btrfs_snapshots
sudo btrfs subvolume create /mnt/btrfs_snapshots/root
sudo btrfs subvolume create /mnt/btrfs_snapshots/home

# --- OPTIONAL: For KVM VM backups ---
sudo btrfs subvolume create /mnt/btrfs_snapshots/kvm_vms
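
You can confirm the receiving subvolumes exist before moving on:

sudo btrfs subvolume list /mnt/btrfs_snapshots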

3. Create ZFS Mirror Pool (/dev/sdb, /dev/sdc)

To create a robust ZFS mirror, we will use the stable hardware identifiers for your disks instead of /dev/sdb and /dev/sdc, which can change.

Note: The zpool create command below is specific to the hardware in this guide. You must find the unique IDs for your own disks. To do this, run the command ls -l /dev/disk/by-id/ and identify the long names corresponding to your ZFS drives.

# This command is tailored to the example hardware.
# Replace the disk IDs with the ones you found for your own system.
sudo zpool create -o ashift=12 zvmpool mirror /dev/disk/by-id/ata-WDC_WD30EFAX-68JH4N0_WD-WX22D40J32JF /dev/disk/by-id/ata-WDC_WD30EFAX-68JH4N0_WD-WX22D40J33D9

# Create datasets for Borg archives
sudo zfs create -o compression=lz4 zvmpool/root_archive
sudo zfs create -o compression=lz4 zvmpool/home_archive

# --- OPTIONAL: For KVM VM archives ---
sudo zfs create -o compression=lz4 zvmpool/kvm_vms_archive
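
Confirm the pool and datasets were created as expected:

sudo zpool status zvmpool
sudo zfs list -r zvmpool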

Part 3: Configure Snapper for Local Snapshots 📸

This step activates snapshot creation on your main SSD.

1. Install Snapper and DNF Plugin

sudo dnf install snapper dnf-plugin-snapper -y

2. Create Configurations

Create a Snapper config for each subvolume you want to protect.

sudo snapper -c root create-config /
sudo snapper -c home create-config /home

# --- OPTIONAL: For KVM VM snapshots ---
sudo snapper -c kvm_vms create-config /home/$USER/.kvm/VMs
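
Verify that the configurations were registered:

sudo snapper list-configs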

3. Set Retention Policies

Use the snapper set-config command to apply the desired retention policies.

For root and home (with timeline snapshots):

for config in root home; do
  sudo snapper -c "$config" set-config "TIMELINE_CREATE=yes"
  sudo snapper -c "$config" set-config "TIMELINE_CLEANUP=yes"
  sudo snapper -c "$config" set-config "TIMELINE_LIMIT_HOURLY=24"
  sudo snapper -c "$config" set-config "TIMELINE_LIMIT_DAILY=7"
  sudo snapper -c "$config" set-config "NUMBER_CLEANUP=yes"
  sudo snapper -c "$config" set-config "NUMBER_LIMIT=30"
  sudo snapper -c "$config" set-config "NUMBER_LIMIT_IMPORTANT=10"
done

For kvm_vms (manual/DNF snapshots only): This configuration disables timer-based snapshots, which saves space and avoids capturing VM disk images in an inconsistent state while guests are running.

# --- OPTIONAL: For KVM VM snapshots ---
sudo snapper -c kvm_vms set-config "TIMELINE_CREATE=no"
sudo snapper -c kvm_vms set-config "NUMBER_CLEANUP=yes"
sudo snapper -c kvm_vms set-config "NUMBER_LIMIT=10"
sudo snapper -c kvm_vms set-config "NUMBER_LIMIT_IMPORTANT=5"

4. Enable Snapper Timers

This will enforce the timeline and cleanup policies for root and home.

sudo systemctl enable --now snapper-timeline.timer
sudo systemctl enable --now snapper-cleanup.timer
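
Confirm both timers are scheduled:

systemctl list-timers 'snapper-*'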

Part 4: Install and Configure grub-btrfs 🔄

This makes your / snapshots bootable. Since grub-btrfs is not in the official Fedora repositories, we will build it from source.

1. Install Build Dependencies

Install the necessary tools to clone the repository and build the software.

sudo dnf groupinstall "Development Tools" -y
sudo dnf install git -y

2. Clone, Configure, and Install

Clone the official repository and apply Fedora-specific changes before installing.

# Clone the repository
git clone https://github.com/Antynea/grub-btrfs.git
cd grub-btrfs

# Apply Fedora-specific settings to the config file
sed -i.bkp \
  -e '/^#GRUB_BTRFS_SNAPSHOT_KERNEL_PARAMETERS=/a \
GRUB_BTRFS_SNAPSHOT_KERNEL_PARAMETERS="rd.live.overlay.overlayfs=1"' \
  -e '/^#GRUB_BTRFS_GRUB_DIRNAME=/a \
GRUB_BTRFS_GRUB_DIRNAME="/boot/grub2"' \
  -e '/^#GRUB_BTRFS_MKCONFIG=/a \
GRUB_BTRFS_MKCONFIG=/usr/bin/grub2-mkconfig' \
  -e '/^#GRUB_BTRFS_SCRIPT_CHECK=/a \
GRUB_BTRFS_SCRIPT_CHECK=grub2-script-check' \
  config

# Install the configured application
sudo make install
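
Before cleaning up, you can confirm that the sed edits landed in the local config file (there should be one match per setting):

grep -nE '^GRUB_BTRFS_(SNAPSHOT_KERNEL_PARAMETERS|GRUB_DIRNAME|MKCONFIG|SCRIPT_CHECK)=' config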

3. Enable the Service and Clean Up

Enable the systemd service that will watch for new snapshots and automatically update GRUB.

sudo systemctl enable --now grub-btrfsd.service

With the installation complete, you can safely remove the cloned repository.

cd ..
rm -rfv grub-btrfs

4. Perform Initial GRUB Update and Verify

Perform a final manual update of the GRUB configuration.

Note: Modern Fedora uses a GRUB wrapper, so the output path /boot/grub2/grub.cfg is correct for both UEFI and Legacy BIOS systems.

sudo grub2-mkconfig -o /boot/grub2/grub.cfg

You should now reboot to verify the "Fedora Linux snapshots" entry appears in your GRUB menu.
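
If you want a quick check before rebooting: grub-btrfs writes its snapshot submenu to grub-btrfs.cfg inside the GRUB directory configured above (its default behavior), so the file's presence with a fresh timestamp indicates the entries were generated:

sudo ls -l /boot/grub2/grub-btrfs.cfg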


Part 5: Automate Tiered Backups 🚀

This section implements the automation using system-level services that run as the root user. This is the most robust method for background tasks as it does not depend on a user's login session.

1. Install Prerequisites

Ensure borgbackup is installed.

sudo dnf install borgbackup -y

2. Create the Secure Passphrase Script

This script will be owned by and only readable by root, providing the passphrase to Borg.

  1. Create the script file:
    sudo mkdir -p /etc/borg
    sudo nano /etc/borg/borg-pass.sh
  2. Add your passphrase to the file:
    #!/bin/sh
    echo 'your-strong-passphrase-here'
  3. Set secure permissions:
    sudo chown root:root /etc/borg/borg-pass.sh
    sudo chmod 700 /etc/borg/borg-pass.sh
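
Verify the permissions and that the script prints your passphrase when run as root:

sudo ls -l /etc/borg/borg-pass.sh
sudo /etc/borg/borg-pass.sh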

3. Create the Automation Scripts (for Root)

These scripts are run directly by root and do not need any internal sudo commands.

The Tier 2 Sync Script (sync_snapshots.sh)

Create the file with sudo nano /usr/local/bin/sync_snapshots.sh and paste in this code.

Important: In the case statement below, the path for kvm_vms must be absolute. Replace iamchriswick with your actual username.

#!/bin/bash
set -euo pipefail

CONFIG_NAME="$1"
DEST_BASE="/mnt/btrfs_snapshots"
HOURLY_TO_KEEP=24
DAILY_TO_KEEP=7

if [ -z "$CONFIG_NAME" ]; then
    echo "Error: No config name provided. Usage: $0 <config_name>"
    exit 1
fi

SOURCE_SUBVOL=""
case "$CONFIG_NAME" in
  root) SOURCE_SUBVOL="/";;
  home) SOURCE_SUBVOL="/home";;
  kvm_vms) SOURCE_SUBVOL="/home/iamchriswick/.kvm/VMs";; # <-- EDIT THIS USERNAME
  *) echo "Error: Unknown config name '$CONFIG_NAME'"; exit 1;;
esac

if [ "$SOURCE_SUBVOL" == "/" ]; then
    SOURCE_SNAP_DIR="/.snapshots"
else
    SOURCE_SNAP_DIR="$SOURCE_SUBVOL/.snapshots"
fi

DEST_PATH="$DEST_BASE/$CONFIG_NAME"

echo "--- Starting Btrfs sync for config: $CONFIG_NAME ---"

LATEST_LOCAL_SNAP_NUM=$(snapper -c "$CONFIG_NAME" list | tail -n 1 | awk '{print $1}')
if [ -z "$LATEST_LOCAL_SNAP_NUM" ]; then
    echo "No local snapshots found. Exiting."; exit 0;
fi

LATEST_LOCAL_SNAP_PATH="$SOURCE_SNAP_DIR/$LATEST_LOCAL_SNAP_NUM/snapshot"

if test -d "$DEST_PATH/$LATEST_LOCAL_SNAP_NUM"; then
    echo "Latest snapshot $LATEST_LOCAL_SNAP_NUM already on destination."
else
    echo "New snapshot $LATEST_LOCAL_SNAP_NUM found. Sending..."
    PARENT_NUM=$(btrfs subvolume list -s -o "$DEST_PATH" | awk -F'/' '{print $NF}' | sort -n | tail -n 1)  # -o limits the listing to this config's destination

    if [ -z "$PARENT_NUM" ]; then
        btrfs send "$LATEST_LOCAL_SNAP_PATH" | btrfs receive "$DEST_PATH"
    else
        PARENT_PATH="$DEST_PATH/$PARENT_NUM"
        btrfs send -p "$PARENT_PATH" "$LATEST_LOCAL_SNAP_PATH" | btrfs receive "$DEST_PATH"
    fi

    if test -d "$DEST_PATH/snapshot"; then
        mv "$DEST_PATH/snapshot" "$DEST_PATH/$LATEST_LOCAL_SNAP_NUM"
    fi
    
    echo "Sync complete."
fi

echo "Pruning old snapshots in $DEST_PATH..."
ALL_SNAPS=($(btrfs subvolume list -s -o "$DEST_PATH" | awk -F'/' '{print $NF}' | sort -n))  # -o limits the listing to this config's destination
TO_KEEP=()
for snap in "${ALL_SNAPS[@]: -$HOURLY_TO_KEEP}"; do TO_KEEP+=("$snap"); done

for i in $(seq 0 $((DAILY_TO_KEEP - 1))); do
    DAY_TO_CHECK=$(date --date="$i days ago" +%F)
    LATEST_OF_DAY=$(find "$DEST_PATH" -maxdepth 1 -type d -newermt "$DAY_TO_CHECK 00:00:00" ! -newermt "$DAY_TO_CHECK 23:59:59" -printf "%f\n" | sort -n | tail -n 1)
    if [ -n "$LATEST_OF_DAY" ]; then TO_KEEP+=("$LATEST_OF_DAY"); fi
done

UNIQUE_TO_KEEP=($(printf "%s\n" "${TO_KEEP[@]}" | sort -u))

for snap in "${ALL_SNAPS[@]}"; do
    if [[ ! " ${UNIQUE_TO_KEEP[*]} " =~ " ${snap} " ]]; then
        echo "Pruning snapshot: $snap"
        btrfs subvolume delete "$DEST_PATH/$snap"
    fi
done

echo "--- Sync and prune for $CONFIG_NAME finished. ---"

Make it executable:

sudo chmod +x /usr/local/bin/sync_snapshots.sh
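
With the Tier 2 drive from Part 2 mounted, a one-off manual run is a good way to confirm the script behaves before wiring up the timers:

sudo /usr/local/bin/sync_snapshots.sh root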

The Tier 3 Archive Script (archive_snapshots.sh)

Create the file with sudo nano /usr/local/bin/archive_snapshots.sh and paste in this code:

#!/bin/bash
set -euo pipefail

CONFIGS=("root" "home") 
# Add "kvm_vms" to the array if you are using it, e.g., CONFIGS=("root" "home" "kvm_vms")
SOURCE_BASE="/mnt/btrfs_snapshots"
ARCHIVE_BASE="/zvmpool"
export BORG_PASSCOMMAND='/etc/borg/borg-pass.sh'
RETENTION_ARGS="--keep-weekly 5 --keep-monthly 12 --keep-yearly 1"

echo "--- Starting daily archival to ZFS mirror ---"

for config in "${CONFIGS[@]}"; do
    SOURCE_PATH="$SOURCE_BASE/$config"
    REPO_PATH="$ARCHIVE_BASE/${config}_archive"
    
    echo "Processing config: $config"

    if [ ! -f "$REPO_PATH/config" ]; then
        echo "Borg repository not found or not initialized. Initializing..."
        borg init --encryption=repokey "$REPO_PATH"
    fi

    LATEST_SNAP_PATH=$(find "$SOURCE_PATH" -mindepth 1 -maxdepth 1 -type d -printf '%T@ %p\n' | sort -n | tail -n 1 | cut -d' ' -f2-)
    
    if [ -z "$LATEST_SNAP_PATH" ]; then
        echo "No snapshots in $SOURCE_PATH to archive. Skipping."
        continue
    fi

    ARCHIVE_NAME="{now:%Y-%m-%d_%H:%M}"
    echo "Creating archive of $LATEST_SNAP_PATH..."
    borg create --stats --compression zstd "$REPO_PATH::$ARCHIVE_NAME" "$LATEST_SNAP_PATH"
    
    echo "Pruning repository $REPO_PATH..."
    borg prune --list $RETENTION_ARGS "$REPO_PATH"
done

echo "--- Daily archival finished. ---"

Make it executable:

sudo chmod +x /usr/local/bin/archive_snapshots.sh
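
As with the sync script, you can do a manual test run; on first use it will initialize the Borg repositories on the ZFS datasets:

sudo /usr/local/bin/archive_snapshots.sh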

4. Create and Enable SYSTEM systemd Units

These files will now be placed in /etc/systemd/system/ and will run as root.

For Tier 2 (Hourly Sync)

Create /etc/systemd/system/[email protected] (e.g., with sudo nano):

[Unit]
Description=Btrfs Snapshot Sync for %I

[Service]
Type=oneshot
ExecStart=/usr/local/bin/sync_snapshots.sh %i

Create /etc/systemd/system/[email protected] (e.g., with sudo nano):

[Unit]
Description=Run Btrfs snapshot sync for root hourly

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target

Create /etc/systemd/system/[email protected] (e.g., with sudo nano):

[Unit]
Description=Run Btrfs snapshot sync for home hourly

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target

For Tier 3 (Daily Archive)

Create /etc/systemd/system/zfs-archive.service (e.g., with sudo nano):

[Unit]
Description=ZFS/Borg Snapshot Archival

[Service]
Type=oneshot
ExecStart=/usr/local/bin/archive_snapshots.sh

Create /etc/systemd/system/zfs-archive.timer (e.g., with sudo nano):

[Unit]
Description=Run ZFS/Borg snapshot archival daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

(Optional) For Tier 2 KVM Sync (Event-Driven)

This path unit triggers a sync for kvm_vms only when a new snapshot is created. Because a path unit activates the service of the same name by default, it starts the btrfs-sync@ template instance ([email protected]) without any extra configuration. Remember to replace iamchriswick with your username.

Create /etc/systemd/system/btrfs-sync@kvm_vms.path (e.g., with sudo nano):

[Unit]
Description=Watch for new KVM snapshots and trigger sync

[Path]
PathModified=/home/iamchriswick/.kvm/VMs/.snapshots

[Install]
WantedBy=default.target

5. Reload Daemon and Enable Timers

Finally, reload the system daemon and enable the new timers.

sudo systemctl daemon-reload

# Enable required timers
sudo systemctl enable --now [email protected]
sudo systemctl enable --now [email protected]
sudo systemctl enable --now zfs-archive.timer

# Optional: Enable the path unit for KVM
sudo systemctl enable --now btrfs-sync@kvm_vms.path
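
A quick way to confirm everything is scheduled:

systemctl list-timers 'btrfs-sync@*' 'zfs-archive*'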

Part 6: Troubleshooting 🔧

Here are solutions to some common issues.

ZFS Kernel Module Fails to Load

  • Symptom: zpool commands fail.
  • Cause: DKMS module rebuild failure after a kernel update, or Secure Boot blocking the module.
  • Solution: Ensure kernel-devel headers are installed. Try sudo modprobe zfs. For Secure Boot, you may need to sign the modules.
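
A minimal check sequence, assuming the DKMS-based OpenZFS packages:

# Confirm headers for the running kernel are present
sudo dnf install -y kernel-devel
# Check whether DKMS has built the zfs module for the current kernel
sudo dkms status
# Try loading the module and inspect the pool
sudo modprobe zfs
sudo zpool status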

GRUB Menu Doesn't Show Snapshots

  • Symptom: "Fedora Linux snapshots" entry is missing from the GRUB menu.
  • Cause: grub-btrfsd.service may have failed or the config is wrong.
  • Solution: Manually run sudo grub2-mkconfig -o /boot/grub2/grub.cfg. Check the service status with systemctl status grub-btrfsd.service.

Permission Denied Errors

  • Symptom: Errors when running scripts or creating subvolumes.
  • Cause: Most operations in this guide require root privileges.
  • Solution: Ensure you are using sudo for system-level commands. The automation scripts themselves are executed as root by the system-level systemd units, so no sudoers changes are needed; if you run them by hand, prefix them with sudo.

Borg Fails with Permission Errors

  • Symptom: borg create fails with "Permission denied".
  • Cause: Reading snapshot data requires root privileges.
  • Solution: The archive script is executed as root by zfs-archive.service, so it already has the required access. If a manual run fails, start it via the service (sudo systemctl start zfs-archive.service) or run the script with sudo.

Part 7: Verifying Your Setup ✅

After completing the guide, follow these steps to verify that each component of the system is working correctly.

1. Test Tier 1: Local Snapshots (Snapper)

  • Test Timeline Snapshots: Wait an hour, then check that Snapper has created snapshots for root and home.

    sudo snapper -c root list
    sudo snapper -c home list

    You should see at least one snapshot with "timeline" in the description.

  • Test DNF Plugin Snapshots: Install a small package to trigger snapshots.

    sudo dnf install -y sl

    Check the root snapshots again. You should see new "pre" and "post" snapshots related to dnf.

    sudo snapper -c root list

2. Test grub-btrfs Rollback

  • Create a Test File: sudo touch /THIS_FILE_SHOULD_DISAPPEAR
  • Reboot and select an older snapshot from the "Fedora Linux snapshots" GRUB menu.
  • Verify: Once booted into the read-only snapshot, run ls /. The test file should not be present.
  • Reboot again to return to your normal system.

3. Test Tier 2: Btrfs Sync (to /dev/sda)

  • Check the Timers: Confirm the hourly system timers are active.
    sudo systemctl status [email protected]
  • Manually Trigger a Sync: Run the service immediately using sudo.
    sudo systemctl start [email protected]
  • Check the Logs: View the system journal for the service.
    sudo journalctl -u [email protected] --since "1 minute ago"
  • Verify the Result: Check that the snapshot was copied.
    sudo btrfs subvolume list /mnt/btrfs_snapshots/root

4. Test Tier 3: ZFS Archive (to ZFS Mirror)

  • Check the Timer: Confirm the daily system timer is active.
    sudo systemctl status zfs-archive.timer
  • Manually Trigger the Archive:
    sudo systemctl start zfs-archive.service
  • Check the Logs:
    sudo journalctl -u zfs-archive.service --since "1 minute ago"
  • Verify the Result: Check the ZFS pool and list the Borg repository contents.
    sudo zpool status
    sudo borg list /zvmpool/root_archive