When you install Ubuntu Desktop, you can choose to "Erase disk and install Ubuntu" and "Use LVM and encryption".
Installing alongside an existing Linux OS that already uses encryption (LUKS) and LVM is not supported.
It should be possible to do this using Ubuntu Server, by running cryptsetup luksOpen ... to open the encrypted partition(s) before starting the installation, but I encountered an error after selecting to install Ubuntu Server 24.10 on a decrypted volume. Maybe Canonical will resolve that in the future.
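The pre-install unlock I attempted looked roughly like this; the device and mapper names follow my layout described below and are only illustrative:

```
$ sudo cryptsetup luksOpen /dev/nvme0n1p1 cryptdata2
$ sudo vgchange -ay data
```

cryptsetup luksOpen prompts for the passphrase, and vgchange -ay activates the logical volumes in the data volume group so the installer can see them.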
The following is how I successfully installed Ubuntu Desktop 24.10 alongside an existing Linux OS on a system that uses LUKS and LVM. These instructions should work for Ubuntu 24.04 and above if you use systemd-boot as I do, or any version of Ubuntu if you use rEFInd.
My drives are set up like this:
$ sudo lsblk
...
nvme0n1 259:0 0 931.5G 0 disk
└─nvme0n1p1 259:1 0 931.5G 0 part
└─cryptdata2 252:1 0 931.5G 0 crypt
├─data-root2 252:3 0 250G 0 lvm # Second OS root
└─data-srv 252:4 0 1.3T 0 lvm /srv
nvme1n1 259:2 0 931.5G 0 disk
├─nvme1n1p1 259:3 0 4.5G 0 part /boot/efi
├─nvme1n1p2 259:4 0 4G 0 part /recovery
└─nvme1n1p3 259:5 0 923G 0 part
└─cryptdata 252:0 0 923G 0 crypt
├─data-root 252:2 0 250G 0 lvm /
└─data-srv 252:4 0 1.3T 0 lvm /srv
As you can see, /dev/nvme1n1 has three partitions. The first is the EFI partition, which is mounted at /boot/efi. The third partition is encrypted using LUKS, and it is unimaginatively named cryptdata. On cryptdata is an LVM group named data, and in that group are two LVM logical volumes, root and srv. root stores the first OS.
/dev/nvme0n1 just has one partition, also encrypted using LUKS, and named cryptdata2. It also belongs to the data LVM group, and in it are the root2 logical volume and an extension of the srv logical volume.
These instructions are to install Ubuntu on the root2 logical volume.
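If your volume group does not yet have a spare logical volume for the second OS, you can carve one out of free space first. A minimal sketch, assuming the data volume group has 250G free (names and sizes as in my layout):

```
$ sudo vgs data
$ sudo lvcreate -L 250G -n root2 data
```

vgs shows the free space in the group; lvcreate creates the root2 volume that the rest of these instructions install into.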
I use libvirt and related tools to manage local VMs:
$ sudo apt install libvirt-clients virtinst virt-viewer
I also want the VM to have UEFI support:
$ sudo apt install ovmf
The files related to my Ubuntu VM are in /srv/share/virt/ubuntu-24.10/. The drive image is in the images/ subdirectory. I use the following installation script to create the VM:
$ cd /srv/share/virt/ubuntu-24.10/
$ cat create-vm.sh
#!/bin/sh
# Requires OVMF for UEFI support
# $ sudo apt install ovmf
#
# To run:
# $ virsh start ubuntu-24.10
# $ virt-viewer ubuntu-24.10
#
# To delete:
# $ virsh undefine ubuntu-24.10 --nvram
NAME=ubuntu-24.10
BASEDIR=/srv/share/virt/$NAME
IMAGES=$BASEDIR/images
BIOS=$BASEDIR/OVMF_VARS.fd
sudo virt-install \
--name $NAME \
--vcpus 4 \
--ram 16384 \
--disk path=$IMAGES/disk1.qcow2,size=30,format=qcow2 \
--boot loader=/usr/share/OVMF/OVMF_CODE.fd,loader.readonly=yes,loader.type=pflash,nvram.template=$BIOS,loader.secure=no \
--network network=default,model=virtio \
--osinfo ubuntu-stable-latest \
--cdrom /srv/share/dl/iso/ubuntu/ubuntu-24.10-desktop-amd64.iso
# More info:
# * https://www.baeldung.com/linux/qemu-uefi-boot
The script will create the VM and boot the ubuntu-24.10-desktop-amd64.iso image. Run the Ubuntu installation.
Erase the disk. Don't use any advanced features. Create your default
user and give your machine a good name.
After installation is complete, "power off" the VM.
$ sudo halt -p
The following instructions are based on https://gist.github.com/shamil/62935d9b456a6f9877b5
I am assuming here that Ubuntu will be stored on the root2 logical volume, and that it is available at /dev/mapper/data-root2. Your names will probably be different.
- Enable NBD:
$ sudo modprobe nbd max_part=8
- Connect the qcow2 image as a network block device:
$ sudo qemu-nbd --connect=/dev/nbd0 images/disk1.qcow2
- Confirm the partitions inside the qcow2 image:
$ sudo fdisk -l /dev/nbd0
- Mount the root partition:
$ sudo mkdir /mnt/vm/
$ sudo mount /dev/nbd0p2 /mnt/vm/
- Format and mount the destination volume:
$ sudo mkfs.ext4 /dev/mapper/data-root2
$ sudo mkdir /mnt/root2/
$ sudo mount /dev/mapper/data-root2 /mnt/root2
- Copy everything from the VM image to the destination:
$ cd /mnt/vm/
$ sudo cp -a * /mnt/root2/
$ cd /mnt/root2/
- Unmount and disconnect the VM image:
$ sudo umount /mnt/vm/
$ sudo qemu-nbd --disconnect /dev/nbd0
$ sudo rmmod nbd
Again, your names and devices will probably be different. Modify as applicable.
To recap, /dev/mapper/data-root2 is mounted at /mnt/root2/, and the current directory is /mnt/root2/.
- Copy /etc/fstab and update it:
$ sudo blkid | grep root2
/dev/mapper/data-root2: UUID="d4a8badc-f0b4-4b9a-8abc-511e4e96adac" BLOCK_SIZE="4096" TYPE="ext4"
$ cat /etc/fstab | sudo tee -a etc/fstab
$ sudo vim etc/fstab
Ensure that /boot/efi is mounted with umask=0077 so that only the root user can read keys used by the boot loader.
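For illustration, the relevant lines of the edited etc/fstab might look like this; the root UUID is the one blkid reported above, while the EFI partition UUID is a placeholder you must replace with your own:

```
UUID=d4a8badc-f0b4-4b9a-8abc-511e4e96adac /         ext4 errors=remount-ro 0 1
UUID=XXXX-XXXX                            /boot/efi vfat umask=0077        0 1
```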
- Copy /etc/crypttab:
$ sudo cp /etc/crypttab etc/
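For reference, a crypttab entry has the form below; the UUID placeholder stands for the UUID of the LUKS partition itself (e.g. /dev/nvme0n1p1 in my layout), not the UUID of a filesystem inside it:

```
cryptdata2 UUID=<uuid-of-nvme0n1p1> none luks
```

Since we copy the host's /etc/crypttab unchanged, the existing entries should already be correct.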
- Unmount /boot/efi:
$ sudo umount /boot/efi
- Mount and chroot:
$ for m in dev dev/pts proc run sys ; do sudo mount -o bind /$m $m ; done
$ blkid | grep EFI
/dev/nvme1n1p1: ... LABEL="EFI" ...
$ sudo mount -o umask=0077 /dev/nvme1n1p1 boot/efi
$ sudo chroot /mnt/root2/
- Install systemd-boot and cryptsetup. We use systemd-boot because it stores the files it needs on the EFI partition, which is not encrypted. GRUB stores its files on /boot, and while GRUB supports LUKS1, we are using LUKS2 for all encryption.
If you are installing openSUSE and chose systemd-boot, the packages you need will already be installed. If systemd-boot is not available for your distribution/version of Linux, you can use rEFInd instead.
# apt install cryptsetup \
    systemd-boot \
    systemd-cryptsetup \
    cryptsetup-initramfs
The systemd-boot package installation runs bootctl install to install the systemd-boot files to the EFI partition. If you ever need to update the systemd-boot files on the EFI partition, use:
$ sudo bootctl update
For openSUSE, use:
# sdbootutil install
# sdbootutil add-all-kernels
The cryptsetup-initramfs package runs update-initramfs. If you need it in the future, the following command updates the initrd.img files for all installed kernels, and copies them to the right locations for systemd-boot:
$ sudo update-initramfs -u -k all
- Edit the boot configuration for Ubuntu: the value of root in the kernel options will be wrong, because it is taken from /proc/cmdline, which gives the kernel options that the current OS was booted with. Replace the UUID of the current root volume with the UUID of /dev/mapper/data-root2 that you found with blkid earlier.
# vim.tiny /boot/efi/loader/entries/cfb0ee1b4a894fc19a54e70d6bd296d4-6.11.0-9-generic.conf
...
options root=UUID=d4a8badc-f0b4-4b9a-8abc-511e4e96adac ro quiet
...
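If you would rather script the edit than open vim.tiny, the substitution can be done with sed. This is a sketch, shown here on a sample options line; the NEW_UUID value and the entry file name are illustrative:

```shell
# Replace the root= UUID in a systemd-boot entry's options line.
# NEW_UUID is illustrative; use the UUID of your new root volume.
NEW_UUID=d4a8badc-f0b4-4b9a-8abc-511e4e96adac
echo 'options root=UUID=00000000-0000-0000-0000-000000000000 ro quiet' |
  sed "s/root=UUID=[^ ]*/root=UUID=$NEW_UUID/"
```

Inside the chroot, the same expression can be applied in place with sed -i against the entry file under /boot/efi/loader/entries/.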
- Exit the chroot:
# exit
- Unmount the filesystems:
$ for m in boot/efi sys run proc dev/pts dev ; do sudo umount $m ; done
$ cd
$ sudo umount /mnt/root2
$ sudo mount /boot/efi
- Reboot into your new Ubuntu installation!
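After the reboot, it is worth confirming that you are really running from the new volume; on my layout this should report the new root device:

```
$ findmnt -n -o SOURCE /
/dev/mapper/data-root2
```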