The TrueNAS installer doesn't have a way to use anything less than the full device. This is usually a waste of resources when installing to a modern NVMe drive, which is typically several hundred GB in size. TrueNAS SCALE uses only a few GB for its system files, so being able to install to a 16GB partition would be helpful.
The easiest way to solve this is to modify the installer script before starting the installation process.
- Boot the TrueNAS SCALE installer from a USB stick/ISO.
- Select `shell` in the first menu (instead of installing).
- While in the shell, run the following commands:

```
sed -i 's/sgdisk -n3:0:0/sgdisk -n3:0:+16384M/g' /usr/sbin/truenas-install
/usr/sbin/truenas-install
```
For TrueNAS Scale 24.10+ see this comment.

The first command modifies the installer script so that it creates a 16GiB boot-pool partition instead of using the full disk; the second command restarts the TrueNAS Scale installer. A quick way to verify the edit is sketched just below.
- Continue installing according to the official docs.

Steps 7-12 in the deprecated guide have instructions on how to allocate the remaining space to a partition you can use for data. If you are using a single drive, just ignore the steps that have to do with mirroring.
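Before restarting the installer it can be worth confirming that the `sed` edit above actually changed the script. This is just an optional sanity check, run from the same installer shell:

```
# The partition-creation line should now read "sgdisk -n3:0:+16384M" instead of
# "sgdisk -n3:0:0" (i.e. a 16GiB boot-pool instead of the whole disk).
grep -n 'sgdisk -n3' /usr/sbin/truenas-install
```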
Unfortunately this is only possible by using an intermediate device to act as the installation disk and later moving its data to the NVMe. Below I have documented the steps I took to get TrueNAS SCALE to run from a mirrored 16GB partition on NVMe disks.
For an easier initial partitioning, please see this comment and the discussion that follows. It should remove the need to use a USB stick as an intermediate medium.
- Install TrueNAS SCALE on a USB drive, preferably 16GB in size. If you use a 32GB stick you must create a 32GB partition on the NVMe, wasting space that could be used for VMs and Docker/k8s applications.
- Boot and enter a Linux shell as root, for example by enabling the SSH service and logging in with the root password.
- Check the available devices:

```
$ parted
(parted) print devices
/dev/sdb (15.4GB)       # boot device
/dev/nvme0n1 (500GB)
/dev/nvme1n1 (512GB)
(parted) quit
```
If you only have one NVMe disk, just ignore the instructions that include the second disk (nvme1n1). That disk is only used to create a ZFS mirror to handle disk failures.
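If you prefer, `lsblk` gives a similar overview without entering the parted prompt; the column choice below is just a suggestion and the device names will of course differ on your system:

```
# Show block devices with their size, model and transport (usb/nvme/sata).
lsblk -o NAME,SIZE,MODEL,TRAN
```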
- Clone the boot device to the other devices:

```
$ cat /dev/sdb > /dev/nvme0n1
$ cat /dev/sdb > /dev/nvme1n1
```
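`dd` does the same job if you want a progress indicator while the clone runs; a sketch assuming the same device names as above:

```
# Equivalent to the cat commands, but with progress output.
dd if=/dev/sdb of=/dev/nvme0n1 bs=4M status=progress
dd if=/dev/sdb of=/dev/nvme1n1 bs=4M status=progress
sync   # make sure all writes hit the disks before continuing
```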
- Check the partition layout. Fix all the GPT space warning prompts that show up:

```
$ parted -l
[...]
Warning: Not all of the space available to /dev/nvme0n1 appears to be used, you can
fix the GPT to use all of the space (an extra 946741296 blocks) or continue with the
current setting?
Fix/Ignore? f
[...]
Model:  USB  SanDisk 3.2Gen1 (scsi)
Disk /dev/sdb: 15.4GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      20.5kB  1069kB  1049kB                     bios_grub
 2      1069kB  538MB   537MB   fat32              boot, esp
 3      538MB   15.4GB  14.8GB  zfs
[...]
```
The other disks' partition tables should look identical to this.
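If you would rather skip the interactive Fix/Ignore prompts, `sgdisk` can move the backup GPT header to the end of each cloned disk non-interactively; an alternative sketch, assuming the same device names:

```
# Relocate the backup GPT data structures to the end of the (larger) disks.
sgdisk -e /dev/nvme0n1
sgdisk -e /dev/nvme1n1
```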
- Remove the zfs partition (number 3 in this case) from the new devices. This is the boot-pool partition and we will recreate it later. The reason we remove it is that ZFS would otherwise recognize old metadata that makes it think the partition is part of the pool when it is not.

```
$ parted /dev/nvme0n1 rm
Partition number? 3
Information: You may need to update /etc/fstab.
```
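Some people clear the old ZFS labels explicitly instead of (or in addition to) relying on the shifted start sector in the next step; a hedged alternative, run before removing the partition:

```
# Wipe the cloned partition's ZFS vdev labels and filesystem signatures so the
# old boot-pool metadata cannot be detected later.
zpool labelclear -f /dev/nvme0n1p3
wipefs -a /dev/nvme0n1p3
```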
- Recreate the boot-pool partition as a 16GiB partition with a slightly later start sector than before. Make sure the start is on a 2048-aligned boundary for best performance (526336 % 2048 = 0). Shifting the start also makes sure that ZFS doesn't find any metadata from the old partition. Start with the smaller disk if they are not identical.

```
$ parted
(parted) unit kiB
(parted) select /dev/nvme0n1
(parted) print
Model: KINGSTON SNVS500GB (nvme)
Disk /dev/nvme0n1: 488386584kiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start    End        Size       File system  Name  Flags
 1      20.0kiB  1044kiB    1024kiB                       bios_grub
 2      1044kiB  525332kiB  524288kiB  fat32              boot, esp

(parted) mkpart boot-pool 526336kiB 17303552kiB
(parted) print
Model: KINGSTON SNVS500GB (nvme)
Disk /dev/nvme0n1: 488386584kiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start      End          Size         File system  Name       Flags
 1      20.0kiB    1044kiB      1024kiB                               bios_grub
 2      1044kiB    525332kiB    524288kiB    fat32                    boot, esp
 3      526336kiB  17303552kiB  16777216kiB               boot-pool
```
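The alignment is easy to double-check before moving on; a small sketch using the numbers from this example:

```
# The KiB start offset used above is a multiple of 2048...
echo $((526336 % 2048))                    # prints 0
# ...and parted can confirm the new partition is optimally aligned.
parted /dev/nvme0n1 align-check optimal 3
```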
- Now you can create a partition allocating the rest of the disk.

```
(parted) mkpart pool 17303552kiB 100%
(parted) print
Model: KINGSTON SNVS500GB (nvme)
Disk /dev/nvme0n1: 488386584kiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start        End           Size          File system  Name       Flags
 1      20.0kiB      1044kiB       1024kiB                                bios_grub
 2      1044kiB      525332kiB     524288kiB     fat32                    boot, esp
 3      526336kiB    17303552kiB   16777216kiB                boot-pool
 4      17303552kiB  488386560kiB  471083008kiB               pool
```
- Do the same for the next device, but this time use the same values as in the printout above. We do this to make sure that the partitions are exactly the same size. In this example the disks are slightly different in size, so using 100% on the second disk would create a partition larger than the one we just created on the smaller disk.

```
(parted) select /dev/nvme1n1
Using /dev/nvme1n1
(parted) mkpart boot-pool 526336kiB 17303552kiB
(parted) mkpart pool 17303552kiB 488386560kiB
(parted) print
Model: TS512GMTE220S (nvme)
Disk /dev/nvme1n1: 500107608kiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start        End           Size          File system  Name       Flags
 1      20.0kiB      1044kiB       1024kiB                                bios_grub
 2      1044kiB      525332kiB     524288kiB     fat32                    boot, esp
 3      526336kiB    17303552kiB   16777216kiB                boot-pool
 4      17303552kiB  488386560kiB  471083008kiB               pool
```
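If the second disk is at least as large as the first, replicating the partition table with `sgdisk` is another way to get byte-identical partitions. This is not what the steps above do, just a hedged alternative:

```
# Copy nvme0n1's partition table onto nvme1n1 (the target is the -R argument,
# the source is the positional device), then give the copy new unique GUIDs.
sgdisk -R /dev/nvme1n1 /dev/nvme0n1
sgdisk -G /dev/nvme1n1
```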
- Make the new system partitions part of the boot-pool. This is done by attaching them to the existing pool while detaching the USB drive.

```
$ zpool attach boot-pool sdb3 nvme0n1p3
```
Wait for resilvering to complete; check progress with:

```
$ zpool status
```
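If you'd rather not re-run the command by hand, something like the following keeps an eye on it (assuming `watch` is available on the system):

```
# Refresh the pool status every 5 seconds until the resilver finishes.
watch -n 5 zpool status boot-pool
```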
When resilvering is complete we can detach the USB device:

```
$ zpool offline boot-pool sdb3
$ zpool detach boot-pool sdb3
```
Finally, add the last drive to create a mirror of the boot-pool:

```
$ zpool attach boot-pool nvme0n1p3 nvme1n1p3
$ zpool status
  pool: boot-pool
 state: ONLINE
  scan: resilvered 2.78G in 00:00:03 with 0 errors on Wed Oct 27 07:16:56 2021
config:

        NAME           STATE     READ WRITE CKSUM
        boot-pool      ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            nvme0n1p3  ONLINE       0     0     0
            nvme1n1p3  ONLINE       0     0     0
```
At this point you can remove the USB device, and when the machine is rebooted it will start up from the NVMe devices instead. Check the BIOS boot order if it doesn't.
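Since the USB stick was cloned byte-for-byte, both NVMe disks already carry the bios_grub and EFI partitions, which is why they can boot on their own. If the machine still picks the wrong device and your board boots UEFI, the boot entries can be inspected from Linux (assuming efibootmgr is installed):

```
# List the current UEFI boot entries and their order.
efibootmgr -v
```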
- Now that the boot-pool is mirrored, we want to create a mirror pool using the remaining partitions:

```
$ zpool create pool1 mirror nvme0n1p4 nvme1n1p4
$ zpool status
  pool: boot-pool
 state: ONLINE
  scan: resilvered 2.78G in 00:00:03 with 0 errors on Wed Oct 27 07:16:56 2021
config:

        NAME           STATE     READ WRITE CKSUM
        boot-pool      ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            nvme0n1p3  ONLINE       0     0     0
            nvme1n1p3  ONLINE       0     0     0

  pool: pool1
 state: ONLINE
config:

        NAME           STATE     READ WRITE CKSUM
        pool1          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            nvme0n1p4  ONLINE       0     0     0
            nvme1n1p4  ONLINE       0     0     0
```
But to be able to import it in the Web UI we need to export it:

```
$ zpool export pool1
```
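Before heading to the Web UI you can confirm the exported pool is actually visible for import:

```
# With no arguments, zpool import lists pools that are available for import;
# pool1 should show up here.
zpool import
```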
- All done! Import pool1 using the Web UI and start enjoying the additional space.
Thanks for the write-ups. I had to tweak the instructions a little bit to include additional context, as I did some head->desk to figure out why I was getting errors like `mount failed to create mountpoint read-only file system` or the parted error `unable to satisfy all constraints on the partition`. Alas, here's my write-up:
Creating a partition within your boot drive in order to use the wasted/unused/leftover space on your drive for apps, VMs, etc.
As of X (todo) version, the installer uses a python3 script. Don't listen to anyone who says to use /usr/sbin/truenas-install.
I got most of my information from here. Homie is just a lil too vague for c&p instructions:
https://gist.github.com/gangefors/2029e26501601a99c501599f5b100aa6?permalink_comment_id=5297295#gistcomment-5297295
AMD Ryzen 5 2600X
16GB DDR4 RAM
256GB NVMe (32GB boot partition / the rest is for apps)
2x 14TB HDD, mirrored
```
sed -i 's/-n3:0:0/-n3:0:+32G/g' /usr/lib/python3/dist-packages/truenas_installer/install.py
exit
```

The `exit` returns you from the shell.

Next we re-partition the NVMe. You must perform this next step on the console; SSH will not give you the correct access.
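Before re-partitioning, one optional check is that the `sed` edit really landed in the Python installer; a quick hypothetical sanity check from the installer shell:

```
# The matching line should now contain "-n3:0:+32G" instead of "-n3:0:0".
grep -n 'n3:0' /usr/lib/python3/dist-packages/truenas_installer/install.py
```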
- `parted` to enter the parted CLI
- `unit KiB`
- `print list` to show your current NVMe layout (it'll look something like this; C&P from @sthames42)
- `name 3 boot-pool`
- `resizepart <number> <end>`. In this example I took 32GB + <start addr of #3> to get 31777350. Example:

```
resizepart 3 31777350KiB
```

Description: this command will resize the boot-pool partition (ID #3) to end 32GB after its start address.
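The end value is just the start address of partition 3 plus the desired size. As a worked example of that arithmetic (assuming partition 3 starts at 527350 KiB and that 32GB here means decimal gigabytes; use the start address from your own `print list` output):

```
# 32 GB expressed in KiB, added to the assumed start of partition 3.
echo $((527350 + 32 * 1000 * 1000 * 1000 / 1024))   # -> 31777350
```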
- `mkpart` to create the new partition in the freed-up space. When prompted:
  - for name: use apps-pool
  - for type: I used zfs
  - for start address: I used the end of block 3 + 2048
  - for end: I used 100%

Note: somewhere around this step you'll get a warning about not being byte aligned. I think that's due to the end address; I ignored it.
Congrats, you've just created /dev/nvme0n1p4 in this example!!!
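For reference, the same thing can be done with a single command inside parted instead of answering each prompt. This is only a sketch; `<START>` is a placeholder for the end address of partition 3 plus the 2048 gap mentioned above:

```
# Non-interactive form of the mkpart prompts: name, fs type, start, end.
(parted) mkpart apps-pool zfs <START>KiB 100%
(parted) print
```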
Now we need to set up the zpool so the UI can pick it up:
```
zpool status
zpool create apps-pool /dev/nvme0n1p4
zpool export apps-pool
```
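If you want to confirm the pool really sits on the new partition, `zpool list -v` shows the vdev layout; run it between the create and the export, since an exported pool is no longer listed:

```
# Shows apps-pool together with its underlying device (nvme0n1p4 in this example).
zpool list -v apps-pool
```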