Linux / LTFS / Tape Drive archival guidelines

This short guide is a quick rundown on how to set up a Linux system with the necessary drivers and tools to reliably use any HPE LTO tape drive with LTFS for home or semi-professional file archival purposes.

Quick recap for users hoping to use Windows:

There is no hope. The current state of LTO tape drive support on Windows 10 and up is so poor that by the time you get it all working, a single reboot with yet another update of some kind may break it again. All in all, Microsoft has broken the Windows CopyFile API for sequential access since 22H2, disabling updates is very difficult, Windows Defender interferes with file access, and Microsoft and OEM drivers for SAS/SCSI controllers are extremely outdated. Just don't do it.

Instead, as expected, we rely on open-source software and *nix.

Distribution

The Linux distribution used for this guide is Debian 12, the latest stable Debian release as of writing. Install it on your system.

Hardware setup


Installation

Install useful tape drive tools:

sudo apt install lsscsi mt-st
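
Once installed, these tools can be used to check that the drive is detected. A quick sanity check (assuming the drive shows up as /dev/nst0; adjust if yours differs):

lsscsi -g
sudo mt -f /dev/nst0 status

lsscsi should list the tape drive along with its SCSI generic device, and mt should report the drive status without errors.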

Install compilation tools and git:

sudo apt install build-essential pkg-config git

Compilation: ltfs

In order to use LTFS, we'll need to compile the software that implements it: a patched version of the HP StoreOpen LTFS implementation:

git clone https://github.com/leavelet/ltfs-hp.git

(The changes needed relative to the reference implementation include support for the newer /sys device interface instead of /proc/scsi, which was deprecated and removed from the kernels shipped by default in Debian.)
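You can verify that the kernel exposes your drive through this /sys interface (this assumes the st tape driver is loaded and the drive is connected):

ls /sys/class/scsi_tape/

This should list entries such as st0 and nst0 for an attached drive.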

Install dependencies:

sudo apt install fuse libfuse-dev libicu-dev uuid-dev libxml2-dev

On Debian 10 and up, icu-config has been removed, so create a file /usr/bin/icu-config with these contents as root:

#!/bin/sh
# Minimal icu-config shim: the ltfs build only queries a few flags.

opts=$1

case $opts in
  '--cppflags')
    # No extra preprocessor flags are needed on Debian.
    echo '' ;;
  '--ldflags')
    # Link against the ICU common and data libraries.
    echo '-licuuc -licudata' ;;
  *)
    # Any other query: point at the ICU pkgdata include file.
    echo '/usr/lib/x86_64-linux-gnu/icu/pkgdata.inc' ;;
esac

and apply execute permissions:

sudo chmod 755 /usr/bin/icu-config
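
A quick check that the shim responds as the build expects:

icu-config --ldflags

should print -licuuc -licudata.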

Next, compile the project from the cloned directory:

cd ltfs-hp
make

and install the files:

sudo make install

Configure dynamic linker run-time bindings:

sudo ldconfig
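
At this point the LTFS binaries should be available; a quick check (exact paths may vary depending on the install prefix):

which ltfs mkltfs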

Using LTFS

The following commands might prove handy when dealing with tapes. We're assuming /dev/nst0 is the accessible tape device.

Formatting a LTFS tape

sudo mkltfs -f -c --device=/dev/nst0

This creates an LTFS filesystem on tape without compression, so typically at the original real storage capacity of the tape (usually half of the advertised maximum capacity). The reason for not using compression is to avoid the drive constantly varying its motor speed and throughput as the compression ratio changes - I've found that optimizing for a high fixed sustained speed works better in my case.

If you do want compression, just remove the -c option.

The -f option forces the creation of a filesystem, e.g. if the tape is blank or already contains a pre-existing LTFS filesystem. As always, be careful when formatting!
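
Before formatting, it can be useful to confirm the tape is loaded and positioned at the beginning; these are standard mt-st commands:

sudo mt -f /dev/nst0 rewind
sudo mt -f /dev/nst0 status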

Mounting a LTFS tape

sudo mkdir -p /mnt/ltfs
sudo ltfs -o devname=/dev/nst0,sync_type=time@60 /mnt/ltfs

This mounts the LTFS filesystem at /mnt/ltfs. Note that none of the usual options like uid, gid, umask, fmask, dmask will work. The sync_type option specifies how often to write the LTFS index to tape; in this case every 60 minutes (and at unmount) - check the manual (run ltfs) if you want to change this behaviour. The right interval depends on your type of files: each time the LTFS index is written, the tape machine will interrupt whatever it is doing, seek back and write the index, which is not great when you're in the middle of copying.
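
When you're done with a tape, unmount it so the final index is written before ejecting (the eject step assumes your drive supports the standard mt eject operation):

sudo umount /mnt/ltfs
sudo mt -f /dev/nst0 eject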

Copying to tape

rsync -r --progress /storage/some/path /mnt/ltfs/

If you use rsync for copying, you'll get plenty of throughput information while copying, so this works quite well. Don't forget to occasionally check that you can actually read a tape file back properly after copying at least one (big) archive, to verify that the whole tape archival round trip is working correctly!
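
One way to do that verification is to compare checksums between the source file and its copy on tape (the file names here are just placeholders; keep in mind that reading back from tape involves a long seek):

sha256sum /storage/some/path/archive.tar /mnt/ltfs/path/archive.tar

Both lines of output should show the same hash.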

Some things to keep in mind when copying files:

  • LTFS and tape are not suitable for small files. The seek time can be up to a minute when reading back, so access times can be extremely high. Prefer large files, and fewer files. If you do have many files or many small files, use tar to make an archive and copy that to tape instead (see the sketch after this list). It'll be a life saver.
  • LTFS and tape are not suitable for random access. Assume you'll only be copying to and from tape, not accessing files directly on tape. This is the only access pattern you should design for.
  • Tape requires a sustained throughput that must be met by the system. If the disk/filesystem you're copying from or to is not fast enough for even a moment, the tape machine will stall, rewind and retry. The same goes for overloading your CPU, memory or OS. Any hiccup in copying that cannot be absorbed by the internal buffers will stall tape writing or reading, which kills performance and reliability. Here are a few pointers:
    • Do not use your machine while copying to avoid CPU/OS overload.
    • Make sure the disk you're copying from/to is capable of sustaining at least 160 MB/s read and write speeds at all times. This includes NVMe SSDs, SATA SSDs and USB 3.0 capable SSDs, but take care to choose decent quality SSDs.
    • Do not use cheap generic SSDs, as their headline speeds can hide poor sustained read/write behaviour. These may use memory caches that fill up during large transfers, or become much slower as the disk fills up, eventually falling below the throughput the tape drive requires.
    • NVMe SSDs seem to work well, either internally or with a USB 3.0 adapter enclosure.
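
A minimal sketch of the tar approach mentioned above, assuming a fast local scratch disk at /storage/scratch (all paths and names here are placeholders): pack the small files into a single archive on the fast disk first, then stream that one large file to tape:

tar -cf /storage/scratch/photos-2024.tar -C /storage photos-2024
rsync --progress /storage/scratch/photos-2024.tar /mnt/ltfs/

Restoring works best in reverse: copy the tar back to a fast disk first, then unpack it locally, rather than extracting directly from tape.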