# Change the relevant {{ PARTS OF THIS FILE }} for your remote address etc.
# Make sure this unit file is named similarly to your mountpoint; e.g., for /mnt/mymountpoint name this file mnt-mymountpoint.mount
# On Ubuntu:
# $ sudo cp mnt-mymountpoint.mount /lib/systemd/system/
# $ sudo systemctl enable mnt-mymountpoint.mount
# $ sudo systemctl start mnt-mymountpoint.mount
# On Fedora:
# $ sudo cp mnt-mymountpoint.mount /etc/systemd/system
# $ sudo systemctl enable mnt-mymountpoint.mount
# $ sudo systemctl start mnt-mymountpoint.mount

[Unit]
Description=Mount my remote filesystem over sshfs with fuse

[Install]
WantedBy=multi-user.target

[Mount]
What={{ USER }}@{{ HOST }}:{{ REMOTE DIR }}
Where={{ MOUNTPOINT like /mnt/mymountdir }}
Type=fuse.sshfs
# I recommend using your SSH key (no password authentication) with the following options so that you don't have to mount every time you boot
Options=_netdev,allow_other,IdentityFile=/home/{{ MY LOCAL USER WITH SSH KEY IN ITS HOME DIRECTORY }}/.ssh/id_rsa,reconnect,x-systemd.automount,uid=1000,gid=1000
# Change the uid and gid to your own, according to the output of `cat /etc/group`
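A quicker way to find the right uid= and gid= values than reading /etc/group is id(1), assuming the mounted files should be owned by your login user:

```shell
# Print the numeric uid and gid of the current user;
# plug these into the uid= and gid= mount options above.
id -u
id -g
```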
Zelly's file worked perfectly for me 👍
Also worked nicely for me on Ubuntu 20.04 LTS. I added some After= directives to ensure my network stack was up first. This may not be necessary, but I'll add it here in case it's useful for anyone else (here tailscaled is the Tailscale daemon).

[Unit]
After=network.target
After=tailscaled.service
I believe the name of this file must match the mountpoint, with the slashes replaced by dashes. For example, to mount at Where=/mnt/mymountdir, the file must be named mnt-mymountdir.mount.
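The naming rule above can be checked without memorizing it. For simple paths the unit name is the path minus the leading slash, with the remaining slashes turned into dashes; a sketch, assuming a path with no special characters (for anything unusual, systemd-escape -p --suffix=mount computes it authoritatively):

```shell
# Derive the mount unit file name from a mountpoint path.
# Simple substitution: drop the leading slash, turn / into -.
mountpoint=/mnt/mymountdir
unit="$(printf '%s' "${mountpoint#/}" | tr / -).mount"
echo "$unit"   # mnt-mymountdir.mount
```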
Also, Type=sshfs worked for me.
Any idea how to make it show in the file explorer as a device?
mount[4108]: read: Connection reset by peer
But when doing sshfs manually it works?
Same issue. Surprisingly hard to find anyone talking about it.
When using sshfs from the command line, everything works exactly as expected, but the same options fail as a systemd mount unit.
I fixed this. The error happens because root needs to accept the server's host key the first time; just SSH to the server once as root (e.g. with sudo).
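An alternative to that one-time manual SSH session: sshfs passes unknown options through to ssh, so adding StrictHostKeyChecking=accept-new (OpenSSH 7.6 or later) lets the first connection record the host key into root's known_hosts automatically. A sketch merging it into the gist's option string; the IdentityFile path here is an assumption:

```
[Mount]
Options=_netdev,allow_other,reconnect,IdentityFile=/root/.ssh/id_rsa,StrictHostKeyChecking=accept-new
```

Note that accept-new only trusts keys of hosts not yet in known_hosts; a changed key for a known host still fails, which is the behavior you want.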
You're an absolute lifesaver @Kreijstal !
I am wondering though, are there serious security issues with SSHing to a server using sudo? I can imagine that terminal control codes injected into a .bash_logout file or some such might be a potential hazard, though if they were, that would be a fairly serious risk for any account. It's also kind of moot here, given that, as I understand it, no shell session is spawned when the mount runs via systemd.mount.
Either way, thanks for saving the few strands of hair I haven't already pulled out in the process of getting this all to work, it's been a surprisingly bumpy ride.
EDIT: Welp, nevermind, it completely stopped working again.
rip, do the systemd logs show the same error?
No idea why it worked before or what caused it to stop working. However, when I now mount manually it asks for a password, so it seems something was wrong with the identity file I was pointing it to.
So I generated a new key for root, and everything appears to be working fine again. No doubt posting this comment will mean that it immediately breaks, however.
Assuming it doesn't, thanks again for your help. Here's the systemd mount unit for anybody interested. It lets you use the same unit as both system and user, but mount as user, allowing you to trigger it with an automount unit (which only the system session can do), assuming an identically named key file is in both /root/.ssh/ and $HOME/.ssh.
The "{ VAR }" syntax marks values for you to fill in manually:
[Unit]
Description=Mount { SERVER_NAME } server over SSHFS
[Install]
WantedBy=remote-fs.target
WantedBy=multi-user.target
[Mount]
What={ REMOTE_USER }@{ REMOTE_HOST }:{ REMOTE_PATH }
Where={ LOCAL_PATH }
Type=fuse.sshfs
Options=_netdev,reconnect,allow_other,uid={ LOCAL_USER_ID },gid={ LOCAL_USER_GID },ServerAliveInterval=60,ServerAliveCountMax=6,rw,nosuid,default_permissions,idmap=user,follow_symlinks,IdentityFile=%h/.ssh/{ KEY_FILENAME }
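Since the unit above is meant to be triggered by an automount unit, here is a sketch of a matching one, mirroring the { VAR } placeholders; the file must carry the same escaped mountpoint name as the mount unit, but with the .automount suffix (TimeoutIdleSec is optional and unmounts after idle time):

```
[Unit]
Description=Automount { SERVER_NAME } server over SSHFS

[Automount]
Where={ LOCAL_PATH }
TimeoutIdleSec=600

[Install]
WantedBy=remote-fs.target
```

Enable and start the .automount unit instead of the .mount unit; systemd then mounts the filesystem on first access to { LOCAL_PATH }.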
EDIT: I should mention that one problem with this approach is that you cannot unmount the path as a user without creating a bunch of ancillary files for each possible method of unmounting (e.g., the umount command vs. D-Bus, etc.). So I'm working on seeing if I can find a way around that, especially given that some file managers such as Dolphin will hang if you enter even the parent directory of the chosen mountpoint, presumably because they preemptively fetch the file listing for their treeview expansion.
On a minimal Debian 12 installation that uses the ifupdown package, it was a race, with the mount units losing. Even after adding After=network-online.target, the mount unit was run before DHCP had returned an address, resulting in the terse "read: Connection reset by peer" in the journal. Disabling the interface in /etc/network/interfaces and enabling systemd-networkd.service resolved the race.
While this was in a QEMU virtual machine, it may be useful for any Debian installation that uses the ifupdown package.
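For anyone making the same switch, a minimal systemd-networkd configuration of the kind that resolves this race; the interface name enp1s0 is an assumption (check yours with ip link):

```
# /etc/systemd/network/20-wired.network  (enp1s0 is an assumed interface name)
[Match]
Name=enp1s0

[Network]
DHCP=yes
```

After removing the interface's stanza from /etc/network/interfaces, run "sudo systemctl enable --now systemd-networkd.service systemd-networkd-wait-online.service" so that network-online.target genuinely waits for an address before the mount unit starts.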
I couldn't get this to work, but this guide did it for me: https://blog.tomecek.net/post/automount-with-systemd/