sshfs without sshfs. I'm lazy.

Published: 2018-12-05

I've got several systems to administer at work. It's common that I have to access them through ssh or copy a file from one system to another. I love the Solaris automounter that's configured on /net by default: if you access /net/server1/nfsshare2/path/to/file you get exactly what you expect, the file. Now the automounter isn't exactly rocket science and it's easy to set up on a Linux system, but I don't want to access everything through NFS and deal with all the security issues that come with that. Luckily, #SSHFS is part of pretty much every Linux distribution, and the server side just needs sftp, which is enabled by default on pretty much every system: it's just a subsystem of the ssh daemon, and the ssh daemon handles authentication.
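For reference, the bit that makes this work on the server is the sftp subsystem line in sshd_config, which virtually every distribution ships enabled; the exact helper path varies by distribution, and some setups use OpenSSH's built-in internal-sftp instead:

# /etc/ssh/sshd_config -- usually already there
Subsystem sftp /usr/lib/openssh/sftp-server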

So all you need on the server side is an ssh daemon and you need access to login through ssh, preferably through public keys. On the client side all you need is sshfs and the matching ssh client.

$ sshfs myserver:/ /tmp/tmp.FFqyLYuk1X

$ ls -l /tmp/tmp.FFqyLYuk1X/etc/passwd
-rw-r--r-- 1 root root 3679  Dec  5 03:00 /tmp/tmp.FFqyLYuk1X/etc/passwd

This way I can easily access files on the server with local commands on my system.
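When I'm done, the mount can be released again with fusermount from the FUSE package:

$ fusermount -u /tmp/tmp.FFqyLYuk1X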

Having to mount every sshfs manually gets tiresome pretty fast. Remember /net on Solaris? The system can easily figure out which server I want to access, so why should I have to mount everything myself?

The automounter is the obvious service for that. There's one problem though: automountd runs as root, while the mounts have to run as my own user to be able to access my ssh-agent. I'm sure there are some possible tricks, but hardcoding a root daemon to remotely access a user's ssh-agent just sounds wrong. Also, sshfs isn't a kernel filesystem; it's a Filesystem in Userspace (FUSE), and that doesn't seem to work with the kernel automounter. Luckily there's #afuse, which runs as a user and can mount FUSE filesystems.
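Stripped of any packaging, the afuse invocation I ended up with looks roughly like this (assuming afuse, sshfs and fusermount are on the PATH; %r expands to the name of the directory being accessed, %m to the mount point afuse manages):

$ mkdir -p ~/sshfs
$ afuse -o mount_template='sshfs %r:/ %m' -o unmount_template='fusermount -u -z %m' ~/sshfs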

I would have liked to have this as a systemd user service, but I couldn't figure out how to get sshfs to use my ssh-agent from within the unit, meaning that all connections would fail. If you have any idea how to do that... please contact me.
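For the record, the kind of unit I was aiming for would have looked something like the sketch below (the service name and the /run/current-system path are simply what it would come out as on NixOS with the wrapper from the next section); the sticking point is that the unit's environment has no usable SSH_AUTH_SOCK:

# ~/.config/systemd/user/afuse-sshfs.service -- a sketch, not working for me:
# sshfs can't reach the ssh-agent from here, so every mount fails
[Unit]
Description=Automount sshfs via afuse

[Service]
ExecStart=/run/current-system/sw/bin/afuse-sshfs

[Install]
WantedBy=default.target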

Lacking a decent user service managed by systemd, I wrote a simple wrapper that takes care of running afuse with the necessary options, so my environment.systemPackages in configuration.nix for #NixOS looks like this:

environment.systemPackages = with pkgs; [
	( writeShellScriptBin "afuse-sshfs" ''
		# Make sure the automount root exists.
		mkdir -p "$HOME/sshfs"
		# Let afuse mount %r (the directory name, i.e. the remote) on demand
		# and lazily unmount it again via fusermount.
		exec ${afuse}/bin/afuse -o mount_template='${sshfsFuse}/bin/sshfs %r:/ %m' -o unmount_template='fusermount -u -z %m' "$HOME/sshfs"
	'' )
];

Note: I've discovered that $HOME/sshfs is probably not the best directory for this; you may want to change that, e.g. to /sshfs on a single-user system, or whatever else you fancy.

So now I just have to run afuse-sshfs after login, which I have delegated to the XFCE startup procedure.
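In XFCE that boils down to an autostart entry, i.e. a small desktop file (the file name is arbitrary):

# ~/.config/autostart/afuse-sshfs.desktop
[Desktop Entry]
Type=Application
Name=afuse-sshfs
Exec=afuse-sshfs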

The result:

$ ls -l sshfs
total 0

$ ls -l sshfs/myserver/etc/passwd
-rw-r--r-- 1 root root 3679  Dec  5 03:00 sshfs/myserver/etc/passwd

$ df -h | tail -1
myserver:/    125G     11G  108G    9% /tmp/afuse-MHdCcY/myserver

The only issue I have with this solution is that it doesn't seem to unmount the filesystems automatically after some idle time, but as I regularly shut down my system after each work day, that doesn't bother me too much.

One more nice feature: instead of accessing e.g. sshfs/myserver/etc/passwd, I can access sshfs/root@myserver/etc/passwd to force sshfs to log in as root. Basically the directory name accepts everything that a plain sftp would accept, so aliases I've added in ~/.ssh/config work just fine.
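For example, with a hypothetical alias like this in ~/.ssh/config:

Host web
    HostName server1.example.com
    User admin

a plain ls sshfs/web/etc transparently logs in as admin on server1.example.com.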