
Share your files in your LAN with NFS

This post focuses on setting up an NFS server for NextCloudPi, but it attempts to be a general introduction to NFS, as most things apply to any setup.

The Network File System, or NFS, is an ancient remote file system originally developed by Sun Microsystems in 1984. It is a simple way to share a folder across a local network.

The NFS server shares folders on one machine that can then be mounted with the mount command on another machine.

Features

Here are the highlights of its features:

  • The NFS server runs in the Linux kernel.
  • It is mostly recommended for Linux clients.
  • It is not secure. Communications are not encrypted and there is no authentication.
  • It is lightweight.
  • It runs over UDP, although TCP operation is also possible.

The fact that it uses UDP means that it has less overhead.

Also, given that UDP is a stateless protocol, any operation that is interrupted by network or server failures will just resume from where it was as soon as the service is back up. More on that later.

TCP, on the other hand, needs to re-establish the connection, so the mount needs to start over, and processes using the filesystem will have to be force-killed.

The fact that it runs within the Linux kernel has advantages and problems. The main benefit is that it is way more efficient. Anything that runs in userspace needs to go through an extra copy operation to reach userspace memory buffers, so we save that operation by running in the kernel. It is the same concept behind kHTTPd.

The main implementation of the client runs on Linux. Even though there are ways to mount NFS from Windows and Mac, a Samba server is probably a better fit for a network with mixed systems.

Finally, the fact that it is not encrypted means that it consumes less CPU, but it also makes it unfit for anything other than a trusted LAN.

For these reasons, it is an interesting option for low-end systems such as ARM devices (CHIP, RPi…) on a local network.

Installation

Just install the appropriate package for your distribution, normally nfs-kernel-server.
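On a Debian-based system, for instance:

    sudo apt-get install nfs-kernel-server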
In the case of NextCloudPi, just update to the latest version with
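    # assuming the standard NextCloudPi updater script
    sudo ncp-update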
As usual, the generic installer can be used on any running Debian-based server to install through SSH, or to install and configure it on a Raspbian image through QEMU.

Default configuration (NextCloudPi only)

In the specific case of NextCloudPi, we usually want to share the data folder on the local network, so select NFS in the NextCloudPi configuration tool (nextcloudpi-config) and review the following parameters:

  • DIR is the directory to share. The default will be /var/www/nextcloud/data/admin/files for the user admin on a fresh installation. If you have moved the data folder to an external drive, then it will be something like /media/USBdrive/ncdata/admin/files.
  • SUBNET restricts who can access the share. If your local IP address starts with 192.168.0.X, do not change this; otherwise, adjust it to your case.
  • USER is explained in the next section. You probably don’t need to change this.
  • GROUP is explained in the next section. You probably don’t need to change this.

If you would like a different setup, read the next section.

Manual configuration

The configuration is quite simple. It is located in the file /etc/exports. This is the default configuration for sharing your NextCloud files on your local network:
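    # /etc/exports (reconstructed from the fields described below; the sync and
    # no_subtree_check options are common defaults, assumed here)
    /media/USBdrive/ncdata/admin/files 192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=33,anongid=33)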

  • /media/USBdrive/ncdata/admin/files is the folder to be shared.
  • 192.168.1.0/24 indicates that only computers from that local LAN subnetwork can access the share. This can be made more restrictive by, for example, only allowing a specific IP.
  • rw means read-write permissions, but you might be also interested in read-only.
  • 33 is the id of the typical HTTP user, www-data. See the explanation that follows.
  • The all_squash option is related to user mapping between the two computers.

What permissions do we have on the remote filesystem? The way NFS deals with this is by mapping users between host and client. This means that if we have the user ownyourbits with id 1005, by default we will be identified as, and have the permissions of, user 1005 on the remote machine. This user might not even exist on that system.

Typically, the first non-root user on a Linux system receives the id 1000. We can see that, if many different users/computers had to access a common NFS share with their own identities, they would need to change their ids on their machines to be different from each other, and then replicate them on the NFS server. Not very nice.

Also, there is the issue of security. If we mapped root (id 0) from any computer to the root user on the NFS server, we would have an obvious security hole, as everyone would be root.

The way NFS deals with the root problem is by squashing root to the anonymous user. This is typically a user with no permissions for anything; nobody is another such user.

root squashing means that the root user (id 0) will be mapped not to the root user on the NFS server, but to the unprivileged anonymous user with the id set by the anonuid and anongid parameters. This is the default.

The options for squashing are

  • root_squash: this is the default and maps root (id 0) to anonuid
  • no_root_squash: this disables squashing. root will still be root on the NFS server.
  • all_squash: this squashes all users to the anonymous user with id anonuid.

So, in the example configuration above, we use all_squash to map all users to the id of the HTTP server. This gives us the same restricted permissions as the HTTP server, and the files that we create there will be modifiable by NextCloud.

We are doing this because we are sharing the data folder for NextCloud. If we wanted to share a whole hard drive, it would probably be more interesting to do the following
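A sketch, using an illustrative path and pointing the anonymous ids at the first regular user:

    /media/USBdrive 192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=1000,anongid=1000)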

This maps any user to the main unprivileged user of the NFS server (typically id 1000).

When you are playing around, you can reload the configuration with
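    # re-export everything in /etc/exports
    sudo exportfs -ra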

Usage

Manual mount

After installing the appropriate packages for your distribution, mount the remote folder with
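    # create the mountpoint first if needed: sudo mkdir -p /mnt/mycloud
    sudo mount -t nfs 192.168.0.130:/media/USBdrive/ncdata/admin/files /mnt/mycloud/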

Where

  • 192.168.0.130 is the IP of your NFS server
  • /media/USBdrive/ncdata/admin/files is the remote folder to mount
  • /mnt/mycloud/ is the mountpoint in the local computer.

After this command, your files will be readily accessible as if they were in any other local folder.

Through fstab

You can automount on boot from fstab with a line such as
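    # <remote export>                                 <mountpoint>  <type> <options> <dump> <pass>
    192.168.0.130:/media/USBdrive/ncdata/admin/files  /mnt/mycloud  nfs    defaults  0      0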

Through autofs

Alternatively, we can mount only on demand with autofs.

This requires maintaining yet another service, the autofs daemon, and it will only mount the NFS share the first time we try to access it.

The added benefit is that it will not delay or impede our boot by trying to access the NFS server, even if it is not up.
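A minimal sketch, assuming the autofs package is installed; the map file name /etc/auto.nfs and the 60-second timeout are arbitrary choices for illustration:

    # /etc/auto.master: hand the /mnt directory over to autofs
    /mnt    /etc/auto.nfs    --timeout=60

    # /etc/auto.nfs: mount the share at /mnt/mycloud on first access
    mycloud    -fstype=nfs    192.168.0.130:/media/USBdrive/ncdata/admin/files

With this in place, the share appears at /mnt/mycloud the first time something accesses it, and it is unmounted again after the idle timeout. Note that autofs takes over the whole /mnt directory in this setup, so anything else mounted there should live elsewhere.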

Problems

My server went down, and my system is frozen

As mentioned before, if there is a connectivity problem, your filesystem will be stuck. Very badly so. Any program accessing a file on this filesystem will be unkillable, even with SIGKILL, and it will appear with the dreaded D status in ps.

D means uninterruptible sleep: the process is stuck in I/O, and there is no way to kill it, not even with SIGKILL. If that happens, the best way to gracefully recover is to bring the server or the connectivity back up again, and the I/O will resume, as long as we are using UDP.
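For instance, to spot the processes stuck in this state:

    # list processes whose status starts with D (uninterruptible sleep)
    ps -eo pid,stat,comm | awk '$2 ~ /^D/'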

The other way is to do a lazy unmount of the NFS filesystem with
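    # -l (lazy) detaches the mount now and cleans up once it is no longer busy
    sudo umount -l /mnt/mycloud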

This will not be graceful, as the memory maps will suddenly disappear and processes will segfault.

NFS is not starting

The NFS server relies on RPCbind (also known as portmapper). RPCbind is a service that runs on TCP and UDP port 111 and provides a mapping from services to ports.

In the case of NFS on Linux, whenever the mount command is issued, the client asks the RPCbind server on port 111 which port the NFS server is listening on, and then connects to NFS through that port.

This allows servers to listen on any port and still be correctly discovered by clients.
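We can inspect this mapping ourselves with rpcinfo; for example, against the server from the previous examples:

    # list the RPC services registered on the server, including nfs and mountd
    rpcinfo -p 192.168.0.130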

The downside of this approach is the added complexity. We have to enable the RPCbind service and make sure it starts before the NFS server on boot.

See the systemd configuration in the following code.
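    # unit names as found on Debian; they may differ on other distributions
    sudo systemctl enable rpcbind
    sudo systemctl enable nfs-kernel-server
    # on Debian, nfs-kernel-server declares a dependency on rpcbind,
    # so systemd will start them in the correct order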

Also, NFS will not start if /etc/exports does not exist.
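If it is missing, an empty file is enough:

    sudo touch /etc/exports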

Code

github

Author: nachoparker

Humbly sharing things that I find useful [ github dockerhub ]
