# NFS3 sharing multiple filesystems



## Michael Bushey (Jan 14, 2017)

I have a ZFS filesystem (project) that contains numerous child ZFS filesystems (project001, project002 ... project120). How can I share this over NFS3 as just a single share? On Linux there is a crossmnt option that does this. I need separate filesystems so that I can snapshot each one independently.


----------



## Michael Bushey (Jan 16, 2017)

Is my only option to do this to switch the OS over to Linux? I'd rather keep FreeBSD as I like having some OS diversity.


----------



## aribi (Jan 16, 2017)

An easy solution is to use automount with the /net map. AFAIK the entry is already in the default /etc/auto_master, so you only have to set `autofs_enable="YES"` in /etc/rc.conf.
It works out of the box, provided the mountpoints for your projects are in the root of the server:

```
zfs set mountpoint=/project001 mypool/project001
zfs set sharenfs=on mypool/project001
service mountd onerestart
```
Then access the filesystems on your client at /net/myserver/project001.
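On the client side, enabling the automounter is just as short (a sketch; `myserver` is a placeholder for your server's hostname):

```
# /etc/rc.conf on the FreeBSD client
autofs_enable="YES"

# then start the autofs daemons (or reboot)
service automount start
service automountd start
service autounmountd start

# filesystems mount on demand as you traverse /net
ls /net/myserver/project001
```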


----------



## SirDice (Jan 17, 2017)

Michael Bushey said:


> Is my only option to do this to switch the OS over to Linux?


It's a limitation of NFS, so you're going to have the exact same "problem".


----------



## Michael Bushey (Jan 18, 2017)

SirDice said:


> It's a limitation of NFS, so you're going to have the exact same "problem".


No, you mean this is a limitation of NFS in FreeBSD. I just verified that on Linux, as soon as I add the "crossmnt" option to the mypool/project line in /etc/exports, all the project directories [ZFS filesystems] are visible and readable/writable on the client with a single mount command. As soon as I access one of the project folders, that folder/filesystem shows up in mount: after `ls /mypool/projects/project001` I see server:/mypool/projects/project001 in the output of mount.
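For reference, the server-side change is a single line in /etc/exports; something like this (the subnet is only an example):

```
# /etc/exports on the Linux server
/mypool/projects  192.168.0.0/24(rw,crossmnt,no_subtree_check)
```

After editing, re-export with `exportfs -ra`.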

I believe Solaris also has the crossmnt option.


----------



## ANOKNUSA (Jan 18, 2017)

Michael Bushey said:


> No, you mean this is a limitation of NFS in FreeBSD. I just verified that, on Linux, as soon as I add the "crossmnt" option to the mypool/project line in /etc/exports that all the project directories [zfs filesystems] are visible and can read/write properly on the client with a single mount command.



Wrong. You asked in your original post for the ability to export an entire tree of filesystems "as a single share." That's impossible, regardless of what operating system you're running. The `crossmnt` option allows an NFS client to see exported shares that are nested within each other, and automatically mount them while traversing the filesystem. That's a feature of the Linux NFS client. On FreeBSD you would achieve this by using autofs(5). The mounting would be different by default---everything would be mapped under /net---but the general effect is the same. You could create a custom map to set custom mounts.
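As a rough sketch of such a custom map (the server name and paths here are hypothetical; see auto_master(5) for the exact syntax):

```
# /etc/auto_master -- add a custom mount prefix
/mnt/projects  /etc/auto_projects

# /etc/auto_projects -- wildcard entry; & is replaced by the matched key
*  -nfsv3,rw  myserver:/&
```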


----------



## Michael Bushey (Jan 18, 2017)

Thanks to everyone for the responses. I get that NFS3 uses inodes, so that's why multiple filesystems cannot be shared directly as one. It looks like the Linux way of dealing with this is for the server to automatically export child filesystems via the crossmnt option in exports, with a client that supports this. The FreeBSD way appears to be listing every filesystem in exports (or in /etc/zfs/exports, managed by ZFS) and having the client handle the multiple mounts via autofs.

What I didn't realize was important when I first posted this issue is that several of the nfs3 clients are arm linux boxes. I'll have to research what it would take to get autofs working on them.
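If it helps anyone else, the Linux autofs setup I'm planning to try on those clients looks roughly like this (paths and server name are guesses at this point):

```
# /etc/auto.master on the Linux client
/mnt/projects  /etc/auto.projects

# /etc/auto.projects -- wildcard map; & expands to the matched key
*  -fstype=nfs,vers=3,rw  myserver:/mypool/projects/&
```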

Also I'm using Docker with docker-volume-netshare to access NFS so that's probably another whole can of worms.


----------



## SirDice (Jan 18, 2017)

Here is something similar for CentOS. I'm positive other Linux distributions support it as well.

https://www.centos.org/docs/5/html/5.1/Deployment_Guide/s2-nfs-config-autofs.html


----------



## aribi (Jan 18, 2017)

Michael Bushey said:


> It looks like the Linux way of dealing with this is for the server to automatically export child filesystems by using the crossmnt option in exports and a client that supports this.


Technically, I think this could be explained further. The crossmnt option is a client helper. All it does on the server is generate some extra entries for the client when it requests the list of shares via RPC. For this work-around to function "seamlessly", the client needs to mount subshares "on the fly" when necessary: the client examines the server's list of exports and selects extra entries to mount on top of the already-mounted base NFS mount.
As such, it violates its own manpage (https://linux.die.net/man/8/mount.nfs):

```
mount.nfs4 is used for mounting NFSv4 file system, ....
```
Note the singular "file system". The comparable FreeBSD command mount_nfs(8) is extremely clear about what it does; it is a frontend to the nmount system call:

```
The mount_nfs utility calls the nmount(2) system call to prepare and graft a remote NFS file system
```
Is this a problem? Well, it can be. These programs are used internally by system scripts and the like, so side effects should be avoided: mount does a single mount, period. A few examples:

- umount of such a tree needs to be done in reverse order, or it might fail (device busy).
- Symlinks that cross filesystem boundaries might present surprises.
- The hidden submounts use the same mount options as the base mount: think of /allprojects, for which you might choose NFS3 anonymous access while some subdirectories require NFSv4 credentials.

In short, integrating automatic subshare mounting into the client's mount.nfs will work only in the simplest of situations and blurs what is actually going on. Confronted with multiple filesystems (aka NFS shares), I say deal with them in the part of the software stack that is designed for this: autofs.
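To illustrate the unmount-ordering point with a hypothetical mount tree:

```
# submounts created on the fly must go first, innermost out
umount /mnt/projects/project001
umount /mnt/projects/project002
# only then will the base mount release cleanly
umount /mnt/projects
```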

BTW enjoyed your


----------



## Michael Bushey (Jan 31, 2017)

aribi said:


> BTW enjoyed your



If you ever get to Los Angeles I'll buy you a real beer. This offer applies to everyone who responded to this post. Thanks for all the help/insight!


----------

