03 March 2011

Configuring NFS in FreeBSD

Having written up Solaris and Linux, it's time to take a look at
configuring NFS in FreeBSD.  Briefly, NFS (network file system) provides
access to remote filesystems which appear similar to local resources
on client hosts.  The following focuses on simple NFS server and client
configuration in FreeBSD (see note 1).  Our host details are:
        HOST (server):          beastie (10.0.23.181)
        HOST (client):          berkeley (10.0.22.221)
        PROMPT (root):          HOST [0]
        PROMPT (user):          troy@berkeley [0]
        USER UID:GID:           1000:1000 (on both server and client)
        OS:                     FreeBSD 8.1
        NOTE:                   The following should apply equally to
                                previous versions of FreeBSD.
Starting off with our server side, NFS requires at least 3 services
running (for sane usage), though possibly up to 5, depending on NFS
version and features (see note 2):
        rpcbind (/etc/rc.d/rpcbind)             (required)
        statd (/etc/rc.d/statd)                 (optional)
        lockd (/etc/rc.d/lockd)                 (optional)
        mountd (/etc/rc.d/[nfsd|mountd])        (required)
        nfsd (/etc/rc.d/nfsd)                   (required)
        gssd (/etc/rc.d/gssd)                   (NFSv4, optional)
        nfsuserd (/etc/rc.d/nfsuserd)           (NFSv4, required)
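For completeness, the NFSv4-related daemons above need their own
'rc.conf' knobs as well; I believe (untested here, and possibly version
dependent) the entries look something like:

```
nfsv4_server_enable="YES"       # experimental NFSv4 server support
nfsuserd_enable="YES"           # uid/gid <-> name mapping daemon
gssd_enable="YES"               # only needed for Kerberized mounts
```

We stick to NFSv3 below, so none of these are used in this write-up.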
Below, we see that our services aren't running (rpcbind, statd, lockd,
mountd, nfsd), update '/etc/rc.conf' for them, and finally start them:
        beastie [0] /bin/ps ajwwx | /usr/bin/egrep 'n[f]sd|m[o]untd|rp[c]bind|[l]ockd|[s]tatd'
        beastie [0] /usr/bin/egrep -v '^$|^#' /etc/rc.conf
        defaultrouter="127.0.0.1"
        hostname="beastie"
        ifconfig_em0="inet 10.0.23.181 netmask 255.255.254.0"
        ifconfig_em1="inet 192.168.56.35  netmask 255.255.255.0"
        inetd_enable="YES"
        sshd_enable="YES"
        sendmail_enable="NO"
        sendmail_submit_enable="NO"
        rpcbind_enable="YES"                    <====
        rpc_lockd_enable="YES"                  <====
        rpc_statd_enable="YES"                  <====
        nfs_server_enable="YES"                 <====
        mountd_flags="-r"                       <====
        beastie [0] for i in rpcbind statd lockd nfsd ; do /etc/rc.d/${i} start ; sleep 4 ; done
        Starting rpcbind.
        Starting statd.
        Starting lockd.
        Starting mountd.
        Starting nfsd.
In the above, the lines added to 'rc.conf' have been identified with
"<====".  Also, in starting the various processes, I dropped a sleep
of 4 seconds between each to ensure dependency services have a chance
to start up.  As an aside, you can start each service without updating
'rc.conf' by using "forcestart" with the rc scripts:
                beastie [0] mountd_flags="-r" ; export mountd_flags
                beastie [0] for i in rpcbind statd lockd nfsd ; do /etc/rc.d/${i} forcestart ;
                > sleep 4 ; done
                Starting rpcbind.
                Starting statd.
                Starting lockd.
                Starting mountd.
                Starting nfsd.
A final note before continuing, 'mountd_flags="-r"' is only necessary
if you intend to allow clients to mount individual files as well as
directories.  After we've run our init (rc) scripts, we check the status
of the processes via 'ps' and 'RC_SCRIPT status':
        beastie [0] /bin/ps ajwwx | /usr/bin/egrep 'n[f]sd|m[o]untd|rp[c]bind|[l]ockd|[s]tatd'
        root  2159     1  2159  2159    0 Ss    ??    0:00.12 /usr/sbin/rpcbind
        root  2169     1  2169  2169    0 Ss    ??    0:00.01 /usr/sbin/rpc.statd
        root  2179     1  2179  2179    0 Ss    ??    0:00.02 /usr/sbin/rpc.lockd
        root  2205     1  2205  2205    0 Is    ??    0:00.01 /usr/sbin/mountd -r
        root  2207     1  2207  2207    0 Is    ??    0:00.04 nfsd: master (nfsd)
        root  2209  2207  2207  2207    0 S     ??    0:00.06 nfsd: server (nfsd)
        beastie [0] for i in rpcbind statd lockd nfsd ; do /etc/rc.d/${i} status ; done
        rpcbind is running as pid 2159.
        statd is running as pid 2169.
        lockd is running as pid 2179.
        nfsd is running as pid 2207 2209.
Now that everything is running, we can set up some filesystems to export
(share) by adding them to '/etc/exports'.  We'll add shares for "/home"
and "/usr/sfw" (see note 3):
        beastie [0] /bin/cat /etc/exports
        /home -network=10.0.22.0/23
        /usr/sfw -maproot=root 10.0.23.191
        /usr/sfw beastie-int
        /usr/sfw -ro -network=10.0.22.0 -mask 255.255.254.0
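As an aside, exports(5) also supports an '-alldirs' flag (not used
above), which lets clients mount any subdirectory of the export,
provided the exported directory is a filesystem mount point; a
hypothetical entry could read:

```
/home -alldirs -network=10.0.22.0/23
```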
We will need 'mountd' to reread '/etc/exports' after any changes, so
either perform a 'kill -HUP' on its PID or pass "onereload" to its
rc script:
        beastie [0] /etc/rc.d/mountd onereload
        beastie [0] /usr/bin/showmount -e
        Exports list on localhost:
        /usr/sfw                           10.0.23.191 beastie-int 10.0.22.0
        /home                              10.0.22.0
After the reload, we can use 'showmount' to review what FS are currently
shared, as seen above.  With the server configured, we can work on the
client host.  An NFS client requires at least 1 service running (for
sane usage), though possibly 4 depending on NFS version and features
(see note 2):
        rpcbind (/etc/rc.d/rpcbind)             (required)
        statd (/etc/rc.d/statd)                 (optional)
        lockd (/etc/rc.d/lockd)                 (optional)
        nfsiod (/etc/rc.d/nfsclient)            (optional)
        nfscbd (/etc/rc.d/nfscbd)               (NFSv4, required)
        nfsuserd (/etc/rc.d/nfsuserd)           (NFSv4, required)
Below, we see that the services aren't running, update '/etc/rc.conf'
for their configuration, and run their rc scripts to start them:
        berkeley [0] /bin/ps ajwwx | /usr/bin/egrep '[l]ockd|rp[c]bind|[s]tatd'
        berkeley [0] /usr/bin/egrep -v '^$|^#' /etc/rc.conf
        defaultrouter="127.0.0.1"
        hostname="berkeley"
        ifconfig_em0="inet 10.0.22.221 netmask 255.255.254.0"
        ifconfig_em1="inet 192.168.56.95  netmask 255.255.255.0"
        inetd_enable="YES"
        sshd_enable="YES"
        sendmail_enable="NO"
        sendmail_submit_enable="NO"
        rpcbind_enable="YES"                    <====
        rpc_lockd_enable="YES"                  <====
        rpc_statd_enable="YES"                  <==== 
        nfs_client_enable="YES"                 <====
        berkeley [0] for i in rpcbind statd lockd nfsclient ; do /etc/rc.d/${i} start ; sleep 4 ; done
        Starting rpcbind.
        Starting statd.
        Starting lockd.
        NFS access cache time=60
After starting the processes, we verify their status via 'ps' and check
the mounts offered from the NFS server:
        berkeley [0] /bin/ps ajwwx | /usr/bin/egrep '[l]ockd|rp[c]bind|[s]tatd'
        root  1595     1  1595  1595    0 Is    ??    0:00.03 /usr/sbin/rpcbind
        root  1605     1  1605  1605    0 Is    ??    0:00.00 /usr/sbin/rpc.statd
        root  1615     1  1615  1615    0 Ss    ??    0:00.03 /usr/sbin/rpc.lockd
        berkeley [0] /usr/bin/showmount -e 10.0.23.181
        Exports list on 10.0.23.181:
        /usr/sfw                           10.0.23.191 beastie-int 10.0.22.0
        /home                              10.0.22.0 
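As a sanity check, 'berkeley' (10.0.22.221) should fall inside the
'-network=10.0.22.0/23' specification used for '/home' above.  A quick
sketch of the mask arithmetic in sh (the helper names are mine, not
part of any NFS tooling):

```shell
#!/bin/sh
# Convert a dotted quad to a 32-bit integer.
ip_to_int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# in_net ADDR NETWORK PREFIXLEN -- succeed if ADDR is in NETWORK/PREFIXLEN.
in_net() {
    mask=$(( (0xffffffff << (32 - $3)) & 0xffffffff ))
    [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

in_net 10.0.22.221 10.0.22.0 23 && echo "matched" || echo "no match"
in_net 10.0.24.5   10.0.22.0 23 && echo "matched" || echo "no match"
```

The first check succeeds (10.0.22.0/23 covers 10.0.22.0 through
10.0.23.255, which also happens to include the server itself), while
the second, for an address outside the /23, fails.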
Since we already have a '/home' on 'berkeley', we'll create '/home2'
to mount '/home' from our NFS server (10.0.23.181), mount it, and verify:
        berkeley [0] /bin/mkdir /home2
        berkeley [0] /bin/ls -ld /home2
        drwxr-xr-x  2 root  wheel  512 Mar  3 20:14 /home2
        berkeley [0] /sbin/mount -t nfs -o rw,bg,intr 10.0.23.181:/home /home2
        berkeley [0] /bin/ls -ld /home2                      
        drwxr-xr-x  4 root  wheel  512 Oct  6 22:33 /home2
        berkeley [0] /bin/df -h /home2
        Filesystem           Size    Used   Avail Capacity  Mounted on
        10.0.23.181:/home    7.0G    1.7G    4.7G    27%    /home2
        berkeley [0] /sbin/mount | /usr/bin/grep /home2
        10.0.23.181:/home on /home2 (nfs)
It's notable that the timestamp on '/home2' changes from its original
modification time to the last modification time of '/home' on the
NFS server, after we have mounted the share.  On 'berkeley' as user
'troy', we switch to '/home2/troy' (10.0.23.181:/home/troy) and test
out our access:
        troy@berkeley [0] cd /home2/troy
        troy@berkeley [0] echo "this is my file" >> myfile
        troy@berkeley [0] /bin/cat myfile
        this is my file
        troy@berkeley [0] /bin/ls -l myfile
        -rw-r--r--  1 troy  sysvuser  16 Mar  3 20:30 myfile
        troy@berkeley [0] /bin/rm myfile
        troy@berkeley [0] /bin/ls -l myfile
        ls: myfile: No such file or directory
Excellent, we can access and write to the shared filesystem, as we
would expect.  Now, let's create '/opt/sfw' on 'berkeley' so that we
have a place to mount the exported '/usr/sfw' FS to:
        berkeley [0] /bin/mkdir -p /opt/sfw
        berkeley [0] /sbin/mount -t nfs -o rw,intr 10.0.23.181:/usr/sfw /opt/sfw
        berkeley [0] /sbin/mount | /usr/bin/grep /opt/sfw
        10.0.23.181:/usr/sfw on /opt/sfw (nfs)
        berkeley [0] /bin/df -h /opt/sfw
        Filesystem              Size    Used   Avail Capacity  Mounted on
        10.0.23.181:/usr/sfw    7.0G    1.7G    4.7G    27%    /opt/sfw
        berkeley [0] /bin/ls /opt/sfw
        bin     troy
With our share mounted, again as user 'troy', we try to create another
file (also-mine) on 'berkeley'.  This time, it will be to the read-only
exported FS 10.0.23.181:/usr/sfw (mounted at /opt/sfw):
        troy@berkeley [0] /bin/ls -ld /opt/sfw/troy
        drwxr-xr-x  2 troy  sysvuser  512 Mar  3 20:13 /opt/sfw/troy
        troy@berkeley [0] cd /opt/sfw/troy
        troy@berkeley [0] echo "this is also my file" >> also-mine
        su: cannot create also-mine: Read-only file system
The above is to illustrate that export options (ro) from the NFS
server take precedence over the 'mount' options (rw) used by the client.
After the "Read-only" error, we've unmounted both '/opt/sfw' and '/home2'.
Rather than manually mounting an NFS share each time the host reboots,
I've added an entry to '/etc/fstab' on the last line below for '/home2':
        berkeley [0] /sbin/umount /opt/sfw
        berkeley [0] /sbin/umount /home2
        berkeley [0] /bin/cat /etc/fstab
        # Device                Mountpoint      FStype  Options         Dump    Pass#
        /dev/da0s1b             none            swap    sw              0       0
        /dev/da0s1a             /               ufs     rw              1       1
        /dev/da0s1d             /var            ufs     rw              2       2
        /dev/acd0               /cdrom          cd9660  ro,noauto       0       0
        10.0.23.181:/home       /home2          nfs     rw,bg,intr      0       0
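The 'bg' option above keeps a boot from hanging if the server is
unreachable (the mount retries in the background).  For shares that
should only be mounted on demand, a 'noauto' entry (hypothetical, not
used here) could be added instead:

```
10.0.23.181:/usr/sfw    /opt/sfw        nfs     rw,intr,noauto  0       0
```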
Assuming we no longer need any of our configured shares, after our
clients have unmounted them, we can stop sharing our exported filesystems.
To do so, simply comment out their entries in '/etc/exports' and reload
'mountd' as seen earlier.  The 'showmount' below verifies all exported
FS have been unshared:
        beastie [0] /usr/bin/showmount -e
        Exports list on localhost:

NOTES

Note 1: The details provided herein do not take into account any potential
    security issues and assume access via a local LAN segment.

Note 2:
    server
        rpcbind         manages RPC connections, converts RPC program
                        numbers into universal addresses
        statd           tracks NFS file locks
        lockd           manages file locks
        mountd          services NFS mount requests from client hosts
        nfsd            your friendly neighborhood NFS server
        gssd            provides support for GSS contexts used with NFSv4
        nfsuserd        NFSv4 [u|g]ID <-> name mapping daemon
    client
        rpcbind         manages RPC connections, converts RPC program
                        numbers into universal addresses
        statd           tracks NFS file locks
        lockd           manages file locks
        nfsiod          local NFS asynchronous I/O server
        nfscbd          NFSv4 callback daemon
        nfsuserd        NFSv4 [u|g]ID <-> name mapping daemon
Note 3: The breakdown of 'exports' entries reads:
    /home -network=10.0.22.0/23
    /usr/sfw -maproot=root 10.0.23.191
    /usr/sfw beastie-int
    /usr/sfw -ro -network=10.0.22.0 -mask 255.255.254.0
        format is 'directory [options] host'; directories requiring alternative options
        for specific hosts must be specified on a subsequent line

        (/home|/usr/sfw)                        directory to be shared (exported)
        -network=10.0.22.0/23                   network specification using CIDR notation
        -maproot=root                           normally the root user on a client has
                                                EUID:EGID -2:-2 relevant to NFS shares,
                                                however, '-maproot' allows us to set the
                                                root user's EUID:EGID, in this case, to
                                                root, retaining root access as with other
                                                local FS
        (10.0.23.191|beastie-int)               individual host specification; the entry
                                                for beastie-int allows 'read write' access
                                                only for beastie-int
        -ro                                     export FS read only (default is read write)
        -network=10.0.22.0 -mask 255.255.254.0  alternate means of network specification
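The '/23' prefix and the '-mask 255.255.254.0' pair above describe the
same network.  To convince yourself the two notations agree, a small sh
function (my own helper, nothing NFS specific) can expand a prefix
length into a dotted netmask:

```shell
#!/bin/sh
# prefix_to_mask PREFIXLEN -- print the dotted-quad netmask for a CIDR prefix.
prefix_to_mask() {
    bits=$1
    mask=
    for i in 1 2 3 4; do
        if [ "$bits" -ge 8 ]; then
            octet=255                       # full octet covered by the prefix
            bits=$((bits - 8))
        else
            octet=$((256 - (256 >> bits)))  # partial octet, e.g. 7 bits -> 254
            bits=0
        fi
        mask="${mask}${mask:+.}${octet}"
    done
    echo "$mask"
}

prefix_to_mask 23   # prints 255.255.254.0
prefix_to_mask 24   # prints 255.255.255.0
```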
see also:
    Configuring NFS in Solaris
    Configuring NFS in Linux
    Configuring NFS in SmartOS