NFS (Network File System) allows sharing a directory between two or more servers; it is what you reach for when several machines need to work with the same files.
Setting up an NFS server is rather simple, you have to: create the directory you want to share, add it to /etc/exports, reload the export rules and finally mount the share on the client.
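If the NFS packages aren't installed yet, they have to go in first. The prompts further down suggest a Debian-style server and a RHEL-style client, so take this only as a sketch for those distributions and adjust to yours:

root@nfs-server:~# apt-get install nfs-kernel-server
[root@client-server ~]# yum install nfs-utils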
So let's get started:
root@nfs-server:/backups/mongo# mkdir /backups/mongo/
root@nfs-server:/backups/mongo#
Once you have created the directory, you have to add it to the exports file (/etc/exports):
root@nfs-server:/backups/mongo# cat /etc/exports
/backups/oracle amg-cx-odb1(rw,sync)
/backups/mongo client-server(rw,sync)
root@nfs-server:/backups/mongo#
You can have many clients accessing the same server directory, but then you have to specify a different fsid for each export line, as follows:
root@tain-cx-backup1:/backups# cat /etc/exports
/backups/oracle server(rw,sync)
/backups/mongo serverA(rw,sync,fsid=1)
/backups/mongo serverB(rw,sync,fsid=2)
And reload the export rules:
root@nfs-server:/backups/mongo# exportfs -ra
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "amg-cx-odb1:/backups/oracle".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "client-server:/backups/mongo".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
root@nfs-server:/backups/mongo#
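To double-check what the server is actually exporting, you can list the active exports on the server, or query them from the client (a quick sanity check, not strictly required):

root@nfs-server:/backups/mongo# exportfs -v
[root@client-server ~]# showmount -e nfs-server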
On the client, practically all we have to do is add one line to /etc/fstab:
[root@client-server log]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Jan 26 16:24:16 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=8333d912-ce24-4819-bd32-940e6cca13e0 /        xfs     defaults        0 0
UUID=cf0dbc16-627e-4909-bdb3-c02793e93418 /boot    xfs     defaults        0 0
UUID=e01dacfa-e912-420d-908c-fe5481eb8145 swap     swap    defaults        0 0
/dev/mapper/mongovg-mongllv1 /var/mongodb/db ext4 defaults 0 0
nfs-server:/backups/mongo /apphome/backup/mongo nfs rw,sync,hard,intr 0 0   <- This line
[root@client-server log]#
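Note that the mount point has to exist on the client before anything can be mounted on it, so create it if it isn't there yet (the path is taken from the fstab line above):

[root@client-server log]# mkdir -p /apphome/backup/mongo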
And mount the file system:
[root@client-server log]# mount /apphome/backup/mongo
[root@client-server log]# df
Filesystem                    1K-blocks       Used  Available Use% Mounted on
/dev/sda3                      16006144    2520376   13485768  16% /
devtmpfs                        1922740          0    1922740   0% /dev
tmpfs                           1932524         12    1932512   1% /dev/shm
tmpfs                           1932524       8672    1923852   1% /run
tmpfs                           1932524          0    1932524   0% /sys/fs/cgroup
/dev/sda1                       1038336     172520     865816  17% /boot
/dev/mapper/mongovg-mongllv1  192942346     360476  185379394   1% /var/mongodb/db
tmpfs                            386508          0     386508   0% /run/user/10177
tmpfs                            386508          0     386508   0% /run/user/10163
nfs-server:/backups/mongo    6289912064 2255692288 4034219776  36% /apphome/backup/mongo   <- New NFS
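A quick way to confirm the share is actually writable from the client is to create and remove a throw-away file (the file name here is just an example):

[root@client-server log]# touch /apphome/backup/mongo/nfs-write-test
[root@client-server log]# rm /apphome/backup/mongo/nfs-write-test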
One error which you can run into is:
Apr 5 10:57:12 tain-cx-mdb1 snmpd[811]: Cannot statfs /apphome/backup/mongo#012: Stale file handle
This means that the file handle the client is holding is stale, i.e. no longer valid on the server. There are 2 ways to resolve this: pin the export with an explicit fsid on the server, or remount the share on the client. The fsid way looks like this:
root@tain-cx-backup1:/backups/mongo# cat /etc/exports
/backups/oracle amg-cx-odb1(rw,sync)
/backups/mongo tain-cx-mdb1(rw,sync,fsid=1)
root@tain-cx-backup1:/backups/mongo#
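The other way is handled purely on the client: remount the share so the client picks up a fresh file handle. A sketch using the lazy unmount described further down:

[root@tain-cx-mdb1 ~]# umount -l /apphome/backup/mongo
[root@tain-cx-mdb1 ~]# mount /apphome/backup/mongo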
You can check if there are problems with the mount using verbose mode:
mount -t nfs IP_ADDR:/backups/mongo /apphome/backup/mongo -vvv -o mountproto=tcp
To unmount NFS, you can use the standard:
umount /apphome/backup/mongo
However, in a lot of cases you cannot do that, because the NFS server isn't responsive or the device is busy. Then you have to use the force option:
umount -f /apphome/backup/mongo
However, the command above still relies on the NFS server being at least somewhat responsive, and that isn't always the case. In such cases we can do a so-called lazy unmount, which will detach the file system right away and clean up the leftover references later, once they are released:
umount -l /apphome/backup/mongo
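Before going for the nuclear option below, it can help to see which processes are actually holding the mount busy. fuser in verbose mode only lists them, without killing anything:

[root@client-server log]# fuser -vm /apphome/backup/mongo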
In some cases we really need to shut down the NFS mount, kick everyone off and burn the server to the ground :) For that we can use:
fuser -km /apphome/backup/mongo
That will kill every process using that file system, and most probably kick you off the server as well.
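If you also want to stop serving the share altogether, you can additionally unexport it on the server side and, if it really comes to that, stop the NFS service (the service name differs between distributions, e.g. nfs-kernel-server on Debian/Ubuntu and nfs-server on RHEL):

root@nfs-server:~# exportfs -u client-server:/backups/mongo
root@nfs-server:~# systemctl stop nfs-kernel-server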