ZFS is native to the Oracle NAS Storage Appliance. With ZFS you can create a pool, which you can then share with other machines through either NFS or Samba.

So let's create a pool, using the zpool command line.

We all like pools, especially with water and diving equipment. But let's create a ZPool now, in RAID-Z2 with 10 disks, and name it…..“tank” :)

| Create a pool
[root@hostname ~]# zpool create -f -O casesensitivity=mixed -O compression=lz4 tank raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8 disk9 disk10

The command above will create a pool comprised of 10 disks in a RAID-Z2 configuration, which means double parity (like RAID 6, not RAID 5). The pool is called: “tank”.
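
The two -O options set properties on the pool's root dataset: mixed case sensitivity (friendly to SMB clients) and lz4 compression (cheap and usually worth it). It doesn't hurt to verify they actually took effect:

| Verify the pool properties
[root@hostname ~]# zfs get compression,casesensitivity tank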

You can also list the status of the created pool as follows:

| Pool status
[root@hostname ~]# zpool list -v
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank  9.06T  1.94T  7.12T         -     4%    21%  1.00x  ONLINE  -
  raidz2  9.06T  1.94T  7.12T         -     4%    21%
    disk1      -      -      -         -      -      -
    disk2      -      -      -         -      -      -
    disk3      -      -      -         -      -      -
    disk4      -      -      -         -      -      -
    disk5      -      -      -         -      -      -
    disk6      -      -      -         -      -      -
    disk7      -      -      -         -      -      -
    disk8      -      -      -         -      -      -
    disk9      -      -      -         -      -      -
    disk10     -      -      -         -      -      -
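
If you also want live, per-disk I/O statistics, zpool iostat can print them at an interval (every 5 seconds here; stop it with Ctrl+C):

| Pool I/O statistics
[root@hostname ~]# zpool iostat -v tank 5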

If you are worried about the integrity of the pool, you can of course run an integrity check (a scrub) and verify that there is no data corruption. The results show up in the pool status:

|Check Integrity
[root@hostname ~]# zpool status -v tank
  pool: tank
 state: ONLINE
 scrub: scrub completed after 0h7m with 0 errors on Tue Feb  2 12:54:00 2010
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            disk1   ONLINE       0     0     0
            disk2   ONLINE       0     0     0
            disk3   ONLINE       0     0     0
            disk4   ONLINE       0     0     0
            disk5   ONLINE       0     0     0
            disk6   ONLINE       0     0     0
            disk7   ONLINE       0     0     0
            disk8   ONLINE       0     0     0
            disk9   ONLINE       0     0     0
            disk10  ONLINE       0     0     0
 
errors: No known data errors
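
The output above shows an already-completed scrub; to kick one off yourself and watch its progress:

| Run a scrub manually
[root@hostname ~]# zpool scrub tank
[root@hostname ~]# zpool status -v tank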
 
 
P.S. You can stop a running scrub with: zpool scrub -s tank

After we have a pool, we can create a ZFS file system on it :)

The ZFS file system will be shared either over SMB (Samba) or NFS (Network File System) to other machines. NFS is a great way to set up Oracle RAC (with Direct NFS of course), and Samba is a great way to build a backup solution.
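
For the Samba side, ZFS has an analogous sharesmb property; a minimal sketch, assuming the SMB service is enabled on the appliance (we create the tank/backup file system just below, and this page sticks with NFS):

| Share over SMB (sketch)
[root@hostname ~]# zfs set sharesmb=on tank/backup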

So let's go with NFS :)

The command is fairly simple:

| Create ZFS File system
[root@hostname ~]# zfs create tank/backup

We can also create nested file systems under it, for example:

| Create ZFS File system
[root@hostname ~]# zfs create tank/backup/servers/server_name
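
Since nested file systems all draw from the same pool, you may want to cap how much a single server can take; a minimal sketch with a made-up 2T limit:

| Cap a file system with a quota (hypothetical value)
[root@hostname ~]# zfs set quota=2T tank/backup/servers/server_name
[root@hostname ~]# zfs get quota tank/backup/servers/server_name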

You can check the status of the file systems as well, very easily, as follows:

| List ZFS File Systems
[root@hostname ~]# zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
tank                         1.55T  5.45T  50.2K  /tank
tank/backup                  1.55T  5.45T  52.6K  /tank/backup
tank/backup/zfsForServerOne  46.4G  5.45T  46.4G  /tank/backup/forServerOne
tank/backup/zfsForServerTwo  1.50T  5.45T  1.50T  /tank/backup/forServerTwo
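
Since the pool was created with lz4, you can also check how much the compression is actually saving, via the compressratio property that ZFS maintains per dataset:

| Check compression pay-off
[root@hostname ~]# zfs get -r used,available,compressratio tank/backup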

We can see that 5.45 TB is still available and 1.55 TB is used in total: 46.4 GB on the file system for the first server and 1.5 TB on the file system for the second one.

We spoke a lot about sharing and giving and stuff like that, but HOW do we actually share a ZFS file system? Let's check it out :)

In the past, sharing a file system over NFS was done via the following commands:

| Old way of sharing ZFS as NFS
[root@hostname ~]# share -F nfs /tank
[root@hostname ~]# cat /etc/dfs/sharetab
/tank    -       nfs     rw

That was the old way, so what is the new way? Well, we have to set which server can do what :) For example:

| New way of sharing ZFS as NFS
[root@hostname ~]# zfs set sharenfs="rw=@serverOne/16,ro=@serverTwo/28" tank/backup/zfsForServerOne

What that does is:

  • Give READ-WRITE access on the ZFS file system tank/backup/zfsForServerOne to Server One's network (a /16 netmask)
  • Give READ-ONLY access on the ZFS file system tank/backup/zfsForServerOne to Server Two's network (a /28 netmask)
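
You can verify what actually got set with zfs get (same dataset and property as above):

| Verify the share settings
[root@hostname ~]# zfs get sharenfs tank/backup/zfsForServerOne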

It is needless to say that the portmapper port (111) and the NFS port itself (2049) should be allowed both ways, for example: server_one ↔ Oracle_NAS_Server ↔ server_two. Of course, such a configuration (with READ-ONLY) isn't suitable for RAC, unless you want to hit your head against the wall for a long time. If you wish to do so, please go ahead and try to install RAC on that :)
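
To confirm from a client that the RPC services are reachable and the export is visible, you can query the portmapper and the export list (a sketch; 172.23.4.101 is the appliance IP used in the df output further down):

| Check reachability from a client
[root@serverOne]# rpcinfo -p 172.23.4.101
[root@serverOne]# showmount -e 172.23.4.101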

That command will also update the sharetab file, of course:

| The sharing settings
[root@tbp-slm-zfs01 ~]# cat /etc/dfs/sharetab
/tank/backup/zfsForServerOne    -       nfs     rw=@serverOne/16,ro=@serverTwo/28

After you start the NFS server on the Oracle NAS, you can try to mount it on serverOne and serverTwo. I will not cover this in detail, as I have done it in another section, but it will look like a normal NFS mount:

| On serverOne
[root@serverOne]# df
Filesystem                                 1K-blocks       Used  Available Use% Mounted on
devtmpfs                                     3985372          0    3985372   0% /dev
tmpfs                                        4000116          0    4000116   0% /dev/shm
tmpfs                                        4000116      33480    3966636   1% /run
tmpfs                                        4000116          0    4000116   0% /sys/fs/cgroup
/dev/mapper/volGroup-root                   38774276    2454892   36319384   7% /
/dev/sda1                                     999320     181048     749460  20% /boot
tmpfs                                         800020          0     800020   0% /run/user/283600015
/dev/mapper/vg_mongo-lv_mongo              721426424  277759688  409058372  41% /app/data
172.23.4.101:/tank/backup/zfsForServerOne 7463166976 1611866112 5851300864  22% /apphome/backup
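
For reference, the mount behind that last df line would look something like this (a sketch; tune the NFS mount options to your environment):

| Mounting the share (sketch)
[root@serverOne]# mkdir -p /apphome/backup
[root@serverOne]# mount -t nfs 172.23.4.101:/tank/backup/zfsForServerOne /apphome/backup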

Cheers and enjoy :)
