=====Overview=====
ZFS stands for Zettabyte File System and, to save you a lot of boring reading, it is a file system that protects against data corruption and provides high redundancy and high availability. It was designed by Sun Microsystems and, in a nutshell, there are two implementations:

  * Oracle ZFS
  * OpenZFS

Both implementations are pretty close to each other and both are widely used in the IT world to provide a highly available file system with strong data protection and good performance.
ZFS is native to the Oracle NAS Storage Appliance. With ZFS you can create a pool which you can share with other machines through either NFS or Samba.
So let's create one pool:
=====Administration=====
We can create and manage pools using the zpool command-line utility.

====Manage a Pool====
We all like pools, especially with water and diving equipment. But let's now create a ZPool in RAID-Z2 with 10 disks and name it "tank":

<code>
[root@hostname ~]# zpool create tank -O casesensitivity=mixed -O compression=lz4 raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8 disk9 disk10 -f
</code>
The command above creates a pool comprised of 10 disks in a raid-5-like configuration with double parity (RAID-Z2), meaning the pool survives the loss of any two disks. The pool is called "tank" and is created with lz4 compression. The compression property accepts the following values (see the example right after this list):

  * on
  * off
  * lzjb
  * gzip
  * gzip-[1-9]
  * zle
  * lz4 <- the one used here, and it is pretty good
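
For example, you can check which algorithm a dataset currently uses and switch it later via the compression property. Note that a change only affects data written from that point on:

<code>
[root@hostname ~]# zfs get compression tank
NAME  PROPERTY     VALUE  SOURCE
tank  compression  lz4    local
[root@hostname ~]# zfs set compression=gzip-9 tank
</code>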
You can also list the status of the created pool as follows:
<code>
[root@hostname ~]# zpool list -v
NAME   SIZE  ALLOC  FREE  ...
...
</code>
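
Closely related, zpool iostat shows per-device I/O statistics, which is handy when chasing down a slow pool (the trailing number is the sampling interval in seconds):

<code>
[root@hostname ~]# zpool iostat -v tank 5
</code>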
If you are worried about the integrity of the pool, you can of course perform an integrity check (a scrub) and verify the data as follows:
<code>
[root@hostname ~]# zpool scrub tank
[root@hostname ~]# zpool status
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 0h56m with 0 errors on Mon Aug 12 12:40:46 2019
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            disk1   ONLINE       0     0     0
            disk2   ONLINE       0     0     0
            disk3   ONLINE       0     0     0
            disk4   ONLINE       0     0     0
            disk5   ONLINE       0     0     0
            disk6   ONLINE       0     0     0
            disk7   ONLINE       0     0     0
            disk8   ONLINE       0     0     0
            disk9   ONLINE       0     0     0
            disk10  ONLINE       0     0     0

errors: No known data errors
</code>

P.S. You can stop a running scrub with:

<code>
[root@hostname ~]# zpool scrub -s tank
</code>
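
A scrub is cheap insurance, so it is common to schedule one periodically via cron. A minimal sketch, assuming a weekly run on Sunday at 02:00 (the file path is illustrative):

<code>
# /etc/cron.d/zfs-scrub - weekly scrub of the tank pool (illustrative)
0 2 * * 0  root  /usr/sbin/zpool scrub tank
</code>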

After we have a pool, we can create a ZFS file system on it :)

====Create file system====
The ZFS file system will be shared either over SMB (Samba) or NFS (Network File System) to other machines. NFS is a great way to set up Oracle RAC (with Direct NFS, of course) and Samba is a great way to build a backup solution.

So let's create one for NFS :)

The command is fairly simple:

<code>
[root@hostname ~]# zfs create tank/backup
</code>

We can also create nested file systems under it, for example one per client server (the names here follow the servers used later on):

<code>
[root@hostname ~]# zfs create tank/backup/serverOne
[root@hostname ~]# zfs create tank/backup/serverTwo
</code>
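
Each nested file system automatically inherits a mountpoint from its parent, which you can verify like this:

<code>
[root@hostname ~]# zfs get -r mountpoint tank/backup
NAME                   PROPERTY    VALUE                   SOURCE
tank/backup            mountpoint  /tank/backup            default
tank/backup/serverOne  mountpoint  /tank/backup/serverOne  default
tank/backup/serverTwo  mountpoint  /tank/backup/serverTwo  default
</code>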

You can check the status of the file systems as well, very easily, as follows:

<code>
[root@hostname ~]# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
tank                   1.55T  5.45T    ...  /tank
tank/backup            1.55T  5.45T    ...  /tank/backup
tank/backup/serverOne  46.4G  5.45T    ...  /tank/backup/serverOne
tank/backup/serverTwo  1.50T  5.45T    ...  /tank/backup/serverTwo
</code>

We can see that the total available space is 5.45 TB and 1.55 TB is used: 46.4 GB on the file system for the first server and 1.5 TB on the file system for the second server.

We spoke a lot about sharing and giving and stuff like that, but HOW do we actually share a ZFS file system? Let's check it out :)

====Share ZFS====
In the past, sharing over NFS was done via the following commands:

<code>
[root@hostname ~]# share -F nfs /tank
[root@hostname ~]# cat /etc/dfs/sharetab
/tank   -       nfs     rw
</code>

That is the old way, so what is the new way? Well, we have to set which server can do what :)
For example (the client names below are illustrative):

<code>
[root@hostname ~]# zfs set sharenfs="rw=server_one,ro=server_two" tank/backup
</code>

What that does is:

  * Give READ-WRITE access on the ZFS file system tank/backup to server_one
  * Give READ-ONLY access on the ZFS file system tank/backup to server_two
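
You can verify the setting at any time via the sharenfs property:

<code>
[root@hostname ~]# zfs get sharenfs tank/backup
NAME         PROPERTY  VALUE                        SOURCE
tank/backup  sharenfs  rw=server_one,ro=server_two  local
</code>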

It goes without saying that the portmapper port (111), as well as the NFS port itself (2049), should be allowed in both directions :)


For example: server_one (111) <-> (111) Oracle_NAS_Server (111) <-> (111) server_two
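
If the servers run firewalld, opening the relevant services would look roughly like this (a sketch; adapt it to whatever firewall you actually use):

<code>
[root@serverOne]# firewall-cmd --permanent --add-service=rpc-bind
[root@serverOne]# firewall-cmd --permanent --add-service=nfs
[root@serverOne]# firewall-cmd --reload
</code>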


Of course, such a configuration (with READ-ONLY) isn't suitable for RAC, unless you enjoy hitting your head against the wall for a long time. If you wish to do so, please go ahead and try to install RAC on that :)

That command will also update the sharetab file, of course:

<code>
[root@tbp-slm-zfs01 ~]# cat /etc/dfs/sharetab
/tank/backup    -       nfs     rw=server_one,ro=server_two
</code>

After you start the NFS server on the Oracle NAS, you can mount the share on serverOne and serverTwo. I will not cover the client setup in detail, as I have done that in another section, but it will look like a normal NFS mount.
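For completeness, a typical client-side mount would be (the mount point and the exported path are illustrative):

<code>
[root@serverOne]# mkdir -p /backup
[root@serverOne]# mount -t nfs 172.23.4.101:/tank/backup/serverOne /backup
</code>

Once mounted, it shows up in df like any other file system: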
<code>
[root@serverOne]# df
Filesystem                           1K-blocks     Used  Available  Use%  Mounted on
devtmpfs                                   ...
tmpfs                                  4000116      ...
tmpfs                                  4000116      ...
tmpfs                                  4000116      ...
/dev/...                                   ...
/dev/...                                   ...
tmpfs                                      ...
/dev/...                                   ...
172.23.4.101:/tank/backup/serverOne        ...
</code>
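
To make the mount persistent across reboots, you would add it to /etc/fstab on the client (again with the illustrative paths from above):

<code>
# /etc/fstab entry on serverOne - illustrative paths
172.23.4.101:/tank/backup/serverOne  /backup  nfs  defaults,_netdev  0 0
</code>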

Cheers and enjoy :)

=====Compression Options=====
For more info you can check here :) [[https://...]]