======Grid/Cloud Storage======
Shared storage is one of the unique aspects of Oracle and, frankly, also one of its weaknesses. The synchronisation it requires can consume more resources than the server can handle, whereas other databases rely on replication alone. But then, Oracle isn't meant for weak servers, and the Grid Infrastructure especially isn't. So please don't install it on one.
The configuration can be done either via:
* **VirtualBox shared disks**
* **iSCSI disks, using an iSCSI target and iSCSI initiators on multiple servers**
or if you have enough money, via:
* **FC SCSI Disks**
* **Exadata Storage Cells**
----
=====VirtualBox Shared Disks=====
This option is the simplest. Just add disk images to one of the nodes, mark them as shareable, and attach them to the other node(s) as well. Then bind them with udev rules, either written by hand or generated with the script below.
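Creating and attaching the shared disks can be scripted from the host with VBoxManage. A rough sketch, in which the VM names, the controller name ''SATA'', the path, the port, and the size are all assumptions to adjust to your setup:

```shell
# create a fixed-size disk image (shareable disks must not be
# dynamically allocated), mark it shareable, and attach it to both VMs;
# VM names, controller name, port, path and size are placeholders
VBoxManage createmedium disk --filename /vbox/asm1.vdi --size 10240 \
  --format VDI --variant Fixed
VBoxManage modifymedium disk /vbox/asm1.vdi --type shareable
for vm in racnode1 racnode2 ; do
  VBoxManage storageattach "$vm" --storagectl "SATA" --port 2 --device 0 \
    --type hdd --medium /vbox/asm1.vdi
done
```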
====Create the Physical Partitions====
Be sure to create the physical partitions on **ONE** of the servers only.
<code>
[root@lparacb ~]# fdisk /dev/sdf
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x98d4dc1e.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1): <-- Enter
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-261, default 261): <-- Enter
Using default value 261

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
</code>
**P.S. Create a partition for each disk using fdisk, BUT DO NOT create ASM disks using the oracleasm library: on the other node it will recognise only a couple of the disks.**
====Create the Binding Rules====
Creating the udev rules is a mandatory step: they let the OS bind the partitions to a specific user with specific permissions. This is required even when the disks are iSCSI.
===Linux 6===
<code bash>
#!/bin/bash
#file ~/createOracleAsmUdevRules.sh
i=1
# ol6 / rhel6 / centos 6
cmd="/sbin/scsi_id -g -u -d"
for disk in sdb sdc sdd sde ; do
  cat <<EOF >> /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="$cmd /dev/\$parent", \
RESULT=="`$cmd /dev/$disk`", NAME="asm-disk$i", OWNER="oracle", GROUP="dba", MODE="0660"
EOF
  i=$(($i+1))
done
</code>
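Note the here-document syntax: it must be ''cat <<EOF >> file'' (append), and the unquoted ''EOF'' lets ''$cmd'', ''$disk'' and the backticks expand at generation time while the escaped ''\$parent'' stays literal for udev. The pattern can be sanity-checked against a scratch file first (the path and IDs below are fake placeholders):

```shell
# append one rendered rule per loop iteration to a scratch file instead
# of /etc/udev/rules.d; the IDs are fake placeholders
rules=/tmp/99-test.rules
rm -f "$rules"
i=1
for id in FAKE_ID_A FAKE_ID_B ; do
  cat <<EOF >> "$rules"
RESULT=="$id", SYMLINK+="asm-disk$i"
EOF
  i=$(($i+1))
done
cat "$rules"
```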
===Linux 7===
<code bash>
[root@lparaca ~]# cat createOracleAsmUdevRules.sh
#!/bin/bash
#file ~/createOracleAsmUdevRules.sh
i=1
# ol7 / rhel7 / centos 7
cmd="/lib/udev/scsi_id -g -u -d"
for disk in sdc sdd sde sdf ; do
  cat <<EOF >> /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="$cmd /dev/\$parent", \
RESULT=="`$cmd /dev/$disk`", SYMLINK+="asm-disk$i", OWNER="oracle", GROUP="dba", MODE="0660"
EOF
  i=$(($i+1))
done
[root@lparaca ~]#
</code>
Bind the partitions to their respective ASM disk names by reloading the rules and invoking the udev trigger:
<code bash>
udevadm control --reload-rules && udevadm trigger
</code>
The result should be something like this:
<code>
[root@lparacb ~]# ls -lart /dev/asm*
brw-rw---- 1 oracle dba 8, 113 Jan 12 09:10 /dev/asm-disk5
brw-rw---- 1 oracle dba 8,  97 Jan 12 09:10 /dev/asm-disk4
brw-rw---- 1 oracle dba 8,  81 Jan 12 13:42 /dev/asm-disk3
brw-rw---- 1 oracle dba 8,  65 Jan 12 13:42 /dev/asm-disk2
brw-rw---- 1 oracle dba 8,  49 Jan 12 13:42 /dev/asm-disk1
[root@lparacb ~]#
</code>
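If a device is missing from the listing, udev can be queried per partition to see what it reports and whether the rule actually fires. A sketch, assuming ''sdc1'' is one of the shared partitions:

```shell
# show every property udev has recorded for the partition, then dry-run
# the rules engine against it to confirm the asm-disk name is assigned
udevadm info --query=all --name=/dev/sdc1
udevadm test /sys/class/block/sdc1 2>&1 | grep -i asm
```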
----
=====iSCSI Disks=====
To be done