====== DB2 pureScale ======
===== Installation & Configuration =====
**Linux prerequisites :**
yum -y install
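An illustrative package set for RHEL 6, based on the usual pureScale prerequisites (check the official requirements for your DB2 level) :
yum -y install libstdc++ libstdc++.i686 pam pam.i686 ntp ntpdate sg3_utils ksh m4 patch gcc gcc-c++ cpp kernel-devel kernel-headers binutils openssh openssh-clients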
**Change the Red Hat release string (/etc/redhat-release) :**
Red Hat Enterprise Linux Server release 6.7 (Santiago)
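One way to apply it (keep a backup of the original file) :
cp -p /etc/redhat-release /etc/redhat-release.orig
echo "Red Hat Enterprise Linux Server release 6.7 (Santiago)" > /etc/redhat-release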
**Install DB2 on all nodes !**
**Prepare /var/ct/cfg/netmon.cf :**
!IBQPORTONLY !ALL
!REQD eth0 192.168.88.10
!REQD eth3 192.168.56.1
**Update an existing instance to pureScale :**
./db2iupdt -cf db2dev01 -cfnet db2dev01-hadr -m db2dev01 -mnet db2dev01-hadr -mid 0 -instance_shared_dev /dev/sda -tbdev /dev/sdb -u db2pure1 db2pure1
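For reference : -cf/-cfnet give the cluster caching facility host and its cluster-interconnect netname, -m/-mnet the member and its netname, -instance_shared_dev the shared disk for the instance filesystem, -tbdev the tiebreaker disk, -u the fenced user ; the last argument is the instance name.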
**Create new instance :**
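A minimal sketch with db2icrt, reusing the illustrative hosts, disks and users from the db2iupdt example above :
./db2icrt -cf db2dev01 -cfnet db2dev01-hadr -m db2dev01 -mnet db2dev01-hadr -instance_shared_dev /dev/sda -tbdev /dev/sdb -u db2pure1 db2pure1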
===== GPFS =====
==== Create GPFS Cluster & Filesystems ====
**Create GPFS Cluster filesystem domain :**
db2cluster -cfs -create -domain db2cluster_TEST.unixguest.eu -host db2purescaleA
**Set the GPFS Tiebreaker to disk device :**
db2cluster -cfs -set -tiebreaker -disk /dev/sdb
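To verify the setting afterwards (list option assumed to mirror the -set form) :
db2cluster -cfs -list -tiebreaker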
**Start the GPFS Cluster :**
db2cluster -cfs -start -all
**Create Instance shared FS :**
db2cluster -cfs -create -filesystem db2inst1_shared -disk /dev/sdc -mount /db2inst1/shared
**List shared filesystems :**
db2cluster -cfs -list -filesystem
**Stop GPFS filesystems :**
db2cluster -cfs -stop -all
**Add new GPFS filesystem :**
db2cluster -cfs -create -filesystem db2fs_data_01 -disk /dev/sdb -mount /db2_DATA_01
**Remove shared GPFS FS & GPFS Domain :**
# List the FS
db2cluster -cfs -list -filesystem
# Change the tiebreaker to Majority
db2cluster -cfs -stop -all
db2cluster -cfs -set -tiebreaker -majority
db2cluster -cfs -start -all
# Mount the FS, then delete everything under its mount point (assumed here to be /db2fs1 ; never run rm against / itself)
db2cluster -cfs -mount -filesystem db2fs1
rm -rf /db2fs1/*
# Remove the Shared FS
db2cluster -cfs -delete -filesystem db2fs1
# Remove the RSCT Domain
db2cluster -cfs -stop -all
db2cluster -cfs -list -domain
db2cluster -cfs -delete -domain
# Clear the DB2 global registry
db2greg -delvarrec service=GPFS_CLUSTER,variable=NAME,installpath=-
db2greg -delvarrec service=DEFAULT_INSTPROF,variable=DEFAULT,installpath=-
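To check that both records are really gone :
db2greg -dump
# the GPFS_CLUSTER and DEFAULT_INSTPROF entries should no longer appear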
**Enable SCSI-3 PR (Persistent Reserve) :**
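At the GPFS level this is controlled by the usePersistentReserve option ; a rough sketch (cluster stopped, and only if the disks actually support SCSI-3 PR) :
db2cluster -cfs -stop -all
mmchconfig usePersistentReserve=yes
db2cluster -cfs -start -all
# verify : mmlsnsd -X should report pr=yes for the NSDs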
===== GPFS Native Commands =====
**Check GPFS cluster configuration :**
mmlscluster
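A few other native commands that pair well with it :
mmgetstate -a          # GPFS daemon state on every node
mmlsconfig             # cluster configuration parameters
mmlsnsd                # NSDs and the filesystem each one belongs to
mmlsmount all -L       # where each filesystem is currently mounted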
===== Troubleshooting =====
**If the primary GPFS cluster configuration server fails, designate new primary/secondary servers :**
mmchcluster -p <new_primary_node> -s <new_secondary_node>
**Cluster Ping tool :**
[[http://www-01.ibm.com/support/docview.wss?uid=swg21967473]]
**Enable member node as Quorum node :**
export CT_MANAGEMENT_SCOPE=2
chrsrc -s "Name = '<node_name>'" IBM.PeerNode IsQuorumNode=1
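To check which nodes currently hold the quorum role :
export CT_MANAGEMENT_SCOPE=2
lsrsrc IBM.PeerNode Name IsQuorumNode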
**Enable failback mode for resource groups (RG) :**
1. Log in as root from any cluster node
2. export CT_MANAGEMENT_SCOPE=2
3. rgreq -o lock db2_db2sdin1_0-rg
4. rgreq -o lock db2_db2sdin1_1-rg
5. chrsrc -s "MemberOf = 'db2_db2sdin1_0-rg'" IBM.ManagedResource SelectFromPolicy=3
6. chrsrc -s "MemberOf = 'db2_db2sdin1_1-rg'" IBM.ManagedResource SelectFromPolicy=3
7. rgreq -o unlock db2_db2sdin1_0-rg
8. rgreq -o unlock db2_db2sdin1_1-rg
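To confirm the change (same selector and RG names as above) :
lsrsrc -s "MemberOf = 'db2_db2sdin1_0-rg'" IBM.ManagedResource SelectFromPolicy
lsrsrc -s "MemberOf = 'db2_db2sdin1_1-rg'" IBM.ManagedResource SelectFromPolicy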
**Attaching to DB2 processes / generating core dumps :**
- Lock all member resource groups as root
rgreq -o lock db2_inst0_0-rg
rgreq -o lock db2_inst0_1-rg
rgreq -o lock db2_inst0_2-rg
rgreq -o lock db2_inst0_3-rg
- Gather a core file (on Linux attach with gdb and use gcore, on AIX use gencore) ; see the sketch after this list
- Unlock all member resource groups as root
rgreq -o unlock db2_inst0_0-rg
rgreq -o unlock db2_inst0_1-rg
rgreq -o unlock db2_inst0_2-rg
rgreq -o unlock db2_inst0_3-rg
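A minimal sketch of the Linux core-file step (the PID is illustrative) :
# find the db2sysc engine process of the member
ps -ef | grep db2sysc
# dump a core without killing the process (gcore ships with gdb)
gcore -o /tmp/db2sysc.core 12345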
**Enable GPFS failover on multipath :**
[[http://www-01.ibm.com/support/docview.wss?uid=swg22003235]]
**Kerberos authentication**
- Create the Kerberos (KDC) database
kdb5_util create -s
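The following steps are generic MIT Kerberos administration, not DB2-specific ; principal and keytab names are purely illustrative :
kadmin.local -q "addprinc -randkey host/db2purescaleA.unixguest.eu"
kadmin.local -q "ktadd -k /etc/krb5.keytab host/db2purescaleA.unixguest.eu"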