====== DB2 pureScale ======

===== Installation & Configuration =====

**Linux prerequisites:**

<sxh bash>
yum -y install
</sxh>

**Change the reported Red Hat release (contents of ''/etc/redhat-release'') to a supported level:**

<sxh bash>
Red Hat Enterprise Linux Server release 6.7 (Santiago)
</sxh>

**Install DB2 on all nodes!**

**Prepare ''/var/ct/cfg/netmon.cf'':**

<sxh bash>
!IBQPORTONLY !ALL
!REQD eth0 192.168.88.10
!REQD eth3 192.168.56.1
</sxh>

**Update an existing instance to pureScale:**

<sxh bash>
./db2iupdt -cf db2dev01 -cfnet db2dev01-hadr \
           -m db2dev01 -mnet db2dev01-hadr -mid 0 \
           -instance_shared_dev /dev/sda -tbdev /dev/sdb \
           -u db2pure1 db2pure1
</sxh>

**Create a new instance:**

===== GPFS =====

==== Create GPFS FS Cluster ====

**Create the GPFS cluster filesystem domain:**

<sxh bash>
db2cluster -cfs -create -domain db2cluster_TEST.unixguest.eu -host db2purescaleA
</sxh>

**Set the GPFS tiebreaker to a disk device:**

<sxh bash>
db2cluster -cfs -set -tiebreaker -disk /dev/sdb
</sxh>

**Start the GPFS cluster:**

<sxh bash>
db2cluster -cfs -start -all
</sxh>

**Create the instance shared FS:**

<sxh bash>
db2cluster -cfs -create -filesystem db2inst1_shared -disk /dev/sdc -mount /db2inst1/shared
</sxh>

**List shared filesystems:**

<sxh bash>
db2cluster -cfs -list -filesystem
</sxh>

**Stop GPFS filesystems:**

<sxh bash>
db2cluster -cfs -stop -all
</sxh>

**Add a new GPFS filesystem:**

<sxh bash>
db2cluster -cfs -create -filesystem db2fs_data_01 -disk /dev/sdb -mount /db2_DATA_01
</sxh>

**Remove a shared GPFS FS & the GPFS domain:**

<sxh bash>
# List the filesystems
db2cluster -cfs -list -filesystem
# Change the tiebreaker to Majority
db2cluster -cfs -stop -all
db2cluster -cfs -set -tiebreaker -majority
db2cluster -cfs -start -all
# Delete all shared files (careful: .* also matches '..')
db2cluster -cfs -mount -filesystem db2fs1
rm -rf <MOUNT_POINT>/.*
# Remove the shared FS
db2cluster -cfs -delete -filesystem db2fs1
# Remove the RSCT domain
db2cluster -cfs -stop -all
db2cluster -cfs -list -domain
db2cluster -cfs -delete -domain <DOMAIN>
# Clear the DB2 global registry
db2greg -delvarrec service=GPFS_CLUSTER,variable=NAME,installpath=-
db2greg -delvarrec service=DEFAULT_INSTPROF,variable=DEFAULT,installpath=-
</sxh>

**Enable SCSI-3 PR (persistent reservations):**

<sxh bash>
</sxh>

===== GPFS Native Commands =====

**Check the GPFS cluster configuration:**

<sxh bash>
mmlscluster
</sxh>

===== Troubleshooting =====

**If the primary GPFS cluster configuration server fails, designate a new primary:**

<sxh bash>
mmchcluster -p <NEWPRIMARY> -s <NEWSECONDARY>
</sxh>

**Cluster ping tool:**

[[http://www-01.ibm.com/support/docview.wss?uid=swg21967473]]

**Enable a member node as a quorum node:**

<sxh bash>
export CT_MANAGEMENT_SCOPE=2
chrsrc -s 'Name="<NODENAME>"' IBM.PeerNode IsQuorumNode=1
</sxh>

**Enable failback mode for the resource groups:**

<sxh bash>
# Log in as root on any cluster node, then:
export CT_MANAGEMENT_SCOPE=2
rgreq -o lock db2_db2sdin1_0-rg
rgreq -o lock db2_db2sdin1_1-rg
chrsrc -s "MemberOf = 'db2_db2sdin1_0-rg'" IBM.ManagedResource SelectFromPolicy=3
chrsrc -s "MemberOf = 'db2_db2sdin1_1-rg'" IBM.ManagedResource SelectFromPolicy=3
rgreq -o unlock db2_db2sdin1_0-rg
rgreq -o unlock db2_db2sdin1_1-rg
</sxh>

**Attaching to DB2 processes / generating core dumps:**

<sxh bash>
# Lock all member resource groups as root
rgreq -o lock db2_inst0_0-rg
rgreq -o lock db2_inst0_1-rg
rgreq -o lock db2_inst0_2-rg
rgreq -o lock db2_inst0_3-rg
# Gather a core file (on Linux: attach with gdb and use gcore; on AIX: gencore)
# Unlock all members as root
rgreq -o unlock db2_inst0_0-rg
rgreq -o unlock db2_inst0_1-rg
rgreq -o unlock db2_inst0_2-rg
rgreq -o unlock db2_inst0_3-rg
</sxh>

**Enable GPFS failover on multipath:**

[[http://www-01.ibm.com/support/docview.wss?uid=swg22003235&myns=swgimgmt&mynp=OCSSEPGG&mync=E&cm_sp=swgimgmt-_-OCSSEPGG-_-E]]

**Kerberos authentication**

**Create the KRB5 database:**

<sxh bash>
kdb5_util create -s
</sxh>

db2_purescale.txt · Last modified: 2019/10/18 20:04 by 127.0.0.1
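The ''kdb5_util create -s'' step above only initializes the KDC database and stashes the master key. A possible continuation, sketched here with standard MIT Kerberos tools (the realm ''UNIXGUEST.EU'' and the instance owner ''db2inst1'' are illustrative placeholders, not taken from this page):

<sxh bash>
# Assumption: realm and principal names below are placeholders; adapt to your site.
# Create a principal for the DB2 instance owner with a random key:
kadmin.local -q "addprinc -randkey db2inst1@UNIXGUEST.EU"
# Export its key to the default keytab so the instance can authenticate non-interactively:
kadmin.local -q "ktadd db2inst1@UNIXGUEST.EU"
# Configure the instance to use the shipped IBMkrb5 GSS plug-in and Kerberos authentication:
db2 update dbm cfg using SRVCON_GSSPLUGIN_LIST IBMkrb5
db2 update dbm cfg using AUTHENTICATION KERBEROS
</sxh>

Restart the instance (''db2stop'' / ''db2start'') for the authentication change to take effect.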