oracle_gdid_sa_upgrade_frstnode

Overview

The first step is to unzip the binaries into the location where you want the new (higher-version) grid home to reside, and to make sure the same directory is empty on the other nodes. So, if you want the location of the new grid home to be /u01/app/oracle/grid_12201, you should run:

[oracle@lparaca ~]$ unzip -q linuxx64_12201_grid_home.zip -d /u01/app/oracle/grid_12201/ <--- Location for the new home.

Once the files have been unzipped, verify that the same directory is EMPTY on the other nodes:
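
A quick way to verify that (assuming passwordless SSH between the nodes, and using the second node lparacb from this example) is to list the directory remotely; it should come back empty:

[oracle@lparaca ~]$ ssh lparacb "ls -lA /u01/app/oracle/grid_12201"
total 0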

The upgrade procedure is rather straightforward; you can either:

  • Perform the entire upgrade
  • Install only the Software

In this example, I did the entire upgrade:

Start VNC

[oracle@lparaca grid_12201]$ vncserver


New 'lparaca:4 (oracle)' desktop is lparaca:4

Starting applications specified in /home/oracle/.vnc/xstartup
Log file is /home/oracle/.vnc/lparaca:4.log

[oracle@lparaca grid_12201]$

You can use any VNC client to connect; I used VNC Viewer:
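
For example, from a workstation with a command-line VNC client installed (the display number :4 comes from the vncserver output above), the connection would look like this:

vncviewer lparaca:4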

Start the gridSetup

gridSetup.sh, like runInstaller, is located in the directory where you unzipped the binaries:


[oracle@lparaca grid_12201]$ ls -lart
total 424
-rw-r-----   1 oracle oinstall   500 Feb  6  2013 welcome.html
-rw-r--r--   1 oracle oinstall   852 Aug 19  2015 env.ora
-rwxr-x---   1 oracle oinstall   628 Sep  4  2015 runcluvfy.sh
-rwxr-x---   1 oracle oinstall  5395 Jul 21  2016 gridSetup.sh   <--- Run from VNC
drwxr-x---   6 oracle oinstall  4096 Jan 26  2017 xdk
drwxr-xr-x   3 oracle oinstall  4096 Jan 26  2017 wwg
drwxr-xr-x   3 oracle oinstall  4096 Jan 26  2017 wlm
drwxr-xr-x   7 oracle oinstall  4096 Jan 26  2017 usm
drwxr-xr-x   4 oracle oinstall  4096 Jan 26  2017 tomcat
drwxr-x---   5 root   oinstall  4096 Jan 26  2017 suptools
drwxr-xr-x   3 oracle oinstall  4096 Jan 26  2017 slax
drwxr-xr-x   4 oracle oinstall  4096 Jan 26  2017 scheduler

Follow the GUI

The GUI is pretty straightforward:

On the initial page you can choose the type of configuration or operation you want to perform:

The second page lists all the nodes which should be upgraded. Insert ALL nodes that are to be upgraded; if a node isn't added here, adding it to the cluster later will require a lot more steps and procedures:

For some reason, Oracle requires more than 30 GB of free space for the upgrade; I had to add 16 more disks of 2 GB each to solve that problem:
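
As a sketch, and assuming the extra disks go into the existing DATA disk group (the only disk group visible in the logs below) with hypothetical device paths, they could be added from the local ASM instance like this:

[oracle@lparaca ~]$ sqlplus / as sysasm
SQL> ALTER DISKGROUP DATA ADD DISK '/dev/asmdisk17', '/dev/asmdisk18';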

The management page is optional; it lets you define Enterprise Manager (Cloud Control) settings if such a setup exists:

Now, here was a bummer: I hadn't unzipped the binaries into the correct home, because I assumed that the location on the first node should be empty, as was always the case before.

The problem was fixed once I unzipped ALL the binaries into the desired location for the new home.

The prerequisites page will show you which prerequisites weren't met.

In my case the physical memory check failed and had to be fixed before continuing.
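
Such failures can also be caught (and in some cases fixed automatically) before launching the GUI by running the pre-upgrade cluster verification that ships in the new home; the command below follows the documented pre-upgrade syntax, with the old and new home paths from this example:

[oracle@lparaca grid_12201]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
    -src_crshome /u01/app/oracle/grid \
    -dest_crshome /u01/app/oracle/grid_12201 \
    -dest_version 12.2.0.1.0 -fixup -verbose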

Once everything is set, the installer will ask you to execute the rootupgrade.sh script on each node:

Output of rootupgrade.sh

Node 1:

[root@lparaca grid_12201]# ./rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/grid_12201

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/oracle/grid_12201/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/product/crsdata/lparaca/crsconfig/rootcrs_lparaca_2018-01-21_12-52-31AM.log

2018/01/21 12:53:30 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2018/01/21 12:53:30 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/01/21 12:55:38 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/01/21 12:55:38 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2018/01/21 12:56:10 CLSRSC-595: Executing upgrade step 3 of 19: 'GenSiteGUIDs'.
2018/01/21 12:56:13 CLSRSC-595: Executing upgrade step 4 of 19: 'GetOldConfig'.
2018/01/21 12:56:13 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/01/21 12:56:28 CLSRSC-515: Starting OCR manual backup.
2018/01/21 12:57:21 CLSRSC-516: OCR manual backup successful.
2018/01/21 12:57:50 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2018/01/21 12:57:50 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.
2018/01/21 12:57:50 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2018/01/21 12:57:50 CLSRSC-615:
 3. The last node to downgrade cannot be a Leaf node.
2018/01/21 12:58:05 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/01/21 12:58:05 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2018/01/21 12:58:08 CLSRSC-363: User ignored prerequisites during installation
2018/01/21 12:59:07 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2018/01/21 12:59:13 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2018/01/21 13:01:21 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2018/01/21 13:01:27 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2018/01/21 13:01:27 CLSRSC-482: Running command: '/u01/app/oracle/grid/bin/crsctl start rollingupgrade 12.2.0.1.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2018/01/21 13:01:49 CLSRSC-482: Running command: '/u01/app/oracle/grid_12201/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/oracle/grid -oldCRSVersion 12.1.0.2.0 -firstNode true -startRolling false '

ASM configuration upgraded in local node successfully.

2018/01/21 13:02:35 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2018/01/21 13:05:36 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/01/21 13:07:50 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2018/01/21 13:08:01 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2018/01/21 13:08:01 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2018/01/21 13:08:48 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2018/01/21 13:08:48 CLSRSC-595: Executing upgrade step 12 of 19: 'InstallAFD'.
2018/01/21 13:08:54 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2018/01/21 13:09:02 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2018/01/21 13:09:17 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2018/01/21 13:09:52 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'lparaca'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'lparaca' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/01/21 13:10:15 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2018/01/21 13:10:19 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'lparaca'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'lparaca' has completed
CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'lparaca'
CRS-2672: Attempting to start 'ora.evmd' on 'lparaca'
CRS-2676: Start of 'ora.mdnsd' on 'lparaca' succeeded
CRS-2676: Start of 'ora.evmd' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'lparaca'
CRS-2676: Start of 'ora.gpnpd' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'lparaca'
CRS-2676: Start of 'ora.gipcd' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lparaca'
CRS-2676: Start of 'ora.cssdmonitor' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'lparaca'
CRS-2672: Attempting to start 'ora.diskmon' on 'lparaca'
CRS-2676: Start of 'ora.diskmon' on 'lparaca' succeeded
CRS-2676: Start of 'ora.cssd' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'lparaca'
CRS-2672: Attempting to start 'ora.ctssd' on 'lparaca'
CRS-2676: Start of 'ora.ctssd' on 'lparaca' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'lparaca'
CRS-2676: Start of 'ora.asm' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'lparaca'
CRS-2676: Start of 'ora.storage' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'lparaca'
CRS-2676: Start of 'ora.crf' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'lparaca'
CRS-2676: Start of 'ora.crsd' on 'lparaca' succeeded
CRS-6017: Processing resource auto-start for servers: lparaca
CRS-6016: Resource auto-start has completed for server lparaca
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.


CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'lparaca'
CRS-2673: Attempting to stop 'ora.crsd' on 'lparaca'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'lparaca'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'lparaca'
CRS-2677: Stop of 'ora.DATA.dg' on 'lparaca' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'lparaca'
CRS-2677: Stop of 'ora.asm' on 'lparaca' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'lparaca'
CRS-2677: Stop of 'ora.net1.network' on 'lparaca' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'lparaca' has completed
CRS-2677: Stop of 'ora.crsd' on 'lparaca' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'lparaca'
CRS-2673: Attempting to stop 'ora.crf' on 'lparaca'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'lparaca'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'lparaca'
CRS-2677: Stop of 'ora.crf' on 'lparaca' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'lparaca' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'lparaca' succeeded
CRS-2677: Stop of 'ora.asm' on 'lparaca' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'lparaca'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'lparaca' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'lparaca'
CRS-2673: Attempting to stop 'ora.evmd' on 'lparaca'
CRS-2677: Stop of 'ora.ctssd' on 'lparaca' succeeded
CRS-2677: Stop of 'ora.evmd' on 'lparaca' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'lparaca'
CRS-2677: Stop of 'ora.cssd' on 'lparaca' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'lparaca'
CRS-2677: Stop of 'ora.gipcd' on 'lparaca' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'lparaca' has completed
CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'lparaca'
CRS-2672: Attempting to start 'ora.evmd' on 'lparaca'
CRS-2676: Start of 'ora.mdnsd' on 'lparaca' succeeded
CRS-2676: Start of 'ora.evmd' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'lparaca'
CRS-2676: Start of 'ora.gpnpd' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'lparaca'
CRS-2676: Start of 'ora.gipcd' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lparaca'
CRS-2676: Start of 'ora.cssdmonitor' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'lparaca'
CRS-2672: Attempting to start 'ora.diskmon' on 'lparaca'
CRS-2676: Start of 'ora.diskmon' on 'lparaca' succeeded
CRS-2676: Start of 'ora.cssd' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'lparaca'
CRS-2672: Attempting to start 'ora.ctssd' on 'lparaca'
CRS-2676: Start of 'ora.ctssd' on 'lparaca' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'lparaca'
CRS-2676: Start of 'ora.asm' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'lparaca'
CRS-2676: Start of 'ora.storage' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'lparaca'
CRS-2676: Start of 'ora.crf' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'lparaca'
CRS-2676: Start of 'ora.crsd' on 'lparaca' succeeded
CRS-6017: Processing resource auto-start for servers: lparaca
CRS-2672: Attempting to start 'ora.ons' on 'lparaca'
CRS-2673: Attempting to stop 'ora.lparaca.vip' on 'lparacb'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'lparacb'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'lparacb' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'lparacb'
CRS-2677: Stop of 'ora.lparaca.vip' on 'lparacb' succeeded
CRS-2672: Attempting to start 'ora.lparaca.vip' on 'lparaca'
CRS-2677: Stop of 'ora.scan1.vip' on 'lparacb' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'lparaca'
CRS-2676: Start of 'ora.lparaca.vip' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'lparaca'
CRS-2676: Start of 'ora.scan1.vip' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'lparaca'
CRS-2676: Start of 'ora.ons' on 'lparaca' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'lparaca' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.orcl.db' on 'lparaca'
CRS-5017: The resource action "ora.asm check" encountered the following error:
Agent is in abort action - may not connect to instance. For details refer to "(:CLSN00109:)" in "/u01/app/oracle/product/diag/crs/lparaca/crs/trace/crsd_oraagent_oracle.trc".
CRS-2676: Start of 'ora.orcl.db' on 'lparaca' succeeded
CRS-2672: Attempting to start 'ora.orcl.actest.svc' on 'lparaca'
CRS-5017: The resource action "ora.DATA.dg check" encountered the following error:
Agent is in abort action - may not connect to instance. For details refer to "(:CLSN00109:)" in "/u01/app/oracle/product/diag/crs/lparaca/crs/trace/crsd_oraagent_oracle.trc".
CRS-2676: Start of 'ora.orcl.actest.svc' on 'lparaca' succeeded
CRS-6016: Resource auto-start has completed for server lparaca
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/01/21 13:27:45 CLSRSC-343: Successfully started Oracle Clusterware stack
2018/01/21 13:27:47 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
2018/01/21 13:28:34 CLSRSC-474: Initiating upgrade of resource types
2018/01/21 13:31:15 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p first'
2018/01/21 13:31:16 CLSRSC-475: Upgrade of resource types successfully initiated.
2018/01/21 13:32:17 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2018/01/21 13:32:30 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@lparaca grid_12201]#
[root@lparaca grid_12201]# su - oracle
Last login: Sun Jan 21 13:38:00 EST 2018
[oracle@lparaca ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle/product
[oracle@lparaca ~]$ sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Sun Jan 21 13:40:37 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter user-name: [oracle@lparaca ~]$
[oracle@lparaca ~]$ crs_stat -t
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    lparacb
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    lparacb
******************************************************************
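
Note that crs_stat has long been deprecated; the same information, in a more readable layout, is available with:

[oracle@lparaca ~]$ crsctl stat res -t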

We can check the CRS version after the upgrade of the first node, and we will see that the cluster is in ROLLING UPGRADE mode:

[root@lparaca grid_12201]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[root@lparaca grid_12201]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [ROLLING UPGRADE]. <--- This means we still have to continue with the 2nd node.
The cluster active patch level is [37976992].
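
While the cluster is in this state, the per-node software version can also be compared against the active version (node name taken from this example):

[root@lparaca grid_12201]# crsctl query crs softwareversion lparaca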

Check after the upgrade on the second node:

[root@lparaca grid_12201]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].
