Overview
In this section we will configure a RAC standby database (dataguard) for a RAC primary database. That configuration is also known as MAA (Maximum Availability Architecture). We will use RMAN, and I will skip the boring steps of the software-only database installation and the Grid Infrastructure setup.
These steps can be checked in the Grid Infrastructure and Oracle Database sections. So let's assume a configuration in which we have 2 sets of 2 servers. The first set (enode01, enode02) is where a typical Oracle RAC database, called EASTDB, resides with Oracle Grid Infrastructure providing the ASM storage.
The second set (wnode03 and wnode04) is where only the Grid Infrastructure is installed, again with ASM providing the storage, and the database software is installed as software only (no database created yet).
So, our goal is to bring this setup to a full MAA configuration: EASTDB as a RAC primary and WESTDB as its RAC standby.
So, let's get going.
Prepare Primary
First, we have to prepare the primary. The primary database needs to have the following features / attributes:
- Archivelog mode enabled
- Force logging enabled
- Standby Redo log groups created
- FAL_SERVER and other database parameters modified
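Before touching anything, the current state of the first two prerequisites can be confirmed in one query (a quick check; run as SYSDBA on any instance):

```sql
-- Quick prerequisite check on the primary.
-- After the steps below, LOG_MODE should read ARCHIVELOG
-- and FORCE_LOGGING should read YES.
SELECT log_mode, force_logging FROM v$database;
```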
Enable Archivelogs
In a RAC environment, archivelog mode is enabled by bringing the whole database down, starting one instance in mount mode, enabling it there, and then restarting the database with all instances. Therefore we need downtime, just like with a single-instance database. You are changing the static section of the controlfile, after all.
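Put together, the downtime sequence described above can be sketched with srvctl (the same commands used in the sessions that follow; database and instance names match our EASTDB setup):

```bash
# Sketch: enabling archivelog mode on a RAC database (eastdb).
# 1. Stop every instance of the database.
srvctl stop database -db eastdb
# 2. Mount a single instance.
srvctl start instance -db eastdb -instance eastdb1 -startoption mount
# 3. Enable archivelog mode from that instance.
sqlplus / as sysdba <<'EOF'
alter database archivelog;
exit
EOF
# 4. Restart the whole database with all instances.
srvctl stop instance -db eastdb -instance eastdb1
srvctl start database -db eastdb
```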
<Code:bash|Check archivelog>
[oracle@enode01 ~]$ sqlplus / as sysdba

SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     12
Current log sequence           14
</Code>
We don't have it enabled yet, so let's change that.
<Code:bash|Enable Archivelog>
[oracle@enode01 ~]$ srvctl start instance -db eastdb -instance eastdb1 -startoption mount
[oracle@enode01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sat Sep 12 15:05:12 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> alter database archivelog;

Database altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     13
Next log sequence to archive   15
Current log sequence           15
SQL> exit
[oracle@enode01 ~]$ srvctl stop database -db eastdb
[oracle@enode01 ~]$ srvctl start database -db eastdb
[oracle@enode01 ~]$
</Code>
Enable Force Logging
Then, as we said, we have to check the Force Logging feature and enable it if needed.
<Code:bash|Check force logging>
[oracle@enode01 ~]$ sqlplus / as sysdba

SQL> SELECT force_logging FROM v$database;

FORCE_LOGGING
--------------------
NO

SQL> ALTER DATABASE FORCE LOGGING;

Database altered.

SQL> SELECT force_logging FROM v$database;

FORCE_LOGGING
--------------------
YES

SQL>
</Code>
Create Standby Redo log Groups for Primary
Then we have to create Standby Redo log groups on the primary, with THE SAME size as the Redo log groups. That is important to avoid a particular bug in Oracle.
<Code:bash|Create Standby Redo log Groups>
SQL> select thread#, group#, bytes/1024/1024 MB from v$log;

   THREAD#     GROUP#         MB
---------- ---------- ----------
         1          1         50
         1          2         50
         2          3         50
         2          4         50

SQL> alter database add standby logfile thread 1 group 5 size 50M, group 6 size 50M, group 7 size 50M;

Database altered.

SQL> alter database add standby logfile thread 2 group 8 size 50M, group 9 size 50M, group 10 size 50M;

Database altered.

SQL> select thread#, group#, bytes/1024/1024 MB from v$standby_log;

   THREAD#     GROUP#         MB
---------- ---------- ----------
         1          5         50
         1          6         50
         1          7         50
         2          8         50
         2          9         50
         2         10         50

6 rows selected.

SQL>
</Code>
We had 4 Redo log groups (2 per thread), each with 1 member (file) of 50 MB, so we created Standby Redo log groups of the same size, with 3 groups per thread.
Modify the Archivelog Destinations
Oracle can have up to 31 archivelog destinations. That means we can send copies of our redo to up to 31 locations. For the purposes of this training, we need only 2:
- Local Destination (FRA)
- Remote Destination (The dataguard service)
So let's do it.
<Code:bash|Set Archivelog Destinations>
SQL> alter system set log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST valid_for=(ALL_LOGFILES,ALL_ROLES) db_unique_name=eastdb' scope=both sid='*';

System altered.

SQL> alter system set log_archive_dest_state_1='enable' scope=both sid='*';

System altered.

SQL> alter system set log_archive_dest_2='service=westdb valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=westdb' scope=both sid='*';

System altered.

SQL> alter system set log_archive_dest_state_2='DEFER' scope=both sid='*';

System altered.

SQL> alter system set log_archive_max_processes=4 scope=both sid='*';   <- Set this value to 10 if the env. allows it.

System altered.

SQL>
</Code>
It is also good to enable Automatic File Management on the primary in case of a role reversal. With it, any manipulation of the OS files of the primary will be automatically replicated on the standby as well.
<Code:bash|Set Automatic File Management>
SQL> alter system set standby_file_management='auto' scope=both sid='*';

System altered.

SQL>
</Code>
Lastly, we have to set the FAL server. FAL (Fetch Archived Log) specifies the location where the standby will look for archivelogs in case they are needed. In an environment with multiple standbys and one primary, that location should be changed whenever the primary changes. But in our case, we ALWAYS KNOW that the primary will be the other database: if EASTDB becomes standby, the primary will always be WESTDB, and vice versa.
<Code:bash|Set FAL_Server>
SQL> alter system set fal_server='westdb' scope=both sid='*';

System altered.

SQL> exit
[oracle@enode01 ~]$
</Code>
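For completeness: once the standby exists, the reciprocal setting over there points back at the other database (a sketch, to be run on WESTDB later; it follows directly from the "vice versa" rule above):

```sql
-- On westdb: if the roles ever reverse, the primary will
-- always be eastdb, so FAL_SERVER can be set statically there too.
alter system set fal_server='eastdb' scope=both sid='*';
```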
That concludes the preparation of the primary.
Prepare Standby
Now it is time to prepare the standby. As we said before, here we have ASM providing the storage on nodes wnode03 and wnode04, with only the database software binaries installed on both nodes. Of course, you have the tnsnames.ora and listener.ora configured for the primary, but that goes without saying :) To prepare the standby we need to:
- Create oratab entries and adump directory for the new Standby Database
- Copy & Import the password file from the Primary Database
- Modify and start Listener Configuration
- Create a pfile and start an instance
Prepare the Environment
First, we need to add the oratab entries for the new database:
<Code:bash|Add Oratab Entries>
[oracle@wnode03 ~]$ echo westdb1:/u01/app/oracle/product/12.1.0/dbhome_1:N >> /etc/oratab
[oracle@wnode03 ~]$ ssh wnode04
[oracle@wnode04 ~]$ echo westdb2:/u01/app/oracle/product/12.1.0/dbhome_1:N >> /etc/oratab
[oracle@wnode04 ~]$ exit
logout
Connection to wnode04 closed.
[oracle@wnode03 ~]$
</Code>
We will also add the directories for the ADUMP location. ADUMP is the only location which, if it doesn't exist, will make Oracle crash :D
<Code:bash|Create Adump location>
[oracle@wnode03 ~]$ mkdir -p /u01/app/oracle/admin/westdb/adump
[oracle@wnode03 ~]$ ssh wnode04
[oracle@wnode04 ~]$ mkdir -p /u01/app/oracle/admin/westdb/adump
[oracle@wnode04 ~]$ exit
logout
Connection to wnode04 closed.
[oracle@wnode03 ~]$
</Code>
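As a side note, the oratab entry format is simply SID:ORACLE_HOME:startup_flag. Here is a small self-contained sketch of the append-and-verify pattern used above, run against a temporary file rather than the real /etc/oratab:

```shell
# Demonstrate the oratab append pattern against a temp file.
tmp_oratab=$(mktemp)

# One entry per local instance: SID:ORACLE_HOME:startup_flag
# (N = do not auto-start the instance at boot).
echo "westdb1:/u01/app/oracle/product/12.1.0/dbhome_1:N" >> "$tmp_oratab"
echo "westdb2:/u01/app/oracle/product/12.1.0/dbhome_1:N" >> "$tmp_oratab"

# Verify both entries landed, and parse an ORACLE_HOME back out.
count=$(grep -c '^westdb' "$tmp_oratab")
home=$(awk -F: '$1 == "westdb1" { print $2 }' "$tmp_oratab")
echo "$count entries, home=$home"
```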
Copy the Password File from Primary
To copy the password file from the primary, we need to log in as grid (the Grid Infrastructure owner) on the primary and extract it from ASM.
<Code:bash|Extract, Copy and Import the Password File>
--switch to grid
[oracle@enode01 ~]$ su - grid
grid@enode01's password: <oracle>
[grid@enode01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@enode01 ~]$

--locate and extract the password file
[grid@enode01 ~]$ srvctl config database -db eastdb | grep Password
Password file: +DATA/eastdb/orapweastdb
[grid@enode01 ~]$ asmcmd pwcopy +DATA/eastdb/orapweastdb /tmp/orapweastdb
copying +DATA/eastdb/orapweastdb -> /tmp/orapweastdb
[grid@enode01 ~]$ ls -l /tmp/orapw*
-rw-r----- 1 grid oinstall 10240 Sep 12 14:02 /tmp/orapweastdb
[grid@enode01 ~]$

--copy the password file
[oracle@enode01 ~]$ scp /tmp/orapweastdb wnode03:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwwestdb1
oracle@wnode03's password: <oracle>
orapweastdb                                   100%   10KB  10.0KB/s   00:00
[oracle@enode01 ~]$ scp /tmp/orapweastdb wnode04:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwwestdb2
oracle@wnode04's password: <oracle>
orapweastdb                                   100%   10KB  10.0KB/s   00:00
[oracle@enode01 ~]$
</Code>
Modify and Start the Listener on the Standby
The Global Listeners in RAC, also known as SCAN (Single Client Access Name) listeners, are started by the Grid user; therefore the configuration should be in the Grid Infrastructure home. Modify the listener.ora file to contain the following records:
<Code:bash|Configuration of the Global Listener>
[grid@wnode03 ~]$ cd $ORACLE_HOME/network/admin
[grid@wnode03 admin]$ cat listener.ora
# listener.ora Network Configuration File: /u01/app/12.1.0/grid/network/admin/listener.ora
# Generated by Oracle configuration tools.

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3 = ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2 = ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON

VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR = SUBNET
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN3 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN2 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = westdb_DGMGRL.example.com)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = westdb1)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = westdb.example.com)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = westdb1)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = clone)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = westdb1)
    )
  )

VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET

MGMTLSNR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = MGMTLSNR))
  )

ADR_BASE_MGMTLSNR = /u01/app/grid
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR = ON

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
  )

ADR_BASE_LISTENER = /u01/app/grid
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON

LISTENER_SCAN3 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN3))
  )

LISTENER_SCAN2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN2))
  )

ADR_BASE_LISTENER_SCAN3 = /u01/app/grid

LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
  )

ADR_BASE_LISTENER_SCAN2 = /u01/app/grid
ADR_BASE_LISTENER_SCAN1 = /u01/app/grid
[grid@wnode03 admin]$
[grid@wnode03 admin]$ srvctl stop listener -node wnode03
[grid@wnode03 admin]$ srvctl start listener -node wnode03
[grid@wnode03 admin]$ lsnrctl status
..........................................................
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.0.2.121)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.0.2.171)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "clone" has 1 instance(s).                                           <- This entry IS IMPORTANT
  Instance "westdb1", status UNKNOWN, has 1 handler(s) for this service...   <- This entry IS IMPORTANT
</Code>
P.S. FORMATTING IS IMPORTANT: don't miss any spaces in listener.ora, otherwise you will have issues.