Overview

In this section we will configure a RAC standby database (Data Guard) for a RAC primary database. That configuration is also known as MAA (Maximum Availability Architecture). We will use RMAN, and I will skip the boring steps of the software-only database installation and the Grid Infrastructure setup.

These steps can be checked in the Grid Infrastructure and Oracle Database sections. So let's assume a configuration in which we have two sets of two servers. The first set (enode01, enode02) is where a typical Oracle RAC database, called EASTDB, resides, with Oracle Grid Infrastructure providing the ASM storage.

The second set (wnode03 and wnode04) is where only the Grid Infrastructure is installed, with ASM again providing the storage, plus the software-only database binaries. This is where our standby, WESTDB, will live.

So, our goal is to turn this into a full MAA configuration: EASTDB as the RAC primary and WESTDB as the RAC physical standby.

So, let's get going.

Prepare the Primary

First, we have to prepare the primary. The primary database needs the following features and attributes: archivelog mode, force logging, standby redo log groups, the proper archivelog destinations, and a FAL server.

Enable Archivelogs

In a RAC environment, archivelog mode is enabled by bringing the database down, starting one instance in mount mode, enabling it there, and then restarting the whole database with all instances. Therefore we need downtime, just like with a single-instance database. You are changing the static section of the controlfile, after all.

Check archivelog

[oracle@enode01 ~]$ sqlplus / as sysdba
SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 12
Current log sequence 14

So we don't have it yet; let's change it.

Enable Archivelog

[oracle@enode01 ~]$ srvctl stop database -db eastdb
[oracle@enode01 ~]$ srvctl start instance -db eastdb -instance eastdb1 -startoption mount
[oracle@enode01 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Sat Sep 12 15:05:12 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> alter database archivelog;
Database altered.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 13
Next log sequence to archive 15
Current log sequence 15
SQL> exit
[oracle@enode01 ~]$ srvctl stop database -db eastdb
[oracle@enode01 ~]$ srvctl start database -db eastdb
[oracle@enode01 ~]$

Enable Force Logging

Then, as we said, we have to check and enable the force logging feature.

Check force logging

[oracle@enode01 ~]$ sqlplus / as sysdba
SQL> SELECT force_logging FROM v$database;
FORCE_LOGGING
--------------------
NO
SQL> ALTER DATABASE FORCE LOGGING;
Database altered.
SQL> SELECT force_logging FROM v$database;
FORCE_LOGGING
--------------------
YES
SQL>

Create Standby Redo log Groups for Primary

Then we have to create standby redo log groups on the primary. They must be THE SAME size as the online redo log groups, with one more group per thread than the online redo logs. That is important to avoid a particular bug in Oracle.

Create Standby Redo log Groups

SQL> select thread#, group#, bytes/1024/1024 MB from v$log;
THREAD# GROUP# MB
---------- ---------- ----------
1 1 50
1 2 50
2 3 50
2 4 50
SQL> alter database add standby logfile thread 1
group 5 size 50M,
group 6 size 50M,
group 7 size 50M;
Database altered.
SQL> alter database add standby logfile thread 2
group 8 size 50M,
group 9 size 50M,
group 10 size 50M;
Database altered.
SQL> select thread#, group#, bytes/1024/1024 MB from v$standby_log;
THREAD# GROUP# MB
---------- ---------- ----------
1 5 50
1 6 50
1 7 50
2 8 50
2 9 50
2 10 50
6 rows selected.
SQL>

We had four online redo log groups (two per thread), each with one 50 MB member (file), so we created standby redo log groups of the same size, one extra per thread.
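
If you want to double-check that rule (same size, one extra group per thread), the two views can be compared side by side. A minimal sketch, using only standard columns of v$log and v$standby_log:

Compare Online and Standby Redo Logs

SQL> select 'ONLINE ' log_type, thread#, count(*) cnt, max(bytes)/1024/1024 mb
  2  from v$log group by thread#
  3  union all
  4  select 'STANDBY', thread#, count(*), max(bytes)/1024/1024
  5  from v$standby_log group by thread#
  6  order by 1, 2;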

Modify the Archivelog Destinations

Oracle supports up to 31 archivelog destinations, which means we can ship copies of our redo to up to 31 locations. For the purposes of this training, we need only two:

  1. Local Destination (FRA)
  2. Remote Destination (The dataguard service)

So let's do it.

Set Archivelog Destinations

SQL> alter system set log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST valid_for=(ALL_LOGFILES,ALL_ROLES) db_unique_name=eastdb' scope=both sid='*';               <- That command should be ONE LINE, otherwise you will have issues later
System altered.

SQL> alter system set log_archive_dest_state_1='enable' scope=both sid='*';
System altered.


SQL> alter system set log_archive_dest_2='SERVICE=westdb ASYNC valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=westdb' scope=both sid='*';               <- That command should be ONE LINE, otherwise you will have issues later
System altered.

SQL> alter system set log_archive_dest_state_2='DEFER' scope=both sid='*';
System altered.
SQL> alter system set log_archive_max_processes=4 scope=both sid='*';       <- Set this value to 10 if the env. allows it.
System altered.
SQL>
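
Before moving on, it does not hurt to confirm what we just set. A minimal check against v$archive_dest; at this point destination 2 should still show as DEFERRED:

Check the Archivelog Destinations

SQL> col destination format a40
SQL> select dest_id, status, destination, valid_type, valid_role
  2  from v$archive_dest where dest_id in (1,2);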

It is also good to enable automatic standby file management on the primary, in case of a role reversal. With it, any manipulation of the datafiles on the primary will be automatically replicated on the standby as well.

Set Automatic File Management

SQL> alter system set standby_file_management='auto' scope=both sid='*';
System altered.
SQL>
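
Later, once the standby is running, you can see this parameter in action: a datafile operation on the primary is repeated on the standby with no manual work. A hypothetical illustration (the tablespace and size here are made up):

Automatic File Management in Action

SQL> -- With standby_file_management=AUTO, the matching datafile
SQL> -- is created on the standby automatically.
SQL> alter tablespace users add datafile '+DATA' size 100M;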

Lastly, we have to set the FAL server. FAL, or Fetch Archive Log, points the standby to the location where it will look for archivelogs in case they are needed. In an environment with multiple standbys and one primary, that location has to be changed whenever the primary changes. But in our case we ALWAYS KNOW who the primary will be: if EASTDB becomes the standby, the primary will be WESTDB, and vice versa.

Set FAL_Server

SQL> alter system set fal_server='westdb' scope=both sid='*';
System altered.
SQL> exit
[oracle@enode01 ~]$

That concludes the preparation of the primary.

Prepare Standby

Now it is time to prepare the standby. As we said before, here we have ASM providing the storage on nodes wnode03 and wnode04, and the software-only database binaries installed on both nodes. Of course, you have the tnsnames.ora and listener.ora configured for the primary, but that goes without saying :) To prepare the standby we need to: prepare the environment, copy the password file from the primary, modify and start the listener, and start the standby instance.

Prepare the Environment

First, we need to add the oratab entries for the new database:

Add Oratab Entries

[oracle@wnode03 ~]$ echo westdb1:/u01/app/oracle/product/12.1.0/dbhome_1:N >> /etc/oratab
[oracle@wnode03 ~]$ ssh wnode04
[oracle@wnode04 ~]$ echo westdb2:/u01/app/oracle/product/12.1.0/dbhome_1:N >> /etc/oratab
[oracle@wnode04 ~]$ exit
logout
Connection to wnode04 closed.
[oracle@wnode03 ~]$

We will also add the directories for the ADUMP location. ADUMP (the audit file destination) is the only location which, if it doesn't exist, will make Oracle crash :D

Create Adump Location
[oracle@wnode03 ~]$ mkdir -p /u01/app/oracle/admin/westdb/adump
[oracle@wnode03 ~]$ ssh wnode04
[oracle@wnode04 ~]$ mkdir -p /u01/app/oracle/admin/westdb/adump
[oracle@wnode04 ~]$ exit
logout
Connection to wnode04 closed.
[oracle@wnode03 ~]$

Copy the Password File from Primary

To copy the password file from the primary we need to login as grid (Grid Infrastructure owner) on the primary and extract it from the ASM.

Extract, Copy and Import the Password File

--switch to grid
[oracle@enode01 ~]$ su - grid
grid@enode01's password: <oracle>
[grid@enode01 ~]$ . oraenv
ORACLE_SID = [grid]? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@enode01 ~]$

--locate and Extract the password file
[grid@enode01 ~]$ srvctl config database -db eastdb | grep Password
Password file: +DATA/eastdb/orapweastdb
[grid@enode01 ~]$ asmcmd pwcopy +DATA/eastdb/orapweastdb /tmp/orapweastdb
copying +DATA/eastdb/orapweastdb -> /tmp/orapweastdb
[grid@enode01 ~]$ ls -l /tmp/orapw*
-rw-r----- 1 grid oinstall 10240 Sep 12 14:02 /tmp/orapweastdb
[grid@enode01 ~]$

--Copy the password file.
[oracle@enode01 ~]$ scp /tmp/orapweastdb wnode03:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwwestdb1
oracle@enode01's password: <oracle>
oracle@wnode03's password: <oracle>
orapweastdb 100% 10KB 10.0KB/s 00:00
Connection to enode01 closed.
[oracle@enode01 ~]$
[oracle@enode01 ~]$ scp /tmp/orapweastdb wnode04:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwwestdb2
oracle@enode01's password: <oracle>
oracle@wnode04's password: <oracle>
orapweastdb 100% 10KB 10.0KB/s 00:00
Connection to enode01 closed.
[oracle@enode01 ~]$

Modify and Start the Listener on the standby

The global listeners in RAC, including the SCAN (Single Client Access Name) listeners, are started by the grid user, therefore the configuration should be in the Grid Infrastructure home. Modify the listener.ora file to contain the following records:

Configuration of the Global Listener

[grid@wnode03 ~]$ cd $ORACLE_HOME/network/admin
[grid@wnode03 admin]$ cat listener.ora
# listener.ora Network Configuration File: /u01/app/12.1.0/grid/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3 = ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2 = ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR = SUBNET
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN3 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN2 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = westdb_DGMGRL.example.com)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = westdb1)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = westdb.example.com)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = westdb1)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = clone)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = westdb1)
    )
)

VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET

MGMTLSNR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = MGMTLSNR))
  )

ADR_BASE_MGMTLSNR = /u01/app/grid
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR = ON

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
  )

ADR_BASE_LISTENER = /u01/app/grid

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON

LISTENER_SCAN3 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN3))
  )

LISTENER_SCAN2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN2))
  )

ADR_BASE_LISTENER_SCAN3 = /u01/app/grid

LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
  )

ADR_BASE_LISTENER_SCAN2 = /u01/app/grid
ADR_BASE_LISTENER_SCAN1 = /u01/app/grid
[grid@wnode03 admin]$
[grid@wnode03 admin]$ srvctl stop listener -node wnode03
[grid@wnode03 admin]$ srvctl start listener -node wnode03
[grid@wnode03 admin]$ lsnrctl status
..........................................................
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.0.2.121)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.0.2.171)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
**Service "clone" has 1 instance(s).                                                          <- These Entries ARE IMPORTANT
Instance "westdb1", status UNKNOWN, has 1 handler(s) for this**                               <- These Entries ARE IMPORTANT
service...

P.S. FORMATTING IS IMPORTANT. Don't forget the spaces in listener.ora, otherwise you will have issues.
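
For completeness, the tnsnames.ora entries on both sites (the part that "goes without saying") could look roughly like this. This is a sketch only: cluster01-scan is an assumed SCAN name for the east cluster (cluster02-scan appears later in the duplicate command), and the clone alias deliberately points straight at wnode03, where the auxiliary instance is statically registered:

Example tnsnames.ora Entries

EASTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = eastdb.example.com))
  )

WESTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster02-scan)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = westdb.example.com))
  )

CLONE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = wnode03)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = clone))
  )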

Start the Standby Instance

We will prepare a pfile for a single instance of the standby. The standby will first run on only ONE instance; we will add the second one later. As the oracle user on wnode03:

Create pfile & Start the instance

[oracle@wnode03 ~]$ . oraenv
ORACLE_SID = [oracle] ? westdb1
The Oracle base has been set to /u01/app/oracle
[oracle@wnode03 ~]$
[oracle@wnode03 ~]$ cd $ORACLE_HOME/dbs
[oracle@wnode03 dbs]$ vi initwestdb.ora
### Add the following entries
db_name=eastdb
db_unique_name=westdb
db_domain=example.com
:wq!
[oracle@wnode03 dbs]$
[oracle@wnode03 dbs]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Sat Sep 12 14:16:27 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile=$ORACLE_HOME/dbs/initwestdb.ora
ORACLE instance started.
Total System Global Area 243269632 bytes
Fixed Size 2923000 bytes
Variable Size 184550920 bytes
Database Buffers 50331648 bytes
Redo Buffers 5464064 bytes
SQL> exit
[oracle@wnode03 dbs]$

Oracle can start with literally only a db_name, but we also need db_unique_name for our example, plus db_domain for the global name.

Restore the Primary for the Standby

Now that we have a started instance, we can begin the restore (RMAN duplicate) from the primary. Again, make sure you can tnsping from primary to standby and from standby to primary.
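
A quick sanity check from both sides might look like this (aliases as sketched in the tnsnames.ora above):

Check Connectivity

[oracle@wnode03 dbs]$ tnsping eastdb
..........................................................
OK (10 msec)
[oracle@wnode03 dbs]$ tnsping clone
..........................................................
OK (0 msec)
[oracle@enode01 ~]$ tnsping westdb
..........................................................
OK (10 msec)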

Restore the Primary for the Standby

[oracle@wnode03 dbs]$ rman target sys/oracle_4U@eastdb auxiliary sys/oracle_4U@clone
Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 12 14:20:34 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
connected to target database: EASTDB (DBID=812282532)
connected to auxiliary database: EASTDB (not mounted)
RMAN> run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby1 type disk;
duplicate target database for standby from active database
  spfile
   set db_unique_name='westdb'
   set cluster_database='false'
   set control_files='+DATA','+FRA'
   set remote_listener='cluster02-scan:1521'
   set fal_server='eastdb'
   set audit_file_dest='/u01/app/oracle/admin/westdb/adump'
   nofilenamecheck;
allocate auxiliary channel stby type disk;
   sql channel stby "alter database recover managed standby database disconnect";
}
....................................................................
sql statement: alter system archive log current
contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script
datafile 1 switched to datafile copy
input datafile copy RECID=1 STAMP=892026441 file name=+DATA/WESTDB/DATAFILE/system.284.892026001
datafile 2 switched to datafile copy
input datafile copy RECID=2 STAMP=892026441 file name=+DATA/WESTDB/DATAFILE/undotbs2.286.892026023
datafile 3 switched to datafile copy
input datafile copy RECID=3 STAMP=892026441 file name=+DATA/WESTDB/DATAFILE/sysaux.287.892026011
datafile 4 switched to datafile copy
input datafile copy RECID=4 STAMP=892026442 file name=+DATA/WESTDB/DATAFILE/undotbs1.276.892026173
datafile 5 switched to datafile copy
input datafile copy RECID=5 STAMP=892026442 file name=+DATA/WESTDB/DATAFILE/example.281.892025993
datafile 6 switched to datafile copy
input datafile copy RECID=6 STAMP=892026442 file name=+DATA/WESTDB/DATAFILE/users.294.892026273
Finished Duplicate Db at 02-OCT-15
allocated channel: stby
channel stby: SID=46 device type=DISK
sql statement: alter database recover managed standby database disconnect
released channel: prmy1
released channel: prmy2
released channel: prmy3
released channel: prmy4
released channel: stby1
released channel: stby
RMAN> exit
[oracle@wnode03 ~]$
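
At this point the duplicate has finished and managed recovery is already running (the last channel command in the run block took care of that). A quick way to confirm the database knows its new role; it should report something like this:

Confirm the Standby Role

SQL> select name, db_unique_name, database_role, open_mode from v$database;

NAME      DB_UNIQUE_NAME  DATABASE_ROLE    OPEN_MODE
--------- --------------- ---------------- --------------------
EASTDB    westdb          PHYSICAL STANDBY MOUNTED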

Completing the Standby Configuration

Now that we have a standby database, we have to integrate it into the Grid Infrastructure, so we can take advantage of all those nice features. However, before that, we have to enable redo shipping and check that everything works correctly. So in this part we will: enable the archivelog destination and verify the apply, rebuild the spfile, add the clusterware entries, and verify the replication.

Enable the Archivelog Destination & Check the Archivelog Apply

Remember that we set the second archivelog destination on the primary to DEFER :) Well, I sure do. That means Oracle won't ship the archivelogs to that service yet. We have to change that.

On Primary

SQL> col destination format a10
SQL> col error format a10
SQL> select dest_id, status, destination, archiver, error, valid_now
     from v$archive_dest where dest_id = 2;

DEST_ID STATUS DESTINATIO ARCHIVER ERROR VALID_NOW
------- --------- ---------- --------- ---------- --------------
2 DEFERRED westdb LGWR ORA-12514 UNKNOWN
SQL>

SQL> alter system set log_archive_dest_state_2=ENABLE;
System altered.

SQL> select dest_id, status, destination, archiver, error, valid_now from v$archive_dest where dest_id = 2;
DEST_ID STATUS DESTINATIO ARCHIVER E VALID_NOW
---------- --------- ---------- ---------- - ----------------
2 VALID westdb LGWR YES
SQL>

After that, we have to verify that the redo is actually being applied.

Verify Application

--On Primary
[oracle@enode01 ~]$ sqlplus / as sysdba
SQL> SELECT THREAD#, MAX(SEQUENCE#) FROM V$ARCHIVED_LOG GROUP BY THREAD#;
THREAD# MAX(SEQUENCE#)
---------- --------------
1 11
2 2

--On Standby
SQL> SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY THREAD#;
THREAD# SEQUENCE# APPLIED
---------- ---------- ---------
1 11 YES
2 2 YES
SQL>

Now that we have checked and verified that the archivelogs are being applied, we have to re-create the spfile. Remember that the spfile was created during the duplicate procedure and still contains settings referring to the primary.

Modify the pfile

SQL> create pfile='/tmp/init.ora' from spfile;
File created.
SQL> exit
[oracle@wnode03 ~]$
[oracle@wnode03 ~]$ vi /tmp/init.ora
:g/westdb1._/d
10 fewer lines

:%s/eastdb._/westdb._/g
10 substitutions on 10 lines

:%s/eastdb1/westdb1/g
13 substitutions on 13 lines

:%s/eastdb2/westdb2/g
13 substitutions on 13 lines

--Be sure the following values are SET:
*.cluster_database=TRUE
*.dispatchers='(PROTOCOL=TCP) (SERVICE=westdbXDB)'
*.log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST valid_for=(ALL_LOGFILES,ALL_ROLES) db_unique_name=westdb'
*.log_archive_dest_2='SERVICE=eastdb ASYNC valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=eastdb'
*.log_archive_dest_state_2='enable'
:wq!

[oracle@wnode03 ~]$ sqlplus / as sysdba
SQL> create spfile='+DATA/westdb/spfilewestdb.ora' from pfile='/tmp/init.ora';
File created.
SQL> exit
[oracle@wnode03 ~]$
[oracle@wnode03 ~]$ vi $ORACLE_HOME/dbs/initwestdb1.ora
### And Add the following entry
spfile='+DATA/westdb/spfilewestdb.ora'
:wq!

[oracle@wnode03 ~]$ scp $ORACLE_HOME/dbs/initwestdb1.ora wnode04:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initwestdb2.ora
initwestdb1.ora 100% 39 0.0KB/s 00:00
[oracle@wnode03 ~]$

Phew, that was something. Now that we are done with the modifications, we have to shut down the standby before adding it to the Grid Infrastructure.

Shutdown the Standby

[oracle@wnode03 ~]$ sqlplus / as sysdba
SQL> shutdown immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> exit
[oracle@wnode03 ~]$

Add Clusterware entries for the Standby

Adding entries to the clusterware registry is rather simple:

On standby

[oracle@wnode03 ~]$ srvctl add database -db westdb -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1 -dbtype RAC -spfile '+DATA/westdb/spfilewestdb.ora' -role physical_standby -diskgroup "DATA,FRA" -dbname eastdb -domain example.com
[oracle@wnode03 ~]$ srvctl add instance -db westdb -instance westdb1 -node wnode03
[oracle@wnode03 ~]$ srvctl add instance -db westdb -instance westdb2 -node wnode04
[oracle@wnode03 ~]$

It is important that these resources are added as the owner of the database home (the oracle user). Otherwise, you might not be able to start the database later due to permission restrictions.

Finally, we can start and verify the standby:

On Standby

[oracle@wnode03 ~]$ srvctl start database -db westdb -startoption mount
[oracle@wnode03 ~]$ srvctl status database -db westdb -verbose
Instance westdb1 is running on node wnode03. Instance status: Mounted (Closed).
Instance westdb2 is running on node wnode04. Instance status: Mounted (Closed).
[oracle@wnode03 ~]$
[oracle@wnode03 ~]$ sqlplus / as sysdba
SQL> alter database recover managed standby database disconnect;
Database altered.
SQL> exit
[oracle@wnode03 ~]$ srvctl config database -db westdb
Database unique name: westdb
Database name: eastdb
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/westdb/spfilewestdb.ora
Password file:
Domain: example.com
Start options: open
Stop options: immediate
Database role: PHYSICAL_STANDBY                 <- Physical Standby
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA,FRA
Mount point paths:
Services:
Type: RAC                                       <- RAC Database
Start concurrency:
Stop concurrency:
Database is enabled
Database is individually enabled on nodes:
Database is individually disabled on nodes:
OSDBA group: dba
OSOPER group: oper
Database instances: westdb1,westdb2
Configured nodes: wnode03,wnode04
Database is administrator managed
[oracle@wnode03 ~]$

Verify Replication

To verify replication, we can simply check two things: is the MRP process running on the standby, and are the archivelogs being applied? Don't forget that a Data Guard standby can be either physical or logical, and the whole redo transport and apply machinery relies on the following processes:


Archiver Process (ARCn or ARCH) – The archiver process is responsible for archiving the online redo logs. The archival destination can be a local destination or a remote standby database site. In a Data Guard configuration, one of the archival destinations must be a standby database, and it is the archiver process of the primary database that ships the archived redo there.

Log Writer (LGWR) – The log writer process on the primary database writes entries from the redo log buffer to the online redo log file. When the current online redo log file is full, it triggers the archiver process to start the archiving activity.

Remote File Server (RFS) Process – The RFS process runs on the standby database and is responsible for communication between the primary and the standby database. For the log transport service, the RFS on the standby database receives the redo records from the archiver or the log writer process of the primary database over Oracle Net and writes them to disk (the standby redo logs) on the standby site.

Fetch Archive Log (FAL) – The FAL process has two components: FAL Client and FAL Server. Both processes are used for archive gap resolution. If the Managed Recovery Process (MRP) on the standby database site detects an archive gap sequence, it initiates a fetch request to the FAL client on the standby site. This action, in turn, requests the FAL server process on the primary database to re-transmit the archived log files to resolve the gap sequence. Archive gap sequences will be discussed later in this chapter.

Once the log transport service completes the transmission of redo records to the standby site, the log apply service starts applying the changes to the standby database. The log apply service operates solely on the standby database. The following processes on the standby site facilitate the log apply operations:

Managed Recovery Process (MRP) – The MRP applies the redo entries from the archived redo logs onto the physical standby database.

Logical Standby Process (LSP) – The LSP applies the redo records from archived redo logs to the logical standby database. The Oracle database log miner engine is used by the logical standby process for the SQL apply operations. Using the log miner engine, the LSP process recreates the SQL statements from redo logs that have been executed on the primary database. These statements are then applied to the standby database to keep it current with the primary database.

Because of that, there are limitations to the logical standby, but we will address those in the Logical Data Guard section. Enough of this sidetrack :)


So, to verify the apply, we can do the following:

Verify Apply

--On Standby
[oracle@wnode03 ~]$ pgrep -lf mrp
25968 ora_mrp0_westdb1
[oracle@wnode03 dbs]$
[oracle@wnode03 ~]$ sqlplus / as sysdba
SQL> SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY THREAD#;
THREAD# SEQUENCE# APPLIED
---------- ---------- ---------
1 12 YES
1 13 YES
1 11 YES
2 4 IN-MEMORY
2 3 YES
2 2 YES
6 rows selected.
SQL>

--on Primary:
SQL> alter system archive log current;
System altered.
SQL> SELECT THREAD#, MAX(SEQUENCE#) FROM GV$ARCHIVED_LOG WHERE RESETLOGS_CHANGE# = (SELECT MAX(RESETLOGS_CHANGE#) FROM GV$ARCHIVED_LOG) GROUP BY THREAD#;
THREAD# MAX(SEQUENCE#)
---------- --------------
1 16
2 7
SQL>

--On Standby
SQL> SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY THREAD#;
THREAD# SEQUENCE# APPLIED
---------- ---------- ---------
1 16 YES
1 15 YES
1 13 YES
1 11 YES
1 14 YES
1 12 YES
2 6 YES
2 5 YES
2 7 IN-MEMORY
2 3 YES
2 2 YES
THREAD# SEQUENCE# APPLIED
---------- ---------- ---------
2 4 YES
12 rows selected.
SQL>

SQL> select inst_id, process, status, sequence#, thread# from gv$managed_standby;
INST_ID PROCESS STATUS SEQUENCE# THREAD#
---------- --------- ------------ ---------- ----------
1 ARCH CLOSING 13 1
1 ARCH CONNECTED 0 0
1 ARCH CLOSING 7 2
1 ARCH CLOSING 5 2
1 RFS IDLE 8 2
1 RFS IDLE 0 0
1 RFS IDLE 0 0
1 MRP0 APPLYING_LOG 8 2
2 ARCH CLOSING 16 1
2 ARCH CONNECTED 0 0
2 ARCH CLOSING 14 1
INST_ID PROCESS STATUS SEQUENCE# THREAD#
---------- --------- ------------ ---------- ----------
2 ARCH CLOSING 15 1
2 RFS IDLE 0 0
2 RFS IDLE 0 0
2 RFS IDLE 17 1
15 rows selected.
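
As one last check, the apply and transport lag can be watched on the standby through v$dataguard_stats; values close to zero mean the standby is keeping up:

Check Apply and Transport Lag

--On Standby
SQL> col name format a15
SQL> col value format a20
SQL> select name, value from v$dataguard_stats
  2  where name in ('apply lag', 'transport lag');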