oracle_guard_installation [2020/10/25 15:17] – andonovj
=====Overview=====
In this section we will configure a Data Guard standby as a RAC standby to a RAC primary database. That configuration is also known as MAA (Maximum Availability Architecture). We will use RMAN, and I will skip the steps for the software-only database installation and the Grid Infrastructure; those steps can be checked in the Grid Infrastructure and Oracle Database sections.

So let's assume a configuration in which we have 2 sets of 2 servers. The first set (enode1, enode2) is where a typical Oracle RAC database resides with Oracle Grid Infrastructure, with ASM providing the storage. The second set (wnode3 and wnode4) is where only the Grid Infrastructure is installed, again with ASM providing the storage, and only the database software (binaries) is installed.

{{: }}

So, our goal is to bring it to:

{{: }}

So, let's get going.
  * Has Force Logging
  * Has Standby Redo Log Groups
  * Modify FAL_SERVER and other database configs
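As a quick reference for the checklist above, the force-logging and standby redo log steps usually look roughly like this (a hedged sketch: the group numbers and the 50M size are illustrative assumptions, not values from this guide; size the standby logs to match your online redo logs):

<code>
-- Illustrative sketch only, run on the primary
SQL> select force_logging from v$database;
SQL> alter database force logging;
-- Standby redo logs: one group per thread more than the online groups
SQL> alter database add standby logfile thread 1 group 11 size 50M;
SQL> alter database add standby logfile thread 2 group 21 size 50M;
</code>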
====Enable Archivelogs====
<code>
SQL> alter system set log_archive_dest_1='... valid_for=(ALL_LOGFILES,...)' sid='...';

System altered.

SQL> alter system set log_archive_dest_state_1='...' sid='...';

System altered.

SQL> alter system set log_archive_dest_2='... valid_for=(ONLINE_LOGFILES,...)' scope=both sid='...';

System altered.

SQL> alter system set log_archive_dest_state_2='...' sid='...';

System altered.
SQL>
</code>
That concludes the preparation of the primary.

=====Prepare Standby=====
Now it is time to prepare the standby. As we said before, here we have ASM providing the storage on nodes wnode3 and wnode4, and only the database software binaries installed on both nodes. Of course you have the tnsnames.ora and listener.ora configured for the primary, but that goes without saying :)

To prepare the standby we need to:

  * Create oratab entries and the adump directory for the new Standby Database
  * Copy & import the password file from the Primary Database
  * Modify and start the Listener Configuration
  * Create a pfile and start an instance

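For reference, the tnsnames.ora entries on both sites could look roughly like this (a hedged sketch: the SCAN hostnames and ports are assumptions for illustration; only the service names come from this setup):

<code>
# Hypothetical tnsnames.ora entries; adjust SCAN names and ports to your environment
EASTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = enode-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = eastdb.example.com))
  )

WESTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = wnode-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = westdb.example.com))
  )
</code>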
====Prepare the Environment====
First, we need to add the oratab entries for the new database:

<code>
[oracle@wnode03 ~]$ echo westdb1:/...
[oracle@wnode03 ~]$ ssh wnode04
[oracle@wnode04 ~]$ echo westdb2:/...
[oracle@wnode04 ~]$ exit
logout
Connection to wnode04 closed.
[oracle@wnode03 ~]$
</code>

We will also add the directories for the ADUMP location. ADUMP is the only location without which the instance cannot start:

<code>
[oracle@wnode03 ~]$ mkdir -p /...
[oracle@wnode03 ~]$ ssh wnode04
[oracle@wnode04 ~]$ mkdir -p /...
[oracle@wnode04 ~]$ exit
logout
Connection to wnode04 closed.
[oracle@wnode03 ~]$
</code>

====Copy the Password File from the Primary====
To copy the password file from the primary, we need to log in as grid (the Grid Infrastructure owner) on the primary and extract it from ASM.

<code>
--switch to grid
[oracle@enode1 ~]$ su - grid
grid@enode01's password:
[grid@enode01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /...
[grid@enode01 ~]$

--locate and extract the password file
[grid@enode01 ~]$ srvctl config database -db eastdb | grep Password
Password file: +DATA/...
[grid@enode01 ~]$ asmcmd pwcopy +DATA/... /tmp/orapweastdb
copying +DATA/... -> /tmp/orapweastdb
[grid@enode01 ~]$ ls -l /tmp/orapw*
-rw-r----- 1 grid oinstall 10240 Sep 12 14:02 /tmp/orapweastdb
[grid@enode01 ~]$

--copy the password file to the standby nodes
[oracle@enode01 ~]$ scp /tmp/orapweastdb wnode03:/...
oracle@enode01's password:
oracle@wnode03's password:
orapweastdb 100% 10KB 10.0KB/s 00:00
Connection to enode01 closed.
[oracle@enode01 ~]$
[oracle@enode01 ~]$ scp /tmp/orapweastdb wnode04:/...
oracle@enode01's password:
oracle@wnode04's password:
orapweastdb 100% 10KB 10.0KB/s 00:00
Connection to enode01 closed.
[oracle@enode01 ~]$
</code>

====Modify and Start the Listener on the Standby====
The global listeners in RAC, also known as SCAN (Single Client Access Name) listeners, are started by the Grid user, so the configuration should live in the Grid Infrastructure home. Modify the listener.ora file to contain the following records:

<code>
[grid@wnode03 ~]$ cd $ORACLE_HOME/network/admin
[grid@wnode03 admin]$ cat listener.ora
# listener.ora Network Configuration File: /...
# Generated by Oracle configuration tools.

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3 = ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2 = ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR = SUBNET
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN3 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN2 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = westdb_DGMGRL.example.com)
      (ORACLE_HOME = /...)
      (SID_NAME = westdb1)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = westdb.example.com)
      (ORACLE_HOME = /...)
      (SID_NAME = westdb1)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = clone)
      (ORACLE_HOME = /...)
      (SID_NAME = westdb1)
    )
  )

VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET

MGMTLSNR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = MGMTLSNR))
  )

ADR_BASE_MGMTLSNR = /...
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR = ON

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
  )

ADR_BASE_LISTENER = /...
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON

LISTENER_SCAN3 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN3))
  )

LISTENER_SCAN2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN2))
  )

ADR_BASE_LISTENER_SCAN3 = /...

LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
  )

ADR_BASE_LISTENER_SCAN2 = /...
ADR_BASE_LISTENER_SCAN1 = /...
[grid@wnode03 admin]$
[grid@wnode03 admin]$ srvctl stop listener -node wnode03
[grid@wnode03 admin]$ srvctl start listener -node wnode03
[grid@wnode03 admin]$ lsnrctl status
..........................................................
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.0.2.121)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.0.2.171)(PORT=1521)))
Services Summary...
Service "..." has 1 instance(s).
  Instance "...", status UNKNOWN, has 1 handler(s) for this service...
Service "..." has 1 instance(s).
  Instance "...", status UNKNOWN, has 1 handler(s) for this service...
</code>

P.S. FORMATTING IS IMPORTANT; don't forget the spaces, otherwise you will have issues.

====Start the Standby Instance====
We will prepare a pfile for a single standby instance. The standby will at first run on only ONE instance; then we will add more.
As the oracle user on wnode3:

<code>
[oracle@wnode03 ~]$ . oraenv
ORACLE_SID = [oracle] ? westdb1
The Oracle base has been set to /...
[oracle@wnode03 ~]$ cd $ORACLE_HOME/dbs
[oracle@wnode03 dbs]$ vi initwestdb.ora
### Add the following entries
db_name=eastdb
db_unique_name=westdb
db_domain=example.com
:wq!
[oracle@wnode03 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sat Sep 12 14:16:27 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile=$ORACLE_HOME/dbs/initwestdb.ora
ORACLE instance started.

Total System Global Area  243269632 bytes
Fixed Size                  2923000 bytes
Variable Size             184550920 bytes
Database Buffers           50331648 bytes
Redo Buffers                5464064 bytes
SQL> exit
[oracle@wnode03 dbs]$
</code>

Oracle can start with literally just a db_name, but for our example we also need db_unique_name, plus a db_domain for the global name.
====Restore the Primary for the Standby====
Now that we have a started instance, we can begin the restore from the primary. Again, make sure you can tnsping from the primary to the standby and from the standby to the primary.
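A quick connectivity check could look like this (a sketch, assuming TNS aliases eastdb and westdb exist on both sides; output abbreviated):

<code>
--On the primary
[oracle@enode01 ~]$ tnsping westdb
--On the standby
[oracle@wnode03 ~]$ tnsping eastdb
--Both should finish with: OK (xx msec)
</code>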

<code>
[oracle@wnode03 dbs]$ rman target sys/...@eastdb auxiliary sys/...@westdb

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 12 14:20:34 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: EASTDB (DBID=812282532)
connected to auxiliary database: EASTDB (not mounted)

RMAN> run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby1 type disk;
duplicate target database for standby from active database
spfile
  set db_unique_name='westdb'
  set cluster_database='...'
  set control_files='...'
  set remote_listener='...'
  set fal_server='...'
  set audit_file_dest='/...'
  ...;
allocate auxiliary channel stby type disk;
sql channel stby "alter database recover managed standby database disconnect";
}
....................................................................
sql statement: alter system archive log current

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=1 STAMP=892026441 file name=+DATA/...
datafile 2 switched to datafile copy
input datafile copy RECID=2 STAMP=892026441 file name=+DATA/...
datafile 3 switched to datafile copy
input datafile copy RECID=3 STAMP=892026441 file name=+DATA/...
datafile 4 switched to datafile copy
input datafile copy RECID=4 STAMP=892026442 file name=+DATA/...
datafile 5 switched to datafile copy
input datafile copy RECID=5 STAMP=892026442 file name=+DATA/...
datafile 6 switched to datafile copy
input datafile copy RECID=6 STAMP=892026442 file name=+DATA/...
Finished Duplicate Db at 02-OCT-15

allocated channel: stby
channel stby: SID=46 device type=DISK
sql statement: alter database recover managed standby database disconnect
released channel: prmy1
released channel: prmy2
released channel: prmy3
released channel: prmy4
released channel: stby1
released channel: stby

RMAN> exit
[oracle@wnode03 ~]$
</code>

=====Completing the Standby Configuration=====
Now that we have a standby database, we have to integrate it into the Grid Infrastructure. To do that, we need to:

  * Enable log_archive_dest_2 & check archivelog apply
  * Add the Clusterware entries

====Enable the Archivelog Destination & Check Archivelog Apply====
Remember that we set the second archivelog destination on the primary to DEFER :) Well, I sure do. That means Oracle won't send the archivelogs to that service, so we have to change it:

<code>
SQL> col destination format a10
SQL> col error format a10
SQL> select dest_id, status, destination, archiver, error, valid_now
  2  from v$archive_dest where dest_id = 2;

   DEST_ID STATUS    DESTINATIO ARCHIVER  ERROR      VALID_NOW
---------- --------- ---------- --------- ---------- --------------
         2 DEFERRED  westdb     LGWR      ORA-12514  UNKNOWN
SQL>

SQL> alter system set log_archive_dest_state_2=ENABLE;

System altered.

SQL> select dest_id, status, destination, archiver, error, valid_now
  2  from v$archive_dest where dest_id = 2;

   DEST_ID STATUS    DESTINATIO ARCHIVER  E VALID_NOW
---------- --------- ---------- --------- - ----------------
         2 VALID     westdb     LGWR        YES
SQL>
</code>

After that, we have to verify that the logs are being applied:

<code>
--On Primary
[oracle@enode01 ~]$ sqlplus / as sysdba
SQL> SELECT THREAD#, MAX(SEQUENCE#) FROM V$ARCHIVED_LOG GROUP BY THREAD#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1             11
         2              2

--On Standby
SQL> SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY THREAD#;

   THREAD#  SEQUENCE# APPLIED
---------- ---------- ---------
         1         11 YES
         2          2 YES
SQL>
</code>

Now that we have checked and verified that the archivelogs are being applied, we have to re-create the spfile. Remember that the spfile was re-created during the duplicate procedure, so it still contains data referring to the primary.

<code>
SQL> create pfile='/...' from spfile;
File created.
SQL> exit
[oracle@wnode03 ~]$ vi /...
:...
10 fewer lines
:%s/...
10 substitutions on 10 lines
:%s/...
13 substitutions on 13 lines
:%s/...
13 substitutions on 13 lines

--Be sure the following values are SET:
*.cluster_database=TRUE
*.dispatchers='...'
*.log_archive_dest_1='...'
*.log_archive_dest_2='...'
*.log_archive_dest_state_2='...'
:wq!

[oracle@wnode03 ~]$ sqlplus / as sysdba
SQL> create spfile='+DATA/...' from pfile='/...';
File created.
SQL> exit
[oracle@wnode03 ~]$ vi $ORACLE_HOME/dbs/initwestdb1.ora
### And add the following entry
spfile='+DATA/...'
:wq!

[oracle@wnode03 ~]$ scp $ORACLE_HOME/dbs/initwestdb1.ora wnode04:/...
initwestdb1.ora 100% 39 0.0KB/s 00:00
[oracle@wnode03 ~]$
</code>

Phew, that was something. Now that we are done with the modifications, we can shut down the instance; we will register it with the Clusterware and start it from there.

<code>
[oracle@wnode03 ~]$ sqlplus / as sysdba
SQL> shutdown immediate
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> exit
[oracle@wnode03 ~]$
</code>

====Add Clusterware Entries for the Standby====
Adding entries to the Clusterware registries is rather simple:

<code>
[oracle@wnode03 ~]$ srvctl add database -db westdb -oraclehome /... ...
[oracle@wnode03 ~]$ srvctl add instance -db westdb -instance westdb1 -node wnode03
[oracle@wnode03 ~]$ srvctl add instance -db westdb -instance westdb2 -node wnode04
[oracle@wnode03 ~]$
</code>

It is important that the resources be added as the owner of the resource; otherwise, you might not be able to start the database later due to profile restrictions.

Finally, we can start and verify the standby:

<code>
[oracle@wnode03 ~]$ srvctl start database -db westdb -startoption mount
[oracle@wnode03 ~]$ srvctl status database -db westdb -verbose
Instance westdb1 is running on node wnode03. Instance status: Mounted (Closed).
Instance westdb2 is running on node wnode04. Instance status: Mounted (Closed).
[oracle@wnode03 ~]$ sqlplus / as sysdba
SQL> alter database recover managed standby database disconnect;

Database altered.

SQL> exit
[oracle@wnode03 ~]$ srvctl config database -db westdb
Database unique name: westdb
Database name: eastdb
Oracle home: /...
Oracle user: oracle
Spfile: +DATA/...
Password file:
Domain: example.com
Start options: open
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA,FRA
Mount point paths:
Services:
Type: RAC   <- RAC Database
Start concurrency:
Stop concurrency:
Database is enabled
Database is individually enabled on nodes:
Database is individually disabled on nodes:
OSDBA group: dba
OSOPER group: oper
Database instances: westdb1,westdb2
Configured nodes: wnode03,wnode04
Database is administrator managed
[oracle@wnode03 ~]$
</code>

====Verify Replication====
To verify replication, we should first bear in mind which type of standby we have:

  * Physical Guard (up to Read Only - Active Data Guard)
  * Logical Guard (up to Read Write)

{{: }}

----
Archiver Process (ARCn or ARCH) – The archiver process is responsible for archiving online redo logs. The archival destination can be a local destination or a remote standby database site. In a Data Guard configuration, the archiver process can also transmit redo to the remote standby destination.

Log Writer (LGWR) – The log writer process on the primary database writes entries from the redo log buffer to the online redo log file. When the current online redo log file is full, it triggers the archiver process to start the archiving activity.

Remote File Server (RFS) Process – The RFS process runs on the standby database and is responsible for communication between the primary and the standby database. For the log transport service, the RFS on the standby receives the redo records from the archiver or the log writer process of the primary over Oracle Net and writes them to the filesystem on the standby site.

Fetch Archive Log (FAL) – The FAL mechanism has two components, the FAL client and the FAL server, both used for archive gap resolution. If the Managed Recovery Process (MRP) on the standby site detects an archive gap sequence, it initiates a fetch request to the FAL client on the standby. This, in turn, requests the FAL server process on the primary database to re-transmit the archived log files to resolve the gap sequence. Archive gap sequences will be discussed later in this chapter.

Once the log transport service completes the transmission of redo records to the standby site, the log apply service starts applying the changes to the standby database. The log apply service operates solely on the standby database. The following processes on the standby site facilitate the log apply operations:

Managed Recovery Process (MRP) – The MRP applies the redo entries from the archived redo logs onto the physical standby database.

Logical Standby Process (LSP) – The LSP applies the redo records from archived redo logs to the logical standby database. It uses the Oracle LogMiner engine for the SQL apply operations: with LogMiner, the LSP recreates the SQL statements from the redo logs that were executed on the primary and applies them to the standby to keep it current.

Because of that, there are limitations on the Logical Guard. But we will address those in the Logical Guard section; enough sideways :)
----
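If the MRP ever reports a gap, the FAL mechanism described above should resolve it automatically; a minimal check on the standby uses the standard v$archive_gap view (shown as a sketch; an empty result means there is no gap to fetch):

<code>
--On the standby: list any unresolved archive gap per thread
SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;
-- "no rows selected" means no gap; FAL has nothing to fetch
</code>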

So, to verify the apply, we can do the following:

<code>
--On Standby
[oracle@wnode03 ~]$ pgrep -lf mrp
25968 ora_mrp0_westdb1
[oracle@wnode03 ~]$ sqlplus / as sysdba
SQL> SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY THREAD#;

   THREAD#  SEQUENCE# APPLIED
---------- ---------- ---------
         1         12 YES
         1         13 YES
         1         11 YES
         2          4 IN-MEMORY
         2          3 YES
         2          2 YES

6 rows selected.
SQL>

--On Primary
SQL> alter system archive log current;

System altered.

SQL> SELECT THREAD#, MAX(SEQUENCE#) FROM GV$ARCHIVED_LOG
  2  WHERE RESETLOGS_CHANGE# = (SELECT MAX(RESETLOGS_CHANGE#) FROM GV$ARCHIVED_LOG)
  3  GROUP BY THREAD#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1             16
         2              7
SQL>

--On Standby
SQL> SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY THREAD#;

   THREAD#  SEQUENCE# APPLIED
---------- ---------- ---------
         1         16 YES
         1         15 YES
         1         13 YES
         1         11 YES
         1         14 YES
         1         12 YES
         2          6 YES
         2          5 YES
         2          7 IN-MEMORY
         2          3 YES
         2          2 YES

   THREAD#  SEQUENCE# APPLIED
---------- ---------- ---------
         2          4 YES

12 rows selected.
SQL>

SQL> select inst_id, process, status, sequence#, thread# from gv$managed_standby;

   INST_ID PROCESS   STATUS        SEQUENCE#    THREAD#
---------- --------- ------------ ---------- ----------
         1 ARCH      CLOSING              13          1
         1 ARCH      CONNECTED             0          0
         1 ARCH      CLOSING               7          2
         1 ARCH      CLOSING               5          2
         1 RFS       IDLE                  8          2
         1 RFS       IDLE                  0          0
         1 RFS       IDLE                  0          0
         1 MRP0      APPLYING_LOG          8          2
         2 ARCH      CLOSING              16          1
         2 ARCH      CONNECTED             0          0
         2 ARCH      CLOSING              14          1

   INST_ID PROCESS   STATUS        SEQUENCE#    THREAD#
---------- --------- ------------ ---------- ----------
         2 ARCH      CLOSING              15          1
         2 RFS       IDLE                  0          0
         2 RFS       IDLE                  0          0
         2 RFS       IDLE                 17          1

15 rows selected.
</code>