Oracle Archive Log with Data Guard - Solution Guide
Version 0.02
DISCLAIMER
Sanovi™ believes the information in this publication is accurate as of its publication date.
The information is subject to change without notice.
COPYRIGHT
Copyright © 2015. Sanovi Technologies.
Printed March 2015
Use, copying, and distribution of any Sanovi software described in this publication require an
applicable software license.
No part of this product or document may be reproduced, stored in a retrieval system, or
transmitted, in any form by any means, electronic, mechanical, photocopy, recording, or
otherwise, without prior written authorization of Sanovi Technologies Corporation and its
licensers, if any.
TRADEMARK INFORMATION
Oracle Archive Log with Data Guard is a trademark of Sanovi Technologies.
All other trademarks used in this publication are the property of their respective holders.
Table of Contents
Preface
  Purpose
  Audience
  Conventions
  Additional Support
Architecture Overview
Pre-requisites
Configuring Sanovi DRM
  Dataset Discovery
  Protection Scheme
    Reduced Privileges
  Creating Functional Groups
    Workflow Configuration
  Event Configuration
  RPO and RTO Monitoring
  Replication Monitoring
  Configuration of BCOs
    Failover Operation
Test Exercise Configuration
  Switchover Operation
  Failover Operation with AWS
  Switchback Operation
Troubleshooting Information
  Applying Logs Manually
  How to set the primary DB in Archive Mode?
  How to start Oracle database manually in the DR site
  How to run SQLPlus?
Appendix
  Audit Records in Windows Application Log
  Primary Database requires manual intervention at the Stand-by site
  Sample Listener.ora file entries
  Sample Tnsnames.ora file entries
Glossary
Index
Preface
Welcome to Sanovi Cloud Continuity User Guide. The preface discusses the following
topics:
 Purpose
 Audience
 Conventions
 Additional Support
Purpose
This manual is your guide to the operation and maintenance of the product. The
document gives a detailed description of:
 All menu commands, icons and links available in the product.
 The terminologies used.
 Procedures to create, modify and delete various entities.
 Procedures to maintain an interface with a variety of features in order to accomplish a particular task.
Thus, the manual helps you use the product with ease and makes you familiar with the
Sanovi Cloud Continuity software.
Audience
This guide is a part of the Sanovi Cloud Continuity documentation set and is intended for
use by Sanovi Cloud Continuity software administrators and operators.
This guide assumes that you have already gone through the Sanovi Cloud Continuity
document; it explains in-depth content pertaining to the Oracle Dataguard solution.
Administrators can use this manual to perform the following tasks:
 Create Sites
 Discover components, datasets and protection schemes
 Create, modify and delete groups
 Configure groups with Continuity operations, Events, RPO/RTO and Test exercises
 Configure users and create notification lists
 Configure agents
Operators can use this manual to monitor group-related information during
Business Continuity Operations.
Conventions
Typeface/Font | Usage
Boldface | Used in table headings and in paragraphs to convey especially strong events.
Italic | Used to emphasize special or additional information and examples.
Courier New | Terminal input and output.
$, / | Command prompt.
Additional Support
In addition to providing documentation, Sanovi Technologies offers the following remote
services:
 For assistance with Sanovi Cloud Continuity, contact technical support at
[email protected].
 For license information, contact the Sanovi Cloud Continuity sales team at
[email protected].
 For additional information about the product, visit the website at
http://www.sanovi.com.
 For customer support, contact the Sanovi Technologies help desk at
Phone: 1800 103 0609
Architecture Overview
Oracle Archive logs with DataGuard DR Solution provides business continuity for the
applications that run on Oracle database. Data protection is done using DataGuard to
copy log files and other DB related files between Production and DR (remote) sites. Since
this whole process is automated using Sanovi DRM™, the management is simplified and
the DR database is continuously updated.
The following diagram explains how Oracle works with DataGuard to protect Oracle data in
a Sanovi DRM™ environment.
This DR Solution works only if replication is happening between two servers. There
must be a server on the production site that holds the production data in the form of an Oracle
database. This data has to be protected with a disaster recovery plan.
The Oracle Archive Logs with DataGuard DR Solution type performs the following:
 Sets up the environment for protecting the data in a DR environment.
 Protects the data regularly.
 Checks whether the RPO/RTO is on track.
 Checks the integrity of the databases on both sites.
 Recovers the Oracle database in case of a disaster.
Pre-requisites
Supported configurations:
Database/Application Version | Platform | OS Version/Patch
Oracle 10g, Oracle 11g | Intel, AMD, Sun/Sparc, AIX | Windows 2008, 2012; RHEL 6.x
Infrastructure
Space to hold 3 days of archived logs on the primary and DR servers.
Software and Utilities
Oracle client libraries, JDBC libraries and the sqlplus utility on the primary and DR servers.
Configuration and privileges
The database on the primary should have archive logging enabled.
The database on the DR should be a physical managed standby database in mounted mode.
DataGuard replication should be set up and managed recovery must be active on the DR database.
STANDBY_FILE_MANAGEMENT should be set to AUTO on both primary and DR to automatically create newly added data files.
Sanovi DRM agents can run as root/administrator or as any other user. The user must have:
 Full permissions on the $EAMSROOT folder and its subfolders.
 Read-write permission on folders that are configured as part of the solution configuration. Examples: folders used for Dump and Apply of archive logs; folders configured for full database dump and restore; folders configured for tests like Switchover and Switchback.
The Oracle DB user configured in the DRM Oracle agent should have sysdba privileges.
A static listener should be configured for the databases on both primary and DR so that the Sanovi DRM agent can connect to an idle database.
Security Configuration
Ports 46000 and 46001 should be opened bidirectionally between the primary and DR servers for replication of log files.
Port 45000 should be opened bidirectionally between the Panaces server and both the primary and DR agent servers.
Sanovi DRM uses JDBC to connect to the database, so remote_login_passwordfile should be set to "exclusive" so that password file authentication is enabled.
The port that is configured in the listener for the database should be opened up to processes connecting from the same server.
The OS user under which the Sanovi DRM Oracle agent runs should be part of the Oracle OS user's group.
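The database-side settings above can be verified from SQL*Plus before configuring Sanovi DRM. The following is a minimal sketch, run as sysdba; the parameters and views are standard Oracle ones and nothing here is specific to Sanovi DRM.
On the primary:
SQL> archive log list
(Database log mode should be Archive Mode.)
SQL> show parameter standby_file_management
(Should be AUTO on both primary and DR.)
SQL> show parameter remote_login_passwordfile
(Should be EXCLUSIVE.)
On the DR:
SQL> select database_role, open_mode from v$database;
(Should show PHYSICAL STANDBY and MOUNTED.)
SQL> select process, status from v$managed_standby;
(An MRP0/MRP process row indicates managed recovery is active.)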
Configuring Sanovi DRM
Dataset Discovery
A Dataset indicates all related data that is the object of protection and/or management
by Sanovi DRM™. You should discover the component before proceeding with the
dataset discovery.
To discover a new Dataset, perform the following steps:
1. Click DISCOVERY > Subsystems on the navigation bar. The Subsystems page appears.
2. Click the Datasets tab.
3. Select the type of Database Subsystem from the Create New drop-down list at the top right corner. The drop-down list displays all the Dataset Subsystems that can be created.
4. Click Go. The New Dataset Discovery page appears and the Subsystems side bar page displays the list of all the Datasets that are already discovered.
Provide the following information to configure the Dataset.
Field | Description
Dependent Component | Select the component (server) containing the Dataset from the drop-down list. If the component is not present, abort the dataset discovery and initiate Component discovery from the same Create New drop-down list.
Type | Displays the type of Dataset you already selected. You have the option to change your selection.
Dataset Name | By default, the dataset name is auto-populated as a combination of the dataset type and component name; alternatively, enter a unique name for the dataset. Note: this field only accepts alphanumeric characters, must start with an alphabetical character, and should not contain a space.
Credentials | Activated only if the Server Managed Remotely check box was selected while discovering the Component. Select one of the following from the drop-down list, based on the requirement: Use Component Credentials (to use the default credentials provided) or Add New Credentials (enter the UserName and Password). Notes: Port No (SSH) is fetched automatically once the Dependent Component is selected. Other credentials created under Group credentials (for example Unix, Linux or Windows) are also displayed; they are shown in disabled mode and, on selection, are attached here.
User Name | Enter the User Name.
Password | Enter the Password.
Test Credentials | Click the Test Credentials button; the following are performed: 1. The system checks the connectivity of the agent with the Oracle database for the provided agent configuration details (such as user name, password, port number, SID). 2. Depending on the connectivity: on success, the system checks the reduced-privileges details of the user, displays Success below the Database Information bar, and also displays any reduced-privileges information of the user (for example, if Reduced Privileges displays the message "user may not support Manage operation", the user privileges do not permit actions pertaining to Manage operations); on failure, the system displays an error message indicating the cause of failure.
Port No (SSH) | Enter the required port number.
Database SID | Select the Select discovered SID radio button, click the Get SIDS button, and choose your database from the drop-down list.
Database UserName | Enter the user name.
Database Password | Enter the password.
Test Credentials | Click the Test Credentials button.
Discover | Click the Discover button.
Note: the other fields are auto-fetched after clicking the Test Credentials and Discover buttons.
5. Click Save and return to the Subsystems window.
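If Test Credentials fails, the same connectivity can be checked manually from the agent server with SQL*Plus. The following is a minimal sketch; the alias ORCL is a placeholder and must match the SID, listener port and credentials entered above.
sqlplus /nolog
SQL> connect <username>/<password>@ORCL as sysdba
SQL> select instance_name, status from v$instance;
(STATUS should be OPEN on the primary and MOUNTED on the standby.)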
Protection Scheme
The Protection Scheme tab in the Subsystems page lists all the Protection Schemes
that have been set up.
To view the Protection Schemes, perform the following steps:
1. Click DISCOVERY > Subsystems on the navigation bar. The Subsystems page appears.
2. Click the Protection Schemes tab. The Protection Scheme Discovery page appears.
The following table summarizes the information displayed on the Protection Schemes tab:
Field | Description
Type | Displays the type of the discovered Protection scheme. The available types of Protection schemes are PFR, Hitachi Replication, and Oracle Data Guard.
Name | Enter the Protection Scheme name.
Credentials | Select Add New Credentials from the drop-down list.
User Name | Enter the User Name.
Password | Enter the Password.
Test Credentials | Click the Test Credentials button; the following are performed: 1. The system checks the connectivity of the agent with the Oracle database for the provided agent configuration details (such as user name, password, port number, SID). 2. Depending on the connectivity: on success, the system checks the reduced-privileges details of the user, displays Success below the Database Information bar, and also displays any reduced-privileges information of the user (for example, if Reduced Privileges displays the message "user may not support Manage operation", the user privileges do not permit actions pertaining to Manage operations); on failure, the system displays an error message indicating the cause of failure.
Port No (SSH) | Enter the required port number.
Agent Register | Click the Agent Register link. Note: all the information on Agents will be fetched automatically.
Database SID | Select the Select discovered SID radio button, click the Get SIDS button, and choose your database from the drop-down list.
Port on which Oracle listens | Enter the port details.
IP address on which Oracle listens | Enter the IP address.
Choose Authentication Type | Choose DB as the Authentication type.
Database UserName | Enter the user name.
Database Password | Enter the password.
Test Credentials | Click the Test Credentials button. Note: a Success message is displayed to confirm that all the credentials entered are valid.
Get Destinations | Click the Get Destinations button, which fetches the information on Stand-by Destinations.
Discover | Click the Discover button. Note: this fetches information about fields like Destination ID, Protection Mode, and Replication Pair.
3. Click Save.
The advanced details provide information on the DR Solution-specific configuration of each Subsystem and are configured at the time of DR Solution setup. Refer to the respective book under DR Solutions supported by Sanovi DRM™.
You can edit the respective protection scheme by clicking the edit icon adjacent to it, or delete it by clicking the delete icon adjacent to it.
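For reference, the destination and protection-mode details that Get Destinations and Discover fetch are also visible in standard Oracle views. The following is a minimal sketch, run as a privileged user on the primary; nothing here depends on the Sanovi agent.
SQL> select protection_mode, protection_level from v$database;
SQL> select dest_id, dest_name, status, destination from v$archive_dest_status where status <> 'INACTIVE';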
Reduced Privileges
To perform Discover, Monitor and Manage operations on PR (production), the user requires all of the following privileges:
1. Create session privilege.
2. Either the SELECT ANY DICTIONARY privilege or select privilege on the following tables:
{"v$database", "v$archive_dest", "v$parameter", "v$archive_dest_status", "v$managed_standby", "v$archive_gap", "v$instance", "v$log", "v$log_history", "v$archived_log", "v$recovery_progress", "v$pwfile_users", "dba_sys_privs", "all_tab_privs"}
3. Alter database privilege.
4. Alter system privilege.
Note:
To perform Discover, Monitor and Manage operations on DR, the user requires sysdba privileges.
If the user is sys, then all operations are performed as sysdba.
The following table provides details about the Operations and Roles/Privileges:
Operation/Module Name | Role/Privilege for Production | Role/Privilege for Standby
Discovery | CREATE SESSION; and SELECT ANY DICTIONARY or select privilege on the following tables: "v$archived_log", "v$instance", "v$database", "v$log_history", "v$archive_dest_status", "v$archive_dest", "v$parameter", "v$dataguard_stats", "v$backup_corruption", "v$datafile", "v$logfile", "v$controlfile", "v$instance_recovery", "dual", "v$log", "v$thread", "v$archive_gap", "v$tablespace", "dba_tablespaces", "V$DATABASE_INCARNATION", "smon_scn_time", "all_tab_privs", "dba_sys_privs", "v$pwfile_users" | SYSDBA
Monitor | SYSOPER, ALTER SYSTEM, ALTER DATABASE | SYSDBA
Manage | ALTER SYSTEM, ALTER DATABASE, SYSOPER/SYSDBA | SYSDBA
Note:
If the Authentication type is selected as OS Authentication, the user should have all of the above-defined privileges and the OS user should be part of the DBA group. If the OS user is part of the DBA group, the user will have sysdba/sysoper privileges automatically.
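The following is a minimal sketch of how such a production account might be provisioned with SQL*Plus; the user name drmuser is only an illustrative placeholder, the exact object grants should follow the list above, and the standby side still requires sysdba.
SQL> create user drmuser identified by <password>;
SQL> grant create session to drmuser;
SQL> grant select any dictionary to drmuser;
SQL> grant alter system to drmuser;
SQL> grant alter database to drmuser;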
Creating Functional Groups
A Functional Group is created to protect data. The data protection is done by binding
the data to the associated Components and Protection Scheme present in the DR
infrastructure.
Note:
Functional Group can be created only after component, dataset, and protection scheme
are discovered.
When PFR is used for data replication and NormalFullCopy operation of Sanovi DRM is
not used, you have to create a new fileset and associate it to the new Group. If the
NormalFullCopy operation is performed by Sanovi DRM™, then the filesets are created
by Sanovi DRM™ itself.
To create a Functional Group:
1. Click DISCOVERY > Groups on the navigation bar. The Groups page appears.
2. Select Functional Group from the Create New drop-down list at the top right
corner of the page.
3. Click Go. The Create Functional Group page appears.
4. Enter the following details.
Field | Description
Group Name | Enter a unique name for the Functional Group. If the Group name already exists, a message is displayed prompting you to enter a different name for the Group. This field is mandatory. Note: the Group name should not be empty. It can have a maximum of 32 characters, including alphanumeric characters and underscore. It should start with an alphabet only and should not contain any blank spaces.
Description | Enter a description for the Functional Group.
Group Priority | Group Priority
Solution Signature | Select the type of DR Solution from the drop-down list. This field is mandatory. The Include Redo Logs check box is displayed only if you select any of the Oracle DR Solutions from the drop-down list. Enable the Include Redo Logs check box if the DR Solution supports Redo Logs. At present, Sanovi DRM™ supports Redo Logs for "Oracle Archive Logs with Hitachi Replication" and "Oracle Archive Logs with Other Replicator" only. Note: if you select the Include Redo Logs check box, ensure that you set up additional protection schemes for redo log protection on production and DR respectively.
Configured App RPO | Select the App RPO time based on the requirement.
Configured Data RPO | Select the Data RPO time based on the requirement.
Configured RTO | Select the RTO time based on the requirement. Notes: the App RPO/Data RTO values are dependent on the DR Solution type selected. For a Functional Group that does not have an impact on the Application Group's App RPO/Data RTO, its RPO/RTO values, though configured, will be shown as N/A. Configured RPO/RTO will be disabled in Test License packages.
Configured Data Lag Objective | Enter the Data Lag Objective value. You can configure the value in KB/MB/number of files. Notes: the Data Lag Objective unit is dependent on the DR solution type selected; if a PFR solution is selected, the unit will be the number of files. Configured Data Lag Objective will be disabled in Test License packages.
This server is part of a Cluster | Select the check box only if the DR Solution is supported on a cluster. If you select this check box, provide the Cluster Timeout time in seconds. Refer to Support for Cluster for more information.
Part of Flex pod | Select the check box only if it is part of a Flex Pod. Note: for more information, refer to Creating Flexi FG.
Assign to Organization | Select the Organization name under which to create the Functional group.
5. Click Next to proceed with the Define Group Relationship.
6. Define the Group Relationship by configuring the following elements for the Production and Remote sites. To configure these elements, you can either click the links available under the Configuration Process section or click the respective pictorial representation.
 Server Component
 Application Dataset
 Data Protection
Note:
 After defining each of the elements mentioned above, the pictorial representation of the respective icon changes to a tick icon, indicating that the element has been set up.
 At any point during group configuration, click Back to go back to the previous page or click Cancel to abort the group configuration.
7. Click Next. A message box is displayed indicating that the Group has been created successfully.
8. If Group creation fails, the Group Details page of the created group is shown with the error message.
9. Click OK to configure DR Solution-specific details. For specific information on this configuration, refer to the respective pages under the DR Solutions Supported by Sanovi DRM™ book.
10. Perform any of the following:
 Click Save to create the Functional Group.
 Click Reset to set the previous values in the GUI.
 Click Cancel to quit the current window without saving changes.
Workflow Configuration
BCOs, Tests, Policies and BPI are workflows. These can be configured using the Workflow
Manager functionality. Refer to the Working with Workflow Manager topic (see the Sanovi
DRM online help).
Event Configuration
You can configure events only if policies are assigned to them. Refer to the Configuring Events
topic in the Sanovi DRM Administrator's Guide for information on how to configure the
events. The Configure Policy Execution mode for specific events page lists all the
possible events that can occur for the particular Group. The occurrence of any event
must be notified to the respective owner so that action can be taken.
The following table lists all possible events with their severity status. The
events listed can change with respect to the customer environment, and any number of
new events can be identified based on the customer's DR environment.
BCS Event ID | Severity | Description | Impact | Caused by agent event
BCSMGR005 | INFO | Unable to calculate the Availability Index for the Group. | The Availability Index for the group could not be calculated because of an internal error. |
BCSMGR006 | INFO | Normal Copy can be restarted. | The conditions are right to restart normal copy. |
BCSMGR007 | SERIOUS | Error occurred while registering the resources. | One or multiple resources for the group may have failed to register. Further monitoring of these resources is not possible by the system. Please contact Customer Support. |
BCSOracleArLogDG100 | SERIOUS | Replication (Log Transport Service) is OFF/Stopped. | RPO will be impacted. | DataGuardEvent004, DataGuardEvent005, DataGuardEvent007
BCSOracleArLogDG101 | NORMAL | Replication (Log Transport Service) is ON. | Data is being copied to the DR site. | DataGuardEvent013, DataGuardEvent003
BCSOracleArLogDG102 | CRITICAL | Replication (Log Transport Service) is failing. | RPO will be impacted. | DataGuardEvent006, DataGuardEvent008, DataGuardEvent009, DataGuardEvent010, DataGuardEvent011
BCSOracleArLogDG105 | SERIOUS | Replication (DataGuard) Agent is unable to get information from the replication system (Oracle DataGuard). | Unable to monitor the replication status. May impact RPO/RTO if data is not being copied to the DR site. | DataGuardEvent310
BCSOracleArLogDG106 | NORMAL | Replication (DataGuard) Agent is able to get information from the replication system (Oracle DataGuard). | Replication Agent is able to monitor the Replication system. | DataGuardEvent311
BCSOracleArLogDG107 | WARNING | Replication (DataGuard) Pair Role Changed. | There will be loss of data if the Role change is not intentional. | DataGuardEvent201, DataGuardEvent202
BCSOracleArLogDG108 | SERIOUS | Replication (DataGuard) Mode/Config changed. | Depending on the change, it may lead to increased RPO/RTO. | DataGuardEvent208, DataGuardEvent209, DataGuardEvent210, DataGuardEvent211, DataGuardEvent212, DataGuardEvent213, DataGuardEvent214, DataGuardEvent215, DataGuardEvent216, DataGuardEvent217, DataGuardEvent218, DataGuardEvent219, DataGuardEvent220, DataGuardEvent221, DataGuardEvent222, DataGuardEvent223, DataGuardEvent224
BCSOracleArLogDG200 | SERIOUS | DataGuard Log Apply Services is failing. | RTO will increase. | DataGuardEvent012
BCSOracleArLogDG201 | NORMAL | DataGuard Log Apply Services is applying logs. | Remote database is getting up to date with new logs. | DataGuardEvent014
BCSOracleArLogDG202 | SERIOUS | Non-archived log accumulation in primary exceeded threshold (MB). | RPO will increase. | DataGuardEvent100
BCSOracleArLogDG203 | NORMAL | Non-archived log accumulation in primary is within threshold (MB). | Non-archived logs in primary are within the desired level. | DataGuardEvent101
BCSOracleArLogDG204 | SERIOUS | Not-applied (but received) log accumulation in standby exceeded threshold (MB). | RTO will increase. | DataGuardEvent102
BCSOracleArLogDG205 | NORMAL | Not-applied (but received) log accumulation in standby is within threshold (MB). | Not-applied (but received) log accumulation in standby is within the desired level. | DataGuardEvent103
BCSOracleArLogDG206 | SERIOUS | Archived on Primary but not received on Standby exceeded threshold (MB). | RPO will increase. | DataGuardEvent104
BCSOracleArLogDG207 | NORMAL | Archived on Primary but not received on Standby is within threshold (MB). | Archived on Primary but not received on Standby is within the desired level. | DataGuardEvent105
BCSOracleArLogDG208 | WARNING | Replication (DataGuard) Protection Mode Changed. | Will impact RPO/RTO depending on the current protection mode. | DataGuardEvent203, DataGuardEvent204, DataGuardEvent205, DataGuardEvent206, DataGuardEvent207
BCSOracleLogEvent001 | CRITICAL | Oracle agent on primary server is down. | Continuity Group cannot be managed. |
BCSOracleLogEvent002 | CRITICAL | Oracle agent on DR server is down. | Continuity Group cannot be managed. |
BCSOracleLogEvent003 | CRITICAL | Oracle agent down on DR (current production) server. | Cannot perform Continuity operations. |
BCSOracleLogEvent004 | CRITICAL | Database instance down/not available on primary server. | Affects application availability/normal mode. |
BCSOracleLogEvent005 | CRITICAL | Database instance down/not available on DR server. | Affects application availability/normal mode. |
BCSOracleLogEvent006 | CRITICAL | Database instance is down on the DR (current production) server. | Production data is not available for applications. |
BCSOracleLogEvent007 | INFO | Database is ACTIVE and is under Sanovi DRM management on primary server. | Continuity monitoring and management of the database can be performed. |
BCSOracleLogEvent008 | INFO | Database is ACTIVE and is under Sanovi DRM management on DR server. | Continuity monitoring and management of the database can be performed. |
BCSOracleLogEvent009 | INFO | Database is ACTIVE and is under Sanovi DRM management on DR (current production) server. | Production data is available for applications. |
BCSOracleLogEvent010 | INFO | Database is ACTIVE and is under Sanovi DRM management on primary (currently non-production) server. | Database is available for continuity operations. |
BCSOracleLogEvent011 | CRITICAL | Oracle process not running on primary server. | Affects Oracle availability. |
BCSOracleLogEvent012 | CRITICAL | Oracle process not running on DR server. | Affects Oracle availability. |
BCSOracleLogEvent013 | CRITICAL | Oracle process not running on DR (current production) server. | Affects Oracle availability. |
BCSOracleLogEvent014 | CRITICAL | Oracle Listener not running on primary server. | Affects Oracle availability. |
BCSOracleLogEvent015 | CRITICAL | Oracle Listener not running on DR server. | Affects Oracle availability. |
BCSOracleLogEvent016 | CRITICAL | Oracle Listener not running on DR (current production) server. | Affects Oracle availability. |
BCSOracleLogEvent017 | CRITICAL | OS agent is down on primary server. | Continuity Group cannot be managed completely. Logs will not get dumped on the primary server, leading to an impact on RPO and RTO. |
BCSOracleLogEvent018 | CRITICAL | OS agent is down on DR server. | Continuity Group cannot be managed completely. The Failover operation cannot be performed. Logs may not get applied to the DR database, leading to an impact on RPO and RTO. |
BCSOracleLogEvent019 | INFO | OS Agent is up on primary server. | Affects Failover. |
BCSOracleLogEvent020 | INFO | OS Agent is up on DR server. | Affects Failover. |
BCSOracleLogEvent021 | CRITICAL | Protection agent on primary server is down. | Continuity Group cannot be managed completely. RPO may be impacted. |
BCSOracleLogEvent022 | CRITICAL | Protection agent on DR server is down. | Continuity Group cannot be managed completely. RPO may be impacted. |
BCSOracleLogEvent023 | INFO | Protection agent on primary server is up. | Data protection service can be monitored and managed. |
BCSOracleLogEvent024 | INFO | Protection agent on the DR server is up. | Data protection service can be monitored and managed. |
BCSOracleLogEvent025 | INFO | Protection event occurred on primary server. | Might impact protection. |
BCSOracleLogEvent026 | INFO | Protection event occurred on DR server. | Might impact protection. |
BCSOracleLogEvent027 | WARNING | Dumping of archive logs failed. | Impacts RPO and Continuity. |
BCSOracleLogEvent028 | WARNING | Applying of archive logs failed. | Impacts RPO/RTO and Continuity. |
BCSOracleLogEvent029 | CRITICAL | Network connectivity to Primary Server is lost. | The Continuity Group cannot be managed. |
BCSOracleLogEvent030 | CRITICAL | Network connectivity to DR Server is lost. | The Continuity Group cannot be managed. |
BCSOracleLogEvent031 | CRITICAL | Network connectivity to DR server (current production) is lost. | Current production server cannot be managed. |
BCSOracleLogEvent032 | CRITICAL | Primary server down/not available. | Affects application availability/normal mode. |
BCSOracleLogEvent033 | CRITICAL | Oracle configuration file has been updated on the production database server. | Applications on the DR may not function properly. |
BCSOracleLogEvent034 | CRITICAL | Control files on DR need to be refreshed. | Impacts the consistency of the database. |
BCSOracleLogEvent035 | CRITICAL | Log Volume on the Primary Server has exceeded the threshold. | Impacts Continuity. |
BCSOracleLogEvent036 | CRITICAL | Log Volume on the DR Server has exceeded the threshold. | Impacts Continuity. |
BCSOracleLogEvent037 | CRITICAL | Log Volume on the Current Production (DR) Server has exceeded the threshold; the group is in Failover mode. | Impacts Continuity. |
BCSOracleLogEvent038 | CRITICAL | WAN Link state changed. | The WAN link has toggled its state. |
BCSOracleLogEvent039 | CRITICAL | Username/password configured for primary server is invalid. | Sanovi DRM cannot monitor/manage the Primary database. |
BCSOracleLogEvent040 | CRITICAL | Username/password configured for DR server is invalid. | Sanovi DRM cannot monitor/manage the DR database. |
BCSOracleLogEvent041 | CRITICAL | Username/password configured for DR server (current production) is invalid. | Sanovi DRM cannot monitor/manage the DR (current production) database. |
BCSOracleLogEvent042 | CRITICAL | Archive log sequence number got reset on primary database. | Sanovi DRM cannot do normal copy. |
BCSOracleLogEvent043 | CRITICAL | Primary database is restored from an older backup. | Sanovi DRM cannot do normal copy. |
BCSOracleLogEvent044 | CRITICAL | Apply/Dump action failing repeatedly. | Affects Continuity. This in turn affects RPO/RTO. |
BCSOracleLogEvent045 | CRITICAL | OS Authenticated User configured for Primary server is invalid. | Sanovi DRM cannot monitor/manage the Primary database. |
BCSOracleLogEvent046 | CRITICAL | OS Authenticated User configured for DR server is invalid. | Sanovi DRM cannot monitor/manage the DR database. |
BCSOracleLogEvent047 | CRITICAL | OS Authenticated User configured for DR server (current production) is invalid. | Sanovi DRM cannot monitor/manage the DR (current production) database. |
PrimaryServerNetworkDown | CRITICAL | Network to Production server down. | Monitoring and management of the protection cannot be performed. |
SecondaryServerNetworkDown | CRITICAL | Network to Secondary server down. | Monitoring and management of the DR cannot be performed. |
UserInputObtained001 | INFO | System obtained input. | System will resume execution of the operation which required input to proceed. |
UserInputRequired001 | INFO | System is awaiting input. | System requires input to proceed. |
UserInputRequired002 | WARNING | System is awaiting input. | System requires input to proceed. |
UserInputRequired003 | SERIOUS | System is awaiting input. | System requires input to proceed. |
UserInputRequired004 | CRITICAL | System is awaiting input. | System requires input to proceed. |
WorkflowFailure001 | INFO | Failure occurred in Workflow. | Workflow might abort or continue execution based on the configuration. |
WorkflowFailure002 | WARNING | Failure occurred in Workflow. | Workflow might abort or continue execution based on the configuration. |
WorkflowFailure003 | SERIOUS | Failure occurred in Workflow. | Workflow might abort or continue execution based on the configuration. |
WorkflowFailure004 | CRITICAL | Failure occurred in Workflow. | Workflow might abort or continue execution based on the configuration. |
RPO and RTO Monitoring
Refer to the Viewing Functional Group RPO and RTO details topic (see the Sanovi DRM
online help for more information) for how to navigate to the RPO and RTO Monitoring page.
The RPO/RTO tab in the Group Dashboard of the desired Group displays information related
to the RPO and RTO details of that Group. It displays the Current RPO and RTO values, Recovery
Point, Deviation, Transaction ID and Transaction Time on Primary and Remote, and RTO
Breakup.
Immediately after starting the NormalCopy Operation, the DR database's transaction
timestamp will always be shown ahead of the production, because the last action in
NormalFullCopy (Restore Database) is a transaction on the DR. Once the first log is
applied, the timestamp shown is always that of the transaction on the production.
The timestamp accuracy between production and DR depends on which Oracle database
version is being used. Oracle 9i and Oracle 10g provide an accuracy of +/- 5 minutes and
+/- 3 seconds respectively. As a result, Oracle 9i can compute an RPO of 0 even though the
Transaction ID at production is ahead of the Transaction ID on DR; the actual RPO, or data
loss, can be up to 5 minutes. This data loss can be up to 3 seconds in the case of Oracle 10g.
Note:
The clocks of both the production and DR should be synchronized. This can be
achieved by using a common time server or by synchronously changing the clock time on
both production and DR. If the clocks of production and DR are not the same, the
RPO calculation will always show either '0' or a negative value.
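Where a quick manual check of the lag is needed, figures comparable to what the RPO/RTO tab derives can be read directly from the standby. The following is a minimal sketch; v$dataguard_stats is one of the views the monitoring user is already expected to have access to.
SQL> select name, value, unit, time_computed from v$dataguard_stats where name in ('transport lag', 'apply lag');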
To monitor RPO and RTO values for a Group, do the following:
1. Click MONITOR on the navigation bar. The Continuity page appears. OR Click the List View link at the top right corner of the Replication Summary area on the Dashboard window.
2. Click the desired Group in the Group Name column. The Functional Group Dashboard page appears.
3. Click the desired Group in the GROUP NAME column. The Functional Group Dashboard page for the respective FG or AG appears.
4. Click the RPO/RTO tab.
The RPO/RTO tab in the Functional Group Dashboard of the desired group displays the following information.
Field | Description
App RPO Summary | Displays the Current App RPO, Recovery Point, App RPO deviation (with percentage) and Configured App RPO details. It also displays the configured App RPO details.
Primary | For database Groups, displays the Last Transaction ID of the database file and the Last Transaction Time of the latest committed transaction on the production database.
Remote | For database Groups, displays the Last Transaction ID of the database file and the Last Transaction Time of the latest committed transaction on the DR database.
RTO Summary | Displays the Current RTO, Recovery Time, RTO deviation (with percentage) and Configured RTO details.
RTO Breakup | Displays the name of each Recovery Step executed during the recovery process along with its Expected Completion Time in seconds.
Replication Monitoring
The Replication tab in the Group Dashboard of the desired Group displays the following
information related to the current replication status of that Group.
To view the DR Solution-specific replication details of the FG, perform the following
steps:
1. Click MONITOR on the navigation bar. The Continuity page appears. OR Click the List View link at the top right corner of the Replication Summary area on the Dashboard window.
2. Click the desired Group in the Group Name column. The Functional Group Dashboard page appears.
3. The Replication tab of the Functional Group Dashboard lists the DR Solution-specific replication details. The Replication tab is displayed only for the Functional Group Dashboard.
You can monitor the current status of the replication between production and DR. For
example, in the case of the SRDF protection mechanism, the details of the protection
mechanism, the specific device names, the status of the WAN link connection between
the two RDF devices, the mode of replication (synchronous or asynchronous), and the
number of invalid tracks are displayed.
Field | Description
Replication Status | Displays the status of replication. Replication status can be ACTIVE, INACTIVE or UNKNOWN. ACTIVE indicates that replication is ON. INACTIVE indicates that replication has stopped. UNKNOWN indicates that the replication agent is down or replication information could not be obtained for some reason.
Primary and remote services information | Displays the Primary and DR Protection scheme names configured for the group.
Summary
Pair Name | Displays the protection scheme name on the primary and DR.
Protection Mode | Displays the protection mode. It can be MAX_PROTECTION, MAX_AVAILABILITY, or MAX_PERFORMANCE.
DG Broker | Displays whether the DataGuard Broker is Enabled or Disabled.
Sync Status | Displays the synchronisation status. The status can be SYNCHRONIZING, SYNCHRONIZED, UNSYNCHRONIZED or UNKNOWN.
Stdby Redo | Displays whether standby Redo logging is Enabled or Disabled.
Status Information | Displays the status information of the Primary and Standby databases. It includes the Name, Host and Role of the databases.
Data Lag Details
Data Not Archived (KB) | Displays the amount of data (in KB) not yet written to the archive log file.
Data Not Received (KB) | Displays the amount of data not yet received at the DR.
Data Not Applied (KB) | Displays the amount of data not yet applied to the DR database.
Current Log | Displays the latest sequence number generated by the primary.
Last Received Log | Displays the sequence number last received at the DR.
Last Applied Log | Displays the sequence number last applied at the DR.
Fetch Archive Log Service
FAL Client | Displays the Fetch Archive Log client. For normal replication, the FAL client will be the primary. For reverse replication, the FAL client will be the DR.
FAL Server | Displays the Fetch Archive Log server. For normal replication, the FAL server will be the DR. For reverse replication, the FAL server will be the primary.
Transport Service: displays as either ON or OFF. OFF indicates the log files are not getting transported to the current DR.
Destination ID | Displays the destination ID of the DR to which the log files are getting transported.
Mode | Displays whether the transport mode is SYNCHRONOUS or ASYNCHRONOUS.
Type | Displays the type of transport service. The types are ARCH, FOREGROUND, LGWR, and RFS.
Status | Displays the status of the transport service. The status can be VALID or FAILED.
Apply Service: displays as either ON or OFF. OFF indicates the log files are not applied to the current DR.
Process | Displays the list of processes in the DataGuard setup. Process will show one of the values ARCH, FGRD, LGWR, RFS(FAL), RFS(NEXP), LNS.
Client | Displays the list of clients in the DataGuard setup. It can be ARCHIVAL, ARCH and LGWR.
Apply Delay (Min) | Displays the delay in applying the log files, in minutes.
Status | Displays the status of the apply service. The status can be VALID or FAILED.
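The figures on this tab can be cross-checked manually against the standard Data Guard views. The following is a minimal sketch, run as a privileged user; the dest_id value 2 is only an assumption about which LOG_ARCHIVE_DEST_n points at the standby.
On the primary:
SQL> select dest_id, status, archiver, transmit_mode from v$archive_dest where dest_id = 2;
SQL> select max(sequence#) from v$archived_log;
(Latest archive log sequence generated on the primary.)
On the standby:
SQL> select process, status, sequence# from v$managed_standby;
SQL> select max(sequence#) from v$archived_log where applied = 'YES';
(Last applied log sequence on the DR.)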
Configuration of BCOs
The BCOs NormalCopy, Fallback, FallbackReSync and ReverseNormalCopy are pre-packaged for Oracle Dataguard.
1. Add the Oracle Switch Log RAL.
2. Select the RAL and click the edit option, or double-click the RAL. The OracleSwitchLogOperation pop-up appears.
3. Go to the Action Properties tab and select the Primary database as the Dataset Name.
4. Click outside the pop-up window to save and publish the workflow.
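For reference, the log switch that the Oracle Switch Log RAL performs on the configured primary dataset corresponds to the standard Oracle statement below; treating the RAL as equivalent to this statement is an assumption based on the action's description, not a statement about the agent's internals.
SQL> alter system switch logfile;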
Failover Operation
This is a pre-packaged workflow provided as an XML file. On the Sanovi DRM
Server (Linux) the XML file is located at $EAMSROOT/workflows/Oracle/OracleArLogDG/BCSOracleArLogDG-FO.xml. Copy this file to your desktop, make any necessary
modifications, and use the XML import method to load it into the Failover Operation of the
group.
Refer to Configuring Continuity Operations (see the Sanovi DRM online help for more
information) for how to navigate to the Action Configuration Handler page to configure the
Failover operation.
Workflow for Failover Operation of Oracle with DataGuard is illustrated below.
Remote PreCheck
This is a Custom action. This action checks for the status (health) of Components,
Datasets and Protection Scheme discovered on the remote site.
Check Archive Gap
This is a Check Archive Gap action. This action checks the archive gap and fetches the
LOW/HIGH sequence numbers.
Is Gap Present
This is a Compare Operation action. This action performs a comparison of two objects.
Apply Pending Logs
This is a Start Managed Recovery action. This action starts the managed recovery
process and log apply services in the DR database.
Stop Managed Recovery
This is a Cancel Managed Recovery action. This action cancels the already active
managed recovery in the DR database.
Activate Standby Database
This is an 'Execute SQL' RAL action. It executes the SQL command that activates the DR (standby) database.
Shutdown Database
This is a Shutdown Database action. This action shuts down the DR database, which is then going to be opened in Read-Write mode.
Start Database Mounted
This is a Startup Database action. This action starts up the DR database in mounted mode.
Open Database Read-Write
This is an Alter Database action. This action alters the DR database to Read-Write mode.
Switch Log File
This is a Switch Log File action. This action switches the current redo log of DR database
causing it to be archived/dumped.
Verify Database State is in Read-Write
This is a Verify Database State action. This action verifies whether the DR Database
State is in Read-Write.
Resolve Archive Gaps
This is a listing action. This action identifies whether an archive gap is found. If found,
please take remedial action before proceeding.
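For reference, the actions above correspond to the standard Data Guard failover statements below, issued on the standby. This is only a minimal manual sketch; the pre-packaged workflow remains the supported method.
SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;
(Check Archive Gap: any rows returned indicate missing logs that should be resolved first.)
SQL> alter database recover managed standby database cancel;
(Stop Managed Recovery.)
SQL> alter database activate standby database;
(Activate Standby Database.)
SQL> shutdown immediate
SQL> startup mount
SQL> alter database open;
(Open Database Read-Write.)
SQL> alter system switch logfile;
(Switch Log File, so that the first redo of the new primary is archived.)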
Test Exercise Configuration
The Oracle Archive Log with DataGuard solution supports the following test
exercises:
 Switchover
 Switchback
Refer to Configuring Test Exercises (see the Sanovi DRM online help for more information)
for how to configure the test exercises.
Switchover Operation
This is a pre-packaged workflow provided as an XML file. On the Sanovi DRM
Server (Linux) the XML file is located at $EAMSROOT/workflows/Oracle/OracleArLogDG/BCSOracleArLogDG-SO.xml. Copy this file to your desktop, make any
necessary modifications, and use the XML import method to load it into the Switchover Test
Exercise of the group. Import the same workflow XML file for Switchback as well.
Refer to Configuring Test Exercises for information on navigating to the Action
Configuration Handler page to configure the Switchover operation.
When the Switchover operation is performed, the Group state changes from Normal
Inactive to Switchover Transit. The screen displays a green-coloured progress bar
exhibiting the status of the operation.
At the end of the Switchover operation, the Configured Properties and the Current
Properties are changed, i.e. the Production and DR sites that were configured at the
time of Site Configuration are interchanged (the Production site becomes the DR site and
vice versa).
The actions requiring inputs during execution of this operation are given below:
 Shutdown Application
This action displays a message box indicating that you should shut down all the applications
connected to the Oracle database before starting the Switchover operation.
Shut down all the applications connected to the Oracle database and click OK on the
message box.
Note
Only a group in Normal mode can execute Switchover.
Production Server pre-check
This is a Custom action. This action checks for the status (health) of Components,
Datasets and Protection Scheme discovered on the production site.
DR Server pre-check
This is a Custom action. This action checks for the status (health) of Components,
Datasets and Protection Scheme discovered on the remote site.
Backup Production Control File
This is a Backup Control File action. This action backs up the control file of the
production database.
Verify Production Database Mode
This is a Verify Database State action. This action checks Production database mode.
Verify DR Database Mode
This is a Verify Database State action. This action checks Remote database mode.
Switch Redo Log at Production
This is a Switch Log action. This action switches the current redo log causing it to be
archived/dumped at Production.
Apply Logs at DR
This is a Start Managed Recovery action. This action applies the remaining archive logs
that are received from Production Server which are yet to be applied on the DR
database.
Get Current Sequence Number
This is a Get Current Sequence Number action. This action gets the current archive log
sequence number.
AssignAction
This is an Assignment Operation action. This action assigns the value of the key
"PANORA_VERIFY_CURR_SEQ_NUM" to the key "PANORA_GET_CURR_SEQ_NUM".
Wait for sync
This action ensures that the Sequence Number is applied.
Clear Job Queue
This action resets Job Queue Processes.
Switch Production Database to Standby Mode
This is an Alter Database State action. This action switches the current Production database state to Standby.
Shutdown Production Database
This is a Shutdown Database action. This action shuts down the current Production database.
Start Production Database Unmounted
This is a Startup Database action. This action starts up the Production database 'Unmounted'.
Mount Production Database as Standby
This is an Alter Database State action. This action alters the Production database state to Standby.
Switch database to primary
This is an Alter Database State action. This action switches the DR database to Production.
Shutdown DR Database
This is a Shutdown Database action. This action shuts down the current DR database.
Start DR Database Mounted
This is a Startup Database action. This action starts up the DR database 'Mounted'.
DR Archive Log Start
This action starts DR Archive Logging.
DR Archive Log
This action alters DR database Archive Log.
Open DR Database Read-Write
This is an Alter Database State action. This action opens DR Database in Read-Write
mode.
Production Start Media Recovery
This is a Start Managed Recovery action. This action starts the managed recovery
process and log apply services in the production database.
DR Switch Redo Log File
This is a Switch Log File action. This action switches the current redo log of DR database
causing it to be archived/dumped.
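For reference, the role transition that this workflow automates corresponds to the standard Data Guard switchover statements below. This is only a minimal manual sketch under the assumption of a healthy, synchronized configuration; the pre-packaged workflow remains the supported method.
On the current production (primary):
SQL> alter database commit to switchover to physical standby with session shutdown;
SQL> shutdown immediate
SQL> startup nomount
SQL> alter database mount standby database;
On the current DR (standby):
SQL> alter database commit to switchover to primary;
SQL> shutdown immediate
SQL> startup
On the new standby (the old production), restart managed recovery:
SQL> alter database recover managed standby database disconnect from session;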
Failover Operation with AWS
A sample workflow for a non-intrusive failover test exercise for the Oracle Data Guard solution
is supported for this product.
This is a pre-packaged workflow provided as an XML file. On the Sanovi DRM
Server (Linux) the XML file is located at $EAMSROOT/workflows/Oracle/OracleArLogDG/BCSOracleArLogDG-FO.xml. Copy this file to your desktop, make any
necessary modifications, and use the XML import method to load it into the Failover Test
Exercise of the group.
Switchback Operation
This is a pre-packaged workflow provided as an XML file. On the Sanovi DRM
Server (Linux) the XML file is located at $EAMSROOT/workflows/Oracle/OracleArLogDG/BCSOracleArLogDG-SB.xml. Copy this file to your desktop, make any
necessary modifications, and use the XML import method to load it into the Switchback Test
Exercise of the group. Import the same workflow XML file for Switchover as well.
Refer to Configuring Test Exercises (see the Sanovi DRM online help for more information)
for navigating to the Action Configuration Handler page to configure the Switchback
operation.
When the Switchback operation is performed, the Group state changes from Switchover
Inactive to Switchback Transit. The screen displays a green-colored progress bar
exhibiting the status of the operation.
At the end of the Switchback operation, the Production site comes live and the DR site
becomes stand-by. So the Configured Properties and the Current Properties displayed on
the screen are the same at the end of the Switchback operation (i.e. the configured Production
site is the current Production site and the configured DR site is the current DR site).
The action requiring inputs during execution of this operation is given below:
 Shutdown Application
Note
Only a group in Normal mode can execute Switchback.
Workflow for Switchback Operation of Oracle Archive Logs with DataGuard is illustrated
below.
Production Server pre-check
This is a Custom action. This action checks for the status (health) of Components,
Datasets and Protection Scheme discovered on the production site.
DR Server pre-check
This is a Custom action. This action checks for the status (health) of Components,
Datasets and Protection Scheme discovered on the remote site.
Backup Production Control File
This is a Backup Control File action. This action backs up the control file of the
production database.
Verify Production Database Mode
This is a Verify Database State action. This action checks Production database mode.
Verify DR Database Mode
This is a Verify Database State action. This action checks Remote database mode.
Switch Redo Log at Production
This is a Switch Log action. This action switches the current redo log causing it to be
archived/dumped at Production.
Apply Logs at DR
This is a Start Managed Recovery action. This action applies the remaining archive logs
that are received from Production Server which are yet to be applied on the DR
database.
Get Current Sequence Number
This is a Get Current Sequence Number action. This action gets the current archive log
sequence number.
AssignAction
This is an Assignment Operation action. This action assigns the value of the key
"PANORA_VERIFY_CURR_SEQ_NUM" to the key "PANORA_GET_CURR_SEQ_NUM".
Wait for sync
This action ensures that the Sequence Number is applied.
Clear Job Queue
This action resets Job Queue Processes.
Switch Production Database to Standby Mode
This is an Alter Database State action. This action switches the current Production database state to Standby.
Shutdown Production Database
This is a Shutdown Database action. This action shuts down the current Production database.
Start Production Database Unmounted
This is a Startup Database action. This action starts up the Production database 'Unmounted'.
Mount Production Database as Standby
This is an Alter Database State action. This action alters the Production database state to Standby.
Switch database to primary
This is an Alter Database State action. This action switches the DR database to Production.
Shutdown DR Database
This is a Shutdown Database action. This action shuts down the current DR database.
Start DR Database Mounted
This is a Startup Database action. This action starts up the DR database 'Mounted'.
DR Archive Log Start
This action starts DR Archive Logging.
DR Archive Log
This action alters DR database Archive Log.
Open DR Database Read-Write
This is an Alter Database State action. This action opens DR Database in Read-Write
mode.
Production Start Media Recovery
This is a Start Managed Recovery action. This action starts the managed recovery
process and log apply services in the production database.
DR Switch Redo Log File
This is a Switch Log File action. This action switches the current redo log of DR database
causing it to be archived/dumped.
Recipients | Modify the e-mail address of the report recipients. If the field is left blank, the report will be sent to the e-mail address of the current user.
5. Click Submit after modifying the scheduled details. The user will be redirected to the Scheduled Reports page.
Troubleshooting Information
Applying Logs Manually
There are very rare situations when NormalCopy stops because a certain archive log has
not been applied at the DR site. This can happen in many ways: the replicator (PFR,
DataGuard, Hitachi, HPXP or NetApp SnapMirror) did not deliver the log to DR, disk
space was full, a date/time problem occurred, the apply procedure did not pick up the log, the
system was down, and so on. To overcome this, you have to manually identify and apply the
missed log so that NormalCopy can continue.
Start applying logs manually only after all the logs from the production instance have been
transferred to the standby machine. The standby database knows the next file that
has to be applied. Until that file is applied, the standby database will not apply the other
logs that are available.
To apply log manually, perform the following steps:
From the SQLPlus prompt of the standby instance, type the following:
recover automatic standby database until cancel
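Before applying logs manually, the missing sequence range can usually be identified from the standby itself. The following is a minimal sketch using standard Data Guard views, run as a privileged user on the standby.
SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;
(Shows the range of missing archive log sequence numbers, if any.)
SQL> select max(sequence#) from v$archived_log where applied = 'YES';
(Shows the last applied sequence number.)
Once the missing archive logs are present on the standby, run the recover command above so that NormalCopy can continue.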
How to set the primary DB in Archive Mode?
1. Log on to the server using the Oracle OS user.
2. Open a command prompt (on Windows) or a terminal (on Solaris).
3. Enter the following commands one after the other:
sqlplus /nolog    (assumes the Oracle OS profile is configured properly)
SQL> conn / as sysdba
SQL> select status from v$instance;    (the status should be OPEN if the database is open for access)
SQL> archive log list
The following information is displayed:
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /voonna/u01/app/oracle/product/dbs/arch
Oldest online log sequence     29
Current log sequence           30
4. Enter the command:
SQL> show parameter log_arch
(Both commands give the information on Archive log mode.)
5. When the database is in archive mode, the value of "Database log mode" is shown as "Archive Mode"; when the database is not in archive mode, it is shown as "No Archive Mode".
To change the database mode, perform the following steps (only if the database is not already in Archive Mode). If the database is currently open, it should be shut down first:
SQL> shutdown immediate    (this command shuts down the database)
Then run the following commands to put the database in archive log mode:
SQL> connect / as sysdba
SQL> startup mount;
(The following commands are required if the database is using an spfile; otherwise, set these parameters in the database's init.ora file.)
SQL> alter system set log_archive_dest_1 = 'location=/u06/app/oradata/ORA920/archive MANDATORY' scope=spfile;
SQL> alter system set log_archive_dest_state_1 = 'enable' scope=spfile;
SQL> alter system set log_archive_format = 'arch_t%t_s%s.dbf' scope=spfile;
SQL> alter system set log_archive_start = true scope=spfile;
The following commands apply whether an spfile or a pfile is used:
SQL> alter database archivelog;
SQL> alter database open;
Verification of the database mode:
SQL> archive log list
The above command displays the following information:
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /voonna/u01/app/oracle/product/dbs/arch
Oldest online log sequence     29
Current log sequence           30
SQL> show parameter log_arch
This command also shows the mode of the database, as displayed previously.
How to start the Oracle database manually at the DR site
When the DR server reboots after recovering from a disaster, the Oracle services may not start automatically. In this case, you need to perform the following steps to start the Oracle services at the DR site. This situation occurs only in the DR site scenario, because the server is in standby mode.
1. Go to the SQL prompt.
2. Start the instance:
Using a pfile:
SQL> startup nomount pfile='<pfile name with path>';
Using an spfile:
SQL> startup nomount;
3. Mount the standby database:
SQL> alter database mount standby database;
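A complete example session is sketched below; the pfile path is illustrative only, and whether managed recovery needs to be restarted afterwards depends on how log apply is driven in your configuration:
SQL> connect / as sysdba
SQL> startup nomount pfile='C:\oracle\admin\coreBank\pfile\init_stdby.ora'
SQL> alter database mount standby database;
SQL> alter database recover managed standby database disconnect from session;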
How to run SQLPlus?
There are two ways to run SQL*Plus for SQL command verification:
1. sqlplus "<username>/<password>@<sid> as sysdba"
The example follows:
C:\>sqlplus "sys/sanovi@coreBank as sysdba"
SQL*Plus: Release 9.2.0.1.0 - Production on Wed Apr 20 09:59:54 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
SQL>
2. sqlplus /nolog
SQL> connect <username>/<password>@<sid> as sysdba;
The example follows:
C:\>sqlplus /nolog
SQL*Plus: Release 9.2.0.1.0 - Production on Wed Apr 20 10:07:00 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
SQL> connect sys/sanovi@coreBank as sysdba
Connected.
SQL>
Appendix
Audit Records in Windows Application Log
If the Oracle auditing option is enabled, then each time an archive log is dumped or applied, Oracle writes an audit record. These audit records go to the file system (on UNIX) or the Event Log (on Windows).
Hence, if the Oracle audit option is enabled and the dump and apply log interval configured in Sanovi DRM is frequent, then during the NormalCopy operation the user may want to monitor the event/system log (depending on the platform) and periodically back it up and purge it, to keep it from filling up.
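To check whether Oracle auditing records are being written to the operating system (and hence to the Event Log on Windows), a query such as the following can be used (a sketch; the value depends on your configuration):
SQL> show parameter audit_trail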
Primary Database requires manual intervention at the Standby site
Most changes to the production database are automatically propagated to a standby database through archived redo logs and therefore require no user intervention. Nevertheless, some changes to a production database require manual intervention at the standby site in the following situations.
1. If the online redo logs are cleared or lost on the production database, the standby database goes out of sync with the production database. That is, if:
- a redo log is lost on the production database, or
- the user has run a command such as ALTER DATABASE CLEAR LOGFILE GROUP integer; on the primary, or
- the user has opened the production database with the OPEN RESETLOGS option after the standby database was created (using the NormalFullCopy operation),
then the standby database is said to be out of sync with the production database, and the user has to re-synchronize the standby database with the production database.
2. If the user has created a control file on the production database using a command such as:
CREATE CONTROLFILE [REUSE] [SET] DATABASE database
LOGFILE [GROUP int] filespec
[RESETLOGS | NORESETLOGS]
DATAFILE filespec options
then the standby control file must be recreated at the DR site.
3. If you perform media recovery and open the database with the RESETLOGS option, the standby database goes out of sync with the production database. In this scenario, you have to re-synchronize the standby database with the production database.
Note
An event will also be raised indicating to the user that the standby database is out of sync.
4. When the user adds a data file or creates a tablespace in production using commands such as:
SQL> CREATE TABLESPACE tbs_2 DATAFILE 't_db2.f' SIZE 2M;
or
SQL> ALTER TABLESPACE tools ADD DATAFILE 'c:\oracle\oracledata\orabase\tools02.tom' SIZE 20M;
the data files need to be created manually on the standby. At this point, in Sanovi DRM, either the NormalCopy action has raised an event stating that it is unable to continue recovery, or NormalCopy could still be in progress. If NormalCopy has stopped after raising an event, restart NormalCopy in Sanovi DRM.
5. When the user drops a data file or tablespace in production, the user needs to remove the data file(s) from the DR site manually.
Note:
This is applicable only if the Oracle database initialization parameter STANDBY_FILE_MANAGEMENT is not set to AUTO. If it is set to AUTO, the change is propagated to the standby database automatically (see the example after this list).
6. Auto-extending a data file in production may cause the standby recovery to fail if there is not enough disk space at the DR site for the data file to extend. Hence, the user may need to ensure that enough disk space is available at the DR site for the file being extended, and then restart NormalCopy in Sanovi DRM if it has stopped after raising an event.
7. Whenever the user opens the production database with the OPEN RESETLOGS option (while the DR site is in standby recovery mode), or removes any archive log files that have not yet been applied, using a command such as:
SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP <integer>;
the standby database recovery cannot proceed and will fail. In this case, the user has to restart the NormalFullCopy operation in Sanovi DRM to re-synchronize the production and standby databases.
Note
An event will be raised by Sanovi DRM indicating to the user to do a NormalFullCopy.
8. Whenever the user updates the init.ora of the production database while the DR site is in standby recovery mode, the user has to manually propagate the init.ora to the DR site. Whenever the user updates the SPFILE with a command such as:
SQL> alter system ... scope=SPFILE
or
SQL> alter system ... scope=both
the user needs to generate the init.ora using the command:
SQL> create pfile=<pfile_name> from spfile[=<spfile_name>]
and then manually copy this pfile (init.ora) to the DR site as the standby init.ora, in $ORACLE_HOME/<standby ORACLE_SID>/Init_stdby.ora.
9. Whenever the user performs a password change or any other action that modifies the Oracle password file, the user has to manually copy the Oracle password file from the production site to the DR site.
Note
An event will be raised by Sanovi DRM indicating to the user to do the same on the standby database. This Note is common to point #8 and point #9.
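For reference, the following SQL*Plus sketch shows how STANDBY_FILE_MANAGEMENT could be set to AUTO (relevant to point 5) and how a pfile could be generated from an spfile (relevant to point 8). The file path is illustrative only, and scope=both assumes an spfile is in use:
SQL> alter system set standby_file_management = AUTO scope=both;
SQL> create pfile='/u01/app/oracle/dbs/init_stdby.ora' from spfile;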
Sample Listener.ora file entries
# LISTENER.ORA Network Configuration File: C:\oracle\ora92\network\admin\listener.ora
# Generated by Oracle configuration tools.
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = primaryServer)(PORT = 1521))
)
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = coreBank)
(ORACLE_HOME = C:\oracle\ora92)
(SID_NAME = coreBank)
)
)
Sample Tnsnames.ora file entries
# TNSNAMES.ORA Network Configuration File: C:\oracle\ora92\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.
COREBANK =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = primaryServer)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = coreBank)
)
)
Glossary
A
Agent: It is a lightweight Panaces™ Software Component that runs on a customer application server to manage a specific object (dataset, service, or component).
Archived Redo Log: Oracle allows you to save filled groups of online redo log files to one or more offline
destinations, known as the archived redo log.
Archiving: The process of turning online redo log files into archived redo log files is called archiving. This
process is only possible if the database is running in ARCHIVELOG mode.
Asynchronous Replication: Under asynchronous replication, updates to the application on the Source server
are persistently queued to be forwarded to the Target server.
B
BCS Module: It is a Panaces™ Software Component that runs on the Panaces™ Master Server and provides Enterprise Continuity Solution intelligence for different types of solutions for a specific database or filesystem.
Business Continuity Mode (BCM): Represents the current state of an Availability Group whose data is
protected. Typical Modes are: Normal, Failover, and Fallback.
Business Continuity Solution (BCS): A data management solution that provides the capability to protect the Production Data of a Business Application at an Alternate Site (called the DR site) during normal conditions. In case of a disaster where the production site goes down, it provides the capability to recover the application, with data as current as possible, at the Alternate Site within the specified recovery objectives.
Business Continuity State: Business Continuity State represents the relationship of the Primary Dataset with respect to the DR Dataset. Supported Business Continuity States are: Normal, Switchover, Failover, and Fallback.
C
Component: It is an infrastructure object such as a Windows Server that participates in a Disaster Recovery
Solution.
Configured DR Server: The database server initially configured to store the backup of the data replicated from the primary server is known as the configured DR server. This server is located at a different location and is also called the remote or secondary server.
Configured DR Site: The location of the configured DR server is called the configured DR site.
Configured Primary Server: The database server initially configured as the primary (production) server. The data volumes to be replicated reside on this server.
Configured Primary Site: The location of the configured primary server is called the configured primary site.
Configured Remote Server: The database server initially configured to store the backup of the data replicated from the primary server is known as the configured remote server. This server is located at a different location and is also called the remote server.
Configured Remote Site: The location of the configured remote server is called the configured remote site.
Current DR Server: The database server currently functioning as the DR server is called the current DR server. This server backs up the data replicated from the current primary server and is located at a different location.
Current DR Site: The location of the current DR server is called the current DR site.
Current Primary Server: The database server currently functioning as the production server is called the current primary or production server. The data gets replicated from this server to the current remote server.
Current Primary Site: The location of the current production server is called the current primary site.
Current Remote Server: The database server currently functioning as the remote server is called the current remote server. This server backs up the data replicated from the current primary server and is located at a different location.
Current Remote Site: The location of the current remote server is called the current remote site.
Cyclic Redundancy Check (CRC): CRC is performed to check the consistency of data on two different
machines, especially used when transferring files from one location to another.
D
Dataset: Indicates all related data that is the object of replication and/or management by Panaces™. For
example, for Oracle - data files, control files and configuration files form the dataset.
Dependent Component object: A Component on which the Dataset associated with the Group under consideration is based is called a dependent Component object.
Disaster Recovery Solution: A data management solution that provides the capability to protect the Production Data of a Business Application at an Alternate Site (called the remote site) during normal conditions. In case of a disaster where the production site goes down, it provides the capability to recover the application, with data as current as possible, from the data residing at the Alternate Site, within the specified recovery objectives.
DR Server: The server residing on the remote site is called the DR server. This server contains a copy of the data (backup) of the production server. The DR server is also referred to as the remote server.
F
Failover operation: A BCM where the business is in the process of moving Production to the DR site, or has moved to the DR site and is running production at the DR site. In this case, data protection no longer happens; only the DR site (current production) has the latest data, and the Primary site does not get updates. The starting BCM is 'Normal'.
Fallback operation: A BCM where the business is in the process of moving Production from a DR site to the
Primary site. Starting BCM is "Failover".
FIRST: Panaces provides the following features, referred to by the acronym FIRST: F - Failover/Fallback (Continuity), I - Incident, R - RPO/RTO, S - Security (Data Protection), T - Test Exercises.
FlashCopy: The FlashCopy feature, also known as point-in-time snapshot copy, is designed to provide the
ability to create full volume copies of data. When you set up a FlashCopy operation, a relationship is
established between source and target volumes, and a bitmap of the source volume is created. Once
this relationship and bitmap are created, the target volume can be accessed as though all the data
had been physically copied. That is, any read of target data is implemented by FlashCopy by reading
the corresponding location of the source data. Whenever a write occurs on the source volume,
FlashCopy will first copy the old source data to the target before the write is done on the source. This
way the target volume's point-in-time snapshot is preserved.
Functional Group: Representation of a DR solution comprising Datasets, a Protection Scheme, and Components, their inter-dependencies, and other solution parameters of a specified application (database). Typically, a Functional Group is created for each application whose data protection needs to be managed by Panaces™.
G
Global Copy: Global Copy copies data non-synchronously over long distances. During Global Copy the source
volume sends a periodic, incremental copy of updated tracks to the target volume instead of sending a
constant stream of updates. This causes less impact to application writes for source volumes and less
demand for bandwidth resources, while allowing a more flexible use of available bandwidth. Global
Copy does not maintain the sequence of write operations.
Global Mirror: Global Mirror provides an unlimited long-distance remote copy feature across two sites using
the asynchronous technology of Global Copy combined with FlashCopy. With Global Mirror, the data
that the host writes to the storage media at the local site is asynchronously shadowed to the storage
media at the remote site.
Group: A Group constitutes Components, Datasets, and Protection Schemes. In general, group refers to a Functional Group. The group represents the data to be replicated (datasets), the location of the data (Component), and the mechanism used to replicate the data (Protection Scheme).
I
Incident: A condition that leads to or causes disruption of continuous access to production server, application
or dataset.
J
Job: An operation or function (like backup/restore) performed through BrightStor ARCserveBackup.
Job Number: A unique number assigned to a job to identify the job in the Job Queue.
L
Listener: When an instance starts, a network listener process establishes a communication pathway to Oracle. When a connection request arrives, the listener establishes an appropriate connection.
Logical Log: The logical-log files store a record of database server activity that occurs between backups. The
database server reuses the freed logical-log files for recording new transactions, and to free full
logical-log files, they need to be backed up. The process of copying a logical-log file to media is
referred to as backing up a logical-log file. A manual logical-log backup backs up all the full logical-log
files and stops at the current logical-log file. In continuous logical-log backup, the database server
backs up each logical log automatically when it becomes full. If continuous logical-log backup is turned
off, the logical-log files continue to fill. If all logical logs are filled, the database server hangs until the
logs are backed up.
Logical Restore: Replaying logical log files during a restore is called a Logical Restore. The database server uses temporary logical logs to roll forward transactions during a warm restore, because the permanent logs are not available then. When the roll forward completes, the database server frees the temporary log files. If you issue onstat -l during a warm restore, the output includes a fourth
section on temporary log files in the same format as regular log files. Temporary log files use only the
B, C, F, and U status flags. Note: Because the logical-log files are replayed using temporary space
during a warm restore, ensure that you have enough temporary space for the logical restore. The
minimum amount of temporary space that the database server needs is equal to the total logical-log
space for the database server instance, or the number of log files to be replayed whichever is smaller.
To improve performance, replay logical-log transactions in parallel during a warm restore. Use the
ON_RECVRY_THREADS configuration parameter to set the number of parallel threads.
Logical Subsystem (LSS): A Logical Subsystem (LSS) groups logical volumes (LUNs) in groups of up to 256 LUNs. There can be up to 255 LSSs defined on a DS8000.
LUN Id: A LUN Id is a four-digit hexadecimal number of which the left two digits are the LSS Id. Therefore, if the LUN Id is 1F0A, then the LSS it belongs to is 1F.
M
Mounting: Associating the instance with a specific database is mounting a database.
O
Online Redo Log: The online redo log consists of two or more preallocated files that store all changes made to the database as they occur. Every instance of an Oracle database has an associated online redo log to protect the database in case of an instance failure.
Oracle Instance: The combination of the memory area called the System Global Area (SGA) and its associated Oracle processes is an Oracle instance.
P
Panaces™ Client: An end application or database server that runs one or multiple Panaces™ Agents.
Panaces™ Master Server: It is a dedicated Server that runs the Panaces™ Master software and has the
primary ownership of the entire DR infrastructure.
PIT data copy: A fully usable copy of a dataset that contains an image of the data as it appeared at a single
point-in-time. The different types or implementations of Point-In-Time(PIT) copy technology are Split
Mirror, Changed Block, Concurrent, Clone, etc. The copy is considered to have logically occurred at
that point-in-time, but implementations may perform part or all of the copy at other times as long as
the result is a consistent copy of the dataset as it appeared at that point-in-time.
Primary Server: The server with the production database, located at the primary site, is called the primary server. This is also called the Production Server. The data replicates from the primary server to the DR server.
Primary Site: A location where the production database server resides.
Production Server: The server with the production database is called the production server. This could be either the primary or the DR server, depending on the BCO. During normal situations, the production data resides at the primary server; hence, the primary server is called the production server.
R
Remote Server: The server residing on the remote site is called Remote server. This server contains a copy of
the data (backup) of the production server.
Remote Site: A Site where a copy of the production data is transferred and maintained as a backup copy is
called Remote site.
S
Secondary Site: A location where a copy of the production data is transferred and maintained as a backup
copy is called Secondary site.
Source Dataset: This is the dataset on the site that contains the Master or source image of the data. This is
the copy source for any of the PIT data copies that exist on the site.
Storage Volume: A storage volume is a LUN presented from a storage array. A storage volume is based on
physical media or a more complicated RAID configuration.
Synchronous Replication: Synchronous replication ensures that an update has been posted to the Target server and acknowledged to the Source server before completing the update at the Source server.
For information on the following Oracle terms, refer to the respective Oracle guide: Control File, Database Initialization File, Data Volumes, Password File, Oracle SID, Server Parameter File (SPFILE), Trace file, udump, cdump, bdump, Tablespace, Oracle's auditing option.
Index
C
Custom Report Framework 53
Customizing Report 47
D
DR Drill/Test Report 36
P
Print Report 46
R
Report Customization 47
RTO Report 47
V
View Reports 36
W
Windows Application Log 65