Focus Area: DBA
Author: Jason Cross
ORACLE 10G RAC AND ASM: PUTTING THE PIECES TOGETHER
Jason Cross,
INTRODUCTION
Understanding the terms and concepts encompassing Real Application Clusters (RAC) and Automatic Storage Management (ASM) can be a daunting task even for the most seasoned Information Technology (IT) Manager or Technical Professional.
Gaining a solid understanding of the terms and concepts is vital to proposing the best database solutions to customers and to
building a well-architected Oracle RAC environment.
The “Oracle 10g RAC and ASM: Putting The Pieces Together” white paper presents an overview of the components that make up an Oracle RAC and ASM solution. Building on that understanding, it then explains how the pieces fit together to form a working solution.
RAC AND ASM OVERVIEW
The release of Oracle 9i marked the introduction of the RAC option to Oracle’s RDBMS product offering. RAC addresses
the need for High Availability (HA) at the server level. Unlike the traditional Standby database feature, RAC made HA
database architecture “active”. Each node (a.k.a. database server) of the RAC cluster actively participates in the daily
operations of the Oracle database.
The release of Oracle 10g marked the introduction of the ASM option. ASM addresses the need for HA at the disk storage
level by enabling the mirroring of database files across physical disk partitions. Similar to traditional RAID technology, ASM
allows both 2-way and 3-way mirroring of database files across disk partitions.
THE HARDWARE PIECE
There are numerous options available when choosing the hardware for an Oracle RAC environment. To ensure the chosen
hardware meets the minimum systems requirements, refer to the Oracle® Real Application Clusters Administrator’s Guide
10g Release 1 (10.1) Part No. B10765-02, published June 2004. Quick Installation Guides are also available for the various
operating systems supported by Oracle.
For each node of the cluster, the following hardware pieces are needed:
• 1 host server
• 2 Network Interface Cards (NIC)
• 2 Disk Controller Cards
• Local disk storage space
For each cluster, the following hardware piece is needed:
• 1 shared disk storage device
The Oracle base software is installed on the local disk space of each host server. Each host server contains an Oracle database instance, and the host services the memory and CPU requirements of the running database instance. The two disk controller cards control access to the local disk storage space and to the shared storage device, respectively. The two Network Interface Cards are configured with a public and a private IP address, respectively. The combination of a host server, its hardware components and an Oracle instance defines a node.
The physical database files (online and archived redo logs, control files, data files and the server parameter file) reside on a
shared disk storage device accessible by all nodes of the cluster. Database I/O requests are routed through the connection
between the disk controller card of each host server and the shared disk storage device. SCSI and FireWire cables are two of the options available for making the connection between the host server and the shared disk storage device.
The local disk space on each host can be formatted with the file systems native to the host server's operating system. For a Windows environment, NTFS formatting should be used; for Unix environments, local disks are formatted with UFS.
The shared disk space must not be formatted, but rather configured as RAW partitions. Traditional operating system file systems perform their own data buffering and file read/write access control, which is not compatible with Oracle clustering software. On a Unix cluster that does not use ASM, the shared disk space can be formatted with a Cluster File System as long as Veritas Cluster File System software is used to manage data buffering and file read/write access.
THE NETWORK PIECE
Once the nodes have been assembled and connected to disk storage devices, the nodes of the cluster will need to
communicate with each other and with the public domain outside of the cluster. For each node, the following network pieces
are needed:
• 1 public IP address (PUIP)
• 1 private IP address (PRIP)
• 1 virtual IP address (VIP)
The public IP address is used for communications between a cluster node and the public domain outside of the cluster. One
of the Network Interface Cards of the node is configured with the public IP address. The public IP address should be stored
in the Domain Name System (DNS) and not in the host file of the host servers within the cluster.
The private IP address is used for communication between the nodes of the cluster. Synchronization and heartbeat
information is distributed via the private IP address. One of the Network Interface Cards of a node is configured with the
private IP address. The private IP addresses assigned to all cluster nodes are stored in the local host file on each node.
For each node in the cluster there needs to be a virtual IP address configured within the DNS. The VIP is not statically
assigned to a Network Interface Card. Instead, Oracle clustering software dynamically links the Virtual IP address to a public
IP address assigned to the nodes of the cluster. When a node becomes unavailable, Oracle adjusts the links between virtual IP
addresses and public IP addresses to ensure database connections are routed to the available nodes.
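As an illustration only, the local host file on each node of a hypothetical 2 node cluster might contain private interconnect entries similar to the following (the node names and addresses are examples, not recommendations), while the corresponding public and virtual IP addresses would be registered in DNS.
# Private interconnect entries in the local host file of each node
10.0.0.11    racnode1-priv
10.0.0.12    racnode2-priv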
THE ORACLE CLUSTER READY SERVICES (CRS) PIECE
The first Oracle product that needs to be installed on the host servers is CRS. The CRS installer is launched from one node
of the cluster and will propagate the installation to all nodes selected during the installation. The CRS software is installed on
the local disk space of each node and a CRS home directory is created. There are two vital CRS components that are created
for the cluster. They are:
• Voting Disk
• Oracle Cluster Registry (OCR)
Both the Voting Disk and the OCR are created on the shared disk device with separate RAW partitions required for each file.
The Voting Disk partition should be sized at a minimum of 20 MB, while the OCR disk partition should be sized at a minimum of 100 MB.
The Voting Disk contains dynamic heartbeat and synchronization information about the working cluster. The OCR contains
specific information about the nodes within the cluster and the overall configuration of the cluster.
The executables crs_stat, crs_start and crs_stop, located in the CRS_HOME bin directory, can be used to view the status of, start, and stop the CRS resources across all cluster nodes. The most commonly used forms of the CRS commands are:
• “crs_stat” – lists the current status of all cluster-wide resources.
• “crs_start -all” – starts all components of the RAC cluster, including databases, instances and services.
• “crs_stop -all” – stops all components of the RAC cluster, including databases, instances and services.
THE ORACLE SERVER SOFTWARE PIECE
Once the CRS installation has been completed, the Oracle Server software can be installed. Similar to the CRS install, the
Oracle Server installation is launched from one node of the cluster and will propagate the software binaries to all cluster
nodes. The Oracle Server software is installed on the local disk space of each node and a new home directory is created.
During the installation, the installer can choose to set up a database using the Database Configuration Assistant or defer
database creation until after the Oracle Server software installation has been completed.
THE CREATING STAMPED DISKS FOR ASM PIECE
Before RAW partitions can be used by ASM, they must be “stamped”. The method to stamp a disk differs on each of the
various operating systems. On Windows systems the asmtoolg.exe GUI tool is used to stamp a disk while on Unix the
mknod command is used. A disk stamp is a link name used by ASM to reference a RAW partition. The link names are the
same across all nodes of the cluster.
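As a rough sketch only, the raw device link referenced by ASM might be created on a Linux system as follows; the device numbers, device names and partition name are hypothetical and the exact steps differ by platform and release.
# Create a raw character device node (162 is the raw device major number on Linux;
# the minor number is arbitrary for this sketch)
mknod /dev/raw/raw1 c 162 1
# Bind the raw device to the shared disk partition (partition name is hypothetical)
raw /dev/raw/raw1 /dev/sdb1
# Give the Oracle software owner access to the device
chown oracle:dba /dev/raw/raw1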
THE DATABASE CONFIGURATION ASSISTANT (DBCA) PIECE
The DBCA is used to create the ASM instances and database instances for each node of the RAC cluster and to create the
Oracle database. The DBCA is launched from one node of the cluster, and the cluster-wide ASM instances, Oracle instances and the database are created from this node. During the configuration, ASM disk groups are created on which the database files will reside. It is recommended to change as few of the DBCA's default database parameters as possible.
The creation of additional tablespaces, online redo log groups and members and parameter file customizations can be done
after the bare-bones RAC database has been created with the DBCA. Keeping the DBCA defaults will help tremendously in
troubleshooting DBCA errors that may occur.
THE AUTOMATIC STORAGE MANAGEMENT (ASM) PIECE
ASM is used to define and administer the storage structures on which the RAC database files reside. Each node of the cluster
has a separate Oracle instance that contains information about the ASM storage structures. A started ASM instance is
mounted but never opened as a physical database. Create and Alter commands can be executed against the ASM instance to
manage the logical storage components. The following are 3 key ASM storage components:
• ASM Disks
• ASM Disk Groups
• ASM Failure Groups
The ASM disks are synonymous with the ASM link names assigned during the disk stamping process. ASM disks are the
physical building blocks for creating logical ASM storage structures.
ASM disks are grouped together to form ASM disk groups. ASM disk groups can be created during the DBCA RAC
database creation or manually by connecting to the ASM instance and executing a create diskgroup statement. A required
attribute of the ASM disk group is the redundancy level. The redundancy level defines whether the database files created on
the disk group will be mirrored and to what level they will be mirrored. The following values can be specified for the
redundancy level attribute of an ASM disk group:
• External
• Normal
• High
Database files that reside on external redundancy disk groups are not mirrored by ASM. Thus, there is only one instance of
each database file stored within the disk group. External redundancy is normally used when data mirroring is being performed
at the hardware level, external to ASM.
Normal redundancy disk groups have 2-way mirroring for each of the database files within the disk group. Thus, there are
two identical copies of each database file within the disk group.
High redundancy disk groups have 3-way mirroring for each of the database files within the disk group. Thus, there are three
identical copies of each database file within the disk group.
The mirroring of database files created on a normal or high redundancy disk group is performed by ASM.
Failure groups are used for normal and high redundancy disk groups. For a normal redundancy disk group that has 2-way
mirroring, 2 failure groups are required. For a high redundancy disk group that has 3-way mirroring, 3 failure groups are
needed. Failure groups are sets of ASM disks across which ASM distributes the mirrored copies of the database files.
Example 1: Create Diskgroup Statement
create diskgroup asm_ndg_03
normal redundancy
failgroup asm_fdg_03a
disk '\\.\orcldiskdata4' name asm_ndg_03a size 1498m
failgroup asm_fdg_03b
disk '\\.\orcldiskdata5' name asm_ndg_03b size 1498m;
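Following the same pattern, a high redundancy disk group requires three failure groups. The statement below is a sketch only; the disk group, failure group and disk names are hypothetical.
create diskgroup asm_hdg_01
high redundancy
failgroup asm_fdg_01a
disk '\\.\orcldiskdata6' name asm_hdg_01a size 1498m
failgroup asm_fdg_01b
disk '\\.\orcldiskdata7' name asm_hdg_01b size 1498m
failgroup asm_fdg_01c
disk '\\.\orcldiskdata8' name asm_hdg_01c size 1498m;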
To get information about the ASM storage structures, query the following V$ views of the ASM instance:
• V$ASM_ALIAS
• V$ASM_CLIENT
• V$ASM_DISK
• V$ASM_DISKGROUP
• V$ASM_FILE
• V$ASM_OPERATION
• V$ASM_TEMPLATE
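For example, a query along the following lines, run while connected to the ASM instance, lists each disk group along with its redundancy type and space usage.
select name, type, total_mb, free_mb
from v$asm_diskgroup;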
THE REAL APPLICATION CLUSTERS (RAC) DATABASE PIECE
The RAC database consists of a database instance running on each node of the cluster, with each instance accessing the shared disk storage. Instances can be started and stopped independently of each other without affecting database availability. In addition,
the overall CPU and memory of the cluster can be increased through the addition of node(s) to the cluster.
The components of the RAC database can be administered, started and stopped using the SRVCTL (Server Control)
command line utility. Numerous operations can be performed on the node applications, database, database instances, ASM
instance, and services that comprise the RAC cluster using the Server Control utility. For the list of server control commands,
complete with syntax and examples, refer to Appendix B of the Oracle® Real Application Clusters Administrator’s Guide
10g Release 1 (10.1) Part No. B10765-02, published June 2004.
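As a brief illustration (the database and instance names are hypothetical), commonly used Server Control commands include:
• “srvctl status database -d racdb” – lists the status of all instances of the racdb database.
• “srvctl stop instance -d racdb -i racdb1” – stops only the racdb1 instance, leaving the remaining instances available.
• “srvctl start database -d racdb” – starts all instances of the racdb database.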
To reference a database file residing on an ASM device within a SQL command, the ASM diskgroup name must be prefixed with a plus sign (+). As an example, the following statement could be used to alter the size of a USERS tablespace datafile residing on ASM diskgroup asm_ndg_01.
Example 2: Referencing ASM Storage
alter database demo
datafile '+asm_ndg_01/demo/datafile/users.259.1'
resize 100M;
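Similarly, a new database file can be placed on ASM by naming only the disk group; ASM then generates the file name automatically. The statement below is a sketch only, using a hypothetical tablespace name.
create tablespace demo_data
datafile '+asm_ndg_01' size 500m;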
BACKING UP AN ORACLE RAC CLUSTER
Five key cluster components need to be backed up. They are:
• The CRS Oracle Cluster Registry
• The CRS Voting Disk
• The Oracle Binaries
• Recovery Manager (RMAN) configuration
• The Oracle database
Both the CRS OCR and the voting disk exist on RAW partitions. These critical components of the cluster can be backed up using operating system commands for backing up RAW partitions. On Unix systems, the “dd” command must be used to back up these components.
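For example, assuming the OCR and Voting Disk reside on the hypothetical raw devices shown below, the “dd” command can copy each partition to a file on a formatted backup location.
# Raw device names and backup paths are hypothetical
dd if=/dev/raw/raw1 of=/backup/crs/ocr.bak bs=1M
dd if=/dev/raw/raw2 of=/backup/crs/votedisk.bak bs=1M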
RMAN configuration information, if it’s stored in a database repository (RMAN catalog), should be placed on a disk partition
that has been formatted. The RMAN host server does not have to be a member of the RAC cluster. The RMAN catalog
database can be backed up using a combination of o/s level full, incremental and differential backup operations.
The Oracle CRS and Server binaries are stored on a formatted disk partition that is local to each node of the cluster. The
Oracle binaries can be backed up using a combination of o/s level full, incremental and differential backup operations.
To back up the RAC database, RMAN must be used. Through RMAN, the database files that exist on the RAW partitions
can be backed up to a formatted disk partition or to a tape device. If the backups are written to disk, o/s level full,
incremental and differential backup operations can be used to transfer the RMAN backup sets to tape.
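As a minimal sketch (the backup destination is hypothetical and connection details will vary), an RMAN session that writes the RAC database backup to a formatted disk location might look like the following.
rman target /
RMAN> configure channel device type disk format '/backup/rac/%U';
RMAN> backup database;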
THE COMPLETED PICTURE (2 NODE CLUSTER)
[Figure: A two-node RAC cluster. Each cluster node runs an ASM instance (+ASM1 and +ASM2), a RAC database instance, the CRS binaries and the Oracle Server binaries, with RMAN shown on cluster node 1. The nodes communicate over the cluster interconnect and both attach to a shared disk storage device containing the CRS OCR, the CRS Voting Disk and the Oracle database files.]