The RAC cluster comprises two Intel x86 servers running RHEL3 (kernel 2.4.21-27). Each
node has access to shared storage and has connectivity to both the public and private
networks.
1. Preliminary Installation
2. Migrating Your Database to ASM
3. Installing Oracle Cluster Ready Services (CRS) Software
4. Installing Oracle RAC Software
5. Post Installation
6. Testing Transparent Application Failover (TAF)
Unless otherwise specified, you should execute all steps on both nodes.
Here's an overview of our single-instance database environment before converting to RAC:
Host Name   Instance Name   Database Name   $ORACLE_BASE      Database File Storage
salmon1     prod1           prod1           /u01/app/oracle   ext3
You'll install the Oracle Home on each node for redundancy. The ASM and RAC instances
share the same Oracle Home on each node.
Step 1: Preliminary Installation
1a. Verify software package versions.
Install the required packages. Additional information can be obtained from the
documentation.
[root@salmon1]# rpm -qa | grep -i compat
compat-libstdc++-7.3-2.96.128
compat-gcc-c++-7.3-2.96.128
compat-libstdc++-devel-7.3-2.96.128
compat-db-4.0.14-5
compat-glibc-7.x-2.2.4.32.6
compat-slang-1.4.5-5
compat-gcc-7.3-2.96.128
compat-pwdb-0.62-3
[root@salmon1]#
[root@salmon1]# rpm -qa | grep openmotif
openmotif-2.2.3-3.RHEL3
[root@salmon1]#
[root@salmon1]# rpm -qa | grep -i gcc
gcc-gnat-3.2.3-42
gcc-c++-ssa-3.5ssa-0.20030801.48
compat-gcc-c++-7.3-2.96.128
libgcc-ssa-3.5ssa-0.20030801.48
gcc-3.2.3-42
gcc-g77-3.2.3-42
gcc-java-3.2.3-42
gcc-ssa-3.5ssa-0.20030801.48
gcc-g77-ssa-3.5ssa-0.20030801.48
gcc-objc-ssa-3.5ssa-0.20030801.48
libgcc-3.2.3-42
gcc-c++-3.2.3-42
gcc-objc-3.2.3-42
gcc-java-ssa-3.5ssa-0.20030801.48
compat-gcc-7.3-2.96.128
1b. Verify kernel parameters.
Verify the following kernel parameters. Additional information can be obtained from the
documentation.
[root@salmon1]# sysctl -a | grep shm
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 2147483648
[root@salmon1]# sysctl -a | grep sem
kernel.sem = 250 32000 100 128
[root@salmon1]# sysctl -a | grep -i ip_local
net.ipv4.ip_local_port_range = 1024 65000
[root@salmon1]# sysctl -a | grep -i file-max
fs.file-max = 65536
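If any value falls short, one way to set the parameters persistently (a sketch; the values simply mirror the output above) is through /etc/sysctl.conf:

[root@salmon1]# cat >> /etc/sysctl.conf <<EOF
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 1024 65000
fs.file-max = 65536
EOF
[root@salmon1]# sysctl -p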
1c. Create the Oracle Base directory, oracle user, and groups.
Using the node 1 information below, create the same oracle user and the dba and oinstall
groups on the second node; a sketch of the matching commands follows the listing.
[oracle@salmon1]$ hostname
salmon1.dbsconsult.com
[oracle@salmon1]$
[oracle@salmon1]$ id
uid=500(oracle) gid=500(dba) groups=500(dba),501(oinstall)
[oracle@salmon1]$
[oracle@salmon1]$ echo $ORACLE_BASE
/u01/app/oracle
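A sketch of the matching commands for node 2, assuming the same UIDs and GIDs shown above:

[root@salmon2]# groupadd -g 500 dba
[root@salmon2]# groupadd -g 501 oinstall
[root@salmon2]# useradd -u 500 -g dba -G oinstall oracle
[root@salmon2]# passwd oracle
[root@salmon2]# mkdir -p /u01/app/oracle
[root@salmon2]# chown -R oracle:dba /u01/app/oracle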
1d. Edit the oracle user environment file.
[oracle@salmon1]$ more .bash_profile
# .bash_profile
export PATH=$PATH:$HOME/bin
export ORACLE_SID=prod1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.1.0
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH
umask 022
1e. Configure the oracle user shell limits.
[root@salmon1]# more /etc/security/limits.conf
* soft nproc 2047
* hard nproc 16384
* soft nofile 1024
* hard nofile 65536
[root@salmon1]# grep pam_limits /etc/pam.d/login
session required /lib/security/pam_limits.so
1f. Configure public and private network.
Using the information below, make the necessary changes to network interface devices
eth0 (public) and eth1 (private).
[root@salmon1]# redhat-config-network
Host Name IP Address Type
salmon1.dbsconsult.com 192.168.0.184 Public (eth0)
salmon2.dbsconsult.com 192.168.0.185 Public (eth0)
salmon1.dbsconsult.com 10.10.10.84 Private (eth1)
salmon2.dbsconsult.com 10.10.10.85 Private (eth1)
salmon1-vip.dbsconsult.com   192.168.0.186   Virtual
salmon2-vip.dbsconsult.com   192.168.0.187   Virtual
1g. Edit the /etc/hosts file.
127.0.0.1 localhost.localdomain localhost
10.10.10.84 sallocal1.dbsconsult.com sallocal1
10.10.10.85 sallocal2.dbsconsult.com sallocal2
192.168.0.184 salmon1.dbsconsult.com salmon1
192.168.0.185 salmon2.dbsconsult.com salmon2
192.168.0.186 salmon1-vip.dbsconsult.com salmon1-vip
192.168.0.187 salmon2-vip.dbsconsult.com salmon2-vip
Verify the hostname and the configured network interface devices.
[root@salmon1]# hostname
salmon1.dbsconsult.com
[root@salmon1]# /sbin/ifconfig
1h. Establish user equivalence with SSH.
During the Cluster Ready Services (CRS) and RAC installation, the Oracle Universal Installer (OUI) must be
able to copy the software as the oracle user to all RAC nodes without being prompted for a password. In
Oracle 10g, this can be accomplished using ssh instead of rsh.
To establish user equivalence, generate the user's public and private keys as the oracle
user on both nodes.
[oracle@salmon1]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
5d:8c:42:97:eb:42:ae:52:52:e9:59:20:2a:d3:6f:59 oracle@salmon1.dbsconsult.com
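Each node's public key must then land in every node's ~/.ssh/authorized_keys file. One way to wire this up, run as oracle from node 1 (a sketch; expect password prompts during this step):

[oracle@salmon1]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@salmon1]$ ssh salmon2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@salmon1]$ scp ~/.ssh/authorized_keys salmon2:~/.ssh/authorized_keys
[oracle@salmon1]$ chmod 600 ~/.ssh/authorized_keys
[oracle@salmon1]$ ssh salmon2 chmod 600 ~/.ssh/authorized_keys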
Test the connection from every node. Verify that you are not prompted for a password when you run the
following commands a second time.
ssh salmon1 date
ssh salmon2 date
ssh sallocal1 date
ssh sallocal2 date
ssh salmon1.dbsconsult.com date
ssh salmon2.dbsconsult.com date
ssh sallocal1.dbsconsult.com date
ssh sallocal2.dbsconsult.com date
1i. Configure hangcheck timer kernel module.
The hangcheck-timer kernel module monitors the system's health and restarts a failing
RAC node. It uses two parameters, hangcheck_tick (the interval between system health
checks) and hangcheck_margin (the maximum hang tolerated before a RAC node is reset),
to determine whether a node is failing. With the values below, a node that stops
responding for more than roughly 210 seconds (tick plus margin) is restarted.
Add the following line in /etc/rc.d/rc.local to load the hangcheck module automatically.
[root@salmon1]# grep insmod /etc/rc.d/rc.local
insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
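To load the module immediately, without waiting for a reboot, and confirm it registered (a quick check):

[root@salmon1]# insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
[root@salmon1]# lsmod | grep -i hangcheck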
1j. Recreate the database control file.
Make sure the settings below are sized appropriately in the control file before
converting to RAC. If required, recreate the control file with the right values; a quick
way to inspect the current ones is sketched after this list.
MAXLOGFILES
MAXLOGMEMBERS
MAXDATAFILES
MAXINSTANCES
MAXLOGHISTORY
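A quick way to inspect the current values is to dump a CREATE CONTROLFILE script, which includes all of the MAX settings, to a trace file under user_dump_dest:

SQL> alter database backup controlfile to trace;

Database altered.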
1k. Resize the database buffer cache.
When transitioning from a single-instance database to RAC, additional memory is
required for the database buffer cache. In RAC, space is allocated for the Global Cache
Service (GCS) in the buffer cache of every instance. The amount of additional memory
required depends on how the application accesses the data, that is, on whether the same
block is cached in more than one instance.
I observed an increase of about 8 percent in buffer cache usage during an experimental
demonstration. Use the buffer cache advisory to determine an optimal buffer cache size,
or let Oracle take control by switching to Oracle Automatic Shared Memory Management (ASMM).
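The advisory requires db_cache_advice=ON; its estimates can then be read from v$db_cache_advice. A sketch, assuming the default 8KB block size:

SQL> select size_for_estimate, size_factor, estd_physical_reads
  2  from v$db_cache_advice
  3  where name = 'DEFAULT' and block_size = 8192
  4  order by size_for_estimate;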
Step 2: Migrating Your Database to ASM
2a. Install the ASMLib packages.
Install the following packages, which match the 2.4.21-27 kernel:
• oracleasm-support-2.0.0-1.i386.rpm
• oracleasm-2.4.21-27.EL-1.0.4-2.i686.rpm (driver for UP kernel) or
  oracleasm-2.4.21-27.ELsmp-1.0.4-1.i686.rpm (driver for SMP kernel)
• oracleasmlib-2.0.0-1.i386.rpm
Configure the ASM library driver:
[root@salmon1]# /etc/init.d/oracleasm configure
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
2i. Create disk groups.
Create three disk groups: DG1, DG2, and RECOVERYDEST. DG1 and DG2 will be used to
store Oracle data files and redo logs. RECOVERYDEST will be used as the flash recovery
area.
SQL> create diskgroup dg1 normal redundancy
2 failgroup fg1a disk
3 'ORCL:VOL1','ORCL:VOL2'
4 failgroup fg1b disk
5 'ORCL:VOL3','ORCL:VOL4';
Diskgroup created.
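DG2 and RECOVERYDEST follow the same pattern; a sketch, assuming the remaining ASMLib volume names continue the VOLn sequence used above (adjust to your own disks):

SQL> create diskgroup dg2 normal redundancy
  2  failgroup fg2a disk
  3  'ORCL:VOL5','ORCL:VOL6'
  4  failgroup fg2b disk
  5  'ORCL:VOL7','ORCL:VOL8';

Diskgroup created.

SQL> create diskgroup recoverydest normal redundancy
  2  failgroup fgra disk
  3  'ORCL:VOL9','ORCL:VOL10'
  4  failgroup fgrb disk
  5  'ORCL:VOL11','ORCL:VOL12';

Diskgroup created.

Verify the disk groups:

SQL> select name, total_mb from v$asm_diskgroup;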
NAME TOTAL_MB
------------------------- -------------------
DG1 36864
DG2 36864
RECOVERYDEST 73728
3 rows selected.
2j. Configure flash recovery area.
SQL> connect sys/sys@prod1 as sysdba
Connected.
SQL> alter database disable block change tracking;
Database altered.
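The two parameter changes that produce the messages below typically point the recovery area at the RECOVERYDEST disk group; a sketch, with an illustrative size:

SQL> alter system set db_recovery_file_dest_size=4G scope=both;
SQL> alter system set db_recovery_file_dest='+RECOVERYDEST' scope=both;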
System altered.
System altered.
2k. Migrate data files to ASM.
You must use RMAN to migrate the data files to ASM disk groups. All data files will be
migrated to the newly created disk group DG1. The redo logs and control files will be
created in DG1 and DG2. In a production environment, you should store the redo logs on
a different set of disks and disk controllers from the rest of the Oracle data files.
SQL> connect sys/sys@prod1 as sysdba
Connected.
SQL> alter system set db_create_file_dest='+DG1';
System altered.
System altered.
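The RMAN session itself is only excerpted below; a minimal sketch of the usual Oracle 10g sequence, assuming the database was first shut down cleanly:

[oracle@salmon1]$ rman target /
RMAN> startup mount;
RMAN> backup as copy database format '+DG1';
RMAN> switch database to copy;
RMAN> alter database open;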
database mounted
released channel: ORA_DISK_1
database opened
RMAN> exit
SQL> connect sys/sys@prod1 as sysdba
Connected.
SQL> select tablespace_name, file_name from dba_data_files;
TABLESPACE FILE_NAME
--------------------- -----------------------------------------
USERS +DG1/prod1/datafile/users.260.1
SYSAUX +DG1/prod1/datafile/sysaux.258.1
UNDOTBS1 +DG1/prod1/datafile/undotbs1.259.1
SYSTEM +DG1/prod1/datafile/system.257.1
2l. Migrate temp tablespace to ASM.
SQL> alter tablespace temp add tempfile size 100M;
Tablespace altered.
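Verify where the new tempfile landed:

SQL> select file_name from dba_temp_files;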
FILE_NAME
-------------------------------------
+DG1/prod1/tempfile/temp.264.3
2m. Migrate redo logs to ASM.
Drop existing redo logs and recreate them in ASM disk groups, DG1 and DG2.
SQL> alter system set db_create_online_log_dest_1='+DG1';
System altered.
SQL> alter system set db_create_online_log_dest_2='+DG2';
System altered.
SQL> select group#, member from v$logfile;

GROUP# MEMBER
--------------- ----------------------------------
1 /u03/oradata/prod1/redo01.log
2 /u03/oradata/prod1/redo02.log
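The statements behind the messages that follow are not shown in full; a sketch of one common sequence (log sizes illustrative) that rebuilds groups 1 and 2 under the two new destinations:

SQL> alter database add logfile group 3 size 100M;
SQL> alter system switch logfile;
SQL> alter database drop logfile group 1;
SQL> alter database add logfile group 1 size 100M;
SQL> alter database drop logfile group 2;
SQL> alter database add logfile group 2 size 100M;
SQL> alter database drop logfile group 3;

Note that a group can be dropped only while it is inactive; switch logfiles and checkpoint as needed between steps.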
Database altered.
System altered.
Database altered.
Database altered.
Database altered.
Database altered.
Database altered.
SQL> select group#, member from v$logfile;

GROUP# MEMBER
--------------- ----------------------------------------
1 +DG1/prod1/onlinelog/group_1.265.3
1 +DG2/prod1/onlinelog/group_1.257.1
2 +DG1/prod1/onlinelog/group_2.266.3
2 +DG2/prod1/onlinelog/group_2.258.1
2n. Create pfile from spfile.
Create and retain a copy of the database pfile. You'll add more RAC-specific parameters
to the pfile later, during post-installation.
SQL> connect sys/sys@prod1 as sysdba
Connected.
SQL> create pfile='/tmp/tmppfile.ora' from spfile;
File created.
2o. Add additional control file.
If an additional control file is required for redundancy, you can create it in ASM as you
would on any other filesystem.
SQL> connect sys/sys@prod1 as sysdba
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
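The statements producing the messages below are not shown; a sketch that fits this sequence, assuming the primary control file was placed at +DG1/cf1.dbf during the RMAN migration:

SQL> alter database backup controlfile to '+DG2/cf2.dbf';
SQL> alter system set control_files='+DG1/cf1.dbf','+DG2/cf2.dbf' scope=spfile;
SQL> shutdown immediate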
Database altered.
System altered.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
SQL> select name from v$controlfile;

NAME
---------------------------------------
+DG1/cf1.dbf
+DG2/cf2.dbf
After successfully migrating all the data files over to ASM, the old data files are no longer needed and can be
removed. Your single-instance database is now running on ASM!
Step 3: Install Cluster Ready Services (CRS) Software
CRS requires two files—the Oracle Cluster Registry (OCR) and the Voting Disk—on shared
raw devices or Oracle Cluster File System (OCFS). These files must be accessible to all
nodes in the cluster. Raw devices are used here to house both files.
3a. Create OCR and Voting Disk.
The storage size for the OCR should be at least 100MB and the storage size for the
voting disk should be at least 20MB.
File          Raw Device       Disk Partition   Symbolic Link              Size (MB)
OCR           /dev/raw/raw11   /dev/sde1        /u02/oradata/prod1/ocr     100
Voting Disk   /dev/raw/raw12   /dev/sde2        /u02/oradata/prod1/vdisk   20
[root@salmon1]# more /etc/sysconfig/rawdevices
/dev/raw/raw11 /dev/sde1
/dev/raw/raw12 /dev/sde2
[root@salmon1]# chown oracle:dba /dev/raw/raw11
[root@salmon1]# chown oracle:dba /dev/raw/raw12
[root@salmon1]# /sbin/service rawdevices restart
Assigning devices:
/dev/raw/raw11 --> /dev/sde1
/dev/raw/raw11: bound to major 8, minor 65
/dev/raw/raw12 --> /dev/sde2
/dev/raw/raw12: bound to major 8, minor 66
done
[root@salmon1]# su - oracle
[oracle@salmon1]$ ln -s /dev/raw/raw11 /u02/oradata/prod1/ocr
[oracle@salmon1]$ ln -s /dev/raw/raw12 /u02/oradata/prod1/vdisk
3b. Install CRS software.
Before installing the CRS software, shut down the listener, database, and ASM instance.
Mount the CRS CD or download the software from OTN. The OUI should be launched on
only the first node. During installation, the installer automatically copies the software to
the second node.
[oracle@salmon1]$ export ORACLE_BASE=/u01/app/oracle
[oracle@salmon1]$ /mnt/cdrom/runInstaller
[oracle@salmon1]$ /u01/app/oracle/product/10.1.0/crs_1/bin/olsnodes -n
salmon1 1
salmon2 2
[oracle@salmon1]$ ps -ef | egrep "css|crs|evm"
Step 4: Install Oracle RAC Software
4a. Edit the oracle user environment file.
[oracle@salmon1]$ more .bash_profile
# .bash_profile
export PATH=$PATH:$HOME/bin
export ORACLE_SID=prod1a
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.1.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/10.1.0/crs_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH
umask 022

On the second node, set ORACLE_SID=prod1b.
4b. Install RAC software.
Mount the Oracle Database 10g Enterprise Edition CD or download the software from
OTN. Launch the OUI on only the first node. During installation, the installer
automatically copies the software to the second node.
[oracle@salmon1]$ . ~/.bash_profile
[oracle@salmon1]$ /mnt/cdrom/runInstaller
4e. listener.ora file
On node 1:
[oracle@salmon1]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER_SALMON1 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon1-vip)(PORT = 1521))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.184)(PORT = 1521))
)
)
)
SID_LIST_LISTENER_SALMON1 =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u01/app/oracle/product/10.1.0/db_1)
(PROGRAM = extproc)
)
)
On node 2:
[oracle@salmon2]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER_SALMON2 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon2-vip)(PORT = 1521))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.185)(PORT = 1521))
)
)
)
SID_LIST_LISTENER_SALMON2 =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u01/app/oracle/product/10.1.0/db_1)
(PROGRAM = extproc)
)
)
4f. tnsnames.ora file
On both nodes:
[oracle@salmon1]$ more $ORACLE_HOME/network/admin/tnsnames.ora
LISTENERS_PROD1 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon2-vip)(PORT = 1521))
)
PROD1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVICE_NAME = PROD1)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 200)
(DELAY = 5)
)
)
)
PROD1A =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon1-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = PROD1)
(INSTANCE_NAME = PROD1A)
)
)
PROD1B =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = salmon2-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = PROD1)
(INSTANCE_NAME = PROD1B)
)
)
Step 5: Post Installation

SQL> startup
ORACLE instance started.
File created.
File created.
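The second RAC instance needs its own redo thread. The exact statements are not reproduced above; a sketch with illustrative sizes (OMF places one member in each of the two log destinations):

SQL> alter database add logfile thread 2 group 3 size 100M;
SQL> alter database add logfile thread 2 group 4 size 100M;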
SQL> select group#, member from v$logfile;

GROUP# MEMBER
--------------- ----------------------------------------
1 +DG1/prod1/onlinelog/group_1.265.3
1 +DG2/prod1/onlinelog/group_1.257.1
2 +DG1/prod1/onlinelog/group_2.266.3
2 +DG2/prod1/onlinelog/group_2.258.1
3 +DG1/prod1/onlinelog/group_3.268.1
3 +DG2/prod1/onlinelog/group_3.259.1
4 +DG1/prod1/onlinelog/group_4.269.1
4 +DG2/prod1/onlinelog/group_4.260.1
8 rows selected.
SQL> alter database enable public thread 2;
Database altered.
5m. Create undo tablespace for the second RAC instance.
SQL> create undo tablespace UNDOTBS2 datafile size 200M;
Tablespace created.

SQL> select tablespace_name, file_name from dba_data_files
  2  where tablespace_name = 'UNDOTBS2';

TABLESPACE FILE_NAME
--------------------- --------------------------------------
UNDOTBS2 +DG1/prod1/datafile/undotbs2.270.1
5n. Start up the second RAC instance.
[oracle@salmon1]$ srvctl start instance -d prod1 -i prod1b
[oracle@salmon1]$ crs_stat -t
Name Type Target State Host
-----------------------------------------------------------------------
ora....1a.inst application ONLINE ONLINE salmon1
ora....1b.inst application ONLINE ONLINE salmon2
ora.prod1.db application ONLINE ONLINE salmon1
ora....M1A.asm application ONLINE ONLINE salmon1
ora....M1B.asm application ONLINE ONLINE salmon2
ora....N1.lsnr application ONLINE ONLINE salmon1
ora....on1.gsd application ONLINE ONLINE salmon1
ora....on1.ons application ONLINE ONLINE salmon1
ora....on1.vip application ONLINE ONLINE salmon1
ora....N2.lsnr application ONLINE ONLINE salmon2
ora....on2.gsd application ONLINE ONLINE salmon2
ora....on2.ons application ONLINE ONLINE salmon2
ora....on2.vip application ONLINE ONLINE salmon2
[oracle@salmon1]$ srvctl status database -d prod1
Instance prod1a is running on node salmon1
Instance prod1b is running on node salmon2
[oracle@salmon1]$ srvctl stop database -d prod1
[oracle@salmon1]$ srvctl start database -d prod1
[oracle@salmon1]$ sqlplus system/system@prod1
Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
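From here, Transparent Application Failover (step 6) can be exercised: connect through the PROD1 alias, shut down the instance the session landed on, and rerun a query. The session's failover settings are visible in v$session; a quick check:

SQL> select failover_type, failover_method, failed_over
  2  from v$session where username = 'SYSTEM';

With the TYPE=SELECT and METHOD=BASIC settings in the PROD1 alias above, FAILOVER_TYPE shows SELECT, and FAILED_OVER flips to YES once the session migrates to the surviving instance.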
Conclusion
With proper planning and an understanding of the RAC architecture, the transition from a
single-instance database to a RAC configuration is not necessarily complex. ASM and RAC
complement each other to provide higher levels of availability, scalability, and business
continuity. Hopefully, this guide has provided a clear and concise method of performing
the conversion.