
Table of Contents

1. Introduction
2. Platform and Version
3. AIX Setup
4. File Systems Layout
5. Setting up SSH keys (for RAC only)
6. Host Names and Network Interface (for RAC only)
7. Cluster and Scan names (for RAC only)
8. ASM Configuration
9. Grid Infrastructure Software Installation
10. Oracle Database Software Installation
11. Adjusting ASM Parameters
12. Create ASM Diskgroup
13. TNS_ADMIN
14. Register Listener and ASM to Grid Infrastructure
15. Post-Install Database Setup

1. Introduction
This document describes the installation standard for Oracle Grid Infrastructure, the cluster software for Oracle RAC. Given that more projects are considering migrating standalone databases to the Oracle RAC option, a common implementation standard helps ensure a smooth implementation and better ongoing support from other DBAs. The upcoming new data center will have a significant number of new Oracle Grid Infrastructure installations. This document will serve as a blueprint for both existing and newly hired DBAs to follow in the course of Oracle Grid Infrastructure installation. Since the target audience is DBAs who are doing the software installation, the language used in this document tends to be technical and product specific.

2. Platform and Version


The latest version of Oracle Grid Infrastructure is 11.2.0.3 PSU2. It will be installed on IBM P7 LPARs running operating system AIX 6.1 TL6 SP6 with patch IV10539. The Grid Infrastructure is required not just for Oracle RAC nodes, but also for stand-alone database servers planning to use ASM for storage. Starting with Oracle 11g, ASM is an integral part of Grid Infrastructure. To verify the AIX version and patch level:

$ oslevel -s
6100-06-06-1140

To verify the existence of AIX patch IV10539


$ instfix -ik IV10539
All filesets for IV10539 were found.

3. AIX Setup
O/S Settings
The /tmp file system needs to have at least 1GB of free space. The ncargs attribute (ARG/ENV list size, in 4K blocks) needs to be at least 128. Run the following command to verify.
$ lsattr -E -l sys0 -a ncargs
ncargs 256 ARG/ENV list size in 4K byte blocks True

The maximum number of PROCESSES allowed per user needs to be at least 16384. Run the following command to verify.

$ lsattr -E -l sys0 -a maxuproc
maxuproc 16384 Maximum number of PROCESSES allowed per user True
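If either value is below the required minimum, it can typically be raised as root with chdev. A minimal sketch; the values shown are the minimums stated above, and larger values (such as the 256 and 16384 seen in the sample output) are also acceptable:

# As root; raise the attributes only if they are below the minimums
$ chdev -l sys0 -a ncargs=128
$ chdev -l sys0 -a maxuproc=16384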

User Accounts
The Grid Infrastructure software will be installed under the Unix account oragrid. Below are the account settings for the oragrid and oracle accounts. You can verify them by running the id command and checking the passwd file. For consistency, make sure the user ID and group IDs are the same as listed below.
$ id oragrid
uid=501(oragrid) gid=501(dba) groups=502(oinstall)

$ id oracle
uid=500(oracle) gid=501(dba) groups=502(oinstall)

$ grep oragrid /etc/passwd
oragrid:!:501:501:Oracle GRID Administration:/u01/home/oragrid:/bin/ksh

$ grep oracle /etc/passwd
oracle:!:500:501:Oracle owner:/u01/home/oracle:/bin/ksh

The AIX ulimit should be set to unlimited in all categories for users: oragrid, oracle, and root.
$ ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        unlimited
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) unlimited
threads(per process) unlimited
processes(per user)  unlimited
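On AIX these limits normally come from /etc/security/limits, which the sysadmin maintains. The stanza below is a sketch only, showing unlimited values (-1) for the oragrid user; matching stanzas would be needed for oracle and root:

oragrid:
        fsize = -1
        core = -1
        cpu = -1
        data = -1
        rss = -1
        stack = -1
        nofiles = -1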

Get the passwords of oragrid and oracle from the sysadmin. Change the passwords and keep them safe.

.Profile
Update the .profile of users oragrid and oracle to include the following lines. This sets the command line prompt to include the username, hostname, and current directory information. Additional information can be added to the command line prompt as desired. For user oragrid, DB_NAME will be ASM, i.e., ASMenv. For user oracle, DB_NAME will be the name of the primary database to be hosted.

export PS1="[${LOGNAME}@`hostname -s`] \$PWD $ "
. $HOME/<DB_NAME>env

For example, ASMenv has the following contents.


export GRID_HOME=/u01/grid/11.2.0/grid
export CRS_HOME=/u01/grid/11.2.0/grid
export ASM_HOME=/u01/grid/11.2.0/grid
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/grid/11.2.0/grid
export ADR_HOME=/u01/app/oracle/diag/asm/+asm/+ASM
export TNS_ADMIN=/u01/grid/11.2.0/grid/network/admin
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=+ASM

For example, EIRS1Denv has the following contents.


export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export ADR_HOME=/u01/app/oracle/diag/rdbms/eirs1d/EIRS1D
export TNS_ADMIN=/u01/grid/11.2.0/grid/network/admin
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=EIRS1D

4. File Systems Layout


Mount Points
The home directory and the code trees will have separate mount points.

Mount Point   Owner         Permission   Size     Volume Group   Description
/u01          oracle:dba    775          10 GB    orabinvg       Home directory of user oracle (/u01/home/oracle)
                                                                 and home directory of user oragrid (/u01/home/oragrid)
/u01/app      oracle:dba    775          100 GB   orabinvg       Oracle RDBMS binary and centralized Oracle inventory
/u01/grid     oragrid:dba   775          150 GB   orabinvg       Oracle Grid Infrastructure binary
/u02          oracle:dba    775
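To confirm the mount points exist with the expected sizes, permissions, and ownership, a quick check such as the following can be run (a sketch; adjust the list if the layout differs):

$ df -g /u01 /u01/app /u01/grid /u02
$ ls -ld /u01 /u01/app /u01/grid /u02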

OraInventory Location
Inventory directory location should be /u01/app/oraInventory. Since this location is shared by both the Grid Infrastructure and RDBMS installations, change its permission setting to allow both oragrid and oracle users to have full privileges. As user oragrid, create the oraInventory. This needs to be owned by oragrid; otherwise, the grid infrastructure installation or cloning will fail.
$ mkdir /u01/app/oraInventory
$ chmod 770 /u01/app/oraInventory
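If the directory ends up owned by another user (for example, when created by root), reset the ownership so that oragrid owns it; the dba group is assumed here based on the account settings in section 3:

# Run as root, only if the directory is not already owned by oragrid (sketch)
$ chown oragrid:dba /u01/app/oraInventory
$ ls -ld /u01/app/oraInventory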

ORACLE_BASE
The ORACLE_BASE will be set to /u01/app/oracle for both oragrid and oracle. In order to be shared by both oragrid and oracle, we have to set the permission of certain directories for both users. As user oracle,
$ cd /u01/app
$ mkdir oracle
$ chmod 775 oracle
$ cd /u01/app/oracle
$ mkdir diag cfgtoollogs
$ chmod 775 diag cfgtoollogs

5. Setting up SSH keys (for RAC only)


On each RAC node, do the following for both oracle and oragrid users, taking all the defaults when prompted. Assumption: no authorized_keys file exists in $HOME/.ssh.

$ /usr/bin/ssh-keygen -b 2048 -t rsa
$ /usr/bin/ssh-keygen -t dsa

On the first node, do the following:

$ cd $HOME/.ssh
$ cat id_rsa.pub >> authorized_keys
$ cat id_dsa.pub >> authorized_keys
$ /usr/bin/scp -p authorized_keys <hostname#2>:.ssh/authorized_keys

Then, from each remaining node, run these commands:

$ cd $HOME/.ssh
$ cat id_rsa.pub >> authorized_keys
$ cat id_dsa.pub >> authorized_keys

Do the following on each node except the last one:

$ /usr/bin/scp -p authorized_keys <hostname#N+1>:.ssh/authorized_keys

From the last node, do the following:


$ /usr/bin/scp -p authorized_keys <hostname#1>:.ssh/authorized_keys
$ /usr/bin/scp -p authorized_keys <hostname#2>:.ssh/authorized_keys

and so on for the remaining nodes. From every node, run the following command against ALL nodes (including the local node) to verify that the SSH keys work without prompting for a password:
$ ssh <hostname> date
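If there are many nodes, a small loop can speed up this verification. This is only a sketch; <hostname#1> and <hostname#2> stand in for the actual node names:

$ for h in <hostname#1> <hostname#2>
> do
>   ssh $h date
> done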

6. Host Names and Network Interface (for RAC only)


Check the /etc/hosts file. This file should contain the host information for all RAC nodes. For each node, it should list the host, virtual host, and dual private host entries. The contents should be identical across all RAC nodes. The host entry should have the long name (with domain) before the short name.

<IP>   <host>.comp.pge.com          <host>
<IP>   <host>-vip.comp.pge.com      <host>-vip
<IP>   <host>-priv1.comp.pge.com    <host>-priv1
<IP>   <host>-priv2.comp.pge.com    <host>-priv2

All the network names for this server except for <hostname>-priv should be defined in the Domain Name Server (DNS). The RAC private IP addresses (<hostname>-priv<n>) should NOT be defined in DNS. You should be able to successfully ping all the <hostname> and <hostname>-priv<n> names. The <hostname>-vip and <scan_name> addresses are only pingable when the cluster is running. Run the following commands to check the network names. Repeat for each server in the cluster. For nslookup, verify that the /etc/hosts entry matches the IP address found (see example below).
$ ping <hostname>
$ ping <hostname>-priv1
$ ping <hostname>-priv2
$ nslookup <hostname>
$ nslookup <hostname>-vip
$ nslookup <scan_name>
$ nslookup <hostname>-priv1                    (should return not found)
$ nslookup <hostname>-priv2                    (should return not found)
$ nslookup <IP address of <hostname>-priv1>    (should return not found)
$ nslookup <IP address of <hostname>-priv2>    (should return not found)

During the Grid Infrastructure installation, when prompted for network interface names, use en0 for public, en2 for private #1, and en3 for private #2; mark en1 as not to be used.

7. Cluster and Scan names (for RAC only)


During the Grid Infrastructure installation when prompted for the cluster name, use the naming convention of <app>-<type>n in lower case. Type can be prod, fste, qa, dev, test depending on purpose. For example, if this is the second test cluster for the CC&B application, use cluster name ccb-test2. The SCAN name should be in the format of <cluster_name>-scan.comp.pge.com. For example, ccb-test2-scan.comp.pge.com

8. ASM Configuration
ASM DiskGroups
In the RAC configuration, all voting disks, datafiles, redo logs, archived logs, flashback logs, control files, and spfiles should reside on the shared ASM storage. Here is the naming convention for the various ASM diskgroups. Note that the naming convention specifies only the suffix of the diskgroup names. DBAs can determine the other portion of the diskgroup names according to the application and the nature of the diskgroups. Some diskgroups may be shared by multiple applications; in that case, it may not be reasonable to tie the diskgroup name to the application name. We recommend using the default allocation unit, which is 1MB. Let ASM manage its directory structure. Do not manually create sub-directories in ASM diskgroups. When creating datafiles, for example, just specify the file name as +<diskgroup>; ASM will assign a name and put the file in the appropriate sub-directory.

Diskgroup Name   ASM Redundancy   LUN size                 Description
*_GRID           High (5 disks)   5G                       Voting disk, OCR
*_DATA_nn        External         50G, 250G, 500G, 1000G   Datafiles, tempfiles, control files, spfile
*_REDO_nn        External         5G, 50G                  Online redo logs, control files, spfile
*_FRA            External         50G, 250G, 500G, 1000G   Archived logs, flashback logs, incremental backups
*_IMAGE          External         50G, 250G, 500G, 1000G   RMAN image copies

ASM Disks
When system administrators present the disks for ASM, they have to change ownership to oragrid and change the permission setting to 660.
$ chown oragrid:dba /dev/rhdisk<nnn>
$ chmod 660 /dev/rhdisk<nnn>
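Before handing the disks to the Grid Infrastructure installer, it helps to confirm the ownership and permission changes took effect. A quick check (replace <nnn> with the actual disk numbers):

# Expect owner oragrid, group dba, and mode crw-rw---- (660)
$ ls -l /dev/rhdisk<nnn>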

9. Grid Infrastructure Software Installation


9.1. Method 1: Clone from non-RAC Grid Infrastructure Gold Image
The gold image file we will use is the standalone non-RAC version of 11.2.0.3 Grid Infrastructure with PSU2. First, copy the gold image file to /u02/software/gold_images from oragctst21:/u02/data01/dbmaint/gold_images/AIX/AIX_nonRAC_gi_binaries_11.2.0.3.2.tar.gz. As root, unzip the gold image into the GRID_HOME:

$ cd /u01/grid/11.2.0/grid    (create the directory if it does not exist)
$ gunzip -c /u02/software/gold_images/AIX_nonRAC_gi_binaries_11.2.0.3.2.tar.gz | tar -xf -

As root, run the following

$ /u01/grid/11.2.0/grid/clone/rootpre.sh

As oragrid, run the Oracle cloning script


$ cd /u01/grid/11.2.0/grid/clone/bin
# The following command is all on one line
$ /usr/bin/perl clone.pl ORACLE_HOME="/u01/grid/11.2.0/grid" ORACLE_BASE="/u01/app/oracle" INVENTORY_LOCATION="/u01/app/oraInventory"

As root, run the following


$ /u01/app/oraInventory/orainstRoot.sh
$ /u01/grid/11.2.0/grid/root.sh
# The following command is all on one line
$ /u01/grid/11.2.0/grid/perl/bin/perl -I/u01/grid/11.2.0/grid/perl/lib -I/u01/grid/11.2.0/grid/crs/install /u01/grid/11.2.0/grid/crs/install/roothas.pl

# Output will be like the following. Ignore the ACFS driver error.
Using configuration parameter file: /u01/grid/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'oragrid', privgrp 'dba'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
CRS-4664: Node eirsdboratst01 successfully pinned.
Adding Clusterware entries to inittab
ACFS drivers installation failed

Please ignore the "ACFS drivers installation failed" warning.

As oragrid, configure ASM


$ crsctl start resource -all
CRS-5702: Resource 'ora.evmd' is already running on 'mdmdboradev01'
CRS-2501: Resource 'ora.ons' is disabled
CRS-2672: Attempting to start 'ora.diskmon' on 'mdmdboradev01'
CRS-2672: Attempting to start 'ora.cssd' on 'mdmdboradev01'
CRS-2676: Start of 'ora.diskmon' on 'mdmdboradev01' succeeded
CRS-2676: Start of 'ora.cssd' on 'mdmdboradev01' succeeded
CRS-4000: Command Start failed, or completed with errors.

Please ignore the errors.

# Register the ASM instance in the OCR
$ srvctl add asm -p $ORACLE_HOME/dbs/init+ASM.ora

$ sqlplus / as sysasm
SQL> startup
ASM instance started
ORA-15110: no diskgroups mounted
SQL> create spfile='/u01/grid/11.2.0/grid/dbs/spfile+ASM.ora' from pfile='/u01/grid/11.2.0/grid/dbs/init+ASM.ora';
SQL> shutdown immediate

Now that the spfile has been created, remove the $ORACLE_HOME/dbs/init+ASM.ora file and start up the ASM.
$ srvctl start asm

Create the Oracle password file:

$ orapwd file=/u01/grid/11.2.0/grid/dbs/orapw+ASM password=<password>

Create the ASMSNMP user:

$ sqlplus / as sysasm
SQL> create user asmsnmp identified by <password>;
SQL> grant sysdba to asmsnmp;

Update the host in the $GRID_HOME/network/admin/listener.ora file.
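Before moving on, it can help to confirm that ASM is registered with and running under the local Grid Infrastructure. A minimal check as oragrid (commands only; expected output omitted):

$ srvctl status asm
$ crsctl status resource ora.asm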

9.2. Method 2: Fresh Install of Grid Infrastructure


Copy the downloaded 11203 software zip files from oragctst21:/u02/data01/dbmaint/cd_images/AIX/11203/* to /u02/software/11203cd directory.


Unzip all the zip files in /u02/software/11203cd. Request the sysadmin to run the rootpre.sh script as root on each node.
/u02/software/11203cd/grid/rootpre.sh

For RAC installation, run the Oracle cluvfy utility as oragrid to verify that the nodes are ready for the cluster installation. You may need to unset CV_HOME first.
/u02/software/11203cd/grid/runcluvfy.sh stage -pre crsinst -n <node1>,<node2>,... -fixup -verbose

Run the Oracle Installer as user oragrid,

$ export DISPLAY=<your_PC_name>:0.0
$ cd /u02/software/11203cd/grid
$ ./runInstaller -ignoreInternalDriverError

Download Software Updates


Select Skip software updates.

Select Installation Option


For RAC, select Install and Configure Oracle Grid Infrastructure for a Cluster. For non-RAC, select Configure Oracle Grid Infrastructure for a Standalone Server.

Select Installation Type (for RAC only)


Select Advanced Installation.

Grid Plug and Play Information (for RAC only)


Specify the cluster name and SCAN name. Set SCAN port to 1521. Do not configure GNS.

Cluster Node Information


Include all the RAC nodes. Click the Add button to add the other nodes.


Specify Network Interface Usage (for RAC only)


Interface Name   Subnet                Interface Type
en0              Refer to /etc/hosts   Public
en1              Refer to /etc/hosts   Do Not Use
en2              Refer to /etc/hosts   Private
en3              Refer to /etc/hosts   Private

Storage Option Information (for RAC only)


Choose Automatic Storage Management (ASM)

Create ASM Disk Group


For RAC only, create a disk group for GRID using High redundancy; choose 5 disks, each with a 5 GB LUN size. Create the other diskgroups for data, redo logs, FRA, etc. using External redundancy.

Specify ASM Password


Use different passwords for the SYS and ASMSNMP accounts. Keep the passwords in a safe place.

Privileged Operating System Groups


The privileged O/S groups to be specified during installation are as follows:

ASM Database Administrator (OSDBA) Group              dba
ASM Instance Administrator Operator (OSOPER) Group    dba
ASM Instance Administrator (OSASM) Group              dba

Specify Installation Location


Set Oracle Base to /u01/app/oracle


Set Software Location to /u01/grid/11.2.0/grid. When warned that the selected Oracle home is outside of Oracle base, click Yes to continue.

Create Inventory
Set Inventory Location to /u01/app/oraInventory

Perform Prerequisite Checks


Ignore the following errors if detected:

OS Kernel Parameter: tcp_ephemeral_low
OS Kernel Parameter: tcp_ephemeral_high
OS Kernel Parameter: udp_ephemeral_low
OS Kernel Parameter: udp_ephemeral_high

Summary
Click the Save Response File button, then click the Install button.

Execute Configuration scripts


After installation is completed, request sysadmin to run the following scripts as root in each cluster node.
/u01/app/oraInventory/orainstRoot.sh
/u01/grid/11.2.0/grid/root.sh

Applying 11.2.0.3 Grid Infrastructure PSU2


The Oracle Grid Infrastructure 11.2.0.3 PSU2 patch number is 13696251, while the RDBMS PSU2 is 13696216. Both are included in the single zip file p13696251_112030_AIX64-5L.zip. Copy the zip file from oragctst21:/u02/data01/dbmaint/cd_images/AIX/11203/PSU/APR2012 to the /u02/software/11203cd/PSU/APR2012 directory. Unzip all the zip files in /u02/software/11203cd/PSU/APR2012.

Do the following on each node. Login as user oragrid:

$ . ./ASMenv

Ask the sysadmin to run this command as root on each node.

For RAC, run:
$ $GRID_HOME/crs/install/rootcrs.pl -unlock

For non-RAC standalone, run:
$ $GRID_HOME/crs/install/roothas.pl -unlock

You will need a new version of OPatch to apply the patch. As oragrid:

$ cd /u02/software/11203cd/PSU/APR2012
$ scp -p oragctst21:/u02/data01/dbmaint/cd_images/AIX/11203/OPATCH/*.zip .
$ cd /u01/grid/11.2.0/grid
$ mv OPatch OLD_OPatch
$ cp /u02/software/11203cd/OPATCH/p6880880_112000_AIX64-5L.zip .
$ unzip p6880880_112000_AIX64-5L.zip

On each node, login as user oragrid and run the following:

$ . ./ASMenv
$ $GRID_HOME/OPatch/opatch napply -oh /u01/grid/11.2.0/grid -local /u02/software/11203cd/PSU/APR2012/13696251

Ask the sysadmin to run these commands as root on each node:

$ $GRID_HOME/rdbms/install/rootadd_rdbms.sh

For RAC, run:
$ $GRID_HOME/crs/install/rootcrs.pl -patch

For non-RAC standalone, run:
$ $GRID_HOME/crs/install/roothas.pl -patch
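After the GI PSU has been applied, the patch inventory can be checked from the Grid home. A quick sanity check as oragrid (assuming the ASMenv environment is sourced as above):

$ $GRID_HOME/OPatch/opatch lsinventory -oh /u01/grid/11.2.0/grid | grep 13696251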


10. Oracle Database Software Installation


10.1. Method 1: Clone from non-RAC Database Software Gold Image
The gold image file we will use is the standalone non-RAC version of 11.2.0.3 Database Enterprise Edition with PSU2. First, copy the gold image file to /u02/software/gold_images from oragctst21:/u02/data01/dbmaint/gold_images/AIX/AIX_nonRAC_db_binaries_11.2.0.3.2.tar.gz. As oracle, unzip the gold image into the ORACLE_HOME:

$ cd /u01/app/oracle/product/11.2.0/db_1    (create the directory if it does not exist)
$ gunzip -c /u02/software/gold_images/AIX_nonRAC_db_binaries_11.2.0.3.2.tar.gz | tar -xf -

As root, run the following; ignore the warning about aborting the pre-installation procedure.


$ /u01/app/oracle/product/11.2.0/db_1/clone/rootpre.sh

As oracle, run the Oracle cloning script. Ignore any warning about configTool.
$ cd /u01/app/oracle/product/11.2.0/db_1/clone/bin
# The following command is all on one line
$ /usr/bin/perl clone.pl ORACLE_HOME="/u01/app/oracle/product/11.2.0/db_1" ORACLE_BASE="/u01/app/oracle"

As root, run the following


$ /u01/app/oraInventory/orainstRoot.sh    (only if Grid Infrastructure is not installed)
$ /u01/app/oracle/product/11.2.0/db_1/root.sh

If Grid Infrastructure has been installed, point the SQL*Net files to the Grid Infrastructure location. As oracle,
$ cd $ORACLE_HOME/network
$ rm -rf admin
$ ln -s /u01/grid/11.2.0/grid/network/admin admin
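A quick check that the soft link is in place (sketch; the link should point to the Grid Infrastructure network/admin directory):

$ ls -ld $ORACLE_HOME/network/admin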


10.2. Method 2: Fresh Install of Database Software


Run the following script as user root,
$ cd /u02/software/11203cd/database
$ ./rootpre.sh

Run the Oracle Installer as user oracle,


$ export DISPLAY=<your_PC_name>:0.0
$ cd /u02/software/11203cd/database
$ ./runInstaller -ignoreInternalDriverError

Download Software Updates


Select Skip software updates

Select Installation Option


Select Install database software only

Grid Installation Options


For RAC, select Oracle Real Application Clusters database installation and check the boxes for all the nodes. There is no need to test SSH Connectivity since it has already been verified. For non-RAC, select Single instance database installation.

Select Database Edition


Select Enterprise Edition

Specify Installation Location


Oracle Base: /u01/app/oracle


Software Location: /u01/app/oracle/product/11.2.0/db_1

Privileged Operating System Groups


Database Administrator (OSDBA) Group: dba Database Operator (OSOPER) Group (Optional): dba

Perform Prerequisite Check


Check the box Ignore All if the only check failures are the following:

OS Kernel Parameter: tcp_ephemeral_low
OS Kernel Parameter: tcp_ephemeral_high
OS Kernel Parameter: udp_ephemeral_low
OS Kernel Parameter: udp_ephemeral_high
Task resolv.conf Integrity

Summary
Click the Save Response File button, then click the Install button.

Execute Configuration scripts


After installation is completed, request sysadmin to run the following script as root in each cluster node.
/u01/app/oracle/product/11.2.0/db_1/root.sh

Applying 11.2.0.3 RDBMS PSU2


The Oracle Grid Infrastructure 11.2.0.3 PSU2 is 13696251 while the RDBMS PSU2 is 13696216. Both are included in the single zip file p13696251_112030_AIX64-5L.zip.


Copy the zip file from oragctst21:/u02/data01/dbmaint/cd_images/AIX/11203/PSU/APR2012 to the /u02/software/11203cd/PSU/APR2012 directory. Unzip all the zip files in /u02/software/11203cd/PSU/APR2012.

On each node, login as user oracle and run the following:

$ /u02/software/11203cd/PSU/APR2012/13696251/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/11.2.0/db_1
$ $ORACLE_HOME/OPatch/opatch napply -oh /u01/app/oracle/product/11.2.0/db_1 -local /u02/software/11203cd/PSU/APR2012/13696216
$ $ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/11.2.0/db_1 -local /u02/software/11203cd/PSU/APR2012/13696216
$ /u02/software/11203cd/PSU/APR2012/13696251/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/11.2.0/db_1
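As with the Grid home, the patch inventory of the database home can be checked after patching. A quick sanity check run as user oracle:

$ $ORACLE_HOME/OPatch/opatch lsinventory -oh /u01/app/oracle/product/11.2.0/db_1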

11. Adjusting ASM Parameters


The default values of the ASM init parameters need to be scaled up to avoid potential node evictions in a RAC environment. The ASM load is tied to the number of connections to the RAC databases. ASM parameters are stored in a spfile inside ASM; to change them, you have to update the spfile and then bounce the ASM instances by bouncing the cluster. For a large system with plenty of memory, as user oragrid:

$ sqlplus / as sysasm
SQL> alter system set memory_target=2560m scope=spfile sid='*';
SQL> alter system set memory_max_target=2560m scope=spfile sid='*';
SQL> alter system set processes=300 scope=spfile sid='*';
SQL> alter system set sga_max_size=2g scope=spfile sid='*';
SQL> alter system set large_pool_size=208m scope=spfile sid='*';
SQL> alter system set shared_pool_size=1g scope=spfile sid='*';
SQL> shutdown immediate
SQL> startup

For a smaller system with limited memory, as user oragrid:

$ sqlplus / as sysasm
SQL> alter system set memory_target=1g scope=spfile sid='*';
SQL> alter system set memory_max_target=1g scope=spfile sid='*';
SQL> alter system set processes=300 scope=spfile sid='*';
SQL> alter system set sga_max_size=1g scope=spfile sid='*';
SQL> alter system set large_pool_size=100m scope=spfile sid='*';
SQL> alter system set shared_pool_size=500m scope=spfile sid='*';
SQL> shutdown immediate
SQL> startup
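After the ASM instances are restarted, the new values can be confirmed from SQL*Plus. A minimal check as user oragrid:

$ sqlplus / as sysasm
SQL> show parameter memory_target
SQL> show parameter processes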

12. Create ASM Diskgroup


As oragrid, identify the CANDIDATE disk available to be assigned to ASM,

SQL> select path, OS_MB, HEADER_STATUS from v$asm_disk where HEADER_STATUS='CANDIDATE' order by 1;

Based on the space requirements in the SAN storage section of the database server specification, create the scripts to create the ASM diskgroups. Here is an example; change the diskgroup names and disk numbers as appropriate.
create diskgroup MDM_DATA_01 external redundancy disk
'/dev/rhdisk25', '/dev/rhdisk26', '/dev/rhdisk27', '/dev/rhdisk28', '/dev/rhdisk29'
/

create diskgroup MDM_FRA external redundancy disk
'/dev/rhdisk30', '/dev/rhdisk31', '/dev/rhdisk32', '/dev/rhdisk33',
'/dev/rhdisk34', '/dev/rhdisk35', '/dev/rhdisk36', '/dev/rhdisk37'
/

create diskgroup MDM_REDO_01 external redundancy disk
'/dev/rhdisk15', '/dev/rhdisk16', '/dev/rhdisk17', '/dev/rhdisk18', '/dev/rhdisk19'
/

create diskgroup MDM_REDO_02 external redundancy disk
'/dev/rhdisk20', '/dev/rhdisk21', '/dev/rhdisk22', '/dev/rhdisk23', '/dev/rhdisk24'
/

Verify the ASM diskgroup status

SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;

NAME                           STATE          TOTAL_MB    FREE_MB
------------------------------ ----------- ----------- ----------
MDM_DATA_01                    MOUNTED         1280000    1279932
MDM_FRA                        MOUNTED          409600     409536
MDM_REDO_01                    MOUNTED           25600      25542
MDM_REDO_02                    MOUNTED           25600      25542

For each diskgroup, change the attribute.


SQL> alter diskgroup MDM_FRA set attribute 'COMPATIBLE.ASM'='11.2.0.0.0';
SQL> alter diskgroup MDM_FRA set attribute 'COMPATIBLE.RDBMS'='11.2.0.0.0';


SQL> alter diskgroup MDM_FRA set attribute 'DISK_REPAIR_TIME'='12 H';
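The attribute settings can be verified per diskgroup from V$ASM_ATTRIBUTE. A sketch query, using the MDM_FRA diskgroup from the example above (attribute names are stored in lower case):

SQL> select g.name diskgroup, a.name attribute, a.value
     from v$asm_diskgroup g, v$asm_attribute a
     where g.group_number = a.group_number
     and g.name = 'MDM_FRA'
     and a.name in ('compatible.asm', 'compatible.rdbms', 'disk_repair_time');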

13. TNS_ADMIN
The TNS_ADMIN environment variable should point to /u01/grid/11.2.0/grid/network/admin. This is the central location of the tnsnames.ora, listener.ora, and sqlnet.ora files. Do not put any of these files in the RDBMS code tree, as that causes unnecessary confusion. Instead, replace the /u01/app/oracle/product/11.2.0/db_1/network/admin directory with a soft link pointing to the TNS_ADMIN location.
$ cd /u01/app/oracle/product/11.2.0/db_1/network
$ rmdir admin
$ ln -s /u01/grid/11.2.0/grid/network/admin admin

14. Register Listener and ASM to Grid Infrastructure


By default, the listeners and ASM will not always be started automatically after a server reboot. We have to run the following commands to add the listener and the ASM instance to the Grid Infrastructure so that they will be started automatically. As user oragrid, shut down the listener first, and then run the following command.
$ srvctl add listener -l <listener_name> -o $ORACLE_HOME

For example,

$ srvctl add listener -l LISTENER -o $ORACLE_HOME

Now, start the listener. As user oragrid, configure ASM to always start up automatically after the server starts up.
$ crsctl modify resource ora.asm -attr "AUTO_START=always"

As user oragrid, run the following crsctl command to verify. You should see the ora.<listener_name>.lsnr and ora.asm resources.
$ crsctl status res -t
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       etodboratst01
ora.asm
               ONLINE  ONLINE       etodboratst01            Started


15. Post-Install Database Setup


Enable force logging in the database.
SQL> alter database force logging;

Set database passwords to be case-insensitive.


SQL> alter system set sec_case_sensitive_logon=FALSE scope=both sid='*';

Move the SYS.AUD$ table from the dictionary managed SYSTEM tablespace to SYSAUX.
SQL> exec DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION ( audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD, audit_trail_location_value => 'SYSAUX');

Enable block change tracking, for example using the EIRS_DATA_01 diskgroup.

SQL> alter database enable block change tracking using file '+EIRS_DATA_01';
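The status of block change tracking can be confirmed from V$BLOCK_CHANGE_TRACKING. A minimal check:

SQL> select status, filename from v$block_change_tracking;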

Enable flashback database if needed (optional)


SQL> alter database flashback on;

Configure RMAN
$ rman target /
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 15 DAYS;
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'ENV=(TDPO_OPTFILE=/etc/tdpo.opt)';
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';

If this is a primary database with a standby configured:

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
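To review the resulting configuration, RMAN can display all configured settings. A quick check:

RMAN> SHOW ALL;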

