Contents

1. Introduction
2. Platform and Version
3. AIX Setup
4. File Systems Layout
5. Setting up SSH keys (for RAC only)
6. Host Names and Network Interface (for RAC only)
7. Cluster and Scan names (for RAC only)
8. ASM Configuration
9. Grid Infrastructure Software Installation
10. Oracle Database Software Installation
11. Adjusting ASM Parameters
12. Create ASM Diskgroup
13. TNS_ADMIN
14. Register Listener and ASM to Grid Infrastructure
15. Post-Install Database Setup
1. Introduction
This document describes the installation standard for Oracle Grid Infrastructure, the cluster software for Oracle RAC. As more projects consider migrating standalone databases to Oracle RAC, a common implementation standard helps ensure a smooth implementation and better ongoing support from other DBAs. The upcoming new data center will have a significant number of new Oracle Grid Infrastructure installations. This document will serve as a blueprint for both existing and newly hired DBAs to follow in the course of Oracle Grid Infrastructure installation. Since the target audience is DBAs performing the software installation, the language used in this document is technical and product specific.
3. AIX Setup
O/S Settings
The /tmp file system needs to have at least 1 GB of free space. The ncargs attribute (ARG/ENV list size, allocated in 4 KB blocks) needs to be at least 128. Run the following command to verify.
$ lsattr -E -l sys0 -a ncargs
ncargs 256 ARG/ENV list size in 4K byte blocks True
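If /tmp is short on space or ncargs is below the minimum, the free space can be checked with df and ncargs can be raised by the sysadmin as root with chdev; a short sketch (128 is the minimum stated above):

$ df -g /tmp
# chdev -l sys0 -a ncargs=128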
The maximum number of PROCESSES allowed per user needs to be at least 16384. Run the following command to verify.
$ lsattr -E -l sys0 -a maxuproc
maxuproc 16384 Maximum number of PROCESSES allowed per user True
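If maxuproc is below the minimum, the sysadmin can raise it as root, for example:

# chdev -l sys0 -a maxuproc=16384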
User Accounts
The Grid Infrastructure software will be installed under the Unix account oragrid. Here is the profile of the oragrid account. You can verify it by running the id command and checking the passwd file. For consistency, make sure the user ID and group IDs are the same as listed below.
$ id oragrid
uid=501(oragrid) gid=501(dba) groups=502(oinstall)
$ id oracle
uid=500(oracle) gid=501(dba) groups=502(oinstall)
$ grep oragrid /etc/passwd
oragrid:!:501:501:Oracle GRID Administration:/u01/home/oragrid:/bin/ksh
The AIX ulimit should be set to unlimited in all categories for users: oragrid, oracle, and root.
$ ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        unlimited
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) unlimited
threads(per process) unlimited
processes(per user)  unlimited
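One common way to set this on AIX is through per-user stanzas in /etc/security/limits, maintained by the sysadmin; an illustrative stanza for oragrid (-1 means unlimited; repeat for oracle and root, and for the remaining categories as needed):

oragrid:
        fsize = -1
        core = -1
        cpu = -1
        data = -1
        rss = -1
        stack = -1
        nofiles = -1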
Get the passwords of oragrid and oracle from the sysadmin. Change the passwords and keep them safe.
.Profile
Update the .profile of users oragrid and oracle to include the following lines. This sets the command line prompt to include the username, hostname, and current directory. Additional information can be added to the prompt as desired. For user oragrid, DB_NAME will be ASM, i.e. ASMenv. For user oracle, DB_NAME will be the name of the primary database to be hosted.
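A minimal sketch of such prompt settings for ksh (the exact format is up to the DBA; the user and host parts are evaluated once at login, while ${PWD} is expanded at each prompt):

# show user, host, and current directory in the prompt
export PS1="$(whoami)@$(hostname):"'${PWD} $ '
# environment file naming convention: <DB_NAME>env, e.g. ASMenv for oragrid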
4. File Systems Layout

File system sizes: 100 GB and 150 GB; volume group: orabinvg.
OraInventory Location
Inventory directory location should be /u01/app/oraInventory. Since this location is shared by both the Grid Infrastructure and RDBMS installations, change its permission setting to allow both oragrid and oracle users to have full privileges. As user oragrid, create the oraInventory. This needs to be owned by oragrid; otherwise, the grid infrastructure installation or cloning will fail.
$ mkdir /u01/app/oraInventory
$ chmod 770 /u01/app/oraInventory
ORACLE_BASE
The ORACLE_BASE will be set to /u01/app/oracle for both oragrid and oracle. In order to be shared by both oragrid and oracle, we have to set the permission of certain directories for both users. As user oracle,
$ cd /u01/app
$ mkdir oracle
$ chmod 775 oracle
$ cd /u01/app/oracle
$ mkdir diag cfgtoollogs
$ chmod 775 diag cfgtoollogs
5. Setting up SSH keys (for RAC only)

Then from each remaining node, run these commands:

$ cd $HOME/.ssh
$ cat id_rsa.pub >> authorized_keys
$ cat id_dsa.pub >> authorized_keys

Do the following on each node except the last one.
etc. From every node, run the following command against ALL nodes (including the local node) to verify that the SSH keys work without prompting for a password:
$ ssh <hostname> date
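A simple loop can run the check against all nodes from the current node (replace the placeholders with the actual host names):

$ for h in <node1> <node2> ; do ssh $h date ; done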
6. Host Names and Network Interface (for RAC only)

All the network names for this server except for <hostname>-priv should be defined in the Domain Name Server (DNS). The RAC private IP addresses (xxxxx-priv) should NOT be defined in DNS. You should be able to successfully ping all the <hostname> and <hostname>-priv<n> names. The <hostname>-vip and <scan_name> addresses are only pingable when the cluster is running. Run the following commands to check the network names. Repeat for each server in the cluster. For nslookup, verify that the /etc/hosts entry matches the IP address found (see example below).
$ ping <hostname>
$ ping <hostname>-priv1
$ ping <hostname>-priv2
$ nslookup <hostname>
$ nslookup <hostname>-vip
$ nslookup <scan_name>
$ nslookup <hostname>-priv1                   (should return with not found)
$ nslookup <hostname>-priv2                   (should return with not found)
$ nslookup <IP address of <hostname>-priv1>   (should return with not found)
$ nslookup <IP address of <hostname>-priv2>   (should return with not found)
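A sketch of what the corresponding /etc/hosts entries might look like (all addresses and names below are placeholders; only the -priv names must be resolved locally since they are not in DNS):

10.10.1.11      <hostname>
10.10.1.21      <hostname>-vip
192.168.10.11   <hostname>-priv1
192.168.20.11   <hostname>-priv2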
During the Grid Infrastructure installation, when prompted for network interface names, use en0 for Public, mark en1 as Do Not Use, and use en2 for Private #1 and en3 for Private #2.
8. ASM Configuration
ASM DiskGroups
In the RAC configuration, all voting disks, datafiles, redo logs, archived logs, flashback logs, control files, and the spfile should reside on the shared ASM storage. Here is the naming convention for the various ASM diskgroups. Note that the naming convention specifies only the suffix of the diskgroup names. DBAs can determine the other portion of the diskgroup names according to the application and the nature of the diskgroups. Some diskgroups may be shared by multiple applications; in that case, it may not be reasonable to tie the diskgroup name to the application name. We recommend using the default allocation unit, which is 1 MB. Let ASM manage its directory structure. Do not manually create sub-directories in ASM diskgroups. When creating datafiles, for example, just specify the file name as +<diskgroup>. ASM will assign a name and put the file in the appropriate sub-directory.

Diskgroup Name   ASM Redundancy   LUN size                  Description
*_GRID           High (5 disks)   5G                        Voting disk, OCR
*_DATA_nn        External         50G, 250G, 500G, 1000G    Datafiles, tempfiles, control files, spfile
*_REDO_nn        External         5G, 50G                   Online redo logs, control files, spfile
*_FRA            External                                   Archived logs, flashback logs, incremental backups
*_IMAGE          External                                   RMAN image copies
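For example, to create a tablespace without naming the datafile explicitly (the tablespace name app_data and the diskgroup MDM_DATA_01 are purely illustrative), let ASM generate the file name and directory:

SQL> create tablespace app_data datafile '+MDM_DATA_01' size 10g;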
ASM Disks
When system administrators present the disks for ASM, they have to change ownership to oragrid and change the permission setting to 660.
$ chown oragrid:dba /dev/rhdisk<nnn>
$ chmod 660 /dev/rhdisk<nnn>
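The result can be checked before the installation; the owner should be oragrid, the group dba, and the mode rw for both owner and group:

$ ls -l /dev/rhdisk<nnn>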
9. Grid Infrastructure Software Installation

$ /u01/grid/11.2.0/grid/clone/rootpre.sh
# Output will be like the following. Ignore the ACFS driver error.
Using configuration parameter file: /u01/grid/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'oragrid', privgrp 'dba'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
CRS-4664: Node eirsdboratst01 successfully pinned.
Adding Clusterware entries to inittab
ACFS drivers installation failed
Now that the spfile has been created, remove the $ORACLE_HOME/dbs/init+ASM.ora file and start ASM.
$ srvctl start asm

Create the Oracle password file:

$ orapwd file=/u01/grid/11.2.0/grid/dbs/orapw+ASM password=<password>

Create the ASMSNMP user:

$ sqlplus / as sysasm
SQL> create user asmsnmp identified by <password>;
SQL> grant sysdba to asmsnmp;
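To confirm that the ASM instance is registered and running, a quick check is:

$ srvctl status asm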
Unzip all the zip files in /u02/maint/11203cd. Request sysadmin to run the rootpre.sh script as root on each node.
/u02/software/11203cd/grid/rootpre.sh
For RAC installation, run the Oracle cluvfy utility as oragrid to verify that the nodes are ready for the cluster installation. You may need to unset CV_HOME first.
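For example:

$ unset CV_HOME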
$ /u02/software/11203cd/grid/runcluvfy.sh stage -pre crsinst -n <node1>,<node2> -fixup -verbose
Set Software Location to /u01/grid/11.2.0/grid.
When warned that the selected Oracle home is outside of Oracle base, click Yes to continue.
Create Inventory
Set Inventory Location to /u01/app/oraInventory
Summary
Click Save Response File button. Click the Install button.
Copy the zip file from oragctst21:/u02/data01/dbmaint/cd_images/AIX/11203/PSU/APR2012 to the /u02/software/11203cd/PSU/APR2012 directory. Unzip all the zip files in /u02/maint/11203cd/PSU/APR2012.

Do the following on each node. Login as user oragrid:

$ . ./ASMenv

Ask the sysadmin to run this command as root on each node.
For RAC, run:
$ $GRID_HOME/crs/install/rootcrs.pl -unlock
For non-RAC standalone, run:
$ $GRID_HOME/crs/install/roothas.pl -unlock

You will need a new version of OPatch to apply the patch. As oragrid:

$ cd /u02/software/11203cd/PSU/APR2012
$ scp -p oragctst21:/u02/data01/dbmaint/cd_images/AIX/11203/OPATCH/*.zip .
$ cd /u01/grid/11.2.0/grid
$ mv OPatch OLD_OPatch
$ cp /u02/software/11203cd/OPATCH/p6880880_112000_AIX64-5L.zip .
$ unzip p6880880_112000_AIX64-5L.zip

On each node, login as user oragrid and run the following.

$ . ./ASMenv
$ $GRID_HOME/OPatch/opatch napply -oh /u01/grid/11.2.0/grid -local /u02/software/11203cd/PSU/APR2012/13696251

Ask the sysadmin to run this command as root on each node:

$ $GRID_HOME/rdbms/install/rootadd_rdbms.sh

For RAC, run:
$ $GRID_HOME/crs/install/rootcrs.pl -patch
For non-RAC standalone, run:
$ $GRID_HOME/crs/install/roothas.pl -patch
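To confirm that the PSU has been applied to the Grid home, the patch inventory can be listed on each node, for example:

$ $GRID_HOME/OPatch/opatch lsinventory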
10. Oracle Database Software Installation

As oracle, run the Oracle cloning script. Ignore any warning about configTool.
$ cd /u01/app/oracle/product/11.2.0/db_1/clone/bin

# The following command is all in one line
$ /usr/bin/perl clone.pl ORACLE_HOME="/u01/app/oracle/product/11.2.0/db_1" ORACLE_BASE="/u01/app/oracle"
If Grid Infrastructure has been installed, point the SQL*Net files to the Grid Infrastructure location. As oracle,
$ cd $ORACLE_HOME/network
$ rm -rf admin
$ ln -s /u01/grid/11.2.0/grid/network/admin admin
Summary
Click Save Response File button. Click the Install button.
Copy the zip file from oragctst21:/u02/data01/dbmaint/cd_images/AIX/11203/PSU/APR2012 to the /u02/software/11203cd/PSU/APR2012 directory. Unzip all the zip files in /u02/maint/11203cd/PSU/APR2012.

On each node, login as user oracle and run the following.

$ /u02/software/11203cd/PSU/APR2012/13696251/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/11.2.0/db_1
$ $ORACLE_HOME/OPatch/opatch napply -oh /u01/app/oracle/product/11.2.0/db_1 -local /u02/software/11203cd/PSU/APR2012/13696216
$ $ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/11.2.0/db_1 -local /u02/software/11203cd/PSU/APR2012/13696216
$ /u02/software/11203cd/PSU/APR2012/13696251/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/11.2.0/db_1
11. Adjusting ASM Parameters

$ sqlplus / as sysasm
SQL> alter system set memory_target=2560m scope=spfile sid='*';
SQL> alter system set memory_max_target=2560m scope=spfile sid='*';
SQL> alter system set processes=300 scope=spfile sid='*';
SQL> alter system set sga_max_size=2g scope=spfile sid='*';
SQL> alter system set large_pool_size=208m scope=spfile sid='*';
SQL> alter system set shared_pool_size=1g scope=spfile sid='*';
SQL> shutdown immediate
SQL> startup
$ sqlplus / as sysasm
SQL> alter system set memory_target=1g scope=spfile sid='*';
SQL> alter system set memory_max_target=1g scope=spfile sid='*';
SQL> alter system set processes=300 scope=spfile sid='*';
SQL> alter system set sga_max_size=1g scope=spfile sid='*';
SQL> alter system set large_pool_size=100m scope=spfile sid='*';
SQL> alter system set shared_pool_size=500m scope=spfile sid='*';
SQL> shutdown immediate
SQL> startup
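After the restart, the new values can be verified from the same SQL*Plus session, for example:

SQL> show parameter memory_target
SQL> show parameter processes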
12. Create ASM Diskgroup

As user oragrid, list the candidate ASM disks:

SQL> select path, OS_MB, HEADER_STATUS from v$asm_disk where HEADER_STATUS='CANDIDATE' order by 1;
Based on the space requirements in the SAN storage section of the database server specification, create the scripts to create the ASM diskgroups. Here is an example; change the diskgroup names and disk numbers as appropriate.
create diskgroup MDM_DATA_01 external redundancy
disk '/dev/rhdisk25', '/dev/rhdisk26', '/dev/rhdisk27', '/dev/rhdisk28', '/dev/rhdisk29'
/
create diskgroup MDM_FRA external redundancy
disk '/dev/rhdisk30', '/dev/rhdisk31', '/dev/rhdisk34', '/dev/rhdisk35'
/
create diskgroup MDM_REDO_01 external redundancy
disk '/dev/rhdisk15', '/dev/rhdisk16', '/dev/rhdisk17', '/dev/rhdisk18', '/dev/rhdisk19'
/
create diskgroup MDM_REDO_02 external redundancy
disk '/dev/rhdisk20', '/dev/rhdisk21', '/dev/rhdisk22', '/dev/rhdisk23', '/dev/rhdisk24'
/
SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;

NAME                           STATE         TOTAL_MB    FREE_MB
------------------------------ ----------- ---------- ----------
MDM_DATA_01                    MOUNTED        1280000    1279932
MDM_FRA                        MOUNTED         409600     409536
MDM_REDO_01                    MOUNTED          25600      25542
MDM_REDO_02                    MOUNTED          25600      25542
13. TNS_ADMIN
The TNS_ADMIN environment variable should point to /u01/grid/11.2.0/grid/network/admin. This is the central location of the tnsnames.ora, listener.ora, and sqlnet.ora files. Do not put any of these files in the RDBMS code tree; that only causes unnecessary confusion. Instead, replace the /u01/app/oracle/product/11.2.0/db_1/network/admin directory with a soft link pointing to the TNS_ADMIN location.
$ cd /u01/app/oracle/product/11.2.0/db_1/network
$ rmdir admin
$ ln -s /u01/grid/11.2.0/grid/network/admin admin
For example, set TNS_ADMIN in the .profile of the oragrid and oracle users.
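A one-line sketch of such an entry (the path comes from this section; ksh export syntax is assumed):

export TNS_ADMIN=/u01/grid/11.2.0/grid/network/admin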
14. Register Listener and ASM to Grid Infrastructure

Now start the listener. Then, as user oragrid, configure ASM to always start up automatically after the server starts up.
$ crsctl modify resource ora.asm -attr AUTO_START=always
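The listener start mentioned above and the database registration verified below are typically done with srvctl; a sketch assuming the default listener name, the Grid home used in this document, and a placeholder database name (the database commands are run as user oracle from the database home):

$ srvctl add listener -l LISTENER -o /u01/grid/11.2.0/grid
$ srvctl start listener
$ srvctl add database -d <db_name> -o /u01/app/oracle/product/11.2.0/db_1
$ srvctl start database -d <db_name>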
As user oragrid, run the following crsctl command to verify. You should see the ora.<listener_name>.lsnr and ora.<instance_name>.db resources.
$ crsctl status res -t
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE  etodboratst01
ora.asm
               ONLINE  ONLINE  etodboratst01   Started
15. Post-Install Database Setup

Move the SYS.AUD$ table from the dictionary-managed SYSTEM tablespace to SYSAUX.
SQL> exec DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION ( audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD, audit_trail_location_value => 'SYSAUX');
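To confirm the move, check which tablespace AUD$ now resides in, for example:

SQL> select owner, table_name, tablespace_name from dba_tables where table_name = 'AUD$';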
Enable block change tracking, for example using the EIRS_DATA_01 diskgroup.
SQL> alter database enable block change tracking using file '+EIRS_DATA_01';
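To confirm that block change tracking is enabled, for example:

SQL> select status, filename from v$block_change_tracking;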
Configure RMAN
$ rman target /
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 15 DAYS;
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'ENV=(TDPO_OPTFILE=/etc/tdpo.opt)';
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DEVICE TYPE 'SBT_TAPE';
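The resulting configuration can be reviewed with, for example:

RMAN> SHOW ALL;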