https://supporthtml.oracle.com/epmos/faces/ui/km/SearchDocDisplay.jsp...
Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i [ID 823586.1]
Modified: Dec 1, 2011 Type: WHITE PAPER Status: PUBLISHED Priority: 3
Oracle Applications Release 11i (11.5.10) offers numerous configuration options that can be chosen to suit particular business scenarios, uptime requirements, hardware capability, and availability requirements. This document describes how to migrate Oracle Applications Release 11i (Release 11.5.10) running on a single database instance to an Oracle Real Application Clusters (Oracle RAC) environment running Oracle Database 11g Release 2 (11gR2).
Note: At present, this document applies to UNIX and Linux platforms only. If you are using Windows and want to migrate to Oracle RAC or ASM, you must follow the procedures described in the Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2) and the Oracle Database Administrator's Guide 11g Release 2 (11.2). The most current version of this document can be obtained in My Oracle Support Knowledge Document 823586.1. There is a change log at the end of this document.
Note: Most documentation links are to the generic 11gR2 documentation. However, where links are provided to installation documentation, the links resolve to the Linux doc set. Please refer to your specific platform for installation documentation.
A number of conventions are used in describing the Oracle Applications architecture:

Application tier: Machines (nodes) running Forms, Web, and other services (servers). Sometimes called the middle tier.
Database tier: Machines (nodes) running the Oracle Applications database.
oracle: User account that owns the database file system (database ORACLE_HOME and files).
CONTEXT_NAME: The CONTEXT_NAME variable specifies the name of the Applications context that is used by AutoConfig. The default is <SID>_<hostname>.
CONTEXT_FILE: Full path to the Applications context file on the application tier or database tier. The default locations are as follows. Application tier context file: <APPL_TOP>/admin/<CONTEXT_NAME>.xml. Database tier context file: <RDBMS ORACLE_HOME>/appsutil/<CONTEXT_NAME>.xml.
APPSpwd: Oracle Applications database user password.
Monospace Text: Represents command line text. Type such a command exactly as shown.
<>: Text enclosed in angle brackets represents a variable. Substitute a value for the variable text. Do not type the angle brackets.
\: On UNIX or Linux, the backslash character can be entered to indicate continuation of the command line on the next screen line.
This document is divided into the following sections:

Section 1: Overview
Section 2: Environment
Section 3: Database Installation and Oracle RAC Migration
Section 4: References
Appendix A: Sample rconfig XML file
Appendix B: Sample <context_name>_ifile.ora for concurrent processing nodes
Appendix C: Example Grid Installation
Appendix D: Database Conversion - Known Issues
Appendix E: Using SCAN Listener with E-Business Suite 11i
Section 1: Overview
1 of 17
08-Mar-2012 5:09 PM
You should be familiar with Oracle Database 11gR2, and have a good knowledge of Oracle Real Application Clusters (RAC). Refer to the Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2) when planning to set up Oracle Real Application Clusters and shared devices.

1.1 Cluster Terminology

You should understand the terminology used in a cluster environment. Key terms include the following.

Automatic Storage Management (ASM) is an Oracle database component that acts as an integrated file system and volume manager, providing the performance of raw devices with the ease of management of a file system. In an ASM environment, you specify a disk group rather than the traditional datafile when creating or modifying a database structure such as a tablespace. ASM then creates and manages the underlying files automatically.

Cluster Ready Services (CRS) is the primary program that manages high availability operations in an Oracle RAC environment. The crs process manages designated cluster resources, such as databases, instances, services, and listeners.

Parallel Concurrent Processing (PCP) is an extension of the Concurrent Processing architecture. PCP allows concurrent processing activities to be distributed across multiple nodes in an Oracle RAC environment, maximizing throughput and providing resilience to node failure.

Real Application Clusters (RAC) is an Oracle database technology that allows multiple machines to work on the same data in parallel, reducing processing time significantly. An Oracle RAC environment also offers resilience if one or more machines become temporarily unavailable as a result of planned or unplanned downtime.

Grid Infrastructure is the new unified ORACLE_HOME for both ASM and CRS; the Grid Infrastructure install replaces the Clusterware install in 11gR2. See Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.
1.2 Configuration Prerequisites

The prerequisites for using Oracle RAC with Oracle Applications Release 11i are:

1. If you do not already have an existing single instance environment, install Oracle Applications using Rapid Install.

Note: If you are not planning to use ASM as part of the RAC conversion, ensure that all data files, control files, and redo log files of the existing single instance database are located on a shared disk. If they reside on a local disk, move them to a shared disk and recreate the control files. Refer to Oracle Database Administrator's Guide 11g Release 2 (11.2) for more information on recreating the control files.

2. Set up the required cluster hardware and interconnect medium.

3. Apply the following Oracle Applications patches before you start to configure your environment to use Oracle RAC:

1. Minipack 11i.AD.I.5 - Patch 5161676
2. 11GR2 APPS INTEROPERABILITY PATCH - Patch 8815204
3. USE OF LITERAL CAUSING PERFORMANCE ISSUE WHEN INSERTING DATA INTO USER DEFINED ATTRIBUTES TABLES - Patch 5644137
4. CZ1CT102:ISTORE CACHE COMPONENTS TO BE DISABLED FOR SHOWING ADD TO CART BUTTONS - Patch 6400762
5. 11i.ATG_PF.H RUP6 - Patch 5903765
6. 11.5.10:SFM UNABLE TO PROCESS ORDERS IN RAC CONFIG - Patch 4022732. The Readme for this patch states that Concurrent Manager setup for Oracle RAC should already have been done as a prerequisite. However, this patch can be applied before setting up Oracle RAC.
7. To use the named db listener feature of AutoConfig, apply TXK RUP-U (Patch 9535311) or higher. This patch is also an AutoConfig prerequisite in the most recent interoperability note, Oracle Applications Release 11i with Oracle Database 11g Release 2, Document 881505.1.
Section 2: Environment
2.1 Software and Hardware Configuration

For supported hardware configurations, refer to the relevant platform installation guides - for example, Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux and Oracle Real Application Clusters Installation Guide 11g Release 2 (11.2) for Linux and UNIX. The minimum software versions are detailed below.
Oracle Applications Release 11i: 11.5.10.2
Oracle Database: 11.2.0.1 or higher
Oracle Cluster Ready Services: 11.2.0.1 or higher
You can obtain the latest Oracle Database 11gR2 software from: http://www.oracle.com/technology/software/products/database/index.html

Note: Oracle Cluster Ready Services can be at a release level equal to or greater than the Oracle Database version, but not at a lower level.
2.2 ORACLE_HOME Nomenclature

This document refers to various ORACLE_HOMEs, as follows:

SOURCE_ORACLE_HOME: Database ORACLE_HOME used by Oracle Applications Release 11i. Can be any supported version.
11gR2 ORACLE_HOME: Database ORACLE_HOME installed for the 11gR2 Oracle RAC database.
11gR2 CRS ORACLE_HOME: ORACLE_HOME installed for 11gR2 Cluster Ready Services (Grid Infrastructure home).
8.0.6 ORACLE_HOME: ORACLE_HOME installed by Oracle E-Business Suite on the application tier.
Note: You should take complete backups of your environment before executing these procedures, and take further backups after each stage of the migration. These procedures should be validated on test environments before being carried out in a production environment. Users must be logged off the system during these procedures.
Note: Installation of Oracle Clusterware 11g Release 2 is now part of the Infrastructure install. This task requires an understanding of the type of cluster and infrastructure that have been chosen, a description of which is outside the scope of this document. For convenience, the general steps involved are outlined below, but you should use the Infrastructure documentation set as the primary reference.
3.1.1 Check Network Requirements In Oracle Database 11g Release 2, the Infrastructure install can be configured to specify address management via node addresses or names (as per older releases), or via Grid Naming Services. Regardless of the choice here, nodes must satisfy the following requirements: Each node must have at least two network adapters: one for the public network interface, and one for the private network
interface (interconnect). For the public network, each network adapter must support the TCP/IP protocol. For the private network, the interconnect must support the user datagram protocol (UDP), using high-speed network adapters and switches that support TCP/IP (Gigabit Ethernet or better is recommended). To improve availability, backup public and private network adapters can be configured for each node. The interface names associated with the network adapter(s) for each network must be the same on all nodes.

If Grid Naming Services is not used, the following addresses must also be set up:

An IP address and associated host name for each public network interface, registered in DNS.

One unused virtual IP address (VIP) and associated virtual host name, registered in DNS, resolved in the hosts file, or both, which will be configured for the primary public network interface. The virtual IP address must be in the same subnet as the associated public interface. After installation, clients can be configured to use either the virtual host name or virtual IP address. If a node fails, its virtual IP address fails over to another node.

A private IP address (and optionally a host name) for each private interface. Oracle recommends that you use private network IP addresses for these interfaces.

An additional virtual IP address (VIP) and associated virtual host name for the SCAN listener, registered in DNS.

For further information, see the pre-installation requirements in Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.
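As an illustration of the non-GNS addressing scheme above, a hosts-file sketch for one node (all names and addresses below are hypothetical examples, not values from this document):

```
# /etc/hosts entries for one cluster node (illustrative only)
192.0.2.11   rac1.example.com       rac1        # public interface, also in DNS
192.0.2.21   rac1-vip.example.com   rac1-vip    # virtual IP, same subnet as public
10.0.0.1     rac1-priv                          # private interconnect
```

The SCAN VIP is normally registered in DNS only (resolving to one or more addresses), not in the hosts file.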
Note: A common mistake is to not set up ntpd correctly. See Network Time Protocol Setting in the Oracle Grid Infrastructure Installation Guide.
3.1.2 Verify Kernel Parameters

As part of the typical Infrastructure install, a fixup script is generated to handle most common kernel parameter issues. Follow the installation instructions for running this script. Detailed hardware and OS requirements are given in Advanced Installation Oracle Grid Infrastructure for a Cluster Pre-installation Tasks [Linux].

3.1.3 Set up Shared Storage

The available shared storage options are either ASM or a shared file system (clustered or NFS). Use of raw disk devices is only supported for upgrades. These storage options are detailed in Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) - Configuring Storage [Linux].

3.1.4 Check Account Setup

Configure the oracle account's environment for Oracle Clusterware and Oracle Database 11gR2, as per the Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

3.1.5 Configure Secure Shell on All Cluster Nodes

Secure Shell configuration is covered in detail in both the Oracle Real Application Clusters Installation Guide and the Oracle Grid Infrastructure Installation Guide. The 11gR2 installer now provides the option to automatically set up passwordless ssh connectivity, so unlike previous releases, manual setup of Secure Shell is not necessary. For further details on manual setup of passwordless ssh, see Appendix E in Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

3.1.6 Run Cluster Verification Utility (CVU)

The installer will automatically run the Cluster Verify tool and provide fixup scripts for OS issues. However, you can also run CVU prior to installation to check for potential issues.

1. Install the cvuqdisk package as per Installing the cvuqdisk Package for Linux in Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.
2. Use the following command to determine which pre-installation steps have been completed, and which need to be performed:
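The command itself fell on a page boundary in this capture. The pre-installation check is typically run with the Cluster Verification Utility from the staged installation media; a sketch, assuming two hypothetical node names:

```shell
# Pre-install check for Clusterware; node names are placeholders.
# -fixup generates scripts for fixable OS issues, -verbose shows each check.
./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -fixup -verbose
```

Verify the exact option set against the CVU documentation for your release before running.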
3. Confirming Oracle Clusterware function: 1. After installation, log in as root, and use the following command to confirm that your Oracle Clusterware installation is running correctly:
<CRS_HOME>/bin/crs_stat -t -v
2. Successful Oracle Clusterware operation can also be verified using the following command:
<CRS_HOME>/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
3. Post-Install Actions 1. By default, the Global Services Daemon (GSD) is not started on the cluster. To start GSD, change directory to the <CRS_HOME> and issue the following commands:
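The GSD start commands were not reproduced in this capture; per the 11.2 Clusterware documentation they are along these lines (a sketch, not verified against your patch level):

```shell
# Enable and start the Global Services Daemon (disabled by default in 11gR2)
<CRS_HOME>/bin/srvctl enable nodeapps -g
<CRS_HOME>/bin/srvctl start nodeapps
```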
3.2 Install Oracle Database Software 11g Release 2 and Upgrade Applications Database to 11g Release 2
Note: You should take a full backup of the oraInventory directory before starting this stage, in which you will run the Oracle Universal Installer (runInstaller) to carry out an Oracle Database installation with Oracle RAC. In the Cluster Nodes window, verify the cluster nodes shown for the installation, and select all nodes included in your Oracle RAC cluster.
To install Oracle Database 11g Release 2 software and upgrade existing database to 11g Release 2, refer to the interoperability note, Oracle Applications Release 11i with Oracle Database 11g Release 2 , Document 881505.1, following all the instructions and steps listed there except these:
Start the new database listener (Conditional)
Implement and run AutoConfig
Restart Applications server processes (Conditional)
Note: Ensure the database software is installed on all nodes in the cluster.
srvctl add listener -l listener_ebs -o <11gR2 ORACLE_HOME> -p 1522
srvctl setenv listener -l listener_ebs -T TNS_ADMIN=$TNS_ADMIN
When the listener starts, it will run from the database ORACLE_HOME, and srvctl manages the listener.ora file across all nodes.

3.3.2 Listener Requirements for Converting to Oracle RAC

By default, the Grid install creates a default listener, so this step is optional. However, tools such as rconfig impose additional restrictions on the choice of listener: the listener must be the default listener, and it must run from the Grid Infrastructure home. So if the default listener is not set up for rconfig, the example in 3.3.1 would need to be changed to:
listener_<NODE_NAME>, i.e. different listener names on each node in the cluster. If the named db listener patch has not been applied, manual steps will be required to enable use of srvctl - see 3.7.4.
6. Move the $SOURCE_ORACLE_HOME/dbs/spfile<ORACLE_SID>.ora for this instance to the shared location.
7. Take a backup of the existing $SOURCE_ORACLE_HOME/dbs/init<ORACLE_SID>.ora, and create a new $SOURCE_ORACLE_HOME/dbs/init<ORACLE_SID>.ora with the following parameter:
For further details of how to control archiving, see Oracle Database Administrator's Guide 11g Release 2 (11.2).
ORACLE_HOME=<11gR2_ORACLE_HOME>
LD_LIBRARY_PATH=<11gR2_ORACLE_HOME>/lib:<11gR2_ORACLE_HOME>/ctx/lib
ORACLE_SID=<instance name for current database node>
PATH=$PATH:$ORACLE_HOME/bin
TNS_ADMIN=$ORACLE_HOME/network/admin/<context_name>
7. Copy the tnsnames.ora file from $ORACLE_HOME/network/admin to the $TNS_ADMIN directory, and edit the aliases for SID=<new RAC instance name>.
8. As the APPS user, run the following command on the primary node to de-register the current configuration:
SQL>exec fnd_conc_clone.setup_clean;
9. From the 11gR2 ORACLE_HOME/appsutil/bin directory, create an instance-specific XML context file by executing the command:
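The command itself fell on a page boundary in this capture. Building the database context file is done with the adbldxml.pl utility shipped in appsutil; a sketch (parameter names should be verified against your AD patch level):

```shell
# Generate an instance-specific database context file
cd <11gR2 ORACLE_HOME>/appsutil/bin
perl adbldxml.pl tier=db appsuser=<APPS user> appspasswd=<APPS password>
```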
Note: If you have applied the named db listener AutoConfig patch [see Configuration Prerequisites] and want to use a named database listener, modify the s_db_listener context variable in the context file.

10. Set the value of s_virtual_host_name to point to the virtual hostname for the database host, by editing the database context file $ORACLE_HOME/appsutil/<sid>_hostname.xml.
11. From the 11gR2 ORACLE_HOME/appsutil/bin directory, execute AutoConfig on the database tier by running the adconfig.pl script.
12. Check the AutoConfig log file located in the <11gR2 ORACLE_HOME>/appsutil/log/<CONTEXT_NAME>/<MMDDhhmm> directory.
13. (Optional) If you want to use the SCAN listener, complete the steps under the "Steps to perform on Database Tier" section in Appendix E.

3.7.2 Shut Down Instances and Listeners

Use the following commands:
srvctl add listener -l listener_<name> -o <11gR2 ORACLE_HOME> -p <port>
srvctl setenv listener -l listener_<name> -T TNS_ADMIN=$ORACLE_HOME/network/admin
3. Edit the AutoConfig listener.ora and change LISTENER_<node> to LISTENER_<name> (for example, LISTENER_EBS).
4. On each node, add the AutoConfig listener.ora as an ifile in $ORACLE_HOME/network/admin/listener.ora.
5. On each node, add the AutoConfig tnsnames.ora file as an ifile in $ORACLE_HOME/network/admin/tnsnames.ora.
6. Add TNS_ADMIN to the database:
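The command for adding TNS_ADMIN to the database fell outside this capture; setting an environment variable at the database level is done with srvctl setenv, along these lines (the database name is a placeholder):

```shell
# Attach TNS_ADMIN to the database resource so all instances inherit it
srvctl setenv database -d <db_name> -T TNS_ADMIN=$ORACLE_HOME/network/admin
```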
$AD_TOP/bin/adconfig.sh contextfile=$APPL_TOP/admin/<context_file>
For more information on AutoConfig, see My Oracle Support Knowledge Document 165195.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite 11i.

5. Check the $APPL_TOP/admin/<context_name>/log/<MMDDhhmm> AutoConfig log file for errors.
6. Source the environment by using the latest environment file generated.
7. Validate the tnsnames.ora and listener.ora files located in $ORACLE_HOME/network/admin and $IAS_ORACLE_HOME/network/admin. In particular, ensure that the correct TNS aliases have been generated for load balancing and failover, and that all the aliases are defined using the virtual hostnames.
8. Verify the dbc file located at $FND_SECURE. Ensure that the parameter APPS_JDBC_URL is configured with all instances in the environment, and that load_balance is set to YES.

Note: If your database and application tiers are running on the same node, and your concurrent managers do not start, follow the relevant steps in My Oracle Support Knowledge Document 434613.1.

9. If you have configured the SCAN listener using step 13 under Section 3.7.1, perform the steps under "Steps to perform on Applications Tier" in Appendix E.

3.8.2 Implement Load Balancing

Implement load balancing for the Oracle Applications database connections:
1. Follow substeps (1) and (2) below to run the context editor (via the Oracle Applications Manager interface) and modify the values of "Tools OH TWO_TASK" (s_tools_twotask), "iAS OH TWO_TASK" (s_weboh_twotask), and "Apps JDBC Connect Alias" (s_apps_jdbc_connect_alias).
1. To load-balance the forms-based Applications database connections, set the value of "Tools OH TWO_TASK" to point to the <alias>_806_balance alias generated in the tnsnames.ora file.
2. To load-balance the Self-Service (HTML-based) Applications database connections, set the values of "iAS OH TWO_TASK" and "Apps JDBC Connect Alias" to point to the <database_name>_balance alias generated in the tnsnames.ora file.
2. Execute AutoConfig by running the command:
$AD_TOP/bin/adconfig.sh contextfile=$APPL_TOP/admin/<context_file>
3. Restart the Oracle Applications processes using the new scripts generated by AutoConfig.
4. Ensure that the value of the profile option "Application Database ID" is set to the dbc file name generated in $FND_SECURE.

Note: If you are adding a new node to the application tier, repeat this sequence of steps to set up load balancing on the new application tier node.
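For reference, the load-balancing aliases generated by AutoConfig have the usual Net Services shape; a hypothetical two-node example (host names and ports below are placeholders, not values from this document):

```
<database_name>_balance =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = YES)
      (FAILOVER = YES)
      (ADDRESS = (PROTOCOL = tcp)(HOST = node1-vip.example.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = tcp)(HOST = node2-vip.example.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = <database_name>)
    )
  )
```

Note that the addresses use the virtual hostnames, as required by step 7 above.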
SQL>shutdown immediate;
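On a RAC system that is already registered with Clusterware, the instances can equivalently be stopped through srvctl rather than SQL*Plus; a sketch with placeholder names:

```shell
# Stop one instance via Clusterware (repeat per node, or omit -i to stop all)
srvctl stop instance -d <db_name> -i <instance_name>
```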
3. Edit $ORACLE_HOME/dbs/<context_name>_ifile.ora and add these parameters:
_lm_global_posts=TRUE _immediate_commit_propagation=TRUE
4. Start the instance on each database node, one by one.
5. Start up the Application tier services on all nodes.
6. Log on to Oracle E-Business Suite 11i as SYSADMIN, and choose the System Administrator responsibility. Navigate to Profile > System, change the profile option 'Concurrent: TM Transport Type' to 'QUEUE', and verify that the transaction manager works across the Oracle RAC instances.
7. Navigate to the Concurrent > Manager > Define screen, and set up the primary and secondary node names for transaction managers.
8. Restart the concurrent managers.

3.9.4 Set Up Load Balancing of Concurrent Processing Tiers

1. Edit the Applications context file (via Oracle Applications Manager), setting the value of Concurrent Manager TWO_TASK to the load balancing alias created in the previous step.
2. On all concurrent processing nodes, run AutoConfig with the command:
$COMMON_TOP/admin/scripts/<context_name>/adautocfg.sh
Section 4: References
My Oracle Support Knowledge Document 745759.1: Oracle E-Business Suite and Oracle Real Application Clusters Documentation Roadmap
My Oracle Support Knowledge Document 165195.1: Using AutoConfig to Manage System Configurations with Oracle E-Business Suite 11i
My Oracle Support Knowledge Document 230672.1: Cloning Oracle Applications Release 11i with Rapid Clone
My Oracle Support Knowledge Document 240575.1: RAC on Linux Best Practices
My Oracle Support Knowledge Document 265633.1: Automatic Storage Management Technical Best Practices
My Oracle Support Knowledge Document 881505.1: Oracle Applications Release 11i with Oracle 11g Release 2
Oracle Applications System Administrator's Guide, Release 11i
Migration to ASM Technical White Paper
Appendix A: Sample rconfig XML file

<n:RConfig xmlns:n="http://www.oracle.com/rconfig"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.oracle.com/rconfig">
<n:ConvertToRAC>
<!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY -->
<n:Convert verify="YES">
Note: The Convert verify option in the ConvertToRAC.xml file can take one of three values:
1. YES: rconfig performs the prerequisite checks and then starts the conversion.
2. NO: rconfig does not perform the prerequisite checks, and starts the conversion directly.
3. ONLY: rconfig only performs the prerequisite checks; it does not start the conversion after completing them.

In order to validate and test the settings specified for converting to RAC with rconfig, it is advisable to execute rconfig using Convert verify="ONLY" prior to carrying out the actual conversion.
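rconfig itself is run from the target database home against the edited XML file; a sketch (the file name is a placeholder for your edited copy of the sample):

```shell
# With Convert verify="ONLY" in the XML, this performs only the prerequisite checks
$ORACLE_HOME/bin/rconfig ConvertToRAC.xml
```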
<!-- Specify OracleHome where the RAC database should be configured. It can be same as SourceDBHome --> <n:TargetDBHome>/oracle/product/11.1.0/db_1</n:TargetDBHome>
<!-- Specify SID of non-RAC database and credential. User with sysdba role is required to perform conversion -->
<!-- Specify the list of nodes that should have RAC instances running. LocalNode should be the first node in this nodelist. -->
<!-- Specify prefix for RAC instances. It can be same as the instance name for non-RAC database or different. The instance number will be attached to this prefix. Instance Prefix tag is optional starting with 11.2. If left empty, it is derived from db_unique_name-->
<n:InstancePrefix>sales</n:InstancePrefix>
<!-- Listener details are no longer needed starting 11.2. Database is registered with default listener and SCAN listener running from Oracle Grid Infrastructure home. -->
<!-- Specify the type of storage to be used by RAC database. Allowable values are CFS and ASM. The non-RAC database should have same storage type. --> <n:SharedStorage type="ASM">
Note: rconfig can also migrate the single instance database to ASM storage. If you want to use this option, specify the ASM parameters as per your environment in the above xml file. The ASM instance name specified above is only the current node ASM instance. Ensure that ASM instances on all the nodes are running and the required diskgroups are mounted on each of them. The ASM disk groups can be identified by issuing the following statement when connected to the ASM instance:
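The query itself fell on a page boundary in this capture; the mounted disk groups can be listed from the ASM instance with a query against the standard v$asm_diskgroup view, for example:

```sql
-- List disk groups visible to this ASM instance and their mount state
SELECT name, state, total_mb, free_mb FROM v$asm_diskgroup;
```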
<!-- Specify Database Area Location to be configured for RAC database. If this field is left empty, current storage will be used for RAC database. For CFS, this field will have directory path. -->
<n:TargetDatabaseArea>+ASMDG</n:TargetDatabaseArea>
Note: rconfig can also migrate the single instance database to ASM storage. If you want to use this path, specify the ASM parameters as per your environment in the above XML file. If you are using CFS for your current database files, specify "NULL" to use the same location, unless you want to switch to another CFS location. If you specify a path for TargetDatabaseArea, rconfig will convert the files to the Oracle Managed Files nomenclature.
<!-- Specify Flash Recovery Area to be configured for RAC database. If this field is left empty, current recovery area of non-RAC database will be configured for RAC database. If current database is not using Recovery Area, the resulting RAC database will not have a recovery area. -->
<n:TargetFlashRecoveryArea>+ASMDG</n:TargetFlashRecoveryArea>
</n:SharedStorage>
</n:Convert>
</n:ConvertToRAC>
</n:RConfig>
6. Enter the cluster name, SCAN name, and SCAN port. Click "Next".
7. Add hostnames and virtual IP names for the nodes in the cluster.
8. Click "SSH Connectivity", then click "Test". If SSH is not established, enter the OS user and password and let the installer set up passwordless connectivity. Click "Test" again, and if successful click "Next".
9. Choose one interface as public, one as private. eth0 should be public; eth1 is usually set up as private. Click "Next".
10. Choose Shared File System. Click "Next".
11. Choose the required level of redundancy, and enter the location for the OCR disk. This must be located on shared storage. Click "Next".
12. Choose the required level of redundancy, and enter the location for the voting disk. This must be located on shared storage. Click "Next".
13. Choose the default of "Do not use" for IPMI. Click "Next".
14. Select an operating system group for the operator and dba accounts. For the purposes of this example installation, choose the same group, such as "dba", for both. Click "Yes" in the popup window that asks you to confirm that the same group should be used for both, then click "Next".
15. Enter the Oracle Base and Oracle Home. The Oracle Home should not be located under the Oracle Base. Click "Next".
16. System checks are now performed. Fix any errors by clicking "Fix and Check Again", or check "Ignore All" and click "Next". If you are not familiar with the possible effects of ignoring errors, it is advisable to fix them.
17. Save the response file for possible future use, then click "Finish" to start the install.
18. You will be required to run various scripts as root during the install. Follow the relevant on-screen instructions.
# $CRS_HOME/bin/srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node <node name>

From the node on which the SCAN listener is running, issue the command:

# lsnrctl status <SCAN listener name from above command>
5. Run AutoConfig on all nodes. Verify that the tnsnames.ora file is created with all instances on all nodes.
6. Create <sid>_<node>_ifile.ora under TNS_ADMIN as a copy of tnsnames.ora, removing all _LOCAL references. Replace the VIP descriptors with the SCAN descriptor.
For example, if the database listener port is 1531 and the SCAN listener port is 1521:

VIP descriptor:

<sid>=
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=<VIP Host>.<DOMAIN>)(PORT=1531))
    (CONNECT_DATA=
      (SERVICE_NAME=<service name>)
      (INSTANCE_NAME=<sid>)
    )
  )

SCAN descriptor:
<sid>=
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=<SCAN Host>.<DOMAIN>)(PORT=1521))
    (CONNECT_DATA=
      (SERVICE_NAME=<service name>)
      (INSTANCE_NAME=<sid>)
    )
  )
7. Ensure that all the aliases other than <SID>_LOCAL are connecting through the SCAN listener.

Steps to perform on Applications Tier

1. Perform the steps in Section 3.8 and run AutoConfig using the local [VIP] listener on 1531. The tnsnames and DBC files will be set to use the local [VIP] listener.
2. Change s_dbport and s_dbhost to the SCAN port and SCAN host.
3. Create <sid>_<node>_ifile.ora under TNS_ADMIN as a copy of tnsnames.ora, removing all FNDFS references. Replace the VIP descriptors with the SCAN descriptor.
4. Edit the context file and change s_apps_jdbc_connect_descriptor to use the SCAN host/port.
5. Change the s_jdbc_connect_descriptor_generation context value to FALSE.
6. Rerun AutoConfig.
7. Perform all the above on all application tier nodes.
8. Ensure that all the aliases and the DBC file are connecting through the SCAN listener.

To Revert SCAN Configuration

To revert AutoConfig to the non-SCAN listener configuration, on the database tier:

1. Remove the <sid>_<node>_ifile.ora under TNS_ADMIN.
2. Set remote_listener to the non-SCAN listener on all nodes.
Change Log

Date: Description

14 Nov, 2011: Update for 11.2.0.3; there are no specific configuration changes for the 11.2.0.3 patchset.
01 Aug, 2011
24 Mar, 2011: Implemented remarks.
17 Mar, 2011
15 Sep, 2010
30 Sep, 2009: Deleted paragraph in 3.7.1 requiring removal of init.ora; added more detail on setting the remote_listener parameter.
16 Sep, 2009: Initial creation.
Knowledge Document 823586.1 by Oracle E-Business Suite Development Copyright 2008, 2009, Oracle.