
Table of Contents

Rac11gR2OnWindows
1. Introduction
1.1. Overview of new concepts in 11gR2 Grid Infrastructure
1.1.1. SCAN
1.1.2. GNS
1.1.3. OCR and Voting on ASM storage
1.1.4. Intelligent Platform Management Interface (IPMI)
1.1.5. Time sync
1.1.6. Clusterware and ASM share the same Oracle Home
1.1.7. Hardware Requirements
1.1.8. Network Hardware Requirements
1.1.9. IP Address Requirements
1.1.10. Installation method
2. Prepare the cluster nodes for Oracle RAC
2.1. User Accounts
2.1.1. User Account changes specifically for Windows 2008
2.1.2. Net Use Test
2.1.3. Remote Registry Connect
2.2. Networking
2.2.1. Network Ping Tests
2.2.2. Network Interface Binding Order (and Protocol Priorities)
2.2.3. Disable DHCP Media Sense
2.2.4. Disable SNP Features
2.3. Stopping Services
2.4. Synchronizing the Time on ALL Nodes
2.5. Environment Variables
2.6. Stage the Oracle Software
2.7. CVU stage check
3. Prepare the shared storage for Oracle RAC
3.1. Shared Disk Layout
3.1.1. Grid Infrastructure Shared Storage
3.1.2. ASM Shared Storage
3.2. Enable Automount
3.3. Clean the Shared Disks
3.4. Create Logical partitions inside Extended partitions
3.4.1. View Created partitions
3.5. List Drive Letters
3.5.1. Remove Drive Letters
3.5.2. List volumes on Second node
3.6. Marking Disk Partitions for use by ASM
3.7. Verify Grid Infrastructure Installation Readiness
4. Oracle Grid Infrastructure Install
4.1. Basic Grid Infrastructure Install (without GNS and IPMI)
5. Grid Infrastructure Home Patching
6. RDBMS Software Install
7. RAC Home Patching
8. Run ASMCA to create diskgroups
9. Run DBCA to create the database
Rac11gR2OnWindows
1. Introduction
1.1. Overview of new concepts in 11gR2 Grid Infrastructure
1.1.1. SCAN
The single client access name (SCAN) is the address used by all clients connecting to the cluster. The SCAN
name is a domain name registered to three IP addresses, either in the domain name service (DNS) or the Grid
Naming Service (GNS). The SCAN name eliminates the need to change clients when nodes are added to or
removed from the cluster. Clients using SCAN names can also access the cluster using EZCONNECT.
The Single Client Access Name (SCAN) is a domain name that resolves to all the addresses allocated
for the SCAN name. Allocate three addresses to the SCAN name. During Oracle grid infrastructure
installation, listeners are created for each of the SCAN addresses, and Oracle grid infrastructure
controls which server responds to a SCAN address request. Provide three IP addresses in the DNS to
use for SCAN name mapping. This ensures high availability.

The SCAN addresses need to be on the same subnet as the VIP addresses for nodes in the cluster.
The SCAN domain name must be unique within your corporate network.
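Once the SCAN has been registered in DNS it can be sanity-checked with nslookup; all three addresses should be returned. The SCAN name, DNS server and addresses below are examples only, and the output layout may differ slightly:
C:\> nslookup docrac-scan.example.com
Server:  dns1.example.com
Address:  192.0.2.10

Name:    docrac-scan.example.com
Addresses:  192.0.2.110, 192.0.2.111, 192.0.2.112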
1.1.2. GNS
In the past, the host and VIP names and addresses were defined in the DNS or locally in a hosts file. GNS can
simplify this setup by using DHCP. To use GNS, DHCP must be configured in the subdomain in which the
cluster resides.
1.1.3. OCR and Voting on ASM storage
The ability to use ASM diskgroups for the storage of Clusterware OCR and Voting disks is a new feature in
the Oracle Database 11g Release 2 Grid Infrastructure. If you choose this option and ASM is not yet
configured, OUI launches ASM configuration assistant to configure ASM and a diskgroup.
1.1.4. Intelligent Platform Management interface (IPMI)
Intelligent Platform Management Interface (IPMI) provides a set of common interfaces to computer hardware
and firmware that administrators can use to monitor system health and manage the system.
With Oracle Database 11g Release 2, Oracle Clusterware can integrate IPMI to provide failure isolation
support and to ensure cluster integrity. You must have the following hardware and software configured to
enable cluster nodes to be managed with IPMI:
Each cluster member node requires a Baseboard Management Controller (BMC) running firmware
compatible with IPMI version 1.5, which supports IPMI over LANs, and configured for remote
control.

Each cluster member node requires an IPMI driver installed.
The cluster requires a management network for IPMI. This can be a shared network, but Oracle
recommends that you configure a dedicated network.

Each cluster node's Ethernet port used by the BMC must be connected to the IPMI management network.
If you intend to use IPMI, then you must have an administration account username and password to
provide when prompted during installation.
1.1.5. Time sync
There is a general requirement for Oracle RAC that the time on all the nodes be the same. With 11gR2 time
synchronization can be performed by the Clusterware using CTSSD (Cluster Time Synchronization Services
Daemon) or by using the Windows Time Service. If the Windows Time Service is being used, it MUST be
configured to prevent the time from being adjusted backwards.
1.1.6. Clusterware and ASM share the same Oracle Home
The clusterware and ASM share the same home, which is therefore called the Grid Infrastructure home (prior to
11gR2, ASM could be installed either in a separate home or in the same Oracle home as the RDBMS).
1.1.7. Hardware Requirements
Physical memory (at least 1.5 gigabytes (GB) of RAM)
An amount of swap space equal to the amount of RAM
Temporary space (at least 1 GB) available in the temporary directory (TEMP)
A processor type (CPU) that is certified with the version of the Oracle software being installed
A minimum display resolution of 1024 x 768, so that Oracle Universal Installer (OUI) displays
correctly

All servers that will be used in the cluster have the same chip architecture, for example, all 32-bit
processors or all 64-bit processors

Disk space for software installation locations. You will need at least 4.5 GB of available disk space
for the Grid home directory, which includes both the binary files for Oracle Clusterware and Oracle
Automatic Storage Management (Oracle ASM) and their associated log files, and at least 4 GB of
available disk space for the Oracle Database home directory.

Shared disk space
An Oracle RAC database is a shared everything database. All data files, control files, redo log files, and the
server parameter file (SPFILE) used by the Oracle RAC database must reside on shared storage that is
accessible by all the Oracle RAC database instances. The Oracle RAC installation that is described in this
guide uses Oracle ASM for the shared storage for Oracle Clusterware and Oracle Database files. The amount
of shared disk space is determined by the size of your database.
1.1.8. Network Hardware Requirements
Each node has at least two network interface cards (NIC), or network adapters.
Public interface names must be the same for all nodes. If the public interface on one node uses the
network adapter 'PublicLAN', then you must configure 'PublicLAN' as the public interface on all
nodes.

You should configure the same private interface names for all nodes as well. If 'PrivateLAN' is the
private interface name for the first node, then 'PrivateLAN' should be the private interface name for
your second node.

For the private network, the end points of all designated interconnect interfaces must be completely
reachable on the network. Every node in the cluster should be able to connect to every private
network interface in the cluster.

The host name of each node must conform to the RFC 952 standard, which permits alphanumeric
characters. Host names using underscores ("_") are not allowed.

1.1.9. IP Address Requirements
One public IP address for each node
One virtual IP address for each node
Three single client access name (SCAN) addresses for the cluster
1.1.10. Installation method
This document details the steps for installing a 2-node Oracle 11gR2 RAC cluster on Windows:
The Oracle Grid Infrastructure Home binaries are installed on the local disk of each of the RAC
nodes.

The files required by Oracle Clusterware (OCR and Voting disks) are stored in ASM.
The installation is explained without GNS and IPMI (additional information for installation with GNS
and IPMI is explained where applicable)

2. Prepare the cluster nodes for Oracle RAC
2.1. User Accounts
The installation should be performed as the Local Administrator; the Local Administrator username and
password MUST be identical on all cluster nodes.
If a domain account is used, this domain account must be explicitly defined as a member of the Local
Administrator group on all cluster nodes.
For Windows 2008:
Open Windows 2008 Server Manager
Expand the Configuration category in the console tree
Expand the Local Users and Groups category in the console tree
Within Groups, open the Administrators group
Add the desired user account as a member of the Administrators group
Click OK to save the changes.
We must now configure and test the installation user's ability to interact with the other cluster nodes.
2.1.1. User Account changes specifically for Windows 2008:
1. Change the elevation prompt behavior for administrators to "Elevate without prompting" to allow user
equivalence to function properly in Windows 2008:
Open a command prompt and type secpol.msc to launch the Security Policy Console management
utility.

From the Local Security Settings console tree, click Local Policies, and then Security Options
Scroll down to and double-click User Account Control: Behavior of the elevation prompt for
administrators.

From the drop-down menu, select: "Elevate without prompting (tasks requesting elevation will
automatically run as elevated without prompting the administrator)"

Click OK to confirm the changes.
Repeat the previous 5 steps on ALL cluster nodes.
2. Ensure that the Administrators group is listed under Manage auditing and security log:
Open a command prompt and type secpol.msc to launch the Security Policy Console management
utility.

Click on Local Policies
Click on User Rights Assignment
Locate and double click the Manage auditing and security log in the listing of User Rights
Assignments.

If the Administrators group is NOT listed in the Local Security Settings tab, add the group now.
Click OK to save the changes (if changes were made).
Repeat the previous 6 steps on ALL cluster nodes.
3. Disable Windows Firewall. When installing Oracle Grid Infrastructure and/or Oracle RAC it is required to
turn off the Windows Firewall. Follow these steps to turn off the Windows Firewall:
Click Start, click Run, type firewall.cpl, and then click OK
In the Firewall Control Panel, click Turn Windows Firewall on or off (upper left-hand corner of
the window).

Choose the Off radio button in the Windows Firewall Settings window and click OK to save the
changes.

Repeat the previous 3 steps on ALL cluster nodes.
After the installation is successful, you can enable the Windows Firewall for the public connections. However,
to ensure correct operation of the Oracle software, you must add certain executables and ports to the Firewall
exception list on all the nodes of a cluster. See Section 5.1.2, "Configure Exceptions for the Windows
Firewall" of Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Microsoft Windows for
details: http://download.oracle.com/docs/cd/E11882_01/install.112/e10817/postinst.htm#CHDJGCEH
NOTE: The Windows Firewall must be disabled on all the nodes in the cluster before performing any
cluster-wide configuration changes, such as:
Adding a node
Deleting a node
Upgrading to a patch release
Applying a one-off patch
If you do not disable the Windows Firewall before performing these actions, then the changes might not be
propagated correctly to all the nodes of the cluster.
2.1.2. Net Use Test
The net use utility can be used to validate the ability to perform the software copy among the cluster nodes.
Open a command prompt
Execute the following (replacing C$ with the appropriate drive letter if necessary). Repeat the
command against every other node in the cluster, substituting the appropriate node name, to ensure
access to every node in the cluster from the local node.

C:\Users\Administrator>net use \\remote node name\C$
The command completed successfully.
Repeat the previous 2 steps on ALL cluster nodes.
2.1.3. Remote Registry Connect
Validate the ability to connect to the remote nodes' registries as follows:
Open a command prompt and type regedit
Within the registry editor menu bar, choose File and select Connect Network Registry
In the Select Computer window enter the remote node name.
Click OK and wait for the remote registry to appear in the tree.
Repeat the previous 4 steps for ALL cluster nodes.
2.2. Networking
NOTE: This section is intended to be used for installations NOT using GNS.
1. Determine your cluster name. The cluster name should satisfy the following conditions:
The cluster name is globally unique throughout your host domain.
The cluster name is at least 1 character long and less than 15 characters long.
The cluster name must consist of the same character set used for host names: single-byte
alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-).

NOTE: It is a requirement that network interfaces used for the Public and Private Interconnect be consistently
named (have the same name) on every node in the cluster. Common practice is to use the names Public and
Private for the interfaces; long names should be avoided and special characters must NOT be used.
2. Determine the public host name for each node in the cluster. For the public host name, use the primary host
name of each node. In other words, use the name displayed by the hostname command for example: racnode1.
It is recommended that NIC teaming is configured. Active/passive is the preferred teaming method
due to its simple configuration.

For Windows 2008:
Perform the following to rename the network interfaces:
Click Start, click Run, type ncpa.cpl, and then click OK.
Determine the intended purpose for each of the interfaces (may need to view the IP configuration)
Right click the interface to be renamed and click rename
Enter the desired name for the interface.
Repeat the previous 4 steps on ALL cluster nodes ensuring that the public and private interfaces have
the same name on every node.

3. Determine the virtual hostname for each node in the cluster. The virtual host name is a public node name
that is used to reroute client requests sent to the node if the node is down. Oracle recommends that you
provide a name in the format <public hostname>-vip, for example: racnode1-vip. The virtual hostname must
meet the following requirements:
The virtual IP address and the network name must not be currently in use.
The virtual IP address must be on the same subnet as your public IP address.
The virtual host name for each node should be registered with your DNS.
4. Determine the private hostname for each node in the cluster. This private hostname does not need to be
resolvable through DNS and should be entered in the hosts file (typically located in:
c:\windows\system32\drivers\etc). A common naming convention for the private hostname is <public
hostname>-priv.
The private IP should NOT be accessible to servers not participating in the local cluster.
The private network should be on standalone dedicated switch(es).
The private network should NOT be part of a larger overall network topology.
The private network should be deployed on Gigabit Ethernet or better.
It is recommended that redundant NICs are configured using teaming. Active/passive is the preferred
teaming method due to its simple configuration.

5. Define a SCAN DNS name for the cluster that resolves to three IP addresses (round-robin). SCAN VIPs
must NOT be in the c:\windows\system32\drivers\etc\hosts file. SCAN VIPs must be resolvable by DNS.
6. Even if you are using a DNS, Oracle recommends that you list the public IP, VIP and private addresses for
each node in the hosts file on each node. Configure the
c:\windows\system32\drivers\etc\hosts file so that it is similar to the following example:
NOTE: The SCAN VIP MUST NOT be in the hosts file. Placing it there would result in only 1 SCAN VIP for the
entire cluster.
#PublicLAN - PUBLIC
192.0.2.100 racnode1.example.com racnode1
192.0.2.101 racnode2.example.com racnode2
#VIP
192.0.2.102 racnode1-vip.example.com racnode1-vip
192.0.2.103 racnode2-vip.example.com racnode2-vip
#PrivateLAN - PRIVATE
172.0.2.100 racnode1-priv
172.0.2.101 racnode2-priv
After you have completed the installation process, configure clients to use the SCAN to access the cluster.
For example, for a cluster named docrac, clients would use docrac-scan to connect to the cluster.
The fully qualified SCAN for the cluster defaults to cluster_name-scan.GNS_subdomain_name, for example
docrac-scan.example.com. The short SCAN
for the cluster is docrac-scan. You can use any name for the SCAN, as long as it is unique within your
network and conforms to the RFC 952 standard.
2.2.1. Network Ping Tests
There are a series of 'ping' tests that should be completed, and then the network adapter binding order should
be checked. You should ensure that the public IP addresses resolve correctly and that the private addresses are
of the form 'nodename-priv' and resolve on both nodes via the hosts file.
* Public Ping test
Pinging Node1 from Node1 should return Node1's public IP address
Pinging Node2 from Node1 should return Node2's public IP address
Pinging Node1 from Node2 should return Node1's public IP address
Pinging Node2 from Node2 should return Node2's public IP address
* Private Ping test
Pinging Node1 private from Node1 should return Node1's private IP address
Pinging Node2 private from Node1 should return Node2's private IP address
Pinging Node1 private from Node2 should return Node1's private IP address
Pinging Node2 private from Node2 should return Node2's private IP address
* VIP Ping test
Pinging the VIP address at this point should fail. VIPs will be activated at the end of the Oracle Clusterware install.
If any of the above tests fail you should fix name/address resolution by updating the DNS or local hosts files
on each node before continuing with the installation.
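For example, assuming the host names used elsewhere in this guide (racnode1, racnode2), the following run from Node1 should return the expected public and private addresses; substitute your own node names:
C:\> ping -n 1 racnode1
C:\> ping -n 1 racnode2
C:\> ping -n 1 racnode1-priv
C:\> ping -n 1 racnode2-priv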
2.2.2. Network Interface Binding Order (and Protocol Priorities)
It is required that the Public interface be listed first in the network interface binding order on ALL cluster
nodes.
For Windows 2008:
Perform the follow tasks to ensure this requirement is met:
Click Start, click Run, type ncpa.cpl, and then click OK.
In the menu bar at the top of the window click Advanced and choose Advanced Settings (for
Windows 2008, if the "Advanced" menu is not showing, press 'Alt' to enable that menu item).

Under the Adapters and Bindings tab use the up arrow to move the Public interface to the top of the
Connections list.

Under the binding order, increase the priority of IPv4 over IPv6
Click OK to save the changes
Repeat the previous 5 steps on ALL cluster nodes
2.2.3. Disable DHCP Media Sense
You should disable media sense. Media Sense allows Windows to uncouple an IP address from a card when
the link to the local switch is lost. You should disable this activity using the registry editor regedit.
Navigate to the key HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters and
right click to create a new value of type DWORD. Make sure that the value is named DisableDHCPMediaSense
and is set to 1. For Windows 2008 you can check the status of DHCP Media Sense
with the command: netsh interface ipv4 show global
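As an alternative to editing the registry interactively, the same value can be created from an elevated command prompt; this is a sketch of the equivalent reg add call for the key shown above:
cmd> reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v DisableDHCPMediaSense /t REG_DWORD /d 1 /f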
2.2.4. Disable SNP Features
On Windows 2003 SP2 and later platforms there are several network issues related to SNP features. These
issues are described in detail in Microsoft KB article 948496. Perform the following tasks to take proactive
action on these potential issues:
Click Start, click Run, type ncpa.cpl, and then click OK.
Right-click a network adapter object, and then click Properties.
Click Configure, and then click the Advanced tab.
In the Property list, click Receive Side Scaling, click Disable in the Value list, and then click OK.
In the Property list, click TCP/IP Offload, click Disable in the Value list, and then click OK.
Repeat steps 2 through 5 for each network adapter object.
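On Windows 2008 the same SNP features can also be disabled globally from an elevated command prompt. This is a sketch based on the netsh options discussed in KB 948496; check the current settings first with 'netsh int tcp show global':
cmd> netsh int tcp set global rss=disabled
cmd> netsh int tcp set global chimney=disabled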
2.3. Stopping Services
There can be issues with some of the services which may already be running on the cluster nodes. In
particular, the Microsoft Distributed Transaction Coordinator (MSDTC) service can interfere with the Oracle
software during installation. It is recommended that this service is stopped and set to manual start using
services.msc on both nodes.
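A command-line equivalent using the built-in sc utility is sketched below; MSDTC is the standard service name, but verify it in services.msc on your systems before running:
cmd> sc stop MSDTC
cmd> sc config MSDTC start= demand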
2.4. Synchronizing the Time on ALL Nodes
There is a general requirement for Oracle RAC that the time on all the nodes be the same. With 11gR2 time
synchronization can be performed by the Clusterware using CTSSD (Cluster Time Synchronization Services
Daemon) or by using the Windows Time Service. If the Windows Time Service is being used, it MUST be
configured to prevent the time from being adjusted backwards. Perform the following steps to ensure the time
is NOT adjusted backwards using Windows Time Service:
Open a command prompt and type regedit
Within the registry editor locate the
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config key.

Set the value for MaxNegPhaseCorrection to 0 and exit the registry editor.
Open a command prompt and execute the following to put the change into effect:
cmd> W32tm /config /update
Repeat steps 1 through 4 for ALL cluster nodes.
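If you prefer the command line, the same registry change can be made with reg add before re-running the w32tm update (a sketch using the key shown above):
cmd> reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxNegPhaseCorrection /t REG_DWORD /d 0 /f
cmd> w32tm /config /update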
2.5. Environment Variables
Set the TEMP and TMP environment variables to a common location that exists on ALL nodes in the cluster.
During installation the Oracle Universal Installer (OUI) will utilize these directories to store temporary copies
of the binaries. If the location is not the same for both variables on ALL cluster nodes the installation will fail.
Most commonly these parameters are set as follows:
TMP=C:\temp
TEMP=C:\temp
For Windows 2008:
To set the TEMP and TMP environment variables:
Log into the server as the user that will perform the installation
Open Computer Properties
Click the Advanced system settings link (on the left under tasks)
Under the Advanced tab, click the Environment Variables button
Modify the TEMP and TMP variables under User variables for Administrator to the desired
setting. Keep in mind, this path must be identical for both TMP and TEMP and they must be set to the
same location on ALL cluster nodes.

Click OK to save the changes.
Repeat steps 1 through 6 for ALL cluster nodes.
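As an alternative to the GUI, the variables can also be set persistently from a command prompt with setx. This sketch assumes C:\temp exists on every node; open a new command prompt afterwards for the change to take effect:
cmd> setx TEMP C:\temp
cmd> setx TMP C:\temp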
2.6. Stage the Oracle Software
It is recommended that you stage the required software onto a local drive on Node 1 of your cluster.
Important: Ensure that you use only 32-bit versions of the Oracle Software on a 32-bit OS and 64-bit versions
of the Oracle Software on a 64-bit OS.
For the Grid Infrastructure (clusterware and ASM) software download:
Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1.0) for Windows
For the RDBMS software download from OTN:
Oracle Database 11g Release 2 (11.2.0.1.0) for Windows
2.7. CVU stage check
Now you can run the CVU to check the state of the cluster prior to the install of the Oracle Software. Check
whether there is a newer version of CVU available on OTN (http://otn.oracle.com/rac) than the one that ships
on the installation media.
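For example, a hardware/OS stage check can be run from the staged CVU location as follows; the node names are illustrative, substitute your own cluster nodes:
cmd> runcluvfy stage -post hwos -n racnode1,racnode2 -verbose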
3. Prepare the shared storage for Oracle RAC
This section describes how to prepare the shared storage for Oracle RAC
1. Shared Disk Layout
2. Enable Automounting of disks on Windows
3. Clean the Shared Disks
4. Create Logical partitions inside Extended partitions
5. Drive Letters
6. View Disks
7. Marking Disk Partitions for use by ASM
8. Verify Clusterware Installation Readiness
3.1. Shared Disk Layout
It is assumed that the two nodes have local disks, labelled C:, primarily for the operating system and the local
Oracle Homes. The Oracle Grid Infrastructure software also resides on the local disks of each node. The
two nodes must also share some central disks. These disks must not have caching enabled at the node level, i.e. if
the HBA drivers support caching of reads/writes it should be disabled. If the SAN supports caching that is visible
to all nodes then this can be enabled.
3.1.1. Grid Infrastructure Shared Storage
With Oracle 11gR2 it is considered a best practice to store the OCR and Voting Disk within ASM and to
maintain the ASM best practice of having no more than 2 diskgroups (Flash Recovery Area and Database
Area). This means that the OCR and Voting disk will be stored along with the database related files. If you are
utilizing external redundancy for your disk groups this means you will have 1 Voting Disk and 1 OCR.
For those who wish to utilize Oracle supplied redundancy for the OCR and Voting disks you could create a
separate (3rd) ASM Diskgroup having a minimum of 3 fail groups (a total of 3 disks). This configuration will
provide 3 multiplexed copies of the Voting Disk and a single OCR which takes on the redundancy of that disk
group (mirrored within ASM). The minimum size of the 3 disks that make up this diskgroup is 1GB. This
diskgroup will also be used to store the ASM SPFILE.
For demonstration purposes within this cookbook, we will be using the more complex of the above
configurations by creating a 3rd diskgroup for storage of the OCR and Voting Disks. Our third disk group will
be normal redundancy allowing for 3 Voting Disks and a single OCR which takes on the redundancy of that
diskgroup.
Disk Number   Volume     Size (MB)   ASM Label Prefix   Diskgroup   Redundancy
Disk 1        Volume 2   1024        OCR_VOTE           OCR_VOTE    Normal
Disk 2        Volume 3   1024        OCR_VOTE           OCR_VOTE    Normal
Disk 3        Volume 4   1024        OCR_VOTE           OCR_VOTE    Normal
3.1.2. ASM Shared Storage
It is recommended that ALL ASM disks within a disk group are of the same size and carry the same
performance characteristics. Whenever possible Oracle also recommends sticking to the SAME (Stripe And
Mirror Everything) methodology by using RAID 1+0. If SAN level redundancy is available, external
redundancy should be used for database storage on ASM.
Number of LUNs (Disks)   RAID Level   Size (GB)   ASM Label Prefix   Diskgroup   Redundancy
4                        1+0          100         DBDATA             DBDATA      External
4                        1+0          100         DBFLASH            DBFLASH     External
In this document we will use the diskpart command line tool to manage these LUNs. You must create logical
drives inside of extended partitions for the disks to be used by Oracle Grid Infrastructure and Oracle ASM.
There must be no drive letters assigned to any of Disks 1 through 10 on any node. For Microsoft Windows
2003 it is possible to use diskmgmt.msc instead of diskpart (as used in the following sections) to create these
partitions. For Microsoft Windows 2008, diskmgmt.msc cannot be used instead of diskpart to create these
partitions.
3.2. Enable Automount
You must enable automounting of disks for them to be visible to Oracle Grid Infrastructure. On each node log
in as someone with Administrator privileges then Click START->RUN and type diskpart
C:\>diskpart
Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: WINNODE1
DISKPART>AUTOMOUNT ENABLE
Repeat the above command on all nodes in the cluster
3.3. Clean the Shared Disks
You may want to clean your shared disks before starting the install. Cleaning will remove data from any
previous failed install. But see a later Appendix for coping with failed installs. On Node1 from within diskpart
you should clean each of the disks. WARNING this will destroy all of the data on the disk. Do not select
the disk containing the operating system or you will have to reinstall the OS
Cleaning the disk scrubs every block on the disk. This may take some time to complete.
DISKPART> list disk
Disk ### Status Size Free Dyn Gpt
-------- ---------- ------- ------- --- ---
Disk 0 Online 34 GB 0 B
Disk 1 Online 1024 MB 1024 MB
Disk 2 Online 1024 MB 1024 MB
Disk 3 Online 1024 MB 1024 MB
Disk 4 Online 1024 MB 1024 MB
Disk 5 Online 1024 MB 1024 MB
Disk 6 Online 100 GB 100 GB
Disk 7 Online 100 GB 100 GB
Disk 8 Online 100 GB 100 GB
Disk 9 Online 100 GB 100 GB
Disk 10 Online 100 GB 100 GB
Now you should clean disks 1 through 10 (not Disk 0, as this is the local C: drive).
DISKPART>select disk 1
Disk 1 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 2
Disk 2 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 3
Disk 3 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 4
Disk 4 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 5
Disk 5 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 6
Disk 6 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 7
Disk 7 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 8
Disk 8 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 9
Disk 9 is now the selected disk.
DISKPART> clean all
DISKPART> select disk 10
Disk 10 is now the selected disk.
DISKPART> clean all
3.4. Create Logical partitions inside Extended partitions
Assuming the disks you are going to use are completely empty you must create an extended partition and then
inside that partition a logical partition. In the following example, for Oracle Grid Infrastructure, I have
dedicated LUNS for each device.
DISKPART> select disk 1
Disk 1 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 2
Disk 2 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 3
Disk 3 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 4
Disk 4 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 5
Disk 5 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 6
Disk 6 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 7
Disk 7 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 8
Disk 8 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 9
Disk 9 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
DISKPART> select disk 10
Disk 10 is now the selected disk.
DISKPART> create part ext
DiskPart succeeded in creating the specified partition.
DISKPART> create part log
DiskPart succeeded in creating the specified partition.
3.4.1. View Created partitions
DISKPART> list disk
Disk ### Status Size Free Dyn Gpt
-------- ---------- ------- ------- --- ---
Disk 0 Online 34 GB 0 B
Disk 1 Online 1024 MB 0 MB
Disk 2 Online 1024 MB 0 MB
Disk 3 Online 1024 MB 0 MB
Disk 4 Online 1024 MB 0 MB
Disk 5 Online 1024 MB 0 MB
Disk 6 Online 100 GB 0 MB
Disk 7 Online 100 GB 0 MB
Disk 8 Online 100 GB 0 MB
Disk 9 Online 100 GB 0 MB
Disk 10 Online 100 GB 0 MB
3.5. List Drive Letters
Diskpart should not add drive letters to the partitions on the local node. The partitions on the other node may
have drive letters assigned. You must remove them. On earlier versions of Windows 2003 a reboot of the
other node will be required for the new partitions to become visible. Windows 2003 SP2 and Windows 2008
do not suffer from this issue.
Using diskpart on Node2
DISKPART> list volume
Volume ### Ltr Label Fs Type Size Status Info
---------- --- ----------- ----- ---------- ------- --------- --------
Volume 0 C NTFS Partition 16 GB Healthy System
Volume 1 D RAW Partition 1023 MB Healthy
Volume 2 E RAW Partition 1023 MB Healthy
Volume 3 F RAW Partition 1023 MB Healthy
Volume 4 G RAW Partition 1023 MB Healthy
Volume 5 H RAW Partition 1023 MB Healthy
Volume 6 I RAW Partition 100 GB Healthy
Volume 7 J RAW Partition 100 GB Healthy
Volume 8 K RAW Partition 100 GB Healthy
Volume 9 L RAW Partition 100 GB Healthy
Volume 10 M RAW Partition 100 GB Healthy
Notice that the volumes are listed in a completely different order compared to the disk list.
3.5.1. Remove Drive Letters
You need to remove the drive letters D, E, F, G, H, I, J, K, L and M; these relate to volumes 1 through 10. Do
NOT remove drive letter C which, in this case, is the local disk and relates to volume 0 (in this example).
DISKPART> select volume 1
Volume 1 is the selected volume.
DISKPART> remov
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 2
Volume 2 is the selected volume.
DISKPART> remov
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 3
Volume 3 is the selected volume.
DISKPART> remov
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 4
Volume 4 is the selected volume.
DISKPART> remov
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 5
Volume 5 is the selected volume.
DISKPART> remov
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 6
Volume 6 is the selected volume.
DISKPART> remov
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 7
Volume 7 is the selected volume.
DISKPART> remov
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 8
Volume 8 is the selected volume.
DISKPART> remov
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 9
Volume 9 is the selected volume.
DISKPART> remov
DiskPart successfully removed the drive letter or mount point.
DISKPART> select volume 10
Volume 10 is the selected volume.
DISKPART> remov
DiskPart successfully removed the drive letter or mount point.
3.5.2. List volumes on Second node
You should check that none of the RAW partitions have drive letters assigned
DISKPART> list vol
Volume ### Ltr Label Fs Type Size Status Info
---------- --- ----------- ----- ---------- ------- --------- --------
Volume 0 C NTFS Partition 16 GB Healthy System
Volume 1 RAW Partition 1023 MB Healthy
Volume 2 RAW Partition 1023 MB Healthy
Volume 3 RAW Partition 1023 MB Healthy
Volume 4 RAW Partition 1023 MB Healthy
Volume 5 RAW Partition 1023 MB Healthy
Volume 6 RAW Partition 100 GB Healthy
Volume 7 RAW Partition 100 GB Healthy
Volume 8 RAW Partition 100 GB Healthy
Volume 9 RAW Partition 100 GB Healthy
Volume 10 RAW Partition 100 GB Healthy
You can now exit diskpart on all nodes
3.6. Marking Disk Partitions for use by ASM
The only partitions that the Oracle Universal Installer acknowledges on Windows systems are logical drives
that are created on top of extended partitions and that have been stamped as candidate ASM disks. Therefore
prior to running the OUI the disks that are to be used by Oracle RAC MUST be stamped using ASM Tool.
ASM Tool is available in two different flavors, command line (asmtool) and graphical (asmtoolg). Both
utilities can be found under the asmtool directory within the Grid Infrastructure installation media. For this
installation, asmtoolg will be used to stamp the ASM disks. The following table outlines the summary of the
disks that will be stamped for ASM usage:
Number of LUNs (Disks)   RAID Level   Size      ASM Label Prefix   Diskgroup   Redundancy
3                        1+0          1024 MB   OCR_VOTE           OCR_VOTE    Normal
4                        1+0          100 GB    DBDATA             DBDATA      External
4                        1+0          100 GB    DBFLASH            DBFLASH     External
Perform this task as follows:
Within Windows Explorer navigate to the asmtool directory within the Grid Infrastructure installation
media and double click the asmtoolg.exe executable.

Within the ASM Tool GUI, select Add or Change Label and click Next.
On the Select Disks screen choose the appropriate disks to be assigned a label and enter an ASM
Label Prefix to make the disks easily identifiable for their intended purpose, size and/or performance
characteristics. After choosing the intended disks and entering the appropriate ASM Label Prefix,
click Next to continue.

Review the summary screen and click Next.
On the final screen, click Finish to update the ASM Disk Labels.
Repeat these steps for all ASM disks that will differ in their label prefix.
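If you prefer the command-line flavor, asmtool can list and stamp the same logical partitions. The sketch below assumes the logical partition on Disk 1 is \Device\Harddisk1\Partition1 and uses OCR_VOTE1 as an example label; confirm the actual device paths with asmtool -list before stamping anything:
cmd> asmtool -list
cmd> asmtool -add \Device\Harddisk1\Partition1 OCR_VOTE1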
3.7. Verify Grid Infrastructure Installation Readiness
Prior to installing Grid Infrastructure it is highly recommended to run the cluster verification utility
(CLUVFY) to verify that the cluster nodes have been properly configured for a successful Oracle Grid
Infrastructure installation. There are various levels at which CLUVFY can be run; at this stage it should be run
in the CRS pre-installation mode. Later in this document CLUVFY will be run in pre dbinst mode to validate
the readiness for the RDBMS software installation.
Though CLUVFY is packaged with the Grid Infrastructure installation media, it is recommended to download
and run the latest version of CLUVFY. The latest version of the CLUVFY utility can be downloaded from:
http://otn.oracle.com/rac
Once the latest version of the CLUVFY has been downloaded and installed, execute it as follows to perform
the Grid Infrastructure pre-installation verification:
Log in to the server on which the installation will be performed as the Local Administrator.
Open a command prompt and run CLUVFY as follows to perform the Oracle Clusterware
pre-installation verification:

cmd> runcluvfy stage -post hwos -n <node_list> [-verbose]
cmd> runcluvfy stage -pre crsinst -n <node_list> [-verbose]
If any errors are encountered, these issues should be investigated and resolved before proceeding with the
installation.
4. Oracle Grid Infrastructure Install
4.1. Basic Grid Infrastructure Install (without GNS and IPMI)
Shutdown all Oracle Processes running on all nodes (not necessary if performing the install on new
servers)

Start the Oracle Universal Installer (OUI) by running setup.exe as the Local Administrator user from
the Clusterware (db directory if using the DVD Media, see step 3) directory on the 11g Release 2
(11.2.0.1) installation media.

If using the DVD installation media (from edelivery.oracle.com) the first screen to appear will be the
Select a Product to Install screen. Choose Oracle Clusterware 11g and click Next to continue. If the
OTN media is used the RDBMS/ASM and Clusterware are separate downloads and the first screen to
be displayed will be the Welcome screen.

On the Select Installation Option screen, choose Install and Configure Grid Infrastructure for a
Cluster and click Next to continue.

On the Select Installation Type screen, choose Advanced Installation and click Next to continue.
Choose the appropriate language on the Select Product Languages screen and click Next to
continue.

On the Grid Plug and Play Information screen perform the following:
Enter the desired Cluster Name; this name should be unique for the enterprise and CANNOT
be changed post installation.

Enter the SCAN Name for the cluster. The SCAN Name must be a DNS entry resolving to 3
IP addresses. The SCAN Name MUST NOT be in the hosts file.

Enter the port number for the SCAN Listener; this port defaults to 1521.
Uncheck the Configure GNS checkbox.
Click Next to continue.

Add all of the cluster nodes' hostnames and Virtual IP hostnames on the Cluster Node Information
screen. By default the OUI only knows about the local node; additional nodes can be added using the
Add button.

On the Specify Network Interface Usage screen, make sure that the public and private interfaces are
properly specified. Make the appropriate corrections on this screen and click Next to continue.

NOTE
The public and private interface names MUST be consistent across the cluster, so if
Public is the public interface and Private is the private interface on one node, the same
MUST be true on all of the other nodes in the cluster.
Choose Automatic Storage Management (ASM) on the Storage Option Information screen and click
Next to create the ASM diskgroup for Grid Infrastructure.

On the Create ASM Disk Group screen perform the following:
Enter the Disk Group Name that will store the OCR and Voting Disks (e.g. OCR_VOTE)
Choose Normal for the Redundancy level.
Select the disks that were previously designated for the OCR and Voting Disks.

NOTE
If no Candidate disks are available, the disks have not yet been stamped for use by
ASM. To stamp the disks review the instructions in the Storage Prerequisites section
of this document and click the Stamp Disks button on this screen.
Click Next to continue.
Enter the appropriate passwords for the SYS and ASMSNMP users of ASM on the Specify ASM
Passwords screen and click Next to continue.

On the Failure Isolation Support screen choose Do not use Intelligent Platform Management
Interface (IPMI) and click Next to continue.

Specify the location for Oracle Base (e.g. d:\app\oracle) and the Grid Infrastructure Installation (e.g.
d:\OraGrid) on the Specify Installation Location screen. Click Next to allow the OUI to perform
the prerequisite checks on the target nodes.

NOTE
Continuing with failed prerequisites may result in a failed installation; therefore it is
recommended that failed prerequisite checks be resolved before continuing.
After the prerequisite checks have successfully completed, review the summary of the pending
installation and click Finish to install and configure Grid Infrastructure.

On the Finish screen click Close to exit the OUI.
Once the installation has completed, check the status of the CRS resources as follows:
NOTE
All resources should report as online with the exception of GSD and OC4J. GSD will only
be online if Grid Infrastructure is managing a 9i database, and OC4J is reserved for use in a
future release. Though these resources are offline it is NOT supported to remove them.
cmd> %GI_HOME%\bin\crsctl stat res -t
The output of the above command will be similar to the following:
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE ratwin01
ONLINE ONLINE ratwin02
ora.OCR_VOTE.dg
ONLINE ONLINE ratwin01
ONLINE ONLINE ratwin02
ora.asm
ONLINE ONLINE ratwin01 Started
ONLINE ONLINE ratwin02 Started
ora.eons
ONLINE ONLINE ratwin01
ONLINE ONLINE ratwin02
ora.gsd
OFFLINE OFFLINE ratwin01
OFFLINE OFFLINE ratwin02
ora.net1.network
ONLINE ONLINE ratwin01
ONLINE ONLINE ratwin02
ora.ons
ONLINE ONLINE ratwin01
ONLINE ONLINE ratwin02
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE ratwin02
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE ratwin01
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE ratwin01
ora.oc4j
1 OFFLINE OFFLINE
ora.ratwin01.vip
1 ONLINE ONLINE ratwin01
ora.ratwin02.vip
1 ONLINE ONLINE ratwin02
ora.scan1.vip
1 ONLINE ONLINE ratwin02
ora.scan2.vip
1 ONLINE ONLINE ratwin01
ora.scan3.vip
1 ONLINE ONLINE ratwin01
5. Grid Infrastructure Home Patching
This Chapter is a placeholder
6. RDBMS Software Install
Prior to installing the Database Software (RDBMS) it is highly recommended to run the cluster verification
utility (CLUVFY) to verify that Grid Infrastructure has been properly installed and the cluster nodes have
been properly configured to support the Database Software installation. In order to achieve this CLUVFY
must be run in pre dbinst mode. The following outlines how to achieve this task:
cmd> runcluvfy stage -pre dbinst -n <node_list> -verbose
Start the OUI by running the setup.exe command as the Local Administrator user from the DB
directory of the Oracle Database 11g Release 2 (11.2.0.1) installation media and click Next to begin
the installation process.

If using the DVD installation media (from edelivery.oracle.com) the first screen to appear will be the
Select a Product to Install screen. Choose Oracle Database 11g and click Next to continue. If the OTN
media is used the RDBMS/ASM and Clusterware are separate downloads and the first screen to be
displayed will be the Installation Type screen.

The first OUI screen will prompt for an email address to allow you to be notified of security issues
and to enable Oracle Configuration Manager. If you do NOT wish to be notified or do not want to use Oracle
Configuration Manager, leave the email box blank and uncheck the check box. After entering the
desired information click Next to continue.

Choose Install database software only on the Select Installation Option screen and click Next
to continue.

On the Grid Installation Options screen, choose Real Application Clusters database installation
and select ALL nodes for the software installation. Click Next to continue.

Choose the appropriate language on the Select Product Languages screen and click Next to
continue.

On the Select Database Edition screen choose Enterprise Edition and click Next to continue.
NOTE
If there is a need for specific database options to be installed or not installed, these options
can be chosen by clicking the Select Options button.
Specify the location for Oracle Base (e.g. d:\app\oracle) and the database software installation (e.g.
d:\app\oracle\product\11.2.0\db_1) on the Specify Installation Location screen. Click Next to
allow the OUI to perform the prerequisite checks on the target nodes.

NOTE
It is recommended that the same Oracle Base location is used for the database installation that
was used for the Grid Infrastructure installation.
NOTE
Continuing with failed prerequisites may result in a failed installation; therefore it is
recommended that failed prerequisite checks be resolved before continuing.
After the prerequisite checks have successfully completed, review the summary of the pending
installation and click Finish to install the database software.

On the Finish screen click Close to exit the OUI.
7. RAC Home Patching
This Chapter is a placeholder
8. Run ASMCA to create diskgroups
Prior to creating a database on the cluster, the ASM diskgroups that will house the database must be created.
In an earlier chapter, the ASM disks for the database diskgroups were stamped for ASM usage. We will now
use the ASM Configuration Assistant to create the diskgroups:
Run the ASM Configuration Assistant (ASMCA) from the Grid Infrastructure home by executing the following:
cmd> %GI_HOME%\bin\asmca
After launching ASMCA, click the Disk Groups tab.
While on the Disk Groups tab, click the Create button to display the Create Disk Group window.
NOTE: To reduce the complexity of managing ASM and its diskgroups, Oracle recommends that generally
no more than two diskgroups be maintained for database storage, a Database Area diskgroup and a Flash
Recovery Area diskgroup. The Database Area diskgroup will house active database files such as datafiles,
control files, online redo logs, and change tracking files (used in incremental backups). The Flash
Recovery Area will house recovery-related files such as multiplexed copies of the current control
file and online redo logs, archived redo logs, backup sets, and flashback log files.
Within the Create Disk Group window perform the following:
Enter the desired Disk Group Name (e.g. DBDATA or DBFLASH)
Choose External Redundancy (assuming redundancy is provided at the SAN level).
Select the candidate disks to include in the Disk Group.

NOTE: If no Candidate disks are available, the disks have not yet been stamped for use by ASM. To stamp
the disks review the instructions in the 'Rac11gR2WindowsPrepareDisk' chapter of this document and click
the Stamp Disks button on this screen.
Click OK to create the Disk Group.

Repeat the previous two steps to create all necessary diskgroups.
Once all the necessary Disk Groups have been created click Exit to exit from ASMCA.
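Optionally, the new diskgroups can be verified from the command line with asmcmd, run from the Grid Infrastructure home with the ASM environment set; lsdg lists each mounted diskgroup with its redundancy and free space:
cmd> %GI_HOME%\bin\asmcmd lsdg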
Note:
It is an Oracle best practice to have an OCR mirror stored in a second disk group. To follow this
recommendation add an OCR mirror. Note that you can only have one OCR per diskgroup.
Action:
To add an OCR mirror to an Oracle ASM disk group, ensure that the Oracle Clusterware stack is running and
run the following commands as administrator:
D:\OraGrid\BIN>ocrconfig -add +DBDATA
D:\OraGrid\BIN>ocrcheck -config
9. Run DBCA to create the database
To help to verify that the system is prepared to successfully create a RAC database, use the following Cluster
Verification Utility command syntax:
cmd> runcluvfy stage -pre dbcfg -n all -d c:\app\11.2.0\grid -verbose
Perform the following to create an 11gR2 RAC database on the cluster:
Run the Database Configuration Assistant (DBCA) from the Database (RDBMS) home by executing the following:
cmd> %ORACLE_HOME%\bin\dbca
On the Welcome screen, select Oracle Real Application Clusters and click Next
On the Operation screen, select Create Database and click Next
The Database Templates screen will now be displayed. Select the General Purpose or Transaction
Processing database template and click Next to continue.

On the Database Identification screen perform the following:
Select the Admin Managed configuration type.
Enter the desired Global Database Name.
Enter the desired SID prefix.
Select ALL the nodes in the cluster.
Click Next to continue.

On the Management Options screen, choose Configure Enterprise Manager. Once Grid Control
has been installed on the system this option may be deselected to allow the database to be managed by
Grid Control. Click Next to continue.

Enter the appropriate database credentials for the default user accounts and click Next to continue.
On the Database File Locations screen perform the following:
Choose Automatic Storage Management (ASM).
Select Use Oracle-Managed Files and specify the ASM Disk Group that will house the
database files (e.g. +DBDATA)

Click Next and enter the ASMSNMP password when prompted.
Click OK after entering the ASMSNMP password to continue.

For the Recovery Configuration, select Specify Flash Recovery Area, enter +DBFLASH for the
location and choose an appropriate size. It is also recommended to select Enable Archiving at this
point. Click Next to continue.

On the Database Content screen, you are able to create the sample schemas. Check the check box if the
sample schemas are to be loaded into the database and click Next to continue.

Enter the desired memory configuration and initialization parameters on the Initialization Parameters
screen and click Next to continue.

On the Database Storage screen, review the file layout and click Next to continue.
NOTE: At this point you may want to increase the number of redo logs per thread to 3 and increase the size
of the logs from the default of 50MB.
On the last screen, ensure that Create Database is checked and click Finish to review the
summary of the pending database creation.

Click OK on the Summary window to create the database.
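Once DBCA completes, the state of the new database and its instances can be checked from any node; replace ORCL with the Global Database Name you chose:
cmd> srvctl status database -d ORCL
cmd> %GI_HOME%\bin\crsctl stat res -t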
