When one computer is not powerful enough for an Oracle database, the solution is to use
several computers for the same database. In this case we still have one database, but
several computers access it. As you probably know, the database is no more than a set of
files used for storing data and managing that data. To access that data we need processes,
plus the memory used by these processes to accomplish their tasks. Together these form an
instance. So one or more instances access the database. The database itself always lives on
the disks (parts of it are temporarily loaded into memory for database management and for
application data).
When several instances are used to access the same data (database), we need to put those
instances in a cluster. The clusterware ensures that the data is managed correctly
(for instance, the same data is not modified at the same time by 2 users, even if the users
access the database through 2 or more instances). The clusterware can be bought from a
vendor other than the database vendor. Oracle offers a solution for the clusterware as well;
in that case we speak about Oracle Clusterware. When the Oracle database is installed on a
clusterware (Oracle's or not) we speak about Oracle RAC, or Oracle Real Application Clusters.
Oracle RAC and Oracle Clusterware are not necessarily the same thing: the Oracle RAC
installation includes the Oracle Clusterware installation.
Here is an image which shows how Oracle Real Application Clusters (Oracle RAC) is
working:
- The clusterware is installed on each node (in an Oracle Home) and on the
shared disks (the voting disks and the OCR file).
- The database software is installed on each node of the cluster, and the database
storage is on the shared disks.
3. What kind of storage can we use for the shared clusterware files?
- OCFS (Release 1 or 2)
- raw devices
- third-party cluster file systems such as GPFS or Veritas
4. What kind of storage can we use for the RAC database storage?
- OCFS (Release 1 or 2)
- ASM
- raw devices
- third-party cluster file systems such as GPFS or Veritas
5. What is a CFS?
A Cluster File System (CFS) is a file system that may be accessed (read and
write) by all members of a cluster at the same time. This implies that all
members of the cluster have the same view of the file system.
6. What is OCFS2?
OCFS2 is the Oracle Cluster File System (version 2), which can be used
for Oracle Real Application Clusters.
What is a raw device?
A raw device is a disk drive that does not yet have a file system set up. Raw
devices are used for Real Application Clusters because they enable the sharing
of disks.
A CFS offers:
- Simpler management
- Use of Oracle Managed Files with RAC
- Single Oracle Software installation
- Autoextend enabled on Oracle datafiles
- Uniform accessibility to archive logs in case of physical node failure
- With Oracle_Home on CFS, when you apply Oracle patches, CFS guarantees
that the updated Oracle_Home is visible to all nodes in the cluster.
Oracle RAC 10g Release 1 introduced Oracle Cluster Ready Services (CRS), a
platform-independent set of system services for cluster environments. In
Release 2, Oracle has renamed this product to Oracle Clusterware.
5. What are the restrictions on the SID with a RAC database? Is it limited to
5 characters?
The SID prefix in 10g Release 1 and prior versions was restricted to five
characters by the install/config tools, so that an ORACLE_SID of up to a maximum
of 5+3=8 characters could be supported in a RAC environment. The SID prefix limit
is relaxed to 8 characters in 10g Release 2; see bug 4024251 for more
information.
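As a minimal sketch of the rule above (the `PROD` prefix and the appended instance number are illustrative assumptions, not from the original text):

```shell
# Sketch: validate a proposed RAC ORACLE_SID against the 10gR2 prefix limit.
sid_prefix="PROD"       # hypothetical SID prefix chosen by the DBA
instance_no=1           # RAC appends the instance number to the prefix
oracle_sid="${sid_prefix}${instance_no}"

# 10g Release 2 relaxes the prefix limit to 8 characters (see bug 4024251).
if [ "${#sid_prefix}" -le 8 ]; then
  echo "OK: ORACLE_SID ${oracle_sid} (prefix is ${#sid_prefix} chars)"
else
  echo "Prefix too long for a 10gR2 RAC environment"
fi
```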
7. Are there any issues for the interconnect when sharing the same switch as
the public network by using a VLAN to separate the network?
RAC and Clusterware deployment best practices suggest that the
interconnect (private connection) be deployed on a stand-alone, physically
separate, dedicated switch. On a big shared network the connections could be
unstable.
The Cluster Verification Utility (CVU) is a validation tool that you can use to
check all the important components that need to be verified at different
stages of deployment in a RAC environment.
11. What versions of the database can I use the cluster verification utility
(cluvfy) with?
The cluster verification utility was released with Oracle Database 10g Release 2
but can also be used with Oracle Database 10g Release 1.
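A hedged sketch of typical cluvfy invocations (the node names are hypothetical, and the commands assume an installed or staged Grid/CRS home):

```shell
# Pre-installation check before installing Oracle Clusterware on two nodes
# (runcluvfy.sh is the copy shipped on the installation media)
./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

# Verify node connectivity (public and private interfaces) on all nodes
cluvfy comp nodecon -n all -verbose

# Post-installation check of the clusterware stack
cluvfy stage -post crsinst -n node1,node2
```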
Yes. When certified, you can use vendor clusterware; however, you must still
install and use Oracle Clusterware for RAC. The best practice is to let Oracle
Clusterware manage RAC. For details see Metalink Note 332257.1 and, for
Veritas SFRAC, Note 397460.1.
No.
The hangcheck timer regularly checks the health of the system. If the system
hangs or stops, the node is restarted automatically.
There are 2 key parameters for this module:
-> hangcheck-tick: this parameter defines the period of time between checks
of system health. The default value is 60 seconds; Oracle recommends
setting it to 30 seconds.
-> hangcheck-margin: this defines the maximum hang delay that should be
tolerated before hangcheck-timer resets the RAC node.
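The two parameters combine into the effective reset threshold. A small sketch, assuming the commonly recommended 10g values (tick=30, margin=180; the margin value is an assumption, not stated above):

```shell
# Sketch: hangcheck-timer tuning. The module itself would be loaded as root:
#   insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
# (shown as a comment; it cannot run outside a RAC node)
hangcheck_tick=30      # seconds between health checks
hangcheck_margin=180   # maximum tolerated hang delay, in seconds

# A node hung longer than tick + margin seconds is reset by the module.
echo "node reset after $((hangcheck_tick + hangcheck_margin)) seconds of hang"
```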
15. Is the hangcheck timer still needed with Oracle RAC 10g?
Yes.
For optimal performance, you should only put the following files on Linux
OCFS2:
- Datafiles
- Control Files
- Redo Logs
- Archive Logs
- Shared Configuration File (OCR)
- Voting File
- SPFILE
17. Is it possible to use ASM for the OCR and voting disk?
No, the OCR and voting disk must be on raw devices or on a CFS (cluster file system).
18. Can I change the name of my cluster after I have created it when I am
using Oracle Clusterware?
No, you must properly uninstall Oracle Clusterware and then re-install.
The Cluster Ready Services Daemon (ora.crsd) manages the resources through its agents:
-> oraagent -> ora.LISTENER_SCAN.lsnr, ora.ons, ora.eons, ora.asm, ora.DB.db
-> orarootagent -> ora.nodename.vip, ora.net1.network, ora.gns.vip, ora.gnsd, ora.SCANn.vip
Some resources are owned by root (in the original diagram these were written in
blue and bold); the other resources are owned by oracle (all this on a UNIX
environment). When a resource is managed by root, we need to run the
crsctl command as root or oracle.
Clusterware Resource Status Check
--------------------------------------------------------------------------------
NAME                 TARGET   STATE    SERVER       STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
                     ONLINE   ONLINE   tzdev1rac
                     ONLINE   ONLINE   tzdev2rac
ora.asm
                     ONLINE   ONLINE   tzdev1rac
                     ONLINE   ONLINE   tzdev2rac
ora.eons
                     ONLINE   ONLINE   tzdev1rac
                     ONLINE   ONLINE   tzdev2rac
ora.gsd
                     OFFLINE  OFFLINE  tzdev1rac
                     OFFLINE  OFFLINE  tzdev2rac
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.tzdev2rac.vip
      1              ONLINE   ONLINE   tzdev2rac
crsctl start has -> starts all the clusterware services/resources (including
the database server and the listener);
crsctl stop has -> stops all the clusterware services/resources (including
the database server and the listener);
crsctl enable has -> enables Oracle High Availability Services autostart;
crsctl disable has -> disables Oracle High Availability Services autostart;
crsctl config has -> checks whether Oracle High Availability Services autostart is
enabled/disabled.
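A usage sketch of these commands on a node (they require an installed Oracle Clusterware stack and are shown here only as illustration):

```shell
# Run as root (or oracle, for resources owned by oracle).
crsctl check has        # is the High Availability Services stack up?
crsctl config has       # is autostart enabled or disabled?
crsctl stat res -t      # tabular status of all clusterware resources
crsctl stop has         # stop the whole stack on this node
crsctl start has        # start it again
```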
All the resources can be started in one of two modes:
reboot mode -> when crsd starts, all the resources are restarted.
restart mode -> when crsd starts, the resources are brought back to the state
they were in before the shutdown.
CRS can also be installed on a cluster where a 3rd-party clusterware is
integrated (in that case there are 2 clusterwares on the cluster).
COMMENT:
In order to start crsd we need:
- the public interface, the private interface and the virtual IP
(VIP) must be up and running;
- these IPs must be able to ping each other.
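The reachability prerequisite can be checked with a small script; the IP list below is a placeholder (127.0.0.1), to be replaced with the real public, private and VIP addresses:

```shell
# Sketch: check that the addresses crsd depends on answer to ping.
ips="127.0.0.1"   # placeholder; e.g. "192.168.0.1 10.0.0.1 192.168.0.101"
for ip in $ips; do
  if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
    echo "$ip reachable"
  else
    echo "$ip NOT reachable"
  fi
done
```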