
Cognizant 20-20 Insights

Virtualizing Oracle:
Oracle RAC on VMware vSphere 4
Executive Summary
While most databases running in a VMware environment benefit from the increased reliability of a virtual environment, complete failover still requires database clustering. Yet traditional clustering methods that use Raw Device Mappings (RDMs) have generally achieved redundancy at the expense of the many benefits that come from running in virtual environments.

Recent advances in the capabilities of VMware vSphere have opened the door to new clustering methods. These methods enable individual virtual machines (VMs) in a database cluster to be migrated via VMware vMotion from one ESX host to another, creating an opportunity to synergistically combine the natural resiliency of database clusters with the high-availability and load-balancing properties of VMware virtual environments.

The net result is a high-performance database system with greater reliability than could otherwise be achieved through traditional clustering methods on either a physical or virtual infrastructure. We have delivered Oracle RAC on VMware to a leading entertainment/communications provider, an industry first.

Database Virtualization: Getting Started
The fundamentals of running database systems such as Oracle in a virtual environment have become increasingly well-established with newer releases of VMware vSphere. Advances in the capabilities and overall performance of VMware have put to rest the arguments about running high-performance applications as VMs.

However, the current release of VMware vSphere can provide continuous availability through VMware Fault Tolerance only for single-vCPU systems, and then only in limited configurations. vSphere is not yet able to provide fault tolerance for multi-vCPU systems, which are often needed to meet the demands of high-performance databases and other Tier 1 platforms. Thus, concerns remain around enabling high availability on virtual machines with more than one virtual CPU, along with other properties that are not yet supported by VMware Fault Tolerance. Organizations with enterprise-class database platforms that require mission-critical availability or carrier-grade stability must find other ways to meet this need in a virtual environment.

As a result, traditional database clustering is still required for mission-critical, high-availability and high-performance compute capacity. Yet, when using traditional methods, clustering virtual machines in VMware leads to another limitation. The individual nodes in a typical cluster — whether these nodes are running on physical, virtual or even mixed architectures — require access to shared data. These shared drives are used for storing information common to all systems, as well as for keeping all of the nodes in a given cluster coordinated (voting and quorum drives).



In VMware, traditional VM clustering methods have required the use of RDMs on a shared Fibre Channel or iSCSI storage system. When used in this way, RDMs introduce several limitations in virtual infrastructure environments:

• RDMs are often difficult to back up and restore using traditional VMware backup methods, particularly if they are physical as opposed to virtual RDMs (vRDMs).
• RDMs, when used for voting and quorum drives, require VMs to turn on a feature called SCSI Bus Sharing. This feature is incompatible with certain key VMware technologies, the most important of which is VMware vMotion, which enables a VM to be migrated from one ESX host to another with no downtime (also called live migration).

As a result, a VM that is used in traditional clustering is always tied to a dedicated ESX host. It cannot be moved to another ESX host without incurring some amount of downtime. This lack of mobility makes other key features that rely on VMware vMotion technology unavailable, such as VMware Distributed Resource Scheduler (DRS).

The end result is that workloads within a traditional, RDM-based VMware cluster are more difficult to load-balance across a DRS cluster. Further, the primary method used to ensure high availability for a database cluster is to use multiple VMs in the cluster itself — just as multiple physical servers would do in a physical cluster. VMware is unable to contribute to or enhance this capability in any meaningful way, at least for the foreseeable future. While VMware High Availability (HA) can automatically restart a failed VM in a database cluster, it is unable to follow the additional load-balancing rules provided by DRS as part of that process. Thus, the potential for system performance issues in the event of an HA restart, due to either VM or ESX host failure, is increased.

Oracle Support
On November 8, 2010, Oracle announced a change to its support statements for all Oracle products when running on VMware. Prior to this announcement, Oracle would only provide support on VMware when an issue could first be duplicated on a physical infrastructure. This effectively kept some companies from virtualizing Oracle products and applications, as many in the user community already knew that specific Oracle configurations worked well without such support. More recently (but still prior to this announcement), Oracle changed its stance on supporting virtualized applications when running on its own hypervisor product. In all of these cases, Oracle RAC was expressly excluded from being supported.

The recent Oracle support statement changed things dramatically.1 The key portion of that change is as follows:

    If a problem is a known Oracle issue, Oracle support will recommend the appropriate solution on the native OS. If that solution does not work in the VMware virtualized environment, the customer will be referred to VMware for support. When the customer can demonstrate that the Oracle solution does not work when running on the native OS, Oracle will resume support, including logging a bug with Oracle Development for investigation if required. If the problem is determined not to be a known Oracle issue, we (Oracle) will refer the customer to VMware for support. When the customer can demonstrate that the issue occurs when running on the native OS, Oracle will resume support, including logging a bug with Oracle Development for investigation if required.

    NOTE: Oracle has not certified any of its products on VMware. For Oracle RAC, Oracle will only accept Service Requests as described in this note on Oracle RAC 11.2.0.2 and later releases.

In short, Oracle RAC 11gR2 is now supported when running in VMware environments, starting with version 11.2.0.2. In making this statement, it is also clear that Oracle appropriately expects VMware to provide support for VMware in this configuration.

The issue of certification is worth noting. As a practical measure, Oracle would not be expected to certify its products on VMware because VMware operates at the same level in the overall stack, with respect to Oracle products, as physical hardware does. It should be noted that VMware is no different in this case than any other hardware vendor (HP, Dell, IBM, etc.). Oracle also does not certify its products on any specific hardware platforms. Thus, the opportunity and incentive to migrate Oracle systems to virtualized environments has never been greater.



Breaking Free of RDM
Overcoming this last barrier for most mission-critical applications is key. By eliminating the need for RDMs in VM clusters, the high availability of traditional database clusters can be combined synergistically with the built-in features of VMware vSphere to provide a high-performance database cluster environment with even greater resiliency than would be possible on either physical infrastructure or via traditional VMware high-availability methods. Thanks to the performance enhancements in VMware vSphere 4, this final barrier can now be broken by using the iSCSI or NFS protocol in-guest to take the place of RDMs.

Take Advantage of Improved vSphere Performance
VMware vSphere 4 ushered in key performance enhancements across the board for virtual machines. These enhancements allow vSphere 4 to easily meet and exceed the compute capacity needed to run high-performance, Tier 1 applications. In particular, the enhancements to the iSCSI and networking stacks have yielded I/O and efficiency gains of as much as 50% and a factor of 10, respectively. As a result, both in-guest iSCSI and NFS can be used to access shared drives, as needed.

In virtual infrastructure environments leveraging converged 10-GB Ethernet networks, the options and benefits are significant. However, traditional Fibre Channel environments can also take advantage of these benefits through the use of an iSCSI "gateway" VM.

When combining multiple systems with the sophistication of virtual infrastructure and Tier 1 database clusters, a significant amount of feature overlap can occur. Managing and eliminating performance bottlenecks requires a clear understanding of how these products interact with virtual infrastructure environments and with each other. While this can sometimes look complex, how and why certain components provide performance boosts can be broken down into logical pieces that are easy to understand.

As an analogy, a sound engineer's mixing board has dozens of control knobs, levers and switches, which can appear daunting to manage. But there is a logical flow to the system. Sound from a microphone or instrument is first filtered into the top of the mixing board on one of several channels through a "trim" control. The sound is then "mixed" in a variety of ways (treble, bass, echo effects, etc.) as it travels down from the top to the bottom of the mixing board, where another lever, called a "fader," controls how much sound comes out on that instrument's channel. The processed sound from each channel is then sent to a master volume control, which is used to set the overall volume for all of the instruments and voices. Understanding this flow lets a sound engineer use his highly skilled ears to optimize the sound of the music.

There is a similar logical layout and flow to how physical infrastructure, VMware and Oracle database components interact. By knowing how and where data flows through the network, how CPU and memory are assigned, and how storage is accessed, a skilled architect or administrator has a similar framework for optimizing performance. Balancing these for maximum performance still requires skill and knowledge, but the concepts of what each component does and how it works can be easily understood.

Virtual Infrastructure Architecture
Virtual infrastructure environments that are based on converged network topologies such as 10-GB Ethernet are especially friendly to virtualized Tier 1 applications such as Oracle RAC. This is due to the available network bandwidth and the use of IP-based storage protocols (iSCSI and NFS). These architectures allow the shared drives needed for VM clusters to be hosted directly from the physical storage system. As a result, they are able to take better advantage of the hardware infrastructure that supports the virtual environment.

However, this doesn't rule out the ability to also use traditional Fibre Channel storage systems. Here, a dedicated VM is used to share the Fibre Channel storage-area network (SAN) storage using an IP SAN protocol. While this particular method of sharing involves an additional step and requires additional tuning to achieve optimum performance, it has the advantage that all the storage for the VM clusters is kept in one or more virtual disk files stored on a VMware VMFS data store. This provides a consistent method of storage across all systems that can be backed up using the same virtual machine backup methods.



The primary difference between Fibre Channel and IP-based storage solutions is solely that a "gateway" VM is required in Fibre Channel SAN environments. The solution provides the same benefits in both storage configurations.

Since both configurations allow clustering without the need for SCSI Bus Sharing, all of the VMs — including iSCSI or NFS "gateway" VMs — can be moved between the various ESX hosts in a DRS cluster via vMotion. This enables clusters to be freely configured such that the benefits of HA and DRS can be synergistically added to the failover capabilities inherent in Oracle RAC clusters.

Virtual Machine Architecture
The virtual machine configuration for the individual Oracle RAC nodes relies on the in-guest iSCSI or in-guest NFS protocol for all shared drives. This means that each virtual machine connects directly to an iSCSI or NFS shared drive for all data that must be held in common. This connection uses the same protocols and security mechanisms that would be used if these VMs were instead servers in a purely physical environment.

With an appropriate underlying infrastructure, iSCSI and NFS deliver similar performance, with unique benefits and drawbacks that are well known among storage administrators and architects. The decision of which to choose is driven by available skills, the layout of the underlying infrastructure, company security policies and even personal tastes and preferences. As such, the examples used in this document are based on iSCSI, but they can also be readily applied to NFS configurations.

Configuring a Virtualized Oracle System
Properly sizing the VMs that make up a RAC cluster and the gateway VM (if implemented) is critical to maximizing performance. An example VM configuration for Oracle RAC nodes might have the following characteristics:

• Four vCPUs.
• 12-GB RAM.
• 50-GB primary disk (can be thin-provisioned).
• Two vNICs (VMXNET3 driver): one public and one private.
• Current Linux distribution (CentOS, Ubuntu and Fedora have been successfully tested; Red Hat Enterprise Linux, SUSE Linux and Oracle Enterprise Linux have been used in other Oracle database solutions and should work, as well).

For those using an iSCSI gateway, the VM configuration might look something like this:

• Two vCPUs.
• 4-GB RAM.
• 10-GB primary disk (can be thin-provisioned).
• 100-GB secondary disk, thick-provisioned (to be shared via iSCSI).
• Two vNICs (VMXNET3 driver): one for administration and one for the iSCSI network.
• Current Linux distribution (CentOS, Ubuntu and Fedora have been successfully tested; Red Hat Enterprise Linux, SUSE Linux and Oracle Enterprise Linux have been used in similar solutions and should work, as well).

Further, the example VMs as configured in this document make use of a 10-GB Ethernet converged network for both network and storage access. When configuring for Gigabit Ethernet networks, additional dedicated network ports and interfaces at the physical layer will be required.

The above example configuration is intended to support up to a medium-sized Oracle database for development, small-scale production, and secondary support for enterprise-class, large-scale database solutions such as Oracle Exadata. This configuration should be modified as necessary to support alternate use cases.

iSCSI Tuning
There are a variety of options for an appropriate iSCSI gateway VM, most of which are some variant of Linux. These include Red Hat Enterprise Linux, Ubuntu, SUSE, Fedora and FreeNAS, to name a few. All have an iSCSI target capability built into them. The two most common iSCSI target applications found on Linux variants are:

• iSCSI Enterprise Target
• TGT

The iSCSI Enterprise Target is arguably the more mature of the two. TGT is included "in the box" with Red Hat and its derivatives, and is more than capable as an iSCSI platform. However, the default settings for both iSCSI systems are too conservative for the level of performance needed for Oracle RAC. Tuning is required to achieve a desirable level of performance. There are several online resources for tuning and configuring an iSCSI target on Linux for Oracle.



The primary issue is that the default settings for iSCSI target servers in Linux do not allocate sufficient resources to handle the I/O needs of databases such as Oracle RAC. Tuning iSCSI to have larger memory caches and to handle larger chunks of data, as well as to spawn more threads to handle data requests more efficiently, can reap significant performance benefits. When combined with enabling Jumbo Frame support, iSCSI performance increases even more. Performance boosts of 30% to 40% have been reported by clients who enabled Jumbo Frames on 10-GB Ethernet networks.

The following settings have been compiled from several community-based sources (online blogs, Linux man pages, etc.). They represent some of the more common settings and should provide adequate performance for most situations. A full explanation of these parameters can be found in the Linux man page for the iSCSI Enterprise Target configuration file (ietd.conf). Explanations of each parameter can also be found online at http://www.linuxcertif.com/man/5/ietd.conf, among other locations.

Configuring iSCSI Enterprise Target
On the target server, place the following in the /etc/ietd.conf file:

• MaxConnections 1
• InitialR2T No
• ImmediateData Yes
• MaxRecvDataSegmentLength 16776192
• MaxXmitDataSegmentLength 16776192
• MaxBurstLength 16776192
• FirstBurstLength 16776192
• MaxOutstandingR2T 16
• Wthreads 16
• DataDigest None
• HeaderDigest None

Next, adjust the amount of memory the iSCSI target system is configured to use. To do this, edit /etc/init.d/iscsitarget, change the MEM_SIZE variable to MEM_SIZE=1073741824, and then restart the iSCSI Target server by issuing the command:

• /etc/init.d/iscsitarget restart
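The settings above tune how the target behaves; the shared disk itself is exported through a Target block in the same /etc/ietd.conf file. A minimal sketch, assuming the gateway VM's 100-GB secondary disk appears in the guest as /dev/sdb (the IQN and device path are illustrative):

    # Export the thick-provisioned secondary disk as LUN 0 of one target
    Target iqn.2011-04.local.gateway:rac-shared
        Lun 0 Path=/dev/sdb,Type=blockio

Parameters placed inside a Target block override the global settings listed above on a per-target basis.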
Configuring iSCSI Targets with TGT
If configuring the iSCSI target gateway VM using TGT, use the following commands:

• tgtadm --lld iscsi --mode target --op update --tid $tid --name MaxRecvDataSegmentLength --value 16776192
• tgtadm --lld iscsi --mode target --op update --tid $tid --name MaxXmitDataSegmentLength --value 16776192
• tgtadm --lld iscsi --mode target --op update --tid $tid --name HeaderDigest --value None
• tgtadm --lld iscsi --mode target --op update --tid $tid --name DataDigest --value None
• tgtadm --lld iscsi --mode target --op update --tid $tid --name InitialR2T --value No
• tgtadm --lld iscsi --mode target --op update --tid $tid --name MaxOutstandingR2T --value 16
• tgtadm --lld iscsi --mode target --op update --tid $tid --name ImmediateData --value Yes
• tgtadm --lld iscsi --mode target --op update --tid $tid --name FirstBurstLength --value 16776192
• tgtadm --lld iscsi --mode target --op update --tid $tid --name MaxBurstLength --value 16776192
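These updates assume a target already exists and that $tid holds its numeric ID. A minimal sketch of creating the target and attaching the shared disk first, with an illustrative IQN and device path:

    # Create target 1 and attach /dev/sdb as LUN 1 (LUN 0 is the controller)
    tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2011-04.local.gateway:rac-shared
    tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --backing-store /dev/sdb
    # Allow initiators to log in; restrict by address in production
    tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL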
Configuring iSCSI Initiators (on each RAC VM)
On each of the Oracle RAC VM nodes, the iSCSI initiator needs to be tuned. To do so, add the following to /etc/sysctl.conf:

• net.core.rmem_max = 1073741824
• net.core.wmem_max = 1073741824
• net.ipv4.tcp_rmem = 1048576 16777216 1073741824
• net.ipv4.tcp_wmem = 1048576 16777216 1073741824
• net.ipv4.tcp_mem = 1048576 16777216 1073741824

Reload the system parameters with the command:

• sysctl -p

Then, finally, back up and overwrite /etc/iscsi/iscsid.conf on each VM server so it contains:



• node.startup = automatic
• node.session.timeo.replacement_timeout = 120
• node.conn[0].timeo.login_timeout = 15
• node.conn[0].timeo.logout_timeout = 15
• node.conn[0].timeo.noop_out_interval = 10
• node.conn[0].timeo.noop_out_timeout = 15
• node.session.initial_login_retry_max = 4
• node.session.cmds_max = 128
• node.session.queue_depth = 128
• node.session.iscsi.InitialR2T = No
• node.session.iscsi.ImmediateData = Yes
• node.session.iscsi.FirstBurstLength = 16776192
• node.session.iscsi.MaxBurstLength = 16776192
• # the default is 131072
• node.conn[0].iscsi.MaxRecvDataSegmentLength = 16776192
• # the default is 32768
• discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 16776192
• node.conn[0].iscsi.HeaderDigest = None
• node.session.iscsi.FastAbort = No

Once this is done, restart the iSCSI daemon with the command:

• service iscsi restart
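With the target exported and the initiator tuned, each RAC node can then discover and log in to the shared storage. A brief sketch, assuming the gateway or storage system answers at the illustrative address 192.168.10.5 and presents the illustrative IQN used earlier:

    # Discover the targets presented on the iSCSI network
    iscsiadm -m discovery -t sendtargets -p 192.168.10.5:3260
    # Log in; the shared drive then appears to the node as a new block device
    iscsiadm -m node -T iqn.2011-04.local.gateway:rac-shared -p 192.168.10.5:3260 --login
    # Verify the active session
    iscsiadm -m session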
Oracle Automatic Storage Management
Oracle Automatic Storage Management (ASM) should be used in this configuration to provide storage for shared disk management in exactly the same way that it would be used in a traditional physical server deployment. The primary difference is that ASM and its components all operate from within the virtual infrastructure environment, but access the shared iSCSI or NFS disk in exactly the same way. It makes no difference if the iSCSI target is directly on the storage system or accessed through a "gateway" VM.

Key factors to keep in mind for any VM configuration include:

• All nodes in a given RAC cluster should have an identical virtual hardware configuration. Ideally, it's best to clone a properly configured RAC VM to create the other RAC nodes in a cluster.
• VM performance, especially CPU, RAM and CPU-ready parameters, should be closely monitored to ensure maximum performance and resource utilization efficiency.
• Make use of VMXNET3 and PVSCSI drivers in VMs whenever possible to ensure maximum network and disk performance.
• Enable Jumbo Frames on all network interfaces (suggested MTU = 9000); a configuration sketch follows this list.
• Disable unneeded services in Linux VMs.
• Tune the iSCSI initiators and targets, especially on gateway VMs, for the performance needs of the VMs in the clusters using them. Multiple gateway VMs should be considered when multiple clusters are deployed. The configuration of the underlying host systems, network and storage in a virtual environment can have a significant impact on virtual machine performance, and Oracle is particularly sensitive in this area. Be sure that the underlying hardware infrastructure is optimized to support Oracle just as if it were running directly on physical infrastructure.
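For the Jumbo Frames item above, the larger MTU must be configured end-to-end: on the physical switches, on the ESX virtual switch carrying iSCSI traffic, and inside each guest. A brief sketch for a classic ESX 4 host and a Red Hat-style guest (the vSwitch and interface names are illustrative):

    # On the ESX host: raise the MTU of the vSwitch used for iSCSI
    esxcfg-vswitch -m 9000 vSwitch1
    # In the guest: persist the MTU on the iSCSI-facing interface
    echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-eth1
    # Verify end-to-end with a non-fragmenting ping (9000 minus 28 bytes of headers)
    ping -M do -s 8972 192.168.10.5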
It is also important to note that Oracle ASM is a sophisticated and robust database storage mechanism that is designed to make the most of physical storage systems with multiple disks. In a virtualized environment, the virtual storage system will normally provide most of the performance and reliability that ASM would commonly provide for itself. As a result, ASM configurations in VMware environments are usually much simpler to set up. Don't be misled: a redundant disk volume in VMware is normally presented to ASM as if it were a single disk drive. Just because ASM doesn't know that a disk volume it is using is redundant doesn't mean there is no redundancy. By the same token, ensure that you have built appropriate levels of data protection into your storage system.
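Because redundancy is provided beneath ASM in this design, disk groups can typically be created with external redundancy. A minimal sketch, run with SQL*Plus against the ASM instance (the disk group name and disk path are illustrative):

    # Create a disk group that relies on the storage layer for protection
    sqlplus / as sysasm <<'EOF'
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/dev/oracleasm/disks/RACSHARED1';
    EOF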
Time Synchronization
Time synchronization in vSphere environments can be tricky, and applications that are sensitive to time require special attention. Oracle RAC is no exception. Each virtualized Oracle RAC node must be time-synchronized to the other nodes. There are two methods for keeping the cluster nodes in sync; each has its benefits and works equally well:

• Cluster Time Synchronization Service: This is the easier of the two options to set up. Prior to beginning the installation of Oracle RAC, make sure that all Network Time Protocol (NTP) programs are disabled (and ideally uninstalled). The Oracle RAC installer then automatically installs and configures Oracle Cluster Time Synchronization Service.



• Enable NTP: The default NTP configuration must be modified only to allow the slew option (-x). This forces the NTP daemon to ensure that the clock on the individual nodes does not move backwards. This option is set in different places depending on the Linux distribution used; please refer to the documentation for the specific Linux distribution chosen for additional details.
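On Red Hat-style distributions (including Oracle Enterprise Linux), for example, the slew option is typically set in the NTP daemon's options file. A brief sketch of that change:

    # /etc/sysconfig/ntpd — the -x flag makes ntpd slew rather than step the clock
    OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

    # Restart the daemon to apply the change
    service ntpd restart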
Because time synchronization in vSphere can be sensitive, best practices suggest using VMware Tools to synchronize each VM with the hardware clock of the ESX host on which it is running. Testing to date has proved this to be unnecessary.

The above methods have been proven to satisfy the actual need. Follow Oracle best practices with respect to time synchronization regardless of the platform.

High Availability and DRS Configuration
One of the primary drivers for deploying Oracle RAC is the high availability provided by a RAC cluster. This cluster failover carries forward into a VMware vSphere environment and — with cluster nodes that can be migrated via vMotion — can now be configured to take advantage of these capabilities. Remember that VMware HA will, in the event a physical ESX host fails, automatically restart all of the failed host's VMs on surviving ESX hosts. VMs do experience downtime when this happens. For this reason, allowing more than one virtualized RAC server node in a given RAC cluster to run on a single ESX host needlessly exposes the RAC cluster to failure scenarios from which it potentially may not recover gracefully.

As such, it is important to set a series of DRS anti-affinity policies between all nodes in a given RAC cluster. A typical virtualized Oracle RAC environment will consist of three server nodes. Since anti-affinity DRS policies can currently only be set between two specific VMs, multiple policies are required to keep three or more nodes in a RAC cluster properly separated; a three-node cluster, for example, needs three pairwise rules, one for each of the node pairs 1-2, 1-3 and 2-3. Be sure to name the DRS policies such that they can be easily identified and grouped together. Note that having multiple RAC nodes from different clusters running on the same host server is acceptable, subject to resource utilization and other resource management issues common to all virtual machines.
For optimal HA detection and monitoring, configure VM heartbeat monitoring for all nodes in the RAC cluster. This will ensure that, if a VM is powered on but not actually functioning, VMware HA will automatically restart the VM.

Database Clustering Advances
Thanks to the performance enhancements of VMware vSphere 4, it is now possible to cluster database systems reliably without the use of Raw Device Mappings. This change enables individual nodes in a virtualized database cluster to migrate freely across ESX hosts in an HA/DRS cluster, and adds the benefits of database clustering to those provided by vSphere. When configured this way, vSphere HA and DRS work to complement the inherent HA capabilities of Oracle RAC clusters.

vSphere DRS will ensure that all virtual Oracle RAC nodes receive the resources they require by dynamically load-balancing the nodes across the vSphere HA/DRS cluster. In the event any ESX host in the cluster fails (or any RAC node, when HA heartbeat monitoring is used), vSphere HA will automatically restart all failed RAC nodes on another available ESX host. The process of restarting these nodes will follow all HA and DRS rules in place to ensure that the failed nodes are placed on a host where no other nodes in the same RAC cluster are running. With this combination, Oracle RAC will automatically manage the loss of a failed node from an application perspective, and vSphere will then automatically recover the failed RAC node, restoring the Oracle RAC cluster's state to normal. All of this occurs with no human intervention required.

The end result is that, by using in-guest iSCSI (and/or NFS) storage for shared data, virtualized Oracle RAC database clusters can achieve improved levels of redundancy and — on appropriate hardware infrastructure — enhanced levels of performance that cannot be achieved on physical infrastructure alone.



Footnote
1 "Support Position for Oracle Products Running on VMware Virtualized Environments," Oracle blog post, November 2, 2010, http://wiki.oracle.com/thread/2438514/Support+Position+for+Oracle+Products+Running+on+VMWare+Virtualized+Env

About the Author


Christopher (Chris) Williams is a Senior Manager and Principal Architect in Consulting and Professional Services with Cognizant's IT Infrastructure Services business unit, where he serves as the lead consultant in the virtualization practice. Chris holds a Bachelor of Science from Metropolitan State College of Denver and an MBA with an Information Systems emphasis from the University of Colorado. He can be reached at chris.williams@cognizant.com.

About Cognizant

Cognizant (Nasdaq: CTSH) is a leading provider of information technology, consulting, and business process outsourcing services, dedicated to helping the world's leading companies build stronger businesses. Headquartered in Teaneck, N.J., Cognizant combines a passion for client satisfaction, technology innovation, deep industry and business process expertise and a global, collaborative workforce that embodies the future of work. With over 50 delivery centers worldwide and approximately 104,000 employees as of December 31, 2010, Cognizant is a member of the NASDAQ-100, the S&P 500, the Forbes Global 2000, and the Fortune 1000 and is ranked among the top performing and fastest growing companies in the world.

Visit us online at www.cognizant.com for more information.

World Headquarters
500 Frank W. Burr Blvd.
Teaneck, NJ 07666 USA
Phone: +1 201 801 0233
Fax: +1 201 801 0243
Toll Free: +1 888 937 3277
Email: inquiry@cognizant.com

European Headquarters
Haymarket House
28-29 Haymarket
London SW1Y 4SP UK
Phone: +44 (0) 20 7321 4888
Fax: +44 (0) 20 7321 4890
Email: infouk@cognizant.com

India Operations Headquarters
#5/535, Old Mahabalipuram Road
Okkiyam Pettai, Thoraipakkam
Chennai, 600 096 India
Phone: +91 (0) 44 4209 6000
Fax: +91 (0) 44 4209 6060
Email: inquiryindia@cognizant.com

© Copyright 2011, Cognizant. All rights reserved. No part of this document may be reproduced, stored in a retrieval system, transmitted in any form or by any
means, electronic, mechanical, photocopying, recording, or otherwise, without the express written permission from Cognizant. The information contained herein is
subject to change without notice. All other trademarks mentioned herein are the property of their respective owners.
