
JULY 2013

Oracle Enterprise Edition 11g Vs Standard Edition 11g

INTRODUCTION
This document highlights the features of Oracle Database 11g Enterprise Edition that are not available
in Oracle Database 11g Standard Edition. It is intended to help readers who are evaluating Enterprise
Edition against Standard Edition.
The document gives a brief introduction to the features that are specific to Enterprise Edition and not
included in Standard Edition.

Disclaimer:
The following is intended to outline our general product direction. It is intended for information
purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any
material, code, or functionality, and should not be relied upon in making purchasing decisions. The
development, release, and timing of any features or functionality described for Oracle's products
remains at the sole discretion of Oracle.

HIGH AVAILABILITY
How do you protect your database from disaster?

Oracle Data Guard:


Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data.
Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or
more standby databases to enable production Oracle databases to survive disasters and data
corruptions.

Redo Apply:
Provides a physically identical copy of the primary database, with on-disk database structures that are
identical to the primary database on a block-for-block basis. The database schemas, including indexes,
are the same. A physical standby database is kept synchronized with the primary database through Redo
Apply, which recovers the redo data received from the primary database and applies it to the physical
standby database.
How do you upgrade your database with minimal downtime?

Oracle Data Guard:


As described above, Oracle Data Guard maintains one or more standby databases so that production
Oracle databases can survive disasters and data corruptions. Its SQL Apply technology also lets you
upgrade the database with minimal downtime.

SQL-Apply:
The logical standby database is kept synchronized with the primary database through SQL Apply, which
transforms the data in the redo received from the primary database into SQL statements and then
executes those statements on the standby database.
A logical standby database can be used for other business purposes in addition to disaster recovery
requirements. This allows users to access a logical standby database for queries and reporting purposes
at any time. Also, using a logical standby database, you can upgrade Oracle Database software and patch
sets with almost no downtime. Thus, a logical standby database can be used concurrently for data
protection, reporting, and database upgrades.

Can you use your standby Database for testing?

Oracle Data Guard - Snapshot Standby:


A standby database can be opened temporarily (that is, activated) for read/write activities such as
reporting and testing. A physical standby database in this state continues to receive redo data from the
primary database, thereby providing data protection for the primary database while it serves in the
reporting role.
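As a rough sketch (the commands assume an existing, mounted physical standby in a Data Guard
configuration), converting to and from a snapshot standby looks like this:

  -- Convert the mounted physical standby into an updatable snapshot standby
  ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
  -- ... perform read/write testing against the standby ...
  -- Discard the test changes and resume the physical standby role
  ALTER DATABASE CONVERT TO PHYSICAL STANDBY;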
Do you have zero downtime Upgrades?

Rolling Upgrades:
Data Guard supports rolling upgrades of patch sets and major database releases, which allows you to
upgrade with near-zero downtime and keep your applications highly available. Major database upgrades
can be performed in a rolling fashion using a physical standby (by converting it to a logical standby for a
brief period of time).
What if your transactional capabilities suffer because of ongoing maintenance activities?

Online Index Rebuild:


For any large enterprise with a massive customer base, 24x7 data availability plays a significant role in
its business growth. With Online Index Rebuild, indexes can be rebuilt while the data remains available,
so customers keep fast data access even while maintenance activity runs in the background, and
concurrent transactions against the table can continue throughout the operation.
Can you defragment or restructure your index-organized tables without affecting application availability
and performance?

Online Index-Organized Table Reorganization:


For any large enterprise with a massive customer base, 24x7 data availability plays a significant role in
its business growth. With online reorganization of index-organized tables, customers keep fast data
access even while maintenance activity on the data runs in the background, and concurrent transactions
can continue throughout the operation.
Can you defragment or restructure your tables without affecting application availability and
performance?

Online Table Redefinition:


Oracle Database provides a mechanism to make table structure modifications without significantly
affecting the availability of the table. The mechanism is called online table redefinition. Redefining
tables online provides a substantial increase in availability compared to traditional methods of
redefining tables.
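A minimal sketch of the DBMS_REDEFINITION workflow follows; the schema, table and interim table
names (HR, ORDERS, ORDERS_INTERIM) are hypothetical, and steps such as copying dependent objects
are omitted:

  -- Verify the table can be redefined online using its primary key
  EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('HR', 'ORDERS', DBMS_REDEFINITION.CONS_USE_PK);
  -- Start redefinition into a pre-created interim table that has the new structure
  EXEC DBMS_REDEFINITION.START_REDEF_TABLE('HR', 'ORDERS', 'ORDERS_INTERIM');
  -- Complete the redefinition; the interim structure becomes the live table
  EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('HR', 'ORDERS', 'ORDERS_INTERIM');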
Are your backups reliable?

Duplexed Backup Sets:


Oracle 11g EE provides support for duplexed backup sets, increasing the reliability of backups. RMAN
exposes this capability through the RMAN API that tape vendors integrate with, so a file is backed up
once and multiple copies of the backup are written to different tapes.
How can you make your incremental backups fast?

Block Change Tracking for Incremental Backups:


Incremental backups copy only the blocks that have changed since the last backup, and they can be
optimized by avoiding a scan of blocks that have not changed. By using a new type of log file (the block
change tracking file) to track blocks that have changed in the database, Recovery Manager (RMAN) can
avoid scanning the entire datafile during an incremental backup. Instead, the amount of data scanned is
proportional to the amount of data changed.
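Block change tracking is enabled with a single statement; the file path below is only an example
location:

  ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
    USING FILE '/u01/app/oracle/oradata/orcl/change_tracking.f';
  -- Disable it again if it is no longer needed
  ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;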
How can you reduce the Size of your Backup sets?

Unused Block Compression in Backups:


Although it is referred to as block compression, it might be helpful to think of block compression as
block skipping. Rather than compressing the data in the blocks, RMAN completely eliminates the blocks
from the backup. RMAN reads blocks from a data file and writes them to a backup set. If unused block
compression is enabled, RMAN reads only those blocks of the data file in an extent that currently
belongs to a database segment. When not employing unused block compression, RMAN reads every
block of the data file.
How can you reduce the application downtime caused by block corruption in your database?

Block Level Media Recovery:


You can use block media recovery to recover one or more corrupt data blocks within a datafile. Block
media recovery provides the following advantages over datafile media recovery:

Lowers the Mean Time to Recover (MTTR), because only blocks needing recovery are restored and
recovered

Enables affected datafiles to remain online during recovery

Without block media recovery, if even a single block is corrupt, then you must take the datafile offline
and restore a backup of the datafile. You must apply all redo generated for the datafile after the backup
was created. The entire file is unavailable until media recovery completes. With block media recovery,
only the blocks actually being recovered are unavailable during the recovery.
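As an illustrative RMAN example (the datafile and block numbers are hypothetical), a corrupt block can
be repaired while the rest of the file stays online:

  RMAN> RECOVER DATAFILE 7 BLOCK 233;
  -- Or repair everything currently listed in V$DATABASE_BLOCK_CORRUPTION
  RMAN> RECOVER CORRUPTION LIST;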
How do you minimize backup and recovery time?

Parallel Backup and Recovery:


Oracle Database automatically selects the optimum degree of parallelism for instance recovery, crash
recovery, and media recovery. Oracle Database applies archived redo logs using an optimal number of
parallel processes based on the availability of CPUs.
Can you do Partial Database Recovery to a Previous Point in Time?

Tablespace Point in Time Recovery:


Recovery Manager (RMAN) automatic tablespace point-in-time recovery (commonly abbreviated
TSPITR) enables you to quickly recover one or more tablespaces in an Oracle database to an earlier time,
without affecting the state of the rest of the tablespaces and other objects in the database.
Do you have a Fast, Predictable and Bounded Recovery Time for your Database?

Fast-Start Fault Recovery:


Fast-Start Fault Recovery controls the time needed to recover from system failures. It provides fast,
predictable recovery through automatic checkpoint tuning, which uses periods of low I/O activity to
advance checkpoints and therefore improves availability.
Can you do a trial recovery before the actual recovery, to be confident it will succeed?

Trial Recovery:
Allows media recovery to go through the redo logs and apply all the changes to data blocks without
writing the changes to disk, so that an informed decision can be made on how to proceed if a stuck
recovery scenario is encountered.
Have you ever had downtime due to an accidental table drop or update?

Flashback Table:
Loss of customer information is one of the biggest threats any enterprise faces and can have adverse
effects on the business. Flashback Table addresses this issue, especially when data is lost through an
internal logical corruption. The feature immediately and accurately restores the lost data to its last
captured state, reducing the risk of the business losing both time and money.
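A minimal sketch (the ORDERS table and the 15-minute window are hypothetical, and row movement
must be enabled beforehand):

  ALTER TABLE orders ENABLE ROW MOVEMENT;
  -- Rewind the table to its state 15 minutes ago
  FLASHBACK TABLE orders TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;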

Have you ever had downtime due to an accidental schema drop or unwanted data removal?

Flashback Database:
This feature introduces the FLASHBACK DATABASE statement in SQL. It lets you quickly bring your
database back to a prior point in time by undoing all the changes that have taken place since that time.
The operation is fast because you do not need to restore backups, which in turn results in much less
downtime following data corruption or human error.
Have you ever had downtime due to unwanted or accidental transactions?

Flashback Transaction:
Increases availability during logical recovery by easily and quickly backing out a specific transaction, or
a set of transactions and their dependent transactions, with one command while the database remains
online. The operation uses undo data to create and execute the corresponding compensating
transactions that revert the affected data to its original state.
Can you Query data at some point-in-time in the past?

Flashback Query:
Flashback query allows a user to view the data quickly and easily the way it was at a particular time in
the past, even when it is modified and committed, be it a single row or the whole table.
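As an illustrative query (table, predicate and interval are hypothetical):

  -- View an order as it was one hour ago
  SELECT *
  FROM   orders AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR
  WHERE  order_id = 1001;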
How can you increase the Availability of a Single instance database?

RAC One Node:


Oracle Real Application Clusters (RAC) One Node is an option to Oracle Database Enterprise Edition that
was introduced with Oracle Database 11g Release 2. It provides enhanced high availability for
single-instance Oracle databases, protecting them from both planned and unplanned downtime. Oracle
RAC One Node provides best-in-class Oracle Database availability, better database consolidation, and
better database virtualization.
Oracle RAC One Node also allows customers to standardize their database deployment and
management, consolidate database storage and, should the need arise, upgrade to a full multi-node
Oracle RAC database without downtime or disruption.
Can you offload your reporting to your standby database?

Oracle Active Data Guard:


Active Data Guard allows the use of the standby database as a reporting instance, offloading processing
cycles from the primary and increasing your return on investment in standby database technology. It is
now possible to query a physical standby database while Redo Apply is active; this capability is called
"real-time query" and is sold as the Active Data Guard option.
A configurable real-time query apply lag limit (STANDBY_MAX_DATA_DELAY) has been added. This
capability allows queries to be safely offloaded from the primary database to a physical standby
database, because it is possible to detect if the standby database has become unacceptably stale.
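For example, a reporting session on the standby can demand a maximum staleness (30 seconds here is
an illustrative value); queries in that session then fail with an error if the standby lags further behind:

  ALTER SESSION SET STANDBY_MAX_DATA_DELAY = 30;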
Can you access your Historic Data?

Total Recall:
The database maintains every state of a record for a long period, or even for its whole lifetime, so you
no longer need to build this intelligence into the application. This feature is useful for compliance
reporting and audit reports.
Can you avoid returning block corruption errors to users?

Automatic Block Repair:


Oracle Database 11g Release 2 now has the capability to automatically repair corrupt data blocks in your
production database as soon as the corruption is detected by using your Active Data Guard standby
database to retrieve good copies of the corrupted blocks.
Automatic Block Media Recovery will also automatically repair corrupted blocks that are discovered in
your physical standby databases. This feature reduces the amount of time that data is inaccessible due
to block corruption and will avoid returning errors to your application.
This reduces block recovery time by using up-to-date good blocks in real-time, as opposed to retrieving
blocks from disk or tape backups, or from Flashback logs.

SECURITY
How can you secure your database without any application change?

Virtual Private Database:


VPD is used when the standard object privileges and associated database roles are insufficient to meet
application security requirements. VPD policies can be simple or complex depending on your security
requirements. VPD can be used in combination with the "application context" feature to enforce
sophisticated row and/or column level security requirements for privacy and regulatory compliance. A
simple VPD example might restrict access to data during business hours and a more complex VPD
example might read an application context during a login trigger and enforce row level security against
the ORDERS table.
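A minimal sketch of registering a row-level policy with DBMS_RLS follows; the schemas, objects and the
policy function are hypothetical, and the function must return a WHERE-clause predicate string:

  BEGIN
    DBMS_RLS.ADD_POLICY(
      object_schema   => 'SALES',
      object_name     => 'ORDERS',
      policy_name     => 'orders_vpd',
      function_schema => 'SEC_ADMIN',
      policy_function => 'orders_policy_fn',   -- returns e.g. 'sales_rep_id = SYS_CONTEXT(...)'
      statement_types => 'SELECT, INSERT, UPDATE, DELETE');
  END;
  /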

How good is your Data Auditing?

Fine-grained auditing:
Oracle 11g fine-grained auditing (FGA) lets users specify the conditions necessary for an audit record to
be generated. This creates more meaningful audit trails, since not every access to a table is recorded.
Furthermore, FGA supports all combinations of SELECT, INSERT, UPDATE and DELETE statements in one
policy. Because the FGA policy is bound to the table, management of audit policies is simplified: a policy
needs to be changed only once in the database, not in each and every application.
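An illustrative policy with DBMS_FGA (schema, table, column and condition are hypothetical) that audits
only qualifying statements:

  BEGIN
    DBMS_FGA.ADD_POLICY(
      object_schema   => 'HR',
      object_name     => 'EMPLOYEES',
      policy_name     => 'audit_salary_access',
      audit_condition => 'department_id = 10',
      audit_column    => 'SALARY',
      statement_types => 'SELECT, UPDATE');
  END;
  /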
Is your Data Secure from Administrators?

Oracle Database Vault:


Oracle Database Vault proactively protects application data from being accessed by privileged database
users. Oracle Database Vault can also help discover Oracle Database runtime privileges without
disruption. Oracle Database Vault provides essential safeguards against common threats, including:

Threats that exploit stolen credentials obtained from social engineering, key-loggers, and other
mechanisms to get access to privileged accounts in your database

Threats from insiders that misuse privileged accounts to access sensitive data, or to create new
accounts, and grant additional roles and privileges for future exploits

Threats from insiders who bypass the organization's usage policies (including IP address, date,
and time of usage), or from unintended mistakes from junior DBAs who might use unauthorized
SQL commands that change the database configuration and put the database in a vulnerable
state

Threats to sensitive data during maintenance windows from application administrators

Threats that exploit weaknesses in the application to escalate privileges and attack other
applications on the same database

How Secure is your Data?

Advanced Security Option:


Oracle Advanced Security transparent data encryption (TDE) provides the industry's most advanced
database encryption solution. TDE automatically encrypts data written to storage by the Oracle
database and automatically decrypts the data after the requesting user or application has authenticated
to the Oracle database and passed all access control checks, including those enforced by Database Vault,
Label Security and Virtual Private Database. Database backups retain the data as encrypted, providing
protection for backup media. Data exported into flat files from the Oracle database can be encrypted as
well. Both logical and physical standby databases can be configured with TDE to provide complete
protection for sensitive data in high availability architectures. Advanced Security network encryption
provides both SSL-based and native network encryption capabilities to protect data in transit. Advanced
Security strong authentication services support PKI, Kerberos and RADIUS as alternatives to existing
password-based authentication.
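A sketch of column-level and tablespace-level TDE, assuming the encryption wallet has already been
created and opened; the objects, algorithm choices and file path are illustrative:

  -- Encrypt an existing sensitive column
  ALTER TABLE employees MODIFY (ssn ENCRYPT USING 'AES192');
  -- Or create an encrypted tablespace so everything stored in it is encrypted
  CREATE TABLESPACE secure_ts
    DATAFILE '/u01/oradata/orcl/secure01.dbf' SIZE 100M
    ENCRYPTION USING 'AES256'
    DEFAULT STORAGE (ENCRYPT);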
Do your Tables have Row level Security?

Oracle Label Security:


Easily categorize and mediate access to data based on its classification. Designed to meet public-sector
requirements for multilevel security and mandatory access control, Oracle Label Security provides a
flexible framework that both government and commercial entities worldwide can use to manage access
to data on a "need to know" basis.
Following are the Benefits:

Ensure access to sensitive data is restricted to users with the appropriate clearance level

Enforce regulatory compliance with a policy-based administration model

Establish custom data classification schemes for implementing need to know access for
applications

Labels can be used as factors within Oracle Database Vault command rules for multifactor
authorization policies

Integrates with Oracle Identity Management, enabling centralized management of policy definitions

PERFORMANCE

Can you store query results on the client side?

Client Side Query Cache:


This feature enables caching of query result sets in client memory. The cached result set data is
transparently kept consistent with any changes done on the server side. Applications leveraging this
feature see improved performance for queries which have a cache hit. Additionally, a query serviced by
the cache avoids round trips to the server for sending the query and fetching the results. It also reduces
the server CPU that would have been consumed for processing the query, thereby improving server
scalability.

How do you improve your query performance?

Query Results Cache:


A separate shared memory pool is now used for storing and retrieving cached results. Query retrieval
from the query result cache is faster than rerunning the query. Frequently executed queries will see
performance improvements when using the query result cache.
The new query result cache enables explicit caching of results in database memory. Subsequent queries
using the cached results will experience significant performance improvements.
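An illustrative query that requests server-side result caching (the hint and objects are examples; setting
RESULT_CACHE_MODE to FORCE would cache eligible results without the hint):

  SELECT /*+ RESULT_CACHE */ department_id, SUM(salary)
  FROM   employees
  GROUP  BY department_id;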
How do you avoid recalculating a PL/SQL function?

PL/SQL Function Result Cache:


You can mark a PL/SQL function to indicate that its result should be cached, allowing a lookup rather
than a recalculation the next time the function is called with the same parameter values. This function
result cache saves significant space and time. Oracle does this transparently, using the input values as
the lookup key. The cache is system-wide, so all distinct sessions invoking the function benefit. If the
result for a given set of values changes, you can use constructs to invalidate the cache entry so that it is
properly recalculated on the next access.
This feature is especially useful when the function returns a value that is calculated from data selected
from schema-level tables. For such uses, the invalidation constructs are simple and declarative.
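A minimal sketch; the DEPARTMENTS table and the function are illustrative. The RELIES_ON clause marks
the table whose changes invalidate cached results (in 11g Release 2 this dependency is tracked
automatically and the clause is optional):

  CREATE OR REPLACE FUNCTION get_dept_name (p_dept_id NUMBER)
    RETURN VARCHAR2
    RESULT_CACHE RELIES_ON (departments)
  IS
    v_name departments.department_name%TYPE;
  BEGIN
    SELECT department_name INTO v_name
    FROM   departments
    WHERE  department_id = p_dept_id;
    RETURN v_name;
  END;
  /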
Can you hold your data in memory for Fastest Access?

In-Memory Database Cache:


Oracle In-Memory Database Cache is a database option that provides real-time, updatable caching for
the Oracle database. Oracle In-Memory Database Cache improves application transaction response time
by caching a performance-critical subset of tables and table fragments from an Oracle database to the
application tier. Cache tables are managed like regular SQL relational database tables within the
TimesTen In-Memory Database. Thus, Oracle In-Memory Database Cache offers applications the full
generality and functionality of a relational database, the transparent maintenance of cache consistency
with the Oracle Database, and the real-time performance of an in-memory database.

How do you increase the buffer cache without adding additional RAM?

Database Smart Flash Cache:


This feature increases the size of the database buffer cache without having to add RAM to the system. In
effect, it acts as a second level cache on flash memory and will especially benefit read-intensive
database applications.
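An illustrative configuration (the device path and size are placeholders) sets the two initialization
parameters that define the flash cache; the instance must be restarted for them to take effect:

  ALTER SYSTEM SET db_flash_cache_file = '/dev/flash_device_1' SCOPE = SPFILE;
  ALTER SYSTEM SET db_flash_cache_size = 64G SCOPE = SPFILE;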

MANAGEABILITY
How do you manage resources effectively?

Database Resource Manager:


Oracle Database Resource Manager (the Resource Manager) enables you to manage multiple workloads
within a database that are contending for system and database resources. With the Resource Manager,
you can guarantee certain sessions a minimum amount of CPU regardless of the load on the system and
the number of users, distribute available CPU by allocating percentages of CPU time to different users
and applications, and limit the degree of parallelism of any operation performed by members of a group
of users.
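A condensed sketch of building a plan with the DBMS_RESOURCE_MANAGER package; the plan, group
and percentages are illustrative, and mapping sessions to the consumer group is omitted:

  BEGIN
    DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
    DBMS_RESOURCE_MANAGER.CREATE_PLAN(plan => 'daytime_plan', comment => 'Favor OLTP work');
    DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(consumer_group => 'oltp_group', comment => 'OLTP sessions');
    DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
      plan => 'daytime_plan', group_or_subplan => 'oltp_group',
      comment => 'At least 80% of CPU at level 1', mgmt_p1 => 80);
    DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
      plan => 'daytime_plan', group_or_subplan => 'OTHER_GROUPS',
      comment => 'Everything else', mgmt_p2 => 100);
    DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
    DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
  END;
  /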
Can you run multiple databases on a single server without affecting performance?

Instance Caging:
Oracle Database provides a method for managing CPU allocations on a multi-CPU server running
multiple database instances. This method is called instance caging. Instance caging and Oracle Database
Resource Manager (the Resource Manager) work together to support desired levels of service across
multiple instances.
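As a simple illustration (the CPU count and plan name are examples), instance caging only requires
setting CPU_COUNT and activating a resource plan in each instance:

  ALTER SYSTEM SET cpu_count = 4;
  ALTER SYSTEM SET resource_manager_plan = 'default_plan';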
How do you preserve SQL performance?

SQL Plan Management:


SQL plan management is a preventative mechanism that records and evaluates the execution plans of
SQL statements over time, and builds SQL plan baselines composed of a set of existing plans known to
be efficient. The SQL plan baselines are then used to preserve performance of corresponding SQL
statements, regardless of changes occurring in the system.
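A brief sketch of two common ways to populate SQL plan baselines; the SQL_ID value is a placeholder:

  -- Automatically capture repeatable plans as SQL plan baselines
  ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;
  -- Or load the plan of a specific statement from the cursor cache
  DECLARE
    n PLS_INTEGER;
  BEGIN
    n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abcd1234efgh5');
  END;
  /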

How do you track the changes in your Database?

Oracle Change Management Pack:


The Oracle Change Management Pack for Databases manages changes at the database schema level.
You can capture the metadata definitions of any schema in a Dictionary Baseline. This baseline can then
be propagated to other database targets. In this way, planned schema changes can be deployed in an
automated manner from the development database to a test or production database, rather than
relying on manually written scripts that are prone to human error. Baselines can be captured and
versioned. You can copy database objects from one database to the other, either with no data, full
data, or a subset of data.
Is it possible to store the Database and Host configuration?

Oracle Configuration Management Pack:


The databases, hosts, and applications in the IT space consist of tremendous amounts of configuration
information, which needs to be captured and maintained, preferably in a central location. The Oracle
Configuration Management Pack for Databases, along with the Oracle Configuration Management Pack
for Applications, does precisely that, once the targets have been discovered by Oracle Enterprise
Manager.
Any configuration change on the database, host, and operating system is captured by the Configuration
Management Pack for Databases. You can save a gold configuration, which you can then compare to the
current configuration or to a different server or database altogether, as well as track the historical
changes over time. This feature would be of great assistance in troubleshooting, for example, to see if
anything has changed at a particular time that could have affected the normal functioning of the system.
Is there a way to diagnose your Database issues automatically?

Oracle Diagnostic Pack:


Oracle Diagnostics Pack, a part of the Oracle Database 11g product set, offers a comprehensive set of
automatic performance diagnostics and monitoring functionality built into core database engine and
Oracle Enterprise Manager. Whether you are managing one or many databases, Oracle Diagnostic Pack
offers a complete, cost-effective, and easy-to-use solution for managing the performance of your Oracle
Database environment. When used as part of Oracle Enterprise Manager Grid Control, Diagnostic Pack
additionally provides enterprise-wide performance and availability reporting, a centralized performance
repository, and valuable cross-system performance aggregation, significantly simplifying the task of
managing large sets of databases.

Is there a way to Tune your Database automatically?

Oracle Tuning Pack:


Oracle Tuning Pack, a part of Oracle Database 11g product set, offers an extremely cost effective and
easy-to-use solution that automates the entire application tuning process. Enhancement of SQL
performance is achieved through real-time monitoring and SQL Advisors that are seamlessly integrated
with the Enterprise Manager, and together provide a comprehensive solution for automating the
complex and time-consuming task of application tuning.
Is there a way to automate the Provisioning and Patching?

Oracle Provisioning and Patch Automation Pack:


The Provisioning solution is an important part of Lifecycle Management solution offered by Cloud
Control. As part of the database provisioning solution, Cloud Control enables you to provision Oracle
Databases (also known as single-instance databases) and Oracle Real Application Clusters databases,
extend or delete Oracle Real Application Clusters nodes, provision Oracle Real Application Clusters One
node databases, and also upgrade Oracle single-instance databases in a scalable and automated
manner.
Cloud Control addresses the challenges with its much-improved patch management solution that
delivers maximum ease with minimum downtime. The new patch management solution offers the
following benefits:

Integrated patching workflow with My Oracle Support, so you can see recommendations, search for
patches, and roll out patches all from the same user interface.

Complete, end-to-end orchestration of the patching workflow using Patch Plans, including automated
selection of deployment procedures and analysis of patch conflicts, so that minimal manual effort is
required.

Can you test your database performance before an upgrade?

Oracle Real Application Testing:
Oracle Real Application Testing option enables you to perform real-world testing of Oracle Database. By
capturing production workloads and assessing the impact of system changes before production
deployment, Oracle Real Application Testing minimizes the risk of instabilities associated with changes.
Oracle Real Application Testing comprises two components:
1) Database Replay
2) SQL Performance Analyzer
Database Replay and SQL Performance Analyzer are complementary solutions that can be used for real
application testing. Depending on the nature and impact of the system change, and on which system the
test will be performed (production or test), you can use either solution to perform your testing.
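A rough sketch of capturing a production workload with Database Replay; the capture name and
directory object are placeholders, and the preprocessing and replay steps on the test system are
omitted:

  BEGIN
    DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name => 'peak_capture', dir => 'CAPTURE_DIR');
  END;
  /
  -- ... let the production workload run for the desired period ...
  EXEC DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE();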

DATA WAREHOUSING
How do you save storage space?

Basic Table Compression:


Oracle Database provides a unique compression technique that is very attractive for large data
warehouses. It is unique in many ways. Its reduction of disk space can be significantly higher than
standard compression algorithms, because it is optimized for relational data. It has virtually no negative
impact on the performance of queries against compressed data; in fact, it may have a significant positive
impact on queries accessing large amounts of data, as well as on data management operations like
backup and recovery. It ensures that compressed data is never larger than uncompressed data.
How do you save storage space from empty tables?

Deferred Segment Creation:


With this option, no space is occupied by a table until you insert the first record into it.
DEFERRED_SEGMENT_CREATION specifies the semantics of deferred segment creation. If set to TRUE,
then segments for tables and their dependent objects (LOBs, indexes) are not created until the first
row is inserted into the table.
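An illustrative table definition and the corresponding initialization parameter (names and setting are
examples):

  CREATE TABLE price_history (
    item_id    NUMBER,
    list_price NUMBER
  ) SEGMENT CREATION DEFERRED;
  -- Or make deferred creation the default for new tables
  ALTER SYSTEM SET deferred_segment_creation = TRUE;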
How do you improve the performance of a SQL query?

Bitmapped index, bitmapped join index, and bitmap plan conversions:


Bitmap indexes are widely used in data warehousing environments. The environments typically have
large amounts of data and ad hoc queries, but a low level of concurrent DML transactions. For such
applications, bitmap indexing provides:
1) Reduced response time for large classes of ad hoc queries.
2) Reduced storage requirements compared to other indexing techniques.
3) Dramatic performance gains even on hardware with a relatively small number of CPUs or a small
amount of memory.
4) Efficient maintenance during parallel DML and loads.
Fully indexing a large table with a traditional B-tree index can be prohibitively expensive in terms of disk
space because the indexes can be several times larger than the data in the table. Bitmap indexes are
typically only a fraction of the size of the indexed data in the table.
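Illustrative examples (the SH-style tables and columns are assumptions): a plain bitmap index on a
low-cardinality column, and a bitmap join index on a dimension attribute of a fact table:

  CREATE BITMAP INDEX customers_gender_bix
    ON customers (cust_gender);

  CREATE BITMAP INDEX sales_cust_gender_bjix
    ON sales (customers.cust_gender)
    FROM sales, customers
    WHERE sales.cust_id = customers.cust_id;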
Can we run SQL queries in parallel to increase performance?

Parallel query/DML:
You can use parallel queries and parallel subqueries in SELECT statements, and execute in parallel the
query portions of DDL statements and DML statements (INSERT, UPDATE, and DELETE). You can also
query external tables in parallel. This increases the performance of the query and uses the available
resources to return results faster.
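An illustrative parallel query and parallel DML statement (table names, aliases and the degree of
parallelism are examples):

  SELECT /*+ PARALLEL(s, 8) */ COUNT(*) FROM sales s;

  -- Parallel DML must first be enabled for the session
  ALTER SESSION ENABLE PARALLEL DML;
  INSERT /*+ APPEND PARALLEL(t, 8) */ INTO sales_archive t
  SELECT * FROM sales WHERE sale_date < DATE '2012-01-01';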
Can we reduce the overall statistics gathering time?

Parallel statistics gathering:


Oracle Database 11g Release 2 introduces a new statistics gathering mode, 'concurrent statistics
gathering'. The goal of this new mode is to enable a user to gather statistics on multiple tables in a
schema (or database), and multiple (sub)partitions within a table concurrently. Gathering statistics on
multiple tables and (sub)partitions concurrently can reduce the overall time it takes to gather statistics
by allowing Oracle to fully utilize a multi-processor environment.
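A short sketch using DBMS_STATS (the schema name and degree are examples; the CONCURRENT
preference is available from Oracle Database 11g Release 2):

  -- Allow statistics on multiple tables and (sub)partitions to be gathered concurrently
  EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'TRUE');
  -- Gather a schema's statistics, also using intra-object parallelism
  EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SH', degree => 8);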
Can we speed up index creation?

Parallel index build/scans:


Multiple processes can work together simultaneously to create an index. By dividing the work necessary
to create an index among multiple server processes, Oracle Database can create the index more quickly
than if a single server process created the index sequentially.
Parallel index creation works in much the same way as a table scan with an ORDER BY clause. The table
is randomly sampled and a set of index keys is found that equally divides the index into the same
number of pieces as the DOP. A first set of query processes scans the table, extracts key-rowid pairs, and
sends each pair to a process in a second set of query processes based on key. Each process in the second
set sorts the keys and builds an index in the usual fashion. After all index pieces are built, the parallel
coordinator simply concatenates the pieces (which are ordered) to form the final index.
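An illustrative parallel index build (the index, table and degree of parallelism are examples):

  CREATE INDEX sales_time_idx ON sales (time_id)
    PARALLEL 8 NOLOGGING;
  -- Optionally reset the index's default degree of parallelism afterwards
  ALTER INDEX sales_time_idx NOPARALLEL;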
Is there a way to speed up data export and import?

Parallel Data Pump Export/Import:


The Data Pump Export and Import (expdp and impdp) PARALLEL parameter can be set to a value greater
than one only in the Enterprise Edition of Oracle Database. A user must be privileged in order to use a
value greater than one for this parameter. It is most useful for big jobs with a lot of data relative to
metadata. Small jobs or jobs with a lot of metadata will not see significant improvements in speed.
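An illustrative export command line (the directory object, file template and log name are placeholders);
the %U substitution lets each parallel worker write its own dump file:

  expdp system DIRECTORY=dpump_dir DUMPFILE=full_%U.dmp LOGFILE=full_exp.log FULL=Y PARALLEL=4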
How do you speed up your parallel processing?

In-memory Parallel Execution:


Traditionally, parallel processing bypassed the database buffer cache for most operations, reading data
directly from disk (through direct path I/O) into the parallel execution server's private working space.
Only objects smaller than about 2% of DB_CACHE_SIZE would be cached in the database buffer cache of
an instance, and most objects accessed in parallel are larger than this limit. This behavior meant that
parallel processing rarely took advantage of the available memory other than for its private processing.
However, over the last decade, hardware systems have evolved quite dramatically; the memory capacity
on a typical database server is now in the double or triple digit gigabyte range. This, together with
Oracle's compression technologies and the capability of Oracle Database 11g Release 2 to exploit the
aggregated database buffer cache of an Oracle Real Application Clusters environment now enables
caching of objects in the terabyte range.
In-Memory parallel execution takes advantage of this large aggregated database buffer cache. By having
parallel execution servers access objects using the database buffer cache, they can scan data at least ten
times faster than they can on disk.
With In-Memory parallel execution, when a SQL statement is issued in parallel, a check is conducted to
determine if the objects accessed by the statement should be cached in the aggregated buffer cache of
the system. In this context, an object can either be a table, index, or, in the case of partitioned objects,
one or multiple partitions.
Can Oracle control the DOP (Degree of Parallelism)?

Parallel Statement Queuing:


Because more statements are expected to run in parallel, it becomes more important to manage the
scarce resource of available parallel processes. The system should be smart about when to run a
statement and should verify that the requested number of parallel processes, the DOP for that
statement, is available.
The answer to this is Parallel Statement Queuing. In short, a statement runs only when its requested
DOP is available. For example, when a statement requests a DOP of 64, it will not run if only 32 parallel
processes are currently free. Instead, the statement is placed into a queue, and Oracle enforces a strict
first-in, first-out queue.
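As an illustration, one initialization parameter (available from Oracle Database 11g Release 2) activates
automatic DOP, parallel statement queuing and in-memory parallel execution together:

  ALTER SYSTEM SET parallel_degree_policy = AUTO;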
Does Oracle support cross-platform data transportation?

Transportable tablespaces, including cross-platform:


You can transport tablespaces in a database that runs on one platform into a database that runs on a
different platform. Typical uses of cross-platform transportable tablespaces include the following:
1) Publishing structured data as transportable tablespaces for distribution to customers, who can
convert the tablespaces for integration into their existing databases regardless of platform.
2) Moving data from a large data warehouse server to data marts on smaller computers such as
Linux-based workstations or servers.
3) Sharing read-only tablespaces across a heterogeneous cluster in which all hosts share the same
endian format.

How do you improve response time for queries in DWH applications?

Summary management-Materialized View Query Rewrite:


The query rewrite mechanism in the Oracle server automatically rewrites the SQL query to use the
summary tables. This mechanism reduces response time for returning results from the query.
Materialized views within the data warehouse are transparent to the end user or to the database
application. Although materialized views are usually accessed through the query rewrite mechanism, an
end user or database application can construct queries that directly access the materialized views.
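A sketch of a summary that the optimizer can rewrite queries to use; the SALES table and grouping are
assumptions:

  CREATE MATERIALIZED VIEW sales_by_product_mv
    ENABLE QUERY REWRITE
  AS
    SELECT prod_id, SUM(amount_sold) AS total_sold
    FROM   sales
    GROUP  BY prod_id;
  -- A query such as SELECT prod_id, SUM(amount_sold) FROM sales GROUP BY prod_id
  -- can now be transparently rewritten to read the materialized view instead.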
Can Oracle identify only changed data for processing?

Asynchronous Change Data Capture:


Data warehousing involves the extraction and transportation of relational data from one or more
production databases into a data warehouse for analysis. Change Data Capture quickly identifies and
processes only the data that has changed and makes the change data available for further use.
How do you improve the manageability, performance, and availability of very large tables and indexes?

Oracle Partitioning:
Oracle Partitioning, an option of Oracle Database 11g Enterprise Edition, enhances the manageability,
performance, and availability of a wide variety of applications. Partitioning allows tables, indexes, and
index-organized tables to be subdivided into smaller pieces, enabling these database objects to be
managed and accessed at a finer level of granularity.
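An illustrative range-partitioned table (the table and partition boundaries are examples):

  CREATE TABLE orders (
    order_id   NUMBER,
    order_date DATE,
    amount     NUMBER
  )
  PARTITION BY RANGE (order_date) (
    PARTITION p_2012 VALUES LESS THAN (DATE '2013-01-01'),
    PARTITION p_2013 VALUES LESS THAN (DATE '2014-01-01'),
    PARTITION p_max  VALUES LESS THAN (MAXVALUE)
  );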
How do you deal with Online Analytical Processing (OLAP)?

Oracle OLAP:
Oracle OLAP is a world class multidimensional analytic engine embedded in Oracle Database 11g. Oracle
OLAP cubes deliver sophisticated calculations using simple SQL queries - producing results with speed of
thought response times. This outstanding query performance may be leveraged transparently when
deploying OLAP cubes as materialized views enhancing the performance of summary queries against
detail relational tables. Because Oracle OLAP is embedded in Oracle Database 11g, it allows centralized
management of data and business rules in a secure, scalable and enterprise-ready platform.
Is there a way to explore the data, build and evaluate models?

Oracle Data Mining:


Oracle Data Mining (ODM) provides powerful data mining functionality as native SQL functions within
the Oracle Database. Oracle Data Mining enables users to discover new insights hidden in data and to
leverage investments in Oracle Database technology. With Oracle Data Mining, you can build and apply
predictive models that help you target your best customers, develop detailed customer profiles, and
find and prevent fraud.
Is there a way to assess the quality of data?

Oracle Data Profiling and Quality:


Oracle Data Profiling is a data investigation and quality monitoring tool. It allows business users to
assess the quality of their data through metrics, to discover or infer rules based on this data, and to
monitor the evolution of data quality over time.
Oracle Data Quality for Data Integrator is a comprehensive award-winning data quality platform that
meets even the most complex data quality requirements. Its powerful rule-based engine and its robust
and scalable architecture place data quality and name and address cleansing at the heart of an
enterprise data integration strategy.
Do you have data profiling and correction solution?

Oracle Data Watch and Repair Connector:


Data Watch and Repair is a data profiling and correction solution created to assist data governance
processes in Oracle's Master Data Management (MDM) solutions. MDM applications must successfully
consolidate and clean up a system's master data, sharing it with multiple connected entities to achieve a
single view of the data. However, MDM systems face a never-ending challenge: the constant state of
flux of master data. Not only does master data quickly become out of date as new events happen and
need to be captured in the system, but also any incoming data can potentially be inaccurate, either from
entry mistakes or purposely misrepresented data.
How do you compress your data to reduce storage cost?

Oracle Advanced Compression:


Oracle Advanced Compression provides a comprehensive set of compression capabilities to help
improve performance and reduce storage costs. It allows organizations to reduce their overall database
storage footprint by enabling compression for all types of data: relational (table), unstructured (file),
network and backup data. Although storage cost savings and optimization across servers (production,
development, QA, test, backup, and so on) are often seen as the most tangible benefits, additional
innovative technologies included in Oracle Advanced Compression are designed to improve
performance and to reduce CapEx and OpEx costs for all components of an IT infrastructure, including
memory and network bandwidth as well as heating, cooling and floor-space costs.
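An illustrative use of OLTP table compression, part of the Advanced Compression option (the tables are
examples; in Oracle Database 11g Release 1 the equivalent clause is COMPRESS FOR ALL OPERATIONS):

  CREATE TABLE orders_compressed (
    order_id   NUMBER,
    order_date DATE,
    amount     NUMBER
  ) COMPRESS FOR OLTP;
  -- Existing tables can be altered so newly inserted data is compressed
  ALTER TABLE orders COMPRESS FOR OLTP;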
