
Solid State Storage in Oracle

Environments
Mark Henderson & Rick Stehno
CAUTION: We successfully completed all of these system and storage
modifications in our lab to perform our benchmark tests. Before
implementing any of these modifications in your environment, be sure to
test them thoroughly to determine whether they should be used in your
environment.
LSI Overview

Company Highlights
• Focused on Storage and Networking
• 12,000+ Patents and Patent Applications
• $2.5B Annual Revenue
• Global Presence, 3,000+ Employees
• 300,000+ Storage Systems Deployed

[Pie chart, revenue mix: Storage Semiconductors 44%, Storage Systems 34%, Networking 19%, IP 3%]

LSI-Oracle Partnership
 12 years of Successful Partnership Spanning
Silicon, Boards and Storage Systems
 Technology and Manufacturing Partner for the
Oracle 2500 & 6000 Storage Systems
 Designed and Tested for Interoperability with
Oracle Operating Systems and Applications

3
Who we are:
• Rick Stehno is an Oracle Technologist/DBA with LSI Corporation, which
designs and manufactures high-performance storage systems. Rick works
with Oracle and LSI's various OEMs to create and promote solutions using
LSI's storage systems with the various Oracle technologies. Rick has
worked in IT for over 34 years and with Oracle databases since 1989.

• Mark Henderson is a Technical Marketing Manager with LSI Corporation,
which designs and manufactures high-performance midrange storage
systems for major OEMs. Mark works with Oracle and LSI's various
channels to create and promote solutions that address customer problems
and create competitive advantage. He has a degree in Computer Systems
Engineering, has designed high-end flight simulators, participated in
computer science and networking research at US DOE labs, architected
HPC centers, and has been involved with Storage Resellers, Fibre Channel
Director SAN technology and MAID storage systems.

4
Solid State Storage Comes in Many Forms

• Delivered to the market in three basic forms:
• Server Cards – think memory expansion
• Network device – the most well known is the Oracle 5100
• Solid State Disk – installed in either servers or RAID systems
– SSDs installed in servers have many of the same properties as server cards.

5
Solid State Technology (And do you really care?)

• Internally they are similar to a bunch of your average jump drives

• Solid State Storage is a consumable resource – but don’t panic!


– It wears out, not unlike regular old rotating disk drives
– Bad blocks on drives, remapped sectors

• There are two technologies that you may hear about
– MLC: Multi-Level Cell
– SLC: Single-Level Cell

• The technology choice is simply a discussion of *cost*, not price.

6
OK so now that I have this super fast device – what
does that mean? The obvious, well isn’t…
• It’s all equally accessible – no short stroking
• While it doesn’t rotate, mixed reads and writes do slow it down
• Scanning the Device for bad sectors is a thing of the past
• It may not be necessary to stripe for performance
• In cache use cases you might not even need to mirror SSDs
• Using Smart Flash Cache AND moving data objects to SSD decreased
performance
• Online Redo Logs are best handled by HDD because of the sequential
writes

7
Look at the Solid State price per GB!!!
($/IOP vs. $/GB)

[Chart comparing cost per IOPS and cost per GB]
8
Oracle Smart Flash Cache & Database Storage Tiering

Smart Flash Cache


• Technology available in 11gR2 + a patch
• Extends Oracle Buffer Cache
• Can use any technology
– Flash Cards, Network Flash, Solid State Disk, even USB drives
• Point Oracle at the flash resource and it’s all automatic
• Least interaction between storage admin and DBA

Database Storage Tiering


• Tiered storage often uses Flash as a “Tier 0” layer
• Can be higher performance AND less expensive
• Mix and Match Multi-technology solutions
• Storage Arrays can hold multiple Tiers
– Some do so with more grace than others
• Use Database Partitioning to drive Storage Tiering
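As a hedged sketch of partition-driven tiering (the table, partition, index, and tablespace names below are hypothetical, not from our tests), moving a hot partition to an SSD-backed tablespace can be a single DDL step:

```shell
#!/bin/sh
# Hypothetical sketch: move the current (hot) partition of a
# range-partitioned table onto a tablespace created on SSD-backed LUNs.
# All object names here are illustrative assumptions.
sqlplus -s / as sysdba <<'EOF'
-- Move the hot partition onto the SSD tier
ALTER TABLE sales MOVE PARTITION sales_q4 TABLESPACE ssd_tier0;
-- A move invalidates local index partitions; rebuild them on the same tier
ALTER INDEX sales_idx REBUILD PARTITION sales_q4 TABLESPACE ssd_tier0;
EOF
```

Older, colder partitions simply stay on the HDD-backed tablespaces, which is what makes the mix-and-match economics work.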

9
Where should you invest in Solid State?

• Where should you invest?


– Server
– Network
– Storage

10
Investing in Solid State in the Server

• Lowest latency
• Low entry point
• Dedicated to specific server
• Not transportable
• Great for buffer extension / acceleration
• Variety of manufacturers / sizes / capabilities

11
Investing in Solid State in the Network

• Most dense flash storage


• Looks like a drive
– Partitionable x4
– Not sharable

12
Investing in Solid State Storage

• Persistent Storage
• Multi-Server Shared Storage
– OVM
– RAC
– VMware
• Data Protection Methods
• Database Partitioning
• Automatic Storage Tiering

13
So where should you invest in solid state?

“Depends…”
Property \ Location                             Server Flash   Network Flash   Storage Flash
Physical Description                            Card           Device          Solid State Disk
Latency                                         Lowest         Low             Low
Entry Price Point                               Lowest ($5k)   Medium ($50k)   Low ($15-30k)
Sharable Data                                   No             No              Yes
Partition Sharing                               No             Yes             Yes
Single Machine (SMP) Fit                        Yes            Yes             Yes
Shared Multi-Machine (RAC) Fit                  No             No              Yes
Persistent Data (no power)                      No             No              Yes
Data Protection (Snapshot, Volume Copy)         No             No              Yes
Site Protection (Remote Volume Mirroring)       No             No              Yes
Automatic Storage Management (ASM) Compatible   Yes            Yes             Yes
Recovery Manager (RMAN) Compatible              Yes            Yes             Yes
Oracle Smart Flash Cache                        Yes            Yes             Yes
VMware Shared Storage (HA, Advanced Features)   No             No              Yes

14
Moving the Bottleneck

• High-performance array controller
– Sustained throughput to the drives
– Not just cache numbers
• And the rest of the system has to be able to use the faster speed…

[Diagram: Server(s) → FC Network → Controller → Drives]

15
Product Background External

16
Oracle Storage Array SSD Testing Results

Smart Flash Cache

[Chart: Avg Response Time and Avg Transaction Time (sec) – All HDD Baseline vs. Flash Cache on SSD]
[Chart: % Response Time Gains and % Transaction Gains (up to ~1600%) – Flash Cache on SSD vs. Move Top 9 Objects to SSD]

17
SAN Based SSD Testing

• We used an LSI 7900 Storage Array


– Three Storage Drive Enclosures
– (28) 15k RPM Fibre Channel drives in RAID 10 for ASM disk groups
– (3) 15k RPM Fibre Channel drives in RAID 10 for Redo logs
– (2) 73GB SSD in mirrored RAID for data protection

• Server: two dual-core Xeon 5150 @ 2.66GHz

• Oracle Enterprise Linux Release 5.5

18
Database Configuration

• SGA=1.5GB
• filesystemio_options=asynch
• disk_asynch_io=TRUE
• 1GB redo logs
• ASM
• 60GB Oracle Smart Flash Cache
– SQL> alter system set db_flash_cache_file='/u04/flash.dbf' scope=spfile;
– SQL> alter system set db_flash_cache_size=60g scope=spfile;
– SQL> show parameter flash

NAME                 TYPE        VALUE
-------------------- ----------- --------------
db_flash_cache_file  string      /u04/flash.dbf
db_flash_cache_size  big integer 60G

19
WarpDrive™ PCIe Solid State Acceleration Card

• Provides scalable SSD performance inside the server
• Designed to supercharge application performance
– Built for IOPS, throughput, and both random and sequential I/O workloads
– Performance: 240K IOPS, 1.5GB/s, 50µs latency
– Usable capacity 300GB (with 28% over-provisioning)
• No change to OS or applications
• Built for broad OS support
– Bootable
– Including RHEL, SLES, Windows 32/64 support

20
WarpDrive Testing Configuration

• HP ProLiant DL370 G6
– Dual Intel Xeon Processor X5570
– 48GB - 1333 DDR3
– LSI 9210-8i SAS host bus adapter
– LSI SAS 2x36 Expander
– 146GB 2.5-in. SFF 6G SAS 10K RPM drives

• Software RAID 0 over 6 LUNs for the UNDO tablespace


• Software RAID 0 over 6 LUNs for the Online REDO Logs
• All tablespaces were striped over 10 individual LUNs when using HDD
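One of the software RAID 0 stripes described above might be built like this; the sketch assumes the six LUNs surface as /dev/sdb through /dev/sdg, and the device names and mount point are illustrative only:

```shell
#!/bin/sh
# Illustrative sketch: stripe six LUNs into one RAID 0 md device for the
# online redo logs. Device names and mount point are assumptions.
mdadm --create /dev/md0 --level=0 --raid-devices=6 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
mkfs.ext2 /dev/md0                   # ext2: no journal, no double writes
mkdir -p /u03/redo
mount -o noatime /dev/md0 /u03/redo  # noatime: skip access-time updates
```

RAID 0 here trades redundancy for throughput, which is why the redo logs on a production system would normally sit on protected storage instead.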

21
Database Configuration Single WarpDrive
• SGA=16GB
• filesystemio_options=asynch
• disk_asynch_io=TRUE
• 4GB redo logs
• Benchmarks used Swingbench with 100 user load with no latency
• 250 GB Oracle Smart Flash Cache

• SQL> alter system set db_flash_cache_file='/u05/flash.dbf' scope=spfile;
• SQL> alter system set db_flash_cache_size=250g scope=spfile;
• SQL> show parameter flash

NAME                 TYPE        VALUE
-------------------- ----------- --------------
db_flash_cache_file  string      /u05/flash.dbf
db_flash_cache_size  big integer 250G
22
Dual WarpDrives with Oracle ASM
(Database Configuration)
• SQL> alter system set db_flash_cache_file='+DATAWH/flash.dbf'
scope=spfile;
• SQL> alter system set db_flash_cache_size=250g scope=spfile;
• SQL> show parameter flash

NAME                 TYPE        VALUE
-------------------- ----------- -----------------
db_flash_cache_file  string      +DATAWH/flash.dbf
db_flash_cache_size  big integer 250G

23
Oracle WarpDrive Testing Results

[Charts: TPS (up to ~7,000), Avg Response Time (ms, up to ~120), and TPM (up to ~400,000) for three configurations – Baseline, Smart Flash Cache, and Mirrored WarpDrives]
24
Tools or Procedures to Investigate I/O Activity

Tools available in the database:
• Statspack (free, since 8i)
• Automatic Workload Repository (AWR) – requires a license
• Oracle Enterprise Manager (OEM)

Database views in specific areas:
• v$filestat
• v$sysstat
• v$system_event
• v$session_wait
• turn on trace events

Operating System level tools:
• For Linux/Unix
– iostat
– vmstat
• For Windows
– Performance Monitor using the Oracle options
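For the Linux/Unix tools above, a typical way to watch device-level I/O while a benchmark runs looks like this (the 5-second interval and 12-sample count are just examples):

```shell
#!/bin/sh
# Extended per-device statistics (await, svctm, %util), one sample every
# 5 seconds, 12 samples; look for devices pinned near 100% utilization.
iostat -x 5 12
# Memory, swap, and CPU counters; a persistently high "wa" (I/O wait)
# column suggests the system is I/O bound rather than CPU bound.
vmstat 5 12
```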

25
Review Statspack or AWR Reports

• Instance CPU Section
– Is the system CPU bound?

• Tablespace I/O Statistics Section
– Which tablespace(s) have the highest I/O activity?

• Segments by Physical Reads
– The most active objects by physical reads
– Percentage of total read I/O activity

26
Additional AWR Analysis

• Segments by Physical Writes
– List of the most active database objects based on physical writes, and
the percentage of total write I/O activity.

AWR was used to identify the top nine data objects to move.
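One way to get such a list without generating a full report is to query the AWR history views directly. This is a hedged sketch, not the exact query we used, and it requires the same Diagnostics Pack license as AWR itself:

```shell
#!/bin/sh
# Sketch: rank the top 9 segments by physical writes summed across all
# AWR snapshots. These are standard AWR history views; in a repository
# holding several databases, also joining on dbid would be stricter.
sqlplus -s / as sysdba <<'EOF'
SELECT * FROM (
  SELECT o.owner, o.object_name,
         SUM(s.physical_writes_delta) AS phys_writes
  FROM   dba_hist_seg_stat s
  JOIN   dba_hist_seg_stat_obj o
         ON s.obj# = o.obj# AND s.dataobj# = o.dataobj#
  GROUP  BY o.owner, o.object_name
  ORDER  BY phys_writes DESC
) WHERE ROWNUM <= 9;
EOF
```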

27
LSI Oracle Enterprise Manager Plug-in

• Our Plug-in is intended to assist Database Administrators to:

– Understand the storage configuration

– Track performance trends

– View the current storage status

– Plan proactively for capacity needs

28
OEM Plug-in Displays Storage Resources

29
OEM Plug-in shows Database relationship to LUNS

30
OEM Plug-in Performance Graphs

31
OEM Plug-in Storage Array Performance

32
Linux Tuning for Solid State Drives
(both SAN based SSD and WarpDrive)
• Align the SSD on a 4-KB boundary for optimal performance

• Use ext2 to bypass filesystem journaling
– eliminates double writes to the SSD
– which increases performance
– and prolongs the life of the SSD

• Set the kernel I/O scheduler to NOOP for the SSD device

• Use the noatime filesystem mount option
– eliminates filesystem writes when objects are only being read
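The four tuning steps above might look like this on the command line; /dev/sdX and the mount point are placeholders for your SSD device, so treat this as a sketch to adapt and test rather than what we ran verbatim:

```shell
#!/bin/sh
# Placeholder device: substitute the real SSD block device for sdX.
# 1) Partition aligned for 4-KB I/O (a 1MiB start is 4-KB aligned):
parted -s -a optimal /dev/sdX mklabel gpt mkpart primary 1MiB 100%
# 2) ext2: no journal, so no double writes and less flash wear:
mkfs.ext2 /dev/sdX1
# 3) NOOP elevator: the SSD has no heads, so seek-order sorting is wasted:
echo noop > /sys/block/sdX/queue/scheduler
# 4) noatime: no metadata write on every read:
mount -o noatime /dev/sdX1 /u04
```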

33
Linux Tuning Test Observations
• Test results using a 500-user load with only the operating system tuning
efforts applied:

[Charts: Average TPM, TPS, and Average Response Time (ms), before vs. after OS tuning]

– Overall: 35% performance increase


• These changes not only increased performance with a 100-user load, they
also improved performance at the higher user loads.
• System performance did not drop dramatically when using the 500-user load.
34
Solid State Conclusions and Recommendations

• If you are I/O bound, AND you have CPU cycles


– Take your storage admin out for coffee…
– If you aren’t using ASM, consider it
– Smart Flash Cache will get you an improvement, IFF you have CPU cycles
– Best results come from using AWR / Statspack, but it takes some work
– Move Data Objects or Smart Flash Cache, not both

• Solid state in the Server, Network or Storage will work, depending on goals


– Shared storage requires a storage system

• A modest SSD investment can provide huge returns

• The LSI 7900 Engenio Storage System and the LSI WarpDrive can
deliver performance using SSD technology to applications such as
Oracle, for balanced performance and cost efficiency.
35
Resources and Contact Information

Material taken from the following white papers:


• Migration of Live Oracle Databases to LSI Storage
• Oracle Storage Tiering within a LSI Engenio 7900
• Where to Invest in Flash in an Oracle Environment
• Practical Application of Solid State Disk (SSD) to an Oracle
Database on LSI Engenio Storage
• Best Practices for Optimizing Oracle® Database Performance
with the LSI™ WarpDrive™ Acceleration Card

Rick.Stehno@lsi.com
Mark.Henderson@lsi.com
36
