
Storage best practices for deploying IBM DB2 on HP Integrity servers and HP EVA8400 storage

Technical white paper

Table of contents

Executive summary
Overview
Disk storage
Multipath storage devices
DB2 log files
DB2 tablespaces
  System Managed Space (SMS) tablespace
  Database Managed Space (DMS) tablespace
LVM layout
File system layout
DB2 registry variables
Implementing a proof-of-concept
Summary
For more information

Executive summary

This document provides suggestions and best practices for configuring disk storage for an IBM DB2 database. It focuses mainly on HP 8400 Enterprise Virtual Arrays (EVAs) and an HP Integrity rx8640 server running the HP-UX 11i v3 operating system. The test configuration consists of three EVA8400 arrays and an HP Integrity rx8640 server in a DB2/SAP environment. Two EVAs are used for database tablespaces and one for database logs. The recommendations focus on the following areas:

Physical disks and logical unit numbers (LUNs)

Striping

Database logs and data

File system configuration

Registry variables and configuration parameters

Target audience: This document is intended for database and system administrators. Prior knowledge of the DB2 database and the HP-UX operating system is required.

This white paper describes testing performed in June 2011.

Overview

The following recommendations are derived from tests performed in our lab running a DB2/SAP environment. The environment was tested with IBM DB2 9.7 fix packs 2 and 3a on the HP-UX 11i v3 Data Center Operating Environment (DC-OE), September 2010 update, running on an HP Integrity rx8640 server. Figure 1 shows the hardware configuration of the test environment. OnlineJFS (Online Journaled File System), which supports Concurrent I/O (CIO), was installed on the database system. The LibcEnhancement library, a set of APIs that extend libc, was also installed on the HP-UX system.

Two EVA8400 disk arrays, each with 192 x 146 GB disks, were used for database tablespaces, and one EVA8400 disk array with 112 x 146 GB disks was used for database log files, to eliminate I/O contention in the benchmark environment. Generally it is recommended to have dedicated LUNs and file systems for database tablespaces and log files. While running the tests, two large tables were moved to different tablespaces to improve the I/O response time for those tables. The tablespace creation statement is provided in the section describing the tablespaces. The DB2 registry variable DB2_WORKLOAD=SAP was set for the instance profile.
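For reference, here is a minimal sketch of how such a registry variable is set and verified with the standard db2set utility (run as the instance owner; the instance must be restarted for the change to take effect):

# db2set DB2_WORKLOAD=SAP
# db2set -all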

Figure 1. DB2 database configuration: HP Integrity and ProLiant servers, IBM DB2 and SAP reference configuration

Application servers (SAP NetWeaver 7.1): 4 x HP ProLiant DL580 G5 (Intel® Xeon® X7350 4P/16C, 2.93 GHz; 64 GB; 16 x 146 GB 10K SAS) and 4 x HP ProLiant DL785 G6 (AMD 8439 SE 8P/48C, 2.8 GHz, 6 MB L3; 384 GB; 8 x 146 GB 15K SAS)

Storage: 3 x HP 8400 Enterprise Virtual Array (EVA8400): one with 112 x 146 GB disks for the DB2 log and two with 192 x 146 GB disks for the DB2 database

Database server: HP Integrity rx8640 with 32 Intel® Itanium® 2 9100 series processor cores (1.6 GHz, 24 MB) and 255.74 GB memory, running IBM DB2 9.7

SAN and LAN: HP StorageWorks 8/8 SAN switches and an HP ProCurve network switch

Disk storage

The key advantage of the EVA disk array is its ability to virtualize multiple physical disk drives into a single block of disk space, called a disk group. Although both controllers in an EVA can access all physical disks, a Vdisk is assigned to be managed by only one controller at a time. Consequently, a single Vdisk can only use the resources of a single controller, such as cache, data path bandwidth, Fibre Channel ports, or processor cycles. The HSV450 controllers connect via four host ports (FP1, FP2, FP3, and FP4) to the SAN fabrics, and the hosts that will access the storage system are connected to the same fabrics. The logical representation of the SAN topology is shown in figure 2. Distributing the workload evenly across both controllers ensures the maximum total performance of the array. There are two methods to distribute the access: assignment and striping. In assignment, the database or system administrator assigns different files or file sets to different Vdisks through different controllers. The Vdisks are assigned access through different controllers to the host machine and are allocated to DB2 as FILE containers; this is discussed in detail in the Database Managed Space (DMS) tablespace section. DB2 stripes across both Vdisks, so both controllers are accessed equally.

For striping, database administrators can use the operating system to stripe the data evenly across the controllers. Striping provides the best distribution of the workload. Here is an example of striping across controllers: two Vdisks are created within a single disk group, and each Vdisk is assigned access through a different controller. HP-UX LVM (Logical Volume Manager) stripes the two Vdisks to form a single logical volume presented to the file system. The LVM striping ensures both controllers are accessed equally. This is discussed in the LVM layout section.

Figure 2. DB2 database SAN topology. The rx8640 server (cell boards, core I/O, and PCI-X/PCIe I/O slots) connects through two redundant 4/32 SAN switches (SAN 1 and SAN 2) to the host ports (FP1 through FP4) on both HSV450 controllers (CTRL A and CTRL B) of each EVA8400.

Multipath storage devices

HP-UX 11i v3 introduces a new representation of mass storage devices called the agile view. The central idea of the agile view is that disk and tape devices are identified by the actual object, via its World Wide Identifier (WWID), and not by a path to the device. Paths to a device can change dynamically, and multiple paths to a single device can be transparently treated as a single virtualized path, with I/O distributed across those multiple paths. This representation increases the reliability, adaptability, performance, and scalability of the mass storage stack, all without the need for operator intervention.
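As a quick orientation, the agile view and the mapping from legacy device files can be inspected with standard HP-UX 11i v3 commands, for example:

# ioscan -N -funC disk
# ioscan -m dsf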

The Vdisks created for the tablespaces and log files are accessed using the agile representation of the disks (/dev/disk/disk40 and /dev/disk/disk41), so the HP-UX multipath feature is used. The following example shows four paths through each HBA port; two HBA ports are presented to each Vdisk for high availability. Two Vdisks are presented to the host with two HBA ports per EVA controller. LUN disk41 is accessed through controller A and disk40 through controller B of the EVA8400. Figure 3 shows the logical representation of how database tablespaces are striped across two EVAs.

# ioscan -m lun /dev/disk/disk40
Class  I   Lun H/W Path       Driver  S/W State  H/W Type  Health  Description
disk   40  64000/0xfa00/0x10  esdisk  CLAIMED    DEVICE    online  HP HSV450
             1/0/8/1/0.0x50001fe1501e76cc.0x400a000000000000
             1/0/8/1/0.0x50001fe1501e76cd.0x400a000000000000
             1/0/8/1/0.0x50001fe1501e76ce.0x400a000000000000
             1/0/8/1/0.0x50001fe1501e76cf.0x400a000000000000
             2/0/8/1/0.0x50001fe1501e76cc.0x400a000000000000
             2/0/8/1/0.0x50001fe1501e76cd.0x400a000000000000
             2/0/8/1/0.0x50001fe1501e76ce.0x400a000000000000
             2/0/8/1/0.0x50001fe1501e76cf.0x400a000000000000
      /dev/disk/disk40   /dev/rdisk/disk40

# ioscan -m lun /dev/disk/disk41
Class  I   Lun H/W Path       Driver  S/W State  H/W Type  Health  Description
disk   41  64000/0xfa00/0x11  esdisk  CLAIMED    DEVICE    online  HP HSV450
             1/0/10/1/0.0x50001fe1501e76c8.0x400a000000000000
             1/0/10/1/0.0x50001fe1501e76c9.0x400a000000000000
             1/0/10/1/0.0x50001fe1501e76ca.0x400a000000000000
             1/0/10/1/0.0x50001fe1501e76cb.0x400a000000000000
             2/0/10/1/0.0x50001fe1501e76c8.0x400a000000000000
             2/0/10/1/0.0x50001fe1501e76c9.0x400a000000000000
             2/0/10/1/0.0x50001fe1501e76ca.0x400a000000000000
             2/0/10/1/0.0x50001fe1501e76cb.0x400a000000000000
      /dev/disk/disk41   /dev/rdisk/disk41

The database can achieve better performance with multiple LUNs; with a small number of LUNs, special queue depth tuning might be required to achieve maximum performance. The HP-UX system automatically sets the queue depth to the default value of 8. The scsictl command allows viewing and changing the queue depth parameter for each device, as shown in the following examples:

View the current SCSI queue depth:

# /usr/sbin/scsictl -a /dev/rdisk/disk40
immediate_report = 0; queue_depth = 8

Change the SCSI queue depth to 24:

# /usr/sbin/scsictl -m queue_depth=24 -a /dev/rdisk/disk40
immediate_report = 0; queue_depth = 24

View the SCSI queue depth after the change:

# /usr/sbin/scsictl -a /dev/rdisk/disk40
immediate_report = 0; queue_depth = 24
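A queue depth set with scsictl applies only to the running system. As a hedged sketch, on HP-UX 11i v3 the setting can be made persistent with scsimgr and the max_q_depth attribute (verify the attribute name for your disk driver):

# scsimgr save_attr -D /dev/rdisk/disk40 -a max_q_depth=24
# scsimgr get_attr -D /dev/rdisk/disk40 -a max_q_depth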

Figure 3. DB2 database disk layout

Two EVA8400 arrays, each with a 192 x 146 GB disk group, hold the database volumes, each striped across 4 EVA LUNs with a 1024 KB stripe size: a 1.8 TB logical volume for the TBK database tablespaces, a 1.0 TB logical volume for the PAYMITEM tablespace, and a 500 GB logical volume for the GL_PAYMITEM tablespace.

A third EVA8400 with a 112 x 146 GB disk group holds a 500 GB logical volume for the DB2 log, striped across 2 EVA LUNs.

DB2 log files

DB2 supports two different modes of logging: circular or archive (database configuration parameter LOGRETAIN). In circular mode, a predefined number (LOGPRIMARY) of fixed-size (LOGFILSIZ) log files is created in the database subdirectory.
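For illustration, these parameters can be inspected and changed with standard DB2 commands; the values below are examples only, not the tested settings:

# db2 get db cfg for TBK | grep -i log
# db2 update db cfg for TBK using LOGFILSIZ 16384 LOGPRIMARY 20 LOGSECOND 40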

The I/O characteristics of log files, with primarily sequential writes (sequential reads occur only during crash recovery), are different from the primarily random read access to most tablespaces. It is a good idea to place the database control and log files on a single file system separated from the tablespaces. If necessary, the log files can be placed on separate storage by using the database configuration parameter NEWLOGPATH. Mirroring (RAID 1) is highly recommended because the loss of the log files may render the complete database inaccessible after a server crash.

In the test configuration, two Vdisks were created on a separate EVA8400 used specifically for database log files. Each Vdisk is accessed through a different controller and presented to the HP-UX server. If you are sharing the same EVA for tablespaces and log files in your environment, you can create separate Vdisks for tablespaces and log files accessible through different controllers. Figure 3 shows the logical representation of the database tablespace and log file storage. The two Vdisks were striped with LVM across both disks (stripe width of two), as described below. The file system created on the logical volume was supplied as the NEWLOGPATH database configuration parameter, as shown below.

Create the volume group with two Vdisks:

#vgcreate -s 32 /dev/vgdb2log /dev/disk/disk68 /dev/disk/disk69

-s: sets the physical extent size in megabytes (MB), in the range of 1 to 256.
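Note that vgcreate assumes the disks were first initialized as LVM physical volumes and that the volume group device file exists. A minimal sketch of those preparatory steps on HP-UX follows; the minor number 0x010000 is an example and must be unique among volume groups:

# pvcreate /dev/rdisk/disk68
# pvcreate /dev/rdisk/disk69
# mkdir /dev/vgdb2log
# mknod /dev/vgdb2log/group c 64 0x010000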

Create the logical volume with stripe size of 1024k and striping across 2 disks:

#lvcreate -i 2 -I 1024 -L 100000 -n lvol1 /dev/vgdb2log

-i: number of disks to stripe across

-I: stripe size in kilobytes

-L: size of logical volume in MB

Create the file system for DB2 log:

# mkfs -F vxfs -o bsize=8192,largefiles /dev/vgdb2log/rlvol1

bsize: the block size for files on the file system; it represents the smallest amount of disk space allocated to a file.

Mount the file system for DB2 log:

#mount /dev/vgdb2log/lvol1 /DB2LOG/TBK

Note that mount uses the block device file (lvol1), while mkfs uses the raw device file (rlvol1).
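To make the mount persistent across reboots, an entry along these lines can be added to /etc/fstab (the mount options shown are illustrative defaults, not the tested settings):

/dev/vgdb2log/lvol1 /DB2LOG/TBK vxfs delaylog,largefiles 0 2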

Update the DB2 database parameter with the new log path:

#db2 update db cfg for database_name using NEWLOGPATH /DB2LOG/TBK
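NEWLOGPATH takes effect the next time the database is activated; the active log path can then be confirmed with, for example:

# db2 get db cfg for database_name | grep -i "path to log"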

Separating logs and data leads to better resiliency and the ability to optimize performance or service levels for each independently.

A setup with separate logs and data can also outperform a shared storage setup. Because I/Os to the devices holding logs are sequential on real physical disks, actuator seeks are avoided, I/O service time is reduced, and throughput is higher than with random I/O. For OLTP, fast log response time is often more important than I/O service times for data, which is frequently asynchronous.

DB2 tablespaces

A DB2 database has two types of tablespaces, SMS and DMS. They are described below.

System Managed Space (SMS) tablespace

A minimal set of SMS files is created when the tablespace is created using the "MANAGED BY SYSTEM USING" clause. The advantage of SMS is that storage space is allocated by the operating system as required. In contrast, DMS space is allocated by the create statement and resized only in predefined extents with "db2 alter tablespace".

The same operating system performance penalties, such as double buffering and single-writer locking, apply to SMS and to DMS FILE containers. DMS provided better performance than SMS tablespaces in the testing. The IBM DB2 Information Center (see the For more information section) provides a further comparison of SMS and DMS tablespaces.

Database Managed Space (DMS) tablespace

DMS files are created using the "MANAGED BY DATABASE USING" clause. DMS storage allows the database manager better control of data placement in comparison to SMS. For DMS FILE, a single file is associated with each container, and space requirements have to be defined at the time the tablespace is created. Multiple containers can be associated with a single tablespace. While for DMS FILE the database manager allocates the storage inside a file system, tablespaces can be added to or resized only with "ALTER TABLESPACE", not dynamically on demand as with SMS. Because data inside tablespaces is read by the database in random patterns, not in a sequential fashion, striping across multiple disks and/or controllers significantly boosts performance. If you choose to implement disk striping along with DB2 striping, the extent size of the tablespace and the stripe size of the disk should be identical.
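As an illustrative calculation (not a setting from the tested configuration), matching the extent size to the stripe size with a 16 KB page size and a 1024 KB LVM stripe size works out to:

extent size = 1024 KB stripe size / 16 KB per page = 64 pages

so "extentsize 64" would align each tablespace extent with one stripe unit.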

The following commands are used to create striped logical volumes (LVM) for the database.

Create the volume group with four Vdisks:

#vgcreate -s 32 /dev/vgdb2sap /dev/disk/disk40 /dev/disk/disk41 /dev/disk/disk58 /dev/disk/disk59

-s: sets the physical extent size in megabytes (MB), in the range of 1 to 256.

Create the logical volume with stripe size of 1024k and striping across 4 disks:

#lvcreate -i 4 -I 1024 -L 10000 -n lvol1 /dev/vgdb2sap

-i: number of disks to stripe across

-I: stripe size in kilobytes

-L: size of logical volume in MB

Create the file system for DB2 database:

# mkfs -F vxfs -o bsize=8192,largefiles /dev/vgdb2sap/rlvol1

bsize: the block size for files on the file system; it represents the smallest amount of disk space allocated to a file.

Mount the file system for DB2 database:

#mount /dev/vgdb2sap/lvol1 /db2/TBK

The DB2 database was created on the file system built on the logical volume in volume group vgdb2sap, which is striped across four LUNs. The following is the DB2 create database statement:

#db2 create database TBK automatic storage yes on /db2/TBK/sapdata1, /db2/TBK/sapdata2, /db2/TBK/sapdata3, /db2/TBK/sapdata4 dbpath on /db2/TBK pagesize 16k dft_extent_sz 2 catalog tablespace managed by automatic storage

create tablespace TBK#STABD in nodegroup SAPNODEGRP_TBK extentsize 2 prefetchsize automatic dropped table recovery off no file system caching;

These are the four Vdisks presented from the two EVA8400s and allocated for database tables. Each Vdisk is managed by a separate controller on the EVA, thus improving performance. Multiple VxFS file systems were created for database tablespaces. The largest table, PAYMITEM, was moved to tablespace payitemtbsp using FILE containers on /db2/TBK/sapdata5, /db2/TBK/sapdata6, /db2/TBK/sapdata7, and /db2/TBK/sapdata8 for better performance. The following are the commands to create the file system and tablespace:

Create the file system for DB2 tablespaces and mount it:

# mkfs -F vxfs -o bsize=8192,largefiles /dev/rdisk/disk111
# mount /dev/disk/disk111 /db2/TBK/sapdata5

Create the tablespace:

# db2 create tablespace payitemtbsp in nodegroup SAPNODEGRP_TBK pagesize 16k managed by database using (FILE '/db2/TBK/sapdata5', 150000M) using (FILE '/db2/TBK/sapdata6', 150000M) using (FILE '/db2/TBK/sapdata7', 150000M) using (FILE '/db2/TBK/sapdata8', 150000M) on node(0) extentsize 2 prefetchsize automatic dropped table recovery off autoresize yes maxsize none no file system caching;
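The resulting container layout can then be checked with standard DB2 tooling, for example:

# db2 list tablespaces show detail
# db2pd -db TBK -tablespaces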

LVM layout

EVA8400 storage controllers offer excellent RAID striping directly in the controller firmware. Use the striping that EVA8400 controllers provide. If you provide more than one LUN to a DB2 database server or database partition, use DB2 fine-grained striping between containers.

Because two levels of striping are adequate for all systems, avoid using a third level of striping, such as the operating system’s logical volume manager (LVM) striping.

In the case of DMS, DB2 is able to stripe internally between containers of a single tablespace, but using LVM for this purpose is more flexible and easier to administer. Striping data across multiple LUNs reduces congestion caused by nearly concurrent access to the data on the disk. The more stripes, the better the read performance and, as such, the shorter the response time for a database query. In benchmark environments, separate spindles are assigned to each stripe unit, using only a fraction of the available space of each disk. In real-world database environments, any available space might be allocated in a more random, uncontrolled fashion.

File system layout

The mkfs (make file system) command will use the underlying LVM volume layout and size the new file system accordingly. On the command line the following command has to be executed:

# mkfs -F vxfs -o bsize=8192,largefiles /dev/vg02/rlvol1

bsize: the block size for files on the file system; it represents the smallest amount of disk space allocated to a file.

This creates a new file system on the logical volume named rlvol1. The block size is increased from the default of 1024 bytes to the maximum value of 8192 bytes.

DB2 9.7 supports Concurrent I/O (CIO). CIO allows multiple threads to simultaneously perform reads and writes on a shared file and locks the file in exclusive mode only during metadata (inode) update operations. Locking is taken care of by the database. CIO can be turned on by using the "NO FILE SYSTEM CACHING" option of the create/alter tablespace commands.

db2> create tablespace payitemtbsp in nodegroup SAPNODEGRP_TBK pagesize 16k managed by database using (FILE ‘/db2/TBK/sapdata5’, 150000M) using (FILE ‘/db2/TBK/sapdata6’,150000M) using (FILE ‘/db2/TBK/sapdata7’, 150000M) using (FILE ‘/db2/TBK/sapdata8’, 150000M) on node(0) extentsize 2 prefetchsize automatic dropped table recovery off autoresize yes maxsize none no file system caching;

The above command will create a tablespace with CIO turned on. CIO can be enabled on an existing tablespace using the command:

db2>ALTER TABLESPACE PAYITEMTBSP NO FILE SYSTEM CACHING;
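Whether file system caching is disabled for a tablespace can be confirmed, for example, with db2pd (see the FSC column in its tablespace output) or a tablespace snapshot:

# db2pd -db TBK -tablespaces
# db2 get snapshot for tablespaces on TBK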

On HP-UX systems, install the OnlineJFS product to enable CIO. OnlineJFS on HP-UX is a licensed product and must be purchased before these features can be used. The DB2 best practice is to use CIO. If CIO is not enabled, Direct I/O (DIO) is automatically turned on.

# swlist | grep JFS
B3929GB    B.05.01.02    OnlineJFS for Veritas File System 5.0.1 Bundle

File systems are easier to manage than raw devices because a single file system can be used as a container for multiple tablespaces.

DB2 registry variables

The NUM_IOCLEANERS, NUM_IOSERVERS, and PREFETCHSIZE configuration parameters were left at their default value of AUTOMATIC. This setting gave the best performance in the test environment. The DB2 best practice is to set the number of cleaners equal to the number of physical cores rather than the number of logical cores.
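For reference, these parameters (and the database-level prefetch default, DFT_PREFETCH_SZ) can be set explicitly or restored to AUTOMATIC with db2 update db cfg; the statement below is illustrative:

# db2 update db cfg for TBK using NUM_IOCLEANERS AUTOMATIC NUM_IOSERVERS AUTOMATIC DFT_PREFETCH_SZ AUTOMATIC

The registry variable shown next controls a separate, VxFS-specific pre-allocation behavior, explained below.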

#db2set DB2_USE_FAST_PREALLOCATION=OFF

When DB2 “fast pre-allocation” is enabled (which it is by default), space is reserved on the VxFS file system for new or extended files via the VX_GROWFILE mechanism. While this allows for the rapid creation of large files, it does impose a small additional overhead every time a page within the pre-allocated region is first written. The overhead can become more pronounced when the “first write” may happen for many pages in parallel, as could happen with aggressive DB2 page cleaning. For certain workloads, such as tables intended to grow continuously due to frequent inserts, this “small overhead” can become a significant bottleneck. The use of fast pre-allocation is therefore not recommended in scenarios where run-time insert performance is critical.
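Note that db2set changes such as the one above take effect only after the instance is restarted:

# db2stop
# db2start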

Implementing a proof-of-concept

As a matter of best practice for all deployments, HP recommends implementing a proof-of-concept using a test environment that matches as closely as possible the planned production environment. In this way, appropriate performance and scalability characterizations can be obtained. For help with a proof-of-concept, contact an HP Services representative (http://www.hp.com/large/contact/enterprise/index.html) or your HP partner.

Summary

DB2 database performance can be improved by utilizing the disk array capabilities of the EVA8400 (using both controllers), distributing the load across multiple file systems and containers, and striping across multiple LUNs for better I/O response time. Configuring the database system with CIO, using NO FILE SYSTEM CACHING, provides near-raw performance for I/O-intensive workloads. DB2 autonomic features help to make database administration as easy and low-cost as possible, and they leverage the flexibility and performance of the database.

For more information

Performance improvements using Concurrent I/O on HP-UX 11i v3 with OnlineJFS 5.0.1 and the HP-UX 11i Logical Volume Manager, http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA1-5719ENW

HP 4400/6400/8400 Enterprise Virtual Array configuration,

IBM DB2 9.7 for Linux, UNIX®, and Windows® Information Center,

LibcEnhancement library,


© Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Windows is a U.S. registered trademark of Microsoft Corporation. UNIX is a registered trademark of The Open Group. Intel, Xeon and Itanium are trademarks of Intel Corporation in the U.S. and other countries.

4AA3-8389ENW, Created November 2011
