
Tivoli Storage Productivity Center 4.2
Repository Whitepaper
October 2012
Version 1.2
Kai Boerner,
Andrei Bulie,
Cezar Constantinescu,
Martin Eggers,
Dragos Haiduc,
Werner Link,
Maggie Phung,
Oliver Roehrsheim,
Patrick Schaefer,
Marcus Siegmund,
Helene Wassmann
Legal Notice
© Copyright IBM Corporation 2011, 2012
IBM Research & Development Germany
Location Mainz
Hechtsheimer Strasse 2
55131 Mainz
Germany
October 2012
All Rights Reserved
IBM, the IBM logo, ibm.com, Tivoli, IBM DB2 and Tivoli Storage Productivity Center are
trademarks or registered trademarks of International Business Machines Corporation in the
United States, other countries, or both.
If these and other IBM trademarked terms are marked on their first occurrence in this information
with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law
trademarks owned by IBM at the time this information was published. Such trademarks may also
be registered or common law trademarks in other countries.
A current list of IBM trademarks is available on the Web at "Copyright and trademark
information" at
ibm.com/legal/copytrade.shtml
Other product, company or service names may be trademarks or service marks of others.
Table of Contents
1 Introduction ................................................................................................................................... 4
1.1 Tivoli Storage Productivity Center ......................................................................................... 4
1.2 Who should read this Document ............................................................................................. 5
2 Estimating and controlling the size of the TPC database ............................................................. 6
2.1 Guide through the calculation ................................................................................................. 6
2.1.1 Identifying key calculation factors .................................................................................. 6
2.1.2 Identifying keepalive time of historical data ................................................................... 7
2.1.3 Estimate database space consumption ............................................................................. 7
2.1.4 Setting up database space on various filesystems and disks ............................................ 7
2.2 Controlling through Resource Retention ................................................................................ 8
2.2.1 Resource History Retention ............................................................................................. 8
2.2.2 Removed Resource Retention ........................................................................................ 11
2.2.3 History Aggregator ........................................................................................................ 13
2.3 Estimate and control size of Historical Performance Management data .............................. 14
2.3.1 Understanding Performance Data size dependencies .................................................... 15
2.3.2 Performance Data History Retention ............................................................................. 16
2.3.3 Storage Subsystem Performance Data Sizing ................................................................ 17
2.3.4 Fabric Performance Management Data Sizing .............................................................. 24
2.3.5 Snapshot Performance Management Data Sizing .......................................................... 26
2.4 Estimate sizing of TPC for Data ........................................................................................... 30
2.4.1 Sizing the repository for TPC for Data requirements .................................................... 30
2.4.2 Sizing the repository for DB Scan requirements ........................................................... 34
2.5 Estimate sizing of TPC Alerts .............................................................................................. 36
2.5.1 Alerts ............................................................................................................................. 36
2.5.2 Database space used by Alerts ....................................................................................... 37
3 Database configuration ................................................................................................................ 39
3.1 Database configuration through TPC installer ..................................................................... 39
3.2 Tuning parameters for fresh database installations .............................................................. 43
3.2.1 Database instance parameters ........................................................................................ 44
3.2.2 Database parameters ...................................................................................................... 44
3.2.3 References to DB2 configuration parameters ................................................................ 47
4 Optimizing use of physical storage for DB2 databases ............................................................... 48
4.1 Placing the DB2 transaction logs onto a separate disk ......................................................... 48
4.2 Setting the DB2 database on a separate disk ......................................................................... 49
4.2.1 Moving an "Automatic Storage" database to one or more different disks .................... 54
4.2.2 Expanding an "Automatic Storage" database with an additional disk .......................... 54
4.2.3 Expanding "Database managed (DMS) tablespaces" with an additional disk .............. 55
1 Introduction
In a world with networked and highly virtualized storage, database storage design can appear as a
daunting, complex task for a database administrator (DBA) or system architect to get right.
Poor database storage design can have a significant negative impact on a database server. CPUs
are so much faster than physical disks that it is not uncommon to find poorly performing database
servers that are significantly I/O bound and underperforming by many times their true potential.
The good news is that it is more important to not get database storage design wrong than it is to
get it perfectly right. Trying to understand the innards of the storage stack and hand-tuning which
database tables and indexes should be stored on which part of what physical disk is an exercise
that is neither generally achievable nor maintainable (by the average DBA) in today's virtualized
storage world.
Simplicity is the key to ensuring good database storage design. The basics involve ensuring an
adequate number of physical disks to keep the system from becoming I/O bound.
This document provides some guidelines and best practices regarding IBM DB2 repository
tuning for IBM Tivoli Storage Productivity Center.
1.1 Tivoli Storage Productivity Center
IBM® Tivoli® Storage Productivity Center (TPC) is a storage infrastructure management
software product that can centralize, automate, and simplify the management of complex and
heterogeneous storage environments. The generic term TPC is used further in this document, and
is intended to refer specifically to the TPC Edition of release 4.2.x. TPC uses the IBM DB2 Database
Management System (DBMS) as its persistence layer to store the various objects and corresponding
object attributes. Since IBM DB2 was developed to be used by many different applications,
each with its own requirements towards a DBMS, IBM DB2 offers a rich set of
parameters to explicitly tune the database to fulfill application-specific requirements.
1.2 Who should read this Document
This document enables the TPC administrator to control and estimate the repository database of
IBM Tivoli Storage Productivity Center version 4.2.x. It is also for those who are responsible for
administering and tuning the underlying IBM DB2 for best performance.
This document assumes that TPC is used as the application-layer software, leveraging IBM DB2
as the DBMS. General approaches for DB2 repository performance improvement are described,
including how to set DB2 parameters to optimize DB2 for your storage environment.
One major topic of this document is to enable the administrator of TPC to control and estimate
the amount of physical disk space needed for all the historical data TPC collects.
Chapter 2 enables the TPC administrator to estimate how much space the TPC database needs for
historical data, performance data and filesystem data.
Chapters 3 and 4 enable the TPC administrator to move the TPC database to specific disk
drives and configure DB2 to improve performance and space usage.
Readers should be familiar with the following topics:
Microsoft® Windows®, Linux and UNIX® operating systems
Database architecture and concepts
Security management aspects
User authentication and authorization
If you are enabling Secure Sockets Layer (SSL) communication, you should also be familiar with
the SSL protocol, key exchange (public and private), digital signatures, and certificate authorities.

2 Estimating and controlling the size of the TPC database
This chapter describes some topics you should know and understand regarding how TPC stores
and maintains data collected by various monitoring jobs in the database (such as Performance
Monitors or Filesystem Scans). Some components within TPC 4.2 are known to produce large
amounts of data – especially for the purpose of visualizing historical data. In the following
sections you will find samples and some information that will help you to estimate the space
requirements in your specific environment for these components.
The chapter "Controlling through Resource Retention" explains all the details required to
understand how the required TPC 4.2 repository database space will grow over time for these
tasks and how you can effectively limit the growing space requirements to your needs.
2.1 Guide through the calculation
2.1.1 Identifying key calculation factors
Before you can start the entire calculation, check your existing and planned environment for a set
of factors needed for the space calculation. Identify the volumes, pools and disks of each
subsystem of a specific type, the number of switch ports, and the files and settings for which you
would like to keep entity history and performance history data.
For estimating the database space used by subsystems, you have to gather the following information
from your SAN environment:
Amount of LUNs:
IBM ESS
IBM DS8000
IBM XIV
SMI-S based subsystems (IBM DS4000, EMC, HDS, HP, Netapp, ...)
Amount of vdisks, mdisks and nodes:
IBM SVC
IBM Storwize
Amount of fibre channel ports:
SMI-S based SAN switches
Amount of filesystems, users and different file types:
Computers which will be scanned by the Storage Resource Agent for filesystem data
Amount of tablespaces:
Computers which will be scanned by the Storage Resource Agent for database data
2.1.2 Identifying keepalive time of historical data
Before you proceed, make yourself aware of which historical data you would like to keep, and for
how long – capacity information, entity data and especially performance metrics. This
information is stored by TPC at different levels of detail (per scan/sample, per hour, per day, per
week and per month). More details on how to set this in TPC are documented in chapter 2.2.
Keep in mind that storing all this information consumes database space, which will grow over time
as the history gets filled, and influences database and TPC performance. So clarify the following
questions for your purposes:
For how long do you need to know capacity, entity and performance data?
Do you really need sample or other fine-granularity data for a long period of days, or is
daily, weekly or monthly data perhaps enough?
Keep in mind that these settings apply to data collections of all subsystems, computers and
switches, including the devices which you will add in the future!
2.1.3 Estimate database space consumption
With the key factors of your SAN environment identified and the knowledge of how
long you need to keep the data, you can now move on and calculate the estimated database space
consumption. Chapter 2.3 covers details and example calculations for various subsystems,
virtualizers and switches. Walk through each of these chapters to estimate, for each SAN entity,
the amount of database space used for storing performance metric data and the related history
snapshot data.
Use the detailed calculations documented in chapter 2.4 to identify the amount of space needed
for storing computer filesystem and database data.
Chapter 2.5 describes the space typically needed for the alerting functionality of TPC. One
million alerts could use up to 1 GB of space. To be safe, you should always add a large amount of
contingency space (in GB) for general usage, alerts and devices not yet managed by TPC.
2.1.4 Setting up database space on various filesystems and disks
At this point you should know how much database space is estimated for performance data,
historical capacity and entity data, and how often and how fast you would like to access each of them.
So it is time to think about how to place the database data on storage with the appropriate disk
performance and the ability to increase disk space. Chapters 3 and 4 provide more details about how
you can use the TPC installation to deploy several tablespaces and their containers on different drives,
and how to move them later, after the installation is done.
2.2 Controlling through Resource Retention
2.2.1 Resource History Retention
History data is the largest and most varied amount of data in the TPC repository, and therefore
has the most significant influence on the size and performance of the repository database.
The resource history retention that you can see in Illustration 1 controls how long to keep the
historical information. By specifying a number of days, weeks, or months for each
element on this window, you can control the amount of data that is retained and available for
historical analysis and charting. The longer you keep the data, the more informative your analysis
is over time.
When these settings are too high, they can have a negative effect on the amount of time that it
takes to generate reports. If report generation starts to slow down to an unacceptable level, this
might be due to high resource history retention settings.
Note: If you do not select a Performance Monitor check box, the data is never discarded and
generates a great amount of data in the database.
If you do not want to keep any history or retained data, press the "No History" button, or select
individual check boxes and enter 0 days to keep.
Illustration 1 shows the default configuration. Adjust those values according to your specific
environment.
Sample data is the data that is collected at the specified interval length of the monitor; for
example, you can collect performance data every five minutes. The default retention period for
sample data is 14 days, which means that by default, TPC keeps the individual five-minute
samples for 14 days before TPC purges the samples. TPC summarizes the individual samples into
hourly and daily data. For example, TPC saves the sum of 12 of the five-minute samples as an
hourly performance data record, and TPC saves the sum of 288 of these samples as a daily
performance data record. The default retention periods for hourly and daily data are 30 days and
90 days, respectively.
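As a quick sanity check, the summarization and retention defaults described above can be turned into a per-volume record-count estimate. This is only a sketch under the stated defaults (five-minute samples kept 14 days, hourly averages 30 days, daily averages 90 days); the function name is our own, not part of TPC.

```python
# Sketch: records retained per monitored volume under the default
# retention settings (5-minute samples for 14 days, hourly averages
# for 30 days, daily averages for 90 days).

def retained_records_per_volume(interval_min=5, sample_days=14,
                                hourly_days=30, daily_days=90):
    samples_per_day = 24 * 60 // interval_min   # 288 samples/day at 5 minutes
    return {
        "sample": samples_per_day * sample_days,  # individual samples kept
        "hourly": 24 * hourly_days,               # one summarized record per hour
        "daily": daily_days,                      # one summarized record per day
    }

counts = retained_records_per_volume()
# {'sample': 4032, 'hourly': 720, 'daily': 90}
```

Raising any of the retention settings scales the corresponding record count (and the repository space it occupies) linearly.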
Illustration 1: Resource History Retention
TPC for Databases has its own resource history retention configuration. You can find the
configuration panel at Administrative Services > Configuration > Resource History Retention for
Databases. Illustration 2 shows the default settings.
Illustration 2: Resource History Retention for Databases
TPC for Data has an additional resource history retention configuration. You can see this setting
in each profile (Data Manager > Monitoring > Profiles). Profiles are assigned to scans. Illustration
3 shows an example. You can set the retention period per sample, week, and month.
Illustration 3: Profile defined for Statistical Data History
2.2.2 Removed Resource Retention
Removed Resource Retention controls how long an entity resides in the TPC repository after it
has been removed from the environment. Such entities are displayed with a status of "missing" in
the user interface. This affects entities like subsystems, computers, fabrics, switches etc., because
when these are removed, all data related to that system is removed, too.
For example, if a volume has been deleted, then after the next probe the storage subsystem health
indicates a warning, because a volume is missing. This health status remains until you remove the
missing entity manually or the Removed Resource Retention time has passed. You can remove
only missing entities from the TPC repository; the exception is a storage subsystem, which you
can always remove from the TPC repository: select Disk Manager > Storage Subsystems, click the
subsystem that you want to remove, and select Remove.
Select Administrative Services > Configuration > Removed Resource Retention to reach the
control panel. Illustration 4 displays the default settings.
Illustration 4: Removed Resource Retention
TPC for Databases has its own removed resource retention configuration. Select Administrative
Services > Configuration > Removed Resource Retention for Databases to access the
configuration panel. Illustration 5 shows the default settings.
Illustration 5: Removed Resource Retention for Databases
2.2.3 History Aggregator
The History Aggregator enables jobs to sum up data in an enterprise environment for historical
reporting purposes, such as summary statistics and user space usage statistics, and is extremely
important. Check the state of the History Aggregator regularly; improper maintenance in this area
can negatively impact historical data trending. Run the History Aggregator daily, which is the
default. Configure the History Aggregator at Administrative Services > Configuration > History
Aggregator. Illustration 6 shows the configuration panel. The History Aggregator is used within
TPC for Data, but it also removes old Alert log records. You do not use the History Aggregator to
create hourly and daily performance samples.
Illustration 6: History Aggregator
2.3 Estimate and control size of Historical Performance Management
data
IBM Tivoli Storage Productivity Center Performance Manager produces a lot of historical data
by collecting it from the managed subsystems and switches, and stores it in the TPC
database. The size of this data grows over time until it reaches the age you defined previously in
Resource Retention. Estimating how much disk space is needed for this data in the database
allows you to avoid running out of disk space, which would severely impact the operation of TPC.
So if your space is limited and cannot be increased easily, take some time and calculate your
estimated database size requirements.
Performance data is historical in nature: it is collected periodically and stored in the
TPC repository database.
The sample performance data is obtained directly from the individual devices, and the delta
between two subsequent samples is saved into the TPC repository tables. Each row saved in these
sample performance data tables is associated with a particular resource of the storage subsystem
at a particular date and time.
Over time, TPC creates hourly and daily averages of this data. The averaged data requires less
storage space in the repository over a longer period of time. It also makes reporting over a longer
time period more meaningful and easier to display.
2.3.1 Understanding Performance Data size dependencies
There are a number of performance counters that each device type keeps track of. Since counters
are kept for diverse devices and device components, usually including individual volumes, the
database size can grow dramatically in short periods of time. The main factors in database size are
the number of volumes configured on the particular storage subsystem devices, and the frequency
with which historical performance data is collected and saved.
The sample data, based on user settings for each particular device, can be collected at 5, 10, 15,
20, 30 or 60 minute time intervals. For devices with a collection frequency of 5-minute
intervals, TPC will collect and store in the repository 60/5 × 24 = 288 samples per day. This
consumes much more repository storage than if the user chose to sample the same devices at a 30-minute
interval; the latter requires TPC to collect and store only 60/30 × 24 = 48 samples per day
per device.
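The interval arithmetic above can be captured in a one-line helper; this is a sketch for illustration, and the function name is our own.

```python
# Samples stored per day per resource for each supported collection interval.
def samples_per_day(interval_min):
    return 60 // interval_min * 24

for interval in (5, 10, 15, 20, 30, 60):
    print(f"{interval:2d} min -> {samples_per_day(interval)} samples/day")
# 5-minute sampling stores 288 rows per day per resource; 30-minute sampling stores 48.
```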
In order to avoid filling up the available disk space, or overwhelming the database management
system with too much data, it is necessary to purge or consolidate the existing data periodically.
Data consolidation is handled via the summarization of performance data into hourly and daily
records, but the sample data tables should be subject to strict size control. It is important to recognize
that database performance is impacted by database size; in particular, database query and delete
operations show linear degradation with increased table size. The best answer to this issue is
to adopt a rigid database purge policy to ensure that older, less important performance
data is periodically deleted from the database. TPC has a configuration panel that controls how
much history you keep over time.
2.3.2 Performance Data History Retention
Illustration 7 shows the TPC panel for setting the history retention for performance monitors as
well as other types of collected statistics.
Illustration 7: Performance Data History Retention
Important: The history aggregation process is a global setting, which means that the values set
for history retention are applied to all performance data from all devices. You cannot set history
retention on an individual device basis.
Per performance monitoring task
This value defines the number of days that TPC keeps individual data samples for all of the
devices that send performance data. The example shows 14 days. When per-sample data reaches
this age, TPC permanently deletes it from the database. Increasing this value allows the user to
look back at device performance at the most granular level, at the expense of consuming more
storage space in the TPC repository database. Data held at this level is good for plotting
performance over a small time period, but not for plotting data over many days or weeks because
of the number of data points.
Consider keeping more data in the hourly and daily sections for longer time period reports.
Hourly
This value defines the number of days that TPC holds performance data that has been
grouped into hourly averages. Hourly average data potentially consumes less space in the
database.
Daily
This value defines the number of days that TPC holds performance data that has been grouped
into daily averages. After the defined number of days, TPC permanently deletes records of the
daily history from the repository. Daily averaged data requires 24 times less storage space in the
database compared to hourly data. This is at the expense of granularity; however, plotting
performance over a longer period (perhaps weeks or months) becomes more meaningful.
2.3.3 Storage Subsystem Performance Data Sizing
There is a significant difference in the sizing calculation between the following three device type
groups:
IBM and non-IBM storage subsystems (ESS, DS8000, DS4000, DS5000, EMC, HP, HDS, ...)
IBM storage virtualizers (SVC, Storwize)
IBM XIV storage subsystem
For this reason, the sizing tables are separated.
Note: The final figures are pure performance data sizing and do not include any
additions used by DB2 for index spaces and other database overhead.
The DB2 index spaces may vary significantly based on the current DB2 version and its setup.
Sizing the repository for IBM and non-IBM storage subsystems
For these subsystem families and most of the non-IBM subsystems, the biggest component is the
volume, and the biggest performance sample data will be that of volumes.
Table 1 shows working examples for four storage subsystems in an environment and the amount
of storage space that performance collection uses for each example. The total figure
represents the amount of storage needed for the "per sample" data. Continue through this section
to calculate the complete amount of storage needed for the hourly and daily history types.
Calculation method example for ESS:
60/5 × 24 = 288 samples per day × 1,500 volumes × 349 bytes per sample = 150,768,000 bytes
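The per-sample arithmetic used in this example and in Table 1 can be sketched as follows (volume counts and record sizes are taken from the table; the conversion of 1 MB = 1,048,576 bytes reproduces the 143 MB ESS figure; the function name is our own):

```python
# Daily "per sample" repository bytes: samples/day x volumes x record size.
def daily_sample_bytes(volumes, interval_min, record_bytes):
    samples = 60 // interval_min * 24
    return samples * volumes * record_bytes

ess = daily_sample_bytes(1_500, 5, 349)
print(ess, "bytes =", ess // 1_048_576, "MB per day")   # 150768000 bytes = 143 MB per day
```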
(a) Subsystem type | (b) Number of volumes (LUNs) sampled | (c) Performance collection interval (minutes) | (d) Performance data record size | (e) Daily amount of sample data collected: 60 / (c) × 24 × (b) × (d) = (e)
ESS     | 1,500 | 5 | 349 bytes | 143 MB
DS8000  | 1,500 | 5 | 605 bytes | 235 MB
DS4000  |   500 | 5 | 349 bytes |  50 MB
Non-IBM | 1,000 | 5 | 349 bytes |  95 MB
Daily totals (f): 523 MB
Sample History Retention per performance monitor task = 14 days
Total MB (g) = (f) × 14 days: 7,322 MB
Table 1: Per sample repository database sizing for ESS, DSxxxx, and non-IBM subsystems
You can see that the amount of space that is required increases dramatically as the sample rate
increases. Remember this when you plan the appropriate sample rate for your environment.
Next, use Table 2 to calculate the amount of storage that is needed to hold the performance data
for the hourly and daily history averages. When complete, add together the totals from Table 1
and Table 2 to give you the total repository requirement for these types of storage subsystems, as
seen in Table 3.
Calculation method example for ESS:
1,500 volumes × 349 bytes per sample × 24 = 12,564,000 bytes per day for the hourly
history average
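The hourly and daily sizing in Table 2 follows the same pattern: one averaged record per volume per hour (24 per day) and one per volume per day, each multiplied by the default retention period. A sketch, using the figures from the ESS row (function names are our own):

```python
# Hourly data: one averaged record per volume per hour -> 24 records/day.
# Daily data: one averaged record per volume per day.
def hourly_bytes_per_day(volumes, record_bytes):
    return volumes * record_bytes * 24

def daily_bytes_per_day(volumes, record_bytes):
    return volumes * record_bytes

ess_hourly = hourly_bytes_per_day(1_500, 349)   # 12,564,000 bytes/day (about 12 MB)
ess_daily = daily_bytes_per_day(1_500, 349)     # 523,500 bytes/day (about 0.5 MB)
retained = ess_hourly * 30 + ess_daily * 90     # bytes held at the default 30/90-day retention
```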
(a) Subsystem name | (b) Number of volumes (LUNs) sampled | (c) Performance data record size | (d) Hourly requirement: (b) × (c) × 24 | (e) Daily requirement: (b) × (c)
ESS     | 1,500 | 349 bytes | 12 MB | 0.5 MB
DS8000  | 1,500 | 605 bytes | 20 MB | 0.82 MB
DS4000  |   500 | 349 bytes |  4 MB | 0.17 MB
non-IBM | 1,000 | 349 bytes |  8 MB | 0.33 MB
Daily totals: 44 MB | 1.82 MB
History Retention hourly = 30 days: (f) 1,320 MB
History Retention daily = 90 days: (g) 164 MB
Total MB ((f) + (g)): 1,484 MB
Table 2: Hourly and daily repository database sizing for ESS, DSxxxx, and non-IBM storage
Table 3 shows the total TPC repository space required for ESS, DSxxxx, and non-IBM
storage subsystems. The total TPC repository space is the sum of the totals of both Table 1 and
Table 2.
Total space required:
7,322 MB
1,484 MB
8,806 MB
Table 3: Total TPC repository space required for ESS, DSxxxx, and non-IBM subsystems
Sizing the repository for SVC and Storwize V7000
There are a few topology basics which differentiate the SVC/Storwize V7000 from any other
storage device and affect the entire performance management view of that device.
The SAN Volume Controller is a single point of control for disparate, heterogeneous storage
resources. It can be expanded up to eight nodes in one cluster. Highly available I/O Groups are
the basic configuration element of an SVC cluster. An I/O Group is formed by combining a
redundant pair of SVC nodes. SVC submits I/O to the back-end (MDisk) storage, which could
be an ESS or DS volume, a FAStT volume, or some other generic volume allocated from some
other storage subsystem. The SVC nodes interact with the storage subsystems in the same way as
any direct-attached host. SVC references to MDisks are analogous with host-attached LUNs in a
non-SVC environment.
TPC collects and stores the MDisk level performance statistics for SVC devices. Each MDisk row
corresponds to the performance data associated with a single managed disk configured for the
SVC machine, measured from a single SVC node, for a single data collection interval. This means
there is a row in the TPC repository database per collection interval, per node, per MDisk. Therefore
the number of MDisk performance data rows can be counted as follows:
Mdisk rows = Mdisks + Mdisks × nodes = Mdisks × (nodes + 1)
TPC collects the Vdisk level performance statistics for SVC devices as well. The Vdisk entity
corresponds to the performance data associated with a single virtual disk configured for the SVC
machine, measured from a single SVC node, in the responsible I/O Group, for a single data
collection interval. For each Vdisk there will be a row in the repository per collection interval,
per I/O Group node. Therefore the counter for Vdisk rows is:
Vdisk rows = Vdisks + Vdisks × 2 (nodes in I/O Group) = Vdisks × 3
Table 4 shows working examples for two SVCs with different configurations (number of volumes,
performance data sample interval, etc.). The total figure represents the amount of storage needed
for the "per sample" data. Continue through this section to calculate the complete amount
of storage needed for the hourly and daily history types.
Calculation method example for the SVC named SVC-Test:
60/5 × 24 = 288 samples per day × 3,045 data records(*) × 250 bytes per sample = 219,240,000
bytes
For data records(*) use the following formula:
(a) Number of Vdisks × 3 + (b) Number of Mdisks × ((c) Number of nodes + 1)
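The row-count formulas and the SVC-Test example above can be combined into one sketch (figures for SVC-Test taken from Table 4: 1,000 Vdisks, 15 MDisks, 2 nodes, 5-minute interval; function names are our own):

```python
# Vdisks contribute 3 rows each; MDisks contribute one row per node
# plus one cluster-level row, i.e. Mdisks * (nodes + 1).
def svc_data_records(vdisks, mdisks, nodes):
    return vdisks * 3 + mdisks * (nodes + 1)

def svc_daily_sample_bytes(vdisks, mdisks, nodes, interval_min, record_bytes=250):
    samples = 60 // interval_min * 24
    return samples * svc_data_records(vdisks, mdisks, nodes) * record_bytes

records = svc_data_records(1_000, 15, 2)          # 3045 data records
daily = svc_daily_sample_bytes(1_000, 15, 2, 5)   # 219240000 bytes per day
```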
SVC | (a) Number of Vdisks (LUNs) | (b) Number of MDisks | (c) Number of nodes | (d) Performance collection interval (min) | (e) Performance data record size | Daily amount of data collected: 60 / (d) × 24 × (e) × data records(*)
SVC-Test | 1,000 | 15 | 2 |  5 | 250 bytes | 219,240,000
SVC-Prod | 3,000 | 15 | 6 | 15 | 250 bytes | 218,520,000
Daily totals (f): 437,760,000
Sample History Retention per performance monitor task = 14 days: (g) 6,128,640,000
Total MB = ((f) + (g)) / 1,000,000: 6,567 MB
Table 4: Per sample repository sizing for SVC
Next, use Table 5 to calculate the amount of storage that is needed to hold the performance
data for the hourly and daily history averages.
Calculation method example for the SVC named SVC-Test:
3,045 data records(*) × 250 bytes per sample × 24 = 18,270,000 bytes for the hourly history average
For data records(*) use the following formula:
Number of Vdisks × 3 + Number of Mdisks × (Number of nodes + 1)
(a) SVC | (b) data records(*) | (c) Performance data record size | (d) Hourly requirement: (b) × (c) × 24 | (e) Daily requirement: (b) × (c)
SVC-Test | 3,045 | 250 bytes | 18,270,000 |    86,250
SVC-Prod | 9,105 | 250 bytes | 54,630,000 | 2,276,250
Daily totals: 72,900,000 | 2,362,500
History Retention hourly = 30 days: (f) 2,187,000,000
History Retention daily = 90 days: (g) 212,625,000
Total MB = ((f) + (g)) / 1,000,000: 2,400 MB
Table 5: Hourly and daily repository database sizing for SVC
Table 6 shows the total TPC repository space required for SVC subsystems. The total TPC
repository space is the sum of the totals of both Table 4 and Table 5.
Total space required:
6,567 MB
2,400 MB
8,967 MB
Table 6: Total TPC repository space required for SVC subsystems
Sizing the repository for XIV
For each single XIV volume there will be a row stored in the TPC repository database, measured for a
single data collection interval. Since XIV storage is relatively dynamic storage to TPC, it is not
easy to give an exact performance data row sizing. In our labs we measured between
80 and 200 bytes per row for XIV devices; this depends on how many interface modules are
used for each individual volume. We use 200 bytes in our calculation, to keep it simple
and to provide a safe value for your capacity planning.
The table below shows examples for a storage subsystem in a production environment and the
amount of storage space that performance collection is using for each example. The total
figure represents the amount of storage needed for the "per sample" data. Continue through
this section to calculate the complete amount of storage needed for the hourly and daily history types.
Calculation method example for XIV-Production:
60/5 × 24 = 288 samples per day × 500 volumes × 200 bytes per sample = 28,800,000
bytes
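The XIV estimate, including the 20% uplift for DB2 indexes and other overhead used in Table 7, can be sketched as follows (the safe 200-byte record size and the 1,024,000-byte MB divisor come from the text and table; function names are our own):

```python
# Daily "per sample" bytes for XIV, using the safe 200-byte record size.
def xiv_daily_sample_bytes(volumes, interval_min, record_bytes=200):
    samples = 60 // interval_min * 24
    return samples * volumes * record_bytes

# 14-day sample retention, plus 20% for DB2 indexes/overhead, reported in MB
# using the 1,024,000-byte divisor from Table 7.
def sample_retention_mb(daily_bytes, days=14):
    return daily_bytes * days * 12 // 10 / 1_024_000

prod = xiv_daily_sample_bytes(500, 5)   # 28800000 bytes per day
mb = sample_retention_mb(prod)          # 472.5 MB for this one system
```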
Per sample repository database sizing for XIV
(a) Subsystem name | (b) Number of volumes (LUNs) sampled | (c) Performance collection interval (minutes) | (d) Performance data record size (bytes) | (e) Daily amount of data collected: (60/(c)) x 24 x (b) x (d)
XIV_production | 500 | 5 | 200 | 28,800,000
XIV_Test | 320 | 5 | 200 | 14,400,000
(f) Total required per day: 43,200,000
(g) Number of days to retain per sample = 14 days
( (f) x (g) ) x 1.5 / 1,000,000 = 907 MB
Table 7: Per sample repository sizing for XIV
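The per-sample sizing above (samples per day times volumes times record size, held for the retention period, plus the 50% index overhead from the note that follows) can be sketched as:

```python
def samples_per_day(interval_minutes):
    # e.g. a 5-minute interval yields (60/5) * 24 = 288 samples per day
    return (60 // interval_minutes) * 24

def per_sample_bytes_per_day(volumes, interval_minutes, record_size):
    # column (e): daily amount of per-sample performance data
    return samples_per_day(interval_minutes) * volumes * record_size

def per_sample_total_mb(daily_bytes, retention_days, overhead=1.5):
    # overhead=1.5 adds the 50% allowance for DB2 indexes and other overhead
    return daily_bytes * retention_days * overhead / 1_000_000
```

With the production example (500 volumes, 5-minute interval, 200 bytes, 14 days retention plus the Test subsystem's daily amount) this reproduces the 907 MB total of Table 7.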
Note: Table 7 includes an additional 50%. This amount provides for DB2 table indexes and other database overhead.
You can see that the amount of space that is required increases dramatically as the sample rate increases. Remember this when you plan the appropriate sample rate for your environment.
The next table shows how to calculate the amount of storage that is needed to hold the performance data for the hourly and daily history averages. When complete, add the totals from both tables in this section to give you the total repository requirement for this type of storage subsystem, as shown in the last table in this section.
Calculation method example for XIV-production:
500 volumes x 200 bytes per sample x 24 = 2,400,000 bytes for hourly history average
Hourly and daily repository database sizing
(a) Subsystem name | (b) Number of volumes sampled (LUNs) | (c) Performance data record size (bytes) | (d) Hourly requirement (b) x (c) x 24 | (e) Daily requirement (b) x (c)
XIV_production | 500 | 200 | 2,400,000 | 100,000
XIV_Test | 320 | 200 | 1,536,000 | 64,000
Daily totals | 3,936,000 | 164,000
Hourly retention = 30 days: (f) 118,080,000
Daily retention = 90 days: (g) 14,760,000
Total MB = ( (f) + (g) ) / 1,000,000 = 190 MB
Table 8: Hourly and daily repository database sizing for XIV
The last table shows the total Tivoli Storage Productivity Center repository space required for XIV storage subsystems. The total repository space is the sum of the totals from the per sample and hourly/daily repository database sizing tables.
Total space required | 907 MB | 190 MB | 1,097 MB
Table 9: Total TPC repository space required for XIV subsystems
2.3.4 Fabric Performance Management Data Sizing
For switches, the performance data is proportional to the number of ports in a switch. The necessary disk space is determined by the sum of the switch performance sample size and the aggregated sample size.
The disk space can be calculated approximately according to the following formula:
Assuming:
NumPt = total number of ports to monitor
CR = number of sample data collected per hour (for a sample interval of 5 minutes, this should be 60/5 = 12 samples)
Rs = retention for sample data in days
Rh = retention for hourly summarized data in days
Rd = retention for daily summarized data in days
Switch performance sample size [byte] = NumPt x CR x 24 x Rs x 400 byte
Switch performance aggregated data size [byte] = NumPt x (24 x Rh + Rd) x 420 byte
Total Switch performance size = Switch performance sample size + Switch performance aggregated data size
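The two formulas can be combined into a small helper; the call below uses the worked configuration that follows (48 ports, 12 samples per hour, 14/30/30-day retentions):

```python
SAMPLE_RECORD_BYTES = 400      # bytes per port and sample
AGGREGATED_RECORD_BYTES = 420  # bytes per port and summarized record

def switch_perf_bytes(num_ports, samples_per_hour, ret_sample_days,
                      ret_hourly_days, ret_daily_days):
    # per-sample data: NumPt x CR x 24 x Rs x 400 byte
    sample = (num_ports * samples_per_hour * 24
              * ret_sample_days * SAMPLE_RECORD_BYTES)
    # aggregated data: NumPt x (24 x Rh + Rd) x 420 byte
    aggregated = (num_ports * (24 * ret_hourly_days + ret_daily_days)
                  * AGGREGATED_RECORD_BYTES)
    return sample, aggregated
```

For NumPt = 48 and CR = 12 this yields 77,414,400 bytes of sample data and 15,120,000 bytes of aggregated data.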
Example: For an environment with the following configuration:
3 switches:
- Switch A with 8 ports
- Switch B with 16 ports
- Switch C with 24 ports
The Fabric performance monitor is configured to run indefinitely.
The sample interval is set to 5 minutes.
Retention for sample data is 14 days.
Retention for hourly summarized data is 30 days.
Retention for daily summarized data is 30 days.
The necessary disk space can be calculated with:
NumPt = 8 + 16 + 24 = 48
CR = 60 / 5 = 12
Rs = 14
Rh = 30
Rd = 30
Switch | (a) Number of ports | (b) Performance collection interval (min) | (c) Performance data record size | Daily amount of data collected: (a) x (60/(b)) x 24 x (c)
Switch A | 8 | 5 | 400 bytes | 921,600
Switch B | 16 | 5 | 400 bytes | 1,843,200
Switch C | 24 | 5 | 400 bytes | 2,764,800
Daily totals: (f) 5,529,600
History retention per performance monitor task = 14 days: (g) 77,414,400
Total MB = ( (f) + (g) ) / 1,000,000 = 82.944 MB
Table 10: Per sample repository sizing for switches
Switch | (a) Number of ports | (b) Performance data record size | Daily amount of hourly data collected: (a) x (b) x 24 | Daily amount of daily data collected: (a) x (b)
Switch A | 8 | 420 bytes | 80,640 | 3,360
Switch B | 16 | 420 bytes | 161,280 | 6,720
Switch C | 24 | 420 bytes | 241,920 | 10,080
History retention for hourly performance monitor data = 30 days: (f) 14,515,200
History retention for daily performance monitor data = 30 days: (g) 604,800
Total MB = ( (f) + (g) ) / 1,000,000 = 15.120 MB
Table 11: Per hourly and daily repository sizing for switches
Total Switch performance size = 82.944 MB + 15.120 MB = 98.064 MB
2.3.5 Snapshot Performance Management Data Sizing
Snapshots are copies of the current configuration tables at a given point in time. The configuration data from the snapshot tables at a certain time is associated with the statistical data collected from that instant onwards. A snapshot copy is used as reference until a new copy is created. For most devices, these snapshot copies are created when:
- The performance data collection is started
- A change in configuration occurs, for example, a new volume is created
- A snapshot has reached its expiration date for the duration of data collection
For SVC and Storwize, due to the underlying structure, there is an additional case when a snapshot is taken. When the SVC is scheduled to collect performance data for a definite duration, a set of snapshots is created when the task has reached half of its duration. For example, if the duration is set to 2 hours, the snapshot is taken when the data collection starts and an hour after it has started.
The data from these snapshot tables is kept as long as the statistical data persists in the tables, based on the retention setting.
The calculation for the size of snapshots can vary considerably. It is affected by a number of factors, from device type to the configuration of each device. In order to simplify the calculation, each device type has its own formula. The components that mostly influence the size of the database for that individual device type contribute as the factors in the formulas.
1. Calculation method for ESS snapshot:
For every volume, there will be approximately two rows persisted into the database.
Formula for data records (*):
Number of Volumes x 2
With an average of 250 bytes per record and the number of data records calculated above, the total space needed for this ESS can be calculated as follows:
1 snapshot per day x 2,000 data records (*) x 250 bytes per record = 500,000 bytes

2. Calculation method for DS8000/DS6000 snapshot:
The data records are mainly based on the number of volumes and how they span over multi-rank extent pools. The data records calculation uses a constant of 5 as an average number of ranks that a volume spreads over.
Formula for data records (*):
Number of Volumes + Number of Volumes x 5 = Number of Volumes x 6
With an average of 200 bytes per record and the number of data records calculated above, the total space needed for this DS8000 can be calculated as follows:
1 snapshot per day x 18,000 data records (*) x 200 bytes per record = 3,600,000 bytes
3. Calculation method for DS4000/DS5000 snapshot:
The data records are mainly based on the number of volumes and how they spread across the number of physical disks. The following formula uses 5 as an average number of disks that a volume spreads over.
Formula for data records (*):
Number of Volumes + Number of Volumes x 5 = Number of Volumes x 6
With an average of 200 bytes per record and the number of data records calculated above, the total space needed for this DS4000 can be calculated as follows:
1 snapshot per day x 2,000 data records (*) x 200 bytes per record = 400,000 bytes

4. Calculation method for non-IBM snapshot:
The data records are mainly based on the number of volumes and how they spread across the number of physical disks. The following formula uses 5 as an average number of disks that a volume spreads over.
Formula for data records (*):
Number of Volumes + Number of Volumes x 5 = Number of Volumes x 6
With an average of 200 bytes per record and the number of data records calculated above, the total space needed for this HDS can be calculated as follows:
1 snapshot per day x 6,000 data records (*) x 200 bytes per record = 1,200,000 bytes
5. Calculation method for SVC/Storwize snapshot:
The data records for SVC are mainly based on the number of volumes and how they span across the number of Mdisks. A Vdisk can reside on as little as 1 Mdisk up to the maximum number of available Mdisks. However, a Vdisk is unlikely to spread over all Mdisks; in most cases, each will span a range of 1 to 10 Mdisks. The following formula uses 5 as an average number of Mdisks.
Formula for data records (*):
Number of Vdisks + Number of Vdisks x 5 = Number of Vdisks x 6
With an average of 248 bytes per record and the number of data records calculated above, the total space needed for this SVC can be calculated as follows:
1 snapshot per day x 3,732 data records (*) x 248 bytes per record = 925,536 bytes
6. Calculation method for XIV snapshot:
For every volume, there will be approximately three rows persisted into the database.
Formula for data records (*):
Number of Volumes x 3
With an average of 448 bytes per record and the number of data records calculated above, the total space needed for this XIV can be calculated as follows:
1 snapshot per day x 3,000 data records (*) x 448 bytes per record = 1,344,000 bytes
7. Calculation method for Switch snapshot:
For every port, there will be approximately two rows persisted into the database.
Formula for data records (*):
Number of Ports x 2
With an average of 150 bytes per record and the number of data records calculated above, the total space needed for this Switch can be calculated as follows:
1 snapshot per day x 80 data records (*) x 150 bytes per record = 12,000 bytes
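The per-device formulas above differ only in the row multiplier and the average record size, so they can be captured compactly. The multipliers and byte sizes below are the ones listed in this section; the counted entity is volumes (Vdisks for SVC/Storwize) except for switches, where it is ports:

```python
# (rows per entity, average bytes per record) for one snapshot, per device type
SNAPSHOT_FACTORS = {
    "ESS": (2, 250),
    "DS8000/DS6000": (6, 200),
    "DS4000/DS5000": (6, 200),
    "non-IBM": (6, 200),
    "SVC/Storwize": (6, 248),
    "XIV": (3, 448),
    "Switch": (2, 150),
}

def snapshot_bytes(device_type, entities, snapshots_per_day=1):
    # entities = number of volumes (or ports for "Switch")
    rows, record_size = SNAPSHOT_FACTORS[device_type]
    return snapshots_per_day * entities * rows * record_size
```

Summing `snapshot_bytes` over all monitored devices gives the daily snapshot space shown in Table 12.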
Table 12 shows examples for all device types with different hardware configurations, under the assumption that one snapshot is taken daily. The total represents the amount of storage needed for all devices in one day.
Storage devices or switches | Configuration | Record size | Required space per snapshot
ESS | 1,000 volumes | 250 bytes | 500,000 bytes
DS8000 | 8 ranks, 3,000 volumes | 200 bytes | 3,600,000 bytes
DS4000 | 14 disks, 400 volumes | 200 bytes | 400,000 bytes
SVC/Storwize | 2 nodes, 66 Mdisks, 622 volumes | 248 bytes | 925,536 bytes
Non-IBM | 12 disks, 1,000 volumes | 200 bytes | 1,200,000 bytes
XIV | 180 disks, 1,000 volumes | 448 bytes | 1,344,000 bytes
Switch | 40 ports | 150 bytes | 12,000 bytes
Total MB | 8.0 MB
Table 12: Snapshot repository database sizing
2.4 Estimate sizing of TPC for Data
2.4.1 Sizing the repository for TPC for Data requirements
Repository sizing for TPC for Data is more difficult to model accurately, due to the dynamic nature of the collected data. Performance data collection sizing is simple in that it collects a set amount of data at regular intervals. With TPC for Data, however, a policy or profile collects a variable amount of data from each monitored server, based on what and how much data of a matching type is found on each machine.
Key factors in sizing TPC for Data (with focus on Filesystem Scan):
- Total number of operating system registered users storing files
- Total number of filesystems monitored
- Total number of different file types (that is, *.txt, *.exe, *.doc, *.mp3, and so forth)
- Number of machines with Storage Resource Agents or Data Agents deployed and collecting data
- Total number of file names collected and stored for reporting
- The settings for "Resource History Retention" as outlined in the previous chapter
With respect to Filesystem Scan, the following database tables are known to typically accumulate the most information over time. Therefore this chapter explains how to calculate the space requirements for these tables.
Table name | Description | Average size in bytes per entry
T_STAT_FTYPE_HIST | File type history | 55
T_STAT_FILE | Stored file names | 155
T_STAT_USER_HIST | User history | 55
Table 13: Key largest repository tables
Table 14, Table 15 and Table 16 help to estimate the worst case sizing for these key tables.
The following table helps to estimate the requirements for maintaining the file/folder ownership history on a per-user basis (T_STAT_USER_HIST).
Statistic name | (a) Number of filesystems covered | (b) Number of users covered | (c) Days to keep scan history | (d) Number of weeks of scan history | (e) Number of months of scan history | Records maintained = a x b x (c + d + e)
Custom_stat | 300 | 800 | 30 | 52 | 24 | 25,440,000
UNIX_stat | 250 | 1,500 | 30 | 52 | 24 | 39,750,000
Windows_stat | 500 | 2,000 | 30 | 52 | 24 | 106,000,000
Total number of records (worst case): 171,190,000
Total requirement (bytes) = Total number of records x 55 bytes = 9,415,450,000
Realistic expectation, reduced to 50% of worst case: 4,707,725,000 bytes = 4,708 MB
Table 14: Estimating the user history repository requirement (T_STAT_USER_HIST)
Note: Unlike the performance tables, much more estimation is involved here. For example, there might be 500 filesystems covered by Windows_stat and 2,000 users with data across those 500 filesystems, but not all of the 500 filesystems have files owned by the 2,000 users. There is likely only a subset of filesystems with data for all 2,000 users. This is why the realistic illustration is reduced to only 50% of the worst case illustration. You might want to change the 50% factor to match your specific requirements.
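The worst-case record count and the 50% realistic reduction can be sketched as follows (the 30/52/24 retention intervals are the defaults used in Table 14):

```python
USER_HIST_BYTES = 55  # average bytes per T_STAT_USER_HIST entry

def user_history_records(filesystems, users, days=30, weeks=52, months=24):
    # worst case: one record per filesystem, user and retained interval
    return filesystems * users * (days + weeks + months)

def user_history_mb(profiles, realistic_factor=0.5):
    # profiles: iterable of (filesystems, users) tuples;
    # realistic_factor=0.5 applies the 50% reduction discussed above
    records = sum(user_history_records(*p) for p in profiles)
    return records * USER_HIST_BYTES * realistic_factor / 1_000_000
```

With the three sample profiles from Table 14 this reproduces the 4,708 MB estimate.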
Use Table 15 to calculate the repository space required to store file type history information. Estimating the required space for maintaining a file type history is more accurate than estimating the user table history, because the data entering this table is more constant for a given profile.
Statistic profile name | (a) Number of file types | (b) Number of TPC agents covered | (c) Days to keep scan history | (d) Number of weeks of scan history | (e) Number of months of scan history | Records maintained = a x b x (c + d + e)
Win_types | 50 | 200 | 30 | 52 | 24 | 1,060,000
UNIX_servers | 50 | 150 | 60 | 52 | 24 | 1,020,000
Media files | 30 | 50 | 60 | 52 | 24 | 204,000
Total number of records: 2,284,000
Total (bytes) = Total number of records x 55 bytes = 125,620,000
Total MB: 126 MB
Table 15: Estimating the file type repository requirement (T_STAT_FTYPE_HIST)
The third TPC for Data repository table of significant size is the T_STAT_FILE table. This table holds a record of the file names which have been collected by profiles for largest, most obsolete, orphan files, and so forth.
Note: If you plan to use TPC for Data for duplicate file spotting or to archive specific files for you, it is likely that you will need to increase the number of file names that each agent collects. Be aware that TPC for Data can assist such tasks only by analyzing the file names collected into the database; therefore the number of file names to be maintained is essential. On the other hand, increasing these numbers increases the requirements on the repository at the same time.
Tip: Using filters on file types and folders is helpful in order to limit the file names to those candidates that are of interest.
For more information on this topic refer to the Redbook "IBM Total Storage Productivity Center Advanced Topics".
With the following values applied to a specific profile, the "Total file names per agent" would accumulate to 1,800 file names to be collected per agent.
Illustration 8: Adding up all filenames
Statistic profile name | (a) Total file names collected per agent | (b) Number of agents to which this profile applies | Total files per statistic = a x b
Duplicate file spot | 2,000 | 500 | 1,000,000
Control audio files | 200 | 150 | 30,000
Archive old data | 200 | 50 | 10,000
Total files in table: 1,040,000
Size (bytes) = Total files x 155 bytes = 161,200,000
Size: 162 MB
Table 16: Estimating the file name repository requirement (T_STAT_FILE)
The final step for sizing the TPC for Data repository is to total the three tables and add an overhead for the default statistics. The average overhead for the default statistic types is estimated at 1.5 MB per TPC agent. Therefore:
Default statistics overhead = Total number of agents x 1.5 MB
Example with 400 agents:
Default statistics overhead = 400 x 1.5 MB = 600 MB
Add this value in Table 17:
Source | Amount in MB
User history | 4,708
File type history | 126
File names | 162
Default statistics overhead | 600
Total requirement (MB) | 5,596
Table 17: TPC for Data repository total
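Putting the three table estimates and the per-agent overhead together, for example:

```python
DEFAULT_STATS_MB_PER_AGENT = 1.5  # average overhead per TPC agent

def tpc_for_data_total_mb(user_hist_mb, file_type_mb, file_names_mb, agents):
    # total repository requirement for TPC for Data, as in Table 17
    overhead = agents * DEFAULT_STATS_MB_PER_AGENT
    return user_hist_mb + file_type_mb + file_names_mb + overhead
```

With the values from Tables 14-16 and 400 agents this yields the 5,596 MB total of Table 17.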
2.4.2 Sizing the repository for DB Scan requirements
Since TPC maintains historical data at the tablespace level as the lowest entity, the total number of tablespaces monitored by TPC is the driving factor for calculating the space requirements for DBMS Scans.
Depending on the vendor of the DBMS and the actual setup of the databases monitored, the number of tablespaces will vary. In case you are monitoring these databases with TPC already, you may use TPC's reporting functionality to determine the actual numbers. Otherwise you might want to use the toolset provided by your DBMS vendor.
Note that the key table in this calculation is also used by filesystem scan. Therefore the required space for DBMS Scan adds on top of existing space requirements and does not represent the upper limit size for this table.
Table name | Description | Average size in bytes per entry
T_STAT_FS_HIST | Tablespace history | 71
Table 18: Key largest repository tables
Statistic profile name | (a) Number of databases | (b) Average number of tablespaces | (c) Days to keep scan history | (d) Number of weeks of scan history | (e) Number of months of scan history | Records maintained = a x b x (c + d + e)
Warehouse (DB2) | 200 | 50 | 60 | 52 | 24 | 1,360,000
Test (other) | 1,000 | 25 | 7 | 52 | 12 | 1,775,000
Total number of records: 3,135,000
Total (bytes) = Total number of records x 71 bytes = 222,585,000
Total MB: Total (bytes) / 1,000,000 = 223 MB
Table 19: Estimating the tablespace history repository requirement (T_STAT_FS_HIST)
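The same record-count pattern as for the filesystem scan tables applies here, with the 71-byte tablespace history entries:

```python
TABLESPACE_HIST_BYTES = 71  # average bytes per T_STAT_FS_HIST entry

def tablespace_history_records(databases, avg_tablespaces, days, weeks, months):
    # one record per tablespace and retained interval
    return databases * avg_tablespaces * (days + weeks + months)

def tablespace_history_mb(profiles):
    # profiles: iterable of (databases, tablespaces, days, weeks, months)
    records = sum(tablespace_history_records(*p) for p in profiles)
    return records * TABLESPACE_HIST_BYTES / 1_000_000
```

With the two sample profiles from Table 19 this reproduces the 223 MB estimate.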
2.5 Estimate sizing of TPC Alerts
2.5.1 Alerts
You can set up Tivoli® Storage Productivity Center so that it examines the data that it collects about your storage infrastructure and writes an alert to a log when an event occurs. You also can specify that an action is initiated, such as sending an SNMP trap, sending an e-mail, or running a script when the event occurs.
Alerts are triggered based on the data collected by monitoring jobs (pings, scans, and probes), so the alerts must be defined before the monitoring jobs are run. For each alert, you select a condition that triggers the alert and (optionally) an action to be performed when that condition occurs. You can define an alert in the following ways:
- As part of a data collection schedule
- As part of an alert definition
When an event occurs and triggers an alert, the alert is stored in the database, so that you can view it through the TPC GUI. You can also select one or more other ways to be notified of the event. These alert notifications include SNMP traps, IBM® Tivoli Enterprise Console® events, IBM Tivoli® Netcool®/OMNIbus, Tivoli Storage Productivity Center login notifications, operating-system event logs, or e-mail.
Note: Alerts are not generated in a Tivoli Storage Productivity Center instance for actions that you perform from that instance. For example, if you start Tivoli Storage Productivity Center and use Disk Manager to assign or unassign volumes for a subsystem, you do not receive alerts for those volume changes in that instance. However, if you assign and unassign volumes outside of that Tivoli Storage Productivity Center instance, an alert is generated.
In general, the following types of conditions can trigger alerts:
- A data collection schedule failed to complete.
- A change occurred in the storage infrastructure.
- A performance threshold was violated.
The specific conditions that can trigger events vary, depending on the type of storage resource that is being monitored.
Notification methods, or triggered actions, define the method by which you are notified of an alert: SNMP trap, Tivoli Enterprise Console event, IBM Tivoli® Netcool®/OMNIbus, login notification, Windows event log, UNIX syslog, and e-mail. Alerts are always written to the error log. Additionally, if a Data agent or Storage Resource Agent is deployed on a monitored
computer, you can run a script or start an IBM Tivoli Storage Manager job in response to the alert.
The following conditions must be met in order to successfully use alerts:
- Data collection schedules are configured and scheduled to run on a regular basis.
- If you want to be notified about an alert in some way other than an entry in the log file, such as using SNMP traps, Tivoli Enterprise Console events, or e-mail, alert notification instructions must be configured before using the alert.
The following types of alerts are set for your system by default:
- New entity found
- Job failed
2.5.2 Database space used by Alerts
All the alerts are always stored in the Alert Log, even if you have not set up notification. This log can be found in the TPC GUI in the navigation tree at IBM Tivoli Storage Productivity Center > Alerting > Alert Log > All.
The alerts use 3 tables in the DB2 database: T_ALERT_DEFINITION, T_ALERT_EMAIL, T_ALERT_LOG.
T_ALERT_DEFINITION - table where all alert definitions are stored
T_ALERT_EMAIL - table where e-mail definition alerts are stored
T_ALERT_LOG - table where all triggered alerts are stored
Many database activities to form SNMP and Tivoli Enterprise Console® events might affect overall performance if too many alerts get created.
1. Suppress the alerts
Alerts can be suppressed to avoid generating too many alert log entries or too many actions when the triggering condition occurs often. If you selected a threshold as a triggering condition for alerts, you can specify the conditions under which alerts are triggered and choose whether you want to suppress repeating alerts. You can view suppressed alerts in the constraint violation reports.
2. Change the triggered actions
You can edit an alert if you want to change the triggering condition, triggered action, or the storage resources against which it is deployed.
3. Enable / disable the alerts
You can disable an alert. This retains the alert definition but prevents the alert from being run.
4. Delete the alerts (not the default ones)
To ensure that the list of alerts in the navigation tree is up to date, you can delete alerts that you no longer want to implement.
If you decide that the alert tables are occupying too much space, you can always set the Alert Disposition > Alert Log Disposition, which represents the length of time to keep entries in the alert logs. The default is 90 days.
To get an idea of how much space the alerts occupy: in a complex environment the number of rows in T_ALERT_LOG can reach 3 million entries in 90 days and occupy approximately 1.15 GB.
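From that single data point (roughly 1.15 GB for 3 million rows), an average T_ALERT_LOG row works out to about 385 bytes. A rough planning sketch based on that assumption (it is a derived value, not a documented constant) could be:

```python
ALERT_ROW_BYTES = 385  # assumed average, derived from ~1.15 GB / 3 million rows

def alert_log_gb(alerts_per_day, retention_days=90):
    # retention_days mirrors the default Alert Log Disposition of 90 days
    return alerts_per_day * retention_days * ALERT_ROW_BYTES / 1_000_000_000
```

At roughly 33,000 alerts per day over 90 days this gives the ~1.15 GB observed above.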
3 Database configuration
3.1 Database configuration through the TPC installer
In the typical installation mode, the TPC installer installs the TPC database with the default settings in the default location set by DB2. The other way to install TPC is the custom installation mode, in which the installer lets you configure the database as well. Illustration 9 shows the panel for choosing the installation mode.
Illustration 9: Initial installation panel
Once you have selected the custom installation mode, you have to select which TPC components you want to install (you must select at least creating the database schema). After you have selected the components and entered the credentials for DB2 (this DB2 user and password is used to create the TPC database schema), you reach Illustration 10, which gives you the options to create a new customized database or select an existing database.
Illustration 10: Database schema installation panel
If you have already used the installer to create a database schema only, the installer detects it and displays the information about it in this panel. In this case you have the option to use that database schema by selecting "Use local database", or the option to "Create local database".
Note: If you have already installed a DB2 server and TPC database schema on a local system, you cannot use a remote DB2 server for the TPC database schema installation. The TPC installer selects the local DB2 server by default.
If you choose to create a new database schema, you must provide the database name. Usually it is TPCDB and the field is completed automatically, but it can have a different name as well, which must then be entered in the database name field. In this step, once again, you have the option to leave the default values for the database schema or to customize it by clicking the "Database creation details..." button.
If you choose to customize the creation of the database, a new panel opens with the database schema information. Illustration 11 shows the details panel for database schema creation.
Illustration 11: Database schema detailed installation panel
In this panel you can customize the database schema creation by setting the schema name (the schema name cannot be longer than eight characters), by selecting on which drive the database schema is created (i.e.: C:, D:), by changing the size and location of the tablespaces, by changing the management of the tablespaces, and by changing the location and the size of the log that is created during the installation and which contains information about the install operations performed. The disk space needed for the TPC database schema varies significantly with the storage network configuration, data collection, data retention period and other factors. Unless you are an experienced administrator, you should use the default values for tablespace allocation that the TPC installer provides.
Below is a space estimation for each tablespace required by TPC (Normal, Key, Big, Temp, Temporary user tablespace), based on a configuration containing 5,000 volumes, with some general assumptions:
The Normal tablespace is used for snapshots and miscellaneous data; for a 5,000 volume configuration it should have around 200 MB. A table that uses significant space is T_RES_STORAGE_VOLUME_SNAPSHOT, which uses about 2,200 bytes for each record. The number of snapshots depends on the data collection activities.
The Key tablespace is used for configuration data which is constantly used, for example the key entity and relationship data; for a 5,000 volume configuration it should have 200 MB. A table that uses significant space is T_RES_DATA_PATH, which uses about 300 bytes for each record for the relationship between storage volumes and host ports. There can be dozens to hundreds of data paths for a volume.
The Big tablespace is used for performance statistics; for a 5,000 volume configuration it should have around 400 MB per day of performance data. The data collected for performance data for storage volumes can use a significant amount of space. For 5,000 volumes, if performance data is collected every 5 minutes, the data for one day would be around 400 MB. If the data is kept 7 days, the data collected would take about 3 GB. If the data is kept longer, the storage must be scaled up accordingly.
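The 400 MB/day figure for the Big tablespace follows directly from volumes x samples x record size. As a sketch, the ~280-byte average performance record below is an assumption chosen to reproduce the quoted figure; actual record sizes vary by device type, as shown earlier in this chapter:

```python
PERF_RECORD_BYTES = 280  # assumed average record size across device types

def big_tablespace_mb_per_day(volumes, interval_minutes):
    # samples per day at the given collection interval
    samples = (60 // interval_minutes) * 24
    return volumes * samples * PERF_RECORD_BYTES / 1_000_000

def big_tablespace_gb(volumes, interval_minutes, retention_days):
    # space consumed over the retention period, in GB
    return big_tablespace_mb_per_day(volumes, interval_minutes) * retention_days / 1_000
```

For 5,000 volumes at a 5-minute interval this gives roughly 400 MB per day and about 2.8 GB over 7 days, matching the estimate above.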

The Temp tablespace is used for temporary data for query processing and for other temporary tables. For a 5,000 volume configuration this tablespace should have 1 GB. This value applies as well for the Temporary user tablespace (for the same configuration).
You can choose to manage the above tablespaces in 3 ways: System managed (SMS), Database managed (DMS) or Automatic Storage.
With SMS, the operating system manages the tablespaces. Containers are defined as regular operating system files, and they are accessed using operating system calls. With SMS management, containers cannot be dropped from SMS tablespaces, and adding new ones is restricted to partitioned databases. SMS is used by default by the TPC installer.
With DMS, DB2 manages the tablespaces. Containers can be defined either as files, which are fully allocated with the size given when the tablespace is created, or as devices. Extending the containers is possible using the ALTER TABLESPACE command. Unused portions of DMS containers can also be released.
An automatic storage database is one in which tablespaces can be created and whose container and space management characteristics are completely determined by the DB2 database manager. At the most basic level, databases that are enabled for automatic storage have a set of one or more storage paths associated with them.
Note: Unless you are an experienced administrator, you should use the default values for tablespace allocation.

3.2 Tuning/Parameters for fresh database installations
The TPC installer sets important DB2 tuning parameters to auto-tuning.
One important change compared to earlier TPC releases is that TPC configures self-tuning bufferpools. With self-tuning bufferpools, the database manager adjusts the size of the buffer pools in response to workload requirements. Self-tuning bufferpools remove the need to measure bufferpool hit ratios and to manually adjust bufferpool sizes in order to utilize them best.
Self-tuning buffer pools reside in shared memory regions that the database manager can balance with other database memory pools. The TPC installer configures no upper limit for the shared memory regions. If upper limits for overall memory usage of DB2 are needed, this can be configured either on the database instance level or on the database level. The parameter for the database instance is "Size of instance shared memory" (INSTANCE_MEMORY). The parameter for the database is "Size of database shared memory" (DATABASE_MEMORY).
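If such limits are needed, they can be set from the DB2 command line, for example as below. The page counts are placeholders, not recommendations, and TPCDB is the default database name assumed here:

```shell
# Cap the instance-wide memory (database manager configuration, 4 KB pages)
db2 update dbm cfg using INSTANCE_MEMORY 1000000

# Cap the shared memory of the TPC database itself (4 KB pages)
db2 update db cfg for TPCDB using DATABASE_MEMORY 500000
```

Setting either parameter back to AUTOMATIC restores the self-tuning behavior configured by the installer.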
For reference, the parameters that the TPC installer configures to auto-tuning are listed in the tables below. These are the most important parameters, not a complete listing.
A hints and tips chapter "2.2.2 DB2 Performance Tuning" mentions different tuning recommendations (http://www-01.ibm.com/support/docview.wss?uid=swg27008224). Those are outdated and will be removed from the TPC hints and tips.
3.2.1 Database instance parameters
Parameter | Value | Explanation
MON_HEAP_SZ | DB2 9.1: 212; DB2 9.5 or higher: Automatic | This parameter determines the amount of memory, in pages, to allocate for database system monitor data.
INSTANCE_MEMORY | Automatic | This parameter specifies the maximum amount of memory that can be allocated for a database partition. To limit the total memory usage, set it to a specific value.
3.2.2 Database parameters
Parameter | Value | Explanation
DATABASE_MEMORY | Automatic | This parameter specifies the maximum amount of memory that can be allocated for the database shared memory region. To limit the total memory usage, set it to a specific value.
Parameter | Value | Explanation
SELF_TUNING_MEM | On | This parameter determines whether the memory tuner will dynamically distribute available memory resources as required between memory consumers that are enabled for self-tuning.
DBHEAP | DB2 9.1: 3000; DB2 9.5: 3000; DB2 9.7: Automatic | This parameter determines the maximum memory used by the database heap. It contains control block information for tables, indexes, tablespaces, and buffer pools. It also contains space for the log buffer and temporary memory used by utilities.
SORTHEAP | Automatic | This parameter defines the maximum number of private memory pages to be used for private sorts, or the maximum number of shared memory pages to be used for shared sorts. The memory tuner dynamically adjusts the size of the memory area controlled by this parameter as the workload requirements change.
SHEAPTHRES_SHR | Automatic | This parameter represents a soft limit on the total amount of database shared memory that can be used by sort memory consumers at any one time. The memory tuner dynamically adjusts the size of the memory area controlled by this parameter as the workload requirements change.
Parameter | Value | Explanation
LOCKLIST | Automatic | This parameter indicates the amount of storage that is allocated to the lock list. There is one lock list per database and it contains the locks held by all applications concurrently connected to the database. The memory tuner dynamically adjusts the size of the memory area controlled by this parameter as the workload requirements change.
MAXLOCKS | Automatic | This parameter defines a percentage of the lock list held by an application that must be filled before the database manager performs lock escalation. The memory tuner dynamically adjusts the size of the memory area controlled by this parameter as the workload requirements change.
NUM_IOCLEANERS | Automatic | This parameter allows you to specify the number of asynchronous page cleaners for a database.
NUM_IOSERVERS | Automatic | This parameter specifies the number of I/O servers for a database.
PCKCACHESZ | Automatic | This parameter is allocated out of the database shared memory, and is used for caching of sections for static and dynamic SQL and XQuery statements on a database. The memory tuner dynamically adjusts the size of the memory area controlled by this parameter as the workload requirements change.
Parameter: AUTO_MAINT
Value: On
Explanation: This parameter is the parent of all the other automatic maintenance database configuration parameters.

Parameter: AUTO_TBL_MAINT
Value: On
Explanation: This parameter is the parent of all table maintenance parameters.

Parameter: AUTO_RUNSTATS
Value: On
Explanation: This automated table maintenance parameter enables or disables automatic table runstats operations for the database. Statistics collected by the runstats utility are used by the optimizer to determine the most efficient plan for accessing the physical data.

The hints and tips for TPC mention that running runstats manually is recommended. With AUTO_RUNSTATS set to On, this is no longer recommended.

3.2.3 References to DB2 configuration parameters

For reference, see the configuration parameter summaries for DB2 9.7, 9.5, and 9.1.

DB2 9.7 configuration parameters summary
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=/com.ibm.db2.luw.admin.config.doc/doc/r0005181.html

DB2 9.5 configuration parameters summary
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.db2.luw.admin.config.doc/doc/r0005181.html

DB2 9.1 configuration parameters summary
http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp?topic=/com.ibm.db2.udb.admin.doc/doc/r0005181.htm
4 Optimizing use of physical storage for DB2 databases

The most recent TPC version 4.2 uses only one database as repository: TPCDB.
By default, the TPC 4.2 database and its DB2 transaction log files are stored on the same filesystem. You can achieve performance improvements by placing the transaction log files onto a separate disk.
While the ratio of log physical disks to data physical disks is heavily dependent on the workload, a good rule of thumb is to use 15% to 25% of the physical disks for logs, and 75% to 85% of the physical disks for data.
4.1 Placing the DB2 transaction logs onto a separate disk

On a Windows system, choose as target directory a directory on a separate disk. On Linux/UNIX, choose as target directory a directory under a mount point of the separate disk.
Example: To move the logs for the TPCDB database to a new location, use the following steps:
1. Shut down the TPC services: TPC data server service, TPC device server service.
2. Choose a new log path location, for example, E:\DB2_logs\TPCDB (make sure that the target directory exists).
3. Start a DB2 command line processor (see the illustration below).
Illustration 12: DB2 Command Line Processor window
4. Issue the following commands in the window:
connect to database_name
update db cfg for database_name using NEWLOGPATH target_directory_of_new_disk
where database_name is the name of the database (e.g. TPCDB) and target_directory_of_new_disk is the path to which you want to relocate the logs.
Restart DB2 in order to activate the change. You can use db2stop and db2start, or the DB2 Control Center, to restart the database instance. A reboot does the trick as well.
5. Start the TPC services again: TPC data server service, TPC device server service.
The DB2 logs for the TPC database repository are now stored on a different physical disk than the corresponding tablespaces. This will lead to performance improvements during TPC DB operation.
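After the restart you can verify that the new log path is active. A minimal check from a DB2 Command Window, assuming the repository database is named TPCDB:

```
db2 connect to TPCDB
REM "Path to log files" should now show the new directory, e.g. E:\DB2_logs\TPCDB\
db2 get db cfg for TPCDB | findstr /C:"Path to log files"
db2 connect reset
```

On Linux/UNIX, pipe through grep "Path to log files" instead of findstr.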
4.2 Setting the DB2 database on a separate disk

With a high workload on TPC (possibly due to many performance jobs or probes) and improper resource history retention settings that keep too much data, you may find yourself with a very large database. Sometimes the database is so large that it fills the entire free space of a filesystem (partition). In this case the user must clean up the repository or move it to a new filesystem. The next steps show how the TPC repository can be moved to another partition (for this example, move TPCDB from the C: partition to the D: partition):
1. Verify that the partition you want to move the repository to has enough free space for the database and for the database backup.
2. Stop the TPC services (TPC data server service, TPC device server service). Wait for them to completely stop.
3. Create a snapshot of the current configuration (this step is optional):
1. Create a file called summary.cmd with the following lines, where database_name is the repository name (e.g. TPCDB):
LIST DB DIRECTORY
connect to database_name
get snapshot for tablespaces on database_name
LIST TABLESPACES SHOW DETAIL
get db cfg for database_name
2. Start ("2 Co,,and Windo5 by using Start &enu or use si,ply the co,,and
Bdb2c,dCG
Illustration 13: Start 0enu Pat> to D/2 1ommand Eindo=
3. >ro, the ("2 Co,,and Windo5 navigate to 5here su,,ary.c,d 5as created.
4. Run the co,,andG db2 A- su,,ary.c,d Av% su,,aryAlogAold.tDt
4. Back up the current TPC database (this step is mandatory, so that in case something goes wrong the repository can be restored easily from the backup):
1. Create the folder where you will back up the database, for example D:\db2backup.
2. From the DB2 Command Window, navigate to D:\db2backup and run the following commands:
db2 connect to database_name
db2move database_name export
(This may take a while depending on the size of the database.)
5. Stop DB2.
From the DB2 Command Window previously opened, run: db2stop force
6. Copy specific directories from C: to D:
1. Navigate to C:\DB2\NODE0000\ and copy the SQL00001 directory with all its content to D:\DB2\NODE0000\ . The new location will look like D:\DB2\NODE0000\SQL00001
2. Navigate back to C:\DB2\NODE0000\ and copy the TPCDB directory with all its content to D:\DB2\NODE0000\ . The new location will look like D:\DB2\NODE0000\TPCDB
3. Navigate back to C:\DB2\NODE0000\ and copy the SQLDBDIR directory with all its content to D:\DB2\NODE0000\ . The new location will look like D:\DB2\NODE0000\SQLDBDIR
4. Copy the directory C:\DB2\TPCDB to D:\DB2\ .
Note: There might be more than one SQL0000x folder (e.g. SQL00001, SQL00002, and so on). To find out in which one TPCDB resides (the one that needs to be moved), run the following command from the DB2 Command Window:
list db directory on C:
7. Create a file called TPCDB.cfg with the following content:
DB_NAME=TPCDB
DB_PATH=C:\,D:\
INSTANCE=DB2
STORAGE_PATH=C:\,D:\
Note: This configuration file is used by the db2relocatedb command. The configuration parameters listed above are required for the command to work, but the configuration file can have the following additional parameters:
DB_NAME=oldName,newName
DB_PATH=oldPath,newPath
INSTANCE=oldInst,newInst
NODENUM=nodeNumber
LOG_DIR=oldDirPath,newDirPath
CONT_PATH=oldContPath1,newContPath1
CONT_PATH=oldContPath2,newContPath2
...
STORAGE_PATH=oldStoragePath1,newStoragePath1
STORAGE_PATH=oldStoragePath2,newStoragePath2
...
FAILARCHIVE_PATH=newDirPath
LOGARCHMETH1=newDirPath
LOGARCHMETH2=newDirPath
MIRRORLOG_PATH=newDirPath
OVERFLOWLOG_PATH=newDirPath
...
This is the description of the parameters:
DB_NAME
Specifies the name of the database being relocated. If the database name is being changed, both the old name and the new name must be specified. This is a required field.
DB_PATH
Specifies the original path of the database being relocated. If the database path is changing, both the old path and the new path must be specified. This is a required field.
INSTANCE
Specifies the instance where the database exists. If the database is being moved to a new instance, both the old instance and the new instance must be specified. This is a required field.
NODENUM
Specifies the node number for the database node being changed. The default is 0.
LOG_DIR
Specifies a change in the location of the log path. If the log path is being changed, both the old path and the new path must be specified. This specification is optional if the log path resides under the database path, in which case the path is updated automatically.
CONT_PATH
Specifies a change in the location of tablespace containers of type Database Managed Space or System Managed Space. Both the old and the new container path must be specified. Multiple CONT_PATH lines can be provided if there are multiple container path changes to be made. This specification is optional if the container paths reside under the database path, in which case the paths are updated automatically. If you are making changes to more than one container where the same old path is being replaced by a common new path, a single CONT_PATH entry can be used. In such a case, an asterisk (*) can be used both in the old and new paths as a wildcard.
STORAGE_PATH
This is only applicable to databases with automatic storage enabled. It specifies a change in the location of one of the storage paths for the database. Both the old storage path and the new storage path must be specified. Multiple STORAGE_PATH lines can be given if there are several storage path changes to be made. You can verify whether automatic storage is enabled by launching the DB2 Control Center and navigating to the details of the tablespace containers.
FAILARCHIVE_PATH
Specifies a new location to archive log files if the database manager fails to archive the log files to either the primary or the secondary archive locations. You should only specify this field if the database being relocated has the failarchpath configuration parameter set.
LOGARCHMETH1
Specifies a new primary archive location. You should only specify this field if the database being relocated has the logarchmeth1 configuration parameter set.
LOGARCHMETH2
Specifies a new secondary archive location. You should only specify this field if the database being relocated has the logarchmeth2 configuration parameter set.
MIRRORLOG_PATH
Specifies a new location for the mirror log path. The string must point to a path name, and it must be a fully qualified path name, not a relative path name. You should only specify this field if the database being relocated has the mirrorlogpath configuration parameter set.
OVERFLOWLOG_PATH
Specifies a new location to find log files required for a rollforward operation, to store active log files retrieved from the archive, and to find and store log files required by the db2ReadLog API. You should only specify this field if the database being relocated has the overflowlogpath configuration parameter set.
Blank lines or lines beginning with a comment character (#) are ignored.
8. From the DB2 Command Window previously opened, navigate to the TPCDB.cfg file and run the following command:
db2relocatedb -f TPCDB.cfg
This command will relocate the TPC repository from and to the drives specified in the TPCDB.cfg file.
After the command completes you should see the following output:
Files and control structures were changed successfully.
Database was catalogued successfully.
DBT1000I The tool completed successfully.
9. From the DB2 Command Window, run the db2start command to start DB2.
10. Verify the current configuration (this step is optional):
1. From the DB2 Command Window, navigate to the summary.cmd file created in step 3 and run the following command:
db2 -f summary.cmd -vz summary-logs-new.txt
2. Compare summary-logs-new.txt with summary-logs-old.txt. The DB2 configuration parameters should be the same other than the differences related to the DB2 path.
11. Delete the folders copied in step 6 from the original partition (C:). This will release the disk space occupied by the original location of the repository.
12. Start the TPC services.
Notes:
The db2relocatedb command cannot be used to move a TPC database that has been created with the option "Automatic Storage" under the database creation details in the custom installation path of the TPC installer. For "Automatic Storage" a simpler solution exists to move the database to a different disk.
Note that it is possible to expand an "Automatic Storage" database with an additional disk.
Traditional tablespaces managed by the database can also be expanded with an additional disk by placing additional tablespace containers onto the additional disk.
The administrative view SYSIBMADM.DBPATHS shows which type of storage an existing database uses; e.g. the type "DB_STORAGE_PATH" indicates "Automatic Storage". Refer also to the DB2 Information Center:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=/com.ibm.db2.luw.sql.rtn.doc/doc/r0022037.html
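A quick way to inspect the storage layout is to query this view from a DB2 Command Window. A sketch, assuming the repository database is named TPCDB:

```
db2 connect to TPCDB
REM type DB_STORAGE_PATH indicates "Automatic Storage";
REM TBSP_CONTAINER, TBSP_DEVICE or TBSP_DIRECTORY indicate traditional tablespace containers
db2 "select type, path from sysibmadm.dbpaths"
db2 connect reset
```

The TYPE column of the result tells you which of the procedures in the following sections applies to your database.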
4.2.1 Moving an "Automatic Storage" database to one or more different disks

This can be done with fewer steps than in section 4.2, as a backup/restore operation. Still, an offline maintenance window is required. The database backup should be kept in a safe location to be able to restore the database.
The steps are the following:
1. Make sure the database uses "Automatic Storage" by looking at the administrative view SYSIBMADM.DBPATHS.
2. Stop the TPC services (TPC data server service, TPC device server service). Wait for them to completely stop.
3. Back up the database.
4. Drop the database.
5. Restore the backup image to the new storage paths. Use the "ON" parameter to specify the desired storage paths. Use the "NEWLOGPATH" parameter to specify the desired location for the active log files.
6. Start the TPC services.
Example: In order to move an "Automatic Storage" database tpcdb from a Windows C: drive to the drives F: and G: for the tablespaces, and to L:\db2_log for the active log files, the DB2 related commands are the following, assuming the folder D:\db_backup has enough space for the database backup:
1. db2 backup db tpcdb to D:\db_backup
2. db2 drop db tpcdb
3. db2 restore db tpcdb from D:\db_backup on F:,G: newlogpath L:\db2_log
4.2.2 Expanding an "Automatic Storage" database with an additional disk

This can be done online while TPC operates. One or more additional storage paths are added to the database. Automatic storage tablespaces use the new storage path(s).
The steps are the following:
1. Make sure the database uses "Automatic Storage" by looking at the administrative view SYSIBMADM.DBPATHS.
2. While connected to the database, issue
db2 alter database add storage on storagePathList
where storagePathList is a comma-separated list of storage paths.
Example: In order to expand the database tpcdb on a Windows C: drive onto the drive D:, the DB2 commands are the following:
1. db2 connect to tpcdb
2. db2 "select * from sysibmadm.dbpaths" and make sure the type "DB_STORAGE_PATH" is listed, but not the types "TBSP_DEVICE", "TBSP_CONTAINER" or "TBSP_DIRECTORY".
3. db2 alter database add storage on D:
4.2.3 Expanding "Database managed (DMS) tablespaces" with an additional disk

This can be done online while TPC operates. New tablespace containers are added to a tablespace. The new containers reside on the additional disk.
The steps are the following:
1. Make sure the tablespace uses traditional DMS file-based tablespace containers by listing the tablespace containers and looking them up in the administrative view SYSIBMADM.DBPATHS. They have the type "TBSP_CONTAINER".
2. While connected to the database, issue
db2 "alter tablespace tablespaceName add (file 'filePath' size)"
where tablespaceName is the name of the tablespace to be extended, filePath the name of the file to be added, and size the size of the file to be used.
Example: In order to expand the DMS tablespace tpctbsppm in the database tpcdb by 250 MB onto a Windows drive D: with the new tablespace container file D:\tpcdb\tpctbsppm.2, the DB2 commands are the following:
1. db2 connect to tpcdb
2. db2 list tablespaces (record the "Tablespace ID" for the tablespace tpctbsppm; it is 4 for this example)
3. db2 list tablespace containers for 4
4. db2 "select * from sysibmadm.dbpaths" and make sure the listed containers have the type "TBSP_CONTAINER"
5. db2 "alter tablespace tpctbsppm add (file 'D:\tpcdb\tpctbsppm.2' 250M)"