
Front cover

IBM TotalStorage Productivity Center V2.3: Getting Started


Effectively use the IBM TotalStorage Productivity Center

Learn to install and customize the IBM TotalStorage Productivity Center

Understand the IBM TotalStorage Open Software Family

Mary Lovelace
Larry Mc Gimsey
Ivo Gomilsek
Mary Anne Marquez

ibm.com/redbooks

International Technical Support Organization

IBM TotalStorage Productivity Center V2.3: Getting Started

December 2005

SG24-6490-01

Note: Before using this information and the product it supports, read the information in Notices on page xiii.

Second Edition (December 2005)

This edition applies to Version 2, Release 3 of IBM TotalStorage Productivity Center (product numbers 5608-UC1, 5608-UC3, 5608-UC4, and 5608-UC5).

© Copyright International Business Machines Corporation 2005. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Notices . . . xiii
Trademarks . . . xiv

Preface . . . xv
The team that wrote this redbook . . . xv
Become a published author . . . xvii
Comments welcome . . . xvii

Part 1. IBM TotalStorage Productivity Center foundation . . . 1

Chapter 1. IBM TotalStorage Productivity Center overview . . . 3
1.1 Introduction to IBM TotalStorage Productivity Center . . . 4
1.1.1 Standards organizations and standards . . . 4
1.2 IBM TotalStorage Open Software family . . . 5
1.3 IBM TotalStorage Productivity Center . . . 6
1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data . . . 7
1.3.2 Fabric subject matter expert: Productivity Center for Fabric . . . 9
1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk . . . 12
1.3.4 Replication subject matter expert: Productivity Center for Replication . . . 14
1.4 IBM TotalStorage Productivity Center . . . 16
1.4.1 Productivity Center for Disk and Productivity Center for Replication . . . 17
1.4.2 Event services . . . 23
1.5 Taking steps toward an On Demand environment . . . 24

Chapter 2. Key concepts . . . 27
2.1 IBM TotalStorage Productivity Center architecture . . . 28
2.1.1 Architectural overview diagram . . . 28
2.1.2 Architectural layers . . . 29
2.1.3 Relationships between the managers and components . . . 31
2.1.4 Collecting data . . . 32
2.2 Standards used in IBM TotalStorage Productivity Center . . . 34
2.2.1 ANSI standards . . . 34
2.2.2 Web-Based Enterprise Management . . . 34
2.2.3 Storage Networking Industry Association . . . 35
2.2.4 Simple Network Management Protocol . . . 36
2.2.5 Fibre Alliance MIB . . . 37
2.3 Service Location Protocol (SLP) overview . . . 38
2.3.1 SLP architecture . . . 38
2.3.2 Common Information Model . . . 47
2.4 Component interaction . . . 49
2.4.1 CIMOM discovery with SLP . . . 49
2.4.2 How CIM Agent works . . . 50
2.5 Tivoli Common Agent Services . . . 51
2.5.1 Tivoli Agent Manager . . . 53
2.5.2 Common Agent . . . 53

Part 2. Installing the IBM TotalStorage Productivity Center base product suite . . . 55

Chapter 3. Installation planning and considerations . . . 57


3.1 Configuration . . . 58
3.2 Installation prerequisites . . . 58
3.2.1 TCP/IP ports used by TotalStorage Productivity Center . . . 59
3.2.2 Default databases created during the installation . . . 62
3.3 Our lab setup environment . . . 62
3.4 Pre-installation check list . . . 64
3.5 User IDs and security . . . 65
3.5.1 User IDs . . . 65
3.5.2 Increasing user security . . . 68
3.5.3 Certificates and key files . . . 69
3.5.4 Services and service accounts . . . 69
3.6 Starting and stopping the managers . . . 70
3.7 Windows Management Instrumentation . . . 70
3.8 World Wide Web Publishing . . . 73
3.9 Uninstalling Internet Information Services . . . 73
3.10 Installing SNMP . . . 73
3.11 IBM TotalStorage Productivity Center for Fabric . . . 75
3.11.1 The computer name . . . 75
3.11.2 Database considerations . . . 75
3.11.3 Windows Terminal Services . . . 75
3.11.4 Tivoli NetView . . . 76
3.11.5 Personal firewall . . . 77
3.11.6 Changing the HOSTS file . . . 77
3.12 IBM TotalStorage Productivity Center for Data . . . 78
3.12.1 Server recommendations . . . 78
3.12.2 Supported subsystems and databases . . . 78
3.12.3 Security considerations . . . 79
3.12.4 Creating the DB2 database . . . 81

Chapter 4. Installing the IBM TotalStorage Productivity Center suite . . . 83
4.1 Installing the IBM TotalStorage Productivity Center . . . 84
4.1.1 Considerations . . . 84
4.2 Prerequisite Software Installation . . . 85
4.2.1 Best practices . . . 85
4.2.2 Installing prerequisite software . . . 85
4.3 Suite installation . . . 110
4.3.1 Best practices . . . 110
4.3.2 Installing the TotalStorage Productivity Center suite . . . 110
4.3.3 IBM TotalStorage Productivity Center for Disk and Replication Base . . . 125
4.3.4 IBM TotalStorage Productivity Center for Disk . . . 140
4.3.5 IBM TotalStorage Productivity Center for Replication . . . 146
4.3.6 IBM TotalStorage Productivity Center for Fabric . . . 157
4.3.7 IBM TotalStorage Productivity Center for Data . . . 171

Chapter 5. CIMOM install and configuration . . . 191
5.1 Introduction . . . 192
5.2 Planning considerations for Service Location Protocol . . . 192
5.2.1 Considerations for using SLP DAs . . . 192
5.2.2 SLP configuration recommendation . . . 193
5.3 General performance guidelines . . . 194
5.4 Planning considerations for CIMOM . . . 194
5.4.1 CIMOM configuration recommendations . . . 195
5.5 Installing CIM agent for ESS . . . 196


5.5.1 ESS CLI Install . . . 196
5.5.2 DS CIM Agent install . . . 202
5.5.3 Post Installation tasks . . . 211
5.6 Configuring the DS CIM Agent for Windows . . . 212
5.6.1 Registering DS Devices . . . 212
5.6.2 Registering ESS Devices . . . 213
5.6.3 Register ESS server for Copy services . . . 214
5.6.4 Restart the CIMOM . . . 215
5.6.5 CIMOM user authentication . . . 215
5.7 Verifying connection to the ESS . . . 216
5.7.1 Problem determination . . . 219
5.7.2 Confirming the ESS CIMOM is available . . . 220
5.7.3 Setting up the Service Location Protocol Directory Agent . . . 221
5.7.4 Configuring TotalStorage Productivity Center for SLP discovery . . . 223
5.7.5 Registering the DS CIM Agent to SLP . . . 224
5.7.6 Verifying and managing CIMOMs availability . . . 224
5.8 Installing CIM agent for IBM DS4000 family . . . 225
5.8.1 Verifying and Managing CIMOM availability . . . 233
5.9 Configuring CIMOM for SAN Volume Controller . . . 234
5.9.1 Adding the SVC TotalStorage Productivity Center for Disk user account . . . 235
5.9.2 Registering the SAN Volume Controller host in SLP . . . 241
5.10 Configuring CIMOM for TotalStorage Productivity Center for Disk summary . . . 241
5.10.1 SLP registration and slptool . . . 242
5.10.2 Persistency of SLP registration . . . 243
5.10.3 Configuring slp.reg file . . . 243

Part 3. Configuring the IBM TotalStorage Productivity Center . . . 245

Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk . . . 247
6.1 Productivity Center for Disk Discovery summary . . . 248
6.2 SLP DA definition . . . 248
6.2.1 Verifying and managing CIMOMs availability . . . 256
6.3 Disk and Replication Manager remote GUI . . . 259
6.3.1 Installing Remote Console for Performance Manager function . . . 270
6.3.2 Launching Remote Console for TotalStorage Productivity Center . . . 277

Chapter 7. Configuring TotalStorage Productivity Center for Replication . . . 279
7.1 Installing a remote GUI and CLI . . . 280

Chapter 8. Configuring IBM TotalStorage Productivity Center for Data . . . 289
8.1 Configuring the CIM Agents . . . 290
8.1.1 CIM and SLP interfaces within Data Manager . . . 290
8.1.2 Configuring CIM Agents . . . 290
8.1.3 Setting up a disk alias . . . 293
8.2 Setting up the Web GUI . . . 295
8.2.1 Using IBM HTTP Server . . . 295
8.2.2 Using Internet Information Server . . . 299
8.2.3 Configuring the URL in Fabric Manager . . . 303
8.3 Installing the Data Manager remote console . . . 304
8.4 Configuring Data Manager for Databases . . . 313
8.5 Alert Disposition . . . 316

Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric . . . 319
9.1 TotalStorage Productivity Center component interaction . . . 320


9.1.1 IBM TotalStorage Productivity Center for Disk and Replication Base . . . 320
9.1.2 SNMP . . . 320
9.1.3 Tivoli Provisioning Manager . . . 320
9.2 Post-installation procedures . . . 321
9.2.1 Installing Productivity Center for Fabric Agent . . . 321
9.2.2 Installing Productivity Center for Fabric Remote Console . . . 331
9.3 Configuring IBM TotalStorage Productivity Center for Fabric . . . 342
9.3.1 Configuring SNMP . . . 342
9.3.2 Configuring the outband agents . . . 346
9.3.3 Checking inband agents . . . 348
9.3.4 Performing an initial poll and setting up the poll interval . . . 349

Chapter 10. Deployment of agents . . . 351
10.1 Installing the agents . . . 352
10.2 Data Agent installation using the installer . . . 354
10.3 Deploying the agent . . . 361

Part 4. Using the IBM TotalStorage Productivity Center . . . 373

Chapter 11. Using TotalStorage Productivity Center for Disk . . . 375
11.1 Productivity Center common base: Introduction . . . 376
11.2 Launching TotalStorage Productivity Center . . . 376
11.3 Exploiting Productivity Center common base . . . 377
11.3.1 Launch Device Manager . . . 378
11.4 Performing volume inventory . . . 378
11.5 Changing the display name of a storage device . . . 382
11.6 Working with ESS . . . 383
11.6.1 ESS Volume inventory . . . 384
11.6.2 Assigning and unassigning ESS Volumes . . . 385
11.6.3 Creating new ESS volumes . . . 387
11.6.4 Launch device manager for an ESS device . . . 388
11.7 Working with DS8000 . . . 389
11.7.1 DS8000 Volume inventory . . . 390
11.7.2 Assigning and unassigning DS8000 Volumes . . . 392
11.7.3 Creating new DS8000 volumes . . . 393
11.7.4 Launch device manager for a DS8000 device . . . 394
11.8 Working with SAN Volume Controller . . . 396
11.8.1 Working with SAN Volume Controller MDisks . . . 396
11.8.2 Creating new MDisks on supported storage devices . . . 399
11.8.3 Create and view SAN Volume Controller VDisks . . . 402
11.9 Working with DS4000 family or FAStT storage . . . 406
11.9.1 Working with DS4000 or FAStT volumes . . . 407
11.9.2 Creating DS4000 or FAStT volumes . . . 409
11.9.3 Assigning hosts to DS4000 and FAStT Volumes . . . 413
11.9.4 Unassigning hosts from DS4000 or FAStT volumes . . . 414
11.9.5 Volume properties . . . 415
11.10 Event Action Plan Builder . . . 416
11.10.1 Applying an Event Action Plan to a managed system or group . . . 421
11.10.2 Exporting and importing Event Action Plans . . . 423

Chapter 12. Using TotalStorage Productivity Center Performance Manager . . . 427
12.1 Exploiting Performance Manager . . . 428
12.1.1 Performance Manager GUI . . . 429
12.1.2 Performance Manager data collection . . . 429

12.1.3 Using IBM Director Scheduler function . . . 435
12.1.4 Reviewing data collection task status . . . 437
12.1.5 Managing Performance Manager Database . . . 439
12.1.6 Performance Manager gauges . . . 443
12.1.7 ESS thresholds . . . 457
12.1.8 Data collection for SAN Volume Controller . . . 460
12.1.9 SAN Volume Controller thresholds . . . 461
12.1.10 Data collection for the DS6000 and DS8000 . . . 463
12.1.11 DS6000 and DS8000 thresholds . . . 466
12.2 Exploiting gauges . . . 467
12.2.1 Before you begin . . . 468
12.2.2 Creating gauges: an example . . . 468
12.2.3 Zooming in on the specific time period . . . 471
12.2.4 Modify gauge to view array level metrics . . . 471
12.2.5 Modify gauge to review multiple metrics in same chart . . . 474
12.3 Performance Manager command line interface . . . 475
12.3.1 Performance Manager CLI commands . . . 475
12.3.2 Sample command outputs . . . 477
12.4 Volume Performance Advisor (VPA) . . . 478
12.4.1 VPA introduction . . . 478
12.4.2 The provisioning challenge . . . 478
12.4.3 Workload characterization and workload profiles . . . 479
12.4.4 Workload profile values . . . 479
12.4.5 How the Volume Performance Advisor makes decisions . . . 480
12.4.6 Enabling the Trace Logging for Director GUI Interface . . . 481
12.4.7 Getting started . . . 482
12.4.8 Creating and managing workload profiles . . . 508

Chapter 13. Using TotalStorage Productivity Center for Data . . . 521
13.1 TotalStorage Productivity Center for Data overview . . . 522
13.1.1 Business purpose of TotalStorage Productivity Center for Data . . . 522
13.1.2 Components of TotalStorage Productivity Center for Data . . . 522
13.1.3 Security considerations . . . 523
13.2 Functions of TotalStorage Productivity Center for Data . . . 523
13.2.1 Basic menu displays . . . 524
13.2.2 Discover and monitor Agents, disks, filesystems, and databases . . . 526
13.2.3 Reporting . . . 529
13.2.4 Alerts . . . 532
13.2.5 Chargeback: Charging for storage usage . . . 533
13.3 OS Monitoring . . . 533
13.3.1 Navigation tree . . . 534
13.3.2 Groups . . . 535
13.3.3 Discovery . . . 540
13.3.4 Pings . . . 542
13.3.5 Probes . . . 545
13.3.6 Profiles . . . 547
13.3.7 Scans . . . 552
13.4 OS Alerts . . . 555
13.4.1 Alerting navigation tree . . . 558
13.4.2 Computer Alerts . . . 560
13.4.3 Filesystem Alerts . . . 562
13.4.4 Directory Alerts . . . 563
13.4.5 Alert logs . . . 564


13.5 Policy management . . . 565
13.5.1 Quotas . . . 565
13.5.2 Network Appliance Quotas . . . 570
13.5.3 Constraints . . . 570
13.5.4 Filesystem extension and LUN provisioning . . . 576
13.5.5 Scheduled Actions . . . 582
13.6 Database monitoring . . . 583
13.6.1 Groups . . . 584
13.6.2 Probes . . . 585
13.6.3 Profiles . . . 586
13.6.4 Scans . . . 587
13.7 Database Alerts . . . 588
13.7.1 Instance Alerts . . . 588
13.7.2 Database-Tablespace Alerts . . . 588
13.7.3 Table Alerts . . . 589
13.7.4 Alert log . . . 589
13.8 Databases policy management . . . 589
13.8.1 Network Quotas . . . 590
13.8.2 Instance Quota . . . 591
13.8.3 Database Quota . . . 591
13.9 Database administration samples . . . 591
13.9.1 Database up . . . 591
13.9.2 Database utilization . . . 591
13.9.3 Need for reorganization . . . 591
13.10 Data Manager reporting capabilities . . . 592
13.10.1 Major reporting categories . . . 593
13.11 Using the standard reporting functions . . . 594
13.11.1 Asset Reporting . . . 595
13.11.2 Storage Subsystems Reporting . . . 604
13.11.3 Availability Reporting . . . 604
13.11.4 Capacity Reporting . . . 605
13.11.5 Usage Reporting . . . 607
13.11.6 Usage Violation Reporting . . . 610
13.11.7 Backup Reporting . . . 627
13.12 TotalStorage Productivity Center for Data ESS Reporting . . . 634
13.12.1 ESS Reporting . . . 634
13.13 IBM Tivoli Storage Resource Manager top 10 reports . . . 653
13.13.1 ESS used and free storage . . . 653
13.13.2 ESS attached hosts report . . . 656
13.13.3 Computer Uptime Reporting . . . 657
13.13.4 Growth in storage used and number of files . . . 659
13.13.5 Incremental backup trends . . . 661
13.13.6 Database reports against DBMS size . . . 665
13.13.7 Database instance storage report . . . 667
13.13.8 Database reports size by instance and by computer . . . 667
13.13.9 Locate the LUN on which a database is allocated . . . 669
13.13.10 Finding important files on your systems . . . 672
13.14 Creating customized reports . . . 683
13.14.1 System Reports . . . 683
13.14.2 Reports owned by a specific username . . . 686
13.14.3 Batch Reports . . . 688
13.15 Setting up a schedule for daily reports . . . 697
13.16 Setting up a reports Web site . . . 698

13.17 Charging for storage usage . . . 700

Chapter 14. Using TotalStorage Productivity Center for Fabric . . . 703
14.1 NetView navigation overview . . . 704
14.1.1 NetView interface . . . 704
14.1.2 Maps and submaps . . . 704
14.1.3 NetView window structure . . . 704
14.1.4 NetView Explorer . . . 705
14.1.5 NetView Navigation Tree . . . 707
14.1.6 Object selection and NetView properties . . . 707
14.1.7 Object symbols . . . 709
14.1.8 Object status . . . 709
14.1.9 Status propagation . . . 711
14.1.10 NetView and Productivity Center for Fabric integration . . . 711
14.2 Walk-through of Productivity Center for Fabric . . . 712
14.2.1 Device Centric view . . . 713
14.2.2 Host Centric view . . . 714
14.2.3 SAN view . . . 714
14.2.4 Launching element managers . . . 723
14.2.5 Explore view . . . 725
14.3 Topology views . . . 725
14.3.1 SAN view . . . 727
14.3.2 Device Centric View . . . 731
14.3.3 Host Centric View . . . 732
14.3.4 iSCSI discovery . . . 733
14.3.5 MDS 9000 discovery . . . 734
14.4 SAN menu options . . . 735
14.4.1 SAN Properties . . . 735
14.5 Application launch . . . 739
14.5.1 Native support . . . 740
14.5.2 NetView support for Web interfaces . . . 740
14.5.3 Launching TotalStorage Productivity Center for Data . . . 742
14.5.4 Other menu options . . . 742
14.6 Status cycles . . . 743
14.7 Practical cases . . . 745
14.7.1 Cisco MDS 9000 discovery . . . 745
14.7.2 Removing a connection on a device running an inband agent . . . 747
14.7.3 Removing a connection on a device not running an agent . . . 750
14.7.4 Powering off a switch . . . 752
14.7.5 Running discovery on a RNID-compatible device . . . 756
14.7.6 Outband agents only . . . 758
14.7.7 Inband agents only . . . 760
14.7.8 Disk devices discovery . . . 762
14.7.9 Well placed agent strategy . . . 764
14.8 NetView . . . 766
14.8.1 Reporting overview . . . 767
14.8.2 SNMP and MIBs . . . 767
14.9 NetView setup and configuration . . . 769
14.9.1 Advanced Menu . . . 769
14.9.2 Copy Brocade MIBs . . . 770
14.9.3 Loading MIBs . . . 771
14.10 Historical reporting . . . 774
14.10.1 Creating a Data Collection . . . 775


14.10.2 Database maintenance . . . 783
14.10.3 Troubleshooting the Data Collection daemon . . . 784
14.10.4 NetView Graph Utility . . . 784
14.11 Real-time reporting . . . 786
14.11.1 MIB Tool Builder . . . 787
14.11.2 Displaying real-time data . . . 791
14.11.3 SmartSets . . . 794
14.11.4 SmartSets and Data Collections . . . 802
14.11.5 Seed file . . . 805
14.12 Productivity Center for Fabric and iSCSI . . . 810
14.13 What is iSCSI? . . . 811
14.14 How does iSCSI work? . . . 811
14.15 Productivity Center for Fabric and iSCSI . . . 812
14.15.1 Functional description . . . 813
14.15.2 iSCSI discovery . . . 813
14.16 ED/FI - SAN Error Predictor . . . 814
14.16.1 Overview . . . 814
14.16.2 Error processing . . . 816
14.16.3 Configuration for ED/FI - SAN Error Predictor . . . 818
14.16.4 Using ED/FI . . . 820
14.16.5 Searching for the faulted device on the topology map . . . 822
14.16.6 Removing notifications . . . 825

Chapter 15. Using TotalStorage Productivity Center for Replication . . . 827
15.1 TotalStorage Productivity Center for Replication overview . . . 828
15.1.1 Supported Copy Services . . . 828
15.1.2 Replication session . . . 830
15.1.3 Storage group . . . 831
15.1.4 Storage pools . . . 831
15.1.5 Relationship of group, pool, and session . . . 832
15.1.6 Copyset and sequence concepts . . . 833
15.2 Exploiting Productivity Center for replication . . . 834
15.2.1 Before you start . . . 834
15.2.2 Adding a replication device . . . 834
15.2.3 Creating a storage group . . . 838
15.2.4 Modifying a storage group . . . 841
15.2.5 Viewing storage group properties . . . 842
15.2.6 Deleting a storage group . . . 843
15.2.7 Creating a storage pool . . . 844
15.2.8 Modifying a storage pool . . . 847
15.2.9 Deleting a storage pool . . . 848
15.2.10 Viewing storage pool properties . . . 849
15.2.11 Creating storage paths . . . 850
15.2.12 Point-in-Time Copy - creating a session . . . 852
15.2.13 Creating a session - verifying source-target relationship . . . 856
15.2.14 Continuous Synchronous Remote Copy - creating a session . . . 861
15.2.15 Managing a Point-in-Time copy . . . 866
15.2.16 Managing a Continuous Synchronous Remote Copy . . . 873
15.3 Using Command Line Interface (CLI) for replication . . . 884
15.3.1 Session details . . . 886
15.3.2 Starting a session . . . 888
15.3.3 Suspending a session . . . 892
15.3.4 Terminating a session . . . 893


Chapter 16. Hints, tips, and good-to-knows . . . 899
16.1 SLP configuration recommendation . . . 900
16.1.1 SLP registration and slptool . . . 901
16.2 Tivoli Common Agent Services . . . 901
16.2.1 Locations of configured user IDs . . . 901
16.2.2 Resource Manager registration . . . 902
16.2.3 Tivoli Agent Manager status . . . 902
16.2.4 Registered Fabric Agents . . . 904
16.2.5 Registered Data Agents . . . 906
16.3 Launchpad . . . 906
16.3.1 Launchpad installation . . . 907
16.3.2 Launchpad customization . . . 909
16.4 Remote consoles . . . 911
16.5 Verifying whether a port is in use . . . 911
16.6 Manually removing old CIMOM entries . . . 911
16.7 Collecting logs for support . . . 917
16.7.1 IBM Director logfiles . . . 917
16.7.2 Using Event Action Plans . . . 921
16.7.3 Following Discovery using Windows raswatch utility . . . 921
16.7.4 DB2 database checking . . . 922
16.7.5 IBM WebSphere tracing and logfile browsing . . . 927
16.8 SLP and CIM Agent problem determination . . . 928
16.8.1 Enabling SLP tracing . . . 929
16.8.2 Device registration . . . 930
16.9 Replication Manager problem determination . . . 930
16.9.1 Diagnosing an indications problem . . . 931
16.9.2 Restarting the replication environment . . . 931
16.10 Enabling trace logging . . . 931
16.10.1 Enabling WebSphere Application Server trace . . . 932
16.11 ESS user authentication problem . . . 940
16.12 SVC Data collection task failure . . . 940

Chapter 17. Database management and reporting . . . 943
17.1 DB2 database overview . . . 944
17.2 Database purging in TotalStorage Productivity Center . . . 944
17.2.1 Performance manager database panel . . . 945
17.3 IBM DB2 tool suite . . . 948
17.3.1 Command Line Tools . . . 948
17.3.2 Development Tools . . . 950
17.3.3 General Administration Tools . . . 950
17.3.4 Monitoring Tools . . . 951
17.4 DB2 Command Center overview . . . 952
17.4.1 Command Center navigation example . . . 952
17.5 DB2 Command Center custom report example . . . 956
17.5.1 Extracting LUN data report . . . 956
17.5.2 Command Center report . . . 959
17.6 Exporting collected performance data to a file . . . 976
17.6.1 Control Center . . . 976
17.6.2 Data extraction tools, tips and reporting methods . . . 979
17.7 Database backup and recovery overview . . . 984
17.8 Backup example . . . 988

Appendix A. Worksheets . . . 991
User IDs and passwords . . . 992



Server information . . . 992
User IDs and passwords for key files and installation . . . 993
Storage device information . . . 994
IBM TotalStorage Enterprise Storage Server . . . 994
IBM FAStT . . . 995
IBM SAN Volume Controller . . . 996

Related publications . . . 997
IBM Redbooks . . . 997
Other publications . . . 997
Online resources . . . 997
How to get IBM Redbooks . . . 998
Help from IBM . . . 998

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 999


Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.


Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX Cloudscape DB2 DB2 Universal Database e-business on demand Enterprise Storage Server Eserver Eserver FlashCopy IBM ibm.com iSeries MVS Netfinity NetView OS/390 Predictive Failure Analysis pSeries QMF Redbooks Redbooks (logo) S/390 Sequent ThinkPad Tivoli Enterprise Tivoli Enterprise Console Tivoli TotalStorage WebSphere xSeries z/OS zSeries 1-2-3

The following terms are trademarks of other companies: Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, and service names may be trademarks or service marks of others.


Preface
IBM TotalStorage Productivity Center is a suite of infrastructure management software that can centralize, automate, and simplify the management of complex and heterogeneous storage environments. It can help reduce the effort of managing complex storage infrastructures, improve storage capacity utilization, and improve administration efficiency. IBM TotalStorage Productivity Center allows you to respond to on demand storage needs and brings together, in a single point, the management of storage devices, fabric, and data. This IBM Redbook is intended for administrators and users who are installing and using IBM TotalStorage Productivity Center V2.3. It provides an overview of the product components and functions. We describe the hardware and software environment required, provide a step-by-step installation procedure, and offer customization and usage hints and tips. This book is not a replacement for the existing IBM Redbooks, or product manuals, that detail the implementation and configuration of the individual products that make up the IBM TotalStorage Productivity Center, or the products as they may have been called in previous versions. We refer to those books as appropriate throughout this book.

The team that wrote this redbook


This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization (ITSO), San Jose Center. Mary Lovelace is a Consulting IT Specialist at the ITSO in San Jose, California. She has more than 20 years of experience with IBM in large systems, storage and Storage Networking product education, system engineering and consultancy, and systems support. Larry Mc Gimsey is a consulting IT Architect working in Managed Storage Services delivery supporting worldwide SAN storage customers. He has over 30 years experience in IT. He joined IBM 6 years ago as a result of an outsourcing engagement. Most of his experience prior to joining IBM was in mainframe systems support. It included system programming, performance management, capacity planning, system automation and storage management. Since joining IBM, Larry has been working with large SAN environments. He currently works with Managed Storage Services offering and delivery teams to define the architecture used to deliver worldwide storage services. Ivo Gomilsek is an IT Specialist for IBM Global Services, Slovenia, supporting the Central and Eastern European Region in architecting, deploying, and supporting SAN/storage/DR solutions. His areas of expertise include SAN, storage, HA systems, xSeries servers, network operating systems (Linux, MS Windows, OS/2), and Lotus Domino servers. He holds several certifications from various vendors (IBM, Red Hat, Microsoft). Ivo has contributed to various other redbooks on Tivoli products, SAN, Linux/390, xSeries, and Linux. Mary Anne Marquez is the team lead for tape performance at IBM Tucson. She has extensive knowledge in setting up a TotalStorage Productivity Center environment for use with Copy Services and Performance Management, as well as debugging the various components of TotalStorage Productivity Center including WebSphere, ICAT, and the CCW interface for ESS. In addition to TPC, Mary Anne has experience with the native Copy Services tools on ESS model-800 and DS8000. She has authored several performance white papers.

Thanks to the following people for their contributions to this project: Sangam Racherla Yvonne Lyon ITSO, San Jose Center Bob Haimowitz ITSO, Raleigh Center Diana Duan Tina Dunton Nancy Hobbs Paul Lee Thiha Than Miki Walter IBM San Jose Martine Wedlake IBM Beaverton Ryan Darris IBM Tucson Doug Dunham Tivoli Storage SWAT Team Mike Griese Technical Support Marketing Lead, Rochester Curtis Neal Scott Venuti Open System Demo Center, San Jose


Become a published author


Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability. Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online Contact us review redbook form found at:
ibm.com/redbooks

Send your comments in an email to:


redbook@us.ibm.com

Mail your comments to: IBM Corporation, International Technical Support Organization Dept. QXXE Building 80-E2 650 Harry Road San Jose, California 95120-6099


Part 1

IBM TotalStorage Productivity Center foundation


In this part of the book we introduce the IBM TotalStorage Productivity Center: Chapter 1, IBM TotalStorage Productivity Center overview on page 3, contains an overview of the components of IBM TotalStorage Productivity Center. Chapter 2, Key concepts on page 27, provides information about the communication protocols and standards organizations that are the foundation for understanding the IBM TotalStorage Productivity Center.


Chapter 1. IBM TotalStorage Productivity Center overview


IBM TotalStorage Productivity Center is software, part of the IBM TotalStorage open software family, designed to provide a single point of control for managing both IBM and non-IBM networked storage devices that implement the Storage Management Initiative Specification (SMI-S), including the IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage Fibre Array Storage Technology (FAStT), IBM TotalStorage DS4000, IBM TotalStorage DS6000, and IBM TotalStorage DS8000 series. TotalStorage Productivity Center is a solution for customers with storage management requirements, who want to reduce the complexities and costs of storage management, including management of SAN-based storage, while consolidating control within a consistent graphical user interface. This chapter provides an overview of the entire IBM TotalStorage Open Software Family.


1.1 Introduction to IBM TotalStorage Productivity Center


The IBM TotalStorage Productivity Center consists of software components which enable storage administrators to monitor, configure, and manage storage devices and subsystems within a SAN environment. The TotalStorage Productivity Center is based on the recent standard issued by the Storage Networking Industry Association (SNIA). The standard addresses the interoperability of storage hardware and software within a SAN.

1.1.1 Standards organizations and standards


Today, there are at least 10 organizations involved in creating standards for storage, storage management, SAN management, and interoperability. Figure 1-1 shows the key organizations involved in developing and promoting standards relating to storage, storage management, and SAN management, and the relevant standards for which they are responsible.

Figure 1-1 SAN management standards bodies

Key standards for Storage Management are:
- Distributed Management Task Force (DMTF) Common Information Model (CIM) Standards. This includes the CIM Device Model for Storage, which at the time of writing was Version 2.7.2 for the CIM schema.
- Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S).


1.2 IBM TotalStorage Open Software family


The IBM TotalStorage Open Software Family is designed to provide a full range of capabilities, including storage infrastructure management, Hierarchical Storage Management (HSM), archive management, and recovery management. The On Demand storage environment is shown in Figure 1-2. The hardware infrastructure is a complete range of IBM storage hardware and devices, providing flexibility in choice of service quality and cost structure. On top of the hardware infrastructure is the virtualization layer. Storage virtualization is infrastructure software designed to pool storage assets, enabling optimized use of storage assets across the enterprise and the ability to modify the storage infrastructure with minimal or no disruption to application services. The next layer is composed of storage infrastructure management, to help enterprises understand and proactively manage their storage infrastructure in the On Demand world; hierarchical storage management, to help control growth; archive management, to manage the cost of storing huge quantities of data; and recovery management, to ensure recoverability of data. The top layer is storage orchestration, which automates workflows to help eliminate human error.

Figure 1-2 Enabling customer to move toward On Demand


Previously we discussed the next steps or entry points into an On Demand environment. The IBM software products which represent these entry points and which comprise the IBM TotalStorage Open Software Family are shown in Figure 1-3.

Figure 1-3 IBM TotalStorage Open Software Family

1.3 IBM TotalStorage Productivity Center


The IBM TotalStorage Productivity Center is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to On Demand storage needs. The IBM TotalStorage Productivity Center offering is a powerful set of tools designed to help simplify the management of complex storage network environments. The IBM TotalStorage Productivity Center consists of TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, TotalStorage Productivity Center for Data (formerly Tivoli Storage Resource Manager), and TotalStorage Productivity Center for Fabric (formerly Tivoli SAN Manager).


Taking a closer look at storage infrastructure management (see Figure 1-4), we focus on four subject matter experts that empower the storage administrators to do their work effectively:
- Data subject matter expert
- SAN Fabric subject matter expert
- Disk subject matter expert
- Replication subject matter expert

Figure 1-4 Centralized, automated storage infrastructure management

1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data
The Data subject matter expert has intimate knowledge of how storage is used, for example whether the data is used by a file system or a database application. Figure 1-5 on page 8 shows the role of the Data subject matter expert which is filled by the TotalStorage Productivity Center for Data (formerly the IBM Tivoli Storage Resource Manager).


Figure 1-5 Monitor and Configure the Storage Infrastructure Data area

Heterogeneous storage infrastructures, driven by growth in file and database data, consume increasing amounts of administrative time, as well as actual hardware resources. IT managers need ways to make their administrators more efficient and to utilize their storage resources more efficiently. Tivoli Storage Resource Manager gives storage administrators the automated tools they need to manage their storage resources more cost-effectively. TotalStorage Productivity Center for Data allows you to identify different classes of data, report how much space is being consumed by these different classes, and take appropriate actions to keep the data under control. Features of the TotalStorage Productivity Center for Data are:
- Automated identification of the storage resources in an infrastructure and analysis of how effectively those resources are being used.
- File-system and file-level evaluation, which uncovers categories of files that, if deleted or archived, can potentially represent significant reductions in the amount of data that must be stored, backed up, and managed (a small sketch of this kind of evaluation follows this list).
- Automated control through policies that are customizable, with actions that can include centralized alerting, distributed responsibility, and fully automated response.
- Prediction of future growth and future at-risk conditions with historical information.
Through monitoring and reporting, TotalStorage Productivity Center for Data helps the storage administrator prevent outages in the storage infrastructure. Armed with timely information, the storage administrator can take action to keep storage and data available to the application. TotalStorage Productivity Center for Data also helps to make the most efficient use of storage budgets, by allowing administrators to use their existing storage more efficiently and more accurately predict future storage growth.
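The file-level evaluation just mentioned boils down to scanning a file system and totaling the space held by aging files. The following minimal Python sketch illustrates the idea only; it is not part of the product, and the directory path and aging threshold are arbitrary assumptions:

import os
import time

AGE_DAYS = 365          # assumed aging threshold
ROOT = "/data"          # assumed file system to evaluate

cutoff = time.time() - AGE_DAYS * 24 * 3600
stale_files, stale_bytes = 0, 0

for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.stat(path)
        except OSError:
            continue                      # skip files that cannot be examined
        if st.st_atime < cutoff:          # not accessed within the threshold
            stale_files += 1
            stale_bytes += st.st_size

print("%d files (%.1f GB) not accessed in %d days" %
      (stale_files, stale_bytes / 1e9, AGE_DAYS))

TotalStorage Productivity Center for Data performs this kind of analysis automatically across all monitored hosts, stores the results in its repository, and lets policies act on them.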


TotalStorage Productivity Center for Data monitors storage assets, capacity, and usage across an enterprise. TotalStorage Productivity Center for Data can look at:
- Storage from a host perspective: manage all the host-attached storage, capacity, and consumption attributed to file systems, users, directories, and files
- Storage from an application perspective: monitor and manage the storage activity inside different database entities, including instance, tablespace, and table
- Storage utilization, to provide chargeback information

Architecture
The TotalStorage Productivity Center for Data server system manages a number of Agents, which can be servers with storage attached, NAS systems, or database application servers. Information is collected from the Agents and stored in a database repository. The stored information can then be displayed from a native GUI client or browser interface anywhere in the network. The GUI or browser interface gives access to the other functions of TotalStorage Productivity Center for Data, including creating and customizing a large number of different types of reports and setting up alerts. With TotalStorage Productivity Center for Data, you can:
- Monitor virtually any host
- Monitor local, SAN-attached, and Network Attached Storage from a browser anywhere on the network
For more information refer to the redbook IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886.

1.3.2 Fabric subject matter expert: Productivity Center for Fabric


The storage infrastructure management for Fabric covers the Storage Area Network (SAN). To handle and manage SAN events you need a comprehensive tool. The tool must have a single point of operation and must be able to perform all SAN management tasks. This role is filled by the TotalStorage Productivity Center for Fabric (formerly the IBM Tivoli SAN Manager), which is a part of the IBM TotalStorage Productivity Center. The Fabric subject matter expert is the expert in the SAN. Its role is:
- Discovery of fabric information
- Providing the ability to specify fabric policies, such as which HBAs to use for each host and for what purpose, and objectives for zone configuration (for example, shielding host HBAs from one another, and performance)
- Automatically modifying the zone configuration
TotalStorage Productivity Center for Fabric provides real-time visual monitoring of SANs, including heterogeneous switch support, and is a central point of control for SAN configuration (including zoning). It automates the management of heterogeneous storage area networks, resulting in:
- Improved application availability, by predicting storage network failures before they happen (enabling preventive maintenance) and by accelerating problem isolation when failures do happen


- Optimized storage resource utilization, by reporting on storage network performance
- Enhanced storage personnel productivity, by creating a single point of control, administration, and security for the management of heterogeneous storage networks
Figure 1-6 describes the requirements that must be addressed by the Fabric subject matter expert.

Figure 1-6 Monitor and Configure the Storage Infrastructure Fabric area

TotalStorage Productivity Center for Fabric monitors and manages switches and hubs, storage, and servers in a Storage Area Network. TotalStorage Productivity Center for Fabric can be used for both online monitoring and historical reporting. TotalStorage Productivity Center for Fabric:
- Manages fabric devices (switches) through outband management
- Discovers many details about a monitored server and its local storage through an Agent loaded onto a SAN-attached host (Managed Host)
- Monitors the network and collects events and traps
- Launches vendor-provided, device-specific SAN element management applications from the TotalStorage Productivity Center for Fabric Console
- Discovers and manages iSCSI devices
- Provides a fault isolation engine for SAN problem determination (ED/FI - SAN Error Predictor)
TotalStorage Productivity Center for Fabric is compliant with the standards relevant to SAN storage and management.


TotalStorage Productivity Center for Fabric components


The major components of the TotalStorage Productivity Center for Fabric include:
- A manager or server, running on a SAN managing server
- Agents, running on one or more managed hosts
- A management console, which is by default on the Manager system, plus optional additional remote consoles
- Outband agents, consisting of vendor-supplied MIBs for SNMP
There are two additional components which are not included in the TotalStorage Productivity Center:
- IBM Tivoli Enterprise Console (TEC), which is used to receive events generated by TotalStorage Productivity Center for Fabric. Once forwarded to TEC, these can then be consolidated with events from other applications and acted on according to enterprise policy.
- IBM Tivoli Enterprise Data Warehouse (TEDW), which is used to collect and analyze data gathered by the TotalStorage Productivity Center for Fabric. The Tivoli Enterprise Data Warehouse collects, organizes, and makes data available for analysis, giving management the ability to access and analyze information about its business.
The TotalStorage Productivity Center for Fabric functions are distributed across the Manager and the Agent.

TotalStorage Productivity Center for Fabric Server


- Performs initial discovery of environment:
  - Gathers and correlates data from agents on managed hosts
  - Gathers data from SNMP (outband) agents
- Graphically displays SAN topology and attributes
- Provides customized monitoring and reporting through NetView
- Reacts to operational events by changing its display
- (Optionally) forwards events to Tivoli Enterprise Console or SNMP managers

TotalStorage Productivity Center for Fabric Agent


- Gathers information about:
  - SANs, by querying switches and devices for attribute and topology information
  - Host-level storage, such as file systems and LUNs
  - Event and other information detected by HBAs
- Forwards topology and event information to the Manager

Discover SAN components and devices


TotalStorage Productivity Center for Fabric uses two methods to discover information about the SAN - outband discovery, and inband discovery. Outband discovery is the process of discovering SAN information, including topology and device data, without using the Fibre Channel data paths. Outband discovery uses SNMP queries, invoked over IP network. Outband management and discovery is normally used to manage devices such as switches and hubs which support SNMP.


In outband discovery, all communications occur over the IP network:
- TotalStorage Productivity Center for Fabric requests information over the IP network from a switch, using SNMP queries to the device.
- The device returns the information to TotalStorage Productivity Center for Fabric, also over the IP network.
Inband discovery is the process of discovering information about the SAN, including topology and attribute data, through the Fibre Channel data paths. In inband discovery, both the IP and Fibre Channel networks are used:
- TotalStorage Productivity Center for Fabric requests information (via the IP network) from a Tivoli SAN Manager agent installed on a Managed Host.
- That agent requests information over the Fibre Channel network from fabric elements and end points in the Fibre Channel network.
- The agent returns the information to TotalStorage Productivity Center for Fabric over the IP network.
TotalStorage Productivity Center for Fabric collects, correlates, and displays information from all devices in the storage network, using both the IP network and the Fibre Channel network. If the Fibre Channel network is unavailable for any reason, monitoring can still continue over the IP network.
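To make the outband path more concrete, the following short Python sketch issues a single SNMP GET against a switch over the IP network, which is essentially what an outband query does at the protocol level. It assumes the third-party pysnmp package, an SNMPv1 community string of public, and a fictitious switch address; the product uses its own SNMP implementation, so this is only an illustration:

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Query sysName (1.3.6.1.2.1.1.5.0) from the standard MIB-II system group
error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=0),       # SNMPv1 community (assumed)
           UdpTransportTarget(('10.0.0.50', 161)),   # fictitious switch address
           ContextData(),
           ObjectType(ObjectIdentity('1.3.6.1.2.1.1.5.0'))))

if error_indication:
    print("SNMP query failed:", error_indication)
else:
    for name, value in var_binds:
        print(name.prettyPrint(), "=", value.prettyPrint())

Inband discovery, by contrast, relies on the agent on the Managed Host using its HBA to query the fabric directly, so no single fragment of this kind reproduces it outside the agent.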

TotalStorage Productivity Center for Fabric benefits


TotalStorage Productivity Center for Fabric discovers the SAN infrastructure and monitors the status of all the discovered components. Through Tivoli NetView, the administrator can produce reports on faults on components (either individually or in groups, or smartsets, of components). This helps increase data availability for applications, so the company can either be more efficient or maximize the opportunity to produce revenue. TotalStorage Productivity Center for Fabric helps the storage administrator:
- Prevent faults in the SAN infrastructure through reporting and proactive maintenance
- Identify and resolve problems in the storage infrastructure quickly when a problem does occur
- Provide fault isolation of SAN links
For more information about the TotalStorage Productivity Center for Fabric, refer to IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848.

1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk
The Disk subject matter expert's job is to manage the disk systems. It discovers and classifies all disk systems that exist and draws a picture of all discovered disk systems. The Disk subject matter expert provides the ability to monitor, configure, and create disks, and to do LUN masking of disks. It also does performance trending and performance threshold I/O analysis for both real disks and virtual disks, as well as automated status and problem alerts via SNMP. This role is filled by the TotalStorage Productivity Center for Disk (formerly the IBM TotalStorage Multiple Device Manager Performance Manager component). The requirements addressed by the Disk subject matter expert are shown in Figure 1-7 on page 13. The disk systems monitoring and configuration needs must be covered by a comprehensive management tool like the TotalStorage Productivity Center for Disk.


Figure 1-7 Monitor and configure the Storage Infrastructure Disk area

The TotalStorage Productivity Center for Disk provides the raw capabilities of initiating and scheduling performance data collection on the supported devices, of storing the received performance statistics into database tables for later use, and of analyzing the stored data and generating reports for various metrics of the monitored devices. In conjunction with data collection, the TotalStorage Productivity Center for Disk is responsible for managing and monitoring the performance of the supported storage devices. This includes the ability to configure performance thresholds for the devices based on performance metrics, the generation of alerts when these thresholds are exceeded, the collection and maintenance of historical performance data, and the creation of gauges, or performance reports, for the various metrics to display the collected historical data to the end user. The TotalStorage Productivity Center for Disk enables you to perform sophisticated performance analysis for the supported storage devices.

Functions
TotalStorage Productivity Center for Disk provides the following functions:
- Collect data from devices. The Productivity Center for Disk collects data from the IBM TotalStorage Enterprise Storage Server (ESS), SAN Volume Controller (SVC), DS4000 family, and SMI-S enabled devices. Each Performance Collector collects performance data from one or more storage groups, all of the same device type (for example, ESS or SAN Volume Controller). Each Performance Collection has a start time, a stop time, and a sampling frequency. The performance sample data is stored in DB2 database tables.
- Configure performance thresholds. You can use the Productivity Center for Disk to set performance thresholds for each device type. Setting thresholds for certain criteria enables Productivity Center for Disk to notify you when a certain threshold has been exceeded, so that you can take action before a critical event occurs.


You can specify what action should be taken when a threshold-exceeded condition occurs. The action may be to log the occurrence or to trigger an event. The threshold settings can vary by individual device.
- Monitor performance metrics across storage subsystems from a single console.
- Receive timely alerts to enable event action based on customer policies.
- View performance data from the Productivity Center for Disk database. You can view performance data from the Productivity Center for Disk database in both graphical and tabular forms. The Productivity Center for Disk allows a TotalStorage Productivity Center user to access recent performance data in terms of a series of values of one or more metrics, associated with a finite set of components per device. Only recent performance data is available for gauges; data that has been purged from the database cannot be viewed. You can define one or more gauges by selecting certain gauge properties and saving them for later referral. Each gauge is identified through a user-specified name, and once defined, a gauge can be started, which means it is then displayed in a separate window of the TotalStorage Productivity Center GUI. You can have multiple gauges active at the same time. Gauge definition is accomplished through a wizard, to aid in entering a valid set of gauge properties. Gauges are saved in the Productivity Center for Disk database and retrieved upon request. When you request data pertaining to a defined gauge, the Performance Manager builds a query to the database, retrieves and formats the data, and returns it to you. Once started, a gauge is displayed in its own window, and displays all available performance data for the specified initial date/time range. The date/time range can be changed after the initial gauge window is displayed.
- Focus on storage optimization through identification of the best LUN. The Volume Performance Advisor is an automated tool to help the storage administrator pick the best possible placement of a new LUN to be allocated, that is, the best placement from a performance perspective. It uses the historical performance statistics collected from the supported devices to locate unused storage capacity on the SAN that exhibits the best (estimated) performance characteristics. Allocation optimization involves several variables which are user controlled, such as required performance level and the time of day/week/month of prevalent access. This function is fully integrated with the Device Manager function, so that when a new LUN is added, for example, to the ESS, the Performance Manager can seamlessly select the best possible LUN.
For detailed information about how to use the functions of the TotalStorage Productivity Center for Disk, refer to Chapter 11, Using TotalStorage Productivity Center for Disk on page 375.

1.3.4 Replication subject matter expert: Productivity Center for Replication


The Replication subject matter expert's job is to provide a single point of control for all replication activities. This role is filled by the TotalStorage Productivity Center for Replication. Given a set of source volumes to be replicated, the Productivity Center for Replication will find the appropriate targets, perform all the configuration actions required, and ensure that the source and target volume relationships are set up. Given a set of source volumes that represent an application, the Productivity Center for Replication will group these in a consistency group, give that consistency group a name, and allow you to start replication on the application.


Productivity Center for Replication will start up all replication pairs and monitor them to completion. If any of the replication pairs fail, meaning the application is out of sync, the Productivity Center for Replication will suspend them until the problem is resolved, then resync them and resume the replication. The Productivity Center for Replication provides complete management of the replication process. The requirements addressed by the Replication subject matter expert are shown in Figure 1-8. Replication in a complex environment needs to be addressed by a comprehensive management tool like the TotalStorage Productivity Center for Replication.

Figure 1-8 Monitor and Configure the Storage Infrastructure Replication area
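The pair-management behavior described above (start all pairs, monitor them, suspend the session on a failure, then resync and resume) can be pictured with the following highly simplified Python sketch. Every class, method, and volume name here is invented for illustration only and does not correspond to the Productivity Center for Replication interfaces:

class ReplicationPair:
    # One source/target volume relationship (illustrative only)
    def __init__(self, source, target):
        self.source, self.target = source, target
        self.state = "defined"

class Session:
    # A consistency group of pairs managed as one unit (illustrative only)
    def __init__(self, name, pairs):
        self.name, self.pairs = name, pairs

    def start(self):
        for pair in self.pairs:
            pair.state = "copying"

    def monitor(self, failed_targets):
        # If any pair fails, the application data is out of sync:
        # suspend until the problem is resolved.
        for pair in self.pairs:
            if pair.target in failed_targets:
                pair.state = "suspended"
        if any(p.state == "suspended" for p in self.pairs):
            print("session", self.name, "suspended until the problem is resolved")

    def resync(self):
        # Resume replication for the repaired pairs
        for pair in self.pairs:
            if pair.state == "suspended":
                pair.state = "copying"

session = Session("payroll", [ReplicationPair("ESS1:1000", "ESS2:1100"),
                              ReplicationPair("ESS1:1001", "ESS2:1101")])
session.start()
session.monitor(failed_targets={"ESS2:1101"})
session.resync()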

Functions
Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. Replication Manager administers and configures the copy services functions and monitors the replication actions. Its capabilities consist of the management of two types of copy services: the Continuous Copy (also known as Peer-to-Peer, PPRC, or Remote Copy), and the Point-in-Time Copy (also known as FlashCopy). At this time TotalStorage Productivity Center for Replication supports the IBM TotalStorage ESS. Productivity Center for Replication includes support for replica sessions, which ensures that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary primitive operations. Productivity Center for Replication also supports the session concept, such that multiple pairs are handled as a consistent unit, and that Freeze-and-Go functions can be performed when errors in mirroring occur. Productivity Center for Replication is designed to control and monitor the copy services operations in large-scale customer environments.


Productivity Center for Replication provides a user interface for creating, maintaining, and using volume groups and for scheduling copy tasks. The user interface populates lists of volumes using the Device Manager interface. Some of the tasks you can perform with Productivity Center for Replication are:
- Create a replication group. A replication group is a collection of volumes grouped together so that they can be managed concurrently.
- Set up a group for replication.
- Create, save, and name a replication task.
- Schedule a replication session with the user interface: Create Session Wizard, Select Source Group, Select Copy Type, Select Target Pool, Save Session.
- Start a replication session.
A user can also perform these tasks with the Productivity Center for Replication command-line interface. For more information about the Productivity Center for Replication functions, refer to Chapter 15, Using TotalStorage Productivity Center for Replication on page 827.

1.4 IBM TotalStorage Productivity Center


All the subject matter experts, for Data, Fabric, Disk, and Replication, are components of the IBM TotalStorage Productivity Center. The IBM TotalStorage Productivity Center is the first offering to be delivered as part of the IBM TotalStorage Open Software Family. The IBM TotalStorage Productivity Center is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to on demand storage needs. The IBM TotalStorage Productivity Center allows you to manage your storage infrastructure using the existing storage management products (Productivity Center for Data, Productivity Center for Fabric, Productivity Center for Disk, and Productivity Center for Replication) from one physical place. The IBM TotalStorage Productivity Center components can be launched from the IBM TotalStorage Productivity Center launch pad as shown in Figure 1-9 on page 17.


Figure 1-9 IBM TotalStorage Productivity Center Launch Pad

The IBM TotalStorage Productivity Center establishes the foundation for IBM's e-business On Demand technology. In an On Demand environment we need the function to provide IT resources on demand - when the resources are needed by an application to support the customer's business process. Of course, we are able to provide or remove resources today, but the question is how: the process is expensive and time consuming. The IBM TotalStorage Productivity Center is the basis for the provisioning of storage resources to make the e-business On Demand environment a reality. In the future, more automation will be required to handle the huge amount of work in the provisioning area - more automation like the IBM TotalStorage Productivity Center launch pad provides. Automation means workflow, and workflow is the key to getting work automated. IBM has a long history and investment in building workflow engines and workflows. Today IBM uses the IBM Tivoli Intelligent Orchestrator and IBM Tivoli Provisioning Manager to satisfy resource requests in the e-business On Demand environment in the server arena. The IBM Tivoli Intelligent Orchestrator and the IBM Tivoli Provisioning Manager provide the provisioning in the e-business On Demand environment.

1.4.1 Productivity Center for Disk and Productivity Center for Replication
The Productivity Center for Disk and Productivity Center for Replication are software that has been designed to enable administrators to manage SANs and storage from a single console (Figure 1-10 on page 18). This software solution is designed specifically for managing networked storage components based on the SMI-S, including:
- IBM TotalStorage SAN Volume Controller
- IBM TotalStorage Enterprise Storage Server (ESS)
- IBM TotalStorage Fibre Array Storage Technology (FAStT)
- IBM TotalStorage DS4000 series
- Other SMI-S enabled devices


Figure 1-10 Managing multiple devices

Productivity Center for Disk and Productivity Center for Replication are built on IBM Director, a comprehensive server management solution. Using Director with the multiple device management solution enables administrators to consolidate the administration of IBM storage subsystems and provide advanced storage management functions (including replication and performance management) across multiple IBM storage subsystems. It interoperates with SAN Management and Enterprise System Resource Manager (ESRM) products from IBM, including TotalStorage Productivity Center for Data, and with SAN Management products from other vendors. In a SAN environment, multiple devices work together to create a storage solution. The Productivity Center for Disk and Productivity Center for Replication provide integrated administration, optimization, and replication features for interacting SAN devices, including the SAN Volume Controller and DS4000 Family devices. They provide an integrated view of the underlying system so that administrators can drill down through the virtualized layers to easily perform complex configuration tasks and more productively manage the SAN infrastructure. Because the virtualization layers support advanced replication configurations, the Productivity Center for Disk and Productivity Center for Replication products offer features that simplify the configuration, monitoring, and control of disaster recovery and data migration solutions. In addition, specialized performance data collection, analysis, and optimization features are provided. As the SNIA standards mature, the Productivity Center view will be expanded to include CIM-enabled devices from other vendors, in addition to IBM storage. Figure 1-11 on page 19 provides an overview of Productivity Center for Disk and Productivity Center for Replication.


Figure 1-11 Productivity Center overview (Performance Manager and Replication Manager, with Device Manager, layered on IBM Director, WebSphere Application Server, and DB2)

The Productivity Center for Disk and Productivity Center for Replication provide support for configuration, tuning, and replication of the virtualized SAN. As with the individual devices, the Productivity Center for Disk and Productivity Center for Replication layers are open and can be accessed via a GUI, CLI, or standards-based Web Services. Productivity Center for Disk and Productivity Center for Replication provide the following functions:
- Device Manager: common function provided when you install the base prerequisite products for either Productivity Center for Disk or Productivity Center for Replication
- Performance Manager: provided by Productivity Center for Disk
- Replication Manager: provided by Productivity Center for Replication

Device Manager
The Device Manager is responsible for the discovery of supported devices; collecting asset, configuration, and availability data from the supported devices; and providing a limited topology view of the storage usage relationships between those devices. The Device Manager builds on the IBM Director discovery infrastructure. Discovery of storage devices adheres to the SNIA SMI-S specification standards. Device Manager uses the Service Location Protocol (SLP) to discover SMI-S enabled devices. The Device Manager creates managed objects to represent these discovered devices. The discovered managed objects are displayed as individual icons in the Group Contents pane of the IBM Director Console, as shown in Figure 1-12 on page 20.


Figure 1-12 IBM Director Console

Device Manager provides a subset of configuration functions for the managed devices, primarily LUN allocation and assignment. Its function includes certain cross-device configuration, as well as the ability to show and traverse inter-device relationships. These services communicate with the CIM Agents that are associated with the particular devices to perform the required configuration. Devices that are not SMI-S compliant are not supported. The Device Manager also interacts and provides some SAN management functionality when IBM Tivoli SAN Manager is installed. The Device Manager health monitoring keeps you aware of hardware status changes in the discovered storage devices. You can drill down to the status of the hardware device, if applicable. This enables you to understand which components of a device are malfunctioning and causing an error status for the device.
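Because all communication with a device goes through its CIM Agent, the same kind of information the Device Manager gathers can be retrieved with any CIM-XML client. As a rough illustration only, the following Python sketch uses the third-party pywbem package to enumerate the storage volumes exposed by a CIM Agent; the host name, port, credentials, and namespace are assumptions and vary by device:

import pywbem

# Fictitious CIM Agent address, credentials, and namespace (adjust as needed)
conn = pywbem.WBEMConnection('https://cimagent.example.com:5989',
                             ('cimuser', 'password'),
                             default_namespace='root/ibm')

# Ask the CIM Agent for the volumes it manages on behalf of its device
for vol in conn.EnumerateInstances('CIM_StorageVolume'):
    size_bytes = vol['BlockSize'] * vol['NumberOfBlocks']
    print(vol['DeviceID'], size_bytes, "bytes")

In the product itself this communication, together with the SLP-based discovery of the CIM Agents, is handled by the Device Manager rather than by user-written code.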

SAN Management
When a supported SAN Manager is installed and configured, the Device Manager leverages the SAN Manager to provide enhanced function. Along with basic device configuration functions such as LUN creation, allocation, assignment, and deletion for single and multiple devices, basic SAN management functions such as LUN discovery, allocation, and zoning are provided in one step. IBM TotalStorage Productivity Center for Fabric (formerly IBM Tivoli SAN Manager) is currently the supported SAN Manager. The set of SAN Manager functions that will be exploited are:
- The ability to retrieve the SAN topology information, including switches, hosts, ports, and storage devices
- The ability to retrieve and to modify the zoning configuration on the SAN
- The ability to register for event notification, to ensure that Productivity Center for Disk is aware when the topology or zoning changes, as new devices are discovered by the SAN Manager, and when hosts' LUN configurations change


Performance Manager function


The Performance Manager function provides the raw capabilities of initiating and scheduling performance data collection on the supported devices, of storing the received performance statistics into database tables for later use, and of analyzing the stored data and generating reports for various metrics of the monitored devices. In conjunction with data collection, the Performance Manager is responsible for managing and monitoring the performance of the supported storage devices. This includes the ability to configure performance thresholds for the devices based on performance metrics, the generation of alerts when these thresholds are exceeded, the collection and maintenance of historical performance data, and the creation of gauges, or performance reports, for the various metrics to display the collected historical data to the end user. The Performance Manager enables you to perform sophisticated performance analysis for the supported storage devices.

Functions
- Collect data from devices. The Performance Manager collects data from the IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage DS4000 series, IBM TotalStorage DS6000 and IBM TotalStorage DS8000 series, and SMI-S enabled devices. The performance collection task collects performance data from one or more storage groups, all of the same device type (for example, ESS or SVC). Each performance collection task has a start time, a stop time, and a sampling frequency. The performance sample data is stored in DB2 database tables.
- Configure performance thresholds. You can use the Performance Manager to set performance thresholds for each device type. Setting thresholds for certain criteria enables Performance Manager to notify you when a certain threshold has been exceeded, so that you can take action before a critical event occurs. You can specify what action should be taken when a threshold-exceeded condition occurs. The action may be to log the occurrence or to trigger an event. The threshold settings can vary by individual device. The eligible metrics for threshold checking are fixed for each storage device. If the threshold metrics are modified by the user, the modifications are accepted immediately and applied to checking being performed by active performance collection tasks. (A minimal sketch of this checking logic follows the list below.) Examples of threshold metrics include:
  - Disk utilization value
  - Average cache hold time
  - Percent of sequential I/Os
  - I/O rate
  - NVS full value
  - Virtual disk I/O rate
  - Managed disk I/O rate
  There is a user interface that supports threshold settings. IBM-recommended critical and warning values are provided for all thresholds known to indicate potential performance problems for IBM storage devices. The interface enables a user to:
  - Modify a threshold property for a set of devices of like type.
  - Modify a threshold property for a single device.
  - Reset a threshold property to the IBM-recommended value (if defined) for a set of devices of like type.


  - Reset a threshold property to the IBM-recommended value (if defined) for a single device.
  - Show a summary of threshold properties for all of the devices of like type.
- View performance data from the Performance Manager database.
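The threshold checking described above reduces to a small decision rule: compare each collected sample against its warning and critical limits and raise an alert when a limit is exceeded. The following Python sketch uses invented metric names and values purely to illustrate that rule:

# Hypothetical threshold settings for one device (values are illustrative only)
thresholds = {
    "disk_utilization_pct": {"warning": 50.0, "critical": 80.0},
    "nvs_full_pct":         {"warning": 3.0,  "critical": 10.0},
}

def check_sample(sample):
    # Return (metric, severity) alerts for every exceeded threshold
    alerts = []
    for metric, value in sample.items():
        limits = thresholds.get(metric)
        if limits is None:
            continue                      # metric not enabled for checking
        if value >= limits["critical"]:
            alerts.append((metric, "critical"))
        elif value >= limits["warning"]:
            alerts.append((metric, "warning"))
    return alerts

# One performance sample as it might arrive from a collection task
sample = {"disk_utilization_pct": 83.5, "nvs_full_pct": 1.2}
for metric, severity in check_sample(sample):
    print("threshold exceeded:", metric, severity)   # log it or trigger an event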

Gauges
The Performance Manager supports a performance-type gauge. The performance-type gauge presents sample-level performance data. The frequency at which performance data is sampled on a device depends on the sampling frequency that you specify when you define the performance collection task. The maximum and minimum values of the sampling frequency depend on the device type. The static display presents historical data over time. The refreshable display presents near real-time data from a device that is currently collecting performance data. The Performance Manager enables a Productivity Center for Disk user to access recent performance data in terms of a series of values of one or more metrics associated with a finite set of components per device. Only recent performance data is available for gauges. Data that has been purged from the database cannot be viewed. You can define one or more gauges by selecting certain gauge properties and saving them for later referral. Each gauge is identified through a user-specified name and, when defined, a gauge can be started, which means that it is then displayed in a separate window of the Productivity Center GUI. You can have multiple gauges active at the same time. Gauge definition is accomplished through a wizard to aid in entering a valid set of gauge properties. Gauges are saved in the Productivity Center for Disk database and retrieved upon request. When you request data pertaining to a defined gauge, the Performance Manager builds a query to the database, retrieves and formats the data, and returns it to you. When started, a gauge is displayed in its own window, and it displays all available performance data for the specified initial date/time range. The date/time range can be changed after the initial gauge window is displayed. For performance-type gauges, if a metric selected for display is associated with a threshold enabled for checking, the current threshold properties are also displayed in the gauge window and are updated each time the gauge data is refreshed.

Database services for managing the collected performance data


The performance data collected from the supported devices is stored in a DB2 database. Database services are provided that enable you to manage the potential volumes of data.
Database purge function
A database purge function deletes older performance data samples and, optionally, the associated exception data. Flexibility is built into the purge function, and it enables you to specify the data to purge, allowing important data to be maintained for trend purposes. You can specify to purge all of the sample data from all types of devices older than a specified number of days. You can specify to purge the data associated with a particular type of device. If threshold checking was enabled at the time of data collection, you can exclude data that exceeded at least one threshold value from being purged. You can specify the number of days that data is to remain in the database before being purged. Sample data and, optionally, exception data older than the specified number of days will be purged. A reorganization function is performed on the database tables after the sample data is deleted from the respective database tables.
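The purge function amounts to deleting sample rows older than a cutoff while optionally keeping rows that exceeded a threshold. The sketch below shows that pattern with Python's built-in sqlite3 module and an invented table layout; the real repository is DB2 and its table and column names differ:

import sqlite3

RETENTION_DAYS = 30         # assumed number of days of sample data to keep
KEEP_EXCEPTIONS = True      # keep samples that exceeded a threshold

conn = sqlite3.connect("perf_repo.db")   # stand-in for the DB2 repository
conn.execute("CREATE TABLE IF NOT EXISTS perf_sample ("
             "sample_time TEXT, metric TEXT, value REAL, exceeded_threshold INTEGER)")

sql = "DELETE FROM perf_sample WHERE sample_time < datetime('now', ?)"
params = ["-%d days" % RETENTION_DAYS]
if KEEP_EXCEPTIONS:
    sql += " AND exceeded_threshold = 0"  # leave exception data for trend analysis

conn.execute(sql, params)
conn.commit()
conn.execute("VACUUM")      # roughly analogous to the reorganization step
conn.close()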


Database information function
Due to the amount of data collected by the Performance Manager function provided by Productivity Center for Disk, the database should be monitored to prevent it from running out of space. The database information function returns the percentage of the database that is full. This function can be invoked from either the Web user interface or the CLI.

Volume Performance Advisor


The advanced performance analysis provided by Productivity Center for Disk is intended to address the challenge of allocating more storage in a storage system so that the users of the newly allocated storage achieve the best possible performance. The Volume Performance Advisor is an automated tool that helps the storage administrator pick the best possible placement of a new LUN to be allocated (that is, the best placement from a performance perspective). It also uses the historical performance statistics collected from the supported devices to locate unused storage capacity on the SAN that exhibits the best (estimated) performance characteristics. Allocation optimization involves several variables that are user-controlled, such as required performance level and the time of day/week/month of prevalent access. This function is fully integrated with the Device Manager function so that, for example, when a new LUN is added to the ESS, the Device Manager can seamlessly select the best possible LUN.
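Conceptually, the advisor ranks candidate locations for the new LUN by their estimated performance headroom. The following Python fragment uses invented pool names and numbers to show a crude version of that selection: pick the pool that has enough free capacity and the lowest recent utilization:

# Invented candidate pools with recent average utilization and free capacity (GB)
candidates = [
    {"pool": "ESS1-array03", "avg_util_pct": 62.0, "free_gb": 400},
    {"pool": "ESS1-array07", "avg_util_pct": 18.5, "free_gb": 250},
    {"pool": "ESS2-array01", "avg_util_pct": 35.0, "free_gb": 900},
]

def best_placement(pools, requested_gb):
    # Return the least-utilized pool that can hold the requested LUN
    eligible = [p for p in pools if p["free_gb"] >= requested_gb]
    return min(eligible, key=lambda p: p["avg_util_pct"]) if eligible else None

print(best_placement(candidates, requested_gb=200))   # picks ESS1-array07

The real advisor works from the historical statistics in the repository and from user-controlled variables such as the required performance level and the time of prevalent access, rather than from a single utilization number.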

Replication Manager function


Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. Productivity Center for Replication administers and configures the copy services functions and monitors the replication actions. Its capabilities consist of the management of two types of copy services: the Continuous Copy (also known as Peer-to-Peer, PPRC, or Remote Copy), and the Point-in-Time Copy (also known as FlashCopy). Currently, replication functions are provided for the IBM TotalStorage ESS. Productivity Center for Replication includes support for replica sessions, which ensures that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary primitive operations. Multiple pairs are handled as a consistent unit, and Freeze-and-Go functions can be performed when errors in mirroring occur. Productivity Center for Replication is designed to control and monitor the copy services operations in large-scale customer environments. Productivity Center for Replication is controlled by applying predefined policies to Groups and Pools, which are groupings of LUNs that are managed by the Replication Manager. It provides the ability to copy a Group to a Pool, in which case it creates valid mappings for source and target volumes and optionally presents them to the user for verification that the mapping is acceptable. In this case, it manages Pool membership by removing target volumes from the pool when they are used, and by returning them to the pool only if the target is specified as being discarded when it is deleted.

1.4.2 Event services


At the heart of any systems management solution is the ability to alert the system administrator in the event of a system problem. IBM Director provides a method of alerting called Event Action Plans, which enables the definition of event triggers independently from actions that might be taken.


An event is an occurrence of a predefined condition relating to a specific managed object that identifies a change in a system process or a device. A notification of that change can be generated and tracked (for example, notification that a Productivity Center component is not available). Productivity Center for Disk and Productivity Center for Replication take full advantage of, and build upon, the IBM Director Event Services. IBM Director includes sophisticated event-handling support. Event Action Plans can be set up that specify what steps, if any, should be taken when particular events occur in the environment. Director event management encompasses the following concepts (a minimal sketch of this model follows at the end of this section):
- Events can be generated by any managed object. IBM Director receives such events and calls appropriate internal event handlers that have been registered.
- Actions are user-configured steps to be taken for a particular event or type of event. There can be zero or more actions associated with a particular action plan. System administrators can create their own actions by customizing particular predefined actions.
- Event Filters are a set of characteristics or criteria that determine whether an incoming event should be acted on.
- Event Action Plans are associations of one or more event filters with one or more actions. Event Action Plans become active when you apply them to a system or a group of systems.
The IBM Director Console includes an extensive set of GUI panels, called the Event Action Plan Builder, that enable the user to create action plans and event filters. Event Filters can be configured using the Event Action Plan Builder and set up with a variety of criteria, such as event types, event severities, day and time of event occurrence, and event categories. This allows control over exactly which action plans are invoked for each specific event. Productivity Center provides extensions to the IBM Director event management support. It takes full advantage of the IBM Director built-in support for event logging and viewing. It generates events that will be externalized, and action plans can be created based on filter criteria for these events. The default action plan is to log all events in the event log. Productivity Center creates additional event families, and event types within those families, that will be listed in the Event Action Plan Builder. Event actions that enable Productivity Center functions to be exploited from within action plans are provided. An example is the action to indicate the amount of historical data to be kept.
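The filter-plus-action model can be expressed in a few lines. The Python sketch below uses invented event fields and severity names to show how a filter decides whether an incoming event matches and, if so, how the actions bound to it in an action plan are run:

import time

def severity_filter(minimum):
    # Event filter: match events at or above a given severity (names are invented)
    levels = {"harmless": 0, "warning": 1, "critical": 2}
    return lambda event: levels[event["severity"]] >= levels[minimum]

def log_action(event):
    # Action: write the event to a log (the default action plan logs every event)
    print(time.strftime("%H:%M:%S"), event["source"], event["severity"], event["text"])

# An action plan associates one or more filters with one or more actions
action_plan = [(severity_filter("warning"), [log_action])]

incoming = {"source": "ESS1", "severity": "critical", "text": "array over threshold"}
for event_filter, actions in action_plan:
    if event_filter(incoming):
        for action in actions:
            action(incoming)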

1.5 Taking steps toward an On Demand environment


So what is an On Demand operating environment? It is not a specific set of hardware and software. Rather, it is an environment that supports the needs of the business, allowing it to become and remain responsive, variable, focused, and resilient. An On Demand operating environment unlocks the value within the IT infrastructure to be applied to solving business problems. It is an integrated platform, based on open standards, to enable rapid deployment and integration of business applications and processes. Combined with an environment that allows true virtualization and automation of the infrastructure, it enables delivery of IT capability On Demand.


An On Demand operating environment must be:
- Flexible
- Self-managing
- Scalable
- Economical
- Resilient
- Based on open standards
The move to an On Demand storage environment is an evolving one; it does not happen all at once. There are several next steps that you may take to move to the On Demand environment:
- Address constant changes to the storage infrastructure (upgrading or changing hardware, for example) with virtualization, which provides flexibility by hiding the hardware and software from users and applications.
- Empower administrators with automated tools for managing heterogeneous storage infrastructures and eliminate human error.
- Control storage growth with automated identification and movement of low-activity or inactive data to a hierarchy of lower-cost storage.
- Manage the cost associated with capturing point-in-time copies of important data for regulatory or bookkeeping requirements by maintaining this inactive data in a hierarchy of lower-cost storage.
- Ensure recoverability through the automated creation, tracking, and vaulting of reliable recovery points for all enterprise data.
- The ultimate goal is to eliminate human errors by preparing for Infrastructure Orchestration software that can be used to automate workflows.
No matter which steps you take toward an On Demand environment, there will be results: improved application availability, optimized storage resource utilization, and enhanced storage personnel productivity.


Chapter 2. Key concepts
There are certain industry standards and protocols that are the basis of the IBM TotalStorage Productivity Center. The understanding of these concepts is important for installing and customizing the IBM TotalStorage Productivity Center. In this chapter, we describe the standards on which the IBM TotalStorage Productivity Center is built, as well as the methods of communication used to discover and manage storage devices. We also discuss communication between the various components of the IBM TotalStorage Productivity Center. To help you understand these concepts, we provide diagrams to show the relationship and interaction of the various elements in the IBM TotalStorage Productivity Center environment.


2.1 IBM TotalStorage Productivity Center architecture


This chapter provides an overview of the components and functions that are included in the IBM TotalStorage Productivity Center.

2.1.1 Architectural overview diagram


The architectural overview diagram in Figure 2-1 illustrates the governing ideas and building blocks of the product suite that makes up the IBM TotalStorage Productivity Center. It provides a logical overview of the main conceptual elements and relationships in the architecture: components, connections, users, and external systems.

Figure 2-1 IBM TotalStorage Productivity Center architecture overview diagram

IBM TotalStorage Productivity Center and Tivoli Provisioning Manager are presented as building blocks in the diagram. Neither product is a single application; each is a complex environment in itself. The diagram also shows the different methods used to collect information from multiple systems to give an administrator the necessary views of the environment, for example:
- Software clients (agents)
- Standard interfaces and protocols (for example, Simple Network Management Protocol (SNMP) and Common Information Model (CIM) Agent)
- Proprietary interfaces (for only a few devices)

In addition to the central data collection, Productivity Center provides a single point of control for a storage administrator, even though each manager still comes with its own interface. A program called the Launchpad is provided to start the individual applications from a central dashboard.


The Tivoli Provisioning Manager relies on Productivity Center to make provisioning possible.

2.1.2 Architectural layers


The IBM TotalStorage Productivity Center architecture can be broken down into three layers, as shown in Figure 2-2. Layer one represents a high-level overview; there is only one IBM TotalStorage Productivity Center instance in the environment. Layers two and three drill down into the TotalStorage Productivity Center environment so that you can see the managers and the prerequisite components.

Figure 2-2 Architectural layers

Layer two consists of the individual components that are part of the product suite:
- IBM TotalStorage Productivity Center for Disk
- IBM TotalStorage Productivity Center for Replication
- IBM TotalStorage Productivity Center for Fabric
- IBM TotalStorage Productivity Center for Data

Throughout this redbook, these products are referred to as managers or components. Layer three includes all the prerequisite components, for example, IBM DB2, IBM WebSphere, IBM Director, IBM Tivoli NetView, and Tivoli Common Agent Services. IBM TotalStorage Productivity Center for Fabric can be installed on a full version of WebSphere Application Server or on the embedded WebSphere Application Server, which is shipped with Productivity Center for Fabric. Installation on a full version of WebSphere Application Server is used when other components of TotalStorage Productivity Center are installed on the same logical server. IBM TotalStorage Productivity Center for Fabric can use an existing IBM Tivoli NetView installation or can be installed along with it.

Note: Each of the manager and prerequisite components can be broken down even further, but in this book we go into this detail only where necessary. The only exception is Tivoli Common Agent Services, which is a new underlying service in the Tivoli product family.

Terms and definitions


When you look at the diagram in Figure 2-2, you see that each layer has a different name. The following sections explain each of these names as well as other terms commonly used in this book.


Product
A product is something that is available to be ordered. The individual products that are included in IBM TotalStorage Productivity Center are introduced in Chapter 1, IBM TotalStorage Productivity Center overview on page 3.

Components
Products (licensed software packages) and prerequisite software applications are in general called components. Some of the components are internal, meaning that, from the installation and configuration point of view, they are somewhat transparent. External components have to be separately installed. We usually use the term components for the following applications:
- IBM Director (external, used by Disk and Replication Manager)
- IBM DB2 (external, used by all managers)
- IBM WebSphere Application Server (external, used by Disk and Replication Manager; used by Fabric Manager if installed on the same logical server)
- Embedded WebSphere Application Server (internal, used by Fabric Manager)
- Tivoli NetView (internal, used by Fabric Manager)
- Tivoli Common Agent Services (external, used by Data and Fabric Manager)

Not all of the internal components are always shown in the diagrams and lists in this book. The term subcomponent is used to emphasize that a certain component (the subcomponent) belongs to or is used by another component. For example, a Resource Manager is a subcomponent of the Fabric or Data Manager.

Managers
The managers are the central components of the IBM TotalStorage Productivity Center environment. They may share some of the prerequisite components. For example, IBM DB2 and IBM WebSphere are used by different managers. In this book, we sometimes use the following terms:
- Disk Manager for Productivity Center for Disk
- Replication Manager for Productivity Center for Replication
- Data Manager for Productivity Center for Data
- Fabric Manager for Productivity Center for Fabric

In addition, we use the term manager for the Tivoli Agent Manager component, because the name of that component already includes the term.

Agents
The agents are not shown in the diagram in Figure 2-2 on page 29, but they have an important role in the IBM TotalStorage Productivity Center environment. There are two types of agents: Common Information Model (CIM) Agents and agents that belong to one of the managers:
- CIM Agents: Agents that offer a CIM interface for management applications, for example, for IBM TotalStorage DS8000 and DS6000 series storage systems, IBM TotalStorage Enterprise Storage Server (ESS), SAN (Storage Area Network) Volume Controller, and DS4000 Storage Systems, formerly known as FAStT (Fibre Array Storage Technology) Storage Systems
- Agents that belong to one of the managers:
  - Data Agents: Agents that collect data for the Data Manager
  - Fabric Agents: Agents that are used by the Fabric Manager for inband SAN data discovery and collection

In addition to these agents, the Service Location Protocol (SLP) also uses the term agent for these components:
- User Agent
- Service Agent
- Directory Agent

Elements
We use the generic term element whenever we do not differentiate between components and managers.

2.1.3 Relationships between the managers and components


An IBM TotalStorage Productivity Center environment includes many elements and is complex. This section explains how all the elements work together to form a center for storage administration. Figure 2-3 shows the communication between the elements and how they relate to each other. Each gray box in the diagram represents one machine. The dotted line within a machine separates two distinct managers of the IBM TotalStorage Productivity Center.

Figure 2-3 Manager and component relationship diagram

All these components can also run on one machine. In this case, all managers and IBM Director share the same DB2 installation, and all managers and the IBM Tivoli Agent Manager share the same WebSphere installation.


2.1.4 Collecting data


Multiple methods are used within the different components to collect data from the devices in your environment. In this version of the product, the information is stored in different databases (see Table 3-6 on page 62) that are not shared between the individual components.

Productivity Center for Disk and Productivity Center for Replication


Productivity Center for Disk and Productivity Center for Replication use the Storage Management Initiative - Specification (SMI-S) standard (see Storage Management Initiative - Specification on page 35) to collect information about subsystems. For devices that are not CIM ready, this requires the installation of a proxy application (CIM Agent or CIM Object Manager (CIMOM)). Unlike the Data Manager and Fabric Manager, these managers do not use agents of their own.

IBM TotalStorage Productivity Center for Fabric


IBM TotalStorage Productivity Center for Fabric uses two methods to collect information: inband and outband discovery. You can use either method, or you can use both at the same time to obtain the most complete picture of your environment. Using just one of the methods gives you incomplete information, but topology information is available in both cases.

Outband discovery is the process of discovering SAN information, including topology and device data, without using the Fibre Channel data paths. Outband discovery uses SNMP queries, invoked over the IP network. Outband management and discovery is normally used to manage devices such as switches and hubs that support SNMP.

Inband discovery is the process of discovering information about the SAN, including topology and attribute data, through the Fibre Channel data paths. Inband discovery uses the following general process:
- The Agent sends commands through its Host Bus Adapters (HBA) and the Fibre Channel network to gather information about the switches.
- The switch returns the information through the Fibre Channel network and the HBA to the Agent.
- The Agent queries the endpoint devices using RNID and SCSI protocols.
- The Agent returns the information to the Manager over the IP network.
- The Manager then responds to the new information by updating the database and redrawing the topology map if necessary.

Internet SCSI (iSCSI) is an Internet Protocol (IP)-based storage networking standard for linking data storage, developed by the Internet Engineering Task Force (IETF). iSCSI can be used to transmit data over LANs and WANs.


The discovery paths are shown in parentheses in the diagram in Figure 2-4.

Figure 2-4 Fabric Manager inband and outband discovery paths

IBM TotalStorage Productivity Center for Data


Within the IBM TotalStorage Productivity Center, the Data Manager is used to collect information about logical drives, file systems, individual files, database usage, and more. Agents are installed on the application servers and perform a regular scan to report back the information. To report on a subsystem level, an SMI-S interface is also built in. This information is correlated with the data that is gathered from the agents to show the LUNs that a host is using (an agent must be installed on that host). In contrast to Productivity Center for Disk and Productivity Center for Replication, the SMI-S interface in Productivity Center for Data is used only to retrieve information, not to configure a device.

Restriction: The SLP User Agent integrated into the Data Manager uses SLP Directory Agents and Service Agents to find services in the local subnet. To discover CIM Agents from remote networks, they have to be registered to either the Directory Agent or Service Agent that is located in the local subnet, unless routers are configured to also route multicast packets. You need to add each CIM Agent (that is not discovered) manually to the Data Manager; refer to Configuring the CIM Agents on page 290.


2.2 Standards used in IBM TotalStorage Productivity Center


This section presents an overview of the standards that are used within IBM TotalStorage Productivity Center by the different components. SLP and CIM are described in detail, because they are new concepts to many people who work with IBM TotalStorage Productivity Center and are important to understand. Vendor-specific tools are available to manage devices in the SAN, but these proprietary interfaces are not used within IBM TotalStorage Productivity Center. The only exception is the application programming interface (API) that Brocade has made available to manage its Fibre Channel switches. This API is used within IBM TotalStorage Productivity Center for Fabric.

2.2.1 ANSI standards


Several standards have been published for the inband management of storage devices, for example, SCSI Enclosure Services (SES).

T11 committee
Since the 1970s, the objective of the ANSI T11 committee has been to define interface standards for high-performance and mass storage applications. Since that time, the committee has completed work on three projects:
- High-Performance Parallel Interface (HIPPI)
- Intelligent Peripheral Interface (IPI)
- Single-Byte Command Code Sets Connection (SBCON)

Currently the group is working on Fibre Channel (FC) and Storage Network Management (SM) standards.

Fibre Channel Generic Services


The Fibre Channel Generic Services (FC-GS-3) Directory Service and Management Service are used within IBM TotalStorage Productivity Center for SAN management. The availability and level of function depend on the implementation by the individual vendor. IBM TotalStorage Productivity Center for Fabric uses this standard.

2.2.2 Web-Based Enterprise Management


Web-Based Enterprise Management (WBEM) is an initiative of the Distributed Management Task Force (DMTF) with the objective of enabling the management of complex IT environments. It defines a set of management and Internet standard technologies to unify the management of complex IT environments. The three main conceptual elements of the WBEM initiative are:
- Common Information Model (CIM): CIM is a formal object-oriented modeling language that is used to describe the management aspects of systems. See also Common Information Model on page 47.
- xmlCIM: A grammar to describe CIM declarations and messages used by the CIM protocol.
- Hypertext Transfer Protocol (HTTP): HTTP is used as a way to enable communication between a management application and a device that both use CIM.


The WBEM architecture defines the following elements:
- CIM Client: The CIM Client is a management application, such as IBM TotalStorage Productivity Center, that uses CIM to manage devices. A CIM Client can reside anywhere in the network, because it uses HTTP to talk to CIM Object Managers and Agents.
- CIM Managed Object: A CIM Managed Object is a hardware or software component that can be managed by a management application using CIM.
- CIM Agent: The CIM Agent is embedded into a device, or it can be installed on a server using the CIM Provider as the translator of the device's proprietary commands to CIM calls, and it interfaces with the management application (the CIM Client). The CIM Agent is linked to one device.
- CIM Provider: A CIM Provider is the element that translates CIM calls to the device-specific commands. It is like a device driver. A CIM Provider is always closely linked to a CIM Object Manager or CIM Agent.
- CIM Object Manager: A CIM Object Manager (CIMOM) is a part of the CIM Server that links the CIM Client to the CIM Provider. It enables a single CIM Agent to talk to multiple devices.
- CIM Server: A CIM Server is the software that runs the CIMOM and the CIM Provider for a set of devices. This approach is used when the devices do not have an embedded CIM Agent. This term is often not used; instead, people often use the term CIMOM when they really mean the CIM Server.

2.2.3 Storage Networking Industry Association


The Storage Networking Industry Association (SNIA) defines standards that are used within IBM TotalStorage Productivity Center. You can find more information on the Web at:
http://www.snia.org

Fibre Channel Common HBA API


The Fibre Channel Common HBA API is used as a standard for inband storage management. It acts as a bridge between a SAN management application like Fabric Manager and the Fibre Channel Generic Services. IBM TotalStorage Productivity Center for Fabric Agent uses this standard.

Storage Management Initiative - Specification


SNIA has fully adopted and enhanced the CIM for Storage Management in its SMI-S. SMI-S was launched in mid-2002 to create and develop a universal open interface for managing storage devices including storage networks.


The idea behind SMI-S is to standardize the management interfaces so that management applications can use them and provide cross-device management. This means that a newly introduced device can be managed immediately if it conforms to the standards. SMI-S extends CIM and WBEM with the following features:
- A single management transport: Within the WBEM architecture, the CIM-XML over HTTP protocol was selected for this transport in SMI-S.
- A complete, unified, and rigidly specified object model: SMI-S defines profiles and recipes within the CIM that enable a management client to reliably use a component vendor's implementation of the standard, such as the control of LUNs and zones in the context of a SAN.
- Consistent use of durable names: As a storage network configuration evolves and is reconfigured, key long-lived resources, such as disk volumes, must be uniquely and consistently identified over time.
- Rigorously documented client implementation considerations: SMI-S provides client developers with vital information for traversing CIM classes within a device or subsystem and between devices and subsystems so that complex storage networking topologies can be successfully mapped and reliably controlled.
- An automated discovery system: SMI-S compliant products, when introduced in a SAN environment, automatically announce their presence and capabilities to other constituents using SLP (see 2.3.1, SLP architecture on page 38).
- Resource locking: SMI-S compliant management applications from multiple vendors can exist in the same storage device or SAN and cooperatively share resources through a lock manager.

The models and protocols in the SMI-S implementation are platform-independent, enabling application development for any platform and enabling applications to run on different platforms. The SNIA also provides interoperability tests that help vendors test whether their applications and devices conform to the standard. Managers or components that use this standard include:
- IBM TotalStorage Productivity Center for Disk
- IBM TotalStorage Productivity Center for Replication
- IBM TotalStorage Productivity Center for Data

2.2.4 Simple Network Management Protocol


The SNMP is an Internet Engineering Task Force (IETF) protocol for monitoring and managing systems and devices in a network. Functions supported by the SNMP protocol are the request and retrieval of data, the setting or writing of data, and traps that signal the occurrence of events. SNMP is a method that enables a management application to query information from a managed device. The managed device has software running that sends and receives the SNMP information. This software module is usually called the SNMP agent.


Device management
An SNMP manager can read information from an SNMP agent to monitor a device. To do this, the device needs to be polled at regular intervals. The SNMP manager can also change the configuration of a device by setting certain values for the corresponding variables. Managers or components that use this standard include IBM TotalStorage Productivity Center for Fabric.
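As an illustration of polling, the following minimal sketch issues a single SNMP GET for the standard sysDescr object. It assumes the third-party pysnmp Python package (not part of TotalStorage Productivity Center); the device address 192.0.2.10 and the read community "public" are placeholders.

from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, getCmd,
)

# Poll one variable (sysDescr.0) from a managed device; a monitor would
# repeat this call on an interval for each device of interest.
error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public"),                  # SNMPv2c read community (placeholder)
        UdpTransportTarget(("192.0.2.10", 161)),  # managed device address (placeholder)
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),  # sysDescr.0
    )
)

if error_indication:
    print("SNMP error:", error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")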

Traps
A device can also be set up to send a notification to the SNMP manager (this is called a trap) to asynchronously inform the SNMP manager of a status change. Depending on the existing environment and organization, it is likely that your environment already has an SNMP management application in place. The managers or components that use this standard are:
- IBM TotalStorage Productivity Center for Fabric (sending and receiving of traps)
- IBM TotalStorage Productivity Center for Data (can be set up to send traps, but does not receive traps)
- IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for Replication (events can be sent as SNMP traps by utilizing the IBM Director infrastructure)
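To show where traps arrive, the minimal sketch below listens on the standard SNMP trap port (UDP 162) and logs the sender of each datagram. It uses only the Python standard library and deliberately does not decode the ASN.1/BER-encoded trap payload; a real SNMP manager would do so.

import socket

# Traps are unsolicited UDP datagrams sent by agents to the manager,
# by default on port 162 (binding to it usually requires administrator rights).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 162))

while True:
    data, (sender, port) = sock.recvfrom(4096)
    # The payload is ASN.1/BER encoded; here we only note that a trap arrived.
    print(f"Received {len(data)}-byte trap from {sender}:{port}")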

Management Information Base


SNMP uses a hierarchically structured Management Information Base (MIB) to define the meaning and the type of a particular value. An MIB defines managed objects that describe the behavior of the SNMP entity, which can be anything from an IP router to a storage subsystem. The information is organized in a tree structure.

Note: For more information about SNMP, refer to TCP/IP Tutorial and Technical Overview, GG24-3376.

IBM TotalStorage Productivity Center for Data MIB file


For users planning to use the IBM TotalStorage Productivity Center for Data SNMP trap alert notification capabilities, an SNMP MIB is included in the server installation. You can find the SNMP MIB in the file tivoli_install_directory/snmp/tivoliSRM.MIB. The MIB is provided for use by your SNMP management console software. Most SNMP management station products provide a program called an MIB compiler that can be used to import MIBs. This allows you to better view Productivity Center for Data generated SNMP traps from within your management console software. Refer to your management console software documentation for instructions on how to compile or import a third-party MIB.

2.2.5 Fibre Alliance MIB


The Fibre Alliance has defined an MIB for the management of storage devices and has submitted it to the IETF for standardization. The intention of this MIB was to have one MIB that covers most (if not all) of the attributes of storage devices from multiple vendors. The idea was to have only one MIB that is loaded onto an SNMP manager, rather than one MIB file for each component. However, this requires that all devices comply with that standard MIB, which is not always the case.


Note: This MIB is not part of IBM TotalStorage Productivity Center. To learn more about Fibre Alliance and MIB, refer to the following Web sites:
http://www.fibrealliance.org
http://www.fibrealliance.org/fb/mib_intro.htm

2.3 Service Location Protocol (SLP) overview


The SLP is an IETF standard, documented in Request for Comments (RFCs) 2165, 2608, 2609, 2610, and 2614. SLP provides a scalable framework for the discovery and selection of network services. SLP enables the discovery and selection of generic services, which can range in function from hardware services, such as those for printers or fax machines, to software services, such as those for file servers, e-mail servers, Web servers, databases, or any other possible services that are accessible through an IP network. Traditionally, to use a particular service, an end user or client application needs to supply the host name or network IP address of that service. With SLP, however, the user or client no longer needs to know individual host names or IP addresses (for the most part). Instead, the user or client can search the network for the desired service type and an optional set of qualifying attributes. For example, a user can search for all available printers that support PostScript, based on the given service type (printers) and the given attributes (PostScript). SLP searches the user's network for any matching services and returns the discovered list to the user.

2.3.1 SLP architecture


The SLP architecture includes three major components: a Service Agent (SA), a User Agent (UA), and a Directory Agent (DA). The SA and UA are required components in an SLP environment, while the SLP DA is optional. The SMI-S specification introduces SLP as the method for the management applications (the CIM clients) to locate managed objects. In SLP, an SA is used to report to UAs that a service that has been registered with the SA is available. The following sections describe each of these components.

Service Agent (SA)


The SLP SA is a component of the SLP architecture that works on behalf of one or more network services to advertise the availability of those services. The SA replies to external service requests using IP unicasts to provide the requested information about the registered services, if it is available.


The SA can run in the same process as the service itself or in a different one. In either case, the SA supports registration and de-registration requests for the service (as shown in the right part of Figure 2-5). The service registers itself with the SA during startup and removes the registration for itself during shutdown. In addition, every service registration is associated with a life-span value, which specifies the time that the registration remains active. In the left part of the diagram, you can see the interaction between a UA and the SA.

Figure 2-5 SLP SA interactions (without SLP DA)

A service is required to reregister itself periodically, before the life-span of its previous registration expires. This ensures that expired registration entries are not kept. For instance, if a service becomes inactive without removing the registration for itself, that old registration is removed automatically when its life span expires. The maximum life span of a registration is 65535 seconds (about 18 hours).
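The re-registration requirement can be sketched as a simple timer loop: the service registers again shortly before the lifetime of the previous registration expires. The register_service function below is a hypothetical placeholder; a real service would call into its SLP SA library, and the service URL shown is an example only.

import threading

MAX_LIFETIME = 65535  # seconds, the maximum SLP registration life span

def register_service(url: str, lifetime: int) -> None:
    # Hypothetical placeholder for a call into an SLP SA library
    print(f"registering {url} for {lifetime} seconds")

def keep_registered(url: str, lifetime: int = MAX_LIFETIME) -> None:
    register_service(url, lifetime)
    # Re-register well before the previous registration expires (here at 90%)
    threading.Timer(lifetime * 0.9, keep_registered, args=(url, lifetime)).start()

keep_registered("service:wbem:https://cimom.example.com:5989")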

User Agent (UA)


The SLP UA is a process working on behalf of the user to establish contact with a network service. The UA retrieves (or queries for) service information from the Service Agents or Directory Agents. The UA is a component of SLP that is closely associated with a client application or a user who is searching for the location of one or more services in the network. You use the SLP UA by defining a service type that you want the SLP UA to locate. The SLP UA then retrieves a set of discovered services, including their service Uniform Resource Locator (URL) and any service attributes. You can then use the service's URL to connect to the service. The SLP UA locates the registered services based on a general description of the services that the user or client application has specified. This description usually consists of a service type and any service attributes, which are matched against the service URLs registered in the SLP Service Agents. The SLP UA usually runs in the same process as the client application, although it is not required to do so. The SLP UA processes find requests by sending out multicast messages to the network, targeting all SLP SAs within the multicast range with a single User Datagram Protocol (UDP) message. The SLP UA can, therefore, discover these SAs with a minimum of network overhead. When an SA receives a service request, it compares its own registered services with the requested service type and any service attributes, if specified, and returns matches to the UA using a unicast reply message.


The SLP UA follows the multicast convergence algorithm and sends repeated multicast messages until no new replies are received. The resulting set of discovered services, including their service URLs and any service attributes, is returned to the client application or user. The client application or user is then responsible for contacting the individual services, as needed, using the service's URL (see Figure 2-6).

Figure 2-6 SLP UA interactions without SLP DA

An SLP UA is not required to discover all matching services that exist in the network, but only enough of them to provide useful results. This restriction is mainly due to the transmission size limits for UDP packets. They can be exceeded when there are many registered services or when the registered services have lengthy URLs or a large number of attributes. However, in most modern SLP implementations, the UAs can recognize truncated service replies and establish TCP connections to retrieve all of the information of the registered services. With this type of UA and SA implementation, the only exposure that remains is when there are too many SAs within the multicast range. This can cut short the multicast convergence mechanism. This exposure can be mitigated by the SLP administrator by setting up one or more SLP DAs.

Directory Agent
The SLP DA is an optional component of SLP that collects and caches network service broadcasts. The DA is primarily used to simplify SLP administration and to improve SLP performance. You can consider the SLP DA as an intermediate tier in the SLP architecture. It is placed between the UAs and the SAs so that both UAs and SAs communicate only with the DA instead of with each other. This eliminates a large portion of the multicast request or reply traffic in the network. It also protects the SAs from being overwhelmed by too many service requests if there are many UAs in the environment.


Figure 2-7 shows the interactions of the SLP UAs and SAs in an environment with SLP DAs.

Figure 2-7 SLP User Agent interactions with User Agent and Service Agent

When SLP DAs are present, the behavior of both SAs and UAs changes significantly. When an SA is first initializing, it performs a DA discovery using a multicast service request. It also specifies the special, reserved service type service:directory-agent. This process is also called active DA discovery. It is achieved through the same mechanism as any other discovery using SLP. Similarly, in most cases, an SLP UA also performs active DA discovery using multicasting when it first starts. However, if the SLP UA is statically configured with one or more DA addresses, it uses those addresses instead. If it is aware of one or more DAs, either through static configuration or active discovery, it sends unicast service requests to those DAs instead of multicasting to SAs. The DA replies with unicast service replies, providing the requested service URLs and attributes. Figure 2-8 shows the interactions of UAs and SAs with DAs, during active DA discovery.

Figure 2-8 SLP Directory Agent discovery interactions


The SLP DA functions similarly to an SLP SA, receiving registration and deregistration requests and responding to service requests with unicast service replies. There are a couple of differences where DAs provide more functionality than SAs. One, mentioned previously, is that DAs respond to service requests of the service:directory-agent service type with a DA advertisement response message, passing back a service URL containing the DA's IP address. This allows SAs and UAs to perform active discovery of DAs. Another difference is that when a DA first initializes, it sends a multicast DA advertisement message to advertise its services to any existing SAs (and UAs) that may already be active in the network. UAs can optionally listen for, and SAs are required to listen for, such advertisement messages. This listening process is also sometimes called passive DA discovery. When an SA finds a new DA through passive DA discovery, it sends registration requests for all its currently registered services to that new DA. Figure 2-9 shows the interactions of DAs with SAs and UAs during passive DA discovery.

Figure 2-9 Service Location Protocol passive DA discovery

Why use an SLP DA?


The primary reason to use DAs is to reduce the amount of multicast traffic involved in service discovery. In a large network with many UAs and SAs, the amount of multicast traffic involved in service discovery can become so large that network performance degrades. By deploying one or more DAs, UAs must unicast to DAs for services, and SAs must register with DAs using unicast. The only SLP-registered multicast in a network with DAs is for active and passive DA discovery. SAs register automatically with any DAs they discover within a set of common scopes. Consequently, DAs within the UAs' scopes reduce multicast traffic. By eliminating multicast for normal UA requests, delays and timeouts are eliminated. DAs act as a focal point for SA and UA activity. Deploying one or several DAs for a collection of scopes provides a centralized point for monitoring SLP activity. You can deploy any number of DAs for a particular scope or scopes, depending on the need to balance the load. In networks without multicasting enabled, you can configure SLP to use broadcast. However, broadcast is inefficient, because it requires each host to process the message. Broadcast also does not normally propagate across routers. As a result, in a network without multicast, DAs can be deployed on multihomed hosts to bridge SLP advertisements between the subnets.


When to use DAs


Use DAs in your enterprise when any of the following conditions are true:
- Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by snoop.
- UA clients experience long delays or timeouts during multicast service requests.
- You want to centralize monitoring of SLP service advertisements for particular scopes on one or several hosts.
- Your network does not have multicast enabled and consists of multiple subnets that must share services.

SLP communication
SLP uses three methods to send messages across an IP network: unicast, broadcast, or multicast. Data can be sent to one single destination (unicast) or to multiple destinations that are listening at the same time (multicast). The difference between a multicast and a broadcast is quite important. A broadcast addresses all stations in a network. Multicast messages are only used by those stations that are members of a multicast group (that have joined a multicast group).

Unicast
The most common communication method, unicast, requires that a sender of a message identifies one and only one target of that message. The target IP address is encoded within the message packet, and is used by the routers along the network path to route the packet to the proper destination. If a sender wants to send the same message to multiple recipients, then multiple messages must be generated and placed in the network, one message per recipient. When there are many potential recipients for a particular message, then this places an unnecessary strain on the network resources, since the same data is duplicated many times, where the only difference is the target IP address encoded within the messages.

Broadcast
In cases where the same message must be sent to many targets, broadcast is a much better choice than unicast, since it puts much less strain in the network. Broadcasting uses a special IP address, 255.255.255.255, which indicates that the message packet is intended to be sent to all nodes in a network. As a result, the sender of a message needs to generate only a single copy of that message, and can still transmit it to multiple recipients, that is to all members of the network. The routers multiplex the message packet, as it is sent along all possible routes in the network to reach all possible destinations. This puts much less strain on the network bandwidth, since only a single message stream enters the network, as opposed to one message stream per recipient. However, it puts much more strain on the individual nodes (and routers) in the network, since every node receives the message, even though most likely not every node is interested in the message. This means that those members of the network that were not the intended recipients, who receive the message anyway, must receive the unwanted message and discard it. Due to this inefficiency, in most network configurations, routers are configured to not forward any broadcast traffic. This means that any broadcast messages can only reach nodes on the same subnet as the sender.

Multicast
The ability of the SLP to automatically discover services that are available in the network, without a lot of setup or configuration, depends in a large part on the use of IP multicasting. IP multicasting is a broad subject in itself, and only a brief and simple overview is provided here.

Multicasting can be thought of as more sophisticated broadcast, which aims to solve some of the inefficiencies inherent in the broadcasting mechanism. With multicasting, again the sender of a message has to generate only a single copy of the message, saving network bandwidth. However unlike broadcasting, with multicasting, not every member of the network receives the message. Only those members who have explicitly expressed an interest in the particular multicast stream receive the message. Multicasting introduces a concept called a multicast group, where each multicast group is associated with a specific IP address. A particular network node (host) can join one or more multicast groups, which notifies the associated router or routers that there is an interest in receiving multicast streams for those groups. When the sender, who does not necessarily have to be part of the same group, sends messages to a particular multicast group, that message is routed appropriately to only those subnets, which contain members of that multicast group. This avoids flooding the entire network with the message, as is the case for broadcast traffic.

Multicast addresses
The Internet Assigned Numbers Authority (IANA), which controls the assignment of IP addresses, has assigned the old Class D IP address range to be used for IP multicasting. Of this entire range, which extends from 224.0.0.0 to 239.255.255.255, the 224.0.0.* addresses are reserved for router management and communication. Some of the 224.0.1.* addresses are reserved for particular standardized multicast applications. Each of the remaining addresses corresponds to a particular general purpose multicast group. The Service Location Protocol uses address 239.255.255.253 for all its multicast traffic. The port number for SLP is 427, for both unicast and multicast.
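The following standard-library sketch shows the multicast mechanics that SLP relies on: joining the SLP multicast group and sending a datagram to 239.255.255.253 on port 427. The payload is a placeholder; building a real SLPv2 Service Request header is beyond the scope of this illustration.

import socket
import struct

SLP_MCAST_ADDR = "239.255.255.253"
SLP_PORT = 427

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Keep multicast traffic within a small number of router hops
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
# Join the SLP multicast group so that replies and advertisements can be received
membership = struct.pack("4s4s", socket.inet_aton(SLP_MCAST_ADDR),
                         socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

# Placeholder payload; a real UA would send a binary SLPv2 Service Request here
sock.sendto(b"placeholder-service-request", (SLP_MCAST_ADDR, SLP_PORT))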

Configuration recommendations
Ideally, after IBM TotalStorage Productivity Center is installed, it would discover all storage devices that it can physically reach over the IP network. However in most situations, this is not the case. This is primarily due to the previously mentioned limitations of multicasting and the fact that the majority of routers have multicasting disabled by default. As a result, in most cases without any additional configuration, IBM TotalStorage Productivity Center discovers only those storage devices that reside in its own subnet, but no more. The following sections provide some configuration recommendations to enable TotalStorage Productivity Center to discover a larger set of storage devices.

Router configuration
The vast majority of the intelligence that allows multicasting to work is implemented in the router operating system software. As a result, it is necessary to properly configure the routers in the network to allow multicasting to work effectively. Unfortunately, there is a dizzying array of protocols and algorithms that can be used to configure particular routers to enable multicasting. These are the most common ones:
- Internet Group Management Protocol (IGMP) is used to register individual hosts in particular multicast groups and to query group membership on particular subnets.
- Distance Vector Multicast Routing Protocol (DVMRP) is a set of routing algorithms that use a technique called Reverse Path Forwarding to decide how multicast packets are to be routed in the network.
- Protocol-Independent Multicast (PIM) comes in two varieties: dense mode (PIM-DM) and sparse mode (PIM-SM). They are optimized for networks where either a large percentage of nodes require multicast traffic (dense) or a small percentage require the traffic (sparse).
- Multicast Open Shortest Path First (MOSPF) is an extension of OSPF, a link-state unicast routing protocol that attempts to find the shortest path between any two networks or subnets to provide the most optimal routing of packets.

The routers of interest are all those associated with subnets that contain one or more storage devices that are to be discovered and managed by TotalStorage Productivity Center. You can configure the routers in the network to enable multicasting in general, or at least to allow multicasting for the SLP multicast address, 239.255.255.253, and port, 427. This is the most generic solution and permits discovery to work the way it was intended by the designers of SLP. To properly configure your routers for multicasting, refer to your router manufacturer's reference and configuration documentation. Although older hardware may not support multicasting, all modern routers do. However, in most cases, multicast support is disabled by default, which means that multicast traffic is sent only among the nodes of a subnet but is not forwarded to other subnets. For SLP, this means that service discovery is limited to only those agents that reside in the same subnet.

Firewall configuration
In the case where one or more firewalls are used between TotalStorage Productivity Center and the storage devices that are to be managed, the firewalls need to be configured to pass traffic in both directions, as SLP communication is two way. This means that when TotalStorage Productivity Center, for example, queries an SLP DA that is behind a firewall for the registered services, the response will not use an already opened TCP/IP session but will establish another connection in the direction from the SLP DA to the TotalStorage Productivity Center. For this reason, port 427 should be opened in both directions, otherwise the response will not be received and TotalStorage Productivity Center will not recognize services offered by this SLP DA.

SLP DA configuration
If router configuration is not feasible, another technique is to use SLP DAs to circumvent the multicast limitations. Because all service requests are unicast instead of multicast by the UA when DAs are statically configured, it is possible to simply configure one DA for each subnet that contains storage devices that are to be discovered by TotalStorage Productivity Center. One DA is sufficient for each such subnet, although more can be configured without harm, perhaps for reasons of fault tolerance. Each of these DAs can discover all services within its own subnet, but no services outside it. To allow Productivity Center to discover all of the devices, you must statically configure it with the addresses of each of these DAs. You accomplish this using the IBM Director GUI's Discovery Preference panel. From the MDM SLP Configuration tab, you can enter a list of DA addresses. As described previously, Productivity Center unicasts service requests to each of these statically configured DAs, but also multicasts service requests on the local subnet on which Productivity Center is installed. Figure 2-10 on page 46 displays a sample environment where DAs have been used to bridge the multicast gap between subnets in this manner.

Note: At this time, you cannot set up IBM TotalStorage Productivity Center for Data to use remote DAs as Productivity Center for Disk and Productivity Center for Replication do. You need to define all remote CIM Agents by creating a new entry in the CIMOM Login panel, or you can register remote services in a DA that resides in the local subnet. Refer to Configuring the CIM Agents on page 290 for detailed information.


Figure 2-10 Recommended SLP configuration

You can easily configure an SLP DA by changing the configuration of the SLP SA included as part of an existing CIM Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA instead. The procedure to perform this configuration is explained in 6.2, SLP DA definition on page 248. Note that the change from SA to DA does not affect the CIMOM service of the subject CIM Agent, which continues to function as normal, sending registration and de-registration commands to the DA directly.

SLP configuration with services outside local subnet


SLP DAs and SAs can also be configured to cache CIM service information from non-local subnets. Usually CIM Agents or CIMOMs have a local SLP SA function. When there is a need to discover CIM services outside the local subnet and the network configuration does not permit the use of an SLP DA in each of them (for example, firewall rules do not allow two-way communication on port 427), remote services can be registered on the SLP DA in the local subnet. This configuration can be done by using slptool, which is part of the SLP installation packages. Such a registration is not persistent across system restarts. To achieve persistent registration of services outside of the local subnet, these services need to be defined in the registration file used by the SLP DA at startup. Refer to 5.7.3, Setting up the Service Location Protocol Directory Agent on page 221 for information on setting up the slp.reg file.


2.3.2 Common Information Model


The CIM Agent provides a means by which a device can be managed by common building blocks rather than proprietary software. If a device is CIM-compliant, software that is also CIM-compliant can manage the device. Vendor applications can benefit from adopting the Common Information Model because they can manage CIM-compliant devices in a common way, rather than using device-specific programming interfaces. Using CIM, you can perform tasks in a consistent manner across devices and vendors.

CIM uses schemas as a kind of class library to define objects and methods. The schemas can be categorized into three types:
- Core schema: Defines classes and relationships of objects
- Common schema: Defines common components of systems
- Extension schema: Entry point for vendors to implement their own schema

The CIM/WBEM architecture defines the following elements:
- Agent code or CIM Agent: An open-systems standard that interprets CIM requests and responses as they transfer between the client application and the device. The Agent is embedded into a device, which can be hardware or software.
- CIM Object Manager: The common conceptual framework for data management that receives, validates, and authenticates the CIM requests from the client application. It then directs the requests to the appropriate component or a device provider, such as a CIM Agent.
- Client application or CIM Client: A storage management program, such as TotalStorage Productivity Center, that initiates CIM requests to the CIM Agent for the device. A CIM Client can reside anywhere in the network, because it uses HTTP to talk to CIM Object Managers and Agents.
- Device or CIM Managed Object: A Managed Object is a hardware or software component that can be managed by a management application by using CIM, for example, an IBM SAN Volume Controller.
- Device provider: A device-specific handler that serves as a plug-in for the CIMOM. That is, the CIMOM uses the handler to interface with the device.

Note: The terms CIM Agent and CIMOM are often used interchangeably. At this time, few devices come with an integrated CIM Agent. Most devices need an external CIMOM for CIM to enable management applications (CIM Clients) to talk to the device. For ease of installation, IBM provides an Integrated Configuration Agent Technology (ICAT), which is a bundle that includes the CIMOM, the device provider, and an SLP SA.
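As a rough illustration of the object model described above, a CIM instance can be thought of as a class name plus a set of typed properties. The sketch below is a generic illustration, not tied to any particular CIM library or schema; the class name, property names, and values are examples only.

from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class CIMInstance:
    # A managed object is described by its class name and its properties
    classname: str
    properties: Dict[str, Any]

# Example instance loosely modeled on a storage-volume class; names are illustrative
volume = CIMInstance(
    classname="ExampleVendor_StorageVolume",
    properties={
        "DeviceID": "600507680000000000000000000000A1",
        "BlockSize": 512,
        "NumberOfBlocks": 2097152,
    },
)
print(volume.classname, volume.properties["DeviceID"])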

Integrating legacy devices into the CIM model


Since these standards are still evolving, we cannot expect that all devices will support the native CIM interface. Because of this, SMI-S introduces CIM Agents and CIM Object Managers. The agents and object managers bridge proprietary device management to the device management models and protocols used by SMI-S. The agent is used for one device, and an object manager is used for a set of devices. This type of operation is also called the proxy model and is shown in Figure 2-11 on page 48.


The CIM Agent or CIMOM translates a proprietary management interface to the CIM interface. The CIM Agent for the IBM TotalStorage ESS includes a CIMOM inside it. In the future, more and more devices will be native CIM compliant, and will therefore have a built-in Agent as shown in the Embedded Model in Figure 2-11. When widely adopted, SMI-S will streamline the way that the entire storage industry deals with management. Management application developers will no longer have to integrate incompatible feature-poor interfaces into their products. Component developers will no longer have to push their unique interface functionality to application developers. Instead, both will be better able to concentrate on developing features and functions that have value to end-users. Ultimately, faced with reduced costs for management, end-users will be able to adopt storage-networking technology faster and build larger, more powerful networks.

Figure 2-11 CIM Agent and Object Manager overview

CIM Agent implementation


When a CIM Agent implementation is available for a supported device, the device may be accessed and configured by management applications using industry-standard XML-over-HTTP transactions. This interface enables IBM TotalStorage Productivity Center for Data, IBM TotalStorage Productivity Center for Disk, IBM TotalStorage Productivity Center for Replication, IBM Director, and vendor tools to manage the SAN infrastructure more effectively. By implementing a standard interface over all devices, an open environment is created in which tools from a variety of vendors can work together. This reduces the cost of developing integrated management applications, installing and configuring management applications, and managing the SAN infrastructure. Figure 2-12 on page 49 shows an overview of the CIM Agent.


Figure 2-12 CIM Agent overview

The CIM Agent includes a CIMOM, which adapts various devices using a plug-in called a provider. The CIM Agent can work as a proxy or can be embedded in storage devices. When the CIM Agent is installed as a proxy, the IBM CIM Agent can be installed on the same server that supports the device user interface.

CIM Object Manager


The SNIA SMI-S standard designates that either a proxy or an embedded agent may be used to implement CIM. In each case, the CIM objects are supported by a CIM Object Manager. External applications communicate with CIM through HTTP to exchange XML messages that are used to configure and manage the device. In a proxy configuration, the CIMOM runs outside of the device and can manage multiple devices. In this case, a provider component is installed into the CIMOM to enable the CIMOM to manage specific devices such as the ESS or SAN Volume Controller. The providers adapt the CIMOM to work with different devices and subsystems. In this way, a single CIMOM installation can be used to access more than one device type and more than one device of each type on a subsystem. The CIMOM acts as a catcher for requests that are sent from storage management applications. The interactions between the catcher and sender use the language and models defined by the SMI-S standard. This enables storage management applications, regardless of vendor, to query status and perform command and control using XML-based CIM interactions.
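To make the XML-over-HTTP transaction concrete, the sketch below posts a minimal CIM-XML intrinsic request (EnumerateInstances) to a CIM Agent using only the Python standard library. The host, port, URL path, namespace, class name, and credentials are placeholders, and the XML is a simplified approximation of a CIM-XML message rather than a complete, validated one; in practice a WBEM client library would build these requests.

import http.client
import base64

HOST, PORT = "cimom.example.com", 5988   # placeholder CIM Agent address (HTTP)
NAMESPACE, CLASSNAME = "root/cimv2", "CIM_ComputerSystem"

# Simplified CIM-XML body for an EnumerateInstances intrinsic method call
body = f"""<?xml version="1.0" encoding="utf-8"?>
<CIM CIMVERSION="2.0" DTDVERSION="2.0">
 <MESSAGE ID="1001" PROTOCOLVERSION="1.0">
  <SIMPLEREQ>
   <IMETHODCALL NAME="EnumerateInstances">
    <LOCALNAMESPACEPATH>
     <NAMESPACE NAME="root"/><NAMESPACE NAME="cimv2"/>
    </LOCALNAMESPACEPATH>
    <IPARAMVALUE NAME="ClassName"><CLASSNAME NAME="{CLASSNAME}"/></IPARAMVALUE>
   </IMETHODCALL>
  </SIMPLEREQ>
 </MESSAGE>
</CIM>"""

headers = {
    "Content-Type": 'application/xml; charset="utf-8"',
    "CIMOperation": "MethodCall",
    "CIMMethod": "EnumerateInstances",
    "CIMObject": NAMESPACE,
    "Authorization": "Basic " + base64.b64encode(b"user:password").decode(),
}

conn = http.client.HTTPConnection(HOST, PORT)
conn.request("POST", "/cimom", body, headers)   # /cimom is a common, but not universal, path
response = conn.getresponse()
print(response.status, response.read()[:200])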

2.4 Component interaction


This section provides an overview of the interactions between the different components by using standardized management methods and protocols.

2.4.1 CIMOM discovery with SLP


The SMI-S specification introduces SLP as the method for the management applications (the CIM clients) to locate managed objects. SLP is explained in more detail in 2.3, Service Location Protocol (SLP) overview on page 38. Figure 2-13 on page 50 shows the interaction between CIMOMs and SLP components.


Figure 2-13 SMI-S extensions to WBEM/CIM

2.4.2 How CIM Agent works


The CIM Agent typically works as explained in the following sequence and as shown in Figure 2-14 on page 51:
1. The client application locates the CIMOM by calling an SLP directory service.
2. The CIMOM is invoked.
3. The CIMOM registers itself to the SLP and supplies its location, IP address, port number, and the type of service it provides.
4. With this information, the client application starts to communicate directly with the CIMOM.
5. The client application sends CIM requests to the CIMOM. As requests arrive, the CIMOM validates and authenticates each request.
6. The CIMOM directs the requests to the appropriate functional component of the CIMOM or to a device provider.
7. The provider makes calls to a device-unique programming interface on behalf of the CIMOM to satisfy client application requests.
8. The results of the requests are returned to the client application.


Figure 2-14 CIM Agent work flow
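The sequence can be summarized in a few lines of high-level Python pseudocode. Both helper functions below are hypothetical placeholders: slp_find_services stands in for the SLP query described in 2.3, and send_cim_request stands in for a CIM-XML operation like the one sketched in 2.3.2.

def slp_find_services(service_type: str) -> list:
    # Hypothetical placeholder: a real UA would multicast an SLP service
    # request and collect the service URLs returned by SAs or DAs.
    return ["http://cimom.example.com:5988/cimom"]

def send_cim_request(url: str, operation: str, **params) -> None:
    # Hypothetical placeholder: a real client would POST a CIM-XML request
    # to the CIMOM at this URL and parse the response.
    print(f"{operation}({params}) -> {url}")

# Steps 1-4: locate CIMOMs through SLP, then contact them directly.
cimoms = slp_find_services("service:wbem")

# Steps 5-8: send CIM requests; the CIMOM validates each request and
# dispatches it to the appropriate provider for the device.
for url in cimoms:
    send_cim_request(url, "EnumerateInstances", ClassName="CIM_StorageVolume")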

2.5 Tivoli Common Agent Services


The Tivoli Common Agent Services is a new concept with the goal of providing a set of functions for the management of agents that are common to all Tivoli products. At the time of this writing, IBM TotalStorage Productivity Center for Fabric and IBM TotalStorage Productivity Center for Data are the first applications that use this new concept. See Figure 2-15 on page 52 for an overview of the three elements in the Tivoli Common Agent Services infrastructure. In each of the planning and installation guides of Productivity Center for Fabric and Productivity Center for Data, there is a chapter that provides information about the benefits, system requirements and sizing, security considerations, and the installation procedures. The Agent Manager is the central network element that, together with the distributed Common Agents, builds an infrastructure that is used by other applications to deploy and manage an agent environment. Each application uses a Resource Manager that is built into the application server (Productivity Center for Data or Productivity Center for Fabric) to integrate into this environment.

Note: You can have multiple Resource Managers of the same type using a single Agent Manager. This may be necessary to scale the environment when, for example, one Data Manager can no longer handle the load. In this case, each agent is managed by only one of the Data Managers.


Figure 2-15 Tivoli Common Agent Services

The Common Agent provides the platform for the application specific agents. Depending on the tasks for which a subagent is used, the Common Agent is installed on the customers application servers, desktop PCs, or notebooks. Note: In different documentation, Readme files, directory and file names, you also see the terms Common Endpoint, Endpoint, or simply EP. This always refers to the Common Agent, which is part of the Tivoli Common Agent Services. The Common Agent talks to the application specific subagent, with the Agent Manager and the Resource Manager, but the actual system level functions are invoked by the subagent. The information that the subagent collects is sent directly to the Resource Manager by using the applications native protocol. This is enabled to have down-level agents in the same environment, as the new agents that are shipped with the IBM TotalStorage Productivity Center. Certificates are used to validate if a requester is allowed to establish a communication. Demo keys are supplied to quickly set up and configure a small environment, since every installation CD uses the same certificates, this is not secure. If you want to use Tivoli Common Agent Services in a production environment, we recommend that you use your own keys that can be created during the Tivoli Agent Manager installation. One of the most important certificates is stored in the agentTrust.jks file. The certificate can also be created during the installation of Tivoli Agent Manager. If you do not use the demo certificates, you need to have this file available during the installation of the Common Agent and the Resource Manager. This file is locked with a password (the agent registration password) to secure the access to the certificates. You can use the ikeyman utility in the java\jre subdirectory to verify your password. 52

2.5.1 Tivoli Agent Manager


The Tivoli Agent Manager requires a database to store information in what is called the registry. Currently there are three options for installing the database: using IBM Cloudscape (provided on the installation CD), a local DB2 database, or a remote DB2 database. Because the registry does not contain much information, the Cloudscape database is sufficient. In the setup described later in this book, we chose a local DB2 database, because DB2 was already required by another component installed on the same machine.

WebSphere Application Server is the second prerequisite for the Tivoli Agent Manager. It is installed if you use the Productivity Center Suite Installer or if you choose to use the Tivoli Agent Manager installer. We recommend that you do not install WebSphere Application Server manually.

Three dedicated ports (9511-9513) are used by the Agent Manager. Port 9511 is the most important port, because you have to enter it during the installation of a Resource Manager or Common Agent if you choose to change the defaults. When WebSphere Application Server is being installed, make sure that the Microsoft Internet Information Server (IIS) is not running, or better yet, that it is not installed.

Port 80 is used by the Tivoli Agent Manager for the recovery of agents that can no longer communicate with the manager because of lost passwords or certificates. This Agent Recovery Service is located by a DNS entry with the unqualified host name TivoliAgentRecovery. Periodically check the Agent Manager log for agents that are unable to communicate with the Agent Manager server. The recovery log is in the %WAS_INSTALL_ROOT%\AgentManager\logs\SystemOut.log file. Use the information in the log file to determine why an agent could not register and then take corrective action.

During the installation, you also have to specify the agent registration password and the Agent Registration Context Root. The password is stored in the AgentManager.properties file on the Tivoli Agent Manager. This password is also used to lock the agentTrust.jks certificate file.

Important: A detailed description of how to change the password is available in the corresponding Resource Manager Planning and Installation Guide. Because this involves redistributing the agentTrust.jks file to all Common Agents, we encourage you to use your own certificates from the beginning.

To control access from the Resource Manager to the Common Agent, certificates are used to make sure that only an authorized Resource Manager can install and run code on a computer system. This certificate is stored in agentTrust.jks and locked with the agent registration password.
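The agent recovery function depends on the TivoliAgentRecovery DNS entry being resolvable from the agent machines. A quick check from any Windows command prompt (assuming your DNS suffix search list lets the unqualified name resolve) is:

  rem the name should resolve to the Agent Manager server that listens on port 80
  nslookup TivoliAgentRecovery

If the lookup fails, add the entry to your DNS server or to the HOSTS file of the agent machines before relying on agent recovery.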

2.5.2 Common Agent


As mentioned earlier, the Common Agent is used as a platform for application-specific agents. These agents are sometimes called subagents. The subagents can be installed using two different methods:
- Using an application-specific installer
- From a central location, once the Common Agent is installed


When you install the software, the agent has to register with the Tivoli Agent Manager. During this procedure, you need to specify the registration port on the manager (9511 by default) and an agent registration password. The registration is performed by the Common Agent, which is installed automatically if it is not already installed.

If the subagent is deployed from a central location, port 9510 is used by default by the installer (running on the central machine) to communicate with the Common Agent, which downloads and installs the code. When this method is used, no password or certificate is required, because these were already provided during the Common Agent installation on the machine. If you choose to use your own certificate during the Tivoli Agent Manager installation, you need to supply it for the Common Agent installation.


Part 2. Installing the IBM TotalStorage Productivity Center base product suite


In this part of the book, we provide information to help you successfully install the prerequisite products that are required before you can install the IBM TotalStorage Productivity Center product suite. This includes installing:
- DB2
- IBM Director
- WebSphere Application Server
- Tivoli Agent Manager
- IBM TotalStorage Productivity Center for Disk
- IBM TotalStorage Productivity Center for Replication



Chapter 3. Installation planning and considerations


IBM TotalStorage Productivity Center is made up of several products which can be installed individually, as a complete suite, or any combination in between. By installing multiple products, a synergy is created which allows the products to interact with each other to provide a more complete solution to help you meet your business storage management objectives. This chapter contains information that you will need before beginning the installation. It also discusses the supported environments and pre-installation tasks.


3.1 Configuration
You can install the storage management components of IBM TotalStorage Productivity Center on a variety of platforms. However, for the IBM TotalStorage Productivity Center suite, when all four manager components are installed on the same system, the only common platforms for the managers are:
- Windows 2000 Server with Service Pack 4
- Windows 2000 Advanced Server
- Windows 2003 Enterprise Edition

Note: Refer to the following Web site for updated support summaries, including the specific software, hardware, and firmware levels supported:
http://www.storage.ibm.com/software/index.html

If you are using the storage provisioning workflows, you must install IBM TotalStorage Productivity Center for Disk, IBM TotalStorage Productivity Center for Replication, and IBM TotalStorage Productivity Center for Fabric on the same machine. Because of processing requirements, we recommend that you install IBM Tivoli Provisioning Manager on a separate Windows machine.

3.2 Installation prerequisites


This section lists the minimum prerequisites for installing IBM TotalStorage Productivity Center.

Hardware
The following hardware is required:
- Dual Pentium 4 or Intel Xeon 2.4 GHz or faster processors
- 4 GB of DRAM
- Network connectivity
- Subsystem Device Driver (SDD), for IBM TotalStorage Productivity Center for Fabric (optional)
- 5 GB of available disk space

Database
You must comply with the following database requirements. The installation of DB2 Version 8.2 is part of the Prerequisite Software Installer, and DB2 is required by all the managers. Other databases that are supported are:
- For IBM TotalStorage Productivity Center for Fabric:
  - IBM Cloudscape 5.1.60 (provided on the CD)
- For IBM TotalStorage Productivity Center for Data:
  - Microsoft SQL Server Version 7.0, 2000
  - Oracle 8i, 9i, 9i V2
  - Sybase SQL Server (Adaptive Server Enterprise) Version 12.5 or higher
  - IBM Cloudscape 5.1.60 (provided on the CD)


3.2.1 TCP/IP ports used by TotalStorage Productivity Center


This section provides an overview of the TCP/IP ports used by IBM TotalStorage Productivity Center.

TCP/IP ports used by Disk and Replication Manager


The IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for Replication Manager installation program preconfigures the TCP/IP ports used by WebSphere. Table 3-1 lists the values that correspond to the WebSphere ports.
Table 3-1 TCP/IP ports for IBM TotalStorage Productivity Center for Disk and Replication Base

Port value   WebSphere port
427          SLP port
2809         Bootstrap port
9080         HTTP Transport port
9443         HTTPS Transport port
9090         Administrative Console port
9043         Administrative Console Secure Server port
5559         JMS Server Direct Address port
5557         JMS Server Security port
5558         JMS Server Queued Address port
8980         SOAP Connector Address port
7873         DRS Client Address port

TCP/IP ports used by Agent Manager


The Agent Manager uses the TCP/IP ports listed in Table 3-2.
Table 3-2 TCP/IP ports for Agent Manager

Port value   Usage
9511         Registering agents and resource managers
             Providing configuration updates
             Renewing and revoking certificates
9512         Querying the registry for agent information
             Requesting ID resets
             Requesting updates to the certificate revocation list
9513         Requesting Agent Manager information
             Downloading the truststore file
80           Agent recovery service
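Before the installation, it can help to confirm that none of these ports are already in use by another application. A minimal check from a Windows command prompt, using the Agent Manager default port numbers listed above:

  rem list any listener already bound to the Agent Manager default ports
  netstat -an | findstr "9511 9512 9513"
  netstat -an | findstr /C:":80 "

No output means the ports are free; any line in LISTENING state points to a conflicting service that you should stop or reconfigure.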


TCP/IP ports used by IBM TotalStorage Productivity Center for Fabric


The Fabric Manager uses the default TCP/IP ports listed in Table 3-3.
Table 3-3 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric

Port value   Usage
8080         NetView Remote Web console
9550         HTTP port
9551         Reserved
9552         Reserved
9553         Cloudscape server port
9554         NVDAEMON port
9555         NVREQUESTER port
9556         SNMPTrapPort (port on which to get events forwarded from Tivoli NetView)
9557         Reserved
9558         Reserved
9559         Tivoli NetView Pager daemon
9560         Tivoli NetView Object Database daemon
9561         Tivoli NetView Topology Manager daemon
9562         Tivoli NetView Topology Manager socket
9563         Tivoli General Topology Manager
9564         Tivoli NetView OVs_PMD request services
9565         Tivoli NetView OVs_PMD management services
9566         Tivoli NetView trapd socket
9567         Tivoli NetView PMD service
9568         Tivoli NetView General Topology map service
9569         Tivoli NetView Object Database event socket
9570         Tivoli NetView Object Collection facility socket
9571         Tivoli NetView Web Server socket
9572         Tivoli NetView SnmpServer


Fabric Manager remote console TCP/IP default ports


The Fabric Manager uses the ports in Table 3-4 for its remote console.
Table 3-4 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric remote console

Port value   Usage
9560         HTTP port
9561         Reserved
9562         Reserved
9563         ASF Jakarta Tomcat's Local Server port
9564         Tomcat's warp port
9565         NVDAEMON port
9566         NVREQUESTER port
9569         Tivoli NetView Pager daemon
9570         Tivoli NetView Object Database daemon
9571         Tivoli NetView Topology Manager daemon
9572         Tivoli NetView Topology Manager socket
9573         Tivoli General Topology Manager
9574         Tivoli NetView OVs_PMD request services
9575         Tivoli NetView OVs_PMD management services
9576         Tivoli NetView trapd socket
9577         Tivoli NetView PMD service
9578         Tivoli NetView General Topology map service
9579         Tivoli NetView Object Database event socket
9580         Tivoli NetView Object Collection facility socket
9581         Tivoli NetView Web Server socket
9582         Tivoli NetView SnmpServer

Fabric Agents TCP/IP ports


The Fabric Agents use the TCP/IP ports listed in Table 3-5.
Table 3-5 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric Agents

Port value   Usage
9510         Common agent
9514         Used to restart the agent
9515         Used to restart the agent


3.2.2 Default databases created during the installation


During the installation of IBM TotalStorage Productivity Center, we recommend that you use DB2 as the preferred database type. Table 3-6 lists all the default databases that the installer creates during the installation.
Table 3-6 Default DB2 databases

Application                                                                       Default database name (DB2)
IBM Director                                                                      No default; we created database IBMDIR
Tivoli Agent Manager                                                              IBMCDB
IBM TotalStorage Productivity Center for Disk and Replication Base                DMCOSERV
IBM TotalStorage Productivity Center for Disk                                     PMDATA
IBM TotalStorage Productivity Center for Replication hardware subcomponent        ESSHWL
IBM TotalStorage Productivity Center for Replication element catalog              ELEMCAT
IBM TotalStorage Productivity Center for Replication replication manager          REPMGR
IBM TotalStorage Productivity Center for Replication SVC hardware subcomponent    SVCHWL
IBM TotalStorage Productivity Center for Fabric                                   ITSANM
IBM TotalStorage Productivity Center for Data                                     No default; we created database TPCDATA
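After the suite installation, you can confirm which of these databases were actually created on a given server. A minimal check, assuming DB2 is installed locally, run from a DB2 command window (db2cmd):

  rem list the databases cataloged on this DB2 instance
  db2 list database directory

The output should include entries such as IBMCDB or DMCOSERV for the components installed on that machine.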

3.3 Our lab setup environment


This section gives a brief overview of what our lab setup environment looked like and what we used to document the installation.

Server hardware used


We used four IBM eServer xSeries servers with:
- 2 x 2.4 GHz CPU per system
- 4 GB memory per system
- 73 GB HDD per system
- Windows 2000 with Service Pack 4

System 1
The name of our first system was Colorado. The following applications were installed on this system:
- DB2
- IBM Director
- WebSphere Application Server
- WebSphere Application Server update
- Tivoli Agent Manager
- IBM TotalStorage Productivity Center for Disk and Replication Base
- IBM TotalStorage Productivity Center for Disk
- IBM TotalStorage Productivity Center for Replication


- IBM TotalStorage Productivity Center for Data
- IBM TotalStorage Productivity Center for Fabric

System 2
The name of our second system was Gallium. The following applications were installed on this server: Data Agent

System 3
The name of our third system was PQDISRV. The following applications were installed on this server:
- DB2
- Application software

Systems used for CIMOM servers


We used four xSeries servers for our Common Information Model Object Manager (CIMOM) servers. They consisted of:
- 2 GHz CPU per system
- 2 GB memory per system
- 40 GB HDD per system
- Windows 2000 Server with Service Pack 4

CIMOM system 1
Our first CIMOM server was named TPCMAN. On this server, we installed:
- ESS CLI
- ESS CIMOM
- LSI Provider (FAStT CIMOM)

CIMOM system 3
Our third CIMOM system was named SVCCON. We installed the following applications on this server:
- SAN Volume Controller (SVC) Console
- SVC CIMOM

Networking
We used the following switches for networking:
- IBM Ethernet 10/100 24-port switch
- 2109 F16 Fiber switch

Storage devices
We employed the following storage devices:
- IBM TotalStorage Enterprise Storage Server (ESS) 800 and F20
- DS8000
- DS6000
- DS4000
- IBM SVC

Figure 3-1 on page 64 shows a diagram of our lab setup environment.


Figure 3-1 Lab setup environment

3.4 Pre-installation check list


You need to complete the following tasks in preparation for installing the IBM TotalStorage Productivity Center. Print the tables in Appendix A, Worksheets on page 991, to keep track of the information you will need during the installation, such as user names, ports, IP addresses, and locations of servers and managed devices.
1. Determine which elements of the TotalStorage Productivity Center you will install.
2. Uninstall Internet Information Services.
3. Grant the following privileges to the user account that will be used to install the TotalStorage Productivity Center:
   - Act as part of the operating system
   - Create a token object
   - Increase quotas
   - Replace a process-level token
   - Log on as a service

4. Install and configure Simple Network Management Protocol (SNMP) (Fabric requirement).
5. Identify any firewalls and obtain the required authorization.
6. Obtain the static IP addresses that will be used for the TotalStorage Productivity Center servers.


3.5 User IDs and security


This section discusses the user IDs that are used during the installation and those that are used to manage and work with TotalStorage Productivity Center. It also explains how you can increase the basic security of the different components.

3.5.1 User IDs


This section lists and explains the user IDs used in an IBM TotalStorage Productivity Center environment. For some of the IDs, refer to Table 3-8 for a link to additional information that is available in the manuals.

Suite Installer user


We recommend that you use the Windows Administrator or a dedicated user for the installation of TotalStorage Productivity Center. That user ID should have the user rights listed in Table 3-7.
Table 3-7 Requirements for the Suite Installer user

User rights/policy                     Used for
Act as part of the operating system    DB2, Productivity Center for Disk, Fabric Manager
Create a token object                  DB2, Productivity Center for Disk
Increase quotas                        DB2, Productivity Center for Disk
Replace a process-level token          DB2, Productivity Center for Disk
Log on as a service                    DB2, Productivity Center for Disk
Debug programs

Table 3-8 shows the user IDs that are used in a TotalStorage Productivity Center environment. It provides information about the Windows group to which the user ID must belong, whether it is a new user ID that is created during the installation, and when the user ID is used.
Table 3-8 User IDs used in an IBM TotalStorage Productivity Center environment

Element: Suite Installer
  User ID: Administrator          New user: No                        Type: Windows

Element: DB2
  User ID: db2admin (a)           New user: Yes, will be created      Type: Windows
  Usage: DB2 management and Windows Service Account

Element: IBM Director (see also IBM Director on page 67)
  User ID: tpcadmin (a)           New user: No                        Type: Windows
  Group or groups: DirAdmin or DirSuper
  Usage: Windows Service Account

Element: Resource Manager
  User ID: manager (b)            New user: No, default user          Type: Tivoli Agent Manager
  Group or groups: N/A, internal user
  Usage: Used during the registration of a Resource Manager to the Agent Manager

Element: Common Agent (see also Common Agent on page 67)
  User ID: AgentMgr (b)           New user: No                        Type: Tivoli Agent Manager
  Group or groups: N/A, internal user
  Usage: Used to authenticate agents and lock the certificate key files

Element: Common Agent
  User ID: itcauser (b)           New user: Yes, will be created      Type: Windows
  Group or groups: Windows
  Usage: Windows Service Account

Element: TotalStorage Productivity Center universal user
  User ID: tpccimom (a)           New user: Yes, will be created      Type: Windows
  Group or groups: DirAdmin
  Usage: This ID is used to accomplish connectivity with the managed devices. For example, this ID has to be set up on the CIM Agents.

Element: Tivoli NetView
  Type: Windows
  Usage: See Fabric Manager User IDs on page 68

Element: IBM WebSphere
  Type: Windows
  Usage: See Fabric Manager User IDs on page 68

Element: Host Authentication
  Type: Windows
  Usage: See Fabric Manager User IDs on page 68

a. This account can have any name you choose.
b. This account name cannot be changed during the installation.
c. The DB2 administrator user ID and password are used here. See Fabric Manager User IDs on page 68.

Granting privileges
Grant privileges to the user ID that is used to install the IBM TotalStorage Productivity Center for Disk and Replication Base, IBM TotalStorage Productivity Center for Disk, and IBM TotalStorage Productivity Center for Replication. These user rights are governed by the local security policy and are not initially set as defaults for administrators, so they may not be in effect when you log on as the local administrator. If the IBM TotalStorage Productivity Center installation program does not detect the required user rights for the logged-on user name, the program can optionally set them by changing the local security policy settings. Alternatively, you can set them manually before performing the installation (a scripted alternative is sketched after these steps). To manually set these privileges, follow these steps:
1. Click Start → Settings → Control Panel.
2. Double-click Administrative Tools.
3. Double-click Local Security Policy.
4. The Local Security Settings window opens. Expand Local Policies. Then double-click User Rights Assignments to see the policies in effect on your system. For each policy to be added to the user, perform the following steps:
   a. Highlight the policy to be selected.
   b. Double-click the policy and look for the user's name in the Assigned To column of the Local Security Policy Setting window to verify the policy setting. Ensure that the Local Policy Setting and the Effective Policy Setting options are selected.


   c. If the user name does not appear in the list for the policy, you must add the policy to the user. Perform the following steps to add the user to the list:
      i. In the Local Security Policy Setting window, click Add.
      ii. In the Select Users or Groups window, under the Name column, highlight the user or group.
      iii. Click Add to place the name in the lower window.
      iv. Click OK to add the policy to the user or group.
5. After you set these user rights, either by using the installation program or manually, log off the system and then log on again for the user rights to take effect.
6. Restart the installation program to continue with the IBM TotalStorage Productivity Center for Disk and Replication Base installation.
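If you prefer to script these assignments, the ntrights utility from the Windows 2000 or Windows Server 2003 Resource Kit can grant the same rights from a command prompt. This is only a sketch; the account name tpcinstall is a hypothetical installation user, and ntrights must already be installed from the Resource Kit:

  rem Act as part of the operating system
  ntrights -u tpcinstall +r SeTcbPrivilege
  rem Create a token object
  ntrights -u tpcinstall +r SeCreateTokenPrivilege
  rem Increase quotas
  ntrights -u tpcinstall +r SeIncreaseQuotaPrivilege
  rem Replace a process-level token
  ntrights -u tpcinstall +r SeAssignPrimaryTokenPrivilege
  rem Log on as a service
  ntrights -u tpcinstall +r SeServiceLogonRight

As with the manual procedure, log off and log on again before the new rights take effect.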

TotalStorage Productivity Center communication user


The communication user account is used for authentication between several different elements of the environment. For example, if WebSphere Application Server is installed with the Suite Installer, its administrator ID is the communication user.

IBM Director
With Version 4.1, you no longer need to create an internal user account. All user IDs must be operating system accounts and members of one of the following groups:
- DirAdmin or DirSuper groups (Windows), or diradmin or dirsuper groups (Linux)
- Administrator or Domain Administrator groups (Windows), or root (Linux)
In addition, a host authentication password is used to allow managed hosts and remote consoles to communicate with IBM Director.

Resource Manager
The user ID and password (default is manager and password) for the Resource Manager is stored in the AgentManager\config\Authorization.xml file on the Agent Manager. Since this is used only during the initial registration of a new Resource Manager, there is no problem with changing the values at any time. You can find a detailed procedure on how to change this in the Installation and Planning Guides of the corresponding manager. You can have multiple Resource Manager user IDs if you want to separate the administrators for the different managers, for example for IBM TotalStorage Productivity Center for Data and IBM TotalStorage Productivity Center for Fabric.

Common Agent
Each time the Common Agent is started, the registration context and the agent registration password are used to validate the registration of the agent with the Tivoli Agent Manager. Furthermore, the password is used to lock the certificate key files (agentTrust.jks). The default password is changeMe, but you should change the password when you install the Tivoli Agent Manager. The Tivoli Agent Manager stores this password in the AgentManager.properties file. If you start with the defaults but want to change the password later, all the agents have to be changed. A procedure to change the password is available in the Installation and Planning Guides of the corresponding managers (at this time, Data or Fabric). Because the password is used to lock the certificate files, you must also apply this change to the Resource Managers.


The Common Agent user ID AgentMgr is not an operating system user ID, but rather the context in which the agent is registered at the Tivoli Agent Manager. There is no need to change this, so we recommend that you accept the default.

TotalStorage Productivity Center universal user


The account used to accomplish connectivity with managed devices has to be part of the DirAdmin (Windows) or diradmin (Linux) group. This user ID communicates with CIMOMs during install and post install. It also communicates with WebSphere.

Fabric Manager User IDs


During the installation of IBM TotalStorage Productivity Center for Fabric, you can select whether you want to use individual passwords for subcomponents such as DB2, IBM WebSphere, NetView, and Host Authentication. You can also choose to use the DB2 administrator's user ID and password to make the configuration simpler. Figure 4-117 on page 164 shows the window where you can choose the options.

3.5.2 Increasing user security


The goal of increasing security is to have multiple roles available for the various tasks that can be performed. Each role is associated with a certain group, and users are added only to those groups that they need to be part of to do their work. Not all components provide a way to increase security, and some methods require a degree of knowledge about the specific components to perform the configuration successfully.

IBM TotalStorage Productivity Center for Data


During the installation of Productivity Center for Data, you can enter the name of a Windows group. Every user within this group is allowed to manage Productivity Center for Data. Other users may only start the interface and look at it. You can add or change the name of that group later by editing the server.config file and restarting Productivity Center for Data.

Productivity Center for Data does not support the following domain login formats for logging into its server component:
- (domain name)/(username)
- (username)@(domain)

Because it does not support these formats, you must set up users in a domain account that can log into the server. Perform the following steps before you install Productivity Center for Data in your environment:
1. Create a Local Admin Group.
2. Create a Domain Global Group.
3. Add the Domain Global Group to the Local Admin Group.

Productivity Center for Data looks up the SID for the domain user when the login occurs. You only need to specify a user name and password.
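A minimal sketch of these three steps using standard Windows commands follows. The group names TPCDataLocalAdmins and TPCDataGlobal and the domain MYDOM are hypothetical, and the domain group must be created with appropriate domain rights:

  rem 1. create the local group on the Data Manager server
  net localgroup TPCDataLocalAdmins /add
  rem 2. create the domain global group
  net group TPCDataGlobal /add /domain
  rem 3. add the domain global group to the local group
  net localgroup TPCDataLocalAdmins MYDOM\TPCDataGlobal /add

During the Productivity Center for Data installation, you would then enter the local group name when prompted for the administrators group.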


3.5.3 Certificates and key files


Within a TotalStorage Productivity Center environment, several applications use certificates to ensure security: Productivity Center for Disk, Productivity Center for Replication, and Tivoli Agent Manager.

Productivity Center for Disk and Replication certificates


The WebSphere Application Server that is part of Productivity Center for Disk and Replication uses certificates for Secure Sockets Layer (SSL) communication. During the installation, the key files can be generated as self-signed certificates, but you must enter a password for each file to lock it. The default file names are:
- MDMServerKeyFile.jks
- MDMServerTrustFile.jks
The default directory for these key files is C:\IBM\mdm\dm\keys.

Tivoli Agent Manager certificates


The Agent Manager comes with demonstration certificates that you can use. However, you can also create new certificates during the installation of Agent Manager (see Figure 4-26 on page 104). If you choose to create new files, the password that you enter on the panel, as shown in Figure 4-27 on page 105, as the Agent registration password is used to lock the agentTrust.jks key file. The default directory for that key file on the Agent Manager is C:\Program Files\IBM\AgentManager\certs. There are more key files in that directory, but during the installation and first steps, the agentTrust.jks file is the most important one. This is only important if you allow the installer to create your keys.

3.5.4 Services and service accounts


The managers and components that belong to the TotalStorage Productivity Center are started as Windows Services. Table 3-9 provides an overview of the most important services. To keep it simple, we did not include all the DB2 services in the table.
Table 3-9 Services and service accounts

Element: DB2
  Service name: (individual DB2 services, not listed here)
  Service account: db2admin
  Comment: The account needs to be part of Administrators and DB2ADMNS.

Element: IBM Director
  Service name: IBM Director Server
  Service account: Administrator
  Comment: You need to modify the account to be part of one of the groups: DirAdmin or DirSuper.

Element: Agent Manager
  Service name: IBM WebSphere Application Server V5 - Tivoli Agent Manager
  Service account: LocalSystem
  Comment: You need to set this service to start automatically after the installation.

Element: Common Agent
  Service name: IBM Tivoli Common Agent - C:\Program Files\tivoli\ep
  Service account: itcauser

Element: Productivity Center for Data
  Service name: IBM TotalStorage Productivity Center for Data server
  Service account: TSRMsrv1

Element: Productivity Center for Fabric
  Service name: IBM WebSphere Application Server V5 - Fabric Manager
  Service account: LocalSystem

Element: Tivoli NetView Service
  Service name: Tivoli NetView Service
  Service account: NetView

3.6 Starting and stopping the managers


To start, stop, or restart one of the managers or components, you use the Windows Services panel (Control Panel → Administrative Tools → Services). Table 3-10 shows a list of the services.
Table 3-10 Services used for TotalStorage Productivity Center

Element                          Service name                                                  Service account
DB2                              (individual DB2 services)                                     db2admin
IBM Director                     IBM Director Server                                           Administrator
Agent Manager                    IBM WebSphere Application Server V5 - Tivoli Agent Manager    LocalSystem
Common Agent                     IBM Tivoli Common Agent - C:\Program Files\tivoli\ep          itcauser
Productivity Center for Data     IBM TotalStorage Productivity Center for Data Server          TSRMsrv1
Productivity Center for Fabric   IBM WebSphere Application Server V5 - Fabric Manager          LocalSystem
Tivoli NetView Service           Tivoli NetView Service                                        NetView
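If you need to stop or start one of these components from a script or a command prompt instead of the Services panel, the standard net commands accept the service display names shown above. A small sketch (the display names are taken from Table 3-10; adjust them if your installation used different defaults):

  rem stop and restart the Data Manager server service
  net stop "IBM TotalStorage Productivity Center for Data Server"
  net start "IBM TotalStorage Productivity Center for Data Server"

net stop prompts you if other services depend on the one you are stopping.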

3.7 Windows Management Instrumentation


Before beginning the Prerequisite Software installation, the Windows Management Instrumentation service must first be stopped and disabled. To disable the service, follow the steps below (a command-line alternative is shown after these steps).
1. Go to Start → Settings → Control Panel → Administrative Tools → Services.
2. Scroll down and double-click the Windows Management Instrumentation service (see Figure 3-2 on page 71).


Figure 3-2 Windows Management Instrumentation service

3. In the Windows Management Instrumentation Properties window, go down to Service status and click the Stop button (Figure 3-3). Wait for the service to stop.

Figure 3-3 Stopping Windows Management Instrumentation


4. After the service is stopped, in the Windows Management Instrumentation Properties window, change the Startup type to Disabled (Figure 3-4) and click OK.

Figure 3-4 Disabled Windows Management Instrumentation

5. After disabling the service, it may start again. If so, go back and stop the service again. The service should now be stopped and disabled as shown in Figure 3-5.

Figure 3-5 Windows Management Instrumentation successfully disabled

Important: After the Prerequisite Software installation completes, you must enable the Windows Management Instrumentation service before installing the suite. To enable the service, change the Startup type from Disabled (see Figure 3-4) back to Automatic.
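The same stop/disable and re-enable sequence can be performed from a command prompt. This is a sketch that assumes the standard service name winmgmt for Windows Management Instrumentation:

  rem stop and disable WMI before running the Prerequisite Software Installer
  net stop winmgmt
  sc config winmgmt start= disabled

  rem re-enable and start WMI again after the prerequisite installation completes
  sc config winmgmt start= auto
  net start winmgmt

Note that sc config requires the space after start= exactly as shown.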


3.8 World Wide Web Publishing


As with the Windows Management Instrumentation service, the World Wide Web Publishing service must also be stopped and disabled before starting the Prerequisite Software Installer. To stop the World Wide Web Publishing service, follow the same steps as in 3.7, Windows Management Instrumentation on page 70. This service can remain disabled.

3.9 Uninstalling Internet Information Services


Make sure Internet Information Services (IIS) is not installed on the server. If it is installed, uninstall it using the following procedure:
1. Click Start → Settings → Control Panel.
2. Click Add/Remove Programs.
3. In the Add or Remove Programs window, click Add/Remove Windows Components.
4. In the Windows Components panel, deselect IIS.

3.10 Installing SNMP


Before you install the components of the TotalStorage Productivity Center, install and configure SNMP.
1. Click Start → Settings → Control Panel.
2. Click Add/Remove Programs.
3. In the Add or Remove Programs window, click Add/Remove Windows Components.
4. Double-click Management and Monitoring Tools.
5. In the Windows Components panel, select Simple Network Management Protocol and click OK.
6. Close the panels and accept the installation of the components. The Windows installation CD or installation files are required.
7. Make sure that the SNMP services are configured as explained in these steps:
   a. Right-click My Computer and select Manage.
   b. In the Computer Management window, click Services and Applications.
   c. Double-click Services.
8. Scroll down to and double-click SNMP Service.
9. In the SNMP Service Properties window, follow these steps:
   a. Click the Traps tab (see Figure 3-6 on page 74).


   b. Make sure that the public community name is available.

Figure 3-6 Traps tab in the SNMP Service Properties window

   c. Click the Security tab (see Figure 3-7).
   d. Select Accept SNMP packets from any host.
   e. Click OK.

Figure 3-7 SNMP Security Properties window

10. After you set the public community name, restart the SNMP service.

3.11 IBM TotalStorage Productivity Center for Fabric


Prior to installing IBM TotalStorage Productivity Center for Fabric, there are planning considerations and prerequisite tasks that you need to complete.

3.11.1 The computer name


IBM TotalStorage Productivity Center for Fabric requires fully qualified host names for the manager, managed hosts, and the remote console. To verify your computer name on Windows, follow this procedure (a command-line check follows these steps):
1. Right-click the My Computer icon on your desktop and select Properties.
2. The System Properties window opens.
   a. Click the Network Identification tab. Click Properties.
   b. The Identification Changes panel opens.
      i. Verify that your computer name is entered correctly. This is the name by which the computer is identified in the network.
      ii. Verify that the full computer name is a fully qualified host name. For example, user1.sanjose.ibm.com is a fully qualified host name.
      iii. Click More.
   c. The DNS Suffix and NetBIOS Computer Name panel opens. Verify that the Primary DNS suffix field displays a domain name.

Important: The fully qualified host name must match the HOSTS file name (including case-sensitive characters).
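You can also check the same values from a command prompt. This is a quick, informal check using the output labels Windows 2000/2003 produce:

  rem short computer name
  hostname
  rem full host name and primary DNS suffix
  ipconfig /all | findstr /i "Host Primary"

The combination of the Host Name and Primary Dns Suffix lines should form the same fully qualified name that you verified in the System Properties window.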

3.11.2 Database considerations


When you install IBM TotalStorage Productivity Center for Fabric, a DB2 database is automatically created if you specified the DB2 database. The default database name is TSANMDB.

If you installed IBM TotalStorage Productivity Center for Fabric previously, are using a DB2 database, and want to save the information in the database before you reinstall the manager, you must use DB2 commands to back up the database. The default name for the IBM TotalStorage Productivity Center for Fabric DB2 database is TSANMDB. The database name for Cloudscape is also TSANMDB. You cannot change this database name.

If you are installing the manager on more than one machine in a Windows domain, the managers on different machines may end up sharing the same DB2 database. To avoid this situation, you must either use different database names or different DB2 user names when installing the manager on different machines.
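A minimal sketch of such a backup, assuming the default database name TSANMDB and a hypothetical target directory D:\fabricbackup; stop the Fabric Manager first so that no applications hold connections, and run the commands from a DB2 command window (db2cmd):

  rem make sure no applications are connected, then back up the Fabric database
  db2 force applications all
  db2 backup database TSANMDB to D:\fabricbackup

The backup image written to D:\fabricbackup can later be restored with the db2 restore database command.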

3.11.3 Windows Terminal Services


You cannot use the Windows Terminal Services to access a machine that is running the IBM TotalStorage Productivity Center for Fabric console (either the manager or remote console machine). Any TotalStorage Productivity Center for Fabric dialogs launched from the SAN menu in Tivoli NetView appear on the manager or remote console machine only. The dialogs do not appear in the Windows Terminal Services session.


3.11.4 Tivoli NetView


IBM TotalStorage Productivity Center for Fabric also installs Tivoli NetView 7.1.3. If you already have Tivoli NetView 7.1.1 installed, IBM TotalStorage Productivity Center for Fabric upgrades it to Version 7.1.3. If you have a Tivoli NetView release earlier than Version 7.1.1, IBM TotalStorage Productivity Center for Fabric prompts you to uninstall Tivoli NetView before you install this product.

If you have Tivoli NetView 7.1.3 installed, ensure that the following applications are stopped. You can check for Tivoli NetView by opening the Tivoli NetView console icon on your desktop.
- Web Console
- Web Console Security
- MIB Loader
- MIB Browser
- Netmon Seed Editor
- Tivoli Event Console Adapter

Important: Ensure that the Windows 2000 Terminal Services is not running. Go to the Services panel and check for Terminal Services.

User IDs and password considerations


TotalStorage Productivity Center for Fabric only supports local user IDs and groups. It does not support domain user IDs and groups.

Cloudscape database
If you install TotalStorage Productivity Center for Fabric and specify the Cloudscape database, you need the following user IDs and passwords:
- Agent manager name or IP address and password
- Common agent password to register with the Agent Manager
- Resource manager user ID and password to register with the Agent Manager
- WebSphere administrative user ID and password
- Host authentication password
- Tivoli NetView password only

DB2 database
If you install IBM TotalStorage Productivity Center for Fabric and specify the DB2 database, you need the following user IDs and passwords:
- Agent manager name or IP address and password
- Common agent password to register with the Agent Manager
- Resource manager user ID and password to register with the Agent Manager
- DB2 administrator user ID and password
- DB2 user ID and password
- WebSphere administrative user ID and password
- Host authentication password only
- Tivoli NetView password only

Note: If you are running Windows 2000, when the IBM TotalStorage Productivity Center for Fabric installation program asks for an existing user ID for WebSphere, that user ID must have the Act as part of the operating system user right.

WebSphere
To change the WebSphere user ID and password, follow this procedure:
1. Open the install_location\apps\was\properties\soap.client.props file.


2. Modify the following entries:
   com.ibm.SOAP.loginUserid=user_ID (enter a value for user_ID)
   com.ibm.SOAP.loginPassword=password (enter a value for password)
3. Save the file.
4. Run the following script:
ChangeWASAdminPass.bat user_ID password install_dir

Here user_ID is the WebSphere user ID and password is the password. install_dir is the directory where the manager is installed and is optional. For example, install_dir is c:\Program Files\IBM\TPC\Fabric\manager\bin\W32-ix86.
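A hypothetical invocation, purely as an illustration (wasadmin and newPassw0rd are made-up values, and the install_dir is the example path given above):

  ChangeWASAdminPass.bat wasadmin newPassw0rd "c:\Program Files\IBM\TPC\Fabric\manager\bin\W32-ix86"

Because the path contains spaces, enclose the install_dir argument in double quotation marks.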

3.11.5 Personal firewall


If you have a software firewall on your system, disable the firewall while installing the Fabric Manager. The firewall causes Tivoli NetView installation to fail. You can enable the firewall after you install the Fabric Manager.

Security considerations
You set up security by using certificates. You can use the demonstration certificates, or you can generate new certificates. This option is specified when you install the Agent Manager (see Figure 4-26 on page 104). We recommend that you generate new certificates.

If you used the demonstration certificates, continue with the installation. If you generated new certificates, follow this procedure:
1. Copy the manager CD image to your computer.
2. Copy the agentTrust.jks file from the Agent Manager (AgentManager/certs directory) to the /certs directory of the manager CD image. This overwrites the existing agentTrust.jks file.
3. You can write a new CD image with the new file, or keep this image on your computer and point the Suite Installer to the directory when requested.

3.11.6 Changing the HOSTS file


When you install Service Pack 3 for Windows 2000 on your computers, follow these steps to avoid addressing problems with IBM TotalStorage Productivity Center for Fabric. The problem is caused by the address resolution protocol, which returns the short name and not the fully qualified host name. You can avoid this problem by changing the entries in the corresponding host tables on the Domain Name System (DNS) server and on the local computer. The fully qualified host name must be listed before the short name as shown in Example 3-1. See 3.11.1, The computer name on page 75, for details about determining the host name. To correct this problem, you have to edit the HOSTS file. The HOSTS file is in the %SystemRoot%\system32\drivers\etc\ directory.
Example 3-1 Sample HOSTS file

# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should be
# placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#       38.25.63.10     x.acme.com              # x client host

127.0.0.1        localhost
#
192.168.123.146  jason.groupa.mycompany.com jason
192.168.123.146  jason jason.groupa.mycompany.com

Note: Host names are case sensitive. This is a limitation within WebSphere. Check your host name.

3.12 IBM TotalStorage Productivity Center for Data


Prior to installing IBM TotalStorage Productivity Center for Data, there are planning considerations and prerequisite tasks that you need to complete.

3.12.1 Server recommendations


The IBM TotalStorage Productivity Center for Data server component acts as a traffic officer for directing information and handling requests from the agent and UI components installed within an environment. You need to install at least one server within your environment. We recommend that you do not manage more than 1000 agents with a single server. If you need to install more than 1000 agents, we suggest that you install an additional server for those agents to maintain optimal performance.

3.12.2 Supported subsystems and databases


This section contains the subsystems, file system formats, and databases that the TotalStorage Productivity Center for Data supports.

Storage subsystem support


Data Manager currently supports the monitoring and reporting of the following storage subsystems:
- Hitachi Data Systems
- HP StorageWorks
- IBM FAStT 200, 600, 700, and 900 with an SMI-S 1.0 compliant CIM interface
- SAN Volume Controller Console Version 1.1.0.2, 1.1.0.9, 1.2.0.5, 1.2.0.6 (1.3.2 patch available), 1.2.1.x
- SAN Volume Controller CIMOM Version 1.1.0.1, 1.2.0.4, 1.2.0.5 (1.3.2 patch available), 1.2.1.x
- ESS ICAT 1.1.0.2, 1.2.0.15, 1.2.0.29, 1.2.x, 1.2.1.40 and later for ESS


File system support


Data Manager supports the monitoring and reporting of the following file systems:
- FAT
- FAT32
- NTFS4, NTFS5
- EXT2, EXT3
- AIX_JFS
- HP_HFS
- VXFS
- UFS
- TMPFS
- AIX_OLD
- NW_FAT
- NW_NSS
- NF
- WAFL
- FAKE
- AIX_JFS2
- SANFS
- REISERFS

Network File System support


Data Manager currently supports the monitoring and reporting of the following Network File Systems (NFS):
- IBM TotalStorage SAN File System 1.0 (Version 1 Release 1), from AIX V5.1 (32-bit) and Windows 2000 Server/Advanced Server clients
- IBM TotalStorage SAN File System 2.1 and 2.2, from AIX V5.1 (32-bit), Windows 2000 Server/Advanced Server, Red Hat Enterprise Linux 3.0 Advanced Server, and SUN Solaris 9 clients
- General Parallel File System (GPFS) V2.1, V2.2

RDBMS support
Data Manager currently supports the monitoring of the following relational database management systems (RDBMS):
- Microsoft SQL Server 7.0, 2000
- Oracle 8i, 9i, 9i V2, 10G
- Sybase SQL Server 11.0.9 and higher
- DB2 Universal Database (UDB) 7.1, 7.2, 8.1, 8.2 (64-bit UDB DB2 instances are supported)

3.12.3 Security considerations


This section describes the security issues that you must consider when installing Data Manager.


User levels
There are two levels of users within IBM TotalStorage Productivity Center for Data: non-administrator users and administrator users. The level of a user determines how that user works with IBM TotalStorage Productivity Center for Data.

Non-administrator users can:
- View the data collected by IBM TotalStorage Productivity Center for Data
- Create, generate, and save reports

IBM TotalStorage Productivity Center for Data administrators can:
- Create, modify, and schedule Pings, Probes, and Scans
- Create, generate, and save reports
- Perform administrative tasks and customize the IBM TotalStorage Productivity Center for Data environment
- Create Groups, Profiles, Quotas, and Constraints
- Set alerts

Important: Security is set up by using certificates. You can use the demonstration certificates or you can generate new certificates. We recommend that you generate new certificates when you install the Agent Manager.

Certificates
If you generated new certificates, follow this procedure:
1. Copy the CD image to your computer.
2. Copy the agentTrust.jks file from the Agent Manager directory AgentManager/certs to the CommonAgent\certs directory of the manager CD image. This overwrites the existing agentTrust.jks file.
3. You can write a new CD image with the new file, or keep this image on your computer and point the Suite Installer to the directory when requested.

Important: Before installing IBM TotalStorage Productivity Center for Data, define the group within your environment that will have administrator rights within Data Manager. This group must exist on the same machine where you are installing the Server component. During the installation, you are prompted to enter the name of this group.


3.12.4 Creating the DB2 database


Before you install the component, create the IBM TotalStorage Productivity Center for Data database:
1. From the start menu, select Start → Programs → IBM DB2 → General Administration Tools → Control Center.
2. This launches the DB2 Control Center. Create a database that is used for IBM TotalStorage Productivity Center for Data as shown in Figure 3-8. Select All Databases, right-click, and select Create → Databases → Standard.

Figure 3-8 DB2 database creation


3. In the window that opens (Figure 3-9), complete the required database name information. We used the database name of TPCDATA. Click Finish to complete the database creation.

Figure 3-9 DB2 database information for creation
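As an alternative to the Control Center, the same database can be created from a DB2 command window (db2cmd). This is only a sketch; TPCDATA is the database name we chose for our environment, and yours may differ:

  rem create the Data Manager repository database on the local DB2 instance
  db2 create database TPCDATA

The same approach works for any other database you may need to create manually, for example the Agent Manager registry database (IBMCDB by default) when you use a remote DB2 database.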


Chapter 4. Installing the IBM TotalStorage Productivity Center suite


Installation of the TotalStorage Productivity Center suite of products is done using the install wizards. The first, the Prerequisite Software Installer, installs all the products needed before one can install the TotalStorage Productivity Center suite. The second, the Suite Installer, installs the individual components or the entire suite of products. This chapter documents the use of the Prerequisite Software Installer and the Suite Installer. It also includes hints and tips based on our experience.


4.1 Installing the IBM TotalStorage Productivity Center


IBM TotalStorage Productivity Center provides a Prerequisite Software Installer and a Suite Installer that guide you through the installation process. You can also use the Suite Installer to install stand-alone components.

The Prerequisite Software Installer installs the following products in this order:
1. DB2, which is required by all the managers
2. WebSphere Application Server, which is required by all the managers except for TotalStorage Productivity Center for Data
3. Tivoli Agent Manager, which is required by Fabric Manager and Data Manager

The Suite Installer installs the following products or components in this order:
1. IBM Director, which is required by TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication
2. Productivity Center for Disk and Replication Base, which is required by TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication
3. TotalStorage Productivity Center for Disk
4. TotalStorage Productivity Center for Replication
5. TotalStorage Productivity Center for Fabric - Manager
6. TotalStorage Productivity Center for Data - Manager

In addition to the manager installations, the Suite Installer guides you through the installation of other IBM TotalStorage Productivity Center components. You can select more than one installation option at a time. This redbook separates the types of installations into several sections to help explain them. The additional types of installation tasks are:
- IBM TotalStorage Productivity Center Agent installations
- IBM TotalStorage Productivity Center GUI/Client installations
- Language Pack installations
- IBM TotalStorage Productivity Center product uninstallations

4.1.1 Considerations
You may want to use IBM TotalStorage Productivity Center for Disk to manage the IBM TotalStorage Enterprise Storage Server (ESS), DS8000, DS6000, Storage Area Network (SAN) Volume Controller (SVC), IBM TotalStorage Fibre Array Storage Technology (FAStT), or DS4000 storage subsystems. In this case, you must install the prerequisite input/output (I/O) Subsystem Licensed Internal Code (SLIC) and Common Information Model (CIM) Agent for the devices. See Chapter 6, Configuring IBM TotalStorage Productivity Center for Disk on page 247, for more information. If you are installing the CIM Agent for the ESS, DS8000, or DS6000, you must install it on a separate machine.

TotalStorage Productivity Center 2.3 does not support Linux on zSeries or on S/390. Nor does IBM TotalStorage Productivity Center support Windows domains.


4.2 Prerequisite Software Installation


This section guides you step by step through the install process of the prerequisite software components.

4.2.1 Best practices


Before you begin installing the prerequisite software components, we recommend that you complete the following tasks:
1. Grant privileges to the user ID used to install the IBM TotalStorage Productivity Center components, including the IBM TotalStorage Productivity Center for Disk and Replication Base, IBM TotalStorage Productivity Center for Disk, IBM TotalStorage Productivity Center for Replication, IBM TotalStorage Productivity Center for Data, and IBM TotalStorage Productivity Center for Fabric. For details, refer to Granting privileges on page 66.
2. Make sure Internet Information Services (IIS) is not installed on the server. If it is installed, uninstall it using the procedure in 3.9, Uninstalling Internet Information Services on page 73.
3. Install and configure Simple Network Management Protocol (SNMP) as described in 3.10, Installing SNMP on page 73.
4. Stop and disable the Windows Management Instrumentation (3.7, Windows Management Instrumentation on page 70) and World Wide Web Publishing (3.8, World Wide Web Publishing on page 73) services.
5. Create a database for the Agent Manager installation. To create the database, see 3.12.4, Creating the DB2 database on page 81. The default database name for Agent Manager is IBMCDB.

4.2.2 Installing prerequisite software


Follow these steps to install the prerequisite software components:
1. Insert the IBM TotalStorage Productivity Center Prerequisite Software Installer CD into the CD-ROM drive. If Windows autorun is enabled, the installation program should start automatically. If it does not, open Windows Explorer and go to the IBM TotalStorage Productivity Center CD-ROM drive. Double-click setup.exe.

Note: It may take a few moments for the installer program to initialize. Be patient. Eventually, you see the language selection panel (Figure 4-1).

2. The installer language window (Figure 4-1) opens. From the list, select a language. This is the language that is used to install this product. Click OK.

Figure 4-1 Prerequisite Software Installer language


3. The Prerequisite Software Installer wizard welcome pane in Figure 4-2 opens. Click Next. The Software License Agreement panel is then displayed. Read the terms of the license agreement. If you agree with the terms of the license agreement select the I accept the terms in the license agreement radio button and click Next to continue.

Figure 4-2 Prerequisite Software Installer wizard

4. The prerequisite operating system check panel in Figure 4-3 on page 87 opens. When it completes successfully click Next.


Figure 4-3 Prerequisite Operating System check

5. The Tivoli Common Directory location panel (Figure 4-4) opens and prompts for a location for the log files. Accept the default location or enter a different location. Click Next to continue.

Figure 4-4 Tivoli Common Directory location


6. The product selection panel (Figure 4-5) opens. To install the entire TotalStorage Productivity Center suite, check the boxes next to DB2, WebSphere, and Agent Manager.

Figure 4-5 Product selection

7. The DB2 Universal Database panel (Figure 4-6) opens. Select Enterprise Server Edition and click Next to continue.

Figure 4-6 DB2 Universal Database


Note: After clicking Next (Figure 4-6), if you see the panel in Figure 4-7, you must first stop and disable the Windows Management Instrumentation service before continuing with the installation. See 3.7, Windows Management Instrumentation on page 70 for detailed instructions.

Figure 4-7 Windows Management Instrumentation service warning


8. The DB2 user name and password panel (Figure 4-8) opens. If the DB2 user name exists on the system, the correct password must be entered or the DB2 installation will fail. If the DB2 user name does not exist it will be created by the DB2 install. In our installation we accepted the default user name and entered a unique password. Click Next to continue.

Figure 4-8 DB2 user configuration


9. The Target Directory Confirmation panel (Figure 4-9) opens. Accept the default target directories for DB2 installation or enter a different location. Click Next.

Figure 4-9 Target Directory Confirmation

10.The language selection panel (Figure 4-10) opens. This installs the languages selected for DB2. Select your desired language(s). Click Next.

Figure 4-10 Language selection


11.The Preview Prerequisite Software Information panel (Figure 4-11) opens. Review the information and click Next.

Figure 4-11 Preview Prerequisite Software Information

12.The WebSphere Application Server system prerequisites check panel (Figure 4-12) opens. When the check completes successfully click Next.

Figure 4-12 WebSphere Application Server system prerequisites check


13.The installation options panel (Figure 4-13) opens. Select the type of installation you wish to perform. The rest of this section guides you through Unattended Installation.
- Unattended Installation guides you through copying all installation images to a central location called the installation image depot. Once the copies are completed, the component installations proceed with no further intervention needed.
- Attended Installation prompts you to enter the location of each install image as needed.
Click Next to continue.

Figure 4-13 Installation options


14.The install image depot location panel opens (see Figure 4-14). Enter the location where all installation images are to be copied. Click Next.

Figure 4-14 Install image depot location

15.You are first prompted for the location of the DB2 installation image (see Figure 4-15). Browse to the installation image and select the path to the installation files or insert the install CD and click Copy.

Figure 4-15 DB2 installation source


16.After the DB2 installation image is copied to the install image depot, you are prompted for the location of the WebSphere installation image (see Figure 4-16). Browse to the installation image and select the path to the installation files or insert the install CD and click Copy.

Figure 4-16 WebSphere installation source

17.After the WebSphere installation image is copied, you are prompted for the location of the WebSphere Cumulative fix 3 installation image (see Figure 4-17). Browse to the installation image and select the path to the installation files or insert the install CD and click Copy.

Figure 4-17 WebSphere fix 3 installation source


18.When an install image has been successfully copied to the Install Image Depot, a green check mark appears to the right of the prerequisite. After all the prerequisite software images are successfully copied to the install image depot (Figure 4-18), click Next.

Figure 4-18 Installation images copied successfully


19.The installation of DB2, WebSphere, and the WebSphere Fix Pack begins. When a prerequisite is successfully installed, a green check mark appears to its left. If the installation of a prerequisite fails, a red X appears to the left. If a prerequisite installation fails, exit the installer, check the logs to determine and correct the problem, and restart the installer. When the installation completes successfully (see Figure 4-19), click Next.

Figure 4-19 DB2 and WebSphere installation complete


20. The Agent Manager Registry Information panel opens. Select the type of database, specify the database name, and choose a local or remote database. The default DB2 database name is IBMCDB. For a local database connection, the DB2 database is created if it does not exist. We recommend that you accept the default database name for a local database. Click Next to continue (see Figure 4-20).

Attention: For a remote database connection, the database specified in this panel must exist. Refer to 3.12.4, Creating the DB2 database on page 81 for information on how to create a database in DB2.

Figure 4-20 Agent Manager Registry Information


21.The Database Connection Information panel in Figure 4-21 opens. Specify the location of the database software directory (for DB2, the default install location is C:\Program Files\IBM\SQLLIB), the database user name and password. You must specify the database host name and port if you are using a remote database. Click Next to continue.

Figure 4-21 Agent Manager database connection Information


Note: For a remote database connection the database specified in Figure 4-20 on page 98 must exist. If the database does not exist, you will see the error message shown in Figure 4-22. Refer to 3.12.4, Creating the DB2 database on page 81 for information on how to create a database in DB2.

Figure 4-22 DB2 database error


22.A panel opens prompting for a location to install Tivoli Agent Manager (see Figure 4-23). Accept the default location or enter a different location. Click Next to continue.

Figure 4-23 Tivoli Agent Manager installation directory


23. The WebSphere Application Server Information panel (Figure 4-24) opens. This panel lets you specify the host name or IP address, and the cell and node names on which to install the Agent Manager.

If you specify a host name, use the fully qualified host name, for example, HELIUM.almaden.ibm.com. If you use the IP address, use a static IP address. This value is used in the URLs for all Agent Manager services. We recommend that you use the fully qualified host name, not the IP address, of the Agent Manager server.

Typically the cell and node names are both the same as the host name of the computer. If WebSphere was installed before you started the Agent Manager installation wizard, you can look up the cell and node name values in the %WebSphere Application Server_INSTALL_ROOT%\bin\SetupCmdLine.bat file (see the command sketch after Figure 4-24).

You can also specify the ports used by the Agent Manager. We recommend that you accept the defaults:
Registration Port: The default is 9511 for the server-side Secure Sockets Layer (SSL).
Secure Port: The default is 9512 for client authentication, two-way SSL.
Public Port: The default is 9513.

If you are using WebSphere network deployment or a customized deployment, make sure that the cell and node names are correct. For more information about WebSphere deployment, see your WebSphere documentation. After filling in the required information in the WebSphere Application Server Information panel, click Next.

Figure 4-24 WebSphere Application Server Information
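If you want to confirm the cell and node names before completing this panel, you can search the WebSphere setupCmdLine.bat file from a Windows command prompt. This is only a sketch: the installation path below is the common default, and the variable names WAS_CELL and WAS_NODE are what this file typically defines; adjust both to match your system.

   findstr /i "WAS_CELL WAS_NODE" "C:\Program Files\WebSphere\AppServer\bin\setupCmdLine.bat"

The SET lines returned show the cell and node names to enter in the panel.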


Note: If an IP address is entered in the WebSphere Application Server Information panel shown in Figure 4-24, the next panel (see Figure 4-25) explains why a host name is recommended. Click Back to use a host name or click Next to use the IP address.

Figure 4-25 Agent Manager IP address warning


24.The Security Certificates panel (Figure 4-26) opens. Specify whether to create new certificates or to use the demonstration certificates. In a typical production environment, you would create new certificates. The ability to use demonstration certificates is provided as a convenience for testing and demonstration purposes. Make a selection and click Next.

Figure 4-26 Security Certificates


25. The Security Certificate Settings panel (see Figure 4-27) opens. Specify the certificate authority name, security domain, and agent registration password.

The agent registration password is used to register the agents. You must provide this password when you install the agents. This password is also used for the Agent Manager key store and trust store files. Record this password; it will be used again later in the installation process.

The domain name is used in the right-hand portion of the distinguished name (DN) of every certificate issued by the Agent Manager. It is the name of the security domain defined by the Agent Manager. Typically, this value is the registered domain name or contains the registered domain name. For example, for the computer system myserver.ibm.com, the domain name is ibm.com. This value must be unique in your environment. If you have multiple Agent Managers installed, this value must be different on each Agent Manager.

The default agent registration password is changeMe, and it is case sensitive. Click Next to continue.

Figure 4-27 Security Certificate Settings


26.The User input summary panel for Agent Manager (see Figure 4-28) opens. Review the information and click Next.

Figure 4-28 User input summary


27.The summary information for Agent Manager panel (see Figure 4-29) opens. Click Next.

Figure 4-29 Agent Manager installation summary


28. A panel indicates the status of the Agent Manager installation process. The IBMCDB database is created and tables are added to it. When the Agent Manager installation completes, the Summary of Installation and Configuration Results panel (see Figure 4-30) opens. Click Next to continue.

Figure 4-30 Summary of Installation and Configuration Results


29. The next panel (Figure 4-31) informs you that the Agent Manager service started successfully. Click Finish.

Figure 4-31 Agent Manager service started


30.The next panel (Figure 4-32) indicates the installation of prerequisite software is complete. Click Finish to exit the prerequisite installer.

Figure 4-32 Prerequisite software installation complete

4.3 Suite installation


This section guides you through the step-by-step process to install the TotalStorage Productivity Center components you select. The Suite Installer launches the installation wizard for each manager you choose to install.

4.3.1 Best practices


Before you begin installing the suite of products, complete the following tasks:
1. If you are running the Fabric Manager installation under Windows 2000, the Fabric Manager installation requires the user ID to have the following user rights (to grant them, see Granting privileges under 3.5.1, User IDs on page 65):
   Act as part of the operating system
   Log on as a service
2. Enable Windows Management Instrumentation (see Figure 3.7 on page 70).
3. Install SNMP (see 3.10, Installing SNMP on page 73).
4. Create the database for the TotalStorage Productivity Center for Data installation (see 3.12.4, Creating the DB2 database on page 81, and the command sketch after this list).
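As a quick illustration of task 4, a DB2 database can be created from a DB2 command window (started with the db2cmd command). This is only a sketch: the database name TPCDATA is an example, and you should follow 3.12.4, Creating the DB2 database on page 81 for the exact name and options used with TotalStorage Productivity Center for Data.

   db2 CREATE DATABASE TPCDATA
   db2 LIST DATABASE DIRECTORY

The second command lists the cataloged databases so that you can confirm the new database was created.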

4.3.2 Installing the TotalStorage Productivity Center suite


Follow these steps for successful installation:

1. Insert the IBM TotalStorage Productivity Center Suite Installer CD into the CD-ROM drive. If Windows autorun is enabled, the installation program should start automatically. If it does not, open Windows Explorer, go to the IBM TotalStorage Productivity Center CD-ROM drive, and double-click setup.exe.

Note: It may take a few moments for the installer program to initialize. Be patient. Eventually, you see the language selection panel (Figure 4-33).

2. The Installer language window (see Figure 4-33) opens. From the list, select a language. This is the language used to install this product. Click OK.

Figure 4-33 Installer Wizard

3. You see the Welcome to the InstallShield Wizard for The IBM TotalStorage Productivity Center panel (see Figure 4-34). Click Next.

Figure 4-34 Welcome to IBM TotalStorage Productivity Center panel

4. The Software License Agreement panel (Figure 4-35 on page 112) opens. Read the terms of the license agreement. If you agree with the terms of the license agreement, select the I accept the terms of the license agreement radio button. Then click Next. If you do not accept the terms of the license agreement, the installation program ends without installing IBM TotalStorage Productivity Center components.


Figure 4-35 License agreement

5. The next panel enables you to select the type of installation (Figure 4-36). Select Manager installations of Data, Disk, Fabric, and Replication and then click Next.

Figure 4-36 IBM TotalStorage Productivity Center options panel


6. In the next panel (see Figure 4-37), select the components that you want to install. Click Next to continue.

Figure 4-37 IBM TotalStorage Productivity Center components


7. The suite installer installs the IBM Director first (see Figure 4-38). Click Next.

Figure 4-38 IBM Director prerequisite install

8. The IBM Director installation is now ready to begin (see Figure 4-39). Click Next.

Figure 4-39 Begin IBM Director installation


9. The package location for IBM Director panel (see Figure 4-40) opens. Enter the appropriate information and click Next.

Note: Make sure the Windows Management Instrumentation service is disabled (see Figure 3.7 on page 70 for detailed instructions). If it is enabled, a window appears prompting you to disable the service after you click Next to continue.

Figure 4-40 IBM Director package location

10.The next panel (see Figure 4-41) provides information about the IBM Director post installation reboot option. When prompted, choose the option to reboot later. Click Next.

Figure 4-41 IBM Director information


11.The IBM Director Server - InstallShield Wizard panel (Figure 4-42) opens. It indicates that the IBM Director installation wizard will launch. Click Next.

Figure 4-42 IBM Director InstallShield Wizard

12.The License Agreement window opens (Figure 4-43). Read the license agreement. Click I accept the terms in the license agreement radio button and then click Next.

Figure 4-43 IBM Director license agreement


13. The next window (Figure 4-44) displays an advertisement for the IBM Director Server Plus Pack. Click Next.

Figure 4-44 IBM Director new Server Plus Pack window

14.The Feature and installation directory window (Figure 4-45) opens. Accept the default settings and click Next.

Figure 4-45 IBM Director feature and installation directory window


15. The IBM Director service account information window (see Figure 4-46) opens.
a. Type the domain for the IBM Director system administrator. Alternatively, if there is no domain, type the local host name (the recommended setup).
b. Type a user name and password for IBM Director. IBM Director will run under this user name, and you will log on to the IBM Director console using this user name. In our installation we used the user ID we created to install the TotalStorage Productivity Center. This user must be part of the Administrator group.
c. Click Next to continue.

Figure 4-46 Account information

16.The Encryption settings window (Figure 4-47) opens. Accept the default settings in the Encryption settings window. Click Next.

Figure 4-47 Encryption settings


17. In the Software Distribution settings window (Figure 4-48), accept the default values and click Next.

Note: The TotalStorage Productivity Center components do not use the software-distribution packages function of IBM Director.

Figure 4-48 Installation target directory

18.The Ready to Install the Program window (Figure 4-49) opens. Click Install.

Figure 4-49 Installation ready


19.The Installing IBM Director server window (Figure 4-50) reports the status of the installation.

Figure 4-50 Installation progress

20.The Network driver configuration window (Figure 4-51) opens. Accept the default settings and click OK.

Figure 4-51 Network driver configuration

The secondary window closes and the installation wizard performs additional actions which are tracked in the status window.


21.The Select the database to be configured window (Figure 4-52) opens. Select IBM DB2 Universal Database and click Next.

Figure 4-52 Database selection

22. The IBM Director DB2 Universal Database configuration window (Figure 4-53) opens. It may be behind the status window; if so, click the window to bring it to the foreground.
a. In the Database name field, type a new database name for the IBM Director database table, or type an existing database name.
b. In the User ID and Password fields, type the DB2 user ID and password that you created during the DB2 installation.
c. Click Next to continue.

Figure 4-53 Database selection configuration


23.In the IBM Director DB2 Universal Database configuration secondary window (Figure 4-54), accept the default DB2 node name LOCAL - DB2. Click OK.

Figure 4-54 Database node name selection

24. The Database configuration in progress window is displayed at the bottom of the IBM Director DB2 Universal Database configuration window. Wait for the configuration to complete and the secondary window to close.

25. When the InstallShield Wizard Completed window (Figure 4-55) opens, click Finish.

Figure 4-55 Completed installation

Important: Do not reboot the machine at the end of the IBM Director installation. The Suite Installer reboots the machine.


26. When you see the IBM Director Server Installer Information window (Figure 4-56), click No.

Figure 4-56 IBM Director reboot option

Important: Are you installing IBM TotalStorage Productivity Center for Data? If so, have you created the database for IBM TotalStorage Productivity Center for Data, or are you using an existing database? If you are installing the Disk Manager, you must have created the administrative superuser ID and group and set the privileges.

27. The Install Status panel (see Figure 4-57) opens after a successful installation. Click Next.

Figure 4-57 IBM Director Install Status successful


28. In the machine reboot window (see Figure 4-58), click Next to reboot the machine.

Important: If the server does not reboot at this point, cancel the installer and reboot the server.

Figure 4-58 Install wizard completion


4.3.3 IBM TotalStorage Productivity Center for Disk and Replication Base
There are three separate installations to perform:
Install the IBM TotalStorage Productivity Center for Disk and Replication Base code.
Install the IBM TotalStorage Productivity Center for Disk.
Install the IBM TotalStorage Productivity Center for Replication.

IBM TotalStorage Productivity Center for Disk and Replication Base must be installed by a user who is logged on as a local administrator (for example, as the administrator user) on the system where the IBM TotalStorage Productivity Center for Disk and Replication Base will be installed. If you intend to install IBM TotalStorage Productivity Center for Disk and Replication Base as a server, you need the following system privileges, called user rights, to successfully complete the installation, as described in 3.5.1, User IDs on page 65:
Act as part of the operating system
Create a token object
Increase quotas
Replace a process level token
Debug programs

After rebooting the machine, the installer initializes to continue the suite install. A window opens prompting you to select the installation language to be used for this wizard (Figure 4-59). Select the language and click OK.

Figure 4-59 Selecting the language for the IBM TotalStorage Productivity Center installation wizard


1. The next panel enables you to select the type of installation (Figure 4-60). Select Manager installations of Data, Disk, Fabric, and Replication and click Next.

Figure 4-60 IBM TotalStorage Productivity Center options panel

2. The next window (Figure 4-61) opens allowing you to select which components to install. Select the components you wish to install (all components in this case) and click Next.

Figure 4-61 TotalStorage Productivity Center components


3. The installer checks that all prerequisite software is installed on your system (see Figure 4-62). Click Next.

Figure 4-62 Prerequisite software check

4. Figure 4-63 shows the Installer window about to begin installation of Productivity Center for Disk and Replication Base. The window also displays the products that are yet to be installed. Click Next to begin the installation.

Figure 4-63 IBM TotalStorage Productivity Center installation information


5. The Package Location for Disk and Replication Manager window (Figure 4-64) opens. Enter the appropriate information and click Next.

Figure 4-64 Package location for Productivity Center Disk and Replication

6. The Information for Disk and Replication Base Manager panel (see Figure 4-65) opens. Click Next.

Figure 4-65 Installer information


7. The Welcome panel (see Figure 4-66) opens. It indicates that the Disk and Replication Base Manager installation wizard will be launched. Click Next.

Figure 4-66 IBM TotalStorage Productivity Center for Disk and Replication Base welcome information


8. In the Destination Directory panel (Figure 4-67), you confirm the target directories. Enter the directory path or accept the default directory and click Next.

Figure 4-67 IBM TotalStorage Productivity Center for Disk and Replication Base Installation directory

9. In the IBM WebSphere Instance Selection panel (see Figure 4-68), click Next.

Figure 4-68 WebSphere Application Server information


10.If the installation user ID privileges were not set, you see an information panel stating that you need to set the privileges (see Figure 4-69). Click Yes.

Figure 4-69 Verifying the effective privileges

11.The required user privileges are set and an informational window opens (see Figure 4-70). Click OK.

Figure 4-70 Message indicating the enablement of the required privileges

12.At this point, the installation terminates. You must close the installer. Log off of Windows, log back on again, and then restart the installer.


13.In the Installation Type panel (Figure 4-71), select Typical and click Next.

Figure 4-71 IBM TotalStorage Productivity Center for Disk and Replication Base type of installation

14.If the IBM Director Support Program and IBM Director Server service is still running, the Servers Check panel (see Figure 4-72) opens and prompts you to stop the services. Click Next to stop the services.

Figure 4-72 Server checks


15.In the User Name Input 1 of 2 panel (Figure 4-73), enter the name and password for the IBM TotalStorage Productivity Center for Disk and Replication Base super user ID. This user name must be defined to the operating system. In our environment we used tpccimom as our super user. After entering the required information click Next to continue.

Figure 4-73 IBM TotalStorage Productivity Center for Disk and Replication Base superuser information

16. If the specified super user ID is not defined to the operating system, a window appears asking if you would like to create it (see Figure 4-74). Click Yes to continue.

Figure 4-74 Create new local user account


17.In the User Name Input 2 of 2 panel (Figure 4-75), enter the user name and password for the IBM DB2 Universal Database Server. This is the user ID that was specified when DB2 was installed (see Figure 4-8 on page 90). Click Next to continue.

Figure 4-75 IBM TotalStorage Productivity Center for Disk and Replication Base DB2 user information


18. The SSL Configuration panel (Figure 4-76) opens. If you selected IBM TotalStorage Productivity Center for Disk and Replication Base Server, you must enter the fully qualified names of the two server key files that were generated previously, or that must be generated during or after the IBM TotalStorage Productivity Center for Disk and Replication Base installation. The information that you enter is used later.
a. Choose either of the following options:
Generate a self-signed certificate: Select this option if you want the installer to automatically generate these certificate files. We generated the certificates in our installation.
Defer the generation of the certificate as a manual post-installation task: Select this option if you want to manually generate these certificate files after the installation, using the WebSphere Application Server ikeyman utility (a command-line sketch follows Figure 4-76).
b. Enter the Key file and Trust file passwords. The passwords must be a minimum of six characters in length and cannot contain spaces. You should record the passwords in the worksheets provided in Appendix A, Worksheets on page 991.
c. Click Next.

Figure 4-76 Key and Trust file options
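If you choose to defer certificate generation, the ikeyman utility is the documented route. Purely as an alternative sketch, the Java keytool shipped with WebSphere (in the AppServer\java\bin directory) can also create a self-signed certificate in a JKS key store; the alias, file name, validity period, and password placeholder below are illustrative assumptions, not values required by the product.

   keytool -genkey -alias dmserver -keyalg RSA -validity 365 -keystore dmKeyFile.jks -storepass <key file password>

The command prompts interactively for the distinguished name fields, similar to the panel shown in Figure 4-77.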


The Generate Self-Signed Certificate window opens (see Figure 4-77). Complete all the required fields and click Next to continue.

Figure 4-77 IBM TotalStorage Productivity Center for Disk and Replication Base Certificate information


19. Next you see the Create Local Database window (Figure 4-78). Accept the default database name of DMCOSERV, or enter a different database name. Click Next to continue.

Note: The database name must be unique to IBM TotalStorage Productivity Center for Disk and Replication Base. You cannot share the IBM TotalStorage Productivity Center for Disk and Replication Base database with any other applications.

Figure 4-78 IBM TotalStorage Productivity Center for Disk and Replication Base database name
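Because the database name must not clash with a database used by another application, you can list the databases already cataloged on the server before accepting DMCOSERV. Run the following from a DB2 command window (db2cmd); it is a simple check, not a required installation step.

   db2 LIST DATABASE DIRECTORY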


20.The Preview window (Figure 4-79) displays a summary of all of the choices that you made during the customizing phase of the installation. Click Install to complete the installation.

Figure 4-79 IBM TotalStorage Productivity Center for Disk and Replication Base Installer information


21.The DB2 database is created, the keys are generated, and the Productivity Center for Disk and Replication base is installed. The Finish window opens. You can view the log file for any possible error messages. The log file is located in installeddirectory\logs\dmlog.txt. The dmlog.txt file contains a trace of the installation actions. Click Finish to complete the installation.

Figure 4-80 Productivity Center for Disk and Replication Base Installer - Finish

Notepad opens and displays the post-installation tasks information. Read the information and complete any required tasks.

22. The Install Status window (Figure 4-81) opens after the successful Productivity Center for Disk and Replication Base installation. Click Next.

Figure 4-81 Install Status for Productivity Center for Disk and Replication Base successful


4.3.4 IBM TotalStorage Productivity Center for Disk


The next product to install is the Productivity Center for Disk as indicated in Figure 4-82. Click Next to begin the installation.

Figure 4-82 IBM TotalStorage Productivity Center installer information

1. A window (Figure 4-83) opens that prompts you for the package location for CD-ROM labeled IBM TotalStorage Productivity Center for Disk. Enter the appropriate information and click Next.

Figure 4-83 Productivity Center for Disk installation package location


2. The next window that opens indicates that the IBM TotalStorage Productivity Center for Disk installer wizard will be launched (see Figure 4-84). Click Next.

Figure 4-84 IBM TotalStorage Productivity Center for Disk installer

3. The Productivity Center for Disk Installer - Welcome panel (see Figure 4-85) opens. Click Next.

Figure 4-85 IBM TotalStorage Productivity Center for Disk Installer Welcome


4. The Destination Directory panel (Figure 4-86) opens. Enter the directory path or accept the default directory and click Next.

Figure 4-86 Productivity Center for Disk Installer - Destination Directory

5. The Installation Type panel (Figure 4-87) opens. Select Typical and click Next.

Figure 4-87 Productivity Center for Disk - Installation Type


6. The Create Local Database panel (Figure 4-88) opens. Accept the default database name of PMDATA or re-enter a new database name. Then click Next.

Figure 4-88 IBM TotalStorage Productivity Center for Disk - Create Local Database


7. Review the information on the IBM TotalStorage Productivity Center for Disk Preview panel (Figure 4-89) and click Install.

Figure 4-89 IBM TotalStorage Productivity Center for Disk Installer - Preview

8. The installer creates the required database (see Figure 4-90) and installs the product. You see a progress bar for the Productivity Center for Disk installation status.

Figure 4-90 Productivity Center for Disk DB2 database creation


9. When the installation is complete, you see the Finish panel (Figure 4-91). Review the post installation tasks. Click Finish.

Figure 4-91 Productivity Center for Disk Installer - Finish

10.The Install Status window (Figure 4-92) opens after the successful Productivity Center for Disk installation. Click Next.

Figure 4-92 Install Status for Productivity Center for Disk successful


4.3.5 IBM TotalStorage Productivity Center for Replication


A panel opens that indicates that the installation for IBM TotalStorage Productivity Center for Replication is about to begin (see Figure 4-93). Click Next to begin the installation.

Figure 4-93 IBM TotalStorage Productivity Center installation overview

1. The Package Location for Replication Manager panel (Figure 4-94) opens. Enter the appropriate information and click Next.

Figure 4-94 Productivity Center for Replication install package location


2. The next window that opens indicates that the IBM TotalStorage Productivity Center for Replication installer wizard will be launched (see Figure 4-95). Click Next.

Figure 4-95 Productivity Center for Replication installer

3. The Welcome window (Figure 4-96) opens. It suggests documentation that you can review prior to the installation. Click Next to continue or click Cancel to exit the installation.

Figure 4-96 IBM TotalStorage Productivity Center for Replication Installer Welcome


4. The Destination Directory panel (Figure 4-97) opens. Enter the directory path or accept the default directory. Click Next to continue.

Figure 4-97 IBM TotalStorage Productivity Center for Replication Installer Destination Directory


5. The next panel (see Figure 4-98) asks you to select the installation type. Select the Typical radio button and click Next.

Figure 4-98 Productivity Center for Replication Installer Installation Type


6. In the Create Local Database for Hardware Subcomponent window (see Figure 4-99), in the Database name field, enter a value for the new Hardware subcomponent database or accept the default. We recommend that you accept the default. Click Next.

Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.

Figure 4-99 IBM TotalStorage Productivity Center for Replication: Hardware subcomponent


7. In the Create Local Database for ElementCatalog Subcomponent window (see Figure 4-100), in the Database name field, enter a value for the new Element Catalog subcomponent database or accept the default. Click Next.

Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.

Figure 4-100 IBM TotalStorage Productivity Center for Replication: Element Catalog subcomponent


8. In the Create Local Database for ReplicationManager Subcomponent window (see Figure 4-101), in the Database name field, enter a name for the new Replication Manager subcomponent database or accept the default. Click Next.

Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.

Figure 4-101 TotalStorage Productivity Center for Replication: Replication Manager subcomponent


9. In the Create Local Database window for the SVC hardware subcomponent (see Figure 4-102), in the Database name field, enter a name for the new SVC hardware subcomponent database or accept the default. Click Next.

Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.

Figure 4-102 IBM TotalStorage Productivity Center for Replication: SVC Hardware subcomponent


10.The Setting Tuning Cycle Parameter window (Figure 4-103) opens. Accept the default value of tuning every 24 hours or change the value. You can change this value later in the ElementCatalog.properties file. Click Next.

Figure 4-103 IBM TotalStorage Productivity Center for Replication: Database tuning cycle


11.Review the information in the TotalStorage Productivity Center for Replication Installer Preview panel (Figure 4-104). Click Install.

Figure 4-104 IBM TotalStorage Productivity Center for Replication Installer Preview


12.You see the Productivity Center for Replication Installer - Finish panel (see Figure 4-105) upon successful installation. Read the post installation tasks. Click Finish to complete the installation.

Figure 4-105 Productivity Center for Replication Installer Finish

13. The Install Status window (Figure 4-106) opens after the successful Productivity Center for Replication installation. Click Next.

Figure 4-106 Install Status for Productivity Center for Replication successful


4.3.6 IBM TotalStorage Productivity Center for Fabric


Prior to installing IBM TotalStorage Productivity Center for Fabric, you must complete several prerequisite tasks. These tasks are described in detail in 3.11, IBM TotalStorage Productivity Center for Fabric on page 75. Specifically, complete the tasks in the following sections:
3.10, Installing SNMP on page 73
3.11.1, The computer name on page 75
3.11.2 on page 75
3.11.3, Windows Terminal Services on page 75
User IDs and password considerations on page 76
3.11.4, Tivoli NetView on page 76
3.11.5, Personal firewall on page 77
Security considerations on page 77

Installing the manager


After successful installation of the Productivity Center for Replication, the Suite Installer begins the installation of Productivity Center for Fabric (see Figure 4-107). Click Next.

Figure 4-107 IBM TotalStorage Productivity Center installation information


1. The panel that opens prompts you to specify the location of the install package for Productivity Center for Fabric Manager (see Figure 4-108). Enter the appropriate path and click Next.

Important: If you used the demonstration certificates, point to the CD-ROM drive. If you generated new certificates, point to the manager CD image with the new agentTrust.jks file.

Figure 4-108 Productivity Center for Fabric install package location


2. The next window that opens indicates that the IBM TotalStorage Productivity Center for Fabric installer wizard will be launched (see Figure 4-109). Click Next.

Figure 4-109 Productivity Center for Fabric installer

3. A window opens in which you select the language to use for the wizard (see Figure 4-110). Select the required language and click OK.

Figure 4-110 IBM TotalStorage Productivity Center for Fabric installer: Selecting the language


4. A panel opens asking you to select the type of installation you wish to perform (Figure 4-111). In this case, we install the IBM TotalStorage Productivity Center for Fabric code.

You can also use the Suite Installer to perform remote deployment of the Fabric Agent. You can perform this operation only if you previously installed the common agent on the target machines. For example, you may have installed the Data Agent on the machines and want to add the Fabric Agent to the same machines. You must have the Fabric Manager installed before you can deploy the Fabric Agent. You cannot select both Fabric Manager Installation and Remote Fabric Agent Deployment at the same time; you can only select one option. Click Next.

Figure 4-111 Fabric Manager installation


5. The Welcome panel (Figure 4-112) opens. Click Next.

Figure 4-112 IBM TotalStorage Productivity Center for Fabric: Welcome information

6. The next window that opens prompts you to confirm the target directory (see Figure 4-113). Enter the directory path or accept the default directory. Click Next.

Figure 4-113 IBM TotalStorage Productivity Center for Fabric installation directory


7. In the next panel (see Figure 4-114), you specify the port number. This is a range of 25 port numbers for use by IBM TotalStorage Productivity Center for Fabric. The first port number that you specify is considered the primary port number, and you only need to enter the primary port number. The primary port number and the next 24 numbers are reserved for use by IBM TotalStorage Productivity Center for Fabric. For example, if you specify port number 9550, IBM TotalStorage Productivity Center for Fabric uses port numbers 9550 through 9574. Ensure that the port numbers you use are not used by other applications at the same time.

To determine which port numbers are in use on a particular computer, type either of the following commands from a command prompt:

netstat -a
netstat -an

We recommend that you use the first of these two commands. The port numbers in use on the system are listed in the Local Address column of the output. This column has the format host:port. Enter the primary port number and click Next. (A port-check sketch follows Figure 4-114.)

Figure 4-114 IBM TotalStorage Productivity Center for Fabric port number
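To check a specific port rather than scanning the full netstat listing, you can filter the output with findstr. The sketch below assumes the example primary port 9550 used above; because findstr matches substrings, a hit on a longer port number (for example 95500) would also be reported, so review any matches before deciding.

   netstat -an | findstr ":9550"

If the command returns no lines, port 9550 is not currently in use; repeat the check for other ports in the reserved range if you want to be thorough.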


8. As shown in Figure 4-115, select the database repository, either DB2 or Cloudscape. If you select DB2, you must have previously installed DB2 on the server. DB2 is the recommended installation option. Click Next.

Figure 4-115 IBM TotalStorage Productivity Center for Fabric database selection type

9. In the next panel (see Figure 4-116), select the WebSphere Application Server to use in the installation. WebSphere Application Server was installed as part of the prerequisite software, so we chose the Non Embedded (Full) WebSphere Application Server option. If the Fabric Manager is to be installed standalone on a server, choose the Embedded WebSphere Application Server - Express option. Click Next.

Figure 4-116 Productivity Center for Fabric WebSphere Application Server type selection


10. The Single/Multiple User ID/Password Choice panel (see Figure 4-117) opens. If you selected DB2 as your database, you see this panel. It allows you to use the DB2 administrative user ID and password for DB2, WebSphere, Host Authentication, and NetView. If you select all the boxes, you are prompted only for the DB2 user ID and password, which is then used for all instances. In our install we selected only DB2 and NetView; a different user ID and password will be used for WebSphere and Host Authentication. Click Next.

Note: If you selected IBM Cloudscape as your database, this panel is not displayed.

Figure 4-117 IBM TotalStorage Productivity Center for Fabric user and password options


11. The DB2 Administrator user ID and password panel (Figure 4-118) opens. If you selected DB2 as your database, you see this panel. It allows you to specify the DB2 administrative user ID and password. In this example we used the user ID and password specified during the DB2 installation in Figure 4-8 on page 90. Enter the required user ID and password and click Next. The installer verifies that the user ID entered exists.

Figure 4-118 IBM TotalStorage Productivity Center for Fabric database user information


12. In the next window that opens (see Figure 4-119), type the name of the new database in the Type database name field or accept the default. In our install we accepted the default database name. Click Next.

Note: The database name must be unique. You cannot share the IBM TotalStorage Productivity Center for Fabric database with any other applications.

Figure 4-119 IBM TotalStorage Productivity Center for Fabric database name

13.Since we did not check the box for WebSphere in Figure 4-117 on page 164, the panel in Figure 4-120 on page 167 opens prompting for a WebSphere user ID and password. We used the tpcadmin user ID, which is what we used for the IBM Director service account (refer to Figure 4-46 on page 118). Enter the required information and click Next.


Figure 4-120 WebSphere Application Server user ID and password

14.Since we also did not check the box for Host Authentication (Figure 4-117 on page 164), the following panel (Figure 4-121) opens. Enter the password for Host Authentication. This password is used by the Fabric agents. Click Next.

Figure 4-121 Host Authentication password

15.In the window (Figure 4-122 on page 168) that opens, enter the parameters for the Tivoli NetView drive name. Click Next.


Figure 4-122 IBM TotalStorage Productivity Center for Fabric database drive information

16. The Agent Manager Information panel (Figure 4-123 on page 169) opens. You must complete the following fields (see also the inspection sketch after Figure 4-123):

Agent manager name or IP address: This is the host name or IP address of your Agent Manager.

Agent manager registration port: This is the port number of your Agent Manager. The default value is 9511.

Agent Manager public port: This is a public port. The default value is 9513.

Agent registration password (twice): This is the password used to register the common agent with the Agent Manager, as shown in Figure 4-27 on page 105. If the password is not set and the default is accepted, the password is changeMe. This password is case sensitive. The agent registration password resides in the AgentManager.properties file where the Agent Manager is installed. It is located in the following directory:
%WSAS_INSTALL_ROOT%\InstalledApps\<cell>\AgentManager.ear\AgentManager.war\WEB-INF\classes\resource

Resource manager registration user ID: This is the user ID used to register the resource manager with the Agent Manager. The default is manager. The Resource Manager registration user ID and password reside in the Authorization.xml file where the Agent Manager is installed. It is located in the following directory:
<Agent_Manager_install_dir>\config

Resource manager registration password (twice): This is the password used to register the resource manager with the Agent Manager. The default is password.

Fill in the information and click Next.


Figure 4-123 IBM TotalStorage Productivity Center for Fabric Agent Manager information
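If you need to confirm the registration values rather than rely on the defaults, you can display the two files mentioned above from a command prompt on the Agent Manager server. The paths are the ones given in the field descriptions; <cell>, <Agent_Manager_install_dir>, and %WSAS_INSTALL_ROOT% are placeholders that you must replace with the values from your installation.

   type "%WSAS_INSTALL_ROOT%\InstalledApps\<cell>\AgentManager.ear\AgentManager.war\WEB-INF\classes\resource\AgentManager.properties"
   type "<Agent_Manager_install_dir>\config\Authorization.xml"

The first file contains the agent registration password, and the second contains the resource manager registration user ID and password.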

17.The next panel (Figure 4-124) that opens provides information about the location and size of IBM TotalStorage Productivity Center for Fabric - Manager. Click Next.

Figure 4-124 IBM TotalStorage Productivity Center for Fabric installation information

18. You see the Status panel. The installation can take about 15 to 20 minutes to complete.

19. When the installation has completed, you see a panel indicating that the wizard successfully installed the Fabric Manager (see Figure 4-125 on page 170). Click Next.


Figure 4-125 IBM TotalStorage Productivity Center for Fabric installation status

20.In the next panel (see Figure 4-126), you are prompted to restart your computer. Select No, I will restart my computer later because you do not want to restart your computer now. Click Finish to complete the installation.

Figure 4-126 IBM TotalStorage Productivity Center for Fabric restart options

21.The Install Status panel (see Figure 4-127 on page 171) opens. It indicates that the Productivity Center for Fabric installation was successful. Click Next.


Figure 4-127 IBM TotalStorage Productivity Center installation information

4.3.7 IBM TotalStorage Productivity Center for Data


Prior to installing IBM TotalStorage Productivity Center for Data, you need to complete several prerequisite tasks. These tasks are described in detail in 3.12, IBM TotalStorage Productivity Center for Data on page 78. Specifically, you must complete the tasks in the following sections:
3.12.1, Server recommendations on page 78
3.12.2, Supported subsystems and databases on page 78
3.12.3, Security considerations on page 79
3.12.4, Creating the DB2 database on page 81

The IBM TotalStorage Productivity Center for Data database needs to be created before you begin the installation. This section provides an overview of the steps you need to perform when installing IBM TotalStorage Productivity Center for Data.

Important: Make sure that the Tivoli Agent Manager service is started before you begin the installation.

You see the panel indicating that the installation of Productivity Center for Data - Manager is about to begin (see Figure 4-128 on page 172). Click Next to begin the installation.


Figure 4-128 IBM TotalStorage Productivity Center for Data installation information

1. In the window that opens, you are prompted to enter the install package location for IBM TotalStorage Productivity Center for Data (see Figure 4-129). Enter the appropriate information and click Next.

Figure 4-129 Productivity Center for Data install package location

2. The next window that opens indicates that the IBM TotalStorage Productivity Center for Data installer wizard will be launched (see Figure 4-130 on page 173). Click Next.


Figure 4-130 Productivity Center for Data installer

3. In the next panel (see Figure 4-131), select Install Productivity Center for Data and click Next.

Figure 4-131 IBM TotalStorage Productivity Center for Data install window


4. Read the License Agreement shown in Figure 4-132. Indicate your acceptance of the agreement by selecting the I have read and AGREE to abide by the license agreement above check box. Then click Next.

Figure 4-132 IBM TotalStorage Productivity Center for Data license agreement

5. The next panel asks you to confirm that you read the license agreement (see Figure 4-133). Click Yes to indicate that you have read and accepted the license agreement.

Figure 4-133 Confirmation the Productivity Center for Data license agreement has been read


6. The next window shown in Figure 4-134 allows you to choose the type of installation that you are performing. Select The Productivity Center for Data Server and an Agent on this machine. This installs the server, agent, and user interface components on the machine where the installation program is running. You must install the server on at least one machine within your environment. Click Next.

Figure 4-134 IBM TotalStorage Productivity Center for Data selection options


7. Review and enter the license key for the appropriate functions if required. See Figure 4-135. Click Next.

Figure 4-135 IBM TotalStorage Productivity Center for Data license key information


8. The installation program validates the license key and you are asked to select the relational database management system (RDBMS) that you want to host the Data Manager repository. See Figure 4-136. The repository is a set of relational database tables where Data Manager builds a database of statistics to keep track of your environment. For our installation, we select IBM DB2 UDB. Click Next.

Figure 4-136 IBM TotalStorage Productivity Center for Data database selection

9. The Create Service Account panel opens to create the TSRMsrv1 local account. Click Yes.

Figure 4-137 Create Service Account


10. In the next window (see Figure 4-138), complete these tasks:
a. Select the database that was created as a prerequisite. Refer to 3.12.4, Creating the DB2 database on page 81.
b. Fill in the required user ID and password. This is the DB2 user ID and password defined previously.
c. Click Next.

Figure 4-138 IBM TotalStorage Productivity Center for Data database selection option

11. The Repository Creation Parameters panel (see Figure 4-139 on page 179) for UDB opens. On this panel you can specify the database schema and tablespace name. If you are using DB2 as the repository, you can also choose how you will manage the database space:

System Managed (SMS): This option indicates that the space is managed by the operating system. In this case you specify the Container Directory, which is then managed by the system, and can grow as large as the free space on the file system.

Tip: If you do not have in-house database skills, the System Managed approach is recommended.

Database Managed (DMS): This option means that the space is managed by the database. In this case you need to specify the Container Directory, Container File, and Size fields. The Container File specifies a file name for the repository, and Size is the predefined space for that file. You can later change this by using the ALTER TABLESPACE command (a command sketch follows Figure 4-139).

We accepted the defaults.


Tip: We recommend that you use meaningful names for the Container Directory and Container File at installation. This can help you later if you need to find the Container File.

Enter the necessary information and click Next.

Figure 4-139 IBM TotalStorage Productivity Center for Data repository information
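As an illustration of changing a DMS container later with ALTER TABLESPACE, the following commands can be run from a DB2 command window. This is a hedged sketch: the database name TPCDATA, tablespace name TPCTBSP, container file path, and the 100 MB extension are all example values; use the names and sizes you actually chose on this panel.

   db2 CONNECT TO TPCDATA
   db2 "ALTER TABLESPACE TPCTBSP EXTEND (FILE 'C:\tpcdata\tpcdata01.dat' 100 M)"
   db2 CONNECT RESET

The EXTEND clause adds the specified amount of space to the existing container file.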


12.The Productivity Center for Data Parameters panel (Figure 4-140) opens. Use the Agent Manager Parameters window (Figure 4-141 on page 182) to provide information about the Agent Manager installed in your environment. Click Next.

Figure 4-140 IBM TotalStorage Productivity Center for Data installation parameters


13. The Agent Manager Parameters panel (Figure 4-141 on page 182) provides information about the Agent Manager installed in your environment. Table 4-1 describes the fields in the panel.

Table 4-1 Agent Manager Parameters descriptions
Hostname: Enter the fully qualified network name or IP address of the Agent Manager server as seen by the agents.
Registration Port: Enter the port number of the Agent Manager. The default is 9511.
Public Port: Enter the public port for Agent Manager. The default is 9513.
Resource Manager Username: Enter the Agent Manager user ID. This is the user ID used to register the common agent with the Agent Manager. The default is manager.
Resource Manager Password: Enter the password used to register the resource manager with the Agent Manager. The default is password.
Agent Registration password: Enter the password used to register the common agent with the Agent Manager. This is the password that was set during the Tivoli Agent Manager installation (Figure 4-27 on page 105). The default password is changeMe. The password is stored in the AgentManager.properties file, in the %install dir%\AgentManager\image directory.

Click Next.

Note: If an error is displayed during this part of the installation, verify that the agentTrust.jks file was copied across and verify the Agent Registration password.


Figure 4-141 IBM TotalStorage Productivity Center for Data Agent Manager install information


14. Use the NAS Discovery Parameters panel in Figure 4-142 to configure Data Manager for use with any network-attached storage (NAS) devices in your environment. You can leave the fields blank if you do not have any NAS devices. Click Next.

Figure 4-142 IBM TotalStorage Productivity Center for Data NAS options


15.The Space Requirements panel for the Productivity Center for Data Server (Figure 4-143) opens. Enter the directory path or accept the default directory. If the current disk or device does not have enough space for the installation, then you can enter a different location for the installation in the Choose the installation directory field. Or you can click Browse to browse your system for an available and appropriate space. The default installation directory is C:\Program Files\IBM\TPC\Data. Click Next.

Figure 4-143 IBM TotalStorage Productivity Center for Data installation destination options

16.Confirm the path for installing the Productivity Center for Data Server as shown in Figure 4-144. At this point, the installation process has gathered all of the information that is needed to perform the installation. Click OK.

Figure 4-144 IBM TotalStorage Productivity Center for Data Server destination path confirmation


17.Review and change the Productivity Center for Data Agent Parameters (see Figure 4-145) as required. We recommend that you accept the defaults. Click Next.

Figure 4-145 IBM TotalStorage Productivity Center for Data agent parameters


18.The Windows Service Account panel shown in Figure 4-146 opens. Choose Create a local account for the agent to run under and click Next.

Figure 4-146 Windows Service Account


19.The Space Requirements panel (see Figure 4-147) opens for the Productivity Center for Data Agent. Enter the directory path or accept the default directory. If the current disk or device does not have enough space for the installation, then you can enter a different location for the installation in the Choose the Common Agent installation directory field. Or you can click Browse to browse your system for an available and appropriate space. The default installation directory is C:\Program Files\Tivoli\ep. Click Next.

Figure 4-147 IBM TotalStorage Productivity Center for Data common agent installation information

20.When you see a message similar to the one in Figure 4-148, confirm the path where Productivity Center for Data Agent is to be installed. At this point, the installation process has gathered all of the information necessary to perform the installation. Click OK.

Figure 4-148 IBM TotalStorage Productivity Center for Data Agent destination path confirmation


21.When you see a window similar to the example in Figure 4-149, review the choices that you have made. Then click Next.

Figure 4-149 IBM TotalStorage Productivity Center for Data preview options


22.A window opens that tracks the progress of the installation (see Figure 4-150).

Figure 4-150 IBM TotalStorage Productivity Center for Data installation information


23.When the installation is done, the progress window shows a message indicating that the installation completed successfully (see Figure 4-151). Review this panel and click Done.

Figure 4-151 IBM TotalStorage Productivity Center for Data success information

24.The Install Status panel opens showing the message The Productivity Center for Data installation was successful. Click Next to complete the installation.

Figure 4-152 Install Status for Productivity Center for Data successful


Chapter 5. CIMOM install and configuration


This chapter provides a step-by-step guide to configuring the Common Information Model Object Manager (CIMOM), the LSI Provider, and the Service Location Protocol (SLP) components that are required to use the IBM TotalStorage Productivity Center.


5.1 Introduction
After you have completed the installation of TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, TotalStorage Productivity Center for Fabric, or TotalStorage Productivity Center for Data, you need to install and configure the Common Information Model Object Manager (CIMOM) and Service Location Protocol (SLP) agents.

Note: For the remainder of this chapter, we refer to TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, TotalStorage Productivity Center for Fabric, and TotalStorage Productivity Center for Data simply as the TotalStorage Productivity Center.

The TotalStorage Productivity Center uses SLP as the method for CIM clients to locate managed objects. The managed devices may have built-in or external CIM agents. When a CIM agent implementation is available for a supported device, the device can be accessed and configured by management applications using industry-standard XML-over-HTTP transactions.

In this chapter we describe the steps for:
- Planning considerations for Service Location Protocol (SLP)
- SLP configuration recommendations
- General performance guidelines
- Planning considerations for CIMOM
- Installing and configuring the CIM agent for Enterprise Storage Server and DS6000/DS8000
- Verifying the connection to the ESS
- Verifying the connection to the DS6000/DS8000
- Setting up the Service Location Protocol Directory Agent (SLP DA)
- Installing and configuring the CIM agent for the DS4000 family
- Configuring the CIM agent for SAN Volume Controller

5.2 Planning considerations for Service Location Protocol


The Service Location Protocol (SLP) has three major components: the Service Agent (SA), the User Agent (UA), and the Directory Agent (DA). The SA and UA are required components; the DA is optional. You may need to decide whether to use an SLP DA in your environment based on the considerations described below.

5.2.1 Considerations for using SLP DAs


Consider using a DA to reduce the amount of multicast traffic involved in service discovery. In a large network with many UAs and SAs, the amount of multicast traffic involved in service discovery can become so large that network performance degrades.


By deploying one or more DAs, UAs unicast their service requests to the DAs, and SAs register with the DAs using unicast. The only SLP-related multicast in a network with DAs is for active and passive DA discovery. SAs register automatically with any DAs they discover within a set of common scopes. Consequently, DAs within the UAs' scopes reduce multicast. By eliminating multicast for normal UA requests, delays and time-outs are eliminated. DAs act as a focal point for SA and UA activity. Deploying one or several DAs for a collection of scopes provides a centralized point for monitoring SLP activity.

Consider using DAs in your enterprise if any of the following conditions are true:
- Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by snoop.
- UA clients experience long delays or time-outs during multicast service requests.
- You want to centralize monitoring of SLP service advertisements for particular scopes on one or several hosts. You can deploy any number of DAs for a particular scope or scopes, depending on the need to balance the load.
- Your network does not have multicast enabled and consists of multiple subnets that must share services.

The configuration of an SLP DA is particularly recommended when there are more than 60 SAs that need to respond to any given multicast service request.

5.2.2 SLP configuration recommendation


This section provides configuration recommendations for enabling TotalStorage Productivity Center to discover a larger set of storage devices. These recommendations cover some of the more common SLP configuration problems and discuss router configuration and SLP directory agent configuration.

Router configuration
Configure the routers in the network to enable general multicasting or to allow multicasting for the SLP multicast address and port, 239.255.255.253, port 427. The routers of interest are those associated with subnets that contain one or more storage devices that are to be discovered and managed by TotalStorage Productivity Center. To configure your router hardware and software, refer to your router reference and configuration documentation.

Attention: Routers are sometimes configured to prevent multicast packets from passing between subnets. Routers configured this way prevent discovery of systems between subnets using multicasting. Routers can also be configured to restrict the minimum multicast TTL (time-to-live) for packets they pass between subnets, which can result in the need to set the multicast TTL higher to discover systems on the other subnets of the router. The multicast TTL controls the time-to-live for the multicast discovery packets. This value typically corresponds to the number of times a packet is forwarded between subnets, allowing control of the scope of subnets discovered. Multicast discovery does not discover Director V1.x systems or systems using TCP/IP protocol stacks that do not support multicasting (for example, some older Windows 3.x and Novell 3.x TCP/IP implementations).


SLP directory agent configuration


Configure the SLP directory agents (DAs) to circumvent the multicast limitations. With statically configured DAs, all service requests are unicast by the user agent. Therefore, it is possible to configure one DA for each subnet that contains storage devices that are to be discovered by TotalStorage Productivity Center. One DA is sufficient for each subnet. Each of these DAs can discover all services within its own subnet, but no services outside its own subnet. To allow TotalStorage Productivity Center to discover all of the devices, it must be statically configured with the addresses of each of these DAs. This can be accomplished from TotalStorage Productivity Center; the setup is described in Configuring TotalStorage Productivity Center for SLP discovery on page 223.

5.3 General performance guidelines


Here are some general performance considerations for configuring the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication environment:
- Do not overpopulate the SLP discovery panel with SLP agent hosts. Remember that TotalStorage Productivity Center for Disk includes a built-in SLP User Agent (UA) that receives information about SLP Service Agents and Directory Agents (DAs) that reside in the same subnet as the TotalStorage Productivity Center for Disk installation. You should have no more than one DA per subnet.
- Misconfiguring the IBM Director discovery preferences may impact performance during auto discovery or device presence checking. It may also result in application time-outs as attempts are made to resolve and communicate with hosts that are not available.
- Consider it mandatory to run the ESS CLI, the ESS CIM agent or DS CIM agent, and the LSI Provider software on another host of comparable size to the main TotalStorage Productivity Center server. Attempting to run a full TotalStorage Productivity Center implementation (Disk Manager, Data Manager, Fabric Manager, Replication Manager, DB2, IBM Director, and the WebSphere Application Server) on the same host as the CIM agent will result in dramatically increased wait times for data retrieval. You may also experience resource contention and port conflicts.

5.4 Planning considerations for CIMOM


The CIM agent includes a CIM Object Manager (CIMOM), which adapts various devices using a plug-in called a provider. The CIM agent can work as a proxy or can be embedded in storage devices. When the CIM agent is installed as a proxy, the IBM CIM agent can be installed on the same server that supports the device user interface. Figure 5-1 on page 195 shows an overview of the CIM agent.


Figure 5-1 CIM Agent overview

You may plan to install the CIM agent code on the same server that hosts the device management interface, or you may install it on a separate server.

Attention: At this time, only a few devices come with an integrated CIM agent; most devices need an external CIMOM so that CIM-enabled management applications (CIM clients) can communicate with the device.

For ease of installation, IBM provides an ICAT (short for Integrated Configuration Agent Technology), which is a bundle that mainly includes the CIMOM, the device provider, and an SLP SA.

5.4.1 CIMOM configuration recommendations


The following recommendations are based on our experience in the ITSO lab environment:
- The CIMOM agent code that you are planning to use must be supported by the installed version of TotalStorage Productivity Center. Refer to the following link for the latest updates:
http://www-1.ibm.com/servers/storage/support/software/tpc/

- You must have the CIMOM-supported firmware level on the storage devices. If you have an incorrect version of firmware, you may not be able to discover and manage the storage devices.
- The data traffic between the CIMOM agent and the device can be very high, especially during performance data collection. Therefore, we recommend a dedicated server for the CIMOM agent, although you may configure the same CIMOM agent for multiple devices of the same type.
- You may also plan to locate this server within the same data center where the storage devices are located. This is in consideration of firewall port requirements: typically, it is best practice to minimize firewall port openings between the data center and the external network. If you consolidate the CIMOM servers within the data center, you may be able to limit the firewall ports that must be opened to those needed for TotalStorage Productivity Center communication with the CIMOMs.
- Co-location of CIM agent instances of differing types on the same server is not recommended because of resource contention.
- We strongly recommend separate, dedicated servers for the CIMOM agents and for TotalStorage Productivity Center. This is due to resource contention, TCP/IP port requirements, and system services co-existence.


5.5 Installing CIM agent for ESS


Before starting Multiple Device Manager discovery, you must first configure the Common Information Model Object Manager (CIMOM) for ESS. The ESS CIM Agent package is made up of the following parts (see Figure 5-2).

Figure 5-2 ESS CIM Agent Package

This section provides an overview of the installation and configuration of the ESS CIM Agent on a Windows 2000 Advanced Server operating system.

5.5.1 ESS CLI Install


The following installation and configuration tasks are listed in the order in which they should be performed. Before you install the DS CIM Agent, you must install the IBM TotalStorage Enterprise Storage System Command Line Interface (ESS CLI) if you plan to manage 2105-F20s or 2105-800s with this CIM agent. The DS CIM Agent installation program checks your system for the existence of the ESS CLI and displays the warning shown in Figure 5-16 on page 205 if no valid ESS CLI is found.

Attention: If you are upgrading from a previous version of the ESS CIM Agent, you must uninstall the ESS CLI software that was required by the previous CIM Agent and reinstall the latest ESS CLI software. The minimum required ESS CLI level is 2.4.0.236.

Perform the following steps to install the ESS CLI for Windows:
1. Insert the ESS CLI CD in the CD-ROM drive, run the setup program, and follow the instructions as shown in Figure 5-3 on page 197 through Figure 5-11 on page 201.
Note: The ESS CLI installation wizard detects whether an earlier level of the ESS CLI software is installed on your system and uninstalls it. After you uninstall the previous version, you must restart the ESS CLI installation program to install the current level of the ESS CLI.


Figure 5-3 ESS CLI InstallShield Wizard I

2. Select I accept the terms of the license agreement and click Next.

Figure 5-4 ESS CLI License agreement

3. Click Next.


Figure 5-5 ESS CLI choose target system panel

4. Click Next.

Figure 5-6 ESS CLI Setup Status panel

5. Click Next.


Figure 5-7 ESS CLI selected options summary

Figure 5-8 ESS CLI Installation Progress

6. Click Next.


Figure 5-9 ESS CLI installation complete panel

7. Read the information and click Next.

Figure 5-10 ESS CLI Readme

8. Reboot your system before proceeding with the ESS CIM Agent installation. You must do this because the ESS CLI depends on environment variable settings that will not be in effect for the ESS CIM Agent, which runs as a service, until you reboot the system.


Figure 5-11 ESS CLI Reboot panel

9. Verify that the ESS CLI is installed: Click Start → Settings → Control Panel. Double-click the Add/Remove Programs icon. Verify that there is an IBM ESS CLI entry.
10. Verify that the ESS CLI is operational and can connect to the ESS. For example, from a command prompt window, issue the following command:
esscli -u userid -p password -s 9.1.11.111 list server
Where:
- 9.1.11.111 represents the IP address of the Enterprise Storage Server
- userid represents the Enterprise Storage Server Specialist user name
- password represents the Enterprise Storage Server Specialist password for the user name
Figure 5-12 shows the response from the esscli command.

Figure 5-12 ESS CLI verification


5.5.2 DS CIM Agent install


To install the DS CIM Agent on your Windows system, perform the following steps:
1. Log on to your system as the local administrator.
2. Insert the CIM Agent for DS CD into the CD-ROM drive. The Install Wizard launchpad should start automatically if autorun mode is set on your system. You should see a launchpad window similar to Figure 5-13.
3. You may review the Readme file from the launchpad menu. Then click Installation Wizard. The Installation Wizard starts by executing the setup.exe program and shows the Welcome panel in Figure 5-14 on page 203.
Note: The DS CIM Agent program should start within 15 - 30 seconds if you have autorun mode set on your system. If the installer window does not open, use a Command Prompt or Windows Explorer to change to the Windows directory on the CD. If you are using a Command Prompt window, run launchpad.bat. If you are using Windows Explorer, double-click the launchpad.bat file.
Note: If you are using CIMOM code from the IBM download Web site and not from the distribution CD, make sure that you use a short Windows directory path name. Running launchpad.bat from a longer path name may fail. An example of a short path name is C:\CIMOM\setup.exe.

Figure 5-13 DS CIM Agent launchpad


4. The Welcome window opens suggesting what documentation you should review prior to installation. Click Next to continue (see Figure 5-14).

Figure 5-14 DS CIM Agent welcome window


5. The License Agreement window opens. Read the license agreement information. Select I accept the terms of the license agreement, then click Next to accept the license agreement (see Figure 5-15).

Figure 5-15 DS CIM Agent license agreement


The window shown in Figure 5-16 appears only if no valid ESS CLI is installed. If you do not plan to manage an ESS from this CIM agent, click Next.
Important: If you plan to manage an ESS from this CIM agent, click Cancel. Install the ESS CLI following the instructions in 5.5.1, ESS CLI Install on page 196.

Figure 5-16 DS CIM Agent ESS CLI warning


6. The Destination Directory window opens. Accept the default directory and click Next (see Figure 5-17).

Figure 5-17 DS CIM Agent destination directory panel


7. The Updating CIMOM Port window opens (see Figure 5-18). Click Next to accept the default port if it is available and free in your environment. For our ITSO setup we used the default port 5989.
Note: If the default port is already in use, modify the default port and click Next. Use the following command to check which ports are in use:
netstat -a
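For example, to check specifically whether the default CIMOM port is already taken, you can filter the netstat output; the port 5989 shown here is only an illustration, so substitute the port you intend to use:

netstat -an | findstr ":5989"

If the command returns no output, the port is free and can be assigned to the CIMOM.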

Figure 5-18 DS CIM Agent port window


8. The Installation Confirmation window opens (see Figure 5-19). Click Install to confirm the installation location and file size.

Figure 5-19 DS CIM Agent installation confirmation


9. The Installation Progress window opens (see Figure 5-20) indicating how much of the installation has completed.

Figure 5-20 DS CIM Agent installation progress

10.When the Installation Progress window closes, the Finish window opens (see Figure 5-21 on page 210). Check the View post installation tasks check box if you want to view the post installation tasks readme when the wizard closes. We recommend you review the post installation tasks. Click Finish to exit the installation wizard (Figure 5-21 on page 210). Note: Before proceeding, you might want to review the log file for any error messages. The log file is located in xxx\logs\install.log, where xxx is the destination directory where the DS CIM Agent for Windows is installed.


Figure 5-21 DS CIM Agent install successful

11.If you checked the view post installation tasks box, then the window shown in Figure 5-22 appears. Close the window when you have finished reviewing the post installation tasks.

Figure 5-22 DS CIM Agent post install readme

The launchpad window (Figure 5-13 on page 202) appears. Click Exit.


5.5.3 Post Installation tasks


Continue with the following post installation tasks for the ESS CIM Agent.

Verify the installation of the SLP


Proceed as follows: Verify that the Service Location Protocol is started. Select Start → Settings → Control Panel. Double-click the Administrative Tools icon, then double-click the Services icon. Find Service Location Protocol in the Services window list. For this component, the Status column should show Started, as shown in Figure 5-23.

Figure 5-23 Verify Service Location Protocol started

If SLP is not started, right-click Service Location Protocol and select Start from the pop-up menu. Wait for the Status column to change to Started.
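As a quick alternative to the Services panel, you can also check from a command prompt whether the service is running; this is a minimal sketch that simply searches the list of started services for the display name shown above:

net start | findstr /C:"Service Location Protocol"

If the name appears in the output, SLP is started.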

Verify the installation of the DS CIM Agent


Proceed as follows: Verify that the CIMOM service is started. If you closed the Services window, select Start → Settings → Control Panel. Double-click the Administrative Tools icon, then double-click the Services icon. Find IBM CIM Object Manager - ESS in the Services window list. For this component, the Status column should show Started and the Startup Type column should show Automatic, as shown in Figure 5-24 on page 212.


Figure 5-24 DS CIM Object Manager started confirmation

If the IBM CIM Object Manager is not started, right-click IBM CIM Object Manager - ESS and select Start from the pop-up menu. Wait for the Status column to change to Started. If you are able to perform all of the verification tasks successfully, the ESS CIM Agent has been successfully installed on your Windows system. Next, perform the configuration tasks.

5.6 Configuring the DS CIM Agent for Windows


This task configures the DS CIM Agent after it has been successfully installed. Configure the ESS CIM Agent with the information for each Enterprise Storage Server that the ESS CIM Agent is to access: select Start → Programs → IBM TotalStorage CIM Agent for ESS → CIM agent for the IBM TotalStorage DS Open API → Enable DS Communications, as shown in Figure 5-25.

Figure 5-25 Configuring the ESS CIM Agent

5.6.1 Registering DS Devices


Type the following command for each DS server that is configured:
addessserver <ip> <user> <password>
Where:
- <ip> represents the IP address of the Enterprise Storage Server
- <user> represents the DS Storage Server user name
- <password> represents the DS Storage Server password for the user name
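As an illustration only (the IP address and credentials below are placeholders, not values from our lab), a registration command might look like this:

addessserver 9.1.38.50 dsadmin passw0rd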

Repeat the previous step for each additional DS device that you want to configure.
Note: The CIMOM collects and caches the information from the defined DS servers at startup time; therefore, the next start of the CIMOM might take longer.

Attention: If the user name and password entered are incorrect, or the DS CIM agent cannot connect to the DS, this causes an error and the DS CIM Agent will not start and stop correctly. Use the following command to remove the entry that is causing the problem, and then reboot the server:
rmessserver <ip>
Whenever you add or remove a DS from the CIMOM registration, you must restart the CIMOM to pick up the updated DS device list.

5.6.2 Registering ESS Devices


Proceed as follows: Type the addess <ip> <user> <password> command for each ESS, where:
- <ip> represents the IP address of a cluster of the Enterprise Storage Server
- <user> represents the Enterprise Storage Server Specialist user name
- <password> represents the Enterprise Storage Server Specialist password for the user name
An example of the addess command is shown in Figure 5-26 on page 214.

Important: The DS CIM agent relies on ESS CLI connectivity from the DS CIMOM server to the ESS devices. Make sure that the ESS devices you are registering are reachable and available at this point. We recommend verifying this by launching the ESS Specialist browser from the ESS CIMOM server; log on to both clusters of each ESS and make sure you can authenticate with the correct ESS passwords and IP addresses. If the ESSs are on a different subnet than the ESS CIMOM server and behind a firewall, you must authenticate through the firewall before registering the ESS with the CIMOM. If you have a bi-directional firewall between the ESS devices and the CIMOM server, verify the connection using the rsTestConnection command of the ESS CLI code. If the ESS CLI connection is not successful, you must authenticate through the firewall in both directions, that is, from the ESS to the CIMOM server and also from the CIMOM server to the ESS. Once you are able to authenticate and receive an ESS CLI heartbeat from all the ESSs successfully, you may proceed to enter the ESS IP addresses. If the CIMOM agent fails to authenticate with the ESSs, it will not start up properly and may be very slow, because it retries the authentication.


Figure 5-26 The addess command example

5.6.3 Register ESS server for Copy services


Type the following command for each ESS server that is configured for Copy Services:
addessserver <ip> <user> <password>
Where:
- <ip> represents the IP address of the Enterprise Storage Server
- <user> represents the Enterprise Storage Server Specialist user name
- <password> represents the Enterprise Storage Server Specialist password for the user name
Repeat the previous step for each additional ESS device that you want to configure. Close the setdevice interactive session by typing exit. After you have defined all of the ESS servers, you must stop and restart the CIMOM so that it initializes the information for the ESS servers.
Note: The CIMOM collects and caches the information from the defined ESS servers at startup time; therefore, the next start of the CIMOM might take longer.

Attention: If the user name and password entered are incorrect, or the ESS CIM agent cannot connect to the ESS, this causes an error and the ESS CIM Agent will not start and stop correctly. Use the following command to remove the ESS entry that is causing the problem, and then reboot the server:
rmessserver <ip>
Whenever you add or remove an ESS from the CIMOM registration, you must restart the CIMOM to pick up the updated ESS device list.


5.6.4 Restart the CIMOM


Perform these steps to use the Windows Start menu to stop and restart the CIMOM. This is required so that the CIMOM can register new devices or unregister deleted devices: Stop the CIMOM by selecting Start → Programs → CIM Agent for the IBM TotalStorage DS Open API → Stop CIMOM service. A Command Prompt window opens to track the stoppage of the CIMOM (as shown in Figure 5-27). If the CIMOM has stopped successfully, the following message is displayed:

Figure 5-27 Stop ESS CIM Agent

Restart the CIMOM by selecting Start → Programs → CIM Agent for the IBM TotalStorage DS Open API → Start CIMOM service. A Command Prompt window opens to track the progress of starting the CIMOM. If the CIMOM has started successfully, the message shown in Figure 5-28 is displayed.

Figure 5-28 Restart ESS CIM Agent

Note: Restarting the CIMOM may take a while because it connects to the defined ESS servers and caches that information for future use.
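If you prefer the command line, the same stop and restart can usually be done through the Windows service control commands; this is a minimal sketch assuming the service display name shown earlier in the Services panel (IBM CIM Object Manager - ESS), not a documented Start menu shortcut:

net stop "IBM CIM Object Manager - ESS"
net start "IBM CIM Object Manager - ESS"

The startcimom.bat command mentioned later in this chapter is another way to start the CIMOM from a command prompt.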

5.6.5 CIMOM user authentication


Use the setuser interactive tool to configure the CIMOM for the users who will have the authority to use the CIMOM. The user is the TotalStorage Productivity Center for Disk and Replication superuser.

Important: A TotalStorage Productivity Center for Disk and Replication superuser ID and password must be created. This user ID is initially used by TotalStorage Productivity Center to connect to the CIM Agent. It is easiest if this superuser ID is used for all CIM Agents, but it can be set individually for each CIM Agent if necessary. This user ID should be eight characters or fewer.

Upon installation of the CIM Agent for ESS, the provided default user name is superuser with a default password of passw0rd. The first time you use the setuser tool, you must use this user name and password combination. Once you have defined other user names, you can start the setuser command by specifying other defined CIMOM user names.


Note: The users that you configure to have authority to use the CIMOM are uniquely defined to the CIMOM software and have no required relationship to operating system user names, ESS Specialist user names, or ESS Copy Services user names.

Here is the procedure:
1. Open a Command Prompt window and change to the ESS CIM Agent directory, for example C:\Program Files\IBM\cimagent.
2. Type the command setuser -u superuser -p passw0rd at the command prompt to start the setuser interactive session to identify users to the CIMOM.
3. Type the command adduser cimuser cimpass in the setuser interactive session to define new users, where:
- cimuser represents the new user name to access the ESS CIM Agent CIMOM
- cimpass represents the password for the new user name to access the ESS CIM Agent CIMOM
4. Close the setuser interactive session by typing exit.
For our ITSO lab setup we used TPCCIMOM as the superuser and TPCCIMOM as the password.
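Putting these steps together, a sample session might look like the following sketch; the adduser values are whatever superuser ID and password you choose for your environment (we used TPCCIMOM for both in the ITSO lab), and the adduser and exit commands are typed inside the setuser interactive session:

cd "C:\Program Files\IBM\cimagent"
setuser -u superuser -p passw0rd
adduser TPCCIMOM TPCCIMOM
exit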

5.7 Verifying connection to the ESS


During this task, the ESS CIM Agent software connectivity to the Enterprise Storage Server (ESS) is verified. The connection to the ESS is made through the ESS CLI software. If the network connectivity fails, or if the user name and password that you set in the configuration task are incorrect, the ESS CIM Agent cannot connect successfully to the ESS. The installation, verification, and configuration of the ESS CIM Agent must be completed before you verify the connection to the ESS.
- Verify that you have network connectivity to the ESS from the system where the ESS CIM Agent is installed. Issue a ping command to the ESS and check that you can see reply statistics from the ESS IP address.
- Verify that SLP is active by selecting Start → Settings → Control Panel. Double-click the Administrative Tools icon, then double-click the Services icon. You should see a window similar to Figure 5-23 on page 211. Ensure that the Status is Started.
- Verify that the CIMOM is active by selecting Start → Settings → Control Panel → Administrative Tools → Services. In the Services panel, select the IBM CIM Object Manager service and verify that the Status is shown as Started, as shown in Figure 5-29 on page 217.


Figure 5-29 Verify ESS CIMOM has started

Verify that the CIMOM has a dependency on SLP (that is, the IBM CIM Object Manager service depends on the Service Location Protocol service); this is configured automatically when you install the CIM agent software. To check it, select Start → Settings → Control Panel, double-click the Administrative Tools icon, double-click the Services icon, and then select Service Location Protocol and open its properties, as shown in Figure 5-30.

Figure 5-30 SLP properties panel

Click Properties and select the Dependencies tab, as shown in Figure 5-31 on page 218. Ensure that the IBM CIM Object Manager has a dependency on the Service Location Protocol (this should be the case by default).


Figure 5-31 SLP dependency on CIMOM

Verify the CIMOM registration with SLP by selecting Start → Programs → CIM Agent for the IBM TotalStorage DS Open API → Check CIMOM Registration. A window opens displaying the WBEM services, as shown in Figure 5-32. These services have either registered themselves with SLP or were explicitly registered with SLP using slptool. If you changed the default ports for a CIMOM during installation, the port number should be listed correctly here. It may take some time for a CIM Agent to register with SLP.

Figure 5-32 Verify CIM Agent registration with SLP

Note: If the verification of the CIMOM registration is not successful, stop and restart the SLP and CIMOM services. Note that the ESS CIMOM attempts to contact each ESS registered to it; therefore, the startup may take some time, especially if it is not able to connect and authenticate to any of the registered ESSs.
Use the verifyconfig -u superuser -p passw0rd command, where superuser is the user name and passw0rd is the password for the user name that you configured to manage the CIMOM, to locate all WBEM services in the local network. You need to define the TotalStorage Productivity Center for Disk superuser name and password in order for TotalStorage Productivity Center for Disk to have the authority to manage the CIMOM. The verifyconfig command checks the registration for the ESS CIM Agent and checks that it can


connect to the ESSs. In our ITSO lab we had configured two ESSs (as shown in Figure 5-33 on page 219).

Figure 5-33 The verifyconfig command

5.7.1 Problem determination


If you run into errors, check the cimom.log file. This file is located in the C:\Program Files\IBM\cimagent directory. Verify that you have entries with your current install timestamp, as shown in Figure 5-34. The entries of specific interest are:
CMMOM0500I Registered service service:wbem:https://x.x.x.x:5989 with SLP SA
CMMOM0409I Server waiting for connections
The first entry indicates that the CIMOM has successfully registered with SLP using the port number specified at ESS CIM agent install time; the second indicates that it has started successfully and is waiting for connections.

Figure 5-34 CIMOM Log Success


If you still have problems, refer to the DS Open Application Programming Interface Reference for an explanation and resolution of the error messages. You can find this guide in the doc directory at the root of the CIM Agent CD.

5.7.2 Confirming the ESS CIMOM is available


Before you proceed, you need to be sure that the DS CIMOM is listening for incoming connections. To check this, run a telnet command from the server where TotalStorage Productivity Center for Disk resides, against the port you selected during the DS CIMOM code installation. A successful telnet to the configured port (indicated by a black screen with the cursor in the top left corner) tells you that the DS CIMOM is active. If the telnet connection fails, you see a panel like the one shown in Figure 5-35. In that case, you have to investigate the problem until the telnet connection to the port succeeds.

Figure 5-35 Example of telnet fail connection
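For example, if the DS CIMOM runs on the host at 9.1.38.48 (the ESS CIMOM server address used in our lab) and was installed with the default secure port, the check from the TotalStorage Productivity Center for Disk server would simply be:

telnet 9.1.38.48 5989

Substitute your own CIMOM server address and the port you selected during installation.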

Another method to verify that the DS CIMOM is up and running is to use the CIM Browser interface. On Windows machines, change the working directory to C:\Program Files\IBM\cimagent and run startcimbrowser. The WBEM browser shown in Figure 5-36 appears. The default user name is superuser and the default password is passw0rd. If you have already changed them using the setuser command, provide the new user ID and password. This should be set to the TotalStorage Productivity Center for Disk user ID and password.

Figure 5-36 WBEM Browser


When login is successful, you should see a panel like the one in Figure 5-37.

Figure 5-37 CIMOM Browser window

5.7.3 Setting up the Service Location Protocol Directory Agent


You can use the following procedure to set up the Service Location Protocol (SLP) Directory Agent (DA) so that TotalStorage Productivity Center for Disk can discover devices that reside in subnets other than the one in which TotalStorage Productivity Center for Disk resides. Perform the following steps to set up the SLP DAs:
1. Identify the various subnets that contain devices that you want TotalStorage Productivity Center for Disk to discover.
2. Each device is associated with a CIM Agent. There might be multiple CIM Agents for each of the identified subnets. Pick one of the CIM Agents for each of the identified subnets. (It is possible to pick more than one CIM Agent per subnet, but it is not necessary for discovery purposes.)
3. Each of the identified CIM Agents contains an SLP service agent (SA), which runs as a daemon process. Each of these SAs is configured using a configuration file named slp.conf. Perform the following steps to edit the file: For example, if you have the DS CIM agent installed in the default install directory path, go to the C:\Program Files\IBM\cimagent\slp directory. Look for the file named slp.conf. Make a backup copy of this file and name it slp.conf.bak.


Open the slp.conf file and scroll down until you find (or search for) the following line:
;net.slp.isDA = true
Remove the semi-colon (;) at the beginning of the line and ensure that the property is set to true (= true) rather than false (see the snippet after this procedure). Save the file. Copy this file (or replace it if the file already exists) to the main Windows directory on Windows machines (for example, c:\winnt), or to the /etc directory on UNIX machines.
4. We recommend rebooting the SLP server at this stage. Alternatively, you may choose to restart the SLP and CIMOM services. You can do this from the Windows desktop: select Start → Settings → Control Panel → Administrative Tools → Services. In the Services window, locate Service Location Protocol, right-click it, and select Stop. Another panel pops up requesting to stop the IBM CIM Object Manager service; click Yes. After SLP has stopped successfully, start the SLP daemon again. Alternatively, you may restart the CIMOM from the command line as described in Restart the CIMOM on page 215.
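To illustrate the edit in step 3, the relevant line in slp.conf changes from the commented default:

;net.slp.isDA = true

to the active setting:

net.slp.isDA = true

The rest of the file can normally be left untouched.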

Creating slp.reg file


Important: To avoid having to manually register the CIMOMs outside the subnet every time the Service Location Protocol (SLP) is restarted, create a file named slp.reg. The default location for the registration file is C:\winnt\. slpd reads the slp.reg file on startup and re-reads it whenever the SIGHUP signal is received.

slp.reg file example


Example 5-1 is a slp.reg file sample.
Example 5-1 slp.reg file
#############################################################################
#
# OpenSLP static registration file
#
# Format and contents conform to specification in IETF RFC 2614, see also
# http://www.openslp.org/doc/html/UsersGuide/SlpReg.html
#
#############################################################################

#----------------------------------------------------------------------------
# Register Service - SVC CIMOMS
#----------------------------------------------------------------------------
service:wbem:https://9.43.226.237:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Open Systems Lab, Cottle Road
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

service:wbem:https://9.11.209.188:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Tucson L2 Lab
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

#service:wbem:https://9.42.164.175:5989,en,65535
# use default scopes: scopes=test1,test2
#description=SVC CIMOM Raleigh SAN Central
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#----------------------------------------------------------------------------
# Register Service - SANFS CIMOMS
#----------------------------------------------------------------------------
#service:wbem:https://9.82.24.66:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Gaithersburg ATS Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#service:wbem:https://9.11.209.148:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Tucson L2 Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#----------------------------------------------------------------------------
# Register Service - FAStT CIMOM
#----------------------------------------------------------------------------
#service:wbem:https://9.1.39.65:5989,en,65535
#CIM_InteropSchemaNamespace=root/lsissi
#ProtocolVersion=0
#Namespace=root/lsissi
# use default scopes: scopes=test1,test2
#description=FAStT700 CIMOM ITSO Lab, Almaden
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

5.7.4 Configuring TotalStorage Productivity Center for SLP discovery


You can use the SLP discovery panel to enter a list of DA addresses. TotalStorage Productivity Center for Disk sends unicast service requests to each of these statically configured DAs, and sends multicast service requests on the local subnet on which TotalStorage Productivity Center for Disk is installed. Configure an SLP DA by changing the configuration of the SLP service agent (SA) that is included as part of an existing CIM Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA.


You have now converted the SLP SA of the CIM Agent to run as an SLP DA. The CIMOM is not affected and will register itself with the DA instead of the SA. However, the DA will automatically discover all other services registered with other SLP SAs in that subnet. Attention: You will need to register the IP address of the server running the SLP DA daemon with the IBM Director to facilitate MDM SLP discovery. You can do this using the IBM Director console interface of TotalStorage Productivity Center for Disk. The procedure to register the IP address is described in 6.2, SLP DA definition on page 248.

5.7.5 Registering the DS CIM Agent to SLP


You need to manually register the DS CIM agent with the SLP DA only when both of the following conditions are true:
- There is no DS CIM Agent in the TotalStorage Productivity Center for Disk server subnet.
- The SLP DA used by Multiple Device Manager is also not running a DS CIM Agent.
Tip: If either of the preceding conditions is false, you do not need to perform the following steps.
To register the DS CIM Agent, issue the following commands on the SLP DA server:
cd C:\Program Files\IBM\cimagent\slp
slptool register service:wbem:https://ipaddress:port
Where ipaddress is the ESS CIM Agent IP address. For our ITSO setup, the IP address of our ESS CIMOM server was 9.1.38.48 and the default port number was 5989. Issue a verifyconfig command as shown in Figure 5-33 on page 219 to confirm that SLP is aware of the registration.
Attention: Whenever you update the SLP configuration as shown above, you may have to stop and start the slpd daemon. This enables SLP to register and listen on the newly configured ports. Also, whenever you restart the SLP daemon, ensure that the IBM DS CIMOM agent has also restarted; otherwise, issue the startcimom.bat command as shown in the previous steps. Another alternative is to reboot the CIMOM server. Note that the DS CIMOM startup takes a longer time.
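For instance, with the lab values just mentioned, the registration and a quick check that the service is now known to SLP could look like the following sketch; slptool findsrvs wbem is the same listing command used later in this chapter for the DS 4000 agent:

cd "C:\Program Files\IBM\cimagent\slp"
slptool register service:wbem:https://9.1.38.48:5989
slptool findsrvs wbem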

5.7.6 Verifying and managing CIMOMs availability


You may now verify that TotalStorage Productivity Center for Disk can authenticate and discover the CIMOM agent services which are registered to the SLP DA. See Verifying and managing CIMOMs availability on page 256.


5.8 Installing CIM agent for IBM DS4000 family


The latest code for the IBM DS4000 family is available at the IBM support Web site. You need to download the correct and supported level of CIMOM code for TotalStorage Productivity Center for Disk Version 2.3. You can navigate from the following IBM support Web site for TotalStorage Productivity Center for Disk to acquire the correct CIMOM code:
http://www-1.ibm.com/servers/storage/support/software/tpcdisk/

You may have to traverse multiple links to get to the download files. At the time of writing this book, we used the Web page shown in Figure 5-38.

Figure 5-38 IBM support matrix Web page


By scrolling down the same Web page, we reach the link for the DS 4000 CIMOM code shown in Figure 5-39. This link leads to the Engenio Provider site. The currently supported code level is 1.0.59, as indicated on the Web page.

Figure 5-39 Web download link for DS Family CIMOM code

From the Web site, select the operating system used for the server on which the IBM DS family CIM Agent will be installed. You will download a setup.exe file. Save it to a directory on the server on which you will be installing the DS 4000 CIM Agent (see Figure 5-40 on page 227).


Figure 5-40 DS CIMOM Install

Launch the setup.exe file to begin the DS 4000 family CIM agent installation. The InstallShield Wizard for LSI SMI-S Provider window opens (see Figure 5-41). Click Next to continue.

Figure 5-41 LSI SMI-S Provider window


The LSI License Agreement window opens next. If you agree with the terms of the license agreement, click Yes to accept the terms and continue the installation (see Figure 5-42).

Figure 5-42 LSI License Agreement

The LSI System Info window opens. The minimum requirements are listed along with the install system disk free space and memory attributes as shown in Figure 5-43. If the install system fails the minimum requirements evaluation, then a notification window will appear and the installation will fail. Click Next to continue.

Figure 5-43 System Info window


The Choose Destination Location window appears. Click Browse to choose another location or click Next to begin the installation of the FAStT CIM agent (see Figure 5-44).

Figure 5-44 Choose a destination

The InstallShield Wizard will now prepare and copy the files into the destination directory. See Figure 5-45.

Figure 5-45 Install Preparation window


The README appears after the files have been installed. Read through it to become familiar with the most current information (see Figure 5-46). Click Next when ready to continue.

Figure 5-46 README file

In the Enter IPs and/or Hostnames window, enter the IP addresses and hostnames of the FAStT devices that this FAStT CIM agent will manage as shown in Figure 5-47.

Figure 5-47 FAStT device list


Use the Add New Entry button to add the IP addresses or hostnames of the FAStT devices that this FAStT CIM agent will communicate with. Enter one IP address or hostname at a time until all the FAStT devices have been entered and click Next (see Figure 5-48).

Figure 5-48 Enter hostname or IP address

Do not enter the IP address of a FAStT device in multiple FAStT CIM Agents within the same subnet. This may cause unpredictable results on the TotalStorage Productivity Center for Disk server and could cause a loss of communication with the FAStT devices. If the list of hostnames or IP addresses has been previously written to a file, use the Add File Contents button, which will open the Windows Explorer. Locate and select the file and then click Open to import the file contents. When all the FAStT device hostnames and IP addresses have been entered, click Next to start the SMI-S Provider Service (see Figure 5-49).

Figure 5-49 Provider Service starting


When the Service has started, the installation of the FAStT CIM agent is complete (see Figure 5-50).

Figure 5-50 Installation complete

Arrayhosts file
The installer creates a file called %installroot%\SMI-SProvider\wbemservices\cimom\bin\arrayhosts.txt. The arrayhosts file is shown in Figure 5-51. In this file, the IP addresses of installed DS 4000 units can be reviewed, added, or edited.

Figure 5-51 Arrayhosts file
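As an illustration only, assuming the file lists one managed DS 4000 entry per line (matching what was entered during installation), an arrayhosts.txt file might look like the following; the addresses and host name below are placeholders:

9.1.39.101
9.1.39.102
ds4500-ctrl-a.itso.example.com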

Verifying LSI Provider Service availability


You can verify from the Windows Services panel that the LSI Provider service has started, as shown in Figure 5-52 on page 233. If you change the contents of the arrayhosts file to add or delete DS 4000 devices, you need to restart the LSI Provider service using the Windows Services panel.


Figure 5-52 LSI Provider Service

Registering DS4000 CIM agent


The DS4000 CIM Agent needs to be registered with an SLP DA if the FAStT CIM Agent is in a different subnet than that of the IBM TotalStorage Productivity Center for Disk and Replication Base environment. The registration is not currently provided automatically by the CIM Agent. You register the DS 4000 CIM Agent with the SLP DA from a command prompt using the slptool command. An example of the slptool command is shown below; you must change the IP address to reflect the IP address of the workstation or server where you installed the DS 4000 family CIM Agent. The IP address of our FAStT CIM Agent is 9.1.38.79 and the port is 5988. You need to execute this command on your SLP DA server. In our ITSO lab, we used the SLP DA on the ESS CIMOM server. Go to the directory C:\Program Files\IBM\cimagent\slp and run:
slptool register service:wbem:http://9.1.38.79:5988
Important: You cannot have the FAStT management password set if you are using IBM TotalStorage Productivity Center.
At this point you may run the following command on the SLP DA server to verify that the DS 4000 family FAStT CIM agent is registered with the SLP DA:
slptool findsrvs wbem
The response from this command shows the available services, which you can verify.

5.8.1 Verifying and Managing CIMOM availability


You may now verify that TotalStorage Productivity Center for Disk can authenticate and discover the CIMOM agent services which are registered by SLP DA. You can proceed to your TotalStorage Productivity Center for Disk server. See Verifying and managing CIMOMs availability on page 256.


5.9 Configuring CIMOM for SAN Volume Controller


The CIM Agent for SAN Volume Controller is part of the SAN Volume Controller Console and provides the TotalStorage Productivity Center for Disk with access to SAN Volume Controller clusters. You must customize the CIM Agents in your enterprise to accept the TotalStorage Productivity Center for Disk user name and password. Figure 5-53 explains the communication between the TotalStorage Productivity Center for Disk and SAN Volume Controller Environment.

Figure 5-53 TotalStorage Productivity Center for Disk and SVC communication

For additional details on how to configure the SAN Volume Controller Console, refer to the redbook IBM TotalStorage Introducing the SAN Volume Controller and SAN Integration Server, SG24-6423. To discover and manage the SAN Volume Controller, we need to ensure that the TotalStorage Productivity Center for Disk superuser name and password (the account we specify in the TotalStorage Productivity Center for Disk configuration panel, as shown in 5.9.1, Adding the SVC TotalStorage Productivity Center for Disk user account on page 235) matches an account defined on the SAN Volume Controller console. In our case we implemented the user name TPCSUID and password ITSOSJ. You may want to adopt a similar nomenclature and set up the user name and password on each SAN Volume Controller CIMOM to be monitored with TotalStorage Productivity Center for Disk.


5.9.1 Adding the SVC TotalStorage Productivity Center for Disk user account
As stated previously, you should implement a unique user ID to manage the SAN Volume Controller devices in TotalStorage Productivity Center for Disk. This can be achieved at the SAN Volume Controller console using the following steps: 1. Log in to the SAN Volume Controller console with a superuser account. 2. Click Users under My Work on the left side of the panel (see Figure 5-54).

Figure 5-54 SAN Volume Controller console


3. Select Add a user in the drop down under Users panel and click Go (see Figure 5-55).

Figure 5-55 SAN Volume Controller console Add a user


4. An introduction screen is opened, click Next (see Figure 5-56).

Figure 5-56 SAN Volume Controller Add a user wizard


5. Enter the User Name and Password and click Next (see Figure 5-57).

Figure 5-57 SAN Volume Controller Console Define users panel

6. Select your candidate cluster and move it to the right under Administrator Clusters (see Figure 5-58). Click Next to continue.

Figure 5-58 SAN Volume Controller console Assign administrator roles


7. Click Next after you Assign service roles (see Figure 5-59).

Figure 5-59 SAN Volume Controller Console Assign user roles


8. Click Finish after you verify user roles (see Figure 5-60).

Figure 5-60 SAN Volume Controller Console Verify user roles

9. After you click Finish, the Viewing users panel opens (see Figure 5-61).

Figure 5-61 SAN Volume Controller Console Viewing Users


Confirming that the SAN Volume Controller CIMOM is available


Before you proceed, you need to be sure that the CIMOM on the SAN Volume Controller is listening for incoming connections. To do this, issue a telnet command from the server where TotalStorage Productivity Center for Disk resides. A successful telnet on port 5989 (as indicated by a black screen with cursor on the top left) will tell you that the CIMOM SAN Volume Controller console is active. If the telnet connection fails, you will have a panel like the one in Figure 5-62.

Figure 5-62 Example of telnet fail connection

5.9.2 Registering the SAN Volume Controller host in SLP


The next step in detecting a SAN Volume Controller is to manually register the SAN Volume Controller console with the SLP DA.
Tip: If your SAN Volume Controller console resides in the same subnet as the TotalStorage Productivity Center server, SLP registration is automatic, so you do not need to perform the following step.
To register the SAN Volume Controller console, run the following command on the SLP DA server:
slptool register service:wbem:https://ipaddress:5989
Where ipaddress is the SAN Volume Controller console IP address. Run a verifyconfig command to confirm that SLP is aware of the SAN Volume Controller console registration.

5.10 Configuring CIMOM for TotalStorage Productivity Center for Disk summary
The TotalStorage Productivity Center discovers both IBM storage devices that comply with the Storage Management Initiative Specification (SMI-S) and SAN devices such as switches, ports, and hosts. SMI-S-compliant storage devices are discovered using the Service Location Protocol (SLP).


The TotalStorage Productivity Center server software performs SLP discovery on the network. The User Agent looks for all registered services with a service type of service:wbem. The TotalStorage Productivity Center performs the following discovery tasks:
- Locates individual storage devices
- Retrieves vital characteristics for those storage devices
- Populates the TotalStorage Productivity Center internal databases with the discovered information

The TotalStorage Productivity Center can also access storage devices through the CIM Agent software. Each CIM Agent can control one or more storage devices. After the CIMOM services have been discovered through SLP, the TotalStorage Productivity Center contacts each of the CIMOMs directly to retrieve the list of storage devices controlled by each CIMOM. TotalStorage Productivity Center gathers the vital characteristics of each of these devices.

For the TotalStorage Productivity Center to successfully communicate with the CIMOMs, the following conditions must be met:
- A common user name and password must be configured for all the CIM Agent instances that are associated with storage devices that are discoverable by TotalStorage Productivity Center (use adduser as described in 5.6.5, CIMOM user authentication on page 215). That same user name and password must also be configured for TotalStorage Productivity Center using the Configure MDM task in the TotalStorage Productivity Center interface. If a CIMOM is not configured with the matching user name and password, it will be impossible to determine which devices the CIMOM supports. As a result, no devices for that CIMOM will appear in the IBM Director Group Content pane.
- The CIMOM service must be accessible through the IP network.
- The TCP/IP network configuration on the host where TotalStorage Productivity Center is installed must include in its list of domain names all the domains that contain storage devices that are discoverable by the TotalStorage Productivity Center.

It is important to verify that the CIMOM is up and running. To do that, use the following command from the TotalStorage Productivity Center server:
telnet CIMip port
Where CIMip is the IP address where the CIM Agent runs and port is the port value used for the communication (5989 for a secure connection, 5988 for an unsecure connection).

5.10.1 SLP registration and slptool


TotalStorage Productivity Center for Disk uses Service Location Protocol (SLP) discovery, which requires that all of the CIMOMs it is to discover are registered with SLP. SLP can only discover CIMOMs that are registered in its own IP subnet. For CIMOMs outside of that subnet, you need to use an SLP DA and register the CIMOM using slptool. Ensure that the CIM_InteropSchemaNamespace and Namespace attributes are specified. For example, type the following command:

slptool register service:wbem:https://myhost.com:port

Where myhost.com is the name of the server hosting the CIMOM, and port is the port number of the service, such as 5989.


5.10.2 Persistency of SLP registration


Although it is acceptable to register services into SLP manually, SLP users can also statically register legacy services (applications that were not compiled to use the SLP library) using a configuration file, called slp.reg, that SLP reads at startup. All of the registrations are maintained by slpd and remain registered as long as slpd is alive. A manual SLP registration is lost if the server where SLP resides is rebooted or when the SLP service is stopped, and such a manual registration is needed for all the CIMOMs outside the subnet where the SLP DA resides.

Important: To avoid having to manually re-register the CIMOMs outside the subnet every time SLP is restarted, create a file named slp.reg. The default location for this file is c:\winnt on Windows machines, or the /etc directory on UNIX machines. slpd reads the slp.reg file at startup and re-reads it whenever the SIGHUP signal is received.

5.10.3 Configuring slp.reg file


Example 5-2 shows a typical slp.reg file:
Example 5-2 An slp.reg file

#############################################################################
#
# OpenSLP static registration file
#
# Format and contents conform to specification in IETF RFC 2614, see also
# http://www.openslp.org/doc/html/UsersGuide/SlpReg.html
#
#############################################################################

#----------------------------------------------------------------------------
# Register Service - SVC CIMOMS
#----------------------------------------------------------------------------
service:wbem:https://9.43.226.237:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Open Systems Lab, Cottle Road
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

service:wbem:https://9.11.209.188:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Tucson L2 Lab
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

#service:wbem:https://9.42.164.175:5989,en,65535
# use default scopes: scopes=test1,test2
#description=SVC CIMOM Raleigh SAN Central
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20


#----------------------------------------------------------------------------
# Register Service - SANFS CIMOMS
#----------------------------------------------------------------------------
#service:wbem:https://9.82.24.66:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Gaithersburg ATS Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#service:wbem:https://9.11.209.148:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Tucson L2 Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#----------------------------------------------------------------------------
# Register Service - FAStT CIMOM
#----------------------------------------------------------------------------
#service:wbem:https://9.1.39.65:5989,en,65535
#CIM_InteropSchemaNamespace=root/lsissi
#ProtocolVersion=0
#Namespace=root/lsissi
# use default scopes: scopes=test1,test2
#description=FAStT700 CIMOM ITSO Lab, Almaden
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20


Part 3. Configuring the IBM TotalStorage Productivity Center


In this part of the book we provide information about customizing the following components of the IBM TotalStorage Productivity Center product suite:
- IBM TotalStorage Productivity Center for Disk
- IBM TotalStorage Productivity Center for Replication
- IBM TotalStorage Productivity Center for Fabric
- IBM TotalStorage Productivity Center for Data

We also include a chapter on how to set up the individual (sub)agents on a managed host.


Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk


This chapter provides information about the basic tasks that you need to complete after you install IBM TotalStorage Productivity Center for Disk:
- Define SLP DA servers to IBM TotalStorage Productivity Center for Disk
- Discover CIM Agents
- Configure CIM Agents to IBM TotalStorage Productivity Center for Disk
- Discover storage devices
- Install the remote GUI


6.1 Productivity Center for Disk Discovery summary


Productivity Center for Disk discovers both IBM storage devices that comply with SMI-S and SAN devices such as switches, ports, and hosts. SMI-S-compliant storage devices are discovered using SLP. The Productivity Center for Disk server software performs SLP discovery on the network. The User Agent looks for all registered services with a service type of service:wbem. Productivity Center for Disk performs the following discovery tasks:
- Locates individual CIM Agents
- Locates individual storage devices
- Retrieves vital characteristics for those storage devices
- Populates the internal Productivity Center for Disk databases with the discovered information

Productivity Center for Disk can also access storage devices through the CIM Agent software. Each CIM Agent can control one or more storage devices. After the CIMOM services are discovered through SLP, Productivity Center for Disk contacts each of the CIMOMs directly to retrieve the list of storage devices controlled by each CIMOM, and then gathers the vital characteristics of each of these devices. For Productivity Center for Disk to communicate successfully with the CIMOMs, the following conditions must be met:
- A common user name (superuser) and password must be set during installation of the IBM TotalStorage Productivity Center for Disk base. This user name and password can be changed using the Configure MDM task in the Productivity Center for Disk interface. If a CIMOM is not configured with the matching user name and password, you must configure that CIMOM with the correct user ID and password using the panel shown in Figure 6-16 on page 258. We recommend that the common user name and password be used for each CIMOM.
- The CIMOM service must be accessible through the IP network.
- The TCP/IP network configuration on the host where Productivity Center for Disk is installed must include in its list of domain names all the domains that contain storage devices that are discoverable by Productivity Center for Disk.

It is important to verify that each CIMOM is up and running. To do that, use the following command:
telnet CIMip port

Here, CIMip is the IP address where the CIM Agent runs, and port is the port value used for the communication (5989 for a secure connection; 5988 for an unsecure connection).

6.2 SLP DA definition


Productivity Center for Disk can discover CIM Agents on its own subnet through SLP without any additional configuration. An SLP DA should be set up on each subnet as described in 5.7.3, Setting up the Service Location Protocol Directory Agent on page 221. The SLP DA can then be defined to Productivity Center for Disk using the panel located at Options → Discovery Preferences → MDM SLP Configuration, as shown in Figure 6-1 on page 249. Enter the IP address of the server with the SLP DA into the SLP directory agent host box and click Add.


We assume that you have followed the steps outlined in Chapter 5, CIMOM install and configuration on page 191. Complete the following tasks in order to discover the devices defined to our Productivity Center common base host. Make sure that:
- All CIM Agents are running and are registered with the SLP server.
- The SLP DA host is defined in the IBM Director options (Figure 6-1) if it resides in a different subnet from that of the TotalStorage Productivity Center server (Options → Discovery Preferences → MDM SLP Configuration tab).

Note: If the Productivity Center common base host server resides in the same subnet as the CIMOM, it is not a requirement that the SLP DA host IP address be specified in the Discovery Preferences panel as shown in Figure 6-2. Refer to Chapter 2, Key concepts on page 27 for details on SLP discovery.

Here we provide a step-by-step procedure:

1. Discovery happens automatically based on preferences that are defined in the Options → Discovery Preferences → MDM SLP Configuration tab. The default values for Auto discovery interval and Presence check interval are set to 0 (see Figure 6-1). These values should be set to more suitable values, for example, 1 hour for Auto discovery interval and 15 minutes for Presence check interval. The values you specify have a performance impact on the CIMOMs and the Productivity Center common base server, so do not set them too low.

Figure 6-1 Setting discovery preferences


Continue entering IP addresses for all SLP DA servers. Click OK when finished (see Figure 6-2).

Figure 6-2 Discovery preference set

2. Turn off automatic inventory on discovery. Important: Because of the time and CIMOM resources needed to perform inventory on storage devices, it is undesirable and unnecessary to perform this operation each time Productivity Center common base performs a device discovery.


Turn off automatic inventory by selecting Options → Server Preferences as shown in Figure 6-3.

Figure 6-3 Selecting Server Preferences

Now uncheck the Collect On Discovery check box as shown in Figure 6-4; all other options can remain unchanged. Click OK when done.

Figure 6-4 Server Preferences


3. You can click Discover all Systems in the top left corner of the IBM Director Console to initiate an immediate discovery task (see Figure 6-5).

Figure 6-5 Discover All Systems icon


4. You can also use the IBM Director Scheduler to create a scheduled job for new device discovery. Either click the scheduler icon in the IBM Director tool bar or use the menu: Tasks → Scheduler (see Figure 6-6).

Figure 6-6 Tasks Scheduler option for Discovery

In the Scheduler, click File → New Job (see Figure 6-7).

Figure 6-7 Task Scheduler Discovery job


Establish parameters for the new job. Under the Date/Time tab, include date and time to perform the job, and whether the job is to be repeated (see Figure 6-8).

Figure 6-8 Discover job parameters

From the Task tab (see Figure 6-9), select Discover MDM storage devices/SAN Elements, then click Select.

Figure 6-9 Discover job selection task


Click File → Save as, or use the Save as icon. Provide a descriptive job name in the Save Job panel (see Figure 6-10) and click OK.

Figure 6-10 Discover task job name

Now run the discovery process by selecting Tasks → Discover Systems → All Systems and Devices (Figure 6-11).

Figure 6-11 Perform discovery


Double-click the Manage CIMOM task to see the status of the discovery (Figure 6-12).

Figure 6-12 Configure CIMOMs

The CIMOMs will appear in the list as they are discovered.

6.2.1 Verifying and managing CIMOMs availability


You can now verify that TotalStorage Productivity Center for Disk can authenticate to and discover the CIMOM agent services that are registered with the SLP DA. Launch the IBM Director Console and select TotalStorage Productivity Center for Disk → Manage CIMOMs in the Tasks panel, as shown in Figure 6-13. The panel shows the status of the connection to the respective CIMOM servers. Our ITSO DS CIMOM server connection status is indicated in the first line, with IP address 9.1.38.48, port 5996, and a status of Success.

Figure 6-13 Manage CIMOM panel


It should not be necessary to change any information if you followed the recommendation to use the same superuser id and password for all CIMOMs. Select the CIMOM to be configured and click Properties to configure a CIMOM (Figure 6-14).

Figure 6-14 Select a CIMOM to configure

1. To verify and reconfirm the connection, select the respective connection status and click Properties. Figure 6-16 on page 258 shows the properties panel. Verify the information and update it if necessary. The namespace, user name, and password are normally picked up automatically, so they do not usually need to be entered manually. This is the user name that TotalStorage Productivity Center for Disk uses to log on to the CIMOM. If you have difficulty getting a successful connection, you can enter the namespace, user name, and password manually. Update the properties panel and test the connection to the CIMOM:
a. Enter the Namespace value. It is \root\ibm for the ESS, DS6000, and DS8000; it is \interop for the DS4000.
b. Select the protocol. It is typically https for the ESS, DS6000, and DS8000, and http for the DS4000.
c. Enter the user name and password. The default is the superuser name and password entered earlier. If you entered a different user name and password with the setuser command for the CIM Agent, enter that user name and password here.
d. Click Test Connection to verify correct configuration.
e. You should see the panel in Figure 6-15. Click Close on the panel.

Figure 6-15 Successful test of the connection to a CIMOM


f. Click OK on the panel shown in Figure 6-16 to save the properties.

Figure 6-16 CIMOM Properties

2. After the connection to the CIMOM is successful, then perform discovery again as shown before in Figure 6-11 on page 255. This will discover the storage devices connected to each CIMOM (Figure 6-17).

Figure 6-17 DS4000 CIMOM Properties Panel

3. Click the Test Connection button to see a panel similar to Figure 6-15 on page 257, showing that the connection is successful. Tip: If you move or delete CIMOMs in your environment, the old CIMOM entries are not automatically updated, and entries with a Failure status will be seen as in Figure 6-13 on page 256. These invalid entries can slow down discovery performance, as TotalStorage Productivity Center tries to contact them each time it performs a discovery. You cannot delete CIMOM entries directly from the Productivity Center common base interface. Delete them using the DB2 control center tool as described in 16.6, Manually removing old CIMOM entries on page 911.


6.3 Disk and Replication Manager remote GUI


It is possible to install a TotalStorage Productivity Center for Disk console on a server other than the one on which the TotalStorage Productivity Center for Disk code is installed. This allows you to manage TotalStorage Productivity Center for Disk from a secondary location, and a secondary console offloads work from the TotalStorage Productivity Center for Disk server.

Note: You are only installing the IBM Director and TotalStorage Productivity Center for Disk console code. You do not need to install any other code for the remote console.

In our lab we installed the remote console on a dedicated Windows 2000 server with 2 GB of RAM. You must install all the consoles and clients on the same server. Here are the steps:
1. Install the IBM Director console.
2. Install the TotalStorage Productivity Center for Disk console.
3. Install the Performance Manager client if the Performance Manager component is installed.

Installing the IBM Director console


Follow these steps: 1. Start the setup.exe of IBM Director. 2. The main IBM Director window (Figure 6-18) opens. Click INSTALL IBM DIRECTOR.

Figure 6-18 IBM Director installer


3. In the IBM Director Installation panel (Figure 6-19), select IBM Director Console installation.

Figure 6-19 Installation options for IBM Director

4. After a moment, the InstallShield Wizard for IBM Director Console panel (Figure 6-20) opens. Click Next.

Figure 6-20 Welcome panel


5. In the License Agreement panel (Figure 6-21), select I accept the terms in the license agreement. Then click Next.

Figure 6-21 License Agreement

6. The next panel (Figure 6-22) contains information about enhancing IBM Director. Click Next to continue.

Figure 6-22 Enhance IBM Director


7. The Feature and installation directory selection panel (Figure 6-23) allows you to change how a program feature is installed. Click Next.

Figure 6-23 Selecting the program features to install

8. In the Ready to Install the Program window (Figure 6-24), accept the default selection. Then click Install to start the installation.

Figure 6-24 Ready to Install the Program panel


9. The installation takes a few minutes. When it is finished, you see the InstallShield Wizard Completed window (Figure 6-25). Click Finish to complete the installation.

Figure 6-25 Installation finished

The remote console of IBM Director is now installed.


Installing the remote console for Productivity Center for Disk


To install the remote console for Productivity Center for Disk follow these steps: 1. Insert the installation media for Productivity Center for Disk and Replication Base. 2. Change to the W2K directory. Figure 6-26 shows the files in that directory.

Figure 6-26 Files in the W2K directory

3. Start the LaunchPad.bat batch file. Coincidentally, this file has the same name as the TotalStorage Productivity Center Launchpad, although it has nothing to do with it. 4. Click Installation wizard to begin the installation (Figure 6-27 on page 265).


Figure 6-27 Multiple Device Manager LaunchPad

5. For a brief moment, you see a DOS box with the installer being unpacked. When this is done, you see the Welcome window shown in Figure 6-28. Click Next.

Figure 6-28 Welcome window


6. The License Agreement window (Figure 6-29) is displayed. Select I accept the terms in the license agreement and click Next.

Figure 6-29 License Agreement


7. The Destination Directory window (Figure 6-30) opens. Accept the default path or enter the target directory for the installation. Click Next.

Figure 6-30 Installation directory


8. In the Select Product Type window (Figure 6-31), select Productivity Center for Disk and Replication Base Console for the product type. Click Next.

Figure 6-31 Installation options


9. The Preview window (Figure 6-32) contains the installation information. Review it and click Install to start the console install.

Figure 6-32 Summary

10.When you reach the Finish window, click Finish to exit the add-on installer (Figure 6-33).

Figure 6-33 Installation finished


11.You return to the IBM TotalStorage Productivity Center for Disk and Replication Base installer window shown in Figure 6-27 on page 265. Click Exit to end the installation. The IBM Director remote console is now installed. The add-ons for IBM TotalStorage Productivity Center for Disk and Replication Base have been added. If the TotalStorage Productivity Center Launchpad is installed, it detects that the IBM Director remote console is available the next time the LaunchPad is started. Also the Launchpad can now be used to start IBM Director.

6.3.1 Installing Remote Console for Performance Manager function


After installing the IBM Director Console and the TotalStorage Productivity Center for Disk base console, you need to install the remote console for the Performance Manager function. To do this, insert the CD-ROM that contains the code for TotalStorage Productivity Center for Disk and click setup.exe. In our example, we used the downloaded code as shown in the screenshot in Figure 6-34.

Figure 6-34 Screenshot of our lab download directory location


Next, you see the Welcome panel shown in Figure 6-35. Click Next.

Figure 6-35 Welcome panel from TotalStorage Productivity Center for Disk installer

The License Agreement panel shown in Figure 6-36 on page 272 appears. Select I accept the terms in the license agreement and click Next to continue.


Figure 6-36 Accept the terms of license agreement.

Choose the default destination directory as shown in Figure 6-37 and click Next.

Figure 6-37 Choose default destination directory


In the next panel, choose to install Productivity Center for Disk Client and click Next as shown in Figure 6-38.

Figure 6-38 Select Product Type


In the next panel, select both product check boxes if you would like to install the console and the command line client for the Performance Manager function (see Figure 6-39). Click Next.

Figure 6-39 TotalStorage Productivity Center for Disk features selection


The Productivity Center for Disk Installer - CoServer Parameters panel opens (see Figure 6-40). Enter the user ID, password, and IP address that the remote console will use to authenticate with the TotalStorage Productivity Center server. This is the IP address of the TotalStorage Productivity Center server and the IBM Director logon credentials.

Figure 6-40 Productivity Center for Disk Installer - CoServer parameters

The Productivity Center for Disk Installer - Preview panel appears (see Figure 6-41 on page 276). Review the information and click Install to start the process of installing the remote console.


Figure 6-41 Productivity Center for Disk Installer - Preview

When the install is complete you will see the Productivity Center for Disk Installer - Finish panel as shown in Figure 6-42. Click Finish to complete the install process.

Figure 6-42 TotalStorage Productivity Center for Disk finish panel


6.3.2 Launching Remote Console for TotalStorage Productivity Center


You can launch the remote console from the TotalStorage Productivity Center desktop icon from the remote console server. You will see the window in Figure 6-43.

Figure 6-43 TotalStorage Productivity Center launch window

Click Manage Disk Performance and Replication, as highlighted in the figure. This launches the IBM Director remote console. You can log on to the Director server and start using the remote console functions, except for Replication Manager.

Note: At this point, you have installed the remote console for the Performance Manager function only, not for Replication Manager. You can install the remote console for Replication Manager if you wish.


Chapter 7. Configuring TotalStorage Productivity Center for Replication


This chapter provides information to help you customize the TotalStorage Productivity Center for Replication component of the TotalStorage Productivity Center. In particular, we describe how to set up a remote GUI and CLI.


7.1 Installing a remote GUI and CLI


A replication session can be managed remotely using the graphical user interface (GUI) and the command line interface (CLI). To install them, follow this procedure:
1. Copy the suite installer and Replication Manager code to the computer you wish to use.
2. In the suite install folder, double-click the setup.exe file to launch the installer wizard.
3. At the language panel (Figure 7-1), choose the language you wish to use during the install.

Figure 7-1 Select a language

4. At the welcome screen (Figure 7-2), click Next.

Figure 7-2 Welcome screen


5. The software license agreement panel appears (Figure 7-3). Click the radio button next to I accept the terms of the license agreement and click Next to continue.

Figure 7-3 License agreement

6. In the TotalStorage Productivity Center install options panel (Figure 7-4), click the radio button next to User interface installations of Data, Disk, Fabric, and Replication and click Next.

Figure 7-4 TotalStorage Productivity Center install options


7. In the Remote GUI/Command Line Client component window (Figure 7-5), check the box next to The Productivity Center for Replication - Command Line Client and click Next.

Figure 7-5 Select Remote GUI/Command Line Client

8. A window opens (Figure 7-6) to begin the replication command line client install.

Figure 7-6 Replication command client install


9. In the next window, enter the location of the Replication Manager install package (Figure 7-7).

Figure 7-7 Install package location for replication

10.A window opens prompting you to interact with the Replication Manager install wizard (Figure 7-8).

Figure 7-8 Launch Replication Manager installer


11.The window in Figure 7-9 appears until the install wizard is launched.

Figure 7-9 Launching installer

12.The Productivity Center for Replication Installer - Welcome wizard window (Figure 7-10) opens. Click Next.

Figure 7-10 Replication remote CLI install wizard


13.Specify the directory path of the Replication Manager installation files in the window shown in Figure 7-11. Click Next.

Figure 7-11 Replication remote CLI Installer - destination directory

14. In the CoServer Parameters window shown in Figure 7-12, enter the following information:
- Host Name: Host name or IP address of the Replication Manager server
- Host Port: Port number of the Replication Manager server (the default value is 9443)
- User Name: User name of the CIM Agent managing the storage device(s)
- User Password: User password of the CIM Agent managing the storage device(s)

Click Next to continue.


Figure 7-12 Replication remote CLI Installer - coserver parameters

15.Review the information in the Preview window shown in Figure 7-13 and click Install.

Figure 7-13 Replication remote CLI Installer - preview


16.After successfully installing the remote CLI, the window in Figure 7-14 appears. Click Finish.

Figure 7-14 Replication remote CLI Installer - finish

17. After you click Finish, the postinstall.txt file opens. You may read the file now or close it and view it later.
18. A window opens informing you of a successful installation (see Figure 7-15). Click Next to finish.

Figure 7-15 Remote CLI installation successful


Chapter 8. Configuring IBM TotalStorage Productivity Center for Data


This chapter describes the tasks necessary to start using IBM TotalStorage Productivity Center for Data in your environment. After you install Productivity Center for Data, there are a few remaining steps to perform, but you can start to use the product before completing all of them. Most people use Productivity Center for Data to look at the environment and see how the storage capacity is distributed, and this chapter focuses on what is necessary to fulfill this task. The following procedures are covered in this chapter:
- Configuring a discovered IBM TotalStorage Enterprise Storage Server (ESS) Common Information Model (CIM) Agent
- Configuring a discovered Fibre Array Storage Technology (FAStT) CIM Agent
- Adding a CIM Agent that is located in a remote network
- Setting up the IBM TotalStorage Productivity Center for Data Web interface
- Setting up a remote console

We also recommend that you perform the following actions, although we do not describe them here:
- Setting up the alerting dispositions: Simple Network Management Protocol (SNMP), Tivoli Enterprise Console (TEC), and mail
- Setting up retention of log files and other information


8.1 Configuring the CIM Agents


Configuration of the CIM Agents for IBM TotalStorage Productivity Center for Data is different from the configuration you have to perform within Productivity Center for Disk. This section explains how to set up a CIM Agent in two situations: when it was discovered by the Data Manager, and when the CIM Agent is located in a different subnet and multicasts are not enabled. Here is an overview of the procedure to work with CIM Agents:
1. Perform discovery of a new CIM Agent (using the Service Location Protocol (SLP)).
2. Configure the discovered CIM Agent properties, or define a new CIM Agent.
3. Discovery collects the device.
4. After the characteristics are available, set up the device for monitoring.
5. A probe on the device gathers information about the disks and logical unit numbers (LUNs).

8.1.1 CIM and SLP interfaces within Data Manager


The CIM interface within Data Manager is used only to gather information about the disks, the LUNs, and some asset information. This data is correlated with the data that the manager receives from the agents. Since there is no way to install the Data Manager agent directly on a storage subsystem, Data Manager obtains the information from storage subsystems by using the Storage Management Initiative - Specification (SMI-S) standard. This standard uses another standard, CIM, and Data Manager uses this interface to access a storage subsystem. A CIM Agent (also called a CIM Object Manager, or CIMOM), which ideally runs within the subsystem but can also run on a separate host, announces its existence by using SLP. You can learn more about this protocol in 2.3, Service Location Protocol (SLP) overview on page 38. An SLP User Agent (UA) is integrated within Data Manager, and that agent performs a discovery of devices. This discovery is limited to the local subnet of the Data Manager, and is expanded only if multicasts are enabled on the network routers. See Multicast on page 43 for details. Unlike Productivity Center for Disk, the User Agent that is integrated within Data Manager cannot talk to an SLP Directory Agent (DA). This restriction requires you to manually configure every storage subsystem that was not automatically discovered.

8.1.2 Configuring CIM Agents


The procedure to configure a CIM Agent is simple. If a CIM Agent was discovered, you simply enter the security information. We use the term CIM Agent instead of CIMOM because this is a more generic term. Figure 8-1 on page 291 shows the panel where the CIM Agents are configured. In our example, the first two entries show CIM Agents that were discovered but are not yet configured. The last two entries show an ESS and a FAStT CIM Agent that have already been configured.


If you want to configure a CIM Agent that cannot be discovered because of the restriction explained in 8.1.1, CIM and SLP interfaces within Data Manager on page 290, then you also need to enter the IP address and select the right protocol.

Figure 8-1 Selecting CIM Agent Logins

If you completed the worksheets (see Appendix A, Worksheets on page 991), have them available for the next steps.

Configuring discovered CIM Agents


For discovered CIM Agents that are not yet configured, complete these steps:
1. In the CIM/OM Login Administration panel (Figure 8-1), highlight the discovered CIM Agent. Click Edit.
2. The Edit CIM/OM Login Properties window (Figure 8-2 on page 292) opens. Proceed as follows:
a. Verify the IP address, port, and protocol.
Note: Not all CIM Agents provide secure communication via https. For example, FAStT does not provide https, so you have to select http.
b. Enter the name and password for the user that was configured in the CIM Agent of that device.
Note: At the time of this writing, a FAStT CIM Agent does not use a special user to secure access. Data Manager still requires input in the user and password fields, so type anything you want.
c. If you selected https as the protocol to use, enter the complete path and file name of the certificate file that is used to secure the communication between the CIM Agent and the Data Manager.
Note: The Truststore file of the ESS CIM Agent is located in the C:\Program Files\ibm\cimagent directory on the CIM Agent host.


d. Click Save to finish the configuration.

Figure 8-2 CIM Agent login properties

Configuring new CIM Agents


If you have to enter a CIM Agent manually, click New in the CIM/OM Login Administration panel (Figure 8-1 on page 291). The New CIM/OM Login Properties window (Figure 8-3) opens. You perform the same steps as described in Configuring discovered CIM Agents on page 291. For a new CIM Agent, you must also specify the IP address and protocol to use. The port is set depending on the protocol.


Figure 8-3 New CIM Agent login properties

Next steps
After you configure the CIM Agent properties, run discovery on the storage subsystems. During this process, the Data Manager talks to the CIM Agent to gather information about the devices. When this is completed, you see an entry for the subsystem in the Storage Subsystem Administration panel (Figure 8-5 on page 294).

8.1.3 Setting up a disk alias


Optionally, you can change the name of a disk subsystem to a more meaningful name:
1. In the Data Manager GUI, in the Navigation Tree, expand the Administrative Services → Configuration → Data Manager subtree as shown in Figure 8-4. Select Storage Subsystem Administration.

Figure 8-4 Navigation Tree


2. The panel shown in Figure 8-5 on page 294 opens.
a. Highlight the subsystem.
b. Place a check mark in the Monitored column.
Note: Select the Monitored column if you want Data Manager to probe the subsystem whenever a probe job is run against it. If you deselect the Monitored check box for a storage subsystem, the following actions occur:
- All the data gathered by the server for the storage subsystem is removed from the enterprise repository.
- You can no longer run Monitoring, Alerting, or Policy Management jobs against the storage subsystem.
c. Click Set disk alias.

Figure 8-5 Storage Subsystem Administration

3. The Set Disk Alias window (Figure 8-6) opens. a. Enter the Alias/Name. b. Click OK to finish.

Figure 8-6 Set Disk Alias

4. You may need to refresh the GUI for the changes to become effective. Right-click an old entry in the Navigation Tree, and select Refresh.

Next steps
Now that you have set up the CIM Agent properties and specified to monitor the subsystems, run a probe against it to collect data about the disks and LUNs. After you do this, you can look at the results in different reports.


8.2 Setting up the Web GUI


The Web GUI is basically the same as the remote GUI that you can install on any machine. You simply use a Web browser to download a Java application that is then launched. We show only the basic setup of the Web server, which may not be very secure. The objective here is to gain access to the Data Manager from a machine that does not have the remote GUI installed. Attention: We had the Tivoli Agent Manager running on the same machine. The Agent Manager comes with an application (the Agent Recovery Service) that uses port 80, so we had to find an unused port on the same machine. In addition, you must be careful if you use the Internet Information Server (IIS). IIS uses several ports by default which may interfere with the installed WebSphere Application Server. Therefore we recommend that you use the IBM HTTP Server.

8.2.1 Using IBM HTTP Server


This section explains how to set up the IBM HTTP Server to make the remote GUI available via the Web. When you install WebSphere Application Server on a machine, the IBM HTTP Server is installed on the same machine. The IBM HTTP Server does not come with a GUI for administration; instead, you use configuration files to modify any settings. The HTTP server is installed in C:\Program Files\WebSphere\AppServer\HTTPServer. This directory contains the conf subdirectory, which holds the httpd.conf file used to configure the server.
1. In the C:\Program Files\WebSphere\AppServer\HTTPServer\conf directory, open the httpd.conf file.
2. Locate the line where the port is defined. See Example 8-1. Change the port number. In our example, we used 2077.
Example 8-1 Abstracts of the httpd.conf file

ServerName GALLIUM
# This is the main server configuration file. See URL http://www.apache.org/
# for instructions.
# Do NOT simply read the instructions in here without understanding
# what they do, if you are unsure consult the online docs. You have been
# warned.
# Originally by Rob McCool
#
# Note: Where filenames are specified, you must use forward slashes
# instead of backslashes. e.g. "c:/apache" instead of "c:\apache".
# If the drive letter is omitted, the drive where Apache.exe is located
# will be assumed

....
# Port: The port the standalone listens to.
#Port 80


Port 2077

3. Locate the line AfpaEnable. Comment out the three Afpa... lines as shown in Example 8-2.
Example 8-2 Afpa

#AfpaEnable
#AfpaCache on
#AfpaLogFile "C:\Program Files\WebSphere\AppServer\HTTPServer/logs/afpalog" V-ECLF


4. Locate the line that starts with <Directory. Modify the line to point the directory to C:\Program Files\IBM\TPC\Data\gui as shown in Example 8-3.
Example 8-3 Directory setting

# ---------------------------------------------------------------------------
# This section defines server settings which affect which types of services
# are allowed, and in what circumstances.
# Each directory to which Apache has access, can be configured with respect
# to which services and features are allowed and/or disabled in that
# directory (and its subdirectories).
#
# Note: Where filenames are specified, you must use forward slashes
# instead of backslashes. e.g. "c:/apache" instead of "c:\apache".
# If the drive letter is omitted, the drive where Apache.exe is located
# will be assumed

# First, we configure the "default" to be a very restrictive set of
# permissions.
#
# Note that from this point forward you must specifically allow
# particular features to be enabled - so if something's not working as
# you might expect, make sure that you have specifically enabled it below.

# This should be changed to whatever you set DocumentRoot to.
#<Directory "C:\Program Files\WebSphere\AppServer\HTTPServer/htdocs/en_US">
<Directory "C:\Program Files\IBM\TPC\Data\gui">

5. Locate the line that starts with DocumentRoot. Modify the line to point the directory to C:\Program Files\IBM\TPC\Data\gui as shown in Example 8-4.
Example 8-4 DocumentRoot

# -------------------------------------------------------------------------------
# In the following section, you define the name space that users see of
# your http server. This also defines server settings which affect how
# requests are serviced, and how results should be formatted.

# See the tutorials at http://www.apache.org/ for
# more information.
#
# Note: Where filenames are specified, you must use forward slashes
# instead of backslashes. e.g. "c:/apache" instead of "c:\apache".
# If the drive letter is omitted, the drive where Apache.exe is located
# will be assumed.

# DocumentRoot: The directory out of which you will serve your
# documents. By default, all requests are taken from this directory, but
# symbolic links and aliases may be used to point to other locations.
#DocumentRoot "C:\Program Files\WebSphere\AppServer\HTTPServer/htdocs/en_US"
DocumentRoot "C:\Program Files\IBM\TPC\Data\gui"


6. Locate the line that starts with DirectoryIndex. Modify the line to use tpcd.html as the index document, as shown in Example 8-5. (A brief recap of all the httpd.conf changes follows the example.)
Example 8-5 Directory index

# DirectoryIndex: Name of the file or files to use as a pre-written HTML
# directory index. Separate multiple entries with spaces.
#DirectoryIndex index.html
DirectoryIndex tpcd.html
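Taken together, the changes from Example 8-1 through Example 8-5 amount to only a few directives. The following minimal recap assumes the default paths and the example port 2077 used in this chapter:

Port 2077
DocumentRoot "C:\Program Files\IBM\TPC\Data\gui"
<Directory "C:\Program Files\IBM\TPC\Data\gui">
DirectoryIndex tpcd.html

In addition, the AfpaEnable, AfpaCache, and AfpaLogFile lines are commented out as shown in Example 8-2.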

7. Save the file.
8. Start the HTTP server.
9. Open a command prompt.
a. Change to the directory C:\Program Files\WebSphere\AppServer\HTTPServer.
b. Type apache, and press Enter.
10. This starts the HTTP server as a foreground application. Now, when you use a Web browser, simply enter:
http://servername:portnumber

In our environment, we entered:


http://gallium:2077

You see a Web page, and a Java application is then loaded. (Java is installed if necessary.) Note: Do not omit the http://. Since we do not use the default, you have to tell the browser which protocol to use.


8.2.2 Using Internet Information Server


If you have IIS installed on the server running Data Manager, use these steps to enable access to the remote GUI via a Web site.

Attention: If you have WebSphere Application Server running on the same server, be careful not to create port conflicts, especially since port 80 is in use by both applications.

1. Start the Internet Information Services administration GUI.
2. A window opens as shown in Figure 8-7. In the left panel, right-click the entry with your host name and select New → Web Site to launch the Web Site Creation Wizard.

Figure 8-7 Internet Information Server administration GUI


3. The Web Site Creation Wizard opens, displaying the Welcome panel (see Figure 8-8). Click Next.

Figure 8-8 Web Site Creation Wizard

4. The Web Site Description panel (Figure 8-9) opens. Enter a description in the panel and click Next.

Figure 8-9 Web Site Description panel


5. The IP Address and Port Settings panel (Figure 8-10) opens. Enter an unused port number and click Next.

Figure 8-10 IP Address and Port Setting panel

6. In the Web Site Home Directory panel (Figure 8-11), enter the home directory of the Web server. This is the directory where the files for the remote Web GUI are stored. The default is C:\Program Files\IBM\TPC\Data\gui. Click Next.

Figure 8-11 Web Site Home Directory panel


7. The Web Site Access Permissions panel (Figure 8-12) opens. Accept the default access permissions, and click Next.

Figure 8-12 Web Site Access Permissions panel

8. When you see the window indicating that you have successfully completed the Web Site Creation Wizard (Figure 8-13), click Finish.

Figure 8-13 Setup finished

9. In the Internet Information Services window (Figure 8-7 on page 299), right-click the new Web server entry, and select Properties.


10.The Data Manager Properties window (Figure 8-14) opens. a. Select the Documents tab.

Figure 8-14 Adding a default document

b. Click Add.
c. In the window that opens, enter tpcd.html. Click OK.
d. Click OK to close the Properties window.
11. The new Web site is now available through IIS. When you use a Web browser, simply enter:
http://servername:portnumber

In our installation, we entered:


http://gallium:2077

You see a Web page, and a Java application is loaded. (Java is installed if necessary.)

Note: Do not omit the http://. Since we do not use the default, you have to tell the browser which protocol to use.

8.2.3 Configuring the URL in Fabric Manager


The user properties file of the Fabric Manager contains settings that control polling, SNMP traps destination, and the fully qualified host name of Data Manager. As an administrator, you can use srmcp manager service commands to display and set the values in the user properties file. The srmcp ConfigService set command sets the value of the specified property to a new value in the user properties file (user.properties). This command can be run only on the manager computer.


Issuing a command on Windows


Use these steps to enter a command on Windows:
1. Open a command prompt window.
2. Change the directory to installation directory\manager\bin\w32-ix86. With the default installation directory, this is C:\Program Files\IBM\TPC\Fabric\manager\bin\w32-ix86.
3. Enter the following command:
setenv

4. Enter the following command:


srmcp -u Administrator -p password ConfigService set SRMURL http://data.itso.ibm.com:2077

The change is picked up immediately. There is no need to restart Fabric Manager.
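As a recap, the complete sequence on the Fabric Manager computer looks like the following sketch; the Administrator user, the password placeholder, and the URL http://data.itso.ibm.com:2077 are the example values used above, so substitute your own:

cd "C:\Program Files\IBM\TPC\Fabric\manager\bin\w32-ix86"
setenv
srmcp -u Administrator -p password ConfigService set SRMURL http://data.itso.ibm.com:2077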

8.3 Installing the Data Manager remote console


To install the remote console for Productivity Center for Data, use the procedure explained in this section. You can also start the installation using the Suite Installer. However, when the Data Manager installer is launched, you begin with the first step of the procedure that follows. 1. Select language. We selected English (Figure 8-16 on page 305).

Figure 8-15 Welcome panel

2. The next panel (see Figure 8-16 on page 305) is the Software License Agreement. Click I accept the terms in the license agreement and click Next to continue.


Figure 8-16 License agreement

3. The next panel allows you to select the components to be installed. For the remote console installation, select the User interface installations of Data, Disk, Fabric, and Replication (see Figure 8-17). Click Next to continue.

Figure 8-17 Product selection


4. The next panel allows you to select which Remote GUI will be installed. Select the Productivity Center for Data (see Figure 8-18) and click Next to continue.

Figure 8-18 Remote GUI selection panel

5. The next panel is informational (see Figure 8-19) and verifies that the Productivity Center for Data GUI will be installed.

Figure 8-19 Verification panel


6. The install package location panel is displayed. Specify the required information (see Figure 8-20) and click Next to continue.

Figure 8-20 Install package location

7. Another information panel is displayed (see Figure 8-21) indicating that the product installer will be launched. Click Next to continue.

Figure 8-21 The installer will be launched


8. In the window that opens, like the one in Figure 8-22, select Install Productivity Center for Data and click Next.

Figure 8-22 Installation action

9. The License Agreement panel (Figure 8-23) opens. Select I have read and AGREE to abide by the license agreement above and click Next.

Figure 8-23 License Agreement


10.A License Agreement Confirmation window (Figure 8-24) opens. Click Yes to confirm.

Figure 8-24 License Agreement Confirmation

11.The next window that opens prompts you to specify what you want to install (see Figure 8-25). In this example, we already had the agent installed on our machine. Therefore all options are still available. Select The GUI for reporting and click Next.

Figure 8-25 Installation options


12.In the Productivity Center for Data Parameters panel (Figure 8-26), enter the Data Manager connection details and a Data Manager server name. Change the port if necessary and click Next.

Figure 8-26 Data Manager connection details


13.In the Space Requirements panel (Figure 8-27), you can change the installation directory or leave the default. Click Next.

Figure 8-27 Installation directory

14.If the directory does not exist, you see the message shown in Figure 8-28. Click OK to continue or Cancel to change the directory.

Figure 8-28 Directory does not exist


15.You see the window shown in Figure 8-29 indicating that Productivity Center for Data has verified your entries and is ready to start the installation. Click Next to start the installation.

Figure 8-29 Ready to start the installation


16.During the installation, you see a progress indicator. When the installation is finished, you see the Install Progress panel (Figure 8-30). Click Done to exit the installer.

Figure 8-30 Installation completed

The IBM TotalStorage Productivity Center for Data remote console is now installed. If the TotalStorage Productivity Center Launchpad is installed, it detects that Productivity Center for Data remote console is available the next time the LaunchPad is started. The LaunchPad can now be used to start Productivity Center for Data.

8.4 Configuring Data Manager for Databases


Complete the following steps before attempting to monitor your databases with Data Manager.
1. Go to Administrative Services → Configuration → General → License Keys and double-click IBM TPC for Data - Databases (Figure 8-31 on page 314).


Figure 8-31 TPC for Data - Databases License Keys

2. From the list of agents, select those you wish to monitor by checking the box under Licensed (Figure 8-32). After checking the desired boxes, click the RDBMS Logins tab.

Figure 8-32 TPC for Data - Databases Licensing tab


3. To successfully scan a database, you must provide a login name and password for each instance. Click Add New... (Figure 8-33).

Figure 8-33 RDBMS Logins

4. In the RDBMS Login Editor window, enter the required information (an illustrative example follows Figure 8-34):
- Database - the database type you wish to monitor
- Agent Host - the host you wish to monitor
- Instance - the name of the instance
- User - the login ID for the instance
- Password - the password for the instance
- Port - the port on which the database is listening

Figure 8-34 RDBMS Login Editor
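As an illustration only, a login entry for a DB2 instance might look like the following; the host, instance name, and user shown here are hypothetical, and 50000 is the usual default DB2 connection port:

Database: DB2 (choose the database type offered in the drop-down list)
Agent Host: gallium
Instance: DB2
User: db2admin
Password: (the password for that user)
Port: 50000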


5. After the database is successfully registered, click OK (Figure 8-35).

Figure 8-35 RDBMS successfully registered

8.5 Alert Disposition


This section describes the available alerting options that you can configure. These options define how alerts are generated when a corresponding event is discovered. The panel, shown in Figure 8-36, is reached by going to Administrative Services → Configuration → General → Alert Disposition.

Figure 8-36 Alert Disposition panel


You can specify these parameters:

SNMP
- Community - The name of the SNMP community for sending traps
- Host - The system (event manager) that will receive the traps
- Port - The port to which traps will be sent (the standard port is 162)

TEC (Tivoli Enterprise Console)
- TEC Server - The system (TEC) that will receive the traps
- TEC Port - The port to which traps will be sent (the standard port is 5529)

E-mail
- Mail Server - The mail server that will be used for sending the e-mail
- Mail Port - The port used for sending the mail to the mail server
- Default Domain - The default domain to be used for sending the e-mail
- Return To - The return address for undeliverable e-mail
- Reply To - The address to use when replying to an alert-triggered e-mail

Alert Log Disposition
- Delete Alert Log Records older than - How long alert log records are kept before they are deleted


Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric


This chapter explains the steps that you must follow, after you install IBM TotalStorage Productivity Center for Fabric from the CD, to configure the environment. Refer to 4.3.6, IBM TotalStorage Productivity Center for Fabric on page 157, which shows the installation procedure for installing IBM TotalStorage Productivity Center for Fabric using the Suite Installer. IBM TotalStorage Productivity Center for Fabric is a rebranding of IBM Tivoli Storage Area Network Manager. Since the configuration process has not changed, the information provided is still applicable. This IBM Redbook complements the IBM Redbook IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848. You may also want to refer to that redbook to learn about design or deployment considerations, which are not covered in this redbook.


9.1 TotalStorage Productivity Center component interaction


This section discusses the interaction between IBM TotalStorage Productivity Center for Fabric and the other IBM TotalStorage Productivity Center components, as well as external products and devices. IBM TotalStorage Productivity Center for Fabric uses standard calls to devices to gather and provide information about your environment.

9.1.1 IBM TotalStorage Productivity Center for Disk and Replication Base
When a supported storage area network (SAN) Manager is installed and configured, IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for Replication leverage the SAN Manager to provide enhanced function. Along with basic device configuration functions, such as logical unit number (LUN) creation, allocation, assignment, and deletion for single and multiple devices, basic SAN management functions such as LUN discovery, allocation, and zoning are provided in one step. In Version 2.1 of TotalStorage Productivity Center, IBM TotalStorage Productivity Center for Fabric is the supported SAN Manager. The SAN Manager functions that are exploited are:
- The ability to retrieve SAN topology information, including switches, hosts, ports, and storage devices.
- The ability to retrieve and to modify the zoning configuration on the SAN.
- The ability to register for event notification. This ensures that IBM TotalStorage Productivity Center for Disk is aware when the topology or zoning changes as new devices are discovered by the SAN Manager, and when host LUN configurations change.

9.1.2 SNMP
IBM TotalStorage Productivity Center for Fabric acts as a Simple Network Management Protocol (SNMP) manager to receive traps from managed devices in the event of status changes or updates. These traps are used to manage all the devices that Productivity Center for Fabric is monitoring and to provide the status window shown by NetView. The traps can then be passed on to a product, such as Tivoli Enterprise Console (TEC), for central monitoring and management of multiple devices and products within your environment. When you use the IBM TotalStorage Productivity Center Suite Installer, the SNMP configuration is performed for you. If you install IBM TotalStorage Productivity Center for Fabric manually, you need to configure SNMP yourself. The NetView code that is provided when you install IBM TotalStorage Productivity Center for Fabric is to be used only for this product. If you configure this NetView as your SNMP listening device for purposes other than IBM TotalStorage Productivity Center for Fabric, you need to purchase the relevant NetView license.
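As a quick check after a manual installation (assuming a default Windows setup), you can verify from a command prompt on the Productivity Center for Fabric server that the trap port is in use; SNMP traps normally arrive on UDP port 162:

c:\>netstat -an -p udp | find ":162"

If nothing is listed, no trap receiver is listening on this system, and status changes reported by the devices will not be seen.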

9.1.3 Tivoli Provisioning Manager


Tivoli Provisioning Manager uses IBM TotalStorage Productivity Center for Fabric when it performs its data resource provisioning. Provisioning is the use of workflows to provide resources (data or server) whenever workloads exceed specified thresholds and dictate that a resource change is necessary to continue to satisfy service-level agreements or business objectives. If the new resources are data resources which are part of the SAN fabric, then IBM TotalStorage Productivity Center for Fabric is invoked to provide LUN allocation, path definition, or zoning changes as necessary.


Refer to the IBM Redbook Exploring Storage Management Efficiencies and Provisioning: Understanding IBM TotalStorage Productivity Center and IBM TotalStorage Productivity Center with Advanced Provisioning, SG24-6373, which presents an overview of the product components and functions. It explains the architecture and shows the use of storage provisioning workflows.

9.2 Post-installation procedures


This section discusses the next steps that we performed after the initial product installation from the CD, to take advantage of the function IBM TotalStorage Productivity Center for Fabric provides. After you install the Fabric Manager, you need to decide on which machines you will install the Agent and on which machines you will install the Remote Console. The following sections show how to install these components.

9.2.1 Installing Productivity Center for Fabric Agent


This section explains how to install the Productivity Center for Fabric Agent. The installation must be performed by someone who has a user ID with administrator rights (Windows) or root authority (UNIX). We used the Suite Installer to install the Agent. You can also install directly from the appropriate subdirectory on the CD. Because the installation is Java based, it looks the same on all platforms. 1. In the window that opens, select the language for installation. We chose English. Click Next. You will see the Welcome screen (Figure 9-1). Click Next to continue.

Figure 9-1 Welcome screen


2. The next screen (see Figure 9-2) is the Software License Agreement. Click I accept the terms in the license agreement and click Next to continue.

Figure 9-2 License Agreement

3. In the Suite Installer panel (Figure 9-3), select the Agent installations of Data, Fabric, and CIM Agent option. Then click Next.

Figure 9-3 Suite Installer panel for selecting Agent installations


4. In the next window, select one or more Agent components (see Figure 9-4). In this example, we chose The Productivity Center for Fabric - Agent option. Click Next.

Figure 9-4 Agent type selection panel

5. In the next panel, confirm the components to install. See Figure 9-5. Click Next.

Figure 9-5 Productivity Center for Fabric - Agent confirmation


6. As shown in Figure 9-6, enter the install package location. In our case, we installed from the I: drive. You will most likely use the product CD. Click Next.

Figure 9-6 Input location selection panel

7. A panel (Figure 9-7) opens indicating that the Productivity Center for Fabric installer will be launched. At this point, the Suite Installer is invoking the Installer process for the individual agent install. If you install the Agent directly from the CD, without using the Suite Installer, you commence the process after this point. The Suite Installer masks a few displays from you when it calls the product installer. Click Next.

Figure 9-7 Product Installer will be launched


8. In the window that opens, select the language for installation. We chose English. Click Next.

Figure 9-8 Welcome screen

9. As shown in Figure 9-9 on page 326, specify the Fabric Manager Name and Fabric Manager Port Number. For the Fabric Manager Name, type the name of the machine where the Productivity Center for Fabric - Manager is installed. If it is in a different domain, you must fully qualify the server name. In our case, colorado is the machine name of the server where Productivity Center for Fabric - Manager is installed. The port number is automatically inserted, but you can change it if you used a different port when you installed the Productivity Center for Fabric - Manager. Click Next.


Figure 9-9 Fabric Manager name and port option

10.The Host Authentication password is entered in the panel shown in Figure 9-10. This was specified during the Agent Manager install and is used for agent installs.

Figure 9-10 Host Authentication password


11.In the next panel (Figure 9-11), you have the option to change the default installation directory. We clicked Next to accept the default.

Figure 9-11 Selecting the installation directory

12.The Agent Information panel (Figure 9-12) asks you to specify a label which is applied to the Agent on this machine. We used the name of the machine. The port number is the port through which this Agent communicates. Click Next.

Figure 9-12 Agent label and port


13.In the panel in Figure 9-13, specify the account that the Fabric Agent is to run under. We used the Administrator account.

Figure 9-13 Fabric agent account

14.In the Agent Management Information panel (Figure 9-14 on page 329), enter the location of the Tivoli Agent Manager. In our configuration, colorado is the machine name in our Domain Name Server (DNS) where Tivoli Agent Manager is installed. The Registration Password is the password that you used when you installed Tivoli Agent Manager. Click Next.


Figure 9-14 Agent Manager information

15.Finally you see a confirmation panel (Figure 9-15) that shows the installation summary. Review the information and click Next.

Figure 9-15 Installation summary panel


16.You see the installation status bar. Then you see a panel indicating a successful installation (Figure 9-16). Click Finish.

Figure 9-16 Successful installation panel

17.The panel in Figure 9-17 indicates the successful install of the Fabric agent.

Figure 9-17 Successful install of fabric agent panel.

18.You return to the Suite Installer window where you have the option to install other Agents. Click Cancel to finish.


Upon successful installation, you notice that nothing is added to the Start menu. The only evidence that the Agent is installed and running is that a Service is automatically started. Figure 9-18 shows the started Services in our Windows environment.

Figure 9-18 Common Agent Service indicator
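If you prefer the command line to the Services panel, you can also confirm that the common agent service exists and is running. The exact service name can vary by release, so the query below simply filters the active services for entries containing the word agent:

c:\>sc query | findstr /i "agent"

If no matching service is listed, review the Agent installation log before continuing.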

If you look in the Control Panel, under Add/Remove Programs, there is now an entry for IBM TotalStorage Productivity Center for Fabric - Agent. To remove the Agent, you click this entry.

9.2.2 Installing Productivity Center for Fabric Remote Console


This section explains how to install the Productivity Center for Fabric Remote Console.

Pre-installation tasks
Before you begin the installation, make sure that you have met the requirements that are discussed in the following sections.

SNMP service installed


Make sure that you have installed the SNMP service and have an SNMP community name of Public defined. For more information, see 3.10, Installing SNMP on page 73.
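You can quickly confirm from a command prompt that the Windows SNMP service is started (it is normally listed as SNMP Service) by filtering the list of running services:

c:\>net start | find /i "SNMP"

If nothing is returned, install and start the SNMP service before you continue with the Remote Console installation.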

Existing Tivoli NetView installation


If you have an existing Tivoli NetView 7.1.4 installation, you can use it with the Productivity Center for Fabric installation. If you have any other version installed, you must uninstall it before you install the Productivity Center for Fabric Remote Console.

Installing the console


The Productivity Center for Fabric Console remotely displays information about the monitored SAN. A user who has Administrator rights must perform the installation. At the time of writing this redbook, this installation was supported on the Windows 2000 and Windows XP platforms. The following steps show a successful installation. We used the Suite Installer to install the Console, and the following windows reflect that process. 1. Select the language. We selected English. The next panel (see Figure 9-19) is the installer Welcome panel.

Figure 9-19 Installer Welcome panel.


2. The next screen (see Figure 9-20) is the Software License Agreement. Click I accept the terms in the license agreement and click Next to continue.

Figure 9-20 License agreement panel

3. The first Suite Installer window (Figure 9-21) opens. Select the User interface Installations of Data, Fabric, and Replication option and then click Next.

Figure 9-21 Suite Installer for selecting Console


4. In the next panel (Figure 9-22), select one or more remote GUI or command line components. To install the console, select The Productivity Center for Fabric - Remote GUI Client. Click Next.

Figure 9-22 Selecting the Remote Console

5. In the installation confirmation panel (Figure 9-23), click Next.

Figure 9-23 Remote GUI Client installation confirmation


6. As shown in Figure 9-24, enter the location of the source code for the installation. In most cases, this is the product CD drive. In our case, we installed the code from the E: drive. Click Next.

Figure 9-24 Source code location panel

7. The next panel (Figure 9-25) indicates that the Fabric Installer will be launched. If you install the Agent directly from the CD, without using the Suite Installer, you begin the process after this point. The Suite Installer masks a few displays from you when it calls the product installer. Click Next.

Figure 9-25 Installer will be launched


8. Select the language. We selected English. The Suite Installer launches the Fabric installer (see Figure 9-26).

Figure 9-26 Productivity Center for Fabric installer launched

9. The InstallShield Wizard opens for IBM TotalStorage Productivity Center for Fabric Console (see Figure 9-27, InstallShield Wizard for Console on page 336). Click Next.

Figure 9-27 InstallShield Wizard for Console


10.In the next panel, you can specify the location of the directory into which the product will be installed. Figure 9-28 shows the default location. Click Next.

Figure 9-28 Default installation directory

11.Specify the name and port number of the host where the Productivity Center for Fabric Manager is installed. See Figure 9-29. Click Next.

Figure 9-29 Productivity Center for Fabric Manager details


12.In the next panel (Figure 9-30), specify a starting port number from which the installer will allocate a series of ports for communication. We used the default. Click Next.

Figure 9-30 Starting port number

13.Type the password that you will use for all remote consoles or that the managed hosts will use for authentication with the manager (see Figure 9-31). This password must be the same as the one you entered in the Fabric Manager Installation. Click Next.

Figure 9-31 Host Authentication panel


14.Specify the drive where NetView is to be installed or accept the default (see Figure 9-32). Click Next.

Figure 9-32 Selecting the NetView installation drive

15.As shown in Figure 9-33, specify a password which will be used to run the NetView Service. Then click Next.

Figure 9-33 NetView Service password


16.A panel opens that displays a summary of the installation (Figure 9-34). Click Next to begin the installation.

Figure 9-34 Summary panel

17.The installation completes successfully as indicated by the message in the panel shown in Figure 9-35. Click Next.

Figure 9-35 Installation successful message


18.You are prompted to restart your machine (Figure 9-36). You may elect to restart immediately or at another time. We chose Yes, restart my computer. Click Finish.

Figure 9-36 restart Computer request

After rebooting your system, you see a new Service is automatically started, as shown in Figure 9-37.

Figure 9-37 NetView Service

To start the Remote Console, click Start → Programs → Tivoli NetView → NetView Console.


9.3 Configuring IBM TotalStorage Productivity Center for Fabric


This section explains how to configure IBM TotalStorage Productivity Center for Fabric.

9.3.1 Configuring SNMP


When using the IBM TotalStorage Productivity Center Suite Installer, the SNMP configuration is performed for you. If you install IBM TotalStorage Productivity Center for Fabric manually, then you need to configure the Productivity Center for Fabric. There are several ways to configure Productivity Center for Fabric for SNMP traps.

Method 1: Forward traps to the local Tivoli NetView console


In this scenario, you set up the devices to send SNMP traps to the NetView console, which is installed on the Productivity Center for Fabric Server. Figure 9-38 shows an example of this setup.


Figure 9-38 SNMP traps to local NetView console


NetView listens for SNMP traps on port 162, and the default community is public. When a trap arrives at the Tivoli NetView console, it is logged in the NetView Event browser and then forwarded to Productivity Center for Fabric as shown in Figure 9-39. Tivoli NetView is configured during the installation of the Productivity Center for Fabric Server to forward traps to the Productivity Center for Fabric Server.


Figure 9-39 SNMP trap reception

NetView forwards SNMP traps to the defined TCP/IP port, which is the sixth port derived from the base port defined during installation. We used the base port 9550, so the trap forwarding port is 9556. With this setup, the SNMP trap information appears in the NetView Event browser. Productivity Center for Fabric uses this information for changing the topology map. Note: If the traps are not forwarded to Productivity Center for Fabric, the topology map is updated based on the information coming from Agents at regular polling intervals. The default Productivity Center for Fabric Server installation (including the NetView installation) sets up the trap forwarding correctly.

Existing NetView installation


If you installed Productivity Center for Fabric with an existing NetView, you need to set up trap forwarding: 1. Configure the Tivoli NetView trapfrwd daemon. Edit the trapfrwd.conf file in the \usr\ov\conf directory. This file has two sections: Hosts and Traps. a. Modify the Hosts section to specify the host name and port to forward traps to (in our case, port 9556 on host COLORADO.ALMADEN.IBM.COM). b. Modify the Traps section to specify which traps Tivoli NetView should forward. The traps to forward for Productivity Center for Fabric are:
1.3.6.1.2 * (includes MIB-2 traps and McDATA's FC Management MIB traps)
1.3.6.1.3 * (includes FE MIB and FC Management MIB traps)
1.3.6.1.4 * (includes proprietary MIB traps and QLogic's FC Management MIB traps)


Example 9-1 shows a sample trapfrwd.conf file.


Example 9-1 trapfrwd.conf file
[Hosts]
#host1.tivoli.com 0
#localhost 1662
colorado.almaden.ibm.com 9556
[End Hosts]
[Traps]
#1.3.6.1.4.1.2.6.3 *
#mgmt
1.3.6.1.2 *
#experimental
1.3.6.1.3 *
#Andiamo
1.3.6.1.4.1.9524 *
#Brocade
1.3.6.1.4.1.1588 *
#Cisco
1.3.6.1.4.1.9 *
#Gadzoox
1.3.6.1.4.1.1754 *
#Inrange
1.3.6.1.4.1.5808 *
#McData
1.3.6.1.4.1.289 *
#Nishan
1.3.6.1.4.1.4369 *
#QLogic
1.3.6.1.4.1.1663 *
[End Traps]

2. The trapfrwd daemon must be running before traps are forwarded. Tivoli NetView does not start this daemon by default. To configure Tivoli NetView to start the trapfrwd daemon, enter these commands at a command prompt:
ovaddobj \usr\ov\lrf\trapfrwd.lrf
ovstart trapfrwd

3. To verify that trapfrwd is running, in NetView, select Options → Server Setup. In the Server Setup Tivoli NetView window (Figure 9-40 on page 345), you see that trapfrwd is running.


Figure 9-40 Trapfwd daemon
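You can also check the daemon from a command prompt instead of the GUI. Assuming the NetView bin directory is in your PATH, the ovstatus command reports the state of an individual daemon:

ovstatus trapfrwd

If the daemon is not reported as running, repeat the ovaddobj and ovstart commands shown above.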

After trap forwarding is enabled, configure the SAN components, such as switches, to send their SNMP traps to the NetView console. Note: This type of setup gives you the best results, especially for devices where you cannot change the number of SNMP recipients and the destination ports.

Method 2: Forward traps directly to Productivity Center for Fabric


In this example, you configure the SAN devices to send SNMP traps directly to the Productivity Center for Fabric Server. The receiving port number is the primary port number plus six ports. In this case, traps are only used to reflect the topology changes and they are not shown in the NetView Event browser. Note: Some of the devices do not allow you to change the SNMP port. They only send traps to port 162. In such cases, this scenario is not useful.


Method 3: Traps to the Productivity Center for Fabric and SNMP console
In this example, you set up the SAN devices to send SNMP traps to both the Productivity Center for Fabric Server and to a separate SNMP console, which you installed in your organization. See Figure 9-41.


Figure 9-41 SNMP traps for two destinations

The receiving port number for the Productivity Center for Fabric Server is the primary port number plus six ports. The receiving port number for the SNMP console is 162. In this case traps are used to reflect the topology changes and they will display in the SNMP console events. The SNMP console, in this case, can be another Tivoli NetView installation or any other SNMP management application. For such a setup, the devices have to support setting multiple traps receivers and changing the trap destination port. Since this functionality is not supported in all devices, we do not recommend this scenario.

9.3.2 Configuring the outband agents


Productivity Center for Fabric Server uses agents to discover the storage environment and to monitor its status. These agents are set up in the Agent Configuration panel. 1. From the NetView console, select SAN Configuration Configure Manager.


2. The SAN Configuration window (Figure 9-42) opens. a. Select the Switches and Other SNMP Agents tab on the left side.

Figure 9-42 Selecting switches and other SNMP agents

b. You see the outband agents in the right panel. Define all the switches in the SAN that you want to monitor. To define such an Agent, click Add. c. The Enter IP Address window (Figure 9-43) opens. Enter the host name or IP address of the switch and click OK.

Figure 9-43 Outband agent definition

d. The agent appears in the agent list as shown in Figure 9-42. The state of the agent must be Contacted if you want Productivity Center for Fabric to get data from it. e. To remove an already defined agent, select it and click Remove.

Defining a logon ID for zone information


Productivity Center for Fabric can retrieve the zone information from IBM Fibre Channel Switches and from Brocade Silkworm Fibre Channel Switches. To accomplish this, Productivity Center for Fabric uses application programming interface (API) calls to retrieve zoning information. To use this API, Productivity Center for Fabric must log in to the switch with administrative rights. If you want to see zoning information, you need to specify the login ID for the Agents you define.


Here is the procedure: 1. In the SAN Configuration window (Figure 9-42 on page 347), select the defined Agent and click Advanced. 2. In the SNMP Agent Configuration window (Figure 9-44), enter the user name and password for the switch login. Click OK to save this information.

Figure 9-44 Logon ID definition

You can now see zone information for your switches. Tip: You need to enter user ID and password information for only one switch in each SAN to retrieve the zoning information. We recommend that you enter this information for at least two switches for redundancy. Enabling more switches than necessary for API zone discovery may slow performance.

9.3.3 Checking inband agents


After you install agents on the managed systems, as explained in 9.2.1, Installing Productivity Center for Fabric Agent on page 321, the Agents should appear in the Agent Configuration window with an Agent state of Contacted (see Figure 9-42 on page 347). If the Agent does not appear in the panel, check the Agent log file for the cause. You can only remove Agents which are no longer responding to the server. Such Agents display a status of Not responding, as shown in Figure 9-45.

Figure 9-45 Not responding inband Agent


9.3.4 Performing an initial poll and setting up the poll interval


After you set up the Agents and devices for use with the Productivity Center for Fabric Server, you perform the initial poll. You can poll manually using the SAN Configuration panel (Figure 9-46): 1. In NetView, select SAN Configure. 2. In the SAN Configuration window, click Poll Now to perform a manual poll. Note: Polling takes time and depends on the size of the SAN. 3. If you did not configure trap forwarding for the SAN devices (as described in 9.3.1, Configuring SNMP on page 342), you must define the polling interval. In this case, topology changes will not be event driven from the devices, but will be picked up regularly at the polling interval. You set the poll interval in the SAN Configuration panel (Figure 9-46). You can specify the polling interval in:
Minutes
Hours
Days: You can specify the time of the day for polling.
Weeks: You can specify the day of the week and the time of the day for polling.

After you set the poll interval, click OK to save the changes.

Figure 9-46 SAN Configuration

Tip: You do not need to configure the polling interval if all your devices are set to send SNMP traps to the local NetView console or the Productivity Center for Fabric Server.


Chapter 10. Deployment of agents

Chapter 9, Configuring IBM TotalStorage Productivity Center for Fabric on page 319, covers the installation of the managers that are the central part of IBM TotalStorage Productivity Center. During that installation, the Resource Managers of Productivity Center for Data and Productivity Center for Fabric were installed and registered to a Tivoli Agent Manager, either an existing one, or one that was installed as a prerequisite in the first phase of the installation. This chapter explains how to set up the individual agents (subagents) on a managed host. The agents of Data Manager and Fabric Manager are called subagents, because they reside within the scope of the common agent.


10.1 Installing the agents


There are two ways to set up a new subagent on a host, depending on the state of the target machine:
The common agent is not installed. In this case, install the software using an installer.
The common agent is installed. In this case, deploy the agent from the Data Manager or Fabric Manager, or install it using the installer.
To install the agent, follow these steps: 1. In the Suite Installer panel (Figure 10-1), select Agent installations of Data, Fabric, and CIM Agent. Click Next.

Figure 10-1 Suite installer installation action


2. In the next panel (Figure 10-2), select one or more agents to install. The options include the IBM TotalStorage Enterprise Storage Server (ESS) Common Information Model (CIM) Agent. However, this agent does not use any functions of Tivoli Agent Manager.

Figure 10-2 Agent selection panel

The next window asks you to enter the location of the installation code. Then the panel that follows tells you that the individual product installer is launched and you are asked to interact with it.


10.2 Data Agent installation using the installer


After the product installer for IBM TotalStorage Productivity Center for Data starts, you can choose to install, uninstall, or apply maintenance to this component. Because no component of Productivity Center for Data is installed on the server yet, only the install option is available. 1. In the Install window (Figure 10-3), select Install Productivity Center for Data and click Next.

Figure 10-3 IBM TotalStorage Productivity Center for Data installation action


2. A window opens showing the license agreement (see Figure 10-4). Select the I have read and AGREE to abide by the license agreement above check box and click Next.

Figure 10-4 License agreement

3. The License Agreement Confirmation window (Figure 10-5) opens. It asks you to confirm that you have read the license agreement. Click Yes.

Figure 10-5 License Agreement Confirmation


4. In the next panel (Figure 10-6), choose the option of Productivity Center for Data that you want to install. To install the agent locally on the same machine on which the installer is currently running, select An agent on this machine and click Next.

Figure 10-6 Installation options


5. In the Productivity Center for Data Parameters panel (Figure 10-7), enter the server name and port of your Productivity Center for Data Manager. In our environment, the name of the server was gallium, and the port was the default, which is 2078. We did not need to use the fully qualified host name, but this may be different in your environment. Click Next.

Figure 10-7 Data Manager server details

The installer tries to contact the Data Manager server. If this is successful, you see a message like Server gallium:2078 connection successful - server parameters verified in the Progress Log section of the installation window (Figure 10-7). 6. The installer checks whether a common agent is already installed on the machine. Because in our environment no common agent was installed on the machine, the installer issues the message No compatible Common Agents were discovered so one will be installed. See the Progress log in Figure 10-8 on page 358.


7. As shown in Figure 10-8, enter the parameters of the agent: a. Use the suggested default port 9510. b. Deselect Agent should perform a SCAN when first brought up because this may take a long time and you want to schedule this during the night. c. Leave Agent may run scripts sent by server as selected. d. The Agent Registration Information is the password that you specified during the installation of the Tivoli Agent Manager. Note: Do not change the common agent port, because this may prevent the deployment of agents later. e. Click Next to continue the installation.

Figure 10-8 Parameter for the common agent and Data Agent


8. In the Space Requirements panel (Figure 10-9), accept the default directory for the common agent installation. Click Next to proceed with the installation.

Figure 10-9 Common Agent installation directory

9. If the directory that you specify does not exist, you see the message shown in Figure 10-10. Click OK to acknowledge this message and continue the installation.

Figure 10-10 Creating the directory


10.Figure 10-11 shows the last panel before the installation starts. Review the progress log. If you want to review the parameters, click Prev to go to the previous panels. Then click Next.

Figure 10-11 Review settings


11.The installation starts and displays the progress in the Install Progress window (Figure 10-12). The progress bar is not shown in the picture, but you can see the messages in the progress log. When the installation is complete, click Done to end the installation.

Figure 10-12 Installation progress

12.The installer closes now, and the Suite Installer is active again. It also reports the successful installation. Click Next to return to the panel shown in Figure 10-1 on page 352 to install another agent (for example a Fabric Agent) or click Cancel to exit the installation.

10.3 Deploying the agent


The deployment of an agent is a convenient way to install a subagent onto multiple machines at the same time. You can also use this method if you do not want to install agents directly on each machine where an agent should be installed. The most important prerequisite software to install on the target machines is the common agent. If the common agent is not already installed on the target machine, the deployment will not work. For example, if you installed one of the two Productivity Center agents on the targets, you can deploy the other agent using the methods described here. At the time of this writing, the Suite Installer does not have the option to deploy agents, so you have to use the native installer setup.exe program for Fabric Manager. The packaging of Data Manager is different, and you can use the Suite Installer to install it.


Note: For agent deployment, you do not need to have the certificate files available, because the target machines already have the necessary certificates installed during the common agent installation.

Data Agent
You can perform this installation from any machine. It does not have to be the Data Manager server itself. When you use the Suite Installer, there is no option to deploy agents. However, you can choose to install an agent, to launch the Data Manager installer, and then deploy an agent instead of installing it (see Figure 10-6 on page 356). We did not use the Suite Installer for the agent deployment. 1. Start the installer by running setup.exe from the Data Manager installation CD. 2. After a few seconds, you see the panel shown in Figure 10-13. If you have the Data Manager or the agent already installed on the machine where you started the installer, you can also select Uninstall Productivity Center for Data or Apply maintenance to Productivity Center for Data. Click Next.

Figure 10-13 Productivity Center for Data Installation action

3. A window opens displaying the license agreement (see Figure 10-4 on page 355). Follow steps 2 and 3 on page 355.


4. In the next window that opens (Figure 10-14), select Agents on other machines. Then click Next.

Figure 10-14 Productivity Center for Data Install agents options


5. In the Productivity Center for Data Parameters panel (Figure 10-15), enter the Productivity Center for Data server name and the port number. Then click Next.

Figure 10-15 Data Manager server details


6. The installer tries to verify your input by connecting to the Data Manager server. The message Server gallium:2078 connection successful - server parameters verified is displayed in the progress log (see Figure 10-16) if it is successful. Click Next. 7. In our environment, we did not have a Windows domain, so we entered the details of the target machines manually. Click Manually Enter Agents.

Figure 10-16 Select the Remote Agents to install: Manually entering Agents


8. In the Manually Enter Agent window (Figure 10-17), enter the IP address or host name of the target computer and the user ID and password of a valid Windows user on that machine. You can enter more than one machine here only if all the machines can be managed with the same user ID and password. Click OK after you enter all computers that can be managed with the same user ID.

Figure 10-17 Manually Enter Agents panel


9. The list of computers on which the subagent will be installed is updated and now appears as shown in Figure 10-18. If you want to install the subagent onto a second computer, but that computer uses a different user ID than the previous one, click Manually Enter Agents again to enter the information for that second computer. Repeat this step for every computer that uses a different user ID and password. After you enter all target computers, click Next.

Figure 10-18 Selecting the Remote Agents to install: Computers targeted for a remote agent install

10.At this time, the installer tries to contact the common agent on target computers to get information about them. This may take a while, so at first you cannot select anything in the window that is presented next (see Figure 10-19 on page 368). Look at the progress log in the lower section of the window to determine what is currently happening. If the installer cannot contact the target computer, verify that the common agent is running. You can do that by looking at the status of the Windows services of the target machine. Another way is to open a telnet connection from a Command Prompt to that machine on port 9510.
c:\>telnet 9.1.38.104 9510

If the common agent is running, it listens for requests on that port and opens a connection. You simply see a blank screen. If the common agent is not running, you see the message Connecting To 9.1.38.104...Could not open a connection to host on port 9510 : Connect failed. When the installer is done with this step, you see the message Productivity Center for Data subagent on 9.1.38.104 will be installed at C:\Program Files\tivoli\ep\TPC\Data. Deselect Agent should perform a SCAN when first brought up, because this may take a long time and you want to schedule this during the night. Click Install to start the deployment.


Figure 10-19 Common agent status
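As an alternative to the telnet test, you can check on the target machine itself whether the common agent is listening on its port (assuming the default common agent port 9510 was used):

c:\>netstat -an | find "9510"

If the port does not appear in the LISTENING state, start the common agent service on the target machine before you retry the deployment.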

11.When the deployment is finished, you see the message shown in Figure 10-20. Review the progress log. Click OK to end the installation.

Figure 10-20 Agent deployment installation completed


Fabric Agent
There are differences between the Data Agent deployment and the Fabric Agent deployment. To remotely deploy one or more Fabric Manager subagents, you must be logged on to the Fabric Manager server. This is different from the Data subagent deployment, where you can start the installation from any machine. At this time, there is no way to use the Suite Installer, so you have to use the native Fabric Manager installer. The Fabric Manager comes with a separate package for the Fabric Agent. Data Manager comes with only one installation program for all the possible install options (server, agent, remote agent, or GUI). To start the deployment, you start the Fabric Manager installer. You do not start the installer for the Fabric Agent. 1. Launch setup.exe from the Fabric Manager installation media. 2. After a Java Virtual Machine is prepared and you select the language of the installer, a window opens that prompts you to select the type of installation to perform. See Figure 10-21. Select Remote Fabric Agent Deployment and click Next.

Figure 10-21 Installation action

3. A Welcome window opens. Click Next.


4. The IBM License Agreement Panel (Figure 10-22) opens. Select I accept the terms in the license agreement and click Next.

Figure 10-22 License agreement

5. The installer connects to the Tivoli Agent Manager and presents a list of hosts. Select the hosts to deploy the agents. See Figure 10-23. Click Next to start the deployment.

Figure 10-23 Remote host selection


6. The next panel (Figure 10-24) displays the selected hosts. Verify the information. You can click Back to change your selection or click Next to start the installation.

Figure 10-24 Remote host confirmation

7. When the installation is completed, you see a summary window similar to the example in Figure 10-25. Click Finish.

Figure 10-25 Agent Deployment summary

Your agent should now be installed on the remote hosts.


Part 4. Using the IBM TotalStorage Productivity Center


In this part of the book we provide information about using the components of the IBM TotalStorage Productivity Center product suite. We include a chapter filled with hints and tips about setting up the IBM TotalStorage Productivity Center environment and problem determination basics, as well as a chapter on maintaining the DB2 database.


Chapter 11. Using TotalStorage Productivity Center for Disk


This chapter provides information about the functions of the Productivity Center common base. The topics include:
Launching and logging on to TotalStorage Productivity Center
Launching device managers
Performing device inventory collection
Working with the ESS, DS6000, and DS8000 families
Working with SAN Volume Controller
Working with the IBM DS4000 family (formerly FAStT)
Event management


11.1 Productivity Center common base: Introduction


Before using Productivity Center common base features, you need to perform some configuration steps. This permits you to detect the storage devices to be managed. Version 2.3 of Productivity Center common base permits you to discover and manage:
ESS 2105-F20, 2105-800, and 2105-750
DS6000 and DS8000 family
SAN Volume Controller (SVC)
DS4000 family (formerly the FAStT product range)
Provided that you have discovered a supported IBM storage device, the Productivity Center common base storage management functions are available for drag-and-drop operations. Alternatively, right-clicking the discovered device displays a drop-down menu with all available functions specific to it. We review the available operations in the sections that follow. Note: Not all functions of TotalStorage Productivity Center are applicable to all device types. For example, you cannot display the virtual disks on a DS4000, because the virtual disk concept applies only to the SAN Volume Controller. The sections that follow cover the functions available for each of the supported device types.

11.2 Launching TotalStorage Productivity Center


Productivity Center common base along with TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication are accessed via the TotalStorage Productivity Center Launchpad (Figure 11-1) icon on your desktop. Select Manage Disk Performance and Replication to start the IBM Director console interface.

Figure 11-1 TotalStorage Productivity Center launchpad


Alternatively, access IBM Director from Windows: Start → Programs → IBM Director → IBM Director Console. Log on to IBM Director using the superuser ID and password defined at installation. Note that passwords are case sensitive. The login values are:
IBM Director Server: The host name of the machine where IBM Director is installed.
User ID: The user name to log on with. This is the superuser ID. Enter it in the form <hostname>\<username>.
Password: The case-sensitive superuser ID password.
Figure 11-2 shows the IBM Director Login panel that you see after launching IBM Director.

Figure 11-2 IBM Director Log on

11.3 Exploiting Productivity Center common base


The Productivity Center common base module adds the Multiple Device Manager submenu task on the right-hand Tasks pane of the IBM Director Console as shown in Figure 11-3 on page 378. Note: The Multiple Device Manager product has been rebranded to TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. You will still see the name Multiple Device Manager in some panels and messages. Productivity Center common base will install the following sub-components into the Multiple Device Manager menu:
Launch Device Manager
Launch Tivoli SAN Manager (now called TotalStorage Productivity Center for Fabric)
Manage CIMOMs
Manage Storage Units (menu)
Inventory Status
Managed Disks
Virtual Disks
Volumes

Note: The Manage Performance and Manage Replication tasks that you see in Figure 11-3 become visible when TotalStorage Productivity Center for Disk or TotalStorage Productivity Center for Replication are installed. Although this chapter covers Productivity Center common base, you would have installed either TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, or both.

Figure 11-3 IBM Director Console with Productivity Center common base

11.3.1 Launch Device Manager


The Launch Device Manager task may be dragged onto an available storage device. For ESS, this will open the ESS Specialist window for a chosen device. For SAN Volume Controller, it will launch a browser session to that device. For DS4000 or FAStT devices, the function is not available.

11.4 Performing volume inventory


This function is used to collect the detailed volume information from a discovered device and place it into the Productivity Center common base databases. You need to do this at least once before Productivity Center common base can start to work with a device.


When the Productivity Center common base functions are subsequently used to create or remove LUNs, the volume inventory is automatically kept up to date, and it is therefore not necessary to repeatedly run inventory collection from the storage devices. Version 2.3 of Productivity Center common base does not currently contain the full feature set of all functions for the supported storage devices. This makes it necessary to use the storage device's own management tools for some tasks. For instance, you can create new VDisks on a SAN Volume Controller with Productivity Center common base, but you cannot delete them. You need to use the SAN Volume Controller's own management tools to do this. For these types of changes to be reflected in Productivity Center common base, an inventory collection is necessary to resynchronize the storage device and the Productivity Center common base inventory. Attention: The use of volume inventory is common to ALL supported storage devices and must be performed before disk management functions are available. To start inventory collection, right-click the chosen device and select Perform Inventory Collection as shown in Figure 11-4.

Figure 11-4 Launch Perform Inventory Collection

A new panel appears (Figure 11-5 on page 380) as a progress indication that the inventory process is running. At this stage, Productivity Center common base is talking to the relevant CIMOM to collect volume information from the storage device. After a short while, the information panel indicates that the collection has been successful. You can now close this window.


Figure 11-5 Inventory collection in progress

Attention: When the panel in Figure 11-5 indicates that the collection has been done successfully, it does not necessarily mean that the volume information has been fully processed by Productivity Center common base at this point. To track the detailed processing status, launch the Inventory Status task as seen in Figure 11-6.

Figure 11-6 Launch Inventory Status


To see the processing status of an inventory collection, launch the Inventory Status task as shown in Figure 11-7.

Figure 11-7 Inventory Status

The example Inventory Status panel seen in Figure 11-7 shows the progress of the processing for a SAN Volume Controller. Use the Refresh button in the bottom left of the panel to update it with the latest progress. You can also launch the Inventory Status panel before starting an inventory collection to watch the process end to end. In our test lab the inventory process time for an SVC took around 2 minutes, end to end.


11.5 Changing the display name of a storage device


You can change the display name of a discovered storage device to something more meaningful to your organization. Right-click the chosen storage device (Figure 11-8) and select the Rename option.

Figure 11-8 Changing the display name of a storage device

Enter a more meaningful device name as in Figure 11-9 and click OK.

Figure 11-9 Entering a user defined storage device name


11.6 Working with ESS


This section covers the Productivity Center common base functions that are available when managing ESS devices. There are two ways to access Productivity Center functions for a given device, and these can be seen in Figure 11-10:
Tasks access: In the right-hand task panel, there are a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are applicable to all supported devices.
Right-click access: To access all functions available for a specific device, simply right-click it to see a drop-down menu of options for that device. Figure 11-10 shows the drop-down menu for an ESS.
Figure 11-10 also shows the functions of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. Although this chapter only covers the Productivity Center common base functions, you would always have installed either TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, or both.

Figure 11-10 Accessing Productivity Center common base functions


11.6.1 ESS Volume inventory


To view the status of the volumes available within a given ESS device, perform one of the following actions:
Right-click the ESS device and select Volumes as in Figure 11-11.
On the right-hand side under the Tasks column, drag Manage Storage Units → Volumes onto the storage device you want to query.
Tip: Before volumes can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. If you try to view volumes for an ESS that has not been inventoried, you will receive a notification that this needs to be done. To perform an inventory collection, see 11.4, Performing volume inventory on page 378.

Figure 11-11 Working with ESS volumes


In either case, in the bottom left corner, the status changes from Ready to Starting Task, and it remains this way until the volume inventory appears. Figure 11-12 shows the Volumes panel that appears for the selected ESS device.

Figure 11-12 ESS volume inventory panel

11.6.2 Assigning and unassigning ESS Volumes


From the ESS volume inventory panel (Figure 11-12), you can modify existing volume assignments by either assigning a volume to a new host port(s) or by unassigning a host from an existing volume to host port(s) mapping. To assign a volume to a host port, select the volume then click the Assign host button on the right side of the volume inventory panel (Figure 11-12). You will be presented with a panel like the one shown below in Figure 11-13 on page 386. Select from the list of available host port world wide port names (WWPNs), and select either a single host port WWPN, or select more than one by holding down the control <Ctrl> key and selecting multiple host ports. When the desired host ports have been selected for Volume assignment, click OK.


Figure 11-13 Assigning ESS LUNs

When you click OK, TotalStorage Productivity Center for Fabric is called to assist with zoning this volume to the host. If TotalStorage Productivity Center for Fabric is not installed, you will see a message panel as shown in Figure 11-14. When the volume has been successfully assigned to the selected host port, the Assign host ports panel disappears and the ESS Volumes panel is displayed once again, now reflecting the updated number of host port mappings in the Number of host ports column at the far right of the panel. Note: If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, Using TotalStorage Productivity Center for Fabric on page 703, for complete details of its operation. Also note that TotalStorage Productivity Center for Fabric is only invoked for zoning when assigning hosts to ports. It is not invoked to remove zones when hosts are unassigned.

Figure 11-14 Tivoli SAN Manager warning


11.6.3 Creating new ESS volumes


To create new ESS volumes, click the Create button in the Volumes panel shown in Figure 11-12 on page 385. The Create volume panel appears (Figure 11-15).

Figure 11-15 ESS create volume

Use the drop-down fields to select the Storage type and choose from the Available arrays on the ESS. Then enter the Volume quantity (the number of volumes that you want to create) along with the Requested size. Finally, select the host ports that you want to have access to the new volumes from the Defined host ports scrolling list. You can select multiple hosts by holding down the control key <Ctrl> while clicking hosts. When you click OK, TotalStorage Productivity Center for Fabric is called to assist with zoning the new volumes to the host(s). If TotalStorage Productivity Center for Fabric (formerly known as Tivoli SAN Manager, or TSANM) is not installed, you will see a message panel as seen in Figure 11-16. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, Using TotalStorage Productivity Center for Fabric on page 703, for complete details of its operation.

Figure 11-16 Tivoli SAN Manager warning


Figure 11-17 Remove a host path from a volume

Figure 11-18 Display ESS volume properties

11.6.4 Launch device manager for an ESS device


This option allows you to link directly to the ESS Specialist of the chosen device:
Right-click the ESS storage resource, and select Launch Device Manager.
On the right-hand side under the Tasks column, drag Manage Storage Units → Launch Device Managers onto the storage device you want to query.


Figure 11-19 ESS specialist launched by Productivity Center common base

11.7 Working with DS8000


This section covers the Productivity Center common base functions that are available when managing DS8000 devices. There are two ways to access Productivity Center functions for a given device, and these can be seen in Figure 11-20 on page 390:
Tasks access: In the right-hand task panel, there are a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are applicable to all supported devices.
Right-click access: To access all functions available for a specific device, simply right-click it to see a drop-down menu of options for that device. Figure 11-10 on page 383 shows the drop-down menu for an ESS.
Figure 11-20 on page 390 also shows the functions of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. Although this chapter only covers the Productivity Center common base functions, you would always have installed either TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, or both.


Figure 11-20 Accessing Productivity Center common base functions

11.7.1 DS8000 Volume inventory


To view the status of the volumes available within a given DS8000 device, perform one of the following actions:
Right-click the DS8000 device and select Volumes as in Figure 11-21 on page 391.
On the right-hand side under the Tasks column, drag Manage Storage Units → Volumes onto the storage device you want to query.
Tip: Before volumes can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. If you try to view volumes for a DS8000 that has not been inventoried, you will receive a notification that this needs to be done. To perform an inventory collection, see 11.4, Performing volume inventory on page 378.


Figure 11-21 Working with DS8000 volumes

In either case, in the bottom left corner, the status changes from Ready to Starting Task, and it remains this way until the volume inventory appears. Figure 11-22 shows the Volumes panel that appears for the selected DS8000 device.

Figure 11-22 DS8000 volume inventory panel


11.7.2 Assigning and unassigning DS8000 Volumes


From the DS8000 volume inventory panel (Figure 11-22 on page 391), you can modify existing volume assignments by either assigning a volume to new host port(s) or by unassigning a host from an existing volume-to-host-port mapping. To assign a volume to a host port, you can click the Assign host button on the right side of the volume inventory panel. You will be presented with a panel like the one in Figure 11-23. Select from the list of available host port worldwide port names (WWPNs), and select either a single host port WWPN, or select more than one by holding down the control <Ctrl> key and selecting multiple host ports. When the desired host ports have been selected for volume assignment, click OK.

Figure 11-23 Assigning DS8000 LUNs

When you click OK, TotalStorage Productivity Center for Fabric will be called to assist with zoning this volume to the host. If TotalStorage Productivity Center for Fabric is not installed, you will see a message panel as seen in Figure 11-24 on page 393. When the volume has been successfully assigned to the selected host port, the Assign host ports panel will disappear and the DS8000 Volumes panel will be displayed once again, now reflecting the additional host port mapping in the Number of host ports column at the far right of the panel. Note: If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, Using TotalStorage Productivity Center for Fabric on page 703 for complete details of its operation. Also note that TotalStorage Productivity Center for Fabric is only invoked for zoning when assigning hosts to ports. It is not invoked to remove zones when hosts are unassigned.


Figure 11-24 Tivoli SAN Manager warning

11.7.3 Creating new DS8000 volumes


To create new DS8000 volumes, select the Create button from the Volumes panel as seen in Figure 11-22 on page 391. The Create volume panel will appear (Figure 11-25).

Figure 11-25 DS8000 create volume

Use the drop-down fields to select the Storage type and choose from the Available arrays on the DS8000. Then enter the number of volumes you want to create in the Volume quantity field, along with the Requested size. Finally, select the host ports that should have access to the new volumes from the Defined host ports scrolling list. You can select multiple hosts by holding down the <Ctrl> key while clicking hosts. On clicking OK, TotalStorage Productivity Center for Fabric will be called to assist with zoning the new volumes to the host(s). If TotalStorage Productivity Center for Fabric is not installed, you will see a message panel as shown in Figure 11-26 on page 394.


Figure 11-26 Tivoli SAN Manager warning

11.7.4 Launch device manager for a DS8000 device


This option allows you to link directly to the device manager of the chosen DS8000:
- Right-click the DS8000 storage resource and select Launch Device Manager.
- On the right-hand side under the Tasks column, drag Managed Storage Units → Launch Device Manager onto the storage device you want to query.
We received a message that TotalStorage Productivity Center for Disk could not automatically log on (Figure 11-27). Click OK to get the DS8000 storage manager screen as shown in Figure 11-28 on page 395.

Figure 11-27 DS8000 storage manager launch warning


Figure 11-28 shows the DS8000 device manager launched by Productivity Center common base.

Figure 11-28 DS8000 device manager launched by Productivity Center common base


11.8 Working with SAN Volume Controller


This section covers the Productivity Center common base functions that are available when managing SAN Volume Controller subsystems. There are two ways to access Productivity Center functions for a given device, as shown in Figure 11-29 on page 397:
- Tasks access: The right-hand task panel lists a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are appropriate to all supported devices.
- Right-click access: To access all functions available for a specific device, right-click it to see a drop-down menu of options for that device. Figure 11-29 on page 397 shows the drop-down menu for a SAN Volume Controller.
Note: Overall, compared to the native SAN Volume Controller Web-based GUI, the SAN Volume Controller functionality offered in Productivity Center common base is fairly limited in version 2.1. There is the ability to add existing unmanaged LUNs to existing MDisk groups, but there are no tools to remove MDisks from a group or to create or delete MDisk groups. The functions available for VDisks are similarly limited: Productivity Center common base can create new VDisks in a given MDisk group, but there is little other control over the placement of these volumes. It is not possible to remove VDisks or reassign them to other hosts using Productivity Center common base.

11.8.1 Working with SAN Volume Controller MDisks


To view the properties of SAN Volume Controller managed disks (MDisks) as shown in Figure 11-30 on page 398, perform one of the following actions:
- Right-click the SVC storage resource and select Managed Disks (Figure 11-29 on page 397).
- On the right-hand side under the Tasks column, drag Managed Storage Units → Managed Disks onto the storage device you want to query.
Tip: Before SAN Volume Controller managed disk (MDisk) properties can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. If you try to use the Managed Disks function on a SAN Volume Controller that has not been inventoried, you will receive a notification that this needs to be done. Refer to 11.4, Performing volume inventory on page 378 for details on performing this operation.


Here is the panel for selecting managed disks (Figure 11-29).

Figure 11-29 Select managed disk


Next, you should see the panel shown in Figure 11-30.

Figure 11-30 The MDisk properties panel for SAN Volume Controller

Figure 11-30 shows candidate or unmanaged MDisks, which are available for inclusion in an existing MDisk group. To add one or more unmanaged disks to an existing MDisk group:
1. Select the MDisk group from the pull-down menu.
2. Select one MDisk from the list of candidate MDisks, or use the <Ctrl> key to select multiple disks.
3. Click the OK button at the bottom of the screen, and the selected MDisk(s) will be added to the MDisk group (Figure 11-31 on page 399).


Figure 11-31 Add MDisk to a managed disk group

11.8.2 Creating new MDisks on supported storage devices


Attention: The Create button, as seen in Figure 11-30 on page 398, is not for creating new MDisk groups. It is for creating new MDisks on storage devices serving the SAN Volume Controller. It is not possible to create new MDisk groups using Version 2.3 of Productivity Center common base.
1. Select the MDisk group from the pull-down menu (Figure 11-30 on page 398).
2. Click the Create button. A new panel opens to create the storage volume (Figure 11-32 on page 400).
3. Select a device accessible to the SVC (a device not marked by an asterisk). Devices marked with an asterisk are not acting as storage for the selected SAN Volume Controller. Figure 11-32 on page 400 shows an ESS with an asterisk next to it; this is because of the setup of our test environment. Make sure the device you select does not have an asterisk next to it.
4. Specify the number of MDisks in the Volume quantity field and their size in the Requested volume size field.
5. Select the Defined SVC ports that should be assigned to these new MDisks.


Note: If TotalStorage Productivity Center for Fabric is installed and configured, extra panels will appear to create appropriate zoning for this operation. See Chapter 14, Using TotalStorage Productivity Center for Fabric on page 703 for details. Click OK to start a process that creates a new volume on the selected storage device and then adds it to the SAN Volume Controller's MDisk group.

Figure 11-32 Create volumes to be added as MDisks

Productivity Center common base will now request the specified storage amount from the specified back-end storage device (see Figure 11-33).

Figure 11-33 Volume creation results


The next step is to add the MDisks to an MDisk group (see Figure 11-34).

Figure 11-34 Assign MDisk to an MDisk group


Figure 11-35 shows the result of adding mdisk4 to the selected MDisk group.

Figure 11-35 Result of adding the mdisk4 to the MDisk group

11.8.3 Create and view SAN Volume Controller VDisks


To create or view the properties of SAN Volume Controller virtual disks (VDisks) as shown in Figure 11-36 on page 403, perform one of the following actions:
- Right-click the SVC storage resource and select Virtual Disks.
- On the right-hand side under the Tasks column, drag Managed Storage Units → Virtual Disks onto the storage device you want to query.
In version 2.3 of Productivity Center common base, it is not possible to delete VDisks. It is also not possible to assign or reassign VDisks to a host after the creation process. Keep this in mind when working with storage on a SAN Volume Controller using Productivity Center common base. These tasks can still be performed using the native SAN Volume Controller Web-based GUI.


Tip: Before SAN Volume Controller virtual disk properties (VDisks) can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. If you try to use the Virtual Disk function on a SAN Volume Controller that has not been inventoried, you will receive a notification that this needs to be done. To perform an inventory collection, see 11.4, Performing volume inventory on page 378.

Figure 11-36 Launch Virtual Disks


Viewing VDisks
Figure 11-37 shows the VDisk inventory and volume attributes for the selected SAN Volume Controller.

Figure 11-37 The VDisk properties panel


Creating a VDisk
To create a new VDisk, use the Create button as shown in Figure 11-37 on page 404. You need to provide a suitable VDisk name and select the MDisk group from which you want to create the VDisk. Specify the number of VDisks to be created and the size in megabytes or gigabytes that each VDisk should be. Figure 11-38 shows some example input in these fields.

Figure 11-38 SAN Volume Controller VDisk creation

The Host ports section of the VDisk properties panel allows you to use TotalStorage Productivity Center for Fabric functionality to perform zoning actions to provide VDisk access to specific host WWPNs. If TotalStorage Productivity Center for Fabric (formerly Tivoli SAN Manager, TSANM) is not installed, you will receive a warning. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, Using TotalStorage Productivity Center for Fabric on page 703 for details on how to configure and use it.


Figure 11-39 shows that the creation of the VDisk was successful.

Figure 11-39 Volume creation results

11.9 Working with DS4000 family or FAStT storage


This section covers the Productivity Center common base functions that are available when managing DS4000 and FAStT type subsystems. There are two ways to access Productivity Center functions for a given device, as shown in Figure 11-40 on page 407:
- Tasks access: The right-hand task panel lists a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are appropriate to all supported devices.
- Right-click access: To access all functions available for the selected device, right-click it to see a drop-down menu of options for it.
Figure 11-40 on page 407 also shows the functions of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. Although this chapter only covers the Productivity Center common base functions, you would always have TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, or both installed.


11.9.1 Working with DS4000 or FAStT volumes


To view the status of the volumes available within a selected DS4000 or FAStT device, perform one of the following actions:
- Right-click the DS4000 or FAStT storage resource and select Volumes (Figure 11-40).
- On the right-hand side under the Tasks column, drag Managed Storage Units → Volumes onto the storage device you want to query.
In either case, in the bottom left corner, the status will change from Ready to Starting Task, and it will remain this way until the volume inventory is completed (see Figure 11-41 on page 408). Note: Before DS4000 or FAStT volume properties can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. Refer to 11.4, Performing volume inventory on page 378 for details.

Figure 11-40 Working with DS4000 and FAStT volumes


Figure 11-41 DS4000 and FAStT volumes panel

Figure 11-41 shows the volume inventory for the selected device. From this panel you can Create and Delete volumes or assign and unassign volumes to hosts.


11.9.2 Creating DS4000 or FAStT volumes


To create new storage volumes on a DS4000 or FAStT, select the Create button from the right side of the Volumes panel (Figure 11-41 on page 408). You will be presented with the Create volume panel as in Figure 11-42.

Figure 11-42 DS4000 or FAStT create volumes

Select the desired Storage Type and array from Available arrays using the drop-down menus. Then enter the Volume quantity and Requested volume size for the new volumes. Finally, select the host ports you wish to assign to the new volumes from the Defined host ports scroll box, holding the <Ctrl> key to select multiple ports. The Defined host ports section of the panel allows you to use TotalStorage Productivity Center for Fabric (formerly TSANM) functionality to perform zoning actions to provide volume access to specific host WWPNs. If TSANM is not installed, you will receive the warning shown in Figure 11-43. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, Using TotalStorage Productivity Center for Fabric on page 703 for details on how to configure and use it.

Figure 11-43 Tivoli SAN Manager warning

If TotalStorage Productivity Center for Fabric is not installed, click OK to continue. You should then see the panels shown below (Figure 11-44 through Figure 11-48 on page 412).


Figure 11-44 Volume creation results (1)

Figure 11-45 Volume creation results (2)


Figure 11-46 Volume creation results (3)

Figure 11-47 Volume creation results (4)


Figure 11-48 Volume creation results (5)


11.9.3 Assigning hosts to DS4000 and FAStT Volumes


Use this feature to assign hosts to an existing DS4000 or FAStT volume. To assign a DS4000 or FAStT volume to a host port, first select a volume by clicking it in the volumes panel (Figure 11-41 on page 408). Then click the Assign host button on the right side of the Volumes panel. You will be presented with a panel as shown in Figure 11-49. From the list of available host port worldwide port names (WWPNs), select either a single host port WWPN or several, by holding down the <Ctrl> key while selecting multiple host ports. When the desired host ports have been selected for host assignment, click OK.

Figure 11-49 Assign host ports to DS4000 or FAStT

The Defined host ports section of the panel allows you to use TotalStorage Productivity Center for Fabric (formerly TSANM) functionality to perform zoning actions to provide volume access to specific host WWPNs. If TSANM is not installed, you will receive the warning shown in Figure 11-50. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 14, Using TotalStorage Productivity Center for Fabric on page 703 for details on how to configure and use it.

Figure 11-50 Tivoli SAN Manager warning


If TotalStorage Productivity Center for Fabric is not installed, click OK to continue (Figure 11-51).

Figure 11-51 DS4000 volumes successfully assigned to a host

11.9.4 Unassigning hosts from DS4000 or FAStT volumes


To unassign a DS4000 or FAStT volume from a host port, first select a volume by clicking it in the volumes panel (Figure 11-41 on page 408). Then click the Unassign host button on the right side of the Volumes panel. You will be presented with a panel as in Figure 11-52. From the list of available host port worldwide port names (WWPNs), select either a single host port WWPN or several, by holding down the <Ctrl> key while selecting multiple host ports. When the desired host ports have been selected for unassignment, click OK. Note: If the Unassign host button is grayed out when you have selected a volume, there are no current host assignments for that volume. If you believe this is incorrect, it could be that the Productivity Center common base inventory is out of step with this device's configuration. This situation can arise when an administrator makes changes to the device outside of the Productivity Center common base interface. To correct this problem, perform an inventory for the DS4000 or FAStT and repeat the operation. Refer to 11.4, Performing volume inventory on page 378.

Figure 11-52 Unassign host ports from DS4000 or FAStT


TotalStorage Productivity Center for Fabric is not called to perform zoning cleanup in version 2.1. This functionality is planned in a future release.

Figure 11-53 Volume unassignment results

11.9.5 Volume properties

Figure 11-54 DS4000 or FAStT volume properties


11.10 Event Action Plan Builder


The IBM Director includes sophisticated event-handling support. Event Action Plans can be set up that specify what steps, if any, should be taken when particular events occur in the environment.

Understanding Event Action Plans


An Event Action Plan associates one or more event filters with one or more actions. For example, an Event Action Plan can be created to send a page to the network administrator's pager if an event with a severity level of critical or fatal is received by the IBM Director Server. You can include as many event filter and action pairs as needed in a single Event Action Plan. An Event Action Plan is activated only when you apply it to a managed system or group. If an event targets a system to which the plan is applied, and that event meets the filtering criteria defined in the plan, the associated actions are performed. Multiple event filters can be associated with the same action, and a single event filter can be associated with multiple actions. The action templates you can use to define actions are listed in the Actions pane of the Event Action Plan Builder window (see Figure 11-55).

Figure 11-55 Action templates
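To make these relationships concrete, the following Python sketch models a plan as a set of filter-to-action pairings applied to managed systems. It is purely illustrative: none of these classes or names exist in IBM Director, and the severity matching is simplified to a single criterion.

Example (illustrative only):

from dataclasses import dataclass, field

@dataclass
class EventFilter:
    name: str
    severities: tuple = ("critical", "fatal")   # severities this filter accepts

    def matches(self, severity: str) -> bool:
        # Real filters have many more criteria; severity alone suffices here.
        return severity in self.severities

@dataclass
class Action:
    name: str   # for example, "Page the network administrator"

@dataclass
class EventActionPlan:
    name: str
    pairings: list = field(default_factory=list)    # (EventFilter, Action) pairs
    applied_to: set = field(default_factory=set)    # managed systems or groups

    def handle_event(self, system: str, severity: str) -> list:
        # The plan fires only for systems or groups it has been applied to,
        # and only for events that satisfy at least one of its filters.
        if system not in self.applied_to:
            return []
        return [action.name for flt, action in self.pairings if flt.matches(severity)]

# Hypothetical usage: one filter paired with one action, applied to one system.
plan = EventActionPlan("Critical pager plan")
plan.pairings.append((EventFilter("Critical or fatal"), Action("Page the network administrator")))
plan.applied_to.add("server01")
print(plan.handle_event("server01", "critical"))   # ['Page the network administrator']

Note how the same action object could appear under several filters, and one filter could be paired with several actions, which mirrors the many-to-many associations described above.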


Creating an Event Action Plan


Event Action Plans are created in the Event Action Plan Builder window. To open this window from the Director Console, click the Event Action Plan Builder icon on the toolbar. The Event Action Plan Builder window is displayed (see Figure 11-56).

Figure 11-56 Event Action Plan Builder

Here are the tasks to create an Event Action Plan.
1. To begin, do one of the following actions:
- Right-click Event Action Plans in the Event Action Plans pane to access the context menu, and then select New.
- Select File → New → Event Action Plan from the menu bar.
- Double-click the Event Action Plan folder in the Event Action Plans pane (see Figure 11-57).

Figure 11-57 Create Event Action Plan


2. Enter the name you want to assign to the plan and click OK to save the new plan. The new plan entry with the name you assigned is displayed in the Event Action Plans pane. The plan is also added to the Event Action Plans task as a child entry in the Director Console (see Figure 11-58). Now that you have defined an event action plan, you can assign one or more filters and actions to the plan.

Figure 11-58 New Event Action Plan

Notes: You can create a plan without having defined any filters or actions. The order in which you build a filter, action, and Event Action Plan does not matter.
3. Assign at least one filter to the Event Action Plan using one of the following methods:
- Drag the event filter from the Event Filters pane to the Event Action Plan in the Event Action Plans pane.
- Highlight the Event Action Plan, then right-click the event filter to display the context menu and select Add to Event Action Plan.
- Highlight the event filter, then right-click the Event Action Plan to display the context menu and select Add Event Filter (see Figure 11-59 on page 419).


Figure 11-59 Add events to the action plan

The filter is now displayed as a child entry under the plan (see Figure 11-60).

Figure 11-60 Events added to action plan


4. Assign at least one action to at least one filter in the Event Action Plan using one of the following methods:
- Drag the action from the Actions pane to the target event filter under the desired Event Action Plan in the Event Action Plans pane.
- Highlight the target filter, then right-click the desired action to display the context menu and select Add to Event Action Plan.
- Highlight the desired action, then right-click the target filter to display the context menu and select Add Action.
The action is now displayed as a child entry under the filter (see Figure 11-61).

Figure 11-61 Action as child of Display Events Action Plan

5. Repeat the previous two steps for as many filter and action pairings as you want to add to the plan. You can assign multiple actions to a single filter and multiple filters to a single plan. Note: The plan you have just created is not active because it has not been applied to a managed system or a group. In the next section we explain how to apply an Event Action Plan to a managed system or group.


11.10.1 Applying an Event Action Plan to a managed system or group


An Event Action Plan is activated only when it is applied to a managed system or group. To activate a plan, use one of the following methods:
- Drag the plan from the Tasks pane of the Director Console to a managed system in the Group Contents pane or to a group in the Groups pane.
- Drag the system or group to the plan.
- Select the plan, right-click the system or group, and select Add Event Action Plan (see Figure 11-62).

Figure 11-62 Notification of Event Action Plan added to group/system(s)

Repeat this step for all associations you want to make. You can activate the same Event Action Plan for multiple systems (see Figure 11-63).

Figure 11-63 Director with Event Action Plan - Display Events

Once applied, the plan is activated and displayed as a child entry of the managed system or group to which it is applied when the Associations - Event Action Plans item is checked.


Message Browser
When an event occurs, the Message Browser (see Figure 11-64) pops up on the server console.

Figure 11-64 Message Browser

If the message has not yet been viewed, the Status for that message will be blank. When viewed, a checked envelope icon will appear under the Status column next to the message. To see greater detail on a particular message, select the message in the left pane and click the Event Details button (see Figure 11-65).

Figure 11-65 Event Details window


11.10.2 Exporting and importing Event Action Plans


With the Event Action Plan Builder, you can import and export action plans to files. This enables you to move action plans quickly from one IBM Director Server to another or to import action plans that others have provided.

Export
Event Action Plans can be exported to three types of files:
- Archive: Backs up the selected action plan to a file that can be imported into any IBM Director Server.
- HTML: Creates a detailed listing of the selected action plans, including their filters and actions, in HTML file format.
- XML: Creates a detailed listing of the selected action plans, including their filters and actions, in XML file format.
To export an Event Action Plan, do the following steps:
1. Open the Event Action Plan Builder.
2. Select an Event Action Plan from those available under the Event Action Plan folder.
3. Select File → Export, then click the type of file you want to export to (see Figure 11-66). If this Event Action Plan will be imported by an IBM Director Server, then select Archive.

Figure 11-66 Archiving an Event Action Plan


4. Name the archive and set a location to save it in the Select Archive File for Export window, as shown in Figure 11-67.

Figure 11-67 Select destination and file name

Tip: When you export an action plan, regardless of the type, the file is created on a local drive on the IBM Director Server. If an IBM Director Console is used to access the IBM Director Server, then the file could be saved to either the Server or the Console by selecting Server or Local from the Destinations pull-down. It cannot be saved to a network drive. Use the File Transfer task if you want to copy the file elsewhere.


Import
Event Action Plans can be imported from a file, which must be an Archive export of an action plan from another IBM Director Server. Follow these steps to import an Event Action Plan:
1. Transfer the archive file to be imported to a drive on the IBM Director Server.
2. Open the Event Action Plan Builder from the main Console window.
3. Click File → Import → Archive (see Figure 11-68).

Figure 11-68 Importing an Event Action Plan

4. From the Select File for Import window (see Figure 11-69), select the archive file and location. The file must be located on the IBM Director Server. If using the Console, you must transfer the file to the IBM Director Server before it can be imported.

Figure 11-69 Select file for import


5. Click OK to begin the import process. The Import Action Plan window opens, displaying the action plan to import (see Figure 11-70). If the action plan had been assigned previously to systems or groups, you will be given the option to preserve associations during the import. Select Import to complete the import process.

Figure 11-70 Verifying import of Event Action Plan


Chapter 12. Using TotalStorage Productivity Center Performance Manager


This chapter provides a step-by-step guide to help you configure and use the Performance Manager functions provided by the TotalStorage Productivity Center for Disk.


12.1 Exploiting Performance Manager


You can use the Performance Manager component of TotalStorage Productivity Center for Disk to manage and monitor the performance of the storage devices that TotalStorage Productivity Center for Disk supports. Performance Manager provides the following functions:
- Collecting data from devices: Performance Manager collects data from the IBM TotalStorage Enterprise Storage Server (ESS) and IBM TotalStorage SAN Volume Controller in the first release.
- Configuring performance thresholds: You can use the Performance Manager to set performance thresholds for each device type. Setting thresholds for certain criteria allows Performance Manager to notify you when a certain threshold has been crossed, thus enabling you to take action before a critical event occurs.
- Viewing performance data: You can view performance data from the Performance Manager database using the gauge application programming interfaces (APIs). These gauges present performance data in graphical and tabular forms.
- Using Volume Performance Advisor (VPA): The Volume Performance Advisor is an automated tool that helps you select the best possible placement of a new LUN from a performance perspective. This function is integrated with Device Manager so that, when the VPA has recommended locations for requested LUNs, the LUNs can be allocated and assigned to the appropriate host without going back to Device Manager.
- Managing workload profiles: You can use Performance Manager to select a predefined workload profile or to create a new workload profile that is based on historical performance data or on an existing workload profile. Performance Manager uses these profiles to create a performance recommendation for volume allocation on an IBM storage server.
The installation of the Performance Manager component onto an existing TotalStorage Productivity Center for Disk server provides a new Manage Performance task tree (Figure 12-1) on the right-hand side of the TotalStorage Productivity Center for Disk host. This task tree includes the various elements shown.

Figure 12-1 New Performance Manager tasks


12.1.1 Performance Manager GUI


The Performance Manager graphical user interface can be launched from the IBM Director console interface. After logging on to IBM Director, you will see a screen as in Figure 12-2. In the rightmost Tasks pane, you will see the Manage Performance launch menu. It is highlighted and expanded in the figure shown.

Figure 12-2 IBM Director Console with Performance Manager

12.1.2 Performance Manager data collection


To collect performance data for the Enterprise Storage Server (ESS), Performance Manager invokes the ESS Specialist server, setting a particular performance data collection frequency and duration of collection. Specialist collects the performance statistics from an ESS, establishes a connection, and sends the collected performance data to Performance Manager. Performance Manager then processes the performance data and saves it in Performance Manager database tables. From this section you can create data collection tasks for the supported, discovered IBM storage devices. There are two ways to use the Data Collection task to begin gathering device performance data. 1. Drag and drop the data collection task option from the right-hand side of the Multiple Device Manager application, onto the Storage Device for which you want to create the new task.


2. Or, right-click a storage device in the center column, and select the Performance Data Collection Panel menu option as shown in Figure 12-3.

Figure 12-3 ESS tasks panel

Either operation results in a new window named Create Performance Data Collection Task (Figure 12-4). In this window you will specify:
- A task name
- A brief description of the task
- The sample frequency in minutes
- The duration of the data collection task (in hours)

Figure 12-4 Create Performance Data Collection Task for ESS


In our example, we are setting up a data collection task on an ESS with Device ID 2105.16603. We created a task named Cottle_ESS with a sample frequency of 5 minutes and a duration of 1 hour. It is possible to add more ESSs to the same data collection task by clicking the Add button on the right-hand side. You can click individual devices, or select multiple devices by making use of the Ctrl key. See Figure 12-5 for an example of this panel. In our example, we added the ESS with device ID 2105.22513 to the task.

Figure 12-5 Adding multiple devices to a single task


Once we have established the scope of our data collection task and have clicked the OK button, we see our new data collection task available in the right-hand task column (see Figure 12-6). We have created task Cottle_ESS in the example. Tip: When providing a description for a new data collection task, you may elect to provide information about the duration and frequency of the task.

Figure 12-6 A new data collection task


In order to schedule it, right-click the selected task (see Figure 12-7).

Figure 12-7 Scheduling new data collection task

You will see another window as shown in Figure 12-8.

Figure 12-8 Scheduling task

You have the option to use the job scheduling facility of TotalStorage Productivity Center for Disk, or to execute the task immediately.


If you select Execute Now, you will see a panel similar to the one in Figure 12-9, providing you with some information about task name and task status, including the time the task was initialized.

Figure 12-9 Task progress panel

If you would rather schedule the task to occur at a future time, or specify additional parameters for the job schedule, you can walk through the panel in Figure 12-10. You may provide a description for the scheduled job. In our example, we created a job named 24March Cottle ESS.

Figure 12-10 New scheduled job panel


12.1.3 Using IBM Director Scheduler function


You may specify additional scheduled job parameters by using the Advanced button. You will see the panel in Figure 12-11. You can also launch this panel from the IBM Director Console by selecting Tasks → Scheduler, and then File → New Job. You can also set up the repeat frequency of the task.

Figure 12-11 New scheduled job, advanced tab

Once you have finished customizing the job options, you may save it using the File → Save as menu, or by clicking the diskette icon in the top left corner of the advanced panel.


When you save with advanced job options, you may provide a descriptive name for the job as shown in Figure 12-12.

Figure 12-12 Save job panel with advanced options

You should receive a confirmation that your job has been saved as shown in Figure 12-13.

Figure 12-13 scheduled job is saved


12.1.4 Reviewing data collection task status


You can review the task status using Task Status under the rightmost column Tasks. See Figure 12-14.

Figure 12-14 Task Status


Double-clicking Task Status launches the panel shown in Figure 12-15.

Figure 12-15 Task Status Panel

To review the status of a task, click the task shown under the Task name column. For example, we selected the task FCA18P, which was aborted, as shown in Figure 12-16 on page 439. The details, with Device ID, Device status, and Error Message ID, are then shown in the Device status box. If you click an entry in the Device status box, the corresponding error message is shown in the Error message box.


Figure 12-16 Task status details

12.1.5 Managing Performance Manager Database


The collected performance data is stored in a back-end DB2 database. This database needs to be maintained in order to keep only relevant data in it. You may decide on a frequency for purging old data based on your organization's requirements.


The performance database panel can be launched by clicking Performance Database as shown in Figure 12-17. It will display the Performance Database Properties panel as shown in Figure 12-18 on page 441.

Figure 12-17 Launch Performance Manager database


You can use the performance database panel to specify properties for a performance database purge task. The sizing function on this panel shows used space and free space in the database. You can choose to purge performance data based on age of the data, the type of the data, and the storage devices associated with the data (Figure 12-18).

Figure 12-18 Properties of Performance database

The Performance database properties panel shows the following data:
- Database name: The name of the database.
- Database location: The file system on which the database resides.
- Total file system capacity: The total capacity available to the file system, in gigabytes.
- Space currently used on file system: Space is shown in gigabytes and also by percentage.
- Performance manager database full: The amount of space used by Performance Manager. The percentage shown is the percentage of available space (total space - currently used space) used by the Performance Manager database.


The following variables are used to derive the percentage of disk space full in the Performance Manager database:
- a = the total capacity of the file system
- b = the total allocated space for the Performance Manager database on the file system
- c = the portion of the allocated space that is used by the Performance Manager database
For any decimal amount over a particular number, the percentage is rounded up to the next largest integer. For example, 5.1% is rounded to and displayed as 6%.
- Space status advisor: The Space status advisor monitors the amount of space used by the Performance Manager database and advises you as to whether you should purge data. The advisor levels are: Low (you do not need to purge data now), High (you should purge data soon), and Critical (you need to purge data now). Disk space thresholds for the status categories are: low if utilization < 0.8, high if 0.8 <= utilization < 0.9, and critical otherwise. That is, the delimiters between low, high, and critical are 80% and 90% full.
- Purge database options: Groups the database purge information.
- Name: Type a name for the performance database purge task. The name can be from 1 to 250 characters long.
- Description (optional): Type a description for the performance database purge task. The description can be from 1 to 250 characters long.
- Device type: Select one or more storage device types for the performance database purge. Options are SVC, ESS, or All. (Default is All.)
- Purge performance data older than: Select the maximum age for data to be retained when the purge task is run. You can specify this value in days (1-365) or years (1-10). For example, if you select the Days button and a value of 10, the database purge task will purge all data older than 10 days when it is run. Therefore, if it has been more than 10 days since the task was run, all performance data would be purged. Defaults are 365 days or 10 years.
- Purge data containing threshold exception information: Deselecting this option will preserve performance data that contains information about threshold exceptions. This information is required to display exception gauges. This option is selected by default.
- Save as task button: When you click Save as task, the information you specified is saved and the panel closes. The newly created task is saved to the IBM Director Task pane under the Performance Manager Database. Once it is saved, the task can be scheduled using the IBM Director scheduler function.
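The rounding rule and the advisor thresholds described above can be expressed compactly. The following Python sketch is illustrative only: it assumes a utilization fraction has already been derived from the variables a, b, and c (the exact combination is not spelled out here), and it simply applies the documented round-up rule and the 80%/90% delimiters. The function names are hypothetical.

Example (illustrative only):

import math

def database_full_percent(utilization: float) -> int:
    # Convert a utilization fraction (0.0-1.0) to the displayed percentage.
    # Any decimal amount is rounded up to the next largest integer,
    # so 5.1% is displayed as 6%, as described in the text.
    return math.ceil(utilization * 100)

def space_status_advisor(utilization: float) -> str:
    # Classify utilization using the documented 80% and 90% delimiters.
    if utilization < 0.8:
        return "Low"        # You do not need to purge data now
    if utilization < 0.9:
        return "High"       # You should purge data soon
    return "Critical"       # You need to purge data now

print(database_full_percent(0.051))   # 6
print(space_status_advisor(0.85))     # High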


12.1.6 Performance Manager gauges


Once data collection is complete, you may use the Gauges task to retrieve information about a variety of storage device metrics. Gauges are used to drill down to the level of detail necessary to isolate performance issues on the storage device. To view information collected by the Performance Manager, a gauge must be created, or a custom script must be written to access the DB2 tables and fields directly.

Creating a gauge
Open the IBM Director and do one of the following tasks: Right-click the storage device in the center pane and select Gauges (see Figure 12-19).

Figure 12-19 Right-click gauge opening

You can click Gauges on the panel shown, and it will produce the Job Status window as shown in Figure 12-21 on page 444. It is also possible to launch gauge creation by expanding Multiple Device Manager → Manage Performance in the rightmost column. Drag the Gauges item onto the desired storage device and drop it to open the gauges for that device (see Figure 12-20 on page 444).


Figure 12-20 Drag-n-drop gauge opening

This will produce the Job status window (see Figure 12-21) while the Performance gauges window opens. You will see the Job status window while other selected windows are opening.

Figure 12-21 Opening Performance gauges job status

The Performance gauges window will be empty until a gauge is created for use. We have created three gauges (see Figure 12-22).

Figure 12-22 Performance gauges

Clicking the Create button to the left brings up the Job status window while the Create performance gauge window opens.


The Create performance gauge window changes values depending on whether the cluster, array, or volume items are selected in the left pane. Clicking the cluster item in the left pane produces a window as seen in Figure 12-23.

Figure 12-23 Create performance gauge - Performance

Under the Type pull-down menu, select Performance or Exception.

Performance
Cluster Performance gauges provide details on the average cache holding time in seconds, as well as the percentage of I/O requests that were delayed due to NVS memory shortages. Two Cluster Performance gauges are required per ESS to view the available historical data for each cluster. Additional gauges can be created to view live performance data.
- Device: Select the storage device and time period from which to build the performance gauge. The time period can be changed for this device within the gauge window, thus allowing an overall or detailed view of the data.
- Name: Enter a name that is descriptive of both the type of gauge and the detail provided by the gauge. The name must not contain white space or special characters, and must not exceed 100 characters in length. Also, the name must be unique on the TotalStorage Productivity Center for Disk Performance Manager server. If test were used as a gauge name, then it could not be used for another gauge, even if another storage device were selected, because it would not be unique in the database. Example names: 28019P_C1H would represent the ESS serial number (28019), the performance gauge type (P), the cluster (C1), and historical (H), while 28019E would represent the exception (E) gauge for the same ESS. Gauges for the clusters and arrays would build on that nomenclature to group the gauges by ESS in the Gauges window.

- Description: Use this space to enter a detailed description of the gauge that will appear on the gauge and in the Gauges window.
- Metric(s): Click the metric(s) that will be displayed by default when the gauge is opened for viewing. Metrics with the same value under the Units column in the Metrics table can be selected together using either Shift-click or Ctrl-click. The metrics in this field can be changed on a historical gauge after the gauge has been opened for viewing; in other words, a historical gauge for each metric or group of metrics is not necessary. However, these metrics cannot be changed for live gauges, so a new gauge is required for each metric or group of metrics desired.
- Component: Select a single device from the Component table. This field cannot be changed when the gauge is opened for viewing.
- Data points: Selecting this radio button enables the gauge to display the most recent data being obtained from currently running performance collectors against the storage device. One most-recent-data gauge is required per cluster and per metric to view live collection data. The Device pull-down menu displays text informing you whether or not a performance collection task is running against this device. You can select the number of data points to display the last x data points from the date of the last collection. The data collection could be currently running or the most recent one.
- Date Range: Selecting this radio button presents data over a range of dates and times. Enter the range of dates this gauge will use as a default. The date and time values may be adjusted within the gauge to any value before or after the default values, and the gauge will display any relevant data for the updated time period.
- Display gauge: Checking this box will display the newly created gauge after you click the OK button. Otherwise, if left unchecked, the gauge will be saved without being displayed.
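The naming rules above (no white space, no special characters, at most 100 characters, unique per Performance Manager server) can be checked with a small helper. This Python sketch is purely illustrative; the function name and the exact character set it allows are assumptions, not part of the product, which enforces these rules itself.

Example (illustrative only):

import re

def is_valid_gauge_name(name: str, existing_names: set) -> bool:
    # Hypothetical validation of a gauge name against the documented rules.
    if not 1 <= len(name) <= 100:
        return False                      # must not exceed 100 characters
    if re.search(r"\s", name):
        return False                      # must not contain white space
    if not re.fullmatch(r"[A-Za-z0-9_]+", name):
        return False                      # assumed reading of "no special characters"
    return name not in existing_names     # must be unique on the server

existing = {"28019E"}
print(is_valid_gauge_name("28019P_C1H", existing))   # True
print(is_valid_gauge_name("test gauge", existing))   # False (contains a space)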


Click the OK button when you are ready to save the performance gauge (see Figure 12-24). In this example, we created a gauge with the name 22513C1H and the description average cache holding time. We selected a starting and ending date of 11 March 2005, which corresponds with our data collection task schedule.

Figure 12-24 Ready to save performance gauge

The gauge appears after you click the OK button with the Display gauge box checked, or when the Display button is clicked after selecting the appropriate gauge in the Performance gauges window (see Figure 12-26 on page 448). If you decide to save the gauge without displaying it, you will see a panel as shown in Figure 12-25.

Figure 12-25 Saved performance gauges


Figure 12-26 Cluster performance gauge - upper

The top of the gauge contains the following labels:
- Graph Name: The name of the gauge
- Description: The description of the gauge
- Device: The storage device selected for the gauge
- Component level: Cluster, array, or volume
- Component ID: The ID number of the component (cluster, array, or volume)
- Threshold: The thresholds that were applied to the metrics
- Time of last data collection: Date and time of the last data collection

The center of the gauge contains the only fields that may be altered, in the Display Properties section. The Metrics may be selected either individually or in groups, as long as the data types are the same (for example, seconds with seconds, milliseconds with milliseconds, or percent with percent). Click the Apply button to force a Performance Gauge section update with the new y-axis data. The Start Date:, End Date:, Start Time:, and End Time: fields may be varied to either expand the scope of the gauge or narrow it for a more granular view of the data. Click the Apply button to force a Performance Gauge section update with the new x-axis data. For example, we applied the Total I/O Rate metric to the saved gauge, and the resultant graph is shown in Figure 12-27 on page 449. Here, the Performance Gauge section of the gauge graphically displays the information over the time period selected by the gauge and the options in the Display Properties section.


Figure 12-27 Cluster performance gauge with applied I/O rate metric

Click the Refresh button in the Performance Gauge section to update the graph with the original metrics and date/time criteria. The date and time of the last refresh appear to the right of the Refresh button. The date and time displayed are updated first, followed by the contents of the graph, which can take up to several minutes to update. Finally, the data used to generate the graph is displayed at the bottom of the window (see Figure 12-28 on page 450). Each of the columns in the data section can be sorted up or down by clicking the column heading (see Figure 12-32 on page 453). The sort reads the data from left to right, so the results may not be as expected. The gauges for the array and volume components function in the same manner as the cluster gauge created above.


Figure 12-28 Create Performance Gauge- Lower

Exception
Exception gauges display data only for those active thresholds that were crossed during the reporting period. One Exception gauge displays threshold exceptions for the entire storage device based on the thresholds active at the time of collection.


To create an exception gauge, select Exception from the Type pull-down menu (see Figure 12-29).

Figure 12-29 Create performance gauge - Exception

By default, Cluster will be highlighted in the left pane, and the metrics and component sections will not be available.
- Device: Select the storage device and time period from which to build the performance gauge. The time period can be changed for this device within the gauge window, thus allowing an overall or detailed view of the data.
- Name: Enter a name that is descriptive of both the type of gauge and the detail provided by the gauge. The name must not contain white space or special characters, and must not exceed 100 characters in length. Also, the name must be unique on the TotalStorage Productivity Center for Disk Performance Manager server.
- Description: Use this space to enter a detailed description of the gauge that will appear on the gauge and in the Gauges window.
- Date Range: Selecting this radio button presents data over a range of dates and times. Enter the range of dates this gauge will use as a default. The date and time values may be adjusted within the gauge to any value before or after the default values, and the gauge will display any relevant data for the updated time period.
- Display gauge: Checking this box will display the newly created gauge after you click the OK button. Otherwise, if left unchecked, the gauge will be saved without being displayed.


Click the OK button when ready to save the performance gauge. We created an exception gauge as shown in Figure 12-30.

Figure 12-30 Ready to save exception gauge

The top of the gauge contains the following labels:
- Graph Name: The name of the gauge
- Description: The description of the gauge
- Device: The storage device selected for the gauge
- Threshold: The thresholds that were applied to the metrics
- Time of last data collection: Date and time of the last data collection

The center of the gauge contains the only fields that may be altered, in the Display Properties section. The Start Date: and End Date: fields may be varied to either expand the scope of the gauge or narrow it for a more granular view of the data. Click the Apply button to force an Exceptions Gauge section update with the new x-axis data. The Exceptions Gauge section of the gauge graphically displays the information over the time period selected by the gauge and the options in the Display Properties section (see Figure 12-31 on page 453).


Figure 12-31 Exceptions gauge - upper

Click the Refresh button in the Exceptions Gauge section to update the graph with the original date criteria. The date and time of the last refresh appear to the right of the Refresh button. The date and time displayed are updated first, followed by the contents of the graph, which can take up to several minutes to update. Finally, the data used to generate the graph are displayed at the bottom of the window. Each of the columns in the data section can be sorted up or down by clicking the column heading (see Figure 12-32).

Figure 12-32 Data sort options


Display Gauges
To display previously created gauges, either right-click the storage device and select Gauges (see Figure 12-19 on page 443), or drag and drop the Gauges item onto the storage device (see Figure 12-20 on page 444), to open the Performance gauges window shown in Figure 12-33.

Figure 12-33 Performance gauges window

Select one of the gauges and then click Display.

Gauge Properties
The Properties button allows the following fields or choices to be modified.

Performance
These are the performance-related possibilities:
- Description
- Metrics
- Component
- Data points
- Date range (date and time ranges)


You can change the data displayed in the gauge from Data points with an active data collection to Date range (see Figure 12-34). Selecting Date range allows you to choose the Start date and End Date using the performance data stored in the DB2 database.

Figure 12-34 Performance gauge properties


Exception
You can change the Type property of the gauge definition from Performance to Exception. For a gauge type of Exception, you can only choose to view data for a Date range (see Figure 12-35).

Figure 12-35 Exception gauge properties

Delete a gauge
To delete a previously created gauge, either right-click the storage device and select Gauges (see Figure 12-19 on page 443), or drag and drop the Gauges item onto the storage device (see Figure 12-20 on page 444), to open the Performance gauges window shown in Figure 12-33 on page 454. Select the gauge to remove and click Delete. A pop-up window will prompt for confirmation to remove the gauge (see Figure 12-36).

Figure 12-36 Confirm gauge removal


To confirm, click Yes and the gauge will be removed. The gauge name may now be reused, if desired.

12.1.7 ESS thresholds


Thresholds are used to determine watermarks for warning and error indicators for an assortment of storage metrics, including:
- Disk Utilization
- Cache Holding Time
- NVS Cache Full
- Total I/O Requests
Thresholds are accessed in either of two ways:
1. Right-click a storage device in the center panel of TotalStorage Productivity Center for Disk and select the Thresholds menu option (Figure 12-37).
2. Or, drag and drop the Thresholds task from the right tasks panel in Multiple Device Manager onto the desired storage device, to display or modify the thresholds for that device.

Figure 12-37 Opening the thresholds panel


Upon opening the thresholds submenu, you will see the following display, which shows the default thresholds in place for ESS, as shown in Figure 12-38.

Figure 12-38 Performance Thresholds main panel

On the right-hand side, there are buttons for Enable, Disable, Copy Threshold Properties, Filters, and Properties. If the selected threshold is already enabled, the Enable button will appear grayed out, as in our case. If we attempt to disable a threshold that is currently enabled by clicking the Disable button, a message will be displayed as shown in Figure 12-39.

Figure 12-39 Disabling threshold warning panel

You may elect to continue and disable the selected threshold, or cancel the operation by clicking Don't disable threshold.


The Copy Threshold Properties button allows you to copy existing thresholds to other devices of a similar type (ESS, in our case). The window in Figure 12-40 is displayed.

Figure 12-40 Copying thresholds panel

Note: As shown in Figure 12-40, the copying thresholds panel is aware that we have registered both clusters of our model 800 ESS on our ESS CIM agent host, as indicated by the semicolon-delimited IP address field for the device ID 2105.22219. The Filters window is another available thresholds option. From this panel, you can enable, disable, and modify existing filter values for selected thresholds, as shown in Figure 12-41.

Figure 12-41 Threshold filters panel


Finally, you can open the Properties panel for a selected threshold, shown in Figure 12-42. You have options to accept the values at their current settings, modify the warning or error levels, or select the alert level (the available options are none, warning only, and warning or error).

Figure 12-42 Threshold properties panel
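To make the threshold concept concrete, the following minimal Python sketch (our own illustration, not the product's implementation) shows how a metric sample might be classified against its warning and error watermarks; whether an alert is then generated depends on the alert level you selected (none, warning only, or warning or error). The metric name and numeric values are hypothetical.

# Conceptual sketch only -- not the Performance Manager implementation.
def check_threshold(value, warning, error):
    """Classify a metric sample against its warning and error watermarks."""
    if error is not None and value >= error:
        return "error"
    if warning is not None and value >= warning:
        return "warning"
    return "ok"

# Hypothetical sample: disk utilization of 82% with watermarks of 50% (warning) and 80% (error).
print(check_threshold(82.0, warning=50.0, error=80.0))   # prints "error"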

12.1.8 Data collection for SAN Volume Controller


Performance Manager uses the integrated configuration assistant tool (ICAT) interface of a SAN Volume Controller (SVC) to start and stop performance statistics collection on a SAN Volume Controller device. The process for performing data collection on the SAN Volume Controller is similar to that of the ESS. You need to set up a new Performance Data Collection Task for the SAN Volume Controller device. Figure 12-43 on page 461 is an example of the panel you should see when you drag the Data Collection task onto the SAN Volume Controller device, or right-click the device and left-click Data Collection. As with the ESS data collection task:
- Define a task name and description.
- Select the sample frequency and duration of the task and click OK.
Note: The SAN Volume Controller can perform data collection at a minimum 15-minute interval.
You may use the Add button to include additional SAN Volume Controller devices in the same data collection task, or use the Remove button to exclude SAN Volume Controllers from an existing task. In our case we are performing data collection against a single SAN Volume Controller.


Figure 12-43 The SVC Performance Data Collection Task

As long as at least one data collection task has been completed, you are able to proceed with the steps to create a gauge to view your performance data.

12.1.9 SAN Volume Controller thresholds


To view the available Performance Manager Thresholds, you can right-click the SAN Volume Controller device and click Thresholds, or drag the Threshold task from the right-hand panel onto the SAN Volume Controller device you want to query. A panel like the one in Figure 12-44 appears.

Figure 12-44 The SVC performance thresholds panel

SVC has the following thresholds with their default properties:
VDisk I/O rate - Total number of virtual disk I/Os for each I/O group. SAN Volume Controller defaults: Status: Disabled, Warning: None, Error: None.
VDisk bytes per second - Virtual disk bytes per second for each I/O group. SAN Volume Controller defaults: Status: Disabled, Warning: None, Error: None.


MDisk I/O rate - Total number of managed disk I/Os for each managed disk group. SAN Volume Controller defaults: Status: Disabled, Warning: None, Error: None.
MDisk bytes per second - Managed disk bytes per second for each managed disk group. SAN Volume Controller defaults: Status: Disabled, Warning: None, Error: None.
You may only enable a particular threshold once minimum values for warning and error levels have been defined. If you attempt to select a threshold and enable it without first modifying these values, you will see a notification like the one in Figure 12-45.

Figure 12-45 SAN Volume Controller threshold enable warning

Tip: In TotalStorage Productivity Center for Disk, default threshold warning or error values of -1.0 are indicators that there is no recommended minimum value for the threshold; the values are therefore entirely user defined. You may provide any reasonable value for these thresholds, keeping in mind the workload in your environment.

To modify the warning and error values for a given threshold, select the threshold and click the Properties button. The panel in Figure 12-46 is shown. You can modify the threshold as appropriate and accept the new values by clicking OK.

Figure 12-46 Modifying threshold warning and error values
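As a rough illustration of the rule just described (a default of -1.0 means no value is set, and both warning and error levels must be defined before a threshold can be enabled), the following Python sketch models the four SVC thresholds listed above and the enable-time check. The dictionary layout and example numbers are our own assumptions, not the product's data model.

# Hypothetical sketch of the SVC default thresholds and the enable-time check.
UNSET = -1.0  # in the product, -1.0 indicates no recommended minimum value

svc_thresholds = {
    "VDisk I/O rate":         {"enabled": False, "warning": UNSET, "error": UNSET},
    "VDisk bytes per second": {"enabled": False, "warning": UNSET, "error": UNSET},
    "MDisk I/O rate":         {"enabled": False, "warning": UNSET, "error": UNSET},
    "MDisk bytes per second": {"enabled": False, "warning": UNSET, "error": UNSET},
}

def enable_threshold(name):
    t = svc_thresholds[name]
    # Warning and error values must be defined before the threshold can be enabled.
    if t["warning"] == UNSET or t["error"] == UNSET:
        raise ValueError(f"Set warning and error values for '{name}' before enabling it")
    t["enabled"] = True

svc_thresholds["VDisk I/O rate"].update(warning=8000.0, error=10000.0)  # user-chosen values
enable_threshold("VDisk I/O rate")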


12.1.10 Data collection for the DS6000 and DS8000


The process for performing data collection on the DS6000/DS8000 is similar to that of the ESS. You need to set up a new Performance Data Collection Task for the DS6000/DS8000 device. Figure 12-47 shows the user validation panel, and Figure 12-48 is an example of the panel you should see when you drag the Data Collection task onto the DS6000/DS8000 device, or right-click the device and left-click Data Collection. As with the ESS data collection task:
- Define a task name and description.
- Select the sample frequency and duration of the task and click OK.

Figure 12-47 DS6000/DS8000 user name and password

Figure 12-48 The DS6000/DS8000 Data Collection Task

As long as at least one data collection task has been completed, you are able to proceed with the steps to create a gauge to view your performance data (Figure 12-49 on page 464 through Figure 12-51 on page 466).


Figure 12-49 DS6000/DS8000 Cluster level gauge values


Figure 12-50 DS6000/DS8000 Rank Group level gauges


Figure 12-51 DS6000/DS8000 Volume level gauges

12.1.11 DS6000 and DS8000 thresholds


To view the available Performance Manager Thresholds, you can right-click the DS6000/DS8000 device and click Thresholds, or drag the Threshold task from the right-hand panel onto the DS6000/DS8000 device you want to query. A panel like the one in Figure 12-52 appears.

Figure 12-52 The DS6000/DS8000 performance thresholds panel


You may only enable a particular threshold once minimum values for warning and error levels have been defined. If you attempt to select a threshold and enable it without first modifying these values, you will see a notification like the one in Figure 12-53.

Figure 12-53 DS6000/DS8000 threshold enable warning

Tip: In TotalStorage Productivity Center for Disk, default threshold warning or error values of -1.0 are indicators that there is no recommended minimum value for the threshold; the values are therefore entirely user defined. You may provide any reasonable value for these thresholds, keeping in mind the workload in your environment.

To modify the warning and error values for a given threshold, select the threshold and click the Properties button. The panel in Figure 12-54 is shown. You can modify the threshold as appropriate and accept the new values by clicking the OK button.

Figure 12-54 Modifying DS6000/DS8000 threshold warning and error values

12.2 Exploiting gauges


Gauges are a very useful tool and help in identifying performance bottlenecks. In this section we show the drill-down capabilities of gauges. The purpose of this section is not to cover performance analysis in detail for a specific product, but to highlight the capabilities of the tool. You can adopt a similar approach for your own performance analysis.


12.2.1 Before you begin


Before you begin customizing gauges, ensure that enough valid data samples are collected in the performance database; this is true for any performance analysis. The data samples you collect must cover a time period that includes both high and low instances of the I/O workload, and they should cover sufficient iterations of the peak activity to allow analysis over a period of time, which matters when you are analyzing a pattern. You may use the advanced scheduler function of IBM Director to configure a repetitive task. If you plan to perform analysis for one specific instance of activity, ensure that the performance data collection task covers that specific time period.

12.2.2 Creating gauges: an example


In this example, we cover the creation and customization of gauges for the ESS. First of all, we scheduled an ESS performance data collection task at a 3-hour interval over 8 days using the IBM Director scheduler function. For details on using the IBM Director scheduler, refer to 12.1.3, Using IBM Director Scheduler function on page 435. To create the gauge, we launched the Performance gauges panel as shown in Figure 12-55 by right-clicking the ESS device.

Figure 12-55 Gauges panel


Click the Create button to create a new gauge. You will see a panel similar to Figure 12-56.

Figure 12-56 Create performance gauge

We selected Cluster in the top left corner, the Total I/O Rate metric in the metrics box, and Cluster 1 in the component box. Also, we entered the following parameters:
Name: 22219P_drilldown_analysis
Description: Drilldown analysis for 22219 ESS


For the Date range, we selected our historical data collection sampling period and clicked Display gauge. Upon clicking OK, we got the next panel as shown in Figure 12-57.

Figure 12-57 Gauge for ESS 22219 Cluster performance


12.2.3 Zooming in on the specific time period


The previous chart shows some peaks of high cluster I/O rate in the period from April 6 to April 8. We decided to zoom in on the peak activity and therefore selected a narrower time period, as shown in Figure 12-58, and clicked the Apply button.

Figure 12-58 Zooming on specific time period for Total IO rate metric

12.2.4 Modify gauge to view array level metrics


For the next chart, we decided to have an array level metric for the same time period as before. Hence, we selected the gauge that we created earlier and clicked Properties as shown in Figure 12-59.

Figure 12-59 Properties for a defined gauge


The subsequent panel is shown in Figure 12-60. We selected Array level metric for Cluster 1, Device Adapter 1, Loop A, and Disk Group 2 for Avg. Response time as circled in Figure 12-60.

Figure 12-60 Customizing gauge for array level metric


The resultant chart is shown in Figure 12-61.

Figure 12-61 Modified gauge with Avg. response time chart


12.2.5 Modify gauge to review multiple metrics in same chart


Next, we decided to review Total I/O, reads/sec, and writes/sec in the same chart for comparison purposes. We selected these three metrics in the Gauge properties panel and clicked the Apply button. The resultant chart is shown in Figure 12-62.
Tip: To select multiple metrics for the same chart, click the first metric, hold the Shift key, and click the last metric. If the metrics you plan to choose are not contiguous in the list, hold the Ctrl key instead of the Shift key.

Figure 12-62 Viewing multiple metrics in the same chart

In the chart, Writes and Total I/O are shown as overlapping, and Reads are shown as zero.
Tip: If you select multiple metrics that do not have the same units for the y-axis, an error is displayed as shown in Figure 12-63.

Figure 12-63 Error displayed if there are no common units


12.3 Performance Manager command line interface


The Performance Manager module includes a command line interface known as perfcli, located in the directory c:\Program Files\IBM\mdm\pm\pmcli. In its present release, the perfcli utility includes support for ESS and SAN Volume Controller data collection task creation and management (starting and stopping data collection tasks). There are also executables that support viewing and management of task filters, alert thresholds, and gauges. There is detailed help available at the command line, with information about syntax and specific examples of usage.

12.3.1 Performance Manager CLI commands


The Performance Manager Command Line Interface (perfcli) includes the following commands shown in Figure 12-64.

Figure 12-64 Directory listing of the perfcli commands

startesscollection/startsvccollection: These commands are used to build and run data collection against the ESS or SAN Volume Controller, respectively.
lscollection: This command is used to list the running, aborted, or finished data collection tasks on the Performance Management server.
stopcollection: This command may be used to stop data collection against a specified task name.
lsgauge: You can use the lsgauge command to display a list of existing gauge names, types, device types, device IDs, modified dates, and description information.
rmgauge: Use this command to remove existing gauges.
showgauge: This command is used to display performance data output using an existing defined gauge.
setessthresh/setsvcthresh: These two commands are respectively used to set ESS and SAN Volume Controller performance thresholds.
cpthresh: You can use the cpthresh command to copy threshold properties from one selected device to one or more other devices.
setfilter: You can use setfilter to set or change the existing threshold filters.
lsfilter: This command may be used to display the threshold filter settings for all devices specified.


setoutput: This command may be used to view or modify the existing data collection output formats, including settings for paging, row printing, format (default, XML, or character delimited), header printing, and output verbosity.
lsdev: This command can be used to list the storage devices that are used by TotalStorage Productivity Center for Disk.
lslun: This command can be used to list the LUNs or Performance Manager volumes associated with storage devices.
lsthreshold: This command can be used to list the threshold status associated with storage devices.
lsgauge: This command can be used to list the existing gauge names, gauge type, device name, device ID, date modified, and optionally device information.
showgauge: Use this command to display performance output by triggering an existing gauge.
showcapacity: This command displays managed capacity, the sum of managed capacity by device type, and the total of all ESS and SAN Volume Controller managed storage.
showdbinfo: This command displays the percent full, used space, and free space of the Performance Manager database.
lsprofile: Use this command to display Volume Performance Advisor profiles.
cpprofile: Use this command to copy Volume Performance Advisor profiles.
mkprofile: Use this command to create a workload profile that you can use later with the mkrecom command to create a performance recommendation for ESS volume allocation.
mkrecom: Use this command to generate and, optionally, apply a performance LUN advisor recommendation for ESS volumes.
lsdbpurge: This command can be used to display the status of database purge tasks running in TotalStorage Productivity Center for Disk.
tracklun: This command can be used to obtain historical performance statistics used to create a profile.
startdbpurge: Use this command to start a database purge task.
showdev: Use this command to display device properties.
setoutput: This command sets the output format for the administrative command line interface.
cpthresh: This command can be used to copy threshold properties from one device to other devices that are of the same type.
rmprofile: Use this command to delete performance LUN advisor profiles.
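If you want to drive perfcli from a script rather than interactively, a small wrapper along the following lines could be used. This is only a sketch: the installation directory and the command names come from this chapter, but the exact argument syntax (and whether the commands are .exe or .bat files) should be taken from the built-in command-line help; invoking lscollection with no arguments is an assumption.

# Hedged sketch: invoke perfcli commands from a Python script on the Performance Manager server.
import subprocess
from pathlib import Path

PERFCLI_DIR = Path(r"C:\Program Files\IBM\mdm\pm\pmcli")

def run_perfcli(command, *args):
    """Run a perfcli command (for example "lscollection") and return its output.
    Adjust the file name extension (.exe or .bat) to match your installation."""
    result = subprocess.run([str(PERFCLI_DIR / command), *args],
                            capture_output=True, text=True)
    return result.stdout

# Assumption: lscollection lists collection tasks when invoked without arguments.
print(run_perfcli("lscollection"))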


12.3.2 Sample command outputs


We show some sample commands in Figure 12-65. This sample shows how to invoke perfcli commands from the Windows command line interface.

Figure 12-65 Sample perfcli command from Windows command line interface

Figure 12-66 and Figure 12-67 show perfcli sample commands within the perfcli tool.

Figure 12-66 perfcli sample command within perfcli tool

Figure 12-67 perfcli lslun sample command within perfcli tool


12.4 Volume Performance Advisor (VPA)


The Volume Performance Advisor (VPA) is an expert advisor that recommends storage space allocations based on the size of the request, an estimate of the performance requirement and type of workload, and the existing load on the ESS that might compete with the new request. The Volume Performance Advisor then makes a recommendation as to the number and size of logical volumes (LUNs) to allocate, and a location within the ESS that is a good placement with respect to the defined performance considerations. The user is given the option of implementing the recommendation (allocating the storage) or obtaining further recommendations.

12.4.1 VPA introduction


Data placement within a large, complex storage subsystem has long been recognized as a storage and performance management issue. Performance may suffer if placement is done casually or carelessly, and it can be costly to discover and correct those performance problems, adding to the total cost of ownership. Performance Manager provides an automated approach to storage allocation through the functions of a storage performance advisor, called the Volume Performance Advisor (VPA). The advisor is designed to automate decisions that could be made by an expert storage analyst given the time and sufficient information. The goal is to give very good advice by allowing VPA to consider the same factors that an administrator would in deciding where to best allocate storage.
Note: At this point in time, the VPA tool is available for the IBM ESS only.

12.4.2 The provisioning challenge


You want to allocate a specific amount of storage to run a particular workload. You could be a storage administrator interacting through a user interface, or the user could be another system component (such as a SAN management product, file system, DataBase Management System (DBMS), or logical volume manager) interacting with the VPA Application Programming Interface (API). A storage request is satisfied by selecting some number of logical volumes (LUNs). For example, if you ask for 400 GB of storage, a low I/O rate, cache-friendly workload could be handled on a single 400 GB logical disk residing on a single disk array, whereas a cache-unfriendly, high-bandwidth application might need several logical volumes allocated across multiple disk arrays, using LVM, file system, or database striping to achieve the required performance. The performance of those logical disks depends on their placement on physical storage and on what other applications might be sharing the arrays. The job of the Volume Performance Advisor (VPA) is to select an appropriate set (number and placement) of logical disks that:
- Considers the performance requirements of the new workload
- Balances the workload across the physical resources
- Considers the effects of the other workloads competing for the resources


Storage administrators and application developers need tools that pull together all the components of the decision process used for provisioning storage. They need tools to characterize and manage workload profiles. They need tools to monitor existing performance, and tools to help them understand the impact of future workloads on current performance. What they need is a tool that automates this entire process, which is what VPA for ESS does.

12.4.3 Workload characterization and workload profiles


Intelligent data placement requires a rudimentary understanding of the application workload and the demand likely to be placed on the storage system. For example, cache-unfriendly workloads with high I/O intensity require a larger number of physical disks than cache-friendly or lightweight workloads. To account for this, the VPA requires specific workload descriptions to drive its decision-making process. These workload descriptions are precise, indicating I/O intensity rates; percentages of read, write, random, and sequential content; cache information; and transfer sizes. This workload-based approach is designed to allow the VPA to match the performance attributes of the storage with the workload attributes with a high degree of accuracy. For example, workloads with high random-write content might best be pointed to RAID 10 storage, and high cache hit ratio environments can probably be satisfied with fewer logical disks.
Most users have little experience or capability for specifying detailed workload characteristics. The VPA is designed to deal with this problem in three ways:
- Predefined workload definitions based on characterizations of environments across various industries and applications. They include standard OLTP-type workloads, such as OLTP High, and Batch Sequential.
- Capturing existing workloads by observing storage access patterns in the environment. The VPA allows the user to point to a grouping of volumes and a particular window of time, and create a workload profile based on the observed behavior of those volumes.
- Creation of hypothetical workloads that are similar to existing profiles, but differ in some specific metrics.
The VPA has tools to manage a library of predefined and custom workloads, to create new workload profiles, and to modify profiles for specific purposes.
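To make the idea of a workload description concrete, the following Python sketch models the attributes named above (I/O intensity, read/write and random/sequential percentages, cache behavior, and transfer size) together with a crude cache-friendliness test. The field names, example values, and classification rule are illustrative assumptions only, not the VPA's actual data model.

from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    """Illustrative model of a workload description -- not the VPA's data model."""
    name: str
    io_per_sec_per_gb: float       # I/O intensity (access density)
    avg_transfer_kb: float         # average transfer size
    pct_seq_read: float            # percentage mix of the I/O content
    pct_seq_write: float
    pct_rand_read: float
    pct_rand_write: float
    rand_read_cache_hit_pct: float

    def cache_friendly(self, hit_threshold=70.0):
        # Crude illustrative rule: high random-read cache hit ratios tend to need
        # fewer physical disks than cache-unfriendly, high-intensity workloads.
        return self.rand_read_cache_hit_pct >= hit_threshold

# A hypothetical OLTP-like profile
oltp_like = WorkloadProfile("oltp-like", io_per_sec_per_gb=1.5, avg_transfer_kb=8.0,
                            pct_seq_read=15, pct_seq_write=15, pct_rand_read=40,
                            pct_rand_write=30, rand_read_cache_hit_pct=40.0)
print(oltp_like.cache_friendly())   # prints False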

12.4.4 Workload profile values


It is possible to change many specific values in the workload profile. For example, the access density may be high because a test workload used small files. It can be adjusted to a more accurate number. Average transfer size always defaults to 8KB, and should be modified if other information is available for the actual transfer size. The peak activity information should also be adjusted. It defaults to the time when the profile workload was measured. In an existing environment it should specify the time period for contention analysis between existing workloads and the new workload. Figure 12-68 on page 480 shows a user defined VPA workload profile.


Figure 12-68 User defined workload profile details example

12.4.5 How the Volume Performance Advisor makes decisions


As mentioned previously, the VPA is designed to take several factors into account when recommending volume allocation:
- Total amount of space required
- Minimum and maximum number of volumes, and sizes of volumes
- Workload requirements
- Contention from other workloads
VPA tries to allocate volumes on the least busy resources while balancing the workload across available resources. It uses the workload profile to estimate how busy the internal ESS resources (RAID arrays, disk adapters, and controllers) will become if that workload is allocated on those resources. The workload profile is very important in making that decision; for example, cache hit ratios affect the activity on the disk adapters and RAID arrays. When creating a workload profile from existing data, it is important to pick a representative time sample to analyze. Also, you should examine the I/O per second per GB: many applications have an access density in the range of 0.1 to 3.0, and if the value is significantly outside this range, the sample might not be appropriate.

The VPA tends to utilize resources that can best accommodate a particular type of workload. For example, high write content makes RAID 5 arrays busier than RAID 10, so VPA biases toward RAID 10. Faster devices will be less busy, so VPA biases allocations to the faster devices. VPA also analyzes the historical data to determine how busy the internal ESS components (arrays, disk adapters, clusters) are due to other workloads; in this way, VPA tries to avoid allocating on already busy ESS components. If VPA has a choice among several places to allocate volumes and they appear to be about equal, it applies a randomizing factor. This keeps the advisor from always giving the same advice, which might cause certain resources to be overloaded if everyone followed that advice. It also means that several usages of VPA by the same user may not necessarily produce the same advice, even if the workload profiles are identical.
Note: VPA tries to allocate the fewest possible volumes, as long as it can allocate on low-utilization components. If the components look too busy, it allocates more (smaller) volumes as a way of spreading the workload. It will not recommend more volumes than the maximum specified by the user.
VPA may, however, be required to recommend allocation on very busy components. A utilization indicator in the user panels indicates whether allocations would cause components to become heavily utilized. The I/O demand specified in the workload profile for the new storage being allocated is not a Service Level Agreement (SLA); in other words, there is no guarantee that the new storage, once allocated, will perform at or above the specified access density. The VPA will make recommendations unless the available space on the target devices is exhausted.
An invocation of VPA can be used for multiple recommendations. To handle a situation where multiple sets of volumes are to be allocated with different workload profiles, it is important that the same VPA wizard be used for all sets of recommendations. Select Make additional recommendations on the View Recommendations page, as opposed to starting a completely new sequence for each separate set of volumes to be allocated. VPA is designed to remember each additional (hypothetical) workload when making additional recommendations.
There are, of course, limitations to the use of an expert advisor such as VPA. There may well be other constraints (such as source and target FlashCopy requirements) which must be considered. Sometimes these constraints can be accommodated with careful use of the tool, and sometimes they may be so severe that the tool must be used very carefully. That is why VPA is designed as an advisor.
In summary, the Volume Performance Advisor (VPA) provides a tool to help automate the complex decisions involved in data placement and provisioning. In short, it represents a future direction of storage management software: computers should monitor their resources and make autonomic adjustments based on that information. The VPA is an expert advisor that provides a step in that direction.
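The selection logic described in this section can be pictured as a scoring pass over candidate locations: estimate how busy each component would become with the new workload added, prefer the least busy ones, and break near-ties with a randomizing factor so that repeated requests do not all land on the same component. The sketch below is our own illustration of that idea, not the actual VPA algorithm; the candidate names and utilization figures are hypothetical.

import random

# Illustration only -- not the actual VPA algorithm.
# Each candidate is (component_name, projected_utilization), where the projection
# already includes the estimated load of the new workload on that component.
candidates = [("array-1", 0.62), ("array-2", 0.35), ("array-3", 0.37)]

def recommend(candidates, tie_margin=0.05):
    best = min(util for _, util in candidates)
    # Treat candidates within the margin of the least busy one as "about equal" ...
    near_ties = [c for c in candidates if c[1] - best <= tie_margin]
    # ... and apply a randomizing factor so the same advice is not always given.
    return random.choice(near_ties)

print(recommend(candidates))   # for example ("array-2", 0.35) or ("array-3", 0.37)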

12.4.6 Enabling the Trace Logging for Director GUI Interface


Enabling GUI logging can be useful for troubleshooting GUI problems, however unlikely, that you may encounter while using VPA. Because this function requires a reboot of the server where TotalStorage Productivity Center for Disk is running, consider enabling it before you start using the VPA.


On the Windows platform, follow these steps:
1. Select Start -> Run and enter regedit.exe.
2. Open the HKEY_LOCAL_MACHINE\SOFTWARE\Tivoli\Director\CurrentVersion key.
3. Modify the LogOutput value. Set the value equal to 1.
4. Reboot the server.
The output log location for the instructions above is X:/program files/ibm/director/log (where X is the drive where the Director application was installed). The log file for the Director is com.tivoli.console.ConsoleLauncher.stderr.
On the Linux platform, the TWGRas.properties file turns output logging on. You need to remove the comment from the last line in the file (twg.sysout=1) and ensure that you have set TWG_DEBUG_CONSOLE as an environment variable. For example, in bash:
$ export TWG_DEBUG_CONSOLE=true
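If you prefer to script the Windows registry change instead of using regedit.exe, a sketch such as the following could be used. The key path is the one listed in the steps above; the value type (string versus DWORD) is an assumption, so verify it in regedit on your system, and remember that a reboot is still required afterwards.

# Hedged sketch: enable Director GUI trace logging on Windows.
# The value type (REG_SZ here) is an assumption -- check regedit and adjust if needed.
import winreg

KEY_PATH = r"SOFTWARE\Tivoli\Director\CurrentVersion"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "LogOutput", 0, winreg.REG_SZ, "1")
# Reboot the server afterwards for the change to take effect.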

12.4.7 Getting started


In this section, we provide detailed steps for using VPA with predefined performance parameters (a workload profile) to obtain advice on optimal volume placement in your environment. For detailed steps on creating customized workload profiles, refer to 12.4.8, Creating and managing workload profiles on page 508. To use VPA with a customized workload profile, the major steps are:
- Create a data collection task in Performance Manager. To utilize the VPA, you must first have a useful amount of performance data collected from the device you want to examine. Refer to Performance Manager data collection on page 429 for more detailed instructions regarding use of the Performance data collection feature of the Performance Manager.
- Schedule and run a successful performance data collection task. It is important to have an adequate amount of historical data to provide a statistically relevant sampling population.
- Create or use a user-defined workload profile.
- Use the Volume Performance Advisor to:
  - Add devices
  - Specify settings
  - Select a workload profile (predefined or user defined)
  - View profile details
  - Choose candidate locations
  - Verify settings
  - Approve recommendations (or restart the VPA process with different parameters)

Workload profiles
The basic VPA concept, and the storage administrator's goal, is to balance the workload across all device components. This requires detailed ESS configuration information, including all components (clusters, device adapters, logical subsystems, ranks, and volumes).


To express the workload represented by the new volumes, they are assigned a workload profile. A workload profile contains various performance attributes:
- I/O demand, in I/O operations per second per GB of volume size
- Average transfer size, in KB
- Percentage mix of I/O - sequential or random, and read or write
- Cache utilization - percent of cache hits for random reads and cache misses for random writes
- Peak activity time - the time period when the workload is most active
You can create your own workload profile definitions in two ways:
- By copying existing profiles and editing their attributes
- By performing an analysis of existing volumes in the environment
The second option is known as a Workload Analysis. You may select one or more existing volumes, and the historical performance data for these volumes is retrieved to determine their (average) performance behavior over time.

Using VPA with pre-defined workload profile


This section describes a VPA example using a default workload profile. The purpose of this section is to help you become familiar with the VPA tool. However, we recommend that you generate and use your own customized workload profile after gathering performance data, because a customized profile is realistic in terms of your application's performance requirements. The VPA provides five predefined (canned) Workload Profile definitions:
1. OLTP Standard: for a general Online Transaction Processing (OLTP) environment
2. OLTP High: for higher demand OLTP applications
3. Data Warehouse: for data warehousing applications
4. Batch Sequential: for batch applications accessing data sequentially
5. Document Archival: for archival applications, write-once, read-infrequently
Note: Online Transaction Processing (OLTP) is a type of program that facilitates and manages transaction-oriented applications. OLTP is frequently used for data entry and retrieval transactions in a number of industries, including banking, airlines, mail order, supermarkets, and manufacturers. Probably the most widely installed OLTP product is IBM's Customer Information Control System (CICS).

Launching VPA tool


The steps to use a default workload profile to have the Volume Performance Advisor examine and advise you on volume placement are:
1. In the IBM Director Task pane, click Multiple Device Manager.
2. Click Manage Performance.
3. Click Volume Performance Advisor.


4. You can use either of two methods to launch VPA:
a. Drag and drop the VPA icon onto the storage device to be examined (see Figure 12-69).

Figure 12-69 Drag and Drop the VPA icon to the storage device


b. Select the storage device, right-click the device, and select Volume Performance Advisor (see Figure 12-70).

Figure 12-70 Select ESS and right-click for VPA

If a storage device that is not in the scope of the VPA is selected for the drag-and-drop step, the following message opens (see Figure 12-71). Devices such as a CIMOM or an SNMP device will generate this error. Only the ESS is supported at this time.

Figure 12-71 Error launching VPA example

ESS User Validation


If this is the first time you are using the VPA tool for the selected ESS device, the ESS User Validation panel displays as shown in Figure 12-72 on page 486. If you have already validated the ESS user for VPA usage, this panel is skipped and the VPA Settings default panel is launched as shown in Figure 12-77 on page 488.


Figure 12-72 ESS User validation screen example

In the ESS User Validation panel, specify the user name, password, and port for each of the IBM TotalStorage Enterprise Storage Servers (ESSs) that you want to examine. During the initial setup of the VPA, on the ESS User Validation window, you first select the ESS (as shown in Figure 12-73 on page 487) and then enter the correct user name, password, and password verification.

You must click Set after you have entered the correct user name, password, and password verification in the appropriate fields (see the circled area in Figure 12-74 on page 487). When you click Set, the application populates the data you entered (masked) into the correct fields in the Device Information box (see Figure 12-75 on page 487).
If you do not click Set before clicking OK, the following errors appear, depending on what data needs to be entered:
- BWN005921E (ESS Specialist username has not been entered correctly or applied)
- BWN005922E (ESS Specialist password has not been entered correctly or applied)
If you encounter these errors, ensure that you have correctly entered the values in the input fields in the lower part of the ESS user validation window and retry by clicking OK. The ESS user validation window contains the following fields:
Devices table - Select an ESS from this table. It includes device IDs and device IP addresses of the ESS devices on which this task was dropped.
ESS Specialist username - Type a valid ESS Specialist user name for the selected ESS. Subsequent displays of the same information for this ESS show the user name and password that were entered. You can change the user name by entering a new user name in this field.
ESS Specialist password - Type a valid ESS Specialist password for the selected ESS. Any existing password entries are removed when you change the ESS user name.
Confirm password - Type the valid ESS Specialist password again exactly as you typed it in the password field.
ESS Specialist port - Type a valid ESS port number. The default is 80.
Set button - Click to set names, passwords, and ports without closing the panel.
Remove button - Click to remove the selected information.
Add button - Click to invoke the Add devices panel.


OK button - Click to save the changes and close the panel.

Figure 12-73 ESS User validation - select ESS

Figure 12-74 Apply ESS Specialist user defined input

Figure 12-75 Applied ESS Specialist user defined input


Click the OK button to save the changes and close the panel. The application then attempts to access the ESS storage device. The error message in Figure 12-76 can indicate that an incorrect user name or password was used for authentication. The error may also appear if a firewall prevents you from authenticating to the storage device. If this occurs, check that you are using the correct user name and password and that you have the firewall access needed to establish storage device connectivity.

Figure 12-76 Authentication error example

Configuring VPA settings for the ESS diskspace request


After you have successfully completed the User Validation step, the VPA Settings window will open (see Figure 12-77).

Figure 12-77 VPA Settings default panel


You use the Volume performance advisor - Settings window to identify your requirements for host attachment and the total amount of space that you need. You can also use this panel to specify volume number and size constraints, if any. We will begin with our example as shown in Figure 12-78.

Figure 12-78 VPA settings for example


Here we describe the fields in this window:
Total space required (GB) - Type the total space required in gigabytes. The smallest allowed value is 0.1 GB. We requested 3 GB for our example.
Note: You cannot exceed the volume space available for examination on the server(s) you select. To show the error, in this example we selected host Zombie and a total required space of 400 GB; we got the error shown in Figure 12-79.
Action: Retry with different values and look at the server log for details.
Solutions: Select a smaller maximum Total (volume) Space required GB and retry this step. Select more hosts, which will include adequate volume space for this task. You may want to select the box entitled Consider volumes that have already been allocated but not assigned in the performance recommendation.
Enabling the Director log file will generate logs for troubleshooting Director GUI components, including the Performance Manager console. In this example, the file we reference is com.tivoli.console.ConsoleLauncher.stderr (com.tivoli.console.ConsoleLauncher.stdout is also useful). The sample log is shown in Figure 12-80.

Figure 12-79 Error showing exceeded the space requested

Figure 12-80 Director GUI console errorlog

Specify a volume size range button - Click the button to activate the field, then use the Minimum size (GB) spinner and the Maximum size (GB) spinner to specify the range. In this example, we selected 1 GB as minimum and 3 GB as maximum.


Specify a volume quantity range button - Click the button to activate the field, then use the Minimum number spinner and the Maximum number spinner to specify the range.
Consider volumes that have already been allocated but not assigned to hosts in the performance recommendation - If you check this box, VPA will use these types of volumes in the volume performance examination process. When this box (Consider volumes...) is checked and you click Next, the VPA wizard opens the following warning window (see Figure 12-81).

Figure 12-81 Consider volumes - warning window example

Note: The BWN005996W message is a warning (W). You have selected to reuse unassigned existing volumes, which could potentially cause data loss. Go back to the VPA Settings window by clicking OK if you do not want to consider unassigned volumes. Click the Help button for more information.
Explanation: The Volume Performance Advisor will assume that all currently unassigned volumes are not in use, and may recommend the reuse of these volumes. If any of these unassigned volumes are in use, for example as replication targets or for other data replication purposes, and these volumes are recommended for reuse, the result could be potential data loss.
Action: Go back to the Settings window and clear Consider volumes that have already been allocated but not assigned to hosts in the performance recommendation if you do not want to consider volumes that may potentially be used for other purposes. If you want to continue to consider unassigned volumes in your recommendations, then continue.
Host Attachments table - Select one or more hosts from this table. This table lists all hosts (by device ID) known to the ESS that you selected for this task. It is important to choose only hosts for volume consideration that are the same server type. Note also that the VPA takes into consideration the maximum volume limitations of the server type, such as Windows (256 volumes maximum) and AIX (approximately 4000 volumes). If you select a volume range above the server limit, VPA displays an error. In our example we used the host Zombie.
Next button - Click to invoke the Choose workload profile window. You use this window to select a workload profile from a list of existing profile templates.


5. After entering your preferred parameters, click Next, and the Choose workload profile window displays (see Figure 12-82).

Figure 12-82 VPA Choose workload profile window example

Choosing a workload profile


You can use the Choose workload profile window to select a workload profile from a list of existing profiles. The Volume Performance Advisor uses the workload profile and other performance information to advise you about where volumes should be created. For our example we have selected the OLTP Standard default profile type.
Workload profiles table - Select a profile from this table to view or modify. The table lists predefined or existing workload profile names and descriptions. Predefined workload profiles are shipped with Performance Manager. Workload profiles that you previously created, if any, are also listed.
Manage profiles button - Click to invoke the Manage workload profile panel.
Profile details button - Click to see details about the selected profile in the Profile details panel as shown in Figure 12-83 on page 493. Details include the following types of information:
- Total I/O per second per GB
- Random read cache hits
- Sequential and random reads and writes
- Start and end dates
- Duration (days)


Note: You cannot modify the properties of the workload profile from this panel; the panel options are greyed out (inactive). You can make changes to a workload profile from the Manage workload profile panel using Create like.
Next button - Click to invoke the Choose candidate locations window. You can use this panel to select volume locations for the VPA to consider.

Figure 12-83 Properties for OLTP Standard profile

6. After reviewing the properties of the predefined workload profiles, select a workload profile from the table that most closely resembles your workload profile requirements. For our scenario, we selected the OLTP Standard workload name from the Choose workload profile window. We are going to use this workload profile for the LUN placement recommendations.
Name - Shows the default profile name. The following restrictions apply to the profile name:
The workload profile name must be between 1 and 64 characters.
Legal characters are A-Z, a-z, 0-9, -, _, ., and :
The first character cannot be - or _.

Spaces are not acceptable characters.

Description - Shows the description of the workload profile.
Total I/O per second per GB - Shows the Total I/O per second rate for the selected workload profile.
Average transfer size (KB) - Shows the value for the selected workload profile.
Caching information box - Shows the cache hits and destage percentages:
- Random read cache hits - Range from 1 - 100%. The default is 40%.
- Random write destage - Range from 1 - 100%. The default is 33%.
Read/Write information box - Shows the read and write values. The percentages for the four fields must equal 100%:
- Sequential reads - The default is 14%.
- Sequential writes - The default is 23%.
- Random reads - The default is 36%.
- Random writes - The default is 32%.

Peak activity information box - Since we are currently only viewing the properties of an existing profile, the parameters in this box are not selectable, but you may review them as a reference for subsequent usage. After you review the properties in this box, you may click the Close button. When creating a new profile, this box allows you to input the following parameters:
- Use all available performance data radio button. Select this option if you want to include all available performance data previously collected in consideration for this workload profile.
- Use the specified peak activity period radio button. Select this button as an alternative (instead of using the Use all available performance data option) for consideration in this workload profile definition.
- Time setting drop-down menu. Select from the following options for the time setting you want to use for this workload profile: Device time, Client time, Server time, or GMT.
- Past days to analyze spinner. Use this (or manually enter the number) to select the number of days of historical information you want to consider for this workload profile analysis.
- Time Range drop-down lists. Select the Start time and End time to consider using the appropriate fields.

Close button - Click to close the panel. You will be returned to the Choose workload profile window.
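One rule worth noting from the Read/Write information fields described above is that the four percentages must total 100%. The following trivial Python check illustrates the rule; the example values are hypothetical ones that satisfy it, not the panel defaults.

# Simple illustration of the "must equal 100%" rule for the Read/Write percentages.
def validate_mix(seq_read, seq_write, rand_read, rand_write):
    total = seq_read + seq_write + rand_read + rand_write
    if total != 100:
        raise ValueError(f"Read/Write percentages must total 100%, got {total}%")

validate_mix(25, 25, 25, 25)   # hypothetical values that satisfy the rule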


Choosing candidate locations


Select the name of the profile you want to use from the VPA Choose workload profile window, and the Choose Candidate Locations window will open (see Figure 12-84). We chose our OLTP Standard workload profile for the VPA analysis.

Figure 12-84 Choose candidate locations window

You can use the Choose candidate locations page to select volume locations for the performance advisor to consider. You can choose to either include or exclude the selected locations for the advisor's consideration. The VPA uses historical performance information to advise you about where volumes should be created. The Choose candidate locations page is one of the panels the performance advisor uses to collect and evaluate the information.
Device list - Displays device IDs or names for each ESS on which the task was activated (each ESS on which you dropped the Volume advisor icon).
Component Type tree - When you select a device from the Device list, the selection tree opens on the left side of the panel. The ESS component levels are shown in the tree. The following objects might be included:
- ESS
- cluster
- device adapter

- array
- disk group

The component level names are followed by information about the capacity and the disk utilization of the component level. For example, we used the System component level. It shows Component ID - 2105-F20-16603, Type - System, Description - 2105-F20-16603-IBM, Available capacity - 311GB, Utilization - Low (see Figure 12-84 on page 495).
Tip: You can select the different ESS component types, and the VPA will reconsider the volume placement advice based on that particular selection. To familiarize yourself with the options, select each component in turn to determine which component-type-centric advice you prefer before proceeding to the next step.
Select a component type from the tree to display a list of the available volumes for that component in the Candidates table (see Figure 12-84 on page 495). We chose System for this example; it represents the entire ESS system in this case. Click the Add button to add the component selected in the Candidates table to the Selected candidates table. See Figure 12-85, which shows the selected candidate as 2105-F20-16603.

Figure 12-85 VPA Chose candidate locations Component Type tree example (system)


Verify settings for VPA


Click the Next button to invoke the Verify Settings window (see Figure 12-86).

Figure 12-86 VPA Verify settings window example

You can use the Verify settings panel to verify the volume settings that you specified in the previous panels of the VPA.


Approve recommendations
After you have successfully completed the Verify Settings step, click the Next button, and the Approve Recommendations window opens (see Figure 12-87).

Figure 12-87 VPA Recommendations window example

You use the Recommendations window to first view the recommendations from the VPA and then to create new volumes based on those recommendations. In this example, VPA recommends the volume location 16603:2:4:1:1700 in the Component ID column. This means that the recommended volume location is the ESS with ID 16603, Cluster 2, Device Adapter 4, Array 1, and volume ID 1700. With this information, it is also possible to create the volume manually via the ESS Specialist browser interface, or to use VPA to create it. In the Recommendations window of the wizard, you can choose whether the recommendations are to be implemented, and whether to loop around for another set of recommendations. At this time, you have two options (other than to cancel the operation). Make your final selection to Finish, or return to the VPA for further recommendations.
a. If you do not want to assign the volumes using the current VPA advice, or want the VPA to make another recommendation, check only the Make Additional Recommendations box.

b. If you want to use the current VPA recommendation and make additional volume assignments at this time, select both the Implement Recommendations and Make Additional Recommendations check boxes. If you choose both options, you must first wait until the current set of volume recommendations are created, or created and assigned, before continuing. If you make this type of selection, a secondary window will appear which runs synchronously within the VPA.
Tip: Stay in the same VPA session if you are going to implement volumes and add new volumes. This will enable VPA to provide advice for your current selections, checking for previous assignments, and verifying that no other VPA is processing the same volumes.

VPA loopback after Implement Recommendations selected


In the following example, we show the results of a VPA session.
1. In this example, we decided to Implement recommendations and also Make additional recommendations. Hence we selected both check boxes (see Figure 12-88).

Figure 12-88 VPA Recommendation selected check box


2. Click the Continue button to proceed with VPA advice (see Figure 12-88 on page 499).

Figure 12-89 VPA results - in progress panel

3. In Figure 12-89, we can see that the volumes are being created on the server we selected previously. This process takes a little time, so be patient.
4. Figure 12-90 indicates that the volume creation and assignment to the ESS has completed. Be patient; momentarily, the VPA loopback sequence will continue.

Figure 12-90 VPA final results


5. After the volume creation step has successfully completed, the following Settings window will again open so that you may add more volumes (see Figure 12-91).

Figure 12-91 VPA settings default


For the additional recommendations, we decided to use the same server, but we specified the Volume quantity range instead of the Volume size range for the requested space of 2 GB. See Figure 12-92.

Figure 12-92 VPA additional space request


After clicking Next, the Choose Profile panel opens. We selected the same profile as before: OLTP Standard. See Figure 12-93.

Figure 12-93 Choose Profile


After clicking Next, the Choose candidate locations panel opens. We selected Cluster from the Component Type drop-down list. See Figure 12-94.

Figure 12-94 Choose candidate location

The Component Type Cluster shows the Component ID as 2105-F20-16603:2, Type as Cluster, Descriptor as 2, Available capacity as 308GB, and Utilization as Low. This indicates that VPA plans to provision additional capacity on Cluster 2 of this ESS.


After clicking the Add button, Cluster 2 is a selected candidate for the new volume. See Figure 12-95.

Figure 12-95 Choose candidate location - select cluster


Upon clicking Next, the Verify settings panel opens as shown in Figure 12-96.

Figure 12-96 Verify settings


After verifying the settings and clicking Next, the VPA recommendations window opens. See Figure 12-97.

Figure 12-97 VPA recommendations


Since the purpose of this example is only to show the VPA looping, we decided to clear both check boxes, Implement Recommendations and Make additional recommendations. Clicking Finish completed the VPA example (Figure 12-98).

Figure 12-98 Finish VPA panel

12.4.8 Creating and managing workload profiles


The VPA makes volume placement recommendations based on the characteristics of the workload profile. VPA decisions will not be accurate if an improper workload profile is chosen, and this may cause future performance issues for the application. You must have a valid and appropriate workload profile created before using VPA for any application. Therefore, creating and managing workload profiles is an important task, which involves regular upkeep of workload profiles for each application whose disk I/O is served by the ESS. Figure 12-99 on page 509 shows a typical sequence for managing workload profiles.


Figure 12-99 Typical sequence for managing workload profiles

Before using VPA for any additional disk space requirement for an application, you need to:
- Determine the typical I/O workload type of that application
- Have performance data collected which covers peak load time periods
You need to determine the broad category the selected I/O workload fits into, for example whether it is OLTP High, OLTP Standard, Data Warehouse, Batch Sequential, or Document Archival. This is shown as the highlighted box in the diagram. TotalStorage Productivity Center for Disk provides predefined profiles for these workload types, and it allows you to create additional similar profiles by choosing Create like profiles. If you do not find any match with the predefined profiles, you may prefer to create a new profile. While choosing Create like or Create profiles, you also need to specify historical performance data samples covering the peak load activity time period. Optionally, you may specify additional I/O parameters. Upon submitting the Create or Create like profile, the performance analysis is performed and the results are displayed. Depending upon the outcome of the results, you may need to re-validate the parameters for the data collection task and ensure that peak load samples are taken correctly. If the results are acceptable, you may save the profile. This profile can be referenced for future usage by VPA. In Choosing workload profiles on page 510, we cover the step-by-step tasks using an example.


Choosing workload profiles


You can use Performance Manager to select a predefined workload profile or to create a new workload profile based on historical performance data or on an existing workload profile. Performance Manager uses these profiles to create a performance recommendation for volume allocation on an IBM storage server. You can also use a set of Performance Manager panels to create and manage the workload profiles. There are three methods you can use to choose a workload profile as shown in Figure 12-100.

Figure 12-100 Choosing workload profiles

Note: Using a predefined profile does not require pre-existing performance data, but the other two methods require historical performance data from the target storage device.

You can launch the workload profiles management tool using the drag and drop method from the IBM Director console GUI. Drag the Manage Workload Profile task to the target storage device as shown in Figure 12-101.

Figure 12-101 Launch Manage Workload Profile


If you are using the Manage Workload Profile or VPA tool for the first time with the selected ESS device, you will need to complete ESS user validation. This is described in detail in ESS User Validation on page 485. The ESS user validation is the same for the VPA and Manage Workload Profile tools. After successful ESS user validation, the Manage Workload Profile panel opens as shown in Figure 12-102.

Figure 12-102 Manage workload profiles

You can create or manage a workload profile using the following three methods:
1. Selecting a predefined workload profile
Several predefined workload profiles are shipped with Performance Manager. You can use the Choose workload profile panel to select the predefined workload profile that most closely matches your storage allocation needs. The default profiles shipped with Performance Manager are shown in Figure 12-103.

Figure 12-103 Default workload profiles

You can select the properties panel of the respective pre-defined profile to verify the profile details. A sample profile for OLTP Standard is shown in Figure 12-83 on page 493.


2. Creating a workload profile similar to another profile
You can use the Create like panel to modify the details of a selected workload profile. You can then save the changes and assign a new name to create a new workload profile from the existing profile. To Create like a particular profile, these are the tasks involved:
a. Create a performance data collection task for the target storage device. You may need to include multiple storage devices, based on your profile requirements for the application.
b. Schedule the data collection task. You may need to ensure that the data collection task runs over a sufficient period of time that truly represents a typical I/O load of the respective application. The key is to have sufficient historical data.
Tip: A best practice is to schedule the frequency of the performance data collection task so that it covers the peak load periods of I/O activity and has at least a few samples of peak loads. The number of samples depends on the I/O characteristics of the application.
c. Determine the closest workload profile match. Determine how the new workload profile compares with the existing or pre-defined profiles. It may not be an exact fit, but it should be of a somewhat similar type.
d. Create the new similar profile. Using the Manage Workload Profile task, create the new profile. You will need to select the appropriate time period for the historical data that you collected earlier.
In our example, we created a similar profile using the Batch Sequential pre-defined profile. First, we selected the Batch Sequential profile and clicked the Create like button as shown in Figure 12-104.

Figure 12-104 Manage workload profile - create like


The Properties panel for Batch Sequential is opened, as shown in Figure 12-105.

Figure 12-105 Properties for Batch sequential profile

We changed the following values for our new profile:
- Name: ITSO_Batch_Daily
- Description: For ITSO batch applications
- Average transfer size: 20 KB
- Sequential reads: 65%
- Random reads: 10%
- Peak Activity information: We used a time period of the past 24 days, from 12 AM to 11 PM.
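If you want to keep a record of such profile definitions outside the GUI, the following minimal sketch (ours, purely illustrative and not part of the product; the field names are our own invention, not Performance Manager identifiers) shows the kind of parameters a workload profile captures, using the values above.

# Illustrative only: field names are our own, not Performance Manager identifiers.
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    name: str
    description: str
    avg_transfer_size_kb: int   # average I/O transfer size
    sequential_reads_pct: int   # percentage of reads that are sequential
    random_reads_pct: int       # percentage of reads that are random
    peak_window: str            # peak activity sample period

    def validate(self) -> None:
        # The read percentages cannot exceed 100% in total.
        if self.sequential_reads_pct + self.random_reads_pct > 100:
            raise ValueError("read percentages exceed 100%")

profile = WorkloadProfile(
    name="ITSO_Batch_Daily",
    description="For ITSO batch applications",
    avg_transfer_size_kb=20,
    sequential_reads_pct=65,
    random_reads_pct=10,
    peak_window="past 24 days, 12 AM to 11 PM",
)
profile.validate()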


We saved our new profile (see Figure 12-106).

Figure 12-106 New Profile


This new profile, ITSO_Batch_Daily, is now available in the Manage workload profile panel, as shown in Figure 12-107. This profile can now be used for VPA analysis. This completes our example.

Figure 12-107 Manage profile panel with new profile

3. Creating a new workload profile from historical data
You can use the Manage workload profile panel to create a workload profile based on historical data about existing volumes. You can select one or more volumes as the base for the new workload profile. You can then assign a name to the workload profile, optionally provide a description, and finally create the new profile. To create a new workload profile, click the Create button as shown in Figure 12-108.

Figure 12-108 Create a new workload profile


This launches a new panel for creating a workload profile, as shown in Figure 12-109. At this stage, you will need to specify the volumes for performance data analysis. In our example, we selected all volumes. To select multiple volumes, but not all, click the first volume, hold the Shift key, and click the last volume in the list. After all the required volumes are selected (shown in dark blue), click the Add button. See Figure 12-109.
Note: The ESS volumes you specify should be representative of the I/O behavior of the application for which you are planning to allocate space using the VPA tool.

Figure 12-109 Create new profile and add volumes


Upon clicking the Add button, all the selected volumes will be moved to the selected volumes box as shown in Figure 12-110.

Figure 12-110 Selected volumes and performance period for new workload profile

In the Peak activity information box, you will need to specify an activity sample period for volume performance analysis. You can select the option Use all available performance data or select Use the specified peak activity period. Based on your application's peak I/O behavior, you may specify the sample period with a Start date, Duration in days, and Start/End time. For the time setting, you can choose from the drop-down box: Device time, Client time, Server time, or GMT.


After you have entered all the fields, click Next. You will see the Create workload profile Review panel as shown in Figure 12-111.

Figure 12-111 Review new workload profile parameters

You can specify a Name for the new workload profile and a Description. You may provide a detailed description that covers:
- The application name for which the profile is being created
- What application I/O activity is represented by the peak activity sample
- When it was created
- Who created it (optional)
- Any other relevant information your organization requires
In our example, we created a profile named New_ITSO_app1_profile. At this point you may click Finish, and TotalStorage Productivity Center for Disk will begin the volume performance analysis based on the parameters you have provided. This process may take some time depending on the number of volumes and the sampling time period, so be patient. Finally, it shows the outcome of the analysis.


In our example, we got the results notification message shown in Figure 12-112. The analysis found that the results are not statistically significant, as indicated by the message BWN005965E: Analysis results are not significant. This may indicate that:
- There is not enough I/O activity on the selected volumes, or
- The time period chosen for sampling is not correct, or
- The correct volumes were not chosen
You have the option to Save or Discard the profile. We decided to save the profile.

Figure 12-112 Results for Create Profile

Upon saving the profile, it is now listed in the Manage workload profile panel as shown in Figure 12-113.

Figure 12-113 Manage workload profile with new saved profile

The new profile can now be referenced by VPA for future usage.


Chapter 13. Using TotalStorage Productivity Center for Data


This chapter introduces you to TotalStorage Productivity Center for Data and discusses the available functions. The information in this chapter helps you accomplish the following tasks:
- Discover and monitor storage assets enterprise-wide
- Report on enterprise-wide assets, files and filesystems, databases, users, and applications
- Provide alerts (set by the user) on issues such as capacity problems and policy violations
- Support chargebacks by usage or capacity


13.1 TotalStorage Productivity Center for Data overview


This section describes the business purpose of TotalStorage Productivity Center for Data (Data Manager), its architecture, components, and supported platforms.

13.1.1 Business purpose of TotalStorage Productivity Center for Data


The primary business purpose of TotalStorage Productivity Center for Data is to help the storage administrator keep data available to applications so the company can produce revenue. Through monitoring and reporting, TotalStorage Productivity Center for Data helps the storage administrator prevent outages in the storage infrastructure. Armed with timely information, the storage administrator can take action to keep storage and data available to the application. TotalStorage Productivity Center for Data also helps to make the most efficient use of storage budgets by allowing administrators to use their existing storage more efficiently, and more accurately predict future storage growth.

13.1.2 Components of TotalStorage Productivity Center for Data


At a high level, the major components of TotalStorage Productivity Center for Data are:
- The Server, running on a managing server, with access to a database repository
- Agents, running on one or more Managed Devices
- Clients (using either a locally installed GUI or a browser-based Web GUI), which users and administrators use to perform storage monitoring tasks

Data Manager Server


The Data Manager Server:
- Controls the discovery, reporting, and Alert functions
- Stores all data in the central repository
- Issues commands to Agents for jobs (either scheduled or ad hoc)
- Receives requests from the user interface clients for information, and retrieves the requested information from the central data repository
- Extends filesystems automatically
- Reports on the IBM TotalStorage Enterprise Storage Server (ESS) and can also provide LUN provisioning
An RDBMS (either local or remote) manages the repository of data collected from the Agents, and the reporting and monitoring capabilities defined by the users.

WWW Server
The Web Server is optional, and handles communications to allow remote Web access to the Server. The WWW Server can run on the same physical server as the Data Manager Server.

Data Agent (on a Managed System)


The Agent runs Probes and Scans, collects storage-related information from the managed system, and forwards it to the Manager to be stored in the database repository, and acted on if so defined. An Agent is required for every host system to be monitored, with the exception of NetWare and NAS devices.


Novell NetWare and NAS devices do not currently support locally installed Agents; they are managed through an Agent installed on a machine that uses (accesses) the NetWare or NAS device. The Agent will discover information on the volumes or filesystems that are accessible to the Agent's host. The Agents are quite lightweight. Agents listen for commands from the Server, and then perform a Probe (against the operating system), and/or a Scan (against selected filesystems). Normal operations might see one scheduled Scan per day or week, plus various ad hoc Scans. Scans and Probes are discussed later in this chapter.

Clients (direct-connected and Web connected)


Direct-connect Clients have the GUI to the Server installed locally. They communicate directly to the Manager to perform administration, monitoring, and reporting. The Manager retrieves information requested by the Clients from the database repository. Web-connect clients use the WWW Server to access the user interface through a Web browser. The Java administrative applet is downloaded to the Web Client machine and presents the same user interface that Direct-connect Clients see.

13.1.3 Security considerations


TotalStorage Productivity Center for Data has two security levels: non-administrative users and administrators.
Non-administrator users can:
- View the data collected by TotalStorage Productivity Center for Data
- Create, generate, and save reports
Administrators can:
- Create, modify, and schedule Pings, Probes, and Scans
- Create, generate, and save reports
- Perform administrative tasks and customize the TotalStorage Productivity Center for Data environment
- Create Groups, Profiles, Quotas, and Constraints
- Set Alerts

13.2 Functions of TotalStorage Productivity Center for Data


An overview of the functions of TotalStorage Productivity Center for Data is provided in this section and explored in detail in the rest of the chapter. TotalStorage Productivity Center for Data is designed to be easy to use and quick to install, with flexible and powerful configuration. The main functions of the product are:
- Automatically discover and monitor disks, partitions, shared directories, and servers
- Reporting to track asset usage and availability:
  - Physical inventory: disks, partitions, servers
  - Logical inventory: filesystems and files, databases and tables
- Forecasting demand versus capacity
- Standardized and customized reports, on-demand and batched
- Various user-defined levels of grouping, from summary level down to the individual file for user ID granularity
- Alerts: execute scripts, e-mail, SNMP traps, event log
- Quotas
- Chargeback

13.2.1 Basic menu displays


Figure 13-1 shows the main menu for TotalStorage Productivity Center for Data. You can see that the Agents configured show under the Agents entry. This display thus shows a quick summary of the state of each Agent. There are several icons to indicate the status of the Agents:
- Green circle: Agent is communicating with the Server
- Red crossed circle: Agent is down
- Red triangle: Agent on that system is not reachable
- Red crossed square: Agent was connected, but an update of the TotalStorage Productivity Center for Data agent is currently running

Figure 13-1 Agent summary


Figure 13-2 shows the TotalStorage Productivity Center for Data dashboard. This is the default right-hand pane display when you start TotalStorage Productivity Center for Data and shows a quick summary of the overall health of the storage environment. It can quickly show you potential problem areas for further investigation.

Figure 13-2 TotalStorage Productivity Center for Data - dashboard

The dashboard contains four viewable areas, which cycle among seven pre-defined sets of panels. To cycle, use the Cycle Panels button. Use the Refresh button to update the display.

Enterprise-wide summary
The Enterprise-wide Summary panel shows statistics accumulated from all the Agents. The statistics are:
- Total filesystem capacity available
- Total filesystem capacity used
- Total filesystem free capacity
- Total allocated and unallocated disk space
- Total disk space unallocated to filesystems
- Total LUN capacity
- Total usable LUN capacity
- Total number of monitored servers
- Total number of unmonitored servers
- Total number of storage subsystems
- Total number of users
- Total number of disks
- Total number of LUNs
- Total number of filesystems
- Total number of directories
- Total number of files


Filesystem Used Space


This panel displays a pie chart showing the distribution of used and free space in all filesystems. Different chart types can be selected here. This provides a quick snapshot of your filesystem space utilization efficiency.

Users Consuming the Most Space


By default this panel displays a bar chart (different chart types can be selected) of the users who are using the largest amount of filesystem space.

Monitored Server Summary


This panel shows a table of total disk filesystem capacity for the monitored servers sorted by OS type.

Filesystems with Least Free Space Percentage


This panel shows a table of the most full filesystems, including the percent of space free, the total filesystem capacity, and the filesystem mount point.

Users Consuming the Most Space Report


This panel shows the same information as the Users Consuming the Most Space panel, but in a table format.

Alerts Pending
This panel shows active Alerts that have been triggered but are still pending.

13.2.2 Discover and monitor Agents, disks, filesystems, and databases


TotalStorage Productivity Center for Data uses three methods to discover information about the assets in the storage environment: Pings, Probes, and Scans. These are typically set up to run automatically as scheduled tasks. You can define different Ping, Probe, and Scan jobs to run against different Agents or groups of Agents (for example, to run a regular Probe of all Windows systems) according to your particular requirements.

Pings
A Ping is a standard ICMP Ping which checks registered Agents for availability. If an Agent does not respond to a Ping (or a pre-defined number of Pings), you can set up an Alert to take some action. The actions could be one, any, or all of:
- SNMP trap
- TEC Event
- Notification at login
- Entry in the Windows event log
- Run a script
- Send e-mail to a specified user(s)


Pings are used to generate Availability Reports, which list the percentage of times a computer has responded to the Ping. An example of an Availability Report for Ping is shown in Figure 13-3. Availability Reports are discussed in detail in 13.11.3, Availability Reporting on page 604.

Figure 13-3 Availability Report - Ping

Probes
Probes are used to gather information about the assets and system resources of monitored servers, such as processor count and speed, memory size, disk count and size, filesystems, etc. The data collected by the Probe process is used in the Asset Reports described in 13.11.1, Asset Reporting on page 595. Figure 13-4 shows an Asset report for detected disks.

Figure 13-4 Asset Report of discovered disks


Figure 13-5 shows an Asset Report for detected database tablespaces.

Figure 13-5 Asset Report of database tablespaces

Scans
The Scan process is used to gather statistics about usage and trends of the server storage. Data collected by the Scan jobs are tailored by Profiles. Results of Scan jobs are stored in the enterprise repository. This data supplies the data for the Capacity, Usage, Usage Violations, and Backup Reporting functions. These reports can be scheduled to run regularly, or they can be run ad hoc by the administrator.

Profiles limit the scanning according to the parameters specified in the Profile. Profiles are used in Scan jobs to specify what file patterns will be scanned, what attributes will be gathered, what summary view will be available in reports, and the retention period for the statistics. TotalStorage Productivity Center for Data supplies a number of default Profiles which can be used, or additional Profiles can be defined. Table 13-1 on page 547 shows the default Profiles provided. Some of these include:
- Largest files: Gathers statistics on the largest files
- Largest directories: Gathers statistics on the largest directories
- Most at risk: Gathers statistics on the files that were modified the longest time ago and have not been backed up since being modified (Windows Agents only)


Figure 13-6 shows a sample of a report produced from data collected in Scans.

Figure 13-6 Summary View - by filesystem, disk space used and disk space free

This report shows a list of the filesystems on each Agent, the amount of space used in each, expressed in bytes and as a percentage, the amount of free space, and the total capacity available in the filesystem.

13.2.3 Reporting
Reporting in TotalStorage Productivity Center for Data is very powerful, with over 300 pre-defined views, and the capability to customize those standard views, save the custom report, and add it to your menu for scheduled or ad hoc reports. You can also create your own individual reports according to particular needs and set them to run as needed, or in batch (regularly). Reports can be produced in table format or in a variety of charting (graph) views. You can export reports to CSV or HTML formats for external usage. Reports are generated against data already in the repository. A common practice is to schedule Scans and Probes just before running reports. Reporting can be done at almost any level in the system, from the enterprise down to a specific entity and any level in between. Figure 13-6 shows a high-level summary report. Or, you can drill down to something very specific. Figure 13-7 is an example of a lower-level report, where the administrator has focused on a particular Agent, KANAGA, to look at a particular disk on a particular controller.


Figure 13-7 Asset Report - KANAGA assets

Reports can be produced either system-wide or grouped into views, such as by computer or OS type.
Restriction: Currently, there is a maximum of 32,767 (2^15 - 1) rows per report. Therefore, you cannot produce a report to list all the .HTM files in a directory containing a million files. However, you can (and it would be more productive to do so) produce a report of the 20 largest files in the directory, or the 20 oldest files, for example.

TotalStorage Productivity Center for Data allows you to group information about similar entities (disks, filesystems, etc.) from different servers or business units into a summary report, so that business and technology administrators can manage an enterprise infrastructure. Or, you can summarize information from a specific server; the flexibility and choice of configuration is entirely up to the administrator. You can report as of a point in time, or produce a historical report showing storage growth trends over time. Reporting lets you track actual demand for disk over time, and then use this information to forecast future demand for the next quarter, two quarters, year, and so on. Figure 13-8 is an example of a historical report, showing a graph of the number of files on the C drive on the Agent KANAGA.


Figure 13-8 Historical report of filesystem utilization

TotalStorage Productivity Center for Data has three basic types of reports:
- Computers and filesystems
- Databases
- Chargeback

Reporting categories
Major reporting categories for filesystems and databases are:
- Asset Reporting uses the data collected by Probes to build a hardware inventory of the storage assets. You can then navigate through a hierarchical view of the assets by drilling down through computers, controllers, disks, filesystems, directories, and exports. For database reporting, information on instances, databases, tables, and data files is presented for reporting.
- Storage Subsystems Reporting provides information showing storage capacity at a computer, filesystem, storage subsystem, LUN, and disk level. These reports also enable you to view the relationships among the components of a storage subsystem. For a list of supported devices, see <table>.
- Availability Reporting shows responses to Ping jobs, as well as computer uptime.
- Capacity Reporting shows how much storage capacity is installed, how much of the installed capacity is being used, and how much is available for future growth. Reporting is done by disk and filesystem, and for databases, by database.
- Usage Reporting shows the usage and growth of storage consumption, grouped by filesystem, and computers, individual users, or enterprise-wide.
- Usage Violation Reporting shows violations of the corporate storage usage policies, as defined through TotalStorage Productivity Center for Data. Violations are either of Quota (defining how much storage a user or group of users is allowed) or Constraint (defining which file types, owners, and file sizes are allowed on a computer or storage entity). You can define what action should be taken when a violation is detected, for example, an SNMP trap, e-mail, or running a user-written script.
- Backup Reporting identifies files which are at risk because they have not been backed up.

Reporting on the Web


It is easy to customize TotalStorage Productivity Center for Data to set up a reports Web site, so that anyone in the organization can view selected reports through their browser. 13.16, Setting up a reports Web site on page 698 explains how to do this. Figure 13-9 shows an example of a simple Web site to view TotalStorage Productivity Center for Data reports.

Figure 13-9 TotalStorage Productivity Center for Data Reports on the Web

13.2.4 Alerts
An Alert defines an action to be performed if a particular event occurs or condition is found. Alerts can be set on physical objects (computers and disks) or on logical objects (filesystems, directories, users, databases, and OS user groups). Alerts can tell you, for instance, if a disk has a lot of recent defects, or if a filesystem or database is approaching capacity. Alerts on computers and disks come from the output of Probe jobs and are generated for each object that meets the triggering condition. If you have specified a triggered action (running a script, sending an e-mail, etc.), then that action will be performed if the condition is met. Alerts on filesystems, directories, users, and OS user groups come from the combined output of a Probe and a Scan. Again, if you have specified an action, that action will be performed if the condition is met. An Alert will register in the Alert log, plus you can also define one, some, or all of the following actions to be performed in addition:
- Send an e-mail indicating the nature of the Alert
- Run a specific script with relevant parameters supplied from the content of the Alert
- Make an entry into the Windows event log
- Pop up the next time the user logs in to TotalStorage Productivity Center for Data
- Send an SNMP trap
- Log a TEC event
Refer to 13.4, OS Alerts on page 555 for details on alerts.

13.2.5 Chargeback: Charging for storage usage


TotalStorage Productivity Center for Data provides the ability to produce Chargeback information for storage usage. The following items can have charges allocated against them:
- Operating system storage by user
- Operating system disk capacity by computer
- Storage usage by database user
- Total size by database tablespace
TotalStorage Productivity Center for Data can directly produce an invoice or create a file in CIMS format. CIMS is a set of resource accounting tools that allow you to track, manage, allocate, and charge for IT resources and costs. For more information on CIMS see the Web site:
http://www.cims.com

Chargeback is a very powerful tool for raising the awareness within the organization of the cost of storage, and the need to have the appropriate tools and processes in place to manage storage effectively and efficiently. Refer to 13.17, Charging for storage usage on page 700 for more details on Chargebacks.
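As a simple illustration of the usage-based charging idea, the following sketch (ours, not part of the product; the per-gigabyte rate and the usage figures are invented) shows how a charge could be derived from capacity numbers of the kind the Chargeback reports provide.

# Illustrative only: the rate and usage figures are invented for this example.
RATE_PER_GB = 0.85  # hypothetical monthly charge per GB of used storage

usage_gb_by_user = {"user_a": 120.0, "user_b": 75.5, "user_c": 210.25}

for user, used_gb in usage_gb_by_user.items():
    charge = used_gb * RATE_PER_GB
    print(f"{user:12s} {used_gb:8.2f} GB  ${charge:8.2f}")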

13.3 OS Monitoring
The Monitoring features of TotalStorage Productivity Center for Data enable you to run regularly scheduled or ad hoc data collection jobs. These jobs gather statistics about the storage assets and their availability and their usage within your enterprise, and make the collected data available for reporting. This section gives a quick overview of the monitoring jobs, and explains how they work through practical examples. Reporting on the collected data is explained in Data Manager reporting capabilities on page 592.


13.3.1 Navigation tree


Figure 13-10 shows the complete navigation tree for OS Monitoring which includes Groups, Discovery, Pings, Probes, Scans, and Profiles.

Figure 13-10 OS Monitoring tree

Except for Discovery, you can create multiple definitions for each of those monitoring features of TotalStorage Productivity Center for Data. To create a new definition, right-click the feature and select Create <feature>. Figure 13-11 shows how to create a new Scan job.

Figure 13-11 Create Scan job creation


Once saved, any definition within TotalStorage Productivity Center for Data can be updated by clicking the object. This will put you in Edit mode. Save your changes by clicking the floppy disk icon in the top menu bar. Discovery, Pings, Probes, and Scan menus contain jobs that can run on a scheduled basis or ad hoc. To execute a job immediately, right-click the job then select Run Now (see Figure 13-12). Each execution of a job creates a time-stamped output that can be displayed by expanding the tree under the job (you may need to right-click the job and select Refresh Job List).

Figure 13-12 OS Monitoring - Jobs list

The color of the job output represents the job status:
- Green: Successful run
- Brown: Warnings occurred during the run
- Red: Errors occurred during the run
- Blue: Running jobs
To view the output of a job, double-click the job. Groups and Profiles are definitions that may be used by other jobs; they do not produce an output in themselves. As shown in Figure 13-12, all objects created within Data Manager are prefixed with the user ID of the creator. Default definitions, created during product installation, are prefixed with TPCUser.Default. Groups, Discovery, Probes, Scans, and Profiles are explained in the following sections.

13.3.2 Groups
Before defining monitoring and management jobs, it may be useful to group your resources so you can limit the scope of monitoring or data collection.


Computer Groups
Computer Groups allow you to target management jobs on specific computers based on your own criteria. Some criteria you might consider for grouping computers are platform type, application type, database type, and environment type (for example, test or production). Our lab environment contains Windows 2000 servers. In order to target specific servers for monitoring based on OS and/or database type, we defined the following groups:
- Windows Systems
- Windows DB Systems
To create the first group, expand Data Manager → Monitoring → Groups → Computer, right-click Computer and select Create Computer Group. Our first group will contain all Windows systems as shown in Figure 13-13. To add or remove a host from the group, highlight it in either the Available or Current Selections panel and use the arrow buttons. You can also enter a meaningful description in the Description field.

Figure 13-13 Computer Group definition

To save the new Group, click the floppy disk icon in the menu bar, and enter the Group name in the confirmation box shown in Figure 13-14.

Figure 13-14 Save a new Computer Group


We created the other group using the same process, and named it Windows DB Systems.
Important: To avoid redundant data collection, a computer can belong to only one Group at a time. If you add a system that is already in a Group to a second Group, it will automatically be removed from the first Group.

Figure 13-15 shows the final Group configuration, with the members of the Windows Systems group.

Figure 13-15 Final Computers Group definitions

Note: The default group TPCUser.Default Computer Group contains all servers that have been discovered, but not yet assigned to a Group.

Filesystem Groups
Filesystem Groups are used to associate together filesystems from different computers that have some commonality. You can then use this group definition to focus the Scan and Alert processes on those filesystems. To create a Filesystem Group, you have to select explicitly each filesystem for each computer you want to include in the group. There is no way to do a grouped selection, for example, the / (root) filesystem for all UNIX servers or C:\ for all Windows platforms.
Note: As for computers, a filesystem can belong to only one Group.


Directory Groups
Use Directory Groups to group together directories to which you want to apply the same storage management rules. Figure 13-16 shows the Directory Group definition screen, reached by going to Data Manager → Monitoring → Groups → Directory, right-clicking Directory, and selecting Create Directory Group.

Figure 13-16 Directory group definition

The Directory Group definition has two views for directory selection:
- Use directories by computer to specify several directories for one computer.
- Use computers by directory to specify one directory for several computers.
The button on the bottom of the screen toggles between New computer and New directory depending on the view you select.


We will define one Directory Group with a DB2 directory for a specific computer (Colorado). To define the Group:
1. Select directories by computer.
2. Click New computer.
3. Select colorado from the pull-down Computer field.
4. Enter C:\DB2\NODE0000 in the Directories field and click Add (see Figure 13-17).

Figure 13-17 Directories for computer configuration

5. Click OK.
6. Save the group as DB2 Node.
Figure 13-18 shows our final Groups configuration and details of the OracleArchive Group.

Figure 13-18 Final Directories Group definition


User Groups
You can define Groups made up of selected user IDs. These groupings will enable you to easily define and focus storage management rules, such as scanning and Constraints, on the defined IDs.
Note: You can include in a User Group only user IDs that are defined on the discovered hosts and that have files belonging to them.

Note: As with computers, a user can be defined in only one Group.

OS User Group Groups


You can define Groups consisting of operating system user groups such as Administrators for Windows or adm for UNIX. To define a Group consisting of user groups, select OS User Group from the Groups entry on the left hand panel. Note: As for users, an OS User Group will be added to the list of available Groups only when a Scan job finds at least one file owned by a user belonging to that Group.

Note: As with users, an OS User Group can belong to only one Group at a time.

13.3.3 Discovery
The Discovery process is used to discover new computers within your enterprise that have not yet been monitored by Data Manager. The discovery process will:
- Request a list of Windows systems from the Windows Domain Controller
- Contact, through SNMP, all NAS filers and check if they are registered in the nas.config file
- Discover all NetWare servers in the NetWare trees reported by Agents
- Search UNIX Agents' mount tables, looking for remote filesystems, and discover NAS filers
More details of NAS and NetWare discovery are given in the manual IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886. Use the path Data Manager → Monitoring → Discovery to change the settings of the Discovery job. The following options are available.

When to run tab


The initial tab, When to Run (Figure 13-19), is used to modify the scheduling settings. You can specify to execute the Discovery:
- Now: Run once when the job is saved
- Once: Run at a specified time in the future
- Repeatedly: Choose the frequency in minutes, hours, days, weeks, or months. You can limit the run to specific days of the week.


Figure 13-19 Discovery When to Run options

Alert tab
The second tab, Alert, enables you to be notified when a new computer is discovered. See 13.4, OS Alerts on page 555 for more details on the Alerting process.

Options tab
The third tab, Options (Figure 13-20) sets the discovery runtime properties.

Figure 13-20 Discovery job options


Uncheck the Skip Workstations field if you want to discover the Windows workstations reported by the Windows Domain Controller.

13.3.4 Pings
The Ping process will:
- Launch TCP/IP pings against monitored computers
- Generate statistics on computer availability in the central repository
- Generate an Alert if the process fails because of an unavailable host
Pings gather statistics about the availability of monitored servers. The scheduled job will Ping your servers and consider them active if it gets an answer. This is purely ICMP-protocol based; there is no measurement of individual application availability. When you create a new Ping job, you can set the following options.

Computers tab
Figure 13-21 shows the Computers tab, which is used to limit the scope of the computers that are to be Pinged.

Figure 13-21 Ping job configuration - Computers


When to Ping tab


The tab, When to PING, sets the frequency used for checking. We selected a frequency of 10 minutes as shown in Figure 13-22 on page 543.

Figure 13-22 Ping job configuration - When to Ping

Options tab
On the Options tab, you specify how often the Ping statistics are saved in the database repository. By default, TotalStorage Productivity Center for Data keeps its Ping statistics in memory for eight Pings before flushing them to the database and calculating an average availability. You can change the flushing interval to another time amount, or a number of Pings (for example, to calculate availability after every 10 Pings). The system availability is calculated as:
(Count of successful pings) / (Count of pings)

A lower interval can increase database size, but gives you more accuracy on the availability history. We selected to save to the database at each Ping, which means we will have an availability of either 100% or 0%, but we have a more granular view of the availability of our servers (Figure 13-23).


Figure 13-23 Ping job configuration - Options
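To make the trade-off concrete, here is a minimal sketch (ours, not product code; the ping results are invented) that applies the availability formula above for the two flush choices, once averaged over eight Pings and once per Ping.

# Illustrative only: ping results are invented to show the effect of the flush interval.
def availability(ping_results):
    """Availability = successful pings / total pings, as a percentage."""
    return 100.0 * sum(ping_results) / len(ping_results)

results = [True, True, False, True, True, True, True, True]  # 8 pings, 1 failure

# Flushed once after 8 pings: a single averaged value.
print(availability(results))                   # 87.5

# Flushed after every ping: each saved sample is either 100% or 0%.
print([availability([r]) for r in results])    # [100.0, 100.0, 0.0, 100.0, ...]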

Alert tab
The Alert tab (shown in Figure 13-24) is used to generate Alerts for each host that is unavailable. Alert mechanisms are explained in more detail in 13.4, OS Alerts on page 555. You can choose any Alert type from the following:
- SNMP trap to send a trap to the Event manager defined in Administrative services → Configuration → General → Alert Disposition
- TEC Event to send an event to a Tivoli Enterprise Console
- Login Notification to direct the Alert to the specified user in the Alert Log (see 13.4, OS Alerts on page 555)
- Windows Event Log to generate an event to the Windows event log
- Run Script to run a script on the specified server
- Email to send a mail to the specified user through the Mail server defined in Administrative services → Configuration → General → Alert Disposition


Figure 13-24 Ping job configuration - Alert

We selected to run a script that will send popup messages to selected administrators. The script is listed in Example 13-1. Optimally, you would send an event to a central console such as the Tivoli Enterprise Console. Note that certain parameters are passed to the script - more information is given in Alerts tab on page 560.
Example 13-1 Script PINGFAILED.BAT
REM %1 is the computer name and %2 the number of missed pings, passed in by the Alert
net send /DOMAIN:Colorado Computer %1 did not respond to last %2 ping(s). Please check it

We then saved the Ping job as PingHosts, and tested it by right-clicking and selecting Run now. As the hosts did not respond, we received notifications as shown in Figure 13-25.

Figure 13-25 Ping failed popup for GALLIUM

More details about the related reporting features of TotalStorage Productivity Center for Data are in 13.11.3, Availability Reporting on page 604.

13.3.5 Probes
The Probe process will:
- Gather Assets data on monitored computers
- Store data in the central repository
- Generate an Alert if the process fails


The Probe process gathers data about the assets and system resources of Agents such as:
- Memory size
- Processor count and speed
- Hard disks
- Filesystems
The data collected by the Probe process is used by the Asset Reports described in 13.11.1, Asset Reporting on page 595.

Computers tab
Figure 13-26 shows that we included the TPCUser.Default Computer Group in the Probe so that all computers, including those not yet assigned to an existing Group, will be Probed. We saved the Probe as ProbeHosts.

Figure 13-26 New Probe configuration

Important: Only the filesystems that have been returned by a Probe job will be available for further use by Scan, Alerts, and policy management within TotalStorage Productivity Center for Data.

When to Probe tab


This tab has the same configuration as for the Ping process. We set up a weekly Probe to run on Sunday for all computers. We recommend running the Probe job at a time where all the production data you want to monitor is available to the system.

Alert tab
As this is not a business-critical process, we asked to be alerted by mail for any failed Probe. Figure 13-27 shows the default mail text configuration for a Probe failure.


Figure 13-27 Probe alert - mail configuration

13.3.6 Profiles
Profiles are used in Scan jobs to specify:
- The pattern of files to be scanned
- The attributes of files to be gathered
- The summary view (directories and filesystems, user IDs, OS user groups) that will be available in reports
- The statistics retention period
TotalStorage Productivity Center for Data provides default profiles that supply data for all the default reports. Specifying correct profiles avoids gathering unnecessary information that may lead to space problems within the repository. However, you will not be able to report on, or check Quotas for, files that are not covered by the Profile. Data Manager comes with several default profiles (shown in Table 13-1), prefixed with TPCUser, which can be reused in any Scan jobs you define.
Table 13-1 Default profiles
- BY_ACCESS: Gathers statistics by length of time since last access of files.
- BY_CREATION: Gathers statistics by length of time since creation of files.


- BY_MOD_NOT_BACKED_UP: Gathers statistics by length of time since last modification (only for files not backed up since modification). Windows only.
- BY_MODIFICATION: Gathers statistics by length of time since last modification of files.
- FILE_SIZE_DISTRIBUTION: Gathers file size distribution statistics.
- LARGEST_DIRECTORIES: Gathers statistics on the n largest directories. (20 is the default amount.)
- LARGEST_FILES: Gathers statistics on the n largest files. (20 is the default amount.)
- LARGEST_ORPHANS: Gathers statistics on the n largest orphan files. (20 is the default amount.)
- MOST_AT_RISK: Gathers statistics on the n files that have been modified the longest time ago and have not yet been backed up since they were modified. Windows only. (20 is the default amount.)
- OLDEST_ORPHANS: Gathers statistics on the n oldest orphan files. (20 is the default amount.)
- MOST_OBSOLETE_FILES: Gathers statistics on the n most obsolete files (that is, files that have not been accessed or modified for the longest period of time). (20 is the default amount.)
- SUMMARY_BY_FILE_TYPE: Summarizes space usage by file extension.
- SUMMARY_BY_FILESYSTEM/DIRECTORY: Summarizes space usage by Filesystem or Directory.
- SUMMARY_BY_GROUP: Summarizes space usage by OS Group.
- SUMMARY_BY_OWNER: Summarizes space usage by Owner.
- TEMPORARY_FILES: Gathers statistics on network-wide space consumed by temporary files.
- WASTED_SPACE: Gathers statistics on non-OS files not accessed in the last year and orphaned files.

Those default profiles, when set in a Scan job, gather data needed for all the default Data Manager reports. As an example, we will define an additional Profile to limit a Scan job to the 500 largest Postscript or PDF files unused in the last six months. We also want to keep weekly statistics at a filesystem and directory level for two weeks.

Statistics tab
On the Statistics tab (shown in Figure 13-28), we specified:
- Retain filesystem summary for two weeks
- Gather data based on creation date
- Select the 500 largest files
The Statistics tab is used to specify the type of data that is gathered, and has a direct impact on the type of reports that will be available. In our specific case, the Scan associated with this profile will not create data for reports based on user IDs and user groups. Neither will it create data for reports on directory size.


Figure 13-28 New Profile - Statistics tab

The Summarize space usage by section of the Statistics tab specifies how the space usage data must be summarized. If no summary level is checked, the data will not be summarized, and therefore will not be available for reporting in the corresponding level of the Usage Reporting section of TotalStorage Productivity Center for Data. In our particular case, because we selected to summarize by filesystem and directory, we will see space used by PDF and Postscript files at those levels, provided we set up the Scan profile correctly. See 13.3.7, Scans on page 552 for information on this. We will not see which users or groups have allocated those PDF and Postscript files.
Restriction: For Windows servers, users and groups statistics will not be created for FAT filesystems.
The Accumulate history section sets the retention period of the collected data. In this case, we will see a weekly summary for the last two weeks. The Gather statistics by length of time since section sets the base date used to calculate the file load. It determines if data will be gathered and summarized for the Data Manager → Reporting → Usage → Files reporting view. The Gather information on the section sets the number of files to retrieve for each of the report views available under Data Manager → Reporting → Usage → Access Load.

Files filter tab


The Files filter tab is used to limit the scope of files that are returned by the Scan job. To create a selection, right-click the All files selected context-menu option as shown in Figure 13-29.


Figure 13-29 New Profile - File filter

With the New Condition menu you can create a single filter on the files, while New Group enables you to combine several conditions with:
- All of: The file is selected if all conditions are met (AND)
- Any of: The file is selected if at least one condition is met (OR)
- None of: The file is NOT selected if at least one condition is met (NOT OR)
- Not all of: The file is selected if none of the conditions are met (NOT AND)

The Condition Group can contain individual conditions or other condition groups. Each individual condition will filter files based on one of the listed items:
- Name
- Last access time
- Last modified
- Creation time
- Owner user ID
- Owner group
- Windows file attributes
- Size
- Type
- Length
We want to select files that meet our conditions: (name is *.ps or name is *.pdf) and unused for six months. The AND between our two conditions will be translated to All of, while the OR within our first condition will be translated to Any of. On the screen shown in Figure 13-29, we selected New Group. From the popup screen, Figure 13-30, we selected All of and clicked OK.


Figure 13-30 New Condition Group

Now, within our All of group we will create one dependent Any of group using the same sequence. The result is shown in Figure 13-31.

Figure 13-31 New Profile - Conditions Groups

Now, we create individual conditions within each group by right-clicking New Condition on the group where the conditions must be created. Figure 13-32 shows the creation of our first condition for the Any of group. We enter in our file specifications (*.ps and *.pdf) here.

Figure 13-32 New Profile - New condition


We repeated the operation for the second condition (All of). The final result is shown in Figure 13-33.

Figure 13-33 New Profile - Conditions

The bottom of the right pane shows the textual form of the created condition. You can see that it corresponds to our initial condition. We saved the profile as PS_PDF_FILES (Figure 13-34).

Figure 13-34 Profile save
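As a rough illustration of what the PS_PDF_FILES filter expresses, the following sketch (ours, not product code; the directory path is hypothetical) applies the same condition logic, files whose name matches *.ps or *.pdf and that have not been accessed for six months, to a local directory tree.

# Illustrative only: reproduces the (Any of: *.ps, *.pdf) within (All of: name match,
# unused for six months) logic of the PS_PDF_FILES profile against a local directory.
import fnmatch
import os
import time

SIX_MONTHS = 182 * 24 * 3600  # roughly six months, in seconds
root = "/data"                # hypothetical directory to check

now = time.time()
matches = []
for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        path = os.path.join(dirpath, name)
        # "Any of": the name matches *.ps OR *.pdf
        name_ok = fnmatch.fnmatch(name, "*.ps") or fnmatch.fnmatch(name, "*.pdf")
        # "All of": name condition AND not accessed in the last six months
        if name_ok and now - os.path.getatime(path) > SIX_MONTHS:
            matches.append(path)

# The 500 largest of the matching files, as requested on the profile's Statistics tab.
matches.sort(key=os.path.getsize, reverse=True)
print(matches[:500])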

13.3.7 Scans
The Scan process is used to gather data about files and to summarize Usage statistics as specified in the associated profiles. It is mandatory for Quotas and Constraints management. The Scan process gathers statistics about the usage and trends of the server storage. Scan job results are stored in the repository and supply the data necessary for the Capacity, Usage, Usage Violations, and Backup Reporting facilities. To create a new Scan job, expand Data Manager → Monitoring → Scans, right-click Scans and select Create Scan. The scope of each Scan job is set by five different tabs on the right pane.

Filesystems tab
You can specify a specific filesystem for one computer, a filesystem Group (see Filesystem Groups on page 537), or all filesystems for a specific computer. Only the filesystems you have selected will be scanned. Figure 13-35 shows how to configure the Scan to gather data on all our servers.
Note: Only filesystems found by the Probe process will be available for Scan.

Figure 13-35 New Scan configuration - Filesystem tab

Directory Groups tab


Use this tab to extend the scope of the Scan and also summarize data for the selected directories. Only directories in the previously selected filesystems will be scanned.

Profiles tab
As explained in 13.3.6, Profiles on page 547, the Profiles are used to select the files that are scanned for information gathering. A Scan job scans and gathers data only for files that are scoped by the selected Profiles. You can specify Profiles at two levels:
- Filesystems: All selected filesystems will be scanned and data summarized for each filesystem.
- Directory: All selected directories (if included in the filesystem) will be scanned and data summarized for each directory.


Figure 13-36 shows how to configure a Scan to have data summarized at both the filesystem and directory level.

Figure 13-36 New Scan configuration - Profiles tab

When to SCAN tab


As with the Probe and Ping jobs, the scheduling of the job is specified on the When to Scan tab.

Alert tab
You can be alerted through mail, script, Windows Event Log, SNMP trap, TEC event, or Login notification if the Scan job fails. The Scan job may fail if an Agent is unreachable. Click the floppy icon to save your new Scan job, shown in Figure 13-37.

Figure 13-37 New Scan - Save


Putting it all together


Table 13-2 summarizes the report views for filesystems and directories that will be available depending on the settings of the Profiles and the Scan jobs. We assume the Profiles have been defined with the Summarize space by Filesystem/Directory option. Note that in order to get reports by filesystem or directory, you need to select either or both in the Scan Profile.
Table 13-2 Profiles/Scans versus Reports Scan Jobs settings Filesystem /Computer x x x x x Directory x x x x Filesystem profile x x Directory profile x x What is scanned FS FS Dir if in specified FS FS Dir if in specified FS FS Dir if in specified FS FS Dir scanned if in specified FS FS FS x Available reports By Filesystem Reports x x By Directory Reports x x

x x

x -

x x

13.4 OS Alerts
TotalStorage Productivity Center for Data enables you to define Alerts on computers, filesystems, and directories. Once the Alerts are defined, it will monitor the results of the Probe and Scan jobs, and will trigger an Alert when the threshold or the condition is met. TotalStorage Productivity Center for Data provides a number of options for Alert mechanisms from which you can choose depending on the severity you assign to the Alert. Depending on the severity of the triggered event or the functions available in your environment, you may want to be alerted with:


An SNMP trap to an event manager. Figure 13-38 shows a Filesystem space low Alert as displayed in our SNMP application, IBM Tivoli NetView. Defining the event manager is explained in 8.5, Alert Disposition on page 316.

Figure 13-38 Alert - SNMP trap sample

A TEC (Tivoli Enterprise Console) event. An entry in the Alert Log (see Figure 13-39). You can configure Data Manager, so that the Alert Log will be automatically displayed when you log on to the GUI by using Preferences Edit General (see Figure 13-40).

Figure 13-39 Alert - Logged alerts sample


Figure 13-40 Alert - Preferences

An entry in the Windows Event log, as shown in Figure 13-41. This is useful for lower severity alerts or when you are monitoring your Windows event logs with an automated tool such as IBM Tivoli Distributed Monitoring.

Figure 13-41 Alerts - Windows Event Viewer sample

Running a specified script - The script runs on the specified computer with the authority of the Agent (root or Administrator). See 13.5.5, Scheduled Actions on page 582 for special considerations with script execution.
An e-mail - TotalStorage Productivity Center for Data must be configured with a valid SMTP server and port as explained in 8.5, Alert Disposition on page 316.


13.4.1 Alerting navigation tree


Figure 13-42 shows the complete navigation tree for OS Alerting which includes Computer Alerts, Filesystem Alerts, Directory Alerts, and Alert Log.

Figure 13-42 OS Alerting tree


Except for the Alert Log, you can create multiple definitions for each of those Alert features of TotalStorage Productivity Center for Data. To create a new definition, right-click the feature and select Create <feature>. Figure 13-43 shows how to create a new Filesystem Alert.

Figure 13-43 Filesystem alert creation


13.4.2 Computer Alerts


Computer Alerts act on the output of Probe jobs (see 13.3.5, Probes on page 545) and generate an Alert for each computer that meets the triggering condition. Figure 13-44 shows the configuration screen for a Computer Alert.

Figure 13-44 Computer alerts - Alerts

Alerts tab
The Alerts tab contains two parts:

Triggering condition to specify the computer component you want to be monitored. You can monitor a computer for:
- RAM increased
- RAM decreased
- Virtual Memory increased
- Virtual Memory decreased
- New disk detected
- Disk not found
- New disk defect found
- Total disk defects exceed (you will have to specify a threshold)
- Disk failure predicted
- New filesystem detected

Information about disk failures is gathered through commands against the disks, with the following exceptions:
- IDE disks support only Disk failure predicted queries
- AIX SCSI disks do not support failure and predicted failure queries

Triggered action where you specify the action that must be executed. If you choose to run a script, it will receive several positional parameters that depend on the triggering condition. The parameters are displayed on the Specify Script panel, which is accessed by checking Run Script and clicking the Define button.

Figure 13-45 shows the parameters passed to the script for a RAM decreased condition.

Figure 13-45 Computer alerts - RAM decreased script parameters

Figure 13-46 shows the parameters passed to the script for a Disk not found condition.

Figure 13-46 Computer alerts - Disk not found script parameters
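Because the positional parameters differ by triggering condition, a simple way to see exactly what an Agent passes is to start with a script that only records its arguments. The following Windows batch sketch is hypothetical (it is not shipped with the product), and the log path is an assumption that can be changed to any directory writable by the Agent:

@echo off
rem logparams.bat - hypothetical helper that records every parameter an Alert passes to it.
rem The Agent runs this script with root/Administrator authority, so the log path only
rem needs to be writable by that account (assumed location).
echo %DATE% %TIME% Alert parameters: %* >> C:\tpc\alert_params.log

Assigning this script as the Triggered action for different conditions (RAM decreased, Disk not found, and so on) shows the actual parameter list for each, which you can then use to build a production script.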

Computers tab
This limits the Alert process to specific computers or computer Groups (Figure 13-47).

Figure 13-47 Computer alerts - Computers tab


13.4.3 Filesystem Alerts


Filesystem Alerts will act on the output of Probe and Scan jobs and generate an Alert for each filesystem that meets the specified threshold. Figure 13-48 shows the configuration screen for a Filesystem Alert.

Figure 13-48 Filesystem Alerts - Alert

Alerts tab
As for Computer Alerts, the Alerts tab contains two parts. In the Triggering condition section you can specify to be alerted if:
- A filesystem is not found, which means the filesystem was not mounted during the most recent Probe or Scan.
- A filesystem is reconfigured.
- Filesystem free space is less than a threshold, specified in percent, KB, MB, or GB.
- Free UNIX filesystem inode count is less than a threshold (either percent or inode count).


You can choose to run a script (click the Define button next to Run Script), or you can change the content of the default generated mail by clicking Edit Email. A pop-up is displayed with the default mail skeleton, which is editable. Figure 13-49 shows the default e-mail message.

Figure 13-49 Filesystem alert - Freespace default mail

13.4.4 Directory Alerts


Directory Alerts will act on the output of Scan jobs.

Alerts tab
Directory Alerts configuration is similar to Filesystem Alerts. The supported triggers are:
- Directory not found
- Directory consumes more than the specified threshold, set in percent, KB, MB, or GB.

Directories tab
Probe jobs do not report on directories, and Scan jobs report on directories only if a directory Profile has been assigned (see Putting it all together on page 555). Therefore, you can only choose to be alerted for directories that have already been included in a Scan and actually scanned.


13.4.5 Alert logs


The Data Manager → Alerting → Alert Log menu (Figure 13-50) lists all Alerts that have been generated.

Figure 13-50 Alerts log

There are nine different views. Each of them shows only the Alerts related to the selected view, except:
- All view - Shows all Alerts
- Alerts Directed to <logged user> - Shows all Alerts where the currently logged-on user has been specified in the Login notification field
When you click the icon on the left of a listed Alert, you will see detailed information on the selected Alert, as shown in Figure 13-51.


Figure 13-51 Detailed Alert information

13.5 Policy management


The Policy Management functions of Data Manager enable you to:
- Define space limits (Quotas) on storage resources used by user IDs and user groups. These limits can be set at a network (whole environment), computer, and filesystem level.
- Define space limits (Quotas) on NAS resources used by user IDs and user groups.
- Perform checks (Constraints) on specific files owned by the users and perform any action on those files.
- Define a filesystem extension policy that can be used to automatically increase filesystem capacity for managed hosts when utilization reaches a specified level. The LUN provisioning option can be enabled to extend filesystems within an ESS.
- Schedule scripts against your storage resources.

13.5.1 Quotas
Quotas can be set at either a user or an OS User Group level. For the OS User Group level, this could be either an OS User Group (see OS User Group Groups on page 540) or a standard OS group (such as system on UNIX, or Administrators on Windows). User Quotas trigger an action when one of the monitored users has reached the limit, while OS User Group Quotas trigger the action when the sum of space used by all users of the monitored groups has reached the limit. The Quota definition mechanism is the same for both, except for the following differences:
- The menu tree to use: Data Manager → Policy Management → Quotas → User, or Data Manager → Policy Management → Quotas → OS User group


- The monitored elements you can specify: users and user groups for User Quotas; OS User Groups and OS User Group Groups for OS User Group Quotas
We will show how to configure User Quotas. OS User Group Quotas are configured similarly. Note that Quota enforcement is soft - that is, users are not automatically prevented from exceeding their defined Quota, but the defined actions will trigger if that happens. There are three sub-entries for Quotas: Network Quotas, Computer Quotas, and Filesystem Quotas.

Network Quotas
A Network Quota defines the maximum cumulative space a user can occupy on all the scanned servers. An Alert will be triggered for each user that exceeds the limit specified in the Quota definition. Use Data Manager → Policy Management → Quotas → User → Network, right-click, and select Create Quota to create a new Quota. The right pane displays the Quota configuration screen with four tabs.

Users tab
Figure 13-52 shows the Users tab for Network Quotas.

Figure 13-52 User Network Quotas - Users tab

From the Available column, select any user ID or OS User Group you want to monitor for space usage. The Profile pull-down menu is used to specify the file types that will be subject to the Quota. The list will display all Profiles that create summaries by user (by file owner). Select the Profile you want to use from the pull-down. The default Profile Summary by Owner collects information about all files and summarizes them on the user level. The ALLGIFFILES profile collects information about GIF files and creates a summary at a user level as displayed in Figure 13-53. This (non-default) profile was created using the process shown in 13.3.6, Profiles on page 547.


Figure 13-53 Profile with user summary

Using this profile option, we can define general Quotas for all files and more restrictive Quotas for some multimedia files such as GIF and MP3.

Filesystem tab
On the Filesystem tab, shown in Figure 13-54, select the filesystems or computers you want to be included in the space usage for Quota management.

Figure 13-54 User Network Quotas - Filesystem tab

In this configuration, for each user, the cumulative space usage on all servers will be calculated and checked against the Quota limit.

When to check
Quota management is based on the output of the Scan jobs. Therefore, each Quota definition must be scheduled to run after the Scan jobs that collect the relevant information. The When to CHECK tab is standard, and allows you to define a one-off or a recurring job.

Alert tab
On the Alert tab, specify the Quota limit in KB, MB, or GB, and the action to run when the Quota is exceeded.

Figure 13-55 User Network Quotas - Alert tab

You can choose from the standard Alert types available with TotalStorage Productivity Center for Data. Each Alert will be fired once for each user exceeding their Quota. We have selected to run a script that we wrote, QUOTAUSERNET.BAT, listed in Example 13-2.
Example 13-2 QUOTAUSERNET.BAT script
echo NETWORK quota exceeded - %1 %2 uses %3 - Limit set to %4 >>quotausernet.txt

Example 13-3 shows the output file created by QUOTAUSERNET.BAT.


Example 13-3 Content of quotausernet.txt
NETWORK quota exceeded - user root uses 11.16GB - Limit set to 5.0GB
NETWORK quota exceeded - user Administrators@BUILTIN uses 11.97GB - Limit set to 5GB

The Alert has fired for the users root and Administrators. This clearly shows that administrative users such as root and Administrators should not normally be included in standard Quota monitoring.


Computer Quotas
Computer Quotas enable you to fire Alerts when a user exceeds their space Quota on a specific computer as shown in Figure 13-56. Multiple Alerts are generated if a user violates the Quota on separate computers.

Figure 13-56 Computer Quota - Alerts log

Filesystem Quotas
A Filesystem Quota defines a space usage limit at the filesystem level. An Alert will be fired for each filesystem where a user exceeds the limit specified in the Quota definition. Use Data Manager → Policy Management → Quotas → User → Filesystem, right-click, and select Create Quota to create a new Quota. After setting up and running a Quota for selected filesystems, we received the following entries in the Alert History, shown in Figure 13-57.

Figure 13-57 Filesystem Quota - Alerts log


13.5.2 Network Appliance Quotas


Using Data Manager → Policy Management → Network Appliance Quotas → Schedules, you can compare the space used by users against Quotas defined inside Network Appliance filers (using the appropriate software), and raise an Alert whenever a user is close to reaching the NetApp Quota. When you run a Network Appliance Quota job, the NetApp Quota definitions are imported into TotalStorage Productivity Center for Data for read-only purposes.
Note: Network Appliance Quota jobs must be scheduled after the Scan jobs, since they use the statistics gathered by the latest Scan to trigger any NetApp Quota violation.
With Data Manager → Policy Management → Network Appliance Quotas → Imported User Quotas and Imported OS User Group Quotas, you can view the definitions of the Quotas defined on your NetApp filers.

13.5.3 Constraints
The main features of Constraints are listed in Figure 13-58.

Figure 13-58 summarizes Constraints: they report and trigger actions based on specific files which use too much space on monitored servers. Files can be selected based on server, filesystem, name pattern (for example, *.mp3 or *.avi), owner, age, size, and attributes. Actions are triggered through the standard Alerting mechanism when the total space used by the selected files exceeds a threshold.

Figure 13-58 Constraints

Constraints are used to generate Alerts when files matching specified criteria are consuming too much space on the monitored servers.


Constraints provide a deeper level of data management. Quotas allow reporting on users who have exceeded their space limitations. With Constraints, we can get more detailed information by specifying limits on particular file types or other attributes, such as owner, age, and so on. The output of a Constraint, when applied to a Scan, is a list of the files that are consuming too much space.
Note: Unlike Quotas, Constraints are automatically checked during Scan jobs and do not need to be scheduled. Also, the Scan does not need to be associated with Profiles that will cause data to be stored for reporting.

Filesystems tab
The Filesystems tab helps you select the computers and filesystems you want to check for the current Constraint. The selection method for computers and filesystems is the same as for Scan jobs (see 13.3.7, Scans on page 552).

File Types tab


On the File Types tab, you can explicitly allow or disallow certain file patterns (Figure 13-59).

Figure 13-59 Constraint - File Types

Use the buttons at the top of the screen to allow or forbid files depending on their name. The left column shows some default file patterns, or you can use the bottom field to create your own pattern. Click >> to add your pattern to the allowed/forbidden files.


Users tab
The Users tab (shown in Figure 13-60) is used to allow or restrict the selected users in the Constraint.

Figure 13-60 Constraint - Users

Important: The file condition is logically ORed with the User condition. A file will be selected for Constraint processing if it meets at least one of the conditions.

Options tab
The Options tab provides additional conditions for file selection, and limits the number of selected files to store in the central repository. Once again, the conditions added in this tab will be logically ORed with those previously set in the File Types and Users tabs.


The bottom part of the tab, shown in Figure 13-61, contains the textual form of the Condition, taking into account all the entries made in the Filesystems, File Types, Users and Options tabs.

Figure 13-61 Constraints - Options

You can change this condition or add additional conditions by using the Edit Filter button. It displays the file filter pop-up (Figure 13-62) to change, add, and remove conditions or condition groups, as previously explained in 13.3.6, Profiles on page 547.

Figure 13-62 Constraints - File filter


We changed the file filter to a more appropriate one by changing the OR operator to AND (Figure 13-63).

Figure 13-63 Constraints - File filter changed

Alert tab
After selecting the files, you may want to generate an Alert only if the total space used by the files meeting the Constraint conditions exceeds a predefined limit. Use the Alert tab to specify the triggering condition and action (Figure 13-64).

Figure 13-64 Constraints - Alert


In our Constraint definition, a script is triggered for each filesystem where the selected files exceed one Gigabyte. We select the script by checking the Run Script option and selecting Change... as shown in Figure 13-65. The script will be passed several parameters including a path to a file that contains the list of files meeting the Constraint. You can use this list to execute any action including delete or archive commands.

Figure 13-65 Constraints - Script parameters

Our example uses a sample script (tsm_arch_del.vbs) which is shipped with TotalStorage Productivity Center for Data, which archives all the files in the produced list to a Tivoli Storage Manager server, and then deletes them from local storage. This script is installed with TotalStorage Productivity Center for Data server, and stored in the scripts subdirectory of the server installation. It can be edited or customized if required - we recommend that you save the original files first. Versions for Windows (tsm_arch_del.vbs) and UNIX (tsm_arch_del) are provided. If you will run this Constraint on a UNIX agent, then PERL is required to be installed on the agent. A Tivoli Storage Manager server must be available and configured for this script to work. For more information on the sample scripts, see Appendix A of the IBM Tivoli Storage Resource Manager Users Guide, SC32-9069.
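As a simplified illustration of what such a Constraint script does (this is not the shipped tsm_arch_del sample), the following Windows batch sketch assumes that the path to the file list produced by the Constraint arrives as the first parameter; the exact parameter positions should be verified first, for example with a parameter-logging script:

@echo off
rem constraint_list.bat - hypothetical Constraint action script.
rem %1 is assumed to be the path to the file that lists the files meeting the Constraint.
set LISTFILE=%1
rem Record each listed file; a dsmc archive or del command could be issued here instead.
for /f "usebackq delims=" %%F in ("%LISTFILE%") do echo Constraint matched: %%F >> C:\tpc\constraint_files.log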


13.5.4 Filesystem extension and LUN provisioning


The main functions of Filesystem Extension are shown in Figure 13-66.

Figure 13-66 summarizes Filesystem Extension: it automates filesystem extension on the supported platforms (AIX using JFS, Sun Solaris using VxFS), supports automatic LUN provisioning with the IBM ESS storage subsystem, and triggers actions through the standard Alerting mechanism when a filesystem extension is performed.

Figure 13-66 Filesystem Extension


We use a filesystem extension policy to automatically extend filesystems when utilization reaches a specified threshold. We can also enable LUN provisioning to extend filesystems within an ESS. To set up a filesystem extension policy, select Data Manager → Policy Management → Filesystem Extension. Right-click Filesystem Extension and select Create Filesystem Extension Rules, as seen in Figure 13-67.

Figure 13-67 Create Filesystem Extension Rules


In the Filesystems tab, select the filesystems which will use filesystem extension policy by moving them to the Current Selections panel. Note the Enabled checkbox - the default is to check it, meaning the rule will be active. If you uncheck the box, it will toggle to Disabled - you can still save the rule, but the job will not run. To specify the extension parameters, select the Extension tab (Figure 13-68).

Figure 13-68 Filesystem Extension - Extension

This tab specifies how a filesystem will be extended. An explanation of the fields is provided below.

Amount to Extend
We have the following options:
- Add - the amount of space used for extension, in MB or GB, or as a percentage of filesystem capacity.
- Make Freespace - the amount of freespace that will be maintained in the filesystems by this policy. If freespace falls below the amount that is specified, the difference will be added. Freespace can be specified in MB or GB increments, or by a percentage of filesystem capacity.
- Make Capacity - the total capacity that will be maintained in the selected filesystems. If the capacity falls below the amount specified, the difference will be added.


Limit Maximum Filesystem Capacity?


When this option is enabled, the Filesystem Maximum Capacity is used in conjunction with the Add or Make Freespace under Amount to Extend. If you enter a maximum capacity for a filesystem in the Filesystem Maximum Capacity field and the filesystem reaches the specified size, the filesystem will be removed from the policy and an Alert will be triggered.

Condition for Filesystem Extension


The options are:
- Extend filesystems regardless of remaining freespace - the filesystem will be expanded regardless of the available free space.
- Extend filesystems when freespace is less than - defines the freespace threshold that will be used to trigger the filesystem expansion. If freespace falls below this value, the policy will be executed. Freespace can be specified in MB or GB increments, or by a percentage of filesystem capacity.
Note: If you select Make Capacity under Amount to Extend, the Extend filesystems when freespace is less than option is not available.

Use LOG ONLY Mode


Enable Do Not Extend Filesystems - Log Only when you want the policy only to log the filesystem extension. The extension actions that would have taken place are written to the log file, but no extension takes place.
In the Provisioning tab (Figure 13-69) we define LUN provisioning parameters. Note that, at the time of writing, LUN provisioning is available only for filesystems on an ESS.

Figure 13-69 Filesystem Extension - Provisioning


LUN provisioning is an optional feature for filesystem extension. When Enable Automatic LUN Provisioning is selected, LUN provisioning is enabled. In the Create LUNs that are at least field, you can specify a minimum size for new LUNs. If you select this option, LUNs of at least the size specified will be created. If no size is specified, then the Amount to Extend option specified for the filesystem (in Amount to Extend on page 578) will be used. For more information on LUN provisioning, see IBM Tivoli Storage Resource Manager 1.2 Users Guide.
The Model for New LUNs feature means that new LUNs will be created similar to existing LUNs in your setup. At least one ESS LUN must be currently assigned to the TotalStorage Productivity Center for Data Agent associated with the filesystem you want to extend. There are two options for LUN modeling:
- Model new LUNs on others in the volume group of the filesystem being extended - provisioned LUNs are modeled on existing LUNs in the extended filesystem's volume group.
- Model new LUNs on others on the same host as the filesystem being extended - provisioned LUNs are modeled on existing LUNs in the extended filesystem's volume group; if the corresponding LUN model cannot satisfy the requirements, it will look for other LUNs on the same host.
The LUN Source option defines the location of the new LUN in the ESS, and has two options:
- Same Storage Pool - provisioned LUNs will be created using space in an existing Storage Pool. In ESS terminology this is called the Logical Subsystem, or LSS.
- Same Storage Subsystem - provisioned LUNs can be created in any Storage Pool or ESS LSS.
The When to Enforce Policy tab (Figure 13-70) specifies when to apply the filesystem extension policy to the selected filesystems.

Figure 13-70 When to Enforce Policy tab


The options are:
- Enforce Policy after every Probe or Scan automatically enforces the policy after every Probe or Scan job. The policy will stay in effect until you either change this setting or disable the policy.
- Enforce Policy Now enforces the policy immediately for a single instance.
- Enforce Policy Once at enforces the policy once at the specified time, specifying the month, day, year, hour, minute, and AM/PM.
The Alert tab (Figure 13-71) can define an Alert that will be triggered by the filesystem extension job.

Figure 13-71 Alert tab


Currently the only available condition is A filesystem extension action started automatically. Refer to Alert tab on page 544 for an explanation of the definitions.
Important: After making configuration changes to any of the above filesystem extension options, you must save the policy, as shown in Figure 13-72. If you selected Enforce Policy Now, the policy will be executed after saving.

Figure 13-72 Save filesystem changes

For more information on filesystem extension and LUN provisioning, see IBM Tivoli Storage Resource Manager: A Practical Introduction.

13.5.5 Scheduled Actions


TotalStorage Productivity Center for Data comes with an integrated tool to schedule script execution on any of the Agents. If a script fails due to an unreachable Agent, the standard Alert processes can be used. To create a Scheduled Action, select Data Manager → Policy Management → Scheduled Actions → Scripts, right-click, and select Create Script.

Computers tab
On the Computers tab, select the computers or computer groups to execute the script.

Script Options tab


From the pull-down field, select a script that exists on the server. You can also enter the name of a script that does not yet exist on the server, or one that resides only on the Agents.


The Script options tab is shown in Figure 13-73.

Figure 13-73 Scheduled action - Script options

The Script Name pull-down field lists all files (including non-script files) in the server's scripts directory.
Attention: For Windows Agents, the script must have an extension that has an associated script engine on the computer running the script (for example: .BAT, .CMD, or .VBS). For UNIX Agents:
- The extension is removed from the specified script name
- The path to the shell (for example, /bin/bsh, /bin/ksh) must be specified in the first line of the script
- If the script is located in a Windows TotalStorage Productivity Center for Data Server scripts directory, the script must have been created on a UNIX platform and then transferred in binary mode to the Server, or you can use UNIX OS tools such as dos2unix to convert the scripts. This ensures that the CR/LF characters are handled correctly for execution under UNIX.
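As an illustration of these requirements, here is a minimal Windows batch script that could be distributed as a Scheduled Action. It is a hypothetical sketch (not shipped with the product); the script name and log path are assumptions:

@echo off
rem diskreport.cmd - hypothetical Scheduled Action script; the .CMD extension has an
rem associated script engine on Windows Agents, as required above.
rem Appends a timestamped free-space line for drive C: to a local log file (assumed path;
rem the "bytes free" match assumes an English-locale dir listing).
echo ==== %COMPUTERNAME% %DATE% %TIME% ==== >> C:\tpc\diskreport.log
dir C:\ | find "bytes free" >> C:\tpc\diskreport.log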

When to Run tab


As for other Data Manager jobs, you can choose to run a script once or repeatedly at a predefined interval.

Alerts tab
With the Alert tab you can choose to be notified when a script fails due to an unreachable Agent or a script not found condition. The standard Alert Mechanism described in 13.4, OS Alerts on page 555 is used.

13.6 Database monitoring


The Monitoring functions of Data Manager are extended to databases when the license key is enabled (8.4, Configuring Data Manager for Databases on page 313). Currently, MS SQL-Server, Oracle, DB2, and Sybase are supported.

We will now review the Groups, Probes, Scans, and Profiles definitions for Data Manager for Databases, and show the main differences compared to the core Data Manager monitoring functions. Figure 13-74 shows the navigation tree for Data Manager for Databases.

Figure 13-74 Databases - Navigation Tree

13.6.1 Groups
To get targeted monitoring of your database assets, you can create Groups consisting of:
- Computers
- Databases-Tablespaces
- Tables
- Users

Computer Groups
All databases residing on the selected computers will be probed, scanned, and managed for Quotas. The groups you have created using TotalStorage Productivity Center for Data remain available for TotalStorage Productivity Center for Data for Databases. If you create a new Group, the computers you put in it will be removed from the Group they currently belong to. To create a Computer Group, use Data Manager - Databases → Monitoring → Groups → Computer, right-click, and select Create Computer Group. Computer Groups on page 536 gives more information on creating Computer Groups.

Databases-Tablespaces Groups
Creating Groups with specific databases and tablespaces may be useful for applying identical management rules to databases with the same functional role within your enterprise.

An example could be to create a group with all the Oracle-Server system databases, as you will probably apply the same rules for space and alerting on those databases. This is shown in Figure 13-75.

Figure 13-75 Database group definition

Table Groups
You can use Table Groups to create Groups of the same set of tables for selected or all database instances. You can use two different views to create a table group:

- Tables by instance selects several tables for one instance.
- Instances by table selects several instances for one table.
You can combine both views as each entry you add will be added to the group.

User Groups
As for core TotalStorage Productivity Center for Data, you can put user IDs in groups. The user groups you create will be available for the whole TotalStorage Productivity Center for Data product set.
Tip: The Oracle and MS SQL-Server user IDs (SYSTEM, sa, ...) are also included in the available users list after the first database Probe.

13.6.2 Probes
The Probe process is used to gather data about the files, instances, logs, and objects that make up the monitored databases. The results of Probe jobs are stored in the repository and are used to supply the data necessary for Asset Reporting. Use Data Manager - Databases → Monitoring → Probe, right-click, and select Create Probe to define a new Probe job. In the Instance tab of the Probe configuration, you can select specific instances, computers, and computer groups (Figure 13-76).


Figure 13-76 Database Probe definition

The Computers list contains only computers that have been defined for Data Manager for Databases. The definition procedure is described in Configuring Data Manager for Databases on page 313.

13.6.3 Profiles
As for TotalStorage Productivity Center for Data, Profiles in Data Manager for Databases are used to determine the database attributes that are to be scanned. They also determine the summary level and the retention time for data kept in the repository. Use Data Manager - Databases → Monitoring → Profiles, right-click, and select Create Profile to define a new Profile. Figure 13-77 shows the Profile definition screen.

Figure 13-77 Database profile definition


You can choose to gather data on table sizes, database extents, or database free space, and summarize the results at the database or user level.

13.6.4 Scans
Scan jobs in Data Manager for Databases collect statistics about the storage usage and trends within your databases. The gathered data is used as input to usage reporting and Quota analysis. Defining a Scan job requires defining:
- The databases, computers, and instances to Scan
- The tables to monitor for detailed information, such as size, used space, indexes, and row count
- The Profile that will determine the data that is gathered and the report views that will be made available by the Scan
- The job scheduling frequency
- Oracle-only additional options, to gather information about pages allocated to a segment that has enough free space for additional rows
- The alerting mechanism to use should the Scan fail
All this information is set through the Scan definition screen, which contains one tab for each previously listed item. To define a new Scan, select Data Manager - Databases → Monitoring → Scans, right-click, and select Create Scan, as in Figure 13-78.

Figure 13-78 Database Scan definition

Note: If you request detailed scanning of tables, the tables will only be scanned if their respective databases have also been selected for scanning.


13.7 Database Alerts


TotalStorage Productivity Center for Data for Databases enables you to define Alerts on instances, databases, and tables. The output of the Probe and Scan jobs is processed and compared to the defined Alerts. If a threshold is reached, an Alert will be triggered. Data Manager for Databases uses the standard Alert mechanisms described in 13.4, OS Alerts on page 555.

13.7.1 Instance Alerts


Select Data Manager - Databases → Alerting → Instance Alerts, right-click, and select Create Alert to define the Alerts shown in Table 13-3. Those Alerts are triggered during the Probe process.
Table 13-3 Instance Alerts (each Alert type applies only to specific RDBMSs: Oracle, Sybase, or MS SQL Server)
- New database discovered
- New tablespace discovered
- Archive log contains more than X units
- New device discovered
- Device dropped
- Device free space greater than X units
- Device free space less than X units

An interesting Alert is Archive Log Directory Contains More Than for Oracle, since the Oracle database can hang if there is no more space available for its archive logs. This Alert can be used to monitor the space used in this specific directory and trigger a script that archives the files to an external storage manager such as Tivoli Storage Manager once the predefined threshold is reached. For a detailed example, refer to IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886.
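A minimal sketch of such a script for a Windows-hosted Oracle instance is shown below. It is hypothetical: the archive log directory, the log file path, and the file pattern are assumptions to adapt to your environment, and it relies on the Tivoli Storage Manager backup-archive client (dsmc archive with the -deletefiles option) being installed and configured on the Agent:

@echo off
rem archlog_archive.bat - hypothetical action script for the Archive Log Directory alert.
rem Archives Oracle archive logs to Tivoli Storage Manager and deletes them locally once
rem archived, freeing space in the archive log directory (paths and pattern are assumptions).
set ARCHDIR=D:\oracle\archivelogs
dsmc archive "%ARCHDIR%\*.arc" -deletefiles >> C:\tpc\archlog_archive.log 2>&1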

13.7.2 Database-Tablespace Alerts


To define a Database-Tablespace Alert, select Data Manager - Databases → Alerting → Database-Tablespace Alerts, right-click, and select Create Alert. You can define various monitoring options on your databases, as shown in Table 13-4. Those Alerts are triggered during the Probe process.
Table 13-4 Database-Tablespace Alerts (each Alert type applies only to specific RDBMSs: Oracle, Sybase, or MS SQL Server)
- Database/Tablespace freespace lower than
- Database/Tablespace offline
- Database/Tablespace dropped
- Freespace fragmented in more than n extents
- Largest free extent lower than
- Database Log freespace lower than
- Last dump time previous to n days



13.7.3 Table Alerts


To define a new Table Alert, use Data Manager - Databases → Alerting → Table Alerts, right-click, and select Create Alert. With this option you can set up monitoring on database tables. The Alerts that can be triggered for a table are shown in Table 13-5. Those Alerts are triggered during the Scan process, and only if the Scan includes a Table Group.
Table 13-5 Table Alerts (each Alert type applies only to specific RDBMSs: Oracle, Sybase, or MS SQL Server)
- Total Table Size Greater Than
- Table Dropped
- (Max Extents - Allocated) <
- Segment Has More Than
- Chained Row Count Greater Than
- Empty Used Segment Space Exceeds
- Forwarded Row Count Greater Than

13.7.4 Alert log


The Data Manager - Databases → Alerting → Alert Log menu lists all Alerts that have been fired by the Probe jobs, the Scan jobs, the defined Alerts, and the violated Quotas.
Tip: Refer to 13.4.5, Alert logs on page 564 for more information about using the Alert log tree.

13.8 Databases policy management


The Policy Management functions of Data Manager for Databases enable you to:
- Define space limits (Quotas) on database space used by table owners. These limits can be set at a network (whole environment), instance, or database level.
- Schedule scripts against your database resources.


13.8.1 Network Quotas


A Network Quota defines the maximum cumulative space a user can occupy on all the scanned databases. An Alert will be fired for each user that exceeds the limit specified in the Quota definition. We used Data Manager - Databases → Policy Management → Quotas → Network, right-click, and select Create Quota to create a new Quota. The right pane will switch to a Quota configuration screen with four tabs.

Users tab
On the Users tab, specify the database users you want to monitor for Quotas. You can also select a Profile in the Profile pull-down field at the top right of the tab. In this field, you can select any Profile that stores summary data at a user level. The Quota will only be fired for databases that have been scanned using this Profile (Figure 13-79).

Figure 13-79 Database Quota - Users tab

Database-Tablespace tab
Use this tab to restrict Quota checking to certain databases. You can choose several databases or computers. If you choose a computer, all the databases running on it will be included for Quota management.

When to run tab


As with Data Manager, you can select the time to run from:
- Immediate
- Once, at a scheduled date and time
- Repetitive, at predefined intervals

Alert tab
On the Alert tab you can specify the space limit allowed for each user and the action to run. If no action is selected, the Quota violation will only be logged in the Alert log.


13.8.2 Instance Quota


The Instance Quota mechanism is similar to the Network Quota, except that it is set at the instance level. Whenever a user reaches the Quota on one instance, an Alert will be fired.

13.8.3 Database Quota


With a Database Quota, the Quota is set at the database level. Each monitored user will be reported as soon as they reach the limit on at least one of the monitored databases.

13.9 Database administration samples


We now list some typical checks done regularly by Oracle database administrators and show how they can be automated using Data Manager for Databases.

13.9.1 Database up
Data Manager for Databases can be used to test for database availability using Probe and Scan jobs since they will fail and trigger an Alert if either the database or the listener is not available. Since those jobs use system resources to execute, you may instead choose scheduled scripts to test for database availability. Due to limited scheduling options and the need for user-written scripts, we recommend using dedicated monitoring products such as Tivoli Monitoring for Databases.

13.9.2 Database utilization


There are a number of different levels where system utilization can be monitored and checked in a database environment.

Tablespace space usage


This is a standard Alert provided by Data Manager for Databases. This Alert will be triggered by the Probe jobs.

Archive log directory space usage


This is a standard alert provided by Data Manager for Databases. This Alert will be triggered by the Probe jobs as shown in 13.7.1, Instance Alerts on page 588.

Maximum extents used


Your application may become unavailable if a table reaches its maximum allowed number of extents. This is an indicator that can be monitored using the (Max Extents - Allocated) < Table Alert.

13.9.3 Need for reorganization


To ensure good application performance, it is important to be notified promptly if a database reorganization is required.



Count of chained rows


Chained rows can have an impact on database access performance. This issue can be monitored using the Chained Row Count Greater than table Alert trigger.

Count of Used table extents


You can monitor the need for table reorganization using the table Alert trigger Segment has more than n extents.

Freelist count
You cannot monitor the count of freelists in an Oracle table using Data Manager for Databases.

13.10 Data Manager reporting capabilities


The reporting capabilities of Data Manager are very rich, with over 300 predefined views. You can see the data from a very high level, for example, the total amount of free space available over the enterprise, or from a low level, for example, the amount of free space available on a particular volume or in a table in a database. The data can be displayed in tabular or graphical format, or can be exported as HTML, Comma Separated Values (CSV), or formatted report files.
The reporting function uses the data stored in the Data Manager repository. Therefore, in order for reporting to be accurate in terms of using current data, regular discovery, Ping, Probe, and Scan jobs must be scheduled. These jobs are discussed in 13.3, OS Monitoring on page 533. Figure 13-80 shows the Data Manager main screen with the reporting options highlighted.
The Reporting sections are used for interactive reporting. They can be used to answer ad hoc questions such as: How much free space is available on my UNIX systems? Typically, you will start looking at data at a high level and drill down to find specific detail. Much of the information can be displayed in graphical form as well as in the default table form.
The My Reports sections give you access to predefined reports. Some of these reports are pre-defined by Data Manager; others can be created by individual users saving reporting criteria in the Reporting options. You can also set up Batch Reports to create reports automatically on a schedule.

My Reports will be covered in more detail in 13.14, Creating customized reports on page 683, and 13.15, Setting up a schedule for daily reports on page 697.
The additional feature, TotalStorage Productivity Center for Data for Chargeback, produces storage usage Chargeback data, as described in 13.17, Charging for storage usage on page 700.



Figure 13-80 TotalStorage Productivity Center for Data main screen showing reporting options

13.10.1 Major reporting categories


Data Manager collects data for reporting purposes in seven major categories. These will be covered in the following sections. Within each major category there are a number of sub-categories. Most categories are available for both operating system level reporting and database reporting. However, a few are for operating system reporting only. The description of each category specifies which applies, and in the more detailed following sections for each category, we present the capabilities separately for both Data Manager and Data Manager for Databases as appropriate.

Asset Reporting
Asset data is collected by Probe processes and reports on physical components such as systems, disk drives, and controllers. Currently, Asset Reporting down to the disk level is only available for locally attached devices. Asset Reporting is available for both operating system and database reporting.


Storage Subsystems Reporting


Storage Subsystem data is collected by Probe processes. It provides a mechanism for viewing storage capacity at a computer, filesystem, storage subsystem, LUN, and disk level. These reports also enable you to view the relationships among the components of a storage subsystem. Storage Subsystem reporting is currently only available for IBM TotalStorage Enterprise Storage Servers (ESS). Storage Subsystems Reporting is available for operating system only.

Availability Reporting
Availability data is collected by Ping processes and allows you to report on the availability of your storage resources and computer systems. Availability Reporting is provided for operating system reporting only.

Capacity Reporting
Capacity Reporting shows how much storage you have and how much of it is being used.
You can report anywhere from an entire network level down to an individual filesystem. Capacity Reporting is provided for both operating system and database reporting.

Usage Reporting
Usage Reporting goes down a level from Capacity Reporting. It is concerned not so much with how much space is in use, but rather with what the space is actually being used for. For example, you can create a report that shows usage by user, or a wasted space report. You define what wasted space means, but it could be, for example, files of a particular type, or files within a certain directory that are more than 30 days old. Usage Reporting is provided for both operating system and database reporting.

Usage Violation Reporting


Usage Violation Reporting allows you to set up rules for the type and/or amount of data that can be stored, and then report on exceptions to those rules. For example, you could have a rule that says that MP3 and AVI files are not allowed to be stored on file servers. You can also set Quotas for how much space an individual user can consume. Note that usage violations are only softly enforced - Data Manager will not enforce the rules in real time, but will generate an exception report after the fact. Usage Violation Reporting is provided for both operating system and database reporting.

Backup Reporting
Backup Reporting identifies files that have not been backed up. Backup Reporting is provided for operating system reporting only.

13.11 Using the standard reporting functions


This section discusses Data Manager's standard reporting capabilities. Customized reporting is covered in 13.14, Creating customized reports on page 683. This section is not intended to cover exhaustively all of the reporting options available, as these are very numerous, and are covered in detail in the Reporting section of the manual IBM Tivoli Storage Resource Manager V1.1 Reference Guide, SC32-9069. Instead, this section provides a basic overview of Data Manager reporting, with some examples of what types of reports can be produced, and additional information on some of the less straightforward reporting options.


To demonstrate the reporting capabilities of TotalStorage Productivity Center for Data, we installed the Server code on a Windows 2000 system called Colorado, and deployed these Windows Agents: Gallium, Wisla, and Lochness. Colorado is also an Agent as well as being the Server. The host GALLIUM has both Microsoft SQL Server and an Oracle database installed to demonstrate database reporting. The Agent on LOCHNESS also provides data for a NAS device called NAS200. The Agent on VMWAREW2KSRV1 also provides data for a NetWare server called ITSOSJNW6. The lab setup is shown in Figure 13-81.

Figure 13-81 shows the lab environment: the ITSRM Server and database on LOCHNESS (W2K), an Agent and GUI on A23BLTZM (WNT), and Agents on VMWAREW2KSRV1 (W2K under VMware), EASTER (HP-UX), SOL-E (Solaris), GALLIUM (W2K), CRETE (AIX), and BRAZIL (AIX), together with an IBM NAS200 and a NetWare server, all connected by Ethernet.

Figure 13-81 TotalStorage Productivity Center for Data Lab Environment

13.11.1 Asset Reporting


Asset Reporting provides configuration information for the TotalStorage Productivity Center for Data Agents. The information available includes typical asset details such as disk system name and disk capacities, but provides a large amount of additional detail.

IBM TotalStorage Productivity Center for Data


Figure 13-82 shows the major subtypes within Asset Reporting. Note that unlike the other reporting categories where most of the drill-down functions are chosen from the right-hand panel, in Asset Reporting the drill-down functions are mostly available on the left-hand pane.


Figure 13-82 Reporting - Asset

By Cluster View
Click By Cluster to drill down into a virtual server or cluster node. You can drill down further to a specific controller to see the disks under it and/or drill down on a disk to see the file systems under it.

By Computer view
Click By Computer to see a list of all of the monitored systems (Figure 13-83.)

Figure 13-83 Reporting - Asset - By Computer


From there we can drill down on the assets associated with each system. We will take a look at node GALLIUM. In Figure 13-84 we have shown most of the items for GALLIUM expanded, with the details for Disk 2 displayed in the right-hand bottom pane. You will see a detailed level of information, both in terms of the type of objects for which data is collected (for example, Exports or Shares), and the specific detail for a given device.

Figure 13-84 Report - GALLIUM assets

By OS Type view
This view of the Asset data provides the same information as the By Computer view, with the difference that the Agent systems are displayed sorted by operating system platform.

By Storage Subsystem view


Data Manager provides reporting for storage subsystems: any disk array subsystems whose SMI-S Providers are CTP certified by SNIA for SMI-S 1.0.2, and IBM SAN Volume Controller clusters. For disk array subsystems, you can view information about:
- Disk groups (for IBM TotalStorage ESS subsystems)
- Array sites (for IBM TotalStorage DS6000/8000 only)
- Ranks (for IBM TotalStorage DS6000/8000 only)
- Storage pools (for disk array subsystems)
- Disks (for disk array subsystems)
- LUNs (for disk array subsystems)

For IBM SAN Volume Controllers, you can view information about:
- Managed disk groups
- Managed disks
- Virtual disks


System-wide view
The System-wide view, however, provides additional capability, as it gives a system-wide rather than a node-by-node view of some of the data. A graphical view of some of the data is also available. Figure 13-85 shows most of the options available from the System-wide view and, in the main panel, the report of all exports or shares available.

Figure 13-85 Reporting - Assets - System-wide view

Each of the options available under the System-wide view is self-explanatory, with the possible exception of Monitored Directories. Data Manager can monitor utilization at a directory level as well as at a device or filesystem level. However, by default, directory level monitoring is disabled. To enable directory monitoring, define a Directory Group by selecting Data Manager → Monitoring → Groups → Directory, right-click Directory, and choose Create Directory Group. The process of setting up Directory Groups is discussed in more detail in 13.3.2, Groups on page 535. Once the Directory Group is created, it must be assigned to a Scan job, and that job must be run on the systems where the directories to be monitored exist. By setting up a monitored directory you will get additional information for that directory. Note that the information collected includes any subdirectories. Information collected about the directory tree includes the number of files, the number of subdirectories, the total space used, and the average file size. This can be graphed over time to determine space usage patterns.

IBM TotalStorage Productivity Center for Data for Databases


Asset Reporting for databases is similar to that for filesystems; however, filesystem entities like controllers, disks, filesystems, and shares are replaced with database instances, databases, tables, and data files.

Very specific information regarding an individual database is available as shown in Figure 13-86 for the database DMCOSERV on node COLORADO.

Figure 13-86 DMCOSERV database asset details

Or you can see rollup information for all databases on a given system (using the System-wide view) as shown in Figure 13-87.

Figure 13-87 System-wide view of database assets


All of the database Asset Reporting options are quite straightforward, with the exception of one. In order to receive table level asset information, one or more Table Groups need to be defined. This is a similar process to that for Directory Groups, as described in System-wide view on page 598. You would not typically include all database tables within Table Groups, but perhaps either critical or rapidly growing tables. We will set up a group for UDB. To set up a Table Group, select Data Manager - Databases → Monitoring → Groups → Table, right-click Table, and choose Create Table Group (Figure 13-88).

Figure 13-88 Create a new database table group

We have entered a description of Colorado Table Group. Now we click New Instance to enter the details of the database and tables that we want to monitor. From the drop down box, we select the database instance, in this case the UDB instance on Colorado. We then enter three tables in turn. For each table, we entered the database name (DMCOSERV), the creator name (db2admin) and a table name. After entering the values, click Add to enter more tables or finish. We entered the table names of BASEENTITY, DMSTORAGEPOOL, and DMVOLUME, as shown in Figure 13-89. Once all of the tables have been entered click OK.


Figure 13-89 Add UDB tables to table group

Now we return to the Create Table Group panel, and we see in Figure 13-90 the information about the newly entered tables.

Figure 13-90 Tables added to table group

Now we save by clicking the floppy disk icon and, when prompted, enter the Table Group name of ColoradoTableGroup. In order for the information for our tables to be collected, the Table Group needs to be assigned to a Scan job. We will assign it to the default database scan job called Tivoli.Default DB Scan by choosing Data Manager - Databases → Monitoring → Scans → TPCUser.Default Db Scan.


The definition for this Scan job is shown in Figure 13-91; in particular, we see the Table Groups tab. Our new Table Group is shown initially in the left-hand pane. We moved it to the right-hand pane by selecting it and clicking >>. We then save the updates to the Scan job by choosing File → Save (or with the floppy disk icon from the tool bar). Finally, we can execute the Scan job by right-clicking it and choosing Run Now. Figure 13-91 shows the Scan job definition after the Table Group had been assigned to it.

Figure 13-91 Table group added to scan job

Example 13-4 is an extract from the Scan job log showing that the table information is now being collected. You can view the Scan job log through the TotalStorage Productivity Center for Data GUI by first expanding the particular Scan job definition. A list of Scan execution reports will be shown; select the one of interest. You may need to right-click the Scan job definition and choose Refresh Job List. The list of Scan executions for the Tivoli.Default DB Scan is shown in Figure 13-92.

Figure 13-92 Displaying Scan job list


Once you have chosen the actual job, you can click the detail icon for the system that you are interested in to display the job log. The actual file specification of the log file on the Agent system will be displayed at the top of the output when viewed through the GUI. Example 13-4 shows the actual file output.
Example 13-4 Database scan job showing table monitoring
09-19 18:01:01 DBA0036I: The following databases-tablespaces will be scanned:
MS SQLServer gallium/gallium Databases: master model msdb Northwind pubs tempdb
Oracle itsrm Tablespaces: ITSRM.DRSYS ITSRM.INDX ITSRM.RBS ITSRM.SYSTEM ITSRM.TEMP ITSRM.TOOLS ITSRM.USERS
09-19 18:01:01 DBA0041I: Monitored Tables: .CTXSYS.DR$OBJECT Northwind.dbo.Employees Northwind.dbo.Customers Northwind.dbo.Suppliers

Finally, we can produce table level asset reports by choosing, for example, Data Manager - Databases → Reporting → Asset → System-wide → All DBMSs → Tables → By Total Size. This is shown in Figure 13-93.

Figure 13-93 Tables by total size asset report


13.11.2 Storage Subsystems Reporting


Storage Subsystems Reporting is covered in detail in 13.12, TotalStorage Productivity Center for Data ESS Reporting on page 634.

13.11.3 Availability Reporting


Availability Reporting is quite simple. Two different sets of numbers are reported: Ping and Computer Uptime. Ping is only concerned with whether or not the system is up and responding to ICMP requests; it does not care whether the Data Agent is running or not. Ping results are collected by a Ping job, so this must be scheduled to run on a regular basis. See 13.3.4, Pings on page 542. Computer Uptime detects whether or not the Data Agent is running. Computer Uptime statistics are gathered by a Probe job, so this must be scheduled to run on a regular basis. See 13.3.5, Probes on page 545. Figure 13-94 shows the Ping report for our TotalStorage Productivity Center for Data environment, and Figure 13-95 shows the Computer Uptime report. To generate these reports, we had to select the computers of interest and select Generate Report.

Figure 13-94 Reports - Availability - Ping


Figure 13-95 Reports - Availability - Computer Uptime

13.11.4 Capacity Reporting


Capacity Reporting shows how much storage capacity is installed, and of that capacity, how much is being used and how much is available for future growth.

IBM TotalStorage Productivity Center for Data


There are four capacity report views within TotalStorage Productivity Center for Data:
- Disk Capacity
- Filesystem Capacity
- Filesystem Used Space
- Filesystem Free Space
However, in reality there are really only two views, or perhaps three. The Filesystem Capacity and Filesystem Used Space views are nearly identical, the only differences being the order of the columns and the row sort order. And there is relatively little difference between these two views and the Filesystem Free Space view. The Filesystem Capacity and Filesystem Used Space views report on used space, so they include columns like percent used space, whereas Filesystem Free Space includes columns like percent free space. All other data is identical. Therefore, there are really only two views: a Disk Capacity view and a Filesystem Capacity view. The Disk Capacity view provides information about physical or logical disk devices and what proportion of them has been allocated. Figure 13-96 shows the Disk Capacity by Disk selection window.


Figure 13-96 Disk capacity report selection window

Often there is a one-to-one relationship between devices and filesystems as seen in Figure 13-97, particularly on Windows systems. However, if a single physical disk has two partitions the detailed description will show two partitions at the bottom of the right-hand pane.

Figure 13-97 Capacity report - Gallium Disk 0


IBM TotalStorage Productivity Center for Data for Databases


Capacity Reporting for databases is very straightforward. You can report on:
- All databases of any type
- All databases of a given type on a particular system or group of systems
- A specific database
Figure 13-98 shows a Capacity Report by Computer Group. We actually have databases in just one Computer Group, WindowsDBServers. We then drilled down to see all systems within the WindowsDBServers group, then specifically to node GALLIUM, so that we could see all databases on GALLIUM.

Figure 13-98 Database Capacity report by Computer Group

13.11.5 Usage Reporting


The reporting categories covered so far have been mostly concerned with reporting at the system or device level. Usage Reporting goes down one more step to report at a level lower than the filesystem. You can produce reports that answer questions such as:
- How old is my data? When was it created, last accessed, or modified?
- What are my largest files?
- What are my largest directories?
- Do I have any orphan files?

Data Manager
With Usage Reporting, you will be able to:
- Identify orphan files and either update their ownership or delete them to free up space
- Identify the largest files and determine whether they are needed or whether parts of the data could be archived
- Identify obsolete files so that they can be either deleted or archived


There are a few restrictions on Usage Reporting:
- In order to report by directory or by Directory Group, you will need to set them up in Data Manager → Monitoring → Groups → Directory.
- UNIX systems do not record file creation dates, so no reporting by creation time is available for these systems.

Data Manager for Databases


Like database Asset Reporting, all of the database Usage Reporting options are quite straightforward, with the exception of table-level reporting. From a usage perspective there are two types of table report available: largest tables and monitored tables. We can report on the largest tables by choosing, for example, Data Manager - Databases → Reporting → Usage → All DBMSs → Tables → Largest Tables → By RDBMS Type. This report is shown in Figure 13-99.

Figure 13-99 Largest tables by RDBMS type


A Monitored Tables by RDBMS Type report is shown in Figure 13-100. In this case, only tables that belong to a Table Group included in a Scan job are reported on.

Figure 13-100 Monitored tables by RDBMS type


13.11.6 Usage Violation Reporting


Usage Violation Reporting reports on violations of Data Manager Constraints and Quotas. A Constraint is a limit, expressed by file name syntax, on the type of data that can be stored on a system. A Quota is a storage usage limit placed on a user or an operating system User Group, and can be defined at the network, computer, or filesystem level. Constraints and Quotas were described in 13.5, Policy management on page 565. It is important to remember that Quotas and Constraints are not hard limits; users will not be stopped from working if a Quota or Constraint is violated, but the violation will trigger an exception, which will be reported.

Data Manager Constraint Violation Reporting


There are a number of predefined Constraints in Data Manager. Before we produce a Constraint violation report, we need to set up a new Constraint called forbidden files. Setting up Constraints was described in 13.5.3, Constraints on page 570. First navigate Data Manager Policy Management Constraints. Existing Constraints will be listed. Right-click Constraints and choose Create Constraint. On the Filesystems tab we entered a description of forbidden files, chose Computer Groups, then selected tpcadmin.Windows Systems and tpcadmin.Windows DB Systems and clicked >>. The completed Filesystems tab is shown in Figure 13-101.

Figure 13-101 Create a Constraint - Filesystems tab


We then need to specify, in the File Types tab, what a forbidden file is. You can define the criteria as either inclusive or exclusive; that is, you can specify just those file types that will violate the Constraint, or you can specify that all files will violate the Constraint except those specified. There are a number of predefined file types included; you can also choose additional files by entering appropriate values in the Or enter a pattern field at the bottom of the form. We have chosen MP3 and AVI files. The completed File Types tab is shown in Figure 13-102.

Figure 13-102 Create a Constraint - File Types tab

The Users tab is very similar to the File Types tab: you can specify which users should be included in or excluded from the selection criteria. We have taken the default, which is to include all users. In the Options tab, we nominate a maximum number of rows to be returned. We can also apply some more specific selection criteria here, such as only including files that are larger than a defined size. Note, however, that these criteria are ORed onto the file filter. For example, if we specified here that we only wanted to include files greater than 1 MB, the search criteria would be changed to ((NAME matches any of ('*.AVI', '*.mp3') AND TYPE <> DIRECTORY) OR SIZE > 1 MB). So the returned list of files would be any file greater than 1 MB in size plus any *.MP3 or *.AVI files.


If you wish to change the selection criteria so that instead you select only *.MP3 or *.AVI files that are larger than 1 MB, you can enter 1 MB against the bigger than option, and then click the Edit Filter button shown in Figure 13-105. You will then see the file filter as shown in Figure 13-103. To combine the size criterion with the file type criteria, click the Size > 1MB entry and drag it up to the All of tag. The changed filter is shown in Figure 13-104. You can also see that the Boolean expression for the filter has changed to reflect this condition.

Figure 13-103 Edit a Constraint file filter - before change

Figure 13-104 Edit a Constraint file filter - after change
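The effect of dragging the Size > 1MB entry under the All of tag is purely a change in Boolean grouping, from OR to AND. The following sketch is our own illustration of the two expressions (the TYPE <> DIRECTORY clause is omitted for brevity, and the sample file names are hypothetical):

# Illustration only: the default filter ORs the size condition onto the
# file-type filter, while the edited filter ANDs them together.
from fnmatch import fnmatch

ONE_MB = 1024 * 1024

def name_matches(name):
    return any(fnmatch(name.lower(), pat) for pat in ("*.avi", "*.mp3"))

def violates_default(name, size):
    return name_matches(name) or size > ONE_MB      # size added with OR

def violates_edited(name, size):
    return name_matches(name) and size > ONE_MB     # size moved under "All of"

# song.mp3 (200 KB) violates only the default filter;
# report.doc (5 MB) also violates only the default filter.
for name, size in [("song.mp3", 200 * 1024), ("report.doc", 5 * ONE_MB)]:
    print(name, violates_default(name, size), violates_edited(name, size))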


In this case we did not want to apply a size criterion, so we left the Options tab entries at their defaults, as shown in Figure 13-105.

Figure 13-105 Create a Constraint - Options tab

Finally, we can specify that we want an Alert generated if a triggering condition is met. The only choice here is to specify a maximum amount of space consumed by the files that meet our selection criteria. We left all of the Alert tab options at their defaults other than specifying an upper limit of 100 MB for files that have met our selection criteria.


The Alert tab is shown in Figure 13-106. Alerting is covered in more detail in 13.4, OS Alerts on page 555.

Figure 13-106 Create a Constraint - Alert tab

We then clicked the Save button and entered a name of Forbidden Files as shown in Figure 13-107.

Figure 13-107 Create a Constraint - save

Before we can report against the Constraint, we need to ensure that a Scan job has been run to collect the appropriate information. Once the Scan has completed successfully, you can go ahead and produce Constraint Violation Reports. Note that you cannot produce a report of violations of a particular Constraint - the report will include entries for any Constraint violation. However, once the report is generated, you can drill down into specific Constraint violations. We produced the report by choosing Data Manager Reporting Usage Violations Constraint Violation By Computer. You will see a screen like Figure 13-108 where you can select a subset of the clients if appropriate - after selecting, click Generate Report.


Figure 13-108 Constraint violation report selection screen

You will then see a list of all of those instances of Constraint violations as shown in Figure 13-109. The report shows multiple types of Constraints. Some of these Constraints were predefined (Orphaned File Constraint and Obsolete File Constraint) and others (ALLFILES and forbidden files) we defined. An orphaned file is any file that does not have an owner. This allows you to easily identify files that belonged to users who have left your organization or have had an incorrect ownership set.

Figure 13-109 Constraint violations by computer
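The notion of an orphaned file can also be illustrated with a small script of our own; it is not how the product implements the check, and it applies only to UNIX systems, where a file's owning UID may no longer map to a user account (on Windows, ownership is held as a SID instead):

# Illustration only: report files whose owning UID no longer has a matching
# user account - roughly what "orphaned file" means on a UNIX system.
import os
import pwd

def orphaned_files(root):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                uid = os.stat(path).st_uid
                pwd.getpwuid(uid)        # raises KeyError if the user is gone
            except KeyError:
                yield path, uid
            except OSError:
                continue                 # unreadable file - skip it

for path, uid in orphaned_files("/home"):
    print("orphan (uid %d): %s" % (uid, path))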


From there you can drill down on a specific Constraint, then filesystems within the Constraint, and finally to a list of files that violated the Constraint on that filesystem by selecting the magnifying glass icon next to the entry of interest. Or, as shown in Figure 13-110, by clicking the pie chart icon next to the entry for forbidden files, you can produce a graph indicating what proportion of capacity is being utilized by files violating the Constraint. Position the cursor over any segment of the pie chart to show the percentage and number of bytes consumed by that segment.

Figure 13-110 Graph of capacity used by Constraint violating files

Constraint violations are also written to the Data Manager Alert Log. Figure 13-111 shows the same list of violations as if you had produced a Constraint Violations by computer report.

Figure 13-111 Alert log showing Constraint violations


Quota Violation Reporting


The process of producing a Quota violation report is very similar to producing a Constraint violation report, but with some key differences. One difference between Quotas and Constraints is the way data is collected. For Constraints, the data is collected as part of a standard Scan job, in a similar way to adding an additional Profile to a Scan. Quota data collections are performed in a separately scheduled job, so when you set up a Quota you need to specify scheduling parameters. We set up a Quota rule called Big Windows Users by choosing Data Manager → Policy Management → Quotas → User → Computer, right-clicking Computer, and selecting Create Quota. On the Users screen we entered a description of Big Windows Users and then selected User Groups and then TPCUser.Default User Group, as shown in Figure 13-112.

Figure 13-112 Create Quota - Users tab


On the Computers tab we chose our Windows group: tpcadmin.Windows Systems (Figure 13-113).

Figure 13-113 Create Quota - Computers tab

We then had to specify when and how often we wanted the Quota job to run. We chose to run the job weekly under the When to CHECK tab as shown in Figure 13-114.

Figure 13-114 Create Quota - When to Check


On the Alert tab, shown in Figure 13-115, we accepted all of the defaults other than to specify the limit under User Consumes More Than, in this case, 1 GB. No Alerts will be generated other than to log any exceptions in the Data Manager Alert Log.

Figure 13-115 Create Quota - Alert
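Conceptually, the check behind the User Consumes More Than threshold is a simple aggregation of scan results per user against the limit. The sketch below is our own illustration with hypothetical owners and sizes, not product code:

# Illustration only: flag users whose total space exceeds a 1 GB quota.
from collections import defaultdict

ONE_GB = 1024 ** 3
LIMIT = 1 * ONE_GB

# Hypothetical (owner, size in bytes) pairs, as a Scan might report them.
scanned_files = [
    ("alice", 700 * 1024 ** 2),
    ("alice", 450 * 1024 ** 2),
    ("bob",   200 * 1024 ** 2),
]

usage = defaultdict(int)
for owner, size in scanned_files:
    usage[owner] += size

for owner, total in sorted(usage.items()):
    if total > LIMIT:
        print("quota violation: %s uses %.2f GB" % (owner, total / ONE_GB))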

Finally, we save the Quota definition, calling it Big Windows Users as shown in Figure 13-116.

Figure 13-116 Create Quota - save


The new Quota now appears under Data Manager Policy Management Quotas User > Computer as tpcadmin.Big Windows Users (where tpcadmin is our Data Manager username). We right-clicked the Quota and chose Run Now as in Figure 13-117.

Figure 13-117 Run new Quota job

This job will collect data related to the Quota, and add any Quota Violations to the Alert Log as shown in Figure 13-118.

Figure 13-118 Alert Log - Quota violations


We then drilled down on one of the Alerts to see the details (Figure 13-119).

Figure 13-119 Alert Log - Quota violation detail

And finally we can create a Quota Violation report by choosing Data Manager Reporting Usage Violations Quota Violations Computer Quotas By Computer. The high-level report is shown in Figure 13-120.

Figure 13-120 Quota violations by computer


We can then drill down further for additional detail or to produce a graphical representation of the data behind the violation. The graph in Figure 13-121 shows a breakdown of the users data by file size.

Figure 13-121 Quota violation graphical breakdown by file size

Data Manager for Databases


Filesystem Usage Violation Reporting includes both Quota and Constraint violations. However, for databases, only Quota violations are available. You can place a Quota on users, user groups, or all users and you can limit the Quota by computer, computer group, database instance, database tablespace group or tablespace. We will set up an Instance Quota that limits any individual user to 100 MB of space per instance for any database on any server in the tpcadmin.WindowsDBServers computer group. To do this, navigate to Data Manager - Databases Policy Management Quotas Instance. Right-click Instance and choose Create Quota. Figure 13-122 shows the Quota definition screen. We entered a description of Big DB Users and selected the TPCUser.Default User Group by expanding User Groups, clicking TPCUser.Default User Group, and then clicking >>.


Figure 13-122 Create database Quota - Users tab

On the Instances tab, expand Computer Groups, select tpcadmin.Windows DB Systems and then click >> to add it to the Current Selections as shown in Figure 13-123.

Figure 13-123 Create database Quota - Instances tab


On the When to Run tab shown in Figure 13-124, we chose to run the Quota job weekly and chose a time of day for the job to run. Other values were left at the defaults.

Figure 13-124 Create a database Quota - When to Run tab

On the Alert tab (shown in Figure 13-125) we specified the actual Quota that we wanted enforced, which was a 100 MB per user Quota. Other values were left as defaults.

Figure 13-125 Create a database Quota - Alert tab


We saved the new Quota definition with a name of Big DB Users as shown Figure 13-126.

Figure 13-126 Create a database Quota - Save

We now run the Quota by right-clicking it and choosing Run Now as seen in Figure 13-127.

Figure 13-127 Run the database Quota


To check if any user has violated the Quota, navigate Data Manager - Databases Alerting Alert Log All DBMSs All. We see one violation as shown in Figure 13-128.

Figure 13-128 DB Quota violation

We can also now run a database Quota violation report by choosing Data Manager Databases Reporting Usage Violations Quota Violations All Quotas By User Quota. This report can be seen in Figure 13-129.

Figure 13-129 Database Quota violation report


13.11.7 Backup Reporting


Backup Reporting is designed to do two things: It can alert you to situations where files have been modified but not backed up, and it can provide data on the volume of data that will be backed up. Figure 13-130 shows the options that are available for Backup Reporting.

Figure 13-130 Backup Reporting options

Most at Risk Files


Data Manager defines most at risk files as those that are least-recently modified, but have not been backed up. There are some points worth noting about this report (a short sketch of the underlying archive bit check follows this list):
- Because the report relies on the archive bit being set to determine whether the file has changed, it only works on Windows systems; UNIX systems have no equivalent to the archive bit.
- With most backup products, the archive bit is cleared once a file has been backed up. Before Version 5.2, IBM Tivoli Storage Manager did not do this, so if that level of Tivoli Storage Manager is used, this report may list files that have actually been backed up. IBM Tivoli Storage Manager Version 5.2 can reset the Windows archive bit after a successful backup of a file.
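The archive bit the report depends on can be inspected directly from a script. The sketch below is our own illustration (Windows only, since st_file_attributes is a Windows-specific field); the directory name is an example:

# Illustration only (Windows): list files whose archive bit is set, that is,
# files that have changed since a backup product last cleared the bit.
import os
import stat

def modified_since_backup(root):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                attrs = os.stat(path).st_file_attributes
            except OSError:
                continue
            if attrs & stat.FILE_ATTRIBUTE_ARCHIVE:
                yield path

for path in modified_since_backup(r"C:\Data"):
    print(path)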


By default, information on only 20 files will be returned. Figure 13-131 shows the selection screen for the report. You will notice that the report uses the Profile TPCUser.Most at Risk. It is in this Profile that the 20-file limit is set, although the value can be changed. You can override the value on the selection screen, but you can only reduce the value here, not increase it. By updating the Profile you can also exclude files from the report. By default, any file in the \WINNT\system* directory tree on any device will be excluded. You can add entries to the exclusion list if appropriate. Ideally, the exclusion list should be the same as the one used by your backup product.

Figure 13-131 Files most at risk report - selection

Modified Files Not Backed Up


The report provides an aging analysis of your data that has been modified but not backed up. It will show what proportion of the data has been modified within the past 24 hours, between one and seven days, between one week and one month, and so on. Figure 13-132 shows the selection taken in our Windows environment. Like the Most at Risk Files report, this report also relies on the archive bit, so check to see if your backup application uses this.


Figure 13-132 Modified Files not backed up selection

To view the report, click Generate Report. We chose to view it as a graphic by then clicking the pie icon and selecting Chart: Space Distribution for All. This is shown in Figure 13-133. This chart tells you the amount of space consumed by files that have not been backed up since the last backup was run for this server.

Figure 13-133 Modified Files not backed up chart overall view


We can also select Chart: Count Distribution for All as shown in Figure 13-134 to show the number of files in each category.

Figure 13-134 Files need backed up chart in detail view

The different charts can be viewed in different ways. To select another type of chart, right-click in the chart area, select Customize this chart, and click the radio button next to the desired chart type.

Backup Storage Requirements Reporting


This option allows you to determine how much data would be backed up if you were to perform either a full or an incremental backup. The Full Backup Size option can be used regardless of the OS type and the backup application in use.


In Figure 13-135, the report is run against Windows systems by filesystem.

Figure 13-135 Backup storage requirements per filesystem

The selection can also run by computer, as shown in Figure 13-136.

Figure 13-136 Backup storage requirement per computer and per filesystem


The Incremental Backup Size option makes use of the archive bit, so it can only be used on Windows systems, and if Tivoli Storage Manager is the backup application, the resetarchiveattribute option must be used (for Version 5.2). A sample report is shown in Figure 13-137.

Figure 13-137 Incremental reporting per Node and Filesystem based on files


The third report type here is Incremental Range Sizes Reporting. This does not rely on the archive bit (instead, it uses the modification date), so it is more generally applicable. This report can be used to show the actual difference between a traditional weekly full/daily incremental backup process and Tivoli Storage Manager's progressive incremental approach. To generate this report, select Data Manager → Reporting → Backup → Backup Storage Requirements → Incremental Range Sizes → By Computer, as shown in Figure 13-138.

Figure 13-138 Incremental Range Size select By Computer


After you select the Computers of interest, click Generate Report. Figure 13-139 shows the output from this report, with the amount of data changed for different time ranges. Note that the values are cumulative: for each time range, the values shown include the smaller time periods.

Figure 13-139 Incremental Range Sizes Report
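The cumulative behavior of the time ranges is easy to reproduce conceptually: a file modified yesterday is counted in every range. The sketch below is our own illustration based on file modification times; the range boundaries and directory are examples, not the exact values the product uses:

# Illustration only: cumulative "changed within the last N days" totals,
# based on modification time rather than the archive bit.
import os
import time

RANGES_DAYS = [1, 7, 30, 60, 90, 180, 365]   # example range boundaries

def cumulative_change(root):
    now = time.time()
    totals = {days: 0 for days in RANGES_DAYS}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                st = os.stat(os.path.join(dirpath, name))
            except OSError:
                continue
            age_days = (now - st.st_mtime) / 86400
            for days in RANGES_DAYS:
                if age_days <= days:              # cumulative: young files
                    totals[days] += st.st_size    # count in every larger range
    return totals

for days, size in cumulative_change("/data").items():
    print("changed in last %3d days: %.1f MB" % (days, size / 1024 ** 2))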

13.12 TotalStorage Productivity Center for Data ESS Reporting


The reporting capabilities in TotalStorage Productivity Center for Data were expanded in Version 1.2 to include IBM Enterprise Storage Server (ESS) reporting. IBM Tivoli Storage Resource Manager uses Probe jobs to collect information about the ESS. We can then use the reporting facility to view that information. The new subsystem reports show the capacity, controllers, disks, and LUNs of an ESS and their relationships to computers and filesystems within a network.

13.12.1 ESS Reporting


For this section we discuss ESS asset and storage subsystem reporting, making references to the ESS lab environment in Figure 13-140 below. Note that the host which accesses the ESS had a TotalStorage Productivity Center for Data Agent installed. This provides the fullest combination of reporting ability for the ESS. If an ESS-attached host does not have a TotalStorage Productivity Center for Data Agent installed, items such as filesystem, logical volume, and device logical names will not be displayed.


[Lab diagram: Windows 2000 Server SP3 CIM/OM server w2kadvtsm (172.31.1.135); IBM 43P running AIX 5.1 ML4 with the ITSRM Agent, tsmsrv43p (172.31.1.155); ESS F20 (172.31.1.1) attached through a 2109 switch; Windows 2000 Server SP3 ITSRM Server w2kadvtsrm (172.31.1.133); all connected over the intranet.]

Figure 13-140 ESS reporting lab

Prerequisites to ESS Reporting


Before doing ESS reporting with Data Manager, the following conditions are required:
- The CIM/OM server is successfully installed.
- Data Manager successfully logs into the CIM/OM server.
- Data Manager successfully runs a discovery and probes the ESS.
Important: Refer to Chapter 5, CIMOM install and configuration on page 191 and 8.1, Configuring the CIM Agents on page 290 for additional details on confirming these prerequisites. Data Manager runs a discovery to locate the CIM/OM server in our environment, which in turn discovers the ESSs. See 8.1.2, Configuring CIM Agents on page 290.

Creating the ESS Probe


IBM Tivoli Storage Resource Manager will then run a Probe to query the discovered ESS. The Probe collects detailed statistics about the storage assets in our enterprise, such as computers, storage subsystems, disk controllers, hard disks, and filesystems.


Next, we show how to create a Probe for an ESS F20. Select Probes, create a new Probe, and then, under the Computers tab, choose Storage Subsystems. See Figure 13-141.

Figure 13-141 Creating ESS probe

On the When to PROBE tab, we selected PROBE Now because we need to populate the backend repository. See Figure 13-142.

Figure 13-142 ESS - When to probe


Next is the Alert tab, shown in Figure 13-143. This defines the type of notification for a Probe.

Figure 13-143 ESS - Alert tab

After all parameters are defined, save the Probe definition. At this point the Probe is submitted and will run immediately. Note: For additional information on creating Probes, see 13.3.5, Probes on page 545. There are several ways to check the status of the Probe job. First, we can check the color of the Probe job entry in the navigation tree, and then in the content panel. Two colors represent job status:
- GREEN - the job completed successfully with no errors
- RED - the job completed with errors


The status of the Probe job is displayed in text and in color, as shown in Figure 13-144, after selecting the Probe job output in the navigation tree. The job at 1:55 pm is in green, indicating success.

Figure 13-144 ESS - probe job status

We open the Probe job by selecting it and double-clicking the spy glass icon next to the job in the content window. We see the contents of the job, including detailed information on the status, as in Figure 13-145. Here, we have selected the successful Probe.

Figure 13-145 Probe job log


Asset Reports - By Storage Subsystem


With Asset reporting by storage subsystem, you can view the centralized asset repository that Data Manager constructs during a Probe. The Probe itemizes the information about computers, disks, controllers, and filesystems, and builds a hardware inventory of assets. With the backend repository now populated with DS6000 asset information, we will show how to view reports to display the storage resources. We choose Data Manager Reporting Asset By Storage Subsystem Tucson DS6000. This report provides specific resource information of the DS6000 and allows us to view storage capacity by a computer, filesystem, storage subsystem, LUN, and disk level. We can also view the relationships between the components of a storage subsystem. Notice that the navigation tree is hierarchical. See Figure 13-146.

Figure 13-146 Asset by storage subsystem


We drill down to the Disk Groups. The disk group contains information related to the ESS, as well as the volume spaces and disks associated with those Disk Groups. Expanding the Disk Group node, a list of all Disk Groups on the ESS displays (Figure 13-147).

Figure 13-147 ESS disk group

Continuing, we expand the disk group DG1 to view the disks and volume spaces within it. We open Volume Space VS3, which shows the disks and LUNs associated with it. The Disks subsection shows the individual disks associated with the Volume Space (see Figure 13-148).


Figure 13-148 Disks in volume spaces

Notice the LUNs subsection for disk DD0105 (Figure 13-149). This shows the LUN to disk relationship. The LUNs shown here are just a subset of all the LUNs. You can see that the LUN is spread across all the displayed disks in the content window.

Figure 13-149 Disk and LUN association with volume space


Figure 13-150 shows the discovery of a disk with no LUN associations. This is known as a hot spare. It can be used when one of the other seven disks in the disk group fails.

Figure 13-150 Hot spare LUN


We now show a high level view of all disks in ESSF20. There are 32 disks in the ESS, as shown in Figure 13-146 on page 639 in the Number of Disks field. Figure 13-151 shows a partial listing of the disks.

Figure 13-151 ESS all disks


We can also display a report of all the LUNs in the ESS. This report provides the physical disk association with each LUN. We have a total of 56 LUNs in the ESSF20 as shown in Figure 13-146 on page 639 (number of LUNS). A partial listing is shown in Figure 13-152.

Figure 13-152 ESS all LUNs


Storage Subsystem Reporting


We now open Reporting Storage subsystems. Storage Subsystems Reporting allows viewing storage capacity at a computer, filesystem, storage subsystem, LUN, and disk level.

By Computer
We drill down Computers Views By Computer. The report displays the association of filesystems to the storage subsystem, LUNS, and disks on ESSF20. These reports are useful for relating computers and filesystems to different storage subsystem components. There are three options available in the Relate Computers to: pull down, as shown in Figure 13-153.

Figure 13-153 By Computer - Relate Computer to

We select Storage Subsystems from the pull-down, select the desired computer, and click Generate. The generated report in Figure 13-154 shows that TSMSRV43P uses 9.24 GB on the ESS.

Figure 13-154 By Computer - storage subsystem


Returning to the selection screen tab (Figure 13-153 on page 645) we select LUNs. We choose the same host, and click Generate. Figure 13-155 shows the generated report; the relationship between TSMSRV43P and its assigned LUNs. TSMSRV43P has one LUN created on the ESS.

Figure 13-155 By Computer - LUNs

Finally, from the Selection tab (Figure 13-153 on page 645), we select Disks, our host TSMSRV43P, and click Generate. Figure 13-156 shows the report: the ESS disks assigned to the LUN on the host.

Figure 13-156 By Computer - disk


By Filesystem/Logical Volume
We will now drill to Computer Views By Filesystem/Logical Volume. The report displays the association of filesystems to the storage subsystem, LUNS, and disks on ESSF20. These reports are useful for relating computers and filesystems to different storage subsystem components. There are three options available in the Relate Filesystem/Logical Volumes to pull down, shown in Figure 13-157.

Figure 13-157 By filesystem/logical volume

Select Storage Subsystem, the host (TSMSRV43P), and click Generate. Figure 13-158 shows the filesystems on the host, which are located on the ESS.

Figure 13-158 By filesystem/logical volumes - storage subsystem


From the Selection tab (Figure 13-157 on page 647) we now choose LUNs, the host (TSMSRV43P), and click Generate. Figure 13-159 shows the LUN location of each filesystem on the host.

Figure 13-159 By filesystem/logical volume - LUN

From the Selection tab (Figure 13-157 on page 647) we now choose Disks, the host (TSMSRV43P), and click Generate. Figure 13-160 shows which disks are comprising each filesystem and logical volume.

Figure 13-160 By filesystem/logical volume - Disk


By Storage Subsystem
We will now drill down Storage Subsystem Views By Storage Subsystem. These reports display the relationships of the ESS components (storage subsystems, LUNs, and disks) to the computers and filesystems and logical volumes. There are two options available in the Relate Storage Subsystems to: the pull down, shown in Figure 13-161.

Figure 13-161 By Storage Subsystems

Select Computers from the pull down, the subsystem ESSF20, and click Generate. Figure 13-162 shows the space used by each host on the storage subsystem.

Figure 13-162 By Storage subsystem - Computer


Now, select Filesystem/Logical Volumes from the pull-down in Figure 13-161, the ESSF20 subsystem, and click Generate. Figure 13-163 shows each host's filesystems and logical volumes, with their capacity and free space.

Figure 13-163 By storage subsystem - filesystem/logical volume

By LUN
Continuing, we drill down Storage Subsystem Views By LUNs (Figure 13-164).

Figure 13-164 By LUNs


Select Computer from the Relate LUNs to: pull-down, select the subsystem (ESSF20) with the associated disks (the default is all), and click Generate Report. Figure 13-165 shows the LUNs assigned to each host, with the host's logical name for the LUN (/dev/hdisk1 in this case).

Figure 13-165 By LUN - computer

Now select Filesystem/Logical Volumes from the Relate LUNs to pull-down and the ESSF20 subsystem with its associated logical disks (the default is all), and then click Generate Report. Figure 13-166 shows the relationships between the LUNs, computers, and filesystems/logical volumes, including free space and host device logical names.

Figure 13-166 By LUNS - filesystem/logical volumes


Disks
Now we drill to Storage Subsystem Views Disks. There are two options available in the Relate Disks to: pull down, shown in Figure 13-167.

Figure 13-167 Disks

Select Computer from the pull down, the ESSF20 subsystem with related disks (default is all), and click Generate Report. Figure 13-168 shows the relationships of the disks to the hosts.

Figure 13-168 Disks - computer


Now select Filesystem/Logical Volumes from the pull down (Figure 13-167 on page 652), the ESSF20 subsystem with related disks (default is all), and click Generate Report. Figure 13-169 shows the relationship between the ESS disks and the filesystems and logical volumes.

Figure 13-169 Disks - filesystem/logical volumes

Note: For demonstration purposes, we have reduced some of the fields in the reports.

13.13 IBM Tivoli Storage Resource Manager top 10 reports


After analyzing typical customer scenarios, we have compiled the following list of Top 10 reports, which we recommend running regularly as a best practice:
- ESS used and free storage
- ESS attached hosts report
- Computer Uptime
- Growth in storage used and number of files
- Incremental backup trends
- Database reports against DBMS size
- Database instance storage report
- Database reports size by instance and by computer
- Locate the LUN on which a database is allocated
- Finding important files on your systems

13.13.1 ESS used and free storage


This report shows the free and used storage on an ESS system. To generate this filesystem logical view report, navigate Data Manager Reporting Storage Subsystem Computer Views By Filesystem/Logical Volumes. Select the computers to report on, and select Disks from the pull-down Relate Filesystems/Logical Volumes To as in Figure 13-170.


Figure 13-170 ESS relation to computer selected by disk

Click Generate Report. The report is shown in Figure 13-171. The following columns are displayed:
- Storage Subsystem
- Storage Subsystem Type
- Manufacturer
- Model
- Serial Number
- Computer
- Filesystem/Logical Volume
- Path
- Capacity
- Free Space
- Physical Allocation

Figure 13-171 Report for Filesystem/Logical Volumes Part 1


Figure 13-172 shows the right hand columns of the same report.

Figure 13-172 Report for Filesystem/Logical Volumes Part 2

This report provides quick answers to how much space on the ESS is allocated to each filesystem. Select LUNs this time from the pull-down in Figure 13-170 on page 654. The report in Figure 13-173 shows the LUN to host mapping for the ESS, which filesystem is associated with each LUN, and the free space.

Figure 13-173 Computer view to the filesystem with capacity and free space


13.13.2 ESS attached hosts report


This report shows which systems are using storage on an ESS. This is useful when ESS maintenance is applied so that the administrators of affected systems can be informed. To generate this report, select Data Manager Reporting Storage Subsystem Computer Views By Computer tree. We have selected all computers as in Figure 13-174.

Figure 13-174 ESS selection per computer

Click Generate Report; the report is shown in Figure 13-175.

Figure 13-175 ESS connections to computer report


Note that you can sort the report on a different column heading by clicking it. The current sort field is indicated by the small pointer next to the field name. Clicking again in the same column reverses the sort order.

13.13.3 Computer Uptime Reporting


Uptime is an important IT metric in the enterprise. To generate a Computer Uptime report, select Data Manager Reporting Availability Computer Uptime By Computer. Select the computers of interest by clicking the Selection... button and checking the boxes next to the desired computers in the Computer Selection window (Figure 13-176) and click OK.

Figure 13-176 Computer Uptime Report - computer selection

In the Selection window, specify a date range (optional), and click Generate Report, as shown in Figure 13-177.

Figure 13-177 Computer Uptime report selection


For each computer, the percent availability, number of reboots, total down time, and average down time are given, as Figure 13-178 shows. The default sort order is by descending Total Down Time.

Figure 13-178 Computer Uptime report part 1
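The figures in this report are straightforward arithmetic over the monitored period. A minimal worked example with our own numbers (a seven-day window, 3.5 hours of detected downtime, and two reboots) looks like this:

# Worked example with hypothetical numbers - not product output.
monitored_hours = 7 * 24            # 168 hours in the reporting period
total_down_hours = 3.5              # downtime detected by the probes
reboots = 2

availability = 100.0 * (monitored_hours - total_down_hours) / monitored_hours
average_downtime = total_down_hours / reboots

print("availability: %.2f%%" % availability)              # 97.92%
print("average downtime: %.2f hours" % average_downtime)  # 1.75 hours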

You can also display this information graphically, by selecting the pie chart icon at the top of the report, as shown in Figure 13-179.

Figure 13-179 Computer Uptime report graphical combined (stacked bar)


Figure 13-180 shows an unstacked bar chart of the same information (right-click and select Bar Chart).

Figure 13-180 Computer Uptime report graphical (bar chart)

13.13.4 Growth in storage used and number of files


The Backup Reporting features of Data Manager also give a convenient way to track the total storage used by files in each computer, as well as the number of files stored. It can be presented graphically, to show historical numbers and future trends. This information helps you plan future storage requirements, be alerted to potential problems, and also (if using a traditional full and incremental backup product), plan your backup server storage requirements, since this report shows the size of a full backup on each computer. Select Data Manager Reporting Backup Backup Storage Requirements Full Backup Size By Computer. We used the Profile: TPCUser.Summary By Filesystem/ Directory and selected all computers, as in Figure 13-181. Click Generate Report.

Figure 13-181 Generate Full Backup Size report


Figure 13-182 shows the total disk space used by all the files, and the number of files on each computer. The top row shows the totals for all Agents.

Figure 13-182 Select History chart for File count

To drill down, select all the computers (using the Shift key) so they are highlighted, then click the pie icon, and select History Chart: Space Usage for Selected. The generated report (Figure 13-183), shows how the total full backup size has fluctuated, and is predicted to change in the future (dotted lines - to disable this, click Hide Trends).

Figure 13-183 History Chart: Space Used
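The trend lines are a projection over the collected history. A simple least-squares straight line over the historical samples gives the same flavor of forecast; the sketch below uses hypothetical weekly space-usage samples and is not the product's actual trending algorithm:

# Illustration only: project future space usage from history with a
# least-squares straight line (hypothetical weekly samples, in GB).
history = [(0, 40.2), (1, 41.0), (2, 41.9), (3, 43.1), (4, 44.0)]  # (week, GB)

n = len(history)
sum_x = sum(x for x, _ in history)
sum_y = sum(y for _, y in history)
sum_xy = sum(x * y for x, y in history)
sum_xx = sum(x * x for x, _ in history)

slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
intercept = (sum_y - slope * sum_x) / n

for week in range(5, 9):                      # forecast the next four weeks
    print("week %d: %.1f GB" % (week, intercept + slope * week))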


To display the file count graph, select History Chart: File count from the pie icon in Figure 13-182. The output report is shown in Figure 13-184, which shows trends in the number of files on each computer.

Figure 13-184 History chart: File Count

These reports will help you find potential problems (for example, a computer system that shows an unexpected sudden upward or downward spike) and also predict disk and backup requirements for the future.

13.13.5 Incremental backup trends


This report shows the rate of modification of files, which is very useful for incremental backup planning. Select Data Manager Reporting Backup Backup Storage Requirements Incremental Range Size By Filesystem. Select Profile: TPCUser.By Modification as shown in Figure 13-185.


Figure 13-185 Incremental Range selection based on filespace

The generated report shows all the filesystems on the selected computers as in Figure 13-186.

Figure 13-186 Summary of all filespace

The third column shows the total number and total size of files (for all the systems, then broken down by filesystem). Then there are Last Modified columns for one day, one week, one month, two months, three months, six months, nine months, and one year. Each of these gives the number and size of the files modified within that period.


To generate charts, highlight all the systems, and click the pie icon. Select Chart: Count Distribution for Selected, as shown in Figure 13-187.

Figure 13-187 Selection for Filesystem and computer to generate a graphic

The chart is shown in Figure 13-188. Note that when your cursor passes over a bar, a pop-up shows the number of files associated with that bar.

Figure 13-188 Bar chart for Incremental Range Size by Filesystem


You can display other filesystems using the Next 2 and Prev 2 buttons. Change the chart format by right-clicking and selecting a different layout. Figure 13-189 is a pie chart of the same data. The pop-ups work here also.

Figure 13-189 Pie chart selected with number of files which have modified

With these reports you can track and forecast your backups. You can also display backup behavior for the last one, three, nine, or 12 months.


13.13.6 Database reports against DBMS size


This report shows an enterprise-wide view of storage usage by all RDBMSs. Select Data Manager - Databases → Reporting → Capacity → All DBMSs → Total Instance Storage → Network-wide and click Generate Report. Figure 13-190 shows a sample output.

Figure 13-190 Total Instance storage used network wide

This is a quick overview of database space consumption across the network. To drill down on a particular RDBMS type, select the appropriate magnifying glass icon, as in Figure 13-191.

Figure 13-191 DBMS drill down to the computer reports


The report (Figure 13-192) displays.

Figure 13-192 DBMS drill down to the computer result

Figure 13-192 shows the fields for an Oracle database. The fields for a DB2 database are as follows:
- Computer name
- Total Size
- Container Capacity
- Container Free Space
- Log File Capacity
- Tablespace Count
- Container Count
- Log File Count


13.13.7 Database instance storage report


This report shows storage utilization by database instance. Go to Data Manager - Databases → Reporting → Capacity → UDB → Total Instance Storage → By Instance, select the computer(s) of interest, and click Generate Report. Figure 13-193 shows the result.

Figure 13-193 DBMS report Total Instance Storage by Instance

Note that you could select any RDBMS that is installed in your network. The report shows the following information for each Agent with DB2, plus a total (summary):
- Computer name
- RDBMS instance
- RDBMS type
- Total size
- Container capacity
- Container free space
- Log file capacity
- Tablespace count
- Container count
- Log file count

13.13.8 Database reports size by instance and by computer


The next report is based on the previous report (database Instance storage report), but in more detail. From the report in Figure 13-193, click the magnifying glass next to a computer of interest. Then do a further drill down on the generated report as in Figure 13-194.


Figure 13-194 Instance report RDBMS overview

Select the computer again, and click the magnifying glass. The report shows the entire DB2 environment running on computer Colorado. We have 10 DB2 UDB databases, shown in Figure 13-195 and Figure 13-196.

Figure 13-195 Instance running on computer Colorado first part


Scroll to the right side of the panel (Figure 13-196).

Figure 13-196 Instance running on computer Colorado second part

Here we can see which databases are running in ARCHIVELOG mode.

13.13.9 Locate the LUN on which a database is allocated


This report shows you which disk or LUN is used by a database. Go to Data Manager Databases Reporting Capacity UDB Total Instance Storage By Instance, select the Agent(s) of interest, then click Generate Report. Figure 13-197 shows the result.

Figure 13-197 LUN report selection for a database


Select an Agent, and click the magnifying glass to drill down. Figure 13-198 displays. The report shows the following columns:
- File Type
- Path
- File Size
- Free Space
- Auto Extend of a File

Figure 13-198 Database select File and Path


Now select a particular data file, and click the magnifying glass. The generated pie chart is shown in Figure 13-199. We can see that this data file is allocated on the C: drive.

Figure 13-199 Report DB2 File in a Pie Chart for DB2 File

Click the View Logical Volume button at the bottom to display the LUN report (Figure 13-200).

Figure 13-200 LUN information


Using this procedure, we can find the LUNs where all the database data files are stored. This information is useful for a variety of purposes, e.g. for performance planning, availability planning, and assessing the impact of a LUN failure.

13.13.10 Finding important files on your systems


This report generates a search for specific files over all computers managed by a Data Manager Server. As an example, we created a text file on each of Lochness and Wisla, called lochness.txt and wisla.txt respectively. We chose this search because it will return a relatively small number of results across all machines; however, any search criteria could be used. The task requires a number of steps:
1. Define a new Profile.
2. Bind the new Profile into a Scan.
3. Generate a Report with your Profile.
4. Define a new Constraint.
5. Generate a Report to find the defined Constraint.
First create the Profile: navigate Data Manager → Monitoring → Profiles, right-click, and select Create Profile. Fill out the description field accordingly, and check the Summarize space usage by, Accumulate history, and Gather information on the fields as desired. In the bottom half click size distribution of files, as shown in Figure 13-201.

1. Define the new Profile

Figure 13-201 Create Profile for own File search


Now select the File Filter tab. Click in the All files selected area and right-click to create a new condition, as shown in Figure 13-202.

Figure 13-202 Create new Condition

Enter the desired file pattern into the Match field, and click Add to bring the condition into the display window below, as in Figure 13-203. You can select from different conditions:
- Matches any of
- Matches none of
- Matches
- Does not match

When you have finished the condition, click OK. In our case we are matching Tivoli Storage Manager option files.

Figure 13-203 Create Condition add
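The match conditions behave like ordinary wildcard tests. The sketch below is our own illustration of Matches any of and Matches none of; the patterns are hypothetical stand-ins for the option-file patterns entered in the GUI (the actual values we used are shown in the figure):

# Illustration only: how "Matches any of" and "Matches none of" behave.
from fnmatch import fnmatch

patterns = ["dsm.opt", "dsm.sys", "*.opt"]     # hypothetical example patterns

def matches_any(filename, pats):
    return any(fnmatch(filename.lower(), p) for p in pats)

def matches_none(filename, pats):
    return not matches_any(filename, pats)

for name in ["dsm.opt", "dsmsched.log", "server.opt"]:
    print(name, "any:", matches_any(name, patterns),
          "none:", matches_none(name, patterns))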


Figure 13-204 shows our newly created Condition.

Figure 13-204 Saved Condition in new Profile

Now save the new Profile with an appropriate name, (in this instance, Search for files). The saved Profile now appears in the Profiles list, see Figure 13-205 on page 675. Tip: We recommend choosing meaningful Profile names, which reflect the content or function of the profile.


Figure 13-205 Listed Profiles containing Search for files

2. Bind new Profile into a Scan. First, create a new Scan - Data Manager Monitoring Scans. We chose TPCUser.Default Scan as shown in Figure 13-206 on page 675. Fill in a description for this Scan and select the Filesystems and Computers on which the Scan will run.

Figure 13-206 Add Profile to Scan


On the Profiles tab, select the newly created Profile and add it to the Profiles to apply to Filesystems column, as shown in Figure 13-207.

Figure 13-207 Add Profiles to apply to filesystems

Now select the time when the Scan should run, save the Scan, and then check the result. 3. Generate a Report with your Profile. To view the results, select Data Manager → Reporting → Usage → Files → File Size Distribution → By Filesystem. Select all filesystems you wish to report on, select the newly created Profile (Figure 13-208), and click Generate Report. The report contains all the option files discovered by the Scan, as in Figure 13-209.


Figure 13-208 Select Profile: tpcadmin.Search for files

Figure 13-209 Report with number of found Search for files

Note that on the LOCHNESS and WISLA C drives we found one file each.


4. Define a new Constraint. We would like to know specifically where these files are located. To set up this search, select Data Manager → Policy Management → Constraints → TPCUser.Orphaned File Constraint, as shown in Figure 13-210. Enter a description, and select the Filesystem Groups and Computers where you want to locate the files.

Figure 13-210 Create Orphaned File search

Select the Options tab, then select Edit Filter as shown in Figure 13-211.

Figure 13-211 Update the Orphaned selection


On the Edit Filter pop-up, double click the ATTRIBUTES Filter. Here we will replace the ORPHANED condition with our own filter, since we want to actually search for the text files we created, not orphaned files (Figure 13-212).

Figure 13-212 Update the selection with own data

Use the Del button to delete the ORPHANED condition, then select NAME from the Attributes pull-down, and the Add button to add another Attributes condition. We will specify to search for the text files we created, as in Figure 13-213.

Figure 13-213 Enter the file search criteria


After each file pattern entry, click Add to save it. When all search arguments are entered, click OK to save the search. The selection is now complete as in Figure 13-214.

Figure 13-214 File Filter selection reconfirm

Click OK again. Save the search with a new description and name (File → Save As), so that you do not overwrite the original TPCUser.Orphaned File Constraint. We saved it with the name File search. Finally, we run the Scan and check the Scan job log for correct execution, as shown in Figure 13-215.

Figure 13-215 Scan log check


5. Generate a Report to find the defined Constraint. Now look for the results of the file name search. Select Data Manager → Reporting → Usage Violations → Constraint Violations → By Computer, select all computers, and generate the report. The report presents a summary, as in Figure 13-216.

Figure 13-216 Summary report of all Tivoli Storage Manager option files

To drill down, click the magnifying glass on WISLA as in Figure 13-217. This shows all the filesystems on WISLA where matching files were found.

Figure 13-217 File selection for computer WISLA


Click the magnifying glass on a filesystem (the C drive, in this case). This shows all the files found that matched the pattern, as in Figure 13-218. Note that one file was reported, which matches the summary view given in Figure 13-209 on page 677.

Figure 13-218 Report for Tivoli Storage Manager Option file searched

You can also drill down to individual files, for detailed information as in Figure 13-219.

Figure 13-219 File detail information


13.14 Creating customized reports


Customized Reporting within Data Manager is done through the My Reports option, which is available for both Data Manager and Data Manager for Databases. There are three main options available within My Reports:
- System Reports
- Reports owned by username
- Batch Reports
System Reports, while included here in the customized reporting section, are in fact not currently customizable. We still discuss them in this section because they are part of the My Reports group. Reports owned by username, where username is the currently logged-in Data Manager username, are modified versions of standard reports from the Reporting option. You will only see reports here that you have modified and saved. Batch Reports are reports that are typically set up to run on a schedule, although they can be run interactively. The key difference between Batch Reports and the other reporting options is that with Batch Reports, the output is always written to an output file rather than displayed on the screen.

13.14.1 System Reports


These reports can, at this point in time at least, only be run as is. You cannot modify the parameters in any way, nor can you add additional reports to the list. These reports provide the same information as is available from running reports under the Reporting option. The intent of these reports is to provide frequently needed information quickly and repeatedly, without having to reenter parameters.


Data Manager
Figure 13-220 shows the available System Reports for Data Manager.

Figure 13-220 My Reports - System Reports

Figure 13-221 shows the output from running the Storage Capacity system report. We could have generated exactly the same output by selecting Data Manager → Reporting → Capacity → Disk Capacity → By Computer and clicking Generate Report. Obviously, selecting Data Manager → My Reports → Storage Capacity is a lot simpler.

Figure 13-221 My Reports - Storage Capacity


Data Manager for Databases


The System Reports available for Data Manager for Databases are shown in Figure 13-222. While there are quite a few reports available, they fall into three main categories:
- Database storage by database
- Database storage by user
- Database free space
The only report that does not fall into one of those categories is a usage violation report. Figure 13-222 also shows the output from the All Dbms - User Database Space Usage report. We are not so much interested in the report contents here as in the fact that, when the report was run, it produced output for all users. You can go back to the Selection tab and select specific users if required. This capability exists for all of the System Reports.

Figure 13-222 Available System Reports for databases


13.14.2 Reports owned by a specific username


In concept this option is very similar to System Reports. You can include here those reports that you need to run regularly, consistently and easily. The difference, compared to System Reports, is that you get to decide what reports are included and what they look like. However, it is important to remember that you will only see those reports that have been created by the currently logged in TotalStorage Productivity Center for Data username.

Data Manager
We will define a report here for tpcadmin, the username that we are currently logged in as. We will create a report that is exactly the same as the Storage Capacity system report as shown in Figure 13-221 on page 684. In practice this is not something you would normally do as a report already exists. However, this will demonstrate more clearly how the options relate to each other. We select Data Manager Reporting Capacity Disk Capacity By Computer and click Generate Report. Once the report is produced, we save the report definition, using the name My Storage Capacity. This is shown in Figure 13-223.

Figure 13-223 Create My Storage Capacity report


Once the report is saved, you will see it available under the Reports node for tpcadmin, as shown in Figure 13-224. There are a few features of saved reports worth mentioning here. Firstly, characteristics such as sort order are not saved with the report definition; however, selection criteria are saved. Secondly, you can override the selection criteria when running your report. By default, only the objects selected at the time of the save will be reported; however, you can use the Selection tab when running the saved report to include or exclude objects. If you change the selection criteria, you can resave the report to update the definition, or save it under another name to create a new definition.

Figure 13-224 My Storage Report saved


Data Manager for Databases


Database Reports created for specific users, in this case tpcadmin, are set up in the same way as in Data Manager. We will show one brief example here. We take one of the reports that we created earlier in our discussion on Reporting (in this case Figure 13-100 on page 609), the Monitored Tables by RDBMS Type report, and set it up so that it can be run more easily. First we run the report by choosing Data Manager - Databases → Reporting → Usage → All DBMSs → Tables → Monitored Tables → By RDBMS Type and clicking Generate Report. We then saved the report definition, naming it Monitored Tables by RDBMS Type. This is shown in Figure 13-225.

Figure 13-225 Monitored Tables by RDBMS Types customized report

The report can now be run more easily by choosing IBM Tivoli SRM for Databases → My Reports → username's Reports → Monitored Tables by RDBMS Type.

13.14.3 Batch Reports


In this section we will show how we set up some Batch Reports. All of the reports were set up in the same way so we will use only one as an example. The process is the same whether the report is for Data Manager or Data Manager for Databases.


Data Manager
To set up a new report, expand Data Manager → My Reports → Batch Reports, right-click Batch Reports, and select Create Batch Report. You will then see the screen shown in Figure 13-226.

Figure 13-226 Create a Batch Report

Now it is simply a matter of specifying what is to be reported, when it should run, and what the output should be. In this case we are going to create a system uptime report. As shown in Figure 13-227, we entered a report description of System Uptime, selected Availability → Computer Uptime → By Computer, and clicked >>. Our selection is then moved into the right-hand panel, Current Selections.

Figure 13-227 Create a Batch Report - report selection


We then selected the Selection tab, which is shown in Figure 13-228. Here we are able to select a subset of the available data, either by reporting for a specified time range or by selecting a subset of the available systems. We took the defaults here.

Figure 13-228 Create a Batch Report - selection

On the Options tab, we specified that the report should be executed and generated on the Agent called COLORADO, which is our Data Manager server. We selected HTML for the Report Type Specification and then changed the rules for the naming of the output file under Output File Specification. By default the name will be {Report creator}.{Report name}.{Report run number}. In this case we do not really care who created the report, and a variable like the report run number, which changes every time a new version of the report is created, makes it difficult to access the file from a static Web page. So we changed the output file name to be {Report name}.html.

The report will be created in <install-directory>\log\Data-agent-name\reports on the Agent system where the report job is executed; there is no ability to override the directory name. For example, the directory is C:\Program Files\tivoli\ep\subagents\TPC\Data\log\colorado\reports on our Windows 2000 Data Manager server COLORADO, or /usr/tivoli/tsrm/log/brazil/reports on an AIX Data Manager Agent called BRAZIL.

The Options tab is shown in Figure 13-229. Note that it is possible to run a script after the report is created to perform some type of post-processing. For example, you might need to copy the output file to another system if your Web server is on a system that is not running a Data Manager Agent. A minimal sketch of such a post-processing script follows Figure 13-229.


Figure 13-229 Create a Batch Report - options
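The post-processing hook can be used to publish the generated file. The following is a minimal sketch of such a script, written here in Python. The reports directory is the COLORADO path mentioned above; the report file name and the Web server share are assumptions for illustration only, not values taken from the product.

import shutil
from pathlib import Path

# The reports directory on our COLORADO Agent; adjust for your installation.
REPORTS_DIR = Path(r"C:\Program Files\tivoli\ep\subagents\TPC\Data\log\colorado\reports")
# Hypothetical share on the Web server; it must already exist and be writable.
WEB_DIR = Path(r"\\webserver\reports$")
# Matches the {Report name}.html naming rule we chose for the batch report.
REPORT_FILE = "System Uptime.html"

def publish_report():
    source = REPORTS_DIR / REPORT_FILE
    if not source.exists():
        raise FileNotFoundError(f"Batch report not found: {source}")
    # Copy the freshly generated report over the previous copy on the Web server.
    shutil.copy2(source, WEB_DIR / REPORT_FILE)

if __name__ == "__main__":
    publish_report()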

On the When to REPORT tab we specified when the report should be generated. We chose REPORT Repeatedly and then selected a time early in the morning (3:00 AM) and specified that the report should be generated every day. This is shown in Figure 13-230.

Figure 13-230 Create a Batch Report - When to REPORT


We left the Alert tab options as default, but it is possible to generate an Alert through several mechanisms including e-mail, an SNMP trap, or the Windows event log should the generation of the report fail. Finally, we saved the report, calling it System Uptime, as shown in Figure 13-231.

Figure 13-231 Create a Batch Report - saving the report


Data Manager for Databases


We will use the same example here as in 13.14.2, Reports owned by a specific username on page 686, that is, the Monitored Tables by RDBMS Type report, but here we will save the output in HTML format. We choose Data Manager - Databases → My Reports → Batch Reports, right-click Batch Reports, and select Create Batch Report, as shown in Figure 13-232.

Figure 13-232 Create a database Batch Report

Figure 13-233 shows the Report tab. We expanded in turn Usage → All DBMSs → Tables → Monitored Tables → By RDBMS Type and clicked >>. We also entered a Description of Monitored Tables by RDBMS Type.


Figure 13-233 Create a database Batch Report - Report tab

We accepted the defaults on the Selection tab, which is to report on all RDBMS types, and then went to the Options tab, shown in Figure 13-234. We set the Agent computer, which will run the report, to COLORADO. Note that the system that you run the report on must be licensed for each type of database that you are reporting on. If we were to run the report on COLORADO, the Data Manager server system, we would need to have the Data Manager for Databases licences for Oracle and SQL Server loaded there, even though COLORADO does not run these databases.

We also set the report type to HTML and changed the output file name to be {Report name}.html. This is shown in Figure 13-234.


Figure 13-234 Create a database Batch Report - Options tab

On the When to Report tab, shown in Figure 13-235, we chose REPORT Repeatedly and set a start time.

Figure 13-235 Create a database Batch Report - When to Report tab

We did not change anything in the Alert tab. We saved the definition with the name Monitored Tables by RDBMS Type as shown in Figure 13-236.


Figure 13-236 Create a database Batch Report - save definition

We can now run the report by choosing Data Manager - Databases → My Reports → Batch Reports, right-clicking tpcadmin.Monitored Tables by RDBMS Type, and choosing Run Now. Figure 13-237 shows the output from the report execution.

Figure 13-237 Monitored Tables by RDBMS Type batch report output


13.15 Setting up a schedule for daily reports


Data Manager can produce reports according to a schedule. In our lab environment, we set up a number of Batch Reports as shown in Figure 13-238. Note that the name of each of the reports is prefixed by tpcadmin. This is the Windows username that we used to log into Data Manager. Even though the reports were created by a particular user, other Data Manager administrative users still have access to the reports (Data Manager non-administrative users can only look at the results).

It is possible to generate output from Batch Reports in various formats, including HTML, CSV (comma-separated values), and formatted reports. For all of the reports that we set up, we specified HTML as the output type and set them to run on a daily schedule. That way it is easy to use a browser to quickly look at the state of the organization's storage. It also means that anyone can look at the reported data through their browser, without having access to, or indeed knowing how to use, Data Manager. Obviously, if unrestricted access to this data were not desirable, some sort of password-based security could be included within the Web page.

Currently, all of the HTML output from Batch Reports is in table format; graphs cannot be produced. There is also no ability to affect the layout of the reports in terms of sort order, nominating the columns to be displayed, or the column size. Using the interactive reporting capability of the product does allow graphs to be produced and gives you some additional control over what the output looks like. To go further than that, you can export to a CSV file and then use a tool such as Lotus 1-2-3 or Microsoft Excel to manipulate the output; a minimal scripted example of working with such a CSV export follows Figure 13-238.

Figure 13-238 Batch Reports listing
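As a minimal illustration of post-processing a CSV batch report outside of a spreadsheet, the following Python sketch totals one numeric column across all rows. The file name and the column heading used here are assumptions, because the actual column names depend on the report that was exported.

import csv

REPORT_CSV = "tpcadmin.Storage Capacity.csv"   # hypothetical exported batch report
COLUMN = "Disk Capacity"                       # hypothetical column heading

def total_column(path, column):
    # Sum the chosen column, ignoring blank cells and thousands separators.
    total = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            value = row.get(column, "").replace(",", "").strip()
            if value:
                total += float(value)
    return total

if __name__ == "__main__":
    print(f"{COLUMN} total: {total_column(REPORT_CSV, COLUMN)}")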

The next section shows how to develop the Web site.


13.16 Setting up a reports Web site


Since Data Manager can easily generate reports in HTML format, it is a logical extension to set up a Web site where the reports can be viewed easily. Since Data Manager itself is easy to install and use, we likewise took a fairly simple approach to creating the Web site. We used the Microsoft Word Web Page Wizard to create the basic layout of the page, as shown in Figure 13-239. The main page has two frames. In the left-hand frame we created links to each of the report files; the right-hand frame is where the reports are displayed. As additional Batch Reports are needed, it is a relatively simple process to edit the HTML source and include another link. Obviously, this could be made more sophisticated; one example would be to generate the list of all HTML files within the report directory automatically, as in the sketch that follows Figure 13-239.

Figure 13-239 MS Word created Web page
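The hand-edited left frame can be replaced by a small generated index. The following Python sketch writes an index.html that links to every HTML report in the reports directory. The directory is the one used on our COLORADO server; the frame name used as the link target is an assumption about how the frameset was built.

import html
from pathlib import Path

REPORTS_DIR = Path(r"C:\Program Files\tivoli\ep\subagents\TPC\Data\log\colorado\reports")

def build_index():
    links = []
    for report in sorted(REPORTS_DIR.glob("*.html")):
        if report.name == "index.html":
            continue
        name = html.escape(report.stem)
        # target="main" assumes the right-hand frame is named "main" in the frameset.
        links.append(f'<li><a href="{report.name}" target="main">{name}</a></li>')
    page = "<html><body><ul>\n" + "\n".join(links) + "\n</ul></body></html>"
    (REPORTS_DIR / "index.html").write_text(page, encoding="utf-8")

if __name__ == "__main__":
    build_index()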

We then used the Virtual Directory Creation Wizard within Microsoft Internet Information Server (IIS) to set up access to the reports as shown in Figure 13-240. Detailed information on using IIS is shown in 8.2.2, Using Internet Information Server on page 299.


Figure 13-240 Setting up a Virtual Directory within IIS

We could then access the reports through a Web browser as shown in Figure 13-241.

Figure 13-241 Reports available from a Web browser


13.17 Charging for storage usage


Through the Data Manager for Chargeback product, Data Manager provides the ability to produce Chargeback information for storage usage. The following items can have charges allocated against them:
- Operating system storage by user
- Operating system disk capacity by computer
- Storage usage by database user
- Total size by database-tablespace
For each of the Chargeback by user options, a Profile needs to be specified. Profiles are covered in Probes on page 527. Data Manager can directly produce an invoice or create a file in CIMS format. CIMS is a set of resource accounting tools that allow you to track, manage, allocate, and charge for IT resources and costs. For more information on CIMS see: http://www.cims.com.

Figure 13-242 shows the Parameter Definition screen. The costs allocated here do not represent any real environment, but serve as an example based on these assumptions:
- Disk hardware costs, including controllers and switches, are $0.50 per MB.
- Hardware costs are only 20% of the total cost over the life of the storage = $2.50 per MB.
- On average only 50% of the capacity is used = $5.00 per MB used.
- The expected life of the storage is 4 years: $5.00 / 48 = $0.1042 per MB per month.
- The figures used are for monthly Chargeback.
- Chargeback is for cost recovery only, no profit.
The arithmetic behind the monthly rate is worked through in the sketch that follows Figure 13-242.

Figure 13-242 Chargeback parameter definition
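The monthly rate entered on the Parameter Definition screen can be derived directly from the assumptions listed above. The following short Python sketch, ours for illustration only, simply reproduces that arithmetic:

# Assumptions from the list above.
hardware_cost_per_mb = 0.50      # disk hardware, controllers and switches ($/MB)
hardware_share_of_total = 0.20   # hardware is 20% of the total cost of ownership
average_utilization = 0.50       # on average only half of the capacity is used
life_in_months = 4 * 12          # expected life of the storage

total_cost_per_mb = hardware_cost_per_mb / hardware_share_of_total   # $2.50 per MB
cost_per_used_mb = total_cost_per_mb / average_utilization           # $5.00 per used MB
monthly_rate = cost_per_used_mb / life_in_months                     # about $0.1042 per MB per month

print(f"Monthly chargeback rate: ${monthly_rate:.4f} per MB")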

In this example we have chosen to perform Chargeback by computer. It is possible to separately charge for database usage and use a different rate from the computer rate. To do this you would need to set up a Profile that excluded the database data, otherwise, it would be counted twice.


Chargeback is useful even if you do not actually collect revenue from your users for the resources consumed. It is a very powerful tool for raising awareness within the organization of the cost of storage, and of the need to have the appropriate tools and processes in place to manage storage effectively and efficiently. Figure 13-243 shows the Chargeback Report being created. Currently, it is not possible to have the Chargeback Report created automatically (that is, scheduled).

Figure 13-243 Create the Chargeback Report

Example 13-5 shows the Chargeback Report that was produced.


Example 13-5 Chargeback Report

Data Manager - Chargeback          Computer Disk Space Invoice          Aug 23, 2005

tpcadmin.Linux Systems
NAME                                 SPACE GB   COST 0.104/GB
klchl5h                                     0            0.00
group total                                 0            0.00

tpcadmin.Windows DB Systems
NAME                                 SPACE GB   COST 0.104/GB
colorado                                   69            7.19
senegal                                     0            0.00
group total                                69            7.19

tpcadmin.Windows Systems
NAME                                 SPACE GB   COST 0.104/GB
gallium                                    59            6.15
lochness                                   75            7.82
wisla                                      75            7.82
group total                               209           21.79

TPCUser.Default Computer Group
NAME                                 SPACE GB   COST 0.104/GB
Cluster Group.DB2CLUSTER.ITSOSJNT         137           14.28
group total                               137           14.28

Data Manager - Chargeback Run Summary
Computer Disk Space Invoice run total     415 GB        43.26

Example 13-6 shows the Chargeback Report in CIMS format.


Example 13-6 Chargeback Report in CIMS format

TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Linux Systems,klchl5h,1,0
TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows DB Systems,colorado,1,71687000
TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows DB Systems,senegal,1,0
TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows Systems,gallium,1,61762720
TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows Systems,lochness,1,78156288
TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,tpcadmin,Windows Systems,wisla,1,78156288
TSAOC1,20050823,20050823,17:55:00,17:55:59,,3,TPCUser,Default Computer Group,Cluster Group.DB2CLUSTER.ITSOSJNT,1,142849536
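The CIMS records in Example 13-6 are simple comma-separated lines, so they are easy to feed into other accounting tools. As a minimal sketch, and assuming (this is our reading of the sample, not a documented record layout) that the ninth field is the group name and the last field is the space figure, the following Python fragment totals the space reported per group:

from collections import defaultdict

def totals_by_group(lines):
    totals = defaultdict(int)
    for line in lines:
        fields = line.strip().split(",")
        if len(fields) < 12:
            continue
        group = fields[8]                  # assumed: group name, such as "Windows Systems"
        totals[group] += int(fields[-1])   # assumed: space figure in the last field
    return totals

if __name__ == "__main__":
    with open("chargeback.cims") as f:     # hypothetical file holding the CIMS output
        for group, space in totals_by_group(f).items():
            print(f"{group}: {space}")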


14

Chapter 14.

Using TotalStorage Productivity Center for Fabric


In this chapter we provide an introduction to the features of TotalStorage Productivity Center for Fabric. We discuss the following topics:
- IBM Tivoli NetView navigation overview
- Topology view
- Data collection, reporting, and SmartSets


14.1 NetView navigation overview


Since TotalStorage Productivity Center for Fabric (formerly Tivoli SAN Manager) uses IBM Tivoli NetView (abbreviated as NetView) for display, before going into further details we give you a basic overview of the NetView interface, how to navigate in it, and how TotalStorage Productivity Center for Fabric integrates with NetView. Detailed information on NetView is in the redbook Tivoli NetView V6.01 and Friends, SG24-6019.

14.1.1 NetView interface


NetView uses a graphical interface to display a map of the IP network with all the components and interconnect elements that are discovered in the IP network. As your Storage Area Network (SAN) is also a network, TotalStorage Productivity Center for Fabric uses NetView and its graphical interface to display a map of the discovered storage network.

14.1.2 Maps and submaps


NetView uses maps and submaps to navigate in your network and to display deeper details as you drill down. The main map is called the root map while each dependent map is called a submap. Your SAN topology will be displayed in the Storage Area Network submap and its dependents. You can navigate from one map to its submap simply by double-clicking the element you want to display.

14.1.3 NetView window structure


Figure 14-1 shows a basic NetView window.

Figure 14-1 NetView window (annotated with the submap window, the submap stack, and the child submap area)


The NetView window is divided into three parts:
- The submap window displays the elements included in the current view. Each element can be another submap or a device.
- The submap stack is located on the left side of the submap window. This area displays a stack of icons representing the parent submaps that you have already displayed, showing the hierarchy of submaps you have opened for a particular map. This navigation bar can be used to go back to a higher level with one click.
- The child submap area is located at the bottom of the submap window. It shows the submaps that you have previously opened from the current submap. You can open a submap from this area, or bring it into view if it is already open in another window.

14.1.4 NetView Explorer


From the NetView map based window, you can switch to an Explorer view where all maps, submaps and objects are displayed in a tree scheme (similar to the Microsoft Windows Explorer interface). To switch to this view, right-click a submap icon and select Explore as shown in Figure 14-2.

Figure 14-2 NetView Explorer option


Figure 14-3 shows the new display using the NetView Explorer.

Figure 14-3 NetView explorer window

From here, you can change the information displayed on the right pane by changing to the Tivoli Storage Area Network Manager view on the top pull-down field. The previously displayed view was System Configuration view. The new display is shown in Figure 14-4.

Figure 14-4 NetView explorer window with Tivoli Storage Area Network Manager view


Now, the right pane shows Label, Name, Type and Status for the device. You may scroll right to see additional fields.

14.1.5 NetView Navigation Tree


From any NetView window, you can switch to the Navigation Tree by clicking the tree icon circled in Figure 14-5.

Figure 14-5 NetView toolbar

NetView will display, with a tree format, all the objects contained in the maps you have already explored. Figure 14-6 shows the tree view.

Figure 14-6 NetView tree map

You can see that our SAN circled in red does not show its dependent objects since we have not yet opened this map through the standard NetView navigation window. You can click any object and it will open its submap in the standard NetView view.

14.1.6 Object selection and NetView properties


To select an object, right-click it. NetView displays a context-sensitive menu with several options including Object Properties as shown in Figure 14-7.


Figure 14-7 NetView objects properties menu

The Object Properties for that device will display (Figure 14-8). This will allow you to change NetView properties such as the label and icon type of the selected object.

Figure 14-8 NetView objects properties

Important: As TotalStorage Productivity Center for Fabric runs its own polling and discovery processes and only uses NetView to display the discovered objects, each change to the NetView object properties will be lost as soon as TotalStorage Productivity Center for Fabric regenerates a new map.


14.1.7 Object symbols


TotalStorage Productivity Center for Fabric uses its own set of icons as shown in Figure 14-9. Two new icons have been added for Version 1.2 - ESS and SAN Volume Controller.

Figure 14-9 Productivity Center for Fabric icons

14.1.8 Object status


The color of a symbol or the connection represents its status. The colors used by Productivity Center for Fabric and their corresponding status are shown in Table 14-1.
Table 14-1 Productivity Center for Fabric symbol color meaning
- Symbol color Green, connection color Black, status Normal: the device was detected in at least one of the scans.
- Symbol color Green, connection color Black, status New: the device was detected in at least one of the scans and a new discovery has not yet been performed since the device was detected.
- Symbol color Yellow, connection color Yellow, status Marginal (suspect): device detected - the status is impaired but still functional.
- Symbol color Red, connection color Red, status Missing: none of the scans that previously detected the device are now reporting it.

IBM Tivoli NetView uses additional colors to show the specific status of the devices; however, these are not used in the same way by Productivity Center for Fabric (Table 14-2).
Table 14-2 IBM Tivoli NetView additional colors
- Blue, status Unknown: status not determined.
- Wheat (tan), status Unmanaged: the device is no longer monitored for topology and status changes.
- Dark green, status Acknowledged: the device was Missing, Suspect, or Unknown; the problem has been recognized and is being resolved.
- Gray (used in the NetView Explorer left pane), status Unknown: status not determined.

If you suspect problems in your SAN, look in the topology displays for icons indicating a status of other than normal/green. To assist in problem determination, Table 14-3 provides an overview of symbol status with possible explanations of the problem.

Table 14-3 Problem determination
- Agents: Any; device: Normal (green); link: Marginal (yellow). Non-ISL explanation: one or more, but not all, links to the device in this topology are missing. ISL explanation: one or more, but not all, links between the two switches are missing.
- Agents: Any; device: Normal (green); link: Critical (red). Non-ISL explanation: all links to the device in this topology are missing, while other links to this device in other topologies are normal. ISL explanation: all links between the two switches are missing, but the out-of-band communication to the switch is normal.
- Agents: Any; device: Critical (red); link: Critical (red). Non-ISL explanation: all links to the device in this topology are missing, while all other links to devices in other topologies are missing (if any). ISL explanation: all links between the two switches are missing, and the out-of-band communication to the switch is missing or indicates that the switch is in critical condition.
- Agents: Both; device: Critical (red); link: Normal (black). Non-ISL explanation: all in-band agents monitoring the device can no longer detect the device, for example because of a server reboot, power-off, shutdown of the agent service, Ethernet problems, and so on. ISL explanation: this condition should not happen; if you see this on an ISL where the switches on either side of the link have an out-of-band agent connected to your SAN Manager, then you are having problems with your out-of-band agent.
- Agents: Both; device: Critical (red); link: Marginal (yellow). Non-ISL explanation: at least one link to the device in this topology is normal and one or more links are missing; in addition, all in-band agents monitoring the device can no longer detect the device. ISL explanation: this condition should not happen; if you see this on an ISL where the switches on either side of the link have an out-of-band agent connected to your SAN Manager, then you are having problems with your out-of-band agent.


14.1.9 Status propagation


Each object has a color representing its status. If the object is an individual device, the status shown is that of the device. If the object is a submap, the status shown reflects the summary status of all objects in its child submap. Status of lower level objects is propagated to the higher submap as shown in Table 14-4.
Table 14-4 Status propagation rules
- Object status Unknown: no symbols in the child submap have a status of normal, critical, suspect, or unmanaged.
- Object status Normal: all symbols are normal or acknowledged.
- Object status Suspect (marginal): all symbols are suspect; or there are normal and suspect symbols; or there are normal, suspect, and critical symbols.
- Object status Critical: at least one symbol is critical and no symbols are normal.
A minimal sketch of these rules follows the table.
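To make the propagation rules concrete, the following Python sketch, our own illustration rather than product code, derives a parent submap status from the statuses of the symbols in its child submap, following Table 14-4:

def propagate(child_statuses):
    """Return the parent object status for the child symbol statuses (Table 14-4)."""
    statuses = set(child_statuses)
    # Unknown: no symbols with status of normal, critical, suspect, or unmanaged.
    if not statuses & {"normal", "critical", "suspect", "unmanaged"}:
        return "unknown"
    # Normal: all symbols are normal or acknowledged.
    if statuses <= {"normal", "acknowledged"}:
        return "normal"
    # Critical: at least one symbol is critical and no symbols are normal.
    if "critical" in statuses and "normal" not in statuses:
        return "critical"
    # Otherwise the mix of symbols propagates as suspect (marginal).
    return "suspect"

# Example: a submap containing one missing (critical) and two normal devices.
print(propagate(["critical", "normal", "normal"]))   # suspect (marginal)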

14.1.10 NetView and Productivity Center for Fabric integration


Productivity Center for Fabric adds a SAN menu entry in the IBM Tivoli NetView interface, shown in Figure 14-10. The SAN pull-down menu contains the following entries:
- SAN Properties, to display and change object properties, such as object label and icon
- Launch Application, to run a management application
- ED/FI Properties, to view ED/FI events
- ED/FI Configuration, to start, stop, and configure ED/FI
- Configure Agents, to add and remove agents
- Configure Manager, to configure the polling and discovery scheduling
- Set Event Destination, to configure SNMP and TEC event recipients
- Storage Resource Manager, to launch TotalStorage Productivity Center for Data
- Help

Figure 14-10 SAN Properties menu


All those items will subsequently be described in more detail.

14.2 Walk-through of Productivity Center for Fabric


This section takes you through Productivity Center for Fabric. It steps through different views to help you understand how to use the different panels. Anyone familiar with NetView will find this similar, because Productivity Center for Fabric uses NetView to display your SAN along with your IP network. In the first view, you see three icons: IP Internet, SmartSets, and SAN. We focus on the SAN icon. Figure 14-11 shows the root display window when you first launch NetView. The green background on the SAN icon indicates that all is well in that environment.

Figure 14-11 NetView root display


There are three different types of views in Productivity Center for Fabric: Device Centric view, Host Centric view, and SAN view. In our configuration, the NetView display (Figure 14-12) shows two separate SANs that we are monitoring: TPC SAN and TSM SAN.

Figure 14-12 SAN view

14.2.1 Device Centric view


The first view is the Device Centric view. From this view, you can drill down to see the device point of view. In this example, we have a view of two IBM FAStT devices. The one we are using is labeled FAStT-1T14859668. We drill down on that device to see which systems are using LUNs from the FAStT. Here we see that two LUNs are available. As we drill down on LUN1, we see that a host named PDQDISRV has been assigned that LUN. If we go further, we can see that this system is a Windows 2000 system.


14.2.2 Host Centric view


Now we investigate the Host Centric view. Note: Only the systems that have the Productivity Center for Fabric agent installed on them are displayed in this view. The Host Centric view displays all host systems and their logical relationships to local and SAN-attached devices. Here again we see a system called PQDISRV. If we drill down on this system, we can see that this is a Windows 2000 system that has four file systems defined on it. We can also look at the properties of those file systems. This enables us to see such information as the type of file system, mount point, total amount of space and how much free space is available. As we drill down further, we can see the logical volume or volumes behind those file systems.

14.2.3 SAN view


The SAN view displays one symbol for each SAN. You can see from Figure 14-12 on page 713 that there are two SANs. When we double-click the SAN icon labelled TPC SAN, we see the underlying submap (Figure 14-13). From the submap, you can choose either the Topology View or Zone View.

Figure 14-13 SAN subview


First we explore the Zone View. The Zone View displays information by zone groupings. Figure 14-14 displays information about the three zones that have been set up on the Fibre Channel switch: the Colorado, Gallium, and PQDI zones.

Figure 14-14 Zone View


We can drill down in each zone and see which system and devices have been assigned to that specific zone. Figure 14-15 shows the Colorado zone in which there is one host and a FAStT disk subsystem.

Figure 14-15 Colorado zone contents


Now we look at the Topology View (Figure 14-16). The Topology View draws a picture of how the SAN is configured, which devices are connected to which ports, and so on. As we drill down in the Topology View, we first see the interconnect elements. This shows you the connection between any switches. In our small environment, we have only one switch, so the only device connected is the itsosw4 switch, which is an IBM 2109-F16 switch.

Figure 14-16 Topology View of switches


If we had two switches in our SAN, we would see a switch icon on either side of the Interconnect elements icon. As we drill down on the switch, we see what devices and systems are directly attached to it. Figure 14-17 shows five hosts, the FAStT device, and the IBM switch in the middle.

Figure 14-17 SAN topology

From here, we show you several features of Productivity Center for Fabric, such as:
- How to configure the manager and what happens when things go wrong
- Properties of a host with the Productivity Center for Fabric agent installed
- How to configure SNMP agents


We begin by showing what happens when things go wrong. Figure 14-18 shows that the FAStT disk system has a redundant connection. Let's see what happens when one connection goes down.

Figure 14-18 FAStT dual connections


In Figure 14-19 you notice on the left, that all of the parent icons have turned yellow. This indicates that something has happened in your SAN environment. You can then drill down, following the yellow trail until you find the problem. Here we can see that one of the connections to the FAStT disk system has gone down.

Figure 14-19 Failed resource

This gives an administrator a place to start looking. After they determine what the problem is, they can take corrective action. The FAStT icon has turned Red, not because it has failed, but so you can see that it is affected. In our case, we lost access to one of the controllers of the FAStT, because it was the only path to that controller. If you right-click the FAStT icon and then select Acknowledge, it changes back to Green if the device itself is OK. The path to the icon still remains Yellow. When the problem is corrected, the topology is updated to reflect the resolution.


Now let us see the kind of information that we can view from a host that has a Productivity Center for Fabric agent installed on it. Select the required host, and then click SAN → SAN Properties. A Properties window (Figure 14-20) opens. It shows such information as the IP address, operating system, host bus adapter type, driver versions, and firmware levels.

Figure 14-20 Properties of host GALLIUM


When you click the Connection tab on the left, you see the port on the switch to which the specific host is connected as shown in Figure 14-21.

Figure 14-21 GALLIUM connection

Now that you have seen the agents and where you can define them, let's look at the manager configuration. The manager configuration is simple and enables you to set the polling intervals. Figure 9-46 on page 349 shows the polling setup, in which you specify how often you want your agents to poll the SAN. You can set this in minutes, hours, days, or weeks, specify which days you want to poll on, and set the exact time. Or you can poll manually by clicking the Poll Now button. The Clear History button changes the state of an object that previously had a problem but is back up: the state appears as yellow, and Clear History changes it back to normal (green).


14.2.4 Launching element managers


Productivity Center for Fabric also has the ability to launch element managers. By element manager, we are referring to applications that vendors use to configure their hardware. Figure 14-22 shows the Productivity Center for Fabric launching the element manager for the IBM 2109 Fibre Channel switch.

Figure 14-22 Launching an element manager


Figure 14-23 shows the management tool for the IBM 2109 after being launched from Productivity Center for Fabric.

Figure 14-23 Switch management


14.2.5 Explore view


Along with the Productivity Center for Fabric Topology View, you can view your SAN environment with a Windows Explorer-type view. By clicking the Submap Explorer button in the center of the toolbar, you see a view like the example in Figure 14-24. The Navigation Tree button shows a flowchart-type view of the Productivity Center for Fabric views.

Figure 14-24 Explorer view

14.3 Topology views


The standard IP-based IBM Tivoli NetView root map contains IP Internet and SmartSets submaps. Productivity Center for Fabric adds a third submap, called Storage Area Network, to allow the navigation through your discovered SAN. Figure 14-25 shows the NetView root map with the addition of Productivity Center for Fabric.


Figure 14-25 IBM Tivoli NetView root map

The Storage Area Network submap (shown in Figure 14-26) displays an icon for each available topology view. There will be a SAN view icon for each discovered SAN fabric (three in our case), a Device Centric View icon, and a Host Centric View icon.

Figure 14-26 Storage Area Network submap


You can see in this figure that we had three fabrics. They are named Fabric1, Fabric3, and Fabric4, since we have changed their labels using SAN → SAN Properties, as explained in Properties on page 736. Figure 14-27 shows the complete list of views available. In the following sections we describe the content of each view.

Topology views:
Tivoli NetView root map
  Storage Area Network
    SAN view
      Topology view
        Switches - Elements
        Interconnect elements - Elements (switches)
      Zone view
        Zones - Elements
    Device Centric view
      Devices (storage servers) - LUNs - Host - Platform
    Host Centric view
      Hosts - Platform - Filesystems - Volumes

Figure 14-27 Topology views

14.3.1 SAN view


The SAN view allows you to see the SAN topology at the fabric level. In this case we clicked the Fabric1 icon shown in Figure 14-26 on page 726. The display in Figure 14-28 appears, giving access to two further submaps: the Topology view and the Zone view.

Figure 14-28 Storage Area Network view


Topology view
The topology view is used to display all elements of the fabric including switches, hosts, devices, and interconnects. As shown on Figure 14-29, this particular fabric has two switches.

Figure 14-29 Topology view

Now, you can click a switch icon to display all the hosts and devices connected to the selected switch (Figure 14-30).

Figure 14-30 Switch submap


On the Topology View (shown in Figure 14-29 on page 728) you can also click Interconnect Elements to display information about all the switches in that SAN (Figure 14-31).

Figure 14-31 Interconnect submap

The switch submap (Figure 14-30) shows that six devices are connected to switch ITSOSW1. Each connection line represents a logical connection. Click a connection bar twice to display the exact number of physical connections (Figure 14-32). We now see that, for this example, SOL-E is connected to two ports on the switch ITSOSW1.

Figure 14-32 Physical connections view


When the connection represents only one physical connection (or, if we click one of the two connections shown in Figure 14-32), NetView displays its properties panel (Figure 14-33).

Figure 14-33 NetView properties panel

Zone view
The Zone view submap displays all zones defined in the SAN fabric. Our configuration contains two zones called FASTT and TSM (Figure 14-34).

Figure 14-34 Zone view submap


Click twice on the FASTT icon to see all the elements included in the FASTT zone (Figure 14-35).

Figure 14-35 FASTT zone

In Lab 1, the FASTT zone contains five hosts and one storage server. We have installed TotalStorage Productivity Center for Fabric Agents on the four hosts that are labelled with their correct hostnames (BRAZIL, GALLIUM, SICILY, and SOL-E). For the fifth host, LEAD, we have not installed the agent; however, it is discovered since it is connected to the switch. Productivity Center for Fabric displays it as a host device, and not as an unknown device, because the QLogic HBA drivers installed on LEAD support RNID. This RNID support gives the switch the ability to get additional information, including the device type (shown by the icon displayed) and the WWN. The disk subsystem is shown with a question mark because the FAStT700 was not yet fully supported (with the level of code available at the time of writing) and Productivity Center for Fabric was not able to determine all the properties from the information returned by the inband and outband agents.

14.3.2 Device Centric View


You may have several SAN fabrics with multiple storage servers. The Device Centric View (accessed from the Storage Area Network view, as shown in Figure 14-26 on page 726) displays the storage devices connected to your SANs and their relationship to the hosts. This is a logical view, as the connection elements are not shown. Because of this, you may prefer to see this information using the NetView Explorer interface, as shown in Figure 14-36. This has the advantage of automatically displaying all the lower-level items for the Device Centric View listed in Figure 14-27 on page 727 simultaneously, such as LUNs and Host.


Figure 14-36 Device Centric View

In the preceding figure, we can see the twelve defined LUNs and the host to which they have been allocated. The dependency tree is not retrieved from the FAStT server but is consolidated from the information retrieved from the managed hosts. Therefore, the filesystems are not displayed as they can be spread on several LUNs and this information is transparent to the host. Note that the information is also available for the MSS storage server, the other disk storage device in our SAN.

14.3.3 Host Centric View


The Host Centric View (accessed from the Storage Area Network view, as shown in Figure 14-26 on page 726) displays all the hosts in the SAN and their related local and SAN-attached storage devices. This is a logical view that does not show the interconnect elements (and runs across the fabrics). Since this is also a logical view, like the Device Centric View, the NetView Explorer presents a more comprehensive display (Figure 14-37).


Figure 14-37 Host Centric View for Lab 1

We see our four hosts and all their filesystems, whether they are locally attached or SAN-attached. NFS-mounted filesystems and shared directories are not displayed. Since no agent is running on LEAD, it is not shown in this view.

14.3.4 iSCSI discovery


For this environment we will reference SAN Lab 2 (Lab 2 environment on page 752).

Starting discovery
You can discover and manage devices that use the iSCSI storage networking protocol through Productivity Center for Fabric using IBM Tivoli NetView. Before discovery, SNMP and the iSCSI MIBs must be enabled on the iSCSI device, and Tivoli NetView IP Discovery must be enabled. See 14.11, Real-time reporting on page 786 for enabling IP discovery. The IBM Tivoli NetView nvsniffer daemon discovers the iSCSI devices. Depending on the iSCSI operation chosen, a corresponding iSCSI SmartSet is created under the IBM Tivoli NetView SmartSets icon. By default, the nvsniffer utility runs every 60 minutes. Once nvsniffer discovers an iSCSI device, it creates an iSCSI SmartSet located on the NetView Topology map at the root level. You can select what type of iSCSI device is discovered. From the menu bar, click Tools → iSCSI Operations and select Discover All iSCSI Devices, Discover All iSCSI Initiators, or Discover All iSCSI Targets, as shown in Figure 14-38. For more details about iSCSI, refer to 14.12, Productivity Center for Fabric and iSCSI on page 810.


Figure 14-38 iSCSI discovery

Double-click the iSCSI SmartSet icon to display all iSCSI devices. Once all iSCSI devices are discovered by NetView, the iSCSI SmartSet can be managed from a high level. Status for iSCSI devices is propagated to the higher level, as described in 14.1.9, Status propagation on page 711. If you detect a problem, drill to the SmartSet icon and continue drilling through the iSCSI icon to determine what iSCSI device is having the problem. Figure 14-39 shows an iSCSI SmartSet.

Figure 14-39 iSCSI SmartSet

14.3.5 MDS 9000 discovery


The Cisco MDS 9000 is a family of intelligent multilayer directors and fabric switches that have such features as: virtual SANs (VSANs), advanced security, sophisticated debug analysis tools and an element manager for SAN management. Productivity Center for Fabric has enhanced compatibility for the Cisco MDS 9000 Series switch. Tivoli NetView displays the port numbers in a format of SSPP, where SS is the slot number and PP is the port number. The Launch Application menu item is available for the Cisco switch. When the Launch Application is selected, the Cisco Fabric Manager application is started. For more details, see 14.7.1, Cisco MDS 9000 discovery on page 745.


14.4 SAN menu options


In this section we describe some of the menu options contained under the SAN pull-down menu option for Productivity Center for Fabric.

14.4.1 SAN Properties


As shown in Figure 14-40, select an object and use SAN → SAN Properties to display the properties gathered by Productivity Center for Fabric. In this case we are selecting a particular filesystem (the root filesystem) from the Agent SOL-E.

Figure 14-40 SAN Properties menu

This will display a SAN Properties window that is divided into two panes. The left pane always contains Properties, and may also contain Connection and Sensors/Events, depending on the type of object being displayed. The right pane contains the details of the object. These are some of the device types that give information in the SAN Properties menu:
- Disk drive
- Hdisk
- Host file system
- LUN
- Log volume
- OS
- Physical volume
- Port
- SAN
- Switch
- System
- Tape drive
- Volume group
- Zone

Properties
The first grouping item is named Properties and contains generic information about the selected device. The information that is displayed depends on the object type. This section shows at least the following information:

Label: The label of the object as it is displayed by Productivity Center for Fabric. If you
update this field, this change will be kept over all discoveries.

Icon: The symbol representing the device type. If the object is of an unknown type, this
field will be in read-write mode and you will be able to select the correct symbol.

Name: The reported name of the device.


Figure 14-41 shows the Properties section for a filesystem. You can see that it displays the filesystem name and type, the mount point, and both the total and available space. Since a filesystem is not related to a port connection and also does not return sensor events, only the Properties section is available.

Figure 14-41 Productivity Center for Fabric Properties Filesystem

Figure 14-42 shows the Properties section for a host. You can see that it displays the hostname, the IP address, the hardware type, and information about the HBA. Since the host does not give back sensor related events, only the Properties and Connections sections are available.


Figure 14-42 Productivity Center for Fabric Properties Host

Figure 14-43 shows the Properties section for a switch. You can see that it displays fields including the name, the IP address, and the WWN. The switch is a connection device and sends back information about the events and the sensors. Therefore, all three item groups are available (Properties, Connections, and Sensors/Events).

Figure 14-43 Productivity Center for Fabric Properties Switch


Figure 14-44 shows the properties for an unknown device. Here you can change the icon to a predefined one by using the pull-down field Icon. You can also change the label of a device even if the device is of a known type.

Figure 14-44 Changing icon and name of a device

Connection
The second grouping item, Connections, shows all ports in use for the device. This section appears only when it is appropriate to the device displayed (switch or host). In Figure 14-45, we see the Connection tab for one switch where six ports are used. Port 0 is used for the Inter-Switch Link (ISL) to switch ITSOSW2. This is a very useful display, as it shows which device is connected on each switch port.

Figure 14-45 Connection information


Sensors/Events
The third grouping item, Sensors/Events, is shown in Figure 14-46. It shows the sensors status and the device events for a switch. It may include information about fans, batteries, power supplies, transmitter, enclosure, board, and others.

Figure 14-46 Sensors/Events information

14.5 Application launch


Many SAN devices have vendor-provided management applications. Productivity Center for Fabric provides a launch facility for many of these.


14.5.1 Native support


For some supported devices, Productivity Center for Fabric will automatically discover and launch the device-related administration tool. To launch, select the device and then click SAN Launch Application. This will launch the Web application associated with the device. In our case, it launches the Brocade switch management Web interface for the switch ITSOSW1, shown in Figure 14-47.

Figure 14-47 Brocade switch management application

14.5.2 NetView support for Web interfaces


For devices that have not identified their management application, IBM Tivoli NetView allows you to manually configure the launch of a Web interface for any application, by doing the following actions:
1. Right-click the device and select Object Properties from the context-sensitive menu.
2. In the dialog box, select the Other tab (shown in Figure 14-48).
3. Select LANMAN from the pull-down menu.
4. Check isHTTPManaged.
5. Enter the URL of the management application in the Management URL field.
6. Click Verify, Apply, OK.


Figure 14-48 NetView objects properties Other tab

After this, you can launch the Web application by right-clicking the object and then selecting Management Page, as shown in Figure 14-49.

Figure 14-49 Launch of the management page

Important: This definition will be lost if your device is removed from the SAN and subsequently rediscovered, since it will be a new object for NetView.


14.5.3 Launching TotalStorage Productivity Center for Data


The TotalStorage Productivity Center for Data interface can be started from the TotalStorage Productivity Center for Fabric NetView console. To do this, select SAN → Storage Resource Manager, as shown in Figure 14-50.

Figure 14-50 Launch Tivoli Storage Resource Manager

The user properties file contains an SRMURL setting that defaults to the fully qualified host name of the Tivoli Storage Area Network Manager machine. This default assumes that both TotalStorage Productivity Center for Data and TotalStorage Productivity Center for Fabric are installed on the same machine. If TotalStorage Productivity Center for Data is installed on a separate machine, you can modify the SRMURL value to specify the host name of the TotalStorage Productivity Center for Data machine. For instructions on how to do this, please refer to the manual IBM Tivoli Storage Area Network Manager Users Guide, SC23-4698. You can start the TotalStorage Productivity Center for Data graphical interface from the Tivoli NetView console if the following conditions are true:
- TotalStorage Productivity Center for Data or the TotalStorage Productivity Center for Data graphical interface is installed on the same machine as TotalStorage Productivity Center for Fabric, or the SRMURL value specifies the hostname of TotalStorage Productivity Center for Data.
- TotalStorage Productivity Center for Fabric is currently running.
For more information on TotalStorage Productivity Center for Data, see the redbook IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886. A minimal sketch of adjusting the SRMURL value is shown after this list.
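As a minimal sketch of pointing the launch at a remote Data server, the following Python fragment rewrites the SRMURL value in the user properties file. The file name and location used here, and the host name and port in the value, are assumptions for illustration; the correct format is described in the manual referenced above.

from pathlib import Path

PROPERTIES_FILE = Path("user.properties")                    # hypothetical name and location
NEW_VALUE = "SRMURL=http://datamanager.example.com:9550"     # hypothetical host and port

def set_srmurl():
    # Replace an existing SRMURL line, or append one if none is present.
    lines = PROPERTIES_FILE.read_text().splitlines()
    updated = [NEW_VALUE if line.startswith("SRMURL=") else line for line in lines]
    if NEW_VALUE not in updated:
        updated.append(NEW_VALUE)
    PROPERTIES_FILE.write_text("\n".join(updated) + "\n")

if __name__ == "__main__":
    set_srmurl()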

14.5.4 Other menu options


For the other options on the SAN pull-down menu:
- Configure Agents is covered in Configuring the outband agents on page 346 and Checking inband agents on page 348.
- Configure Manager is covered in Performing an initial poll and setting up the poll interval on page 349.
- Set Event Destination is covered in Configuring SNMP on page 342.
- ED/FI Properties and ED/FI Configuration are covered in Configuration for ED/FI - SAN Error Predictor on page 818.

14.6 Status cycles


Figure 14-51 shows the typical color change status cycles which reflect normal operation as a device goes down and comes up. Table 14-1 on page 709 and Table 14-2 on page 709 list the meanings of the different colors.

Figure 14-51 IBM Tivoli SAN Manager normal status cycle (states New/green, Normal/green, and Missing/red, with transitions for device down, device up, and Clear History)

If you do not manually use NetView capabilities to change status, the status of a Tivoli SAN Manager object goes from green to red and from red to green. Note that the only difference between an object in the NORMAL/GREEN and NEW/GREEN status is in the Status field under SAN Properties (see Figure 14-42 on page 737 for an example). A new object will have New in the field and a normal object will show Normal. The icon displayed in the topology map will look identical in both cases. You can encounter situations where your device is down for a known reason, such as an upgrade or hardware replacement, and you don't want it displayed with a missing/red status. You can use the NetView Unmanage function to set its color to tan to avoid having the yellow or red status reported and propagated in the topology display. See Figure 14-52.


Figure 14-52 Status cycle using the Unmanage function (states Normal/green, Normal/tan, Missing/red, Missing/tan, and not discovered/not displayed, with transitions for Manage/Unmanage, device up, device down, and Clear History)

However, when a device is unmanaged and you select SAN → Configure Manager → Clear History to remove historical data, the missing device will be removed from the Productivity Center for Fabric database and will no longer be reported until it is back up with a new/green status. If you have changed the label of the device and it is rediscovered after a Clear History, it will reappear with the default generated name, as this information is not saved. See Figure 14-53.

Figure 14-53 Status cycle using the Acknowledge function (states Normal/green, Missing/red, and Missing/dark green, with transitions for device down, device up, and Acknowledge/Unacknowledge)

You can use the NetView Acknowledge function to specify that you have been notified about the problem and that you are currently searching for more information or for a solution. This sets the device's color to dark green to avoid having the yellow or red status reported and propagated in the topology display. Subsequently, you can use the Unacknowledge function to return to the normal status and color cycle. When the device becomes available, it will automatically return to the normal reporting cycle. A minimal sketch of these status transitions follows this paragraph.
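The following Python sketch, our own illustration rather than product code, models the main transitions described in this section and in Figures 14-51 through 14-53. It only summarizes the behavior described in the text.

# A state is a (status, color) pair as reported in the topology map.
TRANSITIONS = {
    (("new", "green"), "device down"): ("missing", "red"),
    (("normal", "green"), "device down"): ("missing", "red"),
    (("missing", "red"), "device up"): ("normal", "green"),
    (("normal", "green"), "unmanage"): ("normal", "tan"),
    (("missing", "red"), "unmanage"): ("missing", "tan"),
    (("missing", "red"), "acknowledge"): ("missing", "dark green"),
    (("missing", "dark green"), "device up"): ("normal", "green"),
}

def next_state(state, event):
    """Return the new (status, color) after an event, or the same state if unaffected."""
    return TRANSITIONS.get((state, event), state)

print(next_state(("normal", "green"), "device down"))   # ('missing', 'red')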


14.7 Practical cases


We have re-created some typical errors that can happen in a production environment to see how and why Productivity Center for Fabric reacts to them. We have also used different configurations of the inband and outband agents and correlated the results with the explanations.

14.7.1 Cisco MDS 9000 discovery


In this section we discuss the discovery of the Cisco MDS 9509, which is part of the MDS 9000 family. Our MDS 9509 is a multilayer switch/director with a 6-slot configuration. We have one 16-port card and one 32-port card running at 2 Gb/s. Discovery of the MDS 9509 is performed using inband management. Figure 14-54 shows the lab environment used to demonstrate the following discovery. We will call this Lab environment 3.

Figure 14-54 Lab environment 3 (the SAN Manager on LOCHNESS and the Agent host SANAN connected over the intranet, with the Cisco MDS 9509 attaching the hosts SANXC1, SANXC2, and SANAN3)

We first deployed a Productivity Center for Fabric Agent to SANAN. Once the agent was installed, it registered with the Productivity Center for Fabric manager, LOCHNESS, and discovered CISCO1 (the MDS 9509). The topology in Figure 14-55 was displayed after deploying the agent. Note: In order to discover the MDS 9000, at least one Productivity Center for Fabric Agent must be installed on a host attached to the MDS 9000. Outband management is not supported for the MDS 9000.


Figure 14-55 Discovery of MDS 9509

To display the properties of CISCO1, right-click the CISCO1 icon and select SAN → SAN Properties. See Figure 14-56.

Figure 14-56 MDS 9509 properties


The Connection option (Figure 14-57) displays information about the slots and ports where the hosts SANXC1, SANXC2 and SANXC3 are connected, as well as the status of each port.

Figure 14-57 MDS 9509 connections

14.7.2 Removing a connection on a device running an inband agent


Next, we removed the FC link between the host SICILY and the switch ITSOSW1. Productivity Center for Fabric does not show that the device is missing, but shows that the connection is missing. As the host was running an in-band management agent, the host continues to report its configuration to the manager using the IP network. However, the attached switch sends a trap to the manager to signal the loss of a link. You can use Monitor → Events → All to view the trap received by NetView. Double-click the trap coming from ITSOSW1 to see details about the trap, as shown in Figure 14-58.

Figure 14-58 Trap received by NetView

We see that ITSOSW1 sent a trap to signal that FCPortIndex4 (port number 3) has a status of 2 (which means Offline).


The correlation between the inband information and the trap received is then made correctly and only the connection is shown as missing. You can see in Figure 14-59 that the connection line has turned red, using the colors referenced in Table 14-1 on page 709.

Figure 14-59 Connection lost

We then restored the connection, and following the status cycle explained in Figure 14-51 on page 743, the connections returned to normal (Figure 14-60).

Figure 14-60 Connection restored


Next, we removed one of the two connections from the host TUNGSTEN to ITSOSW3. One link is lost, so the connection is now shown as suspect (yellow) in Figure 14-61.

Figure 14-61 Marginal connection

NetView follows its status propagation rules in Table 14-4 on page 711. This connection links to a submap with the two physical connections. The bottom physical connection is missing (red) and the other (top) one is normal (black), resulting in a propagated status of marginal (yellow) on the parent map (left-hand side). See Figure 14-62.

Figure 14-62 Dual physical connections with different status


14.7.3 Removing a connection on a device not running an agent


A device with no agent is only detected through its connection to the switch. If the connection is broken, the host cannot be discovered. In this case, we unplugged the FC link between the host LEAD and the switch ITSOSW2. LEAD is not running either an inband or an outband agent, as we can see from the agent configuration panel (SAN → Configure Agents), shown in Figure 14-63.

Figure 14-63 Agent configuration

After removing the link on LEAD, we received a standard Windows missing-device popup (Figure 14-64) indicating that it could no longer see its FC-attached disk device.

Figure 14-64 Unsafe removal of Device


Productivity Center for Fabric shows the device as Missing (the icon changes to red; see the color status listing in Table 14-1 on page 709), as it is no longer able to determine the status of the device (see Figure 14-65).

Figure 14-65 Connection lost on a unmanaged host

In Figure 14-66, the host is in Unmanaged (tan) status, since we decided to unmanage it.

Figure 14-66 Unmanaged host


We finally select SAN → Configure Manager → Clear History (see Figure 14-67).

Figure 14-67 Clear History

After the next discovery, as explained in Figure 14-52 on page 744, the host is no longer displayed (Figure 14-68), since it has been removed from the Productivity Center for Fabric database.

Figure 14-68 NetView unmanaged host not discovered

14.7.4 Powering off a switch


In this test we power off a SAN switch and observe the results.

Lab 2 environment
For demonstration purposes in the following sections, this lab is referenced as Lab 2. The configuration consists of:
- Two IBM 2109-S08 switches (ITSOSW1 and ITSOSW2) with firmware V2.6.0g
- One IBM 2109-S16 switch (ITSOSW3) with firmware V2.6.0g
- One IBM 2109-F16 switch (ITSOSW4) with firmware V3.0.2
- One IBM 2107-G07 SAN Data Gateway
- Two pSeries 620 (BANDA, KODIAK) running AIX 5.1.1, each with two IBM 6228 cards
- One IBM pSeries F50 (BRAZIL) running AIX 5.1.1 ML4 with one IBM 6227 card (firmware 02903291) and one IBM 6228 card (firmware 02C03891)
- One HP server running HP-UX 11.0 with one FC HBA
- Four Intel servers (TONGA, PALAU, WISLA, LOCHNESS)
- Two Intel servers (DIOMEDE, SENEGAL), each with two QLogic QLA2200 cards (firmware 8.1.5.12)
- One IBM xSeries 5500 (BONNIE) with two QLogic QLA2300 cards (firmware 8.1.5.12)
- One IBM Ultrium Scalable Tape Library (3583)
- One IBM TotalStorage FAStT700 storage server

Figure 14-69 shows the SAN topology of our lab environment.

Figure 14-69 SAN lab - environment 2

We powered off the switch ITSOSW4, with the managed host SENEGAL enabled. The topology map reflects this, as shown in Figure 14-70: the switch and all its connections change to red.

Figure 14-70 Switch down Lab 2

The agent running on the managed host (SENEGAL) has scanners listening to the HBAs located in the host. Those HBAs detect that the attached device, ITSOSW4, is not active, since there is no signal from ITSOSW4. The scanners retrieve this information and report it back to the manager through the standard TCP/IP connection. Since the switch is not active, the hosts can no longer access the storage servers. The active agent (SENEGAL) sends the information to the manager, which triggers a new discovery. Because the switch no longer responds to outband management, Productivity Center for Fabric correlates all the information and, as a result, the connections between the managed hosts and the switch, and the switch itself, are shown as red/missing. The storage server is shown as green/normal because of a second Fibre Channel connection to ITSOSW2. ITSOSW2 is also green/normal because of the outband management being performed on this switch. The active agent host is still reported as normal/green because it sends its information to the manager through the TCP/IP network. Therefore the manager can determine that only the agent's switch connections, not the host itself, are down.


Now, we powered the switch on again. At startup, the switch sends a trap to the manager. This trap will cause the manager to ask for a new discovery. The result is shown in Figure 14-71.

Figure 14-71 Switch up Lab 2

Now, following the status propagation detailed in 14.6, Status cycles on page 743, all the devices are green/normal.


14.7.5 Running discovery on a RNID-compatible device


When you define a host for inband management, the topology scanner will launch inband queries to all attached HBAs. The remote HBAs, if they support RNID, will send back information such as device type. On switch ITSOSW2 is a Windows host, CLYDE, with a QLogic card at the required driver level. There is no agent installed on this host. We see, however, that it is discovered as a host rather than as an Unknown device, as shown in Figure 14-72, because of the HBA RNID support.

Figure 14-72 RNID discovered host

You can see under the SAN Properties window, Figure 14-73, that the RNID support only provides the device type (Host) and the WWN. Compare this with the SAN Properties window for a managed host, shown in Figure 14-42 on page 737.

Figure 14-73 RNID discovered host properties


To have a more explicit map, we put CLYDE in the Label field (using the method shown in Figure 14-44) and the host is now displayed with its new label.

Figure 14-74 RNID host with changed label


14.7.6 Outband agents only


To see what happens if there are only outband agents, that is, with no Productivity Center for Fabric agents running, we stopped all the running inband agents, cleared the Productivity Center for Fabric configuration by using the SAN → Configure Agents → Remove button, and then reconfigured the outband agents on the switches ITSOSW1, ITSOSW2, and ITSOSW4, as shown in Figure 14-75.

Figure 14-75 Only outband agents

When configuring the agents, we also used the Advanced button to enter the administrator userid and password for the switches. This information is needed by the scanners to obtain administrative information such as zoning for Brocade switches.


Productivity Center for Fabric discovers the topology by scanning the three registered switches. This is shown in Figure 14-76. The information about the attached devices is limited to the WWN of the device, since this information is retrieved from the switch and there is no other inband management. Note the - signs next to the Device Centric and Host Centric Views; this information is retrieved only by the inband agent, so it is not available to us here.

Figure 14-76 Explorer view with only outband agents

Figure 14-77 shows the information retrieved from the switches (SAN Properties).

Figure 14-77 Switch information retrieved using outband agents


14.7.7 Inband agents only


For this practical case, we first unplugged all Fibre Channel connections from all agents and removed all the outband agents from the configuration using the SAN → Configure Agents → Remove tab. We then forced a new poll. As expected, the agents returned only information about the node and the local filesystems, shown in Figure 14-78. Note the - sign in front of /data01 for host SICILY. The filesystem is defined but not mounted, as the Fibre Channel connections are not active.

Figure 14-78 Inband agents only without SAN connections

We reconnected the Fibre Channel connections from all agents to the switches and forced a new polling. We now see that all agents reported information about their filesystems. Since the agents are connected to a switch, the inband agents retrieve information from it using inband management. That explains why we see all the devices, including those without agents installed. Figure 14-79 shows that:
- Our four inband agents (BRAZIL, GALLIUM, SICILY, SOL-E) are recognized.
- The two switches ITSOSW1 and ITSOSW2 are found, since agents are connected to them.
- Device 1000006045161FF5 is displayed since it is connected to the switch ITSOSW1. Its device type is Unknown, as there is neither an inband nor an outband agent on this device.


Figure 14-79 Inband agents only with SAN connections

We can also display SAN Properties as shown in Figure 14-80.

Figure 14-80 Switches sensor information

We now have no zoning information available, since this is retrieved by the outband agent for the 2109 switch. This is indicated by the - sign next to Zone View in Figure 14-79.


14.7.8 Disk devices discovery


The topology scanner will launch inband queries to all attached HBAs. The Attribute scanner will then do a SCSI request to get attribute information about the remote devices. Due to LUN masking, the storage server will deny all requests if there are no LUNs defined for the querying host. Figure 14-81 shows how our SAN topology is mapped when there is an IBM MSS storage server but with no LUNs defined or accessible for the hosts in the same fabric. The storage server is shown as an Unknown device because the inband agents were not allowed to do SCSI requests to the storage servers as they had no assigned LUNs.

Figure 14-81 Discovered SAN with no LUNS defined on the storage server


Figure 14-82 shows that the host CRETE is not included in the MSS zone (we have enabled the outband agent for the switch in order to display zone information). This zone includes TUNGSTEN, which has no LUNs defined on the MSS.

Figure 14-82 MSS zoning display

We changed the MSS zone to include the CRETE server. We ran cfgmgr on CRETE so that it scans its configuration and finds the disk located on the MSS, as shown in Example 14-1.
Example 14-1 cfgmgr to discover new disks
# lspv
hdisk0   00030cbf4a3eae8a   rootvg
hdisk1   00030cbf49153cab   None
hdisk2   00030cbf170d8baa   datavg
hdisk3   00030cbf170d9439   datavg
# cfgmgr
# lspv
hdisk0   00030cbf4a3eae8a   rootvg
hdisk1   00030cbf49153cab   None
hdisk2   00030cbf170d8baa   datavg
hdisk3   00030cbf170d9439   datavg
hdisk4   00030cbf8c071018   None


Now, the agent on CRETE is able to run SCSI commands on the MSS and discovers that it is a storage server. Productivity Center for Fabric maps it correctly in Figure 14-83.

Figure 14-83 MSS zone with CRETE and recognized storage server

14.7.9 Well placed agent strategy


The placement of inband and outband agents determines the information displayed:
- For a topology map, you need to define inband and outband agents on some selected servers and switches in order to discover all of your topology. Switch zoning and LUN masking may restrict access to some devices.
- For a complete topology map, including correct device icons, you need to define inband and outband agents on all servers and switches, except on those supporting RNID.
- For information on zones, you need to define the switches as outband agents and set the user ID and password in the Advanced properties.
- For complete Device Centric and Host Centric views, you need to place inband agents on all servers you want to be displayed.

Before implementing inband and outband agents, you should have a clear idea of your environment and the information you want to collect. This will help you select the agents and may minimize the overhead caused by inband and outband agents. In our configuration, we decided to place one agent on GALLIUM, which is connected to the two fabrics and has LUNs assigned on the FAStT storage server (Figure 14-84).


Figure 14-84 Well-placed agent configuration

The agent will use inband management to:
- Query the directly attached devices.
- Query the name server of the switches to get the list of other attached devices.
- Launch inband management to other devices to get their WWN and device type (for RNID-compatible supported drivers).
- Launch SCSI requests to get LUN information from storage servers.

You can see in Figure 14-85 that the agent on GALLIUM has returned information on:
- The directly attached switches (ITSOSW1 and ITSOSW4)
- The devices attached to those switches (if they are in the same zones)
- The LUNs defined on the FAStT for this server
- Its own filesystems

Because, of the other hosts, only CLYDE runs with RNID-compatible drivers, all other devices excluding the switches and the FAStT storage server are displayed with an unknown device icon. However, we have shown how we can get a complete map of our SAN by deploying just one inband agent.


Figure 14-85 Discovery process with one well-placed agent

14.8 NetView
In this section we describe how to use the NetView program's predefined performance applications and how to create your own applications to monitor Storage Area Network performance. The NetView program helps you manage performance by providing several ways to track and collect Fibre Channel MIB objects. You can use performance information in any of the following ways:
- Monitoring the network for signs of potential problems
- Resolving network problems
- Collecting information for trend analysis
- Allocating network resources
- Planning future resource acquisition

The data collected by the NetView program is based on the values of MIB objects. The NetView program provides applications that display performance information: NetView Graph displays MIB object values in graphs, and other NetView tools display MIB object values in tables or forms.


14.8.1 Reporting overview


The NetView MIB Tool Builder enables you to create applications that collect, display, and save real-time MIB data. The MIB Data Collector provides a way to collect and analyze historical MIB data over long periods of time to give you a more complete picture of your network's performance. We will explain the SNMP concepts and standards, demonstrate the creation of Data Collections, and show the use of the MIB Tool Builder as it applies to SAN network management. Figure 14-86 lists the topics we cover in this overview section.

Topics covered: understanding SNMP and MIBs; configuring MIBs (copying and loading) for the IBM 2109 and NetView; the MIB Data Collector; the MIB Tool Builder; the NetView graphing tool.

Figure 14-86 Overview

14.8.2 SNMP and MIBs


The Simple Network Management Protocol (SNMP) has become the de facto standard for internetwork (TCP/IP) management. Because it is a simple solution, requiring little code to implement, vendors can easily build SNMP agents for their products. SNMP is extensible, allowing vendors to easily add network management functions to their existing products. SNMP also separates the management architecture from the architecture of the hardware devices, which broadens the base of multivendor support. SNMP is widely implemented and available today. An SNMP network management system contains two primary elements:
- Manager: the console through which the network administrator performs network management functions.
- Agents: the entities that interface to the actual device being managed. Switches and directors are examples of managed devices that contain managed objects.

Important: In our configuration, the SNMP manager is NetView and the SNMP agents are the IBM 2109 Fibre Channel switches.

These objects are arranged in what is known as the Management Information Base (MIB). SNMP allows managers and agents to communicate for the purpose of accessing these objects. Figure 14-87 provides an overview of the SNMP architecture.


The figure shows NetView on the Tivoli SAN Manager server acting as the SNMP manager, with SNMP agents in the 2109 Fibre Channel switch (FC switch, FA/FC, and FE MIBs) and in iSCSI initiators and targets (iSCSI MIB), all connected over Ethernet.

Figure 14-87 SNMP architecture overview

A typical SNMP manager performs the following tasks:
- Queries agents
- Gets responses from agents
- Sets variables in agents
- Acknowledges asynchronous events from agents

A typical SNMP agent performs the following tasks:
- Stores and retrieves management data as defined by the MIB
- Signals an event to the manager
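As a concrete illustration of a manager querying an agent, a generic SNMP command-line tool can issue a GET for the standard MIB-II sysDescr object (object ID .1.3.6.1.2.1.1.1.0). This is only a sketch: it assumes the net-snmp command-line tools are available on the management station and that the switch answers the default public read community, neither of which is part of the Productivity Center for Fabric installation itself.

C:\> snmpget -v1 -c public itsosw2 .1.3.6.1.2.1.1.1.0

The agent replies with its system description string; the same pattern of a community string plus an object ID is what NetView uses for its own MIB tools.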

MIBs supported by NetView


NetView supports the following types of MIBs:
- Standard MIB: All devices that support SNMP are also required to support a standard set of common managed object definitions, of which a MIB is composed. The standard MIB object definitions, MIB-I and MIB-II, enable you to monitor and control SNMP managed devices. Agents contain the intelligence required to access these MIB values.
- Enterprise-specific MIB: SNMP permits vendors to define MIB extensions, or enterprise-specific MIBs, specifically for controlling their products. These enterprise-specific MIBs must follow certain definition standards, just as other MIBs must, to ensure that the information they contain can be accessed and modified by agents. The NetView program provides the ability to load enterprise-specific MIBs from a MIB description file. By loading a MIB description file containing enterprise-specific MIBs on an SNMP management station, you can monitor and control vendor devices.

Note: We are using the Brocade 2.6 enterprise-specific MIBs for SAN network performance reporting and the IBM TotalStorage IP Storage 200i iSCSI MIB.


MIB tree structure


MIB objects are logically organized in a hierarchy called a tree structure. Each MIB object has a name derived from its location in the tree structure. This name, called an object ID, is created by tracing the path from the top of the tree structure, or the root, to the bottom, the object itself. Each place where the path branches is called a node. A node can have both a parent and children. If a node has no children, it is called a leaf node. A leaf node is the actual MIB object. Only leaf nodes return MIB values from agents. The MIB tree structure is shown in Figure 14-88. Note the leaf entry for bcsi, which has been added into the tree. For more information regarding SNMP MIB tree structures, see the following Web sites relating to SNMP RFCs:
http://silver.he.net/~rrg/snmpworld.htm
http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/snmp.htm

The root of the tree branches into ccitt (0), iso (1), and joint-iso-ccitt (2). Under iso are std (0), reg-authority (1), member-body (2), and org (3); org (3) contains dod (6), which contains internet (1). Under internet are directory (1), mgmt (2) with mib (1), experimental (3), and private (4); private contains enterprise (1), under which reserved (0), IBM (2), and the bcsi (1588) leaf entry appear.

Figure 14-88 MIB tree structure
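Reading the numbers along a branch gives the numeric form of an object ID. Tracing the internet branch and then the private enterprise arm of the tree above yields the prefixes used later in this chapter (1588 is the bcsi, that is, Brocade, enterprise number):

iso(1).org(3).dod(6).internet(1)                 = 1.3.6.1
internet(1).mgmt(2).mib(1)                       = 1.3.6.1.2.1        (MIB-II objects)
internet(1).private(4).enterprise(1).bcsi(1588)  = 1.3.6.1.4.1.1588   (Brocade enterprise-specific MIBs)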

14.9 NetView setup and configuration


In this section we provide step-by-step details for copying and loading the Fibre Channel and iSCSI MIBs into NetView. We then describe the FE MIB and SW MIB in the Brocade 2109 Fibre Channel switch and also describe the FC (Fibre Alliance) MIB in the IBM TotalStorage IP Storage 200i device. Note: The FC (Fibre Alliance) MIB is shipped by most Fibre Channel switch vendors. Brocade Communications provides limited support for the FC MIB.

14.9.1 Advanced Menu


In order to enable certain advanced features in NetView, we must first enable the Advanced Menu feature in the Options pull-down menu as shown in Figure 14-89. Shut down and restart NetView for the changes to take effect.

Figure 14-89 Enabling the advanced menu

14.9.2 Copy Brocade MIBs


Before MIBs can be loaded into NetView, they must first be copied into the \usr\ov\snmp_mibs directory. All vendor-specific MIBs are located here. We accessed the Brocade MIBs from the Web site:
http://www.brocade.com/support/mibs_rsh/index.jsp

We downloaded the MIBs below and copied them to the directory:
- v2_6trp.mib (Enterprise Specific trap)
- v2_6sw.mib (Fibre Channel Switch)
- v2_6fe.mib (Fabric Element)
- v2_6fa.mib (Fibre Alliance)

Note: If you have unloaded all the MIBs in the MIB description file (\usr\ov\snmp_mibs), you must load MIB-I or MIB-II before you can load any enterprise-specific MIBs. These are loaded by default in NetView.

In Example 14-2 we show the \usr\ov\snmp_mibs directory listing with our newly added MIBs.
Example 14-2 MIB directory
Directory of C:\usr\ov\snmp_mibs

04/13/2002  09:33a          81,253 v2_6FA.mib
08/27/2002  02:45p          79,095 v2_6FE.mib
04/13/2002  09:33a          60,139 v2_6SW.mib
04/13/2002  09:33a           5,240 v2_6TRP.mib
              4 File(s)        225,727 bytes
              0 Dir(s)   6,595,670,016 bytes free
C:\usr\ov\snmp_mibs>

14.9.3 Loading MIBs


After copying the MIBs to the appropriate directory, they must then be loaded into NetView.

IBM 2109
The IBM 2109 comes configured to use the MIB II-private MIB (TRP-MIB), FC Switch MIB (SW-MIB), Fibre Alliance MIB (FA-MIB) and Fabric Element MIB (FE-MIB). By default, the MIBs are not enabled. Here is a description of each MIB and their respective groupings.

MIB II-private MIB (v2_6trp.mib or TRP-MIB)


The object types in MIB-II are organized into the following groupings:
- The System Group
- The Interfaces Group
- The Address Translation Group
- The IP Group
- The ICMP Group
- The TCP Group
- The UDP Group
- The EGP Group
- The Transmission Group
- The SNMP Group

FC_MGMT (Fibre Alliance) MIB (v2_6fa.mib or FA-MIB)


The object types in FA-MIB are organized into the following groupings. Currently Brocade does not write any performance related data into the OIDs for this MIB.
- Connectivity
- Trap Registration
- Revision Number
- Statistic Set

Fabric Element MIB (v2_6fe.mib or FE-MIB)


The object types in FE-MIB are organized into these groupings:
- Configuration
- Operational
- Error
- Accounting
- Capability

FC Switch MIB (v2_6sw.mib or SW-MIB)


The object types in SW-MIB are organized into the following groupings:
- swSystem
- swFabric
- swActCfg
- swFCport
- swNs
- swEvent
- swFwSystem
- swEndDevice


To enable the MIBs for the IBM/Brocade switch, log into the switch via a telnet session, using an ID with administrator privilege (for example, the default admin ID). We enabled all four of the above MIBs using the snmpmibcapset command. The command can either disable or enable a specific MIB within the switch. Example 14-3 shows output from the snmpmibcapset command.
Example 14-3 snmpmibcapset command on IBM 2109
itsosw2:admin> snmpmibcapset
The SNMP Mib/Trap Capability has been set to support
FE-MIB SW-MIB FA-MIB SW-TRAP FA-TRAP SW-EXTTRAP
FA-MIB (yes, y, no, n): [yes]
SW-TRAP (yes, y, no, n): [yes]
FA-TRAP (yes, y, no, n): [yes]
SW-EXTTRAP (yes, y, no, n): [yes]
no change
itsosw2:admin>

NetView
The purpose of loading a MIB is to define the MIB objects so the NetView program's applications can use those MIB definitions. The MIB you are interested in must be loaded on the system where you want to use the MIB Data Collector or MIB Tool Builder. Some vendor-specific MIBs are already loaded into NetView. Since we want to collect performance MIB object types for the Brocade 2109 switch, we will load its MIB. On the NetView interface, select Tools → MIB Loader → SNMP V1. This will launch the MIB Loader interface as shown in Figure 14-90.

Figure 14-90 MIB loader interface


Each MIB that you load adds a subtree to the MIB tree structure. You must load MIBs in order of their interdependencies. We loaded the v2_6TRP.MIB first by clicking Load and then selecting the TRP.MIB from the \usr\ov\snmp_mibs directory; see Figure 14-91.

Figure 14-91 Select and load TRP.MIB

Click Open and the MIB will be loaded into NetView. Figure 14-92 shows the MIB loading indicator.

Figure 14-92 Loading MIB


We then loaded the v2_6SW.MIB, v2_6FE.MIB, and v2_6FA.MIB in turn using the same process. You must load the MIBs in order of their interdependencies. A MIB is dependent on another MIB if its highest node is defined in the other MIB. After the MIBs are loaded, we verify that we are able to traverse the MIB tree and select objects from the enterprise-specific MIB. We used the NetView MIB Browser to traverse the branches of the above MIBs. Click Tools → MIB Browser → SNMP v1 to launch the MIB browser and use the Down Tree button to navigate down through a MIB (see Figure 14-93).

Figure 14-93 NetView MIB Browser

14.10 Historical reporting


NetView provides a graphical reporting tool that can be used against real-time and historical data. After loading the Brocade (IBM 2109) MIBs into NetView, we demonstrate how to compile historical performance data about the IBM 2109 by using the NetView MIB Data Collector and querying the MIB referred to in 14.9.3, Loading MIBs on page 771. This tool enables us to manipulate data in several ways, including:
- Collect MIB data from the IBM 2109 at regular intervals.
- Store MIB data about the IBM 2109.
- Define thresholds for MIB data and generate events when the specified thresholds are exceeded.

Setting MIB thresholds enables us to automatically monitor important SAN performance parameters to help report, detect and isolate trends or problems.


Brocade 2109 MIBs and MIB objects


We now need to understand what MIB objects to collect. The IBM 2109 has four MIBs loaded and enabled, described in 14.9.3, Loading MIBs on page 771. We selected the MIB object identifiers in Figure 14-94 because of their importance in managing SAN network performance. SAN network administrators may want to specify other MIB object identifiers to meet their own requirements for performance reporting. You should consult your vendor-specific MIB documentation for details of the objects in the MIB. We will describe how to create a MIB Data Collector for the following object identifiers in the following MIBs, shown in Figure 14-94 and Figure 14-95.

FE-MIB Error Group:
fcFXPortLinkFailures - Number of link failures detected by this FxPort
fcFXPortSyncLosses - Number of losses of synchronization detected by the FxPort
fcFXPortSigLosses - Number of signal losses detected by the FxPort

Figure 14-94 FE-MIB Error Group

SW-MIB Port Table Group:
swFcPortTXWords - Number of FC words transmitted by the port
swFcPortRXWords - Number of FC words received by the port
swFcPortTXFrames - Number of FC frames transmitted by the port
swFcPortRXFrames - Number of FC frames received by the port
swFcPortTXC2Frames - Number of Class 2 frames received by the port
swFcPortTXC3Frames - Number of Class 3 frames received by the port

Figure 14-95 SW MIB Port Table Group

14.10.1 Creating a Data Collection


Our first Data Collection will target the MIB object swFCPortTxFrames, which counts the number of Fibre Channel frames that a port has transmitted. It is part of the group that contains information about the physical state, operational status, performance, and error statistics of each Fibre Channel port on the switch, for example F_Port, E_Port, U_Port, and FL_Port.


Figure 14-96 describes the MIB tree where this object identifier resides. The root of the tree, bcsi, stands for Brocade Communication Systems Incorporated. The next several pages describe the step-by-step process for defining a Data Collection on the swFcPortTxFrames MIB object identifier using NetView.
IBM 2109 private MIB tree: bcsi (1588) > commDev (2) > fibre channel (1) > fcSwitch (1) > sw (1) > swFCPort (6) > swFCPortTable (2) > swFCPortEntry (1) > swFCPortTxFrames (13)

Figure 14-96 Private MIB tree for bcsi
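Combining the enterprise prefix from Figure 14-88 with the branch numbers above gives the full numeric object identifier for swFCPortTxFrames. As a quick sanity check outside NetView, you could walk that OID with a generic SNMP tool; this is only a sketch, and it assumes the net-snmp tools and a public read community, which are not part of the Productivity Center for Fabric installation:

.iso.org.dod.internet.private.enterprise.bcsi.commDev.fibrechannel.fcSwitch.sw.swFCPort.swFCPortTable.swFCPortEntry.swFCPortTxFrames = .1.3.6.1.4.1.1588.2.1.1.1.6.2.1.13

C:\> snmpwalk -v1 -c public itsosw2 .1.3.6.1.4.1.1588.2.1.1.1.6.2.1.13

The walk should return one counter instance per switch port, which is exactly what the Data Collection defined below gathers on a schedule.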

1. To create the NetView Data Collection, select Tools → MIB Collect Data from the NetView main menu. The MIB Data Collector interface displays (Figure 14-97). Select New to create a collection.

Figure 14-97 MIB Data Collector GUI


2. If creating the first Data Collection, you will also see the pop-up in Figure 14-98 to start the Data Collection daemon. Click Yes to start the SNMPCollect daemon.

Figure 14-98 starting the SNMP collect daemon

3. The Data Collection Wizard GUI then displays (Figure 14-99). This is the first step in creating a new Data Collection. By default NetView has navigated down to the Internet branch of the tree (.iso.org.dod.internet). See Figure 14-88 on page 769 for the overall tree structure. Highlight private and click Down Tree to navigate to the private MIB.

Figure 14-99 internet branch of MIB tree

We have now reached the private branch of the MIB tree (.iso.org.dod.internet.private). See Figure 14-100.


Figure 14-100 Private arm of MIB tree

4. Continue to navigate down the enterprise branch of the tree by clicking Down Tree. Figure 14-101 shows the enterprise branch of the tree (.iso.org.dod.internet.private.enterprise).

Figure 14-101 Enterprise branch of MIB tree


5. We reach the bcsi branch of the tree by clicking Down Tree. Figure 14-102 shows the bcsi (Brocade) branch of the tree (.iso.org.dod.internet.private.enterprise.bcsi).

Figure 14-102 bcsi branch of MIB tree


6. We continue to navigate down the tree, using the path shown in Figure 14-96, and, as shown in Figure 14-103 on page 780, eventually reaching: .iso.org.dod.internet.private.enterprise.bcsi.commDev.fibrechannel.fcSwitch.sw.swFCPort.swFCPortTable.swFCPortEntry.swFCPortTxFrames.

Figure 14-103 swFCPortTxFrames MIB object identifier

7. We selected swFCPortTxFrames and clicked OK. We received the following pop-up (Figure 14-104) from the collection wizard. This pop-up occurs because this will be the first node added to this collection. NetView then adds the swFCTxFrames MIB Data Collection definition as a valid data collector entry.

Figure 14-104 Adding the nodes


This launches the Add Nodes to the Collection Dialog, which is the second step in creating a new Data Collection. See Figure 14-105.

Figure 14-105 Add Nodes to the Collection Dialog

8. We proceeded to customize the Collect MIB Data from fields, using the following steps:
a. We entered the switch node name for which we wanted to collect performance data (in this case, ITSOSW2.ALMADEN.IBM.COM) and clicked Add Node. You can add a node either by selecting it on the topology map or by typing its IP address or hostname in the field. You can also select multiple devices on the topology map and click Add Selected Nodes from Map, which adds all the nodes selected on the topology map to the Collect MIB Data From field. We also added several nodes to the collection by adding one device at a time in the Node field and clicking Add Node. To remove a node, click the node name in the list and click Remove.
b. We then customized the section Set the Polling Properties for these Nodes, using the following steps:
i. We changed the Poll Nodes Every field to 5 minutes. This specifies the frequency at which the nodes are polled.
Important: Before setting the polling interval, you should have a clear understanding of available and used bandwidth in your network. Shorter polling intervals generate more SNMP data on the network.
ii. We checked Store MIB Data. This stores the collected MIB data in C:/usr/ov/databases.
iii. The Check Threshold if box was checked. This defines the arm threshold. We want to collect data and signal an event each time more than 200 frames are sent on a particular port. Since we checked this box, we are required to define the trap value and rearm number fields.
iv. The option then send Trap Number was configured. We used the default setting, which is the MIB-II enterprise-specific trap.


v. We then configured the and rearm When field. We specified a rearm value of greater than or equal to 75% of the arm threshold value. This means that a trap will be generated and sent when the number of TX frames reaches 150. Note that these traps are NetView-specific traps (separate from Productivity Center for Fabric traps) and will therefore be sent to the NetView console.
9. Click OK to create the new Data Collection, shown in Figure 14-106. Select the swFCPortTxFrames Data Collection and click Collect.

Figure 14-106 Newly added Data Collection for swFCTxFrames

Note: It could take up to 2 minutes before NetView begins collecting the newly defined Data Collection. To verify that data is being captured, navigate to c:\usr\ov\databases\snmpcollect. If there are files present, then the Data Collection is functioning properly.
10. Click Close and the Stop and restart Collection dialog is displayed, as in Figure 14-107. Click Yes to recycle the snmpcollect daemon. At this point the Data Collection status (Figure 14-106 above) should change from Suspended to To be Collected.

Figure 14-107 Restart the collection daemon

We are now collecting swFCTxFrames data on ITSOSW2. Depending upon the level of granularity required for your reporting needs, you may want to collect data over shorter or longer intervals. In our lab we collected every 5 minutes, but you may want to collect data once every hour for a week or for a month. We will now use the NetView Graph tool to display the data collected, as described in 14.10.4, NetView Graph Utility on page 784.
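To confirm on disk that samples are accumulating, you can also list the collection directory mentioned in the note above from a command prompt; the file names and count will vary with the collections you have defined:

C:\> dir c:\usr\ov\databases\snmpcollect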


Note: We followed the same procedure to add the remaining metrics for Data Collection: swFCRxFrames, swFCTxErrors, and swFCRxErrors. For demonstration purposes we used a value of 50 for the arm threshold and a value of 75% for re-arm. Your values for arm/re-arm may differ from what we used.

14.10.2 Database maintenance


You can periodically purge the Data Collection entries by selecting Options → Server Setup, clicking the Files tab, and then selecting Schedule SNMP Files to Delete from the drop-down list. See Figure 14-108. Select the purge day and a specific time.

Figure 14-108 Purge Data Collection files

Important: There are documented steps on how to perform important maintenance of Tivoli NetView. Refer to the IBM Redbook Tivoli NetView and Friends, SG24-6019.


14.10.3 Troubleshooting the Data Collection daemon


If you find data is not being collected, ensure that the snmpCollect daemon is running and that there is space available in the collection file system \usr\ov\databases\snmpcollect. The daemon can stop running if there is no filesystem space. To verify that the daemon is running, type ovstatus snmpcollect from the DOS command prompt (see Example 14-4).
Example 14-4 snmpcollect daemon running
C:\>ovstatus snmpcollect
 object manager name: snmpcollect
 behavior:            OVs_WELL_BEHAVED
 state:               RUNNING
 PID:                 1536
 last message:        Initialization complete.
 exit status:         Done
C:\>

If the snmpcollect daemon is not running, you will see a state value of NOT RUNNING from the ovstatus snmpcollect command as shown in Example 14-5.
Example 14-5 snmpcollect daemon stopped
C:\>ovstatus snmpcollect
 object manager name: snmpcollect
 behavior:            OVs_WELL_BEHAVED
 state:               NOT RUNNING
 PID:                 1536
 last message:        Exited due to user request.
 exit status:         Done
C:\>

The snmpcollect daemon can be started manually. At a command prompt, we typed in ovstart snmpcollect. You will see the output shown in Example 14-6. We then issued an ovstatus snmpcollect for verification, as shown in Example 14-4.
Example 14-6 snmpcollect started
C:\>ovstart snmpcollect
Done
C:\>

Note: If no Data Collections are currently defined to the MIB Data Collector tool, the snmpcollect daemon will not run.
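If you need to stop the daemon manually (for example, before clearing out old collection files), the counterpart of ovstart is ovstop. We did not need this step in our environment, so treat the following as an assumption based on the standard NetView daemon commands rather than captured output:

C:\> ovstop snmpcollect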

14.10.4 NetView Graph Utility


We used the NetView graph utility to display the MIB object data that we collected in 14.10.1, Creating a Data Collection on page 775.


We used the NetView Graph tool to display the collected data. This provides a convenient way to display numerical performance information on collected data. We now show how to display the collected data from the previous Data Collection that was built for ITSOSW2 (swFCPortTxFrames). We start by single-clicking ITSOSW2 on the NetView topology map (Figure 14-109).

Figure 14-109 Select ITSOSW2

Select Tools → MIB Graph Data to launch the graph utility. This will report on the historical data that has been collected on ITSOSW2. After selecting this, NetView takes some time to process the data and present it in the graphical display. The graph build time depends on the amount of data collected. Figure 14-110 shows the progress indicator.

Figure 14-110 Building graph

After the graph is built, it displays the swFCTxFrames data that was collected (Figure 14-111). Note that there are multiple instances of the object ID mapped, that is, swFCPortTxFrames.1, swFCPortTxFrames.2, and so on. In this case they represent the data collected for each port in the switch.


Figure 14-111 Graphing of swFCTxFrames

For viewing purposes, we adjusted the x-axis for Time by clicking Edit → Graph Properties in the open graph window. This allowed us to zoom into shorter time periods. See Figure 14-112.

Figure 14-112 Graph properties

Any MIB object identifier that has been collected using the NetView MIB Data Collector can be graphed using the NetView Graph facility using the above process.

14.11 Real-time reporting


In this section we introduce the NetView MIB Tool Builder for real-time reporting. Figure 14-113 provides an overview.


Topics covered: a description of the MIB Tool Builder, and its use to build, modify, and delete MIB applications.

Figure 14-113 Real-time reporting Tool Builder overview

Important: Depending on the configuration, some advanced functionality may be initially disabled in NetView under Tivoli SAN Manager. This section requires this functionality to be enabled. To enable all required functionality, in NetView, click Options → Polling and check the Poll All Nodes field. This is shown in Figure 14-114.

Figure 14-114 Enabling all functions in NetView

14.11.1 MIB Tool Builder


In this section we introduce the NetView MIB Tool Builder. The Tool Builder enables you to build, modify, and delete MIB applications. MIB applications are programs used by NetView to monitor the network. The Tool Builder allows you to build MIB applications without programming. A MIB application monitors the real-time performance of specific MIB objects on a regular basis and produces output such as forms, tables, or graphs.


We will demonstrate how to build a MIB application that queries the swFCPortTxFrames MIB object identifier in the SW-MIB. This process can be used to query any SNMP-enabled device using NetView. With the switch ITSOSW2 selected, we start building the MIB application by launching the Tool Builder. Select Tools → MIB Tool Builder → New. The MIB Tool Builder interface is launched, as in Figure 14-115. Click New to create a new Tool Builder entry for collecting data on ITSOSW2.

Figure 14-115 MIB tool Builder interface

The Tool Builder Wizard Step 1 window is displayed (Figure 14-116). We entered FCPortTxFrames in the Title field and clicked in the Tool ID field to auto-populate the remaining fields. We clicked Next to continue with the wizard.

Figure 14-116 Tool Wizard Step 1


The Tool Wizard Step 2 interface displays. You can see our title of FCPortTxFrames has carried over. We are now ready to select the display type. We can choose between Forms, Tables, or Graphs. We will choose Graph and click New as shown in Figure 14-117.

Figure 14-117 Tool Wizard Step 2

The NetView MIB Browser is now displayed. We will use the MIB Browser to navigate down to the FCPortTxFrames object identifier. Use the Down Tree button to navigate through the MIB tree. Figure 14-118 shows the path through the SW-MIB error table. Click OK to add the object identifier.

SW MIB - Port Table group private... enterprise... bcsi... commDev... fibrechannel... fcSwitch... sw... swFcPort... swFcPortTable... swFCPortTxFrames
Figure 14-118 SW-MIB Port Table


The newly created MIB application is displayed in the Tool Builder Step 2 of 2 window. See Figure 14-119 for the completed MIB Application. Click OK to complete the definition.

Figure 14-119 Final step of Tool Wizard

Now, the final window for the Tool Builder is displayed. It shows the newly created MIB application in the window, Figure 14-120. Click Close to close the window. The new MIB Application has been successfully created.

Figure 14-120 New MIB application FXPortTXFrames


14.11.2 Displaying real-time data


Now that we have a MIB application, we want to collect real-time data from the switch. Select ITSOSW2 from the NetView topology map by single-clicking the ITSOSW2 symbol, then select Monitor → Other → FCPortTXFrames. Our MIB application FCPortTXFrames has been added to the menu (shown in Figure 14-121).

Figure 14-121 Monitor pull-down menu

Clicking the FCPortTXFrames option launches a graph utility, shown in Figure 14-122.

Figure 14-122 NetView Graph starting

The collection of MIB data starts immediately after selecting the swFCPortTXFrames MIB application from the Monitor → Other menu. Figure 14-123 shows the data being collected and displayed for each MIB instance on ITSOSW2.


Figure 14-123 Graph of FCPortTXFrames

The polling interval of the application can be controlled using the Poll Nodes Every field located under Edit → Graph Properties. See Figure 14-124.

Figure 14-124 Graph Properties


This launches a dialog to specify how often NetView Graph receives real-time data for graphing, shown in Figure 14-125. This determines how often the nodes are asked for data.

Figure 14-125 Polling Interval

We continued to use the Tool Builder process defined in 14.11.1, MIB Tool Builder on page 787 to build additional MIB applications for real-time performance monitoring. We used the following MIB objects:
- swFcPortTXWords
- swFcPortRXC2Frames
- swFCPortRXC3Frames
- fcFXPortLinkFailures
- fcFXPortSyncLosses
- fcFXPortSigLosses

Figure 14-126 shows the newly defined MIB applications as they appear in the Tool Builder.

Figure 14-126 Tool Builder with all MIB objects defined


Figure 14-127 shows all the above MIB objects as they appear in the NetView Monitor pull-down menu. Note we have abbreviated the names of the MIB applications listed in the Monitor Other menu for ease of use.

Figure 14-127 All MIB objects in NetView

14.11.3 SmartSets
With Productivity Center for Fabric (Tivoli SAN Manager) providing the management of the SAN, we can further extend the management functionality of the SAN from a LAN and iSCSI perspective. NetView SmartSets give us this ability. This section describes the concept of the NetView SmartSet (see Figure 14-128 below). As an overview, we provide details on how to group and manage your SAN-attached resources from a TCP/IP (SNMP) perspective. By default, the iSCSI SmartSet is created by Productivity Center for Fabric when nvsniffer is enabled. SmartSets for iSCSI initiators and targets can be created using the process described here.

Topics covered: what a SmartSet is, why SmartSets are useful, defining a SmartSet, and SmartSets and Data Collections.

Figure 14-128 SmartSet Overview


In NetView, a SmartSet is used to monitor a set of objects (devices). NetView allows for user-defined SmartSets. We use this to define and manage our SAN devices as one item. SmartSets can be used to group together systems that support a specific operating system, device type, or business function. The symbol status displayed for nodes appearing in user-defined SmartSets is based solely on the IP status, not the Fibre Channel status. You can customize the attributes available for creating SmartSets. Refer to the manual Tivoli NetView for Windows User's Guide, SC31-8888, for more information. With Productivity Center for Fabric using the TCP/IP and Fibre Channel protocols to manage the SAN, we will demonstrate how to complement this by using SNMP to manage the same components of the SAN using SmartSets.

Important: Depending on the configuration, some advanced functionality required for SmartSets may be disabled in NetView in Productivity Center for Fabric. This section requires this functionality to be enabled. To enable all required functionality, in NetView, click Options → Polling and check the Poll All Nodes field. This is shown in Figure 14-114 on page 787.

We will demonstrate how to group all the IBM 2109 Fibre Channel switches (ITSOSW1, ITSOSW2, and ITSOSW3) in our configuration into one SmartSet called IBM2109.
1. On the NetView topology display, select the switches ITSOSW1, ITSOSW2, and ITSOSW3. See Figure 14-129 for the selected switches. Each symbol can be selected by holding down the Shift key and clicking once on each symbol.

Figure 14-129 Selected Fibre Channel switches


2. Select Submap → New SmartSet from the main menu. The Find window is displayed, as in Figure 14-130.

Figure 14-130 Defining a SmartSet


3. Click the Advanced tab; this will allow the selected switches on the topology map to be added to the SmartSet. See Figure 14-131.

Figure 14-131 Advanced window


4. Click Add Selected Objects to add ITSOSW1, ITSOSW2, and ITSOSW3 to the Combined Functions field (Figure 14-132).

Figure 14-132 Advanced window with 2109s added

5. Click Create SmartSet. This launches the New SmartSet dialog. We entered the name of our SmartSet as IBM2109, and added a description. See Figure 14-133. Note that no spaces are allowed in the SmartSet Name field.

Figure 14-133 New SmartSet


6. At this point, the SmartSet definition is complete. Click the SmartSets tab to verify that the IBM2109 SmartSet was created as shown in Figure 14-134.

Figure 14-134 New SmartSet IBM 2109


Verifying SmartSet creation


To verify that the SmartSet was created successfully, we follow these steps:
1. We go to the NetView root map and click the SmartSets icon.
2. We can see the IBM2109 SmartSet that we created (Figure 14-135).

Figure 14-135 SmartSet topology map

3. Clicking on the IBM2109 SmartSet, we find its members ITSOSW1, ITSOSW2, and ITSOSW3, as shown in Figure 14-136.

Note: Symbols on the topology map have links back to their respective objects, since the same symbol can reside in more than one location in NetView. In the case of the switch discussed here, the same symbol in the SmartSet also resides on the IP Internet map. Propagation of status occurs to all symbols regardless of their location on the topology. For example, if there is a problem with the switch, causing it to change to a critical (RED) status, this will be reflected in both the SmartSet and on the IP Internet map.


Figure 14-136 ITSOSW1, ITSOSW2 and ITSOSW3 in IBM2109 SmartSet

SmartSets can be used to group your devices using a logical taxonomy for the enterprise. For our setup, we categorized our SAN resources by fabric and operating system. This allows us to easily manage those devices at a high level. Alternatively, we could have grouped the devices by SAN fabric, or by application or business function. We created the following SmartSets, shown in Figure 14-137:
- IBM2109: contains all IBM 2109 Fibre Channel switches
- SANfabricA_AIX: contains all AIX SAN-attached hosts
- SANfabricA_HPUX: contains all HP-UX SAN-attached hosts
- SANfabricA_Solaris: contains all Solaris SAN-attached hosts
- SANfabricA_Win2k: contains all Windows 2000 SAN-attached hosts
- TivoliSANManager: contains all the Tivoli SAN Manager hosts


Now we can manage our SAN attached devices from both SAN and LAN perspectives from a single console.

Figure 14-137 Additional SmartSets

14.11.4 SmartSets and Data Collections


Since SmartSets allow us to group objects, we now have additional flexibility when creating Data Collections. See 14.10.1, Creating a Data Collection on page 775 for more information on Data Collections. We can now apply a Data Collection against a SmartSet. The IBM2109 SmartSet already defined contains the switches ITSOSW1, ITSOSW2, and ITSOSW3, so we can now collect the swFCPortTxFrames MIB object from all three switches using one definition.
1. We follow the same process defined in 14.10.1, Creating a Data Collection on page 775.
2. At the Collection Wizard Step 2 of 2 window, we selected the IBM2109 SmartSet from the Add SmartSet pull-down menu instead of adding a new node. See Figure 14-138. We then clicked OK and closed the MIB Data Collector window.


Figure 14-138 IBM2109 SmartSet defined to Data Collection

3. After allowing the Data Collection to collect data, we then graph the data using Tools → MIB Graph Data → All. The NetView Graph dialog (Figure 14-139) is displayed while the information is collected; this can take some time, depending on the amount of data returned.

Figure 14-139 NetView Graph starting

4. A window displays, presenting all MIB instances of the swFCPortTxFrames MIB object (Figure 14-140) for all three switches in the SmartSet. Since the total number of entries is greater than 15, we get a message on the menu bar indicating that Maximum Graph Lines Exceeded. The NetView Graph utility can only graph 15 lines at a time.


Figure 14-140 IBM2109 SmartSet data collected

5. Next, we need to select the desired instance of the MIB object for each switch that we want to graph. We then clicked Add to add the selected MIB labels to the Lines To Graph panel, then we clicked OK. For this example, we chose the first 5 instances for each of the three switches, shown in Figure 14-141. Click OK to start the graph.

Figure 14-141 Selected MIB instances


The NetView Graph for the fifteen MIB instances we selected is shown in Figure 14-142.

Figure 14-142 Graph showing selected instances

14.11.5 Seed file


When NetView is started for the first time, the default IP management region is the system on which the NetView program is operating, plus any IP networks to which it is attached. The discovery process generates the IP Internet topology map by working outward from the management system. We re-defined our management region by using a seed file. The seed file contains a listing of the IP addresses for our SAN management domain: all Fibre Channel devices that have IP connectivity. Only nodes listed in this file will be used by the netmon daemon for rediscovery. This forces discovery to be strictly limited to the contents of the seed file. Using a seed file forces the discovery process to generate the topology map beginning from nodes other than the management system. We wanted our management domain to be limited to our IP-connected SAN devices, thus the use of the seed file.


NetView uses the default template located in \usr\ov\conf\netmon.seed. We modified the netmon seed file to include the specific IP addresses of all the LAN-attached SAN devices. For more details on the seed file, refer to the comments section of \usr\ov\conf\netmon.seed. Example 14-7 shows a partial listing of the seed file.

Note: iSCSI discovery requires that IP discovery in the Tivoli NetView that is shipped with Productivity Center for Fabric be enabled. Be aware that when you turn on IP discovery, there can be a lot of network activity depending on how many devices are in your IP network. For this reason we advise the use of the seed file.
Example 14-7 Modified seed file for limited discovery
#
# All seed file errors are logged in the \usr\OV\log\nv.log file. Any
# entry that is invalid will be ignored.
#
# If the <SystemRoot>\system32\drivers\etc\networks file has entries
# for subnets that are contained in your network, the network names
# as specified in the file will appear on the map instead of the network
# numbers.
#
############################################################################
9.1.38.188
9.1.38.184
9.1.38.186
9.1.38.187
9.1.38.189
9.1.38.153
9.1.38.154
9.1.38.191
9.1.38.155
9.1.38.152
9.1.38.157
9.1.38.158
9.1.38.159
9.1.38.201
!*

Once the seed file is updated and saved, we then need to clear out the NetView databases where the current topology information is stored. Start Server Setup by clicking Options → Server Setup, as in Figure 14-143.

Important: Performing Clear Databases on NetView will delete all previously saved NetView object and topology information only. This does not affect the Tivoli SAN Manager and WebSphere Application Server databases.


The Server Setup options window (Figure 14-143) displays.

Figure 14-143 Server Setup

Now we want to configure NetView to use the updated seed file. Click the Discovery tab in the Server Setup options window. Under Discovery, check Use Seed File, shown in Figure 14-144, and click OK.


Figure 14-144 Server Setup options window

Click the Databases tab. Click the pull-down, select Clear Databases, shown in Figure 14-145, and click OK. This starts the process to clear the databases.

Figure 14-145 Clear Database

NetView prompts one last time to verify that you want to clear the databases. Click Yes. Figure 14-146 shows the warning message.

Figure 14-146 Clear databases warning


Clearing the databases typically takes a minute; however, this will vary depending on the size of the NetView databases being cleared. The NetView console will automatically shut down and restart when complete. See Figure 14-147.

Figure 14-147 NetView stopping clearing databases

When NetView restarts, it will discover and display the nodes that we defined in our netmon.seed file, shown in Figure 14-148.

Figure 14-148 With seed file


To demonstrate the difference in the discovered IP topologies, Figure 14-149 shows the NetView display without using a seed file for discovery. In this case, NetView discovers itself and all other nodes on the subnet.

Figure 14-149 Without seed file

This completes our demonstration of how existing NetView capabilities can be leveraged to further extend the capabilities of Productivity Center for Fabric.

14.12 Productivity Center for Fabric and iSCSI


IBM is a leader in the development and delivery of iSCSI technology and storage products. IBM, as well as other network and storage vendors, is working closely with the Internet Engineering Task Force (IETF) in developing iSCSI standards. This section provides an overview of the Small Computer Systems Interface over IP (iSCSI) standard and how Productivity Center for Fabric discovers and monitors iSCSI devices. We cover these topics:
- What is iSCSI?
- How does iSCSI work?
- Productivity Center for Fabric and iSCSI
- Functional description
- iSCSI discovery


14.13 What is iSCSI?


Internet Small Computer Systems Interface (iSCSI) is a proposed industry standard that allows SCSI block I/O protocols (commands, sequences and attributes) to be sent over a network using the TCP/IP protocol. The iSCSI proposal was made to the Internet Engineering Task Force (IETF) standards body jointly by IBM and Cisco. For details, refer to:
http://www.ietf.org/

14.14 How does iSCSI work?


The iSCSI protocol is used on servers and workstations (called initiators) and on storage devices (called targets). The client initiator issues commands to the storage server (target), and the storage server (target) then fulfills the request. Initiators and targets are identified by their worldwide unique iSCSI names. Figure 14-150 shows the basic components of iSCSI.

The figure shows iSCSI initiators in an application server and a client desktop using the SCSI (block I/O) protocol over an IP network to reach an iSCSI target in the storage device.

Figure 14-150 iSCSI components

iSCSI uses standard Ethernet switches and routers to move the data from server to storage. It also allows the IP and Ethernet infrastructure to be used for expanding access to SAN storage and extending SAN connectivity across any distance.


Figure 14-151 shows a comparison of Fibre Channel to iSCSI.

The figure contrasts an FC SAN, where a database application issues block I/O over a Fibre Channel network using SCSI protocols to pooled storage, with iSCSI, where the same block I/O travels over an IP network using iSCSI protocols to pooled storage.

Figure 14-151 Fibre Channel versus iSCSI

Below we list some common iSCSI terms:
- iSCSI Adapter - iSCSI adapters combine the functions of a Network Interface Card (NIC) with the functions of a storage Host Bus Adapter (HBA). These adapters take the data in block form, perform the TCP/IP processing on the adapter card with dedicated processing engines, and then send the IP packets across an IP network. The implementation of these functions enables users to create an IP-based SAN without lowering the performance of the server.
- iSCSI Drivers - before the introduction of iSCSI adapters, some vendors released software versions of iSCSI adapters. These software-enabled adapters accept block-level data from applications, but still require CPU cycles for the TCP/IP processing. The advantage of such adapters is that they can work on existing Ethernet NICs. The main disadvantage is that they require heavy CPU utilization for TCP/IP processing.
- iSCSI Name - the name of the iSCSI initiator or iSCSI target.
- iSCSI Node - represents either an iSCSI initiator or an iSCSI target. The iSCSI node is identified by its iSCSI name.

14.15 Productivity Center for Fabric and iSCSI


You can discover and manage devices that use the iSCSI storage networking protocol through Productivity Center for Fabric using IBM Tivoli NetView. Productivity Center for Fabric also provides the Internet Storage Name Service (iSNS) MIB; iSNS is a storage management protocol from the IETF for managing iSCSI devices. iSNS provides registration for storage devices and hosts with an iSNS server. Subsequently, the hosts can either query the iSNS server or receive asynchronous updates from the iSNS server on the status of the storage devices. The Productivity Center for Fabric iSCSI support can be used either independently or in conjunction with the iSNS management framework.


14.15.1 Functional description


The following is a functional description of iSCSI support in NetView as used by Productivity Center for Fabric: All iSCSI devices discovered in the IP network are placed in a unique iSCSI SmartSet. Additionally, the user has the option to create separate SmartSets for iSCSI initiator devices and target devices. NetView's nvsniffer utility performs the discovery of iSCSI devices. The nvsniffer program uses a configuration file which:
- Governs which services to discover.
- Determines which service SmartSets to create.
- Determines which ports to test for a given service.
- Determines whether to use custom tests for discovering and checking the status of a service.

The iSCSI MIBs and iSNS MIBs are pre-installed into the c:\usr\ov\snmp_mibs directory. This is performed so that the NetView MIB browser can be used to query the iSCSI MIBs.
Restriction: Note that IBM Tivoli NetView does not currently support MIB Tool Builder and Data Collections against SNMP V2.

The iSCSI MIB trap definition files are used by Tivoli NetView for event processing.

14.15.2 iSCSI discovery


The iSCSI discovery is performed separately from the SAN device discovery done by Productivity Center for Fabric. iSCSI discovery is done through the nvsniffer program and can be scheduled to refresh the iSCSI SmartSets at specified intervals. Before you can perform iSCSI discovery, you must first enable SNMP on the iSCSI device and enable IP Internet discovery. Be aware that when you turn on IP network discovery, there can be a lot of activity, depending on how many devices you have in your IP network. Before enabling IP discovery, update the netmon seed file, c:\usr\ov\snmpconf\netmon.seed. See 14.11.5, Seed file on page 805 for defining a seed file. See 14.11, Real-time reporting on page 786 for enabling the IP Internet functionality. In addition to the above references, Productivity Center for Fabric also requires the following for iSCSI discovery:
- The device must have iSCSI MIB support.
- The device must be configured so that the iSCSI MIB support is active.
- The iSCSI device must be discovered first as an IP device by NetView before nvsniffer can discover it as an iSCSI device.

iSCSI MIBs
Before managing the iSCSI device, the MIBs must be loaded. By default, the MIBs are not loaded into Tivoli NetView at installation time. You have to load these MIBs using the NetView MIB loading function. The purpose of loading a MIB is to define the MIB objects so that NetView's applications can use those MIB definitions. You load the iSCSI MIB files one at a time into Tivoli NetView.


The iSCSI MIBs should be loaded in the following order:


- iSCSI MIB - the iSCSI MIB is layered between the SCSI MIB and the TCP MIB, and makes use of the iSCSI Auth MIB.
- iSCSI Auth MIB - each iSCSI target node can have a list of authorized initiators. Each of the entries in this list points to an identity within the Auth MIB that is allowed to access the target. iSCSI initiator nodes can also have a list of authorized targets. Each of the entries in this list points to an identity within the Auth MIB to which the initiator should attempt to establish sessions. The Auth MIB includes information used to identify initiators and targets by their iSCSI name, IP address, and/or credentials.
- FC_MGMT MIB - this MIB is also known as the Fibre Alliance MIB. The goal of the industry consortium is to develop and implement standard methods for managing heterogeneous Fibre Channel-based networks of storage systems, connectivity equipment, and computer servers. The FC_MGMT MIB is organized in the following groups: Connectivity, Trap Registration, Revision Number, Statistic Set, and Service Set.
- iSNS MIB - the Internet Storage Name Service (iSNS) defines a mechanism for IP-based storage devices to register and query for other storage devices in the network. The iSNS MIB is designed to allow SNMP to be used to monitor and manage iSCSI devices.

See 14.9.3, Loading MIBs on page 771 for detailed instructions on loading MIBs.
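After the MIBs are loaded, you can browse them from the NetView MIB browser. As a quick cross-check that a device is actually answering SNMP queries, you can also run a generic walk from a command line. The example below uses the standard Net-SNMP snmpwalk tool (not a NetView component); the community string and address are placeholders for your environment:

snmpwalk -v 2c -c public 192.0.2.50 system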

14.16 ED/FI - SAN Error Predictor


In this section we discuss the new ED/FI feature of TotalStorage Productivity Center for Fabric (also known as SAN Error Predictor). We discuss how ED/FI operates, how it needs to be set up, and how its features can be used to manage SANs efficiently.

14.16.1 Overview
SANs are becoming more and more critical in the corporate infrastructure; therefore, they should be made as highly available as possible, just like other IT components. SANs are complex network environments, with potentially hundreds or even thousands of individual devices. Hardware outages cause disruptions to the business environment, leading to lost revenue and reduced customer satisfaction. Minimizing outages due to hardware failures is therefore a goal of SAN management, and one way to do this is by predicting and detecting likely errors before they cause outages. Typically, servers have multiple redundant paths to devices. Determining a root error cause in such an environment is usually problematic. Some of the most important factors in complex root cause analysis are:
- Error data can be inconsistent and sparse.
- Error counter implementations are complex.
- Error indications can be dispersed from the source; they can propagate across the SAN.
Error Detection and Fault Isolation (ED/FI - SAN Error Predictor) is implemented in Productivity Center for Fabric Version 1.2 to provide a way to predict errors on the optical links that are used to connect SAN components (including HBA to switch, switch to switch, and switch to storage connections).


ED/FI functions are listed in Figure 14-152.

ED/FI - SAN Error Predictor functions:
- Proactive error prediction
- Predictive Failure Analysis (PFA) based on Fibre Channel link counter data
- Predict and isolate a potential link failure, giving the opportunity to reduce unscheduled downtime and reduce scheduled downtime for isolation and resolution
- Uses statistical analysis to determine the cause of the problem (possible use of external rules)

Figure 14-152 ED/FI - SAN Error Predictor overview

By using Predictive Failure Analysis (PFA), downtime of SAN components can be significantly decreased, as it is possible to remove problematic components before failure. This can significantly reduce the operational cost of SANs. The ED/FI function collects data from Productivity Center for Fabric agents, outband and/or inband as available. The polling interval is every 15 minutes. The data is stored in the ITSANMDB database. This data is then analyzed using various statistical methods, and from this future errors are predicted. The predicted errors are presented in the NetView interface by adorning the appropriate icons, as shown in Figure 14-153. Adornment means that an exclamation point is superimposed on the icon representing the device where the error is predicted. A TEC event and an SNMP trap are also generated.

ED/FI failure indication:
- Sending a TEC event
- Sending an SNMP trap
- Adornment through the NetView GUI

Figure 14-153 Failure indication

Figure 14-154 shows an example of a failing device, in this case, the host SENEGAL. Although in this case, the icon is actually red, indicating a SAN Manager detected failure, note that typically, adorned icons will still show green, indicating they are available. This is because the ED/FI function is designed to flag potential problems before they have escalated to an actual failure. This allows you to replace hardware preemptively at a convenient time, rather than incurring an unplanned outage due to failure.


Figure 14-154 Adornment example

14.16.2 Error processing


ED/FI error processing is shown in Figure 14-155.

SAN Error Predictor error processing:
- Agents gather error counters and send the data to the Manager.
- The Manager looks for counters that have changed values.
- PFA takes the counters that have changed, along with previous data, and evaluates whether the counter changes meet the criteria to create an "indication".
- Fault isolation looks for "indications" and then runs the data through "rule sets". If indications match the rule criteria, a notification is created that results in a user-viewable "adornment" on the NetView GUI.
- The user can use this to perform specific corrective actions.

Figure 14-155 Error processing cycle

Data is collected from the following counters:
- FA MIB counters
- FE MIB counters


- Brocade Switch MIB counters
- HBA APIs (Request Port Status, Read Link Status) - inband only
Note: Not all the switch vendors collect data on all the defined counters in the MIB schema. This depends on the particular implementation and adherence to the various standards. At the time of writing, the fullest ED/FI functionality is available on Brocade switches. Fewer counters are available for monitoring on other switch vendors.

Predictive Failure Analysis is built on a stochastic model called the Dispersion Frame Technique (DFT), which was developed and tested at Carnegie Mellon University. The method eliminates complexity through simple and effective pattern recognition of error occurrences. DFT involves a set of rules for predicting failures, based on the proximity of error occurrences to each other in time. ED/FI uses a set of these rules to determine when a set of counters exceeding a threshold indicates an error. While the specific rules are internal to ED/FI, they are used to detect the difference between normal and abnormal behavior by using an increase in error rate and a decrease in time intervals between error occurrences. An example rule might be to trigger if a counter exceeds a threshold three times within a defined interval. When the PFA process sees that counters have changed, it evaluates the counters along with previous data. If the counter changes meet the criteria of the DFT rules, an indication is created. An Indication Record is created for each port/counter/rule group. These indications are then passed on to the Fault Isolation (FI) process. The FI process analyzes the indications by further filtering the errors. FI also uses topology and attribute information provided by Productivity Center for Fabric and with this data isolates faults to the specific Fibre Channel (FC) link. If all requirements are met, FI creates a Fault Record. After a defined number of faults occurs (as defined in the FI rules), a Notification Record is created. The Notification Record is presented in NetView by adorning the corresponding device, as shown in Figure 14-154 on page 816. The Notification Record is permanent and can only be removed with explicit user intervention (via the GUI). When a user clears the adornment, a Cleared Record is created in the ITSANMDB database and the device port is set to a cleared state. If another fault occurs on the same port, it may be immediately upgraded to a Notification. The whole FI flow is shown in Figure 14-156.
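To make the windowed threshold idea concrete, the following is a minimal sketch of an example rule of the kind described above ("trigger if a counter exceeds a threshold three times within a defined interval"). It is an illustration only, not the actual ED/FI rule engine; the window length and trigger count are assumed values.

from collections import deque

class ExampleLinkErrorRule:
    """Flag a port when its error counter increases too often within a time window."""

    def __init__(self, window_seconds=3600, max_increments=3):
        self.window = window_seconds          # assumed observation window
        self.max_increments = max_increments  # assumed trigger count
        self.last_value = None
        self.increment_times = deque()        # timestamps of observed increases

    def observe(self, timestamp, counter_value):
        """Feed one polled sample (for example, every 15 minutes); return True to raise an indication."""
        if self.last_value is not None and counter_value > self.last_value:
            self.increment_times.append(timestamp)
        self.last_value = counter_value
        # drop increments that fall outside the observation window
        while self.increment_times and timestamp - self.increment_times[0] > self.window:
            self.increment_times.popleft()
        return len(self.increment_times) >= self.max_increments

# Example: three error bursts within an hour raise an indication
rule = ExampleLinkErrorRule()
samples = [(0, 10), (900, 11), (1800, 12), (2700, 13)]
print(any(rule.observe(t, v) for t, v in samples))   # True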

Fault Isolation indication flow:
- After successful isolation, FI upgrades a PFA indication to a fault.
- After a number of faults, defined by the FI rules, a fault is upgraded to a notification.
- A notification adorns a device.
- Notifications can be cleared by users.
- A cleared notification can be upgraded again by FI if isolation requires it.

The fault state diagram moves from Indication to Fault to Notification through FI upgrades; user input moves a Notification to Cleared, and FI can upgrade a Cleared entry back to Notification.

Figure 14-156 Fault Isolation indication flow
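A minimal sketch of the state transitions described above follows. The record types and triggers are taken from the text; the code itself is only an illustration, not part of the product.

ALLOWED = {
    ("indication", "FI upgrade"): "fault",
    ("fault", "FI upgrade"): "notification",
    ("notification", "user input"): "cleared",
    ("cleared", "FI upgrade"): "notification",
}

def next_state(state, trigger):
    """Return the next fault-isolation state, or the current one if the trigger does not apply."""
    return ALLOWED.get((state, trigger), state)

# A port that keeps faulting ends up adorned again even after being cleared.
state = "indication"
for trigger in ("FI upgrade", "FI upgrade", "user input", "FI upgrade"):
    state = next_state(state, trigger)
print(state)   # notification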


Fault Isolation adorns the transmitter of the link (rather than the receiver), because the faulty component in the group of link, transmitter, cable, and receiver is most likely to be the transmitter.
Note: (1) Switches cannot be adorned if inband agents are not active, except in the case of cascaded switches using outband management only. (2) Endpoint devices cannot be adorned if outband agents are not active.

Important: Error counters can also change for non-error conditions, including:
- Rebooting the system
- Configuration changes
- Clearing counters manually
Because the Fault Isolation mechanism counts these as error conditions, we recommend that Error Detection/Fault Isolation is disabled in such cases to avoid spurious adornments.

14.16.3 Configuration for ED/FI - SAN Error Predictor


ED/FI is an integrated function of Productivity Center for Fabric, and can be accessed from the standard menus. Select SAN ED/FI Configuration as shown in Figure 14-157.

Figure 14-157 ED/FI Menu Selection


You will see a window similar to Figure 14-158.

Figure 14-158 ED/FI Configuration

In this window you can enable or disable ED/FI using the Enable Error Detection and Fault Isolation radio button.
Tip: As stated in the window, it is recommended that you disable the error prediction in case of service actions so that false notifications can be avoided.


In the Rule Set Selection you can see the available rules and which rules are active. The active rules are used in error processing as described in 14.16.2, Error processing on page 816. To see the notes for a specific rule, select the rule and click View; you will see a window similar to Figure 14-159.

Figure 14-159 Rule description

14.16.4 Using ED/FI


After you enable ED/FI, it starts collecting the data required for error prediction. When the collected counters match the requirements of the FI rules, SAN Manager adorns the corresponding icons, similar to Figure 14-160.


Figure 14-160 Adornments on the topology map

In our example we simulated errors by disabling and enabling a port on the switch ITSOSW1 over a period of time. In addition to the graphical display of the adornments, they are also listed under SAN ED/FI Configuration in the Properties tab, as shown in Figure 14-161.

Figure 14-161 Devices currently in Notification State


This window displays the list of potentially faulty SAN devices, using the following columns:
- Clear - check this box to clear the adornment on a particular device.
- Time - the time when the error was identified by the FI rules.
- Faulted Device - the device which was predicted by FI to be failing. The rule here is that the device with the transmitter is marked as failed, as explained in 14.16.2, Error processing on page 816. If the device has a Productivity Center for Fabric agent installed and running, it appears with its Global Unique Identifier (GUID), similar to the first entry in Figure 14-161 on page 821. If there is no agent running or the device is a switch, the device is identified by its node WWN. In our example, the fifth entry in Figure 14-161 on page 821 is a server without an agent and the sixth is a switch.
- Faulted Port - if the device has several ports, the WWN of the actual faulting port is displayed here.
- Indicated Device - the device which actually detected the errors. It is identified in the same way as the faulted device. Figure 14-162 (which is simply the same as Figure 14-161 on page 821, scrolled to the right) shows an example.
- Indicated Port - if the device has several ports, the WWN of the actual port on which errors were detected is displayed.
- PD Reference - the reference to the Problem Determination guides, which can be used by IBM Support to diagnose the problem (if it is an IBM-supported piece of hardware).

Figure 14-162 Indicated device

14.16.5 Searching for the faulted device on the topology map


As we have seen, both the GUID and the node and port WWNs are used to identify the notification records. In a medium to large SAN, the topology map is complex, and an adorned icon may not be readily located given its GUID or WWN. To make identification easier, you can use the NetView search function to find adorned devices. First you need to identify the faulted device from the devices currently in notification state, as shown in Figure 14-161 on page 821. As the notifications are persistent (until cleared), you should check the timestamp of the notification before searching to ensure it is still of interest. The device with that port can be found by selecting Edit Find from the NetView menus, as shown in Figure 14-163.


To find the object with the corresponding GUID or port WWN, enter it in the Object Name field. NetView uses both GUIDs and port WWNs for the Object Names. As a GUID or port is usually uniquely identified by less than the whole numeric string, you can use wildcards rather than the entire string, as shown in Figure 14-163.

Figure 14-163 NetView Search dialog

In our example we used the last four numbers of the GUID displayed in the first entry shown in Figure 14-161 on page 821. The search string is actually the least significant digits of the GUID, which is truncated in that figure. The full string for the GUID, including the searched string is displayed in Figure 14-166 on page 825. After entering the search string, click OK. The search results are displayed in Figure 14-164.

Figure 14-164 Found objects


If you double-click on a returned object, NetView will open the topology map, highlighting the device, as shown in Figure 14-165. We can see the notification is for the host SENEGAL, which is adorned.

Figure 14-165 Found device on topology map

Now you can clearly see where the faulted device is located in the SAN, and you can start planning the necessary action to diagnose or repair the faulting device. ED/FI isolates faults only to the link level. Therefore, either side of the link or the cable itself might be the faulty component. Before replacing hardware, you should consult your service contracts and product problem determination guides for direction. Cleaning, cable seating, and diagnostic execution are some of the steps that might be recommended and that lead to a definitive decision on parts repair or replacement. IBM Service can use ED/FI information in conjunction with problem determination guides to advise whether, and which, part replacements are necessary. If you can identify a component, you should diagnose the problem and repair or replace the component as soon as possible, before a permanent failure occurs. If you cannot identify a component, at a minimum you should monitor the link for further errors. In environments where high systems availability is a requirement or service level agreements are in place, you can contact service representatives about replacing the Fibre Channel component.


14.16.6 Removing notifications


After the fault has been fixed, the notification should be removed so that the notification lists and topology maps stay current. To remove a notification, access the current device notification list using SAN ED/FI Configuration. Select the Properties tab, which opens the current list of device notifications as shown in Figure 14-161 on page 821. To remove a notification, check the Clear box and click Apply, as shown in Figure 14-166.

Figure 14-166 Clear the notification

In Figure 14-167 you can see that the selected entry is now removed.

Figure 14-167 After clearing the notification


The removal is also reflected in the topology map as shown in Figure 14-168. The host SENEGAL is no longer adorned.

Figure 14-168 Topology change after notification clearance


Chapter 15. Using TotalStorage Productivity Center for Replication


This chapter provides information to help you configure and use the TotalStorage Productivity Center for Replication component of TotalStorage Productivity Center. In this chapter we describe:
- Concepts and terminology of replication
- Step-by-step instructions for creating paths, groups, pools, and finally, replication sessions
- Managing sessions, using the GUI and the CLI


15.1 TotalStorage Productivity Center for Replication overview


Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. TotalStorage Productivity Center for Replication administers and configures the copy services functions and monitors the replication actions. TotalStorage Productivity Center manages two types of copy services: the Continuous Copy (also known as Peer-to-Peer Remote Copy, PPRC, or Remote Copy), and the Point-in-Time Copy (also known as FlashCopy).

TotalStorage Productivity Center for Replication includes support for replica sessions, which ensures that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary copy services operations. Multiple pairs are handled as a consistent unit, and Freeze-and-Go functions can be performed when errors in mirroring occur. TotalStorage Productivity Center for Replication is designed to control and monitor the copy services operations in large-scale client environments.

TotalStorage Productivity Center for Replication is implemented by applying predefined policies to Groups and Pools, which are groupings of LUNs. It provides the ability to copy a Group to a Pool, in which case it creates valid mappings for source and target volumes and optionally presents them to the user for verification that the mapping is acceptable. In this case, it manages Pool membership by removing target volumes from the pool when they are used, and by returning them to the pool only if the target is specified as being discarded when it is deleted.

15.1.1 Supported Copy Services


TotalStorage Productivity Center for Replication supports FlashCopy and Synchronous PPRC for ESS. Future releases will add other copy services functions and support additional storage devices. Check the current TotalStorage Productivity Center for Replication documentation for the required ESS LIC, ESS CLI, and CIM Agent levels. The supported products list can be found at:
http://www-1.ibm.com/servers/storage/support/software/tpcrep/installing.html

or:
http://www.ibm.com/storage/support

Then select Storage software, in Product family select TPC for Replication, and select the Install and use tab. The ESS Copy Services supported with TotalStorage Productivity Center for Replication V2.3 include:

ESS PPRC Synchronous remote copy:
- Add / delete paths
- Add / delete volume pairs
- Full background copy
- Freeze / Run
- Suspend / resume
- Query status of the session, paths, and pairs

ESS FlashCopy:
- Full background copy


PPRC
PPRC is a function of a storage server that constantly updates a secondary copy of a volume to match changes made to a primary volume. The primary and the secondary volumes can be on the same storage server or on separate storage servers. PPRC differs from FlashCopy in two essential ways. First, as the name implies, the primary and secondary volumes can be located at some distance from each other. Second, and more significantly, PPRC is not aimed at capturing the state of the source at some point in time, but rather aims at reflecting all changes made to the source data at the target. PPRC is application independent. Because the copying function occurs at the disk subsystem level, the host's operating system or application has no knowledge of its existence. In contrast, host-based mirroring is controlled by software at the operating system or file system level: the storage subsystem does not know about it. Table 15-1 summarizes the characteristics of both approaches.
Table 15-1 Comparison of PPRC and host-based mirroring

Peer-to-Peer Remote Copy:
- Operation is performed by the storage subsystem, transparent to the host operating system.
- The functionality is the same for all operating systems and applications.
- Read and write operations are sent to the primary volume only.
- There is a unidirectional relationship from the primary to the secondary volume.
- Failure recovery is different for the primary and secondary volume.

Host-based mirroring:
- Operation is performed by host software or a host bus adapter, transparent to the storage subsystem.
- The functionality depends on the capabilities of the operating system or host bus adapter.
- Write operations are sent to both volumes. Read operations are sent to any volume, depending on the read policy.
- The relationship between the volumes is symmetric.
- Failure recovery is identical for both volumes.

FlashCopy
FlashCopy makes a single point-in-time copy of a LUN. This is also known as a time-zero copy. The target copy is available once the FlashCopy command has been processed. FlashCopy provides an instant, or point-in-time, copy of an ESS logical volume. Point-in-time copy functions give you an instantaneous copy, or view, of what the original data looked like at a specific point in time. The point-in-time copy created by FlashCopy is typically used where you need a copy of production data to be produced with minimal application downtime. It can be used for backup, testing of new applications, or for copying a database for data mining purposes. The copy looks exactly like the original source volume and is instantly available. TotalStorage Productivity Center for Replication provides a user interface for creating, maintaining, and using volume groups and for scheduling copy tasks. The user interface populates lists of volumes using the Device Manager interface. TotalStorage Productivity Center for Replication uses different names for the copy services than ESS:
- Point-in-Time Copy is equivalent to FlashCopy on ESS
- Continuous Synchronous Remote Copy is equivalent to Peer-to-Peer Remote Copy on ESS
Refer to Figure 15-1 for an illustration of these concepts.


Figure 15-1 TotalStorage Productivity Center for Replication - manager tasks

Figure 15-1 illustrates the tasks you can perform from the Manage Replication group, which represents TotalStorage Productivity Center for Replication:
- Create and manage groups, which are collections of volumes grouped together so that they can be managed concurrently.
- Create and manage paths between storage subsystems, which are required for remote copy functionality.
- Create and manage pools, which are collections of target volumes.
- Add Replication Devices for improved performance (ESS model 800 only in TPC 2.3).
- Run the wizard for creating a session: select the copy type, select the source group, select the target pool, and save the session or start a replication session.
- Monitor, terminate, or suspend running sessions.

A user can also perform these tasks with the TotalStorage Productivity Center for Replication command-line interface, which is described in 15.3, Using Command Line Interface (CLI) for replication on page 884.

15.1.2 Replication session


A replication session is a set of copy relationships which are maintained as a unit in a manner that provides consistency, especially across box or other hardware boundaries. The replication session, then, associates a pool with a group and gives them a particular copy relationship: either a continuous synchronous remote copy or a point-in-time copy.


TotalStorage Productivity Center for Replication supports the session concept in which multiple pairs are handled as a consistent unit. You can create and manage copy relationships between source and target volume pairs or source volume groups, and among target pools through a Replication Manager copy session. The Replication Manager Sessions panel shows sessions and their associated status. The status indicates if the volume is a source, target, or both; and it shows the copy mode of the volume. You can also use this panel to assess if current replication activities are proceeding normally or abnormally. When you are creating a replication session, you can select source and target volume pairs or volume groups, then establish a continuous synchronous remote copy (remote copy) or point-in-time copy (flash copy) relationship between them. The Sessions panel includes the following options:
- Create - invokes the Create Session wizard, which you can use to create copy relationships for a new session.
- Delete - deletes an existing session.
- Flash - starts a created or terminated session (for Point-in-Time Copy only).
- Start - starts a created, suspended, or terminated session (for Remote Copy only).
- Properties - displays the Session Properties panel for an existing session.
- Suspend (consistent) - suspends an existing session, which results in a consistent target copy if there are no errors.
- Suspend (immediate) - stops an existing session with no guarantee of consistency.
- Terminate - stops an existing session and withdraws the relationships.

15.1.3 Storage group


A storage group is a collection of storage units that jointly contain all the data for a specified set of storage units, such as volumes. The storage units in a group must be from storage devices of the same type. Groups can be created to identify sets of volumes that need to be managed as a consistent unit. A general purpose group can be used as a container for volumes that share some association, for example, a group of volumes that are all associated with a specific application. After a storage group is created, you can perform the following tasks:
- Add volumes
- Delete volumes
- Change the description of the group
A storage group is managed by a Replication Manager session and used as a collection of source volumes for a copy.

15.1.4 Storage pools


A storage pool is an aggregation of storage resources on a storage area network (SAN) that you have set aside for a particular purpose. For example, you could use a storage pool for targets of copy operations that a collection of storage devices on the SAN can use. The storage devices can be from different vendors but must be a type that TotalStorage Productivity Center for Replication supports.


15.1.5 Relationship of group, pool, and session


This section illustrates the interdependency between a replication group, pool, and session in the context of the Replication Manager. To review, the definitions are:
- Group: a set of volumes containing related data, which are managed as a unit; they are managed concurrently.
- Pool: volumes set aside for copy services targets; these must not be in use by any other application.
- Session: a set of copy relationships which are maintained as a unit to provide consistency, across storage and server hardware boundaries.

TotalStorage Productivity Center for Replication provides the ability to copy a group to a pool, in which case it creates the valid mappings for source and target volumes and optionally presents them to the user for verification that the mapping is acceptable. Sessions are a set of multiple pairs that are managed as a consistent unit from which freeze and run functions can be performed when errors occur. The session can also be viewed as a consistency group. The following chart (see Figure 15-2) graphically depicts the interactions of groups, pools and a session. It shows one group of related volumes on a source ESS (volumes S1 and S2) that we want to copy to another target pool of volumes (the T volumes). Once we have identified and created the source volumes in the group and the target volumes in a pool, we can then establish the relationship.

The figure shows a group of source volumes (S1 and S2) copied by Remote or Flash Copy to target volumes (T1 and T2) taken from a pool that also contains T3 and T4; together these copy relationships form the session.

Figure 15-2 Relationship of a group, pool, and session

Our example session shows that S1 is associated with T1, and similarly S2 with T2. The T1 and T2 volumes are now persistently bound to the relationship, whereas T3 and T4 are still available for use. TotalStorage Productivity Center for Replication can automatically create the source-to-target relationship. Once created, these volumes are part of a session, or consistency group. This means that any error on any of the volumes in this session could trigger a suspend across all the volumes to ensure data consistency. Events such as loss of access to a source subsystem or the loss of the PPRC links are examples of conditions that could trigger a freeze event.
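As a way to reason about this mapping, the following is a minimal sketch of how a session might pair each source volume in a group with a free, equal-sized volume from a pool (the equal-size requirement for copy pairs is stated later in this chapter). The classes and names are illustrative only; they are not the Replication Manager API.

class Volume:
    def __init__(self, volume_id, size_gb):
        self.volume_id = volume_id
        self.size_gb = size_gb

def build_session(group, pool):
    """Pair every source volume with an unused, equal-sized target volume from the pool."""
    free_targets = list(pool)
    copysets = {}
    for source in group:
        match = next((t for t in free_targets if t.size_gb == source.size_gb), None)
        if match is None:
            raise ValueError(f"no free target of size {source.size_gb} GB for {source.volume_id}")
        free_targets.remove(match)              # the target is now bound to the session
        copysets[source.volume_id] = match.volume_id
    return copysets

group = [Volume("S1", 10), Volume("S2", 10)]
pool = [Volume("T1", 10), Volume("T2", 10), Volume("T3", 10), Volume("T4", 10)]
print(build_session(group, pool))   # {'S1': 'T1', 'S2': 'T2'}; T3 and T4 remain available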


15.1.6 Copyset and sequence concepts


A copyset incorporates all the volumes that make up an instance of a given copy type. In other words, it comprises the source volume, target volume, and the copy relationships. With Replication Manager you can manually select target volumes, or have Replication Manager select them for you. Subsequent releases of TotalStorage Productivity Center for Replication will enhance the number of volumes in a copy set, and a session will be able to manage one to thousands of copy sets. A sequence includes the set of all copy relationships at any given stage of a copy operation. For Continuous Synchronous Remote Copy and Point-in-Time Copy there is only one sequence. A sequence will share the same pool criteria policy. Using Figure 15-3, in a copy relationship, S1, S2 are members of the same group but different copysets. You can visualize this as two copysets, along with their target volumes. When you create the copy session, Replication Manager automatically maps the disk in each copyset to appropriate available disks, or you can choose the targets manually if you want to.

The figure shows one sequence containing two copysets: S1 copied to T1 and S2 copied to T2 by Remote or Flash Copy.

Figure 15-3 Replication manager sequence relationship example

Sequences will be further utilized in subsequent releases of TotalStorage Productivity Center as more complex copy types are supported.


15.2 Exploiting Productivity Center for replication


This section describes how to setup and use the TotalStorage Productivity Center for Replication component. To create and start a session for remote copy or point-in-time copy you have to perform steps as shown in Figure 15-4.

The steps, all performed from the Manage Replication task, are: create a group, create a pool, check paths, create a session, verify the session, and start the session.

Figure 15-4 Steps for creating replication copy session

15.2.1 Before you start


Before you start using Replication Manager, make sure that:
- The CIMOM for ESS is operational and you have registered all ESSs you want to manage.
- You have access to the ESS from the CIMOM server. Run the following command, located in the ESS CLI folder:
rsTestConnection.exe
- The ESS Copy Services servers are defined to the CIMOM using the addessserver command. Each ESS cluster which acts as a copy services server must be defined to the ESS CIMOM. Refer to Register ESS server for Copy services on page 214.
- The ESSs you will use are at the required LIC level. TotalStorage Productivity Center for Replication V2.3 requires LIC level 2.4.3.38 or above for the ESS 750 and ESS 800. ESS models F10 and F20 require LIC 2.3.256 or above.
- The paths between the ESSs you want to replicate are defined.

15.2.2 Adding a replication device


TotalStorage Productivity Center for Replication V2.3 introduces a new feature that allows you to specify an ESS (2105-800) for replication. By adding a replication device, the performance of time-critical functions improves compared to previous releases. To add a replication device:
1. In the IBM Director Task panel, click Multiple Device Manager.
2. Expand Manage Replication.
3. Double-click Replication Devices (Figure 15-5).


Figure 15-5 Replication Devices Task

4. The Replication Manager Device List panel opens (Figure 15-6). If devices were previously added, they will be displayed in this panel. Click Add... to add a device.

Figure 15-6 Add replication device

5. The Add New Device wizard launches (Figure 15-7) to guide you through the process. Click Next.

Figure 15-7 Add new replication device - step 1 of 4


6. In step 2 of the wizard, select the storage subsystem you wish to add from the drop down list (Figure 15-8). After choosing a device, click Next.
Note: A successful device Discovery must be completed before adding a replication device.

Figure 15-8 Add new replication device - step 2 of 4

7. In step 3, you must enter the Specialist user ID and password for Cluster 1 and 2 of the device (Figure 15-9). Click Next.

Figure 15-9 Add new replication device - step 3 of 4


8. In the final step, click Finish to add the device (Figure 15-10).

Figure 15-10 Add new replication device - step 4 of 4

9. After adding a replication device, the Device List panel opens (Figure 15-11). The device is listed with the connection status information.

Figure 15-11 Replication device confirmation


15.2.3 Creating a storage group


TotalStorage Productivity Center for Replication uses the groups and sessions you define to manage the replication process. Perform the following steps to create a Replication Manager group:
1. In the IBM Director Task panel, click Multiple Device Manager.
2. Click Manage Replication (see Figure 15-1 on page 830).
3. Double-click Groups; the Groups panel opens (see Figure 15-12).

Figure 15-12 Replication manager groups

4. Click Create. The Create Group wizard opens (see Figure 15-13).
5. Click the device shown in the Device Components pane and select a logical storage subsystem (LSS).
Note: Device Components shown in the group window do not use the same names defined in the Group Contents pane of the IBM Director console (see Figure 15-1 on page 830). The Device Component pane uses the format: device_type.serial_number. In our example Device Component ESS.2105-16603 in Figure 15-13 indicates ESS 2105 F20 16603 in Figure 15-1 on page 830.


6. Select one or more volumes (press Ctrl and click for multiple volume selection) from the Available Volumes pane of the Create group pane. Click Add (see Figure 15-13). You can also click Select all if you want to add all available volumes to a group. In our example we chose two volumes from ESS F20 (16603) and two volumes from ESS 800 (22513).

Figure 15-13 Select volumes for the new group example

Note: Although you can only select volumes from one LSS at a time, you can select different LSSs within the same Create Group session. As you select each LSS, the Available volumes pane updates the list of volumes that are available for the selected device.

7. If you want to remove a volume from the Selected volumes panel, select it, and then click Remove.
8. Click Next. The Save group window opens (see Figure 15-14).


Figure 15-14 Save group setup example

9. Enter a name for the new group in the Name field. The name is required, must not exceed 250 characters, and may not contain special characters such as spaces.
10. Enter a description for the new group in the Description field. The description is optional and can be 0 - 250 characters.
11. Click Finish to save the new group and close the wizard.
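As a reminder of the naming constraints above, the following is a small illustrative check. It is not part of the product, and the rejection of whitespace other than spaces is an assumption, since the text only calls out spaces as an example of a disallowed special character:

def is_valid_group_name(name):
    """Name is required, at most 250 characters, and must not contain spaces (assumed: or other whitespace)."""
    return bool(name) and len(name) <= 250 and not any(ch.isspace() for ch in name)

print(is_valid_group_name("PiT_group_01"))   # True
print(is_valid_group_name("my group"))       # False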

Result
The new group appears in the Groups window (see Figure 15-15). In our example we created two groups which will be used for Point-in-time copy and Remote Copy.

Figure 15-15 Groups window example


15.2.4 Modifying a storage group


Use the Group properties panel to modify one or more properties of a Replication Manager group of source volumes, for example, to add or remove volumes from a group. Perform the following steps to modify a Replication Manager group:
1. In the IBM Director Console Tasks pane, expand the Multiple Device Manager tab (see Figure 15-1 on page 830).
2. Click Manage Replication.
3. Double-click Groups. The Groups window opens.
4. Select the group to be modified from the Groups list.
5. Click Properties. The Group Properties window opens (see Figure 15-16). You can edit the text in the Description window.

Figure 15-16 Group properties


6. To change volumes which belong to the group, click Update. The Group properties window with volumes opens (similar to the one shown in Figure 15-17).

Figure 15-17 Group properties for a selected group panel

7. To add volumes to the group: select one or more volumes (using Ctrl) in the Available volumes panel and click Add.
8. To remove volumes from the group: select one or more volumes (using Ctrl) in the Selected volumes panel and click Remove.
Attention: Check if existing defined sessions use volumes which you want to remove from the group you are updating.

9. Click OK to submit your changes and close the window.

15.2.5 Viewing storage group properties


You can use the Replication Manager group properties panel to view properties for a selected group.
Note: You must have created and saved a group before you can view its properties.


Perform the following steps to view the properties of a Replication Manager group:
1. Expand Multiple Device Manager in the IBM Director Console Tasks pane.
2. Click Manage Replication.
3. Click Groups; the Groups panel opens.
4. In the Groups table, select the group that you want to view (see Figure 15-15 on page 840).
5. Click Properties. The Properties panel opens for the selected group. You can view the following information:
- Group name
- Description of the group
- The table of the volumes that are managed by the group, which shows: Volume ID, Device (for example ESS.2105-16603), Volume location (logical storage subsystem), Volume type (FB for open systems), and Volume size

15.2.6 Deleting a storage group


You can use this procedure to delete a selected Replication Manager group from the Groups list.
Note: Before you delete a group make sure that no session uses the group for replication.

Perform the following steps to delete a Replication Manager group:
1. Expand the Multiple Device Manager tab in the IBM Director Console Tasks pane.
2. Click Manage Replication.
3. Click Groups (see Figure 15-1 on page 830). The Groups panel opens. In the Groups list, select the group that you want to delete (see Figure 15-18).

Figure 15-18 Groups window


4. Click Delete. A window opens asking to verify the delete request (see Figure 15-19).

Figure 15-19 Delete Group confirmation

5. Click Yes to delete the group. Alternatively, click No to cancel the delete.

Result of Delete Group


The selected group is no longer displayed in the list of groups in the Groups window.

15.2.7 Creating a storage pool


You can perform this task to create pools of volumes which will be used as a set of target volumes for copy operations. Perform the following steps to create a storage pool:
1. From the IBM Director Console Tasks pane, expand the Multiple Device Manager tab.
2. Click Manage Replication.
3. Click Pools. The Pools panel opens (see Figure 15-20).

Figure 15-20 Replication manager Pools panel example


4. Click Create. The Create Pool Wizard opens (see Figure 15-21).

Figure 15-21 Select volumes example for creating a pool

5. Click the device shown in the Device Component pane and select a logical storage subsystem (LSS).
Note: Device Components shown in the Group window do not use the same names defined in the Group Contents panel in the IBM Director Console (see Figure 15-1 on page 830). The Device Component pane uses the format device_type.serial_number. In our example Device Component ESS.2105-16603 in Figure 15-13 indicates ESS 2105 F20 16603 in Figure 15-1 on page 830.

6. Select one or more volumes (press Ctrl and click for multiple selection) in the Available volumes pane and click Add. You can also click Select all if you want to add all available volumes to a pool. In our example we chose two volumes from ESS F20 (16603) and two volumes from ESS 800 (22513).
Important: The size of a source and target volume of a copy relationship has to be equal.

7. If you want to remove a volume from the Selected volumes panel, select it, and then click Remove.
8. Click Next. The Save pool window opens (see Figure 15-22).
9. Enter a name (required), description (optional), and location (optional).
Note: We recommend that you enter a Location name, which helps in automatic allocation of target volumes during creating a session.


10. Click Finish to save the new pool (Figure 15-22).

Figure 15-22 Save pool window

Result of creating a storage pool


The new pool is added to the Pools table as shown in Figure 15-23.

Figure 15-23 Created pool


Note: You do not have to use all the volumes of a pool when you create a session. Additionally, a pool, and even the same volume from a pool, can be defined as a target for multiple sessions.

15.2.8 Modifying a storage pool


You can use the Pool properties panel to modify one or more properties of a Replication Manager pool of target volumes. Perform the following steps to modify a Replication Manager pool:
1. In the IBM Director Console Tasks pane, expand the Multiple Device Manager tab.
2. Click Manage Replication.
3. Click Pools. The Pools panel opens (see Figure 15-23 on page 846).
4. Select the pool to be modified in the Pools table and click Properties. The Pool properties window opens (see Figure 15-24).

Figure 15-24 Pool properties

5. You can change the text in the Description panel and the Location.


Attention: Changing the Location name can destroy a session which uses the pool you are modifying.


6. To change the volumes which belong to the pool, click Update. The Pool properties window with volumes opens (similar to the one shown in Figure 15-21 on page 845).
7. To add volumes to the pool: select one or more volumes (using Ctrl) in the Available volumes panel and click Add.
8. To remove volumes from the pool: select one or more volumes (using Ctrl) in the Selected volumes panel and click Remove.
Attention: Check if any defined sessions use volumes which you want to remove from the pool you are modifying.

9. Click OK to commit changes and close the window or click Cancel if you want to cancel the modifications.

15.2.9 Deleting a storage pool


Perform this task to delete a selected Replication Manager storage pool from the Pools table. Perform the following steps to delete a Replication Manager pool:
1. In the IBM Director Console Tasks pane, expand the Multiple Device Manager tab.
2. Click Manage Replication.
3. Click Pools. The Pools panel opens (see Figure 15-25). In the Pools list, select the pool that you want to delete.

Figure 15-25 Pools window


4. Click Delete. A window with the message Are you sure you want to delete pool pool_name? opens as shown in Figure 15-26.

Figure 15-26 Delete a pool confirmation

5. Click Yes to delete the pool or No to cancel.

Result of deleting a storage pool


The selected pool is removed from the list of pools in the Pools table.

15.2.10 Viewing storage pool properties


You can view information about a Replication Manager storage pool in the Pool properties window.
Note: You must have created and saved a pool before you can view its properties. A storage pool is a predefined set of direct access storage device (DASD) volumes used to store groups of logically related data according to user requirements for service or according to storage management tools and techniques.

1. Expand Multiple Device Manager in the IBM Director Console Tasks panel.
2. Click Manage Replication.
3. Click Pools. The Pools panel opens.
4. In the Pools table, select the pool that you want to view.
5. Click Properties. The properties window opens for the selected pool (see Figure 15-24 on page 847). You can view the following information:
- Pool name
- Description of the pool
- Location name
- The table of the volumes that are managed by the pool, which shows: Volume ID, Device (for example ESS.2105-16603), Volume location (logical storage subsystem), Volume type (FB for open systems), and Volume size


15.2.11 Creating storage paths


The TotalStorage Productivity Center for Replication provides a graphical method to view the relationships and links between logical storage subsystems.
Important: Check path availability before starting remote copy sessions.

The ability to create paths is now supported within TotalStorage Productivity Center for Replication V2.3. At the time a Replication Manager session is initiated, the paths in effect at the time are retained, and they are restored on subsequent restarts of the session. To create a path:
1. From the IBM Director Console Tasks pane, expand the Multiple Device Manager tab.
2. Expand Manage Replication.
3. Click Paths. The Paths panel opens.
4. In the Paths panel, click Create.
5. The Create Path wizard launches (Figure 15-27). Step 1 welcomes you to the wizard. Click Next.

Figure 15-27 Create Path wizard - step 1 of 5

6. In step 2 of the Create Path wizard, select the source storage device and LSS from the drop down menus as seen in Figure 15-28. Click Next.

Figure 15-28 Create Path wizard - step 2 of 5


7. In step 3 of the Create Path wizard, select the target storage device and LSS from the drop down menus as seen in Figure 15-29. Click Next.

Figure 15-29 Create Path wizard - step 3 of 5

8. In step 4, select the path type and number of paths to create. In the example below, we create three Fibre Channel paths between our source and target (Figure 15-30).

Figure 15-30 Create Path wizard - step 4 of 5

9. In the final step (Figure 15-31), review the information and confirm the choices by clicking Next.

Figure 15-31 Create Path wizard - step 5 of 5


10. The Paths panel is displayed (Figure 15-32), and you can see the status of the path you just created. The path we created above is highlighted and is Established. Click Close to close the panel.

Figure 15-32 Create Paths - Path status

15.2.12 Point-in-Time Copy - creating a session


If you have created a Replication Manager group and pool, you can define a session which will run the copy task. This section describes Point-in-Time Copy, which creates an instant copy of a volume on the same storage server. With TotalStorage Productivity Center for Replication you can define a set of many instant copy tasks in the same session, which runs all tasks at the same time. This provides consistent data spread over many volumes on different storage servers. Perform the following steps to create a Point-in-Time Copy session:
1. In the IBM Director Console Tasks pane, expand the Multiple Device Manager tab.
2. Click Manage Replication.
3. Double-click Groups. The Groups window opens (see Figure 15-15 on page 840).
4. Select the group which you want to copy and click Replicate. The Create Session wizard opens for the group you chose (see Figure 15-34 on page 853).
Alternatively:
1. In the IBM Director Console Tasks panel, expand the Multiple Device Manager tab.
2. Click Manage Replication.
3. Double-click Sessions. The Session window opens (see Figure 15-33 on page 853).


Figure 15-33 Session window

4. Select Create session action. The Create session window opens (see Figure 15-34). Choose Point-in-Time Copy and click Next.

Figure 15-34 Create session window with Point-in-Time Copy selection

Note: You can define another session which uses the same group.

5. The Choose source group window opens (see Figure 15-35). Choose the Group name which you want to copy and click Next. If you ran the wizard from the Groups window you can see only one Group which you selected before.

Figure 15-35 Choosing source group for replication


6. The Choose a target pool window opens as shown in Figure 15-36.

Figure 15-36 Choosing target pool for point in time copy

7. In the Location filter field, enter the name of the location of the target pool. You can enter an asterisk (*) as a wildcard for the first or last character of the filter.
8. Click Apply to see volumes of all locations which meet the criteria.
9. Select the All listed locations radio button if you want to use volumes from more than one location, or select the Select single location radio button, select the location from the Location pane, and click Next.
Note: We recommend that you enter the entire location name in the Location filter field instead of using wildcards. Remember, the location name is case-sensitive.

10. Enter the session Name and Description in the Create session - Set session settings panel (see Figure 15-37).

Figure 15-37 Set session settings window


Select one of the following options in the Session approval pane:
- Automatic - indicates that you allow Replication Manager to automatically create the relationships between source and target volumes.
- Manual - indicates that you want to select volumes and approve the relationships yourself.
11. Click Next. The Review session properties window opens. Verify your input and click Finish to submit (see Figure 15-38).

Figure 15-38 Review session properties panel

12. The session is created and a new window opens with a message that the command completed successfully. If you get a message as shown in Figure 15-41 on page 856, refer to 15.2.13, Creating a session - verifying source-target relationship on page 856.
13. In the Sessions pane you can see the newly created session (see Figure 15-39).

Figure 15-39 Sessions window with created session


If the session was created successfully, select Flash from the Session actions pull-down to run a Point-in-Time Copy session (see Figure 15-40). We recommend that you verify the source-target volumes before running a session. To verify relationships, refer to 15.2.13, Creating a session - verifying source-target relationship on page 856.

Figure 15-40 Running Point-in-Time copy

14.Now you can see in the ESS Specialist interface that FlashCopy is running as shown in Figure 15-50 on page 861. In our example there are two pairs of FlashCopy on two different ESS devices running in the same session.

15.2.13 Creating a session - verifying source-target relationship


When you create a session, Replication Manager can automatically create relationships between source volumes in a group and target volumes in a pool (if you chose Automatic session approval as in Figure 15-37 on page 854). If Replication Manager could not set the relationships, you get a message like the one in Figure 15-41 on page 856.
Tip: We recommend that you check the pairs of source and target volumes before starting a session. Even if you chose Automatic session approval and got a message that the session was created successfully, you should check whether the relationships are set correctly.

Figure 15-41 Creating session - error message

Perform the following steps to verify the source-target volume pairs:
1. If you got a message that the create command completed with errors, click Details (see Figure 15-41). A window opens where you can see the detailed messages (see Figure 15-42).


Figure 15-42 Detailed messages

Close both windows. You can see the created session in the Sessions pane. In our example in Figure 15-43, we created a session named FC_F20_800.

Figure 15-43 Sessions window with created session

2. In the Sessions panel, click the session that you want to verify.
3. Click the Please select one drop-down and choose Properties. The Session properties window opens. Click the Copyset tab (see Figure 15-44).


4. The number under Non-approved copysets indicates how many relationships could not be created automatically when the session was created. In our example, we chose the Automatic session approval method; two pairs were set automatically, but the next two were not approved (see Figure 15-44). Click Copyset details. The Copyset window opens as shown in Figure 15-45.

Figure 15-44 Session properties window, Copyset tab

5. Select the Invalid Copyset to see details of the last result and click Modify copyset target. In our example two pairs are approved and two are not valid and should be modified as shown in Figure 15-45.

Figure 15-45 Sessions copysets


Tip: The Copyset ID corresponds to the source volume of the copy pair.

6. The Choose Target window opens. Select the target volume to create a copy pair with the source volume and click Next. In our example (see Figure 15-46), the source volume is 1300 and there are two available targets, 1304 and 1305.

Figure 15-46 Choose Target window

7. The Choose Target Verify window opens. If it shows the correct target volume for the modified copyset, click Finish to approve it.
8. Perform steps 5 - 7 for all copysets that are invalid, that is, copysets whose source-target pairs were not set and approved.
9. If all copysets are correct, you see the status shown in Figure 15-47. Select a modified copyset to verify that the last result says the relationship was successfully created.

Figure 15-47 Approved copysets


10. Go back to the Session properties window, Copyset tab (see Figure 15-48 on page 860) and click Refresh. If you modified all copysets correctly, you should get results as shown in Figure 15-48.

Figure 15-48 Session properties window - status of corrected copysets

11. Go back to the main Sessions window. Select the Session actions pull-down and click Flash to run a Point-in-Time Copy session as shown in Figure 15-49. The Confirmation window opens; click Yes to run or No to cancel.

Figure 15-49 Running Point-in-Time copy


12.You can see in the ESS Specialist interface that FlashCopy is running as shown in Figure 15-50. In our example there are two pairs of FlashCopy on two different ESS devices running in the same session.

Figure 15-50 FlashCopy pairs created and run by TotalStorage Productivity Center for Replication

15.2.14 Continuous Synchronous Remote Copy - creating a session


If you created a Replication Manager group and pool, you can define a session that runs the copy task. This section describes Remote Copy, which creates a synchronous copy of a volume on another or the same storage server. You can define a set of many pairs of mirroring volumes in the same session; all tasks run at the same time, which provides consistent data spread across many volumes on different storage servers.

Perform the following steps to create a Remote Copy session:
1. In the IBM Director Task panel, click Multiple Device Manager.
2. Click Manage Replication.
3. Double-click Groups. The Groups window opens (see Figure 15-15 on page 840).
4. Select the group that you want to copy and click Replicate. The Create Session wizard opens for the chosen group (see Figure 15-51).


Figure 15-51 Create session window with Continuous Synchronous Remote Copy selection

Alternatively:
1. In the IBM Director Task panel, click Multiple Device Manager.
2. Click Manage Replication.
3. Double-click Sessions. The Session window opens (see Figure 15-52).
4. Select the Create session action. The Create session window opens (see Figure 15-51).

Figure 15-52 Session window

5. Choose Continuous Synchronous Remote Copy and click Next. The Choose source group window opens. Choose the group name that you want to copy and click Next (see Figure 15-53). If you started the wizard from the Groups window, only the group that you selected previously is shown. The Choose a target pool window opens as shown in Figure 15-54 on page 863.

Figure 15-53 Choosing source group for remote copy replication


6. In the Location filter field, enter the name of the location of the target pool. You can enter an asterisk (*) as a wildcard for the first or last character of the filter.
7. Click Apply to see volumes from all locations that meet the criteria.
8. Select All listed locations if you want to use volumes from more than one location, or select Select single location and then select the correct location. Click Next.
Note: Remember that the location name is case-sensitive.

Figure 15-54 Choosing target pool for point in time copy


9. The Set session settings window opens. Enter the name and description (see Figure 15-55).
10. Select one of the following options in the Session approval panel:
Automatic - indicates that you allow Replication Manager to automatically create the relationship between source and target volumes
Manual - indicates that you want to select volumes and approve the relationships yourself

Figure 15-55 Set session settings window

11. Click Next. The Review session window opens. Validate the information and click Finish to submit (see Figure 15-56).

Figure 15-56 Creating session review

12. The session is created and a new window opens with a message that the command completed successfully, as shown in Figure 15-57. If you get a message as shown in Figure 15-41 on page 856, read 15.2.13, Creating a session - verifying source-target relationship on page 856.


Figure 15-57 Continuous Synchronous Remote Copy session created successfully

13. In the Sessions window you can see the newly created session (see Figure 15-58).

Figure 15-58 Sessions window with created Continuous Synchronous Remote Copy session.

14. If the session was created successfully, select the session that you want to run, select Session actions, and click Start to run a Remote Copy session (see Figure 15-59). However, we recommend that you verify the source-target volumes before running a session. To verify relationships, read 15.2.13, Creating a session - verifying source-target relationship on page 856.

Figure 15-59 Starting Remote Copy session


15.You can see in the ESS Specialist interface that a Remote Copy is running as shown in Figure 15-60. In our example there are two pairs of Remote Copy between volumes on two different ESSs running in the same session.

Figure 15-60 Remote copy pairs created and run by TotalStorage Productivity Center for Replication

15.2.15 Managing a Point-in-Time copy


From the Session window, you can perform the following actions for a Point-in-Time Copy session (the actions are different for Continuous Synchronous Remote Copy):
Create a new session
Delete a defined session
Flash - start a session
Properties - view and change the properties of a session
Terminate a started session

Using any copy services requires that you create an accurate plan before running them and a detailed plan for future management. Any mistake can cause loss of data, for example, if you use the wrong volume as the target of a copy session. Therefore, we recommend that you verify all pairs in a session before starting the copy process; see 15.2.13, Creating a session - verifying source-target relationship on page 856.

Sessions window
When you create, verify and run a session you can monitor its status in the main Session window, which gives you basic information about a given session. Each session can include many pairs of volumes which are in copy relationships and create a consistent group. If the status of a given session is not optimal, you need to review the properties for a given session to check if there is a general problem or if it is related to a certain pair of volumes.

Perform the following steps to check the basic status of a session:
1. Click Multiple Device Manager in the IBM Director Task panel.
2. Click Manage Replication.
3. Click Sessions. The Sessions window opens (also called the main Session window). There are eight fields in the Sessions window:
a. Name - the name of the session.
b. Status - can have one of the following values:
   Normal (green icon): The Point-in-Time Copy was invoked successfully.
   Medium (yellow icon): The session is not started or was terminated.
   Severe (red icon): An error occurred.
c. State - can have one of the following values:
   Defined - the session is created and not started, or was terminated.
   Active - the session is running.
d. Group - the name of the group of volumes that are the sources of the copy pairs.
e. Copy Type - Point-in-Time Copy or Continuous Synchronous Remote Copy.
f. Recoverable - indicates whether any sequences in the session are considered recoverable.
g. Shadowing - indicates whether any part of the session is shadowing data.
h. Volume Exceptions - shows the total number of volumes that are in an exception state.

Before starting a created session, you should see the following field values, as shown in Figure 15-61:
Status - Medium
State - Defined
Recoverable - No
Shadowing - No
Volume Exceptions - No

Figure 15-61 Defined state of Point-in-Time Copy


When you successfully flash a new or terminated session, you see the values for the following parameters, as shown in Figure 15-62:
Status - Normal (green)
State - Active (changed from Defined)
Recoverable - Yes

Figure 15-62 Flashed Point-in-Time Copy session.

Properties window
The Sessions window shows the status of a session as a group of volume pairs. If you want to see details, perform the following steps to use the Properties window:
1. In the IBM Director Task panel, click Multiple Device Manager.
2. Click Manage Replication.
3. Double-click Sessions. The Session window opens.
4. Select the session that you want to manage and select the Properties session action. The Properties window opens.
5. There are three tabs in the Properties window:
a. General - shows general information about the session. Compared to the Session window, you get additional information such as the description, the number of volumes, and the approval status. The only parameter you can change is the approval status, which can be automatic or manual.
b. Copyset - lets you check whether all pairs are valid and approved. This panel is mostly used during verification of a session; see Creating a session - verifying source-target relationship on page 856.
c. Sequence - mostly used to see detailed information about the status of a session, especially when used together with the Pairs window.

The General tab shows basic information, like the Session window. For example, when you have flashed a session, you should see the following values (see Figure 15-63):
Copy type - Point-in-Time Copy
State - Active
Status - Normal
Group - name of the group used for this session


Source Volumes - number of volumes in the group
Approval status - Automatic or Manual

Figure 15-63 General tab in Properties window for flashed session

The Copyset tab generally does not change while managing a session unless some error occurs. Figure 15-64 shows the status in our environment.

Figure 15-64 Copyset tab for correctly defined session with 4 pairs of volumes


To see more information about copysets, especially if some of them are invalid, click Copyset details. The Copyset window opens, displaying the table of copysets in the session. You can check for problems in the following tables:
The Copyset table indicates whether a copyset is invalid. The Last Result column displays the latest message issued for a copyset and indicates why it is invalid.
The Last Result column of the Copyset Relationships table displays the last message issued for a copyset pair. If a message ends in E or W, the pair is considered an exception pair.
For more details, refer to Creating a session - verifying source-target relationship on page 856.

The Sequence tab is the most useful when you manage replication sessions, especially during synchronization. You can see which volume pairs are synchronized and the status of the others. The following columns are available in the Sequence panel:
Recoverable - true or false. Indicates whether all pairs in the sequence are recoverable.
Exception - yes or no. Indicates whether at least one pair is in an exception state.
Shadowing - yes or no. Indicates whether all pairs are in a shadowing state.
Exception volumes - shows the number of volumes that are in an exception state.
Recoverable pairs - shows the number of volume pairs that are recoverable.
Shadowing pairs - shows the number of volume pairs that are in a shadowing state.
Total pairs - shows the total number of pairs in the sequence.
Recoverable timestamp - shows the time when the session was suspended.

The following is an example from our environment of the different states of a replication session. After you create or terminate a session, you see the Sequence tab as shown in Figure 15-65.

Figure 15-65 Sequence tab in Session properties window for defined Point-in-Time Copy session


When the session is created or terminated, it is in the defined state. You can see the following values in the Sequences pane:
Name - Local point in time copy sequence
Recoverable - false - it is not recoverable
Exception - No - there are no exceptions
Shadowing - No - the sequence is not shadowing
Exception volumes - 0 - no volume is in an exception state
Recoverable pairs - 0 - no pair is recoverable
Shadowing pairs - 0 - no pair is shadowing
Total pairs - 4 (in our example) - the total number of pairs is four
Recoverable timestamp - n/a - not available

In the Sequence states panel you see that four pairs are in the Defined state. To see more details, select the sequence in the Sequences panel and click Pairs. A new window opens as shown in Figure 15-66.

Figure 15-66 Pair of Point-in-Time Copy session in defined state

The Sequence Flashed Target pairs window contains the following information:
Source Volume - the source volume of a pair, including the type and number of the ESS and the volume number
Target Volume - the target volume of a pair
State - Defined - means the session is created or terminated but not running
Recoverable - No - indicates whether a pair is flashed
Shadowing - No
New - Yes - indicates that it is a new session
Timestamp
Last result - the code of the last result; you can see a description in the Last result panel if you click a pair in the Pairs panel


When you flash a new or terminated session, you see the Sequence tab as shown in Figure 15-67.

Figure 15-67 Sequence tab in Session properties window

Notice the values of the following columns:
Recoverable - true
Recoverable timestamp - the time when the Point-in-Time Copy session was successfully flashed
The Sequence Flashed Target pairs window shown in Figure 15-68 shows the successfully flashed volumes.

Figure 15-68 Pairs of successfully flashed volumes.


15.2.16 Managing a Continuous Synchronous Remote Copy


From the Session window you can perform several tasks for a Continuous Synchronous Remote Copy session (the tasks are different for Point-in-Time Copy):
Create a new session
Delete a defined session
Properties - view and change the properties of a session
Start a session
Suspend an already-started and synchronized session
Terminate a started session

Using any copy services requires that you create an accurate plan before running and a detailed plan for managing the copy services. Any mistake can cause loss of data, for example when you use the wrong volume as a target of a copy session. Therefore we recommend that you verify all pairs in a session before starting the copy process. Refer to Creating a session - verifying source-target relationship on page 856.

Sessions window
When you create, verify, and run a session, you can monitor its status in the main Session window, which gives you basic information about a given session. Each session can include many pairs of volumes which are in copy relationships and create a consistent group. If the status of a given session is not optimal, you need to review the properties for the session to check whether there is a general problem or whether it is related to a certain pair of volumes.

Perform the following steps to check the basic status of a session:
1. Click Multiple Device Manager in the IBM Director Task panel.
2. Click Manage Replication.
3. Click Sessions. The Sessions window opens (also called the main Session window).
4. There are eight fields in the Sessions window:
a. Name - the name of the session.
b. Status - can have one of the following values:
   Normal (green icon): All source volumes are replicating in both directions and the copy is active. All volumes were established successfully and are synchronized.
   Medium (yellow icon): The session is not started, was terminated, or is synchronizing but at least one volume is not synchronized with a source.
   Severe (red icon): An error caused a hardware device to respond at multiple addresses or, for a fibre-channel connection, a volume failed to be established.
c. State - can have one of the following values:
   Defined - the session is created and not started, or was terminated.
   Active - the session is running.
d. Group - the name of the group of volumes that are the sources of the copy pairs.
e. Copy Type - can be Point-in-Time Copy or Continuous Synchronous Remote Copy (as described in this chapter).
f. Recoverable - indicates whether any sequences in the session are considered recoverable.
g. Shadowing - indicates whether any part of the session is shadowing data.
h. Volume Exceptions - shows the total number of volumes that are in an exception state.

After you have created a session, but before starting it, you should see the following values for several fields (see Figure 15-69):
Status - Medium
State - Defined
Recoverable - No
Shadowing - No
Volume Exceptions - No

Figure 15-69 Defined state

When you start a new session or resume a suspended session, you see the following values (see Figure 15-70):
Status - Medium (still not optimal)
State - Active (changed from Defined)
Recoverable - No
Shadowing - Yes (changed)
Volume Exceptions - No

Figure 15-70 Synchronizing (copy pending) status


If all pairs in a session are synchronized, you should see the following values (see Figure 15-71):
Status - Normal (changed; now it is the optimal state)
State - Active
Recoverable - Yes (changed; now you can recover data in case of disaster)
Shadowing - Yes
Volume Exceptions - No

Figure 15-71 Synchronized (full-duplex) status

If a session is suspended, you should see the following values (see Figure 15-72):
Status - Normal
State - Active
Recoverable - Yes
Shadowing - No
Volume Exceptions - No

Figure 15-72 Suspended status


Properties window
The Sessions window shows the status of the session as a group of volume pairs. If you want to see details, perform the following steps to use the Properties window:
1. In the IBM Director Task panel, click Multiple Device Manager.
2. Click Manage Replication.
3. Double-click Sessions. The Session window opens.
4. Select the session that you want to manage and select the Properties session action. The Properties window opens.
5. There are three tabs in the Properties window:
General - shows general information about the session. Compared to the Session window, you get additional information such as the description, the number of volumes, and the approval status. The only parameter you can change is the approval status, which can be automatic or manual.
Copyset - lets you check whether all pairs are valid and approved. This panel is mostly used during verification of a session; see Creating a session - verifying source-target relationship on page 856.
Sequence - mostly used to see detailed information about the status of a session, especially together with the Pairs window.

The General tab shows basic information, like the Session window. For example, when you create a session, before starting it, you should see the values shown in Figure 15-73:
Copy type - Continuous Synchronous Remote Copy
State - Defined
Status - Medium
Group - name of the group used for this session
Source Volumes - number of volumes in the group
Approval status - Automatic or Manual

Figure 15-73 General tab in Properties window for defined session


The Copyset tab information generally does not change while managing a session unless some error occurs. You should see the following status as shown in Figure 15-74.

Figure 15-74 Copyset tab in Properties window for correctly defined session

To see more information about copysets, especially if some of them are invalid, click Copyset details. The Copyset window opens, displaying the table of copy sets in the session. You can check for problems in the following tables:
The Copyset table indicates whether a copy set is invalid. The Last Result column displays the latest message issued for a copyset and indicates why it is invalid.
The Last Result column of the Copyset Relationships table displays the last message issued for a copyset pair. If a message ends in E or W, the pair is considered an exception pair.
For additional details, refer to Creating a session - verifying source-target relationship on page 856.

The Sequence tab is the most useful when you manage replication sessions. Especially during synchronization, you can see which volume pairs are synchronized and the status of the others. The following columns are available in the Sequence panel:
Recoverable - true or false. Indicates whether all pairs in the sequence are recoverable.
Exception - yes or no. Indicates whether at least one pair is in an exception state.
Shadowing - yes or no. Indicates whether all pairs are in a shadowing state.
Exception volumes - shows the number of volumes that are in an exception state.
Recoverable pairs - shows the number of volume pairs that are recoverable.
Shadowing pairs - shows the number of volume pairs that are in a shadowing state.
Total pairs - shows the total number of pairs in the sequence.
Recoverable timestamp - shows the time when the session was suspended.

The following example is taken from our environment, showing different states of a replication session.


If you created a session or terminated a running session, you see the Sequence tab as shown in Figure 15-75.

Figure 15-75 Sequence tab in Session properties window for defined session

When the session is created or terminated, it is in a defined state. You can see the following values in the Sequences panel:
Recoverable - false - it is not recoverable
Exception - No - there are no exceptions
Shadowing - No - the sequence is not shadowing
Exception volumes - 0 - no volume is in an exception state
Recoverable pairs - 0 - no pair is recoverable
Shadowing pairs - 0 - no pair is shadowing
Total pairs - 2 - the total number of pairs is two
Recoverable timestamp - n/a - not available

In the Sequence states panel you see that two pairs are in the Defined state. For more details, select the sequence in the Sequences panel and click Pairs. The Sequence Remote Target pairs window opens as shown in Figure 15-76.

Figure 15-76 Pairs of remote mirror session in defined state


The Sequence Remote Target pairs window contains the following columns:
Source Volume - the source volume of a pair, including the type and number of the ESS and the volume number
Target Volume - the target volume of a pair
State - Defined - means the session is created or terminated but not running
Recoverable - No - indicates whether a pair is synchronized
Shadowing - No
New - Yes - indicates that it is a new session
Timestamp
Last result - the return code of the last result; you can see a description in the Last result panel if you click a pair in the Pairs panel

When you start a new session or resume a suspended session, you see the Sequence tab as shown in Figure 15-77.

Figure 15-77 Sequence tab in Session properties window for just started session


The following columns have changed their state:
Shadowing - yes
Shadowing pairs - 2
The Sequence Remote Target pairs window looks similar, as shown in Figure 15-78.

Figure 15-78 Pairs of just started remote mirror session

If one volume is synchronized but another is still synchronizing you will see the status as shown in Figure 15-79.

Figure 15-79 Sequence tab in a Session properties window for partially synchronized session


One pair is in the Duplex state, which means it is synchronized, and the other pair is in the synchronizing state. Notice that the Recoverable state is still false, because not all pairs are synchronized. To see which pair is in the full duplex state, click Pairs (see Figure 15-80).

Figure 15-80 Pairs of partially synchronized session

In our example, one pair is in Duplex state (volume 1703 on ESS F20 16603 is synchronized with volume 1301 on ESS 800 22513) while the second pair is still synchronizing. When all pairs in a session are synchronized, you will see the status as shown in Figure 15-81.

Figure 15-81 Sequence tab in Session properties window in synchronized state

Notice that the Recoverable status is true, which means that all pairs are in the Duplex (synchronized) state. The same status is shown for each pair separately in the Sequence Remote Target pairs window, as shown in Figure 15-82.


Figure 15-82 Pairs of fully synchronized session

When a session is fully synchronized, you can suspend the session to have a consistent state of data on the remote server. If you successfully suspend a session, you see the Sequence tab information shown in Figure 15-83.

Figure 15-83 Sequence tab in Session properties window in successfully suspended state

You can see the following values on the Sequence tab:
Recoverable - true - it is recoverable
Exception - No - there are no exceptions
Shadowing - No - the sequence is not shadowing
Exception volumes - 0 - no volume is in an exception state
Recoverable pairs - 2 - two pairs are recoverable
Shadowing pairs - 0 - no pair is shadowing
Total pairs - 2 - the total number of pairs is two
Recoverable timestamp - the time when the session was successfully suspended

In the Sequence states panel you see that two pairs are in the Suspended state. To see more details, select the sequence in the Sequences panel and click Pairs. A new window opens as shown in Figure 15-84.


Figure 15-84 Pairs of successfully suspended session

For a successfully suspended session, you should see the following values in the Pairs window:
State - Suspended
Recoverable - Yes
Shadowing - No
New - No
Important: Remember to check that a session has successfully synchronized before you invoke the suspend command. Otherwise, you will get invalid and inconsistent data on the remote site.

If you suspend a session that was not synchronized, you get the information in the Sequence tab shown in Figure 15-85.

Figure 15-85 Sequence tab in suspended but not recoverable state

You can see the following values in the Sequences panel:
Recoverable - false - it is not recoverable
Exception - No - there are no exceptions
Shadowing - No - the sequence is not shadowing
Exception volumes - 0 - no volume is in an exception state
Recoverable pairs - 0 - no pair is recoverable
Shadowing pairs - 0 - no pair is shadowing
Total pairs - 2 - the total number of pairs is two
Recoverable timestamp - n/a - recovery is not possible, so there is no time information

Notice that the state is Suspended but the session is not recoverable. In the Sequence states panel you see that two pairs are in the Suspended state but the Recoverable value is false. To see more details, select the sequence in the Sequences panel and click Pairs. A new window opens as shown in Figure 15-86.

Figure 15-86 Pairs of suspended not synchronized session

When a session is suspended in an inconsistent state, you see the following values in the Pairs pane:
State - Suspended
Recoverable - No
Shadowing - No
New - Yes

15.3 Using Command Line Interface (CLI) for replication


This section introduces the command-line interface (CLI) for TotalStorage Productivity Center for Replication. We focus on the main commands used for managing a session. See the IBM TotalStorage Productivity Center for Disk and Replication: Command-Line Interface User's Guide, SC30-4109, for a detailed description of all available commands. Using the CLI, you can create and delete sessions, groups, pools, and related copy pairs, as well as run, suspend, and terminate replication sessions. You can use the CLI installed together with TotalStorage Productivity Center for Replication on the main server, or install the CLI on another machine and invoke commands remotely. See Installing CIM agent for ESS on page 196 for installation instructions.


repcli utility
To use the CLI, you run the repcli utility. The default folder location of the CLI for Replication Manager is c:\Program Files\IBM\mdm\rm\rmcli. The utility can run commands in interactive mode, run a single command, or run a set of commands from a script. The syntax of the repcli command is:
repcli [ { -ver|-overview|-script file_name|command | - } ] [ { -help|h|-? } ]

Where:
-ver

Displays the current version


-overview

Displays the overview information about the repcli utility, including command modes, standard command and listing parameters, syntax diagram conventions, and user assistance.
-script filename

Runs the set of command strings in the specified file outside of a repcli session. You must specify a file name. The format options specified using the setoutput command apply to all commands in the script. Output from successful commands routes to stdout. Output from unsuccessful commands routes to stderr. If an error occurs while one of the commands in the script is running, the script exits at the point of failure and returns to the system prompt. (A sample script file is sketched after this parameter list.) Example:
repcli -script start_backup.scr

-command_string

Runs the specified command string outside of a repcli session. Example:


repcli lssess
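For illustration only, the following is a minimal sketch of what a script file such as start_backup.scr (the file name used in the -script example above) might contain. It is not taken from the product documentation; it simply strings together commands that are described later in this section: it turns off paging for unattended use, suspends a Remote Copy session to freeze the data, and lists the session status in a script-friendly format.

setoutput -p off
suspendsess -quiet -type consist PPRC_800_to_F20
lssess -l -fmt delim PPRC_800_to_F20

You would invoke such a script from the command prompt with repcli -script start_backup.scr; after the backup of the target volumes completes, the session can be resumed with the startsess command.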

CLI commands for replication


The following is the list of all commands available in the CLI for TotalStorage Productivity Center for Replication:

approvecpset, chcpset, chcredentials, chgrp, chsess, chtgtpool, exit, flashsess, generatecpset, help, lscpset, lscredentials, lsdev, lsgrp, lsiogrp, lslss, lspair, lspath, lsseq, lssess, lstgtpool, lsvol, mkcpset, mkcredentials, mkgrp, mkpath, mksess, mktgtpool, quit, repcli, rmcpset, rmcredentials, rmgrp, rmpath, rmsess, rmtgtpool, setoutput, showcapacity, showcpset, showcredentials, showdev, showgrp, showmessage, showsess, showtgtpool, startsess, stopflashsess, stopsess, suspendsess

In this section we focus on a few of these commands, which are most often used for managing replication sessions:
flashsess - start a Point-in-Time Copy session
lspair - show information about a copy pair for a session
lsseq - show information about a sequence for a session
lssess - show details about all or filtered sessions
setoutput - change the default format for output
showsess - show details about a certain session
startsess - start a Continuous Synchronous Remote Copy session
stopflashsess - terminate a Point-in-Time Copy session
stopsess - terminate a Continuous Synchronous Remote Copy session
suspendsess - suspend a Continuous Synchronous Remote Copy session

15.3.1 Session details


Before you start a session, check its status using the lssess or showsess command. Use the lssess command to get basic or detailed information about all sessions or to find a session that fulfills specific criteria. The showsess command displays detailed information about a given session.

lssess command
lssess [ { -help|-h|-? } ] [ { -l (long)|-s (short) } ] [-fmt default|xml|delim|stanza] [-p on|off] [-delim char] [-hdr on|off] [-r #] [-v on|off] [-cptype flash|pprc] [-state defined|active] [-status norm|warn|sev|unknown] [-recov yes|no] [-shadow yes|no] [-err yes|no] [session_name ... | -]

-s

An optional parameter that displays only the session name.


-l

Displays more details - default output plus approval type, pool criteria, copysets, non-approved, invalid, and description.
-cptype copytype

An optional parameter that displays only the sessions with the copy type specified.
-state defined | active

Displays only the sessions that are in the state specified.

886

IBM TotalStorage Productivity Center: Getting Started

-status norm | warn | sev

Displays only the sessions that have the status specified.


-recov yes | no

An optional parameter that is set to yes or no to indicate whether the session can be considered recoverable, based on whether any sequences in the session can be considered recoverable.
-shadow yes | no

An optional parameter that indicates whether any part of the session is shadowing data.
-err yes | no

An optional parameter that shows sessions that have errors or no errors.


session_name [,...] |

An optional parameter that displays only the sessions with the session name specified. Separate multiple session names with a comma between each name. If no session name is specified, all sessions are displayed unless another filter is used. In our example you should see the following results for created sessions using the lssess command as shown in Example 15-1.
Example 15-1   lssess - defined sessions
repcli> lssess
Name            Status  State   Group    Type  Recover Shadow Err
=================================================================
PPRC_800_to_F20 warning Defined PPRC_src pprc  No      No     No
FC_F20_800      warning Defined FC_src   flash No      No     No

showsess command
You can see details about a certain session using lssess with the -l parameter or the showsess command.
showsess session_name

This command shows the following information (like lssess -l):
Name - the session name.
Copy type - Point-in-Time Copy or Continuous Synchronous Remote Copy.
State - Defined or Active.
Status - Unknown, Normal, Low, Medium, Severe, or Fatal.
Group - the name of the group of source volumes.
Source volumes - the number of volumes in the group being replicated by this session.
Approval status - Automatic or Manual.
Copysets - the number of copysets that the session is managing.
Non-approved - the number of copysets that have yet to be verified.
Invalid copysets - the number of copysets that were determined to be invalid.
Seq - valid sequence names are Remote Target for remote copy and Flashed Target for point-in-time copy. Use quotes around the entire flag, for example, "Flashed Target:location=RTP".
Chapter 15. Using TotalStorage Productivity Center for Replication

887

Pool Criteria - the exact location name or filter.
Shadow - yes or no. Indicates whether the session is shadowing data.
Recov - yes or no. Indicates whether all pairs in the session are recoverable.
Approve - yes or no. Indicates whether all copysets are approved.
Description - the user-defined session description.

In Example 15-2 you can see the result of the showsess command for defined sessions. You can compare the parameters to the information that you can get using the graphical interface, as described in Managing a Continuous Synchronous Remote Copy on page 873.
Example 15-2   showsess - defined sessions
repcli> showsess PPRC_800_to_F20
Name             PPRC_800_to_F20
Type             pprc
State            Defined
Status           warning
Group            PPRC_src
Source Volumes   2
Approval Status  Automatic
Copysets         2
Non-approved     0
Invalid          0
Seq              "Remote Target"
Pool Criteria    F20
Shadow           No
Recover          No
Err              No
Approve          Yes
Description      Remote copy of 2 volumes from ESS 800 to F20
AWN007080I Command completed successfully.
repcli> showsess FC_F20_800
Name             FC_F20_800
Type             flash
State            Defined
Status           warning
Group            FC_src
Source Volumes   4
Approval Status  Automatic
Copysets         4
Non-approved     0
Invalid          0
Seq              "Flashed Target"
Pool Criteria    P%
Shadow           No
Recover          No
Err              No
Approve          Yes
Description      Point in time copy of 4 volumes, 2 on ESS F20 and 2 on ESS 800
AWN007080I Command completed successfully.

15.3.2 Starting a session


This section shows the commands to start a replication session.


flashsess command
To run a created or terminated Point-in-Time Copy session, invoke flashsess command:
flashsess [-quiet] session_name [. . .]

session_name

Specifies the session name to be activated. Separate multiple session names with a white space between each name. Alternatively, use the dash (-) to specify that input for this parameter comes from an input stream (STDIN).
-quiet

An optional parameter that turns off the confirmation prompt for this command.
Note: In a batch program, use the -quiet parameter where available; otherwise the program waits for your confirmation.

Example 15-3 shows an example of the flashsess command.


Example 15-3   flashsess command
repcli> flashsess -quiet FC_F20_800
AWN007110I Command completed successfully.

To start a created, terminated, or suspended Continuous Synchronous Remote Copy session, invoke the startsess command as shown in Example 15-4.
Example 15-4   startsess command
repcli> startsess PPRC_800_to_F20
AWN007100I Command completed successfully.

Example 15-5 shows the status of the started sessions. The Point-in-Time Copy was invoked successfully, which is confirmed by the normal status and the yes value of the Recover parameter. However, the Continuous Synchronous Remote Copy is running (Active state) but not yet synchronized, which shows in the Recover and Status parameters.
Example 15-5   lssess - started sessions
repcli> lssess
Name            Status  State  Group    Type  Recover Shadow Err
================================================================
PPRC_800_to_F20 warning Active PPRC_src pprc  No      Yes    No
FC_F20_800      normal  Active FC_src   flash Yes     Yes    No

lsseq command
You should use two additional commands, lsseq and lspair, to get more details about the current state of sessions.
lsseq [ { -l |-s } ] [-recov yes|no] [-shadow yes|no] [-err yes|no] session_name

-s

An optional parameter that displays volumes only.


-l

An optional parameter that displays all valid output. This is the default.


-recov yes | no

An optional parameter that indicates whether any sequences in the session can be considered recoverable.
-shadow yes | no

An optional parameter that indicates whether or not the sequence is shadowing (copying) the data.
-err yes | no

An optional parameter that shows sessions that have errors or no errors.


session_name

Specifies the session name to be queried. In Example 15-6 you can find the time when the Point-in-Time Copy was run in the Timestamp column. For the Continuous Synchronous Remote Copy session, one pair is synchronized, which shows in the Recov Pairs parameter. The state of the Recov parameter changes to yes when all pairs are synchronized.
Example 15-6 lsseq - started sessions
repcli> lsseq FC_F20_800
Name           Recov Err Shadow Err Vols Recov Pairs Shadow Pairs Total Pairs Recov Timestamp
=====================================================================================================
Flashed Target Yes   No  Yes    0        4           4            4           2005/04/12 16:34:00 PDT
repcli>
repcli> lsseq PPRC_800_to_F20
Name          Recov Err Shadow Err Vols Recov Pairs Shadow Pairs Total Pairs Recov Timestamp
============================================================================================
Remote Target No    No  Yes    0        1           2            2           n/a

lspair command
lspair [ { -l |-s } ] { -seq sequence_name|-cpset source_vol_id } [-state defined |active|duplex|suspended|synch|flashed] [-recov yes|no] [-shadow yes|no] [-new yes|no] [-err yes|no] session_name | -

You can use the lspair command to list the source and target of the copy service pairs and their status.
-s

An optional parameter that displays information about only pairs.


-l

An optional parameter that displays the default output, including pairs.


-seq sequence_name

Displays only pairs of the sequence name specified. Mutually exclusive with -cpset.
-cpset source_vol_id

Specifies the source volume ID of the copy set on which you want a list of pairs. Mutually exclusive with -seq.
-state defined | active | duplex | suspended | synch | flashed

An optional parameter that displays the state. The state can be defined, active, duplex, suspended, synch, or flashed.
-recov yes | no

An optional parameter that displays only pairs in the corresponding recoverable state.


-shadow yes | no

An optional parameter that displays only pairs that are in the shadowing state specified.
-new yes | no

An optional parameter that displays only pairs that are in the new state specified.
-err yes | no

An optional parameter that displays only pairs that are in the error state.

session_name

The session name by which the pairs are identified.

In Example 15-7 you can see details about the volume pairs. For the Continuous Synchronous Remote Copy session, one pair of volumes is synchronized, which shows the Duplex state, but the second one is still synchronizing.
Example 15-7 lspair - started sessions
repcli> lspair -seq 'Flashed Target' FC_F20_800
Source                  Target                  State   Recov Shadow New Copyset                 Timestamp               Last result
====================================================================================================================================
ESS:2105.16603:VOL:1702 ESS:2105.16603:VOL:1706 Flashed Yes   Yes    No  ESS:2105.16603:VOL:1702 2005/04/12 16:34:00 PDT IWNR2016I
ESS:2105.16603:VOL:1703 ESS:2105.16603:VOL:1705 Flashed Yes   Yes    No  ESS:2105.16603:VOL:1703 2005/04/12 16:34:00 PDT IWNR2016I
ESS:2105.22513:VOL:1300 ESS:2105.22513:VOL:1305 Flashed Yes   Yes    No  ESS:2105.22513:VOL:1300 2005/04/12 16:34:00 PDT IWNR2016I
ESS:2105.22513:VOL:1301 ESS:2105.22513:VOL:1304 Flashed Yes   Yes    No  ESS:2105.22513:VOL:1301 2005/04/12 16:34:00 PDT IWNR2016I
repcli>
repcli> lspair -seq 'Remote Target' PPRC_800_to_F20
Source                  Target                  State         Recov Shadow New Copyset                 Timestamp Last result
============================================================================================================================
ESS:2105.22513:VOL:1302 ESS:2105.16603:VOL:1707 Duplex        Yes   Yes    No  ESS:2105.22513:VOL:1302 n/a       IWNR2011I
ESS:2105.22513:VOL:1303 ESS:2105.16603:VOL:1708 SYNCHRONIZING No    Yes    Yes ESS:2105.22513:VOL:1303 n/a       IWNR2011I

When all volume pairs of Continuous Synchronous Remote Copy session are synchronized, you should get results as shown in Example 15-8.
Example 15-8 Duplex state of Continuous Synchronous Remote Copy session
repcli> showsess PPRC_800_to_F20
Name             PPRC_800_to_F20
Type             pprc
State            Active
Status           normal
Group            PPRC_src
Source Volumes   2
Approval Status  Automatic
Copysets         2
Non-approved     0
Invalid          0
Seq              "Remote Target"
Pool Criteria    F20
Shadow           Yes
Recover          Yes
Err              No
Approve          Yes
Description      Remote copy of 2 volumes from ESS 800 to F20
AWN007080I Command completed successfully.
repcli>
repcli> lsseq PPRC_800_to_F20
Name          Recov Err Shadow Err Vols Recov Pairs Shadow Pairs Total Pairs Recov Timestamp
============================================================================================
Remote Target Yes   No  Yes    0        2           2            2           n/a
repcli>
repcli> lspair -seq 'Remote Target' PPRC_800_to_F20
Source                  Target                  State  Recov Shadow New Copyset                 Timestamp Last result
=====================================================================================================================
ESS:2105.22513:VOL:1302 ESS:2105.16603:VOL:1707 Duplex Yes   Yes    No  ESS:2105.22513:VOL:1302 n/a       IWNR2011I
ESS:2105.22513:VOL:1303 ESS:2105.16603:VOL:1708 Duplex Yes   Yes    No  ESS:2105.22513:VOL:1303 n/a       IWNR2011I


15.3.3 Suspending a session


TotalStorage Productivity Center for Replication allows you to suspend a Continuous Synchronous Remote Copy session if you require a consistent state of the data on the remote site, which can be used, for example, to do a backup. All changes are registered, and when you restart a suspended session, only the modified data is copied to the remote volumes to regain the synchronized state. (A sketch of a typical backup window built from these commands follows Example 15-11.)

suspendsess command
You can use suspendsess to suspend a Continuous Synchronous Remote Copy session. To restart a session, invoke the startsess command.
Note: To keep data consistency, use the -type consist parameter for the suspendsess command.

suspendsess [ { -help|-h|-? } ] [-quiet] -type consist|immed session_name ... | -

-quiet

An optional parameter that turns off the confirmation prompt for this command.
-type consist | immed

Specifies the type of session to suspend. Specify consist to freeze a PPRC session, or specify immed (for immediately) to stop a session.
session_name [...] | -

Specifies the session name to be suspended. Separate multiple session names with a white space between each name. Alternatively, use the dash (-) to specify that input for this parameter comes from an input stream (STDIN). Example 15-9 shows the suspendsess command.
Example 15-9   suspendsess
repcli> suspendsess -quiet -type consist PPRC_800_to_F20
AWN007140I Command completed successfully.

When a Continuous Synchronous Remote Copy session is suspended, you should see results like those shown in Example 15-10 from the lssess command. Notice that the session is recoverable and is not shadowing.
Example 15-10   lssess - suspended session
repcli> lssess PPRC_800_to_F20
Name            Status State  Group    Type Recover Shadow Err
==============================================================
PPRC_800_to_F20 normal Active PPRC_src pprc Yes     No     No

lspair command
Invoke the lspair command to see that all volume pairs are suspended and the time when the session was frozen, as shown in Example 15-11.
Example 15-11 lspair - suspended session
repcli> lspair -seq 'Remote Target' PPRC_800_to_F20
Source                  Target                  State     Recov Shadow New Copyset                 Timestamp               Last result
======================================================================================================================================
ESS:2105.22513:VOL:1302 ESS:2105.16603:VOL:1707 Suspended Yes   No     No  ESS:2105.22513:VOL:1302 2005/04/12 19:43:25 PDT IWNR2015I
ESS:2105.22513:VOL:1303 ESS:2105.16603:VOL:1708 Suspended Yes   No     No  ESS:2105.22513:VOL:1303 2005/04/12 19:43:25 PDT IWNR2015I
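As a usage sketch (not part of the original examples), a typical backup window for the session used in this section could look like the following sequence. The lsseq step confirms that the sequence is recoverable before the backup of the target volumes is started on the remote site; after the backup completes, startsess resynchronizes the pairs by copying only the changed data. The backup step itself depends on your environment and is shown only as a placeholder.

repcli> suspendsess -quiet -type consist PPRC_800_to_F20
AWN007140I Command completed successfully.
repcli> lsseq -recov yes PPRC_800_to_F20
(back up the target volumes on the remote ESS)
repcli> startsess PPRC_800_to_F20
AWN007100I Command completed successfully.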


15.3.4 Terminating a session


This section details the commands used to terminate a replication session.

stopflashsess command
You can use the stopflashsess command at any point during the life of a Point-in-Time Copy session once that session is in active state. This command withdraws all relationships between volumes on the storage subsystem. Example 15-12 shows an example of the stopflashsess command.
Example 15-12   stopflashsess
repcli> stopflashsess -quiet FC_F20_800
AWN007150I Command completed successfully.
repcli> lssess FC_F20_800
Name       Status  State   Group  Type  Recover Shadow Err
==========================================================
FC_F20_800 warning Defined FC_src flash No      No     No
repcli> showsess FC_F20_800
Name             FC_F20_800
Type             flash
State            Defined
Status           warning
Group            FC_src
Source Volumes   4
Approval Status  Automatic
Copysets         4
Non-approved     0
Invalid          0
Seq              "Flashed Target"
Pool Criteria    P%
Shadow           No
Recover          No
Err              No
Approve          Yes
Description      Point in time copy of 4 volumes, 2 on ESS F20 and 2 on ESS 800
AWN007080I Command completed successfully.

stopsess command
To stop a Continuous Synchronous Remote Copy session, you can use the stopsess command at any point during the life of a session once that session is in the active state. This command withdraws the relationship on the hardware.
stopsess [-quiet] session_name [. . .]

-quiet

An optional parameter that turns off the confirmation prompt for this command.
session_name [...] | -

Specifies the session name to be stopped. Separate multiple session names with a white space between each name. Alternatively, use the dash (-) to specify that input for this parameter comes from an input stream (STDIN).


Example 15-13 shows an example of the stopsess command.


Example 15-13   stopsess
repcli> stopsess -quiet PPRC_800_to_F20
AWN007120I Command completed successfully.
repcli> lssess PPRC_800_to_F20
Name            Status  State   Group    Type Recover Shadow Err
================================================================
PPRC_800_to_F20 warning Defined PPRC_src pprc No      No     No
repcli> showsess PPRC_800_to_F20
Name             PPRC_800_to_F20
Type             pprc
State            Defined
Status           warning
Group            PPRC_src
Source Volumes   2
Approval Status  Automatic
Copysets         2
Non-approved     0
Invalid          0
Seq              "Remote Target"
Pool Criteria    F20
Shadow           No
Recover          No
Err              No
Approve          Yes
Description      Remote copy of 2 volumes from ESS 800 to F20
AWN007080I Command completed successfully.

Output format
This section details the commands that control the output format of repcli commands.

setoutput command
You can use the setoutput command to display and change the current output settings for repcli commands. The output format set by this command remains in effect for the duration of a command session or until the options are reset.
setoutput [ { -help|-h|-? } ] [-p on|off] [-r #] [-fmt default|xml|delim|stanza] [delim character] [-hdr on|off] [-v on|off]

? | h | help
Displays a detailed description of this command, including syntax, parameter descriptions, and examples. If you specify a help option, all other command options are ignored.
fmt

Specifies the format of the output. You can specify one of the following values:
default

Specifies that output is to be displayed in a tabular format using spaces as the delimiter between the columns. This is the default value.


delim

Specifies that output is to be displayed in a tabular format using the specified character to separate the columns. If you use a shell metacharacter (for example, * or \t) as the delimiting character, enclose the character in single quotation marks (') or double quotation marks ("). A blank space is not a valid character.
xml

Specifies that output is to be displayed using XML format


stanza

Specifies that output is to be displayed in rows

delim character
Specifies character to separate the columns when -fmt delim parameter is used

p
Specifies whether to display one page of text at a time or all text at once.
off

Displays all text at one time. This is the default value when the repcli command is run in single-shot mode.
on

Displays one page of text at time. Pressing any key displays the next page. This is the default value when the repcli command is run in interactive mode.
hdr

Specifies whether to display the table header.


on

Displays the table header. This is the default value.


off

Does not display the table header.


r number

Specifies the number of rows per page to display when the p parameter is on. The default is 24 rows. You can specify a value from 1 to 100.
v

Specifies whether to enable verbose mode.


off

Disables verbose mode. This is the default value.


on

Enables verbose mode.

Example 15-14 shows the current output settings.
Example 15-14   Default output settings
repcli> setoutput
Paging Rows Format  Headers Verbose Banner
==========================================
On     22   Default On      Off     Off
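As a hedged illustration (not part of the original example set), you could switch the session to a script-friendly format with a single setoutput call; the -delim flag spelling follows Example 15-18, and running setoutput again with no parameters afterwards should show the updated settings:

repcli> setoutput -fmt delim -delim ';' -p off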


If you want to use an output format other than the current format for a single command only, rather than changing it for the whole repcli session with setoutput, you can specify output parameters on the command itself. The following commands accept output parameters:
lssess lspair lsseq

The output parameters for these commands are:


[-fmt default|xml|delim|stanza] [-delim character] [-hdr on|off] [-v on|off]

The syntax is the same as for the setoutput command. You can see the different formats of output for the lssess command in Example 15-15, Example 15-16, Example 15-17, and Example 15-18.
Example 15-15   default output format
repcli> lssess PPRC_800_to_F20
Name            Status State  Group    Type Recover Shadow Err
==============================================================
PPRC_800_to_F20 normal Active PPRC_src pprc Yes     Yes    No

Example 15-16   XML output format
repcli> lssess -fmt xml PPRC_800_to_F20
<IRETURNVALUE>
<INSTANCE CLASSNAME="RM_Session"><PROPERTY NAME="session_name" TYPE="string"><VALUE TYPE="string">PPRC_800_to_F20</VALUE></PROPERTY><PROPERTY NAME="cptype" TYPE="string"><VALUE TYPE="string">pprc</VALUE></PROPERTY><PROPERTY NAME="state" TYPE="string"><VALUE TYPE="string">Active</VALUE></PROPERTY><PROPERTY NAME="status" TYPE="string"><VALUE TYPE="string">normal</VALUE></PROPERTY><PROPERTY NAME="srcgrp" TYPE="string"><VALUE TYPE="string">PPRC_src</VALUE></PROPERTY><PROPERTY NAME="shadow" TYPE="string"><VALUE TYPE="string">Yes</VALUE></PROPERTY><PROPERTY NAME="recov" TYPE="string"><VALUE TYPE="string">Yes</VALUE></PROPERTY><PROPERTY NAME="err" TYPE="string"><VALUE TYPE="string">No</VALUE></PROPERTY></INSTANCE>
</IRETURNVALUE>

Example 15-17   stanza output format
repcli> lssess -fmt stanza PPRC_800_to_F20
Name            PPRC_800_to_F20
Status          normal
State           Active
Group           PPRC_src
Type            pprc
Recover         Yes
Shadow          Yes
Err             No
repcli> lssess -l -fmt stanza PPRC_800_to_F20
Name            PPRC_800_to_F20
Status          normal
State           Active
Group           PPRC_src
Type            pprc
Recover         Yes
Shadow          Yes
Err             No
Approval Status Automatic
Pool Criteria   F20
Copysets        2
Non-approved    0
Invalid         0
Description     Remote copy of 2 volumes from ESS 800 to F20
Seq             "Remote Target"
Source Volumes  2
Approve         Yes

Example 15-18   delim output format
repcli> lssess -fmt delim -delim ',' PPRC_800_to_F20
Name,Status,State,Group,Type,Recover,Shadow,Err
===============================================
PPRC_800_to_F20,normal,Active,PPRC_src,pprc,Yes,Yes,No

Note: Use the lssess -l command instead of showsess in batch programs, because showsess shows results only in one format, stanza. See Example 15-19.

Example 15-19 is a sample lssess -l command that can easily be used in a batch program.
Example 15-19 Using lssess with -l (long) parameter in delim format
repcli> lssess -l -fmt delim PPRC_800_to_F20
Name,Status,State,Group,Type,Recover,Shadow,Err,Approval Status,Pool Criteria,Copysets,Non-approved,Invalid,Description,Seq,Source Volumes,Approve
==================================================================================================================================================
PPRC_800_to_F20,normal,Active,PPRC_src,pprc,Yes,Yes,No,Automatic,F20,2,0,0,Remote copy of 2 volumes from ESS 800 to F20,"Remote Target",2,Yes
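For batch use, the same command can be run in single-shot mode, as described for the repcli utility at the beginning of 15.3, so that a script can capture the delimited line directly. The following invocation is a sketch only; treat the exact prompt, header, and separator behavior as assumptions to verify in your environment (-hdr off is used here to suppress the header row):

C:\Program Files\IBM\mdm\rm\rmcli> repcli lssess -l -fmt delim -hdr off PPRC_800_to_F20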


Chapter 16. Hints, tips, and good-to-knows


This chapter provides useful information about the components of IBM TotalStorage Productivity Center, such as:
- Configuring Service Location Protocol (SLP)
- User IDs and password locations
- Determining agent status
- Launchpad customization


16.1 SLP configuration recommendation


Some configuration recommendations are provided to enable Productivity Center for Disk to discover a larger set of storage devices. These recommendations cover some of the more common SLP configuration problems. This section discusses router configuration, SLP Directory Agent configuration, and environment configuration.

Router configuration
Configure the routers in the network to enable general multicasting, or at least to allow multicasting for the SLP multicast address 239.255.255.253 and port 427. The routers of interest are those associated with subnets that contain one or more storage devices to be discovered and managed by Productivity Center for Disk. To configure your router hardware and software, refer to your router reference and configuration documentation.

SLP Directory Agent configuration


Configure the SLP Directory Agents (DAs) to circumvent the multicast limitations. With statically configured DAs, all service requests are unicast by the User Agent (UA). Therefore, it is possible to configure one DA for each subnet that contains storage devices that are to be discovered by Productivity Center for Disk. One DA is sufficient for each such subnet. Each of these DAs can discover all services within its own subnet, but no services outside its own subnet. To allow Productivity Center for Disk to discover all of the devices, you must statically configure it with the addresses of each of these DAs. You can accomplish this by using the Productivity Center for Disk Discovery Preference panel as discussed in SLP DA definition on page 248. You can use this panel to enter a list of DA addresses. Productivity Center for Disk sends unicast service requests to each of these statically configured DAs, and sends multicast service requests on the local subnet on which Productivity Center for Disk is installed.

Configure an SLP DA by changing the configuration of the SLP Service Agent (SA) that is included as part of an existing Common Information Model (CIM) Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA.
Note: The change from SA to DA does not affect the CIM Object Manager (CIMOM) service of the subject CIM Agent, which continues to function normally, sending registration and deregistration commands to the DA directly.
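As an illustration of that SA-to-DA change, the following is a minimal sketch. It assumes the CIM Agent's SLP service reads an OpenSLP-style slp.conf file (as the excerpt in 16.8.1, Enabling SLP tracing, suggests); the exact file location depends on the CIM Agent installation. Uncomment or add the line below and then restart the SLP service:

; In slp.conf: run this agent as a Directory Agent instead of a Service Agent
net.slp.isDA = true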

Environment configuration
It may be advantageous to configure SLP DAs in the following environments:
- Where there are other non-Productivity Center for Disk SLP UAs that frequently perform discovery on the available services: a DA ensures that the existing SAs are not overwhelmed by too many service requests.
- Where there are many SLP SAs: a DA helps decrease the network traffic that is generated by the multitude of service replies. It also ensures that all registered services can be discovered by a given UA.
We particularly recommend configuring an SLP DA when there are more than 60 SAs that need to respond to any given multicast service request.


16.1.1 SLP registration and slptool


Productivity Center for Disk uses SLP discovery. This requires that all of the CIMOMs that Productivity Center for Disk discovers are registered using the SLP. SLP can only discover CIMOMs that are registered in its IP subnet. For CIMOMs outside of the IP subnet, you need to use an SLP DA and register the CIMOM using slptool. Ensure that the CIM_InteropSchemaNamespace and Namespace attributes are specified. For example, type the following command:
slptool register service:wbem:https://myhost.com:port

Here, myhost.com is the name of the server hosting the CIMOM, and port is the port number of the service, such as 5989.
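To verify that the registration took effect, you can query the SLP service from the same host. This is only a sketch: slptool must be in the command path, and the attribute output depends on the CIM Agent that registered the service.

slptool findsrvs service:wbem
slptool findattrs service:wbem:https://myhost.com:5989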

16.2 Tivoli Common Agent Services


This section provides some tips that can help you when you run into problems during the registration of the Common Agent or the Resource Manager.

16.2.1 Locations of configured user IDs


This section lists the user IDs and their locations for IBM TotalStorage Productivity Center components.

Resource Manager
You can find the locations of the configured user ID for the Resource Manager in the files listed in Table 16-1.
Table 16-1 Resource Manager user ID and password

Component/server                  File
Tivoli Agent Manager              ...\AgentManager\config\Authorization.xml
Productivity Center for Data      ...\TPC\Data\config\ep_manager.config
Productivity Center for Fabric    ...\TPC\Fabric\manager\conf\AgentManager\config\endpoint.properties

The password of the Resource Manager is stored in the same files, but it is in a readable format only on the Tivoli Agent Manager. The certificate subdirectory of the Resource Manager contains a file called pwd, which holds the agent registration password that is required to open the certificate files.


Common Agent
The Common Agent does not have a user ID. Instead it has a context name which the agent uses for communication (see Table 16-2).
Table 16-2 Common Agent user ID and password

Component/Server       File
Tivoli Agent Manager   ...\WebSphere\AppServer\installedApps\<server>\AgentManager.ear\AgentManager.war\WEB-INF\classes\resources\AgentManager.properties
Common Agent           ...\Tivoli\ep\config\endpoint.properties

AgentManager.properties stores the context name and the password in clear text, so it is easy to find the values if you did not note them during the installation. On the Common Agent, the password is encrypted and stored in the pwd file in the same directory as the certificate files, which are located in ...\Tivoli\ep\cert. If you go through the procedure of replacing the current certificates with new ones, do not forget to delete the pwd file, because it no longer matches the certificate file.

Ikeyman.exe
On all systems that have one of the Tivoli Common Agent Services components installed, you will find a tool called ikeyman.exe. You can use this tool to open the certificate file if you know the agent registration password. This is a quick way to verify that you still know the password that was used to lock a certificate file.

16.2.2 Resource Manager registration


The registration of a Resource Manager is performed during the start of the corresponding manager. If for any reason the registration fails, you can correct the problem and restart the manager. For Productivity Center for Fabric, you can search for the messages in the file ...\TPC\Fabric\manager\log\msgTPCFabric.log. Table 16-3 lists the messages and their meaning.
Table 16-3 Resource Manager messages in Productivity Center for Fabric

- BTACS0031I The Fabric Manager server is not registered with the Agent Manager
  Meaning: Registration failed
- BTACS0034I The Fabric Manager credentials are current
  Meaning: The Fabric Manager was restarted, the Agent Manager could be contacted, and everything is OK
- BTACS0032I Registering with the Agent Manager at 9.1.38.36:9511.
  Meaning: Attempt to register
- BTACS0037I Fabric Manager successfully registered with the Agent Manager. AgentManagerClient register
  Meaning: Registration was successful

16.2.3 Tivoli Agent Manager status


There is no GUI for looking at the status and configuration of the Agent Manager. Nevertheless, there is a way to obtain information about the manager when DB2 is used as the Agent Manager's registry.

Open the DB2 Control Center, navigate to the tables view, and open a table as shown in Figure 16-1.

Figure 16-1 Tivoli Agent Manager tables in DB2

In the table named IP_ADDRESS (Figure 16-2), you find the IP addresses of all registered Common Agents and Resource Managers.

Figure 16-2 DB2 table with IP addresses
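If you prefer the command line, you can look at the same data with the DB2 CLP. This is a sketch only: it assumes the Agent Manager registry database uses the default name IBMCDB from the installation, and the table may need a schema qualifier depending on the user ID you connect with.

db2 connect to IBMCDB
db2 "SELECT * FROM IP_ADDRESS"
db2 connect reset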


16.2.4 Registered Fabric Agents


If you want to know which agents have successfully registered with the Fabric Manager, follow these steps:
1. Open the Fabric Manager GUI. For example, click the Tivoli NetView 7.1.3 icon on the desktop.
2. From the menu bar, select SAN → Configuration.
3. You see the window shown in Figure 16-3. Click Configure Manager.

Figure 16-3 TotalStorage Productivity Center Configuration for Fabric


4. In the SAN Configuration window, click the Agent Configuration tab. In the right pane, you see a table with the IP addresses, the names and the state of the Fabric Agents, similar to the example in Figure 16-4.

Figure 16-4 Agent Configuration


16.2.5 Registered Data Agents


If you want to see which Data Agents have successfully registered with the Data Manager, complete these actions:
1. Open the Data Manager GUI.
2. In the left panel, expand Administrative Services → Agents.
For each registered agent, you should see an entry. If you click that entry, you see more information about the selected agent in the right panel, as shown in Figure 16-5. In the list, a green icon in front of the agent name indicates that the agent is up; a red icon indicates that the agent seems to be down, as is the case with mlres4.

Figure 16-5 Data Agent status

16.3 Launchpad
The TotalStorage Productivity Center comes with a Launchpad that is used to launch the interfaces to the individual managers. This includes the IBM Director (for both Productivity Center for Disk and Productivity Center for Replication), Productivity Center for Data, and Productivity Center for Fabric. When you plan to install the remote management interfaces for the different managers, you may also want to have the Launchpad installed on that machine. We explain the manual installation of the Launchpad. In 16.3.2, Launchpad customization on page 909, you see how you can integrate other applications into the Launchpad so that the Launchpad really becomes the central point for your storage management.


16.3.1 Launchpad installation


The Launchpad is usually installed by the Suite Installer. However, since it is a self-contained application, it can also be installed manually. At the time of this writing, you can find the Launchpad installer in the ...\W2K\TPC subdirectory of the IBM TotalStorage Productivity Center for Disk and Replication Base CD. Figure 16-6 shows the files of the TPC folder.

Figure 16-6 Launchpad install files

To install the Launchpad on a different machine, use the following procedure:
1. Start setup.exe.
2. After a few seconds, you see the Welcome display (Figure 16-7). Click Next.

Figure 16-7 Welcome screen


3. In the window shown in Figure 16-8, enter the installation directory name and click Next.

Figure 16-8 Installation directory

4. The next panel (Figure 16-9) summarizes the installation information. Click Install.

Figure 16-9 Installation status

5. The Launchpad installation takes less than five minutes. When it is completed, you see the installation status (see Figure 16-10). Click Finish to exit the installer.


Figure 16-10 Installation status

6. Start the Launchpad by clicking the IBM TotalStorage Productivity Center desktop icon.
Note: The start of the Launchpad takes some time. Be patient, because there is no indication as to whether it really started.

16.3.2 Launchpad customization


When installed, the Launchpad is used to start the interfaces to the managers that are included within Productivity Center. Two icons are predefined for other IBM storage management applications. See Table 16-4.
Table 16-4 Default components of the Launchpad

Label                                         Component                            Application                      Configuration
Manage Disk Performance and Replication      Productivity Center for Disk         IBM Director                     No configuration necessary
Manage Disk Performance and Replication      Productivity Center for Replication  IBM Director                     No configuration necessary
Manage Storage Network Fabric                Productivity Center for Fabric       Productivity Center for Fabric   No configuration necessary
Manage File System and Database Utilization  Productivity Center for Data         Productivity Center for Data     No configuration necessary
Manage Data Availability                     Tivoli Storage Manager               Tivoli Storage Manager           Specify URL
Automate Provisioning                        Tivoli Provisioning Manager          Tivoli Provisioning Manager      Specify URL


Every time the Launchpad starts, it looks for installed TotalStorage Productivity Center components. If a component is found, or if Manage Data Availability or Automate Provisioning is configured, TotalStorage Productivity Center changes the icon from the standard icon to an application-specific icon.

In addition, Productivity Center provides the capability to add your own launchable application to the Productivity Center user interface. The readme file (readme.html), which is located in C:\Program Files\IBM\ProductivityCenter, contains information about how to integrate your own applications into the Launchpad in the Customizing the Productivity Center User Interface section. You need to provide a properties file and a launcher program. The launcher program is an executable program, invoked by the Launchpad, that knows how to launch the management application. Launching may involve starting another executable or opening a Web page.

The key to the support is the properties file specification. Each Productivity Center user interface application must have a separate properties file located in the executable path com/ibm/storage/launchpad/resources/extensionN.properties. In this path, N must be an integer value starting at 0. Any number of extension applications can be specified, as long as sequential N values are used. That is, if there are four extension applications, the properties files must be named:
extension0.properties
extension1.properties
extension2.properties
extension3.properties

The basic format of the content of the extension file is as follows:
[product]
product.title        String to be used as the application title on the interface.
product.description  String to be used as the description of the product on the interface.
product.icon         Name of the graphic file to be used as the icon on the interface. The graphic file must be in a path relative to the execution path for the Productivity Center user interface executable. The main Launchpad application uses a default icon if this is not specified or cannot be found.
product.exepath      Program that is invoked when the extension application is selected. The application prompts the user for the launcher executable if this value is not specified or cannot be found.

Example 16-1 shows a sample extension file with the name extension0.properties.
Example 16-1 Example of an extension0.properties file
[PartnerProduct]
PartnerProduct.title = Vendor Extension 1
PartnerProduct.description = Description of the product.
PartnerProduct.icon = ./images/otherProd1_16.gif
PartnerProduct.exepath = LaunchTest

Attention: We used explorer.exe http://some.url.com as a Partner Product. At the time of this writing, we received an error message every time we tried to start the application. Figure 16-11 shows an example of that message. The error message was triggered as a result of explorer.exe always returning an error code of 9009. Nevertheless, the application was launched without any problems.


Figure 16-11 Launchpad error message

16.4 Remote consoles


All four managers that are part of TotalStorage Productivity Center include a console that can be installed on a normal Desktop PC to work with managers remotely. The consoles for Productivity Center for Data and Productivity Center for Fabric can be installed by using the Suite Installer. For Productivity Center for Disk, only the command line interface can be installed by using the Suite Installer. Installation of the remote consoles is described in each component customization chapter.

16.5 Verifying whether a port is in use


A quick way to verify whether a port is in use, or to see if a certain application is running (of which you know the port), is to open a telnet connection to that machine. Normally, when you do not specify a port with telnet, you are connected to port 23 on the target machine. You can change this and try to connect to any other port by adding the port number after the target address. In the following example, we try to determine if the common agent is running on a certain machine. The port 9510 is the default port of the common agent. From a command prompt, enter:
c:\>telnet 9.1.38.104 9510

If the common agent is running, it listens for requests on that port and opens a connection. You simply see an empty screen. The common agent is not running if you see the message Connecting To 9.1.38.104...Could not open a connection to host on port 9510 : Connect failed.
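If you are logged on to the target machine itself, another quick check is to list the listening sockets. This is a sketch only; adjust the port number to the service you are interested in:

netstat -an | find ":9510"

If the port is in use, netstat prints at least one line for that port, typically in the LISTENING or ESTABLISHED state; no output means nothing is bound to the port.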

16.6 Manually removing old CIMOM entries


It may be necessary from time to time to remove CIMOM entries from the Productivity Center common base. This can happen if you move a CIMOM to another server in your environment or change the CIMOM IP address. Productivity Center common base does not allow direct removal of a CIMOM entry using the IBM Director interface. To delete a CIMOM, remove the data rows manually from DB2 using the following process:
- Delete any non-existing storage devices from the TotalStorage Productivity Center that are associated with the CIMOM entry to be removed.
- Launch DB2 Control Center.
- Navigate to the DMCOSERV database.
- Locate the DMCIMOM table.


- Delete the data rows relating to the old CIMOMs.
- Commit the changes to the DMCIMOM table.
- Locate the BASEENTITY table.
- Filter the rows on DISCRIM_BASEENTITY = DMCIMOM.
- Delete the data rows relating to the old CIMOMs.
- Commit the changes to the BASEENTITY table.
- Locate the DMREFERENCE table.
- Delete the data rows relating to the old CIMOM(s).
- Commit the changes to the DMREFERENCE table.

The following screen captures illustrate the process. Before deleting a non-existing CIMOM through the DB2 tables, you should first delete any storage devices that are associated with the CIMOM in TotalStorage Productivity Center. Right-click the device and select Delete as shown in Figure 16-12.

Figure 16-12 Delete invalid device from TotalStorage Productivity Center


Launch DB2 Control Center (Figure 16-13). This is a general administration tool for managing DB2 databases and tables.
Attention: DB2 Control Center is a database administration tool. It gives you direct and complete access to the data stored in all the TotalStorage Productivity Center databases. Altering data through this tool can cause damage to the TotalStorage Productivity Center environment. Be careful not to alter data unnecessarily using this tool.

Figure 16-13 Launching DB2 Control Center

Navigate down the structure in the left-hand pane to open the DMCOSERV database, then click the Tables option. A list of tables for this database appears in the upper right-hand pane, as seen in Figure 16-14.


Figure 16-14 Navigate to the DMCOSERV database

Locate the DMCIMOM table and double-click it to open a new window (Figure 16-15) showing the data rows. Identify the CIMOM rows to be deleted by their IP address.

Figure 16-15 Deleting rows from the DMCIMOM table in DB2

Click once on the row to be deleted to select it. Click the Delete Row button to remove it from the table. When you have made your changes, you must click the Commit button for the table changes to take effect. Click Close to finish with this table.


If you make any mistakes before you have clicked the Commit button, you can click the Roll Back button to undo the changes. Next, locate the BASEENTITY table in the Control Center panel, as seen in Figure 16-14 on page 914, and open it with a double-click. This table contains many rows of data. Filter the data to show only entries that relate to CIMOMs (see Figure 16-16): click the Filter button to open the filter panel shown in Figure 16-17.

Figure 16-16 BASEENTITY table

Enter DMCIMOM in the values field as shown in Figure 16-17 and click OK.

Figure 16-17 Filtering the BASEENTITY table


The table data is now filtered to show only CIMOM entries as shown in Figure 16-18.

Figure 16-18 BASEENTITY table filtered to CIMOMs

Use a single click to select the entries, by IP address, that relate to the non-existent CIMOMs. Click Delete Row to remove them. Click Commit to make the changes effective, then Close. You can use Roll Back to undo any mistakes before a Commit. Now locate the DMREFERENCE table in the Control Center panel, as seen in Figure 16-14 on page 914, and open it with a double-click (see Figure 16-19).
Note: The DMREFERENCE table may contain more than one entry for each of the non-existent CIMOMs, or it may not contain any rows for them at all. Delete all relevant rows for the non-existent CIMOMs if they exist. If there are no rows in this table for the CIMOMs you are deleting, they are not linked to any devices, and this is OK.

Figure 16-19 DMREFERENCE table
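If you want to inspect these tables from the command line before (or instead of) opening them in the Control Center, the following read-only sketch uses the DB2 CLP. The schema qualifier for the tables and the column that holds the CIMOM IP address depend on your installation, so look at the rows first rather than issuing DELETE statements blindly:

db2 connect to DMCOSERV
db2 "SELECT * FROM DMCIMOM"
db2 "SELECT * FROM BASEENTITY WHERE DISCRIM_BASEENTITY = 'DMCIMOM'"
db2 "SELECT * FROM DMREFERENCE"
db2 connect reset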


16.7 Collecting logs for support


To better assist you with any issues you are experiencing with TotalStorage Productivity Center, a support representative may require log files. One set of logs can be gathered using a batch file installed with the product. Executing the collectLogs.bat file collects the logs and zips up the files automatically. The collectLogs.bat file is located in the directory C:\Program Files\IBM\Director\support. The generated zip file, named collectedLogs.zip, is located in C:\Program Files\IBM\Director\log.
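For example, from a command prompt on the server (the paths are the defaults named above; adjust them if IBM Director was installed elsewhere):

cd "C:\Program Files\IBM\Director\support"
collectLogs.bat
dir "C:\Program Files\IBM\Director\log\collectedLogs.zip"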

16.7.1 IBM Director logfiles


There are extensive logging capabilities within the IBM Director framework that can be used to isolate issues with TotalStorage Productivity Center for Disk CIMOM discovery or other events. To view the IBM Director event logs, click the Events task in the right-hand column to expand the hierarchy of available event log filters (all events, warning, critical, fatal). When you start the Event Log without specifying a filter or managed system, up to the last 100 events received over the last 24 hours are displayed. To view all logged events, double-click the Event Log in the Tasks pane of the Director Console, or right-click it and select the Open option. The Event Log is started and displays all logged events (see Figure 16-20).

Figure 16-20 IBM Director Event Log


In our case, note the large quantity of User ID/Password Incorrect from Server on CIMOM messages. For us, these messages were indicative of CIM agents (not ours) that resided in the local subnet and were not configured to provide data to our TotalStorage Productivity Center for Disk administrative user name and password (superuser/password). The Director console reports back that it has been rejected from accessing a CIM agent.

Viewing All Logged Events


When you start the Event Log without specifying a filter or managed system, up to the last 100 events received over the last 24 hours are displayed. To view all logged events, double-click the Event Log in the Tasks pane of the Director Console or right-click and select the Open option. The Event Log is started and displays all logged events.

Viewing Events by Filter Characteristics


Director supplies predefined filters, and you can create user-defined filters to reduce the number of displayed events to only those that meet a filtering criterion. To view a filtered list of events from all managed systems, click the + sign by the Event Log icon in the Tasks pane to display the event filters (see Figure 16-21), then double-click the event filter you want to apply (see Figure 16-22).
Note: The day and time of day characteristics of a filter do not apply when used in this context.


Figure 16-21 IBM Director Console - Event Log

Figure 16-22 Minor Events Filter


Viewing Events by System


IBM Director supplies predefined system groups and you can create user-defined groups to limit the number of displayed events to only those that meet a filtering criterion and originate from a specified managed system or group of systems. To view a filtered list of events from a single managed system or group, either drag the icon of the managed system or group from the Groups pane onto the event filter in the Tasks pane, or drag the event filter from the Tasks pane onto the managed system or group in the Groups pane. For example, dragging the Harmless Events to the IBM Director Systems Group produces a window similar to the one in Figure 16-23.

Figure 16-23 Harmless events

Deleting Events from the Event Log


To delete an event entry while viewing events in the Event Log, right-click the entry to display the context menu, then select Delete. You can also highlight one or more events and select the trash can icon or use the Edit → Delete menu item.

Creating an Event Filter for a Selected Event


To create a filter for a specific event, right-click the event, then select Create a Filter from the context menu (see Figure 16-24). The Event Filter Builder dialog is displayed.


Figure 16-24 Create specific filter

Changing the number of entries viewed in the Event Log


The number of event entries that are displayed can be controlled by specifying:
- The total number of entries displayed
- The time interval for entries displayed
By default, the Event Log displays the last 100 events over the last 24 hours. To change the number of entries displayed, select Options → Set Log View Count from the menu bar. The maximum number of event entries that can be viewed is equal to the maximum size of the event log. To change the time interval for entries displayed, select Options → Set Time Range from the menu bar.

16.7.2 Using Event Action Plans


IBM Director can produce a significant quantity of events in a very brief period of time. This can make searching for specific events difficult. Director supports the creation of Event Action Plans to filter events by any number of categories. Furthermore, you can apply actions to specific events, for example, to generate new log file outputs, to send messages to the console, or to e-mail server administrators. In our example, we create an action plan to filter the event log for the discovery of new CIMOMs. Aside from filtering the whole event log for these types of messages, we create an action to broadcast a message to our team that a new CIMOM has been detected by TotalStorage Productivity Center for Disk. See Event Action Plan Builder on page 416 for detailed information.

16.7.3 Following Discovery using Windows raswatch utility


To follow the process of device discovery, you can use raswatch, an IBM Director executable that can be accessed from the Windows 2000 command prompt by running:
raswatch -dev_mgr -high


Figure 16-25 is an example of the raswatch during the trace of TotalStorage Productivity Center discovery.

Figure 16-25 Using raswatch to trace TotalStorage Productivity Center discovery

The raswatch output can be very verbose and will scroll very quickly off the screen. Consider logging the output into a file using raswatch -dev_mgr -high > c:/testlog.txt. This will allow you to open the raswatch file in the Notepad editor and search for IP or hostname strings that will validate the TotalStorage Productivity Center discovery process.

16.7.4 DB2 database checking


To validate that the DB2 databases are functioning as they should, we make use of the DB2 UDB tools to check the overall database health, and to confirm that the tables we expect to be populated do get populated, for example, following a data collection task. The DB2 UDB tool that can be used to review the log files is called the Journal. To open the Journal utility, click Start → Programs → IBM DB2 → General Administration Tools → Journal. You should see a window like Figure 16-26.


Figure 16-26 DB2 Journal message viewer


Additional information about viewing and managing events logged in the DB2 Journal can be found in the help menus. By default, DB2 instance health checking is disabled. It may be advisable to enable the health monitor: even the default alert thresholds that are in place when the health monitor is enabled give the TotalStorage Productivity Center administrator at least some idea of any issues with the DB2 instance. You can open the DB2 Health Center by clicking Start → Programs → IBM DB2 → Monitoring Tools → Health Center (Figure 16-27).

Figure 16-27 DB2 Health Center panel


Remember that by default, the Health Center monitoring is disabled. You can tell by the green circle on our DB2 instance and associated databases that we have enabled monitoring. At present we have no issues that have generated any alerts. Figure 16-28 shows the typical default threshold settings for each of the TotalStorage Productivity Center for Disk databases.

Figure 16-28 DB2 Object Health Indicator settings panel


Aside from viewing the DB2 events, or ensuring that event monitoring is enabled, we can also review the contents of specific database tables to ensure that we are receiving data and that the appropriate table spaces are being populated. For example, if we have just performed an ESS data collection task, we should have entries in the following three tables in the PMDATA database:
- VPCCH: Volume data
- VPCRK: Array data
- VPCLUS: Cluster data
We can review the contents of these tables from within the DB2 Control Center by navigating through the tree on the left-hand side, from system, to instance, to the database, and to the tables, as shown in Figure 16-29.

Figure 16-29 Viewing the database tables


To view the contents of a specific table, right-click the table you want to view and select Sample Contents. You should see a table like the one in Figure 16-30.

Figure 16-30 Sample database contents

The presence of these rows in this table tells us that we have successfully performed a data collection task against the ESS with serial number 22219. We should also see data in the other tables cited above.
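The same check can be made quickly from the DB2 CLP. This sketch assumes the default PMDATA database alias; depending on the user ID you connect with, the tables may need a schema qualifier:

db2 connect to PMDATA
db2 "SELECT COUNT(*) FROM VPCCH"
db2 "SELECT COUNT(*) FROM VPCRK"
db2 "SELECT COUNT(*) FROM VPCLUS"
db2 connect reset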

16.7.5 IBM WebSphere tracing and logfile browsing


The first WebSphere logfile of interest is the startServer.log. This file is updated each time you perform the startup of the WebSphere application server. Figure 16-31 shows the WebSphere start logfile.


Figure 16-31 WebSphere Application Server server start logfile

A considerable amount of application level information can be obtained from within the IBM WebSphere framework, using the Administrative Console.

16.8 SLP and CIM Agent problem determination


We have already outlined some procedures for ensuring that the CIM agents are correctly configured in your TotalStorage Productivity Center environment in Chapter 5, CIMOM install and configuration on page 191.

Configuration guideline summary


Here is a summary of the guidelines:
1. It is advisable that the TotalStorage Productivity Center for Disk host and the SLP agent host (if they are on different servers) reside in their own subnet, isolated from other devices. This reduces the possibility that network traffic generated by CIM agents outside of your control impacts your TotalStorage Productivity Center for Disk host.
2. Make a list of the CIMOMs you intend to have registered on your SLP. The list should include:
   - The IP address of the CIM agent
   - The type and version of the CIM agent (SAN Volume Controller, ESS, FAStT, DS4000, DS8000, DS6000)
   This list can be used later as a starting point for the creation of your slp.reg file (Persistency of SLP registration on page 243).
3. Test that the CIM agents you intend to register with your SLP host are active (Confirming the ESS CIMOM is available on page 220).
4. Ensure that the user name and password on each CIM agent match your TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication user name and password. For ESS devices this should always be the case, since we register each ESS in the CIM agent using the command:
setdevice
addess <ip address of ESS> <specialist username> <specialist password>


   For SVC, the TotalStorage Productivity Center for Disk superuser name and password must be synchronized by creating the same account credentials in the SAN Volume Controller Console GUI.
5. Follow the TotalStorage Productivity Center for Disk discovery process using raswatch (Figure 16-25 on page 922) for evidence that the new CIM agent:
   a. Has been detected
   b. Has allowed TotalStorage Productivity Center for Disk to authenticate correctly
   The same activities may be traced in the IBM Director log files (Figure 16-20 on page 917).

16.8.1 Enabling SLP tracing


It is possible to modify the slp.conf file to enable verbose tracing of SLP registrations and other events of interest during problem determination. The following lines from the slp.conf file can be modified to enable the SLP logging. Simply remove the semicolon from the line, ;net.slp.traceMsg = true, and restart the SLP service to invoke the changes (Figure 16-32).

#----------------------------------------------------------------------------
# Tracing and Logging
#----------------------------------------------------------------------------
# A boolean controlling printing of messages about traffic with DAs.
# Default is false.
;net.slp.traceDATraffic = true

# A boolean controlling printing of details on SLP messages. The fields in
# all incoming messages and outgoing replies are printed. Default is false.
;net.slp.traceMsg = true

# A boolean controlling printing details when a SLP message is dropped for
# any reason. Default is false.
;net.slp.traceDrop = true

# A boolean controlling dumps of all registered services upon registration
# and deregistration. If true, the contents of the DA or SA server are
# dumped after a registration or deregistration occurs. Default is false.
;net.slp.traceReg = true


Figure 16-32 Enabling SLP tracing in slp.conf

The more detailed output from SLP tracing is in the slp logfile, located in c:/WINNT/slpd.log.


Important: After the required tracing information has been gathered, you should disable SLP tracing. The log file can become very large.

16.8.2 Device registration


Whenever the ESS Specialist, DS6000, or DS8000 account user name or password is changed, the device registration with the relevant CIMOM must be updated to reflect the change.
Important: If the device CIMOM registration change is not done, the TotalStorage Productivity Center for Disk functions such as data collection will fail.

16.9 Replication Manager problem determination


In this section, we present some methods and log sources for troubleshooting the Replication Manager component. Debugging almost all Replication Manager problems requires the WebSphere Application Server trace, and most problems also require an ICAT trace at a minimum.

In general, Replication Manager provides two major functions:
- Setting up a copy session
- Controlling a copy session

The majority of Replication Manager problems are symptoms of a problem with the underlying interface to the hardware. Replication Manager communicates with ICAT, which for ESS communicates with ESSNI, which talks to Copy Services, which talks to the actual ESS microcode. Any breakdown in communications along this path causes Replication Manager to behave incorrectly or not function at all. These are the major categories of interface problems:

1. Lack of state changes (indications) coming from the ESS to Replication Manager. Each time a copy relationship on a volume changes state, Replication Manager is notified using an indication. An indication is a CIM event that is delivered asynchronously from the actual underlying event. Replication Manager uses indications to update its knowledge of the physical copy relationships as they change dynamically. Loss of indications causes Replication Manager to appear to be stuck or not to have worked at all. If indications do not arrive after the Start operation is performed on a session, the session stays in the defined state and does not appear to be operating. Another symptom of loss of indications is that the volume relationship states as reported by the Copy Services application on the ESS do not match the states reported by Replication Manager.

2. Unexpected freeze (suspend) operations can be seen when Replication Manager loses its connection to the ICAT, or ICAT loses its connection to the ESS, for longer than 90 seconds. Replication Manager periodically checks that all ESSs are alive, and initiates a freeze when there is an active session on an ESS and that ESS does not respond to a presence check. Upon a timeout when waiting for ESS status, Replication Manager performs a freeze operation, since the timeout could be the first symptom of a disaster.

3. Extremely long durations can be seen to display the session properties panel or the path status panel. Replication Manager issues hardware queries to display these panels. Under some circumstances, when the ICAT or ESSNI does not respond to the query, the user sees an extremely long response time, or the user interface may hang completely.


16.9.1 Diagnosing an indications problem


In the WebSphere Application Server (WAS) trace.log file, look for trace entries similar to this example (the actual trace entry is on one line). The token (HWLAYER) identifies which Replication Manager subcomponent wrote this trace entry. In this example, the NIPPRCLogicalPathEvent item is the specific indication that was received.
[5/28/04 9:55:22:609 MST] 68e5f2b1 HWLAYER > com.ibm.storage.hw.ess.cim.ESSIndicationHandler handleIndication(CIMEvent) (9.11.192.145) CIM_ProcessIndication: NIPPRCLogicalPathEvent has occurred

Seeing trace entries of this type shows that Replication Manager is receiving indications properly. If Replication Manager is not receiving indications, then no entries of this type will be seen surrounding the event in question. If Replication Manager is not seeing indications, then usually one or more layers of the software stack need to be restarted.

16.9.2 Restarting the replication environment


An unknown hardware layer error message might appear immediately after installing Replication Manager. You might receive an unknown hardware layer message on the first Start operation for a Continuous Remote Copy session, or first Flash operation for a FlashCopy session, after installing Replication Manager. If this occurs, restart IBM Director and try the operation again. If the problem is still not resolved after restarting all of the system components in turn, then capture problem determination information including the ICAT logs, the WebSphere Application Server logs, and a state save on the ESS.

16.10 Enabling trace logging


To allow logging of the console output, you need to set up the Director Stdout Logging function. This is especially required for problem determination using the GUI. On the Windows platform, follow these steps:
1. Select Start → Run and enter regedit.exe.
2. Open the HKEY_LOCAL_MACHINE\SOFTWARE\Tivoli\Director\CurrentVersion key.
3. Modify the LogOutput value. Set the value to 1.
4. Reboot the server.
The output log location from the instructions above is X:\Program Files\IBM\Director\log (where X is the drive where the Director application was installed).
On the Linux platform, TWGRas.properties turns output logging on. You need to remove the comment from the last line in the file (twg.sysout=1) and ensure that you have set TWG_DEBUG_CONSOLE as an environment variable. For example, in bash: $ export TWG_DEBUG_CONSOLE=true
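For the Windows registry change in step 3, an alternative to editing the value interactively is the reg command. This is only a sketch: reg.exe may not be present on older Windows releases, and you should check the existing data type of LogOutput in regedit first (the example assumes a string value, the reg add default):

rem Sketch only: sets LogOutput to 1 under the Director key named above
reg add "HKLM\SOFTWARE\Tivoli\Director\CurrentVersion" /v LogOutput /d 1 /f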


16.10.1 Enabling WebSphere Application Server trace


It is very useful to enable the WebSphere Application Server trace tool when troubleshooting WebSphere Application Server related problems. This section details the steps to make the most of this tool. To enable traces for the WebSphere-based components of TotalStorage Productivity Center, the corresponding logging and trace settings have to be configured with the WebSphere Administrative Console. Tracing is disabled by default. Use the following steps to change the logging state:
1. Launch the WebSphere Application Server Administrative Console at the following URL:
http://servername:9090/admin

This will redirect the browser to the secure login page, and afterward the login goes to the WebSphere Application Server Administrative root page (see Figure 16-33).

Figure 16-33 WebSphere Application Server Admin root URL example


2. Click Servers (see Figure 16-34).

Figure 16-34 WebSphere Application Server trace tool - select servers example


3. Click Application Servers (see Figure 16-35).

Figure 16-35 WebSphere Application Server trace tool - select application servers example


4. Click server1 (see Figure 16-36).

Figure 16-36 WebSphere Application Server trace tool - select server 1 example


5. Click Logging and Tracing (see Figure 16-37).

Figure 16-37 WebSphere Application Server trace tool - select logging and tracing example


6. Click Diagnostic Trace (see Figure 16-38).

Figure 16-38 WebSphere Application Server trace tool - select diagnostic trace example

7. Check the enable trace box. Enter the required trace entries into the trace specification box, separated by colons. Insert all of the trace specifications in this table that might be used by TotalStorage Productivity Center.


Table 16-5 provides the default TotalStorage Productivity Center for Replication specifications (see Figure 16-39 for the trace).
Table 16-5 Default trace specifications

Component                                             Default
General format                                        Comp=level=state, where:
                                                      Comp is the component to trace
                                                      Level is the amount of trace
                                                      *State is enabled or disabled
Replication Manager Element Catalog                   ELEMCAT=all=enabled
Replication Manager Hardware layer                    HWLAYER=all=enabled
Replication Manager Session Manager                   REPMGR=all=enabled
Replication Manager integration with Device Manager   DMINT=all=enabled

*This is the value which should be set all the time, unless otherwise specified. For TotalStorage Productivity Center for Replication, the full setting is:
REPMGR=all=enabled:HWLAYER=all=enabled:DMINT=all=enabled:ELEMCAT=all=enabled

The remaining settings on this page are used to control how much trace is captured before it is overwritten. The actual settings to use depend on the server configuration, but here are some guidelines:
- Always choose the setting that sends the trace to a file.
- 20 MB seems to be a good size to use for each trace file.
- Enable at least one historical file. The more history is available, the better, since many TotalStorage Productivity Center operations are long-running and may produce a lot of trace data.
- Be sure to leave sufficient free space. The total trace will take up the number of historical files plus 1, multiplied by the size of each file.

Recommended settings:
- 20 MB per file
- 10 historical files (unless there is not enough disk space on the server)
Tip: The default file name is ${SERVER_LOG_ROOT}/trace.log and it is best to keep this default whenever possible. This will ensure that the automated tools to collect log and trace information can find the files. If the log files need to be written to a different location, for example to a different disk to manage free disk space, it is better to change the environment variable SERVER_LOG_ROOT. Refer to the WebSphere Application Server documentation for information about how to change this environment variable.


Figure 16-39 is a sample window showing several values have been changed.

Figure 16-39 WebSphere Application Server trace tool - several trace values changed

8. After making all changes, click OK, and then click Save to save the changes.
Tip: To change trace settings immediately without restarting WebSphere Application Server, make the equivalent change in the Runtime tab instead of the Configuration tab. When Apply or OK is clicked from the Runtime tab, the change takes effect immediately.


16.11 ESS user authentication problem


While creating a workload profile for Volume Performance Advisor (VPA) use, you may encounter a problem with ESS user authentication. If you are creating a workload profile for the first time for any ESS, you need to specify the ESS Specialist user name and password. Upon launching Manage Workload Profile for a new ESS from the IBM Director Console, you should see a panel similar to Figure 16-40. This panel allows you to specify the ESS Specialist user name and password for VPA use. If you do not see this panel and instead get an error, you may need to download patch 15484 from the IBM support Web site. This patch is for the IBM Director console. To apply this patch:
1. Download the patch; it consists of the file mdmpmconsole.jar.
2. If you want to be able to revert to the original state, back up the following file:
c:\Program Files\IBM\Director\classes\mdm\lib\mdmpmconsole.jar

3. Copy mdmpmconsole.jar to the same directory as in the previous step.
4. Restart the IBM Director server on the TotalStorage Productivity Center for Disk server: Start Menu → Control Panel → Administrative Tools → Services → IBM Director Server.
5. You may need to wait for some time until the IBM Director server has restarted.

Figure 16-40 ESS user validation panel

16.12 SVC Data collection task failure


The Performance Manager data collection task may fail because of a previously running task, since SVC data collection allows only one such task to run at a time. You may need to stop previous data collection tasks. You can stop the task using the Performance Manager command line interface (perfcli) tool or from the SVC Console. To stop the task from the CLI tool, go to C:\Program Files\IBM\mdm\pm\pmcli and run the command:
stopsvcollection -devtype svc <task_name>

You will be requested to confirm whether to stop the task. You may respond Y (yes).

Alternatively, you may launch the SVC Console Web browser interface. After logging in to the SVC Console, choose Clusters under the My Work column, select the check box for the respective SVC cluster in the Clusters column, and click Go. In the next panel, select Manage Cluster. You will see a panel similar to Figure 16-41. Choose Stop Statistics Collection.

Figure 16-41 Manage Cluster for SVC console


The next panel is shown in Figure 16-42.

Figure 16-42 Stopping data collection for SVC

Click Yes. This will stop all the performance data collection for SVC.


Chapter 17. Database management and reporting


This chapter provides information about how to maintain the DB2 database used by the components of the TotalStorage Productivity Center. Topics include deleting old data and importing and exporting the database for backup. Included in this chapter is an example custom report created from TotalStorage Productivity Center for Disk PM tables as well as suggestions for additional report content. You must have performance data collected prior to creating reports.


17.1 DB2 database overview


DB2 UDB is a relational database management system (RDBMS) that enables you to create, update, and control relational databases using the Structured Query Language (SQL). The DB2 UDB family of products is designed to meet the information needs of small and large businesses alike. IBM's DB2 database software is the worldwide market share leader in the relational database industry. It is a multimedia, Web-ready relational database management system delivering leading capabilities in reliability, performance, and scalability with less skill and fewer resources. DB2 is built on open standards for ease of access and sharing of information, and is the database of choice for customers and partners developing and deploying critical solutions.

The TotalStorage Productivity Center for Disk uses DB2 UDB as the backbone for its data storage and reporting functions. It is important to understand how TotalStorage Productivity Center for Disk allocates and uses DB2 resources so that you can efficiently customize and use the information provided by the Performance Management function. The TotalStorage Productivity Center for Disk incorporates IBM DB2 Express Version 8.1.2 with FixPak 2. DB2 Express is a specially tailored database offering for worldwide small and medium business (SMB) customers.

17.2 Database purging in TotalStorage Productivity Center


Data collected from performance data collection tasks is stored in a TotalStorage Productivity Center DB2 database. Two database functions enable you to manage Performance Manager data:

- Database-size monitoring: The sizing function on this panel shows the used space and free space in the database. The Space status advisor monitors the amount of space used by the Performance Manager database and advises you as to whether you should purge data. The advisor levels are:
  - Low: You do not need to purge data now.
  - High: You should purge data soon.
  - Critical: You need to purge data now.
  The disk space thresholds for these status categories are:
  - Low if utilization < 0.8
  - High if 0.8 <= utilization < 0.9
  - Critical otherwise
  That is, the delimiters between low, high, and critical are 80% and 90% full.

- Database purging: You use the Performance Manager database panel to specify properties for a performance database purge task. You can purge performance data based on the age of the data, the type of data, and the storage devices associated with the data. After you specify the database purge information, it is saved as a noninteractive IBM Director task. You schedule all performance data-collection tasks using the IBM Director scheduler function.


17.2.1 Performance manager database panel


To access the Performance Manager database panel (see Figure 17-2), use the path shown in Figure 17-1.

IBM Director Task pane → Multiple Device Manager → Manage Performance → Performance Database → Performance Manager Database

Figure 17-1 Accessing Performance Manager Database panel


Figure 17-2 Purge database definition example

The current database information is shown. Use this panel to specify the properties for a new performance database purge task. The fields are:
- Name: Type a name for the performance database purge task, from 1 to 250 characters.
- Description (optional): Type a description for the performance database purge task, from 1 to 250 characters.
- Device type: Select one or more storage device types for the performance database purge.
- Purge performance data older than: Select the maximum number of days or the number of years that you want the performance data to reside in the database before it is purged.
- Purge data containing threshold exception information: When you select this check box, you choose to purge exception data.


- Save as task: When you click Save as task, the information you specified is saved and the panel closes. The newly created task is saved as a noninteractive task in the IBM Director Task pane under Performance Manager Database. All performance database tasks can be scheduled using the IBM Director scheduler function, as seen in Figure 17-3.

Figure 17-3 Scheduling a database purge task

Right-click the newly created database purge task (Figure 17-3) to schedule it for execution. Execution is either immediate or scheduled as seen in Figure 17-4.

Figure 17-4 Executing a database purge task


17.3 IBM DB2 tool suite


The IBM DB2 tool suite, which is installed as a component of TotalStorage Productivity Center, provides a GUI to help you define and manage systems and databases. In the TotalStorage Productivity Center context, the tool suite can also be used to view and extract data gathered from the storage devices you are monitoring. A component of the tool suite is an interface to the DB2 Online Support Web site resources. In this section, we briefly describe the tools and provide examples of their use with TotalStorage Productivity Center.
Tip: For detailed information and usage examples on GUI tools for DB2 UDB Express, see An Introduction to DB2 UDB Express GUI tools (Part 1):
http://www-106.ibm.com/developerworks/db2/library/techarticle/0307chong/0307chong.html

And, An Introduction to DB2 UDB Express GUI tools (Part 2 of 2):


http://www-106.ibm.com/developerworks/db2/library/techarticle/0308chong/0308chong.html

To access the DB2 tool suite, use the path:

Start → Programs → IBM DB2

The following main menu options are available to you for use with the TotalStorage Productivity Center databases or any other DB2 database instance you may have on your TotalStorage Productivity Center server. We will be putting more emphasis on the Command Line Tools in a TotalStorage Productivity Center reporting framework.
- Command Line Tools
- Development Tools
- General Administration Tools
- Information
- Monitoring Tools
- Set-up Tools
Tip: For detailed information, use the DB2 Tool Suite Help Screens or DB2 Online Information at the following URL:
http://publib.boulder.ibm.com/infocenter/db2help/index.jsp

17.3.1 Command Line Tools


The Command Line Tools options are:
- Command Center
- Command Line Processor
- Command Window

Command line processor


A command line processor (CLP) command is typed at the command prompt (in either uppercase or lowercase) and is sent to the command shell by pressing the Enter key. Output is automatically directed to the standard output device. Piping and redirection are supported. The user is notified of successful and unsuccessful completion. Following execution of the command, control returns to the operating system command prompt, and the user may enter more commands.


Before accessing a database, you must perform preliminary tasks, such as starting DB2 with START DATABASE MANAGER. You must also connect to a database before it can be queried. Connect to a database by doing one of the following:
- Issue the SQL CONNECT TO database statement (see Figure 17-5 or Data Extraction using DB2 Command Line Processor Interface on page 981).
- Establish an implicit connection to the default database defined by the environment variable DB2DBDFT.

If a command exceeds the character limit allowed at the command prompt, a backslash (\) can be used as the line continuation character. When the command line processor encounters the line continuation character, it reads the next line and concatenates the characters contained on both lines. Alternatively, the -t option can be used to set a line termination character; in this case, the line continuation character is invalid, and all statements and commands must end with the line termination character. For more information, use the DB2 UDB online help.

In current releases of DB2 UDB, the CLP starts in interactive mode, which is indicated by a DOS-looking command prompt, db2 =>. In this mode, end users may enter one DB2 UDB command or one SQL statement by typing it at the prompt and pressing the Enter key. Figure 17-5 shows an example query in an IBM DB2 Command Line Processor window.

Figure 17-5 DB2 UDB command line processor example

In this example, a CONNECT DB2 UDB command was executed to connect to the TotalStorage Productivity Center Performance Manager database named PMDATA (the TotalStorage Productivity Center performance database alias). After this command completes, you can enter SELECT SQL statements against any of the tables in the PMDATA database. The commands are not case sensitive, but the user ID (MDMSUID) and password (MDMSPW) are case sensitive, based on how they were defined during database setup at installation or thereafter. Exit interactive mode by typing QUIT and pressing Enter. The DB2 UDB Tool Suite also has another CLP that operates in a non-interactive mode: the Command Window. It can be opened from the path:
Start → IBM DB2 → Command Line Tools → Command Window


SQL queries are invoked by prefixing each SQL statement with the characters db2, for example, db2 connect to pmdata. This CLP has the same case sensitivity requirements as the Command Line Processor. For additional examples of the Command Line Processor, refer to Data Extraction using DB2 Command Line Processor Interface on page 981.
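As a brief sketch of a Command Window session (the PMDATA alias and the db2admin user ID and password are the examples used elsewhere in this chapter; substitute your own credentials and table names):

db2 connect to pmdata user db2admin using db2admin
db2 "select m_mach_sn, m_mach_ty, m_model_n from vpvpd"
db2 connect reset

Each line is passed to DB2 as a single command, so no interactive db2 => prompt is started.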

17.3.2 Development Tools


To access the DB2 Development Tools, use the path:
Start → Programs → IBM DB2 → Development Tools

The Development Tools options are:
Development Center
Project Deployment Tools

DB2 Development Center provides an easy-to-use development environment for creating, installing, and testing stored procedures. It allows you to focus on creating your stored procedure logic rather than the details of registering, building, and installing stored procedures on a DB2 server. Additionally, with Development Center, you can develop stored procedures on one operating system and build them on other server operating systems. Development Center is a graphical application that supports rapid development. Using Development Center, you can perform the following tasks:
Create new stored procedures.
Build stored procedures on local and remote DB2 servers.
Modify and rebuild existing stored procedures.
Test and debug the execution of installed stored procedures.

17.3.3 General Administration Tools


Menu Path: Start → Programs → IBM DB2 → General Administration Tools

The General Administration Tools options are:
Control Center
Journal
Replication Center
Task Center

Control Center
A GUI for snapshot and event monitoring. For snapshots, it allows you to define performance variables in terms of the metrics returned by the database system monitor and graph them over time. For example, you can request that it take a snapshot and graph the progression of a performance variable over the last eight hours. Alerts can be set to notify the DBA when certain thresholds are reached. For event monitors, it allows you to create, activate, start, stop, and delete event monitors. See the online help for the Control Center for more information (also see Control Center on page 976).

Journal
You can start the Journal by selecting its icon from the Control Center toolbar. The Journal allows you to monitor pending jobs, running jobs, and job histories; review results; display the recovery history and alert messages; and show the log of DB2 messages.


Replication Center
The Replication Center stores the initial information about registered sources, subscription sets, and alert conditions in the control tables. The Capture program, the Apply program, and the Capture triggers update the control tables to indicate the progress of replication and to coordinate the processing of changes. The Replication Alert Monitor reads the control tables that have been updated by the Capture program, Apply program, and the Capture triggers to understand the problems and progress at a server.

Task Center
Use the Task Center to create, schedule, and run tasks. You can create the following types of tasks:
DB2 scripts, which contain DB2 commands
OS scripts, which contain operating system commands
MVS shell scripts to run on OS/390 and z/OS operating systems
JCL scripts to run in a host environment
Grouping tasks, which contain other tasks

Task schedules are managed by a scheduler, while the tasks are run on one or more systems, called run systems. You define the conditions for a task to fail or succeed with a success code set. Based on the success or failure of a task, or group of tasks, you can run additional tasks, disable scheduled tasks, and perform other actions.
Tip: You can also define notifications to send after a task completes. You can send an e-mail notification to people in your contacts list, or you can send a notification to the Journal.

17.3.4 Monitoring Tools


To access the DB2 Monitoring Tools use the path:
Start → Programs → IBM DB2 → Monitoring Tools

The DB2 Monitoring Tools options are:
Event Analyzer
Health Center
Indoubt Transaction Manager
Memory Visualizer

Event Analyzer
The Event Analyzer GUI is used for viewing file event monitor traces. Information collected on connections, deadlocks, overflows, transactions, statements, and subsections is organized and displayed in a tabular format. See the online help for the Event Analyzer for more information.

Health Center
Use the Health Center GUI tool to set up thresholds that, when exceeded, will prompt alert notifications, or even actions to relieve the situation. In other words, you can have the database manage itself!


17.4 DB2 Command Center overview


The IBM DB2 Command Center provides tools for database management and SQL capabilities for data compilation and extraction. You can then export the data you have retrieved and use it as the basis for management reporting, SAN environment problem determination, and host server application performance examination at the storage server level. Any of the query commands used in this book, in addition to your own custom queries, can be set up as scripts with the IBM DB2 Command Center. This section describes some of the functions available to you in the Command Center and provides examples.

Use the Command Center to execute DB2 commands and SQL statements, to execute z/OS or OS/390 host system console commands, to work with command scripts, and to view a graphical representation of the access plan for explained SQL statements. Working within the DB2 UDB Command Center, you can run SQL statements, DB2 UDB commands, and operating system commands in an interactive mode. As with most database GUI tools, you first connect to the database that you want to run your queries against. From there, the Command Center can display a list of tables to which you have access. The Command Center can also assist in writing a query by allowing you to pick table names, column names, filters, conditions, predicates, and other table elements from its windows.

You can also execute a stack of SQL statements within the Script tab portion of the window. Multiple SQL statements can be executed as a unit of work (UOW), which means each statement must complete successfully for the others to complete successfully. If any statement fails, the work done by all previously completed statements is rolled back.

In addition to the Command Center, you may want to use the IBM DB2 Control Center. These tools share much of the same functionality, but each has specific capabilities. Which tool you use will depend upon what type of information you want to extract and what your needs are regarding output of the data: screen output, file output, or both.
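As a hedged illustration of the Script tab, the following small script (using the PMDATA alias and VPVPD table referenced later in this chapter; adjust names for your environment) runs several statements in sequence:

connect to pmdata;
select m_mach_sn, m_mach_ty, m_model_n from vpvpd;
connect reset;

Each statement ends with the default semicolon terminator, and the statements are executed one after another when you click Execute.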

17.4.1 Command Center navigation example


To open the Command Center use the path:
Start → Programs → IBM DB2 → Command Line Tools → Command Center


The Command Center opens as shown in Figure 17-6.

Figure 17-6 Open Command Center example

Tip: Alternatively, if the Control Center is open, click the Command Center icon. The Command Center opens.

You can use the toolbar icons (see Figure 17-7) to open DB2 tools, view the legend for Command Center objects, and view DB2 information.

Figure 17-7 Command Center toolbar example

The toolbar icons are:

Execute

Executes the SQL statements, DB2 CLP commands, scripts, or MVS system commands that you enter on the Interactive or Script page. The results are displayed on the Query Results and the Access Plan pages.


Control Center

Opens the Control Center so that you can display all of your systems, databases, and database objects and perform administration tasks on them.

Replication Center

Opens the Replication Center so that you can design and set up your replication environment.

Satellite Administration Center

Opens the Satellite Administration Center so that you can set up and administer satellites and the information that is maintained in the satellite control tables.

Data Warehouse Center

Opens the Data Warehouse Center so that you can manage Data Warehouse objects.

Task Center

Opens the Task Center so that you can create, schedule, and execute tasks.

Information Catalog Center

Opens the Information Catalog Center so that you can manage your business metadata.

Health Center

Opens the Health Center so that you can work with alerts generated while using DB2.

Journal

Opens the Journal so that you can schedule jobs that are to run unattended and view notification log entries.

License Center

Opens the License Center so that you can display license status and usage information for the DB2 products installed on your system and use the License Center to configure your system for license monitoring.


Development Center

Opens the Development Center so that you can develop stored procedures, user-defined functions, and structured types.

Contacts

Opens the Contacts window where you can specify contact information for individual names or groups.

Tools Settings

Opens the Tools Settings notebook so that you can customize settings and properties for the administration tools and for replication tasks.

Legend

Opens the Legend window that displays all of the object icons available in the Command Center by icon and name.

Retrieve Table Data

Retrieves the data for the table you have executed SQL statements against and displays it on the Query Results page.

Create Access Plan

Creates the access plan for the current SQL statement and displays it on the Access Plan page.

Information Center

Opens the Information Center so that you can search for help on tasks, commands, and information in the DB2 library.

Help

Displays help for getting started with the Command Center.


Tip: We suggest that you use this extremely useful Help feature to navigate the DB2 Tool Suite until you are comfortable with the functions provided in the DB2 Express Tool Suite.


17.5 DB2 Command Center custom report example


It is very important to consider the following points when you create your SQL scripts, scheduled script tasks, or database query tasks, because these activities add overhead to your TotalStorage Productivity Center host processor. The following tips can help make these tasks more efficient, quicker to return results, and easier to use for problem determination, administrator notification, and processor load management:
For normal daily queries, it is better to use smaller queries run concurrently than one large, complex query.
The more processors and memory your TotalStorage Productivity Center host has, the faster your queries will run and, theoretically, the more complex your queries can be.
The more applications you have running concurrently on your TotalStorage Productivity Center host while you are executing SQL queries, the slower your host performance and the longer it will take for your queries to complete.
The more granular and frequent your PM and DM data collections are, the more load is placed on the TotalStorage Productivity Center server. This will have an effect on your SQL query completion speed (and vice versa). Keep this in mind when you are creating, scheduling, and executing these tasks.
For storage server monitoring SQL queries, query a minimal number of values that are important to you as an administrator. You can set up key performance indicators in your queries which are appropriate indicators of system performance health (see the sketch after this list).
More complex reports can be created by consolidating output from smaller queries, then compiling and further formatting the data exported from the TotalStorage Productivity Center database into a spreadsheet application. From there, you can set up macros to sort and filter the data into a final report which considers only system spikes, and use it for investigating bottlenecks and threshold exceptions.
Use your TotalStorage Productivity Center event notifications to guide you in subsequent database searches for pertinent information.
Use your ESS Specialist host-volume relationships as corresponding information in problem determination. The TotalStorage Productivity Center database queries can be defined to drill down to the volume ID or other suitably granular level. You can then correlate this to your ESS Specialist host-volume definitions. From there, you can determine which host applications are being utilized during certain suspect time periods in order to rectify those types of problems.
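The following is a minimal, KPI-style sketch rather than an official TotalStorage Productivity Center report. It assumes the DB2ADMIN schema and the VPCRK columns referenced later in this chapter, and the 80% utilization threshold is simply an illustrative value to adjust for your environment:

SELECT M_MACH_SN, PC_DEV_DATE_E, PC_DEV_TIME_E, PC_MSR_AVG, Q_SAMP_DEV_UTIL
FROM DB2ADMIN.VPCRK
WHERE Q_SAMP_DEV_UTIL > 80
ORDER BY Q_SAMP_DEV_UTIL DESC
FETCH FIRST 20 ROWS ONLY

A short query such as this returns only the busiest array samples, which keeps the load on the TotalStorage Productivity Center host low while still flagging intervals that warrant a more granular follow-up query.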

17.5.1 Extracting LUN data report


The level of detail available in the DB2 database takes us all the way down to the LUN level, which requires several reports to demonstrate. The necessary steps to achieve this type of report are detailed in the following sequence. Step 1 lists the TotalStorage Productivity Center table and the column within the table in the format (Table:Column). The information extracted from the tables is used in subsequent steps.
1. Run a base report against an Enterprise Storage Server (Model 800 in this example), broken down by serial number, displaying the following information - either across or down - in the header of the report output (P_TASK, M_MACH_SN, and M_CLUSTER_N are keys across the tables VPVPD, VPCFG, and VPCLUS):
Type (VPVPD:M_MACH_TY)
Model (VPVPD:M_MODEL_N)
Serial (VPVPD:M_MACH_SN)
RAM (VPVPD:M_RAM)
NVS (VPVPD:M_NVS)
from-date/time (VPCLUS:PC_DATE_B/PC_TIME_B)
to-date/time (VPCLUS:PC_DATE_E/PC_TIME_E)

Next, display three columns with Date over the left column, Cluster 1 over the middle column, and Cluster 2 over the right column. Sort by date/time in the left column (VPCLUS:PC_DEV_DATE_E/PC_DEV_TIME_E). The left column is keyed by:
VPCLUS:P_TASK
VPCLUS:M_MACH_SN
VPCLUS:PC_DEV_DATE_E/PC_DEV_TIME_E

Under each cluster column, display sub-column headers of I/O Rate, Avg Cache Hold Time, and NVS % Full. The center and right columns are keyed by:
VPCLUS:PC_DEV_DATE_E/PC_DEV_TIME_E
VPCLUS:M_CLUSTER_N

Under the center and right columns, display rows with the following:
I/O rate (VPCLUS:Q_CL_IO_RATE)
Average cache hold time (VPCLUS:Q_CL_AVG_HOLD_TIME)
NVS % full (VPCLUS:Q_CL_NVS_FULL_PRCT)
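A minimal SQL sketch of this step 1 report follows. It assumes the DB2ADMIN schema and joins VPVPD to VPCLUS on the keys named above; the serial number is the example ESS used later in this chapter, so substitute your own.

SELECT V.M_MACH_TY, V.M_MODEL_N, V.M_MACH_SN, V.M_RAM, V.M_NVS,
       C.M_CLUSTER_N, C.PC_DEV_DATE_E, C.PC_DEV_TIME_E,
       C.Q_CL_IO_RATE, C.Q_CL_AVG_HOLD_TIME, C.Q_CL_NVS_FULL_PRCT
FROM DB2ADMIN.VPVPD AS V, DB2ADMIN.VPCLUS AS C
WHERE V.P_TASK = C.P_TASK
  AND V.M_MACH_SN = C.M_MACH_SN
  AND V.M_MACH_SN = '2105.22219'
ORDER BY C.PC_DEV_DATE_E ASC, C.PC_DEV_TIME_E ASC, C.M_CLUSTER_N ASC

The spreadsheet layout described above (one sub-column group per cluster) can then be produced by pivoting the M_CLUSTER_N values after the data is exported.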

2. Select a cluster from step 1 to investigate further.
3. Then, build a report broken down by Logical SubSystem (LSS), displaying the following information in the header to reflect the row data:
Type, M_MACH_TY
Model, M_MODEL_N
Serial, M_MACH_SN
Cluster, M_CLUSTER_N
from-date/time, PC_DATE_B/PC_TIME_B
to-date/time, PC_DATE_E/PC_TIME_E

Sort by date/time and sub-sort by Device Adapter number (DA #):
PC_DEV_DATE_B/PC_DEV_TIME_B
PC_DEV_DATE_E/PC_DEV_TIME_E
M_CARD_NUM

Display row data with corresponding headers for:
DA ID #, M_CARD_NUM
Loop A or B, M_LOOP_ID
Array ID, M_ARRAY_ID
Array Type, M_STOR_TYPE
Average ms to satisfy all requests to this array, PC_IOR_AVG
% Time Array Busy, Q_SAMP_DEV_UTIL
Total I/O read/writes to this array, Q_IO_TOTAL
Total sequential read/writes to this array, Q_IO_SEQ

4. Select an LSS from step 3 to investigate further.
5. Then, build a report broken down by loop, displaying the following row information with the corresponding column headers:
Type, M_MACH_TY
Model, M_MODEL_N
Serial, M_MACH_SN
Cluster, M_CLUSTER_N
LSS, M_LSS_LA
DA #, M_CARD_NUM
Loop, M_LOOP_ID
from-date/time, PC_DATE_B/PC_TIME_B
to-date/time, PC_DATE_E/PC_TIME_E

Sort by date/time and sub-sort by Array:
PC_DEV_DATE_B/PC_DEV_TIME_B
PC_DEV_DATE_E/PC_DEV_TIME_E
M_ARRAY_ID

Display column headers for:
Array ID, M_ARRAY_ID
Array Type, M_STORE_TYPE
# of write reqs issued to this array, PC_IO_WRITE
# of ms to satisfy reads to this array, PC_RT_READ
# of ms to satisfy writes to this array, PC_RT_WRITE
Avg I/O rate for all requests, PC_IOR_AVG
# of ms avg to satisfy all requests to this array, PC_MSR_AVG
bytes read / sec from this array, PC_RBT_AVG
bytes written / sec from this array, PC_WBT_AVG
% time array busy, Q_SAMP_DEV_UTIL
total I/Os issued to this array, Q_IO_TOTAL
total sequential read/write requests to this array, Q_IO_SEQ
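A hedged SQL sketch of this array-level drill-down follows; the column-to-table mapping is assumed from the outline above and from the VPCRK examples later in this chapter, and the serial number is the example ESS, so adjust both for your environment.

SELECT M_MACH_SN, M_LSS_LA, M_CARD_NUM, M_ARRAY_ID,
       PC_DEV_DATE_E, PC_DEV_TIME_E,
       PC_IOR_AVG, PC_MSR_AVG, Q_SAMP_DEV_UTIL, Q_IO_TOTAL
FROM DB2ADMIN.VPCRK
WHERE M_MACH_SN = '2105.22219'
ORDER BY PC_DEV_DATE_E ASC, PC_DEV_TIME_E ASC, M_ARRAY_ID ASC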

6. Select an array from step 5 to investigate further.
7. Build a report broken down by volume, displaying the following row data with the corresponding column names:
Type, M_MACH_TY
Model, M_MODEL_N
Serial, M_MACH_SN
Cluster, M_CLUSTER_N
LSS, M_LSS_LA
DA #, M_CARD_NUM
Loop, M_LOOP_ID
Array, M_ARRAY_ID
from-date/time, PC_DATE_B/PC_TIME_B
to-date/time, PC_DATE_E/PC_TIME_E

Sort by date/time and sub-sort by volume:
PC_DEV_DATE_B/PC_DEV_TIME_B
PC_DEV_DATE_E/PC_DEV_TIME_E
M_VOL_NUM

Display column headers for:
# of the logical volume, M_VOL_NUM
F / C, M_VOL_TY
LUN Serial or SSID+base device address, M_VOL_ADDR



17.5.2 Command Center report


The preceding global-to-granular reporting sequence example can be achieved through the DB2 Tool Suite Command Center. The individual component reports can be scheduled in the DB2 Tool Suite and the output parsed and compiled into a spreadsheet format (table data output exported in .WKS format, for example) such as Lotus 1-2-3 or Excel. Once the data is formatted into the worksheet, the native macro function of the spreadsheet can be used to process the data further to produce graphical reports, summary reports, problem analysis documents for root cause analysis, performance analysis of SAN components, and so on, to meet the particular needs of your organization and SAN environment.


PMDATA - VPVPD table report example


In this section we present a detailed example of using the Command Center to extract data from the TotalStorage Productivity Center Performance Manager (PMDATA) database. We will be using the information provided in Step 1 of DB2 Command Center custom report example on page 956.
1. Use the following menu path to open the IBM DB2 Tool Suite Command Center:
Start → Programs → IBM DB2 → Command Line Tools → Command Center

2. To use the Interactive option of the Command Center, click the Interactive button in the middle left of the window (see Figure 17-8). This gives you the option of executing single SQL statements, DB2 CLP commands, or MVS system commands. You also have the option to run an existing command script or one that you create, as in this example.

Figure 17-8 Open Command Center window select Interactive button example


3. Utilize performance data collected from your storage servers using the PM database tables (PMDATA). With the Command Center window open, click the Database Connection (...) button to the right of the Database Connection bar. The Select Database window opens. Figure 17-9 shows an example of selecting the PM database (PMDATA) in the Command Center.

Figure 17-9 Connect to PMDATA database example

4. Once you have selected the database you want to work with, you can use the previously described functions to manage information within the database, extract data, or set up SQL queries using the Interactive or SQL Assist options. For our example, we use only the PMDATA database information. You could just as easily utilize the DM (DMCOSERV database) data to retrieve related information (such as Asset data) or the IBMDIR database to manage information in the IBM Director tables. We will proceed to run a base report against a specific Model 800 ESS.
Note: We can also run this report against all similar storage server types for which we have previously collected performance data stored in the PMDATA database. If we preferred to do this, we would include specific, or all, M_MACH_SN values (and their associated data) available in the database.


5. We have connected to the PMDATA database and will utilize the SQL Assist function within the Command Center for our query example. Click the SQL Assist button to begin our SQL query script definition (see Figure 17-10).
Note: Use SQL Assist to create SQL statements. With SQL Assist and some knowledge of SQL, you can create SQL SELECT statements. In some environments, you can also use SQL Assist to create INSERT, UPDATE, or DELETE statements. SQL Assist is a tool that uses an outline and details panels to help you organize the information that you need to create an SQL statement.

SQL Assist is available in the Control Center, the Command Center, the Replication Center, and the Development Center. See the Online Help for more information. SQL Assist and other functions within the DB2 Tool Suite incorporate button sensitive (mouse over) help pop-up windows to aid you in navigating and making your menu selections within the tools.

Figure 17-10 Connect to PMDATA database (Interactive) example


6. The SQL Assist window opens (see Figure 17-11). Click the Select radio button, in the middle right area of the window, since we are only going to retrieve data from the tables with our database queries in this example. The radio button options available in this window are the SQL query options: Select, Insert, Update, and Delete. We recommend that you do not run any SQL commands other than Select statements against your production database, because with Select you are only reading data from the database. If you want to manipulate your database further, make a copy of the database (not the production backup) and work with that copy.

Figure 17-11 SQL Assist window


7. Notice that in the lower pane of the SQL Assist window there is the initial syntax of a SELECT statement. We are now going to go through the steps to complete a comprehensive SQL SELECT query statement to view (or extract) data from our PMDATA database. In the upper left-hand pane, called Outline, double-click the FROM (Source Tables) icon and the Details pane will open in the center of the SQL Assist window. The Available Tables tree is listed. Select DB2ADMIN to find the tables that we want to use in our query (see Figure 17-12).

Figure 17-12 Selecting the DB2ADMIN table button pull-down listing of available tables


8. We will now select the VPVPD table which contains a storage server configuration snapshot. Use the slider on the Available tables pane until you can click on the VPVPD table. Now click the > button and the table name will be populated into the Selected Source Tables in the upper right corner of the SQL Assist window (see Figure 17-13). Notice how the VPVPD table selection has now been entered automatically into the rudimentary Select statement in the lower pane of the window. This window will show you the SQL validated statement you have created thus far and keep track as you proceed through the SQL Assist function.

Figure 17-13 Selecting VPVPD table


9. Now that you have selected the table you want to query for data, click the SELECT (Result Columns) icon in the Outline pane on the left of the window. The DB2ADMIN.VPVPD (Instance.Table) icon will appear in the pane. Click the + button on this icon and the VPVPD columns will appear in the table tree column listing (see Figure 17-14).

Figure 17-14 View the VPVPD available columns


10. Now select the VPVPD columns we want to view with our SQL statement. You can select the columns in several ways: select them one at a time by clicking a column name, select multiple column names by clicking them while holding the Shift key down, or select all of the columns by clicking the >> button. In our example, we selected the columns we want by holding down the Shift key and clicking the column names. Now click the > button to populate the Result columns pane (which is greyed out until you make your selections). After a moment, the column names appear in the right-hand pane and in the validated SQL query statement that is building in the lower SQL Assist pane (see Figure 17-15). User-defined field variables are found in the validated SQL statement.

Figure 17-15 Select VPVPD columns for our SQL query


11. Next, click the Where (Row filter) icon in the Outline pane. This presents a table list in the Available columns pane from which you can select where, and how, you want to filter the query (see Figure 17-16).

Figure 17-16 Where (Row filter) for column M_MACH_SN values (note mouse over help)


12. Now define the statement to return results where the M_MACH_SN value (ESS serial number) equals 2105.22219. Place the cursor in the Value field. You can either type in a specific value or use the pull-down arrow to bring up other options. One of the options gives you the opportunity to see field values already in the table. This opens a subsequent screen showing current column values, and you can select from that screen how many results to display. The default is to show 25 rows; you can increase or decrease this value through the menu. After you have made your selection for the Value, click the > button to enter the value into the Search Condition pane and into the validated SQL pane (see Figure 17-17).

Figure 17-17 M_MACH_SN column value selection


13. You will not be using the Group By or Having SQL query functions for this simple query example; see the help screens for further information on those query options. Now click Order By (Sort Criteria) in the Outline pane. In the Available columns pane is the VPVPD table content tree (column names). Click the P_CDATE value (performance collection date). Now click the > button and the column name appears in the Sort Columns pane. You have the option to select ASC (ascending) or DSC (descending) sort order; ASC is the default. Leave ASC as-is (see Figure 17-18).

Figure 17-18 Order By P_CDATE ascending order value definition

14.Now that you have completed building a query statement, click the Run button to view the results (see Figure 17-19).

Figure 17-19 VPVPD table query statement results


15. After reviewing the results of the query, click the OK button to return the SQL code to the main Command Center Interactive window. Notice the mouse-over pop-up window (see Figure 17-20). The SQL code from this example is shown in Example 17-1.
Example 17-1 SQL code sample
SELECT VPVPD.M_MACH_SN, VPVPD.M_MACH_TY, VPVPD.M_MODEL_N, VPVPD.M_CLUSTER_N,
       VPVPD.M_RAM, VPVPD.M_NVS, VPVPD.P_CDATE, VPVPD.P_CTIME
FROM DB2ADMIN.VPVPD AS VPVPD
WHERE VPVPD.M_MACH_SN = '2105.22219 '
ORDER BY VPVPD.P_CDATE ASC

Figure 17-20 Return SQL code we created to the Command Center, Interactive window


16.You can now save your SQL code for future use. You can schedule this as a recurring task or use it ad hoc. Click the Interactive tab on the menu bar at the top of the window and then click Save Command As... (see Figure 17-21).

Figure 17-21 Save Command As... an ASCII file for later use


PMDATA - VPCCH table percentages and averages ad hoc report


This SQL query and report uses contents of the VPCCH performance data table. In this example we query only for selected information from the table for any 2105 model storage server (LIKE '%2105%'), but you could also set up the query to view results by time, time/date, a particular storage server, or a wide range of granular levels. What you query for will depend largely on what you want to examine for your performance checks and reporting, and on how granular you need to be for root cause analysis (see Example 17-2 and Figure 17-22).
Example 17-2 Sample SQL for VPCCH table
SELECT VPCRK.M_MACH_SN, VPCRK.Q_IO_TOTAL, VPCRK.PC_B_HR_PRCT, VPCRK.PC_IOR_AVG,
       VPCRK.PC_MSR_AVG, VPCRK.PC_RBT_AVG, VPCRK.PC_WBT_AVG
FROM DB2ADMIN.VPCRK AS VPCRK
WHERE VPCRK.M_MACH_SN LIKE '%2105%'
ORDER BY VPCRK.M_MACH_SN ASC, VPCRK.PC_IOR_AVG DESC, VPCRK.PC_MSR_AVG DESC,
         VPCRK.PC_RBT_AVG DESC, VPCRK.PC_WBT_AVG DESC, VPCRK.PC_B_HR_PRCT DESC

Figure 17-22 Sample VPCCH query


PMDATA - VPCRK query


Now create a query for the average millisecond time to satisfy all subsystem I/O requests issued to a logical array:
1. Look for a high average in the VPCRK.PC_MSR_AVG column. Starting from the results of the query in Example 17-3, you will then investigate in a more granular fashion with the subsequent query. The result is shown in Figure 17-23.
Example 17-3 PMDATA VPCRK query
SELECT VPCRK.M_MACH_SN, VPCRK.PC_DEV_DATE_E, VPCRK.PC_DEV_TIME_E, VPCRK.PC_MSR_AVG,
       VPCRK.Q_IO_TOTAL, VPCRK.Q_SAMP_DEV_UTIL
FROM DB2ADMIN.VPCRK AS VPCRK
WHERE VPCRK.M_MACH_SN LIKE '%2105%'
ORDER BY VPCRK.PC_DEV_DATE_E ASC, VPCRK.Q_SAMP_DEV_UTIL DESC, VPCRK.PC_MSR_AVG DESC

Figure 17-23 VPCCH high level query

2. Drill down with the next query for the suspect time period to see which arrays were involved. This could be done with any of the table data you want to examine that have data associated with them. The SQL query is shown in Example 17-4. The query result is shown in Figure 17-24.
Example 17-4 VPCRK SQL query for specific time period
SELECT VPCRK.M_MACH_SN, VPCRK.PC_DEV_DATE_E, VPCRK.PC_DEV_TIME_E, VPCRK.M_LSS_LA,
       VPCRK.M_ARRAY_ID, VPCRK.M_DDM_NUM, VPCRK.M_CARD_NUM, VPCCH.M_VOL_NUM,
       VPCCH.M_VOL_ADDR, VPCCH.M_VOL_TY, VPCRK.PC_MSR_AVG
FROM DB2ADMIN.VPCRK AS VPCRK, DB2ADMIN.VPCCH AS VPCCH
WHERE VPCRK.M_MACH_SN LIKE '%2105%'
  AND VPCCH.M_MACH_SN LIKE '%2105%'
  AND VPCRK.PC_DEV_DATE_E = '2004-06-08'
ORDER BY VPCRK.PC_DEV_DATE_E ASC, VPCRK.Q_SAMP_DEV_UTIL DESC, VPCRK.PC_MSR_AVG DESC


Figure 17-24 VPCRK granular SQL query example

From the information above, you can determine the date, time of day, rank number, volume number, and volume address for the time period examined in the previous query. All the hits for this report indicate that the volumes are OS/390 assigned storage (value C in the M_VOL_TY column).
Tip: You could save this query and run it as a scheduled task from the DB2 Tool Suite. You could also export the data for further manipulation and presentation with a spreadsheet application. You could also set up SQL query tasks to run on a schedule, using the TotalStorage Productivity Center gauge reports to determine which areas need further investigation. The information derived from these queries of the PM database tables will correlate with the information you can derive from the ESS Specialist, so you can determine which hosts and associated applications are causing performance concerns.

For further information and examples of SQL queries, you can refer to the redbook IBM TotalStorage Expert Reporting: How to Produce Built-In and Customized Reports, SG24-7016.


17.6 Exporting collected performance data to a file


The IBM TotalStorage Productivity Center DB2 Tool Suite includes features that enable you to export collected performance data in spreadsheet (WorkSheet Format, WKS) or Comma Separated Value (CSV) format. You can do this by opening the Execution History window of the PM performance data collection task. This can be accomplished in several ways; the following is an example:
1. Open the Scheduler (either Month, Week, Day, or Job calendar views).
2. Right-click the specific task you want to export.
3. Click the Open Execution History... option.
4. Right-click the Export option of the specific task that was scheduled. The Spreadsheet (.csv) pull-down menu appears.
5. Right-click the Spreadsheet (.csv) option and the Export Comma Separated Value Format window opens.
6. In the Export Comma Separated Value Format window, enter the File Name and Drive, and determine where you want to save the file (see Figure 17-25).

Figure 17-25 Export Comma Separated Value Format window example

17.6.1 Control Center


The DB2 UDB Tool Suite includes the Control Center which provides an insight into the database you are using. You can use the Control Center to manage systems, DB2 Universal Database instances, DB2 Universal Database for OS/390 subsystems, databases, and database objects such as tables and views. In the Control Center, you can display all of your systems, databases, and database objects and perform administration tasks on them. From the Control Center, you can also open other centers and tools to help you optimize queries, jobs, and scripts, perform data warehousing tasks, create stored procedures, and work with DB2 commands. The following is a brief overview of how to discover useful information about the TotalStorage Productivity Center for Disk database and use this as the basis for your query statement creation.


Tip: Within the DB Tool Suite, there are context sensitive pop-up help windows to aid you in navigating through the menu selections and tasks.

Important: Never make any modifications to your existing TotalStorage Productivity Center database. If you want to learn and experiment, create a new database or export your production database and perform operations on the exported database only.

Once you have opened the DB2 UDB Control Center, you can drill down to your TotalStorage Productivity Center database (PMDATA in this example) by using the explorer window on the left-hand side of the window (Figure 17-26).

Figure 17-26 Control center main window, tables view

On the right-hand side of the Control Center main window you can view the tables of the PMDATA database (since the Tables folder is highlighted on the left-hand side). In Figure 17-27 we explore the VPTHRESHOLD table further by viewing its columns. This is done by double-clicking the particular table you want to view details on. We have selected the Columns tab at the top left side of the window. The column attributes are listed under the window column headers. You can also view the table's Keys, Check Constraints, or General table attributes.


Figure 17-27 Control center VPTHRESHOLD columns explore example

From this window, you can further explore the table by selecting the tabs on the upper portion of the window. We will now view the Primary Key(s) window for the CNODE table (Figure 17-28). This is very useful information when you are creating your own query statements. It will reduce the amount of research time spent digging through hardcopy documentation.

Figure 17-28 Control center primary key window example

From this window, you can also show any SQL statement you are currently generating within the Control Center, estimate the size of the table and determine its current size, or add unique or foreign key associations. Additional database information is available from within the Control Center, and useful help is available as needed.
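If you prefer the command line, similar column and index information can be retrieved with the CLP. This is a sketch only; the DB2ADMIN schema and the PMDATA alias are the examples used in this chapter, so substitute your own names.

connect to pmdata
describe table db2admin.vpvpd
describe indexes for table db2admin.vpvpd show detail
connect reset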


17.6.2 Data extraction tools, tips and reporting methods


One of the most frequently asked questions about the TotalStorage Productivity Center is: How can I extract performance data from TotalStorage Productivity Center, so that I can keep and use it outside of TotalStorage Productivity Center for management reports or for problem determination at a granular level? This section contains useful information about the different tools and methods of extracting, manipulating, and exporting data from the TotalStorage Productivity Center database. We will also examine the requirements and important safe database practices to avoid causing unnecessary grief to yourself and your data.

Reporting tools
In this section we outline some of the processes for getting the most out of the TotalStorage Productivity Center database using applications and tools outside of the TotalStorage Productivity Center product. This is not an exhaustive list, but these are things to keep in mind when you are trying to make decisions with respect to exporting data, creating, and disseminating custom reports:
IBM DB2 Express (or any version 8 full-featured IBM DB2 product) is required on the query system (laptop, desktop)
Portable programming languages and other tools for data extraction/parsing:
REXX
C and C++ (can be compiled and disseminated in an AIX environment)
ESSCLI (asset/capacity data only)
CLI
Python
QMF

Spreadsheet applications:
Microsoft Excel
IBM Lotus 123
Quick print-to-screen reports:
Parsed and formatted SQL query output
Data output types:
Data to files (compressed and uncompressed)
Binary, ASCII, etc.
DELimited (ASCII)
WKS (worksheet)

IBM DB2 DataJoiner


DB2 DataJoiner enables you to view all your data (for example, IBM, multi-vendor, relational, non-relational, local, remote, and geographic) as though it were local data. These are the highlights of this product:
With a single SQL statement, you can access and join tables located across multiple data sources without needing to know the source location.
Native support for popular relational data sources: DB2 Family, Informix, Microsoft SQL Server, Oracle, Sybase SQL Server, Teradata, and others.
Client access (using DB2 clients) from a multitude of platforms, including Java (using JDBC).


Integrated replication administration.
DDL statements to easily create, drop, and alter data source mappings, users, data types, and functions (user-defined and built-in).
Excellent performance and intelligent use of pushdown and remote query caching.

Refer to the following Web site for more information about IBM DataJoiner:
http://www.ibm.com/software/data/datajoiner/

QMF for Windows


QMF for Windows provides a Windows or Java interface to build queries or execute predefined queries with easy-to-use, point-and-click, drag-and-drop form creation for fast aggregation, grouping, or formatting performed directly in the query results. It provides easy manipulation and integration with important commercial or custom Windows applications such as spreadsheets, desktop databases, and executive information systems. DB2 QMF Version 8.1 transforms business data into a visual information platform for the entire enterprise with visual data on demand. Highlights of this release include:
Support for DB2 Universal Database Version 8 functionality, including IBM DB2 Cube Views, long names, Unicode, and enhancements to SQL.
The ability to easily build OLAP analytics, SQL queries, pivot tables, and other business analysis and reports with simple drag-and-drop actions.
Visual information appliances, such as executive dashboards, that offer rich interactive functionality specific to virtually any information need.
A database explorer for easily browsing, identifying, and referencing database assets.
DB2 QMF for WebSphere, a tool that lets any Web browser become a zero-maintenance thin client for visual on demand access to enterprise DB2 business data.
Simplified packaging for easier ordering.

For more information about QMF for Windows, refer to the following Web sites:
http://www.ibm.com/software/data/qmf/
http://www.rocketsoftware.com/qmf/

You can download the free QMF for Windows Try and Buy version from the following Web site:
http://www-3.ibm.com/software/data/qmf/reporter/june98/downloads.html

Other SQL Query tools


In this section we discuss applications, other than the IBM DB2 Tool suite, from which Structured Query Language (SQL) can be written and executed. This is far from an exhaustive list of the tools and applications available for implementing SQL queries in your environment. You can run a SQL statement using one of several platform-specific tools for writing and executing SQL statements. The best suggestion is to determine which is the most appropriate interface, tool, or application for your particular needs. For DB2 Universal Database (UDB) UNIX and Intel platforms you can use the Command Center or the Command Line Processor (CLP) (see Data Extraction using DB2 Command Line Processor Interface on page 981). You may be familiar with tools such as Query Management Facility (QMF) for Windows. It is a graphical user interface (GUI) that connects to any DB2 UDB.


There are numerous other tools and applications, such as IBM DB2 Intelligent Miner, IBM Object REXX, and LotusScript, which contain powerful scripting and/or report formatting capabilities and can access DB2 UDB on UNIX or Intel platforms, IBM eServer iSeries, and z/OS, as well as any database manager connected to DataJoiner. Refer to the following Web sites for more information about these other tools:
DB2 Intelligent Miner:
http://www.ibm.com/software/data/iminer

Object REXX:
http://www.ibm.com/software/awdtools/obj-rexx/

LotusScript:
http://www.ibm.com/software/data/db2/db2lotus/db2lscpt.htm

There are no direct solutions to print the built-in reports or save the report files directly from TotalStorage Productivity Center. However, you can issue standard SQL statements to extract the data. All asset, capacity, and performance data is available in the form of DB2 tables. DB2 UDB management tools will be useful in utilizing your table data in the most efficient manner.

Data Extraction using DB2 Command Line Processor Interface


It is very important to exercise caution when editing the DB2 tables and creating or dropping indices. These activities run the risk of losing the information within the tables and completely corrupting the DB2 database on the host machine. We strongly recommend that you back up your original database and perform any database manipulation on your backup database copy.
Note: The TotalStorage Productivity Center database has already been optimized through indexing. There is no need for you to perform any further indexing on the database.

Export and import utilities for IBM DB2 CLP Interface


DB2 UDB provides export and import utilities. These utilities operate on logical objects as opposed to physical objects. For example, you can use an export command to copy an individual table to a file system file. At some later time, you might want to restore the table, in which case you would use the import command. Although export and import can be used for backup and restore operations, they are really designed for moving data, for example, for workload balancing or migration.


Usage Notes: The db2move tool:

Exports, imports, or loads user-created tables. If a database is to be duplicated from one operating system to another operating system, db2move only helps you to move the tables. You also need to move all other objects associated with the tables, such as aliases, views, triggers, user-defined functions, and so on. db2look is another DB2 UDB tool that helps you easily move some of these objects by extracting the Data Definition Language (DDL) statements from the database.
When export, import, or load APIs are called by db2move, the FileTypeMod parameter is set to lobsinfile. That is, LOB data is kept in separate files from PC/IXF files. There are 26 000 file names available for LOB files.
The LOAD action must be run locally on the machine where the database and the data file reside. When the load API is called by db2move, the CopyTargetList parameter is set to NULL; that is, no copying is done. If logretain is on, the load operation cannot be rolled forward later. The tablespace where the loaded tables reside is placed in backup pending state, and is not accessible. A full database backup, or a tablespace backup, is required to take the tablespace out of backup pending state.
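As a hedged sketch of how these tools might be invoked against the Performance Manager database used in this chapter (the database name and output file are examples; adjust for your environment):

db2move pmdata export
db2look -d pmdata -e -o pmdata_ddl.sql

The db2move command above exports the user tables to PC/IXF files in the current directory, and db2look writes the DDL needed to re-create the tables to pmdata_ddl.sql.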

SQL commands to extract data to a file example


In this section we show how to extract specific DB2 table information for use in another application or to export to another DB2 database. SQL commands are used to redirect DB2 SELECT statement output to a file in a Windows or Linux environment. The commands in the following example may be executed from a Command Prompt, from the DB2 Command Window (db2 must prefix every command line), or from the Command Line Processor. We use the Command Line Processor for the examples below.
1. Export data to the file format DEL (delimited ASCII), WSF (WorkSheet Format), CSV (Comma Separated Values), or IXF (Integrated Exchange Format). The following commands use SQL. The examples provided are on a Windows platform.
Note: The Integrated Exchange Format (IXF) data interchange architecture is a host file format designed to enable exchange of relational database structure and data. The personal computer (PC) version of the IXF format (PC/IXF) is a database manager adaptation of the host IXF format. A PC/IXF file contains a structured description of a database table or view. Data that was exported in PC/IXF format can be imported or loaded into another DB2 database.

2. Create a folder for the output files of the data extract. For example:
mkdir c:\ibmout

(Windows 2000 command window, or use Windows Explorer to create the new folder)
3. Select Start → Programs → IBM DB2 → Command Line Tools and click Command Line Processor (CLP).
4. Connect to DB2 using the following command in the CLP window:
connect to pmdata user db2admin using db2admin

Note: The screen response should be a few lines stating that you are connected and that the level of DB2 is 8.1.2. You should replace the word db2admin (following the word using) with the actual password you are using for your database instance.


5. Issue the following command to extract the data from the VPVPD table, substituting a folder name of your choice and replacing mmdd with the date of the extract (the VPVPD table contains cluster-level and storage server-level configuration data, generated at the start of a performance data collection). Export the VPVPD table data to a file:
export to c:\ibmout\vpvpdmmdd.txt of del select * from vpvpd

6. Issue the following command to extract a specific day's worth of data from the VPCRK table (logical array-level performance data), substituting the date to be extracted and making the same substitution in the file name. The VPCRK discrete table output is directed to a file:
export to c:\ibmout\vpcrkmmdd.txt of del select * from vpcrk where pc_date_b = 'mm/dd/yyyy'

Note: Be patient while this process takes place; the prompt returns when the process is complete. The complexity of your SQL statement, the amount of data to be extracted, and the background host processor load will all affect how quickly the command completes.
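If you prefer the self-describing IXF format mentioned in the note earlier in this section, the same extracts could be written as follows. This is a sketch only; the file names are examples.

export to c:\ibmout\vpvpdmmdd.ixf of ixf select * from vpvpd
export to c:\ibmout\vpcrkmmdd.ixf of ixf select * from vpcrk where pc_date_b = 'mm/dd/yyyy'

Data exported in IXF format can later be imported or loaded into another DB2 database.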

Redirect output to a file


The following commands can be used to redirect output to a file instead of to your screen. The backslash (\) character is used to continue a statement onto another line.
1. Start the DB2 Command Line Processor.
2. Enter the following commands and press the Enter key at the end of each command line entry:
connect to pmdata user db2admin using db2admin
quit
db2 (select * from vpvpd) > c:\ibmout\vpvpdmmdd.txt
db2 (select * from vpcrk where pc_date_b = 'mm/dd/yyyy') > \
c:\ibmout\vpcrkmmdd.txt

The following can also be performed through the DB2 Tool Suite Command Line Interface or Command Center. The commands are not case sensitive and are presented here with explanations.
a. Sample connect to database:
connect to pmdata user db2admin using db2admin

Here pmdata is the TotalStorage Productivity Center DB2 database, db2admin is the user, and the password is db2admin.
b. Sample SELECT * from VPCLUL command:
select * from vpclul

The previous SQL query selects all columns from the table vpclul (the asterisk * means all columns); because there is no WHERE clause, all rows are returned.
Note: Data is stored as a matrix with Columns (Field names) and Rows (Field Values). For more information about relational database tables, see the redbook IBM TotalStorage Expert Reporting: How to Produce Built-In and Customized Reports, SG24-7016.


Tip: You can use the FETCH clause in your SQL statement when testing your queries and scripts. This limits a large script's output to however many rows you have defined in the FETCH clause. Remember to remove the FETCH clause when your testing is completed so that your production scripts return complete output. The clause can be placed as the last line of your SQL statement (note the semicolon, which indicates to the CLP the end of the SQL statement):
fetch first 10 rows only;
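For example, a test run of the VPCRK extract shown earlier in this section might be limited like this (a sketch; substitute your own date):

select * from vpcrk where pc_date_b = 'mm/dd/yyyy' fetch first 10 rows only;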

17.7 Database backup and recovery overview


A database can become unusable because of hardware or software failure, or both. You may, at one time or another, encounter storage problems, power interruptions, or application failures, and different failure scenarios require different recovery actions. You can protect your data against the possibility of loss by having a well rehearsed recovery strategy in place. DB2 UDB provides a range of facilities for backing up, restoring, and rolling data forward, which enable you to build a recovery procedure. Good warehousing practice covers the reliability of the target databases together with the warehouse control databases. Just protecting the target databases is not enough if you want to keep an operational service running satisfactorily. Similarly, just maintaining a backup of the warehouse metadata helps your IT staff, but does not satisfy the users who need the data from the target database.

Some of the questions that you should answer when developing your recovery strategy are:
Will the database be recoverable?
How much time can be spent recovering the database?
How much time will pass between backup operations?
How much storage space can be allocated for backup copies and archived logs?
Will tablespace level backups be sufficient, or will full database backups be necessary?
What level of complexity is acceptable for the value of the data?
Can I recreate any lost data from other sources?
What database skills are available?

Your database recovery strategy should ensure that all information is available when it is required for database recovery. You should include a regular schedule for taking database backups. You should also include in your overall strategy procedures for recovering command scripts, applications, user-defined functions (UDFs), stored procedure code in operating system libraries, and load copies. The concept of a database backup is the same as any other data backup: taking a copy of the data and then storing it on a different medium in case of failure or damage to the original. The simplest case of a backup involves shutting down the database to ensure that no further transactions occur, and then simply backing it up. You can then rebuild or recover the database if it becomes damaged or corrupted in some way. Different recovery methods are discussed to fit your data warehouse business requirements.

Planning considerations
Planning is one of the most important areas for consideration before beginning to do database backups. We cover the factors which should be weighed against one another in planning for recovery, for example, type of database, backup windows and relative speed of backup and recovery methods. We also introduce various backup methods. In general terms, DB2 can offer a number of options for backup and recovery management to meet the needs of a wide range of applications. The more simple backup and recovery options provide data protection with minimal administrator skill or effort. Other more powerful options give greater levels of data protection but require more administrator skill and require more effort to maintain.


If your organization has existing high levels of skills with DB2 or other relational databases, you may already have standard operating procedures for protecting databases. If your organization is less skilled in this area, you may wish to choose a simple backup and recovery process that doesn't require a lot of new administrator skill or effort.

Speed of recovery
If you ask users how quickly they would like you to be able to recover lost data, they usually answer, "Immediately." In practice, however, recovery takes time. The actual time taken depends on a number of factors, some of which are outside your control (for example, hardware may need to be repaired or replaced). Nevertheless, there are certain things that you can control and that will help to ensure that recovery time is acceptable:
Develop a strategy that strikes the right balance between the cost of backup and the speed of recovery.
Document the procedures necessary to recover from the loss of different groups or types of data files.
Estimate the time required to execute these procedures (and do not forget the time involved in identifying the problem and the solution).
Set user expectations realistically, for example, by publishing service levels that you are confident you can achieve.

Backup and recovery considerations


DB2 automatically takes care of problems caused by power interruptions. It will automatically restart and return a database to the state it was in at the time of the last complete transaction. Media and application failures are more severe. The simplest case of a backup involves shutting down the database to ensure that no further transactions occur, and then just backing it up. If a database needs to be restored to a point beyond the last backup, then logs are required to reapply any changes made by transactions that committed after the backup was made. You can:
Back up a database to a fixed disk, a tape, or a location managed by a storage management product. (A minimal sketch of a backup and restore command is shown after this list.)
Back up a database that is active or inactive.
Back up a database immediately or schedule backups for a later time.
Back up a complete database or only selected table spaces.

Additional resources for DB2 backup and recovery are available in Backing Up DB2 Using Tivoli Storage Manager, SG24-6247, and DB2 Warehouse Management: High Availability and Problem Determination Guide, SG24-6544.
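As a minimal sketch of an offline backup and restore from the DB2 Command Window (the pmdata database name is the example used in this chapter and d:\dbbackup is an assumed target directory; adjust both, and make sure no applications are connected before taking an offline backup):

db2 backup database pmdata to d:\dbbackup
db2 restore database pmdata from d:\dbbackup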

Database logging
In DB2 UDB databases, log files are used to keep records of all data changes. They are specific to DB2 UDB activity. Logs record the actions of transactions; if there is a crash, logs are used to play back and redo committed transactions during recovery. Logging is always on for regular tables in DB2 UDB, although it is possible to mark some tables or columns as NOT LOGGED, and it is possible to declare and use USER temporary tables. There are two kinds of logging:
Circular logging (default): This is the default logging type for the TotalStorage Productivity Center database.
Archive logging


In addition, capture logging is available for replication purposes. Each type of logging corresponds to the method of recovery you want to perform. Circular logging is used if the maximum recovery you want to perform is crash or restore recovery. Archive logging is used if you want to be able to perform rollforward recovery.
Note: IBM does not recommend or support the use of Archival logging with the TotalStorage Productivity Center product database.

Circular logging
Circular logging is the default behavior when a new database is created (the logretain database configuration parameter is set to NO). With this type of logging, only full, offline backups of the database are valid. As the name suggests, circular logging uses a ring of online logs to provide recovery from transaction failures and system crashes. The logs are used in a round-robin fashion and are retained only to the point of ensuring the integrity of current transactions. Circular logging does not allow you to roll a database forward through transactions performed after the last full backup operation; all changes occurring since the last backup operation are lost. Only crash recovery and restore recovery can be performed when using this type of logging. Active logs are used during crash recovery to prevent a failure (system power or application error) from leaving a database in an inconsistent state. The data changes are recorded in the log files, and when all the units of work (UOWs) are committed or rolled back in a particular log file, the file can be reused. The number of log files used by circular logging is defined by the logprimary and logsecond database configuration parameters. If the UOWs running in a database use all the primary log files without reaching a point of consistency, the secondary log files are allocated one at a time. Figure 17-29 shows a circular logging log path.

Figure 17-29 Circular logging log path example
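To see how a given database is currently configured, you can list the relevant logging parameters from a DB2 command window (db2cmd). The database alias DMCOSERV is simply one of the TotalStorage Productivity Center databases used here as an example:
REM Display only the logging-related configuration parameters
db2 get db cfg for DMCOSERV | findstr /i "LOGPRIMARY LOGSECOND LOGRETAIN USEREXIT"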


Archive logging
Archive logging is used specifically for rollforward recovery. You configure this logging mode by setting the logretain database configuration parameter to RECOVERY. Rollforward recovery can use both archived logs and active logs to rebuild a database or a table space either to the end of the logs or to a specific point in time. The rollforward utility achieves this by reapplying committed changes found in the following three types of log files:
Active logs: Crash recovery also uses active logs to place the database into a consistent state. They contain records for transactions that have not been committed, as well as committed transaction information that has not yet been written to the database on disk. Active log files are located in the LOGPATH directory.
Online archived logs: When the changes in an active log are no longer needed for normal processing, the log is closed and becomes an archived log. An archived log is said to be online when it is stored in the database log path directory (see Figure 17-30).

Figure 17-30 Online Archival logging log path example

Offline archived logs: An archived log is said to be offline when it is no longer found in the database log path directory. When you want to use archive logging (see Figure 17-31), you must make provision for the logs to be stored away from the database. In DB2 UDB, this is done by specifying the userexit parameter and interfacing with a suitable archive manager. Full documentation of this is supplied in the DB2 manuals (see Online resources on page 997). A configuration sketch follows Figure 17-31.


Figure 17-31 Offline Archival logging log path example
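As a minimal sketch only (and keeping in mind the earlier note that archive logging is not recommended for the TotalStorage Productivity Center databases), enabling archive logging on a hypothetical database named SAMPLEDB would look similar to the following commands, issued from a DB2 command window:
REM Enable archive logging; this places the database in backup pending state
db2 update db cfg for SAMPLEDB using LOGRETAIN RECOVERY
REM A full offline backup is then required before the database can be used again
db2 backup database SAMPLEDB to C:\db2_backups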

Database recovery
A database restore recreates the database from a backup; the restored database exists as it did at the time the backup completed. If archive logging was in use before the failure, it is then possible to roll forward through the log files to reapply any changes made since the backup was taken, either to the end of the logs or to a specific point in time. The recovery granularity you need has to be weighed against the performance cost of the logging that provides it.
Important: Log files are just as important as the backup files. Without the log files, it is not possible to roll the database forward beyond the point of the backup.
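As a sketch of how such a recovery might look, and assuming archive logging is in use, the following commands restore a hypothetical database named SAMPLEDB from a backup image and then roll it forward; the backup location and timestamps are placeholders:
REM Restore the database from the backup image taken at the given timestamp
db2 restore database SAMPLEDB from C:\db2_backups taken at 20051201120000
REM Either reapply all committed changes up to the end of the logs...
db2 rollforward database SAMPLEDB to end of logs and complete
REM ...or roll forward only to a specific point in time
db2 rollforward database SAMPLEDB to 2005-12-01-14.30.00.000000 and complete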

17.8 Backup example


This example uses the simple method of backing up TotalStorage Productivity Center. It does not require archive logging to be configured and therefore needs the minimum of database management, administration, and planning. Backups taken in this way are performed with the database offline, so TotalStorage Productivity Center needs to be stopped while the backup takes place. The following example script stops TotalStorage Productivity Center, performs a backup of all the DB2 databases, and then restarts TotalStorage Productivity Center. In our test environment the backup took less than seven minutes. There are two files, which we describe below:
TPC_backup.bat - The script you run
database_list - The DB2 scripted list of databases to back up


File: database_list

This file contains a line for each database to back up. The list may vary depending on which TotalStorage Productivity Center components you have installed; you can use the DB2 Control Center to establish the full list of databases in your installation. In this example the backup data will reside in C:\db2_backups. You need to create this directory before using this process, as shown in Example 17-5.
Example 17-5 Example of database_list
backup database DIRECTOR to C:\db2_backups without prompting;
backup database DMCOSERV to C:\db2_backups without prompting;
backup database ELEMCAT to C:\db2_backups without prompting;
backup database ESSHWL to C:\db2_backups without prompting;
backup database PMDATA to C:\db2_backups without prompting;
backup database REPMGR to C:\db2_backups without prompting;
backup database TOOLSDB to C:\db2_backups without prompting;

File: TPC_backup.bat

This is the script you run (see Example 17-6). It stops the IBM Director service, which closes all connections to the DB2 databases and allows DB2 to take an offline backup. After the backup completes, the script restarts the IBM Director service.
Example 17-6 Example of backup script
@ECHO ON
@REM This is a sample backup script
@REM to backup TotalStorage Productivity Center
@REM for Disk and Replication
@REM ----
@REM Stopping TotalStorage Productivity Center
@REM -----------------------------------------
net stop "IBM Director Support Program"
@REM Starting backup of DB2 databases
@REM --------------------------------
C:\PROGRA~1\IBM\SQLLIB\BIN\db2cmd.exe /c /w /i db2 -tvf C:\scripts\database_list
@REM Restarting TotalStorage Productivity Center
@REM -------------------------------------------
net start "IBM Director Support Program"
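After the script completes, you may want to confirm that the backup images were written. One simple check, run from a DB2 command window against any of the databases in the list (DMCOSERV is used here only as an example), is to display the backup history:
db2 list history backup all for DMCOSERV
The db2ckbkp utility can also be run against a backup image file to verify that the image is not corrupted.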


Appendix A.

Worksheets
This appendix contains worksheets that are intended for use during the planning and installation of the TotalStorage Productivity Center. The worksheets are meant to be examples, so you can decide whether you need to use them, for example, if you already have all or most of the information collected somewhere else. If the tables are too small for your handwriting, or you want to store the information in an electronic format, simply use a word processor or spreadsheet application and use our examples as a guide to create your own installation worksheets. This appendix contains the following worksheets:
User IDs and passwords
Storage device information:
  IBM TotalStorage Enterprise Storage Server (ESS)
  IBM Fibre Array Storage Technology (FAStT)
  IBM SAN Volume Controller


User IDs and passwords


We created a table to help you record the user IDs and passwords that you will use during the installation of IBM TotalStorage Productivity Center, for reference during the installation of the components and for future add-ons and agent deployment. Use this table for planning purposes. You need one of the worksheets in the following sections for each machine on which at least one of the Productivity Center components or agents will be installed, because you may have multiple DB2 databases or logon accounts and you need to record the IDs of each DB2 installation individually.

Server information
Table A-1 contains detailed information about the servers that comprise the TotalStorage Productivity Center environment.
Table A-1 Productivity Center server

Server               Configuration information
Machine Hostname
IP address           ____.____.____.____

In Table A-2, simply mark whether a manager or a component will be installed on this machine.
Table A-2 Managers/components installed

Manager/component                       Installed (y/n)?
Productivity Center for Disk
Productivity Center for Replication
Productivity Center for Fabric
Productivity Center for Data
Tivoli Agent Manager
DB2
WebSphere


User IDs and passwords for key files and installation


Use Table A-3 to note the passwords that you used to lock the key files.
Table A-3 Passwords used to lock the key files

Default key file name     Key file name     Password
MDMServerKeyFile.jks
MDServerTrusFile.jks
agentTrust.jks

Enter the user IDs and passwords that you used during the installation in Table A-4. Depending on the selected managers and components, some of the lines may not be used for this machine.
Table A-4 User IDs used on this machine

Element                                            Default/recommended user ID    Enter user ID    Enter password
Suite Installer                                    Administrator
DB2                                                db2admin (a)
IBM Director                                       Administrator (a)
Resource Manager                                   manager (b)
Common Agent                                       AgentMgr (b)
Common Agent                                       itcauser (b)
TotalStorage Productivity Center universal user    tpcsuid (a)
Tivoli NetView                                     (c)
IBM WebSphere
Host Authentication

a. This account can have any name you choose.
b. This account name cannot be changed during the installation.
c. The DB2 administrator user ID and password are used here. See Fabric Manager User IDs on page 68.

For details about the purpose of the user IDs, see 3.5.1, User IDs on page 65.


Storage device information


This section contains worksheets that you can use to gather important information about the storage devices that will be managed by TotalStorage Productivity Center. You need this information during the configuration of the Productivity Center, and you need some of it before you install the device-specific Common Information Model (CIM) Agent, because the agent sometimes depends on a specific code level. Determine whether there are firewalls in the IP path between the TotalStorage Productivity Center server or servers and the devices that might block the necessary communication. In the first column of each table, enter as much information as possible so that you can identify the devices later.

IBM TotalStorage Enterprise Storage Server


Use Table A-5 to collect the information about your ESS devices.
Important: Check the device support matrix for the associated CIM Agent.
Table A-5 Enterprise Storage Server

Name, location, organization

Both IP addresses

LIC level

ESS user name

ESS password

CIM Agent host name and protocol


IBM FAStT
Use Table A-6 to collect the information about your FAStT devices. Check the device support matrix for the correct firmware level before you install the CIM Agent.
Table A-6 FAStT devices

Name, location, organization

Firmware level

IP address

CIM Agent host name and protocol


IBM SAN Volume Controller


Use Table A-7 to collect the information about your SVC devices.
Table A-7 SAN Volume Controller devices

Name, location, organization

Firmware level

IP address

User ID

Password

CIM Agent host name and protocol


Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.

IBM Redbooks
For information about ordering these publications, see How to get IBM Redbooks on page 998. Note that some of the documents referenced here may be available in softcopy only.
TCP/IP Tutorial and Technical Overview, GG24-3376
Exploring Storage Management Efficiencies and Provisioning: Understanding IBM TotalStorage Productivity Center and IBM TotalStorage Productivity Center with Advanced Provisioning, SG24-6373
IBM TotalStorage SAN Volume Controller, SG24-6423
IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886

Other publications
These publications are also relevant as further information sources:
IBM Tivoli Storage Area Network Manager Planning and Installation Guide, SC23-4697
IBM TotalStorage Enterprise Storage Server Command-Line Interface User's Guide, SC26-7494
IBM Tivoli Storage Resource Manager Configuration and Getting Started, SC32-9067

Online resources
These Web sites and URLs are also relevant as further information sources:
Tivoli software products index:
http://www-306.ibm.com/software/tivoli/products/

Open Software Family:


http://www.storage.ibm.com/software/index.html

Apache Software Foundation:


http://www.apache.org

Engenio:
http://www.engenio.com

FibreAlliance:
http://www.fibrealliance.org


FibreAlliance MIB introduction:


http://www.fibrealliance.org/fb/mib_intro.htm

The Static Registration File:


http://www.openslp.org/doc/html/UsersGuide/SlpReg.html

Storage Networking Industry Association (SNIA):


http://www.snia.org

How to get IBM Redbooks


You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:
ibm.com/redbooks

Help from IBM


IBM Support and downloads
ibm.com/support

IBM Global Services


ibm.com/services


Index
A
addess command 213 addessserver command 212, 214, 834 adduser command 216 administrator ID 67 agent deployment 361 Agent Manager 30, 51, 295, 902 %WAS_INSTALL_ROOT%AgentManager logsSyste mOut.log file 53 agentTrust.jks file 77 certificates 69 common agent 168 DB2 control center 902 default password 105 host name 102 IBMCDB 85, 98 key file 69 port number 181 ports used 102 status 902 TCP/IP ports 59 Agent Manager certificates 80 Agent Manager ports 59 agent placement 764 Agent registration password 69 agent registration password 52, 105 AgentManager registration password 105 AgentManager.properties file 168 Agenttrust.jks file 181 agentTrust.jks file 5253, 69, 77, 158 archive bit 627628 archive logging 985, 987 asset reporting 593, 595, 598 associated CIM Agent device support matrix 994 at risk files 548 Attended Installation 93 availability reporting 594, 604

C
capacity reporting 594, 605, 607 Carnegie Mellon University 817 changeMe password 105 Chargeback 592 chargeback 700 CIM Agent 30, 32, 35, 4748, 66, 84, 289290, 352 agent code 47 client application 47 device 47 device provider 47 ESS CLI 196 ESS configuration 196 overview 50 CIM Browser interface 220 CIM Client 35 CIM Managed Object 35 CIM Object Manager 194 CIM Object Manager (CIMOM) 32, 35, 47, 49 CIM Provider 35 CIM request 47 CIM Server 35 CIM/OM 635 discovery 635 CIM-compliant 47 CIMOM customization 192 delete entries 258 CIMOM (CIM Object Manager) 32, 47, 49 CIMOM communication 242, 248 CIMOM configuration recommendations 195 CIMOM discovery 49 CIMOM SVC console 241 CIMS 700 circular logging 985986 Cisco MDS 9000 734 Cleared Record 817 client application CIM requests 47 Cloudscape database 76 collect daemon 777 collectLogs.bat file 917 Command Center 952, 980 Command Line Processor (CLP) 949, 980 command-line interface 884 Common Agent AgentManager.properties file 67 context name 902 demo certificates 52 password 67 subagent 52 Common Agent Services Agent Manager 51 Resource Manager 51

B
backup reporting 594, 627, 659 backup storage requirements 661 backup storage requirements reporting 630 bandwidth 781 BASEENTITY table 915 batch reporting 683, 688 batch reports 697 Brocade 768 Brocade Silkworm Fibre Channel Switch 347 BWN005921E message 486 BWN005922E message 486 BWN005996W message 491


Common HBA API 35 Common Information Model (CIM) 28, 34 common schema 47 components 30 computer uptime 604, 657 configured user ID locations 901 Constraint 610, 614 Constraint violation report 610 Contacted status 348 Control Center 976 core schema 47 cpthresh 475 creating gauges 443 CSV 697 CSV format report 976 CSV output 592, 697 customized reporting 683 customized reports 976

D
DA benefit 42 Data Agent 30, 160, 354, 358 Data Agents 906 data collection SVC,SVC data collection 460 data collection task 429, 431 Data Manager 30, 7879, 84, 290, 293, 304, 351, 906 administrator rights 80 CIM interface 290 DB2 database 81 fully qualified host name 303 GUI 293, 906 local subnet 290 Parameters window 180 repository 177 security certificates 80 server 357 SLP interfaces 290 subtree 293 Data Manager security issues 79 Data planning considerations 78 Data user levels 80 database Alert 588 asset reporting 585, 598 availability 591 batch reporting 693 capacity reporting 607 chained rows 592 chargeback 700 Instance Alert 588 Instance Quota 591 policy management 589 Quota 589, 591 Quota violation reporting 622 Scan 601 space usage report 685 system reports 685 Table Alert 589 usage reporting 608

usage violation report 685 used table extents 591 user specific reporting 688 utilization 591 database backup 984 database instance storage report 667 database LUN assignment 669 database name 62, 150 new Element Catalog subcomponent database 151 database purge function 22 database recovery time 985 database restore 988 database size report 667 database storage usage 665 DataJoiner 981 DB2 Agent manager registry 902 Health Center monitoring 925 view a table 927 DB2 Command Center 952 DB2 Control Center 913, 976 DB2 Cube Views 980 DB2 database connect to 949 DB2 database health 922 DB2 DataJoiner 979 DB2 Development Center 950 DB2 Development Tools 950 DB2 Event Analyzer 951 DB2 General Administration Tools 950 Control Center 950 Journal 950 Replication Center 951 Task Center 951 DB2 Health Center 924 DB2 Intelligent Miner 981 DB2 journal 924 DB2 logging 985 DB2 Monitoring Tools 951 DB2 report customized example 956 DB2 Tool Suite command line processor 948 Command Line Tools 948 DB2 tool suite 948 DB2 UDB Journal utility 922 DB2 Utilities Command Center 952 db2move tool 982 DDL 980 default directory 359 delete a gauge 456 demo keys 52 Device Centric view 713 device discovery 921 device management 37 Device Manager device discovery 249 LUN mapping 385, 392 mdisk display 384, 390, 396, 403 overview 19


directors 767 Directory Agent 31, 33, 38, 40, 900 configuring for subnet 45, 900 discover all services (DAS) 900 discovery 635, 733 device 249 disk capacity 595, 605 disk subject matter expert 12 Dispersion Frame Technique 817 display gauges 447, 454 Distance Vector Multicast Routing Protocol 44 distinguished name (DN) 105 DMCIMOM table 914 DMCOSERV database 913 dmlog.txt file 139 DNS server 77 DVMRP 44

Event Action Plan 23 export and import action plans 423 Message Browser 422 Event Action Planner creating an action 417 Event Action Plans 921 Event Filter Builder 920 Event Filters 24 event forwarding 774 event logging 781 Event Services 23 exception gauges 450 extension schema 47

F
Fabric Agent 30, 61, 361, 369 separate package 369 Fabric Agent deployment 160 Fabric Agent registration 904 Fabric fully qualified host name 77 Fabric Manager 84, 303, 321, 351, 902 install user ID rights 110 native installer setup.exe program 361 Fabric Manager Installation option 160 Fabric Manager ports 60 Fabric user ID password considerations 76 FA-MIB 771 FAStT CIM Agent 290291 defining devices 230 SLP registration 233 FAStT device 718, 995 fault isolation 817 Fault Record 817 FC_MGMT MIB 814 FCPortTxFrames 789 FE-MIB 771 FETCH clause 984 Fibre Alliance MIB 814 Fibre Channel 34 Common HBA API 35 Generic Service 34 switch 715 Fibre Channel MIB 769 files at risk 548 forbidden 611 modified since backup 629 most at risk 627 obsolete 548, 615 orphaned 548, 615 statistics 528 filesystem capacity 605 filesystems 736 firewall configuration 45 FlashCopy 829 flashsess command 889

E
element manager 723 ElementCatalog.properties file 154 enable logging 931 enabling WAS trace 932 enterprise-specific MIB 768 ESS 594 attached hosts report 656 CIM/OM 635 Logical Sub System 580 reporting 634635, 653, 656 Tivoli Storage Resource Manager Probe 635 used and free storage report 653 ESS CIM Agent 196, 291, 353 addess command 213 addessserver command 212, 214 configuring 212 install 202 log files 209 post install 211 setuser interactive tool 215 Truststore file 291 verify ESS connectivity 216 verify install 211 ESS CIM agent SLP registration 224 ESS CIMOM CIM Browser interface 220 restart 215 SLP registration 218 telnet command 220 ESS CIMOM verification 216 ESS CLI 196 install 196 verification 201 verifyconfig 218 ESS data collection 429 ESS thresholds 457 ESS user authentication 940 esscli command 201 Ethernet 812

G
gauge definition 14

gauge properties 454 gauges 22, 443 creating 443 delete 456 display 447, 454 exception 450 performance 443 properties 454 General Parallel File System (GPFS) 79 GPFS (General Parallel File System) 79 group 537 GUID 822

H
HBA 731 health monitoring 20 High-Performance Parallel Interface (HIPPI) 34 HIPPI (High-Performance Parallel Interface) 34 historical reporting 774 Host Centric view 714 host name 75, 343 hot link 698 HTML 592, 697 HTML output 697 HTTP 34

I
IBM Director 18 database name 121 Event Action Plans 921 Event Services 24 host name 118 user name 118 IBM Director (ID) 29, 55, 259, 993 IBM Director event logs 917 IBM Director Scheduler device discovery 253 IBM FAStT 713, 991 models for storage subsystem support 78 IBM Object REXX 981 IBM Tivoli Storage Area Network Manager 319 IBM Tivoli SAN Manager 20 IBM TotalStorage Enterprise Storage Server CIM Agent 30 managing Productivity Center for Disk 84 storage device 63 IBM TotalStorage Open Software Family 3, 5 IBM WebSphere Application Server 30 selection panel 130 IBMCDB 85 IBMCDB database 98 ICAT (Integrated Configuration Agent Technology) 47 ICMP 604 IDE 560 IDs and passwords (IP) 59 IETF 811812 IGMP (Internet Group Management Protocol) 44

IIS 85, 698 ikeyman 902 ikeyman utility 52, 135 inband agent 348 Inband discovery 32 inband discovery 12 incremental backup planning 661 Indication Record 817 inode 562 install image depot 94 installation Agent Manager 98 agentTrust.jks file 158 check list 64 Data Manager 171 Data planning 78 database considerations 75 database tuning 154 database types 62 DB2 90 DB2 username 134 Disk and Replication Base 127 DMCOSERV database 137 dmlog.txt 139 Fabric Agent deployment 160 Fabric manager 157 Fabric planning 75 Fabric port numbers 162 fully qualified host names 75 Generate Self-Signed Certificate 136 hardware 58 IBM Director 114 image depot 93 Internet Information Services 73 personal firewall 77 PMDATA database 143 prerequisite software 85 Productivity Center for Disk 140 Productivity Center for Replication 146 Replication Manager subcomponent database 152 SNMP 73 Suite Installer 110 SVC Replication Manager subcomponent database 153 TSRMsrv1 local account 177 user ID privileges 66, 131 user IDs 65 WebSphere Application Server 92 Windows Management Instrumentation 110 Windows Terminal Services 75 installation directory 187, 304 installation image depot 93 Integrated Configuration Agent Technology (ICAT) 47 Intelligent Peripheral Interface (IPI) 34 Internet Assigned Numbers Authority 44 Internet Engineering Task Force 32 Internet Group Management Protocol (IGMP) 44 Internet Information Server (IIS) 53, 295 Internet Information Services 73, 85 IP address 102, 291, 347, 366, 903, 992


IP address assignment 44 IP multicasting 43 IP network 38, 248, 712, 805 iSCSI 810811 adapter 812 Auth MIB 814 discovery 733, 813 driver 812 initiators 811 iSNS MIB 814 MIB 813 NetView discovery 733, 813 SmartSet 734, 794 SNMP 733 targets 811 iSCSI Discovery 32 ISL 728, 738 iSNS 812 MIB 814

J
JDBC 979 job scheduling facility 433

K
key file for Agent Manager 69

L
Launch Device Manager 378 launcher program 910 Launchpad 28, 264, 313, 899 launchpad 906 customization 909 management application 910 leaf node 769 local subnet 45, 290, 900 multicast service requests 900 log files 917 Logical Sub System 580 Lotus 1-2-3 697 lscollection 475 lsfilter 475 lsgauge 475 lspair command 889 lsseq command 889 lssess command 886 LUN modeling 580 LUN to host port mapping 385, 392 LUNs 731

mdisks display 384, 390, 396, 403 MDS 9000 734 MIB 766767 applications 787 definitions 772 enable 772 enable in NetView 769 enterprise-specific 768, 770, 774 iSCSI 813 object ID 769, 776, 786 objects 769, 775, 784, 787, 804 performance objects 772 standard 768 subtree 773 thresholds 774 tree structure 769 MIB compiler 37 MIB-II 771, 781 Microsoft Excel 697 Internet Information Server 698 modified since backup files 629 MOSPF (Multicast Open Shortest Path First) 45 most at risk files 627 multicast 43 multicast group 44 individual hosts 44 multicast messages 39 Multicast Open Shortest Path First (MOSPF) 45 multicast request 40 multicast traffic 42 Multiple Device Manager database maintenance 977 database query 976 DB2 logging 986 device discovery 248 export PM data 979 report tools 979 SAN Manager 320 SQL commands 982 multiple machines, installing at the same time 361

N
namespace nterop 257 ootbm 257 NAS (network attached storage) 183 NAS devices 183 nestat command 207 netmon 805 netstat command 162 NetView 320, 704 acknowledge 744 Advanced Menu 769 arm threshold 781 child submap area 705 clearing the database 806 copying MIB 770 data collection 775, 791 Index

M
manage replication sessions 877 management application 30, 34 Management Information Base 37, 767 Management Information Base (MIB) 37 managers 30 mdisk group 398


data collection troubleshooting 784 database 806 database maintenance 783 discovery process 805 enable MIB 769 event browser 343, 345 event logging 781 existing installation 343 explorer display 705, 731 graph 782, 803 Graph Properties 792 graphs 766, 774, 784 historical reporting 767, 774, 785 interface 704 iSCSI 813 iSCSI discovery 733, 813 iSCSI SmartSet 734, 794 loading MIB 771772 Management Page 741 maps 704 MIB applications 787, 790 MIB Browser 774, 789 MIB Data Collection 775, 802 MIB Data Collector 772, 774, 786 MIB Tool Builder 772, 786787, 793 Navigation Tree 707 netmon daemon 805 Object Properties 707, 740 performance applications 766 performance data 774, 781 polling 787, 792, 795 properties panel 730 real-time reporting 767, 786, 793 restricting discovery 805 root map 704, 725, 800 search function 822 seed file 805, 807 Server Setup 806807 SmartSets 725, 733, 794, 802, 813 SmartSets and Data Collection 802 status propagation 711, 800 submap stack 705 submap window 705 submaps 704, 725 supported MIBs 768 System Configuration view 706 Tivoli Storage Area Network Manager view 706 topology map 781, 785, 805 trap 781 trap forwarding 343, 346 trapfrwd daemon 343344 traps 342 unacknowledge 744 unmanage object 743 NetView commands nvsniffer 733, 794, 813 ovaddobj 344 ovstart 344 ovstart snmpcollect 784 ovstatus snmpcollect 784

network attached storage (NAS) 183 network bandwidth 781 network management 767 network monitoring 766, 787 network problem resolution 766 network resource allocation 766 NIC 812 Not responding status 348 Notification Record 817 NW_FAT (NF) 79

O
object ID 769 obsolete files 548, 615 offline archived logs 987 On Demand environment 25 Open Shortest Path First (OSPF) 45 Oracle archive log 588 regular administration 591 orphaned files 548, 615 OSPF (Open Shortest Path First) 45 outband agent 346 Outband discovery 32 outband discovery 11 ovaddobj command 344 ovstart command 344 own subnet 900

P
password Common Agent Services 901 Resource manager 901 Tivoli Agent Manager 901 perfcli tool 940 performance database purge task 944 performance gauge 443 Performance Manager 21 command line interface 475 customized reports 976 Enable threshold button 458 ESS data collection 429 ESS data collection task 429 exporting data 976 function 428 gauges 22, 443 remote console 270 threshold filters 459 thresholds task 457 Volume Performance Advisor 23 performance metrics 774 perftool 475 personal firewall 77 PFA 815 PIM 44 Ping 592, 594, 604 ping command 216 pmcli 475 PMDATA table 977


poll interval for the SAN configuration 349 polling interval 349, 722 port 343 port number 295, 364, 911 port statistics 775 PPRC 829 Predictive Failure Analysis 815 prerequisite software 85 Probe 560, 585, 592594, 604, 634635 problem determination dmlog.txt 139 product 30 Productivity Center 16, 83, 362, 910 components 16 Productivity Center for Data 7 architecture 9 features 8 managing user IDs 68 Productivity Center for Disk functions 13 gauge definition 14 remote console 264 Volume Performance Advisor 14 Productivity Center for Fabric 9 agent 721 application launch 739 benefits 12 change device icon 738 change device label 738 Connection 738 Device Centric view 713 element managers 723 Host Centric View 732 Host Centric view 713714 Interconnect Elements 729 overview 9 parent icons 720 physical view 726 polling interval 722 SAN topology 718 SAN view 713714 Sensors/Events 737, 739 Submap Explorer button 725 switch display 737 switch port 722 topology map 781 topology view 725, 728 troubleshooting 902 unknown device 738 Productivity Center for Fabric - Agent checking inband agents 348 Productivity Center for Fabric Server outband agents 346 Productivity Center for Fabric. See Tivoli SAN Manager Productivity Center for Replication 23 overview 14, 828 Profile 617, 628, 672, 700 progressive incremental backup 633 Protocol-Independent Multicast 44 provider component 49 provisioning 320

proxy model 47 Python 979

Q
QMF 979 QMF for Windows 980 Query Management Facility (QMF) 980 Quota 594, 610 violation report 617

R
raswatch 921 RDBMS 177 RDBMS support 79 Redbooks Web site 998 Contact us xvii relational database management system 944 remote console 259 Performance Manager 270 Remote Fabric Agent Deployment option 160 remote GUI 295, 334 repcli command 894 repcli command syntax 885 repcli utility 885 Replication Manager 23 CLI 884 Continuous Synchronous Remote Copy 861, 873 copyset 833 Copyset details 870 create a group 838 create a storage group 838 Create Path wizard 850 define storage group 843 delete a storage pool 848 freeze operations 930 groups 831 Managing a storage pool 847 modify a storage group 841 overview 280, 828 Point-in-Time Copy session 852 remote CLI 280 replica sessions 23, 828 restarting 931 sequence 833 Session Properties window 876 Sessions window 873 setting up 834 start replication session 888 storage paths 850 storage pool 831 storage pool create 844 suspend a session 882 suspended status 875 synchronized state 881 tasks 830 troubleshooting 930 verifying source-target relationship 856 view group properties 842 view storage pool properties 849


Replication Manager (RM) 59 Replication Manager problem determination 930 Replication Manager subcomponent 150151 replication session 830 Replication subject matter expert 14 reporting assets 593, 595 availability 594, 604 backup 594 backup storage requirements 630 backups 627 batch 683, 688, 697 by userID 532, 683 capacity 594, 605 computer uptime 657 Constraint violation 614 customized 683 database assets 598 database batch 693 database capacity 607 database Quota violations 622 database space usage 685 database usage 608 disk capacity 605 filesystem capacity 605 owned by a username 686 Quota violation 617 saved reports 687 scheduling 683, 697 storage capacity 605, 684 storage subsystems 594 top 10 reports 653 uptime 657 usage 594, 607 usage violation 594, 610 wasted space 594 Web publishing 698 reporting categories 593 Tivoli Storage Resource Manager 593 reports HTML output 697 resetarchiveattribute 632 resource accounting 700 Resource Manager 30, 51, 168, 351, 993 configured user ID 901 registration 902 troubleshootng 902 user ID 76 user ID and password 67 Resource Manager ports 59 Reverse Path Forwarding 44 REXX 979 RFC 769 rmgauge 475 RNID 731 root cause analysis 814

S
SAN Cleared Record 817

discovery 805 fault isolation 817 Fault Record 817 historical reporting 767, 785 interconnects 728 management 794 monitoring 766, 787 navigation 725 Notification Record 817 performance data 774 real-time reporting 767, 786, 793 root cause analysis 814 switch port statistics 775 topology 704, 726, 728, 800 SAN (storage area network) 30 SAN component 345 SAN Manager 20, 320 SAN view 714 SAN Volume Controller thresholds 461, 466 SAN Volume Controller (SVC) 78, 84, 996 Scan 537, 563, 592, 601602, 614, 617, 675 Scan job log 602 scheduled reports 592 scheduling 697 SCSI 560 protocol 811 seed file 805, 807 Service Agent 31, 38, 900 service information 39 Service Location Protocol multicast 42 Service Location Protocol (SLP) 38, 290 service agent 38 user agent 38 service URL 40, 42 setdevice command 928 setessthresh/setsvcthresh 475 setfilter 475 setoutput 476 setoutput command 894 setuser command 216, 257 setuser interactive tool 215 showgauge 475 showsess command 886 Simple Network Management Protocol 36 Simple Network Management Protocol (SNMP) 64, 85 SLP active DA discovery 41 address 44 broadcast communication 43 CIM Agent 47 configuration recommendation 193, 900 DA configuration 900 DA considerations 192 DA discovery 41 DA functions 42 directory agent configuration 194, 900 Discovery 901 discovery requirements 242, 248


environment configuration 900 ESS CIM agent 224 firewall configuration 45 multicast address and port 193 multicast communication 43 multicast group 44 multicast messages 39 multicast request 40 multicast service request 41 passive DA discovery 42 port number 44, 901 registration 39 registration persistency 243 router configuration 193, 900 service agent 38 service attributes 39 service type 39 slp.conf 221 starting 211 unicast 42, 193 unicast communication 43 user agent 39 User Datagram Protocol message 39 verify install 211 verifyconfig command 224 when to use DA 43, 193 SLP (Service Location Protocol) 38 SLP considerations 928 SLP DA 38, 290, 900 SLP discovery summary 242, 901 SLP environment 38 slp logfile 929 SLP tracing 929 slp.conf file 221, 929 slp.reg file 243 slptool command 901 SmartSet 794 SmartSets 725, 733734, 794, 813 SMI-S 32 SMI-S (Storage Management Initiative - Specification) 35 SNIA (Storage Networking Industry Association) 35 SNMP 36, 73, 110, 733, 767, 795 agents 767 collect daemon 777, 782, 784 community name 343 console 346 manager 767 port 317, 345 trap 692, 815 trap forwarding 342343 SNMP management application 37, 346 station product 37 SNMP manager 37, 320 SNMP trap 37, 342343 spreadsheets 697 SQL Assist 962 SQL command example 982 SQL scripts considerations 956

SQL-Server 600 SRMURL 742 SSL configuration 135 standard MIB 768 standard reporting 594 standards organizations 4 startcimbrowser command 220 startesscollection 475 startsvccollection 475 Stochastic 817 stopcollection 475 stopflashsess command 893 stopsess command 893 storage capacity 605, 684 utilization 548 storage area network (SAN) 30 storage device 27, 63, 248, 320, 900, 994 important information 994 inband management 34 Storage Management Initiative - Specification 32 Storage Management Initiative - Specification (SMI-S) 35 Storage Networking Industry Association (SNIA) 35 storage orchestration 5 storage subsystem 290 Monitored check box 294 Policy Management jobs 294 Storage Subsystems reporting 594, 634 Structured Query Language (SQL) 944, 980 subagent 351352 subcomponent 30 subnet 900 query group membership 44 Subsystem Device Driver (SDD) 58 Suite Installer 110 suite installer 83, 304, 319, 361, 911, 993 launchpad 907 suspendsess command 892 SVC mdisk group 398 SVC CIMOM 234 console 234 Multiple Device Manager console account 235 register to SLP DA 241 SVC console account 234 verification 241 SVC data collection error 940 SWDATA 949 swFCPortTxFrames 775, 788, 802803 swFCRxErrors 783 swFCRxFrames 783 swFCTxErrors 783 swFCTxFrames 785 switch commands snmpmibcapset 772 switch port 722 switches administrative rights 347 API 347 display 728


environmentals 739 login ID 347, 772 management applications 740, 767 performance data 774, 781 port connections 738 port statistics 775 sensors 739 trap forwarding 342 zone information 347, 730 SW-MIB 771, 788 system reports 685

T
Table 600601 TCP/IP 811 TCP/IP ports 59 TEC 317, 711, 815 telnet 772 telnet command 220, 241 telnet connection 911 Test Connection button 258 threshold checking 21 threshold properties 462, 467 thresholds task 457 Tivoli 342 Tivoli Agent Manager 66 password 901 Tivoli Agent Manager service 171 Tivoli Common Agent Service 29, 901 Tivoli Common Agent Services 51 Tivoli Enterprise Console 545 Tivoli Event 76, 320 Tivoli Event Console (TEC) 320 Tivoli Monitoring for Databases 591 Tivoli NetView 12, 30, 60, 331, 993 7.1.3 icon 904 General Topology map service 60 Object Collection facility socket 60 Object Database 60 Object Database event socket 60 OVs_PMD management service 60 OVs_PMD request service 60 Pager 60 PMD service 60 SAN menu 75 SnmpServer 60 Topology Manager 60 Topology Manager socket 60 trapd socket 60 Web Server socket 60 Tivoli NetView installation 346 Tivoli NetView service 70 Tivoli SAN Manager 817 agents 343, 731 application discovery 740 change device icon type 708 change device label 708, 727, 744 Clear History 744 Configure Agents 711 configure management application 740

Configure Manager 349, 711, 744 database 815 Device Centric View 726, 731 device icons 709, 736 device label 736 device properties 736 display switch connections 738 ED/FI Configuration 711 ED/FI Properties 711 event forwarding 774 fabric ports 738 filesystem display 736 historical reporting 767, 774, 785 Host Centric View 726 host display 736 icons 709, 743 indication record 817 initial poll 349 iSCSI 812 iSCSI discovery 733, 813 iSCSi discovery 733 Launch Application 711, 740 launch Tivoli Storage Resource Manager 711 logical views 731732 LUN display 731 managed hosts 732 Navigation Tree 707 NetView traps 342 object status 709, 743 outband agents 346 physical topology 725 polling 343, 349 Predictive Failure Analysis 815 propagation 711 real-time reporting 767, 786, 793 RNID 731 SAN menu 711, 735 SAN Properties 711, 727, 735, 743 Sensors/Events 735 Set Event Destination 711 status colors 709 status cycle 743 status propagation 711, 800 submap 711 switch environmentals 739 Tivoli Storage Resource Manager 711 topology map 343, 726, 743744, 800 trap forwarding 342343, 346 zone display 347, 727, 730 Tivoli Storage Manage resetarchiveattribute 632 Tivoli Storage Manager 575, 588, 627 archive bit 627 Constraint violation report 610 progressive incremental backup 633 Tivoli Storage Manager capabilities Backup-Restore 594 Tivoli Storage Resource Manager 607, 615, 697 Alert 613 Alert Disposition 316


Alert log 317, 564, 616, 619 asset reporting 593, 595 at risk files 548 availability reporting 594, 604 backup reporting 594, 627, 659 backup storage requirements 630, 661 batch reports 592, 683, 688, 697 capacity reporting 594, 605 chargeback 700 Computer Alert 560 Computer Group 561 Computer Uptime 657 computer uptime 657 Constraint 610, 614 Constraint Violation report 614 CSV output 592, 697 customized reporting 683 database asset reporting 598 define Alert 559 Directory Alert 563 Directory Group 538, 598 directory monitoring 598 disk capacity 605 email notification 317 ESS reporting 634635, 653, 656 file statistics 528 Filesystem Alert 562 filesystem capacity 605 Filesystem Group 537 forbidden file 611 graphical reporting 616 Group definition 539 HTML output 592, 693, 697 interactive reporting 592 launch 711 LUN modeling 580 mail port 317 modified since backup files 629 monitored directories 598 most at risk files 627 My Reports 592, 683 NetWare reporting 595 obsolete files 548, 615 orphaned files 548 OS User Group Group 540 Ping 592, 594, 604 pre-defined reports 592 Probe 560, 592594, 604, 634635 Profile 617, 628, 672, 700 Quota 594, 610 Quota violation report 617 report scheduling 683, 697 reports on the Web 698 saved reports 687 Scan 563, 592, 614, 617, 675 Scan job log 602 scheduled reports 592 script parameters 561 standard reporting 594 Storage Subsystem Reporting 594, 653, 656

System Reports 683 system-wide view 598 TEC configuration 317 tool bar 602 top 10 reports 653 Triggered Action 560 Triggering condition 560, 562 uptime reporting 604, 657 usage reporting 594 usage violation reporting 594, 610 User Group 540 username reporting 532, 683 wasted space report 594 Tivoli Storage Resource Manager for Chargeback 592, 700 Tivoli Storage Resource Manager for Databases Alert 588 Alert log 589 asset reporting 585, 598 availability check 591 batch reports 693 capacity reporting 607 Computer Groups 584, 607 create Table Group 601 database instance report 667 database LUN reporting 669 Database Quota 591 database Scan 601 Database-Tablespace Alert 588 Instance Alert 588 Instance Quota 591, 622 My Reports 683 Network Quota 590 policy management 589 Probe 585 Profile 586 Quota 589 Quota violations 622 Scan 587, 601 script 589, 591 storage usage 665 system reports 685 Table Alert 589 Table Group 589, 600 usage reporting 608 User Group 585 user specific reports 688 top 10 reports 653 TotalStorage Productivity Center 6 communication user 67 components installed 910 Device Manager 20 distinct managers 31 environment 30 Event Action Plan 23 foundation 1 Launchpad 264, 313 launchpad 906 performance considerations 194 Performance Manager 21


remote console 259 SAN Manager 20 SAN view 727 server 994 universal user 66 vdisk display 384, 390, 396, 403 Version 2.1 320 TotalStorage Productivity Center (TPC) 83, 320, 991 trap 37 trap forwarding 342343, 346 trapfrwd 344 trapfrwd daemon 343 trapfrwd.conf 344 traps 781 trend analysis 766 troubleshooting enable trace logging 931 TRP-MIB 771 TSANM 405, 409, 413 TSANMDB database 75 TSRMsrv1 local account 177

W
WAS trace 930 WAS trace control 938 wasted space report 594 WBEM (Web-Based Enterprise Management) 34 WBEM browser 220 WBEM initiative 34 Web browser 698 Web-Based Enterprise Management (WBEM) 34 Webshpere Application Server trace 932 WebSphere changing user ID 76 startServer.log 927 WebSphere Application Server 29, 67, 295 ikeyman utility 135 Information panel 102 install 92 SSL communication 69 update 62 WebSphere logfile 927 WebSphere user ID password 76 Windows archive bit 627, 632 Windows Management Instrumentation 85 Windows Services 69 Windows Terminal Services 75 WWN 731, 822

U
unattended installation 93 unicode 980 Uniform Resource Locator (URL) 39 UNIX inode 562 uptime 604, 657 usage reporting 594, 607 usage violation reporting 594, 610 User Agent (UA) 31, 3840, 248, 290, 900 SLP User Agent interactions 41 User Datagram Protocol (UDP) 39 message 39 user ID 65, 85, 366, 991, 993 user name 348 user rights 125

X
XML 49 xmlCIM 34

Z
zones 730 zoning 347

V
vdisks display 384, 390, 396, 403 verifyconfig command 218, 224 Volume Performance Advisor 14, 23 authentication 488 getting started 482 multiple recommendations 481 predefined workload profiles 483 recommendation process 480 workload characteristics 479 workload profile 482 VPA overview 478 VPCCH table 926, 973 VPCLUS table 926 VPCRK table 926, 974, 983 VPVPD table 983 VTHRESHOLD table 977



Back cover

IBM TotalStorage Productivity Center V2.3: Getting Started


Effectively use the IBM TotalStorage Productivity Center
Learn to install and customize the IBM TotalStorage Productivity Center
Understand the IBM TotalStorage Open Software Family
IBM TotalStorage Productivity Center is a suite of infrastructure management software that can centralize, automate, and simplify the management of complex and heterogeneous storage environments. It can help reduce the effort of managing complex storage infrastructures, improve storage capacity utilization, and improve administration efficiency. IBM TotalStorage Productivity Center allows you to respond to on demand storage needs and brings together, in a single point, the management of storage devices, fabric, and data. This IBM Redbook is intended for administrators and users who are installing and using IBM TotalStorage Productivity Center. It provides an overview of the product components and functions. We describe the hardware and software environment required, provide a step-by-step installation procedure, and offer customization and usage hints and tips. This book is not a replacement for the existing IBM Redbooks, or product manuals, that detail the implementation and configuration of the individual products that make up the IBM TotalStorage Productivity Center, or the products as they may have been called in previous versions. We refer to those books as appropriate throughout this book.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE


IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks


SG24-6490-01 ISBN 0738494364
