Understand Storage Resource Agents and their function
Plan to migrate, implement and customize this release
Mary Lovelace
Alejandro Berardinelli
H. Antonio Vazquez Brust
Harsha Gunatilaka
Hector Hugo Ibarra
Danijel Paulin
Markus Standau
ibm.com/redbooks
International Technical Support Organization
Tivoli Storage Productivity Center V4.2 Release Update
December 2010
SG24-7894-00
Note: Before using this information and the product it supports, read the information in Notices on page xi.
First Edition (December 2010) This edition applies to Version 4, Release 2 of IBM Tivoli Storage Productivity Center (product numbers 5608-WB1, 5608-WB2, 5608-WB3, 5608-WC3, 5608-WC4, 5608-E14). This document was created or updated on February 17, 2011.
Copyright International Business Machines Corporation 2010. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv Chapter 1. Tivoli Storage Productivity Center V4.2 introduction and overview . . . . . . 1 1.1 Introduction to IBM Tivoli Storage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.1.1 Function in Tivoli Storage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.2 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2.1 Architecture overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2.2 Data server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2.3 Device server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2.4 Tivoli Integrated Portal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2.5 Tivoli Storage Productivity Center for Replication. . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2.6 DB2 database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2.7 Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.2.8 Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.2.9 Integration with other applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.3 Tivoli Storage Productivity Center family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.3.1 TPC for Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.3.2 TPC for Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.3.3 TPC for Disk Midrange Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.3.4 TPC Basic Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.3.5 Tivoli Storage Productivity Center Standard Edition . . . . . . . . . . . . . . . . . . . . . . . . 7 1.3.6 TPC for Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.3.7 SSPC 1.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 1.3.8 IBM System Director Storage Productivity Center . . . . . . . . . 
. . . . . . . . . . . . . . . 10 1.4 New function since version 4.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 1.4.1 What is new for Tivoli Storage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . 10 1.4.2 What is new for IBM Tivoli Storage Productivity Center for Replication . . . . . . . . 13 1.5 Contents of the book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Chapter 2. Tivoli Storage Productivity Center install on Windows . . . . . . . . . . . . . . . 2.1 Tivoli Storage Productivity Center installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1.1 Installation overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1.2 Product code media layout and components . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Preinstallation steps for Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2.1 Verifying system hardware and software prerequisites. . . . . . . . . . . . . . . . . . . . . 2.2.2 Verifying primary domain name systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2.3 Activating NetBIOS settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2.4 User IDs and passwords to be used and defined . . . . . . . . . . . . . . . . . . . . . . . . . 2.3 Installing TPC prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.1 DB2 installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 Installing Tivoli Storage Productivity Center components . . . . . . . . . . . . . . . . . . . . . . . 2.4.1 Creating the Database Schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
2.4.2 Installing Tivoli Storage Productivity Center components . . . . . . . . . . . . . . . . . . . 2.4.3 Agent installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4.4 Disabling TPC or TPC for Replication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.5 Applying a new build . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Chapter 3. Tivoli Storage Productivity Center install on Linux. . . . . . . . . . . . . . . . . . . 75 3.1 Tivoli Storage Productivity Center installation on Linux . . . . . . . . . . . . . . . . . . . . . . . . 76 3.1.1 Installation overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 3.1.2 Product code media layout and components . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 3.2 Preinstallation steps for Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 3.2.1 Verifying system hardware and software prerequisites. . . . . . . . . . . . . . . . . . . . . 78 3.2.2 Prerequisite component for Tivoli Storage Productivity Center V4.2 . . . . . . . . . . 78 3.3 Installing the TPC prerequisite for Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 3.3.1 DB2 installation: GUI install. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 3.4 Installing Tivoli Storage Productivity Center components . . . . . . . . . . . . . . . . . . . . . . . 96 3.4.1 Creating the database schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 3.4.2 Installing TPC Servers, GUI and CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 Chapter 4. Tivoli Storage Productivity Center install on AIX . . . . . . . . . . . . . . . . . . . 4.1 Tivoli Storage Productivity Center installation on AIX . . . . . . . . . . . . . . . . . . . . . . . . . 4.1.1 Installation overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1.2 Product code media layout and components . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Preinstallation steps for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.1 Verifying system hardware prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.2 Verifying system software prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.3 Prerequisite component for Tivoli Storage Productivity Center V4.2 . . . . . . . . . 4.3 Installing the prerequisites for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3.1 DB2 installation: Command line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.4 Installing Tivoli Storage Productivity Center components . . . . . . . . . . . . . . . . . . . . . . 4.4.1 Creating the database schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.4.2 Installing Tivoli Storage Productivity Center components . . . . . . . . . . . . . . . . . . Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level. . 5.1 Migration considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.1.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.1.2 Database considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.1.3 TPC-R considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 Credentials migration tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3 Agent Manager, Data and Fabric agents consideration . . . . . . . . . . . . . . . . . . . . . . . 
5.4 Migration scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4.1 Migration from version 3.x. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4.2 Migration from version 4.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.5 Upgrading Storage Resource Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.6 Upgrading TPC-R in high availability environment . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 6. Agent migration and upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.1 CAS and SRA history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.2 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3 Scenarios to migrate from CAS to SRA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3.1 Installation wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3.2 TPC user interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3.3 Command line interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.4 CIMOM to NAPI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 122 122 122 123 123 124 124 124 125 130 132 138 157 158 158 158 159 167 171 172 173 173 196 202 205 206 207 207 208 208 215 215
Chapter 7. Native API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.1 NAPI and other changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.1.1 Changed panels and/or tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.2 Behind the scenes - The External Process Manager . . . . . . . . . . . . . . . . . . . . . . . . . 7.3 Solution Design for device access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.3.1 Planning for NAPI and NAPI Discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.3.2 Planning for CIMOM Discovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.3.3 Planning for Configure Devices Wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.3.4 Planning for Monitoring Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.4 Using Configure Devices wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.4.1 Add/configure a IBM DS8000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.4.2 Add/configure a IBM SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.4.3 Add and configure a IBM XIV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.4.4 Add and configure a CIMOM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.5 Adding Fabrics and Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.5.1 Add/configure a Brocade Fabric/Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.5.2 Add/configure a McData Fabric/Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.5.3 Add/configure a Cisco Fabric or Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.5.4 Add/configure a Qlogic Fabric/Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.5.5 Add/configure a Brocade/McData Fabric/Switch. . . . . . . . . . . . . . . . . . . . . . . . . 7.6 Other enhancements and changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Chapter 8. Storage Resource Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255 8.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256 8.2 SRA Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256 8.2.1 User Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256 8.2.2 Platform Dependencies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257 8.2.3 Communication Requirements and Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258 8.3 SRA installation methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258 8.3.1 Local graphical installer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259 8.4 SRA deployment from TPC GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259 8.5 Local/CLI install of SRA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270 8.5.1 Steps to install the Storage Resource Agents through CLI. . . . . . . . . . . . . . . . . 271 8.6 Database Monitoring with SRA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 8.6.1 Register the database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274 8.6.2 Set up probes and scans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275 8.6.3 Database capacity reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279 8.6.4 Database usage reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282 8.7 NetApp/NSeries Monitoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 8.7.1 Overview of NAS support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 8.7.2 Configure TPC to manage IBM N Series or NetApp Filer through Windows Storage Resource Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287 8.7.3 Configure TPC to manage IBM N Series or NetApp Filer through UNIX Storage Resource Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301 8.7.4 Retrieving and Displaying Data about NAS Filer . . . . . . . . . . . . . . . . . . . . . . . . 310 8.8 VMware Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316 8.9 VMware Virtual Machine Reporting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316 8.10 Batch Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319 8.11 SRA Fabric Functionality. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325 8.11.1 HBA Library Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325 8.11.2 SRA Fabric Enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326 8.11.3 Fabric Agent Assignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326 8.12 Agent resource utilization . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
8.13 HBA Information Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327 8.14 Collecting SRA Support Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328 8.15 Clustering support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329 Chapter 9. Tivoli Storage Productivity Center for Disk Midrange Edition . . . . . . . . . 9.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2 Supported devices and firmware levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3 Licensing methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.4 Key benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 10. Tivoli Storage Productivity Center for Replication . . . . . . . . . . . . . . . . . 10.1 Whats new . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2 Open HyperSwap replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.2 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.3 How to setup Open HyperSwap session. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.4 How to perform Open HyperSwap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.5 TPC-R high availability with Open HyperSwap . . . . . . . . . . . . . . . . . . . . . . . . . 10.3 Copy set soft removal of hardware relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4 Log packages download from TPC-R GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.5 Path Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.6 SVC enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.6.1 SAN Volume Controller space-efficient volumes . . . . . . . . . . . . . . . . . . . . . . . 10.6.2 SAN Volume Controller incremental FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . 10.7 DS8000 enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.1 DS8000 extent space efficient volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.2 Global Mirror session enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.3 Multiple Global Mirror sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 11. XIV Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.1 Supported Firmware Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2 Adding an XIV to TPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.3 XIV Performance Counters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.4 XIV storage provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
Chapter 12. SAN Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.1 Purpose of SAN Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.2 Whats New in TPC 4.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.3 Pre-requisites for using SAN Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.4 Supported storage subsystems in SAN Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.5 Storage Resource Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.5.1 Storage Resource Group Monitoring and Alerting . . . . . . . . . . . . . . . . . . . . . . 12.6 Creating a Space-Only SAN Planner Recommendation . . . . . . . . . . . . . . . . . . . . . . 12.7 Creating a DR Planner Recommendation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.8 SAN Planner with SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331 332 332 332 333 335 336 336 337 337 339 352 357 361 367 370 376 376 377 385 385 388 392 395 396 396 396 397 401 402 402 402 403 403 403 405 422 446
Chapter 13. Job Management Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459 13.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460 13.2 Job Management Dictionary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460 13.2.1 Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460 13.2.2 Run . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460 13.2.3 Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460 13.3 Job Management changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461 13.3.1 Default Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
13.3.2 CLI and Event Driven Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.4 Job Management Panel explained . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.4.1 Entities details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.4.2 Schedules details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.4.3 Runs and Jobs details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.5 Walk through examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.5.1 List Schedule log files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.5.2 View and act on recommendation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 14. Fabric enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.1 FCoE support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.2 Additional switch models supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.2.1 Brocade 3016 / 5410 / 5470 / 5480 / 7800 / M5440 . . . . . . . . . . . . . . . . . . . . . 14.2.2 Brocade Silk Worm 7800 (IBM 2005-R06) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.2.3 Brocade DCX-4S Backbone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.2.4 Brocade 8000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.2.5 Cisco Nexus 5000 Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.3 Additional HBA and CNA models supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.4 Integration with Brocade Data Center Fabric Manager . . . . . . . . . . . . . . . . . . . . . . . 14.4.1 Supported functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.4.2 Adding a DCFM server into Tivoli Storage Productivity Center . . . . . . . . . . . . Chapter 15. IBM Total Productivity Center database considerations . . . . . . . . . . . . 15.1 Database tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.1.1 Setting DB2 variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.1.2 Tune the database manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.1.3 Change DB2 active logs directory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.2 TPC repository database sizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.2.1 Storage subsystem performance data sizing . . . . . . . . . . . . . . . . . . . . . . . . . . 15.3 Repository calculation templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.3.1 Worksheet - Sizing SVC performance collection . . . . . . . . . . . . . . . . . . . . . . . Chapter 16. IBM Total Productivity Center database backup on Windows . . . . . . . . 16.1 Before you start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.2 Scripts provided . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . 16.3 Database backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.4 Database backup method considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.4.1 Offline backup advantages and disadvantages . . . . . . . . . . . . . . . . . . . . . . . . 16.5 Common backup setup steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.6 Offline backup to filesystem setup steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.7 Offline backup to Tivoli Storage Manager setup steps . . . . . . . . . . . . . . . . . . . . . . . 16.7.1 Add new variables to Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.7.2 Configure Tivoli Storage Manager option file and password. . . . . . . . . . . . . . . 16.7.3 Reboot the TPC server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.7.4 Create an offline backup to Tivoli Storage Manager script . . . . . . . . . . . . . . . . 16.8 Online backup to Tivoli Storage Manager setup steps . . . . . . . . . . . . . . . . . . . . . . . 16.8.1 DB2 parameter changes for archive logging to Tivoli Storage Manager. . . . . . 16.8.2 Create online backup script for Tivoli Storage Manager . . . . . . . . . . . . . . . . . . 16.9 Online backup to a filesystem setup steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.9.1 Set up DB2 archive logging to a filesystem. . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.9.2 Create online backup script to filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.10 Performing offline database backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.10.1 Performing an offline backup to a filesystem . . . . . . . . . . . . . . . . . . . . . . . . . 16.10.2 Performing an offline backup to Tivoli Storage Manager . . . . . . . . . . . . . . . .
16.11 Performing online database backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.11.1 Performing an online database backup to Tivoli Storage Manager . . . . . . . . 16.11.2 Performing an online backup to a filesystem . . . . . . . . . . . . . . . . . . . . . . . . . 16.12 Other backup considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.13 Managing database backup versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.13.1 Managing backup versions for a filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . 16.13.2 Managing archive log files on a filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.13.3 Managing backup versions that you store in Tivoli Storage Manager. . . . . . . 16.14 Verifying a backup file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.15 Restoring the TPC database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.15.1 Restoring from offline backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.15.2 Restoring from online backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.15.3 Potential agent issues after the restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.16 Backup scheduling and automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.16.1 Frequency of full TPCDB backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.16.2 TPCDB backup automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 17. IBM Total Productivity Center database backup on AIX . . . . . . . . . . . . . 17.1 Before you start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.2 Scripts provided . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.3 Database backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.4 Database backup method considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.4.1 Offline backup advantages and disadvantages . . . . . . . . . . . . . . . . . . . . . . . . 17.5 Common backup setup steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.6 Offline backup to filesystem setup steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.7 Offline backup to Tivoli Storage Manager setup steps . . . . . . . . . . . . . . . . . . . . . . . 17.7.1 Add new variables to AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.7.2 Configure Tivoli Storage Manager option file and password. . . . . . . . . . . . . . . 17.7.3 Restart DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.7.4 Create an offline backup to Tivoli Storage Manager script . . . . . . . . . . . . . . . . 17.8 Online backup to Tivoli Storage Manager setup steps . . . . . . . . . . . . . . . . . . . . . . . 17.8.1 DB2 parameter changes for archive logging to Tivoli Storage Manager. . . . . . 17.8.2 Create an online backup script for Tivoli Storage Manager . . . . . . . . . . . . . . . 17.9 Online backup to a filesystem setup steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
17.9.1 Set up DB2 archive logging to a filesystem. . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.9.2 Create online backup script to filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.10 Performing offline database backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.10.1 Performing an offline backup to a filesystem . . . . . . . . . . . . . . . . . . . . . . . . . 17.10.2 Performing an offline backup to Tivoli Storage Manager . . . . . . . . . . . . . . . . 17.11 Performing online database backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.11.1 Performing an online database backup to Tivoli Storage Manager . . . . . . . . 17.11.2 Performing an online backup to a filesystem . . . . . . . . . . . . . . . . . . . . . . . . . 17.12 Other backup considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.13 Managing database backup versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.13.1 Managing backup versions for a filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . 17.13.2 Managing archive log files on a filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.13.3 Managing backup versions that you store in Tivoli Storage Manager. . . . . . . 17.14 Verifying a backup file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.15 Restoring the TPC database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.15.1 Restoring from offline backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.15.2 Restoring from online backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.15.3 Potential agent issues after the restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.16 Backup scheduling and automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
17.16.1 Frequency of full TPCDB backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589 17.16.2 TPCDB backup automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589 Chapter 18. IBM Total Productivity Center database backup on Linux . . . . . . . . . . . 18.1 Before you start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.2 Scripts provided . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.3 Database backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.4 Database backup method considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.4.1 Offline backup advantages and disadvantages . . . . . . . . . . . . . . . . . . . . . . . . 18.5 Common backup setup steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.6 Offline backup to filesystem setup steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.7 Offline backup to Tivoli Storage Manager setup steps . . . . . . . . . . . . . . . . . . . . . . . 18.7.1 Add new variables to Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.7.2 Configure Tivoli Storage Manager option file and password. . . . . . . . . . . . . . . 18.7.3 Restart DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.7.4 Create an offline backup to Tivoli Storage Manager script . . . . . . . . . . . . . . . . 18.8 Online backup to Tivoli Storage Manager setup steps . . . . . . . . . . . . . . . . . . . . . . . 18.8.1 DB2 parameter changes for archive logging to Tivoli Storage Manager. . . . . . 18.8.2 Create an online backup script for Tivoli Storage Manager . . . . . . . . . . . . . . . 18.9 Online backup to a filesystem setup steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.9.1 Set up DB2 archive logging to a filesystem. . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.9.2 Create online backup script to filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.10 Performing offline database backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.10.1 Performing an offline backup to a filesystem . . . . . . . . . . . . . . . . . . . . . . . . . 18.10.2 Performing an offline backup to Tivoli Storage Manager . . . . . . . . . . . . . . . . 18.11 Performing online database backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.11.1 Performing an online database backup to Tivoli Storage Manager . . . . . . . . 18.11.2 Performing an online backup to a filesystem . . . . . . . . . . . . . . . . . . . . . . . . . 18.12 Other backup considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.13 Managing database backup versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.13.1 Managing backup versions for a filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . 18.13.2 Managing archive log files on a filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.13.3 Managing backup versions that you store in Tivoli Storage Manager. . . . . . . 18.14 Verifying a backup file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.15 Restoring the TPC database . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.15.1 Restoring from offline backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.15.2 Restoring from online backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.15.3 Potential agent issues after the restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.16 Backup scheduling and automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.16.1 Frequency of full TPCDB backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.16.2 TPCDB backup automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 19. Useful information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19.1 User defined properties for Fabrics and Switches . . . . . . . . . . . . . . . . . . . . . . . . . . 19.2 IBM Software Support Lifecycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19.3 IBM Support Assistant. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19.4 Certificate errors in Windows Internet Explorer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19.4.1 Step 1: Adress mismatch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19.4.2 Step 2: New certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19.5 Tivoli Storage Productivity Center support matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . 19.6 DB2 hints. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19.6.1 SQL5005C System Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19.6.2 User ID to stop and start DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591 592 592 592 593 593 594 595 597 597 598 599 601 601 602 604 604 605 606 607 607 608 610 610 610 611 611 611 612 613 619 620 621 624 627 628 628 628 629 630 630 631 631 632 632 636 639 639 639
Appendix A. DB2 table space considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Selecting an SMS or DMS table space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Advantages of an SMS table space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Advantages of a DMS table space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix B. Worksheets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . User IDs and passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Server information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . User IDs and passwords for key files and installation. . . . . . . . . . . . . . . . . . . . . . . . . . LDAP information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Storage device information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IBM System Storage Enterprise Storage Server/DS6000/DS8000. . . . . . . . . . . . . . . . IBM DS4000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IBM SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix C. Configuring X11 forwarding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Preparing the display export . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Preparation of the AIX server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Preparation of the Windows workstation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Launching an Xming X Window session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . VNC Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . How to get Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. 
You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX 5L AIX AS/400 DB2 Universal Database DB2 DS4000 DS6000 DS8000 Enterprise Storage Server FlashCopy HACMP HyperSwap IBM NetView Passport Advantage Power Systems pSeries RDN Redbooks Redbooks (logo) System p System Storage DS System Storage System x System z Tivoli Enterprise Console Tivoli TotalStorage WebSphere XIV z/OS zSeries
The following terms are trademarks of other companies: Data ONTAP, FilerView, NetApp, Network Appliance, and the Network Appliance logo are trademarks or registered trademarks of Network Appliance, Inc. in the U.S. and other countries. Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Itanium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
Preface
IBM Tivoli Storage Productivity Center V4.2 is a feature-rich storage management software suite. The integrated suite provides detailed monitoring, reporting, and management within a single console. This IBM Redbooks publication is intended for storage administrators and users who are installing and exploiting the features and functions in IBM Tivoli Storage Productivity Center V4.2. The information in the book can be used to plan for, install, and customize the components of Tivoli Storage Productivity Center in your storage infrastructure.

There are important functional enhancements in this release:
- Storage Resource Agent: now supports file-level and database-level storage resource management (SRM) reporting for a broad set of platforms.
- IBM XIV Storage System support updated: adds discovery, provisioning, and performance management support.
- Storage Area Network (SAN) configuration planning: now includes best practice provisioning of replication relationships and supports basic provisioning of non-IBM storage systems.
- Open HyperSwap for the IBM AIX environment: delivers application failover (no disruption of application I/O) across a synchronous mirror distance.

Step-by-step procedures are provided to help perform tasks such as migrating to Storage Resource Agents, utilizing Native APIs, understanding and using the SAN configuration planning functions, and maintaining your DB2 database repository.
experience with a wide array of IBM storage products and software. He holds a degree in Management Information Systems from the University of Arizona.

Hector Ibarra is an Infrastructure IT Architect specializing in cloud computing and storage solutions, based in Argentina and currently working at the IBM Argentina Delivery Center (DCA). Hector was designated ITA Leader for the VMware Center of Competence in 2006. He specializes in virtualization technologies and has assisted several global IBM clients to deploy virtualized infrastructures across the world. Since 2009 he has been working as the Leader of the DCA Strategy and Architecture Services department, from where major projects are driven.

Danijel Paulin is a Systems Architect in IBM Croatia, working for the Systems Architect team in the CEE region. He has 13 years of experience in IT. Before joining IBM Croatia in 2003, he worked for a financial company in Croatia and was responsible for IBM mainframe and storage administration. His areas of expertise include IBM high-end disk and tape storage subsystems and the architecture and design of various HA/DR/BC solutions for mainframe and open systems.

Markus Standau is an IT Specialist from Germany who joined IBM in 1996. He has worked for IBM Global Services for eight years in the field of storage services and is currently working as a Field Technical Sales Support Specialist. He studied at the University of Cooperative Education in Stuttgart, Germany, and is a graduate engineer in Information Technology. His areas of expertise include Backup and Recovery, Disaster Recovery, and OnDemand Storage Services, as well as IBM Tivoli Storage Productivity Center since Version 2.1.

Thanks to the following people for their contributions to this project:

Don Brennan
Rich Conway
Bob Haimowitz
International Technical Support Organization

Marcelo Ricardo Die
IBM Argentina

Randy Blea
Diana Duan
William Olsen
Wayne Sun
Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review Redbooks form found at: ibm.com/redbooks
- Send your comments in an email to: redbooks@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD, Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400
Chapter 1. Tivoli Storage Productivity Center V4.2 introduction and overview
Fabric management:
- Zoning
- Asset reporting
- Performance reporting
- Replication management
- Alerting

In addition to these basic functions, Tivoli Storage Productivity Center includes a set of more advanced analytics functions, such as:
- Topology Viewer
- Data path explorer
- Configuration history
- Storage optimizer
- SAN planner
- Configuration analytics

The type of license you have determines which of these functions are available to you. Table 1-1 on page 3 provides a summary of the functions provided in Tivoli Storage Productivity Center.
Table 1-1   Tivoli Storage Productivity Center summary of functions

Disk Management (for storage subsystems):
- Discovery of storage resources
- Monitoring of storage resources
- Configuration (for example, creating volumes)
- Performance management

Fabric Management (for fabrics):
- Discovery of storage resources
- Monitoring of storage resources
- Configuration (for example, zoning)
- Performance management

Tape Management (for tape libraries):
- Discovery of storage resources
- Monitoring of storage resources

Data Management:
- Host centric: Discovery of storage resources, Monitoring of storage resources, Filesystem extension, Enterprise-wide reporting
- Application centric: Monitoring of relational databases (DB2, Oracle, SQL Server, Sybase), Chargeback
1.2 Architecture
IBM Tivoli Storage Productivity Center consists of several key components. In this section we show how these components are related and describe each of them briefly. We also describe the different interfaces that you can use to access Tivoli Storage Productivity Center and, finally, its integration with other products.
Figure 1-1 Tivoli Storage Productivity Center Version 4.2 - Architecture Overview
Single sign-on
Enables you to access Tivoli Storage Productivity Center and then Tivoli Storage Productivity Center for Replication using a single user ID and password.
1.2.7 Agents
Outside of the server, there are several interfaces that are used to gather information about the environment. The most important sources of information are the TPC agents (Storage Resource agent, Data agent, and Fabric agent) as well as SMI-S enabled storage devices that use a CIMOM agent (either embedded or as a proxy agent). Storage Resource agents, CIM agents, and Out of Band Fabric agents gather host, application, storage system, and SAN fabric information and send that information to the Data Server or Device Server. Note that Data agents and Fabric agents are still supported in Tivoli Storage Productivity Center V4.2; however, no new functions were added to those agents for this release. For optimal results when using Tivoli Storage Productivity Center, migrate the Data agents and Fabric agents to Storage Resource agents.
1.2.8 Interfaces
As TPC gathers information from your storage (servers, subsystems, and switches) across your enterprise, it accumulates a repository of knowledge about your storage assets and how they are used. You can use the reports provided in the user interface to view and analyze that repository of information from various perspectives and gain insight into the use of storage across your enterprise. The user interfaces (UI) enable users to request information and then generate and display reports based on that information. Certain user interfaces can also be used for configuration of TPC or storage provisioning for supported devices. The following interfaces are available for TPC:

TPC GUI: This is the central point of TPC administration. Here you can configure TPC after installation, define jobs to gather information, initiate provisioning functions, view reports, and work with the advanced analytics functions.

Java Web Start GUI: When you use Java Web Start, the regular TPC GUI is downloaded to your workstation and started automatically, so you do not have to install the GUI separately. The main reason for using Java Web Start is that it can be integrated into other products (for example, TIP). By using Launch in Context from those products, you are guided directly to the selected panel. The Launch in Context URLs can also be assembled manually and used as bookmarks.

TPCTOOL: This is a command-line (CLI) program that interacts with the TPC Device Server. It is most frequently used to extract performance data from the TPC repository database in order to create graphs and charts with multiple metrics, with various unit types, and for multiple entities (for example, subsystems, volumes, controllers, and arrays) using charting software. Commands are entered as lines of text (that is, sequences of typed characters) and output is returned as text. The tool also provides query, management, and reporting capabilities, but you cannot initiate discoveries, probes, or performance collection from the tool.

Database access: Starting with TPC V4, the TPC database provides views for accessing the data stored in the repository, which allows you to create customized reports. The views and the
required functions are grouped together into a database schema called TPCREPORT. To use these views, you need sufficient knowledge of SQL. To access the views, DB2 supports various interfaces, for example, JDBC and ODBC.
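If you want to explore these views before building a report, a quick way is the DB2 command line. The following sketch assumes the default database name TPCDB that is used elsewhere in this book and simply lists the views that the TPCREPORT schema exposes; the exact set of views depends on your TPC version:

db2 connect to TPCDB
db2 "select viewname from syscat.views where viewschema = 'TPCREPORT'"
db2 connect reset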
SNMP
For users planning to use the Simple Network Management Protocol (SNMP) trap alert notification capabilities of Tivoli Storage Productivity Center, SNMP Management Information Base (SNMP MIB) files are included on the installation media. The MIB is provided for use by your SNMP management console software (for example, IBM Tivoli NetView or HP OpenView). This allows you to better view TPC-generated SNMP traps from within your management console software.
utility was preinstalled on the SSPC server so that you could convert a PuTTY .ppk key that was protected by a password to OpenSSH key format. The PuTTY utility is no longer preinstalled on the SSPC server. Instead, you can download and install the PuTTYgen utility from a website.

With SSPC 1.5, single sign-on authentication is extended to SAN Volume Controller. When you use single sign-on authentication, you can enter one user ID and password to access multiple applications.

IBM Storwize V7000
Storwize V7000 is a hardware and software solution that provides online storage optimization through real-time data compression. This solution helps to reduce costs without performance degradation. The Storwize V7000 hardware consists of a set of drive enclosures. Control enclosures contain disk drives and two node canisters. The two nodes within the canisters make up an I/O group that is attached to the SAN fabric. A single pair of nodes is responsible for serving I/O on a given volume. Because a volume is served by two nodes, there is no loss of availability if one node fails or is taken offline.

Storwize V7000 can be used as a traditional RAID storage system, where the internal drives are configured into arrays and volumes are created from those arrays. Storwize V7000 can also be used to virtualize other storage systems. Storwize V7000 supports both regular and solid-state drives (SSDs). A Storwize V7000 system without any internal drives can be used as a storage virtualization solution.

Tivoli Storage Productivity Center provides basic support for this product, including discovery, probes, performance monitoring, Storage Optimizer, and SAN Planner. As with other systems, Storwize V7000 is displayed as an entity within data sources, reports, data collection schedules, the topology viewer, and so on. Tivoli Storage Productivity Center for Replication also supports Storwize V7000. As with SAN Volume Controller, single sign-on authentication is available for Storwize V7000.

System Storage DS Storage Manager Release 10.70
The IBM System Storage DS Storage Manager user interface is available for you to optionally install on the SSPC server or on a remote server. DS Storage Manager 10.70 can manage the DS3000, DS4000, and DS5000. With DS Storage Manager 10.70, when you use Tivoli Storage Productivity Center to add and discover a DS CIM agent, you can start the DS Storage Manager from the topology viewer, the Configuration Utility, or the Disk Manager of Tivoli Storage Productivity Center.

IBM Java Release 1.6
IBM Java 1.6 is preinstalled and can be used with DS Storage Manager 10.70. You do not need to download Java from Sun Microsystems.

DS CIM agent management commands
The DS CIM agent management commands (DSCIMCLI) for Release 6.0 are preinstalled on the SSPC server.

Optional media to recover image for 2805-MC5
Order Feature Code 9010 if you did not back up your SSPC 2805-MC5 image and you need to recover it. The optional media feature includes a recovery CD for the Microsoft Windows Server 2008 R2 Standard operating system and two recovery DVDs for the SSPC 1.5 image.
Native storage system interfaces provided for DS8000, SVC and XIV
To improve the management capabilities and performance of data collection for the DS8000, SAN Volume Controller (SVC), and XIV storage systems, native storage system interfaces are provided. TPC now communicates with these storage systems through the ESSNI interface for the DS8000, SSH for SVC, and XCLI for XIV. These interfaces replace the CIM agent (SMI-S agent) implementation.
SAN Planner
Tivoli Storage Productivity Center provides a new SAN Planner wizard which has been enhanced to support the following functions.
Space-efficient volumes
The SAN Planner now has an option to provision space-efficient volumes on supported storage subsystems. These storage subsystems are: SVC (v4.3 or later), XIV (v10.1 or later), and DS8000 (v4.3 or later).
Encrypted volumes
Tivoli Storage Productivity Center supports encrypted volumes for the DS8000. The SAN Planner has been enhanced to allow input from the user for encrypted volumes as needed. The SAN Planner currently supports encrypted volumes for the DS8000 and SAN Volume Controller (if the DS8000 is used as a backend device).
1.4.2 What is new for IBM Tivoli Storage Productivity Center for Replication
Tivoli Storage Productivity Center for Replication 4.2 adds the following new features, functions, and enhancements since Tivoli Storage Productivity Center V4.1. More details on Tivoli Storage Productivity Center for Replication can be found in Chapter 10, Tivoli Storage Productivity Center for Replication on page 335.
Tivoli Storage Productivity Center for Replication XIV Support
SAN Planner
Job Management Panel
Fabric Enhancements
VIO Server environment
Native Reporting provided in the Tivoli Storage Productivity Center
Tivoli Storage Productivity Center database backup on Windows
Tivoli Storage Productivity Center database backup on AIX
Tivoli Storage Productivity Center database backup on Linux
Chapter 2. Tivoli Storage Productivity Center install on Windows
Typical installation
The Typical installation allows you to install all the components of Tivoli Storage Productivity Center on the local server in one step, but you can still decide which components to install:
Server: Data Server, Device Server, Replication Manager, and TIP
Clients: Tivoli Storage Productivity Center GUI
Storage Resource Agent
The drawback of using the Typical installation is that the defaults for the components are used. Our recommendation is not to use the Typical installation, because you have much better control of the installation process when you use the Custom installation method.
Custom installation
The Custom installation allows you to install parts of Tivoli Storage Productivity Center separately and provides options to change default settings, such as user IDs and directories. This is the installation method that we recommend. When you install Tivoli Storage Productivity Center, you have these installable components:
Database Schema
Data Server and Device Server
Graphical User Interface (GUI)
Installation timing
The approximate time to install Tivoli Storage Productivity Center, including Tivoli Integrated Portal, is about 60 minutes. The approximate time to install Tivoli Storage Productivity Center for Replication is about 20 minutes.
Disk1 part 2 contains these Tivoli Storage Productivity Center components:
IBM Tivoli Integrated Portal
IBM Tivoli Storage Productivity Center for Replication
Disk1 part 3 contains:
IBM Tivoli Integrated Portal Fixpack
Note: Part 1, part 2, and part 3 are required for every TPC installation and need to be downloaded and extracted to a single directory.
Disk1 part 4 contains an optional component:
IBM Tivoli Storage Productivity Center Monitoring Agent for IBM Tivoli Monitoring
Note: On Windows, ensure that the directory name where the installation images reside has no spaces or special characters; spaces or special characters cause the Tivoli Storage Productivity Center installation to fail. For example, this happens if you have a directory name such as: C:\tpc 42 standard edition\disk1
The SRA zip file contains the Tivoli Storage Productivity Center Storage Resource Agents (SRAs). It does not come with a GUI installer. To understand how this installation method works, see <<< SRA chapter >>>. The Tivoli Storage Productivity Center Storage Resource Agent image contains the local agent installation components:
Storage Resource agent
Installation scripts for the Virtual I/O Server
The content of this disk is:
Directory: readme
Directory: sra
File: version.txt
In addition to the images mentioned above, these images are available:
Tivoli Storage Productivity Center Storage National Language Support
IBM Tivoli Storage Productivity Center for Replication Two Site Business Continuity License, which is available for Windows, Linux, and AIX
IBM Tivoli Storage Productivity Center for Replication Three Site Business Continuity License, which is available for Windows, Linux, and AIX
Physical media
The physical media shipped with the TPC V4.2 product consists of a DVD and a CD. The DVD contains the Disk1 part 1 and Disk1 part 2 content described in Passport Advantage and Web media content on page 19. The physical media CD is the same as the Web Disk2 media.
4. Enter the host name in the Computer name field. Click More to continue (see Figure 2-2).
5. In the next panel, verify that Primary DNS suffix field displays the correct domain name. Click OK (see Figure 2-3).
6. If you made any changes, you must restart your computer for the changes to take effect (see Figure 2-4).
Figure 2-4 You must restart the computer for changes to take effect
On the WINS tab, select Enable NetBIOS over TCP/IP and click OK (see Figure 2-6).
2. The next panel allows you to select the DB2 product to be installed. Select the DB2 Enterprise Server Edition Version 9.7 by clicking Install New to proceed as shown in Figure 2-8.
3. The DB2 Setup wizard panel is displayed, as shown in Figure 2-9. Click Next to proceed.
4. The next panel displays the license agreement; click I accept the terms in the license agreement (Figure 2-10).
5. To select the installation type, accept the default of Typical and click Next to continue (see Figure 2-11).
6. Select Install DB2 Enterprise Server Edition on this computer and save my settings in a response file (see Figure 2-12). Specify the path and the file name for the response file in the Response file name field. The response file will be generated at the end of the installation flow and it can be used to perform additional silent installations of DB2 using the same parameters specified during this installation. Click Next to continue.
7. The panel shown in Figure 2-13 shows the default values for the drive and directory to be used as the installation folder. You can change these or accept the defaults, then click Next to continue. In our installation, we accept to install on the C: drive.
8. The next panel requires user information for the DB2 Administration Server; this can be a Windows domain user. If it is a local user, select None - use local user account for the Domain field. The User name field is prefilled with a default user name. You can change it or leave the default, and type the password of the DB2 user account that you want to create (see Figure 2-14). Leave the check box Use the same user name and password for the remaining DB2 services checked and click Next to continue. DB2 creates a user with the following administrative rights:
Act as part of the operating system.
Create a token object.
9. In the Configure DB2 instances panel, accept the default and click Next to continue (see Figure 2-15).
10.The next panel allows you to specify options to prepare the DB2 tools catalog. Accept the defaults, as shown in Figure 2-16. Verify that Prepare the DB2 tools catalog on this computer is not selected. Click Next to continue.
11.The next panel, shown in Figure 2-17, allows you to set the DB2 server to send notifications when the database needs attention. Ensure that the check-box Set up your DB2 server to send notification is unchecked and then click Next to continue.
12.Accept the defaults for the DB2 administrators group and DB2 users group in the Enable operating system security for DB2 objects panel shown in Figure 2-18 and click Next to proceed.
13.Figure 2-19 shows the summary panel about what is going to be installed, based on your input. Review the settings and click Finish to continue.
The DB2 installation proceeds and you see a progress panel similar to the one shown in Figure 2-20.
15.The next panel allows you to install additional products. In our installation, we clicked Finish on the panel shown in Figure 2-22 to exit the DB2 setup wizard.
16. Click Exit on the DB2 Setup Launchpad (Figure 2-23) to complete the installation.
2. Create the SAMPLE database, entering the db2sampl command as shown in Figure 2-25.
3. Enter the following DB2 commands. Connect to the SAMPLE database, issue a simple SQL query, and reset the database connection: db2 connect to sample db2 select * from staff where dept = 20 db2 connect reset The result of these commands is shown in Figure 2-26.
We split the installation into two separate stages: we install the Database Schema first and, after that, we install the Data Server and the Device Server. We do this because, if you install all the components in one step and any part of the installation fails for any reason (for example, space or passwords), the installation suspends and rolls back, uninstalling all the previously installed components. However, you can also install the schema and the other Tivoli Storage Productivity Center components at the same time.
3. The License Agreement panel is displayed. Read the terms and select I accept the terms of the license agreement. Then click Next to continue (see Figure 2-28).
4. Figure 2-29 shows how to select typical or custom installation. You have the following options:
Typical installation: This selection allows you to install all of the components on the same computer by selecting Servers, Agents, and Clients.
Custom installation: This selection allows you to install the database schema, the TPC server, CLI, GUI, and Storage Resource Agent separately.
Installation licenses: This selection installs the Tivoli Storage Productivity Center licenses. The Tivoli Storage Productivity Center license is on the CD. You only need to run this option when you add a license to a Tivoli Storage Productivity Center package that has already been installed on your system. For example, if you have installed the Tivoli Storage Productivity Center for Data package, the license is installed automatically when you install the product. If you later decide to enable Tivoli Storage Productivity Center for Disk, run the installer and select Installation licenses. This option allows you to install the license key from the CD. You do not have to install the Tivoli Storage Productivity Center for Disk product.
In this chapter, we document the Custom installation. Also select the directory where you want to install Tivoli Storage Productivity Center. A default install directory is suggested; you can accept it or change it, and then click Next to continue.
5. In the Custom installation, you can select all the components in the panel shown in Figure 2-30. By default, all components are checked. Because in our scenario, we show the installation in stages, we only select the option to Create database schema, and click Next to proceed (see Figure 2-30).
6. To start the Database creation, you must specify a DB2 user ID and password. We suggest that you use the same DB2 user ID that you created when you installed DB2. Click Next, as shown in Figure 2-31.
Note: It is important that the user entered here is part of the DB2ADMNS group, because only members of that group are allowed to perform the actions that are required to create a new database and install the schema into that database. If the user is not part of the DB2ADMNS group, the installation is likely to fail at about 7% (see Figure 2-32 on page 39).
7. Enter your DB2 user ID and password again. This ID does not have to be the same ID as the first one. Make sure that the option Create local database is selected. By default, a database named TPCDB is created. Click Database creation details... to continue (see Figure 2-33).
The panel in Figure 2-34 allows you to change the default space assigned to the database. Review the defaults and make any changes; in our installation, we accepted the defaults. For better performance, we recommend that you:
Allocate the TEMP DB on a separate physical disk from the Tivoli Storage Productivity Center components.
Create larger Key and Big databases.
Select System managed (SMS), click OK, and then click Next to proceed (Figure 2-34). To understand the advantages of an SMS database versus a DMS database or Automatic Storage, refer to the section entitled Selecting an SMS or DMS table space in Appendix A, DB2 table space considerations on page 639.
Note: The Tivoli Storage Productivity Center schema name cannot be longer than eight characters. 8. You will see the Tivoli Storage Productivity Center installation information that you selected as shown in Figure 2-35. Click Install to continue.
Figure 2-36 is the Database Schema installation progress panel. Wait for the installation to complete.
9. Upon completion, the Successfully Installed panel is displayed. Click Finish to continue (Figure 2-37).
Attention: Do not edit or modify anything in the DB2 Control Center. This can cause serious damage to your table space. Simply use the DB2 Control Center to browse your configuration.
Log files
Check for errors and Java exceptions in the log files at the following locations:
<InstallLocation>\TPC.log
<InstallLocation>\log\dbSchema\install
For Windows, the default InstallLocation is C:\Program Files\IBM\TPC. Check for the success message at the end of the log files to confirm a successful installation.
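As a quick way to scan these logs from a Windows command prompt, you can, for example, use the findstr utility; the path below assumes the default installation location mentioned above:

findstr /i "error exception" "C:\Program Files\IBM\TPC\TPC.log"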
Preinstallation tasks
To install the Data Server and Device Server components, you must log on to the Windows system with a user ID that has the following rights, which any user that is part of the DB2ADMNS group has automatically:
Log on as a service.
Act as part of the operating system.
Adjust memory quotas for a process.
Create a token object.
Debug programs.
Replace a process-level token.
Be certain that the following tasks are completed:
The Database Schema must be installed successfully before you start the Data Server installation.
The Data Server must be successfully installed prior to installing the GUI.
The Device Server must be successfully installed prior to installing the CLI.
Custom installation
To perform a custom installation, follow these steps:
1. Start the Tivoli Storage Productivity Center installer.
2. Choose the language to be used for the installation.
3. Accept the terms of the License Agreement.
4. Select the Custom installation.
5. Select the components you want to install. In our scenario, we select the Servers, GUI, and CLI, as shown in Figure 2-39. Notice that the field Create database schema is grayed out. Click Next to continue.
Note: Because we also selected to install the Data agents and Fabric agents, the Register with the agent manager check box is selected and grayed out.
6. If you are running the installation on a system with at least 4 GB but less than 8 GB of RAM, you will get the warning message shown in Figure 2-40. Click the OK button to dismiss it and proceed with the installation.
7. In the Database administrator information panel, the DB2 user ID and password are filled in because we used them to create the Database Schema. See Figure 2-41. Click Next.
The user ID is saved to the install/uninstall configuration files, so if the password has changed since you first installed a TPC component, a wrong password might be populated into this panel.
8. We want to use the database TPCDB that we created in the previous section on the same machine, so we select Use local database and click Next to continue (Figure 2-42).
TPC can also run with the DB schema installed on another server. In this case, you have to install the TPC schema on that server following the procedure documented in the previous section. Then, when installing the other TPC components, you have to select the
Use remote database option and specify the host name of the server running the DB2 Manager. The other fields should be prefilled as shown in Figure 2-43. Verify their values and click Next.
Note: If you have the TPC schema already installed locally, the option of using a remote database is disabled. You have to uninstall the local copy and rerun the installation program to enable the remote database option.
If you selected to use a remote database, a warning message shown in Figure 2-44 is presented, reminding you to ensure that the remote DB2 instance is running before proceeding.
9. In the panel in Figure 2-45, enter the following information:
Data Server Name: Enter the fully qualified host name of the Data Server.
Data Server Port: Enter the Data Server port. The default is 9549.
Device Server Name: Enter the fully qualified host name of the Device Server.
Device Server Port: Enter the Device Server port. The default is 9550.
TPC Superuser: Enter the name of an OS group that will be granted the superuser role within TPC.
Note: If you select LDAP authentication later in the Tivoli Storage Productivity Center installation, then the value that you enter for the LDAP TPC Administrator group overrides the value that you entered here for the TPC superuser.
Host Authentication Password: This is the password used for internal communication between TPC components, such as the Data Server and the Device Server.
Note: This password can be changed when you right-click Administrative Services -> Services -> Device Server -> Device Server and select change password.
Data Server Account Password:
For Windows only: the TPC installer creates an ID called TSRMsrv1 with the password you specify here to run the Data Server service. The display name for the Data Server in the Windows Services panel is: IBM Tivoli Storage Productivity Center - Data Server.
WebSphere Application Server admin ID and Password: This is the user ID and password required by the Device Server to communicate with the embedded WebSphere. You can use the same user as the one that was entered on the panel in Figure 2-42 on page 45.
Note: If you select LDAP authentication later in the Tivoli Storage Productivity Center installation, then the value entered for the LDAP TPC Administrator group overrides the value you entered here for the WebSphere Application Server admin ID and password.
If you click the Security roles... button: The Advanced security roles mapping panel is displayed. You can assign a Windows OS group to each TPC role that you want to make an association with, so you can have separate authority IDs to do various TPC operations. The operating system group must exist before you can associate a TPC role with it. You do not have to assign security roles at installation time; you can assign these roles after you have installed TPC.
If you click the NAS discovery... button: The NAS discovery information panel is displayed. You can enter the NAS filer login default user name and password and the SNMP communities to be used for NAS discovery. You do not have to assign the NAS discovery information at installation time; you can configure it after you have installed TPC.
Click Next to continue (Figure 2-45).
10.The following panel lets us select an existing Tivoli Integrated Portal to use, or install a new one. Because we are installing a new instance, we have to specify the installation
directory and the port number. See Figure 2-46. TIP will use 10 port numbers, starting from the one specified in the Port field (called the Base Port). The 10 ports will be:
base port
base port+1
base port+2
base port+3
base port+5
base port+6
base port+8
base port+10
base port+12
base port+13
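For example, if you accept the default base port of 16310, TIP will use ports 16310, 16311, 16312, 16313, 16315, 16316, 16318, 16320, 16322, and 16323.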
The TIP administrator ID and password are pre-filled with the WebSphere admin ID and password specified during the Device Server installation (see Figure 2-45 on page 48).
11.The next panel, shown in Figure 2-47, allows you to choose the authentication method that TPC will use to authenticate the users: If you want to authenticate the users against the operating system, select this option and click Next. If you want to use an LDAP or Active Directory, you need to have an LDAP server already installed and configured. If you decide to use this option, select the LDAP/Active directory radio button, click Next, and additional panels are displayed.
a. If you selected the LDAP/Active Directory option, the panel shown in Figure 2-48 is displayed. Enter the LDAP server host name and change the LDAP Port Number if it does not correspond to the proposed default value. You need to fill in the Bind Distinguished Name and the Bind Password only if anonymous binds are disabled on your LDAP server. Then click Next to continue.
b. In the panel shown in Figure 2-49, you are required to insert the LDAP RDN for users and groups and the attributes that must be used to search the directory. When you click
Next, the TPC installation makes an attempt to connect to the LDAP server to validate the provided parameters. If the validation is successful, you are prompted with the next panel; otherwise an error message is shown explaining the problem encountered.
c. In the panel shown in Figure 2-50, you are requested to specify the LDAP user ID and password corresponding to the TPC Administrator and the LDAP group that will be mapped to the TPC Administrator group. Also in this panel, after filling in the fields and clicking Next, the installation program will connect to the LDAP server to verify the correctness of the provided values. If the validation is successful, the next installation panel is shown.
Warning: Due to the WebSphere Application Server APAR PK77578, the LDAP TPC Administrator user name value must not contain a space in it. 12.The Summary information panel is displayed. Review the information, then click Install to continue (see Figure 2-51).
The installation starts. You might see several messages related to Data Server installation similar to Figure 2-52.
You might also see several messages about the Device Server installation, as shown in Figure 2-53 and after that, messages related to the TIP installation, similar to Figure 2-54.
Note: The installation of the Tivoli Integrated Portal (TIP) can be a time-consuming process, requiring more time than the other TPC components. The TIP installation is finished when the progress bar has reached 74%.
a. The Welcome panel is displayed. See Figure 2-56. Click Next to proceed.
Warning: If you do not plan to use TPC for Replication, do not interrupt the installation by clicking the Cancel button. Doing so interrupts the installation process and results in a complete TPC installation rollback. Complete the installation and then disable TPC for Replication.
b. The installation wizard checks the system prerequisites to verify that the operating system is supported and the appropriate fix packs are installed (see Figure 2-57).
c. If the system passes the prerequisites check, the panel shown in Figure 2-58 is displayed. Click the Next button.
d. The license agreement panel is shown. Accept it and click Next as shown in Figure 2-59.
e. On the panel shown in Figure 2-60, you can select the directory where TPC for Replication will be installed. A default location is displayed. You can accept it or change it based on your requirements. We decided to install TPC for Replication into the E: drive. When done, click Next to continue.
f. In the panel shown in Figure 2-61, you can specify the TPC for Replication user ID and password. This ID is usually the system administrator user ID. If you are using Local OS authentication and you want to enable the Single Sign-On feature for this user ID, provide the same credentials that you provided for the WebSphere Application Server administrator (see step 9 on page 47).
Note: If you want to use another user ID, you need to create it before beginning the installation and ensure that it has administrator rights.
g. The Default ports panel is displayed. Ensure that the selected ports are available on the server and then click Next. See Figure 2-62.
h. Review the settings shown in Figure 2-63 and click Install to start the installation.
i. The installation of TPC for Replication starts. Several messages about the installation process are shown, such as the one in Figure 2-64.
j. After the TPC for Replication installation completes, a summary panel is shown that also reports the URL you can point a Web browser to in order to access the TPC-R Web user interface. Clicking the Finish button closes this panel, and the installation flow returns to the TPC installation panels (see Figure 2-65).
Note: Tivoli Storage Productivity Center for Replication is installed with no license. You must install the Two Site or Three Site Business Continuity (BC) license. 14.After the creation of the TPC uninstaller, you see the summary information panel (Figure 2-66). Read and verify the information and click Finish to complete the installation.
The following services are related to TPC:
IBM Tivoli Storage Productivity Center - Data Server
IBM WebSphere Application Server v6.1 - Device Server
IBM WebSphere Application Server v6.1 - CSM (this is the service related to TPC for Replication)
Moreover, another process, shown in Figure 2-68, is present in the Services list; it represents the Tivoli Integrated Portal.
used is Windows (SMB). We have also decided to run the agent as a non-daemon service. As a result, the command that we are issuing requires a minimal set of parameters and looks similar to this:
Agent -install -serverPort <serverport> -serverIP <serverIP> -installLoc <installLocation> -userID <userID> -password <password>
The meanings and the values of these parameters are specified in Table 2-1.
Table 2-1 Storage Resource agent install parameters

serverPort: The port of the TPC Data Server. The default value is 9549. Value used: 9549
serverIP: IP address or fully qualified DNS name of the server. Value used: colorado.itso.ibm.com
installLoc: Location where the agent will be installed (a). Value used: c:\tpcsra
userID: The user ID defined on the agent system. This is the user ID that the server can use to connect to the agent system. Value used: Administrator
password: Password for the specified user ID. Value used: itso13sj
a. Make sure that when you specify a directory to install the Storage Resource agent into, you do not specify an ending slash mark (\). For example, do not specify C:\agent1\ because this will cause the installation to fail.
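Combining the command template with the values from Table 2-1, the command we issued in our lab environment looks similar to the following (substitute your own server name, directory, user ID, and password):

Agent -install -serverPort 9549 -serverIP colorado.itso.ibm.com -installLoc c:\tpcsra -userID Administrator -password itso13sj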
To verify from the TPC GUI that the installation completed correctly, log on to the TPC GUI and go to Administrative Services -> Data Sources -> Data/Storage Resource Agents. The installed agent is now present in the list, as shown in Figure 2-70.
Note: For the agent installed on server maryl.itso.ibm.com, the Agent Type column is Storage Resource and the Last Communication Type is Windows.
On the panel shown in Figure 2-72, select Disabled under the Startup type menu and click the Stop button in the Service Status section. When the service has been stopped, click OK to close this panel.
Disabling TPC
To disable TPC, go to Start -> Settings -> Control Panel -> Administrative Tools -> Services. Right-click the following service: IBM WebSphere Application Server V6.1 - DeviceServer. Then select Properties, as shown in Figure 2-73.
On the panel shown in Figure 2-74, select Disabled under the Startup type menu and click the Stop button in the Service Status section. When the service has been stopped, click OK to close this panel.
Repeat the same procedure for the following services:
IBM Tivoli Storage Productivity Center - Data Server
IBM Tivoli Storage Resource agent - <directory>, if a Storage Resource agent is installed. <directory> is where the Storage Resource agent is installed; the default is <TPC_install_directory>\agent.
Optionally, you can also disable the following two services:
Tivoli Integrated Portal - TIPProfile_Port_<xxxxx>, where <xxxxx> indicates the port specified during installation. The default port is 16310.
IBM ADE Service (Tivoli Integrated Portal registry)
Note: Stop the Tivoli Integrated Portal and IBM ADE Service only if no other applications are using these services and you are not using LDAP.
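If you prefer the command line, the same services can also be stopped from an elevated command prompt with the net stop command. This is only a sketch; it assumes the display names shown above are accepted by net stop (you can also use the short service names shown in the service Properties panel):

net stop "IBM Tivoli Storage Productivity Center - Data Server"
net stop "IBM WebSphere Application Server V6.1 - DeviceServer"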
2. Prepare your environment. To upgrade Tivoli Storage Productivity Center you need to make sure that all the GUIs are closed. There is no need to stop any Tivoli Storage Productivity Center service.
3. Run the installer. Double-click setup.exe to start the installation program. The language selection window for the installation is displayed (Figure 2-77 on page 67). Click OK to continue.
4. In Figure 2-78, accept the license by clicking the radio button and click Next to continue.
5. Select Custom installation as the installation type (Figure 2-79 on page 68). Click Next to continue.
6. On the component selection window (Figure 2-80) all options are grayed out so you cannot change them. Click Next to continue.
7. Provide Database administrator information (Figure 2-81 on page 69). The fields should be filled in automatically. Click Next to continue.
8. Provide the database schema information shown in Figure 2-82. The fields should be filled in automatically. Click Next to continue.
9. Review the information provided and click Install to proceed with the upgrade (Figure 2-83). During the installation process, you are presented with several windows showing the installation progress.
10.The installation wizard may not always shut down the Device Server service, and in that case the installation fails with the message Cannot upgrade component Device Server, as shown in Figure 2-84. To avoid this error, you need to kill the process that is using the file C:\Program Files\IBM\TPC\device\apps\was. We used the Process Explorer utility for that purpose, as we show next.
Process Explorer can be downloaded for free from the Microsoft website at http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx. It is an executable file (no installation required) that provides much more information about Windows processes than the built-in Windows Task Manager. In the following steps we show how it works, focusing on the issue we had with the Device Server process not being restarted. It is not our intention to show all the functions of Process Explorer.
a. After you have downloaded the utility, double-click its icon; it opens a window like the one shown in Figure 2-85. If your operating system is 64-bit, a new executable file called procexp64.exe is generated on the same path where the original one resides.
b. Next, we need to know which process is causing the installation to fail, so we need to know the image path of each running process. Click View -> Select Columns (Figure 2-86) and then, on the Process Image tab, select the Image Path check box, as shown in Figure 2-87. This adds a new column to the Process Explorer main window where you can see the full image path of each running process. Click OK to finish.
c. Back in the Process Explorer window, scroll right to look at the image paths (Figure 2-88) and look for an entry like the one highlighted in the red box.
d. The highlighted process is the one running from the path shown in the Tivoli Storage Productivity Center installation wizard message. Kill the process by either pressing Del on your keyboard or right-clicking the process and selecting Kill Process. You should then be able to continue with the installation by clicking Next in the Tivoli Storage Productivity Center installation wizard.
11.After the installation completes, click Finish to exit the install wizard (Figure 2-89).
Chapter 3. Tivoli Storage Productivity Center install on Linux
Typical installation
The Typical installation allows you to install all the components of Tivoli Storage Productivity Center on the local server in one step. Our recommendation is not to use the Typical installation, because you have much better control of the installation process when you use the Custom installation method.
Custom installation
The Custom installation allows you to install each component of Tivoli Storage Productivity Center separately and deploy remote Fabric and/or Data agents on various computers. Additional panels are presented, allowing you to control the installation sequence of the components and specify additional TPC parameters. This is the installation method that we recommend.
Note: Tivoli Storage Productivity Center for Replication is no longer a stand-alone application. Tivoli Storage Productivity Center Version 4.2 now installs Tivoli Integrated Portal and Tivoli Storage Productivity Center for Replication Version 4.2 during the server components installation process.
When you install Tivoli Storage Productivity Center using the custom installation, you have the following installable components:
Database schema
Tivoli Storage Productivity Center Servers
Graphical User Interface (GUI)
Command Line Interface (CLI)
Data agent
Fabric agent
After Tivoli Storage Productivity Center Standard Edition is installed, the installation program starts the Tivoli Storage Productivity Center for Replication installation wizard. The approximate time to install Tivoli Storage Productivity Center, including Tivoli Integrated Portal, is about 60 minutes. The approximate time to install Tivoli Storage Productivity Center for Replication is about 20 minutes.
Physical media
The physical media shipped with the TPC V4.2 product consists of a DVD and a CD. The DVD contains the Disk1 parts described in Passport Advantage and Web media content on page 77. The physical media CD contains the SRA package.
Note: You must have the X11 graphical capability installed before installing DB2 using the GUI. Refer to Appendix C, Configuring X11 forwarding on page 651. To install DB2, log on as a user with root authority, and then use the following procedures.
If during the prerequisite check you receive an error message such as the one shown in Figure 3-1, you might need to install additional packages to satisfy DB2 dependencies before proceeding with the installation.
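As a minimal sketch, on an RPM-based distribution you could add missing libraries with the distribution package manager. The package names below are commonly needed by DB2 9.7 but are assumptions; use the exact names reported by the prerequisite check for your platform:

yum install libaio compat-libstdc++-33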
Refer to the following URL for additional information about DB2 installation requirements for your specific platform: http://www.ibm.com/software/data/db2/udb/sysreqs.html 2. Run the following command to execute the graphical installer: ./db2setup This will open the DB2 Setup Launchpad, as shown in Figure 3-2.
3. Select Install a Product from the left-hand panel, then select DB2 Enterprise Server Edition Version 9.7 and click the Install New button in order to proceed with the installation, as shown in Figure 3-3.
4. The DB2 Setup wizard panel is displayed, as shown in Figure 3-4. Click Next to proceed.
5. The next panel displays the software license agreement. Click Read non-IBM terms to display additional license information and, if you agree with all terms, click Accept and Next to continue (see Figure 3-5).
6. You will be prompted to select the installation type. Accept the default of Typical and click Next to continue (see Figure 3-6).
7. On the next panel, accept the default: Install DB2 Enterprise Server Edition on this computer and save my settings in a response file. Even though not required, we recommend that you generate such a response file because it will greatly ease tasks such as documenting your work. Specify a valid path and the file name for the response file in the Response file name field. Click Next when you are ready, as shown in Figure 3-7.
8. The panel shown in Figure 3-8 shows the default directory to be used as the installation folder. You can change the directory or accept the defaults. Make sure the installation folder has sufficient free space available, then click Next to continue.
9. After you click Next, if your system is an IBM System x or System p, you might see a panel titled, Install the IBM Tivoli System Automation for Multiplatforms Base Component (SA MP Base Component). This component is not required by Tivoli Storage Productivity Center, so choose Do not install SA MP Base Component and click Next.
Figure 3-9 Create new user for DB2 Administration Server (DAS)
10.You will be prompted whether you want to set up a DB2 instance. Accept the default to Create a DB2 instance and click Next to continue (see Figure 3-10).
11.In the panel shown in Figure 3-11, accept the default to create a Single partition instance and click Next to continue.
12.The next panel, shown in Figure 3-12, prompts you for user information for the DB2 instance owner. This user must have a minimal set of system privileges. Accept the default to create a New user and specify a password. Click Next when you are ready.
Note: The TPC database repository will be stored in the home directory of the DB2 instance owner specified here. Make sure to place the user's home directory on a file system that has sufficient free space available; /home is usually not large enough for database repositories! In general, choose the file system with the most available free space on your system to hold database repositories. If you are uncertain about the available file systems and their size, use the df -h command to get an overview.
13.The last user you have to specify is the DB2 fenced user, which is used to execute User Defined Functions (UDFs) and stored procedures. This user must have minimal system privileges as well. We recommend that you create a New user, as shown in Figure 3-13. Specify a new password and click Next to continue.
14.The next panel prompts you to prepare the DB2 tools catalog. Because this component is not required by Tivoli Storage Productivity Center, click Do not prepare the DB2 tools catalog as shown in Figure 3-14. Click Next when you are ready.
15.The panel shown in Figure 3-15 allows you to specify a notification SMTP (e-mail) server. You can optionally specify an existing server or click Do not set up your DB2 server to send notifications at this time; a notification server can always be specified after the installation is finished. Make a choice and click Next to continue.
Tip: Configuring DB2 to send e-mail notifications on errors and warning conditions can help resolve those conditions more quickly, thus improving the overall stability and resiliency of the solution. This is an important factor in preventing unplanned outages!
16.Figure 3-16 shows the summary panel about what is going to be installed. Review all settings and, if you agree with them, click Finish to begin copying files.
17.You will see a progress panel as the installer copies the required files. Wait for the installation to complete.
18.When the installation has completed successfully, you will see a panel such as Figure 3-17. Click Finish to close the panel.
19.After you have completed installing DB2, you need to edit the file /etc/group and add the root account to the db2iadm1 group. The db2iadm1 group line in /etc/group then looks like this:
db2iadm1:x:102:root
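As an alternative to editing /etc/group manually, the same change can usually be made with the usermod command; this is only a sketch and assumes the instance group is named db2iadm1, as shown above:

usermod -a -G db2iadm1 root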
2. In order to set the environment variables for the database instance, you need to source the instance profile (db2profile) found in the instance user's home directory:
. /home/db2inst1/sqllib/db2profile
Note: There is a space between . and /home.
3. After setting the DB2 environment variables, you can verify the installed version of DB2 by issuing the db2level command. The output indicates which DB2 instance is currently being used, which code release is installed, and whether the selected DB2 instance is 32-bit or 64-bit, as shown in Figure 3-19.
Important: Especially note whether the selected DB2 instance is 32-bit or 64-bit because this will greatly affect future installation steps! 4. Make sure that DB2 was started and is currently running by issuing the db2start command. If this gives you an error as shown in Figure 3-20, that means DB2 was already running when you issued the command. Otherwise it will be started now.
5. Enter the db2sampl command to create the SAMPLE database. The results look similar to Figure 3-21.
Note: This process can take several minutes to complete.
6. Enter the following commands to connect to the SAMPLE database, retrieve a list of all the employees that work in department 20, and reset the database connection:
db2 connect to sample
db2 "select * from staff where dept = 20"
db2 connect reset
7. If all steps completed successfully, you can remove the SAMPLE database. Enter the following command to do so:
db2 drop database sample
The results look similar to Figure 3-22.
mount -o ro /dev/cdrom /cdrom
3. Change to the installation directory where the CD-ROM is mounted, for example:
cd /cdrom
Note: Be sure to extract all parts of Disk 1 into the same directory!
Note: After adding the root account to the db2iadm1 group as outlined in 3.3.1, DB2 installation: GUI install on page 78, you need to log out and log back in to allow the system to pick up this change. Before proceeding, check that root is a member of this group by issuing the id command. Make sure that the output line contains the db2iadm1 group.
2. In order to set the environment variables for the database instance, you need to source the instance profile (db2profile) found in the instance user's home directory:
. /home/db2inst1/sqllib/db2profile
Note: There is a space between . and /home.
3. Make sure that DB2 was started and is currently running by issuing the db2start command. If this gives you an error as shown in Figure 3-20 on page 95, that means DB2 was already running when you issued the command. Otherwise it will be started now.
4. Change to the directory where you have extracted the Tivoli Storage Productivity Center Disk 1 software package, then launch the graphical installer by issuing the command:
./setup.sh
5. The Tivoli Storage Productivity Center installer is launched, prompting you to select an installation language (see Figure 3-23). Choose a language and click OK to continue.
6. The International Program License Agreement is displayed. Read the license text and, if you agree with it, click I accept the terms of the license agreement as shown in Figure 3-24. Click Next when you are ready to proceed with the installation.
7. The Installation Types panel is displayed, as seen in Figure 3-25. Click Custom Installation. In addition, you can change the TPC Installation Location to suit your requirements or accept the default. Make sure that the installation folder has sufficient free space available, then click Next to continue.
8. The panel shown in Figure 3-26 prompts you to select one or more components to install. Remove all check marks except for Create database schema for now. Click Next to continue with the installation.
9. The Database administrator information panel is displayed. Specify a user ID with administrative database authority as the Database administrator, such as db2inst1, as seen in Figure 3-27. Specify the corresponding password and click Next to continue.
10.The panel shown in Figure 3-28 is displayed. Enter the administrative user ID (in our case, db2inst1) and the corresponding password again as the DB user ID. Make sure that you select Create local database.
You can click Database creation details in order to verify additional details, as shown in Figure 3-29. Do not change the default values unless you are a knowledgeable DB2 administrator. Click Next to proceed with the installation.
Note: The TPC schema name cannot be longer than eight characters.
11.Figure 3-30 shows the summary information panel. Review the information that you have provided for the database schema installation. If you are in agreement that all data entered is correct, you can proceed by clicking Install.
12.The progress panel is displayed. Wait for the installation to finish; the results panel looks like Figure 3-31.
Note: You must have the X11 graphical capability installed before installing Tivoli Storage Productivity Center using the GUI. Refer to Appendix C, Configuring X11 forwarding on page 651.
Follow these steps to complete the installation process:
1. Log on as a user with root authority.
2. In order to set the environment variables for the database instance, you need to source the instance profile (db2profile) found in the instance user's home directory:
. /home/db2inst1/sqllib/db2profile
Note: There is a space between . and /home.
3. Make sure that DB2 was started and is currently running by issuing the db2start command. If this gives you an error as shown in Figure 3-20 on page 95, that means DB2 was already running when you issued the command. Otherwise it will be started now.
4. Change to the directory where you have extracted the Tivoli Storage Productivity Center Disk 1 software package, then launch the graphical installer by issuing the following command:
./setup.sh
5. The Tivoli Storage Productivity Center installer is launched, prompting you to select an installation language (see Figure 3-33). Choose a language and click OK to continue.
6. The International Program License Agreement is displayed. Read the license text and, if you agree with it, click I accept the terms of the license agreement as shown in Figure 3-34. Click Next when you are ready to proceed with the installation.
7. The Installation Types panel is displayed, as seen in Figure 3-35. Click Custom Installation, then click Next to continue.
8. The next panel prompts you to select one or more components to install. Remove all check marks except for Tivoli Storage Productivity Center Servers, GUI and CLI. Note that the database schema is greyed out, because it was already installed previously; see Figure 3-36. Click Next to continue with the installation.
Note: We do not recommend installing the Storage Resource Agent (SRA) at this time. Installing any SRA with the installer requires you to also uninstall the SRA using the installer, so in most cases using the TPC GUI later to deploy agents (instead of using the installer) is the more flexible approach. Because we do not plan to install the SRA at this time, there is no need to Register with the agent manager; we perform this step subsequently.
9. If you are running the TPC installation on a system with at least 4 GB but less than the recommended 8 GB of RAM, a warning message will be displayed, as seen in Figure 3-37. To ignore this message and continue with the installation, click OK.
Note: 8 GB of RAM is the minimum memory requirement to run both Tivoli Storage Productivity Center and Tivoli Storage Productivity Center for Replication. If you have less than 8 GB of RAM, you have to run only Tivoli Storage Productivity Center or Tivoli Storage Productivity Center for Replication because of system load. To do that, you must disable Tivoli Storage Productivity Center or Tivoli Storage Productivity Center for Replication after installation. Refer to Disabling TPC or TPC for Replication on page 63.
10.The Database administrator information panel is displayed, as seen in Figure 3-38. The database administrator user and password are automatically filled in, because we previously used them to create the database schema. Click Next to continue.
11.The database schema information panel is displayed, as shown in Figure 3-39. Because we already installed the database schema previously, nothing can be changed here. Click Next to continue with the installation. Note that if you want to use a remote database on another machine, then you need to install the TPC schema component on this machine first, following the procedure documented in the previous section. Afterwards, install the TPC server components, select the Use remote database option, and specify the host name of the server running the DB2 Manager.
12.If you selected to use a remote database, a warning message is presented to ensure that the remote DB2 instance is running before proceeding; see Figure 3-40.
13.The panel shown in Figure 3-41 requires the following inputs:
Data Server Name: Enter the fully qualified host name of the Data Server.
Data Server Port: Enter the Data Server port. The default is 9549.
Device Server Name: Enter the fully qualified host name of the Device Server.
Device Server Port: Enter the Device Server port. The default is 9550.
TPC Superuser: Enter an operating system group name to associate with the TPC superuser role. This group must exist in your operating system before you install Tivoli Storage Productivity Center. Membership in this group provides full access to the Tivoli Storage Productivity Center product. You can assign a user ID to this group on your operating system and log on to the TPC GUI using this user ID.
If you click the Security roles... button, the Advanced security roles mapping panel is displayed. You can assign an operating system group to each TPC role you want to make an association with, so you can have separate authority IDs to perform various TPC operations. The operating system group must exist before you can associate a TPC role with it. Except for the superuser role, you do not have to assign security roles at installation time; you can assign these roles after you have installed TPC.
Note: If you select LDAP authentication later in the Tivoli Storage Productivity Center installation, the values you enter for the LDAP TPC Administrator groups override the values you entered here for the TPC superuser.
You can record information used in the component installation, such as user IDs, passwords, and storage subsystems, in the worksheets in Appendix B, Worksheets on page 643.
Host Authentication Password: This is the password used by the Fabric agents to communicate with the Device Server. This password must be specified when you install the Fabric agent.
Data Server Account Password: This is not required for Linux installations; it is only required for Windows.
WebSphere Application Server Admin ID and Password: This is the user ID and password required by the Device Server to communicate with the embedded WebSphere. You can use any existing user ID here, such as the dasusr1 ID created during the DB2 installation. The WebSphere Application Server admin ID does not need to have any operating system privileges. This user will be used for the local Tivoli Integrated Portal (TIP) administrator ID.
If you click the NAS discovery... button, the NAS discovery information panel is displayed. You can enter the NAS filer login default user name and password and the SNMP communities to be used for NAS discovery. You do not have to assign the NAS discovery information at installation time; you can configure it after you have installed TPC.
Important: Ensure that you record all passwords that are used during the installation of Tivoli Storage Productivity Center.
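The TPC superuser group described above must exist before you run the installer. As a minimal sketch, assuming hypothetical names tpcsuper for the group and tpcadmin for a member user (substitute names that match your own standards), the group and user could be created on the Linux server as follows:
# Hypothetical group and user names for the TPC superuser role
groupadd tpcsuper
useradd -m -g tpcsuper tpcadmin
passwd tpcadmin
You can then enter tpcsuper in the TPC Superuser field and later log on to the TPC GUI as tpcadmin.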
When you are ready, click Next to continue. 14.The Tivoli Integrated Portal panel is displayed, as seen in Figure 3-42. You can select to install a new version of TIP or use an already existing installation on the local machine. TIP will use 10 port numbers starting from the one specified in the Port field (referred to as the Base Port). The 10 ports will be: base port, base port+1, base port+2, base port+3, base port+5, base port+6, base port+8, base port+10, base port+12, and base port+13. A quick check that these ports are free is sketched after this step.
The TIP administrator ID and password are pre-filled with the WebSphere Application Server admin ID and password specified in the previous step (Device Server installation). Click Next to continue.
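Before accepting the Base Port value, it can help to verify that none of the resulting ports are already in use. The following is a hedged sketch that assumes the default base port of 16310 (the TIP default noted later in this book); adjust the list if you enter a different base port:
# Report any of the assumed default TIP ports that are already in use
for p in 16310 16311 16312 16313 16315 16316 16318 16320 16322 16323
do
    netstat -an | grep "[.:]$p " > /dev/null && echo "Port $p appears to be in use"
done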
Important: TPC Version 4.2 only supports a TIP instance that is exclusively used by TPC and TPC-R, but no other application exploiting TIP. Expect support for multiple applications using a shared TIP instance in a future release. 15.The authentication selection panel is displayed (Figure 3-43). This panel refers to the authentication method that will be used by TPC to authenticate the users.
16.If you already have a valid Tivoli Integrated Portal instance on the system and it uses either OS-based or LDAP-based authentication, then TPC will use that existing authentication method. Otherwise, select the authentication method to use:
OS Authentication: This uses the operating system of the TPC server for user authentication.
LDAP/Active Directory: If you select LDAP or Microsoft Active Directory for authentication, you must have LDAP or Active Directory installed already.
Choose either of the two options and click Next to proceed. 17.If you decide to use LDAP/Active Directory authentication, additional panels are displayed to configure this authentication method. Refer to Installing TPC Servers, GUI and CLI on page 103 for additional details. 18.The summary information panel is displayed. Review the information, then click Install to continue as illustrated in Figure 3-44.
19.You will see a progress window as Tivoli Storage Productivity Center is installed. Wait for the installation to complete. 20.After the TPC Data Server, Device Server, GUI, and CLI installation are complete, the installing TIP panel is displayed (see Figure 3-45). Wait for the TIP installation to finish as well.
After the TIP installation has completed, the TPC for Replication installation is launched in a separate window. The TPC installation is temporarily suspended in the background and the TPC for Replication panel is displayed as seen in Figure 3-46.
3. If the system passes the check as seen in Figure 3-48, you can continue by clicking Next.
4. Read the license agreement text displayed on the next panel (Figure 3-49) and, if you agree with it, select I accept the terms of the license agreement prior to clicking Next.
5. Specify the Directory Name where you want to install TPC for Replication. You can either choose a directory by changing the location or by accepting the default directory. See Figure 3-50 for an example. Make sure that the installation folder has sufficient free space available, then click Next to continue with the installation.
6. The TPC-R Administrator user panel is displayed, as illustrated in Figure 3-51. Enter the user ID and password that will be used as TPC-R administrator. This user must already exist in the operating system and have administrator rights, such as the root account. When you are done, click Next to continue.
Note: If you prefer to use another user, you are required to create it beforehand and ensure that it has administrator rights. 7. The default WebSphere Application Server ports panel is displayed, as shown in Figure 3-52. Accept the defaults and click Next to continue.
8. The Installation Summary panel is displayed (Figure 3-53). Review the settings and make necessary changes if needed by clicking Back. Otherwise, click Install.
9. The TPC for Replication installation progress panel is displayed, as seen in Figure 3-54. Wait for the installation to finish.
10.The TPC-R Installation Result panel is displayed, as shown in Figure 3-55. Notice the URL to connect to TPC-R. Click Finish. Note: Tivoli Storage Productivity Center for Replication is installed with FlashCopy as the only licensed service. You must install the Two Site or Three Site Business Continuity (BC) license in order to use synchronous Metro Mirror and asynchronous Global Mirror capabilities.
11.After the TPC-R installation has completed, the TPC installer will continue creating the uninstaller as seen in Figure 3-56. Wait for the installation to complete.
12.After the installation has finished, you see the Summary Information panel (Figure 3-57). Read and verify the information and click Finish to complete the installation.
You have now successfully completed Tivoli Storage Productivity Center server installation.
Chapter 4.
Disk 1 contains all Tivoli Storage Productivity Center components:
Database Schema
Data Server
Device Server
GUI
CLI
Local Data agent
Local Fabric agent
Storage Resource agent
Remote Data agent
Remote Fabric agent
Tivoli Integrated Portal
Tivoli Storage Productivity Center for Replication
Note: Disk 1 has four parts to it. All parts must be downloaded and extracted into the same directory.
The SRA package contains local and remote agent installation images:
Local Data agent
Local Fabric agent
Storage Resource agents
Remote Data agent
Remote Fabric agent
Installation scripts for the Virtual I/O server
Physical media
The physical media shipped with the TPC V4.2 product consists of a DVD and a CD. The DVD contains the Disk1 parts described in Passport Advantage and Web media content. The physical media CD contains the SRA package.
If you have less than 8 GB of RAM, you have to run only Tivoli Storage Productivity Center or Tivoli Storage Productivity Center for Replication because of system load. To do that, you must disable Tivoli Storage Productivity Center or Tivoli Storage Productivity Center for Replication after installation.
For installations on AIX, you need a total of 6 GB of free disk space:
2.25 GB for the /tmp directory
3 GB for the /opt directory
250 MB in the /home directory
10 KB of free space in the /etc directory
200 MB in the /usr directory
50 MB in the /var directory
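As a quick check before starting the installation, you can display the free space in the file systems listed above. This is a minimal sketch using the AIX df command; the -g flag reports sizes in GB:
# Verify free space in the file systems used by the TPC installation on AIX
df -g /tmp /opt /home /etc /usr /var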
3. Select the product to install, ESE (DB2 Enterprise Server Edition) as shown in Figure 4-2.
4. Figure 4-3 shows the DB2 installation being initiated and informs you of the estimated time to perform all tasks.
2. Verify the owner of the directories. Do this by typing ls -ld against the directories as seen in Figure 4-4; the directory owners are displayed as defined in Step 1.
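If the DB2 groups and users referenced in Step 1 do not exist yet, they can be created with the AIX mkgroup and mkuser commands. The following is a hedged sketch only: the group names db2fadm1 and dasadm1 and all numeric IDs are assumptions (db2iadm1 is the instance owner group that is also used in step 5), so substitute values that match your standards:
# Assumed group names and numeric IDs; adjust as required
mkgroup id=999 db2iadm1
mkgroup id=998 db2fadm1
mkgroup id=997 dasadm1
mkuser id=1004 pgrp=db2iadm1 groups=db2iadm1 home=/home/db2inst1 db2inst1
mkuser id=1003 pgrp=db2fadm1 groups=db2fadm1 home=/home/db2fenc1 db2fenc1
mkuser id=1002 pgrp=dasadm1 groups=dasadm1 home=/home/dasusr1 dasusr1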
3. Set the DB2 user passwords:
passwd db2inst1
You are required to enter the password twice for verification; this is the password that you want to use for the DB2 instance owner.
passwd db2fenc1
You are required to enter the password twice for verification; this is the password that you want to use for the fenced user.
passwd dasusr1
You are required to enter the password twice for verification; this is the password that you want to use for the DB2 administration server (DAS) user.
4. Add authentication attributes to the users:
pwdadm -f NOCHECK db2inst1
pwdadm -f NOCHECK db2fenc1
pwdadm -f NOCHECK dasusr1
5. Change group db2iadm1 to include the root user:
chgroup users=db2inst1,root db2iadm1
6. Create a DB2 Administration Server (DAS):
/opt/IBM/db2/V9.7/instance/dascrt -u dasusr1
As shown in Figure 4-5, you should see a message indicating that the program completed successfully.
7. Create a DB2 instance: /opt/IBM/db2/V9.7/instance/db2icrt -a server -u db2fenc1 db2inst1 As shown in Figure 4-6, you should see a message indicating that the program completed successfully.
Source the instance profile:
. /home/db2inst1/sqllib/db2profile
Note: There is a space between . and /home.
8. Optional: Change the default location for database repositories. By default, this location is /home/db2inst1.
Note: /home is usually not large enough for database repositories. Choose a file system with enough free space to contain the IBM Tivoli Storage Productivity Center repository. In our case, we use the default repository location.
To change the default location, complete the following steps:
a. Type db2 update dbm cfg using DFTDBPATH <new repository path> IMMEDIATE, where <new repository path> represents the new location for the repository.
b. Type chown -R db2inst1:db2iadm1 <new repository path> to assign ownership to db2inst1 and permission to anyone in db2iadm1.
9. Configure DB2 communication:
a. Edit /etc/services and verify or add the following line at the end of the file: db2c_db2inst1 50000/tcp
b. Type db2 update dbm cfg using svcename db2c_db2inst1
c. Type db2set DB2COMM=tcpip
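Pulled together, steps 8 and 9 look like the following sketch. The repository path /db2data is purely an assumption used for illustration; the chown and /etc/services changes are run as root, and the db2 commands as the instance owner:
# As root: create the assumed repository path and register the DB2 service port
mkdir -p /db2data
chown -R db2inst1:db2iadm1 /db2data
grep -q db2c_db2inst1 /etc/services || echo "db2c_db2inst1 50000/tcp" >> /etc/services
# As db2inst1: point DB2 at the new path and enable TCP/IP communication
su - db2inst1 -c "db2 update dbm cfg using DFTDBPATH /db2data IMMEDIATE"
su - db2inst1 -c "db2 update dbm cfg using svcename db2c_db2inst1"
su - db2inst1 -c "db2set DB2COMM=tcpip"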
10.Add the DB2 license: a. Type cd /opt/IBM/db2/V9.7/adm b. Type ./db2licm -a <DB2 installer location>/db2/ese/disk1/db2/license/db2ese_o.lic Here, <DB2 installer location> represents the directory where the DB2 installer is located. 11.Restart DB2, as shown in Figure 4-7. a. Type db2stop force b. Type db2 terminate c. Type db2start
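To confirm that the license was applied and that the instance came back up cleanly, the following hedged check can be run; db2licm -l lists the installed licenses and db2level shows the level of the running instance:
/opt/IBM/db2/V9.7/adm/db2licm -l      # DB2 Enterprise Server Edition should be listed as licensed
su - db2inst1 -c "db2level"           # confirm the instance level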
5. If all steps completed successfully, you can remove the SAMPLE database. Enter the command db2 drop database sample to drop the SAMPLE database.
5. The International Program License Agreement is displayed. Click I accept the terms of the license agreement, and then click Next, as seen in Figure 4-10.
The Installation Types panel is displayed (Figure 4-11). Click Custom installation. In addition, you can change the TPC Installation Location to suit your requirements; we choose the default location, which is /opt/IBM/TPC. After you have completed this panel, click Next to continue.
6. The panel, Select one or more components to install on the local or remote computer, is displayed. Remove all check marks except for Create database schema as specified during the DB2 install. See Figure 4-12. Click Next to continue.
7. The Database administrator information panel is displayed. Enter the user ID and password for the DB2 instance owner as shown in Figure 4-13. Click Next to continue.
8. The new database schema information panel is displayed: a. Enter the DB user ID and password and select Create local database as seen in Figure 4-14.
b. If you click Database creation details, you will see the Database schema creation information panel (Figure 4-15). Do not change the default values unless you are a knowledgeable DB2 administrator. Click Next to continue. Refer to Appendix A, DB2 table space considerations on page 639 for the differences between SMS and DMS table spaces.
9. The summary information panel is displayed (Figure 4-16). Click Install to begin the database schema installation.
11.When the installation is complete, the installation results panel is displayed as shown in Figure 4-18.
5. The International Program License Agreement is displayed. Click I accept the terms of the license agreement, and then click Next, as seen in Figure 4-21.
6. The installation types panel is displayed (Figure 4-22). Click Custom installation and then click Next to continue.
7. The panel, Select one or more components to install, is displayed. Select these choices:
Tivoli Storage Productivity Center Servers
GUI
CLI
Data agent (optional)
Fabric agent (optional)
After you have made the required selections as shown in Figure 4-23, click Next.
Note: We do not recommend installing the Storage Resource Agent (SRA) at this time. Installing an SRA with the installer requires you to also uninstall it with the installer, so in most cases deploying agents later from the Tivoli Storage Productivity Center GUI (instead of using the installer) is the more flexible approach. 8. If you are running the Tivoli Storage Productivity Center installation on a system with at least 4 GB but less than the recommended 8 GB of RAM, a warning message will be displayed as seen in Figure 4-24. To ignore this message and continue with the installation, click OK.
Note: Attempting to install Tivoli Storage Productivity Center V4.2 on a system with less than 4 GB of RAM results in an error message and the installation fails. 9. The Database administrator information panel is displayed (Figure 4-25). The DB2 user ID and password are automatically filled in because we used them to create the database schema. Click Next.
10.The database schema panel is displayed (Figure 4-26). You have the option to select a local database or alternatively a remote database to be used by the Data Server and Device Server. We select the Use local database, because this is the database schema installed in the previous steps. Click Next.
11.The next panel, shown in Figure 4-27, requires the following inputs: Data Server Name: Enter the fully-qualified host name of the Data Server.
Data Server Port: Enter the Data Server port. The default is 9549.
Device Server Name: Enter the fully-qualified host name of the Device Server.
Device Server Port: Enter the Device Server port. The default is 9550.
TPC Superuser: Enter an operating system group name to associate with the TPC superuser role. This group must exist in your operating system before you install Tivoli Storage Productivity Center. Membership in this group provides full access to the Tivoli Storage Productivity Center product. You can assign a user ID to this group on your operating system and start the Tivoli Storage Productivity Center GUI using this user ID.
Note: If you select LDAP authentication later in the Tivoli Storage Productivity Center installation, then the value you enter for the LDAP TPC Administrator group overrides the value you entered here for the TPC superuser.
Host authentication password: This is the password used by the Fabric agent to communicate with the Device Server. This password must be specified when you install the Fabric agent.
Data Server Account Password: This is not required for AIX installations; it is only required for Windows.
WebSphere Application Server Admin ID and Password: This is the WebSphere administrator user ID and password required by the Device Server to communicate with embedded WebSphere. In our case, we use the db2inst1 user; you can use the TPC Superuser here. This user will be used for the local Tivoli Integrated Portal administrator ID.
Note: If you select LDAP authentication later in the Tivoli Storage Productivity Center installation, then the value you enter for the LDAP TPC Administrator group overrides the value you entered here for the WebSphere Application Server admin ID and password.
Important: Ensure that you record all passwords that are used during the installation of TPC.
If you click the Security roles... button, the Advanced security roles mapping panel is displayed. You can assign a system group for each TPC role that you want to make an association with; this allows you the flexibility to set up separate authority IDs to perform various TPC operations. The operating system group must exist before you can associate a TPC role with a group. You do not have to assign security roles at installation time; you can assign these roles after you have installed TPC.
If you click the NAS discovery... button, the NAS discovery information panel is displayed. You can enter the NAS filer login default user name and password and the SNMP communities to be used for NAS discovery. You do not have to assign the NAS discovery information at installation time; you can configure it after you have installed Tivoli Storage Productivity Center.
Figure 4-27 Tivoli Storage Productivity Center Server and Agent information
13.The Tivoli Integrated Portal (TIP) panel is displayed (see Figure 4-28). You can select to install the TIP program or use an existing TIP install.
Important: TIP must be installed on the same server as the TPC server. It is important to note that you are limited to one Tivoli Storage Productivity Center instance per TIP.
TIP will use 10 port numbers starting from the one specified in the Port field (referred to as the Base Port). The 10 ports will be: base port, base port+1, base port+2, base port+3, base port+5, base port+6, base port+8, base port+10, base port+12, and base port+13.
The TIP administrator ID and password are pre-filled with the WebSphere Application Server admin ID and password specified during step 11 (Device Server installation). We have chosen to install the TIP program and not use an existing TIP. You have to specify the installation directory as well as the port to be used; we accept the defaults. Click Next to continue.
13.The authentication selection panel is displayed (Figure 4-29). This panel refers to the authentication method that will be used by Tivoli Storage Productivity Center to authenticate the users.
If you have a valid Tivoli Integrated Portal instance on the system and it uses either OS-based or LDAP-based authentication, then Tivoli Storage Productivity Center will use that existing authentication method. Otherwise, select the authentication method to use: OS Authentication:
This uses the operating system for user authentication. LDAP/Active Directory: If you select LDAP or Microsoft Active Directory for authentication, you must have an LDAP or Active Directory already installed and set up. Method 1: Choose OS Authentication, then click Next to continue. 14.The summary information panel is displayed (Figure 4-30). Review the information; at this stage, it is a good idea to check that you have sufficient space in the required file systems. Click Install to continue. Note: Remember that the Replication Server is included in the installation of Tivoli Storage Productivity Center V4.2 by default, as mentioned before.
15.You will see the installing panel indicating various stages within the installation process, as shown in the following example. The installation starts with the Data Server installation as seen in Figure 4-31. The installer will proceed through the separate components after the previous component has installed successfully. Note: If the installer fails to install a specific component, the process will stop and the installer will uninstall all components.
The Installing Device Server panel is shown in Figure 4-32. You can see various messages during the Device Server installation process, and when complete, the installer will briefly display the installing panel for the GUI, CLI, and Agents (if selected). When done, the installing TIP panel is displayed as seen in Figure 4-33.
Important: During the installation of Tivoli Integrated Portal on AIX systems, the progress bar incorrectly indicates that the Tivoli Integrated Portal installation is 100% complete even though it is not yet complete. Continue to wait until the installation is complete. The installation of Tivoli Integrated Portal can be a time consuming exercise, so be patient.
To install TPC for Replication, we follow steps a through j: a. The Welcome panel is displayed as seen in Figure 4-34; choose Next to continue. IMPORTANT: If you are not planning to use TPC for Replication and you attempt to cancel or bypass the installation, it will result in an interruption in the installation process, which will invoke a complete Tivoli Storage Productivity Center installation rollback. b. The System prerequisites check panel is displayed (Figure 4-35). At this stage the wizard will check that the operating system meets all prerequisite requirements as well as having the necessary fix packs installed.
c. If the system passes the check, you can continue by clicking Next.
d. Accept the License Agreement as seen in Figure 4-37. Click Next to continue.
e. Select the Directory Name where you want to install TPC for Replication. You can choose a directory either by changing the location or by accepting the default directory as we have done in Figure 4-38. Click Next to continue.
f. The TPC Administrator user panel is displayed (Figure 4-39). You are required to enter the user ID and password that will be used; this ID is usually the operating system administrator user ID. We choose the root user ID. Note: If you prefer to use another user ID, you are required to create it beforehand and ensure that it has administrator/system rights.
g. The Default WebSphere Application Server ports panel is displayed (Figure 4-40). Accept the defaults. Click Next to continue.
h. The settings panel is displayed (Figure 4-41). Review the settings and make the necessary changes if needed by clicking Back. Otherwise, click Install to continue.
i. The TPC for Replication installation progress panel is displayed (see Figure 4-42).
j. The TPC-R installation result panel is displayed (see Figure 4-43). Notice that the URL to connect to TPC-R is displayed. Click Finish to continue.
Note: Tivoli Storage Productivity Center for Replication is installed with FlashCopy as the only licensed service. You must install the Two Site or Three Site Business Continuity (BC) license in order to use synchronous Metro Mirror and asynchronous Global Mirror capabilities. 16.After the TPC-R installation has completed, the Tivoli Storage Productivity Center Installer will resume as seen in Figure 4-44.
17.The Tivoli Storage Productivity Center installation results panel is displayed (see Figure 4-45). Click Finish to continue.
2. Start the Tivoli Storage Productivity Center user interface (see Figure 4-47).
3. Verify that all services are started (Figure 4-48); the nodes should show as green.
Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level
5.1.1 Prerequisites
Before starting the upgrade, you have to ensure that your system meets the hardware and software requirements of Tivoli Storage Productivity Center version 4.2. You can check these at the following location: http://www.ibm.com/support/entry/portal/Planning/Software/Tivoli/Tivoli_Storage_Productivity_Center_Standard_Edition
1. Stop the IBM Tivoli Storage Productivity Center services and Agent Manager (if you have Agent Manager installed).
2. Pre-check the database for migration.
3. Back up the database.
4. Install DB2 9.7.
5. Migrate the DB2 instance.
6. Migrate the database.
7. Verify the migration.
8. Start the IBM Tivoli Storage Productivity Center services and Agent Manager (if you have Agent Manager installed).
For more information about the upgrade to DB2 version 9.7, see the TPC 4.2 Installation and Configuration Guide, SC27-2337-02. See also Upgrade to DB2 Version 9.7 in the IBM DB2 Information Center at the following location:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=/com.ibm.db2.luw.qb.upgrade.doc/doc/c0023662.html
Note: If the Tivoli Storage Productivity Center database is on a system remote from the server, you must also upgrade the remote database.
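The database-related steps above map to standard DB2 commands. The following is a minimal sketch only, assuming the instance owner is db2inst1, the repository database is TPCDB, DB2 9.7 is installed in /opt/IBM/db2/V9.7, and /backup/db2 is an assumed backup target; review the DB2 Information Center link above before running anything, and repeat the backup and database migration for IBMCDB if Agent Manager is installed:
/opt/IBM/db2/V9.7/bin/db2ckupgrade TPCDB -l /tmp/db2ckupgrade.log -u db2inst1 -p <password>   # step 2: pre-check the database
su - db2inst1 -c "db2 backup database TPCDB to /backup/db2"                                   # step 3: back up the database
/opt/IBM/db2/V9.7/instance/db2iupgrade db2inst1                                                # step 5: migrate the instance (run as root)
su - db2inst1 -c "db2 upgrade database TPCDB"                                                  # step 6: migrate the database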
3. On the panel shown in Figure 5-2, select Disabled under the Startup type menu and click the Stop button in the Service Status section. When the service has been stopped, click OK to close this panel.
On Linux and AIX: 1. To stop the Tivoli Storage Productivity Center for Replication Server on Linux and AIX issue the following command from the command prompt as shown in Figure 5-3: /opt/IBM/replication/eWAS/profiles/CSM/bin/stopServer.sh server1 -username <username> -password <password> 2. Here, <username> is the user ID and <password> is the password created during installation.
3. To disable the Tivoli Storage Productivity Center for Replication Server from starting on system reboot, you must edit /etc/inittab and comment out the line that starts Tivoli Storage Productivity Center for Replication, as shown in Figure 5-4.
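How this entry appears in /etc/inittab depends on how TPC for Replication was registered on your system. As a hedged sketch, assuming the entry references the /opt/IBM/replication/eWAS path used elsewhere in this chapter, it could be commented out like this (back up the file first):
cp /etc/inittab /etc/inittab.before-tpc-upgrade
# Prefix a '#' to any inittab entry that references the TPC-R WebSphere profile (the pattern is an assumption)
sed '/replication\/eWAS/ s/^/#/' /etc/inittab.before-tpc-upgrade > /etc/inittab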
If you plan to use only Tivoli Storage Productivity Center for Replication you can disable Tivoli Storage Productivity Center after the upgrade.
3. On the panel shown in Figure 5-6, select Disabled under the Startup type menu and click the Stop button in the Service Status section. When the service has been stopped, click OK to close this panel.
4. Repeat the same procedure for the following services:
IBM Tivoli Storage Productivity Center - Data Server
IBM Tivoli Common Agent - <directory> (<directory> is where the Common Agent is installed. The default is <TPC_install_directory>\ca)
IBM Tivoli Storage Resource agent - <directory> (<directory> is where the Storage Resource agent is installed. The default is <TPC_install_directory>\agent)
Tivoli Integrated Portal - TIPProfile_Port_<xxxxx> (<xxxxx> indicates the port specified during installation. The default port is 16310.)
IBM ADE Service (Tivoli Integrated Portal registry)
Note: Stop Tivoli Integrated Portal and IBM ADE Service only if no other applications are using these services.
On Linux:
1. To stop the Tivoli Storage Productivity Center services as seen in Figure 5-7, run these commands in the command prompt window:
Data Server: /<usr or opt>/IBM/TPC/data/server/tpcdsrv1 stop
Device Server: /<usr or opt>/IBM/TPC/device/bin/linux/stopTPCF.sh
2. Depending on whether or not you have a Data agent or Storage Resource agent installed, issue these commands accordingly: Common Agent: /<usr or opt>/IBM/TPC/ca/endpoint.sh stop Storage Resource agent: /<usr or opt>/IBM/TPC/agent/bin/agent.sh stop
On AIX:
1. To stop the Tivoli Storage Productivity Center services as seen in Figure 5-7, run these commands in the command prompt window:
Data Server: stopsrc -s TSRMsrv1
Device Server: /<usr or opt>/IBM/TPC/device/bin/aix/stopTPCF.sh
2. Depending on whether you have a Data agent or Storage Resource agent installed, issue these commands accordingly:
Common Agent: /<usr or opt>/IBM/TPC/ca/endpoint.sh stop
Storage Resource agent: /<usr or opt>/IBM/TPC/agent/bin/agent.sh stop
3. To disable the Tivoli Storage Productivity Center Server from starting on system reboot, you must edit /etc/inittab and comment out the line that starts Tivoli Storage Productivity Center, as shown in Figure 5-8.
Stop Tivoli Integrated Portal on AIX and Linux:
1. To stop Tivoli Integrated Portal, run this command in a command prompt window as shown in Figure 5-9:
<install_directory>/tip/profiles/TIPProfile/bin/stopServer.sh server1 -username <tipadmin> -password <password>
Here, <tipadmin> is the administrator user ID and <password> is the administrator password. Wait for the server to complete the operation.
2. To stop the IBM ADE Service, run these commands in a command prompt window:
Source the environment: . /var/ibm/common/acsi/setenv.sh
Run this command: /usr/ibm/common/acsi/bin/acsisrv.sh stop
Note: Stop Tivoli Integrated Portal and IBM ADE Service only if no other applications are using these services.
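Before continuing with the upgrade, it is worth confirming that none of the stopped components left Java processes behind. This is a hedged sketch; the grep patterns are assumptions based on the installation paths used in this chapter, and the command should produce no output:
# List any remaining TPC, TIP, or TPC-R processes
ps -ef | egrep "IBM/TPC|TIPProfile|replication/eWAS" | grep -v grep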
Note: If a device is already in a probe definition before the upgrade to TPC 4.2, it will not show up in the Configure Devices wizard, because that wizard is only for configuring devices that are not yet configured for monitoring. In this case, we recommend running the migration tool before the upgrade.
If you run the credentials migration tool after you have already upgraded Tivoli Storage Productivity Center to version 4.2, you will get the following error (Figure 5-11). In this case, you have to run the credentials migration tool from the Tivoli Storage Productivity Center GUI.
Note: On Windows, a new DLL msvcr90.dll is required to run the tool. If it is not installed, the migration tool will not start. If that happens, start TPC 4.2 installer, choose the language, and accept the license terms. At that point, the required DLL will be installed. You can go back and launch the stand-alone migration tool.
Figure 5-12 Credential Migration tool selection within TPC GUI installer
If you select to run the tool, the User Credentials Migration Tool window opens after you click the Install button on the summary window. The window shows a table of the subsystems that can be updated (Figure 5-10).
When you click the Update Subsystems button, the panel for changing credentials opens (Figure 5-14). If you close the Welcome screen, you can also start the panel from the Navigation Tree under Configuration → Update Storage Subsystem Credentials.
Note: The storage system credential migration applies to all DS8000 systems, XIV systems, and SAN Volume Controller systems. If you have run a CIMOM discovery job for a storage system but have not run a probe job for that system, and you then upgrade Tivoli Storage Productivity Center, the IP address does not display in the GUI. You must manually enter the IP address for that storage system.
Important: When running with a Tivoli Storage Productivity Center 4.2 server and a Data agent version 3.3.x or 4.1.x, you see the following limitations:
When you are using a Tivoli Storage Productivity Center 4.2 server and a Data agent lower than version 4.1.0, you get error messages in the logs for the storage subsystem performance and switch performance reports (GEN0324E and GEN0008E) if there is data. These error messages do not affect the reports. The report job ends with a warning message; the job status is correct, and the job log reflects the results of the report.
The performance constraint violation reports cannot run with a Tivoli Storage Productivity Center 4.2 server and a Data agent version 4.1.0 or lower. The Data agents have been removed from the agent list. You can migrate the Data agent to a Storage Resource agent to get a performance constraint violation report.
You cannot create a batch report for Rollup Reports by clicking IBM Tivoli Storage Productivity Center → Reporting → Rollup Reports → Asset → Computers → By Computer. The Data agents have been removed from the agent list. You can migrate the Data agent to a Storage Resource agent to get a batch report for Rollup Reports.
If you have Tivoli Storage Productivity Center 4.1.1 (or earlier) agents installed, and if you want to continue to use them, Table 5-1 shows the valid upgrade scenarios.
Table 5-1 Agent upgrade scenarios (using the Tivoli Storage Productivity Center V4.1.1 installation program on a non-TPC server for a local install)
Data agent or Fabric agent or both installed (version 4.1.1 or earlier) on local machine: If the Data agent or Fabric agent is down level, the agent will be upgraded to the latest V4.1.1 level. If the Data agent or Fabric agent is at the latest V4.1.1 level, you see a message that the agent is already installed. If the Storage Resource agent is at the latest V4.1.1 level, the Storage Resource agent is left as is. If the Storage Resource agent is not at the latest V4.1.1 level, the agent is migrated to a Data agent or Fabric agent.
No agent installed: The Data agent or Fabric agent is installed.
Table 5-2 Agent migration scenarios (migrating from TPC 4.1.1 or earlier versions to TPC 4.2)
Data agent or Fabric agent or both are installed: You have a choice: leave the Data agent or Fabric agent at the down level version, or migrate the Data agent or Fabric agent to a Storage Resource agent.
Storage Resource Agent is installed: You have a choice to upgrade or not upgrade the Storage Resource agent to 4.2.
No agent installed: The default Storage Resource agent is installed.
Note: You cannot use the Tivoli Storage Productivity Center V4.1.1 (or earlier) installation program on a Tivoli Storage Productivity Center V4.2 system.
Note: You can use the Tivoli Storage Productivity Center V4.2 installation program to install a local Storage Resource agent on a system that does not have the Tivoli Storage Productivity Center server installed. You can also use the Tivoli Storage Productivity Center GUI to deploy the Storage Resource agents (from the server system).
Step 2:
Note: Upgrade from TPC version 3.3.2 to TPC version 4.1 is described in detail in IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725, Chapter 3.
CROSSREFERENCE> shows you how to upgrade the agents after TPC is successfully upgraded to version 4.2. Before proceeding with the upgrade, there are various steps that must be performed.
c. Close the Terminal Services Manager window.
3. Stop all the TPC services. To stop the services on Windows, go to Start → Settings → Control Panel → Administrative Tools → Services. Right-click the service and select Stop. The following services have to be stopped:
IBM WebSphere Application Server V6 - Device Server
IBM TotalStorage Productivity Center - Data Server
IBM Tivoli Common Agent <directory>, where <directory> is where the Common Agent is installed. The default is <TPC_install_dir>/ca.
IBM WebSphere Application Server v6.1 - CSM, if you also have TPC for Replication
To stop the services on Linux:
Device server: /<TPC_install_directory>/device/bin/linux/stopTPCF.sh
Data server: /<TPC_install_directory>/data/server/tpcdsrv1 stop
Common agent: /<common_agent_install_directory>/ca/endpoint.sh stop
Storage Resource agent: /<SRA_install_directory>/agent/bin/agent.sh stop
IBM WebSphere Application Server V6.1 - CSM: /<usr or opt>/IBM/replication/eWAS/profiles/CSM/bin/stopServer.sh server1 -username <username> -password <password> (where <username> represents the ID of the TPC superuser and <password> represents the password for that user)
To stop the services on AIX:
Device server: /<TPC_install_directory>/device/bin/aix/stopTPCF.sh
Data server: stopsrc -s TSRMsrv1
Common agent: /<common_agent_install_directory>/ca/endpoint.sh stop
Storage Resource agent: /<SRA_install_directory>/agent/bin/agent.sh stop
IBM WebSphere Application Server V6.1 - CSM: /<usr or opt>/IBM/replication/eWAS/profiles/CSM/bin/stopServer.sh server1 -username <username> -password <password> (where <username> represents the ID of the TPC superuser and <password> represents the password for that user)
4. Back up your current TPC 4.1 server and databases (TPCDB and IBMCDB). IBMCDB is the Agent Manager database and TPCDB is the Tivoli Storage Productivity Center database. This is important in case of an upgrade failure:
a. Back up your TPC database using the DB2 backup process (see the TPC 4.1 Redbooks publication and the TPC 4.2 installation guide); a hedged example of the backup commands follows these preparation steps.
b. For Tivoli Storage Productivity Center and Tivoli Integrated Portal single sign-on authentication configuration, back up the WebSphere configuration files. The configuration files are located in the following directories:
TIP_installation_directory/profiles/TIPProfile/bin
TPC_installation_directory/device/apps/was/profiles/deviceServer/bin
The backup file is named WebSphereConfig_yyyy_mm_dd.zip, where yyyy is the year, mm is the month, and dd is the day.
Run the following commands on UNIX or Linux systems:
/IBM/Tivoli/tip/profiles/TIPProfile/bin/backupConfig.sh
/IBM/TPC/device/apps/was/profiles/deviceServer/bin/backupConfig.sh
Run the following commands on Windows systems:
\IBM\Tivoli\tip\profiles\TIPProfile\bin\backupConfig.bat
\IBM\TPC\device\apps\was\profiles\deviceServer\bin\backupConfig.bat
c. Back up the following registries:
InstallShield registries:
AIX: /usr/lib/objrepos/InstallShield/Universal/IBM-TPC/
UNIX: /root/InstallShield/Universal/IBM-TPC
Windows: C:\Program Files\Common Files\InstallShield\Universal\IBM-TPC
SRM legacy registry:
AIX: subsystem TSRMsrv#, where # can be any number
UNIX: /etc/Tivoli/TSRM
Windows: back up the Windows registry.
Common agent registry (if you have Data agents and Fabric agents installed):
AIX or UNIX: /usr/tivoli/ep*, /opt/tivoli/ep*
Windows: C:\Program Files\Tivoli\ep*
d. Back up the Tivoli GUID setting. Go to C:\Program Files\Tivoli\guid (for Windows) or /opt/tivoli/guid (for AIX or UNIX). Run the following command: tivguid -show >tpc_tivguid.txt
e. Back up the Agent Manager files and directories if you have Agent Manager installed:
AM_installation_directory/AppServer/agentmanager/config/cells/AgentManagerCell/security.xml
AM_installation_directory/AppServer/agentmanager/installedApps/AgentManager.ear/AgentManager.war/WEB-INF/classes/resources/AgentManager.properties
AM_installation_directory/os.guid
AM_installation_directory/certs
f. Back up Tivoli Storage Productivity Center server files and directories:
TPC_installation_directory/config
TPC_installation_directory/data/config
TPC_installation_directory/device/config
g. Back up the Data agent and Fabric agent files and directories (if you have the Data agent and Fabric agent installed):
TPC_installation_directory/config
TPC_installation_directory/ca/cert
TPC_installation_directory/ca/config
TPC_installation_directory/ca/*.sys
TPC_installation_directory/ca/subagents/TPC/Data/config
TPC_installation_directory/ca/subagents/TPC/Fabric/config
h. Back up any interim fixes or workaround code provided by Tivoli Storage Productivity Center support.
5. Restart all TPC services. To start the services on Windows, go to Start → Settings → Control Panel → Administrative Tools → Services. Right-click the service and select Start. The following services need to be restarted:
IBM WebSphere Application Server V6 - Device Server
IBM TotalStorage Productivity Center - Data Server
IBM Tivoli Common Agent <directory>, where <directory> is where the Common Agent is installed. The default is <TPC_install_dir>/ca.
IBM WebSphere Application Server v6.1 - CSM, if you also have TPC for Replication
To start the services on Linux:
Device server: /<TPC_install_directory>/device/bin/linux/startTPCF.sh
Data server: /<TPC_install_directory>/data/server/tpcdsrv1 start
Common agent: /<common_agent_install_directory>/ca/endpoint.sh start
Storage Resource agent: /<SRA_install_directory>/agent/bin/agent.sh start
IBM WebSphere Application Server V6.1 - CSM: /<usr or opt>/IBM/replication/eWAS/profiles/CSM/bin/startServer.sh server1 -username <username> -password <password> (where <username> represents the ID of the TPC superuser and <password> represents the password for that user)
To start the services on AIX:
Device server: /<TPC_install_directory>/device/bin/aix/startTPCF.sh
Data server: startsrc -s TSRMsrv1
Common agent: /<common_agent_install_directory>/ca/endpoint.sh start
Storage Resource agent: /<SRA_install_directory>/agent/bin/agent.sh start
IBM WebSphere Application Server V6.1 - CSM: /<usr or opt>/IBM/replication/eWAS/profiles/CSM/bin/startServer.sh server1 -username <username> -password <password>
(where <username> represents the ID of the TPC superuser and <password> represents the password for that user)
Note: If possible, reboot the Tivoli Storage Productivity Center servers. This action will stop any remaining Tivoli Storage Productivity Center Java processes that might not stop in a timely manner. It is important for the Tivoli Storage Productivity Center Device server to stop and restart cleanly. If this does not occur, a server reboot might be indicated. 6. Stop all Tivoli Storage Productivity Center jobs: Stop all jobs including performance monitor jobs, system and fabric probe jobs, scan jobs, and other probe jobs.
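The DB2 backup referenced in step 4a can be done with the standard DB2 backup command. This is a minimal sketch, assuming the default instance owner db2inst1 and an assumed target directory /backup/tpc with enough free space; back up IBMCDB only if Agent Manager is installed:
su - db2inst1
db2 backup database TPCDB to /backup/tpc      # TPC repository database
db2 backup database IBMCDB to /backup/tpc     # Agent Manager database (if present)
db2 list history backup all for TPCDB         # optional: confirm the backup completed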
3. The License Agreement panel is displayed. Read the terms and select I accept the terms of the license agreement. Then click Next to continue (see Figure 5-17 on page 179).
4. Figure 5-18 shows how to select a typical or custom installation. You have the following options:
Typical installation: This selection allows you to upgrade all of the components on the same computer. You have the options to select Servers, Clients, and Storage Resource Agent.
Custom installation: This selection allows you to select the components that you want to upgrade.
Installation licenses: This selection installs the Tivoli Storage Productivity Center licenses. The Tivoli Storage Productivity Center license is on the DVD. You only need to run this option when you add a license to a Tivoli Storage Productivity Center package that is already installed on your system.
Note that the installation directory field is automatically filled with the TPC installation directory on the current machine and grayed out. In our case, a previous version of TPC is already installed in the C:\Program Files\IBM\TPC directory. Select Custom Installation and click Next to continue.
Note: We recommend selecting the custom installation, which allows you to install each component of Tivoli Storage Productivity Center separately.
5. The panel with the TPC components is shown. The components already installed on the system are discovered, selected for upgrade, and greyed out. The current version of each component is displayed next to it. In our case, we have a TPC version 4.1.1.55 installed on our system without local Data agents or Fabric agents. Figure 5-19 shows the corresponding panel. Click Next to proceed with the installation. Note: Storage Resource Agent will be upgraded using TPC user interface after the TPC upgrade.
6. If you are running the upgrade on a system with at least 4 GB but less than 8 GB of RAM, you will get the warning message shown in Figure 5-20. You can close the message panel by clicking OK. Note: 8 GB of RAM is the minimum memory requirement to run both Tivoli Storage Productivity Center and Tivoli Storage Productivity Center for Replication. If you have less than 8 GB of RAM, you should run only Tivoli Storage Productivity Center or Tivoli Storage Productivity Center for Replication because of the system load. To do that, you must disable Tivoli Storage Productivity Center or Tivoli Storage Productivity Center for Replication after installation.
7. The DB2 user ID and password panel is shown as in Figure 5-21. The information in these fields is propagated. Click Next to proceed.
8. The Database Schema panel is shown as in Figure 5-22. All the information in this panel is already propagated. Verify it and click Next to continue.
9. In the TPC servers panel shown in Figure 5-23, verify that the fields are filled with the correct information. Also, the password fields are filled with propagated information. Click Next when you are done.
10.If Tivoli Storage Productivity Center detects that you have a DS8000, XIV, or SAN Volume Controller storage system, the "Storage Subsystem Credential Migration Tool" panel is displayed. Figure 5-24 shows the Storage Subsystem Credentials Migration Tool panel, which helps you migrate the existing storage system credentials for the native interfaces. If you want to run the migration tool after the upgrade, clear the Run Storage Subsystem Credential Migration Tool check box. Otherwise, select the check box. Click Next.
11.If the validation is successful, the summary panel shown in Figure 5-25 is presented. Review its content and click Install to start the upgrade.
Note: During the upgrade you will not see the panels with Tivoli Integrated Portal installation. 12.Upgrade will start with deploying Storage Subsystem Credential Migration Tool (Figure 5-26).
13.The Storage Subsystem Credential Migration Tool panel in Figure 5-27 will open where you can select the subsystems with credentials that can be updated automatically. Select the subsystems that you want to update and click Update.
Note: During this upgrade, we update only the DS8000 subsystem. Updating SVC credentials is described in detail in Chapter xx. 14.After you click Update, the subsystem is updated and removed from the table list. When you have updated the selected subsystems, click the Finish button and click Yes to confirm closing the Storage Subsystem Credential Migration Tool panel (Figure 5-28).
15.Multiple panels such as the ones in Figure 5-29 and Figure 5-30 are now shown.
Note: When you are upgrading the system, you might see several windows prompting you with the text Replace Existing File. Reply Yes to All to these prompts. Sometimes this dialog window is hidden behind the main installation panel, so make sure you check behind the main installation panel to see whether there are any hidden dialog panels. 16.During the TPC upgrade, the TPC for Replication upgrade program is launched. The TPC installation is temporarily suspended and remains in the background while the TPC for Replication installation starts and a Welcome panel is displayed; see Figure 5-31. If TPC for Replication is already installed on your system, it will be upgraded; if it is not present, it will be installed. In our system, we have a previous version of TPC for Replication already installed, so the following panels show a TPC for Replication upgrade. If this is the first time that TPC for Replication is installed on the system, the installation process and panels are the same as those shown in Chapter xxx.
17.The installation wizard checks on the system prerequisites to verify that the operating system is supported and the appropriate fix packs are installed. If the system passes the prerequisites check, the panel shown in Figure 5-32 is displayed. Click Next to continue.
18.The license agreement panel is shown. Accept it and click Next as shown in Figure 5-33.
19.In the panel shown in Figure 5-34, you can select the directory where TPC for Replication will be installed. The directory where TPC for Replication is currently installed is proposed as the default location. You can accept it or change it based on your requirements. Click Next to continue.
20.The upgrade program checks for currently running TPC for Replication instances. If one is found, the message shown in Figure 5-35 is presented. Clicking Yes continues the TPC for Replication installation, and the TPC for Replication service will restart during the upgrade.
21.Review the settings shown in Figure 5-36 and click Install to start the upgrade.
22.The installation of TPC for Replication starts. Several messages about the installation process are shown, such as the following in Figure 5-37, Figure 5-38 and Figure 5-39.
23.After the completion of the TPC for Replication upgrade, a summary panel is shown that also reports the URL where the Web browser can be pointed to access the TPC-R Web UI (see Figure 5-40). By clicking the Finish button, this panel is closed and the installation flow goes back to the TPC installation panels.
24.The TPC installation continues its flow, creating the uninstaller for TPC, and completes with the summary information panel (Figure 5-41). Click Finish to complete the upgrade.
Note: If you are upgrading TPC on AIX and if you see the panel indicating that the product has been 100% installed and receive the following message: /opt/IBM/TPC/service/service.sh exist on this system and is newer than the file being installed. Do you want to replace this file? Click Yes to All.
Installation wizard
When upgrading the Tivoli Storage Productivity Center server using the installation wizard, you can select to upgrade the Storage Resource Agent. If you choose not to upgrade the agent as part of the server upgrade, you can launch the graphical installer at a later time to upgrade the agent. After you start the Tivoli Storage Productivity Center installation wizard, you can choose the Typical Installation or Custom Installation. In this section, we document the Custom Installation. By selecting Custom Installation in Figure 5-42, the panel in Figure 5-43 opens, where you select Storage Resource Agent. Clicking Next opens the Storage Resource Agent panel (Figure 5-44), where you can enter the same options that are provided for a Storage Resource Agent installation. For details, see Chapter XX or the Installation Guide.
With the successful upgrade of Tivoli Storage Productivity Center, the Storage Resource Agent is also successfully upgraded.
2. In the content panel, select one or more agents that you want to upgrade and click Upgrade Agents (Figure 5-46). As you can see in Figure 5-46, the status of the Storage Resource Agents that have to be upgraded is Need to upgrade agent software. If you have enabled the auto upgrade action, the Storage Resource Agent will be automatically upgraded after you upgrade the TPC server.
3. The Create Storage Resource Agent Upgrade panel is displayed. Use this panel to select the computer and schedule an upgrade of the Storage Resource Agent (Figure 5-47).
4. Run the upgrade job. The status of the Storage Resource Agent changes to Upgrading agent software (Figure 5-48); you can check the job status in Job Management (Figure 5-49).
5. After the successful upgrade, the Storage Resource Agent status is Up (Figure 5-50).
Note: The Storage Resource Agent upgrade job delivers the software upgrade packages to the agent computer. The job log displays the status of the delivery. The actual status of the upgrade is found in the agent log on the agent computer. If the agent log indicates that the upgrade failed and the state of the Storage Resource agent remains in the "Upgrading agent software" status, try restarting the agent and running the upgrade job again.
server. Then perform a takeover operation and reestablish the High Availability function from the active server to the standby server. During the initial synchronization, the current information in the database is saved and held until the synchronization is complete. If an error occurs during this process, the server database is restored to its original state before the synchronization process began. If an error occurs during the synchronization process which causes the status to be in the disconnected or inconsistent state, you can reconnect to a synchronized state.
Chapter 6.
6.2 Prerequisites
When planning the migration of Data agents and Fabric agents, you have to consider which agents can be migrated, which platforms and functions are unsupported, and what the limitations are. You can check the following link for the agent support: http://www-01.ibm.com/support/docview.wss?uid=swg27019380#Agents
Data agent or Fabric agent or both installed (version 4.1.1 or earlier)
Storage Resource agent is installed
No agent installed
When you are upgrading a Tivoli Storage Productivity Center agent using the user interface, these are the valid upgrade scenarios.
Table 6-2 Agent upgrade via user interface (upgrade agent using the Tivoli Storage Productivity Center 4.2 user interface)
Data agent or Fabric agent or both on local computer: Not supported. You can migrate a Data agent or Fabric agent to a Storage Resource agent.
Storage Resource agent V4.1 on local computer: The Storage Resource agent is upgraded to the latest 4.2 level.
Storage Resource agent V4.2 on local computer: The Storage Resource agent is upgraded to the latest 4.2 level (use the force option).
When you are upgrading a Tivoli Storage Productivity Center agent using the command line interface, these are the valid upgrade scenarios.
Table 6-3 Agent upgrade via CLI (upgrade agent using the Tivoli Storage Productivity Center 4.2 command line)
Data agent or Fabric agent or both on local computer: Not supported. Migrate the Data agent or Fabric agent to a Storage Resource agent.
Storage Resource agent 4.1 on local computer: The Storage Resource agent is upgraded to the latest 4.2 level (cannot change commtype).
Storage Resource agent V4.2 on local computer: The Storage Resource agent is upgraded to the latest 4.2 level (must use the force option).
2. Select the agent that you want to migrate and click on the Migrate button. The Create Data/Fabric Agent Migration panel is displayed (Figure 6-2).
3. The Computer selection tab allows you to select machines that have Data agents, Fabric agents, or both (Figure 6-3). Select the computer and schedule a migration job on the When to Run tab.
Note: When a computer has both a Data and a Fabric agent, the migration job will always migrate both agents; there is no option to migrate one and not the other. When both the Data agent and the Fabric agent are being migrated, the migration will combine them into a single SRA. If the computer has only one agent, after migration the SRA will still be capable of performing both Data and Fabric functions. The concept of a well-placed Fabric agent has been removed in this release; see the fabric section for details.
4. In the Options tab you can select how a Storage Resource agent is run after the migration. In our example we select to run the agent as a daemon service (Figure 6-4).
5. When you click the Save button, the panel in Figure 6-5 appears. The job will not be saved until the verification is complete and the Proceed button is clicked.
6. After the verification is complete, click the Proceed button to save the migration job (Figure 6-6).
7. Click OK and go to the Job Management panel to check the status of the migration job (xxx).
Note: Each migration job will create one job log, regardless of how many computers are selected. When multiple computers are being migrated, the migrations are performed simultaneously in a maximum of 10 threads. The progress of each computer can be tracked by hostname. If the migration completes with warnings, the migration succeeded but there is some minor issue.
8. In our example the migration job completed with warnings because the migration process was not able to clean up some of the old files on the remote machine. This is a common issue on Windows (Figure 6-8).
9. Click the View Log File(s) button to view the details (Figure 6-9).
10.In our example the Common Agent log files were not deleted. To finish the migration, any files under the TPC_install_dir\TPC\ca directory can be manually deleted.
11.After the successful migration, the Storage Resource Agent status is Up (Figure 6-10).
Migration
You have three options to migrate the CIMOM user credentials/access information to NAPI:
- You can provide authentication information while running earlier Tivoli Storage Productivity Center versions, before upgrading to Tivoli Storage Productivity Center 4.2, by running the stand-alone credential migration tool. The information will be stored in the database for later use.
- During the upgrade, the installer will check whether you provided user authentication information for Native API devices. If not, the installer will provide an option to launch the stand-alone credential migration tool.
- After upgrading to Tivoli Storage Productivity Center 4.2, you can use the Administration Services Data Source Storage Subsystems panel to provide new authentication information. The Configure Devices wizard will usually not work, because typically the Native API devices are already part of a probe job.
The credentials migration tool is described in detail in Chapter 5, Migrate Tivoli Storage Productivity Center base code to current level on page 157.
If you migrate NAPI devices either prior to or as part of the upgrade to TPC 4.2:
- Any embedded DS8k CIMOMs, SVC CIMOMs, and XIV CIMOMs will be automatically deleted from TPC.
- Proxy DS CIMOMs will NOT be automatically deleted, even if TPC knows of no other devices configured on that CIMOM.
- If the NAPI device is down at the time of the TPC Data server startup, its CIMOM will not be deleted.
If you are upgrading from TPC 4.1.1 to TPC 4.2, and you want to migrate an existing TPC 4.1.1 XIV CIMOM, note that:
- Previous historical data will be retained (true for all NAPI devices), but capability data will not be updated.
- After the upgrade, a reprobe of the subsystem is necessary to enable new 4.2 capabilities (for example, creating and deleting XIV volumes).
Chapter 7.
Native API
In version 4.2, Tivoli Storage Productivity Center provides a new access method to gather information from devices. This method is called the Native API (NAPI) and is at this time only available for a limited number of disk storage subsystems. While this chapter is focused on the Native API method, we will also explain some other new or changed parts of TPC to provide a full picture of device configuration and handling within TPC V4.2.
Figure 7-1 Navigation Tree changes between TPC V4.1 and TPC V4.2 - overview
In the rest of this chapter we provide more information about the new or changed panels and tasks and briefly explain the NAPI interface.
Storage Subsystem
Configure Devices
Probes
With the implementation of EPM there have been additional changes in the way TPC runs probes for multiple devices. For every device type there is a process running in the operating system. Each of those processes collects the information for one of the devices at a time. As a result, the work will run in parallel for different device types but sequentially for the devices of the same type. See Figure 7-3 on page 220.
Performance Data
For the user, not much has changed with the introduction of the EPM. The minimum interval in which TPC will collect performance data is still 5 minutes for NAPI-attached devices, so you can expect to see one or more processes started every interval to collect and insert the performance data into the TPC database. New is the ability to collect XIV performance data, but that change was not caused by the introduction of the EPM.
In terms of stability, there has been a change in TPC V4.2 that allows TPC to fail over to a redundant path (for example, a secondary HMC) if TPC was not able to collect data for some intervals. There is no parameter to control the retry and no alerts will be sent, but this greatly enhances the overall stability of performance data collections. In TPC V4.2.1 the failover mechanism will also be added for CIM-based performance data collections.
Note: Because of this change it is now safer to let a performance data collection job run continuously. However, we still recommend stopping and restarting it, because currently you will not receive any alerts if something really failed within a continuously running job. Since the change in TPC 4.1.1 that allows you to specify a job run time of 24 instead of 23 hours, there is little advantage in letting a job run continuously. By setting a job to 24 hours and restarting it daily you do not lose any performance data, and you still have the chance to receive alerts about a failing job (at least once a day).
In the figure below, the blue boxes show how TPC changed your entries when you tried to specify a run time of 24 hours with a daily restart.
Events
For events there have been some changes with the introduction of the Native API and EPM:
- There is no concept of CIM indications for the Native API.
- A DS8000 will asynchronously send events to TPC.
- For SVC and XIV, TPC will poll every minute to get new events from the subsystems, so you will see processes in your operating system being started and stopped every minute.
The goal of TPC V4.2 is to retain the same level of device support as previous levels of TPC, so no new alerts have been added to TPC.
The discovery of new NAPI devices is part of the Switches and Subsystem (IP Scan) job. The job existed in earlier versions of TPC but now has added functions so that it will identify subsystems that are used by the NAPI method. As with the CIMOM discovery, we generally do not recommend using this discovery. By default TPC has no subnets configured to be scanned. If you do want to use it, be aware that you need to add the address range that you want to scan.
The scan of IP ranges for subsystems and switches can be separated in such a way that TPC either looks for switches, for storage subsystems, both, or none. This setting is applied to all IP address ranges that you specify. Figure 7-4 on page 224 shows the Administrative Services Discovery Switch and Subsystem (IP Scan) panel where you can configure these settings. For example, in our environment we specified the range 9.11.98.190 to 9.11.98.210.
Figure 7-4 Choose for what kind of device TPC will search
If you want to get notified you need to change the alerts options for the IP Scan job, for example by entering your e-mail address on the Alerts tab.
IBM DS8000
In this section we discuss how the DS8000 interacts with the NAPI.
Access method used: ESS Network Interface (ESSNI)
Failover:
For the communication with a DS8000, TPC uses the ESSNI client. This is basically the same library that is included in any DS8000 CLI. Since this component has built-in capabilities to fail over from one HMC to another HMC, it is a good idea to specify the secondary HMC IP address if your DS8000 has one. The failover might still cause some errors in a TPC job, but the next command that is sent to the device should use the redundant connection.
Network
There are no special network considerations. TPC needs to be able to talk to the HMC just as before when the embedded CIMOM was used.
TPC is currently not able to provide specific messages for the vast majority of ESSNI error codes. You can still look up the errors in the DS8000 Information Center; doing this often provides useful information, for example, that the user ID is wrong or that the password has expired, which will not be in any TPC logs:
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp
Example: 2010-08-05 16:58:09.296 HWNEP0003E A DS8000 ESSNI command failed. The error code is CMUN02021E
This is the generic TPC error whose user action directs the TPC user to look up the ESSNI code in the DS8000 Information Center. Doing so reveals that this error code means Unable to create logical volume: the volume number already exists.
You can only use passphrases with OpenSSH keys.
If the key is in PuTTY format and you do require a passphrase, you need to convert the key to the OpenSSH format manually (see the example after this list).
The way SVC works with SSH keys is:
- The public key is stored on the SVC cluster.
- The user or client application uses the private key.
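As a minimal sketch of that manual conversion, assuming the command-line puttygen utility from the PuTTY tools is available and using placeholder file names:
puttygen tpc_svc_key.ppk -O private-openssh -o tpc_svc_key.pem
On Windows, you can achieve the same with the PuTTYgen GUI by loading the .ppk file and selecting Conversions -> Export OpenSSH key.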
If no keys that you want to use have been uploaded, you have three options:
- use the default key that comes with TPC (not recommended)
- use a PuTTY key that you have generated and saved without a passphrase
- use an OpenSSH key that you have generated, with or without a passphrase
Background information and general considerations for different SVC versions: In Table 7-2 on page 226 we explain some general differences between SVC versions. Later on in <<< reference to implementation>> we will explain additional things to watch out for when you are adding an SVC to TPC.
Table 7-2 Special considerations for different SVC versions
SVC 4
- Concept of SSH key: SSH keys are associated with an authority level/role; there are no individual users to associate an SSH key with. Note: You can still upload multiple keys and let each user use a different key. This enables you to revoke access for a particular user without any implications for other users.
- Authority level: The SSH key needs to have the Administrator access level.
- User ID / password: n/a
SVC 5+
- Concept of SSH key: SSH keys are associated with a user ID. A user ID is always associated with a group within SVC and therefore with an authority level. Since each key is associated with a user ID, you cannot use one keypair for more than one user ID.
- Authority level: SVC version 5 introduced a real user and group concept. The user needs to be part of the SVC Administrator group. If the user has only monitoring access rights, you can still run probes, but you cannot run performance monitor jobs or perform any kind of volume provisioning.
- User ID / password: A user ID can optionally be assigned a password. This password is used only when a user logs in through the SVC GUI or through the CIM interface; it is not used for SSH access.
As described in Table 7-2, the association of SSH keys is different for SVC Version 4 and SVC Version 5+. Figure 7-5 on page 227 shows the logical concept of the difference.
Note: Although SVC Version 5+ now uses user IDs, you still need to start an SSH session to the SVC with the user string admin but provide the key file of the user ID that you want to use for login - SVC will look through the list of key files and see if a matching public key can be found. Using the svcinfo lsuser command you can see which user ID is associated with the SSH session that you have open. Unfortunately we could not find a command that would list all the stored keys and the corresponding user IDs.
Recommendations:
- SSH key names: give the SSH key files meaningful names, because it is hard to later find out which user is using a certain key pair. For example, assign the user name as the file name of the key.
- We recommend that each TPC server has its own pair of SSH keys when working with an SVC. These keys can be used for accessing multiple SVCs, but the association should always be as shown in Figure 7-6.
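As a short illustration of this behavior (the cluster address and key file name below are placeholders for your environment), you can open a session with the admin user string and your private key, and then check which SVC user ID that key maps to:
ssh -i /keys/tpc_itso.pem admin@svccluster.example.com
svcinfo lsuser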
IBM XIV
In this section we discuss how the XIV interacts with the Native API.
Access method used: the Native API uses the XML-formatted version of the XIV command-line interface called XCLI.
Failover: In TPC 4.2.1 failover support has been added for XIV devices. TPC does not need to be provided with all the possible interface modules; instead, TPC will query the XIV during the setup for the IP addresses of the other interface modules.
Network
You should only use the address of one of the Interface Modules to add an XIV to TPC; you can add more than one IP address by starting the Configure Devices wizard again.
Migration
Within TPC's philosophy, the term migration is used when the architecture changes, for example going from CIMOM to NAPI. In contrast, TPC refers to upgrades when just the version of a component changes but the architecture stays the same, for example going from SRA version 4.1 to SRA version 4.2.
You have three options for when to migrate the user credentials/access information:
- You can provide authentication information while running earlier TPC versions, before upgrading to TPC V4.2, by running the stand-alone tool migrateuserinfo.bat/.sh. The information will be stored in the database for later use.
- During the upgrade, the installer will check whether you have provided user authentication information for NAPI devices. If not, the installer provides an option to launch the stand-alone tool.
- After upgrading to TPC V4.2, you can use the Administration Services Data Source Storage Subsystems panel to provide new authentication information. The Configure Devices wizard will usually not work, because typically the NAPI devices are already part of a probe job.
Considerations
If you migrate NAPI devices either prior to or as part of the upgrade to TPC V4.2:
- Any embedded DS8k CIMOMs, SVC CIMOMs, and XIV CIMOMs will be automatically deleted from TPC.
- Proxy DS CIMOMs will NOT be automatically deleted, even if TPC knows of no other devices configured on that CIMOM.
- If the NAPI device is down at the time of the TPC Data server startup, its CIMOM will not be deleted.
If you are upgrading from TPC V4.1.1 to TPC V4.2, and you want to migrate an existing TPC 4.1.1 XIV CIMOM, note that:
- Previous historical data will be retained (true for all NAPI devices), but capability data will not be updated.
- After the upgrade, a reprobe of the subsystem is necessary to enable new 4.2 capabilities (for example, creating and deleting XIV volumes).
TPC will discover CIMOMs that are not within the local subnet of the server by using SLP, which needs to be configured on the TPC side (provide the SLP DA IP address) and on the SLP DA side (configure the list of devices available in that subnet).
Because the CIMOM discovery will often fail for obvious reasons (CIMOMs that do not have credentials defined at the TPC server), we recommend not using the capability to look for new CIMOMs. You can do this simply by not specifying SLP DAs and not letting TPC look for new CIMOMs in its local subnet, as shown in Figure 7-7 on page 230. This way TPC will still look for new devices at already configured CIMOMs, and in addition you will still get status information.
When you read the following list, keep in mind that you could have a proxy CIMOM that has multiple devices of different types attached, but TPC no longer supports all of those devices being used with CIM agents because they are now used via the Native API:
- The discovery will filter out CIMOMs of the devices that are only supported through the new interface, so the embedded CIMOMs for DS8000 and XIV will be ignored, as well as any SVC CIM agent.
- If the discovery finds a DS Open API CIMOM, it will be added to the Administrative Services Data Sources CIMOM Agents list. The reason for this is that at this stage of a discovery TPC does not yet know what devices are attached to the newly discovered CIMOM. For this to happen you need to add the credentials for the CIMOM and run the discovery again. Once TPC can get a list of devices from the CIMOM, DS8000 devices will be filtered out, and the remaining (DS6000 and ESS) devices will be added as managed devices. If at this point there are only DS8000 devices attached to the CIMOM, no managed devices will be added. Based on user feedback this decision was taken; the CIMOM will also not be removed, since it can be used for other subsystems.
The only changes that you may want to consider are:
- change the scheduling of the discovery
- add alert options to the job
- change the job description, for better sorting in the Job Management panel (see <<< Job Management Panel Chapter >>>)
These steps will still have to be performed in the backup, but the wizard guides you through the whole process.
Migration
Within TPC's philosophy, the term migration is used when the architecture changes, for example going from CIMOM to NAPI. In contrast, TPC refers to upgrades when just the version of a component changes but the architecture stays the same, for example going from SRA version 4.1 to SRA version 4.2.
You have three options for when to migrate the user credentials/access information:
- You can provide authentication information while running earlier TPC versions, before upgrading to TPC V4.2, by running the stand-alone tool migrateuserinfo.bat/.sh. The information will be stored in the database for later use.
- During the upgrade, the installer will check whether you have provided user authentication information for NAPI devices. If not, the installer provides an option to launch the stand-alone tool.
- After upgrading to TPC V4.2, you can use the Administration Services Data Source Storage Subsystems panel to provide new authentication information. The Configure Devices wizard will usually not work, because typically the NAPI devices are already part of a probe job.
Considerations
If you migrate NAPI devices either prior to or as part of the upgrade to TPC V4.2:
- Any embedded DS8k CIMOMs, SVC CIMOMs, and XIV CIMOMs will be automatically deleted from TPC.
- Proxy DS CIMOMs will NOT be automatically deleted, even if TPC knows of no other devices configured on that CIMOM.
- If the NAPI device is down at the time of the TPC Data server startup, its CIMOM will not be deleted.
If you are upgrading from TPC V4.1.1 to TPC V4.2, and you want to migrate an existing TPC 4.1.1 XIV CIMOM, note that:
- Previous historical data will be retained (true for all NAPI devices), but capability data will not be updated.
- After the upgrade, a reprobe of the subsystem is necessary to enable new 4.2 capabilities (for example, creating and deleting XIV volumes).
Implementation
In Table 7-3 we show a side-by-side comparison of the different ways that exist for adding devices to TPC. The purpose is to give you an understanding of how the Configure Devices wizard will guide you through the steps that you need to execute to add a device to TPC. TPC V4.2 does not force you to use the new Configure Devices wizard (CD wizard), and the number of steps that you need to execute is not smaller, but the wizard makes sure that you run through this process quickly and in the right order. In the table below we have highlighted the steps where you need to add the credentials with a bold black border, to visualize that you always need to supply the user credentials at one point. The blue background in the table shows you which steps the Configure Devices wizard will guide you through, in contrast to the manual steps, which are outlined in the first two columns.
Table 7-3 Comparison of the different ways of adding devices to TPC Steps not using the CD wizard for CIMOM devices create or ask for user credentials Steps not using the CD wizard for NAPI devices create or ask for user credentials TPC 4.2 without discovery using CD wizard create or ask for user credentials TPC 4.2 with CIMOM discovery using CD wizard create or ask for user credentials CIMOM Discovery add CIMOM to TPC with credentials CIMOM Discovery add and configure device add device with credentials wizard runs discovery select discovered devices add/define probe define alerts add/define probe define alerts add/define probe define alerts select discovered devices add/define probe define alerts add NAPI to TPC with credentials add credentials for CIMOM scheduled discovery configure existing device scheduled discovery configure existing device add credentials wizard runs discovery select discovered devices add/define probe define alerts TPC 4.2 with NAPI discovery using CD wizard create or ask for user credentials
When we use the term groups we refer to the complete concept as shown above. If we use the term Monitoring Group we refer just to the real groups and not the concept with probes and alerts.
In the <<< Variable TPC V4.2 installation guide >>> you can find all the details of the default settings for the probe jobs and alerts in the chapter called Alerts and schedules
Select device
During this step you will either add a new device or select an existing device that has been added to TPC but not yet been added to a probe. If the device has just been discovered, you may also need to add the credentials for that device. In Figure 7-10 on page 238 we show an example where we have selected multiple devices.
If you selected one or more devices, all the devices that you configure in a single pass through the wizard will share the same probe schedule and alert conditions. If at this point you choose to add a new device to TPC (in the example above it would be a storage subsystem), on the next screen you will see a panel where you can select the device type and enter the required information for that device. We decided to dedicate separate sections to this procedure later on in this chapter because we wanted to provide just a high-level overview at this point. Here is where you can find more information:
IBM DS8000: 7.4.1, Add/configure a IBM DS8000 on page 245
IBM SVC: 7.4.2, Add/configure a IBM SAN Volume Controller on page 245
IBM XIV: 7.4.3, Add and configure a IBM XIV on page 247
CIMOM: 7.4.4, Add and configure a CIMOM on page 248
Fabric/switches
Once this step is finished, TPC will initiate a short discovery job to verify that it can talk to the device. Once the discovery has completed you can click Finish to continue with the next step.
Configure devices
Because at this step we have selected two SVCs which have been discovered by the Switch and Subsystem (IP Scan) job but no credentials have been provided, the wizard will add this step to the sequence. You need to update the credentials for the devices or remove them from the selection. In the example in Figure 7-11 we have updated the credentials for the SVC with the IP address 9.11.98.198, so that the credential status has changed from missing to
present. The next step is to either do the same for the other SVC with the IP address 9.11.98.200 or remove that SVC from the list of selected devices.
We chose to remove the second SVC for the rest of this walkthrough. By doing so, the wizard automatically presented the confirmation page shown in Figure 7-12 on page 240. At this point you can still remove a device from the list of selected devices.
When you press Next, TPC will initiate a discovery to verify the connection to the device, or, in case you added a CIMOM, it will look for devices at that CIMOM. If you press Cancel, then TPC asks for a confirmation, as shown in Figure 7-13 on page 240.
After the discovery is finished, canceling the wizard will result in a different message as shown in Figure 7-14 on page 240
Data collection
On the panel in Figure 7-15 on page 241 you will need to choose a Monitoring Group or Template. This will add your device to a group, and since the groups are part of the predefined probe jobs and alert definitions, your device will be managed with those settings. If you select Monitoring Group in the first drop-down box, the second drop-down box lists all the defined groups that are associated with a probe job.
Note: Even though the group you selected will add your device not only to a probe job but also to the alerts of the group, the alerts are not shown on this page. The wizard will show the alerts on the summary panel.
If you select Monitoring Template, as shown in Figure 7-16 on page 242, you can create probe jobs and alerts. The wizard does not need all the detailed inputs that you usually need to create a new job; it will simply derive that information from the three templates that are provided with TPC.
Tip: When you enter the string for the Collect Job / Alert Prefix, you should end this string with some kind of separation character, for example an underscore (_).
Summary
The summary panel in Figure 7-17 on page 243 shows not only the details about the probe job that the device or devices will be assigned to, but also the alerts.
Results
On the results panel in Figure 7-18 on page 244 you can see whether the wizard was successful in adding the devices to the groups and jobs, as well as whether the Configure Devices wizard has started a probe for the devices.
This probe is a special job. It is not the job that is defined for your device, because that could also run the probe for other devices defined in the same job. If that probe was running, starting it again would end up with errors, because you cannot run two probes for a single device from a single server at the same time. This special job is the CLI and Event Driven Job.
At first you will see the dialog box in Figure 7-19 until you check the Do not display this dialog again option.
If you press View Job History TPC will open the Job Management panel as shown in Figure 7-20 on page 245.
Figure 7-20 Probe started through CLI and Event Driven Jobs
There are four different scenarios that you can experience; Table 7-5 outlines them and the information you need to supply.
Table 7-5 SVC scenarios
SVC version 4 - use an existing uploaded key
Remember: SVC4 does not associate an SSH key with a user, so you do not need to provide a username. This is what you need to fill in:
- select SVC version
- enter IP address
- private SSH key
- passphrase, if the key is protected
SVC version 4 - upload new keys
This is what you need to fill in:
- select SVC version
- IP address
- Admin Username (TPC needs the SVC admin user ID to be able to upload the key)
- Admin password
- private SSH key
- passphrase, if the key is protected
SVC version 5+ - use an existing uploaded key
This is what you need to fill in:
- select SVC version
- enter IP address
- private SSH key
- passphrase, if the key is protected
SVC version 5+ - upload new keys
This is what you need to fill in:
- select SVC version
- IP address
- Admin Username (TPC needs the SVC admin user ID to be able to upload the key)
- Admin password (once you have provided the admin username and password, the Select User ... button becomes available)
- Username (you do not need to select a user using the Select User ... button, you can also type it)
- private SSH key
- passphrase, if the key is protected
Notes:
- Make sure that you have selected the right SVC version; TPC will sometimes not do this automatically for you.
- TPC does not store any of the following information in its database, because the information is only needed for adding the SVC, not for communication with the SVC later on: Admin username, Admin password, Username.
- Remember that any SSH session to an SVC is not opened using a real username. A session is always initiated with the username string admin and the correct key, so the username is only required when you upload a new key for SVC 5+.
- You do not need to select a user using the Select User ... button; you can also type the name of the user. Avoid using admin or superuser in the Username field, as this could overwrite keys and lock out others when you upload new keys. TPC will also not show these user IDs in the list of users when you click the Select User ... button.
- If you want to, TPC can also create a new SVC user for you and associate the key with that user. To do this you simply enter the name of a user that does not yet exist on the SVC.
While TPC asks for the private key, it will use that key to generate a new public key, so you only need to provide that one key file name. The key file needs to be stored on the system that the GUI runs on, so if you are remotely connected to the TPC server, the browse function cannot select a key that is already stored on the TPC server. When you run the GUI on a different machine than the TPC server, the TPC GUI will upload the key to the TPC server and store it in the key file directory: ...\TPC\device\cert
If there is already a key with the same file name stored on the server, TPC will append a number to the key file name.
The key file tpc_svc.pem that is shipped with TPC will be in different locations depending on whether you are currently installing the product or have already installed TPC. If you are currently installing TPC and running the migration, it is stored on the CD or in the directory from which you are installing. When TPC is already running, it is in the TPC server directory tree under ...\TPC\device\conf\
If you delete an SVC from TPC, the key file will not be deleted from the key file directory.
In Figure 7-22 on page 247 we provide an example of adding an SVC version 5 to TPC.
Figure 7-25 TPC explanations of data sources when you add a switch
We do not point out the In-band Fabric Agent (the old Common Agent-based one) anymore, because it is discontinued and has been replaced by the SRA. So, except for the time until you have completed the upgrade and migration, you will probably not run that type of agent anymore. The In-band Fabric Agent is still supported, and you can find the details of when to use that agent in Appendix A, Supported Fabric Agent Types, in the Tivoli Storage Productivity Center V4.2 User's Guide, SC27-2338. Besides running the Configure Devices wizard, you should not forget to set up the switches to send SNMP traps to TPC.
Switch Performance
Requirements
Brocade
Events
Manually add TPC as an SNMP receiver to the switch configuration; this is not done by the CD Wizard.
Since it is recommended to use the CIMOM the wizard has pre-selected this option as shown in Figure 7-26.
McData
Switch Performance
CIMOM agent
Events
Manually add TPC as an SNMP receiver to the switch configuration; this is not done by the CD Wizard.
Since it is recommended to use the CIMOM the wizard has pre-selected this option as shown in Figure 7-27.
Although SRAs are also supported for getting topology information, since you are in the process of adding a switch, the wizard has pre-selected this option as shown in Figure 7-28.
Although SRAs are also supported for getting topology information, since you are in the process of adding a switch, the wizard has pre-selected this option as shown in Figure 7-29.
Deleting a device
When you delete a device the performance monitoring job will be deleted as well, so there is no need for a clean up at this point.
Navigation Tree
There is still a Navigation Tree item called Jobs under Disk Manager Monitoring Jobs, even though individual job runs are now managed through the Job Management panel.
Alert Log
If you click on the Delete all button within the Alert Log panel, TPC will delete all alerts and not only the alerts shown on the currently opened panel.
Job History
Since TPC V4.2, individual job runs are no longer shown as a subtree that you can expand under the job name Navigation Tree entry. You can still right-click a job and select Job History; this will open the Job Management panel and highlight the job that you have selected.
Job Management
TPC 4.2 includes a new Job Management panel, which we describe in detail in chapter <<< reference needed >>>. When you start a job, TPC now asks whether you want to go to the Job Management panel, and in the panel you can select multiple log files at a time.
Chapter 8.
8.1 Overview
The Storage Resource Agent (SRA) was introduced in Tivoli Storage Productivity Center 4.1 as a lightweight agent to collect host disk and filesystem information. In TPC V4.2, this functionality was enhanced to include the following items:
- File System Scan
- Database Monitoring
- N-Series support, including automatic discovery and manual entry
- Fabric management:
  - Collect topology information
  - Collect zone and zone set information
  - Perform zone control
  - Perform agent assignment
- TSM Integration
- Batch Reporting Changes
- Path Planner support
- Data Sources Panel improvements
- IBM Tivoli Monitoring Integration
SRAs can either be deployed remotely from the TPC GUI or be locally installed on the individual host machines.
A user ID that has administrative rights on those computers. You enter this ID when creating a deployment schedule. Tivoli Storage Productivity Center uses this ID to log in to the target computers and install and configure the necessary runtime files for the agents.
The user under which a Storage Resource agent (daemon or non-daemon) runs must have the following authorities on the target computers:
- On Linux or UNIX, the user must have root authority. By default, an agent runs under the user 'root'.
- On Windows, the user must have Administrator authority and be a member of the Administrators group. By default, a Storage Resource agent runs under the 'Local System' account.
Storage Resource agents do not collect information about orphan zones. An orphan zone is a zone that does not belong to at least one zoneset.
During deployment, the server communicates with the target computer using one of the following protocols:
- Windows server message block protocol (SMB)
- Secure Shell protocol (SSH)
- Remote execution protocol (REXEC)
- Remote shell protocol (RSH)
If RSH is configured to use a user ID and password, the connection fails. To successfully connect to a system using RSH, you must set up the .rhosts file (in the home directory of the account); see the sketch after this list. RSH must be configured to accept a login from the system that is running your application.
If you want to install a Storage Resource agent or Data agent on Windows targets, the Enable NetBIOS over TCP/IP option must be selected in the Control Panel settings for the computer's network connections properties. To set this option, click Start > Settings > Control Panel > Network and Dial-Up Connections > <some_connection> > Properties > Internet Protocol (TCP/IP) > Advanced > WINS > Enable NetBIOS over TCP/IP. See the documentation for your firewall to determine whether these ports are blocked for inbound requests.
Note: On Windows 2008, make sure to turn off the Windows firewall before deploying the SRA. If you do not turn off the firewall, the deployment will fail.
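A minimal sketch of the .rhosts setup mentioned above, assuming the TPC server host name tpcserver.example.com and the deployment account root (both are placeholders for your environment): the ~/.rhosts file in the home directory of the account on the target computer would contain a line naming the TPC server and the remote user that is allowed to connect, for example:
tpcserver.example.com root
The file must be owned by the account, and most RSH implementations ignore it if it is writable by other users.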
AIX 6.1: Minimum maintenance level required is Technology Level 6100-04 Service Pack 5.
Red Hat Linux 5: Requires compat-libstdc++-296-2.96-138.i386.rpm or later.
To create a new deployment job, select Add Storage Resource Agents in the Select Action drop-down menu on this panel (Figure 8-2 on page 260).
Once you select this option, you will be taken to the Create Storage Resource Agent Deployments panel (see Figure 8-3 on page 261).
In this panel, click the Add Host List button to add information about the Storage Resource Agents you would like to deploy. This will take you to the Login Information panel (see Figure 8-4 on page 262).
In this panel, enter the host name and installation location for each of the SRAs that you would like to install. Each of these machines should have a common user ID and password, which you enter on the lower half of the panel (see Figure 8-5 on page 263). Note: If you have different user IDs and passwords, you can launch the Login Information panel once for each user ID and password combination.
By default, SRAs on Windows will run under the Local System account. We recommend leaving this default option. If you would like to change this to a specific user, you can click the Windows Service Information button and specify a custom user and password (see Figure 8-6). This can be an existing local or domain user, or you can specify a new local user, and that user will be created.
You also have the ability to install SRAs on machines in a Windows Active Directory domain. To do so, click the Add Agents from MS Directory button on the Login Information panel. A panel asking for the domain controller information will pop up (see Figure 8-7).
Once you enter the domain controller information, TPC will authenticate to the Active Directory domain and display all of the machines available in that domain (see Figure 8-8).
Select the machines to which you would like to deploy SRAs within this panel. Once you do so, they will automatically be added to the SRA Login Information panel and be validated to ensure that the proper install prerequisites are met. You are then taken back to the SRA deployments panel, which will list each of the SRAs to be installed (Figure 8-9 on page 265).
If you would like to schedule the SRA deployment job to run at a later time, click the When to Run panel and choose when you would like the job to run (see Figure 8-10 on page 266). By default, the job will be run immediately after you save it.
In the event that the SRA deployment job fails, you can set TPC to send alerts under the Alert panel (see Figure 8-11 on page 267).
Once you have verified the SRA deployment information, go to File -> Save to save and run the SRA deployment job. You will be prompted for a Storage Resource Agent Deployment name (see Figure 8-12). Input a descriptive name and click OK on this panel.
Once you click OK, you will be asked whether you would like to view the job status information (Figure 8-13).
Once you click Yes, you will be taken to the Job Management panel for the SRA deployment job (see Figure 8-14 on page 268).
Within the Job Management panel, click the Refresh All button to update the job status. After a few minutes, the job will complete and you can view the job logs (see Figure 8-15 on page 269).
Once the deployment job completes, you will be able to see the SRAs under Administrative Services -> Data Sources -> Data/Storage Resource Agents (see Figure 8-16 on page 270).
Table 8-1 SRA non-daemon protocols
SMB: Requires the user ID and password.
SSH: Requires the user ID and password, or user ID, certificate, and passphrase.
REXEC: Requires the user ID and password.
RSH: Requires the user ID.
The installation images are contained in either the disk1 image or the Storage Resource Agent images. The images for each of the platforms are located in the following directory: TPC_installation_image_location/data/sra/operating_system. Table 8-2 on page 271 shows the Storage Resource Agent installation images.
Table 8-2 SRA image path names
AIX: aix_power
HP-UX: hp-ux_itanium
Linux x86: linux_ix86
Linux for Power Systems: linux_power
Linux s390: linux_s390
Solaris: solaris_sparc
Windows: windows
Figure 8-17 SRA Command Line Install
Table 8-3 SRA CLI install parameters
commtype: Use only when installing a daemon agent.
installLoc: Location where the agent is installed. Enclose the directory name in quotation marks, for example, "C:\Program Files\IBM\TPC_SRA\".
serverip: The IP address of the TPC server.
serverport: The port for the Data Server. The default is 9549.
agentport: If the agent is run as a daemon service, the agent port must be specified.
debug: An optional parameter for debugging purposes.
duser
dpassword
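As a rough, hedged sketch of how these parameters might be combined for a local install on a Windows target (the installer program name bin\Agent, the server address, the agent port, and the installation path below are assumptions for illustration only; the exact invocation is shown in Figure 8-17 and documented in the Information Center):
cd <SRA_image_location>\data\sra\windows
bin\Agent -install -commtype daemon -agentport 9510 -serverip 9.11.98.100 -serverport 9549 -installLoc "C:\Program Files\IBM\TPC_SRA\"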
If the installation fails, see the return codes in the Information Center. Search for Return codes used by Storage Resource Agent.
For example, it can notify you when an Oracle table space is reaching a critical shortage of free space or when a Sybase table is dropped. By alerting you to these and other issues related to your stored data for the databases within your environment, it enables you to prevent unnecessary system and application downtime. In this section, we show you how to register a database and extract capacity and usage reports. To better demonstrate all of the reporting types, refer to Figure 8-18 on page 273. Note that not all entries are fully expanded.
2. In Figure 8-19, select the magnifying glass icon that is located to the left of the line TPC for Data - Databases. Figure 8-20 appears.
3. In Figure 8-20 on page 274, select the RDBMS Logins tab, and then select Add New. Configure the login properties for your database instance on the target server, as shown in Figure 8-21.
4. After you click Save in Figure 8-21 to complete this task, you get a success message, as shown in Figure 8-22. You will now see that the new database host and information are listed after the configuration.
2. You can add Instances and use the arrows to move the instances to the Current Selections column. In Figure 8-24, select File Save and give the probe a name. Select OK, and the probe will be submitted.
4. Next, we need to create a scan to gather more detailed information about the database. Select Data Manager for Databases Monitoring Scans. Select the default scan and select Run Now, as shown in Figure 8-26.
3. Click Generate Report to create the report By Computer. The report displays the capacity for each computer (Figure 8-28).
4. Click the magnifying glass icon to view a selected server, as shown in Figure 8-29.
5. Now, drill down on the selected computer for instance information (Figure 8-30).
6. If you click the magnifying glass icon, you can get a listing of the Database Files, as shown in Figure 8-32 on page 283.
5. Now, click the line graph icon, and you see the Usage graph report (Figure 8-34).
Monitored Directories
A NAS device must supply a unique sysName, which maps to the network hostname of the NAS device. At the time of writing this book, TPC 4.2 supports the following IBM N series models and NetApp filers. Check the website at: http://www-1.ibm.com/support/docview.wss?rs=597&uid=ssg1S1003019
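If you want to verify that a filer answers SNMP requests and reports a sysName before adding it to TPC, you can query it with a standard SNMP tool. A hedged example using the Net-SNMP snmpwalk command follows; the filer address and the community string public are placeholders for your environment:
snmpwalk -v 1 -c public nasfiler.example.com 1.3.6.1.2.1.1.5
The OID 1.3.6.1.2.1.1.5 is the standard MIB-II sysName object, so the command should return the host name that TPC will see.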
8.7.2 Configure TPC to manage IBM N Series or NetApp Filer through Windows Storage Resource Agent
We will document the procedure to manage the device through a Windows server. Note: The Windows server used as a proxy Storage Resource Agent must be a member of a Windows domain, and the NAS filer should also be added to the Windows domain.
Map NAS CIFS share to Windows server and run read and write I/O
The account used for scanning NAS for Windows must be a domain account that can log in to both the Windows agent machine and the NAS device. In our lab, we log in to the Windows server using such a domain account, map the NAS CIFS share that we defined previously to the Windows server, and then perform some read and write I/O on the mapped network drive (for example, copy a file to the mapped drive) to make sure the NAS share is working correctly (see Figure 8-38). Note: You must use a domain user which has Domain Administrator privileges.
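As a minimal illustration of that mapping and read/write test from a Windows command prompt (the server name nasfiler, the share name data, and the account ITSO\nasadmin are placeholders for your environment):
net use Z: \\nasfiler\data /user:ITSO\nasadmin
copy C:\temp\testfile.txt Z:\
type Z:\testfile.txt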
2. You must set a default domain login/password in Tivoli Storage Productivity Center for it to discover the NetApp devices. Select Administrative Services Configuration License Keys and double-click the TPC for Data entry (see Figure 8-39).
Click the Filer Logins tab and click the Update default login and password button. Enter the appropriate domain User ID and Password (see Figure 8-40). Note: This user must have Domain Administrator privileges.
3. Expand the Administrative Services in the navigation tree and select Administrative Services Discovery. Note: You must verify that the correct SNMP community name is defined in the Windows Domain, NAS and SAN FS job. To verify, click Administrative Services Discovery Windows Domain, NAS and SAN FS and select the Options panel. Add the correct SNMP community name for the filer. To run a NAS/NetApp discovery job, right-click Windows Domain, NAS and SAN FS and select Run Now, as shown in Figure 8-41. Note: The discovery will discover all entities in the Windows domain. To shorten the time taken for discovery, you can check the Skip Workstations option in the Discovery properties panel (see Figure 8-41).
To check the running job status, right-click Windows Domain, NAS and SAN FS and select Update Job Status. Wait until the discovery job finishes.
4. After the discovery job completes, but before IBM Tivoli Storage Productivity Center can perform operations (Probe/Scan) against NetWare/NAS/NetApp filers, they need to be licensed first. This is because in a very large environment the customer may not want to automatically license all the discovered NAS devices, so the customer has a choice as to which servers to license. Note: For manually added NAS, you do not need to do this; the filer will be licensed automatically. Select Administrative Services Configuration License Keys and double-click the TPC for Data entry (see Figure 8-42).
The Licensing tab is displayed. Locate the desired NetWare/NAS/NetApp filer. Select the associated checkbox for the NAS filer in the Licensed column, and select the Disk icon on the toolbar to save the changes (see Figure 8-43 on page 293). Important: You must save the changes after you license the NAS filer and before you leave this panel; otherwise TPC will not save the change to its repository.
5. Set Filer Login ID/Password
For Windows-attached NAS, you have to specify the login ID and password for the NAS. Click the Filer Logins tab and select the NAS filer, then click the Set login per row button, enter the Logon ID and Password in the Filer Login Editor panel that pops up, and click Save to save the changes (see Figure 8-44). This ID and password must be a domain account that can log in to both the Windows agent machine and the NAS device. Note: Setting the Filer Login/Password is not required if it is a UNIX-attached NAS filer.
Figure 8-44 Logon ID and password for NAS filer for Windows
6. Run a discovery again
After licensing the NAS filer and setting the login ID/password, we need to run a discovery job again to get further information about the NAS filer. Refer to Figure 8-41.
3. Add NAS Server
Click the Add NAS Server button, enter the following information in the panel in Figure 8-46, and click OK to continue:
Network name: Enter the fully qualified host name or IP address of the NAS filer.
Data Manager Agent OS Type: Select the operating system of the computer that contains the agent that will gather information about the NAS filer. In our case, we select Windows here.
Accessible from: Select the agent host that you want to discover the NAS filer from the drop-down list. The drop-down list will only display agents that are:
- Running under the operating system selected in the Data Manager Agent OS Type field.
- Located on Windows or UNIX computers that are accessible to the NAS filers (Storage Resource Agents are not located on the NAS filers themselves). Windows agents are located on Windows computers within the same domain as the NAS filers.
SNMP Community: Enter the SNMP community name; the default is PUBLIC. This is used to get information from the NAS filer. (See General NAS System Requirements.)
Login ID and Password:
Windows only. Enter the Login ID and password; it must be a domain account that can log in to both the Windows agent machine and the NAS filer.
If all the information you provided above is correct, you will see the NAS filer added to the panel shown in Figure 8-47.
Note: Keep in mind that you are assigning work to computers. These scans go over the IP network, so it is better to pick a Proxy Agent on the same LAN. These scans will be slower than a local scan, and they may take a very long time to run if the scan is also doing a proxy scan on a large device. For large NAS devices, use multiple Proxy Agents. Normally the default scan of once per day is not really required; look into weekly scans.
Select Administrative Services Configuration Scan/Probe Agent Administration. Click the NAS filer that you want to define a Scan/Probe agent for. In our lab, we made multiple selections by holding the Ctrl key and clicking the NAS filer entries. Click Set agent for all selected rows, and in the panel that pops up, choose the Windows Storage Resource Agent that has the NAS filer attached (see Figure 8-48).
Now you can see that the NAS filer filesystems have been assigned a Scan/Probe Agent; make sure you click the Save button in the toolbar to save the changes (see Figure 8-49).
Click the When to Run tab, choose the Run Now radio button, and save the probe job by clicking the Save button on the toolbar (see Figure 8-51); the probe job starts. You can right-click the probe job and select Update Job Status to check the running job status. Wait until the probe job finishes.
Once a probe has been successfully completed, you can verify that TPC has the filer data by viewing the TPC dashboard. On the Monitored Server Summary panel, you should see the total number of Network Appliance devices monitored and the total filesystem/disk capacities (see Figure 8-52).
Figure 8-52 Tivoli Storage Productivity Center Dashboard showing NAS Filer Information
In the Profiles tab, we select all of the default profiles and apply them to file systems and directories by clicking the >> button, see Figure 8-54. Profiles allow us to specify what statistical information is gathered and to fine-tune and control what files are scanned.
Click the When to Run tab, choose the Run Now radio button, and save the scan job by clicking the Save button on the toolbar (see Figure 8-55). TPC asks for a name for the scan job; give it a job name and click OK to start the scan job. You can right-click the job name and select Update Job Status to check the running job status. Wait until the scan job finishes.
8.7.3 Configure TPC to manage IBM N Series or NetApp Filer through UNIX Storage Resource Agent
In this section, we document the procedure to monitor a NAS filer through a UNIX proxy Storage Resource Agent.
You do not need to fill in the Filer Logins tab, because it is only required for Windows environments.
4. Run a discovery again
We need to run a discovery job again to get further information about the NAS filer. Refer to Figure 8-41.
2. Expand the Administrative Services in the navigation tree and select the Administrative Services Configuration Manual NAS/Netware Server entry. You will see the panel shown in Figure 8-59. Click the Add NAS Server button, enter the following information, and then click OK:
Network name: Enter the fully qualified host name or IP address of the NAS filer.
Data Manager Agent OS Type: Select the operating system of the computer that contains the agent that will gather information about the NAS filer. In our case, we select Unix here.
Accessible from: Select the Unix agent host that you want to discover the NAS filer from the drop-down list.
SNMP Community: Enter the SNMP community name; the default is PUBLIC. This is used to get information from the NAS filer.
Login ID and Password: Windows only. It is grayed out when we choose Unix under Data Manager Agent OS Type.
If all the information you provided above is correct, you will see the NAS filer added to the panel shown in Figure 8-60.
Now you can see that the NAS filer filesystems have been assigned Scan/Probe Agents; make sure you click the Save button in the toolbar to save the change (see Figure 8-62).
Click the When to Run tab, choose the Run Now radio button, and save the probe job by clicking the Save button on the toolbar (see Figure 8-51); the probe job starts. You can right-click the probe job and select Update Job Status to check the running job status. Wait until the probe job finishes. Once a probe has been successfully completed, you can verify that TPC has the filer data by viewing the TPC dashboard. On the Monitored Server Summary panel, you should see the total number of Network Appliance devices monitored and the total filesystem/disk capacities (see Figure 8-52).
Figure 8-64 Tivoli Storage Productivity Center Dashboard showing NAS Filer Information
In the Profiles tab, we select all of the default profiles and apply them to file systems and directories by clicking the >> button; see Figure 8-54. Profiles allow us to specify what statistical information is gathered and to fine-tune and control what files are scanned. Click the When to Run tab, choose the Run Now radio button, and save the scan job by clicking the Save button on the toolbar (see Figure 8-66). TPC asks for a name for the scan job; give it a job name and click OK to start the scan job. You can right-click the job name and select Update Job Status to check the running job status. Wait until the scan job finishes.
Double-click the NAS filer to see more detailed information about this NAS filer in the L2: Computer view (see Figure 8-68).
Filesystem Reporting
We can also generate reports for the NAS filesystems. Here is an example of how to generate a report. 1. Expand Data Manager Reporting Capacity Filesystem Free Space By Computer. Click Generate Report as shown in Figure 8-70.
2. From the following panel, highlight the NAS filer and click the magnifying glass icon in front of it (Figure 8-71).
3. From the following panel, highlight the mount points you are interested in, right-click them, and choose Chart space usage for selected as shown in Figure 8-72.
The filesystem free space chart is presented in the following panel; this chart shows the current free space on each volume on the NAS filer. You can right-click the chart and select the Customize this chart button to customize the chart. On the pop-up panel, we choose 4 in the Maximum number of charts or series per screen drop-down menu (see Figure 8-73).
Now you can see the customized Chart of Filesystem Free Space by Computer in the panel shown in Figure 8-74. You can click the Prev and Next buttons to see charts of other NAS volumes.
As shown in Figure 8-76 on page 318, this report shows detailed information regarding the machine's assets, including:
- Machine hostname
- Host ID: unique machine identifier generated by the Tivoli GUID
- Group and domain information
- Network address, IP address
- Machine time zone
- Manufacturer, model, and serial number
- Processor type, speed, and count
- RAM information
- Operating system type and version
- CPU architecture and swap space
- Disk capacity, unallocated disk space
- Filesystem free space
- Last boot time, discovered time
- Last probe time and status
- For VMware virtual machines, information regarding the hypervisor and VM configuration file
From this view, you can drill down into a particular virtual machine's controllers, disks, file systems, exports/shares and monitored directories. To view details regarding disks assigned to a virtual machine, select Data Manager->Reporting->Asset->By Computer->[Computer Name]->Disks->[Disk #]. The disk detail panel is made up of four tabs: General, Paths, Latest Probe and Probe History.
The General page includes the computer name, path name, SCSI target ID, logical unit number and the number of access paths. This page also includes disk information such as the manufacturer, model number, firmware, serial number and manufacture date of the disk.
The Paths page shows information regarding the host, OS type, path, controller, instance, bus number, SCSI target ID, and logical unit number.
The Latest Probe page shows information gathered by TPC during the most recent probe of the disk. This page includes information about the sectors, number of heads, number of cylinders, logical block size, disk capacity, RPM information, power-on time, failure prediction, disk defect information and time of last probe.
The Probe History page shows the history of probes which have been run on this disk for tracking purposes.
For a given virtual machine disk, you can also view how it is mapped from the hypervisor. To do so, select the Mapping to Hypervisor tab on the disk information report.
3. Select the report type, as shown in Figure 8-79. You can select the following reports:
Asset
System-wide
Storage Subsystems
Availability
Capacity
Usage
Usage Violations
Backup
Groups
Based on the report type, the selections panel gives you the ability to narrow down the columns in the report or filter the report to include specific entities (see Figure 8-80 on page 321).
The Options panel has been enhanced in Tivoli Storage Productivity Center 4.2 to include the ability to either generate the batch reports on the TPC server machine or to specify a custom location for the generated batch reports. You can also select the type and format of report to generate in this panel(see Figure 8-81 on page 322).
This panel also gives the user the ability to specify the format for the output file name(see Figure 8-82 on page 323).
The next panel gives you the ability to specify when to run the batch report, and set up a schedule to run the defined batch report repeatedly(see Figure 8-83 on page 324).
The Alert panel will allow you to specify an alert to be generated if the batch report generation fails(see Figure 8-84 on page 325).
oductivity_Center_Standard_Edition, click the Documentation link, and enter Platform Support: Agents, Servers and GUI in the Search box. Click the link to the TPC 4.2 document
Clicking the magnifying glass will take you to an agent detail panel which displays the properties of the installed HBAs (see Figure 8-86 on page 327).
You can also run a DB2 query to generate an output of all of the HBAs installed in your environment. See Example 8-1 below for details on the query. The report can be generated using either the DB2 command line interface or the Command Center. Note: You may also use a reporting tool, such as BIRT, to generate this report.
Example 8-1
select
   HBA.SERIAL_NUMBER, HBA.INSTANCE_NUMBER, HBA.DRIVER_VERSION,
   HBA.FIRMWARE_VERSION, HBA.ROM_VERSION, HBA.HW_VERSION, HBA.MODEL,
   HBA.WWNN, HBA.WWPN, HBA.BUS_NUMBER
from TPCREPORT.COMPUTERSYSTEM CS
   right join TPCREPORT.HOSTBUSADAPTER HBA
      on CS.COMPUTER_ID = HBA.COMPUTER_ID and HBA.DETECTABLE='True'
where CS.DETECTABLE='True'
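The statement in Example 8-1 can be run directly against the TPC repository database from the DB2 command line. The following is only a minimal sketch: it assumes the default repository database name TPCDB and a file named hba_report.sql that holds the statement above (terminated with a semicolon); both names are illustrative and should be adjusted to your environment.

   db2 connect to TPCDB          # default TPC repository database name; yours may differ
   db2 -tvf hba_report.sql       # -t expects each statement in the file to end with ;
   db2 connect reset             # close the connection when done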
This will invoke the service collection script on the Storage Resource Agent, zip the contents and send it to the TPC server(see Figure 8-88).
Chapter 9.
Tivoli Storage Productivity Center for Disk Midrange Edition
9.1 Overview
Tivoli Storage Productivity Center for Disk Midrange Edition provides the same features and functions as Tivoli Storage Productivity Center for Disk, but is limited to managing IBM Storwize V7000 and IBM System Storage DS3000, DS4000, DS5000 and FAStT devices. It provides performance management, monitoring and reporting for these devices. IBM Tivoli Storage Productivity Center for Disk Midrange Edition is designed to provide storage device configuration, performance monitoring, and management of Storage Area Network (SAN)-attached devices from a single console. Tivoli Storage Productivity Center for Disk Midrange Edition:
Offers continuous real-time monitoring and fault identification to help improve SAN availability
Provides reporting across multiple arrays from a single console
Helps monitor metrics such as throughput, input and output (I/O) and data rates, and cache utilization
Receives timely alerts that can enable event action based on your policies when thresholds are exceeded
Helps to improve storage return on investment by helping to keep SANs reliably and dependably operational
Can reduce storage administration costs by helping simplify the management of complex SANs
Tivoli Storage Productivity Center for Disk MRE is licensed per Storage Device. A Storage Device is an independently powered, channel attached device that stores or controls the storage of data on magnetic disks or solid state drives, such as disk controllers and their respective expansion units, each constituting separate Storage Devices. You must obtain entitlements for every Storage Device which runs, uses services provided by, or otherwise accesses the Program and for every Storage Device on which the Program is installed.
Tivoli Storage Productivity Center for Disk MRE performance monitoring is supported on IBM Storwize V7000, DS3000, DS4000, DS5000 and FAStT Storage Devices and includes Basic Edition capabilities. SVC performance monitoring is supported with the Tivoli Storage Productivity Center for Disk MRE license when the backend storage is made up of the supported IBM storage devices.
IBM Tivoli Storage Productivity Center for Disk Midrange Edition is set apart from IBM Tivoli Storage Productivity Center for Disk because it is:
Chapter 10.
Tivoli Storage Productivity Center for Replication
10.2.1 Description
Open HyperSwap replication applies to both planned and unplanned site switches. When a session has Open HyperSwap enabled, an I/O error on the primary site automatically causes the I/O to switch to the secondary site without any user interaction and with minimal application impact. In addition, while Open HyperSwap is enabled, the Metro Mirror session supports disaster recovery. If a write is successful on the primary site but cannot be replicated on the secondary site, IBM Tivoli Storage Productivity Center for Replication suspends the entire set of data consistency checking, thus ensuring that a consistent copy of the data exists on the secondary site. If the system fails, this data might not be the latest data, but the data should be consistent and allow the user to manually switch host servers to the secondary site. You can control Open HyperSwap from any system running Tivoli Storage Productivity Center for Replication (AIX, Windows, Linux, or z/OS). However, the volumes that are involved with Open HyperSwap must be attached to an AIX system. The AIX system is then connected to Tivoli Storage Productivity Center for Replication. Figure 10-1 shows an overview of the Open HyperSwap function.
10.2.2 Prerequisites
TPC-R requirements
If you want to use an Open HyperSwap session, you must have the Tivoli Storage Productivity Center for Replication Two Site Business Continuity license. Tivoli Storage Productivity Center for Replication by default uses TCP/IP port 9930 for communication with the AIX host for Open HyperSwap, and TCP/IP port 1750 for communication with the DS8000 (HMC connection).
Details about other TCP/IP ports used by Tivoli Storage Productivity Center for Replication are described in Tivoli Storage Productivity Center Installation and Customization Guide, SC27-2337.
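Before configuring a session, it can be worth confirming that the ports mentioned above are actually in use. This is only an illustrative check and assumes the default port numbers have not been changed:

   netstat -an | grep 9930      # on the TPC-R server: the Open HyperSwap listener should show a LISTEN entry
   netstat -an | grep 1750      # shows established connections to the DS8000 HMC, if any are active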
AIX requirements
Open HyperSwap support requires AIX version 5.3 (with required APARs) or 6.1. You can find the supported AIX version for each Tivoli Storage Productivity Center for Replication release in the support matrix at the following link:
http://www-01.ibm.com/support/docview.wss?rs=40&context=SSBSEX&context=SSMN28&context=SSMMUP&context=SS8JB5&context=SS8JFM&uid=swg21386446&loc=en_US&cs=utf-8&lang=en
You must have the following AIX modules installed:
Subsystem Device Driver Path Control Module (SDDPCM) version 3.0.0.0 or later
Multi-Path Input/Output (MPIO) module (the version that is provided with AIX version 5.3 or 6.1)
The TCP/IP connections between the AIX host systems and the Tivoli Storage Productivity Center for Replication server must be established.
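As a quick verification of these prerequisites on the AIX host, commands such as the following can be used. The fileset names shown are typical for SDDPCM and AIX MPIO packages and are given only as an illustration; substitute the names that apply to your driver package.

   oslevel -s                               # confirm the AIX version and technology level
   lslpp -l "devices.sddpcm.*"              # list installed SDDPCM filesets; 3.0.0.0 or later is required
   lslpp -l devices.common.IBM.mpio.rte     # confirm the AIX MPIO module is installed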
SDDPCM details
SDD distinguishes the paths of the source volume from the paths of the target volume on an Open HyperSwap copy set. With an Open HyperSwap device, I/O can only be sent to the source volume, so when SDD selects paths for I/O, it only selects paths that are connected to the source volume. If there is no path on the source volume that can be used, SDD initiates the Open HyperSwap request to Tivoli Storage Productivity Center for Replication and the two work together to perform the swap. After the swap, SDD selects the target volume paths for I/O. The AE daemon is new in SDDPCM and is added to the SDDPCM installation package beginning with SDDPCM 3.0.0.0. The daemon is used for communication with Tivoli Storage Productivity Center for Replication to support Open HyperSwap.
Important: Open HyperSwap does not yet support host clustering solutions such as PowerHA (High Availability Cluster Multi-Processing - HACMP). An Open HyperSwap device is not supported with a SAN boot volume group. Currently, DB2 with raw device access is not supported.
Note: TPC-R must not be installed on any of the AIX hosts involved in the Open HyperSwap session.
Note: For AIX 5.3, a single host should manage a maximum of 1024 devices when devices have been enabled for Open HyperSwap on the host, with 8 logical paths configured for each copy set in the session. For AIX 6.1, a single host should manage a maximum of 1024 devices when devices have been enabled for Open HyperSwap on the host, with 16 logical paths configured for each copy set in the session
1. After you install the SDDPCM driver, check the AE daemon by typing the command shown in Example 10-1. If the daemon is not active, start it by using the startsrc -s AE or startAE command.
Example 10-1 SDDPCM AE daemon
PID      Status
12036    active
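Because the AE daemon is registered with the AIX System Resource Controller, the standard SRC commands can be used to query and start it. This is a minimal illustration consistent with the startsrc -s AE command mentioned in step 1:

   lssrc -s AE        # show whether the AE subsystem is active or inoperative
   startsrc -s AE     # start the AE daemon if it is not active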
2. Use the AIX configuration manager (cfgmgr) to identify all the volumes that are involved with the Open HyperSwap session. You can check the volumes by issuing the SDDPCM command shown in Example 10-2. In our example we use 20 volumes (10 primary and 10 secondary). Only 2 primary volumes are shown in Example 10-2.
Example 10-2 SDDPCM path query device command
jerome> pcmpath query device
Total Dual Active and Active/Asymmetrc Devices : 20

DEV#: 5  DEVICE NAME: hdisk5  TYPE: 2107900  ALGORITHM: Load Balance
SERIAL: 75VG4116300
==========================================================================
Path#   Adapter/Path Name   State   Mode     Select   Errors
  0     fscsi0/path0        CLOSE   NORMAL   0        0

DEV#: 6  DEVICE NAME: hdisk6  TYPE: 2107900  ALGORITHM: Load Balance
SERIAL: 75VG4116301
==========================================================================
Path#   Adapter/Path Name   State   Mode     Select   Errors
  0     fscsi0/path0        CLOSE   NORMAL   0        0
.
.

3. To manage Open HyperSwap, the AIX host must be connected to the TPC-R server. To add the AIX host, click Host Systems in the TPC-R menu; it will open the panel shown in Figure 10-2.
Note: A single session that has Open HyperSwap enabled can manage multiple hosts. However, each host can be associated with only one session. Multiple hosts can share the same session.
4. Click Add Host Connection and the window in Figure 10-3 will open. Enter the AIX hostname or IP address and click Add Host. If the connection between TPC-R and the AIX host is established, the AIX host will be added to TPC-R (Figure 10-4).
5. After you add the host connection, click Sessions. This provides an overview of all defined sessions. At this point there are no defined sessions. Click Create Session to continue (Figure 10-5).
6. In the Create Session window, select Metro Mirror Failover/Failback and click Next (Figure 10-6). This is the only session type supported with the Open HyperSwap function.
7. This leads us to the Properties panel, as Figure 10-7 shows. The Properties panel is important because it gives you the option to enable the Open HyperSwap function for this session. If you select this option, Open HyperSwap will be triggered and will redirect application I/O to the secondary volumes when there is a failure on the host-accessible volumes. You can also specify to prevent a HyperSwap from occurring by command or
event by selecting Disable Open HyperSwap. In some situations it might be necessary to temporarily disable the Open HyperSwap capabilities for a session. Within the Properties panel you can also select the Metro Mirror suspend policy. You basically have two options available:
Hold I/O after Suspend: Select this option if you want to block the application from writing while a consistent copy of the data is formed on the remote site. However, it does not automatically release the application. This option keeps the source equal to the target. You must use the Release I/O command on the session or wait for the Hardware Freeze Timeout Timer to expire before the application can continue to write to the source.
Release I/O after Suspend: Select this option if you want to block writes to the application while a consistent copy of the data is formed on the remote site, followed immediately by releasing the block so that the application can continue writing to the source. This option allows for little application impact, but causes the source to potentially be different than the target. This is the default setting for all new sessions.
8. In the following panels you can define your site locations (Figure 10-8 and Figure 10-9).
9. After you click Next you will see that the session was successfully created (Figure 10-10). Click Finish and your session will be shown in the Session overview panel (Figure 10-11).
10. After the Metro Mirror session with the Open HyperSwap function is created, you have to populate this session with Copy Sets, each of which consists of one H1 and one H2 Metro Mirror volume. In the session overview panel, select the session name radio button and choose Add Copy Sets from the Select Action pull-down menu (Figure 10-12). Click Go to invoke the Add Copy Sets wizard.
11. In the Add Copy Sets wizard, you provide details of the primary or local volumes, which are called Host 1 volumes because these volumes reside at Site 1. Select the desired Host 1 storage subsystem from the pull-down menu and wait a few seconds to get the Host 1 logical storage subsystem list. Select the LSS where your H1 volume resides. Once the LSS has been selected, choose the appropriate volumes from the Host 1 volume pull-down list. An alternative way to add a large number of volumes to this session is to create a CSV file. If you have a CSV file ready, select the Use a CSV file to import copy sets check box and provide the path to your CSV file. In our example we selected a DS8000 disk subsystem and all volumes from the selected LSS, as shown in Figure 10-13. Click Next to continue.
12.In the panel shown in Figure 10-14 select the Host 2 LSS and volumes in the same way you did it in the previous step. TPC-R will automatically match all volumes from selected LSS in Host 1 storage subsystem with all volumes from selected LSS in Host 2 storage subsystem. In this example we selected All Volumes. Click Next to continue.
13. The next screen, shown in Figure 10-15, displays a message regarding the matching results. In our example the message indicates that all copy set matches were successful. However, you may see a warning message due to one of the following reasons:
the number of volumes in the Host 1 storage subsystem LSS and the Host 2 storage subsystem LSS is not the same
volumes in the Host 2 storage subsystem LSS are smaller than the Host 1 storage subsystem LSS volumes
the Host 1 or Host 2 storage subsystems are already defined in some other copy services session
However, a warning message does not mean the Copy Sets creation failed. Click Next to see the list of available Copy Sets.
14. All copy set volumes which met the matching criteria are automatically selected. You still have a chance to modify the current selection and deselect any of the volume pairs included in the list. The Show hyperlink next to each matching volume pair provides Copy Set information. We selected all copy sets as shown in Figure 10-16. Click Next to continue.
15.The next screen displays the number of copy sets which are going to be created as well as the number of unresolved matches (or not selected) as shown in Figure 10-17. Click Next to continue.
16. TPC-R internally adds the copy sets to its database. Figure 10-18 displays a progress panel which reports the number of copy sets added to the TPC-R inventory database. Note that this does not start establishing the Metro Mirror copy pairs. It is just a TPC-R internal process to add the copy sets to the TPC-R database (inventory database). After a few seconds the progress panel reaches 100%, leaves the Adding Copy Sets panel, and progresses to the next panel shown in Figure 10-19. Click Finish to exit the Add Copy Sets wizard.
17.Figure 10-20 shows and confirms that the session is populated with Copy Sets. The status of the session is still Inactive.
18.Once the session is defined and populated with Copy Sets you can start the Metro Mirror session with Open HyperSwap. Initially the session can only be started in direction from Host 1 to Host 2. To start it, from the Select Action pull-down menu select Start H1->H2 and click Go (Figure 10-21).
19. The next message, shown in Figure 10-22, is a warning that you are about to initiate a Metro Mirror session. It will start copying data from the Host 1 to the Host 2 volumes defined previously by adding copy sets, thus overwriting any data on the Host 2 volumes. Click Yes to continue.
20. In Figure 10-23, the message at the top of the screen confirms that the start of the Metro Mirror session with Open HyperSwap is complete. The session is in the Preparing state with Warning status.
21. Figure 10-24 shows the Metro Mirror with Open HyperSwap session without errors. The copy progress is 100% and the session has changed to the Prepared state with Normal status.
Example 10-3 shows the status of the AIX devices in the Open HyperSwap session. The example shows only two AIX hdisks with their primary and secondary volumes. The OS direction is from H1 to H2.
Example 10-3 SDDPCM path query device command of Open HyperSwap devices
jerome> pcmpath query device
Total Dual Active and Active/Asymmetrc Devices : 10

DEV#: 5  DEVICE NAME: hdisk5  TYPE: 2107900  ALGORITHM: Load Balance
SESSION NAME: HyperSwap  OS DIRECTION: H1->H2
==========================================================================
PRIMARY SERIAL: 75VG3816300
-----------------------------
Path#   Adapter/Path Name   State   Mode     Select   Errors
  0     fscsi0/path0        CLOSE   NORMAL   0        0
SECONDARY SERIAL: 75VG4116300
-----------------------------
Path#   Adapter/Path Name   State   Mode     Select   Errors
  1     fscsi0/path1        CLOSE   NORMAL   0        0

DEV#: 6  DEVICE NAME: hdisk6  TYPE: 2107900  ALGORITHM: Load Balance
SESSION NAME: HyperSwap  OS DIRECTION: H1->H2
==========================================================================
PRIMARY SERIAL: 75VG3816301
-----------------------------
Path#   Adapter/Path Name   State   Mode     Select   Errors
  0     fscsi0/path0        CLOSE   NORMAL   0        0
SECONDARY SERIAL: 75VG4116301
-----------------------------
Path#   Adapter/Path Name   State   Mode     Select   Errors
  1     fscsi0/path1        CLOSE   NORMAL   0        0

You can check the status of the Open HyperSwap session by using the SDDPCM command shown in Example 10-4.
Example 10-4 SDDPCM query session command
jerome> pcmpath query session
Total Open Hyperswap Sessions : 1

SESSION NAME: HyperSwap
SessId         0
Host_OS_State  READY
Host_copysets  10
Disabled       0
Quies          0
Resum          0
SwRes          0
In our example we created a new volume group and a filesystem using the Open HyperSwap LUNs. After mounting the filesystem, we generated I/O activity on it. As shown in Example 10-5, an asterisk next to the primary volume serial number shows that there is I/O activity on the primary volumes.
Example 10-5 SDDPCM path query device command showing I/O activity on primary volumes
jerome> pcmpath query device
Total Dual Active and Active/Asymmetrc Devices : 10

DEV#: 5  DEVICE NAME: hdisk5  TYPE: 2107900  ALGORITHM: Load Balance
SESSION NAME: HyperSwap  OS DIRECTION: H1->H2
==========================================================================
PRIMARY SERIAL: 75VG3816300 *
-----------------------------
Path#   Adapter/Path Name   State   Mode     Select   Errors
  0     fscsi0/path1        OPEN    NORMAL   457      0
SECONDARY SERIAL: 75VG4116300
-----------------------------
Path#   Adapter/Path Name   State   Mode     Select   Errors
  1     fscsi0/path0        OPEN    NORMAL   8        0

DEV#: 6  DEVICE NAME: hdisk6  TYPE: 2107900  ALGORITHM: Load Balance
SESSION NAME: HyperSwap  OS DIRECTION: H1->H2
==========================================================================
PRIMARY SERIAL: 75VG3816301 *
-----------------------------
Path#   Adapter/Path Name   State   Mode     Select   Errors
  0     fscsi0/path1        OPEN    NORMAL   232      0
SECONDARY SERIAL: 75VG4116301
-----------------------------
Path#   Adapter/Path Name   State   Mode     Select   Errors
  1     fscsi0/path0        OPEN    NORMAL   8        0
To have the volumes protected with high availability and disaster recovery again, the error that caused the swap must be fixed and then the session must be manually restarted to begin copying to the other site.
2. The next message, shown in Figure 10-26, is a warning that you are about to initiate an Open HyperSwap action. It will move application I/O from the H1 to the H2 volumes. Click Yes to continue.
3. In Figure 10-27, the message at the top of the screen confirms that the Open HyperSwap action completed successfully. All application I/O is moved to the H2 volumes. Figure 10-28 shows the console log with all actions performed by TPC-R during the swap.
4. After the successful swap, it is not possible to re-enable copying to the H2 volumes. Therefore, it is not possible to issue a Start H1->H2 command. The only way to restart the copy is a Start H2->H1 command. To have the volumes protected with high availability and disaster recovery again, the error that caused the swap must be fixed and then the session must be manually restarted to begin copying to the other site. To restart the session, from the Sessions panel select your Metro Mirror with Open HyperSwap session. From the Select Action pull-down list, select Start H2->H1 and click Go as shown in Figure 10-29.
5. The next message, shown in Figure 10-30, is a warning that you are about to initiate copying data from H2 to H1. Click Yes to continue.
6. In Figure 10-31, the message at the top of the screen confirms that the Start H2->H1 action completed successfully and that data is being copied from H2 to H1.
7. The status of the volumes on the AIX host in Example 10-6 shows that the OS direction has changed to H2->H1. It also shows an asterisk next to the secondary volume serial number, which means that the I/O activity has moved to the secondary volumes.
Example 10-6 SDDPCM path query device command showing I/O activity on secondary volumes
jerome> pcmpath query device
Total Dual Active and Active/Asymmetrc Devices : 10

DEV#: 5  DEVICE NAME: hdisk5  TYPE: 2107900  ALGORITHM: Load Balance
SESSION NAME: HyperSwap  OS DIRECTION: H1<-H2
==========================================================================
PRIMARY SERIAL: 75VG3816300
-----------------------------
Path#   Adapter/Path Name   State   Mode     Select   Errors
  0     fscsi0/path1        OPEN    NORMAL   457      0
SECONDARY SERIAL: 75VG4116300 *
-----------------------------
Path#   Adapter/Path Name   State   Mode     Select   Errors
  1     fscsi0/path0        OPEN    NORMAL   8        0

DEV#: 6  DEVICE NAME: hdisk6  TYPE: 2107900  ALGORITHM: Load Balance
SESSION NAME: HyperSwap  OS DIRECTION: H1<-H2
==========================================================================
PRIMARY SERIAL: 75VG3816301
-----------------------------
Path#   Adapter/Path Name   State   Mode     Select   Errors
  0     fscsi0/path1        OPEN    NORMAL   232      0
SECONDARY SERIAL: 75VG4116301 *
-----------------------------
Path#   Adapter/Path Name   State   Mode     Select   Errors
  1     fscsi0/path0        OPEN    NORMAL   8        0
Note: If you want to go back and change the OS direction to H1->H2, you have to swap back from H2 to H1 and perform the same steps as described in the example above.
The Takeover action must be started from the standby management server. On the standby management server, select Management Servers from the menu on the left. In the Management Servers panel, select the Takeover action from the drop-down menu (Figure 10-33).
After you click the Go button, you will get a warning message that both management servers will become active with identical configurations (Figure 10-34). You must ensure that you shut down the active management server first.
After the takeover is successfully completed, you will see the standby management server with the role Active (Figure 10-35).
The Open HyperSwap session status will change to Severe while the configuration is being loaded to the AIX host (Figure 10-36). In our example you can also see a Metro Mirror session without Open HyperSwap. That session's status does not change during the takeover action because it does not depend on TPC-R to host connectivity.
After the configuration is loaded to the AIX host, the status will change to Normal and the AIX host will be capable of performing Open HyperSwap actions (Figure 10-37).
Figure 10-38 shows the TPC-R console log of the takeover action.
2. From the drop-down menus in the Remove Copy Sets wizard, select the Host 1 storage system, logical storage subsystem, and volume or select the all option. If you select all for a filter, the lower level filter or filters will be disabled (Figure 10-40). Click Next to continue.
3. Select the copy sets that you want to remove and click Next (Figure 10-41). In our example we will select only one copy set (volume 6300).
4. The panel in Figure 10-42 shows the number of selected copy sets to be removed. In this panel you can select whether you want to remove or keep the hardware relationship. In our example we will keep the base hardware relationship on the storage system. You can also select to force the copy set to be removed from the session if any errors occurred; the force option forces the removal from the TPC-R session even if there were hardware errors. Click Next to continue.
5. Figure 10-43 shows the progress of removing the copy set; after the process finishes successfully, click Finish to exit the wizard (Figure 10-44).
6. If you check the DS8000 GUI, you will see that the status of the removed copy set has not changed and that it has not been removed from the storage system (Figure 10-45). The source and the target volume in our example is 6300.
In our example we removed one copy set (volume 6300) and left the hardware relationship in place, as shown in Figure 10-45. The session now contains nine copy sets (Figure 10-46).
After we suspend the session (Figure 10-47), the removed copy set (volume 6300) will also be suspended. Figure 10-48 shows the status of the copy sets in the DS8000 GUI after the suspend action.
Figure 10-48 Copy set status in DS8000 GUI after suspending the session
If you resume the session, only the copy sets within the session will be resumed, while the removed copy set will remain in suspended status (Figure 10-49).
Figure 10-49 Copy set status in DS8000 GUI after resuming the session
The following example shows the SDDPCM pcmpath query device command output for a coupled copy set. As you can see, the session name is blank because the device is no longer associated with the session.
Example 10-7 SDDPCM pcmpath query device command for a coupled copy set
jerome> pcmpath query device
Total Dual Active and Active/Asymmetrc Devices : 10

DEV#: 5  DEVICE NAME: hdisk5  TYPE: 2107900  ALGORITHM: Load Balance
SESSION NAME:   OS DIRECTION: H1->H2
==========================================================================
PRIMARY SERIAL: 75VG3816300 *
-----------------------------
Path#   Adapter/Path Name   State   Mode     Select   Errors
  0     fscsi0/path1        OPEN    NORMAL   1506     0
SECONDARY SERIAL: 75VG4116300
-----------------------------
Path#   Adapter/Path Name   State   Mode     Select   Errors
  1     fscsi0/path0        OPEN    NORMAL   8        0
To decouple the copy set, follow these general steps:
1. On the AIX host, stop the application that has opened the device, unmount the file system, and vary the volume group offline.
2. Remove the device by using the following command, where hdisk_number is the number of the hdisk (device) that you want to remove: rmdev -dl hdisk<hdisk_number>
3. Run the cfgmgr command to rediscover the device. If you run the pcmpath query device command, separate devices are presented for the copy set pair.
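The following sequence illustrates these general steps on AIX for a hypothetical device hdisk5 that belongs to a volume group named hsvg with a file system mounted at /hsfs; the device, volume group, and mount point names are examples only and must be replaced with your own:

   umount /hsfs             # stop the application first, then unmount the file system
   varyoffvg hsvg           # vary the volume group offline
   rmdev -dl hdisk5         # remove the device definition for the coupled copy set
   cfgmgr                   # rediscover the devices
   pcmpath query device     # the copy set pair is now presented as separate devices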
2. Click the Create button and the packages will start being created (Figure 10-52).
3. After the packages are created you will see the following message in Figure 10-53.
4. Click the Display PE Packages hyperlink and the panel in Figure 10-54 will show the new or existing packages. Click the package and save it locally to a folder of your choice.
Tivoli Storage Productivity Center for Replication 4.2 provides the following options to create the logical paths and specify port pairings:
Adding logical paths automatically - TPC-R will automatically pick the paths or use ones that were already established
Adding logical paths and creating port pairings by using a comma-separated values (CSV) file
Adding logical paths by using the TPC-R GUI
Note: The Path Manager will not affect, modify, or deal with Global Mirror control paths in any way. It applies only to data paths between Metro Mirror or Global Copy relationships.
Note: If you decide to use the CSV file, you should always manage your port pairings by using only the CSV file.
To add logical paths using a CSV file, follow these steps:
1. Create a CSV file named portpairings.csv in the install_root/eWAS/profiles/CSM/properties directory.
Note: In the install_root/eWAS/profiles/CSM/properties directory you will find a portpairings.csv.sample file, which you can rename to portpairings.csv.
You can also create the CSV file in a spreadsheet such as Microsoft Excel or in a text editor. An example of a CSV file is shown in Example 10-8:
Example 10-8 Portpairings CSV sample file
# Examples: 2107.04131:2107.01532,0x0331:0x0024,0x0330:0x0100,0x0331:0x000C
2107.05131:2107.01532,0x0330:0x0029,0x0331:0x0001
Each line in the file represents a storage system to storage system pairing. The first value represents the storage systems, which are delimited by a colon. The remaining values are the port pairs, each of which is also delimited by a colon. All values are separated by commas, and commented lines must start with #.
Note: When specifying a storage system in the CSV file, use only the last five numbers of the storage ID. If the storage ID is, for example, IBM.2107-7516381, the storage system should be specified as 2107.16381.
2. To enable the changes in the file, you must perform a task that requires new paths to be established. For example, suspend a session to remove the logical paths and then issue the Start H1->H2 command to enable the paths to use the port pairings in the CSV file. After the start command finishes successfully, you will see the paths with Auto-Generated NO in the ESS/DS panel (Figure 10-55).
Note: These are important rules to keep in mind when using the CSV file:
1. The entry for a storage system pair and the port pairs are bi-directional. This means that a line that has systemA:systemB is equivalent to a line that has systemB:systemA.
2. Lines that are incorrectly formatted are discarded. For example, if a line contains ports without the 0x prefix, or does not contain port pairs delimited by the : character, the whole line is discarded.
3. A line can be properly formatted but contain invalid ports for your given storage system configuration. In this case, the ports are passed down to the storage system to be established; no validation is done in TPC-R. The valid ports may be established by the storage system, while the invalid ones can be rejected. If there are no valid ports, you will get the error shown in Figure 10-56.
4. If a file contains duplicate lines for the same storage systems, the last line wins; the ports on the last line are the ones that will be used. Also note that the entries are bi-directional. Thus, if you have systemA:systemB and then a line with systemB:systemA, the second line is the one that is used.
5. Any line that starts with a # character is counted as a comment and is discarded. The # must be at the start of the line. Placing it in other positions could cause the line to be invalid.
6. The portpairings.csv file is not shared between two TPC-R servers in a High Availability environment. Thus, it is possible that different port pairings can be established from the standby server after a takeover. You have to copy the portpairings.csv file to the standby server to ensure that the two files are equal.
2. Click on the Manage Paths button. It will open the Path Management Wizard (Figure 10-58). From the drop-down boxes in the Path Management wizard, select the source storage system, source logical storage system, target storage system, and target logical storage system. Then click Next.
3. From the drop-down boxes in the Path Management wizard, select the source port and target port and click Add. You can add multiple paths between the logical storage subsystems, or just one at a time. When you have made your selections, click Next (Figure 10-59).
5. Verify the Results panel and click Finish to exit the wizard (Figure 10-61).
6. By clicking on the Storage System you see the path that we have just added (Figure 10-62).
Figure 10-64 shows the Volumes panel, where you can see your SAN Volume Controller space-efficient volumes marked with Yes in the Space Efficient column.
If you select the incremental option on those sessions, it will significantly reduce the amount of time it takes to perform a copy operation. A session created with the incremental FlashCopy option will copy only the data that has been changed on the source or the target
since the previous copy completed. The incremental FlashCopy can substantially reduce the time that is required to recreate an independent image. Another case where the incremental FlashCopy option is beneficial is when the FlashCopy mapping is stopped before the background copy completes: when the mapping is restarted, the data that had been copied before the mapping was stopped is not copied again. For instance, if an incremental mapping reaches 10% progress when it is stopped and restarted, that 10% of data will not be re-copied when the mapping is restarted, assuming of course that it was not changed. Note: Even if you use the incremental FlashCopy option, the first copy process copies all of the data from the source to the target SVC VDisk. In the following example we show how to create and start an incremental FlashCopy session. 1. In the Tivoli Storage Productivity Center for Replication navigation tree, select Sessions. Click the Create Session button and the Create Session wizard will display (Figure 10-66).
2. From the drop-down menu, select the FlashCopy session type and click Next (Figure 10-67).
3. In the Properties panel shown in Figure 10-68, type a session name and description. Here you can specify if you want to use incremental FlashCopy. By selecting this option you will set up the relationship for recording changes to the source volume (H1 volume). This means that any subsequent FlashCopy operation for that session copies only the tracks that have changed since the last flash. Incremental always assumes persistence. In the Background Copy Rate box you can type the copy rate that the SAN Volume Controller will use to perform the background copy of the FlashCopy role pair. You can specify a percentage between 0 and 100. The default is 50. Specifying 0 is equivalent to specifying the No Copy option for a System Storage DS8000 or TotalStorage Enterprise
Storage Server FlashCopy session. You can modify this value at any time during the session. If the session is performing a background copy when you change the option, Tivoli Storage Productivity Center for Replication immediately modifies the background copy rate of the consistency group on the SAN Volume Controller. The SAN Volume Controller consistency group begins to use this new rate to complete the background copy that it is performing. Click next to continue.
4. From the drop-down menu, choose a location for Site 1 and click Next (Figure 10-69).
5. In the following panel verify that the session was added successfully. If the session is successfully created click Launch Add Copy Sets Wizard to add copy sets to the session. It will open the wizard as shown in Figure 10-70. From the drop-down menus you will select the Host 1 storage system, IO group, and volume. For the IO group and volume, you can select all entries, or an individual entry. In
our example we will select only one volume from an IO group. If you want to import a copy set, select the Use a CSV file to import copy sets check box. You can either enter the full path name of the comma separated value (CSV) file in the text box, or click Browse to open a file dialog box and select the CSV file. Click Next to continue.
6. From the drop-down menus shown in Figure 10-71, select the Target 1 storage system, IO group and volume. In our example we will use space-efficient target volume. Click Next to continue.
7. The following panel (Figure 10-72) will show you copy sets which you can select. Select the copy sets that you want to add. You can click Select All to select all the boxes, Deselect All to clear all the boxes, or Add More to add more copy sets to this session. In our example we will add only one copy set. If you click on Show link, you will see the copy set information which shows you the volumes that you have selected. Click Next to continue.
8. Figure 10-73 shows you the panel with the number of copy sets that will be added. Click Next to confirm and add copy sets. A progress bar will display the status of adding the copy set.
9. When the progress completes you will see the results panel. Click finish to exit the wizard.
10. Figure 10-75 shows the defined SVC FlashCopy session with one copy set. The session is in Defined status and can be started.
11.To start the FlashCopy session, from the session panel select the session and from the drop-down menu, select the Start action and click go (Figure 10-76). Note: Start action will perform any steps necessary to define the relationship before performing a FlashCopy operation. The session will have prepared status.
12.After you click the Go button you will get the warning message that the relationship will be established and prepared. Click Yes to establish the incremental FlashCopy session (Figure 10-77).
13. After the command completes successfully, the session will be in Preparing status and will then change to Prepared status.
14. To perform the FlashCopy operation, from the session panel select the session, and from the drop-down menu select the Flash action and click Go. The Flash action creates a data consistent point-in-time copy. The FlashCopy session with the incremental option will ensure that only the regions of disk space where data has been changed since the FlashCopy mapping was last started are copied to the target volume.
15. After you click the Go button, you will get a warning message that the point-in-time copy will be created. Click Yes to create the incremental FlashCopy.
16. After the command completes successfully (Figure 10-81), the session will be in Target Available status and the copy process will start in the background (Figure 10-82). If, for example, the FlashCopy session is stopped before the background copy completes, when the session is restarted the data that had been copied before the session was stopped is not copied again.
Note: For the support of space efficient volumes on DS8000 storage systems and the required microcode level, check the information at the following links:
DS8100 and DS8300: http://www-01.ibm.com/support/docview.wss?rs=1113&uid=ssg1S1002949
DS8700: http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003593&rs=1329
Tivoli Storage Productivity Center for Replication 4.2 added support for extent space efficient volumes and displays whether a volume is extent space efficient or not. It differentiates between extent space efficient and track space efficient volumes. Figure 10-83 shows the panel with extent space efficient volumes. Figure 10-84 and Figure 10-85 show track space efficient volumes and standard DS8000 volumes.
Although TPC-R shows extent space efficient volumes, they cannot be used in copy services relationships. If you try to add extent space efficient volumes to a copy set, you will get the error shown in Figure 10-86. Note: Extent space efficient volumes are currently restricted from participating in copy services relationships.
Track space efficient volumes can be used in copy sets, but only in the Target or Journal roles. If you try to add one in any other role, you will not be able to select it and you will see the message No valid selection (Figure 10-87).
If you click the Global Mirror info panel tab on the session details page, it will show you the following new information (Figure 10-89):
GM master LSS
Master Consistency group time
Master time during last query
Data exposure time
Session ID
Master State
Unsuccessful CGs during last formation
CG interval time
Max Coordination time
Max CG drain time
List of subordinates (only displayed if they exist)
The table in the panel displays the ratio of successful and unsuccessful CGs formed since the last query and overall.
If there were unsuccessful CG formations, you will see the reason codes in an expandable section listing (Figure 10-90).
The failure message also contains the following information:
Failing LSS
Error reason
Master state
The Global Mirror Data Exposure graph in the next figures shows the last 15 minutes and the last 24 hours intervals. It displays the data exposure over time. You can set up a data exposure threshold, which will highlight outliers or unusual spikes. The graph also shows the consistency group interval time, and if no data is collected due to a loss of communication with the storage system, this is also indicated. Figure 10-91 shows the data exposure graph for the last 24 hours.
Figure 10-92 shows the graph for the last 15 minutes.
Figure 10-92 GM data exposure during last 15 minutes
Migration
If you are upgrading to Tivoli Storage Productivity Center for Replication 4.2 from a previous Tivoli Storage Productivity Center for Replication release, a Global Mirror session will be maintained during the upgrade. You do not need to take additional steps to migrate your existing sessions. If you are upgrading Tivoli Storage Productivity Center for Replication with a Global Mirror session and you have Tivoli Storage Productivity Center for Replication in a High Availability configuration, these are the general steps for the upgrade:
1. First, issue a takeover from the standby TPC-R server
2. Upgrade the active TPC-R server
3. Ensure all sessions are maintained on the active TPC-R server
4. Upgrade the standby TPC-R server
Chapter 11.
XIV Support
In this chapter we describe the XIV device management and performance monitoring support provided by Tivoli Storage Productivity Center. In Tivoli Storage Productivity Center 4.2, we utilize the XIV Native API/CLI to monitor and manage the XIV storage devices. This provides additional resiliency in the communication between TPC and the XIV devices compared to the SMI-S interface. Tivoli Storage Productivity Center 4.2 adds support for performance management of XIV storage subsystems.
Note: Although an XIV can have up to three IP addresses, currently only one can be added to TPC. In a future release, TPC will be able to automatically discover the other IP addresses and utilize those in a fail-over scenario.
Note: Performance statistics are only collected for the XIV interface modules, so there will be a maximum of 6 modules listed per device. If no volume I/O is routed through a particular module, there will not be any statistics available for that module. The metrics below are available for XIV components.
Table 11-3 XIV performance metrics
Performance Metric                                    Description
Read/Write/Total I/O Rate (overall)                   I/Os per second
Read/Write/Total Cache Hit Percentage (overall)       Percentage
Read/Write/Total Data Rate                            MB per second
Read/Write/Overall Response Time                      ms per operation
Read/Write/Overall Transfer Size                      KB per operation, computed
Volume Utilization Percentage                         Computed
Note: Unlike other storage subsystems, the XIV does not track performance of the underlying disk drives. Therefore no "backend" metrics are available in TPC for XIV components.
To create a new volume, select the Create Volume button, which will launch the Create Volume Wizard (see Figure 11-2 on page 398). Specify the desired number of volumes, size of each and a volume name prefix. Note: The RAID Level selection should be ignored for XIV, since the device does not utilize a traditional RAID scheme.
Note: The Available Arrays drop-down lists the XIV pools. New volumes will be defined in the selected pool. The next panel allows you to assign the volumes to one or more host ports(see Figure 11-3 on page 399)
Note: In this example, we will not utilize the create volume wizard to make any SAN zoning changes. The final panel in the Create Volume wizard will present you with a summary of the proposed changes (see Figure 11-4 on page 400). You can ensure that these are expected, and if so click on the finish button.
Once you click finish, a confirmation window will come up (see Figure 11-5) and you will have the opportunity to launch into the job management panel to see the status of the job.
Note: SAN Planner can also be used to provision XIV storage. See the SAN Planner chapter for additional details.
Chapter 12.
SAN Planner
In this chapter we discuss the function of SAN Planner in Tivoli Storage Productivity Center V4.2. SAN Planner allows the user to perform end-to-end planning including fabrics, hosts, storage controllers, volumes, paths, ports, zones, zone sets, storage resource groups (SRGs), and replication. Once a plan is made, the user can select to have the plan implemented by SAN Planner. SAN Planner supports TotalStorage Enterprise Storage Server, IBM System Storage DS6000, IBM System Storage DS8000, and IBM System Storage SAN Volume Controller. SAN Planner supports the Space Only workload profile option for any other storage system supported by Tivoli Storage Productivity Center.
Make sure proper connectivity exists between the subsystems for replication.
For VDisk mirroring planning, make sure you have I/O group memory configured on the SVC.
For performance-based virtual disk planning, make sure you have performance data collected for all the back-end subsystems of the SVC. In addition, the back-end subsystems should be one or more of DS8000, DS6000, DS4000 or ESS only.
For replication planning, make sure you have an appropriate TPC-R license installed and Subsystem Device Feature codes enabled.
Replication planning
If you want to do replication planning as well, then the supported subsystems are IBM DS8000, DS6000, ESS and SVC.
The information about the alerts is also propagated from the members upwards to the group. The corresponding graphical indicator is visible next to the Storage Resource Group icon and an alert overlay is made available as shown in Figure 12-2.
Additionally the Storage Resource Groups health status is reported in the Storage Resource Group Management panel, as shown in Figure 12-3.
Once you select this option, the SAN Planner Wizard will pop-up as shown in Figure 12-5 on page 407.
Once you click next, you will be asked to select the type of planning you wish to perform(see Figure 12-6 on page 408).
Here, you can choose to provision storage only, provision storage with replication, set up replication on existing storage, or configure multipathing or zoning for existing storage. In this example, we will provision storage only. Once you click Next, you are presented with the option to select what type of new storage to provision, as well as whether to include multipathing or zoning considerations in the plan (see Figure 12-7 on page 409). Select vdisks to provision SAN Volume Controller storage, or volumes for any other type of storage device.
In this example, we will be provisioning volumes and include the multipath and zoning considerations. Once you select next, you will be presented with the option to specify the planner content(see Figure 12-8 on page 410).
Select the Add button to add the storage devices and hosts on which to provision storage. As shown in Figure 12-9 on page 411, this will launch the topology viewer.
In this example, we will choose two DS8000s to allocate storage from and a single Windows host on which to assign the storage. First, select each storage device and click the >> button to move it to the selected elements pane(see Figure 12-10 on page 412)
Secondly, select the host and move it to the selected elements pane(Figure 12-11 on page 413)
Once you have selected the appropriate components, click on the OK button. The specify plan contents window will be updated with the entities you have selected(see Figure 12-12 on page 414)
Once you verify the plan contents, click on the next button. This will take you to the capacity plan panel(see Figure 12-13 on page 415). Here you can set the total storage capacity that you would like allocated, and how you want that storage carved into multiple volumes.
For this example, we will allocate 10GB of storage, divided into 5 volumes. We will let the system choose the RAID level, and select the default Space Only workload profile(see Figure 12-14 on page 416)
Once you click next on the wizard panel, you will be taken to the Advanced Capacity Plan panel. Here, you can select to utilize thin provisioning, solid state disks, or disk encryption. For this example, we will not select any advanced options (see Figure 12-15 on page 417)
After clicking next, a confirmation panel will appear and you are able to validate the resources and options selected (see Figure 12-16 on page 418)
Once you click on next in this panel, a recommendation based on your inputs will be generated (see Figure 12-17 on page 419)
Once the recommendation is generated, you are presented with a list of proposed changes(see Figure 12-18 on page 420). These changes include volume creations, volume assignments and zoning changes. You can validate that all of these changes are expected and if you would like to change any of your inputs, you may click on the back button in the wizard.
In this panel, you have the option to either run the plan now, or schedule it to run at a future time(for example, during your next scheduled change window). In this example, we will choose to run it now. Once we click on finish, the plan will be executed, and the volumes will be created and assigned. The wizard will prompt you for a plan name(see Figure 12-19 on page 421)
Once you have saved the job, you can view the status through the job management panel(see Figure 12-20 on page 422)
Asynchronous Global Mirror failover/failback with practice
Asynchronous Global Mirror single direction
Three-site Metro Global Mirror with practice
Three-site Metro Global Mirror
Replication can also be set up on existing storage. This option allows you to extend existing replication sessions by adding more storage resources and protecting them. The new volumes are added to existing replication sessions. SAN Planner ensures that source volumes are added to the source side of a copy relationship and target volumes are added to the target side. We will go through an example DR Planner scenario utilizing Synchronous Metro Mirror failover/failback. First, we add the storage devices to a candidate Storage Resource Group. Navigate under IBM Tivoli Storage Productivity Center->Storage Resource Group Management and click the Create button (see Figure 12-21).
The Create Storage Resource Group window appears(see Figure 12-24 on page 426).
Creator: Displays the user name of the creator.
Name: Displays the name of the storage resource group, or unnamed if it is not yet named.
Description: Optional: Displays the user-defined description for the storage resource group.
Selected Elements: Lists the elements selected to be members of this storage resource group.
Add: Adds one or more selected elements to the list. The Storage resource group element selection panel is displayed.
Remove: Removes one or more selected elements from the list.
Default Provisioning Profile: Lists the available provisioning profiles which can be associated with storage resource groups. The list also includes None. If this storage resource group is used as input to the SAN Planner, the settings defined in this profile will be used to pre-populate the planner inputs.
Create a New Profile: Launches the Provisioning Profile creation wizard. When you complete the wizard, the Provisioning Profile list is updated.
User defined property 1 (UDP1): Specifies any user-defined properties that will be used by the Topology Viewer to provide custom groupings.
User defined property 2 (UDP2): Specifies any user-defined properties that will be used by the Topology Viewer to provide custom groupings.
User defined property 3 (UDP3): Specifies any user-defined properties that will be used by the Topology Viewer to provide custom groupings.
In this example, we will add two DS8000s to the candidate group (see Figure 12-24 on page 426).
We will also create a new provisioning profile to be used with this SRG. Click on Create a New Profile and the Create Provisioning Profile Window will appear(see Figure 12-25 on page 427)
In this example, we will create a new profile without using an existing profile, so leave the default options and click next. The next panel asks for the volume settings. We will provision 10 GB of storage, divided into 2 volumes. All other options will be left default(see Figure 12-26 on page 428)
We will not do any multipath modification, so uncheck the Setup Multipath options check box and click Next (see Figure 12-27 on page 429).
For this example, we will also not make any zoning modifications, so uncheck the zoning option (see Figure 12-28 on page 430).
Click the Finish button to create the provisioning profile. This takes you back to the Create Storage Resource Group panel. Make sure to select the provisioning profile that you just created under Default Provisioning Profile (see Figure 12-29 on page 431).
Save the Storage Resource Group by either clicking the disk icon or selecting File -> Save (see Figure 12-30 on page 431).
Once you have created the Storage Resource Group, you can launch the SAN Planner by navigating to IBM Tivoli Storage Productivity Center -> Analytics -> SAN Planner, right-clicking, and selecting Create Plan (see Figure 12-31 on page 432).
Select Provision storage with replication on the Select Planning Task window and click Next (see Figure 12-32 on page 433).
On the planning task window, we will choose to create new volumes, without multipath or zoning considerations (see Figure 12-33 on page 434).
For the plan content, we will choose a candidate SRG instead of manually selecting the entities from which storage is to be provisioned (see Figure 12-34 on page 435).
We will choose to provision 10 GB of capacity between two volumes and select space-only mode (see Figure 12-35 on page 436).
In this example, we will not select any advanced plan options (see Figure 12-36 on page 437).
The next panel shows any existing replication sessions and, if applicable, allows you to select an existing replication session to use. For this example, we will create a new replication session (see Figure 12-37 on page 438).
Once you click Next, you are taken to the session properties panel, where you can enter the replication session name and choose the session type. These are the session properties that you will see in Tivoli Storage Productivity Center for Replication (see Figure 12-38 on page 439).
Figure 12-38
The next panel requests that you choose the secondary location. In this case, we have chosen a secondary candidate storage resource group. This is where the target replication storage will be allocated from (see Figure 12-39 on page 440).
The Review User Selections panel lets you validate the wizard inputs before the recommendation is generated (see Figure 12-40 on page 441).
Validate that these inputs are as expected, and then click Next to generate the recommendation (see Figure 12-41 on page 442).
The final panel within the SAN Planner wizard shows all of the recommended changes (see Figure 12-42 on page 443).
Within this panel, you can also view the proposed changes in a graphical view by clicking Show Plan Topology View. This displays the SAN Planner output, but not the current environment (see Figure 12-43 on page 444).
Once you have validated the recommendation, you can either run the job immediately or schedule it to run in the future. You then click the Finish button and the job is saved (and started, if you selected to run it immediately). You can check on the status of the SAN Planner job within the Job Management panel (see Figure 12-44 on page 445).
For our job, we can see that it completed successfully and that the replication relationship was created in Tivoli Storage Productivity Center for Replication (see Example 12-1).
Example 12-1 SAN Planner Execution Job
9/15/10 9:24:52 AM STS0306I: Job queued for processing. Waiting for idle thread.
9/15/10 9:24:52 AM HWNLM0001I: HWNLM0001I An integrated SAN Planner job started with schedule administrator. ITSO_DR_Planner
9/15/10 9:24:52 AM HWNLM0011I: HWNLM0011I Started to create storage volumes.
HWN020001I Operation createStorageVolumes processed successfully.
HWNEP0138I External process was successfully executed for device 2107.75VG931.
HWNEP0138I External process was successfully executed for device 2107.75VG931.
HWNEP0138I External process was successfully executed for device 2107.1302541.
HWNEP0138I External process was successfully executed for device 2107.1302541.
9/15/10 9:25:31 AM HWNLM0013I: HWNLM0013I Completed creating storage volumes.
9/15/10 9:25:33 AM HWNLM810I: HWNLM810I Storage Subsystem Configuration refreshed successfully in Replication Manager Subsystem=DS8000-941-75VG931.
9/15/10 9:25:42 AM HWNLM810I: HWNLM810I Storage Subsystem Configuration refreshed successfully in Replication Manager Subsystem=DS8000-941-1302541.
9/15/10 9:25:42 AM HWNLM803I: HWNLM803I Replication Session was created successfully ITSO_DR_Planner.
9/15/10 9:25:43 AM HWNLM807I: HWNLM807I CopySets Added to Session successfully ITSO_DR_Planner.
9/15/10 9:25:43 AM HWNLM813I: HWNLM813I Replication Session was started successfully ITSO_DR_Planner.
9/15/10 9:25:43 AM HWNLM0003I: HWNLM0003I The integrated SAN Planner job completed.
We can also go to the Tivoli Storage Productivity Center for Replication console and see that the session was created and successfully started (see Figure 12-45).
Click Next on the introduction page, which takes you to the Select Planning Task window. Here, select Provisioning Storage with Replication (see Figure 12-47 on page 448).
In the next panel, we will choose to provision virtual disks and uncheck the options for zone and multipath planning (see Figure 12-48 on page 449).
We will choose a single SAN Volume Controller to allocate storage from on the Specify Plan Content panel (see Figure 12-49 on page 450).
For this example, we will create a single VDisk with 5 GB of storage capacity and a default space-only workload profile (see Figure 12-50 on page 451).
We will not choose any advanced capacity plan options, so leave these selections at the defaults (see Figure 12-51 on page 452).
We will choose to create a new replication session for our VDisk Mirroring relationship (see Figure 12-52 on page 453).
On the session properties panel, select a session name and choose the Virtual Disk Mirroring option. We will leave all other fields at their default options (see Figure 12-53 on page 454).
We will leave the secondary location at the default options (see Figure 12-54 on page 455).
The SAN Planner provides a panel to review your user selections. Ensure that all of these selections are accurate and click Next (see Figure 12-55 on page 456).
The final panel presents you with a list of proposed changes (see Figure 12-56 on page 457). These include the primary Virtual Disk as well as the secondary Virtual Disk Mirror that will be created.
We will choose to run the SAN Planner job now, and can see the latest status in the Job Management panel (see Figure 12-57 on page 458).
Once the job has completed successfully, we can see the virtual disk mirror details in the SAN Volume Controller GUI (see Figure 12-58).
Figure 12-58 VDisk Copy Details in the SAN Volume Controller GUI
13
Chapter 13.
13.1 Background
The TPC GUI had some limitations and issues in the way Schedules, Runs, and Jobs were located and displayed, for example:
Given a particular device, there was not an easy way to identify all the schedules in which it participates.
There was not an easy way to determine what was going on in the TPC server at any given moment.
There was not an easy way to determine which jobs within TPC were having problems, and thus which portions of the environment might not be monitored to their fullest potential.
Any control activities invoked via APIs (TPM Workflow integration) or CLIs were not visible in the TPC GUI.
Any mini-probes or activities initiated by some event in the storage network were not visible in the TPC GUI.
The new Job Management panel tries to address these issues by consolidating the management of job schedules and jobs into a central panel. Filters help to reduce the information displayed so that you can focus on a single device or a few devices of interest; for example, you can use Storage Resource Groups to view only schedules with the devices associated with a particular group. Recommendations are generated when TPC is not fully leveraged to monitor the storage environment. Typically these will be to add a performance monitor job for a device.
13.2.1 Schedule
A schedule is what we would commonly refer to as a job definition. The Default Probe created during the installation of TPC is an example of a Schedule.
13.2.2 Run
A Run (or job run) is a single invocation of an entire schedule. One schedule may (and almost always will) have multiple runs. Each Run has a number (the Run Number) associated with it that starts with 1 and increments each time the Schedule is invoked again. In the TPC GUI prior to Version 4.2, you will typically see the Run Number followed by the date and time of that run (for example, see Figure 13-3 on page 462).
13.2.3 Job
A job is a unit of work within a schedule run, and the degree of complexity for this unit of work varies by implementation. For example, with a subsystem probe schedule, there is one job created per subsystem in the schedule. One run of a schedule may have multiple jobs.
For some types of schedules, each run also includes two wrapping jobs, which usually log the start and end of the other job or jobs that are executed during the run.
The context menu entries have been adjusted, as shown in Figure 13-2 on page 461.
If you now click Job History (previously simply called History), the Job Management panel is launched.
In TPC V4.2 this Navigation Tree item has been retained, just like the other schedule entries, but the only thing that you can do is open the context menu and go to the Job Management panel. Depending on the manager, one of the following Schedules will be pre-selected:
Default Disk Jobs
Default Fabric Jobs
Navigation Tree: You can use the Navigation Tree menu as displayed on the right-hand side of Figure 13-1 on page 461.
Context Menu: The context menu entry Job History launches the Job Management panel, with the schedule that the context menu was opened for being pre-selected. See Figure 13-2 on page 461.
Note: Although you can open the Job Management panel from multiple places within the TPC GUI, only one panel can be open, so opening the panel a second time resets the already opened panel, and you lose your current selections. The Log File Viewer panels can each display multiple log files in tabs, and you can have multiple Log File Viewer panels open at the same time. For a screen shot of the Log File Viewer panel, see Figure 13-11 on page 472.
In Figure 13-5 on page 464 we have highlighted the different parts of the panel that we will describe next.
In Figure 13-6 on page 465 we have added red borders to highlight the parts of the Job Management panel that you can use to filter the amount of information shown. Both the Schedules and the Runs and Jobs sections show the number of displayed lines in each of the tables in the lower right corners of each section (shown only for the Runs and Jobs section).
Usually you start at the top of the panel, in the Entities section, by selecting or filtering what you want to be displayed in the middle section. The middle section does not offer as many filters, but most of the time you have already reduced the amount of information displayed here by selecting only a few devices in the devices section. Selecting a schedule shows the Runs and Jobs in the third and bottom section of the panel, from which you can further use filters and finally open the log files. While this sounds complex and like you need a lot of manual clicks, you will soon realize that this is a nicely consolidated panel, because you can see all the jobs for a single device or group of devices, including discoveries, probes, performance monitor jobs, and more.
Once you have selected the entity type (here, Storage Resource Groups), only the defined SRGs are displayed; see Figure 13-8 on page 469. We entered a name filter to reduce the number of SRGs displayed. Notice that with each letter entered, the list on the right is refined.
In the Entity list on the right, you can change the selection from Show All and select one or more of the listed entities. In our example this had no effect, because the list only included one entity, so Figure 13-9 on page 470 does not look different.
We focused our attention on the failed schedule and selected that entry. In the bottom section of the screen nothing changed, because the last run was more than a week ago, so we changed that filter to show older entries, as shown in Figure 13-10 on page 471.
Figure 13-10 Select log files and open the Log File Viewer
First we opened the failed Run from the list of Runs, then we selected the three Jobs that belonged to that Run and pressed the button at the top of the Run list. Figure 13-11 on page 472 shows the Log File Viewer with three tabs, one for each of the selected Job log files from Figure 13-10.
When you press the button marked in Figure 13-12, TPC opens a dialog with the details about the recommendation (see Figure 13-13).
In our example there is only one device selected, so only one recommendation is displayed. Figure 13-14 was taken at a different time in the same environment, so there are many more recommendations for different device types.
If you want to implement the recommendation(s), you need to highlight them and press the Take Action button. In this case TPC opens the panel for creating a new performance monitor, but does not actually add the device to the monitor yet, as shown in Figure 13-15.
Figure 13-15 Take Action opens the Performance Monitor Job panel
14
Chapter 14.
Fabric enhancements
IBM Tivoli Storage Productivity Center V4.2 fabric management capabilities improve on previous versions by adding support for new HBAs, CNAs, and switch models, and by enabling integration with Brocade's DCFM/IA. Also, TPC V4.2 now offers basic FCoE (Fibre Channel over Ethernet) support. In this chapter we briefly describe these new features.
4Gbps HBAs
Brocade 415 / 425
Emulex LP1105-BC (Blade Servers)
HP AD300A (HP-UX Itanium)
QLogic QLA2440, QLE2360 / 2362, QLE2440
8Gbps HBAs
Brocade 815 / 825
QLogic QLE2564 (Standard Servers), QMI2742 (Blade Servers)
Alerts
Note: The Tivoli Storage Productivity Center qualified DCFM version is 10.4.1.
3. Select Add and configure new fabrics (see Figure 14-3) and click Next to continue.
4. Select Configure a CIMOM Agent for monitoring the fabric and fill in your DCFM server information (see Figure 14-4). Click Add after entering the required information.
Note: For a DCFM CIMOM, the default values are:
Protocol: HTTPS
Username: Administrator
Password: password
Interoperability Namespace: /interop
(Notice the capital A in the username.) 5. The DCFM server will be added to the list at the bottom of the panel (see Figure 14-5). You can add more DCFM servers, or click Next to continue.
7. When the Fabric discovery is done, you will be able to click Next to continue (see Figure 14-7).
8. You will be presented with a list of every newly discovered fabric (see Figure 14-8). These are the same fabrics under DCFM management. Select the ones you intend to manage with Tivoli Storage Productivity Center, or all fabrics, and click Next to continue.
9. In the next panel you will be able to add the newly discovered fabrics to a previously defined Monitoring Group (see Figure 14-9). Select one and click Next to continue.
10. Next you will get a summary of your choices for review (see Figure 14-10). Click Next to continue.
11. Tivoli Storage Productivity Center will process the changes and present a results screen (see Figure 14-11). Congratulations! You've added your DCFM-managed fabrics to your Tivoli Storage Productivity Center environment.
15
Chapter 15.
Default application heap size (applheapsz): 10240
Database heap size (dbheap): 1000
Log buffer size (logbufsz): 8
Log file size (logfilsiz): 2500
Number of primary log files (logprimary): 8
Number of secondary log files (logsecond): 100
Maximum DB files open per application (maxfilop): 64
Database monitor heap size (mon_heap_sz): 132
Statement heap size (stmtheap): 10240
IBMDEFAULTBP buffer pool size: 250
TPCBFPDATA buffer pool size: 250
TPCBFPKEYS buffer pool size: 250
TPCBFPTEMP buffer pool size: 250
On AIX or Linux: Log into your DB2 server and switch to the DB instance owner ID (usually db2inst1) or source the instance profile:
. /home/db2inst1/sqllib/db2profile
connect to tpcdb
connect reset
On AIX or Linux:
attach to db2inst1
2. Update DB2 database manager settings: update dbm cfg using MON_HEAP_SZ 1024
The new settings will go into effect the next time that the database closes and opens. Stop TPC and restart it to use the new settings. Another method to use the new settings is to reboot the TPC server.
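If you want to confirm the change before restarting TPC, you can display the database manager configuration. This is a standard DB2 query rather than a TPC-specific step; the filter command depends on your platform:
db2 get dbm cfg | grep -i mon_heap_sz     (AIX or Linux)
db2 get dbm cfg | findstr /i MON_HEAP     (Windows, in a DB2 command window)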
On AIX: Restart the TPC server by issuing the commands that are shown in Example 15-2 in a command window.
Example 15-2 AIX commands to restart TPC
stopsrc -s TSRMsrv1
/<usr or opt>/IBM/TPC/device/bin/aix/stopTPCF.sh
startsrc -s TSRMsrv1
/<usr or opt>/IBM/TPC/device/bin/aix/startTPCF.sh
On Linux: Restart the TPC server by issuing the commands that are shown in Example 15-3 in a command window.
Example 15-3 Linux commands to restart TPC
/<usr or opt>/IBM/TPC/data/server/tpcdsrv1 stop
/<usr or opt>/IBM/TPC/device/bin/linux/stopTPCF.sh
/<usr or opt>/IBM/TPC/data/server/tpcdsrv1 start
/<usr or opt>/IBM/TPC/device/bin/linux/startTPCF.sh
Figure 15-3 DB2 command line processor
Issue the following commands in the window:
On Windows:
update db cfg for TPCDB using newlogpath D:\DB2_active_logs\TPCDB
On AIX or Linux:
update db cfg for TPCDB using newlogpath /var/DB2/active_logs/TPCDB
The new log path goes into effect the next time that the database closes and opens. Stop TPC and restart it to use the new log path. Another method to use the new log path is to reboot the TPC server.
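To confirm the log location that will be used after the restart, you can query the database configuration and look for the Path to log files and Changed path to log files (NEWLOGPATH) entries, for example:
db2 get db cfg for TPCDB | grep -i "log files"          (AIX or Linux)
db2 get db cfg for TPCDB | findstr /i /C:"log files"    (Windows, in a DB2 command window)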
On AIX:
Stop the TPC server by issuing the commands that are shown in Example 15-5 in a command window.
Example 15-5 AIX commands to stop TPC
stopsrc -s TSRMsrv1
/<usr or opt>/IBM/TPC/device/bin/aix/stopTPCF.sh
On Linux: Stop the TPC server by issuing the commands that are shown in Example 15-6 in a command window.
Example 15-6 Linux commands to stop TPC
/<usr or opt>/IBM/TPC/data/server/tpcdsrv1 stop
/<usr or opt>/IBM/TPC/device/bin/linux/stopTPCF.sh
On AIX: Start the TPC server by issuing the commands that are shown in Example 15-8 in a command window.
Example 15-8 AIX commands to start TPC
startsrc -s TSRMsrv1
/<usr or opt>/IBM/TPC/device/bin/aix/startTPCF.sh
On Linux: Start the TPC server by issuing the commands that are shown in Example 15-9 in a command window.
Example 15-9 Linux commands to start TPC
/<usr or opt>/IBM/TPC/data/server/tpcdsrv1 start
/<usr or opt>/IBM/TPC/device/bin/linux/startTPCF.sh
The Performance Monitors values in Figure 15-4 are:
Per performance monitoring task: The value that you set here defines the number of days that TPC keeps individual data samples for all of the devices that send performance data. The example shows 14 days. When per sample data reaches this age, TPC permanently deletes it from the database. Increasing this value allows you to look back at device performance at the most granular level at the expense of consuming more storage space in the TPC repository database. Data held at this level is good for plotting performance over a small time period, but not for plotting data over many days or weeks, because of the number of data points. Consider keeping more data in the hourly and daily sections for longer time period reports. Checking this field determines whether history retention is on or off. If you remove the check, TPC does not keep any history for per sample data.
Hourly: This value defines the number of days that TPC holds performance data that has been grouped into hourly averages. Hourly average data potentially consumes less space in the database. For example, if you collect performance data from an SVC at 15-minute intervals, retaining the hourly averages requires four times less space in the database. The check box determines whether history retention is on or off. If you remove the check, TPC does not keep any history for hourly data.
Daily: This value defines the number of days that TPC holds performance data that has been grouped into daily averages. After the defined number of days, TPC permanently deletes records of the daily history from the repository.
Daily averaged data requires 24 times less space in the database compared to hourly data. This is at the expense of granularity; however, plotting performance over a longer period (perhaps weeks or months) becomes more meaningful. The check box determines whether history retention is on or off. If you remove the check, TPC does not keep any history for daily data.
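As an illustration of the trade-off, consider a single volume monitored at a 15-minute interval: per sample retention keeps 4 x 24 = 96 records per day, hourly averaging keeps 24 records per day (a 4:1 reduction), and daily averaging keeps 1 record per day (a further 24:1 reduction).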
XIV_production: 500 volumes
EMC_remote: 320 volumes
(f) Total required per day: 43,200,000 bytes
(g) Number of days to retain per sample: 14 days
Total: (f) x (g) / 1,024,000 + 50% = 907 MB
Note: Notice that the final figure includes an additional 50%. This amount provides for DB2 table indexes and other database overhead. You can see that the amount of space that is required increases dramatically as the sample rate increases. Remember this when you plan the appropriate sample rate for your environment.
Next, use Table 15-3 to calculate the amount of storage that is needed to hold the performance data for the hourly and daily history averages. When complete, add together the totals from Table 15-2 on page 498 and Table 15-3 to give you the total repository requirement for these types of storage subsystems as seen in Table 15-4. Calculation method example for XIV_production: 500 volumes x 200 bytes per sample x 24 = 2,400,000 bytes for hourly history average
Table 15-3 Hourly and daily repository database sizing for ESS, DSxxxx, and non-IBM storage
XIV_production: (b) 500 volumes sampled (LUNs), (c) 200-byte performance data record, (d) hourly requirement (b) x (c) x 24 = 2,400,000 bytes, daily requirement (b) x (c) = 100,000 bytes
EMC_remote: (b) 320 volumes sampled (LUNs), (c) 200-byte performance data record, (d) hourly requirement (b) x (c) x 24 = 1,536,000 bytes, daily requirement (b) x (c) = 64,000 bytes
Total daily requirement: 164,000 bytes
Table 15-4 shows the total TPC repository space required for XIV, DSxxxx, and non-IBM storage subsystems. The total TPC repository space is the sum of the totals of both Table 15-2 on page 498 and Table 15-3.
Table 15-4 Total TPC repository space required for XIV, DSxxxx, and non-IBM subsystems Total space required MB 907 190 1,097 MB
one table for each chosen sample rate, and then add the two tables together to give you an overall total.
Table 15-5 Repository sizing for SVC and TPC Version 3.1.3 and above
TEC_SVC: 900 VDisks, 500 MDisks, 1 I/O group, 4 MDisk groups, 1 cluster pair
SVC_DR: 3,000 VDisks, 1,500 MDisks, 2 I/O groups, 6 MDisk groups, 2 cluster pairs
2,000 78 156,000
10 128 1,280
(b) Hourly amount @ 15-minute sample rate: (60/15) x (a)
(c) Daily amount: (b) x 24
(d) 30-day retention of samples: (b) x 24 x 30
(e) 30-day retention of hourly: 24 x (a) x 30
(f) 90-day retention of daily: (a) x 90
(g) Overall total required (MB): ((d) + (e) + (f)) / 1,024,000 + 50%
Important: Notice that the overall figure in (g) adds 50% to the amounts calculated through the table. The majority of this overhead takes the DB2 table indexes for this data plus database page overhead into account.
Note: Table 15-6 shows all of the switches sampled at 5-minute intervals. If you plan to monitor some switches at one rate and other switches at another rate, create a separate table for each rate. The final figure includes a 50% uplift for indexing and DB2 storage overhead.
Table 15-6 SAN switch performance repository data sizing
Columns: Switch name, (a) Number of ports, (b) Size (bytes), (c) Sample rate (minutes), (d) Hourly amount (bytes) = (60/(c)) x (a) x (b)
32 ports, 400 bytes, 5-minute sample rate: 153,600
32 ports, 400 bytes, 5-minute sample rate: 153,600
64 ports, 400 bytes, 5-minute sample rate: 307,200
64 ports, 400 bytes, 5-minute sample rate: 307,200
Totals: (e) 192 ports
(f) Total sample rate per hour
(g) 30 days retain sample rate: (f) x 24 x 30
(h) 30 days retain hourly rate: (e) x (b) x 24 x 30
(i) 90 days retain daily rate: (e) x (b) x 90
Overall total (MB): ((g) + (h) + (i)) / 1,024,000 + 50%
Total number of different file types (that is, *.txt, *.exe, *.doc, *.mp3, and so forth)
Number of machines with data agents deployed and collecting data
Total number of file names collected and stored for reporting
Key largest repository tables:
T_STAT_USER_HIST - User history file
T_STAT_FTYPE_HIST - File type history
T_STAT_FILE - Stored file names
Figure 15-5, Table 15-8 on page 502, and Table 15-9 on page 504 help to estimate the worst case sizing for these key tables.
Table 15-7 Estimating the user history repository requirement
Number of filesystems covered: 300, 250, 500
Number of users covered: 800, 1500, 2000
Days to keep scan history: 30, 30, 30
Number of weeks of scan history: 52, 52, 52
Number of months of scan history: 24, 24, 24
Totals: (a) 1050 filesystems, (b) 4300 users, (c) 90 days, (d) 156 weeks, (e) 72 months
Worst case estimate: 64,609,650,000 bytes
Realistic estimate (10% of the worst case): 6,460,965,000 bytes, or approximately 6,310 MB
Note: Unlike the performance tables, we must estimate much more here. For example, there might be 500 filesystems covered by the Windows_stat and 2,000 users with data across the 500 filesystems, but not all of the 500 filesystems have files owned by the 2,000 users. There is likely only a subset of filesystems with data for all 2,000 users. This is why the realistic figure is reduced to only 10% of the worst case figure. You might want to change the 10% factor to your specific requirements. Use Table 15-8 to calculate the repository space required to store file type history information. Estimating the file type history buildup is more accurate than estimating the user table history, because the data entering this table is more constant for a given profile.
Table 15-8 Estimating the file type repository requirement
Columns: Statistic profile name, Number of file types, Number of TPC agents covered, Days to keep scan history, Number of weeks of scan history, Number of months of scan history
Win_types: 50 file types, 200 agents, 30 days, 52 weeks, 24 months
50 30
Totals: (a) 130, (b) 400, (c) 150, (d) 156, (e) 72
Total bytes: a x b x (c + d + e) x 55 bytes = 1,081,080,000
Total MB: total / 1,024,000 = 1,056 MB
The third TPC for Data repository table of significant size is the T_STAT_FILE table. This table holds a record of the file names, which have been collected by profiles for largest, most obsolete, orphan files, and so forth. Note: If you plan to use TPC for Data for duplicate file spotting or to archive specific files for you, it is likely that you will increase the number of file names that each agent will collect. When completing Table 15-9 on page 504, the Total file names per agent will be the total of all types as seen in Figure 15-5 on page 503. In this example, it will be 1,800 file names per agent.
Table 15-9 Estimating the file name repository requirement
Columns: Statistic profile name, (a) Total file names collected per agent, (b) Number of agents to which this profile applies, Total files per statistic (a x b)
2,000 file names per agent x 500 agents = 1,000,000 files
200 file names per agent x 150 agents = 30,000 files
200 file names per agent x 50 agents = 10,000 files
The final step for sizing the TPC for Data repository is to total the three tables and add an overhead for the default statistics. The average overhead for the default statistic types is provided at 1.5 MB per TPC agent. Therefore: Default TPC for Data overhead = Total agents x 1.5 MB Example: 1,000 x 1.5 = 1,500 MB Enter this figure in Table 15-10.
Table 15-10 TPC for Data repository total
User history: 6,310 MB
File type history: 1,056 MB
File names: 410 MB
Default statistics overhead: 1,500 MB
Total requirement: 9,276 MB
Totals Record size (bytes) Byte totals (a) Sample rate (bytes) (b) 15-minute sample rate (60/15) x (a) (c) Daily amount (b) x 24 (d) 14-day retention of samples (b) x 24 x 14 (e) 30-day retention of hourly 24 x (a) x 30 (f) 90 days of daily retention (a) x 90 (g) Overall total required (MB) (d) + (e) + (f)/1,024,000 + 50% 198 78 500 128 492
Worksheet - Sizing performance collection for XIV, DSxxxx, and non-IBM subsystems
Refer to Sizing the repository for XIV, DSxxxx, and non-IBM subsystems on page 498 for a working example of this table. Table 15-12 is the first of two worksheets necessary to calculate the repository space that is required for these types of subsystems.
Table 15-12 Per sample repository database sizing for XIV, DSxxxx, and non-IBM subsystems
Columns: (a) Subsystem name, (b) Number of volumes (LUNs) sampled, (c) Performance collection interval (minutes), (d) Performance data record size (200 bytes per row), (e) Daily amount of data collected = (60/(c) x 24) x (b) x (d)
(f) Total required per day
(g) Number of days to retain per sample = 14 days
Total: (f) x (g) / 1,024,000 + 50%
Table 15-13 is the second table needed to calculate repository space required for these types of subsystems.
Table 15-13 Hourly and daily repository database sizing for XIV, DSxxxx, and non-IBM storage
Columns: (a) Subsystem name, (b) Number of volumes sampled (LUNs), (c) Performance data record size (200 bytes per row), (d) Hourly requirement = (b) x (c) x 24
Daily totals: (f) hourly days x 30, (g) daily days = 90
Total MB: ((f) + (g)) / 1,024,000 + 50%
Totals: (a) (b) (c) (d) (e)
Table 15-16 Estimating the file type repository requirement
Columns: Statistic profile name, Number of file types, Number of TPC agents covered, Days to keep scan history, Number of weeks of scan history, Number of months of scan history
Totals: (a) (b) (c) (d) (e)
Total bytes: a x b x (c + d + e) x 55 bytes
Total MB: total / 1,024,000
Table 15-17 Estimating the file name repository requirement
Columns: Statistic profile name, (a) Total file names collected per agent, (b) Number of agents to which this profile applies, Total files per statistic (a x b)
The default overhead for the default statistic types is 1.5 MB per TPC for Data agent. Therefore:
Default TPC for Data overhead = Total agents x 1.5 MB
Example: 1,000 x 1.5 = 1,500 MB
Enter this figure in Table 15-18 on page 510.
Table 15-18 TPC for Data repository total
Source (Amount in MB):
User history
File type history
File names
Default statistics overhead
Total requirement (MB)
16
Chapter 16.
Backup Types
There are two primary methods of backing up DB2 databases: Offline backup (sometimes known as cold backup) is when all database access is terminated, and the database is closed. The backup then runs standalone before the
database is restarted and access-enabled. This is the simplest type of backup to set up, configure, and maintain. Online backup (sometimes known as hot backup) is when all user and application database access continues to run while the backup process takes place. This type of backup provides for continuous availability of the database and the applications that require it. This is a more complex type of backup to set up, configure, and maintain.
Database logging
DB2 UDB uses log files to keep a sequential record of all database changes. The log files are specific to DB2 UDB activity. The logs record the database activity in transactions. If there is a crash, you use logs to play back or redo committed transactions during recovery. There are two types of logging:
Circular logging (default): This is the simplest method and the default logging type that TPC uses.
Archive logging: This type of logging enables online backup as well as roll-forward recovery of a database to a point-in-time. It is, however, more complex to manage.
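You can verify which logging method the TPC database currently uses by checking the log archive method in its configuration; with circular logging (the TPC default), LOGARCHMETH1 is reported as OFF. For example, from a DB2 command window:
db2 get db cfg for TPCDB | findstr LOGARCHMETH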
Advantages:
Simple: You can perform offline backup with DB2 logging set to the default circular method.
DB2 skills: Offline backup requires a minimal amount of DB2 skills to perform, because it is the simplest method of backup.
Logging: Circular logs are the simplest to manage and maintain.
Disadvantages:
Stopped TPC server services: The offline method entails stopping all of the TPC server services on a regular basis (typically daily) to perform the backup. This regular outage might not be acceptable to all organizations that want to use TPC.
Missed performance data collection: If you have set up TPC to continuously collect disk subsystem and SAN fabric performance statistics, you lose data points for the duration that TPC is down each day for backup. You can minimize the impact of this loss by scheduling the backup at a time when the monitored equipment statistics are of little importance from a reporting perspective. This loss of data points might not be acceptable to all organizations wanting to use TPC.
Missed events: TPC monitors the infrastructure and alerts you about events, such as failures within a SAN fabric. You run the risk that you can miss critical events if the events occur when you stop the TPC server services for the backup process.
You need to change the DB2 parameter num_db_backups and set the value to the number of backup versions that you require. You also need to set the rec_his_retentn parameter to a value of -1. By setting this value to -1, rec_his_retentn follows the value set in num_db_backups.
Important: Changing this value requires a stop and a start of the TPC services to take effect. This restart does not necessarily need to happen directly after you change the parameter.
2. Start a DB2 command line processor window as shown in Figure 16-1.
4. Example 16-1 on page 516 shows how to set the num_db_backups value to 4 versions and rec_his_retentn to -1 for the TPC database.
5. Issue the following commands at the db2 => prompt in the command line processor window.
Example 16-1 DB2 commands to configure how many backup versions to keep
connect to TPCDB
update db cfg using num_db_backups 4
update db cfg using rec_his_retentn -1
disconnect TPCDB
exit
Important: When you set new values for num_db_backups and rec_his_retentn, the new values are not effective until you stop all of the database connections.
6. Restart TPC to make the changes effective. You can either reboot the server, or alternatively stop and start the services. You can either stop and start the services through the Windows Services interface or open a command prompt window and issue the commands in Example 16-2. (Use the command net start to obtain a list of active services in case you are using different versions.) This process applies to Windows servers only.
Example 16-2 Windows commands to stop and start TPC services
net stop "IBM Tivoli Storage Productivity Center - Data Server"
net stop "IBM WebSphere Application Server V6.1 - DeviceServer"
net start "IBM Tivoli Storage Productivity Center - Data Server"
net start "IBM WebSphere Application Server V6.1 - DeviceServer"
Important: DB2 does not create this directory for you. Create this directory before you attempt a backup.
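For example, if you plan to use the backup destination shown in the scripts later in this chapter, create it ahead of time from a command prompt (the drive and path here are simply the values used in our examples):
mkdir D:\TPC_database_backups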
2. Create a batch script to control the backup process. There are two files. The script that runs the backup, shown in Example 16-3, is: C:\scripts\TPC_backup_offline_file.bat
Example 16-3 File C:\scripts\TPC_backup_offline_file.bat
@echo on
@REM This is a sample backup script
@REM To backup TPC offline
@REM To disk filesystems
@REM Stopping Tivoli Storage Productivity Center services
@REM ----------------------------------------------------
net stop "IBM Tivoli Storage Productivity Center - Data Server"
net stop "IBM WebSphere Application Server V6.1 - DeviceServer"
@REM Starting backup of the DB2 database
@REM -----------------------------------
C:\PROGRA~1\IBM\SQLLIB\BIN\db2cmd.exe /c /w /i db2 -tv force application all
C:\PROGRA~1\IBM\SQLLIB\BIN\db2cmd.exe /c /w /i db2 -tvf C:\scripts\database_list_offline_file.txt
@REM Restarting Tivoli Storage Productivity Center services
@REM ------------------------------------------------------
net start "IBM Tivoli Storage Productivity Center - Data Server"
net start "IBM WebSphere Application Server V6.1 - DeviceServer"
@REM Offline backup process complete
@REM -------------------------------
3. The DB2 scripted list of databases to back up (shown in Example 16-4) is: C:\scripts\database_list_offline_file.txt
Example 16-4 File C:\scripts\database_list_offline_file.txt
backup database TPCDB to "D:\TPC_database_backups" without prompting;
See 16.10.1, Performing an offline backup to a filesystem on page 529 to run an offline backup.
You have already installed a Tivoli Storage Manager Backup/Archive client on the TPC server, and you have configured it to do standard file backups.
You have installed the Tivoli Storage Manager API Client on the TPC server.
You used default installation paths for Tivoli Storage Manager.
Note: You need to stop TPC and DB2 as part of this configuration process. We recommend that you reboot the TPC server to complete the configuration process, because this process also adds operating system environment variables. Plan this exercise at a time when you can reboot the TPC server.
Follow these steps to configure DB2 to Tivoli Storage Manager integration.
The steps to add new variables to Windows are: 1. Figure 16-3 on page 518 shows the Windows System Properties panel. Click Environment Variables to proceed to the next step.
3. Add all three new system variables that are listed in Table 16-2 on page 518. Repeat the add process as seen in Figure 16-5 on page 519 for each variable.
This section describes the steps necessary to configure the Tivoli Storage Manager option file dsm.opt and then set the Tivoli Storage Manager password so that the DB2 backup process can communicate with the Tivoli Storage Manager API.
Important: At this point your TSM client should already be registered with a TSM server. If the TSM server is accepting open registrations, simply starting the TSM client GUI or command line prompts you for a password to register your client. If the TSM server is using closed registration, you will need the TSM administrator to register your client.
The steps are: 1. Edit the dsm.opt file, which is located in C:\Program Files\Tivoli\TSM\baclient by default. 2. Make sure that the client option PASSWORDACCESS is set to GENERATE as shown in Figure 16-6, and save the file.
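The other entries in dsm.opt depend entirely on your Tivoli Storage Manager environment; the following is only a minimal sketch, with the node name and server address as placeholders for your own values, to show where PASSWORDACCESS GENERATE fits:
NODENAME          TPC_SERVER_NODE
COMMMETHOD        TCPIP
TCPSERVERADDRESS  your.tsm.server.address
PASSWORDACCESS    GENERATE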
3. Now, set the Tivoli Storage Manager password so that DB2 can authenticate with the Tivoli Storage Manager server when DB2 performs a backup or restore operation: a. Open a Windows Command prompt window. b. Run the dsmapipw command as shown in Figure 16-7. c. Enter the current and new Tivoli Storage Manager password. You can reuse the existing Tivoli Storage Manager password.
Important: You must run the dsmapipw command even if you do not intend to change the Tivoli Storage Manager password. Running this command registers it with the Tivoli Storage Manager API. Registering this password in the setup phase means that a DB2 operator can perform backup and restore operations without needing to know the Tivoli Storage Manager client password. If a Tivoli Storage Manager administrator changes or resets the Tivoli Storage Manager password, you need to run the dsmapipw command again.
Important: The dsmapipw command displays both the old and new passwords on the window in plain text. Ensure that you perform this task in a secure area to prevent password exposure.
The second file is the DB2 scripted list of databases to back up: C:\scripts\database_list_offline_tsm.txt
Example 16-6 File C:\scripts\database_list_offline_tsm.txt
backup database TPCDB use tsm without prompting;
Note: You need to stop TPC to perform these tasks. DB2 requires a full backup of each database before you can start the TPC database again after these reconfiguration steps. We include the instructions to perform a full backup of the database. Allow time in your outage planning for the backup to complete. Also, complete the steps in 16.5, Common backup setup steps on page 514 to set the number of backup versions that you want to retain in the history file.
Important: If you set up DB2 for online backup to Tivoli Storage Manager, you cannot easily change to an online backup to filesystem. You need to choose between these methods, because you are setting the destination for the archive logging process. If you decide in the future to change to the online filesystem method, you will need to reconfigure DB2 to send the archive logs to filesystem. This reconfiguration requires a TPC restart to complete the task. It is possible to perform an online backup to filesystem and have the archive logs going to Tivoli Storage Manager. However, we do not recommend that you do this because the difficulty of managing and tracking information makes this a poor practice. Set up and test DB2 to Tivoli Storage Manager integration before you attempt this section. Use 16.7, Offline backup to Tivoli Storage Manager setup steps on page 517. When you are satisfied that DB2 is communicating with Tivoli Storage Manager and you have performed at least one successful offline backup, return to this section.
16.8.1 DB2 parameter changes for archive logging to Tivoli Storage Manager
To set up archive logging to Tivoli Storage Manager, complete the following tasks: 1. You need to make a number of parameter choices for the configuration of archive logging as seen in Table 16-3 on page 523. These parameters determine where DB2 keeps its log files, the number of log files, and the size of the log files.
Table 16-3 DB2 parameters
Primary log path; example value: C:\DB2_active_logs. This is the location where DB2 keeps the current logs for the database. For best performance, place these logs on a separate volume from the volume that holds the data.
Failed archive path (failarchpath); example value: D:\DB2_failed_log. This is the location where DB2 puts log files if the archive process fails. This can happen if Tivoli Storage Manager is down or unreachable when DB2 tries to send a log file to Tivoli Storage Manager.
2. Stop TPC using the commands in Example 16-7. You can also perform this task through the Windows Services interface.
Example 16-7 Windows commands to stop TPC
net stop "IBM Tivoli Storage Productivity Center - Data Server"
net stop "IBM WebSphere Application Server V6.1 - DeviceServer"
3. Launch a DB2 command line processor as in Figure 16-8 on page 524 and issue the commands as in Example 16-8 on page 525.
4. Issue the commands from Example 16-8 on page 525 in the command line processor window. Substitute your chosen values for the parameters that form part of the UPDATE DB CFG command. See Table 16-3 on page 523. Note that the final command performs an offline backup of the database. Important: The database backup is required after this reconfiguration, and the DB2 database will not open again until the database backup is completed.
Example 16-8 DB2 command to configure archive logging to Tivoli Storage Manager
CONNECT TO TPCDB
QUIESCE DATABASE IMMEDIATE FORCE CONNECTIONS
UNQUIESCE DATABASE
CONNECT RESET
UPDATE DB CFG FOR TPCDB USING logarchmeth1 TSM failarchpath "D:\DB2_failed_logs" newlogpath C:\DB2_active_logs\TPCD
BACKUP DATABASE TPCDB USE TSM
QUIT
5. When the database backup is complete, you can restart TPC. Either use the Windows Services interface or issue the commands shown in Example 16-9 in a command prompt window.
Example 16-9 Start TPC
net start "IBM Tivoli Storage Productivity Center - Data Server"
net start "IBM WebSphere Application Server V6.1 - DeviceServer"
The second file (Example 16-11) is the DB2 scripted list of databases to back up: C:\scripts\database_list_online_tsm.txt
Example 16-11 File C:\scripts\database_list_online_tsm.txt
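For an online backup to Tivoli Storage Manager, a single statement of the following form is typically all this file needs (shown here as a sketch; adjust the database name if yours differs):
backup database TPCDB online use tsm without prompting;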
C:\DB2_archive_logs\TPCDB: The location where DB2 will archive log files for the TPCDB database.
D:\DB2_failed_log: The location where DB2 will put log files if the archive process fails, which can happen if the filesystem for the primary logs fills up. Choose a location that is NOT on the same filesystem as the archive logs.
2. Choose a filesystem path to store the DB2 database backups.
Table 16-5 Filesystem location for database backups
Database backup path: D:\TPC_database_backups
3. Stop TPC by using the commands in Example 16-12. You can also perform this task through the Windows Services interface.
Example 16-12 Windows commands to stop TPC
net stop "IBM Tivoli Storage Productivity Center - Data Server"
net stop "IBM WebSphere Application Server V6.1 - DeviceServer"
4. Launch a DB2 command line processor as shown in Figure 16-10 on page 527 and issue the commands in the command line processor.
6. Issue the commands from Example 16-13 on page 528 in the command line processor window. Substitute your chosen values for the parameters that form part of the UPDATE DB CFG command. See Table 16-4 on page 526. Note that the final command performs an offline backup of the database.
Important: The offline backup of the database is required after the reconfiguration; the DB2 database will not open until the backup is complete.
Example 16-13 DB2 command to configure archive logging to a filesystem
CONNECT TO TPCDB
QUIESCE DATABASE IMMEDIATE FORCE CONNECTIONS
UNQUIESCE DATABASE
CONNECT RESET
UPDATE DB CFG FOR TPCDB USING logarchmeth1 "DISK:C:\DB2_archive_logs" failarchpath "D:\DB2_failed_logs" newlogpath C:\DB2_active_logs\TPCD
BACKUP DATABASE TPCDB TO "D:\TPC_database_backups"
7. When the database backup completes, you can restart TPC. Either use the Windows Services interface or issue the commands shown in Example 16-14 in a command prompt window.
Example 16-14 Start TPC
net start "IBM Tivoli Storage Productivity Center - Data Server"
net start "IBM WebSphere Application Server V6.1 - DeviceServer"
The second file is the DB2 scripted list of databases to back up: C:\scripts\database_list_online_file.txt
Example 16-16 File C:\scripts\database_list_online_file.txt
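For an online backup to a filesystem, a single statement of the following form is typically all this file needs (shown here as a sketch; substitute your own backup path):
backup database TPCDB online to "D:\TPC_database_backups" without prompting;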
Note: You must complete the initial setup steps that we detail in 16.7, Offline backup to Tivoli Storage Manager setup steps on page 517 before you can start to perform offline backups. Running an offline DB2 database backup takes TPC out of service for the period of the backup. Make sure it is acceptable to take TPC out of service before you proceed.
TPC_Server_install_dir/data/config/
TPC_Server_install_dir/device/conf/
These directories contain the various configuration files for your installation. It is important to save these directories, because they might contain customized configurations and not the default configurations.
16.13.3 Managing backup versions that you store in Tivoli Storage Manager
This section describes how to maintain, view, and delete backup data and archive logs that you have sent to Tivoli Storage Manager. DB2 does not automatically prune backup versions and log files from Tivoli Storage Manager. You need to use the db2adutl tool to perform these housekeeping functions. Note: This section is not intended to be a comprehensive guide to the db2adutl tool. The intent here is to detail the commands that you likely need to maintain the data that is held in Tivoli Storage Manager on a regular basis.
based on its policies. You can use db2adutl to explicitly remove DB2 archive logs, but if Tivoli Storage Manager archive retention policy is set appropriately, this is not necessary. Important: Make sure that the Tivoli Storage Manager archive retention policy that you use to store the DB2 logs is set for a sufficient period of time to allow recovery of your oldest database backup. However, you also want to make sure that the policy for the retention period is not so long that it wastes storage space in Tivoli Storage Manager.
Figure 16-18 Sample output from a db2adutl query database TPCDB command
Figure 16-19 Example of a db2adutl delete full keep 3 database TPCDB command
Important: Be careful when you delete archive log files. If you delete logs that are still needed for some of your backup versions, you render those backups useless. Archive logs only exist in Tivoli Storage Manager if you have configured archive logging so that online backup is possible. Ask the Tivoli Storage Manager administrator to configure Tivoli Storage Manager to delete the archive logs on a regular basis by configuring the Tivoli Storage Manager archive copy group that DB2 uses. Set a retention period that suits your needs. If you use a general purpose archive copy group, Tivoli Storage Manager might keep all archive logs for several years, causing unnecessary usage of the storage in your Tivoli Storage Manager environment. To delete archive logs, first query the Tivoli Storage Manager server to establish which logs you want to delete. Figure 16-18 on page 536 shows example output. To query the Tivoli Storage Manager server for the TPCDB database, issue the command: db2adutl query database TPCDB Look at the oldest log number against the oldest backup version. After we deleted some backups as shown in Figure 16-19 on page 537, the oldest log is S0000010.log. Then, look at the list of log files from the same output to see if there are any earlier logs. If there are earlier logs and you do not want to wait for Tivoli Storage Manager to expire them, use the following command to delete them. See Figure 16-20 on page 538. db2adutl delete logs between S0000001 and S0000004 database TPCDB Tip: When specifying log numbers, you need to add the S at the start of the number but not the .LOG at the end.
If the verification fails, that backup is not usable and you will need to take a new one.
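DB2 provides standard utilities to check a backup image without restoring it; as a sketch (the image file name is only a placeholder):
db2ckbkp <path to backup image file>       (backup image on a filesystem)
db2adutl verify full database TPCDB        (backup stored in Tivoli Storage Manager)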
When you configure archive logging, you have the ability to restore a backup and then roll forward through the logs to any point-in-time to minimize data loss. This gives you an enhanced level of protection for the TPC repository data at the expense of more complexity in the process. You cannot simply restore a backup taken online as is, because an online backup is not logically consistent in its own right. Following an online restore, some roll forward is necessary to bring the restored database to a consistent and usable state. Finally, we do not intend for this section to be a comprehensive guide to the DB2 restore commands. We intend to give you the basic restore functions that you need to recover a database from both filesystem and Tivoli Storage Manager backups. See IBM DB2 Universal Database Data Recovery and High Availability Guide and Reference, SC27-2441, for detailed information about this subject.
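As a minimal sketch of that sequence for the TPCDB database, assuming an online backup stored in Tivoli Storage Manager and that you want to apply all available logs:
restore database TPCDB use tsm taken at <timestamp>
rollforward database TPCDB to end of logs and complete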
You will have a backup image timestamp. For example: 20100921215859 You need this timestamp number for the next step, Restore the TPCDB database (offline) on page 542.
You end up with a backup image timestamp. For example: 20100922170304 You need this timestamp number for the next step.
To restore from filesystem backups, issue the commands in Example 16-18 in the DB2 command line processor using the timestamps that you have selected.
Example 16-18 Restore command from filesystem backups
restore database TPCDB from "D:\TPC_database_backups" taken at 20100921215859
If you restore from Tivoli Storage Manager, use the commands that are shown in Example 16-19.
Example 16-19 Restore command from Tivoli Storage Manager backups
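The command takes the same form as the filesystem restore but points DB2 at Tivoli Storage Manager; here we assume the timestamp obtained from the db2adutl query (20100922170304 in our example):
restore database TPCDB use tsm taken at 20100922170304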
Figure 16-26 on page 544 shows an example of the restore process dialog for the TPCDB database restore process from a filesystem.
To restore the database from filesystem backups, issue the commands in Example 16-22 in the DB2 command line processor using the timestamp that you have selected.
Example 16-22 Restore command from filesystem backups
restore database TPCDB from "D:\TPC_database_backups" taken at 20100924135535
If you restore from Tivoli Storage Manager, use different commands as in Example 16-23.
Example 16-23 Restore command from Tivoli Storage Manager backups
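Again the command mirrors the filesystem restore; substitute the timestamp of the online backup image that you identified in Tivoli Storage Manager:
restore database TPCDB use tsm taken at <timestamp>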
Figure 16-29 on page 547 shows an example of the restore process dialog for the TPCDB database restore from a filesystem.
Figure 16-30 Roll forward TPCDB to the end of the logs and complete
Notice that the actual last committed transaction time is slightly different than the time that is requested in the roll forward. This is the closest that DB2 can get to the requested time and still keep the database in a consistent state.
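As a sketch of such a point-in-time roll forward (the timestamp is only an illustrative value; DB2 expects the format yyyy-mm-dd-hh.mm.ss):
rollforward database TPCDB to 2010-09-24-14.30.00 using local time and complete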
How often you should take a full backup of your TPC database depends on how critical the TPCDB data is to your business. We recommend that a full TPCDB backup be run once a week. If the data is significantly critical, implement a TPCDB backup strategy that accommodates your business needs. For example, a full TPCDB backup can be scheduled every weekend, and incremental backups (not explained in this chapter) can be scheduled every weekday. Refer to IBM DB2 Universal Database Data Recovery and High Availability Guide and Reference, SC27-2441, for detailed information about this subject.
Taking backups of the TPCDB database can be automated. Some of the available options for this task are:
Windows Task Scheduler
DB2 Administration Server's scheduler
Tivoli Storage Manager Backup-Archive Scheduler
Refer to IBM DB2 Universal Database Data Recovery and High Availability Guide and Reference, SC27-2441, and Backing Up DB2 Using IBM Tivoli Storage Management, SG24-6247, for detailed information about this subject.
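For example, on Windows you could register the offline backup script C:\scripts\TPC_backup_offline_file.bat with the Task Scheduler; the schedule shown here (weekly, Sunday at 02:00) is only an illustration:
schtasks /create /tn "TPCDB weekly backup" /tr "C:\scripts\TPC_backup_offline_file.bat" /sc weekly /d SUN /st 02:00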
17
Chapter 17.
Backup Types
There are two primary methods of backing up DB2 databases:
Offline backup (sometimes known as cold backup) is when all database access is terminated, and the database is closed. The backup then runs standalone before the database is restarted and access-enabled. This is the simplest type of backup to set up, configure, and maintain.
Online backup (sometimes known as hot backup) is when all user and application database access continues to run while the backup process takes place. This type of backup provides for continuous availability of the database and the applications that require it. This is a more complex type of backup to set up, configure, and maintain.
Database logging
DB2 UDB uses log files to keep a sequential record of all database changes. The log files are specific to DB2 UDB activity. The logs record the database activity in transactions. If there is a crash, you use logs to play back or redo committed transactions during recovery. There are two types of logging:

Circular logging (default): This is the simplest method and the default logging type that TPC uses.

Archive logging: This type of logging enables online backup as well as roll-forward recovery of a database to a point-in-time. It is, however, more complex to manage.
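To confirm which logging method a database is currently using, you can inspect its configuration. This is a small sketch that assumes the db2inst1 instance and the TPCDB database used throughout this chapter; LOGARCHMETH1 reports OFF for circular logging, or DISK or TSM when archive logging is configured:

. /home/db2inst1/sqllib/db2profile
db2 get db cfg for TPCDB | grep -i logarchmeth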
Advantages:
Simple: You can perform offline backup with DB2 logging set to the default circular method.
DB2 skills: Offline backup requires a minimum amount of DB2 skills to perform, because it is the simplest method of backup.
Logging: Circular logs are the simplest to manage and maintain.
Disadvantages:
Stopped TPC server services: The offline method entails stopping all of the TPC server services on a regular basis (typically daily) to perform the backup. This regular outage might not be acceptable to all organizations that want to use TPC.
Missed performance data collection: If you have set up TPC to continuously collect disk subsystem and SAN fabric performance statistics, you lose data points for the duration that TPC is down each day for backup. You can minimize the impact of this loss by scheduling the backup at a time when the monitored equipment statistics are of little importance from a reporting perspective. This loss of data points might not be acceptable to all organizations wanting to use TPC.
Missed events: TPC monitors the infrastructure and alerts you about events, such as failures within a SAN fabric. You run the risk that you can miss critical events if the events occur when you stop the TPC server services for the backup process.
You need to change the DB2 parameter num_db_backups and set the value to the number of backup versions that you require. You also need to set the rec_his_retentn parameter to a value of -1. By setting this value to -1, rec_his_retentn follows the value set in num_db_backups. Important: Changing this value requires a stop and a start of the TPC services to take effect. This restart does not necessarily need to happen directly after you change the parameter.
2. Log into your DB2 server and switch to the DB instance owner ID (usually db2inst1) or source the instance profile:
. /home/db2inst1/sqllib/db2profile
Then initiate the DB2 command line processor:
db2
3. Example 17-1 on page 555 shows how to set the num_db_backups value to 4 versions and rec_his_retentn to -1 for the TPC database.
4. Issue the following commands at the db2 => prompt in the command line processor window.
Example 17-1 DB2 commands to configure how many backup versions to keep
connect to TPCDB
update db cfg using num_db_backups 4
update db cfg using rec_his_retentn -1
disconnect TPCDB
quit

Important: When you set new values for num_db_backups and rec_his_retentn, the new values are not effective until you stop all of the database connections.
5. Restart TPC to make the changes effective. You can either reboot the server, or alternatively stop and start the services by issuing the commands shown in Example 17-2.
Example 17-2 AIX commands to stop and start TPC services

stopsrc -s TSRMsrv1
/opt/IBM/TPC/device/bin/aix/stopTPCF.sh
startsrc -s TSRMsrv1
/opt/IBM/TPC/device/bin/aix/startTPCF.sh
Note: Ensure that you perform the steps in 17.5, Common backup setup steps on page 554 as well as these steps.
The steps are:
1. Choose a location to use for the DB2 backup output. Choose a directory that has enough free space to hold the number of backups that you plan to retain. We advise that you use a separate filesystem rather than the filesystem that contains the DB2 database. You can choose to use a location that is a remotely mounted CIFS or NFS filesystem, so that the backup data is secured to another server, perhaps at another location in your organization. This example uses /var/TPC_database_backups
Important: DB2 does not create this directory for you. Create this directory before you attempt a backup, and make sure that user db2inst1 has write permissions.
2. Create a script to control the backup process. There are two files. The script to run the backup, shown in Example 17-3, is:
/home/root/TPCBKP/TPC_backup_offline_file
Example 17-3 File /home/root/TPCBKP/TPC_backup_offline_file
#!/bin/ksh
#This is a sample backup script
#To backup TPC offline
#To disk filesystems
. /home/db2inst1/sqllib/db2profile
echo "Stopping Tivoli Storage Productivity Center services"
echo "---------------------------------------------------"
echo
stopsrc -s TSRMsrv1
/opt/IBM/TPC/device/bin/aix/stopTPCF.sh
echo
echo "Starting backup of the DB2 database"
echo "-----------------------------------"
db2 force application all
db2 $(cat /home/root/TPCBKP/database_list_offline_file.txt )
echo
echo "Restarting Tivoli Storage Productivity Center services"
echo "---------------------------------------------------"
echo
startsrc -s TSRMsrv1
/opt/IBM/TPC/device/bin/aix/startTPCF.sh
echo
echo
echo "Offline backup process complete"
echo "-------------------------------"
exit 0
Note: Remember to make the script executable by using the chmod command:
chmod +x /home/root/TPCBKP/TPC_backup_offline_file
3. The DB2 scripted list of databases (shown in Example 17-4) to back up is: /home/root/TPCBKP/database_list_offline_file.txt
Example 17-4 File /home/root/TPCBKP/database_list_offline_file.txt
backup database TPCDB to /var/TPC_database_backups without prompting

See 17.10.1, Performing an offline backup to a filesystem on page 567 to run an offline backup.
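If you want to confirm that a backup image on disk is readable before you ever need it, DB2 provides the db2ckbkp utility. The following is a minimal sketch, run as the instance owner; the image file name is taken from the directory listing shown later in this chapter, and your timestamp will differ:

cd /var/TPC_database_backups
db2ckbkp TPCDB.0.db2inst1.NODE0000.CATN0000.20101001143814.001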
Set the environment variables for the API client. As shown in Example 17-5, add the following lines in the $HOME/.profile file for the DB2 instance administrator (usually /home/db2inst1/.profile):
Example 17-5 Add TSM variables to the DB2 profile
echo 'export DSMI_DIR=/usr/tivoli/tsm/client/api/bin64
export DSMI_CONFIG=/home/db2inst1/tsm/dsm.opt
export DSMI_LOG=/home/db2inst1/tsm
' >> /home/db2inst1/sqllib/db2profile

Important: If you are using a 32-bit version of DB2, use /usr/tivoli/tsm/client/api/bin instead of /usr/tivoli/tsm/client/api/bin64.
Important: If it doesn't exist, create directory /home/db2inst1/tsm, and make sure that user db2inst1 is the owner.
This section describes the steps necessary to configure the Tivoli Storage Manager option file dsm.opt and then set the Tivoli Storage Manager password so that the DB2 backup process can communicate with the Tivoli Storage Manager API.
Important: At this point your TSM client should already be registered with a TSM server. If the TSM server is accepting open registrations, you are asked for a password to register your client the first time that you start the TSM client GUI or command line. If the TSM server is using closed registration, you need the TSM administrator to register your client.
The steps are:
1. Edit the dsm.sys file, located in /usr/tivoli/tsm/client/api/bin64/. Make sure that the client option PASSWORDACCESS is set to GENERATE as shown in Figure 17-1.

tpcblade4-14v3> cat /usr/tivoli/tsm/client/api/bin64/dsm.sys
Servername         TSMsrv1
COMMMethod         TCPIP
TCPPort            1500
TCPSERVERADDRESS   tsmsrv1.storage.tucson.ibm.com
PASSWORDACCESS     GENERATE
Figure 17-1 Contents of the dsm.sys file
Important: If you are using a 32-bit version of DB2, edit /usr/tivoli/tsm/client/api/bin/dsm.sys instead.
2. Create or edit the dsm.opt file, located in /home/db2inst1/tsm/. The dsm.opt file only needs to have one line in it, which is a reference to the server stanza in the dsm.sys file; in our case this is TSMsrv1, as shown in Figure 17-2.

tpcblade4-14v3> cat /home/db2inst1/tsm/dsm.opt
Servername TSMsrv1
Figure 17-2 Contents of the dsm.opt file
3. Now, set the Tivoli Storage Manager password so that DB2 can authenticate with the Tivoli Storage Manager server when DB2 performs a backup or restore operation: Run the dsmapipw command as shown in Figure 17-3. Enter the current and new Tivoli Storage Manager password. You can reuse the existing Tivoli Storage Manager password. Important: You must run the dsmapipw command even if you do not intend to change the Tivoli Storage Manager password. Running this command registers it with the Tivoli Storage Manager API. Registering this password in the setup phase means that a DB2 operator can perform backup and restore operations without needing to know the Tivoli Storage Manager client password. If a Tivoli Storage Manager administrator changes or resets the Tivoli Storage Manager password, you need to run the dsmapipw command again.
tpcblade4-14v3> /home/db2inst1/sqllib/adsm/dsmapipw

*************************************************************
* Tivoli Storage Manager                                    *
* API Version = 5.4.0                                       *
*************************************************************

Enter your current password:
Enter your new password:
Enter your new password again:

Your new password has been accepted and updated.
Figure 17-3 Running the dsmapipw command
Important: Check that files dsierror.log and dsm.opt in directory /home/db2inst1/tsm are owned by the DB2 instance owner (db2inst1) to avoid errors during the backup process.
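Before relying on scheduled backups, it is worth confirming that DB2 can reach the Tivoli Storage Manager server through the API. One simple check, shown here only as a sketch, is to run a db2adutl query as the instance owner (the same tool is used for housekeeping later in this chapter); an empty result is normal before the first backup has been taken:

su - db2inst1 -c "db2adutl query database TPCDB"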
Note: Remember to make the script executable by using the chmod command:
chmod +x /home/root/TPCBKP/TPC_backup_offline_tsm
The second file is the DB2 scripted list of databases to backup: /home/root/TPCBKP/database_list_offline_tsm.txt
Example 17-8 File /home/root/TPCBKP/database_list_offline_tsm.txt

backup database TPCDB use tsm without prompting
Take time to consider the advantages and disadvantages of archive logging before continuing with this setup. For full details of DB2 logging methods, refer to the DB2 product manuals. See IBM DB2 Universal Database Data Recovery and High Availability Guide and Reference, SC27-2441, for detailed information about this subject. Note: You need to stop TPC to perform these tasks. DB2 requires a full backup of each database before you can start the TPC database again after these reconfiguration steps. We include the instructions to perform a full backup of the database. Allow time in your outage planning for the backup to complete. Also, complete the steps in 17.5, Common backup setup steps on page 554 to set the number of backup versions that you want to retain in the history file.
Important: If you set up DB2 for online backup to Tivoli Storage Manager, you cannot easily change to an online backup to filesystem. You need to choose between these methods, because you are setting the destination for the archive logging process. If you decide in the future to change to the online filesystem method, you will need to reconfigure DB2 to send the archive logs to filesystem. This reconfiguration requires a TPC restart to complete the task. It is possible to perform an online backup to filesystem and have the archive logs going to Tivoli Storage Manager. However, we do not recommend that you do this because the difficulty of managing and tracking information makes this a poor practice. Set up and test DB2 to Tivoli Storage Manager integration before you attempt this section. Use 17.7, Offline backup to Tivoli Storage Manager setup steps on page 557. When you are satisfied that DB2 is communicating with Tivoli Storage Manager and you have performed at least one successful offline backup, return to this section.
17.8.1 DB2 parameter changes for archive logging to Tivoli Storage Manager
To set up archive logging to Tivoli Storage Manager, complete the following tasks: 1. You need to make a number of parameter choices for the configuration of archive logging as seen in Table 17-3 on page 563. These parameters determine where DB2 keeps its log files. User db2inst1 should be the owner of all log directories.
Table 17-3 DB2 parameters

DB2 parameter: Primary log path (newlogpath)
Example value: /var/DB2/active_logs
Comment: This is the location where DB2 keeps the current logs for the database. For best performance, place these logs on a separate volume than the volume that holds the data.

DB2 parameter: Failure archive path (failarchpath)
Example value: /var/DB2/failed_logs
Comment: This is the location where DB2 puts log files if the archive process fails. This can happen if Tivoli Storage Manager is down or unreachable when DB2 tries to send a log file to Tivoli Storage Manager.
3. Log into your DB2 server and switch to the DB instance owner ID (usually db2inst1) or source the instance profile:
. /home/db2inst1/sqllib/db2profile
Then initiate the DB2 command line processor:
db2
4. Issue the commands from Example 17-10 on page 563 in the command line processor window. Substitute your chosen values for the parameters that form part of the UPDATE DB CFG command. See Table 17-3 on page 563. Note that the final command performs an offline backup of the database.
Important: The database backup is required after this reconfiguration, and the DB2 database will not open again until the database backup is completed.
Example 17-10 DB2 command to configure archive logging to Tivoli Storage Manager

CONNECT TO TPCDB
QUIESCE DATABASE IMMEDIATE FORCE CONNECTIONS
UNQUIESCE DATABASE
CONNECT RESET
UPDATE DB CFG FOR TPCDB USING logarchmeth1 TSM failarchpath /var/DB2/failed_logs newlogpath /var/DB2/active_logs
BACKUP DATABASE TPCDB USE TSM
QUIT
Note: Check that directories /var/DB2/failed_logs and /var/DB2/active_logs exist and are owned by user db2inst1.
5. When the database backup is complete, you can restart TPC. Issue the commands shown in Example 17-11.
Example 17-11 Start TPC

startsrc -s TSRMsrv1
/opt/IBM/TPC/device/bin/aix/startTPCF.sh
Note: Remember to make the script executable by using the chmod command:
chmod +x /home/root/TPCBKP/TPC_backup_online_tsm
The second file (Example 17-13) is the DB2 scripted list of databases to back up: /home/root/TPCBKP/database_list_online_tsm.txt
Example 17-13 File /home/root/TPCBKP/database_list_online_tsm.txt
files after a specific amount of time to prevent the system from filling up. You also need to plan for this amount of space. The log space required for a TPC database can become many times larger than the database over a number of weeks. To be able to restore an online DB2 database taken two weeks ago, for example, you need to have log files going back to that same date that you can restore. An online DB2 database backup is not standalone, because you cannot restore the online DB2 database backup without at least some logs for it to roll forward to a consistent state. Important: Although it is straightforward to switch between a backup destination of online to a filesystem and online to Tivoli Storage Manager, it is not so easy to switch the logging path. To switch the logging from Tivoli Storage Manager to a filesystem requires a stop and a start of the database and, therefore, a restart of the TPC services. We recommend that you choose either Tivoli Storage Manager backup or a filesystem backup and stay with that specific method.
2. Choose a filesystem path to store the DB2 database backups. Ensure that the directory is owned by user db2inst1.
Table 17-5 Filesystem location for database backups

Database backup path: /var/TPC_database_backups
4. Log into your DB2 server and switch to the DB instance owner ID (usually db2inst1) or source the instance profile:
. /home/db2inst1/sqllib/db2profile
Then initiate the DB2 command line processor:
db2
5. Issue the commands from Example 17-15 on page 566 in the command line processor window. Substitute your chosen values for the parameters that form part of the UPDATE DB CFG command. See Table 17-4 on page 565. Note that the final command performs an offline backup of the database.
Important: The offline backup of the database is required after the reconfiguration; the DB2 database will not open until the backup is complete.
Example 17-15 DB2 command to configure archive logging to a filesystem

CONNECT TO TPCDB
QUIESCE DATABASE IMMEDIATE FORCE CONNECTIONS
UNQUIESCE DATABASE
CONNECT RESET
UPDATE DB CFG FOR TPCDB USING logarchmeth1 DISK:/var/DB2/archive_logs/TPCDB failarchpath /var/DB2/failed_logs newlogpath /var/DB2/active_logs
BACKUP DATABASE TPCDB TO /var/TPC_database_backups
6. When the database backup completes, you can restart TPC. Issue the commands shown in Example 17-16.
Example 17-16 Start TPC

startsrc -s TSRMsrv1
/opt/IBM/TPC/device/bin/aix/startTPCF.sh
#This is a sample backup script
#To backup TPC online
#To disk filesystems
. /home/db2inst1/sqllib/db2profile
echo
echo "Starting backup of the DB2 database"
echo "-----------------------------------"
db2 $(cat /home/root/TPCBKP/database_list_online_file.txt)
echo
echo "Online backup process complete"
echo "-------------------------------"
exit 0
Note: Remember to make the script executable by using the chmod command:
chmod +x /home/root/TPCBKP/TPC_backup_online_file
The second file is the DB2 scripted list of databases to back up:
/home/root/TPCBKP/database_list_online_file.txt
Example 17-18 File /home/root/TPCBKP/database_list_online_file.txt
p55ap1(root)/> /home/root/TPCBKP/TPC_backup_offline_file Stopping Tivoli Storage Productivity Center services --------------------------------------------------0513-044 The TSRMsrv1 Subsystem was requested to stop. Setting Variables for SANM Stopping server1 with default options ADMU0116I: Tool information is being logged in file /opt/IBM/TPC/device/apps/was/profiles/deviceServer/logs/server1/stopS erver.log ADMU0128I: Starting tool with the deviceServer profile ADMU3100I: Reading configuration for server: server1 ADMU3201I: Server stop request issued. Waiting for stop status. ADMU4000I: Server server1 stop completed.
Starting backup of the DB2 database ----------------------------------DB20000I The FORCE APPLICATION command completed successfully. DB21024I This command is asynchronous and may not be effective immediately.
Restarting Tivoli Storage Productivity Center services --------------------------------------------------0513-059 The TSRMsrv1 Subsystem has been started. Subsystem PID is 1130594. Setting Variables for SANM Starting server1 for Device Manager ADMU0116I: Tool information is being logged in file /opt/IBM/TPC/device/apps/was/profiles/deviceServer/logs/server1/start Server.log ADMU0128I: Starting tool with the deviceServer profile ADMU3100I: Reading configuration for server: server1 ADMU3200I: Server launched. Waiting for initialization status. ADMU3000I: Server server1 open for e-business; process id is 508002
Offline backup process complete ------------------------------Figure 17-4 Running an offline backup to a filesystem
Note: You must complete the initial setup steps that we detail in 17.7, Offline backup to Tivoli Storage Manager setup steps on page 557 before you can start to perform offline backups. Running an offline DB2 database backup takes TPC out of service for the period of the backup. Make sure it is acceptable to take TPC out of service before you proceed.
p55ap1(root)/> /home/root/TPCBKP/TPC_backup_offline_tsm Stopping Tivoli Storage Productivity Center services --------------------------------------------------0513-044 The TSRMsrv1 Subsystem was requested to stop. Setting Variables for SANM Stopping server1 with default options ADMU0116I: Tool information is being logged in file /opt/IBM/TPC/device/apps/was/profiles/deviceServer/logs/server1/stopServer.log ADMU0128I: Starting tool with the deviceServer profile ADMU3100I: Reading configuration for server: server1 ADMU3201I: Server stop request issued. Waiting for stop status. ADMU4000I: Server server1 stop completed.
Starting backup of the DB2 database ----------------------------------DB20000I The FORCE APPLICATION command completed successfully. DB21024I This command is asynchronous and may not be effective immediately.
Restarting Tivoli Storage Productivity Center services --------------------------------------------------0513-059 The TSRMsrv1 Subsystem has been started. Subsystem PID is 1028106. Setting Variables for SANM Starting server1 for Device Manager ADMU0116I: Tool information is being logged in file /opt/IBM/TPC/device/apps/was/profiles/deviceServer/logs/server1/startServer.log ADMU0128I: Starting tool with the deviceServer profile ADMU3100I: Reading configuration for server: server1 ADMU3200I: Server launched. Waiting for initialization status. ADMU3000I: Server server1 open for e-business; process id is 331802 Offline backup process complete ------------------------------Figure 17-5 Running an offline backup to Tivoli Storage Manager
p55ap1(root)/> /home/root/TPCBKP/TPC_backup_online_tsm Starting backup of the DB2 database ----------------------------------Backup successful. The timestamp for this backup image is : 20101001144937
Online backup process complete ------------------------------Figure 17-6 Running an online backup to Tivoli Storage Manager
p55ap1(root)/> /home/root/TPCBKP/TPC_backup_online_file Starting backup of the DB2 database ----------------------------------Backup successful. The timestamp for this backup image is : 20101001145102
Online backup process complete ------------------------------Figure 17-7 Running an online backup to filesystem
The example in Figure 17-8 shows a directory containing backups taken on October 01 2010 at different times. DB2 timestamps all backups in this way. Every time a backup is made, a new file with a name starting with TPCDB.0.db2inst1.NODE0000.CATN0000 is created, with the last part of its name made up of the backup timestamp (date and time, for example 20101001143814). Plan to delete old backup files to suit the requirements of your backup and recovery policy.
p55ap1(root)/> ls -R /var/DB2/archive_logs/TPCDB/
/var/DB2/archive_logs/TPCDB/:
db2inst1
/var/DB2/archive_logs/TPCDB/db2inst1:
TPCDB
/var/DB2/archive_logs/TPCDB/db2inst1/TPCDB:
NODE0000
/var/DB2/archive_logs/TPCDB/db2inst1/TPCDB/NODE0000:
C0000000
/var/DB2/archive_logs/TPCDB/db2inst1/TPCDB/NODE0000/C0000000:
S0000001.LOG  S0000002.LOG  S0000003.LOG  S0000004.LOG  S0000005.LOG  S0000006.LOG
Figure 17-9 DB2 archive logs
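One way to automate the clean-up of old backup images in the backup directory is with the find command. The retention period below (28 days) is only an illustration; review the output of the first command before running the second, which actually removes files:

find /var/TPC_database_backups -name "TPCDB.0.*" -type f -mtime +28 -exec ls -l {} \;
find /var/TPC_database_backups -name "TPCDB.0.*" -type f -mtime +28 -exec rm {} \;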
17.13.3 Managing backup versions that you store in Tivoli Storage Manager
This section describes how to maintain, view, and delete backup data and archive logs that you have sent to Tivoli Storage Manager. DB2 does not automatically prune backup versions and log files from Tivoli Storage Manager. You need to use the db2adutl tool to perform these housekeeping functions. Note: This section is not intended to be a comprehensive guide to the db2adutl tool. The intent here is to detail the commands that you likely need to maintain the data that is held in Tivoli Storage Manager on a regular basis.
db2adutl query database TPCDB

Figure 17-10 on page 574 shows the sample output from this command. The output lists the database backup versions that are stored in Tivoli Storage Manager as well as the archive logs.
The following command has a shorter output. It lists only the database backup versions and the archive logs:

db2adutl query full
Retrieving FULL DATABASE BACKUP information.
1 Time: 20101001150356  Oldest log: S0000007.LOG  DB Partition Number: 0  Sessions: 2
2 Time: 20101001150044  Oldest log: S0000006.LOG  DB Partition Number: 0  Sessions: 2
3 Time: 20101001145751  Oldest log: S0000005.LOG  DB Partition Number: 0  Sessions: 2
4 Time: 20101001145519  Oldest log: S0000004.LOG  DB Partition Number: 0  Sessions: 2
5 Time: 20101001144937  Oldest log: S0000002.LOG  DB Partition Number: 0  Sessions: 2
6 Time: 20101001144758  Oldest log: S0000002.LOG  DB Partition Number: 0  Sessions: 1
7 Time: 20101001142657  Oldest log: S0000000.LOG  DB Partition Number: 0  Sessions: 1
-- 8< ---- OUTPUT CLIPPED -- 8< ----

Retrieving LOG ARCHIVE information.
Log file: S0000000.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-14.40.15
Log file: S0000001.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-14.47.38
Log file: S0000002.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-14.49.53
Log file: S0000003.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-14.51.02
Log file: S0000004.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-14.55.20
Log file: S0000005.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-14.57.52
Log file: S0000006.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-15.00.46
Log file: S0000007.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-15.03.58
Figure 17-10 Sample output from a db2adutl query database TPCDB command
Important: TSM will not allow the root user to delete backups created by the db2inst1 instance, so log in with user ID db2inst1 before trying these commands.
This command deletes backup versions from Tivoli Storage Manager that are older than 90 days. This type of command is useful, because you can easily script it to run each day to remove the older backups.

db2adutl delete full older than 90 days

Or specify a database name:

db2adutl delete full older than 90 days database TPCDB

Figure 17-11 on page 576 gives you an example of running this command. The next command deletes all backup versions from Tivoli Storage Manager, except for the last five versions. Again, this command is useful when scripting an automatic process.

db2adutl delete full keep 5

Or specify a database name:

db2adutl delete full keep 5 database TPCDB
$ db2adutl delete full keep 5 database TPCDB
Query for database TPCDB

Retrieving FULL DATABASE BACKUP information.
   Taken at: 20101001144758  DB Partition Number: 0  Sessions: 1
   Taken at: 20101001142657  DB Partition Number: 0  Sessions: 1

Do you want to delete these backup images (Y/N)? Y
Are you sure (Y/N)? Y
The current delete transaction failed. You do not have sufficient authorization. Attempting to deactivate backup image(s) instead... Success.
Retrieving INCREMENTAL DATABASE BACKUP information. No INCREMENTAL DATABASE BACKUP images found for TPCDB
Retrieving DELTA DATABASE BACKUP information. No DELTA DATABASE BACKUP images found for TPCDB
Figure 17-11 Example of a db2adutl delete full keep 5 database TPCDB command
Retrieving FULL DATABASE BACKUP information.
1 Time: 20101001150356  Oldest log: S0000007.LOG  DB Partition Number: 0  Sessions: 2
2 Time: 20101001150044  Oldest log: S0000006.LOG  DB Partition Number: 0  Sessions: 2
3 Time: 20101001145751  Oldest log: S0000005.LOG  DB Partition Number: 0  Sessions: 2
4 Time: 20101001145519  Oldest log: S0000004.LOG  DB Partition Number: 0  Sessions: 2
5 Time: 20101001144937  Oldest log: S0000002.LOG  DB Partition Number: 0  Sessions: 2
-- 8< ---- OUTPUT CLIPPED -- 8< ----

Retrieving LOG ARCHIVE information.
Log file: S0000000.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-14.40.15
Log file: S0000001.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-14.47.38
Log file: S0000002.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-14.49.53
Log file: S0000003.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-14.51.02
Log file: S0000004.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-14.55.20
Log file: S0000005.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-14.57.52
Log file: S0000006.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-15.00.46
Log file: S0000007.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-15.03.58
Look at the oldest log number against the oldest backup version. After we deleted the older backups as shown in Figure 17-11 on page 576, the oldest log needed is S0000002.LOG. Then, look at the list of log archive files from the same output to see if there are any earlier logs. If there are earlier logs and you do not want to wait for Tivoli Storage Manager to expire them, use the following command to delete them. See Figure 17-13 on page 578.

db2adutl delete logs between S0000000 and S0000001 database TPCDB

Tip: When specifying log numbers, you need to add the S at the start of the number but not the .LOG at the end.
$ db2adutl delete logs between S0000000 and S0000001 database TPCDB
Query for database TPCDB

Retrieving LOG ARCHIVE information.
Log file: S0000000.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-14.40.15
Do you want to delete this log image (Y/N)? Y
Are you sure (Y/N)? Y
Log file: S0000001.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-10-01-14.47.38
Do you want to delete this log image (Y/N)? Y
Are you sure (Y/N)? Y
Figure 17-13 Example command to delete DB2 archive logs
$ db2adutl verify full taken at 20101001145519 db TPCDB
Query for database TPCDB
Please wait.
FULL DATABASE BACKUP image:
./TPCDB.0.db2inst1.NODE0000.CATN0000.20101001145519.001, DB Partition Number: 0
./TPCDB.0.db2inst1.NODE0000.CATN0000.20101001145519.002, DB Partition Number: 0

Do you wish to verify this image (Y/N)? Y
Verifying file: ./TPCDB.0.db2inst1.NODE0000.CATN0000.20101001145519.001
###############################################################################
Read 0 bytes, assuming we are at the end of the image
./TPCDB.0.db2inst1.NODE0000.CATN0000.20101001145519.002
## WARNING only partial image read, bytes read: 16384 of 1576960
Read 0 bytes, assuming we are at the end of the image Image Verification Complete - successful.
Retrieving INCREMENTAL DATABASE BACKUP information. No INCREMENTAL DATABASE BACKUP images found for TPCDB
Retrieving DELTA DATABASE BACKUP information. No DELTA DATABASE BACKUP images found for TPCDB
Figure 17-14 Performing a backup verification
If the verification fails, that backup is not usable and you will need to take a new one.
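For example, assuming the online backup to Tivoli Storage Manager configured earlier in this chapter, a replacement backup can be taken immediately with a single command as the instance owner (or by rerunning the TPC_backup_online_tsm script):

db2 backup database TPCDB online use tsm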
database backup on a 24 hour cycle, you lose updates to the TPC repository that were made between these points. When you configure archive logging, you have the ability to restore a backup and then roll forward through the logs to any point-in-time to minimize data loss. This gives you an enhanced level of protection to the TPC repository data at the expense of more complexity in the process. You cannot simply restore a backup taken online as is, because an online backup is not logically consistent in its own right. Following an online restore, some roll forward is necessary to bring the restored database to a consistent and usable state. Finally, we do not intend for this section to be a comprehensive guide to the DB2 restore commands. We intend to give you the basic restore functions that you need to recover a database from both filesystem and Tivoli Storage Manager backups. See IBM DB2 Universal Database Data Recovery and High Availability Guide and Reference, SC27-2441, for detailed information about this subject.
$ ls -l /var/TPC_database_backups
total 868096
-rw-------  1 db2inst1 db2iadm1 149676032 Oct 01 14:38 TPCDB.0.db2inst1.NODE0000.CATN0000.20101001143814.001
-rw-------  1 db2inst1 db2iadm1 149676032 Oct 01 14:40 TPCDB.0.db2inst1.NODE0000.CATN0000.20101001144033.001
-rw-------  1 db2inst1 db2iadm1 145100800 Oct 01 14:51 TPCDB.0.db2inst1.NODE0000.CATN0000.20101001145102.001
Figure 17-15 Viewing backup versions available for restore
From the filename, we extract the backup image timestamp. In this case: 20101001143814 You need this timestamp number for the next step, Restore the TPCDB database (offline) on page 582.
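If you script the restore, the timestamp can also be pulled out of the newest image name automatically. This is only a sketch using standard shell tools against the backup directory shown above:

ls -t /var/TPC_database_backups/TPCDB.0.*.001 | head -1 | awk -F. '{print $(NF-1)}'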
Retrieving FULL DATABASE BACKUP information.
1 Time: 20101001150356  Oldest log: S0000007.LOG  DB Partition Number: 0  Sessions: 2
2 Time: 20101001150044  Oldest log: S0000006.LOG  DB Partition Number: 0  Sessions: 2
3 Time: 20101001145751  Oldest log: S0000005.LOG  DB Partition Number: 0  Sessions: 2
4 Time: 20101001145519  Oldest log: S0000004.LOG  DB Partition Number: 0  Sessions: 2
5 Time: 20101001144937  Oldest log: S0000002.LOG  DB Partition Number: 0  Sessions: 2
Retrieving INCREMENTAL DATABASE BACKUP information. No INCREMENTAL DATABASE BACKUP images found for TPCDB
Retrieving DELTA DATABASE BACKUP information. No DELTA DATABASE BACKUP images found for TPCDB
Figure 17-16 Command db2adutl example to query backup versions available
From the list, select a backup timestamp. For example: 20101001145519 You need this timestamp number for the next step.
To restore from filesystem backups, issue the commands in Example 17-20 in the DB2 command line processor using the timestamps that you have selected.
Example 17-20 Restore command from filesystem backups
If you restore from Tivoli Storage Manager, use the commands that are shown in Example 17-21.
Example 17-21 Restore command from Tivoli Storage Manager backups
Figure 17-17 on page 583 shows an example of the restore process dialog for the TPCDB database restore process from TSM.

db2 => restore database TPCDB use TSM taken at 20101001145519
SQL2539W  Warning!  Restoring to an existing database that is the same as the backup image database.  The database files will be deleted.
Do you want to continue ? (y/n) Y
DB20000I  The RESTORE DATABASE command completed successfully.
Figure 17-17 Example of offline restore of TPCDB from TSM
To restore the database from filesystem backups, issue the commands in Example 17-24 in the DB2 command line processor using the timestamp that you have selected.
Example 17-24 Restore command from filesystem backups
restore database TPCDB from /var/TPC_database_backups taken at 20101001145102

If you restore from Tivoli Storage Manager, use different commands as in Example 17-25.
Example 17-25 Restore command from Tivoli Storage Manager backups
Figure 17-18 on page 584 shows an example of the restore process dialog for the TPCDB database restore from filesystem.
db2 => restore database TPCDB from /var/TPC_database_backups taken at 20101001145102
SQL2539W  Warning!  Restoring to an existing database that is the same as the backup image database.  The database files will be deleted.
Do you want to continue ? (y/n) y
DB20000I  The RESTORE DATABASE command completed successfully.
Figure 17-18 Example of online restore of TPCDB from filesystem
db2 => rollforward database TPCDB to end of logs and complete

                                 Rollforward Status

 Input database alias                   = TPCDB
 Number of nodes have returned status   = 1
 Node number                            = 0
 Rollforward status                     = not pending
 Next log file to be read               =
 Log files processed                    = S0000004.LOG - S0000006.LOG
 Last committed transaction             = 2010-10-01-18.59.01.000000 UTC

DB20000I
Figure 17-19 Roll forward TPCDB to the end of the logs and complete
Note: By default, DB2 uses UTC-0 time for the point-in-time roll forward. Add the use local time flag to the command if you want to specify a time in your local time zone.
Follow these steps:
1. Use the DB2 command line processor as seen in Figure 17-20 on page 586 to enter the rollforward command. In this example, we rolled the TPCDB database forward to a few minutes after the restore time. We entered the time using the use local time option.
2. Enter the point-in-time as YYYY-MM-DD-HH.MM.SS. The command for the TPCDB database is:
rollforward database TPCDB to 2010-10-01-15.00 using local time and complete
db2 => rollforward database TPCDB to 2010-10-01-15.00 using local time and complete

                                 Rollforward Status

 Input database alias                   = TPCDB
 Number of nodes have returned status   = 1
 Node number                            = 0
 Rollforward status                     = not pending
 Next log file to be read               =
 Log files processed                    = S0000004.LOG - S0000006.LOG
 Last committed transaction             = 2010-10-01-14.59.01.000000 Local

DB20000I
Notice that the actual last committed transaction time is slightly different than the time that is requested in the roll forward. This is the closest that DB2 can get to the requested time and still keep the database in a consistent state.
Reinstall the agents using the 'force' parameter, either with the Agent command or with a deployment job from the GUI.
How often you should take a full backup of your TPC database depends on how critical the TPCDB data is to your business. We recommend running a full TPCDB backup at least once a week. If the data is more critical, implement a TPCDB backup strategy that accommodates your business needs. For example, a full TPCDB backup can be scheduled every weekend, and incremental backups (not explained in this chapter) can be scheduled every weekday. Refer to the manual IBM DB2 Universal Database Data Recovery and High Availability Guide and Reference, SC27-2441, for detailed information about this subject.
Taking backups of the TPCDB database can be automated. Some of the available options for this task are the AIX cron scheduler, the DB2 Administration Server's scheduler, and the Tivoli Storage Manager Backup-Archive Scheduler. Refer to the manuals IBM DB2 Universal Database Data Recovery and High Availability Guide and Reference, SC27-2441, and Backing Up DB2 Using IBM Tivoli Storage Management, SG24-6247, for detailed information about this subject.
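As an illustration only, a weekly online backup could be driven from the root crontab on AIX by calling one of the scripts created in this chapter; the schedule and log file shown here are assumptions, not values from this book:

# Run the online backup to Tivoli Storage Manager every Sunday at 02:00
0 2 * * 0 /home/root/TPCBKP/TPC_backup_online_tsm >> /home/root/TPCBKP/TPC_backup.log 2>&1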
[root@tpcblade6-11 ~]# ls -R /var/DB2/archive_logs/TPCDB/
/var/DB2/archive_logs/TPCDB/:
db2inst1
/var/DB2/archive_logs/TPCDB/db2inst1:
TPCDB
/var/DB2/archive_logs/TPCDB/db2inst1/TPCDB:
NODE0000
/var/DB2/archive_logs/TPCDB/db2inst1/TPCDB/NODE0000:
C0000000
/var/DB2/archive_logs/TPCDB/db2inst1/TPCDB/NODE0000/C0000000:
S0000001.LOG  S0000002.LOG  S0000003.LOG  S0000004.LOG  S0000005.LOG  S0000006.LOG
Figure 17-21 DB2 archive logs
Chapter 18. Database backup on Linux
Backup Types
There are two primary methods of backing up DB2 databases:
Offline backup (sometimes known as cold backup) is when all database access is terminated, and the database is closed. The backup then runs standalone before the database is restarted and access-enabled. This is the simplest type of backup to set up, configure, and maintain.

Online backup (sometimes known as hot backup) is when all user and application database access continues to run while the backup process takes place. This type of backup provides for continuous availability of the database and the applications that require it. This is a more complex type of backup to set up, configure, and maintain.
Database logging
DB2 UDB uses log files to keep a sequential record of all database changes. The log files are specific to DB2 UDB activity. The logs record the database activity in transactions. If there is a crash, you use logs to play back or redo committed transactions during recovery. There are two types of logging:

Circular logging (default): This is the simplest method and the default logging type that TPC uses.

Archive logging: This type of logging enables online backup as well as roll-forward recovery of a database to a point-in-time. It is, however, more complex to manage.
Advantages:
Simple: You can perform offline backup with DB2 logging set to the default circular method.
DB2 skills: Offline backup requires a minimum amount of DB2 skills to perform, because it is the simplest method of backup.
Logging: Circular logs are the simplest to manage and maintain.
Disadvantages:
Stopped TPC server services: The offline method entails stopping all of the TPC server services on a regular basis (typically daily) to perform the backup. This regular outage might not be acceptable to all organizations that want to use TPC.
Missed performance data collection: If you have set up TPC to continuously collect disk subsystem and SAN fabric performance statistics, you lose data points for the duration that TPC is down each day for backup. You can minimize the impact of this loss by scheduling the backup at a time when the monitored equipment statistics are of little importance from a reporting perspective. This loss of data points might not be acceptable to all organizations wanting to use TPC.
Missed events: TPC monitors the infrastructure and alerts you about events, such as failures within a SAN fabric. You run the risk that you can miss critical events if the events occur when you stop the TPC server services for the backup process.
You need to change the DB2 parameter num_db_backups and set the value to the number of backup versions that you require. You also need to set the rec_his_retentn parameter to a value of -1. By setting this value to -1, rec_his_retentn follows the value set in num_db_backups. Important: Changing this value requires a stop and a start of the TPC services to take effect. This restart does not necessarily need to happen directly after you change the parameter.
2. Log into your DB2 server and switch to the DB instance owner ID (usually db2inst1) or source the instance profile:
. /home/db2inst1/sqllib/db2profile
Then initiate the DB2 command line processor:
db2
3. Example 18-1 on page 593 shows how to set the num_db_backups value to 4 versions and rec_his_retentn to -1 for the TPC database.
4. Issue the following commands at the db2 => prompt in the command line processor window.
Example 18-1 DB2 commands to configure how many backup versions to keep
connect to TPCDB
update db cfg using num_db_backups 4
update db cfg using rec_his_retentn -1
disconnect TPCDB
quit

Important: When you set new values for num_db_backups and rec_his_retentn, the new values are not effective until you stop all of the database connections.
5. Restart TPC to make the changes effective. You can either reboot the server, or alternatively stop and start the services.
Example 18-2 Linux commands to stop and start TPC services

/<usr or opt>/IBM/TPC/data/server/tpcdsrv1 stop
/<usr or opt>/IBM/TPC/device/bin/linux/stopTPCF.sh
/<usr or opt>/IBM/TPC/data/server/tpcdsrv1 start
/<usr or opt>/IBM/TPC/device/bin/linux/startTPCF.sh
Note: Ensure that you perform the steps in 18.5, Common backup setup steps on page 592 as well as these steps.
The steps are:
1. Choose a location to use for the DB2 backup output. Choose a directory that has enough free space to hold the number of backups that you plan to retain. We advise that you use a separate filesystem rather than the filesystem that contains the DB2 database. You can choose to use a location that is a remotely mounted CIFS or NFS filesystem, so that the backup data is secured to another server, perhaps at another location in your organization. This example uses /var/TPC_database_backups
Important: DB2 does not create this directory for you. Create this directory before you attempt a backup, and make sure that user db2inst1 has write permissions.
2. Create a script to control the backup process. There are two files. The script to run the backup, shown in Example 18-3, is:
/root/TPCBKP/TPC_backup_offline_file
Example 18-3 File /root/TPCBKP/TPC_backup_offline_file
#!/bin/bash
#This is a sample backup script
#To backup TPC offline
#To disk filesystems
. /home/db2inst1/sqllib/db2profile
echo "Stopping Tivoli Storage Productivity Center services"
echo "---------------------------------------------------"
echo
/opt/IBM/TPC/data/server/tpcdsrv1 stop
/opt/IBM/TPC/device/bin/linux/stopTPCF.sh
echo
echo "Starting backup of the DB2 database"
echo "-----------------------------------"
db2 force application all
db2 $(cat /root/TPCBKP/database_list_offline_file.txt )
echo
echo "Restarting Tivoli Storage Productivity Center services"
echo "---------------------------------------------------"
echo
/opt/IBM/TPC/data/server/tpcdsrv1 start
/opt/IBM/TPC/device/bin/linux/startTPCF.sh
echo
echo
echo "Offline backup process complete"
echo "-------------------------------"
exit 0
Note: Remember to make the script executable by using the chmod command:
chmod +x /root/TPCBKP/TPC_backup_offline_file
3. The DB2 scripted list of databases (shown in Example 18-4) to back up is: /root/TPCBKP/database_list_offline_file.txt
Example 18-4 File /root/TPCBKP/database_list_offline_file.txt
backup database TPCDB to /var/TPC_database_backups without prompting

See 18.10.1, Performing an offline backup to a filesystem on page 605 to run an offline backup.
Set the environment variables for the API client. As shown in Example 18-5, add the following lines in the $HOME/.profile file for the DB2 instance administrator (usually /home/db2inst1/.profile):
echo 'export DSMI_DIR=/opt/tivoli/tsm/client/api/bin64
export DSMI_CONFIG=/home/db2inst1/tsm/dsm.opt
export DSMI_LOG=/home/db2inst1/tsm
' >> /home/db2inst1/sqllib/db2profile

Important: If you are using a 32-bit version of DB2, use /opt/tivoli/tsm/client/api/bin instead of /opt/tivoli/tsm/client/api/bin64.
Important: If it doesn't exist, create directory /home/db2inst1/tsm, and make sure that user db2inst1 is the owner.
This section describes the steps necessary to configure the Tivoli Storage Manager option file dsm.opt and then set the Tivoli Storage Manager password so that the DB2 backup process can communicate with the Tivoli Storage Manager API.
Important: At this point your TSM client should already be registered with a TSM server. If the TSM server is accepting open registrations, you are asked for a password to register your client the first time that you start the TSM client GUI or command line. If the TSM server is using closed registration, you need the TSM administrator to register your client.
The steps are:
1. Edit the dsm.sys file, located in /opt/tivoli/tsm/client/api/bin64/. Make sure that the client option PASSWORDACCESS is set to GENERATE as shown in Figure 18-1.

[root@tpcblade6-11 bin64]# cat /opt/tivoli/tsm/client/api/bin64/dsm.sys
Servername         TSMsrv1
COMMMethod         TCPIP
TCPPort            1500
TCPSERVERADDRESS   tsmsrv1.storage.tucson.ibm.com
PASSWORDACCESS     GENERATE
Figure 18-1 Contents of the dsm.sys file
Important: If you are using a 32-bit version of DB2, edit /opt/tivoli/tsm/client/api/bin/dsm.sys instead.
2. Create or edit the dsm.opt file, located in /home/db2inst1/tsm/. The dsm.opt file only needs to have one line in it, which is a reference to the server stanza in the dsm.sys file; in our case this is TSMsrv1, as shown in Figure 18-2.

[root@tpcblade6-11 bin64]# cat /home/db2inst1/tsm/dsm.opt
Servername TSMsrv1
Figure 18-2 Contents of the dsm.opt file
3. Now, set the Tivoli Storage Manager password so that DB2 can authenticate with the Tivoli Storage Manager server when DB2 performs a backup or restore operation: Run the dsmapipw command as shown in Figure 18-3. Enter the current and new Tivoli Storage Manager password. You can reuse the existing Tivoli Storage Manager password. Important: You must run the dsmapipw command even if you do not intend to change the Tivoli Storage Manager password. Running this command registers it with the Tivoli Storage Manager API. Registering this password in the setup phase means that a DB2 operator can perform backup and restore operations without needing to know the Tivoli Storage Manager client password. If a Tivoli Storage Manager administrator changes or resets the Tivoli Storage Manager password, you need to run the dsmapipw command again.
[root@tpcblade6-11 ba]# /home/db2inst1/sqllib/adsm/dsmapipw

*************************************************************
* Tivoli Storage Manager                                    *
* API Version = 6.2.1                                       *
*************************************************************

Enter your current password:
Enter your new password:
Enter your new password again:

Your new password has been accepted and updated.
Figure 18-3 Running the dsmapipw command
Important: Now check that files dsierror.log and dsm.opt in directory /home/db2inst1/tsm are owned by the DB2 instance owner (db2inst1) to avoid errors during the backup process.
Note: Remember to make the script executable by using the chmod command:
chmod +x /root/TPCBKP/TPC_backup_offline_tsm
The second file is the DB2 scripted list of databases to back up:
/root/TPCBKP/database_list_offline_tsm.txt
Example 18-8 File /root/TPCBKP/database_list_offline_tsm.txt

backup database TPCDB use tsm without prompting
provides many backup and recovery benefits at the expense of increased complexity in the database operation. Take time to consider the advantages and disadvantages of archive logging before continuing with this setup. For full details of DB2 logging methods, refer to the DB2 product manuals. See IBM DB2 Universal Database Data Recovery and High Availability Guide and Reference, SC27-2441, for detailed information about this subject. Note: You need to stop TPC to perform these tasks. DB2 requires a full backup of each database before you can start the TPC database again after these reconfiguration steps. We include the instructions to perform a full backup of the database. Allow time in your outage planning for the backup to complete. Also, complete the steps in 18.5, Common backup setup steps on page 592 to set the number of backup versions that you want to retain in the history file.
Important: If you set up DB2 for online backup to Tivoli Storage Manager, you cannot easily change to an online backup to filesystem. You need to choose between these methods, because you are setting the destination for the archive logging process. If you decide in the future to change to the online filesystem method, you will need to reconfigure DB2 to send the archive logs to filesystem. This reconfiguration requires a TPC restart to complete the task. It is possible to perform an online backup to filesystem and have the archive logs going to Tivoli Storage Manager. However, we do not recommend that you do this because the difficulty of managing and tracking information makes this a poor practice. Set up and test DB2 to Tivoli Storage Manager integration before you attempt this section. Use 18.7, Offline backup to Tivoli Storage Manager setup steps on page 595. When you are satisfied that DB2 is communicating with Tivoli Storage Manager and you have performed at least one successful offline backup, return to this section.
18.8.1 DB2 parameter changes for archive logging to Tivoli Storage Manager
To set up archive logging to Tivoli Storage Manager, complete the following tasks: 1. You need to make a number of parameter choices for the configuration of archive logging as seen in Table 18-3 on page 601. These parameters determine where DB2 keeps its log files. User db2inst1 should be the owner of all log directories.
Table 18-3 DB2 parameters

DB2 parameter: Primary log path (newlogpath)
Example value: /var/DB2/active_logs
Comment: This is the location where DB2 keeps the current logs for the database. For best performance, place these logs on a separate volume than the volume that holds the data.

DB2 parameter: Failure archive path (failarchpath)
Example value: /var/DB2/failed_logs
Comment: This is the location where DB2 puts log files if the archive process fails. This can happen if Tivoli Storage Manager is down or unreachable when DB2 tries to send a log file to Tivoli Storage Manager.
3. Log into your DB2 server and switch to the DB instance owner ID (usually db2inst1) or source the instance profile:
. /home/db2inst1/sqllib/db2profile
Then initiate the DB2 command line processor:
db2
4. Issue the commands from Example 18-10 on page 601 in the command line processor window. Substitute your chosen values for the parameters that form part of the UPDATE DB CFG command. See Table 18-3 on page 601. Note that the final command performs an offline backup of the database.
Important: The database backup is required after this reconfiguration, and the DB2 database will not open again until the database backup is completed.
Example 18-10 DB2 command to configure archive logging to Tivoli Storage Manager

CONNECT TO TPCDB
QUIESCE DATABASE IMMEDIATE FORCE CONNECTIONS
UNQUIESCE DATABASE
CONNECT RESET
UPDATE DB CFG FOR TPCDB USING logarchmeth1 TSM failarchpath /var/DB2/failed_logs newlogpath /var/DB2/active_logs
BACKUP DATABASE TPCDB USE TSM
QUIT
Note: Check that directories /var/DB2/failed_logs and /var/DB2/active_logs exist and are owned by user db2inst1.
5. When the database backup is complete, you can restart TPC. Issue the commands shown in Example 18-11.
Example 18-11 Start TPC

/<usr or opt>/IBM/TPC/data/server/tpcdsrv1 start
/<usr or opt>/IBM/TPC/device/bin/linux/startTPCF.sh
Note: Remember to make the script executable by using the chmod command:
chmod +x /root/TPCBKP/TPC_backup_online_tsm
The second file (Example 18-13) is the DB2 scripted list of databases to back up: /root/TPCBKP/database_list_online_tsm.txt
Example 18-13 File /root/TPCBKP/database_list_online_tsm.txt
necessary archive log files. Therefore, you need to put processes in place to clean up old log files after a specific amount of time to prevent the system from filling up. You also need to plan for this amount of space. The log space required for a TPC database can become many times larger than the database over a number of weeks. To be able to restore an online DB2 database taken two weeks ago, for example, you need to have log files going back to that same date that you can restore. An online DB2 database backup is not standalone, because you cannot restore the online DB2 database backup without at least some logs for it to roll forward to a consistent state. Important: Although it is straightforward to switch between a backup destination of online to a filesystem and online to Tivoli Storage Manager, it is not so easy to switch the logging path. To switch the logging from Tivoli Storage Manager to a filesystem requires a stop and a start of the database and, therefore, a restart of the TPC services. We recommend that you choose either Tivoli Storage Manager backup or a filesystem backup and stay with that specific method.
2. Choose a filesystem path to store the DB2 database backups. Ensure that the directory is owned by user db2inst1.
Table 18-5 Filesystem location for database backups

Database backup path: /var/TPC_database_backups
Example 18-14 Linux commands to stop TPC

/<usr or opt>/IBM/TPC/data/server/tpcdsrv1 stop
/<usr or opt>/IBM/TPC/device/bin/linux/stopTPCF.sh
4. Log into your DB2 server and switch to the DB instance owner ID (usually db2inst1) or source the instance profile:
. /home/db2inst1/sqllib/db2profile
Then initiate the DB2 command line processor:
db2
5. Issue the commands from Example 18-15 on page 604 in the command line processor window. Substitute your chosen values for the parameters that form part of the UPDATE DB CFG command. See Table 18-4 on page 603. Note that the final command performs an offline backup of the database.
Important: The offline backup of the database is required after the reconfiguration; the DB2 database will not open until the backup is complete.
Example 18-15 DB2 command to configure archive logging to a filesystem

CONNECT TO TPCDB
QUIESCE DATABASE IMMEDIATE FORCE CONNECTIONS
UNQUIESCE DATABASE
CONNECT RESET
UPDATE DB CFG FOR TPCDB USING logarchmeth1 DISK:/var/DB2/archive_logs/TPCDB failarchpath /var/DB2/failed_logs newlogpath /var/DB2/active_logs
BACKUP DATABASE TPCDB TO /var/TPC_database_backups
6. When the database backup completes, you can restart TPC. Issue the commands shown in Example 18-16.
Example 18-16 Start TPC

/<usr or opt>/IBM/TPC/data/server/tpcdsrv1 start
/<usr or opt>/IBM/TPC/device/bin/linux/startTPCF.sh
Note: Remember to make the script executable by using the chmod command:
chmod +x /root/TPCBKP/TPC_backup_online_file
The second file is the DB2 scripted list of databases to back up:
/root/TPCBKP/database_list_online_file.txt
Example 18-18 File /root/TPCBKP/database_list_online_file.txt
[root@tpcblade6-11 ~]# /root/TPCBKP/TPC_backup_offline_file
Stopping Tivoli Storage Productivity Center services
----------------------------------------------------
Setting Variables for SANM
Stopping server1 with default options
ADMU0116I: Tool information is being logged in file /opt/IBM/TPC/device/apps/was/profiles/deviceServer/logs/server1/stopServer.log
ADMU0128I: Starting tool with the deviceServer profile
ADMU3100I: Reading configuration for server: server1
ADMU3201I: Server stop request issued. Waiting for stop status.
ADMU4000I: Server server1 stop completed.

Starting backup of the DB2 database
-----------------------------------
DB20000I  The FORCE APPLICATION command completed successfully.
DB21024I  This command is asynchronous and may not be effective immediately.

Restarting Tivoli Storage Productivity Center services
-------------------------------------------------------
Setting Variables for SANM
Starting server1 for Device Manager
9/30/10 5:41:57 PM GEN0198I: Server starting
ADMU0116I: Tool information is being logged in file /opt/IBM/TPC/device/apps/was/profiles/deviceServer/logs/server1/startServer.log
ADMU0128I: Starting tool with the deviceServer profile
ADMU3100I: Reading configuration for server: server1
ADMU3200I: Server launched. Waiting for initialization status.
ADMU3000I: Server server1 open for e-business; process id is 28700

Offline backup process complete
-------------------------------
Figure 18-4 Running an offline backup to a filesystem
Note: You must complete the initial setup steps that we detail in 18.7, Offline backup to Tivoli Storage Manager setup steps on page 595 before you can start to perform offline backups. Running an offline DB2 database backup takes TPC out of service for the period of the backup. Make sure it is acceptable to take TPC out of service before you proceed.
[root@tpcblade6-11 ~]# /root/TPCBKP/TPC_backup_offline_tsm
Stopping Tivoli Storage Productivity Center services
----------------------------------------------------
Setting Variables for SANM
Stopping server1 with default options
ADMU0116I: Tool information is being logged in file /opt/IBM/TPC/device/apps/was/profiles/deviceServer/logs/server1/stopServer.log
ADMU0128I: Starting tool with the deviceServer profile
ADMU3100I: Reading configuration for server: server1
ADMU3201I: Server stop request issued. Waiting for stop status.
ADMU4000I: Server server1 stop completed.

Starting backup of the DB2 database
-----------------------------------
DB20000I  The FORCE APPLICATION command completed successfully.
DB21024I  This command is asynchronous and may not be effective immediately.

Restarting Tivoli Storage Productivity Center services
-------------------------------------------------------
Setting Variables for SANM
Starting server1 for Device Manager
9/30/10 5:48:47 PM GEN0198I: Server starting
ADMU0116I: Tool information is being logged in file /opt/IBM/TPC/device/apps/was/profiles/deviceServer/logs/server1/startServer.log
ADMU0128I: Starting tool with the deviceServer profile
ADMU3100I: Reading configuration for server: server1
ADMU3200I: Server launched. Waiting for initialization status.
ADMU3000I: Server server1 open for e-business; process id is 30481

Offline backup process complete
-------------------------------
Figure 18-5 Running an offline backup to Tivoli Storage Manager
[root@tpcblade6-11 ~]# /root/TPCBKP/TPC_backup_online_tsm Starting backup of the DB2 database ----------------------------------Backup successful. The timestamp for this backup image is : 20100930175157
Online backup process complete ------------------------------Figure 18-6 Running an online backup to Tivoli Storage Manager
[root@tpcblade6-11 ~]# /root/TPCBKP/TPC_backup_online_file
Starting backup of the DB2 database
-----------------------------------
Backup successful. The timestamp for this backup image is : 20100930175729

Online backup process complete
------------------------------
Figure 18-7 Running an online backup to filesystem
The example in Figure 18-8 shows a directory containing backups taken on September 30, 2010 at different times. DB2 timestamps all backups in this way. Every time a backup is made, a new file with a name starting with TPCDB.0.db2inst1.NODE0000.CATN0000 is created, with the last part of its name made up of the backup timestamp (for example, 20100930175729). Plan to delete old backup files to suit the requirements of your backup and recovery policy.
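If you script this cleanup, a hedged sketch along the following lines deletes images older than 30 days. The 30-day figure is an example retention only; keep it consistent with how far back you retain archive logs, because a backup image without its logs cannot be rolled forward.

# Remove filesystem backup images older than 30 days (example retention only)
find /var/TPC_database_backups -type f -name "TPCDB.0.db2inst1.NODE0000.CATN0000.*" -mtime +30 -exec rm {} \;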
[root@tpcblade6-11 ~]# ls -R /var/DB2/archive_logs/TPCDB/
/var/DB2/archive_logs/TPCDB/:
db2inst1
/var/DB2/archive_logs/TPCDB/db2inst1:
TPCDB
/var/DB2/archive_logs/TPCDB/db2inst1/TPCDB:
NODE0000
/var/DB2/archive_logs/TPCDB/db2inst1/TPCDB/NODE0000:
C0000000
/var/DB2/archive_logs/TPCDB/db2inst1/TPCDB/NODE0000/C0000000:
S0000001.LOG  S0000002.LOG  S0000003.LOG  S0000004.LOG  S0000005.LOG  S0000006.LOG
Figure 18-9 DB2 archive logs
18.13.3 Managing backup versions that you store in Tivoli Storage Manager
This section describes how to maintain, view, and delete backup data and archive logs that you have sent to Tivoli Storage Manager. DB2 does not automatically prune backup versions and log files from Tivoli Storage Manager. You need to use the db2adutl tool to perform these housekeeping functions. Note: This section is not intended to be a comprehensive guide to the db2adutl tool. The intent here is to detail the commands that you likely need to maintain the data that is held in Tivoli Storage Manager on a regular basis.
Important: Make sure that the Tivoli Storage Manager archive retention policy that you use to store the DB2 logs is set for a sufficient period of time to allow recovery of your oldest database backup. However, you also want to make sure that the policy for the retention period is not so long that it wastes storage space in Tivoli Storage Manager.
[db2inst1@tpcblade6-11 ~]$ db2adutl query database TPCDB
Query for database TPCDB

Retrieving FULL DATABASE BACKUP information.
    1  Time: 20100930204416  Oldest log: S0000025.LOG  Sessions: 2  DB Partition Number: 0
    2  Time: 20100930204152  Oldest log: S0000024.LOG  Sessions: 2  DB Partition Number: 0
    3  Time: 20100930203906  Oldest log: S0000024.LOG  Sessions: 1  DB Partition Number: 0
    4  Time: 20100930202923  Oldest log: S0000023.LOG  Sessions: 1  DB Partition Number: 0
    5  Time: 20100930202350  Oldest log: S0000022.LOG  Sessions: 1  DB Partition Number: 0
    6  Time: 20100930201854  Oldest log: S0000021.LOG  Sessions: 1  DB Partition Number: 0
    7  Time: 20100930200626  Oldest log: S0000020.LOG  Sessions: 1  DB Partition Number: 0
    8  Time: 20100930194948  Oldest log: S0000018.LOG  Sessions: 2  DB Partition Number: 0
    9  Time: 20100930193637  Oldest log: S0000017.LOG  Sessions: 2  DB Partition Number: 0
   10  Time: 20100930192744  Oldest log: S0000017.LOG  Sessions: 1  DB Partition Number: 0
   11  Time: 20100930191237  Oldest log: S0000013.LOG  Sessions: 2  DB Partition Number: 0
   12  Time: 20100930184747  Oldest log: S0000008.LOG  Sessions: 2  DB Partition Number: 0

-- 8< ---- OUTPUT CLIPPED -- 8< ----

Retrieving LOG ARCHIVE information.
    Log file: S0000000.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-09-30-15.57.39
    Log file: S0000002.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-09-30-18.22.12
    Log file: S0000024.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-09-30-20.32.56
    Log file: S0000025.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-09-30-20.35.33
Figure 18-10 Sample output from a db2adutl query database TPCDB command
Important: Tivoli Storage Manager does not allow the root user to delete backups created by the db2inst1 instance, so log in with the user ID db2inst1 before trying these commands.
The first command deletes backup versions from Tivoli Storage Manager that are older than 90 days. This type of command is useful because you can easily script it to run each day to remove older backups.

db2adutl delete full older than 90 days

Or specify a database name:

db2adutl delete full older than 90 days database TPCDB

The second command deletes all backup versions from Tivoli Storage Manager except for the last five versions. Again, this command is useful when scripting an automatic process.

db2adutl delete full keep 5

Or specify a database name:

db2adutl delete full keep 5 database TPCDB

Figure 18-11 on page 614 gives you an example of running this command.
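Because db2adutl prompts for confirmation, a scheduled run is usually wrapped in a small script so that it does not wait for input. The following is a sketch only, under the assumption that the without prompting clause is available at your DB2 level and that your policy is to keep the last five full backups; run it as db2inst1, for example from that user's crontab.

#!/bin/sh
# Prune TSM-held TPCDB backups, keeping the five most recent full images
. /home/db2inst1/sqllib/db2profile
db2adutl delete full keep 5 db TPCDB without prompting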
[db2inst1@tpcblade6-11 ~]$ db2adutl delete full keep 5 database TPCDB
Query for database TPCDB

   Taken at: 20100930201854  DB Partition Number: 0  Sessions: 1
   Taken at: 20100930200626  DB Partition Number: 0  Sessions: 1
   Taken at: 20100930194948  DB Partition Number: 0  Sessions: 2
   Taken at: 20100930193637  DB Partition Number: 0  Sessions: 2
   Taken at: 20100930192744  DB Partition Number: 0  Sessions: 1
   Taken at: 20100930191237  DB Partition Number: 0  Sessions: 2
   Taken at: 20100930184747  DB Partition Number: 0  Sessions: 2

   Do you want to delete these backup images (Y/N)? Y
   Are you sure (Y/N)? Y

   The current delete transaction failed. You do not have sufficient authorization.
   Attempting to deactivate backup image(s) instead... Success.

Retrieving INCREMENTAL DATABASE BACKUP information.
   No INCREMENTAL DATABASE BACKUP images found for TPCDB

Retrieving DELTA DATABASE BACKUP information.
   No DELTA DATABASE BACKUP images found for TPCDB

Figure 18-11 Example of a db2adutl delete full keep 5 database TPCDB command
[db2inst1@tpcblade6-11 ~]$ db2adutl query database TPCDB
Query for database TPCDB

Retrieving FULL DATABASE BACKUP information.
    1  Time: 20100930204416  Oldest log: S0000025.LOG  Sessions: 2  DB Partition Number: 0
    2  Time: 20100930204152  Oldest log: S0000024.LOG  Sessions: 2  DB Partition Number: 0
    3  Time: 20100930203906  Oldest log: S0000024.LOG  Sessions: 1  DB Partition Number: 0
    4  Time: 20100930202923  Oldest log: S0000023.LOG  Sessions: 1  DB Partition Number: 0
    5  Time: 20100930202350  Oldest log: S0000022.LOG  Sessions: 1  DB Partition Number: 0

Retrieving INCREMENTAL DATABASE BACKUP information.
    No INCREMENTAL DATABASE BACKUP images found for TPCDB

Retrieving DELTA DATABASE BACKUP information.
    No DELTA DATABASE BACKUP images found for TPCDB

Retrieving TABLESPACE BACKUP information.
    No TABLESPACE BACKUP images found for TPCDB

Retrieving INCREMENTAL TABLESPACE BACKUP information.
    No INCREMENTAL TABLESPACE BACKUP images found for TPCDB

Retrieving DELTA TABLESPACE BACKUP information.
    No DELTA TABLESPACE BACKUP images found for TPCDB

Retrieving LOAD COPY information.
    No LOAD COPY images found for TPCDB

Retrieving LOG ARCHIVE information.
    Log file: S0000000.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-09-30-15.57.39
    Log file: S0000002.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-09-30-18.22.12
    Log file: S0000024.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-09-30-20.32.56
    Log file: S0000025.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-09-30-20.35.33
Look at the oldest log number against the oldest backup version. After we deleted the older backups as shown in Figure 18-11 on page 614, the oldest log that is still needed is S0000022.LOG. Then, look at the list of log archive files from the same output to see whether there are any earlier logs. If there are earlier logs and you do not want to wait for Tivoli Storage Manager to expire them, use the following command to delete them. See Figure 18-13 on page 617.
db2adutl delete logs between S0000000 and S0000002 database TPCDB

Tip: When specifying log numbers, you need to add the S at the start of the number but not the .LOG at the end.

[db2inst1@tpcblade6-11 ~]$ db2adutl delete logs between S0000000 and S0000002 database TPCDB
Query for database TPCDB

Retrieving LOG ARCHIVE information.
   Log file: S0000000.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-09-30-15.57.39
      Do you want to delete this log image (Y/N)? Y
      Are you sure (Y/N)? Y
   Log file: S0000002.LOG, Chain Num: 0, DB Partition Number: 0, Taken at: 2010-09-30-18.22.12
      Do you want to delete this log image (Y/N)? Y
      Are you sure (Y/N)? Y
[db2inst1@tpcblade6-11 ~]$ db2adutl verify full taken at 20100930204152 db TPCDB Query for database TPCDB
Please wait.
FULL DATABASE BACKUP image:
   ./TPCDB.0.db2inst1.NODE0000.CATN0000.20100930204152.001, DB Partition Number: 0
   ./TPCDB.0.db2inst1.NODE0000.CATN0000.20100930204152.002, DB Partition Number: 0

Do you wish to verify this image (Y/N)? Y
Verifying file: ./TPCDB.0.db2inst1.NODE0000.CATN0000.20100930204152.001
##############
Read 0 bytes, assuming we are at the end of the image
./TPCDB.0.db2inst1.NODE0000.CATN0000.20100930204152.002
## WARNING only partial image read, bytes read: 16384 of 16781312
Read 0 bytes, assuming we are at the end of the image Image Verification Complete - successful.
Retrieving INCREMENTAL DATABASE BACKUP information. No INCREMENTAL DATABASE BACKUP images found for TPCDB
Retrieving DELTA DATABASE BACKUP information. No DELTA DATABASE BACKUP images found for TPCDB
Figure 18-14 Performing a backup verification
If the verification fails, that backup is not usable and you will need to take a new one.
database backup on a 24 hour cycle, you lose updates to the TPC repository that were made between these points. When you configure archive logging, you have the ability to restore a backup and then roll forward through the logs to any point in time, which minimizes data loss. This gives you an enhanced level of protection for the TPC repository data at the expense of more complexity in the process. You cannot simply restore a backup taken online as is, because an online backup is not logically consistent in its own right. Following an online restore, some roll forward is necessary to bring the restored database to a consistent and usable state. Finally, we do not intend for this section to be a comprehensive guide to the DB2 restore commands. We intend to give you the basic restore functions that you need to recover a database from both filesystem and Tivoli Storage Manager backups. See IBM DB2 Universal Database Data Recovery and High Availability Guide and Reference, SC27-2441, for detailed information about this subject.
[db2inst1@tpcblade6-11 ~]$ ls -l /var/TPC_database_backups
total 2017944
-rw------- 1 db2inst1 db2iadm1 218177536 Sep 30 14:13 TPCDB.0.db2inst1.NODE0000.CATN0000.20100930141335.001
-rw------- 1 db2inst1 db2iadm1 234958848 Sep 30 17:34 TPCDB.0.db2inst1.NODE0000.CATN0000.20100930173433.001
-rw------- 1 db2inst1 db2iadm1 218177536 Sep 30 17:39 TPCDB.0.db2inst1.NODE0000.CATN0000.20100930173932.001
-rw------- 1 db2inst1 db2iadm1 218177536 Sep 30 17:41 TPCDB.0.db2inst1.NODE0000.CATN0000.20100930174149.001
-rw------- 1 db2inst1 db2iadm1 234958848 Sep 30 17:57 TPCDB.0.db2inst1.NODE0000.CATN0000.20100930175729.001
-rw------- 1 db2inst1 db2iadm1 234958848 Sep 30 19:00 TPCDB.0.db2inst1.NODE0000.CATN0000.20100930190048.001
-rw------- 1 db2inst1 db2iadm1 234958848 Sep 30 19:10 TPCDB.0.db2inst1.NODE0000.CATN0000.20100930191010.001
-rw------- 1 db2inst1 db2iadm1 234958848 Sep 30 19:10 TPCDB.0.db2inst1.NODE0000.CATN0000.20100930191031.001
-rw------- 1 db2inst1 db2iadm1 234958848 Sep 30 19:10 TPCDB.0.db2inst1.NODE0000.CATN0000.20100930191041.001
Figure 18-15 Viewing backup versions available for restore
From the filename, we extract the backup image timestamp. In this case: 20100930141335 You need this timestamp number for the next step, Restore the TPCDB database (offline) on page 621.
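If you script the restore, you can extract the newest timestamp from the image names instead of copying it by hand. A small sketch, assuming the naming pattern shown in Figure 18-15:

# Print the timestamp of the most recent backup image in the backup directory
ls -1 /var/TPC_database_backups/TPCDB.0.db2inst1.NODE0000.CATN0000.*.001 | sed 's/.*CATN0000\.\([0-9]*\)\..*/\1/' | sort -n | tail -1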
[db2inst1@tpcblade6-11 ~]$ db2adutl query full database TPCDB
Query for database TPCDB

Retrieving FULL DATABASE BACKUP information.
    1  Time: 20100930204416  Oldest log: S0000025.LOG  Sessions: 2  DB Partition Number: 0
    2  Time: 20100930204152  Oldest log: S0000024.LOG  Sessions: 2  DB Partition Number: 0
    3  Time: 20100930203906  Oldest log: S0000024.LOG  Sessions: 1  DB Partition Number: 0
    4  Time: 20100930202923  Oldest log: S0000023.LOG  Sessions: 1  DB Partition Number: 0
    5  Time: 20100930202350  Oldest log: S0000022.LOG  Sessions: 1  DB Partition Number: 0

Retrieving INCREMENTAL DATABASE BACKUP information.
    No INCREMENTAL DATABASE BACKUP images found for TPCDB

Retrieving DELTA DATABASE BACKUP information.
    No DELTA DATABASE BACKUP images found for TPCDB
Figure 18-16 Command db2adutl example to query backup versions available
From the list, select a backup timestamp. For example: 20100930204416 You need this timestamp number for the next step.
To restore from filesystem backups, issue the commands in Example 18-20 in the DB2 command line processor using the timestamps that you have selected.
Example 18-20 Restore command from filesystem backups
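The command text for this example did not survive formatting in this copy; based on the dialog shown in Figure 18-17 on page 622, the filesystem restore takes the following form (substitute the timestamp that you selected):

restore database TPCDB from /var/TPC_database_backups taken at 20100930141335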
If you restore from Tivoli Storage Manager, use the commands that are shown in Example 18-21.
Example 18-21 Restore command from Tivoli Storage Manager backups
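The command text for this example was also lost; as a sketch using the standard DB2 syntax for restoring from Tivoli Storage Manager, with the timestamp selected from the db2adutl query above, it would be:

restore database TPCDB use TSM taken at 20100930204416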
Figure 18-17 on page 622 shows an example of the restore process dialog for the TPCDB database restore process from a filesystem.

db2 => restore database TPCDB from /var/TPC_database_backups taken at 20100930141335
SQL2539W  Warning!  Restoring to an existing database that is the same as the backup image database. The database files will be deleted.
Do you want to continue ? (y/n) y
DB20000I  The RESTORE DATABASE command completed successfully.
Figure 18-17 Example of offline restore of TPCDB from a filesystem
Example 18-23 Linux commands to stop TPC
/<usr or opt>/IBM/TPC/data/server/tpcdsrv1 stop
/<usr or opt>/IBM/TPC/device/bin/linux/stopTPCF.sh
To restore the database from filesystem backups, issue the commands in Example 18-24 in the DB2 command line processor using the timestamp that you have selected.
Example 18-24 Restore command from filesystem backups
restore database TPCDB from /var/TPC_database_backups taken at 20100930214725

If you restore from Tivoli Storage Manager, use different commands as in Example 18-25.
Example 18-25 Restore command from Tivoli Storage Manager backups
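As with Example 18-21, the command text did not survive formatting here; the Tivoli Storage Manager form is a sketch along these lines, where <timestamp> is the online backup image that you selected from the db2adutl query:

restore database TPCDB use TSM taken at <timestamp>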
Figure 18-18 on page 623 shows an example of the restore process dialog for the TPCDB database restore from filesystem.
db2 => restore database TPCDB from /var/TPC_database_backups taken at 20100930214725
SQL2539W  Warning!  Restoring to an existing database that is the same as the backup image database. The database files will be deleted.
Do you want to continue ? (y/n) y
DB20000I  The RESTORE DATABASE command completed successfully.
Figure 18-18 Example of online restore of TPCDB from filesystem
db2 => rollforward database TPCDB to end of logs and complete

                                 Rollforward Status

 Input database alias                   = TPCDB
 Number of nodes have returned status   = 1

 Node number                            = 0
 Rollforward status                     = not pending
 Next log file to be read               =
 Log files processed                    = S0000000.LOG - S0000000.LOG
 Last committed transaction             = 2010-10-01-04.47.28.000000 UTC

DB20000I  The ROLLFORWARD command completed successfully.
Figure 18-19 Roll forward TPCDB to the end of the logs and complete
Note: By default, DB2 uses UTC-0 time for the point-in-time roll forward. Add the use local time flag to the command if you want to specify a time in your local time zone.

Follow these steps:
1. Use the DB2 command line processor, as seen in Figure 18-20 on page 625, to enter the rollforward command. In this example, we rolled the TPCDB database forward to a few minutes after the restore time. We entered the time using the use local time option.
2. Enter the point-in-time as YYYY-MM-DD-HH.MM.SS. The command for the TPCDB database is:
   rollforward database TPCDB to 2010-09-30-22.40 using local time and complete
db2 => rollforward database TPCDB to 2010-09-30-22.40 using local time and complete

                                 Rollforward Status

 Input database alias                   = TPCDB
 Number of nodes have returned status   = 1

 Node number                            = 0
 Rollforward status                     = not pending
 Next log file to be read               =
 Log files processed                    = S0000000.LOG - S0000000.LOG
 Last committed transaction             = 2010-09-30-22.35.52.000000 Local

DB20000I  The ROLLFORWARD command completed successfully.
Notice that the actual last committed transaction time is slightly different than the time that is requested in the roll forward. This is the closest that DB2 can get to the requested time and still keep the database in a consistent state.
an older version of the database, out in the environment. To correct this problem, you need to instruct the orphaned agents to re-register themselves with the TPC server. Reinstall the agents with the force parameter, either by using the Agent command or with a deployment job from the GUI.
Chapter 19. Useful information
In this chapter we provide useful information we encountered in the writing of this book:
- Alert Logs
- Backup and Recovery
- Alerts shown at login
- How to setup
- Configure to avoid DS8K Certificate errors
- Profiles
- Workload Provisioning
- Disaster Recovery
- Backup DB2 database and restore
- High Availability of TPC Server
- Clustering
- Database sizing
- Security
If you check your products and then click the View details button at the bottom of the page, you get a list of only the selected products, to which you can subscribe using RSS technology.
If you see both error messages, you have to perform both of the following steps, although usually this happens only one time. The second problem needs to be fixed in the Internet Explorer configuration only once, but the first message is issued for any new website that provides a certificate that is not yet stored in your certificate store.
3. You need to restart Internet Explorer for the changes to take effect.
3. Click the Certificate Error box. You are presented with the box shown in Figure 19-4 on page 631.
4. Click the 'View certificates' link at the bottom of the dialog window. You are presented with the Certificate dialog box as shown in Figure 19-6 on page 632.
5. Click the 'Install Certificate...' button. The Certificate Import Wizard is presented. Click the 'Next' button.
6. Select the 'Place all certificates in the following store' radio button as shown in Figure 19-8 on page 633.
7. Click the 'Browse' button, and you will see the Select Certificate Store dialog as shown in Figure 19-9.
8. Check the 'Show physical stores' checkbox.
9. Expand the 'Trusted Root Certification Authorities' tree entry.
10. Select the 'Local Computer' sub-tree entry.
11. Click the 'OK' button to return to the Certificate Import Wizard, as shown in Figure 19-8 on page 633.
12. Click the 'Next' button, and you will see Figure 19-10 on page 634.
13. Click the 'Finish' button.
14. Click the 'OK' button on the message box that states the import was successful.
15. Click the 'OK' button to close the 'Certificate' window.
16. Close the browser and re-launch the website you tried to access.
3. Find the document titled Help to Find the Supported Products and Platforms Interoperability Matrix Links, and open it. See Figure 19-12 on page 636.
Figure 19-12 Help to Find the Supported Products and Platforms Interoperability Matrix Links
4. The webpage has some tables that include links to the individual support matrices; see Figure 19-13 on page 637.
Appendix A.
The location of the data on the disk can be controlled, if this is allowed by the operating system. If all table data is in a single table space, a table space can be dropped and redefined with less overhead than dropping and redefining a table. In general, a well-tuned set of DMS table spaces will outperform SMS table spaces. In general, small personal databases are easiest to manage with SMS table spaces. On the other hand, for large, growing databases, you will probably only want to use SMS table spaces for the temporary table spaces, and separate DMS table spaces, with multiple containers, for each table. In addition, you will probably want to store long field data and indexes in their own table spaces.
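As a generic illustration of the difference (not TPC-specific, with hypothetical table space names, paths, and sizes), a DMS table space with multiple file containers and an SMS table space backed by a directory are created with DDL along these lines:

-- DMS table space with two file containers; container sizes are in pages
CREATE TABLESPACE userdata_dms
  MANAGED BY DATABASE
  USING (FILE '/db2data1/userdata_c1' 25600,
         FILE '/db2data2/userdata_c2' 25600);

-- SMS table space backed by a directory, typically used for temporary data
CREATE USER TEMPORARY TABLESPACE usertemp_sms
  MANAGED BY SYSTEM
  USING ('/db2temp/usertemp_sms');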
Appendix B. Worksheets
In this appendix, we provide worksheets that are intended for you to use during the planning and the installation of the Tivoli Storage Productivity Center. The worksheets are meant to be examples. Therefore you can decide whether you need to use them, for example, if you already have all or most of the information collected somewhere. Note: If the tables are too small for your handwriting, or you want to store the information in an electronic format, simply use a word processor or spreadsheet application, and use our examples as a guide, to create your own installation worksheets. This appendix contains the following worksheets: User IDs and passwords Storage device information
Server information
Table B-1 contains detailed information about the servers that comprise the Tivoli Storage Productivity Center environment.
Table B-1 Productivity Center server
Server machine:
  Host name:                  _____________________
  IP address:                 ____.____.____.____
  Configuration information:  _____________________
In Table B-2, simply mark whether a manager or a component will be installed on this machine.
Table B-2 Managers/components installed
Manager/component                      Installed (y/n)?
Productivity Center for Disk           ________
Productivity Center for Replication    ________
Productivity Center for Data           ________
Tivoli Agent Manager                   ________
DB2                                    ________
Enter the user IDs and passwords that you used during the installation in Table B-4. Depending on the selected managers and components, certain lines are not used for this machine.
Table B-4 User IDs used on this machine
Element                                              Default/recommended user ID   Enter user ID   Enter password
DB2 DAS User                                         db2admin (a)                  _____________   _____________
DB2 Instance Owner                                   db2inst1                      _____________   _____________
DB2 Fenced User                                      db2fenc1                      _____________   _____________
Resource Manager                                     manager (b)                   _____________   _____________
Host Authentication                                                                _____________   _____________
Tivoli Storage Productivity Center Admin user (c)    tpcsuid (a)                   _____________   _____________
IBM WebSphere (c)                                                                  _____________   _____________
TPC for Replication Administrator (c)                                              _____________   _____________

a. This account can have any name you choose.
b. This account name cannot be changed during the installation.
c. If LDAP Authentication is selected, this value is overwritten.
LDAP information
If you plan to use an LDAP compliant directory server for authentication, you will be required to provide additional information during the installation of TPC. Contact your LDAP administrator and gather the required information.
Table B-5 LDAP information
Element                              Default/recommended value   Actual value
LDAP Server Hostname                                             ____________
LDAP Port Number                     389                         ____________
Bind Distinguished Name                                          ____________
Bind Password                                                    ____________
Relative DN for user names                                       ____________
Attribute to use for user names      uid                         ____________
Relative DN for groups                                           ____________
Attribute to use for groups          cn                          ____________
LDAP TPC Administrator user name                                 ____________
LDAP TPC Administrator password                                  ____________
LDAP TPC Administrator group                                     ____________
IBM DS4000
Use Table B-7 to collect the information about your DS4000 devices.
Table B-7 IBM DS4000 devices
Name, location, organization   Firmware level   IP address   CIM agent host name and protocol
____________________________   ______________   __________   ________________________________
Appendix C.
After download, install the openssl package for AIX using rpm.
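For example, assuming the RPM file was downloaded into the current directory (the exact file name depends on the version available at the time you download it):

rpm -Uvh openssl-*.rpm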
openssh

The third component that is used on AIX in this solution is the OpenSSH on AIX package. It is available on the open source software web site, sourceforge.net. The open source software project OpenSSH on AIX can be reached via:
https://sourceforge.net/projects/openssh-aix/
Access the latest version of OpenSSH on AIX and download the package. After download, install the OpenSSH on AIX package using smitty / installp.
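A sketch of the installp step, assuming the installation images were extracted into the current directory and the usual openssh.base fileset name (verify the fileset names shipped in the package you downloaded):

# Apply and commit the OpenSSH filesets, accepting the license and expanding filesystems if needed
installp -acgXY -d . openssh.base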
Xming installation
Download the setup.exe as follows: 1. On the Windows workstation you want to use to receive the X11 forwarding, double-click on the installer file (Xming-6-9-0-31-setup.exe in our case). The Xming setup will start and welcome you with a panel similar to the one shown in Figure C-2.
2. Click Next to continue with the installation. A new window similar to Figure C-3 shows the directory where Xming will be installed. You can leave the default or choose a different directory. Click Next.
3. Then the install programs asks you to select the software components to be installed. We recommend leaving the defaults as shown in Figure C-4. Make sure that Normal PuTTY Link SSH client or Portable PuTTY Link SSH client is selected; this will be necessary to enable X11 forwarding. Click Next to continue.
4. The next dialog offers to create a Start menu folder and shortcuts (see Figure C-5). Click Next to continue.
5. This dialog lets you select additional shortcuts to be created. You can leave the defaults as shown in Figure C-6. Click Next to continue.
6. Before the installation begins, you can review your selections from the previous steps (see Figure C-7). Click Install to continue.
7. You will see a progress bar showing the installation status (see Figure C-8).
8. When the installation is complete, you will see a confirmation screen (Figure C-9). Click Finish to launch Xming and close the installer.
3. On the Session type selection screen, choose Start a program (see Figure C-11). Click Next to continue.
4. Now select xterm as the program to be started, and Run Remote using PuTTY (plink.exe). Enter your AIX host name, the user you will log in as (typically root), and the user password (see Figure C-12). Click Next to continue.
5. On this screen you can specify additional settings and enable clipboard capability for your X Windows environment. You can leave the defaults (see Figure C-13). Click Next to continue.
6. Finally, you can save the connection settings to use them in future sessions as shown in Figure C-14. If you decide to do so, you can also save your password for an automatic login the next time you launch this connection. Click Next to continue.
7. An xterm window will appear (see Figure C-15). Congratulations! You can now launch graphical programs on your AIX server from your Windows workstation.
VNC Server
If you do not want to use programs such as Xming running on the Windows workstation, you can install and configure a VNC server on the AIX server. VNC exports the AIX display, making it accessible through either a VNC viewer application or a current web browser with Java enabled running on the user's workstation. We describe the steps needed to achieve this.
The VNC server for AIX is available for download from the IBM AIX Toolbox for Linux site at:
http://www-03.ibm.com/systems/power/software/aix/linux/toolbox/alpha.html
Follow these installation steps:
1. Download the vnc RPM package from the AIX Toolbox site (the file name is vnc-3.3.3r2-6.aix5.1.ppc.rpm at the time of this writing).
2. Install the RPM package on the AIX server using the following command:
   rpm -Uhv vnc-3.3.3r2-6.aix5.1.ppc.rpm
3. Define a password for VNC access using the command:
   vncserver
4. Start the VNC service using the command:
   vncserver

   tpcblade4-14v3> vncserver
   New 'X' desktop is tpcblade4-14v3:1
   Starting applications specified in /home/root/.vnc/xstartup
   Log file is /home/root/.vnc/tpcblade4-14v3:1.log
5. Open your web browser and enter the name or IP address of your AIX server and port number 580X, where X is your assigned display number as seen in step 4. In our case, the VNC display is :1, so we use port 5801.
6. Log on using the password created in step 3, as seen in Figure C-16.
6. You should obtain an X session with a terminal console, as shown in Figure C-17. Now you are ready to launch graphical applications on your AIX server.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.
Other publications
These publications are also relevant as further information sources:
IBM Tivoli Storage Productivity Center: Installation and Configuration Guide, SC27-2337
IBM Tivoli Storage Productivity Center Users Guide Version 4.1, SC27-2338
IBM Tivoli Common Reporting Users Guide, SC23-8737
Online resources
These Web sites are also relevant as further information sources:
Tivoli Storage Productivity Center support site:
https://www-01.ibm.com/software/sysmgmt/products/support/IBMTotalStorageProductivityCenterStandardEdition.html
IBM Software Support Lifecycle webpage:
http://www-01.ibm.com/software/support/lifecycle/index_t.html
PartnerWorld Technical Delivery Assessment:
https://www-304.ibm.com/jct09002c/partnerworld/wps/servlet/ContentHandler/LLIE-6M7NYY/lc=en_US
Tivoli Integrated Portal demonstration:
http://www14.software.ibm.com/webapp/download/demo.jsp?id=Tivoli+Integrated+Portal+Walkthrough+Aug08&locale=en
Tivoli Integrated Portal documentation:
http://publib.boulder.ibm.com/infocenter/tivihelp/v15r1/index.jsp?topic=/com.ibm.tip.doc/ctip_install_overview.html
LDAP information:
http://en.wikipedia.org/wiki/IBM_Lightweight_Third-Party_Authentication
Tivoli Storage Productivity Center Standard Edition Web site:
http://www-947.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Productivity_Center_Standard_Edition
Tivoli Enterprise Portal site:
http://publib.boulder.ibm.com/infocenter/tivihelp/v15r1/topic/com.ibm.itm.doc/itm610usersguide.htm
IBM Tivoli Monitoring:
http://publib.boulder.ibm.com/infocenter/tivihelp/v15r1/index.jsp?topic=/com.ibm.itm.doc_6.1/welcome.htm
DB2 install requirements:
http://www.ibm.com/software/data/db2/udb/sysreqs.html
IBM Certification Web site:
certify@us.ibm.com
IBM training site:
http://www.ibm.com/training
Index
A
Active Directory authentication 49 administrative rights DB2 user 28 agent issues on restore 551, 588, 627 Agent Manager support 11 agents supported 5 AIN install Database administrator 140 LDAP authentication 142 AIX installation media 125 AIX install 122 Advanced security roles 142 authentication 144 command line 125 configure DB2 127 crfs command 131 Custom installation 133 DAS user 127 Data Server Account Password 142 database repository location 128 database schema 132, 134, 141 database verification 137 DB2 communication 128 DB2 Enterprise Server Edition 126 DB2 installation 126 DB2 instance 128 DB2 license 129 DB2 SAMPLE database 129 DB2 table spaces 135 DB2 user passwords 127 DB2 verification 129 db2iadm1 group 127 db2profile script 132 disable TPC 165 disk space 124 ESE DB2 126 fenced user 127 gunzip command 131 installation media 125, 131 installer 125 NAS discovery 142 prereqs 124 product media 122 progress indicator 147 RAM requirements 140 restart DB2 129 root account 138 software prerequisites 124 source instance profile 128 SRA install 140 superuser role 142 TIP instance 143 TIP ports 143 TPC for Replication install 147 TPC-R license 152 TPC-R url 152 TPC-Replication 147 verify success 154 VNC viewer 131 WebSphere Application Server ports 150 WebSphere user 142 X-Windows display 131 alert notification 6 analytics functions 2 anonymous binds 50 APAR PK77578 52 architecture overview 3 archive log files on filesystem 535, 574, 612 archive logging 515, 555, 593 parameter choices 525, 564, 602 autorun enabled 25
B
backup TPC server directories 533, 573, 611 backup considerations 533, 573, 611 Basic Edition 7 batch report type 320 Brocade 8000 478 Brocade Data Center Fabric Manager 12 Brocade DCFM 480 Brocade switches 478
C
candidate storage resource group 12 capacity information 279, 282 CD layout 19, 122 CIM agent 649 circular logging 515, 555, 593 Cisco Nexus 5000 Series 479 Cisco Nexus switches 478 Common Agent Services 11 Configure Devices wizard 10 Converged Enhanced Ethernet 478 Converged Network Adapter 479 Converged Network Adaptor 478 Create database schema 37 crfs command 125 Custom install 18, 76, 179 Custom installation 36 custom installation 133 cygwin installation 655
D
daily samples
History Aggregation 496 DAS user 127 Data Server port 9549 47, 108 Data server 4 database attention required 30 backup output destination 515, 555, 593 offline backup 514, 555, 593 online backup 515, 555, 593 database backup archive log files 535, 574, 612 archive logging 524, 563, 601 archive logging to filesystem 528, 567, 605 delete versions 538, 577, 615 list of backup versions 542, 582, 621 managing versions 534, 573, 611 online 532, 572, 610 online backup script to filesystem 530, 568, 606 online backup to filesystem 533, 572, 610 online backups to filesystem 528, 566, 604 online TSM setup steps 524, 563, 601 parameter num_db_backups 517, 557, 595 rec_his_retentn parameter 517, 557, 595 setup steps 516, 556, 594 to filesystem 518, 557, 595 TSM password 522, 561, 599 database backups offline 531, 569, 607 database instance 4 database instance error 43 database managed space 641 database restore 541, 581, 620 agent issues 551, 588, 627 circular logging 541, 581, 620 from filesystem 545, 584, 623 Roll-forward 549, 587, 626 Roll-Forward Pending 547, 586, 625 database scripts 514, 554, 592 database sizing 496 ESS 498 Fabric performance data 500 TPC for Data 501 worksheets 498 databbase restore offline backups 542, 582, 621 DB2 Agent Manager support 11 e-mail notifications 92 DB2 9.7 8 DB2 Administration Server 28 DB2 database 4 performance 40 TPC-R 14 DB2 database sizing 40 DB2 fenced user 90 DB2 hints 639 DB2 install 2425, 78, 124 verify 42 DB2 installation
verify installation 33 DB2 instance owner 89 DB2 license AIX 129 DB2 log files 43 DB2 parameters Failed log path 525, 565, 603 Primary log path 525, 565, 603 DB2 Setup wizard 82 DB2 table space 642 DB2 tools catalog 30, 91 DB2 user account 28 DB2 user passwords 127 db2adutl command search backup image 543, 583, 622 db2adutl tool 536, 575, 613 db2sampl command 34, 95 DCFM managed fabrics 487 Device Server port 9550. 47, 108 Device server 4 device support matrix associated CIM Agent 649 disable Tivoli Storage Productivity Center 163 disable TPC 65 DMS table space 642 DNS suffix 21 domain name system 21 DS Storage Manager 9 DS8000 encrypted volumes 11 dsm.opt file 521, 560, 598 dsmapipw command 522, 561, 599
E
encrypted volumes 11 EPM 219 Event Integration Facility 6 External Process Manager 219
F
Fabric agent install 62 Fabric management 2 fabrics user defined properties 630 FCoE support 478 fenced user 127 free disk space 124
G
GUI support Windows 7 12 GUID 23 gunzip command 131
H
hardware prerequisites 21, 123
hardware relationships soft removal 13 health status Storage Resource Group 405 host bus adapter 8 hot backups 532, 572, 610
I
IBM ADE Service 166 IE certificate conflict 631 install authentication method 49 LDAP authentication 47 NAS discovery 48 RAM warning 44 remote database 46 sequence 35 TPC for Replication 55 verification 61 install planning disk space 124 install sequence 96 installation licenses 36, 179 IP address 646 ISD Storage Productivity Center 10
SA MP Base Component 86 TIP installation 113 TIP ports 110 TPC for Replication 114 Typical install 76 verification 119 Web media 77, 122 WebSphere Application Server ports 116 log files CLI 62 Device server 62 GUI 62
M
managing backup versions 534, 573, 611 memory requirement 181 Midrange Edition 7 Monitoring Agent 13
N
NAS discovery 48, 109 Native API EPM 219 native storage system interfaces 10 NETBIOS 23
J
Java 1.6 9 Job Management panel 10
O
offline backup 515, 555, 593 TSM server 519, 559, 597 offline backups 542, 582, 621 offline database backups 531, 569, 607 OMNIbus 6 online backup 516, 556, 594 online backup to filesystem 533, 572, 610 online backups to filesystem 528, 566, 604 Open HyperSwap 13 OpenSSH format 8 openssl package 654 organize backups on filesystem 534, 573, 611
L
LDAP authentication 49, 109 LDAP server anonymous binds 50 legacy agents 11 Legato NetWorker 515, 555, 593 Linux install 75 component install sequence 96 Custom install 76 Data Server Account Password 109 database schema 97, 103 DB2 environment variables 94 DB2 fenced user 90 DB2 instance owner 89 DB2 notification server 92 DB2 response file 85 DB2 setup wizard 82 DB2 tools catalog 91 db2iadm1 group access 94 db2prereqcheck 79 db2sampl command 95 db2setup 80 df -h command 89 Disk 2 content 77, 123 Host Authentication Password 109 mount point 96 NAS discovery 109 prereqs 78
P
Performance Monitors Hourly 497 performance monitoring task 497 performance monitors history aggregation 496 Perofrmance Monitors daily 497 Plan for migration, implementation and customization of the latest release 1 preinstall tasks 43 probe setup 275 Process Explorer 71 procexp64.exe 72 PuTTY 8
R
RAM requirements 140 RAM warning message 44 RDBMS Logins 274 recover image 9 Redbooks Web site 666 Contact us xv register a database 274 remote database 46 Replication product 4 Reporting capacity 279, 282 reporting for databases 272 repository calculation templates 505 resiliency profile 11 Resource Manager 647 response file 85 Roll forward databases to a point-in-time 550, 587, 626 to end of logs 549, 587, 626 roll-forward database restore 549, 587, 626 root account 138 rpm tool 654
stop TPC services 174 storage device 649 important information 649 Storage Resource agent daemon 62 Storage Resource agents 10 Storage Resource Group 403 alert overlay 403 health status 405 Storwize V7000 9 TPC support 9 subsystem storage management 2 support matrix 636 SVC 6.1 8 system managed space 641
T
table space 642 TIP portal 154 TIP ports 143 TIP TPC instance 143 Tivoli Common Reporting 4 Tivoli Enterprise Console 6 Tivoli Integrated Portal 4 stop 166 Tivoli Storage Manager 6 Tivoli STorage Productivity Center disable 163 Topology View Storage Resource Group 403 TotalStorage Productivity Center component install 34, 96 license 36, 179 server 649 TotalStorage Productivity Center (TPC) 645 TPC Basic Edition 7 Standard edition 7 TPC for Data database sizing 501 repository tables 502 T_STAT_FILE table 503 TPC for Disk 7 TPC for Replication disable 64 install 55 ports 58 TPC install memory requirements 181 TPC MRE 7, 332 licensing 332 supported devices 332 TPC Replication 7 TPC services 61 TPC superuser 142 TPC upgrade process explorer 71 TPC_backup_offline_file.bat script 531, 569, 607 TPCDB database 39, 45, 103 TPCDB database logs 494
S
SA MP Base Component 86 SAN Planner 11 SAN Volume Controller (SVC) 651 schema log files 43 schema name 40 Simple Network Managment Protocol 6 single sign-on 4, 9 Single Sign-On feature 58 sizing performance collection for ESS 507 sizing SAN Switch performance collection 509 SMS table space 642 SNMP MIB files 6 space-based planning 11 space-efficient volumes 11 SQL5005C error 639 SRA install 62 SRA install parameters installLoc 63 password 63 serverIP 63 serverPort 63 userID 63 SRM function 2 SSH key 10 SSPC 8 DS Storage Manager 9 host bus adapter 8 Java 1.6 9 power requirements 8 PuTTY 8 redundant hard disk 8 single sign-on 9 SVC 6.1 8 Standard Edition 7
TPC-R service disabled 161 TPC-R license 152 TPC-R url 152 TSM API environments variables 520, 559, 597 TSM backup script offline backup 524, 563, 601 TSM option file 521, 560, 598 TSM password 522, 561, 599 TSM server database backup 519, 559, 597 TSRMsrv1 ID 48 Typical install 19, 179 typical install 27 Typical install time 77 Typical installation 36
U
UPDATE DB CFG command 526, 529, 565, 568, 603, 606 upgrade failure 175 user ID 645, 647 user interface database access 5 user interfaces 5
V
version 630 version installed 67 VNC Viewer 131
W
WebSphere Application Server admin ID and password 109 Windows install DB2 reboot 43 worksheet 498 TPC for Data repository 510
X
X11 forwarding 653 Cygwin 654 openssl package 654 X11 graphical capability 79 XBSA 515, 555, 593 Xming 654 X-Windows display 131 X-Windows server application 131