0 and Later
20282-20 Rev. 1
Note: Before using this information and the product that it supports, read the information in Notices and Trademarks on page E-1.
1 Administration Overview
Administrator's Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Administration Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Initial System Setup and Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Netezza Software Directories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Managing the External Network Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5
Managing Domain Name Service (DNS) Updates . . . . . . . . . . . . . . . . . . . . . . . . 1-5
Setting up Remote Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Administration Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Other Netezza Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Specifying Non-Default NPS Port Numbers for Clients . . . . . . . . . . . . . . . . . . . 2-14
Creating Encrypted Passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-15
Using Stored Passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-16
Identifying the Active and Standby Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-5
Monitoring the Cluster and Resource Group Status . . . . . . . . . . . . . . . . . . . . . . . 4-6
nps Resource Group. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-7
Failover Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-8
Relocate to the Standby Node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
Safe Manual Control of the Hosts (And Heartbeat) . . . . . . . . . . . . . . . . . . . . . . . 4-9
Transition to Maintenance (Non-Heartbeat) Mode . . . . . . . . . . . . . . . . . . . . . . . 4-10
Transitioning from Maintenance to Clustering Mode . . . . . . . . . . . . . . . . . . . . . 4-11
Cluster Manager Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-12
Logging and Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-13
DRBD Administration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-13
Monitoring DRBD Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Sample DRBD Status Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
Split-Brain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
Administration Reference and Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
IP Address Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-17
Forcing Heartbeat to Shutdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-17
Shutting Down Heartbeat on Both Nodes without Causing Relocate . . . . . . . . . . 4-17
Restarting Heartbeat after Maintenance Network Issues . . . . . . . . . . . . . . . . . . 4-17
Resolving Configuration Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-18
Fixed a Problem, but crm_mon Still Shows Failed Items . . . . . . . . . . . . . . . . . . 4-18
Output From crm_mon Does Not Show the nps Resource Group . . . . . . . . . . . . . 4-18
Linux Users and Groups Required for HA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-19
Checking for User Sessions and Activity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-19
Hardware Management Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-13
Callhome File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Displaying Hardware Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Managing Hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Managing SPUs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-15
Managing Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-17
Managing Data Slices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-20
Displaying Data Slice Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-20
Monitor Data Slice Status. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-20
Regenerate a Data Slice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-21
Rebalance Data Slices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-23
Displaying the Active Path Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24
Handling Transactions during Failover and Regeneration . . . . . . . . . . . . . . . . . . 5-25
Automatic Query and Load Continuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-26
Power Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
PDU and Circuit Breakers Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Powering On the IBM Netezza 1000 and IBM PureData System for Analytics N1001 . . . 5-28
Powering Off the IBM Netezza 1000 or IBM PureData System for Analytics N1001 . . . 5-29
Powering on an IBM Netezza C1000 System . . . . . . . . . . . . . . . . . . . . . . . . . . 5-30
Powering off an IBM Netezza C1000 System . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
NEC InfoFrame DWH PDU and Circuit Breakers Overview . . . . . . . . . . . . . . . . . 5-32
Powering On the NEC InfoFrame DWH Appliance . . . . . . . . . . . . . . . . . . . . . . . 5-33
Powering Off an NEC InfoFrame DWH Appliance . . . . . . . . . . . . . . . . . . . . . . . 5-34
Resume the System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-7
Take the System Offline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8
Restart the System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8
Overview of the Netezza System Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8
System States during Netezza Start-Up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-10
System Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-11
System Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-12
Backup and Restore Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-12
Bootserver Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13
Client Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13
Database Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13
Event Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-14
Flow Communications Retransmit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Host Statistics Generator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Load Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Postgres . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Session Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-16
SPU Cores Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-16
Startup Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-16
Statistics Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17
System Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17
The nzDbosSpill File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17
System Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-18
Display Configuration Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-18
Changing the System Registry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-19
Specifying the Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-13
The sendMail.cfg File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-13
Aggregating Event E-mail Messages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-16
Creating a Custom Event Rule. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-18
Template Event Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-19
Specifying System State Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-19
Hardware Service Requested . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-20
Hardware Needs Attention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-21
Hardware Path Down . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-22
Hardware Restarted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-24
Specifying Disk Space Threshold Notification. . . . . . . . . . . . . . . . . . . . . . . . . . 7-24
Specifying Runaway Query Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-26
Monitoring the System State. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-27
Monitoring for Disk Predictive Failure Errors. . . . . . . . . . . . . . . . . . . . . . . . . . . 7-28
Monitoring for ECC Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-29
Monitoring Regeneration Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-29
Monitoring Disk Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-30
Monitoring Hardware Temperature. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-32
Monitoring System Temperature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-33
Query History Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-34
Monitoring SPU Cores . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-37
Monitoring Voltage Faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-37
Monitoring Transaction Limits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-38
Switch Port Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-39
Reachability and Availability Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-39
Event Types Reference. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-40
Network Interface State Change Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-40
Topology Imbalance Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-40
S-Blade CPU Core Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-41
Displaying Alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-41
Creating Netezza Database Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-6
Altering Netezza Database Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7
Deleting Netezza Database Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7
Creating Netezza Database Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7
Altering Netezza Database Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Deleting Netezza Database Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Security Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Administrator Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Object Privileges on Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-10
Object Privileges by Class. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-11
Scope of Object Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-11
Revoking Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-13
Privileges by Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-13
Indirect Object Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-15
Always Available Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-16
Creating an Administrative User Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-16
Logon Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-17
Local Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-17
LDAP Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-17
Commands Related to Authentication Methods. . . . . . . . . . . . . . . . . . . . . . . . . 8-19
Passwords and Logons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-20
Netezza Client Encryption and Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-22
Configuring the SSL Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-22
Configuring the Netezza Host Authentication for Clients . . . . . . . . . . . . . . . . . . 8-23
Commands Related to Netezza Client Connection Methods . . . . . . . . . . . . . . . . 8-26
Setting User and Group Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-26
Specifying User Rowset Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-27
Specifying Query Timeout Limits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-29
Specifying Session Timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-29
Specifying Session Priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-30
Logging Netezza SQL Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-30
Logging Netezza SQL Information on the Server . . . . . . . . . . . . . . . . . . . . . . . . 8-30
Logging Netezza SQL Information on the Client . . . . . . . . . . . . . . . . . . . . . . . . 8-30
Group Public Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-31
Best Practices for Disk Space Usage in Tables . . . . . . . . . . . . . . . . . . . . . . . . . . 9-2
Database and Table Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-4
Accessing Rows in Tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-4
Understanding Transaction IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-5
Creating Distribution Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-5
Selecting a Distribution Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-6
Criteria for Selecting Distribution Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-6
Choosing a Distribution Key for a Subset Table. . . . . . . . . . . . . . . . . . . . . . . . . . 9-6
Distribution Keys and Collocated Joins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-7
Dynamic Redistribution or Broadcasts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-7
Verifying Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-7
Avoiding Data Skew . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-8
Specifying Distribution Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-9
Viewing Data Skew . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-9
Using Clustered Base Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-11
Organizing Keys and Zone Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-12
Selecting Organizing Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-12
Reorganizing the Table Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-13
Copying Clustered Base Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-14
Updating Database Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-14
Maintaining Table Statistics Automatically. . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-15
Running the GENERATE STATISTICS Command . . . . . . . . . . . . . . . . . . . . . . . 9-16
Just in Time Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-16
Zone Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-17
Grooming Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-18
GROOM and the nzreclaim Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-19
Identifying Clustered Base Tables that Require Grooming . . . . . . . . . . . . . . . . . 9-19
About the Organization Percentage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-21
Groom and Backup Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-21
Managing Sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-21
Using the nzsession Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-22
Running Transactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-23
Transaction Control and Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-23
Transactions Per System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-23
Transaction Concurrency and Isolation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-24
Concurrent Transaction Serialization and Queueing, Implicit Transactions. . . . . . 9-24
Concurrent Transaction Serialization and Queueing, Explicit Transactions . . . . . . 9-25
Netezza Optimizer and Query Plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-26
Execution Plans. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-26
Displaying Plan Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-27
Analyzing Query Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-28
Viewing Query Status and History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-28
Using the Symantec NetBackup Connector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-33
Installing the Symantec NetBackup License. . . . . . . . . . . . . . . . . . . . . . . . . . 10-33
Configuring NetBackup for a Netezza Client . . . . . . . . . . . . . . . . . . . . . . . . . . 10-34
Integrating Symantec NetBackup to Netezza . . . . . . . . . . . . . . . . . . . . . . . . . 10-35
Procedures for Backing Up and Restoring Using Symantec NetBackup . . . . . . . 10-39
Using the IBM Tivoli Storage Manager Connector . . . . . . . . . . . . . . . . . . . . . . . . . 10-41
About the Tivoli Backup Integration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-41
Configuring the Netezza Host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-42
Configuring the Tivoli Storage Manager Server . . . . . . . . . . . . . . . . . . . . . . . . 10-46
Special Considerations for Large Databases . . . . . . . . . . . . . . . . . . . . . . . . . . 10-52
Running nzbackup and nzrestore with the TSM Connector . . . . . . . . . . . . . . . . 10-54
Host Backup and Restore to the TSM Server . . . . . . . . . . . . . . . . . . . . . . . . . 10-55
Backing up and Restoring Data Using the TSM Interfaces . . . . . . . . . . . . . . . . 10-56
Troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-57
Using the EMC NetWorker Connector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-59
Preparing your System for EMC NetWorker Integration . . . . . . . . . . . . . . . . . . 10-59
NetWorker Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-60
NetWorker Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-60
NetWorker Troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-65
Managing History Configurations Using NzAdmin . . . . . . . . . . . . . . . . . . . . . . . . . 11-14
Query History Views and User Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-15
Query History and Audit History Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-15
_v_querystatus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-16
_v_planstatus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-16
$v_hist_queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-18
$v_hist_successful_queries and $v_hist_unsuccessful_queries. . . . . . . . . . . . . 11-19
$v_hist_incomplete_queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-19
$v_hist_table_access_stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-20
$v_hist_column_access_stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-20
$v_hist_log_events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-21
$hist_version. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-22
$hist_nps_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-22
$hist_log_entry_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-23
$hist_failed_authentication_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . 11-23
$hist_session_prolog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . 11-24
$hist_session_epilog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . 11-26
$hist_query_prolog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-27
$hist_query_epilog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-28
$hist_query_overflow_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . 11-29
$hist_service_$SCHEMA_VERSION. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-30
$hist_state_change_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-31
$hist_table_access_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-32
$hist_column_access_$SCHEMA_VERSION. . . . . . . . . . . . . . . . . . . . . . . . . . 11-33
$hist_plan_prolog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-34
$hist_plan_epilog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-36
History Table Helper Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-36
FORMAT_QUERY_STATUS () . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-37
FORMAT_PLAN_STATUS () . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-37
FORMAT_TABLE_ACCESS() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-37
FORMAT_COLUMN_ACCESS() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-38
Example Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-38
Resource Sharing Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-2
Concurrent Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-3
Managing Short Query Bias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-4
Managing GRA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-6
Resource Percentages and System Resources. . . . . . . . . . . . . . . . . . . . . . . . . . 12-6
Assigning Users to Resource Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-7
Resource Groups Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-7
GRA Allocations Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-9
Resource Allocations for the Admin User . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-10
Allocations for Multiple Jobs in the Same Group. . . . . . . . . . . . . . . . . . . . . . . 12-11
Priority and GRA Resource Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-12
Guaranteed Resource Allocation Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-13
Tracking GRA Compliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-14
Monitoring Resource Utilization and Compliance . . . . . . . . . . . . . . . . . . . . . . 12-15
Managing PQE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-19
Netezza Priority Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-20
Managing the Gate Keeper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-21
The nzstats Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-15
To display table types and fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-15
To display a specific table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-15
nzcontents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-7
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-7
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-7
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-7
nzconvert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-8
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-8
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-8
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-8
nzds. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-8
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-8
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-9
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-11
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-11
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-11
nzevent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-12
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-12
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-12
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-13
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-16
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-17
nzhistcleanupdb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-17
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-17
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-18
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-18
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-19
nzhistcreatedb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-20
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-20
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-20
Outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-21
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-22
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-22
nzhostbackup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-22
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-23
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-23
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-23
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-24
nzhostrestore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-24
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-24
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-25
Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-25
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-26
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-26
nzhw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-26
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-27
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-27
Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-30
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-30
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-31
nzload . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-33
nzpassword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-33
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-33
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-33
Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-34
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-34
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-35
nzreclaim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-35
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-35
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-36
Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-36
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-36
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-37
nzrestore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-37
nzrev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-37
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-37
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-38
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-38
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-38
nzsession . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-39
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-39
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-39
Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-40
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-41
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-43
nzspupart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-43
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-44
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-44
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-44
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-46
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-46
nzstart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-47
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-47
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-47
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-47
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-48
nzstate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-48
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-48
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-49
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-49
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-50
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-50
nzstats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-50
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-50
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-51
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-51
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-52
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-53
nzstop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-53
Syntax Description. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-53
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-54
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-54
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-54
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-54
nzsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-55
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-55
Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-55
Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-56
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-56
Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-57
Customer Service Troubleshooting Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-58
nzconvertsyscase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-59
nzdumpschema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-61
nzinitsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-62
nzlogmerge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-62
SPU Configuration Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-10
Index
Tables
Table 7-8: Hardware Service Requested Event Rule . . . . . . . . . . . . . . . . . . . . 7-20
Table 7-9: Hardware Needs Attention Event Rule . . . . . . . . . . . . . . . . . . . . . . 7-22
Table 7-10: Hardware Path Down Event Rule . . . . . . . . . . . . . . . . . . . . . . . . . . 7-23
Table 7-11: Hardware Restarted Event Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-24
Table 7-12: Disk Space Event Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-25
Table 7-13: Threshold and States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-26
Table 7-14: Runaway Query Event Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-27
Table 7-15: SCSI Predictive Failure Event Rule . . . . . . . . . . . . . . . . . . . . . . . . 7-28
Table 7-16: ECC Error Event Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-29
Table 7-17: Regen Fault Event Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-30
Table 7-18: SCSI Disk Error Event Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-31
Table 7-19: Thermal Fault Event Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-32
Table 7-20: Sys Heat Threshold Event Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-33
Table 7-21: histCaptureEvent Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-34
Table 7-22: histLoadEvent Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-35
Table 7-23: SPU Core Event Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-37
Table 7-24: Voltage Fault Event Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-37
Table 7-25: Transaction Limit Event Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-39
Table 8-1: Administrator Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-9
Table 8-2: Object Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-10
Table 8-3: Netezza SQL Commands for Displaying Privileges . . . . . . . . . . . . . . 8-13
Table 8-4: Privileges by Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-14
Table 8-5: Indirect Object Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-15
Table 8-6: Authentication-Related Commands . . . . . . . . . . . . . . . . . . . . . . . . 8-19
Table 8-7: Client Connection-Related Commands . . . . . . . . . . . . . . . . . . . . . . 8-26
Table 8-8: User and Group Settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-26
Table 8-9: Public Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-31
Table 8-10: System Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-32
Table 9-1: Data Type Disk Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-2
Table 9-2: Table Skew . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-10
Table 9-3: Database Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-14
Table 9-4: Generate Statistics Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-15
Table 9-5: Automatic Statistics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-16
Table 9-6: cbts_needing_groom Input Options . . . . . . . . . . . . . . . . . . . . . . . . 9-20
Table 9-7: The 64th read/write Transaction Queueing . . . . . . . . . . . . . . . . . . . 9-25
Table 9-8: The _v_qrystat View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-29
Table 9-9: The _v_qryhist View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-29
Table 10-1: Choosing a Backup and Restore Method. . . . . . . . . . . . . . . . . . . . . 10-2
Table 10-2: Backup/Restore Commands and Content . . . . . . . . . . . . . . . . . . . . 10-3
Table 10-3: Retaining Specials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-5
Table 10-4: The nzbackup Command Options . . . . . . . . . . . . . . . . . . . . . . . . 10-11
Table 10-5: Environment Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-14
Table 10-6: Backup History Source . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-20
Table 10-7: Backup and Restore Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-21
Table 10-8: The nzrestore Command Options . . . . . . . . . . . . . . . . . . . . . . . . . 10-23
Table 10-9: Environment Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-27
Table 10-10: Backup History Target . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-31
Table 10-11: Restore History Source . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-33
Table 10-12: NetBackup Policy Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-34
Table 11-1: History Loader Settings and Behavior. . . . . . . . . . . . . . . . . . . . . . . 11-9
Table 11-2: _v_querystatus. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-16
Table 11-3: _v_planstatus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-16
Table 11-4: $v_hist_queries View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-18
Table 11-5: $v_hist_incomplete_queries View . . . . . . . . . . . . . . . . . . . . . . . . 11-19
Table 11-6: $v_hist_table_access_stats View . . . . . . . . . . . . . . . . . . . . . . . . . 11-20
Table 11-7: $v_hist_column_access_stats View . . . . . . . . . . . . . . . . . . . . . . . 11-20
Table 11-8: $v_hist_log_events View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-21
Table 11-9: $hist_version. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-22
Table 11-10: $hist_nps_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . . . 11-22
Table 11-11: $hist_log_entry_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . 11-23
Table 11-12: $hist_failed_authentication_$SCHEMA_VERSION. . . . . . . . . . . . . 11-23
Table 11-13: $hist_session_prolog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . 11-24
Table 11-14: $hist_session_epilog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . 11-26
Table 11-15: $hist_query_prolog_$SCHEMA_VERSION. . . . . . . . . . . . . . . . . . . 11-27
Table 11-16: $hist_query_epilog_$SCHEMA_VERSION. . . . . . . . . . . . . . . . . . . 11-28
Table 11-17: $hist_query_overflow_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . 11-29
Table 11-18: $hist_service_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . . . . . 11-30
Table 11-19: $hist_state_change_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . 11-31
Table 11-20: $hist_table_access_$SCHEMA_VERSION. . . . . . . . . . . . . . . . . . . 11-32
Table 11-21: $hist_column_access_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . 11-33
Table 11-22: $hist_plan_prolog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . 11-34
Table 11-23: $hist_plan_epilog_$SCHEMA_VERSION . . . . . . . . . . . . . . . . . . . 11-36
Table 12-1: Workload Management Feature Summary . . . . . . . . . . . . . . . . . . . . 12-2
Table 12-2: Short Query Bias Registry Settings. . . . . . . . . . . . . . . . . . . . . . . . . 12-5
Table 12-3: Sample Resource Sharing Groups . . . . . . . . . . . . . . . . . . . . . . . . . 12-7
Table 12-4: Assigning Resources to Active RSGs . . . . . . . . . . . . . . . . . . . . . . . 12-9
Table 12-5: Guaranteed Resource Allocation Settings . . . . . . . . . . . . . . . . . . . 12-13
Table 12-6: GRA Compliance Registry Settings . . . . . . . . . . . . . . . . . . . . . . . 12-14
Table 12-7: GRA Report Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-16
Table 12-8: Netezza Priorities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-20
Table 12-9: Gate Keeper Registry Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-22
Table 13-1: Netezza Groups and Tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-1
Table 13-2: Database Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-2
Table 13-3: DBMS Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-3
Table 13-4: Host CPU Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-3
Table 13-5: Host File System Table. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-4
Table 13-6: Host Interfaces Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-4
Table 13-7: Host Management Channel Table . . . . . . . . . . . . . . . . . . . . . . . . . 13-6
Table 13-8: Host Network Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-7
Table 13-9: Host Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-8
Table 13-10: Hardware Management Channel Table . . . . . . . . . . . . . . . . . . . . . . 13-9
Table 13-11: Per Table Data Slice Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-10
Table 13-12: Query Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-10
Table 13-13: Query History Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-11
Table 13-14: SPU Partition Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-12
Table 13-15: SPU Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-13
Table 13-16: System Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-13
Table 13-17: Table Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-14
Table A-1: Command Line Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
Table A-2: Administrator Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-4
Table A-3: Object Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-5
Table A-4: nzds Input Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-9
Table A-5: nzds Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-11
Table A-6: nzevent Input Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-12
Table A-7: nzevent Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-13
Table A-8: nzhistcleanupdb Input Options . . . . . . . . . . . . . . . . . . . . . . . . . . . A-18
Table A-9: nzhistcreatedb Input Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-20
Table A-10: nzhistcreatedb Output Messages . . . . . . . . . . . . . . . . . . . . . . . . . . A-21
Table A-11: nzhostbackup Input Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-23
Table A-12: nzhostrestore Input Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-25
Table A-13: nzhostrestore Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-25
Table A-14: nzhw Input Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-27
Table A-15: nzhw Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-30
Table A-16: nzpassword Input Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-33
Table A-17: nzpassword Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-34
Table A-18: nzreclaim Input Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-36
Table A-19: nzreclaim Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-36
Table A-20: nzrev input Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-38
Table A-21: nzsession Input Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-39
Table A-22: nzsession Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-40
Table A-23: Session Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-42
Table A-24: nzspupart Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-44
Table A-25: nzspupart Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-44
Table A-26: nzstart Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-47
Table A-27: nzstate Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-49
Table A-28: nzstate Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-49
Table A-29: nzstats Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-51
Table A-30: nzstats Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-51
Table A-31: nzstop Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-54
Table A-32: nzstop Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-54
Table A-33: nzsystem Inputs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-55
Table A-34: nzsystem Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-56
Table A-35: Diagnostic Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-58
Table A-36: nzconvertsyscase Input Options. . . . . . . . . . . . . . . . . . . . . . . . . . . A-60
Table A-37: nzdumpschema Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-61
Table A-38: nzlogmerge Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-62
Table C-1: User Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C-1
Table C-2: System Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C-3
Table D-1: Startup Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-1
Table D-2: System Manager Configuration Options . . . . . . . . . . . . . . . . . . . . . . D-3
Table D-3: Host Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-6
Table D-4: SPU Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-10
Figures
Figure 12-7: Resource Allocation Performance History Window . . . . . . . . . . . . . 12-18
Figure 12-8: Resource Allocation Performance Graph. . . . . . . . . . . . . . . . . . . . 12-19
Figure 12-9: Using PQE to Control Job Concurrency by Runtime and Priority . . . 12-21
Figure 12-10: Gate Keeper Default Normal Work Queue . . . . . . . . . . . . . . . . . . . 12-23
Figure 12-11: Gate Keeper Time-Based Normal Queues and Registry Settings . . . 12-24
Figure 14-1: Mantra and MantraVM Service . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-1
Preface
The IBM Netezza data warehouse appliance is a high performance, integrated database
appliance that provides unparalleled performance, extensive scaling, high reliability, and
ease of use. The Netezza appliance uses a unique architecture that combines current
trends in processor, network, and software technologies to deliver a very high performance
system for large enterprise customers.
If You Need Help
If you are having trouble using the Netezza appliance, you should:
1. Retry the action, carefully following the instructions given for that task in the
documentation.
2. Go to the IBM Support Portal at: http://www.ibm.com/support. Log in using your IBM
ID and password. You can search the Support Portal for solutions. To submit a support
request, click the Service Requests & PMRs tab.
3. If you have an active service contract maintenance agreement with IBM, you can contact customer support teams via telephone. For individual countries, visit the Technical Support section of the IBM Directory of worldwide contacts (http://www14.software.ibm.com/webapp/set2/sas/f/handbook/contacts.html#phone).
CHAPTER 1
Administration Overview
What's in this chapter
Administrator's Roles
Administration Tasks
Initial System Setup and Information
Administration Interfaces
Other Netezza Documentation
This chapter provides an introduction and overview to the tasks involved in administering
an IBM Netezza data warehouse appliance.
Administrator's Roles
Netezza administration tasks typically fall into two categories:
System administration: managing the hardware, configuration settings, system status, access, disk space, usage, upgrades, and other tasks
Database administration: managing the user databases and their content, loading data, backing up data, restoring data, and controlling access to data and permissions
In some customer environments, one person could be both the system and database
administrator to perform the tasks when needed. In other environments, multiple people
may share these responsibilities, or they may own specific tasks or responsibilities. You can
develop the administrative model that works best for your environment.
In addition to the administrator roles, there are also database user roles. A database user is
someone who has access to one or more databases and has permission to run queries on
the data stored within those databases. In general, database users have access permissions
to one or more user databases, and they have permission to perform certain types of tasks
as well as to create or manage certain types of objects (tables, synonyms, and so forth)
within those databases.
Administration Tasks
The administration tasks generally fall into these categories:
Deploying and installing Netezza clients
Managing the Netezza appliance
IBM Netezza System Administrators Guide
Netezza Support and Sales representatives will work with you to install and initially configure the Netezza system in your customer environment. Typically, the initial rollout consists of installing the system in your data center, then performing some configuration steps to set the system's hostname and IP address, connect the system to your network, and make it accessible to users. The representatives will also work with you to perform initial studies of system usage and query performance, and may recommend other configuration settings or administration ideas to improve the performance of, and access to, the Netezza system for your users.
The data Directory: The /nz/data directory contains the following subdirectories:
data.<ver>/base: Contains system tables, catalog information, and subdirectories for the databases. Each database you create has its own subdirectory whose name matches the database's object ID value. For example, base/1/ is the system database, base/2/ is the master_db database, and base/nnn is an end-user database, where nnn is the object ID of the database.
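The OID-keyed layout can be illustrated with a throwaway directory tree; the version string and user-database object ID below are made-up examples, and on a real system you would look under /nz/data itself. A minimal sketch:

```shell
# Illustrative only: recreate the documented data.<ver>/base layout in a
# scratch area (a live system requires a Netezza host).
BASE=$(mktemp -d)/data.1.0/base
mkdir -p "$BASE/1" "$BASE/2" "$BASE/200123"
# base/1      -> the system database
# base/2      -> the master_db database
# base/200123 -> a user database whose object ID happens to be 200123
ls "$BASE"
```

Listing the base directory shows only numeric object IDs; mapping an ID back to a database name is done through the catalog, not the file system.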
The kit Directory: The kit directory contains the following subdirectories:
kit.<rev>/: Top-level directory for the release <rev> (for example, kit.6.0).
kit.<rev>/bin/: All user-level CLI programs.
kit.<rev>/bin/adm/: Internal CLI programs.
kit.<rev>/log/<pgm name>/: Component log files, one subdirectory per component, each containing a file per day of log information for up to seven days. The information in the logs includes when the process started, when the process exited or completed, and any error conditions.
kit.<rev>/sbin/: Internal host and utility programs not intended to be run directly by users. These programs are not specifically prefixed (for example, clientmgr).
kit.<rev>/share/: Postgres-specific files.
kit.<rev>/sys/: System configuration files, startup.cfg, and some subdirectories (init, include, strings).
kit.<rev>/sys/init/: Files used for system initialization.
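Navigating the per-component log directories can be sketched as follows; the component name and dated file names here are hypothetical, used only to show how a daily-file layout behaves:

```shell
# Mimic kit.<rev>/log/<pgm name>/ with dated files in a scratch directory.
# Date-stamped names sort chronologically, so the last entry is the most
# recent day's log.
LOGDIR=$(mktemp -d)/kit.6.0/log/sysmgr
mkdir -p "$LOGDIR"
touch "$LOGDIR/sysmgr.2011-06-01.log" "$LOGDIR/sysmgr.2011-06-07.log"
ls "$LOGDIR" | tail -1
```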
Overwriting DNS Information with a Text File
To change the DNS information by reading the information from an existing text file:
1. Log in to either host as root.
2. Create a text file with your DNS information. Your text file should have a format similar to the following:
search yourcompany.com
nameserver 1.2.3.4
nameserver 1.2.5.6
3. Enter the following command, where file is the fully qualified pathname of the text file:
[root@nzhost1 ~]# service nzresolv update file
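The file-based update can be scripted; the /tmp path below is an arbitrary example, and the final service command is shown commented out because it exists only on a Netezza host:

```shell
# Step 2: write the DNS settings to a file (path is an illustrative choice).
cat > /tmp/dns-update.txt <<'EOF'
search yourcompany.com
nameserver 1.2.3.4
nameserver 1.2.5.6
EOF
# Sanity-check the file before applying it: expect two nameserver lines.
grep -c '^nameserver' /tmp/dns-update.txt
# Step 3 (on the Netezza host, as root):
#   service nzresolv update /tmp/dns-update.txt
```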
Appending DNS Information from the Command Prompt
To change the DNS information by entering the information at the command prompt:
1. Log in to either host as root.
2. Enter the following command (note the dash character at the end of the command):
[root@nzhost1 ~]# service nzresolv update -
The command prompt proceeds to a new line where you can enter the DNS information. Enter the complete DNS information, because the text that you type replaces the existing information in the resolv.conf file.
3. After you finish typing the DNS information, type one of the following:
Control-D to save the information that you entered and exit.
Control-C to exit without saving any changes.
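Because the typed text replaces resolv.conf entirely, the stdin form is safer to script with a here-document, whose terminator supplies the end-of-input that Control-D provides interactively. In this sketch, `cat` stands in for the real service command so the mechanics can be shown off-host:

```shell
# On the Netezza host (as root) you would pipe the same here-document into:
#   service nzresolv update -
# Here 'cat' is a stand-in so the example runs anywhere.
dns=$(cat <<'EOF'
search yourcompany.com
nameserver 1.2.3.4
nameserver 1.2.5.6
EOF
)
printf '%s\n' "$dns"
```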
Administration Interfaces
Netezza offers several interfaces that you can use to perform the various system and database management tasks:
Netezza commands (nz* commands) are installed in the /nz/kit/bin directory on the
Netezza host. For many of the nz* commands, you must be able to log on to the
Netezza system to access and run those commands. In most cases, users log in as the
default nz user account, but you may have created other Linux user accounts on your
system. Some commands require you to specify a database user account, password,
and database to ensure that you have permissions to perform the task.
The Netezza CLI client kits package a subset of the nz* commands that can be run from Windows and UNIX client systems. The client commands may also require you to specify a database user account, password, and database to ensure that you have database administrative and object permissions to perform the task.
SQL commands. The SQL commands support administration tasks and queries within a SQL database session. You can run the SQL commands from the Netezza nzsql command interpreter or through SQL APIs such as ODBC, JDBC, and the OLE DB Provider. You must have a database user account to run the SQL commands with appropriate permissions for the queries and tasks that you perform.
NzAdmin tool. NzAdmin is a Netezza interface that runs on Windows client workstations to manage Netezza systems.
Web Admin. Web Admin is a Web browser client that users can access on the Netezza
system or a compatible Linux server to manage their Netezza systems.
Netezza Performance Portal. The Netezza Performance Portal is a Web browser client
that provides detailed monitoring capabilities for your Netezza systems. You can use
the portal to answer questions about system usage, workload, capacity planning, and
overall query performance.
The nz* commands are installed and available on the Netezza system, but it is more common for users to install Netezza client applications on client workstations. Netezza supports a variety of Windows and UNIX client operating systems. Chapter 2, Installing the Netezza Client Software, describes the Netezza clients and how to install them.
Chapter 3, Using the Netezza Administration Interfaces, describes how to get started
using the administration interfaces.
The client interfaces provide you with different ways to perform similar tasks. While most users tend to use the nz* commands or SQL commands, you can use any combination of the client interfaces, depending on the task, your workstation environment, or your interface preferences.
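For example, checking the overall system state is a one-line task with the nz* commands. The host name in this illustrative session is hypothetical:

```shell
[nz@nzhost1 ~]$ nzstate
System state is 'Online'.
```

The same information is visible in the NzAdmin status bar and on the Web Admin status pane; the CLI form is convenient for scripts and remote shells.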
In most cases, the only applications that Netezza administrators or users need to install are
the client applications to access the Netezza system. Netezza provides client software that
runs on a variety of systems such as Windows, Linux, Solaris, AIX, and HP-UX systems. For
a description of the client applications, see Administration Interfaces on page 1-7.
This chapter describes how to install the Netezza CLI clients, NzAdmin tool, and Web Admin interface. Note that the instructions to install and use the Netezza Performance Portal are in the IBM Netezza Performance Portal User's Guide, which is available with the software kit for that interface.
Note: This chapter does not describe how to install the Netezza system software or how to upgrade the Netezza host software. Typically, Netezza Support works with you for any situations that might require software reinstallation, and the steps to upgrade a Netezza system are described in the IBM Netezza Software Upgrade Guide.
If your users or their business reporting applications access the Netezza system through ODBC, JDBC, or OLE DB Provider APIs, see the IBM Netezza ODBC, JDBC and OLE DB Installation and Configuration Guide for detailed instructions on the installation and setup of these data connectivity clients.
2-1
IBM Netezza System Administrators Guide
Windows
Linux
Red Hat LAS Linux 4.0, 5.2, 5.3, 5.5, 6.1 (Intel/AMD)
SUSE Linux Enterprise Server 10 and 11, and Red Hat Enterprise Linux 5.x (IBM System z)
UNIX
Note: The Netezza client kits are designed to run on the vendor's proprietary hardware architecture. For example, the AIX, HP-UX, and Solaris clients are intended for each vendor's proprietary RISC architecture. The Linux client is intended for Red Hat or SUSE on the 32-bit Intel architecture.
mount /media/cdrom
or
mount /media/cdrecorder
Table 2-2 describes other common mount commands for the supported UNIX clients. If you encounter any problems mounting or accessing the media drive on your client system, refer to your operating system documentation or command man pages.
Table 2-2: Sample UNIX CD/DVD Mount Commands
Platform Command
4. To change to the mount point, use the cd command and specify the mount pathname
that you used in step 3. This guide uses the term /mountPoint to refer to the applicable
CD/DVD mount point location on your system, as used in step 3.
cd /mountPoint
5. Navigate to the directory where the unpack command resides and run the unpack command as follows:
./unpack
Note: On some UNIX systems such as Red Hat 5.3, the auto-mounter settings may not
provide execute permissions by default. If the unpack command returns a permission
denied error, you can copy the installation files from the disk to a local directory and
run the unpack command from that local directory.
Note: For installations on Linux, be sure to use the unpack in the linux directory, not
the linux64 directory (which contains only the executable for the 64-bit ODBC driver).
Note: On an HP-UX 11i client, /bin/sh may not be available. You can use the command
form sh ./unpack to unpack the client.
6. The unpack program checks the client system to ensure that it supports the CLI package and prompts you for an installation location. The default is /usr/local/nz for Linux, but you can install the CLI tools to any location on the client. The program prompts you to create the directory if it does not exist. Sample command output follows:
------------------------------------------------------------------
IBM Netezza -- NPS Linux Client 7.0
(C) Copyright IBM Corp. 2002, 2012 All Rights Reserved.
------------------------------------------------------------------
Unpacking complete.
After the installation completes, the Netezza CLI commands are installed in the specified destination directory. In addition, the installer stores copies of the software licenses in the /opt/nz/licenses directory.
Installation Requirements
The installation package requires a computer system running a supported Windows operating system such as Windows 2003, XP (32- and 64-bit), Vista (32-bit), 2008 (32- and 64-bit), or Windows 7 (32- and 64-bit). The client system must also have either a CD/DVD drive or a network connection.
Note: If you will be using or viewing object names that use UTF-8 encoded characters, your Windows client systems require the Microsoft universal font to display the characters within the NzAdmin tool. The Arial Unicode MS font is installed by default on Windows XP systems, but you may need to install it manually on other Windows platforms such as 2003. For more information, see the Microsoft support article at http://office.microsoft.com/en-us/help/hp052558401033.aspx.
Environment Variables
Table 2-3 lists the operating system environment variables that the installation tool adds
for the Netezza console applications.
attempts to install the packages using the yum command. The yum command must be correctly configured to retrieve packages from your configured repositories. (Contact your Red Hat administrator for questions about yum package management and package sources/repositories in your environment.)
For more information about the Web Admin interface, see Using the Web Admin Application on page 3-20.
5. Run the unpack command to add the software files to the system:
[root@nzhost1 ~]# ./unpack
The unpack script installs the software files for the Web Admin interface. During the
unpack process, you may be prompted for instructions to remove existing Web services
RPM packages, to choose whether to use SSL security for Web connections, and other
tasks. This sample output uses Enter to show that the user pressed the Enter key for these
types of prompts. Sample command output follows:
----------------------------------------------------------------------
IBM Netezza -- NPS Web Admin 7.0
(C) Copyright IBM Corp. 2002, 2012 All Rights Reserved.
----------------------------------------------------------------------
*********************************************************************
Unpacking WebAdmin files into: /usr/local/nzWebAdmin
*********************************************************************
**********************************************************************
Previous odbc configuration moved to /etc/odbcinst.ini.30724
**********************************************************************
Starting httpd: [ OK ]
Unpacking complete.
The unpacking process automatically starts the Web Admin server. If you need to stop the
Web Admin server at any time, log in as root or a superuser account and use the following
command:
service httpd stop
To start the Web Admin server, log in as root or a superuser account and use the following
command:
service httpd start
To install the new Web Admin client, follow the steps described in the section Installing
the Web Admin Server and Application Files on page 2-8.
Directory Contents
1. Set the command prompt to use an appropriate TrueType font that contains the required glyphs. To select a font:
a. Select Start > Programs > Accessories.
b. Right-click Command Prompt and then select Properties from the pop-up menu.
The Command Prompt Properties dialog box appears.
c. Select the Font tab. In the Font list, the TrueType fixed-width fonts are controlled by the registry setting HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Console\TrueTypeFont.
On a standard US system, the font is Lucida Console (which does not contain UTF-8 mapped glyphs for Kanji). On a Japanese system, the font is MS Gothic, which contains those glyphs.
2. In a DOS command prompt window, change the code page to UTF-8 by entering the
following command:
chcp 65001
As an alternative to these DOS setup steps, the input/output from the DOS clients can be
piped from/to nzconvert and converted to a native code page, such as 932 for Japanese.
On a Windows system, the fonts that you use for your display must meet the following Microsoft requirements, as outlined on the Support site at http://support.microsoft.com/default.aspx?scid=kb;EN-US;Q247815.
5480  NZ_DBMS_PORT        The postgres port for the nzsql command, NzAdmin tool, ODBC, and JDBC.
5481  NZ_CLIENT_MGR_PORT  The port for the CLI and NzAdmin tool messaging.
Note: Netezza personnel, if granted access for remote service, use port 22 for SSH, and
ports 20 and 21 for FTP.
Before you begin, make sure that you choose a port number that is not already in use. To
check the port number, you can review the /etc/services file to see if the port number is
already specified for another process. You can also use the netstat | grep port command to
see if the designated port is in use.
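For example, before choosing 5486 as a new port value, you might run both checks described above. This is a hypothetical session; the output (or lack of it) depends on your system:

```shell
[nz@nzhost1 ~]$ grep -w 5486 /etc/services
[nz@nzhost1 ~]$ netstat -an | grep 5486
```

If both commands return no output, the port is neither registered in /etc/services nor currently in use.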
To change the default port numbers for your Netezza system:
1. Log in to the Netezza host as the nz user.
2. Change to the /nz/kit/sys/init directory.
3. Create a backup of the current nzinitrc.sh file:
[nz@nzhost init]$ cp nzinitrc.sh nzinitrc.sh.backup
4. Review the nzinitrc.sh file to see if the Netezza port(s) listed in Table 2-5 that you want
to change are already present in the file. For example, you may find a section that looks
similar to the following, or you might find these variables defined separately within the
nzinitrc.sh file.
# Application Port Numbers
# ------------------------
If you do not find your variables in the file, you can edit the file to define each variable and its new port definition. To define a variable in the nzinitrc.sh file, use the format NZ_DBMS_PORT=value; export NZ_DBMS_PORT as shown above.
Note: As a hint, you can append the contents of the nzinitrc.sh.sample file to the nzinitrc.sh file to create an editable section of variable definitions. You must be able to log in to the Netezza host as the root user; then change to the /nz/kit/sys/init directory and append the sample file contents to nzinitrc.sh.
5. Using a text editor, edit the nzinitrc.sh file. For each port that you want to change, remove the comment symbol (#) from the definition line and specify the new port number. For example, to change the NZ_DBMS_PORT variable value to 5486:
NZ_DBMS_PORT=5486; export NZ_DBMS_PORT
# NZ_CLIENT_MGR_PORT=5481; export NZ_CLIENT_MGR_PORT
# NZ_LOAD_MGR_PORT=5482; export NZ_LOAD_MGR_PORT
# NZ_BNR_MGR_PORT=5483; export NZ_BNR_MGR_PORT
# NZ_RECLAIM_MGR_PORT=5484; export NZ_RECLAIM_MGR_PORT
6. Review your changes carefully to make sure that they are correct and save the file.
Note: If you change the default port numbers, some of the Netezza CLI commands may no longer work. For example, if you change the NZ_DBMS_PORT or NZ_CLIENT_MGR_PORT value, commands such as nzds, nzstate, and others could fail because they expect the default port value. To avoid this problem, copy the custom port variable definitions from the nzinitrc.sh file to the /export/home/nz/.bashrc file. You can edit the .bashrc file using any text editor.
7. To place the new port value(s) into effect, stop and start the Netezza server using the
following commands:
[nz@nzhost init]$ nzstop
[nz@nzhost init]$ nzstart
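The variable-definition format that nzinitrc.sh expects is ordinary shell syntax, so you can exercise it in any shell before editing the real file. This sketch writes a hypothetical fragment to a temporary file and sources it, mirroring how the startup scripts read nzinitrc.sh (an assumption based on the procedure above); the port value is illustrative:

```shell
# Write a fragment in the nzinitrc.sh format (hypothetical port value).
cat > /tmp/nzinitrc_fragment.sh <<'EOF'
# Application Port Numbers
# ------------------------
NZ_DBMS_PORT=5486; export NZ_DBMS_PORT
EOF

# Source the fragment and confirm the variable is set and exported.
. /tmp/nzinitrc_fragment.sh
echo "NZ_DBMS_PORT=$NZ_DBMS_PORT"
# prints: NZ_DBMS_PORT=5486
```

Because the file is sourced rather than executed, the definitions take effect in the calling shell, which is why the Netezza server must be restarted for the real nzinitrc.sh changes to apply.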
Some Netezza commands such as nzsql and nzload have a -port option that allows the user to specify the DB access port. In addition, users can create local definitions of the environment variables to specify the new port number.
For example, on Windows clients, users could create an NZ_DBMS_PORT user environment
variable in the System Properties > Environment Variables dialog to specify the non-default
port of the Netezza system. For clients such as NzAdmin, the environment variable is the only way to specify a non-default database port for a target Netezza system. For many systems, the variable name and value take effect immediately and are used the next time you start NzAdmin. When you start NzAdmin and connect to a system, if you receive an error that you cannot connect to the Netezza database and the reported port number is incorrect, check the variable name and value to confirm that they are correct. You may need to reboot the client system for the variable to take effect.
For a Linux system, you could define a session-level variable using a command similar to
the following:
$ NZ_DBMS_PORT=5486; export NZ_DBMS_PORT
For the instructions to define environment variables on your Windows, Linux, or UNIX client, refer to the operating system documentation for your client.
If a client user connects to multiple Netezza hosts that each use different port numbers, those users may need to use the -port option on the commands as an override, or change the environment variable's value on the client before they connect to each Netezza host.
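For example, a user who works with two hypothetical hosts, one on the default port and one moved to 5486, might override the port explicitly on each invocation rather than editing environment variables:

```shell
nzsql -host prodhost -u admin -pw password -d system
nzsql -host devhost -port 5486 -u admin -pw password -d system
```

The host names, user, and password here are placeholders; the -port override applies only to that invocation.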
When using the Netezza CLI commands, the clear-text password must be entered on
the command line. Note that you can set the environment variable NZ_PASSWORD to
avoid typing the password on the command line, but the variable is stored in clear text
with the other environment variables.
To avoid displaying the password on the command line, in scripts, or in the environment variables, you can use the nzpassword command to create a locally stored encrypted password.
Note: You cannot use stored passwords with ODBC or JDBC.
The password is the Netezza database user's password in the Netezza system catalog or the password specified in the environment variable NZ_PASSWORD. If you do not supply a password on the command line or in the environment variable, the system prompts you for a password.
The hostname is the Netezza host. If you do not specify the hostname on the command line, the nzpassword command uses the environment variable NZ_HOST. You can create encrypted passwords for any number of user name/host pairs.
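A typical invocation follows; the user name, password, and host shown are placeholders:

```shell
nzpassword add -u admin -pw password -host nzhost1
```

After the password is cached, commands run against that host with that user name no longer need the -pw option on the command line or the NZ_PASSWORD variable.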
When you use the nzpassword add command to cache the password, note that quotation
marks are not required around the user name or password values. You should only qualify
the user name or password with a surrounding set of single-quote double-quote pairs (for
example, '"Bob"') in cases where the value is case-sensitive. If you specify quoted or
unquoted names or passwords in nzpassword or other nz commands, you must use the
same quoting style in all cases.
If you qualify a case-insensitive user name with quotes (for example, '"netezza"'), the command may still complete successfully, but this is not recommended and is not guaranteed to work in all command cases.
After you type the nzpassword command, the system sends the encrypted password to the
Netezza host where it is compared against the user name/password in the system catalog.
If the information matches, the Netezza system stores the encrypted information in a local password cache and displays no additional message.
On Linux and Solaris, the password cache is the file .nzpassword in the user's home directory. Note that the system creates this file without access permissions for other users, and refuses to honor a password cache whose permissions allow other users access.
On Windows, the password cache is stored in the registry.
If the information does not match, the Netezza system displays a message indicating that the authentication request failed. The Netezza system also logs all verification attempts.
If the database administrator changed a user password in the system catalog, the existing stored passwords are invalid.
In all cases, whether you use the -pw option on the command line, the NZ_PASSWORD environment variable, or the locally stored password created through the nzpassword command, the Netezza system compares the password against the entry in the system catalog for local authentication, or against the LDAP account definition. The authentication protocol is the same, and the Netezza system never sends clear-text passwords over the network.
In Release 6.0.x, note that the encryption used for locally encrypted passwords has changed. In prior releases, Netezza used the Blowfish encryption routines; Release 6.0 uses the Advanced Encryption Standard (AES-256). When you cache a password using a Release 6.0 client, the password is saved in AES-256 format unless there is an existing password file in Blowfish format. In that case, new stored passwords are saved in Blowfish format.
If you upgrade to a Release 6.0.x or later client, the client can support passwords in either
the Blowfish format or the AES-256 format. If you want to convert your existing password
file to the AES-256 encryption format, you can use the nzpassword resetkey command to
update the file. If you want to convert your password file from the AES-256 format to the
Blowfish format, use the nzpassword resetkey -none command.
Older clients, such as those for Release 5.0.x and those earlier than Release 4.6.6, do not
support AES-256 format passwords. If your password file is in AES-256 format, the older
client commands will prompt for a password, which can cause automated scripts to hang.
Also, if you use an older client to add a cached password to or delete a cached password
from an AES-256 format file, you could corrupt the AES-256 password file and lose the
cached passwords. If you typically run multiple releases of Netezza clients, you should use
the Blowfish format for your cached passwords.
Summary of Commands
Table 3-1 describes the nz* commands you can use to monitor and manage the Netezza
system. These commands reside in the /nz/kit/bin directory on the Netezza host. Many of
these commands are also installed with the Netezza client kits and can be run from a
remote client workstation.
nzds       Manages and displays information about the data slices on the system. For command syntax, see nzds on page A-8.
nzload     Loads data into database files. For command syntax, see the IBM Netezza Data Loading Guide.
nzodbcsql  A client command on Netezza UNIX clients that tests ODBC connectivity. See the IBM Netezza ODBC, JDBC, and OLE DB Installation and Configuration Guide.
nzreclaim  Uses the SQL GROOM TABLE command to reclaim disk space from user tables, and also to reorganize the tables. For command syntax, see nzreclaim on page A-35. For more information, see Grooming Tables on page 9-18.
nzspupart  Shows a list of all the SPU partitions and the disks that support them; controls regenerations for degraded partitions. For usage information, see nzspupart on page A-43.
nzsql      Invokes the SQL command interpreter. For usage information, see Chapter 9, Managing User Content on the Netezza Appliance. For command syntax, see the IBM Netezza Database User's Guide.
Command Locations
Table 3-2 lists the default location of the Netezza CLI commands and whether they are
available in the various UNIX or Windows client kits. Remember to add the appropriate bin
directory to your search path to simplify command invocation.
Default Location: /nz/kit/bin (Netezza host), /usr/local/nz/bin (UNIX clients), C:\Program Files\Netezza Tools\Bin (Windows clients)
nzbackup
nzhistcleanupdb
nzhistcreatedb
nzhostbackup
nzhostrestore
nzrestore
nzstart
nzstop
nzwebstart
nzwebstop
nzcontents
nzsql
nzreclaim
nzconvert
nzds
nzevent
nzhw
nzload
nzodbcsql
nzpassword
nzrev
nzsession
nzspupart
nzstate
nzstats
nzsystem
If the timeout option is specified without a value, the system waits 300 seconds. The maximum timeout value is 100 million seconds.
Note: In this example, you did not have to specify a host, user, or password. The command simply displayed information that was already available on the local Windows client.
To back up a Netezza database (you must run the command while logged in to the
Netezza system, as this is not supported from a client):
[nz@npshost ~]$ nzbackup -dir /home/user/backups -u user -pw password -db db1
Backup of database db1 to backupset 20090116125409 completed successfully.
'\'Identifier\''
The syntax is single-quote, backslash, single-quote, identifier, backslash, single-quote, single-quote. This syntax protects the quotes so that the identifier remains quoted on the Netezza system.
nzsql Command
The nzsql command is a SQL command interpreter. You can use it on the Netezza host or
on UNIX client systems to create database objects, run queries, and manage the database.
Note: The nzsql command is not yet available on Windows client systems.
Argument Description
-c <query> Runs only a single query (or slash command) and exits.
-r Suppresses the row count displayed at the end of the SQL output.
-T text Sets HTML table tag options (width, border) (-P tableattr=).
-v name=value Sets the nzsql variable name to the specified value. You can specify one or more -v arguments to set several options, for example:
nzsql -v HISTSIZE=600 -v USER=user1 -v PASSWORD=password
-securityLevel Specifies the security level that you want to use for the session.
The argument has four values:
preferredUnsecured This is the default value. Specify this option when you would prefer an unsecured connection, but you will accept a secured connection if the Netezza system requires one.
preferredSecured Specify this option when you want a secured connection to the Netezza system, but you will accept an unsecured connection if the Netezza system is configured to use only unsecured connections.
onlyUnsecured Specify this option when you want an unsecured connection to the Netezza system. If the Netezza system requires a secured connection, the connection will be rejected.
onlySecured Specify this option when you want a secured connection to the Netezza system. If the Netezza system accepts only unsecured connections, or if you are attempting to connect to a Netezza system that is running a release prior to 4.5, the connection will be rejected.
-caCertFile Specifies the pathname of the root CA certificate file on the client system. This argument is used by Netezza clients that use peer authentication to verify the Netezza host system. The default value is NULL, which skips the peer authentication process.
Within the nzsql command interpreter, you can enter the following commands for help or to
execute a command:
\h Help for SQL commands.
\? Internal slash commands. See Table 3-4.
\g or terminate with semicolon Execute a query.
\q Quit.
nzsql -v ON_ERROR_STOP=
You do not have to supply a value; simply defining it is sufficient.
You can also toggle batch processing with a SQL script. For example:
\set ON_ERROR_STOP
\unset ON_ERROR_STOP
You can use the $HOME/.nzsqlrc file to store values, such as the ON_ERROR_STOP, and
have it apply to all future nzsql sessions and all scripts.
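For example, to stop a batch script at the first error, you might define the variable on the command line when running the script (the script file name here is hypothetical):

```shell
nzsql -v ON_ERROR_STOP= -f nightly_load.sql
```

Alternatively, place the \set ON_ERROR_STOP command at the top of the script itself, or in $HOME/.nzsqlrc to apply it to every session.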
To suppress the row count information, you can use the nzsql -r command when you start
the SQL command line session. When you run a query, the output will not show a row
count:
mydb(myuser)=> select count(*) from nation;
COUNT
-------
25
You can use the NO_ROWCOUNT session variable to toggle the display of the row count
information within a session, as follows:
mydb(myuser)=> select count(*) from nation;
COUNT
-------
25
(1 row)
Client Compatibility
The NzAdmin client is intended to monitor Netezza systems that are at the same Netezza
software release level as the client. The client can monitor Netezza hosts with older
releases, but the client functionality may be incomplete. For example, when you monitor
older Netezza systems, some of the System tab features such as system statistics, event
management, and hardware component state changes are typically disabled. The Database
tab features are usually supported for the older systems.
The NzAdmin client is not compatible with Netezza hosts that are running releases at a
later revision. As a best practice, when you upgrade your Netezza system software you
should also upgrade your client software to match.
If you run the nzadmin.exe in a command window, you can optionally enter the following
login information on the command line to bypass the login dialog:
-host or /host and the name of the Netezza host or its IP address
-user or /user and a valid Netezza database user name
-pw or /pw and a valid password for the Netezza user. The NzAdmin tool also can use
cached passwords on your client system. To specify using a cached password, use the
-pw option without a password string.
You can enter these arguments in any order, but you must separate them with spaces or
commas. You can mix the - and / command forms.
If you enter all three arguments, NzAdmin bypasses the login dialog and connects you
to the host you have specified. If there is an error, NzAdmin displays the login dialog
with the host and user fields completed and you must enter the password.
If you specify only one or two arguments, NzAdmin displays the login dialog. You must
complete the remaining fields.
If you duplicate arguments, that is, specify -host red and -host blue, NzAdmin displays
a warning message and uses the first one (host red).
Note: The NzAdmin tool and Web Admin accept delimited (quoted) user names in their respective login dialogs. You can also delimit user names passed when invoking the NzAdmin tool in a command window.
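For example, the first hypothetical invocation below supplies all three arguments and bypasses the login dialog, while the second mixes the - and / forms and relies on a cached password:

```shell
nzadmin -host nzhost1 -user admin -pw password
nzadmin /host nzhost1 -user admin /pw
```

The host, user, and password values are placeholders; in the second form, the -pw option without a password string tells NzAdmin to use the password cached on the client.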
Logging In to NzAdmin
Unless otherwise specified on the command line, the NzAdmin login dialog box requires three arguments: host, user name, and password. When you enter the password, the NzAdmin tool allows you to save the encrypted password on the local system. When you log in again, you need to enter only the host and user name.
The drop-down list in the host field displays the host addresses or names that you have used previously.
You can suppress subsequent warning messages for version incompatibility by selecting Don't warn me about this again and clicking OK.
In the main hardware view, NzAdmin displays an image of the Netezza system, which could
be one or more racks for Netezza z-series systems or one or more SPAs for IBM Netezza
100, 1000, C1000, or IBM PureData System for Analytics N1001 systems. As you move
the cursor over the image, NzAdmin displays information such as hardware IDs and other
details and the mouse cursor changes to the hyperlink hand. Clicking the image allows you
to drill down to more information about the component.
In the status bar at the bottom of the window, the NzAdmin tool displays your user name
and the duration of the NzAdmin session. If the host system is not in the online state, the
status bar displays the message The host is not online.
You can access commands through the menu bar, the toolbar, or by right-clicking objects.
Status Description
Red Failed. The component is down or failed. It can also indicate that a
component is likely to fail, which is the case if two fans on the same
SPA are down.
Command Description
File > New Allows you to create a database, table, view, materialized view, sequence, synonym, user, or group. Available only in the Database tab.
File > System State Allows you to change the system state.
View > System Objects Shows/hides system tables and views and applies to
object privilege lists in the Object Privileges
window.
View > SQL Statements Displays the SQL Window that shows a subset of the
SQL commands NzAdmin has used in this session.
Tools > Table Skew Displays any tables that meet or exceed a specified skew threshold.
Tools > Table Storage Displays table and materialized view storage usage by database or by user.
Tools > Query History Configuration Displays a window that you can use to create and alter query history configurations, as well as to set the current configuration.
Tools > Default Settings Displays the materialized view refresh threshold.
Help > NzAdmin Help Displays the online help for the NzAdmin tool.
Help > About NzAdmin Displays the NzAdmin and Netezza revision numbers and copyright text.
As you move the cursor over the SPA image, the NzAdmin tool displays the slot number,
hardware ID, role, and state of each SPU, and the mouse cursor changes to the hyperlink
hand. Clicking the SPU displays the SPU status window and positions the tree control to
the corresponding entry.
Administration Commands
You can access system and database administration commands from both the tree view and the status pane of NzAdmin. In either case, a pop-up or context menu supports the commands related to the components displayed.
To activate a pop-up context menu, right-click a component in a list.
The Options hyperlink menu is located in the top bar of the window.
If you enable auto refresh, the NzAdmin tool displays a refresh icon in the right corner of
the status bar. The system stores the refresh state and time interval, and maintains this
information across NzAdmin sessions. Therefore, if you set automatic refresh, it remains in
effect until you change it.
To reduce communication with the server, the NzAdmin tool refreshes data based on the
item you select in the left pane. Table 3-7 lists the items and corresponding data retrieved
on refresh.
Server (database view) All databases and their associated objects, users,
groups, and session information.
If the NzAdmin tool is already communicating with the backend server, such as processing
a user command or performing a manual refresh, it does not execute an auto refresh.
If you click Reconnect, the NzAdmin tool attempts to establish a connection to the
server.
If you click Exit, NzAdmin terminates your session.
You can install the Web Admin server package on the Netezza host system, or on any Linux
system that can connect to the Netezza system. The Linux system should run an operating
system version that matches the Web Admin installation package.
Using the Web Admin interface you can do the following:
Display the status of Netezza hardware, user and system sessions, data storage usage, databases, tables, views, sequences, synonyms, functions, aggregates, stored procedures, active queries and query history, and users and groups.
Note: The query history information accessible from the Web Admin interface uses the _v_qryhist and _v_qrystat views for backward compatibility. These views will be deprecated in the future. For details on the new query history feature, see Chapter 11, Query History Collection and Reporting.
Navigation Pane
The navigation pane is on the left side of the page and contains the main list of site links. This pane is fixed and, with a few exceptions, is present on all pages within the site. Most links are grouped within system and database commands.
Status Pane
The status pane is at the top of the page, and contains database status and system state,
time of last status update, host revision number, hostname or address, and user name and
authentication setting.
The status area also includes a search box, which you can use to search through system
tables. Depending on the search string you enter, the system finds the following items:
If the search string is numeric, the system searches for hardware identifiers or IP
addresses, such as a SPU or SPA.
If the search string is alphanumeric, the system searches for databases, tables, views,
sequences, synonyms, functions, aggregates, procedures, and user or group names.
The alphanumeric search uses the SQL LIKE operator; therefore, you can augment the search string with SQL pattern characters. For example, the search string cust% finds all occurrences of the customer table throughout all the databases in the system.
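As a side note on the pattern semantics, the SQL LIKE wildcard % matches any character sequence, much like * in shell globbing. The following is a toy, locally runnable illustration of that correspondence (not an actual Web Admin or nzsql call; the table names are made up):

```shell
# Illustration only: translate the SQL LIKE pattern cust% into a shell
# glob and test it against some hypothetical table names.
pattern="cust%"
glob="${pattern//%/*}"   # cust% -> cust*
for name in customer customers orders; do
  case "$name" in
    $glob) echo "$name matches $pattern" ;;
  esac
done
```

Here the names customer and customers match while orders does not, mirroring how cust% behaves in the search box.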
Drilldown Links
The Web Admin interface lets you drill down for more detailed information on system, hardware, and database objects. Many pages contain drilldown links, in text or graphical form. For example:
In the Hardware View page, you can click on the rack image to drill down to a specific
SPA.
In the SPA Status page, you can click on a SPU within the SPA image to drill down to
detailed information on a SPU.
In the Table List page, you can click on a table name to drill down to table properties.
Action Buttons
At the top of many Web Admin pages, there are action links that provide additional navigation based on the current page's content. For example, from the Table Properties page you can select to view the table record distribution or statistics, or truncate or drop the table.
Online Help
The Web Admin interface provides you with two types of help:
Task-oriented help: Available when you click Help Contents in the navigation pane.
Context-sensitive help: Available when you click the question icon on each page.
The Netezza high availability (HA) solution uses Linux-HA and Distributed Replicated
Block Device (DRBD) as the foundation for cluster management and data mirroring. The
Linux-HA and DRBD applications are commonly used, established, open source projects for
creating HA clusters in various environments. They are supported by a large and active
community for improvements and fixes, and they also offer the flexibility for Netezza to add
corrections or improvements on a faster basis, without waiting for updates from third-party
vendors.
The IBM Netezza 1000, C1000, IBM PureData System for Analytics N1001, and NEC
InfoFrame DWH Appliances are HA systems, which means that they have two host servers
for managing Netezza operations. The host server (often referred to as the host within the documentation) is a Linux server that runs the Netezza software and utilities. This chapter describes some high-level concepts and basic administration tasks for the Netezza HA environment.
4-1
IBM Netezza System Administrator's Guide
data is written to the /nz partition and the /export/home partition on the primary host, the DRBD software automatically makes the same changes to the /nz and /export/home partition of the standby host.
The Netezza implementation uses DRBD in a synchronous mode, which is a tightly coupled mirroring system. When a block is written, the active host does not record the write as complete until both the active and the standby hosts successfully write the block; the active host must receive an acknowledgement from the standby host that it has also completed the write. Synchronous mirroring (DRBD protocol C) is most often used in HA environments that want the highest possible assurance of no lost transactions should the active node fail over to the standby node. Heartbeat typically controls the DRBD services, but commands are available to manually manage the services.
For details about DRBD and its terms and operations, see the documentation available at
http://www.drbd.org.
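The protocol C acknowledgement rule described above can be sketched as a toy shell model (illustrative only; DRBD implements this internally, and the function names here are invented for the sketch):

```shell
# Toy model of DRBD protocol C semantics: the active host reports a
# block write as complete only after the local write succeeds AND the
# standby host acknowledges its own copy of the write.
local_write() { echo "active host wrote block"; return 0; }
standby_ack() { echo "standby host acknowledged block"; return 0; }

if local_write && standby_ack; then
  echo "write complete"
else
  echo "write pending"
fi
```

If either step fails (for example, the standby never acknowledges), the write is not reported as complete, which is the property that protects against lost transactions on failover.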
Enable the NPS service
  Previous cluster manager: cluadmin -- service enable nps
  Linux-HA: crm_resource -r nps -p target_role -v started
Disable the NPS service
  Previous cluster manager: cluadmin -- service disable nps
  Linux-HA: crm_resource -r nps -p target_role -v stopped
Start the cluster on each node
  Previous cluster manager: service cluster start
  Linux-HA: service heartbeat start
Stop the cluster on each node
  Previous cluster manager: service cluster stop
  Linux-HA: service heartbeat stop
In some customer environments that used the previous cluster manager solution, it was
possible to have only the active host running while the secondary was powered off. If
problems occurred on the active host, the Netezza administrator onsite would power off
the active host and power on the standby. In the new Linux-HA DRBD solution, both
HA hosts must be operational at all times. DRBD ensures that the data saved on both
hosts is synchronized, and when Heartbeat detects problems on the active host, the
software automatically fails over to the standby with no manual intervention.
Linux-HA Administration
When you start a Netezza HA system, Heartbeat automatically starts on both hosts. It can
take a few minutes for Heartbeat to start all the members of the nps resource group. You
can use the crm_mon command from either host to observe the status, as described in
Monitoring the Cluster and Resource Group Status on page 4-6.
Heartbeat Configuration
Heartbeat loads its configuration first from the /etc/ha.d/ha.cf configuration file. The file contains low-level information about fencing mechanisms, timing parameters, and whether the configuration is v1 (old-style) or v2 (CIB). Netezza uses the v2 implementation.
Do not modify the file unless directed to do so by Netezza documentation or by Netezza Support.
CIB
The majority of the Heartbeat configuration is stored in the Cluster Information Base (CIB). The CIB is located on disk at /var/lib/heartbeat/crm/cib.xml. Heartbeat synchronizes it automatically between the two Netezza hosts.
NEVER manually edit the CIB file! You must use cibadmin (or crm_resource) to modify the
Heartbeat configuration. Wrapper scripts like heartbeat_admin.sh will update the file in a
safe way.
Note: It is possible to get into a situation where Heartbeat will not start properly due to a manual CIB modification, although the CIB cannot be safely modified without Heartbeat being started (that is, cibadmin cannot run). In this situation, you can run /nzlocal/scripts/heartbeat_config.sh to reset the CIB and /etc/ha.d/ha.cf to factory-default status. After doing this, it is necessary to run /nzlocal/scripts/heartbeat_admin.sh --enable-nps to complete the CIB configuration.
the default active host and so HA1 is often synonymous with the active host. The names
HA1 and HA2 are still used to refer to the host servers regardless of their active/standby
role.
In IBM Netezza HA system designs, host1/HA1 is configured by default to be the active
host. You can run cluster management commands from either the active or the standby
host. The nz* commands must be run on the active host, but the commands run the same
regardless of whether host 1 or host 2 is the active host. The Netezza software operation is
not affected by the host that it runs on; the operation is identical when either host 1 or host
2 is the active host.
However, when host 1 is the active host, certain system-level operations such as S-Blade
restarts and reboots often complete more quickly than when host 2/HA2 is the active host.
An S-Blade reboot can take one to two minutes longer to complete when host 2 is the
active host. Certain tasks such as manufacturing and system configuration scripts can
require host 1 to be the active host, and they will display an error if run on host 2 as the
active host. The documentation for these commands indicates whether they require host 1
to be the active host, or if special steps are required when host 2 is the active host.
Guide for your model type. Refer to that guide if you need to perform any of these
procedures.
Table 4-2: Cluster Management Scripts
Type Scripts
Note: The following is a list of other available Linux-HA commands. This list is provided as a reference, but it is highly recommended that you do not use any of these commands unless directed to do so by Netezza documentation or by Netezza Support.
The host running the nps resource group is considered the active host. Every member of the
nps resource group will start on the same host. The output above shows that they are all
running on nzhost1, which means that nzhost1 is the active host.
Note: If the nps resource group is unable to start, or if it has been manually stopped (such
as by crm_resource -r nps -p target_role -v stopped), neither host is considered to be active.
If this is the case, crm_mon will either show individual resources in the nps group as
stopped, or it will not show the nps resource group at all.
Although the crm_resource output shows that the MantraVM service is started, this is a general status for Heartbeat monitoring. For details on the MantraVM status, use the service mantravm status command, which is described in Displaying the Status of the MantraVM Service on page 14-4.
Note: The crm_mon output also shows the name of the Current DC. The Designated Coordinator (DC) host is not an indication of the active host. The DC is an automatically assigned role that Linux-HA uses to identify a node that acts as a coordinator when the cluster is in a healthy state. This is a Linux-HA implementation detail and does not impact Netezza. Each host is capable of recognizing and recovering from failure, regardless of which one is the DC. For more information about the DC and Linux-HA implementation details, see http://www.linux-ha.org/DesignatedCoordinator.
fabric_ip
wall_ip
nz_dnsmasq
mantravm
nzinit
The order of the members of the group matters; group members are started sequentially from first to last. They are stopped sequentially in reverse order, from last to first. Heartbeat blocks on each member's startup and will not attempt to start the next group member until the previous member has started successfully. If any member of the resource group is unable to start (returns an error or times out), Heartbeat performs a failover to the standby node.
Note: The mantravm resource is not a blocking resource; that is, if the MantraVM service
does not start when the nps resource group is starting, the nps resource group does not wait
for the MantraVM to start.
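The ordered start and reverse-order stop behavior can be sketched as follows (a toy illustration using a subset of the member names listed above; Heartbeat performs this sequencing itself):

```shell
# Toy sketch of resource group ordering: members start first to last,
# and stop in reverse order, last to first.
members=(fabric_ip wall_ip nz_dnsmasq mantravm nzinit)

# Start first to last.
for m in "${members[@]}"; do
  echo "start $m"
done

# Stop last to first.
for ((i=${#members[@]}-1; i>=0; i--)); do
  echo "stop ${members[i]}"
done
```

The sketch prints the start lines in list order and the stop lines in reverse, matching the behavior the paragraph describes.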
Failover Criteria
During a failover or resource migration, the nps resource group is stopped on the active
host and started on the standby host. The standby host then becomes the active host.
It is important to differentiate between a resource failover and a resource migration (or relocation). A failover is an automated event which is performed by the cluster manager without human intervention when it detects a failure case. A resource migration occurs when an administrator intentionally moves the resources to the standby.
A failover can be triggered by any of the following events:
BOTH maintenance network links to the active host are lost.
ALL fabric network links to the active host are lost.
A user manually stops Heartbeat on the active host.
The active host is cleanly shut down, such as if someone issued the command
shutdown -h on that host.
The active host is uncleanly shut down, such as during a power failure to the system
(both power supplies fail).
If any member of the nps resource group cannot start properly when the resource group
is initially started.
If any one of the following members of the nps resource group fails after the resource
group was successfully started:
drbd_exphome_device or drbd_nz_device: These correspond to low-level DRBD
devices that serve the shared filesystems. If these devices fail, the shared data
would not be accessible on that host.
exphome_filesystem or nz_filesystem: These are the actual mounts for the DRBD
devices.
nz_dnsmasq: The DNS daemon for the Netezza system.
Note: If any of these resource group members experiences a failure, Heartbeat first tries to restart or repair the process locally. The failover is triggered only if that repair or restart process does not work. Other resources in the group not listed above are not monitored for failover detection.
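The repair-then-failover policy in the note above can be sketched as a toy shell function (illustrative only; Heartbeat implements this policy internally, and the function name is invented for the sketch):

```shell
# Toy sketch of the repair-then-failover policy: a monitored member
# that fails gets a local restart attempt first; a failover is
# triggered only if the local restart also fails.
handle_failure() {
  local member=$1 restart_ok=$2
  if [ "$restart_ok" = yes ]; then
    echo "restarted $member locally; no failover"
  else
    echo "local restart of $member failed; failing over nps resource group"
  fi
}

handle_failure nz_dnsmasq yes
handle_failure nz_dnsmasq no
```

The first call models a successful local repair (no failover); the second models the case where the repair fails and the resource group relocates to the standby.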
To relocate the nps resource group from the active host to the standby host:
[root@nzhost1 ~]# /nzlocal/scripts/heartbeat_admin.sh --migrate
Testing DRBD communication channel...Done.
Checking DRBD state...Done.
File systems and eth2 on this host are okay. Going on.
File systems and eth2 on other host are okay. Going on.
This script will configure Host 1 or 2 to own the shared disks and
own the fabric.
Running nz_dnsmasq: [ OK ]
nz_dnsmasq started.
5. As root, start the cluster on the first node, which will become the active node:
[root@nzhost1 ~]# service heartbeat start
Starting High-Availability services:
[ OK ]
6. As root, start the cluster on the second node, which will become the standby node:
[root@nzhost2 ~]# service heartbeat start
Starting High-Availability services:
[ OK ]
5. After you configure the maillist files, test the event mail by shutting down or rebooting
either host in the cluster. Your specified TO addresses should receive email about the
event.
DRBD Administration
DRBD provides replicated storage of the data in managed partitions (that is, /nz and /export/home). When a write occurs to one of these locations, the write action is performed at both the local node and the peer standby node. Both perform the same write to keep the data in synchronization. The peer responds to the active node when finished, and if the local write operation is also successfully finished, the active node reports the write as complete.
Connected: the normal operating state; the host is communicating with its peer.
WFConnection: the host is waiting for its peer node connection; usually seen when the other node is rebooting.
Standalone: the node is functioning alone due to a lack of network connection with its peer and will not try to reconnect. If the cluster is in this state, data is not being replicated. Manual intervention is required to fix this problem.
The common State values include the following:
Primary: the primary image; local on the active host.
Secondary: the mirror image, which receives updates from the primary; local on the standby host.
Unknown: always shown for the other host; the state of its image is unknown.
The common Disk State values include the following:
UpToDate: the data on the image is current.
DUnknown: an unknown data state; usually results from a broken connection.
The DRBD status when the current node is active and the standby node is down:
m:res cs st ds p mounted fstype
0:r1 WFConnection Primary/Unknown UpToDate/DUnknown C /export/home ext3
1:r0 WFConnection Primary/Unknown UpToDate/DUnknown C /nz ext3
Split-Brain
Split-brain is an error state that occurs when the images of data on each Netezza host are different. It typically occurs when synchronization is disabled and users change data independently on each Netezza host. As a result, the two Netezza host images are different, and it becomes difficult to resolve what the latest, correct image should be.
Split-brain does not occur if clustering is enabled. The fencing controls prevent users from
changing the replicated data on the standby node. It is highly recommended that you allow
DRBD management to be controlled by Heartbeat to avoid the split-brain problems.
However, if a split-brain problem should occur, the following message appears in the /var/log/messages file:
5. You can check the status of the fix using the drbdadm primary resource and service drbd status commands. Make sure that you run drbdadm secondary resource before you start Heartbeat.
IP Address Requirements
Table 4-3 is an example block of the eight IP addresses that are recommended for a customer to reserve for an HA system:
Table 4-3: HA IP Addresses
HA1 172.16.103.209
Floating IP 172.16.103.212
HA2 172.16.103.213
Reserved 172.16.103.215
Reserved 172.16.103.216
In the IP addressing scheme, note that there are two host IPs, two host management IPs,
and the floating IP, which is HA1 + 3.
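The HA1-plus-3 relationship can be verified with simple shell arithmetic on the last octet (a sketch that assumes the reserved block does not cross an octet boundary, as in the Table 4-3 example):

```shell
# Derive the floating IP from the HA1 address by adding 3 to the last
# octet. Assumes the block does not cross an octet boundary, as in
# the Table 4-3 example.
ha1="172.16.103.209"
prefix="${ha1%.*}"    # 172.16.103
last="${ha1##*.}"     # 209
floating="${prefix}.$((last + 3))"
echo "$floating"      # prints 172.16.103.212
```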
Output From crm_mon Does Not Show the nps Resource Group
If the log messages indicate that the nps resource group "cannot run anywhere," the cause is that Heartbeat tried to run the resource group on both HA1 and HA2, but it failed in both cases. Search in /var/log/messages on each host to find this first failure. Search from the bottom of the log for the message "cannot run anywhere" and then scan upward in the log to find the service failures. You must fix the problem(s) that caused a service to fail to start before you can successfully start the cluster.
After you fix the failure case, you must restart Heartbeat following the instructions in Transitioning from Maintenance to Clustering Mode on page 4-11.
The sample output shows three sessions: the last entry is the session created to generate the results for the nzsession command. The first two entries are user activity; you should wait for those sessions to complete, or stop them, before you use the nz.heartbeat.sh or nz.non-heartbeat.sh commands.
To check for connections to the /export/home and /nz directory:
1. As the nz user on the active host, stop the Netezza software:
[nz@nzhost1 ~]$ /nz/kit/bin/nzstop
2. Log out of the nz account and return to the root account; then use the lsof command to
list any open files that reside in /nz or /export/home. Sample output follows:
[root@nzhost1 ~]# lsof /nz /export/home
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
bash 2913 nz cwd DIR 8,5 4096 1497025 /export/home/nz
indexall. 4493 nz cwd DIR 8,5 4096 1497025 /export/home/nz
less 7399 nz cwd DIR 8,5 4096 1497025 /export/home/nz
lsof 13205 nz cwd DIR 8,5 4096 1497025 /export/home/nz
grep 13206 nz cwd DIR 8,5 4096 1497025 /export/home/nz
tail 22819 nz 3r REG 8,5 146995 1497188 /export/home/nz/fpga_135.log
This example shows that there are several open files in /export/home. If necessary, you could close the open files using a command such as kill, supplying the process ID (PID) shown in the second column. Use caution with the kill command; if you are not familiar with Linux system commands, contact Support or your Linux system administrator for assistance.
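If you need to act on several open files, you can extract the PID column from saved lsof output before reviewing each process. A sketch follows (the saved file name is hypothetical, and the two data lines are copied from the sample output above):

```shell
# Extract the PID column (field 2) from saved lsof output, skipping
# the header line. The saved file name is hypothetical.
cat > /tmp/lsof_nz.txt <<'EOF'
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
bash 2913 nz cwd DIR 8,5 4096 1497025 /export/home/nz
less 7399 nz cwd DIR 8,5 4096 1497025 /export/home/nz
EOF
awk 'NR > 1 { print $2 }' /tmp/lsof_nz.txt
```

Review each PID before deciding whether to stop it; as noted above, kill is destructive and should be used with caution.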
This chapter describes administration tasks for hardware components of the Netezza appliance. Most of the administration tasks focus on obtaining status and information about the operation of the appliance and on becoming familiar with the hardware states. This chapter also describes tasks to perform should a hardware component fail.
Host servers: Each Netezza HA system has one or two host servers to run the Netezza software and supporting applications. If a system has two host servers, the hosts operate in a highly available (HA) configuration; that is, one host is the active or primary host, and the other is a standby host ready to take over should the active host fail. Tasks include monitoring of the hardware status of the active/standby hosts, and occasional monitoring of disk space consumption on the hosts. At times, the host may require Linux OS or health driver upgrades to improve its operational software.
Snippet processing arrays (SPAs): SPAs contain the SPUs and associated disk storage which drive the query processing on the Netezza appliance. IBM Netezza 100 systems have one host server and thus are not HA configurations. Tasks include monitoring of the SPA environment, such as fans, power, temperature, and so on. SPUs and disks are monitored separately.
Storage group: In the IBM Netezza High Capacity Appliance C1000 model, disks reside within a storage group. The storage group consists of three disk enclosures: an intelligent storage enclosure with redundant hardware RAID controllers, and two expansion disk enclosures. There are four storage groups in each C1000 rack. Tasks include monitoring the status of the disks within the storage group.
Disks: Disks are the storage media for the user databases and tables managed by the Netezza appliance. Tasks include monitoring the health and status of the disk hardware. Should a disk fail, tasks include regenerating the disk to a spare and replacing the disk.
Data slices: Data slices are virtual partitions on the disks that contain user databases and tables. Each partition has a redundant copy to ensure that the data can survive one disk failure. Tasks include monitoring the status or health of the data slices and also the space consumption of the data slice.
Fans and blowers: These components control the thermal cooling for the racks and components such as SPAs and disk enclosures. Tasks include monitoring the status of the fans and blowers, and should a component fail, replacing the component to ensure proper cooling of the hardware.
Power supplies: These components provide electrical power to the various hardware components of the system. Tasks include monitoring the status of the power supplies, and should a component fail, replacing the component to ensure redundant power to the hardware.
The Netezza appliance uses SNMP events (described in Chapter 7, Managing Event Rules) and status indicators to send notifications of any hardware failures. Most hardware components are redundant; thus, a failure typically means that the remaining hardware components will assume the work of the component that failed. The system may or may not be operating in a degraded state, depending upon the component that failed.
Never run the system in a degraded state for a long period of time. It is imperative to
replace a failed component in a timely manner so that the system returns to an optimal
topology and best performance.
Netezza Support and Field Service will work with you to replace failed components to
ensure that the system returns to full service as quickly as possible. Most of the system
components require Field Service support to replace. Components such as disks can be
replaced by customer administrators.
For an IBM Netezza High Capacity Appliance C1000 system, the nzhw output shows the
storage group information, for example:
Figure 5-2: Sample nzhw show Output (IBM Netezza C1000 Systems)
Hardware Types
Each hardware component of the Netezza system has a type that identifies the hardware
component. Table 5-2 describes the hardware types. You see these types when you run the
nzhw command or display hardware using the NzAdmin or Web Admin UIs.
Description Comments
Disk Enclosure: A disk enclosure chassis, which contains the disk devices
Blower: A fan pack used within the S-Blade chassis for thermal cooling
MM: A management device for the associated unit (SPU chassis, disk enclosure). These devices include the AMM and ESM components, or a RAID controller for an intelligent storage enclosure in a Netezza C1000 system.
Store Group: A group of three disk enclosures within an IBM Netezza C1000 system managed by redundant hardware RAID controllers
Ethernet Switch: Ethernet switch (for internal network traffic on the system)
Host disk: A disk resident on the host that provides local storage to the host
Database accelerator card: A Netezza Database Accelerator Card (DAC), which is part of the S-Blade/SPU pair
Hardware IDs
Each hardware component has a unique hardware identifier (ID) in the form of an integer, such as 1000, 1001, 1014, and so on. You can use the hardware ID to perform operations on a specific hardware component, or to uniquely identify a component in command output or other informational displays.
To display information about the component with the hardware ID 1011:
[nz@nzhost ~]$ nzhw show -id 1011
Description HW ID Location Role State
----------- ----- -------------------- ------ -----
Disk 1011 spa1.diskEncl4.disk1 Active Ok
Hardware Location
Netezza uses two formats to describe the position of a hardware component within a rack.
The logical location is a string in a dot format that describes the position of a hardware
component within the Netezza rack. For example, the nzhw output shown in Figure 5-1
on page 5-3 shows the logical location for components; a Disk component description
follows:
Disk 1011 spa1.diskEncl1.disk1 Active Ok
In this example, the location of the disk is in SPA 1, disk enclosure one, disk position
one.
Similarly, the disk location for a disk on an IBM Netezza C1000 system shows the location including storage group:
Disk 1029 spa1.storeGrp1.diskEncl2.disk5 Active Ok
The physical location is a text string that describes the location of a component. You
can display the physical location of a component using the nzhw locate command. For
example, to display the physical location of disk ID 1011:
[nz@nzhost ~]$ nzhw locate -id 1011
Turned locator LED 'ON' for Disk: Logical
Name:'spa1.diskEncl4.disk1' Physical Location:'1st Rack, 4th
DiskEnclosure, Disk in Row 1/Column 1'
As shown in the command output, the nzhw locate command also lights the locator LED for components such as SPUs, disks, and disk enclosures. For hardware components that do not have LEDs, the command displays the physical location string.
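Because the logical location is a dot-delimited string, its components can be split apart in the shell when scripting against nzhw output. A minimal sketch, using the C1000 example location shown earlier:

```shell
# Split a logical location string into its dot-delimited components
# (SPA, storage group, enclosure, disk position).
loc="spa1.storeGrp1.diskEncl2.disk5"
IFS='.' read -ra parts <<< "$loc"
for p in "${parts[@]}"; do
  echo "$p"
done
```

This prints each component on its own line (spa1, storeGrp1, diskEncl2, disk5), which is convenient for filtering or reporting on a particular enclosure or SPA.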
Figure 5-3 shows an IBM Netezza 1000-12 system or an IBM PureData System for Analytics N1001-010 system with a closer view of the storage arrays and SPU chassis components and locations.
[Figure 5-3 callouts: SPU Chassis 1 and SPU Chassis 2. SPU1 occupies slots 1 and 2; SPU3 occupies slots 3 and 4; and so on up to SPU 11, which occupies slots 11 and 12.]
Figure 5-4 shows an IBM Netezza C1000-4 system with a closer view of the storage groups
and SPU chassis components and locations.
[Figure 5-4 callouts: host servers and KVM at the top of the rack, with Storage Groups 3 and 4 in numbered rows below, and SPU Chassis 1 beneath them. SPU1 occupies slots 1 and 2; SPU3 occupies slots 3 and 4; SPU9 occupies slots 9 and 10; SPU 11 occupies slots 11 and 12.]
For detailed information about the locations of various components in the front and back of the system racks, see the Site Preparation and Specifications: IBM Netezza C1000 Systems guide.
Hardware Roles
Each hardware component of the Netezza system has a hardware role, which represents
how the hardware is being used. Table 5-3 describes the hardware roles. You see these
roles when you run the nzhw command or display hardware status using the NzAdmin or
Web Admin UIs.
None: The None role indicates that the hardware is initialized, but it has yet to be discovered by the Netezza system. This usually occurs during system startup before any of the SPUs have sent their discovery information. Comment: All active SPUs must be discovered before the system can transition from the Discovery state to the Initializing state.
Active: The hardware component is an active system participant. Failing over this device could impact the Netezza system. Comment: Normal system state.
Failed: The hardware has failed. It cannot be used as a spare. After maintenance has been performed, you must activate the hardware using the nzhw command before it can become a spare and be used in the system. Comment: Monitor your supply of spare disks. Do not operate without spare disks.
Mismatched: This role is specific to disks. If the disk has a UUID that does not match the host UUID, then it is considered mismatched. You must activate the hardware using the nzhw command before it can become a spare and be used in the system. Comment: To use the SPU as a spare, activate it; otherwise, remove it from the system. To delete it from the system catalog, use the nzhw delete command.
Spare: The hardware is not used in the current running Netezza system, but it is available to become active in the event of a failover. Comment: Normal system state. After a new disk is added to the system, its role is set to Spare.
Incompatible: The hardware is incompatible with the system. It should be removed and replaced with compatible hardware. Comment: Some examples are disks that are smaller in capacity than the smallest disk in use, or blade cards which are not Netezza SPUs.
Hardware States
The state of a hardware component represents the power status of the hardware. Each hardware component has a state. Table 5-4 describes the hardware states for all components except a SPU.
Note: SPU states are the system states, which are described in Table 6-3 on page 6-4.
You see these states when you run the nzhw command or display hardware status using the
NzAdmin or Web Admin UIs.
None: The None state indicates that the hardware is initialized, but it has yet to be discovered by the Netezza system. This usually occurs during system startup before any of the SPUs have sent their discovery information. Comment: All active SPUs must be discovered before the system can transition from the Discovery state to the Initializing state. If any active SPUs are still in the Booting state, there could be an issue with the hardware startup.
Invalid
Missing: The system manager has detected a new device in a slot that was previously occupied but not deleted. Comment: This typically occurs when a disk or SPU has been removed and replaced with a spare without deleting the old device. The old device is considered absent because the system manager cannot find it within the system.
Unreachable: The system manager cannot communicate with a previously discovered device. Comment: The device may have been failed or physically removed from the system.
Critical: The management module has detected a critical hardware problem, and the problem component's amber service light may be illuminated. Comment: Contact Netezza Support to obtain help with identifying and troubleshooting the cause of the critical alarm.
Note: The system manager also monitors the management modules (MMs) in the system,
which have a status view of all the blades in the system. As a result, you may see messages
similar to the following in the sysmgr.log file:
2011-05-18 13:34:44.711813 EDT Info: Blade in SPA 5, slot 11 changed
from state 'good' to 'discovering', reason is 'No critical or warning
events'
2011-05-18 13:35:33.172005 EDT Info: Blade in SPA 5, slot 11 changed
from state 'discovering' to 'good', reason is 'No critical or warning
events'
A transition from good to discovering indicates that the IMM (a management processor on the blade) rebooted and that it is querying the blade hardware for status. The blade remains in the discovering state during the query. The IMM then determines whether the blade hardware state is good, warning, or critical, and posts the result to the AMM. The system manager reports the AMM status using these log messages. You can ignore these normal messages. However, if you see these messages frequently for the same blade, there may be an issue with the IMM processor on that blade.
[Figure: SPU-to-disk mapping. For example, SPU 1003 owns the disks in enclosure slots 9 through 16 (disk IDs 1070, 1032, 1051, 1013, 1071, 1033, 1052, 1014), and SPU 1164 owns the disks in slots 55 through 62 (disk IDs 1134, 1153, 1096, 1115, 1135, 1154, 1097, 1116).]
If a SPU fails, the system moves all its data slices to the remaining active SPUs for management. The system moves them in pairs (the pair of disks that contain the primary and mirror data slices of each other). In this situation, some SPUs will have 10 data partitions (numbered 0 to 9).
If you use the nzhw command to activate, fail, or otherwise manage disks manually, the RAID controllers will ensure that the action is allowed at that time; in some cases, commands will return an error when the requested operation, such as a disk failover, is not allowed.
The RAID controller caches are disabled when any of the following conditions occur:
Battery failure
Cache backup device failure
Peer RAID controller failure (that is, a loss of the mirrored cache)
When the cache is disabled, the storage group (and the Netezza system) experiences a performance degradation until the condition is resolved and the cache is enabled again.
Figure 5-6 shows an illustration of the SPU/storage mapping. Each SPU in a Netezza C1000 system owns 9 user data slices by default. Each data slice is supported by a three-disk RAID 5 storage array. The RAID 5 array can support a single disk failure within the three-disk array. (More than one disk failure within the three-disk array results in the loss of the data slice.) Seven disks within the storage group in a RAID 5 array are used to hold important system information such as the nzlocal, swap, and log partitions.
If a SPU fails, the system manager distributes the user data partitions and the nzlocal and
log partitions to the other active SPUs in the same SPU chassis. A Netezza C1000 system
requires a minimum of three active SPUs; if only three SPUs are active and one fails, the
system transitions to the down state.
When disks regenerate to spares, it is possible to have an unbalanced topology where the
disks are not evenly distributed among the odd- and even-numbered enclosures. This causes one of the
SAS (also called HBA) paths, which are shown as the dark lines connecting the blade chas-
sis to the disk enclosures, to carry more traffic than the other.
[Figure: four disk enclosures (Enclosure 1 through Enclosure 4) connected to the blade
chassis by SAS paths.]
The system manager can detect and respond to disk topology issues. For example, if an S-
Blade has more disks in the odd-numbered enclosures of its array, the system manager
reports the problem as an overloaded SAS bus. You can use the nzhw rebalance command
to reconfigure the topology so that half of the disks are in the odd-numbered enclosures
and half in the even-numbered. (The rebalance process requires the system to transition to
the pausing now state to accomplish the topology update.)
When the Netezza system restarts, the restart process checks for topology issues such as
overloaded SAS buses or SPAs that have S-Blades with uneven shares of data slices. If the
system detects a spare S-Blade, for instance, it reconfigures the data slice topology to
distribute the workload fairly among the S-Blades.
Callhome File
The callHome.txt file resides in the /nz/data/config directory and it defines important infor-
mation about the Netezza system such as primary and secondary administrator contact
information, as well as system information such as location, model number, and serial
number. Typically, the Netezza installation team member edits this file for you when the
Netezza system is installed onsite, but you can review and/or edit the file as needed to
ensure that the contact information is current. For more information about configuring call-
home, see Adding an Event Rule on page 7-8.
The disks should be replaced to ensure that the system has spares and an optimal topology.
You can also use the NzAdmin and Web Admin interfaces to obtain visibility to hardware
issues and failures.
Managing Hosts
In general, there are very few management tasks relating to the Netezza hosts. In most
cases, the tasks are best practices for the optimal operation of the host. For example:
Do not change or customize the kernel or operating system files unless directed to do
so by Netezza Support or Netezza customer documentation. Changes to the kernel or
operating system files could impact the performance of the host.
Do not install third-party software on the Netezza host without consulting Netezza Sup-
port. While management agents or other applications may be of interest, it is important
to work with Support to ensure that third-party applications do not interfere with the
host processing.
During Netezza software upgrades, host and kernel software revisions are verified to
ensure that the host software is operating with the latest required levels. The upgrade
processes may display messages informing you to update the host software to obtain
the latest performance and security features.
On IBM Netezza 1000, C1000, IBM PureData System for Analytics N1001, and NEC
InfoFrame DWH Appliances, Netezza uses DRBD replication only on the /nz and
/export/home partitions. As new data is written to the Netezza /nz partition and the
/export/home partition on the primary Netezza system, the DRBD software automati-
cally makes the same changes to the /nz and /export/home partition of the standby
Netezza system.
Use caution when saving files to the host disks; in general, it is not recommended that
you store Netezza database backups on the host disks, nor use the host disks to store
large files that could grow and fill the host disks over time. Be sure to clean up and
remove any temporary files that you create on the host disks to keep the disk space as
available as possible for Netezza software and database use.
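As one way to follow this practice, free space on the host partitions can be checked periodically. The following is a minimal sketch in POSIX shell, not a Netezza utility; the partition list is an assumption based on the paths named above, so adjust it for your installation:

```shell
#!/bin/sh
# Report how full each host filesystem is so that large files are not
# saved to a nearly full disk. The paths checked are the partitions
# mentioned in this section; missing paths are skipped silently.
for fs in / /nz /export/home; do
  # df -P prints one POSIX-format line per filesystem; field 5 is Use%.
  pct=$(df -P "$fs" 2>/dev/null | awk 'NR==2 { gsub(/%/, "", $5); print $5 }')
  if [ -n "$pct" ]; then
    echo "$fs is ${pct}% full"
  fi
done
```

A periodic job running such a check can warn administrators before the /nz partition fills.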
If the active host fails, the Netezza HA software typically fails over to the standby host to
keep the Netezza operations running. Netezza Support will work with you to schedule field
service to replace the failed host.
Managing SPUs
Snippet Processing Units (SPUs) or S-Blades are hardware components that serve as the
query processing engines of the Netezza appliance. Each SPU has CPUs and FPGAs as well
as memory and I/O to process queries and query results. Each SPU has associated data
partitions that it owns to store the portions of the user databases and tables that the SPU
processes during queries.
The basic SPU management tasks are as follows:
Monitor status and overall health
Activate a spare SPU
Deactivate a spare SPU
Failover a SPU
Locate a SPU in the Netezza rack
Reset (power cycle) a SPU
Delete a failed, inactive, or incompatible SPU
Replace a failed SPU
The following sections describe how to perform these tasks.
You can use the nzhw command to activate, deactivate, failover, locate, and reset a SPU, or
delete SPU information from the system catalog. For more information about the nzhw
command syntax and options, see nzhw on page A-26.
To indicate which SPU you want to control, you can refer to the SPU using its hardware ID.
You can use the nzhw command to display the IDs, as well as obtain the information from
management UIs such as NzAdmin or Web Admin.
Activate a SPU
You can use the nzhw command to activate a SPU that is inactive or failed.
To activate a SPU:
nzhw activate -u admin -pw password -host nzhost -id 1004
Deactivate a SPU
You can use the nzhw command to make a spare SPU unavailable to the system. If the
specified SPU is active, the command displays an error.
To deactivate a spare SPU:
nzhw deactivate -u admin -pw password -host nzhost -id 1004
Failover a SPU
You can use the nzhw command to initiate a SPU failover.
To failover a SPU, enter:
nzhw failover -u admin -pw password -host nzhost -id 1004
Locate a SPU
You can use the nzhw command to turn on or off a SPU's LED and display the physical
location of the SPU. The default is on.
To locate a SPU, enter:
nzhw locate -u admin -pw password -host nzhost -id 1082
Turned locator LED 'ON' for SPU: Logical Name:'spa1.spu11' Physical
Location:'1st Rack, 1st SPA, SPU in 11th slot'
To turn off a SPU's LED, enter:
nzhw locate -u admin -pw password -host nzhost -id 1082 -off
Turned locator LED 'OFF' for SPU: Logical Name:'spa1.spu11'
Physical Location:'1st Rack, 1st SPA, SPU in 11th slot'
Reset a SPU
You can use the nzhw command to power cycle a SPU (a hard reset).
To reset a SPU, enter:
nzhw reset -u admin -pw password -id 1006
Managing Disks
The disks on the system store the user databases and tables that are being managed and
queried by the Netezza appliance. The basic disk management tasks are as follows:
Monitor status and overall health
Activate an inactive, failed, or mismatched disk
Deactivate a spare disk
Failover a disk
Locate a disk in the Netezza rack
Delete a failed, inactive, mismatched, or incompatible disk
Replace a failed disk
The following sections describe how to perform these tasks.
You can use the nzhw command to activate, deactivate, failover, and locate a disk, or delete
disk information from the system catalog. The following sections describe how to perform
these tasks. For more information about the nzhw command syntax and options, see
nzhw on page A-26.
As a best practice to protect against data loss, never remove a disk from an enclosure or
remove a RAID controller or ESM card from its enclosure unless directed to do so by
Netezza Support or when you are using the hardware replacement procedure documenta-
tion. If you remove an Active or Spare disk drive, you could cause the system to restart or
transition to the down state. Data loss and system issues can occur if these components are
removed when it is not safe to do so.
Note: Netezza C1000 systems have RAID controllers to manage the disks and hardware in
the storage groups. You cannot deactivate a disk on a C1000 system. Also, the commands
to activate, fail, or delete a disk may return an error if the storage group cannot support the
action at that time.
To indicate which disk you want to control, you can refer to the disk using its hardware ID.
You can use the nzhw command to display the IDs, as well as obtain the information from
management UIs such as NzAdmin or Web Admin.
Activate a Disk
You can use the nzhw command to make an inactive, failed, or mismatched disk available
to the system as a spare.
To activate a disk:
nzhw activate -u admin -pw password -host nzhost -id 1004
In some cases, the system may display a message that it cannot activate the disk yet
because the SPU has not finished an existing activation request. Disk activation usually
occurs very quickly, unless there are several activations taking place at the same time. In
this case, later activations wait until they are processed in turn.
Note: For a Netezza C1000 system, you cannot activate a disk that is still being used by
the RAID controller for a regeneration or other task. If the disk cannot be activated, an error
message similar to the following appears:
Error: Can not update role of Disk 1004 to Spare - The disk is
still part of a non healthy array. Please wait for the array to
become healthy before activating.
Deactivate a Disk
You can use the nzhw command to make a spare disk unavailable to the system.
To deactivate a disk:
nzhw deactivate -u admin -pw password -host nzhost -id 5004
Note: For a Netezza C1000 system, you cannot deactivate a disk. The command is not sup-
ported on the C1000 platform.
Failover a Disk
You can use the nzhw command to initiate a failover. You cannot fail over a disk until the
system is at least in the initialized state.
To failover a disk, enter:
nzhw failover -u admin -pw password -host nzhost -id 1004
On a Netezza C1000 system, when you fail a disk, the RAID controller automatically starts
a regeneration to a spare disk. Note that the RAID controller may not allow you to fail a disk
if you are attempting to fail a disk in a RAID 5 array that already has a failed disk.
Note: For a Netezza C1000 system, the RAID controller still considers a failed disk to be
part of the array until the regeneration is complete. After the regen completes, the failed
disk is logically removed from the array.
Locate a Disk
You can use the nzhw command to turn on or off a disk's LED. The default is on. The
command also displays the physical location of the disk.
To turn on a disk's LED, enter:
nzhw locate -u admin -pw password -host nzhost -id 1004
Turned locator LED 'ON' for Disk: Logical
Name:'spa1.diskEncl4.disk1' Physical Location:'1st Rack, 4th
DiskEnclosure, Disk in Row 1/Column 1'
To turn off a disk's LED, enter:
nzhw locate -u admin -pw password -host nzhost -id 1004 -off
Turned locator LED 'OFF' for Disk: Logical
Name:'spa1.diskEncl4.disk1' Physical Location:'1st Rack, 4th
DiskEnclosure, Disk in Row 1/Column 1'
You can also use the NzAdmin and Web Admin interfaces to obtain visibility to hardware
issues and failures.
Note: Data slice 2 in the sample output is regenerating due to a disk failure. For a Netezza
C1000 system, three disks hold the user data for a data slice; the fourth disk is the regen
target for the failed drive. The RAID controller still considers a failed disk to be part of
the array until the regeneration is complete. After the regen completes, the failed disk is
logically removed from the array.
To show detailed information about the data slices that are being regenerated:
[nz@nzhost ~]$ nzds show -regenstatus -detail
Data Slice Status    SPU  Partition Size (GiB) % Used Supporting Disks    Start Time          % Done
---------- --------- ---- --------- ---------- ------ ------------------- ------------------- ------
2          Repairing 1255 1         3725       0.00   1012,1028,1031,1056 2011-07-01 10:41:44 23
The status of a data slice shows the health of the data slice. Table 5-5 describes the status
values for a data slice. You see these states when you run the nzds command or display
data slices using the NzAdmin or Web Admin UIs.
State    Description
Healthy  The data slice is operating normally and the data is protected in a
         redundant configuration; that is, the data is mirrored (for Netezza
         100, Netezza 1000, or N1001 systems), or redundant (for Netezza
         C1000 systems).
You can use the nzspupart regen command or the NzAdmin interface to regenerate a disk.
If you do not specify any options, the system manager checks for any degraded partitions
and if found, starts a regeneration to the appropriate spare disk. An example follows:
[nz@nzhost ~]$ nzspupart regen
Are you sure you want to proceed (y|n)? [n] y
Info: Regen Configuration - Regen configured on SPA:1 Data slice 20 and 19
.
You can then use the nzspupart show -regenstatus or the nzds show -regenstatus command
to display the progress and details of the regeneration. Sample command output follows for
the nzds command, which shows the status for the data slices:
[nz@nzhost ~]$ nzds show -regenstatus
Data Slice Status    SPU  Partition Size (GiB) % Used Supporting Disks Start Time % Done
---------- --------- ---- --------- ---------- ------ ---------------- ---------- ------
19         Repairing 1057 3         356        5.80   1040,1052                   0
20         Repairing 1057 2         356        5.81   1040,1052                   0
Sample output for the nzspupart command follows. In this example, note that the com-
mand shows more detail about the partitions (data, swap, NzLocal, and log) that are being
regenerated:
[nz@nzhost ~]$ nzspupart show -regenstatus
SPU  Partition Id Partition Type Status    Size (GiB) % Used Supporting Disks              % Done Starttime
---- ------------ -------------- --------- ---------- ------ ----------------------------- ------ -------------------
1057 2            Data           Repairing 356        0.13   1032,1035                     0      2011-12-23 04:37:33
1039 101          Swap           Repairing 48         25.04  1030,1031,1032,1035,1036,1037 0      2011-12-23 04:37:33
1039 111          Log            Repairing 1          3.47   1032,1035                     91.336 2011-12-23 04:37:33
If you want to control the regen source and target destinations, you can specify the source
SPU and partition IDs, and the target or destination disk ID. The spare disk must reside in
the same SPA as the disk that you are regenerating. You can obtain the IDs for the source
partition from the output of the nzspupart show -details command.
To regenerate a degraded partition and specify the information for the source and
destination:
nzspupart regen -spu 1035 -part 7 -dest 1024
Note: Regeneration can take several hours to complete. If the system is idle and has no
other activity except the regen, or if the user data partitions are not very full, the regenera-
tion takes less time to complete. You can review the status of the regeneration using the
nzspupart show -regenStatus command. During the regeneration, note that user query per-
formance can be impacted while the system is busy processing the regeneration. Likewise,
user query activity can increase the time required for the regeneration.
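The status check described above can be automated with a small polling loop. This is a sketch, not product tooling; NZ_CMD is a stand-in variable so the logic is visible, and on a Netezza host it would simply be the nzspupart command shown in this section:

```shell
#!/bin/sh
# Poll the regeneration status until no partitions report Repairing.
# NZ_CMD and the 'Repairing' status string come from the examples in
# this section; the 300-second interval is an arbitrary choice.
NZ_CMD=${NZ_CMD:-"nzspupart show -regenStatus"}
INTERVAL=${INTERVAL:-300}   # seconds between polls

poll_regen() {
  # Loop while any line of the status output still says Repairing.
  while $NZ_CMD 2>/dev/null | grep -q 'Repairing'; do
    sleep "$INTERVAL"
  done
  echo "no partitions are repairing"
}

# On a Netezza host:
#   poll_regen
```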
A regeneration setup failure could occur if the system manager cannot remove the failed
disk from the RAID array, or if it cannot add the spare disk to the RAID array. If a regenera-
tion failure occurs, or if a spare disk is not available for the regeneration, the system
continues processing jobs. The data slices that lost their mirror continue to operate in an
unmirrored or Degraded state; however, you should replace your spare disks as soon as pos-
sible and ensure that all data slices are mirrored. If an unmirrored disk should fail, the
system will be brought to a down state.
Switch 1
port[1] 5 disks: [ 3:encl1Slot01 5:encl1Slot03 9:encl1Slot05 13:encl1Slot07
17:encl1Slot12 ] -> encl1
These warnings indicate problems in the path topology where storage components are over-
loaded. These problems can affect query performance and also system availability should
other path failures occur. Contact Support to troubleshoot these warnings.
To display detailed information about path failure problems, you can use the following
command:
[nz@nzhost ~]$ nzpush -a mpath -issues
spu0109: Encl: 4 Slot: 4 DM: dm-5 HWID: 1093 SN: number PathCnt: 1
PrefPath: yes
spu0107: Encl: 2 Slot: 8 DM: dm-1 HWID: 1055 SN: number PathCnt: 1
PrefPath: yes
spu0111: Encl: 1 Slot: 10 DM: dm-0 HWID: 1036 SN: number PathCnt: 1
PrefPath: no
If the command does not return any output, there are no path failures observed on the sys-
tem. It is not uncommon for some path failures to occur and then clear quickly. However, if
the command displays some output, as in this example, there are path failures on the sys-
tem and system performance could be degraded. The sample output shows that spu0111
is not using the higher performing preferred path (PrefPath: no) and there is only one path
to each disk (PathCnt: 1) instead of the normal 2 paths. Contact Netezza Support and
report the path failures to initiate troubleshooting and repair.
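Because any output from this check indicates a problem, it is straightforward to wrap in a script. The following sketch assumes a POSIX shell on the host; MPATH_CMD is a stand-in for the nzpush invocation shown above:

```shell
#!/bin/sh
# Flag path failures: the mpath issues check prints nothing when the
# paths are healthy, so any output at all is worth reporting to Support.
MPATH_CMD=${MPATH_CMD:-"nzpush -a mpath -issues"}

check_paths() {
  issues=$($MPATH_CMD 2>/dev/null)
  if [ -n "$issues" ]; then
    echo "path failures detected; report to Netezza Support:"
    printf '%s\n' "$issues"
    return 1
  fi
  echo "no path failures observed"
}

# On a Netezza host:
#   check_paths
```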
Note: It is possible to see errors reported in the nzpush command output even if the
nzds -topology command does not report any warnings. In these cases, the errors are still
problems in the topology, but they do not affect the performance and availability of the cur-
rent topology. Be sure to report any path failures to ensure that problems are diagnosed and
resolved by Support for optimal system performance.
Pause(ing) Now   Aborts only those transactions that cannot be restarted. Queues the transaction.
The following examples provide specific instances of how the system handles failovers that
happen before, during, or after data is returned.
If the pause -now occurs immediately after a BEGIN command completes, before data
is returned, the transaction is restarted when the system returns to an online state.
If a statement such as the following completes and then the system transitions, the
transaction can restart because data has not been modified and the reboot does not
interrupt a transaction.
BEGIN;
SELECT * FROM emp;
If a statement such as the following completes, but the system transitions before
the commit to disk, the transaction is aborted.
BEGIN;
INSERT INTO emp2 SELECT * FROM emp;
A statement such as the following can be restarted if it has not returned data, in this
case a single number that represents the number of rows in a table. This sample
includes an implicit BEGIN command.
SELECT count(*) FROM small_lineitem;
If a statement such as the following begins to return rows before the system transitions,
the statement will be aborted.
INSERT INTO emp2 SELECT * FROM externaltable;
Note that this transaction, and others that would normally be aborted, would be
restarted if the nzload -allowReplay option applied to the associated table.
Note: There is a retry count for each transaction. If the system transitions to
pause -now more than the number of retries allowed, the transaction is aborted.
Power Procedures
This section describes how to power on the Netezza and NEC InfoFrame DWH Appliance
systems as well as how to power-off the system. Typically, you would only need to power off
the system if you are moving the system physically within the data center, or in the event of
possible maintenance or emergency conditions within the data center.
The instructions to power on or off an IBM Netezza 100 system are available in the Site
Preparation and Specifications: IBM Netezza 100 Systems.
Note: To power cycle a Netezza system, you must have physical access to the system to
press power switches and to connect or disconnect cables. Netezza systems have keyboard/
video/mouse (KVM) units which allow you to enter administrative commands on the hosts.
Figure 5-8: Netezza 1001-6 and N1001-005 and Larger PDUs and Circuit Breakers
To close the circuit breakers (power up the PDUs), press in each of the 9 breaker pins
until they engage. Be sure to close the 9 pins on both main PDUs in each rack of the
system.
To open the circuit breakers (power off the PDUs), pull out each of the 9 breaker pins
on the left and the right PDU in the rack. If it becomes difficult to pull out the breaker
pins using your fingers, you could use a tool such as a pair of needle-nose pliers to gen-
tly pull out the pins.
On the IBM Netezza 1000-3 or IBM PureData System for Analytics N1001-002 models,
the main input power distribution units (PDUs) are located on the right and left sides of the
rack, as shown in Figure 5-9.
Figure 5-9: IBM Netezza 1000-3 and IBM PureData System for Analytics N1001-002 PDUs and Circuit Breakers
At the top of each PDU is a pair of breaker rocker switches. (Note that the labels on the
switches are upside down when you view the PDUs.)
To close the circuit breakers (power up the PDUs), you push the On toggle of the rocker
switch in. Make sure that you push in all four rocker switches, two on each PDU.
To open the circuit breakers (power off the PDUs), you must use a tool such as a small
flathead screwdriver; insert the tool into the hole labelled OFF and gently press until
the rocker toggle pops out. Make sure that you open all four of the rocker toggles, two
on each PDU.
Powering On the IBM Netezza 1000 and IBM PureData System for Analytics N1001
Follow these steps to power on IBM Netezza 1000 or IBM PureData System for Analytics
N1001 models:
1. Make sure that the two main power cables are connected to the data center drops;
there are two power cables for each rack of the system.
2. Do one of the following steps depending upon which system model you have:
For an IBM Netezza 1000-6 or larger model, or an IBM PureData System for Ana-
lytics N1001-005 or larger model, push in the 9 breaker pins on both the left and
right lower PDUs as shown in Figure 5-8 on page 5-27. (Repeat these steps for
each rack of the system.)
For an IBM Netezza 1000-3 or IBM PureData System for Analytics N1001-002
model, close the two breaker switches on both the left and right PDUs as shown in
Figure 5-9 on page 5-28.
3. The hosts will power on. Wait a minute for the power processes to complete, then log
in as root to one of the hosts and confirm that the Netezza software has started as
follows:
a. Run the crm_mon command to obtain the cluster status:
[root@nzhost1 ~]# crm_mon -i5
============
Last updated: Tue Jun 2 11:46:43 2009
Current DC: nzhost1 (key)
2 Nodes configured.
3 Resources configured.
============
Node: nzhost1 (key): online
Node: nzhost2 (key): online
Resource Group: nps
drbd_exphome_device (heartbeat:drbddisk): Started nzhost1
drbd_nz_device (heartbeat:drbddisk): Started nzhost1
exphome_filesystem (heartbeat::ocf:Filesystem): Started nzhost1
nz_filesystem (heartbeat::ocf:Filesystem): Started nzhost1
fabric_ip (heartbeat::ocf:IPaddr): Started nzhost1
wall_ip (heartbeat::ocf:IPaddr): Started nzhost1
nz_dnsmasq (lsb:nz_dnsmasq): Started nzhost1
nzinit (lsb:nzinit): Started nzhost1
fencing_route_to_ha1 (stonith:apcmaster): Started nzhost2
fencing_route_to_ha2 (stonith:apcmaster): Started nzhost1
b. Identify the active host in the cluster, which is the host where the nps resource
group is running:
[root@nzhost1 ~]# crm_resource -r nps -W
Powering Off the IBM Netezza 1000 or IBM PureData System for Analytics N1001
Follow these steps to power off an IBM Netezza 1000 or IBM PureData System for Analyt-
ics N1001 system:
1. Identify the active host in the cluster, which is the host where the nps resource group is
running:
[root@nzhost1 ~]# crm_resource -r nps -W
2. Log in to the active host (nzhost1 in this example) as the nz user and run the following
command to stop the Netezza server:
[nz@nzhost1 ~]$ nzstop
3. Log in as root to the active host (nzhost1 in this example) and run the following com-
mand to stop heartbeat:
[root@nzhost1 ~]# service heartbeat stop
4. As root on the standby host (nzhost2 in this example), run the following command to
shut down the host:
[root@nzhost2 ~]# shutdown -h now
5. As root on the active host, run the following command to shut down the host:
[root@nzhost1 ~]# shutdown -h now
6. Wait until you see the power lights on both hosts shut off.
7. Do one of the following steps depending upon which IBM Netezza 1000 model you
have:
For an IBM Netezza 1000-6 or larger, or an IBM PureData System for Analytics
N1001-005 or larger model, pull out the 9 breaker pins on both the left and right
lower PDUs as shown in Figure 5-8 on page 5-27. (Repeat these steps for each
rack of the system.)
For an IBM Netezza 1000-3 or IBM PureData System for Analytics N1001-002
model, use a small tool such as a pocket screwdriver to open the two breaker
switches on both the left and right PDUs as shown in Figure 5-9 on page 5-28.
8. Disconnect the main input power cables (two per rack) from the data center power
drops. (As a best practice, do not disconnect the power cords from the plug/connector
on the PDUs in the rack; instead, disconnect them from the power drops outside the
rack.)
8. Wait five minutes and then type the following command to power on all the S-blade
chassis:
[root@nzhost1 ~]# /nzlocal/scripts/rpc/spapwr.sh -on all
9. Run the crm_mon command to monitor the status of the HA services and cluster
operations:
[root@nzhost1 ~]# crm_mon -i5
The output of the command refreshes at the specified interval rate of 5 seconds (-i5).
10. Review the output and watch for the resource groups to all have a Started status. This
usually takes about 2 to 3 minutes, then proceed to the next step. Sample output
follows:
============
Last updated: Tue Jun 2 11:46:43 2009
Current DC: nzhost1 (key)
2 Nodes configured.
3 Resources configured.
============
Node: nzhost1 (key): online
Node: nzhost2 (key): online
Resource Group: nps
drbd_exphome_device (heartbeat:drbddisk): Started nzhost1
drbd_nz_device (heartbeat:drbddisk): Started nzhost1
exphome_filesystem (heartbeat::ocf:Filesystem): Started nzhost1
nz_filesystem (heartbeat::ocf:Filesystem): Started nzhost1
fabric_ip (heartbeat::ocf:IPaddr): Started nzhost1
wall_ip (heartbeat::ocf:IPaddr): Started nzhost1
nz_dnsmasq (lsb:nz_dnsmasq): Started nzhost1
nzinit (lsb:nzinit): Started nzhost1
fencing_route_to_ha1 (stonith:apcmaster): Started nzhost2
fencing_route_to_ha2 (stonith:apcmaster): Started nzhost1
11. Press Ctrl-C to exit the crm_mon command and return to the command prompt.
12. Log into the nz account.
[root@nzhost1 ~]# su - nz
13. Verify that the system is online using the following command:
[nz@nzhost1 ~]$ nzstate
System state is Online.
2. Identify the active host in the cluster, which is the host where the NPS resource group
is running:
[root@nzhost1 ~]# crm_resource -r nps -W
crm_resource[5377]: 2009/06/07_10:13:12 info: Invoked: crm_resource
-r nps -W
resource nps is running on: nzhost1
3. Log in to the active host (nzhost1 in this example) as nz and run the following com-
mand to stop the Netezza server:
[nz@nzhost1 ~]$ nzstop
4. Type the following commands to stop the clustering processes:
[root@nzhost1 ~]# ssh ha2 'service heartbeat stop'
[root@nzhost1 ~]# service heartbeat stop
5. On ha1, type the following commands to power off the S-blade chassis and storage
groups:
[root@nzhost1 ~]# /nzlocal/scripts/rpc/spapwr.sh -off all
[root@nzhost1 ~]# /nzlocal/scripts/rpc/spapwr.sh -off all -j all
6. Log into ha2 as root and shut down the Linux operating system using the following
command:
[root@nzhost2 ~]# shutdown -h now
The system displays a series of messages as it stops processes and other system activ-
ity. When it finishes, it displays the message power down which indicates that it is
now safe to turn off the power to the server.
7. Press the power button on Host 2 (located in the front of the cabinet) to power down
that NPS host.
8. On ha1, shut down the Linux operating system using the following command:
[root@nzhost1 ~]# shutdown -h now
The system displays a series of messages as it stops processes and other system activ-
ity. When it finishes, it displays the message power down which indicates that it is
now safe to turn off the power to the server.
9. Press the power button on Host 1 (located in the front of the cabinet) to power down
that NPS host.
10. Switch the breakers to OFF on both the left and right PDUs. (Repeat this step for each
rack of the system.)
OFF ON
ON
OFF
ON
Figure 5-10: NEC InfoFrame DWH ZA100 PDUs and Circuit Breakers
b. Identify the active host in the cluster, which is the host where the nps resource
group is running:
[root@nzhost1 ~]# crm_resource -r nps -W
b. To shut down the standby node, go to the KVM on the standby node and type:
/sbin/service heartbeat stop
Wait until the standby node is down before proceeding.
Note: If you wish to monitor the state of the nodes, you can open another window
(ALT-F2) and run the command crm_mon -i5 in that window. This is optional.
c. When the standby node is down, go to the KVM on the active node and type:
/sbin/service heartbeat stop
Note: Wait until the active node is down before proceeding. Use a separate terminal
instance with the crm_mon -i5 command to monitor the state of the active node.
3. Log in to ha2 as root, then shut down the Linux operating system using the following
command:
shutdown -h now
The system displays a series of messages as it stops processes and other system activ-
ity, and the system powers down.
4. Log in to ha1 as root, then shut down the Linux operating system using the following
command:
shutdown -h now
The system displays a series of messages as it stops processes and other system activ-
ity, and the system powers down.
5. Switch off the power to the PDU units (located in the rear of the cabinet) to completely
power down the rack. Make sure that you turn off power to all power switches.
This chapter describes how to manage the Netezza server and processes. The Netezza soft-
ware that runs on the appliance can be stopped and started for maintenance tasks, so this
chapter describes the meaning and impact of system states. This chapter also describes log
files and where to find operational and error messages for troubleshooting activities.
Although the system is configured for typical use in most customer environments, you can
also tailor software operations to meet the special needs of your environment and users
using configuration settings.
IBM Netezza System Administrator's Guide
From a client system, you can use the nzsystem showRev -host host -u user -pw password
command to display the revision information.
When you enter the nzcontents command, Netezza displays the program names, the revision
stamps, the build stamps, and checksums. Note that the sample output below shows a
small set of output, and the checksum values have been truncated to fit the output mes-
sages on the page.
nzcontents
System States
The Netezza system state is the current operational state of the appliance. In most cases,
the system is online and operating normally. There may be times when you need to stop the
system to perform maintenance tasks or as part of a larger procedure.
You can manage the Netezza system state using the nzstate command. It can display as
well as wait for a specific state to occur. For more information about the nzstate command
syntax and options, see nzstate on page A-48.
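Scripts that stop and start the system often need to block until a state is reached. The sketch below polls nzstate directly rather than using the command's built-in wait capability; treat the function as illustrative, and see nzstate on page A-48 for the command's own wait syntax:

```shell
#!/bin/sh
# Wait for nzstate to report a given state, polling every 5 seconds up
# to a timeout. The "System state is Online." output format is taken
# from the examples in this chapter.
wait_for_state() {
  want=$1
  timeout=$2
  elapsed=0
  until nzstate 2>/dev/null | grep -q "$want"; do
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1    # timed out before reaching the requested state
    fi
    sleep 5
    elapsed=$((elapsed + 5))
  done
  return 0
}

# On a Netezza host, e.g. after powering on:
#   wait_for_state Online 300 && echo "system is online"
```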
Online
  Description: Select this state to make the Netezza fully operational. This is the most
  common system state. In this state, the system is ready to process or is processing
  user queries.
  Enters this state: when you use the nzsystem restart or resume command, or after you
  boot the system.
  Exits this state: when you use the nzsystem stop, offline, pause, or restart commands.
Note: You can also use the nzsystem restart command to quickly stop and start all server software.
You can only use the nzsystem restart command on a running Netezza that is in a non-stopped state.
Offline
  Description: Select this state to interrupt the Netezza. In this state, the system
  completes any running queries, but displays errors for any queued and new queries.
  Enters this state: when you use the nzsystem offline command.
  Exits this state: when you use the nzsystem resume or stop command.
Paused
  Description: Select this state when you expect a brief interruption of server
  availability. In this state, the system completes any running queries, but prevents
  queued or new queries from starting. Except for the delay while in the paused state,
  users should not notice any interruption in service.
  Enters this state: when you use the nzsystem pause command.
  Exits this state: when you use the nzsystem resume or stop command, or if there is a
  hardware failure on an active SPU.
Down
  Description: The system enters the down state if there is insufficient hardware for the
  system to function even in failover mode. For more information about the cause of the
  Down state, use the nzstate -reason command.
  Enters this state: not user invokable.
  Exits this state: you must repair the system hardware and then use the nzsystem resume
  command.
Stopped
  Description: Select this state for planned tasks such as installation of new software.
  In this state, the system waits for currently running queries to complete, prevents
  queued or new queries from starting, and then shuts down all Netezza software.
  Enters this state: when you use the nzsystem stop or the nzstop command. Note that if
  you use the nzstop command, the system aborts all running queries.
  Exits this state: when you use the nzstart command.
Note: When you specify the nzsystem pause, offline, restart, and stop commands, the system allows already running queries to finish unless you use the -now switch, which immediately aborts all running queries. For more information about the nzsystem command, see nzsystem on page A-55.
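To illustrate the transitions above, the following sketch pauses the system for a maintenance window and then resumes it. The host and credentials are placeholders, and run_maintenance is a hypothetical stand-in for the actual work:

```shell
#!/bin/sh
# Hedged sketch of a maintenance window using nzsystem state commands.
# "nzhost", "admin", "password" are placeholders; run_maintenance is a
# hypothetical helper, not a Netezza command. Depending on your release,
# nzsystem may prompt for confirmation unless forced.
maintenance_window() {
    nzsystem pause -host nzhost -u admin -pw password &&
    run_maintenance &&
    nzsystem resume -host nzhost -u admin -pw password
}
```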
State Description
Down: The system has not been configured (there is no configuration information for the data slices to SPU topology) or there is not enough working hardware to operate the system even in failover. The SPUs can never be in this state.
Discovered: The SPUs and other components are discovered, but the system is waiting for all components to complete start-up before transitioning to the initializing state.
Discovering: The system manager is in the process of discovering all the system components that it manages.
Going Offline (Now): The system is in an interim state going to offline now.
Going to Maintain
Initialized: The system uses this state during the initial startup sequence.
Maintain
Missing: The system manager has detected a new, unknown SPU in a slot that was previously occupied but not deleted.
Offline (Now): This state is similar to offline, except that the system stops user jobs immediately during the transition to offline. For more information, see Table 5-4 on page 5-9.
Paused (Now): This state is similar to paused, except that the system stops user jobs immediately during the transition to paused. For more information, see Table 5-4 on page 5-9.
Pausing: The system is transitioning from online to paused. During this state no new queries or transactions are queued, although the system allows current transactions to complete, unless you have specified the nzsystem pause -now command.
Pausing Now: The system is attempting to pause due to a hardware failure, or the administrator entered the nzsystem pause -now command.
Pre-Online: The system has completed initialization. The system goes to the resume state.
Resuming: The system is waiting for all its components (SPUs, SFIs, and host processes) to reach the online state before changing the system state to online.
Stopped: The system is not running. Note that commands assume this state when they attempt to connect to a system and get no response. The SPUs can never be in this state.
Stopped (Now): This state is similar to stopped, except that the system stops user jobs immediately during the transition to stopped.
Stopping Now: The system is attempting to stop, or the administrator entered the nzsystem stop -now command.
Unreachable: The system manager cannot communicate with the SPU because it has failed or it has been physically removed from the system.
To wait for the online state or else timeout after 10 seconds, enter:
nzstate waitfor -u admin -pw password -host nzhost -type online
-timeout 10
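Because nzstate waitfor returns when the state is reached or the timeout expires, it lends itself to scripting. The following sketch assumes a nonzero exit status on timeout (verify the exit codes on your release); the host and credentials are placeholders:

```shell
#!/bin/sh
# Hedged sketch: block until the system is online, then proceed.
# Assumes nzstate waitfor exits nonzero on timeout (verify on your
# release); "nzhost", "admin", "password" are placeholders.
wait_until_online() {
    nzstate waitfor -u admin -pw password -host nzhost \
        -type online -timeout 10
}

# Example:
#   if wait_until_online; then echo "system is online"; fi
```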
Note: All nzsystem subcommands, except the nzsystem showState and showRev commands, require the Manage System administrative privilege. For more information, see Administrator Privileges on page 8-9.
For IBM Netezza 1000 or IBM PureData System for Analytics N1001 systems, a message is written to the sysmgr.log file if there are any storage path issues detected when the system starts. The log displays a message similar to mpath issues detected: degraded disk path(s) or SPU communication error, which helps to identify problems within storage arrays. For more information about how to check and manage path failures, see Hardware Path Down on page 7-22.
Process Description
bootsvr: Informs TFTP clients (the SPUs and SFIs) of the location of their initial program or download images on the host. Informs the SPUs where to upload their core file in the event that a SPU is instructed to dump a core image for debugging purposes.
dbosDispatch: Accepts execution plans from the postgres, backup, and restore processes. Dynamically generates C code to process the query, and cross-compiles the query so that it can be run on the host. Broadcasts the compiled code to the SPUs for execution.
dbosEvent: Receives responses and results from the SPUs. As appropriate, it may have the SPUs perform additional steps as part of the query. Rolls up the individual result sets (aggregated, sorted, consolidated, and so on) and sends the final results back to the client's postgres, backup, or restore process.
eventmgr: Processes events and event rules. When an event occurs, such as the system changing state or a hardware component failing or being restarted, the eventmgr checks to see whether any action needs to be taken based on the event and, if so, performs the action. The action could be sending an e-mail message or executing an external program. For more information about event rules, see Chapter 7, Managing Event Rules.
nzvacuumcat: At boot time, the system starts the nzvacuumcat command, which in turn invokes the internal VACUUM command on system catalogs to remove unneeded rows from system tables and compact disk space to enable faster system table scanning. During system operation, the nzvacuumcat program monitors the amount of host disk space used by system tables in each database. It performs this check every 60 seconds. If the system catalog disk space for a particular database grows over a threshold amount (128 KB), the nzvacuumcat program initiates a system table vacuum (VACUUM) on that database. The VACUUM command works on system tables only after obtaining an exclusive lock on all system catalog tables. If it is unable to lock the system catalog tables, it quits and retries. Only when the VACUUM command succeeds does the nzvacuumcat program change the size of the database. While the VACUUM command is working, the system prevents any new SQL or system table activity from starting. This window of time is usually about 1 to 2 seconds, but can be longer if significant amounts of system catalog updates/deletes have occurred since the last VACUUM operation.
postmaster: Accepts connection requests from clients (nzsql, ODBC, and so on). Launches one postgres process per connection to service the client.
sessionmgr: Keeps the session table current with the state of the different sessions that are running on the system. For more information, see Session Manager on page 6-16.
startupsvr: Launches and then monitors all of the other processes. If any system process should die, the startupsvr follows a set of predefined rules and either restarts the failed process or restarts the entire system. Controlled by /nz/kit/sys/startup.cfg.
When you power up (or reset) the hardware, each SPU loads an image from its flash memory and executes it. This image is then responsible for running diagnostics on the SPU, registering the SPU with the host, and downloading runtime images for the SPU's CPU and the FPGA disk controller. The system downloads these images from the host through TFTP.
System Errors
During system operation different types of errors can occur. Table 6-5 describes some of
those errors.
User error: An error on the part of the user, usually due to incorrect or invalid input. Examples: invalid user name, invalid SQL syntax.
The Netezza system can take the following actions when an error occurs:
Display an error message: Presents an error message string to the user that describes the error. Generally the system performs this action whenever a user request is not fulfilled.
Try again: During intermittent or temporary failures, keep trying until the error condition disappears. The retries are often needed when resources are limited, congested, or locked.
Fail over: Switches to an alternate or spare component, because an active component has failed. Failover is a system-level recovery mechanism and can be triggered by a system monitor or by an error detected by software trying to use the component.
Log the error: Adds an entry to a component log. A log entry contains a date and time, a severity level, and an error/event description.
Send an event notification: Sends notification through e-mail or by running a command. The decision whether to send an event notification is based on a set of user-configurable event rules.
Abort the program: Terminates the program, because it cannot continue due to an irreparably damaged internal state or because continuing would corrupt user data. Software asserts that detect internal programming mistakes often fall into this category, because it is difficult to determine that it is safe to continue.
Clean up resources: Frees or releases resources that are no longer needed. Software components are responsible for their own resource cleanup. In many cases, resources are freed locally as part of each specific error handler. In severe cases, a program cleanup handler runs just before the program exits and frees/releases any resources that are still held.
System Logs
All major software components that run on the host have an associated log. Log files have the following characteristics:
Each log consists of a set of files stored in a component-specific directory. For managers, there is one log per manager. For servers, there is one log per session, and their log files have pid and/or date (<pid>.<yyyy-mm-dd>) identifiers.
Each file contains one day of entries, for a default maximum of seven days.
Each file contains entries that have a timestamp (date and time), an entry severity type, and a message.
The system rotates log files; that is, for all the major components there are the current log and the archived log files.
For all Netezza components (except postgres): The system creates a new log file at midnight if there is constant activity for that component. If, however, you load data on Monday and then do not load again until Friday, the system creates a new log file dated the previous day from the new activity, in this case, Thursday. Although the size of the log files is unlimited, every 30 days the system removes all log files that have not been accessed.
For postgres logs: By default, the system checks the size of the log file daily and rotates it to an archive file if it is greater than 1 GB in size. The system keeps 28 days (four weeks) of archived log files. (Netezza Support can help you to customize these settings if needed.)
To view the logs, log on to the host as user nz. To enable SQL logging, see Logging Netezza SQL Information on page 8-30. For more information about these processes, see Overview of the Netezza System Processing on page 6-8.
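Because each component logs under its own directory, a small helper can tail the current log for any component. This is a sketch that assumes the default /nz/kit/log layout shown in the tables that follow:

```shell
#!/bin/sh
# Hedged sketch: show the tail of a component's current log as user nz.
# Assumes the /nz/kit/log/<component>/<component>.log layout; pass a
# different base directory as $2 if your install path differs.
show_component_log() {
    comp="$1"
    base="${2:-/nz/kit/log}"
    tail -n 50 "$base/$comp/$comp.log"
}

# Example:
#   show_component_log eventmgr
```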
Log file
/nz/kit/log/backupsvr/backupsvr.log Current backup log
/nz/kit/log/restoresvr/restoresvr.log Current restore log
Bootserver Manager
The bootsvr log file records the initiation of all SPUs on the system, usually when the system is restarted by the nzstart command, and also all stopping and restarting of the bootsvr process.
Log file
/nz/kit/log/bootsvr/bootsvr.log Current log
/nz/kit/log/bootsvr/bootsvr.YYYY-MM-DD.log Archived log
Client Manager
The clientmgr log file records all connection requests to the database server and also all
stopping and starting of the clientmgr process.
Log file
/nz/kit/log/clientmgr/clientmgr.log Current log
/nz/kit/log/clientmgr/clientmgr.YYYY-MM-DD.log Archived log
Log file
/nz/kit/log/dbos/dbos.log Current log
/nz/kit/log/dbos/dbos.YYYY-MM-DD.log Archived log
Event Manager
The eventmgr log file records system events and the stopping and starting of the eventmgr
process.
Log file
/nz/kit/log/eventmgr/eventmgr.log Current log
/nz/kit/log/eventmgr/eventmgr.YYYY-MM-DD.log Archived log
Log file
/nz/kit/log/fcommrtx/fcommrtx.log Current log
/nz/kit/log/fcommrtx/fcommrtx.YYYY-MM-DD.log Archived log
Log file
/nz/kit/log/hostStatsGen/hostStatsGen.log Current log
/nz/kit/log/hostStatsGen/hostStatsGen.YYYY-MM-DD.log Archived log
Load Manager
The loadmgr log file records details of load requests, and the stopping and starting of the
loadmgr.
Log file
/nz/kit/log/loadmgr/loadmgr.log Current log
/nz/kit/log/loadmgr/loadmgr.YYYY-MM-DD.log Archived log
Postgres
The postgres log file is the main database log file. It contains information about database
activities.
Log file
/nz/kit/log/postgres/pg.log Current log
/nz/kit/log/postgres/pg.log.n Archived log
Session Manager
The sessionmgr log file records details about the starting and stopping of the sessionmgr
process, and any errors associated with this process.
Log file
/nz/kit/log/sessionmgr/sessionmgr.log Current log
/nz/kit/log/sessionmgr/sessionmgr.YYYY-MM-DD.log Archived log
Startup Server
The startupsvr log file records the start up of the Netezza processes and any errors encountered with this process.
Log file
/nz/kit/log/startupsvr/startupsvr.log Current log
/nz/kit/log/startupsvr/startupsvr.YYYY-MM-DD.log Archived log
Statistics Server
The statssvr log file records the details of starting and stopping the statsSvr and any associated errors.
Log file
/nz/kit/log/statsSvr/statsSvr.log Current log
/nz/kit/log/statsSvr/statsSvr.YYYY-MM-DD.log Archived log
System Manager
The sysmgr log file records details of stopping and starting the sysmgr process, and details
of system initialization and system state status.
Log file
/nz/kit/log/sysmgr/sysmgr.log
A traditional sorter that begins with a random table on the host and sorts it into the
desired order. It can use a simple external sort method to handle very large datasets.
The file on the Linux host for this disk work area is $NZ_TMP_DIR/nzDbosSpill. Within
DBOS there is a database that tracks segments of the file presently in use.
To avoid having a runaway query use up all the host computer's disk space, there is a limit on the DbosEvent database, and hence on the size of the Linux file. This limit is set in the Netezza registry file; the tag for the value is startup.hostSwapSpaceLimit.
System Configuration
The system configuration file, system.cfg, contains configuration settings that the Netezza
system uses for system startup, system management, host processes, and SPUs. The sys-
tem configuration file is also known as the system registry. Entries in the system.cfg file
allow you to control and tune the system.
As a best practice, you should not change or customize the system registry unless directed
to by Netezza Support or by a documented Netezza procedure. The registry contains
numerous entries, some of which are documented for use or for reference. Most settings are
internal and used only under direction from Netezza Support. Incorrect changes to the reg-
istry can cause performance impacts to the Netezza system. Many of the settings are
documented in Appendix D, System Configuration File Settings.
You can display the system configuration file settings using the nzsystem showRegistry
command. For more information, see nzsystem on page A-55.
Note: A default of zero in many cases indicates a compiled default, not the actual value zero. Text (yes/no) and numbers indicate actual values.
startup.numSpus = 6
startup.numSpares = 0
startup.simMode = no
startup.autoCreateDb = 0
startup.spuSimMemoryMB = 0
startup.noPad = no
startup.mismatchOverRide = yes
startup.overrideSpuRev = 0
startup.dbosStartupTimeout = 300
...
The output from the command is very long; only a small portion is shown in the example.
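Given the name = value lines shown in the sample output, a read-only check of a single setting can be scripted, for example the startup.hostSwapSpaceLimit tag described earlier. This sketch only reads the registry; the host and credentials are placeholders, and changes should be made only as directed by Netezza Support:

```shell
#!/bin/sh
# Hedged sketch: extract one setting from the nzsystem showRegistry
# output, relying on the "name = value" line format shown above.
# "nzhost", "admin", "password" are placeholders.
get_registry_setting() {
    nzsystem showRegistry -host nzhost -u admin -pw password |
        awk -v key="$1" '$1 == key { print $3 }'
}

# Example:
#   get_registry_setting startup.hostSwapSpaceLimit
```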
The Netezza event manager monitors the health, status, and activity of the Netezza system operation and can take action when a specific event occurs. Event monitoring is a proactive way to manage the system without continuous human observation. You can configure the event manager to continually watch for specific conditions such as machine state changes, hardware restarts, faults, or failures. In addition, the event manager can watch for conditions such as reaching a certain percentage of full disk space, queries that have been running for longer than expected, and other Netezza system behaviors.
This chapter describes how to administer the Netezza system using event rules that you
create and manage.
form do not apply to IBM Netezza 1000 or IBM PureData System for Analytics N1001
systems and have been replaced by similar, new events.
NPSNoLongerOnline: Notifies you when the system goes from the online state to another state. For more information, see Specifying System State Changes on page 7-19.
SpuCore: Notifies you when the system detects that a SPU process has restarted and resulted in a core file. For more information, see Monitoring SPU Cores on page 7-37.
SystemHeatThresholdExceeded: When any three boards in an SPA reach the red temperature threshold, the event runs a command to shut down the SPAs, SFIs, and RPCs. For more information, see Monitoring System Temperature on page 7-33. Enabled by default for z-series systems only.
SystemOnline: Notifies you when the system is online. For more information, see Specifying System State Changes on page 7-19.
TransactionLimitEvent: Sends an email notification when the number of outstanding transaction objects exceeds 90% of the available objects. For more information, see Monitoring Transaction Limits on page 7-38.
Note: Netezza may add new event types to monitor conditions on the system. These event types may not be available as templates, which means you must manually add a rule to enable them. For a description of additional event types that could assist you with monitoring and managing the system, see Event Types Reference on page 7-40.
The action to take for an event often depends on the type of event (its impact on the system
operations or performance). Table 7-2 lists some of the predefined template events and
their corresponding impacts and actions.
Disk80PercentFull, Disk90PercentFull (event type hwDiskFull, severity Notice; notifies Admins, DBAs; impact Moderate to Serious): A full disk prevents some operations. Reclaim space or remove unwanted databases or older data. For more information, see Specifying Disk Space Threshold Notification on page 7-24.
HardwareServiceRequested (event type hwServiceRequested, severity Warning; notifies Admins, NPS; impact Moderate to Serious): Any query or work in progress is lost. Disk failures initiate a regeneration. Contact Netezza. For more information, see Hardware Service Requested on page 7-20.
HistCaptureEvent (event type histCaptureEvent; notifies Admins, NPS; impact Moderate to Serious): Query history is unable to save captured history data in the staging area; query history will stop collecting new data. The size of the staging area has reached the configured size threshold, or there is no available disk space in /nz/data. Either increase the size threshold or free up disk space by deleting old files.
HwPathDown (event type hwPathDown; notifies Admins; impact Serious to Critical): Query performance and possible system downtime. Contact Netezza Support. For more information, see Hardware Path Down on page 7-22.
RegenFault (event type regenFault; notifies Admins, NPS; impact Critical): May prevent user data from being regenerated. Contact Netezza Support. For more information, see Monitoring Regeneration Errors on page 7-29.
SpuCore (event type spuCore; notifies Admins, NPS; impact Moderate): The system created a SPU core file. See Monitoring SPU Cores on page 7-37.
ThermalFault (event type hwThermalFault; notifies Admins, NPS; impact Serious): Can drastically reduce disk life expectancy if ignored. Contact Netezza Support. For more information, see Monitoring Hardware Temperature on page 7-32.
TransactionLimitEvent (event type transactionLimitEvent; notifies Admins, NPS; impact Serious): New transactions are blocked if the limit is reached. Abort some existing sessions which may be old and require cleanup, or stop/start the Netezza server to close all existing transactions.
VoltageFault (event type hwVoltageFault; notifies Admins, NPS; impact Serious): May indicate power supply issues. For more information, see Monitoring Voltage Faults on page 7-37.
command. The NzAdmin interface provides an intuitive way to manage events, including a wizard tool for creating new events. For information on accessing the NzAdmin interface, see NzAdmin Tool Overview on page 3-11.
Generating an Event
You can use the nzevent generate command to trigger an event for the event manager. If
the event matches a current event rule, the system takes the action defined by the event
rule.
You might generate events for the following cases:
To simulate a system event to test an event rule.
To add new events, because the system is not generating events for conditions for
which you would like notification.
If the event that you want to generate has a restriction, specify the arguments that would
trigger the restriction using the -eventArgs option. For example, if a runaway query event
has a restriction that the duration of the query must be greater than 30 seconds, use a
command similar to the following to ensure that a generated event is triggered:
nzevent generate -eventtype runawayquery -eventArgs 'duration=50'
In this example, the duration meets the event criteria (greater than 30) and the event is
triggered. If you do not specify a value for a restriction argument in the -eventArgs string,
the command uses default values for the arguments. In this example, duration has a
default of 0, so the event would not be triggered since it did not meet the event criteria.
To add an event rule that sends an e-mail message when the system transitions from
the online state to any other state, enter:
nzevent add -name TheSystemGoingOnline -u admin -pw password
-on yes -eventType sysStateChanged -eventArgsExpr '$previousState
== online && $currentState != online' -notifyType email -dst
jdoe@company.com -msg 'NPS system $HOST went from $previousState to
$currentState at $eventTimestamp.' -bodyText
'$notifyMsg\n\nEvent:\n$eventDetail\nEvent
Rule:\n$eventRuleDetail'
Note: If you are creating event rules on a Windows client system, use double quotes instead
of single quotes to specify strings.
The event manager generates notifications for all rules that match the criteria, not just for the first event rule that matches. Table 7-3 lists the event types you can specify and the arguments and the values passed with the event. You can list the defined event types using the nzevent listEventTypes command.
Used only on z-series systems such as the 10000-series, 8000z-series, and 5200-series systems.
eccError: arguments hwType, hwId, spaId, spaSlot, errType, errCode, devSerial, devHwRev, devFwRev; values spu, <SPU HW ID>, <SPA ID>, <SPA Slot>, <Err Type>, <Err Code>, <SPU/SFI Serial>, <Hardware Revision>, <Firmware Revision>
sysHeatThreshold: arguments errType, errCode, errString; values <Err Type>, <Err Code>, <Err String>
fwMismatch
hwThermalFault: arguments hwType, hwId, label, location, devSerial, errString, curVal, eventSource; values spu, <SPU HW ID>, <Label String>, <Location String>, <SPU Serial>, <Error String>, <Current Value>, <Event Source>; or Disk Enclosure, <Encl HW ID>, <Label String>, <Location String>, <Error String>, <Current Value>, <Event Source>
would use the expression: $previousState == online && $currentState != online. The system gets the value of previousState and currentState from the actual argument values of a sysStateChanged event.
You can specify an event using equality expressions, wildcard expressions, compound AND
expressions, or OR expressions. Table 7-4 describes these expressions.
The mail.cfg file also contains options that allow you to specify a user name and password for authentication on the mail server. You can find a copy of this file in the /nz/data/config directory on the Netezza host.
eventTimestamp: The date and time the event occurred (for example, 17-Jun-02, 14:35:33 EDT).
eventType (event rule): One of the event types (for example, hwDiskFull).
If you specify the email or runCmd arguments, you must enter the destination and the subject header. You can use all the following arguments with either command, except the -ccDst argument, which you cannot use with runCmd. Table 7-6 lists the syntax of the message.
-msg: The subject field of the e-mail message. Example: -msg NPS system $HOST went from $previousState to $currentState at $eventTimestamp. This message substitutes the hostname for $HOST, the previous system state for $previousState, the current system state for $currentState, and the date and time the event occurred for $eventTimeStamp.
If you issue the nzstop command, the system sends no in-memory aggregations; instead, it updates the event log. In such cases, you should check the event log, especially if the aggregation interval is 15 minutes or longer.
If you modify or delete an event rule, the system flushes all events aggregated for the event rule.
1. Write a script that creates a custom event rule. Set the e-mail address.
MY_EMAIL_ADDR=abc@xyz.com
2. Use the nzevent add command to add the event type. The following example creates a new event type, custom1, with three events (events 1 through 3).
nzevent add -eventType custom1 -name NewRule -notifyType email \
-dst $MY_EMAIL_ADDR \
-msg 'Event #1 ($arg1, $arg2)' \
-bodyText 'Event 1 Body Text\n\narg1 = $arg1\narg2 = $arg2\n' \
-eventArgsExpr '$eventType==NPSNoLongerOnline'
nzevent add -eventType custom1 -name NewRule2 -notifyType email \
-dst $MY_EMAIL_ADDR \
-msg 'Event #2 ($arg1, $arg2)' \
-bodyText 'Event 2 Body Text\n\narg1 = $arg1\narg2 = $arg2\n' \
-eventArgsExpr '$eventType==SystemOnline'
nzevent add -eventType custom1 -name NewRule3 -notifyType email \
-dst $MY_EMAIL_ADDR \
-msg 'Event #3 ($arg1, $arg2)' \
-bodyText 'Event 3 Body Text\n\narg1 = $arg1\narg2 = $arg2\n' \
-eventArgsExpr '$eventType==HardwareFailed'
3. Use the nzevent generate command to generate events when event type NPSNoLongerOnline has arguments 3 and 14, event type SystemOnline has arguments 5 and 1, and event type HardwareFailed has arguments 90 and 15.
nzevent generate -eventType custom1 -eventArgs 'eventType=NPSNoLongerOnline,
arg1=3, arg2=14'
nzevent generate -eventType custom1 -eventArgs 'eventType=SystemOnline, arg1=5,
arg2=1'
nzevent generate -eventType custom1 -eventArgs 'eventType=HardwareFailed,
arg1=90, arg2=15'
4. Save your script.
For source disks used in a disk regeneration to a spare disk, the HardwareServiceRequested event also notifies you when regeneration encounters a read sector error on the source disk. The event helps you to identify when a regeneration requires some attention to address possible issues on the source and newly created mirror disks. The error messages in the event notification and in the sysmgr.log and eventmgr.log files contain information about the bad sector, as in the following example:
2012-04-05 19:52:41.637742 EDT Info: received & processing event type
= hwServiceRequested, event args = 'hwType=disk, hwId=1073,
location=Logical Name:'spa1.diskEncl2.disk1' Logical Location:'1st
rack, 2nd disk enclosure, disk in Row 1/Column 1', errString=disk md:
md2 sector: 2051 partition type: DATA table: 201328,
devSerial=9QJ2FMKN00009838VVR9...
The errString value contains more information about the sector that had a read error:
The md value specifies the RAID device on the SPU that encountered the issue.
The sector value specifies which sector in the device has the read error.
The partition type specifies whether the partition is a user data (DATA) or SYSTEM partition.
The table value specifies the table ID of the user table affected by the bad sector.
If the system notifies you of a read sector error, contact IBM Netezza Support for assistance
with troubleshooting and resolving the problems.
Table 7-10 lists the arguments to the Hardware Path Down event rule.
location: A string that describes the physical location of the SPU. Example: 1st Rack, 1st SPA, SPU in 3rd slot.
errString: If the failed component is not inventoried, it will be specified in this string. Example: Disk path event:Spu[1st Rack, 1st SPA, SPU in 5th slot] to Disk [disk hwid=1034 sn="9WK4WX9D00009150ECWM" SPA=1 Parent=1014 Position=12 Address=0x8e92728 ParentEnclPosition=1 Spu=1013] (es=encl1Slot12 dev=sdl major=8 minor=176 status=DOWN)
Note: If you are notified of hardware path down events, you should contact Netezza Support and alert them to the path failure(s). It is important to identify and resolve the issues that are causing path failures to return the system to optimal performance as soon as possible.
the end of the output. For more information, see Displaying the Active Path Topology
on page 5-24.
Hardware Restarted
If you enable the event rule HardwareRestarted, you receive notifications when each SPU successfully reboots (after the initial startup). Restarts are usually related to a software fault, whereas hardware causes could include uncorrectable memory faults or a failed disk driver interaction.
The following is the syntax for the event rule HardwareRestarted:
-name HardwareRestarted -on no -eventType hwRestarted -eventArgsExpr
'' -notifyType email -dst 'you@company.com' -ccDst '' -msg 'NPS system
$HOST - $hwType $hwId restarted at $eventTimestamp.' -bodyText
'$notifyMsg\n\nSPA ID: $spaId\nSPA Slot: $spaSlot\n' -callHome yes
-eventAggrCount 50
You can modify the event rule to specify that the system include the device's serial number, its hardware revision, and firmware revision as part of the message and/or subject.
Table 7-11 describes the arguments to the Hardware Restarted event rule.
Note: You should consider aggregating this event. Set the aggregation count to the number
of SPUs in your system divided by 4. For more information about event aggregation, see
Aggregating Event E-mail Messages on page 7-16.
After you enable the event rule, the event manager sends you an e-mail message when the
system disk space percentage exceeds the first threshold and is below the next threshold
value. Note that the event manager sends only one event per sampled value.
For example, if you enable the event rule Disk80PercentFull, which specifies thresholds 80
and 85 percent, the event manager sends you an e-mail message when the disk space is at
least 80, but less than 85 percent full. When you receive the e-mail message, your actual
disk space might have been 84 percent full.
The event manager maintains thresholds for the values 75, 80, 85, 90, and 95. Each of these values (except for 75) can be in the following states:
Armed: The system has not reached this value.
Disarmed: The system has exceeded this value.
Fired: The system has reached this value.
Re-armed: The system has fallen below this value.
Note: If you enable an event rule after the system has fired a threshold, you will not be
notified that you have reached this threshold until you restart the system.
After the Netezza System Manager sends an event for a particular threshold, it disarms all
thresholds at or below that value. (So if 90 fires, it will not fire again until it is re-armed).
The Netezza System Manager re-arms all disarmed higher thresholds when the disk space
percentage full value falls below the previous threshold, which can occur when you delete
tables or databases. The Netezza System Manager arms all thresholds (except 75) when
the system starts up.
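The arming behavior can be modeled in a few lines. The following is a simplified illustration of the documented rules, not Netezza code: after a threshold fires, thresholds at or below it are disarmed, and only higher thresholds remain armed:

```shell
#!/bin/sh
# Simplified model of the disk-threshold arming rules described above.
# Illustrative only -- this is not Netezza code.
armed_after_fire() {
    fired="$1"
    armed=""
    for t in 80 85 90 95; do
        # thresholds at or below the fired value are disarmed
        if [ "$t" -gt "$fired" ]; then
            armed="$armed $t"
        fi
    done
    echo "${armed# }"
}

# armed_after_fire 85 prints the thresholds still armed: "90 95"
```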
Note: To ensure maximum coverage, enable both event rules Disk80PercentFull and
Disk90PercentFull.
To send an e-mail message when the disk is more than 80 percent full, enable the predefined event rule Disk80PercentFull:
nzevent modify -u admin -pw password -name Disk80PercentFull -on yes -dst jdoe@company.com
If you receive a diskFull notification from one or two disks, your data may be unevenly distributed across the data slices (data skew). Data skew can adversely affect performance for the tables involved and for combined workloads. For more information about skew, see Avoiding Data Skew on page 9-8.
Note: You should consider aggregating the e-mail messages for this event. Set the aggregation count to the number of SPUs. For more information about aggregation, see Aggregating Event E-mail Messages on page 7-16.
Table 7-14 lists the arguments to the Runaway Query event rule; use these arguments for the e-mail message. Note that the arguments are case sensitive.
sessionId: The ID of the runaway session.
planId: The ID of the plan.
Note: Typically you do not aggregate this event, because you should consider the performance impact of each individual runaway query.
When you specify the duration argument in the -eventArgsExpr string, you can use an operator such as ==, !=, >, >=, <, or <= to control when to send the event notification. As a best practice, use the greater-than or less-than forms of the operators to ensure that the expression can match. For example, to ensure that a notification event is triggered when the duration of a query exceeds 100 seconds, specify the -eventArgsExpr as follows:
-eventArgsExpr '$duration > 100'
If a query exceeds its timeout threshold and you have added a runaway query rule, the system sends you an e-mail message telling you how long the query has been running. For example:
NPS system alpha - long-running query detected at 07-Nov-03, 15:43:49
EST.
sessionId: 10056
planId: 27
duration: 105 seconds
hwType: The type of hardware affected. Values: spu (and sfi for z-series systems).
errType: The type of error, that is, whether the error is failure, failure possible, or failure imminent. Values: 1 (Failure), 2 (Failure imminent), 3 (Failure possible), 4 (Failure unknown).
If you have enabled the event rule SCSIDiskError, the system sends you an e-mail message
when it fails a disk.
The following is the syntax for the event rule SCSIDiskError:
-name 'SCSIDiskError' -on no -eventType scsiDiskError -eventArgsExpr ''
-notifyType email -dst '<your email here>' -ccDst '' -msg 'NPS system
$HOST - disk error on disk $diskHwId.' -bodyText
'$notifyMsg\nspuHwId:$spuHwId\ndisk location:$location\nerrType:
$errType\nerrCode:$errCode\noper:$oper\ndataPartition:$dataPartition\n
lba:$lba\ndataSliceId:$dataSliceId\ntableId:$tableId\nblock:$block\n
devSerial:$devSerial\nfpgaBoardSerial:$fpgaBoardSerial\ndiskSerial:
$diskSerial\ndiskModel:$diskModel\ndiskMfg:$diskMfg\nevent
source:$eventSource\n' -callHome no -eventAggrCount 0
Table 7-18 lists the output from the SCSIDiskError event rule.
errType: The type of error, that is, whether the error is failure, failure possible, or failure imminent. Values: 1 (Failure), 2 (Failure imminent), 3 (Failure possible), 4 (Failure unknown).
hwType: The hardware type where the error occurred. Values: SPU* or disk enclosure.
errCode: The integer code for the onboard temperature error. Values: 301 for warning, 302 for critical.
The default behavior is to execute the nzstop command and then use RPC to power off the
Netezza system.
Before you power on the machine, check the SPA that caused this event to occur. You may
need to replace one or more SPUs or SFIs.
After you confirm that the temperature within the environment has returned to normal, you
can power on the RPCs using the following command. Make sure that you are logged in as
root or that your account has sudo permissions to run this command:
/nzlocal/scripts/rpc/spapwr.sh -on all
errString: The text string (shown in the errCode description) for the related error code, for example, History Load Config Info Not Found.
hwType: The hardware type where the error occurred. Values: SPU* or disk enclosure.
txid: 0x4eeba
Session id: 101963
PID: 19760
Database: system
User: admin
Client IP: 127.0.0.1
Client PID: 19759
Transaction start date: 2011-08-30 10:55:08
Availability Event
A device is unavailable to the system when it is in the Down or Missing state. A device
could be unavailable for a short period of time because of system maintenance tasks such
as a part replacement. The system manager now detects extended periods when a device is
unavailable and logs an event to notify you of the problem. The sysmgr.availabilityAlertTime
setting specifies how long the device must be Down or Missing before it is considered
unavailable. The default value is 300 seconds. When the timeout expires, the system manager logs a HW_NEEDS_ATTENTION event to notify you of the problem.
If a device is unavailable, the most common reasons are that the device is no longer operat-
ing normally and has been transitioned to the Down state, it has been powered off, or it has
been removed from the system. You should investigate to determine the cause of the avail-
ability issue and take steps to replace the device or correct the problem.
Reachability Event
A device is unreachable when it does not respond to a status request from its device manager. A device could be unreachable for a short period of time because it is busy and cannot respond in time to the status request, or there may be congestion on the internal network of the system that delays the status response. The system manager now detects extended periods when a device is unreachable and logs an event to notify you of the problem. The sysmgr.reachabilityAlertTime setting specifies how long the device manager will wait for status before it declares a device to be unreachable. The default value is 300 seconds. When the timeout expires, the system manager raises a HW_NEEDS_ATTENTION event to notify you of the problem.
If a device is unreachable, the most common reasons are that the device is very busy and
cannot respond to status requests, or there may be a problem with the device. If the device
is temporarily busy, the problem usually clears when the device can respond to a status
request.
The new event is not yet available as an event template. You must add the event using the
following command:
[nz@nzhost ~]$ nzevent add -name TopologyImbalance -on no -eventType
topologyImbalance -eventArgsExpr '' -notifyType email -dst
'you@company.com' -ccDst '' -msg 'NPS system $HOST - Topology
imbalance event has been recorded at $eventTimestamp $eventSource.' -
bodyText '$notifyMsg\n\nWarning:\n$errString\n' -callHome no
-eventAggrCount 0
When an imbalance problem is detected, the system writes more detailed information to
the sysmgr.log and the eventmgr.log files. A sample email for this event follows:
From: NPS Event Manager [mailto:eventsender@netezza.com]
Sent: Friday, June 15, 2012 6:06 PM
To: <you@company.com>
Subject: NPS system nzhost - Regen imbalance event has been recorded at
15-Jun-12, 08:36:07 EDT System initiated.
NPS system nzhost - Topology imbalance event has been recorded at 15-
Jul-12, 08:36:07 EDT System initiated.
Warning:
Topology imbalance after rebalance :
spu0109 hba [0] port [2] has 3 disks
spu0109 hba [0] port [3] has 3 disks
...
SPA 1 SAS switch [sassw01b] port [4] has 7 disks
Note: For systems that use an older topology configuration, you could encounter situations where the event is triggered frequently but for a known situation. In that event, you can disable the event by setting the following registry value. You must pause the system, set the variable, and then resume the system (for a similar example, see Concurrent Jobs on page 12-3):
[nz@nzhost ~]$ nzsystem set -arg
sysmgr.enableTopologyImbalanceEvent=false
Displaying Alerts
If the NzAdmin tool detects an alert, it displays the Alert entry in the navigation list. The
NzAdmin tool displays each error in the list and indicates the associated component. The
Component, Status, and other columns provide additional information.
For the hardware alerts, the alert color indicator takes on the color of the related component. If, however, the component is green, the NzAdmin tool sets the alert color to yellow.
To view the alerts list, click the Alerts entry in the left pane.
To get more information about an alert, double-click an entry or right-click and select
Status to display the corresponding component status window.
To refresh alerts, select View > Refresh or click the refresh icon on the toolbar.
Managing security for the Netezza appliance is an important task. You can control access to
the Netezza system itself by placing the appliance in a secured location such as a data
center. You can control access through the network to your Netezza appliance by managing
the Linux user accounts that can log in to the operating system. You control access to the
Netezza database, objects, and tasks on the system by managing the Netezza database
user accounts that can establish SQL connections to the system.
This chapter describes how to manage Netezza database user accounts, and how to apply
administrative and object permissions that allow users access to databases and tasks. This
chapter also describes user session controls such as row limits and priority that help manage impacts to system performance by the database users.
Note: Linux accounts allow users to log in to the Netezza server at the operating system
level, but they cannot access the Netezza database via SQL. If some of your users require
Linux accounts to manage the Netezza system as well as database accounts for SQL
access, you could use identical names and passwords for the two accounts to ease management. For details on creating Linux user accounts, refer to your Linux documentation or the
quick reference in Appendix B, Linux Host Administration Reference. Throughout this
chapter, any references to users and groups imply Netezza database user accounts, unless
otherwise specified.
IBM Netezza System Administrator's Guide
assign permissions and access properties to that group, and then assign members to the group as applicable. The members of the group automatically inherit the group's permissions. If you remove a user from the group, the associated permissions for the group are likewise removed from the user.
If a user is a member of more than one group, the user inherits the union of all permissions from those groups, plus whatever permissions may have been assigned to the user account specifically. So, for example, if you remove a user from a group that has the Create Table privilege, the user loses that privilege unless the user is a member of another group that has been granted that privilege or the user account has been granted that privilege.
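The union behavior above can be illustrated with a short sketch. The following Python fragment is hypothetical (the names and structures do not correspond to the Netezza catalog); it shows why removing a user from one group revokes a privilege only if no other group or user-level grant supplies it.

```python
# Illustrative sketch: effective privileges are the union of user-level
# grants and the grants of every group the user belongs to.
def effective_privileges(user_grants, group_grants, memberships):
    """Return the union of the user's own grants and all group grants."""
    privs = set(user_grants)
    for group in memberships:
        privs |= group_grants.get(group, set())
    return privs

group_grants = {
    "analysts": {"SELECT"},
    "etl": {"SELECT", "INSERT", "CREATE TABLE"},
}

# Dropping the user from "etl" removes CREATE TABLE only because no
# other group or user-level grant supplies it.
assert effective_privileges(set(), group_grants, ["analysts", "etl"]) == \
    {"SELECT", "INSERT", "CREATE TABLE"}
assert effective_privileges(set(), group_grants, ["analysts"]) == {"SELECT"}
```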
As a best practice, you should use groups to manage the access permissions and rights of
your database users rather than manage user accounts individually. Groups are an efficient
and time-saving way to manage permissions, even if a group has only one member. Over
time, you will typically add new users, drop existing users, and change user permissions as
roles evolve. New Netezza software releases often add new permissions that you may have
to apply to your users. Rather than manage these changes on an account-by-account basis,
manage the permissions via groups and group membership.
Note: You can also use Netezza groups as resource sharing groups (RSGs) for workload management. That is, you can create groups and assign them resource utilization percentages, which is the percentage of the Netezza resources that the group should receive when it and other RSGs are using the system. For a description of RSGs, see Chapter 12, Managing Workloads on the Netezza Appliance.
You can create and manage Netezza database accounts and groups using any or a combination of the following methods:
Netezza SQL commands, which are the most commonly used method
NzAdmin tool, which provides a Windows interface for managing users, groups, and permissions
Web Admin, which provides web browser access to the Netezza system for managing
users, groups, and permissions
This chapter describes how to manage users and groups using the SQL commands. The online help for the NzAdmin and Web Admin interfaces provides more details on how to manage users and groups via those interfaces.
General database users: users who are allowed access to one or more databases for querying, and who may or may not have access to manage objects in the database. These users may also have lower priority for their work.
Power database users: users who require access to critical databases and who may use more complex SQL queries than the general users. These users may require higher priority for their work. They may also have permissions for tasks such as creating database objects, running user-defined objects (UDXs), or loading data.
The access model serves as a template for the users and groups that you need to create,
and also provides a map of access permission needs. By creating Netezza database groups
to represent these roles or permission sets, you can easily assign users to groups so that they inherit the various permissions, change all the users in a role by changing only the group permissions, and move users from one role to another by changing their group memberships or by adding them to groups that control those permissions.
The public group is the default user group for all Netezza database users. All users are
automatically added as members of this group and cannot be removed from this group. The
admin user is the owner of the public group. You can use the public group to set the default set of permissions for all Netezza user accounts. You cannot change the name or the ownership of the group.
When a database user's account expires, the user has very limited access to the Netezza
system. The user can connect to the Netezza database, but the only query that the user is
allowed to run is the following ALTER USER command, where newPassword represents
their new account password:
SYSTEM(myuser)=> ALTER USER myuseracct WITH PASSWORD 'newPassword';
ALTER USER
The admin user can expire a user account password immediately using the following
command:
SYSTEM(ADMIN)=> ALTER USER myuseracct EXPIRE PASSWORD;
ALTER USER
The expiration does not affect the user's current session if the user is connected to a database. The next time that the user connects to a database, the user will have a restricted-access session and must change their password using the ALTER USER command.
dcredit: Specifies the maximum credit for including digits in the password. The default is 1 credit; if you specify a credit of 3, for example, the user receives 1 credit per digit up to the maximum of 3 credits to reduce the minlen requirement. If you specify a negative value such as -2, your users must specify at least two digits in their password.
ucredit: Specifies the maximum credit for including uppercase letters in the password. The default is 1 credit; if you specify a credit of 2, for example, the user receives 1 credit per uppercase letter up to the maximum of 2 credits to reduce the minlen requirement. If you specify a negative value such as -1, your users must specify at least one uppercase letter in their password.
lcredit: Specifies the maximum credit for including lowercase letters in the password. The default is 1 credit; if you specify a credit of 2, for example, the user receives 1 credit per lowercase letter up to the maximum of 2 credits to reduce the minlen requirement. If you specify a negative value such as -1, your users must specify at least one lowercase letter in their password.
ocredit: Specifies the maximum credit for including non-alphanumeric characters (often referred to as symbols, such as #, &, or *) in the password. The default is 1 credit; if you specify a credit of 1, for example, the user receives 1 credit per non-alphanumeric character up to the maximum of 1 credit to reduce the minlen requirement. If you specify a negative value such as -2, your users must specify at least two non-alphanumeric characters in their password.
For example, the following command specifies that the minimum length of a weak password is 10, and that it must contain at least one uppercase letter and at least one digit. The presence of at least one symbol allows a credit of 1 to reduce the minimum length of the password:
SYSTEM(ADMIN)=> SET SYSTEM DEFAULT PASSWORDPOLICY TO 'minlen=10,
lcredit=0 ucredit=-1 dcredit=-1 ocredit=1';
SET VARIABLE
As another example, the following command specifies that the minimum length of a weak password is 8; it must contain at least two digits and one symbol, and the presence of lowercase characters offers no credit to reduce the minimum password length:
SYSTEM(ADMIN)=> SET SYSTEM DEFAULT PASSWORDPOLICY TO 'minlen=8,
lcredit=0 dcredit=-2 ocredit=-1';
SET VARIABLE
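The credit arithmetic can be modeled in a few lines. This Python sketch is a simplified model of the credit behavior described above, not the actual NPS implementation: positive values cap the credit earned per character class, and negative values impose a minimum count for that class with no credit earned.

```python
# Simplified model of the password credit arithmetic (not NPS code).
def check_password(password, minlen, dcredit=1, ucredit=1, lcredit=1, ocredit=1):
    counts = {
        "d": sum(c.isdigit() for c in password),
        "u": sum(c.isupper() for c in password),
        "l": sum(c.islower() for c in password),
        "o": sum(not c.isalnum() for c in password),
    }
    limits = {"d": dcredit, "u": ucredit, "l": lcredit, "o": ocredit}
    credits = 0
    for k, limit in limits.items():
        if limit < 0:
            # Negative value: the password must contain at least -limit
            # characters of this class; no credit is earned.
            if counts[k] < -limit:
                return False
        else:
            # Positive value: earn 1 credit per character, capped at limit.
            credits += min(counts[k], limit)
    return len(password) + credits >= minlen
```

With the second example's policy (minlen=8, lcredit=0, dcredit=-2, ocredit=-1), a password such as ab12cd#e satisfies the digit and symbol minimums and meets the length requirement with no credits.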
If you are using LDAP authentication, you do not specify a password for the account. The
CREATE USER command has a number of options that you can use to specify timeout
options, account expirations, rowset limits (the maximum number of rows a query can return), and priority for the user's sessions and queries. The resulting user account is owned by the user who created the account.
When you create users and groups, you can also specify session access time limits. The access time limits specify when users can start database sessions. Users may be permitted to start sessions at any time on any day, or they may be given restricted access to certain days and/or certain hours of the day. If a user attempts to start a session during a time when they do not have access, the system displays an error message that they are outside their access time limits. Also, if a user attempts to run an nz* command that creates a database session, the command returns the same error if the user is not within the allowed access time window. For more information, see the access time information in the IBM Netezza Advanced Security Administrator's Guide.
Note: Keep in mind that session settings such as access time restrictions, session timeouts, priority, and rowset limits can be set on a per-user, per-group, and in some cases a system-wide level. The Netezza system checks the settings for a user first to find the values to use; if not set for the user, the system uses the group settings (the largest or highest setting among all the groups to which the user belongs); if not set for any group, the system uses the system-wide settings.
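That lookup order can be sketched as follows. The Python fragment is purely illustrative (the setting name and structures are hypothetical): the user value wins if set, then the largest value among the user's groups, then the system-wide default.

```python
# Illustrative sketch of the user -> group -> system lookup order.
def resolve_setting(name, user, groups, system_defaults):
    """Return the effective value for a session setting."""
    if user.get(name) is not None:
        return user[name]                 # 1. user-level setting wins
    group_vals = [g[name] for g in groups if g.get(name) is not None]
    if group_vals:
        return max(group_vals)            # 2. highest/largest group setting
    return system_defaults.get(name)      # 3. system-wide default

user = {"rowset_limit": None}
groups = [{"rowset_limit": 10000}, {"rowset_limit": 50000}]
system_defaults = {"rowset_limit": 5000}
assert resolve_setting("rowset_limit", user, groups, system_defaults) == 50000
```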
Security Model
The Netezza security model is a combination of administrator privileges granted to users
and/or groups, plus object privileges associated with specific objects (for example, table
xyz) and classes of objects (for example, all tables). As part of the model, any privilege
granted to a database group is automatically granted to (that is, inherited by) all the users
who are members of that group.
Note: Privileges are additive, which means that you cannot remove a privilege from a user
who has been granted that privilege as a consequence of being a member of a group.
Each object has an owner. Individual owners automatically have full access to their objects
and do not require individual object privileges to manage them. The database owner, in
addition, has full access to all objects within the database. The admin user owns all predefined objects and has full access to all administrative permissions and objects. For more information about the admin user, see Default Netezza Groups and Users on page 8-3.
Administrator Privileges
Administrator privileges give users and groups permission to execute global operations and
to create objects.
Note: When you grant a privilege, the user to whom you grant it cannot, by default, pass that privilege on to another user. If you want to allow the user to grant the privilege to another user, include the WITH GRANT OPTION clause when you grant the privilege.
Table 8-1 describes the administrator privileges. Note that the words in brackets are
optional.
Privilege Description
Backup: Allows the user to perform backups. The user can run the nzbackup command.
[Create] Aggregate: Allows the user to create user-defined aggregates (UDAs) and to operate on existing UDAs.
[Create] Database: Allows the user to create databases. Permission to operate on existing databases is controlled by object privileges.
[Create] External Table: Allows the user to create external tables. Permission to operate on existing tables is controlled by object privileges.
[Create] Function: Allows the user to create user-defined functions (UDFs) and to operate on existing UDFs.
[Create] Group: Allows the user to create groups. Permission to operate on existing groups is controlled by object privileges.
[Create] Index: For system use only. Users cannot create indexes.
[Create] Library: Allows the user to create user-defined shared libraries. Permission to operate on existing shared libraries is controlled by object privileges.
[Create] Table: Allows the user to create tables. Permission to operate on existing tables is controlled by object privileges.
[Create] Temp Table: Allows the user to create temporary tables. Permission to operate on existing tables is controlled by object privileges.
[Create] User: Allows the user to create users. Permission to operate on existing users is controlled by object privileges.
[Create] View: Allows the user to create views. Permission to operate on existing views is controlled by object privileges.
[Manage] Hardware: Allows the user to perform the following hardware-related operations: view hardware status, manage SPUs, manage topology and mirroring, and run diagnostics. The user can run the nzds and nzhw commands.
[Manage] Security: Allows the user to perform commands and operations relating to history databases, such as creating and cleaning up history databases.
[Manage] System: Allows the user to perform the following management operations: start/stop/pause/resume the system, abort sessions, and view the distribution map, system statistics, logs, and plan files from active query or query history lists. The user can use the nzsystem, nzstate, nzstats, and nzsession priority commands.
Restore: Allows the user to restore the system. The user can run the nzrestore command.
Privilege Description
Abort: Allows the user to abort sessions. Applies to groups and users. For more information, see Aborting Sessions or Transactions on page 9-22.
Alter: Allows the user to modify object attributes. Applies to all objects.
Delete: Allows the user to delete table rows. Applies only to tables.
Execute: Allows the user to execute UDFs and UDAs in SQL queries.
GenStats: Allows the user to generate statistics on tables or databases. The user can run the GENERATE STATISTICS command.
Groom: Allows the user to perform general housekeeping and cleanup operations on tables using the GROOM TABLE command. The GROOM TABLE command performs reclaim operations to remove deleted rows and also reorganizes tables based on the clustered base table's organizing keys.
Insert: Allows the user to insert rows into a table. Applies only to tables.
List: Allows the user to display an object's name, either in a list or in another manner. Applies to all objects.
Select: Allows the user to select (or query) rows within a table. Applies to tables and views.
Truncate: Allows the user to delete all rows from a table with no rollback. Applies only to tables.
Update: Allows the user to modify table rows, such as changing field values or changing the next value of a sequence. Applies to tables only.
Privilege Precedence
Netezza uses the following order of precedence for permissions:
1. Privileges granted on a particular object within a particular database
2. Privileges granted on an object class within a particular database
3. Privileges granted on an object class within the system database
You can assign multiple privileges for the same object for the same user. The Netezza system uses the rules of precedence to determine which privileges to use: privileges granted at a specific object or database level override privileges granted at the global level. For example, assume the following three GRANT commands:
Within the system database, enter:
system(admin)=> GRANT SELECT,INSERT,UPDATE,DELETE,TRUNCATE ON TABLE TO user1
Within the dev database, enter:
dev(admin)=> GRANT SELECT,INSERT,UPDATE ON TABLE TO user1
Within the dev database, enter:
dev(admin)=> GRANT SELECT, LOAD ON customer TO user1
Using these grant statements and assuming that customer is a user table, user1 has the following permissions:
With the first GRANT command, user1 has global permissions to SELECT, INSERT,
UPDATE, DELETE, or TRUNCATE any table in any database.
The second GRANT command restricts user1's permissions specifically on the dev database. When user1 connects to dev, user1 can perform only SELECT, INSERT, or UPDATE operations on tables within that database.
The third GRANT command overrides privileges for user1 on the customer table within
the dev database. As a result of this command, the only actions that user1 can perform
on the customer table in the dev database are SELECT and LOAD.
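The precedence rules in this example can be sketched in a few lines. The following Python fragment is illustrative only (the grant structures are hypothetical, and only table-class grants are modeled); it returns the most specific matching grant without merging in broader ones.

```python
# Illustrative sketch of the three-level privilege precedence:
# object-in-database, class-in-database, then class-in-system-database.
def lookup_privileges(grants, database, obj):
    """grants maps (database, object_or_class) -> privilege set.
    The most specific match wins; broader grants are not merged in."""
    for key in ((database, obj),        # 1. this object in this database
                (database, "TABLE"),    # 2. object class in this database
                ("system", "TABLE")):   # 3. object class in system database
        if key in grants:
            return grants[key]
    return set()

# The three GRANT commands from the example above, as a lookup table.
grants = {
    ("system", "TABLE"): {"SELECT", "INSERT", "UPDATE", "DELETE", "TRUNCATE"},
    ("dev", "TABLE"): {"SELECT", "INSERT", "UPDATE"},
    ("dev", "customer"): {"SELECT", "LOAD"},
}
assert lookup_privileges(grants, "dev", "customer") == {"SELECT", "LOAD"}
assert lookup_privileges(grants, "dev", "orders") == {"SELECT", "INSERT", "UPDATE"}
```

As in the manual's example, the object-level grant on customer in dev wins over both the dev class-level grant and the global grant made in the system database.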
Table 8-3 lists the Netezza SQL built-in commands that you can use to display the privileges for users and groups.
Command Description
\dG: Displays a list of all defined groups and the users who are members of each group.
\dp: Displays the list of all privileges assigned to a user, regardless of whether those privileges were assigned directly or through group membership.
\dpg: Displays a list of all privileges assigned to a group as a result of the GRANT command to the group.
\dpu: Displays a list of all privileges assigned to a user as a result of the GRANT command to the user.
\dU: Displays a list of all defined users and the groups in which they are members.
Note: When revoking privileges, make sure you sign on to the same database where you
granted the privileges, then use the commands in Table 8-3 to verify the results.
Revoking Privileges
You can revoke administrative and object privileges using the REVOKE command. When
you revoke a privilege from a group, all the members of that group lose the privilege unless
they have the privilege from membership in another group or via their user account.
For example, to revoke the Insert privilege for the group public on the table films, enter:
SYSTEM(ADMIN)=> REVOKE INSERT ON films FROM PUBLIC;
REVOKE
Privileges by Object
There are no implicit privileges. For example, if you grant a user all privileges on a database, you have not granted the user all privileges on the objects within that database. Instead, you have granted the user all the valid privileges for the database itself (that is, alter, drop, and list).
Client session: Users can see a session's user name and query if that user object is viewable. Users can see the connected database name if that database object is viewable. Users must have the Abort privilege on another user or be the system administrator to abort another user's session or transaction.
Logon Authentication
The Netezza system offers two authentication methods for Netezza database users:
Local authentication, where Netezza administrators define the database users and their passwords using the CREATE USER command or through the Netezza administrative interfaces. In local authentication, you use the Netezza system to manage database accounts and passwords, as well as to add and remove database users from the system. This is the default authentication method.
LDAP authentication, where you can use an LDAP name server to authenticate database users and manage passwords as well as database account activations and deactivations. The Netezza system then uses a Pluggable Authentication Module (PAM) to authenticate users on the LDAP name server. Note that Microsoft Active Directory conforms to the LDAP protocol, so it can be treated like an LDAP server for the purposes of LDAP authentication.
Authentication is a system-wide setting; that is, your users must be either locally authenticated or authenticated using the LDAP method. If you choose LDAP authentication, note that you can still create users with local authentication on a per-user basis. The Netezza host supports LDAP authentication for database user logins only, not for operating system logins on the host.
Local Authentication
Local authentication validates that the user name and password entered at logon match the ones stored in the Netezza system catalog. The manager process that accepts the initial client connection is responsible for initiating the authentication checks and disallowing any future requests if the check fails. Because users can make connections across the network, the system sends passwords from clients in an opaque form.
The Netezza system manages user names and passwords. It does not rely on the underlying (Linux) operating system's user name and password mechanism, other than for the nz user, which runs the Netezza software.
Note: When you create a new user for local authentication, you must specify a password for
that account. You can explicitly create a user with a NULL password, but note that the user
will not be allowed to log on if you use local authentication.
LDAP Authentication
The LDAP authentication method differs from the local authentication method in that the Netezza system uses the user name and password stored on the LDAP server to authenticate the user. Following successful LDAP authentication, the Netezza system also confirms that the user account is defined on the Netezza system. The LDAP administrator is responsible for adding and managing the user accounts and passwords, deactivating accounts, and so on, on the LDAP server.
The Netezza administrator must ensure that each Netezza user is also defined within the
Netezza system catalog. The Netezza user names must match the user names defined in
the LDAP server. If the user names do not match, the Netezza administrator should use the
ALTER USER command to change the user name to match the LDAP user name, or contact
the LDAP administrator to change the LDAP user name.
The command does not leverage any of the settings from previous command instances; make sure that you specify all the arguments that you require when you use the command. The command updates the ldap.conf file with the configuration settings specified in the latest SET AUTHENTICATION command.
Note: After you change to LDAP authentication, if you later decide to return to local
authentication, you can use the SET AUTHENTICATION LOCAL command to restore the
default behavior. When you return to local authentication, the command overwrites the
ldap.conf file with the ldap.conf.orig file (that is, the ldap.conf file that resulted after the
first SET AUTHENTICATION LDAP command was issued). The Netezza system then starts
to use local authentication, which requires user accounts with passwords on the Netezza
system. If you have Netezza user accounts with no passwords or that were created with a
NULL password, use the ALTER USER command to update each user account with a
password.
Command Description
ALTER USER: Modifies a Netezza user account. (If you change from LDAP to local authentication, you may need to alter user accounts to ensure that they have a password defined on the Netezza system.)
If you use secure communications to the Netezza, there are some optional configuration
steps for the Netezza host:
Define SSL certification files in the postgresql.conf file for peer authentication
Create connection records to restrict and manage client access to the Netezza system
The Netezza client users must specify security arguments when they connect to the Netezza system. The nzsql command arguments are described in the IBM Netezza Database User's Guide. For a description of the changes needed for the ODBC and JDBC clients, refer to the IBM Netezza ODBC, JDBC and OLE DB Installation and Configuration Guide.
# Uncomment the lines below and mention appropriate path for the
# server certificate and key files. By default the files present
# in the data directory will be used.
#server_cert_file='/nz/data/security/server-cert.pem'
#server_key_file='/nz/data/security/server-key.pem'
4. Delete the pound sign (#) character at the beginning of the server_cert_file and
server_key_file parameters, and specify the path names of your CA server certificate
and key files where they are saved on the Netezza host.
Client users must install a copy of the CA root certificate file on their client
systems. The client users specify the location of the CA root certificate when they run
commands such as nzsql, nzhw, and others.
Note: Make sure that the key file is not password protected; by default, it is not.
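As an illustration, a client user might request a secured connection with arguments similar to the following; the exact option names and values are described in the IBM Netezza Database User's Guide, and the host name and certificate path here are placeholders:

```shell
# Hypothetical invocation: request a secured-only connection and point
# the client at the CA root certificate copied from the Netezza host.
nzsql -host nzhost -d dev -u user1 -pw password \
      -securityLevel onlySecured -caCertFile /home/user1/ca-cert.pem
```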
In the sample output, the connection requests define the following capabilities:
- Connection ID 1 specifies that the Netezza host will accept connection requests from
any local user (someone logged in directly to the Netezza system) to all databases.
- Connection ID 2 specifies that the host will accept either secured or unsecured
connection requests from any local user (connecting via IP) to all databases.
- Connection ID 3 specifies that the host will accept either secured or unsecured
connection requests from any remote client user (connecting via IP) to any database.
It is important to note that the host may accept a connection request, but the user must
still pass account authentication (username/password verification), as well as have
permissions to access the requested database.
The first record that matches the client connection information is used to perform
authentication. If the first chosen record does not work, the system does not look for a
second record. If no record matches, access is denied. With the default records shown
above, any client user who accesses the Netezza system and has proper user account and
password credentials will be allowed a connection; they could request either secured or
unsecured connections, as the Netezza host accepts either type.
This example shows the importance of record precedence. Note that record ID 2 will be
the first match for all of the users who remotely connect to the Netezza system. Because
it is set to host, this record allows either secured or unsecured connections based on
the connection request from the client. To ensure that the user at 1.2.3.4 is
authenticated for a secure connection, drop connection record 2 and add it again with a
new SET CONNECTION command, so that the more general record follows the more specific
record for 1.2.3.4.
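The reordering can be sketched as follows; the SET CONNECTION argument names here are illustrative placeholders, so check the SET CONNECTION description in the SQL command reference before running them:

```sql
-- Remove the general record so that it can be re-added after the
-- more specific record (record IDs come from SHOW CONNECTION).
DROP CONNECTION 2;
-- Add a secured-only (hostssl) record for the specific client first
-- (illustrative argument names):
SET CONNECTION hostssl DATABASE 'all' IPADDR '1.2.3.4' IPMASK '255.255.255.255';
-- Then re-add the general record, which is now evaluated last:
SET CONNECTION host DATABASE 'all' IPADDR '0.0.0.0' IPMASK '0.0.0.0';
```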
Command Description
SHOW CONNECTION Displays the current set of connection records for client
access.
Rowset limit (user, group, and system): values 1 to 2,147,483,647 or unlimited (zero);
default unlimited (zero). The maximum rowset limit per query. For more information, see
Specifying User Rowset Limits on page 8-27.
Session limit (user, group, and system): values 1 to 2,147,483,647 minutes or unlimited
(zero); default unlimited (zero). When a SQL session is idle for longer than the
specified period, the system terminates the session. For more information, see
Specifying Session Timeout on page 8-29.
Session priority (user, group, and system): values critical, high, normal, or low;
default none. Defines the default and maximum priority for the user or group.
When you change these values, the system sets them at session startup and they remain in
effect for the duration of the session.
You specify the system defaults with the SET SYSTEM DEFAULT command. To display the
system values, use the SHOW SYSTEM DEFAULT command.
To set a system default, use a command similar to the following, which sets the default
session timeout to 300 minutes:
SYSTEM(ADMIN)=> SET SYSTEM DEFAULT SESSIONTIMEOUT TO 300;
SET VARIABLE
To show the system default for the session timeout, use the following syntax:
SYSTEM(ADMIN)=> SHOW SYSTEM DEFAULT sessiontimeout;
NOTICE: 'session timeout' = '300'
SHOW VARIABLE
You can also impose rowset limits on both individual users and groups. In addition,
users can set their own rowset limits. The admin user does not have a limit on the
number of rows a query can return.
You can impose query timeout limits on both individual users and groups. In addition,
users can set their own query timeouts.
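Limits of this kind are typically set with ALTER USER or ALTER GROUP; the following is a sketch with placeholder names and values, so verify the option names against the SQL command reference:

```sql
-- Cap the rows a query can return for user1 (illustrative value).
ALTER USER user1 WITH ROWSETLIMIT 1000000;
-- Set a 10-minute query timeout for a hypothetical "reporting" group.
ALTER GROUP reporting WITH QUERYTIMEOUT 10;
```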
6. In the Netezza ODBC Driver Configuration, select the CommLog check box. This causes
the system to create a file that contains the following:
- The connection string
- The SQL commands executed
- The first tuple of data returned
- The number of tuples returned
The system writes log information to the file specified for the CommLog option (for
example, C:\nzsqlodbc_xxxx.log).
_v_operator: objid, operator, owner, create date, description, opr name, opr left,
opr right, opr result, opr code, and opr kind
_v_relation_column: objid, object name, owner, create date, object type, attr number,
attr name, attr type, and not null indicator
_v_relation_column_def: objid, object name, owner, create date, object type, attr
number, attr name, and attr default value
_v_table_dist_map: objid, table name, owner, create date, dist attr number, and dist
attr name
_v_user: objid, user name, owner, valid until date, and create date
_v_view: objid, view name, owner, create date, rel kind, rel checks, rel triggers, has
rules, unique keys, foreign keys, references, has p keys, and number attributes
Table 8-10 describes the views that show system information. You must have administrator
privileges to display these views.
Unlike other database solutions, the Netezza appliance does not require a database
administrator (DBA) to manage and control user databases. Instead, there are a few
system administration tasks relating to the creation and management of the user content
stored on
the system. This chapter describes some basic concepts of Netezza databases, and some
management and maintenance tasks that can help to ensure the best performance for user
queries.
You can manage Netezza databases and their objects using SQL commands that you run
through the nzsql command (which is available on the Netezza system and in the UNIX
client kits) as well as by using the NzAdmin tool, Web Admin interface, and data
connectivity applications like ODBC, JDBC, and OLE DB. This chapter focuses on running
SQL commands (shown in uppercase, such as CREATE DATABASE) via the nzsql command
interface to perform tasks.
IBM Netezza System Administrators Guide
You cannot delete the system database. The admin user can also make another user the
owner of a database, which gives that user admin-like control over that database and its
contents.
The database creator becomes the default owner of the database. The owner can remove
the database and all its objects, even if other users own objects within the database.
Within a database, permitted users can create tables and populate them with data for
queries. For details on the loading process options, see the IBM Netezza Data Loading
Guide.
For example, assume that you create a table and insert only one row to the table. The
system allocates one 3MB extent on a data slice to hold that row. The row is stored in
the first 128 KB page of the extent. If you view the table size using a tool such as the
NzAdmin interface, the table shows a Bytes Allocated value of 3MB (the allocated extent
for the table), and a Bytes Used value of 128 KB (the used page in that extent).
For tables that are well distributed with rows on each data slice of the system, the table
allocation will be a minimum of 3MB x <numberOfDataSlices> of storage space. If you
have an evenly distributed table with 24 rows on an IBM Netezza 1000-3 system, which
has 24 data slices, the table will allocate 3MB x 24 extents (72MB) of space for the table.
That same table uses 128KB x 24 pages, or approximately 3MB of disk space.
The Bytes Allocated value is always larger than the Bytes Used value. For very small
tables, the Bytes Allocated value may be much larger than the Bytes Used value,
especially on multi-rack Netezza systems with hundreds of data slices. For larger
tables, the Bytes Allocated value is typically much closer in size to the Bytes Used
value.
float8s 8 bytes
float4s 4 bytes
dates 4 bytes
char(16) 16 bytes
char(1) 1 byte
You can use the following Netezza SQL command syntax to create tables and specify distri-
bution keys:
To create an explicit distribution key, the Netezza SQL syntax is:
CREATE TABLE <tablename> [ ( <column> [, ] ) ]
DISTRIBUTE ON [HASH] ( <column> [ , ] ) ;
The phrase DISTRIBUTE ON specifies the distribution key; the word HASH is optional.
To create a table without specifying a distribution key, the Netezza SQL syntax is:
CREATE TABLE <tablename> (col1 int, col2 int, col3 int);
The Netezza system selects a distribution key. There is no guarantee what that key is,
and it can vary depending on the Netezza software release.
To create a random distribution, the Netezza SQL syntax is:
CREATE TABLE <tablename> [ ( <column> [, ] ) ]DISTRIBUTE ON RANDOM;
The phrase DISTRIBUTE ON RANDOM specifies a round-robin distribution.
You can also use the NzAdmin tool to create tables and specify the distribution key. For
more information about the CREATE TABLE command, see the IBM Netezza Database User's
Guide.
nal table, then the new records will be on the same data slices that they started on. The
system has no need to send the records to the host (and consume transmission time and
host processing power). Rather, the SPUs simply create the records locally, reading from
the same data slices and writing back out to the same data slices. This way of creating
a new table is much more efficient; in this case each SPU communicates only with its own
data slices.
Choosing the same distribution key causes the system to create the new table local to each
data slice (reading from the original table, writing to the new table).
create [ temporary | temp ] TABLE table_name [ (column [, ...] ) ]
as select_clause [ distribute on ( column [, ...] ) ];
When you create a subset table or temp table, you do not need to specify a new
distribution key or distribution method. Instead, allow the new table to inherit the
parent table's distribution key. This avoids the extra data distribution that can occur
when the inherited and specified keys do not match.
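For example, a subset table that inherits the parent table's distribution might be created as follows; the table and column names are placeholders:

```sql
-- No DISTRIBUTE ON clause: the new table inherits the distribution of
-- its parent, so the data does not need to be redistributed.
CREATE TABLE sales_recent AS
SELECT * FROM sales WHERE sale_date >= '2010-01-01';
```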
Verifying Distribution
When the system creates records, it assigns them to a logical data slice based on their
distribution key value. You can use the datasliceid keyword in queries to determine how
many records you have stored on each data slice and thus, whether the data is
distributed evenly across all data slices.
To check your distribution, run the following query:
select datasliceid, count(datasliceid) as Rows
from table_name group by datasliceid order by Rows;
You can also view the distribution from the NzAdmin tool. To view record distribution for a
table you must have the following object privileges: list on the database, list on the table,
and select on the table.
and time to complete their jobs. These data slices and the SPUs that manage them become
a performance bottleneck for your queries. Uneven distribution of data is called skew. An
optimal table distribution has no skew.
Skew can happen while distributing or loading the data into the following types of
tables:
- Base tables: Database administrators define the schema and create tables.
- Intra-session tables: Applications or SQL users create temp tables.
Column          Description
Table           The name of the table that meets or exceeds the specified skew threshold.
Skew            The size difference in megabytes between the smallest data slice for a
                table and the largest data slice for the table.
Min/Data slice  The size of the table's smallest portion on a data slice in MB.
Max/Data slice  The size of the table's largest portion on a data slice in MB.
Avg/Data slice  The average data slice size in MB across all the data slices.
CBTs are most often used for large fact or event tables which could have millions or billions
of rows. If the table does not have a record organization that matches the types of queries
that run against it, scanning the records of such a large table requires a lengthy processing
time as full disk scans could be needed to gather the relevant records. By reorganizing the
table to match your queries against it, you can group the records to take advantage of zone
maps and improve performance.
CBTs offer several benefits:
- CBTs support multi-dimension lookups where you can organize records by one, two,
three, or four lookup keys. In the example shown in Figure 9-3, if your queries commonly
restrict on transaction type and store ID, you can organize records using both of those
keys to improve query performance.
- CBTs improve query performance by adding more zone maps for a table because the
organizing key columns are also zone mapped (if the organizing column data type supports
zone maps).
- CBTs increase the supported data types for zone-mapped columns, thus allowing you to
improve performance for queries that restrict along multiple dimensions.
- CBTs allow you to incrementally organize data within your user tables in situations
where data cannot easily be accumulated in staging areas for pre-ordering before
insertions/loads. CBTs can help you to eliminate or reduce pre-sorting of new table
records prior to a load/insert operation.
- CBTs save disk space. Unlike indexes, materialized views, and other auxiliary data
structures, CBTs do not replicate the base table data and do not allocate additional
data structures.
The organizing keys must be columns that can be referenced in zone maps. By default,
Netezza creates zone maps for columns of the following data types:
Integer - 1-byte, 2-byte, 4-byte, and 8-byte
Date
Timestamp
In addition, Netezza also creates zone maps for the following data types if columns of this
type are used as the ORDER BY restriction for a materialized view or as the organizing key
of a CBT:
Char - all sizes, but only the first 8 bytes are used in the zone map
Varchar - all sizes, but only the first 8 bytes are used in the zone map
Nchar - all sizes, but only the first 8 bytes are used in the zone map
Nvarchar - all sizes, but only the first 8 bytes are used in the zone map
Numeric - all sizes up to and including numeric(18)
Float
Double
Bool
Time
Time with timezone
Interval
You specify the organizing keys for a table when you create it (such as using the CREATE
TABLE command), or when you alter it (such as using ALTER TABLE). When you define the
organizing keys for a table, note that Netezza does not automatically take action to
reorganize the records; you use the GROOM TABLE command to start the reorganization
process. You can add to, change, or drop the organizing keys for a table using ALTER
TABLE. Note that the additional or changed keys take effect immediately, but you must
groom the table to reorganize the records to the new keys. You cannot drop a column from
a table if that column is specified as an organizing key for that table.
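Putting those steps together, a CBT might be defined and maintained as in the following sketch; the table and column names are placeholders:

```sql
-- Create a fact table organized on two lookup keys.
CREATE TABLE txn_fact (
    store_id  INTEGER,
    txn_type  INTEGER,
    txn_date  DATE,
    amount    NUMERIC(12,2)
)
DISTRIBUTE ON (store_id)
ORGANIZE ON (txn_type, store_id);

-- Change the organizing keys later; the new keys take effect at once,
-- but the records are reorganized only when the table is groomed.
ALTER TABLE txn_fact ORGANIZE ON (txn_date, store_id);
GROOM TABLE txn_fact;
```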
The SELECT operations run in parallel with the groom operations; the INSERT, UPDATE, and DELETE
operations run serially between the groom steps. For CBTs, the groom steps are somewhat
longer than for non-CBT tables, so INSERT, UPDATE, and DELETE operations may pend for
a longer time until the current step completes.
Note: When you specify organizing keys for an existing table to make it a CBT, the new
organization could impact the compression size of the table. The new organization could
create sequences of records that improve the overall compression benefit, or it could create
sequences that do not compress as well. Following a groom operation, your table size could
change somewhat from its size using the previous organization.
The GENERATE STATISTICS command collects this information. If you have the GenStats
privilege, you can run this command on a database, table, or individual columns. By
default, the admin user can run the command on any database (to process all the tables in
the database) or any individual table.
The admin user can assign other users this privilege. For example, to give user1 the
privilege to run GENERATE STATISTICS on one or all tables in the DEV database, the admin
user must grant user1 the LIST privilege on tables in the system database, and the
GENSTATS privilege on tables in the dev database, as in these sample SQL commands:
SYSTEM(ADMIN)=> GRANT LIST ON TABLE TO user1;
DEV(ADMIN)=> GRANT GENSTATS ON TABLE TO user1;
For more information about the GenStats privilege, see Table 8-1 on page 8-9.
Table 9-4 describes the nzsql command syntax for these cases.
The GENERATE STATISTICS command reads every row in every table to determine dispersion
values (no sampling). It provides the most accurate and best quality statistics.
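For example, the command can be run against a single table or against every table in the current database; the table name below is a placeholder:

```sql
-- Collect full statistics for one table...
GENERATE STATISTICS ON customers;
-- ...or for all tables in the current database.
GENERATE STATISTICS;
```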
Table 9-5 describes when the Netezza system automatically maintains table statistics.
Command  Row counts  Min/Max  Null  Dispersion (estimated)  Zone maps
DELETE   no          no       no    no                      no
- The number of maximum extents scanned for the target table on the data slices with the
greatest skew
- The number of rows scanned for the target table that apply to each join
- The number of unique values for any target table column used in subsequent join or
group by processing
This information is conditionally requested and used in estimating the number of rows
resulting from a table scan, join, or group by operation.
Note: JIT statistics do not eliminate the need to run the GENERATE STATISTICS command.
While JIT statistics help guide row estimation, there are situations where the catalog
information calculated by GENERATE STATISTICS is used in subsequent calculations to
complement the row estimations. Depending on table size, the GENERATE STATISTICS
process will not collect dispersion because the JIT statistics scan will estimate it
on-the-fly as needed.
The system automatically runs JIT statistics for user tables when it detects the
following conditions:
- Tables that contain more than five million records.
- Queries that contain at least one column restriction.
- Tables that participate in a join or have an associated materialized view. JIT
statistics are integrated with materialized views to ensure the exact number of extents
is scanned.
The system runs JIT statistics even in EXPLAIN mode. To check if JIT statistics were run,
review the EXPLAIN VERBOSE output and look for cardinality estimations that are flagged
with the label JIT.
Zone Maps
Zone maps are automatically generated internal tables that the Netezza system uses to
improve the throughput and response time of SQL queries against large grouped or nearly
ordered date, timestamp, byteint, smallint, integer, and bigint data types.
Zone maps reduce the disk scan operations required to retrieve data by eliminating
records outside the start and end range of a WHERE clause on restricted scan queries.
The Netezza Storage Manager uses zone maps to skip portions of tables that do not
contain rows of interest and thus reduces the number of disk pages and extents to scan
and the search time, disk contention, and disk I/O.
Typically, users run queries against a subset of history such as the records for one week,
one month, or one quarter. To optimize query performance, zone maps help to eliminate
scans of the data that is outside the range of interest.
Grooming Tables
As part of your routine database maintenance activities, you should plan to recover disk
space occupied by outdated or deleted rows. In normal Netezza operation, an update or
delete of a table row does not remove the old tuple (version of the row). This approach
benefits multiversion concurrency control by retaining tuples that could potentially be
visible to other transactions. Over time, however, the outdated or deleted tuples are of
no interest to any transaction. After you have captured them in a backup, you can
reclaim the space they occupy using the SQL GROOM TABLE command.
Note: Starting in Release 6.0, you use the GROOM TABLE command to maintain the user
tables by reclaiming disk space for deleted or outdated rows, as well as to reorganize the
tables by their organizing keys. The GROOM TABLE command processes and reorganizes
the table records in each data slice in a series of steps. Users can perform tasks such
as SELECT, UPDATE, DELETE, and INSERT operations while the online data grooming is
taking place. The SELECT operations run in parallel with the groom operations; any
INSERT, UPDATE, and DELETE operations run serially between the groom steps. For details
about the GROOM TABLE command, see the IBM Netezza Database User's Guide.
Note the following best practices when you groom tables to reclaim disk space:
- You should groom tables that receive frequent updates or deletes more often than
tables that are seldom updated.
- If you have a mixture of large tables, some of which are heavily updated and others
that are seldom updated, you might want to set up periodic tasks that routinely groom
the frequently updated tables.
- Grooming deleted records has no effect on your database statistics, because the
process physically removes records that were already logically deleted. When you groom a
table, the system leaves the min/max, null, and estimated dispersion values unchanged.
For more information on when to run the GENERATE STATISTICS command, see Running the
GENERATE STATISTICS Command on page 9-16.
- Physically reclaiming the records, however, does affect where the remaining records in
the table are located. So when you physically reclaim records, the system updates the
zone map.
Note: When you delete a table's contents completely, consider using the TRUNCATE command
rather than the DELETE command; TRUNCATE eliminates the need to run the GROOM TABLE
command.
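A periodic maintenance task might combine these practices as follows; the table names are placeholders, and the GROOM TABLE options should be checked against the SQL command reference:

```sql
-- Reclaim space from logically deleted rows in a heavily updated table.
GROOM TABLE orders RECORDS READY;

-- When emptying a table completely, TRUNCATE removes all rows at once
-- and leaves nothing for GROOM TABLE to reclaim, unlike DELETE.
TRUNCATE TABLE staging_orders;
```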
Option                    Description
-alldbs                   Checks all the databases in the system for CBTs that require
                          grooming.
-db db_name [db_name...]  Checks the specified database in the system for CBTs that
                          require grooming. You can specify one or more database names
                          to check only those databases.
Managing Sessions
A session represents a single connection to a Netezza appliance. Sessions begin when
users perform any of the following actions:
- Invoke the nzsql command; the session ends when they enter \q (quit) to exit the
session.
- Invoke the nzload command, the NzAdmin tool, or other client commands; the session
ends when the command completes, or when the user exits the user interface.
Viewing Sessions
You can use the nzsession command to display the list of current user sessions and to list
the session types. You can be logged in as any database user to use the nzsession show
command; however, some of the data displayed by the command could be obscured if your
account does not have correct privileges. The admin user can see all the information.
To list all active sessions, enter:
nzsession show -u admin -pw password
ID Type User Start Time PID Database State
Priority Name Client IP Client PID Command
----- ---- ----- ----------------------- ----- ------------ ------
------------- --------- ---------- ------------------------
16129 sql ADMIN 12-Apr-10, 15:39:11 EDT 11848 TPCH1_NOTHIN idle
normal 127.0.0.1 11821 select * from lineitem;
16133 sql ADMIN 12-Apr-10, 15:45:26 EDT 11964 SYSTEM active
normal 127.0.0.1 11963 SELECT session_id, clien
If you are a database user who does not have any special privileges, information such as
the user name, database, client PID, and SQL command appears only as asterisks, for
example:
nzsession show -u user1 -pw pass
ID Type User Start Time PID Database State
Priority Name Client IP Client PID Command
----- ---- ----- ----------------------- ----- -------- ------ ----
--------- --------- ---------- ------------------------
16129 sql ***** 12-Apr-10, 15:39:11 EDT 11848 ***** idle
normal ***** *****
16134 sql USER1 12-Apr-10, 15:48:00 EDT 12012 SYSTEM active
normal 127.0.0.1 12011 SELECT session_id, clien
For a description of the output from the nzsession command, see nzsession on
page A-39.
To list session types, enter:
nzsession listSessionTypes
Note: Do not abort system sessions. Doing so can cause your system to fail to restart.
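For example, to end a runaway user session (never a system session), the nzsession abort subcommand can be used with a session ID taken from nzsession show; the ID below is a placeholder:

```shell
# Abort user session 16129 as the admin user
# (the ID comes from the nzsession show output).
nzsession abort -u admin -pw password -id 16129
```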
Running Transactions
A transaction is a series of one or more operations on database-related objects and/or data.
Transactions provide the following benefits:
- Ensure integrity among multiple operations by allowing all or none of the operations
to take effect. You accomplish this by starting a transaction, performing operations,
and then executing either a commit or a rollback (also called an abort).
- Provide a means of canceling completed work for a series of operations that fail
before finishing.
- Provide a consistent view of data to users, in the midst of changes by other users.
The combination of create and delete transaction IDs associated with each data row plus
Netezza internal controls guarantee that once a transaction has begun, new transactions
or ones that have yet to be committed do not affect the view of the data.
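The all-or-none behavior can be sketched with an explicit transaction; the table name and values are placeholders:

```sql
BEGIN;                                        -- start an explicit transaction
INSERT INTO accounts_log VALUES (1, 'debit');
INSERT INTO accounts_log VALUES (2, 'credit');
COMMIT;                                       -- make both inserts visible
-- Executing ROLLBACK instead of COMMIT would discard both inserts.
```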
The following activities do not count against the read/write transaction limit:
- Committed transactions
- Transactions that have finished rolling back
- SELECT statements that are not inside a multistatement transaction
- Transactions that create or modify temporary tables only, and/or modify only tables
created within the same transaction (for example, CREATE TABLE AS SELECT)
- Multi-statement read-only transactions (BEGIN SET TRANSACTION READ ONLY)
Operation                      Implicit   Explicit               Explicit               Explicit
                               (default)  begin_queue_if_full=T  begin_queue_if_full=T, begin_queue_if_full=F
                                                                 set session read only
CREATE                         X          X                      E                      E(c)
CREATE TABLE AS                X          X                      E                      E
DROP                           X          X                      E                      E
TRUNCATE                       X          X                      E                      E
INSERT                         Q(b)       X                      E                      E
DELETE                         Q          X                      E                      E
UPDATE                         Q          X                      E                      E
CREATE/MODIFY temporary table  X          X                      X                      X
a. X = starts executing
b. Q = request queues
c. E = error message
The optimizer can also dynamically rewrite queries to improve query performance. Many
data warehouses use BI applications that generate SQL designed to run on the databases
of multiple vendors. The portability of these applications often comes at the expense of
efficient SQL. The SQL that the application generates does not take advantage of
vendor-specific enhancements, capabilities, or strengths. Hence, the optimizer may
rewrite these queries to improve query performance.
Execution Plans
The optimizer uses statistics to determine the optimal execution plan for queries. The
statistics include the following:
The number of rows in the table
The number of unique or distinct values of each column
The number of NULLs in each column
The minimum and maximum of each column
For the optimizer to create the best execution plan that results in the best performance, it
must have the most up-to-date statistics. For more information about running statistics,
see Updating Database Statistics on page 9-14.
You do not need to be the administrator to view these views, but the specific users or
groups that want access to a view must be granted LIST permission on both the user and
database objects.
For example, to grant admin1 permission to view bob's queries on database emp, use the
following SQL commands:
GRANT LIST ON bob TO admin1;
GRANT LIST ON emp TO admin1;
You can also use the nzstats command to view the Query Table and Query History Table. For
more information, see Table 13-12 on page 13-10 and Table 13-13 on page 13-11.
Table 9-8 lists the _v_qrystat view, which lists active queries.
Columns Description
SQL statement The SQL statement. Note that the statement is not truncated as it is
with the nzstats command.
Submit date The date and time the query was submitted.
Start date The date and time the query started running.
Priority text The priority of the queue when submitted (normal or high).
Estimated cost The estimated cost, as determined by the optimizer. The units are
thousandths of a second, that is, 1000 equals one second.
Table 9-9 describes the _v_qryhist view, which lists recent queries.
Columns Description
SQL statement The SQL statement. Note that the statement is not truncated as it is
with the nzstats command.
Submit date The date and time the query was submitted.
Start date The date and time the query started running.
End date The date and time that the query ended.
Priority text The priority of the queue when submitted (normal or high).
This chapter describes how to back up and restore data on the Netezza system. It
provides general information on backup and restore methods, and also describes how to
use the third-party storage solutions that are supported by the Netezza system.
As a best practice, make sure that you schedule regular backups of your user databases
and your system catalog to ensure that you can restore your Netezza system. Make sure
that you run backups prior to (and after) major system changes, so that you have
snapshots of the system before and after those changes. A regular and current set of
backups can protect against loss of data following events such as disasters, hardware
failures, accidental data loss, or incorrect changes to existing databases.
Create backups of the system catalog (the /nz/data directory) on the Netezza host using
the nzhostbackup command. If the Netezza host fails, you can reload the system catalog
and metadata using the nzhostrestore command without a full database reload.
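For example, the host catalog can be archived and later restored with commands of the following form; the archive path is a placeholder:

```shell
# Back up the /nz/data catalog to an archive file...
nzhostbackup /nz/backups/hostdata.tar.gz
# ...and restore it after a host failure or reinstallation.
nzhostrestore /nz/backups/hostdata.tar.gz
```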
Table 10-1 lists the differences among the backup and restore methods, comparing
attributes such as schema backup, automatic incremental backup, non-proprietary format,
machine-size independence, rowid preservation, transaction ID preservation, and upgrade
and downgrade safety. (The machine-size independent methods usually take more time to
complete than the compressed internal format backups and loads.)
The CREATE EXTERNAL TABLE command and the procedures for using external tables are
described in detail in the IBM Netezza Data Loading Guide.
Symantec and NetBackup are trademarks or registered trademarks of Symantec Corporation
or its affiliates in the U.S. and other countries. EMC and NetWorker are registered
trademarks or trademarks of EMC Corporation in the United States and other countries.
Overview
The Netezza system contains data that is critical to the operation of the system and to
the user databases and tables stored within the Netezza appliance. The data includes the
Netezza catalog metadata, the user databases and tables, and access information such as
users, groups, and global permissions. Netezza provides a set of commands to back up and
restore this information, as described in Table 10-2.
Table 10-2: Backup/Restore Commands and Content
Note: The Netezza backup processes do not back up host software such as the Linux
operating system files or any applications that you may have installed on the Netezza
host, such as the Web Admin client. If you accidentally remove files in the Web Admin
installation directories, you can reinstall the Web Admin client to restore them. If you
accidentally delete Linux host operating system or firmware files, contact Netezza
Support for assistance in restoring them.
Note: The nzbackup and nzrestore commands do not back up the analytic executable objects
created by the IBM Netezza Analytics feature. If you use IBM Netezza Analytics on your
Netezza system, be sure to back up the Netezza databases, users, and global objects
(using nzbackup), the host metadata (using nzhostbackup), and the analytic executables
(using inzabackup). For more information about backup and restore commands for IBM
Netezza Analytics, see the User-Defined Analytic Process Developer's Guide.
The Netezza backup and restore operations can use network filesystem locations as well
as several third-party solutions such as IBM Tivoli Storage Manager, Symantec NetBackup,
and EMC NetWorker as destinations.
Database Completeness
The standard backup and restore using the nzbackup and nzrestore commands provides
transactionally consistent, automated backup and restore of the schema and data for all
objects of a database, including ownership and permissions for objects within that
database. You can use these commands to back up and restore an entire database, as well
as to restore a specific table in a database.
The nzrestore command requires that the database be dropped or empty when you restore
the database. Similarly, before you restore a table, you must first drop the table or
use the -droptables option to allow the command to drop a table that is going to be
restored.
Portability
Before performing a backup, consider where you plan to restore the data. For example, if you are restoring data to the same Netezza system or to another Netezza system (which could be a different model type or have a later software release), use the compressed internal format files created by the nzbackup command. The compressed internal format files are smaller and often load more quickly than text external format files. You can restore a database created on one Netezza model type to a different Netezza model type, such as a backup from an IBM Netezza 1000-6 to a 1000-12, if the destination Netezza has the same or a later Netezza release. A restore runs more slowly when you change the destination model type, because the host on the target system must process and distribute the data slices according to the target model's data-slice-to-SPU topology.
As a best practice, when transferring data to a new Netezza system, or when restoring row-secure tables, use the nzrestore -globals operation to restore the users, groups, and privileges (that is, the access control and security information) first, before you restore the databases and tables. If the security information required by a row-secure table is not present on the system, the restore process exits with an error. For more information about multi-level security, see the IBM Netezza Advanced Security Administrator's Guide.
If you plan to load the Netezza data into a different system (that is, a non-Netezza system), the text format external tables are the most portable. Data in text external tables can be read by any product that can read text files, and can be loaded into any database that can read delimited text files.
A compressed binary format external table (also known as an internal format table) is a proprietary format that typically yields smaller data files, retains information about the Netezza topology, and thus is often faster to back up and restore. The alternative to compressed binary format is text format, which is a non-proprietary external table format that is independent of the Netezza topology, but yields larger files and can be slower to back up and restore.
The different backup/restore methods handle data compression in the following manner:
- When you use the standard backup with the nzbackup/nzrestore commands, the system automatically uses compressed external tables as the data transfer mechanism.
- When you use a compressed external table unload, the system compresses the data and only uncompresses it when you reload the data.
- Use manually created compressed external tables for backup when you want table-level backup or the ability to send data to a named pipe, for example, when using a named pipe with a third-party backup application.
- When you use a text format unload, the data is not compressed. For large tables, it is the slowest method and the one that takes up the most storage space.
Multi-Stream Backup
The Netezza backup process is a multi-stream process. If you specify multiple filesystem locations, or if you use third-party backup tools that support multiple connections, the backup process can parallelize the work to send the data to the backup destinations. Multi-stream support can improve backup performance by reducing the time required to transfer the data to the destination. To use multi-stream backups, use the -streams num option of the nzbackup command.
The maximum number of streams is 16. For systems that have fewer than 16 data slices, such as an IBM Netezza 100, the maximum number of streams is limited to the number of data slices. For filesystem backups, the system uses one stream for each backup destination. For third-party backup applications, the system uses the value specified for the -streams option; or, if it is not specified, the value of the host.bnrNumStreamsDefault configuration setting.
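The stream-limit rules above can be sketched as a small shell helper; the function name and example values are illustrative, not part of the Netezza tooling:

```shell
#!/bin/sh
# Sketch: compute the effective stream count per the rules above:
# the smaller of the requested value, the number of data slices, and 16.
effective_streams() {
    requested="$1"   # value you intend to pass to -streams
    dataslices="$2"  # number of data slices on the system
    max=16
    [ "$dataslices" -lt "$max" ] && max="$dataslices"
    [ "$requested" -lt "$max" ] && max="$requested"
    echo "$max"
}

# Example: on a system with 6 data slices, requesting 8 streams yields 6.
effective_streams 8 6
```

You could then pass the computed value to nzbackup -streams rather than hard-coding a count that exceeds the system's data slices.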
Note: If you use multi-stream backup to a third-party backup tool, make sure that you review the support for maximum jobs or parallelism in that tool. Some tools, such as NetBackup, have a limit on the number of concurrent streams. If you specify more streams than the NetBackup tool supports, the backup job will fail. If you use the EMC NetWorker backup connector, be sure to review the section Changing Parallelism Settings on page 10-61.
For TSM backups, the maximum number of streams is controlled by the MAXSESSIONS option in the Tivoli admin console (dsmadmc). You can display the value using query option MAXSESSIONS, and you can set it using setopt MAXSESSIONS value. If you specify more streams than the MAXSESSIONS value, the TSM server displays the error "Error: Connector init failed: 'ANS1351E (RC51) Session rejected: All server sessions are currently in use'" and the backup aborts.
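As a sketch of the check described above, the following compares a requested stream count against the MAXSESSIONS value reported by the TSM server; the function name is illustrative, and the dsmadmc invocation in the comment is an assumption about how you would obtain the value:

```shell
#!/bin/sh
# Sketch: confirm the requested stream count fits the TSM MAXSESSIONS limit
# before starting a multi-stream backup. You would normally obtain the limit
# from the Tivoli admin console, e.g. (illustrative):
#   dsmadmc -id=admin -password=... "query option MAXSESSIONS"
check_streams_vs_maxsessions() {
    streams="$1"      # intended nzbackup -streams value
    maxsessions="$2"  # MAXSESSIONS reported by the TSM server
    if [ "$streams" -gt "$maxsessions" ]; then
        echo "reduce -streams to $maxsessions or raise MAXSESSIONS"
        return 1
    fi
    echo "ok"
}
```

Running such a check before the backup avoids the ANS1351E session rejection described above.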
Netezza backup processes include a test to check that the backup tool supports the number of requested streams. If that test completes successfully, the actual backup process starts. If the test fails due to connection timeouts, the nzbackup process exits with the error "Stream unresponsive after 300 seconds, operation aborted. Check concurrency limits on server."
Only backup operations that transfer table data support multiple destinations and streams. These operations include full and incremental backups. Other operations, such as -schema-only backups, -globals backups, and reports, use only a single destination and a single stream, even if you specify multiple destinations. The restore process is always a single-stream process.
Special Columns
The backup/restore method you use affects how the system retains specials. The term specials refers to the hidden, system-maintained columns that exist in every table. The specials include rowid, datasliceid, createxid, and deletexid.
Table 10-3 describes how the backup method affects these values.
datasliceid: Retained when the machine size stays the same, otherwise recalculated (nzbackup/nzrestore and compressed external table backups); always recalculated (text format backups).
createxid: Receives the transaction ID of the transaction performing the restore (all backup methods).
Upgrade/Downgrade Concerns
The backup method you select also affects your ability to restore data after a Netezza software release upgrade or downgrade.
- Backups created with the nzbackup command can be safely reloaded and restored after an upgrade of the Netezza software, but they are not guaranteed to support reload or restore after a Netezza software downgrade. These backup formats are subject to change between releases.
- Compressed external table backups can be safely reloaded and restored after an upgrade of the Netezza software, but they are not guaranteed to support reload or restore after a Netezza software downgrade. These backup formats are subject to change between releases.
- Text format external tables are insensitive to software revisions and can be reloaded into any Netezza software release.
Note: Starting in Release 6.0.x, the nzrestore process no longer supports restoring backups created using NPS Release 2.2 or earlier.
-secret option to encrypt the host key using a user-supplied string. To restore that backup set, an administrator must specify the same string in the nzrestore -secret option. To protect the string, it is not captured in the backup and restore log files.
The -secret option is not required. If you do not specify one, the custom host key is encrypted using the default encryption process. Also, the -secret option is ignored if you do not use a custom host key for encrypting passwords on your system.
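A minimal sketch of the pairing requirement: the same -secret string must appear in both the nzbackup and nzrestore invocations. The database name, directory, and passphrase below are placeholders:

```shell
#!/bin/sh
# Sketch: the -secret string supplied at backup time must be supplied again
# at restore time. All values here are placeholders.
SECRET='my-passphrase'   # not captured in the backup/restore log files
BACKUP_CMD="nzbackup -db sales -dir /backups -secret $SECRET"
RESTORE_CMD="nzrestore -db sales -dir /backups -secret $SECRET"
echo "$BACKUP_CMD"
echo "$RESTORE_CMD"
```

If the strings differ, the restore cannot decrypt the custom host key, so treat the secret like any other credential and store it securely.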
pletes. The command typically takes only a few minutes to complete. It is recommended
that you run database and host backups during a time when the Netezza system is least
busy with queries and users.
Note: It is very important to keep the host backups synchronized with the current database and database backups. After you change the catalog information, such as by adding new user accounts, adding new objects such as synonyms or tables, altering objects, dropping objects, truncating tables, or grooming tables, you should use the nzhostbackup command to capture the latest catalog information. You should also update your database backups.
An example follows:
nzhostbackup /backups/nzhost_latest.tar.gz
Starting host backup. System state is 'online'.
Pausing the system ...
Checkpointing host catalog ...
Archiving system catalog ...
Resuming the system ...
Host backup completed successfully. System state is 'online'.
For more information about the nzhostbackup command and its options, see nzhostbackup on page A-22.
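One way to keep host backups current is to run nzhostbackup on a schedule with a dated archive name; the helper, path, and cron schedule below are assumptions, not defaults:

```shell
#!/bin/sh
# Sketch: build a dated archive name for a scheduled host backup.
# /backups is a placeholder location.
host_backup_name() {
    day="$1"   # e.g. 20120319; in cron you might use $(date +%Y%m%d)
    echo "/backups/nzhost_${day}.tar.gz"
}

# A crontab entry could then run (illustrative, 02:30 nightly):
#   30 2 * * * nzhostbackup "$(host_backup_name "$(date +%Y%m%d)")"
host_backup_name 20120319
```

Dated names keep each host backup alongside the database backups taken in the same window, which makes it easier to restore a matched pair.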
EST 2009.
This operation cannot be undone. Ok to proceed? (y/n) [n] y
Installing system catalog to '/nz/data.1.0' ...
Synchronizing data on spus ...
done.
-db database Backs up the specified database and all its objects, as well as the users, groups, and permissions referenced by those objects. If you specify this option, you cannot specify -globals. For more information, see Backing Up and Restoring Users, Groups, and Permissions on page 10-20. Default: the value of NZ_DATABASE. Example: -db ttdev
-pw password Specifies the user's password. Default: the value of NZ_PASSWORD. Example: -pw XXXXXX
NZ_USER Same as -u
NZ_PASSWORD Same as -pw
Reporting Errors
The nzbackup command writes errors to the log file /nz/kit/log/backupsvr/backupsvr.pid.date.log. For more information about the log files, see System Logs on page 6-12.
nzbackup Examples
Several examples of the nzbackup command follow:
To back up the contents of the database db1 to disk in the /home/user/backups directory, enter:
nzbackup -dir /home/user/backups -u user -pw password -db db1
The nzbackup command saves the database schema, data, and access permissions for
all the objects and user data in the database. Sample output follows:
Backup of database db1 to backupset 20120319201321 completed
successfully.
You can use the -v (verbose) command option to display more detail about the backup:
[Backup Server] : Starting the backup process
[Backup Server] : Backing up to base directory '/home/user/backups'
[Backup Server] : Backing up libraries
[Backup Server] : Backing up functions
[Backup Server] : Backing up aggregates
[Backup Server] : Transferring external code files
[Backup Server] : Start retrieving the schema
[Backup Server] : Backing up metadata to /home/user/backups/
Netezza/hostid/DB1/20120319201402/1/FULL
[Backup Server] : Retrieving host key information
[Backup Server] : Retrieving user information
[Backup Server] : Backing up sequences
[Backup Server] : Backing up table schema.
[Backup Server] : Backing up External Tables.
[Backup Server] : Backing up External table settings.
[Backup Server] : Backing up External table zone settings.
[Backup Server] : Backing up Table Constraints
[Backup Server] : Backing up synonyms
[Backup Server] : Backing up stored procedures
[Backup Server] : Backing up materialized views
[Backup Server] : Backing up view definitions.
[Backup Server] : Retrieving group information
[Backup Server] : Retrieving group members
[Backup Server] : Backing up ACL information
[Backup Server] : Start retrieving the data.
[Backup Server] : Backing up table AAA
[Backup Server] : Backing up table BBB
[Backup Server] : Backing up table sales %
[Backup Server] : Operation committed
Backup of database db1 to backupset 20120319201402 completed
successfully.
To back up the contents of the database db2 to filesystem locations in the /export/backups1 and /export/backups2 directories, enter:
nzbackup -dir /export/backups1 /export/backups2 -u user -pw password -db db2
The nzbackup command saves the database schema, data, and access permissions for all the objects and user data in the database. The database is saved in the two specified filesystem locations.
To back up only the schema of the database db1 to disk in the /home/user/backups
directory, enter:
nzbackup -dir /home/user/backups -schema-only -u user -pw password
-db db1
The nzbackup command saves the schema (that is, the definition of the objects in the database and any access permissions defined in the database) to a file. An example follows (also using the -v option):
[Backup Server] : Starting the backup process
[Backup Server] : Backing up to base directory '/home/user/backups'
[Backup Server] : Backing up libraries
[Backup Server] : Backing up functions
[Backup Server] : Backing up aggregates
[Backup Server] : Transferring external code files
[Backup Server] : Backing up to /home/user/backups/Netezza/hostid/
DB1/20120319202016/1/SCHEMA/md
[Backup Server] : Retrieving host key information
[Backup Server] : Retrieving user information
[Backup Server] : Backing up sequences
[Backup Server] : Backing up table schema.
[Backup Server] : Backing up External Tables.
[Backup Server] : Backing up External table settings.
[Backup Server] : Backing up External table zone settings.
[Backup Server] : Backing up Table Constraints
[Backup Server] : Backing up synonyms
[Backup Server] : Backing up stored procedures
[Backup Server] : Backing up materialized views
[Backup Server] : Backing up view definitions.
[Backup Server] : Retrieving group information
[Backup Server] : Retrieving group members
[Backup Server] : Backing up ACL information
[Backup Server] : Operation committed
Backup of schema for database db1 completed successfully.
To back up the global objects in the /home/user/backups directory, enter:
nzbackup -dir /home/user/backups -globals -u user -pw password
The nzbackup command saves the users, groups, global permissions, and the security categories, cohorts, and levels for multi-level security. Note that it does not capture user privileges granted in specific databases; those permissions are captured in database backups.
[Backup Server] : Starting the backup process
[Backup Server] : Backing up to base directory '/home/user/backups'
[Backup Server] : Backing up security metadata
[Backup Server] : Start retrieving the schema
[Backup Server] : Backing up metadata to /export/home/nz/backups/
Netezza/hostid/SYSTEM/20120319202355/1/USERS/md
[Backup Server] : Retrieving host key information
[Backup Server] : Retrieving user information
[Backup Server] : Retrieving group information
[Backup Server] : Retrieving group members
If you move the backup archives from one storage location to another, you must maintain
the directory structure. If you want to be able to perform an automated restore, all the
backup increments must be accessible.
Incremental Backups
Incremental backups are database backups that save only the data that has changed since the last backup. Because the system copies a small subset of the data, incremental backups require less time to complete than full backups. They allow you to keep your backups current while reducing the frequency of time-consuming full backups.
Netezza supports two types of incremental backups: differential and cumulative.
- Differential: Includes all the changes made to the database since the previous backup (full, differential, or cumulative).
- Cumulative: Includes all the changes made to the database since the last full backup. Cumulative backups incorporate and replace any differential backups performed since the last full backup. Use cumulative backups to consolidate differential backups so that, if you need to restore data, the restoration requires fewer steps and less media.
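Assuming the -differential and -cumulative nzbackup options correspond to these backup types, the three invocations in a backup set can be sketched as follows; the database name and directory are placeholders:

```shell
#!/bin/sh
# Sketch: build the three nzbackup invocations used within one backup set.
# Assumes the -differential and -cumulative options; db1 and /backups are
# placeholders.
backup_cmd() {
    kind="$1"   # full | differential | cumulative
    case "$kind" in
        full)         echo "nzbackup -db db1 -dir /backups" ;;
        differential) echo "nzbackup -db db1 -dir /backups -differential" ;;
        cumulative)   echo "nzbackup -db db1 -dir /backups -cumulative" ;;
        *)            return 1 ;;
    esac
}
```

A common rotation is a weekly full backup, nightly differentials, and a mid-week cumulative to consolidate them, though the right cadence depends on your change rate and restore-time goals.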
Figure 10-1 shows sample backups, beginning with a full backup, then a series of differen-
tial and cumulative backups.
The backups in Figure 10-1 comprise a backup set, which is a collection of backups written to a single location, consisting of one full backup and any number of incremental backups.
Note: After you use the nzhostrestore command, you cannot perform an incremental backup of a database; you must run a full backup first.
using the NzAdmin tool, or the Web Admin interface. This section describes how to use the
nzbackup command; for details on the interfaces, refer to the online help for NzAdmin and
Web Admin.
Your Netezza user account must have appropriate permissions to view the backup history for databases:
- If you are the admin user, you can view all entries in the backup history list.
- If you are not the admin user, you can view entries if you are the database owner, or if you have backup or restore privileges for the database.
The following is the syntax to display the backup history for a database:
nzbackup -history -db name
Database Backupset      Seq # OpType Status    Date                Log File
-------- -------------- ----- ------ --------- ------------------- -----------------------------
SQLEXT   20090109155818 1     FULL   COMPLETED 2009-01-09 10:58:18 backupsvr.9598.2009-01-09.log
Note: You can further refine your results by using the -db and -connector options, or use
the -v option for additional information. You use the -db option to see only the history of a
specified database.
Column Description
Backup Set The unique value that identifies the backup set.
Seq.No The sequence number that identifies the increment within the backup set.
not associated with particular databases. The system does not back up permissions that are defined in specific databases. Those permissions are saved in the regular database backups for those databases.
For example, suppose you have four users (user1 to user4) and you grant them the follow-
ing permissions:
nzsql
SYSTEM(ADMIN)=> GRANT CREATE TABLE TO user1;
SYSTEM(ADMIN)=> \c db_product
DB_PRODUCT(ADMIN)=> GRANT CREATE TABLE TO user2;
DB_PRODUCT(ADMIN)=> GRANT LIST ON TABLE TO user3;
DB_PRODUCT(ADMIN)=> GRANT LIST ON emp TO user4;
User1 has global Create Table permission, which allows table creation in all databases on the Netezza system. User2 has Create Table permission and user3 has List permission for tables in the db_product database. User4 has List permission only on the emp table in the db_product database.
Table 10-7 describes the results when you invoke the nzbackup and nzrestore commands
using different options.
- A regular backup of the db_product database does not include user1 or the CREATE TABLE grant to user1, because those privileges are defined in the system database (the system catalog).
- A -globals backup and restore includes all users (in this case, user1 through user4), but it includes only the Create Table permission for user1, which is also defined in the system database. The -globals backup and restore does not include the privileges defined specifically in the db_product database.
- A -globals backup and restore does not include the admin user or the public group.
Using the nzrestore -globals command allows you to restore users, groups, and permissions. The restoration of users and groups is nondestructive; that is, the system creates users and groups only if they do not already exist, and it does not drop existing users or groups. Permission restoration is also nondestructive; the system only grants permissions, and it does not revoke any.
Note: Keep in mind that restoring data and users from a backup reverts your system to a point in the past. Your user community and their access rights may have changed since then, or, if you are restoring to a new system, a very stale backup may not reflect your current user community. After you make any significant user community changes, it is strongly recommended that you back up the latest changes. After restoring from a backup, check that the resulting users, groups, and permissions match your current community permissions.
If you need to grant a user permission to restore a specific database (versus global restore
permissions), you can create an empty database and grant the user privilege for that data-
base. The user will then be able to restore that database.
You can pass parameters to the nzrestore command directly on the command line, or you
can set parameters as part of your environment. For example, you can set the NZ_USER or
NZ_PASSWORD environment variables instead of specifying -u or -pw on the command
line.
When you do a full restore into a database, the nzrestore command performs the following
actions:
1. Verifies the user name given for backup and restore privileges.
2. Checks to see if the database already exists.
3. Recreates the same schema on the new database, including all objects such as tables,
views, sequences, synonyms, and so on.
4. Applies any access privileges to the database and its objects as stored in the backup. If
necessary, the command creates any users or groups which might not currently exist on
the system to apply the privileges as saved in the database backup. The command also
revokes any current user or group privileges to match the privileges that were saved at
the time of the backup.
5. Restores the data.
If you are performing a table-level restore and the table exists in the database, the nzrestore command drops and recreates the table if you specify -droptables. If you do not specify -droptables, the restore fails.
The nzrestore -schema-only command does not restore the /nz/data directory; instead, it
creates a new database or populates an empty database with the database schema from the
backed-up database. The command creates the objects in the database, such as the tables,
synonyms, sequences, views, and so on, and applies any access permissions as defined in
the database. It does not restore data to the user tables in the database; the restored tables
are empty.
Note: In rare cases, a large number of schema objects could cause a restore to fail, with the system indicating a memory limitation. In such cases, you may need to adjust how you restore your database. For example, if you attempt to restore a database that includes a very large number of columns (such as 520,000), you would likely receive an error message that indicates a memory limitation. (The memory limitation error could result from a large number of columns or other schema objects.) You would likely need to perform a schema-only restore followed by two or more table-level restore operations.
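In such a case, one approach is to split a large table list into batch files and run a table-level restore (nzrestore -tablefile) per batch; the helper name and batch size below are illustrative:

```shell
#!/bin/sh
# Sketch: split a large table-list file into batch files, each suitable for
# one table-level restore via nzrestore -tablefile. Names are illustrative.
split_table_list() {
    tablefile="$1"   # one table name per line
    batch="$2"       # tables per restore operation
    split -l "$batch" "$tablefile" "${tablefile}.part."
    # Each resulting batch could then be restored with (illustrative):
    #   nzrestore -db dev -dir /backups -tablefile "${tablefile}.part.aa" -droptables
}
```

Splitting the work keeps each restore transaction's schema footprint small enough to avoid the memory limitation described above.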
Argument Description
-v[erbose] Specifies verbose mode, which lists the objects being restored.
-db database Restores the specified database and all its objects as well as the
users, groups, and permissions referenced by those objects.
If you specify this option, you cannot specify -globals. For more infor-
mation, see Backing Up and Restoring Users, Groups, and
Permissions on page 10-20.
-dir directory list For restores from a filesystem, specifies the pathnames of the backup directories where the schema and data files are stored. You can specify full pathnames, or the backup root directories. (For example, if you used the root directory /usr/backups when you created the backup, specify /usr/backups when you restore from that backup.)
If you saved the backup to multiple filesystem locations, specify the
roots of all the locations in this argument. For example, if a backup
was written to /home/backup1, /home/backup2, and /home/backup3,
you can restore the data in a single operation by specifying all three
locations.
-dirfile Specifies a file with a list of the backup source directories, one per
line.
-connector conname Names the connector to use for the operation. Valid values are:
filesystem
tsm
netbackup
networker
The system discovers the backup software based on the connector name that you specify. If you have multiple versions of a backup connector installed (for example, TSM 5 and TSM 6), you can select a specific version by using one of these values instead of the generic values above:
tsm5
tsm6
netbackup6
netbackup7
networker7
-sourcedb dbname Specifies the backup set database, overriding the default.
Note: nzrestore restores the most recent backup of -db unless you
specify a different backup set with the option -sourcedb.
-npshost host By default, restores look for backup sets that were created by the local Netezza host. If you use nzrestore to migrate databases, schemas, or user backups made on a different Netezza host, use this option to specify the host that created the backup set.
-tables table_list Restores the table or tables as specified in the table_list argument,
which is a space-separated list of tables.
-tablefile filename Restores the tables listed in the table file, which is a file that contains a list of tables, one table per line.
-droptables Drops the tables in the table list before restoring for a table-level
restore.
-increment [ID | NEXT | REST] If you specify an increment ID, the command performs a partial restore up to the user-specified increment number. After you perform a partial restore, you can specify NEXT to restore the next increment from the backup set. If you specify REST, the command restores the remaining increments from the backup set.
-unlockdb Unlocks the database without performing another restore. This option is useful in cases where a restore is aborted or fails, because the target database could remain locked. Use this option to unlock the database.
-globals Restores the users, groups, and global permissions, as well as multi-level security information such as categories, cohorts, and levels. For more information, see Backing Up and Restoring Users, Groups, and Permissions on page 10-20. You cannot specify -db when you specify -globals.
The creation of the users, groups, and global permissions is nondestructive; that is, if a user or group exists, the system does not overwrite it. If you specify verbose mode, the nzrestore command displays at least one user or group creation error message and tells you to view the restore log for details.
As a best practice, when transferring data to a new machine, use nzrestore -globals first to ensure that the users, groups, permissions, and security information are present before you restore the data.
-u username Specifies the user name for connecting to the database to perform
the restore.
-schema-only Restores only the database schema (the definitions of objects and
access permissions), but not the data in the restored tables.
-contents Lists the name and type of each database object in a backup archive.
Note: For file system backup locations, you must also specify -dir for
the location of the backup archive and -db for a specific database.
-disableGroom Disables the automatic groom of versioned tables at the end of the
restore operation.
-disableSecurityCheck For nzrestore -db operations, the command confirms that the target system has all the security metadata in the backup set. The target must have a compatible MLS model with the levels, categories, and cohorts defined in the backup set. In some instances, the backup set could include older, unused metadata that is not present in the target database; by default, nzrestore -db fails in this case. You can use this switch to bypass the overall metadata check, but if the backup set has data that includes a label which is not in the target system, the restore fails and is rolled back.
-enableSecurityCheck Checks but does not restore any security metadata in the backup set.
-disableSecurityRestore When using nzrestore -globals, this option ignores (does not restore) security metadata if the backup set contains any.
-enableSecurityRestore For nzrestore -db operations, restores the security metadata in the backup set to the target system.
-extract [file] Extracts the specified file from the specified backup set. If you do
not specify a file, the option lists all the files in the backup set.
Note that with the -extract option, the restore command does not
restore the specified backupset or files. The -extract option causes
the command to skip the restore operation and output the requested
file or list.
-extractTo path Specifies the name of a file or a directory where you want to save the extracted output. If you do not specify a path, the -extract option saves the file in the current directory where you ran the nzrestore command.
-secret value Specifies a string value needed to generate a 256-bit symmetric key, which is used to decrypt the host key in the data.
NZ_USER Same as -u.
Reporting Errors
The nzrestore command writes errors to the /nz/kit/log/restoresvr/restoresvr.pid.date.log file.
For more information about the log files, see System Logs on page 6-12.
For example, to allow a user to restore all databases, perform the following steps:
1. Invoke nzsql and connect to the system database:
nzsql system;
nzrestore Examples
Several examples of the nzrestore command follow.
To restore the database db1 from the /home/user/backups directory:
nzrestore -db db1 -u user -pw password -dir /home/user/backups -v
An example of the command output follows:
[Restore Server] : Starting the restore process
[Restore Server] : Reading schema from /home/user/backups/Netezza/
hostid/DB1/20090116125619/1/FULL
[Restore Server] : Restoring schema
[Restore Server] : Start restoring the data, compressed format.
[Restore Server] : Restoring data from /home/user/backups/Netezza/
hostid/DB1/20090116125619/1/FULL
[Restore Server] : Restoring AAA
[Restore Server] : Restoring BBB
[Restore Server] : Restoring sales %
[Restore Server] : Restoring views, users, groups, permissions
Restore of increment 1 from backupset 20090116125619 to database
'DB1' committed.
To restore only the schema (objects and user permissions, but not the table and view data) of db1 to a new, empty database named new_db1:
nzrestore -db new_db1 -sourcedb db1 -schema-only -u user -pw
password -dir /home/user/backups
To restore the users, groups, and privileges in the system catalog:
nzrestore -globals -u user -pw password -dir /home/user/backups
This command restores the users, groups, and global privileges as defined in the system catalog. Note that if a user or group currently exists in the system catalog, the command grants any additional privileges as defined in the backup. The command does not revoke any current privileges that are not also defined in the backup.
To list all of the objects such as tables, synonyms, user-defined objects, and others that
were saved in a database backup, use the nzrestore -contents command as follows:
nzrestore -contents -db dev -dir /net/backupsvr/nzbackups
Database: TPCH_TEST
List of relations
Oid | Type | Name
-----------+----------------+-----------------------------
210854 | SEQUENCE | MY_SEQ
203248 | TABLE | NATION
203264 | TABLE | REGION
203278 | TABLE | PART
203304 | TABLE | SUPPLIER
203326 | TABLE | PARTSUPP
Restoring Tables
You can use the nzrestore command to identify specific tables in an existing backup
archive and restore only those tables to the target database.
As in a standard restore, by default the system restores the table's schema and data. To suppress restoration of the data, use the -schema-only option.
Keep in mind the following:
You can specify the nzrestore command line options in any order.
If your table names contain spaces, enclose the names in double quotes.
If your table names begin with dashes (-), you can restore them by listing them in a sin-
gle file and using the -tablefile option.
You can restore to a different target database than the original backup (use the
-sourcedb option to find the backup).
If the target database does not exist, the system creates it.
If a table with the same name as the table you are restoring already exists in the
database, both copies of the table exist until the transaction is complete. If there is
not enough disk space for both versions of the table, you must manually drop the
existing table before running the table-level restore.
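A hedged sketch of the table-level syntax follows; the table names, credentials, and backup path are placeholders, not values from this guide. The command string is built and echoed for preview; run the command itself on a real host.

```shell
# Hypothetical sketch: restore only the NATION and REGION tables from
# a file-system backup of db1 into database dev. All names and paths
# are placeholders.
cmd="nzrestore -db dev -sourcedb db1 -tables NATION REGION -u user -pw password -dir /home/user/backups"
echo "$cmd"
```

If the target database dev does not exist, the system creates it, as noted above.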
Managing Transactions
As with a full-database restore, the system is available to all users during a table-level
restoration. The majority of a table-level restoration occurs within a single transaction.
The system handles other concurrent operations on the same table in the following manner:
If a concurrent operation begins before the restore has dropped the table, it succeeds.
If the concurrent operation begins after the restore table drop, the system suspends the
concurrent operation until the restore operation is either committed or rolled back.
If the restore transaction is committed, the concurrent operation fails and the
system displays an error.
If the restore transaction is rolled back, the concurrent operation succeeds against
the original table.
If a concurrent non-read-only transaction locks the same table, the system suspends
the restore operation.
If you abort the table-level restore, the system returns the database to its original state.
Column        Description
Backup Set    The unique value that identifies the backup set.
Seq.No        The sequence number that identifies the increment within the backup set.
Note: Use the -incrementlist option to view a report listing all full and incremental
backups.
Up-to-x Restore
An up-to-x restore restores a database from a full backup and then up to the
specified increment. You can follow the up-to-x restore with a step-by-step restore.
Note: Issue the -incrementlist option to view a report listing increment numbers.
For example, the following command restores the full backup of database dev and then up
to increment 4.
nzrestore -db dev -connector netbackup -increment 4
For example, the following commands restore the full backup and then up to a specific
increment of the database dev, and then step through the subsequent increments.
nzrestore -db dev -connector netbackup -increment 4 -lockdb true
nzrestore -db dev -connector netbackup -increment Next -lockdb true
nzrestore -db dev -connector netbackup -increment Next -lockdb false
Note: To begin with the first increment when the database does not yet exist, specify the
option -increment 1. You can then step through the increments by specifying -increment
Next.
Remainder Restore
A remainder restore restores all the remaining increments from a backup set that have
not yet been restored. For example, after you restore to an increment ID (and possibly
some step restores), the following command restores any remaining increments in the
backup set.
nzrestore -db dev -connector netbackup -increment REST
Column        Description
Backup Set    The unique value that identifies the backup set.
Seq.No        The sequence number that identifies the increment within the backup set.
OpType        The type of restore (for example, users, full, Incr:upto, Incr:next, Incr:rest).
Backup Selections (Selections List)    The Netezza host-resident script file that launches
the appropriate Netezza database backup. Specify the full path on the Netezza host.
2. In your /nz/data/config directory, open the file backupHostname.txt using any text
editor and edit the file as follows:
If your system is an HA system, replace the HOSTNAME value with the ODBC
name you obtained in the previous step.
If your system is a non-HA machine, replace the HOSTNAME value with the
external DNS name.
3. Install the Symantec NetBackup Client Software onto the Netezza host.
To check the version and release date of the NetBackup software, view the following
file:
/usr/openv/netbackup/bin/version
4. Edit the following file using any text editor:
/usr/openv/netbackup/bp.conf
The file should include the variables CLIENT_CONNECT_TIMEOUT and
CLIENT_READ_TIMEOUT. Set both to the value 18000. Add the variables if they are
not in the file:
CLIENT_CONNECT_TIMEOUT = 18000
CLIENT_READ_TIMEOUT = 18000
Note: If a database restore fails with the error Connector exited with error: 'ERROR:
NetBackup getObject() failed with errorcode (-1): Server Status: Communication with
the server has not been initiated or the server status has not been retrieved from the
server', the problem could be that the CLIENT_READ_TIMEOUT set on the NetBackup
server expired before the restore finished. This can occur when you restore a database
that contains many tables with small changes, such as frequent incremental backups,
or a database that contains many objects such as UDXs, views, or tables. If your
restore fails with this error, you can increase the CLIENT_READ_TIMEOUT value on the
NetBackup server, or you can avoid the problem by specifying certain options when you
create the database backup. For example, you can specify a multi-stream backup using
the nzbackup -streams num option, reduce the number of files committed in a single
transaction using the nzbackup -connectorArgs "NBC_COMMIT_OBJECT_COUNT=n" option, or
do both. This error message can appear for other reasons, so if this workaround does
not resolve the issue, contact Netezza Support for assistance.
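The bp.conf edit in step 4 can be scripted. The following is a minimal sketch; it writes to a temporary file by default so it can be tried safely, and only touches the real client configuration when you point BPCONF at it.

```shell
# Sketch: ensure the two timeout settings exist in bp.conf, appending
# them if missing. A temp file stands in here; on a real host, set
# BPCONF=/usr/openv/netbackup/bp.conf before running.
BPCONF="${BPCONF:-$(mktemp)}"
for opt in CLIENT_CONNECT_TIMEOUT CLIENT_READ_TIMEOUT; do
    grep -q "^${opt}" "$BPCONF" || echo "${opt} = 18000" >> "$BPCONF"
done
```

Because the loop checks for an existing setting first, running the script twice does not create duplicate entries.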
5. Make sure that the backups done by one host are visible to another host. If you have a
Netezza HA environment, for example, the backups performed by Host 1 should be
visible to Host 2.
There are many ways that you can make the backups from one host visible to another.
Refer to the Symantec NetBackup Administrator's Guide, Volume I for UNIX and Linux,
and specifically to the chapter on managing client restores. Two possible methods
follow:
You can open access to all hosts by touching the following file on the NetBackup
Master Server.
touch /usr/openv/netbackup/db/altnames/No.Restrictions
Note: If the touch command fails, make sure that the altnames directory exists. If
necessary, create the altnames directory and re-run the command.
You can give Host1 access to all backups created by Host2 and vice versa. To do
this, you need to touch two files:
touch /usr/openv/netbackup/db/altnames/host1
touch /usr/openv/netbackup/db/altnames/host2
For example, if the names of your HA hosts are nps10200-ha1 and nps10200-ha2
then you would create the following files:
touch /usr/openv/netbackup/db/altnames/nps10200-ha1
touch /usr/openv/netbackup/db/altnames/nps10200-ha2
Note: You must use one of the two methods above to open access. If you skip this step,
your restore will not work correctly on an HA system. This also applies to redirected
restores. Refer to Redirecting a Restore on page 10-39.
7. In the Backup Type dialog, select Automatic Backup to enable it, then click Next.
Note: Do not specify values for the full path script. You supply this information in a
later step.
8. In the Rotation dialog, select your time slot rotation for backups and how long to retain
the backups, then click Next.
9. In the Start Window dialog, select the time options for the backup schedule and click
Next.
10. A dialog appears and prompts you to save or cancel the backup policy that you created.
Click Finish to save the backup policy.
2. Double-click the policy that you created in the previous procedure. The Change Policy
dialog appears.
3. In the Change Policy dialog, click the Backup Selections tab.
4. Click New and specify the full path to the backup script that will be invoked by
NetBackup as part of scheduled automatic backups. The full path is the pathname on the
Netezza host. Usually the backup script contains a single command line that invokes
nzbackup for a particular backup operation. You can create the script manually using a
text editor.
For example, the following line in a text file would back up the database named sales
using the Netezza user account joe:
/nz/kit/bin/nzbackup -db sales -connector netbackup -u joe -pw password -connectorArgs "DATASTORE_SERVER=NetBackup_master_server:DATASTORE_POLICY=NetBackup_policy_name"
Note: Rather than specify the -connectorArgs argument, you could set the environment
variables DATASTORE_SERVER and DATASTORE_POLICY. If you set the environment
variables and then use the command line argument -connectorArgs, the command line
argument takes precedence.
If you are concerned about using a clear-text password, you could perform the same
nzbackup as follows:
a. Change user to root.
b. Cache the password for user joe by using the nzpassword command.
c. Invoke nzbackup without the password, as follows.
nzbackup -db sales -connector netbackup -u joe
After you cache the password, you can use nzbackup without the -pw option. You
only need to cache the password one time.
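Steps a through c can be captured in a short script. This is a hedged sketch; the user name, password, and database are placeholders, and the commands are built and echoed for preview rather than executed here. Run them directly, as root, on a real host.

```shell
# Sketch of steps a-c: cache the password once with nzpassword, then
# invoke nzbackup without -pw. User, password, and database names are
# placeholders.
cache_cmd="nzpassword add -u joe -pw password"
backup_cmd="nzbackup -db sales -connector netbackup -u joe"
printf '%s\n%s\n' "$cache_cmd" "$backup_cmd"
```

Once the password is cached, the clear-text password no longer appears in process listings or shell history when backups run.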
Backups initiated using the nzbackup command on the Netezza host use the schedule
of type Application Backup. You can click the Schedules tab and check that the
schedule for allowing backups is set appropriately.
Note: Rather than specify the -connectorArgs argument, you could set the environment
variable DATASTORE_SERVER. If you set the environment variable and then use the
command line argument -connectorArgs, the command line argument takes precedence.
Redirecting a Restore
Typically, you restore a backup to the same Netezza host from which it was created. If you
want to restore a backup created on a different Netezza host:
Configure Symantec NetBackup for a redirected restore. Refer to the Symantec
NetBackup documentation for more information.
Use the -npshost option of the nzrestore command to identify the Netezza host from
which the backup was created. Sample syntax follows:
nzrestore -db dbname -connector netbackup -connectorArgs "DATASTORE_SERVER=NetBackup_master_server" -npshost origin_nps
Troubleshooting
The Activity Monitor in the NetBackup Administration Console shows the status of all
backups and restores. If the monitor shows that a backup or restore failed, you can
double-click the failed entry to obtain more information about the problems that caused
the failure.
7. Transfer the file to NetBackup using the bpbackup utility. An example follows.
bpbackup -p nzhostbackup -w -L /nz/tmp/hostbackup.log /nz/tmp/hostbackup.20070521
Note the following important points for the bpbackup utility and the example:
Specify the explicit path to the bpbackup command if it is not part of your account's
PATH setting. The default location for the utility is /usr/openv/netbackup/bin.
In the sample command, the -L option specifies the log file where the status of the
backup operation is written. You should review the file because the utility does not
return error messages to the console.
The -w option causes the bpbackup utility to run synchronously; it does not return until
the operation has completed.
The -p option specifies the name of the NetBackup policy, which you defined in step 1
on page 10-39.
You can display syntax for the bpbackup utility by running bpbackup without
options.
Note: An alternative to the bpbackup command is the bp interactive NetBackup client
utility. The utility steps you through a backup or a restore.
Performing a Restore
Run the bprestore NetBackup utility to restore the host backup file. An example follows.
bprestore -p nzhostbackup -w -L /nz/tmp/hostrestore.log /nz/tmp/hostbackup.20070521
Note the following important points for the bprestore utility and the example:
Specify the explicit path to the bprestore command if it is not part of your account's
PATH setting. The default location for the utility is /usr/openv/netbackup/bin.
You can display syntax for the bprestore utility by running bprestore without options.
You can also refer to the Symantec manual, Symantec NetBackup Commands Reference
Guide.
Note: An alternative to the bprestore command is the bp interactive NetBackup client
utility. The utility steps you through a backup or a restore.
#!/bin/bash
#
# nzhostbackup.sh - perform backup of host catalog and send it
# to NetBackup.
#
# set up the user (password cached using nzpassword)
export NZ_USER=nzuser
you can use Netezza restore utilities to retrieve and load data from the TSM-managed
backup locations. The Netezza solution has been tested with TSM versions 5.4, 5.5,
6.1, and 6.2.
This document does not provide details on the operation or administration of the TSM
server or its commands. For details on the TSM operation and procedures, refer to your
Tivoli Storage Manager user documentation.
3. In your /nz/data/config directory, open the file backupHostname.txt using any text
editor and edit the file as follows:
If your system is an HA system, replace the HOSTNAME value with the ODBC
name you obtained in the previous step.
If your system is a non-HA machine, replace the HOSTNAME value with the
external DNS name.
mount /media/cdrom
or
mount /media/cdrecorder
If you are not sure which command to use, run the ls /media command to see which
pathname (cdrom or cdrecorder) appears.
4. To change to the mount point, use the cd command and specify the mount pathname
that you used in step 3. This guide uses the term /mountPoint to refer to the applicable
disk mount point location on your system, as used in step 3.
cd /mountPoint
5. Change to the directory where the packages are stored:
cd /mountPoint/tsmcli/linux86
6. Enter the following commands to install the 32-bit TSM ADSM API and the Tivoli
Storage Manager Backup-Archive (BA) client. (The BA client is optional, but it is
recommended because it provides helpful features such as the ability to cache
passwords for TSM access and also to create scheduled commands.)
a. rpm -i TIVsm-API.i386.rpm
b. rpm -i TIVsm-BA.i386.rpm
Make sure that you use the default installation directories for the clients (which are usually
/opt/tivoli/tsm/client/api and /opt/tivoli/tsm/client/ba). After the installation completes,
proceed to the next section to configure the Netezza host as a client.
1. Make sure that you are logged in to the Netezza system as root.
2. Change to the following directory:
cd /opt/tivoli/tsm/client/api/bin
3. Copy the file dsm.opt.smp to dsm.opt. Save the copy in the current directory. For
example:
cp dsm.opt.smp dsm.opt
4. Edit the dsm.opt file using any text editor. In the dsm.opt file, proceed to the end of
the file and add the following line, where server is the hostname of the TSM server in
your environment:
******************************************************************
* IBM Tivoli Storage Manager *
* *
* This file contains an option you can use to specify the TSM
* server to contact if more than one is defined in your client
* system options file (dsm.sys). Copy dsm.opt.smp to dsm.opt.
* If you enter a server name for the option below, remove the
* leading asterisk (*).
******************************************************************
* SErvername A server name defined in the dsm.sys file
SErvername server
If you have multiple TSM servers in your environment, you can add a definition for each
server. However, only one definition should be the active definition. Any additional
definitions should be commented out using the asterisk (*) character. The active dsm.opt
entry determines which TSM server is used by the Tivoli connector for backup/restore
operations. If there are multiple uncommented SERVERNAME entries in dsm.opt, the
first uncommented entry is used.
5. Save and close the dsm.opt file.
6. Copy the file dsm.sys.smp to dsm.sys. Save the copy in the current directory. For
example:
cp dsm.sys.smp dsm.sys
7. Edit the dsm.sys file using any text editor. In the dsm.sys file, proceed to the end of
the file and add the following settings, where server is the name of the TSM server in
your environment, serverIP is the hostname or IP address of the TSM server, and
client_NPS is the node name for the Netezza host client:
******************************************************************
* IBM Tivoli Storage Manager *
* *
* Sample Client System Options file for UNIX (dsm.sys.smp) *
******************************************************************
******************************************************************
SErvername server
COMMMethod TCPip
TCPPort 1500
TCPServeraddress serverIp
NODENAME client_NPS
As a best practice, for the nodename value, use the naming convention client_NPS,
where client is the hostname of the Netezza host, to help uniquely identify the client
node for the Netezza host system.
If you have multiple TSM servers in your environment, you can create another set of
these definitions and append each set to the file. For example:
SErvername server1
COMMMethod TCPip
TCPPort 1500
TCPServeraddress server1Ip
NODENAME client_NPS
SErvername server2
COMMMethod TCPip
TCPPort 1500
TCPServeraddress server2Ip
NODENAME client_NPS
Note: If you specify more than one TSM server definition in the dsm.sys file, you can
create corresponding definitions in the dsm.opt file as described in step 4.
8. If you installed the Tivoli 5.4 client software on your hosts, you must also add the
following options in the dsm.sys file.
ENCRYPTIONTYPE DES56
PASSWORDACCESS prompt
Verify that there are no other uncommented lines for the ENCRYPTIONTYPE and
PASSWORDACCESS options.
Note: The PASSWORDACCESS prompt option disables automatic, passwordless TSM
authentication. Each operation using the Tivoli connector requires you to enter a
password. You can supply the password in the nzbackup and nzrestore connectorArgs
option as "TSM_PASSWORD=password" or you can set TSM_PASSWORD as an
environment variable.
Managing Tivoli Transaction Sizes
For Tivoli configurations, the TXNGROUPMAX option specifies the number of objects
that can be transferred between a client and the server in one backup transaction. The
maximum value is 65000. The default value is 256 objects. If the value is too low, your
backups could fail with a start a new transaction session error.
If you encounter this error, review and increase the TXNGROUPMAX setting to a value
that is larger than the maximum number of objects that a single backup operation will
try to create. For example, if you are performing incremental backups, use a value that
is at least twice the table count. Also add a small number (5) of additional objects for
backup metadata files. If your database has UDXs, add 2 objects for each UDX. If you
are using multi-stream backups, use the larger of double the UDX count or double the
table count divided by the stream count, and add the 5 metadata objects.
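The single-stream sizing arithmetic above can be sketched as follows, using assumed table and UDX counts rather than values from a real system:

```shell
# Sketch of the single-stream incremental sizing rule: twice the
# table count, plus 2 per UDX, plus 5 metadata objects, capped at
# the TSM maximum of 65000. Counts below are assumptions.
tables=1000
udxs=10
txn=$(( 2 * tables + 2 * udxs + 5 ))
if [ "$txn" -gt 65000 ]; then txn=65000; fi
echo "suggested TXNGROUPMAX: $txn"   # prints "suggested TXNGROUPMAX: 2025"
```

Substitute the object counts for your own database; for multi-stream backups, apply the multi-stream rule described above instead.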
To set the TXNGROUPMAX value using the GUI, go to Policy Domains and Client Nodes
- <Your client node> - Advanced Settings - Maximum size of a transaction. The options
are "Use server default" or "Specify a number (4-65,000)". Be sure to repeat this process
on each node (Host 1 and Host 2), and to use the same setting for each node. If you
choose "Specify a number", note that the setting cannot be changed from the client. If
you chose "Use server default", you can specify the value from the clients using the
dsmadmc application: 'setopt txngroupmax <value>' or 'update node txngroupmax=<value>'.
Note: If you have an HA Netezza system, make sure that you repeat these steps on Host 1
and on Host 2.
After the client authentication is successful, subsequent logins will not prompt for a
password until the password changes at the TSM server.
Note: The following procedures describe in general how to use the UIs. The commands
and menus could change with updates or patches to the backup software; these
procedures are intended as a general overview.
3. Select the TSM server from which you will be managing your Netezza systems, and
then select View Storage Pools from the Select Action list. The Storage Pools for server
area appears at the bottom of the page.
4. In the Storage Pools section, select Create a Storage Pool from the Select Action list.
The Create a Storage Pool area appears.
5. Type a name for the storage pool and make sure that the storage pool type is Random
access, then click Next.
6. If you are creating a new pool, select Create a new disk volume and enter a volume
name and size. The volume name should be an absolute pathname; for example, if you
want to create a volume named vol under the /home/backups directory, type
/home/backups/vol. Then click Next.
7. A Summary window appears to display messages about the successful creation of the
storage pool and its information. Click Finish.
6. Select the proxy node and then select Modify Client Node from the Select Action
drop-down list. The client Properties area appears.
7. Select the Proxy Authority tab on the left, and then select Grant Proxy Authority from
the Select Action drop-down list on the right. The Grant Proxy Authority area appears.
8. Select the client node that represents the Netezza host. If the Netezza system is an HA
system, select both client nodes.
9. Click OK to complete the proxy assignment.
Redirecting a Restore
Typically, you restore a backup to the same Netezza host from which it was created. If you
want to restore a backup that was created on one Netezza host to a different Netezza
host, you must adjust the proxy settings.
For example, assume that you have a Netezza host named NPSA, for which you have
defined a client node named NPSA_NPS and a proxy node named NPSA on the TSM
server. Assume also that there is a backup file for the NPSA host on the TSM server.
If you wish to load the backup file onto a different Netezza host named NPSB, you must
first ensure that NPSB has been registered as a client to the TSM server. Assume that
there is a client node named NPSB_NPS and a proxy node named NPSB for this second host.
To redirect the restore file from NPSA to NPSB, you must grant the client node NPSB_NPS
proxy authority over the proxy node NPSA. After you grant the proxy authority to
NPSB_NPS, you should be able to restore the backup for NPSA to the NPSB host using a
command similar to the following:
nzrestore -db database -connector tivoli -npshost NPSA
The value database is the name of the database that was backed up from the Netezza
host NPSA.
The server does not have enough recovery log space to continue the
current operation
The server does not have enough database space to continue the current
operation
There are some configuration settings changes that can help to avoid these errors and
complete the backups for large databases. It is important to note that these configuration
settings depend upon factors such as network speed, TSM server load, and network load.
The values below are conservative estimates based on testing, but the values for your
environment could be different. As a best practice, if you encounter errors such as
timeouts and space limitations, try these conservative values and adjust them to find the
right balance for your server and environment.
For example:
COMMTIMEOUT    Specifies the time in seconds that the TSM server waits for an
expected client response. The default is 60 seconds. You can obtain the current value
of the setting using the QUERY OPTION COMMTIMEOUT command. For large
databases, consider increasing the value to 3600, 5400, or 7200 seconds to avoid
timeout errors, which could occur if the complete transfer of a database does not
finish within the time limit:
SETOPT COMMTIMEOUT 3600
IDLETIMEOUT    Specifies the time in minutes that a client session can be idle before
the TSM server cancels the session. The default is 15 minutes. You can obtain the
current value of the setting using the QUERY OPTION IDLETIMEOUT command. For
large databases, consider setting the value to 60 minutes as follows:
SETOPT IDLETIMEOUT 60
The default size of the TSM server database, 16MB, may be inadequate for large
Netezza databases. Depending upon the size of your largest Netezza database, you can
increase the default TSM database size to a value such as 500MB.
The size of the recovery log may be inadequate for large Netezza databases or those
that have a large number of objects (tables, UDXs, and so on). An increased value such
as 6GB may be more appropriate. As a best practice, the recovery log should have at
least twice as many gigabytes as your largest table has terabytes; for example, if your
largest table is 2TB, the recovery log should be at least 4GB. In addition, you may need
a larger log file if you run multiple concurrent backup jobs on the same TSM server,
such as several Netezza backups or a combination of Netezza and other backups within
the enterprise.
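The recovery log rule can be sketched as simple arithmetic, using the 2TB example from the text:

```shell
# Sketch of the best-practice rule above: recovery log size in GB
# should be at least twice the largest table size in TB. The 2 TB
# figure matches the example in the text.
largest_table_tb=2
min_log_gb=$(( 2 * largest_table_tb ))
echo "minimum recovery log: ${min_log_gb}GB"   # prints "minimum recovery log: 4GB"
```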
You can also create a database space trigger using the define spacetrigger db command.
For example, the following command creates a trigger that increases the size of the
database by 25% when it reaches 85% of its capacity, with no limit on maximum size:
define spacetrigger db fullpct=85 spaceexpansion=25 maximumsize=0
For example, the following sample command restores a Netezza database using TSM:
nzrestore -db myDb -connector tivoli -connectorArgs "TSM_PASSWORD=password"
archive="/tmp/nzhostbackup.tar.gz"
(
nzhostbackup "${archive}"
echo
echo "Sending host backup archive '${archive}' to TSM server ..."
dsmc archive "${archive}"
)
exit 0
Similarly, you can create a script to retrieve and reload a host backup from the TSM server:
#!/bin/bash
#
# nzrestore_tsm - restore host backup from TSM using nzhostbackup_tsm
(
dsmc retrieve "${archive}"
echo
echo "Archive '${archive}' retrieved, restoring it..."
nzhostrestore "${archive}"
)
fi
exit 0
If you create more than one scheduled operation, note that the TSM scheduler does not
support overlapping schedules for operations; that is, one operation must start and
complete before a new operation is allowed to start. If you create operations with
overlapping schedules, the second operation will likely be skipped (will not start) because
the first operation is still running. Make sure that you allow enough time for the first
operation to complete before a new operation is scheduled to run.
For example, to create a new client schedule:
1. In the left navigation frame of the ISC Console, click Tivoli Storage Manager. A
drop-down list appears.
2. Click Policy Domains and Client Nodes. The Policy Domains page appears in the right
frame.
3. Select the TSM server from which you will be managing your Netezza systems, and
then select View Policy Domains from the Select Action list. The server Policy Domains
area appears.
4. Select the policy domain that you created for your Netezza host (as described in
Creating a Policy Domain on page 10-48) and select Modify Policy Domain from the
Select Action list.
5. Select the Client Node Schedules list to expand it.
6. From the Select Action list, select Create a Schedule. The Create Schedule area
appears.
7. In the create schedule area, do the following:
a. Enter a name for the schedule in the Schedule name field. You can also supply an
optional description for the schedule.
9. In the Select Repetition Options area, select the date and time and the frequency for
the client schedule, then click Next. The Advanced Schedule Options area appears.
Note: The TSM scheduler does not support overlapping schedules for operations; that
is, one operation must start and complete before a new operation is allowed to start.
Make sure that you allow enough time for the first operation to complete before a new
operation is scheduled to run.
10. You can accept the defaults on the Advanced Schedule Options area and click Next.
The Associate Client Nodes area appears.
11. Select the client nodes (one or more Netezza hosts) that you want to associate with this
schedule. Make sure that you select the client node for the Netezza host, not its proxy
node, then click Next. A Summary area appears.
Note: Typically you would select only one client node (that is, one Netezza host) to
perform this operation at one time. However, it is possible to select multiple client nodes
if you want to schedule the operation for multiple hosts to occur at the same time.
12. Review the information in the Summary and click Finish to create the client schedule.
Because the script runs as root on the Netezza host, the Netezza user must be set inside
the script using the NZ_USER variable or specified with the -u user argument. The user's
password must have been cached using the nzpassword utility, set inside the script using
NZ_PASSWORD, or specified using the -pw password argument.
You can use the backup history to check the status of a backup operation. For more
information, see Backup History Report on page 10-19.
Troubleshooting
The following sections describe some common problems and workarounds.
Client-Server Connectivity
You can check the network connections and configuration settings to ensure that the
Netezza host (the client) can connect to the TSM server.
To check connectivity, use the following command:
dsmc query session
The command prompts for the client user password, and after a successful authentication,
it shows the session details.
Session Rejected
An error such as Session rejected: Unknown or incorrect ID entered is probably a result
of one of the following problems:
The Netezza host has not been correctly registered on the TSM server.
The dsm.sys file on the Netezza host is not correct.
You should confirm the information in both configurations and retry the operation.
This error typically occurs when the log file dsierror.log is not writable by the user who
invoked the nzbackup or nzrestore operation. Check the permissions on the file as well
as on the directory where it resides (the backupsvr/restoresvr log directory).
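A quick way to perform this check is sketched below. The log file path is an assumption (the demo uses a temporary file so it runs anywhere); substitute the actual dsierror.log location on your system.

```shell
# Sketch: check whether dsierror.log is writable by the current user.
# A temp file stands in here; on a real host, point LOGFILE at the
# dsierror.log in your backupsvr or restoresvr log directory.
LOGFILE="${LOGFILE:-$(mktemp)}"
if [ -w "$LOGFILE" ]; then
    status="writable"
else
    status="not writable: fix ownership/permissions as root"
fi
echo "$LOGFILE is $status"
```

Remember to check the directory permissions as well as the file, since the connector must also be able to create and rotate the log.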
3. In your /nz/data/config directory, open the file backupHostname.txt using any text
editor and edit the file as follows:
If your system is an HA system, replace the HOSTNAME value with the ODBC
name you obtained in the previous step.
If your system is a non-HA machine, replace the HOSTNAME value with the
external DNS name.
NetWorker Installation
Complete instructions for installing the NetWorker Connector client on the Netezza host are
included in the EMC NetWorker Release Installation Guide. Refer to the section on Linux
installation for the steps to install the NetWorker client on the Netezza host, which uses a
Red Hat operating system. If your Netezza system is an HA system, install the software on
both hosts.
Before installing the NetWorker client, ensure that the NetWorker server components are
installed and configured.
NetWorker Configuration
The following sections describe the basic steps involved in configuring NetWorker server
and client software for Netezza hosts.
Note: In addition to these steps, ensure that appropriate storage devices and media pools
are configured.
Adding the Netezza Host NetWorker Client
Follow these steps to add the Netezza host NetWorker client to the NetWorker server:
1. Open a browser and log into the NMC.
2. Click the Enterprise icon.
3. Choose the applicable server from the list of servers in the left pane.
4. Launch the NetWorker Managed Application from the right pane, which opens a new
window.
5. Click the Configuration icon from the new window.
6. Right-click Clients in the left pane and select New from the pop-up menu, which
opens a new Create Client window.
7. In the Create Client window, type the name of the Netezza host (such as
hostname.company.com) in the Name text box.
8. Select an appropriate browse and retention policy for the client.
9. Confirm that the Scheduled backup checkbox is checked. You will provide further
information on scheduled backups later in the configuration.
10. Check the groups to which you are adding the client. You will be creating additional
groups later in the configuration.
11. From the Globals (1 of 2) tab, set an appropriate value for the Parallelism field. This
parameter controls how many streams the NetWorker client can simultaneously send in
one or more backup operations. See the section Changing Parallelism Settings on
page 10-61 for help on selecting values for this setting.
12. Under the Globals (2 of 2) tab, add an entry of the form user@client in the Remote access list for any other client that is allowed to restore backups created by this client.
For example, to allow a backup created on Netezza host1 (Netezza-HA-1.netezza.com)
to be restored on Netezza host2 (Netezza-HA-2.netezza.com), ensure that the entry
nz@Netezza-HA-2.netezza.com is present in the Remote access list of Netezza host1
(Netezza-HA-1.netezza.com).
13. Click OK to create the Netezza host NetWorker client.
14. If you have a Netezza HA system, you should also define Netezza host2 (Netezza-HA-2.netezza.com) as a client, and also allow its backups to be restored on Netezza host1 (Netezza-HA-1.netezza.com). Return to step 6 on page 10-60 and repeat the instructions to add host2 as a client, and ensure that the entry nz@Netezza-HA-1.netezza.com is present in the Remote access list of Netezza host2 (Netezza-HA-2.netezza.com).
Additionally, if you have more than one Netezza system, you may want to add your
other Netezza systems as clients.
Command Line Backup and Restore After the Netezza host is properly configured for backups and restores with NetWorker, you can invoke nzbackup and nzrestore at any time from the Netezza host.
For example, if the NetWorker server is named server_name.company.com and the database is named test, a sample backup command using the NetWorker Connector is:
/nz/kit/bin/nzbackup -db test -connector networker -connectorArgs "NSR_SERVER=server_name.company.com"
An example of a restore command is:
/nz/kit/bin/nzrestore -db test -connector networker -connectorArgs "NSR_SERVER=server_name.company.com"
Scheduled Backup This section provides the steps necessary to create and configure the backup groups needed to schedule backups. The NetWorker server runs the nzbackup command automatically after you create:
At least one backup group
At least one backup command file
A schedule
A separate command file and associated backup group is required for each scheduled backup operation. The data from the backup operations performed using one specific command file forms a backup group. For example, if you have two databases, DBX and DBY, and you want to schedule weekly full backups plus nightly differential backups for each, you must create four command files, one for each of four backup groups.
Add a Backup Group You must add a backup group, specifically associated with each
nzbackup operation, to the list of groups in the NMC. These steps show how to add a
backup group to a given server:
1. Open a browser and log into the NMC.
2. Click the Enterprise icon.
3. Choose the applicable server from the list of servers in the left pane.
4. Launch the NetWorker Managed Application from the right pane, which opens a new
window.
5. On the Configuration page, right click Groups from the left pane and select New from
the pop-up menu, which opens a new Create Group window.
6. Type a name for the new group (such as nz_db1_daily) in the Name text box. You can
also enter text in the Comment text box.
7. To enable automatic scheduled backups for the group, supply the values for Start time
and Autostart.
8. Click OK to create the group.
Command File For each nzbackup operation, you must create a specific command file that contains the backup command instructions. Logged in as the root user, create the command files under the directory /nsr/res, and name each file [backup_group].res using any text editor. Include content like that in the following example; the content varies depending on the backup operation instructions:
Note: The entire precmd entry, up to the semi-colon, must be on a single line.
type: savepnpc;
precmd: "/nz/kit/bin/nzbackup -u <userid> -pw <password> -db <name_of_database_to_backup> -connector networker -connectorArgs NSR_SERVER=server_name.company.com -v";
pstcmd: "echo bye", "/bin/sleep 5";
timeout: "12:00:00";
abort precmd with group: No;
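A command file like the one above can also be generated from the shell. The following is a minimal sketch; the group name nz_db1_daily, the user ID, password, database name, and server name are all placeholder values, not values from this document:

```shell
# Sketch: generate a savepnpc command file for a backup group. NetWorker
# expects the file under /nsr/res, named [backup_group].res; the directory
# is passed in here so the function can be exercised elsewhere.
write_backup_res() {
  # $1 = target directory (normally /nsr/res), $2 = backup group name
  cat > "$1/$2.res" <<'EOF'
type: savepnpc;
precmd: "/nz/kit/bin/nzbackup -u admin -pw password -db db1 -connector networker -connectorArgs NSR_SERVER=server_name.company.com -v";
pstcmd: "echo bye", "/bin/sleep 5";
timeout: "12:00:00";
abort precmd with group: No;
EOF
}
# Example (as root): write_backup_res /nsr/res nz_db1_daily
```

Because the heredoc is quoted, the precmd line is written verbatim on a single line, as the note above requires.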
Schedule Backups To enable scheduled Netezza backup operations for a Netezza host:
1. Open a browser and log into the NMC.
2. Click the Enterprise icon.
3. Choose the applicable server from the list of servers in the left pane.
4. Launch the NetWorker Managed Application from the right pane, which opens a new
window.
5. From the Configuration page, select Clients in the left pane, which populates the right
pane with a list of clients.
6. Right click the applicable client and select Properties from the pop-up menu, which
opens a new Properties window.
7. Ensure that the Scheduled backup check box is checked.
8. In the Group section, only the group for this backup operation should be checked (such
as nz_db1_daily).
9. Select the schedule from the Schedule drop-down.
10. On the Apps and Modules tab, type savepnpc in the Backup command text box.
11. Click OK to create the scheduled backup.
Redirecting an nzrestore To perform restore operations from one Netezza host to another
(that is, to restore a backupset created by one Netezza host to a different Netezza host),
Remote access must be configured as described in step 12 of Adding the Netezza Host
NetWorker Client. By default, NetWorker server does not allow a client access to objects
created by other clients.
To restore a backupset onto host2 that was created on host1, log into host2 and run the following command:
/nz/kit/bin/nzrestore -db database -npshost host1 -connector networker
The database value is the name of the database which was backed up from the Netezza
host host1.
NetWorker Troubleshooting
This section contains troubleshooting tips to solve common problems.
Basic Connectivity
For problems with basic connectivity, first check that the server and client are correctly set up and configured. Also confirm that the clocks on both the server and client are synchronized to within a few seconds.
Use the save and recover NetWorker commands to back up and restore an ordinary file. If either command fails, the basic configuration is incorrect.
Query history captures details about the user activity on the Netezza system, such as the
queries that are run, query plans, table access, column access, session creation, and failed
authentication requests. The history information is saved in a history database. Permitted
users can review the query history information to obtain details about the users and activity
on the Netezza system.
Note: The query history feature replaces any previous query history tools for the Netezza system. The older query history views _v_qryhist and _v_qrystat provided in previous Netezza releases are maintained for backward compatibility, but they will be deprecated in a future release.
IBM Netezza System Administrator's Guide
4. Enable query history collection. The result is a set of captured data files which will be
loaded into the query history database.
5. Enable access privileges for the users who will be reviewing and reporting on the history information.
The following sections describe these tasks in more detail.
Within your organization, you might have one user or several users who are responsible for monitoring query performance, status, and access information. With the Netezza query history feature, these users can obtain the right information to report on these areas of performance and operation.
The history database contains special tables to store the history data, and views which
users can query to display the collected history information. Queries on the history data
should use the views to ensure forward compatibility through the releases. Users can be
granted permissions to create additional user tables and views as needed for querying.
Never change, drop, or modify the default tables and views provided in the history database. Any changes to the default tables and views will cause query history collection to stop working.
There is a latency between the time that the history data is collected and the time when it is loaded into the history database. The history database is updated at periodic load intervals, which you specify in the history configuration. For more information about the loading intervals and impacts, see Configuring the Loader Process on page 11-9.
Before you drop a history database, make sure that the active history configuration is not
configured to load data to it. If you drop the active history database, load operations to that
database will fail.
As a best practice, you should create at least one history configuration to collect the type of
information that interests you, and a configuration to disable history collection.
The following command creates a history configuration named hist_disabled that disables
history collection:
SYSTEM(ADMIN)=> CREATE HISTORY CONFIGURATION hist_disabled HISTTYPE NONE;
For details about the command, see the CREATE HISTORY CONFIGURATION command syntax in the IBM Netezza Database User's Guide. When you create or alter a history configuration with HISTTYPE NONE, the command automatically sets the CONFIG_LEVEL, CONFIG_TARGETTYPE, and CONFIG_COLLECTFILTER parameters to their default values of HIST_LEVEL_NONE, HIST_TARGET_LOCAL, and COLLECT_ALL, respectively.
For example, the following sample command sets the configuration to the all_hist
configuration:
SYSTEM(ADMIN)=> SET HISTORY CONFIGURATION all_hist;
Then, to start collecting history using that configuration, you stop and restart the Netezza
software as follows:
nzstop
nzstart
For details about the command, see the SET HISTORY CONFIGURATION command syntax in the IBM Netezza Database User's Guide.
Based on your organizational model for history reporting and management, you can plan for
the number of Netezza user accounts that require access to the history database. For users
to run reports against the collected history data, the users require List and Select privileges
to the history database. Users may also require Create Table or Create View privileges if
they need to create their own tables and views for history reporting purposes.
As a best practice, if you have several users who require access, consider creating an audit
history user group. You assign the correct privileges to the group, and then add or remove
user members as needed. The group members inherit the permissions of the group. This
can help to reduce the time spent managing privileges for individual user accounts. For
more information about user privileges, see Chapter 8, Establishing Security and Access
Control.
[Figure: Query history data flow. SQL activity submitted through nzsql (SELECT, INSERT INTO, DELETE, DROP, and so on) passes through query data capture (the alcapp process) and is later loaded into the history database by the alcloader process.]
The staging area is located in the $NZ_DATA/hist/staging directory. The staging area can have one or several subdirectories named alc_$TIMESEQUENCE, each of which contains a batch of one or more captured history data files. The captured history data files are saved as external tables in text format. Each alc* directory also has a CONFIG-INFO file that identifies the history configuration which was active when the files were created.
The captured files in the staging area are transferred to the loading area based on the configuration load settings. From the loading area, the alcloader process (the loader) loads the external tables into the query history database. The loading frequency, the target history database, and the user account used to access that database are specified in the current configuration settings.
The loading area is located in the $NZ_DATA/hist/loading directory, and it contains a subdirectory named alc_$TIMESEQUENCE for the batch of history files that it is loading (the load in progress). There can be zero, one, or several subdirectories if several queued batches are waiting to be loaded.
Note: If a batch of files cannot be loaded for some reason, the loader moves the batch to
the $NZ_DATA/hist/error directory. This error directory contains any failed loads. Errors can
occur if you deactivate and drop an active configuration before its history files are loaded, if
the query history user password has changed, or if the history database is dropped or
locked. If you want to load files that were moved to the error directory, resolve the problem
condition that caused the loads to fail, then move the directories in $NZ_DATA/hist/error to
$NZ_DATA/hist/loading. You must stop and restart the Netezza software (nzstop/nzstart) to
load the files that you moved to the loading directory.
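The recovery procedure above can be sketched as a small shell step. This is a sketch only; the paths follow the $NZ_DATA/hist layout described in the text, and the required nzstop/nzstart remains a manual step:

```shell
# Sketch: move failed history batches from the error area back to the
# loading area so the loader retries them. Paths follow the text
# ($NZ_DATA/hist/error and $NZ_DATA/hist/loading).
requeue_failed_batches() {
  nz_data=$1
  for d in "$nz_data"/hist/error/alc_*; do
    # Skip the unexpanded glob when the error area is empty.
    [ -d "$d" ] && mv "$d" "$nz_data/hist/loading/"
  done
}
# Example: requeue_failed_batches /nz/data
# The moved batches are loaded only after restarting the Netezza software:
#   nzstop && nzstart
```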
After history files are successfully loaded, the Netezza system deletes the batch of external
tables to clean up and free the disk space.
Timer interval 0, min threshold 0, max threshold non-zero:
- Loader idle: Transfer the captured data in the staging area to the loading area, regardless of the staging size. Note: This combination is typically used for test/demonstration environments. Because of the continuous loading of data, this setting can cause a performance impact on Netezza systems.
- Loader busy: When the captured data in the staging area meets or exceeds the max threshold, transfer it to the loading area.
Timer interval 0, min threshold non-zero, max threshold 0:
- Loader idle: When the captured data in the staging area meets or exceeds the min threshold, transfer it to the loading area.
- Loader busy: Continue collecting data in the staging area until the loader is idle, then transfer the data to the loading area.
Timer interval 0, min threshold non-zero, max threshold non-zero:
- Loader idle: When the captured data in the staging area meets or exceeds the min threshold, transfer it to the loading area.
- Loader busy: Continue collecting data in the staging area until the max threshold is reached or until the loader is idle, then transfer the data to the loading area.
Timer interval non-zero, min threshold 0, max threshold 0 (any loader state): Transfer any captured data in the staging area to the loading area when the timer expires, regardless of the amount of data or the loader state.
Timer interval non-zero, min threshold 0, max threshold non-zero (any loader state): Transfer captured data in the staging area to the loader when the timer expires, or when the data meets or exceeds the max threshold.
Timer interval non-zero, min threshold non-zero, max threshold 0 (any loader state): When the timer expires, transfer the captured data in the staging area to the loading area if it meets or exceeds the min threshold.
Timer interval non-zero, min threshold non-zero, max threshold non-zero:
- Loader idle: When the timer expires, transfer the captured data in the staging area to the loading area if it meets or exceeds the min threshold or the max threshold. Note: This is the recommended combination for a production Netezza system. You can tune the values of the three loading settings for your environment as described in the text following this table.
- Loader busy: If the staging area meets or exceeds the max threshold, transfer the captured data in the staging area to the loading area. Otherwise, continue collecting data until the next timer expiration.
Depending upon the loader settings and how much history data is collected, it is possible
for the alcloader process to become busy loading history data. You might notice that there
are several batch directories in the loading area, which indicates queued and waiting load
requests. Depending upon how much history data you collect and the overall utilization of
the Netezza system, you might want to try various values for the loader settings to tune it
for best operation in your environment.
Based on the load settings and how busy the loader is, there can be a delay between the
time that query history data is captured, and the time when it is loaded and available for
reporting in the query history database. You can tune the loader settings to help reduce the
delay and balance the load frequency without unduly impacting the Netezza system. Also,
since the captured data is saved in text-based external files, the history reporting users can
also review the files in the staging area to obtain information about very recent activity.
To ensure that the load staging area does not grow indefinitely, the STORAGELIMIT setting controls how large the staging area can be, in MB. If the staging area reaches or exceeds this size limit, Netezza stops collecting history data. An administrator must free up disk space in the storage area, usually by adjusting the loader settings, so that history collection can resume. The STORAGELIMIT value must be greater than LOADMAXTHRESHOLD.
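The constraint above can be checked before applying candidate settings. A minimal sketch, with placeholder values (4096 and 1024 are not defaults from this document):

```shell
# Sketch: validate that a candidate STORAGELIMIT (MB) exceeds a candidate
# LOADMAXTHRESHOLD (MB), per the constraint stated in the text.
check_history_limits() {
  storagelimit=$1 loadmaxthreshold=$2
  if [ "$storagelimit" -le "$loadmaxthreshold" ]; then
    echo "invalid: STORAGELIMIT must be greater than LOADMAXTHRESHOLD" >&2
    return 1
  fi
}
# Example: check_history_limits 4096 1024
```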
For example, the following command shows information about the current configuration:
SYSTEM(ADMIN)=> SHOW HISTORY CONFIGURATION;
3. Use a command such as grep to search for CONFIG-INFO files that contain the name
of the configuration that you want to drop. For example:
grep -R -i basic .
4. Review the output of the grep command to look for messages similar to the following:
./loading/alc_20080926_162803.964347/CONFIG-INFO:BASIC_HIST
./staging/alc_20080926_162942.198443/CONFIG-INFO:BASIC_HIST
These messages indicate that there are batches in the loading and staging areas that use the BASIC_HIST configuration. If you drop that configuration before the batch files are loaded, the loader will classify them as errors when it attempts to process them later. If you want to ensure that any captured data for the configuration is loaded, do not drop the configuration until the command in Step 3 returns no output messages for the configuration that you want to drop.
For details about the command, see the DROP HISTORY CONFIGURATION command syntax in the IBM Netezza Database User's Guide.
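The grep check from the steps above can be wrapped so it is easy to repeat before dropping a configuration. A sketch; the function name is illustrative, and the CONFIG-INFO layout is as described in the text:

```shell
# Sketch: report any staging or loading batches that still reference a
# history configuration. No output means it is safe to drop the
# configuration; matches look like .../alc_*/CONFIG-INFO:<CONFIG_NAME>.
pending_batches() {
  nz_data=$1 config=$2
  grep -R -i "$config" "$nz_data/hist/staging" "$nz_data/hist/loading" 2>/dev/null
}
# Example: pending_batches /nz/data BASIC_HIST
```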
To make the selected configuration the current (active) configuration, click Set as Current. A confirmation dialog informs you that you have changed the current configuration, but the changes do not take effect until you restart the Netezza software. Until you restart the server, the previously active configuration remains in effect, using the settings specified the last time that the Netezza software was started.
To alter an existing configuration, select it in the Configuration Name drop-down list and change the settings as applicable. If you want to alter the active configuration, you must first select a different configuration and click Set as Current to make it the current configuration. You can then select the formerly active configuration to edit it.
For those tables with names that end in $SCHEMA_VERSION, note that this string is the
version number of the history database. For Release 4.6 and later, which uses version 1,
the table names will be similar to $hist_query_prolog_1 and so on.
$v_sig_hist_state_change_$SCHEMA_VERSION
_v_querystatus
The _v_querystatus view shows the query data collected for running/active queries. Even if
history collection is disabled, this view is still populated with data for the active queries.
Table 11-2: _v_querystatus
_v_planstatus
The _v_planstatus view shows the plan data and session data collected for running/active queries. Even if history collection is disabled, this view is still populated with data for the active queries.
Table 11-3: _v_planstatus
queuetime timestamp Time at which the plan was queued to the gate keeper.
gratime timestamp Time at which the plan was placed on the GRA queue.
$v_hist_queries
The $v_hist_queries view shows information about the completed queries and their status,
runtime seconds (for total, cumulative queued, prep time and GRA time), and number of
plans.
Table 11-4: $v_hist_queries View
Name Description
npsinstanceid The instance ID of the nzstart command for the source Netezza system
opid Operation ID, which is used as a foreign key from the query epilog and overflow tables, as well as the plan, table, and column access tables
status, verbose_status The query completion status (as integer and text string)
queuetime, queued_seconds The amount of time the query was queued (as interval and in seconds)
preptime, prep_seconds The amount of time the query spent in the "prep" stage (as interval and in seconds)
gratime, gra_seconds The amount of time the query spent in GRA (as interval and in seconds)
$v_hist_incomplete_queries
The $v_hist_incomplete_queries view lists the queries that were not captured completely. This can occur when there was a system reset at the time of logging, or when a corresponding epilog/prolog record has not yet been loaded into the database.
Table 11-5: $v_hist_incomplete_queries View
Name Description
npsinstanceid The instance ID of the nzstart command for the source Netezza system
opid Operation ID, which is used as a foreign key from the query epilog and overflow tables, as well as the plan, table, and column access tables
$v_hist_table_access_stats
The $v_hist_table_access_stats view lists the names of all the tables captured in table
access and provides some cumulative statistics.
Table 11-6: $v_hist_table_access_stats View
Name Description
num_selected, num_inserted, num_deleted, num_updated, num_truncated, num_dropped, num_created, num_genstats, num_locked, num_altered The number of times that this table was SELECTED from, INSERTED into, DELETED from, UPDATED, TRUNCATED, DROPPED, CREATED, "GENSTATS", LOCKED, or ALTERED
$v_hist_column_access_stats
The $v_hist_column_access_stats view lists the names of all the columns captured in column access and provides some cumulative statistics.
Table 11-7: $v_hist_column_access_stats View
Name Description
$v_hist_log_events
The $v_hist_log_events view shows information about the events that occurred on the
system.
Table 11-8: $v_hist_log_events View
Name Description
npsinstanceid The instance ID of the nzstart command for the source Netezza
system
opid Operation ID, which is used as a foreign key from the query epilog and overflow tables, as well as the plan, table, and column access tables
op The integer code and text string describing the actual operation.
op_type The valid values are one of the following:
1 = session create
2 = session logout
3 = failed authentication
4 = query prolog
5 = query epilog
6 = plan prolog
7 = plan epilog
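When reading raw log-entry rows outside of SQL, the op_type codes listed above can be mapped to their labels with a small helper. A sketch; the function name is illustrative:

```shell
# Sketch: map an op_type code from $v_hist_log_events to its label,
# using the code list given in the text.
op_type_label() {
  case "$1" in
    1) echo "session create" ;;
    2) echo "session logout" ;;
    3) echo "failed authentication" ;;
    4) echo "query prolog" ;;
    5) echo "query epilog" ;;
    6) echo "plan prolog" ;;
    7) echo "plan epilog" ;;
    *) echo "unknown" ;;
  esac
}
```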
Name Description
checksum, details For query prolog entries, these are the checksum and query text; for plan prolog entries, the signature and plan information. For other log entries, checksum is NULL and details contains other information, such as the status for epilogs.
client_type The client type, such as none, nzsql, odbc, jdbc, nz(un)load, cli, bnr, reclaim, old-loader (deprecated), or internal
$hist_version
The $hist_version table shows information about the schema version number of the history
database.
Table 11-9: $hist_version
$hist_nps_$SCHEMA_VERSION
The $hist_nps_$SCHEMA_VERSION table describes each source Netezza system for which
history is captured in the target database. When a Netezza system connects to a history
database for the first time, a record is added to this table.
Table 11-10: $hist_nps_$SCHEMA_VERSION
npsid integer A unique ID for the Netezza system, and the primary key for this table. (This value is generated as a sequence on the target database where this table resides.)
$hist_log_entry_$SCHEMA_VERSION
The $hist_log_entry_$SCHEMA_VERSION table captures the log entries for the operations
performed. It shows the sequence of operations performed on the system. This table is not
populated if history collection has never been enabled or if hist_type = NONE.
Table 11-11: $hist_log_entry_$SCHEMA_VERSION
$hist_failed_authentication_$SCHEMA_VERSION
The $hist_failed_authentication_$SCHEMA_VERSION table captures only the failed authentication attempts for every operation that is authenticated. A successful authentication results in a session creation. A failed authentication does not result in a session creation; instead, it creates a record with a unique operation ID in this table.
Table 11-12: $hist_failed_authentication_$SCHEMA_VERSION
failure varchar(512) The text message for the failure type code
$hist_session_prolog_$SCHEMA_VERSION
The $hist_session_prolog_$SCHEMA_VERSION table stores details about each created
session. Every successful authentication or session creation adds an entry to this table with
a unique operation ID.
Table 11-13: $hist_session_prolog_$SCHEMA_VERSION
operatinguserid bigint The operating user ID for whom the ACL and
permission will be used for validating
permissions
resourcegroupid bigint The group ID of the WLM resource group for this
session
$hist_session_epilog_$SCHEMA_VERSION
The $hist_session_epilog_$SCHEMA_VERSION table stores details about each session
when the session is terminated. Each session completion creates an entry in this table with
a unique operation ID.
Table 11-14: $hist_session_epilog_$SCHEMA_VERSION
$hist_query_prolog_$SCHEMA_VERSION
The $hist_query_prolog_$SCHEMA_VERSION table contains the initial data collected at
the start of a query.
A query with or without a plan, and a plan without a query, causes the creation of a record with an operation ID in the $hist_operation_$SCHEMA_VERSION table. The query prolog and epilog, plan prolog and epilog, table access, and column access for that query share the same operation ID (opid). Thus, the operation ID serves as a key for joining all query-related data. The session-related data is retrieved using the foreign key sessionid.
Table 11-15: $hist_query_prolog_$SCHEMA_VERSION
sessionid bigint The Netezza session ID. This, with npsid and npsinstanceid, is the foreign key from the query, plan, table, and column access tables.
$hist_query_epilog_$SCHEMA_VERSION
The $hist_query_epilog_$SCHEMA_VERSION table contains the final data collected at the
end of the query.
Table 11-16: $hist_query_epilog_$SCHEMA_VERSION
sessionid bigint The Netezza session ID. This, with npsid and npsinstanceid, is the foreign key from the query, plan, table, and column access tables into the session tables.
$hist_query_overflow_$SCHEMA_VERSION
The $hist_query_overflow_$SCHEMA_VERSION table stores the remaining characters of
the query string that was stored in the querytext column of the
$hist_query_prolog_$SCHEMA_VERSION table. For performance reasons, each row of this
table stores approximately 8KB of the query string; if the query text overflow cannot fit in
one 8KB row, the table uses multiple rows linked by sequenceid to store the entire query
string.
Table 11-17: $hist_query_overflow_$SCHEMA_VERSION
$hist_service_$SCHEMA_VERSION
The $hist_service_$SCHEMA_VERSION table records the CLI usage from the localhost or a remote client. It logs the command name and the timestamp of the command issue. This information is collected in the query history when COLLECT SERVICE is enabled in the history configuration. For more information, see the IBM Netezza Advanced Security Administrator's Guide.
Table 11-18: $hist_service_$SCHEMA_VERSION
servicetype bigint The code for the command, which is one of the
following integer values:
1 nzbackup
2 nzrestore
3 nzevent
4 nzinventory (obsoleted in 5.0)
5 nzreclaim
6 nzsfi (obsoleted in 5.0)
7 nzspu (obsoleted in 5.0)
8 nzstate
9 nzstats
10 nzsystem
$hist_state_change_$SCHEMA_VERSION
The $hist_state_change_$SCHEMA_VERSION table logs the state changes in the system.
It logs Online, Paused, Offline and Stopped. For the Online state change, the logging
occurs after the system has gone Online. In other cases, the logging occurs before the state
transition is made to the respective state. This information is collected in the query history
when COLLECT STATE is enabled in the history configuration. For more information, see
the IBM Netezza Advanced Security Administrator's Guide.
Table 11-19: $hist_state_change_$SCHEMA_VERSION
changetype bigint The code for the change type, which is one of
the following integer values:
1 The system is Online.
2 The system is going into the Paused
state.
3 The system is going into the Offline
state.
4 The system is going into the Stopped
state.
change varchar(512) The text string for the change code as described
in changetype
$hist_table_access_$SCHEMA_VERSION
The $hist_table_access_$SCHEMA_VERSION table records the table access history for a
query. This table becomes enabled whenever query history type is Table.
Table 11-20: $hist_table_access_$SCHEMA_VERSION
usage integer The following bits are set to true if the table was accessed in the corresponding way:
(usage & 1) <> 0 = selected
(usage & 2) <> 0 = inserted
(usage & 4) <> 0 = deleted
(usage & 8) <> 0 = updated
(usage & 16) <> 0 = truncated
(usage & 32) <> 0 = dropped
(usage & 64) <> 0 = created
(usage & 128) <> 0 = statsgenerated
(usage & 256) <> 0 = locked
(usage & 512) <> 0 = altered
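The bit tests above can be collected into a small decoder for reading usage values outside of SQL. A sketch; the function name is illustrative, and the labels follow the bit list in the text:

```shell
# Sketch: expand a $hist_table_access usage bitmask into its access types,
# using the bit assignments listed in the text.
decode_usage() {
  u=$1 out=""
  [ $((u & 1)) -ne 0 ] && out="$out selected"
  [ $((u & 2)) -ne 0 ] && out="$out inserted"
  [ $((u & 4)) -ne 0 ] && out="$out deleted"
  [ $((u & 8)) -ne 0 ] && out="$out updated"
  [ $((u & 16)) -ne 0 ] && out="$out truncated"
  [ $((u & 32)) -ne 0 ] && out="$out dropped"
  [ $((u & 64)) -ne 0 ] && out="$out created"
  [ $((u & 128)) -ne 0 ] && out="$out statsgenerated"
  [ $((u & 256)) -ne 0 ] && out="$out locked"
  [ $((u & 512)) -ne 0 ] && out="$out altered"
  # Trim the leading space before printing the list.
  echo "${out# }"
}
decode_usage 9   # selected updated
```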
$hist_column_access_$SCHEMA_VERSION
The $hist_column_access_$SCHEMA_VERSION table records the column access history
for a query. This table becomes enabled whenever query history type is Column.
Table 11-21: $hist_column_access_$SCHEMA_VERSION
$hist_plan_prolog_$SCHEMA_VERSION
The $hist_plan_prolog_$SCHEMA_VERSION table records the plan history information.
This is the data collected at the beginning of the plan execution. This table becomes
enabled whenever query history type is Plan.
Table 11-22: $hist_plan_prolog_$SCHEMA_VERSION
signature bigint The signature of the plan. If two plans have the
same signature, especially for the same query,
most likely the plans are identical.
$hist_plan_epilog_$SCHEMA_VERSION
The $hist_plan_epilog_$SCHEMA_VERSION table records the plan history information.
This is the data collected at the end of the plan execution. This table becomes enabled
whenever query history type is Plan.
Table 11-23: $hist_plan_epilog_$SCHEMA_VERSION
FORMAT_QUERY_STATUS ()
Use this function to display text string versions of the $hist_query_epilog.status column
data. The return value is one of the following status values:
"sucess"
"aborted"
"cancelled"
"failed parsing"
"failed rewrite"
"failed planning"
"failed execution"
"permission denied"
"failed"
"trasaction aborted"
FORMAT_PLAN_STATUS ()
Use this function to display text string versions of the $hist_plan_epilog.status column
data. The return value is one of the following status values:
"sucess"
"aborted"
FORMAT_TABLE_ACCESS()
Use this function to display text string versions of all bits set in the
$hist_table_access.usage column data. The return value is a comma-separated list of one
or more of the following values:
"sel"
"ins"
"del"
"upd"
"drp"
"trc"
"alt"
"crt"
"lck"
"sts"
FORMAT_COLUMN_ACCESS()
Use this function to display text string versions of all bits set in the
$hist_column_access.usage column data. The return value is a comma-separated list of
one or more of the following values:
"sel"
"set"
"res"
"grp"
"hav"
"ord"
"alt"
"sts"
Example Usage
The following sample query shows how you can use these helper functions.
SELECT
substr (querytext, 1, 50) as QUERY,
format_query_status (status) as status,
tb.tablename,
format_table_access (tb.usage),
co.columnname,
format_column_access (co.usage)
from "$hist_query_prolog_1" qp
inner join
"$hist_query_epilog_1" qe using (npsid, npsinstanceid, opid)
inner join
"$hist_table_access_1" tb using (npsid, npsinstanceid, opid)
inner join
"$hist_column_access_1" co using (npsid, npsinstanceid, opid)
where
exists (select tb.dbname
from "$hist_table_access_1" tb
where tb.npsid = qp.npsid and
tb.npsinstanceid = qp.npsinstanceid and
tb.opid = qp.opid and
tb.tablename in ('nation', 'orders', 'part',
'partsupp', 'supplier', 'lineitem',
'region'))
and tb.tableid = co.tableid;
The workload of a Netezza appliance consists of user-initiated jobs such as SQL queries,
administration tasks, backups, and data loads, as well as system-initiated jobs such as
regenerations and rollbacks. The terms workload and job are used interchangeably to
describe the work being performed by a Netezza system.
Workload management (WLM) is the process of assessing the workload of the system and using job control and prioritization features to allocate the appropriate share of resources to jobs running on the system. This chapter describes the Netezza workload management features and how to configure them.
As a best practice, work with your Netezza Sales or Support representative to assess the WLM features that are most appropriate for your environment and users. Do not modify the WLM configuration settings without careful analysis of the impact of the changes. Inappropriate changes and settings can affect system behavior in unintended ways, so changes should be carefully planned and implemented for your business environment.
Overview
The following sections provide information on service level planning and the WLM features.
Short query bias (SQB)
Enabled by default: Yes. A special reserve of resources (that is, scheduling slots, memory,
and preferential queue placement) for short queries. Short queries are those estimated to
run in two seconds or less; the time limit is a configurable setting. With SQB, short
queries can run even when the system is busy processing other, longer queries.

Guaranteed resource allocation (GRA)
Enabled by default: Yes (requires resource sharing groups). A minimum and/or maximum
percentage of the system resources assigned to specific groups of users. These groups are
called resource sharing groups (RSGs). When users assigned to different RSGs submit work
and contend for resources, the GRA scheduler ensures that each RSG receives a percentage
of system resources based on its resource minimum percentage. An RSG could receive more
than its minimum when other RSGs are idle, but an RSG will never receive more than its
configured maximum percentage.

Prioritized query execution (PQE)
Enabled by default: Yes. A priority such as critical, high, normal, or low assigned to
queries and work on the system. Netezza uses the priority when it allocates resources and
schedules the work for the job. Critical and high priority jobs receive more resources than
normal and low priority jobs, based on configured priority weighting factors. You can
specify different priorities for users, groups, or sessions. If you also use GRA, work of
different priorities within each RSG receives a proportional share of the RSG's resources.

Gate keeper
Enabled by default: No. A process of queuing work based on its assigned priority and, if
configured, the estimated run time of normal priority work. The gate keeper acts as a
throttle that allows only a certain number of different types of jobs to run. Any jobs that
exceed the configured thresholds (or for which there are not enough resources to run) wait
until the gate keeper allows them to pass. By default, the gate keeper is disabled and the
Netezza system passes new work requests directly to the GRA scheduler.
Most environments typically use only a subset of these features; the features depend upon
the methodology that you use to manage jobs in your environment.
When multiple jobs or users compete for system resources, you might want the Netezza
system to prioritize certain jobs over others. Workload management is a process of classify-
ing jobs and specifying resource allocation rules so that the system can assign resources
using a predetermined service policy. You can identify jobs as higher or lower in priority
than other jobs, and you can partition the system resources so that groups of users receive
a minimum or a maximum percentage of resources when several groups compete for system
resources.
Netezza has some predefined service policies to help prioritize certain jobs or work. For
example, the Netezza admin user account has special characteristics that prioritize its work
over other users' work. Similarly, certain types of jobs may have priority over user queries or
other less-critical system jobs.
Concurrent Jobs
Netezza imposes a limit on the number of concurrent jobs that can run on the system at
one time. The limit is controlled by the system registry setting gkMaxConcurrent, which has
a default value of 48. Therefore, the system can run up to 48 concurrent jobs as long as
there are sufficient resources (CPU, disk, memory, and so on) to support all of those jobs.
In some environments, a smaller value may be appropriate for the types of jobs that typi-
cally run on the system. A smaller number of concurrent jobs may result in better
performance and thus better response time for users. During new system testing, your Sales
representative can work with you to identify whether your environment would benefit from a
smaller gkMaxConcurrent setting.
If you determine that a lower setting might be better for your system, you can change a reg-
istry configuration setting to lower the value. To change the setting, you need access to a
Netezza user account that has Manage System privilege (such as the admin user). The fol-
lowing examples use the sample account usr1.
1. Pause the system:
nzsystem pause
Are you sure you want to pause the system (y|n)? [n] y
2. Specify a maximum concurrent jobs setting of 20:
nzsystem set -arg host.gkMaxConcurrent=20
Are you sure you want to change the system configuration (y|n)? [n]
y
3. Resume the system:
nzsystem resume
You can display the current value of a registry setting using the following command:
nzsystem showRegistry | grep gkMaxConcurrent
host.gkMaxConcurrent = 20
The SQB feature reserves resources for short queries using the following default settings:
host.schedSQBReservedGraSlots=10
host.schedSQBReservedSnSlots=6
host.schedSQBReservedSnMB=50
host.schedSQBReservedHostMB=64
Also, because short queries are typically not resource intensive, the Netezza system can run several
short queries at a time while the longer work continues.
Table 12-2 describes the configuration registry settings that control the SQB defaults. To
change the setting you use the nzsystem command to pause the system, set the value, and
then resume the system.
For example, if you want to change the definition of a short query in your environment from
two seconds to five seconds, do the following (usr1 must have Manage System privilege):
1. Pause the system:
nzsystem pause
Are you sure you want to pause the system (y|n)? [n] y
2. Specify a short query time length of 5 seconds:
nzsystem set -arg host.schedSQBNominalSecs=5
Are you sure you want to change the system configuration (y|n)? [n]
y
3. Resume the system:
nzsystem resume
You can also display the current value of a registry setting as follows:
nzsystem showRegistry | grep schedSQBNominalSecs
host.schedSQBNominalSecs = 5
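The SQB classification described above reduces to comparing a query's estimated runtime against the host.schedSQBNominalSecs setting. The following is a minimal illustrative sketch; the function name is hypothetical and not part of the Netezza software:

```python
def is_short_query(estimated_secs, nominal_secs=2):
    """Return True if a query qualifies for the SQB resource reserve.

    nominal_secs mirrors the host.schedSQBNominalSecs registry setting
    (default 2 seconds).
    """
    return estimated_secs <= nominal_secs

short = is_short_query(1.5)        # a 1.5-second query qualifies
long_running = is_short_query(8)   # an 8-second query does not
```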
Managing GRA
If your environment has distinct groups of users who use the system at the same time, you
can use GRA to partition the system so that each group receives a portion of the system
resources when the group is active. These groups are called resource sharing groups
(RSGs). GRA is enabled by default.
Group     Resource Minimum (%)  Resource Maximum (%)
Analysts  50                    100
RptQuery  30                    60
Public    20                    80
When all three RSGs are busy with jobs on the system, the GRA scheduler works to balance
the jobs and resource utilization as shown in Figure 12-2.
To create these RSGs and to alter the existing group Public from its default maximum per-
centage, you can use SQL commands or the NzAdmin tool. For a description of creating
groups using NzAdmin, refer to the online help for that interface.
Examples of the SQL commands follow:
SYSTEM(ADMIN)=> CREATE GROUP analysts WITH RESOURCE MINIMUM 50
RESOURCE MAXIMUM 100;
CREATE GROUP
SYSTEM(ADMIN)=> CREATE GROUP rptquery WITH RESOURCE MINIMUM 30
RESOURCE MAXIMUM 60;
CREATE GROUP
SYSTEM(ADMIN)=> ALTER GROUP public WITH RESOURCE MAXIMUM 80;
ALTER GROUP
You can then assign Netezza user accounts to the RSG. For example, the following com-
mand assigns the user bob to the analysts RSG:
SYSTEM(ADMIN)=> ALTER USER bob IN RESOURCEGROUP analysts;
ALTER USER
The Netezza system ensures that members of the Analysts group get at least 50% of the
available system resources when all the RSGs are active. At the same time, the system
ensures that RptQuery group members and Public users are not starved for resources.
Note the following sample command that creates a group and adds users to the group:
SYSTEM(ADMIN)=> CREATE GROUP analysts WITH RESOURCE MINIMUM 50
RESOURCE MAXIMUM 100 USER bob,jlee;
CREATE GROUP
In this example, the users are assigned to the group but not for resource sharing controls.
Instead, the system uses the group definition to manage the security and privileges of the
analysts group. To assign a user to a group for resource sharing purposes, you must use the
[CREATE|ALTER] USER command and the IN RESOURCEGROUP syntax.
If the sum of the active RSGs' RESOURCE MAXIMUM settings is <= 100, the system
allocates resources based on the RESOURCE MAXIMUM settings.
If the sum of the active RSGs' RESOURCE MINIMUM settings is >= 100, the system
allocates resources in proportion to the RESOURCE MINIMUM settings for each RSG.
If the sum of the active RSGs' RESOURCE MINIMUM settings is < 100, the system
allocates resources in proportion to the RESOURCE MINIMUM settings for each RSG, but
the allocations are limited by their RESOURCE MAXIMUM settings. Any excess resources
are allocated in proportion to the difference between the allowed resources and the
RESOURCE MAXIMUM settings.
If only a few of the RSGs are busy, the system has more resources to give to the active
RSGs, but it applies the minimum and maximum resource percentages to ensure fair allo-
cations. For example:
If the Analysts RSG is the only active group, it can use up to 100% of the system
resources for its work.
If the RptQuery RSG is the only active group, it can use up to 60% of the available sys-
tem resources (its RESOURCE MAXIMUM). The remaining 40% of the available system
resources remain unallocated until there is new work from other RSGs or the admin
user.
If the Analysts and Public RSGs are busy, their resource minimums total 70% and their
resource maximums total 180%. The system determines their allowed resource per-
centages as follows:
Group     min  max  allowed
Public    20   80   29% = (20 / (20 + 50))
Analysts  50   100  71% = (50 / (20 + 50))
If the RptQuery and Public RSGs are busy, the system determines their allowed
resource percentages as follows. This example shows that the excess is apportioned to
each RSG, but never to exceed the maximum percentage.
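The allocation rules described above can be illustrated with a short sketch. This is an approximation for illustration only (the function name and redistribution loop are assumptions; the actual scheduler recalculates continuously as groups become active or idle):

```python
def allowed_shares(groups):
    """Estimate allowed resource percentages for active RSGs.

    groups: {name: (resource_min, resource_max)} percentages.
    Shares start in proportion to the minimums, are capped at each
    group's maximum, and any excess is redistributed in proportion
    to the remaining headroom below the maximums.
    """
    total_min = sum(mn for mn, mx in groups.values())
    share = {g: 100.0 * mn / total_min for g, (mn, mx) in groups.items()}
    excess = 0.0
    for g, (mn, mx) in groups.items():
        if share[g] > mx:
            excess += share[g] - mx
            share[g] = mx
    while excess > 1e-9:
        headroom = {g: groups[g][1] - share[g]
                    for g in groups if groups[g][1] > share[g]}
        total = sum(headroom.values())
        if total <= 0:
            break
        grant = min(excess, total)
        for g, h in headroom.items():
            share[g] += grant * h / total
        excess -= grant
    return share

# Analysts and Public active: 50/(50+20) = ~71%, 20/(50+20) = ~29%
shares = allowed_shares({"Analysts": (50, 100), "Public": (20, 80)})
```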
Netezza frequently adjusts the resource percentages based on the currently active RSGs
and their jobs. Because work is often submitted and finished very quickly, at any one point
in time it might appear that certain RSGs have received no resources (because they are
inactive) while other RSGs are monopolizing the system because they are continually
active.
Over time, and especially during peak times when all RSGs are actively using the system,
the GRA usage typically averages out to the RSG's allowed percentage. The measure of
whether a group is receiving its allowed resource percentage is called compliance; Netezza
offers several reports that you can use to monitor resource group compliance. For more
information, see Monitoring Resource Utilization and Compliance on page 12-15.
Using the example Analysts, RptQuery, and Public RSGs, assume that users in all of the
RSGs are active and so is the admin user. The resource allocations shown in Figure 12-2 on
page 12-8 would change to the following percentages shown in Figure 12-3.
The admin user receives 50% of the available resources, so the other RSGs receive half of
their configured percentages. For example, if admin and all three RSGs are busy, the Ana-
lysts group gets 25%, the RptQuery group gets 15%, and the Public group gets 10%.
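The halving described above amounts to scaling each RSG's configured percentage by the share left after the admin user's allocation; a small sketch (the function name is illustrative, not a Netezza API):

```python
def shares_with_admin(rsg_pcts, admin_pct=50):
    """Scale RSG percentages by the fraction remaining after admin's share."""
    factor = (100 - admin_pct) / 100.0
    return {g: p * factor for g, p in rsg_pcts.items()}

# With admin busy: Analysts 25%, RptQuery 15%, Public 10%
shares = shares_with_admin({"Analysts": 50, "RptQuery": 30, "Public": 20})
```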
As a best practice, do not let your users run as the admin user for their work. Instead, cre-
ate an administrative users RSG (for example, NzAdmins) with an appropriate resource
percentage and the correct object and administrative permissions. Add your administrative
user accounts as members to that RSG so that their work does not severely impact the
other RSGs. An administrative users group also makes it easier to manage the account per-
missions and membership for those users collectively, rather than managing permissions
for each user account on a case-by-case basis.
As you plan the resource percentages for each RSG, be sure to consider the number of con-
current active jobs that are likely to occur for that group. You may need to adjust the
resource allocation percentages to ensure that very busy groups have enough resources to
complete the expected number of concurrent jobs in a timely manner. You can also config-
ure a limit on the number of active jobs from an RSG to ensure that a specific number of
active jobs have reasonable resource allocations; any additional jobs will wait until the
active jobs finish.
A value of 0 (or OFF) specifies that the group has no maximum for the number of con-
current jobs. The group is restricted by the usual system settings and controls for
concurrent jobs.
A value of 1 to 48 sets the job maximum to the specified integer value.
A value of -1 (or AUTOMATIC) specifies that the system will calculate a job maximum
value based on the group's resource minimum multiplied by the number of GRA sched-
uler slots (default 48). For example, if a group has a resource minimum of 20%, the
job maximum is (0.20 * 48), or approximately 9.
By controlling the number of concurrent jobs, you can help to improve performance for the
active jobs and avoid cases where too many active jobs result in bad performance for all of
them.
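The AUTOMATIC calculation described above can be sketched as follows; truncation toward zero is an assumption here, chosen to match the "approximately 9" example in the text:

```python
def automatic_job_max(resource_min_pct, gra_slots=48):
    """Job maximum for JOB MAXIMUM -1 (AUTOMATIC): the group's resource
    minimum multiplied by the number of GRA scheduler slots (default 48)."""
    return int(resource_min_pct / 100.0 * gra_slots)

# A group with a 20% resource minimum: int(0.20 * 48) = 9
limit = automatic_job_max(20)
```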
For the RptQuery group, which has one Critical and two High priority jobs and a 30%
resource allocation, the system calculates the job resource allocations as follows:
Critical priority job = 8/16 points (50% of the group's resources).
Each High priority job = 4/16 points (25% of the group's resources).
Thus, the Critical priority job receives half of the group's 30% for a total of 15%, and
each High priority job receives one-quarter of the total 30%, or about 7% of the group's
resources.
For the Public group, which has two Low priority jobs and a 20% group resource allocation,
the system divides the group's resources equally. Thus, each low priority job receives
approximately 10% of the system resources.
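The point-based division in these examples can be sketched as follows. The Critical=8 and High=4 weights come from the text above; the Normal=2 and Low=1 weights are assumed values for illustration only:

```python
# Weights: critical/high from the worked example; normal/low assumed.
PRIORITY_POINTS = {"critical": 8, "high": 4, "normal": 2, "low": 1}

def job_shares(group_pct, job_priorities):
    """Split a group's resource percentage among its jobs by priority points."""
    total = sum(PRIORITY_POINTS[p] for p in job_priorities)
    return [group_pct * PRIORITY_POINTS[p] / total for p in job_priorities]

# RptQuery (30%): one critical and two high jobs -> 15%, 7.5%, 7.5%
rptquery = job_shares(30, ["critical", "high", "high"])
# Public (20%): two low jobs -> 10% each
public = job_shares(20, ["low", "low"])
```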
You must pause the system, change the setting, and resume the system for the changes to
take effect.
The _v_system_util view shows system utilization for certain resources such as host and
SPU CPU, disk, memory, and fabric (communication) resources. The view is updated
based on the interval specified in the host.sysUtilVtUpdateInterval setting, with a
default of 60 seconds. It keeps approximately two days of data.
Netezza saves the resource usage information for the horizon in the _v_sched_gra_ext sys-
tem view. Every 600 seconds (by default), the system adds a new row for each active group
with its resource compliance totals for that period. If a group is not active, Netezza does
not create a new row for that group.
Table 12-7 describes the settings that control the compliance monitoring windows.
host.graVtUpdateInterval int 600 sec Specifies how often the GRA scheduler
updates resource usage statistics for com-
pleted jobs.
To display compliance and resource usage using the _v_sched_gra_ext view, you can use a
SQL command similar to the following (note that the output lines are very long and wrap in
the sample below):
1280741126077846 | 4900 | 39 | 39 | 0 |
0 | 0 | 0.028 | 0 | 100 | 100 |
0 | 2 | 5 | 0 | 2 |
3600000000 | 0 | 599.45 | 14.48 | 9.13 | 0 | 0 |
0 | 0.13 | 0 | 0 | 0 | _ADMIN_
| 2010-08-02 04:25:26.077846 | 2010-08-02 05:25:26.0778
For each active resource group, the system provides information about how busy each
group is, and how the scheduler is managing the GRA resources as well as scheduler
resources.
Note: Within the view output, you may notice an _ADMIN_ resource group. This is a sys-
tem-default group for the admin user account and cannot be modified.
Summary Displays the GRA performance status for the last 60 minutes. For more
information, see Resource Allocation Performance Summary on page 12-17.
History Displays all available table information from summary data captured in ten-
minute intervals. For more information, see Resource Performance History on
page 12-17.
Graph Displays the resource allocation history for specific days. For more informa-
tion, see Resource Performance History Graph on page 12-18.
The lines for each group show the resource usage trends through the day with the
usage percentage on the left vertical axis.
The blue shaded background shows the number of jobs running at each time interval,
with the job count on the right-side vertical axis.
The drop-down list allows you to select a different day of resource usage to display.
Managing PQE
If your environment has distinct types of jobs that each have different priorities or service
level goals, you can use priority settings to help Netezza identify and prioritize the more
critical jobs over the less critical ones. Netezza uses the priority query execution (PQE) set-
tings to identify the jobs with the highest importance.
When combined with GRA, Netezza assigns more resources to higher priority jobs over
lower priority jobs; for queued jobs waiting to run, Netezza schedules the higher priority
jobs to run before the lower priority jobs. When used with the gate keeper, PQE can be used
to queue and control the number of each type of job that is allowed to run at a given time on
the system.
You assign priority to jobs in several ways:
You can assign a priority to a group of users; each user inherits the priority of the
group.
You can assign priority to a user, which can override a priority specified for the user's
group(s).
You can assign a priority as a system default priority for any users who do not have a
priority set by their group or account.
When you configure priority for a user, group, or system-wide, you can specify a default pri-
ority and a maximum priority. The system does not allow the user to specify a priority
greater than his or her maximum priority.
The admin user as well as permitted users can change the priority of a running job. You can
raise or decrease a job's priority. Users can raise their jobs' priority to the
maximum allowed for them as individuals or as members of a group. For more information
about priority assignment, see Specifying Session Priority on page 12-20.
High: Priority user jobs. These jobs take precedence over normal jobs.
Figure 12-9: Using PQE to Control Job Concurrency by Runtime and Priority
In Figure 12-9, the gate keeper configuration settings allow up to 36 critical, 4 high, 2 nor-
mal, and 2 low priority jobs to run concurrently. If the maximum number of jobs for a
specific priority are already running, the gate keeper queues any additional jobs of that type
(as in the Normal and Low queues). A job of any priority could be queued because there are
not enough resources available to run that specific job. (Although not shown in the figure,
requests from the gate keeper proceed to the GRA scheduler for WLM processing before
they proceed to the SPUs.)
Table 12-9 describes the configuration registry settings that you can use to change the gate
keeper defaults. To change the setting you use the nzsystem command to pause the sys-
tem, set the value, and then resume the system.
host.gkEnabled (bool, default: no): Enables the gate keeper. If you enable the gate
keeper, jobs submitted to the Netezza system are first processed by the gate keeper
and allowed to pass to the GRA scheduler only when the number of currently running
jobs is less than the configured priority and/or response time thresholds.
The gate keeper uses a default critical priority queue of 36, so the gate keeper allows up to
36 critical priority queries at one time assuming that resources and query slots are avail-
able. This is a hardcoded configuration setting and cannot be changed.
If you do not use PQE, all jobs are considered Normal; the gate keeper uses only one queue
to process new work requests. Figure 12-10 illustrates the case in which gate keeper is
enabled but PQE is not used to prioritize the queries.
host.gkMaxPerQueue=48
host.gkMaxConcurrent=48
Optionally, the Normal queue offers settings that you can use to configure additional queu-
ing controls. For example, Figure 12-11 shows how you can use the host.gkMaxPerQueue
and host.gkQueueThreshold settings to create up to four queues to hold queries of different
estimated runtimes. You can also configure the gate keeper to allow more of the very short
queries to run and fewer of the longer ones, which can improve performance for the shorter
queries.
host.gkMaxPerQueue=20,5,3,1
host.gkQueueThreshold=1,10,60,-1
Figure 12-11: Gate Keeper Time-Based Normal Queues and Registry Settings
If you provide a comma-separated list of values for gkQueueThreshold, the gate keeper cre-
ates several queues to hold the queries that have an estimated runtime within that range.
In Figure 12-11, the gkQueueThreshold setting defines four queues: a queue for queries
with estimated runtimes of less than 1 second; one for queries with an estimated runtime
of 1 up to 10 seconds; one for queries with estimated runtimes of 10 up to 60 seconds;
and one for queries that have estimated runtimes of 60 seconds or greater.
Using the gkMaxPerQueue setting, you can control the number of queries from each queue
that are sent to the SPU for processing. In this example, the gate keeper will allow up to 20
queries from the <1 second queue to pass on for processing, with up to 5 queries from the
1-<10 second queue; up to 3 queries from the 10-<60 second queue; and only 1 from the
60> second queue. Thus, the gate keeper will send more of the faster queries and fewer of
the longer-running queries for processing. With these queue settings, only one 60-second
or greater query can be active on the SPU at one time, and the gate keeper will queue any
additional 60-second or greater queries until the first one completes.
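The time-based queue selection can be sketched as follows, assuming the gate keeper simply finds the first threshold that exceeds the estimated runtime (with -1 as the unbounded catch-all); the function is a hypothetical illustration, not Netezza code:

```python
def pick_queue(est_runtime_secs, thresholds=(1, 10, 60, -1)):
    """Return the index of the gate keeper queue for a normal-priority query.

    thresholds mirrors host.gkQueueThreshold=1,10,60,-1: each queue holds
    queries with an estimated runtime below its threshold; -1 marks the
    final, unbounded queue.
    """
    for i, limit in enumerate(thresholds):
        if limit == -1 or est_runtime_secs < limit:
            return i

QUEUE_LIMITS = (20, 5, 3, 1)   # host.gkMaxPerQueue=20,5,3,1

# A 30-second query lands in queue index 2, which admits up to 3 at a time
queue = pick_queue(30)
```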
The nzstats command displays operational statistics about system capacity, faults, and
performance. Operational statistics provide you with the following information:
A high-level overview of how your system is running in a context of recent system
activity
Details so that you can diagnose problems, understand performance characteristics,
and interface to system management software
Table 13-1 lists the Netezza core groups and tables that you can view using the nzstats
command.
Host CPU Table: Provides information about each host processor. See Host CPU Table on
page 13-3.
Host Filesystem Table: Provides information about each local host file system. See Host
File System Table on page 13-4.
Host Interface Table: Provides information about the host's interface. See Host Interface
Table on page 13-4.
Host Mgmt Channel Table: Provides information about the system's management channel
from the host's viewpoint. See Host Management Channel Table on page 13-6.
Host Network Table: Provides information about the system's main UDP network layer
from the host's viewpoint. See Host Network Table on page 13-7.
Host Table: Provides information about each host. See Host Table on page 13-8.
Per Table Per Data Slice Table: Provides information about tables on a per-data slice
basis. See Per Table Per Data Slice Table on page 13-10.
Query History Table: Provides a list of the last 2000 queries that completed, as recorded
in the _v_qryhist view. See Query History Table on page 13-11.
SPU Table: Provides information about each SPU's memory. See SPU Table on page 13-13.
Table Table: Provides information about database tables and views. See Table Table on
page 13-14.
Database Table
If you are the user admin, you can use the nzstats command to display the Database Table,
which displays information about the databases. It has the following columns:
Column Description
Create Date The date and time this database was created.
Owner Id The user ID for the user that owns this database.
Num Tables The number of user tables associated with this database.
Num Views The number of user views associated with this database.
Num Active Users The number of users currently attached to this database.
DBMS Group
The DBMS Group displays information about the database server. It has the following
columns:
Column Description
Num Queries The total number of queries that have been submitted, but not
completed.
Num Queries Waiting The total number of queries currently waiting to be run.
Num Transactions The total number of open and recent transactions maintained
by the Transaction Manager.
Column Description
Ticks The number of CPU ticks that have occurred. A tick is 1/100th of a sec-
ond. Linux uses the term jiffy for this amount of time.
Idle Ticks The number of ticks where the CPU is not doing anything (that is, run-
ning the idle task).
Non-idle Ticks The number of ticks during which the CPU is in either user or system
mode.
Avg Load The average, as calculated over the last minute, of the utilization per-
centage for the processor. (Note that commands such as top show the
average utilization for shorter periods of time, such as only the last three
seconds.)
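Because a tick is 1/100th of a second, a utilization percentage over an interval can be derived from successive samples of the Ticks and Idle Ticks counters; a hypothetical sketch:

```python
def cpu_utilization_pct(ticks_delta, idle_ticks_delta):
    """Percent CPU utilization between two samples of the Host CPU Table.

    ticks_delta: change in the Ticks counter (1 tick = 1/100 second).
    idle_ticks_delta: change in the Idle Ticks counter over the same span.
    """
    if ticks_delta == 0:
        return 0.0
    return 100.0 * (ticks_delta - idle_ticks_delta) / ticks_delta

# 6000 ticks elapsed (60 seconds), 4500 of them idle -> 25% utilized
util = cpu_utilization_pct(6000, 4500)
```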
Column Description
Mount Point The directory name on which the file system is mounted.
Column Description
MTU The size of the largest datagram that can be sent/received on the inter-
face, specified in bytes.
MAC Address The interface's address at the protocol layer immediately below the net-
work layer in the protocol stack.
In Bytes The total number of bytes received on the interface, including framing
characters.
In Byte Rate The previous 1 minute average rate of bytes received (15 seconds
granularity).
In Pkt Rate The previous 1 minute average rate of packets received (15 seconds
granularity).
In Errors The number of inbound packets that contain errors preventing them
from being deliverable to a higher-layer protocol.
Out Bytes The total number of bytes transmitted out of the interface, including
framing characters.
Out Bytes-64 A 64-bit version of the Out Bytes managed object, updated every 15
seconds.
Out Byte Rate The previous 1 minute average rate of bytes sent (15 seconds
granularity).
Out Pkt Rate The previous 1 minute average rate of packets sent (15 seconds
granularity).
Out Errors The number of outbound packets that could not be transmitted because
of errors.
Column Description
In Byte Rate The previous 1 minute average rate of bytes received (15 sec-
onds granularity).
In Msg Rate The previous 1 minute average rate of messages received (15
seconds granularity).
In Msg Q Len The length of the receive packet queue for messages being
assembled.
Out Bytes-64 A 64-bit version of the Out Bytes managed object, updated every
15 seconds.
Out Byte Rate The previous 1 minute average rate of bytes sent (15 seconds
granularity).
Out Msg Rate The previous 1 minute average rate of messages sent (15 sec-
onds granularity).
Column Description
Out Retransmit Rate The previous 1 minute average rate of retransmissions (15 sec-
onds granularity).
Columns Description
In Byte Rate The previous 1 minute average rate of bytes received (15 sec-
onds granularity).
In Msg Rate The previous 1 minute average rate of messages received (15
seconds granularity).
Out Bytes-64 A 64-bit version of the Out Bytes managed object, updated
every 15 seconds.
Out Byte Rate The previous 1 minute average rate of bytes sent (15 seconds
granularity).
Out Msg Rate The previous 1 minute average rate of messages sent (15 sec-
onds granularity).
Columns Description
Out Retransmit Rate The previous 1 minute average rate of retransmissions (15 sec-
onds granularity).
Host Table
The Host Table displays information about each host on the system. It has the following
columns:
Column Description
Host ID A unique value for each host in the system (always 1).
Column Description
In Byte Rate The previous 1 minute average rate of bytes received (15 sec-
onds granularity).
In Msg Rate The previous 1 minute average rate of messages received (15
seconds granularity).
In Msg Q Len The length of the receive packet queue for messages being
assembled.
Out Bytes-64 A 64-bit version of the Out Bytes managed object, updated every
15 seconds.
Out Byte Rate The previous 1 minute average rate of bytes sent (15 seconds
granularity).
Out Msg Rate The previous 1 minute average rate of messages sent (15 sec-
onds granularity).
Column Description
Out Retransmit Rate The previous 1 minute average rate of retransmissions (15 sec-
onds granularity).
Column Description
Disk Space The amount of disk space used for this table in this data slice.
Query Table
If you are the admin user, you can use the nzstats command to display the Query Table,
which displays information about the queries currently running on the Netezza
server. Queries that have completed execution and whose result sets are being
returned to a client user are not listed in this table. You can use the system view
_v_qrystat to view the status of running queries. For more information, see Table 9-8 on
page 9-29.
Note: This query table uses the _v_qrystat view for backward compatibility and will be dep-
recated in a future release. For more information about the new query history feature, see
Chapter 11, Query History Collection and Reporting.
Column Description
SQL Statement The SQL statement. You can see the entire string by increasing the
width of the column.
Column Description
State Text The state of the query in text form. Possible states are pending,
queued, running.
Submit Date The date and time that the query was submitted.
Start Date The date and time that the query started running.
Snippets The number of snippets (steps) in the plan for this query.
Column Description
SQL Statement The SQL statement. You can see the entire string by increasing the
width of the column.
Submit Date The date and time that the query was submitted.
Start Date The date and time that the query started running.
End Date The date and time that the query completed.
Snippets The number of snippets (steps) in the plan for this query.
Column Description
HW ID The index into the hardware table for the SPU containing this partition.
Partition Id A unique value (per SPU) for each disk partition within a SPU.
Disk Id The index into the SPU disk table for the disk on which this partition
resides.
Column Description
SPU Table
The SPU Table displays information about each SPU's processor and memory. It has the
following columns:
Column Description
System Group
The System Group displays information about the system as a whole. It has the following
columns:
Column Description
Contact The name of the contact person for this system and contact
information.
Location The physical location of this node (for example, telephone closet,
3rd floor).
Up Time The time in seconds since the management portion of the system
was last re-initialized.
State The current state of the system (from the nzstate command) as an
integer.
Column Description
State Text The text description of the system state. This matches the display
from the nzstate command.
Model The Netezza model number for this system. Used in the callHome.txt
file.
Serial Num The serial number for this system. Used in the callHome.txt file.
Table Table
If you are the user admin, you can use the nzstats command to display the Table Table,
which displays information about database tables. It has the following columns:
Column Description
Create Date The date and time that this table was created.
Type The type of this table (table, view) expressed as its type integer.
Disk Space The total disk space used to store this table in KB. You can use the
-allocationUnit option to show the disk space used in extents or
blocks.
Avg Space Per DS The average disk space used by each dataslice of the table in KB.
Max Space Per DS The disk space consumed by the largest dataslice for the table in
KB.
Column Description
Min Space Per DS The disk space consumed by the smallest dataslice for the table in
KB.
Space Skew The ratio that shows how disparate the dataslice sizes are as calcu-
lated by (maximum dataslice size - minimum dataslice size) /
average dataslice size.
This chapter describes how to manage and use the MantraVM service. The MantraVM
service is a virtual server environment that runs the Netezza Mantra compliance and audit-
ing application directly on the Netezza host.
Note: The MantraVM service was installed on older IBM Netezza 1000 systems, but it is no
longer installed on new systems. (If your system has an /nz/mantravm directory, the Man-
traVM service is installed on the system.) IBM Netezza 100 systems do not support the
MantraVM service.
Mantra Information
The MantraVM service supports the Netezza Mantra application on IBM Netezza 1000 systems, as shown in Figure 14-1. You can start, stop, and obtain the status of the MantraVM service using the service mantravm commands. You use the mantractl command to configure and manage the MantraVM service in this environment.
IBM Netezza System Administrators Guide
Within the MantraVM service, the Mantra application operates identically to a standalone Mantra appliance; the management tasks are identical in terms of configuration, reporting, backups, and so on. For details about Mantra compliance reporting, events, and configuration, see the Netezza Mantra Administration Guide, which is available on the Mantra Web interface. To access the guide, see Accessing the Mantra Web Interface on page 14-8.
Mantra Documentation
The Mantra documentation is installed in the MantraVM service image. To access the documentation, connect to the Mantra Web interface and go to the Support page to download an online version of the Netezza Mantra Administration Guide. For a description of how to access the Mantra Web interface in the MantraVM service environment, see Accessing the Mantra Web Interface on page 14-8.
If you download the Mantra Console from the Web interface Support page, you can also access the documentation using the Help menu on the Console. For details about the Netezza Mantra compliance application and how to create policies, run reports, monitor activity and events, and use the Mantra interfaces, refer to the Netezza Mantra Administration Guide.
faces that are being monitored for query activity; if no interfaces are configured for
monitoring, the output displays the word default.
Note that the output also shows whether the MantraVM service is enabled. If the service is not enabled, it may not be running; even if it is running, it will not be restarted automatically the next time the host starts. For example:
[root@nzhost1 ~]# ./mantractl
External IP Address of MantraVM: 1.2.3.4
Internal IP Address of MantraVM: 10.1.2.3
mantravm service enabled? false
MantraVM Version: 1.0.060210-2010
Interfaces Monitored: eth8,usb0
For systems that have fewer than four monitoring interfaces available, sample output follows:
mantravm service stopped
eth13
eth8
eth10
usb0
eth9
traps, configure IP and DNS settings, download Mantra Agent install packages, access user
documentation, and so on. For a full description of the Web Interface, see the Netezza
Mantra Administration Guide.
To access the Mantra Web interface, open a Web browser and enter the following URL
where ipaddr is the external IP address (management address) of the MantraVM
service:
https://ipaddr
Note: You can display the IP address using the mantractl command. Make sure that you
communicate the external IP address to the Mantra users at your site who use the Web
interface or the Console application.
The Netezza Mantra login page appears. Log in using an existing Mantra user account. There is a default user account named admin (which is not the same as the Netezza admin database user account). The admin password was specified when the MantraVM application was installed; the default password is netezza. The admin user can create additional Mantra user accounts for accessing the console and Web interface.
Troubleshooting
The following sections describe some possible conditions and troubleshooting steps for the
MantraVM service.
Event Throttling
Mantra contains an event throttle mechanism that helps to prevent unintentionally vague or all-encompassing policies from overwhelming the Mantra database with stored event data. The event throttle limits the number of events that can be stored in the event database during a single calendar day. If your configured policies capture more than the throttle limit of event data, an alarm is raised and any event traffic that exceeds the limit is monitored and analyzed, but it is deflected away from the event database until the throttle alarm resets automatically at midnight or is cleared manually by an administrator. For more information about event throttling and how to configure it, see the Netezza Mantra Administration Guide.
To resolve the issue, remove unnecessary files in the /nz partition to free disk space. If you
are not sure which files to delete, contact Support for assistance to identify temporary and
other files that can be safely removed. After you increase the available disk space, you can
restart the MantraVM service following the instructions in Starting the MantraVM Service
on page 14-3.
nzds - Manages and displays information about the data slices on the system. For command syntax, see nzds on page A-8.
nzinventory - This command is obsolete in Release 5.0. See the command nzhw on page A-26.
nzload - Loads data into database files. For command syntax and more information, see the IBM Netezza Data Loading Guide.
nzrestore - Restores the contents of a database backup. For command syntax and more information, see Using the nzrestore Command on page 10-22.
nzsfi - This command is obsolete in Release 5.0. See the command nzhw on page A-26.
nzspu - This command is obsolete in Release 5.0. See the command nzhw on page A-26.
nzspupart - Shows a list of all the SPU partitions and the disks that support them. For command syntax, see nzspupart on page A-43.
nzsql - Invokes the SQL command interpreter. For usage information, see Creating Databases and User Tables on page 9-1. For command syntax, see the IBM Netezza Database Users Guide.
nztopology - This command is obsolete in Release 5.0. See the command nzds on page A-8.
Command Privileges
Table A-2 lists the administrative privileges that may be required for certain commands.
The database user account may require one or more of these privileges for a command to
complete successfully. Note that the terms in square brackets are optional.
Privilege Description
[Create] Aggregate - Allows the user to create user-defined aggregates (UDAs). Permission to operate on existing UDAs is controlled by object privileges.
[Create] External Table - Allows the user to create external tables. Permission to operate on existing tables is controlled by object privileges.
[Create] Function - Allows the user to create user-defined functions (UDFs). Permission to operate on existing UDFs is controlled by object privileges.
[Create] Index - For system use only. Users cannot create indexes.
[Create] Library - Allows the user to create user-defined shared libraries. Permission to operate on existing shared libraries is controlled by object privileges.
[Create] Temp Table - Allows the user to create temporary tables. Permission to operate on existing tables is controlled by object privileges.
[Manage] Security - Allows the user to perform commands and operations relating to history databases, such as creating and cleaning up history databases.
[Manage] System - Allows the user to perform the following management operations: start/stop/pause/resume the system, abort sessions, and view the distribution map, system statistics, and logs. The user can run the commands nzsystem, nzstate, nzstats, and nzsession priority.
Backup - Allows the user to perform backups. The user can run the nzbackup command.
Restore - Allows the user to restore the system. The user can run the nzrestore command.
Table A-3 describes the list of available object privileges. As with administrator privileges,
specifying the WITH GRANT option allows a user to grant the privilege to others.
Privilege Description
Abort - Allows the user to abort sessions. Applies to groups and users.
Alter - Allows the user to modify object attributes. Applies to groups, users, and tables.
Delete - Allows the user to delete table rows. Applies only to tables.
Drop - Allows the user to drop objects such as databases, groups, users, tables, and others.
Execute - Allows the user to run UDXs such as user-defined functions, aggregates, and shared libraries.
GenStats - Allows the user to generate statistics on tables or databases. The user can run the GENERATE STATISTICS command.
Groom - Allows the user to groom tables to reclaim disk space and reorganize data. The user can run the SQL GROOM TABLE command.
Insert - Allows the user to insert rows into a table. Applies only to tables.
List - Allows the user to display an object's name, either in a list or in another manner. Applies to all objects.
Select - Allows the user to select (or query) rows within a table. Applies to tables and views.
Truncate - Allows the user to delete all rows from a table with no rollback. Applies only to tables.
Update - Allows the user to modify table rows, such as changing field values. Applies only to tables.
Exit Codes
The nz* commands typically return 0 to indicate successful completion. If a command returns a non-zero value, it encountered an error and failed. The error could be a problem in the nz* command itself or a failure in a subcommand. If a command fails, refer to the messages that appear in the command shell window for additional information about the cause of the failure.
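This exit-code convention lends itself to simple scripting; a minimal sketch follows, using the standard true and false utilities as stand-ins for nz* commands (which are assumed to follow the same zero-on-success convention):

```shell
# Report success or failure based on a command's exit code.
# 'true' and 'false' stand in here for nz* commands.
check_nz_command() {
    "$@"
    rc=$?
    if [ "$rc" -eq 0 ]; then
        echo "command succeeded"
    else
        echo "command failed with exit code $rc"
    fi
    return "$rc"
}

check_nz_command true
check_nz_command false || :
```

On a Netezza host, the same pattern works for any nz* command, for example check_nz_command nzstate.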
out without a value, the system waits 300 seconds. The maximum timeout value is 100
million seconds.
nzbackup
Use the nzbackup command to back up your database. For a complete description of the
nzbackup command and its use, see The nzbackup Command Syntax on page 10-11.
nzcontents
Use the nzcontents command to display the Netezza program names, the revision level, the build level, and the checksum of binaries. This command takes several seconds to run and results in multiple lines of output. Programs with no revisions are either scripts or special binaries.
Syntax
The nzcontents command uses the following syntax:
nzcontents [-h]
Description
The nzcontents command has the following characteristics.
Privileges Required
You do not need special privileges to run the nzcontents command.
Common Tasks
Use the nzcontents command to display the names of programs, and their revision and
build level.
Related Commands
Use the nzrev command to display the software revision level. Use the nzsystem showRev
command to show software revision levels.
Usage
The following provides some sample usage:
To display the software programs and their revisions, enter:
nzcontents
nzconvert
Use the nzconvert command to convert between any two encodings, between these encodings and UTF-8, and from UTF-32, -16, or -8 to NFC, for loading with the nzload command or external tables.
Syntax
The nzconvert command uses the following syntax:
nzconvert [-h|-rev] [options]
Options
For information on nzconvert options, refer to the IBM Netezza Database Users Guide.
Description
The nzconvert command has the following characteristics.
Privileges Required
No special privileges are required to use this command.
Common Tasks
Use the nzconvert command to convert character encoding before loading with the nzload
command or external tables.
Related Commands
Load converted data with the nzload command.
nzds
Use the nzds command to manage and obtain information about the data slices in the
system.
Syntax
The nzds command has the following syntax:
Inputs
The nzds command takes the following inputs:
Input Description
nzds show [options] - Displays information about the data slice topology. The show subcommand is the default; it displays a list of all the data slices on the system and information about status, the SPU that manages the data slice, the Primary Storage (that is, the disk ID where the primary copy of the data slice resides), the Mirror Storage (that is, the disk ID where the mirror copy of the data slice resides), and % Used (the amount of space in the data slice that contains data). You can specify one or more options to show specific output.
nzds show [-detail] Displays information about the data slice topology and
includes information about locations and disk space.
nzds show -spa id Displays information about the data slices which are
owned by a particular S-Blade in the SPA.
nzds show -dsId id Displays information about the specific data slice.
nzds show -id hwId Displays information about the data slices assigned to
the specified hardware.
nzds show -topology Displays the current storage topology. The output
shows how system resources such as SPUs, disks, SAS
switches, and HBA ports are utilized within the system
to support the storage paths.
nzds show -caCertFile path Specifies the pathname of the root CA certificate file
on the client system. This argument is used by
Netezza clients who use peer authentication to verify
the Netezza host system. The default value is NULL
which skips the peer authentication process.
nzds show -securityLevel level Specifies the security level that you want to use for the
session. The argument has four values:
preferredUnsecured This is the default value.
Specify this option when you would prefer an unse-
cured connection, but you will accept a secured
connection if the Netezza system requires one.
preferredSecured Specify this option when you
want a secured connection to the Netezza system,
but you will accept an unsecured connection if the
Netezza system is configured to use only unsecured
connections.
onlyUnsecured Specify this option when you
want an unsecured connection to the Netezza sys-
tem. If the Netezza system requires a secured
connection, the connection will be rejected.
onlySecured Specify this option when you want a
secured connection to the Netezza system. If the
Netezza system accepts only unsecured connec-
tions, or if you are attempting to connect to a
Netezza system that is running a release prior to
4.5, the connection will be rejected.
nzds show -regenStatus [-detail] - Displays information about the status of any disk regenerations that are in progress. The command displays information about the Data Slice being regenerated, its SPU owner, the Source data slice ID, its Destination data slice ID, the Start Time of the regeneration, and % Done. Include the -detail option for more information such as the locations of the SPUs and storage areas.
nzds show -issues [-detail] - Displays information about data slices that are reporting problems. The command displays a list of data slices to investigate and their Status, SPU, Primary Storage, Mirror Storage, and % Used. Include the -detail option for more information such as location details and data slice size.
Note: The size of the data slice is reported in gibibytes (GiB), that is, in units of 1024^3 bytes.
Options
The nzds command takes the following options:
Option Description
-timeout secs - Specifies the amount of time in seconds to wait for the command to complete before exiting with a timeout error. The default is 300.
Description
The nzds command has the following description.
Privileges Required
Your database user account must have the Manage Hardware privilege.
Common Tasks
Use the nzds command to manage and display information about the data slices in the system. You can also use this command to create a balanced topology for best performance of the system.
Related Commands
Use in conjunction with other system commands, such as the nzsystem and nzhw
commands.
Usage
The following provides some sample usage:
To display the status of data slice regenerations to spare disk destinations, use the following command:
nzds show -regenStatus
Data Slice SPU Source Destination Start Time % Done
---------- ---- ------ ----------- --------------- --------
5 1092 1035 1014 09-Apr-09, 07:24:55 EDT 0.01
6 1092 1035 1014 09-Apr-09, 07:24:55 EDT 0.01
To show the data slice information for the system, use the following command:
nzds show
Data Slice Status SPU Partition Size (GiB) % Used Supporting
Disks
---------- ------- ---- --------- ---------- ------ ---------------
1 Healthy 1017 2 356 58.54 1021,1029
2 Healthy 1017 3 356 58.54 1021,1029
To show the data slice issues reported for the system, use the following command:
nzds show -issues
Data Slice Status SPU Partition Size (GiB) % Used Supporting Disks
---------- -------- ---- --------- ---------- ------ ----------------
11 Degraded 1113 4 356 11.80 1091
12 Degraded 1113 5 356 11.79 1091
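Output like this can be post-processed with standard text tools; the sketch below flags degraded data slices from saved nzds show -issues output. The saved rows are a copy of the sample above, standing in for a live system:

```shell
# Saved 'nzds show -issues' output; on a live system, pipe the
# command output into awk instead of using this saved copy.
nzds_output='Data Slice Status   SPU  Partition Size (GiB) % Used Supporting Disks
---------- -------- ---- --------- ---------- ------ ----------------
11         Degraded 1113 4         356        11.80  1091
12         Degraded 1113 5         356        11.79  1091'

# Print the ID of each data slice whose Status column reads Degraded.
echo "$nzds_output" | awk '$2 == "Degraded" { print "data slice " $1 " needs attention" }'
```

The same filter works for any status value of interest, such as Healthy or Repairing.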
nzevent
Use the nzevent command to perform any of the following:
Show a list of event rules.
Copy a predefined template event rule and use it as your source to add a new rule.
Modify an existing event rule or a copied predefined template.
Add a new event rule.
Delete an event rule.
Generate events.
Syntax
The nzevent command uses the following syntax:
nzevent [-h|-rev|-hc] subcmd [subcmd options]
Inputs
The nzevent command takes the following inputs:
Input Description
Options
The nzevent command takes the following options:
nzevent add, nzevent copy, or nzevent modify:
-eventType type - Specifies the event type for the event. For a list of the event types, see Table 7-3 on page 7-9.
-eventArgsExpr expr - Specifies the optional match expression for further filtering. For more information, see Table 7-4 on page 7-13.
-name value If you are adding a new event, specifies the event
rule name. If you are copying an event, specifies
the name of the event you are copying. If you are
modifying an event, specifies the name of the
event that you are changing.
nzevent generate -eventType type Generates the specified type of event. For a list of
the event types, see Table 7-3 on page 7-9.
nzevent show -name rule_name - Displays only the event rule corresponding to the rule_name. If you do not specify a name, the command displays all event rules.
-orient type - Allows you to specify the output display. The valid values are:
Horizontal: Displays the event rules in a table.
Vertical: Displays each event rule as a complete record.
Auto: Selects the display based on the number of rows.
-securityLevel Specifies the security level that you want to use for
level the session. The argument has four values:
preferredUnsecured This is the default
value. Specify this option when you would pre-
fer an unsecured connection, but you will
accept a secured connection if the Netezza sys-
tem requires one.
preferredSecured Specify this option when
you want a secured connection to the Netezza
system, but you will accept an unsecured con-
nection if the Netezza system is configured to
use only unsecured connections.
onlyUnsecured Specify this option when you
want an unsecured connection to the Netezza
system. If the Netezza system requires a
secured connection, the connection will be
rejected.
onlySecured Specify this option when you
want a secured connection to the Netezza sys-
tem. If the Netezza system accepts only
unsecured connections, or if you are attempting
to connect to a Netezza system that is running a
release prior to 4.5, the connection will be
rejected.
Description
The nzevent command does the following:
Privileges Required
Your database user account must have Manage System privilege.
Common Tasks
Use the nzevent command to set the preconfigured event rules, and to create your own
event rules.
Related Commands
Use the nzsession command to view and manage sessions. Use the nzsystem command to
change system states.
Usage
The following provides some sample usage:
To add an event rule, enter:
nzevent add -name Newrule -u admin -pw password -host nzhost -on
yes -eventType sysStateChanged -eventArgsExpr '$previousState ==
online && $currentState != online' -notifyType email -dst
jdoe@netezza.com -msg 'NPS system $HOST went from $previousState to
$currentState at $eventTimestamp.' -bodyText
'$notifyMsg\n\nEvent:\n$eventDetail\nEvent
Rule:\n$eventRuleDetail'
To copy a template event rule from the template table to the user-modifiable table,
enter:
nzevent copy -u admin -pw password -useTemplate -name
HostNoLongerOnline -on yes -dst jdoe@netezza.com
To delete an event rule, enter:
nzevent delete -u admin -pw password -host nzhost -name Newrule
To generate an event rule, enter:
nzevent generate -u admin -pw password -host nzhost -eventtype
custom1 -eventArgs "customType=tooManySessions, numSessions=<n>"
To list event types, enter:
nzevent listEventTypes
To list notification types, enter:
nzevent listNotifyTypes
To modify an existing event rule, enter:
nzevent modify -u admin -pw password -host nzhost -name Newrule -on
yes -dst jdoe@netezza.com
To display a specific event rule, enter:
nzevent show -u admin -pw password -host nzhost -name Newrule
To display event rules vertically, enter:
nzevent show -u admin -pw password -host nzhost -orient vertical
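Commands like these are often assembled in scripts; the following sketch builds an nzevent add command from variables and prints it rather than executing it, so it can run off-host. The rule name, event type, and notification address here are hypothetical examples, not values from the source:

```shell
# Assemble an 'nzevent add' command from variables and print it.
# On a Netezza host, remove the leading 'echo' to run the command.
# The rule name, event type, and address are hypothetical examples.
rule_name="DiskSpaceWatch"
event_type="hwDiskFull"
notify_addr="jdoe@netezza.com"

echo nzevent add -name "$rule_name" -on yes \
    -eventType "$event_type" -notifyType email -dst "$notify_addr"
```

Parameterizing the rule this way makes it easy to create the same event rule consistently across multiple systems.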
nzhistcleanupdb
Use this command to periodically delete old history information from a history database.
Syntax
The command has the following syntax:
nzhistcleanupdb [options]
Inputs
The nzhistcleanupdb command takes the following input options. Note that the input
options have two forms for the option names.
Table A-8: nzhistcleanupdb Input Options
Input Description
-d | --db dbname - Specifies the name of the history database from which you want to remove old data. The name must be a valid, unquoted identifier.
-n | --host host - Specifies the hostname of the Netezza system where the database resides. The default and only value for this option is NZ_HOST.
-u | --user user - Specifies the user account that permits access to the database. The default is NZ_USER. The user must have Delete privileges on the history database tables.
-p | --pw password - Specifies the password for the user account. The default is NZ_PASSWORD.
-t | --time "<yyyy-mm-dd[,hh:mm[:ss]]>" - Specifies a date and time value; all history data with a time and date prior to this value will be deleted. The year, month, and day values are required. The hours, minutes, and seconds values are optional; if they are not specified, the default is 12:00 AM of the specified day.
Description
After running the nzhistcleanupdb command, you can groom the table to completely
remove the deleted rows in the table.
Privileges
You must be the nz user to run this command, and you must specify a database user
account who is either the owner or user of the history database or who has administration
privileges to update the history database and its tables.
Related Commands
See nzhistcreatedb for a description of how to create a history database.
Usage
The following sample command deletes history data which is older than October 31, 2009
from the histdb history database:
[nz@nzhost ~]$ nzhistcleanupdb -d histdb -u smith -pw password -t
"2009-10-31"
About to DELETE all history entries older than 2009-10-31 00:00:00
(GMT) from histdb.
Proceed (yes/no)? :yes
BEGIN
DELETE 0
DELETE 98
DELETE 34
DELETE 0
DELETE 0
DELETE 188
DELETE 188
DELETE 62
DELETE 65
DELETE 0
DELETE 0
DELETE 0
DELETE 503
COMMIT
If you also include the -g (or --groom) option, the command calls the GROOM TABLE command to reclaim the space of the deleted rows in the history database tables. Sample messages follow:
nzsql:/tmp/temp.2947.2:1: NOTICE: Groom processed 0 pages; purged 0
records; scan size unchanged; table size unchanged.
GROOM DEFAULT
nzsql:/tmp/temp.2947.2:2: NOTICE: Groom processed 0 pages; purged 0
records; scan size unchanged; table size unchanged.
GROOM DEFAULT
nzsql:/tmp/temp.2947.2:3: NOTICE: Groom processed 36 pages; purged
1449 records; scan size shrunk by 36 pages; table size shrunk by 36
extents.
GROOM DEFAULT
nzsql:/tmp/temp.2947.2:4: NOTICE: Groom processed 36 pages; purged
1440 records; scan size shrunk by 36 pages; table size shrunk by 36
extents.
GROOM DEFAULT
nzsql:/tmp/temp.2947.2:5: NOTICE: Groom processed 36 pages; purged
2284 records; scan size shrunk by 36 pages; table size shrunk by 36
extents.
GROOM DEFAULT
nzsql:/tmp/temp.2947.2:6: NOTICE: Groom processed 36 pages; purged
2284 records; scan size shrunk by 36 pages; table size shrunk by 36
extents.
GROOM DEFAULT
nzsql:/tmp/temp.2947.2:7: NOTICE: Groom processed 36 pages; purged
545 records; scan size shrunk by 36 pages; table size shrunk by 36
extents.
GROOM DEFAULT
nzsql:/tmp/temp.2947.2:8: NOTICE: Groom processed 36 pages; purged
545 records; scan size shrunk by 36 pages; table size shrunk by 36
extents.
GROOM DEFAULT
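Because the cutoff date advances daily, this cleanup is often run from a scheduled script. The following is a minimal sketch, assuming GNU date, that computes a 90-day cutoff and prints the cleanup command it would run; it prints rather than executes, since nzhistcleanupdb exists only on the Netezza host, and the database and user names are hypothetical:

```shell
# Compute a cutoff date 90 days in the past (GNU date syntax).
cutoff=$(date -d '90 days ago' '+%Y-%m-%d')

# Print the cleanup command; on a Netezza host, run it instead.
# The database name and user account here are hypothetical.
echo nzhistcleanupdb -d histdb -u smith -pw password -t "$cutoff" -g
```

A cron entry invoking such a script keeps the history database bounded without manual intervention.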
nzhistcreatedb
Use this command to create a history database with all its tables, views, and objects for
history collection and reporting.
Syntax
The command has the following syntax:
nzhistcreatedb [options]
Inputs
The nzhistcreatedb command takes the following input options. Note that the input options
have two forms for the option names.
Table A-9: nzhistcreatedb Input Options
Input Description
-d | --db dbname Specifies the name of the history database that you want
to create.
-n | --host host Specifies the hostname of the Netezza system where the
database will reside. The default and only value for this
option is NZ_HOST.
-t | --db-type dbtype Specifies the type of database to create. The only valid
value is query (or q or Q).
-p | --pw password Specifies the password for the owner user account. The
default is NZ_PASSWORD.
Outputs
The nzhistcreatedb command has the following output messages.
Table A-10: nzhistcreatedb Output Messages
Message Description
History database name created successfully ! - The command created the history database and all its tables and views.
ERROR: History database qhist not created: ERROR: GrantRevokeCommand: group/user "name" not found - The command failed because the specified user name did not exist on the system.
ERROR: History database dev not created: ERROR: createdb: object "hist1" already exists. - The command failed because the specified database name already exists on the system.
ERROR: History database hist1 not created: nzsql: Password authentication failed for user 'name' - The command failed because the password for the specified owner was not correct.
ERROR: History database hist1 not created: ERROR: CREATE DATABASE: permission denied. - The command failed because the specified owner does not have Create Database privileges on the system.
ERROR: History database hist1 not created: ERROR: GrantRevokeCommand: permission denied on "bug". - The specified owner account does not have List privilege on the specified user account or the User object class. The owner must have List privilege to complete the privilege assignments.
Description
The nzhistcreatedb command creates a history database and configures its ownership and
access permissions. It creates the history database object, all the history tables and views,
and grants the permissions for the owner and user accounts specified in the command.
Note that the command can take several (four to five) minutes to complete processing.
Privileges
You must be logged in as the nz user to run this command.
Related Commands
See nzhistcleanupdb for a description of how to periodically delete old history information
from the database.
Usage
The following sample command creates a query history database named qhist:
nzhistcreatedb -d qhist -t q -v 1 -u histusr -o myuser -p password
History database qhist created successfully !
Note: The command usually requires several minutes to complete, depending upon how
busy the Netezza system is.
nzhostbackup
Use the nzhostbackup command to back up the Netezza data directory and system catalog
on the host. In the rare situations when a Netezza host server or disk fails, but the SPUs
and their data are still intact, you can restore the /nz/data directory (or whatever directory
you use for the Netezza data directory) from the host backup without the additional time to
restore all of the databases. For more information, see Host Backup and Restore on
page 10-8.
Before running the nzhostbackup command, you must do one of the following:
Pause the system.
Set the NZ_USER and NZ_PASSWORD environment variables to a user who has permission to pause the system.
Set NZ_USER to a user who has permission to pause the system, and cache that user's password.
Note: If you run the nzhostbackup command, then change a user's password, and then run the nzhostrestore command, the restore reverts the password to its value at the time of the backup.
Syntax
The nzhostbackup command uses the following syntax:
nzhostbackup [-g GRACE_PERIOD] [-D DATA_DIR] FILE
nzhostbackup -h
Inputs
The nzhostbackup command takes the following inputs:
Input Description
FILE - Specifies the pathname of the archive file that you want to create. This file is a gzipped tar file.
-g GRACE_PERIOD - Specifies the maximum time to wait (in seconds) for queries (or any system action, such as a load) to finish before the system begins the backup. After the system has waited this amount of time, it cancels any remaining queries and starts the backup. The default is 60 seconds.
Description
The nzhostbackup command does the following:
Privileges Required
You must specify a database user account that has Manage System privileges.
Common Tasks
You can run the nzhostbackup command when the system is online, paused, offline, or
stopped.
If you run the nzhostbackup command while the system is online, the nzhostbackup command pauses the system for the duration of the backup. All currently running queries run to completion before the backup begins, subject to the grace period you specify with the -g option, or 60 seconds if you do not specify one. The system queues new queries until the backup completes.
Related Commands
Use the nzhostrestore command to restore your Netezza metadata.
Usage
The following provides some sample usage:
To back up the default data directory, enter:
nzhostbackup /home/host/backup.tar.gz
To specify a timeout period of 5 minutes, rather than the default 60 seconds, enter:
nzhostbackup -g 300 /home/host/backup.tar.gz
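Host backups are often written to timestamped archive files so that successive backups do not overwrite one another; a minimal sketch follows. The backup directory is hypothetical, and the command is printed rather than executed since nzhostbackup is available only on the Netezza host:

```shell
# Build a timestamped archive name such as /home/host/hostbackup_20091031.tar.gz.
backup_dir=/home/host            # hypothetical backup location
stamp=$(date '+%Y%m%d')
backup_file="$backup_dir/hostbackup_$stamp.tar.gz"

# Print the backup command; on a Netezza host, run it instead.
echo nzhostbackup -g 300 "$backup_file"
```

Keeping several dated archives lets you restore the catalog from a point before a recent configuration change.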
nzhostrestore
Use the nzhostrestore command to restore your Netezza data directory and metadata. The nzbackup and nzrestore commands also back up the system catalog and host data, but in situations where a Netezza host server fails and the SPUs and their data are still intact, you can use the nzhostrestore command to quickly restore the catalog data without reinitializing the system and restoring all of the databases. For more information, see Host Backup and Restore on page 10-8.
Note: After you perform an nzhostrestore, the system reverts to the mirroring roles (that is,
topology) it had when it was last online.
After you use the nzhostrestore command, note that you cannot perform an incremental
backup on the database; you must run a full backup first.
Syntax
The nzhostrestore command uses the following syntax:
Inputs
The nzhostrestore command takes the following inputs:
Input Description
nzhostrestore -D DATA_DIR - Specifies the Netezza data directory to restore. The default is the data directory (NZ_DATA_DIR), which is usually /nz/data.
Options
The nzhostrestore command uses the following options:
Option Description
-catverok - Skips the catalog verification checks. By default, the command checks the catalog version of the current /nz/data directory and the archived data directory. If the catalog versions are not the same, or if the command cannot detect the catalog version of the current data directory, the command exits with an error message similar to the following:
Unable to determine catalog version of data directory at
/nz/data.1.0, hence exiting. If you are sure that catalog
versions of current and that of the archived data
directory are same, use the command-line switch -catverok
to skip this check.
Use caution with this switch; if you are not sure that the catalog versions are the same, do not bypass the checks. Contact Netezza Support for assistance.
-f Specifies force, which causes the command to accept the defaults for
prompts and confirmation requests. The prompts appear at the beginning
and end of the program.
Restore host data archived Thu May 25 11:24:58 EDT 2006?
(y/n) [n]
Warning: The restore will now rollback spu data to Thu
May 25 11:24:58 EDT 2006. This operation cannot be
undone. Ok to proceed? (y/n) [n]
Description
The nzhostrestore command does the following:
Privileges Required
You must specify a database user account that has Manage System privileges.
Common Tasks
The nzhostrestore command pauses the system before starting the restore.
Note: After a restoration, any SPUs that previously had a role other than active, spare, or
failed are assigned to the role mismatched. The previous roles include assigned, inactive,
or mismatched.
For more information about SPU roles, see Hardware Roles on page 5-7. For more
information about the nzhw command, see nzhw on page A-26.
Notes
If tables are created after the host backup, the nzhostrestore command marks these tables
as orphaned on the SPUs. They are inaccessible and consume disk space. The
nzhostrestore command checks for these orphan tables and creates a script you can use to
drop orphaned user tables.
For example, if you ran the nzhostrestore command and it found orphaned tables, you
would see the following message:
Checking for orphaned SPU tables...
WARNING: found 2 orphaned SPU table(s).
Run sh /tmp/nz_spu_orphans.18662.sh after the restore has completed
and the system is Online to remove the orphaned table(s).
To drop the orphan tables, run the script, /tmp/nz_spu_orphans.18662.sh
Related Commands
Use the nzhostbackup command to back up your host metadata.
Usage
The following provides sample usage:
To restore the default data directory, enter:
nzhostrestore /home/host/backup.tar.gz
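The restore and the required follow-up full backup can be combined into a small maintenance script. The following is a minimal sketch, assuming an archive created earlier with nzhostbackup; the archive path, database name, backup directory, and credentials are illustrative, not defaults.

```shell
# Sketch of a host-restore cycle. The archive path, database name,
# backup directory, and credentials below are illustrative.
restore_host() {
    archive="$1"
    # nzhostrestore pauses the system itself before restoring.
    nzhostrestore "$archive" || return 1
    # The restore invalidates incremental backup chains, so take a
    # full database backup afterward (nzbackup is full by default).
    nzbackup -db sales -dir /backups -u admin -pw password
}
# Example call: restore_host /home/host/backup.tar.gz
```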
nzhw
Use the nzhw command to manage the hardware of the Netezza system. The command
allows you to show information about the system hardware as well as take actions such as
activate or deactivate components, locate components, or delete them from the system.
Syntax
The nzhw command has the following syntax:
nzhw [-h|-rev] [-hc] subcmd [subcmd options]
Inputs
The nzhw command takes the following inputs:
Input Description
nzhw activate -id hwId Makes a specified hardware component such as a SPU or a
disk available as a spare from a non-Active role (such as
Failed or Mismatched). Specify the hardware ID of the SPU
or disk that you want to activate.
Note: In some cases, the system may display a message that
it cannot activate the disk because the SPU has not finished
an existing activation request. Disk activation usually occurs
very quickly, unless there are several activations taking
place at the same time. In this case, later activations wait
until they are processed in turn.
nzhw deactivate -id hwId [-force] Changes the role of a spare SPU or a spare disk to
Inactive, which makes the component unavailable to the system.
Attempting to deactivate an active component that has a
role other than Spare results in an error.
Specify the hardware ID of the spare SPU or disk that you
want to deactivate. Include the -force option if you do not
want to be prompted with a confirmation.
nzhw failover -id hwId [-force] Changes the role of a SPU or disk to Failed, which makes
the component unavailable to the system. If you fail a SPU,
the system reassigns the data slices managed or owned by
that SPU to the other active SPUs in the chassis. Failing a
disk causes the system to use the disk's mirror partition as
the primary partition. For more information about the
processing of a failover, see Failover Information on
page A-30.
Specify the hardware ID of the SPU or disk that you want to
fail. Include the -force option if you do not want to be
prompted with a confirmation.
Input Description
nzhw locate [-id hwId | -all] [-off] Identifies a component and its location in the system.
When used with -id, the command displays a string for the
physical location of the hardware component identified by
the hwid value. For SPUs, disks, and disk enclosures, the
command also turns on its indicator LED so that a
technician at the Netezza system can find the component in the
rack.
Note: On the NEC InfoFrame DWH Appliance, the locate -id
command for a disk drive may require a few minutes to
complete on a busy system. The locate -all option can
sometimes require up to 10 minutes to complete.
When used with -all, the command turns on the indicator
LEDs of all the SPUs and disks in the system.
The -off option specifies that the command should turn off
the indicator LED for the specified component or all SPUs
and disks.
Note: If the hardware type specified for the command does
not have an LED, the command only displays the location
string for that component.
nzhw reset {-id hwId | -all} [-force] Resets the specified hardware component. Currently, only a
SPU is supported as a reset target using this command.
You can specify one of the following target options:
-id hwid to reset a particular SPU designated by its
hardware ID
-all to reset all SPUs in the system
-spa spaId to reset all the SPUs in the specific SPA
identified by its SPA ID.
Include the -force option if you do not want to be prompted
with a confirmation.
nzhw delete -id hwId [-force] Deletes the specified hardware component from the system
database. The hardware component must have a role of
Mismatched, Failed, or Inactive. A hardware component in any
other role results in an error. A SPU or disk can be identified
by its unique hardware ID.
Specify the hardware ID of the component that you want to
delete. Include the -force option if you do not want to be
prompted with a confirmation.
nzhw listTypes Displays a list of the valid hardware types that you can input
for the nzhw show -type hardwareType command.
Input Description
nzhw show [options] Displays information about the specified hardware
component(s). If you do not specify any options, the command
displays a list of every component in the system and its
Type, Hardware ID (HW ID), Location, Role, and State. You
can specify one or more options (described as follows) to
show specific output.
nzhw show -caCertFile Specifies the pathname of the root CA certificate file on the
client system. This argument is used by Netezza clients who
use peer authentication to verify the Netezza host system.
The default value is NULL, which skips the peer
authentication process.
nzhw show -securityLevel Specifies the security level that you want to use for the
session. The argument has four values:
preferredUnsecured This is the default value. Specify
this option when you would prefer an unsecured
connection, but you will accept a secured connection if the
Netezza system requires one.
preferredSecured Specify this option when you want a
secured connection to the Netezza system, but you will
accept an unsecured connection if the Netezza system is
configured to use only unsecured connections.
onlyUnsecured Specify this option when you want an
unsecured connection to the Netezza system. If the
Netezza system requires a secured connection, the
connection will be rejected.
onlySecured Specify this option when you want a
secured connection to the Netezza system. If the Netezza
system accepts only unsecured connections, or if you are
attempting to connect to a Netezza system that is
running a release prior to 4.5, the connection will be
rejected.
nzhw show -id hwId [-detail] Displays information only about the component with the
specified hardware ID. Include the -detail option for more
information such as serial number, hardware version, and
additional details.
nzhw show -spa [spa id] Displays information about the hardware components that
are owned by a particular S-Blade in a SPA.
nzhw show -type hwType [-detail] Displays information only about the components of the
specified hardware type. To display the supported hardware
types, use the nzhw listTypes command.
If the system has no hardware of the specified type, or if the
type is not supported, the command displays a message.
Include the -detail option for more information such as
serial number, hardware version, and additional details.
Input Description
nzhw show -issues [-detail] Displays information about hardware components that are
reporting problems. The command displays a list of
components to investigate and their Type, Hardware ID (HW ID),
Location, Role, and State. Include the -detail option for
more information such as serial number, hardware version,
and additional details.
Options
The nzhw command takes the following options:
Option Description
-timeout secs Specifies the amount of time in seconds to wait for the
command to complete before exiting with a timeout
error. Default is 300.
Description
The nzhw command has the following description.
Privileges Required
You must specify a database user account that has Manage Hardware privilege.
Common Tasks
The nzhw command is the primary command for managing and displaying information
about the Netezza system and its hardware components.
Related Commands
Use in conjunction with other system commands, such as the nzsystem and nzds
commands.
Failover Information
When you use the nzhw command to fail over a component, the command checks the
system and the affected component to make sure that the command is appropriate before
proceeding. Currently, the command operates only on SPUs and disks.
For example, if you try to fail over an active component that does not have an available
secondary component (such as SPUs that can take ownership of the data slices managed by
the SPU that you want to fail over, or an active mirror for the disk that you want to fail over),
the command returns an error. Similarly, if you try to fail over a component that is not
highly available, the command will return an error.
For IBM Netezza 1000 systems, one SPU can manage up to 16 data slices.
Usage
The following provides some sample usage:
To activate a failed or mismatched SPU identified as ID 1003, use the following
command:
nzhw activate -id 1003 -u user -pw password
To deactivate the spare disk identified by hardware ID 1081 without being prompted,
use the following command:
nzhw deactivate -id 1081 -force
To fail over the SPU identified by hardware ID 1084, use the following command:
nzhw failover -id 1084
To locate the SPU identified by hardware ID 1061, use the following command:
nzhw locate -id 1061
Turned locator LED 'ON' for SPU: Logical Name:'spa1.spu5' Physical
Location:'1st Rack, 1st SPA, SPU in 5th slot'.
To light the locator LED of all the SPUs and disks, use the following command:
nzhw locate -all
Turned locator LED 'ON' for all Spus and Disks.
To reset the SPU identified by hardware ID 1084, use the following command:
nzhw reset -id 1084
To reset all the SPUs in the SPA identified by ID 1002, use the following command:
nzhw reset -spa 1002
To delete the disk identified by hardware ID 1081, use the following command:
nzhw delete -id 1081
To show the hardware information for the system, use the following command:
nzhw show
Description HW ID Location Role State
------------- ----- --------------------- ------ ------
Rack 1001 rack1 Active None
SPA 1002 spa1 Active None
SPU 1003 spa1.spu7 Active Online
DiskEnclosure 1004 spa1.diskEncl4 Active Ok
Fan 1005 spa1.diskEncl4.fan1 Active Ok
Fan 1006 spa1.diskEncl4.fan2 Active Ok
Fan 1007 spa1.diskEncl4.fan3 Active Ok
Fan 1008 spa1.diskEncl4.fan4 Active Ok
PowerSupply 1009 spa1.diskEncl4.pwr1 Active Ok
PowerSupply 1010 spa1.diskEncl4.pwr2 Active Ok
To show specific information for a component such as the SPUs, use the following
command:
nzhw show -type spu
Description HW ID Location Role State
----------- ----- ---------- ------ ------
SPU 1003 spa1.spu7 Active Online
SPU 1080 spa1.spu1 Active Online
SPU 1081 spa1.spu3 Active Online
SPU 1082 spa1.spu11 Active Online
SPU 1084 spa1.spu5 Active Online
SPU 1085 spa1.spu9 Active Online
To show the hardware issues reported for the system, use the following command:
nzhw show -issues
Type HW ID Location Role State
---- ----- --------------------------- ------ -----
Disk 1041 rack1.spa1.diskEncl2.disk12 Failed Ok
To list the supported hardware types for the nzhw show -type hwType command, use
the following command:
nzhw listTypes
Description Type
------------- --------
rack rack
spa spa
spu spu
diskenclosure diskencl
disk disk
fan fan
blower blower
power supply pwr
mm mm
store group storeGrp
ethernet switch ethsw
host host
SAS Controller SASController
host disk hostDisk
database accelerator dac
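The tabular output of the nzhw show command lends itself to shell post-processing, for example to summarize an inventory by component type. The following is a sketch; the sample output is embedded in a function so the parsing can run standalone, and the rows shown are illustrative rather than from a live system.

```shell
# Count hardware components by type from `nzhw show`-style output.
# The embedded sample stands in for a live `nzhw show` call.
nzhw_output() {
cat <<'EOF'
Description   HW ID Location            Role   State
------------- ----- ------------------- ------ ------
Rack          1001  rack1               Active None
SPA           1002  spa1                Active None
SPU           1003  spa1.spu7           Active Online
Fan           1005  spa1.diskEncl4.fan1 Active Ok
Fan           1006  spa1.diskEncl4.fan2 Active Ok
EOF
}
# Skip the two header lines, then tally the first (Description) column.
counts=$(nzhw_output | awk 'NR > 2 { n[$1]++ } END { for (t in n) print t, n[t] }' | sort)
echo "$counts"
```

On a live system, `nzhw_output` would simply be replaced by `nzhw show`.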
nzload
Use the nzload command to load ASCII data into database tables. For a complete
description of the nzload command and how to load data into the Netezza system, refer to
the IBM Netezza Data Loading Guide.
nzpassword
Use the nzpassword command to manage passwords. The primary use is to store your
password locally and thus use Netezza CLI commands without having to type your password
on the command line.
Syntax
The nzpassword command uses the following syntax:
nzpassword subcmd [subcmd options]
Inputs
The nzpassword command takes the following inputs:
Input Description
nzpassword resetkey options In normal system operation and without any options, this
command creates a new, unique client key and
re-encrypts the user passwords with the new key.
If you have an existing password file that was created
using older (pre-Release 6.0 or pre-Release 4.6.6)
clients, this command also converts the old
Blowfish-encrypted passwords to AES-256-encrypted passwords.
The client key used for the encryption is auto-generated.
For more information about using encrypted passwords,
refer to Creating Encrypted Passwords on page 2-15.
Options
The nzpassword command uses the following options:
nzpassword show Shows the cached passwords for the current
user. The command displays the message
No cached passwords if there are none to display.
Description
The nzpassword command does the following:
Privileges Required
You must be logged in as nz or any valid Linux account for the Netezza system.
Common Tasks
Use the nzpassword command to store a local version of your password.
Related Commands
Use in conjunction with the CREATE USER or ALTER USER command.
Usage
The following provides sample usage:
To add a password, enter:
nzpassword add -u user -pw password -host nzhost
To delete a password, enter:
nzpassword delete -u user -host nzhost
To show the cached passwords, enter:
nzpassword show
To reset the client key and create new encryptions of the passwords, enter:
nzpassword resetkey
For more information about using encrypted passwords, refer to Creating Encrypted
Passwords on page 2-15.
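As a sketch of the usage pattern above, a provisioning script might cache the password once and then run subsequent CLI commands without the -pw option; the host name and credentials here are illustrative.

```shell
# Cache a password once, then run CLI commands without -pw.
# Host name and credentials are illustrative.
setup_cached_password() {
    nzpassword add -u admin -pw password -host nzhost || return 1
    # Later commands resolve the password from the local cache:
    nzsession show -u admin -host nzhost
}
```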
nzreclaim
Use the nzreclaim command to recover disk space used by updated or deleted data using
the GROOM TABLE command.
Note: Starting in Release 6.0, the SQL GROOM TABLE command has replaced the
nzreclaim command. The nzreclaim command is now a wrapper that calls the GROOM TABLE
command to reclaim space. If you have existing scripts that use the nzreclaim command,
those scripts will continue to run, although some of the options may be deprecated since
they are not used by GROOM TABLE. You should transition to using the GROOM TABLE
command in your scripts.
Syntax
The nzreclaim command uses the following syntax:
nzreclaim [-h|-rev] [options]
Inputs
The nzreclaim command takes the following inputs:
Input Description
nzreclaim -backupset options Specifies the backup set to use to find the rows that can
be reclaimed. By default, nzreclaim uses the most
recent backup set, but you can use this option to specify
a different backup set for the reclaim-backup
synchronization. If you specify NONE, the command reclaims all
rows regardless of whether they were saved in a backup
set.
nzreclaim -blocks options Removes empty blocks at the beginning of the table.
nzreclaim -startEndBlocks options Removes empty blocks from the beginning and the end
of the table.
Options
The nzreclaim command takes the following options:
Option Description
-db database Grooms one or all tables in a specific database [NZ_DATABASE]. You
can use the -t option to specify a table, or -allTbls to groom all the
tables.
-allDbs Grooms all databases. You can use the -t option to specify a table to
groom in all databases, or -allTbls to groom all tables in all databases.
-t tbl Grooms the specified table name. You must specify the database where
the table resides. You can use the -db option to groom the table in one
database, or -allDbs to groom that table in all the databases.
-allTbls Grooms all the tables in the database. You can use the -db option to
groom all the tables in one database, or -allDbs to groom all tables in
all databases.
Description
The nzreclaim command does the following:
Privileges Required
You must have the Groom object privilege for the tables that you want to reclaim or
reorganize.
Common Tasks
Use the nzreclaim command to groom tables and recover disk space. Specify either record-
level or block-level reclamation.
To remove all unused records throughout the table, specify nzreclaim -records.
To remove blocks from the beginning of the table, specify nzreclaim -blocks.
To remove unused blocks from the beginning and end of the table, specify nzreclaim
-startEndBlocks.
Related Commands
Use the TRUNCATE command if you are deleting an entire table.
Usage
The following provides sample usage:
To run a record-level groom on the mytable table in the emp database, enter:
nzreclaim -u admin -pw password -db emp -t mytable
nzsql -u admin -pw password emp -c"groom table mytable " 2>&1
NOTICE: Groom processed 392131 pages; purged 2342 records; scan
size unchanged; table size unchanged.
GROOM RECORDS ALL
To run a block-level groom on all the tables in the emp database, enter:
nzreclaim -u admin -pw password -blocks -db emp
To run a block-level groom and remove blocks from the beginning and end of the table,
enter:
nzreclaim -u user -pw password -startEndBlocks -db emp
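Because nzreclaim now wraps GROOM TABLE, equivalent grooms can be issued directly through nzsql. The sketch below assumes that the GROOM modes RECORDS ALL, PAGES START, and PAGES ALL correspond to the -records, -blocks, and -startEndBlocks options respectively; the database, table, and credentials are illustrative.

```shell
# Rough nzreclaim-to-GROOM TABLE equivalents, under the mode mapping
# described above. Database, table, and credentials are illustrative.
groom_examples() {
    # nzreclaim -db emp -t mytable  (record-level reclamation)
    nzsql -u admin -pw password emp -c "GROOM TABLE mytable RECORDS ALL"
    # nzreclaim -blocks -db emp -t mytable  (leading empty blocks)
    nzsql -u admin -pw password emp -c "GROOM TABLE mytable PAGES START"
    # nzreclaim -startEndBlocks -db emp -t mytable  (both ends)
    nzsql -u admin -pw password emp -c "GROOM TABLE mytable PAGES ALL"
}
```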
nzrestore
Use the nzrestore command to restore your database from a backup. For a complete
description of the nzrestore command and its use, see Using the nzrestore Command on
page 10-22.
nzrev
Use the nzrev command to display the Netezza software revision level.
Note: On Linux systems, you can use the nzcontents command to display the revision and
build number of all the executables, plus the checksum of binaries.
Syntax
The nzrev command uses the following syntax:
Inputs
The nzrev command takes the following inputs:
Input Description
nzrev -dirSuffix Displays the directory suffix form. For example, for
Release 5.0 Beta1, the output is:
5.0.B1
nzrev -rev Displays the entire revision string including all fields
(such as variant and patch level). For example:
5.0.0-0.B-1.P-0.Bld-7581
Note: Entering the nzrev -rev command on the host is
the same as entering the nzsystem showRev -u user -pw
password -host host command on the client system. If
you use only the nzrev command on the client, the
command displays the revision of the client kit.
nzrev -buildType Displays the type of build. Typical values are opt or dbg.
Description
The nzrev command does the following:
Privileges Required
You do not need special privileges to run the nzrev command.
Common Tasks
Use the nzrev command to display the revision level of Netezza software components.
Related Commands
See the nzcontents command.
Usage
The following provides sample usage:
To display the directory suffix form, enter:
nzrev -dirSuffix
5.0.6.P1
To display the revision level, enter:
nzrev -rev
Release 5.0.6 (P-1) [Build 11294]
To display the short form, enter:
nzrev -shortLabel
5.0.6
nzsession
Use the nzsession command to view and manage sessions.
Syntax
The nzsession command uses the following syntax:
nzsession subcmd [subcmd options]
Inputs
The nzsession command takes the following inputs:
Input Description
nzsession listSessionTypes Lists the session types, which include the following:
sql database SQL session
sql-odbc database SQL session through ODBC
sql-jdbc database SQL session through JDBC
load data load session (nzload)
client client UI or CLI session
bnr Backup and restore session
reclaim database reclaim session (nzreclaim)
loadsvr data load session (deprecated loader)
nzsession priority options Changes priority of the current and all subsequent jobs
of this session.
Options
The nzsession command takes the following options:
-securityLevel level Specifies the security level that you want to use for
the session. The argument has four values:
preferredUnsecured This is the default
value. Specify this option when you would
prefer an unsecured connection, but you will
accept a secured connection if the Netezza
system requires one.
preferredSecured Specify this option when
you want a secured connection to the Netezza
system, but you will accept an unsecured
connection if the Netezza system is configured to
use only unsecured connections.
onlyUnsecured Specify this option when you
want an unsecured connection to the Netezza
system. If the Netezza system requires a
secured connection, the connection will be
rejected.
onlySecured Specify this option when you
want a secured connection to the Netezza
system. If the Netezza system accepts only
unsecured connections, or if you are attempting
to connect to a Netezza system that is running
a release prior to 4.5, the connection will be
rejected.
-timeout secs Specifies the time to wait in seconds for the
command to complete. The default is 300.
nzsession show -activeTxn Displays the active transactions for the system.
Description
The nzsession command does the following:
Privileges Required
The admin user has full privileges to display all session information, to abort sessions and
transactions, and to change the priority of a session. Other database user accounts require
no special privileges to use the nzsession show command to see all the sessions that are
currently active on the system. However, non-admin users will see asterisks instead of the
user name, client process Id (PID), database, and SQL command unless they have List
privilege on User (to see details about the user, client PID, and SQL command) and List
privilege on Database (to see the database name). Users must have the Manage System
privilege to change the priority of sessions, and Abort privilege to abort sessions and/or
transactions.
Common Tasks
Use the nzsession command to manage sessions. Note that you cannot use a Release 5.0
nzsession client command to manage sessions on a Netezza system that is running a
release prior to 5.0.
Column Description
PID The process identification number of the command you are running.
State The state of the session, which can be one of the following:
Idle The session is connected but it is idle and waiting for a SQL
command to be entered.
Active The session is executing a command (usually applies to a
SQL session that is running a query).
Connect The session is connected, but no commands have been
issued.
Tx-Idle The session is inside an open transaction block (BEGIN
command) but it is idle and waiting for a SQL command to be
entered within the transaction.
Priority Name The priority of the session, which can be one of the following:
Critical The highest priority for user jobs.
High The session's jobs are running on the high priority job queue.
Normal The session's jobs are running on the large or small job
queue.
Low The lowest priority for user jobs.
Related Commands
Use in conjunction with the nzstats and nzsystem commands.
Usage
The following provides sample usage:
To show all sessions, enter:
nzsession show -u bob -pw password
ID Type User Start Time PID Database State Priority Name
Client IP Client PID Command
----- ---- -------- ----------------------- ----- -------- ------ -------------
--------- ---------- ------------------------
16049 sql ***** 28-Jan-10, 08:28:24 EST 26399 ***** active normal
***** *****
16052 sql BOB 28-Jan-10, 08:29:27 EST 26612 SYSTEM active normal
127.0.0.1 26611 SELECT session_id, clien
This sample output appears for a user (bob) who does not have permission to see the
details of the sessions on the system. Only the details for bob's sessions appear. For a
user who has List permission on user and database objects, the output shows all the
details:
nzsession show -u sysadm -pw password
ID Type User Start Time PID Database State Priority Name
Client IP Client PID Command
----- ---- -------- ----------------------- ----- -------- ------ -------------
--------- ---------- ------------------------
16049 sql DBUSR 28-Jan-10, 08:28:24 EST 26399 TPCH1 active normal
127.0.0.1 26398 select * from orders;
16054 sql SYSADM 28-Jan-10, 08:48:22 EST 30515 SYSTEM active normal
127.0.0.1 30514 SELECT session_id, clien
To abort a session, enter:
nzsession abort -u user -pw password -host nzhost -id 1344
To abort a transaction, enter:
nzsession abortTxn -u user -pw password -host nzhost -id 437
To list the types of sessions, enter:
nzsession listSessionTypes
To change the session priority, enter:
nzsession priority -u user -pw password -host nzhost -id 437 -high
To show all the active transactions, enter:
nzsession show -activeTxn
You can use the -activeTxn option to display the active sessions that will be impacted
by a state change (such as pausing -now) before you initiate the state change.
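For scripted session management, the output of nzsession show can be parsed to collect the session IDs that belong to a given user, for example to abort them in a loop. The sketch below embeds a simplified one-line-per-session sample (the real output spreads each session over two lines), so the parsing runs standalone; the user name is illustrative.

```shell
# Extract the IDs of sessions owned by one user from `nzsession show`-
# style output. The embedded, simplified sample stands in for a live call.
sample_output() {
cat <<'EOF'
 ID   Type User   Start Time              PID   Database State  Priority Name
----- ---- ------ ----------------------- ----- -------- ------ -------------
16049 sql  DBUSR  28-Jan-10, 08:28:24 EST 26399 TPCH1    active normal
16054 sql  SYSADM 28-Jan-10, 08:48:22 EST 30515 SYSTEM   active normal
EOF
}
# Skip the two header lines, then select rows whose User column matches.
ids=$(sample_output | awk 'NR > 2 && $3 == "DBUSR" { print $1 }')
echo "$ids"   # prints: 16049
# Each ID could then be passed to: nzsession abort -id <id>
```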
nzspupart
Use the nzspupart command to display information about the SPU partitions on an IBM
Netezza system including status information and the disks that support the partition.
Syntax
The nzspupart command uses the following syntax:
nzspupart [-h|-rev] [-hc] <subcmd> [<subcmd options>]
Inputs
The nzspupart command takes the following inputs:
Input Description
nzspupart show options Displays information about the specified partitions. If you do
not specify any options, the command displays a list of all
partitions and their ID, type, status, size, percent used, and
supporting disks. You can specify one or more options to show
specific output.
nzspupart regen options [-force] Starts regeneration for SPU partitions. If you do not specify any
options, the command searches for degraded partitions and
starts regeneration processes to the available spare disks.
Optionally, you can use the options -spu spuId, -part partId,
and -dest diskHwId to specify source and target information for
a specific regeneration. Include the -force option to start the
regen without prompting you for a confirmation.
Note that the regen option is not supported on IBM Netezza
C1000 appliances. On those platforms, the hardware controls
regenerations.
nzspupart listTypes Displays a list of the valid hardware types that you can input for
the nzspupart show -type spuPartitionType command.
Options
The nzspupart command takes the following options:
Description
The nzspupart command has the following description.
Privileges Required
You must specify a database user account that has Manage Hardware privilege.
Common Tasks
Use the nzspupart command to display information about the SPU partitions of an IBM
Netezza C1000 system, or to perform a partition regeneration when the partition is
degraded. You can use the command to obtain status about the partitions and the space
used within them, as well as whether regenerations are in progress, or if there are issues
that require your attention.
Related Commands
Use in conjunction with other system commands, such as the nzhw and nzds commands.
Usage
The following provides sample usage:
To display information about the SPU partitions, enter:
nzspupart
SPU Partition Id Partition Type Status Size (GiB) % Used Supporting Disks
---- ------------ -------------- ------- ---------- ------ -------------------------------
1255 0 Data Healthy 3725 0.00 1129,1151,1167
1255 1 Data Healthy 3725 0.00 1126,1148,1169
1255 2 Data Healthy 3725 0.00 1133,1150,1171
1255 3 Data Healthy 3725 0.00 1132,1145,1170
1255 4 Data Healthy 3725 0.00 1136,1137,1166
1255 5 Data Healthy 3725 0.00 1146,1149,1175
1255 6 Data Healthy 3725 0.00 1130,1153,1165
1255 7 Data Healthy 3725 0.00 1131,1155,1173
1255 8 Data Healthy 3725 0.00 1127,1152,1164
1255 100 NzLocal Healthy 11150 0.00 1134,1135,1147,1154,1168,1172,1174
1255 101 Swap Healthy 24 0.00 1134,1135,1147,1154,1168,1172,1174
1255 110 Log Healthy 1 0.00 1134,1135,1147,1154,1168,1172,1174
To list the SPU partition types, enter:
nzspupart listTypes
Description Type
----------- -------
Data data
NzLocal nzlocal
Swap swap
Log log
To start a partition regeneration:
nzspupart regen
Are you sure you want to proceed (y|n)? [n] y
Info: Regen Configuration - Regen configured on SPA:1 Data slice 2
and 1
If there are no degraded partitions, the command outputs the message No degraded
partitions. If the regen cannot proceed because there are no spare disks on the
system, the command outputs the message No spares disks available.
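A monitoring script can scan nzspupart show output for partitions whose status is not Healthy before deciding whether to start a regen. The sketch below embeds an abbreviated sample (the Supporting Disks column is omitted) so the parsing runs standalone; the degraded row is illustrative.

```shell
# Find partitions whose Status column is not Healthy in
# `nzspupart show`-style output. The embedded sample is illustrative.
spupart_output() {
cat <<'EOF'
 SPU  Partition Id Partition Type Status   Size (GiB) % Used
---- ------------ -------------- -------- ---------- ------
1255            0 Data           Healthy        3725   0.00
1255            1 Data           Degraded       3725   0.00
EOF
}
# Skip the two header lines; report SPU/partition pairs needing attention.
degraded=$(spupart_output | awk 'NR > 2 && $4 != "Healthy" { print $1 "/" $2 }')
echo "$degraded"   # prints: 1255/1
# A degraded pair would then be a candidate for:
#   nzspupart regen -spu <spuId> -part <partId>
```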
nzstart
Use the nzstart command to start system operation after you have stopped the system. The
nzstart command is a script that initiates a system start by setting up the environment and
invoking the startup server.
Note: You must run nzstart on the host. You cannot run it remotely.
Syntax
The nzstart command uses the following syntax:
nzstart [options]
Inputs
The nzstart command takes the following inputs:
Input Description
nzstart -log file Sends the daemon output to the log file instead of to /dev/null.
nzstart -timeout value Specifies the number of seconds to wait for the command to
complete before exiting with a timeout error. The default is
300.
nzstart -newSystem Starts a new Netezza system. (Used only the first time a new
system is started.)
Description
The nzstart command does the following:
Privileges Required
You must be able to log on to the host system as the nz user.
Common Tasks
Use the nzstart command to start system operation after you have stopped the Netezza
system. The nzstart command verifies the host configuration to ensure that the environment
is configured correctly and completely; it displays messages to direct you to files or settings
that are missing or misconfigured.
If the system is unable to start because of a hardware problem, the command typically
displays a timeout error message. You can review the sysmgr.log file to identify what
problems might have caused the nzstart command to fail.
For IBM Netezza 1000 systems, a message is written to the sysmgr.log file if there are any
storage path issues detected when the system starts. The log displays a message similar to
mpath -issues detected: degraded disk path(s) or SPU communication error which helps
to identify problems within storage arrays. For more information about how to check and
manage path failures, see Hardware Path Down on page 7-22.
Related Commands
See the nzstop command.
Notes
The nzstart script has a default timeout, which is 120 seconds + 3 * the number of SPUs.
(This default is subject to change in subsequent releases.)
If the system has not started by this time, the nzstart command returns and prints a
warning message indicating that the system has failed to start in xxx seconds. The system,
however, continues to try to start. You can override the default timeout by specifying a
timeout.
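A script that overrides the default can compute an explicit value the same way the default is derived (120 seconds plus 3 seconds per SPU); the SPU count used here is illustrative.

```shell
# Compute an nzstart timeout of 120 seconds + 3 seconds per SPU.
# The SPU count is illustrative; obtain the real count from `nzhw show`.
num_spus=24
timeout=$((120 + 3 * num_spus))
echo "$timeout"   # prints: 192
# nzstart -timeout "$timeout"
```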
Usage
The following provides sample usage:
To specify a directory, enter:
nzstart -D /tmp/data
To specify a log file, enter:
nzstart -log /tmp/startlog
To start without waiting, enter:
nzstart -noWait
To specify a timeout, enter:
nzstart -timeout 400
nzstate
Use the nzstate command to display the current system state or to wait for a particular sys-
tem state to occur.
Syntax
The nzstate command uses the following syntax:
nzstate [-h|-rev|-hc] subcmd [subcmd options]
Inputs
The nzstate command takes the following inputs:
Input Description
nzstate show options Displays the current state. This is the default if you type the
command without any arguments.
nzstate waitFor options Waits for the system to reach the specified state. Note that you
cannot wait for a state that ends in -ing.
Options
The nzstate command takes the following options:
nzstate waitFor -type state_type Waits for the specified state to occur. Use the
listStates subcommand to display the state
types.
Description
The nzstate command does the following:
Privileges Required
You do not need special privileges to run the nzstate listStates command. You must specify
a database user account to show or wait for states.
Common Tasks
Use the nzstate command to display the current state.
Related Commands
See the nzsystem command.
Usage
The following provides sample usage:
To list the states, enter:
nzstate listStates
State Symbol Description
------------ ------------------------------------------------------------
initialized used by a system component when first starting
paused already running queries will complete but new ones are queued
pausedNow like paused, except running queries are aborted
offline no queries are queued, only maintenance is allowed
offlineNow like offline, except user jobs are stopped immediately
online system is running normally
stopped system software is not running
down system was not able to initialize successfully
nzstats
Use the nzstats command to display operational statistics about system capacity, faults,
and performance.
Syntax
The nzstats command uses the following syntax:
Inputs
The nzstats command takes the following inputs:
Input Description
nzstats show options Displays the stats from the System Group table.
Options
The nzstats command takes the following options:
nzstats listFields -type type Specifies the type of group or table that you want
to list. The default is system. Valid values
include:
dbms DBMS Group
system System Group
database Database Table
host Host Table
hostCpu Host CPU Table
hostFileSystem Host File System Table
hostIf Host Interface Table
hostMgmtChan Host Management Channel
Table
hostNet Host Network Table
hwMgmtChan HW Management Channel
Table
query Query Table
queryHist Query History Table
spu SPU Table
spuPartition SPU Partition Table
table Table Table
tableDataSlice Per Table Per Data Slice
Table
You can list the valid types using the nzstats
listTypes command.
nzstats listTypes Lists the valid types for which you can display
information, as shown in the listFields
description.
-type type Specifies the type of table that you want to show.
The default is system. You can list the valid types
using the nzstats listTypes command.
-allocationUnits value For the Table table, outputs the disk space used
in bytes (default), extents, or blocks. The valid
values are usedbytes, extents, or usedblocks.
Description
The nzstats command does the following:
Privileges Required
Your database user account must have the Manage System privilege to show the actual sys-
tem statistics. Any user can list the fields and types.
Common Tasks
Use the nzstats command to display operational statistics.
Related Commands
Use in conjunction with the nzsession and nzsystem commands.
Usage
The following provides sample usage:
To list the types, enter:
nzstats listTypes
Group/Table Type Description
---------------- ------------------------------
dbms DBMS Group
system System Group
To show the columns that match the string Num Data Slices, enter:
nzstats show -u user -pw password -host nzhost -colMatch "Num Data
Slices"
Field Name Value
--------------- -----
Num Data Slices 46
nzstop
Use the nzstop command to stop system operation. Stopping a system stops all Netezza
host processes. Unless you specify otherwise, stopping the system waits for all running jobs
to complete.
Use either the nzsystem stop or the nzstop command to stop system operation. The nzstop
command is a script that initiates a system stop by halting all processing.
Note: You must run nzstop while logged in as a valid Linux user such as nz on the host. You
cannot run the command remotely.
Syntax
The nzstop command uses the following syntax:
nzstop options
Inputs
The nzstop command takes the following inputs:
Input Description
nzstop -timeout secs Specifies the number of seconds to wait for the command to
complete before exiting with a timeout error. The default is
300.
Options
The nzstop command takes the following options:
nzstop -h No options.
Description
The nzstop command does the following:
Privileges Required
You must be able to log on to the Netezza system as a valid Linux user such as nz.
Common Tasks
Use the nzstop command to stop the system.
Related Commands
See the nzsystem command.
Usage
The following provides sample usage:
To display help, enter:
nzstop -h
To specify a timeout of 300 seconds, enter:
nzstop -timeout 300
nzsystem
Use the nzsystem command to change the system state, and show and set configuration
information.
Syntax
The nzsystem command uses the following syntax:
nzsystem [-h|-rev|-hc] subcmd [subcmd_options]
Inputs
The nzsystem command takes the following inputs:
Input Description
nzsystem pause options Pauses the system. Use this command to pause the sys-
tem for administrative work, but allow all current
transactions to complete.
nzsystem restart options Stops and then automatically restarts the system.
nzsystem showState options Displays the system state. This is the default
subcommand if you type the nzsystem command without
any subcommands. It is also the same as the nzstate
show command.
Options
The nzsystem command takes the following options:
showRev -build Shows the build string for the Netezza software
as set by the Configuration Manager (CM).
Description
The nzsystem command does the following:
Privileges Required
You can run a subset of the commands such as showRev and showState using any database
user account. However, your database user account must have the Manage System privilege
to start or manage the system states as well as to set or show the registry settings.
Common Tasks
Use the nzsystem command to show and change system state.
Related Commands
See the nzstart, nzstop, and nzstate commands.
Usage
The following provides sample usage:
To take the system offline, enter:
nzsystem offline -u user -password password -host nzhost
To start the system again, use the nzsystem resume command.
To pause the system, enter:
nzsystem pause -u user -password password -host nzhost
To start the system again, use the nzsystem resume command.
To restart the system, enter:
nzsystem restart -u user -password password -host nzhost -now
To resume the system, enter:
nzsystem resume -u user -password password -host nzhost
To configure a system setting, enter:
nzsystem set -u user -password password -host nzhost -regFile
MaxRebootFreqPerHr
To display the system registry settings, enter:
nzsystem showRegistry -u user -password password -host nzhost
To display the revision level, enter:
nzsystem showRev -u user -password password -host nzhost
To display the system state, enter:
nzsystem showState -u user -password password -host nzhost
To display any system issues, enter:
[nz@nzhost ~]$ nzsystem showIssues
Hardware Issues :
Dataslice Issues :
Table A-35 describes some of the more common commands in the bin/adm directory.
These commands are divided into the following categories:
Safe Running the command causes no damage, crashes, or unpredictable behavior.
Unsafe Running the command causes no harm with some switches, but could
cause damage with other switches.
Dangerous Running the command could cause data corruption or a crash.
Note that these are unsupported commands and they have not been as rigorously tested as
the end-user commands.
nzconvertsyscase Unsafe Converts the Netezza system to the opposite case, for
example, from upper to lower case. For more infor-
mation, see nzconvertsyscase on page A-59.
nzconvertsyscase
Use the nzconvertsyscase command to convert the Netezza system to the opposite case, for
example from upper to lower or vice versa.
Note: Your database must be offline when you use this command (that is, use nzstop first
to stop the system).
Syntax
The nzconvertsyscase command uses the following syntax:
Inputs
The nzconvertsyscase command takes the following inputs:
Input Description
Note: You must specify either -l or -u. If you specify neither option, the command displays an
error. After converting your system, you must rebuild all views and synonyms in every
database.
Description
The nzconvertsyscase command does the following:
Common Tasks
Use the nzconvertsyscase command to convert from one default case to
another. The command uses the values in the objdelim and attdelim fields in the system
tables _t_object and _t_attribute to determine if the identifiers should be converted or
retained. The script converts only the names of objects and attributes created as regular
identifiers. It does not convert delimited identifiers.
Note: If you want to convert the identifier case within a database to the opposite of the
default system case, contact Netezza Support.
Usage
The following provides sample usage:
To convert to lowercase, enter:
nzconvertsyscase -l -D /nz/data
To convert to uppercase, enter:
nzconvertsyscase -u -D /nz/data
To validate the conversion, enter:
nzconvertsyscase -v -u -D /nz/data
nzdumpschema
Use the nzdumpschema command to generate a shell script with SQL statements that
duplicate a database by extracting the given database's schema and statistics.
Note: Because no actual data is dumped, you cannot use this command to back up a
database.
Syntax
The nzdumpschema command uses the following syntax:
nzdumpschema [-h] [-R] database [outfile] [outdir] [datadir]
The nzdumpschema command takes the following inputs:
Option Description
database Specifies the name of a database for which you want statistics
and the schema.
outdir Specifies the output directory where UDX object files registered
in the database will be written.
Description
You must be the admin user to run the nzdumpschema command.
Common Tasks
Use the nzdumpschema command to dump the table and view definitions,
the database statistical information, and optionally, any UDXs that are registered within the
database. It is a diagnostic tool that you can use to troubleshoot a variety of problems relat-
ing to a query.
You must run it from the host Netezza system.
You cannot use -u, -pw, -host, or other nz CLI options.
You must have set the NZ_USER and NZ_PASSWORD environment variables.
You must specify a database.
If the database includes registered user-defined objects (UDXs), you can also dump
copies of the object files that were registered for use with those routines.
If you do not specify an output file, the nzdumpschema command writes to standard
output.
Usage
The following provides sample usage:
To dump table and view definitions to the file named empDBOut, enter:
nzdumpschema empDB empDBOut
To dump the sales database to the file salesSchema and its user-defined objects to the
directory /tmp/UdxObjs, enter:
nzdumpschema sales salesSchema /tmp/UdxObjs
If you relocate the object files in /tmp/UdxObjs to another location, be sure to edit the
object pathnames used in the salesSchema file to reflect the new location of the object
files.
nzinitsystem
Use the nzinitsystem command only under the direction of Technical Support. This is a
dangerous command and must be used with extreme caution to avoid loss of data and
unpredictable system behavior.
The nzinitsystem command re-initializes a system by overwriting the catalog information on
the host, which results in loss of data. Typically this command is used to re-initialize a test
system when you want to remove all existing database information on that system. In
extreme cases, this command might be used to recover a system that has been altered
beyond repair, and Support has identified that reinitialization and restores are required for
recovery.
nzlogmerge
Each system component produces a log file that is stored in a subdirectory of the /nz/kit/log
directory. Each entry in this file contains a timestamp. For troubleshooting, it is often
necessary to merge these entries in chronological order.
To merge all the log files, the nzlogmerge command syntax is:
nzlogmerge list of files to merge
Options
The nzlogmerge command takes the following options:
Option Description
-v Verbose mode
-a datetime Captures the log entries after the specified time.
-b datetime Captures the log entries before the specified time.
The datetime value must be in the format YYYY-MM-DD HH:MM:SS.
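The chronological merge that nzlogmerge performs can be sketched in a few lines. The sample log entries below are invented for illustration; the real command operates on the files under /nz/kit/log:

```python
import heapq

# Invented sample entries from two component logs; each line begins with a
# "YYYY-MM-DD HH:MM:SS" timestamp, mirroring the Netezza log format.
dbos_log = [
    "2024-01-01 10:00:01 dbos: dispatcher started",
    "2024-01-01 10:00:05 dbos: plan 12 queued",
]
sysmgr_log = [
    "2024-01-01 10:00:03 sysmgr: polling SPUs",
    "2024-01-01 10:00:04 sysmgr: all SPUs online",
]

# Each component log is already sorted, so a k-way merge keyed on the
# leading 19-character timestamp yields one chronological stream.
merged = list(heapq.merge(dbos_log, sysmgr_log, key=lambda line: line[:19]))
for line in merged:
    print(line)
```

Because each input file is already in time order, the merge runs in a single pass without loading or re-sorting everything in memory.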
This appendix describes some of the common Linux procedures. For more details or infor-
mation about other procedures, refer to the Red Hat documentation.
B-1
IBM Netezza System Administrators Guide
The -r switch causes a reboot. You can specify either the word now or any time value. You
could use the -h switch to halt the system. In that case, Linux also powers down the host if
it can.
If you have a Netezza HA system, use caution when shutting down a host. Shutting down
the active host causes the HA software to fail over to the standby host to continue Netezza
operations, which may not be what you intended.
With both commands, you can specify which type of signal to send to stop the task. An
application has the option to intercept various types of signals and keep running, with the
exception of the kill signal (signal number 9, mnemonic SIGKILL). Any UNIX system that
receives a SIGKILL for a process must stop that process without any further action to meet
POSIX compliance (provided that you own the task or you are root). Both the kill and killall
commands accept the signal number as an argument preceded by a hyphen.
To stop the loadmgr process (number 2146), you could use any of the following commands:
kill -9 2146
killall -KILL loadmgr
kill -SIGKILL 2146
killall -9 loadmgr
Note: When you kill a process with the kill signal, you lose any unsaved data for that
process.
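The uncatchable nature of SIGKILL can be demonstrated with any child process; here a sleep command stands in for a task such as loadmgr:

```python
import signal
import subprocess

# Start a long-running child process as a stand-in for the task to stop.
proc = subprocess.Popen(["sleep", "60"])

# SIGKILL (signal 9) cannot be caught or ignored, so the child must stop.
proc.send_signal(signal.SIGKILL)
proc.wait()

# On POSIX systems, a return code of -N means the child was terminated
# by signal N.
print(proc.returncode)  # -9
```

The same effect is what `kill -9` and `killall -KILL` produce from the shell, which is why any unsaved state in the target process is lost.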
System Administration
This section describes some useful Linux commands that you can use.
Displaying Directories
You can use the ls command to display information about directories:
ls -l Displays the long listing.
ls -lt Sorts the listing by modification time.
ls -ltu Sorts the listing by access time. This is useful to find out who accessed the
file, when, and which files were used.
ls -l --full-time Includes the full date and time in the listing.
Finding Files
You can use several commands to locate files, commands, and packages:
locate string Locates any file on the system that includes the string within the name.
The search is fast because it uses a cache, but it might not show recently added files.
find -name *string* Finds any file in the current directory, or below the current
directory, that includes string within the name.
which command Displays the full path for a command or executable program.
rpm -qa Lists all the packages installed on the host.
Miscellaneous Commands
You can use the following commands for system administration:
nohup command Runs a command immune to hangups and creates a log file. Use
this command when you want a command to run no matter what happens with the sys-
tem. For instance, use it to avoid having a dialup, VPN timeout, or disconnected
network cable cancel your job.
unbuffer command Disables the output buffering that occurs when the program's
output is redirected. Use this command when you want to see output immediately.
UNIX systems buffer output to a file, so a command can appear hung until the
buffer is flushed.
colrm [startcol [endcol]] Removes selected columns from a file or stdin.
split Splits a file into pieces.
This appendix contains information about Netezza user and system views.
User Views
Table C-1 describes the views that display user information. Note that to see a view, users
must have the privilege to list the object.
_v_datatype Returns a list of all system datatypes. Fields: objid, DataType, Owner, Description, Size. Ordered by: DataType. nzsql command: \dT
_v_function Returns a list of all defined functions. Fields: objid, Function, Owner, CreateDate, Description, Result, Arguments. Ordered by: Function. nzsql command: \df
_v_groupusers Returns a list of all users of a group. Fields: objid, GroupName, Owner, UserName. Ordered by: GroupName, UserName. nzsql command: \dG
_v_index Returns a list of all user indexes. Fields: objid, IndexName, TableName, Owner, CreateDate. Ordered by: TableName, IndexName. nzsql command: \di
_v_operator Returns a list of all defined operators. Fields: objid, Operator, Owner, CreateDate, Description, oprname, oprleft, oprright, oprresult, oprcode, and oprkind. Ordered by: Operator. nzsql command: \do
_v_procedure Returns a list of all the stored procedures and their attributes. Fields: objid, procedure, owner, createdate, objtype, description, result, numargs, arguments, proceduresignature, builtin, proceduresource, sproc, and executedasowner. Ordered by: Procedure.
_v_relation_column Returns a list of all attributes of a relation (table, view, index, and so on). Fields: objid, ObjectName, Owner, CreateDate, ObjectType, attnum, attname, format_type(attypid,attypmod), and attnotnull. Ordered by: ObjectName and attnum.
_v_relation_column_def Returns a list of all attributes of a relation that have defined defaults. Fields: objid, ObjectName, Owner, CreateDate, Objecttype, attnum, attname, and adsrc. Ordered by: ObjectName and attnum.
_v_sequence Returns a list of all defined sequences. Fields: objid, SeqName, Owner, and CreateDate. Ordered by: SeqName. nzsql command: \ds
_v_session Returns a list of all active sessions. Fields: ID, PID, UserName, Database, ConnectTime, ConnStatus, and LastCommand. Ordered by: ID. nzsql command: \act
_v_table Returns a list of all user tables. Fields: objid, TableName, Owner, and CreateDate. Ordered by: TableName. nzsql command: \dt
_v_table_dist_map Returns a list of all fields used to determine the table's data distribution. Fields: objid, TableName, Owner, CreateDate, DistNum, and DistFldName. Ordered by: TableName and DistNum.
_v_table_index Returns a list of all user table indexes. Fields: T.objid, TableName, T.Owner, IndexName, CreateDate, I.indkey, I.indisunique, I.indisprimary, T.relhasrules, and T.relnatts. Ordered by: TableName and IndexName.
_v_user Returns a list of all users. Fields: objid, UserName, Owner, ValidUntil, and CreateDate. Ordered by: UserName. nzsql command: \du
_v_usergroups Returns a list of all groups of which the user is a member. Fields: objid, UserName, Owner, and GroupName. Ordered by: UserName and GroupName. nzsql command: \dU
_v_view Returns a list of all user views. Fields: objid, ViewName, Owner, CreateDate, relhasindex, relkind, relchecks, reltriggers, relhasrules, relukeys, relfkeys, relhaspkey, and relnatts. Ordered by: ViewName. nzsql command: \dv
System Views
Table C-2 describes the views that display system information. You must have administrator
privileges to display these views.
_v_sys_index Returns a list of all system indexes. Fields: objid, SysIndexName, TableName, and Owner. Ordered by: TableName and SysIndexName. nzsql command: \dSi
_v_sys_priv Returns a list of all user privileges. This is a cumulative list of all group and user-specific privileges. Fields: UserName, ObjectName, DatabaseName, aclobjpriv, acladmpriv, aclgobjpriv, and aclgadmpriv. Ordered by: DatabaseName and ObjectName. nzsql command: \dp <user>
_v_sys_table Returns a list of all system tables. Fields: objid, SysTableName, and Owner. Ordered by: SysTableName. nzsql command: \dSt
_v_sys_view Returns a list of all system views. Fields: objid, SysViewName, and Owner. Ordered by: SysViewName. nzsql command: \dSv
This appendix provides a reference for many of the system configuration file settings. You
can display the current system configuration file settings using the nzsystem showRegistry
command. For more information, see nzsystem on page A-55.
Never change or customize the system registry unless directed to by Netezza Support or by
a documented Netezza procedure. The descriptions in this appendix are provided for refer-
ence information only.
Note: A default of zero in many cases indicates a compiled default not the actual value
zero. Text (yes/no) and numbers indicate actual values.
startup.autoRestart yes Specifies whether to restart the system if a SPU reset fails. FOR
INTERNAL USE ONLY. DO NOT CHANGE.
startup.dbosStartupTimeout 300 Specifies the startupsvr's timeout for launching the dbos dispatch
process at system startup. FOR INTERNAL USE ONLY. DO NOT
CHANGE.
startup.hostSwapSpaceLimit 131072 Specifies the maximum work space on the host. FOR INTERNAL
USE ONLY. DO NOT CHANGE.
startup.numSpares 0 Historical.
startup.numSpus 14 Historical.
startup.overrideSpuDiskSize no Specifies whether to override the SPU disk size check. FOR
INTERNAL USE ONLY. DO NOT CHANGE.
startup.overrideSpuRev 0 Overrides the SPU revision. FOR INTERNAL USE ONLY. DO NOT
CHANGE.
startup.planHistFiles 2000 Specifies the number of files that can exist in the
/nz/kit/log/planhist/ directory.
startup.queryHistTblSize 2000 Specifies the number of queries to maintain in the Query History
table. The default and suggested value is 2,000. The range of
values permitted is 0 to 15000.
Note: This setting is used for the _v_qryhist view, which is main-
tained for backward compatibility. For more information about the
new query history, see Chapter 11, Query History Collection and
Reporting.
startup.startupTimeout 600 Specifies the number of seconds of grace after system startup.
Allows for staggered starting of SPUs.
startup.virtualDiskSize 128 Simulator mode. FOR INTERNAL USE ONLY. DO NOT CHANGE.
sysmgr.coreCountFailover 1 Specifies the number of SPU CPU cores that can fail before the sys-
tem manager fails over the SPU.
sysmgr.eccErrCountFailover 300 Specifies the number of correctable single bit ECC errors to allow
before failing over.
sysmgr.eccErrDurationFailover 0 Specifies the time interval across Netezza reboots that the system
tracks ECC errors. Zero indicates forever.
sysmgr.enableBalancedRegen yes Specifies whether balanced regen is enabled. Does not apply to IBM
Netezza 1000 or IBM PureData System for Analytics N1001
models.
sysmgr.enableDiskFpgaFailover yes Specifies whether to failover the disk on an FPGA error. Does not
apply to IBM Netezza 1000 or IBM PureData System for Analytics
N1001 systems.
sysmgr.enclStatusElementFilterForFailover 160 Specifies a decimal value that represents a combination of the SCSI
element status (SES) codes for which the system manager will fail
over a disk drive. The status codes and their numeric values follow:
Unsupported = 0
OK = 2
Critical = 4
Non Critical = 8
Unrecoverable = 16
Not Installed = 32
Unknown = 64
Not Available = 128
No Access = 256
Manufacturers Reserved = 512
Manufacturers Reserved = 1024
Manufacturers Reserved = 2048
Manufacturers Reserved = 4096
Manufacturers Reserved = 8192
Manufacturers Reserved = 16384
Manufacturers Reserved = 32768
The default value of 160 is a combination of the Not Installed and
Not Available options. This means that, by default, the system will
fail a disk only when its SES status is either "Not Installed" or "Not
Available". The system sends a Hardware Status Requested event for
all of these status codes.
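Because the filter is a bitmask, the default of 160 is simply the OR of the two status codes. A short sketch (the dictionary below just restates the codes from this entry; the stricter filter at the end is a hypothetical example, not a recommended value):

```python
# SES status codes and their numeric values, as listed above.
SES = {
    "Unsupported": 0, "OK": 2, "Critical": 4, "Non Critical": 8,
    "Unrecoverable": 16, "Not Installed": 32, "Unknown": 64,
    "Not Available": 128, "No Access": 256,
}

# The default filter value, 160, combines Not Installed (32) and
# Not Available (128).
default_filter = SES["Not Installed"] | SES["Not Available"]
print(default_filter)  # 160

# A hypothetical stricter filter that also fails drives in Critical status:
print(default_filter | SES["Critical"])  # 164
```

Any combination of codes can be expressed the same way by ORing the corresponding values together.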
sysmgr.logDiskSuccessOnRetry yes Specifies whether to log retry successes for disk I/O operations.
sysmgr.maxAggregateEventInterval 120 Specifies the time interval (seconds) during which events are
aggregated.
sysmgr.maxRebootFreqPerHr 3 Specifies the maximum number of reboots per hour before the sys-
tem marks the SPU as failed.
sysmgr.numberOfDownPortToRiseEvent 5 Specifies the number of ports on the same switch that must be in
the down state for a specified time (defined by
sysmgr.portDownTime1ToRiseEvent) before the system logs a HW_
NEEDS_ATTENTION event. If you specify zero (0), the system will
not log an event for this condition.
sysmgr.pausingStateTimeout 420 Specifies the number of seconds that the Netezza system can be in
the Pausing Now state before a stuck in state timeout event occurs.
The timeout should be one minute (60 seconds) longer than the
sysmgr.failOverTimeout.
sysmgr.portDownTime1ToRiseEvent 300 Specifies the number of seconds that a port must be in the down
state before the system logs a HW_NEEDS_ATTENTION event.
(Ports can sometimes change states for short periods of time in nor-
mal conditions, so this setting helps to avoid "false" events for short
state changes.) A value of 0 disables the time duration requirement:
as soon as the numberOfDownPortToRiseEvent number has been
met, the system manager logs an event.
sysmgr.portDownTime2ToRiseEvent 600 Specifies the number of seconds that any one port must be in the
down state before the system logs a HW_NEEDS_ATTENTION event
for that port. A setting of 0 disables this time check, so the system
manager logs the HW_NEEDS_ATTENTION event when it detects
that a port is down.
sysmgr.sfiResetTimeout 600 Specifies the timeout value of the SFI. The value is in seconds.
sysmgr.smartErrCountFailover 1 Specifies the number of SMART errors to allow before failing over.
sysmgr.smartErrDurationFailover 0 Specifies the time interval across Netezza reboots that the system
tracks SMART errors. Zero means forever.
sysmgr.spuAppDownloadTimeout 480 Specifies the time in seconds to wait for the application data to
download to a SPU before the SPU is reset (that is, power-cycled).
sysmgr.spuDiscoveryTimeout 360 Specifies the time in seconds to wait for a SPU to complete discov-
ery before the SPU is reset (that is, power-cycled).
sysmgr.spuDumpTimeout 1440 Specifies the number of seconds a SPU can send a core file to the
host before it is reset.
sysmgr.spuInitializingTimeout 90 Specifies the time in seconds to wait for a SPU to finish initializing
before the SPU is reset (that is, power-cycled).
sysmgr.spuPollReplyTimeout 600 Specifies the number of seconds to wait for a poll reply from a SPU
before the system manager resets it (that is, reboots Linux on a
SPU).
sysmgr.spuPollReplyWarningInterval 90 Specifies the number of seconds to wait for a poll reply from a SPU
before the system manager logs a warning message in the sysmgr.log
file.
sysmgr.syncingStateTimeout 900 Specifies the number of seconds that the Netezza z-series system
can be in the Synchronizing state before a stuck in state timeout
event occurs. Does not apply to IBM Netezza systems.
sysmgr.testNoRegen no Specifies whether the noregen test is enabled. FOR INTERNAL USE
ONLY. DO NOT CHANGE.
host.bnrFileSizeLimitGB 1024 Specifies the maximum file size, in GB, that the backup
process creates when backing up a database. The backup
process creates a series of files of this size to ensure that it
does not exceed the file size limitations of the backup
destination(s).
host.bnrStreamInitTimeoutSec 300 Specifies the number of seconds to wait for the backup pro-
cess to test each stream of a multi-stream backup. If the test
completes within the timeout limit, the backup process con-
tinues with the requested backup. If the timeout expires
before the test completes, the problem typically is that you
requested more streams than the tool can support for one
backup operation. Review the backup tool documentation to
ensure that you do not specify more streams than the tool
can support.
A value of 0 disables the timeout test to each stream.
host.expressAckFreq 4 Unused.
host.nzstatsRequireAdmin yes Specifies that only the admin user can run the nzstats com-
mand. If set to no, other users who also have Manage System
privilege will be allowed to run the following commands:
nzstats show -type database
nzstats show -type table
nzstats show -type query
nzstats show -type queryHist
host.qcMaxLoadMemory 1350 Specifies the total amount of shared memory available for all
loads. The default calculation is 80 percent of (TotalPhysi-
calMemory - sizeof (Standard Netezza Shared Memory)). If
you specify another number you could reduce the amount of
memory allocated to loads.
host.schedGRAHorizon 3600 Specifies the amount (in seconds) of scheduler usage history
to maintain.
host.schedGRAOverLimit 5 Specifies the over served amount between the actual GRA
and the specified GRA for the resource group.
host.schedGRAUnderLimit -5 Specifies the under served amount between the actual GRA
and the specified GRA for the resource group.
host.schedGRAVeryOverLimit 10 Specifies the very over served amount between the actual
GRA and the specified GRA for the resource group.
host.schedGRAVeryUnderLimit -10 Specifies the very under served amount between the actual
GRA and the specified GRA for the resource group.
host.schedSQBMistakesSecs 20 Unused.
host.snDiskReadCost 4200 Specifies the cost (in ticks) for reading 128 KB data blocks
from the SPU disks.
host.snDiskWriteCost 4200 Specifies the cost (in ticks) for writing 128 KB data blocks
from the SPU disks.
host.snFabricTableBlocks 1536 Specifies the assumed size (in 128 KB blocks) of a table
that is materialized and processed by DBOS, rather than
streaming through in fixed size work units. This size is
charged against the snHostMemoryQuota for each snippet
that has such a table.
host.snHostFabricCost 4200 Specifies the cost (in ticks) for handling 128 KB data blocks
into/out of the host.
host.snHostMemoryQuota 16384 Specifies the number of 128 KB blocks on the host that the
snippet scheduler resource management allocates to
snippets.
host.snPriorityWeights 1,2,4,8 Specifies the weights assigned to low, normal, high and crit-
ical jobs.
host.snSPUFabricCost 31250 Specifies the cost (in ticks) of writing 128 KB data blocks
onto the fabric from the SPU.
host.snSpuMemoryQuota 10000 Specifies the number of 128KB blocks on SPU that the
snippet scheduler resource management allocates to
snippets.
host.snSpuSortSizeFactor 10000 Specifies the scaling factor for the sorted data set size on
SPUs.
host.streamBatchSize 20971520 Specifies the return set batch size limit in bytes.
host.unloadWriteFlushThresholdMB 100 Specifies the value in MB at which the unload flushes the
page cache. FOR INTERNAL USE ONLY. DO NOT CHANGE.
system.allocateBuffersVirtual no Specifies whether the system assigns all buffer allocations log-
ical addresses that are not the same as their physical
addresses. DO NOT CHANGE, FOR INTERNAL USE ONLY.
system.dbosAggrWorkBlocks 4096 Specifies the upper limit (bytes) on the space used for the
aggregation operation. FOR INTERNAL USE ONLY. DO NOT
CHANGE.
system.dbosSortWorkBlocks 4096 Specifies the upper limit (bytes) on the space used for the sort
operation. FOR INTERNAL USE ONLY. DO NOT CHANGE.
system.dbosWindowsAggrWork- 4096 Specifies the upper limit (bytes) on the space used for win-
Blocks dows aggregation. FOR INTERNAL USE ONLY. DO NOT
CHANGE.
system.disableGlobalCRC no Specifies whether to disable all new CRC processing. Note that
the FPGA will still calculate (but not validate) CRCs. FOR
INTERNAL USE ONLY. DO NOT CHANGE.
system.disableMicroRegen no Unused.
system.disablePartialWriteRecov- no Specifies whether the system copies new disk block content to
ery non-volatile memory. INTERNAL USE ONLY. DO NOT
CHANGE.
system.diskSmartPollInterval 86400 Specifies the interval (in seconds) at which the disk controller
is polled for SMART attribute TEC values.
system.diskXferTimeout 31 Specifies the number of seconds the SPU disk driver waits for
a response after issuing an I/O request. The valid range is from
5-7200 seconds. You cannot set it to a value outside this
range.
system.durableMirroring yes Ensures that the primary and mirror data are updated on trans-
action commit. FOR INTERNAL USE ONLY. DO NOT CHANGE.
system.enableAckAggrLdrRotation yes Specifies the broadcast ack aggregation protocol. When set to
no, the same SPUs will always be the leaders for aggregation.
Changing this parameter could cause decreased performance on
those SPUs. FOR INTERNAL USE ONLY. DO NOT CHANGE.
system.enableLargeTables yes This option specifies whether your Netezza system will use
large tables:
When set to yes, allows a table to consume all of the
available disk space in a dataslice. (Such large tables are
generally not recommended.)
When set to no, enforces the previous limit that a table
could not consume more than 64GB per dataslice.
system.enableResetLog yes Enables the reset log for exception errors. FOR INTERNAL
USE ONLY. DO NOT CHANGE.
system.extentsPerCRCBurst 2 Specifies the number of disk extents (at 24 blocks per extent
in the data partition) that the upgrade process computes and
validates before sleeping. FOR INTERNAL USE ONLY. DO NOT
CHANGE.
system.fpgaRecSizeIncrPct 0 Enables decreasing the size of the FPGA's scan buffer. FOR
INTERNAL USE ONLY. DO NOT CHANGE.
system.fpgaTotalBufSize 3145728 Enables decreasing the size of the FPGA buffer. FOR INTER-
NAL USE ONLY. DO NOT CHANGE.
system.funnelsPerNIC 32 Specifies the number of funnels per NIC to avoid packet loss.
system.hashflags 288 Specifies the details of hash table construction and probing.
FOR INTERNAL USE ONLY. DO NOT CHANGE.
system.heatNotifyEnabled yes Specifies whether notification is sent when the SPUs and SFI
cross temperature thresholds.
system.heatThresholdRearmInter- 30 Specifies the rearm interval. The rearm policy is the same for
val both yellow and red alerts. There is only one interval for the
entire system.
system.host2spuAckFrequency 0 Specifies a single comm ack for this number of packets. The
default is one for every 5 packets. FOR INTERNAL USE ONLY.
DO NOT CHANGE.
system.host2spuSendWindow 0 Specifies the send window for host to SPU distributes (in
packets). FOR INTERNAL USE ONLY. DO NOT CHANGE.
system.host2spuTransSkewKB 128 Specifies the number (in kilobytes) to represent the maximum
amount of data expected to cause transient skew. In other
words, at any given time during a load or host distribute, the
host may parse/read 128KB for a particular destination data
slice before finding data for the next data slice. The system
uses this number to calculate the amount of memory for
receive buffers for a host2spu channel.
system.maxJumboFrameSize 9000 Limits the size of the jumbo frames used in the system. 9000
is the maximum size. FOR INTERNAL USE ONLY. DO NOT
CHANGE.
system.maxSpuDistPlans 99999 This setting has been deprecated as of Release 4.6 and is no
longer used.
system.nuclStackThreshold 7000 Specifies the expression evaluator stack limit on the SPU.
system.recPtrMaxCfg 16777216 Specifies the number of array elements used in the sorting
machine.
system.regenOomRetryCount 6000 Specifies the total number of retry attempts before aborting an
out of memory regeneration.
system.regenOomRetryThresholdSecs 1800 Specifies the total amount of time (in seconds) spent sleeping
on an out of memory regeneration.
system.regenSkipBadHeaderCheck no Historical.
system.rowIdChunkSize 100000 Specifies the number of row IDs assigned at one time.
system.rtxTimeoutMillis 300 Specifies the minimum time (in milliseconds) that fcomm
waits before retransmitting a packet.
system.rtxWakeupMillis 200 Specifies the time interval (in milliseconds) that the fcomm
retransmit task sleeps, before checking if packets need to be
retransmitted.
system.spu2spuAckFrequency 0 Specifies a single comm ack for this number of packets. FOR
INTERNAL USE ONLY. DO NOT CHANGE.
system.spu2spuSendWindow 0 Specifies the send window for SPU-SPU distributes (in pack-
ets). FOR INTERNAL USE ONLY. DO NOT CHANGE.
system.spuAbortBackTraceVerbosity 2 Specifies the SPU abort print buffer stack dump verbosity
level.
system.spuCpuModel 2 Specifies the PowerPC chip. 1 is the 855; 2 is the 405. FOR
INTERNAL USE ONLY. DO NOT CHANGE.
system.spuCtrlRAW 16 Specifies the RAW for spu control channel. FOR INTERNAL
USE ONLY. DO NOT CHANGE.
system.spuDiskSchedStarvationThreshold -1 Specifies the SPU disk I/O scheduler scheduling parameter
(-1 = use the compiled-in default). FOR INTERNAL USE ONLY.
DO NOT CHANGE.
system.spuJobPrioBiasIntervalMs 2000 Specifies the number of milliseconds that can elapse before a
job is no longer defined as short.
system.spuMACMb 100 Specifies the speed of the Ethernet backplane: 100 MB for
Sparrow/Finch, 1 GB (1000) for Mustang. FOR INTERNAL USE
ONLY. DO NOT CHANGE.
system.spuMemoryMB 512 Specifies the amount of RAM on the SPU. Internal use only.
system.spuMTU 0 Specifies the Maximum Transfer Unit, that is, the maximum
packet size.
system.spunetrxOomFatalTimeoutSecs 3600 Specifies the SPI low memory comm receive deadlock
threshold for a SPU abort.
system.spunetrxOomTimeoutSecs 360 Specifies the SPI low memory comm receive deadlock thresh-
old for a query abort.
system.spuPlanWorkBlocks 2000 Specifies the gross (not net) amount of memory available to
one snippet on the SPU.
system.spuSwapSpaceConfigured 0 Controls the size of the dummy swap partition table. The
default value allocates all available swap space. The actual
swap space is the minimum of this option and the SpaceLimit
option or the actual partition size. FOR INTERNAL USE ONLY.
DO NOT CHANGE.
system.spuSwapSpaceLimit 0 Specifies artificially limiting the swap space for testing. FOR
INTERNAL USE ONLY. DO NOT CHANGE.
system.sqbFlags 1 Specifies the snippet scheduler Short Query Bias flags, 1=gen-
erate prep snippets at head of plan. FOR INTERNAL USE
ONLY. DO NOT CHANGE.
system.useFpgaPrep yes Specifies whether to generate plans that use the FPGAs filter
or raw reads.
system.virtabSingleMutex yes Controls the locking of internal tables. FOR INTERNAL USE
ONLY. DO NOT CHANGE.
system.zoneMapjoinThreshold 1000 Specifies the maximum number of records in memory per SPU
for a zonemap join. If greater than 1000 records, the system
does not perform a zonemap join.
system.zoneMapTableSizeThreshold 10 Specifies the size, in MB per SPU, for a table to merit a zone
map.
Note: If you change this value, you must regenerate all of your
zone maps or risk wrong results. Do not change this value
without consulting Netezza Support.
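As a worked illustration, registry settings like those in the table above are typically inspected and changed with the nzsystem command. The invocations below are a sketch only; consult the nzsystem command reference for the exact flags supported by your release, and remember that many settings are marked FOR INTERNAL USE ONLY.

```shell
# Display the current system registry settings (illustrative invocation)
nzsystem showRegistry

# Change one supported setting; most changes require the system to be
# paused or stopped first, and internal-use-only settings must not be changed
nzsystem set -arg system.zoneMapTableSizeThreshold=10
```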
This section describes some important notices, trademarks, and compliance information.
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other
countries. Consult your local IBM representative for information on the products and ser-
vices currently available in your area. Any reference to an IBM product, program, or service
is not intended to state or imply that only that IBM product, program, or service may be
used. Any functionally equivalent product, program, or service that does not infringe any
IBM intellectual property right may be used instead. However, it is the user's responsibility
to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in
this document. The furnishing of this document does not grant you any license to these
patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785 U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellec-
tual Property Department in your country or send inquiries, in writing, to:
IBM World Trade Asia Corporation
Licensing 2-31 Roppongi 3-chome, Minato-ku
Tokyo 106-0032, Japan
The following paragraph does not apply to the United Kingdom or any other country where
such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES
CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate
programming techniques on various operating platforms. You may copy, modify, and distrib-
ute these sample programs in any form without payment to IBM, for the purposes of
developing, using, marketing or distributing application programs conforming to the appli-
cation programming interface for the operating platform for which the sample programs are
written. These examples have not been thoroughly tested under all conditions. IBM, there-
fore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Each copy or any portion of these sample programs or any derivative work, must include a
copyright notice as follows:
© (your company name) (year). Portions of this code are derived from IBM Corp. Sample
Programs.
© Copyright IBM Corp. _enter the year or years_.
If you are viewing this information softcopy, the photographs and color illustrations may not
appear.
Trademarks
IBM, the IBM logo, ibm.com and Netezza are trademarks or registered trademarks of Inter-
national Business Machines Corporation in the United States, other countries, or both. If
these and other IBM trademarked terms are marked on their first occurrence in this infor-
mation with a trademark symbol (® or ™), these symbols indicate U.S. registered or
common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current
list of IBM trademarks is available on the Web at Copyright and trademark information at
ibm.com/legal/copytrade.shtml.
Adobe is a registered trademark of Adobe Systems Incorporated in the United States, and/
or other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corpo-
ration in the United States, other countries, or both.
NEC is a registered trademark of NEC Corporation.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United
States, other countries, or both.
Red Hat is a trademark or registered trademark of Red Hat, Inc. in the United States and/or
other countries.
D-CC, D-C++, Diab+, FastJ, pSOS+, SingleStep, Tornado, VxWorks, Wind River, and the
Wind River logo are trademarks, registered trademarks, or service marks of Wind River Sys-
tems, Inc. Tornado patent pending.
APC and the APC logo are trademarks or registered trademarks of American Power Conver-
sion Corporation.
Other company, product or service names may be trademarks or service marks of others.
Deutschland: Einhaltung des Gesetzes über die elektromagnetische Verträglichkeit von Geräten
Dieses Produkt entspricht dem Gesetz über die elektromagnetische Verträglichkeit von
Geräten (EMVG). Dies ist die Umsetzung der EU-Richtlinie 2004/108/EG in der
Bundesrepublik Deutschland.
Zulassungsbescheinigung laut dem Deutschen Gesetz über die elektromagnetische Verträglichkeit von Geräten
(EMVG) (bzw. der EMC EG Richtlinie 2004/108/EG) für Geräte der Klasse A
Dieses Gerät ist berechtigt, in Übereinstimmung mit dem Deutschen EMVG das
EG-Konformitätszeichen - CE - zu führen.
Verantwortlich für die Einhaltung der EMV Vorschriften ist der Hersteller:
International Business Machines Corp.
New Orchard Road
Armonk, New York 10504
914-499-1900
Der verantwortliche Ansprechpartner des Herstellers in der EU ist:
IBM Deutschland
Technical Regulations, Department M456
IBM-Allee 1, 71137 Ehningen, Germany
Telephone: +49 7032 15-2937
Email: tjahn@de.ibm.com
Generelle Informationen:
Das Gerät erfüllt die Schutzanforderungen nach EN 55024 und EN 55022 Klasse A.
This is a Class A product based on the standard of the Voluntary Control Council for Inter-
ference (VCCI). If this equipment is used in a domestic environment, radio interference
may occur, in which case the user may be required to take corrective actions.
This is electromagnetic wave compatibility equipment for business (Type A). Sellers and
users need to pay attention to it. This is for any areas other than home.
ACL Access Control Lists. On UNIX and UNIX-like systems, file permissions are defined
by the file mode. The file mode contains nine bits that determine access permis-
sions of a file, plus three special bits. This mechanism allows definition of access
permissions for three classes of users: the file owner, the file group, and others.
active node In Linux-HA, the node that controls the resource group. This is called the primary
node in DRBD.
administrator privileges Privileges that authorize database users to administer the database and its
objects. See also object privileges.
aggregate functions Functions that operate on a set of rows to calculate and return a single value. Typical
aggregate functions include avg, count, max, min, and sum.
alias An alternate name for a keyword, or for renaming columns (also called derived col-
umns). Column aliases are used for join indexes when two columns have the same name.
ANSI American National Standards Institute. ANSI SQL standards are parallel ISO
standards.
ASCII American Standard Code for Information Interchange. The most widely used charac-
ter coding standard of representing textual data in computer memory and for
communicating with other computers.
backup increment One component of a backup set, which can be the result of a full backup, a differen-
tial backup, or a cumulative backup.
backup set A collection of one full and any number of incremental backups of a database.
base table A permanent table that stores data persistently until you destroy (drop) the table
explicitly.
BLAST Basic Local Alignment Search Tool is a search algorithm used by blastp, blastn,
blastx, tblastn, and tblastx. You use BLAST functions to perform sequence similarity
searching on CLOBs. You use BLAST-related pseudo fields to obtain statistical data
on sequence matching.
BLOB Binary Large OBject. A data type used in some databases to represent large values
for fields of records; typical examples might be images in various formats (for exam-
ple, a picture of an employee in GIF or JPEG format that is included as part of an
employee record), movies in formats, such as MPEG, audio data, radar data, and so
on.
block A group of contiguous sectors on a disk, contains a block header and some integral
number of records.
boot process A start-up process, such as the process of starting a Netezza system from a powered
off state, as well as the process for starting and initializing SPUs.
catalog (SPU) A data structure in the core partition that describes table allocation.
catalog (SQL) A catalog groups a collection of schemas into a single unit. A catalog provides a
mechanism for qualifying schema names to avoid name conflicts. It is also the
conceptual repository for the schemas' metadata.
character An abstract linguistic concept such as "the Latin letter A" or "the Chinese character
for sun." A single character can be represented by one or more glyphs.
chassis A general term for a hardware cage that contains devices. For example, a chassis
could contain SPUs, disks, fan units, power supplies, or a combination of such
devices.
CLI (1) Callable language interface (ANSI SQL term). (2) Command Line Interface. Com-
mands that users type at the command line prompt rather than through a graphical
user interface. Netezza CLI commands include nzload, nzsql, nzsystem, and others.
code point The name for the binary value associated with each character in a character set,
such as Unicode or Latin-1.
collation Rules that determine how data is compared, ordered, and presented.
column One field of data in a table definition, or in a record or row of a populated database.
combining sequences Unicode allows characters to have their own unique code point value and to be rep-
resented as combinations of other characters, called combining sequences. For
example, the Ångström character can be represented by its own code point or by the
combining sequence "capital A" code point followed by the combining ring above
code point.
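The two representations mentioned above can be demonstrated with Python's standard unicodedata module, which also illustrates the normalization entry later in this glossary:

```python
import unicodedata

# The Angstrom character as a single precomposed code point...
single = "\u00C5"        # LATIN CAPITAL LETTER A WITH RING ABOVE

# ...and as a combining sequence: "A" followed by a combining ring above.
combined = "A\u030A"

# Byte-wise they differ, so a naive equality test fails.
assert single != combined

# Normalization puts both into a single uniform representation.
assert unicodedata.normalize("NFC", combined) == single   # compose
assert unicodedata.normalize("NFD", single) == combined   # decompose
```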
comments An arbitrary sequence or string of characters that are omitted or ignored during pro-
cessing because they begin and possibly end with special characters that are
recognized by the processor. For example, SQL comments typically begin with dou-
ble dashes and extend to the end of the line.
concurrency control In multi-user environments, a system of controls that ensure that modifications
made by one person do not adversely affect another concurrent user.
constants Symbols that represent specific data values. The format of a constant depends on
the data type of the value it represents. Constants are also called literals.
constraint An integrity condition that a database system must enforce. SQL-92 defines column
constraints, foreign keys, and check conditions.
contention A condition that arises when there are more active consumers of a resource than the
system can serve simultaneously.
control file When you use the nzload command, you can use a control file to specify additional
options that the command line does not support.
core partition A Netezza disk partition that is used for storing information about how disk space is
being used. This includes directories, catalogs, dictionaries, and coarse indices.
cost Estimate of the work (in time) required to execute the query.
cumulative backup A type of backup used in conjunction with differential backups. A cumulative
backup includes all the changes since the last full backup. It consolidates and
replaces all previous differential backups.
cross database access The ability to execute queries that reference tables, views, and synonyms in other
databases on the same Netezza server.
data integrity A state in which all the data values are stored correctly in the database.
data mining Complex statistical processing used to uncover patterns in large data sets.
data slice The data slice number represents that portion of the database stored on a disk. Each
disk services a primary data slice and mirrors the primary data slice of another disk.
During failover the specific disk on which the data slice resides can change;
however, the data slice number remains the same.
database A collection of persistent data, which is used by the application systems of a given
enterprise.
DCE Distributed Computing Environment. A framework of software services that supports
the development and operation of distributed applications on a network.
DCL Data Control Language. Allows you to grant or revoke privileges to users or groups.
DDL Data Definition Language of SQL for defining tables, columns, views, constraints,
users, privileges; primarily the create, alter and drop commands.
DHCP Dynamic Host Configuration Protocol (RFC 2131). DHCP clients obtain their IP
address assignments and other configuration information from DHCP servers. Pro-
vides a mechanism for allocating IP addresses dynamically so that addresses can be
reused when hosts no longer need them.
designated SPU In IBM Netezza 1000, C1000, and IBM PureData System for Analytics N1001 sys-
tems, a SPU within the SPU chassis that has the responsibility to monitor spare and
inactive disks. Typically, this is the SPU that manages the least number of data
slices.
device mapping file A configuration file that defines the configuration of SPUs and disks within a sys-
tem, specific to the model type of the system. The mapping file is used to create and
initialize the Netezza database the first time the system starts. It also communicates
the device mappings to the SPUs when Netezza starts or after a topology change
such as a SPU failure.
dictionary Data structure that specifies all the tables, their columns, order, and data types.
differential backup A type of incremental backup. It includes all the changes since the last full or incre-
mental backup.
directory (SPU) Data structure on a SPU that describes the allocation status of disk extents. See
extents.
dirty read When a SQL transaction reads data written by concurrent uncommitted transactions.
discovery The process of identifying the storage topology and reporting information back to the
system manager. The system manager uses this information to assign SPUs to disks
(and to define the paths connecting them) and also to identify disk enclosure ele-
ments such as fans, power supplies, and sensors for temperature and voltage.
dispersion The number of distinct values in a column. These values are useful to determine a
good distribution column.
distribution key The column or set of columns used to determine the distribution of data on the data
slices. The Netezza system uses a hash of the distribution key to determine the
data slice location of a given row of the database.
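The hash-based placement described above can be sketched as follows. This is a toy illustration only; Netezza's actual hash function and slice assignment are internal and not documented here:

```python
import hashlib

def data_slice(dist_key_value: str, num_slices: int) -> int:
    """Toy sketch: hash the distribution-key value, then take the hash
    modulo the number of data slices to pick a slice for the row."""
    digest = hashlib.md5(dist_key_value.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_slices

# The same key always lands on the same slice, which is what makes
# co-located joins on the distribution key possible.
assert data_slice("customer_42", 8) == data_slice("customer_42", 8)
assert 0 <= data_slice("customer_42", 8) < 8
```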
DML Data Manipulation Language of SQL for accessing and modifying database data; pri-
marily the select, update, insert, delete, commit and rollback commands.
DNS Domain Name System. Used in the Internet for translating names of network nodes
into addresses.
double-duty A condition where a disk is servicing queries on its primary disk partition as well as
its mirror disk partition because it is taking the place of a disk that has failed.
DRBD Distributed Replicated Block Device (DRBD) is a block device driver that mirrors the
content of block devices (hard disks, partitions, logical volumes, and so on) between
servers.
DRBD network Static routes over direct cabling between two hosts, bonded. This network is dedi-
cated to DRBD only.
ECC Error-Correcting Code. A memory system that tests for and corrects errors
automatically.
environment variables Items of data that are updated by the operating system or other control program.
They typically reside in memory and can be read by applications to determine the
current status of the system. Netezza environment variables include user name,
password, and database, among others.
Ethernet Gigabit Switch A physical switch that resides in a Netezza rack and connects the SPAs to the NICs
installed on the host computer. Each rack includes at least one switch. Depending
upon your Netezza configuration, you could have multiple switches in one or many
racks.
ETL Extract, Transform, and Load. The process by which data is extracted from one or
more source databases, filtered and standardized into common forms and encod-
ings, then loaded into a target database (for example, a Netezza database).
EUC-JP A way to use the Japanese JIS X0208, JIS X0213, and other related standards (usu-
ally called just the "JIS character set"). See Extended UNIX Code.
EUC-KR A way to encode Korean with an 8-bit coding of ISO-2022-KR (KS X 1001), imple-
mented by adding 128 to each byte. See Extended UNIX Code.
execution plan A linear structure that defines the DBOS operations to be performed for a SQL
statement.
Extended UNIX Code (EUC) is an 8-bit character encoding system used primarily for Japanese, Korean,
and simplified Chinese.
extent The smallest unit of allocation on a disk, contains some number of blocks.
fabric Connects the host computer (Linux host and SMP host) with the system's SPUs.
Because the Netezza fabric uses IP-based protocols, the devices on the fabric use IP
addresses.
failover For a Netezza HA host, an automatically triggered action by Linux-HA that causes
the resource group to be failed over from the active node to the standby node. As a
result, the standby node takes control of the resource group and becomes the active
node. For a Netezza SPU, the process of transparently switching to the mirrored copy
of the data when a SPU fails to respond.
fencing A method that forces a Netezza host out of the cluster after Heartbeat detects prob-
lems on that host which would prevent normal operation. In the Netezza
environment, fencing typically causes a forced powercycle to stop the problematic
host and thus force a failover of the nps resource group to the standby host.
foreign key The column or combination of columns whose values match the primary key of
another table.
FPGA Field Programmable Gate Array. The FPGA is a Netezza-designed engine that accel-
erates SQL query performance.
full backup The contents of the entire database copied to a new or empty backup destination.
full restore The creation of a new database and restoration of the contents of a full backup set to
that database.
glyph The concrete visual presentation of a character such as A. A single glyph can repre-
sent more than one character.
GRA Guaranteed Resource Allocation. A policy that allows the system resources to be
reserved by percentages. When there is contention for resources, the system grants
access to that resource based on the defined percentage.
Heartbeat The mechanism that checks the health and liveness of the two Netezza nodes in
the cluster.
host computer A multiprocessor computer that provides access to monitor basic Netezza functions.
It includes a monitor and keyboard. The host receives queries and converts them
into optimized execution plans. It runs the Linux operating system, and provides
monitoring and diagnostic functions.
hot swap The process of replacing hardware components without shutting down the system.
i18N An industry standard abbreviation for Internationalization (because there are 18 let-
ters between the 'I' and the 'n'). It comprises software modifications to support
multiple languages.
ICU International Components for Unicode. A library that enables software programs to
work with text in multiple languages.
Intelligent Query Streaming Places the silicon processors in proximity to the storage, so it can filter and process
records as they come off the storage disk drive, taking only the data that is relevant
to the query.
interface A defined set of properties, methods, and collections that form a logical grouping of
behaviors and data.
inter-rack Between racks. For example, inter-rack connections have source and destination
locations that reside on different racks.
intrarack Within a rack. For example, intrarack connections have sources and destinations
within the same rack.
ISO International Organization for Standardization. ISO SQL standards parallel ANSI
standards.
isolation level The property of a transaction that controls the degree to which data is isolated for
use by one process and guarded against interference from other processes.
JBOD Just a Bunch of Disks. A group of hard disks. As an optional feature on a host rack,
one 3U JBOD can be installed and used as a staging area for data being extracted or
loaded.
JDBC Java Database Connectivity. Java analog to ODBC. A way to abstract access to
databases.
LAN Local Area Network. A communications network that serves users within a confined
geographical area.
Latin-1 (ISO 8859-1) An 8-bit character encoding. The 256 values correspond to the
first 256 Unicode code points, and the first 128 values correspond to 7-bit
ASCII.
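The correspondence described above is easy to verify in Python, since the latin-1 codec maps each byte value directly to the Unicode code point of the same number:

```python
# Decode every possible byte value 0..255 as Latin-1.
text = bytes(range(256)).decode("latin-1")

# Each byte value maps to the Unicode code point of the same number.
assert all(ord(ch) == i for i, ch in enumerate(text))

# The first 128 values are plain 7-bit ASCII.
assert text[:128] == bytes(range(128)).decode("ascii")
```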
Load Replay Region Defines a pre-commit within a load. It is used if the system must restart a load.
maintenance network The network that Heartbeat uses to communicate between the two Netezza nodes.
materialized view Sorted, projected, and materialized views (SPM) are views of user data tables (base
tables) that project a subset of the base tables' columns and are sorted on a specific
set of the projected columns.
merge-sort A sorting algorithm that works by merging sorted lists into larger sorted lists; in
Netezza, DBOS on the host performs a merge-sort of sorted data received from mul-
tiple SPUs.
metadata Database description information; the ANSI system catalog contains the schema
metadata for a SQL-92 database.
migration In DRBD terms, a migration (or relocation) occurs when a user manually moves the
nps resource group to the standby host, making the standby the active host.
mirror partition A disk partition used for storing tables that are a copy of another disk's primary data.
mirroring The SPU software responsible for replicating data stored on one storage device to a
second storage device for high availability of data.
mismatched disk The disk has valid data from another Netezza database. This is the case if you
removed an active disk from another system or storage array, mistaking it for a spare.
multipath A storage configuration that supports multiple paths from servers to disks. The
redundant paths, connections, and controller cards provide a degree of recovery and
high availability in the event of failures to a component within the storage
subsystem.
multiple device (MD) driver The Linux software RAID driver, which is responsible for mirroring using a RAID-1
algorithm.
MTBF Mean Time Between Failures. The average time a component works without failure.
It is the number of hours under observation divided by the number of failures.
MTTR Mean Time to Repair. The average time it takes to repair a failed component.
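As a small worked example of the two definitions above (the figures below are invented for illustration):

```python
# MTBF = hours under observation / number of failures.
# MTTR = average of the individual repair times.
hours_observed = 10_000.0
failures = 4
repair_hours = [2.0, 1.0, 3.0, 2.0]   # hypothetical repair durations

mtbf = hours_observed / failures              # 2500.0 hours between failures
mttr = sum(repair_hours) / len(repair_hours)  # 2.0 hours to repair, on average

assert mtbf == 2500.0
assert mttr == 2.0
```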
namespace A namespace is the structure underlying SQL schemas. The namespace contains all
the objects within the database plus all global objects (databases, users, groups, and
system objects). There is only one namespace for each database.
nested table A data mining model configuration in which a column of a table contains a table.
Netezza Database Accelerator Card A Netezza-designed expansion board that provides the FPGA analysis engines, mem-
ory, and I/O bandwidth to process the queries and data communications from its
associated SPU to the disks that the SPU owns.
NIC Network Interface Card. A card that attaches to a computer to control the exchange
of data between the computer and components external to the computer. Attached
to the Netezza host computer, a NIC connects the Ethernet switch to the host.
nonrepeatable reads When a SQL transaction re-reads data it previously read and finds that the data has
been modified by another transaction (that committed since the initial read).
normalization Describes the translation of a body of text so that characters with multiple represen-
tations are encoded in one way. Normalization puts different representations of the
same character sequence (as seen by the user) into a single uniform representation,
which can then be subjected to byte-wise binary comparison as an equality test.
NPS Netezza Performance Server. The former name of the Netezza high performance,
integrated database appliance.
null Specifies the absence of a value for a column in a row. Behaves as unknown in
calculations.
nz user Default Netezza system administrator Linux account that is used to run the host soft-
ware on Linux.
object privileges Object privileges authorize database users to access and maintain the data within a
database object. See also administrator privileges.
ODBC Open Database Connectivity. A way to abstract access to databases. ODBC 3.0 con-
forms to the SQL2 CLI standards.
ODBs Object databases (ODBs), first designed in the 1980s, were meant to handle the
complexity of data and relationships required by the object model of development.
overallocated SPU A SPU which is connected to more than 8 data slices. By default, a SPU manages 6
or 8 data slices. If a SPU should fail, its data slices are reassigned to the remaining
SPUs.
overhead That part of the system resources consumed by the system itself, not necessarily on
behalf of a user's operation.
oversubscribed A condition that arises when the demands on the aggregate resources of a system
exceed the system's total capacity.
partition Area of disk that contains extents. Netezza disks have several partitions such as core
data, swap, primary, and mirror.
PDU Power Distribution Unit. PDUs distribute power to SPUs within a rack, and connect
to the rack's UPSs or PDUs.
phantom read When a SQL transaction re-executes a query returning a set of rows that satisfy a
search condition and finds that the set of rows has changed due to another recently
committed transaction.
POST Power On Self Test. A series of tests that are run every time a hardware system or
component is first powered on.
PostgreSQL The open source relational database version of the Postgres object database program
from the University of California, Berkeley.
primary key A column or set of columns that uniquely identifies all the rows in a table.
primary partition The disk partition used for storing tables for which this disk is primarily responsible.
PXE The Preboot Execution Environment (PXE) is a set of methods that are used to boot
an IBM host or server without the need for a disk (hard drive or diskette).
RAID Redundant Array of Independent Disks. A way of arranging disks to provide perfor-
mance and fault tolerance. The host computer includes drives in a RAID
configuration.
real The same data type as FLOAT except that the DBMS defines the precision. REAL
takes no arguments.
record A single row in a database table stored on a SPU disk with a record header followed
by all the fields (column values) for this row.
referential integrity A state in which all foreign key values in a database are valid, by ensuring that the
rows in the other tables exist.
regenerate The process of copying the primary and mirror partitions of a failed disk to a spare
disk.
relational database Refers to a database in which the data is stored in a uniform structure.
relocate (or migrate) A process of manually relocating the nps resource group from the active Netezza
node to the standby node. Also called switchover or migration.
resource group A group of all the applications, scripts, or services which are associated with a par-
ticular resource. A resource is a service or facility which is made to be highly
available. The Netezza implementation has one resource group called nps which
defines the services and resources that are started and monitored by Heartbeat. (A
resource group was known as a service in the prior Netezza HA implementation.)
roll back To remove the database updates performed by partially completed transactions.
row A table entry consisting of one value for each column in the table. Some column val-
ues can be NULL.
rowset limit A limit on the number of rows a user query can return. The administrator can specify
this limit when creating a user or a group.
S-Blade In the IBM Netezza 100, 1000, C1000, and IBM PureData System for Analytics
N1001 systems, the combined snippet processing server and Netezza Database
Accelerator card (also referred to as a SPU).
SAS connectivity module SAS Connectivity Module is a switch that resides in the SPU chassis and manages
the connections between the SPUs and their corresponding disk enclosures. There
are two SAS connectivity modules in each SPU chassis to improve availability. Also
called a SAS expander.
saturation A condition that arises when the system resources are oversubscribed and the sys-
tem can no longer demonstrate linear performance with incremental loads.
schema A database contains one or more named schemas, which in turn contain tables.
Schemas also contain other kinds of named objects, including data types, functions,
and operators. Schemas allow you to use the same object name in different schemas
without conflict.
sequences A sequence is a named object in a database that supports a get next value method.
A sequence value is an exact numeric that you can use where that type can be used.
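As an illustrative sketch only (the sequence name, table, and exact options here are hypothetical and can vary by Netezza release), a sequence is created once and then consumed wherever an exact numeric of its type can be used:

```sql
-- Create a named sequence object (hypothetical name my_seq).
CREATE SEQUENCE my_seq AS BIGINT START WITH 1 INCREMENT BY 1;

-- Invoke the "get next value" method of the sequence.
SELECT NEXT VALUE FOR my_seq;
```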
session A specific connection to the Netezza system that aggregates units of work for a par-
ticular user.
SFI Switching Fabric Interface. On Netezza models such as the z-series and earlier, the
SFI is responsible for network connectivity among all SPUs and the host computer.
The SFI monitors and reports the status of all SPU cards, power supplies, and fans.
Shift_JIS (SJIS) A character encoding for the Japanese language developed by the Japanese
company ASCII. It is based on character sets defined within the JIS standards JIS X
0201:1997 (for the single-byte characters) and JIS X 0208:1997 (for the double-byte
characters).
significand The significant digits of floating point numbers are stored as a unit called the man-
tissa (or significand), and the location of the radix point (decimal point in base 10)
is stored in a separate unit called the exponent.
SLA Service Level Agreement. A contract between the owner of the Netezza system and
their customers to provide a certain level of service.
SMART Self-Monitoring, Analysis, and Reporting Technology. A drive technology that reports
its own degradation, enabling the operating system to warn the user of potential
failure.
SMP Host The Netezza Symmetric Multiprocessing (SMP) host controls and coordinates SPU
activities, performs query plan optimization, table and database operations, and sys-
tem administration.
SMS Storage Management System. A registered storage location for backups, such as a
file system or a third-party backup system.
snippet A unit of database work (labor) to be performed by a Snippet Processing Unit (SPU).
snippet-level scheduling The process of making scheduling decisions at the snippet level rather than at the
gatekeeper or GRA level.
snippet processor A logical connection between one CPU core, one FPGA engine, and its associated
memory to process a snippet.
SNMP Simple Network Management Protocol. A widely used network monitoring and con-
trol protocol.
SPA Snippet processing array. In a z-series system, a SPA is a collection of 14 SPUs and a
network switch. In an IBM Netezza 1000, C1000, or IBM PureData System for Ana-
lytics N1001 system, the SPA contains an S-Blade chassis and its associated
storage array of disks, as well as AMMs for management services, I/O modules that
connect to the disk enclosures, and I/O modules for communication within the
enclosure and to the hosts and other components of the rack.
spare disk A disk that is available to become active in the event that a currently active disk has a
nonrecoverable failure.
SPU A Snippet Processing Unit (SPU) performs as much of the query as possible at the
lowest level possible, with query operations being done in parallel across all the
SPUs. In IBM Netezza 1000 and later system architectures, this hardware compo-
nent is referred to as an S-Blade.
SQL Structured Query Language. A language used to interrogate and process data in a
relational database. Often pronounced sequel.
SQL character set SQL-99 allows for the creation of named character sets and for the declaration of a
table column to include a specification of the column's character set. SQL also has
the notion of a "national character set."
SQL collation SQL-99 allows for the creation of named collations. Each character set has a default
collation, but additional collations can be defined as pertaining to a given character
set. The declaration of a character column can include its character set and its
default collation.
standby node In Linux-HA, a backup node for the cluster that takes over in the event of a failover
or relocate. This is called the secondary node in DRBD.
STONITH A "shoot the other node in the head" failover design that detects when one node is
in an unhealthy state and a failover is required. The STONITH process stops the
unhealthy node and then reboots it so that the nps resource group is started on
the other, healthy node. This is the specific implementation of the generic concept
of fencing in Linux-HA.
storage array A storage array is a set of one or more disk enclosures which contain the user data-
bases and tables in the Netezza system. The storage array is connected to and
owned by one SPU chassis.
striping Netezza RDBMS evenly distributes (or stripes) all tables across all active (non-spare)
disks based on the distribution key you specify. Striping keeps the system balanced
and prevents overwhelming any one disk with too much data. Striping increases sys-
tem efficiency.
SUDP Streaming User Datagram Protocol. A communications transport layer protocol for
streaming data that is specific to the Netezza system.
swap A disk partition used for the temporary storage of entities too large to fit in random
access memory (RAM).
synonym An alternate way of referencing tables or views that reside in the current or other
databases on the Netezza system. Synonyms allow you to create easy-to-type names
for long table or view names.
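A minimal sketch of how a synonym might be used (the synonym and table names below are hypothetical, and exact syntax can vary by release):

```sql
-- Give a long table name an easy-to-type alias.
CREATE SYNONYM cust FOR customer_master_archive;

-- Queries can then reference the synonym in place of the full name.
SELECT COUNT(*) FROM cust;
```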
system catalog The set of database tables used to hold all the schema information for the system
database.
table A relation. Contains the class of objects and has rows and columns. Table names
must be unique within a schema. Tables can be permanent or temporary (within a
single session).
table lock A lock on a table including all data and indexes preventing simultaneous access to
the table by multiple transactions.
temporary table A table that the DBMS destroys automatically at the end of a session or transaction.
TFTP Trivial File Transfer Protocol. A version of the TCP/IP FTP protocol that has no direc-
tory or password capability.
timeslice A period of time in which a particular job runs as if it had all the resources on the
system.
topology The mapping of portions of the database (called data slices) to individual disks, the
mirroring assignments between the disks, the location of spare disks, and the SPU
ownership for the active data slices.
TPC Transaction Processing Performance Council, a group focused on providing level-
playing-field benchmarks for databases. It currently defines four benchmark flavors:
transaction processing (TPC-C), ad hoc queries (TPC-H), business reporting (TPC-R),
and web support (TPC-W).
transaction A group of database operations combined into a logical unit of work that is either
wholly committed or rolled back.
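The commit-or-roll-back behavior can be sketched as follows (table and column names are hypothetical); either both updates take effect, or neither does:

```sql
-- A logical unit of work: a transfer between two accounts.
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;   -- or ROLLBACK; to remove both updates
```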
Unicode A character encoding representing each of the world's characters as a unique 32-bit
value, also called a code point. The standards bodies have agreed to limit the code
point values to 21 bits. This means that three bytes are required for every character
versus one byte per character in traditional ASCII. Various encodings are used to
reduce the storage overhead for popular subsets of Unicode.
unicode collation Describes techniques for collating Unicode strings according to the customs of dif-
ferent countries, cultures, and so on. The standard algorithm calls for normalization
of comparands, and the use of potentially three or four levels of comparison rules
and attributes.
UPS Uninterruptible Power Supply. A UPS distributes power within a Netezza rack and
protects against power surges and outages.
view A view can be either a virtual table or a stored query. The data accessible through a
view is not stored in the database as a distinct object, but rather as a select state-
ment. The result set of the select statement forms the virtual table returned by the
view.
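As an illustrative sketch (view, table, and column names here are hypothetical), the view stores only the select statement; its virtual table is produced when the view is queried:

```sql
-- The stored query that defines the view.
CREATE VIEW active_orders AS
    SELECT order_id, customer_id, total
    FROM orders
    WHERE status = 'OPEN';

-- Querying the view executes the stored select statement.
SELECT * FROM active_orders;
```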
VPD Vital product data (VPD) is information about a device that allows it to be managed
or administered by other system components. VPD information usually includes a
MAC address, serial number, and physical location information for the device.
window A user-specified selection of rows (or a logical partition of a query) that determines
the set of rows used to perform certain calculations with respect to the current row
under examination.
zone maps Automatically created persistent tables that the system uses to improve the through-
put and response time of SQL queries against large, grouped, or nearly ordered
temporal and integer data.
zoning A SAS feature that separates data traffic such as between servers and disks so that
servers use a certain set of disks. Zoning provides a means of security and access
control between SPUs and their associated data.
Index
Symbols

$hist_column_access_$SCHEMA_VERSION table 11-33
$hist_failed_authentication_$SCHEMA_VERSION table 11-23
$hist_log_entry_$SCHEMA_VERSION table 11-23
$hist_nps_$SCHEMA_VERSION table 11-22
$hist_plan_epilog_$SCHEMA_VERSION table 11-36
$hist_plan_prolog_$SCHEMA_VERSION table 11-34
$hist_query_epilog_$SCHEMA_VERSION table 11-28
$hist_query_overflow_$SCHEMA_VERSION table 11-29
$hist_query_prolog_$SCHEMA_VERSION table 11-27
$hist_service_$SCHEMA_VERSION table 11-30, 11-31
$hist_session_epilog_$SCHEMA_VERSION table 11-26
$hist_session_prolog_$SCHEMA_VERSION table 11-24
$hist_table_access_$SCHEMA_VERSION table 11-32
$hist_version table 11-22
$HOME/.nzsql_history 3-9
$HOME/.nzsqlrc 3-10
/etc/ldap.conf file 8-18
/var/log/messages 4-2
_v_aggregate view 8-31, C-1
_v_database view 8-31, C-1
_v_datatype view 8-31, C-1
_v_function view 8-31, C-1
_v_group view 8-31, C-1
_v_groupusers view 8-31, C-1
_v_index view C-1
_v_operator view 8-31, C-1
_v_planstatus view 11-16
_v_procedure view 8-31
_v_qryhist 9-29
_v_qrystat 9-29
_v_querystatus view 11-16
_v_relation_column view 8-31, C-2
_v_relation_column_def view 8-31, C-2
_v_relation_keydata view 8-31
_v_sched_gra view 12-16
_v_sequence view 8-31, C-2
_v_session view 8-32, C-2
_v_sys_group_priv view 8-32, C-3
_v_sys_index view 8-32, C-3
_v_sys_priv view 8-32, C-3
_v_sys_table view 8-32, C-3
_v_sys_user_priv view 8-32, C-3
_v_sys_view view 8-32, C-3
_v_table view 8-32, C-2
_v_table_dist_map view 8-32, C-2
_v_table_index view C-2
_v_user view 8-32, C-2
_v_usergroups view 8-32, C-2
_v_view view 8-32, C-2

A

abort
    privilege 8-10, A-5
    program 6-12
    transactions 9-23
absent device 5-9
Access Control List F-1
access, controlling to Netezza 8-1
accounts
    Linux users B-1
    unlocking 8-21
ACID F-1
ACL F-1
active hardware 5-7
active host, identifying 4-5
admin
    database user account 1-2
    definition of F-1
    nzsession 9-22
    object privileges 8-10
    predefined user 8-3
    privileges, user 9-1
    user characteristics 8-3
admin user
    creating group of 8-16
    resource allocations 12-10
administration interfaces
    about 3-1
    list of 1-7
administration tasks 1-1
    about 1-1
    hardware 5-1
administrator privileges
    admin user 9-1
    backup A-5
    create group A-4
    create table A-4
    create user A-5
    create view A-5
    definition of F-1
    description of 8-8
    manage hardware A-5
    manage security A-5
    manage system A-5
    restore A-5
    security model 8-8
    unfence A-5
aggregate functions F-1
alcloader process 11-8
alerts
    displaying 7-41
    system summary page 3-22
alias F-1
allowed resources percentage 12-14
ALTER HISTORY CONFIGURATION command 11-12
alter privilege 8-10, A-5
American National Standards Institute F-1
AMPP F-1
AndExpr event rule 7-13
API F-1
ASCII F-1
assigned hardware 5-8
Asymmetric Massively Parallel Processing F-1
Atomicity/Consistency/Isolation/Durability F-1
_v_view C-2
definition of F-15
system 8-31, 8-32, C-3
voltage fault events 7-37
W
Web Admin interface
    directories and files 2-10
    installing 2-7
    server package 2-8
WildcardExpr event rule 7-13
window F-15
Windows tools 2-5
workload management
    about 12-1
    admin user 12-10
    compliance 12-14
    compliance reports 12-16
    features 12-2
    gatekeeper 12-21
    GRA 12-6
    overserved and underserved groups 12-14
    PQE 12-19
    priority 12-12
    priority levels 12-20
    resource percentages 12-9
    resource sharing groups 12-8
    SQB 12-4
workload, about 12-1

X

xinetd, remote access 1-7

Z

zone maps
    automatic statistics 9-16
    definition of F-15