
Platform Manage™ User’s Guide

Version 5.7
May 19, 2008 3:56 pm
Comments to: doc@platform.com
Support: support@platform.com
Copyright © 1994-2008, Platform Computing Inc.
Although the information in this document has been carefully reviewed, Platform Computing Inc.
(“Platform”) does not warrant it to be free of errors or omissions. Platform reserves the right to make
corrections, updates, revisions or changes to the information in this document.
UNLESS OTHERWISE EXPRESSLY STATED BY PLATFORM, THE PROGRAM DESCRIBED IN
THIS DOCUMENT IS PROVIDED “AS IS” AND WITHOUT WARRANTY OF ANY KIND,
EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. IN NO
EVENT WILL PLATFORM COMPUTING BE LIABLE TO ANYONE FOR SPECIAL,
COLLATERAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING WITHOUT
LIMITATION ANY LOST PROFITS, DATA, OR SAVINGS, ARISING OUT OF THE USE OF OR
INABILITY TO USE THIS PROGRAM.
We’d like to hear from you: You can help us make this document better by telling us what you think of the content, organization, and usefulness of the information. If you find an error, or just want to make a suggestion for improving this document, please address your comments to doc@platform.com.
Your comments should pertain only to Platform documentation. For product support, contact support@platform.com.
Document redistribution and translation: This document is protected by copyright and you may not redistribute or translate it into another language, in part or in whole.
Internal redistribution: You may only redistribute this document internally within your organization (for example, on an intranet) provided that you continue to check the Platform Web site for updates and update your version of the documentation. You may not make it available to your organization over the Internet.
Trademarks: LSF is a registered trademark of Platform Computing Inc. in the United States and in other
jurisdictions.
ACCELERATING INTELLIGENCE, PLATFORM COMPUTING, PLATFORM SYMPHONY,
PLATFORM JOBSCHEDULER, PLATFORM ENTERPRISE GRID ORCHESTRATOR, PLATFORM
EGO, and the PLATFORM and PLATFORM LSF logos are trademarks of Platform Computing Inc. in
the United States and in other jurisdictions.
UNIX is a registered trademark of The Open Group in the United States and in other jurisdictions.
Microsoft is either a registered trademark or a trademark of Microsoft Corporation in the United
States and/or other countries.
Windows is a registered trademark of Microsoft Corporation in the United States and other countries.
Other products or services mentioned in this document are identified by the trademarks or service
marks of their respective owners.
Third-party license agreements: http://www.platform.com/Company/third.part.license.htm
Third-party copyright notices: http://www.platform.com/Company/Third.Party.Copyright.htm

Table of Contents

Chapter 1 Introduction to Platform Manager


Conventions
Terms
Typographic conventions used in this manual
Why use Platform Manager?
Platform Manager Features
Initial Cluster Deployment
Ongoing Change Management
Configuring Monitoring of the Health of Cluster Resources
Automating Fault Handling and Root Cause Analysis
Optimizing Resource Utilization and Performance
Cluster Topologies
Flat cluster
Private Network
Multiple clusters
Additional Features
Using Platform Manager
Standards and Open Technologies
Provisioning with ScaNCE
CIM/Middleware
Console
Power
Monitoring
Chapter 2 Introduction to the Platform Manager GUI
About Platform Manager Icons
Opening Terminal and Console Sessions
Main Window
About Platform Manager GUI Elements
The Data Center
Working with Views
Data Center Selector View
Cluster right-click menu
Configure
Create Group
Create NIS Server...
Capture Image
Show Installation History
Remote Access


Node On/Off
Preview Pending Changes
Subnet Right-click Menu
Opening Views
Moving views
Chapter 3 Provisioning the Data Center
About the Provisioning Process
Scalability Considerations
The Provisioning Process Assumptions
The Server Creation Wizard: Provisioning a Cluster
Selecting a Network Topology
Typical Cluster Workflow
Route-able Subnet Groups
Create Server Wizard: Creating a Cluster with a Private topology
Create Server Wizard: Configuring a network gateway
Create Server Wizard: Configuring node hardware
Create Server Wizard: Configuring the BMC ethernet interface
Create Server Wizard: Configuring the Operating System
Create Server Wizard: Adding the Configured Ethernet Interface
Create Server Wizard: Configuring the DNS and NTP
The DNS
The NTP
Create Server Wizard: Configuring the NIS and LDAP Client Services
Create Server Wizard: Configuring NIS
Configuring the LDAP Client Service
Adding LDAP Client Services in the CLI
LDAP and the NIS
Mounting a user's home directory without automount on a NIS
Creating a subnet
Adding Myrinet, Power and Console Switches
Myrinet Switches
Power Switch
Console Switch
Create Server Wizard: Adding Software options
Software Installation
Upload Software Wizard
About the Installation Process
Package-based Installation
Image-based Provisioning
Provisioning Process
Image Export
Image Import
Image Deployment
Diskless Image Provisioning
Installation templates
Customized Template
Modifying a template


Copying a template
Configuring a template
Deleting a template
Discovery and Managing existing servers
Prerequisites for Discovery
Node Discovery
The Discovery Process in the Platform Manager GUI
Running the Discovery Process using the Platform Manager CLI
Adjusting the level of management
Setting up "Out of Band monitoring" on an unmanaged server
Script for setting up out of band monitoring
Setting up PBSPro clients using an unmanaged PBS Pro server
Chapter 4 Configuration
Configuration Overview
Node System Settings View
Node Hardware Configuration View
Configuring Server Properties tab
BMC tab
About the BMC tab
Configuring the BMC
Configuring the BMC in the CLI
Power and Console tab
Configuring the Power and the Console in the GUI
Configuring the Console in the CLI
Configuring the Power in the CLI
Node Network Management View
Network Interfaces
The Network Interfaces tab
Static ARP
Enabling Static ARP in the GUI
Disabling Static ARP in the GUI
Changing IP Addresses
Modifying a subnet
Default Gateway tab
NAT Settings tab
DNS Settings tab
Provisioning Management View
Distribution Settings tab
Software Settings Tab
Node Service Management View
NTP tab
About the NTP tab
Adding an NTP service
NIS Client tab
About the NIS Client tab
Configuring NIS Client Service


LDAP Client tab


About the LDAP tab
Configuring an LDAP Client
LDAP in the CLI
NFS Export tab
About the NFS Export tab
Adding an NFS service
Configuring an NFS Service
Examples of NFS in the CLI
Example: Adding NFS to a node in the CLI
Example: Removing NFS from a node in the CLI
Scali MPI Connect tab
The LSF tab
About the LSF tab
Configuring a Master Candidate with the GUI
Adding the LSF master candidate with the CLI
About Hosted Services
About Scali_DHCPServerService
About Scali_ManagementEngineService
About Scali_RepositoryChannelService
Configuring a Dynamic Host Service or a Static Host Service
PBSPro tab
About the PBSPro tab
Configuring the PBSPro server
PBSPro Clients
Configuring a PBSPro Client
Setting up PBS Pro clients using an unmanaged PBS Pro server
PBS accounting files
Remote NFS
About the Remote NFS tab
Adding Remote NFS Management
Configuring a Remote NFS Service
Examples of Remote NFS in the CLI
Example: addremotefs
Example: listremotefs
Applying Changes
Chapter 5 High Availability
Introduction to High Availability
Installing High Availability on a Gateway
Installing and Configuring High Availability on the PM Server
Chapter 6 Monitoring the Data Center
Working with Monitor Views
Monitoring menu
Platform Node Status Icons
Standard Monitor Views
Alarm View


Viewing Alarms
Editing an Alarm
Adding an alarm
Example: Adding a new alarm called “CPU load too high”
Custom Variables in Platform Manager Monitoring
Interconnect Monitoring View
Ethernet Monitoring
Myrinet Monitoring
Creating a Custom Monitor View
Chapter 7 Managing Systems
Overview of the Management Menus
Running MPI Jobs
Running Parallel Shell Commands
Chapter 8 Accounting Systems
ScaAccounting
Manually enabling the accounting functions in ScaAccounting
Starting ScaAccounting
ScaAccounting log
scaacct
Time Specification
Using scaacct with time start only
scaacct with a time range
Group-by Specification
Example: scaacct grouping by time specification range
Example: scaacct grouping by command
Rule Specification
Using scaacct to report on a specific user
Summarize Specification
Example: Using scaacct for summary of elapsed-, system- and user-time
Generating reports with scaacct
Using scaacct with the -f option
Using scaacct with pdflatex
Triggering Data Collection
Chapter 9 Reporting
Report Interface
Cluster Summary
Report Navigation
Opening a Report
Management and Inventory
Monitoring
Networking
Platform certified products
Workload management
BIRT Report Parameters


Chapter 10 Platform Manager Command Line Interfaces


The pmcli interface
BMC Commands
addbmc
disablebmcconsole
disablebmcmonitoring
disablebmcpower
enablebmcconsole
enablebmcmonitoring
enablebmcpower
listbmccapabilities
removebmc
showbmc
Cluster Commands
addnodetocluster
createcluster
listclusters
listnodesincluster
removenodefromcluster
renamecluster
Custom Attributes Commands
getcustomattribute
listcustomattributes
removecustomattribute
setcustomattribute
Deployment Commands
install
installmanagementsoftware
reconfigure
reconfiguredryrun
setdiskless
setdistribution
setimage
setnettemplate
setftptemplate
Diagnostic Commands
diagnosecimdatabase
diagnoseconsole
diagnoseinstallation
diagnosemonitoring
diagnosenis
diagnosescampi
diagnosescash
diagnosessh
diagnosesshkeys
Filesystem Commands
addlustremdt
addlustreost
addnfsexport
addremotefs


createlustrefs
formatlustrefs
listlustrefs
listremotefs
removelustrefs
removeremotefs
testlustrefs
Flexlm Commands
addflexclientconfigtoservice
createflexclientconfigdir
createflexclientconfigfile
createflexclientconfigserver
deleteflexclientconfig
listflexclientconfigs
listflexclientconfigsonservice
removeflexclientconfigfromservice
High Availability (HA) Commands
addhaethernetinterface
addhasharedfs
addheartbeatchannel
addservertohagroup
bindhaservicetointerface
createhagroup
disableautofailback
disablehagroupfencing
enableautofailback
enablehagroupfencing
listhagroups
listhainterfaces
listhapingrules
listhasharedfs
listheartbeatchannels
listhostedhaservices
listserversinhagroup
moveservicestohagroup
moveservicestosystem
moveservicetohagroup
moveservicetosystem
removehaethernetinterface
removehagroup
removehasharedfs
removeheartbeatchannel
removeserverfromhagroup
sethapingallips
sethapingoneofips
setlsbscriptha
setprimaryhaserver
showautofailback
showhagroupfencing
unbindhaservicefrominterface
unsetlsbscriptha
Image Management Commands


captureimage
exportimage
importimage
listimages
removeimage
Licensing Commands
activateproductkey
addactivationkey
addproductkey
listproductkeys
removeactivationkey
removeproductkey
showproductstatus
Logging Commands
canceljob
joblog
lastinstallationjob
listjobs
listjobsfornode
listsubjobs
removejob
LSF Commands
addlsfapplicationsystem
addlsfdynamichost
addlsfmastercandidate
addlsfstatichost
getlsfhoststatus
listlsfapplicationsystems
listlsfdynamichosts
listlsffeatures
listlsfmastercandidates
listlsfstatichosts
removelsfapplicationsystem
setlsffeatures
setlsfhostclosed
setlsfhostopen
Network Commands
addaliasedinterface
addbondedinterface
addethernetinterface
addinfinibandinterface
addmyrinetinterface
addroutablesubnet
addroute
addsubnet
clearmacaddress
clearmtu
createroutablesubnetgroup
detachslaveinterface
disablenetworkboot
disablestaticarp
enablenetworkboot


enablestaticarp
enslaveinterface
exportethers
importethers
listinterfaces
listroutablesubnetgroups
listroutes
liststaticarpmapping
listsubnets
listsystemdevices
removealiasedinterface
removebondedinterface
removeethernetinterface
removeinfinibandinterface
removemyrinetinterface
removeroutablesubnet
removeroutablesubnetgroup
removeroute
removesubnet
setinterfacename
setmacaddress
setmtu
Node Commands
changenodebrand
createnode
disablemanagementofservers
discovernode
discovernodemac
enablemanagementofservers
getguid
getkernelbootoptions
listaccounts
listmanagementofservers
listnodes
removesystem
renamesystem
setguid
setinstalledstate
setinstallserver
setkernelbootoptions
setrootpassword
showprovisioningdata
PBS Options Commands
addpbsprolicenseserver
addpbspromom
addpbsproscheduler
addpbsproserver
createpbsnodefile
removepbsprolicenseserver
removepbspromom
removepbsproscheduler
removepbsproserver


setpbslicense
setpbslicensefile
setpbsproclientsoffline
setpbsproclientsonline
Product and Software Options Commands
addproductconflicts
addproductprovides
addproductrequires
addsoftware
addsoftwareoftype
createdependencycapability
createlocalproduct
createupdatechannel
listchannels
listdependencycapabilities
listfeatures
listinstalledsoftware
listproductdependencies
listproducts
listproducttypes
listretrieveelements
listretrievemethods
listsubscribedchannels
loadsoftware
removedependencycapability
removeproductconflicts
removeproductprovides
removeproductrequires
removesoftware
removeupdatechannel
subscribechannel
unsubscribechannel
upgradesoftware
upgradesoftwareoftype
Services Options Commands
addaccountingcollectorservice
addaccountingservice
addbatchsystemaccountingservice
addconsolemanagementcontroller
adddnsclientservice
adddhcpclientservice
adddhcpserverservice
addjbossasservice
addldapclientservice
addmanagementengineservice
addmonitoringhistoryservice
addmonitoringinbandservice
addmonitoringoutofbandservice
addmonitoringrelayservice
addnisclientservice
addnatservice
addntpservice


addpowermanagementcontroller
addremotesyslogclientservice
addrshservice
addscarepositorycacheservice
addsmgatewayservices
addsshcredentialmanagementservice
bindservicetointerface
disablescancesubsystem
enablescancesubsystem
listdnsclientservice
listdisabledscancesubsystems
listhostedservices
listnisclientservice
listscancesubsystems
removeservice
removesmgatewayservices
unbindservicefrominterface
Switch Commands
createswitch
disconnectconsoleswitchport
disconnectpowerswitchport
findgmtopology
listswitches
removeswitch
setspeedoncomport
useconsoleswitchport
usepowerswitchport
Template Commands
addtemplate
gettemplate
listtemplates
removetemplate
replacetemplate
The console interface
Console
Console Configuration
TIP: Global Defaults
Configuration Blocks
String Replacement
Numeric Replacement
Escape Sequences
Using console with -e
Using console with -u
Using console with -w
Setting a new default escape
TIP: Locations of Files
TIP: Number of Fields
Some Known Bugs
The power interface


Chapter 11 Parallel Shell Tools


Grid vs. Tree Routing Topologies
scacp
scagroup
scagroup File Format
scahosts
scakill
scaps
scarcp
Example: copying files using scarcp
Example: scarcp using -r
scarup
scash
Example: Running scash in the Background
Example: rpm -q glibc
Example: scash uname
Example: scash uname exclusions
ScaSH configuration file
plasub
scatop
Example: scatop
scawho
Chapter 12 Platform Custom Package Generator
Distribution and Set-up
Interfaces
Richer functionality in the CLI
Error handling
Chapter 13 Licensing
Product key overview
Showing license status in the GUI
Showing license status using the CLI
Listing Product and Activation Keys Using Platform Manager GUI
Listing Product and Activation Keys Using Platform Manager CLI
Activation of Product Keys
Automatic Online Product Activation
Automatic Online Product Activation Using the GUI
Automatic Online Product Activation Using the CLI


Offline Product Activation Using the GUI


Manual Product Activation Using the CLI
Adding a New Product Key Using the GUI
Adding a New Product Key Using the CLI
Product Key Deletion Using the GUI
Product Key Deletion Using the CLI
Activation key deletion using the GUI
Activation Key Deletion Using the CLI
Upgrading / Replacing a Product Key with the GUI
Upgrading / Replacing a Product Key with the CLI
Installing a Product Key for Scali MPI Connect Using smcinstall
Chapter 14 Bracketing and Grouping
Chapter 15 Best Practices in Platform Manager
Chapter 16 Glossary
A
B
C
D
E
F
G
H
I
K
L
M
N
O
P
Q
R
S
T
U
V
W


X
Y
Index



Chapter 1 - Introduction to Platform Manager
This chapter will give you a basic understanding of how to use this manual and what
Platform Manager can do for you.
Topics in this chapter include:
Conventions on page 1
Why use Platform Manager? on page 2
Platform Manager Features on page 3
Cluster Topologies on page 5
Using Platform Manager on page 9
Standards and Open Technologies on page 10

Conventions
Here you will find basic terms and typographic conventions.

Terms

Unless explicitly specified otherwise, gcc (the GNU C compiler) and bash (the GNU Bourne-Again Shell) are used in all examples. See “Glossary” on page 388 for more term entries.



TERM: DESCRIPTION
node: a single computer in an interconnected system consisting of more than one computer
head node: also known as a gateway; compute nodes lie behind the head node. Anyone using the public network will see the head node and the compute nodes behind it as one node.
cluster: a set of interconnected nodes functioning as a single server
torus: Greek word for ring, used in Platform documents in the context of 2- and 3-dimensional interconnect topologies
Platform Manager frontend (or "frontend"): a computer outside the cluster nodes dedicated to configuration, monitoring and licensing software for the cluster(s)
UNIX: refers to all UNIX and look-alike operating systems supported by the SSP, i.e. Solaris and Linux
WINDOWS: refers to Microsoft Windows 98/Me/NT/2000/XP

Typographic conventions used in this manual

The typographic conventions used in this manual are listed below; the short example that follows illustrates the command-prompt conventions.
Bold: program names, options and default values
mono space: computer-related text such as Command Line Interface and shell commands, examples, environment variables, file locations (directories) and file contents
#: command prompt in a shell with super user privileges; also used for commentary in pmcli script examples
%: command prompt in a shell with normal user privileges
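For example, the following illustrative lines (standard shell commands, not Platform Manager commands) show the two prompts in use:

% hostname     (run from a shell with normal user privileges)
# rpm -qa      (run from a shell with super user privileges)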

Why use Platform Manager?


Platform Manager reduces the complexity of operating clusters and data centers. It
manages both deployment and operational use by facilitating the installation and
configuration of hardware and software, as well as monitoring, management and software
maintenance. Platform Manager provides all functionality from a single point through
secure access, independent of hardware interconnects and platforms.
Platform Manager implements a close relationship between installation and operation.
From a single point of control, users and administrators have a common working



environment and uniform view of applications and data. This dramatically reduces the
complexity of establishing and running cluster systems in data centers.
Topics include:
• Platform Manager Features on page 3

• Cluster Topologies on page 5


• Using Platform Manager on page 9

Platform Manager Features


With Platform Manager you can control the total life cycle of your clusters from a single
point in your data center architecture. You can even manage heterogeneous computers, interconnects, and storage resources. The life cycle of a cluster includes:
• Initial Cluster Deployment on page 3
• Ongoing Change Management on page 4
• Configuring Monitoring of the Health of Cluster Resources on page 4
• Automating Fault Handling and Root Cause Analysis on page 4
• Optimizing Resource Utilization and Performance on page 4

Initial Cluster Deployment

Platform Manager supports the installation and configuration of operating systems for
heterogeneous servers in your data center from “bare metal”, including RHEL and SLES,
as well as server-specific software. DNS, NIS, LDAP and NTP cluster and network
services are set up automatically using one of two wizards: the Upload Wizard and the
Server Creation Wizard. Use the Upload Wizard to upload OS and third party software
such as Scali MPI Connect. The Server Creation Wizard can set up MPI communication
using Scali MPI Connect. Once you have specified your configuration, you can either
apply it immediately, or save it to a central repository to be applied later.
With the intelligent provisioning feature, you can deploy using either RPM-based provisioning or image-based provisioning. RPM-based provisioning
allows you to build each node from its software components, such as operating system,
services, and applications. Image-based provisioning allows you to build a single node,
then replicate it by copying the entire image to other nodes. For example, if you want
to add new hardware to your cluster, you can have the Platform Manager software build
an image for you from the RPMs and then install the image on the other nodes.



Intelligent provisioning customizes image-based installations by adapting the core image to the needs of your software. This provides:
• reliability and speed of installation
• node personalization
• simplified management of images

Ongoing Change Management

The Platform Manager graphical user interface makes ongoing management more
efficient. The GUI is based on the Eclipse Rich Client Platform (RCP) framework. Wizards and views provide powerful and flexible means for deploying
servers and expanding clusters. You can easily refresh servers, either back to a known
point, or for security purposes.
Information management in Platform Manager is based on industry-standard data
models maintained by the Distributed Management Task Force (DMTF). The Common
Information Model (CIM) standard is the solid foundation for data storage in Platform
Manager’s open architecture.
Platform Manager supports integrated configuration changes and node administration
with features such as parallel shell, and console and power management. Auditing of
user jobs is also provided to support central management for your data center.
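The parallel shell tools (described in Chapter 11, “Parallel Shell Tools”) are one way this is exposed. As a minimal sketch based on the scash examples in that chapter (the node-selection options are documented there and omitted here), you could run the same command across the nodes of a cluster:

% scash uname

This runs uname on the selected nodes and collects the output on the frontend.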

Configuring Monitoring of the Health of Cluster Resources

The monitoring feature in Platform Manager 5 gathers and displays node availability,
environmental and performance data in a dashboard format. You can configure the
default dashboard to suit your needs for monitoring, then save it as a new monitoring
view for later use. Views can display aggregate values such as average, maximum, and
standard deviation.
The GUI can drill down through a hierarchy of configurable objects, then perform
actions on your selection of components. In addition, Platform Manager's fault-prediction algorithms, based on monitoring of a standard set of variables, indicate potential problems, which helps ensure high availability of clusters.

Automating Fault Handling and Root Cause Analysis

Platform Manager automates event handling and response. You can attach user-defined alarm levels to specific objects; define automated responses such as shutting down a node, running a script, or sending an e-mail; and drill down quickly from aggregated data, streamlining the root-cause analysis process.
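As a minimal sketch of the run-a-script response (an illustrative script, not one shipped with Platform Manager; the argument layout and e-mail address are assumptions):

#!/bin/sh
# Hypothetical alarm-response script.
# Assumed arguments: $1 = node name, $2 = alarm description.
NODE="$1"
ALARM="$2"
# Mail a short alert to the administrator (address is an example).
echo "Alarm on ${NODE}: ${ALARM}" | mail -s "Platform Manager alarm: ${NODE}" admin@example.com

Platform Manager would invoke a script like this when the corresponding alarm fires; see “Adding an alarm” in Chapter 6 for how alarms and their responses are defined.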

Optimizing Resource Utilization and Performance

Monitoring system performance and utilization is an ongoing task in any
data center. The Platform Manager monitoring interface quickly pinpoints parameters
for investigation. In the Queue Status View, you can select a queue, then drill down to



a specific job. When you select a job, the Data Center Selector highlights the nodes
running the job. You can then open monitoring views for any parameter.

Cluster Topologies
In this section you will learn about the two basic cluster topologies.

Flat cluster

A cluster is a collection of interconnected computers dedicated for use as a unified computing resource. By exploiting the rapid development of commodity platforms, clusters offer a very aggressive price/performance ratio relative to traditional supercomputers. Figures 1 through 3 illustrate typical network topologies for single clusters.

Figure 1—Public Network Topology

Figure 1 illustrates a cluster configured in a public network with the Platform server
located on the public network communicating with the cluster nodes over a public LAN.
Node management occurs in-band over the public network.

Private Network

Figure 2 illustrates a cluster configured in a private network with the Platform Manager
frontend located on the public network communicating with the cluster frontend server.
The frontend communicates with cluster nodes over a private high-speed interconnect
LAN. Node management occurs in-band over the LAN.



Figure 2—Private Network Topology

Note: The Platform Manager frontend can reside on the frontend server, as well as on a
separate server on the network.

Figure 3—Private and Management Networks



Figure 3 illustrates a cluster configured in a private network. The Platform server
communicates over the public network with the cluster frontend server. The frontend
communicates with cluster nodes in-band over a private high-speed interconnect LAN
and out-of-band over the separate management network.
In addition to networking functionality and high performance interconnects, clusters
can contain a control infrastructure for out-of-band management. Both the high
performance interconnect and the control infrastructure are optional, but their roles are
very different.
High performance interconnects are meant to service applications with demanding
communication requirements while the control infrastructure grants access to the
resources of the cluster despite failing components in the networking path. Node
control can be in-band over Ethernet, or high-speed interconnect, or out-of-band over
a private management network line. Selected solutions for both high performance
interconnects and control infrastructure are integrated into Platform products.



Tip: Components other than nodes can be connected to the control infrastructure. Control
over networking, high performance interconnect switches, and even the subsystems in the
control infrastructure improves the manageability of the complete solution.

Multiple clusters

Platform Manager provides management of multiple clusters from a single point in the
data center. Through scalable architecture, Platform Manager incorporates multiple
clusters under a single Platform Manager frontend.
Figure 4 illustrates a multiple cluster architecture.

Figure 4—Multiple Cluster Architecture

In this configuration, the Platform Manager frontend is installed on an independent


server on the corporate network. The client resides on a desktop connected to the
corporate network. The cluster gateway controls access to a private network cluster.
GUI agents reside on the desktop systems and communicate with the Platform Manager



frontend. The frontend servers control queue management. The Platform Manager Server
handles:
• CIM database communication
• communication with individual server systems outside of clusters
• out-of-band management
• history control and fault prediction
NAT and installation services usually run on the Platform Manager frontend (head node)
and on gateway nodes. You perform all tasks from the Platform Manager frontend, so you do not have to use ssh to connect to individual nodes and change their configuration. Any software deployed on nodes, such as libraries, MPI, and drivers, is stored in the repository on the Platform
Manager frontend.

Note: Alternatively, the Platform Manager frontend can reside on the cluster gateway.

Additional Features

In addition, the following features are available with Platform Manager:


• Single Point Data Center Management
• Custom Provisioning
• Heterogeneous Cluster Support
• Linux and Windows GUI Support
• Node Reinstallation
• Multiple Package Channels
• Floating Licenses
• Custom Dashboard Monitoring with History View and Accounting
• Parallel Command Execution and Multiple Vendor Hardware Management
• Compound Alarm and Definition Setting
• Automated Corrective Actions

Note: Platform Manager Cluster Edition does not allow for multiple cluster management.

Using Platform Manager


Single user sign-on is automatically synchronized with other servers.
Events, such as two users logged into one box or temperature variations, can automatically trigger a notification script.
Workload management automatically selects nodes for a queue and provides the necessary information.
A clustered file system distributes where files are stored. NFS is handled from a single location.



For the private network, you can use a private subnet with a NAT gateway, instead of using
registered IP addresses.
Platform Manager Server includes:
• Platform Manager software and tools
• CIM configuration database
• Software repository
• Deployment services

• Platform Manager-Client location and interface


Platform Manager Cluster Gateway (PM-CGW) includes:
• Console proxy
• NTP/NIS slaves
• NAT
• Provisioning engine
• Proxy deployment services
• DHCP
• TFTP
• PXE

Standards and Open Technologies


Data management technology in Platform Manager uses the CIM standard maintained by
the DMTF standards group. CIM addresses both FCAPS management (fault, configuration,
accounting, performance, and security management) and supports the abstraction and
decomposition of services and functionality. The information model defines and organizes
common and consistent semantics for the managed entities. CIM’s organization is based
on the object-oriented model, which promotes the use of inheritance, relationships,
abstraction, and encapsulation to improve the quality and consistency of the management
data.
Figure 5 shows the Platform Manager architecture. On the client side there is a presentation layer in which the GUI monitoring views, configuration views, and parallel shell tools interact with the CLI. These go through an API layer that in turn interacts with the services on the Platform Manager server. There is a monitoring API and an API that interacts with the CIM database.



Figure 5—Platform Architecture Functional Block Diagram

Services are separated into monitoring, provisioning, CIM, and power and console services.
These services interact with the nodes in the cluster.
The CIM repository is stored in a PostgreSQL database. The parallel shell tools are a
proprietary Platform implementation. The imaging engine is also proprietary.
• Provisioning with ScaNCE on page 11
• CIM/Middleware on page 12
• Console on page 14
• Power on page 14
• Monitoring on page 15

Provisioning with ScaNCE

Figure 6 shows the interaction between the API layer and the services involved in provisioning, monitoring and saving the nodes' state to the database.



Figure 6—Provisioning block architecture

The provisioning GUI interacts with the configuration database through provisioning
services. Within the provisioning service is the repository, which is tightly coupled with
the configuration database. The provisioning engine performs node deployment. After
installation ssh keys are generated and collected for all nodes in the cluster. The cluster
configuration files and the ssh keys are packaged into an RPM named “ScaConfig” on the frontend. This package is distributed to all the nodes in the cluster, and the Platform Node Configuration Engine (ScaNCE) synchronizes the Platform Manager Server with the configuration of the local nodes. If the software or passwords change on the local
nodes, ScaNCE will reference the Platform Manager frontend for the proper
configuration and then update the local node's configuration. ScaNCE does this for
services, licenses, etc.
Similarly, after boot, all nodes will check for updated packages at the Platform Manager
frontend, so that if configuration changes while a node is down, the node will receive
a new ScaConfig package at boot which will trigger a re-configuration of the node.
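As a quick sanity check (a minimal sketch; rpm -q is the standard RPM query command, and ScaConfig is the package name described above), you can verify on an installed node that the configuration package has been delivered:

% rpm -q ScaConfig

If the configuration has changed since the node last booted, the node picks up an updated ScaConfig package from the frontend at its next boot, as described above.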

CIM/Middleware

Figure 7 shows how a PostgreSQL database stores configuration instances and static
instances that interact with the CIM server.



Figure 7—CIM block architecture



Console

You use the Platform CLI to access the console client, which interacts with the console server. The console server can communicate with:
• RAC
• iLO interfaces
• Console switch terminal servers

• BMC IPMI interfaces.
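For a node whose BMC supports IPMI, the console traffic relayed by the console server corresponds to a Serial over LAN session. As a generic illustration using the standard ipmitool utility (not a Platform Manager command; the address, user name and password are placeholders), such a session could be opened directly with:

# ipmitool -I lanplus -H 10.0.1.5 -U admin -P <password> sol activate

In Platform Manager you would normally open the session through the console interface or the GUI instead of contacting the BMC directly.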

Power

Figure 8 shows the interaction between ssh at the API layer and the console and power services for the nodes' hardware.

Figure 8—Console and Power block architecture

If you want to power cycle, see “Adding Myrinet, Power and Console Switches” on page 82, see “Configuring Power and Console settings” on page 118, or see “The power interface” on page 308. You can use either the GUI or the power interface.

Monitoring

Figure 9 shows the interaction between Scamon and the Monitoring services for the
nodes.

Figure 9—Monitoring block architecture

Platform Manager also monitors LSF and PBSPro queues. The data can be viewed in the
GUI from LSF and PBSPro directly in real time. You can look at the history from the
command line through the python interface. The factory module provides advanced



monitoring functions like aggregation and filtering, which may be applied to real-time or historic data.



Chapter 2 - Introduction to the Platform
Manager GUI
The Platform Manager GUI is an implementation of the open-source Eclipse framework.
This chapter provides an overview of how to use the Platform Manager graphical user
interface. This chapter discusses the following topics:
About Platform Manager Icons on page 17
Opening Terminal and Console Sessions on page 18
Main Window on page 19
Working with Views on page 23

About Platform Manager Icons


Platform Manager uses the icons found in Figure 10 to symbolize components of your
network system.

Figure 10—Platform Manager Icons



Table 1—Platform Manager Icons

Nr. and description:
1 Platform Manager Application
2 Cluster up
3 Cluster unavailable
4 Cluster down
5 Node up
6 Node unavailable
7 Node down
8 Node unmanaged
9 Gateway
10 Group
11 Network up
12 Network unavailable
13 Network down
14 License Manager View
15 Exit Platform Manager
16 Operating System icon
17 Platform Manager Template
18 Switch
19 Vendor
20 Software
21 Service
22 Image
23 Data Center Selector
24 No changes pending
25 Changes pending
26 Open Terminal
27 Open Console
28 Refresh views
29 Progress
30 Parallel Shell View
31 Launch MPI
32 Go to Data Center Home View icon, state: enabled
33 Go to Data Center Home View icon, state: disabled

Opening Terminal and Console Sessions


Console and Terminal sessions allow you to step outside the GUI to use the CLI suites.

Figure 11—Terminal and Console Icons



To open terminal and console sessions from Platform Manager:

1 Select a node from the Data Center Selector list.


2 Choose
• Management -> Remote Access -> Node Console, or
• Management -> Remote Access -> Node Terminal, or
• Click the appropriate icon in the shortcut bar.
See “The console interface” on page 312 and "The power interface" on page 324
for more details.

Main Window
At a basic level, the user interface for Platform Manager provides you with a set of
interactive forms that correspond to typical functions used to manage a cluster, or data
center. You can arrange the forms in the display to suit your needs, then save the
arrangement under a unique name. When you open Platform Manager, you see a window
similar to Figure 12.

Figure 12—Platform Manager Main Window

Notice that the window has two menu bars and is separated into views. The views provide
interaction with the various individual functions of Platform Manager.
You select items in a menu at the tool bar or by right-clicking on an item in the Data Center,
then edit, or monitor them in a view. When you first install Platform Manager Enterprise,



the Data Center Selector is empty except for the Platform Manager frontend. As you add
configurable objects to the data center, they become visible in the selector.
Installing a cluster requires that you define your configuration settings first, then install the
cluster. You could save the configuration settings and not install the cluster. In that case,
you will see the cluster listed in the selector with an indication that it has not yet been
installed.
Changes must be applied by clicking on the Apply Changes icon. The saved changes will
not go into effect until you have applied them. Changed items appear in the configuration
status view. Selecting new objects of the same type replaces the content displayed in the same view. This requires you to confirm whether to save or lose the changes. Multiple
objects of the same type may be edited at once.

About Platform Manager GUI Elements

Table 2 contains a quick reference to the elements that comprise the Platform Manager
interface.



Table 2—Platform Manager GUI Elements

Element: Description

Window: A window comprises one or more perspectives. Multiple windows can be opened simultaneously. The Data Center perspective is displayed initially in the first window that is opened. The shortcut bar appears in the top right corner of the window to allow the user to open new perspectives and switch between those already open. The window title shows the name of the active perspective; its item is highlighted in the shortcut bar.

View: Views are typically used to navigate a hierarchy of information, or to display properties for the active view. Modifications made in a view are saved, or applied, immediately. Except for the monitoring view, only one instance of a particular type of view may exist within a window. Views can be tiled side-by-side so their content can be viewed simultaneously.

Folder: Folders contain views. They are typically used to edit, or to browse, a resource. Modifications made in a folder follow an open-save-close life cycle model. Multiple instances of a folder type may exist within a window. If a folder tab is highlighted, it indicates the view is active; an inactive view may show information based on its last active state.

Dialog: Dialogs pop up when input or verification is required for an action.

Wizard: Wizards guide you through a process such as installing a cluster, adding nodes to an existing cluster, or uploading software.

The Data Center

Figure 13 shows an example of the Data Center, which shows all the configurable objects in your data center. It contains the Data Center Selector, which you use to select
objects on which you can perform actions.



Perspective Selectors

Figure 13—Platform Manager Window



Working with Views
Platform Manager allows for great flexibility in arranging views. This section discusses the
following topics:
• Data Center Selector View on page 23
• Opening Views on page 32
• Moving views on page 34

Data Center Selector View

The Data Center Selector view is central to the user interface because it lists all the
objects that you can configure in the data center.

Figure 14—Node right-click menu

If you right-click on a Node you can edit items as you drill down through the list. If you
right-click on a node name or a cluster, you can edit information about it by choosing
Configure from the drop-down list. See Figure 14.



To display the list of associated nodes, click on the arrowhead next to the cluster name.
Select the node name to make changes to that node. You can also access lists of items
organized by network, operating system, specified software, or services, and vendor.
By right-clicking on icons for cluster, node, or subnet you can choose from a drop down
list to perform actions on the object.

Cluster right-click menu


Right-click on a cluster icon to bring up a menu.

Figure 15—Cluster right-click menu

For cluster icons the choices are as follows:


• Configure will open the Server Creation Wizard

• Create Group opens Group dialog

• New will open the Server Creation Wizard

• Install Cluster

• Install Platform Manager on the Cluster

• Delete



Figure 16—Node right-click menu: Configure

Configure
See Initial Cluster Deployment on page 3 for details on how to configure a
cluster.

Create Group
You can group nodes to suit your needs. Common groupings include by machine
model and by OS.



Figure 17—Create Group dialog

To create a group

1 Highlight one or more nodes


2 Right-click on the node(s)
3 Select Create Group
4 Enter a group name in the text box.
5 Click Ok.
The group will appear under the User-defined Groups heading in the Data
Center.



Create NIS Server...
To create a NIS server

1 Right-click on a node.
2 Select Create NIS Server...
3 Enter a domain name in the text box.
4 Choose whether this node will be a master or a slave.
5 If the node will be a slave, enter the name of the master you want to use.
6 Click Create.

Figure 18—Create NIS Server view

Capture Image
This selection starts up the Capture OS Image dialog. Please see
“Image-based Provisioning” on page 74 for more information.

Show Installation History


Selecting this item brings up the node’s installation history.



Figure 19—Installation History

Remote Access
Selecting Remote Access brings up a submenu where you can open a Console
or a Terminal.

Figure 20—Remote access submenu

Node On/Off
Clicking on this item will bring up a submenu of power selections for the node.



Figure 21—Node On/Off submenu

The submenu contains the following choices:


• Power On turns the node’s power on, if your node is connected to a power
switch.
• Clean Shutdown closes all applications through the software, then powers
down the node.
• Power Off turns the node’s power off via a power switch, if your node is
connected to a power switch.
• Clean Reboot closes all applications, using software, then reboots the
node. This is also known as a “warm reboot”.
• Power Cycle shuts down then powers up the node through the power
switch. This is in effect a “cold re-start” or a “hard reboot”.

Preview Pending Changes


Clicking on this item will result in a screen as in Figure 22.



Figure 22—Preview Pending Changes Pop-up

• Selecting Apply Configuration applies the changes that were saved to the CIM database to the node.
• Selecting Install Server(s) will bring up the confirmation pop-up asking you
if you really want to install the server.
• Click Yes to install.

• Selecting Install Platform Manager on the Server(s) will bring up the


confirmation pop-up asking you if you really want to install the server.
• Click Yes to install Platform Manager on the selected server(s).

• Delete removes the cluster and the software from the servers.

• Remove from Cluster will remove the selected nodes from the cluster, but
leave the software intact. The nodes become independent servers.

Subnet Right-click Menu


To change a subnet, right-click on the subnet to open the menu.



Figure 23—Subnet right-click menu

Figure 23 shows the menu choices for subnet icons.


• Configure opens the Create Subnet View

• Create Group opens User Input pop-up window.

• Delete

Figure 24—Subnet Configuration



You create a subnet by first opening the Subnet Configuration view as shown in
Figure 24:

1 Enter the Subnet name.


2 Enter a description.
3 Enter the network mask.
4 Enter the Route-able subnet.
5 Click Save or Reset.

Opening Views

You can have several views in the main window at the same time.



To open a View:

1 Select an object in the Data Center Selector.


2 Choose Window -> Show View from the tool bar menus. Figure 25 shows the Show View
dialog.

Figure 25—Show View dialog

If you do not see the view name in the list:


a Choose Other. The Show View dialog opens.
b Click the name of the view that you want to add.
c Click OK. The view opens in the Platform Manager window.
A second view, the Subnet Configuration View, is added to the window that already has one view open, as shown in Figure 26.



Figure 26—Subnet Configuration View

Figure 26 shows a folder with an additional view. The new view opens in a folder next
to the existing view.

Moving views

You can position views in several ways for a more effective work environment.
Dragging a view below, above, or to the side of another view will cause the views to
dock in place. The space occupied by the stationary view will be redistributed between
the stationary view and the view you are dragging. As you drag a window, the mouse
pointer will become a black arrow whenever it is over a window boundary, indicating
that docking is allowed.



If you want to move a view into a folder:

1 Right-click on the view name tab. This will give you several checkboxes.
2 Select Move, then choose either View or Tab Group.
3 Drag and drop the view into another view or tab group.

Figure 27—Detached View

The two views appear together in a folder initially. To separate them, as in Figure 27,
click and hold the tab of Installation Progress Status view, then drag it to the bottom
of the window using the highlighted area as a guide.



To detach a view

1 Right-click on the view name tab:


2 Choose Detached. A check mark appears next to Detached and the view displays in a
separate window.
To reattach a view:

1 Right-click on the view window title.


2 Choose Detached. The check mark next to Detached disappears and the view returns to its
former position.
To use Fast View to change the view into an icon:

1 Right-click on the view name tab.


2 Choose Fast View. A check mark appears next to Fast View and the view changes into an
icon in the bottom left corner of the window.
Note: The standard minimize button places the icon at the top of the perspective area.
To resize a view:

1 Right-click on the tab.


2 Select Size, then Left or Bottom.
3 Drag view border to change the view’s size.
To maximize, or to minimize a view:

1 Right-click on the tab.


2 Click the maximize icon. This maximizes the view within the main window.
3 Clicking the minimize icon reduces the view to its regular size.



Chapter 3 - Provisioning the Data Center
Platform Manager is designed to make the management of clusters and servers within a
data center as efficient as possible.
This chapter describes the tasks for provisioning clusters and nodes in the data center.
About the Provisioning Process on page 37
The Server Creation Wizard: Provisioning a Cluster on page 39
Software Installation on page 68
Discovery and Managing existing servers on page 83

About the Provisioning Process


Platform Manager uses the following operating principles for provisioning:
• Platform Manager frontend is the central point of control for deployment,
managing, and monitoring of the datacenter.
• The Platform Manager frontend deploys all software onto the nodes, so you must
upload the software into the repository prior to deployment.
• Platform Manager uses the RPM format for files. You can use the scacpg tool to
create RPM files.
• All configurations are stored in the CIM database. You must deploy them in a
separate step.
• ScaNCE verifies that configuration files on the node match the configuration in
the CIM database. Platform Node Configuration Engine runs at startup and when
you choose Provisioning -> Apply All Configuration Changes.
You can install the Platform Manager frontend on an independent network server, or on the
frontend server of a cluster. When you configure the frontend of the cluster the cluster
nodes are identified automatically. You follow a similar process when installing a single
server.
Before provisioning, you must choose between Cluster, or Single Server provisioning.
Cluster provisioning allows multiple groups of homogeneous nodes to be combined in a
heterogeneous cluster.



There are two wizards that are used in the provisioning process:

1 The Upload Software Wizard adds all the required software to the repository so that it can
be deployed.
2 The Create Node Wizard completes the provisioning process for new clusters.
The Create Node Wizard guides you through three phases:

1 Configuring the frontend server. The configuration of the frontend reuses as much of the current Platform Manager frontend operating system configuration as possible.
2 Defining the cluster nodes. This is done automatically, assuming you have made the appropriate hardware connections and the nodes are active.
3 Installing the cluster. Once the frontend is configured and the nodes are defined, the cluster
installation can be completed and tested.

Scalability Considerations

As the number of nodes increases you may experience overall performance


degradation, but many of the causes can be tuned. At what level these challenges arise
is dependent upon a number of factors including topology, brands and models of
devices, and operating system(s), to name a few.
We recommend using static ARP for larger clusters. When starting an MPI job across a
large cluster using dynamic ARP, every node will try to connect to all other nodes at the
same time. Depending on how large the cluster is and the quality of your switches,
application performance may be significantly affected due to the load put on them.
The point at which slowdowns arise will vary according to your network capacity, RAM and clock speed. As a solution, Platform Manager has an option to configure all the nodes with static ARP.
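To see whether static ARP is actually in effect on a node (a generic Linux check, not a Platform Manager command; the enablestaticarp, disablestaticarp and liststaticarpmapping commands in Chapter 10 manage the setting itself), list the node's neighbor table and look for permanent entries:

# ip neigh show
# arp -an

Statically configured neighbors are reported as PERMANENT by ip neigh and flagged PERM by arp.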
Another factor to consider is that your NFS and NIS servers may not be able to handle parallel requests from so many nodes; with NIS, problems typically start to appear in clusters with more than 256 nodes. As a solution, Platform Manager allows for multiple NIS slaves and parallel file systems.
For clusters in the 500-1000 node range you would be wise to use faster CPUs, faster disks and more RAM on the Platform Manager frontend. There are a number of fine-tuning options which can speed things up. When clusters reach 500+ nodes we advise you to allocate more than one management server, running monitoring on one server and placing the configuration database on another.



The Provisioning Process Assumptions

The provisioning process keeps track of the number of nodes and whether the frontend should also be used as a processing node. It makes the following assumptions about the cluster (see the sketch after this list):
• The network connected to the nodes is named n0 with an IP address of
10.0.0.1.
• Cluster nodes are named n1, n2, etc. with IP addresses starting at 10.0.0.2.
• The frontend can keep its current name and configuration for the external
interface (assuming nodes are connected to a private network).
• NAT is enabled on the frontend.
• Nodes are installed with the same OS distribution as the frontend.
• The first ethernet interface (eth0) is used for node installation.
• The frontend is configured as a NIS slave server if it is configured to be a part
of a NIS domain prior to provisioning.
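As a sketch of what these defaults imply (purely illustrative; the names n1, n2 and the 10.0.0.x addresses follow the assumptions above, and reading 10.0.0.1 as the frontend's own address on n0 is an interpretation of the first assumption), the private network resolves roughly like this:

10.0.0.1    frontend's address on the private network n0
10.0.0.2    n1
10.0.0.3    n2
10.0.0.4    n3

The frontend keeps its existing name and public-network configuration, and NAT on the frontend gives the nodes access to the public network.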

The Server Creation Wizard: Provisioning a Cluster


Certain functions are unavailable with the Cluster edition. If you are unsure of which edition
you have installed you can check by entering:
$ lmutil show
to list the installed product keys.
To create more than one cluster you will need to upgrade to Platform Manager/Enterprise
edition. Please contact Platform for more information.
The radio button “Cluster on private network” is disabled for the Platform Manager Cluster
edition. You are able to create only one cluster in the Platform Manager/Cluster edition.

Note: You could define a homogeneous set of nodes and redefine the brand
later. This option may be limited in the future to redefining brands from
generic nodes only (for example: Generic i386 or Generic x86_64).
WARNING—We do not recommend adding the Platform Manager frontend to the
cluster as a compute node. This will slow down the total performance of your
system.



You install a new cluster by using the Cluster/Server Creation Wizard in the full version of
Platform Manager. There are four ways to open the wizard:
• Click Provisioning -> Define Servers, or

• Window -> Open View -> Other -> Install Wizard, or


• Click the icon in the tool bar, or
• Right-click on the top Cluster icon or the Independent servers icon in the data
center selector and choose “New” option.

Note: A text message will appear at the top of the wizard's windows to prompt you as you fill out the forms (see Figure 28, “Install wizard's help text”, on page 40).

Figure 28—Install wizard’s help text

Selecting a Network Topology

You have to choose a network topology prior to deploying a cluster. A typical cluster
may have at least three networks:
The management network is the network that connects the frontend to all servers using Ethernet or Gigabit Ethernet.
The console network is connected to all baseboard management controllers.
A fast interconnect network, such as Myrinet, may be connected to the frontend.
There are basically two types of network topologies. Each type may have any number of variations dependent upon your computing needs. The main difference between the
two is the Cluster Gateway Server, found in a private cluster as you can see in the figure
below.



Figure 29—Single server, Flat and Private Clusters

Typical Cluster Workflow


As you make deployment choices when setting up your cluster, keep in mind the
workflow of the cluster.



Figure 30—Cluster workflow

1 Access the frontend using ssh.


2 Submit a job to the workload manager queue with instructions for running this application
with a specified data set.
3 Put the job in the workload manager. The workload manager identifies which nodes are
available and starts the job on those nodes.
4 Job instructions are sent over the interconnect.
5 When the job is done the results are written over NFS, or to another clustered file system
back in the SAN, or to another designated place where you can access the resulting files.

Route-able Subnet Groups


Whenever you create a system (a server, switch or BMC) with a network interface,
the IP address must be associated with a subnet. Platform Manager assumes that
all systems connected to the same subnet can communicate with each other.
Platform Manager also assumes that if two or more systems have no common
subnet for any interface, they cannot communicate.
It is still possible for systems on separate subnets to communicate by configuring
routing using gateways or other network routing setups. If you have these types of
configurations in your network, you must configure Platform Manager to treat them
as if they are parts of the same network.
By associating two or more subnets to an instance of a 'Route-able Subnet Group',
Platform Manager allows communication among all the systems connected to your
subnets in that group. Platform Manager uses this information to deduce the
network connectivity. Platform Manager does not configure the actual routing setup
and does not care about how it is implemented.



You can create an unlimited number of subnets or route-able subnet groups. Both
subnets and subnet groups may be associated with zero or more other subnet
groups.

Figure 31—Route-able Subnet Group

The 'Route-able Subnet Group' can be depicted as a “Network Cloud” connecting
two or more subnets. How the Route-able Subnet Group is implemented has no
bearing on its use in Platform Manager.

Create Server Wizard: Creating a Cluster with a Private Topology

A cluster with a private network includes a gateway, behind which one places compute
nodes. To the outside observer the cluster appears to be a single node.



Figure 32—Cluster Creation Alternatives Private Network

You can choose to create your node(s):

• in an internal subnet behind a frontend node managed by Platform Manager located
on your data center network, or
• as members of your data center network, where the nodes are accessible
individually.
The frontend should be equipped with two Ethernet interfaces to integrate the cluster
with the surrounding data center (file systems, user workstations, etc.). These could
be named, for example, “eth0” and “eth1”. The interface named “eth0” connects the
cluster to a corporate network. The interface named “eth1” connects to the cluster's
private network. A separate network can be used for out-of-band hardware
management.
Setting alternatives

1 Choose to create a private subnet, a flat network, extend an existing cluster, or independent
servers.
2 Fill in the Number of servers. Enter the total number of nodes that will be created. You can
create multiple independent servers or multiple clusters and subnets. If you choose “Cluster
on private network”, the number of servers must include at least one gateway server and one
node. Platform Manager verifies network configurations and will not allow, for example,
500 nodes on a subnet with 255 IP addresses.
3 Enter the cluster name when the “Cluster flat...” or “Cluster on private...” options are
selected. If you chose the “Extend existing cluster” option, you are presented with a
drop-down box listing existing clusters. Choose which cluster you want to extend. This list
is disabled for all options other than “Extend existing cluster”.
4 The template functionality is an optional feature for all wizard creation options. The drop
down box lists all existing nodes and independent servers. All relevant configuration details
(server brand, BMC configuration, OS/image/template, subnets, gateway, etc.) will be
pre-loaded based on the selected server.
5 Click Next to move to the network gateway configuration dialog.

Create Server Wizard: Configuring a network gateway

By configuring all the nodes on a private subnet (10.0.0.0), the frontend can make all
requests from the nodes appear as though they came from one machine. The NAT
(Network Address Translation) feature allows file systems outside the cluster to be
mounted through the frontend via NFS.
The NAT values are used to configure the iptables rules that manage IP packet filters. In the
subnet specification, the Network Address and Network Mask specify the addresses that
should be translated. Platform Manager configures the iptables rules to alter the source
address in packets from the cluster nodes as they exit the cluster (via the frontend).
As a result, the cluster appears to the network as a single computer.
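Platform Manager writes and maintains these rules itself; you do not edit them by hand. As a
rough illustration only (the subnet and interface name here are assumptions, and the exact rules
Platform Manager generates may differ), source NAT for a 10.0.0.0/24 private network leaving
through the frontend's public interface is conceptually equivalent to:

# Illustrative only; Platform Manager maintains the real rules.
# Masquerade traffic from the private cluster subnet (assumed 10.0.0.0/24)
# as it leaves the frontend's public interface (assumed eth0).
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE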



Figure 33—Gateway Configuration



The dialog page in Gateway Configuration on page 46 will only appear if you select
“Cluster on a private network”.
1 Choose your network interface. The default public network interface is eth1.
2 Enter the external host name.
3 Choose the subnet, default gateway and network interface. The pre-loaded subnet and
pre-loaded default gateway are on the same network as the Platform Manager frontend.
4 Pre-select your ethernet device for your private network interface in an internal network
configuration.
5 Enter the internal host name.
6 Choose the subnet.
7 Enter the IP address.
8 Click Next to go to Node Hardware Configuration.

Create Server Wizard: Configuring node hardware



Figure 34— Node Hardware Configuration



WARNING—If you do not add a BMC configuration you will be prompted
with a warning message: “Not Configured!” The BMC's configuration summary will
show configurations as “unspecified” until you edit the configuration by clicking the
Edit button.
• Choose the Server Model from the list of the brands of all nodes in use. If you
choose a server brand without a BMC, the Enable BMC configuration check box
is disabled.
• If the server brand supports BMC configuration, enable the check box and click
Edit.

Create Server Wizard: Configuring the BMC ethernet interface

If the server model you selected in the Node Hardware Configuration dialog has a
baseboard management controller, you can configure it.
Monitoring variables such as CPU temperature, fan speeds, and voltages requires
detailed knowledge about the computer. 3rd party monitoring software may be
installed.



Figure 35— Configuring the BMC ethernet interface dialog

1 Choose the Subnet from the list.


2 Enter the first IP address of the IP range dedicated to the BMC’s.
3 Enter the IP stepping. IP stepping specifies the IP stride within the selected subnet.
For example:
Stepping = 1: 192.168.0.1 - 192.168.0.2 - 192.168.0.3 - ...
Stepping = 2: 192.168.0.1 - 192.168.0.3 - 192.168.0.5 - ...
4 Enter the prefix for the BMC host names.
5 Enter the start number. The start number default is 1.
6 Enter how many 0’s you want in the zero padding.
7 Enter the suffix of your choice.
8 If the preview of the host names is to your liking, click OK to go back to the previous
dialog window.
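For example (illustrative values only, not defaults): a prefix of bmc-, a start number of 1, zero
padding of two 0’s and an empty suffix might produce host names such as bmc-001, bmc-002,
bmc-003, and so on. Use the preview in the dialog to confirm the exact names before clicking OK.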



Figure 36— BMC User information

1 Enter your BMC user ID.


2 Enter the BMC password.
3 Confirm the password.
4 Click Next to advance to Configuring the Operating system dialog.

Create Server Wizard: Configuring the Operating System

You have three choices.


• Select OS Based, Image Based, or Diskless Image to install an operating
system package or an image captured from another node. Click on an
operating system or an image in the list. The default is OS Based. “Ready
to install” means that the OS or image is in the repository and ready to be
installed on the servers.



• A drop-down list shows the compatible operating systems that have been
uploaded to the repository under Operating Systems in the Data Center
Selector. When the Image Based radio button is selected, all existing images are
listed. When you choose Diskless Image, you will only get a list of images that
have been captured as diskless-ready (captured with Platform Manager
5.5 or later).
• Select Default templates from the available, compatible templates, depending
on the selected OS/image.
• If the OS is not in the list, use the Provisioning -> Upload Software Wizard (see
“Upload Software Wizard” on page 68) to select the location of the
source files and upload them into the repository.
• Click Next to continue to The Network Configuration dialog.

Figure 37—Configuring the operating system



Create Server Wizard: Adding the Configured Ethernet
Interface

This dialog allows for configuration of Myrinet and Infiniband cards.

Figure 38— Configuring the node network

Click on Add to configure your ethernet interface. You will get a dialog box as shown
in Figure 39.



Figure 39—Configuring the node network, part 2

1 Enable the “Make interface bootable” check box.


2 Choose the interface from the list.
3 Choose the Subnet from the list.
4 Enter the first IP address of the IP range dedicated to the ethernet interface.
5 Enter the IP stepping. IP stepping specifies the IP stride within the selected subnet.
For example:
Stepping = 1: 192.168.0.1 - 192.168.0.2 - 192.168.0.3 - ...
Stepping = 2: 192.168.0.1 - 192.168.0.3 - 192.168.0.5 - ...
6 Enter the prefix for the interfaces.
7 Enter the start number. The start number default is 1.
8 Enter how many 0’s you want in the zero padding.
9 Enter the suffix of your choice.
10 If the preview of the host names is to your liking, click OK to go back to the previous
dialog window.



When running the wizard with a server template, the network interfaces will be
pre-filled with the same configuration as the template server. Check the IP addresses and host
names so that they do not conflict with existing addresses or names in the same subnet;
you will have to edit the pre-filled input, but it gives you a basis for the network
configuration. See “Configuration Overview” on page 83 for how to
reconfigure your network. For all creation alternatives of the wizard, all network
configurations may be changed by the user, even when configuration details are
pre-filled. The network configuration is homogeneous, so all nodes must have the same
number of interfaces and follow the same configuration convention per interface.
The Myrinet and Infiniband check boxes are, by default, unchecked. If you are using
Myrinet or Infiniband, check the check boxes to enable the list of available
drivers for each type of card.
If the settings are to your liking, click Next to go to the Configuring the DNS and NTP
dialog.

Create Server Wizard: Configuring the DNS and NTP

DNS configuration lets you specify name servers and the search list for host name look
up. If more than three name servers are defined before Platform Manager is installed,
additional lines will appear in the dialog.
The Network Time Protocol (NTP) synchronizes a computer’s time to that of another
server, or reference time source. Failure to synchronize nodes can lead to strange
behavior in a cluster, so the installer allows NTP to connect to a particular source.



Figure 40— The DNS and NTP tab

The DNS

To configure DNS:



• Enter a domain name for the name server, then click Add to add it to the
name server list.
• Or click on the server in the search list.
You can navigate through the list of servers using the Move Up and Move Down
buttons.
To delete a name server:

1 Highlight it
2 Click Remove

The NTP
To configure NTP:

1 Check the Enable NTP box.


2 Enter the name, or IP address of the NTP server.
3 Click Add. You can specify multiple NTP servers.
4 Click Next to continue to the NIS Configuration dialog.

Create Server Wizard: Configuring the NIS and LDAP Client Services

Clusters are easier to use if user information, such as login names, passwords, or home
directories, is equally accessible to all nodes in the cluster. NIS ensures this kind of
homogeneity.
Configure the NIS service, if necessary.
Configure the LDAP service, if necessary.
Click OK.
Remember that your changes will be saved in the database, but not enabled before you
run the configure procedure.



Figure 41—NIS and LDAP Client Configuration dialog

Create Server Wizard: Configuring NIS


The top half of the dialog screen configures the NIS service.



Figure 42—Create Server Wizard: NIS Server IP Pop-up

1 Check the Enable NIS box.


2 Enter an NIS Domain name.
3 Click the radio button to choose Broadcast, or Unicast.
4 Choose an available server from the list, or enter the NIS Server name.
• Click Add. The NIS Server IP dialog pops up (see Figure 42 on page 58).

5 Enter the IP address.


6 Click OK. This brings you back to the previous screen.
You may remove NIS servers from the search list with the Remove button.
If you are using NAT, or you want to create your own user domain, you have to
identify the server(s) role(s). You must specify the NIS master server, any slaves,
or external servers. For a master or slave server, you can specify the domain name.

Configuring the LDAP Client Service


The bottom half of the dialog screen configures the LDAP client service.



Figure 43—Create Server Wizard: LDAP Server IP Pop-up

1 Check the Enable LDAP box.


2 Enter a Base DN.
3 Choose the LDAP server from the list.
4 Choose an available server from the list, or enter the LDAP Server name.
5 Check the Enable TLS box.
6 Click Add. The LDAP Server IP dialog pops up (see Figure 43 on page 60).
7 Enter the IP address.
8 Click OK. This brings you back to the previous screen.

Adding LDAP Client Services in the CLI


You can configure LDAP client services for single or multiple nodes using
addldapclientservice in the CLI like this:
pmcli addldapclientservice <systemnames> <ldapbasedn> <ldapservers>
[enabletls=False]
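For example, a minimal sketch (the node name n001, base DN and LDAP server name below are
made-up values, not defaults; quoting the base DN keeps the shell from interpreting it):

# Enable the LDAP client service on node n001 against one LDAP server,
# leaving TLS at its default (disabled)
pmcli addldapclientservice n001 'dc=example,dc=com' ldapserver1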

LDAP and the NIS


Platform Manager can set up a NIS server to be used by nodes configured with
Platform Manager. This NIS server shares user information (usernames, passwords) and
configuration data, such as automount maps.
Mounting users' home directories with automount on a NIS

1. Add a user home directory in "/etc/auto.home" for each user:


#NIS server /etc/auto.home
* -intr,hard,nfsvers=3,tcp,rsize=32768,wsize=32768,timeo=30000 <nfsserver IP for users
home>:/export/home/&

If you want to add directories for specific users, use this example:

#NIS server /etc/auto.home
user1 -intr,hard,nfsvers=3,tcp,rsize=32768,wsize=32768,timeo=30000
<nfsserver IP for user1 home>:/home/user1
user2 -intr,hard,nfsvers=3,tcp,rsize=32768,wsize=32768,timeo=30000
<nfsserver IP for user2 home>:/home/user2

2. Define map in "/etc/auto.master":


#NIS server /etc/auto.master
/home auto.home

3. Edit "/var/yp/Makefile" for mapping of auto.home, auto.master, etc. Append auto.home,
auto.master, etc. to the "all:" section of the Makefile.

4. Run "make" in /var/yp/

5. Use the "addnfsexport" command to add an NFS export for your home directory. For example:
#pmcli addnfsexport nfsserver /export/home/
--client="*(rw,sync)"

Mounting a user's home directory without automount on a NIS


You need to create local home directories for users on each NIS client manually,
or mount the user's home directory from fstab without automount:
[root@nisclient ~] #mkdir /home/<username>
Or
You can mount an exported home directory for NIS users on a NIS client with
addremotefs. For example:
# pmcli addremotefs nisclient nfs nfsserver:/export/home /home
--options="intr,hard,nfsvers=3,tcp,rsize=32768,wsize=32768,timeo=30000"
Please see the sections on the following commands for more information:
addnfsexport
addremotefs

Creating a subnet

Start in the Data Center.



Figure 44— Creating a new subnet, part 1

1 Right-click IP Networks in the Data Center Selector.


2 Click Create New Subnet.

Figure 45—Create a new subnet, part 2

3 Enter the information as appropriate.


4 Click Create New Subnet to make the changes. If you only save your changes, they will not
take effect until you apply them.
The new subnet will appear under the IP Network icon in the Data Center.



If you want to modify the subnet at a later time, see “Modifying a subnet” on page 105.

Adding Myrinet, Power and Console Switches

You can add Myrinet, power and console switches using the dialogs available when you
choose Provisioning -> Configure Switch.

Figure 46— Adding a switch, part 1

Myrinet Switches
Add a Myrinet Switch.

Figure 47— Adding a Myrinet switch

1 Select Add Myrinet Switch from the menu. The Add Myrinet Switch dialog pops up.
2 Enter a name for the switch
3 Enter a valid IP address on your subnet.
4 Click OK. The new Myrinet switch will appear under the switches icon in the Data Center.

Power Switch
Adding a power switch allows you to boot and shut down your nodes remotely.



Figure 48—Adding a Power Switch, part 1

Figure 49—Adding a Power Switch, part 2

1 Select Add Power Switch from the menu. The Add Power Switch dialog pops
up.
2 Select a power switch model from the combo box.
3 Enter a name in the text box.
4 Enter a valid IP address on your subnet.
5 Click OK.

The switch will appear under the switches icon in the Data Center.

Console Switch
Next you will configure the console switch.



Figure 50—Adding a Console Switch

1 Select the switch model from the combo box.


2 Enter the name of the switch.
3 Enter a valid IP address on your subnet.
4 Click OK.

The switch will appear under the switches icon in the Data Center.



Create Server Wizard: Adding Software options

There are two Platform Manager options that you can configure:
• Scali MPI Connect.
• Monitoring software



Figure 51— Installing Platform Manager software options

1 Choose the appropriate software version of Scali MPI Connect from the drop down list. The
list of software versions which are compatible with your hardware appears in the drop down
boxes.
2 Choose the appropriate Monitoring software version from the drop down list.
3 Click Next to open the last dialog in the installation wizard.
4 You have two choices:
• install the configuration immediately, or
• store the configuration settings for installation at a later time.

5 Click Finish. This only makes changes to the configuration database and brings up the
Configuration Setup Completed dialog. Changes in the actual configuration of your
cluster will not happen until you choose to apply the changes.



Figure 52—Configuration Complete dialog

You have two choices:


• Install operating system and Platform Manager now

• Store configuration for later use

Software Installation
Once you have defined your cluster, you must upload and install the software.

Upload Software Wizard

You can upload operating systems, local packages (RPM), and supported 3rd party
packages using the Upload Software Wizard. You may start the wizard by using the tool
bar menu Provisioning -> Upload Software



Figure 53—Upload Software Wizard - Upload Page 1



Figure 54—Upload Software Wizard - Upload Page 2



Figure 55—Upload Software Wizard - Upload Page 3

Figure 56—Upload Software Wizard - Upload Page 4



To upload software:

1 Select the type of software to upload.


2 Select the software and click Next.
3 Select retrieval method.
4 Navigate to the file and click OK.
5 Repeat steps 1 through 4 for each software package.
6 Click Finish to upload the software.

About the Installation Process

Platform Manager 5.7 offers three forms of installation:


• Package-based Installation on page 72
• Image-based Provisioning on page 74
• Diskless Image Provisioning on page 79

Package-based Installation
The Platform Manager installation process is illustrated below.



Figure 57—Platform Manager package based installation process for Red Hat based
distribution

During the initial installation, nodes are powered on one by one. Since the nodes
are set up to boot from the network (PXE), they issue DHCP requests to contact
a server for boot images. The frontend responds to this and returns a bootloader
(pxelinux). The bootloader is then used by the node to retrieve a Linux kernel from
the frontend's TFTP server. This kernel requests a (package-based) kickstart
configuration file that guides the node through the process of configuring and
installing software. The node then reboots itself and, as part of the boot process,
Platform Manager is installed on the node. The hardware (MAC) address of the
ethernet card on each node is recorded during this process; if nodes are reinstalled,
multiple nodes may be installed simultaneously.
The dialog shows all the steps necessary to add the server. The process is similar
to provisioning a cluster. There is only one additional step in the next dialog where
you identify the brand of the server. After you complete that step, follow the
description for provisioning a cluster. Click Next to start the process of adding a new
single server.

Image-based Provisioning
As an alternative to package-based provisioning, Platform Manager allows you to
capture a core image from one of your installed nodes and deploy it on other nodes.
You can also use package- and image-based installation methods together or
separately.

Provisioning Process
Once you have a system tweaked and functioning, you can use it to provision similar nodes.



Figure 58—Capture Image View

1 Install a golden node using package-based installation.
2 Capture the core image.
3 Define personalization requirements for selected nodes (e.g. NFS server).
4 Install the core image on all nodes with respective personalization packages for select nodes.

Capturing an image

1 Right-click the node in the Data Center Selector.
2 Choose Capture Image. The Capture OS Image View opens.
3 Enter the server name, image name, a description of the image, and any paths to exclude,
then click OK. The active Capture Image Jobs View opens.

Figure 59— Active Capture Image Jobs view



Figure 60—Data Center detail: Capturing An Image

You can change the image information by right-clicking the image name.

Figure 61—Image Configuration

You can edit the information in the view, then click Save, or click Reset if you
change your mind.

Image Export
Platform Manager supports export of a captured image as a tarball.



Figure 62—Image Export Dialog

• Right-click on a captured image.
• Select the 'Export Image' option. A dialog box appears.
• Browse for the location and name of the tarball.
• Click OK.

Image Import
An exported tarball can be imported by other Platform Manager frontends.



Figure 63—Image Import Dialog

• Right-click on a captured image.
• Select the 'Import Image' option. A dialog box appears.
• Browse for the location and name of the tarball.
• Click OK.

Image Deployment
Once you have a system tweaked and functioning, you can use it to provision similar nodes.
• Capture an image, or images from existing nodes.
• Select a node, or group of nodes in the Data Center Selector.
• Right-click on the node name.
• Click Configure -> Provision.

• Click on the Distribution Settings tab.



Figure 64— Distribution Settings

• Right-click on the listing for the server.


• Click on Image Based radio button.
• Select the image to deploy, and the Installation and TFTP templates, from the
drop-down lists. See “Customizing Installation templates” on page 76 for
more information about templates.
• Click OK or Cancel. By clicking OK, the new image is deployed on the
selected node(s).
You can also use the New Cluster Wizard to deploy an image. In the Operating
System selection step, click Image, then select an image from the list.

Diskless Image Provisioning


A diskless machine is a server without any of the usual boot devices such as hard
disks, floppy drives or CD-ROMs. Diskless nodes are the basis for ad-hoc harvesting
of computational power.
The diskless node boots off the network and needs a server that will provide it with
storage space as a local hard disk would. (From now on we call the server the
“master”, and the diskless machine we call the “slave”.) The slave node needs a
network adapter that supports PXE booting.
Platform Manager supports diskless nodes both as nodes in a cluster and as
stand-alone servers.



Installation templates

Templates are used to differentiate nodes during installation, for instance to:
• change partitions on the server
• install packages
• change time zone information
• change keyboard layout
• use post-install scripts
There are two types of templates:
• TFTP templates are configuration files for PXE Linux and are mostly used to
give kernel parameters to the installation kernel.
• Installation templates are Red Hat Kickstart files, SUSE Autoyast XML files,
Scalamari Kickstart files, or diskless templates.
Installation templates are associated with the installation job for a node during setup in
the installation wizard. You set the template usage on the OS/Image selection page in
the wizard. By clicking the Advanced button, you get the option of changing the
default templates (see Figure 65). For any given OS/Image, only the compatible
templates will be listed.

Figure 65— Accessing the template settings

Customized Template

You can change the templates that are currently associated with the last installation job
for the node. Changing templates will require a new installation of the node. Default
templates cannot be deleted or changed, but you can copy them to create your own
templates.

Modifying a template
Right-click on a template in the Data Center to make changes to it.
If you selected a default template, you can only “Copy”.
If you selected a custom template, your choices are:
• Copying the template.
• Configuring the template.
• Deleting the template.



Copying a template
The Copy Template dialog view will appear. This has a field for the original name of
the template, a field for the name of the copy you will be saving and a field for the
contents of the template in XML format.

Figure 66— Copy Template dialog

• Enter a new name for the template


• Enter the changes you must make.
• Click OK. The new template will appear in the Data Center under templates.

Configuring a template
Copied templates can be modified later on. You can use keyword
substitution in all of the templates. The format for the parameters is %(parameter)t,
where t represents the variable type. You can find a list of template keywords in
“Template Keywords” on page 285.



Figure 67— Custom Template right-click Menu

Figure 68— Modifying a template



Figure 69— “Template configuration updated” Pop-up

• Right-click on your custom template and select Configure.


• Enter the changes to the template.
• Click on Save. The “Template configuration updated” pop-up appears.

• Click OK to close the pop-up.

Deleting a template
Click on Delete.

Discovery and Managing existing servers


Using the Discovery functionality in Platform Manager you may add existing servers to
Platform Manager. If multiple servers are defined you may also include them in a cluster.
As part of the discovery process Platform Manager will install the necessary software
packages to enable management. After a successful discovery there is no difference
between a server that was originally deployed with Platform Manager and one that was
installed elsewhere and later added by discovery.

Prerequisites for Discovery

• The server must be installed with an OS supported by PM.
• The given OS must be uploaded to the PM repository.
• The server must be configured for remote root login with rsh/ssh, either password-less
or with a password.
The Discover dialog adds an existing single server to Platform Manager.

Node Discovery

You may add existing servers to Platform Manager by using the Discovery functionality
in Platform Manager. You can also set the level of management by Platform Manager.
The range of a discovered node may run from being indistinguishable from a server
originally set up by Platform Manager to a fully unmanaged node of which Platform
Manager is only “aware”. Discovered servers can be added to a cluster or be configured
as individual servers.

Note: Managing independent servers is not available in Platform Manager/Cluster.


Note: The discovery process will only work with servers that have a PM-supported
operating system installed and that accept remote login with rsh/ssh (with or without
a password).
Platform Manager discovery is a simple three-step process.
• Discover the existing server: Platform Manager will attempt to log on to the
existing server and report the discovered server’s configuration back to the
Platform Manager frontend. This information includes the operating
system (distribution and version), IP address configuration and any known
services hosted on the server.
• Enable management of the server: Platform Manager registers the software
modules it needs to install on the server, configuring the discovered server
with the latest Platform Manager software and default settings in the
configuration database. No actions are taken on the server itself at this step.
• Install management software on the server: The software added in the previous
step will be installed and Platform Manager will ensure that the server
configuration is in sync with the configuration in the database.
After completion of all three steps above, the server can be fully managed,
configured, monitored and provisioned as though the server was originally defined and
deployed by Platform Manager.

Note: You will not be able to re-install a discovered server using the RPM-method unless
the already installed operating system is uploaded to the Platform Manager Software
Repository. Extra software installed “by hand” on the discovered server will not be
installed unless it is pulled in via RPM dependencies or configured as “Local
Product/Software” packages in Platform Manager.

The Discovery Process in the Platform Manager GUI

Running the Discovery process allows you to include pre-existing servers with varying
degrees of management.



Figure 70— The discovery dialog



To add existing server(s) to Platform Manager:

1 In the tool bar menu, choose Provisioning -> Discover Existing Servers. The discovery
dialog opens. See Figure 70.
2 Add the IP address of each server that should be discovered to the list.
3 If you want the discovered system(s) to be part of a cluster, enable the “Group new servers
into a cluster” check-box and enter the cluster name in the text-field “Cluster name”.
4 Set the password options. If the servers are set up to use a password for remote rsh/ssh login,
supply the root password in the “Root password” text-box. If the log-in needs no password,
check the “Password-less login enabled” check-box. By definition, the first step of the three
phase discovery process described above will always run. The check-boxes “Enable
Management Software” and “Install Software” determine if the next two steps of the process
should be run. All three steps of the process are enabled by default. Disabling “Install
Software” will postpone installing the management software on the discovered server.
Disabling “Enable Management Software” will result in a fully unmanaged server. Both of
these steps can be run later via the right-click menu in the Platform Manager GUI main
selector.
5 Click Finish.
The discovery process starts. The Active Discovery Jobs view appears. When the
discovery process is finished, the new cluster (if any) and discovered server(s) will
appear in the Data Center selector.



Figure 71—Active Job View



Right-click in the Active Job View to get a menu that lets you:
• open a Console, if you enabled management services
• open the Discovery Log (Figure 72)
• cancel the Discovery

Figure 72—Discovery Log


Running the Discovery Process using the Platform Manager CLI

For each step in the three-step discovery process there is a Platform Manager CLI
command:
You can discover nodes by using the discovernode command in the Platform Manager
CLI.



Table 3—discovernode
pmcli discovernode <ipspecs>
ARGUMENT DESCRIPTION
ipspecs - ip address(es) [..]

Platform Manager discovers the running configuration for an existing system.


The system must have ssh or rsh enabled, and either root login without a
password must be enabled from this system or the SSH_PASSWORD
environment variable must be set to the root password of the system to be
discovered.
Once the node is discovered, you must run “enablemanagementofservers” followed by
“installmanagementsoftware” if you wish to manage the discovered system.
See “enablemanagementofservers” on page 269 and
installmanagementsoftware on page 204 to enable management of the system(s)
by Platform Manager.

Table 4—enablemanagementofservers
pmcli enablemanagementofservers <systemnames> [servername]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
OPTIONAL DESCRIPTION
servername - the name of server; default Platform Manager frontend
--servername=SERVERNAME

This command adds Platform Manager software and services to a system in the
configuration database. It is primarily used for adding Platform Manager to newly
discovered systems. This command only affects the configuration database. Use
“installmanagementsoftware” after “enablemanagementofservers” for
deploying the software without reinstalling the Operating System.
You can install Platform Manager software on system(s) without reinstalling the
Operating system using “installmanagementsoftware” .



Table 5—installmanagementsoftware
pmcli installmanagementsoftware <systemnames> [netconfigtemplate] [installserver]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
netconfigtemplate - UUID of the installation template.
--netconfigtemplate=NETCONFIGTEMPLATE
installserver - server from where the installation job will run
--installserver=INSTALLSERVER

This will log into the system using a remote shell (rsh or ssh), and install
the Platform Manager software and services on top of an existing Linux
installation. A combined example of the three commands follows the list below.
• The system must be present already.
• The system must have the Platform Manager software and services enabled
in the Platform Manager configuration database prior to running
“installmanagementsoftware”. Please see “discovernode” on
page 269 and “enablemanagementofservers” on page 269
for adding existing systems to the Data Center.
• Root login without a password must be enabled, or the SSH_PASSWORD
environment variable must be set to the root password of the system(s) to
be discovered.
• The node must be set for “installmanagementsoftware”.
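Put together, a minimal sketch of the full three-step sequence looks like the following, mirroring
the documented command syntax (replace the placeholders with your own values, and set
SSH_PASSWORD first if root login requires a password):

# Step 1: discover the running configuration of the existing server
pmcli discovernode <ip>
# Step 2: add the Platform Manager software and services for the discovered
# system to the configuration database (no changes are made on the server yet)
pmcli enablemanagementofservers <systemnames>
# Step 3: install the management software on top of the existing OS
pmcli installmanagementsoftware <systemnames>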

Adjusting the level of management

In some cases full management of the existing servers might be undesirable. This
might be the case for servers used or configured by other management systems or for
servers that are under strict security policies and/or service agreements not related to
Platform Manager. For these cases partial management might be more advantageous.
Two examples of partial management are “Out Of Band monitoring only” and PBS Pro
clients configured with Platform Manager using an unmanaged PBS Pro server.

Setting up "Out of Band monitoring" on an unmanaged server


You can set up "Out of Band monitoring" on an unmanaged server with either the
GUI or the CLI: (example CLI commands given)
• Using the GUI or the CLI run only the first step of the discovery process
(described above) on the server.
• Add the BMC information for the server to Platform Manager.
• Enable Out Of Band monitoring
• Enable Console
• Enable Power

• Reconfigure.

Script for setting up out of band monitoring


# Discover nodes
pmcli discovernode <ip>
# Add the BMC information for the server to Platform Manager
pmcli addbmc <systemnames> <username> <password> <ipspecs>
# Enable Out Of Band monitoring
pmcli enablebmcmonitoring <systemnames> [monitoringserver]
# Enable Console
pmcli enablebmcconsole <systemnames> [consserver]
# Enable Power
pmcli enablebmcpower <systemnames> [powerserver]
# Reconfigure
pmcli reconfigure
You can now use Platform Manager for Out Of Band monitoring, console and power
control, but Provisioning, Configuration and In-band Monitoring are disabled.

Setting up PBS Pro clients using an unmanaged PBS Pro server

From Platform Manager 5.6 onward, you can have an existing PBS server set up outside
the environment managed by Platform Manager. The compute nodes managed by
Platform Manager may be configured as PBS clients (MOMs) providing computing
resources to the unmanaged PBS server. Using either the GUI or the CLI you can:
• Discover the PBS server as an unmanaged system.
• Define the system as a regular PBS server configuration.
• Set the compute nodes as PBS clients (MOMs) affected by the unmanaged PBS
server.
• Apply changes to provisioning.
• Configure the compute nodes as PBS clients.
A small utility script is added to the CLI to create a file listing all PBS clients in a cluster.
This file can be used to add the managed PBS clients to an unmanaged PBS server.



Table 6—createpbsnodefile
pmcli createpbsnodefile <clustername> [setfree=SETFREE]
ARGUMENT DESCRIPTION
clustername - the name of cluster
OPTIONAL DESCRIPTION
setfree - (optional) set nodes up and available for the PBS batch
system.
--setfree=SETFREE

This script creates a Qmgr file that defines all nodes in the cluster. This should only be
necessary for unmanaged PBS servers. It lists only the compute nodes that are PBS
clients (MOMs). This file can be used to add the nodes to the PBS server with the
following command; a short example sequence follows the command.

qmgr -c < nodefile.qmgr
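As a hedged sketch (the cluster name below is a made-up example, and the name and location of
the generated Qmgr file depend on your installation, so check the command output):

# Create the Qmgr node file for the cluster managed by Platform Manager
pmcli createpbsnodefile mycluster

Transfer the generated file to the unmanaged PBS server and load it with the qmgr command
shown above. The optional setfree argument can be used to mark the nodes up and available
to the PBS batch system.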



Chapter 4 - Configuration
Platform Manager is designed to make the configuration of clusters and servers within a
data center as efficient as possible. This section describes the tasks for configuring clusters
and nodes in the data center. Topics include:
Configuration Overview on page 93
Node System Settings View on page 94
Node Hardware Configuration View on page 98
Node Network Management View on page 102
Provisioning Management View on page 110
Node Service Management View on page 114
Remote NFS on page 141
Applying Changes on page 146

Configuration Overview
You can tweak and fine-tune the running of your clusters once you have gone through
provisioning. You must apply your configuration changes before they take effect.
There are two menus to get you started:
• the drop down menu bar at the top of the screen (see Figure 73)
• right-clicking on an element in the Data Center view (see Figure 74).

Figure 73—Menu -> Configuration ->Configure menu



Figure 74—right-click menu on a node in the Data Center

TIP: When working with several nodes at the same time, you may use
shift-click to select them and edit accordingly, if all your chosen options are to be
applied to all of the selected nodes.

Node System Settings View


There are two ways to configure the system settings.



Figure 75—Configuring the System part 1



To configure the System Settings:
• You may click on any field except Node UUID, enter changes directly into the
fields (see Figure 75), then click Save, or
• You may right-click on selected lines, which gives you two choices: Set Root
Password and Set Hostname (see Figure 76).

Figure 76—Configuring the System, part 2

Figure 77—Configuring the System part 3



To set the root password:

1 Select Set Root Password. The Password Configuration dialog pops up.
2 Enter the password in the textbox.
3 Confirm the password in the second textbox.
4 Click Save.

Figure 78—Hostname configuration dialog



To set the hostname:

1 Select Set Hostname.


2 Enter a prefix (See the Attention table below).
3 Enter a start number.
4 Enter how many zeros you want in the zero padding.
5 Enter an optional suffix (See the Attention table below).
6 Click OK.
Note: There are some generous string length constraints on server naming
with regard to the PBS service, but you should consider them when naming
your nodes. Prefixes have a maximum of 32 characters. Suffixes have a
maximum of 100 characters.

Node Hardware Configuration View


You will find three tabs for configuring your servers.
• Configuring Server Properties tab on page 98
• BMC tab on page 99
• Power and Console tab on page 100

Configuring Server Properties tab

There are three fields in the Server properties tab: Hostname, Server model and
Architecture.

Figure 79—Server Properties tab

You can configure Hostname by clicking in the Hostname column.



The Server model column shows the server model information.
The Architecture column shows the hardware architecture of the node.

BMC tab

You can configure several BMC settings in this tab. The BMC tab has its counterpart in
the CLI commands. See the BMC commands for further information.
When you are finished configuring BMC functionality, you can restore the previous
values by clicking Reset, or save the new settings by clicking Save.

Figure 80—BMC tab

About the BMC tab


There are six columns in the BMC tab.
• Hostname - the system name for your node. Hostname is not configurable
from here.
• Username - You may change the User name by clicking on it and entering a
string.
• Password and Confirm Password - You may change the password, then you
must confirm your password in the next column.
• IP address - You may change the IP address.
• Subnet - You may add a subnet by clicking on the subnet column.



Configuring the BMC
1 Click on the line with the host you want to configure.
2 Enter your username.
3 Enter your password.
4 Confirm the password.
5 Enter the IP address.
6 Enter the name of the subnet.
7 Click Save.

Configuring the BMC in the CLI


The following script is an example of how to add and configure a BMC. We will make
the following assumptions about the setup and configuration:
• The node is called n001.
• The user name is root.
• The password is pass_word
• The IP Address is 172.19.0.101
• The power, monitoring and console servers and the subnet will be found
automatically.

# add a BMC using addbmc


pmcli addbmc n001 root pass_word 172.19.0.101
# enable remote control over bmc power
pmcli enablebmcpower n001
#enable the bmc console
pmcli enablebmcconsole n001
# enable monitoring of the bmc
pmcli enablebmcmonitoring n001
# reconfigure all so that changes take effect
pmcli reconfigure

Power and Console tab

You can configure Power and Console settings in this tab.



Figure 81—Power and Console tab

• Hostname is not configurable from here.


• Power management - You may choose Manual or the system name of a
switch.
• Power port - If you have set up power switches you must enter a power port
for the switch.
• Console management - You may select Manual or BMC or an existing switch
in the fourth column.
• Console port - If you have set up console switches you must select a console
switch port.
Click on Reset to restore settings, or click on Save to save the new settings in the
database.



Configuring the Power and the Console in the GUI

1 Click on the line in the table that contains the hostname you want to configure.
2 Click on the Power management server name field and choose a switch name or “manual”.
3 If you have set up power switches, choose the appropriate power port.
4 Click on the Console management server name field and choose a switch name or “manual”
or “BMC”.
5 If you have set up console switches, select a console switch port.
6 Click on Save.

Configuring the Console in the CLI


See “The console interface” on page 312 for information about how to
configure the console using CLI commands.

Configuring the Power in the CLI


See “The power interface” on page 324 for information about how to configure
the power using CLI commands.

Node Network Management View


Set your network connections in this view. There are four tabs.
• Network Interfaces on page 102
• Default Gateway tab on page 105
• NAT Settings tab on page 106
• DNS Settings tab on page 108

Network Interfaces

This is where you configure communications among nodes in the cluster.

The Network Interfaces tab


The first tab is for network interface configuration.



Figure 82—Network Interfaces tab - Configuring Network Interfaces, part 1

Figure 83—Network Interfaces tab - Configuring Network Interfaces, part 2

• Hostname - The first column contains the external host name. You may not
edit this from here.
• IP name - You may edit the IP name by clicking on the field and entering a
new value.
• Device - You may not edit the device name (for example change from eth0
to eth1) from here.



• IP address - You may edit the IP address by clicking on the field and entering
a new value. See the note at the bottom of this section
• Subnet - You may choose the subnet from a list of available subnets. To
change the subnet see “Modifying a subnet” on page 105.
• MAC - Assign the MAC address by clicking on the field and entering a value.
This is a series of six pairs of hexadecimal values separated by colons.
• Mode - The seventh column contains the Mode. The default is Normal. The
other values which can appear here are "Bonding Master", "Bonding Slave"
and "Aliased". You may not edit this from here. However, you may set up
bonding through the CLI and your changes will be reflected here. If you want
to use bonded or aliased interfaces please see the chapter on CLI commands
for more details.
• Dependents - The eighth column contains any dependents. You may not edit
this from here. This will remain blank unless you are using Bonding.
• Static ARP - You can enable Static ARP in the ninth column by clicking on the checkbox.
Platform advises enabling Static ARP, especially with larger clusters.
• Network Bootable - You can enable the Network bootable checkbox to make
the node network bootable.
• MTU - In the eleventh column you can set the MTU. 1500 is the default value.

Static ARP
Enabling Static ARP provides the selected node with a complete mapping from MAC
addresses to IP addresses for all other nodes on the same subnet. Enabling Static ARP
speeds up TCP/IP communication on the affected node's subnet.

Note: If you have the Static ARP feature enabled, special care must be taken when making
changes that cause the MAC address to change for an interface. This usually
applies to changing the network adapter or replacing a complete node. If a MAC
address changes, you need to delete the MAC address for that interface in Platform
Manager and apply those changes before you continue.
Disabling Static ARP removes the mapping from MAC addresses to IP addresses
from the kernel. This forces the affected node to request subnet address mapping
upon the first access to every other node on that particular subnet.

Note: We recommend that you disable Static ARP on the Platform Manager frontend
itself.
You can enable and disable Static ARP on nodes, either through the pmgui, or the
pmcli. Please see enablestaticarp on page 261, disablestaticarp on page 260
and liststaticarpmapping on page 263 for further details.
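As a sketch only (assuming these commands take the node's system name as their argument,
like the other pmcli node commands; check the CLI reference pages cited above for the exact
syntax and options):

# Enable Static ARP on a node (the node name n001 is an example)
pmcli enablestaticarp n001
# List the resulting MAC-to-IP mapping
pmcli liststaticarpmapping n001
# Disable Static ARP again
pmcli disablestaticarp n001
# Apply the changes, as required after configuration edits
pmcli reconfigure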

Enabling Static ARP in the GUI


You can enable Static ARP for the node/interface in the Server Creation Wizard.
You may tick all nodes in your selection by using the “Select All” button in the
lower right hand corner of the in the “Network Interfaces” tab.



Disabling Static ARP in the GUI
Use the “Clear All” button if you wish to disable Static ARP on all nodes in your
selection. After making the desired changes to the settings, press Save in the
lower right hand corner to save your settings into the configuration database.
Remember that the changes do not take effect until you apply them.

Changing IP Addresses
There are two methods of changing the IP addresses.
• Change the IP configuration and reinstall.
• Manually change the IP on the compute nodes, and change the Platform
Manager configuration to match.
To add a new network interface

1 Enter a hostname or select one from the list in the combobox.


2 Enter an interface hostname in the textbox.
3 Enter an IP address.
4 Enter a subnet or select one from the list in the combobox.
5 Enter a device subnet mask.
6 Enter a device name or select one from the combobox.
7 Enter your NIC, or select one from the combobox.
8 Enter a MAC address. This is a series of six pairs of hexadecimal values separated by colons.
9 Click OK.

Modifying a subnet
If you want to modify an existing subnet, start in the Data Center.

1 Right-click on a subnet name in the Data Center Selector.


2 Choose Configure. Existing information for the subnet displays in the Subnet Configuration
view.
3 Change the information as appropriate.
4 Then either click on
• Apply to make the changes to the configuration database (CIM), or

• Reset to revert to the previous values.

Default Gateway tab

You can choose which node will act as a gateway or headnode.



By configuring all the nodes on a private sub net, the Platform Manager frontend can
make all requests from the nodes appear as though they came from one machine.

Figure 84—Configuring the gateway

You may set an individual gateway for this node by clicking on the Gateway IP address
in the active line.
You may define a default gateway by entering the IP address of the node you want
to be the "head node" or "gateway" for the private network into the text box for the
default gateway. This will affect only the selected nodes, or all nodes in the window if none
are selected.

1 Click on Set.
2 Click on Save.

NAT Settings tab

The NAT (Network Address Translation) values are used to configure the iptables that
manage IP packet filters.



Figure 85—Configuring the NAT, part 1

Figure 86—Configuring the NAT, part 2

In the subnet specification, the Network Address and Network Mask specify the
addresses that should be translated. Platform Manager configures the iptables to alter
the source address in packets from the cluster nodes as they exit the cluster (via the
frontend). As a result, the cluster appears to the network as a single computer.

1 Right-click on one or more nodes in the Datacenter view and choose Configure -> Network.
2 Click on the NAT Settings tab to get to the view pictured in Figure 85.
To add a NAT server:

1 Click Add. The Enable NAT on Interface dialog window appears.
2 Choose a node from the dropdown menu.
3 Choose an interface from the dropdown menu.
4 Click Add.

DNS Settings tab

The DNS tab contains information about the Domain Name Services.

Figure 87—Configuring the DNS part 1



Figure 88— Configuring the DNS configuration, part 2

To configure the DNS

1 Right-click on one or more nodes in the Datacenter view and choose Configure -> Network.
2 Click on the DNS Settings tab and right-click on the node you want to configure.
• Click on Configure to get the pop-up Edit DNS Configuration dialog as shown
in Figure 88 .

3 To choose the DNS name server:


• To add a new server, enter the IP address for the DNS name server list in the text
box and click Add, or
• Click on a choice from the list of servers to Edit, Re-order or Remove.

4 Add a server to the DNS search list:


• To add a new server, enter the IP address for the DNS search list in the text box,
then click Add, or

• Click on a server in the list and click on OK.

Note: Remember that the changes will not take effect before you apply the changes. See
“Applying Changes” on page 146.



Provisioning Management View
You will find two tabs in the Provisioning Management View.

Distribution Settings tab

The first tab is the Distribution Settings tab. You can see what the architecture is, as well
as which operating system the node uses and how the software was distributed.

Figure 89—Configuring Distribution Settings, part 1

The first column contains the external host name. You may not edit this from here.
The HW architecture of your node is listed here in the second column.
The three values of the "Installation method" column are "image based", "pre-installed"
and "package based."

Note: "Pre-installed" will appear when a node has been "discovered". You will not be able
to re-install the node because because the repository in Platform Manager might not
have a copy of the software running on discovered nodes. You must instead un-install
software of a discovered node, then install compatible software to be found in the
repository.



Figure 90—Configuring Distribution Settings, part 2

You re-configure the distribution by right-clicking on the selected lines and selecting
"Configure".
You may change the remaining values in the next columns by right-clicking on the
line(s).



Figure 91—Configuring Distribution Settings, part 3

• From this dialog you can change the method of distribution via the radio
buttons.
• You may select an appropriate OS in the "Installation data" combo box. The
choices are an installation template or a TFTP template.
Click on OK or Cancel when you are through.

Note: Remember that the changes will not take effect before you apply the changes. See
“Applying Changes” on page 146.

Software Settings Tab

The second tab is the Software Settings tab. It shows you what is installed on your
node.



Figure 92—Configuring Software settings, part 1

Figure 93—Configuring the software settings, part 2

1 Select appropriate software from the Software products combobox.


• The Hostname column shows the name of the node.
• Each installed feature has a check box to show it is installed.
• You may right-click on the line(s) to install or to uninstall a software feature.

2 After the installation is finished, click on Save.


3 Each time you configure a software product you must save the configuration before moving
on to another.
Note: Remember that the changes will not take effect before you apply the changes. See
“Applying Changes” on page 146.



Node Service Management View
The following services have tabs in the Node Service Management View:
• NTP tab on page 114
• NIS Client tab on page 116
• LDAP Client tab on page 118
• NFS Export tab on page 121
• Scali MPI Connect tab on page 126
• LSF tab on page 127
• PBSPro tab on page 136

NTP tab

This tab is for configuring the NTP server you will use to keep your cluster in synch.

About the NTP tab


There are three columns in the NTP tab.

Figure 94— NTP tab: Right-click menu

• The first column contains the external host name. You may not edit this from
here.
• The second column has a checkbox for enabling the service.
• The third column contains the IP address and the server name of the NTP
server.

Adding an NTP service


The fields in Enable service and the NTP server list are configurable.



Figure 95—Edit NTP Configuration dialog



Figure 96—NTP Server IP dialog

1 Click on the Enable Service checkbox to activate the service.


2 Right-click on the entry. See Figure 94.
3 Click on Configure to open the Edit NTP Configuration dialog. See Figure 95.
4 Click Add to add another NTP server to the list. The NTP Server IP dialog opens. See
Figure 96.

5 Enter the IP address or the host name of the NTP server.


6 Click OK in the NTP Server IP dialog.
7 Click OK in the Edit NTP Configuration dialog.
8 Click Save.

NIS Client tab

Maintain a directory of text-based tables in your network using the NIS Client tab.

About the NIS Client tab


The first column contains the external host name. You may not edit this from here.
You may activate the Enable Service Checkbox by clicking on it, or by right-clicking
on a selection of multiple nodes. See Figure 97 .
Right-clicking on the line opens the Edit NIS Configuration pop-up dialog and affects
the selected, enabled nodes. See Figure 98 .
Selecting Configure in the pop-up menu opens the edit NIS Configuration dialog.
See Figure 99 .



Figure 97—NIS Client tab: Right-click menu

Figure 98—Edit NIS Configuration dialog



Figure 99—NIS Server IP Pop-up

Configuring NIS Client Service


To configure the NIS:
• Select the NIS domain in the NIS Domain combobox.

• Click on one of the radio buttons to choose Broadcast or Unicast.

• Add or select an IP address to Edit.


• Either enter a new IP address or change the existing IP address.
• Click on OK

• Click on Save.

LDAP Client tab

The LDAP Client tab provides a means to query and modify directories of text-based
information tables in your cluster using the Lightweight Directory Access Protocol (LDAP).

About the LDAP tab


The LDAP Client tab has five columns in a table.



Figure 100—LDAP Client tab

• Hostname is not editable from here.


• Enable Service by placing a check in the checkbox.
• Base DN - list of domain components
• Enable TLS by placing a check in the checkbox.
• LDAP server list is a comma-separated list of IP addresses or system names.
There are two buttons at the bottom of the view:
• Reset cancels the changes in favor of the original settings.
• Save will save the changes to the database. Remember that you must apply
changes for the changes in configuration to take effect.

Configuring a LDAP Client


You may configure one or more nodes at the same time for using Lightweight
Directory Access Protocol.



Figure 101—LDAP Client tab: Right-click menu: Configure

Figure 102—Edit LDAP Client Configuration dialog



Figure 103—LDAP Server IP dialog

To configure one or more nodes:

1 Select the nodes you want to configure.


2 Right-click on the selection. See Figure 101.
3 Click on Configure. The Edit LDAP Client Configuration dialog opens. See Figure 102.
4 Specify Base DN for selected nodes.
5 Enter a list of LDAP servers. See Figure 103 .
6 Click on OK to save the changes to the database and exit the dialog.

LDAP in the CLI


Please see “addldapclientservice” on page 296 for a reference to using the CLI
to set up a client service for Lightweight Directory Access Protocol.
# pmcli addldapclientservice <systemnames> <ldapbasedn> <ldapservers>
[enabletls=False]
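As a sketch of a concrete invocation, with placeholder values (the node name n01, the base DN
dc=example,dc=com and the two LDAP server names are illustrative, not values from your
cluster; the comma-separated server list follows the format described for the LDAP Client tab
above):
# pmcli addldapclientservice n01 dc=example,dc=com ldap1.example.com,ldap2.example.com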

NFS Export tab

The Network File System tab helps you configure your system so that you may share
resources among your nodes.

About the NFS Export tab


There are three columns as you can see in Figure 104 .



Figure 104—The NFS Export tab

• Hostname - This is the name of the host in the entry. You cannot edit this field
from here.
• Enable Service - Check this box to enable the service on the node.
• Exported Directory List - this column shows the exported directories on a
particular host.

Adding an NFS service


You can add one NFS service node by node or several nodes at a time.

Figure 105—NFS Export tab: Right-click menu : Add NFS Export



Figure 106—NFS Clients Export configuration Pop-up

1 Select the host entry. (Figure 104)


2 Enable the service by clicking on the check box for that entry.
3 Right click on the entry.
4 Select Add NFS Export from the right-click menu. You can export one directory for however
many clients you highlighted. (Figure 105) The NFS Clients Export Configuration
dialog opens.
5 In the NFS Clients Export Configuration dialog, (Figure 106) enter the directory for
exporting if it is not already in the text box at the top of the dialog.
6 Select the client from the combo box.
7 Click Add.
8 Click OK to save the changes to the database. Remember you must apply
changes for the changes to take effect. See "Applying Changes" on
page 146.

Configuring an NFS Service


You can configure a NFS service for a single node, or several nodes at a time.



Figure 107—NFS Export tab: Right-click menu : Configure



Figure 108—Edit NFS Export Configuration Pop-up

1 Select a node in the NFS Export tab.


2 Right click and select Configure. The Edit NFS Export Configuration dialog opens.
(Figure 108)

3 Click on Edit. The Edit Remote NFS dialog opens. (Figure 124)
4 Make your changes in the fields.
5 Click on OK. Remember that you must apply changes for them to take effect.

Examples of NFS in the CLI

Example: Adding NFS to a node in the CLI


You want to add an NFS export on a node named sc1435-3. Use addnfsexport
<systemname> <directoryname> [clientoptions]:
pmcli addnfsexport sc1435-3 /export/test1



Example: Removing NFS from a node in the CLI
You want to remove the NFS export on a node named sc1435-3.
• Get the service id for exported nfs on sc1435-3
• Use the service id in removeservice <systemname> <serviceid>
pmcli listhostedservices sc1435-3
platform:749e5a08-3e02-cb2d-2686-3bd617478328 Scali_NFSExportService
(eth2,eth3) Directory:'/export/test1'
pmcli removeservice sc1435-3
scali:749e5a08-3e02-cb2d-2686-3bd617478328

Scali MPI Connect tab

The first column contains the external host name. You may not edit this from here.

Figure 109—Configuring the Scali MPI Connect, part 1

To Enable and Configure Scali MPI Connect:



Figure 110—Configuring the Scali MPI Connect, part 2

1 You may activate the Enable SMC Checkbox by clicking on it.


2 Right-click on the line(s) to affect enabled services.
3 Click on Configure.

Figure 111—Configuring the Scali MPI Connect, part 3

4 Select the available version of Scali MPI Connect from the combobox.
5 Enable the Infiniband checkbox if your server supports it; this lets Scali MPI Connect use the
existing software stack on the node(s).
6 Enable the Myrinet checkbox if your server supports it.
7 Click on OK.
8 Click on Save. (Or Reset to remove changes)

The LSF tab



The fields under the tab marked "LSF" in the "Node Service Management View"
provide the details of the LSF configuration on the selected LSF nodes.
To ensure a good LSF set up, adhere to the following two prerequisites:
• Define lsfadmin as a user on a NIS server.
• Share NFS and LSF log files (/opt/lsf/work/) among the master candidates to
ensure proper failover.

Note: Be sure to read the LSF documentation for details on a proper set up.

About the LSF tab


If no LSF service is present on the selected node then the tab will show empty
columns for "LSF Host Type", "LSF Cluster Name", "LSF Version" and will show text
"0 more lines" in the column field "License Server/Port" for the host name in the
table row.

Figure 112—LSF Configuration tab

The LSF tab is similar to "PBS Pro" tab. It has the following columns:
• Hostname - The hostname of the node.
• LSF Host Type - This field shows the type of LSF ("Master Candidate",
"Dynamic Host" OR "Static Host"). If this is empty then LSF is not configured
for the node.
• LSF Cluster Name - This field shows the name of the LSF cluster of which the
node is a member. If this is empty then LSF is not configured for the node.
• LSF Version - This field shows the LSF software version configured on the
node. If this is empty then LSF is not configured for the node.
• License Server/Port - This field shows the hostname(or IP Address) of the
license server and port number on which the licensing service is accessible
on the license server. If you do not provide the port number, then the port
number defaults to 1700. For example: 1700@PMServer or
1700@172.19.50.



• LSF License - This field shows a brief preview of the LSF license. If there are
up to three features in the license then it will show up to three comma-separated
feature names. If there are more than three then you will see "..." after the
third feature. For example, for four or more features you will see
"Feature1,Feature2,Feature3,..." If this is empty then LSF is not configured
for the node.

Note: If the License server/port column field is empty, then the service will check out
license features from a local file.
You can select multiple nodes in the table viewer and right click to show a menu of
two options: "Configure" and "Remove".

Figure 113—LSF Configuration tab- Right- click menu

• Remove removes any previously configured LSF on selected nodes.

• Configure opens the Edit LSF configuration dialog.



Configuring a Master Candidate with the GUI
You can configure a master candidate by following these steps, starting in the LSF
tab of the node services view:

1 Right-click on the table row(s) of the selected node(s). See Figure 113.
2 Click on "Configure". This will open the "Edit LSF Configuration" dialog. See
Figure 114.

3 Select "Master Candidate" for the field labeled "LSF Host Type" in the "Edit LSF
Configuration" dialog.
4 Enter a new LSF cluster name, or choose a cluster name from the list of LSF clusters
already created, in the field labeled "LSF Cluster".
5 Select the version of LSF from the field labeled "Software Version".
6 Select the license file for the LSF version using the "Browse" button.
7 Click "OK".
8 To save the configuration click on the "Save" button in the LSF tab.

Figure 114— Edit LSF Configuration dialog - Master Candidate



Adding the LSF master candidate with the CLI
See “addlsfmastercandidate” on page 242 for information about LSF master
candidates configuration using the Platform Manager CLI.

About Hosted Services


Topics included in this section are:
• “About Scali_DHCPServerService”
• “About Scali_ManagementEngineService”
• “About Scali_RepositoryChannelService”

About Scali_DHCPServerService
Scali_DHCPServerService determines which interfaces answer DHCP
broadcast requests. Only interfaces that are bound to this service will
answer DHCP broadcast requests.
DHCPServerService should normally be hosted by Platform Manager Servers
and Platform Manager Gateways. It assigns IP addresses to hosts that have
DHCPClientService and support network OS installation via PXE/EFI/Etherboot.
See the pmcli commands enablenetworkboot and disablenetworkboot.



Table 7—Example: bindservicetointerface
To bind Scali_DHCPServerService to an interface, enter:
# pmcli bindservicetointerface <systemnames> \
<Scali_DHCPServerService_serviceid> <interface>
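For instance, to bind the service to the private interface of a head node, reusing the
hypothetical node name and service ID from the unbind example in Table 9 below (your own
node name, service ID and interface will differ):
# pmcli bindservicetointerface node7 \
scali:531b5bf2-d1e1-75da-f80a-1b3b6aa8552b eth1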

Table 8—Example: unbindservicefrominterface


To unbind Scali_DHCPServerService from an interface, enter:
# pmcli unbindservicefrominterface <systemnames> \
<Scali_DHCPServerService_serviceid> <interface>

Table 9—Example: unbindservicefrominterface


In this example the head node has two interfaces eth0 (Public
Interface) & eth1 (Private Interface).
If the situation demands that you should not use a public interface (eth0) in assigning IP
addresses, first unbind Scali_DHCPServerService from the public interface as shown
below:
# pmcli unbindservicefrominterface node7 \
scali:531b5bf2-d1e1-75da-f80a-1b3b6aa8552b eth0
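If you do not know the service ID beforehand, you can list the services hosted on the node and
pick out the DHCP server service (node7 is the same hypothetical host name used above):
# pmcli listhostedservices node7 | grep DHCP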

About Scali_ManagementEngineService
Scali_ManagementEngineService manages the installation and configuration of
compute nodes.
Platform Manager assigns the network interface for installation by looking at all
available network interfaces that have PXE enabled (see the pmcli commands
enablenetworkboot and disablenetworkboot) on the same network as a
Scali_ManagementEngineService. If multiple possible interfaces are found
Platform Manager will use an interface from this selection.
Scali_ManagementEngineService is not a daemon that runs all the time but a
service that is started on demand. Platform Manager requires a
DHCPServerService to be hosted by the same server for OS installations.
DHCPServerServices should normally be hosted on Platform Manager Servers
and Platform Manager Gateways.
By default a PM ethernet interface is set automatically for node installation.
Many times, a public or corporate network is not required for provisioning. To
prevent an ethernet interface from being used in node installation, unbind the
Scali_ManagementEngineService service from the interfaces which aren't
required for node installation. This makes sure that only the remaining
interface(s) are used for node installation.



Table 10—Example: unbindservicefrominterface
In this example the head node has two interfaces eth0 (Public
Interface) & eth1 (Private Interface). If the situation demands that the public interface (eth0)
should not participate in the provisioning (node installation), first unbind
Scali_ManagementEngineService from the public interface as shown below:
# pmcli unbindservicefrominterface <systemnames> \
<Scali_ManagementEngineService_serviceid> <interface>
Specifically:
# pmcli unbindservicefrominterface node7 \
scali:fd48adbb-ba4d-dfae-dd20-56213f0677bb eth0

About Scali_RepositoryChannelService
RepositoryChannelService manages the repository of software available for
installation in the cluster. The RepositoryChannelService uses the apache web
server to make the packages and images available for download for the compute
nodes.
The Yum utility may use any one of the interfaces/networks for installing rpms
on compute nodes on which this service is bound. Which Interface/Network yum
is using can be found by entering:
# cat /opt/scali/etc/yum.conf
Do not use interfaces or networks that are unreachable. If one is unreachable
the node may not install completely. In this case there will be an error message
while restarting the "scance" service. Platform advises you to unbind
"Scali_RepositoryChannelService" from unused interfaces.

Table 11—Example: unbindservicefrominterface


In this example the head node has three interfaces eth0, eth1 & eth2. If you do not want
yum to use the "eth2" network for installing rpms you should unbind
"Scali_RepositoryChannelService" from eth2.
# pmcli unbindservicefrominterface <systemnames> \
<Scali_RepositoryChannelService_serviceid> <interface>
Specifically:
# pmcli unbindservicefrominterface node7 \
scali:1e025ee1-4e7c-d12e-c2be-bc0b21cfa063 eth2

Configuring a Dynamic Host Service or a Static Host Service


You can also configure a Dynamic or Static Host. The only difference between the
two is the mode in which the host is running. The procedure below will configure a
dynamic host.



Figure 115— Edit LSF Configuration dialog - Dynamic Host

Note: Figure 115 shows that the "License Text" field is disabled for host types other
than "Master Candidate".
You can configure a Dynamic Host Service by following these steps, starting in the
LSF tab of the node services view:

1 Right click on the table row of the selected node


2 Click on "Configure". This will open the "Edit LSF Configuration dialog".
3 Select "Dynamic Host" for the field labeled "LSF Host Type" in the "Edit LSF
Configuration" dialog.
4 Select one LSF Cluster name from the drop down or you can choose your own cluster name
for the field labeled "LSF Cluster". The drop down for LSF Cluster names will only appear
if the LSF cluster name is already present in the database.
5 Select the version of the LSF from the field labeled "Software Version".
6 Click OK.
7 To save the configuration click on the "Save" button in the LSF Tab.
See “addlsfdynamichost” on page 242 for information about LSF dynamic host
configuration using the Platform Manager CLI.
See “addlsfstatichost” on page 243 for information about LSF static host
configuration using the Platform Manager CLI.



Queue status
The PM GUI shows the status of queues and jobs. The user may open, close,
activate, and inactivate queues and suspend or resume jobs. There are two views
here: one for LSF and one for PBS.
The LSF agent provides status for running jobs only, not pending jobs.

Figure 116—LSF Queue Status



Figure 117—PBS queue status view

PBSPro tab

PBSPro runs batch system accounting services.

About the PBSPro tab


This tab is for PBSPro configuration.

Figure 118—Configuring PBS Pro, part 1



The first column contains the external host name. You may not edit this from here.
To Enable and Configure PBS Pro Service

Figure 119—Configuring PBS Pro, part 2

Figure 120—Configuring PBS Pro, part 3

• You may choose between client and server by clicking on the column entry.

• Click on Configure.



Configuring the PBSPro server
To configure the PBSPro server:
• Enable Make PBS Server checkbox.
• Fill in the PBS Pro license textbox.
• Click on OK.

• Click on Save (Or Reset to remove changes).

PBSPro Clients
Topics included in this section are:
• Setting up PBS Pro clients using an unmanaged PBS Pro server on
page 138

• PBS accounting files on page 140

Configuring a PBSPro Client


To configure a PBSPro client:

1 Enable Make PBS Server checkbox


2 Select the PBS PRO Server from the "Client managed by:" combobox.
3 Select the PBS Pro version from the combobox.
4 Click on OK.
5 Click on Save (Or Reset to remove changes).

Setting up PBS Pro clients using an unmanaged PBS Pro server


With Platform Manager 5.6 onward, you can have an existing PBS Server set up
outside the environment managed by Platform Manager. The compute nodes
managed by Platform Manager may be configured as PBS clients (MOMs) providing
computing resources to the unmanaged PBS server. Using either the GUI or the CLI
you can
• Discover the PBS server as an unmanaged system.
• Define the system as a regular PBS server configuration.
• Set the compute nodes as PBS clients (MOMs) affected by the unmanaged
PBS server.
• Apply changes to provision and configure the compute nodes as PBS clients.
A utility script using createpbsnodefile in the CLI creates a file listing all PBS clients
in a cluster. This file can be used to add the managed PBS clients to an unmanaged PBS server:



Syntax       pmcli createpbsnodefile <clustername> [setfree=False]
Arguments    clustername - name of the cluster
Options      setfree - set nodes up and available for the PBS batch system

This script creates a Qmgr file that defines all nodes in the cluster. This should only be
necessary for unmanaged PBS servers. This will only list compute nodes that are
PBS clients (MOMs). You can use this file to add nodes to the PBS server with the
command
'qmgr -c < nodefile.qmgr'
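As a sketch, with a hypothetical cluster named cluster1 and using the node file name from the
qmgr command above, the sequence on the Platform Manager frontend and then on the
unmanaged PBS server would look like this:
pmcli createpbsnodefile cluster1
qmgr -c < nodefile.qmgr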
In the example below we will be setting up ScaAccounting (Batch system
accounting) on an unmanaged PBSPro server. We will assume the following:
• “dl360g3-4” is the name of the Platform Manager frontend.
• “vega” is the name of the unmanaged PBS server



# Upload the PBSPro, define the server in the configuration database.
[root@dl360g3-4 ~]# scarepository.py --addproduct pbs_8.0.0.63106-0_i386 \
Filebrowser /home/software/3rdparty/PBSPro/pbs-8.0.0.63106-0.i386.rpm
[root@dl360g3-4 ~]# pmcli addpbsproserver vega
"L-00010-07312-0214-jko66V6l2o-rtT-agt-Platform"
[root@dl360g3-4 ~]# pmcli listhostedservices vega | grep PBS
scali:8e69dfd0-d872-3c03-d50a-f42d1f627239 Scali_PBSProServerService (eth0,eth1)
# Install the batch system accounting daemon
[root@vega ~]# rpm -Uvh pmlsb-2.1.23-1.rhel4.i386.rpm
[root@vega ~]# rpm -Uvh ZSI-2.0-0.0.5_1.i386.rpm
[root@vega ~]# rpm -Uvh scaaccounting-1.2.73-2.rhel4.i386.rpm
# Setup the configuration file for the batch system daemon.
# Use 172.19.5.23 for communications with the frontend
IP='172.19.5.23'
# Enter PBS server service ID on vega
ID='pm:8e69dfd0-d872-3c03-d50a-f42d1f627239'
# Enter path to PBS accounting files:
LOG='/var/spool/PBS/server_priv/accounting/'
# Write configuration file
echo "[PBSPro]
wsdl = http://"$IP":8080/AccountingCollectorBeanService/AccountingCollectorBean?wsdl
servicename = "$ID"
accountingfiles = "$LOG \
> /opt/scali/etc/scaaccounting.conf
# Start accounting service
[root@vega ~]# /etc/init.d/scaaccounting start
Check that everything is running smoothly:
[root@vega ~]# tail -f /var/log/scaaccountd.log

If you have problems with this section, please contact professional services.

PBS accounting files


Normally you will find the PBS accounting files in
/var/spool/PBS/server_priv/accounting.



You may put historical accounting files into this folder before enabling
ScaAccounting to parse data gathered prior to installing Platform Manager.
ScaNCE sets up the ScaAccounting config file /opt/scali/etc/scaaccounting.conf.
The file contains information about which directory ScaAccounting (see
“ScaAccounting” on page 177) should monitor, and where to write the report.
The daemon will not start if this file does not exist.
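If ScaAccounting refuses to start, a quick check is to confirm that this file exists and to inspect
its contents (the path is the one given above):
cat /opt/scali/etc/scaaccounting.conf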

Remote NFS
Configure your node(s) to access file systems on other servers as if the systems were on
the node.

About the Remote NFS tab

As you can see in Figure 121 there are three columns in the Remote NFS tab.

Figure 121—Remote NFS tab

• Hostname - a field for the systemname of your node. This is not editable from
here.
• Enable NFS - a checkbox for enabling or disabling the service
• Mount Point - a field for comma separated lists of mount points

Adding Remote NFS Management

About the Remote NFS Management tab



Figure 122—Remote NFS: Right-click menu : Add



Figure 123—New Remote NFS Configuration Pop-up

1 Select one or more nodes in the table, as in Figure 122.


2 Right-click and select Add. This opens the New Remote NFS Configuration dialog.
(Figure 123)

3 In the New Remote NFS Configuration dialog, select the remote system from the combobox.
4 Enter the remote directory.
5 Enter the mount point(s). If you have multiple points, separate the list with commas.
6 Enter the mountpoint options. If you have multiple options, separate the list with commas.
7 Click on OK to make changes to the database and exit the dialog.

Configuring a Remote NFS Service

Figure 124 shows the right-click menu.



Figure 124—Remote NFS tab: Right-click menu

Figure 125—Remote NFS Configuration Pop-up dialog



Figure 126—Edit Remote NFS Configuration Pop-up

1 Select the node(s) you want to configure in the Remote NFS Management tab, as in
Figure 124.

2 Right-click and select Configure. The Remote NFS Configuration dialog opens, as in


Figure 125.

3 Select a node entry in the table.


4 Click on Edit to make changes in the configuration. The Edit Remote NFS Configuration
dialog opens as in Figure 126.
5 Make your changes, remembering to separate items in a list with commas.
6 Click on OK to save the changes to the database and exit.

Examples of Remote NFS in the CLI

For a list of File System commands please see “Filesystem Commands” on


page 212.

Example: addremotefs
Use “addremotefs” to mount a remote directory.
pmcli addremotefs sc1435-3 nfs sc1435-6:/export/test1 /mnt/test1

Example: listremotefs
Use “listremotefs” to list remote directories in the system.
# pmcli listremotefs sc1435-3
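Once the configuration has been applied, you can verify the mount on the node itself with
standard Linux tools; this check assumes the /mnt/test1 mount point from the addremotefs
example above:
# mount | grep /mnt/test1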



Applying Changes
The Pending Changes Icon will indicate whether the changes you have made have been
applied or only saved to the CIM database. The icon is located under the Data Center
Selector Tab. See “Node Service Management View” on page 114

Figure 127—Pending Changes Icon

Your changes take effect only after the configuration has been applied to the nodes. There
are two ways to apply changes in configuration.
• You can right-click on the selected node(s) in the Data Center view, then click
on Apply Configuration.

• You can click Apply Configuration in the "Provisioning" menu at the top
of the screen.

Figure 128—Confirm applying configuration changes Pop-up

Click Yes to apply configuration changes.



Chapter 5 - High Availability
The High Availability feature allows you to configure failover protection for a Platform
Manager Server and a gateway node, where additional services can be protected.
Introduction to High Availability on page 162
Installing High Availability on a Gateway on page 145
Installing and Configuring High Availability on the PM Server on page 170

Introduction to High Availability


HA cannot, by nature, be an “out-of-the-box” feature. You and your vendors will need to
design the topology of back-up servers that you want to use and then configure the High
Availability feature accordingly. It will take on average 2-3 days per installation.
The High Availability (HA) option provides fault tolerance for:
• The Platform Manager Server
• The Cluster Gateways set up by Platform Manager
• Other services which are compliant with standards specifications such as HTTP,
NFS, Samba, etc.

Note: Workload Management systems (such as PBS Pro) often have their own
HA options. You need to buy your vendors’ own HA option for their product
line. Please contact them for their solutions.
Platform Manager employs the Active-Passive model of HA. This means that there is a
secondary node with a fully redundant instance of the Platform Manager Server and/or one
secondary node for each protected gateway, nodes which lie passively offline until their
associated primary nodes fail. This configuration requires the most “extra” hardware of all
the topologies, but it also assures the greatest protection against systems failure.
To access services on the cluster during a failure of the cluster host there must be what is
called a “cluster logical host”. This is a network address or a host name which is not tied
solely to any given node, but rather linked to services provided by the cluster. This allows
the database to be restarted on a redundant cluster during failure. That network
address/host name is then temporarily assigned to the redundant node so users may
interact with the database.

WARNING—Use non-volatile shared storage (NAS/SAN) to store as much of the


application state as possible so that the application can restart in its last state
before the failure, on another node. The Platform Manager Server itself requires a
hardware-based shared storage solution (such as SAN) to be present before you
can set up the HA option. You and your hardware vendor must establish a shared
storage solution before attempting to deploy the HA feature. It will not be possible
to implement the HA option without meeting this requirement.



Furthermore, applications in a high availability cluster environment must satisfy these
technical requirements:
• Ease of application start, stop and force-stop.
• Ability to monitor the status of the application.
• Support for multiple instances of the application.
• Support for scripting or a CLI.
• Data must not be corrupted if a node crashes or restarts the application from a
saved state.
In addition, licensing compliance must be observed.

Note: The Platform Manager/HA option requires a product key for the HA
feature on each cluster. The copies of the key must be set to allow for
activation on each node instance AND activated. Please obtain licenses from
your Platform sales representative or visit http://www.platform.com.
The High Availability feature would normally be installed for the Platform Manager Server
and/or for gateways in your clusters. Compute nodes do not generally need this
functionality.

Installing High Availability on a Gateway


We will assume the following values for installation of the High Availability feature on your
gateway:
• PRIMARY="gw1"
• SECONDARY="gw2"
• HAGROUPNAME="hagw"
• PRIMARY_EXT_IP="172.19.5.21"
• SECONDARY_EXT_IP="172.19.5.22"
• HAGROUP_EXT_IP="172.19.5.200"
• PRIMARY_INT_IP="10.0.0.21"
• SECONDARY_INT_IP="10.0.0.22"
• HAGROUP_INT_IP="10.0.0.200"
• UDPPORT="694"
• PRIVATENODES="n[01-64]"

WARNING—Do not use the floating IPs for heartbeat channels.


Note: Make sure to follow the additional steps if systems are already installed.



Figure 127—High Availability Topology on a gateway

To install High Availability functionality on a cluster gateway:

1 Do a normal bootstrap of the Platform Manager Server.


• Define primary Platform Manager Server and PRIVATE NODES in the Server
Creation Wizard (private cluster) or pmcli. See “Creating a node with pmcli”
on page 463

2 Define the SECONDARY node using the Server Creation Wizard (extend cluster /
independent server) or the pmcli.
3 OPTIONAL: If some, or all defined systems are already installed, make sure all systems are
installed and working properly. It is also possible to install all systems defined above at this
point before continuing. Make sure to follow the optional steps also if systems are already
installed.
4 Create the HA group and add the floating ethernet interfaces. The IP addresses of these
interfaces will be started on the ACTIVE gateway and moved in case of a failover.
pmcli createhagroup hagw
pmcli addhaethernetinterface hagw nic1 eth0 hagateway.ext 172.19.5.200
pmcli addhaethernetinterface hagw nic2 eth1 hagateway.int 10.0.0.200



The use of ethX in the addhaethernetinterface will determine which of the physical
interfaces on the active gateway the floating IPs will be started on.

5 Add the heartbeat software to the primary gateway (for Platform Manager version 5.7.1):
pmcli addsoftware gw1 pm-5.7.1 Heartbeat

Add the primary HA group member (primary HA gateway)


pmcli addservertohagroup gw1 hagw

6 Add heartbeat channels to the primary HA gateway. It is recommended to use the “unicast”
and “serial” methods and as many different channels as possible for redundancy.
pmcli addheartbeatchannel gw1 unicast eth0 172.19.5.22
pmcli addheartbeatchannel gw1 unicast eth1 10.0.0.22
pmcli addheartbeatchannel gw1 serial /dev/ttyS0

7 Set the udp-port if necessary. Default value is 694. There are two common reasons for
overriding this value:
• There are multiple HA groups using the “broadcast” channel method on the
same subnet, or
• This port is already in use in accordance with some locally-established policy.
pmcli addheartbeatchannel gw1 udpport 694

8 Move the services to the HA group. This controls which services should be run on the gateway
as HA services. There are two ways to do this:
• Move all HA compatible services from the primary gateway to the HA group.
pmcli moveservicestohagroup gw1 hagw

• Or move the services individually:


pmcli moveservicetohagroup gw1 hagw Scali_ManagementEngineService
pmcli moveservicetohagroup gw1 hagw Scali_DHCPServerService
pmcli moveservicetohagroup gw1 hagw Scali_NATService
pmcli moveservicetohagroup gw1 hagw Scali_ScaMonitoringRelayService
pmcli moveservicetohagroup gw1 hagw Scali_ScaMonitoringOutofbandService
pmcli moveservicetohagroup gw1 hagw Scali_ConsoleManagementController
pmcli moveservicetohagroup gw1 hagw Scali_PowerManagementController

9 Start up HA on the primary gateway. This will enable a HA group running Heartbeat with
one member system.
pmcli reconfigure gw1

10 Set the gateway of the private nodes to be the internal floating HA group IP address
pmcli removeroute "n[01-64]" 0.0.0.0



pmcli addroute "n[01-64]" 0.0.0.0 0.0.0.0 10.0.0.200

11 Configure the nodes on the private net to use the new gateway IP. Everything should now
work normally as if HA was never there.
pmcli reconfigure "n[01-64]"

12 Add the heartbeat software to the secondary gateway:


pmcli addsoftware gw2 pm-5.7.1 Heartbeat

13 Add the secondary HA group member (secondary HA gateway)


pmcli addservertohagroup gw2 hagw

14 Add heartbeat channels to the secondary HA gateway. See note on udp-port above.
pmcli addheartbeatchannel gw2 unicast eth0 172.19.5.21
pmcli addheartbeatchannel gw2 unicast eth1 10.0.0.21

15 Install the software on the secondary node for the HA services selected. This is a crucial step
to make HA work. If a failover occurs and the software is not installed on the secondary
gateway, the failover procedure will fail and the gateways will both go down. What software
should be installed is related to the use of “moveservices” command(s) above.
pmcli addsoftware gw2 pm-5.7.1 'MonitoringRelay' 'Install server' \
'Console Server' 'Power Server' 'MonitoringOutofband'
pmcli addheartbeatchannel gw2 serial /dev/ttyS0

16 Reconfigure the primary gateway and install the secondary gateway if not installed earlier.
pmcli reconfigure gw1
pmcli install gw2
pmcli reconfigure all

17 If the secondary gateway was already installed, reconfigure the gateway systems
pmcli reconfigure gw1

pmcli reconfigure gw2



Installing and Configuring High Availability on the PM Server
In this chapter’s examples we will assume the following values for installation of the High
Availability feature on your Platform Manager Server:
• PRIMARY="PMServer-1"
• PRIMARY_EXT_IP="172.19.5.20"
• PRIMARY_INT_IP="10.0.0.20"
• PRIMARY_BMC_IP="172.20.5.20"
• SECONDARY="PMServer-2"
• SECONDARY_EXT_IP="172.19.5.21"
• SECONDARY_INT_IP="10.0.0.21"
• SECONDARY_BMC_IP="172.20.5.21"
• HAGROUPNAME="HAPMServer"
• HAGROUP_EXT_IP="172.19.5.200"
• HAGROUP_INT_IP="10.0.0.200"
• UDPPORT="11694"
• NODES="dl360g3-6"

CAUTION—Read Installing Platform Manager High Availability on a Gateway and understand the
procedure before reading this!



Figure 128—High Availability Topology on a Server



Note: Make sure to follow the additional steps if systems are already installed.
• Do a normal bootstrap of the primary Platform Manager Server.
• OPTIONAL: Set up NAT on the primary Platform Manager Server. See “Create
Server Wizard: Configuring a network gateway” on page 44, or See “NAT
Settings tab” on page 111, or See “addnatservice” on page 370.
• Enable power and console control on the primary Platform Manager Server.
• Define the SECONDARY node with the Server Creation Wizard (independent
servers) or pmcli.

WARNING—Do not run the GUI during the HA Platform Manager Server setup!
• OPTIONAL: If some or all defined systems (in addition to primary Platform
Manager Server) are already installed, make sure all systems are installed and
working properly.

Note: It is also possible to install all systems defined above at this point
before continuing.
• Configure the HA group
• Create the HA group
• Add the floating ethernet interfaces
• Add the primary Platform Manager Server with heartbeat channels.
• Reconfigure the primary Platform Manager Server to enable the new floating
IP(s).
• pmcli reconfigure PMServer-1
• Move the services to the Platform Manager Server HA group. The first example
script will show you how to move them all at once. The second, a partial script,
will show you how to move the services individually.
• Move the Platform Manager Configuration Database to shared storage. Mount the
shared storage manually and start the DB again.
• OPTIONAL: Move the monitoring history database.
• Add the shared disk mount for the Platform Manager Configuration Database to
the HA group.
• OPTIONAL: Add the shared disk mount for the Monitoring History database
to the HA group.
• Move the repository to shared storage.
• Mount the repository in the shared storage.
• Move and mount tftpboot
• Move and mount images
• OPTIONAL: Move and mount the Monitoring history database
• Add the shared disk mount to the HA group:



• Start up HA on the primary Platform Manager Server.
• Add the secondary Platform Manager Server to the HA group and heartbeat
channels.
• Install the Platform Manager Server software: add Gateway software to the secondary
server.
• Add Platform Manager Server software to secondary Platform Manager Server.
• Reconfigure the primary Platform Manager Server.
• Make the cluster wait about 60 seconds to let heartbeat on the primary initialize.
• Do one of the following:
• If the secondary Platform Manager Server WAS NOT INSTALLED EARLIER use
“reconfigure all”.
• If the secondary Platform Manager Server WAS INSTALLED EARLIER, reconfigure
the Platform Manager Server system by using “reconfigure PMServer-1”. The
example script will use this choice.
• Make the cluster wait about 60 sec to let heartbeat on the primary initialize
again:
• Copy the product keys from the primary Platform Manager Server to the
secondary.

CAUTION—This last command is subject to change. Check the current release notes or contact
Platform support for current commands.
This script assumes you have done the first two steps in the process above:

#network, bmc, console and power for the primary PMServer


pmcli addsubnet 172.20.0.0/16 172.20.0.0 255.255.0.0
pmcli addroutablesubnet 'Default routable group' 172.20.0.0/16
pmcli addbmc PMServer-1 root rivendel 172.20.5.20
pmcli enablebmcpower PMServer-1 PMServer-1
pmcli enablebmcconsole PMServer-1 PMServer-1
#Define the SECONDARY node
#Configure the HA group
# Create the HA group
pmcli createhagroup HAPMServer
#Add the floating ethernet interfaces
pmcli addhaethernetinterface HAPMServer nic1 eth0 HAPMServer.ext 172.19.5.200



pmcli addhaethernetinterface HAPMServer nic2 eth1 HAPMServer.int 10.0.0.200
#Add the primary Platform Manager Server with heartbeat channels.
pmcli addservertohagroup PMServer-1 HAPMServer
pmcli addheartbeatchannel PMServer-1 unicast eth0 172.19.5.21
pmcli addheartbeatchannel PMServer-1 udpport 11694
#Enable the new floating IP(s).
pmcli reconfigure PMServer-1
#Move the services to the Platform Manager Server HA group all at once
pmcli moveservicestohagroup PMServer-1 HAPMServer
#Move, mount and restart the Platform Manager Configuration Database.
mkdir /mnt/tmpmount1
/etc/init.d/scacim-pgsql stop
mount -t ext3 /dev/sda1 /mnt/tmpmount1/
rm -rf /mnt/tmpmount1/*
cp -pr /opt/scali/var/scacim-db/* /mnt/tmpmount1/
umount /mnt/tmpmount1/
rmdir /mnt/tmpmount1/
rm -rf /opt/scali/var/scacim-db/*
mount -t ext3 /dev/sda1 /opt/scali/var/scacim-db/
/etc/init.d/scacim-pgsql start
#Move the monitoring history database.
mkdir /mnt/tmpmount7
/etc/init.d/scasmo-pgsql stop
mount -t ext3 /dev/sda7 /mnt/tmpmount7/
rm -rf /mnt/tmpmount7/*
cp -pr /opt/scali/var/scasmo-db/* /mnt/tmpmount7/



umount /mnt/tmpmount7/
rmdir /mnt/tmpmount7/
rm -rf /opt/scali/var/scasmo-db/*
mount -t ext3 /dev/sda7 /opt/scali/var/scasmo-db/
#Add shared disk mount for the Configuration Database to the HA group.
pmcli addhasharedfs HAPMServer /dev/sda1 /opt/scali/var/scacim-db/ ext3
#Add shared disk mount for the Monitoring History database to the HA group.
pmcli addhasharedfs HAPMServer /dev/sda7 /opt/scali/var/scasmo-db/ ext3
#Move the repository to shared storage.
mkdir /mnt/tmpmount2
mount -t ext3 /dev/sda2 /mnt/tmpmount2/
rm -rf /mnt/tmpmount2/*
cp -pr /opt/scali/repository/* /mnt/tmpmount2/
umount /mnt/tmpmount2
rmdir /mnt/tmpmount2
rm -rf /opt/scali/repository/*
#Mount the repository in the shared storage.
mount -t ext3 /dev/sda2 /opt/scali/repository/
#Move and mount tftpboot
mkdir /mnt/tmpmount5
mount -t ext3 /dev/sda5 /mnt/tmpmount5/
rm -rf /mnt/tmpmount5/*
cp -pr /tftpboot/* /mnt/tmpmount5/
umount /mnt/tmpmount5
rmdir /mnt/tmpmount5
rm -rf /tftpboot/*



mount -t ext3 /dev/sda5 /tftpboot/
#Move and mount images
mkdir /mnt/tmpmount6
mount -t ext3 /dev/sda6 /mnt/tmpmount6/
rm -rf /mnt/tmpmount6/*
cp -pr /opt/scali/images/* /mnt/tmpmount6/
umount /mnt/tmpmount6
rmdir /mnt/tmpmount6
rm -rf /opt/scali/images/*
mount -t ext3 /dev/sda6 /opt/scali/images/
# Move and mount the Monitoring history database
mkdir /mnt/tmpmount7
/etc/init.d/scasmo-pgsql stop
mount -t ext3 /dev/sda7 /mnt/tmpmount7/
rm -rf /mnt/tmpmount7/*
cp -pr /opt/scali/var/scasmo-db/* /mnt/tmpmount7/
umount /mnt/tmpmount7/
rmdir /mnt/tmpmount7/
rm -rf /opt/scali/var/scasmo-db/*
mount -t ext3 /dev/sda7 /opt/scali/var/scasmo-db/
#Add the shared disk mount to HA group:
pmcli addhasharedfs HAPMServer /dev/sda2 /opt/scali/repository/ ext3
pmcli addhasharedfs HAPMServer /dev/sda5 /tftpboot/ ext3
pmcli addhasharedfs HAPMServer /dev/sda6 /opt/scali/images/ ext3
# Start HA on the primary Platform Manager Server
pmcli reconfigure PMServer-1
# Add secondary Platform Manager Server to HA group and heartbeat channels:
pmcli addservertohagroup PMServer-2 HAPMServer



pmcli addheartbeatchannel PMServer-2 unicast eth0 172.19.5.20
pmcli addheartbeatchannel PMServer-2 udpport 11694
# Install PMServer software: add Gateway software to the secondary server
pmcli addsoftware PMServer-2 pm-5.7.1 \
'MonitoringRelay' \
'Install server' \
'Console Server' \
'Power Server' \
'MonitoringOutofband'
#Add Platform Manager Server software to PMServer-2
pmcli addsoftware PMServer-2 pm-5.7.1 \
'Platform Manager CLI' \
'Platform Manager GUI' \
'Repository server' \
'MonitoringControl' \
'MonitoringHistory' \
'Configuration server'
#Reconfigure primary Platform Manager Server.
pmcli reconfigure PMServer-1
#Pause about 60 seconds to let heartbeat on the primary initialize
sleep 60
pmcli install PMServer-2
#Note: Platform Manager Server WAS INSTALLED EARLIER
pmcli reconfigure PMServer-1
# Copy product keys from primary PMServer-1 to PMServer-2
scp /opt/scali/etc/productkeys PMServer-2:/opt/scali/etc/
As an alternative to moving all services to the HA group at once with moveservicestohagroup,
you can move them one at a time:



#Move the services to the Platform Manager Server HA group individually
pmcli moveservicetohagroup PMServer-1 HAPMServer Scali_ManagementEngineService
pmcli moveservicetohagroup PMServer-1 HAPMServer Scali_DHCPServerService
pmcli moveservicetohagroup PMServer-1 HAPMServer Scali_NATService
pmcli moveservicetohagroup PMServer-1 HAPMServer Scali_ScaMonitoringRelayService
pmcli moveservicetohagroup PMServer-1 HAPMServer Scali_ScaMonitoringOutofbandService
pmcli moveservicetohagroup PMServer-1 HAPMServer Scali_ConsoleManagementController
pmcli moveservicetohagroup PMServer-1 HAPMServer Scali_PowerManagementController
pmcli moveservicetohagroup PMServer-1 HAPMServer Scali_RepositoryChannelService
pmcli moveservicetohagroup PMServer-1 HAPMServer Scali_ScaMonitoringControlService
pmcli moveservicetohagroup PMServer-1 HAPMServer Scali_ScaliManageConfigurationService
pmcli moveservicetohagroup PMServer-1 HAPMServer Scali_ScaLMServerService



Chapter 6 - Monitoring the Data Center
Platform Manager allows you to monitor the health of your data center in real time with a
variety of data display formats; it provides historical data, and it enables event notification
through alarms triggered at preset thresholds.

Working with Monitor Views


This section discusses the following topics:
• Monitoring menu on page 161
• Platform Node Status Icons on page 162
• Standard Monitor Views on page 162
• Creating a Custom Monitor View on page 172

Monitoring menu

Figure 129 shows the monitoring dropdown menu at the top of the screen.

Figure 129—Monitoring menu



Platform Node Status Icons

Node status is visually indicated by the Platform node server icon. There are three
states for this icon:

Figure 130—Node Status

Standard Monitor Views

To open a standard monitor view, choose Monitoring in the tool bar, then select a view
from the drop-down list.
Platform Manager has a back select feature that allows you to work interactively with
monitor views. To back-select nodes from a monitor view, select a variable whose data
are displayed in one of the views. The nodes being monitored for that variable are
highlighted in the Data Center Selector.
Platform Manager provides a set of standard monitor views, as well as a method to
create custom monitor views. The standard monitor views are:
• Alarm View on page 162
• Custom Variables in Platform Manager Monitoring on page 167
• Interconnect Monitoring View on page 168
• Creating a Custom Monitor View on page 172

Alarm View
Platform user-definable alarms are part of the monitoring system. Users may define
their own events to trigger an alarm based on any combination of comparative
operations on any available monitoring variable. It is possible to select between a set
of pre-defined actions to be performed when an alarm is triggered. Combined with
the possibility to define monitoring variables, this makes for an extremely flexible
and powerful solution.

Viewing Alarms
To view alarms choose Monitoring -> Alarm Log. This will show a dynamically
updated list of events that have triggered the alarm. Use the Edit Alarms view
to define new alarms and edit existing ones.



Figure 131—Alarms View

Editing an Alarm
Figure 132 illustrates the Alarms view specifying where you perform the
individual steps necessary to define an alarm.

Figure 132—Edit Alarms View

The Edit Alarms view contains a list of current alarms with status information,
as well as buttons to perform operations on selected alarms. Apart from Add
Alarm, all of the other buttons require that an alarm is selected first to work
properly.
The model used to describe a condition that should set off an alarm is to combine
a series of boolean expressions using either the AND or the OR logical function. The
boolean expressions are constructed by comparing a monitoring variable to a
reference value by means of a logic operator (<, <=, >, >=, !=, ==).
For existing alarms, when an alarm is selected in either the Alarms editor or the
Alarms log, the corresponding nodes are selected in the Data Center Selector.
Custom alarms can be defined based on any available monitoring parameter and
will trigger customized or default actions.



As a response to the alarm being triggered, it is possible to select different
actions to perform, in addition to sending an E-mail. Predefined actions are
Reboot Machine and Shutdown Machine, but this can be easily extended with
user-defined actions.

View Item       DESCRIPTION


Variable        Option menu to select a monitoring variable for comparison.
Operand         Option menu to select the operator for comparison: <, <=, >, >=, ==,
                or != (C syntax).
Threshold       Threshold value to compare against. Uses the default unit of the selected
                monitoring variable, e.g. percent for CPU load, rpm for fan speed.
Match All       Use logical AND between the monitoring variable comparisons
                below. Excludes Match Any.
Match Any       Use logical OR between the monitoring variable comparisons
                below. Excludes Match All.
Add Criteria    Opens the Define Chart Data dialog, which allows you to specify the
                variables, aggregate variables, or filters to be used for the alarm.
Action          What to do when the alarm is triggered: options are None, Reboot
                Machine, or Shutdown Machine. User-specified actions will also
                show up in this menu.
E-mail address  E-mail address for the alarm notification.

Adding an alarm
Start in the Monitoring drop down menu:

Figure 133—Add Alarm dialog



Figure 134—Define Chart Data dialog

1 Select Monitoring -> Edit Alarms.


2 In the Edit Alarms view, click Add Alarm. The Add Alarm dialog opens.
3 Enter a name for the alarm, and click OK. The name appears in the alarms list.
4 Select the alarm name.
5 Click OK. The Define Chart Data dialog opens.
6 Select the Variable radio button. Drill down through the list of available variables to
select the ones to use for the alarm.



7 Select the Aggregation radio button and select an aggregate type.
8 Enter a name for the aggregate variable.
9 Click OK. The variable name appears in the Variables column.
10 To add a filter, select the Filter radio button.
11 Drill down through the list of available variables to select the type of filter criteria to use for
the alarm.
12 Enter a name for the filter.
13 Choose the operand and value for the filter.
14 Click OK. The variable name appears in the Variables column.
15 Define the three levels of thresholds by entering appropriate values for Critical, Warning,
and Notice.
16 Enter an operand to be applied to the thresholds of the alarm.
17 Select Match All to trigger the alarm if the thresholds for all variables in the list are met,
or Match Any if any are met.
18 Set priority level to activate notification for the alarm:
19 Choose an action from the drop-down list
20 Enter an E-mail notification address for the alarm.
21 Click Apply Alarm.
22 Select nodes in the Data Center Selector.
23 Select alarm names in the Edit Alarms view.
24 Click Apply.

Example: Adding a new alarm called “CPU load too high”


As a simple example, define the alarm called “CPU load too high” that will send
you an E-mail if user CPU load passes 50%.

Figure 135—Adding a new alarm



To define the new alarm:

1 Press the Add Alarm button in the Edit Alarms view.


2 Select cpuusertime from the Select Variable view.
3 Set the Operand to > and the Attention Threshold to 50 (%).
4 Click Attention.
5 To send an E-mail only when the alarm is triggered, choose None from the Action
drop-down list.
6 Enter the E-mail address of the notification recipient.
7 Save the settings by pressing Apply Alarm.
The alarm has now been defined and will appear in the Edit Alarms view. Now,
to test the new alarm, running any reasonably demanding program should push
CPU load beyond 50%. If the alarm main window is kept open, it will soon illustrate
how the alarm background changes to red as the alarm is triggered. The E-mail
sent out would resemble this one:
Cluster frontend “node-11.company.com” detected that the alarm:
"CPU usage" ("{$cpuusertime} > 30")
triggered at Tue Aug 19 02:16:49 PM CEST 2005 on the following nodes:
node-11 (value=( 100 ) > 30)
node-12 (value=( 100 ) > 30)
node-21 (value=( 99 ) > 30)
node-22 (value=( 99 ) > 30)
This alarm has no valid action.
PLEASE DO NOT REPLY TO THIS AUTOMATED MESSAGE

Custom Variables in Platform Manager Monitoring


This example is a simple script to monitor local disk access. We’ll use the metric for
monitoring disk activity at the local disk in transactions per second (tps). As there
can only be one user script, the second parameter in this script contains the metric
we want to monitor, in this case disk.tps. This parameter is set in the oid definition
line in the ScaMond.conf configuration file (see later).



#!/bin/sh
#
case $2 in
disk.tps)
        # Reconstructed branch (assumption): report the tps value, here taken as the
        # second field of the iostat line logged by the helper script below.
        echo $1 : $2 = `awk '{print $2}' /var/log/iostat.log`
        ;;
*)
        echo $1 : $2 = -1
        ;;
esac

Keep this small script running to log output from an iostat command to the file in
the var directory. The benefit of this script, while not particularly elegant, is that it
relies only on standard tools in the sysstat package.

#!/bin/sh
while true; do
        iostat 2 2 | grep sda | tail -1 > /var/log/iostat.tmp
        mv /var/log/iostat.tmp /var/log/iostat.log
        sleep 1
done
# Copy the script to all nodes and start it.
scash -p "/tmp/iostat.sh &"
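To check that the logger is producing data on the nodes, you can read the log file back; this
reuses the scash invocation style shown above:
scash -p "cat /var/log/iostat.log"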

This will keep the iostat.log file updated with values from the iostat command. You
need to update ScaMond.conf to get these values into Platform Manager.

#class name type format cmdline


class custscript external {%s : %s = %s} {/bin/sh /tmp/disk.tps.script %s %s}
#oid hwgroup name size class format oid
oid {} custdisktps integer custscript %d disk.tps
#variable hwgroup name descr size maxvalue expression
variable {} diskiotps {Disk IO sda "TPS"} integer 100 { $custdisktps }
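Before restarting the services, you can sanity-check the user script by invoking it the same way
the custscript class does; the node name n01 is only a placeholder, and the output should be a
single line of the form "n01 : disk.tps = <value>":
# sh /tmp/disk.tps.script n01 disk.tps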

Restart the scamond and scasmo services. The Platform Manager GUI will then
display the new custom variables.

Interconnect Monitoring View


This section describes how to monitor interconnects using Platform Manager.



Platform Manager can include Scali MPI Connect in the installation process. Scali
MPI Connect can be combined with a number of popular interconnect technologies
to support particular applications’ communication requirements. Currently, this
support covers interconnects based on Ethernet technology and Myrinet.
The standard Interconnect view included with Platform Manager shows Ethernet
interconnect information for all the selected nodes. You may monitor parameters for
any selection of objects which may be displayed with any applicable presentation.

Figure 136—Interconnect Monitoring view

To monitor interconnect status:

1 Select an object, or objects in the Data Center Selector.


2 Choose Monitoring -> Interconnect, or Window -> Show View -> Interconnect to open a
new interconnect monitoring view.
3 Select an Interconnect view from the Interconnects drop-down list.
4 You can also lock, or unlock the data presented in the view and display a legend of
descriptions for each monitor symbol.

Ethernet Monitoring
The Platform Manager GUI provides a cluster wide interconnect overview of all
Ethernet-related monitoring variables through the standard Interconnect View.



The Ethernet status view is really a compound monitoring view utilizing several
Ethernet monitoring variables to give an overall picture of the state of the
interconnect. From this single view you can read:
• Link speed: 10Mbit, 100Mbit or 1Gbit
• Duplex: full/half
• Link state: up/down/cable out
• Error counters (transmit errors)

Myrinet Monitoring
When you manage systems with Myrinet, Platform Manager adds a Myrinet
submenu item to the Interconnects menu in the Interconnect view. The Platform
Manager GUI provides a cluster wide interconnect overview of all
Myrinet-related monitoring variables.
To monitor Myrinet systems:

1 Select an object, or objects in the Data Center Selector.


2 Choose Monitoring -> Interconnect, or Window -> Show View -> Interconnect to open
a new interconnect monitoring view.
3 Choose Interconnects -> Myrinet. The Myrinet Interconnect view opens.

Figure 137—Myrinet Interconnect view



The Myrinet link status window is a compound monitoring view using several
Myrinet monitoring variables to give an overall status picture of the Myrinet
interconnect. Links are arranged in a 2D column. From this single window you
can monitor:
• Link Self test state: passed/failed/in-progress
• Link state up/down
• Data In and Data Out flags
• Bad CRC count
Together these give a good indication of the state of the Myrinet interconnect.

Monitoring Queue Status View

You can monitor the status of jobs that you have sent to the queuing system using
the LSF Queue Status View or the PBS Queue Status View.

Figure 138—LSF Queue Status



Figure 139—PBS Queue Status

To monitor queue status:

1 Select a node, or nodes in the Data Center Selector.


2 Choose Monitoring -> Queue Status.

Creating a Custom Monitor View

You can create custom views for monitoring a wide variety of activity in your data
center. Platform Manager provides a generic view, the Monitor View, that you can modify
to include data from variables that you specify.



Figure 140—Define Chart Data dialog



To create a new Monitor View:
• Select an object, or objects in the Data Center Selector.
• Choose Monitoring -> New Monitor View, or Window -> Show View -> Monitor
View. The Define Chart Data dialog opens.
• Click Variable. You can also choose to monitor by aggregate variable or by filter.
• Select a variable, or variables, from the drop-down list.
• Click Add. The variable displays in the view area.

• Click OK. The Monitor View opens. The data you specified is displayed in the
view.

Figure 141— Monitor View.



To configure the chart:
• Choose Presentation.
• Select a chart format.
• Choose Add/Remove Monitoring to specify more data to display in the view
using the Define Chart Data view.
• Click Save Chart to save a snapshot of the chart in a file.
There are two ways to preserve your customized monitoring:
• You can capture the image from your test node using IBP.
• If you are not using IBP, you can create an RPM of your modifications using
ScaCPG.



Chapter 7 - Managing Systems
Platform Manager is designed to make the management of clusters and servers within a
data center as efficient as possible.
This chapter’s topics include:
Overview of the Management Menus on page 191
Running MPI Jobs on page 193
Running Parallel Shell Commands on page 194

Overview of the Management Menus


Platform Manager provides tools to manage one or many nodes in your data center directly. You can also queue MPI jobs and run commands on preselected nodes using a suite
of parallel shell tools.
The Management menu facilitates system management related tasks, including the use of
remote power and console switches.

Note: Only the Node Console option is available to ordinary users. All other items
require root privileges.

The options in the Power Management menu are described in Table 12.

Figure 142—The Management Menus



Table 12—Power Management Menu

Menu Item DESCRIPTION


Power Mgt Opens the “Power” submenu. This contains options to perform a hardware Power Off, Power On, or Power Cycle on all selected nodes. (Root only, confirmation needed.)
Reboot Initiates a software reboot on all selected nodes. (Root only, confirmation needed.)
Shutdown Initiates a software shutdown for all selected nodes. (Root only, confirmation needed.)
MPI Launch Opens the MPI Start view to run MPI jobs on selected nodes.
Remote Access -> Node Opens a console window on the selected nodes.
Console
Remote Access -> Node Opens a terminal window on the selected nodes.
Terminal
Parallel Shell Opens the Parallel Shell view to run shell commands on
selected nodes.

Figure 143—right-click Node Power menu



You may also right-click on a node and choose Node On/Off for the same choices as you will find in the Management drop-down menu.
Console and power functionality requires additional management infrastructure:
• External power/console switches, or
• Certified nodes with built-in support for IPMI, HP iLO, or Dell RAC.
These options will be visible in the menu only if you have configured these functions
properly on your nodes.

Running MPI Jobs


You can execute MPI programs on selected nodes using the MPI Start View.

Figure 144—MPI Launch view

To run MPI programs:


• Select a node, or nodes in the Data Center Selector.
• Choose Management -> MPI Launch. The MPI Launch View opens.
• In the Processes per Node field, select the number of instances of the program
you want to start on each node.
• Enter the path to the MPI executable program.
• Enter any MPI options required. These options are for mpimon. See the Scali MPI
Connect User’s Guide for more information.
• Click Launch.
The program is started. Output to “stdout” and “stderr” is printed in the MPI Output area.
While a program is running the Launch button becomes the Stop button. Clicking Stop
aborts the program and stops all instances of the program on all nodes.



Running Parallel Shell Commands
The Parallel Shell View allows you to run shell commands on selected nodes in parallel. This
is a graphical front-end to a subset of the ScaSh parallel shell tools.

Figure 145—Example of a Parallel shell command

To run a parallel shell command:


1 Select a set of nodes in the Data Center Selector.
2 Choose Management -> Parallel Shell.
3 Enter a command in the field at the bottom of the view.
4 Click Run Command. The output displays in the Shell Output area.
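Because the view is a front-end to ScaSh, the same kind of command can also be run from a terminal on the frontend, for example (a minimal sketch; see the ScaSh documentation for node-selection options):

# run a command in parallel via scash (node-selection options omitted)
scash -p uptime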



Chapter 8 - Accounting Systems
This chapter explains how to use and configure the Platform Accounting system.
ScaAccounting is the batch system accounting service, a daemon that listens to accounting
files on a PBS Pro server. When you start it, it checks all existing accounting files for new
records. After all new records are saved, the daemon will listen for modifications in
accounting files. It will only save records for successfully completed jobs. Accounting
reports on PBS Pro data is represented in the BIRT report viewer.
scaacct is a cluster wide accounting-system based on BSD accounting. This package
includes software for accounting collection and presentation. Accounting-presentation is
performed by /opt/scali/sbin/scaacct. scaacct generates accounting reports for a given period
of time.
Topics in this chapter include:
ScaAccounting on page 180
scaacct on page 181

ScaAccounting
ScaAccounting is used in batch system accounting. Topics in this section include:
Manually enabling the accounting functions in ScaAccounting on page 180
Starting ScaAccounting on page 180
ScaAccounting log on page 180

Manually enabling the accounting functions in


ScaAccounting

ScaAccounting and the PBS Pro server must run on the same node. When you enable a PBS server in the Platform Manager GUI, Platform Manager will also enable ScaAccounting.
To enable ScaAccounting through pmcli, enter:
pmcli addbatchsystemaccountingservice <systemname>

Starting ScaAccounting

Enter:
/etc/init.d/scaaccounting start

ScaAccounting log

Normally you will find the ScaAccounting log here: /var/log/scaaccountd.log. This log will
contain a list of error messages and a list of files that were parsed.



scaacct
Scaacct generates accounting reports for the cluster. Enter:

/opt/scali/sbin/scaacct -h
to display the built-in help.
Table 13 describes the syntax and options of scaacct.

Table 13 — scaacct options


scaacct [options] [start] [stop]
OPTIONAL DESCRIPTION
-f FORMAT, --format=FORMAT - output the report in FORMAT. Choose between text (default), csv, html and latex.
-g GROUPBY, --groupby=GROUPBY - summarize accounting data grouped by GROUPBY. Use the option twice to make a table with data grouped by two parameters. You can group data by user, group, command, node, year, month, day, weekday, or time of day.
-h, --help - show this help message and exit
-l, --listall - list all accounting entries individually instead of grouping them
-r RULE, --rule=RULE - filter accounting records according to RULE. RULE should be of the format key=value. Key can be uid, user, gid, group, node or command. Use the option multiple times to add more rules.
-s SUMMARIZE, --summarize=SUMMARIZE - specify how to summarize accounting data. Choose between CPUtime (default; system time and user time), times (elapsed, system and user time) or full (summarize all available data).
--version - show the program's version number and exit

By default the scaacct command reports per user accounting data for the previous month.
(Immediately after an install no data has been gathered and the report will be empty.
Processes are accounted at time of termination).
Enter:



# scaacct
to render a report table something like this:

Accounting report 07/01/06 00:00:00 - 07/31/06 23:59:59


Username CPU time
root 184.14
rpc 0.33
apache 0.17
rk 23.07
Total 207.71

Time Specification

The time specification can be given as a <start> <stop> time range or a single <time>
unit as listed below:

Time specification Output


# scaacct 2006 Specific year
# scaacct 2006-07 Specific month in year
# scaacct 2006-07-03 Specific day in month in year
# scaacct june Last month by name

Using scaacct with a start time only


Entering a start value only:
scaacct 2007-07-01
renders an accounting report table for a single day



Accounting report
2007-07-01 00:00:00 - 2007-07-01 23:59:59
Username CPU time
root 135.28
rpc 0.14
apache 0.17
rk 23.01
Total 158.60

scaacct with a time range


Entering both start and stop values:
# scaacct 2007-06-14 2007-08-02
renders an accounting report table over a determined period.

Table 14 — scaacct with a specified time range


Accounting report
06/14/07 00:00:00 - 08/02/07 23:59:59
username CPU time
root 184.14
rpc 0.33
apache 0.17
rk 23.07
Total 207.71

Group-by Specification

The default is to group by user, but there are several other options. Please note that you can specify two group-by options to create a two-dimensional table. If you do not want to group at all, you may use the --listall option instead; this will list all records.



Group-by specification Output
# scaacct -g user List results by Unix user IDs
# scaacct -g group List results by Unix groups IDs
# scaacct -g command List results by command name
# scaacct -g node List results by node name
# scaacct -g year List results by year
# scaacct -g month List results by month
# scaacct -g day List results by day
# scaacct -g weekday List results by weekday
# scaacct -g timeofday List results by time of day

Example: scaacct grouping by Unix group


Entering:
# scaacct -g group
renders an accounting report table grouped by Unix group over the default period (the previous month).

Table 15 — Example: scaacct grouped by group

Accounting report
07/01/07 00:00:00 - 07/31/07 23:59:59
Group CPU time
root 457.48
rpc 0.57
apache 0.21
platformusers 191.47
Total 649.73

Example: scaacct grouping by node and command


Entering:
# scaacct -g node -g command
renders an accounting report table grouped by node and command.



Table 16 — Example: scaacct grouped by node and command

Accounting report
07/01/06 00:00:00 - 07/31/06 23:59:59
Command \ Node delfi2-1 n1 n2 n3 Total
0anacron 0.01 0.00 0.00 0.00 0.01
S90psacct 0.02 0.01 0.01 0.01 0.05
S90scasmo-contr 0.00 0.00 0.00 0.00 0.00
S90xfs 0.08 0.00 0.04 0.03 0.15
yphelper 0.01 0.00 0.00 0.00 0.01
ypwhich 0.00 0.01 0.00 0.00 0.01
ypxfr 0.11 0.00 0.00 0.00 0.11
yum.cron 0.00 0.00 0.00 0.00 0.00
zcat 0.15 2.58 2.50 2.61 7.84
Total 424.61 102.32 87.98 34.82 649.73

Rule Specification
Use the -r option to exclude records that don’t match rules specified in a
“key=value” format. Multiple rules may be combined.

Rule specification Output


# scaacct -r uid=0 Only list results matching Unix user ID 0
# scaacct -r user=root Only list results matching Unix username ‘root’
# scaacct -r gid=501 Only list results matching Unix group ID 501
# scaacct -r group=users Only list results matching Unix group name ‘users’
# scaacct -r node=n12 Only list results matching node name ‘n12’
# scaacct -r command=all2all Only list results matching command name ‘all2all’

Note: The accounting information is based on termination time and not start time.

Using scaacct to report on a specific user


Entering:
# scaacct -g node -g command -r user=rpc
renders an accounting report table for a particular user



Table 17 — Example: scaacct with user specification
Accounting report
07/01/07 00:00:00 - 07/31/07 23:59:59
Command\Node delfi2-1 n1 n2 n3 Total
portmap 0.56 0.00 0.01 0.00 0.57
Total 0.56 0.00 0.01 0.00 0.57

Summarize Specification

Summarize specification Output
# scaacct -s CPUtime Summarize CPU time (system-time + user-time)
# scaacct -s times Summarize elapsed-, system- and user-time
# scaacct -s full Summarize the following values:
• Number of times the application was invoked
• Elapsed time
• System time
• User time
• Memory consumption (size * time)
• Characters transferred
• Number of blocks read or written
• Minor page faults
• Major page faults
• Number of swaps

Example: Using scaacct for a summary of elapsed, system and user time
Entering:
# scaacct -s times
scaacct renders an accounting report table for a summary



Accounting report
07/01/07 00:00:00 - 07/31/07 23:59:59
Username Elapsed System User
root 372015.62 196.33 2611.15
rpc 19152.59 0.42 0.17
apache 108816.72 0.15 0.06
rws 1297.25 85.06 83.17
rk 15478.77 17.24 6.00
Total 516760.95 299.18 350.55

Generating reports with scaacct

By default, scaacct generates a text-formatted report, but other formats can be specified with the "--format" option. Supported formats include text, csv, html, and latex.
The csv format generates reports that can easily be imported into spreadsheets.
LaTeX reports can be converted to PostScript or PDF using the standard LaTeX tools. The easiest way to create a printable report is to redirect the output from scaacct to a file:
scaacct [options] > file.tex
and use pdflatex:
pdflatex file.tex
The .tex file can be modified before running pdflatex to change its layout or content.

Using scaacct with the -f option


Entering:
# scaacct -g node -g user -f csv
generates an accounting report table like the one below.



Table 18 — Example: Using scaacct with -f
Accounting report
07/01/07 00:00:00 - 07/31/07 23:59:59
Username/Node delfi2-1 n1 n2 n3 Total
root 354.05 34.03 34.64 34.76 457.48
rpc 0.56 0.00 0.01 0.00 0.57
apache 0.21 0.00 0.00 0.00 0.21
rws 58.10 57.85 52.22 0.06 168.23
rk 11.69 10.44 1.11 0.00 23.24
Total 424.61 102.32 87.98 34.82 649.73

Using scaacct with pdflatex


Entering:
# scaacct -g group -g day -f latex 2007-07 > test.tex
# pdflatex test.tex
generates an accounting report table in the pdf format as shown in Figure 146.

Figure 146—PDF output sample



Triggering Data Collection

Accounting data is collected from the nodes to the accounting server daily. You can then create reports with the current data.
To trigger an immediate update, run logrotate and scaacct_collect on all nodes:
# scash -p "logrotate -f /etc/logrotate.conf"
# scash -p /etc/cron.daily/scaacct_collect

n1 : renaming file: /var/account/pacct.1.gz


n1 : transfering file:
/var/account/history/1088733720.1088766834.6a5623da496c6ae42a24f45e38068
29b.gz
n3 : renaming file: /var/account/pacct.1.gz
n3 : transfering file:
/var/account/history/1088733720.1088766845.33c3b47878e56d4c61f9b6b166e9d
4aa.gz
n2 : renaming file: /var/account/pacct.1.gz
n2 : transfering file:
/var/account/history/1088733720.1088766834.76d8b4d0ec33cb32cd559a61c0b06f
cd.gz
n0 : renaming file: /var/account/pacct.1.gz
n0 : transfering file:
/var/account/history/1088733720.1088766782.c7e2a9a02627b90d0aa1e19ea21b8
d94.gz



Chapter 9 - Reporting
This chapter’s topics include:
Report Interface on page 205
Opening a Report on page 206

Report Interface
There are two groupings of functions at the top of the interface: the Cluster Summary and the Report Navigation.

Cluster Summary

You will find the cluster summary tools on the left side of the tool bar.

Figure 147—Cluster Summary tool bar icons, top left view

At the top left of the report dialog the icons are, from left to right:
• Home
• TOC toggle
• Run Report
• Export Data
• Export report
• Print report as PDF
• Print report on server

Report Navigation

On the right side of the tool bar are the icons for navigation.



Figure 148—Report Navigation interface

Opening a Report
Go to the Report drop-down menu in the tool bar and choose Report -> Show Reports.
You may also use the web interface at http://<servername>:8080/reports/.

Figure 149—Report Menu

Go to Report-> Show Reports.



Figure 150—Selecting the report type

Choose the appropriate type of report. A description follows each type.


Click on OK when you have chosen.

Management and Inventory

Inventory Overview is a list showing the totals of each type of product.


Management Dashboard is an overview of utilization and availability of the clusters.



Server List lists servers matching the specified parameters

Monitoring

Cluster Summary shows server settings and status. Refreshes every 5 minutes.
Node Status shows monitoring status for all nodes. Refreshes every 5 minutes.
Server Summary shows server settings and historical status.

Networking

Network Overview is a table depicting the configuration of the network devices on all
systems.

Platform certified products

Certified Distributions lists certified distributions.


Certified Servers shows a list of certified servers.

Workload management

PBS Pro Job History lists batch jobs which have run on the cluster.
PBS Pro Job Usage is an overview of the number of batch jobs and resources consumed
by each user.

BIRT Report Parameters

Required parameters are marked with an asterisk.



Figure 151—Birt parameters

1 Choose the cluster you want to report on.


2 Click on OK.



Chapter 10 - Platform Manager Command Line
Interfaces
Platform Manager provides an extensive set of command line interfaces (CLIs) which give the advanced user the option to perform data center management from the command line.
You can perform scripted management tasks with the CLIs. For examples please see Best
Practices in Platform Manager on page 362. You have the option of using bracket
expansion with zero padding and grouping. For more information on expansion brackets
and grouping please see Bracketing and Grouping on page 359.
Platform Manager contains a suite of the following command line interface tools:
The pmcli interface on page 195
The console interface on page 312
The power interface on page 324

The pmcli interface


The pmcli is the command line counterpart to configuration and provisioning functions in
the Platform Manager GUI.
Arguments marked with [..] in help will be bracket-expanded. [..] at the end of an
argument explanation means that this argument should be a list.
Use the following for Platform Manager Command Line interface:

Table 19 — Platform Manager CLI command syntax

Syntax: pmcli command <arguments> [options]


ARGUMENT - arguments that are required
OPTIONAL - you may use one of the options in the list
-h, --help - show this help message and exit

CAUTION—The CLI commands make changes to the configuration database ONLY. If you do not apply your changes to the node(s), the nodes themselves will not change, no matter how long you wait. You MUST apply your changes to the node(s) or clusters either in the GUI or by using:
# pmcli reconfigure <nodename|all>
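For example, a minimal scripted change might look like the following sketch (the system name and the exported directory are placeholders, not values from this guide):

# system name and directory are placeholders
pmcli addnfsexport frontend /scratch
pmcli reconfigure frontend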



The commands are grouped in classes:
• BMC Commands on page 197
• Cluster Commands on page 201
• Custom Attributes Commands on page 203
• Deployment Commands on page 205
• Diagnostic Commands on page 209
• Filesystem Commands on page 212
• Flexlm Commands on page 217
• High Availability (HA) Commands on page 221
• Image Management Commands on page 235
• Licensing Commands on page 237
• Logging Commands on page 242
• LSF Commands on page 244
• Network Commands on page 253
• Node Commands on page 268
• PBS Options Commands on page 276
• Product and Software Options Commands on page 281
• Services Options Commands on page 293
• Switch Commands on page 305
• Template Commands on page 310



BMC Commands

The Baseboard Management Controller commands help you manage the interface
between system management software and platform hardware.
• addbmc on page 197
• disablebmcconsole on page 197
• disablebmcmonitoring on page 198
• disablebmcpower on page 198
• enablebmcconsole on page 198
• enablebmcmonitoring on page 198
• enablebmcpower on page 199
• listbmccapabilities on page 199
• removebmc on page 199
• showbmc on page 200

addbmc
addbmc adds a BMC for the system(s)

Table 20—addbmc
pmcli addbmc <systemnames> <username> <password> <ipspecs>[subnet]
ARGUMENT DESCRIPTION
systemnames - the name(s) of the system(s) {[..]}
username - username
password - password - YOU MUST NOT USE ENCRYPTION HERE
ipspecs - ip address(es) [..]
OPTIONAL DESCRIPTION
subnet - subnet for the ipaddress
--subnet=SUBNET

disablebmcconsole
disablebmcconsole disables BMC console for the system(s)

Table 21—disablebmcconsole
pmcli disablebmcconsole <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) {[..]}



disablebmcmonitoring
disablebmcmonitoring disables BMC monitoring for the system(s).

Table 22—disablebmcmonitoring
pmcli disablebmcmonitoring <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name(s) of the system(s) {[..]}

disablebmcpower
disablebmcpower disables BMC power control for the system(s).

Table 23—disablebmcpower
pmcli disablebmcpower <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) {[..]}

enablebmcconsole
enablebmcconsole enables BMC console for the system(s).

Table 24—enablebmcconsole
pmcli enablebmcconsole <systemnames> [consserver]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
consserver - the name of a console server
--consserver=CONSSERVER

enablebmcmonitoring
enablebmcmonitoring enables BMC monitoring for the system(s).



Table 25—enablebmcmonitoring
pmcli enablebmcmonitoring <systemnames> [monitoringserver]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
monitoringserver - name of the out-of-band monitoring system to monitor this BMC.
Default is to find one automatically.
--monitoringserver = MONITORINGSERVER

enablebmcpower
enablebmcpower enables BMC power control for the system(s).

Table 26—enablebmcpower

pmcli enablebmcpower <systemnames> [powerserver]


ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
powerserver - the name of the power server
--powerserver=POWERSERVER

listbmccapabilities
listbmccapabilities returns a list of BMC capabilities for the system(s).

Table 27—listbmccapabilities
pmcli listbmccapabilities <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}

removebmc
removebmc removes BMC from system(s)

Table 28—removebmc
pmcli removebmc <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}



showbmc
showbmc shows BMC for the system(s)

Table 29—showbmc
pmcli showbmc <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}



Cluster Commands

Through the cluster commands you can create clusters from a collection of nodes,
rename them or add and remove nodes from cluster.
• addnodetocluster on page 201
• createcluster on page 201
• listclusters on page 201
• listnodesincluster on page 201
• removenodefromcluster on page 202
• renamecluster on page 202

addnodetocluster
addnodetocluster adds system names to a cluster.

Table 30—addnodetocluster
pmcli addnodetocluster <systemnames> <clustername>
ARGUMENT DESCRIPTION
systemnames - name of system(s) [..]
clustername - name of the cluster

createcluster
createcluster creates a cluster of type 'performance'. See “Creating a flat
cluster with pmcli” on page 380 for best practices on how to use createcluster.

Table 31—createcluster
pmcli createcluster <name>
ARGUMENT DESCRIPTION
name - the name of the cluster you will create

listclusters
listclusters returns a list of all available clusters of the type “performance”.

Table 32—listclusters
pmcli listclusters
ARGUMENT OPTIONAL
none none

listnodesincluster



listnodesincluster returns a list of all nodes in a cluster.

Table 33—listnodesincluster
pmcli listnodesincluster <clustername>
ARGUMENT DESCRIPTION
clustername - name of the current cluster

removenodefromcluster
removenodefromcluster removes systemname(s) from a cluster.

Table 34—removenodefromcluster

pmcli removenodefromcluster <systemnames> <clustername>


ARGUMENT DESCRIPTION
systemnames - name of system(s) [..]
clustername - name of cluster

renamecluster
renamecluster assigns a new name to a cluster.

Table 35—renamecluster
pmcli renamecluster <oldclustername> <newclustername>
ARGUMENT DESCRIPTION
oldclustername - the current name of the cluster
newclustername - the new name of the cluster
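As a quick illustration, the following sketch creates a cluster and adds a range of nodes to it (the cluster and node names are placeholders; see "Creating a flat cluster with pmcli" on page 380 for best practices):

# cluster and node names are placeholders
pmcli createcluster mycluster
pmcli addnodetocluster n[01-04] mycluster
pmcli listnodesincluster mycluster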



Custom Attributes Commands

Manage your system attributes with these commands.


• getcustomattribute on page 203
• listcustomattributes on page 203
• removecustomattribute on page 203
• setcustomattribute on page 203

getcustomattribute
getcustomattribute gets a custom attribute for system(s).

Table 36—getcustomattribute
pmcli getcustomattribute <systemnames> <attributename>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
attributename - the name of the attribute to set

listcustomattributes
listcustomattributes lists custom attributes for system(s).

Table 37—listcustomattributes
pmcli listcustomattributes <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}

removecustomattribute
removecustomattribute removes a custom attribute for system(s).

Table 38—removecustomattribute
pmcli removecustomattribute <systemnames> <attributename>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
attributename - name of attribute to remove.

setcustomattribute
setcustomattribute sets a custom attribute for system(s).



Table 39—setcustomattribute
pmcli setcustomattribute <systemnames> <attributename> <value>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
attributename - name of attribute to set.
value - the value for the custom attribute
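For example, custom attributes can tag nodes with site-specific information such as a rack location (the attribute name, value and node names below are purely illustrative):

# attribute name, value and node names are placeholders
pmcli setcustomattribute n[01-04] rack rack-A3
pmcli getcustomattribute n[01-04] rack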



Deployment Commands

Use the deployment commands to install and to configure software on your systems.
These commands are the cli counterparts to The Upload Wizard in the GUI (see
“Upload Software Wizard” on page 68).
• install on page 205
• installmanagementsoftware on page 205
• reconfigure on page 207
• reconfiguredryrun on page 207
• setdiskless on page 207
• setdistribution on page 207
• setimage on page 208
• setnettemplate on page 208
• setftptemplate on page 208

install
install installs system(s). The operating system will be installed based on the
current configuration of the system.

Table 40—install
pmcli install <systemnames> [installserver]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
installserver - name of installserver
--installserver=INSTALLSERVER

installmanagementsoftware
installmanagementsoftware installs Platform Manager software on system(s) without reinstalling the operating system. It logs onto the system using a remote shell (rsh or ssh) and installs the Platform Manager software and services on top of an existing Linux installation. The system must already be present and have the Platform Manager software and services enabled in the Platform Manager configuration database prior to running installmanagementsoftware. See discovernode on page 270 and enablemanagementofservers on page 267 for adding existing systems to the database.

Table 41—installmanagementsoftware
pmcli installmanagementsoftware <systemnames> [netconfigtemplate] [installserver]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
netconfigtemplate - UUID of the installation template.
--netconfigtemplate=NETCONFIGTEMPLATE
installserver - server from where the installation job will run
--installserver=INSTALLSERVER

Note: Root login without password must be enabled, or the SSH_PASSWORD variable
for the system to be discovered must be set for installmanagementsoftware.
Note: See how to use listtemplates on page 311 to get a list of available
templates for using netconfigtemplate.



reconfigure
reconfigure will update configuration files and running services to match the
configuration in the configuration database. This is the same operation as running
“Apply changes” from pmgui. By default reconfigure will reconfigure all installed
managed systems.

Table 42—reconfigure
pmcli reconfigure [systemnames] [installserver]
OPTIONAL DESCRIPTION
systemnames - name of system(s) to reconfigure{[..]
--systemnames=SYSTEMNAMES
installserver - server where the installation job will run
--installserver=INSTALLSERVER

reconfiguredryrun
reconfiguredryrun tests system(s) re-configuration. This is the same as the
“reconfigure” command, but the changes will only be reported, not actually
performed.

Table 43—reconfiguredryrun
pmcli reconfiguredryrun <systemnames> [installserver]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
installserver - name of installserver
--installserver=INSTALLSERVER

setdiskless
setdiskless sets systems(s) diskless with software image.

Table 44—setdiskless
pmcli setdiskless <systemnames> <imagename>
ARGUMENT DESCRIPTION
systemnames - string with the name(s) of the system(s) {[..]}
imagename - sets an os image

setdistribution
setdistribution sets distribution to system(s).



Table 45—setdistribution
pmcli setdistribution <systemnames> <distroid>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
distroid - distribution to set

setimage
setimage sets software image for system(s).

Table 46—setimage
pmcli setimage <systemnames> <imagename>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
imagename - os image to set

setnettemplate
setnettemplate sets netconfig template to system(s).

Table 47—setnettemplate
pmcli setnettemplate <systemnames> <nettemplate>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
nettemplate - netconfig template to set

setftptemplate
setftptemplate sets tftptemplate to system(s).

Table 48—settftptemplate
pmcli setftptemplate <systemnames> <template>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
template - TFTP template to set
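As an example, a typical deployment sequence might assign an image to a range of nodes and then install them (the image and node names are placeholders):

# image and node names are placeholders
pmcli listimages
pmcli setimage n[01-08] golden-image
pmcli install n[01-08]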



Diagnostic Commands

You can test your database and check that systems are functioning properly using
commands found in this section.
• diagnosecimdatabase on page 209
• diagnoseconsole on page 209
• diagnoseinstallation on page 209
• diagnosemonitoring on page 210
• diagnosenis on page 210
• diagnosescampi on page 210
• diagnosescash on page 210
• diagnosessh on page 210
• diagnosesshkeys on page 211

diagnosecimdatabase
diagnosecimdatabase tests that the configuration database is sane.

Table 49—diagnosecimdatabase
pmcli diagnosecimdatabase <systemname>
ARGUMENT OPTIONAL
none none

diagnoseconsole
diagnoseconsole tests console functionality.

Table 50—diagnoseconsole
pmcli diagnoseconsole <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}

diagnoseinstallation
diagnoseinstallation verifies whether or not the installation was successful.

Table 51—diagnoseinstallation
pmcli diagnoseinstallation <systemname>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}



diagnosemonitoring
diagnosemonitoring tests the monitoring of the system.

Table 52—diagnosemonitoring
pmcli diagnosemonitoring <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}

diagnosenis
diagnosenis tests the NIS service.

Table 53—diagnosenis
pmcli diagnosenis <systemname>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}

diagnosescampi
diagnosescampi tests scampi (Scali MPI Connect).

Table 54—diagnosescampi
pmcli diagnosescampi <systemname>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}

diagnosescash
diagnosescash tests scash.

Table 55—diagnosescash
pmcli diagnosescash <systemname>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}

diagnosessh
diagnosessh tests the ssh setup.



Table 56—diagnosessh
pmcli diagnosessh <systemname>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}

diagnosesshkeys
diagnosesshkeys tests the CIM for both public and private ssh keys.

Table 57—diagnosesshkeys
pmcli diagnosesshkeys <systemname>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}



Filesystem Commands

• addlustremdt on page 212


• addlustreost on page 212
• addnfsexport on page 213
• addremotefs on page 213
• createlustrefs on page 214
• formatlustrefs on page 214
• listlustrefs on page 214
• listremotefs on page 215
• removelustrefs on page 215
• removeremotefs on page 215
• testlustrefs on page 215

addlustremdt
addlustremdt creates a Lustre MDT (metadata target) for the system(s).

Table 58—addlustremdt
pmcli addlustremdt <systemnames> <fsname> <mdtname> <backendfstype>
<filepath> <filesize>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
fsname - name of file system
mtdname - name for this MDT inside lustre
backendfstype - filesystemtype for the backend. E.g. ldiskfs
filepath - path to devicefile or loopback file
filesize - number of MB for loopback files, 0 for devices

addlustreost
addlustreost creates Lustre OST for system(s).



Table 59—addlustreost
pmcli addlustreost <systemnames> <fsname> <mdtname> <backendfstype>
<filepath> <filesize>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
fsname - name of file system
mtdname - name for this OST inside lustre
backendfstype - filesystemtype for the backend. E.g. ldiskfs
filepath - path to devicefile or loopback file
filesize - number of MB for loopback files, 0 for devices

addnfsexport
addnfsexport adds a service for exporting directories over NFS from system(s).

Table 60—addnfsexport
pmcli addnfsexport <systemnames> <directory> [client=['*:(ro)']]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
directory - name of the directory to be exported
OPTIONAL DESCRIPTION
client - clients with export options. Client argument shouldn't have any
spaces. Defaults to "*(ro)"
--client=CLIENT
Multiple clients can be specified by using --client multiple times in
command e.g
--client "host1:(rw,sync)"
--client "host2:(ro,async)".

Note: --client host2 is equivalent to --client "host2(ro)"

addremotefs
addremotefs adds mounting for remote filesystem on system(s).



Table 61—addremotefs
pmcli addremotefs <systemnames> <fstype> <src> <mntpoint> [options]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
fstype - type of filesystem, legal values: "nfs" and "lustre"
src - source
mntpoint - mountpoint
OPTIONAL DESCRIPTION
--options - options to mount command to be given as -o options to mount,
using comma-separated values, e.g.
"_netdev,tcp,hard,rsize=64K,wsize=64K,intr"
--options=OPTIONS
Run "man mount" on a unix system for a list of all the options and
their documentation
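For example, mounting an NFS export from a frontend on a range of nodes might look like this sketch (host name, node names and mount options are placeholders):

# host name, node names and mount options are placeholders
pmcli addremotefs n[01-16] nfs frontend:/home /home --options="hard,intr"
pmcli reconfigure n[01-16]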

createlustrefs
createlustrefs creates Lustre file system.

Table 62—createlustrefs
pmcli createlustrefs <fsname>
ARGUMENT DESCRIPTION
fsname - name of file system

formatlustrefs
formatlustrefs formats and enables lustre filesystem.

Table 63—formatlustrefs
pmcli formatlustrefs <fsname>
ARGUMENT DESCRIPTION
fsname - the name given to the filesystem in createlustrefs on page
214.

listlustrefs
listlustrefs lists lustre file systems.



Table 64—listlustrefs
pmcli listlustrefs
ARGUMENT OPTIONAL
none none

listremotefs
listremotefs returns a list of remote filesystem(s) to the current system.

Table 65—listremotefs
pmcli listremotefs <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}

removelustrefs
removelustrefs removes lustre file system.

Table 66—removelustrefs
pmcli removelustrefs <fsname>
ARGUMENT DESCRIPTION
fsname - name of file system

removeremotefs
removeremotefs removes mounting of remote filesystem(s) on the current
system.

Table 67—removeremotefs
pmcli removeremotefs <systemnames> <mntpoint>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
mntpoint - mountpoint

testlustrefs
testlustrefs runs performance diagnostics for lustre filesystem.



Table 68—testlustrefs
pmcli testlustrefs <fsname>
ARGUMENT DESCRIPTION
fsname - the name given to the filesystem in createlustrefs



Flexlm Commands

Flex manages FLEX licensed services. Commands are listed below.


• addflexclientconfigtoservice on page 217
• createflexclientconfigdir on page 217
• createflexclientconfigfile on page 217
• createflexclientconfigserver on page 218
• deleteflexclientconfig on page 219
• listflexclientconfigs on page 219
• listflexclientconfigsonservice on page 219
• removeflexclientconfigfromservice on page 220

addflexclientconfigtoservice
addflexclientconfigtoservice associates a FLEX client config with one or more
services.

Table 69—addflexclientconfigtoservice
pmcli addflexclientconfigtoservice <configid > <systemnames> <serviceid>
ARGUMENT DESCRIPTION
configid - uuid of the FLEX client config
systemnames - name of system(s) {[..]}
serviceid - name or uuid of service (from listhostedservice)

createflexclientconfigdir
createflexclientconfigdir creates a FLEX client config setting for local license
directory.

Table 70—createflexclientconfigdir
pmcli createflexclientconfigdir <name> <lmdir> <description>
ARGUMENT DESCRIPTION
name - the name(s) of the new configuration file
lmdir - name of the directory where the license files (*.lic) will be
found
description - description of the new configuration file

createflexclientconfigfile
createflexclientconfigfile creates a FLEX client config setting for local license file.

Note: You need to add one or more services with the command



"addflexclientconfigtoservice" before the createflexclientconfigfile takes effect.

Table 71—createflexclientconfigfile
pmcli createflexclientconfigfile <name> <lmdir> <lmfilename > <inputfile>
<description>
ARGUMENT DESCRIPTION
name - the name(s) of the new configuration file
lmdir - name of the directory where the license files (*.lic) will be
found
lmfilename - name of the license file to write in the lmdir
inputfile - full file path for the license file data. The file is read and saved
in the configuration database.
description - description of the new configuration file

createflexclientconfigserver
createflexclientconfigserver creates a FLEX client config setting for remote FLEX
license server.

Note: You need to add one or more services with the command
"addflexclientconfigtoservice" before the createflexclientconfigserver takes
effect.



Table 72—createflexclientconfigserver
pmcli createflexclientconfigserver <name> <lmserver> <lmport> <description>
[inputfile]
ARGUMENT DESCRIPTION
name - the name(s) of the new configuration file
lmserver - IP address of the remote FLEX server
lmport - port on remote FLEX server
description - description of the new configuration file
OPTIONAL DESCRIPTION
inputfile - full file path for the license file data. The file is read and saved
in the configuration database.
--inputfile=LICENSE_FILE_PATH

deleteflexclientconfig
deleteflexclientconfig deletes an unused FLEX client config setting

Table 73—deleteflexclientconfig
pmcli deleteflexclientconfig <configid>
ARGUMENT DESCRIPTION
configid - uuid of the FLEX client config

listflexclientconfigs
listflexclientconfigs returns a list of all FLEX client config settings.

Table 74—listflexclientconfigs
pmcli listflexclientconfigs [verbose]
OPTIONAL DESCRIPTION
verbose - list license file contents
--verbose=VERBOSE

listflexclientconfigsonservice
listflexclientconfigsonservice returns a list of FLEX client config(s) associated
with one or more services.



Table 75—listflexclientconfigsonservice
pmcli listflexclientconfigsonservice <serviceid> [verbose]
ARGUMENT DESCRIPTION
systemnames - the name(s) of the system(s) {[..]}
serviceid - name or uuid of service (from listhostedservice)
OPTIONAL DESCRIPTION
verbose - list license file contents
--verbose=VERBOSE

removeflexclientconfigfromservice
removeflexclientconfigfromservice removes association of a FLEX client config
from one or more services.

Table 76—removeflexclientconfigfromservice
pmcli removeflexclientconfigfromservice <configid> <systemnames> <serviceid>
ARGUMENT DESCRIPTION
configid - uuid of the FLEX client config
systemnames - the name(s) of the system(s) {[..]}
serviceid - name or uuid of service (from listhostedservice)



High Availability (HA) Commands

High Availability commands provide failover functionality on your frontend and


gateways.
• addhaethernetinterface on page 222
• addhasharedfs on page 222
• addheartbeatchannel on page 223
• addservertohagroup on page 224
• bindhaservicetointerface on page 225
• createhagroup on page 225
• disableautofailback on page 225
• disablehagroupfencing on page 225
• enableautofailback on page 226
• enablehagroupfencing on page 226
• listhagroups on page 227
• listhainterfaces on page 227
• listhapingrules on page 227
• listhasharedfs on page 227
• listheartbeatchannels on page 228
• listhostedhaservices on page 228
• listserversinhagroup on page 228
• moveservicetohagroup on page 229
• moveservicestosystem on page 228
• moveservicetosystem on page 229
• removehaethernetinterface on page 229
• removehagroup on page 229
• removehasharedfs on page 231
• removeheartbeatchannel on page 231
• removeserverfromhagroup on page 231
• sethapingallips on page 231
• sethapingoneofips on page 232
• setlsbscriptha on page 232
• setprimaryhaserver on page 233
• showautofailback on page 233



• showhagroupfencing on page 233
• unbindhaservicefrominterface on page 233
• unsetlsbscriptha on page 234

addhaethernetinterface
addhaethernetinterface adds HA (floating) ethernet interface. Each HA
interface will be managed (up/down and location of its host) by the HA group.

Table 77—addhaethernetinterface
pmcli addhaethernetinterface <hagroupname> <nicname> <lanendpoint> <hostspecs>
<ipspecs>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
nicname - name of nic (e.g. "nic1")
lanendpoint - name of the lanendpoint (e.g. "eth0")
hostspecs - hostname for HA interface
ipspecs - ip address(es)

addhasharedfs
addhasharedfs adds mount information for a shared storage filesystem
to the HA group

Note: This configuration requires HA fencing to be enabled.



Table 78—addhasharedfs
pmcli addhasharedfs <hagroupname> <src> <mntpoint> <fstype> [options]
ARGUMENTS DESCRIPTION
hagroupname - name of HA group
src - source (e.g. /dev/sda1 or <host>:/exportdir)
mntpoint - mount point
fstype - file system type
OPTIONAL DESCRIPTION
options - options to mount command to be given as -o options to
mount
--options=OPTIONS

addheartbeatchannel
addheartbeatchannel allows for the actual "heartbeat" communication within the
HA group, which can be done over one or more channels.

Note: There must be at least one channel. Run the addheartbeatchannel


command several times to create multiple redundant channels.



Table 79—addheartbeatchannel
pmcli addheartbeatchannel <systemname> <hbtype> [channelargs..]
ARGUMENT DESCRIPTION
hagroupname - name of HA group
hbtype - one of the following:
• "broadcast" <interface list>
• "multicast" <interface> <multicast group> [<ttl>, default 1]
• "unicast" <interface> <peer-ip-address>
• "serial" <serial device>
• "udpport" <udp port for all broadcast, multicast and unicast channels>, default is 694
• "baudrate" <baudrate for all serial channels>


OPTIONAL DESCRIPTION
channelargs - options for the channel type

addservertohagroup
addservertohagroup adds a system to HA group and installs the HA software on
the system.

Note: Make sure to manage the HA services with the "moveservice” commands.
Note: Make sure to use the command 'addheartbeatchannel' to add heartbeat "ping"
channels.



Table 80—addservertohagroup
pmcli addservertohagroup <systemnames> <hagroupname>
ARGUMENT DESCRIPTION
systemnames - name of HA system(s) [..]
hagroupname - name of HA group

bindhaservicetointerface
bindhaservicetointerface binds service to interface for HA group

Table 81—bindhaservicetointerface
pmcli bindhaservicetointerface <hagroupname> <serviceid> <interface>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
serviceid - name or uuid of service
interface - interface (e.g. 'eth0')

createhagroup
createhagroup creates a group of type 'HA' with default fencing disabled and
generates a common authentication key for the member systems. Enables auto
failback for the group by default.

Table 82—createhagroup
pmcli createhagroup <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group to be created
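A minimal sketch of setting up a group might look like the following (server and group names are placeholders; heartbeat channels, interfaces and services still have to be added with addheartbeatchannel, addhaethernetinterface and the moveservice commands):

# server and group names are placeholders
pmcli createhagroup hagroup1
pmcli addservertohagroup frontend[1-2] hagroup1
pmcli setprimaryhaserver frontend1 hagroup1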

disableautofailback
disableautofailback disables automatic failback: the HA services remain on whichever server is currently serving them until that node fails or an administrator intervenes.

Table 83—disableautofailback
pmcli disableautofailback <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group

disablehagroupfencing
disablehagroupfencing disables system fencing (stonith) on failover.



Table 84—disablehagroupfencing
pmcli disablehagroupfencing <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group

enableautofailback
enableautofailback makes the HA services automatically fail back to the
"primary" server.

Table 85—enableautofailback
pmcli enableautofailback <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group

enablehagroupfencing
enablehagroupfencing enables system fencing on failover.
Note: enablehagroupfencing requires correct setup of the
Scali_PowerManagementController services



Table 86—enablehagroupfencing
pmcli enablehagroupfencing <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group

listhagroups
listhagroups lists all groups of type 'HA’.

Table 87—listhagroups

pmcli listhagroups
ARGUMENT OPTIONAL
none none

listhainterfaces
listhainterfaces returns a list of ha (floating) ethernet interfaces on the ha group.

Table 88—listhainterfaces
pmcli listhainterfaces <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - the name of HA group

listhapingrules
listhapingrules lists ping rule summary.

Table 89—listhapingrules
pmcli listhapingrules <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group

listhasharedfs
listhasharedfs lists all shared filesystems for an HA group.

Table 90—listhasharedfs
pmcli listhasharedfs <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group



listheartbeatchannels
listheartbeatchannels lists the channel parameters and their IDs for all
members of the HA group.

Table 91—listheartbeatchannels
pmcli listheartbeatchannels <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - the name of the HA group

listhostedhaservices
listhostedhaservices lists the HA services hosted on the HA group

Table 92—listhostedhaservices
pmcli listhostedhaservices <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group

listserversinhagroup
listserversinhagroup lists servers in an HA group

Table 93—listserversinhagroup
pmcli listserversinhagroup <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group

moveservicestohagroup
moveservicestohagroup moves all Platform HA enabled services from the
system to the HA group

Table 94—moveservicestohagroup
pmcli moveservicestohagroup <systemname> <hagroupname>
ARGUMENT DESCRIPTION
systemname - name of HA server
hagroupname - name of HA group

moveservicestosystem
moveservicestosystem moves all Scali HA enabled services from the HA group
to the system.



Table 95—moveservicestosystem
pmcli moveservicestosystem <systemname> <hagroupname>
ARGUMENT DESCRIPTION
systemname - name of HA server
hagroupname - name of HA group

moveservicetohagroup
moveservicetohagroup moves a specified HA enabled service from a system to
HA group

Table 96—moveservicetohagroup
pmcli moveservicetohagroup <systemname> <hagroupname>
ARGUMENT DESCRIPTION
systemname - name of HA server
hagroupname - name of HA group

moveservicetosystem
moveservicetosystem moves a specified HA enabled service from a HA group to
a system.

Table 97—moveservicetosystem

pmcli moveservicetosystem <systemname> <hagroupname> <pmserviceid>


ARGUMENT DESCRIPTION
systemname - name of HA server
hagroupname - name of HA group
pmserviceid - uuid/type of Platform Manager service

removehaethernetinterface
removehaethernetinterface removes HA (floating) ethernet interface.

Table 98—removehaethernetinterface
pmcli removehaethernetinterface <hagroupname> <nicname>
ARGUMENT DESCRIPTION
hagroupname - name of group
nicname - name of nic (e.g. "nic1")

removehagroup



removehagroup removes High Availability Group.

Note: You must disconnect all resources (services and member systems) before removehagroup is allowed.



Table 99—removehagroup
pmcli removehagroup <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group to be removed

removehasharedfs
removehasharedfs removes mount information for the HA group’s shared
storage filesystem.

Table 100—removehasharedfs
pmcli removehasharedfs <hagroupname> <mntpoint>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
mntpoint - mountpoint

removeheartbeatchannel
removeheartbeatchannel removes a heartbeat communication channel (if
there is more than one).

Table 101—removeheartbeatchannel
pmcli removeheartbeatchannel <systemname> <channelid>
ARGUMENT DESCRIPTION
systemname - name of the system in an HA group
channelid - uuid of the channel

removeserverfromhagroup
removeserverfromhagroup removes systemname(s) from an HA group.

Table 102—removeserverfromhagroup
pmcli removeserverfromhagroup <systemnames> <hagroupname>
ARGUMENT DESCRIPTION
systemnames - name of HA system(s) [..]
hagroupname - name of HA group

sethapingallips
sethapingallips
• Sets ping constraint on HA group (empty unsets).



• Makes the HA system fail over if ONE of the IPs does not reply to a ping/ICMP request from the active HA server but a reply is received from the passive server.

Table 103—sethapingallips
pmcli sethapingallips <hagroupname> [iplist..]
ARGUMENT DESCRIPTION
hagroupname - name of HA group
OPTIONAL DESCRIPTION
iplist - list of ip-addresses that should be pingable for the HA system
to be "up".

sethapingoneofips
sethapingoneofips
• Sets ping constraint group on HA group (empty unsets).
• Makes the HA system fail over only if ALL of the IPs fail to reply to a ping/ICMP request from the active HA server but a reply is received from the passive server. A reply from ONE of the listed IPs does not trigger a failover.

Table 104—sethapingoneofips
pmcli sethapingoneofips <hagroupname> [iplist..]
ARGUMENT DESCRIPTION
hagroupname - name of HA group
OPTIONAL DESCRIPTION
iplist - list of ip-addresses that should be pingable for the HA system
to be "up".

setlsbscriptha
setlsbscriptha enables a custom LSB script to be controlled by the HA group. The
scripts are started in alphabetical order (and stopped in reversed order).

WARNING—the LSB script MUST follow the LSB specification, or BAD THINGS WILL
HAPPEN! LSB spec can be found at:
http://refspecs.freestandards.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html



Table 105—setlsbscriptha
pmcli setlsbscriptha <hagroupname> <lsbscriptname>
ARGUMENT DESCRIPTION
hagroupname - the name of the HA group
lsbscriptname - the name of the LSB script (e.g. "ypbind")

setprimaryhaserver
setprimaryhaserver sets the primary server of the HA group. The HA services will
automatically fail back to its "primary" server as long as it is "up".

Table 106—setprimaryhaserver
pmcli setprimaryhaserver <systemname> <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
systemname - name of primary HA server

showautofailback
showautofailback shows if the HA services automatically fail back to the
"primary" server.

Table 107—showautofailback
pmcli showautofailback <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group

showhagroupfencing
showhagroupfencing shows system fencing (stonith) status on the HA group.

Table 108—showhagroupfencing
pmcli showhagroupfencing <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group

unbindhaservicefrominterface
unbindhaservicefrominterface removes the binding of a service from an interface for the HA group.



Table 109—unbindhaservicefrominterface
pmcli unbindhaservicefrominterface <hagroupname> <serviceid> <interface>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
serviceid - name or UUID of service
interface - interface (e.g. 'eth0')

unsetlsbscriptha
unsetlsbscriptha disables a custom LSB script from being controlled by the HA
group.

Table 110—unsetlsbscriptha
pmcli unsetlsbscriptha <hagroupname> <lsbscriptname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
lsbscriptname - name of lsbscript (e.g. "ypbind")



Image Management Commands

You can use these commands to replicate uniform software installation across nodes.
• captureimage on page 235
• exportimage on page 235
• importimage on page 235
• listimages on page 236
• removeimage on page 236

captureimage
captureimage captures an image of installed software on the system.

Table 111—captureimage
pmcli captureimage <systemname> <imagename> [description] [excludes..]
ARGUMENT DESCRIPTION
systemname - the name of the system
imagename - the name of an image
OPTIONAL DESCRIPTION
description - description of an image; default is "none"
--description=DESCRIPTION
excludes - a space-separated list of the systems to be excluded
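
Example: captureimage
Capture an image from a system (the system and image names below are placeholders):
pmcli captureimage sc1435-1 goldenimage --description="Golden compute node image"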

exportimage
exportimage exports an image as a tarball.

Table 112—exportimage
pmcli exportimage <imagename> [tarballname]
ARGUMENT DESCRIPTION
imagename - the name of an image
OPTIONAL DESCRIPTION
tarballname - Tarball name of exported image. Defaults to stdout.
--tarballname=TARBALLNAME
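
Example: exportimage
Export an image to a tarball (the image and file names below are placeholders):
pmcli exportimage goldenimage --tarballname=/tmp/goldenimage.tar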

importimage
importimage imports an image from a tarball.



Table 113—importimage
pmcli importimage [imagetarballname]
ARGUMENT DESCRIPTION
imagename - the name of an image
OPTIONAL DESCRIPTION
imagetarballname - Tarball name of exported image. Defaults to stdin.
--imagetarballname=IMAGETARBALLNAME

listimages
listimages returns a list of all available images.

Table 114—listimages
pmcli listimages
ARGUMENT OPTIONAL
none none

removeimage
removeimage removes an image of installed software on the system.

Table 115—removeimage
pmcli removeimage <imagename>
ARGUMENT DESCRIPTION
imagename - the name of an image



Licensing Commands

Licenses and product keys are required to run applications.


When using the activation commands below, you must follow up with the
reconfigure command to apply the changes to the database.
• activateproductkey on page 237
• addactivationkey on page 239
• addproductkey on page 239
• listproductkeys on page 239
• removeactivationkey on page 240
• removeproductkey on page 240
• showproductstatus on page 241

activateproductkey
activateproductkey automatically activates a product key. This function requires
Internet access. It contacts the Platform registration servers and permanently
binds the product key to the system it is running on.

Table 116—activateproductkey
pmcli activateproductkey <productkey> <company> <contactemail> [street] [street2]
[city] [state] [postalcode] [country] [contactname] [contactphone] [licenseserver]
ARGUMENT DESCRIPTION
productkey - The productkey to activate. Using an empty string will
activate all productkeys.
company - name of the company wishing to register the productkey
contactemail - email address of the company wishing to register the
productkey
street - the first line of the street address of the company wishing
product activation
street2 - the second line of the street address of the company wishing
product activation
city - the city of the company wishing product activation
state - the state/province of the company wishing product
activation
postalcode - the postal code of the company wishing product activation
country - country for product activation
contactname - contact name for the company wishing product activation
contactphone - contact phone for the company wishing product activation

licenseserver - name of system where the license server is hosted. Default
is to use all license servers.
OPTIONAL DESCRIPTION
street --street=STREET
street2 --street2=STREET2
city --city=CITY
state --state=STATE
postalcode --postalcode=POSTALCODE
country --country=COUNTRY
contactname --contactname=CONTACTNAME
contactphone --contactphone=CONTACTPHONE
licenseserver --licenseserver=LICENSESERVER
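
Example: activateproductkey
A minimal invocation; the product key, company name, and e-mail address below are
placeholders:
pmcli activateproductkey XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX "Example Corp"
admin@example.com
pmcli reconfigure all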



addactivationkey
addactivationkey manually activates a product key.
To obtain the activation key for your product, please visit our site and follow the
instructions.

Table 117—addactivationkey
pmcli addactivationkey <productkey> <activationkey> [licenseserver]
ARGUMENT DESCRIPTION
productkey - the product key to activate
activationkey - the activation key for this productkey
OPTIONAL DESCRIPTION
licenseserver - name of system where the license server is hosted. Default is
to use all license servers
--licenseserver=LICENSESERVER
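
Example: addactivationkey
The product key and activation key below are placeholders:
pmcli addactivationkey XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX YYYY-YYYY-YYYY-YYYY
pmcli reconfigure all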

addproductkey
addproductkey adds a product key to a license server.

Table 118—addproductkey
pmcli addproductkey <productkey> [licenseserver]
ARGUMENT DESCRIPTION
productkey - the product key for the product you want to enable.
The key is in the format
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX
OPTIONAL DESCRIPTION
licenseserver - name of system where the license server is hosted. Default is
to use all license servers.
--licenseserver=LICENSESERVER

Example: addproductkey
To add a product key enter:
pmcli addproductkey
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX
pmcli reconfigure all

listproductkeys
listproductkeys lists product keys registered with a license server.



Table 119—listproductkeys
pmcli listproductkeys [licenseserver]
OPTIONAL DESCRIPTION
licenseserver - name of system where the license server is hosted. Default is to
use all license servers.
--licenseserver=LICENSESERVER

removeactivationkey
removeactivationkey removes an activation key for a product key from a license server.

Table 120—removeactivationkey
pmcli removeactivationkey <productkey> <activationkey> [licenseserver]
ARGUMENT DESCRIPTION
productkey - the product key to remove
activationkey - the activation key to remove
OPTIONAL DESCRIPTION
licenseserver - name of system where the license server is hosted. Default
is to use all license servers.
--licenseserver=LICENSESERVER

removeproductkey
removeproductkey removes a product key from a license server.

Table 121—removeproductkey
pmcli removeproductkey <productkey> [licenseserver]
ARGUMENT DESCRIPTION
productkey - the product key to remove
OPTIONAL DESCRIPTION
licenseserver - name of system where the license server is hosted. Default
is to use all license servers.
--licenseserver=LICENSESERVER

Example: removing a product key
To remove a product key enter:
pmcli removeproductkey
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX
pmcli reconfigure all

showproductstatus
showproductstatus shows the status of the products registered with a license server.

Table 122—showproductstatus
pmcli showproductstatus [licenseserver]
OPTIONAL DESCRIPTION
licenseserver - name of system where the license server is hosted. Default is
to use all license servers.
--licenseserver=LICENSESERVER



Logging Commands

Logging commands inspect and manipulate the job queue.


• canceljob on page 242
• joblog on page 242
• lastinstallationjob on page 242
• listjobs on page 243
• listjobsfornode on page 243
• listsubjobs on page 243
• removejob on page 243

canceljob
canceljob cancels a job.

Table 123—canceljob
pmcli canceljob <jobid>
ARGUMENT DESCRIPTION
jobid identification of job
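
Example: canceljob
Cancel a job using a job ID reported by listjobs (the ID below is a placeholder):
pmcli canceljob 42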

joblog
joblog returns the log for a job.

Table 124—joblog
pmcli joblog <jobid>
ARGUMENT DESCRIPTION
jobid identification of job

lastinstallationjob
lastinstallationjob lists the status of the last installation job for system(s).



Table 125—lastinstallationjob
pmcli lastinstallationjob <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}

listjobs
listjobs returns a list of all jobs.

Table 126—listjobs
pmcli listjobs
ARGUMENT OPTIONAL
none none

listjobsfornode
listjobsfornode returns a list of jobs for the specified system(s).

Table 127—listjobsfornode
pmcli listjobsfornode <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}

listsubjobs
listsubjobs returns a list of subjobs for a job.

Table 128—listsubjobs
pmcli listsubjobs <jobid>
ARGUMENT DESCRIPTION
jobid identification of job

removejob
removejob removes job(s) from the queue.

Table 129—removejob
pmcli removejob <jobid>
ARGUMENT DESCRIPTION
jobid identification of job



LSF Commands

You can configure Platform Manager to recognise Platform LSF clusters using the LSF
commands. The script below will set up 'mylsfcluster'.

# Create Application system named 'mylsfcluster':


pmcli addlsfapplicationsystem mylsfcluster
# Add Master candidates to application system:
# This command will add two master candidates (sc1435-1 sc1435-2) to
# application system (mylsfcluster)
# with Flexlm server host name and port details. Last parameter defines
# the work directory, i.e. the fail-over directory.
pmcli addlsfmastercandidate 'sc1435-1 sc1435-2' mylsfcluster \
/home/license.dat FlexlmServerHost 1700 /usr/shared/lsfwork
# Add Slave only nodes to application system:
pmcli addlsfstatichost sc1435-3 mylsfcluster
# Add Dynamic hosts to application system if any:
pmcli addlsfdynamichost sc1435-4 mylsfcluster
# Ensure that work directory is on NFS and mounted to
# "/usr/shared/lsfwork" on each master candidate.
# Reconfigure all machines in the LSF Application System.
pmcli reconfigure sc1435-[1-4]

Note: Run the LSF commands 'lsid' for cluster information or 'bhosts' for details about the
hosts in the application system on each machine. A successful run confirms that the
application system is configured correctly. See the LSF documentation for more
information about these two commands.
• addlsfapplicationsystem on page 245
• addlsfdynamichost on page 245
• addlsfmastercandidate on page 245
• addlsfstatichost on page 246
• listlsfapplicationsystems on page 247
• listlsfdynamichosts on page 247
• listlsffeatures on page 247
• listlsfmastercandidates on page 249
• listlsfstatichosts on page 250



• removelsfapplicationsystem on page 250
• setlsffeatures on page 250
• setlsfhostclosed on page 251
• setlsfhostopen on page 251

addlsfapplicationsystem
Adds an LSF Application System

Table 130—addlsfapplicationsystem
pmcli addlsfapplicationsystem <lsfappname>
ARGUMENT DESCRIPTION
lsfappname - LSF Application System name

# Add an LSF Application System named “LSFCluster1”:
pmcli addlsfapplicationsystem LSFCluster1

addlsfdynamichost
adds LSF Dynamic Hosts to LSF Application systems

Table 131—addlsfdynamichost
pmcli addlsfdynamichost <systemnames> <lsfappname>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]} to be defined as dynamic
host(s)
lsfappname - LSF Application System name

Example: addlsfdynamichost
Add an LSF dynamic host named “sc1435-6” to the LSF Application System named
“LSFCluster1”:
pmcli addlsfdynamichost sc1435-6 LSFCluster1

addlsfmastercandidate
adds LSF master candidate to the LSF application system



Table 132—addlsfmastercandidate
pmcli addlsfmastercandidate <systemnames> <lsfappname> <licensefile>
[flexserveripaddress] [port=1700] [lsfworkdirectory]
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]} to be defined as master candidate(s)
lsfappname - LSF Application System name
licensefile - path for license file
For example: /home/platform/license.dat
OPTIONAL DESCRIPTION
flexserveripaddress - the Flex License Manager Server's IP address
--flexserveripaddress=FLEXSERVERIPADDRESS
port - TCP port used by the FLEXlm license server
--port=PORT
lsfworkdirectory - LSF work directory for fail-over files
--lsfworkdirectory=LSFWORKDIRECTORY

Example: addlsfmastercandidate
Add an LSF master candidate named “sc1435-6” to the LSF Application System named
“LSFCluster1”, with a license file located at “/home/scali/license.dat” and the
optional FLEXlm server address, port, and LSF work directory:
pmcli addlsfmastercandidate sc1435-6 LSFCluster1 /home/scali/license.dat
179.179.0.91 --port=1700 /opt/lsfhpc/work

addlsfstatichost
adds a static host to the LSF application system.

Table 133—addlsfstatichost
pmcli addlsfstatichost <systemnames> <lsfappname>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]} to be defined as static host(s)
lsfappname - LSF Application System name

getlsfhoststatus
gets LSF service status for the host.



Table 134—getlsfhoststatus
pmcli getlsfhoststatus <systemname> <lsfappname>
ARGUMENT DESCRIPTION
systemname - name of system
lsfappname - LSF Application System name

Example: getlsfhoststatus
Get the status of the LSF host named “sc1435-1” in the LSF application named “LSFApp”:
pmcli getlsfhoststatus sc1435-1 LSFApp

listlsfapplicationsystems
lists the existing LSF application systems in the data center.

Table 135—listlsfapplicationsystems
pmcli listlsfapplicationsystems
ARGUMENT OPTIONAL
none none

Example: listlsfapplicationsystems
List the existing LSF application systems in your data center:
pmcli listlsfapplicationsystems

listlsfdynamichosts
Lists the dynamic hosts in the LSF application system.

Table 136—listlsfdynamichosts
pmcli listlsfdynamichosts <lsfappname>
ARGUMENT DESCRIPTION
lsfappname - LSF Application System

Example: listlsfdynamichosts
List the existing LSF dynamic hosts in your LSF application system:
pmcli listlsfdynamichosts LSF_APP_1

listlsffeatures



lists the FLEXlm features used by the LSF master candidate. The list of available
features can be found only on the master candidate. There are four steps to getting
the list of features.
1 Use listlsfapplicationsystems to choose the proper application.
2 Use listlsfmastercandidates with the name of the LSF application you are interested in.
3 Use listhostedservices with the master candidate to get the service in question.
4 Finally, use listlsffeatures with the hosted service on the master candidate to get a
list of features in the service.



Table 137—listlsffeatures
pmcli listlsffeatures <lsfserviceid>
ARGUMENT DESCRIPTION
lsfserviceid - LSF Batch Service id of the mastercandidate

Example: listlsffeatures
pmcli listlsfapplicationsystems
--- List of all LSF Application Systems ---
scali:460bd743-a4ee-7f88-7c7f-2cb7ef51f867 : mylsfcluster
# pmcli listlsfmastercandidates mylsfcluster
--- List of master candidates in LSF Application System 'mylsfcluster' ---
scali:93da39f5-7c54-58cd-9fb6-18862f73e32a: db03b07vm2
scali:b01fae3c-11b7-2116-b252-5744eeeb5c78: db03b07vm3
# pmcli listhostedservices db03b07vm2
--- List hosted services for system db03b07vm2
scali:93da39f5-7c54-58cd-9fb6-18862f73e32a ---
scali:6a6a76ca-678a-093e-ea42-fad1fd2c5b7c Scali_LSFBatchService (eth0)
# pmcli listlsffeatures scali:6a6a76ca-678a-093e-ea42-fad1fd2c5b7c
--- List of the FLEXlm features used by the LSF master candidate
'scali:6a6a76ca-678a-093e-ea42-fad1fd2c5b7c' --- lsf_base lsf_manager
lsf_sched_fairshare lsf_sched_preemption lsf_sched_parallel
lsf_sched_resource_reservation lsf_sched_advance_reservation lsf_multicluster
lsf_make lsf_parallel lsf_client lsf_float_client platform_hpc lsf_reports lsf_sla
lsf_license_scheduler
lsf_dualcore_x86
lsf_mv_grid_filter
pmcli listlsffeatures uuid1:9fa87631-4eb6-3d9c-d44f-263b63e32a1e

listlsfmastercandidates
Lists the master candidates in the LSF application system

Table 138—listlsfmastercandidates
pmcli listlsfmastercandidates <lsfappsystem>
ARGUMENT DESCRIPTION
lsfappsystem - LSF Application System



Example: listlsfmastercandidates
List the existing LSF master candidates in your LSF cluster:
pmcli listlsfmastercandidates LSF_APP_1

listlsfstatichosts
Lists the slave-only hosts in the LSF application system.

Table 139—listlsfstatichosts
pmcli listlsfstatichosts <lsfappsystem>
ARGUMENT DESCRIPTION
lsfappsystem - LSF Application System

Example: listlsfstatichosts
List the existing LSF static hosts in your data center:
pmcli listlsfstatichosts LSF_APP_1

removelsfapplicationsystem
removes LSF Application System

Table 140—removelsfapplicationsystem
pmcli removelsfapplicationsystem <lsfappname>
ARGUMENT DESCRIPTION
lsfappname - LSF Application System name

Example: removelsfapplicationsystem
pmcli removelsfapplicationsystem LSFApp

setlsffeatures
sets the FLEXlm features used by the LSF master candidate



Table 141—setlsffeatures
pmcli setlsffeatures <lsfserviceid> [features..]
ARGUMENT DESCRIPTION
lsfserviceid - LSF Batch Service id
OPTION DESCRIPTION
features - list of FLEXlm features
--features=FEATURES

Example: setlsffeatures
pmcli setlsffeatures scali:9fa87631-4eb6-3d9c-d44f-263b63e32a1e
lsf_base lsf_manager lsf_sched_fairshare lsf_sched_parallel

setlsfhostclosed
sets LSF service down for the host

Table 142—setlsfhostclosed
pmcli setlsfhostclosed <systemnames> <lsfappsystem>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
lsfappsystem - LSF Application System

Example: setlsfhostclosed
pmcli setlsfhostclosed sc1435-1 LSFApp

setlsfhostopen
sets LSF service Open for the host

Table 143—setlsfhostopen
pmcli setlsfhostopen <systemnames> <lsfappsystem>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
lsfappsystem - LSF Application System



Example: setlsfhostopen
pmcli setlsfhostopen "sc1435-[1-2]" LSFApp



Network Commands

• addaliasedinterface on page 254


• addbondedinterface on page 254
• addethernetinterface on page 255
• addinfinibandinterface on page 257
• addmyrinetinterface on page 257
• addroutablesubnet on page 258
• addroute on page 258
• addsubnet on page 259
• clearmacaddress on page 259
• clearmtu on page 259
• createroutablesubnetgroup on page 259
• detachslaveinterface on page 260
• disablestaticarp on page 260
• disablenetworkboot on page 260
• enablenetworkboot on page 261
• enablestaticarp on page 261
• enslaveinterface on page 261
• exportethers on page 262
• importethers on page 262
• listinterfaces on page 262
• listroutablesubnetgroups on page 262
• listroutes on page 263
• liststaticarpmapping on page 263
• listsubnets on page 263
• listsystemdevices on page 263
• removealiasedinterface on page 264
• removebondedinterface on page 264
• removeethernetinterface on page 264
• removeinfinibandinterface on page 264
• removemyrinetinterface on page 265
• removeroutablesubnet on page 265



• removeroutablesubnetgroup on page 265
• removeroute on page 265
• removesubnet on page 266
• setinterfacename on page 266
• setmacaddress on page 266
• setmtu on page 266

addaliasedinterface
addaliasedinterface adds alias interface to systemname(s).

Table 144—addaliasedinterface
pmcli addaliasedinterface <systemnames> <interface> <aliasnumber> <ipspecs>
<ipnamespecs> [subnet]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
interface - name of interface (example: eth0)
aliasnumber - alias number (example: 1 => eth0:1)
ipspecs - ip address(es) [..]
ipnamespecs - ip name [..]
OPTIONAL DESCRIPTION
subnet - subnet for the ipaddress
--subnet=SUBNET

Example: addaliasedinterface
Add an alias for your interface like this:
pmcli addaliasedinterface RenderFarm01 eth0 1 192.168.0.96
The second address to eth0 would then be eth0:1

addbondedinterface
addbondedinterface creates a logical bonded interface. For more information
about kernel bonding:



http://www.kernel.org/pub/linux/kernel/people/marcelo/linux-2.4/Documentation/networking/bonding.txt

Table 145—addbondedinterface
pmcli addbondedinterface <systemnames> <bondinterfacename> <ipspecs>
[moduleargs]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
bondinterfacename - name of interface, for instance "bond0"
ipspecs - ip address(es) [..]
OPTIONAL DESCRIPTION
moduleargs - bonding driver options

Example: addbondedinterface
Add a bonded interface to a system named “dl140-3”:
pmcli addbondedinterface dl140-3 bond0 172.19.0.100 "mode=802.3ad
miimon=100"

Example: addbondedinterface
When you do not enter the bonding driver options
pmcli addbondedinterface dl140-3 bond0 172.19.0.100
the default values listed below will be used:
['mode=802.3ad', 'miimon=100', 'lacp_rate=slow']

addethernetinterface
addethernetinterface adds an Ethernet interface to systemname(s). See
“Adding an interface with pmcli” on page 383 for more information about this
command.

Table 146—addethernetinterface
pmcli addethernetinterface <systemnames> <nicname> <lanendpoint> [hostspecs]
[ipspecs] [subnet]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
nicname - name of nic (e.g. "nic1").
lanendpoint - name of the lanendpoint (e.g. "eth0")
OPTIONAL DESCRIPTION

hostspecs - (optional) hostname(s) [..]
--hostspecs=HOSTSPECS
ipspecs - (optional) ip address(es) [..]
--ipspecs=IPSPECS
subnet - subnet for the ipaddress
--subnet=SUBNET
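
Example: addethernetinterface
Add a second Ethernet interface to a system (the system name and IP address below are
placeholders):
pmcli addethernetinterface sc1435-1 nic2 eth1 --ipspecs=192.168.1.10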



addinfinibandinterface
addinfinibandinterface adds infiniband interface to systemname(s).

Table 147—addinfinibandinterface
pmcli addinfinibandinterface <systemnames> <nicname> <lanendpoint> [hostspecs]
[ipspecs] [subnet]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
nicname - name of nic (e.g. "nic1")
lanendpoint - name of the lanendpoint (e.g. "ib0")
OPTIONAL DESCRIPTION
hostspecs - (optional) hostname(s) [..]
--hostspecs=HOSTSPECS
ipspecs - (optional) ip address(es) [..]
--ipspecs=IPSPECS
subnet - subnet for the ipaddress
--subnet=SUBNET

addmyrinetinterface
addmyrinetinterface adds myrinet interface to systemname(s).

Table 148—addmyrinetinterface
pmcli addmyrinetinterface <systemnames> <nicname> <lanendpoint> [hostspecs]
[ipspecs] [monservername] [subnet]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
nicname - name of nic (e.g. "gm0")
lanendpoint - name of the lanendpoint (e.g. "eth0")
OPTIONAL DESCRIPTION
hostspecs - (optional) hostname(s) [..]
--hostspecs=HOSTSPECS

ipspecs - (optional) ip address(es) [..]
--ipspecs=IPSPECS
monservername - (optional) monserver name(s) [..]
--monservername=MONSERVERNAME
subnet - subnet for the ipaddress
--subnet=SUBNET

addroutablesubnet
addroutablesubnet adds subnets to a routable subnets collection. See
createroutablesubnetgroup on page 260 and listroutablesubnetgroups on
page 262.

Table 149—addroutablesubnet
pmcli addroutablesubnet <routablesubnets> [subnets..]
ARGUMENT DESCRIPTION
routablesubnets - name or UUID of routable subnets collection (from
listroutablesubnetgroup).
OPTIONAL DESCRIPTION
subnets - name or UUIDs of subnets to add to the collection

You should also see “createroutablesubnetgroup” on page 260

Note: Use listroutablesubnetgroups to get a list of subnets.

addroute
addroute adds a route for systemname(s).

Table 150—addroute
pmcli addroute <systemnames> <destinationaddress> <destinationmask>
<gatewayip>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
destinationaddress - destination address (Use 0.0.0.0 for default gw)
destinationmask - destination mask (Use 0.0.0.0 for default gw)
gatewayip - ip for gateway

Note: Use 0.0.0.0 for default gateway destination address. Use 0.0.0.0 for default
gateway destination mask.
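
Example: addroute
Set the default gateway for a system (the system name and gateway IP below are
placeholders):
pmcli addroute sc1435-1 0.0.0.0 0.0.0.0 192.168.0.1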



addsubnet
addsubnet adds a subnet to your network.

Table 151—addsubnet
pmcli addsubnet <name> <subnetnumber> <subnetmask>
ARGUMENT DESCRIPTION
name - the name of subnet
subnetnumber - number of subnet
subnetmask - mask of subnet
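
Example: addsubnet
Add a subnet (the name, network number, and mask below are placeholders):
pmcli addsubnet clusternet 192.168.2.0 255.255.255.0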

clearmacaddress
clearmacaddress clears macaddress for the system(s).

Table 152—clearmacaddress
pmcli clearmacaddress <systemnames> <interface>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
interface - the name of interface (e.g. "eth0")
--interfacename=eth1

clearmtu
clearmtu clears an MTU from the system. The default for MTU is 1500.

Table 153—clearmtu
pmcli clearmtu <systemnames> <interface>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
interface - the name(s) of interface (e.g. "eth0")
--interfacename=eth1

createroutablesubnetgroup
createroutablesubnetgroup creates a routable subnet group. Routable subnets
collections can be used to specify that routing exists between two subnets, so that
a service hosted on a node on one subnet can be accessed by nodes on the other



subnets. After creating the collection each subnet that should be a member must
be added with (“addroutablesubnet” on page 258).

Table 154—createroutablesubnetgroup
pmcli createroutablesubnetgroup <name> [description]
ARGUMENT DESCRIPTION
name - the name of the routable subnet group(s) {[..]}
OPTIONAL DESCRIPTION
description - DESCRIPTION of the routable subnet collection
--description=DESCRIPTION

detachslaveinterface
detachslaveinterface removes (detaches) an interface from a bonded device

Table 155—detachslaveinterface
pmcli detachslaveinterface <systemnames> <bondinterfacename> [interfacenames..]
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
bondinterfacename - name of interface, for instance 'bond0'
OPTIONAL DESCRIPTION
interfacenames - the name(s) of interface(s) (e.g. "eth0")
--interfacename=eth1

disablenetworkboot
disablenetworkboot disables network boot for system(s).

Table 156—disablenetworkboot
pmcli disablenetworkboot <systemnames> <interface>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
interface - the name(s) of interface (e.g. "eth0")
--interfacename=eth1

disablestaticarp
disablestaticarp disables static arp configuration on all or a specific ip-interfaces
on system(s).



Table 157—disablestaticarp
pmcli disablestaticarp <systemnames> [interfacename]
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
OPTIONAL DESCRIPTION
interfacename - the name of interface (e.g. "eth0")
--interfacename=eth1

enablenetworkboot
enablenetworkboot enables network boot for system(s)

Table 158—enablenetworkboot
pmcli enablenetworkboot <systemnames> <interface>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
interface - the name(s) of interface (e.g. "eth0")
--interfacename=eth1

enablestaticarp
enablestaticarp enables static ARP configuration on all or a specific ip-interfaces
on system(s)

Table 159—enablestaticarp
pmcli enablestaticarp <systemnames> [interfacename]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
OPTIONAL DESCRIPTION
interfacename - the name(s) of interface (e.g. "eth0")
--interfacename=eth1

enslaveinterface
enslaveinterface adds an enslaved interface to a bonded device.



Table 160—enslaveinterface
pmcli enslaveinterface <systemnames> <bondinterfacename> [interfacenames..]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
bondinterfacename - the name of interface - for instance 'bond0'
OPTIONAL DESCRIPTION
interfacenames - the name(s) of interface (e.g. "eth0")
--interfacename=eth1
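
Example: enslaveinterface
Enslave two physical interfaces under an existing bonded device (the names below are
placeholders; see also "addbondedinterface" on page 254):
pmcli enslaveinterface dl140-3 bond0 eth0 eth1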

exportethers
exportethers exports ethers; writes to stdout.

Table 161—exportethers
pmcli exportethers
ARGUMENT OPTIONAL
none none

importethers
importethers imports ethers; reads from stdin.

Table 162—importethers
pmcli importethers
ARGUMENT OPTIONAL
none none

listinterfaces
listinterfaces returns a list of interfaces for the system(s).

Table 163—listinterfaces
pmcli listinterfaces <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

listroutablesubnetgroups
listroutablesubnetgroups returns a list of routable subnet groups
See also “createroutablesubnetgroup” on page 259



Table 164—listroutablesubnetgroups
pmcli listroutablesubnetgroups
ARGUMENT OPTIONAL
none none

listroutes
listroutes returns a list of routes for systemname(s).

Table 165—listroutes
pmcli listroutes <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name(s) of system(s) {[..]}

liststaticarpmapping
liststaticarpmapping returns a list of static arp mapping as represented in the
Platform CIM database.

Table 166—liststaticarpmapping
pmcli liststaticarpmapping <nodename> [interfacename]
ARGUMENT DESCRIPTION
nodename - the name of the node - for instance 'n001'
OPTIONAL DESCRIPTION
interfacename - the name(s) of interface (e.g. "eth0")
--interfacename=eth1

listsubnets
listsubnets returns a list of all subnets.

Table 167—listsubnets
pmcli listsubnets
ARGUMENT OPTIONAL
none none

listsystemdevices
listsystemdevices returns a list of system devices.



Table 168—listsystemdevices
pmcli listsystemdevices <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name(s) of system(s) {[..]}

removealiasedinterface
removealiasedinterface removes aliased interface from systemname(s).

Table 169—removealiasedinterface
pmcli removealiasedinterface <systemnames> <interface> <aliasnumber>
ARGUMENT DESCRIPTION
systemnames - the name(s) of system(s) {[..]}
interface - name of interface (e.g. eth0)
aliasnumber - alias number (e.g. 1 => eth0:1)

removebondedinterface
removebondedinterface removes a logical "bonded" device.

Table 170—removebondedinterface
pmcli removebondedinterface <systemnames> <bondinterfacename>
ARGUMENT DESCRIPTION
systemnames - the name(s) of system(s) {[..]}
bondinterfacename - name of interface (e.g. bond0)

removeethernetinterface
removeethernetinterface removes an ethernet interface from a system.

Table 171—removeethernetinterface
pmcli removeethernetinterface <systemname> <nicname>
ARGUMENT DESCRIPTION
systemname - the name of the system
nicname - the name of the nic (e.g. "nic1")

removeinfinibandinterface
removeinfinibandinterface removes infiniband interface from a system(s).



Table 172—removeinfinibandinterface
pmcli removeinfinibandinterface <systemname> <nicname>
ARGUMENT DESCRIPTION
systemname - the name of system
nicname - the name of the nic (e.g. "ib0")

removemyrinetinterface
removemyrinetinterface removes a Myrinet interface from systemname(s).

Table 173—removemyrinetinterface
pmcli removemyrinetinterface <systemname> <nicname>
ARGUMENT DESCRIPTION
systemname - the name of system(s) {[..]}
nicname - the name of the nic (e.g. "gm0")

removeroutablesubnet
removeroutablesubnet removes a subnet from a routable subnets collection.
See “createroutablesubnetgroup” on page 260

Table 174—removeroutablesubnet
pmcli removeroutablesubnet <routablesubnets> [subnets..]
ARGUMENT DESCRIPTION
routablesubnets - name or UUID of routable subnet group. Use
listroutablesubnetgroups for these values.
OPTIONAL DESCRIPTION
subnets - name(s) or UUID(s) of subnets to remove from the
group

removeroutablesubnetgroup
removeroutablesubnetgroup removes a routable subnets collection. See
“createroutablesubnetgroup” on page 260.

Table 175—removeroutablesubnetgroup
pmcli removeroutablesubnetgroup <routablesubnets>
ARGUMENT DESCRIPTION
routablesubnets - name or UUID of routable subnet group. Use
listroutablesubnetgroups for these values.

removeroute



removeroute removes a route for system(s)

Table 176—removeroute
pmcli removeroute <systemnames> <destinationaddress>
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) {[..]}
destinationaddress - destination address for the route to be removed

removesubnet
removesubnet removes a subnet.

Table 177—removesubnet
pmcli removesubnet <name>
ARGUMENT DESCRIPTION
name - the name of the subnet

setinterfacename
setinterfacename changes the hostname mapped to the IP address on this interface
for the system(s). One of the interfaces on a system should have
the same name as the system itself. (See “renamesystem” on page 272.)

Table 178—setinterfacename
pmcli setinterfacename <systemnames> <interface> <ifnames>
ARGUMENT DESCRIPTION
systemnames - the name(s) of system(s) {[..]}
interface - the name of interface
ifnames - new hostname for interface(s) [..]

setmacaddress
setmacaddress sets macaddress for the system.

Table 179—setmacaddress
pmcli setmacaddress <systemname> <interface> <macaddress>
ARGUMENT DESCRIPTION
systemname - the name of system
interface - the name of interface
macaddress - macaddress given as "AA:BB:CC:DD:EE:FF"
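
Example: setmacaddress
The system name and MAC address below are placeholders:
pmcli setmacaddress sc1435-1 eth0 AA:BB:CC:DD:EE:FF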

setmtu



setmtu sets an MTU for system. The default for MTU is 1500.

Table 180—setmtu
pmcli setmtu <systemname> <interface> <mtu>
ARGUMENT DESCRIPTION
systemname - the name of system
interface - the name of interface
mtu - mtu as integer

Example: setmtu
Set the MTU to 1500 for eth0 on a system called RenderFarm00:
pmcli setmtu RenderFarm00 eth0 1500
Remember: the default is 1500.



Node Commands

The following commands affect the provisioning and configuration of nodes:


• changenodebrand on page 268
• createnode on page 268
• disablemanagementofservers on page 269
• discovernode on page 270
• discovernodemac on page 270
• enablemanagementofservers on page 270
• getguid on page 271
• listaccounts on page 271
• listmanagementofservers on page 271
• listnodes on page 272
• removesystem on page 272
• renamesystem on page 272
• setguid on page 273
• setinstalledstate on page 273
• setinstallserver on page 273
• setrootpassword on page 274
• showprovisioningdata on page 275

changenodebrand
changenodebrand changes the product brand of a node. Use listproducts on
page 287 for available options.

Table 181—changenodebrand
pmcli changenodebrand <systemnames> <productid>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
productid - productid for the new server brand. Use "listproducts Servers"
for available options.

createnode
See Creating a node with pmcli on page 376 for details on using createnode.



Table 182—createnode
pmcli createnode <systemnames> <rootpassword> <hwproduct> <ipspecs>
<defaultgateway> <swproduct> [dnsdomain=DNSDOMAIN]
[dnsservers=DNSSERVERS] [nicname=nic1] [laninterface=eth0] [smgatewayname]
[nisdomain] [nisservers] [subnet]
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) [..]
rootpassword - password for root; can be given unencrypted or md5-encrypted
hwproduct - hardware product name. Run "pmcli listproducts 11" for a list of
valid products.
ipspecs - ip address(es) [..]
defaultgateway - Default gateway for the system(s)
swproduct - software product name; specify an image name or distribution. Run
"pmcli listproducts 7" for a list of distributions, or "pmcli
listimages" for a list of available images.
OPTIONAL DESCRIPTION
dnsdomain - the name of DNS domain (default is no DNS)
--dnsdomain=DNSDOMAIN
dnsservers - ip of DNS servers (space separated)

--dnsservers=DNSSERVERS
nicname - the name of nic; default "nic1" --nicname=NICNAME
laninterface - the name of interface; default "eth0"
--laninterface=LANINTERFACE
smgatewayname - the name of Platform Manager gateway; default Cimserver
--smgatewayname=SMGATEWAYNAME
nisdomain - the name of NIS domain; Default is not to configure NIS
--nisdomain=NISDOMAIN
nisservers - the name of NIS servers (space separated)
--nisservers=NISSERVERS
subnet - subnet for the ipaddress
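
Example: createnode
A minimal sketch; the node name, password, addresses, and product names below are
placeholders (run "pmcli listproducts 11" and "pmcli listproducts 7" for valid
hardware and software product names):
pmcli createnode n001 MyRootPw "<hwproduct>" 192.168.0.101 192.168.0.1 "<swproduct>"
See Creating a node with pmcli on page 376 for the complete procedure.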

disablemanagementofservers
disablemanagementofservers disables management of the system(s) by
Platform Manager

Table 183—disablemanagementofservers
pmcli disablemanagementofservers <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name(s) of the system(s) {[..]}



discovernode
discovernode will discover the running configuration of existing system(s). The
system needs to have ssh or rsh enabled, and root login without passwords must
be enabled from this system, or the SSH_PASSWORD environment variable must be
set, to discover the system. Once a node is discovered, you need to run
enablemanagementofservers followed by installmanagementsoftware if you wish
to manage the discovered system.

• See “enablemanagementofservers”
• See “installmanagementsoftware” on page 206.

Table 184—discovernode
pmcli discovernode <ipspecs>
ARGUMENT DESCRIPTION
ipspecs - ip address(es) [..]

discovernodemac
discovernodemac runs MAC discovery for system(s). The systems will be power
cycled one by one to learn their MAC addresses.

Table 185—discovernodemac
pmcli discovernodemac <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]} for which to discover MAC
addresses

enablemanagementofservers
enablemanagementofservers enables management of the system(s) by
Platform Manager. This will add Platform Manager software and services to a system
in the configuration database. It's primarily used for adding Platform Manager to
newly discovered systems. This command only affects the configuration database.



Use “installmanagementsoftware” after “enablemanagementofservers” for
deploying the software without reinstalling the operating system.

Table 186—enablemanagementofservers
pmcli enablemanagementofservers <systemnames> [servername]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
OPTIONAL DESCRIPTION
servername - the name of server; default Platform Manager frontend
--servername=SERVERNAME

getguid
getguid returns a unique GUID identifier for system(s).

Table 187—getguid
pmcli getguid <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

getkernelbootoptions
getkernelbootoptions lists the extra kernel boot options for system(s).

Table 188—getkernelbootoptions
pmcli getkernelbootoptions <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}

listaccounts
listaccounts lists accounts of system

Table 189—listaccounts
pmcli listaccounts <system>
ARGUMENT DESCRIPTION
system - system name

listmanagementofservers
listmanagementofservers shows the management status of the system(s) (i.e., whether
the system is managed by Platform Manager or not).



Table 190—listmanagementofservers
pmcli listmanagementofservers <system>
ARGUMENT DESCRIPTION
system - system name

listnodes
listnodes returns a list of all available nodes, both performance and "HA" types.

Table 191—listnodes
pmcli listnodes
ARGUMENT OPTIONAL
none none

removesystem
removesystem removes the system from the network.

Table 192—removesystem
pmcli removesystem <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) [..]

renamesystem
renamesystem changes the hostname for the system(s).

Table 193—renamesystem
pmcli renamesystem <systemnames> <newsystemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
newsystemnames - the new name(s) [..]

Note: renamesystem will change the hostname of the system but not the names
that map to the IP-addresses assigned to the system (See
“setinterfacename” on page 266).
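
Example: renamesystem
Rename a system (both names below are placeholders):
pmcli renamesystem sc1435-1 compute001
If the interface hostname should change as well, follow up with setinterfacename (see
the note above).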



setguid
setguid sets a unique GUID identifier for the system.

Table 194—setguid
pmcli setguid <systemname> [guid]
ARGUMENT DESCRIPTION
systemname - the name of system
OPTIONAL DESCRIPTION
guid - GUID for this system. Default is to clear the GUID.
--guid=GUID

setinstalledstate
setinstalledstate overrides the installation status of system(s) to set them as
completed.

Table 195—setinstalledstate
pmcli setinstalledstate <systemnames> [isinstalled=True]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
OPTIONAL DESCRIPTION
isinstalled - Installation state True or False. Defaults To True.
--isinstalled=ISINSTALLED

setinstallserver
setinstallserver sets install server for system.

Table 196—setinstallserver
pmcli setinstallserver <systemnames> <installserver>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
installserver - the name of install server

setkernelbootoptions
Use setkernelbootoptions to enter the extra kernel boot options for system(s).



Table 197—setkernelbootoptions
pmcli setkernelbootoptions <systemnames> [options]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
options - space-separated list of options. Default is to clear the options.
--options=OPTIONS
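
Example: setkernelbootoptions
Add serial console boot options (the system name and option string below are
placeholders):
pmcli setkernelbootoptions sc1435-1 --options="console=ttyS0,115200"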

setrootpassword
setrootpassword sets root password for the system(s).



Table 198—setrootpassword
pmcli setrootpassword <systemnames> <password>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
password - the password; can be given unencrypted or md5-encrypted
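
Example: setrootpassword
The system range and password below are placeholders:
pmcli setrootpassword "sc1435-[1-4]" MyNewRootPw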

showprovisioningdata
showprovisioningdata shows provisioning setting data for system(s)

Table 199—showprovisioningdata
pmcli showprovisioningdata <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}



PBS Options Commands

• addpbsprolicenseserver on page 276


• addpbspromom on page 276
• addpbsproscheduler on page 276
• addpbsproserver on page 277
• createpbsnodefile on page 277
• removepbsprolicenseserver on page 278
• removepbspromom on page 278
• removepbsproscheduler on page 278
• removepbsproserver on page 278
• setpbslicense on page 279
• setpbslicensefile on page 279
• setpbsproclientsoffline on page 279

• setpbsproclientsonline on page 279

addpbsprolicenseserver
addpbsprolicenseserver adds a PBS Pro FLEXlm license server to system(s).
Supported for PBSPro version 9.0 and later.

Table 200—addpbsprolicenseserver
pmcli addpbsprolicenseserver <systemnames> <licensefile>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
licensefile - full path to FLEXlm license file

addpbspromom
addpbspromom adds a PBS Pro MOM to the system(s).

Table 201—addpbspromom
pmcli addpbspromom <systemnames> <servername>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
servername - the name of the server

addpbsproscheduler



addpbsproscheduler adds a PBS Pro scheduler to system(s).

Table 202—addpbsproscheduler
pmcli addpbsproscheduler <systemnames> <servername>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
servername - the name of the server

addpbsproserver
addpbsproserver adds a PBSPro server to the system(s)

Table 203—addpbsproserver
pmcli addpbsproserver <systemnames> [--licensekey=LICENSEKEY]
[--licenseserver=LICENSESERVER]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
licensekey - Enter the License key here. --licensekey=LICENSEKEY
licenseserver - name of system hosting the PBS license key (for PBS Pro
version 9 and newer)
--licenseserver=LICENSESERVER

createpbsnodefile
createpbsnodefile creates a Qmgr file that defines all nodes in cluster. This
should only be necessary for unmanaged PBS servers. createpbsnodefile will
only list compute nodes that are PBS clients (MOMs)

Note: The Qmgr file can be used to add nodes to the PBS server with the command
'qmgr -c < nodefile.qmgr'



Table 204—createpbsnodefile
pmcli createpbsnodefile <clustername> [setfree=SETFREE]
ARGUMENT DESCRIPTION
clustername - the name of cluster
OPTIONAL DESCRIPTION
setfree - (optional) set nodes up and available for the PBS batch
system.
--setfree=SETFREE

removepbsprolicenseserver
removepbsprolicenseserver removes the PBS Pro FLEXlm license server for
system(s). Supported for PBSPro version 9.0 and later.

Table 205—removepbsprolicenseserver
pmcli removepbsprolicenseserver <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

removepbspromom
removepbspromom removes PBS Pro Mom from system(s).

Table 206—removepbspromom
pmcli removepbspromom <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

removepbsproscheduler
removepbsproscheduler removes PBS Pro scheduler for system(s).

Table 207—removepbsproscheduler
pmcli removepbsproscheduler <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

removepbsproserver
removepbsproserver removes the PBS Pro server from system(s).
Supported for PBSPro version 9.0 and newer.



Table 208—removepbsproserver
pmcli removepbsproserver <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

setpbslicense
setpbslicense sets the PBS Pro license file on a system hosting a PBS Pro server.
Supported for versions earlier than PBSPro 9.0.

Table 209—setpbslicense
pmcli setpbslicense <servername> <licensekey>
ARGUMENT DESCRIPTION
servername - the name of system running the PBS server
licensekey - PBS license key

setpbslicensefile
setpbslicensefile sets the PBS Pro FLEXlm license file on a system hosting a PBS
Pro license server. Supported for PBSPro version 9.0 and later.

Table 210—setpbslicensefile
pmcli setpbslicensefile <servername> <licensefile>
ARGUMENT DESCRIPTION
servername - the name(s) of the server running PBS
licensefile - full path to FLEXlm license file

setpbsproclientsoffline
setpbsproclientsoffline marks the listed nodes as OFFLINE even if they are currently in
use. The command will only communicate with the primary server in an HA setup.

Table 211—setpbsproclientsoffline
pmcli setpbsproclientsoffline <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

setpbsproclientsonline
setpbsproclientsonline clears OFFLINE or DOWN from the listed nodes.
The listed nodes are "freed" for allocation to jobs. The command will only communicate
with the primary server in an HA setup.



Table 212—setpbsproclientsonline
pmcli setpbsproclientsonline <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}



Product and Software Options Commands

• addproductconflicts on page 281


• addproductprovides on page 282
• addproductrequires on page 282
• addsoftware on page 282
• addsoftwareoftype on page 284
• createdependencycapability on page 284
• createlocalproduct on page 284
• createupdatechannel on page 284
• listchannels on page 286
• listdependencycapabilities on page 286
• listfeatures on page 286
• listinstalledsoftware on page 286
• listproductdependencies on page 286
• listproducts on page 287
• listproducttypes on page 287
• listretrieveelements on page 288
• listretrievemethods on page 288
• listsubscribedchannels on page 288
• loadsoftware on page 288
• removedependencycapability on page 289
• removeproductconflicts on page 290
• removeproductprovides on page 290
• removeproductrequires on page 290
• removesoftware on page 290
• removeupdatechannel on page 291
• subscribechannel on page 291
• unsubscribechannel on page 291
• upgradesoftware on page 291

• upgradesoftwareoftype on page 292

addproductconflicts



addproductconflicts adds a conflict capability to a (hw or sw) product to make
products incompatible.

Table 213—addproductconflicts
pmcli addproductconflicts <productid> <capabilityspec>
ARGUMENT DESCRIPTION
productid - product identification of a hw or sw product
capabilityspec - name or UUID of dependency capability

addproductprovides
addproductprovides adds a provides capability to a (hw or sw) product so that
other products may depend on it

Table 214—addproductprovides
pmcli addproductprovides <productid> <capabilityspec>
ARGUMENT DESCRIPTION
productid - software product identification
capabilityspec - name or UUID of dependency capability

addproductrequires
addproductrequires adds a require dependency to a (hw or sw) product.

Table 215—addproductrequires
pmcli addproductrequires <productid> <capabilityspec>
ARGUMENT DESCRIPTION
productid - software product identification
capabilityspec - name or UUID of dependency capability

addsoftware
addsoftware adds a software product to the system(s).

Table 216—addsoftware
pmcli addsoftware <systemnames> <productid> [force] [featurenames..]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
productid - software product identification

OPTIONAL DESCRIPTION
force - ignore product dependencies
featurenames - list of features



addsoftwareoftype
addsoftwareoftype adds software products of a specific type to the system(s).

Table 217—addsoftwareoftype
pmcli addsoftwareoftype <systemnames> <productid> [force] [featurenames..]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
productid - software product identification
OPTIONAL DESCRIPTION
force - ignore product dependencies
featurenames - list of features

createdependencycapability
createdependencycapability adds a dependency capability. The capability
may be provided or required by sw and/or hw products.

Table 218—createdependencycapability
pmcli createdependencycapability <capabilityname> <description>
ARGUMENT DESCRIPTION
capabilityname - human readable name of the new capability
description - semantic description of capability

createlocalproduct
createlocalproduct creates and loads local software to the repository.

Table 219—createlocalproduct
pmcli createlocalproduct <productname> <filenames>
ARGUMENT DESCRIPTION
productname - product identification
filenames - file names. Lists of files should be space separated in quotes.

Note: Wildcards can be used, e.g.:
"/home/software/testproduct/test1.rpm
/home/software/testproduct/producttest?.rpm"

createupdatechannel
Create an update channel for a product. Update channels are used to distribute
updates for an existing product.



You can have multiple update channels per product. A typical use would be to have
a "testing" and a "stable" channel for each operating system, with the testbed nodes
subscribed to the "testing"-channel and the production nodes subscribed to the
"stable"-channel. Then updates can easily be tested on a subset of the datacenter
before they are approved for production and moved to the "stable"-channel.

Table 220—createupdatechannel
pmcli createupdatechannel <productid> <name> [description]
ARGUMENT DESCRIPTION
productid - the product to create the channel for. The productid is stored for use in
updating the software. You can find the productid by using “listproducts”.
name - the name of the new channel.
OPTIONAL DESCRIPTION
description - an optional description of the channel.
--description=DESCRIPTION

Note: Best practice for createupdatechannel: after creating the channel, use
/opt/scali/libexec/scarepository.py --addupdates
to populate it with packages. See “Creating and Deploying an update
Channel with pmcli” on page 382 or "scarepository.py --help" for
details. Finally, subscribe nodes to the new channel with subscribechannel (see
“subscribechannel” on page 291).
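
Example: createupdatechannel
The channel name below is a placeholder; use listproducts to find the productid for your
operating system:
pmcli createupdatechannel <productid> stable --description="Production updates"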



listchannels
listchannels returns a list of software channels

Table 221—listchannels
pmcli listchannels
ARGUMENT OPTIONAL
none none

listdependencycapabilities
listdependencycapabilities lists all the dependency capabilities. The capabilities
may be provided or required by software and/or hardware products.

Table 222—listdependencycapabilities

pmcli listdependencycapabilities [verbose=False]


OPTIONAL DESCRIPTION
verbose - lists more info (off by default)
--verbose=VERBOSE

listfeatures
listfeatures returns a list of features for a product.

Table 223—listfeatures
pmcli listfeatures <productid>
ARGUMENT DESCRIPTION
productid - software product identification

listinstalledsoftware
listinstalledsoftware returns a list of software installed on system(s).

Table 224—listinstalledsoftware
pmcli listinstalledsoftware <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

listproductdependencies
listproductdependencies lists capabilities required by a product



Table 225—listproductdependencies
pmcli listproductdependencies <productid> [verbose=False]
ARGUMENT DESCRIPTION
productid - product identification of a (hw or sw) product
OPTIONAL DESCRIPTION
verbose - lists more info (off by default)
--verbose=VERBOSE

listproducts
listproducts returns a list of available products for a product type.

Table 226—listproducts
pmcli listproducts <producttype>
ARGUMENT DESCRIPTION
producttype - product type; Both numerical and alphabetical product types
are accepted.
For example: 7 (distributions) or 11 (servers).

Example: listproducts with producttype 7


Use this to generate an ordered list of distributions.
pmcli listproducts 7

Example: listproducts with producttype 11


Run this for a list of valid products.
pmcli listproducts 11

listproducttypes
listproducttypes returns a list of product types.



Table 227—listproducttypes
pmcli listproducttypes [sortorder]
OPTIONAL DESCRIPTION
sortorder - sortorder
--sortorder=numerical
--sortorder=alphabetical

listretrieveelements
listretrieveelements lists retrieve elements for an OS product

Table 228—listretrieveelements
pmcli listretrieveelements <productid> <retmethod>
ARGUMENT DESCRIPTION
productid - product identification
retmethod - retrieve method

listretrievemethods
listretrievemethods lists retrieve methods for an OS product

Table 229—listretrievemethods
pmcli listretrievemethods <productid>
ARGUMENT DESCRIPTION
productid - product identification

listsubscribedchannels
listsubscribedchannels returns a list of subscribed channels for system(s).

Table 230—listsubscribedchannels
pmcli listsubscribedchannels <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

loadsoftware
loadsoftware loads software to the repository. You can use wildcards, e.g.
"/home/os/iso/SLES-10CD1.iso /home/os/iso/SLES-10-x86*"



Table 231—loadsoftware
pmcli loadsoftware <productid> <retmethod> <filenames> [force=False]
[verbose=False]
ARGUMENT DESCRIPTION
productid - product identification
retmethod - the retrieve method for the product's elements (see listretrievemethods)
filenames - OS files (ISO/DVD ISO). Multiple files should be space-separated, within
quotes.
OPTIONAL DESCRIPTION
force - upload software without verifying checksum. [force=False]
verbose - verbose output
[verbose=False]

removedependencycapability
removedependencycapability removes a “dependency capability” from a
product.

Note: If a capability is removed, products currently requiring/providing it will no longer
do so.



Table 232—removedependencycapability
pmcli removedependencycapability <capabilityspec>
ARGUMENT DESCRIPTION
capabilityspec - name or UUID of dependency capability

removeproductconflicts
removeproductconflicts removes a “conflicts capability” from a hardware or
software product

Table 233—removeproductconflicts
pmcli removeproductconflicts <productid> <capabilityspec>
ARGUMENT DESCRIPTION
productid - product identification of a hardware or software product
capabilityspec - name or UUID of dependency capability

removeproductprovides
removeproductprovides removes a “provides capability” from a hardware or
software product

Table 234—removeproductprovides
pmcli removeproductprovides <productid> <capabilityspec>
ARGUMENT DESCRIPTION
productid - product identification of a hardware or software product
capabilityspec - name or UUID of dependency capability

removeproductrequires
removeproductrequires removes a “requires capability” from a hardware or
software product.

Table 235—removeproductrequires
pmcli removeproductrequires <productid> <capabilityspec>
ARGUMENT DESCRIPTION
productid - product identification of a hardware or software product
capabilityspec - name or UUID of dependency capability

removesoftware
removesoftware removes specified software product from system(s).

Table 236—removesoftware
pmcli removesoftware <systemnames> <productid> [featurenames..]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
productid - software product identification
featurenames - list of features

removeupdatechannel
removeupdatechannel removes an update channel

Table 237—removeupdatechannel
pmcli removeupdatechannel <name>
ARGUMENT DESCRIPTION
name - the name of the channel to remove

subscribechannel
subscribechannel subscribes system(s) to a channel.

Table 238—subscribechannel
pmcli subscribechannel <systemnames> <channelname>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
channelname - the name of a channel
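For example, assuming an update channel named sles10-updates has already been created, you might subscribe a node named n001 (an illustrative name) to it with:
pmcli subscribechannel n001 sles10-updates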

unsubscribechannel
unsubscribechannel unsubscribes system(s) from channel

Table 239—unsubscribechannel
pmcli unsubscribechannel <systemnames> <channelname>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
channelname - the name of a channel

upgradesoftware
See “Up-grading with pmcli” on page 370 for details on using
upgradesoftware.

Table 240—upgradesoftware
pmcli upgradesoftware <systemnames> <newproductid>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
newproductid - software product identification

upgradesoftwareoftype
upgradesoftwareoftype upgrades software products of a specific type for
system(s).
See “Up-grading with pmcli” on page 370 for details on upgrading with the CLI.

Table 241—upgradesoftwareoftype
pmcli upgradesoftwareoftype <systemnames> <newproductid>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
newproductid - software product identification

Services Options Commands

Services Options commands configure the available services on your nodes.


• addaccountingcollectorservice on page 294
• addaccountingservice on page 294
• addbatchsystemaccountingservice on page 294
• addconsolemanagementcontroller on page 294
• adddhcpclientservice on page 295
• adddnsclientservice on page 295
• addjbossasservice on page 296
• addldapclientservice on page 296
• addmanagementengineservice on page 296
• addmonitoringhistoryservice on page 296
• addmonitoringinbandservice on page 297
• addmonitoringoutofbandservice on page 297
• addmonitoringrelayservice on page 297
• addnatservice on page 298
• addntpservice on page 298
• addpowermanagementcontroller on page 300
• addremotesyslogclientservice on page 300
• addscarepositorycacheservice on page 300
• addsmgatewayservices on page 301
• addsshcredentialmanagementservice on page 301
• bindservicetointerface on page 301
• disablescancesubsystem on page 301
• enablescancesubsystem on page 302
• listdnsclientservice on page 302
• listdisabledscancesubsystems on page 302
• listhostedservices on page 302
• listnisclientservice on page 303
• listscancesubsystems on page 303
• removeservice on page 303
• removesmgatewayservices on page 304
• unbindservicefrominterface on page 304

addaccountingcollectorservice
addaccountingcollectorservice adds an accounting collection service. The
collector service can receive accounting data from accounting services, and produce
accounting reports.

Table 242—addaccountingcollectorservice

pmcli addaccountingcollectorservice <systemnames>


ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]} where you want scaacct to collect data

addaccountingservice
addaccountingservice enables a BSD accounting service for the named
systems and servers. The service will perform BSD resource accounting and
transfer the data to the given accounting collector server for generating reports.

Table 243—addaccountingservice

pmcli addaccountingservice <systemnames> [servername]


ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]} where you want scaacct to collect data.
OPTIONAL DESCRIPTION
servername - the name of the server where you want scaacct to
run
--servername=SERVERNAME

addbatchsystemaccountingservice
addbatchsystemaccountingservice adds an accounting collector service to
system(s). The collector service can receive accounting data from accounting
services, and produce accounting reports.

Table 244—addbatchsystemaccountingservice
pmcli addbatchsystemaccountingservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}

addconsolemanagementcontroller
addconsolemanagementcontroller adds console management controller

Table 245—addconsolemanagementcontroller
pmcli addconsolemanagementcontroller <systemnames> [ondemand=False]
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
OPTIONAL DESCRIPTION
ondemand - only connect to backend services on demand. The default is to always be
connected to backend services.
--ondemand=ONDEMAND

adddnsclientservice
adddnsclientservice adds a DNS client service to system(s).

Table 246—adddnsclientservice
pmcli adddnsclientservice <systemnames> <searchdomains> <servers>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
searchdomains - DNS search domains (Space separated)
servers - DNS servers (Space separated)
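For example, to add a DNS client service to a node named n001 with an illustrative search domain and server addresses:
pmcli adddnsclientservice n001 example.com "10.0.0.2 10.0.0.3"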

adddhcpclientservice
adddhcpclientservice adds DHCP client service to your system(s) for assigning
IP addresses automatically. Platform Manager will add a host entry to DHCP servers
on the subnets with interfaces bound to the service.

Table 247—adddhcpclientservice
pmcli adddhcpclientservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

adddhcpserverservice
adddhcpserverservice adds DHCP server service to the system(s)

Table 248—adddhcpserverservice
pmcli adddhcpserverservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}

addjbossasservice
addjbossasservice adds JBoss AS service to system(s).

Table 249—addjbossasservice
pmcli addjbossasservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

addldapclientservice
addldapclientservice adds an LDAP client service to system(s).

Table 250—addldapclientservice
pmcli addldapclientservice <systemnames> <basedn> <servers>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
basedn - LDAP base DN, for example dc=example,dc=com
servers - LDAP servers (Space separated)

Example: addldapclientservice
Add an LDAP client service to a system named “sc1435-4”:
pmcli addldapclientservice sc1435-4 dc=example,dc=com "server1 server2"

addmanagementengineservice
addmanagementengineservice adds management engine service to the
system(s)

Table 251—addmanagementengineservice
pmcli addmanagementengineservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}

addmonitoringhistoryservice
addmonitoringhistoryservice adds a monitoring history service to the
system(s).

Table 252—addmonitoringhistoryservice
pmcli addmonitoringhistoryservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}

addmonitoringinbandservice
addmonitoringinbandservice adds a monitoring inband server to the
system(s).

Table 253—addmonitoringinbandservice
pmcli addmonitoringinbandservice <systemnames> [servername]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
OPTIONAL DESCRIPTION
servername - the name of relay server for Monitoring inband server. Default is to
find one automatically.

addmonitoringoutofbandservice
addmonitoringoutofbandservice adds a monitoring out-of-band server
service to the system(s).

Table 254—addmonitoringoutofbandservice
pmcli addmonitoringoutofbandservice <systemnames> <servername>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
servername - the name of relayserver for monitoring outofband server

addmonitoringrelayservice
addmonitoringrelayservice adds a monitoring relay server service to the
system(s).

Table 255—addmonitoringrelayservice
pmcli addmonitoringrelayservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}

addnisclientservice
addnisclientservice adds a NIS client service to system(s).

Table 256—addnisclientservice
pmcli addnisclientservice <systemnames> <domain> [servers]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
domain - NIS domains (Space separated)
OPTIONAL DESCRIPTION
servers - NIS servers (Space separated) If no servers are specified broadcast
is used. Default is broadcast.
--servers=SERVERS

addnatservice
addnatservice adds a NAT service to the system(s).

Table 257—addnatservice
pmcli addnatservice <systemnames> [interface]
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
OPTIONAL DESCRIPTION
interface - the name of the internal interface to NAT.
--interface=INTERFACE

addntpservice
addntpservice adds an NTP service to the system(s).

Table 258—addntpservice
pmcli addntpservice <systemnames> <servers> [broadcastclient=False]
[broadcastaddresses] [peeraddresses]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
servers - The current system will synchronize with specific NTP
server(s) (space separated). You may enter the name of the
server or enter:
--servers=SERVERS
OPTIONAL DESCRIPTION

broadcastclient - how the service synchronizes against other NTP services
--broadcastclient=BROADCASTCLIENT
broadcastaddresses - broadcast address(es) {[..]}
--broadcastaddresses=BROADCASTADDRESSES
peeraddresses - address of system(s) {[..]}
--peeraddresses=PEERADDRESSES
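For example, to point a node named n001 (an illustrative name) at two NTP servers:
pmcli addntpservice n001 "ntp1.example.com ntp2.example.com"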

addpowermanagementcontroller
addpowermanagementcontroller adds a power management controller
to the system(s).

Table 259—addpowermanagementcontroller
pmcli addpowermanagementcontroller <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}

addremotesyslogclientservice
addremotesyslogclientservice adds a remote syslog client service to
system(s). This will redirect kernel messages to a remote syslog server service.

Table 260—addremotesyslogclientservice
pmcli addremotesyslogclientservice <systemnames> [servername]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
OPTIONAL DESCRIPTION
servername - the name of server the messages should be redirected to. Default is
to find one automatically.
--servername=SERVERNAME

addrshservice
addrshservice adds an rsh service to system(s).

Table 261—addrshservice
pmcli addrshservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

addscarepositorycacheservice
addscarepositorycacheservice adds scarepository cache service to
system(s).

Table 262—addscarepositorycacheservice
pmcli addscarepositorycacheservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

addsmgatewayservices
addsmgatewayservices is a shortcut for adding all the services required for a
Platform Manager Gateway.

Table 263—addsmgatewayservices
pmcli addsmgatewayservices <systemnames> [interface=eth1]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
interface - sets the name of the internal interface for the NAT. Default is
eth1
--interface=INTERFACE

addsshcredentialmanagementservice
addsshcredentialmanagementservice adds an SSH credential
management service to the system(s)

Table 264—addsshcredentialmanagementservice
pmcli addsshcredentialmanagementservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

bindservicetointerface
bindservicetointerface binds a service to an interface for system(s).

Table 265—bindservicetointerface
pmcli bindservicetointerface <systemnames> <serviceid> <interface>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
serviceid - the name or uuid of service
interface - the name of an interface
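For example, to bind the DHCP server service to eth0 on a frontend named pmfe (the system name is illustrative):
pmcli bindservicetointerface pmfe Scali_DHCPServerService eth0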

disablescancesubsystem
disablescancesubsystem disables Platform Node Configuration Engine
subsystem for the system(s).

Table 266—disablescancesubsystem
pmcli disablescancesubsystem <systemnames> <subsystem>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
subsystem - the name of the subsystem to be disabled

enablescancesubsystem
enablescancesubsystem enables Platform Node Configuration Engine
subsystem for system(s).

Table 267—enablescancesubsystem
pmcli enablescancesubsystem <systemnames> <subsystem>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
subsystem - the name of subsystem to be enabled

listdnsclientservice
listdnsclientservice returns a list of DNS services for the system(s)

Table 268—listdnsclientservice
pmcli listdnsclientservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

listdisabledscancesubsystems
listdisabledscancesubsystems returns a list of disabled Platform Node
Configuration Engine subsystems for the system(s).

Table 269—listdisabledscancesubsystems
pmcli listdisabledscancesubsystems <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

listhostedservices
listhostedservices returns a list of software services hosted by system(s).

Table 270—listhostedservices
pmcli listhostedservices <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

Note: When running 'listhostedservices' you'll get an overview of all services on the
Platform Manager frontend and which interfaces host them. If eth1 is to be used
for application data transfer and eth0 for installation, monitoring and general
management, then make sure that these services have only eth0 listed:
• Scali_ManagementEngineService
• Scali_DHCPServerService
• Scali_RepositoryChannelService
• Scali_ScaMonitoringControlService
• Scali_ScaMonitoringHistoryService
• Scali_ScaMonitoringRelayService
• Scali_ScaliManageConfigurationService
• Scali_RemoteSysLogServerService
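For example, to review the current service-to-interface bindings on a frontend named pmfe (an illustrative system name), you might run:
pmcli listhostedservices pmfe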

listnisclientservice
listnisclientservice returns a list of NIS client service for the system(s)

Table 271—listnisclientservice
pmcli listnisclientservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}

listscancesubsystems
listscancesubsystems returns a list of Platform Node Configuration Engine
subsystems.

Table 272—listscancesubsystems
pmcli listscancesubsystems
ARGUMENT OPTIONAL
none none

removeservice
removeservice removes a service from system(s).

Table 273—removeservice
pmcli removeservice <systemnames> <servicename>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
servicename - the name of service

removesmgatewayservices
removesmgatewayservices removes all the services required for a Platform
Manager Gateway.

Table 274—removesmgatewayservices
pmcli removesmgatewayservices <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}

unbindservicefrominterface
unbindservicefrominterface unbinds a service from the interface for
system(s).

Table 275—unbindservicefrominterface
pmcli unbindservicefrominterface <systemnames> <serviceid> <interface>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
serviceid - the name or uuid of service
interface - the name of an interface

Switch Commands

• createswitch on page 305


• disconnectconsoleswitchport on page 305
• disconnectpowerswitchport on page 306
• findgmtopology on page 306
• listswitches on page 307
• removeswitch on page 307
• setspeedoncomport on page 308
• useconsoleswitchport on page 308
• usepowerswitchport on page 308

createswitch
createswitch creates network switch(es).

Table 276—createswitch
pmcli createswitch <systemnames> <ipspecs> <product> [username] [password]
[subnet]
ARGUMENT DESCRIPTION
systemnames - the name(s) of the switch(es) [..]
ipspecs - corresponding IP address(es) [..]
product - hardware product specification
OPTIONAL DESCRIPTION
password - password on the switch, if needed
--password=PASSWORD
username - username on the switch, if needed
--username=USERNAME
subnet - subnet for the ipaddress
--subnet=SUBNET

disconnectconsoleswitchport
disconnectconsoleswitchport disconnects console switch port for system(s).

Table 277—disconnectconsoleswitchport
pmcli disconnectconsoleswitchport <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) [..]

disconnectpowerswitchport
disconnectpowerswitchport disconnects power switch port for system(s).

Table 278—disconnectpowerswitchport
pmcli disconnectpowerswitchport <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) [..]

findgmtopology
findgmtopology discovers how the nodes are connected to the Myrinet switch.
Note: findgmtopology only works on the system monitoring the switch.

Table 279—findgmtopology
pmcli findgmtopology <switchname> <systemname>
ARGUMENT DESCRIPTION
switchname - the name of switch
systemname - the name of system that will communicate with the switch

Example: Using findgmtopology: Myrinet in the CLI


# High speed interconnects
pmcli addmyrinetinterface ${NODES} "gm0" "gm0"
pmcli addsoftware ${NODES} "GM_2.1.23" "driver"
pmcli addinfinibandinterface ${NODES} "ib0" "ib0"
pmcli addsoftware ${NODES} "IBGD_1.8.0" "driver"
# The Myrinet switch
pmcli createswitch myr1 172.19.99.98 myrinet
pmcli setmac myr1 eth "00:60:dd:48:f7:0d"
#pmcli adddhcpclient myr1
# Myrinet topology
pmcli findgmtopology myr1 ${PMFE}

listswitches
listswitches lists all switch(es).

Table 280—listswitches
pmcli listswitches
ARGUMENT OPTIONAL
none none

removeswitch
removeswitch removes network switch(es) from the configuration.

Table 281—removeswitch
pmcli removeswitch <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name(s) of the switch(es) to remove [..]

setspeedoncomport
setspeedoncomport sets the speed of any serial port used as a console on the
system.

Table 282—setspeedoncomport
pmcli setspeedoncomport <systemnames> <speed>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
speed - new speed of the serial port

useconsoleswitchport
useconsoleswitchport defines the console switch port for the system.

Table 283—useconsoleswitchport
pmcli useconsoleswitchport <systemnames> <switchname> <portnumbers>
[devicename=ttyS0] [consserver]
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) [..]
switchname - the name of switch to be used
portnumbers - port number(s) to be used
OPTIONAL DESCRIPTION
devicename - the name for the serial device on the server. Default is ttyS0.
--devicename=DEVICENAME
consserver - the name of console server
--consserver=CONSSERVER
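For example, to use port 3 on a console switch named consw1 for a node named n001 (the names are illustrative):
pmcli useconsoleswitchport n001 consw1 3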

usepowerswitchport
The usepowerswitchport command defines a power switch port for the system.

Table 284—usepowerswitchport
pmcli usepowerswitchport <systemnames> <switchname> <portnumbers>
[powerserver]
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) [..]
switchname - the name of switch to be used

portnumbers - port number(s) to be used [..]
OPTIONAL DESCRIPTION
powerserver - the name of power server
--powerserver=POWERSERVER

Template Commands

Templates instruct Platform Manager how to install software on systems. The following
commands affect your choice of template:
• addtemplate on page 310
• gettemplate on page 311
• listtemplates on page 311
• removetemplate on page 311
• replacetemplate on page 311

addtemplate
addtemplate adds a new network installation template, read from stdin.

Table 285—addtemplate
pmcli addtemplate <name> <templatetype>
ARGUMENT DESCRIPTION
name - specifies the name of the template
templatetype - the type of template (kickstart, autoyast, etc.), which can be
retrieved by using listtemplates. The template file is read from
standard input.

Note: Find the list of types with "listtemplates".

Example: addtemplate
pmcli addtemplate MYTEMPLATE_NAME autoyast < MYTEMPLATE_FILE.xml

gettemplate
gettemplate returns the content of an existing template. You can learn how to
get values for id by running listtemplates (see listtemplates on page 311).

Table 286—gettemplate
pmcli gettemplate <id>
ARGUMENT DESCRIPTION
id - template id

listtemplates
listtemplates returns a list of existing kickstart/autoyast templates.

Table 287—listtemplates
pmcli listtemplates
ARGUMENT OPTIONAL
none none

removetemplate
removetemplate removes an existing network installation template. You can learn
how to get values for id by running listtemplates (see listtemplates on page 311).

Table 288—removetemplate
pmcli removetemplate <id>
ARGUMENT DESCRIPTION
id - template id

replacetemplate
replacetemplate replaces an existing network installation template.

Table 289—replacetemplate
pmcli replacetemplate <templateid>
ARGUMENT DESCRIPTION
templateid - Name or UUID of template to replace

The console interface
Console is the second command-line interface (CLI) provided by Platform Manager.
• Console on page 313
• Console Configuration on page 317

Console

Below you will find tables of the console options and the console information fields.

Table 290—Console Options

console [optional]
OPTIONAL DESCRIPTION
-7 Strip the high bit off all console data, whether from user input
or the server, before any processing occurs. This disallows escape
sequence characters with the high bit set.
-a(A) Access a console with a read-write connection (default
setting). The connection is dropped to "spy mode" if someone
else is attached read-write.
-b message - send a broadcast message to all users on each server
-B message - send a broadcast message to users on the primary server
-C config - Override per-user config file
-c cred - load an SSL certificate and key from the PEM-encoded file
cred
-d - disconnect the target specified by [user][@console]. You may specify the target as:
• user - disconnect the user regardless of which
console they are using
• @console - disconnect all users of a specific
console
• user@console - disconnect a specific user of a
specific console.
-D - enable debug output, sent to stderr
-e esc - set the initial two-character escape sequence to those
represented by esc. Any of the forms output by cat(1)'s -v
option are accepted. The default value is "^Ec".
-E If encryption has been built into the code (--with-openssl),
encrypted client connections are required by default. This
option disables any attempt to create an encrypted
connection. Use the -U option if you would like to use
encrypted connections and have encryption supported on
your server, but want to revert to unencrypted connections
otherwise.
-f (F) - force any existing read/write connections into "spy mode".
-h - print help message to the screen
-i (I) - display information in machine-parseable form (see below)

-l user - set the login name used for authentication to user. By
default, console uses either $USER or $LOGNAME, if its value
matches the user's real uid, else the name associated
with the user's real uid.
-M master - the console client program polls master as the primary
server rather than the default set at compile time (typically
"console"). The default master may be changed at compilation
time using the "--with-master" option. If you use "--with-uds"
to enable UNIX domain sockets, this option points console to
the directory which holds those sockets. The default directory
("../tmp/conserver") can be changed at compilation time using "--with-uds".
-n - do not read system-wide config file
-P - display the pids of master daemon process on each server.
-p port - connect to the specified port. The default port can be
changed at compilation using the "--with-port" option.

Note: If you use "--with-uds", the -p port option is ignored.
-q (Q) - send a quit command to the (master) server
-r (R) - display (master) daemon version (think remote version)
-s (S) - spy on a console (and replay)
-t - send a text message to [user][@console]
-U - ignored - encryption not compiled into code
-u - show users on the various consoles
-v - be more verbose
-V - show version information
-w (W) - show who is on which console (on master)
-x - examine ports and baud rates

The -a, -f, and -s options have upper-case variants (-A, -F, -S) with the same effect;
the upper-case variants also request the last 20 lines of console output, just as if you
had entered "^Ecr".
The -i option outputs each console’s information in 15 colon-separated fields.

Table 291—Console information fields table

Field Description
name - the name of the console
hostname - hostname of the child process managing the console
pid - pid of the child process managing the console
socket number - the socket number of the child process managing the console
type - the type of console
“/” means the console is a local device
| - means a command
! - means a remote port
console-details - the values are comma-separated and depend on the type of
console.
Local devices have values of:
the device file
the baud rate/parity
the file descriptor for the device.
Commands have values of
the command
the command’s pid
the pseudo-tty
the file descriptor for the pseudo-tty
The remote ports have values of
the remote hostname
remote port number
"raw" or "telnet" protocol
file descriptor for the socket connection
user-list - the comma separated bundles containing details of each user
attached to a console:
connection type
r means read-only
w means read-write
s means suspended

username
hostname
user’s idletime
"r" and "s" users’ requests for read-write mode
state - console state - "up", "down" or "init"
perm - type of permission.
"ro"(read-only) is returned ifthe device is a local device AND the
user’s permissions on the server allow the user to read the file, but
not write.
"rw" means you have read-write permission
log-file details - The comma-separated values are:
log-file name
logging enabled or not - "log" or "nolog" - toggled by "^EcL"
activity logging enabled or not - "act" or "noact", the "a" timestamp
option
timestamp interval
logfile descriptor file
break - the default break sequence used for the console
reup - there are two values:
"autoup" - the server is down and the automatic reconnection code is at work
"noautoup" - either the node is up or the automatic reconnection code is not currently running
aliases - comma-separated list of console aliases
options - returns a comma-separated list of active options for a console
initcmd - the initcmd configuration option for the console
idletimeout - the idletimeout configuration option for the console
idlestring - the idlestring configuration option for the console

Console Configuration

The client-server console application reads configuration information from the
system-wide configuration file (console.cf), then the per-user configuration file
(.consolerc), and then applies command-line arguments. Each configuration location
can override the previous one, and the same happens when parsing an individual file.
The configuration file is read by the same parser as the one that reads conserver.conf,
so check that documentation for parser details.

TIP: Global Defaults


Later entries always override earlier entries. For this reason you should enter
"global" defaults first followed by more specific defaults.

Configuration Blocks
Console recognises the following configuration blocks:
• config <hostname>|<ipaddr> defines a configuration block for the specified
client host or the specified IP address.
• escape esc sets the escape sequence (see “-e esc”)
• master <master> sets the default master to master (see “-M master”)
• port sets the default port to port (see “-p port”)
• sslcredentials filename sets the SSL credentials file location (see “-c cred”)
• sslenabled sets whether or not encryption is used in connections (see “-E”).
Valid values are yes|true|on|no|false|off.
• sslrequired sets whether or not encryption is required in connections ( see
“-U” ). Valid values are yes|true|on|no|false|off
• striphigh sets whether or not to strip the high bit off all data received (see
“-7”). Valid values are yes|true|on|no|false|off.
• username <user> sets the username passed from the server to the user (see
“-l user”)
• terminal <terminal_type> defines a configuration block when using a
specified type of terminal.
• attach string | "" prints string when successfully connected to a console.
Character substitutions will be performed based on the attachsubst value and
occur before interpretation of the special characters listed below. If you use
the null string ("") no string will be printed. The string is a simple character
string with the exception of ’\’ and ’^’:
• \a- alert
• \b- backspace
• \c- character c
• \f- form-feed
• \n- new line

• \r- carriage return
• \t- tab
• \v-vertical-tab
• \\- backslash
• \^- circumflex
• \ooo- octal representation of a character, where ooo is one to three octal
digits
• ^?- delete
• ^c- control character (c is "and"ed with 0x1f)
• attachsubst c=t[n]f[,...] "" performs character substitution on the attach
value. You can define a series of replacements by specifying a
comma-separated list of c=t[n]f sequences, where c is any printable
character; t specifies the replacement value; n is an optional field length; and
f is the format string. t can be one of the characters listed below, categorized
as a string or numeric replacement, which dictates the use of the n and f fields.
• detach string | "" sets a string to print once detached from a console. Character
substitutions will be performed based on the detachsubst value. See attach for
an explanation of the string. If you use the null string ("") no string will be
printed.
• detachsubst c=t[n]f[,...] "" performs character substitutions on the detach
value. See the attachsubst option above for an explanation of the format string.

String Replacement
For example:
• u- username
• c- console name
If the string replacement is less than n characters the value will be padded on its
left with space characters. f must be ’s’.

Note: If you use "*" as a value for <hostname> or <ipaddr>, the configuration block will
be applied to all client hosts.
Note: If you use "*" for the terminal type, the block will be applied to all terminal types.

Numeric Replacement
Numeric replacement is not yet implemented. If the numeric replacement is less
than n characters in length, it will be padded with 0's if n begins with a 0. Otherwise
it will be padded, as string replacements are, with space characters.
f must be one of the following:
• ’d’ -a decimal value
• ’x’ -a lower-case hexadecimal value

• ’X’ - an upper-case hexadecimal value.

If the null string ("") is used, there will be no replacements.

Example:attach and attachsubst


An interesting use of attach and attachsubst would be:
terminal xterm {
attach "^[]0;conserver: U@C^G";
attachsubst U=us, C=cs;
}

Escape Sequences
The connection can be controlled by a two-character escape sequence followed by
a command. The default escape sequence is "CONTROL-E c" (octal 005 143).
The escape sequences are actually processed by the server. See the conserver
documentation for more information.
When you run a local command via "^Ec|", you can enter "^C" to send a SIGHUP,
"^\" to send a SIGKILL command and "o" to toggle the display of the console data.

Table 292—Console arguments

Argument Description
. - disconnects the user from the console
; - moves the user to another console.
a - attaches a read-write connection if no one else is connected.
b - sends a broadcast message to all users on this console.
c - toggles flow control. It is strongly advised that you do not use this.
d - shuts down the current console.
ecc - changes the escape sequence to the next two characters
f - forcibly attaches a read-write connection
g - returns group information
i - dumps information
L - toggles logging on and off
? - lists the available break sequences
0 - sends the break sequence associated with this console
1-9 - sends the specific break sequence
m - displays message of the day

o - closes (if open) and reopens the line to clear errors - silo overflows - and
the log file
p - replays the last 60 lines of output
r - replays the last 20 lines of output
s - switches to spy mode (read only)
u - shows status of users/hosts in this group
v - shows the version of the group server
w - returns a list of users on this console
x - examines this group’s devices and modes
z - suspends this connection
| - attaches a local command to the console
? - displays a list of commands
^M (return) - continue, ignores the escape sequence
^R (CTRL-R) - replays the last line only
\ooo - sends character having an octal code ooo (must specify three octal
numerals)

Example: Using console


To connect to the console on node n001 enter:
console n001
Entering “Ctrl+E c ?” will return a list of available escape sequences.
Entering “Ctrl+E c .” will disconnect the console.

If any other character is entered after the escape sequence, all three characters
will be discarded. Note that a line-break or a down command can only be sent from
a read-write connection. You must redefine the outer escape sequence or use
^Ec\ooo to send the first escape character before typing the second character
directly in order to send another escape sequence through the connection.

Using console with -e

Example: Using console -e
Entering:
console -e "^[1" lv426
requests a connection to host lv426 with the escape sequence set to "escape one".

Using console with -u


In the -u output, the login value "none" indicates that no one is viewing that
console. The value “spies” indicates that users have read-only connections and no
one has a read-write connection.

Example: console -u
Entering:
console -u
would result in something like:
expert    up    ksb@mentor
tyro      up    <spies>
mentor    up    <none>
sage      up    fine@cis

Using console with -w


Entering -w lists all users:
console -w
root@localhost.localdomain    attach    0:00    n001

This lists the consoles and the status of each one. In this case, we see:
• that the console is configured for two nodes, n001 and n002
• that both consoles are up
• that root is currently attached to the console n001 while no one is viewing
the console n002

Example: console -w
Entering:
console -w
results in:
kbs@extra    attach    2 days    expert
file@cis     attach    21:46     sage
drm@alice    spy       0:04      tyro
The third column displays the idletime of the user. Either hours, minutes, or number of
days may be displayed.

Help and more examples can be found on the conserver home page at
http://www.conserver.com/

Setting a new default escape


A simple configuration to set a new default escape sequence and override the
master location would be as follows:
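One possible form, using the block syntax described above (the escape sequence and hostname shown are illustrative only):
config * {
escape ^Ee;
master mymaster.example.com;
}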

TIP: Locations of Files


You can override default file locations at compilation, or by the command-line
options above. Run:
console -V
to see the default locations set at compilation.

TIP: Number of Fields


You can divide the -i output into more than 15 fields if the user-provided
information contains manually embedded colons.

Some Known Bugs


You can create looped console connections, but Platform advises strongly against
this.

WARNING— Never run console from within a console connection without uniquely
setting each escape sequence to be different from the others.

The power interface
The power interface controls the power of a node.
• Usage: power arguments/options on page 324
• Usage: power actions on page 324

Table 293—Usage: power arguments/options

power [options] <nodelist> <action>


OPTION DESCRIPTION
nodelist - name(s) of target node(s)
-c CONF_FILE - defines a config file.
--conf_file=CONF_FILE
-d, --debug - debug run; display debug information.
--debug=TRUE
-h, --help - display this help and exit.
-i INTERVAL - number of seconds to pause between each node.
--interval=10
-p PLUGIN_DIR - defines the directory for plug-ins. The default is /opt/scali/sbin/../plugins.
--plugin_dir=PLUGIN_DIR
-r RETRIES - retry failed commands this many times before giving up.
--retries=RETRIES
-v - verbose run; output more information and exit.
--verbose=TRUE

There are five actions.

Table 294—Usage: power actions

power [options] [<nodelist> <action>]


ACTION DESCRIPTION
status result = node.powerStatus()
off result = node.powerOff()
on result = node.powerOn()
cycle result = node.powerCycle()
reset result = node.powerReset()
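For example, to query the power status of a node named n001 and then power it off (the node name is illustrative):
power n001 status
power n001 off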

Chapter 11 - Parallel Shell Tools

The complexity of managing clusters increases as the number of nodes increases, so you need
tools that allow you to operate in parallel on a set of nodes, issuing commands to them as if they were
a single entity. Platform Manager provides a suite of shell tools that can be run in parallel on nodes in
your data center.
The target nodes for all programs in the ScaSH suite may be defined on the command line.
Note that ScaSH supports bracket and grouping name expansion for node
name specification. If node names are not specified on the command line, the nodes
reported by the scahosts program will be used.
All the tools in the ScaSH suite are based on a client/server implementation where
xinetd starts the server processes as a standard service. You control access through
PAM (Pluggable Authentication Modules). Both are automatically configured by
Platform Manager.
Topics in this chapter include:
Grid vs. Tree Routing Topologies on page 412
scacp on page 413
scagroup on page 414
scagroup File Format on page 415
scahosts on page 415
scakill on page 416
scaps on page 417
scarcp on page 418
scarup on page 420
scash on page 421
ScaSH configuration file on page 424
plasub on page 425
scatop on page 426
scawho on page 428

Grid vs. Tree Routing Topologies


Platform Manager supports two different topologies: tree and grid. Figure 152 shows the
balanced tree topology.

Figure 152—A 33 node tree topology with a Fan-out value of 2

The tree topology is the default. For normal scash operations the tree topology is more
often suitable, given bandwidth and latency issues. For copying larger files a grid topology
is often a quicker solution.

Figure 153—A 33 node Grid with a Fanout value of 4

The nodes are numbered in a sequence established when the command is initiated, starting
with the originating node, which has a value of -1.

scacp
The scacp program copies file(s) locally on nodes in a Platform cluster.

Syntax scacp <arguments> <from> [<from>] <to>
ARGUMENT DESCRIPTION
-a Run in background
-p Print machine name before each line in each output block
-R Copy recursively
-f <nodefile> Nodes will be read from <nodefile>. Each entry must be separated by a
new line.
-n <nodelist> Node names separated by space characters
-?/-h Print a short description of the command line options
-v Print verbose information
-V Print version
-x <nodelist> List of nodes to be excluded
-X Omit reading of configuration file(s)

Example: scacp
Use the scacp command to copy a file that is located on each node to another directory
within the same node.
[root@rigel root]# scacp /etc/hosts /tmp/hosts.bu
[root@rigel root]#

Note: Remember to surround the parameter with double quotes if the parameter is more
than one word or a list, for example:
-n “n0 n1 n2 n3”

scagroup
When utilities require a nodelist as a parameter you can build the nodelist from node
names, group aliases and bracketed expressions. The group alias will be resolved to a list
of node names as specified in the scagroup configuration file(s). The system-wide
config-file /opt/scali/etc/scagroup.conf is read first. If a user-specific config-file,
~/.scali/scagroup.conf, exists, its content will be combined with the system-wide
definitions.

Syntax scagroup <arguments> [<group>]
ARGUMENT DESCRIPTION
-g <groupfile> Read group-definitions from <groupfile> instead of default-files
-w <width> Set printing width
-h Print a short description of the command line options
-v Print verbose information
-V Print version information
-X Omit reading of configuration file(s)

scagroup File Format


Each group has the keyword group at the beginning of a line followed by a group alias and
a list of node names included in the group. The list may itself contain previously defined
group aliases which will be recursively resolved.
The nodelist may use bracket expressions which will be resolved as specified.
If an entry starts with ’!’ the entry will be excluded (instead of included). The file may
contain comments; a comment is a line starting with #.
For more information about grouping and bracket notation see “Bracketing and
Grouping” on page 446.
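For illustration, a scagroup.conf might contain entries such as the following (the node and group names are hypothetical):
# comment lines start with #
group rack1 n[1-16]
group rack2 n[17-32]
group default rack1 rack2
group compute default !n1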

scahosts
The scahosts program located in /opt/scali/bin, checks a number of hosts for availability.
An available host is one that answers to a scash connection request. The program prints
the name of the available hosts. There are several ways you can specify which hosts the
program checks. You may specify hosts on the command line when running the program,
either with the -f option which gives the path to a file containing host names, or with a list
of hosts at the end of the command. If neither the -f option, nor the hostlist is present,
scahosts will look for hosts in the file $HOME/.scahosts. If this file is not available, it will
look for hosts in the file /opt/scali/etc/ScaSH.scahosts.
Use one name per line in the file containing host names

Syntax scahosts <arguments> <nodelist>
ARGUMENT DESCRIPTION
-1 Print node names on separate lines
-G Use GRID routing, default is TREE
-z Don't check node availability
-f <nodefile> Nodes will be read from <nodefile>. Each entry must be separated by a new
line.
-F <n> Fanout value controls how many nodes should be interconnected in each
level of the routing
-?/-h Print a short description of the command line options.
-v Print verbose information
-V Print version

scakill
scakill kills processes running in a Platform cluster.

Syntax scakill <arguments> [optionals]
ARGUMENT DESCRIPTION
-<number> Signal to send to processes (only numeric values allowed)
-i <range> Kill those processes that lie within the given process id range, given as
<startpid>-<stoppid>
-l <signal> Signal to send to processes given either as a number or a signal name
(e.g. 9, HUP,...)
-s <string> Kill those processes that match the given <string>
OPTIONAL DESCRIPTION
-a Run in background
-f <nodefile> Nodes will be read from <nodefile>. Each entry must be separated by a
new line
-n <nodelist> Node names separated by space characters
-p Print machine name before each line in each output block
-?/-h Print a short description of the command line options.
-v Print verbose information
-V Print version
-X Omit reading of configuration file(s)
-x <nodelist> List of node names to be excluded
<nodelist> Nodes may be specified using bracket expansion and groups (see
“Bracketing and Grouping” on page 446.) If no nodes are specified,
scahosts will use the nodes specified in the scagroup named “default”.

Here is an example of scakill


Use the scakill command with the -s switch to kill specific running processes on all default
nodes of the system; the -s switch selects the processes matching the given string.
[root@scali3-11 /root]# scakill -s all2all scali3-11
We are killing these pids: 29093 29094 29095 29096 29097 29098 29100
29101 29107 29109 29110

scaps
scaps prints processes on nodes in a Platform cluster.

Syntax scaps <arguments>
ARGUMENT DESCRIPTION
-a Run in background
-f <nodefile> Nodes will be read from <nodefile>. Each entry must be separated by a
new line
-n <nodelist> node names separated by space characters
-p Print machine name before each line in each output block
-s <string> Only list those processes that contains the given <string>
-u <user> Only list those processes that match the given <user>
-h/-? Print a short description of the command line options
-v Print verbose information
-V Print version
-x <nodelist> List of node names to be excluded.
-X Omit reading configuration file(s).
<nodelist> Nodes may be specified using bracket expansion and groups (see
“Bracketing and Grouping” on page 446.) If no nodes are specified,
scahosts will use the nodes specified in the scagroup named “default”.

To show which processes are running on all default nodes of the system, use the scaps
command. You use the -u switch to show only the processes belonging to a certain user,
in this case, ole:
[root@rigel root]# scaps -u ole
r7 : ole 14906 0.0 0.0 5264 1464 pts/0 S 16:11 0:00
-bash

scarcp
scarcp copies file(s) between local machine and nodes in a Platform cluster

Syntax scarcp <arguments>
ARGUMENT DESCRIPTION
-c <client> Use <client> as rcp client instead of default client
-F <n> Fanout value controls how many nodes to interconnect in each level of the
routing. The default is 3.
-f <nodefile> Nodes will be read from <nodefile>. Each entry must be separated by a
new line.
-g Gather. Treat nodes specified with -n or -f as source, instead of destination
nodes
-G Use GRID routing. The default is TREE
-n <nodelist> node names separated by space characters
-p Print machine name before each line in each output block
-R Copy recursively
-r key Replace key with machine name if key is found in command
-?/-h Print a short description of the command line options
-v Print verbose information
-V Print version
-x <nodelist> List of nodes to be excluded
-X Omit reading of configuration file(s)
<nodelist> Nodes may be specified using bracket expansion and groups (see
“Bracketing and Grouping” on page 446.) If no nodes are specified,
scahosts will use the nodes specified in the scagroup named “default”.

Node availability is checked before returning the nodes’ names.

Note: You may specify the command to be used when copying files with the
-c option.

TIP: Use of the -g option will force fanout to be zero. Use -g in
combination with the -r option to avoid overwriting files.

Example: copying files using scarcp

To copy a file from the current node, or frontend, to all default nodes of the system, use the
scarcp (remote copy) command. For example, to copy the local file
/os/i686/kernel-smp-2.4.20-19.8.i686.rpm to the /tmp directory on all the default nodes:
[root@rigel root]# scarcp /os/i686/kernel-smp-2.4.20-19.8.i686.rpm /tmp

Example: scarcp using -r

Use scarcp with the -r option to create unique files at the destinations when you want
to copy files from a selection of nodes to the local machine. Copy files with a common
path /etc/hosts from a selection of nodes to a different selection of nodes. All source
files are copied to each of the destinations using -r to avoid overwriting the files on the
destination nodes:
[root@rigel root]# scarcp -r KEY n[1-10]:/etc/hosts
n[11-14]:/tmp/hosts.KEY

scarup
scarup prints up-time/load information about nodes in a cluster.

Syntax scarup <arguments>
ARGUMENT DESCRIPTION
-a Run in background
-f <nodefile> Nodes will be read from <nodefile>. Each entry must be separated by a new
line
-p Print machine name before each line in each output block
-n <nodelist> node names separated by space characters
-v Print verbose information
-V Print version
-x <nodelist> List of node names to be excluded
-X Omit reading of configuration file(s)
OPTIONAL DESCRIPTION
-h/-? Print a short description of the command line options.
<nodelist> Nodes may be specified using bracket expansion and groups (see
“Bracketing and Grouping” on page 446.) If no nodes are specified,
scahosts will use the nodes specified in the scagroup named “default”.
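For example, to print up-time and load information for the default nodes, prefixing each output line with the node name, you might run:
scarup -p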

scash
The scash utility, located in /opt/scali/bin, is a UNIX command-line utility which executes
the same shell command on a set of Platform system nodes using the ScaX infrastructure.
You may specify the target nodes in a configuration file, or on the command line (see
“scahosts” on page 415).
The command options are listed below.

Syntax scash <arguments> (-n "<nodelist>" <command>)|(-c
"<command>" <nodelist>)
ARGUMENT DESCRIPTION
-a Execute command in background
-c <command> The command to be run
-F <n> Fanout value controls how many nodes to connect in each level of the
routing.
-f <nodefile> Nodes will be read from <nodefile>. Each entry must be separated by a new
line.
-G GRID routing, default is TREE
-n <nodelist> node names separated by space characters
-p Print machine name before each line in each output block
-r <key> Replace key with machine name if <key> is found in command
-t <timeout> Connection timeout in milliseconds
-u Don't buffer stdout and stderr output
-h/-? Print a short description of the command line options.
-V Print version
-v Print verbose information
-X Omit reading of configuration file(s)
-x <nodelist> List of nodes to be excluded.
<nodelist> Nodes may be specified using bracket expansion. (See “Bracketing and
Grouping” on page 446 .) If no nodes are specified, scash will use the
nodes specified in the scagroup named “default”.
<command> Any command you want!

Example: Running scash in the Background

Run:
rpm -i /tmp/kernel-smp-2.4.20-19.8.i686.rpm
on selected nodes of the system, but disconnect from the terminal, run it in the
background, and return to the issuing shell. Use the -a switch to specify a
command to be run in the background on each node.
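A command of this form might be used (shown here without node-selection options, so the default nodes apply; the package path is taken from the example above):
[root@rigel root]# scash -a rpm -i /tmp/kernel-smp-2.4.20-19.8.i686.rpm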

Example: rpm -q glibc

Run an “rpm -q glibc” command on selected nodes of the system. Select nodes by using
the -n switch instead of the default nodes of the system.
[root@rigel root]# scash -pn "r1 r2 r4 r6 r11" rpm -q glibc

r11 : glibc-2.3.2-4.80.6
r6 : glibc-2.3.2-4.80.6
r2 : glibc-2.3.2-4.80.6
r1 : glibc-2.3.2-4.80.6
r4 : glibc-2.3.2-4.80.6
[root@rigel root]#

Example: scash uname

Run a “uname -r” command on the default nodes of the system. To prefix each line of
output with the corresponding node name, use the -p switch in scash.

[root@rigel root]# scash -p uname -r


r1 : 2.4.20-18.8smp
r2 : 2.4.20-18.8smp
r3 : 2.4.20-18.8smp
r4 : 2.4.20-18.8smp
r5 : 2.4.20-18.8smp
r6 : 2.4.20-18.8smp
r7 : 2.4.20-18.8smp
r8 : 2.4.20-18.8smp
r9 : 2.4.20-18.8smp
r10 : 2.4.20-18.8smp
r11 : 2.4.20-18.8smp
r12 : 2.4.20-18.8smp
r13 : 2.4.20-18.8smp
r14 : 2.4.20-18.8smp
r15 : 2.4.20-18.8smp
r16 : 2.4.20-18.8smp
[root@rigel root]#

Example: scash uname exclusions

Run a "uname -r" command on the default nodes, except for specified nodes to be
excluded. Exclude nodes by using the -x switch. In the example below, we have used
bracket expansion when specifying the nodes.
[root@rigel root]# scash -px "r[1-6]" -- uname -r
r7 : 2.4.18-27.8.0smp
r8 : 2.4.18-27.8.0smp
r9 : 2.4.18-27.8.0smp
r10 : 2.4.18-27.8.0smp
r11 : 2.4.18-27.8.0smp
r12 : 2.4.18-27.8.0smp
r13 : 2.4.18-27.8.0smp
r14 : 2.4.18-27.8.0smp
r16 : 2.4.18-27.8.0smp
r15 : 2.4.18-27.8.0smp
[root@rigel root]

ScaSH configuration file


At the outset, there is no configuration file, because default values are used. If you wish
to override the default settings without passing command-line options each time, you must
create a configuration file called ScaSH.conf which contains configuration parameters for
the ScaSH parallel shell tools suite. The file is placed in the path /opt/scali/etc/ScaSH.conf.
Open ScaSH.conf.example to see how this is done.
Entries should have the following format: <option>=<value>. The file may contain
comments; a comment is a line starting with #.

Syntax /opt/scali/etc/ScaSH.conf
ARGUMENT DESCRIPTION
-a atnow=<flag>
Execute command in background. Legal value for flag is either ’true’ or ’false’.
-F <fanout> fanout=<fanout>
Define the fanout factor used for limiting the number of connections from one
host when running a scash command. When the number of nodes given as
arguments to the scash command exceeds the fanout factor, the hosts are
divided into groups, where fanout gives the number of groups, and the scash
command is run as a scash command in each group in parallel. Hence, the
number of connections from one host is limited to fanout connections.
-p prefixprint=<flag>
Print node name before each line in each output block. Legal value for flag
is either ’true’ or ’false’.
-t <timeout> connect_timeout=<timeout>
Define connect timeout in milliseconds. This timeout controls how long
scash will wait for a connection response in each fanout group.
-u unbuffered=<flag>
Don’t buffer standard-out / standard-error. Legal value for flag is either
’true’ or ’false’.
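For illustration, a ScaSH.conf that overrides a few defaults might look like this (the values are examples only):
# /opt/scali/etc/ScaSH.conf
fanout=4
prefixprint=true
connect_timeout=5000
unbuffered=false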

plasub
The command plasub is a wrapper script for submitting jobs through Scali MPI Connect.

Syntax plasub <params> <progname> [program]
ARGUMENT DESCRIPTION
-a Start time for job ([[[[CC]YY]MM]DD]hhmm)
-debug Debug, print debug output
-e <filename> specify filename for stderr
-env <env> export list of environment variables, format: var1,var2,...
-i <file> Use <file> as input for stdin
-l Specify resources for job (man pbs_resources)
<"resource_list">
-m <addr> e-mail <addr> when job completed
-maxtime <mt> Max time for mpi job in minutes (Scali MPI Connect only)
-mpimon Submit parallel job using Scali MPI Connect mpimon
-mpiparams Specify parameters to mpimon/mpirun
<"parameters">
-N <job name> Specify name for job
-nodes <nodes> submit job at given nodes, comma separated
-np <np> Total number of processes; the default is 1 for non-MPI programs and
2 for MPI programs

-npn <npn> Number of processes per node, default 1
-ns submit job to any system, independent of architecture (default)
-o <filename> specify filename for stdout
-q Quiet, no echo of mpimon/mpirun command
-Q <queue> Destination queue for job
-qsparams Specify native queue system parameters
<"parameters">
-r <minutes> reserve nodes for <minutes>
(nodes listed in file 'reserved_nodes.$PBS_JOBID')
-s <system> submit job to given system/resources
-scampi Submit parallel job using Scali MPI Connect mpimon
-v Verbose
-X Omit reading of configuration file(s)
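For example, to submit a hypothetical MPI program ./mpi_app as 8 processes, two per node, under the job name testrun using mpimon:
plasub -np 8 -npn 2 -N testrun -mpimon ./mpi_app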

scatop
To show all processes using more than a specified CPU usage percentage (the default is
20%) running on the default nodes of the system, use the scatop command. scatop reports
the processes in the cluster with the highest CPU (and memory) usage.

Syntax scatop <arguments>


ARGUMENT DESCRIPTION
-f <nodefile> Use file containing node names separated by new lines
-c <perc> CPU usage limit, negative excludes higher than, default 20.0%
-m <perc> Mem usage limit, negative excludes higher than, default 0.0%
-n <nodelist> node names separated by space characters
-h Print a short description of the command line options.
-V Print version
-x <nodelist> List of node names to be excluded.

Example: scatop

Enter:
[root@rigel root]# scatop
renders a report table such as the one below:
[root@rigel root]# scatop
Node: PID USER PRI NI SZ SIZE RSS STAT %CPU %MEM TIME COMMAND
r1 : 20866 ole31-10340425475654744R<32.51.70:13fluent_scampi.6
r1 : 20867 ole27-10302465461254604R<97.21.70:40fluent_scampi.6
r2 : 20430 ole27-10336194646046452R<97.71.40:41fluent_scampi.6
r2 : 20431 ole25-10302374658846576R<83.71.50:35fluent_scampi.6
r3 : 21780 ole27-10331074769647688R<97.01.50:41fluent_scampi.6
r3 : 21781 ole25-10295224762847620R<97.01.50:41fluent_scampi.6
r4 : 19517 ole27-10326094774447736R<95.51.50:41fluent_scampi.6
r4 : 19518 ole25-10290114762847620R<95.11.50:40fluent_scampi.6
r5 : 20656 ole19-10322994774047732R<95.11.50:40fluent_scampi.6
r5 : 20657 ole19-10285014763247624R<95.21.50:40fluent_scampi.6
r6 : 20692 ole27-10317864772847720R<97.21.50:41fluent_scampi.6
r6 : 20693 ole26-10279304638446376R<98.11.40:42fluent_scampi.6
r7 : 19438 ole26-10310724773247724R<94.31.50:41fluent_scampi.6
r7 : 19439 ole25-10274744762047612R<93.81.50:41fluent_scampi.6



r8 : 21857 ole27-10 30559 47724 47716 R<97.91.50:41 fluent_scampi.6
r8 : 21858 ole25-10 26962 47616 47608 R<97.11.50:40 fluent_scampi.6
[root@rigel root]#

scawho
scawho prints user names and number of processes on nodes in a Platform cluster.

Syntax scawho <arguments>


ARGUMENT DESCRIPTION
-f <nodefile> Nodes will be read from <nodefile>. Each entry must be separated by a
new line.
-n <nodelist> Node names separated by space characters
-h/-? Print a short description of the command line options
-V Print version
-v Print verbose information
-X Omit reading of configuration file(s)
-x <nodelist> List of nodes to be excluded.
<nodelist> Nodes may be specified using bracket expansion. (see “Bracketing and
Grouping” on page 356) If no nodes are specified, the program will use
the nodes specified in the scagroup named “default”.
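
A brief usage sketch (node names are illustrative):

# Show users and process counts on the nodes in the scagroup named "default"
scawho
# Query nodes n01 through n08, excluding n03
scawho -n "n[01-08]" -x n03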



Chapter 12 - Platform Custom Package
Generator
The Platform Custom Package Generator (ScaCPG) is a tool for packaging arbitrary files into the RPM
format. Files packaged in the RPM format can easily be included in routines for distribution
of software to nodes in clusters running Platform Manager. The real importance of ScaCPG
comes into play when a node must be reinstalled after a crash. Packages that capture the
installed software base on the other nodes ensure that the reinstalled node becomes
coherent with the rest of the cluster. The operating system and Platform’s software are distributed
as RPMs, and ScaCPG ensures that application software, configuration files, etc. can be
distributed the same way.
This chapter’s topics include:
Distribution and Set-up on page 429
Interfaces on page 430
Richer functionality in the CLI on page 432

Error handling on page 433

Distribution and Set-up


The ScaCPG is distributed in the RPM format. For example:
scacpg-1.1.2-29.rhel4.noarch.rpm
From this file name you can see that the package contains ScaCPG version 1.1.2, built for
Red Hat Enterprise Linux version 4.
The program is placed in
/opt/scali/bin
when it is unpacked thus:

rpm -U scacpg-1.1.2-29.rhel4.noarch.rpm
Note: Using ScaCPG requires root-privileges, and the directory to be
packaged must be owned by root.
ScaCPG treats directory names in the same way as the tar archiving utility, i.e. use of a
directory name always implies that the subdirectories below should be included when
building an RPM, and the same directory structure is (if necessary) recreated on the local
machine when the RPM is installed or upgraded. Since the path leading to the directory to
be packaged is not included in the package, the desired target directory structure must be
created in the package directory. For example, if you want to package /etc/passwd you must
copy it to
/home/user1/package/etc/
resulting in
/home/user1/package/etc/passwd.
Then you execute ScaCPG from /home/user1 and name
/home/user1/package
as the directory to be packaged. When the resulting RPM is installed it will deposit its
contents as /etc/passwd.
This ability to reproduce packaged material exactly as the original maintains the
homogeneous setup of cluster nodes that is key to supporting a single system
environment with Platform’s cluster management (Platform Manager). While other
techniques for distributing material are available, ScaCPG avoids the manual steps that
are prone to introduce errors. More information about the RPM mechanism is available from
http://www.rpm.org.

Interfaces
ScaCPG has both a command line interface and a graphical, NEWT based interface. The
option set is richer in the command line variant, while the graphical variant is more
intuitive. ScaCPG launches its graphical interface when there are no arguments on the
command line.
Enter
[root@rigel root]# scacpg --help
to get a list of the arguments recognized by ScaCPG on the command line. See the syntax
table below.


Syntax [root@rigel root]# scacpg <arguments>
Arguments DESCRIPTION
-d <directory> Directory of package
-n <name> Name of rpm
-v <version> Version of package
OPTIONAL DESCRIPTION
-b <description> DESCRIPTION of package
--buildarch <architecture> Build for this architecture
-g <group> Group of package
-h, --help print this help message
-l <license> License of package
--prescr <path> The path to the pre script
--presh <program> The shell where the pre script runs
--preunscr <path> The path to the preun script
--preunsh <program> The shell where the preun script runs
--postscr <path> The path to the post script
--postsh <program> The shell where the post script runs
--postunsh <program> The shell where the postun script runs
--postunscr <path> The path to the postun script
-q <requirements> Comma separated list of requirements
-r <release> Release of package
-s <summary> Summary of package

This help text appears in the terminal where /opt/scali/bin/scacpg was started. As can be seen
from the arguments listed above, the fields "Name", “Directory to package” and “Version”
are mandatory.
The values entered in the dialog are copied to the package and can be retrieved with
rpm -qip <file>.



For example, the information stored with version 1.1.2 of ScaCPG is:
• Name ScaCPG
• Relocations (not re-locatable)
• Version 1.1.2
• Vendor Scali AS - www.scali.com
• Release 1
• Build Date Sun 27 Jan 2008 12:38:01 PM CET
• Install date (not installed)
• Build Host ane.office.scali.no
• Group Utilities/System
• Source RPM scacpg-1.1.2-29.src.rpm
• Size 24609
• License commercial
• Signature (none)
• Summary Scali Package Installation Utility
• DESCRIPTION Scali Custom Package Generator
The requirements of a particular RPM-package can be displayed with the
rpm -qpR <file>
command.
ScaCPG deposits packages in the standard RPM creation directory, for example:
/usr/src/redhat/RPMS
The value from the buildarch-field is appended to the standard path to find the final
placement of the package. The default value for buildarch is i386, resulting in:
/usr/src/redhat/RPMS/i386
as the destination of the package.
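
As a sketch of a complete run, the /etc/passwd example above could be packaged as follows.
The package name, version, release and descriptive texts are illustrative, and the resulting
file name assumes the standard name-version-release.arch.rpm RPM naming:

# Stage the file under the package directory
mkdir -p /home/user1/package/etc
cp /etc/passwd /home/user1/package/etc/
# Build the RPM from the staged directory (run as root)
cd /home/user1
scacpg -d /home/user1/package -n site-passwd -v 1.0 -r 1 \
    -s "Site passwd file" -b "Distributes /etc/passwd to cluster nodes"
# Inspect the package information stored in the result
rpm -qip /usr/src/redhat/RPMS/i386/site-passwd-1.0-1.i386.rpm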

Richer functionality in the CLI


The RPM mechanism provides more than just simple archiving with files to be expanded in
specific directories of the host system. It also offers the feature of pre- and post-installation
scripts. This functionality allows the packager to include pieces of code which will be
executed on the hosts when installing or removing the package.
Since the scripts will run as root, care must be taken not to harm the host system. The usual
tasks delegated to the scripts correspond to tasks a system administrator would typically
perform when installing new functionality on a system, e.g., adding a cron job to run a program
regularly or configuring a daemon to run.



The inclusion of scripts is available for four situations:
• --pre - This script is executed just before the package is installed on the
system.
• --post - This script is executed just after the package is installed on the
system.
• --preun - This script is executed just before the package is uninstalled from
the system.
• --postun - This script is executed just after the package is uninstalled from
the system.
For each of these scripts the shell used to execute it must be named explicitly. The
command line version of ScaCPG uses the --pre* and --post* options to receive
information about the scripts and their shells. These scripts can exploit all commands
available in a particular shell programming environment as long as they are not interactive.
Anything which requires manual input from the user breaks with the idea that RPM
installation procedures must be non-interactive.
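
For illustration, a minimal sketch of a non-interactive post-install script; the script contents
and file paths are assumptions. First, a short script called post.sh:

#!/bin/sh
# Add a cron job that runs a site maintenance script every night (path is illustrative)
echo "0 2 * * * root /opt/site/bin/nightly-cleanup" > /etc/cron.d/site-cleanup

The script is then attached when building the package, using the CLI options described above:

scacpg -d /home/user1/package -n site-cron -v 1.0 --postscr ./post.sh --postsh /bin/sh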

Error handling
ScaCPG includes simple error handling. The directory to be packaged is checked for validity. More
complex error situations lead to a log file in /tmp which can be inspected before trying
again.



Chapter 13 - Licensing
Platform software products are now licensed using product keys. This chapter explains the
Platform Licensing system. The following topics are included:

Product key overview on page 434


Showing license status in the GUI on page 435
Showing license status using the CLI on page 435
Listing Product and Activation Keys Using Platform Manager GUI on page 436
Listing Product and Activation Keys Using Platform Manager CLI on page 436
Activation of Product Keys on page 436
Automatic Online Product Activation on page 437
Offline Product Activation Using the GUI on page 440
Manual Product Activation Using the CLI on page 442
Adding a New Product Key Using the GUI on page 443
Adding a New Product Key Using the CLI on page 443
Product Key Deletion Using the GUI on page 443
Product Key Deletion Using the CLI on page 444
Activation key deletion using the GUI on page 444
Activation Key Deletion Using the CLI on page 444
Upgrading / Replacing a Product Key with the GUI on page 445
Upgrading / Replacing a Product Key with the CLI on page 445
Installing a Product Key for Scali MPI Connect Using smcinstall on page 445

Product key overview


There is a one-to-one relationship between product key and software product, so you
will have a unique product key for each of your Platform software products.
There are two types of product keys: demo keys and permanent product keys. Demo keys
have an expiry date and cannot be activated. Both types of keys have the same format:
eight groups of four alphanumeric characters:
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX
You will be requested to enter your product key. The product keys can be found on the
activation form that came with your Platform software.
The product keys allow Platform software to work right out of the box and immediately give
you full enjoyment of your Platform software. For the software to remain operational, it
must be activated within 30 days of installation. Activation is simple and described in
further detail in the next section.

Showing license status in the GUI


To view the license status, navigate to Help -> View Platform Licenses.

Figure 154—Platform License Management view

You will see 5 columns:


• Column 1: Name of the Licensed product
• Column 2: License version
• Column 3: Number of available licenses
• Column 4: Expiration date
• Column 5: Activation status
When the key is added, but not yet applied (i.e. you haven't reconfigured, or applied
changes to the system) the first column will show the contents of the product key instead
of the product name for which the key is intended. Column 4 will show a date of expiration
if the key is a demo or is unactivated.
Once a permanent product key is activated, the date in column 4 will be replaced by the
word “Permanent”.

Showing license status using the CLI


Going to the CLI, you can use showproductstatus:

# pmcli showproductstatus



my-system 3YYG-2YLO-MFTW-KADN-MFXA-KAAA-AAAA-E25Y Platform Manager 5.2 0 2048 permanent Activated
my-system 3YYG-C3DM-MNXW-2ADB-NRWA-KAAA-AAAA-ERYX Scali MPI Connect 5.2 0 uncounted 11-jul-2008 Need activation
my-system 3YYH-G3LI-MEAH-G3LI-MEAA-KAAA-AAAA-HYUY (Not applied yet)
• Column 1: Name of the Licensed product
• Column 2: License version
• Column 3: Number of available licenses
• Column 4: Expiration date
• Column 5: Activation status
The first line of the example above shows an activated permanent license.
The second line shows an unactivated license that will expire on 11-jul-2008 unless it gets
activated.
The third line shows a newly added product key that is stored in the configuration database,
but not yet submitted to the PM license system (a reconfigure / apply changes is needed).

Listing Product and Activation Keys Using Platform Manager GUI
N/A

Listing Product and Activation Keys Using Platform Manager CLI
You can list the keys using listproductkeys.
# pmcli listproductkeys
3YYG-2YLO-MFTW-KADN-MFXA-KAAA-AAAA-E25Y
DIAC62O3PSX2435ZV7CXNAI44C5UOBSTAD7ENOWRPXMYYAAIAAAAAAA=
3YYG-C3DM-MNXW-2ADB-NRWA-KAAA-AAAA-ERYX
3YYH-G3LI-MEAH-G3LI-MEAA-KAAA-AAAA-HYUY
The first column of the output above lists the product keys: both the keys known by the
PM licensing system and the one added to the configuration database (but not yet added to the
licensing system).
The second column lists the activation key(s) associated with the product key in the first
column.



Activation of Product Keys
Product activation may be either automatic, in which case the software contacts Platform
and retrieves the activation key(s), or manual, in which case you will have to enter the
activation keys yourself. To obtain the activation key(s) manually you will have to go to the
Platform Product Activation web page and enter your product's details. There are both GUI
and CLI interfaces to the license activation functionality.

WARNING—Do NOT activate the product key on the compute nodes. If you install
the product key on one of the compute nodes first, you will activate the license for
a license server on it. Activation is irreversible. You can install/activate the product
key on ONLY ONE server. Install the product key ONLY on the Platform Manager
frontend or on the Gateway.

Automatic Online Product Activation


Product keys may be managed using either the Platform Manager GUI or the Platform
Manager CLI. If you do not have a license for Platform Manager you will have to use the
CLI. Automatic product activation requires that you have internet access to the Platform
activation server from the computer running the activation process. This will usually be the
Platform Manager frontend.

Automatic Online Product Activation Using the GUI

Navigate to Help -> View Platform Licenses

Figure 155—Right Click License Action List Menu



Figure 156—Online Activation



Figure 157—Apply Changes Pop-up

2 Select the license you wish to activate from the table (a license with status
'Need activation').

3 Right-click on the selected row and select Online Activation.


4 A dialog will appear. Please enter your personal/company details (company name and
email-address as a minimum)
5 Press OK. The License server will now connect to license.platform.com and retrieve the
Activation Key. The Activation Key is put in the Configuration DB ready to submit to the
PM licensing system.
6 To submit the Activation Key run Apply Changes (a pop-up box will appear in the GUI as
soon as the Activation Key is available)
7 When the 'Apply Changes' process is complete, press Refresh in the License View to see the
updated status. The activated license should now have status 'Activated'.

Automatic Online Product Activation Using the CLI

Enter:
pmcli activateproductkey <productkey> <company> <contactemail>
or
pmcli activateproductkey <productkey> <company> <contactemail>
[street] [street2] [city] [state] [postalcode] [country]
[contactname] [contactphone]
then
pmcli reconfigure my-system
# pmcli activateproductkey
3YYH-G3LI-MEAH-G3LI-MEAA-KAAA-AAAA-HYUY "Platform Inc"
"support@platform.com" contactname="my name"



Adding a new activation key
3YYH-G3LI-MEAH-G3LI-MEAA-KAAA-AAAA-HYUY:
DMAAK67Y6UVVW2XTYQ2NQIXL74EZGCBWPHAL65DWBO75DNYABAAAAAAA
pmcli reconfigure my-system
pmcli addactivationkey
3YYG-2YLO-MFTW-KADN-MFXA-KAAA-AAAA-E25Y
DMAAK67Y6UVVW2XTYQ2NQIXL74EZGCBWPHAL65DWBO75DNYABAAAAAAA
pmcli reconfigure my-system
The 'activateproductkey' command connects you to license.platform.com and retrieves
the Activation Key. The Activation Key is put in the Configuration DB ready to submit
to the PM licensing system.
Submit the Activation Key by running 'reconfigure' on the license server.

Offline Product Activation Using the GUI


You will need access to some computer with internet access.

Figure 158—License list menu



Figure 159—Offline Activation Pop-up



Figure 160—Product Key Pop-up

1 Open the 'Help' -> 'View Platform Licenses'


2 Select the license you wish to activate from the table (a License with status 'Need activation')
3 Right-click on the selected row and select 'Offline Activation'. A dialog will appear
containing the Product Key and the license server 'lmhostid' needed to acquire an Activation
Key from the Platform Product activation portal. For more information visit:
http://www.platform.com:88/activation/
4 Once you have submitted the relevant information and it is processed, your activation key
will be displayed on the web page.
5 Enter the activation key in the dialog field marked 'Please Enter Activation Code'.
6 Press 'OK'. The Activation Key is put in the Configuration DB ready to submit to the PM
licensing system.
7 To submit the Activation Key run 'Apply Changes' (a pop-up box will appear in the GUI as
soon as the Activation Key is available).
8 When the 'Apply Changes' process is complete, press 'Refresh' in the License View to see the
updated status. The activated license should now have status 'Activated'.

Manual Product Activation Using the CLI


Enter:
pmcli addactivationkey <productkey> <activationkey>
pmcli reconfigure my-system
pmcli addactivationkey
3YYG-2YLO-MFTW-KADN-MFXA-KAAA-AAAA-E25Y
DMAAK67Y6UVVW2XTYQ2NQIXL74EZGCBWPHAL65DWBO75DNYABAAAAAAA
pmcli reconfigure my-system



To get the information needed to complete the form on the Platform Product activation portal
http://www.platform.com/services/support/product-activation
use the command:
# /opt/scali/bin/lmhostid
The host ID of this machine is "b23c9ca9b90f3c7808f913824ac5f2c4d30e2e74"

Adding a New Product Key Using the GUI


Open the 'Help' -> 'View Platform Licenses'

1 Press the Add product key button.


2 Enter your new product key when the dialog appears. The Product Key is put in the
Configuration DB ready to submit to the PM licensing system.
3 To submit the Product Key run 'Apply Changes' (a pop-up box will appear in the GUI)
4 When the 'Apply Changes' process is complete, press 'Refresh' in the License View to see the
updated status.

Adding a New Product Key Using the CLI


Enter:
pmcli addproductkey <productkey>
pmcli reconfigure my-system
pmcli addproductkey
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX
pmcli reconfigure my-system

Product Key Deletion Using the GUI


Navigate to Help ->View Platform Licenses



Figure 161—Deleting Product from License Pop-up

1 Select the license you wish to delete from the table.


2 Right-click on the selected row and select 'Delete Product Key'
3 Confirm by pressing 'Yes' when the dialog appears. The Product Key is removed from the
Configuration DB and the action is scheduled for the PM licensing system.
4 To remove the Product Key from the licensing subsystem run 'Apply Changes' (a pop-up box
will appear in the GUI)
5 When the 'Apply Changes' process is complete, press 'Refresh' in the License View to see the
updated status.
Note: The Activation Key(s) from the deleted Product Key will also be
removed.

Product Key Deletion Using the CLI


Enter:
pmcli removeproductkey <productkey>
pmcli reconfigure my-system
pmcli removeproductkey
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX
pmcli reconfigure my-system

Activation key deletion using the GUI


N/A

Activation Key Deletion Using the CLI


Enter:



pmcli removeproductkey <productkey> <activationkey>
pmcli reconfigure my-system
pmcli removeproductkey
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX
DMAAK67Y6UVVWZXTYQ2NQIXL74EZGCBWPHAL65DWBO75DNAYBAAAAAAA
pmcli reconfigure my-system

Upgrading / Replacing a Product Key with the GUI


The basic process is this:
• Add a new product key. See “Adding a New Product Key Using the GUI” on
page 443.
• Remove the old product key. See “Product Key Deletion Using the GUI” on
page 443.

• Reconfigure / apply changes.

Upgrading / Replacing a Product Key with the CLI


In the CLI enter:
pmcli addproductkey
3YYH-G3LI-MEAH-G3LI-MEAA-KAAA-AAAA-FIES
pmcli removeproductkey
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX
pmcli reconfigure my-system

Installing a Product Key for Scali MPI Connect Using smcinstall
On all the nodes in the cluster run:
$ ./smcinstall -<t,m,o or b>
On the license server run:
$ ./smcinstall -p <demo key>

WARNING—Do not use the -n option on the license server. The -n option resets the flag in the
configuration file that identifies the server as the license server, so that server is no longer
designated as a license server in the configuration file. If you then use -p on the license
server you will get the error message: “This command should only be run on scalm_net_server”.



Appendix A - Bracketing and Grouping
To ease usage of Platform software on large cluster configurations, many of the command
line utilities have bracket expansion and grouping functionality.

A-1 Bracket expansion


The following syntax applies. The syntax does not allow for negative numbers, and <from>
does not have to be less than <to>.

<bracket> == "["<number_or_range>[,<number_or_range>]*"]"
<number_or_range> == <number> | <from>-<to>[:<stride>]
<number> == <digit>+
<from> == <digit>+
<to> == <digit>+
<stride> == <digit>+
<digit> == 0|1|2|3|4|5|6|7|8|9
If <to> or <from> contains leading zeros, then the expansion will contain leading zeros
such that the width is constant and equal to the larger of the widths of <to> and
<from>.

A-1.1 Example: Ranges

You can depict a range of consecutively named nodes, for example: n00, n01 and n02,
by entering:
n[0-2]

A-1.2 Example: Stepping through Ranges

If you need to step through a range of nodes to depict every third node, for example:
n00, n03, n06, and n09 in a range of 11 nodes (n00 through n10), then enter:



n[00-10:3]
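
The grammar also allows several numbers and ranges, separated by commas, within a single
bracket. For example, the expression below (node prefix is illustrative) expands to n1, n3,
n5, n6, n7, n8 and n9:

n[1,3,5-9]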

A-2 Grouping
Utilities that use scagroup will accept a group alias wherever a host name or host list is
expected. The group alias will be resolved to a list of host names as specified in the
scagroup configuration file. If a file .scagroup.conf exists in the user's home directory, it
will be used. Otherwise, the system default file /opt/scali/etc/scagroup.conf
will be used.
Each group definition starts with the keyword group at the beginning of a line, followed by a group
alias and a list of host names included in the group. The list may itself contain previously defined
group aliases, which will be recursively resolved. The host list may use bracket expressions,
which will be resolved as specified above.
If an entry starts with ’!’ the entry will be excluded (instead of included). The file may
contain comments; a comment is a line starting with #.
The examples below assume that you have six nodes named “n00” through “n05”.

A-2.1 Example: Creating a Single Node Group

Create a group named “apples” containing n00.


group apples n00
# ’n00’

A-2.2 Example: Multiple Groupings

You want to establish three groups. The first group is a single node that you will call
“master”. The second group contains multiple nodes that you will call “slaves”. “All” is
a super set of “master” and “slaves”. Enter:
group master n00
group slaves n[01-32]
group all master slaves

A-2.3 Example: Multiple Nodes in a Group

Create a group named “oranges” containing nodes 01-04.


group oranges n[01-04]
# ’n01 n02 n03 n04’

A-2.4 Example: Super Sets of Groups

Create a group named “fruits” containing both “apples” and “oranges” plus node 05.

Note: You can add as many elements as you like, subject to line length.



group fruits apples oranges n05
# ’n01 n02 n03 n04 n05’

A-2.5 Example: Subsets of Larger Groups #1

Create a group named “almost_all” from “fruits” that does not contain node 03
group almost_all fruits !n03
# ’n01 n02 n04 n05’

A-2.6 Example: Subsets of Larger Groups #2

Create a group named “one” from “fruits” containing all nodes except those in group
“almost_all”.
group one fruits !almost_all fruits
# ’n03’



Appendix B - Best Practices in Platform
Manager
Platform Manager provides an extensive set of command line interfaces (CLI) which give
the advanced user the option to perform datacenter and cluster management from the
command line. You can perform scripted management tasks with the CLIs. Remember:
you can use expansion brackets with zero padding. For more information see “Bracketing
and Grouping” on page 359.
Topics in this Appendix include:
Upgrading Overview on page 363
Upgrading Third Party Software on page 364
Upgrading from Scali Manage 4.4 on page 365
Clearing the Repository on page 369
Up-grading with pmcli on page 370
Moving a custom configuration from Scali Manage 4.4 on page 371
Installing LSF with pmcli on page 372
Installing PBSPro with pmcli on page 373
Installing Custom RPM’s / Local Packages on page 375
Creating a node with pmcli on page 376
Replacing a node on page 379
Creating a flat cluster with pmcli on page 380
Creating a private cluster with pmcli on page 381
Creating and Deploying an update Channel with pmcli on page 382
Adding an interface with pmcli on page 383
Adding Scali MPI Connect to a Platform Manager system on page 384
Adding entries to /etc/fstab using the CLI on page 386
Importing a MAC address table with CLI on page 387
Defining a Myrinet switch with pmcli on page 388



B-1 Upgrading Overview
Platform Manager depends on a large number of software packages from the OS and will
install the required packages automatically from the ISO images specified, but this may fail
if update packages are incompatible with the packages from the ISOs.
Because of internal dependencies, the same version of glibc and glibc-devel must be
installed.
Failures can and will result from the following conditions:
• An operating system may include glibc-2.3.0 and glibc-devel-2.3.0, but only
glibc is installed for a minimal package installation.
• The operating system vendor releases a software update including glibc-2.3.1
and glibc-devel-2.3.1. A software update tool installs the updated glibc-2.3.1
package, but glibc-devel is still not installed.
• Platform Manager requires glibc-devel installation, and will attempt to install it
automatically from the software repository. Say the software repository only
includes glibc-devel-2.3.0, not glibc-devel-2.3.1. Installation of glibc-devel-2.3.0 will
trigger an attempted installation of glibc-2.3.0.
You will get messages like:
“Transaction Check Error: package glibc-2.3.1 (which is newer than glibc-2.3.0) is already
installed.”
There are two solutions:
• When installing OS updates, make sure to install the complete set of updates
with internal dependencies. For instance, if you upgrade glibc you also need to
install glibc-devel.
• Provide Platform Manager a repository with all available software updates.
Platform Manager will then use packages from this repository when OS packages
must be installed to fulfill dependencies. Add a “--updates <directory>” option
when running the Platform Manager installation to specify the location of the OS
updates.



B-2 Upgrading Third Party Software
Always refer to your third party documentation.

B-2.1 Installing OS updates

If a system has PMFE installed and you want to update the OS beyond RH ES 4 U4 and
errata, there are two solutions:
• When installing OS updates, make sure to install the complete set of updates
with internal dependencies. For instance, if you upgrade glibc you also need
to install glibc-devel.
• Provide Platform Manager a repository with all available software updates.
Platform Manager will then use packages from this repository when OS
packages must be installed to fulfill dependencies. Then add an “--updates
<directory>” option when running the Platform Manager installation to
specify the location of the OS updates.

B-2.2 Upgrading from PBS Pro 7.1.xx to 8.0

Platform strongly recommends that you upgrade to PBS Pro 8. We refer to the Altair PBS
Pro 8.0 Administrator's Guide, chapter 5: “Upgrading PBS Professional”. The upgrade
from 7.1.xx to 8.0 is not transparent. You will find a good many configuration tips in
chapter 5 of the PBS Pro Guide.

CAUTION—In addition to reading Chapter 5 first, you must remember to stop any processes and take
a backup copy before you upgrade ANY software.



B-3 Upgrading from Scali Manage 4.4
Scali Manage 5.2 went through a major architectural redesign and uses several standards
and technologies in the back-end that were not part of Scali Manage 4.4. For this
reason there is no direct upgrade path from Scali Manage 4.4 to 5.3+, but there is an
easy-to-follow procedure to help you migrate from Scali Manage 4.4 to Scali Manage 5.3+
and later, up to Platform Manager 5.7+.
All commands are run on the Platform Manager Configuration Server.
The following example uses the Platform Manager 5 discovery procedure from the
command line interface (cli). We will assume the following in this example:
• You have 3 nodes named node001, node002 and node003.
• The nodes’ IP addresses are in the range 10.0.0.[2-4].
• The BMC-addresses are in the range 10.0.10.[2-4].

B-3.1 Back up And Restoration of Platform Manager

Routinely backing up your entire system is considered good practice. Sometimes a new
installation comes at a time when there has been a lot of activity since the last back
up. As with any installation you should back up your current set up before installing a
new version.
Dumping the database can get complicated because of the variations in PostgreSQL
versions from the sundry Linux distributions. Unfortunately, there are compatibility
issues among versions of PostgreSQL with regards to utilities - pg_dump and
pg_restore, for example.
So you may have procedural problems when using the pg_dump/restore utilities.
Platform will continue to develop a procedure/set of wrapper scripts to enable easy use
of pg_dump and pg_restore.
There is a much easier way to back up and restore the database, but note the following:
• Your Platform Manager Database will be down while dumping. This should not
be a problem as long as the GUI is not running and no PM actions are under
execution (for example: installations/discovery/reconfigure).
• A PostgreSQL database is in the end just files, but you must make sure that the
PostgreSQL daemon (postmaster) is _not_ running.
• The dump will be unreadable, but then the pg_dump optimized file format is
also unreadable.
• This procedure cannot be used for upgrading the PostgreSQL version (for
example: as a consequence of reinstalling the PMFE with a different OS
distribution).
This procedure will include all PostgreSQL configuration files set up by Platform Manager.

B-3.2 Creating a Partial Back-up


There are three steps:



• You have to stop the database processes
• You have to copy the data to a destination directory
• You have to start the database processes again

# make a Partial Backup


/etc/init.d/scacim-pgsql stop
cp -r --preserve /opt/scali/var/ <your backup destination dir>
/etc/init.d/scacim-pgsql start

B-3.3 Creating a Complete Back-up


You should also copy these other files to “a safe place”:
• Repository
• Images
• TFTP boot files

B-3.4 Restoring from a Backup


#Stop all processes
/etc/init.d/scacim-pgsql stop

# Copy everything to the runtime.


cp -r --preserve <your backup destination dir> /opt/scali/var/

# Restart everything
/etc/init.d/scacim-pgsql start

B-3.5 Update distributions and update levels

Before you begin check /etc/redhat-release or /etc/SuSE-release to see what


distribution and update level / service pack you are currently running.
Installation from CDs, DVDs and other media containing the CD ISO files is not
currently supported. Download the CD/DVD ISO files for your Linux distribution from
the Red Hat or Novell website before starting the installation of
Platform Manager. The ISO files you download must match the exact
distribution and update level. These images are large and can take hours to download.

B-3.6 Get a new Activation Key.

You will need a new license (activation key) to be able to perform the upgrade. If you
have not received one already, contact your local sales representative, or
sales@platform.com.



B-3.7 Uninstall the older version

In the CLI enter

/opt/scali/sbin/scauninstall

B-3.8 Read the tarball documents

Follow the instructions included in the 5.x.x tar-ball to run the bootstrap and initiate
the Platform Manager 5 installation.

B-3.9 Discover and install the nodes:

SSH_PASSWORD=<rootpw>
pmcli discover 10.0.0.[2-4]

B-3.10 Configure the BMC IP address, user name and password:

pmcli addbmc node[001-003] <BMC-username> <BMC-password> 10.0.10.[2-4]

B-3.11 Enable the BMC functionality by entering:

pmcli enablebmcpower node[001-003]


pmcli enablebmcconsole node[001-003]
pmcli enablebmcmonitoring node[001-003]

B-3.12 Add Platform Manager software and services to the nodes

pmcli enablemanagementofservers node[001-003]

B-3.13 Set root password

pmcli setrootpassword node[001-003] <rootpw>

B-3.14 Install Platform Manager software on the nodes and reboot:

SSH_PASSWORD=<rootpw>
pmcli installmanagementsoftware node[001-003]

B-3.15 Apply changes:

/etc/init.d/scance restart



When the nodes have been rebooted, they are up and running with Platform Manager
5 software.



B-4 Clearing the Repository
Running /opt/scali/libexec/scarepository.py --clear <channel_id> will clear an OS repository from
the CIM database. See the documentation on scarepository.py for more information.
Example

#/opt/scali/libexec/scarepository.py --clear
os_rhel4_u5_x86_64_ws
You can determine the channel id of the respective Operating System using:

# ls /opt/scali/repository/ | grep -i os_



B-5 Up-grading with pmcli
Refer to the release notes to see if you must install additional Red Hat or SUSE packages
before you can run the upgrade.
The compute nodes must be online during the upgrade. If any of them are down, there will
be error messages indicating that the upgrade failed for each node that was down. You
would then have to manually initiate the upgrade of those nodes once they are back
on-line. The compute nodes will be rebooted as part of the process.
Unpack the PM tar file, and run the "./upgrade" script. The script will upgrade both the
database and the software, on the head node and on the compute nodes.

# Upgrade database and software, on head node and compute nodes.
./upgrade
# If any compute nodes were down, manually initiate their upgrade once they are back online.
pmcli installpreinstalled <nodenames>

There is functionality in 5.6.1 that may require a few changes in your node configuration.
After you complete the upgrade, the first thing you should do is to change the node type
of the nodes to Altix, by running:

Example: Script 2 for Upgrading Altix nodes


# Change the nodetype of the nodes to Altix
pmcli changenodebrand frontend SGIALTIXXE240
pmcli changenodebrand n[01-64] SGIALTIXXE210
# Add the information about the BMC’s:
pmcli addbmc n[01-64] ipmiusername ipmipassword 10.0.1.[1-64]
# Enable the BMC for power, console and monitoring:
pmcli enablebmcpower n[01-64]
pmcli enablebmcconsole n[01-64]
pmcli enablebmcmonitoring n[01-64]
# Apply the changes to the whole system:
pmcli reconfigure all
# Verify that BMC communication works correctly
power n01 status



B-6 Moving a custom configuration from Scali Manage 4.4
Custom templates for kickstarts and autoyast are handled in the Platform Manager 5 GUI.

1 Start the GUI.
2 Browse to “Templates” in the selector to the left in the GUI.
3 If you have several different roles and nodes, make copies for each by right-clicking the
“Default Kickstart template for Red Hat Enterprise Linux” or “Default Autoyast template for
SUSE LINUX Enterprise Server” and choosing “Copy Template”.
4 Give the copies suitable names.
5 Right-click a template and choose “Edit” to adjust its contents.
6 Edit the nodes to use the new template:
7 Highlight the nodes that will use a template and right-click the selection.
8 Choose “Configure”; in the first tab, “General”, there is a drop-down menu where
you can select the new template(s).



B-7 Installing LSF with pmcli
After you upload LSF to the Platform Manager repository, use this script to install LSF using
the pmcli:

# Create the application system, say 'mylsfcluster':
pmcli addlsfapplicationsystem mylsfcluster
# Add master candidates to the application system:
# This command adds two master candidates (sc1435-1 sc1435-2)
# to the application system (mylsfcluster) with FlexLM server
# host name and port details. The last parameter defines the work
# directory, i.e. the fail-over directory.
pmcli addlsfmastercandidate 'sc1435-1 sc1435-2' mylsfcluster \
/home/license.dat FlexlmServerHost 1700 /usr/shared/lsfwork
# Add slave-only nodes to the application system:
pmcli addlsfstatichost sc1435-3 mylsfcluster
# Add dynamic hosts to the application system, if any:
pmcli addlsfdynamichost sc1435-4 mylsfcluster
# Ensure that the work directory is on NFS and mounted at
# "/usr/shared/lsfwork" on each master candidate.
# Reconfigure all the machines in the LSF application system.
pmcli reconfigure sc1435-[1-4]
Note: Run the LSF command 'lsid' for cluster information or 'bhosts' for
details about the hosts in the application system on each machine. A successful run
of these commands confirms that the application system is configured correctly.



B-8 Installing PBSPro with pmcli
Here are two scripts for PBSPro, dependent on version.

B-8.1 Installing PBSPro version 8 and earlier

This script is for versions of PBS before version 9.

# This script is for versions of PBS before version 9.


PMFE='myPMFE'
NODES='n[001-100]'
EXEC='/opt/scali/libexec/'
${EXEC}scarepository.py --addproduct pbs_8.0.0.63106-0_i386 \
Filebrowser /path/to/pbs-8.0.0.63106-0.i386.rpm
pmcli addpbsproserver ${PMFE} \ "L-00016-07264-my-pbspro-license-string"
pmcli addpbsproscheduler ${PMFE} ${PMFE}
pmcli addpbspromom ${NODES} ${PMFE}

B-8.2 Installing PBSPro version 9+

This script is for versions of PBS later than version 9.



# This script is for version 9 of PBSPro and later.
PMFE='myPMFE'
NODES='n[001-100]'
EXEC='/opt/scali/libexec/'
# Creates rpms of the Altairlm tarball
# Default stored as /tmp/flexlm/noarch/pbsflexlm-*.noarch.rpm
/opt/scali/libexec/extractpbsproflexlm /path/to/altair_flexlm.*.tar.gz
${EXEC}scarepository.py --addproduct pbs_9.0.0.71596-0_i386 \
Filebrowser /path/to/pbs-9.0.0.71596-0.i386.rpm \
/tmp/flexlm/noarch/pbsflexlm-9-i386.noarch.rpm
pmcli addpbsprolicenseserver ${PMFE} \
/some/path/altair_lic.dat
pmcli addpbsproserver ${PMFE} ${PMFE}
pmcli addpbsproscheduler ${PMFE} ${PMFE}
pmcli addpbspromom ${NODES} ${PMFE}



B-9 Installing Custom RPM’s / Local Packages
If you want to install custom-made RPMs on the system, take the following steps
in the GUI:
1 Select Software in the selector to the left in the GUI.
2 Right-click and choose Upload Software.
3 Add the software definition to the nodes:
4 Highlight the nodes that will have the software installed and right-click the selection.
5 Choose Configure.
6 Select the tab called “Third Party Software”.
7 Select the newly added software.



B-10 Creating a node with pmcli
For the example we will assume the following information:
• Cluster name: cluster01
• Node name: testpc07
• System password: rootbeer
• HWProduct: DELLPE1950
• IPSpecs: 10.0.0.7
• Default Gateway: 10.0.0.1
• SWProduct: sles9_sp3_i386
• DNSDomain: engr.hoohah.com
• DNSServer: 136.114.16.3
• NICName: 136.114.16.4
• Image: rootbeerfloat
• LanInterface: 136.114.16.5

B-10.1 Create the cluster and give it a name.

pmcli createcluster cluster01

B-10.2 Create nodes.

For a list of valid products for <hwproduct> you can run:

pmcli listproducts 11
For a list of distributions you can run:

pmcli listproducts 7
For a list of available images you can run:

pmcli listimages



Table 1—createnode
pmcli createnode <systemnames> <rootpassword> <hwproduct> <ipspecs>
<defaultgateway> <swproduct> [dnsdomain=DNSDOMAIN]
[dnsservers=DNSSERVERS] [nicname=nic1] [laninterface=eth0] [smgatewayname]
[nisdomain] [nisservers] [subnet]
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) [..]
rootpassword - password for root; can be given unencrypted or md5-encrypted
hwproduct - hardware product name. Run "platformmanage-cli listproducts
11" for a list of valid products.
ipspecs - ip address(es) [..]
defaultgateway - Default gateway for the system(s)
swproduct - software product name; specify an image name or distribution. Run
"platformmanage-cli listproducts 7" for a list of distributions, or
"platformmanage-cli listimages" for a list of available images.
OPTIONAL DESCRIPTION
dnsdomain - the name of DNS domain (default is no DNS)
--dnsdomain=DNSDOMAIN
dnsservers - ip of DNS servers (space separated)

--dnsservers=DNSSERVERS
nicname - the name of nic; default "nic1" --nicname=NICNAME
laninterface - the name of interface; default "eth0"
--laninterface=LANINTERFACE
smgatewayname - the name of Platform Manager gateway; default Cimserver
--smgatewayname=SMGATEWAYNAME
nisdomain - the name of NIS domain; Default is not to configure NIS
--nisdomain=NISDOMAIN
nisservers - the name of NIS servers (space separated)
--nisservers=NISSERVERS
subnet - subnet for the ipaddress

Enter all the values from our list in the script below:



# Create a node
pmcli createnode testpc07 rootbeer DELLPE1950 10.0.0.7 10.0.0.1 sles9_sp3_i386
engr.hoohah.com 136.114.16.3 136.114.16.4 136.114.16.5
# Add the node(s) to the cluster
pmcli addnodetocluster testpc07 cluster01
# Activate changes.
pmcli reconfigure
# Deploy the node. For a fresh installation:
pmcli install testpc07
# Or for installation of a captured image:
pmcli setimage rootbeerfloat testpc07
pmcli install testpc07



B-11 Replacing a node
Sooner or later you will have to change out a server due to a failure. Below is a general
recipe for doing just that.

1 Replace the node physically.


2 On the frontend run:
pmcli clearmacadress <systemnames>
pmcli discovernode node2
3 If you are using Static ARP see the note in the section about Static ARP on page 104.



B-12 Creating a flat cluster with pmcli
This script uses bracket expansion. For more information on expansion brackets and
grouping please see “Bracketing and Grouping” on page 359. Create a cluster with
five nodes. In this case:
• the password is fruitbowl;
• the hardware model is DELLPE1750;
• the DNS Domain is test.orchard.com;
• the ip address is 175.15.0.19.

# Create nodes
pmcli createnode v[1-5] fruitbowl DELLPE1750 10.0.0.[101-105]
10.0.0.1 rhel4_u3_i386_ws test.orchard.com 175.15.0.19
# Add a subnet called bmcnet.
pmcli addsubnet bmcnet 175.20.0.0 255.255.0.0
# Add bmc power.
pmcli enablebmcpower v[1-5]
# Add bmc console.
pmcli enablebmcconsole v[1-5]
# Create a cluster.
pmcli createcluster FruitSalad_cluster
# Add multiple (five) nodes to the cluster.
pmcli addnodetocluster v[1-5] FruitSalad_cluster
# Import a MAC address table.
pmcli importethers < /tmp/ethers
# Install software on the nodes.
pmcli addsoftware v[1-5] pm-5.7.1 "Scali MPI Connect"
pmcli addsoftware v[1-5] pm-5.7.1 "NTP"
pmcli addsoftware v[1-5] pm-5.7.1 "NISClient"
pmcli addntpservice v[1-5] install.test.scali.no
pmcli addnisclientservice v[1-5] test.scali.no
install.test.scali.no
# Restart
/etc/init.d/scance restart



B-13 Creating a private cluster with pmcli
For more information on expansion brackets and grouping please see “Bracketing and
Grouping” on page 359
Create a cluster with five nodes. In this case:
• the password is fruitbowl;
• the hardware model is DELLPE1750;
• the DNS Domain is test.orchard.com
• The ip address is 175.15.0.19.

# Create nodes
pmcli createnode v[1-5] fruitbowl DELLPE1750 10.0.0.[101-105]
10.0.0.1 rhel4_u3_i386_ws test.orchard.com 175.15.0.19
# Add a subnet called bmcnet.
pmcli addsubnet bmcnet 175.20.0.0 255.255.0.0
# Add bmc power.
pmcli enablebmcpower v[1-5]
# Add bmc console.
pmcli enablebmcconsole v[1-5]
# Add a gateway.
pmcli addsmgatewayservices <systemnames> [interface=eth1]
# Create a cluster.
pmcli createcluster FruitSalad_cluster
# Add multiple (five) nodes to the cluster.
pmcli addnodetocluster v[1-5] FruitSalad_cluster
# Import a MAC address table.
pmcli importethers < /tmp/ethers
# Install software on the nodes.
pmcli addsoftware v[1-5] pm-5.7.1 "Scali MPI Connect"
pmcli addsoftware v[1-5] pm-5.7.1 "NTP"
pmcli addsoftware v[1-5] pm-5.7.1 "NISClient"
pmcli addntpservice v[1-5] install.test.scali.no
pmcli addnisclientservice v[1-5] test.scali.no
install.test.scali.no
# Restart
/etc/init.d/scance restart



B-14 Creating and Deploying an update Channel with pmcli
Once you run this script, updates from this channel will be available and installed if needed
on subscribed nodes.

# Create an update channel for an OS channel (or other channels)


pmcli createupdatechannel rhel4_u3_x86_64_es rhel4_u3_x86_64_es_updates
# Add updates, in this case a package from U4, to the update channel:
/opt/scali/libexec/scarepository.py --addupdates rhel4_u3_x86_64_es_updates
/home/os/RedHat/redhat/4ES-U4/x86_64/RedHat/RPMS/bash-3.0-19.3.x86_64.rpm
# Subscribe nodes to this channel.
pmcli subscribechannel n001 rhel4_u3_x86_64_es_updates
# Apply changes to the cluster.
pmcli reconfigure



B-15 Adding an interface with pmcli
Add an ethernet interface to your system named “MyPenguin”. The host name of the server
is “MyPenguin-mgt”. The NIC’s name is “nic2”. The lanendpoint is “eth1”. We will use
bracket expansion syntax to include 3 IP addresses - 10.0.0.1 to 10.0.0.3 inclusive. The
syntax is as follows:

Table 2—addethernetinterface
pmcli addethernetinterface <systemnames> <nicname> <lanendpoint> [hostspecs]
[ipspecs] [subnet]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
nicname - name of nic (e.g. "nic1").
lanendpoint - name of the lanendpoint (e.g. "eth0")
OPTIONAL DESCRIPTION
hostspecs - (optional) hostname(s) [..]
--hostspecs=HOSTSPECS
ipspecs - (optional) ip address(es) [..]
--ipspecs=IPSPECS
subnet - subnet for the ipaddress
--subnet=SUBNET

So the proper usage is:

pmcli addethernetinterface MyPenguin nic2 eth1 MyPenguin-mgt 10.0.0.[1-3]

For more information see “Bracketing and Grouping” on page 359.



B-16 Adding Scali MPI Connect to a Platform Manager system
To get a list of the available products, for example Scali MPI Connect, run the first part of the
script below. Then look at the addsoftware command for its syntax:

# This script is for adding Scali MPI Connect.


# It returns a list of products for product-type 15 Scali MPI Connect.
# Available products are followed by “(Installable)“
pmcli listproducts 'Scali MPI Connect'
smc-5.5.1-rhel3-i386: 'Scali MPI Connect 5.5.1, RHEL3 - i386' (Installable)
smc-5.5.1-rhel3-i64: 'Scali MPI Connect 5.5.1, RHEL3 - ia64' (Installable)
smc-5.5.1-rhel3-x86_64: 'Scali MPI Connect 5.5.1, RHEL3 - x86_64' (Installable)
smc-5.5.1-rhel4-i386: 'Scali MPI Connect 5.5.1, RHEL4 - i386' (Installable)
smc-5.5.1-rhel4-i64: 'Scali MPI Connect 5.5.1, RHEL4 - ia64' (Installable)
smc-5.5.1-rhel4-x86_64: 'Scali MPI Connect 5.5.1, RHEL4 - x86_64' (Installable)
smc-5.5.1-rhel5-i386: 'Scali MPI Connect 5.5.1, RHEL5 - i386' (Installable)
smc-5.5.1-rhel5-x86_64: 'Scali MPI Connect 5.5.1, RHEL5 - x86_64' (Installable)
smc-5.5.1-sles10-i386: 'Scali MPI Connect 5.5.1, SLES10 - i386' (Installable)
smc-5.5.1-sles10-x86_64: 'Scali MPI Connect 5.5.1, SLES10 - x86_64' (Installable)
smc-5.5.1-sles9-i386: 'Scali MPI Connect 5.5.1, SLES9 - i386' (Installable)
smc-5.5.1-sles9-x86_64: 'Scali MPI Connect 5.5.1, SLES9 - x86_64' (Installable)

Table 3—addsoftware
pmcli addsoftware <systemnames> <productid> [force] [featurenames..]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
productid - software product identification
OPTIONAL DESCRIPTION
force - ignore product dependencies
featurenames - list of features

Use the information from the first part of the script to fill in the arguments for the
addsoftware command as in the second half of the script below:



# This is part two of the script for adding Scali MPI Connect.
# Use the syntax from addsoftware:

pmcli addsoftware mynode smc-5.5.1-rhel5-x86_64 'Scali MPI Connect'


# Apply changes:
pmcli reconfigure <nodenames>



B-17 Adding entries to /etc/fstab using the CLI
If the nodes are supposed to mount a remote file system from another server, it can be
handled through Platform Manager. Run these commands:

# Script for adding entries to /etc/fstab


pmcli addremotefs node[001-003] nfs <nfs_server>:/<exported>/<path> <mount_point>
scash -p -n node[001-003] /etc/init.d/scance restart
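
For illustration, with assumed values for the NFS server, export path and mount point:

# Mount /export/home from the server nfs01 on /home of nodes node001-node003
pmcli addremotefs node[001-003] nfs nfs01:/export/home /home
scash -p -n node[001-003] /etc/init.d/scance restart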



B-18 Importing a MAC address table with CLI
Providing a MAC address table for Platform Manager will greatly enhance the speed of
operations, even with DHCP listening enabled. If you are able to collect this information,
Platform recommends that you import a text file containing a MAC address
table in the following format:

<systemname> <MAC_Address>

Example: MAC TABLE


n001 AA:BB:CC:DD:EE:AA
n002 AA:BB:CC:DD:EE:BB
n003 AA:BB:CC:DD:EE:CC
n004 AA:BB:CC:DD:EE:DD

To import a MAC address table via the Command Line Interface:

# pmcli importethers
The table is read from stdin.
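
For example, if the table above is stored in /tmp/ethers (file name illustrative), it can be
redirected into the command:

pmcli importethers < /tmp/ethers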



B-19 Defining a Myrinet switch with pmcli
The Myrinet switch must be defined in Platform Manager and the command pmcli
findgmtopology must be run to configure the monitoring of Myrinet switches. The Myrinet
monitoring is done on the switch, not the nodes.
See findgmtopology on page 304 in the chapter on CLI for more details.

# High speed interconnects: adding Myrinet and Infiniband


pmcli addmyrinetinterface ${NODES} "gm0" "gm0"

pmcli addsoftware ${NODES} "GM_2.1.23" "driver"

pmcli addinfinibandinterface ${NODES} "ib0" "ib0"

pmcli addsoftware ${NODES} "IBGD_1.8.0" "driver"

# The Myrinet switch

pmcli createswitch myr1 172.19.99.98 myrinet

pmcli setmac myr1 eth "00:60:dd:48:f7:0d"

pmcli adddhcpclient myr1

# Myrinet topology

pmcli findgmtopology myr1 ${PMFE}



Appendix C - Glossary

Terms used in Platform Manager 5.7 documentation.

A
AMD64 - the 64 bit Instruction set architecture (ISA) that is the 64 bit extension to the Intel
x86 ISA. Also known as x86-64. The Opteron and Athlon64 from AMD are the first
implementations of this ISA.

ARCH - Template keyword - Architecture for this node (i386, x86_64 or ia64)

B
BOOTDEVICE - Template keyword - the device name of the network device used for the
network installation (e.g. "eth0")
BOOTFILESYSTEMTYPE - Template keyword - Filesystem type to be used on the /boot
filesystem (fast32 on EFI systems, ext2 on other systems)
BOOTLOADER - Template keyword - the Bootloader used on this system (elilo on ia64,
grub on other systems)

C
cluster - A cluster is a set of interconnected nodes functioning as a single server
conserver - The Platform Manager console server which relays messages to and from all
the console capable BMCs and the console switches in the datacenter. The conserver is
installed on the Platform Manager frontend and on Platform Manager Gateways.
CONSOLEKERNELOPTIONS - Template keyword - kernel options for console redirection
(or empty if console redirection is disabled)
CONSOLEPORTNR - Template keyword - the number of the com port used for console
redirection (1 for com1...)
CONSOLEPORTSETTING - Template keyword - settings for the com port used for console
redirection

CUSTOM:xxx - Template keyword - Your custom attribute xxx for node

D
DAPL - Direct Access Provider Library - DAT Instantiation for a given interconnect
DAT - Direct Access Transport - Transport-independent, platform-independent Application
Programming Interfaces that exploit RDMA
DET - Direct Ethernet Transport - Platform's DAT implementation for Ethernet-like devices,
including channel aggregation



DNSSEARCH - Template keyword - domain list for DNS search list

DNSSERVER - Template keyword - DNS Server

E
EM64T - the Intel implementation of the 64 bit extension to the x86 ISA. See AMD64.

F
fencing - a method of denying an errant node access to resources
frontend - a computer outside the cluster nodes dedicated to run configuration, monitoring
and licensing software

G
GATEWAY - Template keyword - default gateway
GM - a software interface provided by Myricom for their Myrinet interconnect hardware.

H
HA - High Availability - feature that provides a system with a redundant counterpart that
will assume the system’s role in case of failure of the original.
HASSELINUX - Template keyword - indicates whether SELinux is enabled on this node
HCA - Host Channel Adapter. Term used by Infiniband vendors referring to the
hardware adapter
HPC - High Performance Computer.
HOSTNAME - Template keyword - the host name of the system being installed
HTTPREPOSITORYURL - Template keyword - URL of the software repository the
operating system available via http

I
IA32 - Instruction Set Architecture 32 - the 32-bit Intel x86 architecture
IA64 - Instruction Set Architecture 64 - the 64-bit Intel Itanium architecture (EPIC)
IMAGE - Template keyword - file name for the image to be installed for image based
installation
Infiniband - a high speed interconnect standard available from a number of vendors
INITRD - Template keyword - file name for the initrd image (relative to the tftp-root)
INSTALLSERVER - Template keyword - IP-address for the installation server (the server
controlling the network installation process)



IPADDRESS - Template keyword - the IP address for the BOOTDEVICE network device

K
KERNEL - Template keyword - file name for the kernel image (relative to the tftp-root)
KERNELBOOTOPTIONS - Template keyword - Extra kernel options

L
LDAP - see Lightweight Directory Access Protocol
LIBXMLURL - Template keyword - the URL of the libxml rpm package to be installed on
this node
Lightweight Directory Access Protocol - a means of querying and modifying directory
trees using messages encoded in the BER binary format.

LSF - Platform LSF is software for managing and accelerating batch workload processing
for compute- and data-intensive applications.

M
MPI - Message Passing Interface - De-facto standard for message passing
MPI process - Instance of application program with unique rank within MPI_COMM_WORLD
mpid - the Scali MPI Connect daemon is installed on all nodes that should run MPI programs
mtu - maximum transmission unit. The default is 1500
Myrinet™ - an interconnect developed by Myricom. Myrinet is the product name for the
hardware. (See GM).

N
NAS - Network Attached Storage - uses file-based approach to storage. See also SAN
NETBOOTLOADER - Template keyword - network bootloader for this hardware platform
(efi, pxe, etherboot)
NETINSTALLFILE - Template keyword - URL of the configuration file for the network
installation (kickstart file, autoyast file or scalamari file)
NETMASK - Template keyword - the netmask for the BOOTDEVICE network device
NFS - Network File System - a protocol that allows clients to access files on other host
servers over a network.
NIC - Network Interface Card
NISDOMAIN - Template keyword - NIS domain for this node. Empty string if NIS is not
enabled



NISSERVER - Template keyword - IP address of the NIS server for this node. Empty
string if NIS is not enabled
node - a single computer in an interconnected system consisting of more than one
computer
NODEID - Template keyword - a unique identifier for a node in Platform Manager, in the
UUID format

O
OEM - Original Equipment Manufacturer

P
point-to-point link - a dedicated link that connects two nodes of a network
power - the Platform Manager power utility, which controls power on nodes via
power-capable BMCs or power switches in the Data Center. It is installed on the Platform
Manager frontend and on Platform Manager Gateways.
power - a generic term that covers the PowerPC and POWER processor families. These
processors are both 32- and 64-bit capable. The common case is a 64-bit OS that
supports both 32- and 64-bit executables. See also PPC64
POWER - the IBM POWER processor family. Platform supports versions 4 and 5. See PPC64
PowerPC - the IBM/Motorola PowerPC processor family. See PPC64 below.
PPC64 - abbreviation for PowerPC 64, the common 64-bit instruction set architecture
(ISA) name used in Linux for the PowerPC and POWER processor families. These
processors share a common core ISA, which allows a single Linux version to be built for
all of them.
PRODUCTIONBOOTIMAGE - Template keyword - etherboot boot image file name. (Used
only for etherboot)

Q
quorum - a method of fencing where a partition of the cluster must contain at least 51%
of the total nodes in the cluster in order to gain control over the cluster. The smaller
partition(s) are then fenced. See fencing.

R
REPOSITORYDIR - Template keyword - the directory on the repository server where the
software repository for the operating system is stored
REPOSITORYSERVER - Template keyword - IP-address for the repository server (the
server hosting software repositories)
REPOSITORYURL - Template keyword - URL of the software repository for the operating
system. An HTTP or NFS URL, depending on the selected installation method



RDMA - Remote DMA - read or write data in remote memory at a given address
RHAUTHLINE - Template keyword - authentication settings formatted as options to
authline in Red Hat kickstart
RHGATEWAYARG - Template keyword - the default gateway option part of the Red Hat
network configuration keyword (expands to --gateway X.X.X.X if a default gateway is
defined, and to an empty string if not)
RPMPYTHONURL - Template keyword - the URL of the rpm-python rpm package to be
installed on this node.

S
SAN - Storage Area Network - an array of storage devices available to clusters. Each
element is accessed by one node. SAN uses block-based approach to storage. See also
NAS
scadb_maintenance - the clean-up script for scadb (The historical monitoring system)
reduces the size of the database by reducing the sample-frequency for old data. It is
installed on the Platform Manager frontend.
scagmbuilder - the Compiler for GM (Myrinet driver) is installed on the Platform Manager
frontend and on all nodes that have GM enabled.
scaibbuilder - the compiler for IBGold (Infiniband driver) is installed on the Platform
Manager Server and on all nodes that have IBGold enabled.
scald - the Platform Manager License daemon. scald manages the licenses for Platform
Manager and Scali MPI Connect. scald is installed on the Platform Manager frontend.
Platform system - a cluster consisting of Platform components
SCALAMARIARCH - Template keyword - the architecture of the scalamari software to use
on this system (same as the architecture of the distribution)
scamond - the monitoring daemon for in-band monitoring is installed on all nodes.
scamond-mapper is the monitoring daemon for out-of-band monitoring. scamond-mapper
is installed on the Platform Manager frontend and on Platform Manager Gateways.
ScaMPI - Scali's MPI - First generation MPI Connect product, replaced by SMC
SCANCEJOB - Template keyword - identifies the installation job in Platform Manager
scaproxyd - the monitoring daemon for IPMI is installed on the Platform Manager frontend
and on Platform Manager Gateways.
scasmo-controller - the controller for the monitoring system. Installed on the Platform
Manage Server.
scasmo-diag.py - the diagnostic tool for the monitoring system is installed on the Platform
Manager frontend.
scasmo-factory - alarm and aggregation daemon for the monitoring system. Installed on the
Platform Manager frontend.



scasmo-history-controller - the controller for the historical monitoring system. Installed on
the Platform Manager frontend.
scasmo-server - the relaying service for monitoring system. It is installed on the Platform
Manager frontend and on Platform Manager Gateways.
scasnmpd - the SNMP daemon for inband monitoring. Installed on all managed nodes.
scasnmpxd - the SNMP subagent for inband monitoring is installed on all managed nodes.
scauninstall - the utility for uninstalling Platform Manager is installed on all nodes managed
by Platform Manager.
SKIPLICENSEKEY - Template keyword - assign it an empty string if you want to skip the
license key check
SMC - Scali MPI Connect - Scali's second generation MPI
SMCONNECTION - Template keyword - connection parameters needed to connect to the
Platform Manager configuration server
split brain - the cluster state when nodes become isolated from the rest of the cluster due
to communication failure. See fencing.
spy mode - view only
SSP - Scali Software Platform is the name of the bundling of all Scali software packages.
SSP 3.x.y - First generation SSP - WulfKit, Universe, Universe XE, ClusterEdge
SSP 4.x.y - second generation SSP - Platform Manager + SMC (option)
STAGE2 - Template keyword - file name for the stage2 image (relative to the tftp-root)
STONITH - Shoot The Other Node In The Head - fencing a node by resetting the node on
failure. See fencing.

T
TIMEZONE - Template keyword - time zone name, currently fixed to "UTC"
torus - Greek word for ring, used in Platform documents in the context of 2- and
3-dimensional interconnect topologies

U
UNIX refers to all UNIX and look-alike Operating Systems supported by the SSP, i.e.
Solaris and Linux.
USECONSOLE - Template keyword - If the console redirection should be enabled, assign
USECONSOLE the empty string. If not, assign it the value '#'.
USEEFI - Template keyword - If you want to use EFI, assign it the value of an empty
string. If not, assign it the value '#'.
USEHTTP - Template keyword - If you want the installation to use the HTTP protocol,
assign it the value of an empty string. If not, assign it the value '#'.



USELIBXMLURL - Template keyword - If the package at LIBXMLURL should be installed,
assign this the value of an empty string. If not, assign it the value '#'.
USENFS - Template keyword - If the installation should use the NFS protocol, assign it
the value of an empty string. If not, assign it the value '#'.
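
All of the USE* keywords above follow the same convention: the keyword expands to an empty string to leave a line in the generated installation file active, or to '#' to comment that line out. The shell sketch below only illustrates this convention; the repository address and the kickstart-style directives are hypothetical examples, not output of the actual template engine.

# Illustration only: how an empty string or '#' enables or disables a generated line.
USEHTTP=""      # HTTP installation enabled
USENFS="#"      # NFS installation disabled
echo "${USEHTTP}url --url http://172.19.99.1/repo"          # emitted as an active line
echo "${USENFS}nfs --server 172.19.99.1 --dir /export/repo" # emitted as a commented-out line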

V
VAR - Value Added Reseller

W
Windows refers to Microsoft Windows 98/Me/NT/2000/XP

X
x86-64 - see AMD64 and EM64T

Y
YUMCONF - Template keyword - the yum configuration for this node (content of
yum.conf)
YUMURL - Template keyword - the URL of the yum rpm package to be installed on this
node.



Index
A
Accounting 9
Add Alarm dialog 161
aggregated data 4
alias removal 261
all jobs 239
all subnets 260
AMD64 386
API layer 10
Parallel Command Execution 9
ARCH 386
available images 198, 269
B
BMC 14, 194, 196, 197
BMC capabilities 196
BMC console 194
BMC power control 195, 196
bonded interface 251
removal 261
BOOTDEVICE 386
BOOTFILESYSTEMTYPE 386
BOOTLOADER 386
bracket expansion 356, 377
bracket-expanded 192
C
Certified nodes 175
Characters transferred 183
CIM 4, 9, 10, 11, 143, 260
CIM database 10
CIM server 12
CIM/Middleware 12
CIM’s organization 10
Cimserver 266, 374
client side 10
Cluster
systemname 198, 199
cluster 11, 386
cluster life cycle 3
Cluster Resources 4
Cluster Topologies, Multiple clusters 8
Cluster Topologies, Single cluster 5
clusters 2
command 178



command line interfaces (CLI) 359
commodity platforms 5
Common Information Model 4
configuration 12
configuration database 12
Configure 23
Configuring the operating system 50
conserver 386
Console 14
console management controller 291
console switch port 305
CONSOLEKERNELOPTIONS 386
CONSOLEPORTNR 386
CONSOLEPORTSETTING 386
createSwitch 302
csv 178, 184
CUSTOM
xxx 386
Custom Dashboard Monitoring 9
Custom Provisioning 9
D
DAPL 386
dashboard 4
DAT 386
Data Center 21, 22
Data Center Selector 5, 19, 23, 32, 175
Data Center Selector view 23
data centers 2
day 178
default perspectives 22
Delete 24
Deployment services 10
DET 386
Detach 35
detach string 315
detachsubst 315
DHCP 10
DHCP server service 292
dialogs
Add Alarm 161
Direct Access Provider Library (DAPL) 386
Disabling the Static ARP 103
Discover node(s) 267
DMTF 4, 10
DNS services 299
DNSSEARCH 387
DNSSERVER 387
Double-click 35
E
Eclipse 4, 17
Edit Alarms View 160
effective work environment 33
Elapsed time 183

EM64T 387
Enabling Static ARP 103
enslaved interface 258
ether importing 259
ethers
export 259
event handling and response 4
existing template 308
External power/console switches 175
F
Fast View 35
fault-prediction algorithms 4
FCAPS 10
fencing 387
Floating Licenses 9
Folder 21
FORMAT 178
frontend 387
G
GATEWAY 387
Gateway 10
gateway 8, 9, 10
Gateway, default 373
gatewayip 255
gid 178
GM 387
group 178
group alias 357
group by 180
group slaves 357
grouping functionality 356
GUI Elements 20
GUI elements 20
H
HA 387
handling faults and root cause analysis 4
HASSELINUX 387
HCA 387
Heterogeneous Cluster 9
High Availability 387
History View 9
hostlist 357
HOSTNAME 387
hostname
renaming 269
HPC 387
html 178, 184
HTTPREPOSITORYURL 387
I
IA32 387
IA64 387
icon
Node is up 159



icons 17
IMAGE 387
Infiniband 387
Initial Cluster Deployment 3
initial cluster deployment 3
INITRD 387
INSTALLSERVER 387
intelligent provisioning 3
Interconnect Ethernet View 166
interconnects
Myrinet monitoring 167
interface hostname 263
interface list 259
IPADDRESS 388
K
KERNEL 388
kernel bonding
link 251
KERNELBOOTOPTIONS 388
Key 178
key=value 178
keyword 357
L
LDAP 117
large cluster configuration 356
Last month by name 179
latex 178, 184
Latex-reports 184
latex-tools 184
Launch button 175
layout 184
LDAP 388
Libraries 9
LIBXMLURL 388
licenses 12
Lightweight Directory Access Protocol 117, 388
Linux and Windows GUI Support 9
list of disabled ScaNCE subsystems 299
list of routes 255, 260
List results 181
List results by command name 181
List results by day 181
List results by month 181
List results by node name 181
List results by time of day 181
List results by Unix groups IDs 181
List results by Unix user IDs 181
List results by weekday 181
List results by year 181
log for job 239
logrotate 186
LSF 388

M
macaddress 256, 263
Major page faults 183
management engine service 293
Management Menus 173
maximize icon 35
Memory consumption (size * time) 183
Minor page faults 183
monitoring
Myrinet 167
monitoring API 10
monitoring inband server 294
monitoring out-of-band server service 294
monitoring relay server service 294
Monitoring services 11
monitoring the health of cluster resources 4
month 178
MPI 9, 388
MPI Launch 174
MPI Launch View 175
MPI process 388
MPI programs 175
MPI Start View 175
mpid 388
MTU 256, 264
mtu 388
Multiple Package Channels 9
Multiple rules 182
multiple times 178
Multiple Vendor Hardware Management 9
Myrinet
interconnect monitoring 167
Myrinet monitoring 167
Myrinet submenu 167
Myrinet™ 388
N
NAS 388, 390
NAT 10
NAT service 295
negative numbers 356
NETBOOTLOADER 388
NETINSTALLFILE 388
NETMASK 388
network 24
network installation template 307, 308
network switch 302
NFS 37, 41, 44, 74
NIC 388
NIS 10
NIS client service 300
NISDOMAIN 388
NISSERVER 389
node 178, 389



Node On/Off 175
Node Reinstallation 9
NODEID 389
nodes 11
NTP service 295
NTP/NIS slaves 10
Number of blocks read or written 183
Number of swaps 183
Number of times the application was invoked 183
O
object-oriented model 10
OEM 389
ongoing change management 4
Only list results matching command name ‘all2all’ 182
Only list results matching node name ‘n12’ 182
Only list results matching Unix group ID 501 182
Only list results matching Unix group name ‘users’ 182
Only list results matching Unix user ID 0 182
Only list results matching Unix username ‘root’ 182
opening terminal and console sessions 18
operating system 24
optimizing resource utilization and performance 4
P
Parallel Shell 174
parallel shell
scakill 326
scaps 327
scash configuration file 334
scatop 336
parallel shell command 176
parallel shell tools 11, 173
Parallel Shell View 176
PBSPro server 274
PBSProMom 273
pdf 184
pdflatex 184
Pending Changes Icon 143
Pending Changes Icon - changes applied 143
perspectives
working with 23
Platform Accounting system 177
Platform Manage 19
Platform Manage Application 18
Platform Manage architecture 10
Platform Manage Cluster Gateway (PM-CGW) 10
Platform Manage Command Line Interface 192
Platform Manage Features 3
Platform Manage Front End 12
Platform Manage Gateway 298, 301
Platform Manage graphical user interface 17
Platform Manage GUI 17
Platform Manage window 32
Platform Manage-Client location and interface 10

Platform Node Configuration Engine 12
Point-to-point 389
PostgreSQL database 12
postscript 184
POWER 389
power 389
power interface 321
power management controller 297
Power Mgt 174
power switch port 305
PowerPC 389
Processes Per Node 175
PRODUCTIONBOOTIMAGE 389
Provisioning engine 10
provisioning services 12
PXE 10
Q
queue 173
Queue Status View 4, 169
quorum 389
R
-r option 182
RDMA 390
Reboot 174
Remote Access -> Node Console 174
Remote Access -> Node Terminal 174
remote filesystem(s) 212
repository 9, 12
REPOSITORYDIR 389
REPOSITORYSERVER 389
REPOSITORYURL 389
RHAUTHLINE 390
RHGATEWAYARG 390
Rich Client Platform (RCP) 4
root password 271
root privileges 173
RPMPYTHONURL 390
RPMs 3
RULE 178
S
SAN 390
ScaAccounting log 177
ScaAcct 177
scaacct 178
scaacct_collect 186
scacp 323
scadb_maintenance 390
scagmbuilder 390
scagroup 357
scagroup config file 357
scagroup.conf 357
scahosts 325
scaibbuilder 390



scakill 326
SCALAMARIARCH 390
scald 390
Scali MPI Connect 335
Scali system 390
scalimanage-cli 192
scamond 390
scamond-mapper 390
ScaMPI 390
ScaNCE subsystem 298
ScaNCE subsystems 300
SCANCEJOB 390
scaproxyd 390
scaps 327
scash
configuration file 334
scasmo-controller 390
scasmo-diag.py 390
scasmo-factory 390
scasmo-history-controller 391
scasmo-server 391
scasnmpd 391
scasnmpxd 391
scasub 335
scatop 336
scauninstall 391
scripted management tasks 359
service removal 300
Shell Output area 176
Show View dialogue 32
Shutdown 174
single unit 179
Single Point Data Center 9
SKIPLICENSEKEY 391
SMCONNECTION 391
Software repository 10
software services 299
Specific day in month in year 179
Specific month in year 179
Specific year 179
specified software 24
split brain 391
spreadsheets 184
spy mode 391
ssh 9
SSH credential management 298
ssh keys 12
SSP 391
SSP 3.x.y 391
SSP 4.x.y 391
STAGE2 391
static arp configuration 257, 258
static arp mapping 260
stderr 175

stdout 175
STONITH 391
Stop button 175
subjobs 240
subnet 256
subnet removal 263
Subscribe nodes 379
Summarize CPU time (system-time + user-time) 183
Summarize elapsed-, system- and user-time 183
Summarize the following values 183
Switch option commands
createSwitch 302
system
removal 269
system component icons 17
System time 183
T
terminal and console sessions 19
text 178, 184
text-formatted report 184
TFTP 10
time range 179
time specification 179
timeofday 178
TIMEZONE 391
title bar 35
torus 391
U
uid 178
Unix 391
update 12
update channel 379
USECONSOLE 391
USEEFI 391
USEHTTP 391
USELIBXMLRUL 392
USELIBXMLURL 392
USENFS 392
user 178
User time 183
V
VAR 392
vendor 24
View 21
view 32
maximize or minimize 35
view, detaching 35
views 19, 23
Data Center Selector 23
detaching from a perspective 35
dragging and dropping 33
Edit Alarms 160



Interconnect Ethernet 166
maximizing and minimizing 35
moving 33
opening 32
Queue Status 169
reattaching 35
using Fast View 35
views, positioning 33
W
weekday 178
Window 21
Windows 392
Wizard 21
X
x86-64 392
xinetd 322
Y
year 178
YUMCONF 392
Z
zero padding 356

