Version 5.7
May 19, 2008 3:56 pm
Comments to: doc@platform.com
Support: support@platform.com
Copyright © 1994-5/20/08, Platform Computing Inc.
Although the information in this document has been carefully reviewed, Platform Computing Inc.
(“Platform”) does not warrant it to be free of errors or omissions. Platform reserves the right to make
corrections, updates, revisions or changes to the information in this document.
UNLESS OTHERWISE EXPRESSLY STATED BY PLATFORM, THE PROGRAM DESCRIBED IN
THIS DOCUMENT IS PROVIDED “AS IS” AND WITHOUT WARRANTY OF ANY KIND,
EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. IN NO
EVENT WILL PLATFORM COMPUTING BE LIABLE TO ANYONE FOR SPECIAL,
COLLATERAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING WITHOUT
LIMITATION ANY LOST PROFITS, DATA, OR SAVINGS, ARISING OUT OF THE USE OF OR
INABILITY TO USE THIS PROGRAM.
We’d like to hear from you
You can help us make this document better by telling us what you think of the content, organization, and usefulness of the information. If you find an error, or just want to make a suggestion for improving this document, please address your comments to doc@platform.com.
Your comments should pertain only to Platform documentation. For product support, contact support@platform.com.
Document redistribution and translation
This document is protected by copyright and you may not redistribute or translate it into another language, in part or in whole.
Internal redistribution
You may only redistribute this document internally within your organization (for example, on an intranet) provided that you continue to check the Platform Web site for updates and update your version of the documentation. You may not make it available to your organization over the Internet.
Trademarks
LSF is a registered trademark of Platform Computing Inc. in the United States and in other jurisdictions.
ACCELERATING INTELLIGENCE, PLATFORM COMPUTING, PLATFORM SYMPHONY,
PLATFORM JOBSCHEDULER, PLATFORM ENTERPRISE GRID ORCHESTRATOR, PLATFORM
EGO, and the PLATFORM and PLATFORM LSF logos are trademarks of Platform Computing Inc. in
the United States and in other jurisdictions.
UNIX is a registered trademark of The Open Group in the United States and in other jurisdictions.
Microsoft is either a registered trademark or a trademark of Microsoft Corporation in the United
States and/or other countries.
Windows is a registered trademark of Microsoft Corporation in the United States and other countries.
Other products or services mentioned in this document are identified by the trademarks or service
marks of their respective owners.
Third-party license agreements
http://www.platform.com/Company/third.part.license.htm
Third-party copyright notices
http://www.platform.com/Company/Third.Party.Copyright.htm
Table of Contents
Node On/Off
Preview Pending Changes
Subnet Right-click Menu
Opening Views
Moving views
Chapter 3 Provisioning the Data Center
About the Provisioning Process
Scalability Considerations
The Provisioning Process Assumptions
The Server Creation Wizard: Provisioning a Cluster
Selecting a Network Topology
Typical Cluster Workflow
Route-able Subnet Groups
Create Server Wizard: Creating a Cluster with a Private topology
Create Server Wizard: Configuring a network gateway
Create Server Wizard: Configuring node hardware
Create Server Wizard: Configuring the BMC ethernet interface
Create Server Wizard: Configuring the Operating System
Create Server Wizard: Adding the Configured Ethernet Interface
Create Server Wizard: Configuring the DNS and NTP
The DNS
The NTP
Create Server Wizard: Configuring the NIS and LDAP Client Services
Create Server Wizard: Configuring NIS
Configuring the LDAP Client Service
Adding LDAP Client Services in the CLI
LDAP and the NIS
Mounting a user's home directory without automount on a NIS
Creating a subnet
Adding Myrinet, Power and Console Switches
Myrinet Switches
Power Switch
Console Switch
Create Server Wizard: Adding Software options
Software Installation
Upload Software Wizard
About The Installation Process
Package-based Installation
Image-based Provisioning
Provisioning Process
Image Export
Image Import
Image Deployment
Diskless Image Provisioning
Installation templates
Customized Template
Modifying a template
Copying a template
Configuring a template
Deleting a template
Discovery and Managing existing servers
Prerequisites for Discovery
Node Discovery
The Discovery Process in the Platform Manager GUI
Running the Discovery Process using the Platform Manager CLI
Adjusting the level of management
Setting up "Out of Band monitoring" on an unmanaged server
Script for setting up out of band monitoring
Setting up PBSPro clients using an unmanaged PBS Pro server
Chapter 4 Configuration
Configuration Overview
Node System Settings View
Node Hardware Configuration View
Configuring Server Properties tab
BMC tab
About the BMC tab
Configuring the BMC
Configuring the BMC in the CLI
Power and Console tab
Configuring the Power and the Console in the GUI
Configuring the Console in the CLI
Configuring the Power in the CLI
Node Network Management View
Network Interfaces
The Network Interfaces tab
Static ARP
Enabling Static ARP in the GUI
Disabling Static ARP in the GUI
Changing IP Addresses
Modifying a subnet
Default Gateway tab
NAT Settings tab
DNS Settings tab
Provisioning Management View
Distribution Settings tab
Software Settings Tab
Node Service Management View
NTP tab
About the NTP tab
Adding an NTP service
NIS Client tab
About the NIS Client tab
Configuring NIS Client Service
Viewing Alarms
Editing an Alarm
Adding an alarm
Example: Adding a new alarm called “CPU load too high”
Custom Variables in Platform Manager Monitoring
Interconnect Monitoring View
Ethernet Monitoring
Myrinet Monitoring
Creating a Custom Monitor View
Chapter 7 Managing Systems
Overview of the Management Menus
Running MPI Jobs
Running Parallel Shell Commands
Chapter 8 Accounting Systems
ScaAccounting
Manually enabling the accounting functions in ScaAccounting
Starting ScaAccounting
ScaAccounting log
scaacct
Time Specification
Using scaacct with time start only
scaacct with a time range
Group-by Specification
Example: scaacct grouping by time specification range
Example: scaacct grouping by command
Rule Specification
Using scaacct to report on a specific user
Summarize Specification
Example: Using scaacct for summary of elapsed-, system- and user-time
Generating reports with scaacct
Using scaacct with the -f option
Using scaacct with pdflatex
Triggering Data Collection
Chapter 9 Reporting
Report Interface
Cluster Summary
Report Navigation
Opening a Report
Management and Inventory
Monitoring
Networking
Platform certified products
Workload management
BIRT Report Parameters
createlustrefs
formatlustrefs
listlustrefs
listremotefs
removelustrefs
removeremotefs
testlustrefs
Flexlm Commands
addflexclientconfigtoservice
createflexclientconfigdir
createflexclientconfigfile
createflexclientconfigserver
deleteflexclientconfig
listflexclientconfigs
listflexclientconfigsonservice
removeflexclientconfigfromservice
High Availability (HA) Commands
addhaethernetinterface
addhasharedfs
addheartbeatchannel
addservertohagroup
bindhaservicetointerface
createhagroup
disableautofailback
disablehagroupfencing
enableautofailback
enablehagroupfencing
listhagroups
listhainterfaces
listhapingrules
listhasharedfs
listheartbeatchannels
listhostedhaservices
listserversinhagroup
moveservicestohagroup
moveservicestosystem
moveservicetohagroup
moveservicetosystem
removehaethernetinterface
removehagroup
removehasharedfs
removeheartbeatchannel
removeserverfromhagroup
sethapingallips
sethapingoneofips
setlsbscriptha
setprimaryhaserver
showautofailback
showhagroupfencing
unbindhaservicefrominterface
unsetlsbscriptha
Image Management Commands
captureimage
exportimage
importimage
listimages
removeimage
Licensing Commands
activateproductkey
addactivationkey
addproductkey
listproductkeys
removeactivationkey
removeproductkey
showproductstatus
Logging Commands
canceljob
joblog
lastinstallationjob
listjobs
listjobsfornode
listsubjobs
removejob
LSF Commands
addlsfapplicationsystem
addlsfdynamichost
addlsfmastercandidate
addlsfstatichost
getlsfhoststatus
listlsfapplicationsystems
listlsfdynamichosts
listlsffeatures
listlsfmastercandidates
listlsfstatichosts
removelsfapplicationsystem
setlsffeatures
setlsfhostclosed
setlsfhostopen
Network Commands
addaliasedinterface
addbondedinterface
addethernetinterface
addinfinibandinterface
addmyrinetinterface
addroutablesubnet
addroute
addsubnet
clearmacaddress
clearmtu
createroutablesubnetgroup
detachslaveinterface
disablenetworkboot
disablestaticarp
enablenetworkboot
enablestaticarp
enslaveinterface
exportethers
importethers
listinterfaces
listroutablesubnetgroups
listroutes
liststaticarpmapping
listsubnets
listsystemdevices
removealiasedinterface
removebondedinterface
removeethernetinterface
removeinfinibandinterface
removemyrinetinterface
removeroutablesubnet
removeroutablesubnetgroup
removeroute
removesubnet
setinterfacename
setmacaddress
setmtu
Node Commands
changenodebrand
createnode
disablemanagementofservers
discovernode
discovernodemac
enablemanagementofservers
getguid
getkernelbootoptions
listaccounts
listmanagementofservers
listnodes
removesystem
renamesystem
setguid
setinstalledstate
setinstallserver
setkernelbootoptions
setrootpassword
showprovisioningdata
PBS Options Commands
addpbsprolicenseserver
addpbspromom
addpbsproscheduler
addpbsproserver
createpbsnodefile
removepbsprolicenseserver
removepbspromom
removepbsproscheduler
removepbsproserver
setpbslicense
setpbslicensefile
setpbsproclientsoffline
setpbsproclientsonline
Product and Software Options Commands
addproductconflicts
addproductprovides
addproductrequires
addsoftware
addsoftwareoftype
createdependencycapability
createlocalproduct
createupdatechannel
listchannels
listdependencycapabilities
listfeatures
listinstalledsoftware
listproductdependencies
listproducts
listproducttypes
listretrieveelements
listretrievemethods
listsubscribedchannels
loadsoftware
removedependencycapability
removeproductconflicts
removeproductprovides
removeproductrequires
removesoftware
removeupdatechannel
subscribechannel
unsubscribechannel
upgradesoftware
upgradesoftwareoftype
Services Options Commands
addaccountingcollectorservice
addaccountingservice
addbatchsystemaccountingservice
addconsolemanagementcontroller
adddnsclientservice
adddhcpclientservice
adddhcpserverservice
addjbossasservice
addldapclientservice
addmanagementengineservice
addmonitoringhistoryservice
addmonitoringinbandservice
addmonitoringoutofbandservice
addmonitoringrelayservice
addnisclientservice
addnatservice
addntpservice
addpowermanagementcontroller
addremotesyslogclientservice
addrshservice
addscarepositorycacheservice
addsmgatewayservices
addsshcredentialmanagementservice
bindservicetointerface
disablescancesubsystem
enablescancesubsystem
listdnsclientservice
listdisabledscancesubsystems
listhostedservices
listnisclientservice
listscancesubsystems
removeservice
removesmgatewayservices
unbindservicefrominterface
Switch Commands
createswitch
disconnectconsoleswitchport
disconnectpowerswitchport
findgmtopology
listswitches
removeswitch
setspeedoncomport
useconsoleswitchport
usepowerswitchport
Template Commands
addtemplate
gettemplate
listtemplates
removetemplate
replacetemplate
The console interface
Console
Console Configuration
TIP: Global Defaults
Configuration Blocks
String Replacement
Numeric Replacement
Escape Sequences
Using console with -e
Using console with -u
Using console with -w
Setting a new default escape
TIP: Locations of Files
TIP: Number of Fields
Some Known Bugs
The power interface
Index
Conventions
Here you will find basic terms and typographic conventions.
Terms
Unless explicitly specified otherwise, gcc (the GNU C compiler) and bash (the GNU
Bourne-Again SHell) are used in all examples. See “Glossary” on page 388 for more
term entries.
TERM DESCRIPTION
Bold - Program names, options and default values
mono space - Computer related: Command Line Interface and Shell commands, examples, environment variables, file locations (directories) and contents
# - Command prompt in a shell with super user privileges. Also used for commentary in pmcli script examples
% - Command prompt in a shell with normal user privileges
Platform Manager supports the installation and configuration of operating systems for
heterogeneous servers in your data center from “bare metal”, including RHEL and SLES,
as well as server-specific software. DNS, NIS, LDAP and NTP cluster and network
services are set up automatically using one of two wizards: the Upload Wizard and the
Server Creation Wizard. Use the Upload Wizard to upload OS and third party software
such as Scali MPI Connect. The Server Creation Wizard can set up MPI communication
using Scali MPI Connect. Once you have specified your configuration, you can either
apply it immediately, or save it to a central repository to be applied later.
With the intelligent provisioning feature, you have the option of deployment using the
RPM-based provisioning, or the image-based provisioning. RPM-based provisioning
allows you to build each node from its software components, such as operating system,
services, and applications. Image-based provisioning allows you to build a single node,
then replicate it by copying the entire image to other nodes. For example, if you want
to add new hardware to your cluster, you can have the Platform Manager software build
an image for you from the RPM’s and then install the image on the other nodes.
The Platform Manager graphical user interface makes ongoing management more
efficient. The GUI is based on the Rich Client Platform (RCP) framework application
named Eclipse. Wizards and views provide powerful and flexible means for deploying
servers and expanding clusters. You can easily refresh servers, either back to a known
point, or for security purposes.
Information management in Platform Manager is based on industry-standard data
models maintained by the Distributed Management Task Force (DMTF). The Common
Information Model (CIM) standard is the solid foundation for data storage in Platform
Manager’s open architecture.
Platform Manager supports integrated configuration changes and node administration
with features such as parallel shell, and console and power management. Auditing of
user jobs is also provided to support central management for your data center.
The monitoring feature in Platform Manager 5 gathers and displays node availability,
environmental and performance data in a dashboard format. You can configure the
default dashboard to suit your needs for monitoring, then save it as a new monitoring
view for later use. Views can display aggregate values such as average, maximum, and
standard deviation.
The GUI can drill down through a hierarchy of configurable objects, then perform
actions on your selection of components. In addition, Platform Manager's fault-prediction
algorithms, based on monitoring of a standard set of variables, indicate potential
problems, helping to ensure high cluster availability.
Platform Manager automates event handling and response. You can attach user-defined
levels of alarms to specific objects, define automated responses such as shutting down
a node, running a script, or sending an e-mail, and drill down quickly from aggregated
data, streamlining the root-cause analysis process.
Monitoring system performance and utilization is an ongoing task in any
data center. The Platform Manager monitoring interface quickly pinpoints parameters
for investigation. In the Queue Status View, you can select a queue, then drill down to
Cluster Topologies
In this section you will learn about the two basic cluster topologies.
Flat cluster
Figure 1 illustrates a cluster configured in a public network with the Platform Manager server
located on the public network communicating with the cluster nodes over a public LAN.
Node management occurs in-band over the public network.
Private Network
Figure 2 illustrates a cluster configured in a private network with the Platform Manager
frontend located on the public network communicating with the cluster frontend server.
The frontend communicates with cluster nodes over a private high-speed interconnect
LAN. Node management occurs in-band over the LAN.
Note: The Platform Manager frontend can reside on the frontend server, as well as on a
separate server on the network.
Multiple clusters
Platform Manager provides management of multiple clusters from a single point in the
data center. Through scalable architecture, Platform Manager incorporates multiple
clusters under a single Platform Manager frontend.
Figure 4 illustrates a multiple cluster architecture.
Note: Alternatively, the Platform Manager frontend can reside on the cluster gateway.
Additional Features
Note: Platform Manager Cluster Edition does not allow for multiple cluster management.
Services are separated into monitoring, provisioning, CIM, and power and console services.
These services interact with the nodes in the cluster.
The CIM repository is stored in a PostgreSQL database. The parallel shell tools are a
proprietary Platform implementation. The imaging engine is also proprietary.
• Provisioning with ScaNCE on page 11
• CIM/Middleware on page 12
• Console on page 14
• Power on page 14
• Monitoring on page 15
Figure 6 shows the interaction between the API layer and the services involved in
provisioning, monitoring, and saving the nodes’ state to the database.
The provisioning GUI interacts with the configuration database through provisioning
services. Within the provisioning service is the repository, which is tightly coupled with
the configuration database. The provisioning engine performs node deployment. After
installation, SSH keys are generated and collected for all nodes in the cluster. The cluster
configuration files and the SSH keys are packaged into an RPM named “ScaConfig” on the
frontend. This package is distributed to all the nodes in the cluster, and the Platform
Node Configuration Engine (ScaNCE) synchronizes the Platform Manager Server with the
configuration of the local nodes. If the software or passwords change on the local
nodes, ScaNCE will reference the Platform Manager frontend for the proper
configuration and then update the local node's configuration. ScaNCE does this for
services, licenses, etc.
Similarly, after boot, all nodes check for updated packages at the Platform Manager
frontend, so that if the configuration changes while a node is down, the node receives
a new ScaConfig package at boot, which triggers a reconfiguration of the node.
CIM/Middleware
Figure 7 shows how a PostgreSQL database stores configuration instances and static
instances that interact with the CIM server.
You use the Platform CLI to access the console client, which interacts with the console
server. The console server can communicate with:
• RAC
• iLO interfaces
• Console switch terminal servers
Power
Figure 8 shows the interaction between SSH on the API layer and the console and power
services for the nodes’ hardware.
If you want to power cycle a node, see “Adding Myrinet, Power and Console Switches”
on page 82, or see “Configuring Power and Console settings” on page 118.
Monitoring
Figure 9 shows the interaction between Scamon and the Monitoring services for the
nodes.
Platform Manager also monitors LSF and PBSPro queues. The data can be viewed in the
GUI from LSF and PBSPro directly in real time. You can look at the history from the
command line through the python interface. The factory module provides advanced
Main Window
At a basic level, the user interface for Platform Manager provides you with a set of
interactive forms that correspond to typical functions used to manage a cluster, or data
center. You can arrange the forms in the display to suit your needs, then save the
arrangement under a unique name. When you open Platform Manager, you see a window
similar to Figure 12.
Notice that the window has two menu bars and is separated into views. The views provide
interaction with the various individual functions of Platform Manager.
You select items in a menu on the tool bar, or by right-clicking an item in the Data Center,
then edit or monitor them in a view. When you first install Platform Manager Enterprise,
Table 2 contains a quick reference to the elements that comprise the Platform Manager
interface.
Element DESCRIPTION
Window - A window comprises one or more perspectives. The window title shows the name of the active perspective. Its item is highlighted in the shortcut bar.
View - Views are typically used to navigate a hierarchy of information, or to display properties for the active view. Except for the monitoring view, only one instance of a particular type of view may exist within a window. An inactive view may show information based on its last active state.
Dialog - Dialogs pop up when input or verification is required for an action.
Wizard - Wizards guide you through a process such as installing a cluster, adding nodes to an existing cluster, or uploading software.
Figure 13 shows an example of the Data Center view, which shows all the configurable
objects in your data center. It contains the Data Center Selector, which you use to select
objects on which you can perform actions.
The Data Center Selector view is central to the user interface because it lists all the
objects that you can configure in the data center.
If you right-click on a Node you can edit items as you drill down through the list. If you
right-click on a node name or a cluster, you can edit information about it by choosing
Configure from the drop-down list. See Figure 14.
• Install Cluster
• Delete
Configure
See Initial Cluster Deployment on page 3 for details on how to configure a
cluster.
Create Group
You can group nodes to suit your need. Common groupings include by machine
model and by OS.
To create a group
1 Right-click on a node.
2 Select Create NIS Server...
3 Enter a domain name in the text box.
4 Choose whether this node will be a master or a slave.
5 If the node will be a slave, enter the name of the master you want to use.
6 Click Create.
Capture Image
This selection starts up the Capture OS Image dialog. Please see
“Image-based Provisioning” on page 74 for more information.
Remote Access
Selecting Remote Access brings up a submenu where you can open a Console
or a Terminal.
Node On/Off
Clicking on this item will bring up a submenu of power selections for the node.
• Delete removes the cluster and the software from the servers.
• Remove from Cluster will remove the selected nodes from the cluster, but
leave the software intact. The nodes become independent servers.
• Delete
Opening Views
You can have several views in the main window at the same time.
Figure 26 shows a folder with an additional view. The new view opens in a folder next
to the existing view.
Moving views
You can position views in several ways for a more effective work environment.
Dragging a view below, above, or to the side of another view will cause the views to
dock in place. The space occupied by the stationary view will be redistributed between
the stationary view and the view you are dragging. As you drag a window, the mouse
pointer will become a black arrow whenever it is over a window boundary, indicating
that docking is allowed.
1 Right-click on the view name tab. A menu with several choices appears.
2 Select Move, then choose either View or Tab Group.
3 Drag and drop the view into another view or tab group.
The two views appear together in a folder initially. To separate them, as in Figure 27,
click and hold the tab of Installation Progress Status view, then drag it to the bottom
of the window using the highlighted area as a guide.
1 The Upload Software Wizard adds all the required software to the repository so that it can
be deployed.
2 The Create Node Wizard completes the provisioning process for new clusters.
The Create Node Wizard guides you through three phases:
1 Frontend server configuration. The configuration of the frontend reuses as much of the current
Platform Manager frontend operating system configuration as possible.
2 Defining the cluster nodes. This is done automatically, assuming you have made the
appropriate hardware connections and the nodes are active.
3 Installing the cluster. Once the frontend is configured and the nodes are defined, the cluster
installation can be completed and tested.
Scalability Considerations
The provisioning process keeps track of the number of nodes and whether the frontend
should also be used as a processing node. The provisioning process makes several
assumptions about the cluster:
• The network connected to the nodes is named n0 with an IP address of
10.0.0.1.
• Cluster nodes are named n1, n2, etc. with IP addresses starting at 10.0.0.2.
• The frontend can keep its current name and configuration for the external
interface (assuming nodes are connected to a private network).
• NAT is enabled on the frontend.
• Nodes are installed with the same OS distribution as the frontend.
• The first ethernet interface (eth0) is used for node installation.
• The frontend is configured as a NIS slave server if it is configured to be a part
of a NIS domain prior to provisioning.
Note: You could define a homogeneous set of nodes and redefine the brand
later. This option may be limited in the future to redefining brands from
generic nodes only (for example: Generic i386 or Generic x86_64).
WARNING—We do not recommend adding the Platform Manager frontend to the
cluster as a compute node. This will slow down the total performance of your
system.
Note: A text message appears at the top of the wizard’s windows to prompt
you as you fill out the forms (see Figure 28 on page 40 and “Install
wizard’s help text”).
You have to choose a network topology prior to deploying a cluster. A typical cluster
may have at least three networks:
The management network connects the frontend to all servers using Ethernet or
Gigabit Ethernet.
The console network is connected to all baseboard management controllers.
A fast interconnect network, such as Myrinet, may be connected to the frontend.
There are basically two types of network topologies. Each type may have any number
of variations dependent upon your computing needs. The main difference between the
two is the Cluster Gateway Server, found in a private cluster as you can see in the figure
below.
A cluster with a private network includes a gateway, behind which one places compute
nodes. To the outside observer the cluster appears to be a single node.
1 Choose to create a private subnet, a flat network, extend an existing cluster, or independent
servers.
2 Fill in the Number of servers. Enter the total number of nodes that will be created. You can
create multiple independent servers or multiple clusters and subnets. If you choose “Cluster
on private network”, the number of servers must include at least one gateway server and one
node. Platform Manager verifies network configurations and will not allow, for example,
500 nodes on a subnet with 255 IP addresses.
3 Enter the cluster name when the “Cluster flat...” or “Cluster on private...” options are
selected. If you choose the “Extend existing cluster” option, you are presented with a
drop-down box listing existing clusters. Choose which cluster you want to extend. This list
is disabled for all other options than “Extend existing cluster”.
4 The template functionality is an optional feature for all wizard creation options. The drop
down box lists all existing nodes and independent servers. All relevant configuration details
(server brand, BMC configuration, OS/image/template, subnets, gateway etc) will be
pre-loaded based on the selected server.
5 Click Next to move to the network gateway configuration dialog.
By configuring all the nodes on a private subnet (10.0.0.0), the frontend can make all
requests from the nodes appear as though they came from one machine. The NAT
(Network Address Translation) feature allows file systems outside the cluster to be
mounted through the frontend via NFS.
The NAT values are used to configure the iptables rules that manage IP packet filters. In the
subnet specification, the Network Address and Network Mask specify the addresses that
should be translated. Platform Manager configures iptables to alter the source
address in packets from the cluster nodes as they exit the cluster (via the frontend).
As a result, the cluster appears to the network as a single computer.
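The generated rules are not listed in this guide, but the effect corresponds to a standard iptables source-NAT (masquerade) configuration. A minimal illustration, assuming the private subnet is 10.0.0.0/255.255.255.0 and eth1 is the frontend's external interface (both assumptions):
# Enable IP forwarding on the frontend and masquerade packets leaving the private subnet
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 10.0.0.0/255.255.255.0 -o eth1 -j MASQUERADE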
If the server model you selected in the Node Hardware Configuration dialog has a
baseboard management controller, you can configure it.
Monitoring variables such as CPU temperature, fan speeds, and voltages requires
detailed knowledge about the computer. 3rd party monitoring software may be
installed.
Click on Add to configure your ethernet interface. You will get a dialog box as shown
in Figure 39.
DNS configuration lets you specify name servers and the search list for host name look
up. If more than three name servers are defined before Platform Manager is installed,
additional lines will appear in the dialog.
The Network Time Protocol (NTP) synchronizes a computer’s time to that of another
server, or reference time source. Failure to synchronize nodes can lead to strange
behavior in a cluster, so the installer allows NTP to connect to a particular source.
The DNS
To configure DNS:
1 Highlight the entry you want to remove.
2 Click Remove.
The NTP
To configure NTP:
Clusters are easier to use if user information, such as login names, passwords, or home
directories, is equally accessible to all nodes in the cluster. NIS ensures this kind of
homogeneity.
Configure the NIS service, if necessary.
Configure the LDAP service, if necessary.
Click OK
Remember that your changes will be saved in the database, but not enabled before you
run the configure procedure.
If you want to add specific users' home directories, use this example:
3. Edit "/var/yp/Makefile" for mapping of auto.home, auto.master, etc. Append auto.home, auto.master, etc. to the "all:" section of the Makefile.
5. Use the "addnfsexport" command to add an NFS export for your home directory. For example:
# pmcli addnfsexport nfsserver /export/home/ --client="*(rw,sync)"
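For step 3, a minimal illustration of the Makefile change and the map rebuild, assuming a default /var/yp/Makefile on the NIS master (the exact list of maps in the "all:" target varies by distribution):
# Excerpt from /var/yp/Makefile after appending the automount maps to the "all:" target
all: passwd group hosts rpc services netid protocols auto.home auto.master

# Rebuild and push the updated NIS maps from the NIS master
make -C /var/yp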
Creating a subnet
You can add Myrinet, power and console switches using the dialogs available when you
choose Provisioning -> Configure Switch.
Myrinet Switches
Add a Myrinet Switch.
1 Select Add Myrinet Switch from the menu. The Add Myrinet Switch dialog pops up.
2 Enter a name for the switch
3 Enter a valid IP address on your subnet.
4 Click OK. The new Myrinet switch will appear under the switches icon in the Data Center.
Power Switch
Adding a power switch allows you to boot and shut down your nodes remotely.
• Select Add Power Switch from the menu. The add Power Switch dialog pops
up.
The switch will appear under the switches icon in the Data Center.
Console Switch
Next you will configure the console switch.
The switch will appear under the switches icon in the Data Center.
There are two Platform Manager options that you can configure:
• Scali MPI Connect.
• Monitoring software
1 Choose the appropriate software version of Scali MPI Connect from the drop down list. The
list of software versions which are compatible with your hardware appears in the drop down
boxes.
2 Choose the appropriate Monitoring software version from the drop down list.
3 Clicking on Next will open the last dialog in the installation wizard.
4 You have two choices:
• immediate installation of the configuration
• store the configuration settings for installation at a later time.
5 Click on Finish. This only makes changes to the configuration database and will bring up the
Configuration Setup Completed dialog. Changes in the actual configuration of your
cluster will not happen until you choose to apply the changes.
Software Installation
Once you have defined your cluster, you must upload and install the software.
You can upload operating systems, local packages (RPM), and supported 3rd party
packages using the Upload Software Wizard. You may start the wizard by using the tool
bar menu Provisioning -> Upload Software.
Package-based Installation
The Platform Manager installation process is illustrated below.
During the initial installation, nodes are powered on one by one. Since the nodes
are set up to boot from the network (PXE), they issue DHCP requests to contact
a server offering boot images. The frontend responds to this and returns a bootloader
(pxelinux). The bootloader is then used by the node to retrieve a Linux kernel from
the frontend's TFTP server. This kernel requests a (package-based) kickstart
configuration file that guides the node through the process of configuring and
installing the operating system.
Image-based Provisioning
As an alternative to package-based provisioning, Platform Manager allows you to
capture a core image from one of your installed nodes and deploy it on other nodes.
You can also use package- and image-based installation methods together, or
separately.
Provisioning Process
Once you have a system tweaked and functioning, you can provision like nodes.
You can change the image information by right-clicking the image name.
You can edit the information in the view, then click Save, or Reset if you
change your mind.
Image Export
Platform Manager supports export of a captured image as a tarball.
Image Import
An exported tarball can be imported by other Platform Manager frontends.
Image Deployment
Once you have a system tweaked and functioning, you can provision like nodes.
• Capture an image, or images from existing nodes.
• Select a node, or group of nodes in the Data Center Selector.
• Right-click on the node name.
• Click Configure -> Provision.
Templates are used to differentiate nodes during installation, for instance to:
• change partitions on the server
• install packages
• change time zone information
• change keyboard layout
• use post-installation scripts
There are two types of templates:
• TFTP templates are configuration files for PXE Linux and are mostly used to
give kernel parameters to the installation kernel.
• Installation templates are Red Hat Kickstart files, SUSE Autoyast XML files,
Scalamari Kickstart files, or diskless templates.
Installation templates are associated with the installation job for a node during set up in
the installation wizard. You set the template usage in the OS/Image selection page in
the wizard. By clicking the Advanced button, you get the option of changing the
default templates, see Figure 65. For any given OS/Image, only the compatible
templates will be listed.
Customized Template
You can change the templates that are currently associated to the last installation job
for the node. Changing templates will require a new installation of the node. Default
templates cannot be deleted or changed, but you can copy them to create your own
templates.
Modifying a template
Right-click on a template in the Data Center to make changes in a template.
If you selected a default template, you can only “Copy”.
If you selected a custom template your choices are
• Copying the template.
• Configuring the template.
• Deleting the template.
Configuring a template
Copied or modified templates can be modified later on. You can use keyword
substitution in all of the templates. The format for the parameters is %(parameter)t,
where t represents the variable type. You can find a list of template keywords in
“Template Keywords” on page 285.
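A short illustration of the substitution format in a customized Kickstart template; %(hostname)s is a hypothetical string-type keyword used only for this example (the real keywords are listed in “Template Keywords” on page 285):
# Line from a customized Kickstart template; the keyword is replaced per node at
# installation time
network --hostname=%(hostname)s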
Deleting a template
Click on Delete.
Node Discovery
You may add existing servers to Platform Manager by using the Discovery functionality
in Platform Manager. You can also set the level of management by Platform Manager.
A discovered node may range from being indistinguishable from a server provisioned
by Platform Manager to being an almost entirely unmanaged server.
Note: You will not be able to re-install a discovered server using the RPM-method unless
the already installed operating system is uploaded to the Platform Manager Software
Repository. Extra software installed “by hand” on the discovered server will not be
installed unless it is pulled in via RPM dependencies or configured as “Local
Product/Software” packages in Platform Manager.
Running the Discovery process allows you to include pre-existing servers with varying
degrees of management.
1 In the tool bar menu, choose Provisioning -> Discover Existing Servers. The discovery
dialog opens. See Figure.
2 Add the IP address of each server that should be discovered to the list.
3 If you want the discovered system(s) to be part of a cluster, enable the “Group new servers
into a cluster” check-box and enter the cluster name in the text-field “Cluster name”.
4 Set the password options. If the servers are set up to use a password for remote rsh/ssh login,
supply the root password in the “Root password” text-box. If the log-in needs no password,
check the “Password-less login enabled” check-box. By definition, the first step of the three
phase discovery process described above will always run. The check-boxes “Enable
Management Software” and “Install Software” determine if the next two steps of the process
should be run. All three steps of the process are enabled by default. Disabling “Install
Software” will postpone installing the management software on the discovered server.
Disabling “Enable Management Software” will result in a fully unmanaged server. Both of
these steps can be run later via the right-click menu in the Platform Manager GUI main
selector.
5 Click Finish.
The discovery process starts. The Active Discovery Jobs view appears. When the
discovery process is finished, the new cluster (if any) and discovered server(s) will
appear in the Data Center selector.
For each step in the three step discovery process there is a Platform Manager CLI
command:
You can discover nodes by using the discovernode command in the Platform Manager
CLI.
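A hedged sketch of this first step from the command line; the discovernode argument shown (the server's IP address) is an assumption, see “discovernode” in the command reference:
# Set the root password for remote login if password-less ssh/rsh is not configured
export SSH_PASSWORD=<root password>

# Discover the existing server (argument form is an assumption)
pmcli discovernode 172.19.50.10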
Table 4—enablemanagementofservers
pmcli enablemanagementofservers <systemnames> [--servername=SERVERNAME]
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) {[..]}
OPTIONAL DESCRIPTION
--servername=SERVERNAME - the name of the server; defaults to the Platform Manager frontend
This command adds Platform Manager software and services to a system in the
configuration database. It is primarily used for adding Platform Manager to newly
discovered systems. This command only affects the configuration database. Use
“installmanagementsoftware” after “enablemanagementofservers” to
deploy the software without reinstalling the operating system.
You can install Platform Manager software on system(s) without reinstalling the
operating system using “installmanagementsoftware”; a sketch follows the
prerequisites below.
This will log into the system using remote shell (rsh or ssh), and install
the Platform Manager software and services on top of an existing linux
installation.
• The system must be present already.
• The system must have the Platform Manager software and services enabled
in the Platform Manager configuration database prior to running
“installmanagementsoftware”. Please see “discovernode” on
page 269 and “enablemanagementofservers” on page 269 for adding
existing systems to the Data Center.
• Root login without password must be enabled, or the SSH_PASSWORD
environment variable must be set to the root password of the system(s) to
be discovered.
• The node must be set for “installmanagementsoftware”.
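A minimal sketch of the whole sequence, using the enablemanagementofservers syntax from Table 4; the installmanagementsoftware argument shown is an assumption:
# Register Platform Manager software and services for the discovered system in the
# configuration database
pmcli enablemanagementofservers n5

# Provide the root password if password-less login is not enabled
export SSH_PASSWORD=<root password>

# Deploy the management software on top of the existing Linux installation
# (argument form is an assumption; see "installmanagementsoftware")
pmcli installmanagementsoftware n5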
In some cases full management of the existing servers might be undesirable. This
might be the case for servers used or configured by other management systems or for
servers that are under strict security policies and/or service agreements not related to
Platform Manager. For these cases partial management might be more advantageous.
Two examples of partial management are “Out Of Band monitoring only” and PBS Pro
clients configured with Platform Manager using an unmanaged PBS Pro server.
• Reconfigure.
From Platform Manager 5.6 onward, you can have an existing PBS server set up outside
the environment managed by Platform Manager. The compute nodes managed by
Platform Manager may be configured as PBS clients (MOMs) providing computing
resources to the unmanaged PBS server. Using either the GUI or the CLI you can:
• Discover the PBS server as an unmanaged system.
• Define the system as a regular PBS server configuration.
• Set the compute nodes as PBS clients (MOMs) affected by the unmanaged PBS
server.
• Apply changes to provisioning.
• Configure the compute nodes as PBS clients.
A small utility script is added to the CLI to create a file listing all PBS clients in a cluster.
This file can be used to add the managed PBS clients to an unmanaged PBS server.
The script creates a Qmgr file that defines all nodes in the cluster. This should only be
necessary for unmanaged PBS servers. It will only list compute nodes that are PBS
clients (MOMs). The file can be used to add nodes to the PBS server with the command
'qmgr -c < nodefile.qmgr'.
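A hedged sketch of the sequence, assuming createpbsnodefile takes the cluster name and writes the Qmgr file to standard output (both are assumptions; see “createpbsnodefile” in the command reference):
# On the Platform Manager frontend: generate the Qmgr file for the managed PBS clients
pmcli createpbsnodefile cluster1 > nodefile.qmgr

# On the unmanaged PBS server: register the nodes, as described above
qmgr -c < nodefile.qmgr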
Configuration Overview
You can tweak and fine tune the running of your clusters once you have gone through
provisioning. You must apply your changes in configuration before they take effect.
There are two menus to get you started:
• the drop down menu bar at the top of the screen (see Figure 73)
• right-clicking on an element in the Data Center view (see Figure 74).
TIP: When working with several nodes at the same time, you may use
shift-click and edit accordingly, if all your selected options are to be applied to
all of the selected nodes.
1 Select Set Root Password. The Password Configuration dialog pops up.
2 Enter the password in the textbox.
3 Confirm the password in the second textbox.
4 Click Save.
There are three fields in the Server properties tab: Hostname, Server model and
Architecture.
BMC tab
You can configure several BMC settings in this tab. The BMC tab has its counterpart in
the CLI commands. See BMC commands for further information.
When you are finished configuring BMC functionality, you can restore the previous
values by clicking on Reset, or save the new settings by clicking on Save.
1 Click on the line in the table that contains the hostname you want to configure.
2 Click on the Power management server name field and choose a switch name or “manual”.
3 If you have set up power switches, choose the appropriate power port.
4 Click on the Console management server name field and choose a switch name or “manual”
or “BMC”.
5 If you have set up console switches, select a console switch port.
6 Click on Save.
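If you prefer the CLI, the equivalent association can be made with the usepowerswitchport and useconsoleswitchport commands; the argument order shown below (node, switch, port) is an assumption, so check the command reference for the exact syntax:
# Attach node n1 to port 3 on power switch psw1 and port 3 on console switch csw1
# (argument order is an assumption; see "usepowerswitchport" and "useconsoleswitchport")
pmcli usepowerswitchport n1 psw1 3
pmcli useconsoleswitchport n1 csw1 3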
Network Interfaces
• Hostname - The first column contains the external host name. You may not
edit this from here.
• IP name - You may edit the IP name by clicking on the field and entering a
new value.
• Device - You may not edit the device name (for example change from eth0
to eth1) from here.
Static ARP
Enabling Static ARP provides the selected node with a complete mapping from MAC
addresses to IP addresses for all other nodes on the same subnet. Enabling Static ARP
speeds up TCP/IP communication on the affected node's subnet.
Note: If you have the Static ARP feature enabled, special care must be taken when making
changes that cause the MAC address to change for an interface. This usually
applies to changing network adapter or replacing a complete node. If a MAC
address changes you need to delete the MAC address for that interface in Platform
Manager and apply those changes before you continue.
Disabling Static ARP removes the mapping from MAC addresses to IP addresses
from the kernel. This forces the affected node to request subnet address mapping
upon the first access to every other node on that particular subnet.
Note: We recommend that you disable Static ARP on the Platform Manager frontend
itself.
You can enable and disable Static ARP on nodes, either through the pmgui, or the
pmcli. Please see enablestaticarp on page 261, disablestaticarp on page 260
and liststaticarpmapping on page 263 for further details.
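A minimal sketch of the CLI round trip, assuming each command takes the node's system name (an assumption; see the command reference pages cited above):
# Enable static ARP on node n1 and inspect the resulting MAC-to-IP mapping
pmcli enablestaticarp n1
pmcli liststaticarpmapping n1

# Disable static ARP again, for example on the Platform Manager frontend
pmcli disablestaticarp frontend1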
Changing IP Addresses
There are two methods for changing IP addresses.
• Change the IP configuration and reinstall.
• Manually change the IP on the compute nodes, and change the Platform
Manager configuration to match.
To add a new network interface
Modifying a subnet
If you want to modify an existing subnet, start in the Data Center.
You may set an individual gateway for this node by clicking on the Gateway IP address
in the active line.
You may define a default gateway by entering the IP address of the node you want
to be the "head node" or "gateway" for the private network into the text box for the
default gateway. This will affect only selected nodes or all nodes in the window if none
are selected.
1 Click on Set.
2 Click on Save.
The NAT (Network Address Translation) values are used to configure the iptables that
manage IP packet filters.
In the subnet specification, the Network Address and Network Mask specify the
addresses that should be translated. Platform Manager configures the iptables to alter
1 Right-click on one or more nodes in the Datacenter view and choose Configure -> Network.
2 Click on the NAT Settings tab to get to the View and tab as pictured in Figure 85
To add a NAT server
The DNS tab contains information about the Domain Name Services.
1 Right-click on one or more nodes in the Datacenter view and choose Configure -> Network.
2 Click on the DNS Settings tab and right-click on the node you want to configure.
• Click on Configure to get the pop-up Edit DNS Configuration dialog as shown
in Figure 88 .
Note: Remember that the changes will not take effect before you apply the changes. See
“Applying Changes” on page 146.
The first tab is the Distribution Settings tab. You can see what the architecture is as well
as which operating system the node uses and how the software was distributed.
The first column contains the external host name. You may not edit this from here.
The HW architecture of your node is listed here in the second column.
The three values of the "Installation method" column are "image based", "pre-installed"
and "package based."
Note: "Pre-installed" will appear when a node has been "discovered". You will not be able
to re-install the node because the repository in Platform Manager might not
have a copy of the software running on discovered nodes. You must instead uninstall the
software of a discovered node, then install compatible software found in the
repository.
You re-configure the distribution by right-clicking on the selected lines and selecting
"Configure".
You may change the remaining values in the next columns by right-clicking on the
line(s).
• From this dialog you can change the method of distribution via the radio
buttons.
• You may select an appropriate OS in the "Installation data" combo box. The
choices are an installation template or a TFTP template.
Click on OK or Cancel when you are through.
Note: Remember that the changes will not take effect before you apply the changes. See
“Applying Changes” on page 146.
The second tab is the Software Settings tab. It shows you what is installed on your
node.
NTP tab
This tab is for configuring the NTP server you will use to keep your cluster in sync.
• The first column contains the external host name. You may not edit this from
here.
• The second column has a checkbox for enabling the service.
• The third column contains an IP address and the server name of the NTP
server.
Maintain a directory of text-based tables in your network using the NIS Client tab.
• Click on Save.
The LDAP Client tab provides a means to query and modify directories of text-based
information tables in your cluster using the Lightweight Directory Access Protocol.
The Network File System tab helps you configure your system so that you may share
resources among your nodes.
• Hostname - This is the name of the host in the entry. You cannot edit this field
from here.
• Enable Service - Check this box to enable the service on the node.
• Exported Directory List - this column shows the exported directories on a
particular host.
3 Click on Edit. The Edit Remote NFS dialog opens. (Figure 124)
4 Make your changes in the fields.
5 Click on OK. Remember that you must apply changes for them to take effect.
The first column contains the external host name. You may not edit this from here.
4 Select the available version of Scali MPI Connect from the combobox.
5 Enable the Infiniband checkbox if your server supports it, to let Scali MPI Connect use the
existing software stack on the node(s).
6 Enable the Myrinet checkbox if your server supports it.
7 Click on OK.
8 Click on Save. (Or Reset to remove changes)
Note: Be sure to read the LSF documentation for details on a proper set up.
The LSF tab is similar to "PBS Pro" tab. It has the following columns:
• Hostname - The hostname of the node.
• LSF Host Type - This field shows the type of LSF ("Master Candidate",
"Dynamic Host" OR "Static Host"). If this is empty then LSF is not configured
for the node.
• LSF Cluster Name - This field shows the name of the LSF cluster of which the
node is a member. If this is empty then LSF is not configured for the node.
• LSF Version - This field shows the LSF software version configured on the
node. If this is empty then LSF is not configured for the node.
• License Server/Port - This field shows the hostname (or IP address) of the
license server and port number on which the licensing service is accessible
on the license server. If you do not provide the port number, then the port
number defaults to 1700. For example: 1700@PMServer or
1700@172.19.50.
Note: If the License server/port column field is empty, then the service will check out
license features from a local file.
You can select multiple nodes in the table viewer and right click to show a menu of
two options: "Configure" and "Remove".
1 Right-click on the table row(s) of the selected node(s). See Figure 113 .
1 Click on "Configure". This will open the "Edit LSF Configuration dialog". See
Figure 114 .
2 Select "Master Candidate" for the field labeled "LSF Host Type" in the "Edit LSF
Configuration" dialog.
3 Enter a new LSF cluster name or you can choose a cluster name from the list of LSF clusters
created in the field labeled "LSF Cluster".
4 Select the version of the LSF from the field labeled "Software Version".
5 Select the license file for the LSF version using the "Browse" button.
6 Click "OK".
7 To save the configuration click on the "Save" button in the LSF Tab.
About Scali_DHCPServerService
Scali_DHCPServerService decides which interfaces will answer DHCP
broadcast requests. Only interfaces that are bound to this service can
answer DHCP broadcast requests.
DHCPServerService should normally be hosted by Platform Manager Servers
and Platform Manager Gateways. It assigns IP addresses to hosts that have
DHCPClientService and support network OS installation via PXE/EFI/Etherboot.
See the pmcli commands enablenetworkboot and disablenetworkboot.
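A hedged sketch, assuming both commands take the node name and the interface device name (the exact arguments are an assumption; see the command reference):
# Allow node n1 to PXE-boot over eth0, and prevent network boot on its second interface
# (argument form is an assumption; see "enablenetworkboot" and "disablenetworkboot")
pmcli enablenetworkboot n1 eth0
pmcli disablenetworkboot n1 eth1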
About Scali_ManagementEngineService
Scali_ManagementEngineService manages the installation and configuration of
compute nodes.
Platform Manager assigns the network interface for installation by looking at all
available network interfaces that have PXE enabled (see the pmcli commands
enablenetworkboot and disablenetworkboot) on the same network as a
Scali_ManagementEngineService. If multiple possible interfaces are found
Platform Manager will use an interface from this selection.
Scali_ManagementEngineService is not a daemon that runs all the time but a
service that is started on demand. Platform Manager requires a
DHCPServerService to be hosted by the same server for OS installations.
DHCPServerServices should normally be hosted on Platform Manager Servers
and Platform Manager Gateways.
By default, a Platform Manager Ethernet interface is set automatically for node installation.
Often, a public or corporate network is not required for provisioning. To
prevent an Ethernet interface from being used for node installation, unbind the
Scali_ManagementEngineService service from the interfaces that are not
required for node installation. This ensures that only the remaining
interfaces are used for node installation.
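A minimal sketch using unbindservicefrominterface, assuming it takes the hosting system, the service name, and the interface device (all argument forms are assumptions; see the command reference):
# Stop the public interface eth1 on the frontend from being offered for node installation
# (argument form is an assumption; see "unbindservicefrominterface")
pmcli unbindservicefrominterface frontend1 Scali_ManagementEngineService eth1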
About Scali_RepositoryChannelService
RepositoryChannelService manages the repository of software available for
installation in the cluster. The RepositoryChannelService uses the Apache web
server to make the packages and images available for download to the compute
nodes.
The yum utility may use any one of the interfaces/networks to which this service is
bound when installing RPMs on compute nodes. You can find out which interface/network
yum is using by entering:
# cat /opt/scali/etc/yum.conf
Do not use interfaces or networks that are unreachable. If one is unreachable
the node may not install completely. In this case there will be an error message
while restarting the "scance" service. Platform advises you to unbind
"Scali_RepositoryChannelService" from unused interfaces.
Note: Figure 115 shows that "License Text" field is disabled for the host type other
than "Master Candidate".
You can configure a Dynamic Host Service by following these steps, starting in the
LSF tab of the node services view:
PBSPro tab
• You may choose between client and server by clicking on the column entry.
• Click on Configure.
PBSPro Clients
Topics included in this section are:
• Setting up PBS Pro clients using an unmanaged PBS Pro server on
page 138
This script creates a Qmgr file that defines all nodes in the cluster. This should only be
necessary for unmanaged PBS servers. This will only list compute nodes that are
PBS clients (MOMs). You can use this file to add nodes to the PBS server with the
command
'qmgr -c < nodefile.qmgr'
In the example below we will be setting up ScaAccounting (Batch system
accounting) on an unmanaged PBSPro server. We will assume the following:
• “dl360g3-4” is the name of the Platform Manager frontend.
• “vega” is the name of the unmanaged PBS server
If you have problems with this section, please contact professional services.
Remote NFS
Configure your node(s) to access file systems on other servers as if the systems were on
the node.
As you can see in Figure 119 there are three columns in the Remote NFS tab.
• Hostname - a field for the system name of your node. This is not editable from
here.
• Enable NFS - a checkbox for enabling or disabling the service
• Mount Point - a field for comma separated lists of mount points
3 In the New Remote NFS Configuration select the remote system from the combobox.
4 Enter the remote directory.
5 Enter the mount point(s). If you have multiple points, separate the list with commas.
6 Enter the mountpoint options. If you have multiple options, separate the list with commas.
7 Click on OK to make changes to the database and exit the dialog.
1 Select the node(s) you want to configure in the Remote NFS Management tab, as in
Figure 122.
Example: addremotefs
Use “addremotefs” to mount a remote directory.
pmcli addremotefs sc1435-3 nfs sc1435-6:/export/test1 /mnt/test1
Example: listremotefs
Use “listremotefs” to list remote directories in the system.
# pmcli listremotefs sc1435-3
Your changes take effect only after the configuration has been applied to the nodes. There
are two ways to apply configuration changes in the GUI; a CLI sketch follows below.
• You can right-click on the selected node(s) in the Data Center view, then click
Apply Configuration.
• You can click Apply Configuration in the “Provisioning” menu at the top
of the screen.
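From the CLI, the same effect is achieved with the reconfigure command used in the High Availability procedures later in this guide, for example:
# Apply the stored configuration to a single node, then to a range of nodes
pmcli reconfigure n1
pmcli reconfigure "n[01-64]"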
Note: Workload Management systems (such as PBS Pro) often have their own
HA options. You need to buy your vendors’ own HA option for their product
line. Please contact them for their solutions.
Platform Manager employs the Active-Passive model of HA. This means that there is a
secondary node with a fully redundant instance of the Platform Manager Server and/or one
secondary node for each protected gateway, nodes which lie passively offline until their
associated primary nodes fail. This configuration requires the most “extra” hardware of all
the topologies, but it also assures the greatest protection against systems failure.
To access services on the cluster during a failure of the cluster host there must be what is
called a “cluster logical host”. This is a network address or a host name which is not tied
solely to any given node, but rather linked to services provided by the cluster. This allows
the database to be restarted on a redundant node during a failure. That network
address/host name is then temporarily assigned to the redundant node so users may
interact with the database.
Note: The Platform Manager/HA option requires a product key for the HA
feature on each cluster. The key must be set to allow activation on each node
instance AND must be activated. Please obtain licenses from
your Platform sales representative or visit http://www.platform.com.
The High Availability feature would normally be installed for the Platform Manager Server
and/or for gateways in your clusters. Compute nodes do not generally need this
functionality.
2 Define the SECONDARY node using the Server Creation Wizard (extend cluster /
independent server) or the pmcli.
3 OPTIONAL: If some or all of the defined systems are already installed, make sure all systems are
installed and working properly. It is also possible to install all systems defined above at this
point before continuing. Follow the optional steps even if the systems are already
installed.
4 Create the HA group and add the floating ethernet interfaces. The IP addresses of these
interfaces will be started on the ACTIVE gateway and moved in case of a failover.
pmcli createhagroup hagw
pmcli addhaethernetinterface hagw nic1 eth0 hagateway.ext 172.19.5.200
pmcli addhaethernetinterface hagw nic2 eth1 hagateway.int 10.0.0.200
5 Add the heartbeat software to the primary gateway (for Platform Manager version 5.7.1):
pmcli addsoftware gw1 pm-5.7.1 Heartbeat
6 Add heartbeat channels to the primary HA gateway. It is recommended to use the “unicast”
and “serial” methods and as many different channels as possible for redundancy.
pmcli addheartbeatchannel gw1 unicast eth0 172.19.5.22
pmcli addheartbeatchannel gw1 unicast eth1 10.0.0.22
pmcli addheartbeatchannel gw1 serial /dev/ttyS0
7 Set the udp-port if necessary. Default value is 694. There are two common reasons for
overriding this value:
• There are multiple HA groups using the “broadcast” channel method on the
same subnet, or
• This port is already in use in accordance with some locally-established policy.
pmcli addheartbeatchannel gw1 udpport 694
8 Move the services to the HA group. This controls which services should be run on the gateway
as HA services. There are two ways to do this:
• Move all HA compatible services from the primary gateway to the HA group.
pmcli moveservicestohagroup gw1 hagw
• Move individual HA compatible services with the "moveservicetohagroup" command
(see moveservicetohagroup in the command reference).
9 Start up HA on the primary gateway. This will enable an HA group running Heartbeat with
one member system.
pmcli reconfigure gw1
10 Set the gateway of the private nodes to be the internal floating HA group IP address
pmcli removeroute “n[01-64]” 0.0.0.0
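A default route pointing at the internal floating address configured in step 4 can then be added; this is only a sketch, adjust the node range and address to your network:
pmcli addroute "n[01-64]" 0.0.0.0 0.0.0.0 10.0.0.200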
11 Configure the nodes on the private net to use the new gateway IP. Everything should now
work normally as if HA was never there.
pmcli reconfigure “n[01-64]”
14 Add heartbeat channels to the secondary HA gateway. See note on udp-port above.
pmcli addheartbeatchannel gw2 unicast eth0 172.19.5.21
pmcli addheartbeatchannel gw2 unicast eth1 10.0.0.21
15 Install the software on the secondary node for the HA services selected. This is a crucial step
in making HA work. If a failover occurs and the software is not installed on the secondary
gateway, the failover procedure will fail and both gateways will go down. Which software
should be installed depends on the “moveservices” command(s) used above.
pmcli addsoftware gw2 pm-5.7.1 'MonitoringRelay' 'Install server' 'Console
Server' 'Power Server' 'MonitoringOutofband'
pmcli addheartbeatchannel gw2 serial /dev/ttyS0
16 Reconfigure the primary gateway and install the secondary gateway if not installed earlier.
pmcli reconfigure gw1
pmcli install gw2
pmcli reconfigure all
17 If the secondary gateway was already installed, reconfigure the gateway systems
pmcli reconfigure gw1
CAUTION—Read Installing Platform Manager High Availability on a Gateway and understand the
procedure before reading this!
WARNING—Do not run the GUI during the HA Platform Manager Server setup!
• OPTIONAL: If some or all defined systems (in addition to primary Platform
Manager Server) are already installed, make sure all systems are installed and
working properly.
Note: It is also possible to install all systems defined above at this point
before continuing.
• Configure the HA group
• Create the HA group
• Add the floating ethernet interfaces
• Add the primary Platform Manager Server with heartbeat channels.
• Reconfigure the primary Platform Manager Server to enable the new floating
IP(s).
• pmcli reconfigure PMServer-1
• Move the services to the Platform Manager Server HA group. The first example
script will show you how to move them all at once. The second, a partial script,
will show you how to move the services individually.
• Move the Platform Manager Configuration Database to shared storage. Mount the
shared storage manually and start the DB again.
• OPTIONAL: Move the monitoring history database.
• Add the shared disk mount for the Platform Manager Configuration Database to
the HA group.
• OPTIONAL: Add the shared disk mount for the Monitoring History database
to the HA group.
• Move the repository to shared storage.
• Mount the repository in the shared storage.
• Move and mount tftpboot
• Move and mount images
• OPTIONAL: Move and mount the Monitoring history database
• Add the shared disk mount to the HA group:
CAUTION—This last command is subject to change. Check the current release notes or contact
Platform support for current commands.
This script assumes you have done the first two steps in the process above:
Monitoring menu
Figure 129 shows the monitoring dropdown menu at the top of the screen.
Node status is visually indicated by the Platform node server icon. There are three
states for this icon:
To open a standard monitor view, choose Monitoring in the tool bar, then select a view
from the drop-down list.
Platform Manager has a back select feature that allows you to work interactively with
monitor views. To back-select nodes from a monitor view, select a variable whose data
are displayed in one of the views. The nodes being monitored for that variable are
highlighted in the Data Center Selector.
Platform Manager provides a set of standard monitor views, as well as a method to
create custom monitor views. The standard monitor views are:
• Alarm View on page 162
• Custom Variables in Platform Manager Monitoring on page 167
• Interconnect Monitoring View on page 168
• Creating a Custom Monitor View on page 172
Alarm View
Platform user-definable alarms are part of the monitoring system. Users may define
their own events to trigger an alarm based on any combination of comparative
operations on any available monitoring variable. It is possible to select between a set
of pre-defined actions to be performed when an alarm is triggered. Combined with
the ability to define custom monitoring variables, this makes for an extremely flexible
and powerful solution.
Viewing Alarms
To view alarms choose Monitoring -> Alarm Log. This will show a dynamically
updated list of events that have triggered alarms. Use the Edit Alarms view
to define new alarms and edit existing ones.
Editing an Alarm
Figure 132 illustrates the Alarms view specifying where you perform the
individual steps necessary to define an alarm.
The Edit Alarms view contains a list of current alarms with status information,
as well as buttons to perform operations on selected alarms. Apart from Add
Alarm, all of the other buttons require that an alarm is selected first to work
properly.
The model used to describe a condition that should set off an alarm is to combine
a series of boolean expressions using AND, OR, or other logical functions. The
boolean expressions are constructed by comparing a monitoring variable to a
reference value by means of a logic operator (<, <=, >, >=, !=, ==).
For existing alarms, when an alarm is selected in either the Alarms editor, or the
Alarms log, the corresponding nodes are selected in the Data Center Selector.
Custom alarms can be defined based on any available monitoring parameter and
will trigger customized, or default actions.
Adding an alarm
Start in the Monitoring drop down menu:
Keep this small script running to log output from an iostat command to the file in
the var directory. The benefit of this script, while not particularly elegant, is that it
relies only on standard tools in the sysstat package.
#!/bin/sh
while true; do
  iostat 2 2 | grep sda | tail -1 > /var/log/iostat.tmp
  mv /var/log/iostat.tmp /var/log/iostat.log
  sleep 1
done
# Copy the script to all nodes and start it.
scash -p "/tmp/iostat.sh &"
This keeps the log file updated with values from the iostat command. You
need to update ScaMond.conf to get these values into Platform Manager.
Restart the scamond and scasmo services. The Platform Manager GUI will then
display the new custom variables.
Ethernet Monitoring
The Platform Manager GUI provides a cluster wide interconnect overview of all
Ethernet-related monitoring variables through the standard Interconnect View.
Myrinet Monitoring
When you manage systems with Myrinet, Platform Manager adds a Myrinet
submenu item to the Interconnects menu in the Interconnect view. The Platform
Manager GUI provides a cluster wide interconnect overview of all
Myrinet-related monitoring variables.
To monitor Myrinet systems:
You can monitor the status of jobs that you have sent to the queuing system using
the LSF Queue Status View or the PBS Queue Status View.
You can create custom views for monitoring a wide variety of activity in your data
center. Platform Manager provides a generic view, the Monitor View, that you can modify
to include data from variables that you specify.
• Click OK. The Monitor View opens. The data you specified is displayed in the
view.
Note: Only the Node Console option is available to ordinary users. All other items
require root privileges.
The options in the Power Management menu are described in Table 12.
ScaAccounting
ScaAccounting is used in batch system accounting. Topics in this section include:
• Manually enabling the accounting functions in ScaAccounting on page 180
• Starting ScaAccounting on page 180
• ScaAccounting log on page 180
ScaAccounting and the PBS Pro Server must run on the same node. When you enable
a PBS server in the Platform Manager GUI, Platform Manager enables
ScaAccounting as well.
To enable ScaAccounting through the pmcli enter:
pmcli -h addbatchsystemaccountingservice <systemname>
Starting ScaAccounting
Enter:
/etc/init.d/scaaccounting start
ScaAccounting log
Normally you will find the ScaAccounting log here: /var/log/scaaccountd.log. This log will
contain a list of error messages and a list of files that were parsed.
Enter
/opt/scali/sbin/scaacct -h
to get help information.
Table 13 contains the syntax and options found in scaacct.
By default the scaacct command reports per user accounting data for the previous month.
(Immediately after an install no data has been gathered and the report will be empty.
Processes are accounted at time of termination).
Time Specification
The time specification can be given as a <start> <stop> time range or a single <time>
unit as listed below:
Group-by Specification
The default is to group by user, but there are several other options. Please note that
you can specify two group-by options to create a two-dimensional table. If you do not
want to group at all you may use the --listall option instead; this will list all records.
Accounting report
07/01/07 00:00:00 - 07/31/07 23:59:59
Group CPU time
root 457.48
rpc 0.57
apache 0.21
platformusers 191.47
Total 649.73
Accounting report
07/01/06 00:00:00 - 07/31/06 23:59:59
Command \ Node delfi2-1 n1 n2 n3 Total
0anacron 0.01 0.00 0.00 0.00 0.01
S90psacct 0.02 0.01 0.01 0.01 0.05
S90scasmo-contr 0.00 0.00 0.00 0.00 0.00
S90xfs 0.08 0.00 0.04 0.03 0.15
yphelper 0.01 0.00 0.00 0.00 0.01
ypwhich 0.00 0.01 0.00 0.00 0.01
ypxfr 0.11 0.00 0.00 0.00 0.11
yum.cron 0.00 0.00 0.00 0.00 0.00
zcat 0.15 2.58 2.50 2.61 7.84
Total 424.61 102.32 87.98 34.82 649.73
Rule Specification
Use the -r option to exclude records that don’t match rules specified in a
“key=value” format. Multiple rules may be combined.
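For example (a sketch; "user" is an assumed key name, and the available keys depend on your accounting data):
/opt/scali/sbin/scaacct -r "user=root"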
Note: The accounting information is based on termination time and not start time.
Summarize Specification
• Elapsed time
• System time
• User time
• Characters transferred
• Number of swaps
Accounting data is collected from the nodes to the accounting server daily. You can now
create reports with the current data.
To trigger an immediate update, run logrotate and scaacct_collect on all nodes:
# scash -p "logrotate -f /etc/logrotate.conf"
# scash -p /etc/cron.daily/scaacct_collect
Report Interface
There are two groupings of functions at the top of the interface: the Cluster Summary and
the Report Navigation.
Cluster Summary
You will find the cluster summary tools on the left side of the tool bar, at the top of the
report dialog. From left to right, they are:
• Home
• TOC toggle
• Run Report
• Export Data
• Export report
• Print report as PDF
• Print report on server
Report Navigation
On the right side of the tool bar are the icons for navigation.
Opening a Report
Go to the Report drop-down menu in the tool bar. Report-> Show Reports
You may also use the web interface at http://<servername>:8080/reports/.
Monitoring
Cluster Summary shows server settings and status. Refreshes every 5 minutes.
Node Status shows monitoring status for all nodes. Refreshes every 5 minutes.
Server Summary shows server settings and historical status.
Networking
Network Overview is a table depicting the configuration of the network devices on all
systems.
Workload management
PBS Pro Job History lists batch jobs which have run on the cluster.
PBS Pro Job Usage is an overview of the number of batch jobs and resources consumed
by each user.
CAUTION—The CLI commands make changes to the configuration database ONLY. If you do not
apply your changes to the node(s) you will have an eternity of wall time wondering why the system hasn't
changed. You MUST apply your changes to the node(s) or clusters either in the GUI or by using:
# pmcli reconfigure <nodename|all>
The Baseboard Management Controller commands help you manage the interface
between system management software and platform hardware.
• addbmc on page 197
• disablebmcconsole on page 197
• disablebmcmonitoring on page 198
• disablebmcpower on page 198
• enablebmcconsole on page 198
• enablebmcmonitoring on page 198
• enablebmcpower on page 199
• listbmccapabilities on page 199
• removebmc on page 199
• showbmc on page 200
addbmc
addbmc adds a BMC for the system(s)
Table 20—addbmc
pmcli addbmc <systemnames> <username> <password> <ipspecs> [subnet]
ARGUMENT DESCRIPTION
systemnames - the name(s) of the system(s) {[..]}
username - username
password - password - YOU MUST NOT USE ENCRYPTION HERE
ipspecs - ip address(es) [..]
OPTIONAL DESCRIPTION
subnet - subnet for the ipaddress
--subnet=SUBNET
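Example: addbmc
Add a BMC with its login credentials and IP address (a sketch; the system name, credentials, and address are hypothetical):
pmcli addbmc n001 admin secret 10.0.1.101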
disablebmcconsole
disablebmcconsole disables BMC console for the system(s)
Table 21—disablebmcconsole
pmcli disablebmcconsole <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) {[..]}
disablebmcmonitoring
disablebmcmonitoring disables BMC monitoring for the system(s).
Table 22—disablebmcmonitoring
pmcli disablebmcmonitoring <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name(s) of the system(s) {[..]}
disablebmcpower
disablebmcpower disables BMC power control for the system(s).
Table 23—disablebmcpower
pmcli disablebmcpower <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) {[..]}
enablebmcconsole
enablebmcconsole enables BMC console for the system(s).
Table 24—enablebmcconsole
pmcli enablebmcconsole <systemnames> [consserver]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
consserver - the name of a console server
--consserver=CONSSERVER
enablebmcmonitoring
enablebmcmonitoring enables BMC monitoring for the system(s).
enablebmcpower
enablebmcpower enables BMC power control for the system(s).
Table 26—enablebmcpower
listbmccapabilities
listbmccapabilities returns a list of BMC capabilities for the system(s).
Table 27—listbmccapabilities
pmcli listbmccapabilities <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
removebmc
removebmc removes BMC from system(s)
Table 28—removebmc
pmcli removebmc <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
showbmc
showbmc shows the BMC configuration for the system(s).
Table 29—showbmc
pmcli showbmc <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
Through the cluster commands you can create clusters from a collection of nodes,
rename them, or add and remove nodes from a cluster.
• addnodetocluster on page 201
• createcluster on page 201
• listclusters on page 201
• listnodesincluster on page 201
• removenodefromcluster on page 202
• renamecluster on page 202
addnodetocluster
addnodetocluster adds system names to a cluster.
Table 30—addnodetocluster
pmcli addnodetocluster <systemnames> <clustername>
ARGUMENT DESCRIPTION
systemnames - name of system(s) [..]
clustername - name of the cluster
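Example: addnodetocluster
Add a range of nodes to a cluster (a sketch; the node range and cluster name are hypothetical):
pmcli addnodetocluster "n[01-64]" mycluster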
createcluster
createcluster creates a cluster of type 'performance'. See “Creating a flat
cluster with pmcli” on page 380 for best practices on how to use createcluster.
Table 31—createcluster
pmcli createcluster <name>
ARGUMENT DESCRIPTION
name - the name of the cluster you will create
listclusters
listclusters returns a list of all available clusters of the type “performance”.
Table 32—listclusters
pmcli listclusters
ARGUMENT OPTIONAL
none none
listnodesincluster
listnodesincluster returns a list of the nodes in a cluster.
Table 33—listnodesincluster
pmcli listnodesincluster <clustername>
ARGUMENT DESCRIPTION
clustername - name of the current cluster
removenodefromcluster
removenodefromcluster removes systemname(s) from a cluster.
Table 34—removenodefromcluster
renamecluster
renamecluster assigns a new name to a cluster.
Table 35—renamecluster
pmcli renamecluster <oldclustername> <newclustername>
ARGUMENT DESCRIPTION
oldclustername - the current name of the cluster
newclustername - the new name of the cluster
getcustomattribute
getcustomattribute gets a custom attribute for system(s).
Table 36—getcustomattribute
pmcli getcustomattribute <systemnames> <attributename>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
attributename - the name of the attribute to get
listcustomattributes
listcustomattributes lists custom attributes for system(s).
Table 37—listcustomattributes
pmcli listcustomattributes <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
removecustomattribute
removecustomattribute removes a custom attribute for system(s).
Table 38—removecustomattribute
pmcli removecustomattribute <systemnames> <attributename>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
attributename - name of attribute to remove.
setcustomattribute
setcustomattribute sets a custom attribute for system(s).
Use the deployment commands to install and to configure software on your systems.
These commands are the CLI counterparts to the Upload Wizard in the GUI (see
“Upload Software Wizard” on page 68).
• install on page 205
• installmanagementsoftware on page 205
• reconfigure on page 207
• reconfiguredryrun on page 207
• setdiskless on page 207
• setdistribution on page 207
• setimage on page 208
• setnettemplate on page 208
• setftptemplate on page 208
install
install installs system(s). The operating system will be installed based on the
current configuration of the system.
Table 40—install
pmcli install <systemnames> [installserver]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
installserver - name of installserver
--installserver=INSTALLSERVER
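Example: install
Install the operating system on a node according to its current configuration (a sketch; "n001" is a hypothetical system name):
pmcli install n001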
installmanagementsoftware
installmanagementsoftware installs Platform Manager software on
system(s) without reinstalling the operating system. It logs onto the system
using a remote shell (rsh or ssh) and installs the Platform Manager software and
services on top of an existing Linux installation. The system must already be present
and have the Platform Manager software and services enabled in the Platform
Manager configuration database prior to running installmanagementsoftware.
Table 41—installmanagementsoftware
pmcli installmanagementsoftware <systemnames> [netconfigtemplate] [installserver]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
netconfigtemplate - UUID of the installation template.
--netconfigtemplate=NETCONFIGTEMPLATE
installserver - server from where the installation job will run
--installserver=INSTALLSERVER
Note: Root login without a password must be enabled, or the SSH_PASSWORD variable
must be set for the system to be discovered, before running installmanagementsoftware.
Note: See how to use listtemplates on page 311 to get a list of available
templates for using netconfigtemplate.
reconfigure
reconfigure applies the current configuration to system(s).
Table 42—reconfigure
pmcli reconfigure [systemnames] [installserver]
OPTIONAL DESCRIPTION
systemnames - name of system(s) to reconfigure {[..]}
--systemnames=SYSTEMNAMES
installserver - server where the installation job will run
--installserver=INSTALLSERVER
reconfiguredryrun
reconfiguredryrun tests system(s) re-configuration. This is the same as the
“reconfigure” command, but the changes will only be reported, not actually
performed.
Table 43—reconfiguredryrun
pmcli reconfiguredryrun <systemnames> [installserver]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
installserver - name of installserver
--installserver=INSTALLSERVER
setdiskless
setdiskless sets system(s) diskless with a software image.
Table 44—setdiskless
pmcli setdiskless <systemnames> <imagename>
ARGUMENT DESCRIPTION
systemnames - string with the name(s) of the system(s) {[..]}
imagename - sets an os image
setdistribution
setdistribution sets distribution to system(s).
setimage
setimage sets software image for system(s).
Table 46—setimage
pmcli setimage <systemnames> <imagename>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
imagename - os image to set
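Example: setimage
Assign a software image to a node (a sketch; the system and image names are hypothetical):
pmcli setimage n001 goldenimage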
setnettemplate
setnettemplate sets netconfig template to system(s).
Table 47—setnettemplate
pmcli setnettemplate <systemnames> <nettemplate>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
nettemplate - netconfig template to set
setftptemplate
setftptemplate sets the TFTP template for system(s).
Table 48—settftptemplate
pmcli setftptemplate <systemnames> <template>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
template - TFTP template to set
You can test your database and check that systems are functioning properly using
commands found in this section.
• diagnosecimdatabase on page 209
• diagnoseconsole on page 209
• diagnoseinstallation on page 209
• diagnosemonitoring on page 210
• diagnosenis on page 210
• diagnosescampi on page 210
• diagnosescash on page 210
• diagnosessh on page 210
• diagnosesshkeys on page 211
diagnosecimdatabase
diagnosecimdatabase tests that the CIM database is sane.
Table 49—diagnosecimdatabase
pmcli diagnosecimdatabase <systemname>
ARGUMENT OPTIONAL
none none
diagnoseconsole
diagnoseconsole tests console functionality.
Table 50—diagnoseconsole
pmcli diagnoseconsole <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
diagnoseinstallation
diagnoseinstallation verifies whether the installation was successful.
Table 51—diagnoseinstallation
pmcli diagnoseinstallation <systemname>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
diagnosemonitoring
diagnosemonitoring tests the monitoring functionality.
Table 52—diagnosemonitoring
pmcli diagnosemonitoring <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
diagnosenis
diagnosenis tests NIS.
Table 53—diagnosenis
pmcli diagnosenis <systemname>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
diagnosescampi
diagnosescampi tests scampi (Scali MPI Connect).
Table 54—diagnosescampi
pmcli diagnosescampi <systemname>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
diagnosescash
diagnosescash tests scash.
Table 55—diagnosescash
pmcli diagnosescash <systemname>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
diagnosessh
diagnosessh tests SSH.
diagnosesshkeys
diagnosesshkeys tests the CIM for both public and private ssh keys.
Table 57—diagnosesshkeys
pmcli diagnosesshkeys <systemname>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
addlustremdt
addlustremdt creates a Lustre MDT for the system(s).
Table 58—addlustremdt
pmcli addlustremdt <systemnames> <fsname> <mdtname> <backendfstype>
<filepath> <filesize>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
fsname - name of file system
mdtname - name for this MDT inside lustre
backendfstype - filesystem type for the backend, e.g. ldiskfs
filepath - path to devicefile or loopback file
filesize - number of MB for loopback files, 0 for devices
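Example: addlustremdt
Create an MDT backed by a loopback file (a sketch; the system name, file system name, path, and 1000 MB size are all hypothetical):
pmcli addlustremdt mds01 lustre1 MDT0000 ldiskfs /var/lib/lustre/mdt0 1000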
addlustreost
addlustreost creates Lustre OST for system(s).
addnfsexport
addnfsexport adds a service for exporting directories over NFS from system(s).
Table 60—addnfsexport
pmcli addnfsexport <systemnames> <directory> [client=['*:(ro)']]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
directory - name of the directory to be exported
OPTIONAL DESCRIPTION
client - clients with export options. Client argument shouldn't have any
spaces. Defaults to "*(ro)"
--client=CLIENT
Multiple clients can be specified by using --client multiple times in the
command, e.g.
--client "host1:(rw,sync)"
--client "host2:(ro,async)".
addremotefs
addremotefs adds mounting for remote filesystem on system(s).
createlustrefs
createlustrefs creates Lustre file system.
Table 62—createlustrefs
pmcli createlustrefs <fsname>
ARGUMENT DESCRIPTION
fsname - name of file system
formatlustrefs
formatlustrefs formats and enables lustre filesystem.
Table 63—formatlustrefs
pmcli formatlustrefs <fsname>
ARGUMENT DESCRIPTION
fsname - the name given to the filesystem in createlustrefs on page
214.
listlustrefs
listlustrefs lists lustre file systems.
listremotefs
listremotefs returns a list of remote filesystem(s) to the current system.
Table 65—listremotefs
pmcli listremotefs <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
removelustrefs
removelustrefs removes lustre file system.
Table 66—removelustrefs
pmcli removelustrefs <fsname>
ARGUMENT DESCRIPTION
fsname - name of file system
removeremotefs
removeremotefs removes mounting of remote filesystem(s) on the current
system.
Table 67—removeremotefs
pmcli removeremotefs <systemnames> <mntpoint>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
mntpoint - mountpoint
testlustrefs
testlustrefs runs performance diagnostics for lustre filesystem.
addflexclientconfigtoservice
addflexclientconfigtoservice associates a FLEX client config with one or more
services.
Table 69—addflexclientconfigtoservice
pmcli addflexclientconfigtoservice <configid> <systemnames> <serviceid>
ARGUMENT DESCRIPTION
configid - uuid of the FLEX client config
systemnames - name of system(s) {[..]}
serviceid - name or uuid of service (from listhostedservice)
createflexclientconfigdir
createflexclientconfigdir creates a FLEX client config setting for local license
directory.
Table 70—createflexclientconfigdir
pmcli createflexclientconfigdir <name> <lmdir> <description>
ARGUMENT DESCRIPTION
name - the name(s) of the new configuration file
lmdir - name of the directory where the license files (*.lic) will be
found
description - description of the new configuration file
createflexclientconfigfile
createflexclientconfigfile creates a FLEX client config setting for local license file.
Note: You need to add one or more services with the command
"addflexclientconfigtoservice" before the createflexclientconfigfile takes
effect.
Table 71—createflexclientconfigfile
pmcli createflexclientconfigfile <name> <lmdir> <lmfilename> <inputfile>
<description>
ARGUMENT DESCRIPTION
name - the name(s) of the new configuration file
lmdir - name of the directory where the license files (*.lic) will be
found
lmfilename - name of the license file to write in the lmdir
inputfile - full file path for the license file data. The file is read and saved
in the configuration database.
description - description of the new configuration file
createflexclientconfigserver
createflexclientconfigserver creates a FLEX client config setting for remote FLEX
license server.
Note: You need to add one or more services with the command
"addflexclientconfigtoservice" before the createflexclientconfigserver takes
effect.
deleteflexclientconfig
deleteflexclientconfig deletes an unused FLEX client config setting
Table 73—deleteflexclientconfig
pmcli deleteflexclientconfig <configid>
ARGUMENT DESCRIPTION
configid - uuid of the FLEX client config
listflexclientconfigs
listflexclientconfigs returns a list of all FLEX client config settings.
Table 74—listflexclientconfigs
pmcli listflexclientconfigs [verbose]
OPTIONAL DESCRIPTION
verbose - list license file contents
--verbose=VERBOSE
listflexclientconfigsonservice
listflexclientconfigsonservice returns a list of FLEX client config(s) associated
with one or more services.
removeflexclientconfigfromservice
removeflexclientconfigfromservice removes association of a FLEX client config
from one or more services.
Table 76—removeflexclientconfigfromservice
pmcli removeflexclientconfigfromservice <configid> <systemnames> <serviceid>
ARGUMENT DESCRIPTION
configid - uuid of the FLEX client config
systemnames - the name(s) of the system(s) {[..]}
serviceid - name or uuid of service (from listhostedservice)
addhaethernetinterface
addhaethernetinterface adds HA (floating) ethernet interface. Each HA
interface will be managed (up/down and location of its host) by the HA group.
Table 77—addhaethernetinterface
pmcli addhaethernetinterface <hagroupname> <nicname> <lanendpoint> <hostspecs>
<ipspecs>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
nicname - name of nic (e.g. "nic1")
lanendpoint - name of the lanendpoint (e.g. "eth0")
hostspecs - hostname for HA interface
ipspecs - ip address(es)
addhasharedfs
addhasharedfs adds mount information for a shared storage filesystem
to the HA group
addheartbeatchannel
addheartbeatchannel allows for the actual "heartbeat" communication within the
HA group, which can be done over one or more channels.
addservertohagroup
addservertohagroup adds a system to HA group and installs the HA software on
the system.
Note: Make sure to manage the HA services with the "moveservice” commands.
Note: Make sure to use the command 'addheartbeatchannel' to add heartbeat "ping"
channels.
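Example: addservertohagroup
A sketch only: the argument order shown here (system name, then HA group name) is assumed by analogy with removeserverfromhagroup, and the names come from the gateway example earlier in this chapter:
pmcli addservertohagroup gw2 hagw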
bindhaservicetointerface
bindhaservicetointerface binds service to interface for HA group
Table 81—bindhaservicetointerface
pmcli bindhaservicetointerface <hagroupname> <serviceid> <interface>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
serviceid - name or uuid of service
interface - interface (e.g. 'eth0')
createhagroup
createhagroup creates a group of type 'HA' with default fencing disabled and
generates a common authentication key for the member systems. Enables auto
failback for the group by default.
Table 82—createhagroup
pmcli createhagroup <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group to be created
disableautofailback
disableautofailback disables automatic failback. The HA services will remain on
whatever server is serving them until that node fails, or an administrator intervenes.
Table 83—disableautofailback
pmcli disableautofailback <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
disablehagroupfencing
disablehagroupfencing disables system fencing (stonith) on failover.
enableautofailback
enableautofailback makes the HA services automatically fail back to the
"primary" server.
Table 85—enableautofailback
pmcli enableautofailback <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
enablehagroupfencing
enablehagroupfencing enables system fencing on failover.
Note: enablehagroupfencing requires correct setup of the
Scali_PowerManagementController services
listhagroups
listhagroups lists all groups of type 'HA’.
Table 87—listhagroups
pmcli listhagroups
ARGUMENT OPTIONAL
none none
listhainterfaces
listhainterfaces returns a list of ha (floating) ethernet interfaces on the ha group.
Table 88—listhainterfaces
pmcli listhainterfaces <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - the name of HA group
listhapingrules
listhapingrules lists ping rule summary.
Table 89—listhapingrules
pmcli listhapingrules <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
listhasharedfs
listhasharedfs lists all shared filesystems for an HA group.
Table 90—listhasharedfs
pmcli listhasharedfs <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
listheartbeatchannels
listheartbeatchannels lists the heartbeat channels for an HA group.
Table 91—listheartbeatchannels
pmcli listheartbeatchannels <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - the name of the HA group
listhostedhaservices
listhostedhaservices lists the HA services hosted on the HA group
Table 92—listhostedservices
pmcli listhostedservices <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
listserversinhagroup
listserversinhagroup lists servers in an HA group
Table 93—listserversinhagroup
pmcli listserversinhagroup <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
moveservicestohagroup
moveservicestohagroup moves all Platform HA enabled services from the
system to the HA group
Table 94—moveservicestohagroup
pmcli moveservicestohagroup <systemname> <hagroupname>
ARGUMENT DESCRIPTION
systemname - name of HA server
hagroupname - name of HA group
moveservicestosystem
moveservicestosystem moves all Scali HA enabled services from the HA group
to the system.
moveservicetohagroup
moveservicetohagroup moves a specified HA enabled service from a system to
HA group
Table 96—moveservicetohagroup
pmcli moveservicetohagroup <systemname> <hagroupname>
ARGUMENT DESCRIPTION
systemname - name of HA server
hagroupname - name of HA group
moveservicetosystem
moveservicetosystem moves a specified HA enabled service from a HA group to
a system.
Table 97—moveservicetosystem
removehaethernetinterface
removehaethernetinterface removes HA (floating) ethernet interface.
Table 98—removehaethernetinterface
pmcli removehaethernetinterface <hagroupname> <nicname>
ARGUMENT DESCRIPTION
hagroupname - name of group
nicname - name of nic (e.g. "nic1")
removehagroup
Note: You must disconnect all resources (services and member systems) before
using removehagroup.
removehasharedfs
removehasharedfs removes mount information for the HA group’s shared
storage filesystem.
Table 100—removehasharedfs
pmcli removehasharedfs <hagroupname> <mntpoint>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
mntpoint - mountpoint
removeheartbeatchannel
removeheartbeatchannel removes a heartbeat communication channel (if
there is more than one).
Table 101—removeheartbeatchannel
pmcli removeheartbeatchannel <systemname> <channelid>
ARGUMENT DESCRIPTION
systemname - name of the system in an HA group
channelid - uuid of the channel
removeserverfromhagroup
removeserverfromhagroup removes systemname(s) from an HA group.
Table 102—removeserverfromhagroup
pmcli removeserverfromhagroup <systemnames> <hagroupname>
ARGUMENT DESCRIPTION
systemnames - name of HA system(s) [..]
hagroupname - name of HA group
sethapingallips
sethapingallips
• Sets ping constraint on HA group (empty unsets).
Table 103—sethapingallips
pmcli sethapingallips <hagroupname> [iplist..]
ARGUMENT DESCRIPTION
hagroupname - name of HA group
OPTIONAL DESCRIPTION
iplist - list of ip-addresses that should be pingable for the HA system
to be "up".
sethapingoneofips
sethapingoneofips
• Sets ping constraint group on HA group (empty unsets).
• Makes the HA system fail over if ALL of the IPs do not reply to ping/ICMP
requests from the active HA server, but a reply is received from the passive
server. A reply from ONE of the listed IPs does not trigger a failover.
Table 104—sethapingoneofips
pmcli sethapingoneofips <hagroupname> [iplist..]
ARGUMENT DESCRIPTION
hagroupname - name of HA group
OPTIONAL DESCRIPTION
iplist - list of ip-addresses that should be pingable for the HA system
to be "up".
setlsbscriptha
setlsbscriptha enables a custom LSB script to be controlled by the HA group. The
scripts are started in alphabetical order (and stopped in reversed order).
WARNING—the LSB script MUST follow the LSB specification, or BAD THINGS WILL
HAPPEN! The LSB spec can be found at:
http://refspecs.freestandards.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
setprimaryhaserver
setprimaryhaserver sets the primary server of the HA group. The HA services will
automatically fail back to their "primary" server as long as it is "up".
Table 106—setprimaryhaserver
pmcli setprimaryhaserver <systemname> <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
systemname - name of primary HA server
showautofailback
showautofailback shows if the HA services automatically fail back to the
"primary" server.
Table 107—showautofailback
pmcli showautofailback <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
showhagroupfencing
showhagroupfencing shows system fencing (stonith) status on the HA group.
Table 108—showhagroupfencing
pmcli showhagroupfencing <hagroupname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
unbindhaservicefrominterface
unbindhaservicefrominterface removes binding of service from interface for
system(s).
unsetlsbscriptha
unsetlsbscriptha disables a custom LSB script from being controlled by the HA
group.
Table 110—unsetlsbscriptha
pmcli unsetlsbscriptha <hagroupname> <lsbscriptname>
ARGUMENT DESCRIPTION
hagroupname - name of HA group
lsbscriptname - name of lsbscript (e.g. "ypbind")
You can use these commands to replicate uniform software installation across nodes.
• captureimage on page 235
• exportimage on page 235
• importimage on page 235
• listimages on page 236
• removeimage on page 236
captureimage
captureimage captures an image of installed software on the system.
Table 111—captureimage
pmcli captureimage <systemname> <imagename> [description] [excludes..]
ARGUMENT DESCRIPTION
systemname - the name of the system
imagename - the name of an image
OPTIONAL DESCRIPTION
description - description of an image; default is "none"
--description=DESCRIPTION
excludes - a space-separated list of the systems to be excluded
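Example: captureimage
Capture an image from an installed node (a sketch; the system name, image name, and description are hypothetical):
pmcli captureimage n001 goldenimage --description="Compute node golden image"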
exportimage
exportimage exports an image as a tarball.
Table 112—exportimage
pmcli exportimage <imagename> [tarballname]
ARGUMENT DESCRIPTION
imagename - the name of an image
OPTIONAL DESCRIPTION
tarballname - Tarball name of exported image. Defaults to stdout.
--tarballname=TARBALLNAME
importimage
importimage imports an image from a tarball.
listimages
listimages returns a list of all available images.
Table 114—listimages
pmcli listimages
ARGUMENT OPTIONAL
none none
removeimage
removeimage removes an image of installed software on the system.
Table 115—removeimage
pmcli removeimage <imagename>
ARGUMENT DESCRIPTION
imagename - the name of an image
activateproductkey
activateproductkey automatically activates productkey. This function requires
internet access. It will contact the Platform registration servers, and permanently
bind this productkey to the system it's running on.
Table 116—activateproductkey
pmcli activateproductkey <productkey> <company> <contactemail> [street] [street2]
[city] [state] [postalcode] [country] [contactname] [contactphone] [licenseserver]
ARGUMENT DESCRIPTION
productkey - The productkey to activate. Using an empty string will
activate all productkeys.
company - name of the company wishing to register the productkey
contactemail - email address of the company wishing to register the
productkey
street - the first line of the street address of the company wishing
product activation
street2 - the second line of the street address of the company wishing
product activation
city - the city of the company wishing product activation
state - the state/province of the company wishing product
activation
postalcode - the postal code of the company wishing product activation
country - country for product activation
contactname - contact name for the company wishing product activation
contactphone - contact phone for the company wishing product activation
addactivationkey
addactivationkey adds an activation key for a product key on a license server.
Table 117—addactivationkey
pmcli addactivationkey <productkey> <activationkey> [licenseserver]
ARGUMENT DESCRIPTION
productkey - the product key to activate
activationkey - the activation key for this productkey
OPTIONAL DESCRIPTION
licenseserver - name of system where the license server is hosted. Default is
to use all license servers
--licenseserver=LICENSESERVER
addproductkey
addproductkey adds a product key to a license server.
Table 118—addproductkey
pmcli addproductkey <productkey> [licenseserver]
ARGUMENT DESCRIPTION
productkey - the product key for the product you want to enable.
The key is in the format
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX
OPTIONAL DESCRIPTION
licenseserver - name of system where the license server is hosted. Default is
to use all license servers.
--licenseserver=LICENSESERVER
Example: addproductkey
To add a product key enter:
pmcli addproductkey
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX
pmcli reconfigure all
listproductkeys
listproductkeys lists product keys registered with a license server.
removeactivationkey
removeactivationkey removes an activation key from a license server.
Table 120—removeactivationkey
pmcli removeactivationkey <productkey> <activationkey> [licenseserver]
ARGUMENT DESCRIPTION
productkey - the product key to remove
activationkey - the activation key to remove
OPTIONAL DESCRIPTION
licenseserver - name of system where the license server is hosted. Default
is to use all license servers.
--licenseserver=LICENSESERVER
removeproductkey
removeproductkey removes a product key from a license server.
Table 121—removeproductkey
pmcli removeproductkey <productkey> [licenseserver]
ARGUMENT DESCRIPTION
productkey - the product key to remove
OPTIONAL DESCRIPTION
licenseserver - name of system where the license server is hosted. Default
is to use all license servers.
--licenseserver=LICENSESERVER
showproductstatus
showproductstatus shows the status of product keys registered with a license server.
Table 122—showproductstatus
pmcli showproductstatus [licenseserver]
OPTIONAL DESCRIPTION
licenseserver - name of system where the license server is hosted. Default is
to use all license servers.
--licenseserver=LICENSESERVER
canceljob
canceljob cancels a job.
Table 123—canceljob
pmcli canceljob <jobid>
ARGUMENT DESCRIPTION
jobid identification of job
joblog
joblog returns the log for a job.
Table 124—joblog
pmcli joblog <jobid>
ARGUMENT DESCRIPTION
jobid identification of job
lastinstallationjob
lastinstallationjob lists the status of the last installation job for system(s).
listjobs
listjobs returns a list of all jobs.
Table 126—listjobs
pmcli listjobs
ARGUMENT OPTIONAL
none none
listjobsfornode
listjobsfornode returns a list of jobs for the given system(s).
Table 127—listjobsfornode
pmcli listjobsfornode <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
listsubjobs
listsubjobs returns a list of subjobs for a job.
Table 128—listsubjobs
pmcli listsubjobs <jobid>
ARGUMENT DESCRIPTION
jobid identification of job
removejob
removejob removes job(s) from the queue.
Table 129—removejob
pmcli removejob <jobid>
ARGUMENT DESCRIPTION
jobid identification of job
You can configure Platform Manager to recognise Platform LSF clusters using the LSF
commands. The script below will set up 'mylsfcluster'.
Note: Run the LSF commands 'lsid' for cluster information or 'bhosts' for details about the
hosts in the application system on each machine. A successful run will ensure that the
application system is configured correctly. See the LSF documentation for more
information about these two commands.
• addlsfapplicationsystem on page 245
• addlsfdynamichost on page 245
• addlsfmastercandidate on page 245
• addlsfstatichost on page 246
• listlsfapplicationsystems on page 247
• listlsfdynamichosts on page 247
• listlsffeatures on page 247
• listlsfmastercandidates on page 249
• listlsfstatichosts on page 250
addlsfapplicationsystem
Adds an LSF Application System
Table 130—addlsfapplicationsystem
pmcli addlsfapplicationsystem <lsfappname>
ARGUMENT DESCRIPTION
lsfappname - LSF Application System name
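Example: addlsfapplicationsystem
Create the LSF Application System 'mylsfcluster' referenced at the start of this section:
pmcli addlsfapplicationsystem mylsfcluster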
addlsfdynamichost
adds LSF Dynamic Hosts to LSF Application systems
Table 131—addlsfdynamichost
pmcli addlsfdynamichost <systemnames> <lsfappname>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]} to be defined as dynamic
host(s)
lsfappname - LSF Application System name
Example: addlsfdynamichost
Add the host “sc1435-6” as an LSF Dynamic Host in the LSF Application System named
“LSFCLuster1”:
pmcli addlsfdynamichost sc1435-6 LSFCLuster1
addlsfmastercandidate
adds LSF master candidate to the LSF application system
Example: addlsfmastercandidate
Add the host “sc1435-6” as an LSF master candidate in the LSF Application System
“LSFCLuster1”, with a license located at “/home/scali/license.dat” and the
optional path to the LSF work directory:
pmcli addlsfmastercandidate sc1435-6 LSFCLuster1 /home/scali/license.dat
179.179.0.91 --port=1700 /opt/lsfhpc/work
addlsfstatichost
adds a static host to the LSF application system.
Table 133—addlsfstatichost
pmcli addlsfstatichost <systemnames> <lsfappname>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]} to be defined as static host(s)
lsfappname - LSF Application System name
getlsfhoststatus
gets LSF service status for the host.
Example: getlsfhoststatus
Get the status of the LSF host named “sc1435-1” in the LSF application named “LSFApp”:
pmcli getlsfhoststatus sc1435-1 LSFApp
listlsfapplicationsystems
lists the existing LSF application systems in the data center.
Table 135—listlsfapplicationsystems
pmcli listlsfapplicationsystems
ARGUMENT OPTIONAL
none none
Example: listlsfapplicationsystems
List the existing LSF application systems in your data center:
pmcli listlsfapplicationsystems
listlsfdynamichosts
Lists the dynamic hosts in the LSF application system.
Table 136—listlsfdynamichosts
pmcli listlsfdynamichosts <lsfappname>
ARGUMENT DESCRIPTION
lsfappname - LSF Application System
Example: listlsfdynamichosts
List the existing LSF dynamichosts in your LSF application:
pmcli listlsfdynamichosts LSF_APP_1
listlsffeatures
listlsffeatures lists the FLEXlm features used by the LSF master candidate.
Example: listlsffeatures
pmcli listlsfapplicationsystems
--- List of all LSF Application Systems ---
scali:460bd743-a4ee-7f88-7c7f-2cb7ef51f867 : mylsfcluster
# pmcli listlsfmastercandidates mylsfcluster
--- List of master candidates in LSF Application System 'mylsfcluster' ---
scali:93da39f5-7c54-58cd-9fb6-18862f73e32a: db03b07vm2
scali:b01fae3c-11b7-2116-b252-5744eeeb5c78: db03b07vm3
# pmcli listhostedservices db03b07vm2
--- List hosted services for system db03b07vm2
scali:93da39f5-7c54-58cd-9fb6-18862f73e32a ---
scali:6a6a76ca-678a-093e-ea42-fad1fd2c5b7c Scali_LSFBatchService (eth0)
# pmcli listlsffeatures scali:6a6a76ca-678a-093e-ea42-fad1fd2c5b7c
--- List of the FLEXlm features used by the LSF master candidate
'scali:6a6a76ca-678a-093e-ea42-fad1fd2c5b7c' --- lsf_base lsf_manager
lsf_sched_fairshare lsf_sched_preemption lsf_sched_parallel
lsf_sched_resource_reservation lsf_sched_advance_reservation lsf_multicluster
lsf_make lsf_parallel lsf_client lsf_float_client platform_hpc lsf_reports lsf_sla
lsf_license_scheduler
lsf_dualcore_x86
lsf_mv_grid_filter
pmcli listlsffeatures uuid1:9fa87631-4eb6-3d9c-d44f-263b63e32a1e
listlsfmastercandidates
Lists the master candidates in the LSF application system
Table 138—listlsfmastercandidates
pmcli listlsfmastercandidates <lsfappsystem>
ARGUMENT DESCRIPTION
lsfappsystem - LSF Application System
listlsfstatichosts
Lists the slave-only hosts in the LSF application system.
Table 139—listlsfstatichosts
pmcli listlsfstatichosts <lsfappsystem>
ARGUMENT DESCRIPTION
lsfappsystem - LSF Application System
Example: listlsfstatichosts
List the existing LSF static hosts in your data center:
pmcli listlsfstatichosts LSF_APP_1
removelsfapplicationsystem
removes LSF Application System
Table 140—removelsfapplicationsystem
pmcli removelsfapplicationsystem <lsfappname>
ARGUMENT DESCRIPTION
lsfappname - LSF Application System name
Example: removelsfapplicationsystem
pmcli removelsfapplicationsystem LSFApp
setlsffeatures
sets the FLEXlm features used by the LSF master candidate
Example: setlsffeatures
pmcli setlsffeatures scali:9fa87631-4eb6-3d9c-d44f-263b63e32a1e
lsf_base lsf_manager lsf_sched_fairshare lsf_sched_parallel
setlsfhostclosed
sets LSF service down for the host
Table 142—setlsfhostclosed
pmcli setlsfhostclosed <systemnames> <lsfappsystem>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
lsfappsystem - LSF Application System
Example: setlsfhostclosed
pmcli setlsfhostclosed sc1435-1 LSFApp
setlsfhostopen
sets LSF service Open for the host
Table 143—setlsfhostopen
pmcli setlsfhostopen <systemnames> <lsfappsystem>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
lsfappsystem - LSF Application System
addaliasedinterface
addaliasedinterface adds alias interface to systemname(s).
Table 144—addaliasedinterface
pmcli addaliasedinterface <systemnames> <interface> <aliasnumber> <ipspecs>
<ipnamespecs> [subnet]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
interface - name of interface (example: eth0)
aliasnumber - alias number (example: 1 => eth0:1)
ipspecs - ip address(es) [..]
ipnamespecs - ip name [..]
OPTIONAL DESCRIPTION
subnet - subnet for the ipaddress
--subnet=SUBNET
Example: addaliasedinterface
Add an alias for your interface like this:
pmcli addaliasedinterface RenderFarm01 eth0 1 192.168.0.96
The second address to eth0 would then be eth0:1
addbondedinterface
addbondedinterface creates a logical bonded interface. For more information
about kernel bonding:
Example: addbondedinterface
Add a bonded interface to a system named “dl140-3”:
pmcli addbondedinterface dl140-3 bond0 172.19.0.100 "mode=802.3ad
miimon=100"
Example: addbondedinterface
When you do not enter the bonding driver options
pmcli addbondedinterface dl140-3 bond0 172.19.0.100
the default values listed below will be used:
['mode=802.3ad', 'miimon=100', 'lacp_rate=slow']
addethernetinterface
addethernetinterfaces adds an ethernet interface to systemname(s). See
“Adding an interface with pmcli” on page 383 for more information about this
command.
Table 146—addethernetinterface
pmcli addethernetinterface <systemnames> <nicname> <lanendpoint> [hostspecs]
[ipspecs] [subnet]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
nicname - name of nic (e.g. "nic1").
lanendpoint - name of the lanendpoint (e.g. "eth0")
OPTIONAL DESCRIPTION
addinfinibandinterface
addinfinibandinterface adds an infiniband interface to systemname(s).
Table 147—addinfinibandinterface
pmcli addinfinibandinterface <systemnames> <nicname> <lanendpoint> [hostspecs]
[ipspecs] [subnet]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
nicname - name of nic (e.g. "nic1").
OPTIONAL DESCRIPTION
lanendpoint - name of the lanendpoint (e.g. "eth0")
hostspecs - (optional) hostname(s) [..]
--hostspecs=HOSTSPECS
ipspecs - (optional) ip address(es) [..]
--ipspecs=IPSPECS
subnet - subnet for the ipaddress
--subnet=SUBNET
addmyrinetinterface
addmyrinetinterface adds myrinet interface to systemname(s).
Table 148—addmyrinetinterface
pmcli addmyrinetinterface <systemnames> <nicname> <lanendpoint> [hostspecs]
[ipspecs] [monservername] [subnet]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
nicname - name of nic (e.g. "ib0")
lanendpoint - name of the lanendpoint (e.g. "eth0")
OPTIONAL DESCRIPTION
hostspecs - (optional) hostname(s) [..]
--hostspecs=HOSTSPECS
addroutablesubnet
addroutablesubnet adds subnets to a routable subnets collection. See
createroutablesubnetgroup on page 260 and listroutablesubnetgroups on
page 262.
Table 149—addroutablesubnet
pmcli addroutablesubnet <routablesubnets> [subnets..]
ARGUMENT DESCRIPTION
routablesubnets - name or UUID of routable subnets collection (from
listroutablesubnetgroup).
OPTIONAL DESCRIPTION
subnets - name or UUIDs of subnets to add to the collection
addroute
addroute adds a route to systemname(s).
Table 150—addroute
pmcli addroute <systemnames> <destinationaddress> <destinationmask>
<gatewayip>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
destinationaddress - destination address (Use 0.0.0.0 for default gw)
destinationmask - destination mask (Use 0.0.0.0 for default gw)
gatewayip - ip for gateway
Note: Use 0.0.0.0 for default gateway destination address. Use 0.0.0.0 for default
gateway destination mask.
addsubnet
addsubnet adds a subnet.
Table 151—addsubnet
pmcli addsubnet <name> <subnetnumber> <subnetmask>
ARGUMENT DESCRIPTION
name - the name of subnet
subnetnumber - number of subnet
subnetmask - mask of subnet
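Example: addsubnet
Add a subnet (a sketch; the name, subnet number, and mask are hypothetical):
pmcli addsubnet private 10.0.0.0 255.255.255.0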
clearmacaddress
clearmacaddress clears macaddress for the system(s).
Table 152—clearmacaddress
pmcli clearmacaddress <systemnames> <interface>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
interface - the name of interface (e.g. "eth0")
--interfacename=eth1
clearmtu
clearmtu clears an MTU from the system. The default for MTU is 1500.
Table 153—clearmtu
pmcli clearmtu <systemnames> <interface>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
interface - the name(s) of interface (e.g. "eth0")
--interfacename=eth1
createroutablesubnetgroup
createroutablesubnetgroup creates a routable subnet group. Routable subnets
collections can be used to specify that routing exists between two subnets, so that
a service hosted on a node on one subnet can be accessed by nodes on the other
Table 154—createroutablesubnetgroup
pmcli createroutablesubnetgroup <name> [description]
ARGUMENT DESCRIPTION
name - the name of the routable subnet group(s) {[..]}
OPTIONAL DESCRIPTION
description - DESCRIPTION of the routable subnet collection
--description=DESCRIPTION
detachslaveinterface
detachslaveinterface removes (detaches) an interface from a bonded device
Table 155—detachslaveinterface
pmcli detachslaveinterface <systemnames> <bondinterfacename> [interfacenames..]
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
bondinterfacename - name of interface, for instance 'bond0'
OPTIONAL DESCRIPTION
interfacenames - the name(s) of interface(s) (e.g. "eth0")
--interfacename=eth1
disablenetworkboot
disablenetworkboot disables network boot for system(s).
Table 156—disablenetworkboot
pmcli disablenetworkboot <systemnames> <interface>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
interface - the name(s) of interface (e.g. "eth0")
--interfacename=eth1
disablestaticarp
disablestaticarp disables static ARP configuration on all, or on specific, IP interfaces
on system(s).
enablenetworkboot
enablenetworkboot enables network boot for system(s)
Table 158—enablenetworkboot
pmcli enablenetworkboot <systemnames> <interface>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
interface - the name(s) of interface (e.g. "eth0")
--interfacename=eth1
enablestaticarp
enablestaticarp enables static ARP configuration on all, or on specific, IP interfaces
on system(s).
Table 159—enablestaticarp
pmcli enablestaticarp <systemnames> [interfacename]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
OPTIONAL DESCRIPTION
interfacename - the name(s) of interface (e.g. "eth0")
--interfacename=eth1
enslaveinterface
enslaveinterface adds an enslaved interface to a bonded device.
exportethers
exportethers exports ethers; writes to stdout.
Table 161—exportethers
pmcli exportethers
ARGUMENT OPTIONAL
none none
importethers
importethers imports ethers; reads from stdin.
Table 162—importethers
pmcli importethers
ARGUMENT OPTIONAL
none none
listinterfaces
listinterfaces returns a list of interfaces for the system(s).
Table 163—listinterfaces
pmcli listinterfaces <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
listroutablesubnetgroups
listroutablesubnetgroups returns a list of routable subnet groups
See also “createroutablesubnetgroup” on page 259
listroutes
listroutes returns a list of routes for systemname(s).
Table 165—listroutes
pmcli listroutes <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name(s) of system(s) {[..]}
liststaticarpmapping
liststaticarpmapping returns a list of static arp mapping as represented in the
Platform CIM database.
Table 166—liststaticarpmapping
pmcli liststaticarpmapping <nodename> [interfacename]
ARGUMENT DESCRIPTION
nodename - the name of the node, for instance 'n001'
OPTIONAL DESCRIPTION
interfacename - the name(s) of interface (e.g. "eth0")
--interfacename=eth1
listsubnets
listsubnets returns a list of all subnets.
Table 167—listsubnets
pmcli listsubnets
ARGUMENT OPTIONAL
none none
listsystemdevices
listsystemdevices returns a list of system devices.
removealiasedinterface
removealiasedinterface removes aliased interface from systemname(s).
Table 169—removealiasedinterface
pmcli removealiasedinterface <systemnames> <interface> <aliasnumber>
ARGUMENT DESCRIPTION
systemnames - the name(s) of system(s) {[..]}
interface - name of interface (e.g. eth0)
aliasnumber - alias number (e.g. 1 => eth0:1)
removebondedinterface
removebondedinterface removes a logical "bonded" device.
Table 170—removebondedinterface
pmcli removebondedinterface <systemnames> <bondinterfacename>
ARGUMENT DESCRIPTION
systemnames - the name(s) of system(s) {[..]}
bondinterfacename - name of interface (e.g. bond0)
removeethernetinterface
removeethernetinterface removes an ethernet interface from a system.
Table 171—removeethernetinterface
pmcli removeethernetinterface <systemname> <nicname>
ARGUMENT DESCRIPTION
systemname - the name of the system
nicname - the name of the nic (e.g. "nic1")
removeinfinibandinterface
removeinfinibandinterface removes an InfiniBand interface from system(s).
removemyrinetinterface
removemyrinetinterface removes a Myrinet interface from systemname(s).
Table 173—removemyrinetinterface
pmcli removemyrinetinterface <systemname> <nicname>
ARGUMENT DESCRIPTION
systemname - the name of system(s) {[..]}
nicname - the name of the nic (e.g. "gm0")
removeroutablesubnet
removeroutablesubnet removes a subnet from a routable subnets collection.
See “createroutablesubnetgroup” on page 260
Table 174—removeroutablesubnet
pmcli removeroutablesubnet <routablesubnets> [subnets..]
ARGUMENT DESCRIPTION
routablesubnets - name or UUID of routable subnet group. Use
listroutablesubnetgroups for these values.
OPTIONAL DESCRIPTION
subnets - name(s) or UUID(s) of subnets to remove from the
group
removeroutablesubnetgroup
removeroutablesubnetgroup removes a routable subnets collection. See
“createroutablesubnetgroup” on page 260
Table 175—removeroutablesubnetgroup
pmcli removeroutablesubnetgroup <routablesubnets>
ARGUMENT DESCRIPTION
routablesubnets - name or UUID of routable subnet group. Use
listroutablesubnetgroups for these values.
removeroute
Table 176—removeroute
pmcli removeroute <systemnames> <destinationaddress>
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) {[..]}
destinationaddress - destination address for the route to be removed
removesubnet
removesubnet removes a subnet.
Table 177—removesubnet
pmcli removesubnet <name>
ARGUMENT DESCRIPTION
name - the name of the subnet
setinterfacename
setinterfacename changes the hostname mapped to the IP address of this
interface for the system(s). One of the interfaces on a system should have
the same name as the system itself (see “renamesystem” on page 272).
Table 178—setinterfacename
pmcli setinterfacename <systemnames> <interface> <ifnames>
ARGUMENT DESCRIPTION
systemnames - the name(s) of system(s) {[..]}
interface - the name of interface
ifnames - new hostname for interface(s) [..]
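Example: setinterfacename
For illustration (the node name n001 and the hostname n001-data are hypothetical), map the hostname n001-data to the IP address of eth1 on node n001:
pmcli setinterfacename n001 eth1 n001-data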
setmacaddress
setmacaddress sets the MAC address for the system.
Table 179—setmacaddress
pmcli setmacaddress <systemname> <interface> <macaddress>
ARGUMENT DESCRIPTION
systemname - the name of system
interface - the name of interface
macaddress - macaddress given as "AA:BB:CC:DD:EE:FF"
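Example: setmacaddress
For illustration (the node name and MAC address are hypothetical), set the MAC address of eth0 on node n001:
pmcli setmacaddress n001 eth0 "AA:BB:CC:DD:EE:FF"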
setmtu
Table 180—setmtu
pmcli setmtu <systemname> <interface> <mtu>
ARGUMENT DESCRIPTION
systemname - the name of system
interface - the name of interface
mtu - mtu as integer
Example: mtu
Set the MTU to 1500 for eth0 on a system called RenderFarm00:
pmcli setmtu RenderFarm00 eth0 1500
Remember: the default is 1500
changenodebrand
changenodebrand changes the product brand of a node. Use listproducts on
page 287 for available options.
Table 181—changenodebrand
pmcli changenodebrand <systemnames> <productid>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
productid - productid for the new server brand. Use "listproducts Servers"
for available options.
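Example: changenodebrand
For illustration (the node names are hypothetical), first list the available server products, then change the brand of nodes n001 through n004 to one of the listed product ids:
pmcli listproducts Servers
pmcli changenodebrand n[001-004] <productid>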
createnode
See Creating a node with pmcli on page 376 for details on using createnode.
--dnsservers=DNSSERVERS
nicname - the name of nic; default "nic1" --nicname=NICNAME
laninterface - the name of interface; default "eth0"
--laninterface=LANINTERFACE
smgatewayname - the name of Platform Manager gateway; default Cimserver
--smgatewayname=SMGATEWAYNAME
nisdomain - the name of NIS domain; Default is not to configure NIS
--nisdomain=NISDOMAIN
nisservers - the name of NIS servers (space separated)
--nisservers=NISSERVERS
subnet - subnet for the ipaddress
disablemanagementofservers
disablemanagementofservers disables management of the system(s) by
Platform Manager
Table 183—disablemanagementofservers
pmcli disablemanagementofservers <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name(s) of the system(s) {[..]}
• See “enablemanagementofservers”
• See “installmanagementsoftware” on page 206.
Table 184—discovernode
pmcli discovernode <ipspecs>
ARGUMENT DESCRIPTION
ipspecs - ip address(es) [..]
discovernodemac
discovernodemac runs MAC discovery for system(s). The systems will be power
cycled one by one to learn their MAC addresses.
Table 185—discovernodemac
pmcli discovernodemac <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]} for which to discover MAC
addresses
enablemanagementofservers
enablemanagementofservers enables management of the system(s) by
Platform Manager. This will add Platform Manager software and services to a system
in the configuration database. It's primarily used for adding Platform Manager to
newly discovered systems. This command only affects the configuration database.
Table 186—enablemanagementofservers
pmcli enablemanagementofservers <systemnames> [servername]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
OPTIONAL DESCRIPTION
servername - the name of server; default Platform Manager frontend
--servername=SERVERNAME
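Example: enablemanagementofservers
For illustration (the node names are hypothetical), enable Platform Manager management of nodes n001 through n004, using the default Platform Manager frontend as server:
pmcli enablemanagementofservers n[001-004]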
getguid
getguid returns a unique GUID for system(s).
Table 187—getguid
pmcli getguid <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
getkernelbootoptions
getkernelbootoptions lists the extra kernel boot options for system(s).
Table 188—getkernelbootoptions
pmcli getkernelbootoptions <systemnames>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
listaccounts
listaccounts lists accounts of system
Table 189—listaccounts
pmcli listaccounts <system>
ARGUMENT DESCRIPTION
system - system name
listmanagementofservers
listmanagementofservers shows the management status of the system(s) (i.e. whether
the system is managed by Platform Manager or not).
listnodes
listnodes returns a list of all available nodes, both performance and "HA" types.
Table 191—listnodes
pmcli listnodes
ARGUMENT OPTIONAL
none none
removesystem
removesystem removes the system from the network.
Table 192—removesystem
pmcli removesystem <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of the system(s) [..]
renamesystem
renamesystem changes the hostname for the system(s).
Table 193—renamesystem
pmcli renamesystem <systemnames> <newsystemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
newsystemnames - the new name(s) [..]
Note: renamesystem will change the hostname of the system but not the names
that map to the IP-addresses assigned to the system (See
“setinterfacename” on page 266).
Table 194—setguid
pmcli setguid <systemname> [guid]
ARGUMENT DESCRIPTION
systemname - the name of system
OPTIONAL DESCRIPTION
guid - GUID for this system. Default is to clear the GUID.
--guid=GUID
setinstalledstate
setinstalledstate overrides the installation status of system(s) to set them as
completed.
Table 195—setinstalledstate
pmcli setinstalledstate <systemnames> [isinstalled=True]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
OPTIONAL DESCRIPTION
isinstalled - installation state, True or False. Defaults to True.
--isinstalled=ISINSTALLED
setinstallserver
setinstallserver sets install server for system.
Table 196—setinstallserver
pmcli setinstallserver <systemnames> <installserver>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
installserver - the name of install server
setkernelbootoptions
Use setkernelbootoptions to enter the extra kernel boot options for system(s).
setrootpassword
setrootpassword sets root password for the system(s).
showprovisioningdata
showprovisioningdata shows provisioning setting data for system(s)
Table 199—showprovisioningdata
pmcli showprovisioningdata <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
addpbsprolicenseserver
addpbsprolicenseserver adds a PBS Pro FLEXlm license server to system(s).
Supported for PBSPro version 9.0 and later.
Table 200—addpbsprolicenseserver
pmcli addpbsprolicenseserver <systemnames> <licensefile>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
licensefile - full path to FLEXlm license file
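Example: addpbsprolicenseserver
For illustration (the system name and license file path are hypothetical), add a PBS Pro FLEXlm license server to the system master1:
pmcli addpbsprolicenseserver master1 /opt/pbs/licenses/altair_lic.dat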
addpbspromom
addpbspromom adds a pbspromom for the system(s).
Table 201—addpbspromom
pmcli addpbspromom <systemnames> <servername>
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
servername - the name of the server
addpbsproscheduler
Table 202—addpbsproscheduler
pmcli addpbsproscheduler
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
servername - the name of the server
addpbsproserver
addpbsproserver adds a PBSPro server to the system(s)
Table 203—addpbsproserver
pmcli addpbsproserver <systemnames> [--licensekey=LICENSEKEY]
[--licenseserver=LICENSESERVER]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
licensekey - Enter the License key here. --licensekey=LICENSEKEY
licenseserver - name of system hosting the PBS license key (for PBS Pro
version 9 and newer)
--licenseserver=LICENSESERVER
createpbsnodefile
createpbsnodefile creates a Qmgr file that defines all nodes in the cluster. This
should only be necessary for unmanaged PBS servers. createpbsnodefile will
only list compute nodes that are PBS clients (MOMs).
Note: The Qmgr file can be used to add nodes to the PBS server with the command
'qmgr -c < nodefile.qmgr'
removepbsprolicenseserver
removepbsprolicenseserver removes the PBS Pro FLEXlm license server for
system(s). Supported for PBSPro version 9.0 and later.
Table 205—removepbsprolicenseserver
pmcli removepbsprolicenseserver <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
removepbspromom
removepbspromom removes PBS Pro Mom from system(s).
Table 206—removepbspromom
pmcli removepbspromom <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
removepbsproscheduler
removepbsproscheduler removes PBS Pro scheduler for system(s).
Table 207—removepbsproscheduler
pmcli removepbsproscheduler <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
removepbsproserver
removepbsproserver removes the PBS Pro server for system(s).
Supported for PBSPro version 9.0 and newer.
setpbslicense
setpbslicense sets the PBS Pro license file on a system hosting a PBS Pro server.
Supported for versions before PBSPro version 9.0.
Table 209—setpbslicense
pmcli setpbslicense <servername> <licensekey>
ARGUMENT DESCRIPTION
servername - the name of system running the PBS server
licensekey - PBS license key
setpbslicensefile
setpbslicensefile sets the PBS Pro FLEXlm license file on a system hosting a PBS
Pro license server. Supported for PBSPro version 9.0 and later.
Table 210—setpbslicensefile
pmcli setpbslicensefile <servername> <licensefile>
ARGUMENT DESCRIPTION
servername - the name(s) of the server running PBS
licensefile - full path to FLEXlm license file
setpbsproclientsoffline
setpbsproclientsoffline marks listed nodes as OFFLINE even if they are currently in use.
The command will only communicate with the primary server in an HA setup.
Table 211—setpbsproclientsoffline
pmcli setpbsproclientsoffline <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
setpbsproclientsonline
setpbsproclientsonline clears OFFLINE or DOWN from listed nodes.
The listed nodes are "freed" for allocation to jobs. The command will only communicate
with the primary server in an HA setup.
addproductconflicts
Table 213—addproductconflicts
pmcli addproductconflicts <productid> <capabilityspec>
ARGUMENT DESCRIPTION
productid - product identification of a hw or sw product
capabilityspec - name or UUID of dependency capability
addproductprovides
addproductprovides adds a provides capability to a (hw or sw) product so that
other products may depend on it
Table 214—addproductprovides
pmcli addproductprovides <productid> <capabilityspec>
ARGUMENT DESCRIPTION
productid - software product identification
capabilityspec - name or UUID of dependency capability
addproductrequires
addproductrequires adds a require dependency to a (hw or sw) product.
Table 215—addproductrequires
pmcli addproductrequires <productid> <capabilityspec>
ARGUMENT DESCRIPTION
productid - software product identification
capabilityspec - name or UUID of dependency capability
addsoftware
addsoftware adds a software product to the system(s).
Table 216—addsoftware
pmcli addsoftware <systemnames> <productid> [force] [featurenames..]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
productid - software product identification
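Example: addsoftware
This echoes the workflow example later in this document; the node names and product id are illustrative:
pmcli addsoftware v[1-5] pm-5.7.1 "Scali MPI Connect"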
Table 217—addsoftwareoftype
pmcli addsoftwareoftype <systemnames> <productid> [force] [featurenames..]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
productid - software product identification
OPTIONAL DESCRIPTION
force - ignore product dependencies
featurenames - list of features
createdependencycapability
createdependencycapability adds a dependency capability. The capability
may be provided or required by software and/or hardware products.
Table 218—createdependencycapability
pmcli createdependencycapability <capabilityname> <description>
ARGUMENT DESCRIPTION
capabilityname - human readable name of the new capability
description - semantic description of capability
createlocalproduct
createlocalproduct creates and loads local software to the repository.
Table 219—createlocalproduct
pmcli createlocalproduct <productname> <filenames>
ARGUMENT DESCRIPTION
productname - product identification
filenames - file names. Lists of files should be space separated in quotes.
createupdatechannel
Create an update channel for a product. Update channels are used to distribute
updates for an existing product.
Table 220—createupdatechannel
pmcli createupdatechannel <productid> <name> [description]
ARGUMENT DESCRIPTION
productid - the productid is stored for use in updating the software. You
can find the productid by using “listproducts” on
page 213.
name - the name of the new channel.
OPTIONAL DESCRIPTION
description - an optional description of the channel.
--description=DESCRIPTION
Note: Best practice for createupdatechannel - after creating the channel, use
/opt/scali/libexec/scarepository.py --addupdates
to populate it with packages. See “Creating and Deploying an update
Channel with pmcli” on page 382 or "scarepository.py --help" for
details. Finally, subscribe nodes to the new channel with subscribechannel (see
“subscribechannel” on page 291).
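Example: createupdatechannel
For illustration (the channel name and description are hypothetical; substitute a real product id for <productid>), create a channel for a product and subscribe a set of nodes to it:
pmcli createupdatechannel <productid> rhel4-updates --description="RHEL4 errata"
pmcli subscribechannel n[001-004] rhel4-updates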
Table 221—listchannels
pmcli listchannels
ARGUMENT OPTIONAL
none none
listdependencycapabilities
listdependencycapabilities lists all the dependency capabilities. The capabilities
may be provided or required by software and/or hardware products.
Table 222—listdependencycapabilities
listfeatures
listfeatures returns a list of features for a product.
Table 223—listfeatures
pmcli listfeatures <productid>
ARGUMENT DESCRIPTION
productid - software product identification
listinstalledsoftware
listinstalledsoftware returns a list of software installed on system(s).
Table 224—listinstalledsoftware
pmcli listinstalledsoftware <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
listproductdependencies
listproductdependencies lists capabilities required by a product
listproducts
listproducts returns a list of available products for a product type.
Table 226—listproducts
pmcli listproducts <producttype>
ARGUMENT DESCRIPTION
producttype - product type; Both numerical and alphabetical product types
are accepted.
Example: listproducts
pmcli listproducts 7
pmcli listproducts 11
listproducttypes
listproducttypes returns a list of product types.
listretrieveelements
listretrieveelements lists retrieve elements for an OS product
Table 228—listretrieveelements
pmcli listretrieveelements <productid> <retmethod>
ARGUMENT DESCRIPTION
productid - product identification
retmethod - retrieve method
listretrievemethods
listretrievemethods lists retrieve methods for an OS product
Table 229—listretrievemethods
pmcli listretrievemethods <productid>
ARGUMENT DESCRIPTION
productid - product identification
listsubscribedchannels
listsubscribedchannels returns a list of subscribed channels for system(s).
Table 230—listsubscribedchannels
pmcli listsubscribedchannels <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
loadsoftware
loadsoftware loads software to the repository. You can use wildcards, e.g.
"/home/os/iso/SLES-10CD1.iso /home/os/iso/SLES-10-x86*"
removedependencycapability
removedependencycapability removes a “dependency capability” from a
product.
removeproductconflicts
removeproductconflicts removes a “conflicts capability” from a hardware or
software product
Table 233—removeproductconflicts
pmcli removeproductconflicts <productid> <capabilityspec>
ARGUMENT DESCRIPTION
productid - product identification of a hardware or software product
capabilityspec - name or UUID of dependency capability
removeproductprovides
removeproductprovides removes a “provides capability” from a hardware or
software product
Table 234—removeproductprovides
pmcli removeproductprovides <productid> <capabilityspec>
ARGUMENT DESCRIPTION
productid - product identification of a hardware or software product
capabilityspec - name or UUID of dependency capability
removeproductrequires
removeproductrequires removes a “requires capability” from a hardware or
software product.
Table 235—removeproductrequires
pmcli removeproductrequires <productid> <capabilityspec>
ARGUMENT DESCRIPTION
productid - product identification of a hardware or software product
capabilityspec - name or UUID of dependency capability
removesoftware
removesoftware removes specified software product from system(s).
removeupdatechannel
removeupdatechannel removes an update channel
Table 237—removeupdatechannel
pmcli removeupdatechannel <name>
ARGUMENT DESCRIPTION
name - the name of the channel to remove
subscribechannel
subscribechannel subscribes system(s) to a channel.
Table 238—subscribechannel
pmcli subscribechannel <systemnames> <channelname>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
channelname - the name of a channel
unsubscribechannel
unsubscribechannel unsubscribes system(s) from channel
Table 239—unsubscribechannel
pmcli unsubscribechannel <systemnames> <channelname>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
channelname - the name of a channel
upgradesoftware
See “Up-grading with pmcli” on page 370 for details on using
upgradesoftware.
upgradesoftwareoftype
upgradesoftwareoftype upgrades software products of a specific type for
system(s).
See “Up-grading with pmcli” on page 370 for details on using
upgradesoftware.
Table 241—upgradesoftwareoftype
pmcli upgradesoftwareoftype <systemnames> <newproductid>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
newproductid - software product identification
Table 242—addaccountingcollectorservice
addaccountingservice
addaccountingservice enables a BSD accounting service for the named
systems and servers. The service will perform BSD resource accounting and
transfer the data to the given accounting collector server for generating reports.
Table 243—addaccountingservice
addbatchsystemaccountingservice
addbatchsystemaccountingservice adds an accounting collector service to
system(s). The collector service can receive accounting data from accounting
services, and produce accounting reports.
Table 244—addbatchsystemaccountingservice
pmcli addbatchsystemaccountingservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
addconsolemanagementcontroller
addconsolemanagementcontroller adds a console management controller.
adddnsclientservice
adddnsclientservice adds a DNS client service to system(s).
Table 246—adddnsclientservice
pmcli adddnsclientservice <systemnames> <searchdomains> <servers>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
searchdomains - DNS search domains (Space separated)
servers - DNS servers (Space separated)
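Example: adddnsclientservice
For illustration (the node names, search domain and server addresses are hypothetical), add a DNS client service:
pmcli adddnsclientservice n[001-004] "example.com" "10.0.0.1 10.0.0.2"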
adddhcpclientservice
adddhcpclientservice adds DHCP client service to your system(s) for assigning
IP addresses automatically. Platform Manager will add a host entry to DHCP servers
on the subnets with interfaces bound to the service.
Table 247—adddhcpclientservice
pmcli adddhcpclientservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
adddhcpserverservice
adddhcpserverservice adds DHCP server service to the system(s)
addjbossasservice
addjbossasservice adds JBoss AS service to system(s).
Table 249—addjbossasservice
pmcli addjbossasservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
addldapclientservice
Adds LDAP client service to system(s)
Table 250—addldapclientservice
pmcli addldapclientservice <systemnames> <basedn> <servers>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
basedn - LDAP base DN (e.g. "dc=example,dc=com")
servers - LDAP servers (space separated)
Example: addldapclientservice
Add an LDAP client service to a system named “sc1435-4”:
pmcli addldapclientservice sc1435-4 dc=example,dc=com "server1 server2"
addmanagementengineservice
addmanagementengineservice adds management engine service to the
system(s)
Table 251—addmanagementengineservice
pmcli addmanagementengineservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
addmonitoringhistoryservice
addmonitoringhistoryservice adds a monitoring history service to the
system(s).
addmonitoringinbandservice
addmonitoringinbandservice adds a monitoring in-band server service to the
system(s).
Table 253—addmonitoringinbandservice
pmcli addmonitoringinbandservice <systemnames> [servername]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
OPTIONAL DESCRIPTION
servername - the name of relay server for Monitoring inband server. Default is to
find one automatically.
addmonitoringoutofbandservice
addmonitoringoutofbandservice adds a monitoring out-of-band server
service to the system(s).
Table 254—addmonitoringoutofbandservice
pmcli addmonitoringoutofbandservice <systemnames> <servername>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
servername - the name of the relay server for the monitoring out-of-band server
addmonitoringrelayservice
addmonitoringrelayservice adds a monitoring relay server service to the
system(s).
Table 255—addmonitoringrelayservice
pmcli addmonitoringrelayservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
addnisclientservice
addnisclientservice adds a NIS client service to system(s).
addnatservice
addnatservice adds a NAT service to the system(s).
Table 257—addnatservice
pmcli addnatservice <systemnames> [interface]
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
OPTIONAL DESCRIPTION
interface - the name of the internal interface to NAT.
--interface=INTERFACE
addntpservice
addntpservice adds an NTP service to the system(s).
Table 258—addntpservice
pmcli addntpservice <systemnames> <servers> [broadcastclient=False]
[broadcastaddresses] [peeraddresses]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
servers - The current system will synchronize with specific NTP
server(s) (space separated). You may enter the name of the
server or enter:
--servers=SERVERS
OPTIONAL DESCRIPTION
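Example: addntpservice
This echoes the workflow example later in this document; the node names and NTP server name are illustrative:
pmcli addntpservice v[1-5] install.test.scali.no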
Table 259—addpowermanagementcontroller
pmcli addpowermanagementcontroller <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
addremotesyslogclientservice
addremotesyslogclientservice adds a remote syslog client service to
system(s). This will redirect kernel messages to a remote syslog server service.
Table 260—addremotesyslogclientservice
pmcli addremotesyslogclientservice <systemnames> [servername]
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
OPTIONAL DESCRIPTION
servername - the name of server the messages should be redirected to. Default is
to find one automatically.
--servername=SERVERNAME
addrshservice
addrshservice adds an rsh service to system(s).
Table 261—addrshservice
pmcli addrshservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
addscarepositorycacheservice
addscarepositorycacheservice adds scarepository cache service to
system(s).
Table 262—addscarepositorycacheservice
pmcli addscarepositorycacheservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
Table 263—addsmgatewayservices
pmcli addsmgatewayservices <systemnames> [interface=eth1]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
OPTIONAL DESCRIPTION
interface - sets the name of the internal interface for the NAT. Default is
eth1
--interface=INTERFACE
addsshcredentialmanagementservice
addsshcredentialmanagementservice adds an SSH credential
management service to the system(s)
Table 264—addsshcredentialmanagementservice
pmcli addsshcredentialmanagementservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
bindservicetointerface
bindservicetointerface binds a service to the interface for a system(s).
Table 265—bindservicetointerface
pmcli bindservicetointerface <systemnames> <serviceid> <interface>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
serviceid - the name or uuid of service
interface - the name of an interface
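Example: bindservicetointerface
For illustration (the node name is hypothetical; the service id is one of those listed in the listhostedservices note below), bind the DHCP server service to eth0:
pmcli bindservicetointerface n001 Scali_DHCPServerService eth0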
disablescancesubsystem
disablescancesubsystem disables Platform Node Configuration Engine
subsystem for the system(s).
enablescancesubsystem
enablescancesubsystem enables Platform Node Configuration Engine
subsystem for system(s).
Table 267—enablescancesubsystem
pmcli enablescancesubsystem <systemnames> <subsystem>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
subsystem - the name of subsystem to be enabled
listdnsclientservice
listdnsclientservice returns a list of DNS services for the system(s)
Table 268—listdnsclientservice
pmcli listdnsclientservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
listdisabledscancesubsystems
listdisabledscancesubsystems returns a list of disabled Platform Node
Configuration Engine subsystems for the system(s).
Table 269—listdisabledscancesubsystems
pmcli listdisabledscancesubsystems <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
listhostedservices
listhostedservices returns a list of software services hosted by system(s).
Note: When running 'listhostedservices' you'll get an overview of all services on the
Platform Manager frontend and which interfaces host them. If eth1 is to be used
for application data transfer and eth0 for installation, monitoring and general
management, then make sure that these services have only eth0 listed:
• Scali_ManagementEngineService
• Scali_DHCPServerService
• Scali_RepositoryChannelService
• Scali_ScaMonitoringControlService
• Scali_ScaMonitoringHistoryService
• Scali_ScaMonitoringRelayService
• Scali_ScaliManageConfigurationService
• Scali_RemoteSysLogServerService
listnisclientservice
listnisclientservice returns a list of NIS client service for the system(s)
Table 271—listnisclientservice
pmcli listnisclientservice <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
listscancesubsystems
listscancesubsystems returns a list of Platform Node Configuration Engine
subsystems.
Table 272—listscancesubsystems
pmcli listscancesubsystems
ARGUMENT OPTIONAL
none none
removeservice
removeservice removes a service from system(s).
removesmgatewayservices
removesmgatewayservices removes all the services required for a Platform
Manager Gateway.
Table 274—removesmgatewayservices
pmcli removesmgatewayservices <systemnames>
ARGUMENT DESCRIPTION
systemnames - name(s) of system(s) {[..]}
unbindservicefrominterface
unbindservicefrominterface unbinds a service from the interface for
a system(s).
Table 275—unbindservicefrominterface
pmcli unbindservicefrominterface <systemnames> <serviceid> <interface>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
serviceid - the name or uuid of service
interface - the name of an interface
createswitch
createswitch creates network switch(es).
Table 276—createswitch
pmcli createswitch <systemnames> <ipspecs> <product> [username] [password]
[subnet]
ARGUMENT DESCRIPTION
systemnames - the name(s) of the switch(es) [..]
ipspecs - corresponding IP address(es) [..]
product - hardware product specification
OPTIONAL DESCRIPTION
password - password on the switch, if needed
--password=PASSWORD
username - username on the switch, if needed
--username=USERNAME
subnet - subnet for the ipaddress
--subnet=SUBNET
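Example: createswitch
For illustration (the switch name, IP address and credentials are hypothetical; substitute a real hardware product specification for <product>):
pmcli createswitch switch01 10.0.0.200 <product> --username=admin --password=secret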
disconnectconsoleswitchport
disconnectconsoleswitchport disconnects console switch port for system(s).
disconnectpowerswitchport
disconnectpowerswitchport disconnects power switch port for system(s).
Table 278—disconnectpowerswitchport
pmcli disconnectpowerswitchport <systemnames>
ARGUMENT DESCRIPTION
systemnames - the name of the switch(es) [..]
findgmtopology
findgmtopology discovers how the nodes are connected to the Myrinet switch.
Note: findgmtopology only works on the system monitoring the switch.
listswitches
listswitches lists all switch(es).
Table 280—listswitches
pmcli listswitches
ARGUMENT OPTIONAL
none none
removeswitch
removeswitch removes network switch(es) from the configuration.
Table 281—removeswitch
pmcli removeswitch <systemnames>
ARGUMENT DESCRIPTION
systemname - the name of system that will communicate with the switch
Table 282—setspeedoncomport
pmcli setspeedoncomport <systemnames> <speed>
ARGUMENT DESCRIPTION
systemnames - the name of system(s) {[..]}
speed - new speed of the serial port
useconsoleswitchport
useconsoleswitchport defines the console switch port for the system.
Table 283—useconsoleswitchport
pmcli useconsoleswitchport <systemnames> <switchname> <portnumbers>
[devicename=ttyS0] [consserver]
ARGUMENT DESCRIPTION
systemnames - the name(s) of the system(s) [..]
switchname - the name of switch to be used
portnumbers - port number(s) to be used
OPTIONAL DESCRIPTION
devicename - the name for the serial device on the server. Default is ttyS0.
--devicename=DEVICENAME
consserver - the name of console server
--consserver=CONSSERVER
usepowerswitchport
The usepowerswitchport command defines a power switch port for the system.
Table 284—usepowerswitchport
pmcli usepowerswitchport <systemnames> <switchname> <portnumbers>
[powerserver]
ARGUMENT DESCRIPTION
systemname - the name of the system(s) [..]
switchname - the name of switch to be used
addtemplate
addtemplate adds a new network installation template, read from stdin.
Table 285—addtemplate
pmcli addtemplate <name> <templatetype>
ARGUMENT DESCRIPTION
name - specifies the name of the template
templatetype - the type of template: kickstart, autoyast, etc., which can be
retrieved by using listtemplates. The template file is read from
standard input (stdin).
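Example: addtemplate
For illustration (the template name and file path are hypothetical); since the template is read from stdin, redirect the template file into the command:
pmcli addtemplate mykickstart kickstart < /tmp/mykickstart.cfg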
gettemplate
gettemplate returns the content of an existing template. You can learn how to
get values for id by running listtemplates (see listtemplates on page 311).
Table 286—gettemplate
pmcli gettemplate <id>
ARGUMENT DESCRIPTION
id - template id
listtemplates
listtemplates returns a list of existing kickstart/autoyast templates.
Table 287—listtemplates
pmcli listtemplates
ARGUMENT OPTIONAL
none none
removetemplate
removetemplate removes a network installation template. You can learn
how to get values for id by running listtemplates (see listtemplates on page 311).
Table 288—removetemplate
pmcli removetemplate <id>
ARGUMENT DESCRIPTION
id - template id
replacetemplate
replacetemplate replaces an existing network installation template.
Table 289—replacetemplate
pmcli replacetemplate <templateid>
ARGUMENT DESCRIPTION
templateid - Name or UUID of template to replace
The tables below list the console command options and the console information fields.
console [optional]
OPTIONAL DESCRIPTION
-7 - strip the high bit off all console data, whether from user input
or the server, before any processing occurs. Disallows escape
sequence characters with the high bit set.
-a (A) - access a console with a read-write connection (the default
setting). The connection is dropped to "spy mode" if someone
else is attached read-write.
-bmessage - send broadcast message to all users on each server
-Bmessage - send broadcast message to users on the primary server
-C config - Override per-user config file
-c cred - load an SSL certificate and key from the PEM-encoded file
cred
-d - disconnect the user(s) specified by [user][@console]. You may specify the target
as:
• user - disconnect the user regardless of which
console they are using
• @console - disconnect all users of a specific
console
• user@console - disconnect a specific user of a
specific console.
-D - enable debug output, sent to stderr
-e esc - set the initial two-character escape sequence to the characters
represented by esc. Any of the forms output by cat(1)’s -v
option are accepted. The default value is "^Ec".
-E If encryption has been built into the code (--with-openssl),
encrypted client connections are required by default. This
option disables any attempt to create an encrypted
connection. Use the -U option if you would like to use
encrypted connections and have encryption supported on
your server, but want to revert to unencrypted connections
otherwise.
-f (F) - force any existing read/write connections into "spy mode".
-h - print help message to the screen
-i (I) - display information in machine-parseable form (see below)
console [optional]
OPTIONAL DESCRIPTION
-l user - set the login name used for authentication to user. By
default, console employs either $USER or $LOGNAME, if its value
matches the user’s real uid, else the name associated
with the user’s real uid.
-M master - the console client program polls the master server as the primary
server rather than the default set at compile time (typically
"console"). The default master may be changed at compilation
time using the "--with-master" option. If you use "--with-uds"
to enable UNIX domain sockets, this option points console to
the directory which holds those sockets. The default master
directory ("../tmp/conserver") can be changed at compilation
time using "--with-uds".
-n - do not read system-wide config file
-P - display the PIDs of the master daemon process on each server.
-p port - connect to this port. The default port can be
changed at compilation time using the "--with-port" option.
Field Description
name - the name of the console
hostname - hostname of the child process managing the console
pid - pid of the child process managing the console
socket number - the socket number of the child process managing the console
type - the type of console
“/” means the console is a local device
| - means a command
! - means a remote port
console-details - the values are comma-separated and depend on the type of
console.
Local devices have values of:
the device file
the baud rate/parity
the file descriptor for the device.
Commands have values of
the command
the command’s pid
the pseudo-tty
the file descriptor for the pseudo-tty
The remote ports have values of
the remote hostname
remote port number
"raw" or "telnet" protocol
file descriptor for the socket connection
user-list - the comma separated bundles containing details of each user
attached to a console:
connection type
r means read-only
w means read-write
s means suspended
username
hostname
user’s idletime
"r" and "s" users’ requests for read-write mode
state - console state - "up", "down" or "init"
perm - type of permission.
"ro"(read-only) is returned ifthe device is a local device AND the
user’s permissions on the server allow the user to read the file, but
not write.
"rw" means you have read-write permission
log-file details - the comma-separated values are:
log-file name
logging enabled or not - "log" or "nolog" - toggled by "^EcL"
activity logging enabled or not - "act" or "noact", the "a" timestamp
option
timestamp interval
logfile descriptor file
break the default break sequence used for the console
reup There are two values:
“autoup” - the server is down and the automatic reconnection code is
at work.
"noautoup" - either the node is up or automatic reconnection code is
not currently running
aliases comma-separated list of console aliases
options returns a comma-separated list of active options for a console
initcmd initcmd configuration option for the console.
idletimeout idletimeout configuration option for the console.
idlestring idlestring configuration option for the console
The client-server console application reads configuration information from the
system-wide configuration file (console.cf), then the per-user configuration file
(.consolerc), and then applies command-line arguments. Each configuration location
can override the previous one. The same happens when parsing an individual file.
The configuration file is read by the same parser as the one that reads conserver.conf.
You should check that help file for parser details.
Configuration Blocks
Console recognises the following configuration blocks:
• config <hostname>|<ipaddr> defines a configuration block for the specified
client host or the specified IP address.
• escape esc sets the escape sequence (see “-e esc”)
• master master sets the default master to master (see “-M master”)
• port sets the default port to port (see “-p port”)
• sslcredentials filename sets the SSL credentials file location (see “-c cred”)
• sslenabled sets whether or not encryption is used in connections (see “-E”).
Valid values are yes|true|on|no|false|off.
• sslrequired sets whether or not encryption is required in connections ( see
“-U” ). Valid values are yes|true|on|no|false|off
• striphigh sets whether or not to strip the high bit off all data received (see
“-7”). Valid values are yes|true|on|no|false|off.
• username <user> sets the username passed from the server to the user (see
“-l user”)
• terminal <terminal_type> defines a configuration block when using a
specified type of terminal.
• attach string| "" prints a string when successfully connected to a console.
Character substitutions will be performed based on the attachsubst value and
occur before interpretation of the special characters listed below. If you use
the null string ("") no string will be printed.The string is a simple sharacter
string with the exception of ’/’ and ’^’:
• \a - alert
• \b - backspace
• \c - character c
• \f - form-feed
• \n - new line
String Replacement
For example:
• u- username
• c- console name
If the string replacement is less than n characters the value will be padded on its
left with space characters. f must be ’s’.
Note: If you use "*" as the value for <hostname> or <ipaddr>, the configuration block will
be applied to all client hosts.
Note: If you use "*" for the terminal type, this block will be applied to all terminal types.
Numeric Replacement
Numeric replacement is not yet implemented. If the numeric replacement is less
than n characters in length, it will be padded with 0’s if n begins with a 0. Otherwise
it will be padded, as with string replacements, with space characters.
f must be one of the following:
• ’d’ -a decimal value
• ’x’ -a lower-case hexadecimal value
Escape Sequences
The connection can be controlled by a two-character escape sequence, followed by
a command. The default escape sequence is
"CONTROL-E c" (octal 005 143).
The escape sequences are actually processed by the server. See the conserver
documentation for more information.
When you run a local command via "^Ec|", you can enter "^C" to send a SIGHUP,
"^\" to send a SIGKILL command and "o" to toggle the display of the console data.
Argument Description
. - disconnects the user from the console
; - moves the user to another console.
a - attaches a read-write connection if no one else is connected.
b - sends a broadcast message to all users on this console.
c - toggles flow control. It is strongly advised that you do not use this.
d - shuts down the current console.
ecc - changes the escape sequence to the next two characters
f - forcibly attaches a read-write connection
g - returns group information
i - dumps information
L - toggles logging on and off
? - lists the available break sequences
0 - sends the break sequence associated with this console
1-9 - sends the specific break sequence
m - displays message of the day
Argument Description
o - closes (if open) and reopens the line to clear errors - silo overflows - and
the log file
p - replays the last 60 lines of output
r - replays the last 20 lines of output
s - switches to spy mode (read only)
u - shows status of users/hosts in this group
v - shows the version of the group server
w - returns a list of users on this console
x - examines this group’s devices and modes
z - suspends this connection
| - attaches a local command to the console
? - displays a list of commands
^M (return) - continue, ignores the escape sequence
^R (CTRL-R) - replays the last line only
\ooo - sends character having an octal code ooo (must specify three octal
numerals)
If any other character is entered after the escape sequence, all three characters
will be discarded. Note that a line-break or a down command can only be sent from
a read-write connection. To send another escape sequence through the connection,
you must redefine the outer escape sequence or use ^Ec\ooo to send the first
escape character before typing the second character directly.
Example: console -u
Entering:
console -u
would result in something like:
expert   up   ksb@mentor
tyro     up   <spies>
mentor   up   <none>
sage     up   fine@cis
Example: console -w
Entering:
console -w
results in:
kbs@extra    attach   2 days   expert
file@cis     attach   21:46    sage
drm@alice    spy      0:04     tyro
The third column displays the idletime of the user. Either hours, minutes, or number of
days may be displayed.
Help and more examples can be found on the conserver home page at
http://www.conserver.com/
WARNING— Never run console from within a console connection without uniquely
setting each escape sequence to be different from the others.
The complexity of managing clusters increases with the number of nodes, so you need
tools that let you operate on a set of nodes in parallel, issuing commands to them as if they
were a single entity. Platform Manager provides a suite of shell tools that can be run in
parallel on nodes in your data center.
The target nodes for all programs in the ScaSH suite may be defined on the command line.
Note that ScaSH supports bracket and group-name expansion for node
name specification. If node names are not specified on the command line, the nodes
reported by the scahosts program will be used.
All the tools in the ScaSH suite are based on a client/server implementation where
xinetd starts the server processes as a standard service. You control access through
PAM (Pluggable Authentication Modules). Both are automatically configured by
Platform Manager.
Topics in this chapter include:
Grid vs. Tree Routing Topologies on page 412
scacp on page 413
scagroup on page 414
scagroup File Format on page 415
scahosts on page 415
scakill on page 416
scaps on page 417
scarcp on page 418
scarup on page 420
scash on page 421
ScaSH configuration file on page 424
plasub on page 425
scatop on page 426
scawho on page 428
The tree topology is the default. For normal scash operations the tree topology is usually
more suitable, given bandwidth and latency issues. For copying larger files a grid topology
is often quicker.
The nodes are numbered in a sequence established when the command is initiated,
starting with the originating node, which has the value -1.
scacp
The scacp program copies file(s) locally on nodes in a Platform cluster.
Example: scacp
Use the scacp command to copy a file that is located on each node to another directory
on the same node.
[root@rigel root]# scacp /etc/hosts /tmp/hosts.bu
[root@rigel root]#
scagroup
When utilities require a nodelist as a parameter you can build the nodelist from node
names, group aliases and bracketed expressions. The group alias will be resolved to a list
of node names as specified in the scagroup configuration file(s). The system-wide
config-file /opt/scali/etc/scagroup.conf is read first. If a user-specific config-file,
~/.scali/scagroup.conf, exists, its content will be combined with the system-wide
definitions.
scahosts
The scahosts program located in /opt/scali/bin, checks a number of hosts for availability.
An available host is one that answers to a scash connection request. The program prints
the name of the available hosts. There are several ways you can specify which hosts the
program checks. You may specify hosts on the command line when running the program,
either with the -f option which gives the path to a file containing host names, or with a list
of hosts at the end of the command. If neither the -f option, nor the hostlist is present,
scahosts will look for hosts in the file $HOME/.scahosts. If this file is not available, it will
look for hosts in the file /opt/scali/etc/ScaSH.scahosts.
Use one name per line in the file containing host names
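Example: scahosts
For illustration (the node names are hypothetical), check which of the listed hosts answer a scash connection request:
[root@rigel root]# scahosts n[1-4]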
scakill
scakill kills processes running in a Platform cluster.
scaps
scaps prints processes on nodes in a Platform cluster.
To show which processes are running on all default nodes of the system, use the scaps
command. You use the -u switch to show only the processes belonging to a certain user,
in this case, ole:
[root@rigel root]# scaps -u ole
r7 : ole 14906 0.0 0.0 5264 1464 pts/0 S 16:11 0:00
-bash
scarcp
scarcp copies file(s) between local machine and nodes in a Platform cluster
Note: You may specify the command to be used when copying files with the
-c option.
To copy a file from the current node, or frontend, to all default nodes of the system, use the
scarcp (remote copy) command. To copy the local file
/os/i686/kernel-smp-2.4.20-19.8.i686.rpm
to the /tmp directory on all the default nodes, enter:
[root@rigel root]# scarcp /os/i686/kernel-smp-2.4.20-19.8.i686.rpm /tmp
Use scarcp with the -r option to create unique files at the destinations when you want
to copy files from a selection of nodes to the local machine. Copy files with a common
path /etc/hosts from a selection of nodes to a different selection of nodes. All source
files are copied to each of the destinations using -r to avoid overwriting the files on the
destination nodes:
[root@rigel root]# scarcp -r KEY n[1-10]:/etc/hosts
n[11-14]:/tmp/hosts.KEY
scarup
scarup prints up-time/load information about nodes in a cluster.
scash
The scash utility located in /opt/scali/bin is a UNIX command line utility which runs the
same shell command on a set of Platform system nodes. You may specify the target nodes
in a configuration file, or on the command line (see “scahosts” on page 415).
The command options are listed below.
Run:
rpm -i /tmp/kernel-smp-2.4.20-19.8.i686.rpm
on selected nodes of the system, but disconnect from the terminal, run it in the
background and return to the issuing shell. Use the -a switch to specify a
command to be run in the background on each node.
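A sketch of the invocation described above, assuming the -a switch simply precedes the command to run:
[root@rigel root]# scash -a rpm -i /tmp/kernel-smp-2.4.20-19.8.i686.rpm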
Run an “rpm -q glibc” command on selected nodes of the system. Select nodes by using
the -n switch instead of the default nodes of the system.
[root@rigel root]# scash -pn "r1 r2 r4 r6 r11" rpm -q glibc
Run an “uname -r” command on the default nodes of the system. You want each node
name prefix to correspond to output. Specify this prefix by using the -p switch in scash.
Run an "uname -r" command on the default nodes except specified nodes to be
excluded. Exclude nodes by using the -x switch. In the example below, we have used
bracket expansion when specifying the nodes.
[root@rigel root]# scash -px "r[1-6]" -- uname -r
r7 : 2.4.18-27.8.0smp
r8 : 2.4.18-27.8.0smp
r9 : 2.4.18-27.8.0smp
r10 : 2.4.18-27.8.0smp
r11 : 2.4.18-27.8.0smp
r12 : 2.4.18-27.8.0smp
r13 : 2.4.18-27.8.0smp
r14 : 2.4.18-27.8.0smp
r16 : 2.4.18-27.8.0smp
r15 : 2.4.18-27.8.0smp
[root@rigel root]
Define the fanout factor used to limit the number of connections from one
host when running a scash command. When the number of nodes given
as arguments to the scash command exceeds the fanout factor, the hosts
will be divided into groups, where fanout gives the number of groups, and
the scash command will be run as a scash command in each group in
parallel. Hence, the number of connections from one host will be limited
to fanout connections.
-p prefixprint=<flag>
Print node name before each line in each output block. Legal value for flag
is either ’true’ or ’false’.
-t <timeout> connect_timeout=<timeout>
plasub
The command plasub is a wrapper script for submitting jobs through Scali MPI Connect.
scatop
To show all processes using more than a specified CPU usage percentage, use the scatop command.
Example: scatop
Enter:
[root@rigel root]# scatop
This renders a report table such as the one below:
Node:  PID   USER PRI NI  SZ    SIZE  RSS   STAT %CPU %MEM TIME COMMAND
r1 : 20866 ole 31 -10 34042 54756 54744 R< 32.5 1.7 0:13 fluent_scampi.6
r1 : 20867 ole 27 -10 30246 54612 54604 R< 97.2 1.7 0:40 fluent_scampi.6
r2 : 20430 ole 27 -10 33619 46460 46452 R< 97.7 1.4 0:41 fluent_scampi.6
r2 : 20431 ole 25 -10 30237 46588 46576 R< 83.7 1.5 0:35 fluent_scampi.6
r3 : 21780 ole 27 -10 33107 47696 47688 R< 97.0 1.5 0:41 fluent_scampi.6
r3 : 21781 ole 25 -10 29522 47628 47620 R< 97.0 1.5 0:41 fluent_scampi.6
r4 : 19517 ole 27 -10 32609 47744 47736 R< 95.5 1.5 0:41 fluent_scampi.6
r4 : 19518 ole 25 -10 29011 47628 47620 R< 95.1 1.5 0:40 fluent_scampi.6
r5 : 20656 ole 19 -10 32299 47740 47732 R< 95.1 1.5 0:40 fluent_scampi.6
r5 : 20657 ole 19 -10 28501 47632 47624 R< 95.2 1.5 0:40 fluent_scampi.6
r6 : 20692 ole 27 -10 31786 47728 47720 R< 97.2 1.5 0:41 fluent_scampi.6
r6 : 20693 ole 26 -10 27930 46384 46376 R< 98.1 1.4 0:42 fluent_scampi.6
r7 : 19438 ole 26 -10 31072 47732 47724 R< 94.3 1.5 0:41 fluent_scampi.6
r7 : 19439 ole 25 -10 27474 47620 47612 R< 93.8 1.5 0:41 fluent_scampi.6
scawho
scawho prints user names and number of processes on nodes in a Platform cluster.
rpm -U scacpg-1.1.2-29.rhel4.noarch.rpm
Note: Using ScaCPG requires root-privileges, and the directory to be
packaged must be owned by root.
ScaCPG treats directory names in the same way as the tar archiving utility, i.e. use of a
directory name always implies that the subdirectories below should be included when
building an RPM, and the same directory structure is (if necessary) recreated on the local
machine when the RPM is installed or upgraded. Since the path leading to the directory to
be packaged is not included in the package, the desired target directory structure must be
created in the package directory. For example, if you want to package /etc/passwd you must
copy it to /home/user1/package, resulting in /home/user1/package/etc/passwd, and then
specify /home/user1/package as the directory to be packaged. When the resulting RPM is
unpacked it will deposit its contents in /etc/passwd.
This ability of packaged material to be copied exactly as the original maintains the
homogeneous setup of cluster nodes that is key to supporting a single system
environment with Platform’s cluster management (Platform Manager). While other
techniques for distributing material are available, ScaCPG avoids manual steps that
are prone to introduce errors. More information about the RPM mechanism is available from
http://www.rpm.org.
Interfaces
ScaCPG has both a command line interface and a graphical, NEWT based interface. The
option set is richer in the command line variant, while the graphical variant is more
intuitive. ScaCPG launches its graphical interface when there are no arguments on the
command line.
Enter
[root@rigel root]# scacpg --help
to get a list of the arguments recognized by ScaCPG on the command line.
This view appears in the terminal where /opt/scali/bin/scacpg was started. As can be seen
from the arguments listed above the fields "Name", “Directory to package” and “Version”
are mandatory.
The values entered in the dialog are copied to the package and can be retrieved with
rpm -qip <file>.
Error handling
ScaCPG includes simple error handling. The directory to package is checked for validity. More
complex error situations lead to a log-file in /tmp which can be inspected before trying
again.
# pmcli showproductstatus
WARNING—Do NOT activate the product key on the compute nodes. If you install
the product key on one of the compute nodes first, you will activate the license for
a license server on it. Activation is irreversible. You can install/activate the product
key on ONLY ONE server. Install the product key ONLY on the Platform Manager
frontend or on the Gateway.
• Select the license you wish to activate from the table (a License with status
'Need activation')
Enter:
pmcli activateproductkey <productkey> <company> <contactemail>
or
pmcli activateproductkey <productkey> <company> <contactemail>
[street] [street2] [city] [state] [postalcode] [country]
[contactname] [contactphone]
then
pmcli reconfigure my-system
# pmcli activateproductkey
3YYH-G3LI-MEAH-G3LI-MEAA-KAAA-AAAA-HYUY "Platform Inc"
"support@platform.com" contactname="my name"
WARNING—Do not use the -n option on the license server. The -n option resets a flag in
the config file that identifies the server as the license server to 0, so that server is no longer
designated in the configuration file as a license server. You will then get the error message
“This command should only be run on scalm_net_server” when you use -p on the license
server.
<bracket> == "["<number_or_range>[,<number_or_range>]*"]"
<number_or_range> == <number> | <from>-<to>[:<stride>]
<number> == <digit>+
<from> == <digit>+
<to> == <digit>+
<stride> == <digit>+
<digit> == 0|1|2|3|4|5|6|7|8|9
If <to> or <from> contains leading zeros, then the expansion will contain leading zeros
such that the width is constant and equal to the larger of the widths of <to> and
<from>.
You can depict a range of consecutively named nodes, for example: n00, n01 and n02,
by entering:
n[00-02]
If you need to step through a range of nodes to depict every third node, for example:
n00, n03, n06, and n09 in a range of 11 nodes (n00 through n10), then enter:
n[00-10:3]
A-2 Grouping
Utilities that use scagroup will accept a group alias wherever a host name or hostlist is
expected. The group alias will be resolved to a list of host names as specified in the
scagroup config file. If there exists a file .scagroup.conf in the user's home directory, this
will be used. Otherwise, the system default file /opt/scali/etc/scagroup.conf
will be used.
Each group has the keyword group at the beginning of a line followed by a group alias and
a list of host names included in the group. The list may itself contain previously defined
group aliases which will be recursively resolved. The host list may use bracket expressions
which will be resolved as specified above.
If an entry starts with ’!’ the entry will be excluded (instead of included). The file may
contain comments, which are lines starting with #.
The examples below assume that you have six nodes named “n00” through “n05”.
You want to establish three groups. The first group is a single node that you will call
“master”. The second group contains multiple nodes that you will call “slaves”. “All” is
a super set of “master” and “slaves”. Enter:
group master n00
group slaves n[01-32]
group all master slaves
Create a group named “fruits” containing both “apples” and “oranges” plus node 05:
group fruits apples oranges n05
Note: You can add as many elements as you like, dependent upon line length.
Create a group named “almost_all” from “fruits” that does not contain node 03
group almost_all fruits !n03
# ’n01 n02 n04 n05’
Create a group named “one” from “fruits” containing all nodes except those in group
“almost_all”.
group one fruits !almost_all
# ’n03’
If a system has PMFE installed and you want to update the OS beyond RH ES 4 U4 and
errata there are two solutions:
• When installing OS updates, make sure to install the complete set of updates
with internal dependencies. For instance, if you upgrade glibc you also need
to install glibc-devel.
• Provide Platform Manager a repository with all available software updates.
Platform Manager will then use packages from this repository when OS
packages must be installed to fulfill dependencies. Then add an “--updates
<directory>” option when running the Platform Manager installation to
specify the location of the OS updates.
Platform recommends strongly that you upgrade to PBS Pro 8. We refer to Altair PBS
Pro 8.0 Administrator's Guide, chapter 5: “Upgrading PBS Professional”. The upgrade
from 7.1.xx to 8.0 is not transparent. You will find a good many configuration tips in
chapter 5 of the PBS Pro Guide.
CAUTION—In addition to reading Chapter 5 first, you must remember to stop any processes and take
a backup copy before you upgrade ANY software.
Routinely backing up your entire system is considered good practice. Sometimes a new
installation comes at a time when there has been a lot of activity since the last back
up. As with any installation you should back up your current set up before installing a
new version.
Dumping the database can get complicated because of the variations in PostgreSQL
versions across Linux distributions. Unfortunately, there are compatibility issues among
versions of PostgreSQL with regard to utilities such as pg_dump and pg_restore, so you
may run into procedural problems when using them. Platform will continue to develop a
procedure and a set of wrapper scripts to make pg_dump and pg_restore easier to use.
There is a much easier way to back up and restore the database, but:
• Your Platform Manager Database will be down while dumping. This should not
be a problem as long as the GUI is not running and no PM actions are under
execution (for example: installations/discovery/reconfigure).
• A PostgreSQL database is in the end just files, but you must make sure that the
PostgreSQL daemon (postmaster) is _not_ running.
• The dump will be unreadable, but then the pg_dump optimized file format is
also unreadable.
• This procedure cannot be used for upgrading the PostgreSQL version (for
example: as a consequence of reinstalling the PMFE with a different OS
distribution).
This procedure will include all PostgreSQL configuration files set up by Platform Manager.
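A minimal sketch of that file-level dump, assuming the PostgreSQL data files live under
/var/opt/scali/pgsql (the data directory path is an assumption and depends on your
installation):
# Stop the Platform Manager PostgreSQL service so that postmaster is not running
/etc/init.d/scacim-pgsql stop
# Copy the database files; the directory below is a placeholder for your data directory
tar czf /root/pm-db-backup.tar.gz /var/opt/scali/pgsql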
# Restart everything
/etc/init.d/scacim-pgsql start
You will need a new license (activation key) to be able to perform the upgrade. If you
have not received one already, contact your local sales representative, or
sales@platform.com.
/opt/scali/sbin/scauninstall
Follow the instructions included in the 5.x.x tar-ball to run the bootstrap and start
the Platform Manager 5 installation.
SSH_PASSWORD=<rootpw>
pmcli discover 10.0.0.[2-4]
SSH_PASSWORD=<rootpw>
pmcli installmanagementsoftware node[001-003]
/etc/init.d/scance restart
#/opt/scali/libexec/scarepository.py --clear
os_rhel4_u5_x86_64_ws
You can determine the channel id of the respective Operating System using:
There is functionality in 5.6.1 that may require a few changes in your node configuration.
After you complete the upgrade, the first thing you should do is to change the node type
of the nodes to Altix, by running:
pmcli listproducts 11
For a list of distributions you can run:
pmcli listproducts 7
For a list of available images you can run:
pmcli listimages
--dnsservers=DNSSERVERS
nicname - the name of the nic; default "nic1"
--nicname=NICNAME
laninterface - the name of the interface; default "eth0"
--laninterface=LANINTERFACE
smgatewayname - the name of the Platform Manager gateway; default Cimserver
--smgatewayname=SMGATEWAYNAME
nisdomain - the name of the NIS domain; default is not to configure NIS
--nisdomain=NISDOMAIN
nisservers - the names of the NIS servers (space separated)
--nisservers=NISSERVERS
subnet - subnet for the ipaddress
--subnet=SUBNET
Enter all the values from our list in the script below:
# Create nodes
pmcli createnode v[1-5] fruitbowl DELLPE1750 10.0.0.[101-105] 10.0.0.1 rhel4_u3_i386_ws test.orchard.com 175.15.0.19
# Add a subnet called bmcnet.
pmcli addsubnet bmcnet 175.20.0.0 255.255.0.0
# Add bmc power.
pmcli enablebmcpower v[1-5]
# Add bmc console.
pmcli enablebmcconsole v[1-5]
# Create a cluster.
pmcli createcluster FruitSalad_cluster
# Add multiple (five) nodes to the cluster.
pmcli addnodetocluster v[1-5] FruitSalad_cluster
# Import ethernet (MAC) addresses.
pmcli importethers < /tmp/ethers
# Install software on the nodes.
pmcli addsoftware v[1-5] pm-5.7.1 "Scali MPI Connect"
pmcli addsoftware v[1-5] pm-5.7.1 "NTP"
pmcli addsoftware v[1-5] pm-5.7.1 "NISClient"
pmcli addntpservice v[1-5] install.test.scali.no
pmcli addnisclientservice v[1-5] test.scali.no install.test.scali.no
# Restart
/etc/init.d/scance restart
# Create nodes
pmcli createnode v[1-5] fruitbowl DELLPE1750 10.0.0.[101-105] 10.0.0.1 rhel4_u3_i386_ws test.orchard.com 175.15.0.19
# Add a subnet called bmcnet.
pmcli addsubnet bmcnet 175.20.0.0 255.255.0.0
# Add bmc power.
pmcli enablebmcpower v[1-5]
# Add bmc console.
pmcli enablebmcconsole v[1-5]
# Add a gateway
pmcli addsmgatewayservices <systemnames> [interface=eth1]
# Create a cluster.
pmcli createcluster FruitSalad_cluster
# Add multiple (five) nodes to the cluster.
pmcli addnodetocluster v[1-5] FruitSalad_cluster
# Import ethernet (MAC) addresses.
pmcli importethers < /tmp/ethers
# Install software on the nodes.
pmcli addsoftware v[1-5] pm-5.7.1 "Scali MPI Connect"
pmcli addsoftware v[1-5] pm-5.7.1 "NTP"
pmcli addsoftware v[1-5] pm-5.7.1 "NISClient"
pmcli addntpservice v[1-5] install.test.scali.no
pmcli addnisclientservice v[1-5] test.scali.no install.test.scali.no
# Restart
/etc/init.d/scance restart
Table 2—addethernetinterface
pmcli addethernetinterface <systemnames> <nicname> <lanendpoint> [hostspecs]
[ipspecs] [subnet]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
nicname - name of nic (e.g. "nic1").
lanendpoint - name of the lanendpoint (e.g. "eth0")
OPTIONAL DESCRIPTION
hostspecs - (optional) hostname(s) [..]
--hostspecs=HOSTSPECS
ipspecs - (optional) ip address(es) [..]
--ipspecs=IPSPECS
subnet - subnet for the ipaddress
--subnet=SUBNET
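For illustration only, an invocation that follows the syntax above; the nic name,
interface, addresses and subnet are made-up values, reusing the v[1-5] systems from the
earlier script:
# Add a second interface on eth1 to nodes v1-v5 (illustrative values only)
pmcli addethernetinterface v[1-5] nic2 eth1 --ipspecs=10.0.1.[101-105] --subnet=bmcnet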
Table 3—addsoftware
pmcli addsoftware <systemnames> <productid> [force] [featurenames..]
ARGUMENT DESCRIPTION
systemnames - name of system(s) {[..]}
productid - software product identification
OPTIONAL DESCRIPTION
force - ignore product dependencies
featurenames - list of features
Use the information from the first part of the script to fill in the arguments for the
addsoftware command as in the second half of the script below:
<systemname> <MAC_Address>
# pmcli importethers
The table will be read from stdin.
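As an illustration of this format, a hypothetical /tmp/ethers file for three of the
v[1-5] nodes used earlier (the MAC addresses are placeholders), imported with the same
command shown in the scripts above:
# /tmp/ethers - one "<systemname> <MAC_Address>" pair per line (placeholder MACs)
v1 00:11:22:33:44:01
v2 00:11:22:33:44:02
v3 00:11:22:33:44:03
pmcli importethers < /tmp/ethers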
A
AMD64 - the 64 bit Instruction set architecture (ISA) that is the 64 bit extension to the Intel
x86 ISA. Also known as x86-64. The Opteron and Athlon64 from AMD are the first
implementations of this ISA.
ARCH - Template keyword - Architecture for this node (i386, x86_64 or ia64)
B
BOOTDEVICE - Template keyword - the device name of the network device used for the
network installation (e.g. "eth0")
BOOTFILESYSTEMTYPE - Template keyword - Filesystem type to be used on the /boot
filesystem (fat32 on EFI systems, ext2 on other systems)
BOOTLOADER - Template keyword - the Bootloader used on this system (elilo on ia64,
grub on other systems)
C
cluster - A cluster is a set of interconnected nodes functioning as a single server
conserver - The Platform Manager console server which relays messages to and from all
the console capable BMCs and the console switches in the datacenter. The conserver is
installed on the Platform Manager frontend and on Platform Manager Gateways.
CONSOLEKERNELOPTIONS - Template keyword - kernel options for console redirection
(or empty if console redirection is disabled)
CONSOLEPORTNR - Template keyword - the number of the com port used for console
redirection (1 for com1...)
CONSOLEPORTSETTING - Template keyword - settings for the com port used for console
redirection
D
DAPL - Direct Access Provider Library - DAT Instantiation for a given interconnect
DAT - Direct Access Transport - Transport-independent, platform-independent Application
Programming Interfaces that exploit RDMA
DET - Direct Ethernet Transport - Platform's DAT implementation for Ethernet-like devices,
including channel aggregation
E
EM64T - the Intel implementation of the 64 bit extension to the x86 ISA. See AMD64.
F
fencing - a method of denying an errant node access to resources
frontend - a computer outside the cluster nodes dedicated to run configuration, monitoring
and licensing software
G
GATEWAY - Template keyword - default gateway
GM - a software interface provided by Myricom for their Myrinet interconnect hardware.
H
HA - High Availability - a feature that provides a system with a redundant counterpart
that will assume the system's role if the original fails.
HASSELINUX - Template keyword - a method of denying an errant node access to
resources
HCA - Host Channel Adapter. Term used by Infiniband vendors to refer to the
hardware adapter
HPC - High Performance Computer.
HOSTNAME - Template keyword - the host name of the system being installed
HTTPREPOSITORYURL - Template keyword - URL of the software repository for the
operating system, available via HTTP
I
IA32 - Instruction Set Architecture 32 - the Intel x86 architecture
IA64 - Instruction Set Architecture 64 - the Intel 64-bit architecture (Itanium, EPIC)
IMAGE - Template keyword - file name for the image to be installed for image based
installation
Infiniband - a high speed interconnect standard available from a number of vendors
INITRD - Template keyword - file name for the initrd image (relative to the tftp-root)
INSTALLSERVER - Template keyword - IP-address for the installation server (the server
controlling the network installation process)
K
KERNEL - Template keyword - file name for the kernel image (relative to the tftp-root)
KERNELBOOTOPTIONS - Template keyword - Extra kernel options
L
LDAP - see Lightweight Directory Access Protocol
LIBXMLURL - Template keyword - the URL of the libxml rpm package to be installed on
this node
Lightweight Directory Access Protocol - a means of querying and modifying directory
trees using messages encoded in the BER binary format.
LSF - Platform LSF is software for managing and accelerating batch workload processing
for compute- and data-intensive applications.
M
MPI - Message Passing Interface - De-facto standard for message passing
MPI process - Instance of application program with unique rank within MPI_COMM_WORLD
mpid - the Scali MPI Connect daemon is installed on all nodes that should run MPI programs
mtu - maximum transmission unit. The default is 1500
Myrinet™ - an interconnect developed by Myricom. Myrinet is the product name for the
hardware. (See GM).
N
NAS - Network Attached Storage - uses file-based approach to storage. See also SAN
NETBOOTLOADER - Template keyword - network bootloader for this hardware platform
(efi, pxe, etherboot)
NETINSTALLFILE - Template keyword - URL of the configuration file for the network
installation (kickstart file, autoyast file or scalamari file)
NETMASK - Template keyword
NFS - Network File System - protocol to allow client servers to access files on other host
servers over a network.
NIC - Network Interface Card
NISDOMAIN - Template keyword - NIS domain for this node. Empty string if NIS is not
enabled
O
OEM- Original Equipment Manufacturer
P
Point-to-point - a dedicated link that connects two nodes of a network
power - the Platform Manager power utility, which controls power on nodes via
power-capable BMCs or power switches in the data center. It is installed on the Platform
Manager frontend and on Platform Manager Gateways.
power - a generic term that covers the PowerPC and POWER processor families. These
processors are both 32- and 64-bit capable. The common case is to have a 64-bit OS that
supports both 32- and 64-bit executables. See also PPC64
POWER - the IBM POWER processor family. Platform supports versions 4 and 5. See PPC64
PowerPC - the IBM/Motorola PowerPC processor family. See PPC64 below.
PPC64 - abbreviation for PowerPC 64, the common 64-bit instruction set architecture
(ISA) name used in Linux for the PowerPC and POWER processor families. These processors
share a common core ISA that allows a single Linux version to be made for all of these
processor families.
PRODUCTIONBOOTIMAGE - Template keyword - etherboot boot image file name. (Used
only for etherboot)
Q
quorum - a method of fencing where the partition of a cluster must have at least 51% of
the total nodes in the cluster to gain control over the cluster. The smaller partition(s) are
then fenced in. See fencing.
R
REPOSITORYDIR - Template keyword - the directory on the repository server where the
software repository for the operating system is stored
REPOSITORYSERVER - Template keyword - IP-address for the repository server (the
server hosting software repositories)
REPOSITORYURL - Template keyword - URL of the software repository for the operating
system; an HTTP or NFS URL, depending on the selected installation method
S
SAN - Storage Area Network - an array of storage devices available to clusters. Each
element is accessed by one node. SAN uses a block-based approach to storage. See also
NAS
scadb_maintenance - the clean-up script for scadb (The historical monitoring system)
reduces the size of the database by reducing the sample-frequency for old data. It is
installed on the Platform Manager frontend.
scagmbuilder - the Compiler for GM (Myrinet driver) is installed on the Platform Manager
frontend and on all nodes that have GM enabled.
scaibbuilder - the compiler for IBGold (Infiniband driver) is installed on the Platform
Manager Server and on all nodes that have IBGold enabled.
scald - the Platform Manager License daemon. scald manages the licenses for Platform
Manager and Scali MPI Connect. scald is installed on the Platform Manager frontend.
Platform system - a cluster consisting of Platform components
SCALAMARIARCH - Template keyword - the architecture of the scalamari software to use
on this system (same as the architecture of the distribution)
scamond - the monitoring daemon for in-band monitoring is installed on all nodes.
scamond-mapper is the monitoring daemon for out-of-band monitoring. scamond-mapper
is installed on the Platform Manager frontend and on Platform Manager Gateways.
ScaMPI - Scali's MPI - First generation MPI Connect product, replaced by SMC
SCANCEJOB - Template keyword - identifies the installation job in Platform Manager
scaproxyd - the monitoring daemon for IPMI is installed on the Platform Manager frontend
and on Platform Manager Gateways.
scasmo-controller - the controller for the monitoring system. Installed on the Platform
Manager Server.
scasmo-diag.py - the diagnostic tool for the monitoring system is installed on the Platform
Manager frontend.
scasmo-factory - alarm and aggregation daemon for the monitoring system. Installed on the
Platform Manager frontend.
T
TIMEZONE - Template keyword - time zone name, currently fixed to "UTC"
torus - a ring; used in Platform documents in the context of 2- and 3-dimensional
interconnect topologies
U
UNIX refers to all UNIX and look-alike Operating Systems supported by the SSP, i.e.
Solaris and Linux.
USECONSOLE - Template keyword - If console redirection should be enabled, assign
USECONSOLE the empty string. If not, assign it the value '#'.
USEEFI - Template keyword - If you want to use EFI, assign USEEFI the empty string.
If not, assign it the value '#'.
USEHTTP - Template keyword - If you want the installation to use the HTTP protocol,
assign USEHTTP the empty string. If not, assign it the value '#'.
V
VAR - Value Added Reseller
W
Windows refers to Microsoft Windows 98/Me/NT/2000/XP
X
x86-64 - see AMD64 and EM64T
Y
YUMCONF - Template keyword - the yum configuration for this node (content of
yum.conf)
YUMURL - Template keyword - the URL of the yum rpm package to be installed on this
node.
EM64T 387
Enabling Static ARP 103
enslaved interface 258
ether importing 259
ethers
export 259
event handling and response 4
existing template 308
External power /console switches 175
F
Fast View 35
fault-prediction algorithms 4
FCAPS 10
fencing 387
Floating Licenses 9
Folder 21
FORMAT 178
front end 387
G
GATEWAY 387
Gateway 10
gateway 8, 9, 10
Gateway, default 373
gatewayip 255
gid 178
GM 387
group 178
group alias 357
group by 180
group slaves 357
grouping functionality 356
GUI Elements 20
GUI elements 20
H
HA 387
handling faults and root cause analysis 4
HASSELINUX 387
HCA 387
Heterogeneous Cluster 9
High Availability 387
History View 9
hostlist 357
HOSTNAME 387
hostname
renaming 269
HPC 387
html 178, 184
HTTPREPOSITORYURL 387
I
IA32 387
IA64 387
icon
Node is up 159
M
macaddress 256, 263
Major page faults 183
management engine service 293
Management Menus 173
maximize icon 35
Memory consumption (size * time) 183
Minor page faults 183
monitoring
Myrinet 167
monitoring API 10
monitoring inband server 294
monitoring out-of-band server service 294
monitoring relay server service 294
Monitoring services 11
monitoring the health of cluster resources 4
month 178
MPI 9, 388
MPI Launch 174
MPI Launch View 175
MPI process 388
MPI programs 175
MPI Start View 175
mpid 388
MTU 256, 264
mtu 388
Multiple Package Channels 9
Multiple rules 182
multiple times 178
Multiple Vendor Hardware Management 9
Myrinet
interconnect monitoring 167
Myrinet monitoring 167
Myrinet submenu 167
Myrinet™ 388
N
NAS 388, 390
NAT 10
NAT service 295
negative numbers 356
NETBOOTLOADER 388
NETINSTALLFILE 388
NETMASK 388
network 24
network installation template 307, 308
network switch 302
NFS 37, 41, 44, 74
NIC 388
NIS 10
NIS client service 300
NISDOMAIN 388
NISSERVER 389
node 178, 389
Platform Node Configuration Engine 12
Point-to-point 389
PostgreSQL database 12
postscript 184
POWER 389
power 389
power interface 321
power management controller 297
Power Mgt 174
power switch port 305
PowerPC 389
Processes Per Node 175
PRODUCTIONBOOTIMAGE 389
Provisioning engine 10
provisioning services 12
PXE 10
Q
queue 173
Queue Status View 4, 169
quorum 389
R
-r option 182
RDMA 390
Reboot 174
Remote Access -> Node Console 174
Remote Access -> Node Terminal 174
remote filesystem(s) 212
repository 9, 12
REPOSITORYDIR 389
REPOSITORYSERVER 389
REPOSITORYURL 389
RHAUTHLINE 390
RHGATEWAYARG 390
Rich Client Platform (RCP) 4
root password 271
root privileges 173
RPMPYTHONURL 390
RPMs 3
RULE 178
S
SAN 390
ScaAccounting log 177
ScaAcct 177
scaacct 178
scaacct_collect 186
scacp 323
scadb_maintenance 390
scagmbuilder 390
scagroup 357
scagroup config file 357
scagroup.conf 357
scahosts 325
scaibbuilder 390
stdout 175
STONITH 391
Stop button 175
subjobs 240
subnet 256
subnet removal 263
Subscribe nodes 379
Summarize CPU time (system-time + user-time) 183
Summarize elapsed-, system- and user-time 183
Summarize the following values 183
Switch option commands
createSwitch 302
system
removal 269
system component icons 17
System time 183
T
terminal and console sessions 19
text 178, 184
text-formatted report 184
TFTP 10
time range 179
time specification 179
timeofday 178
TIMEZONE 391
title bar 35
torus 391
U
uid 178
Unix 391
update 12
update channel 379
USECONSOLE 391
USEEFI 391
USEHTTP 391
USELIBXMLRUL 392
USELIBXMLURL 392
USENFS 392
user 178
User time 183
V
VAR 392
vendor 24
View 21
view 32
maximize or minimize 35
view, detaching 35
views 19, 23
Data Center Selector 23
detaching from a perspective 35
dragging and dropping 33
Edit Alarms 160