
GPFS 3.2 System Administration
(Course code AN81)
Instructor Exercises Guide with hints
ERC 1.1


Front cover
Instructor Exercises Guide with hints
March 2011 edition
The information contained in this document has not been submitted to any formal IBM test and is distributed on an "as is" basis without
any warranty, either express or implied. The use of this information or the implementation of any of these techniques is a customer
responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While
each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will
be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

Copyright International Business Machines Corporation 2010, 2011.
This document may not be reproduced in whole or in part without the prior written permission of IBM.
Note to U.S. Government Users: Documentation related to restricted rights. Use, duplication, or disclosure is subject to restrictions
set forth in GSA ADP Schedule Contract with IBM Corp.
Trademarks
IBM and the IBM logo are registered trademarks of International Business Machines
Corporation.
The following are trademarks of International Business Machines Corporation, registered in
many jurisdictions worldwide: AIX, AIX 5L, AIX 6, DB2, FlashCopy, GPFS, POWER5,
POWER6, POWER7, PowerVM, and Tivoli.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Other product and service names might be trademarks of IBM or other companies.
Contents
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Instructor exercises overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Exercises description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Exercise 1. GPFS installation and setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Exercise 2. GPFS management and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1
Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Exercise 3. Storage pools, filesets, and policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Exercise 4. GPFS replication and snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Exercise 5. Dynamically adding and deleting disks in active file system . . . . . . . . 5-1
Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Trademarks
The reader should recognize that the following terms, which appear in the content of this
training document, are official trademarks of IBM or other companies:
IBM and the IBM logo are registered trademarks of International Business Machines
Corporation.
The following are trademarks of International Business Machines Corporation, registered in
many jurisdictions worldwide: AIX, AIX 5L, AIX 6, DB2, FlashCopy, GPFS, POWER5,
POWER6, POWER7, PowerVM, and Tivoli.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Other product and service names might be trademarks of IBM or other companies.
Instructor exercises overview
The objectives of the exercises are for the students to successfully:
Install and configure GPFS
Use administration tools
Implement a highly available GPFS cluster
Describe the ILM tools available and how to use them
All the exercises depend on the previous exercises being successfully
completed.
Exercises description
This course includes the following exercises:
GPFS installation and setup
GPFS management and configuration
Storage pools, filesets, and policies
GPFS replication and snapshots
Dynamically adding and deleting disks in an active file system
In the exercise instructions you will see each step prefixed by a
check-off line (__). You may wish to check off each step as you
complete it to keep track of your progress.
Most exercises include required sections which should always be
completed. These may be required before performing later exercises.
Some exercises may also include optional sections that you may wish
to perform if you have sufficient time and want an additional challenge.
Exercise 1. GPFS installation and setup
(with hints)
What this exercise is about
This exercise covers GPFS installation and setup.
What you should be able to do
At the end of the exercise, you should be able to:
Verify the AIX system environment
Create a GPFS cluster
Define NSDs
Create a GPFS file system
Introduction
Before starting each lab, open at least one window to each test node.
Use the man pages or the online documentation for detailed information
on GPFS commands.
Requirements
Cluster access information
- System access URL
- IP addresses
Login information
Shared disk drives
- /dev/hdisk
Exercise instructions with hints
All exercises in this chapter depend on the availability of specific equipment in
your classroom.
All hints are marked by a hint icon.
Step 1: Verify environment
__ 1. Verify that the AIX nodes (LPARs) are properly installed.
In this section you will review the GPFS FAQ compatibility list to verify that the OS
levels are supported before installing GPFS.
GPFS FAQ:
http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.
doc/gpfs_faqs/gpfs_faqs.html
__ a. On each AIX LPAR run the following command:
# oslevel -s
6100-06-03-1048
__ b. Is the installed OS level supported by GPFS? Yes / No
Yes
__ c. Is there a specific GPFS patch level required for the installed OS? Yes / No
Yes
__ d. If so, what patch level is available and could be used? ___________
GPFS fix level: 3.4.0.2
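The oslevel -s string can be broken into its components when checking it against the GPFS FAQ support matrix. A minimal sketch in shell, using the sample level from the output above:

```shell
# Split an AIX "oslevel -s" string (BBBB-TT-SS-YYWW) into its parts.
# BBBB = base level, TT = technology level, SS = service pack, YYWW = build week.
level="6100-06-03-1048"      # sample value from the lab output above
IFS=- read -r base tl sp week <<EOF
$level
EOF
echo "Base level:       $base"
echo "Technology level: $tl"
echo "Service pack:     $sp"
echo "Build week:       $week"
```

On a live node you would substitute `level=$(oslevel -s)` for the hard-coded sample.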
__ 2. Verify that all of your nodes have been configured properly on the networks. Ask the
instructor if you have any questions.
__ a. Write the hostname of Node1: ____________
__ b. Write the hostname of Node2: ____________
__ c. From Node1, ping Node2.
__ d. From Node2, ping Node1.
Hint
Replace Node1 and Node2 with their associated hostname or IP addresses.
If the pings fail, resolve the issue before continuing.
Replace Node1 and Node2 with the appropriate host name/IP address.
__ 3. Verify node-to-node ssh communications.
For this lab you will use ssh and scp for communications.
__ a. Each node will need to create an ssh-key pair using the ssh-keygen command
and press Enter each time you are prompted to create a key with no passphrase
until you are returned to a prompt. The result should look something like this:
# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (.ssh/id_rsa):
Created directory '.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in .ssh/id_rsa.
Your public key has been saved in .ssh/id_rsa.pub.
The key fingerprint is:
7d:06:95:45:9d:7b:7a:6c:64:48:70:2d:cb:78:ed:61
__ b. On Node1 copy the /.ssh/id_rsa.pub file to /.ssh/authorized_keys.
# cp /.ssh/id_rsa.pub /.ssh/authorized_keys
__ c. From Node1, copy the /.ssh/id_rsa.pub file from Node2 to
/tmp/id_rsa.pub. Assuming Node2 has generated its ssh keys:
# scp Node2:/.ssh/id_rsa.pub /tmp/id_rsa.pub
__ d. On Node1 add the public key from Node2 to the authorized_keys file on Node1.
cat /tmp/id_rsa.pub >> /.ssh/authorized_keys
__ e. Copy the authorized key file from Node1 to Node2.
# scp /.ssh/authorized_keys Node2:/.ssh/authorized_keys
__ f. To test your ssh configuration, ssh as root from Node1 to Node1 and from Node1
to Node2 and then repeat from Node2 back to Node1 until you are no longer
prompted for a password. If this fails, stop and seek help from your instructor
before proceeding.
Node1# ssh Node1 date
Node1# ssh Node2 date
Node2# ssh Node1 date
Node2# ssh Node2 date
__ g. Suppress ssh banners.
i. On both nodes create .hushlogin file in root home directory.
# touch /.hushlogin
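The key-distribution steps above can be sketched as a small helper that appends a public key to an authorized_keys file only if it is not already present (so repeated runs stay idempotent). This is illustrative only: the scratch files stand in for the per-node /.ssh files, and the key strings are made-up placeholders:

```shell
# Merge a public key into an authorized_keys file, skipping duplicates.
# In the lab the real files live under /.ssh on each node (root's home is /).
merge_key() {
    keyfile=$1; authfile=$2
    touch "$authfile"
    # Append only if this exact key line is not already present.
    if ! grep -qxF "$(cat "$keyfile")" "$authfile"; then
        cat "$keyfile" >> "$authfile"
    fi
}

# Demonstration with scratch files standing in for the two nodes' keys.
dir=$(mktemp -d)
echo "ssh-rsa PLACEHOLDERKEY1 root@Node1" > "$dir/node1.pub"
echo "ssh-rsa PLACEHOLDERKEY2 root@Node2" > "$dir/node2.pub"
merge_key "$dir/node1.pub" "$dir/authorized_keys"
merge_key "$dir/node2.pub" "$dir/authorized_keys"
merge_key "$dir/node2.pub" "$dir/authorized_keys"   # second merge is a no-op
wc -l < "$dir/authorized_keys"
```

After merging on Node1, the resulting authorized_keys file is copied to Node2 with scp, exactly as in step __ e above.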
__ 4. Verify the disks are available to the system. For this lab you should have four shared
disks available for use (hdisk1-hdisk4).
__ a. Use lspv to verify that the disks provided by your instructor exist.
i. If you do not see 4 shared disks besides hdisk0, talk to the instructor.
# lspv
hdisk0 00f60603554ee515 rootvg active
hdisk1 00f606036452ef92 None
hdisk2 00f606036452f010 None
hdisk3 00f606036452f084 None
hdisk4 000bf81121b8eb44 None
hdisk5 00f606036452f13f None
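A quick way to list only the candidate NSD disks is to filter the lspv output on the volume group column. A sketch, run here against the sample output above rather than a live lspv:

```shell
# List physical volumes whose volume group column is "None",
# i.e. disks free for use as GPFS NSDs. On a live node you would
# pipe `lspv` directly instead of using the sample text below.
lspv_output='hdisk0 00f60603554ee515 rootvg active
hdisk1 00f606036452ef92 None
hdisk2 00f606036452f010 None
hdisk3 00f606036452f084 None
hdisk4 000bf81121b8eb44 None
hdisk5 00f606036452f13f None'

free_disks=$(printf '%s\n' "$lspv_output" | awk '$3 == "None" {print $1}')
echo "$free_disks"
```

On a live node: `lspv | awk '$3 == "None" {print $1}'`.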
Step 2: Install the GPFS software
On Node1:
__ 5. Locate the GPFS software in /gpfs-course/software/base/.
# cd /gpfs-course/software/base/
__ 6. Run the inutoc command to create the table of contents.
# inutoc .
__ 7. Install the base GPFS code using the installp command.
# installp -aY -d . gpfs -f all
__ 8. Locate the latest GPFS patch level in /gpfs-course/software/PTF/.
# cd /gpfs-course/software/PTF/
__ 9. Run the inutoc command to create the table of contents.
# inutoc .
__ 10. Install the PTF GPFS code using the installp command.
# installp -aY -d . gpfs -f all
__ 11. Repeat steps 5 through 10 on Node2.
__ 12. On Node1 and Node2 confirm GPFS is installed using lslpp.
# lslpp -L gpfs.\*
The output should look similar to this:
Fileset Level State Type Description (Uninstaller)
----------------------------------------------------------------------------
gpfs.base 3.4.0.2 A F GPFS File Manager
gpfs.docs.data 3.4.0.1 A F GPFS Server Manpages and
gpfs.msg.en_US 3.4.0.2 A F GPFS Server Messages - U.S.
Note
Exact versions of GPFS might vary from this example. The important part is that all three
filesets are installed.
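A scripted version of this check might look like the following sketch. The three fileset names are taken from the sample output above; on a live node you would feed it real lslpp output instead of the sample text:

```shell
# Check that the expected GPFS filesets appear in `lslpp -L gpfs.*` output.
# Sample lines copied from the lab output above; on a live node use:
#   lslpp_output=$(lslpp -L gpfs.\*)
lslpp_output='gpfs.base 3.4.0.2 A F GPFS File Manager
gpfs.docs.data 3.4.0.1 A F GPFS Server Manpages and
gpfs.msg.en_US 3.4.0.2 A F GPFS Server Messages - U.S.'

missing=0
for fs in gpfs.base gpfs.docs.data gpfs.msg.en_US; do
    if ! printf '%s\n' "$lslpp_output" | grep -q "^$fs "; then
        echo "MISSING: $fs"
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "All GPFS filesets installed."
```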
__ 13. Confirm the GPFS binaries are in your path using the mmlscluster command.
# mmlscluster
mmlscluster: 6027-1382 This node does not belong to a GPFS cluster.
mmlscluster: 6027-1639 Command failed. Examine previous error messages to
determine cause.
Note
The path to the GPFS binaries is: /usr/lpp/mmfs/bin.
Step 3: Create the GPFS cluster
For this exercise the cluster is initially created with a single node. When creating the
cluster, make Node1 the primary GPFS configuration data server and give Node1 the
designations of a quorum node and a manager node. Use ssh and scp as the remote shell
and remote file copy commands.
Primary configuration server (Node1): ____________
Verify fully qualified path to ssh and scp:
ssh path_____________
scp path_____________
/usr/bin/ssh
/usr/bin/scp
__ 14. Use the mmcrcluster command to create the cluster.
# mmcrcluster -N Node1:manager-quorum -p Node1 -r /usr/bin/ssh -R /usr/bin/scp
Mon Feb 28 02:26:25 CET 2011: mmcrcluster: Processing node Node1
mmcrcluster: Command successfully completed
mmcrcluster: Warning: Not all nodes have proper GPFS license designations.
Use the mmchlicense command to designate licenses as needed.
__ 15. Assign the proper license designation to the GPFS node.
GPFS requires that each node in the cluster be designated with the appropriate
license. GPFS nodes that will act as quorum nodes, file system managers, NSD
servers or export the GPFS file system data via any protocol (that is, HTTP, FTP,
NFS, and so on) require server licenses. Consumer nodes can be designated as
client nodes.
# mmchlicense server --accept -N <Node1>
The following nodes will be designated as possessing GPFS server licenses:
Node1
mmchlicense: Command successfully completed
__ 16. Run the mmlscluster command again to see that the cluster was created.
# mmlscluster
GPFS cluster information
========================
GPFS cluster name: Node1
GPFS cluster id: 729524514563591075
GPFS UID domain: Node1
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
GPFS cluster configuration servers:
-----------------------------------
Primary server: Node1
Secondary server: (none)
 Node  Daemon node name   IP address      Admin node name   Designation
-----------------------------------------------------------------------------
1 r07s6vlp1 10.31.202.174 r07s6vlp1 quorum-manager
Step 4: Start GPFS and verify the status of all nodes
__ 17. Start GPFS on all the nodes in the GPFS cluster using the mmstartup command.
# mmstartup -a
Mon Feb 28 02:27:54 CET 2011: mmstartup: Starting GPFS...
__ 18. Check the status of the cluster using the mmgetstate command.
# mmgetstate -a
Node number Node name GPFS state
------------------------------------------
1 Node1 active
Step 5: Add the second node to the cluster
__ 19. On Node1 use the mmaddnode command to add Node2 to the cluster.
# mmaddnode -N Node2
Thu Feb 24 19:00:43 CET 2011: 6027-1664 mmaddnode: Processing node
<Node2>
mmaddnode: Command successfully completed
mmaddnode: 6027-1254 Warning: Not all nodes have proper GPFS license
designations.
Use the mmchlicense command to designate licenses as needed.
mmaddnode: 6027-1371 Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
__ 20. On Node1 assign the proper license designation to the GPFS Node2
# mmchlicense server --accept -N Node2
The following nodes will be designated as possessing GPFS server licenses:
<Node2>
mmchlicense: Command successfully completed
mmchlicense: 6027-1371 Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
__ 21. Confirm the node was added to the cluster using the mmlscluster command.
# mmlscluster
GPFS cluster information
========================
GPFS cluster name: Node1
GPFS cluster id: 722325741419006983
GPFS UID domain: Node1
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
GPFS cluster configuration servers:
-----------------------------------
Primary server: Node1
Secondary server: (none)
 Node  Daemon node name   IP address      Admin node name   Designation
-----------------------------------------------------------------------------
1 Node1 10.6.55.111 Node1 quorum-manager
2 Node2 10.6.55.112 Node2
__ 22. Use the mmchcluster command to set Node2 as the secondary configuration
server.
# mmchcluster -s Node2
mmchcluster: GPFS cluster configuration servers:
mmchcluster: Primary server: Node1
mmchcluster: Secondary server: Node2
mmchcluster: Command successfully completed
__ 23. Check the cluster configuration again.
# mmlscluster
GPFS cluster information
========================
GPFS cluster name: r07s6vlp1
GPFS cluster id: 729524514563591075
GPFS UID domain: Node1
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
GPFS cluster configuration servers:
-----------------------------------
Primary server: Node1
Secondary server: Node2
 Node  Daemon node name   IP address      Admin node name   Designation
-----------------------------------------------------------------------------
1 Node1 10.31.202.174 Node1 quorum-manager
2 Node2 10.31.202.175 Node2
__ 24. Start node2 using the mmstartup command.
# mmstartup -N Node2
Mon Feb 28 02:30:50 CET 2011: mmstartup: Starting GPFS...
__ 25. Use the mmgetstate command to verify that both nodes are in the active state.
# mmgetstate -a
Node number Node name GPFS state
------------------------------------------
1 Node1 active
2 Node2 active
Step 6: Collect information about the cluster
Now we will take a moment to check a few things about the cluster.
__ 26. Examine the cluster configuration using the mmlscluster command
__ a. What is the cluster name? ______________________
__ b. What is the IP address of node2? _____________________
__ 27. What date was this version of GPFS "built"? ________________
Hint: Look in the GPFS log file: /var/adm/ras/mmfs.log.latest
Step 7: Create NSDs
In this part of the lab exercise, you will use four hdisks provided by your instructor.
Make sure they can all hold data and metadata.
Leave the storage pool column blank.
Leave the primary and backup server fields blank.
__ 28. On Node1 create directory /gpfs-course/data if it does not exist
__ 29. Create a disk descriptor file /gpfs-course/data/diskdesc.txt using the
format:
#DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool
hdisk1:::dataAndMetadata::nsd1:
hdisk2:::dataAndMetadata::nsd2:
hdisk3:::dataAndMetadata::nsd3:
hdisk4:::dataAndMetadata::nsd4:
Note
hdisk numbers will vary per system. The instructor will provide you the hdisk numbers you
should use. Most of the time, you will be using the first four available hdisks
(hdisk1-hdisk4). The fifth disk will be used in Lab 5 when you add an NSD to an active
GPFS file system.
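The descriptor file can also be generated with a short loop, which avoids typos in the colon-separated fields. A sketch; the hdisk and NSD names follow the lab conventions, and the file is written to a scratch path here rather than /gpfs-course/data/diskdesc.txt:

```shell
# Generate the NSD disk descriptor file used by mmcrnsd.
# Adjust the hdisk list to the disks your instructor assigned.
descfile=$(mktemp)       # in the lab: /gpfs-course/data/diskdesc.txt
n=0
for disk in hdisk1 hdisk2 hdisk3 hdisk4; do
    n=$((n + 1))
    # Format: DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool
    echo "${disk}:::dataAndMetadata::nsd${n}:" >> "$descfile"
done
cat "$descfile"
```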
__ 30. Create a backup copy of the disk descriptor file
/gpfs-course/data/diskdesc_bak.txt.
# cp /gpfs-course/data/diskdesc.txt /gpfs-course/data/diskdesc_bak.txt
__ 31. Create the NSDs using the mmcrnsd command.
# mmcrnsd -F /gpfs-course/data/diskdesc.txt
mmcrnsd: Processing disk hdisk1
mmcrnsd: Processing disk hdisk2
mmcrnsd: Processing disk hdisk3
mmcrnsd: Processing disk hdisk4
mmcrnsd: 6027-1371 Propagating the cluster configuration data to
Note
The option -v no specifies that the disks are to be formatted irrespective of their previous
state. The default is -v yes.
Step 8: Collect information about the NSDs
Now collect some information about the NSDs you have created.
__ 32. Examine the NSD configuration using mmlsnsd.
__ a. What mmlsnsd flag do you use to see the operating system device (/dev/hdisk?)
associated with an NSD? _______
# mmlsnsd
File system Disk name NSD servers
---------------------------------------------------------------------------
(free disk) nsd1 (directly attached)
(free disk) nsd2 (directly attached)
(free disk) nsd3 (directly attached)
(free disk) nsd4 (directly attached)
Step 9: Create a file system
Now that there is a GPFS cluster and some NSDs are available, you can create a file
system.
Set the file system blocksize to 64KB.
Mount the file system at /gpfs.
__ 33. Create the file system using the mmcrfs command.
# mmcrfs /gpfs fs1 -F /gpfs-course/data/diskdesc.txt -B 64k
GPFS: 6027-531 The following disks of fs1 will be formatted on node
sys3161_T1_p1:
nsd1: size 5242880 KB
nsd2: size 5242880 KB
nsd3: size 5242880 KB
nsd4: size 5242880 KB
GPFS: 6027-540 Formatting file system...
GPFS: 6027-535 Disks up to size 51 GB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
GPFS: 6027-572 Completed creation of file system /dev/fs1.
mmcrfs: 6027-1371 Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
__ 34. Verify the file system was created correctly using the mmlsfs command.
# mmlsfs fs1
flag value description
------------------- ------------------------ -----------------------------------
-f 2048 Minimum fragment size in bytes
-i 512 Inode size in bytes
-I 8192 Indirect block size in bytes
-m 1 Default number of metadata replicas
-M 2 Maximum number of metadata replicas
-r 1 Default number of data replicas
-R 2 Maximum number of data replicas
-j cluster Block allocation type
-D nfs4 File locking semantics in effect
-k all ACL semantics in effect
-n 32 Estimated number of nodes that will mount file
system
-B 65536 Block size
-Q none Quotas enforced
none Default quotas enabled
-V 12.03 (3.4.0.0) File system version
-u yes Support for large LUNs?
-z no Is DMAPI enabled?
-L 2097152 Logfile size
-E yes Exact mtime mount option
-S no Suppress atime mount option
-K whenpossible Strict replica allocation option
--create-time Thu Feb 24 19:20:48 2011 File system creation time
--fastea yes Fast external attributes enabled?
--filesetdf no Fileset df enabled?
--inode-limit 33792 Maximum number of inodes
-P system Disk storage pools in file system
-d nsd1;nsd2;nsd3;nsd4 Disks in file system
-A yes Automatic mount option
-o none Additional mount options
-T /gpfs Default mount point
--mount-priority 0 Mount priority
Is the file system automatically mounted when GPFS starts?
___________________
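Individual attributes can be pulled out of mmlsfs-style output with awk, for example to answer the automatic-mount question above. A sketch against sample lines copied from the output (on a live node, pipe mmlsfs fs1 instead):

```shell
# Pull single attributes out of mmlsfs-style output: the block size (-B)
# and automatic-mount (-A) flags. Sample lines mirror the output above.
mmlsfs_output='-B 65536 Block size
-A yes Automatic mount option
-T /gpfs Default mount point'

blocksize=$(printf '%s\n' "$mmlsfs_output" | awk '$1 == "-B" {print $2}')
automount=$(printf '%s\n' "$mmlsfs_output" | awk '$1 == "-A" {print $2}')
echo "Block size: $blocksize bytes; automatic mount: $automount"
```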
__ 35. Mount the file system using the mmmount command.
# mmmount all -a
__ 36. Verify the file system is mounted using the df command.
# df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 2097152 1683624 20% 12492 7% /
/dev/hd2 6291456 2618984 59% 37851 12% /usr
/dev/hd9var 262144 118752 55% 4576 25% /var
/dev/hd3 262144 258800 2% 49 1% /tmp
/dev/hd1 262144 261416 1% 5 1% /home
/dev/fslv00 4194304 3658096 13% 27 1% /gpfs-course
/dev/fs1 41943040 0 100% 33536 100% /gpfs
__ 37. Use the mmdf command to get information on the file system.
# mmdf fs1
disk disk size failure holds holds free KB free KB
name in KB group metadata data in full blocks in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 25 GB)
nsd1 5242880 -1 yes yes 5219200 (100%) 124 ( 0%)
nsd2 5242880 -1 yes yes 5219264 (100%) 106 ( 0%)
nsd3 5242880 -1 yes yes 5219072 (100%) 124 ( 0%)
nsd4 5242880 -1 yes yes 5219264 (100%) 116 ( 0%)
------------- -------------------- -------------------
(pool total) 20971520 20876800 (100%) 470 ( 0%)
============= ==================== ===================
(total) 20971520 20876800 (100%) 470 ( 0%)
Inode Information
-----------------
Number of used inodes: 4038
Number of free inodes: 29754
Number of allocated inodes: 33792
Maximum number of inodes: 33792
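The inode counters reported by mmdf can be turned into a usage percentage to answer the question below. A small sketch using the sample values above:

```shell
# Compute the percentage of inodes in use from mmdf-style counters.
# The numbers are the sample values from the mmdf output above.
used=4038
allocated=33792
pct=$(awk -v u="$used" -v a="$allocated" 'BEGIN { printf "%.1f", u * 100 / a }')
echo "Inodes used: $used of $allocated ($pct%)"
```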
How many inodes are currently used in the file system? ______________
End of exercise
Exercise 2. GPFS management and configuration
(with hints)
What this exercise is about
In this lab you will examine the GPFS structure, navigate around the
GPFS structure, and see where information and tools are placed.
What you should be able to do
At the end of this exercise, you should be able to:
Document where GPFS information resides
Make configuration changes to the cluster
Make changes to the NSD and block I/O subsystem
Introduction
Lab 1 must be completed before lab 2 to set up the environment.
Before starting each lab, open at least one window to each test node.
Use man pages or the online documentation for detailed information
on GPFS commands.
Exercise instructions with hints
All exercises in this chapter depend on the availability of specific equipment in
your classroom.
All hints are marked by a hint icon.
Step 1: Where does GPFS put information?
You will find the GPFS binaries, data files and logs in three directories on the system:
/usr/lpp/mmfs
/var/mmfs/
/var/adm/ras
__ 1. What directories are in /usr/lpp/mmfs?
__ a. ____________________
__ b. ____________________
__ c. ____________________
__ d. ____________________
__ e. ____________________
__ f. ___________________
__ g. ____________________
__ h. ____________________
__ 2. What is the sample utility for backing up a GPFS file system to TSM called? (Hint: It
is under /usr/lpp/mmfs.)
__ a. ____________________
__ 3. What directories are in /var/mmfs?
__ a. ____________________
__ b. ____________________
__ c. ____________________
__ d. ___________________
__ e. ____________________
__ f. ____________________
__ 4. Find the mmsdrfs file. (Hint: It is under /var/mmfs). The mmsdrfs file is text. Take
a look at the contents of the file. What kind of information does this file appear to
contain?
__ a. ____________________
Important
Do not ever edit the mmsdrfs file manually. It is pointed out here because it should be
backed up for disaster recovery.
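Since the note says mmsdrfs should be backed up for disaster recovery, a dated copy is a simple way to do it. A sketch; the live path /var/mmfs/gen/mmsdrfs and the sample content line are assumptions, and a scratch stand-in file is used here so the sketch is self-contained:

```shell
# Take a dated backup copy of the GPFS configuration file mmsdrfs.
# On a live node, src would be /var/mmfs/gen/mmsdrfs.
src=$(mktemp)                          # scratch stand-in for mmsdrfs
echo "sample-mmsdrfs-content" > "$src" # placeholder content
backup="${src}.$(date +%Y%m%d)"
cp "$src" "$backup"
cmp -s "$src" "$backup" && echo "Backup OK: $backup"
```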
__ 5. What type of GPFS information appears in the /var/adm/ras directory?
__ a. ____________________
Step 2: Configuration changes
In this section you will change some configuration parameters and see what happens. You
will add the designations quorum and manager to Node2.
Node roles
__ 6. On Node1 use the mmlscluster command to verify that only Node1 has the
designation quorum-manager and Node2 has no designation.
# mmlscluster
__ 7. Use the mmchnode command to designate Node2 as a quorum and manager node.
# mmchnode --quorum --manager -N Node2
Thu Feb 24 19:25:56 CET 2011: 6027-1664 mmchnode: Processing node Node2
mmchnode: 6027-1371 Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
__ 8. Use the mmlscluster command to verify the changes to Node2 took place.
# mmlscluster
GPFS cluster information
========================
GPFS cluster name: Node1
GPFS cluster id: 722325741419006983
GPFS UID domain: Node1
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
GPFS cluster configuration servers:
-----------------------------------
Primary server: Node1
Secondary server: Node2
 Node  Daemon node name   IP address      Admin node name   Designation
-----------------------------------------------------------------------------
1 Node1 10.6.55.111 Node1 quorum-manager
2 Node2 10.6.55.112 Node2 quorum-manager
File system manager
__ 9. Use the mmlsmgr command to view the file system manager for the file system fs1.
# mmlsmgr fs1
file system manager node [from 10.6.55.111 (Node1)]
---------------- ------------------
fs1 10.6.55.111 (Node1)
__ 10. Using the mmchmgr command change the file system manager to the other node.
# mmchmgr fs1 Node2
GPFS: 6027-628 Sending migrate request to current manager node
10.6.55.111 (Node1).
GPFS: 6027-629 Node 10.6.55.111 (Node1) resigned as manager for
fs1.
GPFS: 6027-630 Node 10.6.55.112 (Node2) appointed as manager for
fs1.
__ 11. Use the mmlsmgr command to verify the changes.
# mmlsmgr fs1
file system manager node [from 10.6.55.111 (Node1)]
---------------- ------------------
fs1 10.6.55.112 (Node2)
Cluster/node configuration parameters
Now let's take a look at how to verify and modify configuration parameters.
__ 12. Use the mmlsconfig command to view the configuration.
# mmlsconfig
Configuration data for cluster Node1:
---------------------------------------------
clusterName Node1
clusterId 722325741419006983
autoload no
minReleaseLevel 3.4.0.0
dmapiFileHandleSize 32
adminMode central
File systems in cluster Node1:
--------------------------------------
/dev/fs1
Is the GPFS daemon configured to autoload?
__ 13. Use mmchconfig to set autoload so that the GPFS daemon starts automatically
when the operating system starts.
# mmchconfig autoload=yes
__ 14. Verify the command is complete using the mmlsconfig command.
# mmlsconfig
Configuration data for cluster Node1:
---------------------------------------------
clusterName Node1
clusterId 722325741419006983
autoload yes
minReleaseLevel 3.4.0.0
dmapiFileHandleSize 32
adminMode central
File systems in cluster Node1:
--------------------------------------
/dev/fs1
The mmlsconfig output shows parameters that have been changed from the default. Here
we will see a method to view a full list of parameters and their current values.
__ 15. Use the mmfsadm dump config command to view all the GPFS parameters.
What is the value of pagepool? (Hint: You might want to pipe the output
through more or less.)
____________________
# mmfsadm dump config | grep pagepool
pagepool 67108864
pagepoolMaxPhysMemPct 75
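The pagepool value is reported in bytes. A quick shell conversion (just arithmetic, not a GPFS command) confirms that 67108864 bytes corresponds to the 64 MB default:

```shell
# Convert the reported pagepool value from bytes to megabytes.
pagepool=67108864
echo "pagepool = $((pagepool / 1024 / 1024)) MB"
```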
Step 3: NSD network block I/O
In this section you will configure an NSD server to provide IO over the IP network. You will
demonstrate NSD functionality by seeing how NSD clients connect over the LAN and
access a file system.
Sample configuration files can be found in /gpfs-course/samples.
To accomplish this you will define Node1 as the primary NSD server and configure Node2
as an NSD client. This is done by removing SAN access to the disks from Node2.
Node2 I/O requests will be sent to Node1 and flow through an IP network.
__ 16. Check the NSD configuration using the mmlsnsd command.
# mmlsnsd
File system Disk name NSD servers
-----------------------------------------------------------------
fs1 nsd1 (directly attached)
fs1 nsd2 (directly attached)
fs1 nsd3 (directly attached)
fs1 nsd4 (directly attached)
There should not be any NSD servers defined for any of the NSDs.
__ 17. On Node1 create a disk descriptor file /gpfs-course/data/primary.txt. The
format for the file is as follows (replace Node1 with your LPAR's proper hostname):
#DiskName:ServerList:
nsd1:Node1::
nsd2:Node1::
nsd3:Node1::
nsd4:Node1::
__ 18. Unmount the file system on all nodes using the mmunmount command.
# mmunmount fs1 -a
__ 19. Define Node1 as the primary NSD server for each LUN using the mmchnsd
command.
# mmchnsd -F /gpfs-course/data/primary.txt
__ 20. Verify that Node1 is defined as the primary NSD server for the NSDs using the
mmlsnsd command.
# mmlsnsd -m
To test the network access to the LUNS on Node2, remove the disk devices containing the
GPFS file system.
__ 21. To see the device mapping on each node, run the mmlsnsd command.
# mmlsnsd -f fs1 -m
Note that the device shows up as local on Node2 and remote on Node1.
__ 22. Remove all the hdisk devices associated with the NSDs using the AIX rmdev
command.
# for i in 1 2 3 4
> do
> rmdev -l hdisk$i
> done
hdisk1 Defined
hdisk2 Defined
hdisk3 Defined
hdisk4 Defined
#
__ 23. Mount fs1 on all nodes.
# mmmount fs1 -a
__ 24. Run the mmlsdisk command on both nodes and compare the output.
# mmlsdisk fs1 -m
What is listed in the device column on node1?
___________________________________
On Node2?
__________________________________
What is listed for the column IO performed on node for node1?
__________________________________
On Node2?
___________________________________
__ 25. Take a look at the file system using the df command on Node1 and Node2. Is there
any difference in the output of the df command?
On Node2, you are now accessing the data across the network from Node1. Now you will
move the access back to local.
__ 26. On Node2 stop the GPFS daemon using the mmshutdown command.
# mmshutdown -N Node2
__ 27. Use the AIX cfgmgr command to rediscover the hdisks.
# cfgmgr
__ 28. Start the GPFS daemon on node2 using the mmstartup command.
# mmstartup -N Node2
__ 29. Verify Node2 is active using the mmgetstate command.
# mmgetstate -a
__ 30. When the server is active, run the mmlsdisk command on Node2.
# mmlsdisk fs1 -m
Are the local devices displayed?
If you do not see the devices please check with the instructor.
End of exercise
Exercise 3. Storage pools, filesets, and policies
(with hints)
What this exercise is about
This exercise covers creating storage pools and filesets, implementing
a placement policy, and defining and executing a file management
policy.
What you should be able to do
At the end of the exercise, you should be able to:
- Create storage pools
- Create filesets
- Implement a placement policy
- Define and execute a file management policy
Introduction
Lab 1 must be completed before lab 3 to set up the environment.
Before starting each lab, open at least one window to each test node.
Use man pages or the online documentation for detailed information on
GPFS commands.
Exercise instructions with hints
All exercises in this chapter depend on the availability of specific equipment in
your classroom.
All hints are marked by a hint sign.
Step 1: Prepare the environment
At the end of lab 1 you should have two nodes in a single GPFS cluster with one file
system. In order to create the storage pools for this lab you need to delete the existing file
system and NSD definitions.
Log in to Node1 and perform the following tasks:
__ 1. Unmount all GPFS file systems on all nodes.
# mmunmount all -a
__ 2. Delete the file system.
# mmdelfs fs1
All data on following disks of fs1 will be destroyed:
nsd1
nsd2
nsd3
nsd4
Completed deletion of file system /dev/fs1.
mmdelfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
__ 3. Delete the NSDs.
# mmdelnsd "nsd1;nsd2;nsd3;nsd4"
mmdelnsd: Processing disk nsd1
mmdelnsd: Processing disk nsd2
mmdelnsd: Processing disk nsd3
mmdelnsd: Processing disk nsd4
mmdelnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
Now you are ready to create some storage pools.
Step 2: Create a file system with two storage pools
Storage pools are defined when an NSD is created. You will use the four hdisks provided
by your instructor to create two storage pools. Place the disks into two storage pools and
make sure both pools can store file data.
Hint
- Create two disks per storage pool.
- The first pool must be the system pool.
- The system pool should be able to store data and metadata.
- Only the system pool can contain metadata.
__ 4. Create a disk descriptor file /gpfs-course/data/pooldesc.txt using the
format:
#DiskName:ServerList::DiskUsage:FailureGroup:DesiredName:StoragePool
/dev/hdisk1:::dataAndMetadata::nsd1:system
/dev/hdisk2:::dataAndMetadata::nsd2:system
/dev/hdisk3:::dataOnly::nsd3:pool1
/dev/hdisk4:::dataOnly::nsd4:pool1
The two storage pools will be system and pool1.
__ 5. Create a backup copy of the disk descriptor file
/gpfs-course/data/pooldesc_bak.txt.
# cp /gpfs-course/data/pooldesc.txt /gpfs-course/data/pooldesc_bak.txt
__ 6. Create the NSDs using the mmcrnsd command.
# mmcrnsd -F /gpfs-course/data/pooldesc.txt
mmcrnsd: Processing disk hdisk1
mmcrnsd: Processing disk hdisk2
mmcrnsd: Processing disk hdisk3
mmcrnsd: Processing disk hdisk4
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
__ 7. Create a file system based on these NSDs using the mmcrfs command.
Set the file system blocksize to 64 KB.
Mount the file system at /gpfs.
# mmcrfs /gpfs fs1 -F /gpfs-course/data/pooldesc.txt -B 64k
The following disks of fs1 will be formatted on node Node1:
nsd1: size 1048576 KB
nsd2: size 1048576 KB
nsd3: size 1048576 KB
nsd4: size 1048576 KB
Formatting file system ...
Disks up to size 12 GB can be added to storage pool 'system'.
Disks up to size 12 GB can be added to storage pool 'pool1'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Formatting Allocation Map for storage pool 'pool1'
Completed creation of file system /dev/fs1.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
__ 8. Verify that the file system was created correctly using the mmlsfs command.
# mmlsfs fs1
__ 9. Mount the file system using the mmmount command.
# mmmount fs1 -a
__ 10. Verify that the file system is mounted using the df command.
# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 32908108 10975916 21932192 34% /
tmpfs 2019884 4 2019880 1% /dev/shm
/dev/sda1 72248 45048 27200 63% /boot
/dev/fs1 3989291072 491992640 3497298432 13% /gpfs
__ 11. Verify the storage pool configuration using the mmdf command.
# mmdf fs1
disk disk size failure holds holds free KB free KB
name in KB group metadata data in full blocks in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system
nsd1 102734400 1 yes yes 102565184 (100%) 90 ( 0%)
nsd2 102734400 2 yes yes 102564608 (100%) 96 ( 0%)
------------- -------------------- -------------------
(pool total) 205468800 205129792 (100%) 186 ( 0%)
Disks in storage pool: pool1
nsd3 102734400 3 no yes 102732288 (100%) 62 ( 0%)
nsd4 102734400 4 no yes 102732288 (100%) 62 ( 0%)
------------- -------------------- -------------------
(pool total) 205468800 205464576 (100%) 124 ( 0%)
============= ==================== ===================
(data) 410937600 410594368 (100%) 310 ( 0%)
(metadata) 205468800 205129792 (100%) 186 ( 0%)
============= ==================== ===================
(total) 410937600 410594368 (100%) 310 ( 0%)
Inode Information
-----------------
Number of used inodes: 4038
Number of free inodes: 397370
Number of allocated inodes: 401408
Maximum number of inodes: 401408
Step 3: Create filesets
We are going to create five filesets to organize the data. Use Node1 to perform these tasks.
__ 12. Create five filesets (fileset1-fileset5) using the mmcrfileset command.
# mmcrfileset fs1 fileset1
# mmcrfileset fs1 fileset2
# mmcrfileset fs1 fileset3
# mmcrfileset fs1 fileset4
# mmcrfileset fs1 fileset5
__ 13. Verify that they were created using the mmlsfileset command.
# mmlsfileset fs1
What is the status of fileset1-fileset5? ___________________
__ 14. Link the filesets into the file system using the mmlinkfileset command.
# mmlinkfileset fs1 fileset1 -J /gpfs/fileset1
# mmlinkfileset fs1 fileset2 -J /gpfs/fileset2
# mmlinkfileset fs1 fileset3 -J /gpfs/fileset3
# mmlinkfileset fs1 fileset4 -J /gpfs/fileset4
# mmlinkfileset fs1 fileset5 -J /gpfs/fileset5
__ 15. Verify that the filesets were linked using the mmlsfileset command.
# mmlsfileset fs1
Now what is the status of fileset1-fileset5? ___________________
Step 4: Create a file placement policy
Now that you have two storage pools and some filesets you need to define a placement
policy to instruct GPFS where you would like the file data placed. By default, if the system
pool can accept data, all files will go to the system storage pool. You are going to change
the default and create the following three placement rules:
- Data in fileset1-fileset4 goes to the system storage pool.
- Data in fileset5 goes to pool1.
- Files that end in .dat go to pool1.
__ 16. Start by creating a policy file (on Node1)
/gpfs-course/data/placementpolicy.txt:
/* Regardless of fileset, all .dat and .DAT files go to pool1. */
RULE 'datfiles' SET POOL 'pool1' WHERE UPPER(name) like '%.DAT'
/* All other files placed in fileset5 go to pool1. */
RULE 'fs5' SET POOL 'pool1' FOR FILESET ('fileset5')
/* Default rule: all files not meeting the other criteria go to the system pool. */
RULE 'default' SET POOL 'system'
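The comparison in the 'datfiles' rule is effectively case-insensitive because the file name is upper-cased before matching. A plain-shell analogue (illustrative only; this is not GPFS policy syntax) shows which sample names that rule would catch:

```shell
# Emulate UPPER(name) LIKE '%.DAT' for a few sample file names.
for f in report.dat LOG.DAT notes.txt; do
    up=$(printf '%s' "$f" | tr '[:lower:]' '[:upper:]')
    case "$up" in
        *.DAT) echo "$f -> pool1 (datfiles rule)" ;;
        *)     echo "$f -> later rules" ;;
    esac
done > /tmp/placement_demo.txt
cat /tmp/placement_demo.txt
```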
__ 17. Install the new policy file using the mmchpolicy command.
# mmchpolicy fs1 placementpolicy.txt
__ 18. Verify that the policy was installed using the mmlspolicy command.
# mmlspolicy fs1
Then try:
# mmlspolicy fs1 -L
Step 5: Testing the placement policies
Now you will do some experiments to see how policies work. Use this chart to track the
experiment results. You can get the amount of free space by using the mmdf command.
__ 19. Record the free space in each pool using the mmdf command (before).
# mmdf fs1
__ 20. Create a file in fileset1 called bigfile1.
# dd if=/dev/zero of=/gpfs/fileset1/bigfile1 bs=64k
count=1000
__ 21. Record the free space in each pool using the mmdf command (Bigfile1).
# mmdf fs1
Experiment       System pool (free KB)   Pool1 (free KB)
--------------   ---------------------   ---------------
Before
Bigfile1
Bigfile1.dat
Bigfile2
Migrate/delete
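Filling in the chart means reading the per-pool free-KB figures out of mmdf after each step. The totals can also be tallied with awk; the sample input below is hard-coded from the earlier mmdf listing so the parsing can be tried anywhere (on the cluster you would pipe `mmdf fs1` in instead):

```shell
# Sum the free-KB column (field 6) of each nsd line, grouped by pool.
cat > /tmp/mmdf_sample.txt <<'EOF'
Disks in storage pool: system
nsd1 102734400 1 yes yes 102565184 (100%) 90 ( 0%)
nsd2 102734400 2 yes yes 102564608 (100%) 96 ( 0%)
Disks in storage pool: pool1
nsd3 102734400 3 no yes 102732288 (100%) 62 ( 0%)
nsd4 102734400 4 no yes 102732288 (100%) 62 ( 0%)
EOF
awk '/Disks in storage pool:/ {pool = $NF}
     /^nsd/ {free[pool] += $6}
     END {for (p in free) printf "%s free: %d KB\n", p, free[p]}' \
    /tmp/mmdf_sample.txt > /tmp/pool_free.txt
cat /tmp/pool_free.txt
```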
__ 22. Create a file in fileset1 called bigfile1.dat.
# dd if=/dev/zero of=/gpfs/fileset1/bigfile1.dat bs=64k
count=1000
Record the free space (bigfile1.dat)
__ 23. Create a file in fileset5 called bigfile2.
# dd if=/dev/zero of=/gpfs/fileset5/bigfile2 bs=64k
count=1000
Record the free space (bigfile2).
__ 24. Questions
__ a. Where does the data reside for each file?
Hint
Use mmlsattr -L [full pathname of filename], for example:
# mmlsattr -L /gpfs/fileset1/bigfile1
bigfile1 ______________
bigfile1.dat ______________
bigfile2 ______________
__ b. Why?
__ 25. Create a couple more files (these will be used in the next step).
# dd if=/dev/zero of=/gpfs/fileset3/bigfile3 bs=64k
count=1000
# dd if=/dev/zero of=/gpfs/fileset4/bigfile4 bs=64k
count=1000
Step 6: File management policies
Now that you have data in the file system you are going to manage the placement of the file
data using file management policies. For this example your business rules say that all file
names that start with the letters big need to be moved to pool1. In addition, all files that end
in .dat should be deleted.
__ 26. To begin, on Node1, create a policy file
/gpfs-course/data/managementpolicy.txt that implements the business
rules.
RULE 'datfiles' DELETE WHERE UPPER(name) like '%.DAT'
RULE 'bigfiles' MIGRATE TO POOL 'pool1' WHERE UPPER(name) like 'BIG%'
__ 27. Test the rule set using the mmapplypolicy command.
# mmapplypolicy fs1 -P managementpolicy.txt -I test
This command will show you what mmapplypolicy will do but will not actually
perform the delete or migrate.
__ 28. Perform the migration and deletion using the mmapplypolicy command.
# mmapplypolicy fs1 -P managementpolicy.txt
__ 29. Review the output of the mmapplypolicy command to answer these questions:
__ a. How many files were deleted?
__ b. How many files were moved?
__ c. How many KB total were moved?
Step 7: External pools
In this step you will use the external pool interface to generate a report.
__ 30. Create the file /gpfs-course/data/expool1.ksh.
#!/usr/bin/ksh
dt=`date +%h%d%y-%H_%M_%S`
results=/tmp/FileReport_${dt}
echo one $1
if [[ $1 == 'MIGRATE' ]]; then
    echo Filelist
    echo There are `cat $2 | wc -l` files that match >> ${results}
    cat $2 >> ${results}
    echo ----
    echo - The file list report has been placed in ${results}
    echo ----
fi
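Before wiring the script into a policy, you can preview its report logic by hand. The stand-alone sketch below reproduces the same logic in POSIX sh with a fabricated file list (the operation name is passed by mmapplypolicy in the real flow, and the list format here is only illustrative):

```shell
# Stand-alone preview of the report logic in expool1.ksh.
op=MIGRATE
list=/tmp/demo_filelist.txt
results=/tmp/FileReport_demo
printf '%s\n' \
    '5001 100 0 -- /gpfs/fileset1/bigfile1' \
    '5002 100 0 -- /gpfs/fileset3/bigfile3' > "$list"
if [ "$op" = "MIGRATE" ]; then
    echo "There are $(cat "$list" | wc -l) files that match" > "$results"
    cat "$list" >> "$results"
    echo "- The file list report has been placed in $results"
fi
```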
__ 31. Create the file /gpfs-course/data/listrule1.txt.
RULE EXTERNAL POOL 'externalpoolA' EXEC '/gpfs-course/data/expool1.ksh'
RULE 'MigToExt' MIGRATE TO POOL 'externalpoolA'
WHERE FILE_SIZE > 2
__ 32. Make the external pool script executable.
# chmod +x /gpfs-course/data/expool1.ksh
__ 33. Use the mmapplypolicy command to execute the external pool migration.
# mmapplypolicy fs1 -P /gpfs-course/data/listrule1.txt
__ 34. The location of the output file will be reported by the expool1.ksh script. View this
file to see what information is generated.
# more /tmp/FileReport_xxxxxx
End of exercise
Exercise 4. GPFS replication and snapshots
(with hints)
What this exercise is about
In this lab you will configure a file system for replication. To free up
disks, you will modify the file system that was created in earlier labs.
What you should be able to do
At the end of this exercise, you should be able to:
- Enable data and metadata replication
- Verify and monitor a file's replication status
- Create a file system snapshot
- Restore a file from a snapshot
Introduction
Lab 1 must be completed before lab 4 to set up the environment.
Before starting each lab, open at least one window to each GPFS
cluster node. Use man pages or the online documentation for detailed
information on GPFS commands.
Exercise instructions with hints
All exercises in this chapter depend on the availability of specific equipment in
your classroom.
All hints are marked by a hint sign.
Step 1: Enabling replication
__ 1. The max replication factor should have been set in lab 1. A file system is enabled
for replication when the maximum number of data and metadata replicas is set to 2.
Use the mmlsfs command to verify these values.
# mmlsfs fs1 -m -r -M -R
flag value description
------------------- ------------------------ -----------------------------------
-m 1 Default number of metadata replicas
-r 1 Default number of data replicas
-M 2 Maximum number of metadata replicas
-R 2 Maximum number of data replicas
__ 2. If these parameters are not set to 2, you will need to recreate the file system. To
recreate the file system:
__ a. Umount the file system.
# mmunmount fs1 -a
__ b. Delete the file system.
# mmdelfs fs1
__ c. Create the file system and specify -M 2 and -R 2.
# mmcrfs /gpfs fs1 -F pooldesc.txt -B 64k -M 2 -R 2
Where pooldesc.txt is the disk descriptor file from lab 3.
Step 2: Change the failure group on the NSDs using the mmchdisk
command
__ 3. View the current failure group settings using the mmlsdisk command.
# mmlsdisk fs1
The failure group should be set to a value of -1 for NSDs.
__ 4. Change the failure group so that all NSDs are placed in a unique failure group using
the mmchdisk command. For example, use FG 1 for nsd1, FG 2 for nsd2, and so
forth.
# mmchdisk fs1 change -d "nsd1::::1:::"
# mmchdisk fs1 change -d "nsd2::::2:::"
# mmchdisk fs1 change -d "nsd3::::3:::"
# mmchdisk fs1 change -d "nsd4::::4:::"
__ 5. Verify the changes using the mmdf command.
# mmdf fs1
Notice that no data has moved. This is because the default replication level is still
set to 1. Now that there are two failure groups in each pool you can see how to
change the replication status of a file.
Step 3: Replicate a file
Replication status can be set at the file level. In this step you will replicate the data and
metadata of a single file in the file system.
__ 6. Create a file in fileset1 called bigfile10.
# dd if=/dev/zero of=/gpfs/fileset1/bigfile10 bs=64k count=1000
__ 7. Use the mmlsattr command to check the replication status of the file bigfile10.
# mmlsattr -L /gpfs/fileset1/bigfile10
file name: /gpfs/fileset1/bigfile10
metadata replication: 1 max 2
data replication: 1 max 2
flags: unbalanced
storage pool name: system
fileset name: fileset1
snapshot name:
__ 8. Change the file replication status of bigfile10 so that it is replicated in two failure
groups using the mmchattr command.
# mmchattr -m 2 -r 2 /gpfs/fileset1/bigfile10
Notice that this command takes a few moments to execute. As you change the
replication status of a file, the data is copied before the command completes unless
you use the -I defer option.
__ 9. Again, use the mmlsattr command to check the replication status of the file
bigfile10.
# mmlsattr -L /gpfs/fileset1/bigfile10
file name: /gpfs/fileset1/bigfile10
metadata replication: 2 max 2
data replication: 2 max 2
flags: unbalanced
storage pool name: system
fileset name: fileset1
snapshot name:
Did you see a change in the replication status of the file?
Step 4: Replicate all data in the file system
If desired, you can replicate all of the data in the file system. In this step you will change the
default replication status for the whole file system.
__ 10. Create a file in fileset1 called bigfile11.
# dd if=/dev/zero of=/gpfs/fileset1/bigfile11 bs=64k count=1000
__ 11. Use the mmlsattr command to check the replication status of the file bigfile11.
# mmlsattr -L /gpfs/fileset1/bigfile11
file name: /gpfs/fileset1/bigfile11
metadata replication: 1 max 2
data replication: 1 max 2
flags:
storage pool name: system
fileset name: fileset1
snapshot name:
__ 12. Using the mmchfs command, change the default replication status for fs1.
# mmchfs fs1 -m 2 -r 2
__ 13. Use the mmlsattr command to check the replication status of the file bigfile11.
# mmlsattr -L /gpfs/fileset1/bigfile11
file name: /gpfs/fileset1/bigfile11
metadata replication: 1 max 2
data replication: 1 max 2
flags:
storage pool name: system
fileset name: fileset1
snapshot name:
Has the replication status of bigfile11 changed? _________________
__ 14. The replication status of a file does not change until mmrestripefs is run or a new
file is created. To test this, create a new file called bigfile12.
# dd if=/dev/zero of=/gpfs/fileset1/bigfile12 bs=64k count=1000
__ 15. Use the mmlsattr command to check the replication status of the file bigfile12.
# mmlsattr -L /gpfs/fileset1/bigfile12
file name: /gpfs/fileset1/bigfile12
metadata replication: 2 max 2
data replication: 2 max 2
flags:
storage pool name: system
fileset name: fileset1
snapshot name:
Is the file replicated?
__ 16. You can replicate the existing files in the file system using the mmrestripefs
command.
# mmrestripefs fs1 -R
__ 17. Use the mmlsattr command to check the replication status of the file bigfile11.
# mmlsattr -L /gpfs/fileset1/bigfile11
file name: /gpfs/fileset1/bigfile11
metadata replication: 2 max 2
data replication: 2 max 2
flags:
storage pool name: system
fileset name: fileset1
snapshot name:
Is the file replicated?
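Replication has a visible space cost that mmdf will now reflect: with a data replication factor of 2, every data block is written twice. A rough estimate for one of the dd-created test files (a back-of-envelope sketch that ignores metadata, fragment, and inode overhead):

```shell
# Approximate on-disk data usage of one dd-created test file under -r 2.
blocks=1000        # dd count
block_kb=64        # dd bs=64k
replicas=2         # data replication factor
logical=$((blocks * block_kb))
ondisk=$((logical * replicas))
echo "logical size: ${logical} KB, replicated data on disk: ${ondisk} KB"
```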
Step 5: Use a snapshot to backup a file
In this portion of the lab you will create a file system snapshot and restore a user deleted
file from a snapshot image.
A snapshot is a point in time view of a file system. To see how snapshots operate, you will
create a file, take a snapshot, delete the file, and then restore the file from the snapshot.
__ 18. Create a file for testing in the /gpfs/fileset1 directory.
# echo hello world:snap1 > /gpfs/fileset1/snapfile1
__ 19. Create a snapshot image using the mmcrsnapshot command.
# mmcrsnapshot fs1 snap1
__ 20. Modify the file for testing in the /gpfs/fileset1 directory.
# echo hello world:snap2 >> /gpfs/fileset1/snapfile1
__ 21. Create a second snapshot image using the mmcrsnapshot command.
# mmcrsnapshot fs1 snap2
__ 22. View the list of snapshots created using the mmlssnapshot command.
# mmlssnapshot fs1
__ 23. Delete the file /gpfs/fileset1/snapfile1.
# rm /gpfs/fileset1/snapfile1
Now that the file is deleted, let's see what is in the snapshots:
__ 24. Take a look at the snapshot images. To view the images, change directories to the
.snapshots directory.
# cd /gpfs/.snapshots
What directories do you see? _____________________
__ 25. Compare the snapfile1 stored in each snapshot.
# cat snap1/fileset1/snapfile1
# cat snap2/fileset1/snapfile1
Are the file contents the same? _______________
__ 26. To restore the file from the snapshot, copy the file back into the original location.
# cp /gpfs/.snapshots/snap2/fileset1/snapfile1
/gpfs/fileset1/snapfile1
__ 27. When you are done with a snapshot you can delete the snapshot. Delete both of
these snapshots using the mmdelsnapshot command.
# mmdelsnapshot fs1 snap1
# mmdelsnapshot fs1 snap2
__ 28. Verify that the snapshots were deleted using the mmlssnapshot command.
# mmlssnapshot fs1
End of exercise
Exercise 5. Dynamically adding and deleting
disks in active file system
(with hints)
What this exercise is about
In this lab you will practice adding one or more disks to a storage pool
online and re-balancing existing data in the file system.
What you should be able to do
At the end of this exercise, you should be able to:
- Add a disk to a storage pool online
- Re-balance existing data in the file system
Introduction
Before starting each lab, open at least one window to each test node.
Use man pages or the online documentation for detailed information
on GPFS commands. The tasks involved deal with adding one or more
NSDs to a GPFS file system and then rebalancing the file system.
Requirements
- Access to two AIX LPAR nodes that make up the GPFS cluster, as
well as a set of shared disk drives/LUNs that could be added to the
file system
- An existing file system
- This lab exercise manual
Exercise instructions with hints
All exercises in this chapter depend on the availability of specific equipment in
your classroom.
All hints are marked by a hint sign.
Step 1: Add a disk (NSD) to an active GPFS file system
__ 1. Verify that GPFS is running and the file system is mounted using the mmgetstate
and df commands.
The mmgetstate command will show the status of the nodes in the cluster.
# mmgetstate -a
The df command will display the mounted GPFS file system.
# df
__ 2. Create a disk descriptor file /gpfs-course/data/adddisk.txt for the new disk
using the format:
#DiskName:ServerList::DiskUsage:FailureGroup:DesiredName:StoragePool
hdisk5:::dataOnly::nsd5:pool1
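A descriptor line is colon-delimited with seven positional fields (empty fields are allowed). A quick awk sanity check (an illustrative helper, not part of GPFS) can catch malformed lines before they reach mmcrnsd:

```shell
# Verify each non-comment descriptor line splits into seven fields.
printf '%s\n' 'hdisk5:::dataOnly::nsd5:pool1' > /tmp/adddisk_demo.txt
awk -F: '!/^#/ {
    if (NF != 7) print "line " NR ": expected 7 fields, got " NF;
    else print "line " NR ": OK (" $1 " -> pool " $7 ")"
}' /tmp/adddisk_demo.txt > /tmp/adddisk_check.txt
cat /tmp/adddisk_check.txt
```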
__ 3. Use the mmcrnsd command to create the NSD.
# mmcrnsd -F /gpfs-course/data/adddisk.txt
mmcrnsd: Processing disk hdisk5
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
__ 4. Verify that the disk has been created using the mmlsnsd command.
# mmlsnsd
Note that the disk you just added should show as a (free disk).
File system Disk name NSD servers
---------------------------------------------------------------------------
fs1 nsd1 (directly attached)
fs1 nsd2 (directly attached)
fs1 nsd3 (directly attached)
fs1 nsd4 (directly attached)
(free disk) nsd5 (directly attached)
__ 5. Add the new NSD to the fs1 file system using the mmadddisk command:
# mmadddisk fs1 -F /gpfs-course/data/adddisk.txt
The following disks of fs1 will be formatted on node Node1:
nsd5: size 1048576 KB
Extending Allocation Map
Checking Allocation Map for storage pool 'pool1'
Completed adding disks to file system fs1.
mmadddisk: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
__ 6. Verify that the NSD was added, and check the capacity of the file system, using
the mmdf command.
# mmdf fs1
__ 7. Use the mmlsdisk command to display the current configuration and state of the
disks in a file system.
# mmlsdisk fs1
Step 2: Re-balancing the data
Note
In some cases you may wish to have GPFS re-balance existing data over the new disks that
were added to the file system. Often it is not necessary to manually re-balance the data
across the new disks. New data that is added to the file system is correctly striped.
Re-striping a large file system requires a large number of insert and delete operations and
may affect system performance. Plan to perform this task when system demand is low.
__ 8. To re-balance the existing data in the file system use the mmrestripefs command.
# mmrestripefs fs1 -b
Scanning file system metadata, phase 1 ...
Scan completed successfully.
Scanning file system metadata, phase 2 ...
Scanning file system metadata for pool1 storage pool
Scan completed successfully.
Scanning file system metadata, phase 3 ...
Scan completed successfully.
Scanning file system metadata, phase 4 ...
Scan completed successfully.
Scanning user file metadata ...
100.00 % complete on Mon Feb 28 02:56:37 2011
Scan completed successfully.
__ 9. Use the mmdf command to view the utilization of each disk.
# mmdf fs1
Step 3: Deleting a disk in an active GPFS file system
Removing a disk is accomplished with the mmdeldisk command, passing the file
system name and the disk (NSD) you want to delete. As with mmadddisk, you can also
specify that the file system should be re-striped by using the -r option; adding -a
performs the re-stripe asynchronously, in the background.
The disk to be deleted by the mmdeldisk command must be up and running for this
command to succeed; you can verify this by using the mmlsdisk command. If you
need to delete a damaged disk, you must use the -p option so it can delete a stopped
disk.
GPFS handles this easily when the system utilization is low. Under load, it may take a
significant amount of time.
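The availability check described above can also be scripted. This sketch parses a captured mmlsdisk sample for the status and availability columns (the sample text, the /tmp path, and the column positions are assumptions matching the sample below; on a live cluster you would parse the output of mmlsdisk fs1 directly):

```shell
# Sketch: confirm a disk is "ready" and "up" before running mmdeldisk.
# A captured sample is embedded; on a live cluster, replace the cat with
# `mmlsdisk fs1`. Field positions ($7 status, $8 availability) match this sample.
cat > /tmp/mmlsdisk.sample <<'EOF'
disk         driver   sector failure holds    holds                            storage
name         type       size   group metadata data  status        availability pool
------------ -------- ------ ------- -------- ----- ------------- ------------ ------------
nsd5         nsd         512       4 no       yes   ready         up           pool1
EOF

state=$(awk '$1 == "nsd5" { print $7, $8 }' /tmp/mmlsdisk.sample)
if [ "$state" = "ready up" ]; then
    echo "nsd5 is available; safe to run mmdeldisk"
else
    echo "nsd5 is not up; a stopped disk needs mmdeldisk with -p"
fi
```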
# mmdeldisk fs1 "nsd5" -r -a
Deleting disks ...
Scanning pool1 storage pool
Scanning user file metadata ...
100.00 % complete on Mon Feb 28 03:09:30 2011
Scan completed successfully.
Checking Allocation Map for storage pool 'pool1'
tsdeldisk64 completed.
mmdeldisk: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
Note: The -a option makes the operation asynchronous (it does not wait for
re-balancing to finish).
Use the mmlsdisk and mmlsnsd commands to verify the successful removal of
the disks.
Note: If replication is implemented, the -r option can be used to preserve the
data and metadata replication factors.
__ 10. Verify the NSDs associated with the file system.
# mmlsdisk fs1
__ 11. Delete the file system.
__ a. First unmount the file system, then delete it.
mmunmount all -a
Mon Feb 28 03:11:52 CET 2011: mmunmount: Unmounting file systems ...
Node1 :/gpfs-course/data # mmdelfs fs1
All data on following disks of fs1 will be destroyed:
nsd1
nsd2
nsd3
nsd4
Completed deletion of file system /dev/fs1.
mmdelfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
__ b. Second delete all NSDs.
# mmdelnsd "nsd1;nsd2;nsd3;nsd4"
mmdelnsd: Processing disk nsd1
mmdelnsd: Processing disk nsd2
mmdelnsd: Processing disk nsd3
mmdelnsd: Processing disk nsd4
mmdelnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
__ 12. Check the NSDs.
# mmlsnsd
File system Disk name NSD servers
---------------------------------------------------------------------------
(free disk) nsd5 (directly attached)
__ 13. Delete the cluster.
__ a. First shut down the GPFS subsystem.
mmshutdown -a
Mon Feb 28 03:17:37 CET 2011: mmshutdown: Starting force unmount of GPFS
file systems
Mon Feb 28 03:17:42 CET 2011: mmshutdown: Shutting down GPFS daemons
Node1: Shutting down!
Node2: Shutting down!
Node1: 'shutdown' command about to kill process 11731070
Node2: 'shutdown' command about to kill process 12451868
Mon Feb 28 03:17:49 CET 2011: mmshutdown: Finished
__ b. Delete all the nodes.
# mmdelnode -a
Verifying GPFS is stopped on all affected nodes ...
mmdelnode: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
mmdelnode: Command successfully completed
__ 14. Remove the GPFS software filesets (AIX).
# installp -u gpfs
End of exercise