
IBM PowerHA SystemMirror cluster migration

IBM Power Systems high availability


Kunal Langer (kunal.langer@in.ibm.com), Technical Consultant, IBM
04 September 2013

IBM PowerHA SystemMirror is high availability clustering software that makes a system fault resilient and reduces the downtime of applications and databases. This article helps customers plan for and successfully accomplish a cluster migration.

Introduction
The purpose of this article is to provide a step-by-step guide for migrating an existing PowerHA cluster (at PowerHA 6.1.0) to PowerHA SystemMirror 7.1.2. It helps in understanding how to plan for and accomplish a successful migration. It provides an overview of the cluster variants in PowerHA 7.1.2, of the PowerHA migration process, of the various migration methodologies, and of the requirements for the migration process. I will discuss migration limitations and prerequisites along with the planning process, and also introduce the clmigcheck utility, which checks the current cluster configuration for any unsupported elements and collects additional information required for the migration. The actual migration steps are presented in detail so that customers can seamlessly migrate their two-node PowerHA 6.1 (single-site) clusters to PowerHA 7.1.2.

Cluster Aware AIX


Cluster Aware AIX (CAA) is a built-in clustering capability of the IBM AIX operating system. Using CAA, administrators can create a cluster of AIX nodes and take advantage of the capabilities of the cluster. CAA has many capabilities, some of which are listed below:
- Cluster-wide event management: communication and storage events such as node up and down, network adapter up and down, network address changes, and disk up and down; predefined and user-defined events
- Cluster-wide storage naming service
- Cluster-wide command distribution
- Commands and application programming interfaces (APIs) to create clusters across a set of AIX systems
- Kernel-based heartbeats and messages, which provide a robust cluster infrastructure and, by default, use multichannel communication between nodes over the network and storage area network (SAN) physical links
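For illustration only, a CAA cluster can be created and inspected directly from AIX; a minimal standalone sketch follows, assuming two nodes and a shared disk (the cluster, node, and disk names are hypothetical; during a PowerHA migration, clmigcheck and the PowerHA installation create the CAA cluster for you):

# mkcluster -n demo_cl -m nodeA,nodeB -r hdisk2   <- -r names the cluster repository disk
# lscluster -m                                    <- list the cluster nodes and their state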
Cluster repository disk
A cluster repository disk is a storage device shared across all the cluster nodes and used as a central repository. You can have only one cluster repository disk. In PowerHA 7.1.2, you can define a backup repository disk, which can be used if the primary repository disk fails. For a linked cluster (a true XD cluster), each PowerHA site has its own repository. The repository disk cannot be mirrored using AIX Logical Volume Manager (LVM); therefore, plan to protect it with Redundant Array of Independent Disks (RAID) mirroring. The minimum space required for a cluster repository disk is 1 GB. Refer to the PowerHA SystemMirror Administration Guide for information on how to define a backup repository disk.

Multicast IP addresses
CAA uses multicast addresses for cluster communication between the nodes in the cluster. It is mandatory to have multicast enabled in your cluster network infrastructure.

Differences between clcomdES and clcomd
Starting with AIX 6.1 TL6 and AIX 7.1, the cluster communication daemon has been integrated into AIX as part of the CAA infrastructure. Some of the differences between the clcomdES subsystem (used by previous versions of PowerHA) and the new clcomd daemon of CAA and PowerHA 7.1 and later are:
- Install: The clcomdES subsystem is part of the PowerHA SystemMirror installation media, whereas clcomd is installed with base AIX (delivered with the bos.cluster.rte file set).
- Name: The subsystem name of the traditional cluster communication daemon is clcomdES; the new subsystem name is clcomd.
- Run ability: The clcomdES daemon runs only on nodes where PowerHA SystemMirror is installed (started from /etc/inittab). The clcomd daemon is always running, even if PowerHA SystemMirror is not installed (also started from /etc/inittab).
- Port: The clcomdES subsystem uses port 6191 (/etc/services). The clcomd daemon uses port 16191 (/etc/services); it also uses the clcomdES port 6191 if a PowerHA SystemMirror migration is detected.
- Cluster definition: The clcomdES subsystem uses the /usr/es/sbin/cluster/etc/rhosts file for the initial cluster definition; it can be populated with IP addresses for all available adapters on each node. In contrast, clcomd uses /etc/cluster/rhosts for the initial cluster definition; this file should be populated with the IP addresses of the cluster members, only one per line. After editing the file, refresh clcomd using the refresh -s clcomd command.
- Definition query: The clcomdES subsystem gets the cluster definition from the PowerHA SystemMirror configuration data, whereas clcomd queries the definition of the cluster through a kernel API (making use of the CAA infrastructure).
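For example, a minimal sketch of preparing the /etc/cluster/rhosts file described above on a two-node cluster (the addresses shown are hypothetical placeholders for the IP addresses that map to each node's host name):

# cat /etc/cluster/rhosts
10.1.1.11
10.1.1.12
# refresh -s clcomd      <- make clcomd re-read the file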

Differences between PowerHA 6.1 and PowerHA 7.1 and later


With the introduction of the CAA feature in AIX 7.1 and AIX 6.1 TL6, PowerHA SystemMirror has undergone significant architectural changes.
Because of these changes, PowerHA 7.1 and later expects the communication path for each cluster node to be set to the IP address mapped to the host name. Some of the differences between PowerHA 6.1 and PowerHA 7.1 and later are:
- PowerHA 7.1 and later releases are based on CAA, where monitoring and event management are built into the AIX kernel, providing a robust foundation that is not prone to job scheduling delays. In previous releases, PowerHA monitored soft and hard errors within the cluster from various event sources using Reliable Scalable Cluster Technology (RSCT).
- In PowerHA 6.1 and earlier releases, the main communication path goes from PowerHA to group services (the grpsvcs subsystem of RSCT), then to topology services (the topsvcs subsystem of RSCT), and back. In PowerHA 7.1 and later releases, the main communication path goes from PowerHA to group services (cthags) and then to CAA.
- With PowerHA 7.1, event management is handled using a new pseudo file system architecture called Autonomic Health Advisor File System (AHAFS), which CAA uses as its monitoring framework.
- PowerHA 7.1 uses the cluster repository disk, Fibre Channel (FC)/SAN adapters, and multicasting for heartbeating. Heartbeating is performed by sending and receiving special gossip packets across the network using the multicast protocol; these gossip packets are always replied to by other nodes. In older releases of PowerHA, IP and non-IP networks participated in heartbeating and in the detection and diagnosis of network, node, or network adapter failures, and those heartbeat packets were never acknowledged.
- PowerHA 7.1 and later releases use a special gossip protocol over the multicast address to determine node information and implement scalable reliable multicast. Older releases use the traditional cluster communication daemon (the clcomdES subsystem), which gets information from the PowerHA Object Data Manager (ODM) and uses the heartbeat mechanism provided by RSCT for node information processing.
- PowerHA 7.1 and later releases introduced system events, which are handled by the clevmgrdES subsystem. The root volume group (rootvg) system event allows monitoring of loss of access to rootvg; such a loss results in a log entry in the system error log and a system reboot. Older releases of PowerHA do not handle rootvg failures.

PowerHA 7.1.2 cluster variants


PowerHA SystemMirror 7.1.2 allows customers to configure three different styles of clusters, namely local, stretched, and linked clusters.

Local cluster
A simple, multinode, single-site (local) cluster configured using nodes or logical partitions (LPARs) within a single data center. This is the most typical cluster configuration, providing local PowerHA cluster fallover. Local fallover provides a faster transition to another machine than a fallover to a geographically dispersed site. Local clusters can also benefit from advanced functions such as IBM PowerVM Live Partition Mobility (LPM) between machines within the same site. This combination of IBM PowerVM functions and IBM PowerHA SystemMirror clustering helps avoid service interruption for planned maintenance events while protecting the environment against unforeseen outages.
Stretched cluster
The term denotes a cluster that has sites defined within the same geographic location. This provides a campus-style disaster recovery and high availability cluster with cluster nodes separated by a short distance. The sites can be near enough to have shared logical unit numbers (LUNs) in the same SAN. The key aspect of a stretched cluster is that it uses a shared repository disk. Stretched clusters can support cross-site LVM mirroring, IBM HyperSwap, and Geographic Logical Volume Manager (GLVM). Extended-distance sites with IP-only connectivity are not possible with this configuration.

Figure 1. Example of a stretched cluster

A stretched cluster configuration can also be used with PowerHA 7.1.2 Standard Edition with LVM cross-site mirroring. A stretched cluster is capable of using all three levels of cluster communication (TCP/IP, SAN heartbeat, and repository disk). The distance can be up to 15 km with direct SAN links, and up to 120 km with dense wavelength division multiplexing (DWDM), coarse wavelength division multiplexing (CWDM), or other SAN extenders. This provides for synchronous replication or mirroring.

Linked cluster
The term denotes a cluster that has sites defined across geographic locations, allowing configuration of a traditional extended-distance cluster between two sites, for example, Brisbane and Singapore. The key aspect of a linked cluster that makes it different from extended-distance clusters in previous versions is the use of SIRCOL in CAA: each site has its own CAA repository disk, which is replicated automatically between sites by CAA. Linked cluster sites communicate with each other using unicast, not multicast as is the case within a stretched or local cluster. However, each site internally still uses multicast, and therefore multicast must still be enabled in the network at each site.

Figure 2. Illustration of a linked cluster

In this type of configuration, all the interfaces are defined as CAA gateway addresses. CAA maintains the repository information automatically across sites through unicast communication.

Migration overview
Unlike previous PowerHA SystemMirror migration methods, there are some cases where the migration must be done manually by the customer, resulting in a complete cluster outage. These conditions are detected when you run /usr/sbin/clmigcheck. There are three supported migration paths from PowerHA SystemMirror 6.1 to PowerHA SystemMirror 7.1.2. Each one requires an AIX upgrade: migration to AIX 6.1 TL8 SP1 or later, or to AIX 7.1 TL2 SP1 or later.

Migration to PowerHA SystemMirror 7.1.2 is a two-phase process:
- Phase I: AIX migration or upgrade, based on the current AIX level.
- Phase II: PowerHA SystemMirror migration.

AIX migration
Refer to the AIX information center for steps on how to migrate or upgrade AIX.

PowerHA migration
PowerHA SystemMirror provides the following three migration options:
- Offline migration: As the name suggests, this type of migration involves bringing down the entire PowerHA cluster, installing PowerHA SystemMirror 7.1.2, and restarting cluster services one node at a time.
- Rolling migration: During a rolling migration, the workload is moved from the node where it is currently running to another node in the cluster. This is followed by the installation of PowerHA 7.1.2 and the starting of cluster services. These steps are repeated on all the remaining nodes.
- Snapshot migration: This really is not a migration at all. Customers remove the previous version of PowerHA SystemMirror, install PowerHA SystemMirror 7.1.2, and then use the PowerHA SystemMirror 7.1.2 configuration interface (the Director GUI, the System Management Interface Tool (SMIT), or the command line) to restore the same configuration they previously had, that is, restore from a cluster snapshot.
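Before choosing a path, it helps to confirm the installed PowerHA and AIX levels on each node; a quick check (a sketch, output varies by system):

# lslpp -l cluster.es.server.rte    <- installed PowerHA level, for example, 6.1.0.x
# oslevel -s                        <- AIX technology level and service pack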

Migration requirements
Before you start migrating the cluster nodes, ensure that the following tasks are completed:
1. Back up all the application and system data.
2. Create a back-out or reversion plan. A back-out plan allows for easy restoration of the cluster and AIX configuration in case the migration runs into problems. System backups should be created using the mksysb and savevg utilities.
3. Ensure that the Communication Path to Node option for the PowerHA cluster nodes is set to the IP address mapping to the host name.
4. Save the existing cluster configuration. Also save any user-provided scripts, most commonly custom events, pre- and post-event scripts, notification scripts, and application controller scripts.

Some migration requirements are as follows:
1. All cluster nodes have one shared disk that will be used as the cluster repository, with a size of at least 1 GB. The supported FC and SAS adapters for connection to the repository disk include:
   - 4 GB Single-Port Fibre Channel PCI-X 2.0 DDR Adapter (FC 1905; CCIN 1910)
   - 4 GB Single-Port Fibre Channel PCI-X 2.0 DDR Adapter (FC 5758; CCIN 280D)
   - 4 GB Single-Port Fibre Channel PCI-X Adapter (FC 5773; CCIN 5773)
   - 4 GB Dual-Port Fibre Channel PCI-X Adapter (FC 5774; CCIN 5774)
   - 4 Gb Dual-Port Fibre Channel PCI-X 2.0 DDR Adapter (FC 1910; CCIN 1910)
   - 4 Gb Dual-Port Fibre Channel PCI-X 2.0 DDR Adapter (FC 5759; CCIN 5759)
   - 8 Gb PCI Express Dual Port Fibre Channel Adapter (FC 5735; CCIN 577D)
   - 8 Gb PCI Express Dual Port Fibre Channel Adapter 1Xe Blade (FC 2B3A; CCIN 2607)
   - 3 Gb Dual-Port SAS Adapter PCI-X DDR External (FC 5900 and 5912; CCIN 572A)
   For the most current list of supported storage adapters, refer to the IBM PowerHA SystemMirror for AIX web page.
2. Ensure that the current network infrastructure supports multicast. Enable multicast traffic on all network switches connected to the cluster nodes.
3. Ensure that the /etc/cluster/rhosts file is properly populated with the host names or IP addresses of all cluster nodes (IP addresses mapping to the host names); otherwise, cluster communication will fail and the migration will not take place.
4. Ensure that all cluster nodes have the requisite version of AIX installed. Refer to the following table.
PowerHA version      AIX version required
PowerHA 7.1.0        AIX 6.1 TL6 SP1 or AIX 7.1 TL0 SP1
PowerHA 7.1.1        AIX 6.1 TL7 SP2 or AIX 7.1 TL1 SP2
PowerHA 7.1.2        AIX 6.1 TL8 SP1 or AIX 7.1 TL2 SP1

5. Ensure that Virtual I/O Server (VIOS) 2.2.0.1-FP24-SP01 or later is installed.
6. The following additional file sets are required:
   - bos.cluster
   - bos.ahafs
   - bos.clvm.enh
   - devices.common.IBM.storfwork (required for SAN heartbeat)
7. The following RSCT file set levels are required:
   - rsct.core.rmc 3.1.4.0
   - rsct.basic 3.1.4.0
   - rsct.compat.basic.hacmp 3.1.4.0
   - rsct.compat.clients.hacmp 3.1.4.0
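A quick way to verify these prerequisites on each node is to query the installed file sets; a sketch (exact file set names can vary slightly by AIX release):

# oslevel -s                                   <- expect 6100-08-01 / 7100-02-01 or later
# lslpp -l bos.cluster.rte bos.ahafs bos.clvm.enh
# lslpp -l "devices.common.IBM.storfwork*"     <- SAN heartbeat support
# lslpp -l "rsct.*"                            <- RSCT file sets at 3.1.4.0 or later
# ioslevel                                     <- on the VIOS: expect 2.2.0.1 or later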
Migration limitations
There are certain limitations in migrating to PowerHA 7.1.2 because of the structural changes. The limitations are listed below:
- Not all configurations can be migrated. Configurations with FDDI, ATM, X.25, or Token Ring cannot be migrated; these must be removed before migration.
- Configurations with IP Address Takeover (IPAT) using replacement or Hardware Address Takeover (HWAT) cannot be migrated and must be removed from the configuration.
- Configurations with heartbeat over IP aliasing must be removed before migration.
- Non-IP networking is accomplished differently. PowerHA 7.1.2 (and the underlying CAA) uses multicast, FC/SAN, and the cluster repository disk for heartbeating. Traditional non-IP networks such as rs232, diskhb, mndhb, tmscsi, and tmssa are not supported; these are removed during migration.

clmigcheck utility
The clmigcheck utility is part of base AIX, included with AIX 6.1 TL6 or later. It is an interactive tool that verifies the current cluster configuration, checks for unsupported elements, and collects additional information required for migration. You must run this command on all cluster nodes, one node at a time, before installing PowerHA 7.1.2. The initial screen is as follows:
------------[ PowerHA System Mirror Migration Check ]------------

Please select one of the following options:

    1 = Check ODM configuration.
    2 = Check snapshot configuration.
    3 = Enter repository disk and multicast IP addresses.

Select one of the above, "x" to exit or "h" for help:

Note that at any prompt, you can type h for help about that data entry prompt.

Option 1 checks the SystemMirror configuration data (/etc/es/objrepos) and provides errors and warnings if there are any elements in the configuration that must be removed manually. In that case, the flagged elements must be removed, the cluster configuration verified and synchronized, and this command re-run until the SystemMirror configuration data check completes without errors.

Option 2 checks a snapshot (present in /usr/es/sbin/cluster/snapshots) and provides error information if there are any elements in the configuration that will not migrate. Because PowerHA SystemMirror provides no tools to edit a snapshot, any error in checking the snapshot means that it cannot be used for migration. In this case, the customer might have to apply the snapshot on the back-level PowerHA SystemMirror, update the configuration manually, save the new snapshot, and start the procedure all over again.

Option 3 queries the customer for the additional configuration needed and saves it in a file in /var on every node in the cluster. When option 3 is selected from the main screen, you are prompted for the repository disk and the multicast dotted-decimal IP address. This data is stored in a file (/var/clmigcheck/clmigcheck.txt) on every node in the cluster. When PowerHA SystemMirror 7.1.2 is installed, this file is read and the SystemMirror configuration data is populated.
The customer must use either option 1 or option 2 successfully before running option 3, which collects and stores the configuration data. When the /usr/sbin/clmigcheck command is run on the last node of the cluster before installing PowerHA SystemMirror 7.1.2, the CAA infrastructure is started. This can be verified by running the /usr/sbin/lscluster -m command.
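Putting it together, a typical first-node sequence looks like the following sketch (interactive prompts omitted):

# /usr/sbin/clmigcheck                  <- run option 1 (ODM) or 2 (snapshot), then option 3
# cat /var/clmigcheck/clmigcheck.txt    <- repository disk and multicast data saved by option 3
# /usr/sbin/lscluster -m                <- after the last node is processed, shows the CAA cluster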

FC/SAN-based heartbeat mechanism


Cluster communication in PowerHA 7.1 and later (and CAA) is achieved by communicating over multiple redundant paths. This includes the important process of sending and processing the cluster heartbeats by each participating node. The following redundant paths provide a robust clustering foundation:
- TCP/IP (basically using the multicast address)
- Optional SAN or FC adapters
- The repository disk

The SAN-based path is a redundant, high-speed communication path established between the hosts by using the SAN fabric that exists in any data center. Discovery-based configuration reduces the burden of configuring the links. PowerHA 7.1.2 supports SAN-based heartbeat within a site. Setting up the FC or SAN-based heartbeat path is not mandatory; when configured, SANComm (sfwcomm, as seen in the lscluster -i output) provides an additional heartbeat path for redundancy.

The SAN heartbeat infrastructure can be accomplished in several ways:
- Using real adapters on the cluster nodes and enabling the storage framework capability (the sfwcomm device) of the host bus adapters (HBAs). Currently, FC and SAS technologies are supported. The "Setting up cluster storage communication" link provides more details about supported HBAs and the required steps to set up the storage framework communication.
- In a virtual environment using N-Port ID Virtualization (NPIV) or virtual Small Computer System Interface (vSCSI) with a VIOS instance, enabling the sfwcomm interface requires activating the target mode (the tme attribute) on the real adapter in the VIOS instance and defining a private virtual LAN (VLAN) with ID 3358 for communication between the partition containing the sfwcomm interface and the VIOS. The real adapter on the VIOS must be a supported HBA. The target mode enabled (tme) attribute for a supported adapter is only available when the minimum AIX level for CAA is installed.

The configuration steps are as follows:
1. Configure the FC adapters for SAN heartbeat on the VIOS instances. Use the chdev command to enable the tme attribute:
# chdev -l fcsX -a tme=yes -P
(As padmin on the VIOS, the equivalent command is: chdev -dev fcsX -attr tme=yes -perm)

2. Run the chdev command to enable dynamic tracking and fast failure recovery on all FSCSI adapters.
# chdev -l fscsiX -a dyntrk=yes -a fc_err_recov=fast_fail

3. Restart the VIOS instances.
4. On the Hardware Management Console (HMC), create a new virtual Ethernet adapter for each cluster LPAR and each VIOS. Set the VLAN ID to 3358 (no other VLAN ID is allowed).
5. On the VIOS, run the cfgmgr command and check for the virtual Ethernet adapter and the sfwcomm device using the lsdev command.
# lsdev -C | grep sfwcomm
sfwcomm0    Available 01-00-02-FF    Fibre Channel Storage Framework Comm
sfwcomm1    Available 01-01-02-FF    Fibre Channel Storage Framework Comm

6. On the cluster nodes, run the cfgmgr command and check for the virtual Ethernet adapter and the sfwcomm device using the lsdev command.
7. No other configuration is required in PowerHA. When the cluster is up and running, you can check the status of the SAN heartbeat using the lscluster -i command. You can also run clras from /usr/lib/cluster, as shown below, to check whether sfwcomm and dpcomm are working.
(0) root @ <nodename>: /usr/lib/cluster
# ./clras sancomm_status
+------------------------------------------------------------------------------+
| NAME                   | UUID                                 | STATUS       |
+------------------------------------------------------------------------------+
| servr2.abcdefg.xxx.com | 6c3af126-d8d4-11e2-9c7a-00145ee770e9 | UP           |
+------------------------------------------------------------------------------+

(0) root @ <nodename>: /usr/lib/cluster
# ./clras dpcomm_status
+------------------------------------------------------------------------------+
| NAME                   | UUID                                 | STATUS       |
+------------------------------------------------------------------------------+
| servr1.abcdefg.xxx.com | 54119a46-d8d4-11e2-ac6b-00145ee770e9 | UP           |
| servr2.abcdefg.xxx.com | 6c3af126-d8d4-11e2-9c7a-00145ee770e9 | UP           |
+------------------------------------------------------------------------------+
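A brief verification sketch on a cluster node once services are up (the grep patterns simply narrow the output):

# lsdev -C | grep sfwcomm                 <- the storage framework device should be Available
# lscluster -i | grep -p sfwcomm          <- the sfwcomm interface state should be UP
# /usr/lib/cluster/clras sancomm_status   <- SAN communication status, as shown above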

PowerHA migration
Before migrating to PowerHA 7.1 and later, test whether the nodes in your environment support multicast-based communication. To test end-to-end multicast communication for all nodes used to create the cluster on your network, run the mping command, which is part of the CAA framework of AIX. You can run mping with a specific multicast address; otherwise, the command uses a default address. The following is an example of the mping command for the multicast address 228.168.101.43, where nodeA is the receiver and nodeB is the sender. You must run the following commands on both nodes at the same time:
1. From nodeA, run mping -r -v -c 5 -a 228.168.101.43
2. From nodeB, run mping -s -v -c 5 -a 228.168.101.43
Repeat the steps, this time reversing the sender and receiver.
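For example, start the receiver first and then the sender in a second session (a sketch; use the multicast address you plan to give the cluster):

(nodeA) # mping -r -v -c 5 -a 228.168.101.43    <- start the receiver first
(nodeB) # mping -s -v -c 5 -a 228.168.101.43    <- then start the sender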
Offline migration
You can choose to stop cluster services on all nodes and then install PowerHA 7.1 and later. After all the checks are successful, the clconvert utility runs from installp to convert the configuration represented in the back-level PowerHA configuration data classes to the PowerHA 7.1 and later version. This includes running mkcluster and creating the CAA version of the cluster, in addition to removing any discovered interface not in the previous version of PowerHA (such as the SAN/FC heartbeat sfwcomm). After AIX has been migrated, follow these steps to migrate PowerHA to version 7.1 and later:
1. Stop cluster services on all cluster nodes. Use the smitty clstop command and select the Bring a Resource Group Offline option.
2. Ensure that the cluster services have been stopped. Use the lssrc -ls clstrmgrES command to check the cluster state; it should be ST_INIT (see the sketch after this procedure).
3. Run /usr/sbin/clmigcheck on the first node and select option 1.
4. If the cluster cannot be migrated, the clmigcheck utility indicates this with error messages. Remove the unsupported elements. If no errors are reported, skip step 5.
5. Perform a verification and then synchronize.
6. Run clmigcheck once again and select option 1. The clmigcheck command reports "The ODM has no unsupported elements", as shown in the following figure.

7. Now select option 3 to enter the repository disk information and, optionally, the multicast IP address. The data is saved in the /var/clmigcheck/clmigcheck.txt file on each node. You need to enter this information only on the first node.
8. Populate the /etc/cluster/rhosts file on this node with the IP addresses of all the cluster nodes (the addresses corresponding to the hostname command).
9. Refresh the clcomd daemon by running the refresh -s clcomd command.
10. Install PowerHA 7.1 and later on the first node.
11. Run the following steps on all remaining nodes, one at a time.
   a. Run /usr/sbin/clmigcheck. It prompts you to install the new version of PowerHA, as shown in the message in the following figure.

   b. Add the IP addresses of all the cluster nodes to the /etc/cluster/rhosts file and refresh the clcomd daemon.
   c. Install PowerHA 7.1.2.
12. /usr/sbin/clmigcheck detects the last node when it runs and creates the cluster-aware infrastructure, that is, a CAA cluster, on all the nodes. This can be verified by running the /usr/sbin/lscluster -m command.
13. On the last node, update the /etc/cluster/rhosts file, refresh clcomd, and install PowerHA SystemMirror 7.1.2.
14. Start the cluster services, one node at a time, and ensure that each node successfully joins the cluster. After the last node has joined the cluster, your migration is successful.
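As a quick sanity check while stopping services during the offline migration (a sketch; ST_INIT indicates the cluster manager is stopped):

# smitty clstop                           <- select Bring a Resource Group Offline
# lssrc -ls clstrmgrES | grep -i state    <- expect: Current state: ST_INIT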

Rolling migration
In a rolling migration, the newer version of PowerHA is installed one node at a time, while the remaining nodes continue to run cluster services and host the workload. In this mixed-version state, PowerHA continues to respond to cluster events. In a rolling migration, you stop cluster services on the target node with the Move Resource Groups option. There is a brief interruption while the application moves to the backup or fallover node, and a second interruption while the application moves back to the primary or home node after it has been migrated. The steps to migrate are as follows:
1. Run /usr/sbin/clmigcheck on the first node, and select option 1.
2. If the cluster cannot be migrated, error messages will be displayed. In that case, remove all unsupported elements.
3. Verify and synchronize the corrected cluster definition from the first node.
4. Populate the /etc/cluster/rhosts file on this node, and refresh the clcomd daemon.
5. Run clmigcheck again to verify that there are no further unsupported elements. The "The ODM has no unsupported elements" message is displayed, as shown in the following figure.

6. Stop the cluster services with the Move Resource Groups option.
7. Run clmigcheck again and select option 3. Enter the shared repository disk and, optionally, the multicast IP address. The information is saved in /var/clmigcheck/clmigcheck.txt.
8. Install the newer version of PowerHA on this node.
9. After the installation is complete, start the cluster services.
10. On each of the remaining nodes, one node at a time, follow these steps after stopping the cluster services with the Move Resource Groups option.
   a. Populate the /etc/cluster/rhosts file with the IP addresses of all cluster nodes.
   b. Refresh the clcomd daemon.
   c. Run /usr/sbin/clmigcheck. A message as shown in the following figure is displayed.

   d. Install PowerHA 7.1.2 on the node.
   e. Start cluster services.
11. /usr/sbin/clmigcheck detects the last node when it runs and creates a CAA cluster on all the nodes. Run the /usr/sbin/lscluster -m command to verify this.

12. Start cluster services on the last node. After the last node joins the cluster, your migration is complete.

Snapshot migration
The snapshot migration path requires cluster services to be down on all the nodes, thus calling for a cluster outage or application downtime. To migrate a cluster using this path, perform the following steps.
1. Create a cluster snapshot. By default, the snapshot is saved in the /usr/es/sbin/cluster/snapshots directory. Save a copy of it in /tmp or some other location.
2. Stop the cluster services on all nodes using the Bring Resource Groups Offline option.
3. Run /usr/sbin/clmigcheck on the first node, and then select option 2. Enter the snapshot name.
4. If the utility reports errors for unsupported elements, the snapshot cannot be migrated. In this case, remove all unsupported elements reported by clmigcheck. If no errors are reported, go to step 7.
5. Take a new cluster snapshot and save a copy of it in /tmp.
6. Run /usr/sbin/clmigcheck again with option 2 to ensure that there are no unsupported elements.
7. Choose option 3 in /usr/sbin/clmigcheck to enter the shared disk (repository disk) and, optionally, the multicast address.
8. Remove the existing version of the PowerHA software from all cluster nodes.
9. In the /etc/cluster/rhosts file, fill in the IP addresses of all cluster nodes (the IP addresses corresponding to the hostname command).
10. Refresh clcomd using the refresh -s clcomd command.
11. Install the newer version of PowerHA.
12. Convert the snapshot using the clconvert_snapshot command (a combined sketch follows these steps):
# /usr/es/sbin/cluster/conversion/clconvert_snapshot -v 6.1.0 -s <snapshot file name>

13. Restore the converted snapshot. Use the SMIT path: smitty sysmirror -> Cluster Nodes and Networks -> Manage the Cluster -> Snapshot Configuration -> Restore the Cluster Configuration from a Snapshot.
14. After the restoration is done, run verification and synchronization. This creates and enables the CAA infrastructure. You can verify this using the lscluster -m command.
15. Start the cluster services, one node at a time. After the last node joins the cluster, the migration is complete.
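For reference, steps 1 and 12 together look like the following sketch (the snapshot name my_cluster_snap is hypothetical):

# cp /usr/es/sbin/cluster/snapshots/my_cluster_snap.odm /tmp     <- keep a safe copy
# /usr/es/sbin/cluster/conversion/clconvert_snapshot -v 6.1.0 -s my_cluster_snap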

Resources
Learn
- What's New in PowerHA
- PowerHA version compatibility matrix
- Cluster Aware AIX
- PowerHA Information Center
- How to test multicast
- AIX 7.1 migration
- AIX 6.1 migration
- Stay current with IBM developerWorks technical events and webcasts focused on a variety of IBM products and IT industry topics.
- Refer to the Planning a two-node IBM PowerHA SystemMirror cluster - Six must-know items tutorial for advice and guidance on building a PowerHA cluster.

Get products and technologies
- Find and download service packs from Fix Central.

Discuss
- Participate in the discussion forum.
- Get involved in the My developerWorks community. Connect with other developerWorks users while exploring the developer-driven blogs, forums, groups, and wikis.

About the author


Kunal Langer
Kunal Langer works as a Power Systems Technical Consultant in the Systems and Technology Group Lab Based Services (LBS) team, based in India. He has more than six years of experience in AIX and PowerHA development, testing, and support, with demonstrated expertise in PowerHA SystemMirror installation, configuration, administration, testing, and development. He has experience interacting with customers and handling customer-critical situations. You can contact Kunal at kunal.langer@in.ibm.com.

Copyright IBM Corporation 2013 (www.ibm.com/legal/copytrade.shtml)
Trademarks (www.ibm.com/developerworks/ibm/trademarks/)
