Disclaimer and Legal Information

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information. The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order. Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or by visiting Intel's Web Site. Intel processor numbers are not a measure of performance.
Processor numbers differentiate features within each processor family, not across different processor families. See http://www.intel.com/products/processor_number for details. Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, visit Intel Performance Benchmark Limitations. Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance. Intel Expressway Service Gateway, Intel Expressway Tokenization Broker, Intel Services Designer, Intel Expressway Service Gateway for Healthcare, Intel SOAE-H, and Intel are trademarks of Intel Corporation in the U.S. and other countries. * Other names and brands may be claimed as the property of others. Copyright 2011, Intel Corporation. All rights reserved.
Revision  Description                                                           Date
001       Initial document published for Intel Expressway Service Gateway v2.8
002       Document published for Intel Expressway Service Gateway v2.8
Contents
1.0 Introduction .............................................................. 3
    1.1 Supported Servers ..................................................... 3
    1.2 Hardware Requirements ................................................. 3
    1.3 Software Requirements ................................................. 4
    1.4 Support for Virtual Machines .......................................... 4
    1.5 Installing Unlimited Strength Java* Cryptography Extension (JCE) ...... 4
    1.6 Security Support ...................................................... 5
    1.7 Supported Transport Protocols ......................................... 5
    1.8 Supported Authentication Protocols .................................... 6
2.0 Preparing Your System for ESG Installation ................................ 7
    2.1 Red Hat Enterprise Linux OS* AS5 Installation Requirements for ESG .... 7
    2.2 SUSE Linux Enterprise 11 OS* Installation and Configuration for ESG ... 8
    2.3 Enabling Ports ........................................................ 10
    2.4 Installing the Java Runtime Environment ............................... 11
    2.5 Setting the Path to the CLI ........................................... 11
    2.6 Set Parameter Limits for ESG .......................................... 11
3.0 Installation Procedure for ESG ............................................ 13
    3.1 Permissions for Service Gateway ....................................... 13
    3.2 Prerequisites ......................................................... 13
    3.3 Installing Service Gateway ............................................ 13
        3.3.1 Example of Postinstalling ESG ................................... 14
    3.4 Starting and Stopping ESG Service ..................................... 16
    3.5 Uninstall and Reinstall Service Gateway ............................... 16
    3.6 Making a Network Interface Active ..................................... 17
    3.7 Making a Network Interface Inactive ................................... 17
4.0 Accessing the Management Console .......................................... 19
    4.1 Logging into the Management Console ................................... 19
    4.2 Removing the Web Browser Security Warning Caused by the Management Console ... 21
5.0 Managing a Collection of Service Gateway Machines ......................... 23
    5.1 Hardware, Software, and Network Requirements for a Cluster ............ 24
    5.2 Setting up a Service Gateway Cluster .................................. 27
        5.2.1 Example of Postinstalling Service Gateway on a Slave Node ....... 29
    5.3 Cluster Operation, Communication, and Management ...................... 30
        5.3.1 Viewing the Status of a Node's Message Processing ............... 30
        5.3.2 Managing Nodes in a Service Gateway Cluster ..................... 32
        5.3.3 Viewing a Node's Logs ........................................... 32
        5.3.4 Message and File Transfer between Nodes ......................... 33
    5.4 Removing a Node from a Cluster ........................................ 34
        5.4.1 Removing a Slave Node from a Cluster ............................ 34
        5.4.2 Removing a Master Node from a Cluster ........................... 34
    5.5 Changing the IP Address for a Node's Management Network Interface ..... 36
        5.5.1 Changing the IP Address for a Slave Node's Management NIC ....... 36
        5.5.2 Changing the IP Address for a Master Node's Management NIC ...... 36
6.0 Front End Load Balancing for HTTP Traffic ................................. 39
    6.1 Prerequisites for Load Balancing ...................................... 40
    6.2 Installing and Configuring a Load Balancer on a Service Gateway Cluster ... 41
    6.3 Starting, Stopping, or Uninstalling the Load Balancer ................. 42
    6.4 Determining the Load Balancer Version ................................. 42
    6.5 Monitoring Traffic Handled by the Load Balancer ....................... 42
    6.6 Describing the Command Syntax for lbconfig ............................ 43
        6.6.1 Example of Executing lbconfig ................................... 44
    6.7 Defining Load Balancing Algorithms .................................... 45
    6.8 Using Connection Affinity ............................................. 46
    6.9 Configuring an Application to use Front End Load Balancing ............ 46
    6.10 Failover and Electing a Director ..................................... 47
7.0 Integrating Hardware Cavium Cards with Service Gateway .................... 49
    7.1 Prerequisites for Integrating a Cavium Card with Service Gateway ...... 49
    7.2 Installing a Cavium Device Driver ..................................... 49
    7.3 Removing a Cavium Device Driver ....................................... 50
    7.4 Creating and Using a Backup of Cavium Device Driver ................... 50
8.0 Upgrade Procedure ......................................................... 53
    8.1 Upgrade Command Syntax ................................................ 53
    8.2 Back up Service Gateway Logs Before Upgrade ........................... 53
    8.3 Upgrading Service Gateway ............................................. 54
        8.3.1 Example of an Upgrade ........................................... 55
        8.3.2 Backing Out an Upgrade .......................................... 56
    8.4 Check the Status of the Service Gateway ............................... 56
    8.5 Performing a Cluster-wide Upgrade ..................................... 56
        8.5.1 Prerequisites for a Cluster-wide Upgrade ........................ 56
        8.5.2 Procedure for Upgrading a Cluster ............................... 57
        8.5.3 Backing Out a Cluster-wide Upgrade .............................. 57
9.0 Troubleshooting a Service Gateway Installation ............................ 59
1.0 Introduction
Intel Expressway Service Gateway (ESG), also known as Service Gateway, is a software appliance designed to simplify and secure application architectures on-premises or in the cloud. Service Gateway expedites deployments by addressing common security and performance challenges. ESG accelerates, secures, integrates, and routes XML, web services, and legacy data in a single, easy-to-manage software appliance form factor. This document provides instructions for installing the Service Gateway on the Linux* operating system.
1.1 Supported Servers
Service Gateway is a server-based product that provides optimal performance when the server is dedicated to it. However, other software can run on the server if required. The Service Gateway comes in a soft-appliance form factor, which is the ESG software installed on a customer-provided operating system and hardware or in a virtual machine. Intel Expressway Service Gateway's soft-appliance form factor is designed for Intel OEM servers. ESG can be installed on any supported Intel OEM server, including the following:
- Dell PowerEdge* 2950 (Quad-Core)
- HP ProLiant* DL380 G5 server (Dual-Core or Quad-Core)
- HP ProLiant* BL460C server (Dual-Core)
1.2 Hardware Requirements
The minimum processor and memory configuration for the Service Gateway is a Pentium 4 class processor with 4 gigabytes of RAM. The recommended processor and memory configuration for the Service Gateway is two quad-core processors (8 cores, 2 sockets) and 8 gigabytes of RAM.
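As a quick sanity check, the following sketch reads the standard Linux /proc files to report the machine's core count and RAM so they can be compared against the minimum and recommended figures above:

```shell
# Sketch: report CPU cores and total RAM for comparison against the
# minimum (Pentium 4 class, 4 GB) and recommended (8 cores, 8 GB)
# configurations described in this section.
cores=$(grep -c ^processor /proc/cpuinfo)
mem_gb=$(awk '/MemTotal/ {printf "%.1f", $2 / 1048576}' /proc/meminfo)
echo "cores=${cores} ram_gb=${mem_gb}"
```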
1.3 Software Requirements
To install Service Gateway in a software environment, the system must have the following:

Table 1. ESG Software Environment

Item                              Software Version
Operating System                  Red Hat Enterprise Linux* AS 5, 64-bit
                                  SUSE* Linux Enterprise Server 11 (SLES 11), 64-bit
Java Runtime Environment (JRE*)   1.6.0_22 or greater
1.4 Support for Virtual Machines
1.5 Installing Unlimited Strength Java* Cryptography Extension (JCE)
1.6 Security Support
Service Gateway supports cryptographic processing using a Cavium hardware security card for high-performance applications, or software-only OpenSSL*. For details about setting up Service Gateway to use Cavium hardware cards, refer to section 7.0 Integrating Hardware Cavium Cards with Service Gateway. When you install Service Gateway, the runtime uses the default version of OpenSSL for cryptographic processing, such as a WS-Security policy encrypting a message request. The version of OpenSSL used by Service Gateway depends on the operating system that the runtime is installed on. On software-only versions of Service Gateway, the runtime supports OpenSSL 0.9.8o. If Service Gateway uses a Cavium security card for security offloading, then the runtime uses OpenSSL 0.9.8o.
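Because the OpenSSL version in use depends on the operating system, you can confirm which release the OS provides before installing:

```shell
# Query the OpenSSL release installed by the operating system.
# On the supported releases this reports the 0.9.8 line mentioned
# above; the exact build string varies by distribution.
openssl version
```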
1.7 Supported Transport Protocols
Table 2. Supported Transport Protocols
ESG can communicate with any message broker that supports the standard JMS APIs. Service Gateway has been tested with Sun MQ*, Oracle AQ*, WebSphere MQ Series*, and ActiveMQ*.
- Mirth v1.6.0 — an implementation of MLLP Release 1.
- RFC 959 File Transfer Protocol.
- 2.1 — a network protocol that provides file access, file transfer, and file management functionality over an encrypted, reliable data stream.
- File — treats a file system as an endpoint. ESG can get or put files on a file system that is accessible from a network, such as NFS.
1.8 Supported Authentication Protocols
Table 3. Supported Authentication Protocols
For information about the authentication protocols used in ESG, refer to the Security Reference Guide for Intel Expressway Service Gateway.
2.0 Preparing Your System for ESG Installation
2.1 Red Hat Enterprise Linux OS* AS5 Installation Requirements for ESG
Service Gateway software (ESG) requires certain features of the Linux* operating system that are not part of the default Red Hat Enterprise Linux OS* installation. When you install Red Hat Enterprise Linux OS* AS5, use the following settings:
1. Install with machine use as:
   - Software Development
   - Web Server
3. Language and other options should follow local administration guidelines.
4. During the post-reboot installation stage of Red Hat Enterprise Linux OS* 5, we recommend that you select No Firewall. If you must enable the firewall, carefully follow the port-enabling setup for these two items and then perform the procedure in section 2.3 Enabling Ports.
5. Perform the procedure in section 2.4 Installing the Java Runtime Environment.
6. Perform the procedure in section 2.6 Set Parameter Limits for ESG.
7. Perform the procedure in section 3.3 Installing Service Gateway.
2.2 SUSE Linux Enterprise 11 OS* Installation and Configuration for ESG
2. In the Welcome screen, perform the following steps.
   a. In the Language drop-down menu, select English (US).
   b. In the Keyboard Layout drop-down menu, select English (US).
   c. Read the License Agreement and then, if you agree, select the I agree to the License Terms check box.
   d. Select the Next button.
3. In the Media Check screen, you can check the installation media to avoid installation problems by selecting the Start Check button. Once you have checked the installation media, select the Next button.
4. In the Installation Mode window, select the New Installation radio button and then select the Next button.
5. In the Clock and Time Zone screen, perform the following steps.
   a. In the Region drop-down menu, select the country or geographic location where the machine will reside.
   b. In the Time zone drop-down menu, select the time zone where the machine will reside.
   c. Update the date and time by performing the following steps.
      i. Select the Change button.
      ii. In the Current Time field, enter the current time in UTC format.
      iii. In the Current Date field, enter the current date.
      iv. Select the Accept button.
   d. Verify that the Hardware Clock Set to UTC check box is selected.
   e. Select the Next button.
6. Select the Physical Machine (Also Fully Virtualized Guests) radio button and then select the Next button.
7. In the Installation Settings screen, verify that all the configuration options are correct and then select the Install button.
8. In the Confirm Package License fonts dialog box, read the license and, if you agree with it, select the I Agree button.
9. In the Confirm Installation dialog box, select the Install button.
10. Wait several minutes for the installation to complete.
11. In the Password for the System Administrator root screen, perform the following steps.
   a. In the Password for Root User field, enter the root user's password.
   b. In the Confirm Password field, reenter the root user's password.
   c. Select the Next button.
12. In the Hostname and Domain Name screen, perform the following steps.
   a. In the Hostname field, enter the machine's hostname.
   b. In the Domain Name field, enter the machine's domain name.
   c. Clear the Change Hostname via DHCP check box.
   d. Select the Next button.
13. In the Network Configuration screen, select the Next button.
14. In the Test Internet Connection screen, select the Next button.
15. In the Network Services Configuration screen, select the Next button.
16. In the User Authentication Method screen, select the appropriate authentication method and then select the Next button.
Note: If you select an authentication method other than Local, then additional configuration steps may be needed.
17. In the New Local User screen, populate each field with the appropriate information and then select the Next button.
18. In the Release Notes screen, select the Next button.
19. Wait several minutes for hardware configuration to complete.
20. In the Hardware Configuration screen, select the Next button.
21. In the Installation Completed screen, select the Finish button. As a result, the login screen displays.
22. In the login screen, perform the following steps.
   a. In the Username field, enter root.
   b. Select the Log in button.
   c. In the Password field, enter the root user's password.
   d. Select the Log in button.
23. If you have enabled the Linux* operating system's firewall, then perform the procedure in 2.3 Enabling Ports. 24. Perform the procedure in section 2.4 Installing the Java Runtime Environment. 25. Perform the procedure in section 2.6 Set Parameter Limits for ESG. 26. Perform the procedure in section 3.3 Installing Service Gateway.
2.3 Enabling Ports
If you have enabled the Linux* operating system's firewall, then the ports that the ESG requires are disabled. You need to ensure that four ports are open on the firewall for the following processes.
- A TCP port must be available for the Management Console port, which clients use to access the web interface. The default Management Console port is 8443.
- A TCP port must be available for Operation, Administration, and Management (OAM) communication. The port that ESG uses for this is defined during the postinstall process. The default OAM communication port is 9443.
- A TCP port must be available for exchanging files between nodes in an ESG cluster. The port that ESG uses for this is defined during the postinstall process. The default OAM file port is 9444.
- A UDP port must be available for nodes to communicate about whether a cluster election needs to occur. The port that ESG uses for this is defined during the postinstall process. The default OAM cluster election port is 9445.
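If the firewall stays enabled, the four defaults above can be opened with iptables. This is a sketch using the documented default port numbers; adjust them to the values you chose during postinstall, and persist the rules according to your distribution's conventions:

```shell
# Sketch: open the default ESG ports (run as root).
iptables -A INPUT -p tcp --dport 8443 -j ACCEPT   # Management Console
iptables -A INPUT -p tcp --dport 9443 -j ACCEPT   # OAM communication
iptables -A INPUT -p tcp --dport 9444 -j ACCEPT   # OAM file transfer
iptables -A INPUT -p udp --dport 9445 -j ACCEPT   # OAM cluster election
service iptables save                             # persist the rules (RHEL)
```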
2.4 Installing the Java Runtime Environment
2.5 Setting the Path to the CLI
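As a sketch of this step (assuming the CLI lives in /opt/scr/clibin, the location shown in the postinstall transcript in section 3.3.1; adjust if your installation differs):

```shell
# Sketch: put the ESG CLI directory on the PATH so `cli` can be run
# from any directory. /opt/scr/clibin is an assumed default location.
echo 'export PATH=$PATH:/opt/scr/clibin' >> ~/.bash_profile
export PATH=$PATH:/opt/scr/clibin
```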
2.6 Set Parameter Limits for ESG
9. In the limits.conf file, insert the following lines:
   <ESG> hard nofile 65536
   <ESG> soft nofile 65536
10. Change <ESG> to the user that ESG is installed under. Typically, the user is nobody.
11. Save the limits.conf file.
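With the default ESG user (nobody), the lines inserted in step 9 would read as follows; substitute your actual ESG user if it differs:

```
nobody hard nofile 65536
nobody soft nofile 65536
```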
3.0 Installation Procedure for ESG
3.1 Permissions for Service Gateway
3.2 Prerequisites
Prior to installing ESG, you need to have the following:
- Administrator rights to install Service Gateway.
- On the machine where ESG will be installed, a NIC bound to an isolated network. An isolated network means the network does not permit external access of any kind. During ESG's postinstall, you will bind the Service Gateway's management traffic to this NIC.
- root access to the machine. root access is required to register Service Gateway as an OS service.
- esg--runtime-[os]-64bit-[rx_y_z].rpm, where [os] is the name of the Linux operating system and x_y_z is the release number of Service Gateway.
3.3 Installing Service Gateway
6. Determine how many network interfaces the ESG runtime needs to use. If the machine where the ESG will be installed does not have enough NICs, then you must either install the NICs now or install the ESG on a different machine.
CAUTION: IF NIC HARDWARE IS INSTALLED AFTER POSTINSTALLATION, YOU MUST RUN ANOTHER POSTINSTALL BEFORE THE ESG CAN USE THE NEW HARDWARE.
7. Start the postinstallation process by executing the following command: cli postinstall.
   a. When asked if you want to postinstall, type yes and then press Enter.
   b. When asked to specify a value for JRE_HOME, use the default value by pressing Enter.
   c. When asked if you want to add this node to an existing cluster, type no and press Enter.
   d. When asked to enter the management interface from the above list, type the name of the NIC bound to an isolated network and then press Enter.
   e. When asked to specify a userid or user name as which this software should run, accept the default by pressing Enter.
   f. When asked to specify a groupid or group name as which this software should run, accept the default by pressing Enter.
   g. When asked to enter a port number for the Web Interface, accept the default by pressing Enter.
   h. When asked to enter a name for this cluster, accept the default by pressing Enter.
   i. When asked to enter a port number for OAM cluster communication, accept the default by pressing Enter.
   j. When asked to enter a port number for OAM cluster file transfer, accept the default by pressing Enter.
   k. When asked to enter a port number for OAM cluster election, accept the default by pressing Enter.
   l. When asked "Are these OK", use the default answer of yes by pressing Enter.
8. Configure the ESG service so that it automatically starts each time the machine restarts by executing the following command: chkconfig --add soae 9. Start the ESG by executing the following command: cli serviceStart. For additional details about starting and stopping the service, refer to section 3.4 Starting and Stopping ESG Service.
3.3.1 Example of Postinstalling ESG
/etc/init.d/soae has been installed. It supports chkconfig:
    chkconfig --add soae
    /opt/scr/clibin/cli serviceStart
or can be manually linked into the desired rc initialization directories.

Next, run the script /opt/scr/clibin/cli postinstall and answer the questions:

[root@iclab002 ~]# cli postinstall
Add this node to an existing cluster (y/n, or q to quit): n Detecting network configuration
Enter the management interface from the above list [default=eth1]:
Selected eth1 for management interface
Enter a userid or user name as which this software should run (default=nobody):
Enter a groupid or group name as which this software should run (default=nobody):
Enter port number for Web Interface [8443]:
Using 8443
Enter name for this cluster [ESG-cluster]:
Using Cluster name ESG-cluster
Enter port number for OAM cluster communication [9443]:
Using 9443
Enter port number for OAM cluster file transfer [9444]:
Using 9444
Enter port number for OAM cluster election [9445]:
Using 9445
Selected the following:
    cluster name: ESG-cluster
    OAM cluster communication port: 9443
    OAM cluster file transfer port: 9444
    OAM cluster election port: 9445
Are these OK (yes or no) [yes]:
Using these values. Successfully installed
3.4 Starting and Stopping ESG Service
3.5 Uninstall and Reinstall Service Gateway
3.6 Making a Network Interface Active
3.7 Making a Network Interface Inactive
4.0 Accessing the Management Console
The Management Console provides web-based access to the administrative functions of the Service Gateway runtime. The following sections explain how to log into the Intel Expressway Service Gateway Management Console right after ESG is installed or upgraded, and how to resolve the security warnings that occur when a user first logs into the Management Console.
4.1 Logging into the Management Console
[hostname] is the name or IP address bound to the management network interface. The management network interface is a NIC bound to an isolated network, which is a network that does not permit external access. You specified the management NIC during the postinstall process.
[Port number] is the web interface port specified during postinstallation. The default port number is 8443.
5. In the User name and Password fields, enter valid login credentials. If login credentials have not been set up yet, then you can use one of the following default usernames.

Table 4. Default Login Credentials for Management Console

User ID    Password   Privileges
admin      passwd     Security administration, Operator administration, and Configuration administration
opsadmin   passwd     Operator administration only
cfgadmin   passwd     Configuration administration only
secadmin   passwd     Security administration only
WARNING: THESE DEFAULT LOGIN CREDENTIALS SHOULD ONLY BE ALLOWED IN TESTING ENVIRONMENTS. ALLOWING THE USE OF THESE CREDENTIALS IN A PRODUCTION ENVIRONMENT IS INSECURE.

6. Select the Sign In button. As a result, the Management Console displays in your web browser. If your username has been assigned all the ESG roles, then the following page displays:
Note: A warning may display about security acceleration hardware. This warning only appears if you have Cavium network hardware cards installed on the same system as Service Gateway. To remove this warning, refer to section 7.0 Integrating Hardware Cavium Cards with Service Gateway, which provides the integration procedure for ESG and Cavium cards.
4.2 Removing the Web Browser Security Warning Caused by the Management Console
You can access and manage the ESG from any computer's Internet Explorer or Firefox web browser. A browser can only communicate with the Management Console over an SSL connection. To avoid certificate errors when the Management Console is loaded into a web browser, you must install a client certificate into ESG and the issuer's certificate into the web browser. When you first access the Management Console, the web browser may display a security warning about the connection being untrusted. The following screenshot is of the security warning Firefox displays.
This SSL connection requires an X.509 certificate that identifies the Management Console. When ESG is installed, a self-signed certificate is automatically generated and archived in a JKS-type keystore. This certificate is only valid for 3 months after the installation. Once the original SSL certificate expires, you must delete the expired certificate and then create a new one. You can use the keytool provided with your JRE installation to create, delete, and manage SSL certificates in this keystore.

Note: For details about managing SSL certificates in a keystore, refer to the following keytool documentation: http://download.oracle.com/javase/6/docs/technotes/tools/solaris/keytool.html.

To install SSL certificates into the Management Console and the web browser, perform the following steps.
1. In a system where OpenSSL is installed, verify that you have root privileges.
2. Create the client's private key. For example: openssl genrsa -des3 -out client.key -passout pass:securityadmin. The output of this command is client.key, which is the client's private key.
3. Ensure that you retain the password that encrypted the key.
4. Generate a client certificate request using the client key. This certificate must identify the Management Console. For example: openssl req -new -key client.key -out client.csr. The output of this command is client.csr, which is the Certificate Signing Request (CSR) that will be sent to a Certificate Authority.
5. Send the CSR to a Trusted Root Certificate Authority. The Trusted Root CA signs the X.509 certificate and then returns the certificate to you. The signed X.509 certificate must be in PEM format and have the file extension .crt.
6. If needed, obtain the CA Path that links the Trusted Root Certificate Authority who signed the X.509 certificate to the client certificate. The CA Path must be in PEM format.
7. If you have not already done so, install the issuer certificate into the web browser of each system where the Management Console will be accessed. If Firefox is used, then install the issuer certificate in the Certificate Manager's Authorities tab. If Internet Explorer is used, then install the issuer certificate in the Certificates dialog's Trusted Root Certification Authorities tab.
8. In the system where the ESG is installed, verify that you have root privileges. Then, create a folder named Cert.
9. Copy the following files into the Cert folder.
   - CA Path file: a chain of PEM-format certificates starting with the immediate CA certificate that signed the target certificate, following through intermediate CA certificates if applicable, and ending with the high-level (root) CA. This file must be in PEM format.
   - Client certificate: the X.509 certificate that identifies the Management Console and was signed by a CA. This must be in PEM format.
   - Client certificate's private key: used by ESG to decrypt data sent by a web browser. The web browser uses the client certificate to encrypt the data.
10. Verify that you have root privileges in the system where ESG is installed.
11. In the system where ESG is installed, execute the cli setWiCert command. To successfully execute the command, you must specify the absolute paths to the CA Path file, the client key, and the client certificate. For example, if the certificates and key are located in /home/lablogin/cert, the certificate file name is client.pem, the key file name is client.key, and the CA Path file name is client_root.pem, then you execute the following command:

cli setWiCert -w /home/lablogin/cert/client.pem -k /home/lablogin/cert/client.key -c /home/lablogin/cert/client_root.pem
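Steps 2 and 4 of the procedure above can be combined into a short script. The file names follow the examples in the text; the -subj value and the 2048-bit key size are illustrative choices, and the resulting CSR must still be sent to a CA as described in step 5:

```shell
# Sketch of steps 2 and 4: create the client's private key and the
# certificate signing request with OpenSSL. The subject (/CN=...) is a
# placeholder that must identify your Management Console host.
openssl genrsa -des3 -out client.key -passout pass:securityadmin 2048
openssl req -new -key client.key -passin pass:securityadmin \
    -out client.csr -subj "/CN=esg-host.example.com"
```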
5.0 Managing a Collection of Service Gateway Machines
To support scalability, the Management Console provides a single operational view across all members of the cluster. An attempt to access a slave nodes Management Console causes an automatic redirect in a web browser to the master nodes Management Console. If for any reason the master nodes ESG service stops running, then a master election automatically takes place in the cluster. A master election is the process in which the slave nodes can no longer communicate with the master nodes ESG service and as a consequence elect one of the remaining nodes to be the master. If the former master nodes ESG service starts up again, it will automatically be added back into the cluster as a slave node. From the master node, you can collect statistics and debug application and system issues for all nodes in the cluster. The master node collects statistics, logs, message processing, and component status from all the nodes in the cluster and then presents that information within a single view in the master nodes Management Console. Manual administrative changes are automatically executed across all nodes in the cluster. For example, if you deactivate an application configuration on the master nodes Management Console, then the application configuration automatically becomes inactive on all the slave nodes in the cluster. From the master nodes Management Console, ability to execute cluster wide commands that start, stop, and test components on all nodes in the cluster simultaneously. If a slave node fails, the cluster instantly identifies this failure and throws up an alarm. Once identified, the cluster will not attempt to any data to the slave node until the slave node starts up again. If the node that failed starts up again, the immediately synchronizes all the data on the slave node with the data on the cluster. 
For example, if an application is deployed to the cluster while a slave node is down, then when the slave node comes back up the cluster pushes that application onto the slave node. If a master node fails, the cluster performs a master election. A master election is the process by which a slave node becomes the master because the original master is no longer available. If the former master node comes back up, then it automatically rejoins the cluster as a slave node.
If an ESG cluster processes HTTP message transactions, then you can use load balancing to intelligently distribute the messages across nodes. With clustering and load balancing combined, you can improve the availability and failover of applications deployed to the ESG cluster. When one node fails, the load balancer routes messages to another node, which processes the messages exactly as the failed node would have. To implement load balancing, refer to section 6.0 Front End Load Balancing for HTTP Traffic.
5.1 Hardware, Software, and Network Requirements for a Cluster
Acme network, then every other node must assign the OAM process to a NIC named eth0 and bind the NIC to the Acme network.
- Each node must have a TCP port available for OAM communication. All the nodes must have the same OAM communication port, which the nodes use to communicate with one another. This is defined during the postinstall process. The default OAM communication port is 9443.
- Each node must have a TCP port available for exchanging files between nodes. All the nodes must have the same OAM file port, which the nodes use to exchange files with one another. This is defined during the postinstall process. The default OAM file port is 9444.
- Each node must have a UDP port available for the nodes to communicate about whether a cluster election needs to occur (i.e., the nodes learn that the master node has died) and, if necessary, which slave node will become the master. All the nodes must have the same OAM cluster election port. This is defined during the postinstall process. The default OAM cluster election port is 9445.
- If you have more than one ESG cluster on the same network, then the clusters cannot share the following ports: OAM communication port, OAM file port, and OAM cluster election port.
- If firewalls are erected between the nodes in a cluster, then the following ports must be opened in the firewalls: OAM communication port, OAM file port, and OAM cluster election port. The cluster election port is a UDP port and all other ports are TCP ports.
- If one node uses a security card for cryptographic acceleration, then every other node in the cluster must have one as well.
- If one node uses a Hardware Security Module, then every other node in the cluster must have one as well.
- You cannot cluster software installations and hardware appliances of ESG together. The cluster must consist of either all software installations or all hardware appliances. However, an ESG cluster can contain both virtual machines and bare metal machines.
- The nodes should have the same ports in use at all times. For example, if the master node is using port 8443, then all the other nodes in the cluster should be using port 8443. You should avoid a situation where one node is using a port that no other node is using, or where a node is not using a port that every other node is using.
- In the cluster, all the machines' clocks must be synchronized with one another to within a second. Before you create a cluster, it is highly recommended that you set up all the machines to use the same NTP time source and that the NTP time source have a low offset.
- Each node must have a TCP port available for the Management Console port. All the nodes must provide access to the Management Console through the same port, and this port cannot be blocked by any node's firewall. The default Management Console port is 8443.
- If a firewall is erected between the master node and a user who is on a different network, then for the user to access the Management Console, the Management Console port must be opened on the firewall. This is a TCP port.
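The port requirements above can be captured in a small audit script. The sketch below is an assumption, not part of the product: the oam_port_list data and check_ports_free helper simply encode the default ports named in the text (9443, 9444, 9445, 8443), and the port check relies on iproute2's ss utility.

```shell
#!/bin/sh
# Hedged sketch: the ports every node in a cluster must share,
# using the defaults stated in the text. Override the list if you
# chose different ports during postinstall.
oam_port_list() {
    cat <<'EOF'
9443 tcp OAM-communication
9444 tcp OAM-file
9445 udp OAM-cluster-election
8443 tcp Management-Console
EOF
}
# Flag any required TCP port that something is already listening on
# (the UDP election port is skipped). Requires iproute2's `ss`.
check_ports_free() {
    oam_port_list | while read -r port proto name; do
        [ "$proto" = "tcp" ] || continue
        if ss -lnt 2>/dev/null | grep -q ":$port "; then
            echo "in use: $port ($name)"
        fi
    done
}
```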
Before setting up a cluster, verify that an appropriate hostname is assigned to each machine.
- It is highly recommended that all the members of a cluster have the same computing power, such as CPU and RAM. If they do not, then each node's runtime performance, such as message throughput and the size of the messages a node can process, will differ from the others.
- If some nodes have a number of CPU cores that differs from other nodes in the cluster, then you should always set the number of workflow threads to zero. If you set the workflow threads above zero, then you may degrade runtime performance because ESG may use a number of threads that exceeds the number of CPU cores on one of the nodes.
- If the machines have different amounts of disk space, then application design and deployment should be restricted based on the node with the lowest amount of disk space. For example, in a two-node cluster, node1 has 60 GB of disk space and node2 has 100 GB of disk space. In this scenario, you should design applications and file storage based on the limit of 60 GB.
- All nodes must run the same operating system and OS version.
- In order to identify the source of alarms, alerts, and logs, each node must have a unique name within the cluster. No node may have the same node name as another node in the cluster.
- On each node in the cluster, the JRE used by ESG must have unlimited JCE installed.
- All the nodes must run the same version of Service Gateway.
- All machines should be either 32-bit or 64-bit machines. You should not combine 32-bit and 64-bit machines within a cluster.
- On a node, each NIC must be uniquely named; no two NICs may have the same logical name assigned to them.
- All the nodes must reside in the same timezone.
- Only static IP addresses should be assigned to each node's management network interface. If the IP address changes, then the node cannot communicate with any other node in the cluster until you manually update the IP on the node where it changed.
- When joining a node to a cluster, you may only join one node at a time.
- The network bound to each NIC on the master node must be the same network bound to the NIC of the same name on every other node in the cluster. For example, if you have a two-node cluster, the master node could have a NIC named eth1 on a network named Acme. Then, the slave node must have a NIC named eth1 on the same network named Acme.
- A master node will have a set of active NICs that are each assigned a name, such as eth1 and eth0. The slave nodes must have at least the same number of NICs with the same names as the master node. For example, if the master node has two active NICs named eth1 and eth0, then every slave node must have two NICs named eth1 and eth0. The number of network interfaces on a slave node can exceed the number of network interfaces on a master node. However, any additional network interfaces on the slave node will not be used in the cluster. For example, if the master node has two network interfaces and a slave node has three network interfaces, then only two network interfaces are used in the cluster.
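The NIC-naming requirement above is easy to check before joining a node. The sketch below is an assumption, not a product command: MASTER_NICS holds an example list you would gather on the master (for instance with ip -o link), and missing_nics is a hypothetical helper.

```shell
#!/bin/sh
# Hedged sketch: confirm a candidate slave node carries every NIC
# name the master node uses, as the requirements above demand.
# MASTER_NICS is an example list gathered on the master node.
MASTER_NICS="eth0 eth1"
# Pass the candidate node's interface names as one space-separated
# string; prints each master NIC name the candidate lacks.
missing_nics() {
    have="$1"
    for nic in $MASTER_NICS; do
        echo "$have" | tr ' ' '\n' | grep -qx "$nic" || echo "$nic"
    done
}
# Usage on a node:
#   missing_nics "$(ip -o link | awk -F': ' '{print $2}' | tr '\n' ' ')"
```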
5.2 Setting up a Service Gateway Cluster
To set up the ESG cluster, perform the following steps.
1. Determine whether you need to implement load balancing for HTTP traffic. If you do, then you must install and configure the load balancer on each node before you set up the cluster. To install and configure an ESG load balancer, refer to section 6.0 Front End Load Balancing for HTTP Traffic.
2. Obtain two or more machines where ESG can be installed.
3. Verify that these machines conform to the requirements in the following sections: 1.1 Supported Servers, 1.2 Hardware Requirements, 1.3 Software Requirements, and 5.1 Hardware, Software, and Network Requirements for a Cluster.
4. In the group of machines, select which one will be the master node.
5. Log into the master node via an SSH session. If you are not already root, then su to the root user ID now.
6. On the machine that will become the master node, use the RPM to install the ESG. During the postinstall process, you must specify that the machine is NOT a node in a cluster. For the procedure for installing ESG, refer to section 3.3 Installing Service Gateway.
7. Identify the name of the master node's management network interface, also known as the Operation, Administration, and Management (OAM) NIC. Each node has its own OAM network interface, which the node uses to communicate with every other node in the cluster and which the master node uses to propagate configuration changes and application data to all other nodes in the cluster. To determine the name of the master node's management network interface, perform the following steps.
a. Log into the master node with a user account that has all the ESG roles assigned to it.
b. Execute the following command: cli moStatus -t intf. As a result, a list of all the network interfaces used by the ESG displays.
c. Execute the following command for one of the interfaces in the list: cli moDetails -t intf -n [name of network interface].
d. In the output of the moDetails command, search for the string Information specific to this object type.
e. In the Information specific to this object type section, search for the string Is OAM interface. If the string is Is OAM interface = true, then this is the management interface. If the string is Is OAM interface = false, then this is not the management interface.
f. Continue executing the cli moDetails command on each network interface until the output displays Is OAM interface = true.
8. Gather the following information about the master node:
The URL to the Management Console, which includes the port number that the Management Console is listening on.
A username and password that has full access to the instance of Service Gateway. This means all of the Service Gateway roles have been assigned to the username.
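The interface-by-interface search in step 7 can be scripted. The sketch below is an assumption, not a documented workflow: is_oam_interface is a hypothetical parsing helper for captured moDetails output, and the commented loop shows how it might be driven against the live cli commands.

```shell
#!/bin/sh
# Hedged sketch of step 7: find the interface whose moDetails output
# contains "Is OAM interface = true". The parser works on captured
# output; the commented loop shows live usage against the cli.
is_oam_interface() {
    # reads `cli moDetails -t intf -n <name>` output on stdin
    grep -q "Is OAM interface = true"
}
# for intf in $(cli moStatus -t intf | awk '{print $1}'); do
#     if cli moDetails -t intf -n "$intf" | is_oam_interface; then
#         echo "OAM interface: $intf"
#         break
#     fi
# done
```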
9. Verify that the system and application configurations are closed on the master node.
10. On each slave node, perform the following steps.
a. Obtain an RPM of the ESG. The ESG version must be identical to the one installed on the master node.
b. Copy the ESG RPM into a directory on the target system. Use SCP (secure copy) or FTP to do this.
c. Ensure that you have root privileges to do the RPM install. For security reasons, it is recommended that you install the ESG under a non-root user.
d. Execute the following command to install the ESG: rpm -i [ESG rpm], where [ESG rpm] is the absolute file path to the ESG RPM.
e. Start the postinstallation process by executing the command: cli postinstall
f. When asked if you want to postinstall, enter yes and then press Enter.
g. When instructed to specify a value for JRE_HOME, either enter a directory location for the Java Runtime Environment or accept the default value. Then, press Enter.
h. When asked to specify a management interface, enter the master node's management interface and then press Enter.
i. When asked for a master node login and password, enter credentials that have full access to the master node's instance of Service Gateway. This means the user must have all the ESG roles assigned to it.
j. When asked if you accept the master node's X.509 certificate, enter yes and then press Enter. The following command output displays if the node is successfully added to the cluster.
Successfully completed reload current
Successfully installed
k. To automatically start Service Gateway when the Linux OS is restarted or rebooted, execute the command chkconfig --add soae.
l. Start the ESG by executing the command cli serviceStart.
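The slave-node sequence above can be outlined as one script. The sketch below is illustrative, not a supported installer: the interactive postinstall prompts cannot be scripted this way, so those calls stay commented, and require_rpm plus the ESG_RPM path are assumptions.

```shell
#!/bin/sh
# Hedged sketch of the slave-node steps. Live cli/rpm calls are
# commented; ESG_RPM is an example path, not a real file name.
ESG_RPM="${ESG_RPM:-/tmp/esg-runtime.rpm}"
require_rpm() {
    # sanity-check the path before handing it to rpm -i
    case "$1" in
        *.rpm) [ -f "$1" ] && echo "ok" && return 0 ;;
    esac
    echo "not an installable RPM: $1"
    return 1
}
# require_rpm "$ESG_RPM" || exit 1
# rpm -i "$ESG_RPM"       # step d: install the ESG (as root)
# cli postinstall         # steps e-j: answer the prompts; accept the
#                         #   master node's X.509 certificate with yes
# chkconfig --add soae    # step k: start Service Gateway on boot
# cli serviceStart        # step l: start the ESG
```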
11. On the master node, execute the command cli status. Before you take any other action, the output of this command must display the string Service state=ACT. The following is an example of this output.
CLUSTER:1(ESG-cluster) state=ACT_DGRD
NODE:1-1(icbl021) state=ACT_DGRD
Service state=ACT Master=NO MasterName=iclab002 Mode=INIT
uptime: 23 seconds
Current Config: HTTP
1 TCAs (WARNING=1)
2 Alarms (WARNING=2)
6 Non-Act Managed Objects (ACT_DGRD=3,OOS_AUTO=2,OOS_AUTO_START=1)
1 Apps Deployed (ACT_DGRD=1)
icbl021 view of other nodes in cluster:
NODE:1-0(iclab002) state=ACT
*** status Sun Sep 19 17:17:22 CDT 2010 ***
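The wait in step 11 can be automated by parsing the status output. The sketch below is an assumption, not a product feature: service_is_active is a hypothetical helper matched against the example output above, and the polling loop is commented because it needs the live cli.

```shell
#!/bin/sh
# Hedged sketch: block until `cli status` reports "Service state=ACT".
# The match is anchored so a degraded state such as ACT_DGRD in the
# Service line does not count as active.
service_is_active() {
    grep -Eq 'Service state=ACT( |$)'
}
# until cli status | service_is_active; do sleep 5; done
```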
5.2.1
Enter the management interface from the above list [default=eth0]:
Selected eth0 for management interface
Master node login: admin
password:
[
[
  Version: V3
  Subject: CN=foobar, OU=Expressway, O=Intel
  Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5
  Key:
    modulus: 111400538014406489819022678058520748621231110454030743744078891355630332279362310240269499357704095494366156599741360612218430441653731771708414100629171160051422272436661896161290032264381790995380647091272486143721944408164849096079043119855400246809539051312275249617431894768579703614143032300551415556179
    public exponent: 65537
  Validity: [From: Mon Sep 13 20:29:50 CDT 2010,
             To: Thu Sep 10 20:29:50 CDT 2020]
]
  Algorithm: [SHA1withRSA]
  Signature: [signature bytes omitted]
]
Do you accept the above certificate, y/n? (n)y
Successfully completed reload current
Successfully installed
5.3
5.3.1
fail. In addition, the message throughput of one node may differ from another. The Management Console's Dashboard provides a filter that lets you see message processing for each node or for the entire cluster. To view message transaction information about each node in a cluster, perform the following steps.
1. Log in to the Management Console with a user account that has the operation admin role assigned to it.
2. Select the Dashboard tab.
3. In the Dashboard tab, click the Node Selector drop-down menu. As a result, the drop-down menu displays options for the cluster and each node in the cluster.
4. To view data about messages processed by a particular node, select that node from the Node Selector drop-down menu. As a result, all the information displayed in the Dashboard comes from the message processing performed by the node you selected.
5. In the Graph drop-down menu, select one of the following options.
Requests processed: tracks whether message transactions processed by a node were successful or failed.
Requests latency: latency is the time that elapses between Service Gateway receiving a message from a client and the runtime returning a message response to the client. This option tracks the latency for the node.
Open Transactions: tracks how many message transactions the node is currently processing.
6. To collect detailed statistics about the messages processed by the node, select Detailed from the Collect metrics drop-down menu.
7. To view data only about a particular application or operation that is being processed by the node, select the appropriate option from the Service Selector drop-down menu.
5.3.2 Managing Nodes in a Service Gateway Cluster
4. In the Nodes table, consider the following information.
Name: the string the cluster uses to identify the node.
Host Name: identifies the hostname of the machine where the ESG is installed.
Role: indicates whether the node is the master node or a slave node.
State: indicates whether the master node can communicate with the slave node. If everything is functioning as expected, then the string ACTIVE appears in the State column. If communication between the master and slave node is failing, then the string COMMUNICATION_PROBLEM displays in the State column.
5. To stop, restart, or test the node, select the appropriate link in the Operations column.
6. To view interval alerts and the node's IP address, select the node's arrow in the Nodes table.
7. If alerts appear in the Interval Alerts table, you can remove them by selecting the Dismiss link.
5.3.3
1. Select the Logs tab. 2. In the Logs tab, click the Node Selector drop-down menu.
3. In the Node Selector drop-down menu, select the node that you want to view logs from.
4. Perform a log search for transactions, exceptions, commands, or alerts. As a result, logs display only if they were generated on the node chosen from the Node Selector drop-down menu.
5.3.4
CAUTION:
IF YOU HAVE MORE THAN ONE ESG CLUSTER ON THE SAME NETWORK, THEN THE CLUSTERS CANNOT SHARE THE FOLLOWING PORTS: OAM COMMUNICATION PORT, OAM FILE PORT, AND OAM CLUSTER ELECTION PORT.
5.4
5.4.1
5. On the node that you removed from the cluster, execute the following command: cli status. The output of the cli status command must contain the following strings:
Service state=ACT
Master=YES
6. Even though the node is removed from the cluster, it still considers itself part of a cluster in which it cannot communicate with any of the other nodes. If you need the node to be a completely standalone machine, then perform the following steps.
a. Be aware that you must run the postinstall command, which will delete all your application, security, log, and system data.
b. To save application configurations, export them from the Management Console first. For the procedure about exporting configurations, refer to the Operation and Administration Guide for Intel Expressway Service Gateway.
c. Execute the cli postinstall command. During postinstall, you will be asked if the node should be added to an existing cluster. Answer no.
d. Start the service by executing cli serviceStart.
e. In the Management Console, import the application configurations that you exported. For the procedure about importing configurations, refer to the Operation and Administration Guide for Intel Expressway Service Gateway.
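The verification in step 5 can be expressed as a small check. The sketch below is an assumption, not a product command: removed_node_ok is a hypothetical helper that tests captured cli status output for both required strings.

```shell
#!/bin/sh
# Hedged sketch for step 5: a node freshly removed from the cluster
# should report both "Service state=ACT" and "Master=YES".
removed_node_ok() {
    out="$1"
    echo "$out" | grep -q "Service state=ACT" &&
    echo "$out" | grep -q "Master=YES"
}
# removed_node_ok "$(cli status)" && echo "node is a standalone master"
```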
5.4.2
3. On the master node, shut down the ESG service by executing the command cli serviceStop. As a result, after several minutes, another node in the cluster will be elected as master.
4. Log in to another node in the cluster.
5. On that node, execute the command cli status. In the command output, verify that another node has been elected master.
6. If it has, then log in to the former master node.
7. On the former master node, start the ESG service by executing the command cli serviceStart. As a result, the former master node is added back to the cluster as a slave node.
8. Access both the former master node and the current master node from CLI windows.
9. On the current master node, perform the following steps.
a. Before a node can be removed, all the nodes in a cluster must be in the active state. To determine the state of all nodes, execute cli status. If the service state for each node is state=ACT, then all the nodes are active and you can remove the node.
b. To remove the former master node, execute the command cli removeNode -n [nodename], where [nodename] is the name of the former master node. As a result, the following output should display: Successfully deleted the node '[node name]'.
10. On the node that you removed from the cluster, execute the following command: cli status. The output of the cli status command must contain the following strings:
Service state=ACT
Master=YES
11. Even though the node is removed from the cluster, it still considers itself part of a cluster in which it cannot communicate with any of the other nodes. If you need the node to be a completely standalone machine, then perform the following steps.
a. Be aware that you must run the postinstall command, which will delete all your application, security, log, and system data.
b. To save application configurations, export them from the Management Console first. For the procedure about exporting configurations, refer to the Operation and Administration Guide for Intel Expressway Service Gateway.
c. Execute the cli postinstall command. During postinstall, you will be asked if the node should be added to an existing cluster. Answer no.
d. Start the service by executing cli serviceStart.
e. In the Management Console, import the application configurations that you exported. For the procedure about importing configurations, refer to the Operation and Administration Guide for Intel Expressway Service Gateway.
5.5
5.5.1
3. Obtain the node's name. For details about retrieving this information, refer to section 5.3.2 Managing Nodes in a Service Gateway Cluster.
4. On the master node, execute the following command: cli editNodeOamIp --nodename [Node Name] --oam_ip [new IP address for OAM NIC]
5.5.2
e. Continue executing the cli moDetails command on each network interface until the output displays Is OAM interface = true. For details, refer to section 5.3.2 Managing Nodes in a Service Gateway Cluster.
4. Determine which slave node was elected as the master.
5. On the new master node, execute the following command: cli editNodeOamIp --nodename [Node Name] --oam_ip [new IP address for OAM NIC]
6.0 Front End Load Balancing for HTTP Traffic
Figure 1.
The following step-by-step process describes how load balancing works in an ESG cluster.
1. Only one node in the load balancer group accepts incoming connections from a client. This node is called the Director. During initial setup of the load balancer, the first real server that is started becomes the Director. If two nodes start at the same time, then the node with the lowest IP address becomes the Director.
2. All interfaces are on the same network. The interfaces labeled F1, F2, and F3 are front end interfaces. They are used to hold the VIP when a node becomes the Director. The interfaces labeled B1, B2, and B3 are back end interfaces. They are used by the load-balancing software to distribute network traffic.
3. The Director is assigned the Virtual IP address (VIP). The Director binds the VIP to an external interface (F1). If another node takes the Director role, the VIP is moved to its external interface. A VIP is an IP address that is not connected to a physical network interface card (NIC). The VIP is bound to one physical NIC on the node, such as eth0. The binding is based on the NIC's name. The physical NIC that the VIP is bound to is determined during the setup of the load balancer.
4. Messages are sent to the VIP, which is bound to a physical NIC.
5. Based on a load balancing algorithm, the Director determines which node will receive the message request.
6. Once that decision is made, the message is sent to the back end NIC of the Director's node.
7. From the Director's back end NIC, the message is sent to the receiving node's loopback network interface, also known as the LO. The LO is a virtual network interface that is not connected to any hardware but is fully integrated into the system's internal network infrastructure.
8. The receiving node processes the message request. Then, the node sends a message response directly to the client.
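Because the VIP moves with the Director role, a node can tell whether it currently is the Director by checking whether the VIP is present on the expected NIC. The sketch below is an assumption, not a product tool: holds_vip is a hypothetical parser for ip addr style output, and the VIP default reuses the example address from this chapter.

```shell
#!/bin/sh
# Hedged sketch: check whether this node currently holds the VIP
# (and is therefore the Director). VIP defaults to the chapter's
# example address; the parser reads `ip -o addr show` output.
VIP="${VIP:-10.0.10.100}"
holds_vip() {
    grep -q "inet $VIP/"
}
# ip -o addr show eth0 | holds_vip && echo "this node is the Director"
```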
6.1
6.2
3. To install the load balancer, execute the following command on the load balancer installer file: sh [loadbalancer].sh
4. By default, to execute the load balancer configuration commands, you must provide the absolute file path to the load balancer's bin directory: /opt/scr-lb/bin. To avoid specifying the full path, execute the following command: PATH=$PATH:/opt/scr-lb/bin
5. Familiarize yourself with how to use the lbconfig command. The lbconfig command sets up the load balancer on a machine by allowing you to specify settings such as the VIP, the back end IP addresses that messages will be routed to, and the load balancing algorithms used by the Director. For information about lbconfig, refer to sections 6.6 Describing the Command Syntax for lbconfig and 6.6.1 Example of Executing lbconfig.
6. Execute the lbconfig command with the command options appropriate for your load balanced environment. The following is an example.
/opt/scr-lb/bin/lbconfig --vip=vip:eth0:10.0.10.100/24 --backend_server=10.0.10.102:1,10.0.10.103:2 --lb_algo=wrr
7. You must configure the load balancer so that it runs automatically after the machine reboots. To do this, execute the following command: chkconfig --add soae_lb
8. You must encrypt communication between the nodes in a load balanced cluster by creating a VRRP password for the production system. To do this, perform the following steps.
a. Execute the command /opt/scr-lb/bin/lbpasswd. The default password is passwd.
b. When the Old password prompt displays, enter passwd.
c. When the New password prompt displays, enter a 14-character alphanumeric string.
d. When the Re-enter new password prompt displays, enter the same 14-character alphanumeric string that you specified in the New password prompt.
9. Start the load balancer service by executing the following command: service soae_lb start
10. Perform steps 1 through 9 on each machine that will be part of the load balanced cluster. On each machine, you must do the following:
The lbconfig command must be executed with the same command options on each machine. For example, if the lbconfig command option is lb_algo=wrr on one machine, then the load balancing algorithm must be weighted round robin on every other machine in the load balanced cluster.
The VRRP password must be the same on each machine.
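One way to satisfy step 10's consistency requirement is to drive every machine from the same script, so the lbconfig options cannot drift. The sketch below is illustrative only: lb_cmdline is a hypothetical helper that reuses the example options from step 6, and the live commands stay commented.

```shell
#!/bin/sh
# Hedged sketch of the per-machine setup, run unchanged on every
# machine so the lbconfig options match across the cluster.
PATH=$PATH:/opt/scr-lb/bin
lb_cmdline() {
    # the example configuration from step 6
    echo "lbconfig --vip=vip:eth0:10.0.10.100/24" \
         "--backend_server=10.0.10.102:1,10.0.10.103:2 --lb_algo=wrr"
}
# sh loadbalancer.sh          # step 3: install the load balancer
# $(lb_cmdline)               # step 6: configure it
# chkconfig --add soae_lb     # step 7: run automatically after reboot
# lbpasswd                    # step 8: set the shared VRRP password
# service soae_lb start       # step 9: start the service
```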
WARNING:
IN A LOAD BALANCED CLUSTER, IF THE LBCONFIG IS EXECUTED WITH COMMAND OPTIONS ON ONE MACHINE THAT DIFFER FROM COMMAND OPTIONS USED ON ANOTHER MACHINE, THEN THE LOAD BALANCER CANNOT FUNCTION CORRECTLY. IF THE MACHINES DO NOT ALL USE THE SAME VRRP PASSWORD, THEN THE MACHINES CANNOT DECRYPT COMMUNICATION FROM ONE ANOTHER.
11. Place the machines where the load balancer was installed and configured into the ESG cluster. For instructions about setting up a cluster, refer to section 5.0 Managing a Collection of Service Gateway Machines.
6.3
6.4
6.5
10.0.10.100:5000
6.6 Describing the Command Syntax for lbconfig
--vrrp_if
--backend_server
--vips
Table 5.
Command
--last
--lb_algo
--port
1. Execute the lbconfig command with these options: lbconfig --vip=vip:eth0:10.0.10.100/24 --backend_server=10.0.10.102:1,10.0.10.103:2 --port=vip:5555:sed --lb_algo=nq
2. In an HTTP application, set the input server's port number to 5555.
3. The Director then uses the sed load balancing algorithm to route messages.
4. If any other port number is specified in the input server, then the Director uses the never queue load balancing algorithm for routing messages.
If the client establishes a connection to a node, any additional connections made by the client within a user-defined interval are routed to the same node. This persistence timeout is the period of time during which messages are routed to the same real server. The unit of time is seconds.
If you have multiple load balancing groups, then you must specify a unique identifier for each group.
6.6.1 Example of Executing lbconfig
This is a two-node cluster. The network interface for VRRP is eth0. The physical NIC that the VIP is associated with is eth0. The virtual IP address and subnet mask is 10.0.10.100/24. The real servers' IP addresses are 10.0.10.102 and 10.0.10.103. 10.0.10.102 has the highest priority. In general, the load balancing algorithm the Director must use is weighted round robin. In some rare cases, the Director must use the weighted least-connection algorithm. If multiple messages are sent by the same client within a 5-second window, then all the messages must be sent to the same real server. If the system has the requirements described above, then you would execute the following command.
/opt/scr-lb/bin/lbconfig --vip=vip:eth0:10.0.10.100/24 --vrrp_if=eth0 --backend_server=10.0.10.102:2,10.0.10.103:1 --port=vip:5555:wlc --lb_algo=wrr --persistence_timeout=5
6.7
Table 6.
Algorithm                   Value
round robin                 rr
weighted round robin        wrr
least-connection            lc
weighted least-connection   wlc
source hashing              sh
shortest expected delay     sed
never queue                 nq
6.8
6.9
4. When lbconfig was executed, you could have specified that a particular load balancing algorithm is used when a particular port appears in the input server's Port field. If necessary, enter that port number in the input server's Port field.
5. Activate the application configuration.
6.10
CAUTION:
IF A NODE FAILS, THEN EXISTING CONNECTIONS AND DATA ARE LOST. UNTIL THE DIRECTOR NOTICES THE NODE IS DOWN, ANY CONNECTIONS DIRECTED TO IT ARE LOST. THE HEALTH CHECKER PERFORMS A CONNECTION TEST ON EACH NODE EVERY TWO SECONDS. THIS MEANS THERE IS A TWO-SECOND WINDOW IN WHICH DATA COULD BE LOST DUE TO A NODE FAILING.
The VIP is bound to one physical NIC on the Director's node, such as eth0. Each node in the cluster associates the VIP with the same NIC name, even when that node is not the Director and does not possess the VIP. For example, if there is a three-node cluster and the Director is node1 and binds the VIP to eth0, then node2 and node3 associate the VIP with eth0. Consequently, if the Director node fails, then the Director role, with its VIP, moves to a different node. The new Director already knows the VIP is bound to a particular NIC based on the network interface's name, even though the network interface's IP address has changed. For example, consider a two-node cluster, where node1 is the Director and the VIP is bound to eth0. If node1 fails and node2 becomes the Director, then node2 binds the VIP to eth0.
If the Director node fails, such as the machine crashing or the ESG service stopping, then a Director election automatically takes place. A Director election is the process by which the load balancing service elects one of the nodes to be the Director.
7.0
7.1
7.2
backup. If you receive this error message, then execute the secCardDriverInstall command. When executing this command, you must use the -b, --olddriverbackup option, which specifies the absolute file path to where the backup of the existing driver will be written prior to the new driver being installed. This backup file must be in tgz format. The following command is an example.
cli secCardDriverInstall -d /home/lablogin/secCardDriver/secCardDrivers-r2.5.1.tgz -b /home/lablogin/secCardDriver/secCardDrivers_Backup.tgz
7. Start the ESG service by executing the command cli serviceStart.
7.3
CAUTION:
IF YOU EXECUTE THE SECCARDDRIVERIGNORE COMMAND, THEN ESG WILL NOT USE THE SECURITY CARD FOR ANY CRYPTOGRAPHIC PROCESSING OR SECURITY ACCELERATION.
1. Stop the ESG service by executing the command cli serviceStop.
2. Configure the ESG to stop using the Cavium security card by executing the secCardDriverIgnore command. When executing this command, you must use the -b, --olddriverbackup option, which specifies the absolute file path to where the backup of the existing driver will be written prior to the new driver being installed. This backup file must be in tgz format. The following command is an example.
cli secCardDriverIgnore -b /home/lablogin/secCardDriver/secCardDrivers_Backup.tgz
3. Start the ESG by executing the command cli serviceStart.
To reconfigure the ESG to use the security card, perform the procedure in section 7.4 Creating and Using a Backup of Cavium Device Driver.
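Since both secCardDriverInstall and secCardDriverIgnore require the backup path to be absolute and in tgz format, that constraint can be checked up front. The sketch below is an assumption, not part of the CLI: valid_backup_path is a hypothetical helper, and BACKUP reuses the example path from the text.

```shell
#!/bin/sh
# Hedged sketch: enforce the -b/--olddriverbackup requirement (an
# absolute path ending in .tgz) before running the cli command.
BACKUP="${BACKUP:-/home/lablogin/secCardDriver/secCardDrivers_Backup.tgz}"
valid_backup_path() {
    case "$1" in
        /*.tgz) return 0 ;;
        *) echo "backup must be an absolute path ending in .tgz" >&2
           return 1 ;;
    esac
}
# cli serviceStop
# valid_backup_path "$BACKUP" && cli secCardDriverIgnore -b "$BACKUP"
# cli serviceStart
```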
7.4 Creating and Using a Backup of Cavium Device Driver
secCardDrivers_Backup.tgz
3. Start the ESG service by executing the command cli serviceStart.
8.0 Upgrade Procedure
The Service Gateway runtime is upgraded using the Command Line Interface (CLI). When you have a working ESG, you should always perform an upgrade rather than a reinstallation, except under extreme conditions such as disk corruption. An upgrade allows you to go from any lower release to any higher release of ESG. The upgrade command converts your existing configurations to the new release. A fresh installation completely deletes your existing application configurations.
You may want to reinstall the current release for some reason. You can back out an upgrade with the -b option on cli installUpgrade. To do this, you must have the tgz file generated by executing the upgrade_save command. You also need the RPM of the release you wish to back out to.
You can upgrade the ESG and move it to new hardware if the new hardware has the same set of network interfaces. For example, if the current hardware has eth0 and eth1, then the new machine must also have eth0 and eth1. This can be done by using the cli upgrade_save and cli installUpgrade commands.
a. Run cli upgrade_save on the current machine.
b. Copy the tgz bundle that contains the ESG from the current machine to the new machine.
c. Run cli installUpgrade using the tgz bundle from step b.
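The hardware-migration steps above depend on both machines exposing the same NIC names. The sketch below is illustrative only: same_nic_sets is a hypothetical helper for that precondition, the bundle name matches the transcript later in this chapter, and the live commands stay commented.

```shell
#!/bin/sh
# Hedged sketch: check the old and new machines expose identical NIC
# name sets before migrating with upgrade_save / installUpgrade.
same_nic_sets() {
    a=$(echo "$1" | tr ' ' '\n' | sort)
    b=$(echo "$2" | tr ' ' '\n' | sort)
    [ "$a" = "$b" ]
}
# old$ cli upgrade_save                    # step a: writes the tgz bundle
# old$ scp systemBackup.tgz new:/tmp/      # step b: copy the bundle over
# new$ cli installUpgrade ...              # step c: restore from the bundle
```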
8.1
rpm: the name of the new RPM you are upgrading to
upgrade_save_dir: the directory where you want to save the backup of your currently installed RPM
upgradeCluster: upgrades all nodes of an entire cluster at the same time
8.2
logs.tgz is the tgz file that contains all of the ESG logs. If you do not specify an absolute file path in the -f argument, the tgz file is saved to the directory where the saveLogFiles command is executed.
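The path rule above can be illustrated with a small helper. `resolve_log_path` is our own sketch of the behavior described, not part of the ESG CLI: an absolute path is used as given, while a bare name lands in the directory the command is run from.

```shell
# Illustrates where saveLogFiles -f writes its tgz, per the rule above.
resolve_log_path() {
    case "$1" in
        /*) printf '%s\n' "$1" ;;              # absolute path: used as-is
        *)  printf '%s/%s\n' "$(pwd)" "$1" ;;  # bare name: saved in the current directory
    esac
}
```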
8.3
8.3.1 Example of an Upgrade
The following is an example of upgrading to a new RPM. In this example, the system being upgraded is either a standalone system or the first node of a cluster to be upgraded. Note that you must issue a cli serviceStop command before the upgrade can run successfully, and that you must issue a cli serviceStart command once the upgrade is completed to start the ESG again.

/tmp>cli saveLogFiles -f logs.tgz
cli upgrade -r /root/esg-runtime-as5-64bit-r2_8_0.rpm
Warning: This command will delete all logs. If you would like to save a backup of these logs, execute command "cli saveLogFiles".
You are about to upgrade the software, do you want to continue 'yes|no'? yes
You answered yes
Pre-retrofit check of config cluster
Pre-retrofit check of config current
Pre-retrofit check of config factory
test RPM for validity
upgrade tar file is: /tmp/retrofit.esg.intel/systemBackup.tgz
upgrade_save successfully completed
Prepare Upgrade
Stopping soaed: [ OK ]
Please wait while upgrade continues
soae service will be stopped and uninstalled
Please Wait .
[root@iclab002 ~]# ..installing new esg rpm: /root/esg-runtime-as5-64bit-r2_8_0.rpm
.........begin upgrade_finish
configured uid/gid will be used
Forcing service state to new state = POSTINSTALLING.
esg patch in progress
Installing with uid: 99 gid: 99
Forcing service state to new state = OOS.
esg completed (upgrade) successfully.
You are now ready to start the soae service.
Hit Return to continue
8.3.2
8.4
8.5
8.5.1
No node is permitted to have an alarm or Threshold Crossing Alert (TCA), unless the TCA is caused by the factory configuration being active.
8.5.2
8.5.3
9.0
Table 7.