
IBM Elastic Storage Server (ESS)

Architecture and Configuration Guide


for
SAP HANA Tailored Datacenter Integration
Doc ID: WP102644, located on
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs

isicc@de.ibm.com
Version 1.3 July 2016

1 TABLE OF CONTENTS

2  Preface
3  IBM storage system certified for SAP HANA TDI
4  IBM ESS at a glance
5  IBM ESS sizing information for SAP HANA TDI
6  Sharing IBM ESS between SAP HANA and other applications (competing storage utilization)
7  Required IP network
8  Spectrum Scale cluster concept
9  Installing Linux
10 Setting up the Spectrum Scale clusters
   10.1 IBM ESS initial cluster setup
   10.2 Customizing ESS for HANA
   10.3 Setting up IBM Spectrum Scale on HANA nodes
11 Installing SAP HANA
   11.1 Global.ini
   11.2 hdbparam fileio parameter
12 SAP HANA TDI backup with IBM Tivoli Storage Manager for ERP
13 Resources
14 Trademarks
15 Disclaimers


2 PREFACE
This paper is intended as an architecture and configuration guide for setting up
the IBM Elastic Storage Server (ESS) and IBM Spectrum Scale (formerly known
as GPFS) clients for SAP HANA tailored datacenter integration (SAP HANA
TDI). SAP HANA TDI allows SAP customers to attach external storage to their SAP HANA nodes.
This document has been written for IT technical specialists and architects with
advanced skill levels in SUSE Linux Enterprise Server or Red Hat Enterprise
Linux and IBM Spectrum Scale, with a focus on architecting and setting up
the HANA nodes and the IBM ESS.
The recommendations in this guide apply to both single-node and scale-out
configurations on Intel and IBM POWER8 (and later) servers.
For more details on the SAP specification for using external storage for SAP HANA
TDI, please read the SAP document: http://www.saphana.com/docs/DOC-3633

3 IBM STORAGE SYSTEM CERTIFIED FOR SAP HANA TDI


All IBM ESS models are certified for SAP HANA TDI production: IBM ESS GS2, GS4,
GS6 and GL2, GL4, GL6.
For a list of all IBM storage systems certified for SAP HANA production, please
visit:
http://global.sap.com/community/ebook/2014-09-02-hana-hardware/enEN/enterprise-storage.html
Please also read SAP Note 2055470 - HANA on POWER Planning and Installation Specifics - Central Note.

4 IBM ESS AT A GLANCE


The IBM Elastic Storage Server (ESS) is a modern implementation of software-defined
storage built on IBM Spectrum Scale. This technology combines
the CPU and I/O capability of the IBM POWER8 architecture and matches it
with 2U and 4U storage enclosures. This architecture permits the IBM Spectrum
Scale RAID software to actively manage all RAID functionality formerly accomplished
by a hardware disk controller. Newly developed RAID techniques from IBM use this
CPU and I/O power to help overcome the limitations of current disk drive technology
and to simplify your transition to a multi-tier storage architecture employing
solid-state flash technology and robotic tape libraries.
In addition to these technological advancements, the Elastic Storage Server can
also address other data issues found in many businesses. For example, as each
department or division in your organization evolves its own storage needs, the
result can be a costly duplication of hardware resources. The resulting islands
of information may hold valuable insights that are not accessible in such a
disparate environment. By consolidating storage requirements across your
organization onto the Elastic Storage Server, you can reduce inefficiency and
acquisition costs while simplifying management and improving data protection.
ESS is designed for performance. Storing petabytes of data is meaningless unless
it can be accessed and analyzed quickly. Sustained streaming performance can
reach 20 GB per second in each building block, growing as more blocks are added
to a configuration. By combining the superior data movement capability of IBM
Power Systems servers with the enhanced I/O subsystem introduced in the POWER8
processor, and adding the disk management capability of the Power-server-driven
Native RAID technology, a complete storage solution can be deployed without
traditional storage controllers acting as a bottleneck to overall system performance.
With support for multiple 10 Gb per second and 40 Gb per second Ethernet links, as
well as InfiniBand speeds of up to 56 Gb per second (FDR), Elastic Storage
Servers have the architecture to deliver improved data throughput.

5 IBM ESS SIZING INFORMATION FOR SAP HANA TDI


The IBM ESS should be sized with the IBM tool HANAmagic;
this tool is available to IBM Sales and IBM Business Partners on IBM TechDocs.
Note: The performance sizing described in this chapter is only required for
productive HANA nodes (HANA databases for SAP production systems),
not for non-production HANA nodes (e.g. QA, test, development, or sandbox
systems). For the non-production systems only a capacity sizing is required:
DATA capacity is roughly three times the RAM, and LOG capacity is one times
the RAM size of the HANA node. Please check the latest SAP HANA installation guide.
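As a worked example of this capacity rule: a non-production HANA node with
512 GB of RAM would need roughly 3 x 512 GB = 1.5 TB of DATA capacity and
1 x 512 GB = 512 GB of LOG capacity.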


The table shows the maximum number of HANA nodes for the different IBM
ESS models.

Number of       GS2    GS4    GS6    GL2    GL4    GL6
HANA nodes      SSD    SSD    SSD    HDD    HDD    HDD
                16     16     16

6 SHARING IBM ESS BETWEEN SAP HANA AND OTHER APPLICATIONS (COMPETING STORAGE UTILIZATION)

IBM ESS can be shared between SAP HANA production systems and other SAP
or non-SAP production systems.
Because the I/O sizing can vary widely (e.g. from one small HANA production
system with several large non-HANA systems to many large HANA production
systems with just one non-HANA production system), no general sizing rule
can be given here.
IBM and IBM Business Partners can provide a detailed performance sizing for
a given or planned IT landscape.

7 REQUIRED IP NETWORK
Figure 1 shows the principal network setup for HANA TDI with Spectrum Scale.
It is recommended to have separate networks for the application and storage layers:

- A 1 Gb or 10 Gb Ethernet network to connect the SAP HANA nodes with the
  SAP application systems. This network is used for management as well.
- A 10 Gb or 40 Gb Ethernet or 56 Gb InfiniBand network connecting the
  HANA nodes with the IBM ESS and the backup server, e.g. IBM Tivoli Storage Manager:
  - 10 GbE: one connection per HANA node to ESS
  - 40 GbE: one connection per four HANA nodes to ESS
  - 56 Gb IB: one connection per five HANA nodes to ESS
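As a worked example of these ratios: a scale-out cluster of eight HANA nodes
would need eight 10 GbE links to the ESS, or two 40 GbE links (one per four
nodes), or two 56 Gb InfiniBand links (one per five nodes, rounded up), in
addition to the application and management network.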


8 SPECTRUM SCALE CLUSTER CONCEPT


IBM Spectrum Scale (formerly known as GPFS) is a cluster file system. This
means that it provides concurrent access to a single file system or set of file
systems from multiple nodes. These nodes can all be SAN-attached, or a mix of
SAN- and network-attached. This enables high-performance access to a common
set of data to support a scale-out solution or to provide a high-availability
platform.
Spectrum Scale has many features beyond common data access, including
data replication, policy-based storage management, and multi-site operations.
You can create a cluster of AIX nodes, Linux nodes, Microsoft Windows Server
nodes, or a mix of all three. It can run on virtualized instances, providing
common data access in environments that leverage logical partitioning or other
hypervisors. Multiple clusters can share data within a location or across wide
area network (WAN) connections.
Spectrum Scale provides a global namespace, shared file system access
among multiple Spectrum Scale clusters, simultaneous file access from multiple
nodes, high recoverability and data availability through replication, the ability
to make changes while a file system is mounted, and simplified administration
even in large environments.
Spectrum Scale provides storage management based on the definition and
use of storage pools, policies, and file sets.
Storage pools
A storage pool is a collection of disks or RAIDs with similar properties that
are managed together as a group. Storage pools provide a method to
partition storage within a file system.


Policies
Files are assigned to a storage pool based on defined policies.
File placement policies are used to automatically place newly created
files in a specific file system pool.
File management policies are used to manage files during their lifecycle
by moving them to another file system pool, moving them to near-line
storage, copying them to archival storage, changing their replication
status, or deleting them.
File sets
File sets provide a method for partitioning a file system and allow administrative operations at a finer granularity than the entire file system.
Figure 2 illustrates the recommended Spectrum Scale cluster concept. The IBM
ESS is a cluster of its own, the Spectrum Scale storage cluster; each HANA
system, or group of HANA systems, builds up a Spectrum Scale application
cluster. Each application cluster accesses the file system on the ESS (storage
cluster) via a Spectrum Scale cross-cluster file system mount. This is a remote
mount of a file system (in this figure, mount point /hana) using NSD server
access, i.e. a virtual connection to the file system data through an NSD server,
which means there is no direct physical connection between the disks (NSDs,
network shared disks) and the application nodes.

Figure 2 Spectrum Scale Cluster concept


9 INSTALLING LINUX
Install SUSE Linux Enterprise Server 11 / SUSE Linux Enterprise Server for SAP
11, Service Pack 4 or higher, or Red Hat Enterprise Linux for SAP HANA on local
disks or on SAN-attached storage. Follow the instructions and requirements in
the following documentation:
SAP Note 2001528 - Linux: SAP HANA Database SPS 12 on RHEL 6 or SLES 11
SAP Note 2009879 - SAP HANA Guidelines for Red Hat Enterprise Linux (RHEL)
Operating System
SAP Note 2013638 - SAP HANA DB: Recommended OS settings for RHEL 6.5
SAP Note 1944799 - SAP HANA Guidelines for SLES Operating System Installation
Set up ssh without password prompts between all HANA/Spectrum Scale
nodes and the ESS.
Ensure that proper DNS name resolution is in place, via /etc/hosts and DNS.
Ensure that a timeserver is configured.
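The following is a minimal sketch of how these prerequisites might be verified;
the host names are placeholders, and the time service check assumes classic
ntpd as shipped with SLES 11 / RHEL 6:

# distribute the root ssh key to all HANA/Spectrum Scale nodes and the ESS
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for host in hananode1 hananode2 essio1 essio2; do
    ssh-copy-id root@$host        # enables ssh without a password prompt
    ssh root@$host hostname       # must now return without prompting
done

# verify name resolution (served via /etc/hosts and/or DNS)
getent hosts hananode1 essio1

# verify the timeserver is configured and reachable
ntpq -p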

10 SETTING UP THE SPECTRUM SCALE CLUSTERS


Install the IBM Spectrum Scale software; follow the instructions and requirements
in the following documentation:
SAP Note 1084263 - Cluster File System: Use of GPFS on Linux
In-memory Computing with SAP HANA on IBM eX5 and X6 Systems
http://www.redbooks.ibm.com/abstracts/sg248086.html?Open
SAP HANA on RHEL on IBM xServer using GPFS
http://scn.sap.com/docs/DOC-60254
The minimum maintenance level of Spectrum Scale is 4.2, latest TL.
In general, you need to download the required Spectrum Scale packages
directly from IBM or install them from your local repository.


1. Create a separate, dedicated, diskless SAP HANA cluster; details can be
   found here: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/Linux
2. Mount a file system owned and served by another Spectrum Scale cluster
   (Spectrum Scale cross-cluster mount, sketched below); details can be found
   here: http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/com.ibm.cluster.gpfs.v4r104.gpfs200.doc/bl1adv_admrmsec.htm
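For orientation, the flow of such a cross-cluster mount is sketched below with
placeholder cluster, key-file, and file-system names; the Knowledge Center link
above remains the authoritative procedure.

# on each cluster: generate a public key and exchange it with the other cluster
mmauth genkey new

# on the ESS storage cluster: authorize the HANA application cluster
# and grant it access to the file system
mmauth add hanacluster.example.com -k /tmp/hanacluster_id_rsa.pub
mmauth grant hanacluster.example.com -f hana16M

# on the HANA application cluster: register the remote cluster and its
# file system, then mount it on all nodes
mmremotecluster add HoPcluster.wdf.sap.corp -n is38san1.wdf.sap.corp,is38san2.wdf.sap.corp -k /tmp/ess_id_rsa.pub
mmremotefs add hana16M -f hana16M -C HoPcluster.wdf.sap.corp -T /hana
mmmount hana16M -a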
10.1 IBM ESS INITIAL CLUSTER SETUP
Every ESS comes with a standard deployment procedure, which is well
documented in the IBM Spectrum Scale product documentation. After finishing
the standard deployment of the ESS, you will have an up-and-running Spectrum
Scale cluster, consisting of the two ESS head nodes and the management
node. You can verify the successful provisioning process with the mmlscluster command.
[root@is38esma ~]# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         HoPcluster.wdf.sap.corp
  GPFS cluster id:           276748495507182815
  GPFS UID domain:           HoPcluster.wdf.sap.corp
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name       IP address     Admin node name        Designation
-------------------------------------------------------------------------------
   1   is38san1.wdf.sap.corp  10.17.193.185  is38san1.wdf.sap.corp  quorum-manager
   3   is38san2.wdf.sap.corp  10.17.193.186  is38san2.wdf.sap.corp  quorum-manager
   4   is38esm.wdf.sap.corp   10.17.193.184  is38esm.wdf.sap.corp   quorum-manager

10.2 CUSTOMIZING ESS FOR HANA


The ESS models can be ordered with different amounts of memory and capacity.
Depending on the targeted number of HANA nodes, various ESS models
are available. In addition, the ESS can be ordered with 2 x 3 PCI adapters for
network connectivity, in any combination of GbE or InfiniBand.


10.2.1 Initial vdisk layout

After the standard deployment and initial function verification of the ESS, delete
all existing vdisks and create new ones according to the following procedure.
Access the samples directory on one of the ESS nodes:

[root@is38san1a vdisk]# pwd
/usr/lpp/mmfs/samples/vdisk

Customize the vdisk configuration file as follows:

[root@is38san1a vdisk]# cat vdisk.stanza.ini
# NVR
%vdisk: vdiskName=hanaL_ltip rg=hanaL da=NVR size=48m blocksize=2m raidCode=2WayReplication diskUsage=vdiskLogTip
# SSD
# create log tip backup vdisk on a single SSD
%vdisk: vdiskName=hanaL_ltbackup rg=hanaL da=SSD size=48m blocksize=2m raidCode=Unreplicated diskUsage=vdiskLogTipBackup
# DA1
%vdisk: vdiskName=hanaL_lhome rg=hanaL da=DA1 size=100g blocksize=2m raidCode=4WayReplication diskUsage=vdiskLog longTermEventLogSize=4m shortTermEventLogSize=4m fastWriteLogPct=90
# Recovery group hanaR
%vdisk: vdiskName=hanaR_ltip rg=hanaR da=NVR size=48m blocksize=2m raidCode=2WayReplication diskUsage=vdiskLogTip
%vdisk: vdiskName=hanaR_ltbackup rg=hanaR da=SSD size=48m blocksize=2m raidCode=Unreplicated diskUsage=vdiskLogTipBackup
%vdisk: vdiskName=hanaR_lhome rg=hanaR da=DA1 size=100g blocksize=2m raidCode=4WayReplication diskUsage=vdiskLog

Create the vdisks with mmcrvdisk -F <your-stanzafile-name>.


10.2.2 File systems

You need to configure two different file systems for HANA: one file system
holding the data files, with 16 MB block size, and one file system for the DB log
workload (the sequential part of HANA workloads), with 1 MB block size. This is
a standard ESS configuration step. Use the following two sample vdisk stanza
files to proceed.
File system stanza file for data:

[root@is38san1a vdisk]# cat vdisk.stanza.datafs
%vdisk: vdiskName=hanaLDFT2M1 rg=hanaL da=DA1 blocksize=1m  size=200g raidCode=4WayReplication diskUsage=metadataOnly
%vdisk: vdiskName=hanaLDFT2D1 rg=hanaL da=DA1 blocksize=16m size=500g raidCode=8+2p diskUsage=dataOnly pool=datapool
%vdisk: vdiskName=hanaRDFT2M1 rg=hanaR da=DA1 blocksize=1m  size=200g raidCode=4WayReplication diskUsage=metadataOnly
%vdisk: vdiskName=hanaRDFT2D1 rg=hanaR da=DA1 blocksize=16m size=500g raidCode=8+2p diskUsage=dataOnly pool=datapool

File system stanza file for log:

[root@is38san1a vdisk]# cat vdisk.stanza.logfs
%vdisk: vdiskName=hanaLM1 rg=hanaL da=DA1 blocksize=1m size=50g  raidCode=4WayReplication diskUsage=metadataOnly
%vdisk: vdiskName=hanaLD1 rg=hanaL da=DA1 blocksize=1m size=200g raidCode=8+2p diskUsage=dataOnly pool=datapool
%vdisk: vdiskName=hanaRM1 rg=hanaR da=DA1 blocksize=1m size=50g  raidCode=4WayReplication diskUsage=metadataOnly
%vdisk: vdiskName=hanaRD1 rg=hanaR da=DA1 blocksize=1m size=200g raidCode=8+2p diskUsage=dataOnly pool=datapool

Generate vdisks and NSDs from your stanza files according to the following
sample. Repeat the same steps for each stanza file, i.e. for data and for log.
[root@is38san1a vdisk]# mmcrvdisk -F vdisk.stanza.datafs
mmcrvdisk: [I] Processing vdisk hanaLDFT2M1
mmcrvdisk: [I] Processing vdisk hanaLDFT2D1
mmcrvdisk: [I] Processing vdisk hanaRDFT2M1
mmcrvdisk: [I] Processing vdisk hanaRDFT2D1
mmcrvdisk: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@is38san1a vdisk]# mmcrnsd -F vdisk.stanza.datafs
mmcrnsd: Processing disk hanaLDFT2M1
mmcrnsd: Processing disk hanaLDFT2D1
mmcrnsd: Processing disk hanaRDFT2M1
mmcrnsd: Processing disk hanaRDFT2D1

After that, you need to create the file systems with mmcrfs; see the next
sample for how to proceed.
[root@is38san1a vdisk]# mmcrfs hana16M -F vdisk.stanza.datafs -B 16M --metadata-block-size 1M -M 2 -R 2 -m 1 -r 1 -L 256M -T /gpfs/data16M -E no -j scatter -S relatime
The following disks of hana16M will be formatted on node is38san1a.gpfs.net:
hanaLDFT2M1: size 205833 MB
hanaLDFT2D1: size 521712 MB
hanaRDFT2M1: size 205833 MB
hanaRDFT2D1: size 521712 MB
Formatting file system ...
Disks up to size 1.8 TB can be added to storage pool system.
Disks up to size 6.7 TB can be added to storage pool datapool.
Creating Inode File
Creating Allocation Maps
Creating Log Files
3 % complete on Wed Jun 1 10:46:13 2016
100 % complete on Wed Jun 1 10:46:14 2016
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool system
Formatting Allocation Map for storage pool datapool
Completed creation of file system /dev/hana16M.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@is38san1a vdisk]#
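Because the data vdisks of these file systems are placed in the datapool
storage pool (dataOnly), newly created files need a placement policy directing
them there. A minimal sketch follows; the rule text is an assumption to be
adapted to your pool layout, and the same rule should be installed on log1M
once it exists:

[root@is38san1a vdisk]# cat > policy.rules <<'EOF'
RULE 'default' SET POOL 'datapool'
EOF
[root@is38san1a vdisk]# mmchpolicy hana16M policy.rules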


Repeat this step for the log filesystem:


[root@is38san1a vdisk]# mmcrvdisk -F vdisk.stanza.logfs
mmcrvdisk: [I] Processing vdisk hanaLM1
mmcrvdisk: [I] Processing vdisk hanaLD1
mmcrvdisk: [I] Processing vdisk hanaRM1
mmcrvdisk: [I] Processing vdisk hanaRD1
mmcrvdisk: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@is38san1a vdisk]# mmcrnsd -F vdisk.stanza.logfs
mmcrnsd: Processing disk hanaLM1
mmcrnsd: Processing disk hanaLD1
mmcrnsd: Processing disk hanaRM1
mmcrnsd: Processing disk hanaRD1
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@is38san1a vdisk]#
[root@is38san1a vdisk]# mmcrfs log1M -F vdisk.stanza.logfs -B 1M --metadata-block-size 1M -M 2 -R 2 -m 1 -r 1 -L 256M -T /gpfs/log1M -E no -j scatter -S relatime

The following disks of log1M will be formatted on node is38san2a.gpfs.net:


hanaLM1: size 52986 MB
hanaLD1: size 206324 MB
hanaRM1: size 52986 MB
hanaRD1: size 206324 MB
Formatting file system ...
Disks up to size 531 GB can be added to storage pool system.
Disks up to size 1.8 TB can be added to storage pool datapool.
Creating Inode File
83 % complete on Thu Jun 2 15:07:24 2016
100 % complete on Thu Jun 2 15:07:24 2016
Creating Allocation Maps
Creating Log Files
3 % complete on Thu Jun 2 15:07:30 2016
28 % complete on Thu Jun 2 15:07:37 2016
53 % complete on Thu Jun 2 15:07:43 2016
78 % complete on Thu Jun 2 15:07:49 2016
100 % complete on Thu Jun 2 15:07:50 2016
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool system
Formatting Allocation Map for storage pool datapool
Completed creation of file system /dev/log1M.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@is38san1a vdisk]#
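At this point both file systems exist but are not necessarily mounted. A short
sketch of mounting and verifying them (the block-size check via mmlsfs is an
illustration):

[root@is38san1a vdisk]# mmmount hana16M -a
[root@is38san1a vdisk]# mmmount log1M -a
[root@is38san1a vdisk]# mmlsfs hana16M -B      # should report the 16M data block size
[root@is38san1a vdisk]# df -h /gpfs/data16M /gpfs/log1M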


10.2.3 Customizing Spectrum Scale (aka GPFS) parameters

You need to adjust some of the Spectrum Scale parameters to benefit from
the hardware and setup changes made before:

mmchconfig nsdRAIDFlusherFWLogLimitMB=60k -N gss_ppc64
mmchconfig nsdRAIDFlusherFWLogHighWatermarkMB=60k -N gss_ppc64
mmchconfig nsdRAIDFastWriteFSMetadataLimit=1m -N gss_ppc64
mmchconfig nsdRAIDFastWriteFSDataLimit=2m -N gss_ppc64

A full list of parameters can be found in the appendix.
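To confirm that the changed values were applied to the ESS I/O nodes,
something like the following can be used (the grep filter is just an
illustration):

mmlsconfig | grep -i nsdRAID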

10.3 SETTING UP IBM SPECTRUM SCALE ON HANA NODES


IBM Spectrum Scale is shipped as a self-extracting executable program. To
extract and install the software, you may need to set the mode bits accordingly
and then execute the program. You will either need a valid X11 environment or
the command line option --text-only, which redirects the output of the
license approval to the terminal. After accepting the license, the script creates
some directories and unpacks the software to /usr/lpp/mmfs/[release]/gpfs_rpms,
as shown in the example below. After the self-extracting program has finished
successfully, verify that the RPM packages are available in the given directory,
according to the output of the command. In the Spectrum Scale 4.2 release, the
directory is /usr/lpp/mmfs/4.2.0.0. The same procedure applies if you download
higher PTF levels from IBM Fix Central.


lsh30100:~/ # chmod 755 Spectrum_Scale_Standard-4.2.0.0-x86_64-Linux-install
lsh30100:~/ # ./Spectrum_Scale_Standard-4.2.0.0-x86_64-Linux-install --text-only
Extracting License Acceptance Process Tool to /usr/lpp/mmfs/4.2.0.0 ...
tail -n +544 ./Spectrum_Scale_Standard-4.2.0.0-x86_64-Linux-install | /bin/tar -C
/usr/lpp/mmfs/4.2.0.0 -xvz --exclude=installer --exclude=*_rpms --exclude=*rpm --exclude=*tgz
--exclude=*deb 1> /dev/null
Installing JRE ...
tail -n +544 ./Spectrum_Scale_Standard-4.2.0.0-x86_64-Linux-install | /bin/tar -C
/usr/lpp/mmfs/4.2.0.0 --wildcards -xvz ibm-java*tgz 1> /dev/null
/bin/tar -C /usr/lpp/mmfs/4.2.0.0/ -xzf /usr/lpp/mmfs/4.2.0.0/ibm-java*tgz
Invoking License Acceptance Process Tool ...
/usr/lpp/mmfs/4.2.0.0/ibm-java-x86_64-71/jre/bin/java -cp
/usr/lpp/mmfs/4.2.0.0/LAP_HOME/LAPApp.jar com.ibm.lex.lapapp.LAP -l
/usr/lpp/mmfs/4.2.0.0/LA_HOME -m /usr/lpp/mmfs/4.2.0.0 -s /usr/lpp/mmfs/4.2.0.0 -text_only

LICENSE INFORMATION
[...]
Press Enter to continue viewing the license agreement, or
enter "1" to accept the agreement, "2" to decline it, "3"
to print it, "4" to read non-IBM terms, or "99" to go back
to the previous screen.
1
License Agreement Terms accepted.
Extracting Product RPMs to /usr/lpp/mmfs/4.2.0.0 ...
[. . . ]
==================================================================
Product rpms successfully extracted to /usr/lpp/mmfs/4.2.0.0

10.3.1 Install Spectrum Scale

Change right into this directory and install directly with the rpm command,
as shown in the following example.
lsh30100:/usr/lpp/mmfs/4.2.0.1/gpfs_rpms # rpm -ihv gpfs.adv-4.2.0-1.ppc64.rpm gpfs.base-4.2.0-1.ppc64.rpm gpfs.docs-4.2.0-1.noarch.rpm gpfs.ext-4.2.0-1.ppc64.rpm gpfs.gpl-4.2.0-1.noarch.rpm gpfs.gskit-8.0.50-47.ppc64.rpm gpfs.msg.en_US-4.2.0-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:gpfs.base              ########################################### [ 14%]
   2:gpfs.ext               ########################################### [ 29%]
   3:gpfs.adv               ########################################### [ 43%]
   4:gpfs.docs              ########################################### [ 57%]
   5:gpfs.gpl               ########################################### [ 71%]
   6:gpfs.gskit             ########################################### [ 86%]
   7:gpfs.msg.en_US         ########################################### [100%]
lsh30100:/usr/lpp/mmfs/4.2.0.1/gpfs_rpms #

10.3.2 Build the portability layer

IBM Spectrum Scale is a fully POSIX-compliant file system and is supported on
various Linux distributions. The Spectrum Scale portability layer is a loadable
kernel module that allows the GPFS daemon to interact with the operating
system; it needs to be built in your given environment.

Each kernel module is specific to a Linux version and platform. If you have multiple nodes running exactly the same operating system level on the same platform, and only some of these nodes have a compiler available, you can build
the kernel module on one node, then create an installable package that contains the binary module for ease of distribution.
Build the portability layer as follows:
lsh30100:~ # mmbuildgpl --build-package
[...]
Wrote: /usr/src/packages/RPMS/ppc64/gpfs.gplbin-3.0.101-63-ppc64-4.2.0-1.ppc64.rpm

Install this newly created RPM on every node, similar to the following example:
lsh30100:~ # for i in 1 2 4 5 6
> do
> ssh lsh3010$i "rpm -ihv /root/gpfs.gplbin-3.0.101-63-ppc64-4.2.0-1.ppc64.rpm"
> done

10.3.3 Adjust your environment for Spectrum Scale

All executables are installed in the /usr/lpp/mmfs/bin directory. For easier use
of the Spectrum Scale administrative commands, you may include this directory
in your environment, for example:

echo "export PATH=\$PATH:/usr/lpp/mmfs/bin" >> .bashrc

To administer a Spectrum Scale cluster you need appropriate user credentials.
You can configure your cluster with the sudo wrapper enabled, or simply
allow root access to your nodes.
For using the sudo wrapper, please have a look at the Spectrum Scale product
documentation:
https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_sudowrapper.htm
10.3.4 Add nodes to the cluster

Once the software is installed on the nodes, they can simply be added to your
existing cluster. Use the mmaddnode command and make sure that DNS name
resolution works properly. In a further step, you need to accept the license
agreement with mmchlicense. See the following example.


[root@is38esma ~]# mmaddnode -N lsh30107


Thu Jun 16 12:41:36 CEST 2016: mmaddnode: Processing node
lsh30107.wdf.sap.corp
mmaddnode: Command successfully completed
mmaddnode: Warning: Not all nodes have proper GPFS license designations.
Use the mmchlicense command to designate licenses as needed.
mmaddnode: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@is38esma ~]# mmchlicense client --accept -N lsh30107
The following nodes will be designated as possessing client licenses:
lsh30107.wdf.sap.corp
mmchlicense: Command successfully completed
mmchlicense: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@is38esma ~]#

10.3.5 Configure the first added node / create a node class

In order to assign the right Spectrum Scale parameters to the new node, you
may configure a node class, so that you can apply the settings simply by
sorting the new node into the node class.
Create a node class as in the next example:

[root@is38esma ~]# mmcrnodeclass hananode -N lsh30107
mmcrnodeclass: Propagating the cluster configuration data to all
  affected nodes. This is an asynchronous process.
[root@is38esma ~]# mmlsnodeclass hananode
Node Class Name        Members
---------------------  ----------------------------------------------------------
hananode               lsh30107.wdf.sap.corp
[root@is38esma ~]#

Assign default settings to the node class:


mmchconfig maxMBpS=2000,maxGeneralThreads=2048,numaMemoryInterleave=yes,verbsRdmaMinBytes=8k,verbsRdmaSend=yes,\
verbsRdmasPerConnection=128,verbsSendBufferMemoryMB=1024,nsdInlineWriteMax=4k,aioWorkerThreads=256 -N hananode

mmchconfig disableDIO=yes,aioSyncDelay=10 -N hananode
mmchconfig verbsPorts="mlx4_0/1 mlx4_0/2" -N hananode   ## according to your IB cards
mmchconfig pagepool=32G -N hananode                     ## minimum requirement is 12GB

For a better overview, check your configuration with mmlsconfig:


[hananode]
maxMBpS 2000
maxGeneralThreads 2048
numaMemoryInterleave yes
verbsRdmaMinBytes 8k
verbsRdmaSend yes
verbsRdmasPerConnection 128
verbsSendBufferMemoryMB 1024
nsdInlineWriteMax 4k
aioWorkerThreads 256
verbsPorts mlx4_0/1 mlx4_0/2
pagepool 32G

10.3.6 Configure further added nodes

When adding further Spectrum Scale clients to the cluster, the parameter
settings can easily be applied by adding the new node to the existing node class.
[root@is38esma ~]# mmchnodeclass hananode add -N lsh30106
mmchnodeclass: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@is38esma ~]#

10.3.7 Start Spectrum Scale and verify the configuration

Simply start the Spectrum Scale daemon on the node and check the status.
[root@is38esma ~]# mmstartup -N lsh30106
Thu Jun 16 15:34:06 CEST 2016: mmstartup: Starting GPFS ...
[root@is38esma ~]#
lsh30106:~ # mmlsnodeclass hananode
Node Class Name        Members
---------------------  ----------------------------------------------------------
hananode               lsh30100,lsh30101,lsh30102,lsh30104,lsh30105,lsh30103,
                       lsh30107.wdf.sap.corp,lsh30106.wdf.sap.corp
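A quick way to confirm the overall state, assuming the cross-cluster mounts
are already configured:

mmgetstate -N hananode        # daemon state of all nodes in the node class
mmlsmount all -L              # shows which nodes have the remote file systems mounted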


11 INSTALLING SAP HANA


Please follow the instructions in the SAP HANA Server Installation and Update
Guide for the current release: SAP HANA Platform SPS 12.

11.1 GLOBAL.INI
No additional storage configuration is necessary.
This is an example of a /hana/shared/global.ini file. The [storage] section will
be empty:
[communication]
listeninterface = .global
[persistence]
basepath_datavolumes = /hana/data/SID
basepath_logvolumes = /hana/log/SID
[storage]
[trace]
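How the basepath settings map onto the Spectrum Scale file systems is
site-specific. A minimal sketch follows, under the assumption that the data and
log file systems from chapter 10.2.2 are remote-mounted at /gpfs/data16M and
/gpfs/log1M and that SID stands for your system ID:

# create the per-SID directories on the Spectrum Scale file systems
mkdir -p /gpfs/data16M/SID /gpfs/log1M/SID /hana/data /hana/log

# expose them under the paths configured in global.ini (hypothetical mapping)
ln -s /gpfs/data16M/SID /hana/data/SID
ln -s /gpfs/log1M/SID /hana/log/SID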

11.2 HDBPARAM FILEIO PARAMETER

The following parameters were defined for the SAP HWCCT tool hwval/fsperf
and need to be used for all IBM storage:

async_write_submit_active : on
async_write_submit_blocks : all
async_read_submit         : on

Please read SAP Note 1930979 - Alert: Sync/Async read ratio, on how to set
these parameters via the hdbparam tool.
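A sketch of setting these values with hdbparam, run as the <sid>adm user on
each HANA node; the exact parameter-set syntax should be verified against
SAP Note 1930979:

# run as <sid>adm; syntax per SAP Note 1930979 (verify there first)
hdbparam --paramset fileio.async_write_submit_active=on
hdbparam --paramset fileio.async_write_submit_blocks=all
hdbparam --paramset fileio.async_read_submit=on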

12 SAP HANA TDI BACKUP WITH IBM TIVOLI STORAGE MANAGER FOR
ERP
IBM Tivoli Storage Manager for Enterprise Resource Planning protects your
vital SAP system data. It provides automated data protection designed for
SAP and SAP HANA environments. Now you can improve the availability of
your SAP database servers and reduce your administration workload.
Please see the current product documentation on how to install, configure,
and run this integrated solution.


13 RESOURCES
For any product or documentation provided by SAP please contact SAP.
For any product or documentation provided by SUSE please contact SUSE.
For any product or documentation provided by Red Hat please contact Red
Hat.
For any product or documentation provided by IBM regarding SAP send an
email to isicc@de.ibm.com
How to set up Spectrum Scale and ESS can be found here:
http://www-01.ibm.com/support/knowledgecenter/
Select in the table of contents: Cluster software > General Parallel File System > GPFS storage server (ESS).

Installation, configuration and usage of Tivoli Storage Manager for Enterprise
Resource Planning V6.4 Data Protection for SAP HANA:
http://www-01.ibm.com/support/docview.wss?uid=swg21608240
SAP documentation:
Overview - SAP HANA tailored data center integration
http://www.saphana.com/docs/DOC-3633
FAQ - SAP HANA tailored data center integration
http://www.saphana.com/docs/DOC-3634
Introduction to High Availability for SAP HANA
http://www.saphana.com/docs/DOC-2775
IBM/SAP Whitepaper "High-end customer IT Landscapes based on SAP HANA"
http://www.saphana.com/docs/DOC-3211
Access to SAP Jam and its documentation is provided by SAP; the listed
links only work after login: log in first, then access the links.
SAP HANA reference architecture
https://jam4.sapjam.com/wiki/show/48495
SAP HANA Bill of Material
https://jam4.sapjam.com/wiki/show/48202


Novell / SUSE documentation:


List of all SLES 11 documentation
https://www.suse.com/documentation/sles11/
Highly Available NFS Storage with DRBD and Pacemaker with SUSE Linux Enterprise High Availability Extension 11
https://www.suse.com/documentation/sle_ha/singlehtml/book_sleha_techguides/book_sleha_techguides.html

Red Hat documentation:


RED HAT ENTERPRISE LINUX FOR SAP HANA ON IBM: INSTALLATION GUIDE
https://hcp.sap.com/content/dam/website/saphana/en_us/Technology%20Documents/SAP_HANA_on_RHEL_on_IBM_xServer_using_GPFS.pdf

14 TRADEMARKS
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corporation in the United States, other countries,
or both. These and other IBM trademarked terms are marked on their first
occurrence in this information with the appropriate symbol (® or ™), indicating
US registered or common law trademarks owned by IBM at the time this
information was published. Such trademarks may also be registered or common
law trademarks in other countries. A current list of IBM trademarks is available
on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines
Corporation in the United States, other countries, or both:
AIX, BladeCenter, DB2, Global Business Services, Global Technology Services,
GPFS, Spectrum Scale, IBM SmartCloud, IBM, Intelligent Cluster, Passport
Advantage, POWER, PureFlex, RackSwitch, Redbooks, Redpaper, Redbooks
(logo)®, System Storage, System x, System z, Tivoli, z/OS.
The following terms are trademarks of other companies:
SAP, R/3, ABAP, BAPI, SAP NetWeaver, Duet, PartnerEdge, ByDesign, SAP BusinessObjects Explorer, StreamWork, SAP HANA, the Business Objects logo, BusinessObjects, Crystal Reports, Crystal Decisions, Web Intelligence as well as their
respective logos are trademarks or registered trademarks of SAP AG in Germany or an SAP affiliate company.
Intel Xeon, Intel, Itanium, Intel logo, Intel Inside logo, and Intel Centrino logo
are trademarks or registered trademarks of Intel Corporation or its subsidiaries
in the United States and other countries.


Linux is a trademark of Linus Torvalds in the United States, other countries, or


both.
SUSE is a registered trademark of SUSE Germany and Novell USA, other countries, or both. SLES is a trademark of SUSE Germany and Novell USA, other countries, or both.
Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries.
Other company, product, or service names may be trademarks or service
marks of others.

15 DISCLAIMERS
This information was developed for products and services offered in Germany.
IBM may not offer the products, services, or features discussed in this document
in other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property
right may be used instead. However, it is the user's responsibility to evaluate
and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to: IBM
Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law: INTERNATIONAL
BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this
statement may not apply to you.
This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will


be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in
this publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those
websites. The materials at those websites are not part of the materials for this
IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments
may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will
be the same on generally available systems. Furthermore, some measurements
may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.
Information concerning non-IBM products was obtained from the suppliers of
those products, their published announcements or other publicly available
sources. IBM has not tested those products and cannot confirm the accuracy
of performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include
the names of individuals, companies, brands, and products. All of these names
are fictitious and any similarity to the names and addresses used by an actual
business enterprise is entirely coincidental.

