Agenda
External storage options for IBM i
IBM i storage management
Connecting V7000 to IBM i
Defining the LUNs for IBM i
Boot from SAN
Solid State Drives in V7000 for IBM i
Zoning the switches
Multipath
Implementation steps
IBM i architecture relevant to Copy services
V7000 Copy services
Solutions for IBM i with V7000 Copy services
PowerHA SystemMirror for i with V7000 Copy services
© 2014 IBM Corporation
Storwize V7000 and IBM i pre-sales enablement
[Diagram: external storage options for IBM i — SVC, Storwize family, XIV, DS4000/DS5000, DS6000, FlashSystem V840, ProtecTIER, and tape libraries & drives, attached natively or through VIOS; archive tiers shown separately.]
Packages
– IBM SAN Volume Controller (virtualization system; no storage)
– IBM Storwize V3700 (entry)
– IBM Storwize V5000 (midrange)
– IBM Storwize V7000 (midrange)
– IBM Storwize V7000 Unified (file)
– IBM Flex System V7000
– IBM FlashSystem V840
– SVC-DH8 (2U)
System functions
– Thin provisioning
– Easy Tier
– IBM Real-time Compression
– Remote replication (synchronous and asynchronous)
– Storage virtualization (IBM and non-IBM)
– IBM SmartCloud Storage Access (cloud)
– Integrated options
Single-level storage
[Diagram: an IBM i (i5/OS) partition — main memory and disk form one address space under single-level storage.]
IBM i sees all disk space and the main memory as one storage area.
It uses the same set of 64-bit virtual addresses to cover both main memory and disk space.
IBM i takes responsibility for managing the information in auxiliary storage pools.
The system places each file in the location that gives the best performance.
It normally spreads the data in the file across multiple disk units.
Requirements:
– Hardware: POWER7 or POWER8
[Diagram: IBM i load source LUN located on the V7000.]
Some steps are needed to identify the LUN in a VSCSI connection.
Zone for native attachment:
– One WWPN of one IBM i port
– Two ports of the V7000, one port from each node canister
– This gives resiliency for the I/O to and from the LUN mapped to that WWPN
– The path through the preferred node is active; the path through the non-preferred node is passive
Zone for VIOS attachment:
– One physical port in VIOS
– As many V7000 ports as possible to allow load balancing, keeping in mind that a maximum of 8 paths are available from VIOS to the V7000
– V7000 ports should be spread evenly between the node canisters
Multipath
Every LUN in Storwize V7000 uses one V7000 node as its preferred node:
– The preferred node for a LUN is assigned automatically when the LUN is created, but can be changed
– I/O traffic to and from a particular LUN normally goes through the preferred node
– If that node fails, the I/O is transferred to the remaining node
With IBM i multipath, all the paths to a LUN through the preferred node are active, and load balancing is used across the active paths. The paths through the non-preferred node are passive.
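As a hedged sketch, defining the IBM i host and mapping a LUN on the V7000 CLI might look like the following (the host name, volume name, and WWPNs are purely illustrative):

```
# Define the IBM i host with both of its FC WWPNs (hypothetical values)
svctask mkhost -name IBMI_HOST -fcwwpn 10000000C9A11111:10000000C9A22222
# Map a volume to the host; IBM i then sees one LUN reachable over multiple paths
svctask mkvdiskhostmap -host IBMI_HOST IBMI_LUN_01
```

The preferred node for the volume is assigned automatically at creation, as described above.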
VIOS profile / IBM i profile — example installation steps:
– Activate the partition
– Start the System Management Services (SMS) menu
– Choose CD-ROM as the install/boot device
– After VIOS reboots, enter the user ID and password
– Accept the licenses
– Start disk mirroring of the rootvg volume group
– Set up network connectivity
– Update VIOS with the latest fix pack
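The post-install steps above map roughly to the following VIOS commands (a sketch only — the hostname, addresses, disk, and directory names are illustrative):

```
license -accept                        # accept the VIOS license
mirrorios hdisk1                       # mirror rootvg onto a second disk (VIOS reboots)
mktcpip -hostname vios1 -inetaddr 192.168.1.10 -interface en0 \
        -netmask 255.255.255.0 -gateway 192.168.1.1
updateios -dev /home/padmin/fixes -accept -install   # apply the latest fix pack
```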
Note:
The IBM i WWPNs will show in the switches and in the V7000 only after the IBM i IPL is started. Before IBM i is started, you need to enter the WWPNs manually.
Migrations
IBM i Cluster
[Diagram: a 2-node cluster — Node 1 and Node 2, each with a cluster resource group (CRG); cluster resources, the recovery domain, and the device domain span both nodes.]
IBM i Journaling
[Diagram: a program puts a new row into File A. The journal entry is written to the journal receiver (1) before the row is written to the database file (2) — "the DB is the last to know". File A and the journal receiver each have pages in main memory and on disk in the IASP or sysbas.]
To quiesce data in an IASP to disk, use the command CHGASPACT with the following parameters:
– ASP Device (ASPDEV)
specify the name of the IASP being quiesced, or *SYSBAS.
– Option (OPTION)
specify option *SUSPEND to quiesce the data in the IASP.
– Suspend timeout (SSPTIMO)
specify the duration of the timeout during which the system is quiescing the data to disk.
– Suspend timeout action (SSPTIMOACN)
specify the desired action at the end of the timeout if the system was not able to quiesce all data during the timeout.
Requires IBM i V6R1 or later
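A hedged example of the command as described above (the IASP name and timeout value are illustrative):

```
CHGASPACT ASPDEV(IASP1) OPTION(*SUSPEND) SSPTIMO(30) SSPTIMOACN(*CONT)
/* ... take the FlashCopy while writes are suspended ... */
CHGASPACT ASPDEV(IASP1) OPTION(*RESUME)
```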
FlashCopy Overview
FC mappings: FlashCopy occurs between a source volume and a target volume, which must be of the same size
FlashCopy options: multiple target, cascaded, incremental, reverse FlashCopy
Data is copied between source and target in units known as grains; each grain is 256 KB by default
Background copy
– The background copy rate is defined as a value of 0 - 100.
– The no-copy option is achieved with a background copy rate of 0. This disables background copying and provides pointer-based images for limited-lifetime uses.
– After the background copy is finished, the FlashCopy relationship disappears.
– Background copy rates can be different for each mapping.
Consistency groups
– Consistency groups preserve point-in-time data consistency across multiple volumes.
– They ensure that dependent writes are executed in the application’s intended sequence.
– The volumes within a consistency group are managed at the same time.
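As a sketch, the mapping and consistency-group concepts above might look like this on the Storwize CLI (volume and group names are illustrative):

```
# Group the mappings so all volumes are flashed at the same point in time
svctask mkfcconsistgrp -name IBMI_GRP
# Source and target must be the same size; copyrate 0 = no background copy
svctask mkfcmap -source SYSBAS_01 -target SYSBAS_01_FC -consistgrp IBMI_GRP -copyrate 0
svctask mkfcmap -source SYSBAS_02 -target SYSBAS_02_FC -consistgrp IBMI_GRP -copyrate 0
# Prepare and start the whole group atomically
svctask startfcconsistgrp -prep IBMI_GRP
```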
FlashCopy options
[Diagrams: multiple target, reverse, and incremental FlashCopy.]
Metro Mirror (across an IP WAN)
Creates a synchronous copy of data from a master volume to an auxiliary volume.
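A hedged sketch of creating and starting such a relationship on the CLI (the volume names are illustrative, and a partnership with the remote system, here called REMOTE_SYS, is assumed to exist already):

```
# Synchronous remote copy: master on the local system, aux on the remote system
svctask mkrcrelationship -master IASP_01 -aux IASP_01_MM -cluster REMOTE_SYS -name MM_IASP01
svctask startrcrelationship MM_IASP01
```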
Global Mirror
– Dependent writes to the primary are sent with a sequence number to the secondary site to ensure they are applied in the same order
– 80 ms roundtrip latency maximum
– Bandwidth sized for peak change rates
– RPO of seconds, non-configurable
Global Mirror with change volumes
[Diagram: host I/O to the primary; a change volume at each site, attached through FlashCopy mappings.]
– Initially copies all data from the primary volume to the secondary volume at the point in time when the GM relationship started
– Change volumes hold a point-in-time copy of the pieces of data (grains, 256 KB) that change during cycling mode (default 300 seconds)
– 80 ms roundtrip latency maximum
– Bandwidth sized based on the desired RPO
– RPO of minutes to hours
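The cycling-mode setup above might be sketched on the CLI as follows; this is an assumption-laden sketch (names illustrative, flag spellings from memory of the 7.x CLI — verify against the command reference for the installed code level):

```
# Asynchronous remote copy (Global Mirror) between two systems
svctask mkrcrelationship -master IASP_01 -aux IASP_01_GM -cluster REMOTE_SYS -global -name GM_IASP01
# Attach a change volume and enable multi-cycling mode with a 300 s cycle period
svctask chrcrelationship -masterchange IASP_01_CHG GM_IASP01
svctask chrcrelationship -cyclingmode multi -cycleperiodseconds 300 GM_IASP01
svctask startrcrelationship GM_IASP01
```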
[Diagrams: two FlashCopy setups — a production partition whose boot-from-SAN (BfS) LUNs are flash-copied to a backup partition, and a clustered production partition whose IASP is flash-copied to a backup partition.]
On IBM i V6R1 and later we can quiesce data to disk without powering down the production system.
[Diagrams: two disaster-recovery setups — full-system replication, where the production partition's BfS LUNs are mirrored with MM/GM to the disaster-recovery partition's BfS LUNs; and a clustered setup, where the production partition's IASP is mirrored with MM/GM to the disaster-recovery partition's IASP.]
Note: the tests in Mainz were done for functionality; performance was not tracked.
Setup steps:
– Create the IASP
– Create the copy descriptions and the Metro Mirror ASP session
– Generate SSH keys and start Metro Mirror
– On the production side, create the copy description and start the FlashCopy session
Further references
Example of IBM i Partition Migration from Virtual SCSI to NPIV attached IBM System
Storage SAN Volume Controller / IBM Storwize Family Systems
– http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD106089