TCI2743
© Hitachi Vantara Corporation 2018. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd., Content Platform Anywhere and Hi-Track
are trademarks or registered trademarks of Hitachi Vantara Corporation. All other trademarks, service marks and company names are properties of their respective
owners.
ii
Table of Contents
Introduction ............................................................................................................. xiii
Welcome and Introductions ..................................................................................................................... xiii
Course Description ................................................................................................................................. xiv
Prerequisites .......................................................................................................................................... xiv
Course Objectives ................................................................................................................................... xv
Course Topics ......................................................................................................................................... xv
Learning Paths ....................................................................................................................................... xvi
Resources: Product Documents .............................................................................................................. xvii
Collaborate and Share ...........................................................................................................................xviii
Social Networking —Twitter Site .............................................................................................................. xix
Hitachi Self-Paced Learning Library ........................................................................................................... xx
1. Overview........................................................................................................... 1-1
Module Objectives ................................................................................................................................. 1-1
HCP: Object-Based Storage .................................................................................................................... 1-2
Hitachi Compute Platform Basics ........................................................................................................ 1-2
What Is an HCP Object? .................................................................................................................... 1-4
Multiple Custom Metadata Injection ................................................................................................... 1-5
Internal Object Representation .......................................................................................................... 1-6
How Users and Applications View Objects ........................................................................................... 1-7
Hitachi Content Platform Evolution ..................................................................................................... 1-8
Introduction to Tenants and Namespaces ........................................................................................... 1-9
Swift: Another Way to Use Your Storage Pool ................................................................................... 1-10
HCP Configurations.............................................................................................................................. 1-10
Unified HCP G10 Platform................................................................................................................ 1-11
HCP G10 With Local Storage............................................................................................................ 1-12
HCP G10 With Attached Storage ...................................................................................................... 1-13
HCP G10 SSD Performance Option ................................................................................................... 1-14
HCP S Node ................................................................................................................................... 1-15
HCP S10 ........................................................................................................................................ 1-16
HCP S30 ........................................................................................................................................ 1-17
HCP S Node ................................................................................................................................... 1-18
HCP S Series Storage Principles ....................................................................................................... 1-19
RAID Rebuild Principles ................................................................................................................... 1-19
Introduction
Welcome and Introductions
Student Introductions
• Name
• Position
• Experience
• Expectations
Course Description
You will complete numerous hands-on lab activities designed to build the
skills necessary to integrate, administer and configure the key software
products for HCP solutions.
Prerequisites
Course Objectives
Course Topics
Learning Paths
Available on:
• Hitachivantara.com (for customers)
• Partner Xchange (for partners): https://partner.hitachivantara.com/
Please contact your local training administrator if you have any questions regarding Learning Paths, or visit your applicable website.
Resources: Product Documents
Documentation that provides detailed product information and future updates is available on the Hitachi Vantara Support Portal.
Resource Library: The site for Hitachi Vantara product documentation is accessed through:
https://support.hitachivantara.com/en_us/documents.html
Collaborate and Share
https://community.hitachivantara.com/welcome
Social Networking — Twitter Site
Site URL: http://www.twitter.com/HitachiVantara
Hitachi Self-Paced Learning Library
The Hitachi Self-Paced Learning Library is a subscription-based learning platform that gives you
the flexibility of accessing Hitachi Vantara training libraries for the Hitachi Vantara Solutions you
need.
• Guided demonstrations
Training is set up in a modular approach, allowing you to take an entire course or just a portion,
depending on the time available to you. You can easily access the library anywhere, anytime,
including your mobile device.
The Hitachi Self-Paced Learning Library is currently only available in the Americas.
• https://www.hitachivantara.com/en-us/services/training-certification/training.html#trainingDetail
• training@hitachivantara.com
1. Overview
Module Objectives
Page 1-1
Overview
HCP: Object-Based Storage
• Hitachi Content Platform (HCP) is a distributed storage system designed to support large, growing repositories of fixed-content data. HCP stores objects that include both data and metadata that describes the data. It distributes these objects across the storage space but still presents them as files in a standard directory structure. HCP provides a cost-effective, scalable, and easy-to-use solution to the enterprise-wide need to maintain a repository of all types of data, from simple text files and medical image files to multi-gigabyte database images. An HCP system consists of both hardware and software
• REST API – Representational State Transfer: stateless, using simple HTTP commands (GET/PUT/DELETE)
o It is used by HCP Anywhere, HDI, HCP Data Migrator, HNAS and most third-party middleware products to communicate with HCP
o Hitachi Vantara provides a REST API developer's guide – all available APIs are open and well documented
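As a sketch of how a client forms these REST requests: objects are addressed on the namespace's virtual host under /rest, and the Authorization header carries the Base64-encoded username plus the MD5-hashed password (per the HCP REST developer's guide). The hostnames and credentials below are hypothetical.

```python
import base64
import hashlib

def hcp_auth_header(username, password):
    """Build the HCP REST Authorization header value:
    'HCP ' + base64(username) + ':' + md5(password)."""
    user_b64 = base64.b64encode(username.encode()).decode()
    pwd_md5 = hashlib.md5(password.encode()).hexdigest()
    return "HCP {}:{}".format(user_b64, pwd_md5)

def object_url(namespace, tenant, domain, path):
    """Objects live under /rest/<path> on the namespace's virtual host."""
    return "https://{}.{}.{}/rest/{}".format(
        namespace, tenant, domain, path.lstrip("/"))

# Hypothetical example values: a PUT to `url` stores the object,
# GET retrieves it, DELETE removes it.
url = object_url("ns0", "tenant1", "hcp.example.com", "medical/xray1.jpg")
auth = hcp_auth_header("lgreen", "p4ssw0rd")
```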
Hitachi Compute Platform Basics
o With the S3 API, it is possible to use any S3 client software – it will work with HCP out of the box
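As an illustration of how an S3 client would address HCP's S3-compatible interface: the tenant forms the endpoint and a namespace plays the role of the bucket. This endpoint layout is an assumption drawn from HCP's HS3 documentation; the names are made up.

```python
def hs3_bucket_url(tenant, domain, namespace, key=""):
    """Path-style S3 URL against an HCP tenant endpoint.
    The namespace plays the role of the S3 bucket."""
    url = "https://{}.{}/{}".format(tenant, domain, namespace)
    return "{}/{}".format(url, key) if key else url

# Any S3 client pointed at this (hypothetical) endpoint should work.
print(hs3_bucket_url("tenant1", "hcp.example.com", "ns0", "reports/q1.pdf"))
```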
Comparing protocols:
• Network File System (NFS) and Common Internet File System (CIFS) are value-added protocols
o NFS and CIFS are good for migrations and/or application access
o NFS and CIFS do not perform as well as Hypertext Transfer Protocol (HTTP), the World Wide Web protocol
• Other protocols
What Is an HCP Object?
Fixed-content data (Data)
• Once it is in HCP, this data cannot be modified
System metadata
• System-managed properties describing the data
• Includes policy settings
Custom metadata (Annotations)
• The metadata a user or application provides to further describe an object
• An HCP object is a means of abstracting and insulating the data and metadata from the hardware (HW) and software (SW). This allows for great robustness and easy migrations to new hardware or software
• The object contains the actual data, system-generated metadata and custom metadata (annotations)
• This architecture allows for easy HW/SW upgrades and great scalability
• Object storage is a black box: users and administrators do not work with file systems, only with data containers. They do not know on which file system or volume a particular file/object is stored
Page 1-4
Overview
Multiple Custom Metadata Injection
Images such as X-rays and other medical scanning pictures have no content that can be searched other than a file name, but they can have embedded metadata such as billing details, doctor and patient information and other relevant details regarding the actual object. These details are invaluable for searching this type of content, as demonstrated in our Hitachi Clinical Repository solution.
An HCP object can be associated with multiple sets of custom metadata. That is why we talk
about multiple custom metadata injection.
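Each annotation is injected or read through the object's own URI. A hedged sketch of how such a request target is formed (the query parameter names follow the HCP REST custom-metadata interface; the host and annotation names are hypothetical):

```python
from urllib.parse import urlencode

def annotation_url(object_url, annotation):
    """URL for reading or writing one named annotation (custom metadata)
    attached to an object."""
    query = urlencode({"type": "custom-metadata", "annotation": annotation})
    return "{}?{}".format(object_url, query)

# PUT an XML body to this URL to inject the annotation; repeat with other
# annotation names for multiple custom metadata injection.
url = annotation_url(
    "https://ns0.tenant1.hcp.example.com/rest/medical/xray1.jpg", "billing")
```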
Internal Object Representation
[Figure: internal object representation – an external file's fixed-content data (Data) plus its system metadata stored in a region, for example: /xray1.jpg, vol 5, size 9999, shred=true, ...]
• HCP uses "regions" to distribute system metadata. By default there are eight regions per node, meaning eight chunks of the system metadata database per node
• A region stores a subset of the metadata; it is a collection of related tables stored in the DB
• Regions are distributed across nodes, so each node shares part of the load
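Conceptually, each object's system metadata lands in one region, and regions are spread across nodes. The hash scheme below is only a stand-in for HCP's internal mapping, to make the eight-regions-per-node idea concrete:

```python
import hashlib

REGIONS_PER_NODE = 8  # HCP default

def region_for(path, node_count):
    """Toy model: map an object path to (region, owning node) with
    REGIONS_PER_NODE regions per node, spread round-robin across nodes."""
    total_regions = REGIONS_PER_NODE * node_count
    digest = hashlib.md5(path.encode()).digest()
    region = int.from_bytes(digest[:4], "big") % total_regions
    return region, region % node_count

# On a 4-node system there are 32 regions, 8 owned by each node.
region, node = region_for("/xray1.jpg", node_count=4)
```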
How Users and Applications View Objects
Each object and annotation within HCP has its own URI path
Hitachi Content Platform Evolution
Hitachi Content Archive Platform (HCAP): The Archive Platform – release v2.6 and earlier
• Active archiving
HCP can adapt the way no other content product can. It has a chance to grow in the archive market and align to emerging markets such as the cloud. Think about active archiving: what actually is archiving, and what makes it active? Archiving means we are moving data from expensive, high-performance storage to somewhere it can be stored securely over long periods of time. This is different from backup, where we create redundant copies. HCP has many services that constantly work with data to ensure it is always healthy and securely stored. The HCP services are what make archiving active. The old HCAP used to be a simple box with no concept of multitenancy and no authentication options. The new HCP is a versatile and flexible storage system that offers multiple deployment options. HCP is undergoing very rapid development – new features are added every year, and these features bring significant improvements in terms of the possibilities the system can offer.
HCP always ensures backwards compatibility, meaning that even from the oldest system you can upgrade to the newest version.
Because of this, there are some legacy features in the system, namely: the default tenant, search node references, blade chassis references, and so on.
Introduction to Tenants and Namespaces
[Figure: an HCP system contains multiple tenants; each tenant contains namespaces NS 1 … NS N]
• Segregation of data
If you need to store data on HCP, you must create at least one tenant. The tenant manages its own users; tenant users cannot use their credentials to access the System Management Console. Tenants can create as many namespaces as the system owner allows in the System Management Console.
HCP supports access control lists (ACLs) that allow users to manage permissions at the object level.
Swift: Another Way to Use Your Storage Pool
• Applications using the Swift API can read from and write to HCP with no changes
• Increased utility
HCP Configurations
In this section you will learn about HCP configuration.
Unified HCP G10 Platform
• 2U server enclosure
• Redundant fans and power supplies (left rear SATA HDD/SSD cage included – not shown)
• G10 servers can be mixed with existing Hitachi Compute Rack (CR) 210H and CR 220S based HCP systems
HCP G10 With Local Storage
• Customers who purchase a local storage HCP G10 system with 6 internal hard drives can expand the internal capacity later by purchasing a "six-pack" upgrade. These six drives are installed in each applicable node and a service procedure is run to add them to the system. All RAID group creation, virtual drive creation, initialization and formatting is handled automatically – no manual configuration is required.
• HCP G10 is compatible with existing HCP 300 nodes and HCP S10 and S30 nodes.
HCP G10 With Attached Storage
HCP G10 is the replacement for the HCP 500 and HCP 500 XL models
• The OS is now always stored locally on the server's internal drives, not on the array (as it used to be in HCP 500). There is no requirement to set up boot LUNs on the HBA cards for attached storage systems. Online array migration is possible on HCP G10 nodes because the OS is stored on the internal drives
HCP G10 SSD Performance Option
• SSDs have been proven to eliminate performance degradation related to certain high-density usage patterns, like those addressed by the cloud-optimized namespace. Unlike cloud optimization, which only reduces the performance impact, SSDs can eliminate it and return a degraded system to like-new performance
• Postgres indexes are moved from HDD to SSD on SSD-equipped systems
• May also improve the performance of healthy systems; results are still to be characterized with services (TBD)
HCP S Node
Value proposition
• Addresses the need for commodity object storage
• Uses commodity hardware
• Value is in the S Series software
• Faster data rebuild times after HDD failure
• Optimized for any object size (small and large)
• Compatible with all HCP models
• Low-cost, self-service ready
• Ethernet-attached storage to facilitate capacity scaling
• Large-scale manufacturing of this hardware lowers the cost and as such brings higher value for the dollar
• The way the multi-patent-pending software enables the hardware capabilities sets us apart from the rest
• The software protects data faster after disk failures than traditional protection like RAID
• Our implementation of the new erasure code data protection is optimized for large and small objects
• Failed drives do not have to be replaced immediately, which reduces maintenance cost
• Maintenance procedures are dead simple and do not require training. The HCP S10 is ready for self-service
HCP S10
[Figure: HCP S10 enclosure with Controller 1 and Controller 2 connected through a mid-plane. Half populated = 168 TB (raw); fully populated = 336 TB (raw)]
• HCP S10 and S30 offer better data protection than offered by Hitachi Unified Storage
(HUS) and Hitachi Virtual Storage Platform (VSP) G family (20+6 EC versus
RAID-5/RAID-6)
• HCP S10/S30 licensing costs are lower than comparable array configurations per TB
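The trade-off behind 20+6 erasure coding versus RAID-6 can be checked with simple arithmetic. This sketch uses an 8+2 group as a representative RAID-6 layout (an assumption for comparison, not a statement about any specific array model):

```python
def usable_fraction(data_chunks, parity_chunks):
    """Fraction of raw capacity left for data in a data+parity group."""
    return data_chunks / (data_chunks + parity_chunks)

ec_20_6 = usable_fraction(20, 6)   # S series erasure coding: ~76.9% usable
raid6_8_2 = usable_fraction(8, 2)  # illustrative 8+2 RAID-6: 80% usable
# 20+6 EC tolerates 6 lost chunks per group; RAID-6 tolerates only 2 lost
# drives, so the S series trades a little capacity for far stronger protection.
```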
HCP S30
• HCP S30 has higher storage capacity (4.3PB usable) versus HUS/VSP G family
HCP S Node
• The software delivers highly reliable and durable storage from commodity hardware components
• Offers fast data re-protection for the largest HDDs available now and in the future
• Has self-optimizing features: the user does not have to be concerned with configuring, tuning or balancing resources (HDDs)
• Besides a fully capable web user interface, the HCP S10 can be entirely managed and monitored using MAPI
• Communication between generic nodes and the HCP S10 nodes is S3 protocol based, and as such is ready to be supported by other Hitachi Vantara products like HNAS (August 2015)
• HCP objects stored on HCP S10 fully support retention, WORM, versioning, compression and encryption
HCP S Series Storage Principles
HCP S Series Rebuild Principles
• Fixed-size extents: with small files, rebuild times do not increase and storage efficiency is not reduced
Direct Write to HCP S10/S30
Previously, HCP S10 was only a tiering target for HCP nodes.
Any HCP model with v7.2 software now supports direct write to HCP S10 or HCP S30.
• HCP G10b supports 10G front-end Ethernet networking and 1G back-end Ethernet
networking
• Excellent performance locally or with HCP S10/S30 versus attached storage (see
following slides)
• HCP S30 has higher storage capacity (4.3PB usable) versus HUS/VSP G family
VMware Edition of HCP
HCP v6.x and later support deployments on VMware ESXi 5.5 and 6.0
Benefits:
• Easy and fast deployment
• Aligns with VMware features
• No HCP hardware is needed
• Open Virtualization Format (OVF) templates are part of every new HCP SW version release
• Using OVF templates makes it faster to deploy HCP in VMware, as you do not have to create VMs manually, nor do you need to install the OS
• If you wish to deploy four virtual nodes, you must deploy the OVF template 4 times
• When you have the required number of virtual nodes, you can start the HCP application SW install
OpenStack KVM HCP-VM
Feature Overview
This section provides an overview of HCP features.
Nondisruptive Service
Self-protection
• Policies enforce object retention, authentication and object replication
Self-healing
• Architecture is resilient to drive/node failures with no impact to data integrity, and little to no impact to data accessibility/throughput
Self-configuration
• Simplified installation and integration by setting platform configurations through high-level policies
Self-balancing
• Adjusts load by monitoring the activity and capacity of all nodes
HCP has been designed never to lose data. In addition, high availability features are built in to make sure the user has continuous access.
Policies enforce data preservation and retention. The clustering software's ability to handle failures without impact is called self-healing, while recovery without effort is called self-configuration.
For continuous scaling, the cluster also provides automatic load balancing. The software looks for low-water-mark thresholds and then starts distributing data and work to other processors and storage.
As the customer adds more processing and storage, the clustering software automatically continues to take advantage of the additional resources.
When failed resources are replaced, the platform reconfigures and rebalances.
HCP Objects – Protected
Protection Concepts
Protection sets
Multipaths
HCP quorum:
Hitachi Content Platform is a cluster in that it has the same properties as a cluster (for example, heartbeat and voting for quorum), but it differs from the traditional term "cluster": HCP handles read/write requests dramatically differently from traditional clusters. The quorum is the minimum number of nodes required to initially start the platform or keep it going (the rule: 50% + 1).
HCP is a clustered system and will continue to run to the best of its ability in light of hardware failures. Once fewer than (N/2 + 1) nodes remain up, the system can no longer maintain quorum and is forced into read-only mode for ALL data. It is still partially functional.
As hardware fails, HCP will try its best to keep all its data hot and available. A lot depends on the customer-defined data protection level (DPL). For instance, if you have only 1 copy per namespace and both servers that manage the storage where that copy lives go down simultaneously, you will have data unavailability (DU). The cluster as a whole may still be running, but access to some data will be gone until one of the two is brought back online. Customers can reduce the probability of DU in this particular case by increasing the DPL, at the cost of usable storage.
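The 50% + 1 rule above can be expressed directly in code; this is just a restatement of the rule, not an HCP API:

```python
def quorum(node_count):
    """Minimum nodes that must remain up: 50% + 1 (integer arithmetic)."""
    return node_count // 2 + 1

def tolerable_failures(node_count):
    """Node losses the cluster absorbs before dropping to read-only mode."""
    return node_count - quorum(node_count)

# A 4-node system needs 3 nodes up and so tolerates 1 loss;
# an 8-node system needs 5 up and tolerates 3 losses.
assert quorum(4) == 3 and tolerable_failures(4) == 1
assert quorum(8) == 5 and tolerable_failures(8) == 3
```

This also shows why larger clusters ride out more concurrent faults, as the notes above describe.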
Protection Concepts
As nodes go down, the system will strive as hard as it can to repair itself. For instance, when one node goes down, the system will try to automatically create a backup copy of the metadata somewhere else on the cluster where there is healthy, running hardware. Simultaneous failures limit the system's ability to heal itself: if both nodes that host the metadata for a group of objects go down simultaneously, you will have data unavailability (DU), since the cluster will not have any "live" copy from which to create new metadata backups or to promote.
HCP can and will self-heal itself to the best of its ability as hardware faults occur, but
concurrent faults will limit its ability to keep ALL data available online at ALL times. The more
nodes you have the higher the probability that the cluster can take hits and keep the entire
system and its corresponding data available.
Protection sets:
• To improve reliability in the case of multiple component failures, HCP tries to store all the copies of an object in a single protection set. For example, if an HCP system has 6 nodes, it creates 3 groups of protection sets
• Each namespace uses the group of protection sets that corresponds to its DPL
Zero copy failover and the multipath protection concept apply to HCP G10 with attached storage only.
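The 6-node example above can be sketched as grouping nodes into sets whose size matches the DPL. This is a simplified model for illustration; HCP's actual set assignment is internal:

```python
def protection_sets(nodes, dpl):
    """Group nodes into protection sets of `dpl` members each; all copies
    of an object are kept within one set."""
    if len(nodes) % dpl != 0:
        raise ValueError("node count must be a multiple of the DPL")
    return [tuple(nodes[i:i + dpl]) for i in range(0, len(nodes), dpl)]

# 6 nodes with DPL 2 -> 3 protection sets, matching the example above.
sets = protection_sets([1, 2, 3, 4, 5, 6], dpl=2)
assert sets == [(1, 2), (3, 4), (5, 6)]
```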
Zero Copy Failover
If one node fails, the other node in a cross-mapped pair can access the volumes.
[Figure: Node 1 and Node 2, each with Port 0A, mapped to host groups HG 000 and HG 001. HG = Host Group]
Zero Copy Failover (ZCF) is also known as Data Access Path (DAP).
Data Encryption
Time Settings Compliance Mode
Time compliance mode does not allow anybody to make any changes to the time settings in the GUI
o Time compliance mode does not allow time changes on the system
o Internal clocks
If somebody with the service role accidentally or intentionally changes the time – for example, 10 years ahead into the future – the system will accept the setting, and files with retention offsets shorter than ten years will no longer be protected: their retention expires and the disposition service starts deleting files. This falls outside the scope of legal compliance. NTP is recommended together with time compliance mode. Furthermore, it is recommended that multiple NTP servers are specified during or after installation.
Compliance Features
This section covers compliance features.
Retention Times
Source: ESG
While government regulations have a significant impact on content archiving and preservation
for prescribed periods, compliance does not necessarily require immutable or Write Once, Read
Many (WORM)-like media. In many cases, the need for corporate governance of business
operations and the information generated are related to the need to retain authentic records.
This requirement ensures adherence to corporate records management policies, as well as the
transparency of business activities to regulatory bodies. As this chart illustrates, the retention
periods for records are significant, from two years to near indefinite.
Regulatory Compliance
• Note that Enterprise mode is always the default when you create a namespace
• If you wish to use Compliance mode, the "Retention Mode Selection" feature must first be enabled on the tenant; the setting then becomes visible when creating/modifying a namespace
Retention Mode Selection for Tenants
• To use certain features at the namespace level, these features must first be enabled for the tenant. Once you allow the tenant to use a feature, you cannot remove this permission; the tenant can then use the feature freely
• If the feature is not enabled for a tenant, all its namespaces will be created in Enterprise mode
Change Retention Mode for Namespace
• Once you switch to Compliance mode, there is no going back to Enterprise mode.
Always consider switching to compliance mode carefully as there is no service procedure
that can remove WORM data stored in a namespace with compliance mode enabled
Reviewing Retention
Retention hold: A condition that prevents an object from being deleted by any means or
having its metadata modified, regardless of its retention setting, until it is explicitly released.
Default Retention Setting
Note: If you change the default retention setting, the new setting will not automatically propagate to objects that were stored earlier; the retention setting is part of each object's metadata. If you wish to change the retention setting of existing objects (for example, from "initially unspecified" to "offset"), you need to use an HCP Tools script to modify the metadata of all existing objects, which can be performance intensive. Before you start using the HCP system, you should have a clear idea of what kind of data you want or need to store, and with what retention settings.
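This per-object behavior can be illustrated with a toy calculation: an offset-style retention is evaluated against each object's own ingest time, so changing the namespace default later cannot move an existing object's expiry. The day-based offset here is a simplification, not HCP's exact retention-offset grammar:

```python
from datetime import datetime, timedelta

def retention_expiry(ingest, offset_days):
    """Expiry = the object's own ingest time plus the offset in force
    when the object was stored."""
    return ingest + timedelta(days=offset_days)

# An object stored in 2017 under a 365-day offset keeps its 2018 expiry
# even if the namespace default is later raised to, say, 10 years.
old = retention_expiry(datetime(2017, 1, 1), offset_days=365)
assert old == datetime(2018, 1, 1)
```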
Privileged Delete or Purge
Privileged delete allows the ability to perform an audited delete, even if the object is under retention
Privileged purge allows a compliance user to delete all versions of an object
Privileged delete or purge is not allowed for objects under retention hold
Privileged deletes are logged
• If you have the compliance role on your account, you can also perform a privileged delete using other gateways to the data – for example, Common Internet File System (CIFS) or HTTP (Data Migrator/curl)
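Over the REST gateway, a privileged delete is an ordinary DELETE with extra query parameters carrying the audited reason. The parameter names below follow the HCP REST interface but should be treated as an assumption to verify against your version's developer's guide; the host is hypothetical:

```python
from urllib.parse import urlencode

def privileged_delete_url(object_url, reason, purge=False):
    """DELETE target for an audited privileged delete; `reason` is logged.
    Setting purge requests removal of all versions of the object."""
    params = {"privileged": "true", "reason": reason}
    if purge:
        params["purge"] = "true"
    return "{}?{}".format(object_url, urlencode(params))

url = privileged_delete_url(
    "https://ns0.tenant1.hcp.example.com/rest/medical/xray1.jpg",
    reason="court order 42-B")
```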
Policies and Services
Policies
• Settings that influence transactions and services on objects
• Set at the object or namespace levels
• DPL, indexing, retention, shredding and versioning
Services
• Background processes that iterate over objects
• Services run according to service schedule
• Enable or disable, start or stop at the system level
DPL: Since HCP v7, DPL is configured and managed as a service plan
• Indexing:
• If you wish to use the Metadata Query Engine (MQE), the built-in search console, you need to enable indexing on the namespace you want to search through
o If you plan to use HDDS, you don't have to use indexing; HDDS does that on its own
• With indexing, you need to decide where to put the index database; you have 3 options:
o Shared volume – the default option. One of the data LUs on each node will become a shared volume; it will hold both user data and the index database
o IDX-only LU – you can dedicate volumes to hold the index database; you need to use specific Host Logical Unit Number (H-LUN) numbers for mapping and cross-mapping
o HCP 500 XL – the index database is stored on internal disks; this is the best option if you plan to use MQE intensively or if you share the back-end storage system with other applications/Hitachi Vantara products
Note: These options are available only for HCP 500; on HCP 300, the only place where you can store the index database is on a shared volume.
Policies and Services
HCP services are responsible for optimizing the use of system resources and maintaining the integrity and availability of the stored data
• Services work on the repository as a whole; that is, they work across all namespaces
• For example, on a 4-node HCP the default region count is 32, which is 8 regions per node. It takes 8 runs for a service to process the entire repository
Services
You can disable or start specific services using the Services panel of the System Management Console Overview page, if you have the Service role
Disable a service
Start a service
Note that you cannot simply modify a schedule; if you want to modify it, you need to create a new schedule.
Service Descriptions
A customer wrote all his data with a 1 minute retention period and then, out of curiosity,
enabled the disposition service. He was a little upset when all his data was removed overnight!
Protection
• Maintains the set level of data redundancy, as specified by the DPL for each namespace
• Can be set to maintain 1 to 4 internal copies depending on the value of the data
Scavenging
• Ensures objects have valid metadata by detecting and repairing violations
Indexing
• Prepares objects to be indexed and found by specified criteria through the Search console
• Continually processes new objects and metadata changes
Replication
• Creates copies of objects to another system for recovery
Replication Verification Service
• Replication Verification Service (RVS) checks that objects are being properly replicated as specified in the service plan
Geographically Distributed Data Protection
Policy Descriptions
Retention
• Prevents file deletion before the retention period expires (for example, until May 21, 2036)
• Can be set explicitly or inherited
• Deferred retention option
• Can set a retention hold on any file
Shredding
• Ensures no trace of a file is recoverable from disk after deletion
Versioning
• A new object version is created when data changes
Indexing
• Determines whether an object will be indexed for search
Custom metadata XML checking
• Determines whether HCP allows custom metadata to be added to a namespace if it is not well-formed XML
Write Seldom Read Many (WSRM)
Service plans
• Tiering policies: tier to spin-down storage (HUS only), tier to cloud services, tier to HCP S10 and HCP S30, tier to NFS, tier to replica (metadata only)
• To set a retention policy, we need to set the retention mode and retention method. Retention settings apply to new objects; to change retention settings for existing objects, it is necessary to overwrite their system metadata
• Custom metadata XML checking is turned off by default. With large custom metadata, it may slow down the system
• Shredding is an on/off setting. When enabled, deleted data is securely shredded
• Versioning is an on/off feature. You can configure automated version pruning – automated deletion of old versions. Versioning cannot work when CIFS/Network File System (NFS) access to a namespace is enabled
Module Summary
Module Review
2. Hardware Components
Module Objectives
Page 2-1
Hardware Components
HCP Components
This section describes the hardware components of HCP G10, HCP S10 and HCP S30.
2U server enclosure
Page 2-2
Hardware Components
HCP G10 Optional/Future Hardware
Page 2-3
Hardware Components
HCP G10 Ethernet Networking Options
All HCP G10 nodes (local or attached storage) can support 1G and 10G
networking with the following options:
Description                              Front-End           Back-End
2x10G motherboard, one 2x10G PCIe        1GbE/10GbE BASE-T   1GbE BASE-T
2x10G motherboard, one 2x10G PCIe        10GbE SFP+          1GbE BASE-T
2x10G motherboard, one 2x10G PCIe        10GbE BASE-T        10GbE SFP+
Two 2x10G PCIe (motherboard unused)      10GbE SFP+          10GbE SFP+
10GbE front-end with 1GbE back-end is optimized for HCP S node integration. HCP S nodes
support only 10GbE interface.
Page 2-4
Hardware Components
HCP G10 1/10Gb BASE-T FE/1G BASE-T BE
Bonding will take place across the motherboard and PCIe card
slots/ports as shown in the diagrams (primary and secondary
connections are split across the two cards)
• RED = FE = Front-end
• BLUE = BE = Back-end
• PRI = Primary connection
• SEC = Secondary connection
Page 2-5
Hardware Components
HCP G10 10Gb BASE-T FE/10G SFP+ BE
Bonding will take place across the motherboard and PCIe card
slots/ports as shown in the diagrams (primary and secondary
connections are split across the two cards)
Page 2-6
Hardware Components
HCP G10 10Gb SFP+ FE/10G SFP+ BE
Bonding takes place across the two PCIe cards as shown (primary and
secondary connections are split across the cards)
• RED = FE = Front-end
• BLUE = BE = Back-end
Page 2-7
Hardware Components
Back-End Ethernet Switches
There is one new back-end Ethernet switch option available with HCP
G10
Page 2-8
Hardware Components
Fibre Channel Networking
Note: HCP G10 nodes use 8Gb/sec Fibre Channel PCIe cards,
so effective speed per port is 8Gb/sec, not 16Gb/sec
Customer-supplied switches can be used if they are approved for use with HCP.
The customer can order their HCP G10 nodes with a pair of 400GB or
800GB SSDs
• Optional 2.5” SSD drives are located at the rear of the server
Page 2-9
Hardware Components
Racked and Rackless
HCP G10 systems (Local and Attached) can be ordered with or without
racks
Rackless systems will arrive at the customer site fully configured, but
will need to be racked in the customer supplied equipment
• In the past, all HCP systems were shipped with a rack by default
Page 2-10
Hardware Components
HCP S10 Node
Hardware features
• 4U enclosure for 60 HDDs, with 2
Intel-based servers with mirrored SSDs for the
software
• High-availability design
• Hot-swappable HDDs, servers and power
supplies
• S10 nodes are connected to standard HCP nodes over the front-end (virtual) networks
• The commodity hardware base is a 4U enclosure that houses up to 60 3.5” HDDs and two
Intel-based servers with SSDs for the software
• Each server has one CPU with six cores and 32GB memory
• The major components can be replaced when the unit is under power and in production use
• Generic HCP nodes and HCP S series nodes communicate with each other over the front-end
network, on a secure separate VLAN if so desired
• Each server in the HCP S10 node has two SFP+ 10GbE network ports and one RJ45 1GbE
management port
o Each server must have one SFP+ 10GbE connected to the (front-end) network switch
Page 2-11
Hardware Components
HCP S30 Node – Server Module
The HCP S30 will use the 2U1N Hitachi Vantara server for the server
heads
Front view:
• That is 160 server modules and 160 racks with approximately 460PB capacity
Page 2-12
Hardware Components
HCP S30 Node – Server Module
Diagram: numbered network ports (1-4, 5-8), Management and Service
ports, server BMC ports, and the interconnect links between the two
server modules.
Back-end networking:
• Service port is used for engineer access with laptop and LAN cable
Page 2-13
Hardware Components
HCP S30 Node – Enclosure Unit
Front-end networking
A fully populated tray is very heavy – approximately 110 kg. Two persons are required to
perform assembly.
Page 2-14
Hardware Components
HCP S30 Node – Enclosure Unit
Enclosure components (exploded view): lock screws, baseboard PCB,
power and cooling modules, power distribution board, rail kit,
alignment guides, and 12G SAS I/O modules.
The first 4 trays (1-4) are directly connected from both server modules
using 12Gb/sec mini-SAS HD cables
The second 4 trays (5-8) are chained from trays 1-4 using 12Gb/sec
mini-SAS HD cables
The third 4 trays (9-12) are directly connected from both server modules
using 12Gb/sec mini-SAS HD cables
The fourth 4 trays (13-16) are chained from trays 9-12 using 12Gb/sec
mini-SAS HD cables
Page 2-15
Hardware Components
Module Summary
Module Summary
Module Review
1. What is the purpose of optional SSDs and where are they located?
5. What is Supercap?
Page 2-16
3. Network Configuration
Module Objectives
• Integrate Hitachi Content Platform (HCP) with Domain Name System (DNS)
Page 3-1
Network Configuration
Network Interfaces
Network Interfaces
In this section, you will identify network interfaces.
Networking
Use colour-coded LAN cables for HCP network integration. An HCP system is shipped with red
and blue cables to support the back-end network. For the front-end network, obtain
yellow and green cables yourself.
• Presently, the front-end network is used for all data traffic as well as for replication and
management access. VLANs can be used to separate management, data and
replication traffic
• VLAN setup and advanced networking are discussed in detail in the implementation
course
Page 3-2
Network Configuration
LAN Connections Review
Interfaces are always bonded – that means you always have two IP addresses per node (back-
end, front-end) even though there are four physical ports.
Public LAN
System LAN Primary
connection
Node 1 Node 2 Node 3 Node 4
Secondary
connection
Page 3-3
Network Configuration
DNS Configuration
DNS Configuration
This section covers information on DNS configuration.
DNS Service
• DNS is a network service that translates, or resolves, domain names (for example,
example.com) into IP addresses for client access. The service is provided by one or
more servers, called name servers, that share responsibility for resolving client requests.
The domain names resolved by DNS are divided into zones, where each zone is defined
by a set of related hostnames
• Configure the HCP subdomain for DNS to enable access to HCP by its system name
• HCP automatically generates name server records for all storage nodes in the system.
Each storage node stores a copy of these records
• Before HCP can accept client requests, we need to register all of these HCP storage
nodes as master name servers for the HCP secondary zone
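The naming scheme above can be sketched in shell. The system, tenant, and namespace names below are hypothetical, and the dig commands are shown for illustration only, not executed:

```shell
# Hypothetical names: hcp.example.com is the HCP system domain that the
# corporate DNS delegates to the HCP storage nodes.
system="hcp.example.com"
tenant="acme"
namespace="medical"

# Clients address a namespace by its fully qualified domain name:
fqdn="${namespace}.${tenant}.${system}"
echo "$fqdn"    # medical.acme.hcp.example.com

# To verify the delegation, one would query the zone's name servers and
# then resolve the namespace name (illustrative only):
#   dig +short NS "$system"
#   dig +short "$fqdn"
```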
Page 3-4
Network Configuration
Name Resolution
Name Resolution
• Network entities are addressed by their Fully Qualified Domain Names (FQDNs), while
network communication is based on IP addresses
But:
Page 3-5
Network Configuration
Name Resolution – Best Practice
If HCP is replicated:
• Configure HCP to replicate its domains and certificates
• Add the remote HCP's IP addresses to the end of the list of masters for each
zone configured in the corporate DNS
Page 3-6
Network Configuration
Shadow Master Functionality
Page 3-7
Network Configuration
DNS Notify
DNS Notify
• Prerequisites:
o DNS needs to be enabled for HCP system via the SMC > Configuration > DNS
Settings page
• As of HCP v6.1, this option is available for the customer to configure and use
Page 3-8
Network Configuration
VLAN Configuration
VLAN Configuration
This section covers information on VLAN configuration.
A concept of partitioning a physical network so that distinct broadcast domains are created.
HCP can integrate with up to 200 VLANs, where each one maps to a Network in HCP terms.
Page 3-9
Network Configuration
Network Segregation
Network Segregation
Page 3-10
Network Configuration
SMC Network Configuration
Page 3-11
Network Configuration
Create Network – Step 2 : IP Configuration
IPv4 Configuration
IPv6 Configuration
IP Mode
• If the HCP system is enabled for Dual Stack mode, each network may be configured for Dual
Stack, IPv4 only, or IPv6 only
• [hcp_system] network must be configured with IPv4 and IPv6 settings as required by
virtual networks
IPv4 Configuration
• Gateway
• Netmask
IPv6 Configuration
• Gateway & Prefix Length for IPv6 address (primary & required)
Page 3-12
Network Configuration
Create Network – Step 3 : Review
1. Review Settings. SMC > Configuration > Networks > Create Network
2. Review IP Configurations.
Note: A “Network has no node IP address” error will be displayed until node IPs are properly
configured.
Page 3-13
Network Configuration
SMC Network View
With an HCP system in dual stack mode, each network can be configured
for IPv4 only, IPv6 only, or dual stack.
As a convenience, HCP v7.0 provides the ability to auto-calculate IPv6
addresses since they can be cumbersome to enter manually.
Page 3-14
Network Configuration
SMC Node View
SMC > Configuration > Networks > Network View > Settings >
Downstream DNS Configuration
Page 3-15
Network Configuration
Link Aggregation and IPv6 Support
If a VLAN is created, some of the functions of the [hcp_system] network can be serviced
through the VLAN.
Link Aggregation
You will get information on link aggregation and IPv6 support.
Page 3-16
Network Configuration
Link Aggregation
The active-active link aggregation requires a single front end switch for
both ports
• HCP currently provides active-passive bonding for the front-end interface. This means
that HCP can take advantage of only a single 1GbE network port's performance. This
feature allows the customer to configure the front-end interface for active-active
bonding using 802.3ad
• This setting affects all the nodes in the system and cannot be done on a node-by-node
basis
• The active-active link aggregation requires a single front end switch for both ports
o This will reduce some of the high-availability capability since a single switch
failure results in loss of connectivity
• The customer’s switch must also support 802.3ad to take advantage of the active-active
bonding. However, the active-active bonding provides failover capability if a single link is
lost
Page 3-17
Network Configuration
IPv4 Running Out Of Room
Page 3-18
Network Configuration
Authentication With AD
Authentication With AD
• Support for dual stack (IPv4 or IPv6), native IPv4, and native IPv6 operations
• HCP supports all applications during the migration, regardless of which IP version they
support
• SNMP, access protocols (CIFS, HTTP, NFS, SMTP), and secure HTTPS access over IPv6
• SSH
• IPv6 increases IP address size from 32 bits to 128 bits, providing 340 undecillion
(approximately 3.4 x 10^38) addresses
Page 3-19
Network Configuration
Authentication With AD
• Better built-in security – authentication, encryption, and protection at the network layer
• True end-to-end connectivity – no need for network address translation (NAT), and
triangular routing is eliminated
• Active Directory
• DNS server
• RADIUS server
• Time server
Page 3-20
Network Configuration
Support for Active Directory: Introduction
What is it?
• It enables customers to perform their HCP user administration in Active
Directory and use it for HCP user/account authentication
• It merges management users and data access accounts into one user to
facilitate a consistent security experience
Benefits
• Allows customers to comply with corporate security policies and
procedures
• Includes HCP in a pool of devices that support single sign-on for users
• Has a single repository of users to access multiple HCPs
• Manage roles and access based on groups
Page 3-21
Network Configuration
Active Directory: Configuration
Feature Details:
Page 3-22
Network Configuration
Active Directory: Groups
Domain name and domain user credentials are required to make the connection.
Page 3-23
Network Configuration
Module Summary
Module Summary
• Integrate Hitachi Content Platform (HCP) with Domain Name System (DNS)
Module Review
Page 3-24
4. Administration
Module Objectives
Page 4-1
Administration
HCP Consoles
HCP Consoles
This section covers information on HCP consoles.
System
User https://172.25.1.59:8000
Account
https://10.0.0.59:8000
https://admin.hcp.hitachi.com:8000
System Management Console
Tenant
User
Account
https://t-name.hcp.hitachi.com:8000
Tenant Management Console https://t-name.hcp.hitachi.com:8888
Tenant Search Console
https://ns-name.t-name.hcp.hitachi.com
Namespace
Tenant User
Account
• The System Management Console can be accessed via any front-end or back-end IP address,
provided you specify port 8000
Page 4-2
Administration
System Management Console
Tenants:
• Create new tenants
• View/edit tenant details
Security:
• Permissions
• Domains and certificates
• Network Security
• Console Security
• MAPI
• Search Security
• Users
• Authentication
• RADIUS
Configuration (requires Service role):
• Branding
• DNS
• Miscellaneous
• Monitored Components
• Networks
• Time
• Upgrade
Services:
• Schedule
• Compression
• Content Verification
• Duplication Elimination
• Garbage Collection
• Replication
• Search
• Shredding
Monitoring:
• System Events
• Resources
• Syslog
• SNMP
• Email
• Charge Back
• Internal Logs
• Tenant, storage and service management is visible only to users with the admin role
Page 4-3
Administration
Tenant Management Console
• View/edit namespace
o Overview
Tenant admin manages tenant user accounts, data permissions and namespaces.
Page 4-4
Administration
Namespace Browser
Namespace Browser
The Namespace Browser is very good for seeing what is in the namespace, although it is not
very useful for uploading data, as you can upload only one file at a time.
System Users
This section describes system users and their roles and permissions.
Page 4-5
Administration
User Roles: System Management Console
Monitor role
Administrator role
Security role
Compliance role
Search role
Service role
Monitor role
• Grants permission to use the System Management Console to view the system status
and most aspects of the platform configuration
Administrator role
• Grants permission to use the Administration Console to view the system status and
perform most platform configuration activities
Security role (the only role of the default starter account after a system build)
• Grants permission to use the System Management Console to view the system status
and create and manage user accounts
• Cannot perform platform configuration activities reserved for users with the
administrator role
Page 4-6
Administration
User Roles: System Management Console
Compliance role
Search role
Service role
• Grants permission to use the System Management Console to view the HCP status and
perform advanced system reconfiguration and management activities
Page 4-7
Administration
User Authentication
User Authentication
RADIUS Authentication
OpenStack Keystone
When logging in to one of the Hitachi Content Platform (HCP) consoles or APIs, the user needs
to be authenticated by one of the following methods.
User authentication can be local, remote (RADIUS), or Active Directory.
• Local Authentication (by HCP)
o The user’s password is stored in the platform
o HCP checks the validity of the login internally
• RADIUS Authentication
o HCP securely sends the specified username and password to a RADIUS server for
authentication
o The RADIUS server checks the validity of the login and sends the result back to
the platform
o HCP allows user access to the target console or API
• Active Directory (AD)
o HCP securely sends the specified username and password to AD for
authentication
o If the credentials are valid, HCP allows user access to the target console or API
• OpenStack Keystone
o A Keystone Authentication Token service was introduced with Hswift API and can
be used when HCP solution is integrated with OpenStack
Page 4-8
Administration
Starter Account
Starter Account
• You can delete this account after creating another locally authenticated account with the
security role
• HCP enforces the existence of at least one locally authenticated security account at all
times
Tenant Users
This section describes tenant users and their roles and permissions.
Page 4-9
Administration
Tenant-Level Administration
Tenant-Level Administration
Tenants, except the default tenant, have their own administrative user
accounts for access to the Tenant Management Console
• Tenants, except the default tenant, have their own administrative user accounts for
access to the Tenant Management Console
• HCP system-level users with the monitor, administrator, security, or compliance role
automatically have access to the Tenant Management Console for the default tenant
o The default tenant does not have administrative users of its own
o This enables system-level users with the monitor or administrator role to log into
the Tenant Management Console for that tenant, or to access the Tenant
Management Console directly from the System Management Console
o For the default tenant, this access is enabled automatically and cannot be
disabled
Page 4-10
Administration
Tenant User Account
Tenant
Page 4-11
Administration
Data Access Permissions Example
Tenant
• Browse – Will allow Namespace Browser login but will not allow any other operation,
including read or write
• Delete – Allows the user to delete a file which is not Write-Once, Read-Many (WORM)
protected and does not have multiple versions
• Purge – Allows the user to delete all versions of a file which is not WORM protected and has
multiple versions
• Privileged – Allows the user to perform privileged delete operation for WORM protected files
in a namespace running in Enterprise mode for which the user must also have compliance
management role
• Search – Allows the user to log in to the tenant Search Console and perform search operations
• Read ACL and Write ACL – Active Directory related; further information can be found in the
documentation
• Allow namespace management – used with third-party HS3 API clients
Page 4-12
Administration
Permission Masks
Permission Masks
In this section, you will learn how to apply permission masks and register new storage
components.
Page 4-13
Administration
Permissions Classifications
Permissions Classifications
• System user with security role can enable/disable permissions across HCP
Page 4-14
Administration
System-Wide Permission Mask
System
If you disable delete operations using the System-Wide Permission Mask, all delete operations
will be disabled for all tenants and their users. The system can be put into read-only mode here
by disabling all write and delete operations.
Page 4-15
Administration
Tenant Permission Mask
Tenant
A tenant user with the Security role can edit the permissions across all tenants
The tenant permission mask can override tenant users' permissions. If you disable write
operations using the Tenant Permission Mask, no user of this tenant will be able to write data.
Other tenants will not be affected by this change.
Page 4-16
Administration
Namespace Permission Mask
Tenant
The Namespace Permission Mask allows you to disallow certain operations for all namespace
users. For example, you can disable delete operations using the Namespace Permission Mask
for all tenant users. Other namespaces within the tenant will not be affected.
Tenant
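The layering above amounts to an intersection: an operation is allowed only if it appears in the system-wide mask, the tenant mask, the namespace mask, and the user's own permissions. A minimal shell sketch of that rule; the operation sets below are hypothetical:

```shell
# Effective permissions = user permissions filtered through every mask.
system_mask="read write delete purge search"
tenant_mask="read write search"
namespace_mask="read write search"
user_perms="read write delete"

effective=""
for p in $user_perms; do
  # Keep the operation only if every layer allows it.
  case " $system_mask "    in *" $p "*) ;; *) continue ;; esac
  case " $tenant_mask "    in *" $p "*) ;; *) continue ;; esac
  case " $namespace_mask " in *" $p "*) ;; *) continue ;; esac
  effective="$effective $p"
done
effective="${effective# }"
echo "$effective"    # read write
```

Here "delete" is dropped because the tenant mask does not grant it, even though the user and the system-wide mask both allow it.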
Page 4-17
Administration
Storage Component Administration
Storage Overview
• Metadata-only statistics – metadata-only object count and bytes saved
Storage Components
Page 4-18
Administration
Storage Component Advanced Options
• These options may be used to tweak various settings which are as follows:
Page 4-19
Administration
Storage Pools
Storage Pools
A Service Plan is a tiering policy. Multiple service plans can be created. Each namespace can be
configured with a service plan.
Page 4-20
Administration
Service Plan Assignment and Utilization
Page 4-21
Administration
Service Plan Wizards – Tier Editor
The number of data and metadata copies that should be held on different tiers can be set up here.
Data will be tiered based on:
• Number of days the files were not accessed
• Threshold – after certain utilization of primary storage is reached
o It is important to emphasize that the percentage (%) set as the Threshold is with
respect to the total utilization of the entire initial Tier (for example, of Primary
Running) to which the Service Plan will be applied. It is not with respect to the
Quota of the Namespace to which the Service Plan will be applied
• When you start an active-passive replication of a namespace with a configured
Service Plan (other than the default Service Plan), a Service Plan with the same name as
in the originating HCP (although its definition may differ) must exist on the HCP that is
the destination of the replica
• If the replication is active-active, the same Service Plan name should exist in both
HCPs
• It is not recommended to apply a Service Plan that migrates the data to an HCP S series
in a namespace with CIFS or NFS enabled
• Any Service Plan change in a namespace with data can mean the movement of a lot of
information
• Combination of both
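A worked example of the Threshold rule above, with hypothetical numbers: the percentage is measured against the whole initial tier, not against any one namespace's quota.

```shell
# Hypothetical capacity figures for the initial tier (Primary Running).
primary_total_gb=10000     # total capacity of the entire tier
primary_used_gb=7800       # current utilization across ALL namespaces
threshold_pct=75           # Threshold configured in the service plan

used_pct=$(( primary_used_gb * 100 / primary_total_gb ))
echo "tier utilization: ${used_pct}%"    # tier utilization: 78%

# Tiering is driven by the tier-wide figure, even if one namespace is
# far below its own quota.
if [ "$used_pct" -ge "$threshold_pct" ]; then
  echo "threshold crossed: eligible objects move to the next tier"
fi
```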
Page 4-22
Administration
Service Plan Wizards – Import Creation
Data can be imported from existing pools and service plans which simplifies the configuration
process.
Storage Reports
Granular control
• Day, hour, or total reporting intervals
• Limit start/end dates
• Chargeback UI updated to include
the same features
Page 4-23
Administration
Storage Retirement
Storage Retirement
Retirement options:
• Entire Pool
• Entire Component
• Specific Volume
Page 4-24
Administration
HCP S10 and HCP S30 Nodes
• The HCP S10 nodes are added to the HCP from the HCP hardware page
• When you click Add Node in the HCP S series nodes section, an add node wizard starts
• In a few steps, you complete the process: an HCP S10 storage component is created
and added to an existing HCP S10 storage pool, or a new HCP S10 storage pool is
created
• After this is completed, the user adds the HCP S10 storage pool to the service plan for
one or more namespaces (if not already done)
• On the S node's detail page, there is a link to log in to the individual node to perform
maintenance procedures
Page 4-25
Administration
HCP S10 Node – Manage S Nodes
• The HCP S series node console provides more detailed status information
• When a disk fails, you can start a maintenance procedure to replace the disk
Page 4-26
Administration
HCP S Series Storage – Ingest Tier
This feature allows data to be ingested directly to either HCP S10 or HCP S30 storage.
Before v7.2, this process was done in two steps: first ingesting data to Primary Running
storage, then having the tiering service transfer the data to the S Series. Now the data no
longer needs to land on Primary Running first and can be passed directly to the S Series by HCP.
This eliminates the tiering backlog, which can be a major bottleneck in systems that see heavy
traffic. In addition, the storage space needed on Primary Running is greatly reduced because
data is never stored there.
Page 4-27
Administration
Write Through to S Series Storage
+
Great Performance
• No Tiering Delay
• Single Put with S3 Scavenging Metadata
• HCP S30 Performance Enhancements
Writing directly to S Series storage allows a customer to have greater storage capacity without
attaching an array to HCP. Unlike setting an S Series as the second tier, there is no tiering delay
because we do not need to wait for the tiering service to run. Also, new in v7.2, we will make
use of the AWS headers to use a single transaction to put both data and S3 Scavenging
Metadata, rather than needing two transactions as in previous releases, which offers a
considerable speed boost. Additionally, the soon to be released HCP S30 has many performance
enhancements over the HCP S10. Just as for data that is tiered to an S Series, a tenant user will
not notice any functional difference between data that has been ingested to Primary running or
an S Series.
Page 4-28
Administration
Write Through to S Series Storage
It is very straightforward to modify a service plan to use the Write Through to S Series Storage
feature. Simply edit the first tier in the service plan and check an HCP S Series storage pool. All
the pools that are not HCP S Series pools will automatically become unchecked. As always,
all metadata is stored on Primary Running.
There is no rehydration when using this feature because all data will be stored on the S Series,
or other higher tiers.
Page 4-29
Administration
Write Through to S Series Storage
When data is ingested, it will appear as though it has already been tiered to the S Series.
Page 4-30
Administration
Module Summary
Module Summary
Page 4-31
Administration
Module Review
Module Review
3. How do you start using HCP S10 and HCP S30 nodes?
4. How long must the data stay on primary storage before tiering to HCP
S10 or HCP S30 node?
Page 4-32
5. Ingestion Processes
Module Objectives
Page 5-1
Ingestion Processes
Namespace Browser
Namespace Browser
This section covers Namespace browser.
Data Access
Page 5-2
Ingestion Processes
CIFS and NFS Support
CIFS and Network File System (NFS) should be used only for migration or
application access
• If namespace cloud optimization is enabled, CIFS, NFS and SMTP cannot be used
• Namespace cloud optimization can be disabled only if no data was written to the
namespace
• Once there is a write to a cloud-optimized namespace, CIFS, NFS and SMTP can never be
enabled for this namespace
Page 5-3
Ingestion Processes
Enable CIFS Protocol
Tenant
Page 5-4
Ingestion Processes
Microsoft Windows Mounted Disks
DATA
METADATA
Page 5-5
Ingestion Processes
Set Retention Period
Note: In the lab, you will change this to A+4m to set a retention period of 4 minutes.
Default Tenant
This section covers information on default tenant.
Page 5-6
Ingestion Processes
Enable Creation of Default Tenant
System
A System Management Console user with the service role has to enable the option to create
the default tenant.
Page 5-7
Ingestion Processes
Create Default Tenant or Namespace
Page 5-8
Ingestion Processes
Overview
Overview
Data can be deleted from any of the locations listed above if it is not under
retention
As of HCP v8.x, HCP-DM has moved to open source, and is
available for download from GitHub
• https://github.com/Hitachi-Data-Systems/Open-DM
• When copying data from one location to another, the source and destination locations
can be in combination of:
o A local file system, including remote directories accessed using the local file
system
o An HCAP archive
Page 5-9
Ingestion Processes
Installation
Installation
Migration Panes
The main window contains two identical panes separated by the transfer buttons << and >>
The same functionality is supported in both panes of the DM GUI
Select Local File System
or namespace profile
Main window panes
Page 5-10
Ingestion Processes
Namespace Profile Manager: Create Profile
Page 5-11
Ingestion Processes
Set Preferences: Policies
Set policies
• Indexing
• Shredding
• Retention method
• Retention hold
Applies to default
namespace and
HCAP 2.x archives
Page 5-12
Ingestion Processes
Set Preferences: Owner
HCP-DM CLI
The Command Line Interface (CLI) provides functionality similar to the existing HCP
client tool arcmv
CLI commands
• hcpdm copy – writes each source file to the target destination
o If a file with the same name exists, the copy fails
o With versioning enabled, a new version is created
Page 5-13
Ingestion Processes
REST API
REST API
This section provides information on REST API.
From Wikipedia
• Representational State Transfer (REST) is a style of software architecture
for distributed systems such as the World Wide Web
REST has emerged as a predominant Web service design model when using the
HTTP protocol
In a nutshell:
• Multiple requests, while similar in form, may have different meanings
depending on the receiver of the request
HCP Request:
  GET /my-image.jpg?type=acl
  Accept: application/xml
Amazon S3 Request:
  GET /my-image.jpg?acl
The Management API (MAPI), together with the Replication API and the Search API, are RESTful
interfaces that can influence transactions on a system. MAPI must be enabled at the System
level and also at the Tenant level in order to work.
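As a hedged sketch, a MAPI call looks like the data-access curl examples in this module but targets the administrative domain. The host name below is hypothetical, and the port (9090) and /mapi path should be verified against the HCP Management API Reference before use; the command is only assembled and printed here, not sent:

```shell
# Build (but do not send) an illustrative MAPI request that lists tenants.
mapi_host="admin.hcp.example.com"
mapi_url="https://${mapi_host}:9090/mapi/tenants"

mapi_cmd="curl -k -H 'Authorization: HCP <base64-user>:<md5-password>' \
-H 'Accept: application/xml' ${mapi_url}"
echo "$mapi_cmd"
```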
Page 5-14
Ingestion Processes
Simplified REST Example
• Management API
Configure tenant and namespaces, and replication
Page 5-15
Ingestion Processes
Anatomy of Request
Anatomy of Request
Exercise:
Command
curl -k -i -T my-image.jpg \
  -H "Authorization: HCP YWNtZXVzZXI=:a3b9c163f6c520407ff34cfdb83ca5c6" \
  "https://medical.acme.hcp.example.com/rest/myfolder/my-image.jpg?retention=A+5d&shred=true"
Page 5-16
Ingestion Processes
Command
curl -k -i -T my-image.jpg \
  -H "Authorization: HCP YWNtZXVzZXI=:a3b9c163f6c520407ff34cfdb83ca5c6" \
  "https://medical.acme.hcp.example.com/rest/myfolder/my-image.jpg?retention=A+5d&shred=true"
Provide HCP authorization credentials for the tenant in the form of:
Form: <base64-username>:<md5sum-password>
See Using a Namespace document for how to obtain the encoding of
username and password
Note: The Namespace.pdf (Using a Namespace) document can be downloaded from both the
System Management Console and the Tenant Management Console on HCP G10.
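The encoding can also be reproduced with standard tools, which is handy for testing. The username acmeuser below matches the Base64 token in the curl example; the password is a hypothetical stand-in (the MD5 hash in the slides comes from a different, unknown password):

```shell
# Base64-encode the username (no trailing newline, hence printf).
user_b64=$(printf '%s' 'acmeuser' | base64)
echo "$user_b64"    # YWNtZXVzZXI= (same as in the curl example)

# MD5-hash the password the same way; 'example-password' is hypothetical.
pass_md5=$(printf '%s' 'example-password' | md5sum | cut -d' ' -f1)

# Assemble the Authorization header value.
echo "Authorization: HCP ${user_b64}:${pass_md5}"
```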
Page 5-17
Ingestion Processes
Command
curl -k -i -T my-image.jpg \
  -H "Authorization: HCP YWNtZXVzZXI=:a3b9c163f6c520407ff34cfdb83ca5c6" \
  "https://medical.acme.hcp.example.com/rest/myfolder/my-image.jpg?retention=A+5d&shred=true"
Write object:
Using the /rest data access gateway
In folder myfolder
With object name my-image.jpg
Page 5-18
Ingestion Processes
Using Programming Languages
The curl command is useful for single items or testing, but inefficient for
large-volume usage
What Is HS3?
DragonDisk
Page 5-19
Ingestion Processes
HS3 and Multipart Upload (MPU)
With the HS3 API, you can perform operations to create an individual
object by uploading the object data in multiple parts
List the parts of in-progress multipart uploads (GET object list parts)
• To use the HS3 API to perform the operations listed above, you can write applications
that use any standard HTTP client library. HS3 is also compatible with many third-party
tools that support Amazon S3
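Because HS3 speaks the S3 protocol, a standard tool such as the AWS CLI can drive a multipart upload against HCP. The sketch below only assembles and prints the three commands (the endpoint, bucket, key, and UploadId placeholder are hypothetical) rather than sending them:

```shell
# Hypothetical HS3 endpoint: the tenant domain, with the namespace as bucket.
endpoint="https://acme.hcp.example.com"
bucket="medical"
key="big-file.bin"

# Step 1: start the upload and obtain an UploadId.
echo "aws s3api create-multipart-upload --endpoint-url $endpoint \
--bucket $bucket --key $key"

# Step 2..n: send each part with its part number and the UploadId.
echo "aws s3api upload-part --endpoint-url $endpoint --bucket $bucket \
--key $key --part-number 1 --body part1.bin --upload-id <UploadId>"

# Final step: combine the parts into one object.
echo "aws s3api complete-multipart-upload --endpoint-url $endpoint \
--bucket $bucket --key $key --upload-id <UploadId> \
--multipart-upload file://parts.json"
```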
Page 5-20
Ingestion Processes
S3 Basic Concepts
S3 Basic Concepts
Page 5-21
Ingestion Processes
How to Make S3 Requests
To use the HS3 API as an authenticated user, you need to provide credentials that are based on
the username and password for your user account. To provide credentials, you typically use the
HTTP Authorization request header.
Authorization request header:
To provide credentials for AWS authentication using the Authorization header, you use this
format:
Authorization: AWS access-key:signature
In this format:
• access-key is the Base64-encoded username for your user account
• signature is a value calculated using your secret key and specific elements of the HS3
request, including the date and time of the request
• Your secret key is the MD5-hashed password for your user account
• Because the signature for an HS3 request is based on the request contents, it differs for
different requests
• Third-party tools that are compatible with HS3 typically calculate request signatures
automatically
• If you’re writing your own application, you can use an AWS SDK to calculate request
signatures
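The credential derivation described above can be sketched in code. This is an illustrative sketch, not a full implementation: building the complete AWS V2 string-to-sign (verb, headers, date, canonicalized resource) is omitted and passed in as an argument, so consult the AWS Signature Version 2 documentation for the exact rules:

```python
import base64
import hashlib
import hmac

def hs3_credentials(username: str, password: str):
    """Derive HS3 credentials from an HCP user account:
    access key = base64(username), secret key = md5(password)."""
    access = base64.b64encode(username.encode()).decode()
    secret = hashlib.md5(password.encode()).hexdigest()
    return access, secret

def sign_v2(secret: str, string_to_sign: str) -> str:
    """AWS Signature Version 2: base64(HMAC-SHA1(secret, string-to-sign))."""
    mac = hmac.new(secret.encode(), string_to_sign.encode(), hashlib.sha1)
    return base64.b64encode(mac.digest()).decode()

def authorization_header(username: str, password: str, string_to_sign: str) -> str:
    """Assemble the header value in the format: AWS access-key:signature."""
    access, secret = hs3_credentials(username, password)
    return f"AWS {access}:{sign_v2(secret, string_to_sign)}"
```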
Page 5-22
Ingestion Processes
OpenStack Concepts and Terminology
https://www.openstack.org/
Page 5-23
Ingestion Processes
Module Summary
Module Review
Page 5-24
6. Search Activities
Module Objectives
Page 6-1
Search Activities
Overview
What is it?
What is it not?
• A full-featured search engine for content
• In HCP v4.x, MQE was the basic way to find a set of objects based on operation and time.
Example: find all the objects created between time A and time B
• You can now perform real search for sets of objects based on metadata (system and custom)
• The indexing and search engine is built into HCP
• You can search across tenant and namespaces to locate related sets of objects
Page 6-2
Search Activities
Metadata Query Engine: Details
Query via API, Hitachi Content Platform (HCP) MQE Search Console or
MQE Tool
Conforms to HCP data access authorization security
Examples: find all the emails in this namespace and put them on litigation hold, or find
everything owned by Richard and reassign it to Scott.
Page 6-3
Search Activities
Metadata Query Engine: Qualifications
o System metadata index size: ~340 bytes per object (very light)
Page 6-4
Search Activities
MQE and HDDS Search Differences
You can define multiple content classes and content properties (all of which show up in the
search GUI for the tenants they are defined on). Search criteria are now fully configurable.
Page 6-5
Search Activities
MQE Content Classes
• Each content property has a data type that determines how the property values are
treated by the metadata query engine. Additionally, a content property is defined as
either single-valued or multi-valued. A multi-valued property can extract the values of
multiple occurrences of the same element or attribute from the XML
• Content properties are grouped into content classes, and each namespace can be
associated with a set of content classes. The content properties that belong to a content
class associated with the namespace are indexed for the namespace. Content classes
are defined at the tenant level, so multiple namespaces can be associated with the same
content class
• For example, if the namespace personnel is associated with the content class MedInfo,
and the content property DrName is a member of the content class, the query engine
will use the DrName content property to index the custom metadata in the Personnel
namespace
Page 6-6
Search Activities
MQE Content Classes
On tenant management
console go to
Services/Search
Configuration of class
properties
Page 6-7
Search Activities
Enable HCP MQE Search Facility
Note: The above screenshot shows HDDS configured as the search console because of the
previous lab project. If no search console has been selected, the configuration would
indicate Disable Search Console, and the Query Status for both the MQE and HDDS
consoles would indicate Unavailable.
Page 6-8
Search Activities
Launch MQE GUI
https://TenantName.Qualified_DNS_Name:8888
For example: https://legal.hcap1.hcap1.local:8888
Page 6-9
Search Activities
Narrow Structured Search
1. Click the plus sign (+) to the right of the third box indicating the object size (14009749)
to add another query field
2. Select Namespace in the left panel and Litigation (Legal) in the right panel and click
the Query button
To perform a Control Operation (like delete), open the object or save it to a target
location
Page 6-10
Search Activities
MQE Tool
Open source at SourceForge:
https://sourceforge.net/projects/hcpmetadataquer/
Page 6-11
Search Activities
Module Summary
Module Review
Page 6-12
7. Replication Activities
Module Objectives
Page 7-1
Replication Activities
Active – Passive Replication
• DNS top level directories and subset of data (retention classes, compliance logs)
• Support for multiple links and link types to build advanced topologies
• Schedule replication
Page 7-2
Replication Activities
Before You Begin
Ensure both primary and replica HCP systems have replication enabled
• Replication is no longer a licensed feature!
When you replicate a tenant, a tenant with the same name cannot
reside on the replica system
If you will use a separate VLAN for replication, create that VLAN before
setting up replication
Page 7-3
Replication Activities
Active – Active Replication
Page 7-4
Replication Activities
Domain and Certificate Replication
Page 7-5
Replication Activities
Fully Automated Collision Handling
o Annotations created on one side are added to the object on the remote side; if
the same annotation is changed on both sides, the latest edit wins
<queryRequest>
  <operation>
    <count>0</count>
    <systemMetadata>
      <replicationCollision>true</replicationCollision>
    </systemMetadata>
  </operation>
</queryRequest>
Page 7-6
Replication Activities
Fully Automated Collision Handling
<queryRequest>
  <operation>
    <systemMetadata>
      <namespaces>
        <namespace>ns1.ten1</namespace>
      </namespaces>
      <changeTime>
        <start>1375839364000</start>
        <end>1475839364000</end>
      </changeTime>
      <replicationCollision>true</replicationCollision>
    </systemMetadata>
  </operation>
</queryRequest>
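Request bodies like the ones above can be generated programmatically rather than assembled by hand. Below is a minimal sketch using Python's standard library; the endpoint and authentication headers are omitted, so consult the HCP documentation for the actual /query request details:

```python
import xml.etree.ElementTree as ET

def build_operation_query(namespace: str, start_ms: int, end_ms: int,
                          collisions_only: bool = False) -> str:
    """Build an MQE operation-based queryRequest body (XML), mirroring the
    shape of the replication-collision examples in this module."""
    root = ET.Element("queryRequest")
    op = ET.SubElement(root, "operation")
    meta = ET.SubElement(op, "systemMetadata")
    nss = ET.SubElement(meta, "namespaces")
    ET.SubElement(nss, "namespace").text = namespace
    change = ET.SubElement(meta, "changeTime")
    ET.SubElement(change, "start").text = str(start_ms)
    ET.SubElement(change, "end").text = str(end_ms)
    if collisions_only:
        ET.SubElement(meta, "replicationCollision").text = "true"
    return ET.tostring(root, encoding="unicode")
```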
Page 7-7
Replication Activities
Fully Automated Collision Handling
Collision losers:
• Can be queried (and bulk processed) via MQE – both operation and object query are
supported
Page 7-8
Replication Activities
Querying Collisions With MQE
HCP now supports all replication operations via the management API
(MAPI), including:
• Link creation
• Link content selections
• Link status
• Link management
• Link schedule configuration
• Link monitoring
• Tenant backlog monitoring
• Failover lifecycles
Page 7-9
Replication Activities
Implementation Notes Overview
Active-active links remove the requirement that tenants and namespaces be fully
deleted on the remote side before being re-added to the link
A mix of active-active and active-passive links is fully supported in a replication topology
• Use case: Active-active with common disaster recovery (DR) backup system
Link type is reported in SNMP and in the Hi-Track Remote Monitoring system
A replication link can be moved to a separate Virtual Local Area Network (VLAN)
Replication performance can be tuned using replication schedules and priorities:
Low/Medium/High/Idle/Custom
A custom performance level can be set up in the System Management Console (SMC)
• The creation of empty links (no content selections) is now supported so that connectivity
between sites can be verified
• The namespace-level pruning period on the replica has been removed; the system honors a
single pruning period
• Custom performance level can be set up in SMC. The default is 5 threads for replication
Page 7-10
Replication Activities
Active-Active Links Persist Metadata First
• Removing content selections from a link provides a warning when any metadata-only
objects will be orphaned as a result
Page 7-11
Replication Activities
Limits, Performance and Networks
Increased limits:
• Maximum of 5 outbound links are supported
• Maximum of 5 inbound links are supported
• Maximum of 5 active-active links (counts as 5 inbound and 5 outbound) per
system
Failover
This section provides information on failover.
Page 7-12
Replication Activities
Automatic Failover/Failback Options
• Failover is required in active-passive replication links to make the replica tenant
read-write. Failover can be either manual or automated
• It is possible to set up automated failover – for example, fail over to the replica after
120 minutes with no heartbeat from the primary HCP system
o The replica HCP system then handles redirected DNS requests (if Automated DNS
Failover is enabled)
• Once the primary HCP system is back online, a recovery process is necessary.
During recovery, the replica serves clients while it replicates new data to the former
primary HCP system. Once the process is nearly finished, both HCP systems enter
Complete Recovery mode, during which the final synchronization is performed. Once the
data on both HCP systems is an exact mirror, the primary HCP becomes read-write
and starts serving clients, and the replica HCP resumes its role as replica. During the
Complete Recovery phase, both HCP systems are read-only. The complete recovery
procedure can be manual or automated
Page 7-13
Replication Activities
Automatic Failover/Failback Options
• Automated DNS failover is typically used in active-passive configurations. DNS is used to
redirect clients from a failed primary system to the replica. This involves modifying the
secondary DNS forward lookup zone, replacing the IP addresses of the HCP primary with the
IP addresses of the HCP replica. Once failover to the replica is triggered, the replica HCP
starts handling these DNS requests. There is no impact on clients; they keep addressing the
primary HCP without knowing that DNS redirects them to the HCP replica
Page 7-14
Replication Activities
Active-Active Failover Scenario 1
Applications either:
• Use a load balancer to route requests to specific systems in the replication
topology
• Are made aware of the multiple systems in the replication topology and issue
requests directly to each system
System administrators can manually fail one system over to the remote side on
demand
Page 7-15
Replication Activities
Active-Passive Failover Scenario
1. Begin Failover
2. Restore Link
3. Begin Recovery
4. Complete Recovery
• If DNS Failover is enabled, the remote system becomes inaccessible via DNS
• Local system becomes writable and remote system (if accessible) becomes read-only
Restore Link – ensures that the link definition exists on the remote system
Begin Recovery – restore content from the local DR site to the primary system
• The primary system remains read-only and the local system remains read-write
Complete Recovery – perform the final synchronization with the primary system; both
systems are read-only until recovery completes
Page 7-16
Replication Activities
Active-Active Failover Scenario
1. Begin Failover
2. Restore Link
3. Fail Back
Restore Link – ensures that the link definition exists on the remote system
Fail Back – update DNS zone files to route back to the primary system
Page 7-17
Replication Activities
Distributed Authoritative DNS Systems
• Configuration requirements:
• In an active-active topology:
Page 7-18
Replication Activities
What Is Geo Distributed Data Protection?
Page 7-19
Replication Activities
Protection Types
With whole-object protection, all the data for each object in a replicated
namespace is maintained on each HCP system in a replication topology
Page 7-20
Replication Activities
Geo-Distributed Erasure Coding Service Processing
The service on one system can query other HCP systems in the
topology for the state of the objects on those systems even when the
service is not running on those systems
Replication Topologies
Page 7-21
Replication Activities
Considerations for Cross-Release Replication
From one HCP system to a second HCP system and from that second
system to a third HCP system, such that the same HCP tenants and
namespaces and default-namespace directories that are replicated to
the second system are then replicated to the third system
• chained replication
HCP release v8.x systems support replication with other release 8.x
systems and with release v7.x systems
• HCP does not support replication between v8.x systems and systems
earlier than v7.0.
Page 7-22
Replication Activities
Working With Erasure Coding Topologies
To create an erasure coding topology, you select the HCP systems and
replication links to include in the topology
• You also set the topology properties
• After you create the topology, you add HCP tenants to it
• You can modify the properties of an erasure coding topology at any time
Geo-EC Setup
Before you can create an erasure coding topology, the HCP system must be
connected to at least two other HCP Clusters by active/active replication
links
• An HCP system can participate in only one active erasure coding topology
• There can be a maximum of five erasure coding topologies at any given time,
regardless of state (active, retiring, or retired)
Page 7-23
Replication Activities
Steps to Create a Geo-EC Configuration
4. Add Tenants.
Page 7-24
Replication Activities
Replication Verification Service (RVS)
Hardened Migrations
Customers can use replication to migrate with confidence knowing that RVS
will ensure their migration is successful
• Replication Verification Service (RVS) provides Distributed Data Protection (DDP) across
your replication topology. It will ensure that every HCP system that should have a copy
of an object has a copy of the object
• With Replication Verification Service, the customer can confidently use replication for
migration. RVS will make sure all objects are replicated, and in the unlikely event there
are objects that cannot be replicated, RVS will provide concise reports in the SMC and
Tenant Management Console (TMC)
Page 7-25
Replication Activities
RVS: How Does It Work?
• Objects that already exist on both sides are skipped
• Objects that do not exist on one side are replicated over
• Objects that cannot be replicated are labeled in a report as non-replicated
Page 7-26
Replication Activities
RVS Setup
Check the Verify replicated objects checkbox to enable RVS
As you can see under the Replication > Settings, there is a checkbox labeled Verify
replicated objects
• If you select Run once, verification runs one time as soon as you click the Update button
• If you select Always verify replicated objects, RVS creates its own schedule and runs
constantly
Page 7-27
Replication Activities
RVS Running Status
Running status
On the Replication link page, you can see the Verifying status; it shows the last completed
pass, the current status, and issues found. Issues found tells you whether RVS found any
objects that cannot be replicated to the other HCP, for example because files are open or
corrupt.
Page 7-28
Replication Activities
RVS Results
TMC
SMC
Check overall RVS results at SMC > Replication > Overview > Issues found
overlay
See which objects are not replicated, and for what reasons, at TMC > <namespace> >
Monitoring
Lastly, under Issues in the System Management Console, a list shows how many objects are
not replicated for each tenant. In addition, under Monitoring in the Tenant Management
Console, you can see each object's name and the reason why it was not replicated.
Load Balancers
This section covers load balancers.
Page 7-29
Replication Activities
Load Balancer Overview
[Diagram: clients reach back-end servers in a server pool through a load balancer]
• Discuss how a load balancer enables many clients to access a server farm through a
single FQDN
Page 7-30
Replication Activities
Load Balancer With Single HCP
The load balancer monitors HCP nodes for availability – TCP and HTTP(S)
• Make sure you monitor the HCP nodes for TCP and https
• Be aware that HCP goes into R/O mode if too many nodes are offline
Page 7-31
Replication Activities
Load Balancer With Pair of Replicated HCP
• Make sure you monitor the HCP nodes for TCP and https
• Be aware that HCP goes into R/O mode if too many nodes are offline
• In this case, the Load Balancer needs to take all cluster nodes offline
Page 7-32
Replication Activities
What About Distributed Sites?
Page 7-33
Replication Activities
Global Traffic Manager (GTM)
o It monitors the local resources for availability, like a Local Traffic Managers (LTM)
does
Page 7-34
Replication Activities
GTM With Replicated HCPs
[Diagram: applications at two sites resolve n1.tenant.hcp.dom.com through GTMs over the
WAN; each GTM prefers the local LTM and server pool ("Stay local!") and directs clients to
the remote HCP ("Access remote!") only when the local HCP is unavailable]
• Make sure you monitor the HCP nodes for TCP and https
• Be aware that HCP goes into R/O mode if too many nodes are offline
• In this case, the Load Balancer needs to take all cluster nodes offline
GTM
• Monitors all HCP nodes
• Needs rules similar to an LTM's
• Answers DNS queries with the best-fitting HCP IP addresses
• May (but does not need to) point to an LTM per site
Page 7-35
Replication Activities
Admin Commands
This section covers information about Admin controls.
Failover-related admin commands have been updated to account for the new active-active
failover workflow
Page 7-36
Replication Activities
Admin Commands Reference
Displays real-time information about all the replication threads and queues for the given link.
Information such as thread queue size, the EF being worked on per thread, the pauseResumeType
of the change operation, thread state, and so on, is displayed. It is helpful to pipe this
command to the "watch" command in order to see how the threads and queues are progressing.
Use the "admin jvm replication list" command to get the linkId to pass to this
command.
• The --python flag tells this command to print the output in python dictionary format
• The --verbose flag tells this command to print extra detailed information
• The --regions flag tells this command to print region details only
• The --globals flag tells this command to print global details only
• The --metadata flag tells this command to print metadata first details only
Page 7-37
Replication Activities
Admin Commands Reference
Returns a list of strings describing the region progress checkpoints (in milliseconds since
1970) for the given link. All objects changed before this time are guaranteed to be replicated.
• With --earliest, returns the earliest progress checkpoint for that link, across all object
types. The optional [count] field may be specified to list the earliest [n] checkpoints.
May be utilized with the --metadata flag to return metadata checkpoints instead
• With --metadata, metadata checkpoints are included and region checkpoints are
excluded
• With --namespaces, only the region checkpoints for the namespaces in the specified
comma-separated namespace list are listed. If omitted, the checkpoints for all namespaces
are listed. May optionally be used with --metadata
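A checkpoint expressed in milliseconds since 1970 can be converted to a readable timestamp, for example:

```python
from datetime import datetime, timezone

def checkpoint_to_datetime(checkpoint_ms: int) -> datetime:
    """Convert a replication progress checkpoint (milliseconds since 1970,
    UTC) to an aware datetime. All objects changed before this moment are
    guaranteed to be replicated."""
    return datetime.fromtimestamp(checkpoint_ms / 1000, tz=timezone.utc)
```

For instance, the checkpoint 1375839364000 used in the collision-query example earlier in this module falls in 2013.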
Page 7-38
Replication Activities
System Events
This section covers system events.
The following new alerts have been added to the System Management Console > Overview
page:
o It’s important to keep the times on the two systems synchronized within two
minutes of each other to prevent improper collision handling
Page 7-39
Replication Activities
System Log Events – Reference
Performance
This section covers performance.
Page 7-40
Replication Activities
Performance Overview
Outbound link limit per cluster has increased from two to five
Data is visible and accessible from remote systems 9x faster for
active-active topologies
Data is fully protected 42% faster for active-passive topologies
[Chart: PUT and GET operations for the Baseline, Spray 1-2, BiRepl. WDR and BiRepl.
configurations; PUTs range from about 4200 to 5048, GETs from about 885 to 1138]
Page 7-41
Replication Activities
Module Summary
Module Review
Page 7-42
8. Support Activities
Module Objectives
Page 8-1
Support Activities
Chargeback
This section covers information on chargeback.
Chargeback Features
Chargeback logs can be used to monitor namespace usage patterns. They are downloaded from
HCP in .csv format, which can be imported into a Microsoft Excel table. Chargeback log
downloads can be automated with a tool called Chargeback Collector, a script that downloads
chargeback logs from HCP on a regular basis using the management API (MAPI).
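A collector script would download the report via MAPI and then process the CSV. The sketch below covers the processing step only, with hypothetical sample data; the download itself and the exact MAPI endpoint are omitted. Column names follow the chargeback metrics described in this section:

```python
import csv
import io

def summarize_chargeback(csv_text: str):
    """Total bytesIn/bytesOut per tenant from a chargeback .csv report.
    Rows with a blank tenantName are system-level summary lines."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        tenant = row["tenantName"] or "(system)"
        t = totals.setdefault(tenant, {"bytesIn": 0, "bytesOut": 0})
        t["bytesIn"] += int(row["bytesIn"])
        t["bytesOut"] += int(row["bytesOut"])
    return totals
```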
Page 8-2
Support Activities
Chargeback
o System user
o Tenant user
• Special Consideration
Page 8-3
Support Activities
Chargeback Metrics
Column(s) – Description
systemName – DNS name of the HCP cluster for the record
tenantName – Tenant name for the record; if blank, the record is a summary line for the HCP system
namespaceName – Namespace name for the record; if blank, the record is a summary line for either the tenant or the system
startTime – Start time for the record; typically the beginning of an hour at the granularity requested. For example: 2010-08-26T08:00:00-0400
endTime – End time for the record; typically the end of the hour, or the time of collection for the active bucket. For example: 2010-08-26T08:59:59-0400
In these reports, point-in-time values reflect the value at the moment the bucket was
returned (that is, at the end of the latest bucket for the record, or the instant when the
active bucket was collected).
Column(s) – Description
objectCount – Point-in-time count of end-user objects in the system at the end of the
data-bucket collection. This value includes both data objects and custom-metadata-only objects
ingestedVolume – Point-in-time value, in bytes, of the end-user object data and custom
metadata ingested for the tenant/namespace being reported. The overhead of directories is
not included in this value
storageCapacityUsed – Point-in-time value, in bytes, of the raw storage used to store and
protect end-user data: (# of 4KB blocks of user data * 4KB * DPL)
• The smallest allocation unit on the system is a 4KB block
• Includes object data and custom metadata
• Includes hidden versions of content as well
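The storageCapacityUsed formula above can be expressed directly; here is a small sketch assuming simple round-up to whole 4KB blocks:

```python
def storage_capacity_used(user_bytes: int, dpl: int) -> int:
    """Raw storage used to store and protect end-user data:
    (# of 4KB blocks of user data) * 4KB * DPL.
    The smallest allocation unit on the system is a 4KB block."""
    block = 4 * 1024
    blocks = (user_bytes + block - 1) // block  # round up to whole blocks
    return blocks * block * dpl
```

For example, a 10,000-byte object at DPL 2 occupies three 4KB blocks per copy, so 24,576 bytes of raw storage.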
Page 8-4
Support Activities
Chargeback Metrics
Column(s) – Description
bytesIn/bytesOut – Number of bytes transmitted as part of the HTTP message body into and
out of the HCP system
reads/writes/deletes – Count of read, write and delete operations against the
namespace/tenant being reported. This includes operations against objects, custom metadata
and directories
deleted – Indicates that the data record represents namespace(s) that have been deleted but
existed during the data-collection time frame
valid – True/false field that indicates whether there was a problem collecting statistics
from all nodes in the cluster during the period for the specific record
bytesIn/bytesOut consists of object data, custom metadata and directory-listing results.
Data in HTTP headers is not counted (for example, existence checks, system/object-level
metadata, HTTP response status, and so on). The deleted column can contain the values true,
false and included; included means the record is a summary value that includes deleted
namespaces.
Column(s) – Description
multipartObjectBytes – The total number of bytes of object data in all the parts of
multipart objects currently stored in the given namespace, or in all the namespaces owned
by the tenant
multipartObjectParts – The total number of parts of multipart objects currently stored in
the given namespace, or in all the namespaces owned by the tenant
multipartObjects – The total number of multipart objects currently stored in the given
namespace, or in all the namespaces owned by the tenant
multipartUploadBytes – The total number of bytes of object data in all the successfully
uploaded parts of multipart uploads that are currently in progress in the given namespace,
or in all the namespaces owned by the tenant
Page 8-5
Support Activities
Chargeback Reporting Fundamentals
Column(s) – Description
multipartUploadParts – The total number of successfully uploaded parts of multipart
uploads that are currently in progress in the given namespace, or in all the namespaces
owned by the tenant
multipartUploads – The total number of multipart uploads that are currently in progress
in the given namespace, or in all the namespaces owned by the tenant
Page 8-6
Support Activities
System Logs
This section covers system logs.
Page 8-7
Support Activities
Types of Logs
Syslog logging
• HCP sends system log messages to one or more syslog servers
When you do this, you can use tools in your syslog environment to perform
functions such as sorting the messages, querying for certain events, or forwarding
error messages to a mobile device
Page 8-8
Support Activities
Log Management Controls
Email alerts
• Allow HCP system and tenant level administrators to receive email
notification of HCP health events
Internal logs
• Record the processing activity of various components of HCP
• Can help HCP support personnel diagnose and resolve a problem with HCP, if one
occurs
• Are kept for up to 35 days
Page 8-9
Support Activities
Download Internal Log
You can select what logs should be collected on which nodes. It is also possible to specify log
timeframe.
HCP v7.2.1 will include a log Triage tool, expected to launch April 1, 2018. While the
initially targeted users are primarily the HCP Sustaining Team, it could be extended to
include GSC, QA, automation, and developers. The main goal of this project is to help reduce
manual sustaining effort while triaging a support issue. The "offline" tool will speed up
issue triaging by providing configurability around extraction, indexing, analysis and
visualization of HCP logs. The HCP Sustaining team depends heavily on logs downloaded from
an HCP system, both for post-analysis of a problem that already occurred and for triaging a
problem that blocks a certain function of the system and therefore requires quick turnaround
on the root cause and fix. The Triage tool will be built as a web application that provides
a simple web-based interface for easy navigation. The web application will leverage search
technologies from HCI (Hitachi Content Intelligence) for the actual analysis and
visualization of the results.
Page 8-10
Support Activities
Log Download Enhancements
Default behavior:
• Consistent with HCP pre-v7.2 behavior
• All log download types are selected
• All HCP nodes are selected
• Note that these boxes are greyed out until the preparation for download is complete
Page 8-11
Support Activities
Log Download Enhancements – MAPI
Page 8-12
Support Activities
Log Download Enhancements – MAPI
Page 8-13
Support Activities
Module Summary
Module Review
Page 8-14
9. Solutions
Module Objectives
Page 9-1
Solutions
HCP Solutions and Supported ISVs
HCP and HDI together are a solution for remote offices/branch offices
The HCP system is located in your data center (core) and HDI is typically deployed remotely
in the branch office
Advantages of HDI:
• It migrates all data to HCP
• It works as a cache; when it starts running out of local capacity, it stubs files and,
on read, rehydrates them from HCP
• It is backup-free; it backs up all configuration to HCP automatically
• It can use the entire HCP capacity
• It is easy to manage
• It can be used for NAS migrations
HNAS F (Hitachi NAS F) offers the same features as HDI in terms of integration with HCP
HNAS can be also integrated with an HCP system
HCP Anywhere allows you to build your own on-premises cloud, enabling your employees to
synchronize their data on BYOD (bring your own device) devices
• HCP can also create a solution with Content Audit Services and Data Archiving powered
by Arkivio
• HCP can also create a solution with Hitachi Content Optimization for Microsoft
SharePoint
Page 9-2
Solutions
HCP Solution With HDI
Operating as an on-ramp for users and applications at the edge, HDI connects to Hitachi
Content Platform (HCP) at a core data center; users work with it like any Network File
System (NFS) or Common Internet File System (CIFS) storage
HDI is essentially a caching device; it provides users and applications with seemingly
endless storage and a host of newly available capabilities
For easier and more efficient control of distributed IT, HDI comes with a Management API
(MAPI) that enables integration with Hitachi Content Platform’s management UI
It uses standard protocols for file system access (CIFS and NFS) on the front end, and an
HTTPS REST API and Management API toward HCP
HDI is essentially a caching device; it provides users and applications with seemingly endless
storage and a host of newly available capabilities. Furthermore, for easier and more efficient control
of distributed IT, Hitachi Data Ingestor comes with a Management API that enables integration
with Hitachi Content Platform’s management UI and other third party or home-grown
management UIs.
Because of the Management API on the Data Ingestor, customers can even integrate HDI
management into their homegrown management infrastructures for deployment and ongoing
management.
Page 9-3
Solutions
Elastic and Back Up Free
[Diagram: web apps and data span from the remote and corporate edge to content core storage]
• Small footprint
• Stores relevant data locally; links the remaining data to the content core
Page 9-4
Solutions
Available HDI Configurations
• Highly available configuration (cluster pair): SAN-attached to Hitachi storage; supports
HUS, HUS VM, VSP, VSP G1000
• Non-redundant configuration: internal storage (RAID-6 configuration)
• Non-redundant configuration: customer-defined hardware and storage; also Hyper-V
(note: EOS)
• Non-redundant configuration: configured through HCP Anywhere
• HDI Cluster will be managed using Hitachi File Service Manager (HFSM)
• HDI Single Node and VMWare format will be managed using the Integrated
Management GUI
Page 9-5
Solutions
HDI Maps to HCP Tenants and Namespaces
[Diagram: file systems (FS 1, FS 2) on multiple HDI systems map to HCP tenants and namespaces]
Benefits
• Satisfy multiple applications, varying SLAs and workload types or organizations
• Edge dispersion: each HDI can access another when set up that way
• Examples: replication, encryption, DPL levels (how many copies to keep), compliance
and retention, compression and versioning
Page 9-6
Solutions
Single HCP Tenant Solution for Cloud
[Diagram: multiple HDI systems, each with its own file systems, sharing a single HCP tenant]
This configuration of multiple HDI sharing one tenant can be used in cloud situations.
A tenant represents a customer corporation. All HDIs of this corporation share the same tenant.
Page 9-7
Solutions
File System Migration Task
• Depending on the file’s path length and the number of ACEs on the file, the size of the
stub file is either 4KB or 8KB
[Diagram: applications access HDI via CIFS/NFS; HDI replicates writes to HCP and recalls
reads from HCP via REST/HTTP(S)]
• The devices communicate using the same HTTP verbs (GET, POST, PUT, DELETE and so
on) through HTTP or HTTPS
o Recalled files are deleted from HDI later and replaced by another link, based on
HDI system capacity
Page 9-8
Solutions
Stubs – File Restoration
When the system capacity reaches 90%, HDI deletes the files in excess of the
threshold using an LRU algorithm and creates stubs to replace them
If a user or application retrieves a deleted file, HDI recovers the file data using
the stub metadata, performing a restore operation from the HCP namespace to
the HDI file system
Benefits:
A stub stores only the information required to restore user data quickly, saving
space in the HDI file system to cache the most frequently accessed files.
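The stubbing policy described above can be illustrated with a small sketch. This is not HDI's actual implementation, just the LRU idea: evict the least-recently-used files until usage falls back to the threshold:

```python
def select_files_to_stub(files, capacity, threshold=0.90):
    """Given (name, size, last_access) tuples, pick least-recently-used
    files to stub until used space drops to threshold * capacity."""
    used = sum(size for _, size, _ in files)
    target = threshold * capacity
    victims = []
    # Oldest access time first, i.e. least recently used first
    for name, size, _ in sorted(files, key=lambda f: f[2]):
        if used <= target:
            break
        victims.append(name)
        used -= size
    return victims
```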
Page 9-9
Solutions
Hitachi NAS (HNAS) Data Migration to HCP
HNAS can tier to HCP using Hitachi NAS Platform (HNAS) Data Migrator
CVL – cross volume link = stub, pointer; XVL – external cross volume link = stub, pointer
pointing outside HNAS; RO-WORM – Read Only – Write Once, Read Many
The types of data migration targets are:
o NFS targets
o Hitachi Content Platform (HCP) and Atempo Digital Archiving (ADA) using HTTP
• Migration to HCP:
o On HNAS, an external path to http target (HCP) must be added using CLI
o Once the path is created, it is possible to set up HNAS Data Migrator rules
(policies)
Page 9-10
Solutions
HNAS Data Migrator to Cloud (DM2C)
Data Migrator to Cloud migration targets:
• Local file system pointer [handle for local FS] – RW
• NFS pointer [remote server identifier] – RW
• HTTP pointer [path/URL for HTTP] – RO-WORM
• HTTPS pointer [path/URL for HTTPS] – RW
• Before v12.3, the DM2Cloud path passed through the Linux MMB package
• From v12.3 onward, the aggregates on the FPGA board are used
• The target can be any public cloud service, but also a Hitachi Content Platform (HCP) G10
or an HCP S30 node
• Data is available for read and write (the focus is not on tiering and retention, but on
expanding HNAS capacity in an inexpensive way)
Page 9-11
Solutions
HCP Solution With HCP Anywhere
An HCP Anywhere system consists of both hardware and software and uses Hitachi Content
Platform (HCP) to store data.
o This feature allows users to add files to HCP Anywhere and access those files
from nearly any location
o When users add files to HCP Anywhere, HCP Anywhere stores the files in an HCP
system and makes the files available through the user's computers, smartphones
and tablets
o Users can also share links to files that they have added to HCP Anywhere
Page 9-12
Solutions
HCP Anywhere Architecture
[Diagram: the HCP Anywhere POD sits on the internal network behind load balancers, reached
from the intranet over HTTP(S). Each of the two Application and DB Servers runs web servers
with a REST API, a sync server, a notification server and a database, replicating over a
back-end network. The Enterprise IT side supplies an Active Directory server and other
customer IT infrastructure: DNS, NTP, virus scanning and so on.]
HCP Anywhere is a sync-and-share gateway for HCP. It connects to HCP using the HTTP protocol
on the back end; on the front end, it provides secure applications for mobile devices such as
smartphones and tablets. Desktop applications are available, as well as a web-based GUI.
Multiple platforms are supported, and client applications can be branded for a specific
customer.
The HCP Anywhere solution consists of two servers, which can be either Quanta servers or
virtual machines running in VMware.
Page 9-13
Solutions
A Solution to the Data Dilemma
Page 9-14
Solutions
Content Intelligence Does Three Specific Things
1. Connect — different data types, different data sources
2. Understand — extract all or parts, evaluate data value, enrich and augment, transform
and index
3. Recommend — review and assess, visualize relationships, explore data, discover
opportunities, decisions based on data
Regardless of how you use it, Hitachi Content Intelligence makes data-driven decision making
as easy as 1, 2, 3.
The slide above shows how important it is to move beyond simply connecting data and instead
focus on how the business can more effectively and efficiently search for what it needs and
understand the relationships between different data sets, in an effort to surface valuable
insights it can act on.
With today's structured and unstructured data being viewed from multiple perspectives,
finding new and unexpected patterns is what will help your business find new solutions to
complex problems, changing market dynamics, new opportunity identification, and more.
Page 9-15
Solutions
Data Connections: Connecting the Dots
Page 9-16
Solutions
Recommend Enabling Data-Driven Decisions
Page 9-17
Solutions
Workflow Designer Transforms and Enriches Data
[Diagram: unstructured and structured data is collected from data connections (the sources
that are crawled) and fed through a workflow pipeline of stages that extract and transform
the content, which is then indexed.
Workflows: inputs + pipeline + index
Data Connections: sources that are crawled]
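The "inputs + pipeline + index" idea can be sketched as ordinary function composition. The stage names and document fields below are invented for illustration; they are not the product's actual stage library:

```python
def extract_text(doc):
    """EXTRACT stage: decode the raw bytes into searchable text."""
    doc["text"] = doc.get("raw", b"").decode("utf-8", errors="replace")
    return doc

def tag_length(doc):
    """Enrichment stage: add a simple derived field."""
    doc["length"] = len(doc.get("text", ""))
    return doc

def run_workflow(docs, stages, index):
    """Run every document from a data connection through the pipeline
    stages in order, then store the result in the index (a dict stands
    in for a real search index)."""
    for doc in docs:
        for stage in stages:
            doc = stage(doc)
        index[doc["id"]] = doc      # INDEX step: make the doc findable
    return index

index = run_workflow(
    [{"id": "1", "raw": b"hello world"}],
    [extract_text, tag_length],
    {},
)
```

Because each stage takes and returns a document, stages can be reordered or swapped without touching the workflow runner, which is the point of the pipeline design.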
Page 9-18
Solutions
Admin Interface Manages the System
Compare results
Customizable views
Auto training mode: the system can be trained to understand unstructured data with
well-known and custom formats.
Page 9-19
Solutions
Highly Scalable With Deployment Flexibility
[Diagram: Hitachi Content Intelligence runs as a collection of microservices hosted with
Docker on a host OS. Deployment mediums: on physical servers, in virtual machines, or in
the cloud (physical, virtual, cloud-hosted). Hitachi Data Discovery Suite (HDDS) is also
labeled.]
Other Solutions
This section covers other solutions.
Page 9-20
Solutions
HCP Integration With ISV Middleware
Page 9-21
Solutions
Software Partners Complete the Solution (100+ Partners)
ECM/ERM
Email
File
Mainframe
Security/Logging/CDR
Voice Logging
Module Summary
Page 9-22
Solutions
Module Review
Check your progress in the Learning Path.
Review the course description for supplemental courses, or visit Hitachi University / the
Hitachi Vantara Learning Center to register, enroll and view additional course offerings.
Get practical advice and insight with Hitachi Vantara white papers.
Join the conversation with your peers in the Hitachi Vantara Community (@HitachiVantara).
Certification: https://www.hitachivantara.com/en-us/services/training-certification.html#certification
Page 9-23
Solutions
Your Next Steps
Learning Paths:
• https://community.hitachivantara.com/welcome
Page 9-24
Training Course Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
—A—
ACC — Action Code. A SIM (System Information Message).
ACE — Access Control Entry. Stores access rights for a single user or group within the Windows security model.
ACL — Access Control List. Stores a set of ACEs, so that it describes the complete set of access rights for a file system object within the Microsoft Windows security model.
ACP ― Array Control Processor. Microprocessor mounted on the disk adapter circuit board (DKA) that controls the drives in a specific disk array. Considered part of the back end; it controls data transfer between cache and the hard drives.
ACP Domain ― Also Array Domain. All of the array-groups controlled by the same pair of DKA boards, or the HDDs managed by 1 ACP PAIR (also called BED).
ACP PAIR ― Physical disk access control logic. Each ACP consists of 2 DKA PCBs to provide 8 loop paths to the real HDDs.
Actuator (arm) — Read/write heads are attached to a single head actuator, or actuator arm, that moves the heads around the platters.
AD — Active Directory.
ADC — Accelerated Data Copy.
Address — A location of data, usually in main memory or on a disk. A name or token that identifies a network component. In local area networks (LANs), for example, every node has a unique address.
ADP — Adapter.
ADS — Active Directory Service.
AL-PA — Arbitrated Loop Physical Address.
AMS — Adaptable Modular Storage.
APAR — Authorized Program Analysis Reports.
APF — Authorized Program Facility. In IBM z/OS and OS/390 environments, a facility that permits the identification of programs that are authorized to use restricted functions.
API — Application Programming Interface.
APID — Application Identification. An ID to identify a command device.
Application Management — The processes that manage the capacity and performance of applications.
ARB — Arbitration or request.
ARM — Automated Restart Manager.
Array Domain — Also ACP Domain. All functions, paths, and disk drives controlled by a single ACP pair. An array domain can contain a variety of LVI or LU configurations.
Array Group — Also called a parity group. A group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Array Unit — A group of hard disk drives in 1 RAID structure. Same as parity group.
ASIC — Application specific integrated circuit.
ASSY — Assembly.
Asymmetric virtualization — See Out-of-band virtualization.
Asynchronous — An I/O operation whose initiator does not await its completion before
Page G-1
proceeding with other work. Asynchronous I/O operations enable an initiator to have multiple concurrent I/O operations in progress. Also called Out-of-band virtualization.
ATA — Advanced Technology Attachment. A disk drive implementation that integrates the controller on the disk drive itself. Also known as IDE (Integrated Drive Electronics) Advanced Technology Attachment.
… or Yottabyte (YB). Note that variations of this term are subject to proprietary trademark disputes in multiple countries at the present time.
BIOS — Basic Input/Output System. A chip located on all computer motherboards that governs how a system boots and operates.
BLKSIZE — Block size.
BLOB — Binary Large OBject.
Page G-2
CAGR — Compound Annual Growth Rate.
Capacity — Capacity is the amount of data that a storage system or drive can store after configuration and/or formatting. Most data storage companies, including HDS, calculate capacity based on the premise that 1KB = 1,024 bytes, 1MB = 1,024 kilobytes, 1GB = 1,024 megabytes, and 1TB = 1,024 gigabytes. See also Terabyte (TB), Petabyte (PB), Exabyte (EB), Zettabyte (ZB) and Yottabyte (YB).
CAPEX — Capital expenditure. The cost of developing or providing non-consumable parts for the product or system. For example, the purchase of a photocopier is the CAPEX, and the annual paper and toner cost is the OPEX. (See OPEX.)
CAS — (1) Column Address Strobe. A signal sent to a dynamic random access memory (DRAM) that tells it that an associated address is a column address; the column address strobe is sent by the processor to a DRAM circuit to activate a column address. (2) Content-addressable Storage.
CBI — Cloud-based Integration. Provisioning of a standardized middleware platform in the cloud that can be used for various cloud integration scenarios. An example would be the integration of legacy applications into the cloud or integration of different cloud-based applications into one application.
CBU — Capacity Backup.
CBX — Controller chassis (box).
CCHH — Common designation for Cylinder and Head.
CCI — Command Control Interface.
CCIF — Cloud Computing Interoperability Forum. A standards organization active in cloud computing.
CDP — Continuous Data Protection.
CDR — Clinical Data Repository.
CDWP — Cumulative disk write throughput.
CE — Customer Engineer.
CEC — Central Electronics Complex.
CentOS — Community Enterprise Operating System.
Centralized management — Storage data management, capacity management, access security management, and path management functions accomplished by software.
CF — Coupling Facility.
CFCC — Coupling Facility Control Code.
CFW — Cache Fast Write.
CH — Channel.
CH S — Channel SCSI.
CHA — Channel Adapter. Provides the channel interface control functions and internal cache data transfer functions. It is used to convert the data format between CKD and FBA. The CHA contains an internal processor and 128 bytes of edit buffer memory. Replaced by CHB in some cases.
CHA/DKA — Channel Adapter/Disk Adapter.
CHAP — Challenge-Handshake Authentication Protocol.
CHB — Channel Board. Updated CHA for Hitachi Unified Storage VM and additional enterprise components.
Chargeback — A cloud computing term that refers to the ability to report on capacity and utilization by application or dataset, charging business users or departments based on how much they use.
CHF — Channel Fibre.
CHIP — Client-Host Interface Processor. Microprocessors on the CHA boards that process the channel commands from the hosts and manage host access to cache.
CHK — Check.
CHN — Channel adapter NAS.
CHP — Channel Processor or Channel Path.
CHPID — Channel Path Identifier.
CHSN or C-HSN — Cache Memory Hierarchical Star Network.
CHT — Channel tachyon. A Fibre Channel protocol controller.
CICS — Customer Information Control System.
CIFS protocol — Common internet file system is a platform-independent file sharing system. A network file system access protocol primarily used by Windows clients to communicate file access requests to Windows servers.
Page G-3
CIM — Common Information Model.
CIS — Clinical Information System.
CKD ― Count-key Data. A format for encoding data on hard disk drives; typically used in the mainframe environment.
CKPT — Check Point.
CL — See Cluster.
CLI — Command Line Interface.
CLPR — Cache Logical Partition. Cache can be divided into multiple virtual cache memories to lessen I/O contention.
Cloud Computing — "Cloud computing refers to applications and services that run on a distributed network using virtualized resources and accessed by common Internet protocols and networking standards. It is distinguished by the notion that resources are virtual and limitless, and that details of the physical systems on which software runs are abstracted from the user." — Source: Cloud Computing Bible, Barrie Sosinsky (2011)
Cloud computing often entails an "as a service" business model that may entail one or more of the following:
• Archive as a Service (AaaS)
• Business Process as a Service (BPaaS)
• Failure as a Service (FaaS)
• Infrastructure as a Service (IaaS)
• IT as a Service (ITaaS)
• Platform as a Service (PaaS)
• Private File Tiering as a Service (PFTaaS)
• Software as a Service (SaaS)
• SharePoint as a Service (SPaaS)
• SPI refers to the Software, Platform and Infrastructure as a Service business model.
Cloud network types include the following:
• Community cloud (or community network cloud)
• Hybrid cloud (or hybrid network cloud)
• Private cloud (or private network cloud)
• Public cloud (or public network cloud)
• Virtual private cloud (or virtual private network cloud)
Cloud Enabler — A concept, product or solution that enables the deployment of cloud computing. Key cloud enablers include:
• Data discoverability
• Data mobility
• Data protection
• Dynamic provisioning
• Location independence
• Multitenancy to ensure secure privacy
• Virtualization
Cloud Fundamental — A core requirement to the deployment of cloud computing. Cloud fundamentals include:
• Self service
• Pay per use
• Dynamic scale up and scale down
Cloud Security Alliance — A standards organization active in cloud computing.
Cluster — A collection of computers that are interconnected (typically at high-speeds) for the purpose of improving reliability, availability, serviceability or performance (via load balancing). Often, clustered computers have access to a common pool of storage and run special software to coordinate the component computers' activities.
CM ― Cache Memory, Cache Memory Module. Intermediate buffer between the channels and drives. It has a maximum of 64GB (32GB x 2 areas) of capacity. It is available and controlled as 2 areas of cache (cache A and cache B). It is fully battery-backed (48 hours).
CM DIR — Cache Memory Directory.
CME — Communications Media and Entertainment.
CM-HSN — Control Memory Hierarchical Star Network.
CM PATH ― Cache Memory Access Path. Access path from the processors of CHA, DKA PCB to Cache Memory.
CM PK — Cache Memory Package.
CM/SM — Cache Memory/Shared Memory.
CMA — Cache Memory Adapter.
CMD — Command.
CMG — Cache Memory Group.
CNAME — Canonical NAME.
Page G-4
CNS — Cluster Name Space or Clustered Name Space.
CNT — Cumulative network throughput.
CoD — Capacity on Demand.
Community Network Cloud — Infrastructure shared between several organizations or groups with common concerns.
Concatenation — A logical joining of 2 series of data, usually represented by the symbol "|". In data communications, 2 or more data are often concatenated to provide a unique name or reference (e.g., S_ID | X_ID). Volume managers concatenate disk address spaces to present a single larger address space.
Connectivity technology — A program or device's ability to link with other programs and devices. Connectivity technology allows programs on a given computer to run routines or access objects on another remote computer.
Controller — A device that controls the transfer of data from a computer to a peripheral device (including a storage system) and vice versa.
Controller-based virtualization — Driven by the physical controller at the hardware microcode level versus at the application software layer, and integrates into the infrastructure to allow virtualization across heterogeneous storage and third party products.
Corporate governance — Organizational compliance with government-mandated regulations.
CP — Central Processor (also called Processing Unit or PU).
CPC — Central Processor Complex.
CPM — Cache Partition Manager. Allows for partitioning of the cache and assigns a partition to a LU; this enables tuning of the system's performance.
CPOE — Computerized Physician Order Entry (Provider Ordered Entry).
CPS — Cache Port Slave.
CPU — Central Processing Unit.
CRM — Customer Relationship Management.
CSS — Channel Subsystem.
CS&S — Customer Service and Support.
CSTOR — Central Storage or Processor Main Memory.
C-Suite — The C-suite is considered the most important and influential group of individuals at a company. Referred to as "the C-Suite within a Healthcare provider."
CSV — Comma Separated Value or Cluster Shared Volume.
CSVP — Customer-specific Value Proposition.
CSW ― Cache Switch PCB. The cache switch (CSW) connects the channel adapter or disk adapter to the cache. Each of them is connected to the cache by the Cache Memory Hierarchical Star Net (C-HSN) method. Each cluster is provided with the 2 CSWs, and each CSW can connect 4 caches. The CSW switches any of the cache paths to which the channel adapter or disk adapter is to be connected through arbitration.
CTG — Consistency Group.
CTL — Controller module.
CTN — Coordinated Timing Network.
CU — Control Unit. Refers to a storage subsystem; the hexadecimal number to which 256 LDEVs may be assigned.
CUDG — Control Unit Diagnostics. Internal system tests.
CUoD — Capacity Upgrade on Demand.
CV — Custom Volume.
CVS ― Customizable Volume Size. Software used to create custom volume sizes. Marketed under the name Virtual LVI (VLVI) and Virtual LUN (VLUN).
CWDM — Coarse Wavelength Division Multiplexing.
CXRC — Coupled z/OS Global Mirror.
-back to top-
—D—
DA — Device Adapter.
DACL — Discretionary access control list (ACL). The part of a security descriptor that stores access rights for users and groups.
DAD — Device Address Domain. Indicates a site of the same device number automation support function. If several hosts on the same site have the same device number system, they have the same name.
Page G-5
DAP — Data Access Path. Also known as Zero Copy Failover (ZCF).
DAS — Direct Attached Storage.
DASD — Direct Access Storage Device.
Data block — A fixed-size unit of data that is transferred together. For example, the X-modem protocol transfers blocks of 128 bytes. In general, the larger the block size, the faster the data transfer rate.
Data Duplication — Software duplicates data, as in remote copy or PiT snapshots. Maintains 2 copies of data.
Data Integrity — Assurance that information will be protected from modification and corruption.
Data Lifecycle Management — An approach to information and storage management. The policies, processes, practices, services and tools used to align the business value of data with the most appropriate and cost-effective storage infrastructure from the time data is created through its final disposition. Data is aligned with business requirements through management policies and service levels associated with performance, availability, recoverability, cost, and whatever parameters the organization defines as critical to its operations.
Data Migration — The process of moving data from 1 storage device to another. In this context, data migration is the same as Hierarchical Storage Management (HSM).
Data Pipe or Data Stream — The connection set up between the MediaAgent, source or destination server is called a Data Pipe or more commonly a Data Stream.
Data Pool — A volume containing differential data only.
Data Protection Directive — A major compliance and privacy protection initiative within the European Union (EU) that applies to cloud computing. Includes the Safe Harbor Agreement.
Data Stream — CommVault's patented high performance data mover used to move data back and forth between a data source and a MediaAgent or between 2 MediaAgents.
Data Striping — Disk array data mapping technique in which fixed-length sequences of virtual disk data addresses are mapped to sequences of member disk addresses in a regular rotating pattern.
Data Transfer Rate (DTR) — The speed at which data can be transferred. Measured in kilobytes per second for a CD-ROM drive, in bits per second for a modem, and in megabytes per second for a hard drive. Also often called data rate.
DBL — Drive box.
DBMS — Data Base Management System.
DBX — Drive box.
DCA ― Data Cache Adapter.
DCTL — Direct coupled transistor logic.
DDL — Database Definition Language.
DDM — Disk Drive Module.
DDNS — Dynamic DNS.
DDR3 — Double data rate 3.
DE — Data Exchange Software.
Device Management — Processes that configure and manage storage systems.
DFS — Microsoft Distributed File System.
DFSMS — Data Facility Storage Management Subsystem.
DFSM SDM — Data Facility Storage Management Subsystem System Data Mover.
DFSMSdfp — Data Facility Storage Management Subsystem Data Facility Product.
DFSMSdss — Data Facility Storage Management Subsystem Data Set Services.
DFSMShsm — Data Facility Storage Management Subsystem Hierarchical Storage Manager.
DFSMSrmm — Data Facility Storage Management Subsystem Removable Media Manager.
DFSMStvs — Data Facility Storage Management Subsystem Transactional VSAM Services.
DFW — DASD Fast Write.
DICOM — Digital Imaging and Communications in Medicine.
DIMM — Dual In-line Memory Module.
Direct Access Storage Device (DASD) — A type of storage device, in which bits of data are stored at precise locations, enabling the computer to retrieve information directly without having to scan a series of records.
Page G-6
Direct Attached Storage (DAS) — Storage that is directly attached to the application or file server. No other device on the network can access the stored data.
Director class switches — Larger switches often used as the core of large switched fabrics.
Disaster Recovery Plan (DRP) — A plan that describes how an organization will deal with potential disasters. It may include the precautions taken to either maintain or quickly resume mission-critical functions. Sometimes also referred to as a Business Continuity Plan.
Disk Administrator — An administrative tool that displays the actual LU storage configuration.
Disk Array — A linked group of 1 or more physical independent hard disk drives generally used to replace larger, single disk drive systems. The most common disk arrays are in daisy chain configuration or implement RAID (Redundant Array of Independent Disks) technology. A disk array may contain several disk drive trays, and is structured to improve speed and increase protection against loss of data. Disk arrays organize their data storage into Logical Units (LUs), which appear as linear block spaces to their clients. A small disk array, with a few disks, might support up to 8 LUs; a large one, with hundreds of disk drives, can support thousands.
DKA ― Disk Adapter. Also called an array control processor (ACP). It provides the control functions for data transfer between drives and cache. The DKA contains DRR (Data Recover and Reconstruct), a parity generator circuit. Replaced by DKB in some cases.
DKB — Disk Board. Updated DKA for Hitachi Unified Storage VM and additional enterprise components.
DKC ― Disk Controller Unit. In a multi-frame configuration, the frame that contains the front end (control and memory components).
DKCMN ― Disk Controller Monitor. Monitors temperature and power status throughout the machine.
DKF ― Fibre disk adapter. Another term for a DKA.
DKU — Disk Array Frame or Disk Unit. In a multi-frame configuration, a frame that contains hard disk units (HDUs).
DKUPS — Disk Unit Power Supply.
DLIBs — Distribution Libraries.
DKUP — Disk Unit Power Supply.
DLM — Data Lifecycle Management.
DMA — Direct Memory Access.
DM-LU — Differential Management Logical Unit. DM-LU is used for saving management information of the copy functions in the cache.
DMP — Disk Master Program.
DMT — Dynamic Mapping Table.
DMTF — Distributed Management Task Force. A standards organization active in cloud computing.
DNS — Domain Name System.
DOC — Deal Operations Center.
Domain — A number of related storage array groups.
DOO — Degraded Operations Objective.
DP — Dynamic Provisioning (pool).
DP-VOL — Dynamic Provisioning Virtual Volume.
DPL — (1) (Dynamic) Data Protection Level or (2) Denied Persons List.
DR — Disaster Recovery.
DRAC — Dell Remote Access Controller.
DRAM — Dynamic random access memory.
DRP — Disaster Recovery Plan.
DRR — Data Recover and Reconstruct. Data Parity Generator chip on DKA.
DRV — Dynamic Reallocation Volume.
DSB — Dynamic Super Block.
DSF — Device Support Facility.
DSF INIT — Device Support Facility Initialization (for DASD).
DSP — Disk Slave Program.
DT — Disaster tolerance.
DTA — Data adapter and path to cache-switches.
DTR — Data Transfer Rate.
DVE — Dynamic Volume Expansion.
DW — Duplex Write.
Page G-7
DWDM — Dense Wavelength Division Multiplexing.
DWL — Duplex Write Line or Dynamic Workspace Linking.
-back to top-
—E—
EAL — Evaluation Assurance Level (EAL1 through EAL7). The EAL of an IT product or system is a numerical security grade assigned following the completion of a Common Criteria security evaluation, an international standard in effect since 1999.
EAV — Extended Address Volume.
EB — Exabyte.
EC — Enterprise Class (in contrast with BC, Business Class).
ECC — Error Checking and Correction.
ECC.DDR SDRAM — Error Correction Code Double Data Rate Synchronous Dynamic RAM Memory.
ECM — Extended Control Memory.
ECN — Engineering Change Notice.
E-COPY — Serverless or LAN free backup.
EFI — Extensible Firmware Interface. EFI is a specification that defines a software interface between an operating system and platform firmware. EFI runs on top of BIOS when a LPAR is activated.
EHR — Electronic Health Record.
EIG — Enterprise Information Governance.
EMIF — ESCON Multiple Image Facility.
EMPI — Electronic Master Patient Identifier. Also known as MPI.
Emulation — In the context of Hitachi Data Systems enterprise storage, emulation is the logical partitioning of an Array Group into logical devices.
EMR — Electronic Medical Record.
ENC — Enclosure or Enclosure Controller. The units that connect the controllers with the Fibre Channel disks. They also allow for online extending a system by adding RKAs.
EOF — End of Field.
EOL — End of Life.
EPO — Emergency Power Off.
EREP — Error REPorting and Printing.
ERP — Enterprise Resource Planning.
ESA — Enterprise Systems Architecture.
ESB — Enterprise Service Bus.
ESC — Error Source Code.
ESD — Enterprise Systems Division (of Hitachi).
ESCD — ESCON Director.
ESCON ― Enterprise Systems Connection. An input/output (I/O) interface for mainframe computer connections to storage devices developed by IBM.
ESDS — Entry Sequence Data Set.
ESS — Enterprise Storage Server.
ESW — Express Switch or E Switch. Also referred to as the Grid Switch (GSW).
Ethernet — A local area network (LAN) architecture that supports clients and servers and uses twisted pair cables for connectivity.
ETR — External Time Reference (device).
EVS — Enterprise Virtual Server.
Exabyte (EB) — A measurement of data or data storage. 1EB = 1,024PB.
EXCP — Execute Channel Program.
ExSA — Extended Serial Adapter.
-back to top-
—F—
FaaS — Failure as a Service. A proposed business model for cloud computing in which large-scale, online failure drills are provided as a service in order to test real cloud deployments. Concept developed by the College of Engineering at the University of California, Berkeley in 2011.
Fabric — The hardware that connects workstations and servers to storage devices in a SAN is referred to as a "fabric." The SAN fabric enables any-server-to-any-storage device connectivity through the use of Fibre Channel switching technology.
Failback — The restoration of a failed system share of a load to a replacement component. For example, when a failed controller in a redundant configuration is replaced, the devices that were originally controlled by the failed controller are usually failed back to the replacement controller to restore the I/O balance, and to restore failure tolerance.
Page G-8
Similarly, when a defective fan or power supply is replaced, its load, previously borne by a redundant component, can be failed back to the replacement part.
Failed over — A mode of operation for failure-tolerant systems in which a component has failed and its function has been assumed by a redundant component. A system that protects against single failures operating in failed over mode is not failure tolerant, as failure of the redundant component may render the system unable to function. Some systems (e.g., clusters) are able to tolerate more than 1 failure; these remain failure tolerant until no redundant component is available to protect against further failures.
Failover — A backup operation that automatically switches to a standby database server or network if the primary system fails, or is temporarily shut down for servicing. Failover is an important fault tolerance function of mission-critical systems that rely on constant accessibility. Also called path failover.
Failure tolerance — The ability of a system to continue to perform its function, or to perform it at a reduced performance level, when 1 or more of its components has failed. Failure tolerance in disk subsystems is often achieved by including redundant instances of components whose failure would make the system inoperable, coupled with facilities that allow the redundant components to assume the function of failed ones.
FAIS — Fabric Application Interface Standard.
FAL — File Access Library.
FAT — File Allocation Table.
Fault Tolerant — Describes a computer system or component designed so that, in the event of a component failure, a backup component or procedure can immediately take its place with no loss of service. Fault tolerance can be provided with software, embedded in hardware, or provided by a hybrid combination.
FBA — Fixed-block Architecture. Physical disk sector mapping.
FBA/CKD Conversion — The process of converting open-system data in FBA format to mainframe data in CKD format.
FBUS — Fast I/O Bus.
FC ― Fibre Channel or Field-Change (microcode update). A technology for transmitting data between computer devices; a set of standards for a serial I/O bus capable of transferring data between 2 ports.
FC RKAJ — Fibre Channel Rack Additional. This module system acronym refers to an additional rack unit that houses additional hard drives exceeding the capacity of the core RK unit.
FC-0 ― Lowest layer on the fibre channel transport. This layer represents the physical media.
FC-1 ― This layer contains the 8b/10b encoding scheme.
FC-2 ― This layer handles framing and protocol, frame format, sequence/exchange management and ordered set usage.
FC-3 ― This layer contains common services used by multiple N_Ports in a node.
FC-4 ― This layer handles standards and profiles for mapping upper level protocols like SCSI and IP onto the Fibre Channel Protocol.
FCA ― Fibre Adapter. Fibre interface card. Controls transmission of fibre packets.
FC-AL — Fibre Channel Arbitrated Loop. A serial data transfer architecture developed by a consortium of computer and mass storage device manufacturers, and now being standardized by ANSI. FC-AL was designed for new mass storage devices and other peripheral devices that require very high bandwidth. Using optical fiber to connect devices, FC-AL supports full-duplex data transfer rates of 100MBps. FC-AL is compatible with SCSI for high-performance storage systems.
FCC — Federal Communications Commission.
FCIP — Fibre Channel over IP. A network storage technology that combines the features of Fibre Channel and the Internet Protocol (IP) to connect distributed SANs over large distances. FCIP is considered a tunneling protocol, as it makes a transparent point-to-point connection between geographically separated SANs over IP networks. FCIP relies on TCP/IP services to establish connectivity between remote SANs over LANs, MANs, or WANs. An advantage of FCIP is that it can use TCP/IP as the transport while keeping Fibre Channel fabric services intact.
Page G-9
FCoE — Fibre Channel over Ethernet. An encapsulation of Fibre Channel frames over Ethernet networks.
FCP — Fibre Channel Protocol.
FC-P2P — Fibre Channel Point-to-Point.
FCSE — Flashcopy Space Efficiency.
FC-SW — Fibre Channel Switched.
FCU — File Conversion Utility.
FD — Floppy Disk or Floppy Drive.
FDDI — Fiber Distributed Data Interface.
FDR — Fast Dump/Restore.
FE — Field Engineer.
FED — (Channel) Front End Director.
Fibre Channel — A serial data transfer architecture developed by a consortium of computer and mass storage device manufacturers and now being standardized by ANSI. The most prominent Fibre Channel standard is Fibre Channel Arbitrated Loop (FC-AL).
FICON — Fiber Connectivity. A high-speed input/output (I/O) interface for mainframe computer connections to storage devices. As part of IBM's S/390 server, FICON channels increase I/O capacity through the combination of a new architecture and faster physical link rates to make them up to 8 times as efficient as ESCON (Enterprise System Connection), IBM's previous fiber optic channel standard.
FIPP — Fair Information Practice Principles. Guidelines for the collection and use of personal information created by the United States Federal Trade Commission (FTC).
FISMA — Federal Information Security Management Act of 2002. A major compliance and privacy protection law that applies to information systems and cloud computing. Enacted in the United States of America in 2002.
FLGFAN — Front Logic Box Fan Assembly.
FLOGIC Box — Front Logic Box.
FM — Flash Memory. Each microprocessor has FM. FM is non-volatile memory that contains microcode.
FOP — Fibre Optic Processor or fibre open.
FPC — Failure Parts Code or Fibre Channel Protocol Chip.
FQDN — Fully Qualified Domain Name.
FPGA — Field Programmable Gate Array.
Frames — An ordered vector of words that is the basic unit of data transmission in a Fibre Channel network.
Front end — In client/server applications, the client part of the program is often called the front end and the server part is called the back end.
FRU — Field Replaceable Unit.
FS — File System.
FSA — File System Module-A.
FSB — File System Module-B.
FSI — Financial Services Industries.
FSM — File System Module.
FSW — Fibre Channel Interface Switch PCB. A board that provides the physical interface (cable connectors) between the ACP ports and the disks housed in a given disk drive.
FTP — File Transfer Protocol. A client-server protocol that allows a user on 1 computer to transfer files to and from another computer over a TCP/IP network.
FWD — Fast Write Differential.
—G—
GA — General availability.
GARD — General Available Restricted Distribution.
Gb — Gigabit.
GB — Gigabyte.
Gb/sec — Gigabit per second.
GB/sec — Gigabyte per second.
GbE — Gigabit Ethernet.
Gbps — Gigabit per second.
GBps — Gigabyte per second.
GBIC — Gigabit Interface Converter.
GCMI — Global Competitive and Marketing Intelligence (Hitachi).
GDG — Generation Data Group.
GDPS — Geographically Dispersed Parallel Sysplex.
GID — Group Identifier within the UNIX security model.
gigE — Gigabit Ethernet.
GLM — Gigabyte Link Module.
Global Cache — Cache memory used on demand by multiple applications. Use changes dynamically, as required for READ performance between hosts/applications/LUs.
GPFS — General Parallel File System.
GSC — Global Support Center.
GSI — Global Systems Integrator.
GSS — Global Solution Services.
GSSD — Global Solutions Strategy and Development.
GSW — Grid Switch Adapter. Also known as E Switch (Express Switch).
GUI — Graphical User Interface.
GUID — Globally Unique Identifier.
—H—
H1F — Essentially the floor-mounted disk rack (also called desk side) equivalent of the RK. (See also: RK, RKA, and H2F.)
H2F — Essentially the floor-mounted disk rack (also called desk side) add-on equivalent similar to the RKA. There is a limitation of only 1 H2F that can be added to the core RK floor-mounted unit. See also: RK, RKA, and H1F.
HA — High Availability.
Hadoop — Apache Hadoop is an open-source software framework for data storage and large-scale processing of data-sets on clusters of hardware.
HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
HBA — Host Bus Adapter. An I/O adapter that sits between the host computer's bus and the Fibre Channel loop and manages the transfer of information between the 2 channels. In order to minimize the impact on host processor performance, the host bus adapter performs many low-level interface functions automatically or with minimal processor involvement.
HCA — Host Channel Adapter.
HCD — Hardware Configuration Definition.
HD — Hard Disk.
HDA — Head Disk Assembly.
HDD — Hard Disk Drive. A spindle of hard disk platters that make up a hard drive, which is a unit of physical storage within a subsystem.
HDDPWR — Hard Disk Drive Power.
HDU — Hard Disk Unit. A number of hard drives (HDDs) grouped together within a subsystem.
Head — See read/write head.
Heterogeneous — The characteristic of containing dissimilar elements. A common use of this word in information technology is to describe a product as able to contain or be part of a "heterogeneous network," consisting of different manufacturers' products that can interoperate. Heterogeneous networks are made possible by standards-conforming hardware and software interfaces used in common by different products, thus allowing them to communicate with each other. The Internet itself is an example of a heterogeneous network.
HiCAM — Hitachi Computer Products America.
HIPAA — Health Insurance Portability and Accountability Act.
HIS — (1) High Speed Interconnect. (2) Hospital Information System (clinical and financial).
HiStar — Multiple point-to-point data paths to cache.
HL7 — Health Level 7.
HLQ — High-level Qualifier.
HLS — Healthcare and Life Sciences.
HLU — Host Logical Unit.
H-LUN — Host Logical Unit Number. See LUN.
HMC — Hardware Management Console.
Homogeneous — Of the same or similar kind.
Host — Also called a server. Basically a central computer that processes end-user applications or requests.
Host LU — Host Logical Unit. See also HLU.
Host Storage Domains — Allows host pooling at the LUN level; the priority access feature lets the administrator set service levels for applications.
HP — (1) Hewlett-Packard Company or (2) High Performance.
HPC — High Performance Computing.
HSA — Hardware System Area.
HSG — Host Security Group.
HSM — Hierarchical Storage Management (see Data Migrator).
HSN — Hierarchical Star Network.
HSSDC — High Speed Serial Data Connector.
HTTP — Hyper Text Transfer Protocol.
HTTPS — Hyper Text Transfer Protocol Secure.
Hub — A common connection point for devices in a network; the device to which nodes on a multi-point bus or loop are physically connected. Hubs are commonly used to connect segments of a LAN. A hub contains multiple ports. When a packet arrives at 1 port, it is copied to the other ports so that all segments of the LAN can see all packets. A switching hub instead reads the destination address of each packet and forwards the packet only to the correct port.
Hybrid Cloud — "Hybrid cloud computing refers to the combination of external public cloud computing services and internal resources (either a private cloud or traditional infrastructure, operations and applications) in a coordinated fashion to assemble a particular solution." — Source: Gartner Research.
Hybrid Network Cloud — A composition of 2 or more clouds (private, community or public). Each cloud remains a unique entity, but they are bound together. A hybrid network cloud includes an interconnection.
Hypervisor — Also called a virtual machine manager, a hypervisor is a hardware virtualization technique that enables multiple operating systems to run concurrently on the same computer. Hypervisors are often installed on server hardware and then run the guest operating systems that act as servers. Hypervisor can also refer to the interface that is provided by Infrastructure as a Service (IaaS) in cloud computing. Leading hypervisors include VMware vSphere Hypervisor™ (ESXi), Microsoft® Hyper-V and the Xen® hypervisor.
—I—
I/F — Interface.
I/O — Input/Output. Term used to describe any program, operation, or device that transfers data to or from a computer and to or from a peripheral device.
IaaS — Infrastructure as a Service. A cloud computing business model — delivering computer infrastructure, typically a platform virtualization environment, as a service, along with raw (block) storage and networking. Rather than purchasing servers, software, data center space or network equipment, clients buy those resources as a fully outsourced service. Providers typically bill such services on a utility computing basis; the amount of resources consumed (and therefore the cost) will typically reflect the level of activity.
IBR — Incremental Block-level Replication or Intelligent Block Replication.
ICB — Integrated Cluster Bus.
ICF — Integrated Coupling Facility.
ID — Identifier.
IDE — Integrated Drive Electronics Advanced Technology. A standard designed to connect hard and removable disk drives.
IDN — Integrated Delivery Network.
IDR — Incremental Data Replication.
iFCP — Internet Fibre Channel Protocol. Allows an organization to extend Fibre Channel storage networks over the Internet by using TCP/IP. TCP is responsible for managing congestion control as well as error detection and recovery services. iFCP allows an organization to create an IP SAN fabric that minimizes the Fibre Channel fabric component and maximizes use of the company's TCP/IP infrastructure.
IFL — Integrated Facility for LINUX.
IHE — Integrating the Healthcare Enterprise.
IID — Initiator ID.
IIS — Internet Information Server.
Index Cache — Provides quick access to indexed data on the media during a browse/restore operation.
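The Hub entry above contrasts a plain repeating hub, which copies every packet to all other ports, with a switching hub, which reads the destination address and forwards only to the correct port. A minimal sketch of that difference, using a hypothetical frame and port model rather than any real device interface:

```python
# Sketch: repeating hub vs. switching hub (illustrative model only).

def hub_forward(in_port, frame, ports):
    """A plain hub repeats the frame out of every port except the one
    it arrived on -- all LAN segments see all packets."""
    return {p: frame for p in ports if p != in_port}

def switch_forward(in_port, frame, ports, mac_table):
    """A switching hub reads the destination address and forwards the
    frame only to the port the destination is known to be on."""
    dst_port = mac_table.get(frame["dst"])
    if dst_port is None:  # unknown destination: flood like a plain hub
        return hub_forward(in_port, frame, ports)
    return {dst_port: frame}

ports = [1, 2, 3, 4]
frame = {"dst": "aa:bb:cc:dd:ee:ff", "payload": b"hello"}
mac_table = {"aa:bb:cc:dd:ee:ff": 3}  # learned: that address lives on port 3

print(sorted(hub_forward(1, frame, ports)))                # [2, 3, 4]
print(sorted(switch_forward(1, frame, ports, mac_table)))  # [3]
```

The flooding fallback mirrors what real switches do for addresses they have not yet learned.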
ILM — Information Life Cycle Management.
ILO — (Hewlett-Packard) Integrated Lights-Out.
IML — Initial Microprogram Load.
IMS — Information Management System.
In-band virtualization — Refers to the location of the storage network path, between the application host servers and the storage systems. Provides both control and data along the same connection path. Also called symmetric virtualization.
INI — Initiator.
Interface — The physical and logical arrangement supporting the attachment of any device to a connector or to another device.
Internal bus — Another name for an internal data bus. Also, an expansion bus is often referred to as an internal bus.
Internal data bus — A bus that operates only within the internal circuitry of the CPU, communicating among the internal caches of memory that are part of the CPU chip's design. This bus is typically rather quick and is independent of the rest of the computer's operations.
IOC — I/O controller.
IOCDS — I/O Control Data Set.
IODF — I/O Definition file.
IOPH — I/O per hour.
IOS — I/O Supervisor.
IOSQ — Input/Output Subsystem Queue.
IP — Internet Protocol. The communications protocol that routes traffic across the Internet.
IPL — Initial Program Load.
IPSEC — IP security.
IPv6 — Internet Protocol Version 6. The latest revision of the Internet Protocol (IP).
IRR — Internal Rate of Return.
ISC — Initial shipping condition or Inter-System Communication.
iSCSI — Internet SCSI. Pronounced eye skuzzy. An IP-based standard for linking data storage devices over a network and transferring data by carrying SCSI commands over IP networks.
ISE — Integrated Scripting Environment.
iSER — iSCSI Extensions for RDMA.
ISL — Inter-Switch Link.
iSNS — Internet Storage Name Service.
ISOE — iSCSI Offload Engine.
ISP — Internet service provider.
ISPF — Interactive System Productivity Facility.
ISPF/PDF — Interactive System Productivity Facility/Program Development Facility.
ISV — Independent Software Vendor.
ITaaS — IT as a Service. A cloud computing business model. This general model is an umbrella model that entails the SPI business model (SaaS, PaaS and IaaS — Software, Platform and Infrastructure as a Service).
ITSC — Information and Telecommunications Systems Companies.
—J—
Java — A widely accepted, open systems programming language. Hitachi's enterprise software products are all accessed using Java applications. This enables storage administrators to access the Hitachi enterprise software products from any PC or workstation that runs a supported thin-client internet browser application and that has TCP/IP network access to the computer on which the software product runs.
Java VM — Java Virtual Machine.
JBOD — Just a Bunch of Disks.
JCL — Job Control Language.
JMP — Jumper. Option setting method.
JMS — Java Message Service.
JNL — Journal.
JNLG — Journal Group.
JRE — Java Runtime Environment.
JVM — Java Virtual Machine.
J-VOL — Journal Volume.
—K—
KSDS — Key Sequence Data Set.
kVA — Kilovolt Ampere.
KVM — Kernel-based Virtual Machine or Keyboard-Video Display-Mouse.
kW — Kilowatt.
—L—
LACP — Link Aggregation Control Protocol.
LAG — Link Aggregation Groups.
LAN — Local Area Network. A communications network that serves clients within a geographical area, such as a building.
LBA — Logical Block Address. A 28-bit value that maps to a specific cylinder-head-sector address on the disk.
LC — Lucent connector. Fibre Channel connector that is smaller than a simplex connector (SC).
LCDG — Link Processor Control Diagnostics.
LCM — Link Control Module.
LCP — Link Control Processor. Controls the optical links. LCP is located in the LCM.
LCSS — Logical Channel Subsystems.
LCU — Logical Control Unit.
LD — Logical Device.
LDAP — Lightweight Directory Access Protocol.
LDEV — Logical Device or Logical Device (number). A set of physical disk partitions (all or portions of 1 or more disks) that are combined so that the subsystem sees and treats them as a single area of data storage. Also called a volume. An LDEV has a specific and unique address within a subsystem. LDEVs become LUNs to an open-systems host.
LDKC — Logical Disk Controller or Logical Disk Controller Manual.
LDM — Logical Disk Manager.
LDS — Linear Data Set.
LED — Light Emitting Diode.
LFF — Large Form Factor.
LIC — Licensed Internal Code.
LIS — Laboratory Information Systems.
LLQ — Lowest Level Qualifier.
LM — Local Memory.
LMODs — Load Modules.
LNKLST — Link List.
Load balancing — The process of distributing processing and communications activity evenly across a computer network so that no single device is overwhelmed. Load balancing is especially important for networks where it is difficult to predict the number of requests that will be issued to a server. If 1 server starts to be swamped, requests are forwarded to another server with more capacity. Load balancing can also refer to the communications channels themselves.
LOC — "Locations" section of the Maintenance Manual.
Logical DKC (LDKC) — Logical Disk Controller. An internal architecture extension to the Control Unit addressing scheme that allows more LDEVs to be identified within 1 Hitachi enterprise storage system.
Longitudinal record — Patient information from birth to death.
LPAR — Logical Partition (mode).
LR — Local Router.
LRECL — Logical Record Length.
LRP — Local Router Processor.
LRU — Least Recently Used.
LSS — Logical Storage Subsystem (equivalent to LCU).
LU — Logical Unit. Mapping number of an LDEV.
LUN — Logical Unit Number. 1 or more LDEVs. Used only for open systems.
LUSE — Logical Unit Size Expansion. Feature used to create virtual LUs that are up to 36 times larger than the standard OPEN-x LUs.
LVDS — Low Voltage Differential Signal.
LVI — Logical Volume Image. Identifies a similar concept (as LUN) in the mainframe environment.
LVM — Logical Volume Manager.
—M—
MAC — Media Access Control. A MAC address is a unique identifier attached to most forms of networking equipment.
MAID — Massive array of disks.
MAN — Metropolitan Area Network. A communications network that generally covers a city or suburb. A MAN is very similar to a LAN except that it spans a geographical region such as a state. Instead of the workstations in a LAN, the
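The LBA entry above describes a flat block address standing in for a cylinder-head-sector (CHS) triple. The conventional translation (a generic formula using an example geometry, not one tied to any particular Hitachi product) is LBA = (C × heads_per_cylinder + H) × sectors_per_track + (S − 1), with sectors numbered from 1:

```python
# Sketch: classic CHS <-> LBA address mapping (example disk geometry).

HEADS = 16     # heads per cylinder (illustrative value)
SECTORS = 63   # sectors per track; sector numbering starts at 1

def chs_to_lba(c, h, s):
    """Flatten a cylinder/head/sector triple into a logical block address."""
    return (c * HEADS + h) * SECTORS + (s - 1)

def lba_to_chs(lba):
    """Recover the cylinder/head/sector triple from a logical block address."""
    c, rem = divmod(lba, HEADS * SECTORS)
    h, s0 = divmod(rem, SECTORS)
    return c, h, s0 + 1

print(chs_to_lba(0, 0, 1))               # first sector on the disk -> LBA 0
print(lba_to_chs(chs_to_lba(5, 3, 42)))  # round-trips to (5, 3, 42)
```

With 28 bits the scheme addresses 2^28 blocks, which is where the 28-bit limit mentioned in the entry comes from.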
workstations in a MAN could represent different cities in a state. For example, in the state of Texas, Dallas, Austin and San Antonio could each be a separate LAN, with all the cities connected together via a switch. This topology would indicate a MAN.
MAPI — Management Application Programming Interface.
Mapping — Conversion between 2 data addressing spaces. For example, mapping refers to the conversion between physical disk block addresses and the block addresses of the virtual disks presented to operating environments by control software.
Mb — Megabit.
MB — Megabyte.
MBA — Memory Bus Adaptor.
MBUS — Multi-CPU Bus.
MC — Multi Cabinet.
MCU — Main Control Unit, Master Control Unit, Main Disk Control Unit or Master Disk Control Unit. The local CU of a remote copy pair.
MDPL — Metadata Data Protection Level.
MediaAgent — The workhorse for all data movement. MediaAgent facilitates the transfer of data between the data source, the client computer, and the destination storage media.
Metadata — In database management systems, data files are the files that store the database information; whereas other files, such as index files and data dictionaries, store administrative information, known as metadata.
MFC — Main Failure Code.
MG — (1) Module Group. 2 (DIMM) cache memory modules that work together. (2) Migration Group. A group of volumes to be migrated together.
MGC — (3-Site) Metro/Global Mirror.
MIB — Management Information Base. A database of objects that can be monitored by a network management system. Both SNMP and RMON use standardized MIB formats that allow any SNMP and RMON tools to monitor any device defined by a MIB.
Microcode — The lowest-level instructions that directly control a microprocessor. A single machine-language instruction typically translates into several microcode instructions.
[Figure: software layers — high-level languages (Fortran, Pascal, C) above assembly language, machine language, and hardware.]
Microprogram — See Microcode.
MIF — Multiple Image Facility.
Mirror Cache OFF — Increases cache efficiency over cache data redundancy.
M-JNL — Primary journal volumes.
MM — Maintenance Manual.
MMC — Microsoft Management Console.
Mode — The state or setting of a program or device. The term mode implies a choice, meaning that you can change the setting and put the system in a different mode.
MP — Microprocessor.
MPA — Microprocessor adapter.
MPB — Microprocessor board.
MPI — (Electronic) Master Patient Identifier. Also known as EMPI.
MPIO — Multipath I/O.
MP PK — MP Package.
MPU — Microprocessor Unit.
MQE — Metadata Query Engine (Hitachi).
MS/SG — Microsoft Service Guard.
MSCS — Microsoft Cluster Server.
MSS — (1) Multiple Subchannel Set. (2) Managed Security Services.
MTBF — Mean Time Between Failure.
MTS — Multitiered Storage.
Multitenancy — In cloud computing, multitenancy is a secure way to partition the infrastructure (application, storage pool and network) so multiple customers share a single resource pool. Multitenancy is one of the key ways cloud can achieve massive economy of scale.
M-VOL — Main Volume.
MVS — Multiple Virtual Storage.
—N—
NAS — Network Attached Storage. A disk array connected to a controller that gives access to a LAN transport. It handles data at the file level.
NAT — Network Address Translation.
NDMP — Network Data Management Protocol. A protocol meant to transport data between NAS devices.
—O—
OCC — Open Cloud Consortium. A standards organization active in cloud computing.
OEM — Original Equipment Manufacturer.
OFC — Open Fibre Control.
OGF — Open Grid Forum. A standards organization active in cloud computing.
OID — Object identifier.
—P—
PAN — Personal Area Network. A communications network that transmits data wirelessly over a short distance. Bluetooth and Wi-Fi Direct are examples of personal area networks.
PAP — Password Authentication Protocol.
Parity — A technique of checking whether data has been lost or written over when it is moved from 1 place in storage to another or when it is transmitted between computers.
Parity Group — Also called an array group. This is a group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Partitioned cache memory — Separate workloads in a "storage consolidated" system by dividing cache into individually managed multiple partitions. Then customize the partition to match the I/O characteristics of assigned LUs.
PAT — Port Address Translation.
PATA — Parallel ATA.
Path — Also referred to as a transmission channel, the path between 2 nodes of a network that a data communication follows. The term can refer to the physical cabling that connects the nodes on a network, the signal that is communicated over the pathway or a sub-channel in a carrier frequency.
Path failover — See Failover.
PAV — Parallel Access Volumes.
PAWS — Protect Against Wrapped Sequences.
PB — Petabyte.
PBC — Port By-pass Circuit.
PCB — Printed Circuit Board.
PCHIDS — Physical Channel Path Identifiers.
PCI — Power Control Interface.
PCI CON — Power Control Interface Connector Board.
PCI DSS — Payment Card Industry Data Security Standard.
PCIe — Peripheral Component Interconnect Express.
PD — Product Detail.
PDEV — Physical Device.
PDM — Policy based Data Migration or Primary Data Migrator.
PDS — Partitioned Data Set.
PDSE — Partitioned Data Set Extended.
Performance — Speed of access or the delivery of information.
Petabyte (PB) — A measurement of capacity — the amount of data that a drive or storage system can store after formatting. 1PB = 1,024TB.
PFA — Predictive Failure Analysis.
PFTaaS — Private File Tiering as a Service. A cloud computing business model.
PGP — Pretty Good Privacy (encryption).
PGR — Persistent Group Reserve.
PI — Product Interval.
PIR — Performance Information Report.
PiT — Point-in-Time.
PK — Package (see PCB).
PL — Platter. The circular disk on which the magnetic data is stored. Also called motherboard or backplane.
PM — Package Memory.
POC — Proof of concept.
Port — In TCP/IP and UDP networks, an endpoint to a logical connection. The port number identifies what type of port it is. For example, port 80 is used for HTTP traffic.
POSIX — Portable Operating System Interface for UNIX. A set of standards that defines an application programming interface (API) for software designed to run under heterogeneous operating systems.
PP — Program product.
P-P — Point-to-point; also P2P.
PPRC — Peer-to-Peer Remote Copy.
Private Cloud — A type of cloud computing defined by shared capabilities within a single company; modest economies of scale and less automation. Infrastructure and data reside inside the company's data center behind a firewall. Comprised of licensed software tools rather than on-going services. Example: An organization implements its own virtual, scalable cloud, and business units are charged on a per-use basis.
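The Parity entry can be made concrete with the simplest case, a single even-parity bit per byte: the writer records a bit that makes the total number of 1-bits even, and the reader recomputes it to detect a single flipped bit. An illustrative sketch, not a real storage or transmission protocol:

```python
# Sketch: even parity over a byte -- detects any single-bit error.

def parity_bit(byte):
    """Return 0 or 1 so that the byte plus parity has an even count of 1-bits."""
    return bin(byte).count("1") % 2

def check(byte, stored_parity):
    """True if the byte still matches the parity recorded when it was written."""
    return parity_bit(byte) == stored_parity

data = 0b10110010            # four 1-bits -> parity bit 0
p = parity_bit(data)
print(check(data, p))        # True: data arrived intact
corrupted = data ^ 0b1000    # one bit flipped in transit
print(check(corrupted, p))   # False: the error is detected
```

RAID parity (see the RAID entries) generalizes the same XOR idea across whole disk stripes rather than single bits.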
Private Network Cloud — A type of cloud network with 3 characteristics: (1) Operated solely for a single organization, (2) Managed internally or by a third-party, (3) Hosted internally or externally.
PR/SM — Processor Resource/System Manager.
Protocol — A convention or standard that enables the communication between 2 computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics, and synchronization of communication. Protocols may be implemented by hardware, software, or a combination of the 2. At the lowest level, a protocol defines the behavior of a hardware connection.
Provisioning — The process of allocating storage resources and assigning storage capacity for an application, usually in the form of server disk drive space, in order to optimize the performance of a storage area network (SAN). Traditionally, this has been done by the SAN administrator, and it can be a tedious process. In recent years, automated storage provisioning (also called auto-provisioning) programs have become available. These programs can reduce the time required for the storage provisioning process, and can free the administrator from the often distasteful task of performing this chore manually.
PS — Power Supply.
PSA — Partition Storage Administrator.
PSSC — Perl Silicon Server Control.
PSU — Power Supply Unit.
PTAM — Pickup Truck Access Method.
PTF — Program Temporary Fixes.
PTR — Pointer.
PU — Processing Unit.
Public Cloud — Resources, such as applications and storage, available to the general public over the Internet.
P-VOL — Primary Volume.
—Q—
QoS — Quality of Service. In the field of computer networking, the traffic engineering term quality of service (QoS) refers to resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.
QSAM — Queued Sequential Access Method.
—R—
RACF — Resource Access Control Facility.
RAID — Redundant Array of Independent Disks, or Redundant Array of Inexpensive Disks. A group of disks that look like a single volume to the server. RAID improves performance by pulling a single stripe of data from multiple disks, and improves fault-tolerance either through mirroring or parity checking, and it is a component of a customer's SLA.
RAID-0 — Striped array with no parity.
RAID-1 — Mirrored array and duplexing.
RAID-3 — Striped array with typically non-rotating parity, optimized for long, single-threaded transfers.
RAID-4 — Striped array with typically non-rotating parity, optimized for short, multi-threaded transfers.
RAID-5 — Striped array with typically rotating parity, optimized for short, multithreaded transfers.
RAID-6 — Similar to RAID-5, but with dual rotating parity physical disks, tolerating 2 physical disk failures.
RAIN — Redundant (or Reliable) Array of Independent Nodes (architecture).
RAM — Random Access Memory.
RAM DISK — A LUN held entirely in the cache area.
RAS — Reliability, Availability, and Serviceability or Row Address Strobe.
RBAC — Role Base Access Control.
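The RAID-5 entry above says a striped array with rotating parity survives the loss of one disk. The parity stripe is the bytewise XOR of the data stripes, so a lost stripe is simply the XOR of everything that survived. A minimal sketch of the principle, not a real controller implementation:

```python
# Sketch: RAID-5 style XOR parity -- rebuild one lost stripe from the rest.

def xor_stripes(stripes):
    """Bytewise XOR of equally sized stripes (the parity computation)."""
    out = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, b in enumerate(stripe):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]  # data stripes on three disks
parity = xor_stripes(data)          # parity stripe on a fourth disk

# Disk holding b"BBBB" fails: reconstruct it from the survivors plus parity.
rebuilt = xor_stripes([data[0], data[2], parity])
print(rebuilt)  # b'BBBB'
```

RAID-6 extends this with a second, independently computed parity stripe so that any 2 failures remain recoverable.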
RCUT — RCU Target.
RD/WR — Read/Write.
RDM — Raw Disk Mapped.
RDMA — Remote Direct Memory Access.
RDP — Remote Desktop Protocol.
RDW — Record Descriptor Word.
Read/Write Head — Reads and writes data to the platters. Typically there is 1 head per platter side, and each head is attached to a single actuator shaft.
RECFM — Record Format.
Redundant — Describes computer or network system components, such as fans, hard disk drives, servers, operating systems, switches, and telecommunication links, that are installed to back up primary resources in case they fail. A well-known example of a redundant system is the redundant array of independent disks (RAID). Redundancy contributes to the fault tolerance of a system.
Redundancy — Backing up a component to help ensure high availability.
Reliability — (1) Level of assurance that data will not be lost or degraded over time. (2) An attribute of any computer component (software, hardware, or a network) that consistently performs according to its specifications.
REST — Representational State Transfer.
REXX — Restructured extended executor.
RID — Relative Identifier that uniquely identifies a user or group within a Microsoft Windows domain.
RIS — Radiology Information System.
RISC — Reduced Instruction Set Computer.
RIU — Radiology Imaging Unit.
R-JNL — Secondary journal volumes.
RK — Rack additional.
RKAJAT — Rack Additional SATA disk tray.
RKAK — Expansion unit.
RLGFAN — Rear Logic Box Fan Assembly.
RLOGIC BOX — Rear Logic Box.
RMF — Resource Measurement Facility.
RMI — Remote Method Invocation. A way that a programmer, using the Java programming language and development environment, can write object-oriented programming in which objects on different computers can interact in a distributed network. RMI is the Java version of what is generally known as an RPC (remote procedure call), but with the ability to pass 1 or more objects along with the request.
RndRD — Random read.
ROA — Return on Asset.
RoHS — Restriction of Hazardous Substances (in Electrical and Electronic Equipment).
ROI — Return on Investment.
ROM — Read Only Memory.
Round robin mode — A load balancing technique which distributes data packets equally among the available paths. Round robin DNS is usually used for balancing the load of geographically distributed Web servers. It works on a rotating basis: one server IP address is handed out, then moves to the back of the list; the next server IP address is handed out, and then it moves to the end of the list; and so on, depending on the number of servers being used. This works in a looping fashion.
Router — A computer networking device that forwards data packets toward their destinations, through a process known as routing.
RPC — Remote procedure call.
RPO — Recovery Point Objective. The point in time that recovered data should match.
RPSFAN — Rear Power Supply Fan Assembly.
RRDS — Relative Record Data Set.
RS CON — RS232C/RS422 Interface Connector.
RSD — RAID Storage Division (of Hitachi).
R-SIM — Remote Service Information Message.
RSM — Real Storage Manager.
RTM — Recovery Termination Manager.
RTO — Recovery Time Objective. The length of time that can be tolerated between a disaster and recovery of data.
R-VOL — Remote Volume.
R/W — Read/Write.
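The rotation described under Round robin mode — hand out the address at the front of the list, move it to the back, repeat — can be sketched in a few lines, using hypothetical server addresses:

```python
# Sketch: round robin address rotation, as used by round robin DNS.
from collections import deque

servers = deque(["10.0.0.1", "10.0.0.2", "10.0.0.3"])  # example addresses

def next_server(pool):
    """Hand out the address at the front, then move it to the back."""
    addr = pool[0]
    pool.rotate(-1)  # front of the deque moves to the back
    return addr

handed_out = [next_server(servers) for _ in range(5)]
print(handed_out)  # ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1', '10.0.0.2']
```

The looping behavior is exactly the "handed out, then moves to the back of the list" cycle the entry describes; real DNS round robin additionally weights or randomizes the rotation.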
—S— SBM — Solutions Business Manager.
SA z/OS — System Automation for z/OS.
SAA — Share Access Authentication. The process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected.
SaaS — Software as a Service. A cloud computing business model. SaaS is a software delivery model in which software and its associated data are hosted centrally in a cloud and are typically accessed by users through a thin client, such as a web browser, via the Internet. SaaS has become a common delivery model for most business applications, including accounting, customer relationship management (CRM), enterprise resource planning (ERP), invoicing, human resource management (HRM), content management (CM) and service desk management, to name the most common software that runs in the cloud. It is the fastest growing service in the cloud market today. SaaS performs best for relatively simple tasks in IT-constrained organizations.
SACK — Sequential Acknowledge.
SACL — System ACL. The part of a security descriptor that stores system auditing information.
SAIN — SAN-attached Array of Independent Nodes (architecture).
SAN ― Storage Area Network. A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SAP — (1) System Assist Processor (for I/O processing), or (2) a German software company.
SAP HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
SARD — System Assurance Registration Document.
SAS — Serial Attached SCSI.
SATA — Serial ATA. Serial Advanced Technology Attachment, a standard for connecting hard drives to computer systems. SATA is based on serial signaling technology, unlike IDE (Integrated Drive Electronics) hard drives, which use parallel signaling.
SBSC — Smart Business Storage Cloud.
SBX — Small Box (Small Form Factor).
SC — (1) Simplex connector. A Fibre Channel connector that is larger than a Lucent connector (LC). (2) Single Cabinet.
SCM — Supply Chain Management.
SCP — Secure Copy.
SCSI — Small Computer Systems Interface. A parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15 to 25 meters.
SD — Software Division (of Hitachi).
SDH — Synchronous Digital Hierarchy.
SDM — System Data Mover.
SDSF — Spool Display and Search Facility.
Sector — A subdivision of a track of a magnetic disk that stores a fixed amount of data.
SEL — System Event Log.
Selectable Segment Size — Can be set per partition.
Selectable Stripe Size — Increases performance by customizing the disk access size.
SENC — The SATA (Serial ATA) version of the ENC. ENCs and SENCs are complete microprocessor systems in their own right and occasionally require a firmware upgrade.
SeqRD — Sequential read.
Serial Transmission — The transmission of data bits in sequential order over a single line.
Server — A central computer that processes end-user applications or requests; also called a host.
Server Virtualization — The masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users; the implementation of multiple isolated virtual environments in one physical server.
Service-Level Agreement — SLA. A contract between a network service provider and a customer that specifies, usually in measurable terms, what services the network service provider will furnish. Many Internet service providers (ISPs) provide their customers with an SLA. More recently, IT departments in major enterprises have
adopted the idea of writing a service-level agreement so that services for their customers (users in other departments within the enterprise) can be measured, justified, and perhaps compared with those of outsourcing network providers.
Some metrics that SLAs may specify include:
• The percentage of the time services will be available
• The number of users that can be served simultaneously
• Specific performance benchmarks to which actual performance will be periodically compared
• The schedule for notification in advance of network changes that may affect users
• Help desk response time for various classes of problems
• Dial-in access availability
• Usage statistics that will be provided
Service-Level Objective — SLO. Individual performance metrics built into an SLA. Each SLO corresponds to a single performance characteristic relevant to the delivery of an overall service. Some examples of SLOs include: system availability, help desk incident resolution time, and application response time.
SES — SCSI Enclosure Services.
SFF — Small Form Factor.
SFI — Storage Facility Image.
SFM — Sysplex Failure Management.
SFP — Small Form-Factor Pluggable module host connector. A specification for a new generation of optical modular transceivers. The devices are designed for use with small form factor (SFF) connectors, offer high speed and physical compactness, and are hot-swappable.
SHSN — Shared memory Hierarchical Star Network.
SID — Security Identifier. A user or group identifier within the Microsoft Windows security model.
SIGP — Signal Processor.
SIM — (1) Service Information Message. A message reporting an error that contains fix guidance information. (2) Storage Interface Module. (3) Subscriber Identity Module.
SIM RC — Service (or system) Information Message Reference Code.
SIMM — Single In-line Memory Module.
SLA — Service Level Agreement.
SLO — Service Level Objective.
SLRP — Storage Logical Partition.
SM ― Shared Memory or Shared Memory Module. Stores the shared information about the subsystem and the cache control information (director names). This type of information is used for the exclusive control of the subsystem. Like cache, shared memory is controlled as 2 areas of memory and is fully non-volatile (sustained for approximately 7 days).
SM PATH — Shared Memory Access Path. The access path from the processors of CHA and DKA PCBs to Shared Memory.
SMB/CIFS — Server Message Block Protocol/Common Internet File System.
SMC — Shared Memory Control.
SME — Small and Medium Enterprise.
SMF — System Management Facility.
SMI-S — Storage Management Initiative Specification.
SMP — Symmetric Multiprocessing.
SMP/E — System Modification Program/Extended. An IBM-licensed program used to install software and software changes on z/OS systems.
SMS — System Managed Storage.
SMTP — Simple Mail Transfer Protocol.
SMU — System Management Unit.
Snapshot Image — A logical duplicate volume (V-VOL) of the primary volume. It is an internal volume intended for restoration.
SNIA — Storage Networking Industry Association. An association of producers and consumers of storage networking products whose goal is to further storage networking technology and applications. Active in cloud computing.
SNMP — Simple Network Management Protocol. A TCP/IP protocol designed for the management of networks over TCP/IP, using agents and stations.
SOA — Service Oriented Architecture.
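The SLA and SLO entries above define availability targets in measurable terms. An availability percentage converts directly into a downtime budget per period; the sketch below is illustrative arithmetic, and the example targets are common industry figures rather than anything specified in this course:

```python
# Convert an SLA availability target into an allowed-downtime budget.
# The 99.9% and 99.99% targets below are illustrative examples only.

def allowed_downtime_minutes(availability_pct: float,
                             period_hours: float = 30 * 24) -> float:
    """Downtime budget, in minutes, over one period (default: a 30-day month)."""
    return (1 - availability_pct / 100) * period_hours * 60

print(round(allowed_downtime_minutes(99.9), 2))   # 43.2 minutes per month
print(round(allowed_downtime_minutes(99.99), 2))  # 4.32 minutes per month
```

Each additional "nine" of availability shrinks the monthly downtime budget by a factor of ten, which is why SLOs are usually stated as percentages rather than absolute times.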
SOAP — Simple Object Access Protocol. A way for a program running in one kind of operating system (such as Windows) to communicate with a program in the same or another kind of operating system (such as Linux) by using the World Wide Web's Hypertext Transfer Protocol (HTTP) and Extensible Markup Language (XML) as the mechanisms for information exchange.
Socket — In UNIX and some other operating systems, a socket is a software object that connects an application to a network protocol. In UNIX, for example, a program can send and receive TCP/IP messages by opening a socket and reading and writing data to and from it. This simplifies program development because the programmer need only worry about manipulating the socket and can rely on the operating system to actually transport messages across the network correctly. Note that a socket in this sense is completely soft; it is a software object, not a physical component.
SOM — System Option Mode.
SONET — Synchronous Optical Network.
SOSS — Service Oriented Storage Solutions.
SPaaS — SharePoint as a Service. A cloud computing business model.
SPAN — A section between 2 intermediate supports. See Storage Pooling.
Spare — An object reserved for the purpose of substitution for a like object in case of that object's failure.
SPC — SCSI Protocol Controller.
SpecSFS — Standard Performance Evaluation Corporation Shared File System.
SPECsfs97 — Standard Performance Evaluation Corporation (SPEC) System File Server (sfs) benchmark developed in 1997 (97).
SPI model — Software, Platform and Infrastructure as a service. A common term to describe the cloud computing "as a service" business model.
SRA — Storage Replicator Adapter.
SRDF/A — (EMC) Symmetrix Remote Data Facility Asynchronous.
SRDF/S — (EMC) Symmetrix Remote Data Facility Synchronous.
SRM — Site Recovery Manager.
SSB — Sense Byte.
SSC — SiliconServer Control.
SSCH — Start Subchannel.
SSD — Solid-State Drive or Solid-State Disk.
SSH — Secure Shell.
SSID — Storage Subsystem ID or Subsystem Identifier.
SSL — Secure Sockets Layer.
SSPC — System Storage Productivity Center.
SSUE — Split SUSpended Error.
SSUS — Split SUSpend.
SSVP — Sub Service Processor. Interfaces the SVP to the DKC.
SSW — SAS Switch.
Sticky Bit — Extended UNIX mode bit that prevents objects from being deleted from a directory by anyone other than the object's owner, the directory's owner, or the root user.
Storage Pooling — The ability to consolidate and manage storage resources across storage system enclosures so that the consolidation of many appears as a single view.
STP — Server Time Protocol.
STR — Storage and Retrieval Systems.
Striping — A RAID technique for writing a file to multiple disks on a block-by-block basis, with or without parity.
Subsystem — Hardware or software that performs a specific function within a larger system.
SVC — Supervisor Call Interruption.
SVC Interrupts — Supervisor calls.
S-VOL — (1) (ShadowImage) Source Volume for In-System Replication, or (2) (Universal Replicator) Secondary Volume.
SVP — Service Processor. A laptop computer mounted on the control frame (DKC) and used for monitoring, maintenance and administration of the subsystem.
Switch — A fabric device providing full bandwidth per port and high-speed routing of data via link-level addressing.
SWPX — Switching power supply.
SXP — SAS Expander.
Symmetric Virtualization — See In-band virtualization.
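The Socket entry above describes opening a socket and simply reading and writing data, leaving the actual transport to the operating system. A minimal, self-contained Python sketch of that idea, with a connected local socket pair standing in for a real client/server network link:

```python
# Minimal illustration of the socket abstraction: the application only
# writes to and reads from the socket object; the operating system
# handles the transport. socketpair() returns two already-connected
# sockets locally, standing in for the two ends of a network link.
import socket

a, b = socket.socketpair()          # two connected endpoints
a.sendall(b"hello over a socket")   # write one end...
data = b.recv(1024)                 # ...read the other; the OS moved the bytes
print(data.decode())                # hello over a socket
a.close()
b.close()
```

Over a real network the only differences would be how the endpoints are created (`socket.socket()` plus `connect()`/`accept()`); the read and write calls are the same, which is exactly the simplification the entry describes.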
Synchronous — Operations that have a fixed time relationship to each other. Most commonly used to denote I/O operations that occur in time sequence, that is, a successor operation does not occur until its predecessor is complete.

—T—

Target — The system component that receives a SCSI I/O command; an open device that operates at the request of the initiator.
TB — Terabyte. 1TB = 1,024GB.
TCDO — Total Cost of Data Ownership.
TCO — Total Cost of Ownership.
TCP/IP — Transmission Control Protocol over Internet Protocol.
TDCONV — Trace Dump CONVerter. A software program used to convert traces taken on the system into readable text. This information is loaded into a special spreadsheet that allows for further investigation of the data and more in-depth failure analysis.
TDMF — Transparent Data Migration Facility.
Telco or TELCO — Telecommunications Company.
TEP — Tivoli Enterprise Portal.
Terabyte (TB) — A measurement of capacity, data or data storage. 1TB = 1,024GB.
TFS — Temporary File System.
TGTLIBs — Target Libraries.
THF — Front Thermostat.
Thin Provisioning — Allows storage space to be easily allocated to servers on a just-enough and just-in-time basis.
THR — Rear Thermostat.
Throughput — The amount of data transferred from 1 place to another or processed in a specified amount of time.
Tiered Storage — The assignment of different categories of data to different types of storage media in order to reduce total storage cost. Categories may be based on levels of protection needed, performance requirements, frequency of use, and other considerations. Since assigning data to particular media may be an ongoing and complex activity, some vendors provide software for automatically managing the process based on a company-defined policy.
Tiered Storage Promotion — Moving data between tiers of storage as their availability requirements change.
TLS — Tape Library System.
TLS — Transport Layer Security.
TMP — Temporary or Test Management Program.
TOD (or ToD) — Time Of Day.
TOE — TCP Offload Engine.
Topology — The shape of a network or how it is laid out. Topologies are either physical or logical.
TPC-R — Tivoli Productivity Center for Replication.
TPF — Transaction Processing Facility.
TPOF — Tolerable Points of Failure.
Track — Circular segment of a hard disk or other storage media.
Transfer Rate — See Data Transfer Rate.
Trap — A program interrupt, usually caused by some exceptional situation in the user program. In most cases, the operating system performs some action and then returns control to the program.
TSC — Tested Storage Configuration.
TSO — Time Sharing Option.
TSO/E — Time Sharing Option/Extended.
T-VOL — (ShadowImage) Target Volume for In-System Replication.
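The Tiered Storage entries above describe software that assigns data to storage tiers automatically, based on a company-defined policy. A toy sketch of such a policy function follows; the tier names and thresholds are invented for illustration and do not come from any vendor's actual policy engine:

```python
# Toy tiering policy: choose a storage tier from access frequency and
# protection needs. Tier names and thresholds are invented examples.

def choose_tier(accesses_per_day: int, needs_replication: bool) -> str:
    """Return a tier label for a data set, per a simple made-up policy."""
    if accesses_per_day >= 100:
        return "tier1-ssd"               # hot data: fastest media
    if needs_replication:
        return "tier2-replicated-disk"   # protected but cooler data
    return "tier3-archive"               # cold data: cheapest media

print(choose_tier(500, False))   # tier1-ssd
print(choose_tier(2, True))      # tier2-replicated-disk
print(choose_tier(0, False))     # tier3-archive
```

Real tiering software evaluates policies like this continuously and moves (promotes or demotes) data as its access pattern changes, which is the activity the Tiered Storage Promotion entry names.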
UID — User Identifier within the UNIX security model.
UPS — Uninterruptible Power Supply. A power supply that includes a battery to maintain power in the event of a power outage.
UR — Universal Replicator.
UUID — Universally Unique Identifier.

—V—

vContinuum — Using the vContinuum wizard, users can push agents to primary and secondary servers, set up protection, and perform failovers and failbacks.
VCS — Veritas Cluster System.
VDEV — Virtual Device.
VDI — Virtual Desktop Infrastructure.
VHD — Virtual Hard Disk.
VHDL — VHSIC (Very-High-Speed Integrated Circuit) Hardware Description Language.
VHSIC — Very-High-Speed Integrated Circuit.
VI — Virtual Interface. A research prototype that is undergoing active development; the details of the implementation may change considerably. It is an application interface that gives user-level processes direct but protected access to network interface cards. This allows applications to bypass IP processing overheads (for example, copying data, computing checksums) and system call overheads while still preventing 1 process from accidentally or maliciously tampering with or reading data being used by another.
Virtualization — Referring to storage virtualization, virtualization is the amalgamation of multiple network storage devices into what appears to be a single storage unit. Storage virtualization is often used in a SAN and makes tasks such as archiving, backup and recovery easier and faster. Storage virtualization is usually implemented via software applications. There are many additional types of virtualization.
Virtual Private Cloud (VPC) — Private cloud existing within a shared or public cloud (for example, the Intercloud). Also known as a virtual private network cloud.
VLL — Virtual Logical Volume Image/Logical Unit Number.
VLUN — Virtual LUN. Customized volume; size chosen by user.
VLVI — Virtual Logical Volume Image. Marketing name for CVS (custom volume size).
VM — Virtual Machine.
VMDK — Virtual Machine Disk file format.
VNA — Vendor Neutral Archive.
VOJP — (Cache) Volatile Jumper.
VOLID — Volume ID.
VOLSER — Volume Serial Number.
Volume — A fixed amount of storage on a disk or tape. The term volume is often used as a synonym for the storage medium itself, but it is possible for a single disk to contain more than 1 volume or for a volume to span more than 1 disk.
VPC — Virtual Private Cloud.
VSAM — Virtual Storage Access Method.
VSD — Virtual Storage Director.
VSP — Virtual Storage Platform.
VSS — (Microsoft) Volume Shadow Copy Service.
VTL — Virtual Tape Library.
VTOC — Volume Table of Contents.
VTOCIX — Volume Table of Contents Index.
VVDS — Virtual Volume Data Set.
V-VOL — Virtual Volume.

—W—

WAN — Wide Area Network. A computing internetwork that covers a broad area or region. Contrast with PAN, LAN and MAN.
WDIR — Directory Name Object.
WDIR — Working Directory.
WDS — Working Data Set.
WebDAV — Web-based Distributed Authoring and Versioning (HTTP extensions).
WFILE — File Object or Working File.
WFS — Working File Set.
WINS — Windows Internet Naming Service.
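The UUID entry above names the identifier without showing its shape. A UUID is a 128-bit value whose canonical text form is 36 characters (8-4-4-4-12 hexadecimal digits separated by hyphens); Python's standard library makes this easy to see:

```python
# Generate and inspect a Universally Unique Identifier (UUID).
# uuid4() produces a random-based UUID; because the value is random,
# the printed identifier differs on every run.
import uuid

u = uuid.uuid4()
print(u)            # e.g. 1b4e28ba-2fa1-41d2-883f-0016d3cca427
print(len(str(u)))  # 36
print(u.version)    # 4
```

Because the 128-bit space is so large, independently generated UUIDs can be treated as unique without any central coordination, which is what makes them useful as volume, object, and session identifiers.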
WL — Wide Link.
WLM — Work Load Manager.
WORM — Write Once, Read Many.
WSDL — Web Services Description Language.
WSRM — Write Seldom, Read Many.
WTREE — Directory Tree Object or Working Tree.
WWN ― World Wide Name. A unique identifier for an open-system host. It consists of a 64-bit physical address (the IEEE 48-bit format with a 12-bit extension and a 4-bit prefix).
WWNN — World Wide Node Name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN ― World Wide Port Name. A globally unique 64-bit identifier assigned to each Fibre Channel port. A Fibre Channel port's WWPN is permitted to use any of several naming authorities. Fibre Channel specifies a Network Address Authority (NAA) to distinguish between the various name registration authorities that may be used to identify the WWPN.

—X—

XAUI — "X" = 10, AUI = Attachment Unit Interface.
XCF — Cross System Communications Facility.
XDS — Cross Enterprise Document Sharing.
XDSi — Cross Enterprise Document Sharing for Imaging.
XFI — Standard interface for connecting a 10Gb Ethernet MAC device to an XFP interface.
XFP — "X" = 10Gb Small Form Factor Pluggable.
XML — eXtensible Markup Language.
XRC — Extended Remote Copy.

—Y—

YB — Yottabyte.
Yottabyte — A highest-end measurement of data at the present time. 1YB = 1,024ZB, or 1 quadrillion GB. A recent estimate (2011) is that all the computer hard drives in the world do not contain 1YB of data.

—Z—

z/OS — z Operating System (IBM® S/390® or z/OS® environments).
z/OS NFS — (System) z/OS Network File System.
z/OSMF — (System) z/OS Management Facility.
zAAP — (System) z Application Assist Processor (for Java and XML workloads).
ZCF — Zero Copy Failover. Also known as Data Access Path (DAP).
Zettabyte (ZB) — A high-end measurement of data at the present time. 1ZB = 1,024EB.
zFS — (System) zSeries File System.
zHPF — (System) z High Performance FICON.
zIIP — (System) z Integrated Information Processor (specialty processor for databases).
Zone — A collection of Fibre Channel ports that are permitted to communicate with each other via the fabric.
Zoning — A method of subdividing a storage area network into disjoint zones, or subsets of nodes on the network. Storage area network nodes outside a zone are invisible to nodes within the zone. Moreover, with switched SANs, traffic within each zone may be physically isolated from traffic outside the zone.
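The capacity entries in this glossary (TB, Zettabyte, Yottabyte) all step by factors of 1,024. A quick arithmetic check of those figures, including the Yottabyte entry's claim that 1YB is roughly 1 quadrillion GB:

```python
# Binary capacity units step by 1,024 (2**10) per rung of the ladder:
# GB -> TB -> PB -> EB -> ZB -> YB. Verify the glossary's figures:
# 1TB = 1,024GB, 1ZB = 1,024EB, and 1YB = 2**50 GB.
GB = 2 ** 30
TB, PB, EB, ZB, YB = (GB * 1024 ** n for n in range(1, 6))

print(TB // GB)  # 1024
print(ZB // EB)  # 1024
print(YB // GB)  # 1125899906842624
```

The last figure, 2**50 GB, is about 1.1 quadrillion gigabytes, consistent with the Yottabyte entry's "1 quadrillion GB" rounding.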
Evaluating This Course
Please use the online evaluation system to help improve our courses.
https://hitachiuniversity/Web/Main
3. On the Transcript page, click the down arrow in the Active menu.
4. In the Active menu, select Completed. Your completed courses will display.
6. Click the down arrow in the View Certificate drop-down menu.