
1. What is fabric and fabric management?

Ans: A fabric is a virtual space in which all storage nodes communicate with each other over
distances. It can be created with a single switch or a group of switches connected together. Each
switch contains a unique domain identifier which is used in the address schema of the fabric.
In order to identify the nodes in a fabric, 24-bit Fibre Channel addressing is used.
Fabric services: When a device logs into a fabric, its information is maintained in a database. The
common services found in a fabric are:
Login Service
Name Service
Fabric Controller
Management Server
Fabric Management : Monitoring and managing the switches is a daily activity for most SAN
administrators. Activities include accessing the specific management software for monitoring
purposes and zoning.

2. What is ISL?
Ans: Switches are connected to each other in a fabric using Inter-switch Links (ISL).
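On a Brocade switch, for example, the fabric's ISLs can be inspected from the CLI: islshow lists
each ISL with the remote switch's domain ID and WWN, and switchshow marks ISL ports as
E-Ports:

  islshow
  switchshow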

3. Switched fabric?
Ans: Switched Fabric - Each device has a unique dedicated I/O path to the device it is
communicating with. This is accomplished by implementing a fabric switch.

4. Lun Migration?
Ans: LUN Migration Information:
•LUN Migration provides the ability to migrate data from one LUN to another dynamically.
•The target LUN assumes the identity of the source LUN.
•The source LUN is unbound when migration process is complete.
•Host access to the LUN can continue during the migration process.
•The target LUN must be the same size or larger than the source.
•The source and target LUNs do not need to be the same RAID type or disk type (FC<->ATA).
•Both LUNs and metaLUNs can be sources and targets.
•Individual component LUNs in a metaLUN cannot be migrated independently - the entire
metaLUN must be migrated as a unit.
•The migration process can be throttled.
•Reserved LUNs cannot be migrated.
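
A minimal NaviCLI sketch of starting and monitoring a migration (the SP address and LUN
numbers are placeholders; here LUN 20 is the source and LUN 35 the target):

  naviseccli -h <SPA_IP> migrate -start -source 20 -dest 35 -rate low
  naviseccli -h <SPA_IP> migrate -list

The -rate switch (low/medium/high/asap) is what throttles the migration.
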
5. What is heterogeneous?
Ans: A network that includes computers and other devices from different manufacturers. For
example, local-area networks (LANs) that connect PCs with Apple Macintosh computers are
heterogeneous.
6. What is zoning? What are the different types of zoning?
Ans: There are several configuration layers involved in granting nodes the ability to
communicate with each other:
Members - Nodes within the SAN which can be included in a zone.
Zones - Contains a set of members that can access each other. A port or a node can be a member
of multiple zones.
Zone Sets - A group of zones that can be activated or deactivated as a single entity in either a
single unit or a multi-unit fabric. Only one zone set can be active at one time per fabric. Can also
be referred to as a Zone Configuration.

In general, zoning can be divided into three categories:


WWN zoning - WWN zoning uses the unique identifiers of a node which have been recorded
in the switches to either allow or block access. A major advantage of WWN zoning is flexibility.
The SAN can be re-cabled without having to reconfigure the zone information since the WWN is
static to the port.
Port zoning - Port zoning uses physical ports to define zones. Access to data is determined by
what physical port a node is connected to. Although this method is quite secure, should recabling
occur zoning configuration information must be updated.
Mixed Zoning – Mixed zoning combines the two methods above. Using mixed zoning allows a
specific port to be tied to a node WWN. This is not a typical method.
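
As an illustration, WWN (soft) zoning on a Brocade switch could look like this (the alias names
and WWPNs are hypothetical):

  alicreate "host1_hba0", "10:00:00:00:c9:aa:bb:cc"
  alicreate "array_spa0", "50:06:01:60:41:e0:12:34"
  zonecreate "z_host1_spa0", "host1_hba0; array_spa0"
  cfgcreate "fabricA_cfg", "z_host1_spa0"
  cfgenable "fabricA_cfg"

cfgenable activates the zone set; as noted above, only one zone set can be active per fabric.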

7. What is Single HBA Zoning?


Ans: Under single-HBA zoning, each HBA is configured with its own zone. The members of the
zone consist of the HBA and one or more storage ports with the volumes that the HBA will use.
Two reasons for Single HBA Zoning include:
Cuts down on the reset time for any change made in the state of the fabric.
Only the nodes within the same zone will be forced to log back into the fabric after an RSCN
(Registered State Change Notification).

8. What is LUN Masking?


Ans: Device (LUN) Masking ensures that volume access to servers is controlled appropriately.
This prevents unauthorized or accidental use in a distributed environment.
A zone set can have multiple host HBAs and a common storage port. LUN Masking prevents
multiple hosts from trying to access the same volume presented on the common storage port.

9. What is iSNS (Internet Storage Name service)?


Ans: Each Fibre Channel Name Service message has an equivalent iSNS message. This mapping
is transparent, allowing iFCP fabrics with iSNS support to provide the same services that Fibre
Channel fabrics can.
When an iFCP or iSCSI gateway receives a Name Service ELS, it is directly converted to the
equivalent iSNS Name Service message. The gateway intercepts the response and maps any
addressing information obtained from queries to its internal address translation table before
forwarding the Name Service ELS response to the original Fibre Channel requester.

10. What is Replication? Replication Software?


Ans: Local replication is a technique for ensuring Business Continuity by making exact copies of
data. With replication, the data on the replica is identical to the data on the original at the point
in time when the replica was created.
Replica - An exact copy (in all details)
Replication - The process of reproducing data
Examples:
Copy a specific file
Copy all the data used by a database application
Copy all the data in a UNIX Volume Group (including underlying logical volumes, file
systems, etc.)
Copy data on a storage array to a remote storage array

EMC Symmetrix arrays:
– EMC TimeFinder/Mirror: full volume mirroring
– EMC TimeFinder/Clone: full volume replication
– EMC TimeFinder/Snap: pointer-based replication
EMC CLARiiON arrays:
– EMC SnapView Clone: full volume replication
– EMC SnapView Snapshot: pointer-based replication

11. CLARiiON CX3-80 Architecture?


Ans: The CLARiiON architecture includes fully redundant, hot swappable components—
meaning the system can survive the loss of a fan or a power supply, and the failed component
can be replaced without powering down the system.
The Standby Power Supplies (SPSs) will maintain power to the cache for long enough to allow
its content to be copied to a dedicated disk area (called the vault) if a power failure should occur.
Storage Processors communicate with each other over the CLARiiON Messaging Interface
(CMI) channels. They transport commands, status information, and data for write cache
mirroring between the Storage Processors. CMI is used for peer-to-peer communications in the
SAN space and may be used for I/O expansion in the NAS space.
The CX3-80 uses PCI Express as the high-speed CMI path. The PCI Express architecture is an
advanced I/O technology that delivers high bandwidth per pin, superior routing characteristics,
and improved reliability.
When more capacity is required, additional disk array enclosures containing disk modules can
be easily added. Link Control Cards (LCC) connect shelves of disks.

12. What is Control Station?


Ans: A Control Station is a dedicated, Intel processor-based management computer running EMC
Linux that provides:
Specialized software installation and upgrade portal
Management of high availability features
– Fault monitoring
– Fault recovery
– Fault Reporting (CallHome)
Management of Data Mover configuration and storage for the system configuration database
Remote diagnosis and repair

13. What is Fibre Channel?


Ans: Fibre Channel is a set of standards which define protocols for performing high-speed serial
data transfer. The standards define a layered model similar to the OSI model found in traditional
networking technology. Fibre Channel provides a standard data transport frame into which
multiple protocol types can be encapsulated. The addressing scheme used in Fibre Channel
switched fabrics will support over 16 million devices in a single fabric (the 24-bit address space
allows 2^24 = 16,777,216 addresses).

14. How to make a LUN available to a host?


Ans: Making LUNs available to a host is a 3-step process:
1. Create a RAID Group
Choose which physical disks should be used for the RAID Group and assign those disks to the
group. Each physical disk may be part of one RAID Group only.
2. Create LUNs on that RAID Group
LUNs may be created (Note: The CLARiiON term is ‘bound’) on that RAID Group. The first LUN
that is bound will have a RAID Level selected by the user; all subsequent LUNs must be of the
same RAID Level.
3. Create a Storage Group and assign the LUNs and the host to it
When LUNs have been bound, they are assigned to hosts. Normal host procedures, such as
partitioning, formatting and labeling, will then be performed to make the LUN usable. The
CLARiiON software that controls host access to LUNs (LUN Masking) is Access Logix.
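
The same three steps can also be scripted with NaviCLI; a minimal sketch (the SP address, disk
IDs in Bus_Enclosure_Disk form, and the LUN/RAID Group/Storage Group identifiers are
placeholders):

  naviseccli -h <SPA_IP> createrg 10 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9
  naviseccli -h <SPA_IP> bind r5 20 -rg 10
  naviseccli -h <SPA_IP> storagegroup -create -gname host1_sg
  naviseccli -h <SPA_IP> storagegroup -connecthost -host host1 -gname host1_sg
  naviseccli -h <SPA_IP> storagegroup -addhlu -gname host1_sg -hlu 0 -alu 20

Here -alu is the array-side LUN number and -hlu is the LUN number the host will see.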

15. How will you check server compatibility when installing a new box?
Ans: Check the compatibility matrix in the E-Lab Interoperability Navigator on Powerlink
(powerlink.emc.com).

16. What is Hot Spare?


Ans: A hot spare is an idle component (often a drive) in a RAID array that becomes a temporary
replacement for a failed component.
For example:
The hot spare takes the failed drive’s identity in the array.
Data recovery takes place. How this happens is based on the RAID implementation:
If parity was used, data will be rebuilt onto the hot spare from the parity and data on the
surviving drives.
If mirroring was used, data will be rebuilt using the data from the surviving mirrored drive.
The failed drive is replaced with a new drive at some time later.
One of the following occurs:
The hot spare permanently replaces the failed drive—meaning that it is no longer a hot spare
and a new hot spare must be configured on the system.
When the new drive is added to the system, data from the hot spare is copied to the new drive.
The hot spare returns to its idle state, ready to replace the next failed drive.
Note: The hot spare drive needs to be large enough to accommodate the data from the failed
drive.
Hot spare replacement can be:
Automatic - when a disk’s recoverable error rates exceed a predetermined threshold, the disk
subsystem tries to copy data from the failing disk to a spare one. If this task completes before the
damaged disk fails, the subsystem switches to the spare and marks the failing disk unusable. (If
not it uses parity or the mirrored disk to recover the data, as appropriate).
User initiated - the administrator tells the system when to do the rebuild. This gives the
administrator control (e.g., rebuild overnight so as not to degrade system performance); however,
the system is vulnerable to another failure in the meantime because the hot spare is now
unavailable. Some systems implement multiple hot spares to improve availability.
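
On a CLARiiON, a hot spare is itself bound as a LUN of type “hs” on a single-disk RAID Group;
a brief NaviCLI sketch (the SP address, RAID Group number and disk ID are placeholders):

  naviseccli -h <SPA_IP> createrg 200 0_1_14
  naviseccli -h <SPA_IP> bind hs 200 -rg 200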

16. What is Hot Swap?


Ans: Like hot spares, hot swaps enable a system to recover quickly in the event of a failure. With
a hot swap the user can replace the failed hardware (such as a controller) without having to shut
down the system.

17. Data Availability at the host?


Ans:
Multiple HBAs: Redundancy can be implemented using multiple HBAs. HBAs are the host’s
connection to the storage subsystem.

Multi-pathing software: Multi-pathing software is a server-resident, availability-enhancing
software solution. It utilizes the available HBAs on the server to provide redundant
communication paths between host and storage devices. It provides multi-path I/O
capabilities and path failover, and may also provide automatic load balancing. This assures
uninterrupted data transfers even in the event of a path failure.
Clustering: Clustering uses redundant host systems connected together. In the event that one of
the hosts in the cluster fails, its functions will be assumed by the surviving member(s). Cluster
members can be configured to transparently take over each other's workload, with minimal or no
impact to the user.

18. What is HBA?


Ans: The host connects to storage devices using special hardware called a Host Bus Adapter
(HBA). HBAs are generally implemented as either an add-on card or a chip on the motherboard
of the host. The ports on the HBA are the host’s connection to the storage subsystem. There may
be multiple HBAs in a host. The HBA has the processing capability to handle some storage
commands, thereby reducing the burden on the host CPU.

19. What is a Volume Manager?


Ans: The Volume Manager is an optional intermediate layer between the filesystem and the
physical disks. It can aggregate several smaller disks to form a larger virtual disk and make this
virtual disk visible to higher level programs and applications. It optimizes access to storage and
simplifies the management of storage resources.

20. DAS,NAS,SAN?
DAS : In a Direct Attached Storage (DAS) environment, servers connect directly to the disk array
typically via a SCSI interface. The same connectivity port on the disk array cannot be shared
between multiple servers. Clients connect to the servers through the Local Area Network. The
distance between the server and the disk array is governed by SCSI limitations. With the
advent of Storage Area Networks and the Fibre Channel interface, this method of disk array
access is becoming less prevalent.

NAS: In a Network Attached Storage (NAS) environment, NAS Devices access the disks in an
array via direct connection or through external connectivity. The NAS heads are optimized for
file serving. They are set up to export/share file systems. Servers, called NAS clients, access these
file systems over the Local Area Network (LAN) to run applications. The clients connect to these
servers also over the LAN.

SAN: In a Storage Area Network (SAN) environment, servers access the disk array through a
dedicated network, the SAN. The SAN consists of Fibre Channel switches that provide
connectivity between the servers and the disk array. In this model, multiple servers can
access the same Fibre Channel port on the disk array. The distance between the server and the
disk array can also be greater than that permitted in a direct attached SCSI environment. Clients
communicate with the servers over the Local Area Network (LAN).
21. What is DPE2?
Ans: The DPE2 (Disk Processor Enclosure) is used on some CX-series models; it houses the
Storage Processors plus up to fifteen 2Gb FC drives. (Separately, all CX-series models now ship
with the new UltraPoint Disk Array Enclosure, DAE2P.)

22. What is CRU signature?


Ans: The CRU Signature defines what Enclosure/Bus/Slot the disk is in. It also has an entry that
is unique for each RAID group, so a disk must match not only the Bus/Enclosure/Slot but also
the RAID group in order to be included as part of a LUN. If you pull one disk, that slot is
marked for rebuild. If you pull a second disk from a RAID group, the RAID group shuts down. If
you then insert a new disk for the second disk that you pulled, Flare (the array operating system)
will try to bring up the LUN in "n -1 disks mode" but since it is not the original disk, it cannot
bring the LUN online and, instead, returns a CRU signature error. If you insert the right disk
(that is, the one you pulled from that slot), the LUN will come back online. If you were to insert a
new disk into the first slot you pulled or insert the original disk that you pulled from that slot, the
disk will be rebuilt because that first slot is marked as needing a rebuild. When a slot requires a
rebuild, the CRU signature of the disk does not matter. A rebuilt disk is assigned a new CRU
signature.
The CRU Signature is created when a LUN is bound and is stored on the private area of each
disk.
23. What is CMI and Function?
Ans: SPs communicate via the CMI or PCI Express bus
– CMI: CLARiiON Messaging Interface
– Newer models use the faster PCI Express bus
A Fibre Channel connection between the SPs, called the CMI, carries data between the SPs, and
also carries status information. On FC-series arrays, the CMI is a single connection, running at
100 MB/s. On CX-series arrays, the CMI is a dual connection, with each connection running at
200 MB/s. On newer CX3-series arrays, the communication path between SPs uses the faster
PCI Express bus.
On FC-series arrays, there is an internal mechanism to determine where the fault lies if the CMI
should fail – it uses a backup CMI, which is a low-speed serial connection between the SPs. On
the CX arrays, the dual CMI itself allows this determination.

24. Explain raid levels?


Ans:
RAID 0 – Striped array with no fault tolerance
RAID 1 – Disk mirroring
RAID 1+0 – Mirroring and striping
RAID 3 – Parallel access array with a dedicated parity disk
RAID 4 – Striped array with independent disks and a dedicated parity disk
RAID 5 – Striped array with independent disks and distributed parity
RAID 6 – Striped array with independent disks and dual distributed parity

25. What is PSM LUN?


Ans: The PSM (Persistent Storage Manager) LUN is created during installation, or immediately
after a CLARiiON FC4700 array is installed. It may need to be destroyed and recreated if it was
not properly planned, or if the customer did not want the type of RAID Groups or LUNs that
were configured. Destroying and recreating the PSM LUN assumes that no data has been stored
on the array and that the procedure takes place during the initial installation of the array. While
RAID Group and LUN information is unaffected, all Access Logix configuration
information (including host mappings, storage groups, host information, etc.) and all
configuration information for optional SnapView and MirrorView software are lost.

26. How do you create RAID Groups, bind LUNs, and create Storage Groups?


Ans: Right-click the array → Create RAID Group; right-click the array → Bind LUN;
right-click the array → Create Storage Group.

27. What is mirror view?


Ans: MirrorView is the CLARiiON remote disaster recovery solution. Two arrays have a dedicated
connection over Fibre Channel or T3 lines. It is synchronous mirroring, meaning that all writes to
the source must be completed to the Secondary array before the acknowledgement is sent to the
source host. During a disaster, the Secondary Image can be promoted and mounted by a Standby
host, minimizing downtime.

28. What is Remote replication ?


Ans: Remote Replication:
Replica is available at a remote facility
– Could be a few miles away or half way around the world
– Backup and Vaulting are not considered remote replication
Synchronous Replication
– Replica is identical to source at all times – Zero RPO
Asynchronous Replication
– Replica is behind the source by a finite margin – Small RPO
Connectivity
– Network infrastructure over which data is transported from source site to remote site

29. What is DWDM?


Ans: Dense Wavelength Division Multiplexing (DWDM)
DWDM is a technology that puts data from different sources together on an optical fiber, with
each signal carried on its own separate light wavelength (commonly referred to as a lambda, λ).
Up to 32 protected and 64 unprotected separate wavelengths of data can be multiplexed into a
light stream transmitted on a single optical fiber.

30. What storage arrays and SAN devices are supported by EMC ECC?
Ans:
Storage arrays:
EMC Symmetrix
EMC CLARiiON
EMC Centera
EMC Celerra and Network Appliance NAS servers
EMC Invista
Hitachi Data Systems (including the HP and Sun resold versions)
HP StorageWorks
IBM ESS
SMI-S (Storage Management Initiative Specification) compliant arrays
SAN Devices:
EMC Connectrix
Brocade
McData
Cisco
Inrange (CNT)
IBM Blade Server (IBM-branded Brocade models only)
Dell Blade Server (Dell-branded Brocade models only)

31. What is reserved pool?


Ans: The CLARiiON storage system must be configured with a Reserved LUN Pool in order to
use SnapView Snapshot features. The Reserved LUN Pool consists of 2 parts: LUNs for use by
SPA and LUNs for use by SPB. Each of those parts is made up of one or more Reserved LUNs.
The LUNs used are bound in the normal manner. However, they are not placed in storage groups
or allocated to hosts; they are used internally by the storage system software. They are known
as private LUNs because they cannot be used, or seen, by attached hosts.
Like any LUN, a Reserved LUN will be owned by only one SP at any time and they may be
trespassed if the need should arise (i.e., if an SP should fail).
Just as each storage system model has a maximum number of LUNs it will support, each also has
a maximum number of LUNs which may be added to the Reserved LUN Pool.
The first step in SnapView configuration will usually be the assignment of LUNs to the Reserved
LUN Pool. Only then will SnapView Sessions be allowed to start. Remember that as snapable
LUNs are added to the storage system, the LUN Pool size will have to be reviewed. Changes may
be made online.
LUNs used in the Reserved LUN Pool are not host-visible, though they do count towards the
maximum number of LUNs allowed on a storage system.
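
Assignment of LUNs to the Reserved LUN Pool can also be done from NaviCLI; a sketch from
memory (the LUN numbers are placeholders, and the exact switches vary by FLARE release, so
verify against the CLI reference for your array):

  naviseccli -h <SPA_IP> reserved -lunpool -addlun 100 101
  naviseccli -h <SPA_IP> reserved -lunpool -list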

32. Explain DASD, JBOD, and Disk Array?


Ans: DASD – Direct Access Storage Device (originally introduced by IBM in 1956) is the ‘oldest’
of the techniques for accessing disks from a host computer. Disks are directly accessed from the
host (historically a mainframe system) and tightly coupled to the host environment. A hard drive
in a personal computer is an example of a DASD system. Typically, you can view the DASD as a
one-to-one relationship between a server/computer and its disk drive.

JBOD: JBOD is an acronym for “just a bunch of disks”. The drives in a JBOD array can be
independently addressed and accessed by the Server.

DISK ARRAY : Disk arrays extend the concept of JBODs by improving performance and
reliability. They have multiple host I/O ports. This enables connecting multiple hosts to the same
disk array. Array management software allows the partitioning or segregation of array resources,
so that a disk or group of disks can be allocated to each of the hosts. Typically they have
controllers that can perform RAID (Redundant Array of Independent Disks) calculations.

32. What is BCV?
Ans: The most fundamental element of TimeFinder/Mirror is a specially defined volume called a
Business Continuity Volume. A BCV is a Symmetrix volume with special attributes that allows it
to be attached to another Symmetrix Logical Volume within the same Symmetrix as the next
available mirror. It must be of the same size, type, and emulation (for mainframe 3380/3390) as
the device which it will mirror. Each BCV has its own host address and Symmetrix device
number.

33. How SRDF works?


Ans: SRDF is used for data mirroring between physically separate Symmetrix systems.

34. Explain FCIP, IFCP, ISCSI?


Ans:
• FCIP – TCP/IP based tunneling/encapsulating protocol for connecting/extending Fibre
Channel SANs. The entire FC frame is sent over IP links.
• iFCP – Gateway-to-gateway protocol for carrying FCP over IP, mapping Fibre Channel traffic
natively into IP. An IP-based protocol for interconnecting Fibre Channel devices
together in place of Fibre Channel switches. When iFCP creates the IP packets, it inserts
information that is readable by network devices and routable within the IP network. iFCP wraps
Fibre Channel data in IP packets but maps IP addresses to individual Fibre Channel ports.
• iSCSI – Native TCP/IP protocol. An IP-based protocol for establishing and managing
connections between IP-based storage devices, hosts, and clients. There is no Fibre Channel
content, but bridging between iSCSI and FC is possible.

35. What is WWN?


Ans:
WWN: All Fibre Channel devices (ports) have 64 bit unique identifiers called World Wide Names
(WWN). These WWNs are similar to the MAC address used on a TCP/IP adapter, in that they
uniquely identify a device on the network and are burned into the hardware or assigned through
software. It is a critical feature, as it is used in several configurations for storage access.
However, in order to communicate in the SAN, a port also needs an address. This address is used
to transmit data through the SAN from source node to destination node.
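
On a Linux host, for example, the 64-bit WWPN of each FC HBA port can be read from sysfs:

  cat /sys/class/fc_host/host*/port_name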

36. What is zoning and how many types of zoning are there?


Ans: Zone
– Controlled at the switch layer
– List of nodes that are made aware of each other
– A port or a node can be members of multiple zones
Zone Set
– A collection of zones
– Also called zone config
EMC recommends Single HBA Zoning
– A separate zone for each HBA
– Makes zone management easier when replacing HBAs

Types of zones:
– Port Zoning (Hard Zoning)
Port-to-Port traffic
Ports can be members of more than one zone
Each HBA only “sees” the ports in the same zone
If a cable is moved to a different port, zone has to be modified
– WWN based Zoning (Soft Zoning)
Access is controlled using WWN
WWNs defined as part of a zone “see” each other regardless of the
switch port they are plugged into
HBA replacement requires the zone to be modified
– Hybrid zones (Mixed Zoning)
Contain ports and WWNs
Port Zoning – Advantages: more secure; simplified HBA replacement
Disadvantages: recabling requires zone reconfiguration
WWPN Zoning – Advantages: flexibility; no reconfiguration after recabling; easier troubleshooting
Disadvantages: WWN spoofing is possible; HBA replacement requires zone changes

37. What is flushing and how many levels are there?


Ans: Three levels of flushing:
– Idle - Low I/Os to the LUN; user I/Os continue
– Watermark - Priority depends on cache fullness; user I/Os continue
– Forced - Cache has no free space; user I/Os queue
With write cache enabled, all writes will enter the write cache on the owning SP, and then get
mirrored to the other SP. As the write cache fills, it attempts to empty the write cache to the
destination disk drives. This process is called “Flushing”. We have the ability to control this
activity by setting watermarks for write cache, a Low and a High. Until cache fullness reaches the
Low Watermark (default value = 40%), the SP is in a state called “Idle Flushing”. The SP will
attempt to clear cache lazily and during idle periods of the array. The array is at its peak cache
performance during idle flushing.
To maintain free space in write cache, pages are flushed from write cache to the drives. For
maximum performance:
– provide a “cushion” of unused cache for I/O bursts
– minimize/avoid forced flushes
Low and high water marks:
At the low water mark, the flushing process is given a higher priority by assigning a single thread
to the write cache flushing process for each scheduler cycle. As the amount of unflushed data in
write cache increases beyond the low water mark, higher priorities are assigned to the flushing
process at regular intervals by assigning additional threads to each scheduler cycle, until a
maximum of 4 threads are assigned at the high water mark.

Forced flushing: If the data in the write cache continues to increase, forced flushing will occur at a
point where there is not enough room in write cache for the next write I/O to fit. Write I/Os to
the array will be halted until enough data is flushed to make sufficient room available for the
next I/O. This process will continue until the forced flushing plus the scheduled (watermark)
flushing creates enough room for normal caching to continue. All during this process write cache
remains enabled.
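
The current cache state, including the watermark settings, can be checked with NaviCLI, for
example:

  naviseccli -h <SPA_IP> getcache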

38. What is FC-AL?


Ans: Fibre Channel Arbitrated Loop has a shared bandwidth, distributed topology, and can
connect with hubs.

39. What is SAN Copy?


Ans: SAN Copy allows for data replication to heterogeneous storage arrays through Fibre
Channel topology, without involving hosts or LAN topologies.

40. SAN Toplogies?


Ans: SAN Connectivity Topologies
– Point to Point (P-to-P) : Point to Point is a direct connection between two devices.
– Fibre Channel Arbitrated Loop (FC-AL) : Fibre Channel Arbitrated Loop has a shared
bandwidth, distributed topology, and can connect with hubs.
– Fibre Channel Switch Fabric (FC-SW): Fibre Channel Switched Fabric (FC-SW) can have
multiple devices connected via switching technologies. Fibre Channel Switched Fabric
provides the highest performance and connectivity of the three topologies. A switched
fabric provides scalability and dedicated bandwidth between any given pair of inter-
connected devices. It uses a 24-bit address (called the Fibre Channel Address) to route
traffic, and can accommodate as many as 15 million devices in a single fabric.

41. What is cluster?


Ans: Multiple servers act as a single logical server; when one server goes down, its workload is
taken over by the surviving members, so end users see little or no disruption.

42. What is JBOD?


Ans: JBOD is an acronym for “just a bunch of disks”. The drives in a JBOD array can be
independently addressed and accessed by the server.

43. What is trespassing?


Ans: If the Storage Processor, Host Bus Adapter, cable, or any component in the I/O path fails,
ownership of the LUN can be moved to the surviving SP – Process is called LUN Trespassing.

44. What is Snapview?


Ans: SnapView includes two separate types of internal data replication: Snapshots and Clones.
Snapshots: Point-in-time virtual copies of LUNs. Allow for on-line back-up of databases while
only using 10-20% of the source disk resources.

Clones: Full Synchronous copies of LUNs within the same array. Can be used as a point-in-time,
FULL copy of a LUN through the fracture process, or as a data recovery solution in the event of a
failure.

45. What is replication software? What is RMSE?


Ans: Replication technologies:
Microsoft VSS
Symmetrix: TimeFinder/Mirror, TimeFinder/Snap, SRDF/Synchronous,
SRDF/Asynchronous
ECA (consistency)
CLARiiON: SnapView clones, SnapView snapshots, SAN Copy
HP StorageWorks clones
RMSE:
Creates disk replicas of mission-critical information for customers running Windows on EMC
CLARiiON storage arrays
Designed for the commercial customer space
– Customer installable, easy to use

46. What is vault?


Ans: When problems occur, such as a power or SP Failure, the contents of write cache are copied
into a special area of the first 5 drives on the array called THE VAULT.

47. What is CMI and how does it work?


Ans: The CLARiiON Messaging Interface (CMI) is used for communication between SP-A and
SP-B. The mirroring ensures data is protected from a single SP failure. The connection between
the SPs, called the CMI, carries data and status information between SPs. On FC-series arrays,
the CMI is a single connection, running at 100 MB/s. On CX-series arrays, the CMI is a dual
connection, with each connection running at 200 MB/s.
48. What is Access Logix?
Ans: Access Logix is array-based software which facilitates a shared-storage
environment. It segments the storage on a CLARiiON so that hosts will only
see the LUNs we intend for them to see. The generic term for this is LUN
Masking, as the array is technically hiding unwanted LUNs from the hosts.

49. What is HBA?


Ans: HBA is any adapter that allows a computer bus to attach to another bus or channel
A Host Bus Adapter performs many low-level interface functions automatically to minimize
the impact on host processor performance
The HBA enables a range of high-availability and storage management capabilities:
– Load balancing
– Failover
– SAN administration
– Storage management

50. Explain SRDF and configuration?


SRDF: It is a business tool that allows a client with Symmetrix-based data centers to copy their
data between the sites for a number of purposes.
By maintaining copies of data in different physical locations, SRDF enables you to perform the
following operations by integrating with your strategies for:
1. Disaster restart, disaster restart testing
2. Recovery from planned outages, remote backup
3. Data center migration, data replication and mobility

The SRDF family covers many different disaster recovery needs:
SRDF/S (Synchronous)
SRDF/A (Asynchronous)
SRDF/DM (Data Mobility)

Add-on solutions:
SRDF/Star – multi-point replication
SRDF/CG – Consistency Groups
SRDF/AR – Automated Replication
SRDF/CE – Cluster Enabler

SRDF: data mirroring between physically separate Symmetrix systems

Modes:
Synchronous replication
Semi-synchronous replication
Adaptive copy replication
Asynchronous replication

Connectivity options (communication is peer-to-peer):
RLD: Remote Link Director
RFD: Remote Fibre Director
GigE Remote Directors and MPCD (Multiprotocol Channel Director)

SRDF connections: unidirectional, bi-directional, dual configuration (flexible)

Configurations:
1. Multiple source Symmetrix to a single target Symmetrix
2. Single source Symmetrix to multiple target Symmetrix
3. Switched SRDF
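
Day-to-day SRDF operations are typically driven from SYMCLI on a control host; a minimal
sketch, assuming a device group named prod_dg has already been created:

  symrdf -g prod_dg establish   # start/resume R1 -> R2 mirroring
  symrdf -g prod_dg split       # suspend mirroring and make R2 host-accessible
  symrdf -g prod_dg failover    # after a disaster, make the R2 side read/write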

51. What Is MirrorView/S?


Ans: MirrorView/S is a software application that maintains a copy image of a logical unit (LUN)
at separate locations in order to provide for disaster recovery; that is, to let one image continue if
a serious accident or natural disaster disables the other.

MirrorView/S Features and Benefits


MirrorView/S mirroring has the following features:
• Provision for disaster recovery with minimal overhead
• CLARiiON® environment
• Bidirectional mirroring
• Integration with EMC SnapView™ LUN copy software
• Integration with EMC SAN Copy™ software

CLARiiON MirrorView/S Environment


MirrorView/S operates in a highly available environment, leveraging the dual-SP design of
CLARiiON systems. If one SP fails, MirrorView/S running on the other SP will control and
maintain the mirrored LUNs. If the server is able to fail over I/O to the remaining SP, then
mirroring will continue as normal. The high-availability features of RAID protect against disk
failure, and mirrors are resilient to an SP failure in the primary or secondary storage system.

Integration with EMC SnapView Snapshot Software


SnapView software lets you create a snapshot of an active LUN at any point in time; however, do
this only when the mirror is not synchronizing the secondary image. The secondary image is not
viewable to any servers, but you can use SnapView in conjunction with MirrorView/S to
create a snapshot of a secondary image on a
secondary storage system to perform data verification and run parallel processes, for example
backup.

52. What Is MirrorView/A?


MirrorView/A lets you periodically update a remote copy of production data. It is a software
application that keeps a point-in-time copy of a logical unit (LUN) and periodically replicates the
copy to a separate location in order to provide disaster recovery, that is, to let one image continue
to be active if a serious accident or natural disaster disables the other. It provides data replication
over long distances (hundreds to thousands of miles).

MirrorView/A Features and Benefits:


MirrorView/A mirroring has the following features:
• Provision for disaster recovery with minimal overhead
• CLARiiON® environment
• Bidirectional mirroring
• Integration with EMC SnapView™ LUN copy software
• Replication over long distances

CLARiiON MirrorView/A Environment


MirrorView/A operates in a highly available environment, leveraging the dual-SP design of
CLARiiON systems. If one SP fails, MirrorView/A running on the other SP will control and
maintain the mirrored LUNs. If the server is able to fail over I/O to the remaining SP, then
periodic updates will continue. The high-availability features
of RAID protect against disk failure, and mirrors are resilient to an SP failure in the primary or
secondary storage system.

53. What Is EMC SAN Copy?


EMC SAN Copy is storage-system-based software for copying data directly from a logical unit on
one storage system to destination logical units on supported remote systems without using host
resources. SAN Copy connects through a storage area network (SAN), and also supports
protocols that let you use the IP WAN to send data over extended distances.
SAN Copy runs in the SPs of a supported storage system (called a SAN Copy storage system),
and not on the host servers connected to the storage systems. As a result, the host processing
resources are free for production applications while the SPs copy data.
SAN Copy can copy data between logical units as follows:
• Within a CLARiiON storage system
• Between CLARiiON storage systems
• Between CLARiiON and Symmetrix storage systems
• Between CLARiiON and qualified non-EMC storage systems

SAN Copy can use any CLARiiON SP ports to copy data, provided the port is not being used for
MirrorView connections. Multiple sessions can share the same port. You choose which ports SAN
Copy sessions use through switch zoning.

SAN Copy Features and Benefits


SAN Copy adds value to customer systems by offering the following
features:
• Storage-system-based data mover application that offloads the host, thereby improving host
performance
• Software that you can use in conjunction with replication software, allowing I/O with the
source logical unit to continue during the copy process
• Simultaneous sessions that copy data to multiple CLARiiON and Symmetrix storage systems
• Easy-to-use, web-based application for configuring and managing SAN Copy

54. What Is EMC SAN Copy/E?


Like EMC SAN Copy, EMC SAN Copy/E is storage-system-based software for copying data
directly from a logical unit on one storage system to destination logical units on supported
remote systems without using host resources. SAN Copy/E connects through a storage area
network (SAN), and also supports protocols that let you use the IP WAN to send data over
extended distances.
SAN Copy/E runs only on a CX300 or AX-Series storage system and can only copy data to CX-
Series storage systems that are running SAN Copy.
SAN Copy/E can use any CLARiiON SP ports to copy data, provided the port is not being used
for MirrorView connections. Multiple sessions can share the same port. You choose which ports
SAN Copy sessions use through switch zoning.

SAN Copy/E Features and Benefits


The SAN Copy/E adds value to customer systems by offering the following features:
• Incremental copy sessions from AX-Series or CX300 storage systems to SAN Copy systems
located in the data center.
• Storage-system-based data mover application that uses the storage area network (SAN) rather
than host resources to copy data, resulting in a faster copy process.
• Easy-to-use, web-based application for configuring and managing SAN Copy/E.
• Software that you can use in conjunction with replication software, allowing I/O with the
source logical unit to continue during the copy process

55. Explain Clariion CX Series?


Disk Array Enclosure (DAE)
– CX family uses DAE2 with up to (15) 2Gb FC drives
– FC family uses DAE with up to (10) 1Gb FC drives
– DAE2-ATA contains up to 15 ATA (Advanced Technology Attachment) drives
Disk Array Enclosure (DAE2P)
– CX family only, on 300, 500, and 700 series
– Replacement for DAE2 with code upgrade
– Houses up to (15) 2Gb FC drives
Disk Processor Enclosure (DPE)
– Some CX series use DPE2, which contains Storage Processors and up to (15) 2Gb FC drives
– FC family uses DPE, which contains Storage Processors and up to (10) 1Gb FC drives
Storage Processor Enclosure (SPE)
– Contains two dual CPU Storage Processors and no drives
Standby Power Supply (SPS)
– Provides battery backup protection

56. How does CMI work?


CMI: The CLARiiON Messaging Interface (CMI) is used for communication between SP-A and
SP-B. The mirroring ensures data is protected from a single SP failure. The connection between
the SPs, called the CMI, carries data and status information between SPs. On FC-series arrays,
the CMI is a single connection, running at 100 MB/s. On CX-series arrays, the CMI is a dual
connection, with each connection running at 200 MB/s.

57. What is Vault?


The vault is a reserved area found on the first 9 disks of the DPE on the FC series and the first 5
drives on a CX series system. For this reason, only the DPE needs to be kept alive in the event of a
power failure. At the first sign of an event which could potentially compromise the integrity of
the write cache data, that data is dumped to the vault area. It is protected there by the
non-volatile nature of disk.

58. What is flushing and how many levels are there?


Ans:
Three levels of flushing:
– Idle - Low I/Os to the LUN; user I/Os continue
– Watermark - Priority depends on cache fullness; user I/Os continue
– Forced - Cache has no free space; user I/Os queue

59. Explain Meta LUNs?


Ans: A metaLUN is created by combining LUNs
– Dynamically increase LUN capacity
– Can be done on-line while host I/O is in progress
– A LUN can be expanded to create a metaLUN and a metaLUN can be further expanded by
adding additional LUNs
– Striped or concatenated
Data is restriped when a striped metaLUN is created
Appears to host as a single LUN
– Added to storage group like any other LUN
– Can be used with MirrorView, SnapView, or SAN Copy
Supported only on CX family with Navisphere 6.5+
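
metaLUN expansion can also be driven from NaviCLI; a hedged sketch where LUN 20 is the base
and LUN 21 is added (switch names from memory; verify against the metaLUN CLI reference):

  naviseccli -h <SPA_IP> metalun -expand -base 20 -lus 21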

60. What is Trespassing?


If the Storage Processor, Host Bus Adapter, cable, or any component in the I/O path fails,
ownership of the LUN can be moved to the surviving SP
– Process is called LUN Trespassing

61. What is PowerPath?


Ans: PowerPath is host-resident failover software that provides host connectivity redundancy
through automatic detection and management of failed paths. The host will typically be
configured with multiple paths to a LUN. If an HBA, cable or switch fails, PowerPath will
redirect I/O over a surviving path. If a Storage Processor fails, PowerPath will “trespass” the
LUN to the surviving Storage Processor and redirect I/O. It also provides dynamic load
balancing across HBAs and the fabric – not across Storage Processors.
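
On the host, path state can be verified with the powermt utility, for example:

  powermt display dev=all    # show every path to every PowerPath device
  powermt restore            # test failed paths and restore any that have recovered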

62. Explain FLARE Operating Environment?


FLARE software manages all functions of the CLARiiON storage system. Each storage system
ships with a complete copy of FLARE software installed. When you power up the storage system,
each SP boots and executes FLARE software.
FLARE performs:
– Provisioning and resource allocation
– Memory budgets for caching and for snap sessions, mirrors, clones, copies
– Process scheduling
– Boot management
Access Logix software is optional software that runs within the FLARE operating environment
on each storage processor (SP). Access Logix provides access control and allows multiple hosts to
share the storage system. This “LUN Masking” functionality is implemented using Storage
Groups. A Storage Group is one or more LUNs within a storage system that are reserved for one
or more hosts and are inaccessible to other hosts. When you power up the storage system, each
SP boots and executes its Access Logix software.
Navisphere Management software is a suite of tools that allows centralized management of
CLARiiON storage systems. Navisphere provides a centralized tool to monitor, configure, and
analyze performance. CLARiiON can also be managed as part of EMC ControlCenter, allowing
full end-to-end management.

63. SAN Connectivity Methods

There are three basic methods of communication using Fibre Channel infrastructure
– Point to point (P-to-P)
A direct connection between two devices
– Fibre Channel Arbitrated Loop (FC-AL)
A daisy chain connecting two or more devices
– Fabric connect (FC-SW)
Multiple devices connected via switching technologies

64. What is a Fabric?


Logically defined space used by FC nodes to communicate with each other
One switch or group of switches connected together
Routes traffic between attached devices
Component identifiers:
– Domain ID
Unique identifier for an FC switch within a fabric
– Worldwide Name (WWN)
Unique 64-bit identifier for an FC port (either a host port or a storage port)

65. Explain Fabric Security – Zoning?


Zone
– Controlled at the switch layer
– List of nodes that are made aware of each other
– A port or a node can be members of multiple zones
Zone Set
– A collection of zones
– Also called zone config
EMC recommends Single HBA Zoning
– A separate zone for each HBA
– Makes zone management easier when replacing HBAs

66. Explain Types of zones?


– Port Zoning (Hard Zoning)
Port-to-Port traffic
Ports can be members of more than one zone
Each HBA only “sees” the ports in the same zone
If a cable is moved to a different port, zone has to be modified
– WWN based Zoning (Soft Zoning)
Access is controlled using WWN
WWNs defined as part of a zone “see” each other regardless of the
switch port they are plugged into
HBA replacement requires the zone to be modified
– Hybrid zones (Mixed Zoning)
Contain ports and WWNs

Port Zoning – Advantages: more secure; simplified HBA replacement
Disadvantages: recabling requires zone reconfiguration
WWPN Zoning – Advantages: flexibility; no reconfiguration after recabling; easier troubleshooting
Disadvantages: WWN spoofing is possible; HBA replacement requires zone changes

67. Hard and soft zoning?


Ans:
WWN zoning - WWN zoning uses the unique identifiers of a node which have been recorded in
the switches to either allow or block access. A major advantage of WWN zoning is flexibility. The
SAN can be re-cabled without having to reconfigure the zone information since the WWN is
static to the port.
Port zoning - Port zoning uses physical ports to define zones. Access to data is determined by
what physical port a node is connected to. Although this method is quite secure, should recabling
occur zoning configuration information must be updated.

68. What is the difference between Hub and switch?


Ans: A switch is much faster than a hub and reduces collisions/retransmissions. Switches send
traffic only to the destination device/port, while hubs broadcast the data to all ports. If you have
a choice, a switch will provide more reliable and faster transfers of data.
CLARiiON CX Pocket Reference v5.0 (R19 - September 2005)

• Fibre Channel Model Comparison


• iSCSI Channel Model Comparison
• Max storage capacity by disk type
• General Information
• iSCSI Information
• metaLUN Information
• LUN Migration Information
• Reliability/Availability Features
• Security Features
• Software
• Definitions

CX700/500/300 Model Comparison


CX700 CX500 CX300
Disks 240 120 60
SPs 2 2 2
CPU/SP 2x3.2GHz 2x1.6GHz 1x800MHz
FE/SP 4@2Gb 2@2Gb 2@2Gb
BE/SP 4@2Gb 2@2Gb 1@2Gb
IOPS 200K 120K 50K
MB/sec 1520 760 680
Mem 8GB 4GB 2GB
Cache 3224MB 1515MB 571MB
W Cache 3072MB 1515MB 571MB
HA Hosts 256 128 64
Min Size 8U 4U 4U
LUNs 2048 1024 512
RGs 240 120 60
LUNs/RG 128 128 128
SGs 512 256 128
LUNs/SG 256 256 256
CMI 2x2Gb 2x2Gb 2x2Gb
Rsrvd LUNs 100 50 25
CX600/400/200 Model Comparison
CX600 CX400 CX200 CX200LC
Disks 240 60 30 15
SPs 2 2 2 1
CPU/SP 2x2GHz 1x800MHz 1x800MHz 1x800MHz
FE/SP 4@2Gb 2@2Gb 2@2Gb 2@2Gb
BE/SP 2@2Gb 2@2Gb 1@2Gb 1@2Gb
IOPS 150K 60K 40K 20K
MB/sec 1300 680 200 100
Mem 8GB 2GB 1GB 512MB
Cache 3470MB 619MB 237MB 237MB
W Cache 3072MB 619MB 237MB 237MB
HA Hosts 128 64 15 NA
Min Size 8U 4U 4U 4U
LUNs 1024 512 256 256
LUNs/RG 128 128 128 128
RGs 240 60 30 15
SGs 256 128 30 2
CMI 2x2Gb 2x2Gb 2x2Gb NA
Rsrvd LUNs 100 50 25 NA

Model Comparison Notes:

• All figures are maximums except where noted.


CX500i/300i Model Comparison


CX500i CX300i
Disks 120 60
SPs 2 2
CPU/SP 2x1.6GHz 1x800MHz
iSCSI/SP 2@1Gb 2@1Gb
BE/SP 2@2Gb 1@2Gb
Mem 4GB 2GB
Cache 1515MB 571MB
W Cache 1515MB 571MB
Connects 128 64
Min Size 4U 4U
LUNs 1024 512
RGs 120 60
LUNs/RG 128 128
SGs 512 128
LUNs/SG 256 256
CMI 2x2Gb 2x2Gb
Rsrvd LUNs 50 25

Model Comparison Notes:

• 'Connects' is the max number of NICs/HBAs that can connect to the array, regardless of
the number of hosts.


Max FC Capacity Comparison


Disk CX700 CX500 CX300
36GB 8.6TB 4.2TB 2.1TB
73GB 17.5TB 8.6TB 4.4TB
146GB 35TB 17.4TB 8.6TB
300GB 72TB 36TB 18TB

Disk Drive Notes:

• The max number of FC drives per RAID Group is 16, making the largest possible current
FC RAID group size 4.8 TB (raw) (16 x 300GB on a CX-Series array).
• All sizes are raw.


Max ATA Capacity Comparison


Disk CX700 CX500 CX300
250GB 56TB 26.2TB 11.2TB
320GB 72TB 33.6TB 14.4TB

ATA Disk Drive Notes:

• The max number of ATA drives per RAID Group is 16, making the largest possible
current ATA RAID group size 5.12TB (raw) (16 x 320GB on a CX-Series array).
• All sizes are raw and assume that all DAEs, other than the first one, are ATA.
• The 250GB drives are 7200RPM SATA - the 320GB drives are 5400RPM PATA.
• All sizes do not include the FC drives in the first DAE.
• The first DAE must be FC and contains at least 5 drives. All other DAEs can be ATA or a
mix of ATA & FC.
• 250GB SATA and 320GB PATA drives can be mixed in the same DAE.


General CLARiiON Information


• All models have 2 SPs (except where noted).
• All CX-series models now ship with the new UltraPoint Disk Array Enclosure (DAE2P).
o UltraPoint DAEs utilize a switch within the LCC, providing direct connections to
each drive in the DAE.
o Existing CX arrays (FC & iSCSI) can be upgraded with the new UltraPoint DAEs.
o CLARiiON arrays shipped with the new UltraPoint DAEs are designated with a
'-s' following the model name.
o UltraPoint DAEs cannot function as the DPE for a CX300/500.
• RAID levels 0, 1, 1+0, 3 and 5 and individual disks are supported.
• The CX300/CX500 are essentially a DAE2 with the SP integrated into the LCC.
• The 2 FE ports on the CX300/CX400/CX500 SP are independent - the 2 on the CX200 &
CX200LC go through an internal hub.
• A RAID group is a collection of physical drives.
• LUNs are logical storage devices that are presented to hosts.
• LUNs are created within RAID groups.
• All LUNs within a RAID group must be of the same RAID type.
• Uses LUN ownership model - each LUN is owned by one SP at a time.
• Both SPs are active (active/active), but, from a LUN perspective, only 1 SP is serving the
LUN at a time (active/passive)
• Data-in-place upgrades are available for any lower model CX to any higher model
(CX400/600 upgrades are no longer available).
• Block size is fixed at 520 bytes - 512 bytes of user data and 8 bytes of CLARiiON data per
block (see HA sections for more details).
• The previous capability to expand a LUN providing it was the only one in the RAID
group is no longer available on the CX-series (FC4700 only) - use metaLUNs for LUN
expansion
• SPS batteries are tested once a week. By default both are tested at the same time, which
disables write cache until testing is done. To prevent write cache disabling, stagger the
test times.
• RAID3 with write caching enabled is the recommended implementation for ATA drive
LUNs (R3 now uses general cache instead of dedicated RAID3 cache).
• Reserved LUNs are used for SnapView sessions, Incremental SAN Copy sessions and
MirrorView/A sessions (minimum of 1 reserved LUN for each source LUN). Each
session type can share the reserved LUN(s) for that source.
• The maximum size of a LUN (or metaLUN) is 16 exabytes.


iSCSI CLARiiON Information

• iSCSI arrays have 1Gb copper Ethernet FE ports instead of Fibre Channel FE ports.
• You cannot mix Fibre Channel and iSCSI ports on the same array.
• Refer to the EMC Support Matrix for supported host iSCSI connectivity.
• The iSCSI ports and the 10/100 host management ports can be on the same IP subnet.
• iSNS is supported.
• IPSEC is not supported natively on the arrays.
• Both standard NICs as well as iSCSI HBAs (e.g. QLogic QLA4010) are supported for host
access.
• PowerPath (V4.3.1 or later) supports multi-pathing and load balancing for iSCSI arrays.
• MirrorView/S/A and SAN Copy are not supported on iSCSI arrays.
• Direct Gigabit Ethernet attach is supported.
• 10/100 NIC connections direct to the iSCSI array are not supported, except to the
management port.
• RADIUS is not currently supported.
• Gigabit Ethernet Jumbo frames are not currently supported.
• The CLARiiON storage systems allow one login per iSCSI name per SP port.
• When using the Microsoft iSCSI Initiator all NICs in the same host will use the same
iSCSI name. The name will identify the host and the individual NICs will not be
identifiable. This behavior allows one login per server to each array SP port.
• When using Qlogic iSCSI adapters each HBA will have unique iSCSI names. The name
will identify the individual HBA in the host. This behavior allows one login per HBA to
each array SP port.
• When using physically separated networks, each network MUST use a unique subnetwork
address to allow proper routing of traffic. This type of configuration is always required for
direct connect environments, and is also applicable whenever dedicated subnets are used for
the data paths.
• A single host cannot mix iSCSI HBAs and NICs to connect to the same CLARiiON array.
• A single host cannot mix iSCSI HBAs and NICs to connect to different CLARiiON arrays.
• Hosts with iSCSI HBAs and separate hosts with NICs can connect to the same Array.
• A single host cannot attach to a Fibre Channel array and an iSCSI array at the same time.
• A single host cannot attach to a CLARiiON CX iSCSI array and a CLARiiON AX iSCSI array at
the same time.
• A single host can attach to CLARiiON CX iSCSI arrays and Symmetrix iSCSI arrays when
there is common network configuration, failover software, and driver support for both
platforms.
• A single host can attach to CLARiiON CX iSCSI arrays and, via IP/FC switches, to CLARiiON
CX Fibre Channel arrays when there is common network configuration, failover software,
and driver support for both platforms.
• Using the OSCG definition of Fan-in (server to storage system), a server can be connected
to a max of 4 CLARiiON storage systems (iSCSI and FC).
• Target array addresses and names can be configured manually in the Initiators, or iSNS
can be used to configure them dynamically.
• Support is provided for up to 4 HBAs or 4 NICs in one host connecting to one CX500i
array.
• Currently it is not possible to boot a Windows system using an iSCSI disk volume
provided by the Microsoft iSCSI Initiator. The only currently supported method for
booting a Windows system using iSCSI is via a supported iSCSI HBA.
• Dynamic disks are not supported on an iSCSI session using the Microsoft iSCSI Initiator.


metaLUN Information

• metaLUNs form an abstract LUN that is presented to the host as a single piece of storage
but consists of 2 or more 'back end' LUNs
• The use of metaLUNs is optional. The capability is available in the base FLARE upgrade
R12.
• metaLUNs and traditional LUNs can be mixed on the same array.
• metaLUNs are created from an initial LUN referred to as the 'base' LUN.
• The metaLUN takes on the characteristics of the base LUN when it is created (WWN,
Nice name, etc.), which can be modified by the user.
• Creation of a metaLUN is dynamic - the creation process is functionally transparent to
any hosts accessing the base LUN.
• FC and ATA LUNs cannot be mixed in the same metaLUN.
• Ownership of the back-end LUNs that make up a metaLUN will all be moved to the
same SP as the base LUN.
• All LUNs that make up a metaLUN become private.
• Destroying a metaLUN destroys all the LUNs that make up that metaLUN.
• If a LUN uses SnapView, MirrorView or SAN Copy it must be removed from those
applications before it can be expanded using metaLUNs.
• metaLUN components do not count against the max LUN count for an array; however,
they have their own limits (see below).
• metaLUNs can be striped or concatenated.
• Striping Considerations
o All striped LUNs must be the same size and RAID type.
o Striping will generally provide better performance since more spindles are
available.
o If a new LUN is added to a striped metaLUN, all data on the existing LUNs will
be restriped.
o The new space will not be available until re-striping occurs.
o For optimal performance LUNs should be in different RAID groups (spindles).
• Concatenation Considerations
o Any LUN types can be concatenated together except for R0 LUNs.
o R0 LUNs can only be concatenated with other R0 LUNs.
o Concatenation occurs by adding components to a base LUN or existing metaLUN.
o A component is a collection of one or more LUNs identical in RAID type and size
that are striped together.
o The space added by concatenating a LUN is available immediately for use.
o You can only add LUNs to the last component in a metaLUN. You cannot insert
LUNs into the chain of component LUNs.

metaLUN Configuration
Item CX700 CX500 CX300
Max metaLUNs 1024 512 256
LUNs/Cmpnt 32 32 16
Concat. Cmpnts/metaLUN 16 8 8
LUNs in metaLUNs 512 256 128


LUN Migration Information

• LUN Migration provides the ability to migrate data from one LUN to another
dynamically.
• The target LUN assumes the identity of the source LUN.
• The source LUN is unbound when migration process is complete.
• Host access to the LUN can continue during the migration process.
• The target LUN must be the same size or larger than the source.
• The source and target LUNs do not need to be the same RAID type or disk type (FC<-
>ATA).
• Both LUNs and metaLUNs can be sources and targets.
• Individual component LUNs in a metaLUN cannot be migrated independently - the
entire metaLUN must be migrated as a unit.
• The migration process can be throttled.
• Reserved LUNs cannot be migrated.

Reliability/Availability Features

• All components are dual-redundant and hot swappable (no single point of failure).
• Write cache is protected by a 'vault' area on disk. On a failure the contents are written to
disks (de-staged or dumped). When the failure is corrected the contents are written to the
back-end disks and write cache is re-enabled.
• The de-stage process is supported by batteries during power failures. Write cache will
not be re-enabled until the batteries are sufficiently recharged to support another cache
de-stage.
• The following conditions must be met for write-cache to be enabled (collected into a single check in the sketch after this list):
o There must be a standby power supply present, and it must be fully charged.
o At least 4 vault drives must be present (all 5 if 'Non-HA' option is not selected);
they cannot be faulted or rebuilding.
o The ability to keep write cache enabled when a single vault drive fails is optional
under R12 and later.
o Both storage processors must be present and functional.
o Both power supplies must be present in the DPE/SPE.
o Both fan packs must be present in the DPE/SPE.
o The DPE/SPE and all DAEs must have two non-faulted link control cards (LCC)
each.
• Each data block on a CLARiiON contains 8 bytes of error checking data.
o The 8 bytes consist of the LRC, Shedstamp, Writestamp and Timestamp.
• SNiiFER runs in the background and continuously checks all data blocks for errors.
• Updates to the array SW are non-disruptive from a host perspective.
• Failure of an SP results in all LUNs owned by that SP being trespassed to the other SP
(assuming PowerPath is running on the host(s) accessing those LUNs).
• Slightly higher (0.0025%) reliability can be achieved using vertical RAID groups rather
than horizontal ones.
• Striping a RAID1 RG across multiple DAEs that include the first DAE (the one containing
the vault drives) is not recommended.
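
For reference, the write-cache enable conditions listed above can be collapsed into a single predicate. A minimal sketch, assuming invented field names; FLARE evaluates the real conditions internally.

    # The write-cache enable conditions above as one predicate (invented names).

    def write_cache_allowed(a):
        vault_needed = 4 if a["non_ha_vault_option"] else 5
        return (a["both_sps_ok"]                       # SPs present and functional
                and a["standby_ps_present"] and a["standby_ps_charged"]
                and a["good_vault_drives"] >= vault_needed
                and not a["vault_drive_rebuilding"]
                and a["dpe_power_supplies"] == 2
                and a["dpe_fan_packs"] == 2
                and a["faulted_lccs"] == 0)            # across the DPE/SPE and DAEs

    array_state = {"non_ha_vault_option": False, "both_sps_ok": True,
                   "standby_ps_present": True, "standby_ps_charged": True,
                   "good_vault_drives": 5, "vault_drive_rebuilding": False,
                   "dpe_power_supplies": 2, "dpe_fan_packs": 2, "faulted_lccs": 0}
    print(write_cache_allowed(array_state))    # True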

Back To Index

Security Features

• Navisphere
o Arrays can be configured into domains to control who can manage them.
o Named role-based accounts.
o Roles are Read Only (Monitor), Manager and Security Manager.
o All management communications with array are encrypted with 128-bit SSL.
o All actions performed on an array are logged by username@hostname.
• NaviCLI
o username@hostname is authenticated against the privileged user list of the Navi agent (on the host for pre-FC4700 arrays, on the SP for all others).
o No encryption.
o Password is sent in the clear.
o Communicates on TCP/IP port 6389.

Back To Index

Software

• Navisphere
• NaviCLI
• Navisphere Integrator
• Navisphere Analyzer
• LUN Masking
• SnapView
• MirrorView/S
• MirrorView/A
• SAN Copy
• PowerPath
• CLARalert/OnAlert

Back To Index

Navisphere (6.19)

Array-based package for managing CLARiiON arrays.

• Web (Java) based.
• V6.19 requires JRE 1.4.1 (or 1.4.2, depending on OS) or higher for the browser.
• Navisphere Base allows management of a single array.
• Navisphere Manager allows management of up to 100 arrays from the array running
Manager.
o Only 1 instance of Navisphere Manager is required per domain.
• Provides Windows-like look-and-feel.
• Layered SW appears in same tree view as standard management functions.
• Update interval is adjustable from the setup page (http://<array IP address>/setup).
• Navisphere Manager can also be installed on a Windows server in the environment and
disabled on the arrays.
• Navisphere Manager can manage arrays in multiple domains from a single window.

Back To SW Index

NaviCLI (6.19)

Host-based utility that provides command line control of CLARiiON arrays.

• All management functions are available.
• Useful for scripting CLARiiON management.
• Communicates with array out-of-band (via LAN).
• There are currently 3 versions of NaviCLI available - Classic, Java and Secure.

                  Classic          Java                   Secure
Implemented       C++              Java                   C++
Comm Port         Port 6389        Port 443 or 2163 (SSL) Port 443 or 2163 (SSL)
Security          Basic            Navi 6.x               Navi 6.x
Addtl. SW Reqd.   None             Java JRE               None
OS's              Win2K/3, Linux,  Win2K/3, Linux,        Win2K/3, Linux,
                  Solaris, AIX,    Solaris                Solaris, AIX,
                  HP-UX, NW                               HP-UX, NW
Back To SW Index

Navisphere Integrator (V6.19)

Host-based package for integrating CLARiiON management into 3rd party packages.

• Available for Windows hosts only.
• Provides SNMP MIB integration for CLARiiON.
• Supports:
o HP OpenView
o CA Unicenter
o Tivoli NetView

Back To SW Index

Navisphere Analyzer (V6.19)

Provides detailed performance metrics.

• Two parts - Provider and UI.
o Provider - gathers data; must be installed on any array to be analyzed.
o UI - displays gathered data; only one copy required per domain.
• Can view real-time or archived data.
• Support for metaLUNs - gathers the same performance statistics as for standard LUNs.
• Polling interval can be changed by the domain Administrator.
• Performance logs (up to the previous 24 hours) can be downloaded to a host via the CLI in either .nar format or ASCII text .csv format.
• Stores previous 24 hours of perf statistics on array.
• Data can be viewed at the following object levels:
o Disk
o SP
o LUN
o metaLUN
o RAID group
o Storage group
o Array
o SnapView sessions
o SAN Copy sessions

Back To SW Index

LUN Masking

FLARE provides LUN masking to control host access to LUNs.

• Allows hosts with different OSs to connect to the same array port (through a switch).
• Hosts and LUNs are combined into Storage Groups.
• Allows assignment of a Storage Group to more than one host (for clustering).
• A Storage Group can be assigned up to 256 LUs.
• Supports multiple paths to the Storage Group (in conjunction with host-based PowerPath).
• Disallows changing or deleting the hidden Management storage group.
• Disallows deleting a storage group that has hosts assigned to it.
• Disallows deleting a storage group that has LUNs in it.
• Disallows unbinding a LUN that is in a storage group.
• When activated, changes Default Storage Group to Management Storage Group.
o Management Storage Group is a communications mechanism only (LUN 0 or
LUN Z).
o It never contains any actual LUNs.
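
A minimal Python sketch of the Storage Group rules above. The structures and names are hypothetical; Access Logix enforces the real checks on the array.

    # Minimal sketch of the Storage Group constraints (hypothetical structures).

    MAX_LUS_PER_SG = 256

    def add_lun(sg, lun):
        if len(sg["luns"]) >= MAX_LUS_PER_SG:
            raise ValueError("a Storage Group can be assigned at most 256 LUs")
        sg["luns"].add(lun)

    def delete_sg(sg):
        if sg["name"] == "Management":
            raise ValueError("the Management storage group cannot be deleted")
        if sg["hosts"] or sg["luns"]:
            raise ValueError("storage groups with hosts or LUNs cannot be deleted")

    def unbind_lun(all_sgs, lun):
        if any(lun in sg["luns"] for sg in all_sgs):
            raise ValueError("a LUN that is in a storage group cannot be unbound")

    sg = {"name": "prod_sg", "hosts": {"server1"}, "luns": set()}
    add_lun(sg, 25)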

Back To SW Index

SnapView (V2.19)

Provides Snapshots (point-in-time, pointer-based copies of LUNs with copy-on-first-write functionality) and BCVs (point-in-time full copies, aka clones).
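
The copy-on-first-write mechanism can be sketched in a few lines of Python. This is a chunk-granular toy model with invented names; the array's real chunk size, reserved LUN layout and metadata differ.

    # Chunk-granular copy-on-first-write, as a toy model (not array code).

    class SnapSession:
        def __init__(self, source):
            self.source = source      # dict: chunk number -> data
            self.reserved = {}        # reserved LUN area: saved originals

        def host_write(self, chunk, data):
            if chunk not in self.reserved:          # first write since snap?
                self.reserved[chunk] = self.source.get(chunk)  # save original
            self.source[chunk] = data               # then apply the host write

        def snapshot_read(self, chunk):
            # The snapshot view: saved original if one exists, else the source.
            if chunk in self.reserved:
                return self.reserved[chunk]
            return self.source.get(chunk)

    s = SnapSession({0: "old"})
    s.host_write(0, "new")
    print(s.snapshot_read(0))    # 'old' - point-in-time view preserved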

Array              CX700  CX500  CX300
Snaps/LUN          8      8      8
Snap Srcs/Array    100    50     25
Snaps/Array        300    150    100
Sessions/LUN       8      8      8
Reserved LUNs      100    50     25
BCVs/Source        8      8      8
BCV Groups         50     25     25
BCV Sources/Array  100    50     25
BCV Images/Array   200    100    50
Snaps/BCV          8      8      8
BCV Priv LUNs      2 req  2 req  2 req

SnapView Notes:
• SnapView is available for the CX300/400/500/600 and CX700.
• The combined total number of BCV or MirrorView images (source or target) cannot
exceed 50 for the CX500 and 100 for the CX700.
• Snapshot persistence across reboots is optional per snap session.
• AdmSnap is an extended command line interface (CLI) for SnapView.
o Communicates in-band with array (via SAN).
o Adds a higher degree of host integration (e.g. cache flushing).
• In order to mount Snapshots/BCVs on the same host as their source, Replication
Manager must be used to properly modify drive signatures.
• BCVs must be fractured before they can be accessed by a host.
• Snap rollback allows a source LUN to be instantly restored from any snapshot of that
source.
• Write changes to a snapshot can be optionally rolled back with the snapshot.
• BCVs can be incrementally updated from source.
• A Clone Group contains a source LUN and all of its clones.
• Instant restore allows a source LUN to be instantly restored from a BCV.
• Snapshots and BCVs can be mounted and written to on an alternate host. Any data in the
original snapshot/BCV is preserved when writes occur.
• If the snapshot/BCV has been written to before performing a snap rollback, the user has
the option of keeping or deleting any writes that have occurred.
• A protected restore option is available to prevent any new writes to the source LUN from
going to the BCV being used during a reverse synchronization process.
• Each source LUN with SnapView sessions requires a minimum of 1 reserved LUN.
Multiple sessions for that LUN can use the same reserved LUN(s).
• A single snap session for multiple LUNs can be created with a single command (GUI or
CLI) to ensure consistent snapshots across LUNs.
o Snap sessions can be both consistent and persistent.
o A max of 16 snap sessions can be started with a single command on a CX600/700 (8 for all other supported models).
o A max of 32 consistent set operations (BCVs & Snaps combined) can be in
progress simultaneously per storage system.
• BCVs for multiple LUNs can be fractured with a single command (GUI or CLI) to ensure
consistent BCVs across LUNs.
o BCVs in a consistent fracture operation must be in different Clone Groups.
o After the consistent fracture completes, there is no group association between the
BCVs.
o During the consistent fracture operation, if there is a failure on any of the clones, the consistent fracture will fail on all of the clones.
o If any clones within the set were fractured prior to the failure, SnapView will re-
synchronize those clones.
o A max of 16 BCVs can be fractured with a single command on a CX600/700 (8 for all other supported models).
o A max of 32 consistent set operations (BCVs & Snaps combined) can be in
progress simultaneously per storage system.

Back To SW Index

MirrorView/S (V2.19)

Provides full-copy synchronous remote mirroring of LUNs between 2 or more arrays.

• Available for CX400/500/600/700 and FC4700.
• Supports mirroring between different CLARiiON models (FC4700, CX400/500/600/700).
• Supports synchronous mirroring only.
• Fracture log maintains pointer-based change information on primary if link is fractured.
• Fracture log is not persistent across SP reboots.
• The Write Intent Log (WIL) is a persistent record of in-flight writes headed to the primary (source) and secondary (target) images. Because it is persistent, this record can be used to recreate the Fracture Log in the event of an SP reboot (see the sketch after this list).
• WIL resides in a standard LUN (user can specify the RAID group).
• The Secondary (target) image cannot be brought online except by promoting it to Primary.
• You can create snapshots of both the source and target LUNs. BCVs of a MirrorView
source/target are not supported.
• Consistency Groups ensure that all LUNs in the set are mirrored and controlled
consistently as a set:
o Consistency Groups must be explicitly created.
o To add a mirror set to a consistency group, it must be in a synchronized state.
o Max of 16 LUNs in a consistency group on a CX600/700, 8 on a CX400/500.
o Max of 16 consistency groups on a CX600/700, 8 on a CX400/500.
o Max of 100 total LUNs participating in consistency sets on a CX600/700, 50 on a
CX400/500.
• Configuration rules:
o One primary image and zero, one, or two secondary images per mirror.
o A storage system can have only one image of a mirror.
o A storage system can have mirroring connections to a max of four other storage
systems concurrently.
o A max of 200 images (total of primary and secondary images; 100 on CX500) per
storage system with a max of 100 (50 for CX400) primary images configured to
use the Write Intent Log.
o To manage remote mirror configurations, the Navisphere management
workstation must have an IP connection to both the local and remote storage
systems. The connection to the remote storage system should have an effective
bandwidth of at least 128 Kbits/second.
o Requires LUN masking.
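
A toy model of the Write Intent Log idea described above: intents are recorded persistently before a mirrored write is applied, so any intents still outstanding after an SP reboot can seed the rebuilt Fracture Log. Names and structures are invented; the on-array implementation differs.

    # Toy model of the Write Intent Log (invented names, not array code).

    write_intent_log = set()    # persistent (survives SP reboot)
    fracture_log = set()        # in-memory change map (lost on reboot)

    def mirrored_write(extent, write_primary, write_secondary):
        write_intent_log.add(extent)      # 1. record the intent persistently
        write_primary(extent)             # 2. apply to the primary image
        write_secondary(extent)           # 3. apply to the secondary image
        write_intent_log.discard(extent)  # 4. clear the intent on completion

    def recover_after_sp_reboot():
        # Any extent still in the WIL may differ between the two images.
        fracture_log.update(write_intent_log)

    mirrored_write((0, 128), lambda e: None, lambda e: None)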

Back To SW Index

MirrorView/A (V2.19)

Provides full-copy asynchronous remote mirroring of LUNs between 2 or more arrays.

• Utilizes delta set technology to track changes between transfer cycles; whatever changed during a cycle is what is transferred (see the sketch after this list).
• Available for CX400/500/600/700.
• Supports mirroring between different CLARiiON models (CX400/500/600/700).
• Supports consistency groups (Note: all LUNs in a consistency group are consistent relative to each other, not necessarily to the application's view of the data).
• Once mirrors are in a Consistency Group, you cannot fracture, synchronize, promote, or
destroy individual mirrors that are in the Consistency Group.
• All secondary images in a Consistency Group must be on the same remote storage
system.
• CX400/CX500 systems are limited to a max of 8 consistency groups with a max of 8 mirrors per group. CX600/CX700 systems are limited to a max of 16 consistency groups with a max of 16 mirrors per group.
• You can create snapshots of both the source and target LUNs. BCVs of a MirrorView
source/target are not supported.
• Each source LUN with a MirrorView/A session requires a minimum of 1 reserved LUN.
Multiple sessions for that LUN can use the same reserved LUN(s).
• Configuration rules:
o Each mirror can have one primary image and zero or one secondary images. Any
single storage system can have only one image of a mirror.
o A storage system can have mirroring connections to a max of four other storage
systems concurrently. (Mirroring connections are common between synchronous
and asynchronous mirrors.)
o You can configure a max of 50 primary and secondary images on CX400 and
CX500 storage systems and a max of 100 primary and secondary images on
CX600 and CX700 storage systems. The total number of primary and secondary
images on the storage system make up this max number.
o To manage remote mirror configurations, the Navisphere management
workstation must have an IP connection to both the local and remote storage
systems. The connection to the remote storage system should have an effective
bandwidth of at least 128 Kbits/second.
o The local and remote storage systems do not need to be in the same Navisphere
domain.
o You must have the MirrorView/A and Access Logix software installed and
enabled on all storage systems you want to participate in a mirror.
o Requires LUN masking.
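
The delta-set cycle can be sketched as alternating change maps: host writes accumulate in one while the previous one is transferred. An illustrative Python sketch with invented granularity and names.

    # Toy model of delta-set tracking between transfer cycles (not array code).

    tracking = set()       # delta set being built from incoming host writes

    def host_write(extent):
        tracking.add(extent)

    def transfer_cycle(send):
        global tracking
        delta, tracking = tracking, set()   # freeze the delta, start a new one
        for extent in sorted(delta):
            send(extent)                    # only what changed crosses the link

    host_write(7); host_write(42)
    transfer_cycle(lambda e: print("sending extent", e))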

Back To SW Index

SAN Copy (V2.19)

Provides LUN copy across the SAN between arrays.

• Supports both full LUN copies and incremental updates.
• Supports copies between CLARiiONs, within CLARiiONs, between CLARiiON and
Symmetrix and between CLARiiON and 3rd-party arrays.
• For full copies, either the source or the target LUN must reside on the CLARiiON
running SAN Copy.
• For incremental copy sessions the source LUN must reside on the array running SAN
Copy.
• For incremental copy sessions the target LUN must be initially populated as an exact
block-by-block copy of the source LUN. SAN Copy provides a bulk copy capability
(called 'No Mark') to accomplish this via NaviCLI.
• Target of copy must be at least as large as source (can be larger).
• For full copies the source of a copy must be quiesced when the copy occurs (no writes).
• A session descriptor defines a copy process - it contains source, target(s) & config info.
• Sources for full copy sessions can be LUNs, metaLUNs, snapshots or BCVs (both
CLARiiON & Symm BCVs).
• Sources for incremental copy sessions can be LUNs, metaLUNs, or BCVs (on the array
running SAN Copy).
• If copying within a single array, source and target must be owned by the same SP.
• Session definitions are persistent.
• All CLARiiON arrays involved in a SAN Copy session must have LUN masking
configured.
• You must correctly zone SAN Copy ports to remote storage systems so that SAN Copy
can have access to these systems.
• Both GUI and CLI available for management and scripting.
• Resource utilization can be throttled per session (1-10).
• Supports copies over FCIP/iFCP.
• Tracks failures of access to targets and resumes from last checkpoint taken before failure.
• Sessions can be paused and resumed.
• The max number of full SAN Copy sessions/targets per array is limited. Use the following formula: (# sessions x 448) + (# extra targets x 72) must be less than 204,800, where each session includes 1 target (see the worked check after this list).
• Configuration rules:
o Max of 8 active sessions/SP for CX500, 16 active sessions/SP for CX700.
o Each source LUN with an incremental SAN Copy session requires a minimum of
1 reserved LUN. Multiple sessions for that LUN can use the same reserved
LUN(s).
o Sessions can copy a single source to multiple targets in parallel (50 for CX500, 100 for CX700).
• SAN Copy/E is a 'lite' version of SAN Copy designed to provide incremental push capabilities from a CX300/AX100 to a CX4/5/6/700 running SAN Copy.
• SAN Copy/E on a CX300 Configuration rules:
o Max of 4 concurrent sessions
o Max of 50 destination LUNs per source
o Max of 25 incremental sessions
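
A worked check of the session-limit formula above, in Python:

    # Worked check of the full-session limit formula.

    def sancopy_fits(sessions, extra_targets):
        return sessions * 448 + extra_targets * 72 < 204800

    print(sancopy_fits(100, 0))    # True:  100*448 = 44,800
    print(sancopy_fits(450, 50))   # False: 450*448 + 50*72 = 205,200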

Back To SW Index

PowerPath (4.4)

Provides host-based path load-balancing and failover.

• Provides 8 different load balancing policies (2 are Symm unique); three of the generic ones are sketched after this list.
o Basic - load balancing disabled
o CLARiiON optimized - Optimizes based on CLARiiON architecture
o Least blocks - Sends next I/O using path with fewest pending blocks of I/O
o Least I/O - Sends next I/O using path with fewest pending number of I/Os
o No redirect - load balancing and path failover disabled (Symm only)
o Request - Uses OS default path (as if no PowerPath)
o Round Robin - Each path is used in sequence
o Symmetrix optimized - Optimizes based on Symmetrix architecture (Symm only)
• Supports prioritization of I/Os based on LUN target.
• Supports both FC and iSCSI arrays.
• Single host can access CLARiiON, Symmetrix and 3rd-party arrays.
• Provides automatic path failover.
• Uses standard HBA drivers.
• Supports up to 16 HBAs per host.
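
Three of the generic policies can be illustrated with simple selectors. This is not PowerPath code; the path records and names are invented for the sketch.

    # Illustrative selectors for three of the policies above (not PowerPath code).

    from itertools import cycle

    paths = [{"name": "path0", "pending_blocks": 64, "pending_ios": 2},
             {"name": "path1", "pending_blocks": 8,  "pending_ios": 5}]
    rr = cycle(paths)

    def pick_round_robin():
        return next(rr)    # each path is used in sequence

    def pick_least_blocks():
        return min(paths, key=lambda p: p["pending_blocks"])   # path1 here

    def pick_least_io():
        return min(paths, key=lambda p: p["pending_ios"])      # path0 here
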
Back To SW Index

CLARalert (6.2)

Provides remote support for CLARiiON arrays.

• Components:
o CLARalert
o Navisphere host agent or NaviCLI
• OnAlert is no longer used for CLARalert dial-home.
• Notifications can be sent via dial-up or email.
• Dial-up requires a Windows NT/2000 management station with a modem.
• Email can be sent via either a Windows or Sun/Solaris station.
• Max monitored systems per central monitor is 1000.

Back To SW Index

Definitions

• BCC - Bridge Control Card
• BCV - Business Continuance Volume (clone)
• BE - Back-end FC-AL Fibre Channel loops
• CMI - CLARiiON Messaging Interface.
• DAE - Disk Array Enclosure (FC4n00)
• DAE2 - Disk Array Enclosure 2 (CXn00)
• DPE - Disk Processor Enclosure (FC4700)
• FE - Front-end Fibre Channel ports
• LCC - Link Control Card (FC-AL interface)
• LRC - Longitudinal Redundancy Check
• LU(N) - Logical Unit (Number)
• NS - Not Supported
• PSM - Persistent Storage Manager
• RG - RAID Group
• SG - Storage Group
• SP - Storage Processor
• WIL - Write Intent Log (MirrorView)
