What is the difference between active-active and active-passive devices?
We have storage array products from different vendors. Everyone talks about active-active and active-
passive device technology. With different types of storage arrays and host connection types, it is
important to understand the difference between active-active and active-passive devices. Here is a
short explanation of the differences:
Active-active (for example, Symmetrix arrays)
In an active-active storage system, if there are multiple interfaces to a logical device, they all provide
equal access to the logical device. Active-active means that all interfaces to a device are active
simultaneously.
Active-passive (for example, CLARiiON arrays)
Active-passive means that only one interface to a device is active at a time, and any others are
passive with respect to that device and waiting to take over if needed.
In an active-passive storage system, if there are multiple interfaces to a logical device, one of them is
designated as the primary route to the device (that is, the device is assigned to that interface card).
Typically, assigned devices are distributed equally among interface cards. I/O is not directed to paths
connected to a non-assigned interface. Normal access to a device through any interface card other
than its assigned one is either impossible (for example, on CLARiiON arrays) or possible, but much
slower than access through the assigned interface card.
In the event of a failure, logical devices must be moved to another interface. If an interface card fails,
logical devices are reassigned from the broken interface to another interface. This reassignment is
initiated by the other, functioning interface. If all paths from a host to an interface fail, logical devices
accessed on those paths are reassigned to another interface with which the host can still
communicate. This reassignment is initiated by either Application-Transparent Failover (ATF) or
PowerPath, which instructs the storage system to make the reassignment. These reassignments are
known as trespassing. Trespassing can take several seconds to complete; however, I/Os do not fail
during it. After devices are trespassed, ATF or PowerPath detects the change and seamlessly routes
data via the new route. After a trespass, logical devices can be trespassed back to their assigned
interface. This occurs automatically if PowerPath's periodic autorestore feature is enabled, or
manually if powermt restore is run, which is the faster approach. If ATF is in use, a manual
restore of the path can be executed to restore the original path.
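For example, on a host running PowerPath, checking path state and manually restoring after a trespass might look like this (a sketch; exact syntax and output vary by platform and PowerPath version):
# powermt display dev=all     # show all managed devices and the state of each path
# powermt restore             # test failed paths and restore trespassed devices to their default interface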
You can create up to four mirrors for each Symmetrix device. The mirror positions are designated M1,
M2, M3 and M4. When we create a device and specify its configuration type, the Symmetrix system
maps the device to one or more complete disks, or parts of disks, known as hypervolumes (hypers). As
a rule, a device maps to at least two mirrors, that is, hypers on two different disks, to maintain multiple
copies of data.
Different types of Volume Control Manager Database (VCMDB) available on DMX?
Generally we never give any thought to the VCMDB once we have initialized it the first time. It starts to matter when you mess
something up or something disastrous happens. This database is most important for DMX: once you lose it, you cannot get the DMX
configuration back at any cost. So, I am discussing the different types of VCMDB on DMX.
We can now support up to 16,000/64,000 addressable devices from Enginuity 5771 onward, and therefore the Volume Control Manager
Database needs to be physically larger. At 5670, per EMC recommendation, CEs were encouraged to create a 96 cylinder (minimum)
VCMDB during new installs. This was to cater for future upgrades to 5671.
• Type 3 - this can cater for 32 fibre or iSCSI initiators per port. Introduced with Enginuity 5669 and requires a 24 cylinder (minimum)
VCMDB and Solutions Enabler v5.2.
• Type 4 - this can cater for 64 fibre or 128 iSCSI initiators per port. Introduced with Enginuity 5670 and requires a 48 cylinder
(minimum) VCMDB and Solutions Enabler v5.3.
• Type 5 - this can support 64 fibre or 128 iSCSI initiators per port AND cater for 16,000 devices. Introduced with Enginuity 5671 and
requires a 96 cylinder (minimum) VCMDB and Solutions Enabler v6.0. (Note: without a type 5 96cyl VCMDB and SE 6.0 you will be
restricted to 8192 logical volumes as in 5670).
• Type 6 - this can support 128 fibre or 256 iSCSI initiators per port AND cater for 32,000 devices available on DMX-3 with Enginuity
5771 (at GA release). Currently the Type 6 database (at latest Enginuity 5771 with Solution Enabler 6.0 and above) will cater for 256
fibre or 512 iSCSI initiators and 64,000 logical devices.
The three requirements for a Type 5 VCM database on DMX (and support for up to 16,000 customer-addressable volumes) are a
correctly configured 96 cylinder VCMDB device, Enginuity 5671, and Solutions Enabler v6.0 or above. Note that the VCMDB "type"
reflects the internal data structure of the Volume Control Manager Database. Therefore a 96 cylinder VCMDB size does NOT mean that
you have a Type 5 VCMDB.
Note:
• At 5670 with a 48 cylinder VCMDB it is still type 4.
• At 5670 with a 96 cylinder VCMDB it is still type 4.
• At 5670 with a 96 cylinder VCMDB and SE 6.0 it is still type 4 - do not try to convert the database using the SYMCLI (EMC do not
support more than 8192 logical volumes at 5670).
• At 5671 with a 48 cylinder VCMDB and SE 6.0 it is still type 4 - the VCMDB is NOT physically large enough.
• At 5671 with a 96 cylinder VCMDB and SE 5.5 it is still type 4 - the VCMDB is large enough but SE 5.5 does not support the Type 5
database.
• At 5671 with a 96 cylinder VCMDB and SE 6.0 it is a Type 5 database - if you have run the "symmaskdb convert -vcm_type 5"
command (a command sketch follows these notes). Be aware that if you convert from a lower type database to a higher type, any hosts
running a Solutions Enabler version that does not support the higher VCMDB type will NOT be able to access the "new" database.
• At 5771 (DMX-3) the VCMDB data now resides in the SFS volumes. At 5771 the VCMDB should be configured the SAME size as a
standard FBA gatekeeper (this can be 3 cylinders due to the 64KB track size but 6 cylinder, as recommended in some guides, is also
perfectly acceptable) but it must still be assigned the VCM fibre gatekeeper status. Note that the VCMDB "gatekeeper" on DMX-3 is no
longer shown as "write disabled" (it is now a "gatekeeper" rather than a physical volume used for physical storage - the Volume Control
Manager data is protected and stored on the internal SFS volumes).
• Note that Enginuity 5771 will ONLY support a Type 6 VCM database (again the data is resident on the SFS volumes). This re-location
of the physical database to the SFS volumes caters for the increased host connectivity AND the increase in logical volumes supported
with DMX-3.
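As a sketch of that conversion from the host (the Symmetrix ID and backup file name are illustrative; always take a VCMDB backup before converting):
# symmaskdb -sid 123 backup -file vcmdb_before_convert.bin
# symmaskdb -sid 123 convert -vcm_type 5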
EMC announced Flash drive support in the CX-4 (a new-generation CLARiiON) and the DMX-4, and started supporting tier 0
with Flash drives. Flash drives provide maximum performance for latency-sensitive applications. Flash drives, also
referred to as solid state drives (SSD), contain no moving parts and appear as standard Fibre Channel drives to existing
Symmetrix management tools, allowing administrators to manage tier 0 without special processes or custom tools.
Tier 0 Flash storage is ideally suited for applications with high transaction rates and those requiring the fastest
possible retrieval and storage of data, such as currency exchange and electronic trading systems, or real time data
feed processing.
A Symmetrix DMX-4 with Flash drives can deliver single-millisecond application response times and up to 30 times
more IOPS than traditional 15,000 rpm Fibre Channel disk drives. Additionally, because there are no mechanical
components, Flash drives require up to 98 percent less energy per IOPS than traditional disk drives. Database
acceleration is one example of Flash drive performance impact. Flash drive storage can be used to accelerate online
transaction processing (OLTP), accelerating performance with large indices and frequently accessed database tables.
Examples of OLTP applications include Oracle and DB2 databases, and SAP R/3. Flash drives can also improve
performance in batch processing and shorten batch processing windows.
Flash drive performance will help any application that needs the lowest latency possible. Examples include
• Algorithmic trading
• Currency exchange and arbitrage
• Trade optimization
• Realtime data/feed processing
• Contextual web advertising
• Other realtime transaction systems
• Data modeling
Flash drives are most beneficial with random read misses (RRM). If the RRM percentage is low, Flash drives may show
less benefit since writes and sequential reads/writes already leverage Symmetrix cache to achieve the lowest possible
response times. The local EMC SPEED Guru can do a performance analysis of the current workload to determine how
the customer may benefit from Flash drives. Write response times of long distance SRDF/S replication could be high
relative to response times from Flash drives. Flash drives cannot help with reducing response time due to long distance
replication. However, read misses still enjoy low response times.
Flash drives can be used as clone source and target volumes. Flash drives can be used as SNAP source volumes.
Virtual LUN Migration supports migrating volumes to and from Flash drives. Flash drives can be used with SRDF/s and
SRDF/A. Metavolumes can be configured on Flash drives as long as all of the logicals in the metagroup are on Flash
drives.
Due to the new nature of the technology, not all Symmetrix functions are currently
supported on Flash drives. The following is a list of the current limitations and restrictions
of Flash drives.
• Delta Set Extension and SNAP pools cannot be configured on Flash drives.
• RAID 1 and RAID 6 protection, as well as unprotected volumes, are currently
not supported with Flash drives.
• TimeFinder/Mirror is currently not supported with Flash drives.
• iSeries volumes currently cannot be configured on Flash drives.
• Open Replicator of volumes configured on Flash drives is not currently
supported.
• Secure Erase of Flash drives is not currently supported.
• Compatible Flash for z/OS and Compatible Native Flash for z/OS are not
currently supported.
• TPF is not currently supported.
There are basically two types of zoning: hard zoning and soft zoning. But let's first define what zoning is.
Zoning is a map of host-to-device and device-to-device connectivity overlaid on the storage networking fabric, reducing the risk of
unauthorized access. Zoning supports the grouping of hosts, switches, and storage on the SAN, limiting access between members of
one zone and resources in another.
Zoning also restricts the damage from unintentional errors that can corrupt storage allocations or destabilize the network. For example,
if a Microsoft Windows server is mistakenly connected to a fabric dedicated to UNIX applications, the Windows server will write header
information to each visible LUN, corrupting the storage for the UNIX servers. Similarly, Fibre Channel registered state change
notifications (RSCNs), which keep SAN entities apprised of configuration changes, can
sometimes destabilize the fabric. Under certain circumstances, an RSCN storm will overwhelm a
switch’s ability to process configuration changes, affecting SAN performance and availability for
all users. Zoning can limit RSCN messages to the zone affected by the change, improving overall
SAN availability.
By segregating the SAN, zoning protects applications against data corruption, accidental access,
and instability. However, zoning has several drawbacks that constrain large-scale consolidated
infrastructures.
Let's now discuss the types of zoning and their pros and cons.
As mentioned earlier, zoning basically has two types (you could say three, but only two are popular in the industry).
Soft Zoning : Soft zoning uses the name server to enforce zoning. The World Wide Name (WWN) of the elements enforces the
configuration policy.
Pros:
- Administrators can move devices to different switch ports without manually reconfiguring
zoning. This gives administrators major flexibility: once you create a zone set for a particular device connected to the
switch and allocate storage to the host, you can move the device to any port without changing the zoning.
Cons:
- Devices might be able to spoof the WWN and access otherwise restricted resources.
- Device WWN changes, such as the installation of a new Host Bus Adapter (HBA) card, require
policy modifications.
- Because the switch does not control data transfers, it cannot prevent incompatible HBA
devices from bypassing the Name Server and talking directly to hosts.
Hard Zoning: Hard zoning uses the physical fabric port number of a switch to create zones and enforce the policy.
Pros:
- This system is easier to create and manage than a long list of element WWNs.
- Switch hardware enforces data transfers and ensures that no traffic goes between
unauthorized zone members.
- Hard zoning provides stronger enforcement of the policy (assuming physical security on the
switch is well established).
Cons:
- Moving devices to different switch ports requires policy modifications.
If you ask me how to choose the zoning type, it depends on the SAN requirements in your data center environment. Port zoning is
more secure, but you have to be sure that devices are not going to move; otherwise, every time you change a storage allocation you
have to modify your zoning.
Soft zoning is generally what is used in industry, but as I have mentioned, it has several cons. So it is hard to say which one you should
always use: analyze your data center environment and use the appropriate zoning.
Broadcast zoning (the third type) is used in large environments with multiple fabric domains.
Having said that, zoning can be enforced by either port number or WWN, but not both. When both a port number and a WWN specify a
zone, it is a software-enforced zone. Hardware-enforced zoning is enforced at the Name Server level and in the ASIC: each ASIC
maintains a list of source port IDs that have permission to access any of the ports on that ASIC. Software-enforced zoning is exclusively
enforced through selective information presented to end nodes through the fabric Simple Name Server (SNS).
If you know about switches, you will have noticed that Cisco has the FCNS database and Brocade has the Name Server. Both serve the
same purpose: to store information about ports and other fabric entities. FCNS stands for Fibre Channel Name Server.
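To make the soft vs. hard distinction concrete, here is a sketch of Brocade CLI zoning; the zone names, config name, HBA WWN, array WWN, and "domain,port" members are all illustrative:
# zonecreate "host1_fa16a", "10:00:00:00:c9:aa:bb:cc; 50:06:04:82:b8:2f:96:54"    # WWN (soft) zone
# zonecreate "host1_fa16a_port", "1,5; 2,3"                                       # port (hard) zone
# cfgcreate "prod_cfg", "host1_fa16a; host1_fa16a_port"
# cfgsave
# cfgenable "prod_cfg"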
There are plenty of features on the switch itself to protect your SAN environment, and each vendor comes with a different security
policy. Zoning is the basic mechanism for securing data access.
Hope this info is useful for beginners. Please leave a comment if you want to know about specific things.
Fibre Channel is nothing but a medium to connect hosts and shared storage. When we talk about
SAN, the first thing that comes to mind is Fibre Channel.
Fibre Channel is a serial data transfer interface intended for connecting shared storage to computers
where the storage is not physically attached to the host.
Why is FC so important in SAN? Because FC gives you high speed through the following process:
1) Networking and I/O protocols, such as SCSI commands, are mapped to FC constructs.
2) They are encapsulated and transported within FC frames.
3) With this, the high-speed transfer of multiple protocols is possible over the same physical interface.
FC operates over copper wire or optical fibre at rates of up to 4 Gb/s, and up to 10 Gb/s when used as an
ISL (E_Port) on supported switches. At the same time, latency is kept very low, minimizing the delay
between data requests and deliveries. For example, the latency across a typical FC switch is only a
few microseconds. It is this combination of high speed and low latency that makes FC an ideal choice
for time-sensitive or transactional processing environments.
These attributes also support high scalability, allowing more storage systems and servers to be
interconnected. Fibre Channel also supports a variety of topologies, and is able to operate between
two devices in a simple point-to-point mode, in an economical arbitrated loop connecting up to 126
devices, or (most commonly) in a powerful switched fabric providing simultaneous full-speed
connections for many thousands of devices. Topologies and cable types can easily be mixed in the
same SAN.
FC is the most important element in building a SAN; it gives us the flexibility to use protocols such as FCP, FICON, and IP-based
protocols (iSCSI, FCIP, iFCP), and it uses block-type data transfer.
If we want to define FC: Fibre Channel is a storage area networking technology designed to
interconnect hosts and shared storage systems within the enterprise. It is a high-performance, high-
cost technology. iSCSI is an IP-based storage networking standard that has been touted for the wide
range of choices it offers in both performance and price.
Fibre Channel technology is a block-based networking approach based on ANSI standard X3.230-
1994 (ISO 14165-1). It specifies the interconnections and signaling needed to establish a network
"fabric" between servers, switches and storage subsystems such as disk arrays or tape libraries. FC
can carry virtually any kind of traffic.
However, there are some recognized disadvantages to FC. Fibre Channel has been widely criticized
for its expense and complexity. A specialized HBA card is needed for each server. Each HBA must
then connect to a corresponding port on a Fibre Channel switch, creating the SAN "fabric." Every
combination of HBA and switch port can cost thousands of dollars for the storage organization. This is
the primary reason why many organizations connect only large, high-end storage systems to their
SAN. Once LUNs are created in storage, they must be zoned and masked to ensure that they are
only accessible to the proper servers or applications, often an onerous and error-prone procedure.
These processes add complexity and costly management overhead to Fibre Channel SANs.
When running inq or syminq, you'll see a column titled Ser Num. This column has quite a bit of
information hiding in it.
An example syminq output is below. Your output will differ slightly as I'm creating a table from a book to
show this; I don't currently have access to a system where I can get the actual output just yet.
Using the first and last serial numbers as examples, the serial number is broken out as follows:
So, the first example, device 009 is mapped to director 15, processor A, port 0 while the second example has device 01A mapped to director
12, processor B, port 0.
Even if you don't buy any of the EMC software, you can get the inq command from their web
site. Understanding the serial numbers will help you get a better understanding of which ports
are going to which hosts. Understanding this and documenting it will circumvent hours of
rapturous cable tracings.
At Enginuity 5x71 there are two new entries on the IMPL Initialization screen in the
RDF Section. These pertain to SRDF/A or SNOW configurations.
• The "Estimated SRDFA cache required (in MB)" can range from 0 to FFFFFFFF (MB) this value should
always be "0" for non-SNOW (or normal) SRDF configurations. If SRDF/A is to be implemented this value should
reflect the ADDITIONAL cache added to support SRDF/A (this is the cache added, over and above the quantity
needed for the raw disk storage). The amount of additional cache added here is determined by Sales / TSG
when the SRDF/A configuration is being initially sized, i.e. it depends on the number and size of the SRDF/A
devices, it depends on the link bandwidth, it depends on the measured and anticipated peak I/O, etc. A general
rule of thumb is ~.75 GB of additional cache per 1 TB of SRDF/A protected data. The quantity of cache
required (added specifically for SRDF/A) must be provided here (remember, this is in HEX MB) to
ensure that the SymmWin correctly "polices" the amount of cache and cache layout for the raw
disk storage*. Should the total amount of cache be under configured then SYMMWIN will present a warning to
the user indicating what the correct amount of base cache should be based on the additional cache input via
this parameter. This setting is not used by the Enginuity to influence any performance or operability metrics.
*Please ensure that the CORRECT quantity (in HEX MB) is specified here. An incorrect value may impact your
next online drive upgrade (the script may red box for insufficient memory).
• "Snow cache usage" can range from 0 to FF with a default value of F0 (hex) or 94%. This is the percentage
of the maximum value of the System write pending (WP) cache that can be used by RDFA. When set to less
than 100% (FF in the bin) SRDF/A will drop BEFORE the System WP ceiling is reached, thereby allowing running
applications to retain some WP "ceiling". That is, should R1 peak I/O suddenly increase (and the SRDF/A cache
usage unexpectedly goes up) then SRDF/A will drop BEFORE driving cache usage to the System WP limit. Note
that the "snow max throttle time" must be left at the default of "0" for the "snow cache usage" to have any
affect.
The System WP ceiling or "Write Pending Ceiling (%)" value on the IMPL Initialization screen in the Memory Section
is CD of FF by default. This means that the WP ceiling is 80% of available user cache slots. This is not 80% of
physical cache, as the cache tables can take up a significant and variable amount of space (especially with DMX). The
"snow cache usage" is F0 of FF% of this value (or 94% of the System WP ceiling) OR 94% of 80% of available user
cache slots OR in this case 5% below the System WP limit.
To determine the actual value in cache slots you need to run an A1 on the running DMX, look at the CACHE SLOT
COUNT (this is the available user cache slots after the tables), and look at the value for the WP MAX-SYS slot count. The
WP MAX-SYS is typically 80% of the CACHE SLOT COUNT. The "snow cache usage" is therefore typically 94% of the WP
MAX-SYS value.
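A quick worked check of these numbers, using shell arithmetic as a sketch (the 10 TB of SRDF/A protected data is purely illustrative):
# printf '%X\n' $(( 10 * 768 ))     # ~0.75 GB per TB rule of thumb: 10 TB -> 7680 MB -> 1E00 hex MB
# echo $(( 0xCD * 100 / 0xFF ))     # WP ceiling of CD of FF -> ~80% of available user cache slots
# echo $(( 0xF0 * 100 / 0xFF ))     # snow cache usage of F0 of FF -> ~94% of the WP ceiling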
At Enginuity 5x71 there are two new entries on the RDF screen. These pertain to SRDF/A or SNOW configurations.
• The "Allow SRDF-A" flag can be set to YES or NO for every static RDF group configured in the bin file. At
5670 the "SNOW" flag on the SymmWin RDF screen dictates whether an SRDF group in SNOW capable.
However, at 5671 the "Allow SRDF-A" flag is NOT used by the Enginuity code to allow or disallow SRDF-A
operation. Any / all SRDF groups at 5x71 are SNOW or SRDF/A capable. However, this flag is required to ensure
that SymmWin policing is performed correctly for any SRDF group that is running in SRDF-A mode (see below
for more details) and this flag may also be interrogated by the SYMCLI software*.
• ErrCode 4000241E "All devices of a Allow SRDF-A group must have the same RDF type" (i.e. ONLY all R1
devices or all R2 devices are allowed in an SRDF/A group).
• ErrCode 4000242A "The rdf type of volumes in SRDF-A group of an Escon director should match the type of
director" (if using ESCON RA's you would receive this error if creating a "bi-directional" configuration with R1's
on an RA2 director or R2's on an RA1 director).
• ErrCode 4000241D "In concurrent RDF only one mirror can be SRDF-A. Device xxxx is concurrent RDF and it is
attached to 2 Allow SRDF-A groups".
• Note that the additional cache checking invoked by specifying a value for "Estimated SRDFA cache required (in
MB)" does NOT require the RDF groups to have "Allow SRDF-A" enabled.
• The "Snow Drop" value ranges from 00 to FF with a default value of 80 (hex). This is the "drop priority" for
this SRDF/A group. At 5x71 we can have multiple SRDF/A groups or sessions and the relative importance of the
individual sessions can be set in the bin file (or set via symconfigure and the session_priority parameter).
The priority determines which SRDF/A session are to be dropped first when cache resources become limited.
Normally set via the SYMCLI, the values are 1 to 64 (decimal), with 1 being the highest priority (or the last group
to drop).
Finally, what is the "rdfa_cache_percent" ?. This is the bin file "snow cache usage" value set via the Solutions
Enabler Configuration Component or symconfigure command e.g. "set symmetrix rdfa_cache_percentage = 94" will set
the "snow cache usage" value in the bin file to F0.
ALL of these settings can be updated ONLINE at 5x71 via the SYMCLI or a CE-applied Online Configuration Change (i.e.
typically a Change_Global_Flags).
symapierr - Used to translate SYMAPI error code numbers into SYMAPI error messages.
symaudit - List records from a symmetrix audit log file.
symbcv - Perform BCV support operations on Symmetrix BCV devices.
symcfg - Discover or display Symmetrix configuration information. Refresh the
host's Symmetrix database file or remove Symmetrix info from the file. Can also
be used to view or release a 'hanging' Symmetrix exclusive lock.
symchg - Monitor changes to Symmetrix devices or to logical objects stored on Symmetrix
devices.
symcli - Provides the version number and a brief description of the commands included in
the Symmetrix Command Line Interface.
symdev - Perform operations on a device given the device's Symmetrix name. Can also be
used to view Symmetrix device locks.
symdg - Perform operations on a device group (dg).
symdisk - Display information about the disks within a Symmetrix.
symdrv - List DRV devices on a Symmetrix.
symevent - Monitor or inspect the history of events within a Symmetrix.
symgate - Perform operations on a gatekeeper device.
symhost - Display host configuration information and performance statistics.
syminq - Issues a SCSI Inquiry command on one or all devices.
symlabel - Perform label support operations on a Symmetrix device.
symld - Perform operations on a device in a device group (dg).
symlmf - Registers SYMAPI license keys.
sympd - Perform operations on a device given the device's physical name.
symstat - Display statistics information about a Symmetrix, a Director, a device group, or a
device.
symreturn - Used for supplying return codes in pre-action and post-action script files.
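A few usage examples of the commands above, as a sketch (the Symmetrix ID 123 is illustrative):
# symcfg discover          # build or refresh the host's Symmetrix database file
# symcfg list              # list the Symmetrix arrays visible to this host
# symdev list -sid 123     # list devices on a given array
# symdg list               # list device groups defined on this host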
When a VSAN is configured for the default interoperability mode, the MDS 9000 Family of
switches is limited in the following areas when interoperating with non-MDS switches:
• Interop mode only affects the specified VSAN. The MDS 9000 switch can still operate with full
functionality in other non-interop mode VSANs. All switches that partake in the interoperable VSAN
should have that VSAN set to interop mode, even if they do not have any end devices.
• Domain IDs are restricted to the 97 to 127 range, to accommodate McData's nominal restriction to
this same range. Domain IDs can either be set up statically (the MDS 9000 switch will only accept one
domain ID; if it does not get that domain ID, it isolates itself from the fabric) or preferred (if the MDS
9000 switch does not get the requested domain ID, it takes any other domain ID).
• TE ports and PortChannels cannot be used to connect an MDS 9000 switch to a non-MDS switch.
Only E ports can be used to connect an MDS 9000 switch to a non-MDS switch. However, TE ports
and PortChannels can still be used to connect an MDS 9000 switch to other MDS 9000 switches, even
when in interop mode.
• Only the active zone set is distributed to other switches.
• In MDS SAN-OS Release 1.3(x), Fibre Channel timers can be set on a per-VSAN basis. Modifying
the timers, however, requires the VSAN to be suspended. Prior to SAN-OS Release 1.3, modifying
timers required all VSANs across the switch to be put into the suspended state.
• The MDS 9000 switch still supports the following zoning limits per switch across all VSANs:
When interoperability mode is set, the Brocade switch has the following limitations:
• Interop mode affects the entire switch. All switches in the fabric must have interop mode enabled.
• If there are no zones defined in the effective configuration, the default behavior of the fabric is to
allow no traffic to flow. If a device is not in a zone, it is isolated from other devices.
• Zoning can only be done with pWWNs. You cannot zone by port numbers or nWWNs.
• To manage the fabric from a Brocade switch, all Brocade switches must be interconnected. This
interconnection facilitates the forwarding of the inactive zone configuration.
• Brocade WebTools will show a McData switch or an MDS 9000 switch as an anonymous switch.
Only a zoning configuration of the McData switch or the MDS 9000 switch is possible.
• Private loop targets will automatically be registered in the fabric using translative mode.
• The full zone set (configuration) is distributed to all switches in the fabric. However, the full zone set
is distributed in a proprietary format, which only Brocade switches accept. Other vendors reject these
frames, and accept only the active zone set (configuration).
For Symm 4/4.8/5 (2-port or 4-port) Fibre Channel front-end directors, the WWN breakdown is as
follows:
The director WWN (50060482B82F9654) can be broken down (in binary) as follows:
First 28 Bits (from the left, bits 63-36, binary) of WWN are assigned by the IEEE (5006048, the
vendor ID for EMC Symmetrix)
5006048 2 B 8 2 F 9 6 5 4
0010 1011 1000 0010 1111 1001 0110 0101 0100
The least significant 6 bits (bits 5 through 0) can be decoded to obtain the Symmetrix director
number, processor and port. Bit 5 designates the port on the processor (0 for A, 1 for B). Bit
4, known as the side bit, designates the processor (0 for A, 1 for B). The least significant 4
bits, 3 through 0, represent the Symm slot number.
In review, this WWN represents EMC Symmetrix serial number 182500953, director 5b, port A.
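A small shell sketch of that decode (bash/ksh arithmetic), using the example WWN above:
# wwn=50060482B82F9654
# last=$(( 16#${wwn: -2} ))      # low byte of the WWN = 0x54
# echo $(( (last >> 5) & 1 ))    # bit 5 (port): 0 = port A
# echo $(( (last >> 4) & 1 ))    # bit 4 (side): 1 = processor B
# echo $(( last & 0xF ))         # bits 3-0 (slot): 4, matching the "director 5b, port A" result above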
For the Symm DMX product family (DMX-1/2/3), the WWN breakdown is as follows:
The director WWN (5006048ACCC86A32) can be broken down (in binary) as follows:
Again, like Symm 4/5, the first 28 bits (63-36) are assigned by the IEEE
5006048 A C C C 8 6 A 3 2
1010 1100 1100 1100 1000 0110 1010 0011 0010
Bit 35 is now known as the 'Half' bit and is used to decode which half of the board the
processor/port lies on.
Bits 34 through 6 represent the serial number; the decode starts at bit 6 and works up to bit 34
to create the serial number. This is broken down as illustrated above.
In conjunction with bit 35, the last 6 bits of the WWN represent the director number, processor
and port. Bit 35, the 'Half' bit, represents either processor A and B, or C and D (0 for A and B, 1
for C and D). Bit 5 again represents the port on the processor (0 for A, 1 for B). Bit 4, the side
bit, again represents the processor but with a slight change (if 0 then port A or C, if 1 then port
B or D, depending on what the half bit is set to). The last 4 bits, 3 through 0, represent the
Symm slot number.
1 11 0010 -------> half bit = 1 (either processor C or D), port bit = 1 (port B), side bit = 1 (because
half = 1, looking at C and D processors only, side = 1 now means processor D)
0010 binary = 2 decimal (slot 2, or director 3)
# vxdisk list
The vxdisk list output shows that some disks are marked with the udid_mismatch flag.
You can use the following command to update the unique disk identifier (UDID) for one or more disks:
# vxdisk [-f] [-g diskgroup] updateudid disk ...
Note : The -f option must be specified if VxVM has not raised the udid_mismatch flag for a disk.
You can then import the cloned disks by specifying the -o useclonedev=on option to the vxdg import command, as shown in this
example:
Note: This form of the command allows only cloned disks to be imported; all non-cloned disks remain unimported. However, the
import fails if multiple copies of one or more cloned disks exist.
You can use the following command to tag all the disks in the disk group that are to be imported:
You can use the following command to ensure that a copy of the metadata is placed on a disk, regardless of the placement policy for
the disk group:
To check which disks in a disk group contain copies of this configuration information, use the vxdg listmeta command.
The tagged disks in the disk group may be imported by specifying the tag to the vxdg import command in addition to the
-o useclonedev=on option:
If you have already imported the non-cloned disks in a disk group, you can use
the -n and -t options to specify a temporary name for the disk group containing
the cloned disks:
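Pulling those options together, here is a hedged sketch of the sequence (the disk group, disk, and tag names are illustrative):
# vxdisk -g mydg updateudid mydg01                     # refresh the UDID and clear the udid_mismatch flag
# vxdg -o useclonedev=on import mydg                   # import only the cloned disks
# vxdg -o useclonedev=on -o tag=bcv_tag import mydg    # import only the cloned disks carrying the tag
# vxdg -n tmpdg -t -o useclonedev=on import mydg       # import the clones under a temporary disk group name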
The following steps describe splitting BCV devices that hold a database supporting a host running an Oracle database. In this case,
the BCV split operation is in an environment without PowerPath or ECA. The split operation described here suspends writes to the
database momentarily while an instant split occurs. After an establish operation, once the standard device and BCV mirrors are
synchronized, the BCV device becomes a mirror copy of the standard device. You can then split the paired devices so that each holds
a separate, valid copy of the data, but they will no longer remain synchronized as changes occur.
The Oracle database is all held on standard and BCV devices assigned to one Oracle device group.
Check the output to ensure all BCV devices listed in the group are in the synchronized state
2. Check and set the user account
For SYMCLI to access a specified database, set the SYMCLI_RDB_CONNECT environment variable to the username and
password of the system administrator’s account. The export action sets this variable to a username of system and a password
of manager, allowing a local connection as follows:
export SYMCLI_RDB_CONNECT=system/manager
The ORACLE_HOME variable specifies the location of the Oracle binaries and the ORACLE_SID variable specifies the
database instance name as follows:
export ORACLE_HOME=/disks/symapidvt/oraclhome/api179
export ORACLE_SID=api179
You can test basic database connectivity with the symrdb command as follows:
To split all the BCV devices from the standard devices in the database device group, enter:
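(A sketch of the split; the device group name OracleDg is illustrative, and -instant requests the instant split described above:)
# symmir -g OracleDg split -instant
# symmir -g OracleDg query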
Make sure the split operation completes on all BCVs in the database device group.
6. Thaw the database to resume I/O
Example:
-------------
I have created a volume group, a logical volume, a filesystem and a file on two EMC standard
volumes. (For this test you need to have two hdisks hdisk and hdisk and two BCVs dev and available.)
# mkvg -f -y MyName_vg -s 16 hdisk hdisk
# mklv -y MyName_lv -b n MyName_vg 20
# crfs -v jfs -d MyName_lv -m /MyName_mp -A yes -p rw
# mount /MyName_mp
# lptest > /MyName_mp/lptest.out
For using EMC's TimeFinder I have to create a device group. (AIX works with volume groups;
EMC's TimeFinder works with disk groups.) With the following command the AIX volume group
MyName_vg is converted to the disk group MyName_dg:
# rmbcv -a
Using the establish operation, I mirror all data from the original hdisks to the BCVs (including the PVIDs!):
# symmir -g MyName_dg -i 10 query
When the establish is done, I have to unmount my filesystem and varyoff the volume group:
# umount /MyName_mp
# varyoffvg MyName_vg
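(The split itself would be issued at this point; a sketch using the same device group name:)
# symmir -g MyName_dg split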
When the split is done, I can varyon my volume group and mount my filesystem:
# mkbcv -a
RAID 6 was implemented to provide superior data protection, tolerating up to two drive failures in the
same RAID group. Other RAID protection schemes, such as mirroring (RAID 1), RAID S, and RAID 5,
protect a system from a single drive failure in a RAID group.
RAID 6 provides this extra level of protection while keeping the same dollar cost per megabyte of
usable storage as RAID 5 configurations. Although two parity drives are required for RAID 6, the
same ratio of data to parity drives is consistent. For example, a RAID 6 6+2 configuration consists of
six data segments and two parity segments. This is equivalent to two sets of a RAID 5 3+1
configuration, which is three data segments and one parity segment, so 6+2 = 2(3+1).
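A quick capacity check of that ratio, as a sketch in shell arithmetic (the 146 GB drive size is illustrative):
# echo $(( 6 * 146 ))         # RAID 6 6+2: usable GB from eight 146 GB drives = 876
# echo $(( 2 * 3 * 146 ))     # two RAID 5 3+1 groups (also eight drives): usable GB = 876, same data-to-parity ratio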
Now you must have understood the formula for calculating the actual size of disk you can use on any Symmetrix or
DMX.
I have attended an EMC live classroom for the new series of Symmetrix, the DMX-4. I thought I would
share some in-depth new architectural knowledge with you all.
EMC made an important announcement with respect to its 73 GB and 146 GB flash drives,
or solid state drives (SSD). Flash drives and SSD represent a new solid-state storage tier,
“tier 0”, for the Symmetrix DMX-4. In addition, EMC will offer Virtual Provisioning for
Symmetrix 3 and 4 as well as 1 TB SATA II drives.
With this announcement EMC became the first storage vendor to integrate flash technology
into its enterprise-class arrays. There was excitement among those in the industry looking for faster
transactions and better performance. Why so much excitement for new customers? I will be
discussing some of the technical details in the coming paragraphs.
With flash drive technology in a Symmetrix DMX-4 storage system, a credit card provider
could clear up to six transactions in the time it once took to process a single transaction.
Overall, EMC’s efforts could significantly alter the dynamics of the flash SSD market,
where standalone flash storage systems have been available only from smaller vendors.
EMC said that the new flash drives will cost about 30 times what an equivalent-size high-
speed FC drive costs, and estimated that adding four drives would raise the cost of a
Symmetrix disk storage system by about 10%. But in high-end business applications
where every bit of IOPS performance counts, that premium becomes entirely acceptable.
When an organization truly needs a major boost then flash drives are a very real and
very reasonable solution.
One Flash drive can deliver IOPS equivalent to 30 15K hard disk drives with
approximately 1 ms application response time. This means Flash memory achieves
unprecedented performance and the lowest latency ever available in an enterprise-class
storage array.
There are two models in the DMX-4 series: the DMX-4 and the DMX-4 950. The DMX-4 supports full connectivity to open systems and
to mainframe hosts via ESCON and FICON.
The DMX-4 950 represents a lower entry point for DMX technology, providing open systems connectivity with FICON connection for
mainframe hosts.
The DMX-4 is the world's largest high-end storage array, allowing you to configure from 96 to 2,400 drives in a single system. Yes,
2,400 drives! That means you can have petabyte-scale storage in one box.
Highlights include:
• Mainframe connectivity
• 4 Gb/s back-end support
• Point-to-point connection
• SATA II drive support
• Support for Enginuity 5772 (Enginuity is the operating system for DMX)
• Improved RAID 5 performance via multiple-location RAID XOR calculation
• Partial sector read hit improvement
• 128 TimeFinder/Snap sessions off the same source volume
• Improvements in TimeFinder/Clone create and terminate times of up to 10 times
• For SRDF, synchronous response time improvement of up to 33 times
• Avoiding COFW (Copy on First Write) for TimeFinder/Clone target devices
• Symmetrix Virtual LUN
• Clone to larger target device
• RSA technology integration: a new feature called Symmetrix Audit Log
• Improved power efficiency
• RAID 6 support
• A separate management console: the Symmetrix Management Console
• Storage protection: