Before you can understand SANs, you need to appreciate their evolution from earlier storage
models. You also need to understand the protocols used to transport data.
This module gives you an introduction to the primary protocols used to transport data to and
from storage. It also introduces you to the storage models as they developed from DAS to NAS
and finally to SAN. You should learn the characteristics and limitations of each storage model.
Your business may have one, two, or all three of these storage models, depending on your
requirements.
Although the storage models are distinctly different, they share common goals. These goals are:
Data availability - All storage models can be configured for high availability by using a
highly available hardware and software framework that eliminates single points of
failure.
Leveraging existing investments - Existing storage arrays can be incorporated into more
complex storage models. This process is especially critical for large tape libraries that
may be deployed within an enterprise.
Upon completion of this module, you should be able to:
Identify the differences between the Small Computer System Interface (SCSI) and Fibre Channel (FC) protocols
The American National Standards Institute (ANSI) formed a working group to define a new storage standard, called the Small Computer System Interface (SCSI). SCSI was based on a parallel wire connection, with a limited connection distance and a relatively high speed of 10 to 40 Megabytes per second (Mbytes/sec).
As storage needs expanded and prices dropped on storage hardware, applications demanded
more flexibility and performance. ANSI saw an opportunity to introduce a new transport that
could solve the storage needs into the future. They introduced the Fibre Channel (FC)
specification. FC offered longer distances, 500 meters (m) with copper cables and up to 3
kilometers (km) with optical cables. It also offered greater speeds, 100 Mbytes/sec, with the flexibility to increase both distance and speed as new technologies emerged.
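As a rough illustration of the bandwidth gap quoted above, a short sketch; the 10-GB dataset size is a hypothetical example, and all protocol overhead is ignored:

```python
# Ideal transfer times at the quoted SCSI and FC data rates.
# The dataset size is a hypothetical example value.

def transfer_time_seconds(size_mb: float, rate_mb_per_s: float) -> float:
    """Return the ideal, overhead-free transfer time in seconds."""
    return size_mb / rate_mb_per_s

DATASET_MB = 10_000  # hypothetical 10-GB job

scsi = transfer_time_seconds(DATASET_MB, 40)   # fast SCSI, 40 Mbytes/sec
fc   = transfer_time_seconds(DATASET_MB, 100)  # 1-Gbit FC, 100 Mbytes/sec

print(f"SCSI at 40 Mbytes/sec: {scsi:.0f} s")
print(f"FC at 100 Mbytes/sec: {fc:.0f} s")
```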
SCSI PROTOCOL
The SCSI protocol is a method for accessing data on disk drives physically attached to a server.
SCSI was initially designed to support a small number of disks attached to a single interface on
the host. The SCSI protocol has matured from its original standard that supported a few lowspeed devices. It has gone through several iterations that improved access speed, increased the
number of devices, and defined a wider range of supported devices.
SCSI is limited by the number of devices that can be attached to one SCSI chain (up to 15). Its speed is also limited by electrical interference and signal timing skew between the individual wires in the copper cabling.
FC Protocol
The FC protocol is a layered protocol that defines a set of standards for the efficient transfer of
information.
The FC protocol is characterized by the following features:
Simplifies traditional cable plants by using cables with only transmit and receive lines
The FC transport can use both fiber-optic cable and copper wire (either twisted pair or coaxial).
Because copper is also a valid transport, the spelling of fiber was changed to fibre in the protocol's name to remove the assumed association with optical technology.
The Telecommunications Industry Association (TIA) is active in:
Market development
Trade shows
Standards development
Enablement of e-business
Through its worldwide activities, the association facilitates business development opportunities
and a competitive market environment. TIA provides a market-focused forum for its member
companies, which manufacture or supply the products and services used in global
communications.
http://www.tiaonline.org
As the world computer systems market embarks on the evolutionary journey called storage
networking, the Storage Networking Industry Association (SNIA) is the point of cohesion for:
System integrators
Application vendors
Service providers
The SNIA is uniquely committed to delivering architectures, education, and services that propel
storage networking solutions into the broader market. Storage networking represents the next
step of technological evolution for the networking and storage industries. It is an opportunity to
fundamentally improve the effectiveness and efficiency of the storage resources employed by the
Information Technology (IT) community.
http://www.snia.org
The Fibre Channel Industry Association (FCIA) brings together:
Manufacturers
Systems integrators
Developers
Systems vendors
Industry professionals
End users
FCIA members promote FC technology in areas such as:
Storage
Video
Networking
SAN management
http://www.fibrechannel.org
DAS Storage Model
In the DAS model, storage devices attach directly to the host and are referred to as direct attached storage devices, also known as DASD. Access to data is directly controlled by the host.
File systems are not readily available to other hosts unless they are NFS mounted,
thereby providing fairly strong physical data security.
Application, file, and file system data can be made available to clients over local area and wide
area networks by using file access and network protocols, such as Network File System (NFS)
and Common Internet File System (CIFS).
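As a concrete illustration of the NFS sharing described above, a minimal sketch; the server name, export path, and subnet are hypothetical examples, not taken from the original text:

```shell
# Server side: a hypothetical /etc/exports entry granting one subnet
# read-only access to a local file system:
#
#   /export/data   192.168.10.0/24(ro,sync)
#
# After editing /etc/exports, re-export the file systems (Linux):
exportfs -ra

# Client side: mount the remote file system over NFS.
# "fileserver" is a hypothetical host name.
mount -t nfs fileserver:/export/data /mnt/data
```

Once mounted, the remote file system appears under /mnt/data and is accessed at the file level, not the block level.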
Click View Example to see an example of a DAS Configuration. Notice the DAS devices
attached directly to the servers.
DAS devices present challenges to the system administrator. New tool sets are required to
manage intelligent DAS boxes. Troubleshooting becomes more complex as the number of
devices increases. When a server uses up the available space within an array, additional arrays
can be added. However, the storage needs can increase beyond the ability of the server hardware
to accommodate the added devices.
DAS has the following limitations:
File systems are not readily available to other hosts unless they are NFS mounted.
For SCSI arrays, only a limited number of disks are supported on the SCSI chain, thereby
limiting the addition of new drives.
For FC arrays, large numbers of disks in the loop contribute to poor performance for
lower priority devices.
Servers have limited slots available, thereby restricting the total number of disks that can
be attached.
NAS Storage Model
The NAS model offers these advantages:
NAS devices can be added or removed from the network without directly impacting the servers attached to the network.
Storage can be centralized and shared between a number of heterogeneous servers and
desktops.
NAS appliances incorporate a file server and disk array storage within a single physical unit. The file server integrated into the NAS appliance, generally deployed as a LAN-attached device, usually runs a cut-down or thin operating system (OS) tuned specifically for file management and logical volume management.
As with the DAS model, application, file, and file system data are made available to clients over
local area and wide area networks using file access and network protocols, such as NFS and
CIFS. Access to data is limited to LAN speeds, and availability of data is limited to the
availability of the LAN/WAN.
The NAS model:
Is a file-centric model. All transfers must be at the file or record level, rather than at the
block or track level.
Treats NAS devices as modules that can be attached to and removed from the network
with minimum disruption to network activity or other network attached devices.
One industry trend is to replace several smaller file servers, which use DAS, with one or more
larger NAS appliances. The larger NAS appliances use redundant components, such as redundant power supplies and RAID-protected logical volumes.
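The RAID levels used in such appliances trade raw capacity for redundancy; a minimal sketch of the usable-capacity arithmetic (the disk count and disk size are hypothetical example values):

```python
# Idealized usable capacity under common RAID levels,
# ignoring file system and metadata overhead.

def usable_gb(level: str, disks: int, disk_gb: int) -> int:
    """Return the usable capacity in GB for a given RAID level."""
    if level == "RAID0":          # striping only, no redundancy
        return disks * disk_gb
    if level == "RAID1":          # mirroring: half the raw capacity
        return disks * disk_gb // 2
    if level == "RAID5":          # one disk's worth of parity
        return (disks - 1) * disk_gb
    raise ValueError(f"unsupported level: {level}")

# Hypothetical array: eight 146-GB drives.
for level in ("RAID0", "RAID1", "RAID5"):
    print(level, usable_gb(level, disks=8, disk_gb=146), "GB usable")
```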
Click View Example to see a typical NAS model. The figure shows a NAS appliance that has
been used to provide access to all of the clients in the LAN.
Limitations of NAS
The NAS model is limited by network bandwidth. Each network packet carries headers and trailers that the LAN must process individually, and file access protocols such as NFS add further overhead.
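The per-packet overhead can be quantified with the standard Ethernet II, IPv4, and TCP header sizes; this sketch deliberately ignores the additional NFS RPC headers, which reduce the payload fraction further:

```python
# Rough per-frame overhead for file traffic on standard Ethernet.
# Header sizes are the standard Ethernet II / IPv4 / TCP values.

ETH_HEADER = 14   # destination MAC, source MAC, EtherType
ETH_FCS    = 4    # frame check sequence
IP_HEADER  = 20   # IPv4 header, no options
TCP_HEADER = 20   # TCP header, no options
MTU        = 1500 # Ethernet payload (IP packet) size

payload = MTU - IP_HEADER - TCP_HEADER   # TCP payload bytes per frame
frame   = ETH_HEADER + MTU + ETH_FCS     # bytes on the wire per frame
efficiency = payload / frame

print(f"{payload} payload bytes per {frame}-byte frame "
      f"({efficiency:.1%} efficiency)")
```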
LAN/WAN technology was never designed as a network for the transport of sustained,
sequential, high bandwidth I/O that the current storage environment often demands.
SAN Storage Model
A SAN is a dedicated network for the attachment and management of storage devices and for the movement of data between those storage devices. The storage is accessed through interconnecting devices called hubs or switches. While most SANs use an FC transport, other mechanisms, such as iSCSI, can also be used.
Storage that is directly attached to a server using fiber-optic cables is not a SAN, even when it uses the FC transport. A more complex configuration could include DAS, NAS, and FC-attached storage devices; the overall environment is known as a SAN.
Some additional definitions:
The Storage Networking Industry Association (SNIA) technical dictionary defines a SAN
as follows:
"A network whose primary purpose is the transfer of data between computer systems and
storage elements and among storage elements. Abbreviated SAN. A SAN consists of a
communication infrastructure, which provides physical connections, and a management
layer, which organizes the connections, storage elements, and computer systems so that
data transfer is secure and robust. The term SAN is usually (but not necessarily)
identified with block I/O services rather than file access services. A storage system
consisting of storage elements, storage devices, computer systems, and/or appliances,
plus all control software, communicating over a network."
In Designing Storage Area Networks, Tom Clark offers the following definition:
"Storage area networks; a network linking servers or workstations to disk arrays, tape-backup subsystems, and other devices, typically over FC"
The SAN model:
Uses pooled storage that can, potentially, be accessed by any host on the SAN
The single most important feature of the SAN model is the replacement of DAS storage
configurations with a dedicated storage network that can share storage resources. This network
makes use of transport protocols that are optimized for data movement and data access. Storage
resources are not directly attached to any one host. All of the benefits and advantages of a SAN
evolve from this one feature.
The most common data transport mechanism used within a SAN is FC. FC is a serial transport protocol: the physical cabling uses just two lines, one for data transmit and one for data receive. This serial transport replaces the more traditional SCSI transport, a parallel mechanism limited in length and number of connections.
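Part of FC's efficiency as a transport comes from its frame format, which carries up to 2112 bytes of payload per frame with a small fixed overhead; a quick sketch of the arithmetic using the field sizes from the FC standards:

```python
# Fibre Channel frame layout: fixed framing around a large data field.

SOF     = 4     # start-of-frame delimiter
HEADER  = 24    # frame header (routing, exchange, sequence info)
PAYLOAD = 2112  # maximum data field size
CRC     = 4     # cyclic redundancy check
EOF     = 4     # end-of-frame delimiter

frame_bytes = SOF + HEADER + PAYLOAD + CRC + EOF
efficiency = PAYLOAD / frame_bytes

print(f"{frame_bytes}-byte frame, {efficiency:.1%} payload efficiency")
```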
Click View Example to see an example of the SAN model (sometimes referred to as networking
behind the server).
Advantages of SAN
SANs have the potential to solve many problems encountered in both storage device
management and data management. SANs have the advantage of combining the existing
investment in storage devices as well as incorporating newer storage strategies as they evolve.
SANs can be used to eliminate the distance barriers of other storage models: NAS is confined to the same LAN/WAN backbone, and DAS is attached directly to the server.
Performance can be managed more effectively in a SAN through the use of multiple
routes between the application servers and their data.
Limitations of SAN
The development of the FC SAN has revolutionized the storage industry and greatly improved
the availability and accessibility of data for Enterprise IT resources. It has also brought new
challenges.
Management of a SAN introduces additional complexity. Few products exist that can
present a single picture of the SAN and allow all devices to be monitored and managed.
This process typically requires the administrator to be familiar with many different
configuration and management software tools. Organizations like the Distributed
Management Task Force (DMTF) and SNIA are working to solve this industry-wide
problem.
Health monitoring tools are needed to predict or notify the administrator of emerging
problems. Many of the underlying metrics to support health monitoring are still being
developed by equipment manufacturers.
Success of FC
Storage networks built from DAS and NAS devices resulted from the desire to maintain legacy technologies, such as SCSI, and to wring every dollar from the more expensive, older hardware. Standards and practices changed with the availability of FC technology, but the computer room hardware did not always change with them.
The success of FC presented some new problems:
Disk arrays were getting larger, containing hundreds of disks and large amounts of data.
Backups were becoming more difficult to accomplish during the nighttime window due
to the surge in storage capacity.
Customers wanted to attach more arrays to servers than the servers were designed to
support (servers have a limited number of slots for interface cards).
Customers wanted to start sharing their large storage arrays among different servers.
The logical solution was to eliminate direct attached storage and share the storage over a
network. The birth of storage area networks (SANs) provided the answer to these demands.
SANs have the potential to solve many problems businesses encounter in both storage device
management and data management. This module covers the business issues addressed by a SAN.
Upon completion of this module, you should be able to describe the business issues addressed by a SAN.
SAN use
Some regulatory agencies require their constituents, such as banks and stock exchanges, to
implement business continuity plans that include remote data sites and remote mirroring of data
flow to ensure no loss of service. Many other companies do so for their own protection. Keeping
copies of data at sites that are remote from servers and from each other is important for:
Disaster recovery - Ability to reconstruct the data over an acceptable time interval during
which the business cannot conduct its affairs.
The use of FC technology, optical fiber cable, and the coherence of a laser light source enables engineers to maintain signal integrity over extended distances.
Click View Image to see an illustration of an extended distance configuration. This configuration
supports disaster recovery, but not business continuity.
Clustering benefit
The SAN model implements a network topology for storage. This model enables the highly
available configurations that are so characteristic of networking technology.
Vendors of FC disks implement dual-ported drives. These drives have two interfaces through which you can read and write data. If one interface fails, you can still access the data through the remaining interface, giving redundant connections to the drive.
Although there is no physical single point of failure in this configuration, it still needs to be
carefully managed through a software framework, such as Sun Cluster hardware and software.
Such frameworks:
Manage multiple TCP/IP network interfaces for client connections to the servers
Automatically move logical data volumes from the control of a host that might have a hardware fault to the control of a healthy host
Click View Demo to see an example of a generic high availability configuration. This
demonstration shows redundant servers, switches, and cable connections to dual-ported storage.
There should be no single point of failure in such a configuration.
Multipathing
The SAN supports multipathing for fast, redundant access to critical data located on high
capacity arrays. This module defines multipathing and identifies the business needs supported by
multipathing. It also describes the features and technical benefits of multipathing.
Upon completion of this module, you should be able to define multipathing and identify its
features and technical benefits.
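The failover behavior behind multipathing can be sketched in a few lines; the path names below are hypothetical, and real multipathing is implemented in driver software and OS multipath frameworks rather than application code:

```python
# Minimal sketch of path failover for a host that sees the same
# LUN through two independent paths. Path names are hypothetical.

class Path:
    def __init__(self, name: str) -> None:
        self.name = name
        self.healthy = True

def active_path(paths: list[Path]) -> Path:
    """Return the first healthy path; raise if every path is down."""
    for path in paths:
        if path.healthy:
            return path
    raise RuntimeError("no healthy path to the device")

paths = [Path("hba0:port0"), Path("hba1:port1")]

print(active_path(paths).name)   # I/O uses the primary path
paths[0].healthy = False         # simulate an HBA or cable failure
print(active_path(paths).name)   # I/O fails over to the second path
```

Because each path runs through a separate adapter, cable, and switch port, a single hardware failure leaves at least one route to the data intact.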