
SAN Introduction

Before you can understand SANs, you need to appreciate their evolution from earlier storage
models. You also need to understand the protocols used to transport data.
This module gives you an introduction to the primary protocols used to transport data to and
from storage. It also introduces you to the storage models as they developed from DAS to NAS
and finally to SAN. You should learn the characteristics and limitations of each storage model.
Your business may have one, two, or all three of these storage models, depending on your
requirements.
Although the storage models are distinctly different, they share common goals. These goals are:

Data integrity - Data is considered to be the most valuable asset of an organization. Integrity of this data is critical to any storage model.

Data availability - All storage models can be configured for high availability by using a
highly available hardware and software framework that eliminates single points of
failure.

Leveraging existing investments - Existing storage arrays can be incorporated into more
complex storage models. This process is especially critical for large tape libraries that
may be deployed within an enterprise.

Upon completion of this module, you should be able to:

Identify the differences between the Small Computer System Interface (SCSI) and Fibre
Channel (FC) protocols

Define the evolution and function of the FC protocol

Describe the functions of the FC standards organizations

Describe the characteristics and limitations of DAS

Describe the characteristics and limitations of NAS

Describe the characteristics and limitations of SANs

Data Transport Protocols


In the late 1970s and early 1980s, the paradigm for storage shifted from mainframes to open storage systems. There was a short period during which many proprietary disk systems were introduced. The industry recognized the need for a standard, so the American National Standards Institute (ANSI) formed a working group to define the new storage standard. The new standard was called Small Computer System Interface (SCSI). SCSI was based on a parallel wire connection, with a limited connection distance and a relatively high speed of 10 to 40 megabytes per second (Mbytes/sec).
As storage needs expanded and prices dropped on storage hardware, applications demanded more flexibility and performance. ANSI saw an opportunity to introduce a new transport that could meet storage needs into the future: the Fibre Channel (FC) specification. FC offered longer distances, 500 meters (m) with copper cables and up to 3 kilometers (km) with optical cables. It also offered greater speed, 100 Mbytes/sec, with the flexibility to increase distance and speed as new technologies emerged.
SCSI Protocol
The SCSI protocol is a method for accessing data on disk drives physically attached to a server. SCSI was initially designed to support a small number of disks attached to a single interface on the host. The protocol has matured from its original standard, which supported a few low-speed devices, through several iterations that improved access speed, increased the number of devices, and defined a wider range of supported devices.
SCSI is limited by the number of devices that can be attached to one SCSI chain (up to 15). Its speed is also limited by electrical interference and signal timing skew between the individual wires in the copper cabling.
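
To make the addressing limit concrete, here is a minimal sketch in Python (hypothetical classes, not a real driver API): a wide parallel SCSI bus offers 16 IDs, one of which is taken by the host adapter, and bus arbitration follows a fixed priority order based on the ID.

```python
# Minimal sketch of parallel SCSI addressing limits; illustrative only,
# not a real driver API. A wide (16-bit) bus has 16 IDs; the host bus
# adapter takes one, leaving up to 15 for devices.

class ScsiBus:
    MAX_IDS = 16  # IDs 0-15 on a wide parallel bus

    def __init__(self, hba_id=7):
        self.devices = {hba_id: "host adapter"}  # HBA traditionally at ID 7

    def attach(self, scsi_id, name):
        if not 0 <= scsi_id < self.MAX_IDS:
            raise ValueError(f"ID {scsi_id} outside 0-{self.MAX_IDS - 1}")
        if scsi_id in self.devices:
            raise ValueError(f"ID {scsi_id} already in use")
        self.devices[scsi_id] = name

def arbitration_priority(scsi_id):
    # Wide-bus priority order: 7 (highest) down to 0, then 15 down to 8.
    return scsi_id + 8 if scsi_id < 8 else scsi_id - 8

def arbitrate(contenders):
    # When several devices want the bus, the highest-priority ID wins.
    return max(contenders, key=arbitration_priority)

bus = ScsiBus()
bus.attach(0, "disk0")
bus.attach(15, "tape0")
print(arbitrate([0, 15, 7]))  # -> 7: the HBA outranks both devices
```
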
FC Protocol
The FC protocol is a layered protocol that defines a set of standards for the efficient transfer of
information.
The FC protocol is characterized by the following features:

Uses a synchronous serial transfer protocol

Simplifies traditional cable plants with cables that use only transmit and receive lines

Allows extended distance between devices (kilometers, rather than meters)

Allows the connectivity of thousands and, potentially, millions of devices

The FC transport can use both fiber-optic cable and copper wire (either twisted pair or coaxial). Because copper is also a valid transport, the spelling fiber was replaced with fibre in the protocol's name to remove the assumed association with optical technology.
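
The layering the text refers to is usually described as five levels, FC-0 through FC-4. A small sketch in code form (the layer names follow the ANSI FC standards; the one-line summaries are brief paraphrases) shows where each responsibility lives:

```python
# The five Fibre Channel levels, lowest to highest. Layer names follow
# the ANSI FC standards; the summaries are paraphrased, not quoted.
FC_LAYERS = {
    "FC-0": "Physical interface: media, connectors, signaling rates",
    "FC-1": "Transmission encoding/decoding (e.g. 8b/10b) and link control",
    "FC-2": "Framing, flow control, and classes of service",
    "FC-3": "Common services across multiple ports on one node",
    "FC-4": "Protocol mappings, e.g. SCSI over FC (FCP) or IP over FC",
}

for layer, role in FC_LAYERS.items():
    print(f"{layer}: {role}")
```
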

Fibre Channel Organizations and Standards


The Telecommunications Industry Association (TIA) is the leading United States (U.S.) nonprofit trade association serving the communications and information technology industry, with
proven strengths in the following:

Market development

Trade shows

Domestic and international advocacy

Standards development

Enablement of e-business

Through its worldwide activities, the association facilitates business development opportunities
and a competitive market environment. TIA provides a market-focused forum for its member
companies, which manufacture or supply the products and services used in global
communications.
http://www.tiaonline.org

As the world computer systems market embarks on the evolutionary journey called storage
networking, the Storage Networking Industry Association (SNIA) is the point of cohesion for:

Developers of storage and networking products

System integrators

Application vendors

Service providers

The SNIA is uniquely committed to delivering architectures, education, and services that propel
storage networking solutions into the broader market. Storage networking represents the next
step of technological evolution for the networking and storage industries. It is an opportunity to
fundamentally improve the effectiveness and efficiency of the storage resources employed by the
Information Technology (IT) community.
http://www.snia.org

The Fibre Channel Industry Association (FCIA) is an international organization of:

Manufacturers

Systems integrators

Developers

Systems vendors

Industry professionals

End users

FCIA is committed to delivering a broad base of FC infrastructure to support a wide array of industry applications within the mass storage and IT-based arenas. FCIA working groups focus on specific aspects of the technology, targeting both vertical and horizontal markets, including:

Storage

Video

Networking

SAN management

http://www.fibrechannel.org

DAS Storage Model


One of the earliest storage models, after mainframe storage, is direct attached storage (DAS). With DAS, a storage device is directly attached to a dedicated server. DAS devices provide flexibility in managing and allocating storage to a server. External devices can be shut down and maintained without necessarily affecting the server to which they are attached. DAS devices have some intelligence, which allows them to offload some of the overhead, such as managing RAID volumes, from the server.
In the DAS model:

Storage devices are directly attached to dedicated servers. These storage devices are referred to as direct attached storage devices (DASD).

Access to data is directly controlled by the host.

File systems are not readily available to other hosts unless they are NFS mounted, thereby providing fairly strong physical data security.

Application, file, and file system data can be made available to clients over local area and wide area networks by using file access and network protocols, such as Network File System (NFS) and Common Internet File System (CIFS).
[Figure: a DAS configuration, with the storage devices attached directly to the servers.]

DAS devices present challenges to the system administrator. New tool sets are required to
manage intelligent DAS boxes. Troubleshooting becomes more complex as the number of
devices increases. When a server uses up the available space within an array, additional arrays
can be added. However, the storage needs can increase beyond the ability of the server hardware
to accommodate the added devices.
DAS has the following limitations:

File systems are not readily available to other hosts unless they are NFS mounted.

For SCSI arrays, only a limited number of disks are supported on the SCSI chain, thereby
limiting the addition of new drives.

For FC arrays, large numbers of disks in the loop contribute to poor performance for
lower priority devices.

Servers have limited slots available, thereby restricting the total number of disks that can
be attached.

Failure of a storage device can require system downtime for repair.
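
The slot and chain limits compound. A back-of-the-envelope estimate (the figures below are illustrative assumptions, not the limits of any particular server) shows how quickly a DAS server runs out of headroom:

```python
# Back-of-the-envelope DAS scaling estimate. All figures are
# illustrative assumptions, not limits of any particular server.
free_pci_slots = 4           # slots left after network and other cards
devices_per_scsi_chain = 15  # parallel SCSI maximum per adapter
disk_size_gb = 73            # a typical drive size of the era

max_disks = free_pci_slots * devices_per_scsi_chain
max_capacity_gb = max_disks * disk_size_gb
print(f"{max_disks} disks, about {max_capacity_gb / 1024:.1f} TB ceiling")
# -> 60 disks, about a 4.3 TB ceiling: growth beyond this needs a new
#    server or a different storage model.
```
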

NAS Storage Model


A number of storage vendors have improved upon file servers and DAS by introducing NAS devices. NAS devices plug directly into a network and are often referred to as NAS appliances. The term appliance refers to a device that can be plugged into the network and begin providing services with minimal configuration.
NAS appliances provide a level of flexibility to the system and the storage administrator. By
using network protocols, such as NFS, file systems can be made available to any server or host
attached to the network.

NAS devices can be added or removed from the network without directly impacting the
servers attached to the network.

Storage can be centralized and shared between a number of heterogeneous servers and
desktops.

Storage management requirements are reduced as storage is more centralized.

Backups can be handled efficiently because the storage is clustered in groups.

NAS Configuration Example

NAS appliances incorporate a file server and disk array storage within a single physical unit. The file server integrated into the NAS appliance, which is generally deployed as a LAN-attached device, usually runs a cut-down or thin operating system (OS) tuned specifically for file management and logical volume management.

As with the DAS model, application, file, and file system data are made available to clients over
local area and wide area networks using file access and network protocols, such as NFS and
CIFS. Access to data is limited to LAN speeds, and availability of data is limited to the
availability of the LAN/WAN.
The NAS model:

Is a file-centric model. All transfers must be at the file or record level, rather than at the
block or track level.

Makes a storage array a network addressable device.

Treats NAS devices as modules that can be attached to and removed from the network
with minimum disruption to network activity or other network attached devices.

One industry trend is to replace several smaller file servers, which use DAS, with one or more
larger NAS appliances. The larger NAS appliances use redundant components, such as redundant
power and logical volume RAID levels.
[Figure: a typical NAS model, with a single NAS appliance providing file access to all of the clients on the LAN.]

Limitations of NAS
The NAS model is limited by network bandwidth. Each network packet carries headers and trailers that must be processed individually by the LAN, and file access protocols such as NFS add further overhead.
LAN/WAN technology was never designed for the transport of the sustained, sequential, high-bandwidth I/O that the current storage environment often demands.
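
The overhead argument can be quantified roughly. Assuming standard 1500-byte Ethernet frames (an assumption; jumbo frames change the numbers), every packet pays for Ethernet, IP, and TCP headers before NFS adds its own request/reply overhead:

```python
# Rough per-packet overhead for file traffic on a standard LAN.
# Frame and header sizes are the common defaults; NFS/RPC overhead
# varies by operation and is deliberately left out of this sketch.
mtu = 1500                  # standard Ethernet payload (bytes)
eth, ip, tcp = 18, 20, 20   # typical header/trailer sizes (bytes)

payload = mtu - ip - tcp
efficiency = payload / (mtu + eth)
print(f"Payload per frame: {payload} bytes "
      f"({efficiency:.1%} of wire bytes)")
# -> about 96% at best; NFS round trips, retransmits, and competing
#    LAN traffic push effective throughput well below the raw link
#    rate for sustained sequential I/O.
```
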
SAN Storage Model

A SAN is a dedicated network for the attachment and management of storage devices and for the movement of data between those storage devices. The storage is accessed through interconnecting devices called hubs or switches. While most SANs use an FC transport, other mechanisms, such as iSCSI, can also be used.
Storage that is directly attached to a server using fiber-optic cables is not a SAN, even when it uses the FC transport. A more complex SAN configuration could include DAS, NAS, and FC-attached storage devices; the overall environment is known as a SAN.
Some additional definitions:

The Storage Networking Industry Association (SNIA) technical dictionary defines a SAN
as follows:
"A network whose primary purpose is the transfer of data between computer systems and
storage elements and among storage elements. Abbreviated SAN. A SAN consists of a
communication infrastructure, which provides physical connections, and a management
layer, which organizes the connections, storage elements, and computer systems so that
data transfer is secure and robust. The term SAN is usually (but not necessarily)
identified with block I/O services rather than file access services. A storage system
consisting of storage elements, storage devices, computer systems, and/or appliances,
plus all control software, communicating over a network."

In Designing Storage Area Networks, Tom Clark offers the following definition:
"Storage area networks; a network linking servers or workstations to disk arrays, tapebackup subsystems, and other devices, typically over FC"

Note: Although a SAN is typically implemented by using FC technology, general definitions of SAN do not mandate the use of FC. For example, an Ethernet network with the primary (or dedicated) function of providing storage services could be considered a SAN. When a SAN is implemented using FC technology, it is usually referred to as an FC SAN.
According to the first definition, SANs are generally considered to be device-centric, as opposed to file-centric. Data is written directly to a device rather than to a file system, which reduces overhead and increases efficiency.
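
The file-centric versus device-centric distinction shows up directly in how an application addresses its data. The sketch below contrasts the two access styles; the paths are placeholders, and reading a raw device node normally requires root privileges.

```python
import os

# File-centric access (NAS or a DAS file system): name a path and let
# the file system map it to blocks behind the scenes.
with open("/mnt/nfs/report.dat", "rb") as f:   # placeholder path
    f.seek(4096)
    data = f.read(512)

# Device-centric access (SAN block I/O): address the device itself by
# byte offset, with no file system in the way. The device name is a
# placeholder, and opening it requires privileges.
fd = os.open("/dev/sdb", os.O_RDONLY)
block = os.pread(fd, 512, 4096)  # read 512 bytes at offset 4096
os.close(fd)
```
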
Characteristics
A SAN:

Is a network dedicated to storage needs that uses serial transport protocols

Is scalable through the addition of new components

Uses pooled storage that can, potentially, be accessed by any host on the SAN

Does not increase traffic on the LAN or WAN

The single most important feature of the SAN model is the replacement of DAS configurations with a dedicated storage network that can share storage resources. This network uses transport protocols that are optimized for data movement and data access. Storage resources are not directly attached to any one host. All of the benefits and advantages of a SAN follow from this one feature.

The most common data transport mechanism used within a SAN is FC. FC is a serial transport protocol: the physical cabling uses just two lines, one for data transmit and one for data receive. This serial transport mechanism replaces the more traditional SCSI transport, a parallel mechanism limited in length and number of connections.
[Figure: the SAN model, sometimes referred to as networking behind the server.]

Advantages of SAN
SANs have the potential to solve many problems encountered in both storage device
management and data management. SANs have the advantage of combining the existing
investment in storage devices as well as incorporating newer storage strategies as they evolve.

SANs lend themselves to storage consolidation efforts, thereby eliminating poorly utilized pools of storage.

A SAN is a highly available, redundant network storage infrastructure that seeks to eliminate single points of failure.

SANs can be used to eliminate the distance barriers of other storage models; NAS is attached to the same LAN/WAN backbone, and DAS is attached directly to the server.

Performance can be managed more effectively in a SAN through the use of multiple routes between the application servers and their data.

Limitations of SAN
The development of the FC SAN has revolutionized the storage industry and greatly improved
the availability and accessibility of data for Enterprise IT resources. It has also brought new
challenges.

Interoperability between hardware vendors is problematic. As a result, many SAN installations are still single vendor. Organizations like SNIA are working to help alleviate the management problem and hone the standards to reduce limitations on interoperability.

Troubleshooting failures in a SAN requires a high level of expertise. SAN administrators must deal with a wide variety of servers, arrays, and volume managers in order to properly diagnose and correct errors and performance concerns.

Management of a SAN introduces additional complexity. Few products exist that can present a single picture of the SAN and allow all devices to be monitored and managed. Administrators typically must be familiar with many different configuration and management software tools. Organizations like the Distributed Management Task Force (DMTF) and SNIA are working to solve this industry-wide problem.

Health monitoring tools are needed to predict or notify the administrator of emerging
problems. Many of the underlying metrics to support health monitoring are still being
developed by equipment manufacturers.

Success of FC
Early storage networks built from DAS and NAS devices resulted from the desire to preserve legacy technologies such as SCSI and to extract every dollar of value from the more expensive, older equipment. Standards and practices changed with the availability of FC technology, but the computer-room hardware did not always change with them.
The success of FC presented some new problems:

Disk arrays were getting larger, containing hundreds of disks, with lots of data.

Backups were becoming more difficult to accomplish during the nighttime window due
to the surge in storage capacity.

Customers wanted to attach more arrays to servers than the servers were designed to
support (servers have a limited number of slots for interface cards).

Customers wanted to start sharing their large storage arrays among different servers.

The logical solution was to eliminate direct attached storage and share the storage over a
network. The birth of storage area networks (SANs) provided the answer to these demands.

Business Issues Addressed by SAN

SANs have the potential to solve many problems businesses encounter in both storage device
management and data management. This module covers the business issues addressed by a SAN.
Upon completion of this module, you should be able to:

Identify how the return on IT infrastructure investments can be maximized in a SAN environment

Identify how a SAN supports backup solutions

Identify how a SAN supports business continuity

SAN Use

Some regulatory agencies require their constituents, such as banks and stock exchanges, to implement business continuity plans that include remote data sites and remote mirroring of data to ensure no loss of service. Many other companies do so for their own protection. Keeping copies of data at sites that are remote from the servers and from each other is important for:

Disaster recovery - Ability to reconstruct the data over an acceptable time interval during
which the business cannot conduct its affairs.

Business continuity - Ability to continue to operate after an outage occurs by switching processing sites quickly and efficiently, making the operation of the business appear seamless in spite of the outage.

A configuration over an extended distance, known as a campus or short-haul metropolitan distance, generally spans several kilometers and is made possible by FC. The technical challenges that FC vendors address in such configurations include:

Signal integrity over extended distance

Signal latency over extended distance

Troubleshooting communication problems over extended distance

The use of FC technology, optical fiber cable, and the highly coherent property of a laser light source enables engineers to maintain signal integrity over extended distances.
[Figure: an extended distance configuration. This configuration supports disaster recovery, but not business continuity.]
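
Signal latency over distance is easy to estimate: light travels through optical fiber at roughly two thirds the speed of light in a vacuum, about 5 microseconds per kilometer each way. A quick calculation (the 10 km figure is an illustrative campus distance, not from the text) shows why latency, not bandwidth, dominates remote mirroring design:

```python
# Propagation delay estimate for an extended-distance FC link.
# Light in fiber covers roughly 200,000 km/s, about 5 us per km.
distance_km = 10   # illustrative campus/metro distance
us_per_km = 5.0

one_way_us = distance_km * us_per_km
round_trip_us = 2 * one_way_us
print(f"{one_way_us:.0f} us one way, {round_trip_us:.0f} us round trip")
# -> 50 us one way, 100 us per round trip: every synchronously
#    mirrored write pays this on top of normal service time.
```
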

Clustering Benefit
The SAN model implements a network topology for storage. This model enables the highly
available configurations that are so characteristic of networking technology.
Vendors of FC disks implement dual-ported drives. These drives have two interfaces through which you can read and write data. If one interface fails, you should still be able to access the data through the remaining interface. The dual-ported drive interfaces thus provide redundant connections.
Although there is no physical single point of failure in this configuration, it still needs to be
carefully managed through a software framework, such as Sun Cluster hardware and software.
Such frameworks:

Implement logical volume RAID levels on the storage

Manage multiple TCP/IP network interfaces for client connections to the servers

Automatically move logical data volumes from the control of a host that might have a hardware fault to the control of a healthy host.

[Figure: a generic high availability configuration with redundant servers, switches, and cable connections to dual-ported storage. There should be no single point of failure in such a configuration.]
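
A highly simplified sketch of what such a framework does (hypothetical classes, not a real clustering API; real products like Sun Cluster also handle fencing, quorum, and client redirection) is the reassignment of a data volume from a faulted host to a healthy one:

```python
# Highly simplified cluster failover sketch: hypothetical classes,
# not a real clustering API.

class Host:
    def __init__(self, name):
        self.name = name
        self.healthy = True

class Cluster:
    def __init__(self, hosts):
        self.hosts = hosts
        self.volume_owner = {}          # volume name -> Host

    def assign(self, volume, host):
        self.volume_owner[volume] = host

    def heartbeat_check(self):
        # Move volumes off any host that has stopped responding.
        for volume, owner in list(self.volume_owner.items()):
            if not owner.healthy:
                standby = next(h for h in self.hosts if h.healthy)
                self.volume_owner[volume] = standby
                print(f"{volume}: failed over {owner.name} -> {standby.name}")

a, b = Host("node-a"), Host("node-b")
cluster = Cluster([a, b])
cluster.assign("oradata", a)
a.healthy = False                       # simulate a hardware fault
cluster.heartbeat_check()               # oradata moves to node-b
```
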

Multipathing

The SAN supports multipathing for fast, redundant access to critical data located on high
capacity arrays. This module defines multipathing and identifies the business needs supported by
multipathing. It also describes the features and technical benefits of multipathing.
Upon completion of this module, you should be able to define multipathing and identify its
features and technical benefits.
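
As a preview of the mechanics, multipathing software keeps a list of independent routes to the same LUN and retries I/O on a surviving path when one fails. Below is a minimal sketch (hypothetical classes, not a real multipathing driver) of round-robin path selection with failover:

```python
import itertools

# Minimal multipathing sketch: hypothetical classes, not a real
# multipath driver. Round-robin across paths, with failover when
# a path errors out.

class Path:
    def __init__(self, name):
        self.name = name
        self.up = True

    def send(self, io):
        if not self.up:
            raise IOError(f"path {self.name} is down")
        return f"{io} via {self.name}"

class MultipathLun:
    def __init__(self, paths):
        self.paths = paths
        self._rr = itertools.cycle(paths)

    def submit(self, io):
        for _ in range(len(self.paths)):
            path = next(self._rr)
            try:
                return path.send(io)
            except IOError:
                continue            # try the next path
        raise IOError("all paths to the LUN are down")

lun = MultipathLun([Path("hba0:portA"), Path("hba1:portB")])
lun.paths[0].up = False              # simulate a cable or switch failure
print(lun.submit("write block 42"))  # -> served via hba1:portB
```

Real multipathing drivers add load-balancing policies and periodic path health probing on top of this basic retry logic.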
