IMPACT modules consist of focused, in-depth training content that can be consumed in about 1-2 hours
Course Description
Start Training: Run/Download the PowerPoint presentation
Student Resource Guide: Training slides with notes
Assessment: Must be completed online
(Note: Completed Assessments will be reflected online within 24-48 hrs.)
Complete Course: Directions on how to update your online transcript to reflect a complete status for this course.
Course Completion
Link to KnowledgeLink to update your transcript and indicate that you have completed the course.
SAN Foundations
Course Completion Steps:
1. Log on to KnowledgeLink (EMC's learning management system).
2. Click on 'My Development'.
3. Locate the entry for the learning event you wish to complete.
4. Click on the complete icon.
Note: The Mark Complete button does not apply to items with the Type: Class, Downloadable (AICC Compliant), or Assessment Test. Any item you cancel from your Enrollments will automatically be deleted from your Development Plan.
This foundation-level course provides participants with an understanding of Storage Area Networks and EMC's Connectrix family of Fibre Channel switches.
e-Learning
This course is part of the EMC Technology Foundations curriculum and is a prerequisite to other learning paths.
Audience
This course is intended for any person who:
- Educates partners and/or customers on the value of Storage Area Networks and EMC's Connectrix family of Fibre Channel switches
- Provides technical consulting skills and support for EMC products
- Analyzes a customer's business technology requirements
- Qualifies the value of EMC's products
- Collaborates with customers as a storage solutions advisor
Prerequisites
The prerequisites listed are recommended and should be completed prior to attending class. The prerequisite courses include:
- Symmetrix Foundations
- CLARiiON Foundations
Prior to taking this course, participants should have a strong understanding of IT concepts and a basic knowledge of storage concepts.
Course Objectives
Upon successful completion of this course, participants should be able to:
- Define a Storage Area Network
- List the features and benefits of a SAN
- List SAN considerations and switch management issues
- Discuss the benefits of the Connectrix family of switches and directors
- Compare and contrast the various fabric management software offerings
- List and explain Connectrix opportunities based on interoperability, scalability, value-added functionality, and high availability
Modules Covered These modules are designed to support the course objectives. This course includes a single module on SAN Foundations.
Labs
Labs reinforce the information you have been taught. There are no labs associated with this course.
Assessments
Assessments validate that you have learned the knowledge or skills presented during a learning experience. This course includes a self-assessment quiz, to be conducted online via KnowledgeLink, EMC's learning management system.
SAN Foundations, 1
SAN Foundations
Welcome to Storage Area Network (SAN) Foundations. Copyright 2004 EMC Corporation. All rights reserved. These materials may not be copied without EMC's written consent. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
SAN Foundations, 2
SAN Foundations
After completing this course, you will be able to:
- Define a Storage Area Network
- List the features and benefits of a SAN
- Describe the benefits of the Connectrix family of switches and directors
- Describe fabric software
- Identify Connectrix opportunities
The objectives for this course are shown here. Please take a moment to read them.
SAN Foundations, 3
SAN Foundations, 4
[Diagram: servers and applications connected through LAN switches to an IP network]
A SAN is a dedicated network that carries storage traffic. Storage Area Networks connect servers to storage, which can include disk and tape resources.
SAN Foundations, 5
Why a SAN?
Overcome limitations of the Direct Attached Storage model:
- Expands storage connectivity
- Extends distances
- Reduces corporate network traffic
- Can provide non-disruptive storage provisioning
- Improves utilization of existing storage resources
[Diagram: application servers attached to an IP network (LAN) and to a SAN connecting external disks and a tape library]
Fibre Channel SANs apply networking technologies to solve the connectivity and distance limitations of channel-based storage architectures. By allowing application servers to share access to external storage devices, we are able to achieve better utilization and availability of critical data. The use of a storage network reduces the impact on the corporate communication network. A SAN supports the nondisruptive provisioning of additional storage resources. Finally, a SAN allows multiple host access to a common storage array, which can improve array utilization.
SAN Foundations, 6
SANs provide value in an Automated Network Storage environment by providing flexibility, availability, accessibility, scalability, and security.

Flexibility is a measure of how rapidly you are able to deploy, shift, and re-deploy new storage and host assets in a dynamic fashion without interrupting your currently running environment.

Availability builds redundancy into the environment. We must always weigh the opportunity cost of how much redundancy we need or want to build into the environment.

Accessibility measures a host's ability to physically connect and communicate with the individual storage arrays, as well as your ability to provide enough bandwidth resources to meet your full-access performance requirements. Accessibility's link to available bandwidth leads us to consider the differences between building a statistical (partial) bandwidth infrastructure and a guaranteed (full) bandwidth infrastructure.

Scalability is a measure of how easily a fabric can be extended so that it can accept more storage, more hosts, or more connectivity (switches). Considerations include external links and internal components in the data path.

Security refers to the ability to protect your operations from external and internal malicious intrusions, as well as the ability to protect yourself from accidental or unintentional data access by unauthorized parties.
SAN Foundations, 7
SAN Operation
A SAN is essentially a network that can provide both a high level of connectivity and channel-speed performance. The host operating system performs its normal file system functions, including organizing and accessing all files and directories on disk partitions.
[Diagram: host stack (Application, File System, O/S, Block I/O) connected over the storage network via the FC Protocol (FCP) to disk storage]
All I/O to the disk controller software is sent over the SAN. When properly configured, a SAN limits potential single points of failure.
In a SAN, an application makes an I/O request to a file system on the server, which initiates a block I/O request. The request is mapped to a network transport protocol, which is delivered to a disk resource located on the storage network. Database applications may bypass the file system layer and initiate raw block I/O that is also mapped to a network transport protocol and delivered to a disk resource located on the storage network. The most common protocol currently being used is SCSI over Fibre Channel - FCP.
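As a rough illustration of this path, the mapping from a file write down to Fibre Channel frames can be sketched as follows. This is a toy model: the function names are invented for the sketch, and the 512-byte block and 2112-byte frame payload are typical figures, not a specific product's behavior.

```python
# Toy sketch of the SAN I/O path: file write -> block I/O -> FCP frames.
# Names are illustrative; 512-byte blocks and a 2112-byte FC data field
# are typical figures, not a specific product's behavior.

FC_FRAME_PAYLOAD = 2112  # max data bytes in one Fibre Channel frame

def file_write_to_blocks(data: bytes, block_size: int = 512):
    """File system layer: map a file write onto fixed-size block I/O."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def blocks_to_fcp_frames(blocks):
    """Transport layer: carry the SCSI block I/O in FC frames (FCP)."""
    payload = b"".join(blocks)
    return [payload[i:i + FC_FRAME_PAYLOAD]
            for i in range(0, len(payload), FC_FRAME_PAYLOAD)]

blocks = file_write_to_blocks(b"x" * 4096)   # 8 blocks of 512 bytes
frames = blocks_to_fcp_frames(blocks)        # 4096 bytes -> 2 frames
```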
SAN Foundations, 8
[Diagram: physical networks compared: 12.5 MB/sec versus 200 MB/sec over fiber-optic media]
Just as corporate networks have become synonymous with Ethernet and IP, SANs are generally implemented using Fibre Channel (FC). The protocol maps many existing protocols to Fibre Channel frames for transmission, including:
- SCSI (Small Computer System Interface)
- HIPPI (High Performance Parallel Interface)
- ESCON (Enterprise System Connection); FICON is replacing ESCON as the Fibre Channel implementation for IBM zSeries mainframes
- ATM (Asynchronous Transfer Mode)
- IP (Internet Protocol)

The Fibre Channel standards define a layered protocol stack:
- FC-0 defines the physical layer of the model: standard connectivity media, connectors, and transmission methods. The standards currently define a physical layer data rate of up to 1000 MB/sec; implementations today are both 100 MB/sec and 200 MB/sec.
- FC-1 defines the 8B/10B encoding implemented in Fibre Channel, which enhances error detection and recovery.
- FC-2 defines the construction of the basic data frame, methods of frame sequencing, and flow control.
- FC-3 relates to common services, but it has not been defined yet.
- FC-4 maps upper-level protocols, such as SCSI or IP, to Fibre Channel.
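The 100 MB/sec figure follows directly from the physical-layer line rate and the 8B/10B encoding. A quick back-of-the-envelope check (assuming the standard 1G FC signalling rate of 1.0625 Gbaud):

```python
# Why "1 Gb/sec" Fibre Channel is quoted as ~100 MB/sec of data:
# the 1G line rate is 1.0625 Gbaud, and 8B/10B encoding transmits
# 10 bits on the wire for every 8 bits of data.

line_rate_baud = 1.0625e9                      # FC-0 signalling rate (1G FC)
data_bits_per_sec = line_rate_baud * 8 / 10    # remove 8B/10B overhead
data_mb_per_sec = data_bits_per_sec / 8 / 1e6  # bits -> megabytes

print(data_mb_per_sec)  # 106.25, rounded down to the quoted "100 MB/sec"
```

The 2G implementation simply doubles the line rate, giving the 200 MB/sec figure.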
SAN Foundations, 9
[Diagram: FC-AL hub topology and FC-SW switched topology]
The ANSI Fibre Channel standards define three topologies: point-to-point, arbitrated loop (FC-AL), and switched fabric (FC-SW).

A private arbitrated loop is a configuration where up to 126 devices can be attached. The attachment point is called an NL_Port (loop port). FC-AL is a low-cost connectivity solution because it does not require expensive switching devices. Instead, low-cost hubs are used to increase server and storage connectivity. Hubs have won wide acceptance in JBOD (Just a Bunch Of Disks) environments because, just as JBOD costs less than enterprise storage, hubs cost less than switches.

A switch-based fabric provides full bandwidth between the nodes in the fabric. A device (port) gains access to the fabric through a point-to-point connection with a port on a switch or director. At any one time, there can be n/2 full-bandwidth connections between nodes in the fabric (one connection for the initiator and one connection for the target).

These topologies have been developed to solve several distinct customer problems, including:
- Distance extension: using shortwave-to-longwave conversion to extend server-to-storage distance beyond shortwave's 500-meter limitation
- Capacity expansion: expanding the storage capacity supported per host port by allowing a host port to connect to two or more storage array nodes
SAN Foundations, 10
What is a Fabric?
- Switch or group of connected switches
- Routes traffic between attached devices
- Domain ID: unique switch identifier
Switch Services:
- Login Service: assigns an ID to nodes at login
- Name Service: stores node information

[Diagram: host (Application, File System, O/S) attached to a switch running the Login and Name Services, connected across the fabric to a disk array]
A fabric is a single switch or a group of connected switches. The switch provides a physical connection and logical routing of frames of data between attached devices. The primary function of the fabric is to receive frames from a source port (device) and route them to the destination port (device) whose address identifier is specified in the frames. Each port (device) is physically attached through a link to the fabric.

Domain ID: a Domain ID is a unique identification number provided to each domain in the fabric. A Fibre Channel address is a unique address for each node in the fabric.

Login Service: this server's address (FFFFFE) is used by all nodes when they perform a Fabric Login (FLOGI). This service provides the Source Identifier (SID) to the new node while supplying that node with information about the fabric and the fabric's capabilities.

Name Service: each switch has an entity within it which is responsible for the name registration of devices that are attached to it. These entries are stored in a locally resident database. Each switch in the fabric topology exchanges the Name Service information with other switches in the fabric to maintain a synchronized, distributed view of the fabric.
SAN Foundations, 11
Applications
- A SAN is optimized for large data block transfers
- Suited for the demands of real-time applications that require access to data resources
- Databases: high transaction rate, high data volatility

Storage Consolidation
- Gain efficiencies in the management of storage resources, including capacity, performance, and connectivity
- Physical consolidation
- Logical consolidation
The SAN is optimized for large block transfers and is suited for the demands of high-performance, real-time application I/O. Applications that require high-speed access to data resources are well suited for a SAN implementation.

A second use for a SAN is storage consolidation. Generally, the best candidates for consolidation are systems within the same family, first by workgroup or job purpose, and second by server type. However, heterogeneous environments are also common. Resource consolidation includes the concepts of both physical and logical consolidation.

Physical consolidation involves the physical movement of resources to a centralized location. Once these resources are located together, you may be able to more efficiently use facility resources, such as HVAC (heating, ventilation, and air conditioning), power protection, personnel, and physical security. The tradeoff that comes with physical consolidation is the loss of resilience against a site failure.

Logical consolidation is associated with bringing components under a unified management infrastructure and creating a shared resource pool, such as a SAN. Logical consolidation does not allow you to take full advantage of the site consolidation benefits, but it does maintain site failure resilience. Consolidated resources and unified management provide justification for a SAN.
SAN Foundations, 12
SAN Considerations
An Introduction
SAN Foundations, 13
When designing a SAN, be aware of the fan-out ratio. The fan-out ratio reflects how many hosts may access a storage port at any given time. Storage consolidation enables customers to achieve the full benefits of Enterprise Storage. The consolidation topology allows customers to map multiple host HBA ports into a single Symmetrix FA port. The fan-out implementation will be dependent on the I/O throughput requirements of customer applications. Always check the Open Systems Support Matrix for the latest fan-out ratios for specific operating system and HBA combinations.
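As a purely hypothetical sizing sketch, the arithmetic behind a fan-out ratio looks like this. Both throughput figures below are invented for the illustration; real limits come from the Open Systems Support Matrix.

```python
# Hypothetical fan-out sizing: how many host HBAs could share one
# storage (FA) port. Both throughput figures are made-up examples,
# not EMC support-matrix values.

fa_port_mb_per_sec = 200      # assumed usable throughput of one FA port
host_avg_mb_per_sec = 20      # assumed average demand per host HBA

fan_out = fa_port_mb_per_sec // host_avg_mb_per_sec
print(f"Fan-out ratio: {fan_out}:1")  # 10 hosts per storage port
```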
SAN Foundations, 14
The fan-in ratio reflects how many storage systems may be accessed by a single host at any given time. This allows a customer to expand connectivity from a single host across multiple storage units. An example of an application of this topology would be a case where a host requires additional storage capacity and the Symmetrix/CLARiiON unit, or units, that it is attached to has no more capacity. This topology would allow the host HBA to be mapped to another Symmetrix/CLARiiON that has the required capacity available. Again, always check the Open Systems Support Matrix for the latest fan-in ratios for specific operating system and HBA combinations.
SAN Foundations, 15
[Diagram: FC node HBA (N_Port, Tx/Rx) linked to fabric switch F_Ports, with the Login and Name Services on the switch]
Fabric Switch
- Links multiple nodes
- Routes data frames

Port
- Provides the physical connectivity
- Port types: N_Port, F_Port, FL_Port, E_Port, G_Port
- Link speeds: 1 Gb/sec and 2 Gb/sec
So where do we start to implement a Storage Area Network? We begin by identifying the basic components. A network is a collection of Fibre Channel nodes that usually communicate via fiber-optic media. (Fibre Channel can be implemented using copper media for short distances, up to 30 m.) The port on the member node provides the physical and logical connection to the network. Each port and node has a permanent, unique identifier called the WWN (Worldwide Name). Each node will use node-specific drivers to access the network.

The switch provides the Source Identifier (SID) to the new node, while supplying the node with information about the port's capabilities. Each switch has an entity, the Name Server, which is responsible for the name registration and the management of devices that are attached to the switch. The switched fabric receives data frames from a source port (device) and routes the frames to the destination port (device) whose address identifier is specified in the frames. The switch provides non-blocking channels between the source node and the switch port.

The more common types of ports are:
- N_Port (node port): creates and receives frames
- NL_Port: node port within an Arbitrated Loop environment
- F_Port (fabric port): port on a switch connected to a node
- FL_Port (fabric port): interconnects a fabric with a loop environment
- E_Port (expansion port): connects a switch to another switch
- G_Port (generic port): switch port that auto-configures to other port types
SAN Foundations, 16
[Diagram: two nodes, each with an N_Port (WWN, Tx/Rx), establishing links and logging in to the fabric Login Service]
Once the components of the network have been physically connected, the next task is to identify the members of the network. This is accomplished in two operations performed by the Fibre Channel protocol. The first is establishing a logical connection between the network node and the fabric switch: primitive ordered sets are sent between the node and the switch to establish the link. The second operation is to establish an identity for the node on the network. This is accomplished by the node sending special frame types (FLOGI) to identify itself to the switch fabric and register its WWPN/WWNN.

The FLOGI begins with the N_Port communicating to the Login Server (FFFFFE) using a source address of 000000. The Login Server then assigns a valid address to the node port. Information is then registered with the Name Server (FFFFFC). This information includes:
- Port Identifier = SID
- Port Name = WWN of the N_Port
- Class of Service = typically, Class 3
- FC-4 Types Supported = SCSI-3
- Port Type = N_Port
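A minimal sketch of this login sequence follows. The well-known addresses FFFFFE and FFFFFC come from the text above; the address layout, class, and WWN value are simplified illustrations, not a real switch implementation.

```python
# Simplified FLOGI sketch: a node logs in from source address 000000,
# the Login Service (FFFFFE) assigns a 24-bit address, and the node's
# details are registered with the Name Service (FFFFFC).
# The address layout and WWN below are illustrative only.

LOGIN_SERVICE = 0xFFFFFE
NAME_SERVICE = 0xFFFFFC

class FabricSwitch:
    def __init__(self, domain_id):
        self.domain_id = domain_id
        self.next_port = 0
        self.name_db = {}   # locally resident Name Service database

    def flogi(self, wwpn):
        # Assign a Domain.Area.Port style address (layout simplified)
        sid = (self.domain_id << 16) | self.next_port
        self.next_port += 1
        # Registration with the Name Service (FFFFFC)
        self.name_db[wwpn] = {
            "port_id": sid,
            "class_of_service": 3,
            "fc4_types": ["SCSI-3"],
            "port_type": "N_Port",
        }
        return sid

switch = FabricSwitch(domain_id=1)
sid = switch.flogi("50:06:04:82:bc:01:9a:11")
print(f"{sid:06X}")   # 010000 -- domain 1, first port
```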
SAN Foundations, 17
Node Login

["Who else is out there?" / "Here are the nodes you can talk to": diagram of a node logging in and querying the fabric Name Server]
- Node attempts login to all nodes available from the list provided by the Name Server (PLOGI, N_Port Login)
- Queries devices/LUNs on each node
Now that a node has a network identity, it is free to communicate with any other node that is attached to the switch, unless limitations are assigned. As we will see in later slides, this limit is implemented through zoning. After the node has registered with the fabric, it will request the addresses of other nodes that support the same upper-level protocol (SCSI, ATM, etc.) and are members of the same zone. Then the node will attempt to log in (PLOGI) to all nodes on the list it receives from the switch's Name Server. The node is allowed to query (PRLI) these nodes to determine if there is a LUN present. At this point, connectivity is established. The process is repeated as other nodes are attached to the ports on the switch.

This leads to the next question: what can we do if all available ports on the switch are used, but there is a need to add additional nodes? The answer can be found in the next slide, Interswitch Links (ISLs).
SAN Foundations, 18
ISL
Switches are interconnected using Interswitch Links (ISLs) through an Expansion Port (E_Port) on the switch. ISLs are used to transfer host-to-storage data, as well as fabric management traffic, from one switch to another. The number of links is based on two factors: availability and accessibility. Availability provides redundant paths to carry switch traffic; accessibility provides the bandwidth resources to support the host application. Performance on a storage port is highly dependent on the number of I/O requests per second and the size of the I/O in each request. This will vary from customer to customer, and from process to process. Where possible, avoid the need for ISL traffic. Distance is a consideration when implementing ISLs: the appropriate signal source must be used in order to deliver the data signal across the link at a detectable level.
SAN Foundations, 19
ISL Considerations
- Capacity
- Distance: signal loss, throughput, power
- Multimode fiber: 1 Gb = 500 m, 2 Gb = 300 m
- Single-mode fiber: > 10 km
Three options are available for implementing ISL connections: multimode, single-mode, and DWDM. Multimode ISL is for distances up to 500 m; single-mode ISL is for distances up to 35 km, depending on switch port transceiver technology. DWDM (Dense Wavelength Division Multiplexing) is for distances typically up to 200 km and can be configured with multimode or single-mode connections.

Variables that affect distance:
- Propagation and dispersion losses: signal propagation mode and laser wavelength (longwave = 1310 nm single-mode, shortwave = 850 nm multimode) address losses encountered by the signal across long distances, which is the reason long-haul Fibre Channel plants use high-grade, high-power laser emitters at 1310 nm wavelength over 9-micron fiber-optic cables.
- Buffer-to-buffer credit: throughput on long links can degrade quickly if not enough frames are on the link. The longer the link, the more frames must be sent down the link to prevent this degradation. For example, a standard Fibre Channel frame of 2 KB is approximately 4 km long; transmitting a 2 KB frame across a 100 km link is similar to a 4 km-long train on a 100 km track.
- Optical power: is there sufficient signal power for the transmitter and receiver?
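The train analogy can be checked numerically. This sketch assumes 1G FC at 1.0625 Gbaud, 8B/10B encoding, and light travelling at roughly 2e8 m/s in fiber; real buffer-credit sizing must follow vendor guidance.

```python
import math

# How "long" a 2 KB Fibre Channel frame is on the wire, and roughly how
# many buffer-to-buffer credits keep a 100 km link full. Assumes 1G FC.

frame_bits_on_wire = 2048 * 10        # 8B/10B: 10 wire bits per data byte
line_rate_baud = 1.0625e9             # 1G FC signalling rate
light_in_fiber_km_s = 2.0e5           # ~2e8 m/s in glass

frame_length_km = frame_bits_on_wire / line_rate_baud * light_in_fiber_km_s
print(round(frame_length_km, 1))      # ~3.9 km: "approximately 4 km"

link_km = 100
credits = math.ceil(link_km / frame_length_km)
print(credits)                        # ~26 frames in flight to fill the link
```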
SAN Foundations, 20
Cabling Considerations
Operating distances decrease when moving from 1 Gbps to 2 Gbps. When using existing fiber-optic infrastructures, or installing new ones, you must consider the following:
Fiber-optic glass filament core:
- 50 micron: multimode
- 62.5 micron: multimode
- 9 micron: single-mode
Port speed is the major variable to consider when selecting the type of fiber optic cable to use for the interswitch links.
SAN Foundations, 21
[Diagram: DWDM combining signals, transmission on fiber, then separating signals]
Dense Wavelength Division Multiplexing (DWDM) is a process in which different channels of data are carried at different wavelengths over one pair of fiber links. This is in contrast with a conventional fiber-optic system, in which just one channel is carried over a single wavelength traveling through a single fiber. Using DWDM, several separate wavelengths (or channels) of data can be multiplexed into a light stream transmitted on a single optical fiber (dark fiber).

This technique of transmitting several independent data streams over a single fiber link is an approach to opening up the conventional optical fiber bandwidth by breaking it up into many channels, each at a different optical wavelength (a different color of light). Each wavelength can carry a signal at any bit rate less than an upper limit defined by the electronics, typically up to several gigabits per second. Different data formats can be transmitted together at different data rates: IP data, ESCON SRDF, Fibre Channel SRDF, SONET data, and ATM data can all be traveling at the same time within the optical fiber.

For EMC customers, this means that multiple SRDF channels and Fibre Channel ISLs (Inter Switch Links) can be transferred over one pair of fiber links along with traditional network traffic. This is especially important where fiber links are at a premium. For example, a customer may be leasing fiber, so the more traffic they can run over a single link, the more cost-effective the solution. DWDM technology can also be used to tie two or more metro area data centers together as one virtual data center.
SAN Foundations, 22
Now that we know how to expand the connection between switched fabrics using ISLs, we want to become familiar with the topologies used to connect them. In a one-tier topology, all switches are connected so that traffic between end devices (hosts and storage) need only travel over a maximum of one ISL (referred to as a hop) to reach its destination. Hosts and storage may be located anywhere on the fabric. If hosts and storage are localized on the same switch, the FC traffic passes over the backplane of that switch only and not an ISL; therefore there are no ISL hops. If they are not localized on the same switch, then there is a maximum of one hop to reach any part of the fabric. The one-tier topology provides the maximum availability; however, this comes at the expense of connectivity.
SAN Foundations, 23
[Diagram: two-tier fabric with a Host tier and a Storage tier. Benefit: high availability and connectivity]
The next topology is the two-tier configuration, which yields one ISL hop for node traffic. The switches in the fabric are referred to as the Host tier (aka Edge) and the Storage tier (aka Core or Backbone). It is best to have Enterprise Director switches at the Storage tier to ensure the highest availability. Departmental switches at the Host tier are designed as an inexpensive method to bring additional low-cost hosts into the fabric. The distance between the Host and Storage tiers may be either shortwave or longwave, and could potentially be over DWDM.
SAN Foundations, 24
[Diagram: three-tier fabric with Host, Connectivity, and Storage tiers]
The third topology is the three-tier fabric. In this configuration, a host's Fibre Channel reads and writes must travel over the Connectivity tier in order to reach the switch where its storage is located. Fabrics of this size are developed because of the increasing need for more ports. An end device on the Host tier is three ISL hops away from an end device on the Storage tier. In larger fabrics such as this, it is possible that hosts and storage are located on the same tier; not all hosts necessarily have to reach storage on the far side of the fabric.
SAN Foundations, 25
[Diagram: example Open Fabric configuration with Cisco and McData switches]
Open Fabric is not really a topology; it is a mode supported by EMC for Brocade, Cisco, and/or McData switches. This slide shows an example of possible Open Fabric configurations. Refer to the EMC Networked Storage Topology Guide under Solutions > Connectrix Interoperability Solution.
SAN Foundations, 26
ISL Summary
- Additional ISLs should be based on performance data and redundancy (unless trunking)
- Directors: connect ISLs across different port cards
- Departmental switches: connect ISLs to different switch ports
- Review the EMC Support Matrix for a list of ISL limits for individual switch vendors
- Large fabrics are created out of need rather than desire
- Configurations are outlined in the ESN Topology Guide
ISLs add redundancy to the fabric to protect your environment from possible events and failures. The amount of redundancy that you choose to add to the fabric is a decision based on your business model and the amount of resources you can spare for increased availability. So, when mapping ISLs in your fabric:
- Always connect each switch to more than one other switch in the fabric. Having each switch connected to more than one switch in large configurations ensures multiple paths to the edge switches if one of the intermediate switches, or a path to those switches, fails.
- If you are limited in the number of ISLs you can place in the fabric, EMC recommends configuring a single ISL from the originating switch to multiple switches if both paths from the host to its storage could be counted as equal, shortest-path, lowest-cost paths. The ultimate goal is to provide each host multiple primary paths to its storage and then a number of secondary paths if failures were to occur.
- ISL utilization should always be monitored to identify unused, underused, or overutilized ISLs. Unused ISLs could become candidates for removal if they do not represent the only secondary path a host would have to its storage in the event of a switch or ISL failure.
For a list of ISL limits for individual switch vendors, review the EMC Support Matrix.
SAN Foundations, 27
Access
- Port Zoning
- WWN Zoning
- Device Masking

Security
- Port Binding (license required)
- SANtegrity (license required)
In addition to connectivity, other SAN considerations include performance, access, and security. Routes and trunks can all have an impact on performance. Also, due to the sharing of storage resources provided by a SAN, secure access to disk and volume resources by multiple hosts in the SAN must be tightly managed. This is accomplished by zoning at the switch and device masking at the array.

Port binding is a function of McData switches that uses the WWN of a device to create an exclusive attachment to a port. When port binding is enabled, the only device that can attach to the port is the one specified by the WWN. Developed by McData, SANtegrity enhances security in SANs that contain a large and mixed group of fabrics and attached devices by allowing or prohibiting switch attachment to fabrics and device attachment to switches. This ensures that Fibre Channel traffic cannot be directed to the incorrect port/device/domain through deliberate deceptive acts.
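WWN zoning can be pictured as simple set membership: two ports may talk only if some zone in the active zone set contains both WWNs. A sketch with invented zone names and WWN values:

```python
# Illustrative WWN zoning check: communication is allowed only when some
# zone in the active zone set contains both WWNs. All values are made up.

active_zone_set = {
    "hostA_sym1": {"10:00:00:00:c9:aa:bb:01",    # host A HBA
                   "50:06:04:82:bc:01:9a:11"},   # storage port
    "hostB_sym1": {"10:00:00:00:c9:aa:bb:02",    # host B HBA
                   "50:06:04:82:bc:01:9a:11"},   # same storage port
}

def can_communicate(wwn_a, wwn_b):
    return any(wwn_a in zone and wwn_b in zone
               for zone in active_zone_set.values())

# Each host sees the storage port, but the hosts cannot see each other
print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:04:82:bc:01:9a:11"))
print(can_communicate("10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02"))
```

Device masking at the array then narrows access further, down to individual LUNs.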
SAN Foundations, 28
Routing
- A routing table algorithm calculates the lowest-cost Fabric Shortest Path First (FSPF) route for a frame
- Recalculated at each change in topology
- ISLs may remain unused
[Diagram: hosts reaching storage across the fabric; the shortest path (SPF = 1 hop) carries traffic while longer paths (SPF = 2, 3, 4) remain idle]
Frames are routed across the fabric via an algorithm that uses a combination of lowest-cost and Fabric Shortest Path First (FSPF) routing. Lowest cost refers to the speed of the links in the routes: as the speed of the link increases, the cost of the route decreases. FSPF refers to the number of ISLs, or hops, between the host and its storage.

EMC strongly recommends that you construct your fabric to have multiple equal, lowest-cost, shortest-path routes between any combination of host and storage. This means that you may have two ISLs between every switch in the fabric, or you may have single links between switches but multiple equal-cost/length paths that travel through different switch combinations. Routes between storage and hosts that are not the shortest, lowest-cost path will not be used unless there is an event in the fabric that causes them to become the shortest, lowest-cost path.

Routes are assigned to devices for each direction of the communication, and the route one way may differ from the return route. The routes are assigned based on a round-robin approach that is initiated as the device is logged into the fabric. These routes are static for as long as the device is logged in, so routes do not have to be recalculated due to a fabric event. Routing tables on each switch are updated and recalculated during events that change the status of links in the system. The calculation of routes, and the fabric's ability to perform this function in a timely fashion, is important for the stability of the fabric. For this reason, as well as the fact that for every ISL used, two ports are lost for attaching storage or hosts, EMC recommends using reasonable limits on the number of ISLs in a fabric. It should be noted that, since only the lowest-cost/shortest-path routes will be used in the routing scheme, many ISLs may remain unused while others may be approaching peak utilization, as long as there are no events in the fabric.

For a true number of required ISLs, you should continually examine your ISL utilization and identify your level of actual protection from fabric events. You may be able to identify how changes in your current topology could enhance your performance as well as your path redundancy.
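The lowest-cost, shortest-path selection can be sketched with Dijkstra's algorithm. The cost convention below (cost falling as link speed rises, roughly 10^12 divided by the bit rate, so a 1 Gb/sec link costs 1000 and a 2 Gb/sec link costs 500) mirrors common FSPF practice; the four-switch fabric is a made-up example.

```python
import heapq

# FSPF-style route selection sketch: Dijkstra's algorithm over a fabric
# graph where faster ISLs have lower cost (~1e12 / link bit rate).
# The fabric topology below is an invented example.

def link_cost(gbps):
    return 1.0e12 / (gbps * 1.0e9)   # 1 Gb -> 1000, 2 Gb -> 500

# fabric: switch -> list of (neighbor, ISL speed in Gb/sec)
fabric = {
    "sw1": [("sw2", 1), ("sw3", 2)],
    "sw2": [("sw1", 1), ("sw4", 1)],
    "sw3": [("sw1", 2), ("sw4", 2)],
    "sw4": [("sw2", 1), ("sw3", 2)],
}

def fspf_route(src, dst):
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, gbps in fabric[node]:
            if nbr not in seen:
                heapq.heappush(heap, (cost + link_cost(gbps), nbr, path + [nbr]))

cost, path = fspf_route("sw1", "sw4")
print(path)   # ['sw1', 'sw3', 'sw4'] -- the 2 Gb path wins (cost 1000 vs 2000)
```

The 1 Gb route through sw2 stays idle, illustrating how ISLs off the lowest-cost path remain unused until a fabric event changes the topology.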
SAN Foundations, 29
[Diagram: two nodes' traffic (totaling 4 Gb) converging on a single 2 Gb ISL between Switch 1 and Switch 2]
A single Brocade ISL is capable of 2 Gb/s maximum. During the fabric build process, multiple nodes may be assigned the same ISL. In this case, two nodes pushing a combined 4 Gb are competing for a single 2 Gb pipe, and congestion results.
EMC Global Education
2004 EMC Corporation. All rights reserved.
Trunking is a licensed Brocade switch option. ISL Trunking is optional software that enables distribution of traffic over the combined bandwidth of ISLs between two adjacent switches. ISL Trunking ensures that all links are used efficiently, eliminating congestion on any one link while distributing the load across the links. This feature is designed to significantly reduce traffic congestion. Each incoming frame is sent across the first available ISL. As a result, transient workload peaks for one system or application are much less likely to impact the performance of other parts of the SAN fabric.
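The contrast between pinned routes and trunked ISLs can be sketched numerically. This is an idealized model: the even frame-by-frame spread, the flow names, and the login assignment are assumptions for illustration, not switch behavior measured from a real fabric.

```python
def pinned_load(flows, assignment, num_isls):
    """Offered load per ISL when each node's route is pinned to one link at
    login; the fabric-build round robin can land two heavy talkers together."""
    load = [0.0] * num_isls
    for name, gbps in flows:
        load[assignment[name]] += gbps
    return load

def trunked_load(flows, num_isls):
    """Offered load per member link of one logical ISL: with frames sent over
    the first available link, the combined load spreads evenly (idealized)."""
    total = sum(gbps for _, gbps in flows)
    return [total / num_isls] * num_isls
```

For example, two 2 Gb talkers pinned to the same 2 Gb ISL offer 4 Gb to one link (congestion), while the same 6 Gb total trunked over four links offers only 1.5 Gb per link.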
SAN Foundations, 30
Trunking - Implemented
[Diagram: Node 1 and Node 2 traffic streams (2 Gb, 1.5 Gb, .5 Gb, 1 Gb) distributed across four ISLs trunked into a single 8 Gb logical ISL between Switch 1 and Switch 2]
All four physical ISLs create one large logical ISL. Frames are sent across the first available ISL. Trunking relieves congestion and utilizes bandwidth more efficiently.
As shown here, four ISLs are combined into a single logical ISL with a total capacity of 8 Gb/s. Because the full bandwidth of each physical link is available, bandwidth is efficiently allocated. Review the EMC Support Matrix for trunking limitations.
SAN Foundations, 31
Physical Access
Physical layout
Foundation of a secure network
Planning required
Location of H/W and S/W components
Identify data center components
Data center location for management applications
Disaster planning
Planning the physical location of all components is an essential part of storage network security. Building a physically secure data center is only half the challenge; deciding which hardware and software components need to reside there is the other half. Critical components such as storage arrays, switches, control stations, and management applications should reside in the data center. With physical security implemented, only authorized users have the ability to make physical or logical changes to the topology (i.e., move cables from one port to another, reconfigure access, or add and remove devices on the network). Planning should also take into account requirements for disaster recovery.
SAN Foundations, 32
Zoning is a switch function that allows devices within the fabric to be segmented into groups that can communicate with each other. The Connectrix zoning implementation is based on the ANSI Simple Name Service standard. When a device logs into a fabric, it is registered by the name server. A port that is utilizing the SCSI FCP protocol is registered as such in the name server. When a port logs into the fabric, it goes through a device discovery process with other devices registered as SCSI FCP in the name server. The zoning function controls this process by only letting ports in the same zone establish these link-level services.

A collection of zones is called a zone set. The zone set can be active or inactive. An active zone set is the collection of zones currently being used by the switched fabric to manage data traffic.

Single-HBA zoning consists of a single HBA port and one or more storage ports. It is important to note that a port can reside in multiple zones, which provides the ability to map a single Symmetrix FA/CLARiiON SP port to multiple single-HBA zones. This allows multiple hosts to share a single Symmetrix/CLARiiON port. Single-HBA zoning best simulates a single-initiator SCSI environment. This reduces the reliability concerns that can be raised when mixing driver revision levels, HBA types, and heterogeneous servers on the same fabric.
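The login and discovery flow above can be sketched as a toy name server that answers a port's discovery query only with same-zone SCSI-FCP ports. Zone and port names are invented for illustration; this is a conceptual model, not the Connectrix implementation.

```python
name_server = {}                       # port -> protocol registered at login
zones = {                              # active zone set: two single-HBA zones
    "zone1": {"hba_1", "fa_15a"},      # sharing one storage (FA) port
    "zone2": {"hba_2", "fa_15a"},
}

def fabric_login(port, protocol="SCSI-FCP"):
    """Register the port and its protocol in the name server at fabric login."""
    name_server[port] = protocol

def discover(port):
    """Ports visible to `port`: same-zone members registered as SCSI-FCP."""
    visible = set()
    for members in zones.values():
        if port in members:
            visible |= members
    visible.discard(port)
    return {p for p in visible if name_server.get(p) == "SCSI-FCP"}
```

Each HBA discovers only the shared FA port, while the FA port sees both HBAs, which is how one storage port serves multiple single-HBA zones.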
SAN Foundations, 33
Port Zoning
Port-to-port traffic
Ports can be members of more than one zone
Each HBA only sees the ports in the same zone
Zoning by port number may not be the best option
If a cable was moved to a different port during troubleshooting, zoning would have to be changed
Zone 1 = Domain1 Port 0 & Domain1 Port 31
Zone 2 = Domain1 Port 1, Domain1 Port 31, & Domain1 Port 29
Zone 3 = Domain1 Port 2 & Domain1 Port 29
Zone 4 = Domain1 Port 5 & Domain1 Port 25
Zone 5 = Domain1 Port 7 & Domain1 Port 25
[Diagram: five HBAs attached to an FC switch (Domain ID = 1, ports 0-31), zoned to array ports 25, 29, and 31]
Zone Sets:
Zone Set1 = zone1, zone2, zone3
Zone Set2 = zone4, zone5
When port zoning is implemented, only the ports listed in the zone are allowed to send Fibre Channel frames to each other (e.g., among the members of zone1, the HBA port attached to Domain 1, Port 0 can only talk to the array port attached to Domain 1, Port 31). The switch hardware examines each frame of data coming through the fabric for the Domain ID of the switch and the port number of the node to ensure it is allowed to pass to another node connected to the switch. No frames, accidental or intentional, can pass through to nodes where permission is not given. Port zoning will not allow Fibre Channel frames to be sent from within a zone to ports outside the zone, nor will it pass Fibre Channel frames into a zone from ports that are not included in the zone's port list. This strict limitation on traffic implies that moving a node that is zoned by a port zoning policy to a different switch port will effectively isolate it. On the other hand, if a node is inadvertently plugged into a port that is zoned by a port zoning policy, that node will gain access to the other ports in the zone.
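The hardware check described above can be modeled as a frame filter over (domain, port) pairs: a frame passes only when source and destination share a zone in the active zone set. The zone contents follow the earlier example; the function itself is an illustrative sketch, not switch firmware.

```python
# Active zone set from the port-zoning example, as (domain, port) tuples.
ACTIVE_ZONE_SET = {
    "zone1": {(1, 0), (1, 31)},
    "zone2": {(1, 1), (1, 31), (1, 29)},
    "zone3": {(1, 2), (1, 29)},
}

def frame_allowed(src, dst, zone_set=ACTIVE_ZONE_SET):
    """True if any zone in the active zone set contains both endpoints."""
    return any(src in members and dst in members
               for members in zone_set.values())
```

Note that moving a node to an unzoned port, say port 3, isolates it: frame_allowed((1, 3), (1, 31)) is False.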
SAN Foundations, 34
Zoning by WWN
Defining zones by WWN allows greater flexibility
WWN (World Wide Name): a unique 64-bit identifier assigned to a Fibre Channel device that does not change; typically burned into hardware on HBAs
WWPN specifies a port; WWNN specifies a node
WWNs defined as part of a zone see each other regardless of the switch port they are plugged into
EMC uses WWPN
Makes troubleshooting easier and environment is more secure
WWN zoning creates zones by using the WWNs of the attached nodes (HBA and/or storage subsystem). A WWN is a unique 64-bit identifier that is assigned by a standards organization to a factory and set on HBAs, EMC storage array ports, and other Fibre Channel device ports. WWN zoning provides the capability to restrict devices into zones, specified by port (WWPN) or node (WWNN). This is more flexible, as any node anywhere in the network stays within its zone. Be aware that when replacing a device, the WWN might change while the port address remains the same.
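A WWN is written as eight colon-separated hex bytes. The following hypothetical helper validates that format and packs it into the underlying 64-bit value; it is an illustration of the identifier's structure, not part of any switch or EMC tool.

```python
def parse_wwn(text):
    """Validate 'xx:xx:xx:xx:xx:xx:xx:xx' and return the 64-bit value."""
    parts = text.split(":")
    if len(parts) != 8 or not all(len(p) == 2 for p in parts):
        raise ValueError("malformed WWN: %r" % text)
    value = int("".join(parts), 16)    # raises ValueError on non-hex digits
    assert value < 2 ** 64             # fits the 64-bit identifier
    return value
```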
SAN Foundations, 35
[Diagram: HBAs and array ports attached to an FC switch, zoned by WWN. HBA WWNs begin 10:00:..., array port WWNs begin 50:06:...]
Zone 1 = 10:00:00:00:C9:20:C4:E4, 50:06:04:82:B8:91:2B:8E
Zone 2 = 10:00:00:E0:69:00:0C:FE, 50:06:04:82:B8:91:2B:9F
Zone 3 = 10:00:00:00:C9:20:C3:21, 50:06:04:82:B8:91:2B:8E, 50:06:04:82:B8:91:2B:9F
Zone 4 = 10:00:00:E0:69:00:14:B8, 50:06:04:82:B8:90:51:4F
Zone 5 = 10:00:00:E0:69:00:16:6D, 50:06:04:82:B8:90:51:4F
Access is controlled by WWN, regardless of the port that connects a device to the switch
The WWNs are registered in the switch's Distributed Name Service (dNS). WWN zoning is achieved through a dNS operation that responds to queries arriving from nodes connected to the fabric. When an HBA logs in to a fabric, it queries the dNS in search of other FCP-capable nodes. If the HBA is WWN-zoned with other storage target ports, the dNS returns a list of the target nodes that are FCP-capable and in the same zone as the HBA.
SAN Foundations, 36
Zoning Advantages and Disadvantages
Port zoning advantages:
Security: port zoning is sometimes considered more secure than WWN zoning because zoning configuration changes must be performed at the switch. If physical access to the switch is restricted, the potential for unauthorized configuration changes is greatly reduced.
HBA replacement: in some situations, port zoning can simplify the process of replacing HBAs. When zones are created using port zoning, HBAs can be replaced without modifying zone configurations (for example, in a test environment).

Port zoning disadvantages:
Switch port replacement and the use of spare ports require manual changes to the zone configuration.
If the domain ID changes (which can happen if a set of independent switches is reconfigured to form a multi-switch fabric), the zoning configuration becomes invalid, increasing the chance of data corruption.
Replacing an HBA requires reconfiguration of the volume access control settings on the storage subsystem. This minimizes the benefit of hard zoning because manual configuration changes are still necessary.

WWN zoning advantages:
The zone member identification does not change if the fibre cable connections to switch ports are rearranged.
Fabric changes, such as switch addition or replacement, do not require changes to the zoning.

WWN zoning disadvantages:
It is possible to change an HBA's WWN to match the current WWN of another HBA, commonly referred to as spoofing.
Replacement of a damaged HBA requires the user to update the zoning information and the volume access control settings.
SAN Foundations, 37
[Diagram: Host Y and Host R connected through the fabric to CLARiiON and Symmetrix arrays; LUN masking controls which volumes each host sees]
Device (LUN) masking ensures that servers receive appropriate volume access. A zone set can have multiple host HBAs and a common storage port. To prevent multiple hosts from trying to access the same volume presented on the common storage port, device masking is employed. When servers log into the switched fabric, the WWNs of their Host Bus Adapters (HBAs) are passed to the storage fibre adapter ports that are in their respective zones. The storage system records the connection and builds a filter listing the storage devices (LUNs) available to that WWN through the storage fibre adapter port. The HBA port then sends I/O requests directed at a particular LUN to the storage fibre adapter. Each request includes the identity of the requesting HBA (from which its WWN can be determined) and the identity of the requested storage device: its storage fibre adapter and logical unit number (LUN). The storage array processes each request to verify that the HBA is allowed to access that LUN on the specified port. Any request for a LUN that an HBA does not have access to returns an error to the server.
SAN Foundations, 38
Devices can be masked through the use of several tools. These include ControlCenter, Navisphere, and the SYMCLI commands. The following is an example using the SYMCLI symmask command: symmask add dev 19,60,A0,D4 -wwn 10000000c92741a1 -dir 14D -p 0 The symmask add command adds access to devices 0019, 0060, 00A0, and 00D4 for the HBA whose WWN identifier is 10000000c92741a1 and is zoned to director 14D, port 0.
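The array-side check that symmask configures can be sketched as a lookup in a masking database keyed by (director:port, HBA WWN). The entries mirror the symmask example above; the data structure and function names are invented for illustration and are not Symmetrix internals.

```python
# Masking database: which LUNs each HBA WWN may reach on each director port.
masking_db = {
    ("14D:0", "10000000c92741a1"): {0x19, 0x60, 0xA0, 0xD4},
}

def handle_io(dir_port, hba_wwn, lun):
    """Return 'ok' if the HBA may access this LUN on this port, else 'error'."""
    allowed = masking_db.get((dir_port, hba_wwn), set())
    return "ok" if lun in allowed else "error"
```

A request for a masked LUN, or from an unknown WWN, is rejected even though zoning already allows the HBA to reach the port.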
SAN Foundations, 39
Reference Documentation
One of the major documentation resources available is the EMC Networked Storage Topology Guide. It includes information on networked storage concepts, EMC Fibre Channel switch products, and networked storage design considerations. http://www.emc.com/horizontal/interoperability http://powerlink.emc.com
SAN Foundations, 40
Connectrix Family
Directors, Switches, and Opportunities
Lastly, we will look at the Connectrix Family including Directors, Switches, and opportunities.
SAN Foundations, 41
McData switches: DS-24M2, DS-32M2, DS-32M, DS-16M2, DS-16M
Brocade director: ED-12000B
Brocade switches: DS-8B2, DS-16B2, DS-32B2
Cisco: MDS 9509 (director), MDS 9216 (switch)
Cabinets: EC-1200 (M Series), EC-1230 (B Series)
Only EMC offers a complete range of 1 Gb and 2 Gb SAN products, from Connectrix Enterprise Directors for data center deployments to Connectrix Departmental Switches for data center and departmental deployment. Determine connection, redundancy, and bandwidth requirements, then choose the appropriate Director or switch solution. All of EMC's Connectrix storage networking devices carry EMC's hardware warranty, which includes 24x7 support. Always refer to the Support Matrix or Powerlink for current supported products and features.
SAN Foundations, 42
Connectrix Directors
Redundant components
Optimal serviceability
Highest availability
Maximum scalability
Support for large fabrics
Data center deployment
ED-64M MDS 9509 ED-12000B ED-140M
With Connectrix Enterprise Directors, you get greater levels of modularity, fault tolerance, and expandability than those offered by Connectrix Departmental Switches. Connectrix Directors deliver the scalability and availability attributes required by your most mission-critical SAN-based applications without sacrificing simplicity and manageability. Directors allow you to build larger SANs and avoid the complexity that would be incurred by building large fabrics with many switches. A Director provides the following functionality: an internal switched bus supporting any-to-any connections across internal connections; redundant modular components supporting automated switchover triggered by hard or soft failures; pre-emptive hardware switchover powered by both automated periodic health checking and the correlation of identified hardware failures; online code loads; and online redundant hardware replacement. Always refer to the Support Matrix or Powerlink for current supported products and features.
SAN Foundations, 43
Connectrix Switches
Redundant fans
Redundant power supplies
High availability through redundant deployment
Work group, departmental, and data center deployment
Scalability through inter-switch links
DS-32B2 DS-32M2
DS-16B2
DS-24M2
MDS-9216
Switches are less expensive than Directors, smaller in capacity, and offer limited built-in availability features. Switches are ideal for smaller environments where a large number of host connections is not required. SANs can be created with switches, but at the expense of a more complex architecture requiring many more network devices and switch interconnects. However, Connectrix Switches provide the following functionality: high availability, heterogeneous access for an expanding storage environment, and switch health monitoring and switch management features. Connectrix Switches interoperate with a Director to consolidate data into centralized storage pools. Always refer to the Support Matrix or Powerlink for current supported products and features.
SAN Foundations, 44
EC-1200 and EC-1230B Cabinets
Optimized airflow ensures high reliability; cable management system
EC-1230B cabinet: for the Brocade ED-12000B Director only; two ED-12000B chassis per cabinet
The Connectrix EC-1200 is a highly configurable cabinet that supports a mix of M-Series (McData) switches and directors: ED-64M, ED-6064, DS-32M, DS-32M2, DS-16M, and DS-16M2. (The legacy ED-1032 can also be installed in the EC-1200 cabinet.) The cabinet provides an integrated Connectrix Manager console with diagnostics, management, and alerts. The unit is designed for optimized airflow, enabling additional cooling, and utilizes a Velcro strap system for cable management. The EC-1200 cabinet fits three or four enterprise-class directors (four ED-64M or ED-6064, or three ED-140Ms), thereby providing a total of 256 to 420 ports in the same size cabinet. The ED-6064 has the same dimensions as the ED-64M and can also be installed in an EC-1200 cabinet.

EMC has designed the EC-1230B cabinet to provide optimal housing for the B-Series (Brocade) ED-12000B. The cabinet supports two ED-12000B chassis. The unit is optimized for airflow, enabling additional cooling. Power to the ED-12000B directors is provided by the EC-1230B cabinet, which utilizes a Velcro strap system for cable management. No switches other than the two ED-12000B Directors may be installed in the EC-1230B cabinet. If ordered for CLARiiON environments, the DS-32M2 and DS-24M2 are installed in CLARiiON racks.
SAN Foundations, 45
Telnet: command line interface
HTTP: network GUI tools
Stand-alone: Connectrix Manager, WebTools, Fabric Manager
There are several ways to monitor and manage a fabric. If the fabric is contained in a cabinet with a Service Processor (SP), we can use console software loaded on the SP. If the fabric is configured in a network, we can use HTTP-based GUI tools to manage the fabric. We can also use Telnet to manage the fabric over the network using a command line interface. Lastly, if the environment contains a management framework such as Tivoli, SNMP can be used to monitor the fabric.
SAN Foundations, 46
Topology snapshot feature; ability to set and identify operating speeds and hardware
Connectrix Manager is used for the management of M-Series (McData) switches. Connectrix Manager can be run locally on the Service Processor platform or remotely on any network-attached user workstation in the enterprise. The Java-based deployment support gives IT administrators the flexibility to run Connectrix Manager from virtually any type or size of client device. Connectrix Manager uses Product View to provide an intuitive graphical view of all the devices on the network; it includes mini-icons that display the device name or IP address, number of ports, switch speed, and an attention indicator that reveals a device's status. Connectrix Manager uses Fabric View to provide an easy-to-use fabric tree control and tabs for topology and zone sets. The elements in the tree control support context menus for single-click administration and display a visual status of fabric health for immediate problem identification. Connectrix Manager uses Hardware View to manage individual switches.
SAN Foundations, 47
Allows direct access via LAN (local area network), public or private IP address
Hardware and zoning management of switches
The Embedded Web Server (EWS) is used when a switch is not being managed by the Service Processor in an EC-1200 cabinet. EWS allows users to perform initial configuration as well as further management. It also allows direct access via a LAN public or private IP address. EWS requires Netscape or Internet Explorer. The McData switches have an Embedded Web Server for management purposes.
SAN Foundations, 48
Supports remote installation of SAN Manager's FibreZone Bridge on a customer-supplied Windows NT or 2000 server/workstation
Not a tool to manage switches
Use Embedded Web Server or Telnet
DS-M Connect is not a tool to manage switches. It hosts the ControlCenter SAN Manager bridge agent and provides support for the bridge agent to allow ControlCenter (SAN Manager) to manage the zoning of switches when Connectrix Manager isn't present. Use the Embedded Web Server or Telnet to perform device management functions.
SAN Foundations, 49
WebTools - Brocade
Browser-based management application for B-Series (Brocade) switches; provides zoning, fabric, and switch management
B-Series products
Supports aliases
Provides fabric-wide and detailed views
Firmware upgrades
Administration View
WebTools is an easy-to-use, browser-based application for switch management and is included with all Connectrix B-Series (Brocade) products. WebTools simplifies switch management by enabling administrators to configure, monitor, and manage switch and fabric parameters from a single online access point. WebTools supports the use of an alias name for a member of a zone. With WebTools, firmware upgrade is a one-step process. The Switch View allows you to check the status of a switch in the fabric; the LED icon for a port reporting an issue changes color.
SAN Foundations, 50
Included with every MDS 9509 Director and MDS 9216 FC switch is the MDS Fabric Manager. This Java-based tool simplifies management of the MDS Series through an integrated approach to fabric administration, device discovery, topology mapping, and configuration functions for the switch, fabric, and port. Features include:
Fabric visualization: automatic discovery, zone and path highlighting
Comprehensive configuration across multiple switches
Powerful configuration analysis, including real-time monitoring, alerts, zone merge analysis, and configuration checking
Network diagnostics: probes network and switch health, enabling administrators to pinpoint connectivity and performance issues
Comprehensive security: protects against unauthorized management access with Simple Network Management Protocol version 3 (SNMPv3), Secure Shell (SSH), and role-based access control (RBAC)
Traffic management: the Fibre Channel Congestion Control (FCC) mechanism can throttle traffic back at its origin; QoS allows traffic to be intelligently managed, with low-priority traffic throttled at the source while high-priority traffic is not affected
Diagnostic features: SPAN provides the ability to intelligently capture traffic; Cisco Fabric Analyzer decodes and analyzes Fibre Channel and SCSI protocols and sends them to a workstation over IP; FC Traceroute logs timestamps at each hop
The switches can also be managed by the included CLI and through third-party storage management tools.
SAN Foundations, 51
Telnet
Connectrix family: all switch management functions available through a CLI
Zoning and switch management; firmware upgrades
Available over IP
Telnet is available on most members of the Connectrix family. Some administrators prefer the feel of a Command Line Interface (CLI), and most functions can be performed through the CLI. The Ethernet (IP) connection provides multiple ways to easily access and configure any switch in the fabric, from inside or outside the data center.
SAN Foundations, 52
SNMP
Connectrix family
Provides SNMP GETs and TRAPs
Allows third-party management tool interface
Management Information Base (MIB) support: FibreAlliance, Fabric Element (FE), Switch (SW-MIB)
SNMP is available in all members of the Connectrix family. SNMP (Simple Network Management Protocol), an industry standard for managing networks, is used mostly for monitoring the status of the network to identify problems. The SNMP MIB is a numerical representation of the status information that is accessed via SNMP from a management station.
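The idea that the MIB is a numerical representation of status information can be illustrated with a toy OID table and a GET that resolves a dotted object identifier to a value. The OIDs shown are standard MIB-II objects (sysName, ifOperStatus); the switch name and port states are invented sample data.

```python
# Toy MIB: dotted OID -> current value, as a management station would see it.
mib = {
    "1.3.6.1.2.1.1.5.0": "switch01",     # sysName
    "1.3.6.1.2.1.2.2.1.8.1": 1,          # ifOperStatus, port 1 (1 = up)
    "1.3.6.1.2.1.2.2.1.8.2": 2,          # ifOperStatus, port 2 (2 = down)
}

def snmp_get(oid):
    """Resolve one OID, as an SNMP GET would; unknown OIDs return an error."""
    return mib.get(oid, "noSuchObject")
```

A monitoring framework polls such objects (or receives TRAPs) to flag, for example, a port whose ifOperStatus has gone to 2.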
SAN Foundations, 53
SAN Manager provides a single interface to manage storage device masking, switch zoning, and device monitoring and management capabilities for heterogeneous SAN environments. The integration of SAN Manager into ControlCenter provides a distributed infrastructure allowing for remote management support, as well as support for alarms, alerts, and thresholds on SAN devices. SAN Manager functionality automatically discovers, maps, and displays the entire SAN topology at a high level or in detail. Specific physical and logical information about each object in the topology can be displayed. Administrators can view physical devices such as host bus adapters (HBAs), Fibre Channel switches (Brocade, McData, and Cisco), and storage arrays, including HDS, HP StorageWorks, Symmetrix, and CLARiiON. SAN Manager also includes logical information such as zoning and storage device masking definitions.
SAN Foundations, 54
Two important documentation resources are the EMC SAN Product Guide and the Support Matrix. The SAN Product Guide includes information on SAN switch and Director products, SAN management software, interoperability and qualifications, and EMC Global Services. The Switched Fabric Topology Parameters section of the Support Matrix includes server/HBA model limitations, code revision recommendations, and references to the Fibre Connectivity: Switch Interoperability Application, which describes the limitations for a mixed Fibre Channel switched fabric topology. Externally accessible resources:
Director data: http://www.emc.com/products/networking/connectrix/connectrix_directors.jsp
Switch data: http://www.emc.com/products/networking/connectrix/connectrix_switches.jsp
Others: http://www.emc.com/horizontal/interoperability
http://powerlink.emc.com
SAN Foundations, 55
[Diagram: scalability comparison. Switch: 8-64 ports; Director: 64-256 ports; multi-switch/director fabrics built with ISLs: 256-1024 ports]
Let's compare the director and switch products. Directors are deployed for high-availability and/or large-scale environments. Connectrix Directors can have more than a hundred ports per device; however, the SAN can scale much larger by connecting the products together via ISLs (Inter-Switch Links). Directors allow you to consolidate more servers and storage with fewer devices. Director disadvantages: higher cost, larger footprint. Switches are the choice for smaller environments and/or where 100% availability may not be required; price is usually a driving factor. Switches are ideal for departmental or mid-tier environments. Each switch may have many ports, but like Directors, the SAN may grow through ISLs. Fabrics built with switches require more switches to consolidate servers and storage; therefore, there are more devices and more complexity in your SAN. Switch disadvantages: lower number of ports, complex to scale.
SAN Foundations, 56
M-Series: broad host support; multi-vendor storage; some Brocade interoperability. Scalability: up to 2048 ports per fabric. Performance monitoring; extended distance support; proactive alerts and detailed monitoring. High availability: redundant power, fans, and controllers; hot-swap components; transparent code load and failover.
MDS Series: Windows, Solaris, HP-UX, IBM AIX; Symmetrix and CLARiiON; no switch interoperability today. Scalability: up to 1792 ports per fabric. Application hosting (future); VSANs, QoS, FCC, SPAN, FC Traceroute, Fabric Analyzer, PortChannel, load balancing. High availability: redundant power, fans, and controllers; hot-swap components; transparent failover and code load on controllers.
*Optional features
We can further refine the Connectrix solution by examining unique availability and scalability characteristics provided by the different technology vendors. This broad choice increases EMC's ability to meet your individual requirements. Note: The data in the above slide is an example for discussion purposes only. Always refer to the Support Matrix or Powerlink for current supported products and features.
SAN Foundations, 57
SAN Design
Professional Services are delivered by the Technical Services Group (TSG) in several distinct practice areas: Consultation, Planning and Design, and Implementation and Integration. Using proven best practices and time-tested procedures, TSG ensures effective project management throughout the engagement. The term "best practice" describes a process rather than a series of documents or steps. When we talk about a best practice, we must remember that what is best for one situation may not be optimal in another. For example, deciding that a core/edge design is the best topology and trying to apply it to a simple SAN with 1 host and 4 storage ports results in a SAN that is too large, too expensive, and unwieldy for the customer. The Technical Services Group has many offerings, among them SAN Design and Implementation, and SAN Copy Implementation. For further information, go to: http://powerlink.emc.com/HighFreq/Service_offering_Index.xls
SAN Foundations, 58
Course Summary
Key points covered in this course:
Storage Area Networks and their value
SAN configuration and switch management
Connectrix switch products
Fabric software
Identifying Connectrix opportunities
Key points covered in this course are shown here. Please take a moment to review them.
SAN Foundations, 59
Closing Slide
Thank you for your attention. This ends our training on SAN Foundations.