
Ethernet Storage Guy: Multimode VIF Survival Guide

Posted by Trey Layton Apr 4, 2009

This is my first post to this NetApp community blog, and since this month marks my one-year anniversary at NetApp, I felt compelled to post something I believe will be valuable to many, as it is the most frequent topic of conversation I have with clients and colleagues. I have spent nearly all of my technology career as the network guy. A few years ago I ventured out on a path to become a virtualization and storage guy. What has been amazing is that the deeper I get into the latter (virtualization and storage), the more I call upon the skills of the former (networking). My networking experience has given me a clear understanding of the correct way to construct a high-performance Ethernet fabric to support Ethernet storage. A technology that nearly everyone has deployed to increase performance on the Ethernet fabric is the MultiMode VIF.

A MultiMode VIF is NetApp-speak for EtherChannel or port channeling: quite simply, the bonding of physical links into a virtual link, where the virtual link uses an algorithm to distribute, or load-balance, traffic across the physical links.

The first subject to tackle is NetApp terminology versus networking industry terminology. At NetApp we tend to generalize "MultiMode VIF" into an all-encompassing description of channeling. Some of us refer to MultiMode VIFs as "trunked ports". This is an inaccurate description, though I understand why the term is used: when the networking industry hears "trunked interface", it thinks of a VLAN trunk, using a technology like 802.1Q. Therefore, when I refer to the technology enabled by MultiMode VIFs, I will always call the physical links EtherChanneled or channeled interfaces.

The next thing we tend to do is never reference the other type of MultiMode VIF, the "Dynamic MultiMode VIF". Many of you reading that term for the first time will wonder what I am talking about. Take a look at our OnTap Network Management Guide for virtually any release and browse the section on MultiMode VIFs. You will see two distinct types: the Static MultiMode VIF and the Dynamic MultiMode VIF. The differences between these two types are key, and knowing what each one is will help you in conversations with the networking team in your organization.

Generated by Jive SBS on 2011-09-05-06:00 1


Static MultiMode VIF - A static MultiMode VIF is a static EtherChannel. A static channel is quite simply a static definition of which physical links make up the channel. There is no negotiation or auto-detection of a physical port's status or ability to be channeled; the interfaces are simply forced to channel.

The Cisco command to enable a static EtherChannel is: channel-group (#) mode on
The NetApp command to enable a static MultiMode VIF is: vif create multi

** Covered in detail in the templates below

Dynamic MultiMode VIF - A Dynamic MultiMode VIF is an LACP EtherChannel. LACP is short for Link Aggregation Control Protocol and is the IEEE 802.3ad standard for port channeling. LACP provides a means to exchange PDUs between the devices that are channeled together; in the case of the present topic, that is a NetApp controller and a Cisco switch. The only difference between the two types is the use of PDUs to alert the remote partner of interface status. This matters when one partner (let's say a switch) decides to remove a physical interface from the channel for reasons other than the link being physically down. If the switch removes a physical interface from an LACP channel, it sends a notification to the partner (the NetApp controller) that the link was removed. This allows the controller to respond and also remove that link from the channel, avoiding a situation where the controller keeps trying to use that link and certain transmissions are lost. Static EtherChannels do not have this ability; if such a situation occurs, the only ways to remove the link from the channel are a configuration change, cable removal, or an administrative shutdown of the port.

I provide the above distinctions because I find that many people interchange the terms Static MultiMode and LACP. This can produce problems in the configuration of the network supporting the controllers, so try to stick with the terminology above.


The next thing I often see is a debate about which technology, LACP VIFs or static VIFs, provides better load-balancing. The truth is they are exactly the same; there is no performance benefit of one over the other. This generally leads to the topic of load-balancing itself, as we often find that not everyone understands the mechanism behind the current load-balancing algorithms for LACP and static VIFs. There are limitations to the technology, and understanding those limitations is key to getting the most out of a deployment.

The Cisco commands to enable an LACP EtherChannel are: channel-protocol lacp and channel-group (#) mode active
The NetApp command to enable a dynamic MultiMode VIF is: vif create lacp

** Covered in detail in the templates below

Load-balancing in VIFs can use one of three algorithms: IP, MAC, or Round Robin.

Round Robin

I am personally not a fan of Round Robin load-balancing, as I used this algorithm in the early 90s, when a majority of networking manufacturers were first introducing EtherChannel-based features. This technology runs the risk of packets arriving out of order and, for that reason, has nearly been eliminated from most manufacturers' equipment. However, there are still deployments in production which use this feature, and they work without issue. Round Robin essentially alternates Ethernet frames over the active links in the channel. This provides the most even distribution of transmissions, but it can produce a situation where frame 1 is transmitted on link 1 and frame 2 is transmitted on link 2, and frame 2 arrives at the destination before frame 1 because of congestion experienced by frame 1 while in transit. This produces a condition where errors occur and the protocol or application must facilitate a recovery, which typically results in the frames being retransmitted.
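The alternation described above can be sketched in a few lines of Python (purely illustrative, not any vendor's implementation):

```python
from itertools import cycle

# Round Robin sketch: frames simply alternate over the active links in order.
links = cycle([0, 1])                    # a 2-link channel
order = [next(links) for _ in range(6)]  # link chosen for frames 1..6
# order == [0, 1, 0, 1, 0, 1]
# Frame 2 (sent on link 1) can overtake frame 1 (sent on link 0) if link 0
# is congested, forcing the upper-layer protocol to recover or retransmit.
```

The distribution is perfectly even, which is exactly why consecutive frames of one conversation can take different paths and arrive out of order.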

Source and Destination Pairs

The load-balancing algorithms used in most NetApp MultiMode VIF deployments are detailed in the sections that follow, but one thing they have in common is that they choose the interface to be used by running an XOR calculation on a source and destination pair. The result of that calculation is reduced by the number of physical links in the VIF, which maps each pair to exactly one of the physical interfaces. It is important to understand this because many people assume that bonding four physical links together yields a speed equal to the sum of the links. This is not true: the maximum speed that can be reached on an EtherChannel is the speed of one physical link in the channel, not the sum. Take the example of a connection which contains four 1 Gbps physical links bonded into a MultiMode VIF. It is often assumed that this equals 4 Gbps of bandwidth to the controller. It actually equals four 1 Gbps links to the controller. A single transmission (source and destination pair) can burst up to the speed of one of the physical links (1 Gbps); no single communication can exceed the 1 Gbps speed.
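The XOR-and-reduce idea can be sketched as follows. This is a minimal illustration of the concept, not ONTAP's or Cisco's actual code:

```python
# Hash a source/destination pair onto one physical link:
# XOR the pair, then reduce modulo the number of links in the channel.
def pick_link(src: int, dst: int, num_links: int) -> int:
    return (src ^ dst) % num_links

# The same conversation always hashes to the same link, which is why one
# flow can never use more than a single link's worth of bandwidth.
assert pick_link(10, 100, 4) == pick_link(10, 100, 4)
assert 0 <= pick_link(10, 100, 4) < 4
```

Because the hash is deterministic, aggregate throughput scales only with the number of distinct source/destination pairs, never for a single pair.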

The following sections will describe how the algorithms work.

NOTE: The algorithms defined here are industry limitations and are the same no matter who the manufacturer is. Cisco has implemented a few additional algorithms, but none get around the core limitation that no single conversation can exceed the speed of a given physical link in the channel.

MAC Load-Balancing


This is the least-common algorithm used, because of conditions which make it likely that traffic distribution will be weighted heavily toward a single link. The MAC-based algorithm performs an XOR calculation on the source and destination MAC addresses of a communication. The source would be the MAC address of the NIC on the host connecting to the NetApp controller; the destination would be the MAC address of the VIF interface on the NetApp controller. This algorithm works well if the hosts and the NetApp controller reside on the same subnet or VLAN. When hosts reside on a different subnet than the NetApp controller, we begin to expose the weakness in this algorithm. To understand the weakness, you must understand what happens to an Ethernet frame as it is routed through a network.

Let's say we want Host1 to connect to Controller1.

Host1's IP address is 10.10.1.10/24 (Host1's default router is 10.10.1.1)
Controller1's IP address is 10.10.3.100/24 (Controller1's default router is 10.10.3.1)

Above we have defined the host and the controller on two separate subnets. The only way they can communicate with each other is by going through a router. In the example above, default routers 10.10.1.1 and 10.10.3.1 are actually the same physical router; those addresses are simply two physical interfaces on it. The router's purpose is to connect networks and allow communication between subnets.

As Host1 transmits a frame destined for Controller1, it addresses the frame to its default router, because it recognizes that 10.10.3.100 is not on its local network; it therefore forwards the frame to its default router so the router can forward it on toward the destination.

Host1 to Host1Router

-IP Source: Host1 (10.10.1.10)
-MAC Source: Host1
-IP Destination: VIFController1 (10.10.3.100)
-MAC Destination: Host1DefaultRouter

Host1Router Routing Packet to Controller1

-IP Source: Host1 (10.10.1.10)
-MAC Source: Controller1DefaultRouter
-IP Destination: VIFController1 (10.10.3.100)
-MAC Destination: VIFController1

NOTE: The source and destination MAC addresses changed as the frame was forwarded through the network. This is how routing works: as the frame crosses the routers between source and destination, the MAC addresses can change multiple times. How many times is not the concern; what matters is the moment the frame is forwarded onto the local segment of the controller. There, the source MAC will always be the router and the destination MAC will always be the controller VIF. If the source and destination pair is always the same, then you will always be load-balanced onto one link. To see how this creates a problem, let's say that we have a 4 x 1 Gbps EtherChannel on Controller1, and 50 other hosts on the same subnet as Host1. The source and destination MAC pair for Host1 to Controller1 is exactly the same as for every other host on Host1's network, since the source and destination MAC addresses will always be Controller1DefaultRouter and VIFController1.
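The failure mode is easy to demonstrate. In this sketch the two MAC values are hypothetical placeholders, and the hash is the simple XOR-modulo illustration, not a vendor's real implementation:

```python
def pick_link(src_mac: int, dst_mac: int, num_links: int) -> int:
    # XOR the MAC pair, reduce modulo the number of links.
    return (src_mac ^ dst_mac) % num_links

ROUTER_MAC = 0x001B44112233  # hypothetical MAC of Controller1DefaultRouter
VIF_MAC = 0x02A098001122     # hypothetical MAC of VIFController1

# Every routed frame arrives with the same MAC pair, so all 50 hosts
# hash onto one and the same physical link of the 4-link channel.
links_used = {pick_link(ROUTER_MAC, VIF_MAC, 4) for _ in range(50)}
assert len(links_used) == 1
```

Fifty hosts, four links, and every byte of their traffic lands on a single link: that is the weakness of MAC-based load-balancing across a router.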

IP Load-Balancing

IP load-balancing is the default for all NetApp MultiMode VIFs and is the most common type of MultiMode VIF in production today. The algorithm is no different from the MAC algorithm defined above; the difference is that it uses source and destination IP addresses. If you go back through the example above, you will note that the source and destination IP addresses never change, unlike the MAC addresses. Because the IP addresses never change, you are likely to have more unique pairs, which results in a more even distribution of traffic across the physical links.

It is important to understand one final thing about source and destination IP pairs: the last octet of each IP address is the only factor used in calculating the pair. For IP source 10.10.1.10, only the 10 (the last octet) is used; for IP destination 10.10.3.100, only the 100. Be aware that only the last octets feed the calculation, so if you deploy hosts on multiple subnets, hosts with the same last octet will be transmitted on the same physical link.
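A quick sketch makes the last-octet behavior concrete (again an illustration of the idea, not ONTAP's literal code):

```python
def pick_link_ip(src_ip: str, dst_ip: str, num_links: int) -> int:
    # Only the last octet of each address feeds the hash.
    src_last = int(src_ip.split(".")[-1])
    dst_last = int(dst_ip.split(".")[-1])
    return (src_last ^ dst_last) % num_links

# Hosts on different subnets but with the same last octet land on the
# same physical link, because only the last octets are compared.
a = pick_link_ip("10.10.1.10", "10.10.3.100", 4)
b = pick_link_ip("10.10.2.10", "10.10.3.100", 4)
assert a == b
```

So 10.10.1.10 and 10.10.2.10 are indistinguishable to the balancer when talking to the same controller address; plan host addressing with that in mind.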

IP Aliasing

Understanding the load-balancing algorithms allows you, as an administrator, to exploit them to your benefit. Every NetApp VIF and physical interface can have an alias placed on it; this is simply an additional address on the VIF itself. I always advise customers to place a number of addresses (the VIF address plus aliases) equal to the number of physical links in the EtherChannel between the controller and the switch to which it is attached. Therefore, if you have a 4 x 1 Gbps MultiMode VIF between a controller and a switch, place one address on the VIF and three aliases on that same VIF.

Simply placing the additional addresses will not, by itself, deliver the advantage. You must ensure that the hosts which mount data from the NetApp controllers use all of the addresses. This can be achieved in a few different ways, depending on the protocol used for storage access; below are a few NFS examples.

Oracle NFS - Oracle hosts should mount NFS volumes by evenly distributing the NFS mounts across the available controller IP addresses. If there are four different NFS mounts, then mount them via the four different IP addresses on the controller. Each mount will have a different source and destination pair, and the communication from host to controller will use the links efficiently.

VMware NFS - ESX hosts should mount each NFS datastore via a different IP address on the NetApp controller. It is perfectly fine to use a single VMkernel interface (the source address) as long as you are mounting each datastore against a different IP address on the controller. If you have more datastores than IP addresses, simply distribute the datastore mounts evenly across the controller's IP addresses.
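The even-distribution advice above amounts to a round-robin assignment of mounts to controller addresses. A sketch, using the hypothetical addresses from this article's examples:

```python
# Spread datastore (or NFS) mounts evenly across the controller addresses:
# the VIF address plus its three aliases.
controller_ips = ["10.10.3.100", "10.10.3.101", "10.10.3.102", "10.10.3.103"]
datastores = ["ds1", "ds2", "ds3", "ds4", "ds5", "ds6"]

# Assign mount i to address i modulo the number of addresses.
mounts = {ds: controller_ips[i % len(controller_ips)]
          for i, ds in enumerate(datastores)}
# ds5 wraps around to the first address again.
```

With six datastores and four addresses, the fifth and sixth mounts reuse the first two addresses; that overlap is unavoidable and still far better than mounting everything against one address.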

Final note about aliases: when administrators configure physical interfaces on NetApp controllers, they typically partner those interfaces with the other controller's interfaces. This ensures that failover of a controller moves the failed controller's interfaces to the surviving controller. Any time you place an alias on an interface, if you have partnered the physical interface, the aliases WILL travel to the clustered partner during failover. You do not partner the aliases if the physical interface has already been partnered.

Finally the templates:

LACP - Dynamic MultiMode VIF
____________________________________
Filer RC File

#Manually Edited Filer RC file 3 March, 2009, by Trey Layton

hostname filer a

vif create lacp template-vif1 -b ip e0a e0b e0c e0d

ifconfig template-vif1 10.10.3.100 netmask 255.255.255.0 mtusize 1500 partner (partner-vifname)
ifconfig template-vif1 alias 10.10.3.101 netmask 255.255.255.0
ifconfig template-vif1 alias 10.10.3.102 netmask 255.255.255.0
ifconfig template-vif1 alias 10.10.3.103 netmask 255.255.255.0

route add default 10.10.3.1 1
routed on
options dns.domainname template.netapp.com
options dns.enable on
options nis.enable off
savecore

_____________________________________
Cisco Configuration

!!!!!! The following interface is a virtual interface for the etherchannel. This interface must be referenced
!!!!!! on the physical interface to create the channel.

interface Port-channel 1
 description Virtual Interface for Etherchannel to filer
 switchport
 switchport mode access
 switchport nonegotiate
 spanning-tree guard loop
 spanning-tree portfast
!
!!!!! The following are the physical interfaces in the channel. The above is the virtual interface for the channel.
!!!!! Each physical interface will reference the virtual interface.

interface GigabitEthernet 2/12
 description filer interface e0a
 switchport
 switchport mode access
 switchport nonegotiate
 flowcontrol receive on
 no cdp enable
 spanning-tree guard loop
 spanning-tree portfast
 channel-protocol lacp
 channel-group 1 mode active
!
!!!!!! The above channel-group command is the command which bonds the physical interface to the virtual interface
!!!!!! previously created. The keyword following the channel number is the mode - active is for LACP.
!
interface GigabitEthernet 2/13
 description filer interface e0b
 switchport
 switchport mode access
 switchport nonegotiate
 flowcontrol receive on
 no cdp enable
 spanning-tree guard loop
 spanning-tree portfast
 channel-protocol lacp
 channel-group 1 mode active
!
interface GigabitEthernet 2/14
 description filer interface e0c
 switchport
 switchport mode access
 switchport nonegotiate
 flowcontrol receive on
 no cdp enable
 spanning-tree guard loop
 spanning-tree portfast
 channel-protocol lacp
 channel-group 1 mode active
!
interface GigabitEthernet 2/15
 description filer interface e0d
 switchport
 switchport mode access
 switchport nonegotiate
 flowcontrol receive on
 no cdp enable
 spanning-tree guard loop
 spanning-tree portfast
 channel-protocol lacp
 channel-group 1 mode active

Static EtherChannel - Static MultiMode VIF
____________________________________
Filer RC File

#Manually Edited Filer RC file 3 March, 2009, by Trey Layton

hostname filer a

vif create multi template-vif1 -b ip e0a e0b e0c e0d

ifconfig template-vif1 10.10.3.100 netmask 255.255.255.0 mtusize 1500 partner (partner-vifname)
ifconfig template-vif1 alias 10.10.3.101 netmask 255.255.255.0
ifconfig template-vif1 alias 10.10.3.102 netmask 255.255.255.0
ifconfig template-vif1 alias 10.10.3.103 netmask 255.255.255.0

route add default 10.10.3.1 1
routed on
options dns.domainname template.netapp.com
options dns.enable on
options nis.enable off
savecore

_____________________________________
Cisco Configuration

!!!!!! The following interface is a virtual interface for the etherchannel. This interface must be referenced
!!!!!! on the physical interface to create the channel.

interface Port-channel 1
 description Virtual Interface for Etherchannel to filer
 switchport
 switchport mode access
 switchport nonegotiate
 spanning-tree guard loop
 spanning-tree portfast
!
interface GigabitEthernet 2/12
 description filer interface e0a
 switchport
 switchport mode access
 switchport nonegotiate
 flowcontrol receive on
 no cdp enable
 spanning-tree guard loop
 spanning-tree portfast
 channel-group 1 mode on
!
!!!!!! The above channel-group command is the command which bonds the physical interface to the virtual interface
!!!!!! previously created. The keyword following the channel number is the mode - on is for a static channel.
!
interface GigabitEthernet 2/13
 description filer interface e0b
 switchport
 switchport mode access
 switchport nonegotiate
 flowcontrol receive on
 no cdp enable
 spanning-tree guard loop
 spanning-tree portfast
 channel-group 1 mode on
!
interface GigabitEthernet 2/14
 description filer interface e0c
 switchport
 switchport mode access
 switchport nonegotiate
 flowcontrol receive on
 no cdp enable
 spanning-tree guard loop
 spanning-tree portfast
 channel-group 1 mode on
!
interface GigabitEthernet 2/15
 description filer interface e0d
 switchport
 switchport mode access
 switchport nonegotiate
 flowcontrol receive on
 no cdp enable
 spanning-tree guard loop
 spanning-tree portfast
 channel-group 1 mode on
34707 Views

Tags: vif, lacp, cisco, etherchannel, static_etherchannel, lacp_etherchannel, lacp_vif, static_vif

Apr 5, 2009 10:57 AM Andrew Miller

Awesome....thanks very much....just added to my bookmarks.

Apr 7, 2009 1:28 AM Daniel Prakash in response to Andrew Miller

Excellent one.. Dude you rock on the first post itself. I am going to wait for the next

Apr 15, 2009 10:17 AM John Summers

Great first blog.....looking forward to the future.

Apr 21, 2009 4:13 AM jmmorrell


great post, good explanations. i have a question regarding 4 physical ports w/ a VIF. can you channel 4 NIC ports together to get a 4GB pipe (4 x 1GbE)?

The user will be using iSCSI VMFS datastores and Oracle (I think via NFS). We're deploying dual head 3140 w/ additional 4-port NIC. Or would it be better to utilize the aliasing?

I'm still trying to understand what the aliases do. It sounds like the user would need 4 separate IP addresses and point the host at a specific address.

Is there any failover if one of the ports tied to an alias goes down?

May 8, 2009 8:46 AM mfgquote-gva

Man, I've been looking all over for this snippet...

Anytime you place an alias on an interface, if you have partnered the physical, the aliases WILL travel to the clustered controller in failover. You do not partner the aliases, if the physical has already been partnered.

Thanks!

May 11, 2009 8:45 PM jdstevens_nei

We have a filer with a vif with multiple 1 Gbit/s cables into an HP ProCurve switch. We have a Sun V890 which is an Oracle server and NFS client of the NetApp filer. All datafiles were served through a single 1 Gbit/s port. We added four more 1 Gbit/s interfaces. I assigned IP addresses to the four interfaces. Each port was on a separate 16-address subnet. I assigned corresponding aliases to the vif on the NetApp filer. We distributed Oracle datafiles across four directories and mounted the directories via addresses to force mounts through separate interfaces. All the mounts looked good, but performance went from decent to poor. We were hoping to go from decent to good. This appears to be a configuration analogous to that described in the blog.

I snooped the interfaces and found the traffic wasn't segregated by the IP addresses and subnets. Each interface was seeing traffic for itself, plus traffic for the other interfaces. It was as if the interfaces were competing for traffic rather than dividing the traffic according to IP addresses and subnets.

Has anyone suggestions what I might be doing wrong? It might be because the subnets were children of the networks of the primary interfaces. Would it work better without subnets? Would it work better with some other IP arrangement? Does someone have experience implementing such an arrangement? The idea seemed good, but the traffic is not behaving as I expected.

Here are interfaces and IPs:

NetApp:
vif-1 172.16.5.87 netmask 255.255.0.0
alias 172.16.10.1 netmask 255.255.255.240
alias 172.16.10.17 netmask 255.255.255.240
alias 172.16.10.33 netmask 255.255.255.240
alias 172.16.10.49 netmask 255.255.255.240

Sun:
ge0 172.16.5.55 netmask 255.255.0.0
ce0 172.16.10.2 netmask 255.255.255.240
ce1 172.16.10.18 netmask 255.255.255.240
ce2 172.16.10.35 netmask 255.255.255.240
ce3 172.16.10.50 netmask 255.255.255.240

When I assigned all of the additional IPs and netmasks as aliases on the ge0 interface (ge0:{1,2,3,4}), performance returned to "normal." We should be able to improve performance by dividing traffic among the interfaces, mounting different file systems through the four interfaces.

May 12, 2009 5:10 AM jdstevens_nei in response to jdstevens_nei

Sun hosts, by default, use the same MAC address for all interfaces. The MAC is based on the hostid as defined by the boot PROM. This causes problems with TCP traffic if the interfaces are connected to a common network. That was the source of the problem I described in a previous post.

The ether address of an interface on a Sun machine can be set with the ifconfig command. There is an eeprom command to use local MAC addresses, but I chose a more direct approach. I used prtconf to find the local MAC addresses of interfaces on a quad ethernet card that had been added to the system. I assigned the unique MAC addresses to the interfaces that all connect with the NetApp filer. That solved most of the performance problems related to what appeared to be competition among the interfaces for traffic.

Because all of the interfaces connected to a common switch had the same MAC address, the switch might send packets to any of the interfaces regardless of IP. With a one-to-one relationship between MAC and IP, the switch could send packets to the proper interface.

May 15, 2009 5:03 AM Trey Layton in response to jdstevens_nei

Couple of things I note on the config.


1.) The aliased IP addresses have a different netmask than the actual vif. It is advised to use the same netmask for the addresses bound to the physical interface and the aliases.

The change would be.

NetApp:
vif-1 172.16.5.87 netmask 255.255.0.0
alias 172.16.10.1 netmask 255.255.0.0
alias 172.16.10.17 netmask 255.255.0.0
alias 172.16.10.33 netmask 255.255.0.0
alias 172.16.10.49 netmask 255.255.0.0

2.) A Class B mask is a large mask to use for an in-data-center service. I am not sure how many hosts actually reside in this network, but I typically mask things down a little further to conserve address space. It is unlikely that you are experiencing broadcast storms which are affecting performance, but if there are a lot of hosts, it would be something to look at.

Doing a show interface on your Cisco switch will show you statistics for broadcast versus unicast (and multicast) packets. A little quick division against the total traffic will help you calculate the percentage of broadcast versus unicast. With a little more detailed investigation, it may be worth looking at the "storm-control broadcast level" command on your Cisco switches if you are seeing a problem. That command can have other implications, so only use it if you simply can't segment the address space and there is a broadcast problem.


3.) Many Cisco switches have differing capabilities in regard to EtherChannel load-balancing algorithms, and many default to MAC-based load-balancing. Even though you define IP-based balancing on the NetApp side, your switch could be performing MAC-based balancing. The way to check, and ultimately change, this is the following:

show etherchannel load-balance

This command will tell you which algorithm is being used by your switch type. If the results show that you are using mac based load-balancing then you can change this by issuing the following command.

port-channel load-balance src-dst-ip

NOTE: You may have a switch that doesn't support src-dst-ip; in that case the show etherchannel load-balance output will detail which algorithms are supported.

4.) It appears that you have placed aliases on the Sun server, and this is not necessary. The intent was to place aliases on the NetApp controller so that a single-IP-addressed host has different targets to mount storage from. I would change your config to a single IP address on the host side, simply targeting the aliased addresses on the NetApp side.

shoot me a message at treyl@netapp.com and I will be happy to assist further.

Trey


May 15, 2009 5:15 AM Trey Layton in response to jmmorrell

Jim

Sorry this wasn't entirely clear in the post. I seem to find new ways to explain this as time goes on.

First: when you deploy a 4 x 1Gbps etherchannel, your maximum throughput for one connection is 1Gbps. So in the case of your 3140 I would place the 4 interfaces in a vif with 1 physical address + 3 aliases. This helps distribute connections across the physical interfaces in the channel, distributing traffic and enabling better performance. We don't want to have 4 physical connections in a channel and not use them because of limited source and destination address pairs. This is sadly the case in many etherchannel environments I get asked to look at, simply because not everyone fully understands how the load-balancing algorithms work.

The purpose of the alias is to force unique source and destination pairs to exploit the load-balancing algorithm used with the etherchannel.

With your NFS mounts on the host side I would do the following.

mount 1: target vif physical ip address
mount 2: target vif alias 1
mount 3: target vif alias 2
mount 4: target vif alias 3
mount 5: target vif physical ip address (again)


etc....

This process forces unique source and destination pairs, which causes those connections to be load-balanced onto different physical interfaces. The only overlap would be mount 5 and mount 1 landing on the same interface. In a future post I will show the math behind how the load-balancing algorithms are calculated so that this becomes a little clearer.

NOTE: On the iSCSI side I would define my iSCSI targets on the ESX server as 4 total (1 physical + 3 aliases)

Trey

May 16, 2009 2:41 PM jdstevens_nei in response to Trey Layton

Thanks for your reply. I've found the discussion helpful.

Excessive broadcasts are not an issue on our network. There are not a lot of hosts in the network, and most do very little broadcasting. We were assigned the network and primary netmask by another group that manages all switching and routing. It works well, so we left it alone.

We use HP switches. I don't have login access to the switches. Another group manages those. It appears that traffic is switched based on IP addresses. I don't think the switch tries to perform any load balancing, but leaves that to the "clients."

The additional IP addresses on the Solaris host are not aliases for a single or trunked interface, but are assigned to specific interfaces. There are five 1000Base-T interfaces on the Sun box. My purpose in using subnets was to ensure that the Sun would use a specific interface to mount a specific set of NFS file systems. The purpose is to balance traffic on the Sun machine. The default traffic flows through the primary interface, while the NFS traffic for several specific file systems mounted from the NetApp flows through the four secondary interfaces. We don't allow routing on the Sun, and its routes to the aliased and subnetted IPs on the NetApp are through the interfaces with the appropriate IP addresses. If we choose the proper combination of NetApp/Sun IP addresses, the traffic for each of the five Sun interfaces should be on a different interface on the NetApp, which has six interfaces in a vif.

It appears, based on client statistics, that we are getting good, balanced throughput. It is far superior to what we were getting through a single interface. The Sun host of interest is an Oracle server with a very active database with over 2 TB of data files. Traffic from this host dominates activity on the NetApp filer. If it is balanced, we are in good shape. The rest of the hosts on the network should be scattered across the vif without having to tweak IP addresses.

I expect we could tweak IP addresses on the Solaris host and view output from ifstat to work on balancing traffic for interfaces in the vif on the NetApp. Is there a better way if we don't have access to switch statistics? Without access to log in to the switch, is there a way to specifically determine which interface on the Sun is connected to each interface on the NetApp?

May 27, 2009 3:52 AM navneetk in response to jdstevens_nei

Hi All,

I have a couple of scenarios, and I wanted to check what the issue could be with the second one.

1. If there is no etherchannel configured on the switch but a VIF exists on the filer, both (filer and switch) will be able to communicate, but with performance issues caused by port flapping at the switch end.
2. The reverse of case 1 is not true; i.e., if there is an etherchannel configured on the switch but no VIF present on the filer, the two will be unable to communicate.

Why this is so?


Navneet

Aug 4, 2009 7:26 PM petekowalsky

Trey -- great post -- dunno why I didn't find this two weeks ago when I was figuring all this out on my own... ;-) DOH!

QUESTION: Can you point me to a NetApp TR or other [preferably concise] document that covers all this stuff from concepts to commands? Reason I ask is I need to figure out if/how it's possible to create aliases using different VLAN tags on the single Dynamic (LACP) MultiMode VIF. I'm using VMware's software iSCSI initiator on my ESX 4.0 hosts, and each vmkernel interface needs to be on a different IP subnet there (right?), and I'd like to keep two iSCSI VLANs separate for this purpose and not have my iSCSI traffic make a layer 3 hop @ my Catalyst 4948 switches or something equally silly. I've got a working LACP (dynamic) MultiMode VIF now, and only one controller (two Gig interfaces) in my FAS 2050.

On a side note, as powerful as the NetApp stuff seems to be, as a new user, I find navigating the features / caveats / configuration information to be a total quagmire at best (admittedly, not much worse than other vendors, actually...but still..."throw me a frickin' bone here"). I'm also a networking guy, so fortunately that seems to help me quite a bit there as well... ;-)

Any info is sincerely appreciated!

Aug 5, 2009 8:45 AM Trey Layton in response to petekowalsky

Pete

Thanks for the comment. I have actually written quite a few more posts that I just haven't edited for posting yet. One of which was on VLAN configuration. I will try to get that out ASAP. With regards to TRs on Ethernet Storage Best Practices, we are in fact in the process of producing a paper which does go into the detail you are requesting. I don't have an ETA on it, but know that content intended for this forum has been taken to publish in that document.

On the topic of the ESX question. You are right in that vmkernel interfaces on the same vswitch need to be on different VLAN/IP subnets. Most customers are building one VMkernel interface for iSCSI and NFS. The second VMkernel interface is for vMotion, and that interface is sometimes on the same vswitch (on a different subnet) or other times on a different vswitch on the same subnet.

The Catalyst 4948 is actually a low latency switch so routing the traffic there wouldn't be a problem. However, I completely understand the desire to not introduce a L3 hop for this so you are correct that VLAN tagging is the proper way to go on the filer connectivity side. I will be sure to get up that post quickly. If you shoot me an email treyl@netapp.com I can fire you back a sample configuration for VLANs on NetApp which will get you started quickly.

Thanks, Trey

Aug 5, 2009 10:13 AM petekowalsky in response to Trey Layton

Hey Trey -- thanks for the quick response! Yup -- my goal is to achieve an efficient, resilient and well-balanced environment, so any extra L3 hops are undesirable. I'll send an email to you shortly for that sample VIF / VLAN configuration. Thanks again for the excellent writeup, and I'll look forward to your future posts.

Oct 14, 2009 4:22 PM phoenixexpl

[deleted duplicate post]

Oct 14, 2009 8:29 PM phoenixexpl

Trey:


What are the implications of the alias IP addresses for iSCSI MPIO?

I have my filer configured as follows:

hostname HOUFAS2050A
vif create lacp HOUFAS2050A -b ip e0a e0b e1a e1b
vlan create HOUFAS2050A 10
ifconfig HOUFAS2050A-10 10.17.10.65 netmask 255.255.255.0 partner HOUFAS2050B-10 mtusize 9000 trusted -wins up
ifconfig HOUFAS2050A-10 alias 10.17.10.66 netmask 255.255.255.0
ifconfig HOUFAS2050A-10 alias 10.17.10.63 netmask 255.255.255.0
ifconfig HOUFAS2050A-10 alias 10.17.10.64 netmask 255.255.255.0
route add default 10.17.10.1 1

My host has two NICs dedicated to iSCSI, with the following addresses:

10.17.10.80
10.17.10.180

Everything is on the same subnet, same VLAN, connected via a cross-stack LACP EtherChannel on my 3750-E stack.

You said, "Simply placing the additional addresses will not exploit the advantage of additional addresses. You must ensure that the hosts which mount data from the NetApp controllers utilize all of the addresses." But the examples you gave were both NFS. Is this true for iSCSI in this scenario? If I were to set up multipath I/O from each host interface to each filer ip address, I'd have eight total iSCSI connections. Is this optimal, redundant, or less-than-optimal?

Thanks!


Chuck

Oct 15, 2009 8:45 PM phoenixexpl in response to phoenixexpl

I think that I've partially answered my own question:

It doesn't make sense to have more active iSCSI connections than there are physical links on the host; you're certainly not going to get more bandwidth than the physical links will support.

However, it would be important to select the pairs in such a way that the XOR hash doesn't select the same physical link on the LACP EtherChannel between the switches and the filer. From what I've been able to determine, the interface selection is done according to the following formula:

(Source_IP) XOR (Dest_IP) MOD (Interface_Qty) = selected interface index

So in the case of my example above, my possibilities are:

.80<->.63    50 xor 3F = 6F    6F mod 4 = 3
.80<->.64    50 xor 40 = 10    10 mod 4 = 0
.80<->.65    50 xor 41 = 11    11 mod 4 = 1
.80<->.66    50 xor 42 = 12    12 mod 4 = 2
.180<->.63   B4 xor 3F = 8B    8B mod 4 = 3
.180<->.64   B4 xor 40 = F4    F4 mod 4 = 0
.180<->.65   B4 xor 41 = F5    F5 mod 4 = 1
.180<->.66   B4 xor 42 = F6    F6 mod 4 = 2
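Chuck's arithmetic above can be sanity-checked with a few lines of Python. Note this assumes, as he does, that the hash XORs the last octets of source and destination IP and takes the result modulo the link count; the actual ONTAP hash implementation may differ.

```python
def link_index(src_ip, dst_ip, num_links=4):
    """XOR the last octets of the two IPv4 addresses, modulo the
    number of physical links in the channel (Chuck's inferred formula)."""
    src = int(src_ip.rsplit(".", 1)[1])
    dst = int(dst_ip.rsplit(".", 1)[1])
    return (src ^ dst) % num_links

# Reproduce the table above: each host IP spreads across all four links.
for host in ("10.17.10.80", "10.17.10.180"):
    for filer in ("10.17.10.63", "10.17.10.64", "10.17.10.65", "10.17.10.66"):
        print(host, "<->", filer, "-> link", link_index(host, filer))
```

Running this reproduces the table: .80<->.64 lands on link 0 (the first physical link) and .180<->.65 on link 1, matching the hand calculation.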


So in this case, 10.17.10.80<->10.17.10.64 would route traffic over the first physical link in the EtherChannel, and 10.17.10.180<->10.17.10.65 would route traffic over the second physical link.

Is this correct, according to your understanding? It would also make sense to hash a second host's IPs so that you could pick pairs that would select links three and four, yes?

Thanks,

Chuck

Nov 8, 2009 4:46 PM stkendrick in response to phoenixexpl

Hi Trey,

OK, let's say I care more about high-availability than I do about throughput. To that end, I install two switches in each of my data centers. Is there a way to use Dynamic Multimode VIFs to survive what I call "Lights are on; no one is home"? Meaning, the cases in which the Sup card fries (or a hapless administrator assigns one of the interfaces feeding the filer to the wrong VLAN). Link stays up ... but the relevant filer ports are 'isolated' from one another.

From the bottom of p. 128 in http://now.netapp.com/NOW/knowledge/docs/ontap/rel732/pdfs/ontap/nag.pdf: "Dynamic multimode vifs can detect not only the loss of link status (as do static multimode vifs), but also a loss of data flow. This feature makes dynamic multimode vifs compatible with high-availability environments."

In the example below, e0a + e1b belong to one Dynamic Multimode VIF, while e0b and e4a belong to a second Dynamic Multimode VIF. If the Sup card in c6k-a-switch fries, then I would predict that LACP Hellos would quit flowing from the switch to the filer, and that Ontap would deactivate vif1. [And if a hapless administrator assigns Port1 to the wrong VLAN, then ... hmm, I don't think LACP would detect this problem, and bad things would happen.]

[flattened ASCII diagram: head-a's vif1 (e0a, e1b) connects via Port1/Port2 and vif2 (e0b, e4a) via Port3/Port4; cat6k-a-switch and cat6k-b-switch are interconnected and uplink through router-a and router-b to the corporate network]

As an aside, I've tried tackling this problem using Single-Mode VIFs, without success. In this scenario, if I assign, say, Port1 to the wrong VLAN, svif1 stays up and Ontap continues trying to use e0a. If I simulate a Sup card frying (by rebooting cat6k-a-switch), the results are intermittent -- I believe that Ontap deactivates individual NICs when link goes down ... but of course, during most of the reboot, link is up, but the switch isn't forwarding frames ["Lights are on; but no one is home"], and Ontap doesn't realize this and tries to use NICs plugged into Ports 1 & 2, with unpleasant results.


[flattened ASCII diagram: the same topology using single-mode VIFs svif1 and svif2 over e0a/e1b and e0b/e4a, cross-connected to Ports 1-4 on cat6k-a-switch and cat6k-b-switch, which uplink through router-a and router-b to the corporate network]

At any rate, I have two questions:

(a) From the point of view of High-Availability, would you buy my story around using Dynamic Multimode VIFs, as above? Or would you suggest I investigate another approach?

(b) What does the "can detect not only loss of link status ... but also a loss of data flow" sentence mean? I don't see how LACP helps anybody detect loss of packets flowing across a link. Perhaps this means something like "If you administratively remove a switch port from the aggregate, LACP will detect the change in the channel definition and remove that path from the filer's view of the aggregate."? Or does Ontap track the packet receive counter on each NIC and use changes in the behavior of that counter to influence whether or not it deactivates/activates a NIC?

--sk

Stuart Kendrick

Nov 10, 2009 2:12 PM Trey Layton in response to stkendrick

Stuart

Thanks for the detail. In your first example with two dynamic multimode VIFs, you would actually roll these two dynamic multimode VIFs into what we call at NetApp a second-level single-mode VIF. You would then prefer one of the dynamic multimode VIFs over the other. This would render one of the dynamic multimode VIFs offline until all the interfaces in the preferred VIF went offline. If such a case were to happen, then all IP and MAC information would float to the secondary (standby) dynamic multimode VIF. Many customers run in this configuration today, but they don't like the idea of having one multimode VIF inactive. To that end, many network manufacturers have created switch clustering technologies which allow an etherchannel to be actively spanned across two physical chassis. In the case of your 6500 this is called Multi-Chassis EtherChannel and has some hardware requirements for Supervisors and linecards. I have written an article on these various technologies and have provided that link below. You would need to use this type of technology on the switch side to achieve an all-active interface scenario which spans multiple switches. If you are not able to deploy these diverse switch-spanning etherchannel technologies then you must deploy the two VIFs in an active and passive state.
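As a sketch of what this second-level single-mode arrangement looks like in 7-mode vif syntax (the VIF names here are illustrative; "vif favor" marks the preferred first-level VIF, matching Stuart's interface layout of e0a/e1b and e0b/e4a):

```
vif create lacp dmm-a -b ip e0a e1b
vif create lacp dmm-b -b ip e0b e4a
vif create single svif-top dmm-a dmm-b
vif favor dmm-a
```

With this in place, dmm-b sits idle until every link in dmm-a fails, at which point the IP and MAC information floats to dmm-b.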

With regard to our description in the Network Administration Guide for ONTAP about how Dynamic MultiMode VIFs are able to in effect sense inactivity, this is unfortunately not some technology that NetApp has invented and is unique to NetApp, it is actually how the LACP standard is written and the features which have been provided with this 802.3 specification.


That is, the standard states that binding or aggregating links may be executed under manual control through direct administrator manipulation. Additionally, automatic determination, configuration, binding and monitoring may occur through the use of the Link Aggregation Control Protocol. LACP uses peer exchanges across the bound links to determine, on an ongoing basis, the aggregation capability of the various links, and continuously provides the maximum level of aggregation capability achievable between a given pair of systems.

We spend a lot of time talking about static versus dynamic. Static is administratively bound or aggregated interfaces, and the only means to determine failure is loss of link. Dynamic is LACP, and the means to determine failure extend beyond link loss through peer exchange. This is why every reference in the site states that when you can use LACP, do so, because it has many more mechanisms to determine failure. Loss of link is the most absolute form of failure, but we all know that links don't always just lose link. They sometimes half fail, and the automatic binding mechanism in LACP has been developed to leverage those features. ONTAP does not use peer exchanges for configuration binding but does for all the other variables mentioned above.

I encourage you to read this link regarding the diverse switch ether-channeling technologies.

I hope this helps in your deployment efforts, Trey

Link I mentioned above: http://communities.netapp.com/blogs/ethernetstorageguy/2009/09/23/virtual-port-channelsvpc-cross-stack-etherchannels-multichassis-etherchannels-mec--what-does-it-all-mean-andcan-my-netapp-controller-use-them

Dec 16, 2009 2:27 AM Verky Yi

Hi Trey, thanks for your article; I've learned a lot from it. But I'd still like to ask you a question about the vif.


Can I create a vif with interfaces from different slots? For example, e0a and e0b from the onboard interfaces, where e1a and e2b are from expansion slots. Thanks!

Dec 17, 2009 11:42 AM Trey Layton in response to Verky Yi

Verky

Yes, you can mix interfaces from different cards. There has been some confusion on this in the past, and the reason is that the default settings for an expansion card may differ from the onboard interfaces. An LACP vif will check for settings to match, for flowcontrol etc. If they do not match, the switch may isolate one of the ports, causing a conclusion to be drawn that mixing is not supported. Many times the isolation is forced on the switch, and someone may not have looked at the logs of the switch to understand why isolation was performed.

So if you are mixing interfaces and you experience any problems when configuring the vif where certain links are forced down, then you likely have an interface which doesn't match the config of the others in the channel, either on the NetApp controller or on the switch itself.

If all the interface configuration parameters match on both the switch and the NetApp controller you should be fine. If you use the templates provided in this post you should be good to go. If anyone finds any problems let me know and I will be sure to identify them and post a resolution.

Trey

Jun 10, 2010 3:18 PM Jiri Franek in response to Trey Layton

In the real world, when you use CIFS, NFS and iSCSI at once (just imagine some VMware servers connected via NFS or iSCSI, with some virtual Microsoft-based servers and a CIFS or NFS NAS for users, all in one box), you usually need some traffic separation in an inexpensive, secure way, and that's the situation where some L2 stuff kicks in. So basically the configuration should be a little more complex than just plain IP aliases. On VIF1 you have to add tagged VLAN interfaces, and according to the 802.1Q tags you have to change the configuration of the switches. j.

Jun 10, 2010 11:39 PM dennisvisser

I was wondering how the mounts would look on a Linux Oracle server, with IP aliasing.

If I mount the same volume on the NetApp filer with the different IP addresses, would this do load balancing? (Or will it just take the last mount path?)

A small example of this would be very welcome.

Jun 11, 2010 3:49 PM Jiri Franek in response to dennisvisser

Are you asking about load-balancing an L7 protocol with an L3 trick?

Jun 14, 2010 7:53 AM AdvUni-MD

Hi,

you're saying that round robin produces errors because it may be possible that packets arrive out-of-order.

However, the TCP protocol explicitly handles such cases (via a large enough TCP receive window) so it shouldn't be a problem. Are you suggesting that the IP stacks in common server OSs don't handle the fragmentation correctly? Or where *exactly* is the problem with round robin?


Thanks!

Jul 2, 2010 2:07 AM dennisvisser in response to Jiri Franek

In the example of VMware: when I asked the question, I thought it would be possible to mount one datastore with more IP addresses, and that the host would be smart enough to do the load balancing between them. After rethinking this, it is clear that this will never work. To have load balancing work, I just need to mount different datastores with different IP addresses.

Aug 13, 2010 7:28 AM kleber.silva in response to dennisvisser

The comment provided by dennisvisser was very helpful: "To have load balancing work I just need to mount different datastores with different IP addresses." When I read the excellent original post from Trey (and the same on TR-3749), I was confused by his statement, but now it is clear: "If you have more datastores than IP addresses then simply distribute the datastore mounts evenly across the IP addresses on the Controller". Actually, I did not pay attention to the word MORE! It means that if you have only one datastore, this procedure will not work. I noticed this on the fly when I tried to mount the same datastore on different ESX hosts; this is because VMware ties the datastore to its UUID.
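On an ESX 4.x host, the distribution described above might look like this (the volume names, datastore labels, and controller alias IPs here are hypothetical, for illustration only):

```
esxcfg-nas -a -o 10.17.10.65 -s /vol/datastore1 DS1
esxcfg-nas -a -o 10.17.10.66 -s /vol/datastore2 DS2
```

Each datastore is mounted via a different controller IP, so the channel hash can place the two NFS flows on different physical links.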

Regards, Kleber

Aug 16, 2010 10:58 AM Michael Cope in response to Trey Layton

I have a question from a partner about using LACP in multi-level VIFs. The MAN pages for the vif command state that multi-level LACP vifs are not supported. The assumption is this means you cannot create an LACP vif and then place it in a second-level VIF also configured to use LACP. What we are uncertain about is whether it means you can't create two LACP VIFs and then bind them into a second-level single-mode VIF. The examples in section 3.6 of TR-3749 seem to indicate the opposite will work - creating two VIFs using LACP and then binding them into a second-level VIF. Any feedback would be helpful, especially if someone can show that LACP can be used at one level in a multi-level VIF, just not both levels or not at all.

Thanx Michael

Aug 17, 2010 9:49 AM stkendrick in response to Michael Cope

Generally, we bind LACP VIFs into a second level single mode VIF:

vif create lacp dmmvif1 -b ip e0a
vif create lacp dmmvif2 -b ip e0b
vif create lacp dmmvif3 -b ip e0c
vif create lacp dmmvif4 -b ip e0d
vif create single svif1 dmmvif1 dmmvif2
vif create single svif2 dmmvif3 dmmvif4

Diagram -- http://www.skendric.com/philosophy/Toast-Ethernet-IP.pdf
Verbiage (search for NetApp) -- http://www.skendric.com/philosophy/Configure-HA-Servers-in-Data-Centers.pdf

--sk

Feb 17, 2011 4:09 PM bever

Trey,


Is there any particular reason why you disable CDP on the switches in your recommended config?

My customer likes to have it turned on.

Tom

Feb 18, 2011 10:40 AM Henry PAN

Cool:>) Love it & this will save my life!

Feb 25, 2011 6:29 AM sut-reseau

Fantastic article that comes just a while after I sorted it all out by myself. I just want to add my little part to this. I also added VLAN trunking to the VIF so I can pass multiple VLANs through the VIF.

On the SAN:

sanqc2-001> vlan create VIF1 3 185
(creates two VLANs (3 and 185) on virtual interface VIF1; VIF1 will do VLAN tagging)

sanqc2-001> ifconfig VIF1-3 192.168.0.93 netmask 255.255.255.0
(VLAN 3: configure IP address)

sanqc2-001> ifconfig VIF1-3 alias 192.168.0.92 netmask 255.255.255.0
(VLAN 3: configure alias IP)

sanqc2-001> ifconfig VIF1-185 10.11.185.93 netmask 255.255.255.0
(VLAN 185: configure IP address)

sanqc2-001> ifconfig VIF1-185 alias 10.11.185.92 netmask 255.255.255.0
(VLAN 185: configure alias IP)


On the Cisco switch:

interface Port-channel2
 description NetApp VIF1
 switchport mode trunk
(trunk mode gives the logical port the ability to pass multiple VLANs through the interface)

interface Gi1/0/17
 description san001-e0a
 switchport mode trunk
 channel-group 2 mode active
 spanning-tree portfast

Of course, there are other configuration options that might be added to this config, but basically, this will do the trunking we wished to achieve.

Mar 8, 2011 12:55 AM miststech in response to sut-reseau

Trey,

Sorry to rake up an old post. We are using Cisco 3750s in a stack, so the devices appear as one. Do we still need to create the virtual channel on the Cisco side? The multimode vif has been created on the NSD.

May 4, 2011 9:47 AM dwshort

Thanks for putting this together Trey. It's cleared up some things for me coming from a network background. Thanks again.

May 12, 2011 12:00 PM josh.miller in response to miststech


I use a single LACP etherchannel to my 3750-X stack, with the connections balanced across the stack in case of a switch failure. I use secondary IPs on each interface so I can load balance as needed.

May 19, 2011 2:08 PM bshep8384

Thanks for the article Trey. I am a SysAdmin trying to figure out a network problem, please excuse my ignorance. We have a similar setup to yours. Are you seeing any network retransmits with your NFS datastores and Netapp filers? We are noticing a little more than 10% of the total traffic represented by TCP ACKed lost segment, TCP Out-of-Order and TCP Dup ACK (InfiniStream nomenclature) all without dropping packets. This is only happening over our NFS VIF and occurs with all of our VMWare ESX hosts. Is this normal?

Here is our config:

Cisco 6509 portchannel:

interface Port-channel65
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 681,682,696
 switchport mode trunk
 flowcontrol receive on
 flowcontrol send on
 spanning-tree portfast edge

Physical port:


interface TenGigabitEthernet1/3/11
 switchport
 switchport trunk allowed vlan 681,682,696
 switchport mode trunk
 flowcontrol send on
 spanning-tree portfast edge
 channel-group 65 mode on

Netapp RC:

vif create multi eth-trunk -b ip e4a e4b
vlan create eth-trunk 681 682 696
#configure vlan interface 681 for nfs, 10.90.32.0/22
ifconfig eth-trunk-681 `hostname`-nfs netmask 255.255.252.0 partner eth-trunk-681
#configure vlan interface 682 for cifs, 10.90.36.0/27
ifconfig eth-trunk-682 `hostname`-cifs netmask 255.255.255.224 partner eth-trunk-682
#configure vlan interface 696 for iscsi, 10.90.84.0/25
ifconfig eth-trunk-696 `hostname`-iscsi netmask 255.255.255.128 partner eth-trunk-696
#configure e0M interface, 10.90.37.0/24
ifconfig e0M `hostname`-e0M flowcontrol full netmask 255.255.255.0 partner e0M
route add default 10.90.36.1 1
routed on


I noticed on your config you use the following:

no cdp enable
spanning-tree guard loop
channel-protocol lacp

From what I have read guard loop is for layer 2 traffic. Is there a reason you have it in your config? Should we be using lacp for the channel-protocol? Is there a reason you disable cdp?

Thanks, Brian


Aug 2, 2011 8:09 AM pclayton99

Trey, et al.

vif is now deprecated in favor of the ifgrp command as of ONTAP 8.0.2.


Our equivalent version in the rc file to lash up four 10Gb links into one, working in conjunction with the Extreme Summit stack switches in place:

ifgrp create multi nodename -b mac e1a e1b e2a e2b

Might change over to the IP option if we see load balancing not taking place across the four cables.

The next challenging part is how to do VLAN tags over this resulting trunk with Extreme switches on the other end of the pipe. I've been trying to get that working for a while now and have completely failed so far. Both sides say they are happy, yet the NetApp can't find the gateway to send traffic out to the rest of the network. And systems can reach the filer.

pdc

Aug 2, 2011 1:12 PM oscaraock

If I create a Dynamic Multi-mode VIF with 4 GbE interfaces, will I have a bandwidth of 4 Gbps?
