CCIE Data Center Full-Scale Labs: CCIE Data Center Full-Scale Lab 2
Diagram: /uploads/workbooks/images/diagrams/EOk1VbStsRSTXL3L0cMZ.png
Introduction
1. Data Center Infrastructure
2. Data Center Storage Networking
3. Unified Computing
4. Data Center Virtualization
Introduction
General Lab Guidelines
- You may not use any links that may physically be present but are not specifically pictured and labeled in this topology.
- Name and number all VLANs, port channels, SAN port channels, service profiles, templates, and so on exactly as described in this lab. Failure to do so will result in missed points for that task.
- You may not change any passwords on any devices unless explicitly directed to do so.
- You may not change any management IP addresses or default routes on any devices or VDCs unless explicitly directed to do so (you may add them if they do not exist, but you may not change existing ones).
- You may not disable telnet on any device. Telnet must work properly on all devices and VDCs.
- You may not log on to the 3750G switch for this particular lab. It is fully functional and pre-configured for you.
Table 1

VLAN   Name
140    OTV-SITE
150    BACKUP
200    DCI-ESXI
201    DCI-VMOTION
710    DC1-ISP-1
711    DC1-ISP-2
720    DC2-ISP-1
721    DC2-ISP-2
Trunk only previously created VLANs 120-135 and 200-201 southbound from N7K1 to both FIs.
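A minimal NX-OS sketch of these southbound trunks, assuming (hypothetically) that e1/17 and e1/18 on N7K1 face FI-A and FI-B; substitute the interfaces labeled in the diagram:

    interface Ethernet1/17-18
      description Southbound trunk to FI-A / FI-B
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 120-135,200-201
      no shutdown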
1.5 HSRP
Using information from Table 2, configure SVIs on N7K1 and N7K2 for all VLANs that are present on each switch. Assume that a second Nexus 7000 will be added to each Data Center, and with that in mind, provision HSRP for all SVIs at both sites as follows (a configuration sketch follows Table 2):

- Use the newest version of HSRP supported.
- Make HSRP group numbers correspond with their respective VLAN/SVI numbers.
- Use the virtual IP address of .254 for SVIs on both switches.
- Use the host IP address of .251 for each current SVI on N7K1 (.250 will be used in the future for the other HSRP member at DC1).
- Use the host IP address of .252 for each current SVI on N7K2 (.253 will be used in the future for the other HSRP member at DC2).
- These current SVIs should remain the primary HSRP group members even after the other N7K is put into service at each DC; ensure that these SVIs have a higher preference for being the Active forwarder, assuming the others come online with defaults.
- Have the SVIs for VLAN 200 use the fastest possible hello and hold timers.
Table 2

VLAN   Network          Mask             VRF
120    192.168.120.0    255.255.255.0
125    192.168.125.0    255.255.255.0
130    192.168.130.0    255.255.255.0
135    192.168.135.0    255.255.255.0
200    192.168.200.0    255.255.255.0
201    192.168.201.0    255.255.255.0
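As referenced above, a sketch of what the VLAN 200 SVI could look like on N7K1, assuming HSRP version 2 is the newest version supported and that 250/750 ms are the minimum millisecond timers on this platform; the priority of 110 is simply any value above the default of 100. The other SVIs follow the same pattern with their Table 2 addresses, without the aggressive timers:

    feature interface-vlan
    feature hsrp

    interface Vlan200
      ip address 192.168.200.251/24
      no shutdown
      hsrp version 2
      hsrp 200
        preempt
        priority 110
        timers msec 250 msec 750
        ip 192.168.200.254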
1.6 vPC
Configure vPC between N5K1 and N5K2 as follows (a configuration sketch is shown below):

- Use vPC domain ID 12.
- Configure the peer-link as an LACP trunk over ports e1/1-2 on Port Channel 512 between N5K1 and N5K2, according to the diagram.
- Ensure that any vPC numbers correspond with their designated port channel numbers, as listed in the tasks that follow.
- You are not permitted to create any additional links that are not explicitly pictured in the diagram.
- Ensure that N5K1 is the root for all STP instances; however, you may not configure any spanning-tree priority or root commands globally or at the interface level on N5K1.
- Ensure that N5K1 holds the primary role for the vPC domain.
- Ensure that N5K1 always decides which links are active in any port channel.
- Synchronize all ARP tables.
- Ensure that if our SAN were an EMC VPLEX or VMAX using IP technologies, vPC would not cause any problems with forwarding frames.
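The sketch below shows the N5K1 side only; the mgmt0 peer-keepalive addresses are assumptions, and N5K2 would mirror this with default role and LACP priorities. Meeting the STP-root requirement without priority or root commands on N5K1 itself is commonly handled by adjusting priorities on the other switches instead.

    feature lacp
    feature vpc

    vpc domain 12
      role priority 1                                              ! N5K1 becomes vPC primary
      peer-keepalive destination 192.168.0.12 source 192.168.0.11  ! assumed mgmt0 addresses
      peer-gateway                                                 ! forward frames sent to the peer's router MAC (VPLEX/VMAX case)
      ip arp synchronize                                           ! synchronize ARP tables between the peers

    lacp system-priority 1                                         ! N5K1 decides which links are active in LACP bundles

    interface port-channel512
      switchport mode trunk
      vpc peer-link

    interface Ethernet1/1-2
      switchport mode trunk
      channel-group 512 mode active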
1.10 OTV
Extend only previously created VLANs 120-135 and 200-201 between Data Centers using OTV (a partial configuration sketch is shown below):

- Use the OTV site VLAN of 140 on both sides of the DCI. You may use whatever site identifiers you prefer.
- The ISP supports SSM and ASM, and for ASM it provides a PIM RP of 10.10.10.25; use this as your only RP.
- OTV should be authenticated using a hashed value derived from the word "DCIOTV".
- Any of the SVIs on N7K1 or N7K2 for the VLANs that are extended across the DCI should be able to ping each other.
- Prevent HSRP groups at DC1 from becoming active/standby members of the same HSRP group numbers at DC2, and vice versa.
- Prevent any device ARPing at either DC from learning the virtual MAC address of the HSRP group from the 7K at the opposite side of the DCI.
- When finished, both N7K1 and N7K2 should be able to ping the actual host IP address of the SVI at the opposite data center, traversing the overlay.
- N7K1 and N7K2 should each also be able to ping the virtual IP address of .254, which should keep traffic local to the site from which the ping originates.
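A partial sketch for the DC1-side N7K; the join interface, site identifier, and multicast groups are assumptions (the ASM control group relies on the provided RP of 10.10.10.25, and the data group sits in the SSM range). The HSRP isolation and ARP filtering requirements are not shown here.

    feature otv

    key chain OTV-KEY
      key 1
        key-string DCIOTV

    otv site-vlan 140
    otv site-identifier 0x1                     ! any unique value per site

    interface Overlay1
      otv join-interface Ethernet1/10           ! assumed uplink toward the ISP
      otv control-group 239.1.1.1               ! ASM group, serviced by RP 10.10.10.25
      otv data-group 232.1.1.0/28               ! SSM range for extended-VLAN data traffic
      otv extend-vlan 120-135,200-201
      otv isis authentication-check
      otv isis authentication type md5
      otv isis authentication key-chain OTV-KEY
      no shutdown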
2.5 FCIP
Configure FCIP between MDS1 and MDS2 on interfaces G1/1 and G1/2 on each switch (a configuration sketch is shown below):

- Use the IP address of 12.12.12.1/30 on MDS1 G1/1 and 12.12.12.2/30 on MDS2 G1/1, over FCIP Profile 10 and interface FCIP 10 on both sides.
- Use the IP address of 12.12.12.5/30 on MDS1 G1/2 and 12.12.12.6/30 on MDS2 G1/2, over FCIP Profile 20 and interface FCIP 20 on both sides.
- The 3750G switch is already configured properly; do not connect to it at all.
- Configure SAN Port Channel 50 over both of these links and trunk only VSAN 10 and VSAN 20 over it.
- Optimize FCIP on MDS1 and MDS2 for optimum TCP window scaling based on the approximate actual RTT (a variance of up to 20% is allowed).
- Allow FCIP to monitor the congestion window and increase the burst size to the maximum allowed.
- Ensure that there is no fragmentation of FCIP packets over the link.
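A sketch of the MDS1 side for the first link; the bandwidth and RTT values below are placeholders that would be replaced with the measured figures for the link, and Profile/FCIP 20 on G1/2 follows the same pattern. On the MDS platform the SAN port channel is configured as interface port-channel 50.

    feature fcip

    interface GigabitEthernet1/1
      ip address 12.12.12.1 255.255.255.252
      no shutdown

    fcip profile 10
      ip address 12.12.12.1
      tcp max-bandwidth-mbps 1000 min-available-bandwidth-mbps 800 round-trip-time-ms 10
      tcp cwm burstsize 100                   ! congestion window monitoring, maximum burst size
      tcp pmtu-enable                         ! path-MTU discovery to avoid fragmentation

    interface fcip 10
      use-profile 10
      peer-info ipaddr 12.12.12.2
      channel-group 50
      no shutdown

    interface port-channel 50
      switchport mode E
      switchport trunk allowed vsan 10
      switchport trunk allowed vsan add 20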
2.6 Zoning
- Ensure that MDS1 appears to the fabric as domain 0x61 for VSAN 10 and 20.
- Ensure that MDS2 appears to the fabric as domain 0x62 for VSAN 10 and 20.
- Ensure that N5K2 appears to the fabric as domain 0x52 for VSAN 10 and 20.
- Ensure that N5K1 appears to the fabric as domain 0x51 for VSAN 10 and 20.
- Zone according to the following information. You may only make zoning changes for both Fabric A and Fabric B from MDS1.
- According to the information given in Table 3, zone so that "ESXi1", "ESXi2", and "ESXi3" all have access to their FC-TARGET-SAN-x for the appropriate Fabrics (fc0 vHBAs to Fabric A; fc1 vHBAs to Fabric B).
- Fabric A uses VSAN 10. Fabric B uses VSAN 20.
- Zoning for Fabric A should use the zone name "ZONE-A". Zoning for Fabric B should use the zone name "ZONE-B".
- The zoneset for Fabric A should be named "ZoneSet_VSAN10". The zoneset for Fabric B should be named "ZoneSet_VSAN20".
- Aliases must be created according to Table 3 and must be used in the zoning configuration. (A configuration sketch follows Table 3.)
Note:
Many pWWNs below are the same. They are sorted first by FC-4 Type and then by Fabric.
Table 3

Fabric  pWWN                     LUN  Description         Alias            FC-4 Type
A       20:aa:00:25:b5:01:01:01  N/A  ESXi1 vHBA "fc0"    ESXi1-A-fc0      Init
A       20:aa:00:25:b5:01:01:02  N/A  ESXi2 vHBA "fc0"    ESXi2-A-fc0      Init
A       20:00:d4:8c:b5:bd:46:0e  N/A  ESXi3 vHBA "fc0"    ESXi3-A-fc0      Init
B       20:bb:00:25:b5:01:01:01  N/A  ESXi1 vHBA "fc1"    ESXi1-B-fc1      Init
B       20:bb:00:25:b5:01:01:02  N/A  ESXi2 vHBA "fc1"    ESXi2-B-fc1      Init
B       20:00:d4:8c:b5:bd:46:0f  N/A  ESXi3 vHBA "fc1"    ESXi3-B-fc1      Init
A       21:03:00:1b:32:64:5e:dc       ESXi1 Boot Volume   FC-TARGET-SAN-A  Target
A       21:03:00:1b:32:64:5e:dc       ESXi2 Boot Volume   FC-TARGET-SAN-A  Target
A       21:03:00:1b:32:64:5e:dc       FC_Datastore 1      FC-TARGET-SAN-A  Target
A       21:03:00:1b:32:64:5e:dc       FC_Datastore 2      FC-TARGET-SAN-A  Target
B       21:01:00:1b:32:24:5e:dc       ESXi1 Boot Volume   FC-TARGET-SAN-B  Target
B       21:01:00:1b:32:24:5e:dc       ESXi2 Boot Volume   FC-TARGET-SAN-B  Target
B       21:01:00:1b:32:24:5e:dc       FC_Datastore 1      FC-TARGET-SAN-B  Target
B       21:01:00:1b:32:24:5e:dc       FC_Datastore 2      FC-TARGET-SAN-B  Target
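A sketch of the Fabric A (VSAN 10) side as it might be entered on MDS1; Fabric B mirrors this in VSAN 20 with the "-B" aliases, ZONE-B, and ZoneSet_VSAN20. The domain IDs are entered in decimal (0x61 = 97, 0x62 = 98, 0x51 = 81, 0x52 = 82); the fcdomain commands for the other switches are entered locally on each of them, while the zoning itself is distributed from MDS1 when the zoneset is activated.

    fcdomain domain 97 static vsan 10           ! 0x61 on MDS1
    fcdomain domain 97 static vsan 20

    fcalias name ESXi1-A-fc0 vsan 10
      member pwwn 20:aa:00:25:b5:01:01:01
    fcalias name ESXi2-A-fc0 vsan 10
      member pwwn 20:aa:00:25:b5:01:01:02
    fcalias name ESXi3-A-fc0 vsan 10
      member pwwn 20:00:d4:8c:b5:bd:46:0e
    fcalias name FC-TARGET-SAN-A vsan 10
      member pwwn 21:03:00:1b:32:64:5e:dc

    zone name ZONE-A vsan 10
      member fcalias ESXi1-A-fc0
      member fcalias ESXi2-A-fc0
      member fcalias ESXi3-A-fc0
      member fcalias FC-TARGET-SAN-A

    zoneset name ZoneSet_VSAN10 vsan 10
      member ZONE-A

    zoneset activate name ZoneSet_VSAN10 vsan 10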
3. Unified Computing
3.1 UCS Initialization
Initialize both UCS Fabric Interconnects (FIs). Fabric Interconnect A should use the IP address of 192.168.101.201/24. Fabric Interconnect B should use the IP address of 192.168.101.202/24. Both Fabric Interconnects should use a VIP of 192.168.101.200.
3.5 Pools
- Create a UUID pool called "Global-UUIDs" and allocate suffixes from the range 0001-000000000101 to 0001-00000000010f.
- Create a MAC address pool called "Global-MACs" ranging from 00:25:b5:0a:0a:01 to 00:25:b5:0a:0a:11.
- Create an nWWN pool called "Global-nWWNs" ranging from 20:ff:00:25:b5:01:01:01 to 20:ff:00:25:b5:01:01:11.
- Create a Management IP address pool ranging from 192.168.101.210 to 192.168.101.219, with the default gateway of 192.168.101.1.
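A hedged UCS Manager CLI sketch of these pools; the exact argument forms (particularly the wwn-pool purpose keyword and the ext-mgmt block arguments) vary slightly between UCSM versions, so treat this as an outline rather than exact syntax:

    scope org /
      create uuid-suffix-pool Global-UUIDs
        create block 0001-000000000101 0001-00000000010f
        exit
      exit
      create mac-pool Global-MACs
        create block 00:25:b5:0a:0a:01 00:25:b5:0a:0a:11
        exit
      exit
      create wwn-pool Global-nWWNs node-wwn-assignment
        create block 20:ff:00:25:b5:01:01:01 20:ff:00:25:b5:01:01:11
        exit
      exit
      scope ip-pool ext-mgmt
        create block 192.168.101.210 192.168.101.219 192.168.101.1 255.255.255.0
    commit-buffer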
- UUIDs should be dynamically allocated from the Global-UUIDs pool.
- 2 vHBAs should be created with the following information:
  - Name them "fc0" and "fc1".
  - "fc0" must be assigned the initiator pWWN of 20:aa:00:25:b5:01:01:01.
  - "fc1" must be assigned the initiator pWWN of 20:bb:00:25:b5:01:01:01.
  - Both vHBAs must be able to dynamically obtain nWWNs from the Global-nWWNs pool.
  - Neither of these vHBAs should be allowed to re-attempt FLOGIs more than 3 times.
- Configure a specific boot policy to boot from SAN with the following information:
  - "fc0" should attempt first to boot from Fabric A using the pWWN for "ESXi1 Boot Volume" in Table 3.
  - "fc1" should attempt first to boot from Fabric B using the pWWN for "ESXi1 Boot Volume" in Table 3.
- 5 vNICs should be created with the following information:
  - Name them "eth0", "eth1", "eth2", "eth3", and "eth4".
  - "eth0" and "eth3" should only be allowed to ever use Fabric A.
  - "eth1" and "eth4" should only be allowed to ever use Fabric B.
  - "eth2" primarily uses Fabric A, but should automatically use Fabric B if all uplinks on FI-A are down.
  - MAC addresses must be allocated dynamically from the Global-MACs pool.
  - All VLANs should be allowed on all vNICs except for VLAN 1 and VLAN 150; these should not be allowed on any vNICs. All hosts will explicitly tag their VLAN IDs.
- Any changes to the service profile requiring a reboot should force the administrator to manually allow them.
- Any service profile created from this template should not automatically associate with any blades in the chassis.
- Only allow this service profile to ever associate with blades that have a Palo mezzanine adapter.
- Do not allow the blade to automatically boot after this service profile is associated.
- Ensure that when booting, the KVM console viewer can see the FC disk that attaches directly after the FC drivers load.
- Configure the management IP addresses to be dynamically assigned from the global pool.
- Manually associate this profile with blade 1 and boot the blade.