Topic Notes: Troubleshooting Switches

Using a Layered Approach


The most effective way to troubleshoot is with a systematic approach using a layered model, such as the Open Systems Interconnection (OSI) model or the TCP/IP model. Three methods that are commonly used to troubleshoot are bottom up, top down, and divide and conquer. Each method has its advantages and disadvantages. Switches operate at Layer 1 of the OSI model, providing an interface to the physical media, and at Layer 2, switching frames based on MAC addresses. Therefore, problems are generally seen at Layer 1 and Layer 2. Some Layer 3 issues can also arise, involving IP connectivity to the switch for management purposes.

Identifying and Resolving Common Port Issues


Media-related issues may be reported as an access issue. For example, the user may say that they cannot access the network. Duplex-related issues result from a mismatch in duplex settings. Speed-related issues result from a mismatch in speed settings.

Use the show interface command to verify the duplex settings.
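For example, on a Catalyst switch running Cisco IOS Software, the negotiated duplex and speed appear near the top of the interface output (the interface name and values here are illustrative):

```
SwitchX#show interfaces fastethernet0/1
FastEthernet0/1 is up, line protocol is up (connected)
  Hardware is Fast Ethernet, address is 0022.91c4.0e01 (bia 0022.91c4.0e01)
  <output omitted>
  Full-duplex, 100Mb/s, media type is 10/100BaseTX
```

A port reporting half duplex while its neighbor reports full duplex is the classic signature of a mismatch.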


A common issue with speed and duplex occurs when the duplex settings are mismatched between two switches, between a switch and a router, or between a switch and a workstation or server. A mismatch can occur when the speed and duplex are manually hard-coded, or it can result from autonegotiation issues between the two devices. A duplex mismatch is a situation in which the switch operates at full duplex and the connected device operates at half duplex, or the other way around. The result of a duplex mismatch is extremely slow performance, intermittent connectivity, and loss of connection. Other possible causes of data-link errors at full duplex are bad cables, a faulty switch port, or network interface card (NIC) software or hardware issues. If the mismatch occurs between two Cisco devices with Cisco Discovery Protocol enabled, you will see Cisco Discovery Protocol error messages on the console or in the logging buffer of both devices. Cisco Discovery Protocol is useful for detecting errors, as well as for gathering port and system statistics on nearby Cisco devices. Whenever there is a duplex mismatch (in this example, on the FastEthernet0/1 interface), these error messages are displayed on the switch consoles of Catalyst switches that run Cisco IOS Software:
%CDP-4-DUPLEX_MISMATCH: duplex mismatch discovered on FastEthernet0/1 (not half duplex)

Additionally, for switches with Cisco IOS Software, these messages appear for link up or down situations (in this example, on the FastEthernet0/1 interface):
%LINK-3-UPDOWN: Interface FastEthernet0/1, changed state to up %LINK-3-UPDOWN: Interface FastEthernet0/1, changed state to down
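When autonegotiation between two devices is unreliable, one common remedy is to hard-code the same speed and duplex on both ends of the link. A minimal sketch (the interface and values are illustrative):

```
SwitchX(config)#interface fastethernet0/1
SwitchX(config-if)#speed 100
SwitchX(config-if)#duplex full
```

Both ends must be configured the same way: either both autonegotiate or both are hard-coded. Hard-coding only one side disables negotiation on that side and frequently causes the very mismatch you are trying to avoid.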

Identifying and Resolving Media Issues


Media issues are common. It is a fact of life that wiring gets damaged. The following are some examples of everyday situations that can cause media issues:

In an environment using Category 3 wiring, the maintenance crew installs a new air conditioning system that introduces new EMI sources into the environment.

In an environment using Category 5 wiring, cabling is run too close to an elevator motor.

Poor cable management puts strain on RJ-45 connectors, causing one or more wires to break.

New applications change traffic patterns.

Something as simple as a user connecting a hub to a switch port in an office in order to connect a second PC can cause an increase in collisions. Damaged wiring and EMI commonly show up as excessive collisions and noise. Changes in traffic patterns and the installation of a hub show up as collisions and runt frames. These symptoms are best viewed using the show interface command. To display information about a specific Fast Ethernet interface, use the show interfaces fastethernet 0/0 command. The resulting output varies, depending on the network for which an interface has been configured. Key fields in the output include the following:

Interface and line protocol status: Indicates whether the interface hardware is currently active or whether an administrator has disabled it. If the interface is shown as "disabled," the device has received more than 5000 errors in a keepalive interval, which is 10 seconds by default. If the line protocol is shown as "down" or "administratively down," the software processes that manage the line protocol consider the interface unusable (because of unsuccessful keepalives), or an administrator has disabled the interface.

Input errors, including cyclic redundancy check (CRC) errors and framing errors: Total number of errors related to no buffer, runt, giant, CRC, frame, overrun, ignored, and abort issues. Other input-related errors can also increment the count, so this sum might not balance with the other counts. The overrun portion counts the number of times that the receiver hardware was unable to hand received data to a hardware buffer because the input rate exceeded the ability of the receiver to process the data.

Output errors: Sum of the errors that prevented the final transmission of datagrams out of the interface.

Collisions: Number of messages that are retransmitted because of an Ethernet collision, which is usually the result of an overextended LAN. LANs can become overextended when an Ethernet or transceiver cable is too long or when there are more than two repeaters between stations. Duplex mismatch is one of the most common reasons for collisions.

Interface resets: Number of times an interface has been completely reset.

Local EMI is commonly known as noise. There are four types of noise that are most significant to data networks:

Impulse noise, which is caused by voltage fluctuations or current spikes that are induced on the cabling.

Random (white) noise, which is generated by many sources, such as FM radio stations, police radios, building security, and avionics for automated landing.

Alien crosstalk, which is noise that is induced by other cables in the same pathway.

Near-end crosstalk, which is noise originating from crosstalk from other adjacent cables or noise from nearby electric cables, devices with large electric motors, or anything that includes a transmitter that is more powerful than a cell phone.

When you are troubleshooting issues that are related to excessive noise, three steps are suggested to help isolate and resolve the issue:

1. Use the show interface fastethernet EXEC command to determine the status of the Fast Ethernet interfaces of the device. The presence of many CRC errors but not many collisions is an indication of excessive noise.
2. Inspect the cables for damage.
3. If you are using 100BASE-TX, make sure that you are using Category 5 cabling.
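A noise problem typically stands out in the interface error counters. In output such as the following (the counter values are illustrative), a high CRC count with few collisions points to noise rather than to a collision problem:

```
SwitchX#show interfaces fastethernet0/1
<output omitted>
     2345 input errors, 2290 CRC, 55 frame, 0 overrun, 0 ignored
     0 output errors, 3 collisions, 1 interface resets
```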

Collision domain problems affect the local medium and disrupt communications to Layer 2 or Layer 3 infrastructure devices, local servers, or services. Collisions are normally a more significant problem on shared media than on switch ports. Average collision counts on shared media should generally be below 5 percent, although that number is conservative. Be sure that judgments are based on the average and not on a peak or spike in collisions.

Collision-based problems may often be traced back to a single source. It may be a bad cable to a single station, a bad uplink cable on a hub or port on a hub, or a link that is exposed to external electrical noise. A noise source near a cable or hub can cause collisions even when there is no apparent traffic to cause them. If collisions get worse in direct proportion to the level of traffic, if the amount of collisions approaches 100 percent, or if there is no good traffic at all, the cable system may have failed. When you are troubleshooting issues that are related to excessive collisions, three steps are suggested to help isolate and resolve the issue:

1. Use the show interface ethernet command to check the rate of collisions. The total number of collisions compared to the total number of output packets should be 0.1 percent or less.
2. Use a time domain reflectometer (TDR), a device that sends signals through a network medium to check cable continuity and other attributes, to find any unterminated Ethernet cables.
3. Look for a jabbering transceiver that is attached to a host. Jabber occurs when a device that is experiencing circuitry or logic failure continuously sends random (garbage) data. This issue might require host-by-host inspection or the use of a protocol analyzer.
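As a worked example of the 0.1 percent guideline (the counter values are illustrative), an interface that shows 120 collisions against 150,000 output packets has a collision rate of 120 / 150,000 = 0.08 percent, which is acceptable:

```
     150000 packets output, 18765432 bytes, 0 underruns
     0 output errors, 120 collisions, 0 interface resets
```

The same interface showing 2,000 collisions would have a rate of about 1.3 percent, well above the guideline, and would warrant further investigation.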

A late collision is a collision that occurs after the first 64 bytes (512 bits) of the frame have been transmitted. The most common cause of late collisions is that Ethernet cable segments are too long for the speed at which you are transmitting; a duplex mismatch can also produce them. When you are troubleshooting issues that are related to excessive late collisions, verify the cable segment lengths and check the interface counters. You can use the show interfaces command to check for late collision errors, as shown in the example below:
SwitchX#show interfaces fastethernet0/1
FastEthernet0/1 is up, line protocol is up (connected)
  Hardware is Fast Ethernet, address is 0022.91c4.0e01 (bia 0022.91c4.0e01)
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
     reliability 255/255, txload 1/255, rxload 1/255
<output omitted>
     0 output errors, 0 collisions, 1 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out

Identifying and Resolving Common Configuration Issues


Many things can be improperly configured on an interface, causing it to go down and resulting in a loss of connectivity with attached network segments. You should always know what you have before you start, in terms of device configuration, hardware, and topology. Once you have a working configuration, keep a copy of it. For example, keep both a hard copy and an electronic copy: a text file on a PC and a copy stored on a TFTP server. When you are making changes, before saving the running configuration, verify that the changes accomplished what you wanted and did not cause unexpected issues. Changes that are made by an unauthorized person, whether maliciously or not, can be disastrous. To ensure that you have secured the configuration, protect the console as well as the vty ports with strong passwords, and ensure that a strong password protects privileged EXEC mode.
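As a sketch of both practices (the passwords, host name, and TFTP server address here are hypothetical), the console and vty lines and privileged EXEC mode can be protected, and the working configuration archived to a TFTP server:

```
SwitchX(config)#enable secret Str0ng-Enable-Pass
SwitchX(config)#line console 0
SwitchX(config-line)#password Str0ng-Con-Pass
SwitchX(config-line)#login
SwitchX(config-line)#line vty 0 4
SwitchX(config-line)#password Str0ng-Vty-Pass
SwitchX(config-line)#login
SwitchX(config-line)#end
SwitchX#copy running-config tftp://192.168.1.10/switchx-confg
```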

Topic Notes: Wireless Transmissions, Standards, and Certification


Business Case for WLAN Services
Productivity is no longer restricted to a fixed work location or a defined time period. People now expect to be connected at any time and place, from the office to the airport or even the home. Traveling employees used to be restricted to pay phones for checking messages and returning a few phone calls between flights. Now employees can check email, voice mail, and the web status of products on personal digital assistants (PDAs) while walking to a flight.

Even at home, people have changed the way that they live and learn. The Internet has become a standard in homes, along with TV and phone service. Even the method of accessing the Internet has quickly moved from temporary modem dialup service to dedicated DSL or cable service, which is always connected and is faster than dialup. In 2005, users of PCs purchased more Wi-Fi-enabled mobile laptops than fixed-location desktops.

The most tangible benefit of wireless is the cost reduction. Two situations illustrate the cost savings. First, with a wireless infrastructure already in place, savings are realized when moving a person from one cubicle to another, reorganizing a lab, or moving from temporary locations or project sites. On average, the IT cost of moving an employee from one cubicle to another is $375. For the business case, we will assume that 15 percent of the staff is moved every year. The second situation to consider is when a company moves into a new building that does not have any wired infrastructure. In this case, the savings of wireless are even more noticeable, because running cables through walls, ceilings, and floors is a labor-intensive process.

Last but not least, another advantage of using a WLAN is the increase in employee satisfaction, which leads to less turnover and the cost savings of not hiring as many new employees. Employee satisfaction also results in better customer support, which cannot be easily quantified but is a major benefit.

Differences Between WLANs and LANs


In WLANs, radio frequencies are used as the physical layer of the network.

WLANs use Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) instead of Carrier Sense Multiple Access with Collision Detection (CSMA/CD), which is used by Ethernet LANs. Collision detection is not possible in WLANs, because a sending station cannot receive at the same time that it is transmitting and, therefore, cannot detect a collision. Instead, WLANs use the Request to Send (RTS) and Clear to Send (CTS) protocols to avoid collisions.

WLANs use a different frame format than wired Ethernet LANs. WLANs require additional information in the Layer 2 header of the frame.

Radio waves cause problems in WLANs that are not found in LANs:

Connectivity issues occur in WLANs because of coverage problems, RF transmission, multipath distortion, and interference from other wireless services or other WLANs. Privacy issues occur because radio frequencies can reach outside the facility.

In WLANs, mobile clients connect to the network through an access point, which is the equivalent of a wired Ethernet hub (although an access point also has some Layer 2 features, giving it certain characteristics of a switch):

Mobile clients do not have a physical connection to the network. Mobile devices are often battery-powered, as opposed to plugged-in LAN devices.

WLANs must meet country-specific RF regulations. The aim of standardization is to make WLANs available worldwide. Because WLANs use radio frequencies, they must follow country-specific regulations of RF power and frequencies. This requirement does not apply to wired LANs.

RF Transmission
Radio frequencies range from the AM radio band to frequencies used by cell phones. Radio frequencies are radiated into the air by antennas that create radio waves. When radio waves are propagated through objects, they may be absorbed (for instance, by walls) or reflected (for instance, by metal surfaces). This absorption and reflection may cause areas of low signal strength or low signal quality. The transmission of radio waves is influenced by the following factors:

Reflection: Occurs when RF waves bounce off objects (for example, metal or glass surfaces).

Scattering: Occurs when RF waves strike an uneven surface (for example, a rough surface) and are reflected in many directions.

Absorption: Occurs when RF waves are absorbed by objects (for example, walls).

The following rules apply for data transmission over radio waves:

Higher data rates have a shorter range, because the receiver requires a stronger signal with a better signal-to-noise ratio (SNR) to retrieve the information.

Higher transmit power results in a greater range. To double the range, the power has to be increased by a factor of 4.

Higher data rates require more bandwidth. Increased bandwidth is possible with higher frequencies or more-complex modulation.

Higher frequencies have a shorter transmission range because they suffer higher degradation and absorption. This problem can be addressed with more-efficient antennas.

Organizations That Define WLANs


Several organizations have stepped forward to establish WLAN standards, certifications, and multivendor interoperability.

Regulatory agencies control the use of the RF bands. With the opening of the 900-MHz industrial, scientific, and medical (ISM) band in 1985, the development of WLANs started. New transmissions, modulations, and frequencies must be approved by regulatory agencies, and a worldwide consensus is required. Regulatory agencies include the Federal Communications Commission (FCC) for the United States and the European Telecommunications Standards Institute (ETSI) for Europe.

The Institute of Electrical and Electronics Engineers (IEEE) defines standards. IEEE 802.11 is part of the 802 networking standardization process. IEEE 802.11 is a group of standards for WLAN computer communication in the 2.4-, 3.6-, and 5-GHz frequency bands. The first release was completed in 1997. You can download ratified standards from the IEEE website.

The Wi-Fi Alliance offers certification for interoperability between vendors of 802.11 products. Certification provides a comfort zone for purchasers of these products. It also helps market WLAN technology by promoting interoperability between vendors. Certification includes all three 802.11 RF technologies, as well as Wi-Fi Protected Access (WPA), a security model that the Wi-Fi Alliance released in 2003, based on the IEEE 802.11i security standard, which was ratified in 2004. The Wi-Fi Alliance promotes and influences WLAN standards. A list of certified products can be found on the Wi-Fi Alliance website.



There are three unlicensed RF bands: 900 MHz, 2.4 GHz, and 5 GHz. The 900-MHz and 2.4-GHz bands are referred to as the industrial, scientific, and medical (ISM) bands. The 5-GHz band is commonly referred to as the Unlicensed National Information Infrastructure (UNII) band. Frequencies for these bands are as follows:

900-MHz band: 902 to 928 MHz.

2.4-GHz band: 2.400 to 2.4835 GHz. (In Japan, this band extends to 2.495 GHz.)

5-GHz band: 5.150 to 5.350 GHz and 5.725 to 5.825 GHz, with some countries supporting middle bands between 5.350 and 5.725 GHz. Not all countries permit IEEE 802.11a, and the available spectrum varies widely. The list of countries that permit 802.11a is changing.

Next to the WLAN frequencies in the spectrum are other wireless services, such as cellular phones and narrowband Personal Communications Service (PCS). The frequencies that are used for WLANs are the ISM bands. A license is not required to operate wireless equipment on unlicensed frequency bands. However, no user has exclusive use of any frequency. For example, the 2.4-GHz band is used for WLANs, video transmitters, Bluetooth, microwave ovens, and portable phones. Unlicensed frequency bands offer best-effort use, and interference and degradation are possible.

Even though these three frequency bands do not require a license to operate equipment, they are still subject to local country code regulations. Countries regulate areas such as transmitter power, antenna gain (which increases the effective power), and the combination of transmitter power, cable loss, and antenna gain.

Note: The number of channels that are available and the transmission parameters are regulated by country regulations. Each country allocates radio spectrum channels to various services. Refer to the country regulations and product documentation for specific details for each regulatory domain.

Effective Isotropic Radiated Power (EIRP) is the final unit of measurement that is monitored by local regulatory agencies. EIRP is the radiated power from the device, including the antenna, cables, and other components of the WLAN system that are attached to it. By changing the antenna, cables, and transmitter power, the EIRP can change and exceed the allowed value. Therefore, use caution when replacing a component of wireless equipment; for example, when adding or upgrading an antenna to increase the range. The possible result could be a WLAN that is illegal under local codes.

EIRP = transmitter power + antenna gain - cable loss

Note: Only use antennas and cables that are supplied by the original manufacturer and listed for the specific access point implementation.
Only use qualified technicians who understand the many requirements of the RF regulatory codes for that country.
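As a worked example of the EIRP formula, assume a hypothetical system with a 100-mW (20-dBm) transmitter, 3 dB of cable loss, and a 6-dBi antenna:

EIRP = 20 dBm + 6 dBi - 3 dB = 23 dBm (about 200 mW)

If the local regulatory limit were 20 dBm EIRP, this combination would exceed it even though the transmitter alone is compliant, which is why swapping in a higher-gain antenna can make an otherwise legal installation illegal.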

IEEE 802.11 Standards Comparison


IEEE standards define the physical layer as well as the MAC sublayer of the data link layer of the Open Systems Interconnection (OSI) model. The original 802.11 wireless standard was completed in June 1997. It was amended in 1999 by IEEE 802.11a and 802.11b, in 2003 by IEEE 802.11g, and in 2009 by IEEE 802.11n.

By design, the standard does not address the upper layers of the OSI model.

IEEE 802.11b was defined using Direct Sequence Spread Spectrum (DSSS). DSSS uses just one channel that spreads the data across all frequencies that are defined by that channel. IEEE 802.11 divided the 2.4-GHz ISM band into 14 channels, but local regulatory agencies such as the FCC designate which channels are allowed; for example, channels 1 through 11 are allowed in the United States. Each channel in the 2.4-GHz ISM band is 22 MHz wide with 5-MHz separation, resulting in overlap with the channels before and after a given channel. Therefore, a separation of five channel numbers is needed to ensure unique nonoverlapping channels. For example, using the 11 FCC channels, there are three nonoverlapping channels: 1, 6, and 11.

Remember that wireless uses half-duplex communication, so the basic throughput is only about half of the data rate. Because of this limitation, the main development goal of IEEE 802.11b was to achieve higher data rates within the 2.4-GHz ISM band, thereby continuing to grow the Wi-Fi consumer market and encouraging consumer acceptance of Wi-Fi. IEEE 802.11b defined the usage of DSSS with a newer encoding or modulation, Complementary Code Keying (CCK), for higher data rates of 5.5 and 11 Mb/s, while retaining the coding for 1 and 2 Mb/s. IEEE 802.11b still uses the same 2.4-GHz ISM band as the prior 802.11 standard, making it backward-compatible with that standard and its associated data rates of 1 and 2 Mb/s.

The same year that the 802.11b standard was adopted, IEEE developed another standard, known as 802.11a. This standard was motivated by the goal of increasing data rates by using a different spread spectrum and modulation technology, orthogonal frequency-division multiplexing (OFDM), and by using the less-crowded 5-GHz UNII frequencies. The 2.4-GHz ISM band was widely used for all kinds of wireless devices, such as Bluetooth, cordless phones, monitors, video, and home gaming consoles.
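The channel overlap rule can be checked with simple arithmetic: in the 2.4-GHz band, channel n is centered at 2407 + (5 x n) MHz, so channel 1 is centered at 2412 MHz, channel 6 at 2437 MHz, and channel 11 at 2462 MHz. Because each channel is 22 MHz wide, two channels need roughly 25 MHz (five channel numbers) between their centers to avoid overlapping, which is exactly the spacing of channels 1, 6, and 11.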
The 802.11a standard was not as widely accepted, because the materials that were needed to manufacture chips supporting 802.11a were less readily available and initially resulted in higher cost. Most applications satisfied their requirements for wireless support by following the cheaper and more accessible 802.11b standard.

Continued development by IEEE kept the 802.11 MAC and obtained higher data rates in the 2.4-GHz ISM band. The IEEE 802.11g amendment uses the newer OFDM from 802.11a for higher speeds, yet is backward-compatible with 802.11b using DSSS, which was already using the same ISM frequency band. DSSS data rates of 1, 2, 5.5, and 11 Mb/s are supported, as are OFDM data rates of 6, 9, 12, 18, 24, 36, 48, and 54 Mb/s.

The most recent development by IEEE is the completed 802.11n standard, an upgrade to the 802.11 protocol. The project was a multiyear effort to standardize and upgrade the 802.11g standard. IEEE 802.11n provides a new set of capabilities that dramatically improve the reliability of communications, the predictability of coverage, and the overall throughput of devices. The 802.11n protocol has several enhancements in the physical layer and the MAC sublayer that provide exceptional benefits to wireless deployments. The four key features are as follows:

Multiple-input multiple-output (MIMO), which uses the diversity and duplication of signals across multiple transmit and receive antennas.

40-MHz operation, which bonds adjacent channels, combined with some of the reserved channel space between the two, to more than double the data rate.

Frame aggregation, which reduces the overhead of 802.11 by coalescing multiple packets together.

Backward compatibility, which makes it possible for 802.11a/b/g and 802.11n devices to coexist, allowing customers to phase in their access point and client migrations over time.

The 802.11n standard supports the 2.4- and 5-GHz frequency bands and adopted an OFDM modulation method. 20-MHz or 40-MHz bandwidth is supported; 20-MHz bandwidth is used for backward compatibility.

IEEE 802.11n continues the modulation evolution. IEEE 802.11n uses OFDM like the 802.11a and 802.11g standards. However, 802.11n increases the number of subcarriers in each 20-MHz channel from 48 to 52. IEEE 802.11n provides a selection of eight data rates for a transmitter, including a data rate using 64 quadrature amplitude modulation (QAM) with a rate-5/6 encoder. Together, these changes increase the data rate to a maximum of 72.2 Mb/s for a single-transmit radio. Via spatial division multiplexing, 802.11n also increases the number of transmitters allowed to four. With two transmitters, the maximum data rate is 144 Mb/s; three provide a maximum data rate of 216 Mb/s; and the maximum of four transmitters can deliver 288 Mb/s. When using 40-MHz channels, 802.11n increases the number of subcarriers available to 108. This provides maximum data rates of 150, 300, 450, and 600 Mb/s for one through four transmitters, respectively. The data rates depend on the OFDM mode of operation.

IEEE 802.11n has the ability to dramatically increase the capacity of a WLAN, the effective throughput of every client, and the reliability of the networking experience for the client.
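The rate scaling is simple multiplication: at 20 MHz, a single spatial stream tops out at 72.2 Mb/s, so two, three, and four streams yield approximately 2 x 72.2 = 144 Mb/s, 3 x 72.2 = 216 Mb/s, and 4 x 72.2 = 288 Mb/s (rounded). At 40 MHz, a single stream tops out at 150 Mb/s, so four streams yield 4 x 150 = 600 Mb/s.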

Wi-Fi Certification
Even after the 802.11 standards were established, there was a need to ensure interoperability among 802.11 products. The Wi-Fi Alliance is a global, nonprofit industry trade association that is devoted to promoting the growth and acceptance of WLANs. One of the primary benefits of the Wi-Fi Alliance is to ensure interoperability among 802.11 products that are offered by various vendors. The Wi-Fi Alliance provides a certification for each product as a proof of interoperability. Certified vendor interoperability provides a comfort zone for purchasers. Certification includes all three IEEE 802.11-RF technologies, as well as early adoption of pending IEEE drafts, such as one addressing security. The Wi-Fi Alliance adapted IEEE 802.11i draft security as WPA, and then revised it to Wi-Fi Protected Access 2 (WPA2) after the final release of IEEE 802.11i.

IEEE 802.11 Standards Comparison Chart

Here is a summary of the IEEE 802.11 standards comparison chart:

802.11b: Frequency band: 2.4 GHz. Nonoverlapping channels: 3. Transmission: DSSS. Data rates (Mb/s): 1, 2, 5.5, 11.

802.11a: Frequency band: 5 GHz. Nonoverlapping channels: up to 23. Transmission: OFDM. Data rates (Mb/s): 6, 9, 12, 18, 24, 36, 48, 54.

802.11g: Frequency band: 2.4 GHz. Nonoverlapping channels: 3. Transmission: DSSS/OFDM. Data rates (Mb/s): 1, 2, 5.5, 11 (DSSS) and 6, 9, 12, 18, 24, 36, 48, 54 (OFDM).

Course: Cisco ICND1 1.1: Implementing Wireless LANs Topic: Wireless Transmissions, Standards, and Certification

Topic Notes: WLAN Security


WLAN Security Threats
With the lower costs of IEEE 802.11b/g systems, it is inevitable that hackers will have many more unsecured WLANs from which to choose. Incidents have been reported of people using numerous open source applications to collect and exploit vulnerabilities in the IEEE 802.11 standard security mechanism, Wired Equivalent Privacy (WEP). Wireless sniffers enable network engineers to passively capture data packets so that the packets can be examined to correct system problems. These same sniffers can be used by hackers to exploit known security weaknesses.

"War driving" originally meant using a cellular scanning device to find cell phone numbers to exploit. War driving now also means driving around with a laptop and an 802.11b/g client card to find an 802.11b/g system to exploit.

Most wireless devices that are sold today are WLAN-ready. End users often do not change default settings, or they implement only standard WEP security, which is not optimal for securing wireless networks. With basic WEP encryption enabled (or, obviously, with no encryption enabled), it is possible to collect data and obtain sensitive network information, such as user login information, account numbers, and personal records.

A rogue access point is an unauthorized access point that is connected to the corporate network. If a rogue access point is programmed with the correct WEP key, client data could be captured. A rogue access point could also be configured to provide unauthorized users with information such as the MAC addresses of clients (wireless and wired), or to capture and spoof data packets. At worst, a rogue access point could be configured to gain access to servers and files. A simple and common version of a rogue access point is one installed by employees without authorization. Employee access points that are intended for home use and are configured without the necessary security can pose a security risk in the enterprise network.

Mitigating Security Threats


To secure a WLAN, the following steps are required:

Authentication, to ensure that legitimate clients and users access the network via trusted access points.

Encryption, to provide privacy and confidentiality.

Intrusion detection systems (IDSs) and intrusion prevention systems (IPSs), to protect against security risks and preserve availability.

The fundamental solution for wireless security is authentication and encryption to protect wireless data transmission. These two wireless security solutions can be implemented in degrees, and both apply to small office, home office (SOHO) wireless networks as well as to large enterprise wireless networks. Larger enterprise networks need the additional security that is offered by an IPS monitor. Current IPSs can not only detect wireless network attacks, but they also provide basic protection against unauthorized clients and access points. Many enterprise networks use IPSs for protection not primarily against outside threats, but mainly against unintentional access points that are installed by employees who desire the mobility and benefits of wireless.

Evolution of WLAN Security


Almost as soon as the first WLAN standards were established, hackers began trying to exploit weaknesses. To counter this threat, standards evolved to provide more security.

Initially, 802.11 security defined only 64-bit static WEP keys for both encryption and authentication. The 64-bit key contained the actual 40-bit key plus a 24-bit initialization vector. The authentication method was not strong. Open and shared-key authentication are supported. Open authentication allows association of any wireless client. Shared-key authentication allows authentication of selected wireless clients only, but the challenge text is sent unencrypted. This is the main reason that shared-key authentication is not secure. Another issue with WEP encryption is that the keys were eventually compromised. The keys were administered statically, and this method of security was not scalable to large enterprise environments. Companies tried to counteract this weakness with techniques such as MAC address filtering.

The SSID is a network-naming scheme and a configurable parameter that the client and the access point must share. If the access point is configured to broadcast its SSID, the client that associates with the access point uses the SSID that is advertised by the access point. An access point can be configured to not broadcast the SSID (called "SSID cloaking") to provide a first level of security. The belief was that if the access point did not advertise itself, it would be more difficult for hackers to find it. To allow the client to learn the access point SSID, 802.11 allows wireless clients to use a null value (that is, no value is entered in the SSID field), thereby requesting that the access point broadcast its SSID. However, this technique renders the security effort ineffective, because hackers need only send a null string until they find an access point.

Access points also supported filtering using MAC addresses. Tables are manually constructed on the access point to allow clients based on their physical hardware addresses. However, MAC addresses are easily spoofed, and MAC address filtering is not considered a security feature.

While 802.11 committees began the process of upgrading WLAN security, enterprise customers needed wireless security immediately to enable deployment. Driven by customer demand, Cisco introduced early proprietary enhancements to RC4-based WEP encryption. Cisco implemented Cisco Key Integrity Protocol (CKIP) per-packet keying or hashing, and Cisco Message Integrity Check (Cisco MIC), to protect WEP keys. Cisco also adapted 802.1X wired authentication protocols to wireless, with dynamic keys, using Cisco Lightweight Extensible Authentication Protocol (Cisco LEAP) against a centralized database. This approach is based on the IEEE 802.11 Task Group i end-to-end framework using 802.1X and the Extensible Authentication Protocol (EAP) to provide this enhanced functionality. Cisco has incorporated 802.1X and EAP into its WLAN solution: the Cisco Wireless Security Suite. Numerous EAP types are available today for user authentication over wired and wireless networks. Current EAP types include the following:

- EAP-Cisco Wireless (LEAP)
- EAP-Transport Layer Security (EAP-TLS)
- Protected EAP (PEAP)
- EAP-Tunneled TLS (EAP-TTLS)
- EAP-Subscriber Identity Module (EAP-SIM)

In the Cisco SAFE wireless architecture, LEAP, EAP-TLS, and PEAP were tested and documented as viable mutual authentication EAP protocols for WLAN deployments. Soon after the Cisco wireless security implementation, the Wi-Fi Alliance introduced Wi-Fi Protected Access (WPA) as an interim solution. WPA was a subset of the expected IEEE 802.11i security standard for WLANs, using 802.1X authentication and improvements to WEP encryption. The newer key-hashing TKIP provides protections similar to those of Cisco Key Integrity Protocol (CKIP) and Cisco MIC, but these security implementations are not compatible with one another. Today, 802.11i has been ratified, and the Advanced Encryption Standard (AES) has replaced WEP as the latest and most secure method of encrypting data. Wireless IDSs are available to identify attacks and protect the WLAN from them. The Wi-Fi Alliance certifies 802.11i devices under Wi-Fi Protected Access 2 (WPA2).

Wireless Client Association

In the client association process, access points send out beacons announcing one or more SSIDs, data rates, and other information. The client scans all of the channels and listens for beacons and responses from the access points. The client associates to the access point that has the strongest signal. If the signal becomes low, the client repeats the scan to associate with another access point. This process is called "roaming." During association, the SSID, MAC address, and security settings are sent from the client to the access point, and then checked by the access point. The association of a wireless client to a selected access point is actually the second step in a two-step process. First authentication, and then association, must occur before an 802.11 client can pass traffic through the access point to another host on the network. Client authentication in this initial process is not the same as network authentication (entering a username and password to gain access to the network). Client authentication is simply the first step (followed by association) between the wireless client and access point, and merely establishes communication. The 802.11 standard specifies only two methods of authentication: open authentication and shared-key authentication. Open authentication is simply the exchange of four hello-type packets with no client or access point verification, to allow ease of connectivity. Shared-key authentication uses a static WEP key that is known between the client and access point for verification. This same key may be used to encrypt the actual data passing between a wireless client and access point.

How 802.1X Works on WLANs


The access point, acting as the authenticator at the enterprise edge, allows the client to associate using open authentication. The access point then encapsulates any 802.1X traffic that is bound for the AAA (Authentication, Authorization, and Accounting) server and sends it to the server. All other network traffic is blocked, meaning that all other attempts to access network resources are stopped. Upon receiving AAA traffic that is bound for the client, the access point encapsulates it and sends the information to the client. Although the server authenticates the client as a valid network user, this process allows the client to validate the server as well, ensuring that the client is not logging into a fake server. While an enterprise network will use a centralized authentication server, smaller offices or businesses might simply use the access point as the authentication server for wireless clients.

WPA and WPA2 Modes


WPA provides authentication support via 802.1X and pre-shared key (PSK). 802.1X is recommended for enterprise deployments. WPA provides encryption support via TKIP. TKIP includes MIC and per-packet keying (PPK) via initialization vector hashing and broadcast key rotation.

WPA2 (standard 802.11i) uses the same authentication architecture, key distribution, and key renewal technique as WPA. However, WPA2 added better encryption, called AES-Counter with CBC-MAC Protocol (AES-CCMP). AES-CCMP combines two cryptographic techniques: counter mode and CBC-MAC. AES-CCMP provides a robust security protocol between the wireless client and the wireless access point. Note: AES is a cryptographic cipher that uses a block length of 128 bits and key lengths of 128, 192, or 256 bits. Counter mode is a mode of operation that uses a number, called the counter, that changes with each block of text encrypted. The counter is encrypted with the cipher, and the result goes into the ciphertext. Because the counter changes for each block, identical plaintext blocks do not produce repeated ciphertext. Cipher Block Chaining-Message Authentication Code (CBC-MAC) is a message integrity method that uses a block cipher such as AES. Each block of cleartext is XORed with the previous encrypted block, and the result is then encrypted with the cipher; the process repeats, chaining each block to the one before it, and the final block serves as the integrity check value.
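The CBC-MAC chaining can be sketched in a few lines. This is an illustration only: the `toy_cipher` stand-in (a keyed hash truncated to one block) replaces real AES and is not cryptographically equivalent; it exists purely to show the XOR-then-encrypt chaining structure.

```python
# Toy illustration of CBC-MAC chaining. A real AES-CCMP implementation
# uses AES-128 as the block cipher; the keyed-hash "cipher" below is an
# assumption made only to keep the sketch self-contained.
import hashlib

BLOCK = 16  # AES block size in bytes

def toy_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for AES block encryption (NOT cryptographically equivalent).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_cbc_mac(key: bytes, message: bytes) -> bytes:
    # Pad the message to a whole number of blocks with zero bytes.
    if len(message) % BLOCK:
        message += b"\x00" * (BLOCK - len(message) % BLOCK)
    mac = bytes(BLOCK)  # start from an all-zero block
    for i in range(0, len(message), BLOCK):
        # Chain: XOR the plaintext block into the running value, then encrypt.
        mac = toy_cipher(key, xor(mac, message[i:i + BLOCK]))
    return mac

tag1 = toy_cbc_mac(b"k", b"hello wireless world")
tag2 = toy_cbc_mac(b"k", b"hello wireless w0rld")
print(tag1 != tag2)  # a one-character change yields a different tag
```

Because every block feeds into the next, changing any bit of the message changes the final tag, which is what makes the construction useful as an integrity check.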

Enterprise Mode
"Enterprise mode" is a term that is used for products that are tested to be interoperable in both PSK and 802.1X Extensible Authentication Protocol (EAP) modes of operation for authentication. When 802.1X is used, an authentication, authorization, and accounting (AAA) server is required to perform authentication as well as key and user management. Enterprise mode is targeted to enterprise environments.

Personal Mode
"Personal mode" is a term that is used for products that are tested to be interoperable in the PSK-only mode of operation for authentication. It requires manual configuration of a PSK on the access point and clients. The PSK authenticates users via a password, or identifying code, on both the client station and the access point. No authentication server is needed. Personal mode is targeted to SOHO environments. Encryption is the process of transforming plaintext information to make it unreadable to anyone except those possessing the key. The algorithm that is used to encrypt information is called a cipher, and the result is called ciphertext. Encryption is now commonly used to protect information within WLAN implementations. Encryption is also used to protect data in transit. Data in transit might be intercepted, and encryption is one option for protection.
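Personal mode's key derivation can be shown concretely. WPA/WPA2-Personal derive the 256-bit pairwise master key from the passphrase and the SSID using PBKDF2-HMAC-SHA1 with 4096 iterations, which Python's standard library can compute directly; the passphrase and SSID below are made-up example values.

```python
# Sketch of WPA/WPA2-Personal PSK (pairwise master key) derivation:
# PBKDF2-HMAC-SHA1, SSID as salt, 4096 iterations, 256-bit output.
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

pmk = wpa_psk("correct horse battery", "HomeOffice")
print(len(pmk))  # 32 bytes = 256 bits
```

Note that the SSID acts as the salt: the same passphrase on a different SSID yields a different key, which is one reason attackers precompute dictionaries per common SSID.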

WEP keys were the first solution to encrypt and decrypt WLAN transmitted data. Several research papers and articles have highlighted the potential vulnerabilities of static WEP keys. An improvement to static WEP keys was dynamic WEP keys in combination with 802.1X authentication. However, hackers have ready access to tools for cracking WEP keys. Several enhancements to WEP keys were provided. These WEP enhancements were TKIP, support for MIC, per-packet key hashing, and broadcast key rotation. TKIP is a set of software enhancements to RC4-based WEP. Cisco had a proprietary implementation of TKIP at the beginning, sometimes referred to as Cisco TKIP. In 2002, 802.11i finalized the specification for TKIP, and the Wi-Fi Alliance announced that it was making TKIP a component of WPA. Cisco TKIP and WPA TKIP both include per-packet keying and message integrity check. A weakness exists in TKIP, however, that can allow an attacker to decrypt packets under certain circumstances. An enhancement to TKIP is the Advanced Encryption Standard (AES). AES is a stronger alternative to the RC4 encryption algorithm. AES is a more secure encryption algorithm and has been deemed acceptable for the U.S. government to encrypt both unclassified and classified data. AES is currently the highest standard for encryption and replaces WEP. AES was developed to replace the Data Encryption Standard (DES). AES offers a larger key size, while ensuring that the only known approach to decrypt a message is for an intruder to try every possible key. AES has a variable key length: the algorithm can specify a 128-bit key (the default), a 192-bit key, or a 256-bit key. The use of WPA2 with AES is recommended whenever possible. However, it consumes more resources and requires newer hardware than simple WEP or TKIP implementations.
If a client does not support WPA2 with AES due to the age of the hardware or lack of driver compatibility, a VPN may be a good solution for securing over-the-air client connections. IP Security (IPsec) and Secure Sockets Layer (SSL) VPNs provide a similar level of security as WPA2. IPsec VPNs are the services that are defined within IPsec to ensure confidentiality, integrity, and authenticity of data communications across public networks, such as the Internet. IPsec also has a practical application in securing WLANs, by overlaying IPsec on top of cleartext 802.11 wireless traffic. IPsec provides for confidentiality of IP traffic, as well as authentication and anti-replay capabilities. Confidentiality is achieved through encryption using a variant of DES, called Triple DES (3DES), or the newer AES.

Topic Notes: WLAN Topologies and Services


IEEE 802.11 Topology Building Blocks
IEEE 802.11 provides several topologies (or modes) that can be used as building blocks of a WLAN:

- Ad hoc mode: Independent Basic Service Set (IBSS) is the ad hoc topology mode. Mobile clients connect directly without an intermediate access point. Operating systems such as Windows have made this peer-to-peer network easy to set up. This configuration can be used for a small office (or home office) to allow a laptop to be connected to the main PC, or for several people to simply share files. The coverage is limited. Everyone must be able to hear everyone else. An access point is not required. A drawback of peer-to-peer networks is that they are difficult to secure.
- Infrastructure mode: In infrastructure mode, clients connect through an access point.

There are two infrastructure sub-topologies. These topologies are the original standard defined 802.11 topologies. Topologies such as repeaters, bridges, and workgroup bridges are vendor-specific extensions.

BSA Wireless Topology


A basic service area is the physical area of RF coverage that is provided by an access point in a BSS. This area is dependent on the RF energy field that is created, with variations caused by access point power output, antenna type, and physical surroundings affecting the RF. This area of coverage is referred to as a cell. While the BSS is the topology building block and the BSA is the actual coverage pattern, the two terms are used interchangeably in basic wireless discussions. The access point attaches to the Ethernet backbone and communicates with all of the wireless devices in the cell area. The access point is the master for the cell, and controls traffic flow to and from the network. The remote devices do not communicate directly with each other; they communicate only with the access point. The access point is user-configurable with its unique RF channel and wireless SSID name. The access point broadcasts the name of the wireless cell in the SSID through beacons. Beacons are broadcasts that access points send to announce the available services. The SSID is used to logically separate WLANs. It must match exactly between the client and the access point. However, clients can be configured without an SSID (null SSID), then detect all access points and learn the SSID from the beacons of the access point. A common example of the discovery process is the one that is used by the integrated Wireless Zero Configuration (WZC) utility when a wireless laptop is used at a new location. The user is shown a display of the newly found wireless service and asked to connect or supply appropriate keying material to join. SSID broadcasts can be disabled on the access point, but this approach does not work if the client needs to see the SSID in the beacon. If a single cell does not provide enough coverage, any number of cells can be added to extend the range. The range of the combined cells is known as an extended service area (ESA). 
Several issues exist when extended coverage is implemented. If overlapping of the wireless cells is required, careful design of the coverage outside of the office is required, and the performance of the wired network and WLAN devices is important.

It is recommended that ESA cells have 10 to 15 percent overlap to allow remote users to roam without losing RF connections. For wireless voice networks, an overlap of 15 to 20 percent is recommended. Bordering cells should be set to different nonoverlapping channels for best performance. Extending the coverage with more access points must be properly designed. WLAN coverage outside of the office or home area provides easy access to your network to anybody including attackers. Once the coverage is extended, and the number of users increases, the performance of the network devices is important. The access points, which are providing access to multiple users, must ensure that all of the users get enough bandwidth and the required quality of service. At the same time, the increased number of users requires additional throughput via the wired network and WLAN. A sufficient number of access points must be implemented and the network capacity must be taken into account.
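The channel-planning rule above (bordering cells on different nonoverlapping channels) can be sketched for the 2.4-GHz band, where the nonoverlapping channels are 1, 6, and 11. The access point names are made-up, and the function simply cycles through the three channels so that no two adjacent cells in the list share one.

```python
# Sketch of assigning nonoverlapping 2.4-GHz channels to a row of
# neighboring access points. Real designs also consider cells that
# border diagonally and on other floors.
NONOVERLAPPING_2_4GHZ = [1, 6, 11]

def plan_channels(access_points):
    # Cycle 1, 6, 11 down the list so adjacent cells never collide.
    return {ap: NONOVERLAPPING_2_4GHZ[i % 3] for i, ap in enumerate(access_points)}

plan = plan_channels(["ap-lobby", "ap-east", "ap-west", "ap-lab"])
print(plan)
```

A fourth cell reuses channel 1, which is acceptable as long as it does not physically border the first cell on the same channel.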

Wireless Topology Data Rates


WLAN clients have the ability to shift data rates while moving. This technique allows the same client that is operating at 11 Mb/s to shift to 5.5 Mb/s, then 2 Mb/s, and finally still communicate in the outside ring at 1 Mb/s. This rate shifting happens without losing the connection and without any interaction from the user. Rate shifting also happens on a transmission-by-transmission basis; therefore, the access point has the ability to support multiple clients at multiple speeds, depending upon the location of each client.

Higher data rates require stronger signals at the receiver. Therefore, lower data rates have a greater range. Wireless clients always try to communicate with the highest possible data rate. The client will reduce the data rate only if transmission errors and transmission retries occur.
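The step-down behavior described above can be sketched as a simple lookup from received signal strength to an 802.11b rate. The RSSI thresholds here are invented for illustration; real client NICs use vendor-specific logic driven by transmission errors and retries rather than fixed signal cutoffs.

```python
# Illustrative sketch of rate shifting: the client steps down through the
# 802.11b rates (11, 5.5, 2, 1 Mb/s) as the signal weakens at the cell edge.
# The dBm thresholds are assumed example values, not from any standard.
RATE_TABLE = [(-70, 11.0), (-77, 5.5), (-84, 2.0), (-90, 1.0)]  # (min dBm, Mb/s)

def select_rate(rssi_dbm: float):
    for min_rssi, rate in RATE_TABLE:
        if rssi_dbm >= min_rssi:
            return rate
    return None  # beyond the cell edge: association is lost

print(select_rate(-65))  # strong signal near the access point
print(select_rate(-85))  # weak signal in the outside ring
print(select_rate(-95))  # out of range
```

The ordering of the table is what matters: the strongest rate whose threshold is still met wins, mirroring how a client always tries the highest possible data rate first.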

This approach provides the highest total throughput within the wireless cell. The same concept applies to 802.11a, 802.11g, or 802.11n data rates. The difference is in distance and the coverage area of the wireless cell. The performance, throughput, and the distance (range) depend on topology, installation, different obstacles in a path, and configuration of the WLAN equipment. The topology and the installation can significantly change the performance of the WLAN network. Installation without a line of sight, and placement near metal objects, can significantly decrease the distance as well as the throughput and data rate of the WLAN network. When different obstacles are on a path between two wireless devices, the absorption of the signal can limit the performance and the distance. Water, cardboard, and metal can significantly impact the coverage. Additionally, the configuration of the WLAN devices with different parameters is important. In order to limit the coverage to a particular area, the transmit power can be decreased and antennas with lower gain can be used. Lowering the transmit power and antenna gain affects the coverage area. There is no single answer on how far away the wireless signal will go and how large the data rate can be.

The whole WLAN network must be observed and tests must be performed in order to define the real coverage area and data rate.

Access Point Configuration


Wireless access points can be configured through a command-line interface (CLI) or, more commonly, through a browser GUI. However, the mode of configuration of the basic wireless parameters is the same. Basic wireless access point parameters include the SSID, an RF channel with optional power, and authentication (security). Basic wireless client parameters include only authentication. Wireless clients need fewer parameters because a wireless network interface card (NIC) will scan the entire available RF spectrum to locate the RF channel. Note: An IEEE 802.11 radio does not scan both the 2.4- and 5-GHz bands; every standard operates in a specific frequency range, and the automatic scan covers only that range. A wireless client will usually initiate the connection with a null SSID in order to discover the SSIDs that are available. Therefore, by 802.11 design, if you are using open authentication, the result is almost plug-and-play. When security is configured with pre-shared keys (PSKs) for older Wired Equivalent Privacy (WEP) or current Wi-Fi Protected Access (WPA), the key must be an exact match on the client side and on the infrastructure side in order to allow connectivity. Depending on the hardware that is chosen for the access point, it might be capable of one or two frequencies. The two available frequencies are 2.4 GHz from the industrial, scientific, and medical (ISM) band, and the 5-GHz Unlicensed National Information Infrastructure (UNII) band. The features of the access point usually allow for fine adjustment of parameters such as transmit power, frequencies that are used, which radio will be enabled, and which IEEE standard to use on that RF. When 802.11b wireless clients are mixed with 802.11g wireless clients, throughput is decreased because the access point must implement a Ready to Send/Clear to Send (RTS/CTS) protocol.
After configuring the basic required wireless parameters of the access point, additional fundamental wired-side parameters must be configured for the default router and DHCP server. On a pre-existing LAN, there must be a default router to exit the network as well as a DHCP server to lease IP addresses to wired PCs. The access point simply uses the existing router and DHCP servers for relaying IP addresses to wireless clients. Because the network has been expanded, you should verify that the existing DHCP IP address scope is large enough to accommodate the new wireless client additions. If this is a new installation with all router and access point functions in the same hardware, then you simply configure all parameters in the same hardware.
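The DHCP scope check described above can be sketched as simple arithmetic: count the addresses in the existing scope and compare against current wired leases plus the expected wireless additions. The scope boundaries and client counts are made-up example values.

```python
# Sanity check: is the existing DHCP scope large enough once wireless
# clients are added to the LAN?
import ipaddress

def scope_size(first: str, last: str) -> int:
    # Number of addresses in the inclusive range [first, last].
    return int(ipaddress.ip_address(last)) - int(ipaddress.ip_address(first)) + 1

def scope_is_sufficient(first, last, wired_leases, new_wireless_clients):
    return scope_size(first, last) >= wired_leases + new_wireless_clients

print(scope_size("192.168.1.100", "192.168.1.199"))                   # 100 addresses
print(scope_is_sufficient("192.168.1.100", "192.168.1.199", 60, 30))  # True
print(scope_is_sufficient("192.168.1.100", "192.168.1.199", 60, 50))  # False
```

If the check fails, the scope must be widened (or a second scope added) before the wireless clients start requesting leases.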

Topic Notes: WLAN and VoIP Implementation and Troubleshooting

The basic approach to wireless implementation, as with any basic networking, is to gradually configure and incrementally test. Before implementing any wireless network, verify the existing network and Internet access for the wired hosts. Implement the wireless network with only a single access point and a single client, without wireless security. Verify that the wireless client has received a DHCP IP address and can ping the local wired default router, and then browse to the external Internet. Before the installation, perform a site survey to identify the position and the configuration parameters for all the required WLAN equipment. Correct WLAN coverage and throughput in the WLAN network must be ensured. Finally, configure wireless security with WPA or WPA2. Use WEP only if the hardware does not support WPA. Use WPA2 if possible because AES encryption support provides a higher level of security. Once the configuration is completed, verify the WLAN operation.

Wireless Clients
Currently, there are many form factors available to add wireless capabilities to laptops. The most common are Universal Serial Bus (USB) devices with self-contained fixed antennas and wireless supplicant software. Together, they enable wireless hardware usage and provide security options for authentication and encryption. Most new laptops contain some form of wireless capability. The availability of wireless technology has increased the wireless market and improved ease of use. Newer Microsoft Windows operating systems have a basic wireless supplicant client (that is, WZC) to enable wireless plug-and-play. This functionality is performed by discovering SSIDs that are being broadcast and allowing the user to simply enter the matching security credentials or keys for WEP or WPA, for example. The basic features of WZC are satisfactory for simple small office, home office (SOHO) environments. Large enterprise networks require more advanced wireless client features than those of native operating systems. In 2000, Cisco started a program of value-added feature enhancements through a royalty-free certification program. More than 95 percent of Wi-Fi-enabled laptops that are shipped today are compliant with Cisco Compatible Extensions.

Versions and Features


Version  Topic                                             Example
v1       Security                                          Wi-Fi compliant, 802.1X, LEAP, Cisco Key Integrity Protocol
v2       Scaling                                           WPA, access point assisted roaming
v3       Performance and security                          WPA2, Wi-Fi Multimedia (WMM)
v4       Voice over WLAN                                   Call Admission Control (CAC), voice metrics
v5       Management and intrusion prevention system (IPS)  Management Frame Protection (MFP), client reporting

Until Cisco offered a full-featured supplicant for both wired and wireless clients (called Cisco Secure Services Client), enterprise networks were managing one set of wired clients and another set of wireless clients separately. The benefit to users is a single client for wired or wireless connectivity and security.

Wireless Troubleshooting
If you follow the recommended steps for implementing a wireless network, the incremental method of configuration will most likely lead you to the probable cause of an issue. These issues are the most common causes of configuration problems:

- Configuring a defined SSID on the client (as opposed to discovering the SSID) that does not match the access point SSID (the SSID is case-sensitive)
- Configuring incompatible security methods

The wireless client and access point must match in authentication method (Extensible Authentication Protocol [EAP] or PSK) and encryption method (TKIP or AES). Other common problems can result from the initial RF installation, such as:

- Is the radio enabled on both the access point and the client for the correct RF (2.4-GHz ISM or 5-GHz UNII)?
- Is an external antenna connected and facing in the correct direction?
- Is the antenna location too high or too low relative to the wireless clients, preferably within 20 vertical feet (6 vertical m) of the client?
- Is a metal object in the room reflecting RF and causing poor performance?
- Are you attempting to reach too great a distance?

The first step in troubleshooting a suspected wireless issue is to separate the environment into wired network versus wireless network. The second step is to further divide the wireless network into configuration versus RF issues. Begin by verifying the proper operation of the existing wired infrastructure and associated services. Verify that existing Ethernet-attached hosts can renew their DHCP addresses and reach the Internet. Then colocate the access point and wireless client to verify the configuration and eliminate the possibility of RF issues. Always start the wireless client on open authentication and establish connectivity. Then implement the desired wireless security.

If the wireless client is operational at this point, then only RF-related issues remain. First, consider whether metal obstructions exist. If so, move the obstruction or change the location of the access point. If the distance is too great, consider adding another access point using the same SSID but on a unique RF channel. Lastly, consider the RF environment. Just as a wired network can become congested with traffic, so can RF for 2.4 GHz (more often than 5 GHz). Check for other sources of wireless devices using 2.4 GHz. Performance issues that seem to relate to time of day would indicate RF interference from a device. An example would be slow performance at lunchtime in an office that is located near a microwave oven that is used by employees. Although most microwaves will jam RF channel 11, some microwaves will jam all of the 2.4 GHz RF channels. Another cause of problems could be RF devices that hop frequencies, such as the Frequency Hopping Spread Spectrum (FHSS) that is used in cordless phones. Since there can be many sources of RF interference, always start by colocating the access point and wireless client, and then move the wireless client until you can reproduce the problem. Most wireless clients have supplicant software that helps you troubleshoot issues by presenting relative RF signal strength and quality.

VoIP Requirements
VoIP Requirements in Switched Networks

Modern networks can support converged services where video and voice traffic is merged with data traffic. When implementing VoIP in the network, all network requirements, including power and capacity planning, must be examined. Several VoIP network devices are required to implement a VoIP solution. Special attention is required for VoIP phones because network engineers might encounter them on a daily basis. In order to support the VoIP solution, network engineers must be aware of the parameters and requirements of the VoIP solution. When connecting a VoIP phone to the network, two options exist:

- A wired VoIP phone that is connected directly to the switch
- A wireless VoIP phone that is connected to the switch via an access point

It is a common solution that end-user PCs are connected to the VoIP phone, and the VoIP phone then provides connectivity toward the switched network. The Cisco VoIP solution offers many benefits and, in order to perform a proper installation, network engineers must take many parameter settings into consideration. Network engineers work with network equipment on a daily basis, but they are not necessarily VoIP and WLAN professionals. In order to support their LAN and WLAN environments, they must be aware of VoIP requirements. One of the important parameters is delay. For data traffic, delay is not critical. For VoIP users, however, delay is unacceptable. In addition to delay, the concerns are jitter (variable delay) and guaranteed bandwidth. All these factors are best described with the term quality of service (QoS). Voice traffic usually generates a smooth demand on bandwidth and has minimal impact on other traffic as long as voice traffic is properly managed. VoIP traffic requirements are as follows:

- Guaranteed bandwidth
- Transmission priority over other types of network traffic, or the ability to be routed around congested areas on the network
- End-to-end delay of less than 150 ms across the network
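The delay requirement above can be sketched as a check against measured one-way delay samples. The 150-ms budget comes from the text; treating jitter as the largest gap between consecutive samples, and the 30-ms jitter bound, are assumed example choices rather than requirements stated here.

```python
# Sketch: do measured one-way delay samples (in ms) meet a VoIP budget?
# max_delay=150 ms is the end-to-end bound from the text; max_jitter=30 ms
# is an assumed example target.
def meets_voip_requirements(delay_samples_ms, max_delay=150, max_jitter=30):
    jitter = max(abs(a - b) for a, b in zip(delay_samples_ms, delay_samples_ms[1:]))
    return max(delay_samples_ms) < max_delay and jitter < max_jitter

print(meets_voip_requirements([40, 55, 48, 62]))   # within budget
print(meets_voip_requirements([40, 210, 48, 62]))  # one sample blows the delay budget
```

In practice, such samples would come from a tool like IP SLA probes or RTP statistics, and the thresholds would be set from the deployment's QoS policy.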

Additionally, administrators must provide power to the VoIP devices. An uninterruptible power supply (UPS) is a great solution for these devices, in order to provide uninterrupted power. Wired IP phones are best implemented with Power over Ethernet (PoE). WLAN IP phones are usually battery-powered. PoE power that is supplied to the wired IP phones is implemented directly from Cisco Catalyst switches with inline power capabilities. If the PoE switch is not available in the network, a Cisco Catalyst Inline Power Patch Panel or adapter must be used. When data and voice traffic is mixed in the network, the best solution is to separate these two traffic types. If the user PCs and the IP phones are on the same VLAN, each of them will try to use the available bandwidth without considering the other device. The simplest method to avoid a conflict is to use separate VLANs for IP telephony traffic and data traffic. Some Cisco Catalyst switches offer a unique feature that is called a voice VLAN, which lets you overlay a voice topology onto a data network. You can segment phones into separate logical networks, even though the data and voice infrastructure are physically the same. The voice VLAN feature places the phones into their own VLANs without any end-user intervention. The user simply plugs the phone into the switch, and the switch provides the phone with the necessary VLAN information.

Topic Notes: The Functions of Routing


Routers
A router, or gateway, is a network device that determines the optimal path for transmitting data from one network to another. Routers are essential components of large networks that use TCP/IP, because routers can accommodate growth across wide geographical areas. The following characteristics are common to all routers:

Routers have these components, which are also found in computers and switches:

- CPU: The CPU, or processor, is the chip installed on the motherboard that carries out the instructions of a computer program. The CPU processes all of the information that is gathered from, and sent to, other routers.
- Motherboard: The motherboard is the central circuit board, which holds critical electronic components of the system. The motherboard provides connections to other peripherals and interfaces.
- Memory: There are two types of memory, RAM and ROM. RAM is memory on the motherboard that stores data during CPU processing. It is volatile: the information is lost after the power is switched off. ROM is read-only memory on the motherboard. As opposed to RAM, the content of ROM is not lost when the power is switched off. Modern types of ROM are EPROM and EEPROM, which can be erased and reprogrammed multiple times.

Routers have network adapters to which IP addresses are assigned. Network adapters are used to connect routers to other devices in the network. Routers can have these types of ports:

- Console/AUX port: The router uses a console port to attach to a terminal that is used for management, configuration, and control. A console port may not exist on all routers. The AUX interface is used for remote management of the router. Typically, a modem is connected to the AUX interface for dial-in access. From a security standpoint, enabling the option to connect remotely to a network device carries with it the responsibility of maintaining vigilant device management.
- Network port: The router has a number of network ports, including different LAN or WAN media ports, which may use copper or fiber cable.

The router uses its routing table to determine the best path on which to forward the packet. When the router receives a packet, it examines its destination IP address and searches for the best match to a network address in the routing table.

Routers are devices that gather routing information from the network. The information that is processed locally goes into the routing table. The routing table contains a list of all known destinations to the router and provides information regarding how to reach them. Routers have the following two important functions:

- Path determination: Routers must maintain their own routing tables and ensure that other routers know about changes in the network. Routers use a routing protocol to communicate network information to other routers. A routing protocol distributes the information from the local routing table on the router. Different protocols use different methods to populate the routing table. The first letter in each line of the routing table indicates which protocol was the source for the information. It is possible to statically populate routing tables, but static population does not scale and leads to problems when the network topology changes; design changes and outages also cause problems.
- Packet forwarding: Routers use the routing table to determine where to forward packets. Routers forward packets through a network interface toward the destination network. Each line of the routing table indicates which network interface is used to forward a packet. The destination IP address in the packet defines the packet destination. Routers use their local routing table and compare its entries to the destination IP address of the packet. The result is the outgoing interface to use to send the packet out of the router. If a router does not have a matching entry in its routing table, the packet is dropped.

Path Determination
During the path determination portion of transmitting data over a network, routers evaluate the available paths to remote destinations. The routing table holds only one entry per network, but more than one source of information about a particular destination might exist. The routing process that runs on the router must be able to evaluate all the sources and select the best one to populate the routing table. Multiple sources come from having multiple dynamic routing protocols running, and from static and default information being available. The routing protocols use different metrics to measure the distance and desirability of a path to a destination network. When multiple routing protocols are running at the same time, the routers must be able to select the best source of information. Administrative distance is the feature that routers use to select the best path when there are two or more routes to the same destination from different routing protocols. Administrative distance defines the reliability of a routing protocol: each routing protocol is prioritized in order of most to least reliable (believable) with the help of an administrative distance value.
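The selection logic can be sketched in a few lines of Python (a hedged illustration, not Cisco code; the values are the default Cisco IOS administrative distances for some common route sources):

```python
# Default Cisco IOS administrative distances (lower = more trusted).
ADMIN_DISTANCE = {
    "connected": 0,
    "static": 1,
    "eigrp": 90,
    "ospf": 110,
    "rip": 120,
}

def best_source(candidates):
    """Given (source, metric) candidates for the same destination prefix,
    return the route whose source has the lowest administrative distance.
    Metrics are not compared across protocols; only the AD decides."""
    return min(candidates, key=lambda route: ADMIN_DISTANCE[route[0]])

# RIP and OSPF both offer a route to the same network;
# OSPF (AD 110) beats RIP (AD 120), regardless of their metrics.
print(best_source([("rip", 2), ("ospf", 782)]))   # ('ospf', 782)
```

Note that the two metrics (2 hops versus an OSPF cost of 782) are never compared directly; metrics are only meaningful within a single protocol, which is exactly why administrative distance exists.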

Routing Tables
As part of the path determination procedure, the routing process builds a routing table that identifies known networks and how to reach them. Routers forward packets using the information in the routing table. Each router has its own local routing table, which is populated from different sources. Routing metrics vary depending on the routing protocol that is running in the router.

Routing Table Information

The routing table consists of an ordered list of known network addresses. Network addresses can be learned dynamically by the routing process or be statically configured. All directly connected networks are added to the routing table automatically. Routing tables also include information regarding destinations and next-hop associations. These associations tell a router that a particular destination is either directly connected to the router or that it can be reached via another router. This router is the next-hop router and is on the path to the final destination. When a router receives an incoming packet, it uses the destination address and searches the routing table to find the best path. If no entry can be found, the router discards the packet after sending an Internet Control Message Protocol (ICMP) message to the source address of the packet.
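The lookup behavior described here can be sketched with Python's ipaddress module (a minimal illustration; the table entries and next-hop addresses are invented for the example):

```python
import ipaddress

# Illustrative routing table: prefix -> next hop.
routing_table = {
    ipaddress.ip_network("192.168.1.0/24"): "directly connected",
    ipaddress.ip_network("10.0.0.0/8"): "192.168.1.254",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.1.253",
}

def lookup(destination):
    """Return the next hop for the best (longest-prefix) match, or None
    to drop the packet (a real router would also send an ICMP message
    back to the source)."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    if not matches:
        return None  # no entry: discard the packet
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(lookup("10.1.2.3"))    # 192.168.1.253 (the /16 wins over the /8)
print(lookup("172.16.0.1"))  # None -> packet dropped
```

The destination 10.1.2.3 matches both 10.0.0.0/8 and 10.1.0.0/16; the more specific /16 entry is preferred, which is the standard longest-prefix-match rule.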

Routing Update Messages


Routers communicate with each other and maintain their routing tables. A number of update messages are transmitted between routers in order to keep the routing tables updated. Depending on the particular routing protocol, routing update messages can be sent periodically or only when there is a change in the network topology. The information that is contained in the routing update messages includes the destination networks that the router can reach and the routing metric to reach each destination. By analyzing routing updates from neighboring routers, a router can build and maintain its routing table.

Static, Dynamic, Directly Connected, and Default Routes


Routers can learn about other networks via static, dynamic, directly connected, and default routes. Routing tables can be populated by the following methods:

- Directly connected networks: This entry comes from having router interfaces that are directly attached to network segments. This method is the most certain method of populating a routing table. If the interface fails or is administratively shut down, the entry for that network is removed from the routing table. The administrative distance is 0 and, therefore, preempts all other entries for that destination network. Entries with the lowest administrative distance are the best, most-trusted sources.
- Static routes: A system administrator manually enters static routes directly into the configuration of a router. The default administrative distance for a static route is 1; therefore, a static route is included in the routing table unless there is a direct connection to that network. Static routes can be an effective method for small, simple networks that do not change frequently. For bigger and unstable networks, static routes do not scale.
- Dynamic routes: The router learns dynamic routes automatically when a routing protocol is configured and a neighbor relationship to other routers is established. The information responds to changes in the network and updates constantly. There is, however, always a lag between the time that a network changes and the time all of the routers become aware of the change. The time delay for a router to reflect a network change is called convergence time; a shorter convergence time is better for users of the network. Different routing protocols perform differently in this regard. Larger networks require dynamic routing because there are usually many addresses and constant changes, which require updates to routing tables across all routers in the network; otherwise, connectivity is lost.
- Default routes: A default route is an optional entry that is used when no explicit path to a destination is found in the routing table. The default route can be manually inserted or populated from a dynamic routing protocol.

The show ip route command is used to show the contents of the routing table in a router. The first part of the output explains the codes, presenting the letters and the associated source of the entries in the routing table. Letter "C," which is reserved for directly connected networks, labels the second and third entries. Letter "S," which is reserved for static routes, labels the last two entries. Letter "R," which is reserved for Routing Information Protocol (RIP), labels the first entry. Letter "O," which is reserved for the Open Shortest Path First (OSPF) routing protocol, labels the fourth entry. Letter "D," which is reserved for Enhanced Interior Gateway Routing Protocol (EIGRP), labels the fifth entry.

Dynamic Routing Protocols


Routing protocols use their own rules and metrics to build and update routing tables automatically.

Routing Metrics
When a routing protocol updates a routing table, the primary objective of the protocol is to determine the best information to include in the table. The routing algorithm generates a number, called the metric value, for each path through the network. Sophisticated routing protocols can base route selection on multiple metrics, combining them into a single metric. Typically, the smaller the metric number, the better the path. Metrics can be based on either a single characteristic or several characteristics of a path. The metrics most commonly used by routing protocols are as follows:

- Bandwidth: The data capacity of a link (the connection between two network devices).
- Delay: The length of time that is required to move a packet along each link from the source to the destination. The delay depends on the bandwidth of intermediate links, port queues at each router, network congestion, and physical distance.
- Hop count: The number of routers that a packet must travel through before reaching its destination.
- Cost: An arbitrary value that is assigned by a network administrator, usually based on bandwidth, administrator preference, or another measurement, such as load or reliability.
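As a concrete example of a bandwidth-based metric, OSPF derives its cost by dividing a reference bandwidth by the link bandwidth (a minimal sketch, assuming the classic Cisco default reference of 100 Mb/s):

```python
def ospf_cost(link_bw_kbps, reference_bw_kbps=100_000):
    """OSPF-style cost: reference bandwidth / link bandwidth,
    truncated to an integer with a minimum cost of 1."""
    return max(1, reference_bw_kbps // link_bw_kbps)

print(ospf_cost(10_000))    # 10-Mb/s Ethernet -> cost 10
print(ospf_cost(1_544))     # T1 serial link   -> cost 64
print(ospf_cost(100_000))   # Fast Ethernet    -> cost 1
```

Note that with the 100-Mb/s reference, every link at or above 100 Mb/s collapses to cost 1, which is why the reference bandwidth is commonly raised on networks with gigabit links.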

Routing Methods
Many routing protocols are designed around one of the following routing methods:

- Distance vector routing: In distance vector routing, a router does not have to know the entire path to every network segment; it only has to know the direction, or vector, in which to send the packet. The distance vector routing approach determines the direction (vector) and distance (hop count) to any network in the internetwork. Distance vector algorithms periodically (for example, every 30 seconds) send all or portions of their routing table to their adjacent neighbors. Routers that run a distance vector routing protocol send periodic updates even if there are no changes in the network. By receiving the routing table of a neighbor, a router can verify all the known routes and apply changes to its local routing table. The router changes its routing table based on updated information that is received from the neighboring router. This process is also known as "routing by rumor," because the understanding of the network topology is based on the perspective of the routing table of the neighboring router. An example of a distance vector protocol is RIP, a commonly used routing protocol that uses hop count as its routing metric.
- Link-state routing: In link-state routing, each router tries to build its own internal map of the network topology. Each router sends messages into the network when it first becomes active. The message lists the routers to which it is directly connected and provides information about whether the link to each router is active. The other routers use this information to build a map of the network topology and then use the map to choose the best destination. Link-state routing protocols respond quickly to network changes: triggered updates are sent when a network change occurs, and periodic updates (link-state refreshes) are sent at longer time intervals, such as every 30 minutes.

When a link state changes, the device that detected the change creates an update message concerning that link (route). This update message is propagated to all routers that are running the same routing protocol. Each router takes a copy of the update message, updates its routing table, and forwards the update message to all neighboring routers. This flooding of the update message is required to ensure that all routers update their databases before creating an updated routing table that reflects the new topology. Examples of link-state routing protocols are OSPF and Intermediate System-to-Intermediate System (IS-IS).

Note: Cisco developed EIGRP, which combines the best features of distance vector and link-state routing protocols.
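A single distance vector update step of the kind described above can be sketched as follows (hop count as the metric, as in RIP; the function and router names are illustrative):

```python
def process_update(local_table, neighbor, neighbor_table):
    """Bellman-Ford-style step: for each destination the neighbor
    advertises, install (hops + 1) via that neighbor if that is
    better than what the local table already holds."""
    for network, hops in neighbor_table.items():
        candidate = hops + 1            # one extra hop to reach the neighbor
        current = local_table.get(network)
        if current is None or candidate < current[0]:
            local_table[network] = (candidate, neighbor)
    return local_table

table = {"10.0.0.0/8": (0, None)}       # directly connected network
update = {"10.0.0.0/8": 3, "172.16.0.0/16": 1}
process_update(table, "RouterB", update)
print(table["172.16.0.0/16"])   # (2, 'RouterB') -> 2 hops via RouterB
print(table["10.0.0.0/8"])      # (0, None) -> existing better route kept
```

Each router repeats this step on every periodic update from every neighbor, which is how the "routing by rumor" view of the topology gradually converges.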

Topic Notes: Starting and Monitoring a Cisco Router


Initial Startup of a Cisco Router
The startup of a Cisco router requires verifying the physical installation, powering up the router, and viewing the Cisco IOS Software output on the console. To start router operations, the router completes the following tasks:

- Runs the power-on self-test (POST) to test the hardware
- Finds and loads the Cisco IOS Software that the router uses for its operating system
- Finds and applies the configuration statements about router-specific attributes, protocol functions, and interface addresses

When a Cisco router powers up, it performs a POST. During the POST, the router executes diagnostics to verify the basic operation of the CPU, memory, and interface circuitry. After verifying the hardware functions, the router proceeds with the software initialization. During the software initialization, the router finds and loads the Cisco IOS image. When the Cisco IOS image is loaded, the router finds and loads the configuration file, if one exists. This table lists the steps that are required for the initial startup of a Cisco router.

Startup of Cisco Routers

Step 1: Before starting the router, verify the following: all network cable connections are secure, your terminal is connected to the console port, and your console terminal application, such as HyperTerminal, is selected.
Step 2: Push the power switch to "on."
Step 3: Observe the boot sequence of the router on the console. The Cisco IOS Software output text appears on the console.

Initial Setup of a Cisco Router


When the router starts, it looks for a device configuration file. If it does not find one, the router executes a question-driven initial configuration routine called "setup." After a router completes the POST and loads a Cisco IOS image, it looks for a device configuration file in its NVRAM. The NVRAM of the router is a type of memory that retains its contents even when power is turned off. If the router has a configuration file in NVRAM, the user-mode prompt appears. When starting a new Cisco router, or when starting a Cisco router without a configuration in NVRAM, there will be no configuration file. If no valid configuration file exists in NVRAM, the operating system executes a question-driven initial configuration routine that is referred to as the system configuration dialog or setup mode. Setup mode is not intended for entering complex protocol features in the router; use setup mode to bring up a minimal configuration. Rather than using setup mode, you can use the various other configuration modes to configure the router. When you use setup mode and complete the configuration process for all of the installed interfaces on the router, the setup command shows the configuration command script that was created. Depending on the software revision, the router asks whether the created configuration can be used, or the setup command offers the following three choices:

- [0]: Go to the EXEC prompt without saving the created configuration.
- [1]: Go back to the beginning of setup without saving the created configuration.
- [2]: Accept the created configuration, save it to NVRAM, and exit to the EXEC mode.

If you choose [2], the configuration is executed and saved to NVRAM, and the system is ready to use. To modify the configuration, you must reconfigure it manually. The script file that is generated by the setup command is additive: you can turn features on with the setup command, but you cannot turn them off. In addition, the setup command does not support many of the advanced features of the router or features that require a more complex configuration.

Logging into the Cisco Router


When you configure a Cisco router from the CLI on a console or remote terminal, Cisco IOS Software provides a command interpreter called the EXEC. The EXEC interprets the commands that are entered and carries out the corresponding operations. After you have configured a Cisco router from the setup utility, you can reconfigure it or add to the configuration from the user interface that runs on the router console or auxiliary port. You can also configure a Cisco router by using a remote-access application such as SSH. You must log in to the router before entering an EXEC command. For security purposes, the EXEC has two levels of access to commands:

- User mode: Typical tasks include checking the router status.
- Privileged mode: Typical tasks include changing the router configuration.

When you first log in to the router, a user-mode prompt is displayed. The EXEC commands that are available in user mode are a subset of the EXEC commands that are available in privileged mode. These commands provide a means to display information without changing the configuration settings of the router. To access the complete set of commands, you must enable privileged mode with the enable command and supply the enable password, if it is configured.

Note: The enable password is displayed in cleartext in the output of the show run command. The secret password is encrypted, so it is not displayed in cleartext. If both the enable and secret passwords are configured, the secret password overrides the enable password.

The EXEC prompt is displayed as a pound sign (#) while in privileged mode. From the privileged level, you can access global configuration mode and the other specific configuration modes, such as interface, subinterface, line, router, route-map, and several others. Use the disable command to return to the user EXEC mode from the privileged EXEC mode. Use the exit or logout command to end the current session. Enter a question mark (?) at the user-mode prompt or at the privileged-mode prompt to display a list of commands that are available in the current mode.

Note: The available commands vary with different Cisco IOS Software versions.

Notice "-- More --" at the bottom of the sample display. This output indicates that multiple screens are available as output. You can perform any of the following tasks:

- Press the Spacebar to display the next available screen.
- Press the Enter key to display the next line.
- Press any other key to return to the prompt.

Enter the enable user-mode command to access the privileged EXEC mode. Normally, if an enable password has been configured, you must also enter the enable password before you can access the privileged EXEC mode. Enter the ? command at the privileged-mode prompt to display a list of the available privileged EXEC commands.

Note: The available commands vary with different Cisco IOS Software versions.

Showing the Router Initial Startup Status

After logging in to a Cisco router, the router hardware and software status can be verified by using the following router status commands: show version, show running-config, and show startup-config. Use the show version EXEC command to display the configuration of the system hardware, the software version, the memory size, and the configuration register setting. The I/O memory is used for holding packets while they are in the process of being routed. The router has two Fast Ethernet interfaces and two serial interfaces. This output is useful for confirming that the expected interfaces are recognized at startup and are functioning, from a hardware perspective. The router has 239 KB used for startup configuration storage in the NVRAM and 62,720 KB of flash storage for the Cisco IOS Software image. The show version command displays information about the currently loaded software version, along with hardware and device information. Some of the information that is shown from this command is as follows:

- Software version: Cisco IOS Software version (stored in flash)
- Bootstrap version: Bootstrap version (stored in boot ROM)
- System uptime: Time since last reboot
- System restart info: Method of restart (such as power cycle or crash)
- Software image name: Cisco IOS filename that is stored in flash
- Router type and processor type: Router model number and processor type
- Memory type and allocation (shared and main): Main processor RAM and shared packet I/O buffering
- Software features: Supported protocols or feature sets
- Hardware interfaces: Interfaces that are available on the router
- Configuration register: Sets bootup specifications, console speed setting, and related parameters

The show running-config command, which is used in privileged EXEC mode, displays the current running configuration that is stored in RAM. With a few exceptions, all configuration commands that were used will be entered into the running-config and implemented immediately by Cisco IOS Software. The show startup-config command displays the startup configuration file that is stored in NVRAM. This is the configuration that the router will use on the next reboot. This configuration does not change unless the current running configuration is saved to NVRAM.

Topic Notes: Configuring a Cisco Router for an Internetwork


Cisco Router Configuration Modes
From privileged EXEC mode, you can enter global configuration mode, which provides access to the specific router configuration modes. The first step in configuring a Cisco router is to use the setup utility. The setup utility allows you to create a basic initial configuration. For more-complex and specific configurations, you can use the CLI to enter terminal configuration mode. After the basic initial configuration or after a successful login procedure, the routers display the user EXEC mode prompt. In user EXEC mode, the network engineer has a limited set of available commands. In order to start the router configuration, the network engineer must enter the privileged EXEC mode. The enable command is used to enter the privileged EXEC mode. From the privileged EXEC mode, you can enter the global configuration mode with the configure terminal command. From global configuration mode, you can access the specific configuration modes, which include the following:

- Interface: Supports commands that configure operations on a per-interface basis.
- Subinterface: Supports commands that configure multiple virtual interfaces on a single physical interface.
- Controller: Supports commands that configure controllers (for example, E1 and T1 controllers).
- Line: Supports commands that configure the operation of a terminal line (for example, the console or the vty ports).
- Router: Supports commands that configure an IP routing protocol.

If you enter the exit command, the router will go back one level. You can enter the exit command from one of the specific configuration modes to return to global configuration mode. Press Ctrl-Z to leave the configuration mode completely and return the router to the privileged EXEC mode. In terminal configuration mode, an incremental compiler is used. Each configuration command that is entered is parsed as soon as the Enter key is pressed. If there are no syntax errors, the command is executed and stored in the running configuration, and it is effective immediately. Commands that affect the entire router are called global commands. The hostname and enable password commands are examples of global commands.

Commands that point to or indicate a process or interface that will be configured are called major commands. When they are entered, major commands cause the CLI to enter a specific configuration mode. Major commands have no effect unless a subcommand that supplies the configuration entry is immediately entered. For example, the major command interface serial 0 has no effect unless it is followed by a subcommand that tells what is to be done to that interface. The following are examples of some major commands and the subcommands that go with them:
Router(config)#interface serial 0      (major command)
Router(config-if)#shutdown             (subcommand)
Router(config-if)#line console 0       (major command)
Router(config-line)#password cisco     (subcommand)
Router(config-line)#router rip         (major command)
Router(config-router)#network 10.0.0.0 (subcommand)

Entering a major command switches from one configuration mode to another. It is not necessary to return to the global configuration mode first before entering another configuration mode. After you enter the commands to configure the router, the running configuration is changed. You must save the running configuration to NVRAM. If the configuration is not saved to NVRAM and the router is reloaded, the configuration will be lost and the router will revert to the last configuration saved in NVRAM. Use the copy running-config startup-config command to save the running configuration to the startup configuration in NVRAM.

Configuring a Cisco Router from the CLI


The CLI is used to configure the router name, password, and other console commands. One of the first tasks in configuring a router is to name it. Naming the router helps you to better manage the network by enabling you to uniquely identify each router within the network. The name of the router is considered to be the hostname and is the name that is displayed at the system prompt. If no name is configured, the default router name is Router. The name of the router is assigned in global configuration mode. In the example that is shown, the name of the router is set to RouterX. Use the hostname global configuration command to set the name of the router.

You can configure a message-of-the-day (MOTD) banner to be displayed on all of the connected terminals. This banner is displayed at login and is useful for conveying messages, such as impending system shutdowns, that might affect network users. When you enter the banner motd global configuration command, follow the command with one or more blank spaces and a delimiting character of any kind. In the example, the delimiting character is a pound sign (#). After entering the banner text, terminate the message with the same delimiting character.

Other console-line commands include exec-timeout and logging synchronous. In the example, the exec-timeout command sets the timeout for the console EXEC session to 20 minutes and 30 seconds, which changes the session from the default timeout of 10 minutes.

The logging synchronous console-line command is useful when console messages are being displayed while you are attempting to input EXEC or configuration commands. Instead of the console messages being interspersed with the input, the input is redisplayed on a single line at the end of each console message that interrupts the input. This functionality makes reading the input and the messages much easier. The following example shows how the console messages interrupt the interface serial 0/0 command entered.
RouterX(config)#interface ser
*Jan 9 00:26:44.887: %LINK-5-CHANGED: Interface Serial0/0, changed state to administratively down
*Mar 9 00:26:45.887: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/0, changed state to downial 0/0

The following example shows the same situation except that this time the logging synchronous console-line command is used. Now the input is redisplayed on a single line.
RouterX(config)#logging synchronous
RouterX(config)#interface ser
*Jan 9 00:26:44.887: %LINK-5-CHANGED: Interface Serial0/0, changed state to administratively down
*Mar 9 00:26:45.887: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/0, changed state to down
RouterX(config)#interface Serial 0/0

Configuring Cisco Router Interfaces


The main function of a router is to forward packets from one network device to another. For the router to perform this task, you must define the characteristics of the interfaces through which the packets are received and sent. The router interface characteristics include, but are not limited to, the IP address of the interface, the data-link encapsulation method, the media type, the bandwidth, and the clock rate. You can enable many features on a per-interface basis. Interface configuration mode commands modify the operation of Ethernet, serial, and many other interface types. When you enter the interface command, you must define the interface type and number. The number is assigned to each interface based on the physical location of the interface hardware in the router and is used to identify each interface. This identification is critical when there are multiple interfaces of the same type in a single router. Examples of an interface type and number are as follows:
RouterX(config)#interface serial 0
RouterX(config)#interface fa 0/0

An interface in a Cisco 2800 and 3800 Series Integrated Services Router, or other modular router, is specified by the physical slot in the router and port number on the module in that slot, as follows:

RouterX(config)#interface fa 1/0

You can add a description to an interface to help remember specific information about that interface. Two common descriptions might be the network that is serviced by that interface or the customer that is connected to that interface. This description is meant solely as a comment to help identify how the interface is being used. To add a description to an interface configuration, use the description command in interface configuration mode. To remove the description, use the no form of this command. A Serial 0 interface in the RouterX router is connected to the Router1 router. The following example shows the commands that are used to add the description on the Serial 0 interface:
RouterX(config)#interface Serial 0
RouterX(config-if)#description Link to Router1

The description will appear in the output when the configuration information that exists in the memory of the router is displayed. The same text will appear in the show interfaces command display output, as follows:
RouterX#show interfaces
<output omitted>
Serial0/0/0 is administratively down, line protocol is down (disabled)
Hardware is HD64570
Description: Link to Router1
MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
reliability 255/255, txload 1/255, rxload 1/255
<output omitted>

To quit interface configuration mode and to move into global configuration mode, enter the exit command at the RouterX(config-if)# prompt as follows:
RouterX(config-if)#exit

You may want to disable an interface to perform hardware maintenance on a specific interface or a segment of a network. You may also want to disable an interface if a problem exists on a specific segment of the network and you must isolate that segment from the rest of the network. The shutdown subcommand administratively turns off an interface. To reinstate the interface, use the no shutdown subcommand. When an interface is first configured, except in setup mode, you must administratively enable the interface before it can be used to transmit and receive packets. Use the no shutdown subcommand to allow Cisco IOS Software to use the interface.

Configuring the Cisco Router IP Address

Each interface on a Cisco router must have its own IP address to uniquely identify it on the network. Unique IP addressing is required for the communication between the hosts and other network devices. Each router link to each LAN is associated with a dedicated and unique subnet. The router needs to have an IP address configured on each of its links to each LAN. The routers determine the path to the destination based on the destination IP address, which is written in the IP header. To configure an interface on a Cisco router, complete these steps.

Step 1: Enter global configuration mode by using the configure terminal command (Router#configure terminal). This command displays a new prompt: Router(config)#
Step 2: Identify the specific interface that requires an IP address by using the interface type slot/port command (Router(config)#interface serial 0). This command displays a new prompt: Router(config-if)#
Step 3: Set the IP address and subnet mask for the interface by using the ip address ip-address mask command (Router(config-if)#ip address 172.18.0.1 255.255.0.0). This command configures the IP address and subnet mask for the selected interface.
Step 4: Enable the interface to change the state from administratively down to up by using the no shutdown command (Router(config-if)#no shutdown). This command enables the current interface.
Step 5: Exit configuration mode for the interface by using the exit command (Router(config-if)#exit). This command displays the global configuration mode prompt: Router(config)#

The following example shows how to configure the IP address on the Serial 0 interface on RouterX:
RouterX#configure terminal
RouterX(config)#interface serial 0
RouterX(config-if)#ip address 172.18.0.1 255.255.0.0

Verifying the Interface Configuration


When you have completed the router interface configuration, you can verify the configuration by using the show interfaces command. The show interfaces command displays the status and statistics of all of the network interfaces on the router. Alternatively, the status of a specific interface can be displayed by using the show interfaces type slot command. Output fields for an Ethernet interface and their meanings are shown in this table.

One of the most important elements of the show interfaces command output is the display of the line and data-link protocol status. For other types of interfaces, the meanings of the status line may be slightly different. The first parameter refers to the hardware layer and, essentially, reflects whether the interface is receiving the carrier detect signal from the other end (the DCE if using serial connection). The second parameter refers to the data link layer, and reflects whether the data link layer protocol keepalives are being received. Based on the output of the show interfaces command, possible problems can be fixed as follows:

- If the interface is up and the line protocol is down, a problem exists. Some possible causes include the following:
  - No keepalives
  - Mismatch in encapsulation type
  - Clock rate issue
- If the line protocol and the interface are both down, a cable might never have been attached when the router was powered up, or some other interface problem exists. For example, in a back-to-back connection, the other end of the connection may be administratively down.
- If the interface is administratively down, it has been manually disabled (the shutdown command has been issued) in the active configuration.

After configuring a serial interface, use the show interfaces serial command to verify the changes. In this example, the show interfaces serial 0/0/0 command is used. Note that the line and protocol are up and that the bandwidth is 64 kb/s.
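Output from the show interfaces serial 0/0/0 command on such a link might look like the following sketch. The hardware type, addresses, and counter values shown here are invented for illustration; only the fields discussed above are reproduced:

```
RouterX#show interfaces serial 0/0/0
Serial0/0/0 is up, line protocol is up
  Hardware is GT96K Serial
  Internet address is 10.140.1.2/24
  MTU 1500 bytes, BW 64 Kbit/sec, DLY 20000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation HDLC, loopback not set
  Keepalive set (10 sec)
```

The first line confirms both the hardware status (up) and the line protocol status (up), and the BW field shows the 64-kb/s bandwidth mentioned above.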

Topic Notes: The Packet Delivery Process


Layer 2 Addressing
Host-to-host communications require Layer 2 addresses. MAC addresses are assigned to end devices such as hosts. The physical interfaces on a router provide a Layer 2 function and are also assigned a MAC address. Each host and network device that provides a Layer 2 function maintains a MAC address table.

Layer 3 Addressing
Layer 3 addresses are assigned to end devices such as hosts and to network devices that provide Layer 3 functions. The router has its own Layer 3 address on each interface. Each network device that provides a Layer 3 function maintains a Layer 3 address table.

Host-to-Host Packet Delivery


There are a number of steps that are involved in delivering an IP packet over a routed network. The host sends any packet that is not destined for the local IP network to the default gateway. The default gateway is the address of the local router, which must be configured on hosts (PCs, servers, and so on).

In this example, host 192.168.3.1 has data that it wants to send to host 192.168.4.2. The application does not need a reliable connection, so User Datagram Protocol (UDP) is selected. Because it is not necessary to set up a session, the application can start sending data. UDP prepends a UDP header and passes the protocol data unit (PDU) to IP (Layer 3) with an instruction to send the PDU to 192.168.4.2. IP encapsulates the PDU in a Layer 3 packet and passes it to Layer 2.

Layer 2 checks for a mapping between the Layer 2 and Layer 3 addresses. The corresponding mapping does not exist, so the Address Resolution Protocol (ARP) holds the packet while it resolves the Layer 2 address. To send the packet over the line, the host needs the Layer 2 information of the next-hop device. The ARP table in the host does not have an entry, so the host must resolve the Layer 2 address (MAC address) of the default gateway, which is the next hop for the packet delivery. The packet waits while the host resolves the Layer 2 information.

This example differs from the previous examples: the two hosts are on different segments, 192.168.3.0/24 and 192.168.4.0/24. Because the host is not running a routing protocol, it does not know how to reach the other segment. It must send the frame to its default gateway, where the frame can be forwarded. If the host does not have a mapping for the default gateway, the host uses the standard ARP process to obtain the mapping.

The host sends an ARP request to the router. The user has configured the IP address 192.168.3.2 as the default gateway. Host 192.168.3.1 sends out the ARP request, and the router receives it. The ARP request contains information about the host, and the router adds that information to its ARP table. The router processes the ARP request like any other host and sends an ARP reply with its own information. The host receives the ARP reply and enters the information in its local ARP table. Now the Layer 2 frame with the application data can be sent to the default gateway. Note that the frame carries the destination IP address (192.168.4.2) together with the MAC address of the default gateway rather than the actual destination MAC address.

The pending frame is sent with the local host IP address and MAC address as the source. The destination IP address is that of the remote host, but the destination MAC address is that of the default gateway. The router receives the frame, recognizes its MAC address, and processes the frame. At Layer 3, the router sees that the destination IP address is not its own address. A host would discard such a frame; however, because this device is a router, it passes all packets that are destined for other networks to the routing process, which determines where to send the packet.

The routing process looks up the destination IP address in its routing table. In this example, the destination segment is directly connected, so the routing process can pass the packet directly to Layer 2 for the appropriate interface. Layer 2 uses the ARP process to obtain the mapping between the IP address and the MAC address. The router asks for the Layer 2 information in the same way that hosts do.

An ARP request for the destination Layer 3 address is sent on the link. The destination host receives the frame that contains the ARP request and passes the request to its ARP process. The ARP process takes the information about the router from the ARP request and places it in the local ARP table. The ARP process then generates an ARP reply and sends it back to the router. The router receives the ARP reply, takes the information that is required for forwarding the packet to the next hop, populates its local ARP table, and starts the packet forwarding process.

The frame is forwarded to the destination.
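To summarize the addressing in this example, the frame headers on each segment can be sketched as follows. The MAC addresses are described rather than invented; the IP addresses are the ones used above:

```
Segment 192.168.3.0/24 (host to default gateway):
  Source MAC      = MAC of host 192.168.3.1
  Destination MAC = MAC of router interface 192.168.3.2
  Source IP       = 192.168.3.1   (unchanged end to end)
  Destination IP  = 192.168.4.2   (unchanged end to end)

Segment 192.168.4.0/24 (router to destination host):
  Source MAC      = MAC of router interface on 192.168.4.0/24
  Destination MAC = MAC of host 192.168.4.2
  Source IP       = 192.168.3.1
  Destination IP  = 192.168.4.2
```

The Layer 2 addresses are rewritten at each hop, while the Layer 3 addresses stay the same from source to destination.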

Using the show ip arp Command


To display the ARP cache (the ARP table), use the show ip arp EXEC command as follows:
show ip arp [ip-address] [host-name] [mac-address] [interface type number]

Syntax Description
host-name : (Optional) Hostname.
mac-address : (Optional) 48-bit MAC address.
interface type number : (Optional) ARP entries that are learned via this interface type and number are displayed.

Usage Guidelines
ARP establishes correspondence between network addresses (an IP address, for example) and LAN hardware addresses (Ethernet addresses). A record of each correspondence is kept in a cache for a predetermined amount of time and then discarded. The table that follows describes the sample output from the show ip arp command:
RouterX#show ip arp
Protocol  Address      Age (min)  Hardware Addr   Type  Interface
Internet  192.168.3.1  -          0800.0222.2222  ARPA  FastEthernet0/0
Internet  192.168.4.2  -          0800.0222.1111  ARPA  FastEthernet0/1

Field : Description
Protocol : The protocol for the network address in the Address field.
Address : The network address that corresponds to the hardware address.
Age (min) : The age, in minutes, of the cache entry. A hyphen (-) means that the address is local.
Hardware Addr : The LAN hardware (MAC) address that corresponds to the network address.
Type : Indicates the encapsulation type that Cisco IOS Software is using for the network address in this entry. Possible values include the following:
o Advanced Research Projects Agency (ARPA)
o Subnetwork Access Protocol (SNAP)
o Session Announcement Protocol (SAP)
Interface : Indicates the interface that is associated with this network address.

Using Common Cisco IOS Tools


To diagnose basic network connectivity, you can use the ping command in user EXEC or privileged EXEC mode as follows:
ping [protocol {host-name | system-address}]

Syntax Description
host-name : Hostname of the system to ping. If a host-name or system-address is not specified at the command line, it will be required in the ping system dialog.
system-address : Address of the system to ping. If a host-name or system-address is not specified at the command line, it will be required in the ping system dialog.

This example represents a simple network with two routers. The RouterX router uses the ping 10.0.0.2 command to check the reachability of the neighboring router interface. By default, five Internet Control Message Protocol (ICMP) packets are sent, and five replies are required for a perfectly successful test. The RouterX router receives all five replies. The following ping command output represents a perfectly successful test:
RouterX#ping 10.0.0.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 4/6/8 ms

To determine the routes that packets will actually take when traveling to their destination address, you can use the traceroute command in user EXEC or privileged EXEC mode as follows:
traceroute [vrf vrf-name] [protocol] destination

Syntax Description
destination : (Optional in privileged EXEC mode; required in user EXEC mode) The destination address or hostname for which you want to trace the route. The software determines the default parameters for the appropriate protocol, and the tracing action begins.

This example represents a network with four routers. The RouterX router uses the traceroute 192.168.1.4 command to verify the path that packets will take to the RouterW router. The RouterX router receives replies from all the hops. The following traceroute command output represents the path from RouterX to RouterW:
RouterX#traceroute 192.168.1.4
Type escape sequence to abort.
Tracing the route to 192.168.1.4

1 10.1.1.2 4 msec 4 msec 4 msec
2 172.16.1.3 20 msec 16 msec 16 msec
3 192.168.1.4 16 msec * 16 msec

Host-to-Host Packet Delivery


Purpose: Use this learning aid to examine the 17 steps in the host-to-host packet delivery process.

[Interactive learning aid: Step 01 through Step 17]

Course: The Packet Delivery Process, Router Security, and Remote Access Topic: 5

Topic Notes: Router Security


Physical and Environmental Threats
Improper and incomplete network device installation is an often-overlooked security threat. Software-based security measures alone cannot prevent network damage due to poor installation. There are four classes of unsecure installations or physical access threats:

 Hardware threats : Threats of physical damage to the router or router hardware.
 Environmental threats : Threats such as temperature extremes (too hot or too cold) or humidity extremes (too wet or too dry).
 Electrical threats : Threats such as voltage spikes, insufficient supply voltage (brownouts), unconditioned power (noise), and total power loss.
 Maintenance threats : Threats such as poor handling of key electrical components (electrostatic discharge [ESD]), lack of critical spare parts, poor cabling, poor labeling, and so on.

Configuring Password Security


You can use the command-line interface (CLI) to configure the password and other console commands.

Caution: These passwords are for instructional purposes only. Passwords that are used in an actual implementation should meet the requirements of a "strong" password.

Like securing a switch, you can secure a router by using a password to restrict access. Using a password and assigning privilege levels are simple ways to provide terminal access control in a network. A password can be established on individual lines, such as the console, and for the privileged EXEC mode. Passwords are case-sensitive.

Each Telnet port on the router is known as a vty terminal. There are a maximum of five default vty ports on the router, which allows for five concurrent Telnet sessions. On the router, the vty ports are numbered from 0 through 4. You can activate up to 11 additional optional vty terminals (5 to 15) if needed.

You can use the line console 0 command, followed by the login and password subcommands, to require login and establish a login password on the console terminal. By default, login is not enabled on a console or vty port. You can use the line vty 0 4 command, followed by the login and password subcommands, to require login and establish a login password on incoming Telnet sessions. To activate and configure the additional vty lines, use the line vty 5 15 command, followed by the login and password subcommands.

You can use the login local command to enable password checking on a per-user basis, using the username and password that are specified with the username global configuration command. The username command establishes username authentication with encrypted passwords.

The enable password global configuration command restricts access to the privileged EXEC mode. You can assign an encrypted form of the enable password, called the enable secret password, by entering the enable secret command with the desired password at the global configuration mode prompt. If the enable secret password is configured, it is used rather than the enable password, not in addition to it.

You can also add a further layer of security, which is particularly useful for passwords that cross the network or are stored on a TFTP server. Cisco provides a feature that allows the use of encrypted passwords. To set password encryption, enter the service password-encryption command in global configuration mode. Passwords that are displayed or set after you configure the service password-encryption command will be encrypted. To disable a command, enter no before the command. For example, use the no service password-encryption command to disable password encryption.

Cisco AutoSecure is a Cisco IOS security CLI command feature. You can deploy one of these two modes, depending on your needs:

 Interactive mode : Prompts the user with options to enable and disable services and other security features.
 Noninteractive mode : Automatically executes the Cisco AutoSecure command with the recommended Cisco default settings.

Caution: Cisco AutoSecure attempts to ensure maximum security by disabling the services most commonly used by hackers to attack a router. However, some of these services may be needed for successful operation in your network. For this reason, you should not use the Cisco AutoSecure feature until you fully understand its operations and the requirements of your network.

Cisco AutoSecure performs the following functions:

 Disables the following global services:
o Finger
o Packet assembler/disassembler (PAD)
o Small servers
o Bootstrap Protocol (BOOTP) servers
o HTTP service
o Identification service
o Cisco Discovery Protocol
o Network Time Protocol (NTP)
o Source routing
 Enables the following global services:
o Password encryption service
o Tuning of scheduler interval and allocation
o TCP synwait-time
o TCP keepalive messages
o Security policy database (SPD) configuration
o Internet Control Message Protocol (ICMP) unreachable messages
 Disables the following services per interface:
o ICMP
o Proxy Address Resolution Protocol (ARP)
o Directed broadcast
o Maintenance Operation Protocol (MOP) service
o ICMP unreachables
o ICMP mask reply messages
 Provides logging for security, including the following functions:
o Enables sequence numbers and time stamps
o Provides a console log
o Sets the logging buffer size
o Provides an interactive dialog to configure the logging server IP address
 Secures access to the router, including the following functions:
o Checking for a banner and providing the ability to add text for automatic configuration
o Login and password
o Transport input and output
o exec-timeout commands
o Local authentication, authorization, and accounting (AAA)
o Secure Shell (SSH) timeouts and ssh authentication-retries commands
o Enabling only SSH and Secure Copy Protocol (SCP) for access and file transfers to and from the router
o Disabling Simple Network Management Protocol (SNMP) if it is not being used
 Secures the forwarding plane, including the following functions:
o Enabling Cisco Express Forwarding or distributed Cisco Express Forwarding on the router, when available
o Antispoofing
o Blocking all Internet Assigned Numbers Authority (IANA)-reserved IP address blocks
o Blocking private address blocks, if the customer desires
o Installing a default route to Null0, if a default route is not being used
o Configuring TCP Intercept for a connection timeout, if the TCP Intercept feature is available and the user desires it
o Starting an interactive configuration for Context-Based Access Control (CBAC) on interfaces facing the Internet, when using a Cisco IOS Firewall image
o Enabling NetFlow on software forwarding platforms

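The password and line commands described earlier in this topic can be combined as in the following sketch; the passwords shown are placeholders for instruction only and would not qualify as strong passwords in a real deployment:

```
RouterX(config)#enable secret Curium96
RouterX(config)#service password-encryption
RouterX(config)#line console 0
RouterX(config-line)#login
RouterX(config-line)#password ConUser1
RouterX(config-line)#exit
RouterX(config)#line vty 0 4
RouterX(config-line)#login
RouterX(config-line)#password VtyUser1
```

Because service password-encryption is enabled, the line passwords set afterward are stored in encrypted form in the configuration.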
Configuring the Login Banner


You can use the CLI to configure the message of the day and other console commands. To define a customized banner to be displayed before the username and password login prompts, use the banner login command in global configuration mode. To disable the login banner, use the no banner login command.

When you enter the banner login command, follow the command with one or more blank spaces and a delimiting character. In this example, the delimiting character is a quote mark ("). After the banner text has been added, terminate the message with the same delimiting character.

Warning: Use caution when selecting the words that are used in the login banner. Words like "welcome" may imply that access is not restricted and may allow hackers to defend their actions.

You can also configure a message-of-the-day (MOTD) banner, which is displayed to all terminals at connection time. The MOTD banner is configured with the same logic, and the command sequence would look like this:
RouterX(config)#banner motd #This router will not be accessible today between 10 and 11 PM for maintenance reasons #
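Following the same logic, a login banner that uses the quote mark as the delimiting character might be configured as follows; the banner text is illustrative:

```
RouterX(config)#banner login "Access for authorized users only. Activity may be monitored."
```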

Telnet and SSH Access


Telnet is the most common method of accessing a network device. However, Telnet is an unsecure way of accessing a network. SSH is a secure replacement for Telnet, which gives the same type of access. Communication between the client and server is encrypted in both SSH version 1 (SSHv1) and SSH version 2 (SSHv2). Implement SSHv2, if possible, because it uses a more enhanced security encryption algorithm. When encryption is enabled, a Rivest, Shamir, and Adleman (RSA) encryption key must be generated on the router. In addition, an IP domain must be assigned to the router. Before implementing SSH, first test the authentication without SSH to make sure that authentication works with the router. The following configuration shows local authentication, which allows you to use Telnet to connect to the router with the username "cisco" and password "cisco":

RouterX(config)#username cisco password cisco
RouterX(config)#line vty 0 4
RouterX(config-line)#login local

In order to enable and test authentication with SSH, you must add SSH configuration commands to the previous statements. Then test SSH from the PC and UNIX stations. The following configuration enables SSH and disables Telnet access:
RouterX(config)#ip domain-name cisco.com
RouterX(config)#crypto key generate rsa
The name for the keys will be: RouterX.cisco.com
Choose the size of the key modulus in the range of 360 to 2048 for your General Purpose Keys. Choosing a key modulus greater than 512 may take a few minutes.
How many bits in the modulus [512]: 1024
% Generating 1024 bit RSA keys, keys will be non-exportable...[OK]
*Mar 16 20:32:15.613: %SSH-5-ENABLED: SSH 1.99 has been enabled
RouterX(config)#ip ssh version 2
RouterX(config)#line vty 0 4
RouterX(config-line)#login local
RouterX(config-line)#transport input ssh

If you want to prevent non-SSH connections, the transport input ssh command limits the router to SSH connections only. Straight (non-SSH) Telnet connections are refused. The following configuration enables SSH connections only:
RouterX(config)#line vty 0 4
RouterX(config-line)#transport input ssh

Test to ensure that non-SSH users cannot use Telnet to connect to the router.
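One way to verify the SSH configuration is with the show ip ssh command. Output along the following lines would be expected; the timeout and retry values shown are typical defaults and may differ on your router:

```
RouterX#show ip ssh
SSH Enabled - version 2.0
Authentication timeout: 120 secs; Authentication retries: 3
```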

Topic Notes: Cisco SDM


Cisco SDM is an intuitive, web-based device management tool for Cisco IOS Software-based routers. Cisco SDM simplifies router and security configuration by using wizards, which help you quickly and easily deploy, configure, and monitor a Cisco router without requiring knowledge of the command-line interface (CLI). Cisco SDM is supported on Cisco 830 Series, Cisco 1700 Series, Cisco 1800 Series, Cisco 2600XM, Cisco 2800 Series, Cisco 3600 Series, Cisco 3700 Series, and Cisco 3800 Series routers, as well as on selected Cisco 7200 Series and Cisco 7301 routers.

Cisco SDM allows you to easily configure routing, switching, security, and quality of service (QoS) services on Cisco routers while helping to enable proactive management through performance monitoring. Whether you are deploying a new router or installing Cisco SDM on an existing router, you can remotely configure and monitor these routers without using the Cisco IOS Software CLI. The Cisco SDM GUI aids nonexpert users of Cisco IOS Software in day-to-day operations, provides easy-to-use smart wizards, automates router security management, and assists you through comprehensive online Help and tutorials.

Cisco SDM smart wizards guide you, step by step, through router and security configuration by systematically configuring LAN and WAN interfaces, firewalls, intrusion prevention systems (IPSs), and IPsec virtual private networks (VPNs). Cisco SDM wizards can intelligently detect incorrect configurations and propose fixes, such as allowing DHCP traffic through a firewall if the WAN interface is DHCP-addressed. Online Help in Cisco SDM contains appropriate background information, in addition to step-by-step procedures to help you enter correct data in Cisco SDM. Networking and security terms, and other definitions that you might need, are included in an online glossary.
For network professionals who are familiar with Cisco IOS Software and its security features, Cisco SDM offers advanced configuration tools that allow you to quickly configure and fine-tune router security features, and to review the commands that are generated by Cisco SDM before delivering the configuration changes to the router.

Cisco SDM helps you configure and monitor routers from remote locations using Secure Sockets Layer (SSL) and Secure Shell version 2 (SSHv2) connections. This technology helps enable a secure connection over the Internet between the user browser and the router. When deployed at a branch office, a router that is enabled with Cisco SDM can be configured and monitored from corporate headquarters, which reduces the need for experienced network administrators at the branch office.

Cisco SDM is supported on a number of Cisco routers and associated Cisco IOS Software versions. Cisco SDM comes preinstalled on several Cisco router models that were manufactured after June 2003 and that were purchased with the VPN bundle.

If you have a router that does not have Cisco SDM installed, and you would like to use Cisco SDM, you must download it from http://www.Cisco.com and install it on your router. Ensure that your router contains enough flash memory to support your existing flash file structure and the Cisco SDM files.

Cisco Configuration Professional is a GUI-based device management tool for Cisco IOS Software-based access routers, including Cisco integrated services routers, Cisco 7200VXR Series Routers, and the Cisco 7301 router. Cisco Configuration Professional simplifies router, firewall, IPS, VPN, unified communications, WAN, and basic LAN configuration through easy-to-use wizards. With Cisco Configuration Professional, you can remotely configure and monitor Cisco routers without using the Cisco IOS Software CLI.

Cisco Configuration Professional is an alternative to Cisco SDM. Like Cisco SDM, Cisco Configuration Professional assumes a general understanding of networking technologies and terms, but assists individuals unfamiliar with the Cisco CLI. Cisco Configuration Professional is currently supported on Windows platforms only. Cisco Configuration Professional is included on a CD at no additional cost with several integrated services routers. It is also available as a free download from http://www.cisco.com/. Always consult the latest information regarding Cisco Configuration Professional router and Cisco IOS Software release support at http://www.cisco.com/.

Cisco SDM User Interface


Configuring Your Router to Support Cisco SDM

You can install and run Cisco SDM on a router that is already in use without disrupting network traffic, but you must ensure that a few configuration settings are present in the router configuration file. Access the CLI using SSH or the console connection to modify the existing configuration before installing Cisco SDM on your router. Step 1 Enable the HTTP and HTTPS servers on your router by entering the following commands in global configuration mode:
Router#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#ip http server
Router(config)#ip http secure-server
Router(config)#ip http authentication local
Router(config)#ip http timeout-policy idle 600 life 86400 requests 10000

Note If the router supports HTTPS, the HTTPS server will be enabled. If not, the HTTP server will be enabled. HTTPS is supported in all images that support the cryptography IPsec feature set, starting from Cisco IOS Release 12.25(T).

Step 2 Create a user account that is defined with privilege level 15 (enable privileges). Enter the following command in global configuration mode, replacing username and password with the strings that you want to use:
Router(config)#username username privilege 15 secret 0 password

For example, if you chose the username "tomato" and the password "vegetable", you would enter:
Router(config)#username tomato privilege 15 secret 0 vegetable

You will use this username and password to log in to Cisco SDM. Step 3 Configure SSH and Telnet for local login and privilege level 15. Use the following commands:
Router(config)#line vty 0 4
Router(config-line)#privilege level 15
Router(config-line)#login local
Router(config-line)#transport input telnet ssh
Router(config-line)#exit
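Before installing Cisco SDM, you can confirm that the prerequisite HTTP settings are present in the running configuration; the output below is abbreviated and illustrative:

```
Router#show running-config | include ip http
ip http server
ip http secure-server
ip http authentication local
ip http timeout-policy idle 600 life 86400 requests 10000
```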

Topic Notes: The Cisco SDM Configuration Interface


Cisco SDM is stored in the router flash memory. It is invoked by executing an HTML file in the router archive, which then loads the signed Cisco SDM Java file. To launch Cisco SDM, complete the following steps:

Step 1 From your browser, enter the following URL: https://router IP address. The https:// designation specifies that the SSL protocol is used for a secure connection. The http:// designation can be used if SSL is not available.

Step 2 The Cisco SDM home page will appear in the browser window, and the username and password dialog box will appear. The type and shape of the dialog box depend on the type of browser that you are using. Enter the username and password for the privileged (privilege level 15) account on your router. The Cisco SDM Java applet will begin loading to your PC.

Step 3 Cisco SDM is a signed Java applet, which may cause your browser to display a security warning. Accept the certificate.

Step 4 Cisco SDM displays the Launch page. When the Launch window appears, Cisco SDM displays the Cisco SDM home page. The home page gives you a snapshot of the router configuration and the features that the Cisco IOS image supports. Cisco SDM starts in wizard mode, in which you can perform configuration tasks using a sequence of windows that break the configuration task into manageable steps.

The home page supplies basic information about the router hardware, software, and configuration, and contains the following sections:

 Host Name : This hostname is the configured name of the router.
 About Your Router : This area shows basic information about your router hardware and software, and contains the fields that are shown in the following tables.

Hardware : Description
Model Type : The router model number.
Available/Total Memory : Available RAM and total RAM.
Total Flash Capacity : Flash plus webflash memory (if applicable).

Software : Description
IOS Version : The version of Cisco IOS Software that is currently running on the router.
Cisco SDM Version : The version of Cisco SDM software that is currently running on the router.
Feature Availability : The features available in the Cisco IOS image that the router is using are designated by a check. The features that Cisco SDM looks for are IP, firewall, VPN, and IPS.

The More Link


The More link displays a popup window that provides additional hardware and software details, as follows:

 Hardware Details : In addition to the information presented in the About Your Router section, this tab displays information about the following:
o Where the router boots from (flash memory or the configuration file)
o Whether the router has accelerators, such as VPN accelerators
o A diagram of the hardware configuration
 Software Details : In addition to the information presented in the About Your Router section, this tab displays information about the feature sets included in the Cisco IOS image.

Configuration Overview
This section of the home page summarizes the configuration settings that have been made. If you want to view the running configuration, click View Running Config.

Interfaces and Connections

This area shows the following information:

 Up : The number of connections that are up.
 Down : The number of connections that are down.
 Double arrow : Click to display or hide details.
 Total Supported LAN : The total number of LAN interfaces that are present in the router.
 Total Supported WAN : The number of WAN interfaces that are present on the router and that are supported by Cisco SDM.
 Configured LAN Interface : The number of supported LAN interfaces that are currently configured on the router.
 Total WAN Connections : The total number of WAN connections that are present on the router and that are supported by Cisco SDM.
 DHCP Server : Configured or not configured.
 DHCP Pool (Detail View) : If one pool is configured, this area shows the starting and ending addresses of the DHCP pool. If multiple pools are configured, it shows a list of configured pool names.
 Number of DHCP Clients (Detail View) : The current number of clients leasing addresses.
 Interface : The name of the configured interface.
o Type : Interface type
o IP Mask : IP address and subnet mask
o Description : Description of the interface

Firewall Policies
This area shows the following information:

 Active : A firewall is in place.
 Inactive : No firewall is in place.
 Trusted : The number of trusted (inside) interfaces.
 Untrusted : The number of untrusted (outside) interfaces.
 DMZ : The number of demilitarized zone (DMZ) interfaces.
 Double arrow : Click to display or hide details.
 Interface : The name of the interface to which a firewall has been applied.
 Firewall icon : Whether the interface is designated as an inside or an outside interface.
 NAT : The name or number of the Network Address Translation (NAT) rule that is applied to this interface.
 Inspection Rule : The names or numbers of the inbound and outbound inspection rules.
 Access Rule : The names or numbers of the inbound and outbound access rules.

Virtual Private Network

This area shows the following information:

Up: The number of active VPN connections.

Double arrow : Click to display or hide details.
IPsec (Site-to-Site) : The number of configured site-to-site VPN connections.
GRE over IPsec : The number of configured Generic Routing Encapsulation (GRE) over IPsec connections.
XAUTH Login Required : The number of Cisco Easy VPN connections awaiting an Extended Authentication (XAUTH) login.

Note Some VPN servers or concentrators authenticate clients using XAUTH. This functionality shows the number of VPN tunnels awaiting an XAUTH login. If any Cisco Easy VPN tunnel is waiting for an XAUTH login, a separate message panel is shown with a Login button. Click Login to enter the credentials for the tunnel. If XAUTH has been configured for a tunnel, it will not begin to function until the login and password have been supplied. There is no timeout after which it will stop waiting; it will wait indefinitely for this information.

 Easy VPN Remote : The number of configured Cisco Easy VPN Remote connections.
 Number of DMVPN Clients : If the router is configured as a Dynamic Multipoint VPN (DMVPN) hub, the number of DMVPN clients.
 Number of Active VPN Clients : If the router is functioning as a Cisco Easy VPN Server, the number of Cisco Easy VPN Clients with active connections.
 Interface : The name of the interface with a configured VPN connection.
 IPsec Policy : The name of the IPsec policy that is associated with the VPN connection.

Routing

This area shows the following information:


Number of Static Routes: The number of static routes that are configured on the router.
Dynamic Routing Protocols: A list of any dynamic routing protocols that are configured on the router.

Intrusion Prevention

This area shows the following information:


Active Signatures: The number of active signatures that the router is using. These signatures may be built-in, or they may be loaded from a remote location.
Number of IPS-Enabled Interfaces: The number of router interfaces on which IPS has been enabled.

Cisco SDM Wizards


Cisco SDM contains several wizard options.

Interfaces and Connections: This menu contains several wizards that are designed to help you configure how the router connects to the network. You can access a LAN wizard to configure the LAN interfaces with a static or DHCP-assigned IP address. You can also access a WAN wizard to configure PPP, Frame Relay, and High-Level Data Link Control (HDLC) WAN interfaces. Additionally, you can configure the router as a DHCP server.
Firewall wizard: This wizard is used to configure the firewall features. You can access a basic firewall setup, which consists of predefined access control lists (ACLs) for standard services, and an advanced firewall setup, where you can define each rule manually.
VPN wizard: This wizard is used to configure the VPN features. You can configure your router as a VPN client for a site-to-site VPN, or as a VPN server for Cisco IOS WebVPN or Cisco Easy VPN.
Security Audit wizards: There are these two options, as follows:
   The router security audit wizard
   An easy one-step router security lockdown wizard
Quality of Service wizard: This wizard is used to configure a basic QoS policy for outgoing traffic on WAN interfaces and IP Security (IPsec) tunnels.

Note At the end of each wizard procedure, all changes are automatically delivered to the router using Cisco SDM-generated CLI commands. You can choose whether to preview the commands to be sent. The default is to not preview the commands.

Topic Notes: The Router as a DHCP Server


Understanding DHCP

DHCP is built on a client-server model. DHCP servers allocate network addresses and deliver configuration parameters to dynamically configured hosts. The term "client" refers to a host that is requesting initialization parameters from a DHCP server. DHCP supports these three mechanisms for IP address allocation:

Automatic allocation: DHCP assigns a permanent IP address to a client.
Dynamic allocation: DHCP assigns an IP address to a client for a limited time (or until the client explicitly relinquishes the address).
Manual allocation: A client IP address is assigned by the network administrator, and DHCP is used simply to convey the assigned address to the client.

Dynamic allocation is the only one of the three mechanisms that allows automatic reuse of an address that is no longer needed by the client to which it was assigned. Dynamic allocation is particularly useful for assigning an address to a client that will be connected to the network only temporarily, or for sharing a limited pool of IP addresses among a group of clients that do not need permanent IP addresses. Dynamic allocation may also be a good choice for assigning an IP address to a new client that is being permanently connected to a network in which IP addresses are so scarce that it is important to reclaim them when old clients are retired.
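As a sketch of how these allocation mechanisms map onto a Cisco IOS DHCP server, manual allocation can be configured with a host pool that binds a fixed address to a specific client identifier (the pool name, address, and client identifier below are hypothetical):

Router(config)#ip dhcp pool PRINTER1
Router(dhcp-config)#host 10.10.10.5 255.255.255.0
Router(dhcp-config)#client-identifier 0100.1122.3344.55

Dynamic allocation, by contrast, uses a network statement to define a shared range of addresses that are leased for a limited time.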

DHCPDISCOVER
When a client boots up for the first time, it transmits a DHCPDISCOVER message on its local physical subnet. Because the client has no way of knowing the subnet to which it belongs, the DHCPDISCOVER message is an all-subnets broadcast (destination IP address of 255.255.255.255). The client does not have a configured IP address, so the source IP address of 0.0.0.0 is used.

DHCPOFFER
A DHCP server that receives a DHCPDISCOVER message may respond with a DHCPOFFER message, which contains initial configuration information for the client. For example, the DHCP server provides the requested IP address. The DHCPOFFER message also contains an Options field that is used to provide additional information such as the subnet mask or the default gateway ("router"). This Options field can also be used to specify several other values, including the IP address lease time, renewal time, domain name server, and NetBIOS Name Service (Microsoft Windows Internet Name Service [Microsoft WINS]). This DHCPOFFER message is sent to the client MAC address at Layer 2. The destination IP address is the address offered by the server.

DHCPREQUEST

After the client receives a DHCPOFFER message, it responds with a DHCPREQUEST message, indicating its intent to accept the parameters in the DHCPOFFER. The DHCPREQUEST is sent to the broadcast address (at Layer 2 and Layer 3), because the client is not yet sure that this address can safely be used (or whether another DHCP client is also going to try to use it).

DHCPACK
After the DHCP server receives the DHCPREQUEST message, it acknowledges the request with a unicast DHCPACK message, thus completing the initialization process.

Using a Cisco Router as a DHCP Server


Cisco routers running Cisco IOS Software provide complete support for a router to be a DHCP server. The Cisco IOS DHCP server is a complete DHCP server implementation that assigns and manages IP addresses from specified address pools within the router to DHCP clients. You can configure a DHCP server to assign additional parameters, such as the IP address of the Domain Name System (DNS) server and the default router. The Cisco IOS DHCP server accepts address assignment requests and renewals and assigns the addresses from predefined groups of addresses that are contained within DHCP address pools. These address pools can also be configured to supply additional information to the requesting client, such as the IP address of the DNS server, the default router, and other configuration parameters. The Cisco IOS DHCP server can accept broadcasts from locally attached LAN segments or from DHCP requests that have been forwarded by other DHCP relay agents within the network.

Using Cisco SDM to Enable the DHCP Server Function


In this example, you enable the DHCP server for the 10.10.10.1/24 interface using a pool of addresses from 10.10.10.100 through 10.10.10.200. This router will be advertised as the default router (default gateway) to the clients. The DHCP server function is enabled from within the Additional Tasks tab in the Cisco SDM tool. From the list, click DHCP Pools. Then click Add to create the new DHCP pool. The Add DHCP Pool window allows you to configure the DHCP IP address pool. The IP addresses that the DHCP server assigns are drawn from a common pool that you configure by specifying the starting and ending IP addresses in the range. The Add DHCP Pool window shows the following fields:

DHCP Pool Name: A character string that identifies the DHCP pool.
DHCP Pool Network and Subnet Mask: The IP addresses that the DHCP server assigns are drawn from a common pool that you configure by specifying the starting IP address in the range and the ending address in the range. The address range that you specify should be within the following private address ranges:
   10.1.1.1 to 10.255.255.255
   172.16.1.1 to 172.31.255.255
   192.168.0.0 to 192.168.255.255
The address range that you specify must also be in the same subnet as the IP address of the LAN interface. The range can represent a maximum of 254 addresses. The following examples are valid ranges:
   10.1.1.1 to 10.1.1.254 (assuming that the LAN IP address is in the 10.1.1.0 subnet)
   172.16.1.1 to 172.16.1.254 (assuming that the LAN IP address is in the 172.16.1.0 subnet)
Cisco SDM configures the router to automatically exclude the LAN interface IP address from the pool. You must not use the following reserved addresses in the range of addresses that you specify:
   The network or subnetwork IP address
   The broadcast address on the network
Starting IP: Enter the beginning of the range of IP addresses for the DHCP server to use in assigning addresses to devices on the LAN. This IP address is the lowest-numbered IP address in the range.
Ending IP: Enter the highest-numbered IP address in the range of IP addresses.
Lease Length: The amount of time that the client may use the assigned address before it must be renewed.
DHCP Options: Use this pane to configure DHCP options that will be sent to hosts on the LAN that request IP addresses from the router. These are not options for the router that you are configuring; these are parameters that will be sent to the requesting hosts on the LAN. To set these properties for the router, click Additional Tasks on the Cisco SDM category bar, click DHCP, and configure these settings in the DHCP Pool window.
DNS Server1: The DNS server is typically a server that maps a known device name with its IP address. If you have a DNS server that is configured for your network, enter the IP address for the server here.
DNS Server2: If there is an additional DNS server on the network, you can enter the IP address for that server in this field.
Domain Name: The DHCP server that you are configuring on this router will provide services to other devices within this domain. Enter the name of the domain here.
WINS Server1: Some clients may require Microsoft WINS to connect to devices on the Internet. If there is a Microsoft WINS server on the network, enter the IP address for the server in this field.
WINS Server2: If there is an additional Microsoft WINS server on the network, enter the IP address for the server in this field.
Default Router: The IP address that will be provided to the client for use as the default gateway.

Import All DHCP Options into the DHCP Server Database: Select this check box to allow the DHCP options to be imported from a higher-level server. This import is typically used with an Internet DHCP server.

DHCP server configuration is supported through Cisco SDM or the Cisco IOS command-line interface (CLI). The Cisco SDM GUI tool introduces an easier way of configuration for users who are not familiar with the Cisco IOS CLI. For more-experienced users, the Cisco IOS CLI provides additional DHCP configuration options and faster configuration. To configure Cisco IOS DHCP, follow these steps:

Step 1: Using the ip dhcp pool global configuration command, create a DHCP IP address pool for the IP addresses that you want to use. The configuration mode changes to DHCP pool configuration mode.
Step 2: Using the network command, specify the network and the subnet to use.
Step 3: Using the domain-name command, define the DNS domain name.
Step 4: Using the dns-server command, define the primary and secondary DNS servers.
Step 5: Using the default-router command, define the default gateway.
Step 6: Using the lease command, specify the lease duration for the addresses that are provided from the DHCP server. The example shows a seven-day lease: lease 7.
Step 7: Using the exit command, exit the DHCP pool configuration mode.
Step 8: Using the ip dhcp excluded-address global configuration command, exclude addresses in the pool range that you do not want to assign to the clients.

The following example shows a configured Cisco IOS DHCP server on a router:
Router(config)#ip dhcp pool mydhcppool
Router(dhcp-config)#network 10.10.10.0 /24
Router(dhcp-config)#domain-name mydhcpdomain.com
Router(dhcp-config)#dns-server 10.10.10.98 10.10.10.99
Router(dhcp-config)#default-router 10.10.10.1
Router(dhcp-config)#lease 7
Router(dhcp-config)#exit
Router(config)#ip dhcp excluded-address 10.10.10.0 10.10.10.99

Topic Notes: Monitoring DHCP Server Functions


Monitoring DHCP Server Functions

You can verify the DHCP configuration parameters from the DHCP Pools tab. You can also view additional information regarding the leased addresses by clicking DHCP Pool Status. The DHCP Pool Status window shows a list of the currently leased addresses.

To verify the operation of DHCP, use the show ip dhcp binding command. This command displays a list of all IP address to MAC address bindings that have been provided by the DHCP service. To display address conflicts that are found by a DHCP server when addresses are offered to the client, use the show ip dhcp conflict command in user EXEC or privileged EXEC mode.
Router#show ip dhcp conflict [ip-address]

The server uses ping to detect conflicts. The client uses gratuitous Address Resolution Protocol (gratuitous ARP) to detect conflicts. If an address conflict is detected, the address is removed from the pool, and the address is not assigned until an administrator resolves the conflict. The following example displays the detection method and detection time for all IP addresses that are offered by the DHCP server that have conflicts with other devices.
RouterX#show ip dhcp conflict
IP address     Detection Method   Detection time
172.16.1.32    Ping               Feb 16 2007 12:28 PM
172.16.1.64    Gratuitous ARP     Feb 23 2007 08:12 AM

Field Descriptions for the show ip dhcp conflict Command

Field               Description
IP address          The IP address of the host as recorded on the DHCP server.
Detection Method    The manner in which the IP addresses of the hosts were found on the DHCP server. This field can be ping or gratuitous ARP.
Detection Time      The date and time when the conflict was found.
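For the show ip dhcp binding command mentioned earlier, each leased address is listed with the client identifier or hardware address, the lease expiration, and the binding type. The output takes a form similar to the following (the address, client ID, and lease time shown are hypothetical):

Router#show ip dhcp binding
IP address      Client-ID/              Lease expiration       Type
                Hardware address
10.10.10.100    0100.1122.3344.55       Feb 23 2007 08:12 AM   Automatic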

Topic Notes: Accessing Remote Devices


Establishing a Telnet or SSH Connection
Telnet

Network administrators can connect to a router or switch locally or remotely. As companies get bigger, and as the number of routers and switches in the network grows, the workload of connecting to all of the devices locally can become overwhelming. Telnet and SSH are virtual terminal protocols that are part of the TCP/IP suite. These protocols allow connections and remote console sessions from one network device to one or more other remote devices.

Remote administrative access is more convenient than local access for administrators who have many devices to manage. However, if it is not implemented securely, an attacker could collect valuable confidential information. For example, implementing remote administrative access using Telnet is insecure because Telnet forwards all network traffic in cleartext. An attacker could capture network traffic while an administrator is logged in remotely to a router and sniff the administrator passwords or router configuration information. Therefore, remote administrative access must be configured with additional security precautions.

Telnet on Cisco routers varies slightly from Telnet on most Cisco Catalyst switches. To log on to a host that supports Telnet, use the telnet user EXEC command. As shown in this example, IP address 10.2.2.2 is used to establish a Telnet session:
RouterA#telnet 10.2.2.2

Instead of an IP address, a hostname could also be used.


Secure Shell

The SSH feature has an SSH server and an SSH integrated client, which are applications that run on the switch. You can use any SSH client running on a PC, or the Cisco SSH client running on the switch, to connect to a switch running the SSH server. To start an encrypted session with a remote networking device, use the ssh user EXEC command. In the following example, the -l cisco option specifies the username that is used to access the SSH-enabled switch SwitchB:
RouterA#ssh -l cisco 10.2.2.2

Instead of an IP address, a hostname could also be used.
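Before an ssh connection such as the one above can succeed, the SSH server must be enabled on the target device. A minimal sketch follows; the domain name and credentials are assumptions, and your platform may support additional options:

SwitchB(config)#ip domain-name example.com
SwitchB(config)#crypto key generate rsa
SwitchB(config)#username cisco secret MyS3cret
SwitchB(config)#line vty 0 4
SwitchB(config-line)#transport input ssh
SwitchB(config-line)#login local

Note that a hostname and domain name must be configured before the RSA key pair can be generated, and the crypto key generate rsa command prompts for a modulus size.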

With Cisco IOS Software installed on a router, the IP address or hostname of the target device is all that is required to establish a Telnet connection. The telnet command that is placed before the target IP address or hostname is used to open a Telnet connection from a Cisco Catalyst switch. For routers and switches, a prompt for console login signifies a successful Telnet connection if login is enabled on the vty ports on the remote device. Once you are logged in to the remote device, the console prompt indicates which device is active on the console. The console prompt uses the hostname of the device.

Use the show sessions command on the originating router or switch to verify Telnet connectivity and to display a list of hosts to which a connection has been established. This command displays the hostname, the IP address, the byte count, the amount of time that the device has been idle, and the connection name that is assigned to the session. If multiple sessions are in progress, the asterisk (*) indicates the last session, the one to which the user will return if the Enter key is pressed.

Use the show users command to learn whether the console port is active and to list all active Telnet or SSH sessions, with the IP address or IP alias of the originating host, on the local device. In the show users output, the "con" line represents the local console, and the "vty" line represents a remote connection. The "11" next to the vty value in the example indicates the vty line number, not its port number. If there are multiple users, the asterisk (*) denotes the current terminal session user.

To display the status of SSH server connections, use the show ssh command in privileged EXEC mode.

Suspending and Resuming a Telnet Session


Once you are connected to a remote device, you may want to access a local device without terminating the Telnet session. Telnet allows temporary suspension and resumption of a remote session. To suspend a Telnet session and escape from the remote target system back to a local switch or router, use the command Ctrl-Shift-6 or Ctrl-^ (depending on your keyboard), followed by the character x. The methods to re-establish a suspended Telnet session are as follows:

Press the Enter key.
Enter the resume command if there is only one session. (Entering resume without the session number argument will resume the last active session.)
Enter the resume session-number command to reconnect to a specific Telnet session. (Enter the show sessions command to find the session number.)
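Assuming an open session to 10.2.2.2, the suspend-and-resume sequence might look like the following (the address and session number are illustrative):

RouterA#telnet 10.2.2.2
(press Ctrl-Shift-6, then x, to suspend the session)
RouterA#show sessions
RouterA#resume 1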

Closing a Telnet Session


You can close a Telnet session on a Cisco network device by using one of the following methods:

From a remote device, use the exit or logout command to log out of the console session and return the session to the local device.
From the local device, use the disconnect command or, when there are multiple sessions, the disconnect session-number command to disconnect a single session.

If a Telnet session from a remote user is causing bandwidth or other types of problems, you should close the session. Alternatively, network staff can terminate the session from their console. To close a Telnet session that originates from a foreign host, use the clear line line-number command. The line-number option corresponds to the vty port of the incoming Telnet session; in this example, the line number is 11. Use the show users command to determine the line-number value. At the other end of the connection, the user receives a notice that the connection was "closed by a foreign host."
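The two approaches described above might look like the following (the session and line numbers are illustrative):

RouterA#disconnect 1
(on the originating device, closes outgoing session 1)
RouterB#clear line 11
(on the device receiving the unwanted session, clears vty line 11)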

Alternate Connectivity Tests


The ping and traceroute commands provide information about connectivity to remote devices and the path to them. You can compile relevant device information about local and remote networks by using Cisco Discovery Protocol, SSH, and Telnet. This information is useful for creating and maintaining a network topology map. Other tools that can aid in testing and troubleshooting a network topology are as follows:

The ping command verifies network connectivity. Ping tells the minimum, average, and maximum times it takes for ping packets to find the specified system and return. This information can validate the reliability of the path to a specified system.

This table lists the possible output characters from the ping command output.
Ping Command Output

Character   Description
!           Receipt of a reply
.           The network server timed out while waiting for a reply
U           A destination unreachable protocol data unit (PDU) was received
Q           Source quench (destination too busy)
M           A router along the path could not fragment the transmitted packet; the packet was larger than the maximum transmission unit (MTU) allowed on the affected link
?           Unknown packet type
&           Lifetime of the packet was exceeded

The traceroute command shows the actual routes that packets take between network devices. A device, such as a router or switch, sends out a sequence of User Datagram Protocol (UDP) datagrams to an invalid port address at the remote host. Three datagrams are sent, each with a Time to Live (TTL) field value that is set to 1. The TTL value of 1 causes the datagram to time out as soon as it reaches the first router in the path. The router then responds with an Internet Control Message Protocol (ICMP) Time Exceeded Message (TEM), indicating that the datagram has expired.

Another three UDP messages are then sent, each with the TTL value set to 2, which causes the second router to return ICMP TEMs. Traceroute then progressively increments the TTL field (3, 4, 5, and so on) for each sequence of messages. This sequence provides traceroute with the address of each hop as the packets time out farther down the path. The TTL field continues to be increased until the destination is reached or the TTL reaches a predefined maximum. Once the final destination is reached, the host responds with either an ICMP port unreachable message or an ICMP echo-reply message instead of the ICMP TEM. The purpose is to record the source of each ICMP TEM in order to provide a trace of the path that the packet took to reach the destination.

This table lists the characters that can appear in the traceroute command output.
Traceroute Command Output

Character   Description
nn msec     For each node, the round-trip time (RTT) in milliseconds for the specified number of probes
*           The probe timed out
A           Administratively prohibited (for example, access list)
Q           Source quench (destination too busy)
I           User-interrupted test
U           Port unreachable
H           Host unreachable
N           Network unreachable
P           Protocol unreachable
T           Timeout
?           Unknown packet type

Note If IP domain name lookup is enabled, the router will attempt to resolve each IP address to a name, which can cause the traceroute command to slow down.

Topic Notes: WAN Characteristics


What is a WAN?
A WAN is a data communications network that operates beyond the geographic scope of a LAN. WANs use facilities that are provided by a service provider, or carrier, such as a telephone or cable company. They connect the locations of an organization to each other, to locations of other organizations, to external services, and to remote users. WANs generally carry various traffic types, such as voice, data, and video. Here are the three major characteristics of WANs:

WANs generally connect devices that are separated by a broader geographical area than a LAN can serve.
WANs use the services of carriers, such as telephone companies, cable companies, satellite systems, and network providers.
WANs use serial connections of various types to provide access to bandwidth over large geographic areas.

Why are WANs Necessary?


There are several reasons why WANs are necessary in a communications environment. LAN technologies provide speed and cost efficiency for the transmission of data in organizations in relatively small geographic areas. However, there are other business needs that require communication among remote users, including the following:

People in the regional or branch offices of an organization need to be able to communicate and share data.
Organizations often want to share information with other organizations across large distances. For example, software manufacturers routinely communicate product and promotion information to distributors that sell their products to end users.
Employees who travel on company business frequently need to access information that resides on their corporate networks.

In addition, home computer users need to send and receive data across increasingly larger distances. Here are some examples:

It is now common in many households for consumers to communicate with banks, stores, and various providers of goods and services via computers.
Students do research for classes by accessing library indexes and publications that are located in other parts of their country and in other parts of the world.

Because it is obviously not feasible to connect computers across a country or around the world in the same way that computers are connected in a LAN environment with cables, different technologies have evolved to support this need. Increasingly, the Internet is being used as an inexpensive alternative to using an enterprise WAN for some applications. New technologies are available to businesses to provide security and privacy for their Internet communications and transactions. WANs used by themselves, or in concert with the Internet, allow organizations and individuals to meet their wide-area communication needs.

How is a WAN Different from a LAN?


WANs are different from LANs in several ways. While a LAN connects computers, peripherals, and other devices in a single building or other small geographic area, a WAN allows the transmission of data across broad geographic distances. In addition, a company or organization must subscribe to an outside WAN service provider to use WAN carrier network services. The company or organization that uses a LAN typically owns it. Features of a LAN include the following:

The organization has the responsibility of installing and managing the infrastructure.
Ethernet is the most common technology that is used.
The LAN connects users and provides support for localized applications and server farms.
Connected devices are usually in the same local area, such as a building or a campus.

Features of a WAN include the following:


Connected sites are usually geographically dispersed.
Connectivity to the WAN requires a device such as a modem or CSU/DSU to put the data in a form that is acceptable to the network of the service provider.
WAN services include T1 to T3 lines, E1 to E3 lines, DSL, cable, Frame Relay, and ATM.
The ISP has the responsibility of installing and managing the WAN infrastructure.
The edge devices modify the Ethernet encapsulation to a serial WAN encapsulation.

Some WANs are privately owned; however, because the development and maintenance of a private WAN is expensive, only very large organizations can afford to maintain a private WAN. Most companies purchase WAN connections from a service provider or ISP. The ISP is then responsible for maintaining the back-end network connections and network services between the LANs.

When an organization has many global sites, establishing WAN connections and service can be complex. For example, the major ISP for the organization may not offer service in every location or country in which the organization has an office. As a result, the organization must purchase services from multiple ISPs. Using multiple ISPs often leads to differences in the quality of services that are provided. In many emerging countries, for example, network designers will find differences in equipment availability, WAN services that are offered, and encryption technology for security. To support an enterprise network, it is important to have uniform standards for equipment, configuration, and services.

WAN Access and the OSI Reference Model


WANs operate in relation to the OSI reference model, primarily at Layer 1 and Layer 2. WAN access standards typically describe physical layer delivery methods and data link layer requirements, including physical addressing, flow control, and encapsulation. A number of recognized authorities, including the ISO, the TIA, and the EIA, define and manage WAN access standards.

The physical layer (OSI Layer 1) protocols describe how to provide electrical, mechanical, operational, and functional connections to the services of a communications service provider. The data link layer (OSI Layer 2) protocols define how data is encapsulated for transmission toward a remote location and the mechanisms for transferring the resulting frames. A variety of different technologies are used, such as Frame Relay, ATM, and Ethernet. Several of these protocols use the same basic framing mechanism, High-Level Data Link Control (HDLC), which is an ISO standard, or one of its subsets or variants.
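As a brief illustration of how the Layer 2 framing is chosen on a Cisco router, the encapsulation interface command selects the framing used on a serial link (the interface numbering is an assumption; HDLC is the Cisco default on serial interfaces):

RouterA(config)#interface serial 0/0
RouterA(config-if)#encapsulation hdlc

Replacing hdlc with ppp or frame-relay selects those encapsulations instead, provided both ends of the link agree.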

WAN Devices
There are several devices that operate at the physical layer in a WAN. WANs use numerous types of devices that are specific to WAN environments, including the following:

Modem: Modulates an analog carrier signal to encode digital information, and also demodulates the carrier signal to decode the transmitted information. A voiceband modem, such as one used in a dial-up connection, converts the digital signals that are produced by a computer into voice frequencies that can be transmitted over the analog lines of the public telephone network. On the other side of the connection, another modem converts the sounds back into a digital signal for input to a computer or network connection. Faster modems, such as cable modems and DSL modems, transmit using higher broadband frequencies.
CSU/DSU: Digital lines, such as T1 or T3 carrier lines, require a CSU and a DSU. The two are often combined into a single piece of equipment that is called the CSU/DSU. The CSU provides termination for the digital signal and ensures connection integrity through error correction and line monitoring. The DSU converts the T-carrier line frames into frames that the LAN can interpret, and vice versa.
Access server: Centralizes dial-in and dial-out user communications. An access server may have a mixture of analog and digital interfaces, and supports hundreds of simultaneous users.
WAN switch: A multiport internetworking device that is used in carrier networks. These devices typically switch traffic such as Frame Relay, ATM, or X.25, and operate at the data link layer of the OSI reference model. Public switched telephone network (PSTN) switches may also be used within the cloud for circuit-switched connections like ISDN or analog dialup.
Router: Provides internetworking and WAN access interface ports that are used to connect to the service provider network. These interfaces may be serial connections or other WAN interfaces. With some types of WAN interfaces, an external device such as a CSU/DSU or modem (analog, cable, or DSL) is required to connect the router to the local point of presence (POP) of the service provider.
Core router: A router that resides within the middle or backbone of the WAN, rather than at its periphery. To fulfill this role, a router must be able to support multiple telecommunications interfaces of the highest speed in use in the WAN core. It also must be able to forward IP packets at wire speed on all of those interfaces. The router must also support the routing protocols being used in the core.

Topic Notes: WAN Access Methods


Devices on the subscriber premises are referred to as customer premises equipment (CPE). The subscriber owns the CPE or leases the CPE from the service provider. A copper or fiber cable connects the CPE to the nearest exchange or central office (CO) of the service provider. This cabling is often called the local loop or "last mile."

Transmission of analog data (such as a telephone call) is connected locally to other local loops, or to remote locations through a trunk to a primary center. Analog data then goes to a sectional center and on to a regional or international carrier center as the call travels to its destination. For the local loop to carry data, however, a device such as a modem or CSU/DSU is needed to prepare the data for transmission.

Devices that put data on the local loop are called data circuit-terminating equipment (DCE). The customer devices that pass the data to the DCE are called data terminal equipment (DTE). The DCE primarily provides an interface for the DTE into the communication link on the WAN cloud. The WAN physical layer determines the interface between the DTE and the DCE.

WAN Cabling
WAN physical layer protocols describe how to provide electrical, mechanical, operational, and functional connections for WAN services. The WAN physical layer also determines the interface between the DTE and the DCE. The DTE and DCE interfaces on Cisco routers use various physical layer protocols, including the following:

- EIA/TIA-232: This protocol allows signal speeds of up to 64 kb/s on a 25-pin D-connector over short distances. It was formerly known as RS-232. The ITU-T V.24 specification is effectively the same.
- EIA/TIA-449, -530: This protocol is a faster (up to 2 Mb/s) version of EIA/TIA-232. It uses a 36-pin D-connector and is capable of longer cable runs. There are several versions. This standard is also known as RS-422 and RS-423.
- EIA/TIA-612, -613: This standard describes the High-Speed Serial Interface (HSSI) protocol, which provides access to services at up to 52 Mb/s on a 60-pin D-connector.
- V.35: This is the ITU-T standard for synchronous communications between a network access device (NAD) and a packet network. Originally specified to support data rates of 48 kb/s, it now supports speeds of up to 2.048 Mb/s using a 34-pin rectangular connector.
- X.21: This protocol is an ITU-T standard for synchronous digital communications. It uses a 15-pin D-connector.
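On a Cisco router, the standard in use (and whether the router is the DTE or DCE end) is determined by the cable that is attached, and it can be verified from the CLI. A quick check, with the interface numbering here being an arbitrary example:

```
Router# show controllers serial 0/0/0
```

The first lines of output name the detected cable, for example a V.35 DTE cable, or report that no cable is attached. The exact output format varies by platform.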

These protocols establish the codes and electrical parameters that the devices use to communicate with each other. The method of facilitation that is used by the service provider largely determines the choice of protocol. When you order the cable, you receive a shielded serial transition cable that has the appropriate connector for the standard that you specify. The router end of the shielded serial transition cable has a DB-60 connector, which connects to the DB-60 port on a serial WAN interface card (WIC). Because five different cable types are supported with this port, the port is sometimes called a five-in-one serial port. The other end of the serial transition cable is available with the connector that is appropriate for the standard that you specify. The documentation for the device to which you want to connect should indicate the standard for that device. Your CPE, in this case a router, is the DTE. The DCE, commonly a modem or a CSU/DSU, is the device that is used to convert the user data from the DTE into a form acceptable to the WAN service provider. The synchronous serial port on the router is configured as DTE or DCE (except EIA/TIA-530, which is DTE only), depending on the attached cable, which is ordered as either DTE or DCE to match the router configuration. If the port is configured as DTE (the default setting), it requires external clocking from the DCE device.

Note: To support higher densities in a smaller form factor, Cisco introduced the Smart Serial cable. The serial end of the Smart Serial cable is a 26-pin connector, which is much smaller than the DB-60 connector that is used to connect to a five-in-one serial port. These transition cables support the same five serial standards, are available in either a DTE or DCE configuration, and are used with two-port serial connections as well as with two-port asynchronous and synchronous WICs.
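In a lab or back-to-back setup where the router itself holds the DCE end of the serial cable, the router must supply the clocking. A minimal sketch, where the interface numbering and the 64 kb/s rate are arbitrary examples:

```
Router(config)# interface Serial0/0/0
Router(config-if)# clock rate 64000
Router(config-if)# no shutdown
```

A router whose cable is the DTE end takes its clock from the attached CSU/DSU instead, so no clock rate command is needed there.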

Role of Routers in WANs


An enterprise WAN is a collection of separate but connected LANs. Routers play a central role in transmitting data through this interconnected network. Routers have LAN and WAN interfaces; while a router is used to segment LANs, it is also used as the WAN access connection device. The functions and role of a router in accessing the WAN can be best understood by looking at the types of connections that are available on the router. There are three basic types of connections on a router: LAN interfaces, WAN interfaces, and management ports. LAN interfaces allow the router to connect to the LAN media through Ethernet or another LAN technology, such as Token Ring or ATM. WAN connections are made through a WAN interface on a router. These connections may be serial connections or any number of other WAN interfaces. With some types of WAN interfaces, an external device such as a CSU/DSU or modem (such as an analog modem, cable modem, or DSL modem) is required to connect the router to the local POP of the service provider. The physical demarcation point is the place where responsibility for the connection changes from the user to the service provider. This point is important because, when problems arise, each side of the link needs to establish whether the problem resides on its side of the demarcation or not. Increasingly, LAN interfaces are also used for WAN connections, with Ethernet connecting the router to modems, wireless access points, or directly to the ISP network. The management ports provide a text-based connection that allows for configuration and troubleshooting of a router. The common management interfaces are the console and auxiliary ports. These ports are connected to a communications port on a computer. The computer must run a terminal emulation program to provide a text-based session with the router, which enables you to manage the device.

WAN Data-Link Protocols


In addition to physical layer devices, WANs require data link layer protocols to establish the link across the communication line from the sending to the receiving device. Data link layer protocols define how data is encapsulated for transmission to remote sites and the mechanisms for transferring the resulting frames. A variety of technologies, such as ISDN, Frame Relay, and ATM, are used. Several of these protocols use the same basic framing mechanism, HDLC, which is an ISO standard, or one of its subsets or variants. ATM is the most different, because it uses small fixed-size cells of 53 bytes (48 bytes for data). The WAN data-link protocols are as follows:

- HDLC
- PPP
- Frame Relay
- ATM
- Metro Ethernet
- Multiprotocol Label Switching (MPLS)
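On a Cisco serial interface, the data-link protocol is selected with the encapsulation command. HDLC is the Cisco default; a minimal sketch of switching a link to PPP, with the interface numbering as an arbitrary example:

```
Router(config)# interface Serial0/0/0
Router(config-if)# encapsulation ppp
Router(config-if)# exit
```

Both ends of the link must agree on the encapsulation. The encapsulation hdlc command restores the default, and show interfaces serial 0/0/0 reports the encapsulation currently in use.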

ISP service to home networks is currently often delivered over Ethernet at LAN-type speeds. Within metro areas (and beyond), companies are connecting using 1 Gigabit Ethernet and 10 Gigabit Ethernet, which is sometimes leased from a telecommunication company and sometimes company-owned. One of the most widely deployed technologies that Cisco supports is Metro Ethernet. Another data link layer protocol is the MPLS protocol. Service providers increasingly deploy MPLS to provide an economical solution to carry circuit-switched as well as packet-switched network traffic. It can operate over any existing infrastructure, such as IP, Frame Relay, ATM, or Ethernet. It sits between Layer 2 and Layer 3, and is sometimes referred to as a Layer 2.5 protocol.

Metro Ethernet
Metro Ethernet is a rapidly maturing networking technology that extends Ethernet into the WAN through services offered by telecommunications companies. IP-aware Ethernet switches enable service providers to offer enterprises converged voice, data, and video services such as IP telephony, video streaming, imaging, and data storage. By extending Ethernet to the metropolitan area, companies can provide their remote offices with reliable access to applications and data on the corporate headquarters LAN. Benefits of Metro Ethernet include the following:

- Reduced expenses and administration: Metro Ethernet provides a switched, high-bandwidth Layer 2 network that is capable of managing data, voice, and video all on the same infrastructure. This characteristic increases bandwidth and eliminates expensive conversions to ATM and Frame Relay. The technology enables businesses to inexpensively connect numerous sites in a metropolitan area to each other and to the Internet.
- Easy integration with existing networks: Metro Ethernet connects easily to existing Ethernet LANs, thus reducing installation costs and time.
- Enhanced business productivity: Metro Ethernet enables businesses to take advantage of productivity-enhancing IP applications that are difficult to implement on time-division multiplexing (TDM) or Frame Relay networks, such as hosted IP communications, VoIP, and streaming and broadcast video.

WAN Communication Link Options


There are a number of ways in which WANs are accessed, depending on the data transmission requirements for the WAN. Many options for implementing WAN solutions are currently available. They differ in technology, speed, and cost. Familiarity with these technologies is an important part of network design and evaluation. WAN connections can be either over a private infrastructure or over a public infrastructure, such as the Internet.

Private WAN Connection Options


Private WAN connections include dedicated and switched communication link options.

- Dedicated communication links: When permanent dedicated connections are required, point-to-point lines are used with various capacities that are limited only by the underlying physical facilities and the willingness of users to pay for these dedicated lines. A point-to-point link provides a pre-established WAN communications path from the customer premises through the provider network to a remote destination. Point-to-point lines are usually leased from a carrier and are also called leased lines.
- Switched communication links: Switched communication links can be either circuit-switched or packet-switched.

Public WAN Connection Options


Public connections use the global Internet infrastructure. Until recently, the Internet was not a viable networking option for many businesses because of the significant security risks and lack of adequate performance guarantees in an end-to-end Internet connection. With the development of VPN technology, however, the Internet is now an inexpensive and secure option for connecting teleworkers and remote offices where performance guarantees are not critical.

Internet WAN connection links are through broadband services such as DSL, cable modem, and broadband wireless, and are combined with VPN technology to provide privacy across the Internet. Broadband connection options are typically used to connect telecommuting employees to a corporate site over the Internet.

Last Mile and Long Range WAN Technologies


ISPs use several different WAN technologies to connect their subscribers. The connection type that is used on the local loop, or last mile, may not be the same as the WAN connection type that is employed within the ISP network or between various ISPs. Each of these technologies provides advantages and disadvantages for the customer. Not all technologies are available in all locations. When a service provider receives data, it must forward this data to other remote sites for final delivery to the recipient. These remote sites connect either to the ISP network or pass from ISP to ISP to the recipient. Long-range communications are usually those connections between ISPs or between branch offices in very large companies.

Enterprises are becoming larger and more dispersed. As a result, applications require more bandwidth. This growth requires technologies that support high-speed and high-bandwidth transfer of data over even greater distances. SONET and Synchronous Digital Hierarchy (SDH) are standards that allow the movement of large amounts of data over great distances through fiber-optic cables. Both SONET and SDH encapsulate earlier digital transmission standards and support either ATM, Packet over SONET (POS), or SDH networking. SDH and SONET are used for moving both voice and data.

One of the newer developments for extremely long-range communications is dense wavelength-division multiplexing (DWDM). DWDM assigns incoming optical signals to specific frequencies or wavelengths of light. It is also capable of amplifying these wavelengths to boost the signal strength. DWDM can multiplex more than 80 different wavelengths or channels of data onto a single piece of fiber, with each channel capable of carrying a multiplexed signal at 10 Gb/s. Demultiplexing at the receiving end allows a single piece of fiber to carry many different formats at the same time and at different data rates. For example, DWDM can carry IP, SONET, and ATM data concurrently.

Topic Notes: WAN Physical Connections


Packet-Switched Communication Links
Many WAN users do not make efficient use of the fixed bandwidth that is available with dedicated, switched, or permanent circuits, because the data flow fluctuates. Communications providers have data networks available to more appropriately serve these users. In packet-switched networks, the data is transmitted in labeled frames, cells, or packets. Packet switching is a switching method in which there is no dedicated path between the source and destination endpoints. This method allows for the sharing of connection links and common carrier resources for data transmission. Packet-switched networks send data packets over different routes of a shared public network to reach the same destination. Instead of providing a dedicated communication path, the carrier provides a network to its subscribers and ensures that data that is received from one site exits toward another specific site. Packet-switched networks do not require the establishment of a dedicated circuit, and they allow many pairs of nodes to communicate over the same channel. However, the route that the packets take to reach the destination site can vary. When the packets reach their destination, it is the responsibility of the receiving protocol to ensure that they are reassembled in the correct order. The switches in a packet-switched network determine, from the addressing information in each packet, the link on which the packet must be sent next. There are two approaches to this link determination: connectionless or connection-oriented:

- Connectionless systems, such as the Internet, carry complete addressing information in each packet. Each switch must evaluate the address to determine where to send the packet.
- Connection-oriented systems predetermine the route for a packet, and each packet only has to carry an identifier. In the case of Frame Relay, these are called data-link connection identifiers (DLCIs). The switch determines the onward route by looking up the identifier in tables that are held in memory. The set of entries in the tables identifies a particular route or circuit through the system. If this circuit exists only logically while a packet is traveling through it, it is called a virtual circuit (VC). The circuit, or pathway, between the source and destination is often a preconfigured link, but it is not an exclusive link. When the customer does not use the complete bandwidth on its VC, the carrier, through statistical multiplexing, can make that unused bandwidth available to another customer.
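On a Cisco router, a Frame Relay virtual circuit is tied to its DLCI with the frame-relay interface-dlci command. A minimal sketch on a point-to-point subinterface, where the interface numbering, addressing, and DLCI 102 are all arbitrary examples:

```
Router(config)# interface Serial0/0/0
Router(config-if)# encapsulation frame-relay
Router(config-if)# exit
Router(config)# interface Serial0/0/0.102 point-to-point
Router(config-subif)# ip address 10.1.1.1 255.255.255.252
Router(config-subif)# frame-relay interface-dlci 102
```

The DLCI has local significance only; show frame-relay pvc reports the status of each configured circuit.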

Because the internal links between the switches are shared between many users, the costs of packet switching are lower than the costs of circuit switching. Delays (latency) and variability of delay (jitter) are greater in packet-switched than in circuit-switched networks. This behavior is because the links are shared, and packets must be entirely received at one switch before moving to the next. Despite the latency and jitter inherent in shared networks, modern technology allows satisfactory transport of voice and even video communications on these networks.

Packet switching enables you to reduce the number of links to the network. It allows the carrier to make more efficient use of its infrastructure, so the overall cost is generally lower than with discrete point-to-point lines, or leased lines.

Digital Subscriber Line


DSL technology is an always-on connection technology that uses existing twisted-pair telephone lines to transport high-bandwidth data and provides IP services to subscribers. A DSL modem is used to convert an Ethernet signal from users to a DSL signal to the central office (CO). In the early 1950s, Bell Labs identified that although the physical cabling was capable of supporting frequencies from 300 Hz to 1 MHz, a typical voice conversation over a local loop only required bandwidth of 300 Hz to 3 kHz. For many years, the telephone networks were designed to use this lower bandwidth. Advances in technology allowed DSL to use the additional bandwidth from 3 kHz up to 1 MHz to deliver high-speed data services over ordinary copper lines. Service providers deploy DSL connections in the last step of a local telephone network, called the local loop or last mile. The connection is set up between a pair of modems on either end of a copper wire that extends between the customer premises equipment (CPE) and the DSL access multiplexer (DSLAM). A DSLAM is the device that is located at the CO of the provider and concentrates connections from multiple DSL subscribers, incorporating time-division multiplexing (TDM) technology. The two key components that are needed to provide a DSL connection to a SOHO are the DSL transceiver and the DSLAM.

- Transceiver: Connects the computer of the teleworker to the DSL. Usually the transceiver is a DSL modem that is connected to the computer using a Universal Serial Bus (USB) or Ethernet cable. Newer DSL transceivers can be built into small routers with multiple 10/100 switch ports that are suitable for home office use.
- DSLAM: Located at the CO of the carrier, the DSLAM combines individual DSL connections from users into one high-capacity link to an ISP and to the Internet.

DSL availability is far from universal and there is a wide variety of types, standards, and emerging standards. DSL is now a popular choice for enterprise IT departments to support home workers. Generally, a subscriber cannot choose to connect to an enterprise network directly. The subscriber must first connect to an ISP and then an IP connection is made through the Internet to the enterprise. Security risks are incurred in this process.

DSL Types and Standards


The two basic types of DSL technologies are as follows:

Asymmetric DSL (ADSL): Provides higher download bandwidth than upload bandwidth.

Symmetric DSL (SDSL): Provides the same capacity of bandwidth in both directions.

All forms of DSL service are categorized as asymmetric or symmetric, but there are several varieties of each type. ADSL includes the following forms:

- ADSL, ADSL2, ADSL2+
- Consumer DSL, also called G.Lite or G.992.2
- Very-high-data-rate DSL (VDSL, VDSL2)

SDSL includes the following forms:


- SDSL
- High-data-rate DSL (HDSL)
- ISDN DSL (IDSL)
- Symmetric high-bit-rate DSL (G.SHDSL)

Current DSL technologies use sophisticated coding and modulation techniques to achieve high data rates. ADSL reaches greater distances than other DSL types, but the achievable speed of ADSL transmissions degrades as the distance increases. The maximum distance is limited to approximately 18,000 feet (5.5 km) from the CO. ADSL2 and ADSL2+ are enhancements to basic ADSL and provide a downstream bandwidth of up to 24 Mb/s and an upstream bandwidth of up to 1.5 Mb/s. VDSL2 offers the highest operational speed but has the shortest achievable distance. VDSL2 deteriorates quickly from a theoretical maximum of 250 Mb/s at the source to 100 Mb/s at 1640 feet (0.5 km) and 50 Mb/s at 3280 feet (1 km).

DSL Considerations
DSL service can be added incrementally in any area. A service provider can upgrade the bandwidth to coincide with a growth in the number of subscribers. DSL is also backward-compatible with analog voice and makes good use of the existing local loop, which means that it is easy to use DSL service simultaneously with normal phone service. Another advantage that DSL has over cable technology is that DSL is not a shared medium. Each user has a separate direct connection to the DSLAM. Adding users does not impede performance unless the uplink from the DSLAM to the ISP, or the Internet itself, becomes saturated. However, DSL suffers from distance limitations. Most DSL service offerings currently require the customer to be within 18,000 feet (5.5 km) of the CO location of the provider, and older, longer loops present problems. Additionally, upstream (upload) speed is usually considerably slower than the downstream (download) speed. The always-on technology of DSL can also present security risks, because potential hackers have greater access.

Cable

Another technology that has become increasingly popular as a WAN communications access option is the IP-over-Ethernet Internet service that is delivered by cable networks. Coaxial cable is widely used in urban areas to distribute television signals. Network access is available from some cable television networks. This technology allows for greater bandwidth than the conventional telephone local loop. Cable modems provide an always-on connection and a simple installation. A subscriber connects a computer or LAN router to the cable modem, which translates the digital signals into the broadband frequencies that are used for transmitting on a cable television network. The local cable TV office, which is called the cable headend, contains the computer system and databases that are needed to provide Internet access. The most important component that is located at the headend is the cable modem termination system (CMTS). It sends and receives digital cable modem signals on a cable network and is necessary for providing Internet services to cable subscribers. Cable modem subscribers must use the ISP that is associated with the service provider. All the local subscribers share the same cable bandwidth. As more users join the service, available bandwidth may be below the expected rate.

Global Internet: The Largest WAN


The Internet can be thought of as a WAN that spans the globe. A global mesh of interconnected networks (internetworks) meets human communication needs. Large public and private organizations own some of these interconnected networks, such as government agencies or industrial enterprises, and those networks are reserved for their exclusive use. The most well-known and widely used publicly accessible internetwork is the Internet.

In the 1960s, researchers at the U.S. Department of Defense wanted to build a command-and-control network by linking several of their computing facilities around the country. This early WAN could be vulnerable, however, to natural disaster or military attack. Therefore, it was necessary to ensure that if part of the network was destroyed, the rest of the system would still function. The network would have no central authority, and the computers running it could automatically reroute the flow of information around any broken links. The Department of Defense researchers devised a way to break messages into multiple parts and send each part separately to its destination, where the message would be reassembled. This method of data transmission is now known as a packet system. This packet system, which was made public by the military in 1964, was also being researched at Massachusetts Institute of Technology (MIT); University of California, Los Angeles (UCLA); and the National Physical Laboratory in the United Kingdom. In 1969, UCLA installed the first computer on this network. Several months later, there were four computers on this network, which was named the Advanced Research Projects Agency Network (ARPANET).

In 1972, the first email messaging software was developed so that ARPANET developers could more easily communicate and coordinate on projects. Later that year, a program that allowed users to read, file, forward, and respond to messages was developed. Throughout the 1970s and 1980s, the network expanded as technology became more sophisticated. In 1984, the Domain Name System (DNS) was introduced and gave the world domain suffixes (such as .edu, .com, .gov, and .org) as well as a series of country codes. This system made the Internet more manageable. Without DNS, users had to remember the IP address of every Internet site they wanted to visit: a long series of numbers instead of a string of words.

In 1989, Timothy Berners-Lee began work on a means to better facilitate communication among physicists around the world, based on the concept of hypertext, which would allow electronic documents to be linked directly to each other. The eventual result of linking documents was the World Wide Web. Standard formatting languages, such as HTML and its variants, allow web pages to display formatted text, graphics, and multimedia. A web browser can read and display HTML documents, and can access and download related files and software. The web was popularized by the 1993 release of a graphical, easy-to-use browser called Mosaic. Although the web began as only one component of the Internet, it is clearly the most popular, and the two are now nearly synonymous.

Throughout the 1990s, PCs became more powerful and less expensive, allowing millions of people to buy them for their homes and offices. ISPs, such as America Online (AOL), CompuServe, and many local providers, began offering affordable dialup connections to the Internet. To accommodate the need for increased speed, cable service providers began to offer access through cable network facilities and technologies.
Today, the Internet has grown into the largest network on the earth, providing access to information and communication for business and home users. The Internet can be seen as a network of networks that consists of a worldwide mesh of hundreds of thousands of networks. Millions of companies and individuals all over the world, all connected to thousands of ISPs, own and operate these networks. These ISP networks connect to each other to provide access for millions of users all over the world. Ensuring effective communication across this diverse infrastructure requires the application of consistent and commonly recognized technologies and protocols as well as the cooperation of many network administration agencies.

Topic Notes: Enabling the Internet Connection


Obtaining an Interface Address from a DHCP Server
The DHCP service enables devices on a network to obtain IP addresses and other information from a DHCP server. This service automates the assignment of IP addresses, subnet masks, gateways, and other IP networking parameters. An ISP will sometimes provide a static address for an interface that is connected to the Internet. In other cases, this address is provided using DHCP. On larger local networks, or where the user population changes frequently, DHCP is preferred. New users may arrive with laptops and need a connection. Others have new workstations that need to be connected. Rather than have the network administrator assign IP addresses for each workstation, it is more efficient to have IP addresses assigned automatically using DHCP. If the ISP uses DHCP to provide interface addressing, no manual address can be configured. Instead, the interface is configured to operate as a DHCP client. This configuration means that when the router is connected to a cable modem, for example, it is a DHCP client and requests an IP address from the ISP.
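On a Cisco router, configuring the ISP-facing interface as a DHCP client takes a single interface command. A minimal sketch, with the interface numbering as an arbitrary example:

```
Router(config)# interface FastEthernet0/1
Router(config-if)# ip address dhcp
Router(config-if)# no shutdown
```

Once the DHCP exchange with the provider completes, show ip interface brief displays the leased address on the interface.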

Introducing NAT and PAT


Small networks are commonly implemented using private IP addressing. Private addressing gives enterprises considerable flexibility in network design. This addressing enables operationally and administratively convenient addressing schemes as well as easier growth. However, you cannot route private addresses over the Internet, and there are not enough public addresses to allow organizations to provide one to each of their hosts. Therefore, networks need a mechanism to translate private addresses to public addresses. The mechanism must be at the edge of the network and must work in both directions. Without a translation system, private hosts behind a router in the network of one organization cannot connect with private hosts behind a router in other organizations over the Internet. NAT provides this mechanism. Before NAT, a host with a private address could not access the Internet. Using NAT, individual companies can address some or all of their hosts with private addresses and provide access to the Internet.

NAT is like the receptionist in a large office. Assume that you have left instructions with the receptionist not to forward any calls to you unless you request it. Later on, you call a potential client and leave a message for them to call you back. You tell the receptionist that you are expecting a call from this client, and you ask the receptionist to put them through to your telephone. The client calls the main number to your office, which is the only number that the client knows. When the client tells the receptionist who they are looking for, the receptionist checks a lookup table that matches your name to your extension. The receptionist knows that you requested this call; therefore, the receptionist forwards the caller to your extension.

NAT operates on a Cisco router and is designed for IP address simplification and conservation. Usually, NAT connects two networks together and translates the private (inside local) addresses in the internal network into public (inside global) addresses before packets are forwarded to another network. You can configure NAT to advertise only one address for the entire network to the outside world. Advertising only one address effectively hides the internal network from the world, thus providing additional security. Any device that sits between an internal network and the public network, such as a firewall, a router, or a computer, uses NAT, which is defined in RFC 1631. In NAT terminology, the "inside network" is the set of networks that are subject to translation. The "outside network" refers to all other addresses. Usually, these are valid addresses that are located on the Internet. Cisco defines the following NAT terms:

- Inside local address: The IP address that is assigned to a host on the inside network. The inside local address is likely not an IP address that is assigned by the Network Information Center (NIC) or service provider.
- Inside global address: A legitimate IP address that is assigned by the NIC or service provider and that represents one or more inside local IP addresses to the outside world.
- Outside local address: The IP address of an outside host as it appears to the inside network. Not necessarily legitimate, the outside local address is allocated from an address space that is routable on the inside.
- Outside global address: The IP address that is assigned to a host on the outside network by the host owner. The outside global address is allocated from a globally routable address or network space.
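The inside local and inside global terms map directly onto a static NAT configuration. A minimal sketch, where the addresses and interface numbering are arbitrary examples (203.0.113.0/24 is a documentation range standing in for a provider-assigned block):

```
Router(config)# interface FastEthernet0/0
Router(config-if)# ip nat inside
Router(config-if)# exit
Router(config)# interface Serial0/0/0
Router(config-if)# ip nat outside
Router(config-if)# exit
Router(config)# ip nat inside source static 192.168.10.5 203.0.113.5
```

Here 192.168.10.5 is the inside local address and 203.0.113.5 is the inside global address; show ip nat translations displays the mapping.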

One of the main features of NAT is Port Address Translation (PAT), which is referred to as overload in a Cisco IOS configuration. Several internal addresses can be translated into just one or a few external addresses by using PAT. Most home routers operate in this manner. Your ISP assigns one address to your router, but several members of your family can simultaneously access the Internet. With NAT overloading, multiple addresses can be mapped to one or to a few addresses, because a port number tracks each private address. When a client opens a TCP/IP session, the NAT router assigns a port number to its source address. NAT overload ensures that clients use a different TCP port number for each client session with a server on the Internet. When a response comes back from the server, the source port number, which becomes the destination port number on the return trip, determines to which client the router routes the packets. It also validates that the incoming packets were requested, thus adding a degree of security to the session.

PAT uses unique source port numbers on the inside global IP address to distinguish between translations. Because the port number is encoded in 16 bits, the total number of internal addresses that NAT can translate into one external address is, theoretically, as many as 65,536 addresses. PAT attempts to preserve the original source port. If the source port is already allocated, PAT attempts to find the first available port number. It starts from the beginning of the appropriate port group, which is either 0 to 511, 512 to 1023, or 1024 to 65535. If PAT does not find a port that is available from the appropriate port group and if more than one external IP address is configured, PAT will move to the next IP address and try to allocate the original source port again. PAT continues trying to allocate the original source port until it runs out of available ports and external IP addresses.
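Because the port-group behavior applies per external address, PAT can be given a small pool of inside global addresses to fall back on when ports run out. A minimal sketch, assuming ip nat inside and ip nat outside have already been applied to the appropriate interfaces (the addressing, pool name, and access list number are arbitrary examples):

```
Router(config)# access-list 1 permit 192.168.10.0 0.0.0.255
Router(config)# ip nat pool PUBLIC 203.0.113.10 203.0.113.11 netmask 255.255.255.0
Router(config)# ip nat inside source list 1 pool PUBLIC overload
```

Without the overload keyword, the same pool would be used for dynamic 1:1 translation and could serve only two inside hosts at a time.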

Differences Between NAT and NAT Overload


NAT generally translates IP addresses on a 1:1 correspondence between publicly exposed IP addresses and privately held IP addresses, and it routes incoming packets to their inside destination by referring to the incoming destination IP address that was set by the host on the public network. NAT overload modifies both the private IP address and the port number of the sender, and it chooses the port numbers that are seen by hosts on the public network. With NAT overload, there is generally only one or very few publicly exposed IP addresses. Incoming packets from the public network are routed to their destinations on the private network by referring to a table in the NAT overload device that tracks public and private port pairs. This is called connection tracking.

Translating Inside Source Addresses


You can translate your own IP addresses into globally unique IP addresses when you are communicating outside your network. You can configure static or dynamic inside source translation. You can conserve addresses in the inside global address pool by allowing the router to use one inside global address for many inside local addresses. When this overloading is configured, the router maintains enough information from higher-level protocols, such as TCP or User Datagram Protocol (UDP) port numbers, to translate the inside global address back into the correct inside local address. When multiple inside local addresses map to one inside global address, the TCP or UDP port numbers of each inside host distinguish between the local addresses.
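As an illustrative sketch of inside source translation with overloading (the ACL number, the inside network, and the interface names are assumptions, not taken from this text), the configuration could look like this:

Router(config)#access-list 1 permit 10.0.0.0 0.255.255.255
Router(config)#ip nat inside source list 1 interface serial 0/0/0 overload
Router(config)#interface fastethernet 0/0
Router(config-if)#ip nat inside
Router(config-if)#exit
Router(config)#interface serial 0/0/0
Router(config-if)#ip nat outside

The access list defines which inside local addresses are eligible for translation, and the overload keyword enables PAT on the address of the outside interface.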

Configuring the DHCP Client and PAT


In this implementation, you will configure the WAN interface (fa0/1) as a DHCP client using SDM. The DHCP client will obtain its IP address, default gateway, and a default route from the Internet DHCP server. In addition, you will enable PAT to translate the internal private addressing to the external public addressing.

To begin configuring the DHCP client interface, click the Interfaces and Connections tab. Check the Ethernet (PPPoE or Unencapsulated Routing) radio button, and then click the Create New Connection button. If the ISP uses PPP over Ethernet (PPPoE), check the Enable PPPoE encapsulation check box, and then click Next. Check the Dynamic (DHCP Client) radio button and enter the hostname. Check Port Address Translation and choose the inside interface from the drop-down list. Note: Cisco routers can also be manually configured as a DHCP client. To configure an interface as a DHCP client, the ip address dhcp command must be configured on the interface.
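As a sketch of the manual alternative mentioned in the note (the interface number follows the fa0/1 WAN interface described above; treat it as illustrative):

Router(config)#interface fastethernet 0/1
Router(config-if)#ip address dhcp
Router(config-if)#no shutdown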

Verifying the DHCP Client Configuration


You can use the Interfaces and Connections window to verify that the DHCP client has obtained an address from the DHCP server. Note: The client IP address may not appear in the window immediately and it may be necessary to refresh the window. To display DHCP addresses that are leased from the server, use the show dhcp lease command. The output also displays other important DHCP information, such as the DHCP server IP address, default gateway, and lease time.

Verifying the NAT and PAT Configuration


This table shows the commands that you can use in EXEC mode to display translation information.

EXEC mode commands to display translation information:

show ip nat translations: Displays active translations
clear ip nat translation *: Clears all dynamic address translation entries from the NAT translation table

After you have configured NAT, verify that it is operating as expected by using the clear and show commands. By default, dynamic address translations time out from the NAT and PAT translation tables after a period of nonuse. When port translation is not configured, translation entries time out after 24 hours unless you reconfigure the timeout with the ip nat translation timeout command. You can clear the entries before the timeout by using one of the commands that are listed in the table.
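For example, the default timeout could be lowered and the translation table flushed as follows (the 300-second value is an arbitrary illustration):

Router(config)#ip nat translation timeout 300
Router#clear ip nat translation *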

Alternatively, you can use the show run command and look for NAT, access control list (ACL), interface, or pool sections with the required values.

Topic Notes: Static and Dynamic Routing Methods


Routing Overview
Routing is the process by which a packet gets from one location to another. In networking, a router is the device that is used to route traffic. The router is a special-purpose computer that plays an important role in the operation of any data network. Routers perform packet forwarding by learning about remote networks and maintaining routing information. The router is the junction or intersection that connects multiple IP networks. The primary forwarding decision of the router is based on Layer 3 information, which is the destination IP address. To be able to route something, a router or any entity that performs routing must perform the following functions:

Identify the destination address: Determine the destination (or address) of the packet that needs to be routed.
Identify sources of routing information: Determine from which sources (other routers) the router can learn the paths to the given destinations.
Identify routes: Determine the initial possible routes or paths to the intended destination.
Select routes: Select the best path to the intended destination.
Maintain and verify routing information: Determine if the known paths to the destination are the most current.

The routing information that a router obtains from other routers is placed in its routing table. The routing table is used to find the best match and path between the destination IP of a packet and a network address in the routing table. The routing table will ultimately determine the exit interface to forward the packet, and the router will encapsulate that packet in the appropriate data-link frame for that outgoing interface. The routing table stores information about connected and remote networks. Connected networks are directly attached to one of the router interfaces. These interfaces are the gateways for the hosts on different local networks. If the destination network is directly connected, the router already knows which interface to use when forwarding packets. Remote networks are networks that are not directly connected to the router. Routes to these networks can be determined in one of the following ways:

Manually configured on the router by the network administrator
Learned automatically using the dynamic routing process

Static and Dynamic Route Comparison


Based on the router configuration, routers can forward packets over static routes or dynamic routes. Remote networks are added to the routing table either by configuring static routes or enabling a dynamic routing protocol. When Cisco IOS Software learns about a remote network and the interface that it will use to reach that network, it adds that route to the routing table as long as the exit interface is enabled. The two ways to tell the router where to forward packets to on networks that are not directly connected are as follows:

Static routes: Routes to remote networks with an associated next hop can be manually configured on the router. These routes are known as static routes. The administrator must manually update a static route entry whenever an internetwork topology change requires an update. Static routes are user-defined routes that specify the path that packets take when moving between a source and a destination. These administrator-defined routes allow very precise control over the routing behavior of the IP internetwork.

Dynamic routes: Remote networks can also be added to the routing table by using a dynamic routing protocol. The router dynamically learns routes after an administrator configures a routing protocol that helps determine routes. Unlike the situation with static routes, after the network administrator enables dynamic routing, the routing process automatically updates route knowledge whenever new topology information is received. The router learns and maintains routes to the remote destinations by exchanging routing updates with other routers in the internetwork.

When to Use Static Routes


Static routes should be used in the following cases:

A network consists of only a few routers. Using a dynamic routing protocol in such a case does not present any substantial benefit. On the contrary, dynamic routing may add more administrative overhead.
A network is connected to the Internet only through a single ISP. There is no need to use a dynamic routing protocol across this link because the ISP represents the only exit point to the Internet.
A large network is configured in a hub-and-spoke topology. A hub-and-spoke topology consists of a central location (the hub) and multiple branch locations (the spokes), with each spoke having only one connection to the hub. Using dynamic routing would be unnecessary because each branch has only one path to a given destination: through the central location.

Routers in an enterprise network use bandwidth, memory, and processing resources to provide NAT and PAT, packet filtering, and other services. Static routing provides forwarding services without the overhead that is associated with most dynamic routing protocols.

Static routing provides more security than dynamic routing, because no routing updates are required. A hacker could intercept a dynamic routing update to gain information about a network. However, static routing is not without problems. It requires time and accuracy from the network administrator, who must manually enter routing information. A simple typographical error in a static route can result in network downtime and packet loss. When a static route changes, the network may experience routing errors and problems during manual reconfiguration. For these reasons, static routing is impractical for general use in a large enterprise environment.
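As an illustrative sketch (the destination network, mask, and next-hop address are assumptions), a single static route is entered with the ip route command:

Router(config)#ip route 192.168.2.0 255.255.255.0 10.1.1.2

Here the 192.168.2.0/24 network is reached through the neighboring router at 10.1.1.2; a typographical error in either value would produce exactly the kind of routing problem described above.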

Topic Notes: Configuring Static Routing


Static routes are commonly used when you are routing from a network to a stub network. A stub network (which is sometimes called a leaf node) is a network that is accessed by a single route. Static routes can also be useful for specifying a "gateway of last resort" to which all packets with an unknown destination address will be sent.

Default Route Forwarding Configuration


Use a default route in situations in which the route from a source to a destination is not known or when it is not feasible for the router to maintain many routes in its routing table. A default static route is a route that will match all packets. Default static routes are used in the following instances:

When no other routes in the routing table match the destination IP address of the packet or when a more specific match does not exist. A common use for a default static route is when connecting an edge router of a company to the ISP network.
When a router has only one other router to which it is connected. This condition is known as a stub router.

The syntax for a default static route is like any other static route, except that the network address is 0.0.0.0 and the subnet mask is 0.0.0.0.
Router(config)#ip route 0.0.0.0 0.0.0.0 exit-interface

or
Router(config)#ip route 0.0.0.0 0.0.0.0 ip-address

A route with the 0.0.0.0 network address and 0.0.0.0 mask is called a "quad-zero" route.
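Using the ip-address form with an assumed ISP next hop of 172.16.1.1 (an illustrative value, not taken from this text), the quad-zero route would look like this:

RouterB(config)#ip route 0.0.0.0 0.0.0.0 172.16.1.1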

Example: Verifying the Static Route Configuration


To verify that you have properly configured static routing, enter the show ip route command and look for static routes that are denoted by "S" in the first position. An asterisk (*) indicates the last path that was used when a packet was forwarded. This example shows the RouterB routing table after configuring the default route.

Topic Notes: WAN Communication Links


Circuit-Switched Communication Links
A circuit-switched network is one that establishes a dedicated circuit (or channel) between nodes and terminals before the users may communicate. When the transmission is complete, it terminates the circuit. In circuit switching, a dedicated path is established, maintained, and terminated through a carrier network for each communication session. The only dedicated physical circuit is the access path. The network uses a multiplexing technology within the cloud. Circuit switching operates much like a normal dialup telephone call and is used extensively in telephone company networks. Circuit switching establishes a dedicated physical connection for voice or data between a sender and receiver. Before communication can start, it is necessary to establish the connection by setting the switches through a dialup activity. While point-to-point communication links can accommodate only two sites on a single connection, circuit switching allows multiple sites to connect to the switched network of a carrier and communicate with each other. An example of a circuit-switched connection is a public switched telephone network (PSTN).

Public Switched Telephone Network


The most common type of circuit-switched WAN communication is the PSTN, which is also referred to as the plain old telephone service (POTS). When intermittent, low-volume data transfers are needed, asynchronous modems and analog telephone lines provide low-capacity, on-demand, dedicated switched connections. Traditional telephony uses a copper cable, called the local loop, to connect the telephone handset in the subscriber premises to the telephone network. The signal on the local loop during a call is a continuously varying electronic signal that is a translation of the subscriber voice. The local loop is not suitable for direct transport of binary computer data, but a modem can send computer data through the voice telephone network. The modem modulates the binary data into an analog signal at the source and, at the destination, demodulates the analog signal to binary data. The physical characteristics of the local loop and the connection of the local loop to the PSTN limit the rate of the signal. The upper limit is approximately 33 kb/s. The rate can be increased to approximately 56 kb/s if the signal comes directly through a digital connection.

For small businesses, the PSTN can be adequate for the exchange of sales figures, prices, routine reports, and email. Using automatic dialup at night or on weekends for large file transfers and data backup can result in lower off-peak tariffs (line charges). Tariffs are based on the distance between the endpoints, the time of day, and the duration of the call. There are a number of advantages to using the PSTN, including the following:

Simplicity: Other than a modem, there is no additional equipment that is required, and analog modems are easy to configure.
Availability: Because a public telephone network is available almost everywhere, it is easy to locate a telephone service provider. The maintenance of the telephone system is of high quality, with few instances in which lines are not available.

There are also some disadvantages to using the PSTN, including the following:

Low data rates: Because the telephone system was designed to transmit voice data, the transmission rate for large data files is noticeably slow.
Relatively long connection setup time: Because the connection to the PSTN requires a dialup activity, the time that is required to connect through the WAN is very slow when compared to other connection types.

Point-to-Point Communication Links


A point-to-point (or serial) communication link provides a single, established WAN communications path from the customer premises through a carrier network, such as a telephone company, to a remote network. When permanent dedicated connections are required, a point-to-point link is used to provide a pre-established WAN communications path from the customer premises through the provider network to a remote destination. A point-to-point (or serial) line can connect two geographically distant sites, such as a corporate office in New York and a regional office in London. Point-to-point lines are usually leased from a carrier and are therefore often called leased lines. For a point-to-point line, the carrier dedicates fixed transport capacity and facility hardware to the line that is leased by the customer. The carrier will, however, still use multiplexing technologies within the network.

Leased lines are a frequently used type of WAN access, and they are generally priced based on the bandwidth that is required and the distance between the two connected points. Point-to-point links are usually more expensive than shared services such as Frame Relay. The cost of leased-line solutions can become significant when they are used to connect many sites over increasing distances. However, there are times when the benefits outweigh the cost of the leased line. The dedicated capacity avoids latency and jitter between the endpoints. Constant availability is essential for some applications such as VoIP or video over IP.

A router serial port is required for each leased-line connection. If the underlying network is based on the T-carrier or E-carrier technologies, the leased line connects to the network of the carrier through a CSU/DSU. The purpose of the CSU/DSU is to provide a clocking signal to the customer equipment interface from the DSU and terminate the channelized transport media of the carrier on the CSU. The CSU also provides diagnostic functions such as a loopback test. Most T1 or E1 time-division multiplexing (TDM) interfaces on current routers include approved CSU/DSU capabilities.

Leased lines provide permanent dedicated capacity and are used extensively for building WANs. They have been the traditional connection of choice but have a number of disadvantages. Leased lines have a fixed capacity; however, WAN traffic is often variable and leaves some of the capacity unused. In addition, each endpoint needs a separate physical interface on the router, which increases equipment costs. Any change to the leased line generally requires a site visit by the carrier personnel.

Bandwidth
Bandwidth refers to the rate at which data is transferred over the communication link. The underlying carrier technology depends on the bandwidth that is available. There is a difference in bandwidth points between the North American (T-carrier) specification and the European (E-carrier) system. Both of these systems are based on the Plesiochronous Digital Hierarchy (PDH) that is supported in their networks. Optical networks use a different bandwidth hierarchy, which again differs between North America and Europe. In the United States, the Optical Carrier (OC) defines the bandwidth points. In Europe, the Synchronous Digital Hierarchy (SDH) defines the bandwidth points.

In North America, the bandwidth is usually expressed as a digital signal level number (DS0, DS1, and so on), which refers to the rate and format of the signal. The most fundamental line speed is 64 kb/s, or DS0, which is the bandwidth that is required for an uncompressed, digitized phone call. Serial connection bandwidths can be incrementally increased to accommodate the need for faster transmission. For example, 24 DS0s can be bundled to get a DS1 line (also called a T1 line) with a speed of 1.544 Mb/s. Also, 28 DS1s can be bundled to get a DS3 line (also called a T3 line) with a speed of 44.736 Mb/s. Leased lines are available in different capacities and are generally priced based on the bandwidth that is required and the distance between the two connected points.

Note: E1 (2.048 Mb/s) and E3 (34.368 Mb/s) are European standards like T1 and T3, but they possess different bandwidths and frame structures.

Serial interfaces are used to connect WANs to routers at a remote site or ISP. To configure a serial interface, follow these steps:

Step 1 Enter global configuration mode (configure terminal command).
Step 2 When in global configuration mode, enter interface configuration mode. In this example, use the interface serial 0/0/0 command.
Step 3 Enter the specified bandwidth for the interface. The bandwidth command provides a minimum bandwidth guarantee during congestion. It overrides the default bandwidth that is displayed in the show interfaces command and is used by some routing protocols, such as the Interior Gateway Routing Protocol (IGRP), for routing metric calculations. The router also uses the bandwidth for other types of calculations, such as those required for the Resource Reservation Protocol (RSVP). The bandwidth that is entered has no effect on the actual speed of the line.
Step 4 If a DCE cable is attached, enter the clock rate bps command with the desired speed. Use the clock rate interface configuration command to configure the clock rate for the hardware connections on serial interfaces, such as network interface modules (NIMs) and interface processors, to an acceptable bit rate. Typically, the clock rate is configured in a lab environment. Be sure to enter the complete clock speed; for example, a clock rate of 64000 cannot be abbreviated to 64. On serial links, one side of the link acts as the DCE and the other side acts as the DTE. By default, Cisco routers are DTE devices, but they can be configured as DCE devices. In a "back-to-back" router configuration in which a modem is not used, one of the interfaces must be configured as the DCE to provide a clocking signal. You must specify the clock rate for each DCE interface that is configured in this type of environment. Valid clock rates in bits per second are as follows: 1200, 2400, 4800, 9600, 19200, 38400, 56000, 64000, 72000, 125000, 148000, 500000, 800000, 1000000, 1300000, 2000000, and 4000000.
Step 5 By default, interfaces are disabled. Activate the interface with the no shutdown command. This activation is like powering on the interface.

The interface must also be connected to another device (such as a hub, a switch, or another router) for the physical layer to be active. If an interface needs to be disabled for maintenance or troubleshooting, use the shutdown command. Serial interfaces require a clock signal to control the timing of the communications. In most environments, a DCE device such as a CSU/DSU will provide the clock. By default, Cisco routers are DTE devices, but they can be configured as DCE devices.

Note: The serial cable that is attached determines the DTE or DCE mode of the Cisco router. Choose the cable to match the network requirement. Each connected serial interface must have an IP address and subnet mask to route IP packets.

Note: A common misconception for students new to networking and Cisco IOS Software is to assume that the bandwidth command will change the physical bandwidth of the link. The bandwidth command only modifies the bandwidth metric that is used by routing protocols such as Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF). Sometimes, a network administrator will change the bandwidth value in order to have more control over the chosen outgoing interface.

The show controller command displays information about the physical interface itself. This command is useful with serial interfaces to determine the type of cable that is connected without the need to physically inspect the cable itself. The information that is displayed is determined when the router initially starts and represents only the type of cable that was attached when the router was started. If the cable type is changed after startup, the show controller command will not display the cable type of the new cable.
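The configuration steps above can be sketched as one sequence (the interface number and the bandwidth and clock rate values are illustrative; the clock rate command applies only on the DCE side of a back-to-back link):

Router(config)#interface serial 0/0/0
Router(config-if)#bandwidth 1544
Router(config-if)#clock rate 64000
Router(config-if)#no shutdown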

Point-to-Point Communication Considerations


One of the most common types of WAN connection is the point-to-point connection. Point-to-point connections are used to connect LANs to service provider WANs, and to connect LAN segments within an enterprise network. Companies pay for a continuous connection between two remote sites, and the line is continuously active and available. The advantages to this type of WAN access include the following:

Simplicity: Point-to-point communication links require minimal expertise to install and maintain.
Quality: Point-to-point communication links usually offer a high quality of service, if they have adequate bandwidth. The dedicated capacity gives no latency or jitter between the endpoints.
Availability: Constant availability is essential for some applications such as e-commerce, and point-to-point communication links provide permanent, dedicated capacity that is always available.

There are also some disadvantages to this type of WAN access, including the following:

Cost: Point-to-point links are generally the most expensive type of WAN access, and this cost can become significant when they are used to connect many sites. In addition, each endpoint requires an interface on the router, which increases equipment costs.
Limited flexibility: WAN traffic is often variable, and leased lines have a fixed capacity, resulting in the bandwidth of the line seldom being exactly what is needed. Any change to the leased line generally requires a site visit by the ISP or carrier personnel to adjust capacity.

Topic Notes: Configuring WAN Links


High-Level Data Link Control Protocol
The High-Level Data Link Control (HDLC) protocol is one of two major data-link protocols that are commonly used with point-to-point WAN connections. On each WAN connection, data is encapsulated into frames before crossing the WAN link. To ensure that the correct protocol is used, you need to configure the appropriate Layer 2 encapsulation type. The choice of protocol depends on the WAN technology and the communicating equipment.

ISO developed HDLC as a synchronous, bit-oriented, data link layer protocol. HDLC uses synchronous serial transmission to provide error-free communication between two points. HDLC defines a Layer 2 framing structure that allows for flow control and error checking by using acknowledgments, control characters, and checksums. Each frame has the same format, whether it is a data frame or a control frame. HDLC supports point-to-point and multipoint configurations, and includes a means for authentication. HDLC may not be compatible, however, between devices from different vendors because of the way each vendor may have chosen to implement it. When you want to transmit frames over synchronous or asynchronous links, you must remember that those links have no mechanism to mark the beginnings or ends of the frames. HDLC uses a frame delimiter to mark the beginning and the end of each frame.

There is a Cisco implementation of HDLC, which is the default encapsulation for serial lines. Cisco HDLC is very streamlined; there is no windowing or flow control, and only point-to-point connections are allowed. The Cisco HDLC implementation includes proprietary extensions in the data field. The extensions allowed multiprotocol support at a time before PPP was specified. Because of this modification, the Cisco HDLC implementation will not interoperate with other HDLC implementations. HDLC encapsulations vary; PPP should be used when interoperability is required.

Verifying Serial HDLC Encapsulation


Cisco HDLC is the default encapsulation method that is used by Cisco devices on synchronous serial lines. Use Cisco HDLC as a point-to-point protocol on leased lines between two Cisco devices. If you are connecting to a device that is not a Cisco device, use synchronous PPP. If the default encapsulation method has been changed, use the encapsulation hdlc command in interface configuration mode to re-enable HDLC. There are two steps to enable HDLC encapsulation:

Step 1 Enter the interface configuration mode of the serial interface.
Step 2 Enter the encapsulation hdlc command to specify the encapsulation protocol on the interface.
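Following those two steps on an assumed serial 0/0/0 interface (the interface number is illustrative):

Router(config)#interface serial 0/0/0
Router(config-if)#encapsulation hdlc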
Example: Verifying HDLC Encapsulation Configuration

Use the show interface command to verify proper configuration. When HDLC is configured, "Encapsulation HDLC" should be reflected in the output of the show interface command.

Point-to-Point Protocol
Cisco HDLC is a point-to-point protocol that can be used on leased lines between two Cisco devices. For communicating with a device from another vendor, synchronous PPP is a better option.

PPP originally emerged as an encapsulation protocol for transporting IP traffic over point-to-point links. PPP also established a standard for the assignment and management of IP addresses, asynchronous (start and stop bit) and bit-oriented synchronous encapsulation, network protocol multiplexing, link configuration, link quality testing, error detection, and option negotiation for such capabilities as network layer address negotiation and data-compression negotiation. PPP provides router-to-router and host-to-network connections over both synchronous and asynchronous circuits. An example of an asynchronous connection is a dialup connection. An example of a synchronous connection is a leased line.

PPP provides a standard method for transporting multiprotocol datagrams (packets) over point-to-point links. There are many advantages to using PPP, including the fact that it is not proprietary. Moreover, it includes many features not available in HDLC, including the following:

The link quality management feature monitors the quality of the link. If too many errors are detected, PPP takes down the link.
PPP supports Password Authentication Protocol (PAP) and Challenge Handshake Authentication Protocol (CHAP) authentication.

Developers designed PPP to make the connection for point-to-point links. PPP, described in RFCs 1661 and 1332, encapsulates network layer protocol information over point-to-point links. RFC 2153: PPP Vendor Extensions updated the previous RFC 1661. You can configure PPP on the following types of physical interfaces:

Asynchronous serial
Synchronous serial
High-Speed Serial Interface (HSSI)

PPP Layered Architecture


PPP comprises these three main components:

A method for encapsulating multiprotocol datagrams.
Extensible Link Control Protocol (LCP) to establish, configure, and test the WAN data-link connection.
Family of Network Control Protocols (NCPs) for establishing and configuring different network layer protocols. PPP allows the simultaneous use of multiple network layer protocols. Some of the more common NCPs are IP Control Protocol (IPCP), AppleTalk Control Protocol, Novell IPX Control Protocol, Cisco Systems Control Protocol, Systems Network Architecture (SNA) Control Protocol, and Compression Control Protocol.

LCP provides sufficient versatility and portability to a wide variety of environments. LCP is used to automatically determine the encapsulation format option, to manage varying limits on sizes of packets, and to detect a loopback link and terminate the link. Other optional facilities that are provided are authentication of the identity of its peer on the link and determination of when a link is functioning properly or failing.

The authentication phase of a PPP session is optional. After the link has been established and the authentication protocol is chosen, the peer can be authenticated. If the authentication option is used, authentication takes place before the network layer protocol configuration phase begins. The authentication options require that the calling side of the link enter authentication information to help ensure that the user has permission from the network administrator to make the call. Peer routers exchange authentication messages.

To enable PPP encapsulation, enter interface configuration mode. Use the encapsulation ppp interface configuration command to specify PPP encapsulation on the interface. To set PPP as the encapsulation method to be used by a serial or ISDN interface, use the encapsulation ppp interface configuration command. The following example enables PPP encapsulation on serial interface 0/0/0 on RouterA:
RouterA#configure terminal
RouterA(config)#interface serial 0/0/0
RouterA(config-if)#encapsulation ppp
RouterA(config-if)#bandwidth 64
RouterA(config-if)#no shutdown

The encapsulation ppp command has no arguments, but you must first configure the router with an IP routing protocol to use PPP encapsulation. Remember that if you do not configure PPP on a Cisco router, the default encapsulation for serial interfaces is HDLC. In this example, the bandwidth value is set to 64 kb/s.

Serial PPP Encapsulation Configuration Verification


Use the show interfaces serial command to verify the proper configuration of HDLC or PPP encapsulation.
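As a sketch of this verification step (the interface number is taken from the earlier RouterA example), the command is entered from privileged EXEC mode:

```
RouterA# show interfaces serial 0/0/0
```

In the output, check that the encapsulation line reads PPP rather than HDLC; once LCP negotiation succeeds, the output also typically indicates that LCP is open, along with the NCPs (such as IPCP) that have been negotiated.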

Topic Notes: RIP Routing


Enabling RIP
Dynamic Routing Protocol Overview

A routing protocol is a set of processes, algorithms, and messages that is used to exchange routing information and populate the routing table with the best paths chosen by the routing protocol. Routing protocols are a set of rules by which routers dynamically share their routing information. As routers become aware of changes to the networks for which they act as the gateway, or changes to links between routers, this information is passed on to other routers. When a router receives information about new or changed routes, it updates its own routing table and, in turn, passes the information to other routers. In this way, all routers have accurate routing tables that are updated dynamically and can learn about routes to remote networks that are many hops away. Further examples of the information that routing protocols determine are as follows:

- How updates are conveyed
- What knowledge is conveyed
- When to convey knowledge
- How to locate recipients of the updates

The purpose of a routing protocol includes the following functions:


- Discovery of remote networks
- Maintaining up-to-date routing information
- Choosing the best path to destination networks
- Ability to find a new best path if the current path is no longer available

All routing protocols have the same purpose: to learn about remote networks and to quickly adapt whenever there is a change in the topology. The method that a routing protocol uses to accomplish this purpose depends upon the algorithm that it uses and the operational characteristics of that protocol. The operations of a dynamic routing protocol vary depending on the type of routing protocol and on the routing protocol itself. In general, the operations of a dynamic routing protocol can be described as follows:

- The router sends and receives routing messages on its interfaces.
- The router shares routing messages and routing information with other routers that are using the same routing protocol.
- Routers exchange routing information to learn about remote networks.
- When a router detects a topology change, the routing protocol can advertise this change to other routers.

Although routing protocols provide routers with up-to-date routing tables, there are costs that put additional demands on the memory and processing power of the router. First, the exchange of route information adds overhead that consumes network bandwidth. This overhead can be a problem, particularly for low-bandwidth links between routers. Second, after the router receives the route information, protocols such as Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF) process it extensively to make routing table entries. This means that routers that are employing these protocols must have sufficient processing capacity to implement the algorithms of the protocol as well as to perform timely packet routing and forwarding. An autonomous system (AS), otherwise known as a routing domain, is a collection of routers under a common administration. Typical examples are an internal network of a company and a network of an ISP. Because the Internet is based on the AS concept, the following two types of routing protocols are required:

- Interior gateway protocols (IGPs): These routing protocols are used to exchange routing information within an AS. RIP, EIGRP, Intermediate System-to-Intermediate System (IS-IS), and OSPF are examples of IGPs.
- Exterior gateway protocols (EGPs): These routing protocols are used to route between autonomous systems. Border Gateway Protocol (BGP) is the EGP of choice in networks today.

Note: The Internet Assigned Numbers Authority (IANA) assigns AS numbers for many jurisdictions. Use of IANA numbering is required if your organization plans to use BGP. However, it is good practice to be aware of private versus public AS numbering schema.

IGPs are used for routing within a routing domain, which consists of those networks within the control of a single organization. An AS is commonly comprised of many individual networks that belong to companies, schools, and other institutions. An IGP is used to route within the AS, and it is also used to route within the individual networks themselves. For example, a fictitious organization operates an AS that includes schools, colleges, and universities. This organization uses an IGP to route within its AS in order to interconnect all of these institutions. Each of the educational institutions also uses an IGP of its own choosing to route within its own individual network. The IGP that is used by each entity provides best-path determination within its own routing domains, just as the IGP that is used by the organization provides best-path routes within the AS itself. EGPs, on the other hand, are designed for use between different autonomous systems that are under the control of different administrations. BGP is the only currently viable EGP and is the routing protocol that is used by the Internet. BGP is a path vector protocol that can use many different attributes to measure routes. At the ISP level, there are often more important issues than just choosing the fastest path. BGP is typically used between ISPs and sometimes between a company and an ISP. Within an AS, most IGP routing can be classified as conforming to one of the following algorithms:

- Distance vector: The distance vector routing approach determines the direction (vector) and distance (such as hops) to any link in the internetwork. Some distance vector protocols periodically send complete routing tables to all of the connected neighbors. In large networks, these routing updates can become enormous, causing significant traffic on the links. Distance vector protocols use routers as signposts along the path to the final destination. The only information that a router knows about a remote network is the distance or metric to reach that network and which path or interface to use to get there. Distance vector routing protocols do not have an actual map of the network topology. RIP is an example of a distance vector routing protocol.
- Link state: The link-state approach, which uses the Shortest Path First (SPF) algorithm, creates an abstract of the exact topology of the entire internetwork, or at least of the partition in which the router is situated. Using an analogy of signposts, using a link-state routing protocol is like having a complete map of the network topology. The signposts along the way from the source to the destination are not necessary, because all link-state routers are using an identical "map" of the network. A link-state router uses the link-state information to create a topology map and to select the best path to all destination networks in the topology. OSPF and IS-IS are examples of link-state routing protocols.
- Advanced distance vector or balanced hybrid: The advanced distance vector approach combines aspects of the link-state and distance vector algorithms. EIGRP is an example of an advanced distance vector routing protocol.

There is no single best routing algorithm for all internetworks. All routing protocols provide the information differently.

Features of Dynamic Routing Protocols


Multiple routing protocols and static routes may be used at the same time. If there are several sources for routing information, including specific routing protocols, static routes, and even directly connected networks, an administrative distance value is used to rate the trustworthiness of each routing information source. Cisco IOS Software uses the administrative distance feature to select the best path when it learns about the same destination network from two or more routing sources.
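As an illustration of administrative distance (the addresses and next hop here are hypothetical), a static route has an administrative distance of 1, so it is preferred over the same network learned through RIP, which has an administrative distance of 120:

```
! Hypothetical addresses: 192.168.2.0/24 is also being learned via RIP.
! The static route (administrative distance 1) is more trusted than the
! RIP-learned route (administrative distance 120), so the static route
! is the one installed in the routing table.
Router(config)# ip route 192.168.2.0 255.255.255.0 10.1.1.2
```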

Classless vs. Classful Routing


Classful routing is a consequence of the fact that subnet masks are not advertised in the routing advertisements that most distance vector routing protocols generate. When a classful routing protocol is used, all subnetworks of the same major network (Class A, B, or C) must use the same subnet mask, which is not necessarily a default major class subnet mask. Routers that are running a classful routing protocol perform automatic route summarization across network boundaries. Upon receiving a routing update packet, a router that is running a classful routing protocol performs one of the following actions to determine the network portion of the route:

- If the routing update information contains the same major network number as is configured on the receiving interface (for example, /26 for both the update and the receiving interface), the router applies the subnet mask that is configured on the receiving interface.
- If the routing update information contains a major network that is different from the network that is configured on the receiving interface, the router applies the default classful mask (by address class) as follows:
  - For Class A addresses, the default classful mask is 255.0.0.0.
  - For Class B addresses, the default classful mask is 255.255.0.0.
  - For Class C addresses, the default classful mask is 255.255.255.0.

Classless routing protocols can be considered second-generation protocols because they are designed to address some of the limitations of the earlier classful routing protocols. A serious limitation in a classful network environment is that the subnet mask is not exchanged during the routing update process, thus requiring the same subnet mask to be used on all subnetworks within the same major network. Another limitation of the classful approach is the need to automatically summarize to the classful network number at all major network boundaries. In the classless environment, the summarization process is controlled manually and can usually be invoked at any bit position within the address. Because subnet routes are propagated throughout the routing domain, manual summarization may be required to keep the size of the routing tables manageable. Classless routing protocols include Routing Information Protocol version 2 (RIPv2), EIGRP, OSPF, and IS-IS.

Distance Vector Route Selection


As the name implies, "distance vector" means that routes are advertised as vectors of distance and direction. Distance is defined in terms of a metric such as hop count, and direction is simply the next-hop router or exit interface. A router that is using a distance vector routing protocol does not have the knowledge of the entire path to a destination network. Instead, the router knows only the following information:

- The direction, or the interface out of which packets should be forwarded
- The distance, or how far it is to the destination network

Distance vector routing protocols call for the router to periodically broadcast the entire routing table to each of its neighbors. The periodic routing updates that most distance vector routing protocols generate are addressed only to directly connected routing devices. The addressing scheme that is most commonly used is a logical broadcast. Routers that are running a distance vector routing protocol send periodic updates even if there are no changes in the network. In a pure distance vector environment, the periodic routing update includes a complete routing table. Upon receiving a complete routing table from its neighbor, a router can verify all known routes and make changes to the local routing table based on this updated information. This process is also known as "routing by rumor" because the router understands the network based on the perspective of the network topology of the neighboring router.

RIP Features
Over the years, routing protocols have evolved to meet the increasing demands of complex networks. The first protocol used was RIP. RIP still enjoys popularity because of its simplicity and widespread support. The key characteristics of RIP include the following:

- RIP is a distance vector routing protocol.
- Hop count is used as the metric for path selection.
- The maximum allowable hop count is 15.
- By default, routing updates are broadcast every 30 seconds.
- RIP is capable of load balancing over as many as six equal-cost paths. The default is four equal-cost paths. Defining the maximum number of parallel paths that are allowed in a routing table enables RIP load balancing. With RIP, the paths must be equal-cost paths. If the maximum number of paths is set to one, load balancing is disabled.
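A minimal sketch of enabling RIP and adjusting its load balancing (the network number is illustrative):

```
Router(config)# router rip
Router(config-router)# network 10.0.0.0
! Permit up to six equal-cost paths; the default is four, and setting
! the value to 1 disables RIP load balancing.
Router(config-router)# maximum-paths 6
```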

RIPv1 and RIPv2 Comparison


Over the years, RIP has evolved from a classful routing protocol (RIPv1) to a classless routing protocol (RIPv2). RIPv2 is a standardized routing protocol that works in a mixed vendor router environment. Routers that are made by different companies can communicate using RIP. It is one of the easiest routing protocols to configure, making it a good choice for small networks. However, RIPv2 still has limitations. Both RIPv1 and RIPv2 have a route metric that is based only on hop count and is limited to 15 hops. RIPv2 introduced the following improvements to RIPv1:

- Includes the subnet mask in the routing updates, making it a classless routing protocol
- Has an authentication mechanism to secure routing table updates
- Supports variable-length subnet mask (VLSM)
- Uses multicast addresses instead of broadcast
- Supports manual route summarization

Note: Cisco routers support RIPv1 and RIPv2.
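The RIPv2 improvements above are enabled with the version 2 command under the RIP process; a minimal configuration sketch (the network number is illustrative) follows:

```
Router(config)# router rip
Router(config-router)# version 2
Router(config-router)# network 172.16.0.0
! Disable automatic summarization at classful network boundaries so
! that subnet information is preserved in updates (VLSM support).
Router(config-router)# no auto-summary
```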

Topic Notes: The Cisco Discovery Protocol


Creating a Network Map of the Environment
Once all of the devices on the internetwork have been discovered, it is important to document the network so that it can be readily supported. Topology documentation is used to validate design guidelines and to aid future design, change, and troubleshooting. Topology documentation should include both logical and physical documentation for the following components:

- Connectivity
- Addressing
- Media types
- Devices
- Rack layouts
- Card assignments
- Cable routing
- Cable identification
- Termination points
- Power information
- Circuit identification information

Graphical representation of a network illustrates how each device in a network is connected and its logical architecture. A topology diagram shares many of the same components as the network configuration table. Each network device should be represented on the diagram with consistent notation or a graphical symbol. Also, each logical and physical connection should be represented using a simple line or other appropriate symbol. Routing protocols can also be shown. Maintaining accurate network topology documentation is important to successful configuration management. To create an environment where topology documentation maintenance can occur, the information must be available for updates. Cisco strongly recommends updating topology documentation whenever a network change occurs.

Cisco Discovery Protocol


Cisco Discovery Protocol is a powerful network monitoring and troubleshooting tool. It is also an information-gathering tool that is used by network administrators to obtain information about directly connected Cisco devices. Cisco Discovery Protocol is a proprietary tool that enables you to access a summary of protocol and address information about Cisco devices that are directly connected. By default, each Cisco device sends periodic messages, which are known as Cisco Discovery Protocol advertisements, to other directly connected Cisco devices. These advertisements contain information such as the types of devices that are connected, the router interfaces to which they are connected, the interfaces that are used to make the connections, and the model numbers of the devices. A Cisco device frequently has other Cisco devices as neighbors on the network. Information that is gathered from other devices can assist you in making network design decisions, troubleshooting, and making changes to equipment. Cisco Discovery Protocol can be used as a network discovery tool, helping you to build a logical topology of a network when such documentation is missing or lacking in detail. Cisco Discovery Protocol runs over the data link layer, connecting the physical media to the upper-layer protocols (ULPs). Because Cisco Discovery Protocol operates at the data link layer, two or more Cisco network devices, such as routers that support different network layer protocols (for example, IP and Novell IPX), can learn about each other. Physical media that connect Cisco Discovery Protocol devices must support Subnetwork Access Protocol (SNAP) encapsulation. These media can include all LANs, Frame Relay, other WANs, and ATM networks. When a Cisco device boots, Cisco Discovery Protocol starts by default and automatically discovers neighboring Cisco devices that are running Cisco Discovery Protocol, regardless of which protocol suite is running. The exception to this is Frame Relay interfaces, where Cisco Discovery Protocol must be manually enabled.

Information Obtained with Cisco Discovery Protocol


Cisco Discovery Protocol exchanges hardware and software device information with its directly connected neighbors. Cisco Discovery Protocol operates at Layer 2 only and can be used on many different types of local networks, including Ethernet and serial networks. Because it is a Layer 2 protocol, it can be used to determine the status of a directly connected link when no IP address has been configured or if the IP address is incorrect. Cisco Discovery Protocol neighbors are Cisco devices that are directly connected physically and share the same data link. Cisco Discovery Protocol exchanges hardware and software device information with its directly connected Cisco Discovery Protocol neighbors. The switch will receive Cisco Discovery Protocol advertisements from all neighboring devices. You can display the results of this information exchange on a console that is connected to a network device that is configured to run Cisco Discovery Protocol on its interfaces. Cisco Discovery Protocol provides the following information about each neighbor device:

- Device identifiers: For example, the configured hostname of the switch.
- Address list: Up to one network layer address for each protocol that is supported.
- Port identifier: The name of the local port and remote port, in the form of an ASCII character string such as ethernet0.
- Capabilities list: Supported features, such as whether this device is a router or a switch.
- Platform: The hardware platform of the device, such as Cisco 2800 Series Router.

To obtain Cisco Discovery Protocol information about a router that is not directly reachable from the administrator's console, network staff could use Telnet to connect to a switch that is connected directly to the target device. Cisco Discovery Protocol version 2 is the most recent release of the protocol and provides more intelligent device-tracking features. These features include a reporting mechanism that allows for quicker error tracking, thereby reducing costly downtime. Reported error messages can be sent to the console or to a logging server.

Link Layer Discovery Protocol


Cisco introduced the Cisco Discovery Protocol in 1994 to provide a mechanism for the management system to automatically learn about devices that are connected to the network. Cisco Discovery Protocol runs on Cisco devices (routers, switches, phones, and so on) and is also licensed to run on some network devices from other vendors. With this capability in the network, Cisco customers can introduce a new network management application that can discover and display an accurate topology of all the Cisco devices that are active on the network. Because automatic device discovery is so helpful for network administration, many vendors in the data networking industry later implemented their own discovery protocols to manage their devices. Over time, enhancements have been made to discovery protocols to provide greater capabilities. Applications (such as voice) have become dependent on these capabilities to operate properly, leading to interoperability problems between vendors. Therefore, to allow interworking between vendor equipment, it has become necessary to have a single, standardized discovery protocol. Cisco has been working with other leaders in the Internet and IEEE community to develop a new, standardized discovery protocol, 802.1AB (Station and Media Access Control Connectivity Discovery, or Link Layer Discovery Protocol [LLDP]). LLDP, which defines basic discovery capabilities, was enhanced to specifically address the voice application. This extension to LLDP is called LLDP-MED, or LLDP for Media Endpoint Discovery. It should be noted that either LLDP or LLDP-MED, but not both, can be used at any given time on an interface between two devices. LLDP-MED and Cisco Discovery Protocol are closely related protocols. They are similar in many ways, including operation and the information that they carry.

Implementation of Cisco Discovery Protocol


You can enable or disable Cisco Discovery Protocol on a router as a whole (global) or on a port-by-port (interface) basis. You can view Cisco Discovery Protocol information with the show cdp command. Cisco Discovery Protocol has several keywords that enable access to different types of information and different levels of detail. It is designed and implemented as a simple, low-overhead protocol. A Cisco Discovery Protocol packet can be as small as 80 octets, mostly made up of the ASCII strings that represent information. Cisco Discovery Protocol functionality is enabled by default on all interfaces (except for Frame Relay multipoint subinterfaces), but can be disabled at the device level. However, some interfaces, such as ATM interfaces, do not support Cisco Discovery Protocol. To prevent other Cisco Discovery Protocol-capable devices from accessing information about a specific device, the no cdp run global configuration command is used.
Router(config)#no cdp run

If you want to use Cisco Discovery Protocol but need to stop Cisco Discovery Protocol advertisements on a particular interface, use the following command:
Router(config-if)#no cdp enable

To enable Cisco Discovery Protocol on an interface, the cdp enable interface configuration command is used.
Router(config-if)#cdp enable

Topic Notes: Troubleshooting CDP


The show cdp neighbors command displays information about Cisco Discovery Protocol neighbors. For each Cisco Discovery Protocol neighbor, the following information is displayed:

- Neighbor device ID
- Local interface
- Holdtime value, in seconds
- Neighbor device capability code
- Neighbor hardware platform
- Neighbor remote port ID

The holdtime value indicates how long the receiving device should hold the Cisco Discovery Protocol packet before discarding it. The format of the show cdp neighbors output varies between different types of devices, but the available information is generally consistent across devices. The show cdp neighbors command can be used on a Cisco Catalyst switch to display the Cisco Discovery Protocol updates that were received on the local interfaces. Note that on a switch, the local interface is referred to as the local port.

The show cdp neighbors detail command also reveals the IP address of a neighboring device. Cisco Discovery Protocol will reveal the IP address of the neighbor even if you cannot ping the neighbor. This command is very helpful when two Cisco routers cannot route across their shared data link. The show cdp neighbors detail command will help determine if one of the Cisco Discovery Protocol neighbors has an IP configuration error. For network discovery situations, knowing the IP address of the Cisco Discovery Protocol neighbor is often all the information that is needed in order to use Telnet to connect to that device. With an established Telnet session, information can be gathered about directly connected Cisco devices. In this fashion, you can use Telnet to navigate around a network and build a logical topology. The output from the show cdp neighbors detail command is identical to the output that is produced by the show cdp entry * command.
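A sketch of this navigate-by-Telnet discovery technique (the hostnames and IP address are hypothetical):

```
! Discover directly connected neighbors, including their IP addresses.
SwitchA# show cdp neighbors detail
! Use a discovered address to reach the neighbor, then repeat there.
SwitchA# telnet 10.1.1.2
RouterB# show cdp neighbors detail
```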

Monitoring and Maintaining Cisco Discovery Protocol


The show cdp entry, show cdp traffic, and show cdp interface commands display detailed Cisco Discovery Protocol information. The show cdp entry or show cdp neighbors detail command displays detailed information about neighboring devices. To display information about a specific neighbor, the command string must include the IP address or device ID of the neighbor. The asterisk (*) is used to display information about all neighbors. The show cdp entry command shows the following information:

- Neighbor device ID
- Layer 3 protocol information
- Device platform
- Device capabilities
- Local interface type and outgoing remote port ID
- Holdtime value, in seconds
- Cisco IOS Software type and release (for example, Cisco IOS Software, 2800 Software)

The output from this command includes all the Layer 3 addresses of the neighboring device interfaces (up to one Layer 3 address per protocol). The show cdp traffic command displays information about interface traffic. It shows the number of Cisco Discovery Protocol packets that are sent and received. It also displays the number of errors for the following error conditions:

- Syntax errors
- Checksum errors
- Failed encapsulations
- Out of memory
- Invalid packets
- Fragmented packets

The command output also includes the following packet counts:

- Number of Cisco Discovery Protocol version 1 packets sent
- Number of Cisco Discovery Protocol version 2 packets sent

The show cdp interface command displays the following interface status and configuration information about the local device:

- Line and data-link status of the interface
- Encapsulation type for the interface
- Frequency at which Cisco Discovery Protocol packets are sent (default is 60 seconds)
- Holdtime value, in seconds (default is 180 seconds)
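The update frequency and holdtime can be changed from their defaults with the cdp timer and cdp holdtime global configuration commands; for example, to send updates every 30 seconds with a 120-second holdtime:

```
Router(config)# cdp timer 30
Router(config)# cdp holdtime 120
```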

Cisco Discovery Protocol is limited to gathering information about directly connected Cisco neighbors. Other tools, such as Telnet, are available for gathering information about remote devices that are not directly connected.

Topic Notes: Router Power-On Boot Sequence


Internal Router Components
The major internal components of a Cisco router include the interfaces, RAM, ROM, flash memory, NVRAM, and configuration register. A router is a computer, just like any other computer, including a PC. Routers have many of the same hardware and software components that are found in other computers, such as the following:

- Central processing unit (CPU)
- Synchronous dynamic random-access memory (SDRAM)
- Read-only memory (ROM): boot ROM
- Data storage: flash, CompactFlash
- Interfaces: console, Fast Ethernet
- Operating system: Cisco IOS Software

Although there are several different types and models of routers, every router has the same general hardware components. Depending on the model, those components are located in different places inside the router. The major components of a router are mostly hardware.

CPU: The CPU executes operating system instructions, such as system initialization, routing functions, and switching functions.

RAM: RAM stores the instructions and data that the CPU needs to execute. This read/write memory contains the software and data structures that allow the router to function. RAM is volatile memory and loses its content when the router is powered down or restarted. However, the router also contains permanent storage areas, such as ROM, flash, and NVRAM. RAM is used to store the following components:

- Operating system: Cisco IOS Software is copied into RAM during bootup.
- Running configuration file: This file stores the configuration commands that Cisco IOS Software is currently using on the router. With few exceptions, all commands that are configured on the router are stored in the running configuration file, which is also known as "running-config."
- IP routing table: This file stores information about directly connected and remote networks. It is used to determine the best path to forward the packet.
- Address Resolution Protocol (ARP) cache: This cache contains the IP version 4 (IPv4) address to MAC address mappings, like the ARP cache on a PC. The ARP cache is used on routers that have LAN interfaces such as Ethernet interfaces.
- Packet buffer: Packets are temporarily stored in a buffer when they are received on an interface or before they exit an interface.

ROM: ROM is a form of permanent storage. This type of memory contains microcode for basic functions to start and maintain the router. The ROM contains the ROM monitor, which is used for router disaster recovery functions, such as password recovery. ROM is nonvolatile; it maintains the memory contents even when the power is off.

Flash memory: Flash memory is nonvolatile computer memory that can be electrically stored and erased. Flash is used as permanent storage for the operating system. In most models of Cisco routers, the IOS is permanently stored in flash memory and copied into RAM during the bootup process, where the CPU then executes it. Some older models of Cisco routers run the IOS directly from flash. Flash consists of SIMMs or Personal Computer Memory Card International Association (PCMCIA) cards, which can be upgraded to increase the amount of flash memory. Flash memory does not lose its contents when the router loses power or is restarted.

NVRAM: NVRAM does not lose its information when power is turned off. Cisco IOS Software uses NVRAM as permanent storage for the startup configuration file (startup-config). All configuration changes are stored in the running configuration file in RAM, and with few exceptions, Cisco IOS Software implements them immediately. To save those changes in case the router is restarted or loses power, the running configuration must be copied to NVRAM, where it is stored as the startup configuration file.

Configuration register: The configuration register is used to control how the router boots. The configuration register is part of the NVRAM.

Interfaces: Interfaces are the physical connections to the external world for the router, and include the following types, among others:

- Ethernet, Fast Ethernet, and Gigabit Ethernet
- Asynchronous and synchronous serial
- Token Ring
- Fiber Distributed Data Interface (FDDI)
- ATM
- Console and auxiliary ports

There are three major areas of microcode that are generally contained in ROM.

- Bootstrap code: The bootstrap code is used to bring up the router during initialization. It reads the configuration register to determine how to boot and then, if instructed to do so, loads the Cisco IOS Software.
- Power-on self-test (POST): POST is the microcode that is used to test the basic functionality of the router hardware and determine which components are present.
- ROM monitor (ROMMON): This area includes a low-level operating system that is normally used for manufacturing, testing, troubleshooting, and password recovery. In ROM monitor mode, the router has no routing or IP capabilities.

Note: Depending on the specific Cisco router platform, the components that are listed may be stored in flash memory or in bootstrap memory to allow for field upgrade to later versions.

Stages of the Router Power-On Boot Sequence


When a router boots, it performs a series of steps: performing tests, finding and loading the Cisco IOS Software, finding and loading configurations, and finally, running the Cisco IOS Software. The sequence of events that occurs during the power-up (boot) of a router is important. Knowledge of this sequence helps to accomplish operational tasks and troubleshoot router problems. When power is initially applied to a router, the events occur in the following order:

1. Perform POST: This event is a series of hardware tests that verifies whether all components of the Cisco router are functional. During this test, the router also determines what hardware is present. POST executes from microcode that is resident in the system ROM.

2. Load and run bootstrap code: Bootstrap code is used to perform subsequent events, such as locating the Cisco IOS Software, loading it into RAM, and then running it. When the Cisco IOS Software is loaded and running, the bootstrap code is not used until the next time that the router is reloaded or power-cycled.

3. Find the Cisco IOS Software: The bootstrap code determines the location of the Cisco IOS Software that is to be run. Normally, the Cisco IOS Software image is located in flash memory, but it can also be stored in other places, such as a TFTP server. The configuration register and configuration file determine where the Cisco IOS Software images are located and which image file to use. If a complete Cisco IOS image cannot be located, a scaled-down version of the Cisco IOS Software is copied from ROM into RAM. This version of Cisco IOS Software is used to help diagnose any problems and can be used to load a complete version of the Cisco IOS Software into RAM.

4. Load the Cisco IOS Software: Once the bootstrap code has found the proper image, it then loads that image into RAM and starts the Cisco IOS Software. Some older routers do not load the Cisco IOS Software image into RAM, but execute it directly from flash memory instead.

5. Find the configuration: After the Cisco IOS Software is loaded, the bootstrap program searches for the startup configuration file (startup-config) in NVRAM.

6. Load the configuration: If a startup configuration file is found in NVRAM, the Cisco IOS Software loads it into RAM as the running configuration and executes the commands in the file, one line at a time. The running-config file contains interface addresses, starts routing processes, configures router passwords, and defines other characteristics of the router. If no configuration exists, the router will enter the setup utility or attempt an AutoInstall to look for a configuration file from a TFTP server.

7. Run the configured Cisco IOS Software: When the prompt is displayed, the router is running the Cisco IOS Software with the current running configuration file. The network administrator can now begin using Cisco IOS commands on this router.

Topic Notes: Managing Router Images


When a Cisco router boots, it searches for the Cisco IOS image in a specific sequence: the location that is specified in the configuration register, flash memory, a TFTP server, and ROM. The bootstrap code is responsible for locating the Cisco IOS Software. It searches for the image according to the following sequence:

1. The bootstrap code checks the boot field of the configuration register. The configuration register has several uses, such as telling the router how to boot up or enabling password recovery. For example, the factory default setting for the configuration register is 0x2102. This value indicates that the router attempts to load a Cisco IOS Software image from flash memory and loads the startup configuration file from NVRAM. The boot field is the lower four bits of the configuration register and is used to specify how the router boots. These bits can point to flash memory for the Cisco IOS image, to the startup configuration file (if one exists) for commands that tell the router how to boot, or to a remote TFTP server. Alternatively, these bits can specify that no Cisco IOS image is to be loaded and that a Cisco IOS subset image in ROM should be started instead. The configuration register bits perform other functions as well, such as selecting the console baud rate and determining whether to use the saved configuration file (startup-config) in NVRAM. It is possible to change the configuration register and, therefore, change where the router looks for the Cisco IOS image and the startup configuration file during the bootup process. For example, a configuration register value of 0x2102 (the "0x" indicates that the digits that follow are in hexadecimal notation) has a boot field value of 0x2. The rightmost digit in the register value is 2 and represents the lower 4 bits of the register.

2. If the boot field value of the configuration register is from 0x2 to 0xF, the bootstrap code parses the startup configuration file in NVRAM for boot system commands that specify the name and location of the Cisco IOS Software image to load. Several boot system commands can be entered in sequence to provide a fault-tolerant boot plan. The boot system command is a global configuration command that allows you to specify the source for the Cisco IOS Software image to load. Some of the available syntax options include the following:
Router(config)#boot system tftp://192.168.7.24/cs3-rx.90-1 Router(config)#boot system tftp://192.168.7.19/cs3-rx.83-2 Router(config)#boot system rom

3. If there are no boot system commands in the configuration, the router defaults to loading the first valid Cisco IOS image in flash memory and running it.

4. If no valid Cisco IOS image is found in flash memory, the router attempts to boot from a network TFTP server, using the boot field value as part of the Cisco IOS image filename.

Note: Booting from a network TFTP server is a seldom-used method of loading a Cisco IOS Software image. Not every router has a boothelper image, so Steps 5 and 6 do not always follow.

5. By default, if booting from a network TFTP server fails after five tries, the router boots the boothelper image (the Cisco IOS subset) from ROM. You can also set bit 13 of the configuration register to 0 to tell the router to try to boot from a TFTP server continuously, without booting the Cisco IOS subset from ROM after five unsuccessful tries.

6. If there is no boothelper image, or if it is corrupted, the router boots the ROM monitor from ROM.

When the router locates a valid Cisco IOS image file in flash memory, the Cisco IOS image is normally loaded into RAM to run. Some routers, including the Cisco 2500 Series Routers, do not have sufficient RAM to hold the Cisco IOS image and therefore run the Cisco IOS image directly from flash memory. If the image needs to be loaded from flash memory into RAM, it must first be decompressed. After the file is decompressed into RAM, it is started. When the Cisco IOS Software begins to load, you may see a string of pound signs (#), as shown in this example, while the image decompresses:

System Bootstrap, Version 12.1(3r)T2, RELEASE SOFTWARE (fc1) Copyright (c) 2000 by cisco Systems, Inc. cisco 2811 (MPC860) processor (revision 0x200) with 60416K/5120K bytes of memory Self decompressing the image : ##############################################################
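The image-search order in Steps 1 through 6 above amounts to a simple fallback decision. The sketch below models it in Python; the function name, parameters, and return strings are illustrative only, not actual IOS behavior or code.

```python
def locate_ios_image(boot_field, boot_system_sources, flash_images,
                     tftp_reachable, boothelper_present):
    """Sketch of the bootstrap's image-search fallback order (illustrative)."""
    if boot_field == 0x0:
        return "ROM monitor"                 # boot field 0000: stay in ROMMON
    if boot_field == 0x1:
        return "Cisco IOS subset in ROM"     # boot field 0001: boothelper image
    # Boot field 0x2-0xF: honor boot system commands from startup-config first.
    if boot_system_sources:
        return boot_system_sources[0]        # first valid boot system entry wins
    if flash_images:
        return "flash:" + flash_images[0]    # first valid image in flash memory
    if tftp_reachable:
        return "network TFTP server"         # filename derived from boot field
    if boothelper_present:
        return "boothelper image in ROM"     # after five failed TFTP attempts
    return "ROM monitor"                     # last resort

print(locate_ios_image(0x2, [], ["c1841-ipbase-mz.123-14.T7.bin"], False, True))
# prints "flash:c1841-ipbase-mz.123-14.T7.bin"
```

With a default register (boot field 0x2) and no boot system commands, the first valid image in flash wins, matching Step 3 above.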

Cisco IOS images that are run from flash memory are not compressed. The show version command can be used to help verify and troubleshoot some of the basic hardware and software components of the router. The show version command displays information about the version of the Cisco IOS Software that is currently running on the router, the version of the bootstrap program, and information about the hardware configuration, including the amount of system memory. The output from the show version command includes the following:

Cisco IOS version

Cisco IOS Software, 1841 Software (C1841-ADVIPSERVICESK9-M), Version 12.4(15)T7, RELEASE SOFTWARE (fc3)

This line from the example output shows the version of the Cisco IOS Software in RAM that the router is using.

ROM bootstrap program

ROM: System Bootstrap, Version 12.4(13r)T, RELEASE SOFTWARE (fc1)

This line from the example output shows the version of the system bootstrap software, which is stored in ROM memory and was initially used to boot up the router.

Location of Cisco IOS image

System image file is "flash:c1841-advipservicesk9-mz.124-15.T7.bin"

This line from the example output shows where the Cisco IOS image is located and loaded, as well as its complete filename.

CPU and amount of RAM

Cisco 1841 (revision 7.0) with 236544K/25600K bytes of memory

The first part of this line displays the type of CPU on this router. The last part of this line displays the amount of DRAM. Some series of routers, like the Cisco 2600 Series Routers, use a fraction of DRAM as packet memory. Packet memory is used for buffering packets. To determine the total amount of DRAM on the router, add both numbers. In this example, the Cisco 1841 router has 236,544 KB (kilobytes) of DRAM that is used for temporarily storing the Cisco IOS Software and other system processes. The other 25,600 KB is dedicated to packet memory. The sum of these numbers is 262,144 KB of total DRAM.
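The DRAM arithmetic can be checked directly. The figures below are taken from the example show version line; the variable names are illustrative.

```python
main_memory_kb = 236544   # first number in "236544K/25600K bytes of memory"
packet_memory_kb = 25600  # second number: DRAM reserved for packet buffering
total_dram_kb = main_memory_kb + packet_memory_kb
print(total_dram_kb)            # prints 262144
print(total_dram_kb // 1024)    # prints 256 (MB of total DRAM)
```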

Interfaces

2 FastEthernet/IEEE 802.3 interface(s) 2 Low-speed serial(sync/async) network interface(s)

This section of the output displays the physical interfaces on the router. In this example, the router has two Fast Ethernet interfaces and two low-speed serial interfaces.

Amount of NVRAM

191K bytes of NVRAM.

This line from the example output shows the amount of NVRAM on the router.

Amount of flash

974736K bytes of ATA CompactFlash (Read/Write)

This line from the example output shows the amount of flash memory on the router.

Configuration register

Configuration register is 0x2102

The last line of the show version command displays the current configured value of the software configuration register in hexadecimal format. This value indicates that the router will attempt to load a Cisco IOS Software image from flash memory and load the startup configuration file from NVRAM.

After the Cisco IOS Software image is loaded and started, the router must be configured to be useful. If there is an existing saved configuration file (startup-config) in NVRAM, it is executed. If there is no saved configuration file in NVRAM, the router either begins AutoInstall or enters the setup utility. If the startup configuration file does not exist in NVRAM, the router may search for a TFTP server. If the router detects that it has an active link to another configured router, it sends a broadcast searching for a configuration file across the active link. This condition causes the router to pause while it broadcasts for a configuration file across the active link, but you will eventually see console messages like the following:
%Error opening tftp://255.255.255.255/network-confg(Timed out) %Error opening tftp://255.255.255.255/cisconet.cfg (Timed out)

The setup utility prompts the user at the console for specific configuration information to create a basic initial configuration on the router, as shown in this example:
<output omitted> If you require further assistance please contact us by sending email to export@cisco.com. cisco 2811 (MPC860) processor (revision 0x200) with 60416K/5120K bytes of memory Processor board ID JAD05190MTZ (4292891495) M860 processor: part number 0, mask 49 2 FastEthernet/IEEE 802.3 interface(s) 239K bytes of non-volatile configuration memory. 62720K bytes of ATA CompactFlash (Read/Write) Cisco IOS Software, 2800 Software (C2800NM-ADVIPSERVICESK9-M), Version 12.4(15)T1, RELEASE SOFTWARE (fc2) Technical Support: http://www.cisco.com/techsupport Copyright (c) 1987 by Cisco Systems, Inc. Compiled Wed 18-Jul-07 06:21 by pt_rel_team
--- System Configuration Dialog ---
Continue with configuration dialog? [yes/no]: no

The show running-config and show startup-config commands are among the most common Cisco IOS Software EXEC commands, because they allow you to see the current running configuration in RAM on the router or the startup configuration commands in the startup configuration file in NVRAM that the router will use on the next restart. If the words "Current configuration" are displayed, the active running configuration from RAM is being displayed. If there is a message at the top indicating how much nonvolatile memory is being used ("Using 924 out of 196600 bytes" in this example), the startup configuration file from NVRAM is being displayed.

Configuration Register
The configuration register includes information that specifies where to locate the Cisco IOS Software image. You can examine the register with the show version command, and you can change the register value with the config-register global configuration command. Before altering the configuration register, you should determine how the router currently loads the software image. The show version command displays the current configuration register value; the last line of the display contains it. You can change the default configuration register setting with the config-register global configuration command. The configuration register is a 16-bit register. The lowest 4 bits of the configuration register (bits 3, 2, 1, and 0) form the boot field. A hexadecimal number is used as the argument to set the value of the configuration register. The default value of the configuration register is 0x2102.

The guidelines for changing the boot field are as follows:

The boot field is set to 0x0 to enter ROM monitor mode automatically. This value sets the boot field bits to 0000. In ROM monitor mode, the router displays the ">" or "rommon>" prompt, depending on the router processor type. From ROM monitor mode, you can use the boot command to manually boot the router.

The boot field is set to 0x1 to configure the system to boot the Cisco IOS subset automatically from ROM. This value sets the boot field bits to 0001. The router displays the "Router(boot)>" prompt in this mode.

The boot field is set to any value from 0x2 to 0xF to configure the system to use the boot system commands in the startup configuration file in NVRAM. The default is 0x2. These values set the boot field bits to 0010 through 1111.

The show version command is used to verify changes in the configuration register setting. The new configuration register value takes effect when the router reloads. In this example, the show version command indicates that the current configuration register setting of 0x2102 will be used during the next router reload.

Note: When using the config-register command, all 16 bits of the configuration register are set. Be careful to modify only the bits that you are trying to change, such as the boot field, and leave the other bits as they are. Remember that the other configuration register bits perform functions that include selecting the console baud rate and determining whether to use the saved configuration in NVRAM.

The show flash command is an important tool to gather information about the router memory and image file. It can determine the following information:

Total amount of flash memory on the router Amount of flash memory that is available Names of all the files that are stored in the flash memory

In this example, the line at the bottom tells how much flash memory is available. Some of it might already be in use. Flash memory is readable and writable, which is what allows new Cisco IOS image files to be copied into it.
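The earlier note about the config-register command (all 16 bits are set, so change only the boot field and preserve the rest) is plain bit arithmetic. The sketch below illustrates it; the function name is hypothetical.

```python
def set_boot_field(config_register, boot_field):
    """Replace only the lower 4 bits (the boot field) of the 16-bit
    configuration register, leaving the other 12 bits untouched."""
    if not 0x0 <= boot_field <= 0xF:
        raise ValueError("boot field is only 4 bits wide")
    return (config_register & 0xFFF0) | boot_field

reg = 0x2102                           # factory default register value
print(hex(reg & 0xF))                  # prints 0x2: use boot system commands
print(hex(set_boot_field(reg, 0x0)))   # prints 0x2100: boot to ROM monitor
print(hex(set_boot_field(reg, 0x1)))   # prints 0x2101: boot IOS subset from ROM
```

Masking with 0xFFF0 is what keeps functions controlled by the upper bits, such as the console baud rate selection, unchanged.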

Topic Notes: Cisco IOS File System and Devices


The Cisco IOS File System (Cisco IFS) feature provides a single interface to all the file systems that a router uses. The availability of the network can be at risk if the configuration of a router or the operating system is compromised. Attackers who gain access to infrastructure devices can alter or delete configuration files. They can also upload incompatible Cisco IOS images or delete the Cisco IOS image. Such changes take effect either immediately or once the device is rebooted. To mitigate these problems, you must be able to save, back up, and restore configuration files and Cisco IOS images. Cisco IOS devices provide a feature that is called the Cisco IFS. This system allows you to create, navigate, and manipulate directories on a Cisco device. The directories that are available depend on the platform. The Cisco IFS feature provides a single interface to all the file systems that a Cisco router uses, including the following:

Flash memory file systems Network file systems (NFSs): TFTP, Remote Copy Protocol (RCP), and FTP Any other endpoint for reading or writing data (such as NVRAM, the running configuration in RAM, and so on)

An important feature of the Cisco IFS is the use of the URL convention to specify files on network devices and the network. The URL prefix specifies the file system. The output of the show file systems command lists all of the available file systems on a Cisco 1841 router. The command provides insightful information, such as the amount of available and free memory, and the type of file system and its permissions. Permissions include read only (indicated by the "ro" flag), write only (indicated by the "wo" flag), and read and write (indicated by the "rw" flag). The flash file system is preceded by an asterisk, which indicates that it is the current default file system. The bootable Cisco IOS Software is located in flash; the pound symbol (#) appended to the flash listing indicates a bootable disk. This table contains some commonly used URL prefixes for Cisco network devices:

bootflash: Bootflash memory
flash: Flash memory. This prefix is available on all platforms. For platforms that do not have a device named flash, the flash: prefix is aliased to slot0:. Therefore, the flash: prefix can be used to refer to the main flash memory storage area on all platforms.
flh: Flash load helper log files
ftp: FTP network server
nvram: NVRAM
rcp: The RCP network server
slot0: The first Personal Computer Memory Card International Association (PCMCIA) flash memory card
slot1: The second PCMCIA flash memory card
system: System memory, including the current running configuration
tftp: TFTP network server
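The Cisco IFS URL convention (a file system prefix, a colon, then a path) can be modeled with a small parser. The table of prefixes comes from above; the function itself is an illustration, not part of any Cisco tool.

```python
# Commonly used Cisco IFS URL prefixes (from the table above).
IFS_PREFIXES = {
    "bootflash": "Bootflash memory",
    "flash": "Flash memory (aliased to slot0: on some platforms)",
    "flh": "Flash load helper log files",
    "ftp": "FTP network server",
    "nvram": "NVRAM",
    "rcp": "RCP network server",
    "slot0": "First PCMCIA flash memory card",
    "slot1": "Second PCMCIA flash memory card",
    "system": "System memory, including the running configuration",
    "tftp": "TFTP network server",
}

def split_ifs_url(url):
    """Split an IFS URL like 'flash:c1841-...bin' into (file system, path)."""
    prefix, _, path = url.partition(":")
    if prefix not in IFS_PREFIXES:
        raise ValueError("unknown file system prefix: " + prefix)
    return prefix, path

print(split_ifs_url("flash:c1841-advipservicesk9-mz.124-15.T7.bin"))
# prints ('flash', 'c1841-advipservicesk9-mz.124-15.T7.bin')
```

This mirrors how the show version output names the system image file ("flash:" followed by the image filename).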

Managing Cisco IOS Images


As a network grows, storage of Cisco IOS Software images and configuration files on a central TFTP server enables control of the number and revision level of Cisco IOS images and configuration files that must be maintained. Production internetworks usually span wide areas and contain multiple routers. For any network, it is prudent to retain a backup copy of the Cisco IOS Software image in case the system image in the router becomes corrupted or accidentally erased. Widely distributed routers need a source or backup location for Cisco IOS Software images. Using a network TFTP server allows image and configuration uploads and downloads over the network. The network TFTP server can be another router, a workstation, or a host system. Before copying the Cisco IOS Software image from flash memory in the router to the network TFTP server, you should follow these steps:

Step 1. Make sure that there is access to the network TFTP server. You can ping the TFTP server to test connectivity.

Step 2. Verify that the TFTP server has sufficient disk space to accommodate the Cisco IOS Software image. Use the show flash command on the router to determine the size of the Cisco IOS image file.

Step 3. Check the filename requirements on the TFTP server. These requirements may differ, depending on whether the server is running Microsoft Windows, UNIX, or another operating system.

Step 4. Create the destination file to receive the upload, if required. This step depends on the network server operating system.

The show flash command is an important tool to gather information about the router memory and image file. The show flash command can determine the following information:

Total amount of flash memory on the router Amount of flash memory that is available Names of all the files that are stored in the flash memory

The Cisco IOS image file is based on a special naming convention. The name for the Cisco IOS image file contains multiple parts, each with a specific meaning. It is important that you understand this naming convention when upgrading and selecting a Cisco IOS Software.

The first part (c1841) identifies the platform on which the image runs. In this example, the platform is a Cisco 1841 router. The second part (ipbase) specifies the feature set. In this case, "ipbase" refers to the basic IP internetworking image. Other feature set possibilities include the following:

i: Designates the IP feature set.
j: Designates the enterprise feature set (all protocols).
s: Designates a Plus feature set (extra queuing, manipulation, or translations).
56i: Designates 56-bit IP Security (IPsec) Data Encryption Standard (DES) encryption.
3: Designates the firewall or intrusion detection system (IDS).
k2: Designates Triple DES (3DES) IPsec encryption (168-bit).

The third part (mz) indicates where the image runs and whether the file is compressed. In this example, "mz" indicates that the file runs from RAM and is compressed. The fourth part (12.3-14.T7) is the version number. The final part (bin) is the file extension. This extension indicates that this file is a binary executable file.

The Cisco IOS Software naming conventions, field meaning, image content, and other details are subject to change.
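Since the classic naming convention is positional, the parts described above can be pulled apart mechanically. The sketch below is a best-effort parser for that classic format only (the conventions are subject to change, as noted); the function name and dictionary keys are illustrative.

```python
def parse_ios_filename(filename):
    """Best-effort split of a classic Cisco IOS image name such as
    c1841-ipbase-mz.123-14.T7.bin into the parts described in the text."""
    stem, _, extension = filename.rpartition(".")      # split off 'bin'
    head, _, version = stem.partition(".")             # split off '123-14.T7'
    platform, feature_set, runtime = head.split("-", 2)
    return {
        "platform": platform,        # c1841 -> runs on a Cisco 1841
        "feature_set": feature_set,  # ipbase -> basic IP internetworking
        "runtime": runtime,          # mz -> runs from RAM, compressed
        "version": version,          # 123-14.T7 -> release 12.3(14)T7
        "extension": extension,      # bin -> binary executable file
    }

print(parse_ios_filename("c1841-ipbase-mz.123-14.T7.bin"))
```

A parser like this is only a reading aid; always confirm platform and feature-set compatibility against Cisco documentation before loading an image.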

Cisco IOS copy Command


The Cisco IOS Software copy command is used to move configurations from one component or device to another, such as RAM, NVRAM, or a TFTP server. In addition to using AutoInstall, the setup utility, or the CLI to load or create a configuration, there are several other sources for configurations that you can use. You can use the Cisco IOS Software copy command to move configurations from one component or device to another. The syntax of the copy command requires that the first argument indicates the source (from where to copy the configuration), followed by the destination (where to copy the configuration).

A good practice for maintaining system availability is to ensure that you always have backup copies of the startup configuration files and Cisco IOS image files. The Cisco IOS Software copy command is used to move configuration files from one component or device to another, such as RAM, NVRAM, or a TFTP server. A software backup image file is created by copying the image file from a router to a network TFTP server. To copy the current system image file from the router to the network TFTP server, use the following command in the privileged EXEC mode:
Router#copy flash tftp

The copy flash tftp command requires that you enter the IP address of the remote host and the name of the source and destination system image files. The exclamation points (!) indicate the copying process from the flash memory of the router to the TFTP server. Each exclamation point means that one User Datagram Protocol (UDP) segment has successfully transferred. Before updating the flash memory with a new Cisco IOS image, you should back up the current Cisco IOS image to a TFTP server. Backing up provides a fallback in case there is only sufficient space to store one image in the flash memory. Upgrading a system to a newer software version requires loading a different system image file on the router. Use the following command to download the new image from the network TFTP server:
Router#copy tftp flash

This command prompts you for the IP address of the remote host and the name of the source and destination system image files. Enter the appropriate filename of the update image just as it appears on the server. After these entries are confirmed, the "erase flash" prompt appears. Erasing flash memory makes room for the new image. Erase flash memory if there is not sufficient flash memory for more than one Cisco IOS image. If no free flash memory is available, the erase routine is required before new files can be copied. The system informs you of these conditions and prompts for a response. Each exclamation point means that one UDP segment has successfully transferred.

Note: Make sure that the Cisco IOS image that is loaded is appropriate for the router platform. If the wrong Cisco IOS image is loaded, the router could be made unbootable, requiring ROM monitor intervention.
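The exclamation points track TFTP data blocks carried over UDP, and TFTP itself (RFC 1350) is simple enough that its request packet can be built by hand. The sketch below constructs one; the filename is illustrative, and this is a protocol illustration, not a working file-transfer client.

```python
import struct

RRQ, WRQ = 1, 2  # TFTP opcodes: read request (download), write request (upload)

def tftp_request(opcode, filename, mode="octet"):
    """Build a TFTP request packet per RFC 1350: a 2-byte big-endian
    opcode, then two NUL-terminated ASCII strings (filename and mode)."""
    return (struct.pack("!H", opcode)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

# What a 'copy flash tftp' conceptually sends first: a write request.
pkt = tftp_request(WRQ, "c1841-ipbase-mz.123-14.T7.bin")
print(pkt[:2])   # prints b'\x00\x02', the WRQ opcode
```

"octet" mode is the binary-safe transfer mode, which is what an image copy requires.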

Topic Notes: Managing Device Configuration Files


Device configuration files contain a set of user-defined configuration commands that customize the functionality of a Cisco device. Configuration files contain the Cisco IOS Software commands that are used to customize the functionality of a Cisco routing device, such as a router, access server, switch, and so on. Commands are parsed (that is, translated and executed) by the Cisco IOS Software when you boot the system from the startup configuration file or when you enter commands at the command-line interface (CLI) in configuration mode. Configuration files are stored in the following locations:

The running configuration is stored in RAM. The startup configuration is stored in NVRAM.

You can copy configuration files from the router to a file server using FTP, RCP, or TFTP. For example, you can copy configuration files to back up a current configuration file to a server before changing its contents, therefore allowing the original configuration file to be restored from the server. The protocol that is used depends on which type of server is used. You can copy configuration files from a TFTP, RCP, or FTP server to the running configuration in RAM or to the startup configuration file in NVRAM of the router for one of the following reasons:

To restore a backed-up configuration file.

To use the configuration file for another router. For example, you may add another router to the network and want it to have a similar configuration as the original router. By copying the file to the network server and making the changes to reflect the configuration requirements of the new router, you can save time by not re-creating the entire file.

To load the same configuration commands onto all of the routers in the network so that all of the routers have similar configurations.

For example, in the copy running-config tftp command, the running configuration in RAM is copied to a TFTP server. Use the copy running-config startup-config command after a configuration change is made in RAM and must be saved to the startup configuration file in NVRAM. Similarly, copy the startup configuration file in NVRAM back into RAM with the copy startup-config running-config command. Notice that you can abbreviate the commands.

Similar commands exist for copying between a TFTP server and either NVRAM or RAM. The following examples show common copy command usage. The examples list two methods to accomplish the same tasks. The first example is a simple syntax and the second example provides a more explicit syntax.

Copy the running configuration from RAM to the startup configuration in NVRAM, overwriting the existing file:

R2#copy running-config startup-config R2#copy system:running-config nvram:startup-config

Copy the running configuration from RAM to a remote location, overwriting the existing file:

R2#copy running-config tftp R2#copy system:running-config tftp

Copy a configuration from a remote source to the running configuration, merging the new content with the existing file:

R2#copy tftp running-config R2#copy tftp system:running-config

Copy a configuration from a remote source to the startup configuration, overwriting the existing file:

R2#copy tftp startup-config R2#copy tftp nvram:startup-config

Use the configure terminal command to interactively create configurations in RAM from the console or a remote terminal. Use the erase startup-config command to delete the saved startup configuration file in NVRAM.

Note: When a configuration is copied into RAM from any source, the configuration merges with, or overlays, any existing configuration in RAM, rather than overwriting it. New configuration parameters are added, and changes to existing parameters overwrite the old parameters. Configuration commands that exist in RAM, for which there is no corresponding command in NVRAM, remain unaffected. Copying the running configuration from RAM into the startup configuration file in NVRAM overwrites the startup configuration file in NVRAM. With Cisco IOS Release 12.0 and later, the commands that are used to copy and transfer configuration and system files changed to integrate the Cisco IFS specifications. Refer to Cisco.com for details.

You can use TFTP servers to store configurations in a central place, allowing centralized management and updating. Regardless of the size of the network, there should always be a copy of the current running configuration online as a backup. The copy running-config tftp command allows the current configuration to be uploaded and saved to a TFTP server. The IP address or name of the TFTP server and the destination filename must be supplied. A series of exclamation marks in the display shows the progress of the upload. The copy tftp running-config command downloads a configuration file from the TFTP server to the running configuration in RAM. Again, the address or name of the TFTP server and the source and destination filenames must be supplied. In this case, because you are copying the file to the running configuration, the destination filename should be running-config. This process is a merge process, not an overwrite process.
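The merge-versus-overwrite distinction above can be modeled with dictionaries, treating each configuration command as a key/value pair. This is a conceptual sketch; the function and variable names are illustrative only.

```python
def merge_into_running(running, incoming):
    """Copying into the running config merges: new commands are added,
    changed parameters overwrite, and untouched commands remain."""
    merged = dict(running)
    merged.update(incoming)
    return merged

def overwrite_startup(startup, incoming):
    """Copying into the startup config overwrites: the old file is replaced."""
    return dict(incoming)

running = {"hostname": "R2", "interface Fa0/0": "ip address 10.0.0.1 255.0.0.0"}
incoming = {"hostname": "Edge", "line vty 0 4": "login"}
print(merge_into_running(running, incoming))
# hostname changes, the vty line is added, and the Fa0/0 address survives
```

This is why restoring a configuration with copy tftp running-config can leave stale commands behind: the merge never removes anything, so a clean restore may require erasing the startup configuration and reloading instead.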

Topic Notes: Cisco Show and Debug Commands


The show and debug commands are built-in tools for troubleshooting. The show command is used to display static information; the debug command is used to display dynamic data and events. When you have a valid Cisco IOS image running on all of the routers in the network, and all the configurations are backed up, you can manually tune configurations for individual devices to improve their performance in the network. Two commands that are used in day-to-day network administration are show and debug. The difference between the two is significant. The show and debug commands have the following functions:

show: Lists the configured parameters and their values, and provides a snapshot of problems with interfaces, media, or network performance.

debug: Allows you to trace the execution of a process and to check the flow of protocol traffic for problems, protocol bugs, or misconfigurations.

The table describes the major differences between the show and debug commands.

show: Provides a static collection of information about the status of a network device, neighboring devices, and network performance. Use show commands when gathering facts for isolating problems in an internetwork, including problems with interfaces, nodes, media, servers, clients, or applications.

debug: Provides a flow of information about the traffic being seen (or not seen) on an interface, error messages that are generated by nodes on the network, protocol-specific diagnostic packets, and other useful troubleshooting data. Use debug commands when operations on the router or network must be viewed to determine whether events or packets are working properly.

Use debug commands to isolate problems, not to monitor normal network operation. The following are some considerations when using debug commands:

Be aware that the debug commands may generate too much data that is of little use for a specific problem. Typically, knowledge of the protocol or protocols being debugged is required to properly interpret the debug output.

Because the high CPU overhead of debug commands can disrupt network device operation, debug commands should be used only when looking for specific types of traffic or problems, and when those problems have been narrowed to a likely subset of causes.

When using the debug troubleshooting tools, be aware that output formats vary with each protocol. Some generate a single line of output per packet, whereas others generate multiple lines of output per packet. Some debug commands generate large amounts of output, whereas others generate only occasional output. Some generate lines of text, and others generate information in field format.

Use of debug commands is suggested for obtaining information about network traffic and router status. Use these commands with great care. If you are not sure about the impact of a debug command, refer to Cisco.com for details or consult with a technical support representative.

This table lists commands that you can use with the debug command.

service timestamps: Use this command to add a time stamp to a debug or log message. This feature can provide valuable information about when debug elements occurred and the duration of time between events.

show processes: Displays the CPU utilization for each process. This data can influence decisions about using a debug command if it indicates that the production system is already too heavily used for adding a debug command.

undebug all: Disables all debug commands. This command can free up system resources after you finish debugging. The no debug all command can also be used to disable all debugging.

terminal monitor: Displays debug output and system error messages for the current terminal and session. When you use Telnet to connect to a device and issue a debug command, you will not see output unless this command is entered.

Because the problem condition is an abnormal situation, you may be willing to temporarily trade off switching efficiency for the opportunity to rapidly diagnose and correct the problem. To effectively use debugging tools, you must consider the following:

The impact that a troubleshooting tool has on router performance
The most selective and focused use of the diagnostic tool
How to minimize the impact of troubleshooting on other processes that compete for resources on the network device
How to stop the troubleshooting tool when diagnosing is complete so that the router can resume its most efficient switching

It is one thing to use debug commands to troubleshoot a lab network that lacks end-user application traffic. It is another thing to use debug commands on a production network that users depend on for data flow. Without proper precautions, the impact of a broadly focused debug command could make matters worse. With proper, selective, and temporary use of debug commands, you can easily obtain potentially useful information without needing a protocol analyzer or other third-party tool. Other considerations for using debug commands are as follows:

Ideally, it is best to use debug commands during periods of lower network traffic and fewer users. Debugging during these periods reduces the effect on other users.
When the information you need from the debug command is interpreted and the debug process (and any other related configuration setting, if any) is stopped, the router can resume its faster switching. Problem-solving can be resumed, a better-targeted action plan can be created, and the network problem can be resolved.

All debug commands are entered in privileged EXEC mode, and most debug commands take no arguments.

Caution: Do not use the debug all command, because this command can cause a system to crash.

To list and see a brief description of all the debugging command options, enter the debug ? command in privileged EXEC mode. By default, the network server sends the output from debug commands and system error messages to the console. When using this default, you should monitor the debugging output using a virtual terminal connection rather than the console port. To redirect debugging output, use the logging command options within configuration mode. Possible destinations include the console, vty, internal buffer, and UNIX hosts running a syslog server. The syslog format is compatible with 4.3 Berkeley Software Distribution (4.3 BSD) UNIX and its derivatives.

Caution: It is important to turn off debugging when you have finished troubleshooting.
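The practices above can be sketched as a typical debug session. This is an illustrative outline only; the debug target (ICMP) and hostname are examples, so substitute whatever is relevant to the problem being diagnosed.

```
! Illustrative debug workflow on a Cisco IOS device
RouterX# terminal monitor                           ! show debug output on this vty session
RouterX# configure terminal
RouterX(config)# service timestamps debug datetime msec
RouterX(config)# end
RouterX# show processes                             ! check CPU load before adding debug overhead
RouterX# debug ip icmp                              ! a narrowly focused debug target
! ... reproduce the problem and capture the timestamped output ...
RouterX# undebug all                                ! always disable debugging when finished
```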

Topic Notes: Small Network Implementation Overview


Cisco IOS CLI Functions
The Cisco IOS Software is the system software in Cisco devices. It is the core technology that extends across most of the Cisco product line. Its operational details vary among internetworking devices, depending on each device's purpose and feature set.

Services that are provided by the Cisco IOS Software are generally accessed using a CLI. To enter commands into the CLI, type or paste the entries within one of the several console configuration modes. In terminal configuration mode, an incremental compiler is invoked. Each configuration command that is entered is parsed as soon as you press Enter. If there are no syntax errors, the command is executed and stored in the running configuration, and it is effective immediately, but the command is not automatically saved to NVRAM.

Cisco IOS Software uses a hierarchy of commands in its configuration-mode structure. Each configuration mode is indicated with a distinctive prompt. Each configuration mode supports specific Cisco IOS commands that are related to a type of operation on the device. The hierarchical modal structure can be configured to provide security. Different authentication can be required for each hierarchical mode. This functionality controls the level of access that network personnel can be granted. As a security feature, Cisco IOS Software separates the EXEC (executive) sessions into the following two access levels:

User EXEC: Allows access to only a limited number of basic monitoring commands.
Privileged EXEC: Allows access to all device commands, such as those used for configuration and management, and can be password-protected to allow only authorized users to access the device.

Configuration Modes of Cisco IOS Software


The first method of configuration on a Cisco device is the setup utility, which enables you to create a basic initial configuration. For more complex and specific configurations, you can use the CLI to enter terminal configuration mode. From privileged EXEC mode, you can enter global configuration mode using the configure terminal command. Global configuration is the primary configuration mode. From this global configuration mode, CLI configuration changes affect the operation of the device as a whole. From global configuration mode, you can access specific configuration modes, which include, but are not limited to, the following:

Interface: Supports commands that configure operations on a per-interface basis
Subinterface: Supports commands that configure multiple virtual interfaces on a single physical interface
Controller: Supports commands that configure controllers (for example, E1 and T1 controllers)
Line: Supports commands that configure the operation of a terminal line; for example, the console or the vty ports
Router: Supports commands that configure the parameters for one of the routing protocols
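As a sketch, the prompt changes as you move between these modes. The hostname and interface below are illustrative:

```
RouterX# configure terminal
RouterX(config)# interface FastEthernet0/0      ! enter interface configuration mode
RouterX(config-if)# exit                        ! back out one level to global configuration
RouterX(config)# line console 0                 ! enter line configuration mode
RouterX(config-line)# end                       ! or Ctrl-Z: return to privileged EXEC
RouterX#
```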

If you enter the exit command, the router backs out one level, eventually logging out. In general, you enter the exit command from one of the specific configuration modes to return to global configuration mode. Press Ctrl-Z or enter the end command to leave configuration mode completely and return to the privileged EXEC mode. Commands that affect the entire device are called global commands. The hostname and enable secret commands are examples of global commands. Commands that point to or indicate a process or interface that will be configured are called major commands. When entered, major commands cause the CLI to enter a specific configuration mode. Major commands have no effect unless a subcommand that supplies the configuration entry is immediately entered. For example, the major command interface serial 0 has no effect unless a subcommand is used. The subcommand configures a selected interface. The following are examples of some major commands and subcommands that go with them: Major command:
RouterX(config)#interface serial 0

Subcommand:
RouterX(config-if)#shutdown

Major command:
RouterX(config-if)#line console 0

Subcommand:
RouterX(config-line)#password cisco

Major command:
RouterX(config-line)#router rip

Subcommand:

RouterX(config-router)#network 10.0.0.0

Notice that entering a major command switches from one configuration mode to another. Note: You do not need to return to global configuration mode before entering another configuration mode.

Help Facilities of the Cisco IOS CLI


At any time during an EXEC session, you can enter a question mark (?) to get help. The following two types of context-sensitive help are available:

Word help: Displays a list of commands or keywords that start with a specific character or characters. Enter the ? command to get word help for a list of commands that begin with a particular character sequence. Enter the character sequence and follow it immediately by the question mark. Do not include a space before the question mark. The router displays a list of commands that begin with the characters you entered. For example, enter sh? to get a list of commands that begin with the character sequence "sh."

Note: You can use context-sensitive help to get a list of available commands. This functionality can be used when you are unsure of the name for a command or if you want to see if the Cisco IOS CLI supports a particular command in a particular mode. For example, to list the commands available at the user EXEC level, type ? at the Router > prompt.

Command syntax help: Used to determine which options, keywords, or arguments are matched with a specific command. Enter the ? command to get command syntax help for completing a command. Enter a question mark in place of a keyword or argument. Include a space before the question mark. The network device then displays a list of available command options. "<cr>" represents a carriage return.

If you submit a command by pressing the Enter key and the interpreter cannot understand the command, it will provide feedback describing what is wrong with the command. There are three different types of error messages:

Ambiguous command: The Cisco IOS Software returns an error message indicating that not enough characters were entered for the command interpreter to recognize a unique command.
Incomplete command: The Cisco IOS Software returns an error message to indicate that required keywords or arguments were left off the end of the command.
Incorrect command: The Cisco IOS Software returns a "^" underneath the command to indicate the point at which the command interpreter cannot decipher the command.
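The following console excerpt sketches how these three error messages typically appear; the exact wording can vary by Cisco IOS release, and the commands shown are only examples.

```
RouterX# te
% Ambiguous command:  "te"              ! more than one command begins with "te"
RouterX# clock
% Incomplete command.                    ! required keywords or arguments are missing
RouterX# clok
     ^
% Invalid input detected at '^' marker.  ! interpreter cannot decipher from this point
```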

The Cisco IOS Software buffers past command lines (10 by default) so that entries can be recalled. The buffer is useful for re-entering commands without retyping. By using the Up Arrow and Down Arrow keys, you can view previous commands.

Topic Notes: Understanding VLANs


Issues That Result from a Poorly Designed Network
A poorly designed network has increased support costs, reduced service availability, and limited support for new applications and solutions. Less-than-optimal performance affects end users directly and their access to central resources. Some of the issues that stem from a poorly designed network include the following:

Failure domains: One of the most important reasons to implement an effective network design is to minimize the extent of problems when they occur. When Layer 2 and Layer 3 boundaries are not clearly defined, failure in one network area can have a far-reaching effect.

Broadcast domains: Broadcasts exist in every network. Many applications and network operations require broadcasts to function properly; therefore, it is not possible to eliminate them completely. In the same way that avoiding failure domains involves clearly defining boundaries, broadcast domains should also have clear boundaries. They should also include an optimal number of devices to minimize the negative impact of broadcasts.

Large amount of unknown MAC unicast traffic: Cisco Catalyst switches limit unicast frame forwarding to ports that are associated with the specific unicast address recorded in the MAC address table of the switch. However, when there is no entry corresponding to the frame's destination MAC address, this unicast frame, as is the case with broadcast frames, will be sent to all forwarding ports within the respective VLAN except the port where the frame originally arrived. This behavior is called "unknown MAC unicast flooding." Because this type of flooding causes excessive traffic on switch ports, network interface cards (NICs) must contend with a larger number of frames on the wire. When data is propagated on a wire for which it was not intended, security could be compromised.

Multicast traffic on ports where not intended: IP multicast is a technique that allows IP traffic to be propagated from one source to a multicast group that is identified by a single MAC destination group address or a single IP and MAC destination group-address pair. Like unicast flooding and broadcasting, multicast frames are flooded out of all of the switch ports within the respective VLAN. A proper design allows for the containment of multicast frames while allowing them to be functional.

Difficulty in management and support: A poorly designed network may be disorganized, poorly documented, and lack easily identified traffic flows, which can make support, maintenance, and problem resolution time-consuming and arduous.

Possible security vulnerabilities: A switched network that has been designed with little attention to security requirements at the access layer can compromise the integrity of the entire network.

A poorly designed network always has a negative impact and becomes a support and cost burden for any organization.

VLAN Overview

Network performance can affect productivity in an organization and its reputation for delivering as promised. VLANs contribute to network performance by separating large broadcast domains into smaller segments. A VLAN allows a network administrator to create logical groups of network devices. These devices act as if they were on their own independent network, even if they share a common infrastructure with other VLANs.

A VLAN is a logical broadcast domain that can span multiple physical LAN segments. Within the switched internetwork, VLANs provide segmentation and organizational flexibility. You can design a VLAN structure that lets you group stations that are segmented logically by functions, project teams, and applications without regard to the physical location of the users. VLANs allow you to implement access and security policies to particular groups of users. You can assign each switch port to only one VLAN, which adds a layer of security (if the port is operating as an access port). Ports in the same VLAN share broadcasts, whereas ports in different VLANs do not share broadcasts. Containing broadcasts within a VLAN improves the overall performance of the network. A VLAN can exist on a single switch or span multiple switches. VLANs can include stations in a single building or multiple-building infrastructures. VLANs can also connect across WANs.

The process of forwarding network traffic from one VLAN to another VLAN using a router is called inter-VLAN routing. VLANs are associated with unique IP subnets on the network. This subnet configuration facilitates the routing process in a multi-VLAN environment. When using a router to facilitate inter-VLAN routing, the router interfaces can be connected to separate VLANs. Devices on those VLANs send traffic through the router to reach other VLANs.

Grouping Business Functions into VLANs


Each VLAN in a switched network corresponds to an IP network. Therefore, VLAN design must take into consideration the implementation of a hierarchical network-addressing scheme. Hierarchical network addressing means that IP network numbers are applied to network segments or VLANs in an orderly fashion that takes into consideration the network as a whole. Blocks of contiguous network addresses are reserved for and configured on devices in a specific area of the network. Some of the benefits of hierarchical addressing include the following:

Ease of management and troubleshooting: A hierarchical addressing scheme groups network addresses contiguously. Because a hierarchical IP addressing scheme makes problem components easier to locate, network management and troubleshooting are more efficient.
Fewer errors: Orderly network address assignment can minimize errors and duplicate address assignments.
Reduced routing table entries: In a hierarchical addressing plan, routing protocols are able to perform route summarization, allowing a single routing table entry to represent a collection of IP network numbers. Route summarization makes routing table entries more manageable and provides these benefits:
o Fewer CPU cycles when recalculating a routing table or sorting through the routing table entries to find a match
o Reduced router memory requirements
o Faster convergence after a change in the network
o Easier troubleshooting

Applying IP Address Space in the Enterprise Network


The Cisco Enterprise Architecture model provides a modular framework for designing and deploying networks. It also provides the ideal structure for overlaying a hierarchical IP addressing scheme. The following are some guidelines:

Design the IP addressing scheme in blocks of 2^n contiguous network numbers (such as 4, 8, 16, 32, 64, and so on). These blocks of IP addresses can be assigned to the subnets in a given building distribution and access switch block. This approach lets you summarize each switch block into one large address block.
At the building distribution layer, continue to assign network numbers contiguously out to the access layer devices.
Have a single IP subnet correspond to a single VLAN. Each VLAN is a separate broadcast domain.
When possible, subnet at the same binary value on all network numbers to avoid variable-length subnet masks. This approach helps minimize errors and confusion when troubleshooting or configuring new devices and segments.
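As a worked example of the 2^n guideline (the addresses are chosen purely for illustration):

```
! Four contiguous /24 networks assigned to one switch block:
!   10.1.0.0/24, 10.1.1.0/24, 10.1.2.0/24, 10.1.3.0/24
! The third octet spans binary 00000000 through 00000011,
! so the entire block summarizes into a single routing table entry:
!   10.1.0.0/22   (mask 255.255.252.0)
```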

Considering Traffic Source to Destination Paths


Network management

Many different types of network management traffic can be present on the network. Some examples are bridge protocol data units (BPDUs), Cisco Discovery Protocol updates, Simple Network Management Protocol (SNMP) traffic, and Remote Monitoring (RMON) traffic. To make network troubleshooting easier, some designers assign a separate VLAN to carry certain types of network management traffic.
IP telephony

There are two types of IP telephony traffic: signaling information between end devices (IP phones and softswitches, such as Cisco Unified Communications Manager) and the data packets of the voice conversation itself. Designers often configure the data to and from the IP phones on a separate VLAN designated for voice traffic. Quality of service (QoS) measures can be applied to these VLANs to give high priority to voice traffic.
IP multicast

IP multicast traffic is sent from a particular source address to a multicast group that is identified by a single IP and MAC destination-group address pair. Examples of applications that generate this type of traffic are Cisco IP/TV broadcasts and imaging software that is used to quickly configure workstations and servers. Multicast traffic can produce a large amount of data streaming across the network. For example, video traffic from online training, security applications, Cisco Unified MeetingPlace, and Cisco TelePresence is proliferating on some networks. Switches must be configured to keep this traffic from flooding to devices that have not requested it. Routers must be configured to ensure that multicast traffic is forwarded to the network areas where it is requested.
Normal data

Normal data traffic is typical application traffic that is related to file and print services, email, Internet browsing, database access, and other shared network applications. This data may need to be treated the same way or in different ways in different parts of the network, depending on the volume of each type. Examples of this type of traffic are Server Message Block (SMB), NetWare Core Protocol (NCP), Simple Mail Transfer Protocol (SMTP), Structured Query Language (SQL), and HTTP.
Scavenger class

Scavenger class includes all traffic with protocols or patterns that exceed their normal data flows. This type of traffic is used to protect the network from exceptional traffic flows that may be the result of malicious programs executing on end-system PCs. Scavenger class is also used for "less than best effort" traffic, such as peer-to-peer traffic.

Voice VLAN Essentials


Modern networks can support converged services where video and voice traffic is merged with data traffic. When implementing VoIP in the network, all network requirements, including power and capacity planning, must be examined. When the data network goes down, it may not come back up for minutes or even hours. This delay is unacceptable for telephony users. Administrators must provide an uninterruptible power supply (UPS) to these devices in addition to providing network availability. IP phones are best implemented with Power over Ethernet (PoE). Power can be supplied to the IP phones directly from Cisco Catalyst switches with inline power capabilities or by inserting a Cisco Catalyst Inline Power Patch Panel. Voice traffic has stringent quality of service (QoS) requirements. Voice traffic usually generates a smooth demand on bandwidth and has minimal impact on other traffic as long as voice traffic is managed. VoIP traffic requirements include the following:

Assured bandwidth and voice quality
Transmission priority over other types of network traffic
Ability to be routed around congested areas on the network
One-way overall delay of less than 150 ms across the network

If both the user PCs and the IP phones are on the same VLANs, each will try to use the available bandwidth without considering the other device. The simplest method to avoid a conflict is to use separate VLANs for IP telephony traffic and data traffic.

Some Cisco Catalyst switches offer a unique feature that is called a voice VLAN, which lets you overlay a voice topology onto a data network. You can segment phones into separate logical networks, even though the data and voice infrastructure are physically the same. The voice VLAN feature places the phones into their own VLANs without any end-user intervention. The user simply plugs the phone into the switch, and the switch provides the phone with the necessary VLAN information.

There are several advantages to using voice VLANs. You can seamlessly maintain these VLAN assignments, even if the phones move to new locations. By placing phones into their own VLANs, you gain the advantages of network segmentation and control. It also allows you to preserve your existing IP topology for the data end stations and easily assign IP phones to different IP subnets using standards-based DHCP operation. In addition, with the phones in their own IP subnets and VLANs, you can more easily identify and troubleshoot network problems and create and enforce QoS or security policies.

With the voice VLAN feature, you have all of the advantages of the physical infrastructure convergence, while maintaining separate logical topologies for voice and data terminals. This configuration creates the most effective way to manage a multiservice network.
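A minimal sketch of a voice VLAN on a Catalyst access port follows. The VLAN numbers and interface are illustrative, and the exact commands can vary by platform and Cisco IOS version:

```
SwitchX(config)# interface FastEthernet0/4
SwitchX(config-if)# switchport mode access
SwitchX(config-if)# switchport access vlan 10      ! data VLAN for the PC attached to the phone
SwitchX(config-if)# switchport voice vlan 110      ! voice VLAN for the Cisco IP phone
```

The switch then advertises the voice VLAN to the phone (typically through Cisco Discovery Protocol), so no end-user configuration is required.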

VLAN Operation
A Cisco Catalyst switch operates in a network like a traditional bridge. Each VLAN that you configure on the switch implements address learning, forwarding, filtering decisions, and loop avoidance mechanisms, as if the VLAN were a separate physical bridge. The Cisco Catalyst switch implements VLANs by restricting traffic forwarding to destination ports that are in the same VLAN as the originating ports. When a frame arrives on a switch port, the switch must retransmit the frame to only the ports that belong to the same VLAN. In essence, a VLAN that is operating on a switch limits transmission of unicast, multicast, and broadcast traffic. Traffic originating from a particular VLAN floods to only the other ports in that VLAN. A port normally carries only the traffic for the single VLAN to which it belongs. For a VLAN to span across multiple switches, a trunk is required to connect two switches. A trunk can carry traffic for multiple VLANs.

VLAN Membership Modes


You configure ports that belong to a VLAN with a membership mode that determines to which VLAN they belong. Cisco Catalyst switch ports can belong to one of these VLAN membership modes:

Static VLAN: An administrator statically configures the assignment of VLANs to ports.

Dynamic VLAN: Cisco Catalyst switches support dynamic VLANs using a VLAN Membership Policy Server (VMPS). Some Cisco Catalyst switches can be designated as the VMPS; you can also designate an external server. The VMPS contains a database that maps MAC addresses to VLAN assignments. When a frame arrives at a dynamic port on the Cisco Catalyst access switch, the switch queries the VMPS for the VLAN assignment. The VLAN assignment is based on the source MAC address of the arriving frame. A dynamic port can belong to only one VLAN at a time. Multiple hosts can be active on a dynamic port only if they all belong to the same VLAN. This mode is not commonly used in today's networks and is beyond the scope of this course.

Voice VLAN: A voice VLAN port is an access port that is attached to a Cisco IP phone. The Cisco IP phone must be configured to use one VLAN for voice traffic and another VLAN for data traffic. The data traffic is received from a device that is attached to the phone.
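A static VLAN assignment, the most common membership mode, can be sketched as follows (the VLAN number, name, and interface are illustrative):

```
SwitchX(config)# vlan 20
SwitchX(config-vlan)# name Engineering
SwitchX(config-vlan)# exit
SwitchX(config)# interface FastEthernet0/6
SwitchX(config-if)# switchport mode access      ! make this a static access port
SwitchX(config-if)# switchport access vlan 20   ! assign the port to VLAN 20
```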

Topic Notes: Understanding Trunking with 802.1Q and VTP


Understanding Trunking with 802.1Q
A trunk is a point-to-point link between one or more Ethernet switch interfaces and another networking device, such as a router or a switch. Ethernet trunks carry the traffic of multiple VLANs over a single link and allow you to extend the VLANs across an entire network. A trunk does not belong to a specific VLAN; rather, it is a conduit for VLANs between switches and routers. A special protocol is used to carry multiple VLANs over a single link between two devices. Cisco supports the 802.1Q trunking protocol for Fast Ethernet and Gigabit Ethernet interfaces. A trunk could also be used between a network device and a server or other device that is equipped with an appropriate 802.1Q-capable NIC. Ethernet trunk interfaces support different trunking modes. You can configure an interface as trunking or nontrunking, or have it negotiate trunking with the neighboring interface. By default on a Cisco Catalyst switch, all configured VLANs are carried over a trunk interface. Traffic on the native VLAN of an 802.1Q port is untagged. End stations send untagged frames to the switch, so each access port's native VLAN is the VLAN to which that port has been assigned. On an 802.1Q trunk port, however, there is one native VLAN whose frames are untagged (VLAN 1 by default), and frames for all other VLANs are tagged with a VLAN ID (VID).
802.1Q Frame

When Ethernet frames are placed on a trunk, they need additional information about the VLANs that they belong to. This task is accomplished by using the 802.1Q encapsulation header. IEEE 802.1Q uses an internal tagging mechanism that inserts a 4-byte tag field into the original Ethernet frame between the Source Address and Type or Length fields. Because 802.1Q alters the frame, the trunking device recomputes the frame check sequence (FCS) on the modified frame. It is the responsibility of the Ethernet switch to look at the 4-byte tag field and determine where to deliver the frame. A small part of the 4-byte tag field (3 bits, to be exact) is used to specify the priority of the frame. The details are specified in the IEEE 802.1p standard. The 802.1Q header contains the 802.1p field, so you must have 802.1Q to have 802.1p.
802.1Q Native VLAN

An 802.1Q trunk and its associated trunk ports have a native VLAN value. When configuring an 802.1Q trunk, a matching native VLAN must be defined on each end of the trunk link. 802.1Q does not tag frames for the native VLAN. Therefore, ordinary stations can read the native untagged frames but cannot read any other frame because the frames are tagged. Note: The default native VLAN is VLAN 1.
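As a sketch, an 802.1Q trunk with a matching native VLAN might be configured like this on each end of the link. The interface and VLAN 99 are illustrative, and some Catalyst models also require `switchport trunk encapsulation dot1q` before the trunk mode command:

```
SwitchX(config)# interface GigabitEthernet0/1
SwitchX(config-if)# switchport mode trunk
SwitchX(config-if)# switchport trunk native vlan 99
! Configure the same native VLAN on the far end of the trunk;
! a mismatch typically produces native VLAN mismatch log messages.
```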

Understanding VLAN Trunking Protocol


As the size of the network for a small- or medium-sized business grows, the management that is involved in maintaining the network grows. Cisco has developed VTP to simplify management of the VLAN database across multiple switches. VTP is a Layer 2 messaging protocol that maintains VLAN configuration consistency by managing the additions, deletions, and name changes of VLANs across networks. VTP minimizes misconfigurations and configuration inconsistencies that can cause problems, such as duplicate VLAN names or incorrect VLAN-type specifications.

A VTP domain is one switch or several interconnected switches sharing the same VTP environment. All switches in a domain share VLAN configuration details by using VTP advertisements. You can configure a switch to be in only one VTP domain. By default, a Cisco Catalyst switch is in the no-management-domain state until it receives an advertisement for a domain over a trunk link or until you configure a management domain.

Configurations that are made to a single VTP server are propagated across trunk links to all of the connected switches in the network. The benefit of VTP is that it automatically distributes and synchronizes domain and VLAN configurations across the network. However, this benefit comes with a cost. You should only add switches that are in their default VTP configuration. If you add a VTP-enabled switch that is configured with settings that supersede existing network VTP configurations, changes that are difficult to fix are automatically propagated throughout the network.

VTP Modes

VTP operates in one of three modes: server, transparent, or client. You can complete different tasks depending on the VTP operation mode. The characteristics of the three VTP modes are as follows:

Server: The default VTP mode is server mode, but VLANs are not propagated over the network until a management domain name is specified or learned. When you make a change to the VLAN configuration on a VTP server, the change is propagated to all switches in the VTP domain. VTP messages are transmitted out of all the trunk connections.
Transparent: When you make a change to the VLAN configuration in VTP transparent mode, the change affects only the local switch and does not propagate to other switches in the VTP domain. VTP transparent mode does forward VTP advertisements that it receives within the domain.
Client: A VTP client behaves like a VTP server and transmits and receives VTP updates on its trunks, but you cannot create, change, or delete VLANs on a VTP client. VLANs are configured on another switch in the domain that is in server mode.
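Setting the VTP domain and mode can be sketched as follows (the domain name is illustrative):

```
SwitchX(config)# vtp domain ICND
SwitchX(config)# vtp mode server        ! alternatives: vtp mode client | vtp mode transparent
SwitchX(config)# end
SwitchX# show vtp status                ! verify mode, domain name, and revision number
```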

Cisco IOS VTP servers and clients save VLANs to the vlan.dat file in flash memory, causing them to retain the VLAN table and revision number. Switches that are in VTP transparent mode display the VLAN and VTP configurations in the show running-config command output because this information is also stored in the configuration text file.

Caution: The erase startup-config command does not affect the vlan.dat file on Cisco IOS switches. VTP clients with a higher configuration revision number can overwrite VLANs on a VTP server in the same VTP domain. Delete the vlan.dat file and reload the switch to clear the VTP and VLAN information. See the documentation for your specific switch model to determine how to delete the vlan.dat file.
VTP Operation

VTP advertisements are flooded throughout the management domain. VTP advertisements are sent every five minutes or whenever there is a change in VLAN configurations. Advertisements are transmitted (untagged) over the native VLAN (VLAN 1 by default) using a multicast frame. A configuration revision number is included in each VTP advertisement. A higher configuration revision number indicates that the VLAN information being advertised is more current than the stored information. One of the most critical components of VTP is the configuration revision number. Each time a VTP server modifies its VLAN information, the VTP server increments the configuration revision number by one. The server then sends out a VTP advertisement with the new configuration revision number. If the configuration revision number being advertised is higher than the number stored on the other switches in the VTP domain, the switches overwrite their VLAN configurations with the new information that is being advertised. The configuration revision number in VTP transparent mode is always zero.

Note: In the overwrite process, if the VTP server deleted all of the VLANs and had the higher revision number, the other devices in the VTP domain would also delete their VLANs. A device that receives VTP advertisements must check various parameters before incorporating the received VLAN information. First, the management domain name and password in the advertisement must match those values that are configured on the local switch. Next, if the configuration revision number indicates that the message was created after the configuration currently in use, the switch incorporates the advertised VLAN information. On many Cisco Catalyst switches, you can change the VTP domain to another name and then change it back to reset the configuration revision number, or alternatively, change the mode to transparent and then back to the previous setting.
VTP Pruning

VTP pruning prevents unnecessary flooding of broadcast information from one VLAN across all trunks in a VTP domain. VTP pruning permits switches to negotiate which VLANs are assigned to ports at the other end of a trunk. At the same time, the VLANs that are not assigned to ports on the remote switch are pruned. By default, a trunk connection carries traffic for all VLANs in the VTP management domain. It is not unusual for some switches in an enterprise network to have no ports configured in a given VLAN. VTP pruning increases available bandwidth by restricting flooded traffic to those trunk links that the traffic must use to reach the appropriate network devices. You can enable pruning only on Cisco Catalyst switches that are configured as VTP servers, not on clients. The default setting for VTP pruning depends on the model of the Cisco Catalyst switch.
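On many Catalyst switches, pruning is enabled with a single global command on a VTP server, and the setting then propagates through the domain; a brief sketch:

```
Switch(config)# vtp pruning   ! enable pruning domain-wide (VTP server only)
Switch(config)# end
Switch# show vtp status       ! "VTP Pruning Mode" should show Enabled
```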

Topic Notes: VTP Configuration


Configuring VLANs and Trunks
The steps that you use to configure and verify VLANs on a switched network include the following:

Step 1: Determine whether to use VTP. If VTP will be used, enable VTP in server, client, or transparent mode.
Step 2: Enable trunking on the interswitch connections.
Step 3: Create the VLANs on a VTP server and have those VLANs propagate to other switches.
Step 4: Assign switch ports to a VLAN using static or dynamic assignment.
Step 5: Execute adds, moves, and changes of ports.
Step 6: Save the VLAN configuration.

Note: The steps need not be performed in exactly the order listed; depending on the configuration, some steps might be optional. It may also take up to 5 minutes for the changes to be reflected in the network.

VTP Configuration
When creating VLANs, you must decide whether to use VTP in your network. With VTP, you can make configuration changes on one or more switches, and those changes are automatically communicated to all other switches in the same VTP domain. Default VTP configuration values depend on the switch model and the software version. The default values for Cisco Catalyst switches are as follows:

VTP domain name: <Null>
VTP mode: Server
VTP password: None
VTP pruning: Enabled/Disabled (model specific)
VTP version: Version 1
Configuration revision: 0

The VTP domain name can be specified or learned. By default, the domain name is <Null>. You can set a password for the VTP management domain. However, if you do not assign the same password to each switch in the domain, VTP will not function properly. Note: The domain name cannot be reset to <Null> unless the VLAN database is deleted. VTP pruning eligibility is one VLAN parameter that the VTP protocol advertises. Enabling or disabling VTP pruning on a VTP server propagates the change throughout the management domain. Use the vtp global configuration command to modify the VTP configuration, including the domain name, mode, password, pruning, interface, and storage filename. Use the no form of this command to remove the filename or to return to the default settings. When the VTP mode is transparent, you can save the VTP configuration in the switch configuration file by entering the copy running-config startup-config privileged EXEC command.

Note: The domain name and password are case sensitive. You cannot remove a domain name after it is assigned; you can only reassign it.
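A minimal VTP configuration sketch using the vtp command described above (the domain name LAB and the password are placeholder values):

```
Switch(config)# vtp mode server        ! or client / transparent
Switch(config)# vtp domain LAB         ! case-sensitive domain name
Switch(config)# vtp password cisco123  ! must match on every switch in the domain
Switch(config)# end
Switch# show vtp status                ! verify mode, domain, and revision number
```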

Topic Notes: 802.1Q Trunk Configuration


The 802.1Q protocol carries traffic for multiple VLANs over a single link on a multivendor network. 802.1Q trunks impose several limitations on the trunking strategy for a network. You should consider the following:

Ensure that the native VLAN for an 802.1Q trunk is the same on both ends of the trunk link. If they are different, spanning-tree loops might result.
Ensure that native VLAN frames are untagged.

Note: If 802.1Q trunk configuration is not the same on both ends, the Cisco IOS will report error messages. Use the switchport mode interface configuration command to set a Fast Ethernet or Gigabit Ethernet port to trunk mode. Many Cisco Catalyst switches support the Dynamic Trunking Protocol (DTP), which manages automatic trunk negotiation. Dynamic Trunking Protocol (DTP) is a Cisco proprietary protocol. Switches from other vendors do not support DTP. DTP is automatically enabled on a switch port when certain trunking modes are configured on the switch port. DTP manages trunk negotiation only if the port on the other switch is configured in a trunk mode that supports DTP. There are four options for the switchport mode command:

Trunk: Configures the port into permanent 802.1Q trunk mode and negotiates with the connected device to convert the link to trunk mode.
Access: Disables port trunk mode and negotiates with the connected device to convert the link to nontrunk.
Dynamic desirable: Triggers the port to negotiate the link from nontrunk to trunk mode. The port negotiates to a trunk port if the connected device is in the trunk, desirable, or auto state. Otherwise, the port becomes a nontrunk port.
Dynamic auto: Enables a port to become a trunk only if the connected device has the state set to trunk or desirable. Otherwise, the port becomes a nontrunk port.

The switchport nonegotiate interface command specifies that DTP negotiation packets are not sent on the Layer 2 interface. The switch does not engage in DTP negotiation on this interface. This command is valid only when the interface switchport mode is access or trunk (configured by using the switchport mode access or the switchport mode trunk interface configuration command). This command returns an error if you attempt to execute it in dynamic (auto or desirable) mode. Use the no form of this command to return to the default setting. When you configure a port with the switchport nonegotiate command, the port trunks only if the other end of the link is specifically set to trunk. The switchport nonegotiate command does not form a trunk link with ports in either dynamic desirable or dynamic auto mode.

To verify a trunk configuration on many Cisco Catalyst switches, use the show interfaces switchport and show interfaces trunk commands. These two commands display the trunk parameters and VLAN information of the port.
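A trunk configuration sketch for an interswitch link (the interface number is illustrative; on platforms that also support ISL, the switchport trunk encapsulation dot1q command may be required first):

```
Switch(config)# interface FastEthernet0/24
Switch(config-if)# switchport mode trunk    ! permanent 802.1Q trunk mode
Switch(config-if)# switchport nonegotiate   ! suppress DTP frames
Switch(config-if)# end
Switch# show interfaces FastEthernet0/24 switchport
Switch# show interfaces trunk               ! lists trunking ports and allowed VLANs
```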

Topic Notes: Configuring VLANs


VLAN Creation
Before you create VLANs, you must decide whether to use VTP to maintain global VLAN configuration information for your network. The maximum number of VLANs is switch-dependent; many access layer Cisco Catalyst switches can support up to 250 user-defined VLANs.

Cisco Catalyst switches have a factory default configuration in which various default VLANs are preconfigured to support various media and protocol types. The default Ethernet VLAN is VLAN 1. Cisco Discovery Protocol and VTP advertisements are sent on VLAN 1. If you want to communicate with the Cisco Catalyst switch remotely for management purposes, the switch must have an IP address. This IP address must be in the management VLAN, which by default is VLAN 1.

If VTP is configured, before you can create a VLAN, the switch must be in VTP server mode or VTP transparent mode. By default, a switch is in VTP server mode, so you can add, change, or delete VLANs. If the switch is set to VTP client mode, you cannot add, change, or delete VLANs.

For many Cisco Catalyst switches, you use the vlan global configuration command to create a VLAN and enter VLAN configuration mode. Use the no form of this command to delete the VLAN. To add a VLAN to the VLAN database, assign a number and name to the VLAN. VLAN 1 is the factory default VLAN. Normal-range VLANs are identified with a number between 1 and 1001. VLAN numbers 1002 through 1005 are reserved for Token Ring and FDDI VLANs. If the switch is in VTP server or VTP transparent mode, you can add, modify, or remove configurations for VLANs 2 to 1001 in the VLAN database. (VLAN IDs 1 and 1002 to 1005 are automatically created and cannot be removed.)

Note: When the switch is in VTP transparent mode and the enhanced software image is installed, you can also create extended-range VLANs (VLANs with IDs from 1006 to 4094). These VLANs are not saved in the VLAN database.

Configurations for VLAN IDs 1 to 1005 are written to the vlan.dat file (VLAN database), which is stored in flash memory. You can display the VLANs by entering the show vlan privileged EXEC command.

To add an Ethernet VLAN, you must specify at least a VLAN number. If no name is entered for the VLAN, the default name is the word VLAN followed by the zero-padded VLAN number; for example, VLAN0004 would be the default name for VLAN 4 if no name is specified.

After you configure the VLAN, validate the parameters for that VLAN. Use the show vlan id vlan_number or the show vlan name vlan-name command to display information about a particular VLAN. Use the show vlan command to display information on all configured VLANs. The show vlan command displays the switch ports that are assigned to each VLAN. Other VLAN parameters that are displayed include the type, the security association ID (SAID), the maximum transmission unit (MTU), the STP instance, and other parameters that are used for Token Ring or FDDI VLANs. The default type is Ethernet. The SAID is used for the FDDI trunk.
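The creation and verification steps above can be sketched as follows (the VLAN number and name are illustrative):

```
Switch(config)# vlan 4
Switch(config-vlan)# name Engineering   ! optional; default name would be VLAN0004
Switch(config-vlan)# end
Switch# show vlan id 4                  ! verify the new VLAN's parameters
```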

VLAN Port Assignment


When an end system is connected to a switch port, it should be associated with a VLAN, in accordance with the network design. To associate a device with a VLAN, the switch port to which the device connects is assigned to a single data VLAN and thus becomes an access port. A switch port can become an access port through static or dynamic configuration.

After creating a VLAN, you can manually assign a port or a number of ports to that VLAN. A port can belong to only one VLAN at a time. When you assign a switch port to a VLAN using this method, it is known as a static-access port. Note: By default, all ports are members of VLAN 1.

On most Cisco Catalyst switches, you configure the VLAN port assignment from interface configuration mode using the switchport access vlan command. To configure a bundle of interfaces to a VLAN, use the interface range command. Use the vlan vlan_number option to set static-access membership. Use the dynamic option to have the VLAN controlled and assigned by a VMPS.

Use the show vlan brief privileged EXEC command to display the VLAN assignment and membership type for all switch ports. The show vlan brief command displays one line for each VLAN. The output for each VLAN includes the VLAN name, the status, and the switch ports.

Alternatively, use the show interfaces switchport privileged EXEC command to display the VLAN information for a particular interface.
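A static port-assignment sketch using the interface range command (interface and VLAN numbers are illustrative):

```
Switch(config)# interface range FastEthernet0/5 - 8
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 4   ! static-access membership
Switch(config-if-range)# end
Switch# show vlan brief                             ! confirm Fa0/5-8 appear under VLAN 4
```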

Adds, Moves, and Changes for VLANs


As network topologies, business requirements, and individual assignments change, VLAN requirements also change. To add, change, or delete VLANs, the switch must be in VTP server or transparent mode. When you make VLAN changes from a switch that is in VTP server mode, the VLAN database is updated locally and propagated to other switches automatically. VLAN changes made from a switch in VTP transparent mode affect only the local switch; changes are not propagated to the domain.

Adding VLANs and Port Membership


After you create a new VLAN, be sure to make the necessary changes to the VLAN port assignments. Separate VLANs typically imply separate IP networks. Be sure to plan the new IP addressing scheme and its deployment to stations before moving users to the new VLAN. Separate VLANs also require inter-VLAN routing to permit users in the new VLAN to communicate with other VLANs. Inter-VLAN routing includes setting up the appropriate IP parameters and services, including default gateway and DHCP.

Changing VLANs and Port Membership


To modify VLAN attributes, such as VLAN name, use the vlan global configuration command. Note: You cannot change the VLAN number. To use a different VLAN number, create a new VLAN using a new number, and then reassign all ports to this VLAN. To move a port into a different VLAN, use the switchport access vlan vlan# command on the interface. There is no need to first remove a port from a VLAN to make this change. After you reassign a port to a new VLAN, that port is automatically removed from its previous VLAN.

Deleting VLANs and Port Membership


When you delete a VLAN from a switch that is in VTP server mode, the VLAN is removed from all switches in the VTP domain. When you delete a VLAN from a switch that is in VTP transparent mode, the VLAN is deleted only on that specific switch. Use the global configuration command no vlan vlan-id to remove a VLAN.

Note: Before deleting a VLAN, be sure to first reassign all member ports to a different VLAN. Any port that is not moved to an active VLAN will be unable to communicate with other stations after you delete the VLAN, because the switchport access vlan vlan# command remains present on that port even though the VLAN no longer exists in the VLAN database. To reassign a port to the default VLAN (VLAN 1), use the no switchport access vlan command in interface configuration mode.
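The deletion workflow above might look like this in practice (interface and VLAN numbers are illustrative):

```
Switch(config)# interface range FastEthernet0/5 - 8
Switch(config-if-range)# switchport access vlan 10   ! reassign member ports first
Switch(config-if-range)# exit
Switch(config)# no vlan 4                            ! then remove the VLAN
Switch(config)# end
Switch# show vlan brief                              ! VLAN 4 is no longer listed
```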

Topic Notes: The Redundant Switched Topology


Building a Redundant Switched Topology
Choosing Interconnection Technologies

A number of technologies are available to interconnect devices in a switched network. The interconnection technology that you select depends on the amount of traffic the link must carry. You will likely use a mixture of copper and fiber-optic cabling that is based on distances, noise immunity requirements, security, and other business requirements. Some of the more common technologies are as follows:

Fast Ethernet (100 Mb/s Ethernet): This LAN specification (IEEE 802.3u) operates at 100 Mb/s over twisted-pair cable. The Fast Ethernet standard raises the speed of Ethernet from 10 to 100 Mb/s with only minimal changes to the existing cable structure. A switch that has ports that function at both 10 and 100 Mb/s can move frames between ports without Layer 2 protocol translation.
Gigabit Ethernet: An extension of the IEEE 802.3 Ethernet standard, Gigabit Ethernet increases speed tenfold over that of Fast Ethernet, to 1000 Mb/s, or 1 Gb/s. IEEE 802.3z specifies operations over fiber optics, and IEEE 802.3ab specifies operations over twisted-pair cable.
10-Gigabit Ethernet: 10-Gigabit Ethernet (IEEE 802.3ae) was formally ratified as an 802.3 Ethernet standard in June 2002. This technology is the next step for scaling the performance and functionality of an enterprise. With the deployment of Gigabit Ethernet becoming more common, 10-Gigabit Ethernet will become typical for uplinks.
EtherChannel: This feature provides link aggregation of bandwidth over Layer 2 links between two switches. EtherChannel bundles individual Ethernet ports into a single logical port or link. All interfaces in each EtherChannel bundle must be configured with the same speed, duplex, and VLAN membership.

Determining Equipment and Cabling Needs

There are four objectives in the design of any high-performance network: security, availability, scalability, and manageability. This list describes the equipment and cabling decisions that you should consider when altering the infrastructure:

Replace hubs and legacy switches with new switches at the building access layer. Select equipment with the appropriate port density at the access layer to support the current user base while preparing for growth. Some designers begin by planning for about 30 percent growth. If budget allows, use modular access switches to accommodate future expansion.
Consider planning for the support of inline power and quality of service (QoS) if you think you might implement IP telephony in the future.
When building the cable plant from the building access layer to the building distribution layer devices, remember that these links will carry aggregate traffic from the end nodes at the access layer to the building distribution switches. Ensure that these links have adequate bandwidth capability. You can use EtherChannel bundles here to add bandwidth as necessary.
At the distribution layer, select switches with adequate performance to manage the load of the current access layer. In addition, plan some port density for adding trunks later to support new access layer devices. The devices at this layer should be multilayer (Layer 2 and Layer 3) switches that support routing between the workgroup VLANs and network resources. Depending on the size of the network, the building distribution layer devices can be fixed chassis or modular. Plan for redundancy in the chassis and in the connections to the access and core layers, as business objectives dictate.
The campus backbone equipment must support high-speed data communications between other submodules. Be sure to size the backbone for scalability and plan for redundancy.

EtherChannel Overview

The increasing deployment of switched Ethernet to the desktop can be attributed to the proliferation of bandwidth-intensive applications. Any-to-any communications of new applications, such as video to the desktop, interactive messaging, and collaborative whiteboarding, are increasing the need for scalable bandwidth. At the same time, mission-critical applications call for resilient network designs. With the wide deployment of faster switched Ethernet links in the campus, organizations must either aggregate their existing resources or upgrade the speed of their uplinks and core to scale performance across the network backbone.

EtherChannel is a technology that Cisco originally developed as a LAN switch-to-switch technique for inverse multiplexing of multiple Fast Ethernet or Gigabit Ethernet switch ports into one logical channel. The benefit of EtherChannel is that it is effectively cheaper than higher-speed media while using existing switch ports. The following are advantages of EtherChannel:

It allows for the creation of a very high-bandwidth logical link.
It load-shares among the physical links involved.
It provides automatic failover.
It simplifies subsequent logical configuration (configuration is per logical link instead of per physical link).

EtherChannel technology provides bandwidth scalability within the campus by providing the following aggregate bandwidth:

Fast Ethernet: Up to 800 Mb/s
Gigabit Ethernet: Up to 8 Gb/s
10-Gigabit Ethernet: Up to 80 Gb/s

Each of these connection speeds can vary in amounts equal to the speed of the links used (100 Mb/s, 1 Gb/s, or 10 Gb/s). Even in the most bandwidth-demanding situations, EtherChannel technology helps aggregate traffic and keeps oversubscription to a minimum, while providing effective link-resiliency mechanisms.
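As a rough sketch, an EtherChannel bundle might be configured as follows (the interface range, channel-group number, and the LACP mode are illustrative; matching commands must be applied on both ends of the bundle):

```
Switch(config)# interface range FastEthernet0/23 - 24
Switch(config-if-range)# channel-group 1 mode active   ! bundle the ports via LACP
Switch(config-if-range)# exit
Switch(config)# interface Port-channel1
Switch(config-if)# switchport mode trunk               ! configure the logical link once
Switch(config-if)# end
Switch# show etherchannel summary                      ! verify the bundle is in use
```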

Redundant designs can eliminate the possibility of a single point of failure causing a loss of function for the entire switched or bridged network. At the same time, you must consider problems that redundant designs can cause. Some of the problems that can occur with redundant links and devices in switched or bridged networks are as follows:

Broadcast storms: Without some loop-avoidance process in operation, each switch or bridge floods broadcasts endlessly. This situation is commonly called a broadcast storm.
Multiple frame transmission: Multiple copies of unicast frames may be delivered to destination stations. Many protocols expect to receive only a single copy of each transmission. Multiple copies of the same frame can cause unrecoverable errors.
MAC database instability: Instability in the content of the MAC address table results from copies of the same frame being received on different ports of the switch. Data forwarding can be impaired when the switch consumes resources coping with instability in the MAC address table.

Layer 2 LAN protocols, such as Ethernet, lack a mechanism to recognize and eliminate endlessly looping frames. Some Layer 3 protocols implement a Time to Live (TTL) mechanism that limits the number of times a Layer 3 networking device can retransmit a packet. Lacking such a mechanism, Layer 2 devices continue to retransmit looping traffic indefinitely. A loop-avoidance mechanism is required to solve each of these problems. The Spanning Tree Protocol (STP) was developed to address these issues.

Recognizing Issues of a Redundant Switched Topology


Switch Behavior with Broadcast Frames

Switches process broadcast and multicast frames differently from how they process unicast frames. Because broadcast and multicast frames may be of interest to all stations, the switch or bridge normally floods broadcast and multicast frames to all ports except the originating port. A switch or bridge never learns a broadcast or multicast address because broadcast and multicast addresses never appear as the source address of a frame. This flooding of broadcast and multicast frames can potentially cause a problem in a redundant switched topology.
Broadcast Storms

A broadcast storm occurs when each switch on a redundant network floods broadcast frames endlessly. Switches flood broadcast frames to all ports except the port on which the frame was received. A broadcast storm can disrupt normal traffic flow. It can also disrupt all the devices on the switched or bridged network because the CPU in each device on the segment must process the broadcast. In this way, a broadcast storm can lock up the PCs and servers that are trying to process all the broadcast frames.

A loop-avoidance mechanism eliminates this problem by preventing one of the redundant interfaces from transmitting frames during normal operation, therefore breaking the loop.
Multiple Frame Transmissions

In a redundant topology, multiple copies of the same frame can arrive at the intended host, potentially causing problems with the receiving protocol. Most protocols are not designed to recognize or cope with duplicate transmissions. In general, protocols that make use of a sequence-numbering mechanism assume that many transmissions have failed and that the sequence number has recycled. Other protocols attempt to hand the duplicate transmission to the appropriate upper-layer protocol (ULP), with unpredictable results.
MAC Database Instability

MAC database instability results when multiple copies of a frame arrive on different ports of a switch. This subtopic describes how MAC database instability can arise and the problems that can result.

Topic Notes: Resolving Issues with STP


The Spanning Tree Protocol (STP) provides loop resolution by managing the physical paths to given network segments. STP allows physical path redundancy while preventing the undesirable effects of active loops in the network. STP is an IEEE committee standard that is defined as 802.1D. It specifies the spanning-tree algorithm that Layer 2 devices can use to create a loop-free logical topology. STP behaves as follows:

STP allows Layer 2 devices to communicate with each other to discover physical loops in the network.
STP forces certain ports into a standby state so that they do not listen to, forward, or flood data frames. The overall effect is that there is only one active path to each network segment at any time. In other words, STP creates a tree structure of loop-free leaves and branches that spans the entire Layer 2 network.
If there is a problem with connectivity to any of the segments within the network, STP reestablishes connectivity by automatically activating a previously inactive path, if one exists.

Spanning-Tree Operation
STP executes an algorithm called the spanning-tree algorithm. The spanning-tree algorithm chooses a reference point, called a root bridge, and then determines the available paths to that reference point. If more than one path exists, the spanning-tree algorithm picks the best path and blocks the rest. STP performs three steps to provide a loop-free logical network topology:

1. Elects one root bridge: STP has a process to elect a root bridge. Only one bridge can act as the root bridge in a given network. On the root bridge, all ports are designated ports. Designated ports are normally in the forwarding state. When in the forwarding state, a port can send and receive traffic.
2. Selects the root port on each non-root bridge: STP establishes one root port on each non-root bridge. The root port is the lowest-cost path from the non-root bridge to the root bridge. Root ports are normally in the forwarding state. Spanning-tree path cost is an accumulated cost that is calculated from the bandwidth of the links.
3. Selects the designated port on each segment: On each segment, STP establishes one designated port. The designated port is selected on the bridge that has the lowest-cost path to the root bridge. Designated ports are normally in the forwarding state, forwarding traffic for the segment.

Nondesignated ports are normally in the blocking state to logically break the loop topology. When a port is in the blocking state, it is not forwarding traffic but can still receive traffic. Blocking the redundant paths is critical to preventing loops on the network. The physical paths still exist to provide redundancy, but these paths are disabled to prevent loops from occurring. If a path is ever needed to compensate for a network cable or switch failure, STP recalculates the paths and unblocks the necessary ports to allow the redundant path to become active.

Switches and bridges running the spanning-tree algorithm exchange configuration messages with other switches and bridges at regular intervals (every 2 seconds by default). Switches and bridges exchange these messages using a frame that is called the bridge protocol data unit (BPDU).
One of the pieces of information that is included in the BPDU is the bridge ID (BID). STP requires that a unique BID be assigned to each switch or bridge. Typically, the BID comprises a priority value (2 bytes) and the bridge MAC address (6 bytes). The default priority, in accordance with IEEE 802.1D, is 32,768 (1000 0000 0000 0000 in binary, or 0x8000 in hex format), which is the midrange value. The root bridge is the bridge with the lowest BID.

Note: A Cisco Catalyst switch uses one of its MAC addresses from a pool of MAC addresses that are assigned either to the backplane or to the supervisor module, depending on the switch model.

There are five STP port states:

Blocking
Listening
Learning
Forwarding
Disabled

Note: The disabled state is not strictly part of STP; a network administrator can manually disable STP on a specific port.

When STP is enabled, every bridge in the network goes through the blocking state and the transitory states of listening and learning when powering up. If properly configured, the ports then stabilize to the forwarding or blocking state. Forwarding ports provide the lowest-cost path to the root bridge. During a topology change, a port temporarily implements the listening and learning states.

All bridge ports initially start in the blocking state, from which they listen for BPDUs. When the bridge first boots, it functions as if it were the root bridge and transitions to the listening state. An absence of BPDUs for a certain period is called the maximum age (max_age), which has a default of 20 seconds. If a port is in the blocking state and does not receive a new BPDU within the max_age, the bridge transitions from the blocking state to the listening state. When a port is in the transitional listening state, it is able to send and receive BPDUs to determine the active topology. At this point, the switch is not passing any user data. During the listening state, the bridge performs these three steps:

Selects the root bridge
Selects the root ports on the non-root bridges
Selects the designated ports on each segment

The time that it takes for a port to transition from the listening state to the learning state, or from the learning state to the forwarding state, is called the forward delay. The forward delay has a default value of 15 seconds. The learning state reduces the amount of flooding required when data forwarding begins. If a port is still a designated or root port at the end of the learning state, the port transitions to the forwarding state. In the forwarding state, a port is capable of sending and receiving user data. Ports that are not designated or root ports transition back to the blocking state.

A port normally transitions from the blocking state to the forwarding state in 30 to 50 seconds. You can tune the spanning-tree timers to adjust the timing, but these timers are meant to be left at their default values. The default values give the network enough time to gather all the correct information about the network topology.

Note: For switch ports that connect only to end-user stations (not to another switch or bridge), you should enable a Cisco Catalyst switch feature called PortFast. A switch port that has PortFast enabled automatically transitions from the blocking state to the forwarding state when it first comes up. This behavior is acceptable because no loops can be formed through the port, since no other switches or bridges are connected to it.
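To observe the port roles, states, and timers described above, you can use the show spanning-tree commands (the VLAN and interface identifiers here are placeholders):

```
Switch# show spanning-tree vlan 1
! Displays the root bridge ID, this switch's bridge ID, the hello, max age,
! and forward delay timers, and each port's role and state for VLAN 1.
Switch# show spanning-tree interface FastEthernet0/1 detail
! Shows the port's path cost, port priority, and state information.
```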

Default Spanning-Tree Configuration


Cisco Catalyst switches support three types of spanning-tree protocols: PVST+, PVRST+, and MSTP.

PVST+ (Per VLAN Spanning Tree Plus): Based on the 802.1D standard and includes Cisco proprietary extensions, such as BackboneFast, UplinkFast, and PortFast.
PVRST+ (Rapid PVST+): Based on the 802.1w standard and has faster convergence than 802.1D.
MSTP (Multiple Spanning Tree Protocol, 802.1s): Combines the best aspects of PVST+ and the IEEE standards.

Describing PortFast
PortFast is a Cisco technology. When a switch port that is configured with PortFast is configured as an access port, that port transitions from the blocking to the forwarding state immediately, bypassing the typical STP listening and learning states. You can use PortFast on access ports, which are connected to a single workstation or to a server, to allow those devices to connect to the network immediately rather than waiting for spanning tree to converge.

In a valid PortFast configuration, configuration BPDUs should never be received, because receiving a BPDU indicates that another bridge or switch is connected to the port, potentially causing a spanning-tree loop. Cisco switches support a feature called BPDU guard, which, when enabled, puts the port in an errdisable state (effectively shut down) on receipt of a BPDU.

Note: Because the purpose of PortFast is to minimize the time that access ports, connecting to user equipment and servers, must wait for spanning tree to converge, it should be used only on access ports. If you enable PortFast on a port connecting to another switch, you risk creating a spanning-tree loop.

The spanning-tree portfast interface command configures PortFast on an interface. The spanning-tree portfast default global configuration command enables PortFast on all nontrunking interfaces. The show running-config interface fa0/1 command shows the configuration on the FastEthernet0/1 interface, including the spanning-tree portfast configuration, if it exists.
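A sketch combining PortFast with BPDU guard on a single access port (the interface number is illustrative):

```
Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport mode access
Switch(config-if)# spanning-tree portfast         ! skip listening/learning states
Switch(config-if)# spanning-tree bpduguard enable ! err-disable the port if a BPDU arrives
Switch(config-if)# end
Switch# show running-config interface fa0/1       ! confirm the PortFast configuration
```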

Per VLAN Spanning Tree+


The 802.1D standard defines a Common Spanning Tree (CST) that assumes only one spanning-tree instance for the entire switched network, regardless of the number of VLANs. In a network running CST, these statements are true:

No load sharing is possible; one uplink must block for all VLANs.
The CPU is spared; only one instance of spanning tree must be computed.

PVST+ defines a spanning-tree protocol that has several spanning-tree instances running for the network (one instance of STP per VLAN). In a network running several spanning-tree instances, these statements are true:

Optimum load sharing can result.
One spanning-tree instance for each VLAN maintained can mean a considerable waste of CPU cycles for all the switches in the network (in addition to the bandwidth that each instance uses to send its own BPDUs). This is a problem only if a large number of VLANs are configured.

PVST+ Operation
In a Cisco PVST+ environment, you can tune the spanning-tree parameters so that half of the VLANs forward on each uplink trunk. To achieve this, the configuration must define a different root bridge for each half of the VLANs. Providing different STP root switches per VLAN creates a more redundant network.

Spanning-tree operation requires that each switch have a unique BID. In the original 802.1D standard, the BID was composed of the bridge priority and the MAC address of the switch, and a CST represented all VLANs. PVST+ requires that a separate instance of spanning tree run for each VLAN, so the BID field must carry VLAN ID (VID) information. This functionality is accomplished by reusing a portion of the Priority field as the extended system ID to carry a VID. To accommodate the extended system ID, the original 802.1D 16-bit bridge priority field is split into two fields. The BID includes the following fields:

- Bridge priority: A 4-bit field that is still used to carry the bridge priority. Because only the four most significant bits of the 16-bit field are available, the priority is conveyed in discrete increments of 4096 rather than increments of 1. In binary: priority 0 = [0000|<sys-id-ext #>], priority 4096 = [0001|<sys-id-ext #>], and so on. Increments of 1 would be possible only if the complete 16-bit field were available. The default priority, in accordance with IEEE 802.1D, is 32,768, which is the midrange value.
- Extended system ID: A 12-bit field carrying, in this case, the VID for PVST+.
- MAC address: A 6-byte field with the MAC address of a single switch.

By virtue of the MAC address, a BID is always unique. When the priority and extended system ID are prepended to the switch MAC address, each VLAN on the switch can be represented by a unique BID. For example, the default BID for VLAN 2 is: Bridge ID Priority 32770 (priority 32768 sys-id-ext 2). If no priority has been configured, every switch has the same default priority, and the election of the root for each VLAN is based on the MAC address. This amounts to a random selection rather than the choice of an ideal root bridge. For this reason, it is recommended to assign a lower priority to the switch that should serve as the root bridge.
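The relationship between the configured priority and the extended system ID can be observed by setting a per-VLAN priority and checking the resulting BID. The VLAN number and priority value here are illustrative:

```
Switch(config)# spanning-tree vlan 2 priority 24576
Switch(config)# end
Switch# show spanning-tree vlan 2
  ...
  Bridge ID  Priority  24578 (priority 24576 sys-id-ext 2)
```

The displayed priority, 24578, is the configured 4-bit priority (24576) plus the extended system ID (VLAN 2).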

Per VLAN Rapid Spanning Tree Plus

The RSTP (802.1w) standard uses CST, which assumes only one spanning-tree instance for the entire switched network, regardless of the number of VLANs. Per VLAN Rapid Spanning Tree Plus (PVRST+) defines a spanning-tree protocol that has one instance of RSTP per VLAN.

Multiple Spanning Tree Protocol


Multiple Spanning Tree Protocol (MSTP), originally defined in IEEE 802.1s and later merged into IEEE 802.1Q-2005, defines a spanning-tree protocol that has several spanning-tree instances running for the network. But unlike PVRST+, which has one instance of RSTP per VLAN, MSTP reduces the switch load by allowing a single instance of spanning tree to run for multiple VLANs.
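As a sketch of how MSTP reduces the instance count, multiple VLANs can be mapped to one MST instance. The region name, revision number, and VLAN-to-instance mappings below are illustrative:

```
Switch(config)# spanning-tree mode mst
Switch(config)# spanning-tree mst configuration
Switch(config-mst)# name REGION1
Switch(config-mst)# revision 1
Switch(config-mst)# instance 1 vlan 10, 20, 30
Switch(config-mst)# instance 2 vlan 40, 50, 60
Switch(config-mst)# exit
```

Here six VLANs are served by only two spanning-tree instances instead of six.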

Topic Notes: Spanning-Tree Protocol Operation


Spanning-Tree Path Cost
Switches use the concept of cost to evaluate how close they are to other bridges. The spanning-tree path cost is an accumulated total cost that is based on the bandwidth of all the links in the path. The 802.1D specification has been revised; in the older specification, the cost was calculated by dividing 1000 Mb/s by the bandwidth of the link. The revised specification uses a nonlinear scale to accommodate higher-speed interfaces. Note: Most Cisco Catalyst switches incorporate the revised cost calculations. A key point to remember about STP cost is that lower costs are better.

Spanning-Tree Recalculation
When there is a topology change because of a bridge or link failure, spanning tree adjusts the network topology to ensure connectivity by placing blocked ports in the forwarding state.

STP Convergence
Convergence in STP is a state in which all the switch and bridge ports have transitioned to either the forwarding or the blocking state. Convergence is necessary for normal network operations. For a switched or bridged network, a key issue is the amount of time that is required for convergence when the network topology changes.

Fast convergence is a desirable network feature because it reduces the amount of time that bridge and switch ports are in transitional states and not sending any user traffic. The normal convergence time is 30 to 50 seconds for 802.1D STP.

Topic Notes: Rapid Spanning-Tree Protocol


RSTP, specified in the IEEE 802.1w standard, supersedes STP as specified in 802.1D, while remaining compatible with STP. RSTP can be seen as an evolution of the 802.1D standard rather than a revolution. The 802.1D terminology remains primarily the same, and most parameters are unchanged, so users familiar with 802.1D can configure the new protocol comfortably.

RSTP negates the need for the 802.1D delay timers, significantly reducing the time to reconverge the active topology of the network when changes to the physical topology or its configuration parameters occur. RSTP defines the additional port roles of alternate and backup, and it defines port states as discarding, learning, or forwarding. RSTP requires a full-duplex point-to-point connection between adjacent switches to achieve fast convergence.

RSTP selects one switch as the root of a spanning-tree active topology and assigns port roles to individual ports on the switch, depending on whether the ports are part of the active topology. RSTP provides rapid connectivity following the failure of a switch, a switch port, or a LAN. A new root port and the designated port on the other side of the bridge transition to forwarding through an explicit handshake between the ports, so the delay timers of 802.1D are not necessary. RSTP allows switch port configuration so that the ports can transition to forwarding directly when the switch reinitializes. RSTP defines the port roles as follows:

- Root: A port that is elected for the spanning-tree topology.
- Designated: A port that is elected for every switched LAN segment.
- Alternate: An alternate path to the root bridge that is different from the path that the root port takes.
- Backup: A backup path that provides a redundant (but less desirable) connection to a segment that is already connected to another port on the same switch (the designated port for that segment). Backup ports can exist only where two ports are connected in a loopback by a point-to-point link, or on a bridge with two or more connections to a shared LAN segment.
- Disabled: A port that has no role within the operation of spanning tree.

The root and designated port roles include the port in the active topology. The alternate and backup port roles exclude the port from the active topology.

Configuring RSTP

To implement PVRST+, perform these steps:

Step 1: Enable PVRST+.
Step 2: Designate and configure a switch to be the root bridge (optional).
Step 3: Designate and configure a switch to be the secondary (backup) root bridge (optional).
Step 4: Verify the configuration.

The spanning-tree mode rapid-pvst command is used to configure PVRST+ on Cisco Catalyst switches. The show spanning-tree vlan 2 command is used to verify the spanning-tree configuration for VLAN 2. The debug spanning-tree pvst+ command is used to display PVRST+ event debug messages.

If all the switches in a network are enabled with the default spanning-tree settings, the switch with the lowest MAC address becomes the root bridge. However, the default root bridge might not be the ideal root bridge, because of traffic patterns, the number of forwarding interfaces, or link types. Before you configure STP, select a switch to be the root of the spanning tree. This switch does not need to be the most powerful switch, but it should be the most centralized switch on the network. All data flow across the network occurs from the perspective of this switch. The distribution layer switches often serve as the spanning-tree root because these switches typically do not connect to end stations. In addition, moves and changes within the network are less likely to affect these switches.

By increasing the priority of the preferred switch (that is, lowering its numerical priority value), you make it the root bridge. A change in priority forces spanning tree to perform a recalculation, which reflects the new topology with the preferred switch as the root. The switch with the lowest BID becomes the root bridge for spanning tree for a VLAN. You can use specific configuration commands to help determine which switch will become the root bridge.
A Cisco Catalyst switch running PVST+ or PVRST+ maintains an instance of spanning tree for each active VLAN that is configured on the switch. A unique BID is associated with each instance. For each VLAN, the switch with the lowest BID becomes the root bridge for that VLAN. Whenever the bridge priority changes, the BID also changes, which results in the recomputation of the root bridge for the VLAN.

To configure a switch to become the root bridge for VLAN 1, use the command spanning-tree vlan 1 root primary. With this command, the switch checks the priority of the root switches for VLAN 1. Because of the extended system ID support, the switch sets its own priority for the specified VLAN to 24576 if this value will cause the switch to become the root for that VLAN. If another switch for the specified VLAN has a priority lower than 24576, the switch on which you are configuring the spanning-tree vlan vlan-id root primary command sets its own priority for the specified VLAN to 4096 less than the lowest switch priority. Note: Spanning-tree commands take effect immediately, so network traffic is interrupted while reconfiguration occurs.

A secondary root is a switch that can become the root bridge for a VLAN if the primary root bridge fails. To configure a switch as the secondary root bridge for the VLAN, use the command spanning-tree vlan 1 root secondary. With this command, the switch priority is modified from the default value of 32768 to 28672. Assuming that the other bridges in the VLAN retain their default STP priority, this switch becomes the root bridge if the primary root bridge fails. You can execute this command on more than one switch to configure multiple backup root bridges. To verify the root and secondary bridges, use the show spanning-tree command or related commands.
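The primary and secondary root commands described above can be sketched as follows; the VLAN number is illustrative:

```
! On the switch intended as the root bridge for VLAN 1:
SwitchA(config)# spanning-tree vlan 1 root primary

! On the switch intended as the backup root bridge for VLAN 1:
SwitchB(config)# spanning-tree vlan 1 root secondary

! Verify which switch is the root:
SwitchA# show spanning-tree vlan 1
```

Alternatively, you can set an explicit priority (a multiple of 4096) with spanning-tree vlan 1 priority 24576 instead of using the root primary macro.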

Topic Notes: Understanding Inter-VLAN Routing


Each VLAN is a unique broadcast domain, so computers on separate VLANs are, by default, unable to communicate. The way to permit these end stations to communicate is a solution called inter-VLAN routing: communication between broadcast domains via a Layer 3 device, such as a router or a Layer 3 switch. VLANs perform network partitioning and traffic separation at Layer 2 and are usually associated with unique IP subnets on the network. This subnet configuration facilitates the routing process in a multi-VLAN environment. Inter-VLAN communication cannot occur without a Layer 3 device.

Inter-VLAN routing is the process of forwarding network traffic from one VLAN to another VLAN using a router. Traditional inter-VLAN routing requires multiple physical interfaces on both the router and the switch: the router interfaces are connected to separate VLANs, and devices on those VLANs send traffic through the router to reach other VLANs. However, not all inter-VLAN routing configurations require multiple physical interfaces. Some router software permits configuring router interfaces as trunk links, which opens up new possibilities for inter-VLAN routing. A "router-on-a-stick" is a type of router configuration in which a single physical interface routes traffic between multiple VLANs on a network.
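A router-on-a-stick can be sketched with 802.1Q subinterfaces. The interface numbers, VLAN IDs, and addresses below are illustrative:

```
Router(config)# interface GigabitEthernet0/0
Router(config-if)# no shutdown
Router(config-if)# interface GigabitEthernet0/0.10
Router(config-subif)# encapsulation dot1Q 10
Router(config-subif)# ip address 172.16.10.1 255.255.255.0
Router(config-subif)# interface GigabitEthernet0/0.20
Router(config-subif)# encapsulation dot1Q 20
Router(config-subif)# ip address 172.16.20.1 255.255.255.0
```

The switch port facing the router must be configured as an 802.1Q trunk, and each subinterface address becomes the default gateway for hosts in the corresponding VLAN.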

Multilayer Switching
Some switches can perform Layer 3 functions, replacing the need for dedicated routers to perform basic routing on a network. Multilayer switches (MLS) are capable of performing inter-VLAN routing. Traditionally, a switch makes forwarding decisions by looking at the Layer 2 header, whereas a router makes forwarding decisions by looking at the Layer 3 header. A multilayer switch combines the functionality of a switch and a router in one device: it switches traffic when the source and destination are in the same VLAN and routes traffic when the source and destination are in different VLANs (that is, on different IP subnets).

To enable a multilayer switch to perform routing functions, the VLAN interfaces on the switch must be properly configured with IP addresses that match the subnets that the VLANs are associated with on the network. The multilayer switch must also have IP routing enabled. Multilayer switching is beyond the scope of this course.

To support 802.1Q trunking on a router, you must subdivide the physical Fast Ethernet interface of the router into multiple logical, addressable interfaces, one per VLAN. The resulting logical interfaces are called subinterfaces. Without this subdivision, you would have to dedicate a separate physical interface to each VLAN.
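Although multilayer switching is beyond the scope of this course, the VLAN interface configuration mentioned above can be sketched as follows; the VLAN IDs and addresses are illustrative:

```
Switch(config)# ip routing
Switch(config)# interface vlan 10
Switch(config-if)# ip address 172.16.10.1 255.255.255.0
Switch(config-if)# no shutdown
Switch(config-if)# interface vlan 20
Switch(config-if)# ip address 172.16.20.1 255.255.255.0
Switch(config-if)# no shutdown
```

With ip routing enabled, the switch routes between the 172.16.10.0/24 and 172.16.20.0/24 subnets internally, with no external router required.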

Topic Notes: Configuring Inter-VLAN Routing


Verifying Inter-VLAN Routing
To verify the router configuration, use show commands to display the current (running) configuration, IP routing information, and IP protocol information per VLAN to verify whether the routing table represents the subnets of all VLANs.

The show vlans command displays the information about the Cisco IOS VLAN subinterfaces. The show ip route command displays the current state of the routing table.

Topic Notes: Overview of Switch Security Concerns


Much industry attention surrounds security attacks from outside the walls of an organization and at the upper Open Systems Interconnection (OSI) layers. Network security often focuses on the perimeter of the network: for example, on edge routing devices and the filtering of packets based on Layer 3 and Layer 4 headers, ports, stateful packet inspection, and so on. This focus includes all issues surrounding Layer 3 and above, as traffic makes its way into the campus network from the Internet. Campus access devices and Layer 2 communication are largely unconsidered in most security discussions, because internal users and devices are implicitly trusted in many organizations.

Routers and switches that are internal to an organization and designed to accommodate communication by delivering campus traffic have a default operational mode that forwards all traffic unless they are configured otherwise. Their function as devices that facilitate communication often results in minimal security configuration and renders them targets for malicious attacks. If an attack is launched at Layer 2 on an internal campus device, the rest of the network can be quickly compromised, often without detection.

The consequences of malicious attacks that compromised Layer 3 drove the development of better security measures on devices within the campus. For the same reason, Layer 2 also requires better security measures to guard against attacks that are launched by maliciously leveraging normal Layer 2 switch operations. Many security features are available for switches and routers, but they must be enabled to be effective. In the same way that you implement access control lists (ACLs) for upper-layer security, you must establish a policy and configure appropriate features to protect against potential malicious acts while maintaining daily network operations.

Network security vulnerabilities include loss of privacy, data theft, impersonation, and loss of data integrity. You should take basic security measures on every network to mitigate adverse effects of user negligence or acts of malicious intent. Recommended practices dictate that you should follow these general steps whenever placing new equipment in service: Step 1: Follow established organizational security policies. Step 2: Secure switch protocols.

Organizational Security Policies


You should follow the policies of an organization when determining what level and type of security you want to implement. You must balance the goal of reasonable network security against the administrative overhead that is clearly associated with extremely restrictive security measures. A well-established security policy has these characteristics:

- Provides a process for auditing existing network security
- Provides a general security framework for implementing network security
- Defines behaviors toward electronic data that are not allowed
- Determines which tools and procedures are needed for the organization
- Communicates consensus among a group of key decision makers and defines responsibilities of users and administrators
- Defines a process for managing network security incidents
- Enables an enterprise-wide, all-site security implementation and enforcement plan

Securing Switch Devices


Follow these recommended practices for secure switch access:

- Secure physical access to the switch: Physical security is the first step in securing the devices. Securing physical access provides anti-theft protection and prevents intruders from using local ports to access and change the device configuration. The console port is typically used to configure the devices, so physical access security is important. Securing physical access alone is not enough, because networking devices allow remote access as well.
- Set system passwords: Setting passwords is one of the most basic rules of security. Passwords prevent unauthorized access to the device configuration once physical access to the device is possible.
- Secure remote access:
  o Use SSH when possible: SSH gives the same type of access as Telnet, but additionally provides secure communication between the SSH client and the SSH server. The communication is encrypted, whereas Telnet traffic is transmitted in plaintext. Not all devices support the SSH protocol; where it is supported, SSH version 2 is recommended.
  o Secure access via Telnet: Cisco networking devices support remote access for managing the device configuration. Telnet is a common software tool, and securing access via Telnet (for example, with username and password checking) is an important recommended practice for switch security.
  o Disable HTTP, enable HTTPS: Cisco IOS Software provides an integrated HTTP server for management. To avoid unauthorized access, it is highly recommended that you disable the HTTP server. If HTTP access to the switch is required, use basic ACLs to control the access, and use HTTPS, the secure version of HTTP, so that the traffic is encrypted rather than sent in plaintext.
- Configure system warning banners: The Cisco IOS command set allows you to configure messages that anyone logging onto the switch sees. Configuring a warning banner to display before login is a convenient and effective way to reinforce security and general usage policies.
- Disable unneeded services: Cisco devices implement multiple default TCP and UDP servers to facilitate management and integration into existing environments. For most installations, these services are typically not needed and can be disabled to reduce overall security exposure.
- Use syslog if available: Cisco devices offer a logging facility to assist and simplify troubleshooting and security investigations.

Topic Notes: Securing Access to the Switch


Securing your switches starts with protecting them from unauthorized access. Privileged EXEC mode allows any user entering that mode on a Cisco switch to configure any option available on the switch and to view all the currently configured settings, including some of the unencrypted passwords. For these reasons, it is important to secure access to privileged EXEC mode.

The enable password global configuration command allows you to specify a password to restrict access to privileged EXEC mode. However, one problem with the enable password command is that it stores the password in readable text in the startup-config and running-config. If someone gained access to a stored startup-config file, or temporary access to a Telnet or console session that is logged in to privileged EXEC mode, they could see the password. As a result, Cisco introduced a new password option to control access to privileged EXEC mode that stores the password in an encrypted format.

You can assign an encrypted form of the enable password, called the enable secret password, by entering the enable secret command with the desired password at the global configuration mode prompt. If the enable secret password is configured, it is used instead of the enable password, not in addition to it. Because the enable secret command simply applies a Message Digest 5 (MD5) hash to the configured password, that password remains vulnerable to dictionary attacks. Therefore, apply standard practices in selecting a feasible password. Try to pick passwords that contain letters, numbers, and special characters. For example, choose "$pecia1$" instead of "specials," in which the "s" has been replaced with "$," and the "l" has been replaced with "1" (one).

You can perform all configuration options directly from the console or through remote access. Access to the console requires local physical access to the device. Once physical access is available, the only security that is left is a password-protected console port; without that protection, a malicious user could compromise the switch configuration. To secure the console port from unauthorized access, set a password on it. Use the line console 0 command to switch from global configuration mode to line configuration mode for console 0, where a password can be applied. Use the password command in line configuration mode to set the line password on the console port. Console 0 is the console port on Cisco switches and routers. When a console password is defined, the device does not use it until login is enabled; to ensure that a user on the console port is required to enter the line password, use the login command.

The virtual terminal or vty ports on a Cisco switch allow you to access the device remotely. Any user with network access to the switch can establish a vty remote terminal connection, as no physical access is needed to access vty ports.
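The enable secret and console password commands described above can be sketched as follows; the passwords are placeholders:

```
Switch(config)# enable secret MyS3cret!
Switch(config)# line console 0
Switch(config-line)# password C0ns0leP@ss
Switch(config-line)# login
```

After the login command is applied, anyone connecting to the console port is prompted for the line password before reaching user EXEC mode.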
You can perform all configuration options using the vty ports. To secure the vty ports from unauthorized access, you can set a vty line password that is required before access is granted. To set the password on the vty ports, you must be in line configuration mode. There can be many vty ports available on a Cisco switch; multiple ports permit more than one administrator to connect to and manage the switch. To secure all vty lines, make sure that a password is set and login is enforced on all lines. Leaving some lines unsecured compromises security and allows unauthorized users access to the switch. Note: If the switch has more vty lines available, adjust the range to secure them all. For example, a Cisco 2960 has lines 0 through 15 available. Password protection is not the only protection for Telnet access. Minimum recommended steps for securing Telnet access are as follows:

Configure a line password for all configured vty lines. Apply a basic ACL for in-band access to all vty lines.

If supported by the installed Cisco IOS Software, use the Secure Shell (SSH) protocol instead of Telnet to access the device remotely.
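The minimum steps above can be sketched as follows; the line range, password, and ACL number are illustrative:

```
Switch(config)# access-list 12 permit 10.1.1.0 0.0.0.255
Switch(config)# line vty 0 15
Switch(config-line)# password VtyP@ss123
Switch(config-line)# login
Switch(config-line)# access-class 12 in
```

The access-class 12 in command restricts in-band vty access to addresses permitted by ACL 12 (here, a 10.1.1.0/24 management subnet).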

In the Cisco IOS CLI, by default, all passwords except the enable secret password are stored in cleartext format within the startup-config and running-config. It is common practice that passwords should be encrypted and not stored in cleartext. The Cisco IOS command service password-encryption enables password encryption. When the service password-encryption command is entered from global configuration mode, all system passwords in cleartext form are immediately converted to encrypted passwords. Note: Before you complete the switch configuration, remember to save the running configuration file to the startup configuration.

There are two choices for remotely accessing a vty on a Cisco switch: Telnet and SSH. Telnet is a popular protocol that is used for terminal access. Most current operating systems come with a Telnet client built in, and historically Telnet was the way to manage Cisco switches from a vty. However, Telnet is an insecure way of accessing a network device, because it sends all communications across the network in cleartext. Because of the security concerns of the Telnet protocol, SSH has become the preferred protocol for remotely accessing virtual terminal lines on a Cisco device. SSH gives the same type of access as Telnet with the added benefit of security: communication between the SSH client and the SSH server is encrypted. Several versions of SSH exist. Cisco devices currently support both SSHv1 and SSHv2 (recommended). The ip ssh version 2 command forces all SSH sessions to use SSH version 2; the default on SSHv2-capable devices is to support both. The SSH feature has an SSH server and an SSH integrated client, which are applications that run on the switch. You can use any SSH client running on a PC, or the Cisco SSH client running on the switch, to connect to a switch running the SSH server.
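A minimal SSH setup can be sketched as follows. The hostname, domain name, username, and key size are illustrative; SSH requires a hostname and domain name before the RSA key pair can be generated:

```
Switch(config)# hostname Sw1
Sw1(config)# ip domain-name example.com
Sw1(config)# crypto key generate rsa modulus 1024
Sw1(config)# ip ssh version 2
Sw1(config)# username admin secret Adm1nP@ss
Sw1(config)# line vty 0 15
Sw1(config-line)# transport input ssh
Sw1(config-line)# login local
```

With transport input ssh and login local applied, Telnet connections are refused and SSH users authenticate against the local username database.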
If you want to prevent non-SSH connections such as Telnet, add the transport input ssh command in line configuration mode to limit the switch to SSH connections only. If the transport input all line configuration command is used, all transport protocols are permitted, so both SSH and Telnet access to the switch are allowed. Telnet is the default vty-supported protocol on Cisco switches, so there is normally no need to configure Telnet support on a Cisco switch. However, if you have switched the transport protocol on the vty lines to permit only SSH, you need to re-enable the Telnet protocol manually to permit Telnet access. To re-enable the Telnet protocol on a Cisco 2960 switch, use the transport input telnet line configuration command. Although Cisco IOS Software provides an integrated HTTP server for management, it is highly recommended that you disable it to minimize overall exposure. If HTTP access to the switch is

required, use basic ACLs to permit access only from trusted subnets. The no ip http server global configuration command disables the HTTP server.

The Cisco IOS command set includes a feature that allows you to configure messages that anyone logging onto the switch sees. These messages are called login banners and message of the day (MOTD) banners. For both legal and administrative purposes, configuring a system warning banner to display before login is a convenient and effective way to reinforce security and general usage policies. By clearly stating the ownership, usage, access, and protection policies before a login, you provide better support for potential prosecution. You can define a customized banner to be displayed before the username and password login prompts by using the banner login command in global configuration mode. Enclose the banner text in quotation marks, or use a delimiter different from any character appearing in the banner string. The MOTD banner displays on all connected terminals at login and is useful for sending messages that affect all network users (such as impending device maintenance). The MOTD banner, if configured, displays before the login banner.

By default, Cisco devices implement multiple TCP and User Datagram Protocol (UDP) servers to facilitate management and integration into existing environments. For most installations, these services are typically not required, and disabling them can greatly reduce overall security exposure. Any network device that has UDP, TCP, BOOTP, or finger services should be protected by a firewall or have those services disabled to protect against denial of service attacks. TCP and UDP small servers are servers (daemons, in UNIX parlance) that run in the router and are useful for diagnostics. The TCP and UDP small servers are enabled by default in Cisco IOS Software Version 11.2 and earlier. They may be disabled using the commands no service tcp-small-servers and no service udp-small-servers.
They are disabled by default in Cisco IOS Software Versions 11.3 and later, and it is recommended that you do not enable these services unless absolutely necessary. These services could be exploited indirectly, to gain information about the target system, or directly, as is the case with the fraggle attack, which uses UDP echo.

The finger service allows remote users to view the output equivalent of the show users command. When IP finger is configured, the router responds to a telnet a.b.c.d finger command from a remote host by immediately displaying the output of the show users command and then closing the connection. As with all minor services, the finger service should be disabled on your system if you do not need it in your network. The no service finger command disables the finger service.

Usually, the service config command is used with the boot host or boot network command, though it can be used without them as well. The service config command enables the router to automatically configure the system from the file that is specified by the boot host or boot network command. To disable autoloading of configuration files from a network server, use the no service config global configuration command.
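The banner and service-hardening commands described in this section can be gathered into one sketch; the banner text and delimiter are illustrative:

```
Switch(config)# banner login #
Unauthorized access is prohibited. All activity is logged.
#
Switch(config)# no service tcp-small-servers
Switch(config)# no service udp-small-servers
Switch(config)# no service finger
Switch(config)# no service config
```

The # delimiter must not appear within the banner text itself. On Cisco IOS Software Versions 11.3 and later, the small servers are already disabled, so those two commands simply confirm the default.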

To assist and simplify both problem troubleshooting and security investigations, monitor the switch subsystem information that is received from the logging facility. To make on-system logging useful, increase the default buffer size by using the logging buffered size command. Do not make the buffer size too large, because the router could run out of memory for other tasks. You can use the show memory command to view the free processor memory on the router; the output shows the maximum available size, which should not be approached. The default logging buffered command resets the buffer size to the default for the platform.

To re-enable message logging after it has been disabled, use the logging on global configuration command. With logging enabled, the logging process logs messages to the console and the various destinations after the processes that generated them have completed. When the logging process is disabled, messages are displayed on the console as soon as they are produced, often appearing in the middle of command output. Use the logging console command to send system logging (syslog) messages to all available tty lines and to limit messages based on severity. The console keyword indicates all available tty lines; this can mean a console terminal attached to the router's tty line, a dialup modem connection, or a printer.
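The logging commands above can be sketched as follows; the buffer size and severity level are illustrative:

```
Switch(config)# logging on
Switch(config)# logging buffered 16384
Switch(config)# logging console warnings
Switch# show logging
```

Here the local buffer is raised to 16 KB, and console output is limited to messages of severity warning (level 4) or more severe, reducing console noise during troubleshooting.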

Topic Notes: Securing Switch Protocols


Follow these recommended practices to secure the switch protocols:

- Cisco Discovery Protocol: Cisco Discovery Protocol does not reveal security-specific information, but it is possible for an attacker to exploit this information in a reconnaissance attack, whereby an attacker learns device and IP address information in order to launch other types of attacks. You should follow two practical guidelines for Cisco Discovery Protocol:
  o If Cisco Discovery Protocol is not required, or if the device is located in an unsecured environment, disable Cisco Discovery Protocol globally on the device.
  o If Cisco Discovery Protocol is required, disable it on a per-interface basis on ports that are connected to untrusted networks. Because Cisco Discovery Protocol is a link-level protocol, it is not transient across a network unless a Layer 2 tunneling mechanism is in place. Limit it to run only between trusted devices, and disable it everywhere else. However, Cisco Discovery Protocol is required on any access port where you attach a Cisco IP phone, to establish a trust relationship.
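The two guidelines can be sketched as follows; the interface number is illustrative:

```
! Disable Cisco Discovery Protocol globally:
Switch(config)# no cdp run

! Or keep it running globally but disable it on an untrusted port:
Switch(config)# cdp run
Switch(config)# interface GigabitEthernet0/5
Switch(config-if)# no cdp enable
```

Use show cdp neighbors to confirm which devices are still being learned after the change.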

- Secure the spanning-tree topology: It is important to protect the Spanning Tree Protocol (STP) process of the switches that form the infrastructure. Inadvertent or malicious introduction of STP bridge protocol data units (BPDUs) could potentially overwhelm a device or pose a denial of service (DoS) attack. The first step in stabilizing a spanning-tree installation is to identify the intended root bridge in the design and hard-set the STP bridge priority of that bridge to an acceptable root value. Do the same for the designated backup root bridge. These actions protect against inadvertent shifts in STP that happen when a new switch is introduced without control. On some platforms, the BPDU guard feature may be available. If so, enable it on access ports together with the PortFast feature to protect the network from unwanted BPDU traffic injection. Upon receipt of a BPDU, the BPDU guard feature automatically disables the port.
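The hard-set priority and BPDU guard recommendations can be sketched as follows; the VLAN and priority values are illustrative:

```
! On the intended root bridge:
SwitchA(config)# spanning-tree vlan 10 priority 4096

! On the intended backup root bridge:
SwitchB(config)# spanning-tree vlan 10 priority 8192

! Enable BPDU guard on all PortFast-enabled access ports:
Switch(config)# spanning-tree portfast bpduguard default
```

With bpduguard default enabled, any PortFast port that receives a BPDU is placed in the errdisable state until it is recovered.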

Mitigating Compromises Launched Through a Switch


Follow these recommended practices to mitigate compromises through a switch:

- Proactively configure unused router and switch ports: Execute the shutdown command on all unused ports and interfaces. Place all unused ports in a "parking-lot" VLAN, which is dedicated to grouping unused ports until they are proactively placed into service. Configure all unused ports as access ports, disallowing automatic trunk negotiation.
- Disable automatic negotiation of trunk capabilities: By default, Cisco Catalyst switches that are running Cisco IOS Software are configured to automatically negotiate trunking capabilities. This situation poses a serious hazard to the infrastructure because an unsecured third-party device can be introduced to the network as a valid infrastructure component. Potential attacks include interception of traffic, redirection of traffic, DoS, and more. To avoid this risk, disable automatic negotiation of trunking and manually enable it on links that require it. Ensure that trunks use a native VLAN that is dedicated exclusively to trunk links.
- Physical device access: Closely monitor physical access to the switch to avoid rogue device placement in wiring closets with direct access to switch ports.
- Access port-based security: Specific measures should be taken on every access port of any switch that is placed into service. Ensure that a policy is in place outlining the configuration of unused switch ports in addition to those ports that are in use. For ports that will connect to end devices, you can use a macro called switchport host. When you execute this command on a specific switch port, the switch port mode is set to access, spanning-tree PortFast is enabled, and channel grouping is disabled.

Note: The switchport host macro disables EtherChannel and trunking, and enables STP PortFast.
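A minimal configuration sketch of these practices (VLAN and interface numbers are placeholders):

```
! Unused port: make it an access port, park it, and shut it down
interface FastEthernet0/20
 switchport mode access
 switchport access vlan 999
 shutdown

! In-use access port: apply the switchport host macro
interface FastEthernet0/5
 switchport host

! Manually configured trunk with a dedicated native VLAN, no DTP
interface GigabitEthernet0/1
 switchport mode trunk
 switchport trunk native vlan 900
 switchport nonegotiate
```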

The switchport host command is a macro that executes several configuration commands. You cannot revoke its effect by using the no form of the command, because no such form exists. To return an interface to its default configuration, use the default interface interface-id global configuration command. This command returns all interface configurations to their defaults.

Topic Notes: Port Security


Describing Port Security
Port security is a feature that is supported on Cisco Catalyst switches that restricts a switch port to a specific set or number of MAC addresses. The switch can learn these addresses dynamically or you can configure them statically. A port that is configured with port security accepts frames only from those addresses that it has learned or that you have configured. There are several implementations of port security:

- Dynamic: You specify how many different MAC addresses are permitted to use a port at one time. You use the dynamic approach when you care only about how many, rather than which specific, MAC addresses are permitted. Depending on how you configure the switch, these dynamically learned addresses age out after a certain period, and new addresses are learned, up to the maximum that you have defined.
- Static: You statically configure which specific MAC addresses are permitted to use a port. Any source MAC addresses that you do not specifically permit are not allowed to source frames to the port.
- A combination of static and dynamic learning: You can choose to specify some of the permitted MAC addresses and let the switch learn the rest of the permitted MAC addresses. For example, if the number of MAC addresses is limited to four, this number becomes the maximum. You statically configure two MAC addresses, and the switch dynamically learns the next two MAC addresses that it receives on that port. Port access is limited to these four addresses: two static and two dynamically learned. The two statically configured addresses do not age out, but the two dynamically learned addresses can, depending on the switch configuration.
- "Sticky learning": When this feature is configured on an interface, the interface converts dynamically learned addresses to "sticky secure" addresses. This feature adds the dynamically learned addresses to the running configuration as if they were statically configured using the switchport port-security mac-address command. "Sticky learned" addresses do not age out.
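As a sketch, the combined static-and-dynamic example above might be configured as follows (interface and MAC addresses are placeholders):

```
interface FastEthernet0/5
 switchport mode access
 switchport port-security
 switchport port-security maximum 4
 switchport port-security mac-address 0000.1111.aaaa
 switchport port-security mac-address 0000.1111.bbbb
! Optionally, convert further dynamically learned addresses to sticky secure
 switchport port-security mac-address sticky
```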

Scenario
Process

Imagine five individuals whose laptops are allowed to connect to a specific switch port when they visit an area of the building. You want to restrict switch port access to the MAC addresses of those five laptops and allow no addresses to be learned dynamically on that port.
Step 1: Port security is configured to allow only five connections on that port, and one entry is configured for each of the five allowed MAC addresses.
Step 2: This step populates the MAC address table with five entries for that port and allows no additional entries to be learned dynamically.
Step 3: Allowed frames are processed.
Step 4: When frames arrive on the switch port, their source MAC address is checked against the MAC address table. If the source MAC address matches an entry in the table for that port, the frames are forwarded to the switch to be processed like any other frames on the switch.
Step 5: New addresses are not allowed to create new MAC address table entries.
Step 6: When frames with an unauthorized MAC address arrive on the port, the switch determines that the address is not in the current MAC address table and does not create a dynamic entry for that new MAC address.
Step 7: The switch takes action in response to unauthorized frames.

The switch disallows access to the port and takes one of these configuration-dependent actions: (a) the entire switch port can be shut down; (b) access can be denied for only that MAC address and a log error message is generated; (c) access can be denied for that MAC address but no log message is generated. Note: You cannot apply port security to trunk ports because addresses on trunk links might change frequently. Implementations of port security vary depending on which Cisco Catalyst switch is in use. Check documentation to determine whether and how particular hardware supports this feature.
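The three configuration-dependent actions map to the port-security violation modes on Cisco IOS switches. Only one mode is active on a port at a time; this sketch lists the alternatives (the interface is a placeholder):

```
interface FastEthernet0/5
! (a) shut down the entire port (err-disabled)
 switchport port-security violation shutdown
! (b) drop offending frames and generate a log message
 switchport port-security violation restrict
! (c) drop offending frames silently, with no log message
 switchport port-security violation protect
```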

802.1X Port-Based Authentication

The IEEE 802.1X standard defines a port-based access control and authentication protocol that restricts unauthorized workstations from connecting to a LAN through publicly accessible switch ports. The authentication server authenticates each workstation that is connected to a switch port before making available any services offered by the switch or the LAN. Until the workstation is authenticated, 802.1X access control allows only Extensible Authentication Protocol over LAN (EAPOL), CDP and STP traffic through the port to which the workstation is connected. After authentication succeeds, normal traffic can pass through the port. With 802.1X port-based authentication, the devices in the network have specific roles, as follows:

- Client: The device (workstation) that requests access to the LAN and switch services, and responds to requests from the switch. The workstation must be running 802.1X-compliant client software, such as that offered in the Microsoft Windows XP operating system. The client running the 802.1X software is referred to as the "supplicant" in the IEEE 802.1X specification.
- Authentication server: Performs the actual authentication of the client. The authentication server validates the identity of the client and notifies the switch whether the client is authorized to access the LAN and switch services. Because the switch acts as the proxy, the authentication service is transparent to the client. The RADIUS security system with Extensible Authentication Protocol (EAP) extensions is the only supported authentication server.

- Switch (also called the authenticator): Controls physical access to the network based on the authentication status of the client. The switch acts as an intermediary (proxy) between the client (supplicant) and the authentication server. The switch requests identifying information from the client, verifies that information with the authentication server, and relays a response to the client. The switch uses a RADIUS software agent, which is responsible for encapsulating and decapsulating the EAP frames and interacting with the authentication server.

The switch port state determines whether the client is granted access to the network. The port starts in the unauthorized state. While in this state, the port disallows all ingress and egress traffic except for 802.1X protocol packets. When a client is successfully authenticated, the port transitions to the authorized state, allowing all traffic for the client to flow normally.

If the switch requests the client identity (authenticator initiation) and the client does not support 802.1X, the port remains in the unauthorized state, and the client is not granted access to the network. An 802.1X-enabled client connects to a port and initiates the authentication process (supplicant initiation) by sending an EAPOL-start frame to a switch. If the switch is not running 802.1X, no response is received. After that unsuccessful authentication process, the client begins sending frames as if the port is in the authorized state.

If the client is successfully authenticated (receives an "ACCEPT" frame from the authentication server), the port state changes to authorized, and all frames from the authenticated client are allowed through the port. If the authentication fails, the port remains in the unauthorized state, but authentication can be retried. If the authentication server cannot be reached, the switch can retransmit the request. If no response is received from the server after the specified number of attempts, authentication fails, and network access is not granted. When a client logs out, it sends an EAPOL-logout message, causing the switch port to transition to the unauthorized state.
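A minimal 802.1X sketch on a Cisco IOS switch, assuming a RADIUS server at 10.1.1.100 with shared key MySharedSecret (both placeholders; exact command syntax varies by Cisco IOS release):

```
! Enable AAA and define the RADIUS authentication server
aaa new-model
radius-server host 10.1.1.100 key MySharedSecret
aaa authentication dot1x default group radius

! Enable 802.1X globally, then on each access port
dot1x system-auth-control
interface FastEthernet0/5
 switchport mode access
 dot1x port-control auto
```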

Topic Notes: Troubleshooting Port Connectivity


Troubleshooting Switches
Once a network is operational, administrators have to monitor its performance for the sake of productivity within the organization. From time to time, network outages can occur. Sometimes they are planned, and their impact on the organization is easily managed. Sometimes they are not planned, and their impact on the organization can be severe. If an unexpected network outage occurs, administrators must be able to troubleshoot and bring the network back into complete production. There are many ways to troubleshoot a switch. Developing a troubleshooting approach or test plan works much better than using a random approach. Here are some suggestions to make troubleshooting more effective:

- Become familiar with normal switch operation: The Cisco website (Cisco.com) provides useful technical information about Cisco switches and how they operate. The configuration guides in particular are very helpful.
- For more complex situations, have an accurate physical and logical map of the network on hand: A physical map shows how the devices and cables are connected. A logical map shows the segments (VLANs) that exist in the network and which routers provide routing services to these segments. A spanning-tree map is also very useful for troubleshooting complex issues. Because a switch can create different segments by implementing VLANs, the physical connections alone do not provide all of the necessary information. You must know how the switches are configured to determine which segments (VLANs) exist and how they are logically connected. Always ensure that documentation is up to date, and remember to update it when changes are made.
- Have a plan: Some problems and solutions are obvious; others are not. The symptoms that you see in the network can be the result of problems in another area or layer. Before drawing conclusions, try to verify in a structured way what is working and what is not. Because networks can be complex, it is helpful to isolate possible problem domains. One way to isolate domains is to use the Open Systems Interconnection (OSI) seven-layer model. For example: check the physical connections involved (Layer 1), check connectivity issues within the VLAN (Layer 2), check connectivity issues across different VLANs (Layer 3), and so on. Assuming that the switch is configured correctly, many of the problems you encounter will be related to physical layer issues (physical ports and cabling).
- Do not assume that a component is working; first, verify that it is: If a PC is not able to log into a server across the network, it could be due to any number of things. Do not assume that basic components are working correctly without testing them first; someone else may have altered their configurations and not informed you of the change. It usually takes only a minute to verify the basics (for example, that the ports are correctly connected and active), and it can save you much valuable time.

Troubleshooting Port Connectivity


If you are experiencing connectivity problems and you have verified that the cable from the source interface is properly connected and is in good condition, check the port. Ports are the foundation of the switched network. If they do not work, nothing works. Some ports have special significance because of their location in the network and the amount of traffic they carry. These include ports that have connections to other switches, routers, and servers. They can be more complicated to troubleshoot because they often take advantage of special features, such as trunking and EtherChannel. However, do not overlook the other ports; they are also significant because they connect users in the network.

Hardware Issues
Hardware issues can be one of the reasons that a switch has connectivity problems. To rule out hardware issues, verify the following:

- The port status for both ports that are involved in the link: Ensure that neither is shut down. The administrator may have manually shut down one or both ports, or the switch software may have shut down one of the ports because of a configuration error. If one side is shut down and the other is not, the status on the enabled side will be "notconnected" (because it does not sense a neighbor on the other side of the wire). The status on the shutdown side will say something like "disabled" or "err-disabled" (depending on what actually shut down the port). The link will not be active unless both ports are enabled.
- The type of cable that is used for the connection: Use at least a Category 5 cable for 100-Mb/s connections, and a Category 5e cable for 1 Gb/s or faster. Use a straight-through RJ-45 cable for end stations, routers, or servers to connect to a switch or hub. Use an Ethernet crossover cable for switch-to-switch or hub-to-switch connections. The maximum distance for Ethernet or Fast Ethernet copper wires is 100 m (109.36 yd).
- Whether a software process has disabled a port: A solid orange light on the port indicates that the switch software has shut down the port, either by way of the user interface or by internal processes such as spanning-tree bridge protocol data unit (BPDU) guard, root guard, or port security violations.
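These hardware checks map to a few standard Cisco IOS show commands (the interface name is a placeholder):

```
! Summary of port status (connected, notconnect, disabled, err-disabled)
show interfaces status

! Detail for a single port, including speed, duplex, and error counters
show interfaces FastEthernet0/1

! On platforms that support it, list error-disabled ports
show interfaces status err-disabled
```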

Configuration Issues
Configuration of the port is another possible reason that the port is experiencing connectivity problems. Some of the common configuration issues are as follows:

- The VLAN to which the port belongs has disappeared: Each port in a switch belongs to a VLAN. If that VLAN is deleted, the port becomes inactive.
- Autonegotiation is enabled: Autonegotiation is an optional function of the Fast Ethernet (IEEE 802.3u) standard. It enables devices to automatically exchange information about speed and duplex abilities over a link. Do not use autonegotiation for ports that support network infrastructure devices, such as switches, routers, or other nontransient end systems like servers and printers. Autonegotiating speed and duplex settings is the typical default behavior on switch ports that have this capability. However, you should always configure ports that connect to fixed devices for the correct speed and duplex setting, rather than allowing them to autonegotiate these settings. This configuration eliminates any potential negotiation issues and ensures that you always know exactly how the ports should be operating.
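Hard-coding speed and duplex on a port that connects to a fixed device can be sketched as follows (interface and values are placeholders):

```
interface FastEthernet0/24
 speed 100
 duplex full
! Confirm the operational speed and duplex afterward
show interfaces FastEthernet0/24 status
```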

Topic Notes: Troubleshooting VLANs, Trunking, and VTP


Troubleshooting VLANs and Trunking
If you discover a problem with a VLAN or trunk and do not know what is causing it, start your troubleshooting by examining the trunks for a native VLAN mismatch, and then work down the list. This section summarizes the most common issues that should be checked when you are troubleshooting VLANs and trunking.
Native VLAN Mismatches

The native VLAN that is configured on each end of an IEEE 802.1Q trunk must be the same. Remember that a switch receiving an untagged frame assigns the frame to the native VLAN of the trunk. If one end of the trunk is configured for native VLAN 1 and the other end is configured for native VLAN 2, a frame that is sent from VLAN 1 on one side is received on VLAN 2 on the other. VLAN 1 traffic "leaks" into the VLAN 2 segment. There is no reason that this behavior would be required, and connectivity issues will occur in the network if a native VLAN mismatch exists. Such a configuration error generates console notifications, causes control and management traffic misdirections, and poses a security risk.
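To correct a native VLAN mismatch, configure the same dedicated native VLAN on both ends of the trunk and verify it (interface and VLAN number are placeholders):

```
! Apply on both ends of the trunk
interface GigabitEthernet0/1
 switchport trunk native vlan 99
! Verify the native VLAN on each end
show interfaces GigabitEthernet0/1 trunk
```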
Trunk Mode Mismatches

You should statically configure trunk links whenever possible. One of the most common configuration errors is when one trunk port is configured with trunk mode "off" and the other with trunk mode "on." This configuration error causes the trunk link to stop working. However, Cisco Catalyst switch ports run Dynamic Trunking Protocol (DTP) by default, which tries to automatically negotiate a trunk link. This Cisco proprietary protocol can determine an operational trunking mode and protocol on a switch port when it is connected to another device that is also capable of dynamic trunk negotiation.
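Statically configuring a trunk and disabling DTP negotiation can be sketched as follows (the interface is a placeholder):

```
interface GigabitEthernet0/1
! Some platforms first require: switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
! Check the administrative and operational trunking mode
show interfaces GigabitEthernet0/1 switchport
```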
VLANs and IP Subnets

Each VLAN must correspond to a unique IP subnet. Two devices in the same VLAN should have addresses in the same subnet. With intra-VLAN traffic, the sending device recognizes the destination as local and sends an Address Resolution Protocol (ARP) broadcast to discover the MAC address of the destination. Two devices in different VLANs should have addresses in different subnets. With inter-VLAN traffic, the sending device recognizes the destination as remote and sends an ARP broadcast for the MAC address of the default gateway.

You should also check to see if the list of allowed VLANs on a trunk has been updated with the current VLAN trunking requirements. In this situation, unexpected traffic or no traffic is being sent over the trunk.
Inter-VLAN Connectivity

Inter-VLAN connectivity issues are, most of the time, the result of user misconfiguration. For example, if you incorrectly configure a router-on-a-stick or multilayer switching, then packets from one VLAN may not reach another VLAN. To avoid misconfiguration and to troubleshoot efficiently, you should understand the mechanism that the Layer 3 forwarding device uses. If you are sure that the equipment is properly configured, yet hardware switching is not taking place, then a software bug or hardware malfunction may be the cause. Another type of misconfiguration that affects inter-VLAN routing is misconfiguration on end-user devices such as PCs. A common situation is a misconfigured PC default gateway.

Troubleshooting VTP
Unable to See VLAN Details in the show vlan Command Output

VTP client and server systems require that VTP updates from other VTP servers be saved immediately, without user intervention. A VLAN database was introduced into Cisco IOS Software as a method to immediately save VTP updates for VTP clients and servers. In some versions of the software, this VLAN database is a separate file in flash memory, called the vlan.dat file. You can view the VTP and VLAN information that is stored in the vlan.dat file for a VTP client or VTP server with the show vtp status command.

VTP server and client mode switches do not save the entire VTP and VLAN configuration to the startup-config file in NVRAM when you use the copy running-config startup-config command; the configuration is saved in the vlan.dat file instead. This behavior does not apply to systems that run in VTP transparent mode: VTP transparent switches do save the entire VTP and VLAN configuration to the startup-config file in NVRAM when you use the copy running-config startup-config command. For example, if you delete the vlan.dat file on a VTP server or client mode switch after you have configured VLANs, and then reload the switch, VTP is reset to the default settings (all user-configured VLANs are deleted). But if you delete the vlan.dat file on a VTP transparent mode switch, and then reload the switch, it retains the VTP configuration. These examples illustrate where each VTP mode stores its configuration.

You can configure normal-range VLANs (2 through 1000) when the switch is in either VTP server or transparent mode. But on the Cisco Catalyst 2960 Switch, you can configure extended-range VLANs (1025 through 4094) only on VTP-transparent switches.
Cisco Catalyst Switches Do Not Exchange VTP Information

There are several reasons why VTP fails to exchange the VLAN information. Verify these items if switches that run VTP fail to exchange VLAN information:

- VTP information passes only through a trunk port. Ensure that all ports that interconnect switches are configured as trunks and are actually trunking.
- Ensure that the VLANs are active on all of the VTP server switches.
- One of the switches must be a VTP server in the VTP domain. All VLAN changes must be made on this switch in order to have them propagated to the VTP clients.
- The VTP domain name must match, and it is case-sensitive. For example, "COMPANY" and "company" are different domain names.
- Ensure that no password is set between the server and client. If any password is set, ensure that the password is the same on both sides. The password is also case-sensitive.
- Every switch in the VTP domain must use the same VTP version. VTP version 1 (VTPv1) and VTP version 2 (VTPv2) are not compatible on switches in the same VTP domain. Do not enable VTPv2 unless every switch in the VTP domain supports version 2.

Note: VTPv2 is disabled by default on VTPv2-capable switches. When you enable VTPv2 on a switch, every VTPv2-capable switch in the VTP domain enables version 2. You can configure the version only on switches in VTP server or transparent mode.
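The checklist above maps to a few show commands:

```
! VTP mode, domain name, version, and configuration revision
show vtp status

! VTP password, if one is set
show vtp password

! Confirm that inter-switch links are actually trunking
show interfaces trunk
```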

A switch that is in VTP transparent mode and uses VTPv2 propagates all VTP messages, regardless of the VTP domain that is listed. However, a switch running VTPv1 propagates only VTP messages that have the same VTP domain as the domain that is configured on the local switch. VTP transparent mode switches that are using VTPv1 will drop VTP advertisements if they are not in the same VTP domain.

The extended-range VLANs are not propagated, so you must configure extended-range VLANs manually on each network device.

The updates from a VTP server are not accepted by a client if the client already has a higher VTP configuration revision number. In addition, the client does not propagate the VTP updates to its downstream VTP neighbors if it has a higher revision number than the one that the VTP server sends.

Recently Installed Switch Causes Network Problems

A newly installed switch can cause problems in the network when all of the switches in the network are in the same VTP domain, and you add a switch into the network that does not have the default VTP and VLAN configuration. If the configuration revision number of the switch that you insert into the VTP domain is higher than the configuration revision number on the existing switches of the VTP domain, your recently introduced switch overwrites the VLAN database of the domain with its own VLAN database. This overwriting happens whether the switch is a VTP client or a VTP server. A VTP client can overwrite and in many cases effectively erase VLAN information on a VTP server. A typical indication that this issue has happened is when many of the ports in your network go into an inactive state but continue to be assigned to a nonexistent VLAN. To prevent this problem from occurring, always ensure that the configuration revision number of all switches that you insert into the VTP domain is lower than the configuration revision number of the switches that are already in the VTP domain.
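Before inserting a switch, you can reset its configuration revision number to zero. One common approach, sketched here, is to toggle the switch through VTP transparent mode (another is to change the VTP domain name to a nonexistent name and back):

```
! Toggling through transparent mode resets the revision number to 0
vtp mode transparent
vtp mode client
! Verify the revision number before connecting the switch to the network
show vtp status
```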

All Ports Inactive After Power Cycle

Switch ports move to the inactive state when they are members of VLANs that do not exist in the VLAN database. A common problem is that all of the ports move to this inactive state after a power cycle. Generally, you see this problem when the switch is configured as a VTP client with the uplink trunk port on a VLAN other than VLAN 1. Because the switch is in VTP client mode, when the switch resets, it loses its VLAN database, which causes the uplink port and any other ports that were not members of VLAN 1 to become inactive. To solve this problem, complete these steps:

Step 1: Temporarily change the VTP mode to transparent.
Step 2: Add the VLAN to which the uplink port is assigned to the VLAN database.
Step 3: Change the VTP mode back to client after the uplink port begins forwarding.
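The three steps can be sketched in Cisco IOS commands (VLAN 10 stands in for the VLAN of the uplink port):

```
! Step 1: temporarily change the VTP mode
vtp mode transparent
! Step 2: add the missing VLAN to the VLAN database
vlan 10
 exit
! Step 3: after the uplink port begins forwarding
vtp mode client
```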

Topic Notes: Troubleshooting Spanning Tree


Unfortunately, there is no systematic procedure for troubleshooting an STP problem. This section summarizes some of the actions that you can take. Most of the steps apply to troubleshooting bridging loops in general. You can use a more conventional approach to identify other failures of STP that lead to a loss of connectivity.

Use the Cisco IOS show commands to verify STP port states
Before you troubleshoot a bridging loop, you must be aware of the following:

The topology of the bridge network The location of the root bridge The location of the blocked ports and the redundant links

This knowledge is essential for the following reasons:


Before you can determine what to fix in the network, you must know how the network looks when it is functioning correctly. Most of the troubleshooting steps simply use show commands to try to identify error conditions. Knowledge of the network helps you to focus on the critical ports on the key devices.

Identify a Bridging Loop


It used to be that a broadcast storm could have a disastrous effect on the network. Today, with high-speed links and devices that provide switching at the hardware level, it is not likely that a single host, such as a server, will bring down a network through broadcasts. The best way to identify a bridging loop is to capture the traffic on a saturated link and verify that you see similar packets multiple times. Realistically, however, if all users in a certain bridge domain have connectivity issues at the same time, you can already suspect a bridging loop. Check the port utilization on your devices to determine whether there are abnormal values.

Restore Connectivity Quickly


Bridging loops have severe consequences on a switched network. Administrators generally do not have time to look for the cause of the loop, and prefer to restore connectivity as soon as possible. The easy way out in this case is to manually disable every port that provides redundancy in the network.

Disable Ports to Break the Loop


If you can identify a part of the network that is affected most, begin to disable ports in this area. Or, if possible, initially disable ports that should be blocking. Each time you disable a port, check to see if you have restored connectivity in the network. By identifying which disabled port stops the loop, you also identify the redundant path where this port is located. If this port should have been blocking, you have probably found the link on which the failure appeared.

Log STP Events


If you cannot precisely identify the source of the problem, or if the problem is transient, enable the logging of STP events on the switches of the network that experiences the failure. If you want to limit the number of devices to configure, at least enable this logging on devices that host blocked ports; the transition of a blocked port is what creates a loop. Use the privileged EXEC command debug spanning-tree events to enable STP debug information. Use the global configuration mode command logging buffered to capture this debug information in the device buffers. You can also try to send the debug output to a syslog device. Unfortunately, when a bridging loop occurs, you seldom maintain connectivity to a syslog server.
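Enabling and capturing STP event logging, as described above, can be sketched as:

```
! Global configuration: capture debug output in the local buffer
logging buffered 16384

! Privileged EXEC: enable STP event debugging
debug spanning-tree events

! Review the captured events later
show logging
```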

Temporarily Disable Unnecessary Features


Disable as many features as possible to help simplify the network structure and ease the identification of the problem. For example, EtherChannel logically bundles several different links into a single link from the STP point of view, so disabling this feature during troubleshooting makes sense. As a rule, make the configuration as simple as possible to ease troubleshooting.

Designate the Root Bridge

Very often, information about the location of the spanning-tree root bridge is not available at troubleshooting time. Do not let STP decide which switch becomes the root bridge. For each VLAN, you can usually identify which switch can best serve as the root bridge. Which switch would make the best root bridge depends on the design of the network. Generally, choose a powerful switch in the middle of the network. If you put the root bridge in the center of the network with direct connection to the servers and routers, you reduce the average distance from the clients to the servers and routers. For each VLAN, hardcode which switches will serve as the root bridge and the backup (secondary) root bridge.

Verify That RSTP Is Configured


The 802.1D and PVST+ spanning-tree protocols have convergence times between 30 and 50 seconds. The RSTP and PVRST+ spanning-tree protocols have convergence times between 1 and 2 seconds. A slow convergence time may indicate that not all of the switches in your network have been configured with RSTP, which can slow the convergence times globally in your network. Use the show spanning-tree command to verify the spanning tree mode.
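Verifying the spanning-tree mode and, if needed, enabling rapid spanning tree can be sketched as:

```
! Check the current spanning-tree mode and port states
show spanning-tree summary

! Global configuration: enable Rapid PVST+ on the switch
spanning-tree mode rapid-pvst
```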

Topic Notes: Reviewing Dynamic Routing

- Static: The router learns routes when an administrator manually configures the static route. The administrator must manually update this static route entry whenever an internetwork topology change requires an update. Static routes are user-defined routes that specify the path that packets take when moving between a source and a destination. These administrator-defined routes allow very precise control over the routing behavior of the IP internetwork.
- Dynamic: The router dynamically learns routes after an administrator configures a routing protocol that helps determine routes. Unlike the situation with static routes, after the network administrator enables dynamic routing, the routing process automatically updates route knowledge whenever new topology information is received. The router learns and maintains routes to the remote destinations by exchanging routing updates with other routers in the internetwork. In this way, all routers have accurate routing tables. These routing tables are updated dynamically and can learn about routes to remote networks that are many hops away.
- Routed protocol: Any network protocol that provides enough information in its network layer address to allow a packet to be forwarded from one host to another host.

Forwarding is based on the addressing scheme, without knowing the entire path from source to destination. Packets generally are conveyed from end system to end system. IP is an example of a routed protocol.

Routing protocol: Facilitates the exchange of routing information between networks, allowing routers to build routing tables dynamically. Traditional IP routing stays simple because it uses next-hop (next-router) routing: the router needs to consider only where it sends the packet, not the subsequent path of the packet on the remaining hops (routers). A routing protocol defines the following:

How updates are conveyed
What knowledge is conveyed
When to convey the knowledge
How to locate recipients of the updates

Dynamic routing protocols perform these functions:

Discovering remote networks: There is no need to manually define the available destinations (routes). The routing protocol discovers the remote networks and updates the internal routing table of the router.

Maintaining up-to-date routing information: The routing table contains the entries about remote networks. When changes happen in the network, routing tables are automatically updated.

Choosing the best path to destination networks: Routing protocols discover the remote networks. Multiple paths to a destination are possible, and the best paths enter the routing table.

Finding a new best path if the current path is no longer available: The routing table is constantly updated, and new paths may be added. When the current best path is no longer available or better paths are found, the routing protocol selects a new best path.

Topic Notes: Routing Protocol Types and Classes

Interior gateway protocols (IGPs): These routing protocols are used to exchange routing information within an autonomous system. Routing Information Protocol version 2 (RIPv2), Enhanced Interior Gateway Routing Protocol (EIGRP), and Open Shortest Path First (OSPF) are examples of IGPs.

Exterior gateway protocols (EGPs): These routing protocols are used to route between autonomous systems. Border Gateway Protocol (BGP) is the EGP of choice in networks today.

Distance vector: The distance vector routing approach determines the direction (vector) and distance (a metric, such as hop count in the case of RIP) to any link in the internetwork. Pure distance vector protocols periodically send complete routing tables to all connected neighbors; this mode of operation is key to what defines a distance vector routing protocol. In large networks, these routing updates can become enormous, causing significant traffic on the links. The only information that a router knows about a remote network is the distance or metric to reach that network and which path or interface to use to get there. Note that different distance vector routing protocols may use different kinds of metrics. Distance vector routing protocols do not have an actual map of the network topology; rather, a router's view of the network is based on the information provided by its neighbors.

Link-state: The link-state approach, which utilizes the Shortest Path First (SPF) algorithm, creates an abstraction of the exact topology of the entire internetwork, or at least of the partition in which the router is situated. A link-state router uses the link-state information to create a topology map and to select the best path to all destination networks in the topology. All link-state routers use an identical "map" of the network and calculate the shortest paths to the destination networks based on where they are on that map. Unlike their distance vector counterparts, link-state routers do not exchange complete routing tables periodically; instead, event-based "triggered" updates containing only specific link-state information are sent. Small, efficient periodic keepalives, in the form of "hello" messages, are exchanged between directly connected neighbors to establish and maintain reachability.

Advanced distance vector: The advanced distance vector approach combines aspects of the link-state and distance vector algorithms. EIGRP is an example of an advanced distance vector routing protocol.
Hop count: The number of times that a packet passes through the output port of one router.

Bandwidth: The data capacity of a link; for instance, a 10-Mb/s Ethernet link is preferable to a 64-kb/s leased line.

Delay: The length of time that is required to move a packet from its source to its destination.

Load: The amount of activity on a network resource, such as a router or link.

Reliability: This usually refers to the bit error rate of each network link.

Cost: A configurable value that, on Cisco routers, is based by default on the bandwidth of the interface.
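As an illustration of the cost metric, Cisco IOS derives the default OSPF cost from the interface bandwidth using a reference bandwidth of 100 Mb/s. The following Python sketch reproduces that calculation (the function name is ours; the formula and the 100-Mb/s default reference bandwidth are the documented Cisco defaults):

```python
# Sketch of the default Cisco OSPF cost calculation:
# cost = reference bandwidth / interface bandwidth, truncated, minimum 1.
# The default reference bandwidth is 100 Mb/s (10^8 b/s).

REFERENCE_BW = 100_000_000  # 100 Mb/s, the Cisco IOS default

def ospf_cost(interface_bw_bps: int) -> int:
    """Return the default OSPF cost for an interface bandwidth in b/s."""
    return max(1, REFERENCE_BW // interface_bw_bps)

print(ospf_cost(64_000))         # 64-kb/s leased line -> cost 1562
print(ospf_cost(10_000_000))     # 10-Mb/s Ethernet    -> cost 10
print(ospf_cost(100_000_000))    # FastEthernet        -> cost 1
print(ospf_cost(1_000_000_000))  # GigabitEthernet     -> cost 1 (floor)
```

Note that with the default reference bandwidth, all interfaces of 100 Mb/s and faster receive the same cost of 1, which is why the reference bandwidth is often raised in networks with faster links.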

Administrative Distance
Route Source                  Default Distance
Connected interface           0
Static route                  1
External BGP (eBGP)           20
EIGRP (internal)              90
OSPF                          110
IS-IS                         115
RIP                           120
EIGRP (external)              170
Internal BGP (iBGP)           200
Unknown or unbelievable       255

If nondefault values are necessary, you can use Cisco IOS Software to configure administrative distance values on a per-router, per-protocol, and per-route basis (with the exception of directly connected networks).
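When several routing sources offer a route to the same destination, the source with the lowest administrative distance wins. The following Python sketch (with the Cisco IOS default values for a few common sources) illustrates the selection:

```python
# Sketch of route-source selection by administrative distance (AD).
# The values are the Cisco IOS defaults; the lowest AD is the most
# trusted source when several protocols offer the same prefix.

DEFAULT_AD = {
    "connected": 0,
    "static": 1,
    "eigrp": 90,
    "ospf": 110,
    "rip": 120,
}

def preferred_source(candidates):
    """Given route sources offering the same prefix, return the most trusted."""
    return min(candidates, key=lambda src: DEFAULT_AD[src])

# OSPF (AD 110) beats RIP (AD 120) for the same destination prefix:
print(preferred_source(["rip", "ospf"]))            # -> ospf
print(preferred_source(["ospf", "eigrp", "rip"]))   # -> eigrp
```

Administrative distance decides only between routing sources; the metric comparison described earlier still decides between multiple routes learned from the same source.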

Topic Notes: Understanding Distance Vector Routing Protocols

As the name implies, distance vector means that routes are advertised as distance and direction. Distance is defined in terms of a metric, such as hop count (for RIP), and direction is simply the next-hop router or exit interface. A router using a distance vector routing protocol does not know the entire path to a destination network. The router knows only the following information:

The direction or interface in which packets should be forwarded
The distance (or how far it is) to the destination network

Distance vector routing protocols call for each router to periodically advertise its entire routing table to each of its neighbors. The periodic routing updates are addressed only to directly connected routing devices, most commonly using a logical broadcast or multicast address. Routers that are running a pure distance vector routing protocol send full periodic updates, which include the complete routing table, even if there are no changes in the network. Upon receiving a full routing table from its neighbor, a router can verify all known routes and make changes to the local routing table based on the updated information. This process is also known as "routing by rumor," because the router's understanding of the network is based on the neighboring router's perspective of the network topology. Distance vector protocols traditionally have also been classful protocols. RIPv2 and EIGRP are examples of more modern distance vector protocols that exhibit classless behavior; EIGRP, which is considered an advanced distance vector protocol, also exhibits some link-state characteristics. As the distance vector network discovery process continues, routers discover the best path to destination networks that are not directly connected, based on accumulated metrics from each neighbor. Neighboring routers provide the information for routes that are not directly connected.

Routing Information Maintenance


Routing tables must be updated when the topology of the internetwork changes. Like the network discovery process, topology-change updates proceed step by step from router to router. Distance vector algorithms call for each router to send its entire routing table to each of its neighbors. Distance vector routing updates are sent periodically at regular intervals. The routing table can also be sent immediately using trigger updates when the router detects a topology change. Changes may occur for several reasons, including the following:

Failure of a link
Introduction of a new link
Failure of a router
Change of link parameters

When a router receives an update from a neighboring router, the router compares the update with its own routing table. To establish the new metric, the router adds the cost of reaching the neighboring router to the path cost reported by the neighbor. If the router learns from its neighbor of a better route (a smaller total metric) to a network, it updates its own routing table. Each routing-table entry includes the following information:

Information about the total path cost, which is defined by the routing-table metric
The logical address of the first router on the path to each network that the routing table knows about (the next hop)
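The update rule just described (add the cost of reaching the neighbor to the metric the neighbor reports, and install the route only if the result is better) is essentially one relaxation step of the Bellman-Ford algorithm. A minimal Python sketch, with hypothetical router names and metrics:

```python
# Sketch of the distance vector update rule: on receiving a neighbor's
# advertised routes, add the cost of reaching that neighbor, and install
# the route only if the total metric beats the current routing-table entry.

def process_update(routing_table, neighbor, cost_to_neighbor, advertised):
    """routing_table and advertised map destination -> metric.
    routing_table values are (metric, next hop); mutated in place."""
    for dest, metric in advertised.items():
        total = cost_to_neighbor + metric
        if dest not in routing_table or total < routing_table[dest][0]:
            routing_table[dest] = (total, neighbor)

# Current table: 10.0.0.0/8 reachable via R2 at metric 3.
table = {"10.0.0.0/8": (3, "R2")}

# R3 (1 hop away) advertises 10.0.0.0/8 at metric 1 and a new network.
process_update(table, "R3", 1, {"10.0.0.0/8": 1, "172.16.0.0/16": 4})
print(table)
# 10.0.0.0/8 is now via R3 at metric 2; 172.16.0.0/16 is learned via R3 at 5.
```

The same comparison runs for every update received, which is how metrics accumulate hop by hop during network discovery.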

The age of routing information in a routing table is reset each time that an update is received; in this way, the information in the routing table stays current through topology changes.

Topic Notes: Distance Vector Routing Table Issues


Route Poisoning
With route poisoning, a router marks a failed route as unreachable by advertising it with an infinite metric. Poison reverse can be combined with the split horizon technique; the combination is called split horizon with poison reverse. The rule for split horizon with poison reverse states that when sending updates out of a specific interface, the router designates as unreachable any network that was learned on that interface. The idea behind split horizon with poison reverse is that explicitly telling a router to ignore a route is better than not telling it about the route in the first place. Hold-down timers are used to prevent regular update messages from inappropriately reinstating a route that may have gone bad. Hold-down timers tell routers to hold any changes that might affect routes for some period. The hold-down period varies by routing protocol but is usually set to three times the periodic update interval; in the case of RIP, the update interval is 30 seconds and the hold-down time is 180 seconds. Hold-down timers work as follows:

When a router receives an update from a neighbor indicating that a previously accessible network is now inaccessible, the router marks the route as "possibly down" and starts a hold-down timer. If an update arrives from a neighboring router with a better metric than originally recorded for the network, the router marks the network as "accessible" and removes the hold-down timer.

If, at any time before the hold-down timer expires, an update is received from a different neighboring router with a poorer or the same metric, the update is ignored. Ignoring an update with a poorer or the same metric when a hold-down timer is in effect allows more time for the change to propagate through the entire network. During the hold-down period, routes appear in the routing table as possibly down. The router will still attempt to route packets to the possibly down network (in case the network is having only intermittent connectivity problems, which is referred to as flapping).
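The hold-down rules above can be condensed into a small decision function. The following Python sketch (names and return strings are ours, purely illustrative) captures the three cases:

```python
# Sketch of the hold-down decision rules: while a route is in hold-down
# ("possibly down"), only an update with a strictly better metric than the
# originally recorded one reinstates the route; equal or worse is ignored.

def handle_update(route, new_metric, in_holddown):
    """route is a dict holding the originally recorded 'metric'.
    Returns the action a router would take for the incoming update."""
    if not in_holddown:
        return "process normally"
    if new_metric < route["metric"]:
        return "mark accessible, remove hold-down timer"
    return "ignore update"  # same or poorer metric during hold-down

route = {"prefix": "10.1.1.0/24", "metric": 3}
print(handle_update(route, 2, in_holddown=True))   # better metric: reinstate
print(handle_update(route, 3, in_holddown=True))   # equal metric: ignored
print(handle_update(route, 5, in_holddown=True))   # worse metric: ignored
```

Ignoring equal-or-worse updates for the duration of the timer is what gives the poisoned route time to propagate through the whole network.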

Routing loops can be caused by erroneous information that was calculated as a result of inconsistent updates, slow convergence, and timing. Slow convergence problems can also occur if routers wait for their regularly scheduled updates before notifying neighboring routers of network changes. Routing-table updates normally are sent to neighboring routers at regular intervals. A triggered update is a routing-table update that is sent immediately in response to some change. The detecting router immediately sends an update message to adjacent routers, which, in turn, generate triggered updates notifying their neighbors of the change. This wave of notifications propagates throughout that portion of the network where routes went through the specific link that changed. Triggered updates would be sufficient if there were a guarantee that the wave of updates would reach every appropriate router immediately. However, there are two problems:

Packets containing the update message can be dropped or corrupted by a link in the network. The triggered updates do not happen instantaneously. It is possible that a router that has not yet received the triggered update will issue a regular update at just the wrong time. Wrong timing will cause the bad route to be reinserted in a neighbor that had already received the triggered update.

Coupling triggered updates with hold-down timers is designed to prevent these problems. The hold-down rule specifies that for a specified period, no new route with the same or a worse metric than a route that is in hold-down (possibly down) will be accepted for the same destination as the hold-down route. This mechanism gives the triggered update time to propagate throughout the network.
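The split horizon with poison reverse rule from the route poisoning discussion above can likewise be sketched as an advertisement filter: when building the update for one interface, any route learned on that same interface is advertised with an infinite metric (16 in the case of RIP). The router names, interface names, and data layout here are hypothetical:

```python
# Sketch of split horizon with poison reverse for a RIP-like protocol:
# routes learned on the outgoing interface are advertised back out of it
# as unreachable (metric 16, RIP's "infinity") rather than omitted.

RIP_INFINITY = 16

def build_update(routes, out_interface):
    """routes: list of dicts with 'prefix', 'metric', 'learned_on'.
    Returns the prefix -> metric mapping sent out of out_interface."""
    update = {}
    for r in routes:
        if r["learned_on"] == out_interface:
            update[r["prefix"]] = RIP_INFINITY  # poison reverse
        else:
            update[r["prefix"]] = r["metric"]
    return update

routes = [
    {"prefix": "10.1.0.0/16", "metric": 2, "learned_on": "Fa0/0"},
    {"prefix": "10.2.0.0/16", "metric": 1, "learned_on": "Fa0/1"},
]
print(build_update(routes, "Fa0/0"))
# 10.1.0.0/16 is poisoned (16); 10.2.0.0/16 is advertised with its metric.
```

Plain split horizon would simply omit the poisoned route; poison reverse advertises it as unreachable, which removes any ambiguity at the receiver.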

Topic Notes: Understanding Link-State Routing Protocols

Link-state routing protocols are also known as shortest path first protocols and are built around Edsger Dijkstra's Shortest Path First (SPF) algorithm. Examples of link-state routing protocols include OSPF and Intermediate System-to-Intermediate System (IS-IS). Link-state routing protocols collect routing information from all other routers in the network or within a defined area of the network. After all of the information is collected, each router, independently of the other routers, calculates the best paths to all destinations in the network. Because each router maintains its own view of the network, the router is less likely to propagate incorrect information provided by a neighboring router. A link is like an interface on a router. The state of the link is a description of that interface and of its relationship to its neighboring routers. An example description of the interface would include the IP address of the interface, the mask, the type of network to which it is connected, the routers that are connected to that network, and so on. The collection of link states forms a link-state (or topological) database, which is used to calculate the best paths through the network. Link-state routers find the best paths to destinations by applying Dijkstra's algorithm against the link-state database to build the SPF tree. The best paths are then selected from the SPF tree and placed in the routing table. Link-state routing protocols have the reputation of being much more complex than their distance vector counterparts. However, the basic functionality and configuration of link-state routing protocols is not complex at all. To maintain routing information, link-state routing uses link-state advertisements (LSAs), a topological database, the SPF algorithm, the resulting SPF tree, and a routing table of paths and ports to each network.
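The SPF calculation described above can be sketched with a minimal Dijkstra implementation over a toy link-state database. The router names and link costs here are hypothetical; the point is that every router runs the same calculation independently against an identical copy of the database:

```python
import heapq

# Minimal Dijkstra SPF sketch over a toy link-state database.
# lsdb maps each router to its neighbors and the cost of each link.

def spf(lsdb, root):
    """Return {router: best total cost from root} for every reachable router."""
    dist = {root: 0}
    pq = [(0, root)]  # priority queue of (cost so far, router)
    while pq:
        cost, node = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, link_cost in lsdb[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(pq, (new_cost, neighbor))
    return dist

lsdb = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R3": 1},
    "R3": {"R1": 1, "R2": 1},
}
print(spf(lsdb, "R1"))  # R1 reaches R2 via R3 at cost 2, not directly at 10
```

Note how the cost metric, not hop count, drives the result: the two-hop path through R3 beats the direct one-hop link.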

Open Shortest Path First


Open Shortest Path First (OSPF) is a link-state routing protocol that was developed as a replacement for the distance vector routing protocol RIP. RIP was an acceptable routing protocol in the early days of networking and the Internet, but its reliance on hop count as the only measure for choosing the best route quickly became unacceptable in larger networks that needed a more robust routing solution. OSPF was designed by the Internet Engineering Task Force (IETF). RFC 2328 describes OSPF link-state concepts and operations. It defines the OSPF metric as an arbitrary value called cost; Cisco IOS Software uses a calculation based on the interface bandwidth as the OSPF cost metric.

OSPF's major advantages over RIP are its fast convergence and its scalability to much larger network implementations. The ability of link-state routing protocols, such as OSPF, to divide one large autonomous system into smaller groupings of routers (called areas) is referred to as hierarchical routing. Link-state routing protocols use the concept of areas for scalability. Topological databases contain information about every router and its associated interfaces, which in large networks can be resource-intensive. Arranging routers into areas effectively partitions this potentially large database into smaller and more manageable databases. With hierarchical routing, routing still occurs between the areas (called interarea routing), while many of the minute internal routing operations, such as recalculating the database, are kept within an area. When a failure occurs in the network, such as a neighbor becoming unreachable, link-state protocols flood LSAs using a special multicast address throughout an area. Each link-state router takes a copy of the LSA, updates its link-state (topological) database, and forwards the LSA to all neighboring devices. LSAs cause every router within the area to recalculate routes. Because LSAs must be flooded throughout an area and all routers within that area must recalculate their routing tables, the number of link-state routers in an area should be limited.

OSPF Hierarchical Routing


The hierarchical-topology possibilities of OSPF have the following important advantages:

Reduced frequency of SPF calculations
Smaller routing tables
Reduced link-state update overhead

Link-State Routing Protocol Algorithms


Link-state routing algorithms maintain a complex database of the network topology. Unlike distance vector protocols, link-state protocols develop and maintain a complete awareness of the network routers and how they interconnect. This awareness is achieved through the exchange of LSAs with other routers in a network. Each router that has exchanged LSAs constructs a topological database using all received LSAs. An SPF algorithm is then used to compute reachability to networked destinations. This information is used to update the routing table. This process can discover changes in the network topology that are caused by component failure or network growth. Instead of using periodic updates, the LSA exchange is triggered by an event in the network. This mechanism can speed the convergence process considerably because there is no need to wait for a series of timers to expire before the networked routers can begin to converge.

Benefits and Limitations of Link-State Routing


The following are some of the many benefits of link-state routing protocols over the traditional distance vector algorithms:

Link-state protocols use cost metrics to choose paths through the network. The cost metric reflects the capacity of the links on those paths.
Routing updates are less frequent.
Link-state protocols usually scale to larger networks than distance vector protocols do, particularly the traditional distance vector protocols, such as RIPv2.
The network can be segmented into area hierarchies, limiting the scope of route changes.
Link-state protocols send updates only when the topology changes. By using triggered, flooded updates, link-state protocols can immediately report changes in the network topology to all routers in the network. This immediate reporting generally leads to fast convergence times.
Because each router has a complete and synchronized picture of the network, it is very difficult for routing loops to occur.
Because LSAs are sequenced and aged, routers always base their routing decisions on the most recent set of information.
With careful network design, the link-state database sizes can be minimized, leading to smaller Dijkstra's algorithm calculations and faster convergence.

Link-state routing protocols have the following limitations:

In addition to the routing table, link-state protocols require a topology database and an adjacency database. Maintaining all of these databases can require a significant amount of memory in large or complex networks.
Dijkstra's algorithm requires CPU cycles to calculate the best paths through the network. If the network is large or complex, the calculation is complex as well; if the network is unstable, the calculation runs frequently. In either case, link-state protocols can consume a significant amount of CPU power.
Creating an area hierarchy can cause problems because areas must remain contiguous at all times. The routers in an area must always be capable of contacting and receiving LSAs from all other routers in their area. In a multiarea design, an area router must always have a path to the backbone, or the router will have no connectivity to the rest of the network. In addition, the backbone area must remain contiguous at all times to avoid some areas becoming isolated (partitioned).
If the network design is complex, the operation of the link-state protocol may have to be tuned to accommodate it. Configuring a link-state protocol in a large network can be challenging. Interpreting the information that is stored in the topology database, the neighbor databases, and the routing table requires a good understanding of the concepts of link-state routing.

Topic Notes: Introducing VLSMs


Basic subnetting is sufficient for many networks, but it does not provide the flexibility that is needed in larger enterprise networks. Variable-length subnet masking (VLSM) provides for efficient use of address space. It also allows for hierarchical IP addressing, which allows routers to take advantage of route summarization. Route summarization reduces the size of routing tables in distribution and core routers, and smaller routing tables require less CPU time for routing lookups. VLSM is the concept of subnetting a subnet. It was initially developed to maximize addressing efficiency. With the advent of private addressing, the primary advantage of VLSM now is organization and summarization. VLSM affords the options of including more than one subnet mask within a network and of subnetting an already subnetted network address. VLSM offers the following benefits:

More efficient use of IP addresses: Without the use of VLSMs, companies must implement a single subnet mask within an entire Class A, B, or C network number. For example, consider the 192.168.1.0/24 network address that is divided into subnetworks using /27 masking. One of the subnetworks in this range, 192.168.1.160/27, is further divided into smaller subnetworks using /30 masking. This creates subnets with only two usable host addresses each, to be used on the WAN links. The /30 subnets range from 192.168.1.160/30 to 192.168.1.188/30.

Greater capability to use route summarization: VLSM allows more hierarchical levels within an addressing plan and thus allows better route summarization within routing tables.

Isolation of topology changes from other routers: Another advantage of using route summarization in a large, complex network is that it can isolate topology changes from other routers. For example, if a specific link in the 192.168.1.128/27 domain is rapidly fluctuating between being active and inactive (called flapping), the summary route does not change. Therefore, no router that is external to the domain needs to keep modifying its routing table because of this flapping activity.
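The VLSM example above (subnetting 192.168.1.0/24 into /27s, then subnetting one /27 into /30s for WAN links) can be reproduced with Python's standard ipaddress module:

```python
import ipaddress

# Sketch of the VLSM example: 192.168.1.0/24 is divided into /27 subnets,
# and one of them (192.168.1.160/27) is further divided into /30 subnets,
# each providing two usable host addresses for a point-to-point WAN link.

network = ipaddress.ip_network("192.168.1.0/24")
subnets_27 = list(network.subnets(new_prefix=27))
print(subnets_27[5])                 # 192.168.1.160/27

wan_links = list(subnets_27[5].subnets(new_prefix=30))
print(wan_links[0], wan_links[-1])   # 192.168.1.160/30 ... 192.168.1.188/30
print(len(wan_links))                # 8 point-to-point subnets
```

Because the /30 subnets nest inside one /27, which nests inside the /24, a router outside this domain needs only the single 192.168.1.0/24 (or even a broader) summary entry.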

Topic Notes: The Route Summarization Process


Understanding Route Summarization

The rapid growth of the Internet has caused a dramatic increase in the number of routes to networks around the world, resulting in heavy loads on Internet routers. A VLSM addressing scheme allows for route summarization, which reduces the number of routes advertised. Route summarization groups contiguous subnets or networks under a single address; it is also known as route aggregation. Summarization decreases the number of entries in routing updates and lowers the number of entries in routing tables. It also reduces bandwidth utilization for routing updates and results in faster routing-table lookups. Route summarization is synonymous with the term supernetting. Supernetting is the opposite of subnetting: it joins multiple smaller contiguous networks together. Efficient summarization requires a good addressing plan that fits the network design. Route summarization is most effective within a subnetted environment in which the network addresses are in contiguous blocks in powers of 2. For example, a single routing entry can represent address block sizes of 4, 8, 16, 32, 64, 128, 256, 512, and so on. This is true because, like subnet masks, summary masks are binary, so summarization must take place on binary boundaries (powers of 2). Classful routing is a consequence of the fact that older distance vector routing protocols were designed before subnetting was widely used and hence do not advertise subnet masks in the routing advertisements that they generate. When a classful routing protocol (such as RIPv1) is used, all subnetworks of the same major network (Class A, B, or C) must use the same subnet mask, in other words, a fixed-length subnet mask (FLSM). Routers that are running a classful routing protocol perform automatic route summarization across network boundaries.
Upon receiving a routing update packet, a router that is running a classful routing protocol does one of the following things to determine the network portion of the route:

If the routing update information contains the same major network number as is configured on the receiving interface, the router applies the subnet mask that is configured on the receiving interface.

If the routing update information contains a major network that is different from the one that is configured on the receiving interface, the router applies the default classful mask by address class: 255.0.0.0 (/8) for Class A, 255.255.0.0 (/16) for Class B, and 255.255.255.0 (/24) for Class C.

With the rapid depletion of IPv4 addresses, the Internet Engineering Task Force (IETF) developed classless interdomain routing (CIDR). CIDR uses IPv4 address space more efficiently and allows network address aggregation or summarization, which reduces the size of routing tables.

The use of CIDR requires a classless routing protocol, such as RIPv2, EIGRP, OSPF, or static routing. To CIDR-compliant routers, address class is meaningless. The network subnet mask determines the network portion of the address. This number is also known as the network prefix, or prefix length. The class of the address no longer determines the network address. ISPs assign blocks of IP addresses to a network based on the requirements of the customer, ranging from a few hosts to hundreds or thousands of hosts. With CIDR and VLSM, ISPs are no longer limited to using prefix lengths of /8, /16, or /24.

Classless routing protocols can be considered second-generation protocols because they are designed to address some of the limitations of the earlier classful routing protocols. One of the most serious limitations in a classful network environment is that the subnet mask is not exchanged during the routing update process, which requires the use of the same subnet mask on all subnetworks within the same major network. This fixed-length subnet mask (FLSM) scheme means that IP addresses are often wasted, especially on point-to-point links. Another limitation of the classful approach is the automatic summarization to the classful network boundary at major network boundaries.

In the classless environment, the summarization process is controlled manually and can usually be invoked at any bit position within the address. Because subnet routes propagate throughout the routing domain, you may need to perform manual summarization to keep the size of the routing tables manageable. Classless routing protocols include RIPv2, EIGRP, and OSPF.
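The waste that FLSM causes on point-to-point links can be made concrete with a quick calculation; this is a minimal sketch using Python's ipaddress module, with illustrative addresses:

```python
import ipaddress

# A point-to-point link needs exactly 2 host addresses.
flsm = ipaddress.ip_network("10.1.1.0/24")  # mask forced on the link by FLSM
vlsm = ipaddress.ip_network("10.1.1.0/30")  # right-sized mask allowed by VLSM

def usable(net):
    # Subtract the network and broadcast addresses
    return net.num_addresses - 2

print(usable(flsm))  # 254 usable addresses -> 252 wasted on a 2-host link
print(usable(vlsm))  # 2 usable addresses -> no waste
```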

Topic Notes: Route Summarization Operation


Selecting Routes from Route Summaries
When the router receives a packet, it looks for the longest match with one of the routes in the routing table. The longest match is the route with the largest number of left-most bits that match between the destination IP address of the packet and the network address of the route in the routing table. Several routes might match one destination and the longest matching prefix is used. The subnet mask that is associated with the network address in the routing table defines the minimum number of bits that must match for that route to be a match. For example, if a routing table has paths to 192.168.0.0/16 and 192.168.5.0/24, packets that are addressed to 192.168.5.99 would be routed through the 192.168.5.0/24 path because that address has the longest match with the destination address. Classful routing protocols summarize automatically at network boundaries. This behavior, which you cannot change, has important results, as follows:

Subnets are not advertised to a different major network.
Discontiguous subnets are not visible to each other.

Discontiguous networks cause unreliable or suboptimal routing. To avoid this condition, an administrator can do the following:

Modify the addressing scheme, if possible
Use a classless routing protocol, such as RIPv2, OSPF, or EIGRP
Turn automatic summarization off
Manually summarize at the classful boundary

This situation is resolved by using a classless routing protocol:


OSPF
RIPv2 with the no auto-summary command
EIGRP with the no auto-summary command
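The longest-match selection described earlier can be sketched in a few lines of Python; the two-route table mirrors the 192.168.0.0/16 and 192.168.5.0/24 example above (a simplified model, not a real forwarding table):

```python
import ipaddress

routes = [ipaddress.ip_network("192.168.0.0/16"),
          ipaddress.ip_network("192.168.5.0/24")]

def longest_match(dest, table):
    """Return the most specific route containing dest, or None."""
    addr = ipaddress.ip_address(dest)
    candidates = [net for net in table if addr in net]
    return max(candidates, key=lambda net: net.prefixlen, default=None)

print(longest_match("192.168.5.99", routes))  # 192.168.5.0/24 (24 bits match)
print(longest_match("192.168.9.1", routes))   # 192.168.0.0/16 (only the /16 matches)
```

Both routes contain 192.168.5.99, but the /24 wins because its prefix length, and therefore the number of matching left-most bits, is larger.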

Topic Notes: Single Area OSPF Operations


Introducing OSPF
RIP was an acceptable routing protocol in the early days of networking and the Internet. The limitation was its reliance on hop count as the only measure for choosing the best route. Hop count quickly became unacceptable in larger networks that needed a more robust routing solution. Open Shortest Path First, OSPF for short, is a classless link-state routing protocol that was developed as a replacement for the distance vector routing protocol RIP. OSPF's major advantages over RIP are its fast convergence and its scalability to much larger network implementations.

When talking about a link-state protocol, you can think of a link as an interface on a router. The state of the link is a description of that interface and of its relationship to its neighboring routers. A description of the interface would include, for example, the IP address of the interface, the subnet mask, the type of network to which it is connected, the routers that are connected to that network, and so on. The collection of all of these link states forms a link-state database. A router sends link-state advertisement, also known as LSA, packets to advertise its state immediately when there are state changes and also periodically (every 30 minutes by default). Information about attached interfaces, metrics that are used, and other variables are included in OSPF LSAs.

As OSPF routers accumulate link-state information, they use the Shortest Path First, abbreviated to SPF, algorithm to calculate the shortest path to each node. A topological (link-state) database is, essentially, an overall picture of networks in relation to routers. The topological database contains the collection of LSAs received from all routers in the same area. Because routers within the same area share the same information, they have identical topological databases.

OSPF can operate within a hierarchy. It uses a two-layer network hierarchy. There are two primary elements in the two-layer network hierarchy:

Area: An area is a grouping of contiguous networks. Areas are logical subdivisions of the autonomous system.

Autonomous system: The largest entity within the hierarchy. An autonomous system consists of a collection of networks under a common administration that share a common routing strategy. An autonomous system, sometimes called a domain, can be logically subdivided into multiple areas.

OSPF uses the concept of areas for scalability. Within each autonomous system, a contiguous backbone area (Area 0) must be defined. All other nonbackbone areas are connected off the backbone area. The backbone area is the transit area because all other areas communicate through it to each other. The operation of OSPF within an area is different from its operation between that area and the backbone area. Summarization of network information usually occurs between areas; it is not on by default and has to be manually configured. This functionality helps to decrease the size of routing tables in the backbone. Summarization also isolates changes and unstable, or flapping, links to a specific area in the routing domain. If summarization is used, when there is a change in the topology, only those routers in the affected area receive the LSA and run the SPF algorithm. For OSPF, the nonbackbone areas can be additionally configured as special area types, such as stub areas, totally stubby areas, or not-so-stubby areas (NSSAs); these advanced techniques help reduce the link-state database and routing table size. Routers that operate within the two-layer network hierarchy have different routing entities and different functions in OSPF.

Establishing OSPF Neighbor Adjacencies


Neighbor OSPF routers must recognize each other on the network before they can share information because OSPF routing depends on the status of the link between two routers. This process is done using the Hello protocol. OSPF routers use the Hello protocol by sending hello packets on all OSPF-enabled interfaces to determine if there are any neighbors on those links. Receiving OSPF hello packets on an interface confirms to the OSPF router the presence of another OSPF router on a link.

The hello packet contains information that allows OSPF routers to establish and maintain neighbor relationships by ensuring bidirectional (two-way) communication between neighbors. An OSPF neighbor relationship, or adjacency, is formed between two routers provided they both agree on the area ID, the hello and dead intervals, authentication, and the stub area flag, and, of course, they have to be on the same IP subnet. Bidirectional communication occurs when a router recognizes itself in the neighbors list contained in the hello packet that is received from a neighbor.

On a point-to-point link there are only two OSPF routers, so each has a single neighbor; on multiaccess networks such as an Ethernet LAN, one link can have many neighbors. To reduce the amount of OSPF traffic on multiaccess networks, OSPF elects a Designated Router, DR for short, and Backup Designated Router, BDR for short. All the other routers that are neither DR nor BDR are called DROTHERs. The DR is responsible for updating all DROTHER routers when a change occurs in the multiaccess network. The BDR monitors the DR and takes over as DR if the current DR fails. Each interface that is participating in OSPF uses IP multicast address 224.0.0.5 to periodically send hello packets. A hello packet contains the following information:

Router ID: The router ID is a 32-bit number that uniquely identifies the router, expressed in dotted decimal format. It does not have to be an actual IP address of the router, although that is preferable for administrative purposes. The highest IP address on an active interface is chosen by default, unless a loopback interface or the router ID is configured. For example, IP address 192.168.1.1 would be chosen over 172.16.1.1 because it is numerically greater. This identification is important in establishing and troubleshooting neighbor relationships and coordinating route exchanges; it is recommended to use a loopback address or router ID for stability.

Hello and dead intervals: The hello interval specifies the frequency in seconds at which a router sends hello packets. The default hello interval on multiaccess networks is 10 seconds. The dead interval is the time in seconds that a router waits to hear from a neighbor before declaring the neighboring router out of service. By default, the dead interval is four times the hello interval. These timers must be the same on neighboring routers; otherwise, an adjacency will not be established.

Neighbors: The Neighbors field lists the adjacent routers with established bidirectional communication. This bidirectional communication is indicated when the router recognizes itself listed in the Neighbors field of the hello packet from the neighbor.

Area ID: To communicate, two routers must share a common segment and their interfaces must belong to the same OSPF area on that segment. The neighbors must also share the same subnet and mask. These routers will all have the same link-state information.

Router priority: The router priority is an 8-bit number that indicates the priority of a router. OSPF uses the priority to select a DR and BDR.

DR and BDR IP addresses: These addresses are the IP addresses of the DR and BDR for the specific network, if they are known.

Authentication password: If router authentication is enabled, two routers must exchange the same password. Authentication is not required, but if it is enabled, all peer routers must have the same password.

Stub area flag: A stub area is a special area. Two routers must agree on the stub area flag in the hello packets. Designating a stub area is a technique that reduces routing updates by replacing them with a default route.
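The adjacency criteria above amount to a parameter comparison between the hello packets of two routers. The sketch below models that check with plain dictionaries (the field names and values are illustrative, not OSPF packet formats):

```python
# Parameters that must match (besides being on the same IP subnet) before
# two OSPF routers will form an adjacency.
REQUIRED = ("area_id", "hello", "dead", "auth", "stub_flag")

# Router 2 was configured with a nondefault dead interval (default is 4x hello)
r1 = {"area_id": 0, "hello": 10, "dead": 40, "auth": None, "stub_flag": False}
r2 = {"area_id": 0, "hello": 10, "dead": 30, "auth": None, "stub_flag": False}

mismatches = [field for field in REQUIRED if r1[field] != r2[field]]
print(mismatches)  # ['dead'] -- the dead intervals differ, so no adjacency forms
```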

SPF Algorithm
The SPF algorithm places each router at the root of a tree and calculates the shortest path to each node, using Dijkstra's algorithm. Dijkstra's algorithm is based on the cumulative cost that is required to reach that destination. The cost, or metric, of an interface is an indication of the overhead that is required to send packets across a certain interface. The cost of an interface is inversely proportional to the bandwidth of that interface, so a higher bandwidth indicates a lower cost. A higher overhead, higher cost, and more time delays are involved in crossing a T1 serial line than in crossing a 10-Mb/s Ethernet line. The formula that is used to calculate OSPF cost is cost = reference bandwidth / interface bandwidth (in bits per second). The default reference bandwidth is 10^8, which is 100,000,000, or the equivalent of the bandwidth of Fast Ethernet. Therefore, the default cost of a 10-Mb/s Ethernet link will be 10^8 / 10^7 = 10, and the cost of a T1 link will be 10^8 / 1,544,000 = 64.
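The cost formula above can be sketched directly; the integer division mirrors how the quotient is truncated (for example, 10^8 / 1,544,000 = 64.77... becomes 64), and the floor of 1 reflects that a cost cannot go below 1:

```python
REF_BW = 10**8  # default reference bandwidth: 100,000,000 b/s (Fast Ethernet)

def ospf_cost(interface_bw_bps):
    # cost = reference bandwidth / interface bandwidth, truncated, minimum 1
    return max(1, REF_BW // interface_bw_bps)

print(ospf_cost(10_000_000))     # 10 -- 10-Mb/s Ethernet
print(ospf_cost(1_544_000))      # 64 -- T1
print(ospf_cost(100_000_000))    # 1  -- Fast Ethernet
print(ospf_cost(1_000_000_000))  # 1  -- Gigabit Ethernet, same cost as Fast Ethernet
```

The last two lines show the limitation discussed next: with the default reference bandwidth, every link of 100 Mb/s or faster collapses to a cost of 1.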

Using this equation presents a problem with link speeds of 100 Mb/s or greater, such as Fast Ethernet and Gigabit Ethernet (both calculate to a value of 1). To adjust the reference bandwidth for links with bandwidths greater than Fast Ethernet, use the auto-cost reference-bandwidth command in router configuration mode. Each router determines its own cost to each destination in the topology. In other words, each router calculates the SPF algorithm and determines the cost from its own perspective. LSAs are flooded throughout the area using a reliable algorithm, which ensures that all routers in an area have the same topological database. As a result of the flooding process, routers learn the link-state information for each router in their routing area. Each router uses the information in its topological database to calculate a shortest path tree, with itself as the root. The SPF tree is then used to populate the IP routing table with the best paths to each network. The shortest path is not necessarily the path with the fewest hops. Each router has its own view of the topology, even though all of the routers build a shortest-path tree using the same link-state database.
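A minimal sketch of the SPF calculation itself, using a textbook Dijkstra implementation over a three-router topology (the router names and link costs are illustrative; a real OSPF implementation works on the link-state database, not a simple dictionary):

```python
import heapq

def spf(graph, root):
    """Dijkstra's shortest path first: cumulative cost from root to every node.
    graph maps router -> {neighbor: link cost}."""
    dist = {root: 0}
    pq = [(0, root)]
    while pq:
        cost, node = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(nbr, float("inf")):
                dist[nbr] = new_cost
                heapq.heappush(pq, (new_cost, nbr))
    return dist

# Costs from the formula above: Fast Ethernet = 1, T1 = 64
topo = {"R1": {"R2": 1, "R3": 64},
        "R2": {"R1": 1, "R3": 1},
        "R3": {"R1": 64, "R2": 1}}
print(spf(topo, "R1"))  # {'R1': 0, 'R2': 1, 'R3': 2}
```

Note the result illustrates the point that the shortest path is not the fewest hops: from R1, the two-hop Fast Ethernet path to R3 (cost 2) beats the direct T1 link (cost 64).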

Topic Notes: Configuring Single-Area OSPF


Configuring and Verifying OSPF
Configuration of basic OSPF is not a complex task; it requires only two steps. The first step enables the OSPF routing process. The second step identifies the networks to advertise.

The router ospf command uses a process identifier as an argument. The process ID is a number between 1 and 65535 and is chosen by the network administrator. The process ID is locally significant, which means that it does not have to match other OSPF routers in order to establish adjacencies with those neighbors.

The network command has the same function as it does in other IGP routing protocols. It uses a combination of network address and wildcard mask and serves as a criteria match to identify the interfaces that are enabled to send and receive OSPF packets. The network address, along with the wildcard mask, identifies which IP networks are part of the OSPF network and are included in OSPF routing updates. The wildcard mask can be configured as the inverse of a subnet mask. Calculating wildcard masks on non-8-bit boundaries can be prone to error. You can avoid calculating wildcard masks by having a network statement that matches the IP address on each interface and using the 0.0.0.0 mask. The area ID identifies the OSPF area to which the network belongs. When all of the routers are within the same OSPF area, the network commands must be configured with the same area ID on all routers. Even if no areas are specified, there must be an Area 0. In a single-area OSPF environment, the area is always 0.
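The "inverse of a subnet mask" computation mentioned above is a bitwise NOT over 32 bits. This small sketch computes a wildcard mask from any subnet mask, which is handy for the non-8-bit boundaries where manual calculation is error-prone:

```python
import ipaddress

def wildcard(subnet_mask):
    """Wildcard mask as the bitwise inverse of a dotted-decimal subnet mask."""
    all_ones = int(ipaddress.ip_address("255.255.255.255"))
    inverse = all_ones ^ int(ipaddress.ip_address(subnet_mask))
    return str(ipaddress.ip_address(inverse))

print(wildcard("255.255.255.0"))    # 0.0.0.255  -- /24
print(wildcard("255.255.255.252"))  # 0.0.0.3    -- /30
print(wildcard("255.255.240.0"))    # 0.0.15.255 -- /20, a non-8-bit boundary
```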

Topic Notes: Loopback Interfaces


The OSPF router ID is used to uniquely identify each router in the OSPF routing domain. A router ID is simply a label expressed as an IP address. Cisco routers derive the router ID based on three criteria, in the following order of precedence:

1. Use the IP address (or dotted decimal formatted number) that is configured with the OSPF router-id command.
2. If the router ID is not configured, the router chooses the highest IP address of any of its loopback interfaces.
3. If no loopback interfaces are configured, the router chooses the highest active IP address of any of its physical interfaces.

Note: The router ID looks like an IP address, but it is not routable and therefore is not included in the routing table, unless the OSPF routing process chooses an interface (physical or loopback) that is appropriately defined by a network command.

If an OSPF router is not configured with an OSPF router-id command and no loopback interfaces are configured, the OSPF router ID will be the highest active IP address on any of its interfaces. The interface does not need to be enabled for OSPF, meaning that it does not need to be included in one of the OSPF network commands. However, the interface must be active; it must be in the up state.

To modify the OSPF router ID to a loopback address, first define a loopback interface. The following commands enable loopback interface 0 and configure an IP address:

RouterX(config)#interface loopback 0
RouterX(config-if)#ip address 192.168.255.254 255.255.255.0

OSPF is more reliable if a loopback interface is configured because the interface is always active and cannot be in a down state like a "real" interface can. For this reason, the loopback address should be used on all key routers. If the loopback address is going to be published with the network ip-address wildcard-mask area area-id command, using a private IP address will save on registered IP address space.
Note that a loopback address requires a different subnet for each router, unless the host address itself is advertised. Using an address that is not advertised saves real IP address space. Unlike an address that is advertised, the unadvertised address does not appear in the OSPF table and thus cannot be accessed across the network. Therefore, using a private IP address represents a trade-off between the ease of debugging the network and conservation of address space.

The configuration of the loopback does not change the OSPF router ID automatically. The OSPF process must be cleared in order to derive a new router ID. The clear ip ospf process command is used to clear the OSPF process.

The OSPF router-id command takes precedence over loopback and physical interface IP addresses for determining the router ID. The command syntax is as follows:

Router(config)#router ospf 100
Router(config-router)#router-id 172.16.17.5

The router ID is selected when OSPF is configured with its first OSPF network command. If the OSPF router-id command or the loopback address is configured after the OSPF network command, the router ID will be derived from the interface with the highest active IP address. The router ID can be modified with the IP address from a subsequent OSPF router-id command by reloading the router or by using the following command:

RouterX#clear ip ospf process

Note: Modifying a router ID with a new loopback or physical interface IP address may require reloading the router.
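The three-step precedence can be captured as a small selection function. This is a hypothetical helper for reasoning about the rule; the function name and its inputs are assumptions for illustration, not Cisco IOS data structures:

```python
def select_router_id(configured_id=None, loopback_ips=(), physical_ips=()):
    """Illustrate OSPF router ID precedence: router-id command, then highest
    loopback address, then highest active physical interface address."""
    def numeric(ip):
        # Compare dotted-decimal addresses octet by octet
        return tuple(int(octet) for octet in ip.split("."))

    if configured_id:           # 1. explicit router-id command always wins
        return configured_id
    if loopback_ips:            # 2. highest loopback address
        return max(loopback_ips, key=numeric)
    if physical_ips:            # 3. highest active physical interface address
        return max(physical_ips, key=numeric)
    return None

# A loopback wins over numerically higher physical addresses
print(select_router_id(loopback_ips=["192.168.255.254"],
                       physical_ips=["172.16.1.1", "192.168.1.1"]))
# 192.168.255.254
```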

Topic Notes: Verifying OSPF Configurations


Verifying the OSPF Configuration
You can use any one of a number of show commands to display information about an OSPF configuration. The show ip protocols command displays parameters about timers, filters, metrics, networks, and other information for the entire router. The show ip route command displays the routes that are known to the router and how they were learned. This command is one of the best ways to determine connectivity between the local router and the rest of the internetwork.

Use the show ip ospf command to verify the OSPF router ID. This command also displays OSPF timer settings and other statistics, including the number of times the SPF algorithm has been executed. This command also has optional parameters so you can further specify which command output is required.

The show ip ospf interface command verifies that interfaces have been configured in the intended areas. If no loopback address is specified, the interface with the highest address is chosen as the router ID. This command also displays the timer intervals, including the hello interval, and shows the neighbor adjacencies.

The show ip ospf neighbor command displays OSPF neighbor information on a per-interface basis.

Topic Notes: Configuring OSPF Load Balancing and Authentication


Load Balancing with OSPF
Load balancing is a standard functionality of Cisco IOS Software and is available across all router platforms. It is inherent to the forwarding process in the router, and it allows a router to use multiple paths to a destination when it forwards packets. The number of entries that the routing protocol puts in the routing table limits the number of paths that are used. The default in Cisco IOS Software for IP routing protocols is four entries, except for BGP, which has a default of one entry. The maximum number of paths you can configure depends on the router platform.

The maximum-paths command controls the maximum number of parallel routes that an IP routing protocol can support. To restore the default number of parallel routes, use the no form of this command. The command maximum-paths 1 in OSPF routing process configuration mode disables OSPF load balancing.

The cost (or metric) of an interface in OSPF is an indication of the overhead that is required to send packets across a certain interface. The cost of an interface is inversely proportional to the bandwidth of that interface. A higher bandwidth indicates a lower cost. By default, Cisco routers calculate the cost of an interface based on the bandwidth. If equal-cost paths exist to the same destination, the Cisco implementation of OSPF can keep track of up to 16 next hops to the same destination in the routing table (which is called load balancing). By default, the Cisco router supports up to four equal-cost paths to a destination for OSPF. Use the maximum-paths command under the OSPF router process configuration mode to set the number of equal-cost paths in the routing table. You can use the show ip route command to find equal-cost routes.
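One common way to spread traffic across equal-cost next hops is per-flow hashing: packets of the same conversation always take the same path, which avoids reordering. The sketch below illustrates only the idea; Cisco's actual forwarding (CEF) uses its own hashing scheme, and the addresses here are illustrative:

```python
import zlib

def pick_next_hop(src, dst, next_hops):
    """Hash the flow (source/destination pair) onto one of the equal-cost
    next hops, so a given flow is always forwarded the same way."""
    flow_hash = zlib.crc32(f"{src}->{dst}".encode())
    return next_hops[flow_hash % len(next_hops)]

# Four equal-cost paths, the OSPF default for maximum-paths
paths = ["10.0.12.2", "10.0.13.2", "10.0.14.2", "10.0.15.2"]
hop = pick_next_hop("192.168.1.10", "172.16.5.5", paths)
print(hop in paths)  # True -- and stable for repeated packets of the same flow
```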

OSPF Authentication
Types of Authentication

OSPF neighbor authentication (also called "neighbor router authentication" or "route authentication") can be configured such that routers can participate in routing based on predefined passwords.

When you configure neighbor authentication on a router, the router authenticates the source of each routing update packet that it receives. This authentication is accomplished by the exchange of an authenticating key (sometimes referred to as a password) that is known to both the sending and receiving router. Note that the actual routing update is not encrypted; privacy is not the goal. Verification that the information came from a trusted source is the objective of authentication. By default, OSPF uses null authentication, which means that routing exchanges over a network are not authenticated. OSPF supports two other authentication methods:

Plaintext (or simple) password authentication
Message Digest 5 (MD5) authentication

A more secure method of authentication is MD5. It requires a key and a key ID on each router. The router uses a one-way algorithm that processes the key, the OSPF packet, and the key ID to generate a message digest (hash value). Each OSPF packet includes that digest. A packet sniffer cannot be used to obtain the actual key because the key is never transmitted; only the digest is sent. The receiving OSPF router also has the same key configured and compares its computed result to the one included with the update; if the results match, then the key must have been the same at the sender. OSPF MD5 authentication includes a nondecreasing sequence number in each OSPF packet to protect against replay attacks.
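The shared-secret principle behind MD5 authentication can be sketched in a few lines: the digest of packet-plus-key travels with the packet, while the key itself never does. (The real OSPF procedure hashes specific header fields with key padding and includes the key ID and sequence number; this sketch shows only the concept, and the key and packet contents are made up.)

```python
import hashlib

key = b"s3cret-key"                                  # configured on both routers
packet = b"ospf-update: route 10.0.0.0/24, seq 1001" # routing update, sent in clear

# Sender: compute the digest over the packet and the shared key
sent_digest = hashlib.md5(packet + key).hexdigest()

# Receiver: recompute with its own copy of the key and compare
ok = hashlib.md5(packet + key).hexdigest() == sent_digest
print(ok)  # True -- matching digests prove both sides hold the same key
```

A sniffer that captures the packet and the digest still cannot recover the key, because MD5 is a one-way function; and any change to the packet in transit changes the digest, so tampering is detected as well.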

Configuring Plaintext Password Authentication


To configure OSPF plaintext password authentication, complete the following steps:

Step 1 Use the ip ospf authentication-key plainpas command to assign the password "plainpas" for use with neighboring routers that use OSPF simple password authentication.

Note: In Cisco IOS Release 12.4, the router gives a warning message if you try to configure a password longer than eight characters; only the first eight characters are used. Some earlier Cisco IOS Software releases did not provide this warning.

The password that is created by this command is used as a "key" that is inserted directly into the OSPF header when Cisco IOS Software originates routing protocol packets. A separate password can be assigned to each network on a per-interface basis. All neighboring routers on the same network must have the same password to be able to exchange OSPF information.

Note: If the service password-encryption command is not used when you are configuring OSPF authentication, the key is stored as plaintext in the router configuration. If you configure the service password-encryption command, the key is stored and displayed in an encrypted form; when it is displayed, an encryption type of 7 is specified before the encrypted key.

Step 2 Specify the authentication type using the ip ospf authentication command.

For plaintext password authentication, use the ip ospf authentication command with no parameters. Before using this command, configure a password for the interface using the ip ospf authentication-key command. The ip ospf authentication command was introduced in Cisco IOS Release 12.0. For backward compatibility, the authentication type for an area is still supported. If the authentication type is not specified for an interface, the authentication type for the area is used (the area default is null authentication). To enable plaintext authentication for an OSPF area 0, use the area 0 authentication router configuration command.

Topic Notes: Troubleshooting OSPF


Components of Troubleshooting OSPF
The major components of OSPF troubleshooting include the following:

OSPF neighbor adjacencies The OSPF routing table OSPF authentication

Troubleshooting OSPF Neighbor Adjacencies


A healthy OSPF neighbor state is "Full." If the OSPF neighbor state remains in any other state, it may indicate a problem. Use the show ip ospf neighbor command to verify. To determine if a possible Layer 1 or Layer 2 problem exists with a connection, display the status of an interface using the show interface command. "Administratively Down" indicates that the interface is not enabled. If the status of the interface is not up/up, there will be no OSPF neighbor adjacencies.

In order for OSPF to create an adjacency with a directly connected neighbor router, both routers must agree on the maximum transmission unit (MTU) size. To check the MTU size of an interface, use the show interface command. The network command that you configure under the OSPF routing process indicates which router interfaces participate in OSPF and determines in which area the interface belongs. If an interface appears under the show ip ospf interface command, then that interface is running OSPF. OSPF routers exchange hello packets in order to create neighbor adjacencies. There are four items in an OSPF hello packet that must match before an OSPF adjacency can occur:

Area ID
Hello and dead intervals
Authentication password
Stub area flag

To determine whether any of these hello packet options do not match, use the debug ip ospf adj command.

Troubleshooting OSPF Routing Tables


An OSPF route that is found in the routing table can have various codes:

O: OSPF intra-area route from a router within the same OSPF area
O IA: OSPF interarea route from a router in a different OSPF area
O E1 or E2: An external OSPF route from another autonomous system

The network command that you configure under the OSPF routing process also indicates which interfaces are running OSPF.

Using OSPF Debug Commands


To troubleshoot OSPF operations, use the debug ip ospf commands. A useful option during troubleshooting is the debug ip ospf events EXEC command, which displays information on OSPF-related events. The events include adjacencies, flooding, designated router selection, and Shortest Path First (SPF) calculation. Such output might appear if any of the following situations occur:

The IP subnet masks for the routers on the same network do not match.
The OSPF hello interval for the router does not match the OSPF hello interval that is configured on a neighbor.
The OSPF dead interval for the router does not match the OSPF dead interval that is configured on a neighbor.

If a router that is configured for OSPF routing is not seeing an OSPF neighbor on an attached network, perform the following tasks:

Ensure that both routers have been configured in the same IP subnet, have the same subnet mask, and that the OSPF hello and dead intervals match on both routers.
Ensure that both neighbors are part of the same area number and area type.

Note: Debug commands, especially the debug all command, should be used sparingly. These commands can disrupt router operations. Debug commands are useful when configuring or troubleshooting a network; however, they can make intensive use of CPU and memory resources. It is recommended that you run as few debug processes as necessary and disable them immediately when they are no longer needed. Debug commands should be used with caution on production networks because they can affect the performance of the device.

Troubleshooting Plaintext Password Authentication


The debug ip ospf adj command is used to display OSPF adjacency-related events and is very useful when you are troubleshooting authentication.

Topic Notes: EIGRP Operations


EIGRP Features
EIGRP is a Cisco proprietary routing protocol that combines the advantages of link-state and distance vector routing protocols. EIGRP may act like a link-state routing protocol because it uses a Hello protocol to discover neighbors or form neighbor relationships, and only partial updates are sent when a change occurs. However, it is still based on the key distance vector routing protocol principle that information about the rest of the network is learned from directly connected neighbors. EIGRP is an advanced distance vector or hybrid routing protocol that includes the following features:

Rapid convergence: EIGRP uses the Diffusing Update Algorithm (DUAL) to achieve rapid convergence. As the computational engine that runs EIGRP, DUAL resides at the center of the routing protocol, guaranteeing loop-free paths and backup paths throughout the routing domain. A router that is using EIGRP stores all available backup routes for destinations so that it can quickly adapt to alternate routes. If the primary route in the routing table fails, the best backup route is immediately added to the routing table. If no appropriate route or backup route exists in the local routing table, EIGRP queries its neighbors to discover an alternate route.

Reduced bandwidth usage: EIGRP uses the terms partial and bounded when referring to its updates. EIGRP does not make periodic updates. Partial means that the update only includes information about the route changes. EIGRP sends these incremental updates when the state of a destination changes, instead of sending the entire contents of the routing table. Bounded refers to the propagation of partial updates that are sent only to those routers that the changes affect. By sending only the routing information that is needed and only to those routers that need it, EIGRP minimizes the bandwidth that is required to send EIGRP updates.

Multiple network layer support: EIGRP supports AppleTalk, IP version 4 (IPv4), IP version 6 (IPv6), and Novell Internetwork Packet Exchange (IPX), all of which use protocol-dependent modules (PDMs). PDMs are responsible for protocol requirements that are specific to the network layer.

Classless routing: Because EIGRP is a classless routing protocol, it advertises a routing mask for each destination network. The routing mask feature enables EIGRP to support discontiguous subnetworks and variable-length subnet masks (VLSMs).

Less overhead: EIGRP uses multicast and unicast rather than broadcast. Multicast EIGRP packets use the reserved multicast address of 224.0.0.10. As a result, end stations are unaffected by routing updates and requests for topology information.

Load balancing: EIGRP supports unequal metric load balancing as well as equal metric load balancing, which allows administrators to better distribute traffic flow in their networks.

Easy summarization: EIGRP allows administrators to create summary routes anywhere within the network rather than rely on the traditional distance vector approach of performing classful route summarization only at major network boundaries. In OSPF, route summarization can be configured only at specific points in the network, at the ABR or the ASBR.

Note: The term hybrid routing protocol is sometimes used to define EIGRP. However, this term is misleading because EIGRP is not a hybrid between distance vector and link-state routing protocols. It is in essence a distance vector routing protocol. Therefore, Cisco no longer uses the term hybrid to refer to EIGRP.

Each EIGRP router maintains a neighbor table. This table includes a list of directly connected EIGRP routers that have an adjacency with this router. Neighbor relationships are used to track the status of these neighbors. EIGRP uses a lightweight Hello protocol to monitor connection status with its neighbors.

Each EIGRP router also maintains a topology table for each routed protocol configuration. The topology table includes route entries for every destination that the router learns from its directly connected EIGRP neighbors. EIGRP chooses the best routes to a destination from the topology table and places these routes in the routing table. To determine the best route (successor) and any backup routes (feasible successors) to a destination, EIGRP uses the following two parameters:

Advertised distance: The EIGRP metric for an EIGRP neighbor to reach a particular network. It is also sometimes referred to as the reported distance.

Feasible distance: The advertised distance for a particular network that is learned from an EIGRP neighbor, plus the EIGRP metric to reach that neighbor. This sum provides an end-to-end metric from the router to that remote network.

A router compares all feasible distances to reach a specific network and then selects the lowest feasible distance and places it in the routing table. The feasible distance for the chosen route becomes the EIGRP routing metric to reach that network in the routing table.
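As an illustration (the metric values here are hypothetical): suppose Router A learns network 10.1.1.0/24 from neighbor B, which advertises it with a metric of 1000. That 1000 is the advertised distance. If the metric from A to B is 500, the feasible distance through B is 1000 + 500 = 1500. If neighbor C offers the same network with a feasible distance of 2000, B becomes the successor because 1500 is the lowest feasible distance, and C qualifies as a feasible successor only if its advertised distance is less than 1500, the feasible distance of the successor (the feasibility condition).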

Topic Notes: Configuring EIGRP

The router eigrp global configuration command enables EIGRP. Use the router eigrp and network commands to create an EIGRP routing process. Note that EIGRP requires an autonomous system (AS) number. The AS parameter is a number between 1 and 65535 that is chosen by the network administrator; it does not have to be registered. Note: Although EIGRP refers to this parameter as an AS number, it actually functions as a process ID. It is not associated with the registered autonomous system numbers discussed previously and can be assigned any 16-bit value. Unlike the OSPF process ID, however, the AS number in EIGRP must match on all routers that are involved in the same EIGRP process: all routers in the EIGRP routing domain must use the same AS number to exchange routing information with each other. Typically, only a single process ID of any routing protocol is configured on a router. The network command in EIGRP has the same function as in other IGP routing protocols:

The network command defines a major network number to which the router is directly connected. Any interface on this router that matches the network address in the network command will be enabled to send and receive EIGRP updates. The EIGRP routing process looks for interfaces that have an IP address that belongs to the networks that are specified with the network command. The EIGRP process begins on these interfaces. This network (or subnet) will be included in EIGRP routing updates.
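A minimal configuration that follows these steps might look like the following sketch (the AS number 100 and the 172.16.0.0 network are placeholder values):

RouterA(config)#router eigrp 100
RouterA(config-router)#network 172.16.0.0

With this configuration, EIGRP process 100 runs on every interface whose IP address falls within the classful 172.16.0.0 network.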

The network command is used in router configuration mode. To configure EIGRP to advertise specific subnets only, use the wildcard-mask option with the network command. Think of the wildcard mask as the inverse of a subnet mask. The inverse of the subnet mask 255.255.255.240 is 0.0.0.15, which is used in the following example to advertise the 192.168.1.0/28 subnet:
RouterC(config-router)#network 192.168.1.0 0.0.0.15

The show ip route eigrp command displays the current EIGRP entries in the routing table. EIGRP has a default administrative distance of 90 for internal routes and 170 for routes that are imported from an external source, such as static routes or routes redistributed from another routing protocol. When compared to other interior gateway protocols (IGPs), EIGRP is the most preferred by Cisco IOS because it has the lowest administrative distance.

Topic Notes: Verifying EIGRP Configuration

EIGRP automatically summarizes routes at the classful boundary by default. In some cases, you may not want automatic summarization to occur. For example, if you have discontiguous networks, you need to disable automatic summarization to prevent routing problems. To disable automatic summarization, use the no auto-summary command in EIGRP router configuration mode. DUAL takes down all neighbor adjacencies and then re-establishes them so that the effect of the no auto-summary command can be fully realized. All EIGRP neighbors immediately send out a new round of updates that are not automatically summarized.

The show ip protocols command displays the parameters and current state of the active routing protocol process. This command shows the EIGRP autonomous system (AS) number. It also displays filtering and redistribution information, as well as neighbor and administrative distance information.

Use the show ip eigrp interfaces command to determine on which interfaces EIGRP is active, and to learn information about EIGRP that relates to those interfaces. If you specify an interface (for example, show ip eigrp interfaces Fa0/0), only that interface is displayed. Otherwise, all interfaces on which EIGRP is running are displayed. If you specify a process ID (AS) (for example, show ip eigrp interfaces 100), only the routing process for the specified process ID is displayed. Otherwise, all EIGRP processes are displayed.

Use the show ip eigrp neighbors command to display the neighbors that EIGRP discovered and to determine when neighbors become active and inactive. The command is also useful for debugging certain types of transport problems.

The show ip eigrp topology command displays the EIGRP topology table, the active or passive state of routes, the number of successors, and the feasible distance to the destination. Use the show ip eigrp topology all-links command to display all paths, even those that are not feasible.
The show ip eigrp traffic command displays the number of packets that are sent and received. The debug ip eigrp privileged EXEC command helps you analyze the EIGRP packets that an interface sends and receives. Because the debug ip eigrp command generates a substantial amount of output, use it only when traffic on the network is light.
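The automatic summarization behavior described above can be disabled as follows (the AS number 100 is a placeholder value):

RouterA(config)#router eigrp 100
RouterA(config-router)#no auto-summary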

Topic Notes: Load Balancing with EIGRP


EIGRP Metric
The EIGRP metric can be based on several criteria, but EIGRP uses only two of these criteria by default.

Bandwidth: The smallest bandwidth between source and destination

Delay: The cumulative interface delay along the path

The following criteria can be used but are not recommended, because they typically result in frequent recalculation of the topology table:

Reliability: This value represents the worst reliability between the source and destination, based on keepalives.

Load: This value represents the worst load on a link between the source and destination, computed based on the packet rate and the configured bandwidth of the interface.

The composite metric formula is used by EIGRP to calculate the metric value. The formula consists of values K1 through K5, known as EIGRP metric weights. By default, K1 and K3 are set to 1, and K2, K4, and K5 are set to 0. The result is that only the bandwidth and delay values are used in the computation of the default composite metric. The metric calculation method (the K values), as well as the AS number, must match between EIGRP neighbors. Note: Although maximum transmission unit (MTU) is exchanged in EIGRP packets between neighbor routers, MTU is not factored into the EIGRP metric calculation. By using the show interface command, you can examine the actual values that are used for bandwidth, delay, reliability, and load in the computation of the routing metric.
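With the default K values (K1 = 1, K3 = 1, and K2 = K4 = K5 = 0), the composite metric reduces to the commonly cited form:

metric = 256 x (10,000,000 / minimum bandwidth in kbps + sum of delays in tens of microseconds)

Here the bandwidth term uses the smallest bandwidth along the path, and the delay term is the cumulative delay of the outgoing interfaces along the path, consistent with the two default criteria listed above.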

Load Balancing Across Equal Paths


Equal-cost load balancing is the ability of a router to distribute traffic over all of its network ports that are the same metric distance from the destination address. Load balancing increases the use of network segments and increases effective network bandwidth. For IP, Cisco IOS Software applies load balancing across up to four equal-cost paths by default. With the maximum-paths router configuration command, up to 16 equal-cost routes can be kept in the routing table. If you set the value to 1, you disable load balancing. When packets are process-switched, load balancing over equal-cost paths occurs on a per-packet basis. When packets are fast-switched, load balancing over equal-cost paths occurs on a per-destination basis.
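For example, to allow up to six equal-cost EIGRP paths in the routing table (the AS number 100 is a placeholder value):

RouterA(config)#router eigrp 100
RouterA(config-router)#maximum-paths 6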

Configuring Load Balancing Across Unequal-Cost Paths


EIGRP can also balance traffic across multiple routes that have different metrics. This type of balancing is called unequal-cost load balancing. The degree to which EIGRP performs load balancing is controlled with the variance command.
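For example, a variance of 2 instructs EIGRP to also install routes whose metric is up to twice the successor's metric, provided that they meet the feasibility condition (the AS number 100 is a placeholder value):

RouterA(config)#router eigrp 100
RouterA(config-router)#variance 2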

Topic Notes: MD5 Authentication with EIGRP

You can configure EIGRP neighbor authentication (also known as neighbor router authentication or route authentication) such that routers can participate in routing based on predefined passwords. By default, no authentication is used for EIGRP packets. EIGRP can be configured to use Message Digest 5 (MD5) authentication.

Without neighbor authentication, unauthorized or deliberately malicious routing updates could compromise the security of your network traffic. A security compromise could occur if an unfriendly party interferes with your network. For example, an unauthorized router could launch a fictitious routing update to convince your router to send traffic to an incorrect destination. When you configure neighbor authentication on a router, the router authenticates the source of each routing update packet that it receives.

For EIGRP MD5 authentication, you must configure an authenticating key and a key ID on both the sending and the receiving router. The key is sometimes referred to as a password. The MD5 keyed digest in each EIGRP packet prevents the introduction of unauthorized or false routing messages from unapproved sources. Each key has its own key ID, which the router stores locally. The combination of the key ID and the interface that is associated with the message uniquely identifies the authentication algorithm and the MD5 authentication key in use.

EIGRP allows you to manage keys by using key chains. Each key definition within the key chain can specify a time interval for which that key is activated (its lifetime). Then, during the lifetime of a given key, routing update packets are sent with this activated key. Only one authentication packet is sent, regardless of how many valid keys exist. The software examines the key numbers in order from lowest to highest, and it uses the first valid key that it encounters.
Keys cannot be used during time periods for which they are not activated. Therefore, it is recommended that for a given key chain, key activation times overlap to avoid any period for which no key is activated. If there is a time during which no key is activated, neighbor authentication cannot occur, and therefore, routing updates fail.

Configuring EIGRP Authentication
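The MD5 authentication elements described above can be configured as in the following sketch; the key chain name, key string, interface, and AS number are placeholder values:

RouterA(config)#key chain MYCHAIN
RouterA(config-keychain)#key 1
RouterA(config-keychain-key)#key-string secretpass
RouterA(config-keychain-key)#exit
RouterA(config-keychain)#exit
RouterA(config)#interface Serial0/0/0
RouterA(config-if)#ip authentication mode eigrp 100 md5
RouterA(config-if)#ip authentication key-chain eigrp 100 MYCHAIN

The neighboring router must be configured with a matching key ID and key string on its connecting interface.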


Verifying MD5 Authentication

If authentication is not successful, routers will not process EIGRP packets and will not form a neighbor adjacency. As a result, routers will not build the EIGRP tables or populate the IP routing table with EIGRP routes.

Troubleshooting EIGRP Authentication

You can use the debug eigrp packets command for troubleshooting MD5 authentication. However, to identify potential problems with this command, you should first recognize and understand the output from a correctly configured MD5 authentication exchange.

Topic Notes: Neighbor and Routing Table Issues


Components of Troubleshooting EIGRP
Troubleshooting dynamic routing protocols requires a thorough understanding of how the specific routing protocol functions. Some problems are common to all routing protocols, while other problems are particular to the individual routing protocol. The major components of EIGRP troubleshooting include the following items:

EIGRP neighbor relationships: If the routing protocol does not establish an adjacency with a neighbor, check to see if there are any problems with the routers forming neighbor relationships.

EIGRP routes in the routing table: Check the routing table for anything unexpected, such as missing routes or unexpected routes. Use debug commands to view routing updates and routing table maintenance.

EIGRP authentication: If authentication is not successful, routers will not process EIGRP packets and will not form a neighbor adjacency. As a result, routers will not build the EIGRP tables or populate the IP routing table with EIGRP routes.

Troubleshooting EIGRP Neighbor Issues


You will save valuable time if you understand which show commands to use when you troubleshoot the EIGRP configuration. Use the show ip eigrp neighbors command to verify that the router recognizes its neighbors. The show ip eigrp neighbors command is very useful for verifying and troubleshooting EIGRP. If an expected neighbor is not listed, use the show ip interface brief command to check that the local interface is activated. If the interface is active, try pinging the IP address of the neighbor. If the ping fails, the neighbor's interface is likely down and must be activated. If the ping is successful and EIGRP still does not see the router as a neighbor, examine the following configurations:

Are both routers configured with the same EIGRP process ID?

Is the directly connected network included in the EIGRP network statements?

Is the passive-interface command configured to prevent EIGRP hello packets on the interface?

Is authentication configured properly?

For EIGRP routers to form a neighbor relationship, both routers must share a directly connected IP subnet. A log message that says that EIGRP neighbors are "not on common subnet" indicates that there is an improper IP address on one of the two EIGRP neighbor interfaces. Use the show ip interface command to verify the IP addresses.

The network command that is configured under the EIGRP routing process indicates which router interfaces will participate in EIGRP. The "Routing for Networks" section of the show ip protocols command indicates which networks have been configured; any interfaces in those networks participate in EIGRP. Remember, the process ID must be the same on all routers for EIGRP to establish neighbor adjacencies and share routing information. The show ip eigrp interfaces command can quickly indicate on which interfaces EIGRP is enabled and how many neighbors can be found on each interface.

EIGRP routers create a neighbor relationship by exchanging hello packets. There are certain fields in the hello packets that must match before an EIGRP neighbor relationship is established:

EIGRP process ID (autonomous system [AS] number)

EIGRP K values

You can use the debug eigrp packets command to troubleshoot when hello packet information does not match.

Troubleshooting EIGRP Routing Tables


Another way to verify that EIGRP and other functions of the router are configured properly is to examine the routing tables with the show ip route command. EIGRP routes that appear with a "D" (for DUAL) in the routing table are intra-AS routes, and routes with "D EX" are external AS routes. No EIGRP routes in the routing table might indicate a Layer 1 or Layer 2 problem or an EIGRP neighbor problem.

The show ip eigrp topology command displays the EIGRP router ID. The EIGRP router ID comes from the highest IP address that is assigned to a loopback interface. If no loopback interfaces are configured, the highest IP address that is assigned to any other active interface is chosen as the router ID. No two EIGRP routers can have the same EIGRP router ID. If they do, you will experience problems exchanging routes between the two routers with equal router IDs. If EIGRP routes are found in the topology table but not in the routing table, you could have a problem, and you might require help from the Cisco Technical Assistance Center (TAC) to diagnose it.

Access control lists (ACLs) provide filtering for different protocols, and they may affect the exchange of routing protocol messages, causing routes to be missing from the routing table. The show ip protocols command shows whether there are any filter lists applied to EIGRP.

By default, EIGRP is classful and performs automatic network summarization. Automatic network summarization causes connectivity problems in discontiguous networks. The show ip protocols command confirms whether automatic network summarization is in effect.

Topic Notes: Understanding ACL Operation


Understanding ACLs
One of the most important skills a network administrator needs is mastery of ACLs. Administrators use ACLs to stop traffic or permit only specified traffic while stopping all other traffic on their networks. ACLs provide a powerful way to control traffic into and out of your network. You can configure ACLs for all routed network protocols.

Network designers can implement network-based security solutions to protect the network from unauthorized use. Firewalls are hardware or software solutions that enforce network security policies. Consider a lock on a door to a room inside a building. The lock allows only authorized users with a key or access card to pass through the door. Similarly, a firewall prevents unauthorized or potentially dangerous packets from entering the network. Firewalls come in different flavors, such as packet filters, application layer gateways (ALGs), and stateful inspection firewalls, and are usually deployed when interconnecting to the Internet. Cisco routers with the appropriate IOS feature set may be configured to perform the function of a stateful inspection firewall in small environments, or a dedicated appliance such as the Cisco Adaptive Security Appliance (ASA) may be used.

In certain cases, only basic packet filtering is necessary, for example internally within a network where firewalls do not tend to operate. In situations like this, devices such as routers can provide basic traffic-filtering capabilities using access control lists (ACLs). ACLs are versatile tools used by network administrators and supported by all Cisco routers. ACLs perform the following tasks:

Limit network traffic to improve network performance. For example, if corporate policy does not allow video traffic on the network, ACLs that block video traffic could be configured and applied. This mechanism would greatly reduce the network load and improve network performance.

Provide traffic flow control. ACLs can restrict the delivery of routing updates. If updates are not required because of network conditions, bandwidth is preserved.

Provide a basic level of security for network access. ACLs can allow one host to access a part of the network and prevent another host from accessing the same area. For example, access to the Human Resources network can be restricted to select users.

Decide which types of traffic to forward or block at the router interfaces. For example, an ACL can permit email traffic, but block all Telnet traffic.

Control which areas a client can access on a network.

Screen hosts to permit or deny them access to network services. ACLs can permit or deny a user access to file types, such as FTP or HTTP.

Filtering

ACLs inspect network packets based on criteria such as source address, destination address, protocols, and port numbers. In addition to either permitting or denying traffic, an ACL can classify traffic to enable priority processing down the line. This capability is like having a special ticket to a concert or sporting event. The ticket gives selected guests privileges that are not offered to general-admission ticket holders, such as being able to enter a restricted area and be escorted to their box seats.

Access control presents new challenges due to the increase in the use of the Internet and in the number of router connections to outside networks. Network administrators face the dilemma of how to deny unwanted traffic while allowing appropriate access. For example, you can use an ACL as a filter to keep the rest of your network from accessing sensitive data on the finance subnet.

Classification
Routers also use ACLs to identify particular traffic. Once an ACL has identified and classified traffic, you can configure the router with instructions on how to manage that traffic. For example, you can use an ACL to identify the executive subnet as the traffic source and then give that traffic priority over other types of traffic on a congested WAN link. ACLs offer an important tool for controlling traffic on the network.

Packet filtering, sometimes called static packet filtering, helps control packet movement through the network by analyzing the incoming and outgoing packets and passing or halting them based on stated criteria. A router acts as a packet filter when it forwards or denies packets according to filtering rules. When a packet arrives at the packet-filtering router, the router extracts certain information from the packet header. It then decides, according to the filter rules, whether the packet can pass through or be discarded.

Cisco provides ACLs to permit or deny the following:


The crossing of packets to or from specified router interfaces, and traffic going through the router

Telnet traffic into or out of the router vty ports for router administration

By default, all IP traffic is permitted in and out of all of the router interfaces. When the router discards packets, some protocols return a special packet to notify the sender that the destination is unreachable. For the IP protocol, an ACL discard results in a "Destination unreachable (U.U.U)" response to a ping and an "Administratively prohibited (!A * !A)" response to a traceroute. IP ACLs can classify and differentiate traffic. Classification enables you to assign special handling (such as the following) for traffic that is defined in an ACL:

Identify the type of traffic to be encrypted across a virtual private network (VPN) connection.

Identify the routes that are redistributed from one routing protocol to another.

Use with route filtering to identify which routes are included in the routing updates between routers.

Use with policy-based routing to identify the type of traffic that is routed across a designated link.

Use with Network Address Translation (NAT) to identify which addresses are translated.

ACL Operation
An ACL is a router configuration script that controls whether a router permits packets to pass based on criteria that are found in the packet header. ACLs express the set of rules that give added control for packets that enter inbound interfaces, packets that relay through the router, and packets that exit outbound interfaces of the router. Outbound ACLs do not filter traffic that the router itself generates. For example, if an outbound ACL on the router denies Telnet, a Telnet session from the router to an outside host is still permitted even when that outbound ACL is applied. ACLs operate in two ways.

Inbound ACLs: Incoming packets are processed before they are routed to an outbound interface. An inbound ACL is efficient because it saves the overhead of routing lookups if the packet will be discarded after the filtering tests deny it. If the tests permit the packet, it is then processed for routing.

Outbound ACLs: Incoming packets are routed to the outbound interface, and then they are processed through the outbound ACL.

ACL statements operate in sequential, logical order. They evaluate packets from the top down, one statement at a time. If a packet header and an ACL statement match, the rest of the statements in the list are skipped. The packet is then permitted or denied as determined by the matched statement. If a packet header does not match an ACL statement, the packet is tested against the next statement in the list. This matching process continues until the end of the list is reached.

A final implied statement covers all packets for which conditions did not test true. This final test condition matches all other packets and results in a "deny" instruction. Instead of proceeding into or out of an interface, the router drops all of these remaining packets. This final statement is often referred to as the "implicit deny any" statement. Because of this statement, an ACL should have at least one permit statement in it; otherwise, the ACL blocks all traffic.

You can apply an ACL to multiple interfaces. However, a general rule for applying ACLs on a router can be recalled by remembering the three Ps. You can configure one ACL per protocol, per direction, per interface:

One ACL per protocol: To control traffic flow on an interface, an ACL must be defined for each protocol that is enabled on the interface (for example, IP or IPX).

One ACL per direction: ACLs control traffic in one direction at a time on an interface. Two separate ACLs must be created to control inbound and outbound traffic.

One ACL per interface: ACLs control traffic for an interface; for example, the FastEthernet 0/0 interface.

Every interface can have multiple protocols and directions defined. The router, for example, can have two interfaces that are configured for IP, AppleTalk, and IPX. This router could possibly require 12 separate ACLs: one ACL for each protocol, times two for each direction, times two for the number of interfaces.
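Applying an ACL to an interface in a given direction uses the ip access-group interface command. For example, an IP ACL could be applied inbound on one interface as follows (the ACL number 101 and the interface are placeholder values):

RouterA(config)#interface FastEthernet0/0
RouterA(config-if)#ip access-group 101 in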

Topic Notes: Types of ACLs


Types of ACLs
ACLs can be categorized into the following types:

Standard ACLs: Standard IP ACLs check the source addresses of packets that can be routed. The result either permits or denies the output for an entire protocol suite, based on the source network, subnet, or host IP address.

Extended ACLs: Extended IP ACLs check both the source and destination packet addresses. They can also check for specific protocols, port numbers, and other parameters, which allows administrators more flexibility and control.

There are two methods that you can use to identify standard and extended ACLs:

Numbered ACLs use a number for identification.

Named ACLs use a descriptive name or number for identification.

Identifying ACLs
When you create numbered ACLs, you enter an ACL number as the first argument of the global ACL statement. The test conditions for an ACL vary depending on whether the number identifies a standard or extended ACL. You can create many ACLs for a protocol. Select a different ACL number for each new ACL within a given protocol. However, on an interface you can apply only one ACL per protocol, per direction. Specifying an ACL number from 1 to 99 or 1300 to 1999 instructs the router to accept numbered standard IPv4 ACL statements. Specifying an ACL number from 100 to 199 or 2000 to 2699 instructs the router to accept numbered extended IPv4 ACL statements.

The named ACL feature allows you to identify IP standard and extended ACLs with an alphanumeric string (name) instead of a numeric representation. Named IP ACLs give you more flexibility in working with the ACL entries. Named ACLs have a large advantage over numbered ACLs in that they are easier to edit, and a name may be chosen that better describes what the ACL does. Since Cisco IOS Software Release 12.3(2)T, it has been possible to delete individual entries in a specific ACL. You can use sequence numbers to insert statements anywhere in a named or numbered ACL. There are two benefits to IP access-list entry-sequence numbering:

You can edit the order of ACL statements. You can remove individual statements from an ACL.
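These editing benefits can be sketched as follows; the ACL name, sequence numbers, and addresses are placeholder values:

RouterA(config)#ip access-list extended BLOCK-TELNET
RouterA(config-ext-nacl)#20 permit ip any any
RouterA(config-ext-nacl)#10 deny tcp any host 192.168.1.10 eq telnet

Because statement 10 has a lower sequence number, it is evaluated before statement 20 even though it was entered later, and an individual statement can likewise be removed by sequence number (for example, no 10).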

If you are using an earlier Cisco IOS Software version, you can add statements only at the bottom of the named ACL. Because you can delete individual entries, you can modify your ACL without having to delete and then reconfigure the entire ACL. Well-designed and well-implemented ACLs add an important security component to your network. Follow these general principles to ensure that the ACLs you create have the intended results:

Based upon the test conditions, choose a standard or extended, numbered or named ACL.

Only one ACL per protocol, per direction, and per interface is allowed. Multiple ACLs are permitted per interface, but each must be for a different protocol (for example, IP, IPX, or MAC address) or a different direction (in or out).

Your ACL should be organized to allow processing from the top down. Organize your ACL so that the more specific references to a network or subnet appear before ones that are more general. Place conditions that occur more frequently before conditions that occur less frequently to optimize how the router processes the traffic through the ACL.

Your ACL always contains an implicit "deny any" statement at the end.

Unless you end your ACL with an explicit "permit any" statement, by default, the ACL denies all traffic that fails to match any of the ACL lines. Every ACL should have at least one permit statement; otherwise, all traffic is denied.

You should create the ACL before applying it to an interface. With most versions of Cisco IOS Software, an interface that has an empty ACL applied to it permits all traffic.

Depending on how you apply the ACL, the ACL filters traffic either going through the router or going to and from the router, such as traffic to or from the vty lines.

Every ACL should be placed where it has the greatest impact on efficiency. The basic rules are as follows:

Locate extended ACLs as close as possible to the source of the traffic to be denied. This way, undesirable traffic is filtered without crossing the network infrastructure.

Because standard ACLs do not specify destination addresses, place them as close to the destination as possible.

In today's networks, extended ACLs tend to be commonly used for traffic filtering and standard ACLs more for classification. Prior to implementing ACLs, plan them properly and select the type which best fits your network requirements.

Topic Notes: Additional Types of ACLs


Additional Types of ACLs
Standard and extended ACLs can become the basis for other types of ACLs that provide additional functionality. These other types of ACLs include the following:

Dynamic ACLs (lock-and-key): Users who want to traverse the router are blocked until they use Telnet to connect to the router and are authenticated.

Reflexive ACLs: Allow outbound traffic and limit inbound traffic in response to sessions that originate inside the router.

Time-based ACLs: Allow for access control that is based on the time of day and week.

Dynamic ACLs
Dynamic ACLs are dependent on Telnet connectivity, authentication (local or remote), and extended ACLs. Lock-and-key configuration starts with the application of an extended ACL to block traffic through the router. The extended ACL blocks users who want to traverse the router until they use Telnet to connect to the router and are authenticated. The Telnet connection is then dropped, and a single-entry dynamic ACL is added to the existing extended ACL. This entry permits traffic for a particular time period; idle and absolute timeouts are possible.

Using Dynamic ACLs

Some common reasons to use dynamic ACLs are as follows:

When you want a specific remote user or group of remote users to access a host within your network, connecting from their remote hosts via the Internet. Lock-and-key authenticates the user and then permits limited access through your firewall router for a host or subnet for a finite period.

When you want a subset of hosts on a local network to access a host on a remote network that is protected by a firewall. With lock-and-key, you can enable access to the remote host only for the desired set of local hosts. Lock-and-key requires the users to authenticate through a AAA server (such as TACACS+) or another security server before it allows their hosts to access the remote hosts.

Benefits of Dynamic ACLs


Dynamic ACLs have the following security benefits over standard and static extended ACLs:

Use of a challenge mechanism to authenticate individual users

Simplified management in large internetworks

In many cases, reduction of the amount of router processing that is required for ACLs

Reduction of the opportunity for network break-in by network hackers

Creation of dynamic user access through a firewall, without compromising other configured security restrictions

Reflexive ACLs
Reflexive ACLs allow IP packets to be filtered based on upper-layer session information. They are generally used to allow outbound traffic and limit inbound traffic in response to sessions that originate from a network inside the router. Reflexive ACLs contain only temporary entries. These entries are automatically created when a new IP session begins (for example, with an outbound packet), and the entries are automatically removed when the session ends.

Reflexive ACLs are not applied directly to an interface but are "nested" within an extended named IP ACL that is applied to the interface. Reflexive ACLs can be defined only with extended named IP ACLs. They cannot be defined with numbered or standard named ACLs or with other protocol ACLs. Reflexive ACLs can be used with other standard and static extended ACLs.
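The nesting described above can be sketched as follows. The ACL names and interface are hypothetical; the reflect and evaluate keywords are the standard reflexive ACL mechanism:

```
! Outbound TCP sessions create temporary "reflected" entries
ip access-list extended OUTBOUND-FILTER
 permit tcp any any reflect TCP-TRAFFIC
!
! Inbound traffic is checked against those temporary entries
ip access-list extended INBOUND-FILTER
 evaluate TCP-TRAFFIC
!
interface Serial0/0
 ip access-group OUTBOUND-FILTER out
 ip access-group INBOUND-FILTER in
```

Return traffic for a session that was opened from the inside matches a temporary TCP-TRAFFIC entry and is permitted; unsolicited inbound traffic finds no matching entry and is denied.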

Benefits of Reflexive ACLs


Reflexive ACLs have the following benefits:

Help secure your network against network hackers and can be included in a firewall defense.

Provide a level of security against spoofing and certain denial of service (DoS) attacks. Reflexive ACLs are more difficult to spoof because more filter criteria must match before a packet is permitted through. For example, source and destination addresses and port numbers (not just ACK and RST bits) are checked.

Are simple to use and, compared to basic ACLs, provide greater control over which packets enter your network.

Time-Based ACLs
Time-based ACLs are like extended ACLs in function, but they allow for access control that is based on time. To implement time-based ACLs, you create a time range that defines specific times of the day and week. The time range is identified by a name and then referenced by a function. Therefore, the time restrictions are imposed on the function itself.
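The time-range mechanism described above can be sketched like this. The range name, times, and addresses are hypothetical:

```
! Define the time range by name
time-range WORK-HOURS
 periodic weekdays 8:00 to 17:00
!
! Reference the time range in an extended ACL entry
access-list 101 permit tcp 192.168.1.0 0.0.0.255 any eq www time-range WORK-HOURS
!
interface GigabitEthernet0/0
 ip access-group 101 in
```

Web traffic from the inside network is permitted only on weekdays between 08:00 and 17:00; outside that window, the entry is inactive and the implicit deny applies.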

Benefits of Time-Based ACLs


Time-based ACLs have many benefits:

The network administrator has more control over permitting or denying a user access to resources. One resource could be, for example, an application. This application could be identified by an IP address and mask pair and port number, by policy routing, or by an on-demand link (which would be identified as interesting traffic to the dialer).

Network administrators can set time-based security policies such as perimeter security using the Cisco IOS Firewall Feature Set or ACLs, and data confidentiality with Cisco Encryption Technology or IP Security (IPsec).

Policy-based routing and queuing functions are enhanced. When provider access rates vary by time of day, it is possible to automatically reroute traffic cost-effectively. Service providers can dynamically change a committed access rate (CAR) configuration to support the quality of service (QoS) service level agreements (SLAs) that are negotiated for certain times of day.

Network administrators can control logging messages. ACL entries can log traffic at certain times of the day, but not constantly. Therefore, administrators can simply deny access without analyzing the many logs that are generated during peak hours.


Topic Notes: Configuring Numbered Standard IPv4 ACLs


Standard IPv4 ACLs, numbered (1 to 99 and 1300 to 1999) or named, filter packets based on a source address and mask, and they permit or deny the entire TCP/IP protocol suite. This standard ACL filtering may not provide the filtering control that you require; you may need a more precise way to filter your network traffic.

To configure numbered standard IPv4 ACLs on a Cisco router, you must create a standard IPv4 ACL and activate the ACL on an interface. The access-list command creates an entry in a standard IPv4 traffic filter list. To remove the ACL, the global configuration no access-list command is used. Issuing the show access-list command confirms that access list 1 has been removed. With numbered ACLs, individual entries cannot be removed with the no access-list command, because that command removes the entire ACL. The traditional way of removing or modifying a single numbered ACL entry is to copy the whole ACL to a text editor, make the changes there, remove the entire ACL from the router using the no access-list command, and then re-create the modified ACL by copying and pasting from the text editor. Newer Cisco IOS versions allow easier editing using sequence numbering.

Typically, administrators create ACLs and fully understand the purpose of each statement within the ACL. However, when an ACL is revisited later, its purpose may no longer be as obvious as it once was. This is why it is important to document ACLs, detailing what they accomplish; methods such as using named ACLs and remarks to add descriptions are also helpful. If an ACL providing the same function is required on multiple devices, another useful technique is to adopt a convention across the corporate network of using the same name or number for that ACL.

After a standard ACL is configured, it is linked to an interface using the ip access-group command. Only one ACL per protocol, per direction, and per interface is allowed.
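The create-then-apply sequence described above can be sketched as follows, using a hypothetical subnet and interface:

```
! Create the standard ACL (numbered 1 to 99 or 1300 to 1999)
access-list 1 permit 192.168.1.0 0.0.0.255
!
! Activate the ACL on an interface, one ACL per protocol per direction
interface GigabitEthernet0/0
 ip access-group 1 in
```

Issuing "no access-list 1" in global configuration mode would remove the entire ACL, not a single entry, which is why the text-editor workflow (or sequence numbers on newer IOS versions) is used for editing.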
To control traffic into and out of the router (not through the router), you will protect the router virtual ports. A virtual port is called a vty. By default, there are five such virtual terminal lines, numbered vty 0 through vty 4. When configured, Cisco IOS Software images can support more than five vty ports.

Restricting vty access is primarily a technique for increasing network security and defining which addresses are allowed Telnet access to the router EXEC process. Filtering Telnet traffic is typically considered an extended IP ACL function because it filters a higher-level protocol. However, because you are using the access-class command to filter incoming or outgoing Telnet sessions by source address and to apply filtering to vty lines, you can use standard IP ACL statements to control vty access.
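A short sketch of the access-class technique, using a hypothetical administrative subnet:

```
! Allow Telnet/SSH to the vty lines only from one management subnet
access-list 12 permit 192.168.1.0 0.0.0.255
!
line vty 0 4
 access-class 12 in
```

Because access-class matches only on source address, a standard ACL is sufficient here even though Telnet is a higher-level protocol.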

Topic Notes: Configuring Numbered Extended IPv4 ACLs


For more precise traffic-filtering control, you can use extended ACLs numbered 100 to 199 and 2000 to 2699, providing a total of 800 possible extended ACLs. Extended ACLs can also be named. Extended ACLs are used more often than standard ACLs because they provide a greater range of control and, therefore, add to your security solution. Like standard ACLs, extended ACLs check the source packet addresses, but they also check the destination address, protocols, and port numbers (or services), as shown in the table. This feature gives a greater range of criteria on which to base the ACL. For example, an extended ACL can simultaneously allow email traffic from a network to a specific destination while denying file transfers and web browsing. The ability to filter on protocol and port number allows you to build very specific extended ACLs. Using the appropriate port number, you can specify an application by configuring either the port number or the name of a well-known port.
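The email-versus-file-transfer example above can be sketched as an extended ACL. The network and server addresses are hypothetical:

```
! Permit email (SMTP) from the inside network to a mail server,
! while denying FTP and web browsing from the same network
access-list 101 permit tcp 192.168.1.0 0.0.0.255 host 203.0.113.5 eq smtp
access-list 101 deny   tcp 192.168.1.0 0.0.0.255 any eq ftp
access-list 101 deny   tcp 192.168.1.0 0.0.0.255 any eq www
access-list 101 permit ip any any
!
interface GigabitEthernet0/0
 ip access-group 101 in
```

Well-known port names (smtp, ftp, www) and numeric ports (25, 21, 80) are interchangeable in these statements.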

Topic Notes: Configuring Named ACLs


Naming an ACL makes it easier for you to understand its function. For example, an ACL to deny FTP could be called NO_FTP. When you identify your ACL with a name instead of a number, the configuration mode and command syntax are slightly different. Capitalizing ACL names is not required, but it makes them stand out when you are viewing the running configuration output. Named IP ACLs allow you to add, modify, or delete individual entries in a specific ACL. If you are using Cisco IOS Release 12.3 or later, you can use sequence numbers to insert statements anywhere in the named ACL. If you are using an earlier version, you can insert statements only at the bottom of the named ACL. Because you can add or delete individual entries with named ACLs, you can modify your ACL without having to delete and then reconfigure the entire ACL. Use named IP ACLs when you want to intuitively identify ACLs.

Comments, also known as remarks, are ACL statements that are not processed. They are simple descriptive statements that you can use to better understand and troubleshoot either named or numbered ACLs. Each remark is limited to 100 characters. The remark can go before or after a permit or deny statement. You should be consistent about where you put the remark so it is clear which remark describes which permit or deny statement; it would be confusing to have some remarks before the associated statements and some after.

To add a comment to a named IP ACL, use the remark command in access list configuration mode. To add a comment to a numbered IP ACL (for example, ACL 101), use the access-list 101 remark command. The remark keyword is used for documentation and makes access lists easier to understand. When reviewing the ACL in the configuration, the remark is also displayed.

Using ACLs requires attention to detail and great care. Mistakes can be costly in terms of downtime, troubleshooting efforts, and poor network service. Before starting to configure an ACL, basic planning is required.
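Both remark styles can be sketched briefly; the ACL name, subnet, and remark text are hypothetical:

```
! Named ACL with a remark in access list configuration mode
ip access-list extended NO_FTP
 remark Deny FTP from the LAN; permit everything else
 deny tcp 192.168.1.0 0.0.0.255 any eq ftp
 permit ip any any
!
! Remark on a numbered ACL
access-list 101 remark Permit only the payroll host to reach the finance server
```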

Topic Notes: Identifying and Resolving ACL Issues


When you finish the ACL configuration, use the show commands to verify the configuration. Use the show access-lists command to display the contents of all ACLs. By entering the ACL name or number as an option for this command, you can display a specific ACL. To display only the contents of all IP ACLs, use the show ip access-list command. The show ip interfaces command displays IP interface information and indicates whether any IP ACLs are set on the interface.
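A rough illustration of what this verification might look like (the ACL contents, match counters, and exact output layout are illustrative, and vary by IOS version):

```
Router# show access-lists
Standard IP access list 1
    10 permit 192.168.1.0, wildcard bits 0.0.0.255 (24 matches)
Router# show ip interface GigabitEthernet0/0
GigabitEthernet0/0 is up, line protocol is up
  ...
  Outgoing access list is not set
  Inbound  access list is 1
```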

Topic Notes: Introducing NAT and PAT


Small networks are commonly implemented using private IP addressing as defined in RFC 1918. Private addressing gives enterprises considerable flexibility in network design. This addressing enables operationally and administratively convenient addressing schemes as well as easier growth. However, you cannot route private addresses over the Internet, and there are not enough public addresses to allow all organizations to provide a public address to every one of their hosts. Therefore, network administrators need a mechanism to translate private addresses to public addresses (and back) at the edge of their network. Without a translation system, private hosts behind a router in the network of one organization cannot connect with private hosts behind a router in other organizations over the Internet. Network Address Translation (NAT) provides this mechanism. Before NAT, a host with a private address could not access the Internet. Using NAT, individual companies can address some or all of their hosts with private addresses and use NAT to provide an address translation that allows access to the Internet.

NAT is like the receptionist in a large office. Assume that you have left instructions with the receptionist not to forward any calls to you unless you request it. Later on, you call a potential client and leave a message for them to call you back. You tell the receptionist that you are expecting a call from this client, and you ask the receptionist to put them through to your telephone. The client calls the main number to your office, which is the only number that the client knows. When the client tells the receptionist who they are looking for, the receptionist checks a lookup table that matches your name to your extension. The receptionist knows that you requested this call; therefore, the receptionist forwards the caller to your extension.

NAT operates on any Cisco Layer 3 device and is designed for IP address simplification and conservation.
Usually, NAT connects two networks and translates the private (inside local) addresses in the internal network into public (inside global) addresses before packets are forwarded to another network. You can configure NAT to advertise only one address for the entire network to the outside world. Advertising only one address effectively hides the internal network from the world, thus providing additional security as a side benefit. Any Layer 3 device that sits between an internal network and the public network, such as a firewall, a router, or a computer, uses NAT, which is defined in RFC 1631.

The address-translation process of swapping one address for another is separate from the convention we use to determine what is public and private; the device must be configured and effectively told which IP networks are to be translated. This is one of the reasons why NAT can also be deployed internally when there is a clash of private IP addresses, for example when two companies merge.

In NAT terminology, the "inside network" is the set of networks that are subject to translation. The "outside network" refers to all other addresses. Usually, these are valid addresses that are located on the Internet.

Cisco defines the following NAT terms:

Inside local address: The IP address that is assigned to a host on the inside network. The inside local address is likely not an IP address that the Network Information Center (NIC) or service provider assigned.

Inside global address: A legitimate IP address, assigned by the NIC or service provider, that represents one or more inside local IP addresses to the outside world.

Outside local address: The IP address of an outside host as it appears to the inside network. Not necessarily legitimate, the outside local address is allocated from an address space routable on the inside. In most situations, this address is identical to the outside global address of that outside device.

Outside global address: The IP address that is assigned to a host on the outside network by the host owner. The outside global address is allocated from a globally routable address or network space.

A good way to remember what is local and what is global is to add the word "visible": an address that is locally visible normally implies a private IP address, and an address that is globally visible normally implies a public IP address. After that, the rest is simple: "inside" means internal to your network, and "outside" means external to your network. For example, an inside global address means that the device is physically inside your network and has an address that is visible from the Internet; this could be a web server, for instance.

NAT has many forms and can work in the following ways:

Static NAT: Manually entered by the network administrator; maps an unregistered IPv4 address to a registered IPv4 address (one-to-one). Static NAT is particularly useful when a device must be accessible from outside the network.

Dynamic NAT: Maps an unregistered IPv4 address to a registered IPv4 address from a pool of registered IPv4 addresses (many-to-many).

NAT overloading: A form of dynamic NAT that maps multiple unregistered IPv4 addresses to a single registered IPv4 address (many-to-one) by using different ports. Overloading is also known as PAT (Port Address Translation).

NAT provides many benefits and advantages. However, there are some drawbacks to using NAT, including the lack of support for some types of traffic. The benefits of using NAT include the following:

NAT conserves the legally registered addressing scheme by allowing the privatization of intranets.

NAT conserves addresses through application port-level multiplexing. With NAT overload, internal hosts can share a single public IP address for all external communications. In this type of configuration, very few external addresses are required to support many internal hosts.

NAT increases the flexibility of connections to the public network. Multiple pools, backup pools, and load-balancing pools can be implemented to ensure reliable public network connections.

NAT provides consistency for internal network addressing schemes. On a network without private IP addresses and NAT, changing public IP addresses requires the renumbering of all hosts on the existing network. The costs of renumbering hosts can be significant. NAT allows the existing scheme to remain while supporting a new public addressing scheme. This means that an organization could change ISPs without having to change any of its inside clients.

NAT enhances network security. Because private networks do not advertise their addresses or internal topology, they remain reasonably secure when used with NAT to gain controlled external access. However, NAT does not replace firewalls.

NAT does have some drawbacks. The fact that hosts on the Internet appear to communicate directly with the NAT device, rather than with the actual host inside the private network, creates a number of issues. In theory, a single globally unique IP address can represent many privately addressed hosts. This has advantages from a privacy and security point of view, but in practice, there are drawbacks.

The first disadvantage affects performance. NAT increases switching delays because the translation of each IP address within the packet headers takes time. The first packet is process-switched, meaning that it always goes through the slower path. The router must look at every packet to decide whether it needs translation. The router needs to alter the IP header, and possibly alter the TCP or UDP header. Remaining packets go through the fast-switched path if a cache entry exists; otherwise, they too are delayed.

Many IP applications depend on end-to-end functionality, with unmodified packets forwarded from the source to the destination. By changing end-to-end addresses, NAT blocks some applications that use IP addressing. For example, some security applications, such as digital signatures, fail because the source IP address changes. Applications that use physical addresses instead of a fully qualified domain name do not reach destinations that are translated across the NAT router. Sometimes, this problem can be avoided by implementing static NAT mappings.

End-to-end IP traceability is also lost. It becomes much more difficult to trace packets that undergo numerous packet address changes over multiple NAT hops, so troubleshooting is challenging. On the other hand, hackers who want to determine the source of a packet find it difficult to trace or obtain the original source or destination address.
Using NAT also complicates tunneling protocols such as IPsec, because NAT modifies values in the headers, which interferes with the integrity checks done by IPsec and other tunneling protocols. Services that require the initiation of TCP connections from the outside network, or stateless protocols such as those using UDP, can be disrupted. Unless the NAT router makes a specific effort to support such protocols, incoming packets cannot reach their destination. Some protocols can accommodate one instance of NAT between participating hosts (passive mode FTP, for example), but fail when both systems are separated from the Internet by NAT.

One of the main forms of NAT is Port Address Translation (PAT for short), which is also referred to as "overload" in Cisco IOS configuration. Several inside local addresses can be translated using NAT into just one or a few inside global addresses by using PAT. Most home routers operate in this manner. Your ISP assigns one address to your router, yet several members of your family can simultaneously surf the Internet.

With NAT overloading, multiple addresses can be mapped to one or to a few addresses because a TCP or UDP port number tracks each private address. When a client opens a TCP/IP session, the NAT router assigns a port number to its source address. NAT overload ensures that clients use a different TCP or UDP port number for each client session with a server on the Internet. When a response comes back from the server, the source port number (which becomes the destination port number on the return trip) determines to which client the router routes the packets. It also validates that the incoming packets were requested, thus adding a degree of security to the session.

PAT uses unique source port numbers on the inside global IPv4 address to distinguish between translations. Because the port number is encoded in 16 bits, the total number of internal addresses that NAT can translate into one external address is, theoretically, as many as 65,536. PAT attempts to preserve the original source port. If the source port is already allocated, PAT attempts to find the first available port number. It starts from the beginning of the appropriate port group, 0 to 511, 512 to 1023, or 1024 to 65535. If PAT does not find an available port from the appropriate port group and if more than one external IPv4 address is configured, PAT moves to the next IPv4 address and tries to allocate the original source port again. PAT continues trying to allocate the original source port until it runs out of available ports and external IPv4 addresses.

Differences Between NAT and NAT Overload


NAT generally only translates IP addresses on a 1:1 correspondence between publicly exposed IP addresses and privately held IP addresses. NAT overload modifies the private IP address and potentially the port number of the sender. NAT overload chooses the port numbers that hosts see on the public network. NAT routes incoming packets to their inside destination by referring to the incoming source IP address given by the host on the public network. With NAT overload, there is generally only one or a very few publicly exposed IP addresses. Incoming packets from the public network are routed to their destinations on the private network by referring to a table in the NAT overload device that tracks public and private port pairs. This mechanism is called connection tracking.

Topic Notes: Static Network Address Translation


Translating Inside Source Addresses
You can translate your own IPv4 addresses into globally unique IPv4 addresses when you are communicating outside your network. You can configure static or dynamic inside source translation.

Example: Translating Inside Source Addresses


Step 1: The user at host 10.1.1.1 wants to open a connection to host B (IP address 209.165.201.1).

Step 2: The first packet that the router receives on its NAT inside-enabled interface from host 10.1.1.1 causes the router to check its NAT table. If a static translation entry was configured, the router goes to Step 3.

Step 3: The router replaces the inside local source address of host 10.1.1.1 with the translated inside global address (209.165.200.225) and forwards the packet.

Step 4: Host B receives the packet and responds by using the inside global IPv4 destination address 209.165.200.225 (DA 209.165.200.225).

Step 5: When the router receives the packet on its NAT outside-enabled interface with the inside global IPv4 address of 209.165.200.225, the router performs a NAT table lookup by using the inside global address as a key. The router then translates the address back to the inside local address of host 10.1.1.1 and forwards the packet to host 10.1.1.1.

Step 6: Host 10.1.1.1 receives the packet and continues the conversation. The router performs Steps 2 through 5 for each packet.

Remember that static NAT is a one-to-one mapping between an inside address and an outside address. Static NAT allows external devices to initiate connections to internal devices. For instance, you may want to map an inside global address to a specific inside local address that is assigned to your web server.

Configuring static NAT translations is a simple task. You need to define the addresses to translate and then configure NAT on the appropriate interfaces. Packets arriving on an inside interface from the identified IP address are subject to translation. Packets arriving on an outside interface that are addressed to the identified IP address are also subject to translation.
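The static mapping in this example can be sketched as follows. The addresses come from the steps above; the interface names are hypothetical:

```
! Map inside local 10.1.1.1 to inside global 209.165.200.225 (one-to-one)
ip nat inside source static 10.1.1.1 209.165.200.225
!
interface GigabitEthernet0/0
 ip nat inside
!
interface Serial0/0
 ip nat outside
```

Because the mapping is static, an outside host such as host B can also initiate a connection to 209.165.200.225 and reach 10.1.1.1.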

Use the command show ip nat translations in EXEC mode to display active translation information.

Topic Notes: Dynamic Address Translation


You can translate your own IPv4 addresses into globally unique IPv4 addresses when you are communicating outside your network. You can configure static or dynamic inside source translation. While static NAT provides a permanent mapping between an internal address and a specific public address, dynamic NAT maps private IP addresses to public addresses drawn from a NAT pool.

Dynamic NAT configuration differs from static NAT, but it also has some similarities. Like static NAT, it requires the configuration to identify each interface as an inside or outside interface. However, rather than creating a static map to a single IP address, a pool of inside global addresses is used.

Note: The ACL must permit only those addresses that need to be translated. Remember that there is an implicit deny any statement at the end of each ACL. An ACL that is too permissive can lead to unpredictable results. Using permit any can result in NAT consuming too many router resources, which can cause network problems.

Use the command show ip nat translations in EXEC mode to display active translation information.
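A dynamic NAT sketch tying the ACL to a pool might look like this. The pool name, address range, and interface names are hypothetical:

```
! Identify which inside addresses may be translated
access-list 1 permit 10.1.1.0 0.0.0.255
!
! Define the pool of inside global addresses
ip nat pool PUBLIC-POOL 209.165.200.226 209.165.200.238 netmask 255.255.255.224
!
! Bind the ACL to the pool
ip nat inside source list 1 pool PUBLIC-POOL
!
interface GigabitEthernet0/0
 ip nat inside
interface Serial0/0
 ip nat outside
```

Note that access-list 1 matches only the 10.1.1.0/24 network, per the warning above about overly permissive ACLs.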

Topic Notes: Overloading an Inside Global Address


Overloading an Inside Global Address
You can conserve addresses in the inside global address pool by allowing the router to use one inside global address for many inside local addresses. When this overloading is configured, the router maintains enough information from higher-level protocolsfor example, TCP or UDP port numbersto translate the inside global address back into the correct inside local address. When multiple inside local addresses map to one inside global address, the TCP or UDP port numbers of each inside host distinguish between the local addresses. The configuration is like dynamic NAT, except that instead of a pool of addresses, the interface keyword is used to identify the outside IP address. Therefore, no NAT pool is defined. The overload keyword enables the addition of the port number to the translation. The NAT inside-to-outside process comprises this sequence of steps: Step 1 The incoming packet goes to the route table and the next hop is identified.

Step 2 NAT statements are parsed so that the interface serial 0 IPv4 address can be used in overload mode. PAT creates a source address to use.
Step 3 The router encapsulates the packet and sends it out on interface serial 0.
Step 4 The NAT outside-to-inside address translation process works in sequence.
Step 5 NAT statements are parsed. The router looks for an existing translation and identifies the appropriate destination address.
Step 6 The packet goes to the routing table and the next-hop interface is determined.
Step 7 The packet is encapsulated and sent out the local interface.

No internal addresses are visible during this process. As a result, hosts do not have an external public address, so security is improved. By default, translation entries time out after 24 hours, unless the timers have been reconfigured with the ip nat translation timeout command in global configuration mode. It is sometimes useful to clear the dynamic entries sooner than the default, especially when testing the NAT configuration. To clear dynamic entries before the timeout has expired, use the clear ip nat translation global command. You can be very specific about which translation to clear, or you can clear all translations from the table using the clear ip nat translation * global command. Only the dynamic translations are cleared from the table. Static translations cannot be cleared from the translation table.
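A minimal PAT (overload) configuration matching this description might look like the following; the Serial0/0 outside interface and 10.0.0.0/24 inside network are assumed examples:

```
! ACL 1 identifies the inside local addresses eligible for translation
access-list 1 permit 10.0.0.0 0.0.0.255
! Overload the outside interface address; note that no NAT pool is defined
ip nat inside source list 1 interface Serial0/0 overload
interface GigabitEthernet0/0
 ip nat inside
interface Serial0/0
 ip nat outside
```

With the overload keyword, many inside hosts share the single Serial0/0 address, distinguished by TCP or UDP port numbers.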

Topic Notes: Translation Table Issues


Resolving Translation Table Issues
When you have IPv4 connectivity problems in a NAT environment, it is often difficult to determine the cause of the problem. Many times NAT is blamed, when in reality there is an underlying problem. When you are trying to determine the cause of an IPv4 connectivity problem, it helps to eliminate NAT as the potential problem. Follow these steps to verify that NAT is operating as expected:

Step 1 Based on the configuration, clearly define what NAT is supposed to achieve. You may determine that there is a problem with the NAT configuration.
Step 2 Use the show ip nat translations command to determine if the correct translations exist in the translation table.

Step 3 Verify whether the translation is occurring by using the show and debug commands.
Step 4 Review in detail what is happening to the translated packet and verify that routers have the correct routing information for the translated address to move the packet.
Step 5 If the appropriate translations are not in the translation table, verify the following items:

- No inbound ACLs are denying the packets entry into the NAT router.
- The ACL associated with the NAT command is permitting all necessary networks.
- There are enough addresses in the NAT pool.
- The router interfaces are appropriately defined as NAT inside or NAT outside.

In a simple network environment, it is useful to monitor NAT statistics with the show ip nat statistics command. The show ip nat statistics command displays information about the total number of active translations, NAT configuration parameters, how many addresses are in the pool, and how many have been allocated. However, in a more complex NAT environment with several translations taking place, this show command may not clearly identify the issue. In this case, it may be necessary to run debug commands on the router.

The debug ip nat command displays information about every packet that the router translates, which helps you verify the operation of the NAT feature. The debug ip nat detailed command generates a description of each packet that is considered for translation. This command also outputs information about certain errors or exception conditions, such as the failure to allocate a global address. The debug ip nat detailed command generates more overhead than the debug ip nat command, but it can provide the detail that you need to troubleshoot the NAT problem. Always remember to turn off debugging when finished. When decoding the debug output, note what the following symbols and values indicate:

- *: The asterisk next to NAT indicates that the translation is occurring in the fast-switched path. The first packet in a conversation is always process-switched, which is slower. The remaining packets go through the fast-switched path if a cache entry exists.
- s=: Refers to the source IP address.
- a.b.c.d->w.x.y.z: Indicates that source address a.b.c.d is translated to w.x.y.z.
- d=: Refers to the destination IP address.
- [xxxx]: The value in brackets is the IP identification number. This information may be useful for debugging because it enables correlation with other packet traces from protocol analyzers.
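Putting those fields together, a single line of debug ip nat output might look like the following (the addresses and identification number are invented for illustration):

```
NAT*: s=10.0.0.5->203.0.113.2, d=198.51.100.7 [6825]
```

Reading left to right: the asterisk marks a fast-switched translation, the inside local source 10.0.0.5 is translated to the inside global address 203.0.113.2, the destination 198.51.100.7 is unchanged, and 6825 is the IP identification number.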

Topic Notes: Translation Entry Issues


Resolving Issues with Using the Correct Translation Entry
Verify:

- What the NAT configuration is supposed to accomplish
- That the NAT entry exists in the translation table and that it is accurate
- That the translation is actually taking place, by monitoring the NAT process or statistics
- That the NAT router has the appropriate route in the routing table if the packet is going from inside to outside
- That all necessary routers have a return route back to the translated address

You can verify whether any translations have ever taken place and identify the interfaces between which translation should be occurring. Use the show ip nat statistics command to determine this information. After you correctly define the NAT inside and outside interfaces, generate another ping from host A to host B. If the ping still fails, troubleshoot the problem by using the show ip nat translations and show ip nat statistics commands again. Next, use the show access-lists command to verify whether the ACL that the NAT command references is permitting all of the necessary networks.

Topic Notes: Reasons for Using IPv6


The IPv4 address space provides approximately 4.3 billion addresses. Of that address space, approximately 3.7 billion addresses are actually assignable; the other addresses are reserved for special purposes such as multicasting, private address space, loopback testing, and research. IPv4 address exhaustion will have a major impact on the growth of the Internet and on ISPs. Any ISP that wishes to continue to grow its revenue by increasing its customer base will have to find a way to add new Internet users without requiring additional globally unique IPv4 addresses. IPv6 was designed primarily to function atop existing Layer 2 technologies in the same way that IPv4 does. It was also designed to have a larger address space, so that it would be unlikely that the global Internet would ever suffer another such shortage. It is the industry's networking plan of record. There is no alternative plan. The only issue is when and how a transition from IPv4 to IPv6 will occur. An IPv6 address is a 128-bit binary value, which can be displayed as 32 hexadecimal digits. It provides approximately 3.4 x 10^38 IP addresses. This version of IP addressing should provide sufficient addresses for future Internet growth needs. In addition to its technical and business potential, IPv6 offers a virtually unlimited supply of IP addresses. Because of its generous 128-bit address space, IPv6 generates a virtually unlimited stock of addresses: enough to allocate more than the entire IPv4 Internet address space to everyone on the planet. Throughout its lifetime so far, the Internet has been a rapidly growing communications medium. Resources for addressing devices have been plentiful, with the major challenge being technology itself.

Because of this rapid growth, the finite pool of globally unique IPv4 addresses has almost run out. The growth of the Internet, matched by increasing computing power, has extended the reach of IP-based applications. Some reports predict IPv4 address exhaustion by 2012, and others, by 2014. IPv4 address exhaustion has been threatening for more than 15 years. Over the years, the American Registry for Internet Numbers (ARIN) and the Internet Assigned Numbers Authority (IANA) have allocated address blocks that were previously "unused" or "reserved." This delayed the depletion by several years, but did not solve the problem. The result of the depletion will be that no new IPv4 addresses will be available. The existing IPv4 addresses will, of course, still be usable. The major impact over the next several years will be on Internet presence (websites, e-commerce, and email). While many enterprises have enough address space (public or private) to manage their intranet needs for the next few years, the length of time needed to transition to IPv6 demands that administrators and managers consider the issue well in advance. The largest enterprises may need to act sooner rather than later to ensure sufficient enterprise connectivity. The change from IPv4 to IPv6 has already begun, particularly in Europe, Japan, and the Asia-Pacific region. These areas are exhausting their allotted IPv4 addresses, which makes IPv6 even more necessary. Some countries, such as Japan, are aggressively adopting IPv6. Others, such as those in the European Union, are moving toward IPv6, and China is considering building new networks that are dedicated to IPv6; the 2008 Beijing Olympics ran on China's IPv6 Next Generation Internet (CNGI). As of October 1, 2003, the U.S. Department of Defense mandated that all new equipment purchased be IPv6 capable. Given the huge installed base of IPv4 in the world, it is easy to appreciate that transitioning to IPv6 from IPv4 deployments is a challenge.
However, various techniques, including an autoconfiguration option, can make the transition easier. The transition mechanism that you use depends on the needs of your network. Some people argue that IPv6 would not exist if there were no recognized depletion of available IPv4 addresses. However, beyond the increased IP address space, the development of IPv6 has presented opportunities to apply lessons learned from the limitations of IPv4 to create a protocol with new and improved features. A simplified header architecture and simpler protocol operation translate into reduced operational expenses. Built-in security features mean easier security practices, which are sorely lacking in many current networks. However, perhaps the most significant IPv6 improvement is its address autoconfiguration features. The Internet is rapidly evolving from a collection of stationary devices to a fluid network of mobile devices. IPv6 allows mobile devices to quickly acquire and transition between addresses as they move among foreign networks, with no need for a foreign agent. (A foreign agent is a router that can function as the point of attachment for a mobile device when it roams from its home network to a foreign network.)

Address autoconfiguration also means more robust plug-and-play network connectivity. Autoconfiguration supports consumers who can have any combination of computers, printers, digital cameras, digital radios, IP phones, Internet-enabled household appliances, and robotic toys connected to their home networks. Many manufacturers already integrate IPv6 into their products. IPv6 is a powerful enhancement to IPv4. Several features in IPv6 offer functional improvements. Some of these features are also available in IPv4, but only through additional extensions; in IPv6, they are built in. What IP developers learned from using IPv4 suggested changes to better suit current and probable future network demands:

Enhanced IP addressing: The larger address space includes several enhancements:
- Improved global reachability and flexibility.
- Better aggregation of IP prefixes that are announced in routing tables.
- Multihomed hosts. Multihoming is a technique that increases the reliability of the Internet connection of an IP network. With IPv6, a host can have multiple IP addresses over one physical upstream link. For example, a host can connect to several ISPs.
- Autoconfiguration that can include data link layer addresses in the address space.
- More plug-and-play options for more devices.
- Public-to-private, end-to-end readdressing without address translation. This enhancement makes peer-to-peer networking more functional and easier to deploy.
- Simplified mechanisms for address renumbering and modification.

Simpler header: A simpler header offers several advantages over IPv4:
- Better routing efficiency for performance and forwarding-rate scalability.
- No broadcasts and thus no potential threat of broadcast storms.
- No requirement for processing checksums.
- Simpler and more efficient extension-header mechanisms.
- Flow labels for per-flow processing with no need to open the transport inner packet to identify the various traffic flows.

Mobility and security: Mobility and security help ensure compliance with Mobile IP and IP Security (IPsec) standards functionality. Mobility enables people with mobile network devices, many with wireless connectivity, to move around in networks.

The Internet Engineering Task Force (IETF) Mobile IP standard is available for both IPv4 and IPv6. The standard enables mobile devices to move without breaks in established network connections. Mobile devices use a home address and a care-of address to achieve this mobility. With IPv4, these addresses are manually configured. With IPv6, the configurations are dynamic, giving IPv6-enabled devices built-in mobility. IPsec is available for both IPv4 and IPv6. Although the functionalities are essentially identical in both environments, IPsec is mandatory in IPv6, making the IPv6 Internet more secure.

Transition richness: IPv4 will not disappear overnight. Rather, it will coexist with IPv6, which will gradually replace it. For this reason, IPv6 was delivered with migration techniques to cover every conceivable IPv4 upgrade case. However, the technology community ultimately rejected many of them. There are several ways to incorporate existing IPv4 capabilities with the added features of IPv6:

- Implement a dual-stack method, with both IPv4 and IPv6 configured on the interface of a network device.
- Tunneling will become more prominent as the adoption of IPv6 grows. There are various IPv6-over-IPv4 tunneling methods. Some methods require manual configuration, while others are more automatic.
- Cisco IOS Release 12.3(2)T and later also includes Network Address Translation-Protocol Translation (NAT-PT) between IPv6 and IPv4. This translation allows direct communication between hosts that use different versions of the IP protocol.

Topic Notes: Understanding IPv6 Addresses


Colons separate entries in a series of 16-bit hexadecimal fields that represent IPv6 addresses. The hexadecimal digits A, B, C, D, E, and F that are represented in IPv6 addresses are not case sensitive. IPv6 does not require the full, explicit address string notation; addresses can be abbreviated. Use the following guidelines for IPv6 address string notation:

- The leading zeros in a field are optional, so that 09C0 equals 9C0, and 0000 equals 0.
- Successive fields of zeros can be represented as "::", but only once in an address.
- An unspecified address is written as "::" because it contains only zeros.
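These abbreviation rules can be combined. A few worked examples (the sample addresses are drawn from the 2001:DB8::/32 documentation range and FF01::/16 purely for illustration):

```
FF01:0000:0000:0000:0000:0000:0000:0001  ->  FF01::1
2001:0DB8:0000:0000:0008:0800:200C:417A  ->  2001:DB8::8:800:200C:417A
2001:0DB8:0000:00A3:0000:0000:0000:0057  ->  2001:DB8:0:A3::57
0000:0000:0000:0000:0000:0000:0000:0001  ->  ::1   (loopback)
```

Note that "::" is applied to only one run of zero fields; any remaining all-zero fields are written as single 0 digits, as in the third example.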

Using the "::" notation, also known as the double colon, greatly reduces the size of most addresses. For example, FF01:0:0:0:0:0:0:1 becomes FF01::1. This formatting is in contrast to the 32-bit dotted decimal notation of IPv4.

Note: An address parser identifies the number of missing zeros by separating the two parts and entering 0 until the 128 bits are complete. If two "::" notations were placed in the address, there would be no way to identify the size of each block of zeros.

Broadcasting in IPv4 can cause problems. Broadcasting generates a number of interrupts in every computer on the network and, in some cases, triggers malfunctions that can completely halt an entire network. This disastrous network event is known as a "broadcast storm." In IPv6, broadcasting does not exist. IPv6 replaces broadcasts with multicasts and anycasts. Multicast enables efficient network operation by using a number of functionally specific multicast groups to send requests to a limited number of computers on the network. The multicast groups prevent most of the problems that are related to broadcast storms in IPv4.

The range of multicast addresses in IPv6 is larger than in IPv4. For the near future, allocation of multicast groups is not being limited. IPv6 also defines a new type of address that is called an anycast address. An anycast address identifies a list of devices or nodes; therefore, an anycast address identifies multiple interfaces. Anycast addresses are like a cross between unicast and multicast addresses. Unicast sends packets to one specific device with one specific address, and multicast sends a packet to every member of a group. Anycast addresses send a packet to any one member of the group of devices with the anycast address assigned. For efficiency, a packet that is sent to an anycast address is delivered to the closest interface (as defined by the routing protocols in use) that is identified by the anycast address, so anycast can also be thought of as a "one-to-nearest" type of address. Anycast addresses are syntactically indistinguishable from global unicast addresses because anycast addresses are allocated from the global unicast address space.

Note: Internet anycast addresses have not been widely used. Generally speaking, some known complications and hazards can develop when they are used. Until more experience has been gained and solutions have been agreed upon for those problems, the following restrictions are imposed on IPv6 anycast addresses: (1) An anycast address MUST NOT be used as the source address of an IPv6 packet. (2) An anycast address MUST NOT be assigned to an IPv6 host; that is, it may be assigned to an IPv6 router only.

IPv6 Unicast Addressing


There are several basic types of IPv6 unicast addresses: global, reserved, private (link-local), loopback, and unspecified.

Global Addresses
The IPv6 global unicast address is the equivalent of the IPv4 global unicast address. A global unicast address is an IPv6 address from the global unicast prefix. The structure of global unicast addresses enables the aggregation of routing prefixes, which limits the number of routing table entries in the global routing table. Global unicast addresses that are used on links are aggregated upward through organizations and eventually to the ISPs. A global routing prefix, a subnet ID, and an interface ID define global unicast addresses. The IPv6 unicast address space encompasses the entire IPv6 address range, except for FF00::/8 (1111 1111), which is used for multicast addresses. The current global unicast address range assigned by the IANA starts with binary value 001 (2000::/3), which is 1/8 of the total IPv6 address space and is the largest block of assigned addresses. Addresses with a prefix of 2000::/3 (001) through E000::/3 (111) are required to have 64-bit interface identifiers in the extended unique identifier (EUI-64) format.

The IANA is currently allocating the IPv6 address space in the range of 2001::/16 to the registries. The global unicast address typically consists of a 48-bit global routing prefix and a 16-bit subnet ID. Individual organizations can use the 16-bit subnet field, called the "Subnet ID," to create their own local addressing hierarchy and to identify subnets. This field allows an organization to use up to 65,536 individual subnets.

Reserved Addresses
The IETF reserved a portion of the IPv6 address space for various uses, both present and future. Reserved addresses represent 1/256th of the total IPv6 address space. Some of the other types of IPv6 addresses come from this block.

Link-Local Addresses
A block of IPv6 addresses is set aside for private addresses, just as is done in IPv4. These private addresses are local only to a particular link, and are therefore never routed outside of a particular company network. Private addresses have a first octet value of "FE" in hexadecimal notation, with the next hexadecimal digit being 8. Link-local addresses are new to the concept of addressing with IP in the network layer. These addresses refer only to a particular physical link (physical network). Routers do not forward datagrams using link-local addresses at all, not even within the organization; they are only for local communication on a particular physical network segment. They are used for link communications such as automatic address configuration, neighbor discovery, and router discovery. Many IPv6 routing protocols also use link-local addresses. Link-local addresses typically begin with "FE80". The next digits can be defined manually. If you do not define them manually, the interface MAC address is used, resulting in an address of the form FE80::<interface ID>, where the interface ID is derived from the MAC address using the EUI-64 format.
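On Cisco routers, the automatically derived link-local address can be overridden manually; the fe80::1 value below is an arbitrary example:

```
interface GigabitEthernet0/0
 ! Override the EUI-64-derived link-local address with a fixed value
 ipv6 address fe80::1 link-local
```

A short, memorable link-local address such as fe80::1 makes next-hop addresses easier to recognize in routing tables and debug output, since IPv6 routing protocols typically use link-local next hops.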

Loopback Address
Just as in IPv4, a provision has been made for a special loopback IPv6 address for testing; datagrams that are sent to this address "loop back" to the sending device. However, in IPv6 there is just one address, not a whole block, for this function. The loopback address is 0:0:0:0:0:0:0:1, which is normally expressed (using zero compression) as "::1".

Unspecified Address
In IPv4, an IP address of all zeroes has a special meaning; it refers to the host itself, and is used when a device does not know its own address. In IPv6, this concept has been formalized, and the all-zeroes address (0:0:0:0:0:0:0:0) is named the "unspecified" address. It is typically used in the source field of a datagram that is sent by a device that seeks to have its IP address configured.

You can apply address compression to this address; because the address is all zeroes, the address becomes just "::".

IPv6 over Data Link Layers


IPv6 is defined on most of the current data link layer protocols, including the following protocols:

- Ethernet (Cisco supports this data link layer.)
- PPP (Cisco supports this data link layer.)
- High-Level Data Link Control (HDLC) (Cisco supports this data link layer.)
- FDDI
- Token Ring
- Attached Resource Computer network (ARCnet)
- Nonbroadcast multiaccess (NBMA)
- ATM (Cisco supports only ATM permanent virtual circuit [PVC], not switched virtual circuit [SVC] or ATM LAN Emulation [LANE].)
- Frame Relay (Cisco supports only Frame Relay PVC, not SVC.)
- IEEE 1394

An RFC describes the behavior of IPv6 in each of these specific data link layers, but Cisco IOS Software does not necessarily support all of them. The data link layer defines how IPv6 interface identifiers are created and how neighbor discovery manages data link layer address resolution. Larger address spaces make room for large address allocations to ISPs and organizations. An ISP aggregates all of the prefixes of its customers into a single prefix and announces the single prefix to the IPv6 Internet. The increased address space is sufficient to allow organizations to define a single prefix for their entire network as well. Aggregation of customer prefixes results in an efficient and scalable routing table. Scalable routing is necessary to expand broader adoption of network functions. Scalable routing also improves network bandwidth and functionality for user traffic that connects the various devices and applications. Internet usage, both now and in the future, can include the following elements:

- A huge increase in the number of broadband consumers with high-speed connections that are always on.
- Users who spend more time online and are generally willing to spend more money on communications services (such as downloading music) and high-value searchable offerings.
- Home networks with expanded network applications such as wireless VoIP, home surveillance, and advanced services such as real-time video on demand (VoD).
- Massively scalable games with global participants and media-rich e-learning, providing learners with on-demand remote labs or lab simulations.

Topic Notes: Assigning IPv6 Addresses


IPv6 addresses use interface identifiers to identify interfaces on a link. Think of them as the host portion of an IPv6 address. Interface identifiers are required to be unique on a specific link. Interface identifiers are always 64 bits and can be dynamically derived from a Layer 2 address (MAC). There are several ways to assign an IPv6 address to a device:

- Static assignment using a manual interface ID
- Static assignment using an EUI-64 interface ID
- Stateless autoconfiguration
- DHCP for IPv6 (DHCPv6)

Manual Interface ID Assignment


One way to statically assign an IPv6 address to a device is to manually assign both the prefix (network) and interface ID (host) portion of the IPv6 address. To configure an IPv6 address on a Cisco router interface and enable IPv6 processing on that interface, use the ipv6 address ipv6-address/prefix-length command in interface configuration mode.
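For example, assigning a full 128-bit address manually; the 2001:DB8:0:1::1/64 address is a documentation-range example, not taken from the text:

```
interface GigabitEthernet0/0
 ! Manually specify both the /64 prefix and the ::1 interface ID
 ipv6 address 2001:DB8:0:1::1/64
```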

EUI-64 Interface ID Assignment


Another way to statically assign an IPv6 address is to configure the prefix (network) portion of the IPv6 address and derive the interface ID (host) portion from the Layer 2 MAC address of the device, which is known as the EUI-64 interface ID. The EUI-64 standard explains how to stretch IEEE 802 MAC addresses from 48 to 64 bits by inserting the 16-bit value 0xFFFE in the middle (after the 24th bit) of the MAC address to create a 64-bit, unique interface identifier. In the first byte of the vendor's organizationally unique identifier (OUI), bit 7 indicates the scope: 0 for global and 1 for local. Because most burned-in addresses are globally scoped, bit 7 is usually 0. The EUI-64 standard also specifies that the value of this 7th bit be inverted. So, for example, MAC address 00-00-0C-12-34-56 becomes the interface ID 0200:0CFF:FE12:3456. The resulting EUI-64 address on network 2001:0DB8:0:1::/64 would be 2001:0DB8:0:1:0200:0CFF:FE12:3456. To configure an IPv6 address for an interface and enable IPv6 processing on the interface using an EUI-64 interface ID in the low-order 64 bits of the address (host), use the ipv6 address ipv6-prefix/prefix-length eui-64 command in interface configuration mode.
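The same derivation expressed as configuration; the interface and prefix follow the worked example, while the actual interface ID depends on the hardware MAC address:

```
interface GigabitEthernet0/0
 ! Router derives the low-order 64 bits from the interface MAC address:
 ! MAC 0000.0C12.3456 -> invert bit 7, insert FFFE -> 0200:0CFF:FE12:3456
 ipv6 address 2001:DB8:0:1::/64 eui-64
! Verify the resulting global and link-local addresses with:
!   show ipv6 interface GigabitEthernet0/0
```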

Stateless Autoconfiguration
Autoconfiguration, as the name implies, is a mechanism that automatically configures the IPv6 address of a node. In IPv6, it is assumed that non-PC devices, as well as computer terminals, will be connected to the network. The autoconfiguration mechanism was introduced to enable plug-and-play networking of these devices, to help reduce administration overhead.

DHCPv6 (Stateful)
DHCP for IPv6 enables DHCP servers to pass configuration parameters such as IPv6 network addresses to IPv6 nodes. It offers the capability of automatic allocation of reusable network addresses and additional configuration flexibility. This protocol is a stateful counterpart to IPv6 stateless address autoconfiguration (RFC 2462), and can be used separately or concurrently with IPv6 stateless address autoconfiguration to obtain configuration parameters.

Stateless Autoconfiguration
Stateless autoconfiguration is a key feature of IPv6. It enables serverless basic configuration of the nodes as well as easy renumbering. Stateless autoconfiguration uses the information in router advertisement messages to configure the node. The prefix included in the router advertisement is used as the /64 prefix for the node address. The dynamically created interface identifier (which in the case of Ethernet is the modified EUI-64 format) provides the other 64 bits. Routers periodically send router advertisements (RAs). When a node boots up, it needs its address in the early stage of the boot process, and waiting for the next periodic router advertisement could take too long. Instead, a node sends a router solicitation (RS) message to the routers on the network, asking them to reply immediately with a router advertisement so the node can immediately autoconfigure its IPv6 address. All of the routers respond with a normal router advertisement message, with the all-nodes multicast address as the destination address. Autoconfiguration enables plug-and-play configuration of an IPv6 device, which allows devices to connect themselves to the network without any configuration from an administrator and without any servers, such as DHCP servers. This key feature enables the deployment of new devices on the Internet, such as cellular phones, wireless devices, home appliances, and home networks.

DHCPv6 (Stateful)
DHCPv6 is an updated version of DHCP for IPv4. It supports the addressing model of IPv6 and benefits from new IPv6 features. DHCPv6 has the following characteristics:

- Enables more control than serverless or stateless autoconfiguration
- Can be used in an environment that uses only servers and no routers
- Can be used concurrently with stateless autoconfiguration
- Can be used for renumbering
- Can be used for automatic domain name registration of hosts using the Dynamic Domain Name System (DDNS)

The process for acquiring configuration data for a DHCPv6 client is like the one in IPv4, with a few exceptions. Initially, the client must first detect the presence of routers on the link by using neighbor discovery messages. If at least one router is found, then the client examines the router advertisements to determine if DHCPv6 should be used. If the router advertisements enable the use of DHCPv6 on that link, or if no router is found, then the client starts a DHCP solicit phase to find a DHCP server.

DHCPv6 uses multicast for many messages. When the client sends a solicit message, it sends the message to the All-DHCP-Agents multicast address with link-local scope (FF02::1:2). Agents include both servers and relays. When a DHCP relay forwards a message, it can forward the message to the All-DHCP-Servers multicast address with site-local scope (FF05::1:3). This forwarding means that you do not need to configure a relay with all of the static addresses of the DHCP servers, as in IPv4. If you want only specific DHCP servers to receive the messages, or if there is a problem forwarding multicast traffic to all of the network segments that contain a DHCP server, a relay can contain a static list of DHCP servers.

You can configure different DHCPv6 servers, or the same server with different contexts, to assign addresses based on different policies. For example, you could configure one DHCPv6 server to give global addresses using a more restrictive policy, such as "do not give addresses to printers." You could then configure another DHCPv6 server, or the same server within a different context, to give site-local addresses using a more liberal policy, such as "give to anyone."
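As a sketch of a router acting as a stateful DHCPv6 server, the following is one possible configuration; the pool name, the 2001:DB8:1::/64 prefix, and the DNS address are assumed examples, and command availability varies by Cisco IOS release:

```
ipv6 unicast-routing
! Define a DHCPv6 pool with an address prefix and a DNS server
ipv6 dhcp pool LAN-POOL
 address prefix 2001:DB8:1::/64
 dns-server 2001:DB8:1::53
interface GigabitEthernet0/0
 ipv6 address 2001:DB8:1::1/64
 ! Attach the pool, and set the RA "managed" flag so hosts use DHCPv6
 ipv6 dhcp server LAN-POOL
 ipv6 nd managed-config-flag
```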

Topic Notes: Configure IPv6 with RIPng


Configuring IPv6
There are two basic steps to activate IPv6 on a router. First, you must activate IPv6 traffic forwarding on the router, and then you must configure each interface that requires IPv6. By default, IPv6 traffic forwarding is disabled on a Cisco router. To activate IPv6 traffic forwarding between interfaces, you must first configure the global command ipv6 unicast-routing. This command enables the forwarding of unicast IPv6 traffic. The ipv6 address command can configure a global IPv6 address. The link-local address is automatically configured when an address is assigned to the interface. You must specify the entire 128-bit IPv6 address or specify to use the 64-bit prefix with the eui-64 option (if you want the router to derive the interface ID portion from the MAC address). You can completely specify the IPv6 address or compute the host identifier (rightmost 64 bits) from the EUI-64 identifier of the interface.

Alternatively, you can manually configure the exact IPv6 address that should be assigned to a router interface by using the ipv6 address command in interface configuration mode.

Note: The configuration of the IPv6 address on an interface automatically configures the link-local address for that interface.

To display the status of interfaces that are configured for IPv6, use the show ipv6 interface command.
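Both basic steps together, as a minimal sketch (the interface name and documentation-range address are assumed):

```
! Step 1: enable forwarding of unicast IPv6 traffic (disabled by default)
ipv6 unicast-routing
! Step 2: configure each interface that requires IPv6
interface GigabitEthernet0/1
 ipv6 address 2001:DB8:0:2::1/64
! Confirm the global and auto-generated link-local addresses with:
!   show ipv6 interface GigabitEthernet0/1
```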

Routing Considerations with IPv6


IPv6 uses longest-prefix match routing, just as IPv4 classless interdomain routing (CIDR) does. Many of the common routing protocols have been modified to handle longer IPv6 addresses and different header structures. You can use and configure IPv6 static routing in the same way you would with IPv4. There is an IPv6-specific requirement per RFC 2461 that a router must be able to determine the link-local address of each of its neighboring routers to ensure that the target address of a redirect message identifies the neighbor router by its link-local address. This requirement means that using a global unicast address as a next-hop address with IPv6 routing is not recommended. The Cisco IOS global command to enable IPv6 is ipv6 unicast-routing. You must enable IPv6 unicast routing before an IPv6-capable routing protocol, or an IPv6 static route, will work. Routing Information Protocol next generation (RIPng) (RFC 2080, RIPng for IPv6) is a distance vector routing protocol with a limit of 15 hops that uses split horizon and poison reverse to prevent routing loops. RIPng includes the following features:

- Is based on IPv4 Routing Information Protocol (RIP) version 2 (RIPv2) and is like RIPv2
- Uses IPv6 for transport
- Includes the IPv6 prefix and next-hop IPv6 address
- Uses the multicast group FF02::9, the all-RIP-routers multicast group, as the destination address for RIP updates
- Sends updates on User Datagram Protocol (UDP) port 521
- Is supported by Cisco IOS Release 12.2(2)T and later

Configuring and Verifying RIPng for IPv6


When configuring supported routing protocols in IPv6, you must create the routing process, enable the routing process on interfaces, and customize the routing protocol for your particular network. Before configuring the router to run IPv6 RIP, globally enable IPv6 using the ipv6 unicast-routing global configuration command, and enable IPv6 on any interfaces on which IPv6 RIP needs to be enabled.

To enable RIPng routing on the router, use the ipv6 router rip name global configuration command. The name parameter is a tag that identifies the RIP process. This process name is used later when configuring RIPng on participating interfaces. The name is only locally significant to the router and does not have to be the same on all routers. For RIPng, instead of using the network command to identify which interfaces should run RIPng, you use the ipv6 rip v6process enable command in interface configuration mode to enable RIPng on an interface. Here, v6process is the name parameter, and it must match the name parameter in the ipv6 router rip command (ipv6 router rip v6process). To verify the configuration of RIP, use the show ipv6 rip command or the show ipv6 route rip command. Note: Enabling RIP on an interface dynamically creates a "router rip" process, if necessary. Note: Most show commands support IPv6 and are usually used simply by adding the ipv6 keyword. For example, using show ipv6 route instead of show ip route will show the content of the IPv6 routing table.
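Putting these commands together, a minimal RIPng configuration might look like the following; the process tag v6process and the interface are illustrative:

RouterX(config)#ipv6 unicast-routing
RouterX(config)#ipv6 router rip v6process
RouterX(config-rtr)#exit
RouterX(config)#interface gigabitethernet 0/0
RouterX(config-if)#ipv6 rip v6process enable

The interface must already have an IPv6 address configured; enabling RIPng on the interface creates the router rip process automatically if it does not already exist.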

DNS Considerations with IPv6


If it is necessary to configure a router to locally resolve hostnames to IPv6 addresses, use the ipv6 host command. You can define up to four IPv6 addresses for one hostname. For example, define a static name (router1) for IPv6 address 2001:db8:1:1::1, as follows:
RouterX(config)#ipv6 host router1 2001:db8:1:1::1

To specify an external DNS server to resolve IPv6 addresses, use the ip name-server command. The address can be an IPv4 or IPv6 address. You can specify up to six DNS servers with this command. For example, configure a DNS server (IPv6 address 2001:db8:1:1::10) to query with this command:
RouterX(config)#ip name-server 2001:db8:1:1::10

Configuring name resolution on a router is done for the convenience of a technician who uses the router to access other devices on the network by name. It does not affect the operation of the router and this DNS server name is not advertised to DHCP clients.

Topic Notes: Understanding Frame Relay


Frame Relay originally was designed for use across ISDN interfaces. Today, it is used over various other network interfaces as well. Frame Relay is a connection-oriented data-link technology that is streamlined to provide high performance and efficiency. For error protection, it relies on upper-layer protocols and dependable fiber and digital networks. Frame Relay is an example of a packet-switched technology. Packet-switched networks enable end stations to dynamically share the network medium and the available bandwidth. Frame Relay defines the interconnection process between the router and the local access switching equipment of the service provider. It does not define how the data is transmitted within the Frame Relay service provider cloud. Devices that are attached to a Frame Relay WAN fall into the following two categories:

- Data terminal equipment (DTE): Generally considered to be the terminating equipment for a specific network. DTE devices are typically located on the customer premises and may be owned by the customer. Examples of DTE devices are Frame Relay Access Devices (FRADs), routers, and bridges.
- Data communications equipment (DCE): Carrier-owned internetworking devices. The purpose of DCE devices is to provide clocking and switching services in a network and to transmit data through the WAN. In most cases, the switches in a WAN are Frame Relay switches.

Frame Relay provides a means for statistically multiplexing many logical data conversations, referred to as virtual circuits (VCs), over a single physical transmission link by assigning connection identifiers to each pair of DTE devices. The service provider switching equipment constructs a switching table that maps the connection identifier to outbound ports. When a frame is received, the switching device analyzes the connection identifier and delivers the frame to the associated outbound port. The complete path to the destination is established prior to the transmission of the first frame. The following terms are used frequently in Frame Relay discussions and may be the same or slightly different from the terms your Frame Relay service provider uses.

- Local access rate: Clock speed (port speed) of the connection (local loop) to the Frame Relay cloud. The local access rate is the rate at which data travels into or out of the network, regardless of other settings.
- VC: Logical circuit, uniquely identified by a data-link connection identifier (DLCI), which is created to ensure bidirectional communication from one DTE device to another. A number of VCs can be multiplexed into a single physical circuit for transmission across the network. This capability can often reduce the complexity of the equipment and network that is required to connect multiple DTE devices. A VC can pass through any number of intermediate DCE devices (Frame Relay switches). A VC can be either a permanent virtual circuit (PVC) or a switched virtual circuit (SVC).

- PVC: Provides permanently established connections that are used for frequent and consistent data transfers between DTE devices across the Frame Relay network. Communication across a PVC does not require the call setup and call teardown that is used with an SVC.
- Switched VC (SVC): Provides temporary connections that are used in situations that require only sporadic data transfer between DTE devices across the Frame Relay network. SVCs are dynamically established on demand and are torn down when transmission is complete.
- Data-link connection identifier (DLCI): Frame Relay VCs are identified by DLCIs. The Frame Relay service providers (for example, telephone companies) typically assign DLCI values. The DLCI is a 10-bit number in the address field of the Frame Relay frame header that identifies the VC. DLCIs have local significance because the identifier references the point between the local router and the local Frame Relay switch to which the DLCI is connected. Therefore, devices at opposite ends of a connection can use different DLCI values to refer to the same virtual connection.
- Committed information rate (CIR): Specifies the maximum average data rate that the network undertakes to deliver under normal conditions. When subscribing to a Frame Relay service, you specify the local access rate, for example, 56 kb/s or T1. Typically, you are also asked to specify a CIR for each DLCI. If you send information faster than the CIR on a given DLCI, the network marks some frames with a discard eligible (DE) bit. The network does its best to deliver all packets, but discards any DE packets first if there is congestion. Many inexpensive Frame Relay services are based on a CIR of zero. A CIR of zero means that every frame is a DE frame, and the network throws away any frame when it needs to. The DE bit is within the address field of the Frame Relay frame header.
- Inverse Address Resolution Protocol (Inverse ARP): A method of dynamically associating the network layer address of the remote router with a local DLCI. Inverse ARP allows a router to automatically discover the network address of the remote DTE device that is associated with a VC.
- Local Management Interface (LMI): A signaling standard between the router (DTE device) and the local Frame Relay switch (DCE device) that is responsible for managing the connection and maintaining status between the router and the Frame Relay switch. Basically, the LMI is a mechanism that provides status information about Frame Relay connections between the router (DTE) and the Frame Relay switch (DCE). Every 10 seconds or so, the end device polls the network, either requesting a dumb sequenced response or channel status information. If the network does not respond with the requested information, the user device may consider the connection to be down.
- Forward explicit congestion notification (FECN): A bit in the address field of the Frame Relay frame header. The FECN mechanism is initiated when a DTE device sends Frame Relay frames into the network. If the network is congested, DCE devices (Frame Relay switches) set the FECN bit value of the frames to one. When these frames reach the destination DTE device, the FECN bit in the address field indicates that these frames experienced congestion on the path from source to destination. The DTE device can relay this information to a higher-layer protocol for processing. Depending on the implementation, flow control may be initiated or the indication may be ignored.

- Backward explicit congestion notification (BECN): A bit in the address field of the Frame Relay frame header. DCE devices set the value of the BECN bit to 1 in frames that travel in the opposite direction of frames that have their FECN bit set. Setting BECN bits to 1 informs the receiving DTE device that a particular path through the network is congested. The DTE device can then relay this information to a higher-layer protocol for processing. Depending on the implementation, flow control may be initiated or the indication may be ignored.

By default, a Frame Relay network provides nonbroadcast multiaccess (NBMA) connectivity between remote sites. An NBMA environment is treated like other broadcast media environments, such as Ethernet, where all the routers are on the same subnet. However, to reduce cost, NBMA clouds are usually built in a hub-and-spoke topology. With a hub-and-spoke topology, the physical topology does not provide the multiaccess capabilities that Ethernet does, so each router may not have separate PVCs to reach the other remote routers on the same subnet. Split horizon is one of the main issues you encounter when Frame Relay is running multiple PVCs over a single interface. Frame Relay allows you to interconnect your remote sites in various topologies, described as follows:

- Star topology: Remote sites are connected to a central site that generally provides a service or an application. The star topology, also known as a hub-and-spoke configuration, is the most popular Frame Relay network topology. This is the least expensive topology because it requires the least number of PVCs.
- Full-mesh topology: All routers have VCs to all other destinations. Full-mesh topology, although costly, provides direct connections from each site to all other sites and allows for redundancy. When one link goes down, a router can reroute traffic through another site. As the number of nodes in this topology increases, a full-mesh topology can become very expensive. Use the n(n - 1)/2 formula to calculate the total number of links that are required to implement a full-mesh topology, where n is the number of nodes. For example, to fully mesh a network of 10 nodes, 45 links are required: 10(10 - 1)/2 = 45.
- Partial-mesh topology: Not all sites have direct access to all other sites. Depending on the traffic patterns in your network, you may want to have additional PVCs connect to remote sites that have large data traffic requirements.

In any Frame Relay topology, when a single interface must be used to interconnect multiple sites, you can have reachability issues because of the NBMA nature of Frame Relay. The Frame Relay NBMA topology can cause the following two problems:

- Routing update reachability: The split horizon rule reduces routing loops by preventing a routing update that is received on an interface from being forwarded out the same interface. In a hub-and-spoke Frame Relay topology, a remote router (a spoke router) sends an update to the headquarters router (the hub router) that is connecting multiple PVCs over a single physical interface. The headquarters router then receives the broadcast on its physical interface but cannot forward that routing update through the same interface to other remote (spoke) routers. Split horizon is not a problem if there is a single PVC on a physical interface because this type of connection would be more of a point-to-point connection type.
- Broadcast replication: With routers that support multipoint connections over a single interface that terminate many PVCs, the router must replicate broadcast packets, such as routing update broadcasts, on each PVC to the remote routers. These replicated broadcast packets consume bandwidth and cause significant latency variations in user traffic.

There are several methods to solve the routing update reachability issue.

One method for solving reachability issues that are brought on by split horizon may be to turn off split horizon. However, two problems exist with this solution. First, although most network layer protocols, such as IP, do allow you to disable split horizon, not all network layer protocols allow you to do this. Second, disabling split horizon increases the chances of routing loops in your network. Another method is to use a fully meshed topology; however, this topology increases the cost. The last method is to use subinterfaces. To enable the forwarding of broadcast routing updates in a hub-and-spoke Frame Relay topology, you can configure the hub router with logically assigned interfaces that are called subinterfaces, which are logical subdivisions of a physical interface. In split-horizon routing environments, routing updates that are received on one subinterface can be sent out another subinterface. In subinterface configuration, each VC can be configured as a point-to-point connection, which allows each subinterface to act like a leased line. When you use a Frame Relay point-to-point subinterface, each subinterface is on its own subnet.

A Frame Relay connection requires that, on a VC, the local DLCI must be mapped to a destination network layer address, such as an IP address. Routers can automatically discover their local DLCI from the local Frame Relay switch using the LMI protocol. On Cisco routers, the local DLCI can be dynamically mapped to the remote router network layer addresses with Inverse ARP. Inverse ARP associates a given DLCI to the next-hop protocol address for a specific connection. Inverse ARP is described in RFC 1293. Instead of using Inverse ARP to automatically map the local DLCIs to the remote router network layer addresses, you can manually configure a static Frame Relay map in the map table.
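A static map entry might be sketched as follows; the next-hop address 10.1.1.2 and DLCI 110 are illustrative:

RouterX(config)#interface serial 0/0
RouterX(config-if)#frame-relay map ip 10.1.1.2 110 broadcast

The broadcast keyword allows routing protocol broadcasts and multicasts to be forwarded over the PVC.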

Frame Relay Signaling


The LMI is a signaling standard between the router and the Frame Relay switch. The LMI is responsible for managing the connection and maintaining the status between the devices. Although the LMI is configurable, beginning in Cisco IOS Release 11.2, the Cisco router tries to autosense which LMI type the Frame Relay switch is using. The router sends one or more complete LMI status requests to the Frame Relay switch. The Frame Relay switch responds with one or more LMI types, and the router configures itself with the last LMI type received. Cisco routers support the following three LMI types:

- Cisco: LMI type that was developed jointly by Cisco, StrataCom, Northern Telecom (Nortel), and Digital Equipment Corporation
- ANSI: ANSI T1.617 Annex D
- Q.933A: ITU-T Q.933 Annex A

You can also manually configure the appropriate LMI type from the three supported types to ensure proper Frame Relay operation. When the router receives LMI information, it updates its VC status to one of the following three states:

- Active: Indicates that the VC connection is active and that routers can exchange data over the Frame Relay network.
- Inactive: Indicates that the local connection to the Frame Relay switch is working, but the remote router connection to the remote Frame Relay switch is not working.
- Deleted: Indicates that either no LMI is being received from the Frame Relay switch or there is no service between the router and the local Frame Relay switch.

Stages of Inverse ARP and LMI Operation


The following is a summary of how Inverse ARP and LMI signaling works with a Frame Relay connection.
1. Each router connects to the Frame Relay switch through a CSU/DSU.
2. When Frame Relay is configured on an interface, the router sends an LMI status inquiry message to the Frame Relay switch. The message notifies the switch of the router status and asks the switch for the connection status of the router VCs.
3. When the Frame Relay switch receives the request, it responds with an LMI status message that includes the local DLCIs of the PVCs to the remote routers to which the local router can send data.
4. For each active DLCI, each router sends an Inverse ARP packet to introduce itself.
5. When a router receives an Inverse ARP message, it creates a map entry in its Frame Relay map table that includes the local DLCI and the remote router network layer address. Note that the router DLCI is the local DLCI, not the DLCI that the remote router is using. Any of the three connection states can appear in the Frame Relay map table. Note: If Inverse ARP is not working or the remote router does not support Inverse ARP, you must manually configure static Frame Relay maps, which map the local DLCIs to the remote network layer addresses.
6. Every 60 seconds, routers send Inverse ARP messages on all active DLCIs. Every 10 seconds, the router exchanges LMI information with the switch (keepalives).
7. The router changes the status of each DLCI to active, inactive, or deleted, based on the LMI response from the Frame Relay switch.

Topic Notes: Configuring Frame Relay

A basic Frame Relay configuration assumes that you want to configure Frame Relay on one or more physical interfaces and that the routers support LMI and Inverse ARP.
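Because LMI is autosensed and Inverse ARP builds the address-to-DLCI mappings automatically, such a configuration might need little more than the encapsulation. A minimal sketch follows; the interface and addressing are illustrative:

RouterX(config)#interface serial 0/0
RouterX(config-if)#ip address 10.1.1.1 255.255.255.0
RouterX(config-if)#encapsulation frame-relay
RouterX(config-if)#no shutdown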

Configuring Frame Relay Subinterfaces


You can configure subinterfaces in one of the following two modes:

- Point-to-point: A single point-to-point subinterface is used to establish one PVC connection to another physical interface or subinterface on a remote router. In this case, each pair of the point-to-point routers is on its own subnet, and each point-to-point subinterface has a single DLCI. In a point-to-point environment, because each subinterface acts like a point-to-point interface, update traffic is not subject to the split-horizon rule.
- Multipoint: A single multipoint subinterface is used to establish multiple PVC connections to multiple physical interfaces or subinterfaces on remote routers. In this case, all the participating interfaces are in the same subnet. In this environment, because the subinterface acts like a regular NBMA Frame Relay interface, update traffic is subject to the split-horizon rule.
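A point-to-point subinterface configuration might be sketched as follows; the subinterface number, subnet, and DLCI 103 are illustrative:

RouterX(config)#interface serial 0/0
RouterX(config-if)#encapsulation frame-relay
RouterX(config-if)#no shutdown
RouterX(config-if)#interface serial 0/0.103 point-to-point
RouterX(config-subif)#ip address 10.1.3.1 255.255.255.252
RouterX(config-subif)#frame-relay interface-dlci 103

The IP address is placed on the subinterface rather than the physical interface, and the frame-relay interface-dlci command associates the PVC with that subinterface.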

Topic Notes: Verifying Frame Relay


The show interfaces command displays information regarding the encapsulation and Layer 1 and Layer 2 status. Verify that the encapsulation is set to Frame Relay. The command also displays information about the LMI type and the LMI DLCI. The output also displays the Frame Relay DTE or DCE type. Normally, the router will be the DTE. However, a Cisco router can be configured as the Frame Relay switch; in this case, the type will be DCE. Use the show frame-relay lmi command to display LMI traffic statistics. For example, this command shows the number of status messages that are exchanged between the local router and the local Frame Relay switch. This command output helps isolate the problem to a Frame Relay communications issue between the carrier's switch and your router.

Topic Notes: Troubleshooting Frame Relay Connectivity Issues


The first step to troubleshooting Frame Relay connectivity issues is to check the status of the Frame Relay interface. Use the show interface serial command to check the status of the Frame Relay interface. If the output of the show interface serial command displays a status of "interface down/line protocol down", this typically indicates a problem at Layer 1, the physical layer. This output means that you have a problem with the cable, the CSU/DSU, or the serial line.

First, use the show controllers serial command to verify that the cable is present and recognized by the router. Next, you may need to troubleshoot the problem with a loopback test. Follow these steps to perform a loopback test.
Step 1: Set the serial line encapsulation to High-Level Data Link Control (HDLC) and the keepalive to 10 seconds. To do this, use the encapsulation hdlc and keepalive 10 commands in interface configuration mode on the interface you are troubleshooting.
Step 2: Place the CSU/DSU or modem in local-loop mode. Check the device documentation for how to do this. If the line protocol comes up when the CSU/DSU or modem is in local-loop mode, indicated by a "line protocol is up (looped)" message, it suggests that the problem is occurring beyond the local CSU/DSU. If the status line does not change states, there could be a problem in the router, connecting cable, CSU/DSU, or modem. In most cases, the problem is with the CSU/DSU or modem.
Step 3: Execute a ping to the IP address of the interface you are troubleshooting while the CSU/DSU or modem is in local-loop mode. There should not be any misses. An extended ping that uses a data pattern of 0x0000 is helpful in resolving line problems because a T1 or E1 connection derives clock from the data and requires a transition every 8 bits. A data pattern with many zeros helps to determine if the transitions are appropriately forced on the trunk. A pattern with many ones is used to appropriately simulate a high zero load in case there is a pair of data inverters in the path. The alternating pattern (0x5555) represents a "typical" data pattern. If your pings fail or if you get cyclic redundancy check (CRC) errors, a bit error rate tester (BERT) with an appropriate analyzer from the telephone company (telco) is needed.
Step 4: When you are finished testing, ensure that you return the encapsulation of the interface to Frame Relay.
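Steps 1 and 4 can be sketched as follows; the interface is illustrative:

RouterX(config)#interface serial 0/0
RouterX(config-if)#encapsulation hdlc
RouterX(config-if)#keepalive 10

After the loopback test is complete, restore the original encapsulation:

RouterX(config-if)#encapsulation frame-relay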
An incorrect statically defined DLCI on a subinterface may also cause the status of the subinterface to appear as "down/down", and the PVC status may appear as "deleted". To verify that the correct DLCI number has been configured, use the show frame-relay pvc command. The PVC STATUS field in the output of the show frame-relay pvc command reports the status of the PVC. The DCE device reports the status, and the DTE device receives the status. The PVC status is exchanged using the LMI protocol.

- ACTIVE state: Indicates a successful end-to-end (DTE to DTE) circuit.
- INACTIVE state: Indicates a successful connection to the switch (DTE to DCE) without a DTE detected on the other end of the PVC. This can occur due to residual or incorrect configuration on the switch.
- DELETED state: Indicates that the DTE is configured for a DLCI that the switch does not recognize as valid for that interface.

If the output of a show interface serial command displays a status of "interface up/line protocol down", this typically indicates a problem at Layer 2, the data link layer. If so, the serial interface may not be receiving the LMI keepalives from the Frame Relay service provider. To verify that LMI messages are being sent and received, and to verify that the router LMI type matches the LMI type of the provider, use the show frame-relay lmi command.

Troubleshooting Frame Relay Remote Router Connectivity


For a Frame Relay router to reach a peer router across the Frame Relay network, it must map the IP address of the peer router to the local DLCI it uses to reach that IP address. The show frame-relay map command shows the IP address-to-DLCI mappings and whether each mapping was statically entered or dynamically learned using Inverse ARP. If you have recently changed the address on the remote Frame Relay router interface, you may need to use the clear frame-relay inarp command to clear the Frame Relay map of the local router. This will cause Inverse ARP to dynamically remap the new address to the DLCI. If the IP address of the peer router does not appear in the Frame Relay mapping table, the remote router may not support Inverse ARP. Try adding the IP address-to-DLCI mapping statically by using the frame-relay map command. Additionally, there may be access control lists (ACLs) applied to the Frame Relay interfaces that affect connectivity. To verify whether an ACL is applied to an interface, use the show ip interface command. To temporarily remove an ACL from an interface to verify whether it is affecting connectivity, use the no ip access-group command in interface configuration mode.
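These remote-connectivity checks might be sketched as follows; the interface and ACL number 101 are illustrative:

RouterX#show frame-relay map
RouterX#clear frame-relay inarp
RouterX#show ip interface serial 0/0
RouterX#configure terminal
RouterX(config)#interface serial 0/0
RouterX(config-if)#no ip access-group 101 in

Re-check the Frame Relay map after clearing it to confirm that Inverse ARP relearned the peer address, and remember to reapply any ACL you removed once testing is complete.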

Troubleshooting Frame Relay End-to-End Connectivity


End-to-end connectivity between two workstations across an active Frame Relay network depends on whether general routing requirements are met. If you are experiencing end-to-end connectivity problems in your Frame Relay network, check the routing tables to see if the routers have a route to the destination with which you are having connectivity problems. To check the routing table, use the show ip route command. If only directly connected routes appear in the routing table, the problem may be that the Frame Relay network is preventing the routing protocol updates from being advertised across it. Because of the NBMA nature of Frame Relay, you must configure the router to pass routing protocol broadcasts or multicasts across the Frame Relay network. With the use of Inverse ARP, this capability is in effect automatically. With a static Frame Relay map, you must explicitly configure the support for broadcast traffic by adding the broadcast keyword. The show frame-relay map command displays whether the broadcast capability is in effect, allowing routing updates to be passed across the Frame Relay network.

