
CCNA 1

Module 1 : Introduction to Networking


Module 2 : Networking Fundamentals
Module 3 : Networking Media
Module 4 : Cable Testing
Module 5 : Cabling LANs and WANs
Module 6 : Ethernet Fundamentals
Module 7 : Ethernet Technologies
Module 8 : Ethernet Switching
Module 9 : TCP/IP Protocol Suite and IP Addressing
Module 10 : Routing Fundamentals and Subnets
Module 11 : TCP/IP Transport and Application Layers
Case Study 1 : Structured Cabling Case Study
To understand the role that computers play in a networking system, consider the Internet. Internet
connections are essential for businesses and education. Careful planning is required to build a network that
will connect to the Internet. Even for an individual personal computer (PC) to connect to the Internet, some
planning and decisions are required. The computer resources needed for an Internet connection must be considered. These include the type of device that connects the PC to the Internet, such as a network interface card (NIC) or a modem. Protocols, or rules, must be configured before a computer can connect to the Internet. Proper
selection of a Web browser is also important.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this lesson should be able to perform the following tasks:
Understand the physical connections needed for a computer to connect to the Internet
Recognize the components of a computer
Install and troubleshoot NICs and modems
Configure the set of protocols needed for Internet connection
Use basic procedures to test an Internet connection
Demonstrate a basic ability to use Web browsers and plug-ins
1.1 Connecting to the Internet
1.1.1 Requirements for Internet connection
This page will describe the physical and logical requirements for an Internet connection.
The Internet is the largest data network on earth. The Internet consists of many large and small networks
that are interconnected. Individual computers are the sources and destinations of information through the
Internet. Connection to the Internet can be broken down into the physical connection, the logical connection,
and applications.
A physical connection is made by connecting an adapter card, such as a modem or a NIC, from a PC to a
network. The physical connection is used to transfer signals between PCs within the local-area network
(LAN) and to remote devices on the Internet.
The logical connection uses standards called protocols. A protocol is a formal description of a set of rules
and conventions that govern how devices on a network communicate. Connections to the Internet may use
multiple protocols. The Transmission Control Protocol/Internet Protocol (TCP/IP) suite is the primary set of
protocols used on the Internet. The protocols in the TCP/IP suite work together to transmit and receive data, or information.
The last part of the connection is the applications, or software programs, that interpret and display data in
an understandable form. Applications work with protocols to send and receive data across the Internet. A
Web browser displays HTML as a Web page. Examples of Web browsers include Internet Explorer and
Netscape. File Transfer Protocol (FTP) is used to download files and programs from the Internet. Web
browsers also use proprietary plug-in applications to display special data types such as movies or flash
animations.
This is an introductory view of the Internet, and it may seem to be a simplistic process. As the topic is
explored in greater depth, students will learn that data transmission across the Internet is a complicated task.
1.1.3 Network interface card
This page will explain what a NIC is and how it works. Students will also learn how to select the best NIC for
a PC.

A NIC, or LAN adapter, provides network communication capabilities to and from a PC. On desktop computer
systems, it is a printed circuit board that resides in a slot on the motherboard and provides an interface
connection to the network media. On laptop computer systems, it is commonly integrated into the laptop or
available on a small, credit card-sized PCMCIA card. PCMCIA stands for Personal Computer Memory
Card International Association. PCMCIA cards are also known as PC cards. The type of NIC must match the
media and protocol used on the local network.
The NIC uses an interrupt request (IRQ), an input/output (I/O) address, and upper memory space to work
with the operating system. An IRQ value is an assigned location where the computer can expect a particular
device to interrupt it when the device sends the computer signals about its operation. For example, when a
printer has finished printing, it sends an interrupt signal to the computer. The signal momentarily interrupts
the computer so that it can decide what processing to do next. Since multiple signals to the computer on the
same interrupt line might not be understood by the computer, a unique value must be specified for each
device and its path to the computer. Prior to Plug-and-Play (PnP) devices, users often had to set IRQ values
manually, or be aware of them, when adding a new device to a computer.
These considerations are important in the selection of a NIC:
Protocols: Ethernet, Token Ring, or FDDI
Types of media: Twisted-pair, coaxial, wireless, or fiber-optic
Type of system bus: PCI or ISA
Students can use the Interactive Media Activity to view a NIC.
The next page will explain how NICs and modems are installed.
1.1.4 NIC and modem installation
This page will explain how an adapter card, which can be a modem or a NIC, provides Internet connectivity.
Students will also learn how to install a modem or a NIC.
A modem, or modulator-demodulator, is a device that provides the computer with connectivity to a telephone
line. A modem converts data from a digital signal to an analog signal that is compatible with a standard
phone line. The modem at the receiving end demodulates the signal, which converts it back to digital.
Modems may be installed internally or attached externally to the computer using a phone line.
A NIC must be installed for each device on a network. A NIC provides a network interface for each host.
Different types of NICs are used for various device configurations. Notebook computers may have a built-in
interface or use a PCMCIA card. The figure shows wired and wireless PCMCIA network cards and a Universal Serial Bus (USB) Ethernet adapter. Desktop systems may use an internal network adapter, called a NIC, or an external network adapter that connects to the network through a USB port.
Situations that require NIC installation include the following:
Installation of a NIC on a PC that does not already have one
Replacement of a malfunctioning or damaged NIC
Upgrade from a 10-Mbps NIC to a 10/100/1000-Mbps NIC
Change to a different type of NIC, such as wireless
Installation of a secondary, or backup, NIC for network security reasons
To perform the installation of a NIC or modem, the following resources may be required:
Knowledge of how the adapter, jumpers, and plug-and-play software are configured
Availability of diagnostic tools
Ability to resolve hardware resource conflicts
The next page will describe the history of network connectivity.
1.1.5 Overview of high-speed and dial-up connectivity
This page will explain how modem connectivity has evolved into high-speed services.
In the early 1960s, modems were introduced to connect dumb terminals to a central computer. Many
companies used to rent computer time since it was too expensive to own an on-site system. The connection
rate was very slow. It was 300 bits per second (bps), which is about 30 characters per second.
As PCs became more affordable in the 1970s, bulletin board systems (BBSs) appeared. These BBSs
allowed users to connect and post or read messages on a discussion board. The 300-bps speed was
acceptable since it was faster than the speed at which most people could read or type. In the early 1980s,
use of bulletin boards increased exponentially and the 300 bps speed quickly became too slow for the
transfer of large files and graphics. In the 1990s, modems could operate at 9600 bps. By 1998, they reached
the current standard of 56,000 bps, or 56 kbps.
Soon the high-speed services used in the corporate environment such as Digital Subscriber Line (DSL) and
cable modem access moved to the consumer market. These services no longer required expensive
equipment or a second phone line. These are "always on" services that provide instant access and do not
require a connection to be established for each session. This provides more reliability and flexibility and has
simplified Internet connection sharing in small office and home networks.
The next page will introduce an important set of network protocols.

1.1.6 TCP/IP description and configuration


This page will introduce the Transmission Control Protocol/Internet Protocol (TCP/IP).
TCP/IP is a set of protocols or rules that have been developed to allow computers to share resources across
a network. The operating system tools must be used to configure TCP/IP on a workstation. The process is
very similar for Windows or Mac operating systems.
The Lab Activity will teach students how to obtain basic TCP/IP configuration information.
The next page will introduce the ping command.
1.1.7 Testing connectivity with ping
This page will explain how the ping command is used to test network connectivity.
Ping is a basic program that verifies that a particular IP address exists and can accept requests. The computer
acronym ping stands for Packet Internet or Inter-Network Groper. The name was contrived to match the
submariners' term for the sound of a returned sonar pulse from an underwater object.
The ping command works by sending special Internet Protocol (IP) packets, called Internet Control Message
Protocol (ICMP) Echo Request datagrams, to a specified destination. Each packet sent is a request for a
reply. The output response for a ping contains the success ratio and round-trip time to the destination.
From this information, it is possible to determine if there is connectivity to a destination. The ping command
is used to test the NIC transmit and receive function, the TCP/IP configuration, and network connectivity. The
following types of ping commands can be issued:
ping 127.0.0.1: This is a unique ping and is called an internal loopback test. It is used to verify the TCP/IP network configuration.
ping IP address of host computer: A ping to a host IP address verifies the TCP/IP address configuration for the local host and connectivity to the host.
ping default-gateway IP address: A ping to the default gateway indicates whether the router that connects the local network to other networks can be reached.
ping remote destination IP address: A ping to a remote destination verifies connectivity to a remote host.
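A minimal sketch of this test sequence in Python is shown below. It simply shells out to the operating system's ping command; the host, gateway, and remote addresses used here are hypothetical placeholders, not values from the curriculum.

# Run the standard sequence of ping tests by calling the OS ping command.
# The addresses below are hypothetical examples; substitute local values.
import platform
import subprocess

def ping(address):
    # Windows uses -n for the packet count; Unix-like systems use -c.
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(["ping", count_flag, "4", address])
    return result.returncode == 0  # 0 means replies were received

tests = [
    ("Internal loopback", "127.0.0.1"),
    ("Local host IP address", "192.168.1.10"),   # example address
    ("Default gateway", "192.168.1.1"),          # example address
    ("Remote destination", "198.51.100.25"),     # example address
]

for label, address in tests:
    status = "reachable" if ping(address) else "not reachable"
    print(label + " (" + address + "): " + status)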
Students will use the ping and tracert commands in the Lab Activity.

1.1.8 Web browsers and plug-ins


This page will explain what a Web browser is and how it performs the following functions:
Contacts a Web server
Requests information
Receives information
Displays the results on the screen
A Web browser is software that interprets HTML, which is one of the languages used to code Web page
content. Some new technologies use other markup languages with more advanced features. HTML, which is
the most common markup language, can display graphics or play sound, movies, and other multimedia files.
Hyperlinks that are embedded in a Web page provide a quick link to another location on the same page or a
different Internet address.
Two of the most popular Web browsers are Internet Explorer (IE) and Netscape Communicator. These
browsers perform the same tasks. However, there are differences between them. Some websites may not
support the use of one of these browsers. It is a good idea to have both programs installed.

Here are some features of Netscape Navigator:


Was the first popular browser
Uses less disk space
Displays HTML files
Performs e-mail and file transfers
Here are some features of IE:
Is powerfully integrated with other Microsoft products
Uses more disk space
Displays HTML files
Performs e-mail and file transfers
There are also many special, or proprietary, file types that standard Web browsers cannot display. To view these files, the browser must be configured to use plug-in applications. These applications work with the browser to launch the programs required to view special files:
Flash: Plays multimedia files created by Macromedia Flash
Quicktime: Plays video files created by Apple
Real Player: Plays audio files
Use the following procedure to install the Flash plug-in:
1. Go to the Macromedia website.
2. Download the flash32.exe file.
3. Run and install the plug-in in Netscape or IE.
4. Access the Cisco Academy website to verify the installation and proper operation.
Computers also perform many other useful tasks. Many employees use a set of applications in the form of an
office suite such as Microsoft Office. Office applications typically include the following:
Spreadsheet software contains tables that consist of columns and rows and it is often used with
formulas to process and analyze data.
Modern word processors allow users to create documents that include graphics and richly formatted
text.
Database management software is used to store, maintain, organize, sort, and filter records. A
record is a collection of information identified by some common theme such as customer name.
Presentation software is used to design and develop presentations to deliver at meetings, classes, or
sales presentations.
A personal information manager includes an e-mail utility, contact lists, a calendar, and a to-do list.
Office applications are now a part of daily work, as typewriters were before PCs.
The Lab Activity will help students understand how a Web browser works.
The next page will discuss the troubleshooting process.
1.1.9 Troubleshooting Internet connection problems
The Lab Activity on this page will show students how to troubleshoot hardware, software, and network
configuration problems. The goal is to locate and repair the problems in a set amount of time to gain access
to the curriculum. This lab will demonstrate how complex it is to configure Internet access. This includes the
processes and procedures used to troubleshoot computer hardware, software, and network systems.
This page concludes this lesson. The next lesson will discuss computer number systems. The first page will
describe the binary system.
1.2 Network Math

1.2.1 Binary presentation of data


This page will explain how computers use the binary number system to represent data.
Computers work with and store data using electronic switches that are either ON or OFF. Computers can
only understand and use data that is in this two-state or binary format. The 1s and 0s are used to represent
the two possible states of an electronic component in a computer. 1 is represented by an ON state, and 0 is
represented by an OFF state. They are referred to as binary digits or bits.
American Standard Code for Information Interchange (ASCII) is the code that is most commonly used to
represent alpha-numeric data in a computer. ASCII uses binary digits to represent the symbols typed on
the keyboard. When computers send ON or OFF states over a network, electrical, light, or radio waves are
used to represent the 1s and 0s. Notice that each character is represented by a unique pattern of eight
binary digits.
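For example, the ASCII code for the uppercase letter A is 65, which is stored as the eight-bit pattern 01000001. A one-line Python check (included here only as an illustration, not part of the original lesson) confirms this:

# Show the eight-bit ASCII pattern for a typed character.
character = "A"
print(character, ord(character), format(ord(character), "08b"))  # A 65 01000001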
Because computers are designed to work with ON/OFF switches, binary digits and binary numbers are natural to them. Humans use the decimal number system, which is relatively simple when compared to the long series of 1s and 0s used by computers. So the computer binary numbers need to be converted to decimal numbers.
Sometimes binary numbers are converted to hexadecimal numbers. This reduces a long string of binary digits to a few hexadecimal characters. It is easier to remember and to work with hexadecimal numbers.
The next page will discuss bits and bytes.

1.2.2 Bits and bytes


This page will explain what bits and bytes are.
A binary 0 might be represented by 0 volts of electricity.
A binary 1 might be represented by +5 volts of electricity.
Computers are designed to use groupings of eight bits. This grouping of eight bits is referred to as a byte.
In a computer, one byte represents a single addressable storage location. These storage locations represent
a value or single character of data, such as an ASCII code. The total number of combinations of the eight
switches being turned on and off is 256. The value range of a byte is from 0 to 255. So a byte is an important
concept to understand when working with computers and networks.
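A quick Python check (an illustrative aside) confirms the number of combinations and the value range of a byte:

# Eight ON/OFF switches give 2 to the power of 8 combinations.
print(2 ** 8)          # 256 combinations
print(0, 2 ** 8 - 1)   # byte values range from 0 to 255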
The next page will describe the Base 10 number system.

1.2.3 Base 10 number system


Numbering systems consist of symbols and rules for their use. This page will discuss the most commonly
used number system, which is decimal, or Base 10.
Base 10 uses the ten symbols 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. These symbols can be combined to represent
all possible numeric values.
The decimal number system is based on powers of 10. Each column position of a value, from right to left, is
multiplied by the base number 10 raised to a power, which is the exponent. The power that 10 is raised to
depends on its position to the left of the decimal point. When a decimal number is read from right to left, the first or rightmost position represents 10^0, which equals 1. The second position represents 10^1, which equals 10. The third position represents 10^2, which equals 100. The seventh position to the left represents 10^6, which equals 1,000,000. This is true no matter how many columns the number has.
Here is an example:
2134 = (2 x 10^3) + (1 x 10^2) + (3 x 10^1) + (4 x 10^0)
This review of the decimal system will help students understand the Base 2 and Base 16 number systems.
These systems use the same methods as the decimal system.
The next page will describe the Base 2 number system.

1.2.4 Base 2 number system


This page will discuss the number system that computers use to recognize and process data, which is
binary, or Base 2.
The binary system uses only two symbols, which are 0 and 1. The position of each digit from right to left in a
binary number represents the base number 2 raised to a power or exponent. These place values are, from
right to left, 2^0, 2^1, 2^2, 2^3, 2^4, 2^5, 2^6, and 2^7, or 1, 2, 4, 8, 16, 32, 64, and 128 respectively.
Here is an example:
10110 = (1 x 2^4 = 16) + (0 x 2^3 = 0) + (1 x 2^2 = 4) + (1 x 2^1 = 2) + (0 x 2^0 = 0) = 22 (16 + 0 + 4 + 2 + 0)
This example shows that the binary number 10110 is equal to the decimal number 22.
The next page will explain the conversion of decimal numbers to binary numbers.

1.2.5 Converting decimal numbers to 8-bit binary numbers


This page will teach students how to convert decimal numbers to binary numbers.
There are several ways to convert decimal numbers to binary numbers. The flowchart in Figure describes
one method. This method is one of several methods that can be used. It is best to select one method and
practice with it until it always produces the correct answer.
Conversion exercise:
Use the example below to convert the decimal number 168 to a binary number:
128 is less than 168 so the leftmost bit in the binary number is a 1. 168 - 128 = 40.
64 is not less than or equal to 40 so the second bit from the left is a 0.
32 is less than 40 so the third bit from the left is a 1. 40 - 32 = 8.
16 is not less than or equal to 8 so the fourth bit from the left is a 0.
8 is equal to 8 so the fifth bit from the left is a 1. 8 - 8 = 0. Therefore, the bits to the right are all 0.
This example shows that the decimal number 168 is equal to the binary number 10101000.
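The subtraction method described above can be written as a short Python sketch (one possible implementation, assuming the input fits in eight bits):

# Convert a decimal number from 0 to 255 into an 8-bit binary string
# using the subtraction method described above.
def decimal_to_binary(value):
    bits = ""
    for place in (128, 64, 32, 16, 8, 4, 2, 1):
        if value >= place:      # the place value fits, so this bit is 1
            bits += "1"
            value -= place
        else:                   # the place value does not fit, so this bit is 0
            bits += "0"
    return bits

print(decimal_to_binary(168))   # prints 10101000, as in the example above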
The number converter activity in Figure will allow students to practice decimal to binary conversions.
In the Lab Activity, students will practice the conversion of decimal numbers to binary numbers.

1.2.6 Converting 8-bit binary numbers to decimal numbers


This page will teach students how to convert binary numbers to decimal numbers.
There are two basic ways to convert binary numbers to decimal numbers. The flowchart in Figure shows
one example.
Students can also multiply each binary digit by the base number of 2 raised to the exponent of its position.
Here is an example:
Convert the binary number 01110000 to a decimal number.
Note:
Work from right to left. Remember that anything raised to the 0 power is 1.
0 x 2^0 = 0
0 x 2^1 = 0
0 x 2^2 = 0
0 x 2^3 = 0
1 x 2^4 = 16
1 x 2^5 = 32
1 x 2^6 = 64
0 x 2^7 = 0
__________
= 112
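The multiplication method just shown can be expressed as a brief Python sketch; Python's built-in conversion is included as a cross-check:

# Convert an 8-bit binary string to decimal by multiplying each bit
# by 2 raised to the power of its position (the rightmost position is 0).
def binary_to_decimal(bits):
    total = 0
    for position, bit in enumerate(reversed(bits)):
        total += int(bit) * (2 ** position)
    return total

print(binary_to_decimal("01110000"))   # prints 112, as in the example above
print(int("01110000", 2))              # built-in conversion gives the same result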
The Lab Activity will let students practice the conversion of binary numbers to decimal numbers.

1.2.7 Four-octet dotted decimal representation of 32-bit binary numbers


This page will explain how binary numbers are represented in dotted decimal notation.
Currently, addresses assigned to computers on the Internet are 32-bit binary numbers. To make it easier to
work with these addresses, the 32-bit binary number is broken into a series of decimal numbers. First the
binary number is split into four groups of eight binary digits. Then each group of eight bits, or octet, is
converted into its decimal equivalent. This conversion can be performed as shown on the previous page.
When written, the complete binary number is represented as four groups of decimal digits separated by
periods. This is called dotted decimal notation and provides a compact and easy way to refer to 32-bit
addresses. This representation is used frequently later in this course, so it is necessary to understand it. For
dotted decimal to binary conversions, remember that each group of one to three decimal digits represents a
group of eight binary digits. If the decimal number that is being converted is less than 128, zeros will need to be added to the left of the equivalent binary number until there are a total of eight bits.
Try the following conversions for practice:
Convert 200.114.6.51 to its 32-bit binary equivalent.
Convert 10000000 01011101 00001111 10101010 to its dotted decimal equivalent.
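A short Python sketch of these conversions is shown below. It uses the address 10.34.23.134, which also appears later in this module, so the two practice conversions above are left to the reader.

# Convert between dotted decimal notation and 32-bit binary form.
def dotted_to_binary(address):
    return " ".join(format(int(octet), "08b") for octet in address.split("."))

def binary_to_dotted(binary_groups):
    return ".".join(str(int(group, 2)) for group in binary_groups.split())

print(dotted_to_binary("10.34.23.134"))                         # 00001010 00100010 00010111 10000110
print(binary_to_dotted("00001010 00100010 00010111 10000110"))  # 10.34.23.134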
1.2.8 Hexadecimal
This page will teach students about the hexadecimal number system. Students will also learn how
hexadecimal is used to represent binary and decimal numbers.
The hexadecimal or Base 16 number system is commonly used to represent binary numbers in a more
readable form. Computers perform computations in binary. However, there are several instances when the
binary output of a computer is expressed in hexadecimal to make it easier to read.
The configuration register in Cisco routers often requires hexadecimal to binary and binary to hexadecimal
conversions. Cisco routers have a configuration register that is 16 bits long. The 16-bit binary number can be
represented as a four-digit hexadecimal number. For example, 0010000100000010 in binary equals 2102 in
hexadecimal. A hexadecimal number is often indicated with a 0x. For example, the hexadecimal number
2102 would be written as 0x2102.
Like the binary and decimal systems, the hexadecimal system is based on the use of symbols, powers, and
positions. The symbols that hexadecimal uses are the digits 0 through 9 and the letters A through F.
All combinations of four binary digits can be represented with one hexadecimal symbol. These values require
one or two decimal symbols. Two hexadecimal digits can efficiently represent any combination of eight binary
digits. This would require up to four decimal digits. The use of two decimal digits to represent four bits could
cause confusion. For example, the eight-bit binary number 01110011 would be 115 if converted to decimal digits. It is unclear whether this means 11 and 5 or 1 and 15. If 11-5 is used, the binary number would be 1011 0101, which is not the number originally converted. The hexadecimal conversion is 73, which always converts back to 01110011.
An eight-bit binary number can be converted to two hexadecimal digits. This reduces the confusion of
reading long strings of binary numbers and the amount of space it takes to write binary numbers. Remember
that 0x may be used to indicate a hexadecimal value. The hexadecimal number 5D might be written as 0x5D.
To convert to binary, simply expand each hexadecimal digit into its four-bit binary equivalent.
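A brief Python sketch of these conversions, using the configuration register value 0x2102 and the value 0x5D mentioned above:

# Expand hexadecimal values into binary and collapse the binary back to hexadecimal.
config_register = 0x2102
binary_form = format(config_register, "016b")   # pad to 16 bits
print(binary_form)                              # 0010000100000010
print(hex(int(binary_form, 2)))                 # 0x2102
print(format(0x5D, "08b"))                      # 01011101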
The Lab Activity will teach students how to convert hexadecimal numbers into decimal and binary values.

1.2.9 Boolean or binary logic


This page will introduce Boolean logic and explain how it is used.
Boolean logic is based on digital circuitry that accepts one or two incoming voltages. Based on the input
voltages, output voltage is generated. For computers the voltage difference is represented as an ON or OFF
state. These two states are associated with a binary 1 or 0.
Boolean logic is a binary logic that allows two numbers to be compared and makes a choice based on the
numbers. These choices are the logical AND, OR, and NOT. With the exception of the NOT, Boolean
operations have the same function. They accept two numbers, which are 1 and 0, and generate a result
based on the logic rule.
The NOT operation takes the value that is presented and inverts it. A 1 becomes a 0 and a 0 becomes a 1.
Remember that the logic gates are electronic devices built specifically for this purpose. The logic rule that
they follow is whatever the input is, the output is the opposite.
The AND operation compares two input values. If both values are 1, the logic gate generates a 1 as the
output. Otherwise it outputs a 0. There are four combinations of input values. Three of these combinations
generate a 0, and one combination generates a 1.
The OR operation also takes two input values. If at least one of the input values is 1, the output value is 1.
Again there are four combinations of input values. Three combinations generate a 1 and the fourth generates
a 0.
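These rules can be demonstrated with Python's bitwise operators on single binary digits (a minimal sketch; the logic gates themselves are hardware, as noted above):

# Truth tables for NOT, AND, and OR on single binary digits.
for a in (0, 1):
    print("NOT", a, "=", 1 - a)            # NOT inverts the bit

for a in (0, 1):
    for b in (0, 1):
        print(a, "AND", b, "=", a & b)     # 1 only when both inputs are 1
        print(a, "OR", b, "=", a | b)      # 1 when at least one input is 1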
The two networking operations that use Boolean logic are subnetwork and wildcard masking. The masking
operations are used to filter addresses. The addresses identify the devices on the network and can be
grouped together or controlled by other network operations. These functions will be explained in depth later
in the curriculum.

1.2.10 IP addresses and network masks


This page will explain the relationship between IP addresses and network masks.
When IP addresses are assigned to computers, some of the bits on the left side of the 32-bit IP number
represent a network. The number of bits designated depends on the address class. The bits left over in the
32-bit IP address identify a particular computer on the network. A computer is referred to as a host. The IP
address of a computer consists of a network and a host part.
To inform a computer how the 32-bit IP address has been split, a second 32-bit number called a subnetwork
mask is used. This mask is a guide that determines how the IP address is interpreted. It indicates how many
of the bits are used to identify the network of the computer. The subnetwork mask sequentially fills in the 1s
from the left side of the mask. A subnet mask will always be all 1s until the network address is identified and
then it will be all 0s to the end of the mask. The bits in the subnet mask that are 0 identify the computer or
host.
Some examples of subnet masks are as follows:
11111111000000000000000000000000 written in dotted decimal as 255.0.0.0
11111111111111110000000000000000 written in dotted decimal as 255.255.0.0
In the first example, the first eight bits from the left represent the network portion of the address, and the last
24 bits represent the host portion of the address. In the second example the first 16 bits represent the
network portion of the address, and the last 16 bits represent the host portion of the address.
The IP address 10.34.23.134 in binary form is 00001010.00100010.00010111.10000110.
A Boolean AND of the IP address 10.34.23.134 and the subnet mask 255.0.0.0 produces the network
address of this host:
00001010.00100010.00010111.10000110
11111111.00000000.00000000.00000000
00001010.00000000.00000000.00000000
The dotted decimal conversion is 10.0.0.0 which is the network portion of the IP address when the 255.0.0.0
mask is used.
A Boolean AND of the IP address 10.34.23.134 and the subnet mask 255.255.0.0 produces the network
address of this host:
00001010.00100010.00010111.10000110
11111111.11111111.00000000.00000000
00001010.00100010.00000000.00000000
The dotted decimal conversion is 10.34.0.0 which is the network portion of the IP address when the
255.255.0.0 mask is used.
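A minimal Python sketch of this masking operation follows; it performs the same Boolean AND octet by octet and reproduces the two results above.

# Apply a subnet mask to an IP address with a Boolean AND, octet by octet.
def network_address(ip, mask):
    ip_octets = (int(octet) for octet in ip.split("."))
    mask_octets = (int(octet) for octet in mask.split("."))
    return ".".join(str(i & m) for i, m in zip(ip_octets, mask_octets))

print(network_address("10.34.23.134", "255.0.0.0"))      # 10.0.0.0
print(network_address("10.34.23.134", "255.255.0.0"))    # 10.34.0.0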
This is a brief illustration of the effect that a network mask has on an IP address. The importance of masking
will become much clearer as more work with IP addresses is done. For right now it is only important that the
concept of the mask is understood.


Summary
This page summarizes the topics discussed in
this module.
A connection to a computer network can be broken down into the physical connection, the logical connection,
and the applications that interpret the data and display the information. Establishment and maintenance of
the physical connection requires knowledge of PC components and peripherals. Connectivity to the Internet
requires an adapter card, which may be a modem or a network interface card (NIC).
In the early 1960s modems were introduced to provide connectivity to a central computer. Today, access
methods have progressed to services that provide constant, high-speed access.
The logical connection uses standards called protocols. The Transmission Control Protocol/Internet Protocol
(TCP/IP) suite is the primary group of protocols used on the Internet. TCP/IP can be configured on a
workstation using operating system tools. The ping utility can be used to test connectivity.
A web browser is software that is installed on the PC to gain access to the Internet and local web pages.
Occasionally a browser may require plug-in applications. These applications work in conjunction with the
browser to launch the program required to view special or proprietary files.
Computers recognize and process data using the binary, or Base 2, numbering system. Often the binary
output of a computer is expressed in hexadecimal to make it easier to read. The ability to convert decimal
numbers to binary numbers is valuable when converting dotted decimal IP addresses to machine-readable
binary format. Conversion of hexadecimal numbers to binary, and binary numbers to hexadecimal, is a
common task when dealing with the configuration register in Cisco routers.
Boolean logic is a binary logic that allows two numbers to be compared and a choice generated based on the
two numbers. Two networking operations that use Boolean logic are subnetting and wildcard masking.
The 32-bit binary addresses used on the Internet are referred to as Internet Protocol (IP) addresses.
Module 2 : Networking Fundamentals
Overview
Bandwidth decisions are among the most important considerations when a network is designed. This module
discusses the importance of bandwidth and explains how it is measured.
Layered models are used to describe network functions. This module covers the two most important models,
which are the Open System Interconnection (OSI) model and the Transmission Control Protocol/Internet
Protocol (TCP/IP) model. The module also presents the differences and similarities between the two models.
This module also includes a brief history of networking. Students will learn about network devices and
different types of physical and logical layouts. This module also defines and compares LANs, MANs, WANs,
SANs, and VPNs.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Explain the importance of bandwidth in networking
Use an analogy to explain bandwidth
Identify bps, kbps, Mbps, and Gbps as units of bandwidth
Explain the difference between bandwidth and throughput
Calculate data transfer rates
Explain why layered models are used to describe data communication


Explain the development of the OSI model
List the advantages of a layered approach
Identify each of the seven layers of the OSI model
Identify the four layers of the TCP/IP model
Describe the similarities and differences between the two models
Briefly outline the history of networking
Identify devices used in networking
Understand the role of protocols in networking
Define LAN, WAN, MAN, and SAN
Explain VPNs and their advantages
Describe the differences between intranets and extranets
2.1 Networking Terminology

2.1.1 Data networks


This page will discuss the evolution of data networks.
Data networks developed as a result of business applications that were written for microcomputers. The
microcomputers were not connected so there was no efficient way to share data among them. It was not
efficient or cost-effective for businesses to use floppy disks to share data. This method of sharing data, known as sneakernet, created multiple copies of the data. Each time a file was modified it would have to be shared again with all the other people who
needed that file. If two people modified the file and then tried to share it, one of the sets of changes would be
lost. Businesses needed a solution that would successfully address the following three problems:
How to avoid duplication of equipment and resources
How to communicate efficiently
How to set up and manage a network
Businesses realized that computer networking could increase productivity and save money. Networks were
added and expanded almost as rapidly as new network technologies and products were introduced. The
early development of networking was disorganized. However, a tremendous expansion occurred in the early
1980s.
In the mid-1980s, the network technologies that emerged were created with a variety of hardware and
software implementations. Each company that created network hardware and software used its own
company standards. These individual standards were developed because of competition with other
companies. As a result, many of the network technologies were incompatible with each other. It became
increasingly difficult for networks that used different specifications to communicate with each other. Network
equipment often had to be replaced to implement new technologies.
One early solution was the creation of local-area network (LAN) standards. LAN standards provided an
open set of guidelines that companies used to create network hardware and software. As a result, the
equipment from different companies became compatible. This allowed for stability in LAN implementations.
In a LAN system, each department of the company is a kind of electronic island. As the use of computers in
businesses grew, LANs became insufficient.
A new technology was necessary to share information efficiently and quickly within a company and between
businesses. The solution was the creation of metropolitan-area networks (MANs) and wide-area networks
(WANs). Because WANs could connect user networks over large geographic areas, it was possible for
businesses to communicate with each other across great distances. Figure summarizes the relative sizes
of LANs and WANs.
2.1.2 Network history
This page presents a simplified view of how the Internet evolved.
The history of computer networking is complex. It has involved many people from all over the world over
the past 35 years. Presented here is a simplified view of how the Internet evolved. The processes of
invention and commercialization are far more complicated, but it is helpful to look at the fundamental
development.
In the 1940s computers were large electromechanical devices that were prone to failure. In 1947 the
invention of a semiconductor transistor opened up many possibilities for making smaller, more reliable
computers. In the 1950s large institutions began to use mainframe computers, which were run by punched
card programs. In the late 1950s the integrated circuit that combined several, and now millions, of transistors
on one small piece of semiconductor was invented. In the 1960s mainframes with terminals and integrated
circuits were widely used.
In the late 1960s and 1970s smaller computers called minicomputers were created. However, these
minicomputers were still very large by modern standards. In 1977 the Apple Computer Company introduced the Apple II microcomputer. In 1981 IBM introduced its first PC. The user-friendly Macintosh, introduced in 1984, the open-architecture IBM PC, and the further micro-miniaturization of integrated circuits led to widespread use of personal computers in homes and businesses.

In the mid-1980s PC users began to use modems to share files with other computers. This was referred to
as point-to-point, or dial-up communication. This concept was expanded by the use of computers that were
the central point of communication in a dial-up connection. These computers were called bulletin boards.
Users would connect to the bulletin boards, leave and pick up messages, as well as upload and download
files. The drawback to this type of system was that there was very little direct communication and then only
with those who knew about the bulletin board. Another limitation was that the bulletin board computer
required one modem per connection. If five people connected simultaneously it would require five modems
connected to five separate phone lines. As the number of people who wanted to use the system grew, the
system was not able to handle the demand. For example, imagine if 500 people wanted to connect at the
same time.
From the 1960s to the 1990s the U.S. Department of Defense (DoD) developed large, reliable, wide-area
networks (WANs) for military and scientific reasons. This technology was different from the point-to-point
communication used in bulletin boards. It allowed multiple computers to be connected together through many
different paths. The network itself would determine how to move data from one computer to another. One
connection could be used to reach many computers at the same time. The WAN developed by the DoD
eventually became the Internet.
2.1.3 Networking devices
This page will introduce some important networking devices.
Equipment that connects directly to a network segment is referred to as a device. These devices are broken
up into two classifications. The first classification is end-user devices. End-user devices include computers,
printers, scanners, and other devices that provide services directly to the user. The second classification is
network devices. Network devices include all the devices that connect the end-user devices together to allow
them to communicate.
End-user devices that provide users with a connection to the network are also referred to as hosts. These
devices allow users to share, create, and obtain information. The host devices can exist without a network,
but without the network the host capabilities are greatly reduced. NICs are used to physically connect host
devices to the network media. They use this connection to send e-mails, print reports, scan pictures, or
access databases.
A NIC is a printed circuit board that fits into the expansion slot of a bus on a computer motherboard. It can
also be a peripheral device. NICs are sometimes called network adapters. Laptop or notebook computer
NICs are usually the size of a PCMCIA card. Each NIC is identified by a unique code called a Media
Access Control (MAC) address. This address is used to control data communication for the host on the
network. More about the MAC address will be covered later. As the name implies, the NIC controls host
access to the network.
There are no standardized symbols for end-user devices in the networking industry. They appear similar to
the real devices to allow for quick recognition.
Network devices are used to extend cable connections, concentrate connections, convert data formats, and manage data transfers. Examples of devices that perform these functions are repeaters, hubs, bridges, switches, and routers. All of the network devices mentioned here are covered in depth later in the course. For now, a brief overview of networking devices will be provided.
A repeater is a network device used to regenerate a signal. Repeaters regenerate analog or digital signals
that are distorted by transmission loss due to attenuation. A repeater does not make intelligent decisions about forwarding packets the way a router or a bridge does.
Hubs concentrate connections. In other words, they take a group of hosts and allow the network to see them
as a single unit. This is done passively, without any other effect on the data transmission. Active hubs
concentrate hosts and also regenerate signals.
Bridges convert network data formats and perform basic data transmission management. Bridges provide
connections between LANs. They also check data to determine if it should cross the bridge. This makes
each part of the network more efficient.
Workgroup switches add more intelligence to data transfer management. They can determine if data
should remain on a LAN and transfer data only to the connection that needs it. Another difference between a
bridge and switch is that a switch does not convert data transmission formats.
Routers have all the capabilities listed above. Routers can regenerate signals, concentrate multiple
connections, convert data transmission formats, and manage data transfers. They can also connect to a
WAN, which allows them to connect LANs that are separated by great distances. None of the other devices
can provide this type of connection.
The Interactive Media Activities will allow students to become more familiar with network devices.
2.1.4 Network topology
This page will introduce students to the most common physical and logical network topologies.
Network topology defines the structure of the network. One part of the topology definition is the physical
topology, which is the actual layout of the wire or media. The other part is the logical topology, which defines
how the hosts access the media to send data. The physical topologies that are commonly used are as
follows:
A bus topology uses a single backbone cable that is terminated at both ends. All the hosts connect
directly to this backbone.
A ring topology connects one host to the next and the last host to the first. This creates a physical
ring of cable.
A star topology connects all cables to a central point.
An extended star topology links individual stars together by connecting the hubs or switches.
A hierarchical topology is similar to an extended star. However, instead of linking the hubs or
switches together, the system is linked to a computer that controls the traffic on the topology.
A mesh topology is implemented to provide as much protection as possible from interruption of
service. For example, a nuclear power plant might use a mesh topology in the networked control
systems. As seen in the graphic, each host has its own connections to all other hosts. Although the
Internet has multiple paths to any one location, it does not adopt the full mesh topology.
The logical topology of a network determines how the hosts communicate across the medium. The two most
common types of logical topologies are broadcast and token passing.
The use of a broadcast topology indicates that each host sends its data to all other hosts on the network
medium. There is no order that the stations must follow to use the network. It is first come, first served.
Ethernet works this way as will be explained later in the course.
The second logical topology is token passing. In this type of topology, an electronic token is passed
sequentially to each host. When a host receives the token, that host can send data on the network. If the
host has no data to send, it passes the token to the next host and the process repeats itself. Two examples
of networks that use token passing are Token Ring and Fiber Distributed Data Interface (FDDI). A variation of
Token Ring and FDDI is Arcnet. Arcnet is token passing on a bus topology.
The diagram in Figure shows many different topologies connected by network devices. It shows a network
of moderate complexity that is typical of a school or a small business. The diagram includes many symbols
and networking concepts that will take time to learn.

2.1.5 Network protocols


This page will explain what network protocols are and why they are important.
Protocol suites are collections of protocols that enable network communication between hosts. A protocol is a
formal description of a set of rules and conventions that govern a particular aspect of how devices on a
network communicate. Protocols determine the format, timing, sequencing, and error control in data
communication. Without protocols, the computer cannot make or rebuild the stream of incoming bits from
another computer into the original format.
Protocols control all aspects of data communication, which include the following:
How the physical network is built
How computers connect to the network
How the data is formatted for transmission
How that data is sent
How to deal with errors


These network rules are created and maintained by many different organizations and committees. Included
in these groups are the Institute of Electrical and Electronic Engineers (IEEE), American National Standards
Institute (ANSI), Telecommunications Industry Association (TIA), Electronic Industries Alliance (EIA) and the
International Telecommunications Union (ITU), formerly known as the Comité Consultatif International Téléphonique et Télégraphique (CCITT).

2.1.6 Local-area networks (LANs)


This page will explain the features and benefits of LANs.
LANs consist of the following components:
Computers
Network interface cards
Peripheral devices
Networking media
Network devices
LANs allow businesses to locally share computer files and printers efficiently and make internal
communications possible. A good example of this technology is e-mail. LANs manage data, local
communications, and computing equipment.
Some common LAN technologies include the following:
Ethernet
Token Ring
FDDI


2.1.7 Wide-area networks (WANs)


This page will explain the functions of a WAN.
WANs interconnect LANs, which then provide access to computers or file servers in other locations. Because
WANs connect user networks over a large geographical area, they make it possible for businesses to
communicate across great distances. WANs allow computers, printers, and other devices on a LAN to be
shared with distant locations. WANs provide instant communications across large geographic areas.
Collaboration software provides access to real-time information and resources and allows meetings to be
held remotely. WANs have created a new class of workers called telecommuters. These people never have
to leave their homes to go to work.
WANs are designed to do the following:
Operate over a large and geographically separated area
Allow users to have real-time communication capabilities with other users
Provide full-time remote resources connected to local services
Provide e-mail, Internet, file transfer, and e-commerce services
Some common WAN technologies include the following:
Modems
Integrated Services Digital Network (ISDN)
Digital subscriber line (DSL)
Frame Relay
T1, E1, T3, and E3
Synchronous Optical Network (SONET)


2.1.8 Metropolitan-area networks (MANs)


This page will explain how MANs are used.
A MAN usually consists of two or more LANs in a common geographic area. For example, a bank with multiple branches may utilize a MAN. Typically, a service provider is used to connect two or more LAN sites using private communication lines or optical services. A MAN can also be created with wireless bridge technology that beams signals across public areas.

2.1.9 Storage-area networks (SANs)


This page will discuss the features of SANs.
A storage-area network (SAN) is a dedicated, high-performance network used to move data between servers
and storage resources. Because it is a separate, dedicated network, it avoids any traffic conflict between
clients and servers.

SAN technology allows high-speed server-to-storage, storage-to-storage, or server-to-server connectivity.


This method uses a separate network infrastructure that relieves any problems associated with existing
network connectivity.
SANs offer the following features:
Performance: SANs allow concurrent access of disk or tape arrays by two or more servers at high speeds. This provides enhanced system performance.
Availability: SANs have built-in disaster tolerance. Data can be duplicated on a SAN up to 10 km (6.2 miles) away.
Scalability: A SAN can use a variety of technologies. This allows easy relocation of backup data, operations, file migration, and data replication between systems.

2.1.10 Virtual private network (VPN)


This page will explain what a VPN is and how it is used.
A virtual private network (VPN) is a private network that is constructed within a public network infrastructure
such as the global Internet. Using VPN, a telecommuter can remotely access the network of the company
headquarters. Through the Internet, a secure tunnel can be built between the PC of the telecommuter and
a VPN router at the company headquarters.


2.1.11 Benefits of VPNs


This page will introduce the three main types of VPNs and explain how they work.
Cisco products support the latest in VPN technology. A VPN is a service that offers secure, reliable
connectivity over a shared public network infrastructure such as the Internet. VPNs maintain the same
security and management policies as a private network. The use of a VPN is the most cost-effective way to
establish a point-to-point connection between remote users and an enterprise network.
The following are the three main types of VPNs:
Access VPNs provide remote access for mobile and small office, home office (SOHO) users to an
Intranet or Extranet over a shared infrastructure. Access VPNs use analog, dialup, ISDN, DSL,
mobile IP, and cable technologies to securely connect mobile users, telecommuters, and branch
offices.
Intranet VPNs use dedicated connections to link regional and remote offices to an internal network
over a shared infrastructure. Intranet VPNs differ from Extranet VPNs in that they allow access only
to the employees of the enterprise.
Extranet VPNs use dedicated connections to link business partners to an internal network over a
shared infrastructure. Extranet VPNs differ from Intranet VPNs in that they allow access to users
outside the enterprise.

2.1.12 Intranets and extranets


This page will teach students about intranets and extranets.
One common configuration of a LAN is an intranet. Intranet Web servers differ from public Web servers in
that the public must have the proper permissions and passwords to access the intranet of an organization.
Intranets are designed to permit users who have access privileges to the internal LAN of the organization.
Within an intranet, Web servers are installed in the network. Browser technology is used as the common
front end to access information on servers such as financial, graphical, or text-based data.
Extranets refer to applications and services that are Intranet based, and use extended, secure access to
external users or enterprises. This access is usually accomplished through passwords, user IDs, and other
application-level security. An extranet is the extension of two or more intranet strategies with a secure
interaction between participant enterprises and their respective intranets.
This page concludes this lesson. The next lesson will discuss bandwidth. The first page will explain why
bandwidth is important.


2.2 Bandwidth

2.2.1 Importance of bandwidth


This page will describe the four most important characteristics of bandwidth.
Bandwidth is defined as the amount of information that can flow through a network connection in a given
period of time. It is important to understand the concept of bandwidth for the following reasons.
Bandwidth is finite. Regardless of the media used to build a network, there are limits on the network capacity
to carry information. Bandwidth is limited by the laws of physics and by the technologies used to place
information on the media. For example, the bandwidth of a conventional modem is limited to about 56 kbps
by both the physical properties of twisted-pair phone wires and by modem technology. DSL uses the same
twisted-pair phone wires. However, DSL provides much more bandwidth than conventional modems. So,
even the limits imposed by the laws of physics are sometimes difficult to define. Optical fiber has the physical
potential to provide virtually limitless bandwidth. Even so, the bandwidth of optical fiber cannot be fully
realized until technologies are developed to take full advantage of its potential.
Bandwidth is not free. It is possible to buy equipment for a LAN that will provide nearly unlimited bandwidth
over a long period of time. For WAN connections, it is usually necessary to buy bandwidth from a service
provider. In either case, individual users and businesses can save a lot of money if they understand
bandwidth and how the demand will change over time. A network manager needs to make the right decisions
about the kinds of equipment and services to buy.
Bandwidth is an important factor that is used to analyze network performance, design new networks, and
understand the Internet. A networking professional must understand the tremendous impact of bandwidth
and throughput on network performance and design. Information flows as a string of bits from computer to
computer throughout the world. These bits represent massive amounts of information flowing back and forth
across the globe in seconds or less.
The demand for bandwidth continues to grow. As soon as new network technologies and infrastructures are
built to provide greater bandwidth, new applications are created to take advantage of the greater capacity.
The delivery of rich media content such as streaming video and audio over a network requires tremendous
amounts of bandwidth. IP telephony systems are now commonly installed in place of traditional voice
systems, which further adds to the need for bandwidth. The successful networking professional must
anticipate the need for increased bandwidth and act accordingly.
2.2.2 The desktop
This page will present two analogies that may make it easier to visualize bandwidth in a network.
Bandwidth has been defined as the amount of information that can flow through a network in a given time.
The idea that information flows suggests two analogies that may make it easier to visualize bandwidth in a
network.
Bandwidth is like the width of a pipe. A network of pipes brings fresh water to homes and businesses and
carries waste water away. This water network is made up of pipes of different diameters. The main water
pipes of a city may be 2 meters in diameter, while the pipe to a kitchen faucet may have a diameter of only 2
cm. The width of the pipe determines the water-carrying capacity of the pipe. Therefore, the water is like the
data, and the pipe width is like the bandwidth. Many networking experts say that they need to put in bigger
pipes when they wish to add more information-carrying capacity.
Bandwidth is like the number of lanes on a highway. A network of roads serves every city or town. Large
highways with many traffic lanes are joined by smaller roads with fewer traffic lanes. These roads lead to
narrower roads that lead to the driveways of homes and businesses. When very few automobiles use the
highway system, each vehicle is able to move freely. When more traffic is added, each vehicle moves more
slowly. This is especially true on roads with fewer lanes. As more traffic enters the highway system, even
multi-lane highways become congested and slow. A data network is much like the highway system. The data
packets are comparable to automobiles, and the bandwidth is comparable to the number of lanes on the
highway. When a data network is viewed as a system of highways, it is easy to see how low bandwidth
connections can cause traffic to become congested all over the network.
2.2.3 Measurement
This page will explain how bandwidth is measured.
In digital systems, the basic unit of bandwidth is bits per second (bps). Bandwidth is the measure of how
many bits of information can flow from one place to another in a given amount of time. Although bandwidth
can be described in bps, a larger unit of measurement is generally used. Network bandwidth is typically
described as thousands of bits per second (kbps), millions of bits per second (Mbps), billions of bits per
second (Gbps), and trillions of bits per second (Tbps). Although the terms bandwidth and speed are often
used interchangeably, they are not exactly the same thing. One may say, for example, that a T3 connection
at 45 Mbps operates at a higher speed than a T1 connection at 1.544 Mbps. However, if only a small amount
of their data-carrying capacity is being used, each of these connection types will carry data at roughly the
same speed. For example, a small amount of water will flow at the same rate through a small pipe as
through a large pipe. Therefore, it is usually more accurate to say that a T3 connection has greater
bandwidth than a T1 connection. This is because the T3 connection is able to carry more information in the
same period of time, not because it has a higher speed.
2.2.4 Limitations
This page describes the limitations of bandwidth.
Bandwidth varies depending upon the type of media as well as the LAN and WAN technologies used. The
physics of the media account for some of the difference. Signals travel through twisted-pair copper wire,
coaxial cable, optical fiber, and air. The physical differences in the ways signals travel result in fundamental
limitations on the information-carrying capacity of a given medium. However, the actual bandwidth of a
network is determined by a combination of the physical media and the technologies chosen for signaling and
detecting network signals.
For example, current information about the physics of unshielded twisted-pair (UTP) copper cable puts the
theoretical bandwidth limit at over 1 Gbps. However, in actual practice, the bandwidth is determined by the
use of 10BASE-T, 100BASE-TX, or 1000BASE-TX Ethernet. The actual bandwidth is determined by the
signaling methods, NICs, and other network equipment that is chosen. Therefore, the bandwidth is not
determined solely by the limitations of the medium.
Figure shows some common networking media types along with their distance and bandwidth limitations.
Figure summarizes common WAN services and the bandwidth associated with each service.
2.2.5 Throughput
This page explains the concept of throughput.
Bandwidth is the measure of the amount of information that can move through the network in a given period
of time. Therefore, the amount of available bandwidth is a critical part of the specification of the network. A
typical LAN might be built to provide 100 Mbps to every desktop workstation, but this does not mean that
each user is actually able to move 100 megabits of data through the network for every second of use. This
would be true only under the most ideal circumstances.
Throughput refers to actual measured bandwidth, at a specific time of day, using specific Internet routes, and
while a specific set of data is transmitted on the network. Unfortunately, for many reasons, throughput is
often far less than the maximum possible digital bandwidth of the medium that is being used. The following
are some of the factors that determine throughput:
Internetworking devices
Type of data being transferred
Network topology
Number of users on the network
User computer
Server computer
Power conditions
The theoretical bandwidth of a network is an important consideration in network design, because the network
bandwidth will never be greater than the limits imposed by the chosen media and networking technologies.
However, it is just as important for a network designer and administrator to consider the factors that may
affect actual throughput. By measuring throughput on a regular basis, a network administrator will be aware
of changes in network performance and changes in the needs of network users. The network can then be
adjusted accordingly.
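One informal way to take the regular throughput measurements described above is to time how long a known amount of data takes to move and divide the bits transferred by the elapsed time. The following Python sketch illustrates the idea with a local file copy so that it stays self-contained; the file names are hypothetical placeholders, and a real measurement would time an actual network transfer or use a dedicated throughput-testing tool.

```python
# Minimal sketch: estimate throughput as bits transferred divided by elapsed time.
# The source and destination file names below are hypothetical placeholders.
import os
import shutil
import time

def measure_throughput(src: str, dst: str) -> float:
    """Return throughput in bits per second for one timed copy of src to dst."""
    size_bits = os.path.getsize(src) * 8       # amount of data moved, in bits
    start = time.perf_counter()
    shutil.copyfile(src, dst)                  # the transfer being timed
    elapsed = time.perf_counter() - start
    return size_bits / elapsed

if __name__ == "__main__":
    bps = measure_throughput("test_data.bin", "test_data_copy.bin")
    print(f"Measured throughput: {bps / 1_000_000:.2f} Mbps")
```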
2.2.6 Data transfer calculation
This page provides the formula for data transfer calculation.
Network designers and administrators are often called upon to make decisions regarding bandwidth. One
decision might be whether to increase the size of the WAN connection to accommodate a new database.
Another decision might be whether the current LAN backbone is of sufficient bandwidth for a streaming-video
training program. The answers to problems like these are not always easy to find, but one place to start is
with a simple data transfer calculation.
Using the formula transfer time = size of file / bandwidth (T=S/BW) allows a network administrator to
estimate several of the important components of network performance. If the typical file size for a given
application is known, dividing the file size by the network bandwidth yields an estimate of the fastest time that
the file can be transferred.
Two important points should be considered when doing this calculation.
The result is an estimate only, because the file size does not include any overhead added by
encapsulation.
The result is likely to be a best-case transfer time, because available bandwidth is almost never at
the theoretical maximum for the network type. A more accurate estimate can be attained if
throughput is substituted for bandwidth in the equation.
Although the data transfer calculation is quite simple, one must be careful to use the same units throughout
the equation. In other words, if the bandwidth is measured in megabits per second (Mbps), the file size must
be in megabits (Mb), not megabytes (MB). Since file sizes are typically given in megabytes, it may be
necessary to multiply the number of megabytes by eight to convert to megabits.
Try to answer the following question, using the formula T=S/BW. Be sure to convert units of measurement as
necessary.
Would it take less time to send the contents of a floppy disk full of data (1.44 MB) over an ISDN line, or to
send the contents of a ten GB hard drive full of data over an OC-48 line?
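One way to work through this question is to apply T=S/BW directly, after converting the file sizes to bits. The sketch below assumes typical line rates that are not stated in the text: 128 kbps for an ISDN BRI connection and roughly 2.488 Gbps for OC-48.

```python
# Apply T = S / BW to the floppy-over-ISDN versus hard-drive-over-OC-48 question.
# Assumed line rates (not given in the text): ISDN BRI = 128 kbps, OC-48 = 2.488 Gbps.

def transfer_time(size_bytes: float, bandwidth_bps: float) -> float:
    """Best-case transfer time in seconds: T = S / BW, with S converted to bits."""
    return (size_bytes * 8) / bandwidth_bps

floppy_bytes = 1.44 * 10**6      # 1.44 MB floppy disk (approximate)
drive_bytes = 10 * 10**9         # 10 GB hard drive (approximate)
isdn_bps = 128 * 10**3           # assumed ISDN BRI rate
oc48_bps = 2.488 * 10**9         # assumed OC-48 rate

print(f"Floppy over ISDN : {transfer_time(floppy_bytes, isdn_bps):6.1f} seconds")
print(f"Drive over OC-48 : {transfer_time(drive_bytes, oc48_bps):6.1f} seconds")
```

With these assumed rates, the floppy disk over ISDN takes about 90 seconds, while the 10 GB drive over OC-48 takes roughly 32 seconds, so the much larger transfer finishes first.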

2.2.7 Digital versus analog


This page will explain the differences between analog and digital signals.
Radio, television, and telephone transmissions have, until recently, been sent through the air and over wires
using electromagnetic waves. These waves are called analog because they have the same shapes as the
light and sound waves produced by the transmitters. As light and sound waves change size and shape, the
electrical signal that carries the transmission changes proportionately. In other words, the electromagnetic
waves are analogous to the light and sound waves.
Analog bandwidth is measured by how much of the electromagnetic spectrum is occupied by each signal.
The basic unit of analog bandwidth is hertz (Hz), or cycles per second. Typically, multiples of this basic unit
of analog bandwidth are used, just as with digital bandwidth. Units of measurement that are commonly seen
are kilohertz (kHz), megahertz (MHz), and gigahertz (GHz). These are the units used to describe the
frequency of cordless telephones, which usually operate at either 900 MHz or 2.4 GHz. These are also the
units used to describe the frequencies of 802.11a and 802.11b wireless networks, which operate at 5 GHz
and 2.4 GHz.
While analog signals are capable of carrying a variety of information, they have some significant
disadvantages in comparison to digital transmissions. An analog video signal that requires a wide frequency
range for transmission cannot be squeezed into a smaller band. Therefore, if the necessary analog
bandwidth is not available, the signal cannot be sent.
In digital signaling all information is sent as bits, regardless of the kind of information it is. Voice, video, and
data all become streams of bits when they are prepared for transmission over digital media. This type of
transmission gives digital bandwidth an important advantage over analog bandwidth. Unlimited amounts of
information can be sent over the smallest or lowest bandwidth digital channel. Regardless of how long it
takes for the digital information to arrive at its destination and be reassembled, it can be viewed, listened to,
read, or processed in its original form.
It is important to understand the differences and similarities between digital and analog bandwidth. Both
types of bandwidth are regularly encountered in the field of information technology. However, because this
course is concerned primarily with digital networking, the term bandwidth will refer to digital bandwidth.
This page concludes this lesson. The next lesson will discuss networking models. The first page will discuss
the concept of layers.
2.3 Networking Models
2.3.1 Using layers to analyze problems in a flow of materials
This page explains how layers are used to describe communications between computers.
The concept of layers is used to describe communication from one computer to another. Figure shows a
set of questions that are related to flow, which is defined as the motion of physical or logical objects through a system. These questions show how the concept of layers helps describe the details of the flow
process. This process could be any kind of flow, from the flow of traffic on a highway system to the flow of
data through a network. Figure shows several examples of flow and ways that the flow process can be
broken down into details or layers.
A conversation between two people provides a good opportunity to use a layered approach to analyze
information flow. In a conversation, each person wishing to communicate begins by creating an idea. Then a
decision is made on how to properly communicate the idea. For example, a person could decide whether to speak, sing, or shout, and what language to use. Finally, the idea is delivered. For example, the person creates the
sound which carries the message.
This process can be broken into separate layers that may be applied to all conversations. The top layer is the
idea that will be communicated. The middle layer is the decision on how the idea is to be communicated. The
bottom layer is the creation of sound to carry the communication.
The same method of layering explains how a computer network distributes information from a source to a
destination. When computers send information through a network, all communications originate at a source
then travel to a destination.
The information that travels on a network is generally referred to as data or a packet. A packet is a logically
grouped unit of information that moves between computer systems. As the data passes between layers,
each layer adds additional information that enables effective communication with the corresponding layer on
the other computer.
The OSI and TCP/IP models have layers that explain how data is communicated from one computer to
another. The models differ in the number and function of the layers. However, each model can be used to
help describe and provide details about the flow of information from a source to a destination.
2.3.2 Using layers to describe data communication
This page describes the importance of layers in data communication.
In order for data packets to travel from a source to a destination on a network, it is important that all the
devices on the network speak the same language or protocol. A protocol is a set of rules that make
communication on a network more efficient. For example, while flying an airplane, pilots obey very specific
rules for communication with other airplanes and with air traffic control.
A data communications protocol is a set of rules or an agreement that determines the format and
transmission of data.
For example, Layer 4 on the source computer communicates with Layer 4 on the destination computer. The rules and
conventions used for this layer are known as Layer 4 protocols. It is important to remember that protocols
prepare data in a linear fashion. A protocol in one layer performs a certain set of operations on data as it
prepares the data to be sent over the network. The data is then passed to the next layer where another
protocol performs a different set of operations.
Once the packet has been sent to the destination, the protocols undo the construction of the packet that was
done on the source side. This is done in reverse order. The protocols for each layer on the destination return
the information to its original form, so the application can properly read the data.
2.3.3 OSI model
This page discusses how and why the OSI model was developed.
The early development of networks was disorganized in many ways. The early 1980s saw tremendous
increases in the number and size of networks. As companies realized the advantages of using networking
technology, networks were added or expanded almost as rapidly as new network technologies were
introduced.
By the mid-1980s, these companies began to experience problems from the rapid expansion. Just as people
who do not speak the same language have difficulty communicating with each other, it was difficult for
networks that used different specifications and implementations to exchange information. The same problem
occurred with the companies that developed private or proprietary networking technologies. Proprietary
means that one or a small group of companies controls all usage of the technology. Networking technologies
strictly following proprietary rules could not communicate with technologies that followed different proprietary
rules.
To address the problem of network incompatibility, the International Organization for Standardization (ISO)
researched networking models such as Digital Equipment Corporation's DECnet, Systems Network Architecture (SNA), and TCP/IP in order to find a generally applicable set of rules for all networks. Using this
research, the ISO created a network model that helps vendors create networks that are compatible with
other networks.
The Open System Interconnection (OSI) reference model released in 1984 was the descriptive network
model that the ISO created. It provided vendors with a set of standards that ensured greater compatibility
and interoperability among various network technologies produced by companies around the world.
The OSI reference model has become the primary model for network communications. Although there are
other models in existence, most network vendors relate their products to the OSI reference model. This is
especially true when they want to educate users on the use of their products. It is considered the best tool
available for teaching people about sending and receiving data on a network.
In the Interactive Media Activity, students will identify the benefits of the OSI model.

2.3.4 OSI layers


This page discusses the seven layers of the OSI model.
The OSI reference model is a framework that is used to understand how information travels throughout a
network. The OSI reference model explains how packets travel through the various layers to another device
on a network, even if the sender and destination have different types of network media.
In the OSI reference model, there are seven numbered layers, each of which illustrates a particular network
function. Dividing the network into seven layers provides the following advantages:
It breaks network communication into smaller, more manageable parts.
It standardizes network components to allow multiple vendor development and support.
It allows different types of network hardware and software to communicate with each other.
It prevents changes in one layer from affecting other layers.
It divides network communication into smaller parts to make it easier to learn and understand.
In the following Interactive Media Activity, the student will identify the seven layers of the OSI model.
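For quick reference, the seven layers and their layer numbers can be written out as a small enumeration. This is only a study aid that mirrors the model described above, not part of any protocol implementation.

```python
# The seven OSI layers as a simple enumeration, numbered as in the reference model.
from enum import IntEnum

class OSILayer(IntEnum):
    PHYSICAL = 1
    DATA_LINK = 2
    NETWORK = 3
    TRANSPORT = 4
    SESSION = 5
    PRESENTATION = 6
    APPLICATION = 7

# Print the layers from top (Layer 7) to bottom (Layer 1).
for layer in sorted(OSILayer, reverse=True):
    print(f"Layer {layer.value}: {layer.name.replace('_', ' ').title()}")
```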
2.3.5 Peer-to-peer communications


This page explains the concept of peer-to-peer communications.
In order for data to travel from the source to the destination, each layer of the OSI model at the source must
communicate with its peer layer at the destination. This form of communication is referred to as peer-to-peer.
During this process, the protocols of each layer exchange information, called protocol data units (PDUs).
Each layer on the source computer uses a layer-specific PDU to communicate with its peer layer on the destination computer, as illustrated in Figure .
Data packets on a network originate at a source and then travel to a destination. Each layer depends on the
service function of the OSI layer below it. To provide this service, the lower layer uses encapsulation to put
the PDU from the upper layer into its data field. Then it adds whatever headers and trailers the layer needs
to perform its function. Next, as the data moves down through the layers of the OSI model, additional
headers and trailers are added. After Layers 7, 6, and 5 have added their information, Layer 4 adds more
information. This grouping of data, the Layer 4 PDU, is called a segment.
The network layer provides a service to the transport layer, and the transport layer presents data to the
internetwork subsystem. The network layer has the task of moving the data through the internetwork. It
accomplishes this task by encapsulating the data and attaching a header creating a packet (the Layer 3
PDU). The header contains information required to complete the transfer, such as source and destination
logical addresses.
The data link layer provides a service to the network layer. It encapsulates the network layer information in a
frame (the Layer 2 PDU). The frame header contains information (for example, physical addresses) required
to complete the data link functions. The data link layer provides a service to the network layer by
encapsulating the network layer information in a frame.
The physical layer also provides a service to the data link layer. The physical layer encodes the data link
frame into a pattern of 1s and 0s (bits) for transmission on the medium (usually a wire) at Layer 1.
2.3.6 TCP/IP model
This page discusses the TCP/IP reference model, which is the historical and technical standard of the
Internet.
The U.S. Department of Defense (DoD) created the TCP/IP reference model, because it wanted to design a
network that could survive any conditions, including a nuclear war. In a world connected by different types of
communication media such as copper wires, microwaves, optical fibers, and satellite links, the DoD wanted packets to get through every time, under any conditions. This very difficult design problem brought
about the creation of the TCP/IP model.
Unlike the proprietary networking technologies mentioned earlier, TCP/IP was developed as an open
standard. This meant that anyone was free to use TCP/IP. This helped speed up the development of TCP/IP
as a standard.
The TCP/IP model has the following four layers:
Application layer
Transport layer
Internet layer
Network access layer
Although some of the layers in the TCP/IP model have the same name as layers in the OSI model, the layers
of the two models do not correspond exactly. Most notably, the application layer has different functions in
each model.
The designers of TCP/IP felt that the application layer should include the OSI session and presentation layer
details. They created an application layer that handles issues of representation, encoding, and dialog control.
The transport layer deals with the quality of service issues of reliability, flow control, and error correction. One
of its protocols, the Transmission Control Protocol (TCP), provides excellent and flexible ways to create
reliable, well-flowing, low-error network communications.
TCP is a connection-oriented protocol. It maintains a dialogue between source and destination while
packaging application layer information into units called segments. Connection-oriented does not mean that
a circuit exists between the communicating computers. It does mean that Layer 4 segments travel back and
forth between two hosts to acknowledge the connection exists logically for some period.
The purpose of the Internet layer is to divide TCP segments into packets and send them from any network.
The packets arrive at the destination network independent of the path they took to get there. The specific
protocol that governs this layer is called the Internet Protocol (IP). Best path determination and packet
switching occur at this layer.
The relationship between IP and TCP is an important one. IP can be thought to point the way for the packets,
while TCP provides a reliable transport.
The name of the network access layer is very broad and somewhat confusing. It is also known as the host-to-network layer. This layer is concerned with all of the components, both physical and logical, that are
required to make a physical link. It includes the networking technology details, including all the details in the
OSI physical and data link layers.
Figure illustrates some of the common protocols specified by the TCP/IP reference model layers. Some of
the most commonly used application layer protocols include the following:
File Transfer Protocol (FTP)
Hypertext Transfer Protocol (HTTP)
Simple Mail Transfer Protocol (SMTP)
Domain Name System (DNS)
Trivial File Transfer Protocol (TFTP)
The common transport layer protocols include:
Transmission Control Protocol (TCP)
User Datagram Protocol (UDP)
The primary protocol of the Internet layer is:
Internet Protocol (IP)
The network access layer refers to any particular technology used on a specific network.
Regardless of which network application services are provided and which transport protocol is used, there is
only one Internet protocol, IP. This is a deliberate design decision. IP serves as a universal protocol that
allows any computer anywhere to communicate at any time.
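A short sketch can show these layers from an application's point of view: a line of HTTP text (application layer) is handed to a TCP socket (transport layer), and the operating system's IP implementation (Internet layer) plus the NIC and its driver (network access layer) carry it the rest of the way. The host name below is only an example, and error handling is omitted.

```python
# Application data (HTTP) sent over TCP, which in turn rides on IP.
# Everything below the socket call is handled by the TCP/IP stack and the NIC.
import socket

HOST = "example.com"   # example destination host
request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"

with socket.create_connection((HOST, 80)) as tcp_socket:    # TCP connection over IP
    tcp_socket.sendall(request.encode("ascii"))              # application-layer data in
    reply = tcp_socket.recv(1024)                             # first segment of the reply

print(reply.decode("ascii", errors="replace").splitlines()[0])   # e.g. "HTTP/1.1 200 OK"
```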
A comparison of the OSI model and the TCP/IP model will point out some similarities and differences.
Similarities include:
Both have layers.
Both have application layers, though they include very different services.
Both have comparable transport and network layers.
Both models need to be known by networking professionals.
Both assume packets are switched. This means that individual packets may take different paths to
reach the same destination. This is contrasted with circuit-switched networks where all the packets
take the same path.
Differences include:
TCP/IP combines the presentation and session layer issues into its application layer.
TCP/IP combines the OSI data link and physical layers into the network access layer.
TCP/IP appears simpler because it has fewer layers.
TCP/IP protocols are the standards around which the Internet developed, so the TCP/IP model gains
credibility just because of its protocols. In contrast, networks are not usually built on OSI protocols, even though the OSI model is used as a guide.
Although TCP/IP protocols are the standards with which the Internet has grown, this curriculum will use the
OSI model for the following reasons:
It is a generic, protocol-independent standard.
It has more details, which make it more helpful for teaching and learning.
It has more details, which can be helpful when troubleshooting.
Networking professionals differ in their opinions on which model to use. Due to the nature of the industry it is
necessary to become familiar with both. Both the OSI and TCP/IP models will be referred to throughout the
curriculum. The focus will be on the following:
TCP as an OSI Layer 4 protocol
IP as an OSI Layer 3 protocol
Ethernet as a Layer 2 and Layer 1 technology
Remember that there is a difference between a model and an actual protocol that is used in networking. The
OSI model will be used to describe TCP/IP protocols.
Students will identify the differences between the OSI model and the TCP/IP model in the Lab Activity.
In the Interactive Media Activity, students will identify the layers of the TCP/IP reference model.

2.3.7 Detailed encapsulation process


This page describes the process of encapsulation.
All communications on a network originate at a source, and are sent to a destination. The information sent on
a network is referred to as data or data packets. If one computer (host A) wants to send data to another
computer (host B), the data must first be packaged through a process called encapsulation.
Encapsulation wraps data with the necessary protocol information before network transit. Therefore, as the
data packet moves down through the layers of the OSI model, it receives headers, trailers, and other
information.
To see how encapsulation occurs, examine the manner in which data travels through the layers as illustrated
in Figure . Once the data is sent from the source, it travels through the application layer down through the
other layers. The packaging and flow of the data that is exchanged goes through changes as the layers
perform their services for end users. As illustrated in Figure , networks must perform the following five
conversion steps in order to encapsulate data (see the sketch that follows the list):
1. Build the data: As a user sends an e-mail message, its alphanumeric characters are converted to data that can travel across the internetwork.
2. Package the data for end-to-end transport: The data is packaged for internetwork transport. By using segments, the transport function ensures that the message hosts at both ends of the e-mail system can reliably communicate.
3. Add the network IP address to the header: The data is put into a packet or datagram that contains a packet header with source and destination logical addresses. These addresses help network devices send the packets across the network along a chosen path.
4. Add the data link layer header and trailer: Each network device must put the packet into a frame. The frame allows connection to the next directly-connected network device on the link. Each device in the chosen network path requires framing in order for it to connect to the next device.
5. Convert to bits for transmission: The frame must be converted into a pattern of 1s and 0s (bits) for transmission on the medium. A clocking function enables the devices to distinguish these bits as they travel across the medium. The medium on the physical internetwork can vary along the path used. For example, the e-mail message can originate on a LAN, cross a campus backbone, and go out a WAN link until it reaches its destination on another remote LAN.
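The toy sketch below walks through the same five steps using plain Python dictionaries. The port numbers, IP addresses, and MAC addresses are invented placeholders, and real protocols define binary header formats, checksums, and much more; the point is only to show data being wrapped layer by layer and finally turned into bits.

```python
# A toy illustration of the five encapsulation steps, not a real protocol stack.

def encapsulate(message: str) -> str:
    # Step 1: build the data (the user's alphanumeric characters).
    data = message

    # Step 2: package the data for end-to-end transport (Layer 4 segment).
    segment = {"src_port": 1025, "dst_port": 25, "payload": data}

    # Step 3: add the network IP addresses to the header (Layer 3 packet).
    packet = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "payload": segment}

    # Step 4: add the data link layer header and trailer (Layer 2 frame).
    frame = {"src_mac": "AA-AA-AA-AA-AA-AA", "dst_mac": "BB-BB-BB-BB-BB-BB",
             "payload": packet, "trailer": "FCS"}

    # Step 5: convert the frame to a pattern of 1s and 0s for the medium.
    return "".join(f"{byte:08b}" for byte in repr(frame).encode("ascii"))

bits_on_the_wire = encapsulate("Hello, this is an e-mail message.")
print(bits_on_the_wire[:64], "...")   # first 64 bits of the transmitted pattern
```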
The Lab Activity will provide an in-depth review of the OSI model.
The Interactive Media Activity requires students to complete an encapsulation process flowchart.
This page concludes this lesson. The next page will summarize the main points from the module.

Summary
This page summarizes the topics discussed in this module.
Computer networks developed in response to business and government computing needs. Applying
standards to network functions provided a set of guidelines for creating network hardware and software and
provided compatibility among equipment from different companies. Information could move within a company
and from one business to another.
Network devices, such as repeaters, hubs, bridges, switches and routers connect host devices together to
allow them to communicate. Protocols provide a set of rules for communication.
The physical topology of a network is the actual layout of the wire or media. The logical topology defines how
host devices access the media. The physical topologies that are commonly used are bus, ring, star,
extended star, hierarchical, and mesh. The two most common types of logical topologies are broadcast and
token passing.
A local-area network (LAN) is designed to operate within a limited geographical area. LANs allow multi-access to high-bandwidth media, control the network privately under local administration, provide full-time
connectivity to local services and connect physically adjacent devices.
A wide-area network (WAN) is designed to operate over a large geographical area. WANs allow access over
serial interfaces operating at lower speeds, provide full-time and part-time connectivity and connect devices
separated over wide areas.
A metropolitan-area network (MAN) is a network that spans a metropolitan area such as a city or suburban
area. A MAN usually consists of two or more LANs in a common geographic area.
A storage-area network (SAN) is a dedicated, high-performance network used to move data between servers
and storage resources. A SAN provides enhanced system performance, is scalable, and has disaster
tolerance built in.
A virtual private network (VPN) is a private network that is constructed within a public network infrastructure.
Three main types of VPNs are access, Intranet, and Extranet VPNs. Access VPNs provide mobile workers or
small office/home office (SOHO) users with remote access to an Intranet or Extranet. Intranets are only
available to users who have access privileges to the internal network of an organization. Extranets are
designed to deliver applications and services that are Intranet based to external users or enterprises.
The amount of information that can flow through a network connection in a given period of time is referred to
as bandwidth. Network bandwidth is typically measured in thousands of bits per second (kbps), millions of
bits per second (Mbps), billions of bits per second (Gbps) and trillions of bits per second (Tbps). The
theoretical bandwidth of a network is an important consideration in network design. If the theoretical
bandwidth of a network connection is known, the formula T=S/BW (transfer time = size of file / bandwidth)
can be used to calculate potential data transfer time. However, the actual bandwidth, referred to as
throughput, is affected by multiple factors such as network devices and topology being used, type of data,
number of users, hardware and power conditions.
Data can be encoded on analog or digital signals. Analog bandwidth is a measure of how much of the
electromagnetic spectrum is occupied by each signal. For instance an analog video signal that requires a
wide frequency range for transmission cannot be squeezed into a smaller band. If the necessary analog
bandwidth is not available the signal cannot be sent. In digital signaling all information is sent as bits,
regardless of the kind of information it is. Unlimited amounts of information can be sent over the smallest
digital bandwidth channel.
The concept of layers is used to describe communication from one computer to another. Dividing the network
into layers provides the following advantages:
Reduces complexity
Standardizes interfaces
Facilitates modular engineering
Ensures interoperability
Accelerates evolution
Simplifies teaching and learning
Two such layered models are the Open System Interconnection (OSI) and the TCP/IP networking models. In
the OSI reference model, there are seven numbered layers, each of which illustrates a particular network
function: application, presentation, session, transport, network, data link, and physical. The TCP/IP model
has the following four layers: application, transport, Internet, and network access.
Although some of the layers in the TCP/IP model have the same name as layers in the OSI model, the layers
of the two models do not correspond exactly. The TCP/IP application layer is equivalent to the OSI
application, presentation, and session layers. The TCP/IP model combines the OSI data link and physical
layers into the network access layer.
No matter which model is applied, network layers perform the following five conversion steps in order to
encapsulate and transmit data:
1. Images and text are converted to data.
2. The data is packaged into segments.
3. The data segment is encapsulated in a packet with the source and destination addresses.
4. The packet is encapsulated in a frame with the MAC address of the next directly connected device.
5. The frame is converted to a pattern of ones and zeros (bits) for transmission on the media.
Overview
Copper cable is used in almost every LAN. Many different types of copper cable are available. Each type has
advantages and disadvantages. Proper selection of cabling is key to efficient network operation. Since
copper uses electrical currents to transmit information, it is important to understand some basics of
electricity.
Optical fiber is the most frequently used medium for the longer, high bandwidth, point-to-point transmissions
required on LAN backbones and on WANs. Optical media uses light to transmit data through thin glass or
plastic fiber. Electrical signals cause a fiber-optic transmitter to generate the light signals sent down the fiber.
The receiving host receives the light signals and converts them to electrical signals at the far end of the fiber.
However, there is no electricity in the fiber-optic cable. In fact, the glass used in fiber-optic cable is a very
good electrical insulator.
Physical connectivity allows users to share printers, servers, and software, which can increase productivity.
Traditional networked systems require the workstations to remain stationary and permit moves only within
the limits of the media and office area.
The introduction of wireless technology removes these restraints and brings true portability to computer
networks. Currently, wireless technology does not provide the high-speed transfers, security, or uptime
reliability of cabled networks. However, the flexibility of wireless has justified the trade-off.
Administrators often consider wireless when they install or upgrade a network. A simple wireless network
could be working just a few minutes after the workstations are turned on. Connectivity to the Internet is provided through a wired connection, a router, a cable or DSL modem, and a wireless access point that acts as a
hub for the wireless nodes. In a residential or small office environment these devices may be combined into
a single unit.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Discuss the electrical properties of matter
Define voltage, resistance, impedance, current, and circuits
Describe the specifications and performances of different types of cable
Describe coaxial cable and its advantages and disadvantages compared to other types of cable
Describe STP cable and its uses
Describe UTP cable and its uses
Discuss the characteristics of straight-through, crossover, and rollover cables and where each is
used
Explain the basics of fiber-optic cable
Describe how fiber-optic cables can carry light signals over long distances
Describe multimode and single-mode fiber
Describe how fiber is installed
Describe the type of connectors and equipment used with fiber-optic cable
Explain how fiber is tested to ensure that it will function properly
Discuss safety issues related to fiber optics
3.1 Copper Media
3.1.1 Atoms and electrons
This lesson discusses the copper media used in networking. Since all matter is composed of atoms, this
page begins with a detailed explanation of atoms and electrons.
All matter is composed of atoms. The Periodic Table of Elements lists all known types of atoms and their
properties. The atom is composed of three basic particles:
Electrons: Particles with a negative charge that orbit the nucleus
Protons: Particles with a positive charge
Neutrons: Neutral particles with no charge
The protons and neutrons are combined together in a small group called a nucleus.
To better understand the electrical properties of different elements, locate helium (He) on the periodic table.
Helium has an atomic number of 2, which means that helium has two protons and two electrons. It has an
atomic weight of 4. If the atomic number of 2 is subtracted from the atomic weight of 4, the result shows that
helium also has two neutrons.
The Danish physicist, Niels Bohr, developed a simplified model to illustrate the atom. This illustration
shows the model for a helium atom. If the protons and neutrons of an atom were the size of adult soccer
balls in the middle of a soccer field, the only thing smaller than the balls would be the electrons. The
electrons would be the size of cherries that would be in orbit near the outer-most seats of the stadium. The
overall volume of this atom would be about the size of the stadium. The nucleus would be the size of the
soccer balls.
Coulomb's Electric Force Law states that opposite charges react to each other with a force that causes them
to be attracted to each other. Like charges react to each other with a force that causes them to repel each
other. In the case of opposite and like charges, the force increases as the charges move closer to each
other. The force is inversely proportional to the square of the separation distance. When particles get
extremely close together, nuclear force overrides the repulsive electrical force and keeps the nucleus
together. That is why a nucleus does not fly apart.
Examine the Bohr model of the helium atom. If Coulomb's law is true and the Bohr model describes helium
atoms as stable, then there must be other laws of nature at work. Review both theories to see how they
conflict with each other:
Coulomb's law: Opposite charges attract and like charges repel.
The Bohr model: Protons have positive charges and electrons have negative charges. There is more than one proton in the nucleus.
Electrons stay in orbit, even though the protons attract the electrons. The electrons have just enough velocity
to keep orbiting and not be pulled into the nucleus, just like the moon around the Earth.
Protons do not fly apart from each other because of a nuclear force that is associated with neutrons. The
nuclear force is an incredibly strong force that acts as a kind of glue to hold the protons together.
Electrons are bound to their orbit around the nucleus by a weaker force than nuclear force. Electrons in
certain atoms, such as metals, can be pulled free from the atom and made to flow. This sea of electrons,
loosely bound to the atoms, is what makes electricity possible. Electricity is a free flow of electrons.
Loosened electrons that do not move and have a negative charge are called static electricity. If these static
electrons have an opportunity to jump to a conductor, this can lead to electrostatic discharge (ESD).
Conductors will be discussed later in this module.
ESD is usually harmless to people. However, ESD can create serious problems for sensitive electronic
equipment. A static discharge can randomly damage computer chips, data, or both. The logical circuitry of
computer chips is extremely sensitive to ESD. Students should take safety precautions before they work
inside computers, routers, and similar devices.
Atoms, or groups of atoms called molecules, can be referred to as materials. Materials are classified into
three groups based on how easily free electrons flow through them.
The basis for all electronic devices is the knowledge of how insulators, conductors, and semiconductors
control the flow of electrons and work together.
The Lab Activity reviews the proper way to handle a multimeter.
3.1.2 Voltage
This page discusses voltage.
Voltage is sometimes referred to as electromotive force (EMF). EMF is related to an electrical force, or
pressure, that occurs when electrons and protons are separated. The force that is created pushes toward the
opposite charge and away from the like charge. This process occurs in a battery, where chemical action
causes electrons to be freed from the negative terminal of the battery. The electrons then travel to the
opposite, or positive, terminal through an external circuit. The electrons do not travel through the battery.
Remember that the flow of electricity is really the flow of electrons. Voltage can also be created in three other
ways. The first is by friction, or static electricity. The second way is by magnetism, or an electric generator.
The last way that voltage can be created is by light, or a solar cell.
Voltage is represented by the letter V, and sometimes by the letter E, for electromotive force. The unit of
measurement for voltage is volt (V). A volt is defined as the amount of work, per unit charge, that is needed
to separate the charges.
In the Lab Activity, students will measure voltage.
3.1.3 Resistance and impedance
This page explains the concepts of resistance and impedance.
The materials through which current flows vary in their resistance to the movement of the electrons. The
materials that offer very little or no resistance are called conductors. Those materials that do not allow the
current to flow, or severely restrict its flow, are called insulators. The amount of resistance depends on the
chemical composition of the materials.
All materials that conduct electricity have a measure of resistance to the flow of electrons through them.
These materials also have other effects called capacitance and inductance that relate to the flow of
electrons. Impedance includes resistance, capacitance, and inductance and is similar to the concept of
resistance.
Attenuation is important in relation to networks. Attenuation refers to the resistance to the flow of electrons
and explains why a signal becomes degraded as it travels along the conduit.
The letter R represents resistance. The unit of measurement for resistance is the ohm (Ω). The symbol Ω comes from the Greek letter omega.
Electrical insulators are materials that are most resistant to the flow of electrons through them. Examples of
electrical insulators include plastic, glass, air, dry wood, paper, rubber, and helium gas. These materials have
very stable chemical structures and the electrons are tightly bound within the atoms.
Electrical conductors are materials that allow electrons to flow through them easily. The outermost electrons
are bound very loosely to the nucleus and are easily freed. At room temperature, these materials have a
large number of free electrons that can provide conduction. The introduction of voltage causes the free
electrons to move, which results in a current flow.
The periodic table categorizes some groups of atoms in the form of columns. The atoms in each column
belong to particular chemical families. Although they may have different numbers of protons, neutrons, and
electrons, their outermost electrons have similar orbits and interactions with other atoms and molecules. The
best conductors are metals such as copper (Cu), silver (Ag), and gold (Au). These metals have electrons that
are easily freed. Other conductors include solder, which is a mixture of lead (Pb) and tin (Sn), and water with
ions. An ion is an atom that has a different number of electrons than the number of protons in the nucleus.
The human body is made of approximately 70 percent water with ions, which means that it is a conductor.
Semiconductors are materials that allow the amount of electricity they conduct to be precisely controlled.
These materials are listed together in one column of the periodic chart. Examples include carbon (C),
germanium (Ge), and the alloy gallium arsenide (GaAs). Silicon (Si) is the most important semiconductor
because it makes the best microscopic-sized electronic circuits.
Silicon is very common and can be found in sand, glass, and many types of rocks. The region around San
Jose, California is known as Silicon Valley because the computer industry, which depends on silicon
microchips, started in that area.
The Lab Activity demonstrates how to measure resistance and continuity.
The Interactive Media Activity identifies the resistance and impedance characteristics of different types of
material.
3.1.4 Current
This page provides a detailed explanation of current.
Electrical current is the flow of charges created when electrons move. In electrical circuits, the current is
caused by a flow of free electrons. When voltage is applied and there is a path for the current, electrons
move from the negative terminal along the path to the positive terminal. The negative terminal repels the
electrons and the positive terminal attracts the electrons. The letter I represents current. The unit of
measurement for current is the ampere (A). An ampere is defined as the number of charges per second that
pass by a point along a path.
Current can be thought of as the amount or volume of electron traffic that flows. Voltage can be thought of as
the speed of the electron traffic. The product of voltage and amperage equals wattage, or power. Electrical
devices such as light bulbs, motors, and computer power supplies are rated in terms of watts. Wattage
indicates how much power a device consumes or produces.
It is the current or amperage in an electrical circuit that really does the work. For example, static electricity
has such a high voltage that it can jump a gap of an inch or more. However, it has very low amperage and as
a result can create a shock but not permanent injury. The starter motor in an automobile operates at a
relatively low 12 volts but requires very high amperage to generate enough energy to turn over the engine.
Lightning has very high voltage and high amperage and can cause severe damage or injury.
3.1.5 Circuits
This page explains circuits.
Current flows in closed loops called circuits. These circuits must be made of conductive materials and must
have sources of voltage. Voltage causes current to flow. Resistance and impedance oppose it. Current
consists of electrons that flow away from negative terminals and toward positive terminals. These facts allow
people to control the flow of current.
Electricity will naturally flow to the earth if there is a path. Current also flows along the path of least
resistance. If a human body provides the path of least resistance, the current will flow through it. When an
electric appliance has a plug with three prongs, one of the prongs acts as the ground, or 0 volts. The ground
provides a conductive path for the electrons to flow to the earth. The resistance of the body would be greater
than the resistance of the ground.
Ground typically means the 0-volts level in reference to electrical measurements. Voltage is created by the
separation of charges, which means that voltage measurements must be made between two points.
A water analogy can help explain the concept of electricity. The higher the water and the greater the
pressure, the more the water will flow. The water current also depends on the size of the space it must flow
through. Similarly, the higher the voltage and the greater the electrical pressure, the more current will be
produced. The electric current then encounters resistance that, like the water tap, reduces the flow. If the
electric current is in an AC circuit, then the amount of current will depend on how much impedance is
present. If the electric current is in a DC circuit, then the amount of current will depend on how much
resistance is present. The pump is like a battery. It provides pressure to keep the flow moving.
The relationship among voltage, resistance, and current is voltage (V) equals current (I) multiplied by
resistance (R). In other words, V=I*R. This is Ohm's law, named after the scientist who explored these
issues.
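As a small worked example, the sketch below applies V=I*R, along with the power relationship mentioned earlier in this lesson (wattage as the product of voltage and current), to an illustrative 12-volt circuit; the numeric values are chosen for illustration and are not taken from the text.

```python
# Ohm's law (V = I * R) and power (P = V * I) for an illustrative 12 V circuit.
voltage = 12.0       # volts (V), e.g. a car battery
resistance = 6.0     # ohms (R), an illustrative load

current = voltage / resistance   # I = V / R, rearranged from V = I * R
power = voltage * current        # P = V * I, in watts

print(f"Current: {current:.1f} A")   # 2.0 A
print(f"Power:   {power:.1f} W")     # 24.0 W
```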
Two ways in which current flows are alternating current (AC) and direct current (DC). AC voltages change
their polarity, or direction, over time. AC flows in one direction, then reverses its direction and flows in the
other direction, and then repeats the process. AC voltage is positive at one terminal, and negative at the
other. Then the AC voltage reverses its polarity, so that the positive terminal becomes negative, and the
negative terminal becomes positive. This process repeats itself continuously.
DC always flows in the same direction and DC voltages always have the same polarity. One terminal is
always positive, and the other is always negative. They do not change or reverse.
An oscilloscope is an electronic device used to measure electrical signals relative to time. An oscilloscope
graphs the electrical waves, pulses, and patterns. An oscilloscope has an x-axis that represents time, and a
y-axis that represents voltage. There are usually two y-axis voltage inputs so that two waves can be
observed and measured at the same time.
Power lines carry electricity in the form of AC because it can be delivered efficiently over large distances. DC
can be found in flashlight batteries, car batteries, and as power for the microchips on the motherboard of a
computer, where it only needs to go a short distance.
Electrons flow in closed circuits, or complete loops. Figure shows a simple circuit. The chemical processes
in the battery cause charges to build up. This provides a voltage, or electrical pressure, that enables
electrons to flow through various devices. The lines represent a conductor, which is usually copper wire.
Think of a switch as two ends of a single wire that can be opened or broken to prevent the flow of electrons.
When the two ends are closed, fixed, or shorted, electrons are allowed to flow. Finally, a light bulb provides
resistance to the flow of electrons, which causes the electrons to release energy in the form of light. The
circuits in networks use a much more complex version of this simple circuit.
For AC and DC electrical systems, the flow of electrons is always from a negatively charged source to a
positively charged source. However, for the controlled flow of electrons to occur, a complete circuit is
required. Figure shows part of the electrical circuit that brings power to a home or office.
The Lab Activity explores the basic properties of series circuits.
3.1.6 Cable specifications
This page discusses cable specifications and expectations.
Cables have different specifications and expectations. Important considerations related to performance are
as follows:
What speeds for data transmission can be achieved? The speed of bit transmission through the
cable is extremely important. The speed of transmission is affected by the kind of conduit used.
Will the transmissions be digital or analog? Digital or baseband transmission and analog or
broadband transmission require different types of cable.
How far can a signal travel before attenuation becomes a concern? If the signal is degraded,
network devices might not be able to receive and interpret the signal. The distance the signal travels
through the cable affects attenuation of the signal. Degradation is directly related to the distance the
signal travels and the type of cable used.
The following Ethernet specifications relate to cable type:
10BASE-T
10BASE5
10BASE2
10BASE-T refers to the speed of transmission at 10 Mbps. The type of transmission is baseband, or
digitally interpreted. The T stands for twisted pair.
10BASE5 refers to the speed of transmission at 10 Mbps. The type of transmission is baseband, or digitally
interpreted. The 5 indicates that a signal can travel for approximately 500 meters before attenuation could
disrupt the ability of the receiver to interpret the signal. 10BASE5 is often referred to as Thicknet. Thicknet is
a type of network and 10BASE5 is the cable used in that network.
10BASE2 refers to the speed of transmission at 10 Mbps. The type of transmission is baseband, or digitally
interpreted. The 2, in 10BASE2, refers to the approximate maximum segment length being 200 meters
before attenuation could disrupt the ability of the receiver to appropriately interpret the signal being received.
The maximum segment length is actually 185 meters. 10BASE2 is often referred to as Thinnet. Thinnet is a
type of network and 10BASE2 is the cable used in that network.
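The naming convention just described can be summarized in a small lookup table. The entries simply restate the text: the leading number is the speed in Mbps, BASE indicates baseband signaling, and the final part indicates either the media type or the approximate segment length.

```python
# Summary of the Ethernet cable specifications described above.
ETHERNET_SPECS = {
    "10BASE-T": {"speed_mbps": 10, "signaling": "baseband",
                 "suffix_meaning": "T = twisted pair"},
    "10BASE5":  {"speed_mbps": 10, "signaling": "baseband",
                 "suffix_meaning": "5 = approx. 500 m max segment (Thicknet)"},
    "10BASE2":  {"speed_mbps": 10, "signaling": "baseband",
                 "suffix_meaning": "2 = approx. 200 m (185 m actual, Thinnet)"},
}

for name, spec in ETHERNET_SPECS.items():
    print(f"{name:9} {spec['speed_mbps']} Mbps, {spec['signaling']}, {spec['suffix_meaning']}")
```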
3.1.7 Coaxial Cable
This page provides detailed information about coaxial cable.
Coaxial cable consists of a copper conductor surrounded by a layer of flexible insulation. The center
conductor can also be made of tin-plated aluminum, which allows the cable to be manufactured
inexpensively. Over this insulating material is a woven copper braid or metallic foil that acts as the second
wire in the circuit and as a shield for the inner conductor. This second layer, or shield, also reduces the
amount of outside electromagnetic interference. Covering this shield is the cable jacket.
For LANs, coaxial cable offers several advantages. It can be run longer distances than shielded twisted-pair (STP), unshielded twisted-pair (UTP), and screened twisted-pair (ScTP) cable without the need for repeaters.
Repeaters regenerate the signals in a network so that they can cover greater distances. Coaxial cable is less
expensive than fiber-optic cable and the technology is well known. It has been used for many years for many
types of data communication such as cable television.
It is important to consider the size of a cable. As the thickness increases, it becomes more difficult to work
with a cable. Remember that cable must be pulled through conduits and troughs that are limited in size.
Coaxial cable comes in a variety of sizes. The largest diameter was specified for use as Ethernet backbone
cable since it has greater transmission lengths and noise rejection characteristics. This type of coaxial cable
is frequently referred to as Thicknet. This type of cable can be too rigid to install easily in some situations.
Generally, the more difficult the network media is to install, the more expensive it is to install. Coaxial cable is
more expensive to install than twisted-pair cable. Thicknet cable is rarely used anymore aside from special
purpose installations.
In the past, Thinnet coaxial cable with an outside diameter of only 0.35 cm was used in Ethernet networks. It
was especially useful for cable installations that required the cable to make many twists and turns. Since
Thinnet was easier to install, it was also cheaper to install. This led some people to refer to it as Cheapernet.
The outer copper or metallic braid in coaxial cable comprises half the electric circuit. A solid electrical
connection at both ends is important to properly ground the cable. Poor shield connection is one of the
biggest sources of connection problems in the installation of coaxial cable. Connection problems result in
electrical noise that interferes with signal transmission. For this reason, Thinnet is no longer commonly used, nor is it supported by the latest Ethernet standards of 100 Mbps and higher.
The following page describes STP cable.
3.1.8 STP Cable
This page provides detailed information about STP cable.
STP cable combines the techniques of shielding, cancellation, and wire twisting. Each pair of wires is
wrapped in metallic foil. The two pairs of wires are wrapped in an overall metallic braid or foil. It is usually
150-ohm cable. As specified for use in Token Ring network installations, STP reduces electrical noise within
the cable such as pair to pair coupling and crosstalk. STP also reduces electronic noise from outside the
cable such as electromagnetic interference (EMI) and radio frequency interference (RFI). STP cable shares
many of the advantages and disadvantages of UTP cable. STP provides more protection from all types of
external interference. However, STP is more expensive and difficult to install than UTP.
A new hybrid of UTP is Screened UTP (ScTP), which is also known as foil screened twisted pair (FTP).
ScTP is essentially UTP wrapped in a metallic foil shield, or screen. ScTP, like UTP, is also 100-ohm cable.
Many cable installers and manufacturers may use the term STP to describe ScTP cabling. It is important to
understand that most references made to STP today actually refer to four-pair shielded cabling. It is highly
unlikely that true STP cable will be used during a cable installation job.
The metallic shielding materials in STP and ScTP need to be grounded at both ends. If improperly grounded
or if there are any discontinuities in the entire length of the shielding material, STP and ScTP can become
susceptible to major noise problems. They are susceptible because they allow the shield to act like an
antenna that picks up unwanted signals. However, this effect works both ways. Not only does the shield
prevent incoming electromagnetic waves from causing noise on data wires, but it also minimizes the
outgoing radiated electromagnetic waves. These waves could cause noise in other devices. STP and ScTP
cable cannot be run as far as other networking media, such as coaxial cable or optical fiber, without the
signal being repeated. More insulation and shielding combine to considerably increase the size, weight, and
cost of the cable. The shielding materials make terminations more difficult and susceptible to poor
workmanship. However, STP and ScTP still have a role, especially in Europe or installations where there is
extensive EMI and RFI near the cabling.
The following page discusses UTP cable.
3.1.9 UTP Cable
This page provides detailed information about UTP cable.
UTP is a four-pair wire medium used in a variety of networks. Each of the eight copper wires in the UTP
cable is covered by insulating material. In addition, each pair of wires is twisted around each other. This type
of cable relies on the cancellation effect produced by the twisted wire pairs to limit signal degradation caused
by EMI and RFI. To further reduce crosstalk between the pairs in UTP cable, the number of twists in the wire
pairs varies. Like STP cable, UTP cable must follow precise specifications as to how many twists or braids
are permitted per foot of cable.
TIA/EIA-568-B.2 contains specifications that govern cable performance. It calls for running two cables, one for voice and one for data, to each outlet. The cable for voice must be four-pair UTP. Category 5
is the cable most frequently recommended and implemented in installations. However, analyst predictions
and independent polls indicate that Category 6 cable will supersede Category 5 cable in network
installations. The fact that Category 6 link and channel requirements are backward compatible with Category 5e makes it very easy for customers to choose Category 6 and supersede Category 5e in their networks.
Applications that work over Category 5e will work over Category 6.
UTP cable has many advantages. It is easy to install and is less expensive than other types of networking
media. In fact, UTP costs less per meter than any other type of LAN cabling. However, the real advantage
is the size. Since it has such a small external diameter, UTP does not fill up wiring ducts as rapidly as other
types of cable. This can be an extremely important factor to consider, particularly when a network is installed
in an older building. When UTP cable is installed with an RJ-45 connector, potential sources of network noise
are greatly reduced and a good solid connection is almost guaranteed.
There are some disadvantages of twisted-pair cabling. UTP cable is more prone to electrical noise and
interference than other types of networking media, and the distance between signal boosts is shorter for UTP
than it is for coaxial and fiber optic cables.
Twisted pair cabling was once considered slower at transmitting data than other types of cable. This is no
longer true. In fact, today, twisted pair is considered the fastest copper-based media.
For communication to occur the signal that is transmitted by the source needs to be understood by the
destination. This is true from both a software and physical perspective. The transmitted signal needs to be
properly received by the circuit connection designed to receive signals. The transmit pin of the source needs
to ultimately connect to the receiving pin of the destination. The following are the types of cable connections
used between internetwork devices.
In Figure , a LAN switch is connected to a computer. The cable that connects from the switch port to the
computer NIC port is called a straight-through cable.
In Figure , two switches are connected together. The cable that connects from one switch port to another
switch port is called a crossover cable.
In Figure , the cable that connects the RJ-45 adapter on the com port of the computer to the console port
of the router or switch is called a rollover cable.
The cables are defined by the type of connections, or pinouts, from one end to the other end of the cable.
See Figures , , and . A technician can compare both ends of the same cable by placing them next to
each other, provided the cable has not yet been placed in a wall. The technician observes the colors of the
two RJ-45 connections by placing both ends with the clip placed into the hand and the top of both ends of the
cable pointing away from the technician. A straight-through cable should have both ends with identical color
patterns. While comparing the ends of a cross-over cable, the color of pins #1 and #2 will appear on the
other end at pins #3 and #6, and vice-versa. This occurs because the transmit and receive pins are in
different locations. On a rollover cable, the color combination from left to right on one end should be exactly
opposite to the color combination on the other end.
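As a rough illustration, the three pinout patterns just described can be expressed as simple pin mappings. The following is a minimal Python sketch; the actual wire colors depend on whether the T568A or T568B scheme is used and are not shown here.

# Pin mappings implied by the descriptions above (illustrative only).
def straight_through(pin):
    return pin                        # same pin position on both ends

def crossover(pin):
    swap = {1: 3, 2: 6, 3: 1, 6: 2}   # transmit and receive pairs swapped
    return swap.get(pin, pin)         # remaining pins left unchanged here

def rollover(pin):
    return 9 - pin                    # pin 1 maps to 8, pin 2 to 7, and so on

for pin in range(1, 9):
    print(pin, straight_through(pin), crossover(pin), rollover(pin))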
In the first Lab Activity, a simple communication system is designed, built, and tested.
In the next Lab Activity, students will use a cable tester to determine if a straight-through or crossover cable
is good or bad.
The next three Lab Activities will provide hands-on experience with straight-through, rollover, and crossover
cable construction.
In the final Lab Activity, students will research cable costs.
This page concludes this lesson. The next lesson will discuss optical media. The first page will describe the
electromagnetic spectrum.
3.2 Optical Media
3.2.1 The electromagnetic spectrum
This page introduces the electromagnetic spectrum.
The light used in optical fiber networks is one type of electromagnetic energy. When an electric charge
moves back and forth, or accelerates, a type of energy called electromagnetic energy is produced. This
energy in the form of waves can travel through a vacuum, the air, and through some materials like glass. An
important property of any energy wave is the wavelength.
Radio, microwaves, radar, visible light, x-rays, and gamma rays seem to be very different things. However,
they are all types of electromagnetic energy. If all the types of electromagnetic waves are arranged in order
from the longest wavelength down to the shortest wavelength, a continuum called the electromagnetic
spectrum is created.
The wavelength of an electromagnetic wave is determined by how frequently the electric charge that
generates the wave moves back and forth. If the charge moves back and forth slowly, the wavelength it
generates is a long wavelength. Visualize the movement of the electric charge as like that of a stick in a pool
of water. If the stick is moved back and forth slowly, it will generate ripples in the water with a long
wavelength between the tops of the ripples. If the stick is moved back and forth more rapidly, the ripples will
have a shorter wavelength.
Because electromagnetic waves are all generated in the same way, they share many of the same properties.
The waves all travel at the same rate of speed through a vacuum. The rate is approximately 300,000
kilometers per second or 186,283 miles per second. This is also the speed of light.
Human eyes were designed to only sense electromagnetic energy with wavelengths between 700
nanometers and 400 nanometers (nm). A nanometer is one billionth of a meter (0.000000001 meter) in
length. Electromagnetic energy with wavelengths between 700 and 400 nm is called visible light. The longer
wavelengths of light that are around 700 nm are seen as the color red. The shortest wavelengths that are
around 400 nm appear as the color violet. This part of the electromagnetic spectrum is seen as the colors in
a rainbow.
Wavelengths that are not visible to the human eye are used to transmit data over optical fiber. These
wavelengths are slightly longer than red light and are called infrared light. Infrared light is used in TV remote
controls. The wavelength of the light in optical fiber is either 850 nm, 1310 nm, or 1550 nm. These
wavelengths were selected because they travel through optical fiber better than other wavelengths.
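For a sense of scale, the frequency that corresponds to each of these wavelengths can be estimated from the relationship frequency = speed of light / wavelength. The short Python sketch below is an illustration only, using the approximate vacuum speed of light quoted above (300,000 km/s, written here in meters per second).

# Frequency of the common fiber wavelengths, f = c / wavelength (approximate).
c = 3.0e8                          # speed of light in a vacuum, meters per second
for nm in (850, 1310, 1550):
    wavelength_m = nm * 1e-9       # convert nanometers to meters
    print(nm, "nm is roughly", round(c / wavelength_m / 1e12), "THz")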
3.2.2 Ray model of light
This page describes the properties of light rays.
When electromagnetic waves travel out from a source, they travel in straight lines. These straight lines
pointing out from the source are called rays.
Think of light rays as narrow beams of light like those produced by lasers. In the vacuum of empty space,
light travels continuously in a straight line at 300,000 kilometers per second. However, light travels at
different, slower speeds through other materials like air, water, and glass. When a light ray, called the incident ray, crosses the boundary from one material to another, some of the light energy in the ray will be reflected
back. That is why you can see yourself in window glass. The light that is reflected back is called the reflected
ray.
The light energy in the incident ray that is not reflected will enter the glass. The entering ray will be bent at an
angle from its original path. This ray is called the refracted ray. How much the incident light ray is bent

depends on the angle at which the incident ray strikes the surface of the glass and the different rates of
speed at which light travels through the two substances.
The bending of light rays at the boundary of two substances is the reason why light rays are able to travel
through an optical fiber even if the fiber curves in a circle.
The optical density of the glass determines how much the rays of light in the glass bends. Optical density
refers to how much a light ray slows down when it passes through a substance. The greater the optical
density of a material, the more it slows light down from its speed in a vacuum. The index of refraction is
defined as the speed of light in vacuum divided by the speed of light in the medium. Therefore, the measure
of the optical density of a material is the index of refraction of that material. A material with a large index of
refraction is more optically dense and slows down more light than a material with a smaller index of
refraction.
For a substance like glass, the Index of Refraction, or the optical density, can be made larger by adding
chemicals to the glass. Making the glass very pure can make the index of refraction smaller. The next
lessons will provide further information about reflection and refraction, and their relation to the design and
function of optical fiber.
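As a small numerical illustration of this definition, the speed of light in a medium can be estimated by dividing the speed of light in a vacuum by the index of refraction. The Python sketch below uses 1.523, the value quoted for glass in the refraction example later in this lesson.

# Index of refraction: n = c / v, so v = c / n (illustration only).
c = 300000                 # approximate speed of light in a vacuum, km/s
def speed_in_medium(n):
    return c / n           # a larger index means light travels more slowly

print(round(speed_in_medium(1.523)))   # ordinary glass, roughly 197,000 km/s
print(round(speed_in_medium(1.000)))   # air, essentially the vacuum speed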
The Interactive Media Activity demonstrates how light travels.
3.2.3 Reflection
This page provides an overview of reflection.
When a ray of light (the incident ray) strikes the shiny surface of a flat piece of glass, some of the light
energy in the ray is reflected. The angle between the incident ray and a line perpendicular to the surface of
the glass at the point where the incident ray strikes the glass is called the angle of incidence. The
perpendicular line is called the normal. It is not a light ray but a tool to allow the measurement of angles. The
angle between the reflected ray and the normal is called the angle of reflection. The Law of Reflection states
that the angle of reflection of a light ray is equal to the angle of incidence. In other words, the angle at which
a light ray strikes a reflective surface determines the angle that the ray will reflect off the surface.
The Interactive Media Activity demonstrates the laws of reflection.
3.2.4 Refraction
This page provides an overview of refraction.
When a light strikes the interface between two transparent materials, the light divides into two parts. Part of
the light ray is reflected back into the first substance, with the angle of reflection equaling the angle of
incidence. The remaining energy in the light ray crosses the interface and enters into the second substance.
If the incident ray strikes the glass surface at an exact 90-degree angle, the ray goes straight into the glass.
The ray is not bent. However, if the incident ray is not at an exact 90-degree angle to the surface, then the
transmitted ray that enters the glass is bent. The bending of the entering ray is called refraction. How much
the ray is refracted depends on the index of refraction of the two transparent materials. If the light ray travels
from a substance whose index of refraction is smaller, into a substance where the index of refraction is
larger, the refracted ray is bent towards the normal. If the light ray travels from a substance where the index
of refraction is larger into a substance where the index of refraction is smaller, the refracted ray is bent away
from the normal.
Consider a light ray moving at an angle other than 90 degrees through the boundary between glass and a
diamond. The glass has an index of refraction of about 1.523. The diamond has an index of refraction of
about 2.419. Therefore, the ray that continues into the diamond will be bent towards the normal. When that
light ray crosses the boundary between the diamond and the air at some angle other than 90 degrees, it will
be bent away from the normal. The reason for this is that air has a lower index of refraction, about 1.000 than
the index of refraction of the diamond.
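The direction of bending described above is usually stated formally as Snell's law, n1 x sin(angle of incidence) = n2 x sin(angle of refraction). The Python sketch below is an illustration using the index values quoted in this paragraph; the 30-degree incident angle is an assumed example.

import math

def refracted_angle(n1, n2, incident_deg):
    # Snell's law: n1 * sin(a1) = n2 * sin(a2)
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    if s > 1.0:
        return None        # no refracted ray (total internal reflection)
    return math.degrees(math.asin(s))

# Glass (n ~ 1.523) into diamond (n ~ 2.419): bent towards the normal.
print(refracted_angle(1.523, 2.419, 30))     # about 18.4 degrees
# Diamond into air (n ~ 1.000): bent away from the normal.
print(refracted_angle(2.419, 1.000, 18.4))   # about 49.8 degrees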
The Interactive Media Activity shows how refraction works.
3.2.5 Total internal reflection
This page explains total internal reflection as it relates to optical media.
A light ray that is being turned on and off to send data (1s and 0s) into an optical fiber must stay inside the
fiber until it reaches the far end. The ray must not refract into the material wrapped around the outside of the
fiber. Such refraction would cause the loss of part of the light energy of the ray. The fiber must therefore be designed so that its outside surface acts like a mirror to the light ray moving through it. If any light ray that tries to move out through the side of the fiber is reflected back into the fiber at an angle that sends it towards the far end, the fiber acts as a good "pipe" or "wave guide" for the light waves.
The laws of reflection and refraction illustrate how to design a fiber that guides the light waves through the
fiber with a minimum energy loss. The following two conditions must be met for the light rays in a fiber to be
reflected back into the fiber without any loss due to refraction:
The core of the optical fiber has to have a larger index of refraction (n) than the material that
surrounds it. The material that surrounds the core of the fiber is called the cladding.
The angle of incidence of the light ray is greater than the critical angle for the core and its cladding.

When both of these conditions are met, the entire incident light in the fiber is reflected back inside the fiber.
This is called total internal reflection, which is the foundation upon which optical fiber is constructed. Total
internal reflection causes the light rays in the fiber to bounce off the core-cladding boundary and continue its
journey towards the far end of the fiber. The light will follow a zigzag path through the core of the fiber.
A fiber that meets the first condition can be easily created. In addition, the angle of incidence of the light rays
that enter the core can be controlled. Restricting the following two factors controls the angle of incidence:
The numerical aperture of the fiber - The numerical aperture of a core is the range of angles of incident light rays entering the fiber that will be completely reflected.
Modes - The paths which a light ray can follow when traveling down a fiber.
By controlling both conditions, the fiber run will have total internal reflection. This gives a light wave guide
that can be used for data communications.
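A brief numerical sketch of the critical angle and numerical aperture mentioned above follows. The Python below is an illustration only; the core and cladding indices (1.48 and 1.46) are assumed example values rather than figures from the text, and the numerical aperture formula shown is the usual one for a step-index fiber.

import math

def critical_angle_deg(n_core, n_cladding):
    # Incident angles larger than this are totally internally reflected.
    return math.degrees(math.asin(n_cladding / n_core))

def numerical_aperture(n_core, n_cladding):
    # Standard step-index expression for the acceptance of incoming rays.
    return math.sqrt(n_core**2 - n_cladding**2)

print(critical_angle_deg(1.48, 1.46))   # about 80.6 degrees
print(numerical_aperture(1.48, 1.46))   # about 0.24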
3.2.6 Multimode fiber
This page will introduce multimode fiber.
The part of an optical fiber through which light rays travel is called the core of the fiber. Light rays can only
enter the core if their angle is inside the numerical aperture of the fiber. Likewise, once the rays have entered
the core of the fiber, there are a limited number of optical paths that a light ray can follow through the fiber.
These optical paths are called modes. If the diameter of the core of the fiber is large enough so that there are
many paths that light can take through the fiber, the fiber is called "multimode" fiber. Single-mode fiber has a
much smaller core that only allows light rays to travel along one mode inside the fiber.
Every fiber-optic cable used for networking consists of two glass fibers encased in separate sheaths. One
fiber carries transmitted data from device A to device B. The second fiber carries data from device B to
device A. The fibers are similar to two one-way streets going in opposite directions. This provides a full-duplex communication link. Copper twisted-pair uses a wire pair to transmit and a wire pair to receive. Fiber-optic circuits use one fiber strand to transmit and one to receive. Typically, these two fiber cables will be in a
single outer jacket until they reach the point at which connectors are attached.
Until the connectors are attached, there is no need for shielding, because no light escapes when it is inside a
fiber. This means there are no crosstalk issues with fiber. It is very common to see multiple fiber pairs
encased in the same cable. This allows a single cable to be run between data closets, floors, or buildings.
One cable can contain 2 to 48 or more separate fibers. With copper, one UTP cable would have to be pulled
for each circuit. Fiber can carry many more bits per second and carry them farther than copper can.
Usually, five parts make up each fiber-optic cable. The parts are the core, the cladding, a buffer, a strength
material, and an outer jacket.
The core is the light transmission element at the center of the optical fiber. All the light signals travel through
the core. A core is typically glass made from a combination of silicon dioxide (silica) and other elements.
Multimode fiber uses a type of glass, called graded-index glass, for its core. This glass has a lower index of
refraction towards the outer edge of the core. Therefore, the outer area of the core is less optically dense
than the center and light can go faster in the outer part of the core. This design is used because a light ray
following a mode that goes straight down the center of the core does not have as far to travel as a ray
following a mode that bounces around in the fiber. All rays should arrive at the end of the fiber together. Then
the receiver at the end of the fiber receives a strong flash of light rather than a long, dim pulse.
Surrounding the core is the cladding. Cladding is also made of silica but with a lower index of refraction than
the core. Light rays traveling through the fiber core reflect off this core-to-cladding interface as they move
through the fiber by total internal reflection. Standard multimode fiber-optic cable is the most common type of
fiber-optic cable used in LANs. A standard multimode fiber-optic cable uses an optical fiber with either a 62.5
or a 50-micron core and a 125-micron diameter cladding. This is commonly designated as 62.5/125 or
50/125 micron optical fiber. A micron is one millionth of a meter (1 µm).
Surrounding the cladding is a buffer material that is usually plastic. The buffer material helps shield the core
and cladding from damage. There are two basic cable designs. They are the loose-tube and the tight-buffered cable designs. Most of the fiber used in LANs is tight-buffered multimode cable. Tight-buffered
cables have the buffering material that surrounds the cladding in direct contact with the cladding. The most
practical difference between the two designs is the applications for which they are used. Loose-tube cable is
primarily used for outside-building installations, while tight-buffered cable is used inside buildings.
The strength material surrounds the buffer, preventing the fiber cable from being stretched when installers
pull it. The material used is often Kevlar, the same material used to produce bulletproof vests.
The final element is the outer jacket. The outer jacket surrounds the cable to protect the fiber against
abrasion, solvents, and other contaminants. The color of the outer jacket of multimode fiber is usually
orange, but occasionally another color.
Infrared Light Emitting Diodes (LEDs) or Vertical Cavity Surface Emitting Lasers (VCSELs) are the two types of light source usually used with multimode fiber; either one or the other is used. LEDs are a little cheaper to build and pose fewer safety concerns than lasers. However, LEDs cannot transmit light as far over cable as lasers can. Multimode fiber (62.5/125) can carry data over distances of up to 2000 meters (6,560 ft).
3.2.7 Single-mode fiber
This page will introduce single-mode fiber.


Single-mode fiber consists of the same parts as multimode. The outer jacket of single-mode fiber is usually
yellow. The major difference between multimode and single-mode fiber is that single-mode allows only one
mode of light to propagate through the smaller, fiber-optic core. The single-mode core is eight to ten microns
in diameter. Nine-micron cores are the most common. A 9/125 marking on the jacket of the single-mode fiber
indicates that the core fiber has a diameter of 9 microns and the surrounding cladding is 125 microns in
diameter.
An infrared laser is used as the light source in single-mode fiber. The ray of light it generates enters the core
at a 90-degree angle. As a result, the data carrying light ray pulses in single-mode fiber are essentially
transmitted in a straight line right down the middle of the core. This greatly increases both the speed and
the distance that data can be transmitted.
Because of its design, single-mode fiber is capable of higher rates of data transmission (bandwidth) and
greater cable run distances than multimode fiber. Single-mode fiber can carry LAN data up to 3000 meters.
Although this distance is considered a standard, newer technologies have increased this distance and will be
discussed in a later module. Multimode is only capable of carrying up to 2000 meters. Lasers and single-mode fibers are more expensive than LEDs and multimode fiber. Because of these characteristics, single-mode fiber is often used for inter-building connectivity.
Warning: The laser light used with single-mode fiber has a longer wavelength than can be seen. The laser is so
strong that it can seriously damage eyes. Never look at the near end of a fiber that is connected to a device
at the far end. Never look into the transmit port on a NIC, switch, or router. Remember to keep protective
covers over the ends of fiber and inserted into the fiber-optic ports of switches and routers. Be very careful.
Figure compares the relative sizes of the core and cladding for both types of optical fiber in different sectional views. The much smaller and more refined fiber core in single-mode fiber is the reason single-mode fiber has a higher bandwidth and cable run distance than multimode fiber. However, it entails higher manufacturing costs.
3.2.8 Other optical components
This page explains how optical devices are used to transmit data.
Most of the data sent over a LAN is in the form of electrical signals. However, optical fiber links use light to
send data. Something is needed to convert the electricity to light and at the other end of the fiber convert the
light back to electricity. This means that a transmitter and a receiver are required.
The transmitter receives data to be transmitted from switches and routers. This data is in the form of
electrical signals. The transmitter converts the electronic signals into their equivalent light pulses. There are
two types of light sources used to encode and transmit the data through the cable:
A light emitting diode (LED) producing infrared light with wavelengths of either 850 nm or 1310 nm.
These are used with multimode fiber in LANs. Lenses are used to focus the infrared light on the end
of the fiber.
Light amplification by stimulated emission of radiation (LASER), a light source producing a thin beam of intense infrared light, usually with wavelengths of 1310 nm or 1550 nm. Lasers are used with single-mode fiber over the longer distances involved in WANs or campus backbones. Extra care should be
exercised to prevent eye injury.
Each of these light sources can be lighted and darkened very quickly to send data (1s and 0s) at a high
number of bits per second.
At the other end of the optical fiber from the transmitter is the receiver. The receiver functions something like
the photoelectric cell in a solar powered calculator. When light strikes the receiver, it produces electricity. The
first job of the receiver is to detect a light pulse that arrives from the fiber. Then the receiver converts the light
pulse back into the original electrical signal that first entered the transmitter at the far end of the fiber. Now
the signal is again in the form of voltage changes. The signal is ready to be sent over copper wire into any
receiving electronic device such as a computer, switch, or router. The semiconductor devices that are usually
used as receivers with fiber-optic links are called p-intrinsic-n diodes (PIN photodiodes).
PIN photodiodes are manufactured to be sensitive to light at 850, 1310, or 1550 nm, the wavelengths that are generated by
the transmitter at the far end of the fiber. When struck by a pulse of light at the proper wavelength, the PIN
photodiode quickly produces an electric current of the proper voltage for the network. It instantly stops
producing the voltage when no light strikes the PIN photodiode. This generates the voltage changes that
represent the data 1s and 0s on a copper cable.
Connectors are attached to the fiber ends so that the fibers can be connected to the ports on the transmitter
and receiver. The type of connector most commonly used with multimode fiber is the Subscriber Connector
(SC). On single-mode fiber, the Straight Tip (ST) connector is frequently used.
In addition to the transmitters, receivers, connectors, and fibers that are always required on an optical
network, repeaters and fiber patch panels are often seen.
Repeaters are optical amplifiers that receive attenuating light pulses traveling long distances and restore
them to their original shapes, strengths, and timings. The restored signals can then be sent on along the
journey to the receiver at the far end of the fiber.

Fiber patch panels are similar to the patch panels used with copper cable. These panels increase the flexibility of
an optical network by allowing quick changes to the connection of devices like switches or routers with
various available fiber runs, or cable links.
The Lab Activity will teach students about the price of different types of fiber cables.
3.2.9 Signals and noise in optical fibers
This page explains some factors that reduce signal strength in optical media.
Fiber-optic cable is not affected by the sources of external noise that cause problems on copper media
because external light cannot enter the fiber except at the transmitter end. The cladding is covered by a
buffer and an outer jacket that stops light from entering or leaving the cable.
Furthermore, the transmission of light on one fiber in a cable does not generate interference that disturbs
transmission on any other fiber. This means that fiber does not have the problem with crosstalk that copper
media does. In fact, the quality of fiber-optic links is so good that the recent standards for gigabit and ten
gigabit Ethernet specify transmission distances that far exceed the traditional two-kilometer reach of the
original Ethernet. Fiber-optic transmission allows the Ethernet protocol to be used on metropolitan-area
networks (MANs) and wide-area networks (WANs).
Although fiber is the best of all the transmission media at carrying large amounts of data over long distances,
fiber is not without problems. When light travels through fiber, some of the light energy is lost. The farther a
light signal travels through a fiber, the more the signal loses strength. This attenuation of the signal is due to
several factors involving the nature of fiber itself. The most important factor is scattering. The scattering of
light in a fiber is caused by microscopic non-uniformities (distortions) in the fiber that reflect and scatter
some of the light energy.
Absorption is another cause of light energy loss. When a light ray strikes some types of chemical impurities
in a fiber, the impurities absorb part of the energy. This light energy is converted to a small amount of heat
energy. Absorption makes the light signal a little dimmer.
Another factor that causes attenuation of the light signal is manufacturing irregularities or roughness in the
core-to-cladding boundary. Power is lost from the light signal because of the less than perfect total internal
reflection in that rough area of the fiber. Any microscopic imperfections in the thickness or symmetry of the
fiber will cut down on total internal reflection and the cladding will absorb some light energy.
Dispersion of a light flash also limits transmission distances on a fiber. Dispersion is the technical term for the
spreading of pulses of light as they travel down the fiber.
Graded index multimode fiber is designed to compensate for the different distances the various modes of
light have to travel in the large diameter core. Single-mode fiber does not have the problem of multiple paths
that the light signal can follow. However, chromatic dispersion is a characteristic of both multimode and
single-mode fiber. Chromatic dispersion occurs because some wavelengths of light travel through glass at slightly different speeds than other wavelengths. That is why a prism separates the wavelengths of light. Ideally, an LED or laser light source would emit light of just one frequency. Then chromatic dispersion would not be
a problem.
Unfortunately, lasers, and especially LEDs, generate a range of wavelengths, so chromatic dispersion limits
the distance that can be transmitted on a fiber. If a signal is transmitted too far, what started as a bright pulse
of light energy will be spread out, separated, and dim when it reaches the receiver. The receiver will not be
able to distinguish a one from a zero.
3.2.10 Installation, care, and testing of optical fiber
This page will teach students how to troubleshoot optical fiber.
A major cause of too much attenuation in fiber-optic cable is improper installation. If the fiber is stretched or
curved too tightly, it can cause tiny cracks in the core that will scatter the light rays. Bending the fiber in too
tight a curve can change the incident angle of light rays striking the core-to-cladding boundary. Then the
incident angle of the ray will become less than the critical angle for total internal reflection. Instead of
reflecting around the bend, some light rays will refract into the cladding and be lost.
To prevent fiber bends that are too sharp, fiber is usually pulled through a type of installed pipe called
interducting. The interducting is much stiffer than fiber and cannot be bent so sharply that the fiber inside the
interducting has too tight a curve. The interducting protects the fiber, makes it easier to pull the fiber, and
ensures that the bending radius (curve limit) of the fiber is not exceeded.
When the fiber has been pulled, the ends of the fiber must be cleaved (cut) and properly polished to ensure
that the ends are smooth. A microscope or test instrument with a built-in magnifier is used to examine the
end of the fiber and verify that it is properly polished and shaped. Then the connector is carefully attached to
the fiber end. Improperly installed connectors, improper splices, or the splicing of two cables with different
core sizes will dramatically reduce the strength of a light signal.
Once the fiber-optic cable and connectors have been installed, the connectors and the ends of the fibers
must be kept spotlessly clean. The ends of the fibers should be covered with protective covers to prevent
damage to the fiber ends. When these covers are removed prior to connecting the fiber to a port on a switch
or a router, the fiber ends must be cleaned. Clean the fiber ends with lint-free lens tissue moistened with pure
isopropyl alcohol. The fiber ports on a switch or router should also be kept covered when not in use and
cleaned with lens tissue and isopropyl alcohol before a connection is made. Dirty ends on a fiber will cause a
big drop in the amount of light that reaches the receiver.
Scattering, absorption, dispersion, improper installation, and dirty fiber ends diminish the strength of the light
signal and are referred to as fiber noise. Before using a fiber-optic cable, it must be tested to ensure that
enough light actually reaches the receiver for it to detect the zeros and ones in the signal.
When a fiber-optic link is being planned, the amount of signal power loss that can be tolerated must be
calculated. This is referred to as the optical link loss budget. Imagine a monthly financial budget. After all of
the expenses are subtracted from initial income, enough money must be left to get through the month.
The decibel (dB) is the unit used to measure the amount of power loss. It tells what percent of the power that
leaves the transmitter actually enters the receiver.
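As a short illustration of this measure, loss in decibels can be computed as 10 times the base-10 logarithm of the ratio of received power to transmitted power. The Python sketch below uses assumed example power values.

import math

def loss_db(p_transmitted_mw, p_received_mw):
    return 10 * math.log10(p_received_mw / p_transmitted_mw)

print(loss_db(1.0, 0.5))   # half the power arrives: about -3 dB
print(loss_db(1.0, 0.1))   # ten percent arrives: -10 dB
# A link passes only if its measured loss stays within the optical link loss budget.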
Testing fiber links is extremely important and records of the results of these tests must be kept. Several types
of fiber-optic test equipment are used. Two of the most important instruments are Optical Loss Meters and
Optical Time Domain Reflectometers (OTDRs).
These meters both test optical cable to ensure that the cable meets the TIA standards for fiber. They also
test to verify that the link power loss does not fall below the optical link loss budget. OTDRs can provide
much additional detailed diagnostic information about a fiber link. They can be used to troubleshoot a link
when problems occur.
This page concludes this lesson. The next lesson will discuss wireless media. The first page will discuss
Wireless LAN organizations and standards.
3.3 Wireless Media
3.3.1 Wireless LAN organizations and standards
This page will introduce the regulations and standards that apply to wireless technology. These standards
ensure that deployed networks are interoperable and in compliance.
Just as in cabled networks, IEEE is the prime issuer of standards for wireless networks. The standards have
been created within the framework of the regulations created by the Federal Communications Commission
(FCC).
A key technology contained within the 802.11 standard is Direct Sequence Spread Spectrum (DSSS). DSSS
applies to wireless devices operating within a 1 to 2 Mbps range. A DSSS system may operate at up to 11
Mbps but will not be considered compliant above 2 Mbps. The next standard approved was 802.11b, which
increased transmission capabilities to 11 Mbps. Even though DSSS WLANs were able to interoperate with
the Frequency Hopping Spread Spectrum (FHSS) WLANs, problems developed, prompting design changes by the manufacturers. In this case, the IEEE's task was simply to create a standard that matched the manufacturers' solution.
802.11b may also be called Wi-Fi or high-speed wireless and refers to DSSS systems that operate at 1, 2,
5.5, and 11 Mbps. All 802.11b systems are backward compatible in that they also support 802.11 for 1 and 2
Mbps data rates for DSSS only. This backward compatibility is extremely important as it allows upgrading of
the wireless network without replacing the NICs or access points.
802.11b devices achieve the higher data throughput rate by using a different coding technique from 802.11,
allowing for a greater amount of data to be transferred in the same time frame. The majority of 802.11b
devices still fail to match the 11 Mbps bandwidth and generally function in the 2 to 4 Mbps range.
802.11a covers WLAN devices operating in the 5 GHz transmission band. Using the 5 GHz range disallows interoperability with 802.11b devices, as they operate within the 2.4 GHz band. 802.11a is capable of supplying data
throughput of 54 Mbps and with proprietary technology known as "rate doubling" has achieved 108 Mbps. In
production networks, a more standard rating is 20-26 Mbps.
802.11g provides the same bandwidth as 802.11a but with backwards compatibility for 802.11b devices using
Orthogonal Frequency Division Multiplexing (OFDM) modulation technology. Cisco has developed an access
point that permits 802.11b and 802.11a devices to coexist on the same WLAN. The access point supplies
gateway services allowing these otherwise incompatible devices to communicate.
3.3.2 Wireless devices and topologies
This page describes the devices and related topologies for a wireless network.
A wireless network may consist of as few as two devices. The nodes could simply be desktop
workstations or notebook computers. Equipped with wireless NICs, the nodes can establish an ad hoc network that is comparable to a peer-to-peer wired network. Both devices act as servers and clients in this environment.
Although this arrangement provides connectivity, security and throughput are minimal. Another problem with
this type of network is compatibility. Many times NICs from different manufacturers are not compatible.
To solve the problem of compatibility, an access point (AP) is commonly installed to act as a central hub for
the WLAN infrastructure mode. The AP is hard wired to the cabled LAN to provide Internet access and
connectivity to the wired network. APs are equipped with antennae and provide wireless connectivity over a
specified area referred to as a cell. Depending on the structural composition of the location in which the AP
is installed and the size and gain of the antennae, the size of the cell could greatly vary. Most commonly, the
range will be from 91.44 to 152.4 meters (300 to 500 feet). To service larger areas, multiple access points
may be installed with a degree of overlap. The overlap permits "roaming" between cells. This is very similar
to the services provided by cellular phone companies. Overlap, on multiple AP networks, is critical to allow
for movement of devices within the WLAN. Although not addressed in the IEEE standards, a 20-30% overlap
is desirable. This rate of overlap will permit roaming between cells, allowing for the disconnect and reconnect
activity to occur seamlessly without service interruption.
When a client is activated within the WLAN, it will start "listening" for a compatible device with which to
"associate". This is referred to as "scanning" and may be active or passive.
Active scanning causes a probe request to be sent from the wireless node seeking to join the network. The
probe request will contain the Service Set Identifier (SSID) of the network it wishes to join. When an AP with
the same SSID is found, the AP will issue a probe response. The authentication and association steps are
completed.
Passive scanning nodes listen for beacon management frames (beacons), which are transmitted by the AP
(infrastructure mode) or peer nodes (ad hoc). When a node receives a beacon that contains the SSID of the
network it is trying to join, an attempt is made to join the network. Passive scanning is a continuous process
and nodes may associate or disassociate with APs as signal strength changes.
The first Interactive Media Activity shows the levels of the OSI reference model and the related networking
devices.
The second Interactive Media Activity shows the addition of a wireless hub to a wired network.

3.3.3 How wireless LANs communicate


This page explains the communication process of a WLAN.
After establishing connectivity to the WLAN, a node will pass frames in the same manner as on any other
802.x network. WLANs do not use a standard 802.3 frame. Therefore, using the term wireless Ethernet is
misleading. There are three types of frames: control, management, and data. Only the data frame type is
similar to 802.3 frames. The payload of wireless and 802.3 frames is 1500 bytes; however, an Ethernet
frame may not exceed 1518 bytes whereas a wireless frame could be as large as 2346 bytes. Usually the
WLAN frame size will be limited to 1518 bytes as it is most commonly connected to a wired Ethernet
network.
Since radio frequency (RF) is a shared medium, collisions can occur just as they do on wired shared
medium. The major difference is that there is no method by which the source node is able to detect that a
collision occurred. For that reason WLANs use Carrier Sense Multiple Access/Collision Avoidance
(CSMA/CA). This is somewhat like Ethernet CSMA/CD.
When a source node sends a frame, the receiving node returns a positive acknowledgment (ACK). This can
cause consumption of 50% of the available bandwidth. This overhead when combined with the collision
avoidance protocol overhead reduces the actual data throughput to a maximum of 5.0 to 5.5 Mbps on an
802.11b wireless LAN rated at 11 Mbps.
Performance of the network will also be affected by signal strength and degradation in signal quality due to
distance or interference. As the signal becomes weaker, Adaptive Rate Selection (ARS) may be invoked. The
transmitting unit will drop the data rate from 11 Mbps to 5.5 Mbps, from 5.5 Mbps to 2 Mbps, or from 2 Mbps to 1 Mbps.
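A back-of-the-envelope Python sketch of these figures follows, for illustration only. It applies the rough 50 percent acknowledgment overhead quoted above to each of the nominal 802.11b rates that Adaptive Rate Selection steps through; the real overhead varies with conditions.

# Nominal 802.11b rates and a rough throughput estimate after ACK overhead.
nominal_rates_mbps = [11.0, 5.5, 2.0, 1.0]
ack_overhead = 0.5                     # approximate figure from the text
for rate in nominal_rates_mbps:
    print(rate, "Mbps nominal -> about", rate * (1 - ack_overhead), "Mbps of data")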
3.3.4 Authentication and association
This page describes WLAN authentication and association.
WLAN authentication occurs at Layer 2. It is the process of authenticating the device, not the user. This is a critical point to remember when considering WLAN security, troubleshooting, and overall management.
Authentication may be a null process, as in the case of a new AP and NIC with default configurations in
place. The client will send an authentication request frame to the AP and the frame will be accepted or
rejected by the AP. The client is notified of the response via an authentication response frame. The AP may
also be configured to hand off the authentication task to an authentication server, which would perform a
more thorough credentialing process.
Association, performed after authentication, is the state that permits a client to use the services of the AP to
transfer data.
Authentication and Association types
Unauthenticated and unassociated
The node is disconnected from the network and not associated with an access point.
Authenticated and unassociated
The node has been authenticated on the network but has not yet associated with the access
point.
Authenticated and associated
The node is connected to the network and able to transmit and receive data through the access
point.
Methods of authentication
IEEE 802.11 lists two types of authentication processes.
The first authentication process is the open system. This is an open connectivity standard in which only the
SSID must match. This may be used in a secure or non-secure environment, although the ability of low-level
network sniffers to discover the SSID of the WLAN is high.
The second process is the shared key. This process requires the use of Wired Equivalent Privacy (WEP) encryption. WEP is a fairly simple algorithm that uses 64-bit and 128-bit keys. The AP is configured with an
encrypted key and nodes attempting to access the network through the AP must have a matching key.
Statically assigned WEP keys provide a higher level of security than the open system but are definitely not
hack proof.
The problem of unauthorized entry into WLANs is being addressed by a number of new security solution
technologies.
3.3.5 The radio wave and microwave spectrums
This page describes radio waves and modulation.
Computers send data signals electronically. Radio transmitters convert these electrical signals to radio
waves. Changing electric currents in the antenna of a transmitter generates the radio waves. These radio
waves radiate out in straight lines from the antenna. However, radio waves attenuate as they move out
from the transmitting antenna. In a WLAN, a radio signal measured at a distance of just 10 meters (30 feet)
from the transmitting antenna would be only 1/100th of its original strength. Like light, radio waves can be
absorbed by some materials and reflected by others. When passing from one material, like air, into another
material, like a plaster wall, radio waves are refracted. Radio waves are also scattered and absorbed by
water droplets in the air.
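The 1/100th figure above is consistent with received power falling off with the square of the distance. The Python sketch below assumes the reference measurement is taken at 1 meter, an assumption made for this illustration since the text does not state the reference distance.

def relative_power(distance_m, reference_m=1.0):
    # Inverse-square fall-off relative to the reference distance.
    return (reference_m / distance_m) ** 2

print(relative_power(10))   # 0.01, i.e. 1/100th of the strength at 1 meter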
These qualities of radio waves are important to remember when a WLAN is being planned for a building or
for a campus. The process of evaluating a location for the installation of a WLAN is called making a Site
Survey.
Because radio signals weaken as they travel away from the transmitter, the receiver must also be equipped
with an antenna. When radio waves hit the antenna of a receiver, weak electric currents are generated in that
antenna. These electric currents, caused by the received radio waves, are equal to the currents that
originally generated the radio waves in the antenna of the transmitter. The receiver amplifies the strength of
these weak electrical signals.
In a transmitter, the electrical (data) signals from a computer or a LAN are not sent directly into the antenna
of the transmitter. Rather, these data signals are used to alter a second, strong signal called the carrier
signal.
The process of altering the carrier signal that will enter the antenna of the transmitter is called modulation.
There are three basic ways in which a radio carrier signal can be modulated. For example, Amplitude
Modulated (AM) radio stations modulate the height (amplitude) of the carrier signal. Frequency Modulated
(FM) radio stations modulate the frequency of the carrier signal as determined by the electrical signal from
the microphone. In WLANs, a third type of modulation called phase modulation is used to superimpose the
data signal onto the carrier signal that is broadcast by the transmitter.
In this type of modulation, the data bits in the electrical signal change the phase of the carrier signal.
A receiver demodulates the carrier signal that arrives from its antenna. The receiver interprets the phase
changes of the carrier signal and reconstructs from it the original electrical data signal.
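The Python sketch below illustrates the phase-modulation idea with a simple two-phase scheme, where a data bit selects one of two carrier phases. It is an illustration only; the modulation actually used in WLANs is considerably more elaborate, and the carrier frequency shown is just a representative 2.4 GHz value.

import math

def modulated_sample(bit, t, carrier_hz=2.4e9):
    phase = 0.0 if bit == 0 else math.pi       # data bit sets the carrier phase
    return math.sin(2 * math.pi * carrier_hz * t + phase)

# The two bits produce samples of opposite sign at the same instant,
# which is what a receiver detects when it demodulates the carrier.
print(modulated_sample(0, 0.25e-9), modulated_sample(1, 0.25e-9))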
The first Interactive Media Activity explains electromagnetic fields and polarization.
The second Interactive Media Activity shows the names, devices, frequencies, and wavelengths of the EM
spectrum.
3.3.6 Signals and noise on a WLAN
This page discusses how signals and noise can affect a WLAN.
On a wired Ethernet network, it is usually a simple process to diagnose the cause of interference. When
using RF technology, many kinds of interference must be taken into consideration.
Narrowband is the opposite of spread spectrum technology. As the name implies, narrowband does not affect
the entire frequency spectrum of the wireless signal. One solution to a narrowband interference problem
could be simply changing the channel that the AP is using. Actually diagnosing the cause of narrowband
interference can be a costly and time-consuming experience. To identify the source requires a spectrum
analyzer and even a low cost model is relatively expensive.
All-band interference affects the entire spectrum range. Bluetooth technology hops across the entire 2.4 GHz band many times per second and can cause significant interference on an 802.11b network. It is not
uncommon to see signs in facilities that use wireless networks requesting that all Bluetooth devices be
shut down before entering. In homes and offices, a device that is often overlooked as causing interference is
the standard microwave oven. Leakage from a microwave of as little as one watt into the RF spectrum can
cause major network disruption. Wireless phones operating in the 2.4 GHz spectrum can also disrupt network communications.
Generally the RF signal will not be affected by even the most extreme weather conditions. However, fog or
very high moisture conditions can and do affect wireless networks. Lightning can also charge the
atmosphere and alter the path of a transmitted signal.
The first and most obvious source of a signal problem is the transmitting station and antenna type. A higher
output station will transmit the signal farther, and a parabolic dish antenna that concentrates the signal will
increase the transmission range.
In a SOHO environment, most access points will utilize twin omnidirectional antennae that transmit the signal in all directions, thereby reducing the range of communication.
3.3.7 Wireless security
This page will explain how wireless security can be achieved.
Where wireless networks exist, there is little security. This has been a problem from the earliest days of
WLANs. Currently, many administrators are weak in implementing effective security practices.
A number of new security solutions and protocols, such as Virtual Private Networking (VPN) and Extensible
Authentication Protocol (EAP), are emerging. With EAP, the access point does not provide authentication to the client, but passes the duties to a more sophisticated device, possibly a dedicated server, designed for that purpose. Using an integrated server, VPN technology creates a tunnel on top of an existing protocol such
as IP. This is a Layer 3 connection as opposed to the Layer 2 connection between the AP and the sending
node.
EAP-MD5 Challenge - Extensible Authentication Protocol is the earliest authentication type, which is very similar to CHAP password protection on a wired network.
LEAP (Cisco) - Lightweight Extensible Authentication Protocol is the type primarily used on Cisco WLAN access points. LEAP provides security during credential exchange, encrypts using dynamic WEP keys, and supports mutual authentication.
User authentication - Allows only authorized users to connect, send, and receive data over the wireless network.
Encryption - Provides encryption services, further protecting the data from intruders.
Data authentication - Ensures the integrity of the data, authenticating source and destination devices.
VPN technology effectively closes the wireless network since an unrestricted WLAN will automatically
forward traffic between nodes that appear to be on the same wireless network. WLANs often extend outside
the perimeter of the home or office in which they are installed, and without security, intruders may infiltrate the network with little effort. Conversely, it takes minimal effort on the part of the network administrator to provide
low-level security to the WLAN.
This page concludes the lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
Copper cable carries information using electrical current. The electrical specifications of a cable determine the kind of signal a particular cable can transmit, the speed at which the signal is transmitted, and the
distance the signal will travel.
An understanding of the following electrical concepts is helpful when working with computer networks:
Voltage - the pressure that moves electrons through a circuit from one place to another
Resistance - opposition to the flow of electrons, and the reason a signal becomes degraded as it travels along the conduit
Current - the flow of charges created when electrons move
Circuits - closed loops through which an electrical current flows
Circuits must be composed of conducting materials, and must have sources of voltage. Voltage causes
current to flow, while resistance and impedance oppose it. A multimeter is used to measure voltage, current,
resistance, and other electrical quantities expressed in numeric form.
Coaxial cable, unshielded twisted pair (UTP) and shielded twisted pair (STP) are types of copper cables that
can be used in a network to provide different capabilities. Twisted-pair cable can be configured for straight
through, crossover, or rollover signaling. These terms refer to the individual wire connections, or pinouts,
from one end to the other end of the cable. A straight-through cable is used to connect unlike devices such
as a switch and a PC. A crossover cable is used to connect similar devices such as two switches. A rollover
cable is used to connect a PC to the console port of a router. Different pinouts are required because the
transmit and receive pins are in different locations on each of these devices.
Optical fiber is the most frequently used medium for the longer, high-bandwidth, point-to-point transmissions
required on LAN backbones and on WANs. Light energy is used to transmit large amounts of data securely
over relatively long distances. The light signal carried by a fiber is produced by a transmitter that converts an
electrical signal into a light signal. The receiver converts the light that arrives at the far end of the cable back
to the original electrical signal.
Every fiber-optic cable used for networking consists of two glass fibers encased in separate sheaths. Just as
copper twisted-pair uses separate wire pairs to transmit and receive, fiber-optic circuits use one fiber strand
to transmit and one to receive.
The part of an optical fiber through which light rays travel is called the core of the fiber. Surrounding the core
is the cladding. Its function is to reflect the signal back towards the core. Surrounding the cladding is a buffer
material that helps shield the core and cladding from damage. A strength material surrounds the buffer,
preventing the fiber cable from being stretched when installers pull it. The material used is often Kevlar. The
final element is the outer jacket that surrounds the cable to protect the fiber against abrasion, solvents, and
other contaminants.
The laws of reflection and refraction are used to design fiber media that guide the light waves through the
fiber with minimum energy and signal loss. Once the rays have entered the core of the fiber, there are a
limited number of optical paths that a light ray can follow through the fiber. These optical paths are called
modes. If the diameter of the core of the fiber is large enough so that there are many paths that light can take
through the fiber, the fiber is called multimode fiber. Single-mode fiber has a much smaller core that only
allows light rays to travel along one mode inside the fiber. Because of its design, single-mode fiber is capable
of higher rates of data transmission and greater cable run distances than multimode fiber.
Fiber is described as immune to noise because it is not affected by external noise or noise from other cables.
Light confined in one fiber has no way of inducing light in another fiber. Attenuation of a light signal becomes
a problem over long cables especially if sections of cable are connected at patch panels or spliced.
Both copper and fiber media require that devices remain stationary, permitting moves only within the limits of
the media. Wireless technology removes these restraints. Understanding the regulations and standards that
apply to wireless technology will ensure that deployed networks will be interoperable and in compliance with
IEEE 802.11 standards for WLANs.
A wireless network may consist of as few as two devices. The wireless equivalent of a peer-to-peer network
where end-user devices connect directly is referred to as an ad-hoc wireless topology. To solve compatibility
problems among devices, an infrastructure mode topology can be set up using an access point (AP) to act
as a central hub for the WLAN. Wireless communication uses three types of frames: control, management,
and data frames. To avoid collisions on the shared radio frequency media WLANs use Carrier Sense Multiple
Access/Collision Avoidance (CSMA/CA).
WLAN authentication is a Layer 2 process that authenticates the device, not the user. Association,
performed after authentication, permits a client to use the services of the access point to transfer data.
4.1 Frequency-Based Cable Testing (Core)
Overview
Networking media is literally and physically the backbone of a network. Inferior quality network cabling results in network failures and unreliable performance. Copper,
optical fiber, and wireless networking media all require testing to ensure that they meet strict specification
guidelines. These tests involve certain electrical and mathematical concepts and terms such as signal, wave,
frequency, and noise. These terms will help students understand networks, cables, and cable testing.
The first lesson in this module will provide some basic definitions to help students understand the cable
testing concepts presented in the second lesson.
The second lesson of this module describes issues related to cable testing for physical layer connectivity in
LANs. In order for the LAN to function properly, the physical layer medium should meet the industry standard
specifications.
Attenuation, which is signal deterioration, and noise, which is signal interference, can cause problems in
networks because the data sent may be interpreted incorrectly or not recognized at all after it has been
received. Proper termination of cable connectors and proper cable installation are important. If standards are
followed during installations, repairs, and changes, attenuation and noise levels should be minimized.
After a cable has been installed, a cable certification meter can verify that the installation meets TIA/EIA
specifications. This module also describes some important tests that are performed.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Differentiate between sine waves and square waves
Define and calculate exponents and logarithms
Define and calculate decibels
Define basic terminology related to time, frequency, and noise


Differentiate between digital bandwidth and analog bandwidth
Compare and contrast noise levels on various types of cabling
Define and describe the effects of attenuation and impedance mismatch
Define crosstalk, near-end crosstalk, far-end crosstalk, and power sum near-end crosstalk
Describe how twisted pairs help reduce noise
Describe the ten copper cable tests defined in TIA/EIA-568-B
Describe the difference between Category 5 and Category 6 cable

4.1.1 Waves
This lesson provides definitions that relate to frequency-based cable testing. This page defines waves.
A wave is energy that travels from one place to another. There are many types of waves, but all can be
described with similar vocabulary.
It is helpful to think of waves as disturbances. A bucket of water that is completely still does not have waves
since there are no disturbances. Conversely, the ocean always has some sort of detectable waves due to
disturbances such as wind and tide.
Ocean waves can be described in terms of their height, or amplitude, which could be measured in meters.
They can also be described in terms of how frequently the waves reach the shore, which relates to period
and frequency. The period of the waves is the amount of time between each wave, measured in seconds.
The frequency is the number of waves that reach the shore each second, measured in hertz (Hz). 1 Hz is
equal to 1 wave per second, or 1 cycle per second. To experiment with these concepts, adjust the amplitude
and frequency in Figure .
Networking professionals are specifically interested in voltage waves on copper media, light waves in optical
fiber, and alternating electric and magnetic fields called electromagnetic waves. The amplitude of an
electrical signal still represents height, but it is measured in volts (V) instead of meters (m). The period is the
amount of time that it takes to complete 1 cycle. This is measured in seconds. The frequency is the number
of complete cycles per second. This is measured in Hz.
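Frequency and period are reciprocals of each other. The short Python sketch below illustrates this relationship; the period value is hypothetical and chosen only for illustration.

    # Frequency is the reciprocal of the period: frequency (Hz) = 1 / period (seconds)
    period_seconds = 0.001                 # hypothetical period of 1 millisecond
    frequency_hz = 1 / period_seconds      # 1000 cycles per second, or 1000 Hz
    print(frequency_hz)                    # 1000.0
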
If a disturbance is deliberately caused, and involves a fixed, predictable duration, it is called a pulse. Pulses
are an important part of electrical signals because they are the basis of digital transmission. The pattern of
the pulses represents the value of the data being transmitted.
4.1.2 Sine waves and square waves (Core)
This page defines sine waves and square waves.
Sine waves, or sinusoids, are graphs of mathematical functions. Sine waves are periodic, which means
that they repeat the same pattern at regular intervals. Sine waves vary continuously, which means that no
adjacent points on the graph have the same value.
Sine waves are graphical representations of many natural occurrences that change regularly over time.
Some examples of these occurrences are the distance from the earth to the sun, the distance from the
ground while riding a Ferris wheel, and the time of day that the sun rises. Since sine waves vary
continuously, they are examples of analog waves.
Square waves, like sine waves, are periodic. However, square wave graphs do not continuously vary with
time. The wave maintains one value and then suddenly changes to a different value. After a short amount of
time it changes back to the original value. Square waves represent digital signals, or pulses. Like all waves,
square waves can be described in terms of amplitude, period, and frequency.
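To make the contrast concrete, the following minimal Python sketch samples one cycle of a sine wave and a square wave of the same amplitude and frequency. The amplitude, frequency, and number of samples are hypothetical values chosen only for illustration.

    import math

    amplitude = 1.0     # volts (hypothetical value)
    frequency = 1000.0  # hertz (hypothetical value)
    period = 1 / frequency
    samples_per_cycle = 8

    for n in range(samples_per_cycle):
        t = n * period / samples_per_cycle
        sine = amplitude * math.sin(2 * math.pi * frequency * t)  # varies continuously
        square = amplitude if sine >= 0 else -amplitude           # holds one value, then jumps
        print(f"t={t:.6f} s  sine={sine:+.3f} V  square={square:+.3f} V")
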

4.1.3 Exponents and logarithms (Optional)
In networking, there are three important number systems:


Base 2 binary
Base 10 decimal
Base 16 hexadecimal
Recall that the base of a number system refers to the number of different symbols that can occupy one
position. For example, binary numbers have only two placeholders, which are zero and one. Decimal
numbers have ten different placeholders, the numbers 0 to 9. Hexadecimal numbers have 16 different
placeholders, the numbers 0 to 9 and the letters A to F.
Remember that 10 x 10 can be written as 10^2. 10^2 means ten squared, or ten raised to the second power. 10
is the base of the number and 2 is the exponent of the number. 10 x 10 x 10 can be written as 10^3. 10^3
means ten cubed, or ten raised to the third power. The base is ten and the exponent is three. Use the
Interactive Media Activity to calculate exponents. Enter a value for x to calculate y or a value for y to
calculate x.
The base of a number system also refers to the value of each digit. The least significant digit has a value of
base^0, or one. The next digit has a value of base^1. This is equal to 2 for binary numbers, 10 for decimal
numbers, and 16 for hexadecimal numbers.
Numbers with exponents are used to easily represent very large or very small numbers. It is much easier and
less error-prone to represent one billion numerically as 10^9 than as 1000000000. Many cable-testing
calculations involve numbers that are very large and require exponents. Use the Interactive Media Activity to
learn more about exponents.
One way to work with the very large and very small numbers is to transform the numbers based on the
mathematical rule known as a logarithm. Logarithm is abbreviated as "log". Any number may be used as a
base for a system of logarithms. However, base 10 has many advantages not obtainable in ordinary
calculations with other bases. Base 10 is used almost exclusively for ordinary calculations. Logarithms with
10 as a base are called common logarithms. It is not possible to obtain the logarithm of a negative number.
To take the log of a number, use a calculator or the Interactive Media Activity. For example, the log of
10^9 is 9. It is also possible to take the logarithm of numbers that are not powers of ten.
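The following minimal Python sketch illustrates the arithmetic described above: converting binary and hexadecimal numbers to decimal, writing a large number with an exponent, and taking a common logarithm.

    import math

    # The second argument to int() is the base of the number system
    print(int("1010", 2))   # binary 1010 is decimal 10
    print(int("FF", 16))    # hexadecimal FF is decimal 255

    # Exponents: 10**9 is easier and less error-prone than writing 1000000000
    one_billion = 10 ** 9

    # The common (base 10) logarithm reverses the exponent: log10(10**9) = 9
    print(math.log10(one_billion))   # 9.0
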
4.1.4 Decibels (Optional)
The study of logarithms is beyond the scope of this course. However, the terminology is often used to
calculate decibels and measure signals on copper, optical, and wireless media. The decibel is related to the
exponents and logarithms described in prior sections. There are two formulas that are used to calculate
decibels:
dB = 10 log10 (Pfinal / Pref)
dB = 20 log10 (Vfinal / Vref)
In these formulas, dB represents the loss or gain of the power of a wave. Decibels can be negative values
which would represent a loss in power as the wave travels or a positive value to represent a gain in power if
the signal is amplified.
The log10 variable implies that the number in parentheses will be transformed with the base 10 logarithm rule.
Pfinal is the delivered power measured in watts.
Pref is the original power measured in watts.
Vfinal is the delivered voltage measured in volts.
Vref is the original voltage measured in volts.
The first formula describes decibels in terms of power (P), and the second in terms of voltage (V). The power
formula is often used to measure light waves on optical fiber and radio waves in the air. The voltage formula
is used to measure electromagnetic waves on copper cables. These formulas have several things in
common.
In the formula dB = 10 log10 (Pfinal / Pref), enter values for dB and Pref to discover the delivered power. This
formula could be used to see how much power is left in a radio wave after it travels through different
materials and stages of electronic systems such as radios. Try the following examples with the Interactive
Media Activities:
If the source power of the original laser, or Pref, is seven microwatts (7 x 10^-6 watts), and the total loss
of a fiber link is 13 dB, how much power is delivered?
If the total loss of a fiber link is 84 dB and the source power of the original laser, or Pref, is 1 milliwatt,
how much power is delivered?
If 2 microvolts, or 2 x 10^-6 volts, are measured at the end of a cable and the source voltage was 1
volt, what is the gain or loss in decibels? Is this value positive or negative? Does the value represent
a gain or a loss in voltage?
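The exercises above can also be checked with a few lines of Python. The sketch below simply rearranges the two decibel formulas; the helper names db_from_power, power_delivered, and db_from_voltage are invented here for illustration, and losses are entered as negative dB values, matching the convention described earlier.

    import math

    def db_from_power(p_final, p_ref):
        """dB = 10 * log10(Pfinal / Pref)"""
        return 10 * math.log10(p_final / p_ref)

    def power_delivered(db, p_ref):
        """Rearranged power formula: Pfinal = Pref * 10**(dB / 10)"""
        return p_ref * 10 ** (db / 10)

    def db_from_voltage(v_final, v_ref):
        """dB = 20 * log10(Vfinal / Vref)"""
        return 20 * math.log10(v_final / v_ref)

    # A 13 dB loss on a 7 microwatt source (loss entered as -13 dB)
    print(power_delivered(-13, 7e-6))

    # An 84 dB loss on a 1 milliwatt source
    print(power_delivered(-84, 1e-3))

    # 2 microvolts received from a 1 volt source: a negative result means a loss
    print(db_from_voltage(2e-6, 1.0))
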
4.1.5 Time and frequency of signals (Optional)
One of the most important facts of the information age is that characters, words, pictures, video, or music
can be represented electrically by voltage patterns on wires and in electronic devices. The data represented
by these voltage patterns can be converted to light waves or radio waves, and then back to voltage waves.
Consider the example of an analog telephone. The sound waves of the caller's voice enter a microphone in
the telephone. The microphone converts the patterns of sound energy into voltage patterns of electrical
energy that represent the voice.
If the voltage is graphed over time, the patterns that represent the voice will be displayed. An oscilloscope
is an important electronic device used to view electrical signals such as voltage waves and pulses. The x-axis on the display represents time and the y-axis represents voltage or current. There are usually two y-axis
inputs, so two waves can be observed and measured at the same time.
The analysis of signals with an oscilloscope is called time-domain analysis. The x-axis or domain of the
mathematical function represents time. Engineers also use frequency-domain analysis to study signals. In
frequency-domain analysis, the x-axis represents frequency. An electronic device called a spectrum analyzer
creates graphs for frequency-domain analysis.
Electromagnetic signals use different frequencies for transmission so that different signals do not interfere
with each other. Frequency modulation (FM) radio signals use frequencies that are different from television
or satellite signals. When listeners change the station on a radio, they change the frequency that the radio
receives.
4.1.6 Analog and digital signals (Core)
This page will explain how analog signals vary with time and with frequency.
First, consider a single-frequency electrical sine wave, whose frequency can be detected by the human ear.
If this signal is transmitted to a speaker, a tone can be heard.
Next, imagine the combination of several sine waves. This will create a wave that is more complex than a
pure sine wave. This wave will include several tones. A graph of the tones will show several lines that
correspond to the frequency of each tone.
Finally, imagine a complex signal, like a voice or a musical instrument. If many different tones are present,
the graph will show a continuous spectrum of individual tones.
The Interactive Media Activity draws sine waves and complex waves based on amplitude, frequency, and the
phase.

4.1.7 Noise in time and frequency (Optional)
This page will describe the sources and effects of noise.
Noise is an important concept in networks such as LANs. Noise usually refers to sounds. However, noise
related to communications refers to undesirable signals. Noise can originate from natural or technological
sources and is added to the data signals in communications systems.
All communications systems have some amount of noise. Even though noise cannot be eliminated, its
effects can be minimized if the sources of the noise are understood. There are many possible sources of
noise:
Nearby cables that carry data signals
RFI from other signals that are transmitted nearby
EMI from nearby sources such as motors and lights
Laser noise at the transmitter or receiver of an optical signal
Noise that affects all transmission frequencies equally is called white noise. Noise that only affects small
ranges of frequencies is called narrowband interference. White noise on a radio receiver would interfere with
all radio stations. Narrowband interference would affect only a few stations whose frequencies are close
together. When detected on a LAN, white noise could affect all data transmissions, but narrowband
interference might disrupt only certain signals.
The Interactive Media Activity will allow students to generate white noise and narrowband noise.
4.1.8 Bandwidth
This page will describe bandwidth, which is an extremely important concept in networks.
Two types of bandwidth that are important for the study of LANs are analog and digital.
Analog bandwidth typically refers to the frequency range of an analog electronic system. Analog bandwidth
could be used to describe the range of frequencies transmitted by a radio station or an electronic amplifier.
The unit of measurement for analog bandwidth is hertz (Hz), the same as the unit of frequency.
Digital bandwidth measures how much information can flow from one place to another in a given amount of
time. The fundamental unit of measurement for digital bandwidth is bps. Since LANs are capable of
speeds of thousands or millions of bits per second, measurement is expressed in kbps or Mbps. Physical
media, current technologies, and the laws of physics limit bandwidth.
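As a simple illustration of digital bandwidth as information per unit of time, the sketch below estimates a best-case transfer time as file size divided by bandwidth. The file size and link speed are hypothetical, and actual transfers take longer because of protocol overhead and shared media.

    file_size_bits = 100 * 8 * 10**6     # a hypothetical 100 MB file, expressed in bits
    bandwidth_bps = 100 * 10**6          # a hypothetical 100 Mbps link

    best_case_seconds = file_size_bits / bandwidth_bps
    print(best_case_seconds)             # 8.0 seconds, ignoring all overhead
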
During cable testing, analog bandwidth is used to determine the digital bandwidth of a copper cable. The
digital waveforms are made up of many sine waves (analog waves). Analog frequencies are transmitted from
one end and received on the opposite end. The two signals are then compared, and the amount of
attenuation of the signal is calculated. In general, media that will support higher analog bandwidths without
high degrees of attenuation will also support higher digital bandwidths.
This page concludes this lesson. The next lesson will discuss signals and noise. The first page describes
copper and fiber optic cables.
4.2 Signals and Noise
4.2.1 Signals over copper and fiber optic cables (Core)
This page discusses signals over copper and fiber optic cables.
On copper cable, data signals are represented by voltage levels that represent binary ones and zeros. The
voltage levels are measured based on a reference level of 0 volts at both the transmitter and the receiver.
This reference level is called the signal ground. It is important for devices that transmit and receive data to
have the same 0-volt reference point. When they do, they are said to be properly grounded.
For a LAN to operate properly, the devices that receive data must be able to accurately interpret the binary
ones and zeros transmitted as voltage levels. Since current Ethernet technology supports data rates of
billions of bps, each bit must be recognized and the duration of each bit is very small. This means that as
much of the original signal strength as possible must be retained, as the signal moves through the cable and
passes through the connectors. In anticipation of faster Ethernet protocols, new cable installations should be
made with the best cable, connectors, and interconnect devices such as punch-down blocks and patch
panels.
The two basic types of copper cable are shielded and unshielded. In shielded cable, shielding material
protects the data signal from external sources of noise and from noise generated by electrical signals within
the cable.
Coaxial cable is a type of shielded cable. It consists of a solid copper conductor surrounded by insulating
material and a braided conductive shield. In LAN applications, the braided shielding is electrically grounded
to protect the inner conductor from external electrical noise. The shield also keeps the transmitted signal
confined to the cable, which reduces signal loss. This helps make coaxial cable less noisy than other types
of copper cabling, but also makes it more expensive. The need to ground the shielding and the bulky size of
coaxial cable make it more difficult to install than other copper cabling.
Two types of twisted-pair cable are shielded twisted-pair (STP) and unshielded twisted pair (UTP).
STP cable contains an outer conductive shield that is electrically grounded to insulate the signals from
external electrical noise. STP also uses inner foil shields to protect each wire pair from noise generated by
the other pairs. STP cable is sometimes called screened twisted pair (ScTP) in error. ScTP generally refers
to Category 5 or Category 5e twisted pair cabling, while STP refers to an IBM-specific cable containing only
two pairs of conductors. ScTP cable is more expensive, more difficult to install, and less frequently used than
UTP. UTP contains no shielding and is more susceptible to external noise but is the most frequently used
because it is inexpensive and easier to install.
Fiber-optic cable increases and decreases the intensity of light to represent binary ones and zeros in data
transmissions. The strength of a light signal does not diminish as much as the strength of an electrical
signal does over an identical run length. Optical signals are not affected by electrical noise and optical fiber
does not need to be grounded unless the jacket contains a metal or a metalized strength member. Therefore,
optical fiber is often used between buildings and between floors within a building. As costs decrease and
speeds increase, optical fiber may become a more commonly used LAN media.
4.2.2 Attenuation and insertion loss on copper media (Core)
This page explains insertion loss caused by signal attenuation and impedance discontinuities.
Attenuation is the decrease in signal amplitude over the length of a link. Long cable lengths and high signal
frequencies contribute to greater signal attenuation. For this reason, attenuation on a cable is measured by a
cable tester with the highest frequencies that the cable is rated to support. Attenuation is expressed in dBs
with negative numbers. Smaller negative dB values are an indication of better link performance.
There are several factors that contribute to attenuation. The resistance of the copper cable converts some of
the electrical energy of the signal to heat. Signal energy is also lost when it leaks through the insulation of
the cable and by impedance caused by defective connectors.
Impedance is a measurement of the resistance of the cable to alternating current (AC) and is measured in
ohms. The normal impedance of a Category 5 cable is 100 ohms. If a connector is improperly installed on
Category 5 cable, it will have a different impedance value than the cable. This is called an impedance discontinuity
or an impedance mismatch.
Impedance discontinuities cause attenuation because a portion of a transmitted signal is reflected back, like
an echo, and does not reach the receiver. This effect is compounded if multiple discontinuities cause
additional portions of the signal to be reflected back to the transmitter. When the reflected signal strikes the
first discontinuity, some of the signal rebounds in the original direction, which creates multiple echo effects.
The echoes strike the receiver at different intervals. This makes it difficult for the receiver to detect data
values. This is called jitter and results in data errors.
The combination of the effects of signal attenuation and impedance discontinuities on a communications link
is called insertion loss. Proper network operation depends on constant characteristic impedance in all cables
and connectors, with no impedance discontinuities in the entire cable system.

4.2.3 Sources of noise on copper media (Core)


This page will describe the sources of noise on copper cables.
Noise is any electrical energy on the transmission cable that makes it difficult for a receiver to interpret the
data sent from the transmitter. TIA/EIA-568-B certification now requires cables to be tested for a variety of
types of noise.
Crosstalk involves the transmission of signals from one wire to a nearby wire. When voltages change on a
wire, electromagnetic energy is generated. This energy radiates outward from the wire like a radio signal
from a transmitter. Adjacent wires in the cable act like antennas and receive the transmitted energy, which
interferes with data on those wires. Crosstalk can also be caused by signals on separate, nearby cables.
When crosstalk is caused by a signal on another cable, it is called alien crosstalk. Crosstalk is more
destructive at higher transmission frequencies.
Cable testing instruments measure crosstalk by applying a test signal to one wire pair. The cable tester then
measures the amplitude of the unwanted crosstalk signals on the other wire pairs in the cable.
Twisted-pair cable is designed to take advantage of the effects of crosstalk in order to minimize noise. In
twisted-pair cable, a pair of wires is used to transmit one signal. The wire pair is twisted so that each wire
experiences similar crosstalk. Because a noise signal on one wire will appear identically on the other wire,
this noise can be easily detected and filtered at the receiver.
Twisted wire pairs in a cable are also more resistant to crosstalk or noise signals from adjacent wire pairs.
Higher categories of UTP require more twists on each wire pair in the cable to minimize crosstalk at high
transmission frequencies. When connectors are attached to the ends of UTP cable, the wire pairs should be
untwisted as little as possible to ensure reliable LAN communications.
4.2.4 Types of crosstalk (Core)


This page defines the three types of crosstalk:
Near-end Crosstalk (NEXT)
Far-end Crosstalk (FEXT)
Power Sum Near-end Crosstalk (PSNEXT)
Near-end crosstalk (NEXT) is computed as the ratio of voltage amplitude between the test signal and the
crosstalk signal when measured from the same end of the link. This difference is expressed in a negative
value of decibels (dB). Low negative numbers indicate more noise, just as low negative temperatures
indicate more heat. By tradition, cable testers do not show the minus sign indicating the negative NEXT
values. A NEXT reading of 30 dB (which actually indicates -30 dB) indicates less NEXT noise and a cleaner
signal than does a NEXT reading of 10 dB.
NEXT needs to be measured from each pair to each other pair in a UTP link, and from both ends of the link.
To shorten test times, some cable test instruments allow the user to test the NEXT performance of a link by
using larger frequency step sizes than specified by the TIA/EIA standard. The resulting measurements may
not comply with TIA/EIA-568-B, and may overlook link faults. To verify proper link performance, NEXT should
be measured from both ends of the link with a high-quality test instrument. This is also a requirement for
complete compliance with high-speed cable specifications.
Due to attenuation, crosstalk occurring further away from the transmitter creates less noise on a cable than
NEXT. This is called far-end crosstalk, or FEXT. The noise caused by FEXT still travels back to the source,
but it is attenuated as it returns. Thus, FEXT is not as significant a problem as NEXT.
Power Sum NEXT (PSNEXT) measures the cumulative effect of NEXT from all wire pairs in the cable.
PSNEXT is computed for each wire pair based on the NEXT effects of the other three pairs. The combined
effect of crosstalk from multiple simultaneous transmission sources can be very detrimental to the signal.
TIA/EIA-568-B certification now requires this PSNEXT test.
Some Ethernet standards such as 10BASE-T and 100BASE-TX receive data from only one wire pair in each
direction. However, for newer technologies such as 1000BASE-T that receive data simultaneously from
multiple pairs in the same direction, power sum measurements are very important tests.
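A power sum combines the individual pair-to-pair NEXT values as powers rather than as decibels. The following minimal Python sketch shows one common way to perform that combination; the three NEXT readings are hypothetical tester-style magnitudes, displayed without the minus sign, so larger numbers mean less crosstalk.

    import math

    # Hypothetical NEXT readings (in dB, magnitudes as a tester would display them)
    # measured on one pair against each of the other three pairs.
    next_db = [40.0, 42.0, 45.0]

    # Convert each reading to a linear power ratio, add the powers, convert back to dB.
    total_power = sum(10 ** (-n / 10) for n in next_db)
    psnext_db = -10 * math.log10(total_power)

    print(round(psnext_db, 1))   # slightly lower than the worst single pair, meaning more combined noise
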
4.2.5 Cable testing standards (Core)
This page will describe the TIA/EIA-568-B standard. This standard specifies ten tests that a copper cable
must pass if it will be used for modern, high-speed Ethernet LANs.
All cable links should be tested to the maximum rating that applies for the category of cable being installed.
The ten primary test parameters that must be verified for a cable link to meet TIA/EIA standards are:
Wire map
Insertion loss
Near-end crosstalk (NEXT)
Power sum near-end crosstalk (PSNEXT)
Equal-level far-end crosstalk (ELFEXT)
Power sum equal-level far-end crosstalk (PSELFEXT)
Return loss
Propagation delay
Cable length
Delay skew
The Ethernet standard specifies that each of the pins on an RJ-45 connector has a particular purpose. A
NIC transmits signals on pins 1 and 2, and it receives signals on pins 3 and 6. The wires in UTP cable must
be connected to the proper pins at each end of a cable. The wire map test ensures that no open or short
circuits exist on the cable. An open circuit occurs if the wire does not attach properly at the connector. A short
circuit occurs if two wires are connected to each other.
The wire map test also verifies that all eight wires are connected to the correct pins on both ends of the
cable. There are several different wiring faults that the wire map test can detect. The reversed-pair fault
occurs when a wire pair is correctly installed on one connector, but reversed on the other connector. If the
white/orange wire is terminated on pin 1 and the orange wire is terminated on pin 2 at one end of a cable, but
reversed at the other end, then the cable has a reversed-pair fault. This example is shown in the graphic.
A split-pair wiring fault occurs when one wire from one pair is switched with one wire from a different pair at
both ends. Look carefully at the pin numbers in the graphic to detect the wiring fault. A split pair creates two
transmit or receive pairs, each with two wires that are not twisted together. This mixing hampers the cross-cancellation process and makes the cable more susceptible to crosstalk and interference. Contrast this with
a reversed pair, where the same pair of pins is used at both ends.

4.2.6 Other test parameters (Optional)
This page will explain how cables are tested for crosstalk and attenuation.
The combination of the effects of signal attenuation and impedance discontinuities on a communications link
is called insertion loss. Insertion loss is measured in decibels at the far end of the cable. The TIA/EIA
standard requires that a cable and its connectors pass an insertion loss test before the cable can be used as
a communications link in a LAN.
Crosstalk is measured in four separate tests. A cable tester measures NEXT by applying a test signal to one
cable pair and measuring the amplitude of the crosstalk signals received by the other cable pairs. The NEXT
value, expressed in decibels, is computed as the difference in amplitude between the test signal and the
crosstalk signal measured at the same end of the cable. Remember, because the number of decibels that
the tester displays is a negative number, the larger the number, the lower the NEXT on the wire pair. As
previously mentioned, the PSNEXT test is actually a calculation based on combined NEXT effects.
The equal-level far-end crosstalk (ELFEXT) test measures FEXT. Pair-to-pair ELFEXT is expressed in dB as
the difference between the measured FEXT and the insertion loss of the wire pair whose signal is disturbed
by the FEXT. ELFEXT is an important measurement in Ethernet networks using 1000BASE-T technologies.
Power sum equal-level far-end crosstalk (PSELFEXT) is the combined effect of ELFEXT from all wire pairs.
Return loss is a measure in decibels of reflections that are caused by the impedance discontinuities at all
locations along the link. Recall that the main impact of return loss is not on loss of signal strength. The
significant problem is that signal echoes caused by the reflections from the impedance discontinuities will
strike the receiver at different intervals causing signal jitter.
4.2.7 Time-based parameters (Optional)
This page will discuss propagation delay and how it is measured.

Propagation delay is a simple measurement of how long it takes for a signal to travel along the cable being
tested. The delay in a wire pair depends on its length, twist rate, and electrical properties. Delays are
measured in hundredths of nanoseconds. One nanosecond is one-billionth of a second, or 0.000000001
second. The TIA/EIA-568-B standard sets a limit for propagation delay for the various categories of UTP.
Propagation delay measurements are the basis of the cable length measurement. TIA/EIA-568-B.1 specifies
that the physical length of the link shall be calculated using the wire pair with the shortest electrical delay.
Testers measure the length of the wire based on the electrical delay as measured by a Time Domain
Reflectometry (TDR) test, not by the physical length of the cable jacket. Since the wires inside the cable are
twisted, signals actually travel farther than the physical length of the cable. When a cable tester makes a
TDR measurement, it sends a pulse signal down a wire pair and measures the amount of time required for
the pulse to return on the same wire pair.
The TDR test is used not only to determine length, but also to identify the distance to wiring faults such as
shorts and opens. When the pulse encounters an open, short, or poor connection, all or part of the pulse
energy is reflected back to the tester. This can be used to calculate the approximate distance to the wiring
fault. The approximate distance can be helpful in locating a faulty connection point along a cable run, such
as a wall jack.
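The distance estimate comes from the measured round-trip time and the speed at which signals travel in the cable. The sketch below shows this calculation under the usual assumption that the cable's nominal velocity of propagation (NVP) is known; the NVP and the measured time are hypothetical values, since neither is given in the text.

    SPEED_OF_LIGHT = 3.0e8        # meters per second (approximate)

    nvp = 0.69                    # hypothetical nominal velocity of propagation for the cable
    round_trip_seconds = 580e-9   # hypothetical measured round-trip time of the pulse

    # The pulse travels to the fault and back, so divide the round trip by two.
    distance_meters = (nvp * SPEED_OF_LIGHT * round_trip_seconds) / 2
    print(round(distance_meters, 1))   # roughly 60 meters for these values
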
The propagation delays of different wire pairs in a single cable can differ slightly because of differences in the
number of twists and electrical properties of each wire pair. The delay difference between pairs is called
delay skew. Delay skew is a critical parameter for high-speed networks in which data is simultaneously
transmitted over multiple wire pairs, such as 1000BASE-T Ethernet. If the delay skew between the pairs is
too great, the bits arrive at different times and the data cannot be properly reassembled. Even though a
cable link may not be intended for this type of data transmission, testing for delay skew helps ensure that the
link will support future upgrades to high-speed networks.
All cable links in a LAN must pass all of the tests previously mentioned, as specified in the TIA/EIA-568-B
standard, in order to be considered standards compliant. A certification meter must be used to verify that all
of the tests are passed. These tests ensure that the cable
links will function reliably at high speeds and frequencies. Cable tests should be performed when the cable is
installed and afterward on a regular basis to ensure that LAN cabling meets industry standards. High quality
cable test instruments should be correctly used to ensure that the tests are accurate. Test results should also
be carefully documented.
4.2.8 Testing optical fiber (Optional)
This page will explain how optical fiber is tested.
A fiber link consists of two separate glass fibers functioning as independent data pathways. One fiber carries
transmitted signals in one direction, while the second carries signals in the opposite direction. Each glass
fiber is surrounded by a sheath that light cannot pass through, so there are no crosstalk problems on fiber
optic cable. External electromagnetic interference or noise has no effect on fiber cabling. Attenuation does
occur on fiber links, but to a lesser extent than on copper cabling.
Fiber links are subject to the optical equivalent of UTP impedance discontinuities. When light encounters
an optical discontinuity, like an impurity in the glass or a micro-fracture, some of the light signal is reflected
back in the opposite direction. This means only a fraction of the original light signal will continue down the
fiber towards the receiver. This results in a reduced amount of light energy arriving at the receiver, making
signal recognition difficult. Just as with UTP cable, improperly installed connectors are the main cause of
light reflection and signal strength loss in optical fiber.
Because noise is not an issue when transmitting on optical fiber, the main concern with a fiber link is the
strength of the light signal that arrives at the receiver. If attenuation weakens the light signal at the receiver,
then data errors will result. Testing fiber optic cable primarily involves shining a light down the fiber and
measuring whether a sufficient amount of light reaches the receiver.
On a fiber optic link, the acceptable amount of signal power loss that can occur without dropping below the
requirements of the receiver must be calculated. This calculation is referred to as the optical link loss budget.
A fiber test instrument, known as a light source and power meter, checks whether the optical link loss budget
has been exceeded. If the fiber fails the test, another cable test instrument can be used to indicate where
the optical discontinuities occur along the length of the cable link. An optical TDR known as an OTDR is
capable of locating these discontinuities. Usually, the problem is one or more improperly attached
connectors. The OTDR will indicate the location of the faulty connections that must be replaced. When the
faults are corrected, the cable must be retested.
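The optical link loss budget described above is essentially a sum of the expected losses along the link compared against the margin the receiver allows. The sketch below illustrates the arithmetic with hypothetical per-kilometer, per-connector, and per-splice loss figures; real values come from component specifications and the applicable standards.

    # Hypothetical loss figures, for illustration only
    fiber_loss_db_per_km = 0.4
    connector_loss_db = 0.75
    splice_loss_db = 0.3

    link_length_km = 2.0
    connectors = 2
    splices = 1

    total_loss_db = (fiber_loss_db_per_km * link_length_km
                     + connector_loss_db * connectors
                     + splice_loss_db * splices)

    allowed_budget_db = 4.0   # hypothetical budget for this transmitter/receiver pair

    print(total_loss_db)                       # 2.6 dB for these numbers
    print(total_loss_db <= allowed_budget_db)  # True: the link is within budget
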
The standards for testing are updated regularly. The next page will introduce a new standard.
4.2.9 A new standard (Optional)
This page discusses the new test standards for Category 6 cable.
On June 20, 2002, the Category 6 addition to the TIA-568 standard was published. The official title of the
standard is ANSI/TIA/EIA-568-B.2-1. This new standard specifies the original set of performance parameters
that need to be tested for Ethernet cabling as well as the passing scores for each of these tests. Cables
certified as Category 6 cable must pass all ten tests.

Although the Category 6 tests are essentially the same as those specified by the Category 5 standard,
Category 6 cable must pass the tests with higher scores to be certified. Category 6 cable must be capable of
carrying frequencies up to 250 MHz and must have lower levels of crosstalk and return loss.
A quality cable tester similar to the Fluke DSP-4000 series or Fluke OMNIScanner2 can perform all the test
measurements required for Category 5, Category 5e, and Category 6 cable certifications of both permanent
links and channel links. Figure shows the Fluke DSP-4100 Cable Analyzer with a DSP-LIA013
Channel/Traffic Adapter for Category 5e.
The Lab Activities will teach students how to use a cable tester.
This page concludes this lesson. The next page will summarize the main points from the module.

Summary
This page summarizes the topics discussed in this module.
Data symbolizing characters, words, pictures, video, or music can be represented electrically by voltage
patterns on wires and in electronic devices. The data represented by these voltage patterns can be
converted to light waves or radio waves, and then back to voltage patterns. Waves are energy traveling from
one place to another, and are created by disturbances. All waves have similar attributes such as amplitude,
period, and frequency. Sine waves are periodic, continuously varying functions. Analog signals look like sine
waves. Square waves are periodic functions whose values remain constant for a period of time and then
change abruptly. Digital signals look like square waves.
Exponents are used to represent very large or very small numbers. The base of a number raised to a
positive exponent is equal to the base multiplied by itself exponent times. For example, 10^3 = 10 x 10 x 10 =
1000. Logarithms are similar to exponents. A logarithm to the base of 10 of a number equals the exponent to
which 10 would have to be raised in order to equal the number. For example, log10 1000 = 3 because 10^3 =
1000.
Decibels are measurements of a gain or loss in the power of a signal. Negative values represent losses and
positive values represent gains. Time and frequency analysis can both be used to graph the voltage or power
of a signal.
Undesirable signals in a communications system are called noise. Noise originates from other cables, radio
frequency interference (RFI), and electromagnetic interference (EMI). Noise may affect all signal frequencies
or a subset of frequencies.
Analog bandwidth is the frequency range that is associated with certain analog transmission, such as
television or FM radio. Digital bandwidth measures how much information can flow from one place to another
in a given amount of time. Its units are in various multiples of bits per second.
On copper cable, data signals are represented by voltage levels that correspond to binary ones and zeros. In
order for the LAN to operate properly, the receiving device must be able to accurately interpret the bit signal.
Proper cable installation according to standards increases LAN reliability and performance.
Signal degradation is due to various factors such as attenuation, impedance mismatch, noise, and several
types of crosstalk. Attenuation is the decrease in signal amplitude over the length of a link. Impedance is a
measurement of resistance to the electrical signal. Cables and the connectors used on them must have
similar impedance values or some of the data signal may be reflected back from a connector. This is referred
to as impedance mismatch or impedance discontinuity. Noise is any electrical energy on the transmission
cable that makes it difficult for a receiver to interpret the data sent from the transmitter. Crosstalk involves the
transmission of signals from one wire to a nearby wire. There are three distinct types of crosstalk: Near-end
Crosstalk (NEXT), Far-end Crosstalk (FEXT), and Power Sum Near-end Crosstalk (PSNEXT).
STP and UTP cable are designed to take advantage of the effects of crosstalk in order to minimize noise.
Additionally, STP contains an outer conductive shield and inner foil shields that make it less susceptible to
noise. UTP contains no shielding and is more susceptible to external noise but is the most frequently used
because it is inexpensive and easier to install.
Fiber-optic cable is used to transmit data signals by increasing and decreasing the intensity of light to
represent binary ones and zeros. The strength of a light signal does not diminish like the strength of an
electrical signal does over an identical run length. Optical signals are not affected by electrical noise, and
optical fiber does not need to be grounded. Therefore, optical fiber is often used between buildings and
between floors within a building.
The TIA/EIA-568-B standard specifies ten tests that a copper cable must pass if it will be used for
modern, high-speed Ethernet LANs. Optical fiber must also be tested according to networking standards.
Category 6 cable must meet more rigorous frequency testing standards than Category 5 cable.
5.1 Cabling LANs
Overview
Even though each LAN is unique, there are many design aspects that are common to all LANs. For example,
most LANs follow the same standards and use the same components. This module presents information on
elements of Ethernet LANs and common LAN devices.
There are several types of WAN connections. They range from dial-up to broadband access and differ in
bandwidth, cost, and required equipment. This module presents information on the various types of WAN
connections.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Identify characteristics of Ethernet networks
Identify straight-through, crossover, and rollover cables
Describe the function, advantages, and disadvantages of repeaters, hubs, bridges, switches, and
wireless network components
Describe the function of peer-to-peer networks
Describe the function, advantages, and disadvantages of client-server networks
Describe and differentiate between serial, ISDN, DSL, and cable modem WAN connections
Identify router serial ports, cables, and connectors
Identify and describe the placement of equipment used in various WAN configurations
5.1.1 LAN physical layer
This page describes the LAN physical layer.
Various symbols are used to represent media types. Token Ring is represented by a circle. FDDI is
represented by two concentric circles, and Ethernet is represented by a straight line. Serial
connections are represented by a lightning bolt.
Each computer network can be built with many different media types. The function of media is to carry a flow
of information through a LAN. Wireless LANs use the atmosphere, or space, as the medium. Other
networking media confine network signals to a wire, cable, or fiber. Networking media are considered Layer
1, or physical layer, components of LANs.
Each type of media has advantages and disadvantages. These are based on the following factors:
Cable length
Cost
Ease of installation
Susceptibility to interference
Coaxial cable, optical fiber, and space can carry network signals. This module will focus on Category 5 UTP,
which includes the Category 5e family of cables.
Many topologies support LANs, as well as many different physical media. Figure shows a subset of
physical layer implementations that can be deployed to support Ethernet.
The next page explains how Ethernet is implemented in a campus environment.

5.1.2 Ethernet in the campus


This page will discuss Ethernet.
Ethernet is the most widely used LAN technology. Ethernet was first implemented by the Digital, Intel, and
Xerox group (DIX). DIX created and implemented the first Ethernet LAN specification, which was used as the
basis for the Institute of Electrical and Electronics Engineers (IEEE) 802.3 specification, released in 1980.
IEEE extended 802.3 to three new committees known as 802.3u for Fast Ethernet, 802.3z for Gigabit
Ethernet over fiber, and 802.3ab for Gigabit Ethernet over UTP.
A network may require an upgrade to one of the faster Ethernet topologies. Most Ethernet networks support
speeds of 10 Mbps and 100 Mbps.
The new generation of multimedia, imaging, and database products can easily overwhelm a network that
operates at traditional Ethernet speeds of 10 and 100 Mbps. Network administrators may choose to provide
Gigabit Ethernet from the backbone to the end user. Installation costs for new cables and adapters can
make this prohibitive.
There are several ways that Ethernet technologies can be used in a campus network:
An Ethernet speed of 10 Mbps can be used at the user level to provide good performance. Clients or
servers that require more bandwidth can use 100-Mbps Ethernet.
Fast Ethernet is used as the link between user and network devices. It can support the combination
of all traffic from each Ethernet segment.
Fast Ethernet can be used to connect enterprise servers. This will enhance client-server
performance across the campus network and help prevent bottlenecks.
Fast Ethernet or Gigabit Ethernet should be implemented between backbone devices, based on
affordability.
The media and connector requirements for an Ethernet implementation are discussed on the next page.

5.1.3 Ethernet media and connector requirements


This page provides important considerations for an Ethernet implementation. These include the media and
connector requirements and the level of network performance.
The cables and connector specifications used to support Ethernet implementations are derived from the
EIA/TIA standards. The categories of cabling defined for Ethernet are derived from the EIA/TIA-568 SP-2840
Commercial Building Telecommunications Wiring Standards.
Figure compares the cable and connector specifications for the most popular Ethernet implementations. It
is important to note the difference in the media used for 10-Mbps Ethernet versus 100-Mbps Ethernet.
Networks with a combination of 10- and 100-Mbps traffic use Category 5 UTP to support Fast Ethernet.
The next page will discuss the different connection types.

5.1.4 Connection media

This page describes the different connection types used by each physical layer implementation, as shown in
Figure . The RJ-45 connector and jack are the most common. RJ-45 connectors are discussed in more
detail in the next section.
The connector on a NIC may not match the media to which it needs to connect. As shown in Figure , an
interface may exist for the 15-pin attachment unit interface (AUI) connector. The AUI connector allows
different media to connect when used with the appropriate transceiver. A transceiver is an adapter that
converts one type of connection to another. A transceiver will usually convert an AUI to an RJ-45, a coax, or
a fiber optic connector. On 10BASE5 Ethernet, or Thicknet, a short cable is used to connect the AUI with a
transceiver on the main cable.

5.1.5 UTP implementation


This page provides detailed information for a UTP implementation.
EIA/TIA specifies an RJ-45 connector for UTP cable. The letters RJ stand for registered jack and the number
45 refers to a specific wiring sequence. The RJ-45 transparent end connector shows eight colored wires.
Four of the wires, T1 through T4, carry the voltage and are called tip. The other four wires, R1 through R4,
are grounded and are called ring. Tip and ring are terms that originated in the early days of the telephone.
Today, these terms refer to the positive and the negative wire in a pair. The wires in the first pair in a cable or
a connector are designated as T1 and R1. The second pair is T2 and R2, the third is T3 and R3, and the
fourth is T4 and R4.
The RJ-45 connector is the male component, which is crimped on the end of the cable. When a male
connector is viewed from the front, the pin locations are numbered from 8 on the left to 1 on the right as seen
in Figure .
The jack is the female component in a network device, wall outlet, or patch panel as seen in Figure . Figure
shows the punch-down connections at the back of the jack where the Ethernet UTP cable connects.
For electricity to run between the connector and the jack, the order of the wires must follow T568A or T568B
color code found in the EIA/TIA-568-B.1 standard, as shown in Figure . To determine the EIA/TIA category
of cable that should be used to connect a device, refer to the documentation for that device or look for a label
on the device near the jack. If there are no labels or documentation available, use Category 5e or greater as
higher categories can be used in place of lower ones. Then determine whether to use a straight-through
cable or a crossover cable.
If the two RJ-45 connectors of a cable are held side by side in the same orientation, the colored wires will be
seen in each. If the order of the colored wires is the same at each end, then the cable is a straight-through,
as seen in Figure .
In a crossover cable, the RJ-45 connectors on both ends show that some of the wires are connected to
different pins on each side of the cable. Figure shows that pins 1 and 2 on one connector connect to pins 3
and 6 on the other.
Figure shows the guidelines that are used to determine the type of cable that is required to connect Cisco
devices.
Use straight-through cables for the following connections:
Switch to router
Switch to PC or server
Hub to PC or server
Use crossover cables for the following connections:
Switch to switch
Switch to hub
Hub to hub
Router to router
PC to PC
Router to PC
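As a cross-check of these guidelines, the following minimal Python sketch encodes the T568A and T568B pin-to-color orders and classifies a cable from the color order seen at each connector. The helper name cable_type is invented here for illustration only.

    # Pin-to-color orders for pins 1 through 8
    T568A = ["white-green", "green", "white-orange", "blue",
             "white-blue", "orange", "white-brown", "brown"]
    T568B = ["white-orange", "orange", "white-green", "blue",
             "white-blue", "green", "white-brown", "brown"]

    def cable_type(end1, end2):
        """Classify a cable from the color order seen at each connector."""
        if end1 == end2:
            return "straight-through"
        if (end1, end2) in ((T568A, T568B), (T568B, T568A)):
            return "crossover"   # pins 1 and 2 on one end map to pins 3 and 6 on the other
        return "unknown or miswired"

    print(cable_type(T568B, T568B))   # straight-through
    print(cable_type(T568A, T568B))   # crossover
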
Figure illustrates how a variety of cable types may be required in a given network. The category of UTP
cable required is based on the type of Ethernet that is chosen.
The Lab Activity shows the termination process for an RJ-45 jack.
The Interactive Media Activities provide detailed views of a straight-through and crossover cable.

5.1.6 Repeaters
This page will discuss how a repeater is used on a network.
The term repeater comes from the early days of long distance communication. A repeater was a person on
one hill who would repeat the signal that was just received from the person on the previous hill. The process
would repeat until the message arrived at its destination. Telegraph, telephone, microwave, and optical
communications use repeaters to strengthen signals sent over long distances.
A repeater receives a signal, regenerates it, and passes it on. It can regenerate and retime network signals
at the bit level to allow them to travel a longer distance on the media. Ethernet and IEEE 802.3 implement
a rule, known as the 5-4-3 rule, for the number of repeaters and segments on shared access Ethernet
backbones in a tree topology. The 5-4-3 rule divides the network into two types of physical segments:
populated (user) segments, and unpopulated (link) segments. User segments have users' systems
connected to them. Link segments are used to connect the network repeaters together. The rule mandates
that between any two nodes on the network, there can only be a maximum of five segments, connected
through four repeaters, or concentrators, and only three of the five segments may contain user connections.
The Ethernet protocol requires that a signal sent out over the LAN reach every part of the network within a
specified length of time. The 5-4-3 rule ensures this. Each repeater that a signal goes through adds a small
amount of time to the process, so the rule is designed to minimize transmission times of the signals. Too
much latency on the LAN increases the number of late collisions and makes the LAN less efficient.
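The 5-4-3 limits can be written as a simple check. This is a minimal sketch; the function name within_5_4_3_rule and the example values are invented here for illustration.

    def within_5_4_3_rule(segments, repeaters, populated_segments):
        """Check a shared Ethernet path against the 5-4-3 rule described above."""
        return (segments <= 5
                and repeaters <= 4
                and populated_segments <= 3
                and populated_segments <= segments)

    # Hypothetical path: 5 segments joined by 4 repeaters, 3 of them with users attached
    print(within_5_4_3_rule(5, 4, 3))   # True
    print(within_5_4_3_rule(6, 5, 3))   # False: too many segments and repeaters
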

5.1.7 Hubs
This page will describe the three types of hubs.
Hubs are actually multiport repeaters. The difference between hubs and repeaters is usually the number of
ports that each device provides. A typical repeater usually has two ports. A hub generally has from 4 to 24
ports. Hubs are most commonly used in Ethernet 10BASE-T or 100BASE-T networks.
The use of a hub changes the network from a linear bus with each device plugged directly into the wire to a
star topology. Data that arrives over the cables to a hub port is electrically repeated on all the other ports
connected to the network segment.
Hubs come in three basic types:
Passive - A passive hub serves as a physical connection point only. It does not manipulate or view
the traffic that crosses it. It does not boost or clean the signal. A passive hub is used only to share
the physical media. A passive hub does not need electrical power.
Active - An active hub must be plugged into an electrical outlet because it needs power to amplify a
signal before it is sent to the other ports.
Intelligent - Intelligent hubs are sometimes called smart hubs. They function like active hubs with
microprocessor chips and diagnostic capabilities. Intelligent hubs are more expensive than active
hubs. They are also more useful in troubleshooting situations.
Devices attached to a hub receive all traffic that travels through the hub. If many devices are attached to the
hub, collisions are more likely to occur. A collision occurs when two or more workstations send data over the
network wire at the same time. All data is corrupted when this occurs. All devices that are connected to the
same network segment are members of the same collision domain.
Sometimes hubs are called concentrators since they are central connection points for Ethernet LANs.
The Lab Activity will teach students about the price of different network components.
The next page discusses wireless networks.
5.1.8 Wireless
This page will explain how a wireless network can be created with much less cabling than other networks.
Wireless signals are electromagnetic waves that travel through the air. Wireless networks use radio
frequency (RF), laser, infrared (IR), satellite, or microwaves to carry signals between computers without a
permanent cable connection. The only permanent cabling can be to the access points for the network.
Workstations within the range of the wireless network can be moved easily without the need to connect and
reconnect network cables.
A common application of wireless data communication is for mobile use. Some examples of mobile use
include commuters, airplanes, satellites, remote space probes, space shuttles, and space stations.
At the core of wireless communication are devices called transmitters and receivers. The transmitter
converts source data to electromagnetic waves that are sent to the receiver. The receiver then converts
these electromagnetic waves back into data for the destination. For two-way communication, each device
requires a transmitter and a receiver. Many networking device manufacturers build the transmitter and
receiver into a single unit called a transceiver or wireless network card. All devices in a WLAN must have
the correct wireless network card installed.
The two most common wireless technologies used for networking are IR and RF. IR technology has its
weaknesses. Workstations and digital devices must be in the line of sight of the transmitter to work correctly.
An infrared-based network can be used when all the digital devices that require network connectivity are in
one room. IR networking technology can be installed quickly. However, the data signals can be weakened or
obstructed by people who walk across the room or by moisture in the air. New IR technologies will be able to
work out of sight.
RF technology allows devices to be in different rooms or buildings. The limited range of radio signals restricts
the use of this kind of network. RF technology can be on single or multiple frequencies. A single radio
frequency is subject to outside interference and geographic obstructions. It is also easily monitored by
others, which makes the transmissions of data insecure. Spread spectrum uses multiple frequencies to
increase the immunity to noise and to make it difficult for outsiders to intercept data transmissions.
Two approaches that are used to implement spread spectrum for WLAN transmissions are Frequency
Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS). The technical details of
how these technologies work are beyond the scope of this course.
A large LAN can be broken into smaller segments. The next page will explain how bridges are used to
accomplish this.

5.1.9 Bridges
This page will explain the function of bridges in a LAN.
There are times when it is necessary to break up a large LAN into smaller and more easily managed
segments.
This decreases the amount of traffic on a single LAN and can extend the geographical area
past what a single LAN can support. The devices that are used to connect network segments together
include bridges, switches, routers, and gateways. Switches and bridges operate at the data link layer of the
OSI model. The function of the bridge is to make intelligent decisions about whether or not to pass signals on
to the next segment of a network.
When a bridge receives a frame on the network, the destination MAC address is looked up in the bridge
table to determine whether to filter, flood, or copy the frame onto another segment. This decision process
occurs as follows:
If the destination device is on the same segment as the frame, the bridge will not send the frame
onto other segments. This process is known as filtering.
If the destination device is on a different segment, the bridge forwards the frame to the appropriate
segment.
If the destination address is unknown to the bridge, the bridge forwards the frame to all segments
except the one on which it was received. This process is known as flooding.
If placed strategically, a bridge can greatly improve network performance.
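The filter, forward, or flood decision can be expressed compactly in code. The following is a minimal sketch, not a real bridge implementation: the bridge table is a plain dictionary that maps MAC addresses to segments, and the addresses and segment names are hypothetical.

    def bridge_decision(bridge_table, dest_mac, arrival_segment, all_segments):
        """Decide what a transparent bridge does with a received frame."""
        known_segment = bridge_table.get(dest_mac)
        if known_segment == arrival_segment:
            return []                                  # filter: the frame stays on its own segment
        if known_segment is not None:
            return [known_segment]                     # forward to the known segment only
        return [s for s in all_segments if s != arrival_segment]   # flood all other segments

    # Hypothetical bridge table and segments
    table = {"00:0c:29:aa:bb:cc": "segment-1", "00:0c:29:dd:ee:ff": "segment-2"}
    segments = ["segment-1", "segment-2", "segment-3"]

    print(bridge_decision(table, "00:0c:29:aa:bb:cc", "segment-1", segments))  # [] (filtered)
    print(bridge_decision(table, "00:0c:29:dd:ee:ff", "segment-1", segments))  # ['segment-2']
    print(bridge_decision(table, "00:0c:29:12:34:56", "segment-1", segments))  # flooded
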

5.1.10 Switches
This page will explain the function of switches.
A switch is sometimes described as a multiport bridge. A typical bridge may have only two ports that link
two network segments. A switch can have multiple ports based on the number of network segments that
need to be linked. Like bridges, switches learn information about the data packets that are received from
computers on the network. Switches use this information to build tables to determine the destination of data
that is sent between computers on the network.
Although there are some similarities between the two, a switch is a more sophisticated device than a bridge.
A bridge determines whether the frame should be forwarded to the other network segment based on the
destination MAC address. A switch has many ports with many network segments connected to them. A
switch chooses the port to which the destination device or workstation is connected. Ethernet switches are
popular connectivity solutions because they improve network speed, bandwidth, and performance.
Switching is a technology that alleviates congestion in Ethernet LANs. Switches reduce traffic and increase
bandwidth. Switches can easily replace hubs because switches work with the cable infrastructures that are
already in place. This improves performance with minimal changes to a network.
All switching equipment performs two basic operations. The first operation is called switching data frames.
This is the process by which a frame is received on an input medium and then transmitted to an output
medium. The second is the maintenance of switching operations where switches build and maintain
switching tables and search for loops.
Switches operate at much higher speeds than bridges and can support new functionality, such as virtual
LANs.
An Ethernet switch has many benefits. One benefit is that it allows many users to communicate at the same
time through the use of virtual circuits and dedicated network segments in a virtually collision-free
environment. This maximizes the bandwidth available on the shared medium. Another benefit is that a
switched LAN environment is very cost effective since the hardware and cables in place can be reused.
The Lab activity will help students understand the price of a LAN switch.
The next page will discuss NICs.
5.1.11 Host connectivity
This page will explain how NICs provide network connectivity.
The function of a NIC is to connect a host device to the network medium. A NIC is a printed circuit board that fits into an expansion slot on the motherboard of a computer or into a peripheral device. The NIC is also referred to as a network adapter. On laptop or notebook computers, a NIC is the size of a credit card.
NICs are considered Layer 2 devices because each NIC carries a unique code called a MAC address. This
address is used to control data communication for the host on the network. More will be learned about the
MAC address later. NICs control host access to the medium.
In some cases the type of connector on the NIC does not match the type of media that needs to be
connected to it. A good example is a Cisco 2500 router. This router has an AUI connector. That AUI
connector needs to connect to a UTP Category 5 Ethernet cable. A transceiver is used to do this. A
transceiver converts one type of signal or connector to another. For example, a transceiver can connect a
15-pin AUI interface to an RJ-45 jack. It is considered a Layer 1 device because it only works with bits and
not with any address information or higher-level protocols.
NICs have no standardized symbol. It is implied that, when networking devices are attached to network
media, there is a NIC or NIC-like device present. A dot on a topology map represents either a NIC interface
or port, which acts like a NIC.
The next page discusses peer-to-peer networks.
5.1.12 Peer-to-peer
This page covers peer-to-peer networks.
When LAN and WAN technologies are used, many computers are interconnected to provide services to their
users. To accomplish this, networked computers take on different roles or functions in relation to each other.
Some types of applications require computers to function as equal partners. Other types of applications
distribute their work so that one computer functions to serve a number of others in an unequal relationship.
Two computers generally use request and response protocols to communicate with each other. One
computer issues a request for a service, and a second computer receives and responds to that request. The
requestor acts like a client and the responder acts like a server.
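This request and response exchange can be sketched with Python's standard socket module. The sketch below runs the responder in a thread on the local machine; the address, port number, and message contents are arbitrary values chosen for the example.

# Minimal request/response sketch: one socket acts as the server (responder),
# the other as the client (requestor). Runs entirely on the local machine.
import socket
import threading

HOST, PORT = "127.0.0.1", 5000        # arbitrary local address and port
ready = threading.Event()

def responder():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                    # signal that the server is listening
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)                  # receive the request
            conn.sendall(b"response to: " + request)   # respond to it

threading.Thread(target=responder, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"file request")       # the client issues a request
    print(cli.recv(1024).decode())     # and reads the server's response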
In a peer-to-peer network, networked computers act as equal partners, or peers. As peers, each computer
can take on the client function or the server function. Computer A may request a file from Computer B,
which then sends the file to Computer A. Computer A acts like the client and Computer B acts like the server.
At a later time, Computers A and B can reverse roles.
In a peer-to-peer network, individual users control their own resources. The users may decide to share
certain files with other users.
The users may also require passwords before they allow others to access
their resources. Since individual users make these decisions, there is no central point of control or
administration in the network. In addition, individual users must back up their own systems to be able to
recover from data loss in case of failures. When a computer acts as a server, the user of that machine may
experience reduced performance as the machine serves the requests made by other systems.
Peer-to-peer networks are relatively easy to install and operate. No additional equipment is necessary
beyond a suitable operating system installed on each computer. Since users control their own resources, no
dedicated administrators are needed.
As networks grow, peer-to-peer relationships become increasingly difficult to coordinate. A peer-to-peer
network works well with ten or fewer computers. Since peer-to-peer networks do not scale well, their
efficiency decreases rapidly as the number of computers on the network increases. Also, individual users
control access to the resources on their computers, which means security may be difficult to maintain. The
client/server model of networking can be used to overcome the limitations of the peer-to-peer network.
Students will create a simple peer-to-peer network in the Lab Activity.
The next page discusses a client/server network.
5.1.13 Client/server
This page will describe a client/server environment.
In a client/server arrangement, network services are located on a dedicated computer called a server. The
server responds to the requests of clients. The server is a central computer that is continuously available to
respond to requests from clients for file, print, application, and other services. Most network operating
systems adopt the form of a client/server relationship. Typically, desktop computers function as clients and
one or more computers with additional processing power, memory, and specialized software function as
servers.
Servers are designed to handle requests from many clients simultaneously. Before a client can access the
server resources, the client must be identified and be authorized to use the resource. Each client is assigned
an account name and password that is verified by an authentication service. The authentication service
guards access to the network. With the centralization of user accounts, security, and access control, server-based networks simplify the administration of large networks.
The concentration of network resources such as files, printers, and applications on servers also makes it
easier to back-up and maintain the data. Resources can be located on specialized, dedicated servers for
easier access. Most client/server systems also include ways to enhance the network with new services that
extend the usefulness of the network.
The centralization of functions in a client/server network brings substantial advantages and some disadvantages. Although a centralized server enhances security, ease of access, and control, it introduces a single point of failure into the network. Without an operational server, the network cannot function at all. Servers also require trained, expert staff to administer and maintain them, as well as additional hardware
and specialized software that add to the cost.
The figures summarize the advantages and disadvantages of peer-to-peer and client/server networks.
In the Lab Activities, students will build a hub-based network and a switch-based network.
This page concludes this lesson. The next lesson will discuss cabling WANs. The first page focuses on
the WAN physical layer.
5.2 Cabling WANs
5.2.1 WAN physical layer
This page describes the WAN physical layer.
The physical layer implementations vary based on the distance of the equipment from each service, the
speed, and the type of service. Serial connections are used to support WAN services such as dedicated
leased lines that run PPP or Frame Relay. The speed of these connections ranges from 2400 bps to T1
service at 1.544 Mbps and E1 service at 2.048 Mbps.
ISDN offers dial-on-demand connections or dial backup services. An ISDN Basic Rate Interface (BRI) is
composed of two 64 kbps bearer channels (B channels) for data, and one delta channel (D channel) at 16
kbps used for signaling and other link-management tasks. PPP is typically used to carry data over the B
channels.
As the demand for residential broadband high-speed services has increased, DSL and cable modem
connections have become more popular. Typical residential DSL service can achieve T1/E1 speeds over the
telephone line. Cable services use the coaxial cable TV line. A coaxial cable line provides high-speed
connectivity that matches or exceeds DSL. DSL and cable modem service will be covered in more detail in a
later module.
Students can identify the WAN physical layer components in the Interactive Media Activity.
The next page will describe WAN serial connections.
5.2.2 WAN serial connections
This page will discuss WAN serial connections.
For long distance communication, WANs use serial transmission. This is a process by which bits of data are sent sequentially over a single channel. Serial transmission provides reliable long distance communication and uses a specific electromagnetic or optical frequency range.
Frequencies are measured in terms of cycles per second and expressed in Hz. Signals transmitted over
voice grade telephone lines use 4 kHz. The size of the frequency range is referred to as bandwidth. In
networking, bandwidth is a measure of the bits per second that are transmitted.
For a Cisco router, physical connectivity at the customer site is provided by one of two types of serial
connections. The first type is a 60-pin connector. The second is a more compact smart serial connector. The
provider connector will vary depending on the type of service equipment.
If the connection is made directly to a service provider, or a device that provides signal clocking such as a
channel/data service unit (CSU/DSU), the router will be a data terminal equipment (DTE) and use a DTE
serial cable. Typically this is the case. However, there are occasions where the local router is required to
provide the clocking rate and therefore will use a data communications equipment (DCE) cable. In the
curriculum router labs one of the connected routers will need to provide the clocking function. Therefore, the
connection will consist of a DCE and a DTE cable.
The next page will discuss routers and serial connections.
5.2.3 Routers and serial connections
This page will describe how routers and serial connections are used in a WAN.
Routers are responsible for routing data packets from source to destination within the LAN, and for providing
connectivity to the WAN. Within a LAN environment the router contains broadcasts, provides local address
resolution services, such as ARP and RARP, and may segment the network using a subnetwork structure. In
order to provide these services the router must be connected to the LAN and WAN.
In addition to determining the cable type, it is necessary to determine whether DTE or DCE connectors are
required. The DTE is the endpoint of the user's device on the WAN link. The DCE is typically the point where
responsibility for delivering data passes into the hands of the service provider.
When connecting directly to a service provider, or to a device such as a CSU/DSU that will perform signal
clocking, the router is a DTE and needs a DTE serial cable. This is typically the case for routers. However,
there are cases when the router will need to be the DCE. When performing a back-to-back router scenario in
a test environment, one of the routers will be a DTE and the other will be a DCE.
When cabling routers for serial connectivity, the routers will either have fixed or modular ports. The type of
port being used will affect the syntax used later to configure each interface.
Interfaces on routers with fixed serial ports are labeled for port type and port number.
Interfaces on routers with modular serial ports are labeled for port type, slot, and port number. The slot is
the location of the module. To configure a port on a modular card, it is necessary to specify the interface
using the syntax port type slot number/port number. For example, use the label serial 1/0 when the interface is serial, the module is installed in slot 1, and the port that is being referenced is port 0.
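As a purely illustrative aid (the interface names below are examples, not output from any particular router), a one-line Python helper shows how the label is assembled from its parts.

# Illustrative helper that builds a modular interface label of the form
# "port type slot/port", such as "serial 1/0".
def interface_label(port_type, slot, port):
    return f"{port_type} {slot}/{port}"

print(interface_label("serial", 1, 0))        # serial 1/0
print(interface_label("fastethernet", 0, 1))  # fastethernet 0/1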
The first Lab Activity will require students to identify the Ethernet or Fast Ethernet interfaces on a router.
In the next two Lab Activities, students will create and troubleshoot a basic WAN.
The next page discusses routers and ISDN BRI connections.
5.2.4 Routers and ISDN BRI connections
This page will help students understand ISDN BRI connections.
With ISDN BRI, two types of interfaces may be used, BRI S/T and BRI U. Determine who is providing the
Network Termination 1 (NT1) device in order to determine which interface type is needed.
An NT1 is an intermediate device located between the router and the service provider ISDN switch. The NT1
is used to connect four-wire subscriber wiring to the conventional two-wire local loop. In North America, the
customer typically provides the NT1, while in the rest of the world the service provider provides the NT1
device.
It may be necessary to provide an external NT1 if the device is not already integrated into the router.
Reviewing the labeling on the router interfaces is usually the easiest way to determine if the router has an
integrated NT1. A BRI interface with an integrated NT1 is labeled BRI U. A BRI interface without an
integrated NT1 is labeled BRI S/T. Because routers can have multiple ISDN interface types, determine which
interface is needed when the router is purchased. The type of BRI interface may be determined by looking at
the port label. To interconnect the ISDN BRI port to the service-provider device, use a UTP Category 5
straight-through cable.
CAUTION:
It is important to insert the cable running from an ISDN BRI port only to an ISDN jack or an ISDN switch.
ISDN BRI uses voltages that can seriously damage non-ISDN devices.
The next page discusses DSL for a router.
5.2.5 Routers and DSL connections
This page describes routers and DSL connections.
The Cisco 827 ADSL router has one asymmetric digital subscriber line (ADSL) interface. To connect an
ADSL line to the ADSL port on a router, do the following:
Connect the phone cable to the ADSL port on the router.
Connect the other end of the phone cable to the phone jack.
To connect a router for DSL service, use a phone cable with RJ-11 connectors. DSL works over standard
telephone lines using pins 3 and 4 on a standard RJ-11 connector.
The next page will discuss cable connections.
5.2.6 Routers and cable connections
This page will explain how routers are connected to cable systems.
The Cisco uBR905 cable access router provides high-speed network access on the cable television system
to residential and small office, home office (SOHO) subscribers. The uBR905 router has a coaxial cable, or
F-connector, interface that connects directly to the cable system. Coaxial cable and an F connector are used
to connect the router and cable system.
Use the following steps to connect the Cisco uBR905 cable access router to the cable system:
Verify that the router is not connected to power.
Locate the RF coaxial cable coming from the coaxial cable (TV) wall outlet.
Install a cable splitter/directional coupler, if needed, to separate signals for TV and computer use. If
necessary, also install a high-pass filter to prevent interference between the TV and computer
signals.
Connect the coaxial cable to the F connector of the router. Hand-tighten the connector, making
sure that it is finger-tight, and then give it a 1/6 turn with a wrench.
Make sure that all other coaxial cable connectors, all intermediate splitters, couplers, or ground
blocks, are securely tightened from the distribution tap to the Cisco uBR905 router.
CAUTION:
Do not overtighten the connector. Overtightening may break off the connector. Do not use a torque wrench because of the danger of tightening the connector more than the recommended 1/6 turn after it is finger-tight.
5.2.7 Setting up console connections
This page will explain how console connections are set up.
To initially configure the Cisco device, a management connection must be directly connected to the device.
For Cisco equipment this management attachment is called a console port. The console port allows
monitoring and configuration of a Cisco hub, switch, or router.
The cable used between a terminal and a console port is a rollover cable, with RJ-45 connectors. The
rollover cable, also known as a console cable, has a different pinout than the straight-through or crossover
RJ-45 cables used with Ethernet or the ISDN BRI. The pinout for a rollover is as follows:
1 to 8
2 to 7
3 to 6
4 to 5
5 to 4
6 to 3
7 to 2
8 to 1
To set up a connection between the terminal and the Cisco console port, perform two steps. First, connect
the devices using a rollover cable from the router console port to the workstation serial port. An RJ-45-to-DB-9 or an RJ-45-to-DB-25 adapter may be required for the PC or terminal. Next, configure the terminal emulation application with the following COM (communications) port settings: 9600 bps, 8 data bits, no parity, 1 stop bit, and no flow control.
The AUX port is used to provide out-of-band management through a modem. The AUX port must be
configured by way of the console port before it can be used. The AUX port also uses the settings of 9600
bps, 8 data bits, no parity, 1 stop bit, and no flow control.
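If a PC is used as the terminal, the settings above can also be applied programmatically. The sketch below uses the third-party pyserial package; the port name is an assumption and will differ by operating system, and the rollover pinout check is only a restatement of the list above.

# The rollover (console) cable pinout listed above simply mirrors the pins.
rollover_pinout = {pin: 9 - pin for pin in range(1, 9)}   # {1: 8, 2: 7, ..., 8: 1}

# Sketch of opening a console session with the settings listed above:
# 9600 bps, 8 data bits, no parity, 1 stop bit, no flow control.
# Requires the third-party pyserial package (pip install pyserial).
import serial

console = serial.Serial(
    port="COM1",                      # assumed name; often /dev/ttyS0 or /dev/ttyUSB0 on Linux
    baudrate=9600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=False,                    # no software flow control
    rtscts=False,                     # no hardware flow control
    timeout=1,
)
console.write(b"\r\n")                # wake the console
print(console.read(64).decode(errors="ignore"))
console.close()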
In the Lab Activity, students will establish a console connection to a router or switch.
The Interactive Media Activity provides a detailed view of a console cable.
Summary
This page summarizes the topics discussed in this module.
Ethernet is the most widely used LAN technology and can be implemented on a variety of media. Ethernet
technologies provide a variety of network speeds, from 10 Mbps to Gigabit Ethernet, which can be applied to
appropriate areas of a network. Media and connector requirements differ for various Ethernet
implementations.
The connector on a network interface card (NIC) must match the media. A bayonet nut connector (BNC) is required to connect to coaxial cable. A fiber connector is required to connect to fiber media. The registered jack (RJ-45) connector used with twisted-pair wire is the most common type of connector used in LAN implementations.
When twisted-pair wire is used to connect devices, the appropriate wire sequence, or pinout, must be
determined as well. A crossover cable is used to connect two similar devices, such as two PCs. A straight-through cable is used to connect different devices, such as a switch and a PC. A
rollover cable is used to connect a PC to the console port of a router.
Repeaters regenerate and retime network signals and allow them to travel a longer distance on the media.
Hubs are multi-port repeaters. Data arriving at a hub port is electrically repeated on all the other ports
connected to the same network segment, except for the port on which the data arrived. Sometimes hubs are
called concentrators, because hubs often serve as a central connection point for an Ethernet LAN.
A wireless network can be created with much less cabling than other networks. The only permanent cabling
might be to the access points for the network. At the core of wireless communication are devices called
transmitters and receivers. The transmitter converts source data to electromagnetic (EM) waves that are
passed to the receiver. The receiver then converts these electromagnetic waves back into data for the
destination. The two most common wireless technologies used for networking are infrared (IR) and radio
frequency (RF).
There are times when it is necessary to break up a large LAN into smaller, more easily managed segments.
The devices that are used to define and connect network segments include bridges, switches, routers, and
gateways.
A bridge uses the destination MAC address to determine whether to filter, flood, or copy the frame onto
another segment. If placed strategically, a bridge can greatly improve network performance.
A switch is sometimes described as a multi-port bridge. Although there are some similarities between the
two, a switch is a more sophisticated device than a bridge. Switches operate at much higher speeds than
bridges and can support new functionality, such as virtual LANs.
Routers are responsible for routing data packets from source to destination within the LAN, and for providing
connectivity to the WAN. Within a LAN environment the router contains broadcasts, provides local address
resolution services, such as ARP and RARP, and may segment the network using a subnetwork structure.
Computers typically communicate with each other by using request/response protocols. One computer
issues a request for a service, and a second computer receives and responds to that request. In a peer-to-peer network, networked computers act as equal partners, or peers. As peers, each computer can take on
the client function or the server function. In a client/server arrangement, network services are located on a
dedicated computer called a server. The server responds to the requests of clients.
WAN connection types include high-speed serial links, ISDN, DSL, and cable modems. Each of these
requires a specific media and connector. To interconnect the ISDN BRI port to the service-provider device, a
UTP Category 5 straight-through cable with RJ-45 connectors, is used. A phone cable and an RJ-11
connector are used to connect a router for DSL service. Coaxial cable and an F connector are used to connect a router for cable service.
In addition to the connection type, it is necessary to determine whether DTE or DCE connectors are
required on internetworking devices. The DTE is the endpoint of the user's private network on the WAN
link. The DCE is typically the point where responsibility for delivering data passes to the service provider.
When connecting directly to a service provider, or to a device such as a CSU/DSU that will perform
signal clocking, the router is a DTE and needs a DTE serial cable. This is typically the case for routers.
However, there are cases when the router will need to be the DCE.
6.1 Ethernet Fundamentals
Overview
Ethernet is now the dominant LAN technology in the world. Ethernet is a family of LAN technologies that may
be best understood with the OSI reference model. All LANs must deal with the basic issue of how individual
stations, or nodes, are named. Ethernet specifications support different media, bandwidths, and other Layer
1 and 2 variations. However, the basic frame format and address scheme is the same for all varieties of
Ethernet.
Various MAC strategies have been invented to allow multiple stations to access physical media and network
devices. It is important to understand how network devices gain access to the network media before students
can comprehend and troubleshoot the entire network.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Describe the basics of Ethernet technology
Explain naming rules of Ethernet technology
Explain how Ethernet relates to the OSI model
Describe the Ethernet framing process and frame structure
List Ethernet frame field names and purposes
Identify the characteristics of CSMA/CD
Describe Ethernet timing, interframe spacing, and backoff time after a collision
Define Ethernet errors and collisions
Explain the concept of auto-negotiation in relation to speed and duplex
6.1.1 Introduction to Ethernet
This page provides an introduction to Ethernet. Most of the traffic on the Internet originates and ends with
Ethernet connections. Since it began in the 1970s, Ethernet has evolved to meet the increased demand for
high-speed LANs. When optical fiber media was introduced, Ethernet adapted to take advantage of the
superior bandwidth and low error rate that fiber offers. Now the same protocol that transported data at 3
Mbps in 1973 can carry data at 10 Gbps.
The success of Ethernet is due to the following factors:
Simplicity and ease of maintenance
Ability to incorporate new technologies
Reliability
Low cost of installation and upgrade
The introduction of Gigabit Ethernet has extended the original LAN technology to distances that make
Ethernet a MAN and WAN standard.
The original idea for Ethernet was to allow two or more hosts to use the same medium with no interference
between the signals. This problem of multiple user access to a shared medium was studied in the early
1970s at the University of Hawaii. A system called Alohanet was developed to allow various stations on the
Hawaiian Islands structured access to the shared radio frequency band in the atmosphere.
This work later
formed the basis for the Ethernet access method known as CSMA/CD.
The first LAN in the world was the original version of Ethernet. Robert Metcalfe and his coworkers at Xerox
designed it more than thirty years ago. The first Ethernet standard was published in 1980 by a consortium of
Digital Equipment Corporation, Intel, and Xerox (DIX). Metcalfe wanted Ethernet to be a shared standard from
which everyone could benefit, so it was released as an open standard. The first products that were
developed from the Ethernet standard were sold in the early 1980s. Ethernet transmitted at up to 10 Mbps
over thick coaxial cable up to a distance of 2 kilometers (km). This type of coaxial cable was referred to as
thicknet and was about the width of a small finger.
In 1985, the IEEE standards committee for Local and Metropolitan Area Networks published standards for LANs. These standards start with the number 802. The standard for Ethernet is 802.3. The IEEE wanted to make sure that its standards were compatible with the International Organization for Standardization (ISO) and the OSI model.
To do this, the IEEE 802.3 standard had to address the needs of Layer 1 and the lower portion of Layer 2 of
the OSI model. As a result, some small modifications to the original Ethernet standard were made in 802.3.
The differences between the two standards were so minor that any Ethernet NIC can transmit and receive
both Ethernet and 802.3 frames. Essentially, Ethernet and IEEE 802.3 are the same standards.
The 10-Mbps bandwidth of Ethernet was more than enough for the slow PCs of the 1980s. By the early
1990s PCs became much faster, file sizes increased, and data flow bottlenecks occurred. Most were caused
by the low availability of bandwidth. In 1995, IEEE announced a standard for a 100-Mbps Ethernet. This was
followed by standards for Gigabit Ethernet in 1998 and 1999.
All the standards are essentially compatible with the original Ethernet standard. An Ethernet frame could
leave an older coax 10-Mbps NIC in a PC, be placed onto a 10-Gbps Ethernet fiber link, and end up at a
100-Mbps NIC. As long as the packet stays on Ethernet networks it is not changed. For this reason Ethernet
is considered very scalable. The bandwidth of the network could be increased many times while the Ethernet
technology remains the same.
The original Ethernet standard has been amended many times to manage new media and higher
transmission rates. These amendments provide standards for new technologies and maintain compatibility
between Ethernet variations.
6.1.2 IEEE Ethernet naming rules
This page focuses on the Ethernet naming rules developed by IEEE.
Ethernet is not one networking technology, but a family of networking technologies that includes legacy Ethernet, Fast Ethernet, and Gigabit Ethernet. Ethernet speeds can be 10, 100, 1000, or 10,000 Mbps. The basic frame
format and the IEEE sublayers of OSI Layers 1 and 2 remain consistent across all forms of Ethernet.
When Ethernet needs to be expanded to add a new medium or capability, the IEEE issues a new
supplement to the 802.3 standard. The new supplements are given a one or two letter designation such as
802.3u. An abbreviated description, called an identifier, is also assigned to the supplement.
The abbreviated description consists of the following elements:
A number that indicates the number of Mbps transmitted
The word base to indicate that baseband signaling is used
One or more letters of the alphabet indicating the type of medium used. For example, F = fiber-optic cable and T = copper unshielded twisted pair
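The identifier format just described can be pulled apart with a short sketch; the regular expression and example identifiers below are only an illustration of the naming convention, not an exhaustive parser.

# Sketch that splits an IEEE identifier such as 100BASE-TX into its parts:
# speed in Mbps, signaling method (baseband or broadband), and trailing
# medium designation or other suffix.
import re

def parse_identifier(identifier):
    match = re.fullmatch(r"(\d+)(BASE|BROAD)-?(\w+)", identifier.upper())
    if not match:
        raise ValueError(f"not a recognizable identifier: {identifier}")
    speed, signaling, suffix = match.groups()
    return {"speed_mbps": int(speed), "signaling": signaling, "suffix": suffix}

for name in ["10BASE-T", "100BASE-FX", "1000BASE-T", "10BROAD36"]:
    print(name, parse_identifier(name))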
Ethernet relies on baseband signaling, which uses the entire bandwidth of the transmission medium. The
data signal is transmitted directly over the transmission medium.
In broadband signaling, the data signal is no longer placed directly on the transmission medium. Ethernet
used broadband signaling in the 10BROAD36 standard. 10BROAD36 is the IEEE standard for an 802.3
Ethernet network using broadband transmission with thick coaxial cable running at 10 Mbps. 10BROAD36 is
now considered obsolete. An analog or carrier signal is modulated by the data signal and then transmitted.
Radio broadcasts and cable TV use broadband signaling.
IEEE cannot force manufacturers to fully comply with any standard. IEEE has two main objectives:
Supply the information necessary to build devices that comply with Ethernet standards
Promote innovation among manufacturers
Students will identify the IEEE 802 standards in the Interactive Media Activity.
6.1.3 Ethernet and the OSI model
This page will explain how Ethernet relates to the OSI model.
Ethernet operates in two areas of the OSI model. These are the lower half of the data link layer, which is
known as the MAC sublayer, and the physical layer.
Data that moves from one Ethernet station to another often passes through a repeater. All stations in the
same collision domain see traffic that passes through a repeater. A collision domain is a shared resource.
Problems that originate in one part of a collision domain will usually impact the entire collision domain.
A repeater forwards traffic to all other ports. A repeater never sends traffic out the same port from which it
was received. Any signal detected by a repeater will be forwarded. If the signal is degraded through
attenuation or noise, the repeater will attempt to reconstruct and regenerate the signal.
To guarantee minimum bandwidth and operability, standards specify the maximum number of stations per
segment, maximum segment length, and maximum number of repeaters between stations. Stations
separated by bridges or routers are in different collision domains.
The figure maps a variety of Ethernet technologies to the lower half of OSI Layer 2 and all of Layer 1. Ethernet
at Layer 1 involves signals, bit streams that travel on the media, components that put signals on media, and
various topologies. Ethernet Layer 1 performs a key role in the communication that takes place between
devices, but each of its functions has limitations. Layer 2 addresses these limitations.
Data link sublayers contribute significantly to technological compatibility and computer communications. The
MAC sublayer is concerned with the physical components that will be used to communicate the information.
The Logical Link Control (LLC) sublayer remains relatively independent of the physical equipment that will be
used for the communication process.
While there are other varieties of Ethernet, the ones shown in the figure are the most widely used.
The Interactive Media Activity reviews the layers of the OSI model.
6.1.4 Naming
This page will discuss the MAC addresses used by Ethernet networks.
An address system is required to uniquely identify computers and interfaces to allow for local delivery of
frames on the Ethernet. Ethernet uses MAC addresses that are 48 bits in length and expressed as 12
hexadecimal digits. The first six hexadecimal digits, which are administered by the IEEE, identify the
manufacturer or vendor. This portion of the MAC address is known as the Organizationally Unique Identifier (OUI). The remaining six hexadecimal digits represent the interface serial number or another value administered by the manufacturer. MAC addresses are sometimes referred to as burned-in addresses (BIAs) because they are burned into ROM and are copied into RAM when the NIC initializes.
At the data link layer MAC headers and trailers are added to upper layer data. The header and trailer contain
control information intended for the data link layer in the destination system. The data from upper layers is
encapsulated within the data link frame, between the header and trailer, and then sent out on the network.
The NIC uses the MAC address to determine if a message should be passed on to the upper layers of the
OSI model. The NIC does not use CPU processing time to make this assessment. This enables better
communication times on an Ethernet network.
When a device sends data on an Ethernet network, it can use the destination MAC address to open a
communication pathway to the other device. The source device attaches a header with the MAC address of
the intended destination and sends data through the network. As this data travels along the network media
the NIC in each device checks to see if the MAC address matches the physical destination address carried
by the data frame. If there is no match, the NIC discards the data frame. When the data reaches the
destination node, the NIC makes a copy and passes the frame up the OSI layers. On an Ethernet network,
all nodes must examine the MAC header.
All devices that are connected to the Ethernet LAN have MAC addressed interfaces. This includes
workstations, printers, routers, and switches.
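Both ideas in this section can be shown in a short sketch: splitting a MAC address into its OUI and vendor-assigned portion, and the accept-or-discard check a NIC makes on the destination address. The addresses are invented examples, and multicast group handling is omitted.

# Split a 48-bit MAC address into the IEEE-administered OUI (first 3 octets)
# and the vendor-assigned portion (last 3 octets).
def split_mac(mac):
    octets = mac.replace("-", ":").split(":")
    assert len(octets) == 6, "a MAC address has 6 octets"
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, vendor_part = split_mac("00:0A:8A:12:34:56")   # example address only
print("OUI:", oui, " vendor-assigned:", vendor_part)

BROADCAST = "FF:FF:FF:FF:FF:FF"

def nic_accepts(frame_dest_mac, my_mac):
    # The basic check a NIC makes before passing a frame up the stack.
    # (Multicast group membership checks are left out of this sketch.)
    dest = frame_dest_mac.upper()
    return dest == my_mac.upper() or dest == BROADCAST

print(nic_accepts("00:0A:8A:12:34:56", "00:0A:8A:12:34:56"))  # True  -> pass up
print(nic_accepts("00:0A:8A:99:99:99", "00:0A:8A:12:34:56"))  # False -> discard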
6.1.5 Layer 2 framing
This page will explain how frames are created at Layer 2 of the OSI model.
Encoded bit streams, or data, on physical media represent a tremendous technological accomplishment, but
they, alone, are not enough to make communication happen. Framing provides essential information that
could not be obtained from coded bit streams alone. This information includes the following:
Which computers are in communication with each other
When communication between individual computers begins and when it ends
Which errors occurred while the computers communicated
Which computer will communicate next
Framing is the Layer 2 encapsulation process. A frame is the Layer 2 protocol data unit.
A voltage versus time graph could be used to visualize bits. However, it may be too difficult to graph address
and control information for larger units of data. Another type of diagram that could be used is the frame
format diagram, which is based on voltage versus time graphs. Frame format diagrams are read from left to
right, just like an oscilloscope graph. The frame format diagram shows different groupings of bits, or fields,
that perform other functions.
There are many different types of frames described by various standards. A single generic frame has sections
called fields. Each field is composed of bytes. The names of the fields are as follows:
Start Frame field
Address field
Length/Type field
Data field
Frame Check Sequence (FCS) field
When computers are connected to a physical medium, there must be a way to inform other computers when
they are about to transmit a frame. Various technologies do this in different ways. Regardless of the
technology, all frames begin with a sequence of bytes to signal the data transmission.
All frames contain naming information, such as the name of the source node, or source MAC address, and
the name of the destination node, or destination MAC address.
Most frames have some specialized fields. In some technologies, a Length field specifies the exact length of
a frame in bytes. Some frames have a Type field, which specifies the Layer 3 protocol used by the device
that wants to send data.
Frames are used to send upper-layer data and ultimately the user application data from a source to a
destination. The data package includes the message to be sent, or user application data. Extra bytes may be
added so frames have a minimum length for timing purposes. LLC bytes are also included with the Data field
in the IEEE standard frames. The LLC sublayer takes the network protocol data, which is an IP packet, and
adds control information to help deliver the packet to the destination node. Layer 2 communicates with the
upper layers through LLC.
All frames and the bits, bytes, and fields contained within them, are susceptible to errors from a variety of
sources. The FCS field contains a number that is calculated by the source node based on the data in the
frame. This number is added to the end of a frame that is sent. When the destination node receives the
frame the FCS number is recalculated and compared with the FCS number included in the frame. If the two
numbers are different, an error is assumed and the frame is discarded.
Because the source cannot detect that the frame has been discarded, retransmission has to be initiated by
higher layer connection-oriented protocols providing data flow control. Because these protocols, such as
TCP, expect a frame acknowledgment (ACK) to be sent by the peer station within a certain time, retransmission
usually occurs.
There are three primary ways to calculate the FCS number:
Cyclic redundancy check (CRC) performs calculations on the data.
Two-dimensional parity places individual bytes in a two-dimensional array and performs
redundancy checks vertically and horizontally on the array, creating an extra byte resulting in an
even or odd number of binary 1s.
Internet checksum adds the values of all of the data bits to arrive at a sum.
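Of the three methods listed, the CRC is the one Ethernet actually uses for its FCS. Python's standard zlib module implements the same CRC-32 polynomial, so the generate-and-verify idea can be sketched as follows; on-the-wire bit and byte ordering details are ignored here.

# Sketch of FCS generation and checking using CRC-32.
import zlib

def add_fcs(frame_bytes):
    fcs = zlib.crc32(frame_bytes) & 0xFFFFFFFF
    return frame_bytes + fcs.to_bytes(4, "big")

def fcs_ok(frame_with_fcs):
    data, received = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(data) & 0xFFFFFFFF == int.from_bytes(received, "big")

frame = add_fcs(b"example payload")
print(fcs_ok(frame))                              # True
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]  # flip a single bit
print(fcs_ok(corrupted))                          # False -> frame is discarded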
The node that transmits data must get the attention of other devices to start and end a frame. The
Length field indicates where the frame ends. The frame ends after the FCS. Sometimes there is a formal
byte sequence referred to as an end-frame delimiter.
6.1.6 Ethernet frame structure
This page will describe the frame structure of Ethernet networks.
At the data link layer the frame structure is nearly identical for all speeds of Ethernet from 10 Mbps to 10,000
Mbps. However, at the physical layer almost all versions of Ethernet are very different. Each speed has a
distinct set of architecture design rules.
In the version of Ethernet that was developed by DIX prior to the adoption of the IEEE 802.3 version of
Ethernet, the Preamble and Start-of-Frame (SOF) Delimiter were combined into a single field. The binary
pattern was identical. The field labeled Length/Type was only listed as Length in the early IEEE versions and
only as Type in the DIX version. These two uses of the field were officially combined in a later IEEE version
since both uses were common.
The Ethernet II Type field is incorporated into the current 802.3 frame definition. When a node receives a
frame it must examine the Length/Type field to determine which higher-layer protocol is present. If the two-octet value is equal to or greater than 0x0600 hexadecimal (1536 decimal), then the contents of the Data field are decoded according to the protocol indicated.
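The 0x0600 threshold is simple enough to state directly in code; a two-line sketch:

# Sketch of the Length/Type rule: values of 0x0600 (1536) or greater name an
# upper-layer protocol (Type); smaller values give the data length in octets.
def interpret_length_type(value):
    return "Type" if value >= 0x0600 else "Length"

print(interpret_length_type(0x0800))  # Type (0x0800 identifies IPv4)
print(interpret_length_type(1500))    # Length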
6.1.7 Ethernet frame fields
This page defines the fields that are used in a frame.
Some of the fields permitted or required in an 802.3 Ethernet frame are as follows:
Preamble
SOF Delimiter
Destination Address
Source Address
Length/Type
Header and Data
FCS
Extension
The preamble is an alternating pattern of ones and zeros used for timing synchronization in 10 Mbps and
slower implementations of Ethernet. Faster versions of Ethernet are synchronous so this timing information is
unnecessary but retained for compatibility.
An SOF delimiter consists of a one-octet field that marks the end of the timing information and contains the bit
sequence 10101011.
The destination address can be unicast, multicast, or broadcast.
The Source Address field contains the MAC source address. The source address is generally the unicast
address of the Ethernet node that transmitted the frame. However, many virtual protocols use and
sometimes share a specific source MAC address to identify the virtual entity.
The Length/Type field supports two different uses. If the value is less than 1536 decimal (0x0600 hexadecimal), then the value indicates length. The length interpretation is used when the LLC layer provides
the protocol identification. The type value indicates which upper-layer protocol will receive the data after the
Ethernet process is complete. The length indicates the number of bytes of data that follows this field.
The Data field, and padding if necessary, may be of any length that does not cause the frame to exceed the
maximum frame size. The maximum transmission unit (MTU) for Ethernet is 1500 octets, so the data should
not exceed that size. The content of this field is unspecified. An unspecified amount of data is inserted
immediately after the user data when there is not enough user data for the frame to meet the minimum frame
length. This extra data is called a pad. Ethernet requires each frame to be between 64 and 1518 octets.
An FCS contains a 4-byte CRC value that is created by the device that sends data and is recalculated by the
destination device to check for damaged frames. The corruption of a single bit anywhere from the start of the
Destination Address through the end of the FCS field will cause the checksum to be different. Therefore, the
coverage of the FCS includes itself. It is not possible to distinguish between corruption of the FCS and
corruption of any other field used in the calculation.
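Putting the fields together, the sketch below builds a minimal frame, pads it to the 64-octet minimum, and unpacks the header again. It is a simplification: the Preamble, SFD, and actual FCS computation are omitted, and the addresses are examples only.

# Sketch: build a frame (dst, src, Length/Type, data), pad it to the 64-octet
# minimum, and unpack the header fields with struct.
import struct

MIN_FRAME, MAX_FRAME = 64, 1518      # octets, excluding Preamble and SFD

def build_frame(dst, src, length_type, payload):
    frame = dst + src + struct.pack("!H", length_type) + payload
    if len(frame) + 4 < MIN_FRAME:                       # + 4 for the FCS
        frame += b"\x00" * (MIN_FRAME - 4 - len(frame))  # pad field
    assert len(frame) + 4 <= MAX_FRAME, "frame exceeds the maximum size"
    return frame

dst = bytes.fromhex("ffffffffffff")   # broadcast address, as an example
src = bytes.fromhex("000a8a123456")   # example source address
frame = build_frame(dst, src, 0x0800, b"hi")

d, s, lt = struct.unpack("!6s6sH", frame[:14])
print(d.hex(":"), s.hex(":"), hex(lt), "frame size with FCS:", len(frame) + 4)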
This page concludes this lesson. The next lesson will discuss the functions of an Ethernet network. The
first page will introduce the concept of MAC.
6.2 Ethernet Operation
6.2.1 MAC
This page will define MAC and provide examples of deterministic and non-deterministic MAC protocols.
MAC refers to protocols that determine which computer in a shared-media environment, or collision domain,
is allowed to transmit data. MAC and LLC are the two sublayers that make up the IEEE version of OSI Layer 2. The two broad categories of MAC are deterministic and non-deterministic.
Examples of deterministic protocols include Token Ring and FDDI. In a Token Ring network, hosts are
arranged in a ring and a special data token travels around the ring to each host in sequence. When a host
wants to transmit, it seizes the token, transmits the data for a limited time, and then forwards the token to the
next host in the ring. Token Ring is a collisionless environment since only one host can transmit at a time.
Non-deterministic MAC protocols use a first-come, first-served approach. CSMA/CD is a simple system. The NIC listens for an absence of signal on the media and then begins to transmit. If two nodes transmit at the same time, a collision occurs and neither transmission succeeds.
Three common Layer 2 technologies are Token Ring, FDDI, and Ethernet. All three specify Layer 2 issues,
LLC, naming, framing, and MAC, as well as Layer 1 signaling components and media issues. The specific
technologies for each are as follows:
Ethernet uses a logical bus topology to control information flow on a linear bus and a physical star
or extended star topology for the cables
Token Ring uses a logical ring topology to control information flow and a physical star topology
FDDI uses a logical ring topology to control information flow and a physical dual-ring topology
The next page explains how collisions are avoided in an Ethernet network.
6.2.2 MAC rules and collision detection/backoff
This page describes collision detection and avoidance in a CSMA/CD network.
Ethernet is a shared-media broadcast technology. The access method CSMA/CD used in Ethernet performs
three functions:
Transmitting and receiving data packets
Decoding data packets and checking them for valid addresses before passing them to the upper
layers of the OSI model
Detecting errors within data packets or on the network
In the CSMA/CD access method, networking devices with data to transmit work in a listen-before-transmit
mode. This means when a node wants to send data, it must first check to see whether the networking media
is busy. If the node determines the network is busy, the node will wait a random amount of time before
retrying. If the node determines the networking media is not busy, the node will begin transmitting and
listening. The node listens to ensure no other stations are transmitting at the same time. After completing
data transmission the device will return to listening mode.
Networking devices detect that a collision has occurred when the amplitude of the signal on the networking media
increases. When a collision occurs, each node that is transmitting will continue to transmit for a short time to
ensure that all nodes detect the collision. When all nodes have detected the collision, the backoff algorithm is
invoked and transmission stops. The nodes stop transmitting for a random period of time, determined by the
backoff algorithm. When the delay periods expire, each node can attempt to access the networking media.
The devices that were involved in the collision do not have transmission priority.
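The listen-before-transmit procedure can be summarized as a small loop. The sketch below only simulates the medium with random values, so it is an illustration of the decision flow rather than anything that touches a real network.

# Illustrative sketch of the CSMA/CD listen-before-transmit procedure.
import random

def medium_busy():
    return random.random() < 0.3        # simulated carrier sense

def collision_detected():
    return random.random() < 0.2        # simulated collision while sending

def csma_cd_send(frame):
    attempts = 0
    while True:
        while medium_busy():            # listen before transmitting
            pass                        # wait for the medium to go quiet
        if not collision_detected():    # transmit while listening
            return True                 # sent; return to listening mode
        attempts += 1                   # collision: jam, then back off
        if attempts >= 16:
            return False                # give up; report an error upward
        # a random backoff delay would be inserted here (see 6.2.4)

print(csma_cd_send(b"frame"))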
The Interactive Media Activity shows the procedure for collision detection in an Ethernet network.
6.2.3 Ethernet timing
This page explains the importance of slot times in an Ethernet network.
The basic rules and specifications for proper operation of Ethernet are not particularly complicated, though
some of the faster physical layer implementations are becoming so. Despite the basic simplicity, when a
problem occurs in Ethernet it is often quite difficult to isolate the source. Because of the common bus
architecture of Ethernet, also described as a distributed single point of failure, the scope of the problem
usually encompasses all devices within the collision domain. In situations where repeaters are used, this can
include devices up to four segments away.
Any station on an Ethernet network wishing to transmit a message first listens to ensure that no other
station is currently transmitting. If the cable is quiet, the station will begin transmitting immediately. The
electrical signal takes time to travel down the cable (delay), and each subsequent repeater introduces a
small amount of latency in forwarding the frame from one port to the next. Because of the delay and latency,
it is possible for more than one station to begin transmitting at or near the same time. This results in a
collision.
If the attached station is operating in full duplex then the station may send and receive simultaneously and
collisions should not occur. Full-duplex operation also changes the timing considerations and eliminates the
concept of slot time. Full-duplex operation allows for larger network architecture designs since the timing
restriction for collision detection is removed.
In half duplex, assuming that a collision does not occur, the sending station will transmit 64 bits of timing
synchronization information that is known as the preamble. The sending station will then transmit the
following information:
Destination and source MAC addressing information
Certain other header information
The actual data payload
Checksum (FCS) used to ensure that the message was not corrupted along the way
Stations receiving the frame recalculate the FCS to determine if the incoming message is valid and then
pass valid messages to the next higher layer in the protocol stack.
10 Mbps and slower versions of Ethernet are asynchronous. Asynchronous means that each receiving
station will use the eight octets of timing information to synchronize the receive circuit to the incoming data,
and then discard it. 100 Mbps and higher speed implementations of Ethernet are synchronous. Synchronous
means the timing information is not required, however for compatibility reasons the Preamble and Start
Frame Delimiter (SFD) are present.
For all speeds of Ethernet transmission at or below 1000 Mbps, the standard specifies that a transmission
may be no smaller than the slot time. Slot time for 10 and 100-Mbps Ethernet is 512 bit-times, or 64 octets.
Slot time for 1000-Mbps Ethernet is 4096 bit-times, or 512 octets. Slot time is calculated assuming maximum
cable lengths on the largest legal network architecture. All hardware propagation delay times are at the legal
maximum and the 32-bit jam signal is used when collisions are detected.
The actual calculated slot time is just longer than the theoretical amount of time required to travel between
the furthest points of the collision domain, collide with another transmission at the last possible instant, and
then have the collision fragments return to the sending station and be detected. For the system to work the
first station must learn about the collision before it finishes sending the smallest legal frame size. To allow
1000-Mbps Ethernet to operate in half duplex the extension field was added when sending small frames
purely to keep the transmitter busy long enough for a collision fragment to return. This field is present only on
1000-Mbps, half-duplex links and allows minimum-sized frames to be long enough to meet slot time
requirements. Extension bits are discarded by the receiving station.
On 10-Mbps Ethernet one bit at the MAC layer requires 100 nanoseconds (ns) to transmit. At 100 Mbps that
same bit requires 10 ns to transmit and at 1000 Mbps only takes 1 ns. As a rough estimate, 20.3 cm (8 in)
per nanosecond is often used for calculating propagation delay down a UTP cable. For 100 meters of UTP,
this means that it takes just under 5 bit times for a 10BASE-T signal to travel the length of the cable.
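The figures in this section follow from simple arithmetic; a short worked check (the 20.3 cm per nanosecond value is the rough estimate quoted above):

# Worked check of the timing figures quoted above.
for rate_mbps in (10, 100, 1000):
    bit_time_ns = 1000 / rate_mbps              # nanoseconds per bit
    print(f"{rate_mbps} Mbps: bit time = {bit_time_ns:g} ns")

print("slot time (bit times):", {10: 512, 100: 512, 1000: 4096})

propagation_ns = 100 / 0.203                    # 100 m of UTP at ~20.3 cm per ns
print(f"100 m of UTP ~ {propagation_ns:.0f} ns, "
      f"about {propagation_ns / 100:.1f} bit times at 10 Mbps")  # just under 5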
For CSMA/CD Ethernet to operate, the sending station must become aware of a collision before it has
completed transmission of a minimum-sized frame. At 100 Mbps the system timing is barely able to
accommodate 100 meter cables. At 1000 Mbps special adjustments are required as nearly an entire
minimum-sized frame would be transmitted before the first bit reached the end of the first 100 meters of UTP
cable. For this reason half duplex is not permitted in 10-Gigabit Ethernet.
The Interactive Media Activity will help students identify the bit time of different Ethernet speeds.
6.2.4 Interframe spacing and backoff
This page explains how spacing is used in an Ethernet network for data transmission.
The minimum spacing between two non-colliding frames is also called the interframe spacing. This is
measured from the last bit of the FCS field of the first frame to the first bit of the preamble of the second
frame.
After a frame has been sent, all stations on a 10-Mbps Ethernet are required to wait a minimum of 96 bit times (9.6 microseconds) before any station may legally transmit the next frame. On faster versions of
Ethernet the spacing remains the same, 96 bit-times, but the time required for that interval grows
correspondingly shorter. This interval is referred to as the spacing gap. The gap is intended to allow slow
stations time to process the previous frame and prepare for the next frame.
A repeater is expected to regenerate the full 64 bits of timing information, which is the preamble and SFD, at
the start of any frame. This is despite the potential loss of some of the beginning preamble bits because of
slow synchronization. Because of this forced reintroduction of timing bits, some minor reduction of the
interframe gap is not only possible but expected. Some Ethernet chipsets are sensitive to a shortening of the
interframe spacing, and will begin failing to see frames as the gap is reduced. With the increase in
processing power at the desktop, it would be very easy for a personal computer to saturate an Ethernet
segment with traffic and to begin transmitting again before the interframe spacing delay time is satisfied.
After a collision occurs and all stations allow the cable to become idle (each waits the full interframe
spacing), then the stations that collided must wait an additional and potentially progressively longer period of
time before attempting to retransmit the collided frame. The waiting period is intentionally designed to be
random so that two stations do not delay for the same amount of time before retransmitting, which would
result in more collisions. This is accomplished in part by expanding the interval from which the random
retransmission time is selected on each retransmission attempt. The waiting period is measured in
increments of the parameter slot time.
If the MAC layer is unable to send the frame after sixteen attempts, it gives up and generates an error to the
network layer. Such an occurrence is fairly rare and would happen only under extremely heavy network
loads, or when a physical problem exists on the network.
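The behavior just described matches the truncated binary exponential backoff defined in 802.3: the random range doubles with each retry, stops growing after ten attempts, is measured in slot times, and is abandoned after sixteen tries. A minimal sketch:

# Sketch of truncated binary exponential backoff as described above.
import random

MAX_ATTEMPTS = 16

def backoff_slots(attempt):
    # Random backoff, in slot times, after the given collision count (1-based).
    if attempt > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: give up and report an error")
    k = min(attempt, 10)                     # the range stops expanding after 10
    return random.randint(0, 2 ** k - 1)     # 0 .. 2^k - 1 slot times

for attempt in (1, 2, 3, 10, 15):
    print(f"attempt {attempt}: wait {backoff_slots(attempt)} slot times")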
6.2.5 Error handling
This page will describe collisions and how they are handled on a network.
The most common error condition on Ethernet networks is the collision. Collisions are the mechanism for
resolving contention for network access. A few collisions provide a smooth, simple, low overhead way for
network nodes to arbitrate contention for the network resource. When network contention becomes too great,
collisions can become a significant impediment to useful network operation.
Collisions result in network bandwidth loss that is equal to the initial transmission and the collision jam signal.
This is known as consumption delay; it affects all network nodes and can cause a significant reduction in network
throughput.
The considerable majority of collisions occur very early in the frame, often before the SFD. Collisions
occurring before the SFD are usually not reported to the higher layers, as if the collision did not occur. As
soon as a collision is detected, the sending stations transmit a 32-bit jam signal that will enforce the
collision. This is done so that any data being transmitted is thoroughly corrupted and all stations have a
chance to detect the collision.
In the figure, two stations listen to ensure that the cable is idle and then transmit. Station 1 was able to transmit a
significant percentage of the frame before the signal even reached the last cable segment. Station 2 had not
received the first bit of the transmission prior to beginning its own transmission and was only able to send
several bits before the NIC sensed the collision. Station 2 immediately truncated the current transmission,
substituted the 32-bit jam signal and ceased all transmissions. During the collision and jam event that Station
2 was experiencing, the collision fragments were working their way back through the repeated collision
domain toward Station 1. Station 2 completed transmission of the 32-bit jam signal and became silent before
the collision propagated back to Station 1 which was still unaware of the collision and continued to transmit.
When the collision fragments finally reached Station 1, it also truncated the current transmission and
substituted a 32-bit jam signal in place of the remainder of the frame it was transmitting. Upon sending the
32-bit jam signal Station 1 ceased all transmissions.
A jam signal may be composed of any binary data so long as it does not form a proper checksum for the
portion of the frame already transmitted. The most commonly observed data pattern for a jam signal is simply
a repeating one, zero, one, zero pattern, the same as Preamble. When viewed by a protocol analyzer this
pattern appears as either a repeating hexadecimal 5 or A sequence. The corrupted, partially transmitted
messages are often referred to as collision fragments or runts. Normal collisions are less than 64 octets in
length and therefore fail both the minimum length test and the FCS checksum test.
6.2.6 Types of collisions
This page covers the different types of collisions and their characteristics.
Collisions typically take place when two or more Ethernet stations transmit simultaneously within a collision
domain. A single collision is a collision that was detected while trying to transmit a frame, but on the next
attempt the frame was transmitted successfully. Multiple collisions indicate that the same frame collided
repeatedly before being successfully transmitted. The results of collisions, collision fragments, are partial or
corrupted frames that are less than 64 octets and have an invalid FCS. Three types of collisions are:
Local
Remote
Late
To create a local collision on coax cable (10BASE2 and 10BASE5), the signal travels down the cable until it
encounters a signal from the other station. The waveforms then overlap, canceling some parts of the signal
out and reinforcing or doubling other parts. The doubling of the signal pushes the voltage level of the signal
beyond the allowed maximum. This over-voltage condition is then sensed by all of the stations on the local
cable segment as a collision.
At the beginning, the waveform in the figure represents normal Manchester-encoded data. A few cycles into the
sample the amplitude of the wave doubles. That is the beginning of the collision, where the two waveforms
are overlapping. Just prior to the end of the sample the amplitude returns to normal. This happens when the
first station to detect the collision quits transmitting, and the jam signal from the second colliding station is
still observed.
On UTP cable, such as 10BASE-T, 100BASE-TX and 1000BASE-T, a collision is detected on the local
segment only when a station detects a signal on the RX pair at the same time it is sending on the TX pair.
Since the two signals are on different pairs there is no characteristic change in the signal. Collisions are only
recognized on UTP when the station is operating in half duplex. The only functional difference between half
and full duplex operation in this regard is whether or not the transmit and receive pairs are permitted to be
used simultaneously. If the station is not engaged in transmitting it cannot detect a local collision. Conversely,
a cable fault such as excessive crosstalk can cause a station to perceive its own transmission as a local
collision.
The characteristics of a remote collision are a frame that is less than the minimum length, has an invalid FCS
checksum, but does not exhibit the local collision symptom of over-voltage or simultaneous RX/TX activity.
This sort of collision usually results from collisions occurring on the far side of a repeated connection. A
repeater will not forward an over-voltage state, and cannot cause a station to have both the TX and RX pairs
active at the same time. The station would have to be transmitting to have both pairs active, and that would
constitute a local collision. On UTP networks this is the most common sort of collision observed.
There is no possibility of a normal or legal collision after the first 64 octets of data have been
transmitted by the sending stations. Collisions that occur after the first 64 octets are called late collisions.
The most significant difference between late collisions and collisions occurring before the first 64 octets is
that the Ethernet NIC will retransmit a normally collided frame automatically, but will not automatically
retransmit a frame that was collided late. As far as the NIC is concerned everything went out fine, and the
upper layers of the protocol stack must determine that the frame was lost. Other than retransmission, a
station detecting a late collision handles it in exactly the same way as a normal collision.
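A hedged sketch of that retransmission decision, with a hypothetical octet count as input:

    SLOT_TIME_OCTETS = 64

    def nic_action(octets_sent_when_collision_detected: int) -> str:
        """Sketch of how a NIC reacts, based on when the collision was detected."""
        if octets_sent_when_collision_detected < SLOT_TIME_OCTETS:
            return "normal collision: back off and retransmit automatically"
        return "late collision: count the error; upper layers must recover the lost frame"

    print(nic_action(12))    # collision seen early in the frame
    print(nic_action(300))   # collision seen after the first 64 octets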
The Interactive Media Activity will require students to identify the different types of collisions.
6.2.7 Ethernet errors
This page will define common Ethernet errors.
Knowledge of typical errors is invaluable for understanding both the operation and troubleshooting of
Ethernet networks.
The following are the sources of Ethernet error:
Collision or runt: Simultaneous transmission occurring before slot time has elapsed
Late collision: Simultaneous transmission occurring after slot time has elapsed
Jabber, long frame and range errors: Excessively or illegally long transmission
Short frame, collision fragment or runt: Illegally short transmission
FCS error: Corrupted transmission
Alignment error: Insufficient or excessive number of bits transmitted
Range error: Actual and reported number of octets in frame do not match
Ghost or jabber: Unusually long Preamble or Jam event
While local and remote collisions are considered to be a normal part of Ethernet operation, late collisions are
considered to be an error. The presence of errors on a network always suggests that further investigation is
warranted. The severity of the problem indicates the troubleshooting urgency related to the detected errors. A
handful of errors detected over many minutes or over hours would be a low priority. Thousands detected
over a few minutes suggest that urgent attention is warranted.
Jabber is defined in several places in the 802.3 standard as being a transmission of at least 20,000 to 50,000
bit times in duration. However, most diagnostic tools report jabber whenever a detected transmission
exceeds the maximum legal frame size, which is considerably smaller than 20,000 to 50,000 bit times. Most
references to jabber are more properly called long frames.
A long frame is one that is longer than the maximum legal size, and takes into consideration whether or not
the frame was tagged. It does not consider whether or not the frame had a valid FCS checksum. This error
usually means that jabber was detected on the network.
A short frame is a frame smaller than the minimum legal size of 64 octets, with a good frame check
sequence. Some protocol analyzers and network monitors call these frames runts. In general, the presence
of short frames is not a guarantee that the network is failing.
The term runt is generally an imprecise slang term that means something less than a legal frame size. It may
refer to short frames with a valid FCS checksum although it usually refers to collision fragments.
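A minimal sketch that maps frame length and FCS results onto these informal names, assuming the standard 64-octet minimum and 1518-octet untagged maximum (1522 octets with an 802.1Q tag):

    MIN_OCTETS, MAX_OCTETS = 64, 1518

    def classify(length: int, fcs_ok: bool, tagged: bool = False) -> str:
        """Map a received frame's size and FCS result onto the error names above (a sketch)."""
        max_len = MAX_OCTETS + 4 if tagged else MAX_OCTETS
        if length < MIN_OCTETS:
            return "short frame" if fcs_ok else "collision fragment / runt"
        if length > max_len:
            return "long frame (often reported as jabber)"
        return "valid size" if fcs_ok else "FCS error"

    print(classify(30, fcs_ok=False))   # collision fragment / runt
    print(classify(2000, fcs_ok=True))  # long frame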
The Interactive Media Activity will help students become familiar with Ethernet errors.


6.2.8 FCS and beyond


This page will focus on additional errors that occur on an Ethernet network.
A received frame that has a bad Frame Check Sequence, also referred to as a checksum or CRC error,
differs from the original transmission by at least one bit. In an FCS error frame the header information is
probably correct, but the checksum calculated by the receiving station does not match the checksum
appended to the end of the frame by the sending station. The frame is then discarded.
High numbers of FCS errors from a single station usually indicates a faulty NIC and/or faulty or corrupted
software drivers, or a bad cable connecting that station to the network. If FCS errors are associated with
many stations, they are generally traceable to bad cabling, a faulty version of the NIC driver, a faulty hub
port, or induced noise in the cable system.
A message that does not end on an octet boundary is known as an alignment error. Instead of the correct
number of binary bits forming complete octet groupings, there are additional bits left over (less than eight).
Such a frame is truncated to the nearest octet boundary, and if the FCS checksum fails, then an alignment
error is reported. This is often caused by bad software drivers, or a collision, and is frequently accompanied
by a failure of the FCS checksum.
A frame in which the value in the Length field does not match the actual number of octets counted in the
data field of the received frame is known as a range error. This error also appears when the length field
value is less than the minimum legal unpadded size of the data field. A similar error, Out of Range, is
reported when the value in the Length field indicates a data size that is too large to be legal.
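A small sketch of the range and out-of-range checks just described, assuming the usual 46-octet minimum unpadded data field and the boundary at which the Length/Type field is interpreted as an EtherType instead of a length:

    MIN_DATA = 46  # minimum unpadded data field size for an untagged frame

    def range_check(length_field: int, actual_data_octets: int) -> str:
        """Sketch of the range and out-of-range tests (thresholds assumed from the standard frame limits)."""
        if length_field >= 1536:
            return "Length/Type field holds an EtherType, so no length comparison applies"
        if length_field > 1500:
            return "out of range (reported data size too large to be legal)"
        if length_field < MIN_DATA or length_field != actual_data_octets:
            return "range error"
        return "length field consistent with data field"

    print(range_check(200, 180))    # mismatch -> range error
    print(range_check(1505, 1505))  # too large -> out of range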
Fluke Networks has coined the term ghost to mean energy (noise) detected on the cable that appears to be
a frame, but is lacking a valid SFD. To qualify as a ghost, the frame must be at least 72 octets long, including
the preamble. Otherwise, it is classified as a remote collision. Because of the peculiar nature of ghosts, it is
important to note that test results are largely dependent upon where on the segment the measurement is
made.
Ground loops and other wiring problems are usually the cause of ghosting. Most network monitoring tools do
not recognize the existence of ghosts for the same reason that they do not recognize preamble collisions.
The tools rely entirely on what the chipset tells them. Software-only protocol analyzers, many hardware-based protocol analyzers, handheld diagnostic tools, as well as most remote monitoring (RMON) probes do
not report these events.
The Interactive Media Activity will help students become familiar with the terms and definitions of Ethernet
errors.
6.2.9 Ethernet auto-negotiation
This page explains auto-negotiation and how it is accomplished.
As Ethernet grew from 10 to 100 and 1000 Mbps, one requirement was to make each technology
interoperable, even to the point that 10, 100, and 1000 interfaces could be directly connected. A process
called Auto-Negotiation of speeds at half or full duplex was developed. Specifically, at the time that Fast
Ethernet was introduced, the standard included a method of automatically configuring a given interface to
match the speed and capabilities of the link partner. This process defines how two link partners may
automatically negotiate a configuration offering the best common performance level. It has the additional
advantage of only involving the lowest part of the physical layer.
10BASE-T required each station to transmit a link pulse about every 16 milliseconds, whenever the station
was not engaged in transmitting a message. Auto-Negotiation adopted this signal and renamed it a Normal
Link Pulse (NLP). When a series of NLPs are sent in a group for the purpose of Auto-Negotiation, the group
is called a Fast Link Pulse (FLP) burst. Each FLP burst is sent at the same timing interval as an NLP, and is
intended to allow older 10BASE-T devices to operate normally in the event they should receive an FLP burst.
Auto-Negotiation is accomplished by transmitting a burst of 10BASE-T Link Pulses from each of the two link
partners. The burst communicates the capabilities of the transmitting station to its link partner. After both
stations have interpreted what the other partner is offering, both switch to the highest performance common
configuration and establish a link at that speed. If anything interrupts communications and the link is lost, the
two link partners first attempt to link again at the last negotiated speed. If that fails, or if it has been too long
since the link was lost, the Auto-Negotiation process starts over. The link may be lost due to external
influences, such as a cable fault, or due to one of the partners issuing a reset.
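A simplified sketch of that resolution, using a hypothetical subset of the priority list (the real list in the standard contains more entries, such as 100BASE-T4 and 100BASE-T2):

    # Priority order, highest performance first (simplified subset for illustration)
    PRIORITY = ["1000BASE-T full", "1000BASE-T half", "100BASE-TX full",
                "100BASE-TX half", "10BASE-T full", "10BASE-T half"]

    def negotiate(local: set, partner: set) -> str:
        """Pick the best configuration that both link partners advertise in their FLP bursts."""
        for mode in PRIORITY:
            if mode in local and mode in partner:
                return mode
        return "no common technology; link not established"

    print(negotiate({"1000BASE-T full", "100BASE-TX full", "10BASE-T half"},
                    {"100BASE-TX full", "100BASE-TX half"}))
    # -> 100BASE-TX full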
6.2.10 Link establishment and full and half duplex
This page will explain how links are established through Auto-Negotiation and introduce the two duplex
modes.
Link partners are allowed to skip offering configurations of which they are capable. This allows the network
administrator to force ports to a selected speed and duplex setting, without disabling Auto-Negotiation.
Auto-Negotiation is optional for most Ethernet implementations. Gigabit Ethernet requires its implementation,
though the user may disable it. Auto-Negotiation was originally defined for UTP implementations of Ethernet
and has been extended to work with other fiber optic implementations.


When an Auto-Negotiating station first attempts to link it is supposed to enable 100BASE-TX to attempt to
immediately establish a link. If 100BASE-TX signaling is present, and the station supports 100BASE-TX, it
will attempt to establish a link without negotiating. If either signaling produces a link or FLP bursts are
received, the station will proceed with that technology. If a link partner does not offer an FLP burst, but
instead offers NLPs, then that device is automatically assumed to be a 10BASE-T station. During this initial
interval of testing for other technologies, the transmit path is sending FLP bursts. The standard does not
permit parallel detection of any other technologies.
If a link is established through parallel detection, it is required to be half duplex. There are only two methods
of achieving a full-duplex link. One method is through a completed cycle of Auto-Negotiation, and the other is
to administratively force both link partners to full duplex. If one link partner is forced to full duplex, but the
other partner attempts to Auto-Negotiate, then there is certain to be a duplex mismatch. This will result in
collisions and errors on that link. Additionally if one end is forced to full duplex the other must also be forced.
The exception to this is 10-Gigabit Ethernet, which does not support half duplex.
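A sketch of the duplex outcome on a UTP link for the situations described above; the setting names are hypothetical labels for illustration, not vendor CLI keywords:

    def resulting_duplex(local_setting: str, partner_setting: str) -> str:
        """Sketch of the duplex outcome for a UTP link ('auto' or 'forced-full')."""
        if local_setting == "auto" and partner_setting == "auto":
            return "negotiated: full duplex if both advertise it"
        if local_setting == "forced-full" and partner_setting == "forced-full":
            return "full duplex on both ends"
        # One side forced to full duplex, the other auto-negotiating:
        # the auto side falls back to half duplex via parallel detection, causing a mismatch.
        return "duplex mismatch: expect collisions and errors on this link"

    print(resulting_duplex("forced-full", "auto"))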
Many vendors implement hardware in such a way that it cycles through the various possible states. It
transmits FLP bursts to Auto-Negotiate for a while, then it configures for Fast Ethernet, attempts to link for a
while, and then just listens. Some vendors do not offer any transmitted attempt to link until the interface first
hears an FLP burst or some other signaling scheme.
There are two duplex modes, half and full. For shared media, the half-duplex mode is mandatory. All coaxial
implementations are half duplex in nature and cannot operate in full duplex. UTP and fiber implementations
may be operated in half duplex. 10-Gbps implementations are specified for full duplex only.
In half duplex only one station may transmit at a time. For the coaxial implementations a second station
transmitting will cause the signals to overlap and become corrupted. Since UTP and fiber generally transmit
on separate pairs the signals have no opportunity to overlap and become corrupted. Ethernet has
established arbitration rules for resolving conflicts arising from instances when more than one station
attempts to transmit at the same time. Both stations in a point-to-point full-duplex link are permitted to
transmit at any time, regardless of whether the other station is transmitting.
Auto-Negotiation avoids most situations where one station in a point-to-point link is transmitting under half-duplex rules and the other under full-duplex rules.
In the event that link partners are capable of sharing more than one common technology, refer to the list in
Figure . This list is used to determine which technology should be chosen from the offered configurations.
Fiber-optic Ethernet implementations are not included in this priority resolution list because the interface
electronics and optics do not permit easy reconfiguration between implementations. It is assumed that the
interface configuration is fixed. If the two interfaces are able to Auto-Negotiate then they are already using
the same Ethernet implementation. However, there remain a number of configuration choices such as the
duplex setting, or which station will act as the Master for clocking purposes, that must be determined.
The Interactive Media Activity will help students understand the link establishment process.
This page concludes this lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
Ethernet is not one networking technology, but a family of LAN technologies that includes Legacy, Fast
Ethernet, and Gigabit Ethernet. When Ethernet needs to be expanded to add a new medium or capability,
the IEEE issues a new supplement to the 802.3 standard. The new supplements are given a one or two letter
designation such as 802.3u. Ethernet relies on baseband signaling, which uses the entire bandwidth of the
transmission medium. Ethernet operates at two layers of the OSI model, the lower half of the data link layer,
known as the MAC sublayer and the physical layer. Ethernet at Layer 1 involves interfacing with media,
signals, bit streams that travel on the media, components that put signals on media, and various physical
topologies. Layer 1 bits need structure so OSI Layer 2 frames are used. The MAC sublayer of Layer 2
determines the type of frame appropriate for the physical media.
The one thing common to all forms of Ethernet is the frame structure. This is what allows the interoperability
of the different types of Ethernet.
Some of the fields permitted or required in an 802.3 Ethernet Frame are:
Preamble
Start Frame Delimiter
Destination Address
Source Address
Length/Type
Data and Pad
Frame Check Sequence
In 10 Mbps and slower versions of Ethernet, the Preamble provides timing information the receiving node
needs in order to interpret the electrical signals it is receiving. The Start Frame Delimiter marks the end of
the timing information. 10 Mbps and slower versions of Ethernet are asynchronous. That is, they will use the
preamble timing information to synchronize the receive circuit to the incoming data. 100 Mbps and higher
speed implementations of Ethernet are synchronous. Synchronous means the timing information is not
required; however, for compatibility reasons, the Preamble and SFD are present.

The address fields of the Ethernet frame contain Layer 2, or MAC, addresses.
All frames are susceptible to errors from a variety of sources. The Frame Check Sequence (FCS) field of an
Ethernet frame contains a number that is calculated by the source node based on the data in the frame. At
the destination it is recalculated and compared to determine that the data received is complete and error
free.
Once the data is framed the Media Access Control (MAC) sublayer is also responsible to determine which
computer on a shared-medium environment, or collision domain, is allowed to transmit the data. There are
two broad categories of Media Access Control, deterministic (taking turns) and non-deterministic (first come,
first served).
Examples of deterministic protocols include Token Ring and FDDI. The carrier sense multiple access with
collision detection (CSMA/CD) access method is a simple non-deterministic system. The NIC listens for an
absence of a signal on the media and starts transmitting. If two or more nodes transmit at the same
time a collision occurs. If a collision is detected the nodes wait a random amount of time and retransmit.
The minimum spacing between two non-colliding frames is also called the interframe spacing. Interframe
spacing is required to ensure that all stations have time to process the previous frame and prepare for the
next frame.
Collisions can occur at various points during transmission. A collision where a signal is detected on the
receive and transmit circuits at the same time is referred to as a local collision. A collision that occurs on the far side of a repeater, and is seen only as an undersized frame with an invalid FCS, is called a remote collision. A collision that occurs after the
first sixty-four octets of data have been sent is considered a late collision. The NIC will not automatically
retransmit for this type of collision.
While local and remote collisions are considered to be a normal part of Ethernet operation, late collisions are
considered to be an error. Ethernet errors result from detection of frames sizes that are longer or shorter than
standards allow or excessively long or illegal transmissions called jabber. Runt is a slang term that refers to
something less than the legal frame size.
Auto-Negotiation detects the speed and duplex mode, half-duplex or full-duplex, of the device on the
other end of the wire and adjusts to match those settings.

7.1 10-Mbps and 100-Mbps Ethernet

Overview
Ethernet has been the most successful LAN technology mainly because of how easy it is to implement.
Ethernet has also been successful because it is a flexible technology that has evolved as needs and media
capabilities have changed. This module will provide details about the most important types of Ethernet. The
goal is to help students understand what is common to all forms of Ethernet.
Changes in Ethernet have resulted in major improvements over the 10-Mbps Ethernet of the early 1980s.
The 10-Mbps Ethernet standard remained virtually unchanged until 1995 when IEEE announced a standard
for a 100-Mbps Fast Ethernet. In recent years, an even more rapid growth in media speed has moved the
transition from Fast Ethernet to Gigabit Ethernet. The standards for Gigabit Ethernet emerged in only three
years. A faster Ethernet version called 10-Gigabit Ethernet is now widely available and faster versions will be
developed.
MAC addresses, CSMA/CD, and the frame format have not been changed from earlier versions of Ethernet.
However, other aspects of the MAC sublayer, physical layer, and medium have changed. Copper-based
NICs capable of 10, 100, or 1000 Mbps are now common. Gigabit switch and router ports are becoming the
standard for wiring closets. Optical fiber to support Gigabit Ethernet is considered a standard for backbone
cables in most new installations.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Describe the differences and similarities among 10BASE5, 10BASE2, and 10BASE-T Ethernet
Define Manchester encoding
List the factors that affect Ethernet timing limits
List 10BASE-T wiring parameters
Describe the key characteristics and varieties of 100-Mbps Ethernet
Describe the evolution of Ethernet
Explain the MAC methods, frame formats, and transmission process of Gigabit Ethernet
Describe the uses of specific media and encoding with Gigabit Ethernet
Identify the pinouts and wiring typical to the various implementations of Gigabit Ethernet
Describe the similarities and differences between Gigabit and 10-Gigabit Ethernet
Describe the basic architectural considerations of Gigabit and 10-Gigabit Ethernet
7.1.1 10-Mbps Ethernet

This page will discuss 10-Mbps Ethernet technologies.


10BASE5, 10BASE2, and 10BASE-T Ethernet are considered Legacy Ethernet. The four common features
of Legacy Ethernet are timing parameters, the frame format, transmission processes, and a basic design
rule.
Figure displays the parameters for 10-Mbps Ethernet operation. 10-Mbps Ethernet and slower versions are
asynchronous. Each receiving station uses eight octets of timing information to synchronize its receive circuit
to the incoming data. 10BASE5, 10BASE2, and 10BASE-T all share the same timing parameters. For
example, 1 bit time at 10 Mbps = 100 nanoseconds (ns) = 0.1 microseconds = one 10-millionth of a second.
This means that on a 10-Mbps Ethernet network, 1 bit at the MAC sublayer requires 100 ns to transmit.
For all speeds of Ethernet transmission at 1000 Mbps or slower, a transmission can be no shorter than the slot
time. Slot time is just longer than the time it theoretically can take for a signal to go from one extreme end of the largest
legal Ethernet collision domain to the other extreme end, collide with another transmission at the last
possible instant, and then have the collision fragments return to the sending station to be detected.
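A quick check of those numbers, assuming the 512-bit (64-octet) slot time that applies to 10- and 100-Mbps Ethernet (Gigabit Ethernet extends the slot time to 4096 bit times):

    SLOT_TIME_BITS = 512  # 64 octets; the slot time for 10- and 100-Mbps Ethernet

    for mbps in (10, 100):
        bit_time_ns = 1e9 / (mbps * 1e6)              # one bit time in nanoseconds
        slot_time_us = SLOT_TIME_BITS * bit_time_ns / 1000
        print(f"{mbps} Mbps: bit time {bit_time_ns:.0f} ns, slot time {slot_time_us:.2f} microseconds")
    # 10 Mbps: bit time 100 ns, slot time 51.20 microseconds
    # 100 Mbps: bit time 10 ns, slot time 5.12 microseconds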
10BASE5, 10BASE2, and 10BASE-T also have a common frame format.
The Legacy Ethernet transmission process is identical until the lower part of the OSI physical layer. As the
frame passes from the MAC sublayer to the physical layer, other processes occur before the bits move from
the physical layer onto the medium. One important process is the signal quality error (SQE) signal. The SQE
is a transmission sent by a transceiver back to the controller to let the controller know whether the collision
circuitry is functional. The SQE is also called a heartbeat. The SQE signal is designed to fix the problem in
earlier versions of Ethernet where a host does not know if a transceiver is connected. SQE is always used in
half-duplex. SQE can be used in full-duplex operation but is not required. SQE is active in the following
instances:
Within 4 to 8 microseconds after a normal transmission to indicate that the outbound frame was
successfully transmitted
Whenever there is a collision on the medium
Whenever there is an improper signal on the medium, such as jabber, or reflections that result from a
cable short
Whenever a transmission has been interrupted
All 10-Mbps forms of Ethernet take octets received from the MAC sublayer and perform a process called line
encoding. Line encoding describes how the bits are actually signaled on the wire. The simplest encodings
have undesirable timing and electrical characteristics. Therefore, line codes have been designed with
desirable transmission properties. The form of encoding used in 10-Mbps systems is called Manchester
encoding.
Manchester encoding uses the transition in the middle of the timing window to determine the binary value for
that bit period. In Figure , the top waveform moves to a lower position so it is interpreted as a binary zero.
The second waveform moves to a higher position and is interpreted as a binary one. The third waveform has
an alternating binary sequence. When binary data alternates, there is no need to return to the previous
voltage level before the next bit period. The wave forms in the graphic show that the binary bit values are
determined based on the direction of change in a bit period. The voltage levels at the start or end of any bit
period are not used to determine binary values.
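A minimal sketch of this encoding rule; the low-to-high/high-to-low convention below follows the waveform description above and is one of the two conventions in common use:

    def manchester(bits: str) -> list:
        """Return (first-half, second-half) signal levels for each bit period.
        Convention assumed: a low-to-high transition mid-period is a 1, high-to-low is a 0."""
        levels = []
        for b in bits:
            levels.append(("low", "high") if b == "1" else ("high", "low"))
        return levels

    print(manchester("101"))
    # [('low', 'high'), ('high', 'low'), ('low', 'high')]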
Legacy Ethernet has common architectural features. Networks usually contain multiple types of media. The
standard ensures that interoperability is maintained. The overall architectural design is most important in
mixed-media networks. It becomes easier to violate maximum delay limits as the network grows. The timing
limits are based on the following types of parameters:
Cable length and propagation delay
Delay of repeaters
Delay of transceivers
Interframe gap shrinkage
Delays within the station
10-Mbps Ethernet operates within the timing limits for a series of up to five segments separated by up to four
repeaters. This is known as the 5-4-3 rule. No more than four repeaters can be used in series between any
two stations. There can also be no more than three populated segments between any two stations.
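A sketch of the 5-4-3 limits as a simple check, with hypothetical path counts as inputs:

    def check_5_4_3(segments: int, repeaters: int, populated_segments: int) -> bool:
        """Sketch of the 5-4-3 limits between any two stations in a 10-Mbps collision domain."""
        return segments <= 5 and repeaters <= 4 and populated_segments <= 3

    print(check_5_4_3(segments=5, repeaters=4, populated_segments=3))  # True, at the limit
    print(check_5_4_3(segments=5, repeaters=4, populated_segments=4))  # False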


7.1.2 10BASE5
This page will discuss the original 1980 Ethernet product, which is 10BASE5. 10BASE5 transmitted 10 Mbps
over a single thick coaxial cable bus.
10BASE5 is important because it was the first medium used for Ethernet. 10BASE5 was part of the original
802.3 standard. The primary benefit of 10BASE5 was length. 10BASE5 may be found in legacy installations.
It is not recommended for new installations. 10BASE5 systems are inexpensive and require no configuration.
Two disadvantages are that basic components like NICs are very difficult to find and it is sensitive to signal
reflections on the cable. 10BASE5 systems also represent a single point of failure.
10BASE5 uses Manchester encoding. It has a solid central conductor. Each segment of thick coax may be
up to 500 m (1640.4 ft) in length. The cable is large, heavy, and difficult to install. However, the distance
limitations were favorable and this prolonged its use in certain applications.
When the medium is a single coaxial cable, only one station can transmit at a time or a collision will occur.
Therefore, 10BASE5 only runs in half-duplex with a maximum transmission rate of 10 Mbps.
Figure illustrates a configuration for an end-to-end collision domain with the maximum number of segments
and repeaters. Remember that only three segments can have stations connected to them. The other two
repeated segments are used to extend the network.
The Lab Activity will help students decode a waveform.
The Interactive Media Activity will help students learn the features of 10BASE5 technology.



7.1.3 10BASE2
This page covers 10BASE2, which was introduced in 1985.
Installation was easier because of its smaller size, lighter weight, and greater flexibility. 10BASE2 still exists
in legacy networks. Like 10BASE5, it is no longer recommended for network installations. It has a low cost
and does not require hubs.
10BASE2 also uses Manchester encoding. Computers on a 10BASE2 LAN are linked together by an
unbroken series of coaxial cable lengths. These lengths are attached to a T-shaped connector on the NIC
with BNC connectors.
10BASE2 has a stranded central conductor. Each of the maximum five segments of thin coaxial cable may
be up to 185 m (607 ft) long and each station is connected directly to the BNC T-shaped connector on the
coaxial cable.
Only one station can transmit at a time or a collision will occur. 10BASE2 also uses half-duplex. The
maximum transmission rate of 10BASE2 is 10 Mbps.
There may be up to 30 stations on a 10BASE2 segment. Only three out of five consecutive segments
between any two stations can be populated.
The Interactive Media Activity will help students learn the features of 10BASE2 technology.


7.1.4 10BASE-T
This page covers 10BASE-T, which was introduced in 1990.
10BASE-T used cheaper and easier to install Category 3 UTP copper cable instead of coax cable. The cable
plugged into a central connection device that contained the shared bus. This device was a hub. It was at the
center of a set of cables that radiated out to the PCs like the spokes on a wheel. This is referred to as a star
topology. As additional stars were added and the cable distances grew, this formed an extended star
topology. Originally 10BASE-T was a half-duplex protocol, but full-duplex features were added later. The
explosion in the popularity of Ethernet in the mid-to-late 1990s was when Ethernet came to dominate LAN
technology.
10BASE-T also uses Manchester encoding. A 10BASE-T UTP cable has a solid conductor for each wire. The
maximum cable length is 90 m (295 ft). UTP cable uses eight-pin RJ-45 connectors. Though Category 3
cable is adequate for 10BASE-T networks, new cable installations should be made with Category 5e or
better. All four pairs of wires should be used either with the T568-A or T568-B cable pinout arrangement. This
type of cable installation supports the use of multiple protocols without the need to rewire. Figure shows
the pinout arrangement for a 10BASE-T connection. The pair that transmits data on one device is connected
to the pair that receives data on the other device.
Half duplex or full duplex is a configuration choice. 10BASE-T carries 10 Mbps of traffic in half-duplex mode
and 20 Mbps in full-duplex mode.
The Interactive Media Activity will help students learn the features of 10BASE-T technology.

7.1.5 10BASE-T wiring and architecture


This page explains the wiring and architecture of 10BASE-T.
A 10BASE-T link generally connects a station to a hub or switch. Hubs are multi-port repeaters and count
toward the limit on repeaters between distant stations. Hubs do not divide network segments into separate
collision domains. Bridges and switches divide segments into separate collision domains. The maximum
distance between bridges and switches is based on media limitations.
Although hubs may be linked, it is best to avoid this arrangement. A network with linked hubs may exceed
the limit for maximum delay between stations. Multiple hubs should be arranged in hierarchical order like a
tree structure. Performance is better if fewer repeaters are used between stations.
An architectural example is shown in Figure . The distance from one end of the network to the other places
the architecture at its limit. The most important aspect to consider is how to keep the delay between distant
stations to a minimum, regardless of the architecture and media types involved. A shorter maximum delay
will provide better overall performance.
10BASE-T links can have unrepeated distances of up to 100 m (328 ft). While this may seem like a long
distance, it is typically maximized when wiring an actual building. Hubs can solve the distance issue but will
allow collisions to propagate. The widespread introduction of switches has made the distance limitation less
important. If workstations are located within 100 m (328 ft) of a switch, the 100-m distance starts over at the
switch.


7.1.6 100-Mbps Ethernet


This page will discuss 100-Mbps Ethernet, which is also known as Fast Ethernet. The two technologies that
have become important are 100BASE-TX, which is a copper UTP medium and 100BASE-FX, which is a
multimode optical fiber medium.
Three characteristics common to 100BASE-TX and 100BASE-FX are the timing parameters, the frame
format, and parts of the transmission process. 100BASE-TX and 100BASE-FX both share timing
parameters. Note that one bit time at 100 Mbps = 10 ns = 0.01 microseconds = one 100-millionth of a second.
The 100-Mbps frame format is the same as the 10-Mbps frame.
Fast Ethernet is ten times faster than 10BASE-T. The bits that are sent are shorter in duration and occur
more frequently. These higher frequency signals are more susceptible to noise. In response to these issues,
two separate encoding steps are used by 100-Mbps Ethernet. The first part of the encoding uses a technique
called 4B/5B; the second part of the encoding is the actual line encoding specific to copper or fiber.
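As an illustration of the first step, here are a few entries of the standard 4B/5B code table (the full table has 16 data code groups plus control code groups such as idle) and a toy encoder:

    # A subset of the 4B/5B code: data nibble -> 5-bit code group
    FOUR_B_FIVE_B = {
        "0000": "11110", "0001": "01001", "0010": "10100", "0011": "10101",
        "1110": "11100", "1111": "11101",
    }

    def encode_4b5b(nibbles: str) -> str:
        """Expand each 4-bit group into its 5-bit code group (only the entries above)."""
        return " ".join(FOUR_B_FIVE_B[nibbles[i:i+4]] for i in range(0, len(nibbles), 4))

    print(encode_4b5b("00001111"))  # 11110 11101

The 5-bit code groups guarantee enough signal transitions for clock recovery before the second, medium-specific line-encoding step is applied.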

7.1.7 100BASE-TX
This page will describe 100BASE-TX.
In 1995, 100BASE-TX, which uses Category 5 UTP cable, became the standard and went on to be commercially
successful.
The original coaxial Ethernet used half-duplex transmission so only one device could transmit at a time. In
1997, Ethernet was expanded to include a full-duplex capability that allowed a station to transmit and receive
at the same time over a point-to-point link. Switches replaced hubs in many networks. These switches had full-duplex
capabilities and could handle Ethernet frames quickly.
100BASE-TX uses 4B/5B encoding, which is then scrambled and converted to Multi-Level Transmit (MLT-3)
encoding. Figure shows four waveform examples. The top waveform has no transition in the center of the
timing window. No transition indicates a binary zero. The second waveform shows a transition in the center
of the timing window. A transition represents a binary one. The third waveform shows an alternating binary
sequence. The fourth waveform shows that signal changes indicate ones and horizontal lines indicate
zeros.
Figure shows the pinout for a 100BASE-TX connection. Notice that two separate transmit and receive
paths exist. This is identical to the 10BASE-T configuration.
100BASE-TX carries 100 Mbps of traffic in half-duplex mode. In full-duplex mode, 100BASE-TX can
exchange 200 Mbps of traffic. The concept of full duplex will become more important as Ethernet speeds
increase.

7.1.8 100BASE-FX
This page covers 100BASE-FX.
When copper-based Fast Ethernet was introduced, a fiber version was also desired. A fiber version could be
used for backbone applications, connections between floors, buildings where copper is less desirable, and
also in high-noise environments. 100BASE-FX was introduced to satisfy this desire. However, 100BASE-FX
was never adopted successfully. This was due to the introduction of Gigabit Ethernet copper and fiber
standards. Gigabit Ethernet standards are now the dominant technology for backbone installations, high-speed cross-connects, and general infrastructure needs.
The timing, frame format, and transmission are the same in both versions of 100-Mbps Fast Ethernet. In
Figure , the top waveform has no transition, which indicates a binary 0. In the second waveform, the
transition in the center of the timing window indicates a binary 1. In the third waveform, there is an alternating
binary sequence. In the third and fourth waveforms it is more obvious that no transition indicates a binary
zero and the presence of a transition is a binary one.
Figure summarizes a 100BASE-FX link and pinouts. A fiber pair with either ST or SC connectors is most
commonly used.
The separate Transmit (Tx) and Receive (Rx) paths in 100BASE-FX optical fiber allow for 200-Mbps
transmission.


7.1.9 Fast Ethernet architecture


This page describes the architecture of Fast Ethernet.
Fast Ethernet links generally consist of a connection between a station and a hub or switch. Hubs are
considered multi-port repeaters and switches are considered multi-port bridges. These are subject to the
100-m (328 ft) UTP media distance limitation.
A Class I repeater may introduce up to 140 bit-times latency. Any repeater that changes between one
Ethernet implementation and another is a Class I repeater. A Class II repeater is restricted to smaller timing
delays, 92 bit times, because it immediately repeats the incoming signal to all other ports without a
translation process. To achieve a smaller timing delay, Class II repeaters can only connect to segment types
that use the same signaling technique.
As with 10-Mbps versions, it is possible to modify some of the architecture rules for 100-Mbps versions.
Modification of the architecture rules is strongly discouraged for 100BASE-TX. 100BASE-TX cable between
Class II repeaters may not exceed 5 m (16 ft). Links that operate in half duplex are not uncommon in Fast
Ethernet. However, half duplex is undesirable because the signaling scheme is inherently full duplex.
Figure shows architecture configuration cable distances. 100BASE-TX links can have unrepeated
distances up to 100 m. Switches have made this distance limitation less important. Most Fast Ethernet
implementations are switched.
This page concludes this lesson. The next lesson will discuss Gigabit and 10-Gigabit Ethernet. The first
page describes 1000-Mbps Ethernet standards.


7.2 Gigabit and 10-Gigabit Ethernet

7.2.1 1000-Mbps Ethernet


This page covers the 1000-Mbps Ethernet or Gigabit Ethernet standards. These standards specify both fiber
and copper media for data transmissions. The 1000BASE-T standard, IEEE 802.3ab, uses Category 5, or
higher, balanced copper cabling. The 1000BASE-X standard, IEEE 802.3z, specifies 1 Gbps full duplex over
optical fiber.
1000BASE-T, 1000BASE-SX, and 1000BASE-LX use the same timing parameters, as shown in Figure .
They use a bit time of 1 ns, which is 0.000000001 of a second, or one billionth of a second. The Gigabit Ethernet frame
has the same format as is used for 10 and 100-Mbps Ethernet. Some implementations of Gigabit Ethernet
may use different processes to convert frames to bits on the cable. Figure shows the Ethernet frame fields.
The differences between standard Ethernet, Fast Ethernet and Gigabit Ethernet occur at the physical layer.
Due to the increased speeds of these newer standards, the shorter duration bit times require special
considerations. Since the bits are introduced on the medium for a shorter duration and more often, timing is
critical. This high-speed transmission requires higher frequencies. This causes the bits to be more
susceptible to noise on copper media.
These issues require Gigabit Ethernet to use two separate encoding steps. Data transmission is more
efficient when codes are used to represent the binary bit stream. The encoded data provides
synchronization, efficient usage of bandwidth, and improved signal-to-noise ratio characteristics.
At the physical layer, the bit patterns from the MAC layer are converted into symbols. The symbols may also
be control information such as start frame, end frame, and idle conditions on a link. The frame is coded into
control symbols and data symbols to increase network throughput.
Fiber-based Gigabit Ethernet, or 1000BASE-X, uses 8B/10B encoding, which is similar to the 4B/5B concept.
This is followed by the simple nonreturn to zero (NRZ) line encoding of light on optical fiber. This encoding
process is possible because the fiber medium can carry higher bandwidth signals.

7.2.2 1000BASE-T
This page will describe 1000BASE-T.
As Fast Ethernet was installed to increase bandwidth to workstations, this began to create bottlenecks
upstream in the network. The 1000BASE-T standard, which is IEEE 802.3ab, was developed to provide
additional bandwidth to help alleviate these bottlenecks. It provided more throughput for devices such as
intra-building backbones, inter-switch links, server farms, and other wiring closet applications as well as
connections for high-end workstations. 1000BASE-T was designed to function over Category 5 copper cable
that passes the Category 5e test. Most installed Category 5 cable can pass the Category 5e certification if
properly terminated. It is important for the 1000BASE-T standard to be interoperable with 10BASE-T and
100BASE-TX.
Since Category 5e cable can reliably carry up to 125 Mbps of traffic, 1000 Mbps or 1 Gigabit of bandwidth
was a design challenge. The first step to accomplish 1000BASE-T is to use all four pairs of wires instead of
the traditional two pairs of wires used by 10BASE-T and 100BASE-TX. This requires complex circuitry that
allows full-duplex transmissions on the same wire pair. This provides 250 Mbps per pair. With all four-wire
pairs, this provides the desired 1000 Mbps. Since the information travels simultaneously across the four
paths, the circuitry has to divide frames at the transmitter and reassemble them at the receiver.
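The arithmetic behind that design, restated as a sketch:

    pairs = 4
    mbps_per_pair = 250           # each pair carries 250 Mbps, full duplex on the same pair
    print(pairs * mbps_per_pair)  # 1000 Mbps in each direction across the four pairs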
1000BASE-T uses 4D-PAM5 line encoding on Category 5e, or better, UTP. This means that the transmission
and reception of data happen in both directions on the same wire pair at the same time. As
might be expected, this results in a permanent collision on the wire pairs. These collisions result in complex
voltage patterns. With the complex integrated circuits using techniques such as echo cancellation, Layer 1
Forward Error Correction (FEC), and prudent selection of voltage levels, the system achieves the 1-Gigabit
throughput.
In idle periods there are nine voltage levels found on the cable, and during data transmission periods there
are 17 voltage levels found on the cable. With this large number of states and the effects of noise, the
signal on the wire looks more analog than digital. Like analog, the system is more susceptible to noise due to
cable and termination problems.
The data from the sending station is carefully divided into four parallel streams, encoded, transmitted and
detected in parallel, and then reassembled into one received bit stream. Figure represents the
simultaneous full duplex on four-wire pairs. 1000BASE-T supports both half-duplex as well as full-duplex
operation. The use of full-duplex 1000BASE-T is widespread.
7.2.3 1000BASE-SX and LX
This page will discuss single-mode and multimode optical fiber.
The IEEE 802.3 standard recommends that Gigabit Ethernet over fiber be the preferred backbone
technology.
The timing, frame format, and transmission are common to all versions of 1000 Mbps. Two signal-encoding
schemes are defined at the physical layer. The 8B/10B scheme is used for optical fiber and shielded
copper media, and the pulse amplitude modulation 5 (PAM5) is used for UTP.
1000BASE-X uses 8B/10B encoding converted to non-return to zero (NRZ) line encoding. NRZ encoding
relies on the signal level found in the timing window to determine the binary value for that bit period. Unlike
most of the other encoding schemes described, this encoding system is level driven instead of edge driven.
That is the determination of whether a bit is a zero or a one is made by the level of the signal rather than
when the signal changes levels.
The NRZ signals are then pulsed into the fiber using either short-wavelength or long-wavelength light
sources. The short-wavelength option uses an 850 nm laser or LED source with multimode optical fiber (1000BASE-SX). It is the lower-cost option but supports shorter distances. The long-wavelength 1310 nm laser source
uses either single-mode or multimode optical fiber (1000BASE-LX). Laser sources used with single-mode
fiber can achieve distances of up to 5000 meters. Because of the length of time to completely turn the LED
or laser on and off each time, the light is pulsed using low and high power. A logic zero is represented by low
power, and a logic one by high power.
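A sketch of that level-driven rule, using the low-power/high-power convention described above:

    def nrz_decode(levels: list) -> str:
        """NRZ is level driven: each bit is read from the signal level in its timing window,
        not from a transition. Here 'high' (high optical power) is taken to be a 1."""
        return "".join("1" if level == "high" else "0" for level in levels)

    print(nrz_decode(["low", "high", "high", "low"]))  # 0110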
The Media Access Control method treats the link as point-to-point. Since separate fibers are used for
transmitting (Tx) and receiving (Rx) the connection is inherently full duplex. Gigabit Ethernet permits only a
single repeater between two stations. Figure is a 1000BASE Ethernet media comparison chart.
7.2.4 Gigabit Ethernet architecture
This page will discuss the architecture of Gigabit Ethernet.
The distances of full-duplex links are limited only by the medium, and not by the round-trip delay.
Since most Gigabit Ethernet is switched, the values in Figures and are the practical limits between
devices. Daisy-chaining, star, and extended star topologies are all allowed. The issue then becomes one of
logical topology and data flow, not timing or distance limitations.
A 1000BASE-T UTP cable is the same as 10BASE-T and 100BASE-TX cable, except that link performance
must meet the higher quality Category 5e or ISO Class D (2000) requirements.
Modification of the architecture rules is strongly discouraged for 1000BASE-T. At 100 meters, 1000BASE-T is
operating close to the edge of the ability of the hardware to recover the transmitted signal. Any cabling
problems or environmental noise could render an otherwise compliant cable inoperable even at distances
that are within the specification.
It is recommended that all links between a station and a hub or switch be configured for Auto-Negotiation
to permit the highest common performance. This will avoid accidental misconfiguration of the other
required parameters for proper Gigabit Ethernet operation.

7.2.5 10-Gigabit Ethernet


This page will describe 10-Gigabit Ethernet and compare it to other versions of Ethernet.
IEEE 802.3ae was adopted to include 10-Gbps full-duplex transmission over fiber optic cable. The basic
similarities between 802.3ae and 802.3, the original Ethernet standard, are remarkable. This 10-Gigabit Ethernet
(10GbE) is evolving for not only LANs, but also MANs, and WANs.

With the frame format and other Ethernet Layer 2 specifications compatible with previous standards, 10GbE
can provide increased bandwidth needs that are interoperable with existing network infrastructure.
A major conceptual change for Ethernet is emerging with 10GbE. Ethernet is traditionally thought of as a
LAN technology, but 10GbE physical layer standards allow both an extension in distance to 40 km over
single-mode fiber and compatibility with synchronous optical network (SONET) and synchronous digital
hierarchy (SDH) networks. Operation at 40 km distance makes 10GbE a viable MAN technology.
Compatibility with SONET/SDH networks operating up to OC-192 speeds (9.584640 Gbps) make 10GbE a
viable WAN technology. 10GbE may also compete with ATM for certain applications.
To summarize, how does 10GbE compare to other varieties of Ethernet?
Frame format is the same, allowing interoperability between all varieties of legacy, fast, gigabit, and
10 gigabit, with no reframing or protocol conversions.
Bit time is now 0.1 nanoseconds. All other time variables scale accordingly.
Since only full-duplex fiber connections are used, CSMA/CD is not necessary.
The IEEE 802.3 sublayers within OSI Layers 1 and 2 are mostly preserved, with a few additions to
accommodate 40 km fiber links and interoperability with SONET/SDH technologies.
Flexible, efficient, reliable, relatively low cost end-to-end Ethernet networks become possible.
TCP/IP can run over LANs, MANs, and WANs with one Layer 2 transport method.
The basic standard governing CSMA/CD is IEEE 802.3. An IEEE 802.3 supplement, entitled 802.3ae,
governs the 10GbE family. As is typical for new technologies, a variety of implementations are being
considered, including:
10GBASE-SR: Intended for short distances over already-installed multimode fiber; supports a range of 26 m to 82 m
10GBASE-LX4: Uses wavelength division multiplexing (WDM); supports 240 m to 300 m over already-installed multimode fiber and 10 km over single-mode fiber
10GBASE-LR and 10GBASE-ER: Support 10 km and 40 km over single-mode fiber, respectively
10GBASE-SW, 10GBASE-LW, and 10GBASE-EW: Known collectively as 10GBASE-W; intended to work with OC-192 synchronous transport module SONET/SDH WAN equipment
The IEEE 802.3ae Task force and the 10-Gigabit Ethernet Alliance (10 GEA) are working to standardize
these emerging technologies.
10-Gbps Ethernet (IEEE 802.3ae) was standardized in June 2002. It is a full-duplex protocol that uses only
optic fiber as a transmission medium. The maximum transmission distances depend on the type of fiber
being used. When using single-mode fiber as the transmission medium, the maximum transmission distance
is 40 kilometers (25 miles). Some discussions between IEEE members have begun that suggest the
possibility of standards for 40, 80, and even 100-Gbps Ethernet.

7.2.6 10-Gigabit Ethernet architectures


This page describes the 10-Gigabit Ethernet architectures.
As with the development of Gigabit Ethernet, the increase in speed comes with extra requirements. The
shorter bit time duration because of increased speed requires special considerations. For 10 GbE
transmissions, each data bit duration is 0.1 nanosecond. This means there would be 1,000 10 GbE data bits in
the time of one data bit in a 10-Mbps Ethernet data stream. Because of the short duration of the 10
GbE data bit, it is often difficult to separate a data bit from noise. 10 GbE data transmissions rely on exact bit
timing to separate the data from the effects of noise on the physical layer. This is the purpose of
synchronization.
In response to these issues of synchronization, bandwidth, and Signal-to-Noise Ratio, 10-Gigabit Ethernet
uses two separate encoding steps. By using codes to represent the user data, transmission is made more
efficient. The encoded data provides synchronization, efficient usage of bandwidth, and improved Signal-to-Noise Ratio characteristics.
Complex serial bit streams are used for all versions of 10GbE except for 10GBASE-LX4, which uses Wide
Wavelength Division Multiplex (WWDM) to multiplex four simultaneous bit streams as four wavelengths of
light launched into the fiber at one time.
Figure represents the particular case of using four laser sources with slightly different wavelengths. Upon
receipt from the medium, the optical signal stream is demultiplexed into four separate optical signal streams.
The four optical signal streams are then converted back into four electronic bit streams as they travel in
approximately the reverse process back up through the sublayers to the MAC layer.
Currently, most 10GbE products are in the form of modules, or line cards, for addition to high-end switches
and routers. As the 10GbE technologies evolve, an increasing diversity of signaling components can be
expected. As optical technologies evolve, improved transmitters and receivers will be incorporated into these
products, taking further advantage of modularity. All 10GbE varieties use optical fiber media. Fiber types
include 10 micron single-mode fiber, and 50 micron and 62.5 micron multimode fibers. A range of fiber attenuation and
dispersion characteristics is supported, but they limit operating distances.
Even though support is limited to fiber optic media, some of the maximum cable lengths are surprisingly
short. No repeater is defined for 10-Gigabit Ethernet since half duplex is explicitly not supported.
As with 10 Mbps, 100 Mbps and 1000 Mbps versions, it is possible to modify some of the architecture
rules slightly. Possible architecture adjustments are related to signal loss and distortion along the
medium. Due to dispersion of the signal and other issues the light pulse becomes undecipherable
beyond certain distances.

7.2.7 Future of Ethernet


This page will teach students about the future of Ethernet.
Ethernet has gone through an evolution from Legacy > Fast > Gigabit > MultiGigabit technologies.
While other LAN technologies are still in place (legacy installations), Ethernet dominates new LAN
installations. So much so that some have referred to Ethernet as the LAN dial tone. Ethernet is now the
standard for horizontal, vertical, and inter-building connections. Recently developing versions of Ethernet are
blurring the distinction between LANs, MANs, and WANs.
While 1-Gigabit Ethernet is now widely available and 10-Gigabit products are becoming more available, the IEEE
and the 10-Gigabit Ethernet Alliance are working on 40, 100, or even 160 Gbps standards. The technologies
that are adopted will depend on a number of factors, including the rate of maturation of the technologies and
standards, the rate of adoption in the market, and cost.
Proposals for Ethernet arbitration schemes other than CSMA/CD have been made. The problem of collisions
with the physical bus topologies of 10BASE5 and 10BASE2, and with 10BASE-T and 100BASE-TX hubs, is no longer
common. The use of UTP and optical fiber with separate Tx and Rx paths, together with the decreasing cost of switches,
makes shared-media, half-duplex connections much less important.

The future of networking media is three-fold:


1. Copper (up to 1000 Mbps, perhaps more)
2. Wireless (approaching 100 Mbps, perhaps more)
3. Optical fiber (currently at 10,000 Mbps and soon to be more)
Copper and wireless media have certain physical and practical limitations on the highest frequency signals
that can be transmitted. This is not a limiting factor for optical fiber in the foreseeable future. The bandwidth
limitations on optical fiber are extremely large and are not yet being threatened. In fiber systems, it is the
electronics technology (such as emitters and detectors) and fiber manufacturing processes that most limit the
speed. Upcoming developments in Ethernet are likely to be heavily weighted towards Laser light sources and
single-mode optical fiber.
When Ethernet was slower, half duplex, subject to collisions, and governed by a democratic process for
prioritization, it was not considered to have the Quality of Service (QoS) capabilities required to handle certain
types of traffic. This included such things as IP telephony and video multicast.
The full-duplex high-speed Ethernet technologies that now dominate the market are proving to be sufficient
at supporting even QoS-intensive applications. This makes the potential applications of Ethernet even wider.
Ironically, the desire for end-to-end QoS capability helped drive a push for ATM to the desktop and to the WAN in the mid-1990s, but now it is Ethernet, not ATM, that is approaching this goal.

Summary
This page summarizes the topics discussed in this module.
Ethernet is a technology that has increased in speed one thousand times, from 10 Mbps to 10,000 Mbps, in
less than a decade. All forms of Ethernet share a similar frame structure and this leads to excellent
interoperability. Most Ethernet copper connections are now switched full duplex, and the fastest copper-based Ethernet is 1000BASE-T, or Gigabit Ethernet. 10-Gigabit Ethernet and faster are exclusively optical
fiber-based technologies.
10BASE5, 10BASE2, and 10BASE-T Ethernet are considered Legacy Ethernet. The four common features
of Legacy Ethernet are timing parameters, frame format, transmission process, and a basic design rule.
Legacy Ethernet encodes data on an electrical signal. The form of encoding used in 10 Mbps systems is
called Manchester encoding. Manchester encoding uses a change in voltage to represent the binary
numbers zero and one. An increase or decrease in voltage during a timed period, called the bit period,
determines the binary value of the bit.
In addition to a standard bit period, Ethernet standards set limits for slot time and interframe spacing.
Different types of media can affect transmission timing and timing standards ensure interoperability. 10 Mbps
Ethernet operates within the timing limits offered by a series of no more than five segments separated by no
more than four repeaters.
A single thick coaxial cable was the first medium used for Ethernet. 10BASE2, using a thinner coax cable,
was introduced in 1985. 10BASE-T, using twisted-pair copper wire, was introduced in 1990. Because it used
multiple wires 10BASE-T offered the option of full-duplex signaling. 10BASE-T carries 10 Mbps of traffic in
half-duplex mode and 20 Mbps in full-duplex mode.
10BASE-T links can have unrepeated distances up to 100 m. Beyond that network devices such as
repeaters, hubs, bridges, and switches are used to extend the scope of the LAN. With the advent of switches,
the 4-repeater rule is not so relevant. You can extend the LAN indefinitely by daisy-chaining switches. Each
switch-to-switch connection, with a maximum length of 100 m, is essentially a point-to-point connection without
the media contention or timing issues of using repeaters and hubs.

100-Mbps Ethernet, also known as Fast Ethernet, can be implemented using twisted-pair copper wire, as in
100BASE-TX, or fiber media, as in 100BASE-FX. 100 Mbps forms of Ethernet can transmit 200 Mbps in full
duplex.
Because the higher frequency signals used in Fast Ethernet are more susceptible to noise, two separate
encoding steps are used by 100-Mbps Ethernet to enhance signal integrity.
Gigabit Ethernet over copper wire is accomplished by the following:
Category 5e UTP cable and careful improvements in electronics are used to boost 100 Mbps per
wire pair to 125 Mbps per wire pair.
All four wire pairs are used instead of just two. This allows 125 Mbps per wire pair, or 500 Mbps for the four
wire pairs.
Sophisticated electronics allow permanent collisions on each wire pair and run signals in full duplex,
doubling the 500 Mbps to 1000 Mbps.
On Gigabit Ethernet networks bit signals occur in one tenth of the time of 100 Mbps networks and 1/100 of
the time of 10 Mbps networks. With signals occurring in less time the bits become more susceptible to noise.
The issue becomes how fast the network adapter or interface can change voltage levels to signal bits and
still be detected reliably one hundred meters away at the receiving NIC or interface. At this speed encoding
and decoding data becomes even more complex.
The fiber versions of Gigabit Ethernet, 1000BASE-SX and 1000BASE-LX offer the following advantages:
noise immunity, small size, and increased unrepeated distances and bandwidth. The IEEE 802.3
standard recommends that Gigabit Ethernet over fiber be the preferred backbone technology.

8.1 Ethernet Switching

Overview
Shared Ethernet works extremely well under ideal conditions. If the number of devices that try to access the
network is low, the number of collisions stays well within acceptable limits. However, when the number of
users on the network increases, the number of collisions can significantly reduce performance. Bridges were
developed to help correct performance problems that arose from increased collisions. Switches evolved from
bridges to become the main technology in modern Ethernet LANs.
Collisions and broadcasts are expected events in modern networks. They are engineered into the design of
Ethernet and higher layer technologies. However, when collisions and broadcasts occur in numbers that are
above the optimum, network performance suffers. Collision domains and broadcast domains should be
designed to limit the negative effects of collisions and broadcasts. This module explores the effects of
collisions and broadcasts on network traffic and then describes how bridges and routers are used to segment
networks for improved performance.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Define bridging and switching
Define and describe the content-addressable memory (CAM) table
Define latency
Describe store-and-forward and cut-through packet switching modes
Explain Spanning-Tree Protocol (STP)
Define collisions, broadcasts, collision domains, and broadcast domains
Identify the Layers 1, 2, and 3 devices used to create collision domains and broadcast domains
Discuss data flow and problems with broadcasts
Explain network segmentation and list the devices used to create segments
8.1.1 Layer 2 bridging
This page will discuss the operation of Layer 2 bridges.
As more nodes are added to an Ethernet segment, use of the media increases. Ethernet is a shared media,
which means only one node can transmit data at a time. The addition of more nodes increases the demands
on the available bandwidth and places additional loads on the media. This also increases the probability of
collisions, which results in more retransmissions. A solution to the problem is to break the large segment into
parts and separate it into isolated collision domains.
To accomplish this a bridge keeps a table of MAC addresses and the associated ports. The bridge then
forwards or discards frames based on the table entries. The following steps illustrate the operation of a
bridge:
The bridge has just been started so the bridge table is empty. The bridge just waits for traffic on the
segment. When traffic is detected, it is processed by the bridge.
Host A pings Host B. Since the data is transmitted on the entire collision domain segment, both the
bridge and Host B process the packet.
The bridge adds the source address of the frame to its bridge table. Since the address was in the
source address field and the frame was received on Port 1, the frame must be associated with Port 1
in the table.
The destination address of the frame is checked against the bridge table. Since the address is not in
the table, even though it is on the same collision domain, the frame is forwarded to the other
segment. The address of Host B has not been recorded yet.
Host B processes the ping request and transmits a ping reply back to Host A. The data is transmitted
over the whole collision domain. Both Host A and the bridge receive the frame and process it.
The bridge adds the source address of the frame to its bridge table. Since the source address was
not in the bridge table and was received on Port 1, the source address of the frame must be
associated with Port 1 in the table.
The destination address of the frame is checked against the bridge table to see if its entry is there.
Since the address is in the table, the port assignment is checked. The address of Host A is
associated with the port the frame was received on, so the frame is not forwarded.
Host A pings Host C. Since the data is transmitted on the entire collision domain segment, both the
bridge and Host B process the frame. Host B discards the frame since it was not the intended
destination.
The bridge adds the source address of the frame to its bridge table. Since the address is already
entered into the bridge table the entry is just renewed.
The destination address of the frame is checked against the bridge table. Since the address is not in
the table, the frame is forwarded to the other segment. The address of Host C has not been
recorded yet.
Host C processes the ping request and transmits a ping reply back to Host A. The data is transmitted
over the whole collision domain. Both Host D and the bridge receive the frame and process it. Host
D discards the frame since it is not the intended destination.
The bridge adds the source address of the frame to its bridge table. Since the address was in the
source address field and the frame was received on Port 2, the frame must be associated with Port 2
in the table.
The destination address of the frame is checked against the bridge table to see if its entry is present.
The address is in the table but it is associated with Port 1, so the frame is forwarded to the other
segment.
When Host D transmits data, its MAC address will also be recorded in the bridge table. This is how the bridge controls traffic between two collision domains.
These are the steps that a bridge uses to forward and discard frames that are received on any of its ports.
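The learning and filtering behavior described in these steps can be summarized in a short Python sketch. This is a simplified, hypothetical model written for illustration only; the class and method names are invented and it is not the implementation of any actual bridge.

class Bridge:
    # Simplified transparent bridge: learn source addresses, then filter or forward by destination.
    def __init__(self, ports):
        self.ports = ports            # for a two-port bridge: [1, 2]
        self.table = {}               # MAC address -> port on which it was learned

    def receive(self, src_mac, dst_mac, in_port):
        # Learning: associate the source MAC with the port the frame arrived on
        self.table[src_mac] = in_port
        # Filtering and forwarding decision based on the destination MAC
        out_port = self.table.get(dst_mac)
        if out_port == in_port:
            return None                                 # same segment: discard (filter)
        if out_port is not None:
            return [out_port]                           # known destination: forward to that port only
        return [p for p in self.ports if p != in_port]  # unknown or broadcast: flood the other ports

# Hosts A and B share the segment on Port 1; Hosts C and D share the segment on Port 2.
bridge = Bridge(ports=[1, 2])
print(bridge.receive("A", "B", in_port=1))   # [2]   A learned; B unknown, so the frame is forwarded
print(bridge.receive("B", "A", in_port=1))   # None  B learned; A is on the same port, so it is filtered
print(bridge.receive("A", "C", in_port=1))   # [2]   A's entry renewed; C unknown, so forwarded
print(bridge.receive("C", "A", in_port=2))   # [1]   C learned on Port 2; A is known on Port 1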
8.1.2 Layer 2 switching
Generally, a bridge has only two ports and divides a collision domain into two parts. All decisions made by a
bridge are based on MAC or Layer 2 addresses and do not affect the logical or Layer 3 addresses. A bridge
will divide a collision domain but has no effect on a logical or broadcast domain. If a network does not have a
device that works with Layer 3 addresses, such as a router, the entire network will share the same logical
broadcast address space. A bridge will create more collision domains but will not add broadcast domains.
A switch is essentially a fast, multi-port bridge that can contain dozens of ports. Each port creates its own
collision domain. In a network of 20 nodes, 20 collision domains exist if each node is plugged into its own
switch port. If an uplink port is included, one switch creates 21 single-node collision domains. A switch
dynamically builds and maintains a content-addressable memory (CAM) table, which holds all of the
necessary MAC information for each port.

8.1.3 Switch operation


A switch is simply a bridge with many ports. When only one node is connected to a switch port, the collision
domain on the shared media contains only two nodes. The two nodes in this small segment, or collision
domain, consist of the switch port and the host connected to it. These small physical segments are called
microsegments. Another capability emerges when only two nodes are connected. In a network that uses
twisted-pair cabling, one pair is used to carry the transmitted signal from one node to the other node. A
separate pair is used for the return or received signal. It is possible for signals to pass through both pairs
simultaneously. The ability to communicate in both directions at once is known as full duplex. Most
switches are capable of supporting full duplex, as are most NICs. In full duplex mode, there is no contention
for the media. A collision domain no longer exists. In theory, the bandwidth is doubled when full duplex is
used.
In addition to faster microprocessors and memory, two other technological advances made switches possible. Content-addressable memory (CAM) works in reverse compared to conventional memory: instead of supplying an address to retrieve data, the data is presented and the memory returns the associated address. CAM allows a switch to find the port associated with a MAC address without running search algorithms. An application-specific integrated circuit (ASIC) is an integrated circuit (IC) whose functionality is customized for a particular use, such as a specific piece of equipment or project, rather than for general-purpose use. An ASIC allows some software operations to be performed in hardware. Together, these technologies greatly reduced the delays caused by software processing and enabled a switch to keep up with the data demands of many microsegments and high bit rates.

8.1.4 Latency
Latency is the delay between the time a frame begins to leave the source device and the time the first part of the frame reaches its destination. A variety of conditions can cause delays (a rough worked example follows this list):
Media delays may be caused by the finite speed at which signals travel through the physical media.
Circuit delays may be caused by the electronics that process the signal along the path.
Software delays may be caused by the decisions that software must make to implement switching and protocols.
Delays may be caused by the content of the frame and the location of the frame switching decisions. For example, a device cannot begin to switch a frame toward its destination until the destination MAC address has been read.
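The sketch below gives a rough sense of the scale of two of these delays on a 100 m, 100-Mbps twisted-pair link. The numbers used (a signal speed of roughly two thirds the speed of light in copper and a maximum-size 1518-byte frame) are assumptions for illustration only.

link_length_m = 100
signal_speed_m_per_s = 2.0e8          # assumed propagation speed in copper (about two thirds of c)
bit_rate_bps = 100e6                  # 100 Mbps
frame_bits = 1518 * 8                 # maximum-size Ethernet frame

media_delay_s = link_length_m / signal_speed_m_per_s   # time for the signal to cross the cable
transmit_time_s = frame_bits / bit_rate_bps            # time to clock the whole frame onto the wire

print(round(media_delay_s * 1e6, 2))      # about 0.5 microseconds
print(round(transmit_time_s * 1e6, 2))    # about 121.44 microseconds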
8.1.5 Switch modes
How a frame is switched to the destination port is a trade off between latency and reliability. A switch can
start to transfer the frame as soon as the destination MAC address is received. This is called cut-through
packet switching and results in the lowest latency through the switch. However, no error checking is
available. The switch can also receive the entire frame before it is sent to the destination port. This gives the
switch software an opportunity to verify the Frame Check Sequence (FCS). If the frame is invalid, it is
discarded at the switch. Since the entire frame is stored before it is forwarded, this is called store-and-forward packet switching. A compromise between cut-through and store-and-forward packet switching is
the fragment-free mode. Fragment-free packet switching reads the first 64 bytes, which includes the frame
header, and starts to send out the packet before the entire data field and checksum are read. This mode
verifies the reliability of the addresses and LLC protocol information to ensure the data will be handled
properly and arrive at the correct destination.
When cut-through packet switching is used, the source and destination ports must have the same bit rate to
keep the frame intact. This is called symmetric switching. If the bit rates are not the same, the frame must be
stored at one bit rate before it is sent out at the other bit rate. This is known as asymmetric switching. Store-and-forward mode must be used for asymmetric switching.
Asymmetric switching provides switched connections between ports with different bandwidths. Asymmetric
switching is optimized for client/server traffic flows in which multiple clients communicate with a server at
once. More bandwidth must be dedicated to the server port to prevent a bottleneck.
The Interactive Media Activity will help students become familiar with the three types of switch modes.
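The difference between the three modes comes down to how much of the frame must be received before forwarding can begin. The Python sketch below is a simplified illustration based on the descriptions above; it is not the decision logic of any particular switch, and the function name is invented for this example.

def bytes_before_forwarding(mode, frame_length):
    # How many bytes of a frame a switch must receive before it can start forwarding
    if mode == "cut-through":
        return 6                  # only the destination MAC address, so the lowest latency
    if mode == "fragment-free":
        return 64                 # the collision window, so runt fragments are filtered out
    if mode == "store-and-forward":
        return frame_length       # the whole frame, so the FCS can be verified first
    raise ValueError("unknown switching mode")

for mode in ("cut-through", "fragment-free", "store-and-forward"):
    print(mode, bytes_before_forwarding(mode, frame_length=1518))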
8.1.6 Spanning-Tree Protocol
When multiple switches are arranged in a simple hierarchical tree, switching loops are unlikely to occur. However, switched networks are often designed with redundant paths to provide reliability and fault tolerance. Redundant paths are desirable, but they can have undesirable side effects, such as switching loops. Switching loops can occur by design or by accident, and they can lead to broadcast storms that will rapidly overwhelm a network. STP is a standards-based Layer 2 protocol that is used to avoid switching loops. Each switch in a LAN that uses STP sends messages called Bridge Protocol Data Units (BPDUs) out all its ports to let other switches know of its existence. This information is used to elect a root bridge for the network. The switches use the spanning-tree algorithm (STA) to resolve and shut down the redundant paths.
Each port on a switch that uses STP exists in one of the following five states:
Blocking
Listening
Learning
Forwarding
Disabled
A port moves through these five states as follows:
From initialization to blocking
From blocking to listening or to disabled
From listening to learning or to disabled
From learning to forwarding or to disabled
From forwarding to disabled
STP is used to create a logical hierarchical tree with no loops. However, the alternate paths are still available
if necessary.
The Interactive Media Activity will help students learn the function of each spanning-tree state.
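The five port states and the allowed movements between them, as listed above, can be captured in a small Python sketch. The transition table below simply encodes that list; it is illustrative only and is not a full STP implementation.

STP_TRANSITIONS = {
    "initialization": {"blocking"},
    "blocking": {"listening", "disabled"},
    "listening": {"learning", "disabled"},
    "learning": {"forwarding", "disabled"},
    "forwarding": {"disabled"},
    "disabled": set(),
}

def can_transition(current_state, next_state):
    # A transition is allowed only if it appears in the list above
    return next_state in STP_TRANSITIONS.get(current_state, set())

print(can_transition("blocking", "listening"))     # True
print(can_transition("blocking", "forwarding"))    # False: a port must pass through listening and learning first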

8.2 Collision Domains and Broadcast Domains
8.2.1 Shared media environments
Here are some examples of shared media and directly connected networks:
Shared media environment - This occurs when multiple hosts have access to the same medium. For example, if several PCs are attached to the same physical wire or optical fiber, they all share the same media environment.
Extended shared media environment - This is a special type of shared media environment in which networking devices can extend the environment so that it can accommodate multiple access or longer cable distances.
Point-to-point network environment - This is widely used in dialup network connections and is most common for home users. It is a shared network environment in which one device is connected to only one other device. An example is a PC that is connected to an Internet service provider through a modem and a phone line.
Collisions only occur in a shared environment. A highway system is an example of a shared environment in
which collisions can occur because multiple vehicles use the same roads. As more vehicles enter the
system, collisions become more likely. A shared data network is much like a highway. Rules exist to
determine who has access to the network medium. However, sometimes the rules cannot handle the traffic
load and collisions occur.
8.2.2 Collision domains


Collision domains are the connected physical network segments where collisions can occur. Collisions
cause the network to be inefficient. Every time a collision happens on a network, all transmission stops for a
period of time. The length of this period of time varies and is determined by a backoff algorithm for each
network device.
The types of devices that interconnect the media segments define collision domains. These devices have
been classified as OSI Layer 1, 2 or 3 devices. Layer 2 and Layer 3 devices break up collision domains. This
process is also known as segmentation.
Layer 1 devices such as repeaters and hubs are mainly used to extend the Ethernet cable segments. This
allows more hosts to be added. However, every host that is added increases the amount of potential traffic
on the network. Layer 1 devices forward all data that is sent on the media. As more traffic is transmitted
within a collision domain, collisions become more likely. This results in diminished network performance,
which will be even more pronounced if all the computers use large amounts of bandwidth. Layer 1 devices
can cause the length of a LAN to be overextended and result in collisions.
The four repeater rule in Ethernet states that no more than four repeaters or repeating hubs can be between
any two computers on the network. For a repeated 10BASE-T network to function properly, the round-trip
delay calculation must be within certain limits. This ensures that all the workstations will be able to hear all
the collisions on the network. Repeater latency, propagation delay, and NIC latency all contribute to the four
repeater rule. If the four repeater rule is violated, the maximum delay limit may be exceeded. A late
collision is when a collision happens after the first 64 bytes of the frame are transmitted. The chipsets in
NICs are not required to retransmit automatically when a late collision occurs. These late collision frames
add delay that is referred to as consumption delay. As consumption delay and latency increase, network
performance decreases.
The 5-4-3-2-1 rule requires that the following guidelines should not be exceeded:
Five segments of network media
Four repeaters or hubs
Three host segments of the network
Two link sections with no hosts
One large collision domain
The 5-4-3-2-1 rule also provides guidelines to keep round-trip delay time within acceptable limits.

8.2.3 Segmentation
The history of how Ethernet handles collisions and collision domains dates back to research at the University
of Hawaii in 1970. In its attempts to develop a wireless communication system for the islands of Hawaii,
university researchers developed a protocol called Aloha. The Ethernet protocol is actually based on the
Aloha protocol.
One important skill for a networking professional is the ability to recognize collision domains. A collision
domain is created when several computers are connected to a single shared-access medium that is not
attached to other network devices. This situation limits the number of computers that can use the segment.
Layer 1 devices extend but do not control collision domains.
Layer 2 devices segment or divide collision domains. They use the MAC address assigned to every
Ethernet device to control frame propagation. Layer 2 devices are bridges and switches. They keep track of
the MAC addresses and their segments. This allows these devices to control the flow of traffic at the Layer 2
level. This function makes networks more efficient. It allows data to be transmitted on different segments of
the LAN at the same time without collisions. Bridges and switches divide collision domains into smaller parts.
Each part becomes its own collision domain.
These smaller collision domains will have fewer hosts and less traffic than the original domain. The fewer
hosts that exist in a collision domain, the more likely the media will be available. If the traffic between bridged segments is not too heavy, a bridged network works well. Otherwise, the Layer 2 device can slow down communication and become a bottleneck.
Layer 2 and 3 devices do not forward collisions. Layer 3 devices divide collision domains into smaller
domains.
Layer 3 devices also perform other functions. These functions will be covered in the section on broadcast
domains.
The Interactive Media Activity will teach students about network segmentation.

8.2.4 Layer 2 broadcasts


To communicate with all collision domains, protocols use broadcast and multicast frames at Layer 2 of the
OSI model. When a node needs to communicate with all hosts on the network, it sends a broadcast frame
with a destination MAC address 0xFFFFFFFFFFFF. This is an address to which the NIC of every host must
respond.
Layer 2 devices must flood all broadcast and multicast traffic. The accumulation of broadcast and multicast
traffic from each device in the network is referred to as broadcast radiation. In some cases, the circulation of
broadcast radiation can saturate the network so that there is no bandwidth left for application data. In this
case, new network connections cannot be made and established connections may be dropped. This situation
is called a broadcast storm. The probability of broadcast storms increases as the switched network grows.
A NIC must rely on the CPU to process each broadcast or multicast group it belongs to. Therefore, broadcast
radiation affects the performance of hosts in the network. Figure shows the results of tests that Cisco
conducted on the effect of broadcast radiation on the CPU performance of a Sun SPARCstation 2 with a
standard built-in Ethernet card. The results indicate that an IP workstation can be effectively shut down by
broadcasts that flood the network. Although extreme, broadcast peaks of thousands of broadcasts per
second have been observed during broadcast storms. Tests in a controlled environment with a range of
broadcasts and multicasts on the network show measurable system degradation with as few as 100
broadcasts or multicasts per second.
A host does not usually benefit if it processes a broadcast when it is not the intended destination. The host is
not interested in the service that is advertised. High levels of broadcast radiation can noticeably degrade host
performance. The three sources of broadcasts and multicasts in IP networks are workstations, routers, and
multicast applications.
Workstations broadcast an Address Resolution Protocol (ARP) request every time they need to locate a
MAC address that is not in the ARP table. Although the numbers in the figure might appear low, they
represent an average, well-designed IP network. When broadcast and multicast traffic peak due to storm
behavior, peak CPU loss can be much higher than average. Broadcast storms can be caused by a device
that requests information from a network that has grown too large. So many responses are sent to the
original request that the device cannot process them, or the first request triggers similar requests from other
devices that effectively block normal traffic flow on the network.
As an example, the command telnet mumble.com translates into an IP address through a Domain Name
System (DNS) search. An ARP request is broadcast to locate the MAC address. Generally, IP workstations
cache 10 to 100 addresses in their ARP tables for about 2 hours. The ARP rate for a typical workstation
might be about 50 addresses every 2 hours or 0.007 ARPs per second. Therefore, 2000 IP end stations will
produce about 14 ARPs per second.
The routing protocols that are configured on a network can increase broadcast traffic significantly. Some
administrators configure all workstations to run Routing Information Protocol (RIP) as a redundancy and
reachability policy. Every 30 seconds, RIPv1 uses broadcasts to retransmit the entire RIP routing table to
other RIP routers. If 2000 workstations were configured to run RIP and, on average, 50 packets were
required to transmit the routing table, the workstations would generate 3333 broadcasts per second. Most
network administrators only configure RIP on five to ten routers. For a routing table that has a size of 50
packets, 10 RIP routers would generate about 16 broadcasts per second.
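The ARP and RIP figures quoted above follow from simple arithmetic. The sketch below reproduces them using the assumptions stated in the text: roughly 50 ARP requests per workstation every 2 hours, 50 packets per RIP update, and RIPv1 updates every 30 seconds.

# ARP broadcast rate
arps_per_station_per_s = 50 / (2 * 3600)                # about 0.007 ARPs per second per workstation
print(round(2000 * arps_per_station_per_s, 1))          # about 13.9, i.e. roughly 14 ARPs per second

# RIPv1 broadcast rate (the entire routing table is retransmitted every 30 seconds)
packets_per_update = 50
print(round(2000 * packets_per_update / 30))            # about 3333 broadcasts per second for 2000 workstations
print(round(10 * packets_per_update / 30, 1))           # about 16.7 per second for 10 routers (roughly the 16 cited above)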
IP multicast applications can adversely affect the performance of large, scaled, switched networks.
Multicasting is an efficient way to send a stream of multimedia data to many users on a shared-media hub.
However, it affects every user on a flat switched network. A packet video application could generate a 7-MB
stream of multicast data that would be sent to every segment. This would result in severe congestion.

8.2.5 Broadcast domains


A broadcast domain is a group of collision domains that are connected by Layer 2 devices. When a LAN is
broken up into multiple collision domains, each host in the network has more opportunities to gain access to
the media. This reduces the chance of collisions and increases available bandwidth for every host.
Broadcasts are forwarded by Layer 2 devices. Excessive broadcasts can reduce the efficiency of the entire
LAN. Broadcasts have to be controlled at Layer 3 since Layers 1 and 2 devices cannot control them. A
broadcast domain includes all of the collision domains that process the same broadcast frame. This includes
all the nodes that are part of the network segment bounded by a Layer 3 device. Broadcast domains are
controlled at Layer 3 because routers do not forward broadcasts. Routers actually work at Layers 1, 2, and 3.
Like all Layer 1 devices, routers have a physical connection and transmit data onto the media. Routers also
have a Layer 2 encapsulation on all interfaces and perform the same functions as other Layer 2 devices.
Layer 3 allows routers to segment broadcast domains.
In order for a packet to be forwarded through a router, it must already have been processed by a Layer 2 device and the frame information stripped off. Layer 3 forwarding is based on the destination IP address, not the MAC address. For a packet to be forwarded, it must contain a destination IP address that is outside the range of addresses assigned to the LAN, and the router must have an entry for that destination in its routing table.
8.2.6 Introduction to data flow
Data flow in the context of collision and broadcast domains focuses on how data frames propagate through a
network. It refers to the movement of data through Layers 1, 2 and 3 devices and how data must be
encapsulated to effectively make that journey. Remember that data is encapsulated at the network layer with
an IP source and destination address, and at the data-link layer with a MAC source and destination address.
A good rule to follow is that a Layer 1 device always forwards the frame, while a Layer 2 device wants to
forward the frame. In other words, a Layer 2 device will forward the frame unless something prevents it from
doing so. A Layer 3 device will not forward the frame unless it has to. Using this rule will help identify how
data flows through a network.
Layer 1 devices do no filtering, so everything that is received is passed on to the next segment. The frame is
simply regenerated and retimed and thus returned to its original transmission quality. Any segments
connected by Layer 1 devices are part of the same domain, both collision and broadcast.
Layer 2 devices filter data frames based on the destination MAC address. A frame is forwarded if it is going
to an unknown destination outside the collision domain. The frame will also be forwarded if it is a broadcast,
multicast, or a unicast going outside of the local collision domain. The only time that a frame is not forwarded
is when the Layer 2 device finds that the sending host and the receiving host are in the same collision
domain. A Layer 2 device, such as a bridge, creates multiple collision domains but maintains only one
broadcast domain.
Layer 3 devices filter data packets based on IP destination address. The only way that a packet will be
forwarded is if its destination IP address is outside of the broadcast domain and the router has an identified
location to send the packet. A Layer 3 device creates multiple collision and broadcast domains.
Data flow through a routed, IP-based network involves data moving across traffic management devices at Layers 1, 2, and 3 of the OSI model. Layer 1 is used for transmission across the physical media, Layer 2 for collision domain management, and Layer 3 for broadcast domain management.

8.2.7 What is a network segment?


As with many terms and acronyms, segment has multiple meanings. The dictionary definition of the term is
as follows:
A separate piece of something
One of the parts into which an entity or quantity is divided or marked off by or as if by natural
boundaries
In the context of data communication, the following definitions are used:
Section of a network that is bounded by bridges, routers, or switches.
In a LAN using a bus topology, a segment is a continuous electrical circuit that is often connected to
other such segments with repeaters.
Term used in the TCP specification to describe a single transport layer unit of information. The terms
datagram, frame, message, and packet are also used to describe logical information groupings at
various layers of the OSI reference model and in various technology circles.
To properly define the term segment, the context of the usage must be presented with the word. If segment is
used in the context of TCP, it would be defined as a separate piece of the data. If segment is being used in
the context of physical networking media in a routed network, it would be seen as one of the parts or
sections of the total network.
The Interactive Media Activity will help students identify three types of segments.

Summary
Ethernet is a shared media, baseband technology, which means only one node can transmit data at a time.
Increasing the number of nodes on a single segment increases demand on the available bandwidth. This in
turn increases the probability of collisions. A solution to the problem is to break a large network segment into
parts and separate it into isolated collision domains. Bridges and switches are used to segment the network
into multiple collision domains.
A bridge builds a bridge table from the source addresses of the frames it processes. An address is associated
with the port the frame came in on. Eventually the bridge table contains enough address information to allow
the bridge to forward a frame out a particular port based on the destination address. This is how the bridge
controls traffic between two collision domains.
Switches learn in much the same way as bridges but provide a virtual connection directly between the source
and destination nodes, rather than the source collision domain and destination collision domain. Each port
creates its own collision domain. A switch dynamically builds and maintains a Content-Addressable Memory
(CAM) table, holding all of the necessary MAC information for each port. CAM is memory that essentially
works backwards compared to conventional memory. Entering data into the memory will return the
associated address.
Two devices connected through switch ports become the only two nodes in a small collision domain. These
small physical segments are called microsegments. Microsegments connected using twisted pair cabling are
capable of full-duplex communications. In full duplex mode, when separate wires are used for transmitting
and receiving between two hosts, there is no contention for the media. Thus, a collision domain no longer
exists.
There is a propagation delay for signals traveling along the transmission medium. Additionally, as signals are processed by network devices, further delay, or latency, is introduced.
How a frame is switched affects latency and reliability. A switch can start to transfer the frame as soon as the
destination MAC address is received. Switching at this point is called cut-through switching and results in the
lowest latency through the switch. However, cut-through switching provides no error checking. At the other
extreme, the switch can receive the entire frame before sending it out the destination port. This is called
store-and-forward switching. Fragment-free switching reads and checks the first sixty-four bytes of the frame
before forwarding it to the destination port.
Switched networks are often designed with redundant paths to provide for reliability and fault tolerance.
Switches use the Spanning-Tree Protocol (STP) to identify and shut down redundant paths through the
network. The result is a logical hierarchical path through the network with no loops.
Using Layer 2 devices to break up a LAN into multiple collision domains increases available bandwidth for
every host. But Layer 2 devices forward broadcasts, such as ARP requests. A Layer 3 device is required to
control broadcasts and define broadcast domains.
Data flow through a routed IP network involves data moving across traffic management devices at Layers 1, 2, and 3 of the OSI model. Layer 1 is used for transmission across the physical media, Layer 2 for collision domain management, and Layer 3 for broadcast domain management.
Overview
The Internet was developed to provide a communication network that could function in wartime. Although the
Internet has evolved from the original plan, it is still based on the TCP/IP protocol suite. The design of
TCP/IP is ideal for the decentralized and robust Internet. Many common protocols were designed based on
the four-layer TCP/IP model.
It is useful to know both the TCP/IP and OSI network models. Each model uses its own structure to explain
how a network works. However, there is much overlap between the two models. A system administrator
should be familiar with both models to understand how a network functions.
Any device on the Internet that wants to communicate with other Internet devices must have a unique
identifier. The identifier is known as the IP address because routers use a Layer 3 protocol called the IP
protocol to find the best route to that device. The current version of IP is IPv4. This was designed before
there was a large demand for addresses. Explosive growth of the Internet has threatened to deplete the
supply of IP addresses. Subnets, Network Address Translation (NAT), and private addresses are used to
extend the supply of IP addresses. IPv6 improves on IPv4 and provides a much larger address space.
Administrators can use IPv6 to integrate or eliminate the methods used to work with IPv4.
In addition to the physical MAC address, each computer needs a unique IP address to be part of the Internet.
This is also called the logical address. There are several ways to assign an IP address to a device. Some
devices always have a static address. Others have a temporary address assigned to them each time they
connect to the network. When a dynamically assigned IP address is needed, a device can obtain it several
ways.
For efficient routing to occur between devices, issues such as duplicate IP addresses must be resolved.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Explain why the Internet was developed and how TCP/IP fits the design of the Internet
List the four layers of the TCP/IP model
Describe the functions of each layer of the TCP/IP model
Compare the OSI model and the TCP/IP model
Describe the function and structure of IP addresses
Understand why subnetting is necessary
Explain the difference between public and private addressing
Understand the function of reserved IP addresses
Explain the use of static and dynamic addressing for a device
Understand how dynamic addresses can be assigned with RARP, BootP, and DHCP
Use ARP to obtain the MAC address to send a packet to another device
Understand the issues related to addressing between networks.
9.1 Introduction to TCP/IP
9.1.1 History and future of TCP/IP


The U.S. Department of Defense (DoD) created the TCP/IP reference model because it wanted a network
that could survive any conditions. To illustrate further, imagine a world, crossed by multiple cable runs, wires,
microwaves, optical fibers, and satellite links. Then imagine a need for data to be transmitted without regard
for the condition of any particular node or network. The U.S. DoD required reliable data transmission to any
destination on the network under any circumstances. The creation of the TCP/IP model helped to solve this
difficult design problem. The TCP/IP model has since become the standard on which the Internet is based.
Think about the layers of the TCP/IP model in relation to the original intent of the Internet. This will
help reduce confusion. The four layers of the TCP/IP model are the application layer, transport layer, Internet
layer, and network access layer. Some of the layers in the TCP/IP model have the same name as layers in
the OSI model. It is critical not to confuse the layer functions of the two models because the layers include
different functions in each model. The present version of TCP/IP was standardized in September of 1981.

9.1.2 Application layer


The application layer handles high-level protocols, representation, encoding, and dialog control. The TCP/IP
protocol suite combines all application related issues into one layer. It ensures that the data is properly
packaged before it is passed on to the next layer. TCP/IP includes Internet and transport layer specifications
such as IP and TCP as well as specifications for common applications. TCP/IP has protocols to support file
transfer, e-mail, and remote login, in addition to the following:
File Transfer Protocol (FTP) - FTP is a reliable, connection-oriented service that uses TCP to transfer files between systems that support FTP. It supports bi-directional binary file and ASCII file transfers.
Trivial File Transfer Protocol (TFTP) - TFTP is a connectionless service that uses the User Datagram Protocol (UDP). TFTP is used on the router to transfer configuration files and Cisco IOS images, and to transfer files between systems that support TFTP. It is useful in some LANs because it operates faster than FTP in a stable environment.
Network File System (NFS) - NFS is a distributed file system protocol suite developed by Sun Microsystems that allows file access to a remote storage device such as a hard disk across a network.
Simple Mail Transfer Protocol (SMTP) - SMTP administers the transmission of e-mail over computer networks. It does not provide support for transmission of data other than plain text.
Telnet - Telnet provides the capability to remotely access another computer. It enables a user to log into an Internet host and execute commands. A Telnet client is referred to as a local host. A Telnet server is referred to as a remote host.
Simple Network Management Protocol (SNMP) - SNMP is a protocol that provides a way to monitor and control network devices. SNMP is also used to manage configurations, statistics, performance, and security.
Domain Name System (DNS) - DNS is a system used on the Internet to translate domain names and publicly advertised network nodes into IP addresses.
The Interactive Media Activity will help students become familiar with the application layer protocols.

9.1.3 Transport layer


The transport layer provides a logical connection between a source host and a destination host. Transport
protocols segment and reassemble data sent by upper-layer applications into the same data stream, or
logical connection, between end points.
The Internet is often represented by a cloud. The transport layer sends data packets from a source to a
destination through the cloud.
The primary duty of the transport layer is to provide end-to-end control
and reliability as data travels through this cloud. This is accomplished through the use of sliding windows,
sequence numbers, and acknowledgments. The transport layer also defines end-to-end connectivity
between host applications. Transport layer protocols include TCP and UDP.
The functions of TCP and UDP are as follows:
Segment upper-layer application data
Send segments from one end device to another
The functions of TCP are as follows:
Establish end-to-end operations
Provide flow control through the use of sliding windows
Ensure reliability through the use of sequence numbers and acknowledgments
The Interactive Media Activity will help students become familiar with the transport layer protocols.

9.1.4 Internet layer


The purpose of the Internet layer is to select the best path through the network for packets to travel. The
main protocol that functions at this layer is IP. Best path determination and packet switching occur at this
layer.
The following protocols operate at the TCP/IP Internet layer:
IP provides connectionless, best-effort delivery routing of packets. IP is not concerned with the
content of the packets but looks for a path to the destination.
Internet Control Message Protocol (ICMP) provides control and messaging capabilities.
Address Resolution Protocol (ARP) determines the data link layer address, or MAC address, for
known IP addresses.
Reverse Address Resolution Protocol (RARP) determines the IP address for a known MAC address.
IP performs the following operations:
Defines a packet and an addressing scheme
Transfers data between the Internet layer and network access layer
Routes packets to remote hosts
IP is sometimes referred to as an unreliable protocol. This does not mean that IP will not accurately deliver
data across a network. IP is unreliable because it does not perform error checking and correction. That
function is handled by upper layer protocols from the transport or application layers.
The Interactive Media Activity will help students become familiar with the protocols used in the Internet layer.

9.1.5 Network access layer


The network access layer allows an IP packet to make a physical link to the network media. It includes the
LAN and WAN technology details and all the details contained in the OSI physical and data link layers.
Drivers for software applications, modem cards, and other devices operate at the network access layer. The
network access layer defines the procedures used to interface with the network hardware and access the
transmission medium. Modem protocol standards such as Serial Line Internet Protocol (SLIP) and Point-to-Point Protocol (PPP) provide network access through a modem connection. Many protocols are required to
determine the hardware, software, and transmission-medium specifications at this layer. This can lead to
confusion for users. Most of the recognizable protocols operate at the transport and Internet layers of the
TCP/IP model.
Network access layer protocols also map IP addresses to physical hardware addresses and encapsulate IP
packets into frames. The network access layer defines the physical media connection based on the
hardware type and network interface.
Here is an example of a network access layer configuration that involves a Windows system set up with a
third party NIC. The NIC would automatically be detected by some versions of Windows and then the proper
drivers would be installed. In an older version of Windows, the user would have to specify the network card
driver. The card manufacturer supplies these drivers on disks or CD-ROMs.
The Interactive Media Activity will help students become familiar with the network access layer protocols.

9.1.6 The OSI model and the TCP/IP model


The OSI and TCP/IP models have many similarities:
Both have layers.
Both have application layers, though they include different services.
Both have comparable transport and network layers.
Both use packet-switched instead of circuit-switched technology.
Networking professionals need to know both models.
Here are some differences of the OSI and TCP/IP models:
TCP/IP combines the OSI application, presentation, and session layers into its application layer.
TCP/IP combines the OSI data link and physical layers into its network access layer.
TCP/IP appears simpler because it has fewer layers.
When the TCP/IP transport layer uses UDP it does not provide reliable delivery of packets. The
transport layer in the OSI model always does.
The Internet was developed based on the standards of the TCP/IP protocols. The TCP/IP model gains
credibility because of its protocols. The OSI model is not generally used to build networks. The OSI model is
used as a guide to help students understand the communication process.
The Interactive Media Activity will help students understand the differences between the TCP/IP and OSI
reference models.

9.1.7 Internet architecture


The Internet enables nearly instantaneous worldwide data communications between anyone, anywhere, at
any time.
LANs are networks within limited geographic areas. However, LANs are limited in scale. Although there have
been technological advances to improve the speed of communications, such as Metro Optical, Gigabit, and
10-Gigabit Ethernet, distance is still a problem.
Students can focus on the communications between source and destination computers or intermediate
computers at the application layer to get an overview of the Internet architecture. Identical instances of an
application could be placed on all the computers in a network to ease the delivery of messages. However,
this does not scale well. New software would require new applications to be installed on every computer in
the network. For new hardware to function properly, the software would need to be modified. Any failure of an
intermediate computer or computer application would cause a break in the chain of the messages that are
passed.
The Internet uses the principle of network layer interconnection. The goal is to build the functionality of the
network in independent modules. This allows a diversity of LAN technologies at Layers 1 and 2 of the OSI
model and a diversity of applications at Layers 5, 6, and 7. The OSI model provides a mechanism where the
details of the lower and the upper layers are separated. This allows intermediate networking devices to relay
traffic without details about the LAN.
This leads to the concept of internetworks, or networks that consist of many networks. A network of networks
is called an internetwork, which is indicated with the lowercase i. The network on which the World Wide Web
(www) runs is the Internet, which is indicated with a capital I. Internetworks must be scalable with regard to
the number of networks and computers attached. They must also be able to handle the transport of data
across vast distances. An internetwork must be flexible to account for constant technological innovations. It
must be able to adjust to dynamic conditions on the network. And internetworks must be cost-effective.
Internetworks must be designed to permit data communications to anyone, anywhere, at any time.
Figure summarizes the connection of one physical network to another through a special purpose computer
called a router. These networks are described as directly connected to the router. The router is needed to
handle any path decisions required for the two networks to communicate. Many routers are needed to handle
large volumes of network traffic.
Figure extends the idea to three physical networks connected by two routers. Routers make complex
decisions to allow users on all the networks to communicate with each other. Not all networks are directly
connected to one another. The router must have some method to handle this situation.
One option is for a router to keep a list of all computers and all the paths to them. The router would then
decide how to forward data packets based on this reference table. Packets would be forwarded based on the
IP address of the destination computer. This option would become difficult as more users were added to the
network. Scalability is introduced when the router keeps a list of all networks, but leaves the local delivery
details to the local physical networks. In this situation, the routers pass messages to other routers. Each
router shares information about its connected network.
Figure shows the transparency that users require. However, the physical and logical structures inside the
Internet cloud can be extremely complex as shown in Figure . The Internet has grown rapidly to allow more
and more users. The fact that the Internet has grown so large, with more than 90,000 core routes and
300,000,000 end users, proves the effectiveness of the Internet architecture.
Two computers located anywhere in the world that follow certain hardware, software, and protocol
specifications can communicate reliably. The standardization of ways to move data across networks has
made the Internet possible.

9.2 Internet Addresses
9.2.1 IP addressing
For any two systems to communicate, they must be able to identify and locate each other. The addresses in
Figure are not actual network addresses. They represent and show the concept of address grouping.
A computer may be connected to more than one network. In this situation, the system must be given more
than one address. Each address will identify the connection of the computer to a different network. Each
connection point, or interface, on a device has an address to a network. This will allow other computers to
locate the device on that particular network. The combination of the network address and the host address
creates a unique address for each device on a network. Each computer in a TCP/IP network must be given a
unique identifier, or IP address. This address, which operates at Layer 3, allows one computer to locate
another computer on a network. All computers also have a unique physical address, which is known as a
MAC address. These are assigned by the manufacturer of the NIC. MAC addresses operate at Layer 2 of the
OSI model.
An IP address is a 32-bit sequence of ones and zeros. Figure shows a sample 32-bit number. To make the
IP address easier to work with, it is usually written as four decimal numbers separated by periods. For
example, an IP address of one computer is 192.168.1.2. Another computer might have the address
128.10.2.1. This is called the dotted decimal format. Each part of the address is called an octet because it is
made up of eight binary digits. For example, the IP address 192.168.1.8 would be
11000000.10101000.00000001.00001000 in binary notation. The dotted decimal notation is an easier
method to understand than the binary ones and zeros method. This dotted decimal notation also prevents a
large number of transposition errors that would result if only the binary numbers were used.
Both the binary and decimal numbers in Figure represent the same values. However, the address is easier
to understand in dotted decimal notation. This is one of the common problems associated with binary
numbers. The long strings of repeated ones and zeros make errors more likely.
It is easy to see the relationship between the numbers 192.168.1.8 and 192.168.1.9. The binary values
11000000.10101000.00000001.00001000 and 11000000.10101000.00000001.00001001 are not as easy to
recognize. It is more difficult to determine that the binary values are consecutive numbers.
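A few lines of Python make the relationship between the two notations concrete. This sketch is illustrative only; the helper function name is invented for this example.

def to_binary(ip_address):
    # Convert a dotted decimal IPv4 address to dotted binary, eight bits per octet
    return ".".join(f"{int(octet):08b}" for octet in ip_address.split("."))

print(to_binary("192.168.1.8"))   # 11000000.10101000.00000001.00001000
print(to_binary("192.168.1.9"))   # 11000000.10101000.00000001.00001001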

9.2.2 Decimal and binary conversion


There is more than one way to convert between decimal and binary numbers, and students may find one method easier than another. It is a matter of personal preference.
When converting a decimal number to binary, the biggest power of two that will fit into the decimal number must be determined. Since this process is designed for working with computers, the most logical place to start is with the largest values that will fit into a byte or two bytes. As mentioned earlier, the most common
grouping of bits is eight, which make up one byte. However, sometimes the largest value that can be held in
one byte is not large enough for the values needed. To accommodate this, bytes are combined. Instead of
having two eight-bit numbers, one 16-bit number is created. Instead of three eight-bit numbers, one 24-bit
number is created. The same rules apply as they did for eight-bit numbers. Multiply the previous position
value by two to get the present column value.
Since values in computers are usually referenced in bytes, it is easiest to start with byte boundaries and calculate from there. Start by calculating a couple of examples, the first being 6,783. Since this number is greater than 255, the largest value possible in a single byte, two bytes will be used. Start calculating from 2^15, the place value of the highest bit in two bytes.
The binary equivalent of 6,783 is 00011010 01111111.
The second example is 104. Since this number is less than 255, it can be represented by one byte. The
binary equivalent of 104 is 01101000.
This method works for any decimal number. Consider the decimal number one million. Since one million is greater than 65,535, the largest value that can be held in two bytes, at least three bytes will be needed. Multiplying by two until the 24th bit (three bytes) is reached gives a place value of 8,388,608, which means that the largest value 24 bits can hold is 16,777,215. So, starting at the 24th bit, follow the process until zero is reached. Continuing with the procedure described, it is determined that the decimal number one million is equal to the binary number 00001111 01000010 01000000.
Figure includes some decimal to binary conversion exercises.
Binary to decimal conversion is just the opposite. Simply place the binary number in the table and, wherever there is a one in a column, add that column's value to the total. For example, convert 00000100 00011101 to decimal. The answer is 1053.
Figure includes some binary to decimal conversion exercises.
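The worked examples in this section can be checked with a short Python sketch. The function below is just one convenient way to perform the conversion and pad the result to whole bytes, as in the examples above; it is illustrative only.

def decimal_to_binary(value):
    # Round the number of bits up to a whole number of bytes, then group the bits eight at a time
    bits = max(8, (value.bit_length() + 7) // 8 * 8)
    binary = f"{value:0{bits}b}"
    return " ".join(binary[i:i + 8] for i in range(0, bits, 8))

print(decimal_to_binary(6783))       # 00011010 01111111
print(decimal_to_binary(104))        # 01101000
print(decimal_to_binary(1000000))    # 00001111 01000010 01000000

# Binary to decimal is the reverse conversion
print(int("0000010000011101", 2))    # 1053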
9.2.3 IPv4 addressing
A router uses IP to forward packets from the source network to the destination network. The packets must
include an identifier for both the source and destination networks. A router uses the IP address of the
destination network to deliver a packet to the correct network. When the packet arrives at a router connected
to the destination network, the router uses the IP address to locate the specific computer on the network.
This system works in much the same way as the national postal system. When the mail is routed, the zip
code is used to deliver it to the post office at the destination city. That post office must use the street address
to locate the final destination in the city.
Every IP address also has two parts. The first part identifies the network where the system is connected
and the second part identifies the system. As is shown in Figure , each octet ranges from 0 to 255. Each one of the octets breaks down into 256 subgroups, and these break down into another 256 subgroups with 256 addresses in each. By referring to the group address directly above a group in the hierarchy, all of the groups that branch from that address can be referenced as a single unit.
This kind of address is called a hierarchical address, because it contains different levels. An IP address
combines these two identifiers into one number. This number must be a unique number, because duplicate
addresses would make routing impossible. The first part identifies the system's network address. The second
part, called the host part, identifies which particular machine it is on the network.
IP addresses are divided into classes to define the large, medium, and small networks. Class A addresses
are assigned to larger networks. Class B addresses are used for medium-sized networks, and Class C for
small networks.
The first step in determining which part of the address identifies the network and which
part identifies the host is identifying the class of an IP address.
The Interactive Media Activity will require students to identify the different classes of addresses.

9.2.4 Class A, B, C, D, and E IP addresses


To accommodate different size networks and aid in classifying these networks, IP addresses are divided into
groups called classes. This is known as classful addressing. Each complete 32-bit IP address is broken
down into a network part and a host part. A bit or bit sequence at the start of each address determines the
class of the address. There are five IP address classes as shown in Figure .
The Class A address was designed to support extremely large networks, with more than 16 million host
addresses available. Class A IP addresses use only the first octet to indicate the network address. The
remaining three octets provide for host addresses.
The first bit of a Class A address is always 0. With that first bit a 0, the lowest number that can be
represented is 00000000, decimal 0. The highest number that can be represented is 01111111, decimal 127.
The numbers 0 and 127 are reserved and cannot be used as network addresses. Any address that starts
with a value between 1 and 126 in the first octet is a Class A address.
The 127.0.0.0 network is reserved for loopback testing. Routers or local machines can use this address to
send packets back to themselves. Therefore, this number cannot be assigned to a network.
The Class B address was designed to support the needs of moderate to large-sized networks. A Class B
IP address uses the first two of the four octets to indicate the network address. The other two octets specify
host addresses.
The first two bits of the first octet of a Class B address are always 10. The remaining six bits may be
populated with either 1s or 0s. Therefore, the lowest number that can be represented with a Class B address
is 10000000, decimal 128. The highest number that can be represented is 10111111, decimal 191. Any
address that starts with a value in the range of 128 to 191 in the first octet is a Class B address.
The Class C address space is the most commonly used of the original address classes. This address
space was intended to support small networks with a maximum of 254 hosts.
A Class C address begins with binary 110. Therefore, the lowest number that can be represented is
11000000, decimal 192. The highest number that can be represented is 11011111, decimal 223. If an
address contains a number in the range of 192 to 223 in the first octet, it is a Class C address.
The Class D address class was created to enable multicasting in an IP address. A multicast address is a
unique network address that directs packets with that destination address to predefined groups of IP
addresses. Therefore, a single station can simultaneously transmit a single stream of data to multiple
recipients.
The Class D address space, much like the other address spaces, is mathematically constrained. The first
four bits of a Class D address must be 1110. Therefore, the first octet range for Class D addresses is
11100000 to 11101111, or 224 to 239. An IP address that starts with a value in the range of 224 to 239 in the
first octet is a Class D address.
A Class E address has been defined. However, the Internet Engineering Task Force (IETF) reserves these
addresses for its own research. Therefore, no Class E addresses have been released for use in the Internet.
The first four bits of a Class E address are always set to 1s. Therefore, the first octet range for Class E
addresses is 11110000 to 11111111, or 240 to 255.
Figure shows the IP address range of the first octet both in decimal and binary for each IP address class.
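The first octet ranges described above translate directly into code. The following Python sketch is a simplified classful classifier written for illustration; the function name is invented for this example.

def address_class(ip_address):
    # Identify the class of an IPv4 address from the value of its first octet
    first_octet = int(ip_address.split(".")[0])
    if 1 <= first_octet <= 126:
        return "A"                 # 0 is reserved and network 127.0.0.0 is used for loopback
    if 128 <= first_octet <= 191:
        return "B"
    if 192 <= first_octet <= 223:
        return "C"
    if 224 <= first_octet <= 239:
        return "D"                 # multicast
    if 240 <= first_octet <= 255:
        return "E"                 # reserved for research
    return "reserved"

print(address_class("10.1.1.1"))      # A
print(address_class("176.10.16.1"))   # B
print(address_class("192.168.1.8"))   # C
print(address_class("224.0.0.5"))     # D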

9.2.5 Reserved IP addresses


Certain host addresses are reserved and cannot be assigned to devices on a network. These reserved host
addresses include the following:
Network address Used to identify the network itself
In Figure , the section that is identified by the upper box represents the 198.150.11.0 network. Data that is
sent to any host on that network (198.150.11.1 - 198.150.11.254) will be seen outside of the local area network as 198.150.11.0. The only time that the host numbers matter is when the data is on the local area
network. The LAN that is contained in the lower box is treated the same as the upper LAN, except that its
network number is 198.150.12.0.
Broadcast address - Used for broadcasting packets to all the devices on a network
In Figure , the section that is identified by the upper box represents the 198.150.11.255 broadcast address.
Data that is sent to the broadcast address will be read by all hosts on that network (198.150.11.1 - 198.150.11.254). The LAN that is contained in the lower box is treated the same as the upper LAN, except
that its broadcast address is 198.150.12.255.
An IP address that has binary 0s in all host bit positions is reserved for the network address. In a Class A
network example, 113.0.0.0 is the IP address of the network, known as the network ID, containing the host
113.1.2.3. A router uses the network IP address when it forwards data on the Internet. In a Class B network
example, the address 176.10.0.0 is a network address, as shown in Figure .
In a Class B network address, the first two octets are designated as the network portion. The last two octets
contain 0s because those 16 bits are for host numbers and are used to identify devices that are attached to
the network. The IP address, 176.10.0.0, is an example of a network address. This address is never
assigned as a host address. A host address for a device on the 176.10.0.0 network might be 176.10.16.1. In
this example, 176.10 is the network portion and 16.1 is the host portion.
To send data to all the devices on a network, a broadcast address is needed. A broadcast occurs when a
source sends data to all devices on a network. To ensure that all the other devices on the network process
the broadcast, the sender must use a destination IP address that they can recognize and process. Broadcast
IP addresses end with binary 1s in the entire host part of the address.
In the network example, 176.10.0.0, the last 16 bits make up the host field or host part of the address. The
broadcast that would be sent out to all devices on that network would include a destination address of
176.10.255.255. This is because 255 is the decimal value of an octet containing 11111111.
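The following Python sketch (illustrative only; 176.10.16.1 and the /16 mask are the example values from above) shows how the network and broadcast addresses fall out of setting the host bits to all 0s or all 1s:

    import ipaddress

    # A Class B host address with its default /16 mask
    iface = ipaddress.ip_interface("176.10.16.1/16")
    print(iface.network.network_address)    # 176.10.0.0     (host bits all 0s)
    print(iface.network.broadcast_address)  # 176.10.255.255 (host bits all 1s)

    # The same result with explicit bitwise operations
    addr = int(ipaddress.ip_address("176.10.16.1"))
    mask = int(ipaddress.ip_address("255.255.0.0"))
    net = addr & mask                            # ANDing clears the host bits
    bcast = net | (~mask & 0xFFFFFFFF)           # ORing the inverted mask sets them
    print(ipaddress.ip_address(net), ipaddress.ip_address(bcast))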
9.2.6 Public and private IP addresses


The stability of the Internet depends directly on the uniqueness of publicly used network addresses. In Figure
, there is an issue with the network addressing scheme. In looking at the networks, both have a network
address of 198.150.11.0. The router in this illustration will not be able to forward the data packets correctly.
Duplicate network IP addresses prevent the router from performing its job of best path selection. Unique
addresses are required for each device on a network.
A procedure was needed to make sure that addresses were in fact unique. Originally, an organization known
as the Internet Network Information Center (InterNIC) handled this procedure. InterNIC no longer exists and
has been succeeded by the Internet Assigned Numbers Authority (IANA). IANA carefully manages the
remaining supply of IP addresses to ensure that duplication of publicly used addresses does not occur.
Duplication would cause instability in the Internet and compromise its ability to deliver datagrams to
networks.
Public IP addresses are unique. No two machines that connect to a public network can have the same IP
address because public IP addresses are global and standardized. All machines connected to the Internet
agree to conform to the system. Public IP addresses must be obtained from an Internet service provider
(ISP) or a registry at some expense.
With the rapid growth of the Internet, public IP addresses were beginning to run out. New addressing
schemes, such as classless interdomain routing (CIDR) and IPv6, were developed to help solve the problem.
CIDR and IPv6 are discussed later in the course.
Private IP addresses are another solution to the problem of the impending exhaustion of public IP addresses.
As mentioned, public networks require hosts to have unique IP addresses. However, private networks that
are not connected to the Internet may use any host addresses, as long as each host within the private
network is unique. Many private networks exist alongside public networks. However, a private network using
just any address is strongly discouraged because that network might eventually be connected to the Internet.
RFC 1918 sets aside three blocks of IP addresses for private, internal use. These three blocks consist of
one Class A, a range of Class B addresses, and a range of Class C addresses. Addresses that fall within
these ranges are not routed on the Internet backbone. Internet routers immediately discard private
addresses. If addressing a nonpublic intranet, a test lab, or a home network, these private addresses can be
used instead of globally unique addresses. Private IP addresses can be intermixed, as shown in the
graphic, with public IP addresses. This will conserve the number of addresses used for internal connections.
Connecting a network using private addresses to the Internet requires translation of the private
addresses to public addresses. This translation process is referred to as Network Address Translation
(NAT). A router usually is the device that performs NAT. NAT, along with CIDR and IPv6, is covered in
more depth later in the curriculum.
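As a small illustration (a sketch, not a curriculum lab), the three RFC 1918 blocks can be checked with Python's standard ipaddress module; the sample addresses are assumptions chosen for the example:

    import ipaddress

    # The three RFC 1918 blocks: one Class A network, a range of Class B
    # networks, and a range of Class C networks
    private_blocks = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]

    def is_rfc1918(ip):
        addr = ipaddress.ip_address(ip)
        return any(addr in block for block in private_blocks)

    print(is_rfc1918("192.168.1.10"))  # True  - private, not routed on the Internet
    print(is_rfc1918("198.150.11.5"))  # False - public address space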
9.2.7 Introduction to subnetting
Subnetting is another method of managing IP addresses. This method of dividing full network address
classes into smaller pieces has prevented complete IP address exhaustion. It is impossible to cover TCP/IP
without mentioning subnetting. As a system administrator it is important to understand subnetting as a means
of dividing and identifying separate networks throughout the LAN. It is not always necessary to subnet a
small network. However, for large or extremely large networks, subnetting is required. Subnetting a
network means to use the subnet mask to divide the network and break a large network up into smaller,
more efficient and manageable segments, or subnets. An example would be the U.S. telephone system
which is broken into area codes, exchange codes, and local numbers.
The system administrator must resolve these issues when adding and expanding the network. It is important
to know how many subnets or networks are needed and how many hosts will be needed on each network.
With subnetting, the network is not limited to the default Class A, B, or C network masks and there is more
flexibility in the network design.
Subnet addresses include the network portion, plus a subnet field and a host field. The subnet field and the
host field are created from the original host portion for the entire network. The ability to decide how to divide
the original host portion into the new subnet and host fields provides addressing flexibility for the network
administrator.
To create a subnet address, a network administrator borrows bits from the host field and designates them as
the subnet field. The minimum number of bits that can be borrowed is two. If only one bit were borrowed, the network number would be the .0 network and the broadcast address would be the .255 network. The maximum number of bits that can be borrowed is any number that leaves at least two bits remaining for the host number.
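The arithmetic behind borrowing bits can be sketched as follows (a simple illustration for the host octet of a Class C network, not a lab exercise): borrowing n bits yields 2^n subnets, each with 2^(8-n) - 2 usable host addresses once the subnet's own network and broadcast addresses are excluded.

    # Borrowing bits from the host octet of a Class C network.
    # Note: older conventions also reserved the first and last subnet.
    for borrowed in range(2, 7):                   # at least 2 borrowed, at least 2 host bits left
        host_bits = 8 - borrowed
        subnets = 2 ** borrowed
        hosts_per_subnet = 2 ** host_bits - 2      # subtract network and broadcast addresses
        print(borrowed, "borrowed bits:", subnets, "subnets,",
              hosts_per_subnet, "usable hosts each")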
The Lab Activity will help students become familiar with the different classes of IP addresses.
9.2.8 IPv4 versus IPv6
When TCP/IP was adopted in the 1980s, it relied on a two-level addressing scheme. At the time this offered
adequate scalability. Unfortunately, the designers of TCP/IP could not have predicted that their protocol
would eventually sustain a global network of information, commerce, and entertainment. Over twenty years
ago, IP Version 4 (IPv4) offered an addressing strategy that, although scalable for a time, resulted in an
inefficient allocation of addresses.
The Class A and B addresses make up 75 percent of the IPv4 address space; however, fewer than 17,000
organizations can be assigned a Class A or B network number. Class C network addresses are far more
numerous than Class A and Class B addresses, although they account for only 12.5 percent of the possible
four billion IP addresses.
Unfortunately, Class C addresses are limited to 254 usable hosts. This does not meet the needs of larger
organizations that cannot acquire a Class A or B address. Even if there were more Class A, B, and C
addresses, too many network addresses would cause Internet routers to come to a stop under the burden of
the enormous size of routing tables required to store the routes to reach each of the networks.
As early as 1992, the Internet Engineering Task Force (IETF) identified the following two specific concerns:
Exhaustion of the remaining, unassigned IPv4 network addresses. At the time, the Class B space
was on the verge of depletion.
The rapid and large increase in the size of Internet routing tables occurred as more Class C
networks came online. The resulting flood of new network information threatened the ability of
Internet routers to cope effectively.
Over the past two decades, numerous extensions to IPv4 have been developed. These extensions are
specifically designed to improve the efficiency with which the 32-bit address space can be used. Two of the
more important of these are subnet masks and classless interdomain routing (CIDR), which are discussed in
more detail in later lessons.
Meanwhile, an even more extendible and scalable version of IP, IP Version 6 (IPv6), has been defined and
developed. IPv6 uses 128 bits rather than the 32 bits currently used in IPv4. IPv6 uses hexadecimal
numbers to represent the 128 bits. IPv6 provides approximately 3.4 x 10^38 addresses. This version of IP should provide
enough addresses for future communication needs.
Figure shows an IPv4 address and an IPv6 address. IPv4 addresses are 32 bits long, written in decimal
form, and separated by periods. IPv6 addresses are 128-bits long and are identifiers for individual interfaces
and sets of interfaces. IPv6 addresses are assigned to interfaces, not nodes. Since each interface belongs to
a single node, any of the unicast addresses assigned to the interfaces of the node may be used as an
identifier for the node. IPv6 addresses are written in hexadecimal, and separated by colons. IPv6 fields are
16 bits long. To make the addresses easier to read, leading zeros can be omitted from each field. The field :0003: is written :3:. The shorthand representation of the 128 bits therefore consists of eight 16-bit fields, each written as up to four hexadecimal digits.
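As a quick illustration (a sketch using the Python standard library; the sample address is an assumption), the same 128-bit address can be printed in its full and shortened forms:

    import ipaddress

    addr = ipaddress.IPv6Address("2031:0000:130f:0000:0000:09c0:876a:130b")
    print(addr.exploded)    # 2031:0000:130f:0000:0000:09c0:876a:130b (all zeros shown)
    print(addr.compressed)  # 2031:0:130f::9c0:876a:130b (leading zeros and one
                            # consecutive run of zero fields omitted)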
After years of planning and development, IPv6 is slowly being implemented in select networks.
Eventually, IPv6 may replace IPv4 as the dominant Internet protocol.
9.3 Obtaining an IP address
9.3.1 Obtaining an Internet address
A network host needs to obtain a globally unique address in order to function on the Internet. The physical or
MAC address that a host has is only locally significant, identifying the host within the local area network.
Since this is a Layer 2 address, the router does not use it to forward outside the LAN.
IP addresses are the most commonly used addresses for Internet communications. IP provides a
hierarchical addressing scheme that allows individual addresses to be associated together and treated as
groups. These groups of addresses allow efficient transfer of data across the Internet.
Network administrators use two methods to assign IP addresses. These methods are static and dynamic.
Later in this lesson, static addressing and three variations of dynamic addressing will be covered.
Regardless of which addressing scheme is chosen, no two interfaces can have the same IP address. Two
hosts that have the same IP address could create a conflict that might cause both of the hosts involved not to
operate properly. As shown in Figure , the hosts have a physical address by having a network interface
card that allows connection to the physical medium.
9.3.2 Static assignment of an IP address
Static assignment works best on small, infrequently changing networks. The system administrator manually
assigns and tracks IP addresses for each computer, printer, or server on the intranet. Good recordkeeping
is critical to prevent problems which occur with duplicate IP addresses. This is possible only when there are
a small number of devices to track.
Servers should be assigned a static IP address so workstations and other devices will always know how to
access needed services. Consider how difficult it would be to phone a business that changed its phone
number every day.
Other devices that should be assigned static IP addresses are network printers, application servers, and
routers.
9.3.3 RARP IP address assignment
Reverse Address Resolution Protocol (RARP) associates a known MAC address with an IP address.
This association allows network devices to encapsulate data before sending the data out on the network. A
network device, such as a diskless workstation, might know its MAC address but not its IP address. RARP
allows the device to make a request to learn its IP address. Devices using RARP require that a RARP server
be present on the network to answer RARP requests.
Consider an example where a source device wants to send data to another device. In this example, the
source device knows its own MAC address but is unable to locate its own IP address in the ARP table. The
source device must include both its MAC address and IP address in order for the destination device to
retrieve data, pass it to higher layers of the OSI model, and respond to the originating device. Therefore, the
source initiates a process called a RARP request. This request helps the source device detect its own IP
address. RARP requests are broadcast onto the LAN and are responded to by the RARP server which is
usually a router.
RARP uses the same packet format as ARP. However, in a RARP request, the MAC headers and
operation code are different from an ARP request.
The RARP packet format contains places for
MAC addresses of both the destination and source devices. The source IP address field is empty. The
broadcast goes to all devices on the network. Figures , , and depict the destination MAC address
as FF:FF:FF:FF:FF:FF. Workstations running RARP have codes in ROM that direct them to start the
RARP process. A step-by-step layout of the RARP process is illustrated in Figures through
.
9.3.4 BOOTP IP address assignment
The bootstrap protocol (BOOTP) operates in a client-server environment and only requires a single packet
exchange to obtain IP information.
However, unlike RARP, BOOTP packets can include the IP address,
as well as the address of a router, the address of a server, and vendor-specific information.
One problem with BOOTP, however, is that it was not designed to provide dynamic address assignment.
With BOOTP, a network administrator creates a configuration file that specifies the parameters for each
device. The administrator must add hosts and maintain the BOOTP database. Even though the addresses are assigned automatically, there is still a one-to-one relationship between the number of IP addresses and the number of hosts. This means that for every host on the network there must be a BOOTP profile with an IP address assignment in it. No two profiles can have the same IP address, because those profiles might be used at the same time, which would mean that two hosts had the same IP address.
A device uses BOOTP to obtain an IP address when starting up. BOOTP uses UDP to carry messages. The
UDP message is encapsulated in an IP packet. A computer uses BOOTP to send a broadcast IP packet
using a destination IP address of all 1s, 255.255.255.255 in dotted decimal notation. A BOOTP server
receives the broadcast and then sends back a broadcast. The client receives a frame and checks the MAC
address. If the client finds its own MAC address in the destination address field and a broadcast in the IP
destination field, it takes and stores the IP address and other information supplied in the BOOTP reply
message. A step-by-step description of the process is shown in Figures through
.
9.3.5 DHCP IP address management
Dynamic Host Configuration Protocol (DHCP) is the successor to BOOTP. Unlike BOOTP, DHCP allows a
host to obtain an IP address dynamically without the network administrator having to set up an individual
profile for each device. All that is required when using DHCP is a defined range of IP addresses on a DHCP
server. As hosts come online, they contact the DHCP server and request an address. The DHCP server
chooses an address and leases it to that host. With DHCP, the entire network configuration of a computer
can be obtained in one message.
This includes all of the data supplied by the BOOTP message, plus a
leased IP address and a subnet mask.
The major advantage that DHCP has over BOOTP is that it allows users to be mobile. This mobility allows
the users to freely change network connections from location to location. It is no longer required to keep a
fixed profile for every device attached to the network as was required with the BOOTP system. The
importance to this DHCP advancement is its ability to lease an IP address to a device and then reclaim that
IP address for another user after the first user releases it. This means that DHCP offers a one-to-many ratio of IP addresses to hosts, and an address is available to anyone who connects to the network. A step-by-step
description of the process is shown in Figures through
.
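The lease-and-reclaim behavior can be pictured with a toy simulation (purely illustrative; real DHCP exchanges UDP discover, offer, request, and acknowledgment messages and tracks lease timers):

    import ipaddress

    class TinyDhcpPool:
        """Toy model of a DHCP scope: lease an address from a range, reclaim it on release."""
        def __init__(self, network):
            self.free = list(ipaddress.ip_network(network).hosts())
            self.leases = {}                        # MAC address -> leased IP address

        def request(self, mac):
            if mac not in self.leases:
                self.leases[mac] = self.free.pop(0)
            return self.leases[mac]

        def release(self, mac):
            self.free.append(self.leases.pop(mac))  # address becomes available to the next host

    pool = TinyDhcpPool("192.168.1.0/28")
    ip = pool.request("00:0c:29:aa:bb:cc")
    print(ip)                                       # 192.168.1.1
    pool.release("00:0c:29:aa:bb:cc")               # lease reclaimed for another user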
The Lab Activity will help students set up a network computer as a DHCP client.
9.3.6 Problems in address resolution
One of the major problems in networking is how to communicate with other network devices. In TCP/IP
communications, a datagram on a local-area network must contain both a destination MAC address and a
destination IP address. These addresses must be correct and match the destination MAC and IP addresses
of the host device. If they do not match, the datagram will be discarded by the destination host.
Communications within a LAN segment require two addresses. There needs to be a way to automatically
map IP to MAC addresses. It would be too time consuming for the user to create the maps manually. The
TCP/IP suite has a protocol, called Address Resolution Protocol (ARP), which can automatically obtain MAC
addresses for local transmission. Different issues are raised when data is sent outside of the local area
network.
Communications between two LAN segments have an additional task. Both the IP and MAC addresses are
needed for both the destination host and the intermediate routing device. TCP/IP has a variation on ARP
called Proxy ARP that will provide the MAC address of an intermediate device for transmission outside the
LAN to another network segment.
9.3.7 Address Resolution Protocol (ARP)
With TCP/IP networking, a data packet must contain both a destination MAC address and a destination IP
address. If the packet is missing either one, the data will not pass from Layer 3 to the upper layers. In this
way, MAC addresses and IP addresses act as checks and balances for each other. After devices determine
the IP addresses of the destination devices, they can add the destination MAC addresses to the data
packets.
Some devices will keep tables that contain MAC addresses and IP addresses of other devices that are
connected to the same LAN. These are called Address Resolution Protocol (ARP) tables. ARP tables are
stored in RAM, where the cached information is maintained automatically on each of the devices. It
is very unusual for a user to have to make an ARP table entry manually. Each device on a network maintains
its own ARP table. When a network device wants to send data across the network, it uses information
provided by the ARP table.
When a source determines the IP address for a destination, it then consults the ARP table in order to locate
the MAC address for the destination. If the source finds an entry in its table that maps the destination IP address to a destination MAC address, it associates the two and uses the MAC address to
encapsulate the data. The data packet is then sent out over the networking media to be picked up by the
destination device.
There are two ways that devices can gather MAC addresses that they need to add to the encapsulated data.
One way is to monitor the traffic that occurs on the local network segment. All stations on an Ethernet
network will analyze all traffic to determine if the data is for them. Part of this process is to record the source
IP and MAC address of the datagram to an ARP table. So as data is transmitted on the network, the address
pairs populate the ARP table. Another way to get an address pair for data transmission is to broadcast an
ARP request.
The computer that requires an IP and MAC address pair broadcasts an ARP request. All the other devices on
the local area network analyze this request. If one of the local devices matches the IP address of the
request, it sends back an ARP reply that contains its IP-MAC pair. If the IP address is for the local area
network and the computer does not exist or is turned off, there is no response to the ARP request. In this
situation, the source device reports an error. If the request is for a different IP network, there is another
process that can be used.
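A minimal sketch of this cache lookup and request fallback (with made-up addresses) might look like the following:

    # Toy ARP table: IP address -> MAC address pairs learned from local traffic
    arp_table = {
        "198.150.11.36": "00:0c:29:12:34:56",
        "198.150.11.37": "00:0c:29:ab:cd:ef",
    }

    def resolve(dest_ip):
        mac = arp_table.get(dest_ip)
        if mac is not None:
            return mac                 # found in the cache, use it to encapsulate the frame
        # Not cached: a real host would broadcast an ARP request here and wait
        # for the owner of dest_ip to reply with its IP-MAC pair.
        return None

    print(resolve("198.150.11.36"))    # 00:0c:29:12:34:56
    print(resolve("198.150.11.40"))    # None - would trigger an ARP request broadcast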
Routers do not forward broadcast packets. If the feature is turned on, a router performs a proxy ARP. Proxy
ARP is a variation of the ARP protocol. In this variation, a router sends an ARP response, containing the MAC address of the interface on which the request was received, to the requesting host. The router responds with
the MAC addresses for those requests in which the IP address is not in the range of addresses of the local
subnet.
Another method to send data to the address of a device that is on another network segment is to set up a
default gateway. The default gateway is a host option where the IP address of the router interface is stored
in the network configuration of the host. The source host compares the destination IP address and its own IP
address to determine if the two IP addresses are located on the same segment. If the receiving host is not on
the same segment, the source host sends the data using the actual IP address of the destination and the
MAC address of the router. The MAC address for the router was learned from the ARP table by using the IP
address of that router.
If the default gateway on the host or the proxy ARP feature on the router is not configured, no traffic can
leave the local area network. One or the other is required to have a connection outside of the local area
network.
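A sketch of the comparison a host makes (assumed addresses and mask) is shown below; when the two masked results differ, the host resolves the MAC address of the default gateway instead of the destination:

    import ipaddress

    local_net = ipaddress.ip_interface("198.150.11.36/24").network
    default_gateway = "198.150.11.1"

    def next_hop(dest_ip):
        """Return the IP whose MAC address the host must resolve for this destination."""
        if ipaddress.ip_address(dest_ip) in local_net:
            return dest_ip             # same segment: ARP for the destination itself
        return default_gateway         # remote segment: ARP for the default gateway

    print(next_hop("198.150.11.50"))   # 198.150.11.50
    print(next_hop("198.150.12.7"))    # 198.150.11.1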
The Lab Activity will introduce the arp -a command.
The Interactive Media Activity will help students understand the ARP process.
Summary
The U.S. Department of Defense (DoD) TCP/IP reference model has four layers: the application layer,
transport layer, Internet layer, and the network access layer. The application layer handles high-level
protocols, issues of representation, encoding, and dialog control. The transport layer provides transport
services from the source host to the destination host. The purpose of the Internet layer is to select the best
path through the network for packet transmissions. The network access layer is concerned with the physical
link to the network media.
Although some layers of the TCP/IP reference model correspond to the seven layers of the OSI model, there
are differences. The TCP/IP model combines the presentation and session layer into its application layer.
The TCP/IP model combines the OSI data link and physical layers into its network access layer.
Routers use the IP address to move data packets between networks. IP addresses are thirty-two bits long
according to the current version IPv4 and are divided into four octets of eight bits each. They operate at the
network layer, Layer 3, of the OSI model, which is the Internet layer of the TCP/IP model.
The IP address of a host is a logical address and can be changed. The Media Access Control (MAC)
address of the workstation is a 48-bit physical address. This address is usually burned into the network
interface card (NIC) and cannot change unless the NIC is replaced. TCP/IP communications within a LAN
segment require both a destination IP address and a destination MAC address for delivery. While IP address
are unique and routable throughout the Internet, when a packet arrives at the destination network there
needs to be a way to automatically map the IP address to a MAC address. The TCP/IP suite has a protocol,
called Address Resolution Protocol (ARP), which can automatically obtain MAC addresses for local
transmission. A variation on ARP called Proxy ARP will provide the MAC address of an intermediate device
for transmission to another network segment.
There are five classes of IP addresses, A through E. Only the first three classes are used commercially.
Depending on the class, the network and host part of the address will use a different number of bits. The
Class D address is used for multicast groups. Class E addresses are reserved for research use only.
An IP address that has binary zeros in all host bit positions is used to identify the network itself. An address
in which all of the host bits are set to one is the broadcast address and is used for broadcasting packets to all
the devices on a network.
Public IP addresses are unique. No two machines that connect to a public network can have the same IP
address because public IP addresses are global and standardized. Private networks that are not connected
to the Internet may use any host addresses, as long as each host within the private network is unique. Three
blocks of IP addresses are reserved for private, internal use. These three blocks consist of one Class A, a
range of Class B addresses, and a range of Class C addresses. Addresses that fall within these ranges are
discarded by routers and not routed on the Internet backbone.
Subnetting is another means of dividing and identifying separate networks throughout the LAN. Subnetting a
network means to use the subnet mask to divide the network and break a large network up into smaller,
more efficient and manageable segments, or subnets. Subnet addresses include the network portion, plus a
subnet field and a host field. The subnet field and the host field are created from the original host portion for
the entire network.
A more extendible and scalable version of IP, IP Version 6 (IPv6), has been defined and developed. IPv6
uses 128 bits rather than the 32 bits currently used in IPv4. IPv6 uses hexadecimal numbers to represent the
128 bits. IPv6 is being implemented in select networks and may eventually replace IPv4 as the dominant
Internet protocol.
IP addresses are assigned to hosts in the following ways:
Statically - manually, by a network administrator
Dynamically - automatically, using Reverse Address Resolution Protocol (RARP), Bootstrap Protocol (BOOTP), or Dynamic Host Configuration Protocol (DHCP)
10.1 Routed Protocol
Overview
Internet Protocol (IP) is the main routed protocol of the Internet. IP addresses are used to route packets from
a source to a destination through the best available path. The propagation of packets, encapsulation
changes, and connection-oriented and connectionless protocols are also critical to ensure that data is
properly transmitted to its destination. This module will provide an overview for each.
The difference between routing and routed protocols is a common source of confusion. The two words sound
similar but are quite different. Routers use routing protocols to build tables that are used to determine the
best path to a host on the Internet.
Not all organizations can fit into the three class system of A, B, and C addresses. Flexibility exists within the
class system through subnets. Subnets allow network administrators to determine the size of the network
they will work with. After they decide how to segment their networks, they can use subnet masks to
determine the location of each device on a network.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Describe routed protocols
List the steps of data encapsulation in an internetwork as data is routed to Layer 3 devices
Describe connectionless and connection-oriented delivery
Name the IP packet fields
Describe how data is routed
Compare and contrast different types of routing protocols
List and describe several metrics used by routing protocols
List several uses for subnetting
Determine the subnet mask for a given situation
Use a subnet mask to determine the subnet ID
10.1.1 Routed and routable protocols
This page will define routed and routable protocols.
A protocol is a set of rules that determines how computers communicate with each other across networks.
Computers exchange data messages to communicate with each other. To accept and act on these
messages, computers must have sets of rules that determine how a message is interpreted. Examples
include messages used to establish a connection to a remote machine, e-mail messages, and files
transferred over a network.
A protocol describes the following:
The required format of a message
The way that computers must exchange messages for specific activities
A routed protocol allows the router to forward data between nodes on different networks. A routable
protocol must provide the ability to assign a network number and a host number to each device. Some
protocols, such as IPX, require only a network number. These protocols use the MAC address of the host for
the host number. Other protocols, such as IP, require an address with a network portion and a host portion.
These protocols also require a network mask to differentiate the two numbers. The network address is
obtained by ANDing the address with the network mask.
The reason that a network mask is used is to allow groups of sequential IP addresses to be treated as a
single unit. If this grouping were not allowed, each host would have to be mapped individually for
routing. This would be impossible, because according to the Internet Software Consortium there are
approximately 233,101,500 hosts on the Internet.
10.1.2 IP as a routed protocol
This page describes the features and functions of IP.
IP is the most widely used implementation of a hierarchical network-addressing scheme. IP is a
connectionless, unreliable, best-effort delivery protocol. The term connectionless means that no dedicated
circuit connection is established prior to transmission. IP determines the most efficient route for data based
on the routing protocol. The terms unreliable and best-effort do not imply that the system is unreliable and
does not work well. They indicate that IP does not verify that data sent on the network reaches its
destination. If required, verification is handled by upper layer protocols.
As information flows down the layers of the OSI model, the data is processed at each layer. At the network
layer, the data is encapsulated into packets. These packets are also known as datagrams. IP determines
the contents of the IP packet header, which includes address information. However, it is not concerned with
the actual data. IP accepts whatever data is passed down to it from the upper layers.
10.1.3 Packet propagation and switching within a router
This page will explain the process that occurs as a packet moves through a network.
As a packet travels through an internetwork to its final destination, the Layer 2 frame headers and trailers are
removed and replaced at every Layer 3 device. This is because Layer 2 data units, or frames, are for local
addressing. Layer 3 data units, or packets, are for end-to-end addressing.
Layer 2 Ethernet frames are designed to operate within a broadcast domain with the MAC address that is
burned into the physical device. Other Layer 2 frame types include PPP serial links and Frame Relay
connections, which use different Layer 2 addressing schemes. Regardless of the type of Layer 2 addressing
used, frames are designed to operate within a Layer 2 broadcast domain. When the data is sent to a Layer 3
device the Layer 2 information changes.
As a frame is received at a router interface, the destination MAC address is extracted. The address is
checked to see if the frame is directly addressed to the router interface, or if it is a broadcast. In either
situation, the frame is accepted. Otherwise, the frame is discarded since it is destined for another device on
the collision domain.
The CRC information is extracted from the frame trailer of an accepted frame. The CRC is calculated to
verify that the frame data is without error.
If the check fails, the frame is discarded. If the check is valid, the frame header and trailer are removed and
the packet is passed up to Layer 3. The packet is then checked to see if it is actually destined for the router,
or if it is to be routed to another device in the internetwork. If the destination IP address matches one of the
router ports, the Layer 3 header is removed and the data is passed up to Layer 4. If the packet is to be
routed, the destination IP address will be compared to the routing table. If a match is found or there is a
default route, the packet will be sent to the interface specified in the matched routing table statement. When
the packet is switched to the outgoing interface, a new CRC value is added as a frame trailer, and the proper
frame header is added to the packet. The frame is then transmitted to the next broadcast domain on its trip to
the final destination.
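The sequence of checks can be summarized in a simplified Python sketch (illustrative only; real routers also handle TTL, ICMP errors, queuing, prefix-based lookups, and many other details):

    def process_frame(frame, my_mac, my_ips, routing_table):
        """Simplified router logic for one received frame."""
        if frame["dst_mac"] not in (my_mac, "ff:ff:ff:ff:ff:ff"):
            return "discard: frame is for another device"
        if not frame["crc_ok"]:
            return "discard: CRC check failed"
        packet = frame["packet"]                     # strip Layer 2 header and trailer
        if packet["dst_ip"] in my_ips:
            return "deliver packet to Layer 4 locally"
        route = routing_table.get(packet["dst_ip"]) or routing_table.get("default")
        if route is None:
            return "discard: no route and no default route"
        # Re-encapsulate: new Layer 2 header and a newly calculated CRC for the outbound link
        return "forward out interface " + route

    table = {"198.150.12.7": "Serial0", "default": "Serial1"}
    frame = {"dst_mac": "aa:bb:cc:dd:ee:ff", "crc_ok": True,
             "packet": {"dst_ip": "198.150.12.7"}}
    print(process_frame(frame, "aa:bb:cc:dd:ee:ff", {"198.150.11.1"}, table))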
10.1.4 Connectionless and connection-oriented delivery
This page will introduce two types of delivery systems, which are connectionless and connection-oriented.
These two services provide the actual end-to-end delivery of data in an internetwork.
Most network services use a connectionless delivery system. Different packets may take different paths to
get through the network. The packets are reassembled after they arrive at the destination. In a
connectionless system, the destination is not contacted before a packet is sent. A good comparison for a
connectionless system is a postal system. The recipient is not contacted to see if they will accept the letter
before it is sent. Also, the sender does not know if the letter arrived at the destination.
In connection-oriented systems, a connection is established between the sender and the recipient before any
data is transferred. An example of a connection-oriented network is the telephone system. The caller
places the call, a connection is established, and then communication occurs.
Connectionless network processes are often referred to as packet-switched processes. As the packets pass
from source to destination, packets can switch to different paths, and possibly arrive out of order. Devices
make the path determination for each packet based on a variety of criteria. Some of the criteria, such as
available bandwidth, may differ from packet to packet.
Connection-oriented network processes are often referred to as circuit-switched processes. A connection
with the recipient is first established, and then data transfer begins. All packets travel sequentially across the
same physical or virtual circuit.
The Internet is a gigantic, connectionless network in which the majority of packet deliveries are handled
by IP. TCP adds Layer 4 connection-oriented reliability services to IP.
10.1.5 Anatomy of an IP packet
IP packets consist of the data from upper layers plus an IP header. This page will discuss the information
contained in the IP header:
Version - Specifies the format of the IP packet header. The 4-bit version field contains the number 4
if it is an IPv4 packet and 6 if it is an IPv6 packet. However, this field is not used to distinguish
between IPv4 and IPv6 packets. The protocol type field present in the Layer 2 envelope is used for
that.
IP header length (HLEN) - Indicates the datagram header length in 32-bit words. This is the total
length of all header information and includes the two variable-length header fields.
Type of service (ToS) - 8 bits that specify the level of importance that has been assigned by a
particular upper-layer protocol.
Total length - 16 bits that specify the length of the entire packet in bytes. This includes the data and
header. To get the length of the data payload subtract the HLEN from the total length.
Identification - 16 bits that identify the current datagram. This is the sequence number.
Flags - A 3-bit field in which the two low-order bits control fragmentation. One bit specifies if the
packet can be fragmented and the other indicates if the packet is the last fragment in a series of
fragmented packets.
Fragment offset - 13 bits that are used to help piece together datagram fragments. This field allows
the previous field to end on a 16-bit boundary.
Time to Live (TTL) - A field that specifies the number of hops a packet may travel. This number is
decreased by one as the packet travels through a router. When the counter reaches zero the packet
is discarded. This prevents packets from looping endlessly.
Protocol - 8 bits that indicate which upper-layer protocol such as TCP or UDP receives incoming
packets after the IP processes have been completed.
Header checksum - 16 bits that help ensure IP header integrity.
Source address - 32 bits that specify the IP address of the node from which the packet was sent.
Destination address - 32 bits that specify the IP address of the node to which the data is sent.
Options - Allows IP to support various options such as security. The length of this field varies.
Padding - Extra zeros are added to this field to ensure that the IP header is always a multiple of 32
bits.
Data - Contains upper-layer information and has a variable length of up to 64 kilobytes.
While the IP source and destination addresses are important, the other header fields have made IP very
flexible. The header fields list the source and destination address information of the packet and often indicate
the length of the message data. The information for routing the message is also contained in IP headers,
which can get long and complex.
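To make the header layout concrete, the following sketch packs and unpacks the fixed 20-byte portion of an IPv4 header with Python's struct module; the field values are made up for illustration and the checksum is left at zero:

    import struct, socket

    # A hand-built 20-byte IPv4 header: version 4, HLEN 5, TTL 64, protocol 6 (TCP),
    # source 192.168.1.10, destination 10.0.0.1.
    header = struct.pack("!BBHHHBBH4s4s",
                         (4 << 4) | 5, 0, 40, 1, 0, 64, 6, 0,
                         socket.inet_aton("192.168.1.10"),
                         socket.inet_aton("10.0.0.1"))

    fields = struct.unpack("!BBHHHBBH4s4s", header)
    version, hlen = fields[0] >> 4, fields[0] & 0x0F
    print("version", version, "header length", hlen * 4, "bytes")
    print("TTL", fields[5], "protocol", fields[6])
    print("source", socket.inet_ntoa(fields[8]), "destination", socket.inet_ntoa(fields[9]))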
This page concludes this lesson. The next lesson will focus on IP routing protocols. The first page
provides a routing overview.
10.2 IP Routing Protocols
10.2.1 Routing overview
This page will discuss routing and the two main functions of a router.
Routing is an OSI Layer 3 function. Routing is a hierarchical organizational scheme that allows individual
addresses to be grouped together. These individual addresses are treated as a single unit until the
destination address is needed for final delivery of the data. Routing finds the most efficient path from one
device to another. The primary device that performs the routing process is the router.
The following are the two key functions of a router:
Routers must maintain routing tables and make sure other routers know of changes in the network
topology. They use routing protocols to communicate network information with other routers.
When packets arrive at an interface, the router must use the routing table to determine where to
send them. The router switches the packets to the appropriate interface, adds the frame information
for the interface, and then transmits the frame.
A router is a network layer device that uses one or more routing metrics to determine the optimal path along
which network traffic should be forwarded. Routing metrics are values that are used to determine the
advantage of one route over another. Routing protocols use various combinations of metrics to determine
the best path for data.
Routers interconnect network segments or entire networks. Routers pass data frames between networks
based on Layer 3 information. Routers make logical decisions about the best path for the delivery of data.
Routers then direct packets to the appropriate output port to be encapsulated for transmission. Stages of
the encapsulation and de-encapsulation process occur each time a packet transfers through a router. The
router must de-encapsulate the Layer 2 data frame to access and examine the Layer 3 address. As shown in
Figure , the complete process of sending data from one device to another involves encapsulation and de-encapsulation on all seven OSI layers. The encapsulation process breaks up the data stream into segments, adds the appropriate headers and trailers, and then transmits the data. The de-encapsulation process
removes the headers and trailers and then recombines the data into a seamless stream.
This course focuses on the most common routable protocol, which is IP. Other examples of routable
protocols include IPX/SPX and AppleTalk. These protocols provide Layer 3 support. Non-routable protocols
do not provide Layer 3 support. The most common non-routable protocol is NetBEUI. NetBEUI is a small,
fast, and efficient protocol that is limited to frame delivery within one segment.
10.2.2 Routing versus switching
This page will compare and contrast routing and switching. Routers and switches may seem to perform the
same function. The primary difference is that switches operate at Layer 2 of the OSI model and routers
operate at Layer 3. This distinction indicates that routers and switches use different information to send data
from a source to a destination.
The relationship between switching and routing can be compared to local and long-distance telephone calls.
When a telephone call is made to a number within the same area code, a local switch handles the call. The
local switch can only keep track of its local numbers. The local switch cannot handle all the telephone
numbers in the world. When the switch receives a request for a call outside of its area code, it switches the
call to a higher-level switch that recognizes area codes. The higher-level switch then switches the call so that
it eventually gets to the local switch for the area code dialed.
The router performs a function similar to that of the higher-level switch in the telephone example. Figure
shows the ARP tables for Layer 2 MAC addresses and routing tables for Layer 3 IP addresses. Each
computer and router interface maintains an ARP table for Layer 2 communication. The ARP table is only
effective for the broadcast domain to which it is connected. The router also maintains a routing table that
allows it to route data outside of the broadcast domain. Each ARP table entry contains an IP-MAC address
pair.
The Layer 2 switch builds its forwarding table using MAC addresses. When a host has data for a non-local IP
address, it sends the frame to the closest router. This router is also known as its default gateway. The host
uses the MAC address of the router as the destination MAC address.
A switch interconnects segments that belong to the same logical network or subnetwork. For non-local
hosts, the switch forwards the frame to the router based on the destination MAC address. The router
examines the Layer 3 destination address of the packet to make the forwarding decision. Host X knows the
IP address of the router because the IP configuration of the host contains the IP address of the default
gateway.
Just as a switch keeps a table of known MAC addresses, the router keeps a table of IP addresses known as
a routing table. MAC addresses are not logically organized. IP addresses are organized in a hierarchy. A
switch can handle a limited number of unorganized MAC addresses since it only has to search its table for
addresses within its segment. Routers require an organized address system that can group similar
addresses together and treat them as a single network unit until the data reaches the destination segment.
If IP addresses were not organized, the Internet would not work. This could be compared to a library that
contained millions of individual pages of printed material in a large pile. This material is useless because it is
impossible to locate an individual document. If the pages are identified and organized into books and each
book is listed in a book index, it will be a lot easier to locate and use the data.
Another difference between switched and routed networks is that switched networks do not block broadcasts.
As a result, switches can be overwhelmed by broadcast storms. Routers block LAN broadcasts, so a
broadcast storm only affects the broadcast domain from which it originated. Since routers block broadcasts,
they also provide a higher level of security and bandwidth control than switches.
The Interactive Media Activity will help students learn the differences between routing and switching.
10.2.3 Routed versus routing
This page explains the differences between routing protocols and routed protocols.
Routed or routable protocols are used at the network layer to transfer data from one host to another across a
router. Routed protocols transport data across a network. Routing protocols allow routers to choose the best
path for data from a source to a destination.
Some functions of a routed protocol are as follows:
Includes any network protocol suite that provides enough information in its network layer address to
allow a router to forward it to the next device and ultimately to its destination
Defines the format and use of the fields within a packet
The Internet Protocol (IP) and Novell Internetwork Packet Exchange (IPX) are examples of routed protocols.
Other examples include DECnet, AppleTalk, Banyan VINES, and Xerox Network Systems (XNS).
Routers use routing protocols to exchange routing tables and share routing information. In other words,
routing protocols enable routers to route routed protocols.
Some functions of a routing protocol are as follows:
Provides processes used to share route information
Allows routers to communicate with other routers to update and maintain the routing tables
Examples of routing protocols that support the IP routed protocol include RIP, IGRP, OSPF, BGP, and EIGRP.
The Interactive Media Activity will help students learn the differences between routed and routing protocols.
10.2.4 Path determination
This page will explain how path determination occurs.
Path determination occurs at the network layer. A router uses path determination to compare a destination
address to the available routes in its routing table and select the best path. The routers learn of these
available routes through static routing or dynamic routing. Routes configured manually by the network
administrator are static routes. Routes learned by other routers using a routing protocol are dynamic routes.
The router uses path determination to decide which port to send a packet out of to reach its destination.
This process is also referred to as routing the packet. Each router that the packet encounters along the way
is called a hop. The hop count is the distance traveled. Path determination can be compared to a person
who drives from one location in a city to another. The driver has a map that shows which streets lead to the
destination, just as a router has a routing table. The driver travels from one intersection to another just as a
packet travels from one router to another in each hop. At any intersection, the driver can choose to turn left,
turn right, or go straight ahead. This is similar to how a router chooses the outbound port through which a
packet is sent.
The decisions of a driver are influenced by factors such as traffic, the speed limit, the number of lanes, tolls,
and whether or not a road is frequently closed. Sometimes it is faster to take a longer route on a smaller, less
crowded back street instead of a highway with a lot of traffic. Similarly, routers can make decisions based on
the load, bandwidth, delay, cost, and reliability of a network link.
The following process is used to determine the path for every packet that is routed:
The router compares the IP address of the packet that it received to the IP tables that it has.
The destination address is obtained from the packet.
The mask of the first entry in the routing table is applied to the destination address.
The masked destination and the routing table entry are compared.
If there is a match, the packet is forwarded to the port that is associated with that table entry.
If there is not a match, the next entry in the table is checked.
If the packet does not match any entries in the table, the router checks to see if a default route has
been set.
If a default route has been set, the packet is forwarded to the associated port. A default route is a
route that is configured by the network administrator as the route to use if there are no matches in
the routing table.
If there is no default route, the packet is discarded. A message is often sent back to the device that
sent the data to indicate that the destination was unreachable.
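A sketch of this matching loop (with made-up table entries) could look like the following:

    import ipaddress

    # Each entry: (network, outbound port); checked in order, default route last
    routing_table = [
        (ipaddress.ip_network("198.150.11.0/24"), "Ethernet0"),
        (ipaddress.ip_network("198.150.12.0/24"), "Serial0"),
        (ipaddress.ip_network("0.0.0.0/0"), "Serial1"),   # default route
    ]

    def route(dest_ip):
        addr = ipaddress.ip_address(dest_ip)
        for network, port in routing_table:
            # Applying the entry's mask to the destination and comparing the results
            # is what the "addr in network" test does internally.
            if addr in network:
                return port
        return None   # no match and no default: the packet is discarded as unreachable

    print(route("198.150.12.40"))   # Serial0
    print(route("203.0.113.9"))     # Serial1 (default route)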
The Interactive Media Activity will help students understand path determination.
10.2.5 Routing tables
This page will describe the functions of a routing table.
Routers use routing protocols to build and maintain routing tables that contain route information. This aids in
the process of path determination. Routing protocols fill routing tables with a variety of route information. This
information varies based on the routing protocol used. Routing tables contain the information necessary to
forward data packets across connected networks. Layer 3 devices interconnect broadcast domains or LANs.
A hierarchical address scheme is required for data transfers.
Routers keep track of the following information in their routing tables:
Protocol type - Identifies the type of routing protocol that created each entry.
Next-hop associations - Tell a router that a destination is either directly connected to the router or that it can be reached through another router, called the next hop, on the way to the destination. When a router receives a packet, it checks the destination address and attempts to match this address with a routing table entry.
Routing metric - Different routing protocols use different routing metrics. Routing metrics are used
to determine the desirability of a route. For example, RIP uses hop count as its only routing metric.
IGRP uses bandwidth, load, delay, and reliability metrics to create a composite metric value.
Outbound interfaces - The interface that the data must be sent out of to reach the final destination.
Routers communicate with one another to maintain their routing tables through the transmission of
routing update messages. Some routing protocols transmit update messages periodically. Other
protocols send them only when there are changes in the network topology. Some protocols transmit the
entire routing table in each update message and some transmit only routes that have changed. Routers
analyze the routing updates from directly-connected routers to build and maintain their routing tables.
10.2.6 Routing algorithms and metrics
This page will define algorithms and metrics as they relate to routers.
An algorithm is a detailed solution to a problem. Different routing protocols use different algorithms to choose
the port to which a packet should be sent. Routing algorithms depend on metrics to make these decisions.
Routing protocols often have one or more of the following design goals:
Optimization - This is the capability of a routing algorithm to select the best route. The route will
depend on the metrics and metric weights used in the calculation. For example, one algorithm may
use both hop count and delay metrics, but may consider delay metrics as more important in the
calculation.
Simplicity and low overhead - The simpler the algorithm, the more efficiently it will be processed
by the CPU and memory in the router. This is important so that the network can scale to large
proportions, such as the Internet.
Robustness and stability - A routing algorithm should perform correctly when confronted by
unusual or unforeseen circumstances, such as hardware failures, high load conditions, and
implementation errors.
Flexibility - A routing algorithm should quickly adapt to a variety of network changes. These
changes include router availability, router memory, changes in bandwidth, and network delay.
Rapid convergence - Convergence is the process of agreement by all routers on available routes.
When a network event causes changes in router availability, updates are needed to reestablish
network connectivity. Routing algorithms that converge slowly can cause data to be undeliverable.
Routing algorithms use different metrics to determine the best route. Each routing algorithm interprets what
is best in its own way. A routing algorithm generates a number called a metric value for each path through a
network. Sophisticated routing algorithms base route selection on multiple metrics that are combined in a
composite metric value. Typically, smaller metric values indicate preferred paths.
Metrics can be based on a single characteristic of a path, or can be calculated based on several
characteristics. The following metrics are most commonly used by routing protocols:
Bandwidth - Bandwidth is the data capacity of a link. Normally, a 10-Mbps Ethernet link is
preferable to a 64-kbps leased line.
Delay - Delay is the length of time required to move a packet along each link from a source to a
destination. Delay depends on the bandwidth of intermediate links, the amount of data that can be
temporarily stored at each router, network congestion, and physical distance.
Load - Load is the amount of activity on a network resource such as a router or a link.
Reliability - Reliability is usually a reference to the error rate of each network link.
Hop count - Hop count is the number of routers that a packet must travel through before reaching
its destination. Each router is equal to one hop. A hop count of four indicates that data would have to
pass through four routers to reach its destination. If multiple paths are available to a destination, the
path with the least number of hops is preferred.
Ticks - The delay on a data link using IBM PC clock ticks. One tick is approximately 1/18 second.
Cost - Cost is an arbitrary value, usually based on bandwidth, monetary expense, or other measurement, that is assigned by a network administrator.
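As an example of how a composite metric might be computed, the sketch below combines bandwidth and delay in the style of the IGRP formula with its default constants (K1 = K3 = 1, the other K values 0); the link values are assumptions and this is a simplification, not the full implementation:

    def igrp_style_metric(bandwidths_kbps, delays_usec):
        """Simplified IGRP-style composite metric with default K values (K1 = K3 = 1).
        Lower values indicate a more desirable route."""
        bw_term = 10_000_000 // min(bandwidths_kbps)   # based on the slowest link in the path
        delay_term = sum(delays_usec) // 10            # total delay in tens of microseconds
        return bw_term + delay_term

    # Two candidate paths to the same destination (assumed link values)
    fast_path = igrp_style_metric([100_000, 100_000], [100, 100])  # two Fast Ethernet hops
    slow_path = igrp_style_metric([64, 1_544], [20_000, 20_000])   # path with a 64-kbps leased line
    print(fast_path, slow_path)  # the smaller metric wins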
10.2.7 IGP and EGP
This page will introduce two types of routing protocols.
An autonomous system is a network or set of networks under common administrative control, such as the
cisco.com domain. An autonomous system consists of routers that present a consistent view of routing to the
external world.
Two families of routing protocols are Interior Gateway Protocols (IGPs) and Exterior Gateway Protocols
(EGPs).
IGPs route data within an autonomous system:
RIP and RIPv2
IGRP
EIGRP
OSPF
Intermediate System-to-Intermediate System (IS-IS) protocol
EGPs route data between autonomous systems. An example of an EGP is BGP.
10.2.8 Link state and distance vector
Routing protocols can be classified as either IGPs or EGPs. Which type is used depends on whether a group
of routers is under a single administration or not. IGPs can be further categorized as either distance-vector or
link-state protocols. This page describes distance-vector and link-state routing and explains when each
type of routing protocol is used.
The distance-vector routing approach determines the distance and direction, vector, to any link in the
internetwork. The distance may be the hop count to the link. Routers using distance-vector algorithms send
all or part of their routing table entries to adjacent routers on a periodic basis. This happens even if there are
no changes in the network. By receiving a routing update, a router can verify all the known routes and make
changes to its routing table. This process is also known as routing by rumor. The understanding that a router has of the network is based on the adjacent router's perspective of the network topology.
Examples of distance-vector protocols include the following:
Routing Information Protocol (RIP) - The most common IGP in the Internet, RIP uses hop count
as its only routing metric.
Interior Gateway Routing Protocol (IGRP) - This IGP was developed by Cisco to address issues
associated with routing in large, heterogeneous networks.
Enhanced IGRP (EIGRP) - This Cisco-proprietary IGP includes many of the features of a link-state
routing protocol. Because of this, it has been called a balanced-hybrid protocol, but it is really an
advanced distance-vector routing protocol.
Link-state routing protocols were designed to overcome limitations of distance-vector routing protocols. Link-state routing protocols respond quickly to network changes by sending triggered updates only when a network
change has occurred. Link-state routing protocols send periodic updates, known as link-state refreshes, at
longer time intervals, such as every 30 minutes.
When a route or link changes, the device that detected the change creates a link-state advertisement (LSA)
concerning that link. The LSA is then transmitted to all neighboring devices. Each routing device takes a
copy of the LSA, updates its link-state database, and forwards the LSA to all neighboring devices. This
flooding of LSAs is required to ensure that all routing devices create databases that accurately reflect the
network topology before updating their routing tables.
Link-state algorithms typically use their databases to create routing table entries that prefer the shortest path.
Examples of link-state protocols include Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS).
The Interactive Media Activity will identify the differences between link-state and distance vector routing
protocols.

10.2.9 Routing protocols


This page will describe different types of routing protocols.
RIP is a distance vector routing protocol that uses hop count as its metric to determine the direction and
distance to any link in the internetwork. If there are multiple paths to a destination, RIP selects the path with
the least number of hops. However, because hop count is the only routing metric used by RIP, it does not
always select the fastest path to a destination. Also, RIP cannot route a packet beyond 15 hops. RIP Version
1 (RIPv1) requires that all devices in the network use the same subnet mask, because it does not include
subnet mask information in routing updates. This is also known as classful routing.
RIP Version 2 (RIPv2) provides prefix routing, and does send subnet mask information in routing updates.
This is also known as classless routing. With classless routing protocols, different subnets within the same
network can have different subnet masks. The use of different subnet masks within the same network is
referred to as variable-length subnet masking (VLSM).
IGRP is a distance-vector routing protocol developed by Cisco. IGRP was developed specifically to address
problems associated with routing in large networks that were beyond the range of protocols such as RIP.
IGRP can select the fastest available path based on delay, bandwidth, load, and reliability. IGRP also has a
much higher maximum hop count limit than RIP. IGRP uses only classful routing.
OSPF is a link-state routing protocol developed by the Internet Engineering Task Force (IETF) in 1988.
OSPF was written to address the needs of large, scalable internetworks that RIP could not.
Intermediate System-to-Intermediate System (IS-IS) is a link-state routing protocol used for routed protocols
other than IP. Integrated IS-IS is an expanded implementation of IS-IS that supports multiple routed protocols
including IP.
Like IGRP, EIGRP is a proprietary Cisco protocol. EIGRP is an advanced version of IGRP. Specifically,
EIGRP provides superior operating efficiency such as fast convergence and low overhead bandwidth. EIGRP
is an advanced distance-vector protocol that also uses some link-state protocol functions. Therefore, EIGRP
is sometimes categorized as a hybrid routing protocol.
Border Gateway Protocol (BGP) is an example of an Exterior Gateway Protocol (EGP). BGP exchanges
routing information between autonomous systems while guaranteeing loop-free path selection. BGP is the
principal route advertising protocol used by major companies and ISPs on the Internet. BGP4 is the first
version of BGP that supports classless interdomain routing (CIDR) and route aggregation. Unlike common
Interior Gateway Protocols (IGPs), such as RIP, OSPF, and EIGRP, BGP does not use metrics like hop
count, bandwidth, or delay. Instead, BGP makes routing decisions based on network policies, or rules, that
use various BGP path attributes.
The Lab Activity will help students research the price of a small router.
This page concludes this lesson. The next lesson will focus on the mechanics of subnetting. The first
page covers the different classes of IP addresses.
10.3 The Mechanics of Subnetting

10.3.1 Classes of network IP addresses

This page will review the classes of IP addresses. The combined classes of IP addresses offer networks
that range in size from 256 to approximately 16.8 million hosts.
To efficiently manage a limited supply of IP addresses, all classes can be subdivided into smaller
subnetworks. Figure provides an overview of the division between networks and hosts.

10.3.2 Introduction to and reason for subnetting


This page will describe how subnetting works and why it is important.
To create the subnetwork structure, host bits must be reassigned as network bits. This is often referred to as
borrowing bits. However, a more accurate term would be lending bits. The starting point for this process is
always the leftmost host bit, the one closest to the last network octet.
Subnet addresses include the Class A, Class B, and Class C network portion, plus a subnet field and a host
field. The subnet field and the host field are created from the original host portion of the major IP address.
This is done by re-assigning bits from the host portion to the original network portion of the address.
The ability to divide the original host portion of the address into the new subnet and host fields provides
addressing flexibility for the network administrator.
In addition to the need for manageability, subnetting enables the network administrator to provide broadcast
containment and low-level security on the LAN. Subnetting provides some security since access to other
subnets is only available through the services of a router. Further, access security may be provided through
the use of access lists. These lists can permit or deny access to a subnet, based on a variety of criteria,
thereby providing more security. Access lists will be studied later in the curriculum. Some owners of Class A
and B networks have also discovered that subnetting creates a revenue source for the organization through
the leasing or sale of previously unused IP addresses.
Subnetting is an internal function of a network. From the outside, a LAN is seen as a single network with no
details of the internal network structure. This view of the network keeps the routing tables small and efficient.
Given a local node address of 147.10.43.14 on subnet 147.10.43.0, the world outside the LAN sees only the
advertised major network number of 147.10.0.0. The reason for this is that the local subnet address of
147.10.43.0 is only valid within the LAN where subnetting is applied.


10.3.3 Establishing the subnet mask address


This page provides detailed information about subnet masks and how they are established on a network.
Selecting the number of bits to use in the subnet process will depend on the maximum number of hosts
required per subnet. An understanding of basic binary math and the position value of the bits in each octet is
necessary to calculate the number of subnetworks and hosts created when bits are borrowed.
The last two bits in the last octet, regardless of the IP address class, may never be assigned to the
subnetwork. These bits are referred to as the last two significant bits. Use of all the available bits to create
subnets, except these last two, will result in subnets with only two usable hosts. This is a practical address
conservation method for addressing serial router links. However, for a working LAN this would result in
prohibitive equipment costs.
The subnet mask gives the router the information required to determine in which network and subnet a
particular host resides. The subnet mask is created by using binary ones in the network bit positions. The
subnet bits are determined by adding the position value of the bits that were borrowed. If three bits were
borrowed, the mask for a Class C address would be 255.255.255.224. This mask may also be
represented, in the slash format, as /27. The number following the slash is the total number of bits that were
used for the network and subnetwork portion.
To determine the number of bits to be used, the network designer needs to calculate how many hosts the
largest subnetwork requires and the number of subnetworks needed. As an example, the network requires
30 hosts and five subnetworks. A shortcut to determine how many bits to reassign is by using the subnetting
chart. By consulting the row titled Usable Hosts, the chart indicates that for 30 usable hosts three bits are
required. The chart also shows that this creates six usable subnetworks, which will satisfy the requirements
of this scheme. The difference between usable hosts and total hosts is a result of using the first available
address as the ID and the last available address as the broadcast for each subnetwork. Borrowing the
appropriate number of bits to accommodate required subnetworks and hosts per subnetwork can be a
balancing act and may result in unused host addresses in multiple subnetworks. The ability to use these
addresses is not provided with classful routing. However, classless routing, which will be covered later in the
course, can recover many of these lost addresses.
The method that was used to create the subnet chart can be used to solve all subnetting problems. This
method uses the following formula:
Number of usable subnets = two to the power of the borrowed subnet bits, minus two. The minus two is for
the reserved addresses of the network ID and the network broadcast.
(2^borrowed bits) - 2 = usable subnets
2^3 - 2 = 6
Number of usable hosts = two to the power of the remaining host bits, minus two (reserved addresses for the
subnet ID and the subnet broadcast).
(2^remaining host bits) - 2 = usable hosts
2^5 - 2 = 30
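The following short Python example is provided only as an illustration and is not part of the curriculum. It applies the two formulas above to the Class C case, which starts with eight host bits.

def usable_subnets_and_hosts(borrowed_bits, total_host_bits=8):
    # Subtract two subnets (all-zeros and all-ones) and two host addresses
    # (subnet ID and subnet broadcast), as in the formulas above.
    remaining_host_bits = total_host_bits - borrowed_bits
    usable_subnets = 2 ** borrowed_bits - 2
    usable_hosts = 2 ** remaining_host_bits - 2
    return usable_subnets, usable_hosts

# Class C example from the text: three borrowed bits leave five host bits.
print(usable_subnets_and_hosts(3))  # (6, 30)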


10.3.4 Applying the subnet mask


This page will teach students how to apply a subnet mask.
Once the subnet mask has been established, it can then be used to create the subnet scheme. The chart in
Figure is an example of the subnets and addresses created by assigning three bits to the subnet field. This
will create eight subnets with 32 hosts per subnet. Start with zero (0) when numbering subnets. The first
subnet is always referenced as the zero subnet.
When filling in the subnet chart, three of the fields are automatic; the others require some calculation. The
subnetwork ID of subnet zero is the same as the major network number, in this case 192.168.10.0. The
broadcast ID for the whole network is the largest number possible, in this case 192.168.10.255. The third
number that is given is the subnetwork ID for subnet number seven. This number is the three network octets
with the subnet mask number inserted in the fourth octet position. Three bits were assigned to the subnet
field with a cumulative value of 224. The ID for subnet seven is therefore 192.168.10.224. Inserting these
numbers establishes checkpoints that will verify the accuracy of the chart when it is completed.
When consulting the subnetting chart or using the formula, the three bits assigned to the subnet field will
result in 32 total hosts assigned to each subnet. This information provides the step count for each
subnetwork ID. Adding 32 to each preceding number, starting with subnet zero, the ID for each subnet is
established. Notice that the subnet ID has all binary 0s in the host portion.
The broadcast field is the last number in each subnetwork, and has all binary ones in the host portion. This
address has the ability to broadcast only to the members of a single subnet. Since the subnetwork ID for
subnet zero is 192.168.10.0 and there are 32 total hosts, the broadcast ID would be 192.168.10.31. Starting
at zero, the 32nd sequential number is 31. It is important to remember that zero (0) is a real number in the
world of networking.


The balance of the broadcast ID column can be filled in using the same process that was used in the
subnetwork ID column. Simply add 32 to the preceding broadcast ID. Another option is to start at the bottom
of the column and work up to the top by subtracting one from the subnetwork ID of the next subnet.
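The following Python sketch is provided only as an illustration; it assumes the 192.168.10.0 network shown in the figure and rebuilds the subnet chart for three borrowed bits (eight subnets of 32 addresses each).

network = "192.168.10."
step = 32  # 2**5 addresses per subnet, because five host bits remain

for subnet in range(8):
    subnet_id = subnet * step         # host portion all binary zeros
    broadcast = subnet_id + step - 1  # host portion all binary ones
    print(f"subnet {subnet}: ID {network}{subnet_id}  "
          f"hosts {network}{subnet_id + 1}-{network}{broadcast - 1}  "
          f"broadcast {network}{broadcast}")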

10.3.5 Subnetting Class A and B networks

This page will describe the process used to subnet Class A, B, and C networks.
The Class A and B subnetting procedure is identical to the process for Class C, except there may be
significantly more bits involved. The available bits for assignment to the subnet field in a Class A address are
22 bits, while a Class B address has 14 bits.
Assigning 12 bits of a Class B address to the subnet field creates a subnet mask of 255.255.255.240 or /28.
All eight bits were assigned in the third octet resulting in 255, the total value of all eight bits. Four bits were
assigned in the fourth octet resulting in 240. Recall that the slash mask is the sum total of all bits assigned to
the subnet field plus the fixed network bits.
Assigning 20 bits of a Class A address to the subnet field creates a subnet mask of 255.255.255.240 or /28.
All eight bits of the second and third octets were assigned to the subnet field and four bits from the fourth
octet.
In this situation, the subnet masks for the Class A and Class B addresses appear identical. Unless the mask
is related to a network address, it is not possible to determine how many bits were assigned to the subnet
field.
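The following Python sketch is provided only as an illustration. It converts a slash prefix into a dotted-decimal mask and shows why /28 produces 255.255.255.240 whether the subnetted address is Class A or Class B.

def prefix_to_mask(prefix):
    # Set the leftmost 'prefix' bits of a 32-bit value, then print each octet.
    mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
    return ".".join(str((mask >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(prefix_to_mask(28))  # 255.255.255.240
print(prefix_to_mask(27))  # 255.255.255.224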
Whichever class of address needs to be subnetted, the following rules are the same:
Total subnets = 2 to the power of the bits borrowed
Total hosts = 2 to the power of the bits remaining
Usable subnets = 2 to the power of the bits borrowed minus 2
Usable hosts = 2 to the power of the bits remaining minus 2


10.3.6 Calculating the resident subnetwork through ANDing


This page will explain the concept of ANDing.
Routers use subnet masks to determine the home subnetwork for individual nodes. This process is referred
to as logical ANDing. ANDing is a binary process by which the router calculates the subnetwork ID for an
incoming packet. ANDing is similar to multiplication.
This process is handled at the binary level. Therefore, it is necessary to view the IP address and mask in
binary. The IP address and the subnet mask are ANDed, with the result being the subnetwork ID.
The router then uses that information to forward the packet across the correct interface.
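The following Python sketch is provided only as an illustration, using a hypothetical host address of 192.168.10.67 and a /27 mask. It performs the same bit-by-bit AND that a router performs to find the resident subnetwork.

import ipaddress

host = int(ipaddress.IPv4Address("192.168.10.67"))
mask = int(ipaddress.IPv4Address("255.255.255.224"))

subnet_id = host & mask  # logical AND of the address and the mask
print(ipaddress.IPv4Address(subnet_id))  # 192.168.10.64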
Subnetting is a learned skill. It will take many hours of practice exercises to develop flexible and workable
addressing schemes. A variety of subnet calculators are available on the web. However, a network
administrator must know how to manually calculate subnets in order to effectively design the network
scheme and to verify the validity of the results from a subnet calculator. The subnet calculator will not provide
the initial scheme, only the final addressing. Also, no calculators of any kind are permitted during the
certification exam.
This page concludes this lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
IP is referred to as a connectionless protocol because no dedicated circuit connection is established between
the source and destination prior to transmission. IP is referred to as unreliable because it does not verify that the
data reached its destination. If verification of delivery is required, a combination of IP and a connection-oriented
transport protocol such as TCP is needed. If verification of error-free delivery is not required, IP can
be used in combination with a connectionless transport protocol such as UDP. Connectionless network
processes are often referred to as packet-switched processes. Connection-oriented network processes are
often referred to as circuit-switched processes.
Protocols at each layer of the OSI model add control information to the data as it moves through the network.
Because this information is added at the beginning and end of the data, this process is referred to as
encapsulating the data. Layer 3 adds network, or logical, address information to the data and Layer 2 adds
local, or physical, address information.
Layer 3 routing and Layer 2 switching are used to direct and deliver data throughout the network. Initially, the
router receives a Layer 2 frame with a Layer 3 packet encapsulated within it. The router must strip off the
Layer 2 frame and examine the Layer 3 packet. If the packet is destined for local delivery, the router must
encapsulate it in a new frame with the correct local MAC address as the destination. If the data must be
forwarded to another broadcast domain, the router must encapsulate the Layer 3 packet in a new Layer 2
frame that contains the MAC address of the next internetworking device. In this way a frame is transmitted
through networks from broadcast domain to broadcast domain and eventually delivered to the correct host.
Routed protocols, such as IP, transport data across a network. Routing protocols allow routers to choose the
best path for data from source to destination. These routes can be either static routes, which are entered
manually, or dynamic routes, which are learned through routing protocols. When dynamic routing protocols
are used, routers use routing update messages to communicate with one another and maintain their routing
tables. Routing algorithms use metrics to process routing updates and populate the routing table with the
best routes. Convergence describes the speed at which all routers agree on a change in the network.
Interior gateway protocols (IGP) are routing protocols that route data within autonomous systems, while
exterior gateway protocols (EGP) route data between autonomous systems. IGPs can be further categorized
as either distance-vector or link-state protocols. Routers using distance-vector routing protocols periodically
send routing updates consisting of all or part of their routing tables. Routers using link-state routing protocols
use link-state advertisements (LSAs) to send updates only when topological changes occur in the network,
and send complete routing tables much less frequently.
As a packet travels through the network, devices need a method of determining what portion of the IP
address identifies the network and what portion identifies the host. A 32-bit address mask, called a subnet
mask, is used to indicate the bits of an IP address that are being used for the network address. The default
subnet mask for a Class A address is 255.0.0.0. For a Class B address, the subnet mask always starts out
as 255.255.0.0, and a Class C subnet mask begins as 255.255.255.0. The subnet mask can be used to split
up an existing network into subnetworks, or subnets.
Subnetting reduces the size of broadcast domains, allows LAN segments in different geographical locations
to communicate through routers and provides improved security by separating one LAN segment from
another.
Custom subnet masks use more bits than the default subnet masks by borrowing these bits from the host
portion of the IP address. This creates a three-part address:
The original network address
The subnet address made up of the bits borrowed
The host address made up of the bits left after borrowing some for subnets
Routers use subnet masks to determine the subnetwork portion of an address for an incoming packet.
This process is referred to as logical ANDing.


Overview
The TCP/IP transport layer transports data between applications on source and destination devices.
Familiarity with the transport layer is essential to understand modern data networks. This module will
describe the functions and services of this layer.
Many of the network applications that are found at the TCP/IP application layer are familiar to most network
users. HTTP, FTP, and SMTP are acronyms that are commonly seen by users of Web browsers and e-mail
clients. This module also describes the function of these and other applications from the TCP/IP networking
model.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Describe the functions of the TCP/IP transport layer
Describe flow control
Explain how a connection is established between peer systems
Describe windowing
Describe acknowledgment
Identify and describe transport layer protocols
Describe TCP and UDP header formats
Describe TCP and UDP port numbers
List the major protocols of the TCP/IP application layer
Provide a brief description of the features and operation of well-known TCP/IP applications

11.1 TCP/IP Transport Layer


11.1.1 Introduction to the TCP/IP transport layer
This page will describe the functions of the transport layer.
The primary duties of the transport layer are to transport and regulate the flow of information from a source to
a destination, reliably and accurately. End-to-end control and reliability are provided by sliding windows,
sequencing numbers, and acknowledgments.
To understand reliability and flow control, think of someone who studies a foreign language for one year and
then visits the country where that language is used. In conversation, words must be repeated for reliability.
People must also speak slowly so that the conversation is understood, which relates to flow control.
The transport layer establishes a logical connection between two endpoints of a network. Protocols in the
transport layer segment and reassemble data sent by upper-layer applications into the same transport layer
data stream. This transport layer data stream provides end-to-end transport services.

The two primary duties of the transport layer are to provide flow control and reliability. The transport layer
defines end-to-end connectivity between host applications. Some basic transport services are as follows:
Segmentation of upper-layer application data
Establishment of end-to-end operations
Transportation of segments from one end host to another
Flow control provided by sliding windows
Reliability provided by sequence numbers and acknowledgments
TCP/IP is a combination of two individual protocols. IP operates at Layer 3 of the OSI model and is a
connectionless protocol that provides best-effort delivery across a network. TCP operates at the transport
layer and is a connection-oriented service that provides flow control and reliability. When these protocols are
combined they provide a wider range of services. The combined protocols are the basis for the TCP/IP
protocol suite. The Internet is built upon this TCP/IP protocol suite.
11.1.2 Flow control
This page will describe how the transport layer provides flow control.
As the transport layer sends data segments, it tries to ensure that data is not lost. Data loss may occur if a
host cannot process data as quickly as it arrives. The host is then forced to discard the data. Flow control
ensures that a source host does not overflow the buffers in a destination host. To provide flow control, TCP
allows the source and destination hosts to communicate. The two hosts then establish a data-transfer rate
that is agreeable to both.
11.1.3 Session establishment, maintenance, and termination
This page discusses transport functionality and how it is accomplished on a segment-by-segment basis.
Applications can send data segments on a first-come, first-served basis. The segments that arrive first will be
taken care of first. These segments can be routed to the same or different destinations. Multiple applications
can share the same transport connection in the OSI reference model. This is referred to as the multiplexing
of upper-layer conversations. Numerous simultaneous upper-layer conversations can be multiplexed over
a single connection.
One function of the transport layer is to establish a connection-oriented session between similar devices at
the application layer. For data transfer to begin, the source and destination applications inform the operating
systems that a connection will be initiated. One node initiates a connection that must be accepted by the
other. Protocol software modules in the two operating systems exchange messages across the network to
verify that the transfer is authorized and that both sides are ready.
The connection is established and the transfer of data begins after all synchronization has occurred. The two
machines continue to communicate through their protocol software to verify that the data is received
correctly.
Figure shows a typical connection between two systems. The first handshake requests synchronization.
The second handshake acknowledges the initial synchronization request, as well as synchronizing connection
parameters in the opposite direction. The third handshake segment is an acknowledgment used to inform the
destination that both sides agree that a connection has been established. After the connection has been
established, data transfer begins.
Congestion can occur for two reasons:
First, a high-speed computer might generate traffic faster than a network can transfer it.
Second, if many computers simultaneously need to send datagrams to a single destination, that
destination can experience congestion, although no single source caused the problem.
When datagrams arrive too quickly for a host or gateway to process, they are temporarily stored in memory.
If the traffic continues, the host or gateway eventually exhausts its memory and must discard additional
datagrams that arrive.
Instead of allowing data to be lost, the TCP process on the receiving host can issue a not ready indicator to
the sender. This indicator signals the sender to stop data transmission. When the receiver can handle
additional data, it sends a ready transport indicator. When this indicator is received, the sender can resume
the segment transmission.
At the end of data transfer, the source host sends a signal that indicates the end of the transmission. The
destination host acknowledges the end of transmission and the connection is terminated.


11.1.4 Three-way handshake


This page will explain how TCP uses three-way handshakes for data transmission.
TCP is a connection-oriented protocol. TCP requires a connection to be established before data transfer
begins. The two hosts must synchronize their initial sequence numbers to establish a connection.
Synchronization occurs through an exchange of segments that carry a synchronize (SYN) control bit and the
initial sequence numbers. This solution requires a mechanism that picks the initial sequence numbers and a
handshake to exchange them.
The synchronization requires each side to send its own initial sequence number and to receive a
confirmation of exchange in an acknowledgment (ACK) from the other side. Each side must receive the initial
sequence number from the other side and respond with an ACK. The sequence is as follows:
1. The sending host (A) initiates a connection by sending a SYN packet to the receiving host (B)
indicating its INS = X:
A - > B SYN, seq of A = X
2. B receives the packet, records that the seq of A = X, replies with an ACK of X + 1, and indicates that
its INS = Y. The ACK of X + 1 means that host B has received all octets up to and including X and is
expecting X + 1 next:
B - > A ACK, seq of A = X, SYN seq of B = Y, ACK = X + 1
3. A receives the packet from B, it knows that the seq of B = Y, and responds with an ACK of Y + 1,
which finalizes the connection process:
A - > B ACK, seq of B = Y, ACK = Y + 1
This exchange is called the three-way handshake.
A three-way handshake is necessary because sequence numbers are not based on a global clock in the
network, and TCP implementations may use different mechanisms to choose the initial sequence numbers.
The receiver of the first SYN would not know if the segment was delayed unless it kept track of the last
sequence number used on the connection. If the receiver does not have this information, it must ask the
sender to verify the SYN.
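The following Python sketch is provided only as an illustration; the host name and port are placeholders. When a TCP client calls connect, the operating system performs the SYN, SYN-ACK, ACK exchange described above before any application data is sent.

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    s.connect(("www.example.com", 80))  # the three-way handshake happens here
    print("Connection established with", s.getpeername())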

11.1.5 Windowing
This page will explain how windows are used to transmit data.
Data packets must be delivered to the recipient in the same order in which they were transmitted to have a
reliable, connection-oriented data transfer. The protocol fails if any data packets are lost, damaged,
duplicated, or received in a different order. An easy solution is to have a recipient acknowledge the receipt of
each packet before the next packet is sent.
If a sender had to wait for an ACK after each packet was sent, throughput would be low. Therefore, most
connection-oriented, reliable protocols allow multiple packets to be sent before an ACK is received. The time
interval after the sender transmits a data packet and before the sender processes any ACKs is used to
transmit more data. The number of data packets the sender can transmit before it receives an ACK is known
as the window size, or window.
TCP uses expectational ACKs. This means that the ACK number refers to the next packet that is expected.
Windowing refers to the fact that the window size is negotiated dynamically in the TCP session. Windowing
is a flow-control mechanism. Windowing requires the source device to receive an ACK from the destination
after a certain amount of data is transmitted. The destination host reports a window size to the source host.
This window specifies the number of packets that the destination host is prepared to receive before it
returns an ACK.
With a window size of three, the source device can send three bytes to the destination. The source device
must then wait for an ACK. If the destination receives the three bytes, it sends an acknowledgment to the
source device, which can now transmit three more bytes. If the destination does not receive the three bytes,
because of overflowing buffers, it does not send an acknowledgment. Because the source does not receive
an acknowledgment, it knows that the bytes should be retransmitted, and that the transmission rate should
be decreased.
In Figure , the sender sends three packets before it expects an ACK. If the receiver can handle only two
packets, it drops packet three, specifies three as the next expected packet, and indicates a new window size
of two. The sender sends the next two packets, but still specifies a window size of three. This means that the
sender will still expect a three-packet ACK from the receiver. The receiver replies with a request for packet
five and again specifies a window size of two.
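The following Python sketch is a simplified simulation, not real TCP. It only illustrates the idea that a sender may transmit up to one window of packets before it must stop and wait for an expectational ACK.

def send_with_window(total_packets, window):
    next_to_send = 1
    next_expected_ack = 1
    while next_expected_ack <= total_packets:
        # Send as many packets as the current window allows.
        while next_to_send <= total_packets and next_to_send < next_expected_ack + window:
            print("send packet", next_to_send)
            next_to_send += 1
        # The receiver acknowledges with the number of the next packet it expects.
        next_expected_ack = next_to_send
        print("receive ACK", next_expected_ack)

send_with_window(total_packets=6, window=3)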
11.1.6 Acknowledgment
This page will discuss acknowledgments and the sequence of segments.
Reliable delivery guarantees that a stream of data sent from one device is delivered through a data link to
another device without duplication or data loss. Positive acknowledgment with retransmission is one
technique that guarantees reliable delivery of data. Positive acknowledgment requires a recipient to
communicate with the source and send back an ACK when the data is received. The sender keeps a record
of each data packet, or TCP segment, that it sends and expects an ACK. The sender also starts a timer
when it sends a segment and will retransmit a segment if the timer expires before an ACK arrives.
Figure shows a sender that transmits data packets 1, 2, and 3. The receiver acknowledges receipt of the
packets with a request for packet 4. When the sender receives the ACK, it sends packets 4, 5, and 6. If
packet 5 does not arrive at the destination, the receiver acknowledges with a request to resend packet 5.
The sender resends packet 5 and then receives an ACK to continue with the transmission of packet 7.
TCP provides sequencing of segments with a forward reference acknowledgment. Each segment is
numbered before transmission. At the destination, TCP reassembles the segments into a complete
message. If a sequence number is missing in the series, that segment is retransmitted. Segments that are
not acknowledged within a given time period will result in a retransmission.


11.1.7 TCP
This page will discuss the protocols that use TCP and the fields included in a TCP segment.
TCP is a connection-oriented transport layer protocol that provides reliable full-duplex data transmission.
TCP is part of the TCP/IP protocol stack. In a connection-oriented environment, a connection is established
between both ends before the transfer of information can begin. TCP breaks messages into segments,
reassembles them at the destination, and resends anything that is not received. TCP supplies a virtual circuit
between end-user applications.
The following protocols use TCP:
FTP
HTTP
SMTP
Telnet
The following are the definitions of the fields in the TCP segment:
Source port - Number of the port that sends data
Destination port - Number of the port that receives data
Sequence number - Number used to ensure the data arrives in the correct order
Acknowledgment number - Next expected TCP octet
HLEN - Number of 32-bit words in the header
Reserved - Set to zero
Code bits - Control functions, such as setup and termination of a session
Window - Number of octets that the sender will accept
Checksum - Calculated checksum of the header and data fields
Urgent pointer - Indicates the end of the urgent data
Option - One option currently defined, maximum TCP segment size
Data - Upper-layer protocol data
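The following Python sketch is provided only as an illustration; the sample values are fabricated. It packs and unpacks the fixed 20-byte TCP header fields listed above with the standard struct module.

import struct

# Source port 1025, destination port 80, sequence 1, acknowledgment 0,
# HLEN 5 (five 32-bit words = 20 bytes), SYN code bit set, window 8192,
# checksum 0, urgent pointer 0.
raw = struct.pack("!HHLLBBHHH", 1025, 80, 1, 0, 5 << 4, 0x02, 8192, 0, 0)

(src_port, dst_port, seq, ack,
 offset_reserved, code_bits, window, checksum, urgent) = struct.unpack("!HHLLBBHHH", raw)

hlen = offset_reserved >> 4  # number of 32-bit words in the header
print(src_port, dst_port, seq, ack, hlen, hex(code_bits), window)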


11.1.8 UDP
This page will discuss UDP. UDP is the connectionless transport protocol in the TCP/IP protocol stack.
UDP is a simple protocol that exchanges datagrams without guaranteed delivery. It relies on higher-layer
protocols to handle errors and retransmit data.
UDP does not use windows or ACKs. Reliability is provided by application layer protocols. UDP is designed
for applications that do not need to put sequences of segments together.
The following protocols use UDP:
TFTP
SNMP
DHCP
DNS
The following are the definitions of the fields in the UDP segment:
Source port - Number of the port that sends data
Destination port - Number of the port that receives data
Length - Number of bytes in the header and data
Checksum - Calculated checksum of the header and data fields
Data - Upper-layer protocol data
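A comparable sketch for the 8-byte UDP header, again with fabricated sample values, shows why UDP overhead is so much lower than TCP overhead.

import struct

raw = struct.pack("!HHHH", 1025, 53, 8, 0)  # source port, destination port, length, checksum
src_port, dst_port, length, checksum = struct.unpack("!HHHH", raw)
print(src_port, dst_port, length, checksum)  # 1025 53 8 0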

11.1.9 TCP and UDP port numbers


This page examines port numbers.
Both TCP and UDP use port numbers to pass information to the upper layers. Port numbers are used to
keep track of different conversations that cross the network at the same time.
Application software developers agree to use well-known port numbers that are issued by the Internet
Assigned Numbers Authority (IANA). Any conversation bound for the FTP application uses the standard
port numbers 20 and 21. Port 20 is used for the data portion and Port 21 is used for control. Conversations
that do not involve an application with a well-known port number are assigned port numbers randomly from
within a specific range above 1023. Some ports are reserved in both TCP and UDP. However, applications
might not be written to support them. Port numbers have the following assigned ranges:
Numbers below 1024 are considered well-known port numbers.
Numbers above 1024 are dynamically assigned port numbers.
Registered port numbers are for vendor-specific applications. Most of these are above 1024.
End systems use port numbers to select the proper application. The source host dynamically assigns source
port numbers. These numbers are always greater than 1023.
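The following Python sketch is provided only as an illustration; the host name is a placeholder. It shows the well-known destination port of a server and the dynamically assigned source port chosen by the local host.

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    s.connect(("www.example.com", 80))             # well-known destination port 80
    print("source port:", s.getsockname()[1])      # ephemeral port above 1023, chosen by the host
    print("destination port:", s.getpeername()[1])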
This page concludes this lesson. The next lesson will focus on the application layer. The first page
provides an introduction.


11.2 The Application Layer

11.2.1 Introduction to the TCP/IP application layer


This page will introduce some TCP/IP application layer protocols.
The session, presentation, and application layers of the OSI model are bundled into the application layer of
the TCP/IP model. This means that representation, encoding, and dialog control are all handled in the
TCP/IP application layer. This design ensures that the TCP/IP model provides maximum flexibility at the
application layer for software developers.
The TCP/IP protocols that support file transfer, e-mail, and remote login are probably the most familiar to
users of the Internet. These protocols include the following applications:
DNS
FTP
HTTP
SMTP
SNMP
Telnet

11.2.2 DNS
This page will describe DNS.


The Internet is built on a hierarchical addressing scheme. This scheme allows for routing to be based on
classes of addresses rather than based on individual addresses. The problem this creates for the user is
associating the correct address with the Internet site. It is very easy to forget the IP address of a particular
site because there is nothing to associate the contents of the site with the address. Imagine the difficulty of
remembering the IP addresses of tens, hundreds, or even thousands of Internet sites.
A domain naming system was developed in order to associate the contents of the site with the address of
that site. The Domain Name System (DNS) is a system used on the Internet for translating names of
domains and their publicly advertised network nodes into IP addresses. A domain is a group of computers
that are associated by their geographical location or their business type. A domain name is a string of
characters, numbers, or both. Usually a name or abbreviation that represents the numeric address of an
Internet site will make up the domain name. There are more than 200 top-level domains on the Internet,
examples of which include the following:
.us - United States
.uk - United Kingdom
There are also generic domain names; examples include the following:
.edu - educational sites
.com - commercial sites
.gov - government sites
.org - non-profit sites
.net - network service
See Figure for a detailed explanation of these domains.
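The following Python sketch is provided only as an illustration of the name-to-address translation described above; any resolvable domain name could be used.

import socket

print(socket.gethostbyname("www.cisco.com"))  # prints the IP address returned by DNS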
11.2.3 FTP and TFTP
This page will describe the features of FTP and TFTP.
FTP is a reliable, connection-oriented service that uses TCP to transfer files between systems that support
FTP. The main purpose of FTP is to transfer files from one computer to another by copying and moving files
from servers to clients, and from clients to servers. When files are copied from a server, FTP first establishes
a control connection between the client and the server. Then a second connection is established, which is a
link between the computers through which the data is transferred. Data transfer can occur in ASCII mode or
in binary mode. These modes determine the encoding used for the data file, which in the OSI model is a
presentation layer task. After the file transfer has ended, the data connection terminates automatically. When
the entire session of copying and moving files is complete, the command link is closed when the user logs off
and ends the session.
TFTP is a connectionless service that uses User Datagram Protocol (UDP). TFTP is used on the router to
transfer configuration files and Cisco IOS images and to transfer files between systems that support TFTP.
TFTP is designed to be small and easy to implement. Therefore, it lacks most of the features of FTP. TFTP
can read or write files to or from a remote server but it cannot list directories and currently has no provisions
for user authentication. It is useful in some LANs because it operates faster than FTP and in a stable
environment it works reliably.
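The following Python sketch is provided only as an illustration; the server name and the anonymous login are placeholders. It shows the FTP control connection and the separate data connection that carries a directory listing.

from ftplib import FTP

with FTP("ftp.example.com") as ftp:  # control connection on TCP port 21
    ftp.login()                      # anonymous login, assumed to be allowed
    ftp.retrlines("LIST")            # the listing arrives over a second, data connection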
11.2.4 HTTP
This page will describe the features of HTTP.
Hypertext Transfer Protocol (HTTP) works with the World Wide Web, which is the fastest growing and most
used part of the Internet. One of the main reasons for the extraordinary growth of the Web is the ease with
which it allows access to information. A Web browser is a client-server application, which means that it
requires both a client and a server component in order to function. A Web browser presents data in
multimedia formats on Web pages that use text, graphics, sound, and video. The Web pages are created
with a format language called Hypertext Markup Language (HTML). HTML directs a Web browser on a
particular Web page to produce the appearance of the page in a specific manner. In addition, HTML specifies
locations for the placement of text, files, and objects that are to be transferred from the Web server to the
Web browser.
Hyperlinks make the World Wide Web easy to navigate. A hyperlink is an object, word, phrase, or picture, on
a Web page. When that hyperlink is clicked, it directs the browser to a new Web page. The Web page
contains, often hidden within its HTML description, an address location known as a Uniform Resource
Locator (URL).
In the URL http://www.cisco.com/edu/, the "http://" tells the browser which protocol to use. The second part,
"www", is the hostname or name of a specific machine with a specific IP address. The last part, /edu/
identifies the specific folder location on the server that contains the default web page.
A Web browser usually opens to a starting or "home" page. The URL of the home page has already been
stored in the configuration area of the Web browser and can be changed at any time. From the starting page,
click on one of the Web page hyperlinks, or type a URL in the address bar of the browser. The Web browser
examines the protocol to determine if it needs to open another program, and then determines the IP address
of the Web server using DNS. Then the transport layer, network layer, data link layer, and physical layer work
together to initiate a session with the Web server. The data that is transferred to the HTTP server contains
the folder name of the Web page location. The data can also contain a specific file name for an HTML page.
If no name is given, then the default name as specified in the configuration on the server is used.
The server responds to the request by sending to the Web client all of the text, audio, video, and graphic files
specified in the HTML instructions. The client browser reassembles all the files to create a view of the Web
page, and then terminates the session. If another page that is located on the same or a different server is
clicked, the whole process begins again.
The Lab Activity will help students become familiar with TCP and HTTP.
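The following Python sketch is provided only as an illustration, with a placeholder host. It performs the basic HTTP request and response exchange described above using the standard library client.

import http.client

conn = http.client.HTTPConnection("www.example.com", 80, timeout=5)
conn.request("GET", "/")                 # ask the server for its default page
response = conn.getresponse()
print(response.status, response.reason)  # for example, 200 OK
html = response.read()                   # the HTML that a browser would render
conn.close()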


11.2.5 SMTP
This page will discuss the features of SMTP.
Email servers communicate with each other using the Simple Mail Transfer Protocol (SMTP) to send and
receive mail. The SMTP protocol transports email messages in ASCII format using TCP.
When a mail server receives a message destined for a local client, it stores that message and waits for the
client to collect the mail. There are several ways for mail clients to collect their mail. They can use
programs that access the mail server files directly or collect their mail using one of many network protocols.
The most popular mail client protocols are POP3 and IMAP4, which both use TCP to transport data. Even
though mail clients use these special protocols to collect mail, they almost always use SMTP to send mail.
Since two different protocols, and possibly two different servers, are used to send and receive mail, it is
possible that mail clients can perform one task and not the other. Therefore, it is usually a good idea to
troubleshoot e-mail sending problems separately from e-mail receiving problems.
When checking the configuration of a mail client, verify that the SMTP and POP or IMAP settings are
correctly configured. A good way to test if a mail server is reachable is to Telnet to the SMTP port (25) or to
the POP3 port (110). The following command format is used at the Windows command line to test the ability
to reach the SMTP service on the mail server at IP address 192.168.10.5:
C:\>telnet 192.168.10.5 25
The SMTP protocol does not offer much in the way of security and does not require any authentication.
Administrators often do not allow hosts that are not part of their network to use their SMTP server to send or
relay mail. This is to prevent unauthorized users from using their servers as mail relays.
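The following Python sketch is an alternative to the Telnet test above. It uses the example server address from the text and assumes an SMTP service is reachable at that address.

import smtplib

with smtplib.SMTP("192.168.10.5", 25, timeout=5) as server:
    code, banner = server.noop()  # a harmless command that confirms the service answers
    print(code, banner)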


11.2.6 SNMP
This page will define SNMP.
The Simple Network Management Protocol (SNMP) is an application layer protocol that facilitates the
exchange of management information between network devices. SNMP enables network administrators to
manage network performance, find and solve network problems, and plan for network growth. SNMP uses
UDP as its transport layer protocol.
An SNMP managed network consists of the following three key components:
Network management system (NMS) - An NMS executes applications that monitor and control
managed devices. The bulk of the processing and memory resources required for network
management are provided by the NMS. One or more NMSs must exist on any managed network.
Managed devices - Managed devices are network nodes that contain an SNMP agent and that
reside on a managed network. Managed devices collect and store management information and
make this information available to NMSs using SNMP. Managed devices, sometimes called network
elements, can be routers, access servers, switches, bridges, hubs, computer hosts, or printers.
Agents - Agents are network-management software modules that reside in managed devices. An
agent has local knowledge of management information and translates that information into a form
compatible with SNMP.


11.2.7 Telnet
This page will explain the features of Telnet.
Telnet client software provides the ability to log in to a remote Internet host that is running a Telnet server
application and then to execute commands from the command line. A Telnet client is referred to as a local
host. A Telnet server, which uses special software called a daemon, is referred to as a remote host.
To make a connection from a Telnet client, the connection option must be selected. A dialog box typically
prompts for a host name and terminal type. The host name is the IP address or DNS name of the remote
computer. The terminal type describes the type of terminal emulation that the Telnet client should perform.
The Telnet operation uses none of the processing power from the transmitting computer. Instead, it transmits
the keystrokes to the remote host and sends the resulting screen output back to the local monitor. All
processing and storage take place on the remote computer.
Telnet works at the application layer of the TCP/IP model. Therefore, Telnet works at the top three layers of
the OSI model. The application layer deals with commands. The presentation layer handles formatting,
usually ASCII. The session layer transmits. In the TCP/IP model, all of these functions are considered to be
part of the application layer.
This page concludes this lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
The primary duties of the transport layer, Layer 4 of the OSI model, are to transport and regulate the flow of
information from the source to the destination reliably and accurately.
The transport layer multiplexes data from upper layer applications into a stream of data packets. It uses port
(socket) numbers to identify different conversations and delivers the data to the correct application.
The Transmission Control Protocol (TCP) is a connection-oriented transport protocol that provides flow
control as well as reliability. TCP uses a three-way handshake to establish a synchronized circuit between
end-user applications. Each datagram is numbered before transmission. At the receiving station, TCP
reassembles the segments into a complete message. If a sequence number is missing in the series, that
segment is retransmitted.
Flow control ensures that a transmitting node does not overwhelm a receiving node with data. The simplest
method of flow control used by TCP involves a not ready signal that notifies the transmitting device that the
buffers on the receiving device are full. When the receiver can handle additional data, the receiver sends a
ready transport indicator.
Positive acknowledgment with retransmission is another TCP protocol technique that guarantees reliable
delivery of data. Because having to wait for an acknowledgment after sending each packet would negatively
impact throughput, windowing is used to allow multiple packets to be transmitted before an acknowledgment
is received. TCP window sizes are variable during the lifetime of a connection.
If an application does not require flow control or an acknowledgment, as in the case of a broadcast
transmission, User Datagram Protocol (UDP) can be used instead of TCP. UDP is a connectionless transport
protocol in the TCP/IP protocol stack that allows multiple conversations to occur simultaneously but does not
provide acknowledgments or guaranteed delivery. A UDP header is much smaller than a TCP header
because of the lack of control information it must contain.
Some of the protocols and applications that function at the application level are well known to Internet users:
Domain Name System (DNS) - Used in IP networks to translate names of network nodes into IP
addresses
File Transfer Protocol (FTP) - Used for transferring files between networks
Hypertext Transfer Protocol (HTTP) - Used to deliver hypertext markup language (HTML)
documents to a client application, such as a WWW browser
Simple Mail Transfer Protocol (SMTP) - Used to provide electronic mail services
Simple Network Management Protocol (SNMP) - Used to monitor and control network devices
and to manage configurations, statistics collection, performance and security
Telnet - Used to log in to a remote host that is running a Telnet server application and then to execute
commands from the command line
