In this chapter of the journey to learn computer
networking technology we explain the OSI
model in simple terms, and expand on the
different layers of the OSI model.
The OSI model defines the basic building blocks of computer networking, and is an essential part
of a complete understanding of modern TCP/IP networks.
An understanding of the concepts of the OSI
model is absolutely necessary for someone
learning the role of the Network Administrator or
the System Administrator.
If you are looking for something less technical that focuses more on using a computer network,
rather than understanding the core concepts of how it works, please visit our companion website
Smart Technology.
At Smart Technology we discuss managing technology from the perspective of a business
owner or department manager. Check out the section on Managing Technology and specifically
the article on The System Administrator and the Power User.
The role of the Network Administrator or the System Administrator
On a small to mid-size network there may be little, if any, distinction between a Systems
Administrator and a Network Administrator, and the tasks may all be the responsibility of a
single post. As the size of the network grows, the distinction between the areas becomes better
defined.
In larger organizations the administrator-level technology personnel are typically not the first
line of support that works with end users; rather, they work only on break/fix issues that could
not be resolved at the lower levels.
Network administrators are responsible for making sure computer hardware and the network
infrastructure itself is maintained properly. The term network monitoring describes the use of a
system that constantly monitors a computer network for slow or failing components and that
notifies the network administrator in case of outages via email, pager or other alarms.
The typical Systems Administrator, or sysadmin, leans towards the software and NOS (Network
Operating System) side of things. Systems Administrators install software releases, upgrades,
and patches, resolve software-related problems, and perform system backups and recovery.
The simplest definition of a computer network is a group of computers that are able to
communicate with one another and share a resource. A computer network is a collection of
hardware and software that enables a group of devices to communicate and provides users with
access to shared resources such as data, files, programs, and operations.
Common networking terms
Each device on a network is called a node. In order for communications to take place, you need
the software, known as the network operating system (NOS), and the means of communication between
network computers, known as the media.
In computer networking the term media refers to the actual path over which an electrical signal
travels as it moves from one component to another. The media can be physical such as a
specialized cable or various forms of wireless media such as infrared transmission or radio
signals.
A local area network (LAN) is a collection of computers cabled together to form a network in a
small geographic area (usually within one building).
A wide area network (WAN) connects computers over long distances, using a public highway such
as the internet.
A network interface card (NIC) enables two computers to send and receive data over the network
media.
A gateway is software or hardware, or a combination of the two, that interconnects different
types of networks, translating as necessary between the two.
What is a protocol?
A network protocol is a specialized electronic language that enables network computers to
communicate. Different types of computers, using different operating systems, can communicate
with each other, and share information as long as they follow the network protocols.
A protocol suite is a set of related protocols that come from a single developer or source.
A protocol stack is a set of two or more protocols that work together, with each protocol
covering a different aspect of data communications.
What is the client server network model?
In the most common network model, client-server, at least one centralized server manages shared
resources and security for the other network users and computers. A network connection is only
made when information needs to be accessed by a user. This lack of a continuous network
connection provides network efficiency.
The client requests services or information from the server computer. The server responds to the
client's request by sending the results of the request back to the client computer.
Security and permissions can be managed by administrators which cuts down on security and
rights issues when dealing with a large number of workstations. This model allows for
convenient backup services, reduces network traffic and provides a host of other services that
come with the network operating system.
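To make the request/response pattern concrete, here is a minimal sketch in Python using the standard socket library. The port is chosen by the operating system and the message text is made up for illustration; a real network operating system provides far more than this.

```python
import socket
import threading

HOST = "127.0.0.1"          # loopback address, so the sketch runs on one machine
ready = threading.Event()
addr_box = {}               # lets the client discover the server's chosen port

def server():
    # The server manages the shared resource; here it answers one request.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, 0))                  # let the OS pick a free port
        srv.listen()
        addr_box["addr"] = srv.getsockname()
        ready.set()                          # signal that the server is accepting
        conn, _client = srv.accept()         # connection made only when needed
        with conn:
            request = conn.recv(1024)        # the client's request
            conn.sendall(b"RESULT:" + request)  # send the results back

threading.Thread(target=server, daemon=True).start()
ready.wait()   # do not connect before the server is listening

# The client requests a service and receives the server's response.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(addr_box["addr"])
    cli.sendall(b"get shared file")
    reply = cli.recv(1024)

print(reply.decode())   # RESULT:get shared file
```

Notice that the connection exists only for the duration of the exchange, which is exactly the efficiency the model provides.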
What are Peer-to-Peer Networks?
In a peer-to-peer network, which simply shares resources between computers, such as on a typical
home network, every computer acts as both a client and a server. Any computer can share resources
with another, and any computer can use the resources of another, given proper access rights.
This is a good solution when there are 10 or fewer users that are in close proximity to each
other, but it is difficult to maintain security as the network grows. This model can be a security
nightmare, because the permission settings for shared resources must be maintained at each
workstation, and there is no centralized management. This model is only recommended in situations
where security is not an issue.
Other Models
Before microcomputers became cost effective, dumb terminals were used to access very large
mainframe computers in remote locations. The local terminal was dumb in the sense that it was
nothing more than a way for a keyboard and monitor to access another computer remotely, with
all the processing occurring on the remote computer. This model, sometimes referred to as a
centralized model, is not very common today.
You can find a lot of resources that define the components of the OSI model, but an
understanding of the reasons behind the definitions will go a long way toward fully understanding
this complex technology model.
The acronym and the organization behind it can get confusing. The formal name for the OSI
model is the Open Systems Interconnection model. Open Systems refers to a cooperative effort
to develop hardware and software among many vendors that could be used together. The model is
a product of the International Organization for Standardization, which is often abbreviated ISO.
The logic behind ISO
Before we delve into the OSI model, let us take a moment to understand the organization behind
it. You may have seen the term ISO certified in various technology areas. ISO, the International
Organization for Standardization, is the world's largest developer and publisher of International
Standards. ISO helps to manage and create many international standards in many technical areas
to ensure the same quality of a product or process regardless of location or company.
The OSI (Open Systems Interconnection) model provides a set of general design guidelines for
data communications systems and gives a standard way to describe how various layers of data
communication systems interact. Applying the logic of the ISO standards to computer networking,
a computer component or piece of software needs to comply with a set of standards so that the
product or process will work no matter where in the world we are, and no matter who in the world
is producing it.
Putting the OSI model into perspective
Strive for a good understanding of the intent of the model and a few of the core principles;
that will go a long way toward an overall understanding of computer networking. Do not focus on
the intricate details of the OSI model at first, as the more you read the more confused you may
get.
The model was created in the 1970s and the technology is ever changing. Many text books will
contradict each other on some aspects of the upper layers. Some of the reasoning behind the
upper layers applies to processes that are not nearly as useful today as they were many years
ago, and for that reason many other network models blend the upper three layers into a single
layer.
Basic definitions of the OSI Model
The seven layers of the OSI Model can be remembered by using the following memory aide: All
People Seem To Need Data Processing. As you say the phrase, write down the first letter of each
word, and that will help you to remember the seven layers in order from highest to lowest:
Application, Presentation, Session, Transport, Network, Data Link, and Physical. We will
briefly discuss the lower four layers from the bottom up.
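The mapping between the memory aide and the layer names can be sketched in a few lines of Python; the layer list itself comes straight from the model.

```python
# The seven layers, ordered from highest (layer 7) to lowest (layer 1).
layers = ["Application", "Presentation", "Session", "Transport",
          "Network", "Data Link", "Physical"]
mnemonic = "All People Seem To Need Data Processing".split()

# The first letter of each word in the phrase matches the first letter
# of the corresponding layer name.
for word, layer in zip(mnemonic, layers):
    assert word[0] == layer[0]

# Layer numbers count from the bottom: Physical is layer 1.
for number, layer in enumerate(reversed(layers), start=1):
    print(number, layer)
```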
Layer one, the Physical layer provides the path through which data moves among devices on the
network.
Layer two, the Data Link layer provides a system through which network devices can share the
communication channel.
Layer three, the Network layer's main purpose is to decide which physical path the information
should follow from its source to its destination.
Layer four, the Transport layer provides the upper layers with a communication channel to the
network.
An analogy to understand the model
Some of the reasons behind the OSI model are to break network communication into smaller,
simpler parts that are easier to develop, and to facilitate standardization of network components
to allow multiple-vendor development and support.
Let's take the reasons behind the OSI model and apply them to something totally different to
illustrate how they are used. If we wanted to start a railroad and build a new type of train from
scratch, and we wanted this train to be able to use existing train tracks, and existing train stations
so our new system could get up and running quickly, we would need to understand what existing
standards are currently in place.
Even if we never had to build a set of train tracks, we would need to understand the standards
by which train tracks were built and designed so we could be sure our train could operate on
them, and how the track is shared. Likewise, in order for components to operate, manufacturers
must understand the track, layer one, and how the track is shared, layer two.
If we are building trains, not train stations, we need to know the size and shape of other vehicles
using the tracks so our trains could use the same track as all the other trains. Layer one of the
OSI model gives us the path, or the track we use for communication. Layer one, referred to as
the media, is the wire, or anything that takes the place of the wire, such as fiber optic, infrared, or
radio spectrum technology.
Once you have more than one train on the track, you need to find a way to share the track. Layer
two provides a system through which network devices can share the communication channel, or
in the case of our analogy, share the track. One of the functions of layer two is called media
access control (MAC). If you think about the term media access control you can break it down
into the two parts it represents, the media or the track, and access control, or the sharing of the
track.
In the OSI model layers one and two represent the media, or the physical components.
Layers three through seven represent the logical, or the software components.
In layer three of the OSI model, the Network layer, the logical decision is made as to which
physical path the information should follow from its source to its destination.
In order to continue our analogy to understand this complex set of rules, think of the track system
that has already been built as layers one and two. Once this track system is in place we need a
system to control the routing of the train system that runs on the tracks. Think of layers three
through seven as processes which affect the train itself, which would represent the actual
package of information being transported along the tracks. The main purpose of layer three is
switching and routing.
Layer four of the OSI model, the transport layer ensures the reliability of data delivery by
detecting and attempting to correct problems that occurred. In terms of our analogy, think of this
as a set of standards and procedures that allows our train to arrive safely at its destination in a
timely manner.
Learning and understanding the OSI model can be confusing. The goal of this article was not to
define the layers of the OSI model from a purely technical standpoint, but to offer an analogy
to understand why it is needed and how it is used to establish standards for data
communications. In our next article we will go over the basic definitions of all the layers of
the OSI model.
As a reminder, the seven layers (Application, Presentation, Session, Transport, Network, Data
Link, and Physical) can be remembered by using the following memory aide: All People Seem To
Need Data Processing.
The Application layer includes network software that directly serves the user, providing such
things as the user interface and application features. The Application layer is usually made
available by using an Application Programmer Interface (API), or hooks, which are made
available by the networking vendor.
The Presentation layer translates data to ensure that it is presented properly for the end user.
It also handles related issues such as data encryption and compression, and how data is
structured, as in a database.
The Session layer comes into play primarily at the beginning and end of a transmission. At the
beginning of the transmission, it makes known its intent to transmit. At the end of the
transmission, the Session layer determines if the transmission was successful. This layer also
manages errors that occur in the upper layers, such as a shortage of memory or disk space
necessary to complete an operation, or printer errors.
The Transport layer provides the upper layers with a communication channel to the network.
The Transport layer collects and reassembles any packets, organizing the segments for delivery
and ensuring the reliability of data delivery by detecting and attempting to correct problems that
occurred.
The Network layer's main purpose is to decide which physical path the information should
follow from its source to its destination.
The Data Link layer provides a system through which network devices can share the
communication channel. This function is called media-access control (MAC).
The Physical layer provides the electro-mechanical interface through which data moves among
devices on the network.
In the articles that follow we will break down each layer
in more detail, covering topics you will need to know as
a networking professional.
The two major types of twisted-pair cabling are unshielded twisted-pair (UTP) and shielded
twisted-pair (STP).
UTP - Unshielded Twisted Pair; uses RJ-45, RJ-11, RS-232, and RS-449 connectors; max length is
100 meters; speed is up to 100 Mbps. Cheap and easy to install, though length becomes a problem.
Can be CAT 2, 3, 4, or 5 quality grades.
In shielded twisted-pair (STP) the inner wires are encased in a sheath of foil or braided wire
mesh. Shielded twisted pair uses RJ-45, RJ-11, RS-232, and RS-449 connectors; max length is
100 meters; speed is up to 500 Mbps. Not as inexpensive as UTP, but easy to install; length
becomes a problem. Can be CAT 2, 3, 4, or 5 quality grades.
Category 1 Traditional UTP telephone cable can transmit voice signals but not data. Most
telephone cable installed prior to 1983 is Category 1.
Category 2 UTP cable is made up of four twisted-pair wires, certified for transmitting data up to
4 Mbps (megabits per second).
Category 3 UTP cable is made up of four twisted-pair wires, each twisted three times per foot.
Category 3 is certified to transmit data up to 10 Mbps.
Category 4 UTP cable is made up of four twisted-pair wires, certified to transmit data up to 16
Mbps.
Category 5 UTP cable is made up of four twisted-pair wires, certified to transmit data up to 100
Mbps.
Twisted-pair Ethernet cable has the following specifications:
a maximum of 1,024 attached workstations
a maximum of 4 repeaters between communicating workstations
a maximum segment length of 328 feet (100 meters).
The 100BASE-TX specification uses two pairs of Category 5 UTP or Type 1 STP cabling at a
100 Mbps data transmission speed. Each segment can be up to 100 meters long.
100BASE-T4 specification uses four pairs of Category 3, 4, or 5 UTP cabling at a 100 Mbps
data transmission speed with standard RJ-45 connectors. Each segment can be up to 100 meters
long.
Fiber optic cable (IEEE 802.8) has a center core of glass surrounded by a cladding composed of
varying layers of reflective glass, which refracts light back into the core. Max length is 25
kilometers and speed is up to 2 Gbps, but it is very expensive. Best used for a backbone due to
cost.
100BASE-FX specification uses two-strand 62.5/125 micron multi- or single-mode fiber media.
Half-duplex, multi-mode fiber media has a maximum segment length of 412 meters. Full-duplex,
single-mode fiber media has a maximum segment length of 10,000 meters.
On a bus network, a terminator at each end of the cable absorbs signal reflections, effectively
making the cable "look" infinitely long to the signals being sent across it.
Hybrid topologies are combinations of the above and are common on very large networks. For
example, a star bus network has hubs connected in a row (like a bus network) and has computers
connected to each hub.
Cellular: Refers to a geographic area, divided into cells, combining a wireless structure with
point-to-point and multipoint design for device attachment.
Logical Topology:
Ring: Generates and sends the signal on a one-way path, usually counterclockwise.
Bus: Generates and sends the signal to all network devices.
Physical topology defines the cable's actual physical configuration (star, bus, mesh, ring, cellular,
hybrid). Logical topology defines the network path that a signal follows (ring or bus).
IEEE 802.3 is an extension of the original Ethernet. It includes modifications to the classic
Ethernet data packet structure.
The Media Access Control (MAC) sub-layer contains methods that logical topologies can use
to regulate the timing of data signals and eliminate collisions.
The MAC address is a device's actual physical address, which is usually assigned by the hardware
manufacturer. Every device on the network must have a unique MAC address to ensure proper
transmission and reception of data. The MAC sub-layer communicates directly with the network
adapter card.
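As a sketch of what a MAC address looks like, the following Python fragment formats a 48-bit address from its six bytes. The byte values here are made up for illustration, not a real manufacturer's assignment.

```python
# A MAC address is 48 bits: six bytes, usually written in hexadecimal.
mac_bytes = bytes([0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E])

# The first three bytes identify the hardware manufacturer (the OUI);
# the last three are assigned by that manufacturer to keep each address unique.
oui, device_id = mac_bytes[:3], mac_bytes[3:]

mac_text = ":".join(f"{b:02X}" for b in mac_bytes)
print(mac_text)  # 00:1A:2B:3C:4D:5E
```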
Carrier Sense Multiple Access / Collision Detection (CSMA/CD) is a set of rules determining
how network devices respond when two devices attempt to use a data channel simultaneously
(called a collision). Standard Ethernet networks use CSMA/CD. This standard enables devices to
detect a collision.
After detecting a collision, a device waits a random delay time and then attempts to re-transmit
the message. If the device detects a collision again, it waits twice as long to try to
re-transmit the message. This is known as exponential backoff.
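A rough Python sketch of the backoff calculation follows. The 51.2 microsecond slot time is the classic 10 Mbps Ethernet value and the cap of 10 doublings reflects the usual limit, but treat the numbers as illustrative.

```python
import random

SLOT_TIME_US = 51.2  # classic 10 Mbps Ethernet slot time, in microseconds

def backoff_delay(collisions: int) -> float:
    # After n collisions, wait a random number of slot times in the range
    # [0, 2**n - 1]; real Ethernet caps n at 10 and gives up after 16
    # failed attempts.
    n = min(collisions, 10)
    slots = random.randint(0, 2 ** n - 1)
    return slots * SLOT_TIME_US

random.seed(1)  # deterministic for the demonstration
for c in (1, 2, 3):
    print(f"collision {c}: waiting range doubles to "
          f"{(2 ** c - 1) * SLOT_TIME_US:.1f} us, chose {backoff_delay(c):.1f} us")
```

Each successive collision doubles the range the random delay is drawn from, which is why two colliding devices quickly stop colliding.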
IEEE 802.5 uses token passing to control access to the medium. IBM Token Ring is essentially
a subset of IEEE 802.5.
The IEEE 802.11 specifications are wireless standards that specify an "over-the-air" interface
between a wireless client and a base station or access point, as well as among wireless clients.
The 802.11 standards can be compared to the IEEE 802.3 standard for Ethernet for wired
LANs.
The IEEE 802.11 specifications address both the Physical (PHY) and Media Access Control
(MAC) layers and are tailored to resolve compatibility issues between manufacturers of Wireless
LAN equipment.
The IEEE 802.15 Working Group provides, in the IEEE 802 family, standards for low-complexity
and low-power-consumption wireless connectivity.
IEEE 802.16 specifications support the development of fixed broadband wireless access
systems to enable rapid worldwide deployment of innovative, cost-effective and interoperable
multi-vendor broadband wireless access products.
In addition to physical addresses, TCP/IP enables the option of specifying a service address,
known as a socket or port, to point the data to the correct program on the destination computer.
Addressing
Each computer on a TCP/IP network has to have a unique, numeric IP address. The IP address is
like a mailing address: some of the bits represent the network segment that the computer is on,
like the street name of a mailing address, while other bits represent the particular host on the
segment, like the house number.
IP addresses have 4 bytes, each of which is referred to as an octet. Since each byte in the address
has 8 bits, an IP address is 32 bits long. IP addresses are usually displayed in decimal format
where the value of each byte is converted from binary to decimal. This makes them easier to
remember. For example, an IP address of 74.52.151.178 is much easier to remember than its
binary equivalent of: 01001010.00110100.10010111.10110010
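The conversion between the two formats can be sketched in Python, using the example address from above:

```python
def to_binary(ip: str) -> str:
    # Each decimal octet is one byte, so render it as an 8-bit binary string.
    return ".".join(f"{int(octet):08b}" for octet in ip.split("."))

def to_decimal(bits: str) -> str:
    # Convert each 8-bit group back to its decimal value.
    return ".".join(str(int(group, 2)) for group in bits.split("."))

binary = to_binary("74.52.151.178")
print(binary)   # 01001010.00110100.10010111.10110010
assert to_decimal(binary) == "74.52.151.178"
```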
If an IP address represents a mailing address, think of the service address as a specific room
in the house. The service address is a number appended to the IP address, such as
74.52.151.178:25, where 74.52.151.178 is the IP address and 25 is the service address. In the
early days of computer networking the term socket number was used. A well-known range of port
numbers is reserved by convention to identify specific service types on a host computer.
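Splitting the combined notation back into its parts is straightforward; this Python sketch uses the example from the text:

```python
endpoint = "74.52.151.178:25"
ip, port_text = endpoint.rsplit(":", 1)   # separate the address from the port
port = int(port_text)

# Ports 0-1023 are the well-known range reserved for specific services;
# port 25 is conventionally used for SMTP (email).
print(ip, port, "well-known" if port < 1024 else "registered/dynamic")
```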
On most IP networks, computers have not only IP addresses, but they also have descriptive
names that are easier for people to remember and use. This name is called the host name. It's a
friendly name assigned to a computer that people can use instead of the numeric IP address.
Routing
Routing is the process of selecting which physical path the information should follow from its
source to its destination. The Network layer manages data traffic and congestion involved in
packet switching and routing.
Routers are devices that play a significant role in directing the flow of data between two or more
networks. Routers make sure that information makes it to the intended destination as well as
ensure that information does not go where it is not needed. This is crucial for keeping large
volumes of data from clogging connections.
One of the tools a router uses to decide where a packet should go is a configuration table. A
configuration table identifies which connections lead to particular groups of addresses, sets
priorities for the connections to be used, and establishes rules for handling both routine and
special cases of traffic.
A configuration table can be as simple as a half-dozen lines in the smallest routers, but can grow
to massive size and complexity in the very large routers that handle the bulk of Internet
messages.
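The idea behind such a table can be sketched with Python's standard ipaddress module. The prefixes and connection names below are invented for illustration; real tables are built and maintained by routing protocols.

```python
import ipaddress

# Map groups of addresses (network prefixes) to outgoing connections.
table = {
    "192.168.1.0/24": "local LAN",
    "192.168.0.0/16": "campus backbone",
    "0.0.0.0/0":      "default route to ISP",
}

def route(destination: str) -> str:
    # Longest-prefix match: the most specific network that contains the
    # destination address wins.
    dest = ipaddress.ip_address(destination)
    matches = [net for net in table if dest in ipaddress.ip_network(net)]
    best = max(matches, key=lambda net: ipaddress.ip_network(net).prefixlen)
    return table[best]

print(route("192.168.1.77"))   # local LAN
print(route("192.168.9.5"))    # campus backbone
print(route("8.8.8.8"))        # default route to ISP
```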
Internet Protocol (IP) envelopes and addresses the data, enables the network to read the
envelope and forward the data to its destination and defines how much data can fit in a single
packet.
Internet Protocol (IP) is a connectionless protocol, which means that a session is not created
before sending data. IP is responsible for addressing and routing of packets between computers.
It does not guarantee delivery and does not give acknowledgement of packets that are lost or sent
out of order as this is the responsibility of higher layer protocols such as Transmission Control
Protocol (TCP).
Time To Live (TTL) is a concept in IP that prevents packets from endlessly looping around the
Internet. When a packet leaves a computer, the TTL is set to a maximum of 255. Each router
decreases the TTL by one or more. If the TTL reaches zero, the router sends the source computer
an ICMP Time Exceeded message and discards the packet.
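A small Python sketch of the TTL rule; the router names and hop counts are made up:

```python
def forward(ttl: int, path: list) -> str:
    # Each router along the path decrements the TTL before forwarding.
    for router in path:
        ttl -= 1
        if ttl <= 0:
            # A real router would also send an ICMP Time Exceeded message
            # back to the source computer at this point.
            return f"discarded at {router}"
    return "delivered"

print(forward(ttl=64, path=["r1", "r2", "r3"]))   # delivered
print(forward(ttl=2,  path=["r1", "r2", "r3"]))   # discarded at r2
```

A looping packet keeps losing TTL on every pass, so it is eventually discarded rather than circling forever.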
Packet Switching
Throughout the standard for Internet Protocol you will see the description of packet switching,
"fragment and reassemble internet datagrams when necessary for transmission through small
packet networks." A message is divided into smaller parts known as packets before they are sent.
Each packet is transmitted individually and can even follow different routes to its destination.
Once all the packets forming a message arrive at the destination, they are recompiled into the
original message.
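The fragment-and-reassemble cycle can be sketched in Python: number the packets, let them arrive out of order, and rebuild the message at the destination. The message text and packet size are arbitrary.

```python
import random

def fragment(message: str, size: int):
    # Each packet carries its offset as a sequence number, so the receiver
    # can put the pieces back in order no matter how they arrive.
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

message = "A message is divided into smaller parts before being sent."
packets = fragment(message, size=10)

# Packets may follow different routes and arrive out of order.
random.shuffle(packets)

# The destination sorts by sequence number and recompiles the message.
reassembled = "".join(chunk for _offset, chunk in sorted(packets))
assert reassembled == message
print(f"{len(packets)} packets reassembled correctly")
```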
Imagine I wrote a message on a piece of paper, handed it to a student in the front row of a
classroom, and asked him simply to turn around and pass the paper to the person behind him, and
in turn continue the process until the paper made it to the person in the back row.
In the next phase of the illustration, I would take the same piece of paper that had the message
written on it, and tear it into four pieces. On each individual piece of paper I would address it as
if sending a letter through the postal service, by writing my name as the sender, and also the
name of the person in the back of the room as the recipient. I would also label each individual
piece of paper as one of four, two of four, three of four, and four of four.
This time I would take the four individual pieces of paper and walk across the front row, and as I
handed one piece of paper to four different students, I would explain to them who was to receive
the paper, and ask them to pass it to the person marked as the recipient by using the people
behind them. When all four pieces of paper arrived at the destination, I would ask the recipient to
read the label I had put on each piece of paper, and confirm they had received the entire message.
My original passing of the paper represented circuit switching, the telecommunications
technology which used circuits to create a virtual path, a dedicated channel between two
points, and then delivered the entire message.
My second passing of the "packets" or scraps of paper illustrated packet switching, and each
individual in the room acted as a router. The key difference between the two methods was the
additional routes that the pieces of the message took. It was a very primitive, but effective,
demonstration of packet switching and the way in which a message would be transmitted across
the internet.
Once the concept of packet switching was developed the next stage in the evolution was to create
a language that would be understood by all computer systems. This new standard set of rules
would enable different types of computers, with different hardware and software platforms, to
communicate in spite of their differences.
TCP is the more complicated of the two, providing a connection-oriented byte stream that is
almost error free, with flow control, multiple ports, and same-order delivery. UDP is a very
simple datagram service, which provides limited error reduction and multiple ports.
Transmission Control Protocol (TCP) breaks data up into packets that the network can handle
efficiently, verifies that all the packets arrive at their destination, and reassembles the data.
Transmission Control Protocol (TCP) is connection oriented, which means an acknowledgement
(ACK) verifies that the host has received each segment of the message, providing a reliable
delivery service. Acknowledgements are sent by the receiving computer, and unacknowledged
packets are resent. Sequence numbers are used with acknowledgements to track successful packet
transfer.
If the ACK is not received after a given time period, the data is resent. If segments are not
delivered to the destination device correctly, the Transport layer can initiate retransmission
or inform the upper layers. TCP uses segmentation, flow control, and error checking to ensure
packet delivery. The Transport layer also serves the purpose of name resolution, either to an
IP/IPX address or a network protocol name; name resolution helps upper-layer services
communicate segment destinations with lower-layer services.
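A simulated sketch of this acknowledge-and-resend behavior in Python; the 30% loss rate and the random seed are arbitrary choices for the demonstration, not anything from the TCP specification.

```python
import random

random.seed(7)  # deterministic for the demonstration

def lossy_send(segment):
    # Pretend 30% of segments are lost in transit.
    return None if random.random() < 0.3 else segment

segments = {1: "first", 2: "second", 3: "third"}
received, attempts = {}, 0

unacked = set(segments)
while unacked:
    for seq in sorted(unacked):
        attempts += 1
        delivered = lossy_send((seq, segments[seq]))
        if delivered is not None:
            received[seq] = delivered[1]   # receiver stores the segment
    # ACKs come back for everything received; the rest will be resent.
    unacked -= set(received)

assert received == segments
print(f"{len(segments)} segments delivered in {attempts} transmissions")
```

The sender keeps track of which sequence numbers have been acknowledged and resends only the missing ones, which is the essence of reliable delivery.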
User Datagram Protocol (UDP) provides same services as TCP but is connectionless and
unacknowledged. UDP lets applications send datagrams without the overhead involved in
acknowledging packets and maintaining a virtual circuit. UDP is therefore used to broadcast
messages across an internetwork, because acknowledgment is unnecessary and overhead is
undesirable.
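The contrast with TCP shows up even in a sketch: a UDP sender just addresses a datagram and sends it, with no session setup and no acknowledgement. This Python example uses the loopback address and an OS-chosen port.

```python
import socket

# The receiver just binds a datagram socket; there is no listen/accept step.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
addr = receiver.getsockname()

# The sender addresses the datagram and sends it: no connect(), no handshake.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"status update", addr)

data, _src = receiver.recvfrom(1024)     # one datagram, no ACK sent back
print(data.decode())

sender.close()
receiver.close()
```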
Once the basic concept of the TCP/IP family was developed, many more members of the family
were added. Some of the more common protocols are listed here.
Simple Mail Transfer Protocol (SMTP) is used for transferring email across the internet.
File Transfer Protocol (FTP) is used to upload and download files.
Hypertext Transfer Protocol (HTTP) is the protocol used to transport web pages.
Address Resolution Protocol (ARP) translates a host's software address to a hardware (or
MAC) address (the node address that is set on the network interface card).
Reverse Address Resolution Protocol (RARP) was adapted from ARP and provides the reverse
functionality. It determines a software address from a hardware (or MAC) address. A diskless
workstation uses this protocol during bootup to determine its IP address.
BOOTP is used by diskless workstations. It enables these types of workstations to discover their
IP addresses, the address of a server host, and the name of the file that should be loaded into
memory and run at bootup.
Dynamic Host Configuration Protocol (DHCP) is used to centrally administer the assignment
of IP addresses, as well as other configuration information such as subnet masks and the address
of the default gateway. When you use DHCP on a TCP/IP network, IP addresses are assigned to
clients dynamically instead of manually.
Internet Control Message Protocol (ICMP) enables systems on a TCP/IP network to share
status and error information such as with the use of PING and TRACERT utilities.
Simple Network Management Protocol (SNMP) was designed to enable the analysis and
troubleshooting of network hardware. For example, SNMP enables you to monitor workstations,
servers, minicomputers, and mainframes, as well as connectivity devices such as bridges, routers,
gateways, and wiring concentrators.
What is a Protocol?
Once the concept of packet switching was developed the next stage in the evolution was to create
a language that would be understood by all computer systems.
The network concept of protocols would establish a standard set of rules that would enable
different types of computers, with different hardware and software platforms, to communicate in
spite of their differences. Protocols describe both the format that a message must take as well as
the way in which messages are exchanged between computers.
During the 1970s Bob Kahn and Vinton Cerf would collaborate as key members of a team to
create TCP/IP, Transmission Control Protocol (TCP) and Internet Protocol (IP), the building
blocks of the modern internet.
What is an RFC?
The concept of Request for Comments (RFC) documents was started by Steve Crocker in 1969
to help record unofficial notes on the development of ARPANET. RFCs have since become
official documents of Internet specifications.
In computer network engineering, a Request for Comments (RFC) is a formal document
published by the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB),
and the global community of computer network researchers, to establish Internet standards.
TCP/IP RFC History
The creation of TCP/IP as the basic set of rules for computers to communicate was one of the
last major phases in the development of this global network we now call the Internet. Many
additional members of the TCP/IP family of protocols continue to be developed, expanding on the
basic principles established by Bob Kahn and Vinton Cerf back in the 1970s.
In 1981 the TCP/IP standards were published as RFCs 791, 792 and 793 and adopted for use. On
January 1, 1983, TCP/IP protocols became the only approved protocol on the ARPANET, the
predecessor to today's internet.