
MicroLink Information Technology College

Data communication and computer network


Handout 4: Network Architecture
1. Introduction
In the first three handouts, we have looked at the physical aspects of a network. We
learned about cables and the various methods of connecting them so that we can share
data. Now that we can physically link computers, we need to learn how to gain access
to the wires and cables.
In this section, we will first examine how data is put together before it is sent on to the
wires of a computer network. Next, we examine the three principal methods used to
access the wires. The first method, called contention, is based on the principle of "first
come, first served." The second method, token passing, is based on the principle of
waiting to take turns. The third method, demand priority, is relatively new and is
based on prioritising access to the network. Last, we examine two of the most
common network systems (Ethernet and Token Ring).
2. How networks send data
Data usually exists as rather large files. However, networks cannot operate if
computers put large amounts of data on the cable at the same time. If a computer
sends large amounts of data it can cause other computers to wait (increasing the
frustration of the other users) while the data is being moved. There are two reasons
why putting large chunks of data on the cable at one time slows down the network:
Large amounts of data sent as one large unit tie up the network and make
timely interaction and communications impossible because one computer is
flooding the cable with data.
The impact of retransmitting large units of data further multiplies network
traffic.
These effects are minimized when the large data units are reformatted into smaller
packages. This way, only a small section of data is affected, and, therefore, only a
small amount of data must be retransmitted, making it relatively easy to recover from
the error. These packages are commonly called packets or frames, and are the basic
building blocks of network data communications.
When the operating system at the sending computer breaks the data into packets, it
adds special control information to each frame. This makes it possible to:
Send the original, disassembled data in small chunks
Reassemble the data in the proper order when it reaches its destination
Check the data for errors after it has been reassembled
Exactly what control information is added can vary, but all packets include at least the
source address, the data and the destination address. There are three different ways in
which packets can be addressed:
Unicast: packet is addressed to a single destination
Multicast: packet is addressed simultaneously to multiple destinations
Broadcast: packet is sent simultaneously to all stations on the network
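The packet structure described above can be pictured with a short sketch. The field names and the four-byte chunk size below are illustrative, not a real protocol format:

```python
# Sketch of splitting data into packets carrying minimal control information
# (source address, destination address, sequence number). All names and the
# chunk size are assumptions for illustration only.

def make_packets(data: bytes, src: str, dst: str, size: int = 4):
    """Split data into fixed-size chunks, each tagged with control info."""
    return [
        {"src": src, "dst": dst, "seq": i, "payload": data[i:i + size]}
        for i in range(0, len(data), size)
    ]

def reassemble(packets):
    """Use the sequence numbers to restore the original order."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

packets = make_packets(b"HELLO, NETWORK", "1,2", "3,1")
# Even if packets arrive out of order, the control information lets the
# receiver put the data back together:
assert reassemble(list(reversed(packets))) == b"HELLO, NETWORK"
```

The sequence number is the control information that makes reassembly in the proper order possible; a real protocol would also carry error-checking fields.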

Communication using smaller packets of data is known as packet switching, whereas
if a direct dedicated communication line is used for the duration of the transmission it
is known as circuit switching.
There are actually two types of packet switching. In connection-oriented (CO), or
virtual, circuits, a route across the network is established and all packets of data
follow that route. In CO packet switching, the sender first requests a connection
to the receiver, and waits for the connection to be established. Once it is established
the virtual connection is left in place whilst the data is transmitted. When data
transmission is complete, the connection is given up. You can think of CO packet
switching as being like a telephone call: when you are talking to somebody on the
phone, all information is sent by the same route over the telephone network. When
you put the phone down, the connection is given up. A specific case of CO packet
switching is the virtual private network (VPN). A VPN is a private network that uses
public network infrastructure. A series of encrypted logical connections (or tunnels)
are made across the public network, enabling computers in different parts of the world
to communicate as if they were on a private network.
In connectionless (CL) circuits, no pre-determined route exists, and each packet is
routed independently. In this case, the sender simply prepares the packet for
transmission, adds the destination address, and sends it onto the network. The network
hardware will then use the address to route the packet in the best way that it can. You
can think of CL packet switching as being like posting a series of letters: the postal
service will send each letter independently, and you cannot be sure that each letter
will follow the same physical route.
CO packet switching has more overheads: before transmission can start time must
be spent setting up the virtual connection across the network, and after it has finished
more time must be spent closing the connection. However, once transmission has
commenced, bandwidth can be reserved so it is possible to guarantee higher data
rates, which is not possible with CL packet switching. Therefore CO packet switching
is well suited to real-time applications such as the streaming of video and/or sound. On the
other hand, CL packet switching is simpler, has fewer overheads, and allows multicast
and broadcast addressing.
3. Routing
It was mentioned above that a major difference between CO and CL networks is
whether a route is determined for all packets at once or determined individually for
each packet. We will now deal with the subject of how these routes are determined.
One important concept to understand before we begin is that of a routing table.
3.1 Routing tables
A routing table is stored in the RAM of a network device such as a bridge, switch or
router, and contains information about where to forward data to, based on its
destination address. For example, Figure 1 shows a simple network consisting of 3
switches and 6 computers. Each switch connects a different sub-network. Each
computer has an address consisting of a network number followed by a computer
number (e.g. computer A is in network number 1 and has computer number 2). The
routing table is shown for switch 3, and indicates where the next destination (or next
hop) should be for reaching each address on the network.
For instance, if computer C sends data to computer A, then switch 3 will first look at
the destination address (1, 2), and then look up this address in its routing table. It finds
that the next hop for this address is port 5 of the switch, and so sends the data to this
port and no other.

Figure 1 A next hop forwarding routing table
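A routing table of this kind can be sketched as a simple lookup structure. The addresses and port numbers below are assumed for illustration, in the spirit of the Figure 1 example (they are not taken from the figure itself):

```python
# Hypothetical next-hop routing table for a switch: destination addresses
# (network number, computer number) mapped to an outgoing port. The port
# assignments below are illustrative assumptions.

routing_table = {
    (1, 1): 5, (1, 2): 5,   # all of network 1 is reached via port 5
    (2, 1): 6, (2, 2): 6,   # all of network 2 is reached via port 6
    (3, 1): 1, (3, 2): 2,   # directly attached computers
}

def next_hop(destination):
    """Look up the outgoing port for a destination address."""
    return routing_table[destination]

# Data addressed to computer (1, 2) is forwarded on port 5 and no other.
assert next_hop((1, 2)) == 5
```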


3.2 Routing strategies
The next question is how the data in the routing table is determined. We will now look
at some of the common strategies used to route packets in packet switching networks.
We will first survey some of the key characteristics of such strategies, and then
examine some specific routing strategies.
3.2.1 Characteristics of routing strategies
The primary function of a packet switching network is to accept packets from a source
station and deliver them to a destination station. To accomplish this a route through
the network must be established. Often, more than one route is possible. Thus, the
best route must be determined. There are a number of requirements that this decision
should take into account:
Correctness
Simplicity
Robustness
Stability
Fairness
Optimality
Efficiency
The first two requirements are straightforward: correctness means that the route must
lead to the correct destination; and simplicity means that the algorithm used to make
the decision should not be too complex. Robustness has to do with the ability of the
network to cope with network failures and overloads. Ideally, the network should
react to such failures without losing packets and without breaking virtual circuits.
Stability means that the network should not overreact to such failures: the
performance of the network should remain reasonably stable over time. A tradeoff
exists between fairness and optimality. The optimal route for one packet is the shortest
route (measured by some performance criterion). However, giving one packet its
optimal route may adversely affect the delivery of other packets. Fairness means that
overall most packets should have a reasonable performance. Finally, any routing
strategy involves some overheads and processing to calculate the best routes.
Efficiency means that the benefits of these overheads should outweigh their cost.
3.2.2 Elements of routing strategies
With these requirements in mind, we are now in a position to assess the various
design elements that contribute to a routing strategy. Table 1 lists these elements.
Performance criteria
    Number of hops
    Cost: delay/throughput
Decision time
    Packet (CL network)
    Session (CO network)
Decision place
    Each node (distributed)
    Central node (centralised)
    Originating node (source)
Network information source
    None
    Local
    Adjacent node
    Nodes along route
    All nodes
Network information update timing
    Continuous
    Periodic
    Major load change
    Topology change

Table 1 Elements of routing strategies for packet switching networks


The selection of a route is generally based on some performance criterion. The
simplest criterion to use is the smallest number of hops between the source and
destination. A hop generally refers to a journey between two network nodes. A
network node could be a computer, router, or other network device. A slightly more
advanced technique is to assign a cost to each link in the network. A shortest path
algorithm can then be used to calculate the lowest cost route. For example, in Figure
2, there are 5 network devices. The weighted edges between them represent the costs
of the connections. To send data from device 1 to device 5, the shortest path is via
devices 3 and 4. However, the path with the fewest hops would be via device 3 only.
The cost used could be related to the throughput (i.e. speed) of the link, or related to
the current queueing delay on the link.
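The distinction between the lowest-cost route and the fewest-hop route can be checked by brute force on a small graph. The weights below are assumptions chosen so that, as in the Figure 2 scenario, the lowest-cost path from device 1 to device 5 runs via devices 3 and 4, while the fewest-hop path runs via device 3 only:

```python
# A weighted graph over five devices. Edge costs are illustrative; they are
# not taken from Figure 2 itself.
graph = {
    1: {2: 4, 3: 1},
    2: {1: 4, 4: 4},
    3: {1: 1, 4: 1, 5: 5},
    4: {2: 4, 3: 1, 5: 1},
    5: {3: 5, 4: 1},
}

def all_paths(node, goal, visited=()):
    """Enumerate every simple path from node to goal (fine for tiny graphs)."""
    if node == goal:
        yield visited + (node,)
        return
    for nxt in graph[node]:
        if nxt not in visited + (node,):
            yield from all_paths(nxt, goal, visited + (node,))

paths = list(all_paths(1, 5))
cost = lambda p: sum(graph[a][b] for a, b in zip(p, p[1:]))

assert min(paths, key=cost) == (1, 3, 4, 5)   # lowest total cost (1 + 1 + 1 = 3)
assert min(paths, key=len) == (1, 3, 5)       # fewest hops (2 hops, cost 6)
```

The two criteria select different routes: a shortest-path algorithm minimising cost prefers three cheap links over one expensive one.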


Figure 2 A weighted graph illustrating network connectivity


Two key characteristics of the routing decision are when and where it is made. The
decision time is determined by whether we are using a CO network or a CL network.
For CL networks the route is established independently for each packet. For CO
networks the route is established once at the time the virtual circuit is set up. The
decision place refers to which node(s) are responsible for the routing decision. The
most common technique is distributed routing, in which each node has the
responsibility to forward each packet as it arrives. For centralised routing all routing
decisions are made by a single designated node. The danger of this approach is that if
this node is damaged or lost the operation of the network will cease. In source routing,
the routing path is established by the node that is sending the packet.
Almost all routing strategies will make their routing decisions based upon some
information about the state of the network. The network information source refers to
where this information comes from, and the network information update timing refers
to how often this information is updated. Local information means just using
information from outgoing links from the current node. An adjacent information
source means any node which has a direct connection to the current node. The update
timing of a routing strategy can be continuous (updating all the time), periodic (every
t seconds), or occur when there is a major load or topology change.
3.2.3 Examples of routing strategies
Now that we are familiar with some of the characteristics and elements of routing
strategies, we will examine some specific examples.
3.2.3.1 Fixed routing
In fixed routing, a single, permanent route is established for each source-destination
pair in the network. We say in this case that the routing table of each network device
is static, i.e. it will not change once assigned. These routes can be calculated using a
shortest path algorithm based on some cost criterion, or for simple networks they can
be assigned manually by the network administrator.
Fixed routing is a simple scheme, and it works well in a reliable network with a stable
load. However, it does not respond to network failures, or changes in network load
(e.g. congestion).
3.2.3.2 Flooding
Another simple routing technique is flooding. This technique requires no network
information at all, and works as follows. A packet is sent by the source to each of its
adjacent nodes. At each node, incoming packets are retransmitted to every outgoing
link apart from the one on which they arrived. If/when a duplicate packet arrives at a
node, it is discarded. This identification is made possible by attaching a unique
identifier to each packet.
With flooding, all possible routes between the source and the destination are tried.
Therefore so long as a path exists at least one packet will reach the destination. This
means that flooding is a highly robust technique, and is sometimes used to send
emergency information. Furthermore, at least one packet will have used the least cost
route. This can make it useful for initialising routing tables with least cost routes.
Another property of flooding is that every node on the network will be visited by a
packet. This means that flooding can be used to propagate important information on
the network, such as routing tables.
A major disadvantage of flooding is the high network traffic that it generates. For this
reason it is rarely used on its own, but as described above it can be a useful technique
when used in combination with other routing strategies.
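The flooding rule described above can be sketched in a few lines. The topology and node names are illustrative; in a real network the duplicate check would be keyed by the unique packet identifier mentioned above:

```python
# Flooding sketch: each node retransmits an incoming packet on every link
# except the one it arrived on, and discards duplicates it has already seen.
# The topology below is an illustrative assumption.

from collections import deque

links = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def flood(source):
    """Return the set of nodes visited by one flooded packet."""
    seen = {source}                       # nodes that already saw this packet
    queue = deque((source, n) for n in links[source])
    while queue:
        came_from, node = queue.popleft()
        if node in seen:                  # duplicate packet: discard it
            continue
        seen.add(node)
        for nxt in links[node]:
            if nxt != came_from:          # never send back on the arrival link
                queue.append((node, nxt))
    return seen

assert flood("A") == {"A", "B", "C", "D"}   # every node is visited
```

Because every node is reached, the same mechanism can carry network-wide information such as routing tables.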
3.2.3.3 Random routing
Random routing has the simplicity and robustness of flooding with far less traffic
load. With random routing, instead of each node forwarding packets to all outgoing
links, the node selects only one link for transmission. This link is chosen at random,
excluding the link on which the packet arrived. Often the decision is completely
random, but a refinement of this technique is to apply a probability to each link. This
probability could be based on some performance criterion, such as throughput.
Like flooding, random routing requires the use of no network information. The traffic
generated is much reduced compared to flooding. However, unlike flooding, random
routing is not guaranteed to find the shortest route from the source to the destination.
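The random selection, including the refinement of weighting each link by a performance criterion, can be sketched as follows. Link names and weights are illustrative assumptions:

```python
# Random routing sketch: forward on one randomly chosen outgoing link,
# excluding the link the packet arrived on. Optional weights stand in for a
# performance criterion such as throughput (all values assumed).

import random

def choose_link(out_links, arrived_on=None, weights=None):
    """Pick one outgoing link at random, never the arrival link."""
    candidates = [l for l in out_links if l != arrived_on]
    if weights:
        w = [weights[l] for l in candidates]
        return random.choices(candidates, weights=w, k=1)[0]
    return random.choice(candidates)

# A packet that arrived on link B is forwarded on C or D, favouring the
# higher-throughput link C.
link = choose_link(["B", "C", "D"], arrived_on="B", weights={"C": 10, "D": 1})
assert link in ("C", "D")
```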
3.2.3.4 Adaptive routing
In almost all packet switching networks some form of adaptive routing is used. The
term adaptive routing means that the routing decisions that are made change as
conditions on the network change. The two principal factors that can influence
changes in routing decisions are failure of a node or a link, and congestion (if a
particular link has a heavy load it is desirable to route packets away from that link).
For adaptive routing to be possible, information about the state of the network must
be exchanged among the nodes. This has a number of disadvantages. First, the routing
decision is more complex, thus increasing the processing overheads at each node.
Second, the information that is used may not be up-to-date. To get up-to-date
information requires the continuous exchange of routing information between
nodes, thus increasing network traffic. Therefore there is a tradeoff between quality of
information and network traffic overheads. Finally, it is important that an adaptive
strategy does not react too slowly or too quickly to changes. If it reacts too slowly it
will not be useful. But if it reacts too quickly it may result in an oscillation, in which
all network traffic makes the same change of route at the same time.
However, despite these dangers, adaptive routing strategies generally offer real
benefits in performance, hence their popularity. Two examples of adaptive routing
strategies are distance-vector routing and link-state routing.
Distance-vector routing
Using the distance-vector technique, network devices periodically exchange
information about their routing tables. The exchange of information typically takes
place every 30 seconds, is two-way and consists of the entire routing table. The
routing table contains a list of destinations, together with the corresponding next hop
and the distance to the destination. The measure of distance is usually simplified so
that each hop represents a distance of 1. Upon receiving the routing table from a
neighbouring device, each device will compare the information it receives with its
own routing table and update it if necessary. The distance vector technique is simple
to implement but it has a number of weaknesses. First, it cannot distinguish between
fast and slow connections, and second it takes time to broadcast the entire routing
table around the network.
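The table-merge step at the heart of distance-vector routing can be sketched as below, with each hop counted as distance 1 as described above. The router names and tables are illustrative assumptions:

```python
# Distance-vector update sketch: on receiving a neighbour's routing table,
# adopt any route that is shorter when reached via that neighbour.

def dv_update(my_table, neighbour, neighbour_table):
    """Merge a neighbour's table into ours; return True if anything changed.

    my_table maps destination -> (next hop, distance in hops).
    neighbour_table maps destination -> distance as seen by the neighbour.
    """
    changed = False
    for dest, dist in neighbour_table.items():
        via_neighbour = dist + 1          # one extra hop to reach the neighbour
        if dest not in my_table or via_neighbour < my_table[dest][1]:
            my_table[dest] = (neighbour, via_neighbour)
            changed = True
    return changed

# Router A knows only itself; neighbour B advertises routes to C and D.
table_a = {"A": ("A", 0)}
dv_update(table_a, "B", {"B": 0, "C": 1, "D": 2})
assert table_a["C"] == ("B", 2)   # C is two hops away, next hop is B
assert table_a["D"] == ("B", 3)
```

Note that the hop count says nothing about link speed, which is exactly the weakness described above.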
Link-state routing
In the link state technique, each network device periodically tests the speed of all of
its links. It then broadcasts this information to the entire network. Each device can
therefore construct a graph with weighted edges that represents the network
connectivity and performance (e.g. see Figure 2). The device can then use a shortest
path algorithm such as Dijkstra's algorithm to compute the best route for a packet to
take.
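A minimal sketch of Dijkstra's algorithm, as a link-state device might run it over its weighted connectivity graph. The graph and weights are assumptions in the spirit of Figure 2, not taken from the figure:

```python
# Dijkstra's shortest-path algorithm over a weighted link-state graph.

import heapq

graph = {
    1: {2: 4, 3: 1},
    2: {1: 4, 4: 4},
    3: {1: 1, 4: 1, 5: 5},
    4: {2: 4, 3: 1, 5: 1},
    5: {3: 5, 4: 1},
}

def dijkstra(source):
    """Return the lowest known cost from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]                  # priority queue of (cost, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, skip it
        for nxt, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd            # found a cheaper route to nxt
                heapq.heappush(heap, (nd, nxt))
    return dist

assert dijkstra(1)[5] == 3    # best route runs 1 -> 3 -> 4 -> 5, total cost 3
```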
4. Access methods
In networking, to access a resource is to be able to use that resource. The set of rules
that defines how a computer puts data onto the network cable and takes data from the
cable is called an access method. Once data is moving on the network, access methods
help to regulate the flow of network traffic.
If data is to be sent over the network from one user to another, or accessed from
a server, there must be some way for the data to access the cable without running into
other data (a collision). And the receiving computer must have reasonable assurance
that the data has not been destroyed in a data collision during transmission. Access
methods need to be consistent in the way they handle data. If different computers
were to use different access methods, the network would fail because some methods
would dominate the cable. Access methods prevent computers from gaining
simultaneous access to the cable. By making sure that only one computer at a time can
put data on the network cable, access methods ensure that the sending and receiving
of network data is an orderly process.
There are three major access methods: carrier-sense multiple-access, token passing
and demand priority.
4.1 Carrier sense multiple access methods
Carrier sense multiple access methods can be divided into two subtypes: carrier sense
multiple access with collision detection (CSMA/CD) and carrier sense multiple access
with collision avoidance (CSMA/CA).
4.1.1 Carrier sense multiple access with collision detection
In CSMA/CD, each computer on the network, including clients and servers, checks
the cable for network traffic. Only when a computer "senses" that the cable is free and
that there is no traffic on the cable can it send data. Once the computer has transmitted
data on the cable, no other computer can transmit data until the original data has
reached its destination and the cable is free again. Remember, if two or more
computers happen to send data at exactly the same time, there will be a data collision.
When that happens, the two computers involved stop transmitting for a random period
of time and then attempt to retransmit. Each computer determines its own waiting
period; this reduces the chance that the computers will once again transmit
simultaneously. The waiting time is calculated using an algorithm known as
exponential backoff: the first time a collision occurs, each computer waits a random
time t1, where 0 ≤ t1 ≤ d (d is a constant). If a second collision occurs with the same
packet, the wait time will be t2, where 0 ≤ t2 ≤ 2d. The third time the wait time will
be t3, where 0 ≤ t3 ≤ 4d, and so on: the maximum waiting time is doubled after each
successive collision. This will continue for a maximum of 10 doublings, when the
maximum waiting time will reach a peak of 2^10 d (= 1024d). After 16 successive
collisions, transmission of the packet is aborted and an error is reported.
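The backoff schedule can be sketched directly from that description. This is a simplified model of the scheme above, not the exact IEEE formula; the base interval d is an assumption:

```python
# Exponential backoff sketch: the waiting-time cap doubles after each of the
# first 10 collisions, then stays at 1024*d until the packet is abandoned
# after 16 successive collisions.

import random

D = 1.0   # assumed base interval d; real Ethernet derives this from the slot time

def backoff_time(collisions):
    """Random wait after the n-th successive collision; None means abort."""
    if collisions > 16:
        return None                          # give up and report an error
    cap = 2 ** min(collisions - 1, 10) * D   # cap doubles, peaking at 2^10 * d
    return random.uniform(0, cap)

assert backoff_time(1) <= D           # first collision: wait in [0, d]
assert backoff_time(12) <= 1024 * D   # the cap never exceeds 1024d
assert backoff_time(17) is None       # aborted after 16 collisions
```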
CSMA/CD is known as a contention method because computers on the network
contend, or compete, for an opportunity to send data. This might seem like an
inefficient way to put data on the cable, but current implementations of CSMA/CD
are so fast that users are not even aware they are using a contention access method.
With CSMA/CD, the more computers there are on the network, the more network
traffic there will be. With more traffic, collisions tend to increase, which slows the
network down, so CSMA/CD can be a slow-access method. After each collision, both
computers will have to try to retransmit their data. If the network is very busy, there is
a chance that the attempts by both computers will result in collisions with packets
from other computers on the network. If this happens, four computers (the two
original computers and the two computers whose transmitted packets collided with
the original computer's retransmitted packets) will have to attempt to retransmit.
These retransmissions can slow the network to a near standstill.

4.1.2 Carrier sense multiple access with collision avoidance
CSMA/CA is the least popular of the major access methods. In CSMA/CA, each
computer signals its intent to transmit before it actually transmits data. In this way,
computers sense when a collision might occur; this allows them to avoid transmission
collisions. Unfortunately, broadcasting the intent to transmit data increases the amount
of traffic on the cable and slows down network performance.
CSMA/CA is not commonly used in wired networks, but it has become the standard
for wireless networking. We will return to wireless networking standards later in this
handout.
4.1.3 Collision domains
A collision domain is a part of a LAN (or an entire LAN) where two computers
transmitting at the same time will cause a collision. Because switches, bridges and
routers do not forward unnecessary packets, the different ports of these devices operate
in different collision domains. Repeaters and hubs broadcast all packets to all ports, so
their ports are in the same collision domain.

Figure 3 Illustration of collision domains


Figure 3 shows a simple network with one repeater (R), two hubs, a switch and 10
computers (C). Because hubs broadcast all packets to all ports, if computers 2 and 4
attempted to send at the same time there would be a collision, hence they are in the
same collision domain. However, because a switch will only forward a packet if it is
intended for the other subnet, every port of the switch is in a separate collision
domain. So if computer 2 tried to send to computer 4 at the same time as computer 7
tried to send to computer 10, there would be no collision.
4.2 Token passing
In Handout 1 we briefly discussed a type of network known as a token ring network.
Token ring LANs use the token passing network access method. In token passing, a
special type of packet, called a token, circulates around a cable ring from computer to
computer. When any computer on the ring needs to send data across the network, it
must wait for a free token. When a free token is detected, the computer will take
control of it if the computer has data to send. The computer can now transmit data.
Data is transmitted in frames, which consist of the data to be sent, plus some
additional information, such as addressing.
While the token is in use by one computer, other computers cannot transmit data.
Because only one computer at a time can use the token, no contention and no collision
take place, and no time is spent waiting for computers to resend tokens due to network
traffic on the cable.
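The token-passing rule can be sketched as a simple simulation. The station names and pending data are illustrative assumptions, and the token is modelled only by the order in which stations are visited:

```python
# Token-passing sketch: a token circulates around the ring, and only the
# station currently holding the free token may transmit. No contention and
# no collisions can occur.

stations = ["A", "B", "C", "D"]                       # ring order (assumed)
wants_to_send = {"A": None, "B": "hello", "C": None, "D": "world"}

def circulate(rounds=1):
    """Pass the token around the ring; record who transmits, in order."""
    log = []
    for _ in range(rounds):
        for station in stations:          # the token visits each station in turn
            data = wants_to_send[station]
            if data is not None:          # token is free: seize it and transmit
                log.append((station, data))
                wants_to_send[station] = None
    return log

# Each station with pending data transmits exactly once, in ring order.
assert circulate() == [("B", "hello"), ("D", "world")]
```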

4.3 Demand priority


Demand priority is a relatively new access method. Figure 4 shows a demand-priority
network. Hubs or repeaters manage network access by searching for requests to send
data from all nodes on the network. The hub is responsible for noting all addresses
and links and verifying that they are all functioning. As in CSMA/CD, two computers
using the demand-priority access method can cause contention by transmitting at
exactly the same time. However, with demand priority, it is possible to implement a
scheme in which certain types of data will be given priority if there is contention. If
the hub or repeater receives two requests at the same time, the highest priority request
is serviced first. If the two requests are of the same priority, both requests are serviced
by alternating between the two.
In a demand-priority network, there is communication only between the sending
computer, the hub, and the destination computer. This is more efficient than
CSMA/CD, which broadcasts transmissions to the entire network.
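The hub's arbitration can be sketched as follows. The station names and numeric priorities (higher number wins) are illustrative assumptions:

```python
# Demand-priority sketch: a hub services the highest-priority request first,
# and alternates between requests of equal priority. Python's sort is stable,
# so sorting on priority alone preserves the alternating arrival order for
# ties.

def service_order(requests):
    """requests: list of (station, priority) pairs; return service order."""
    return [station for station, priority in
            sorted(requests, key=lambda r: -r[1])]

# Two simultaneous requests: real-time video outranks a bulk file transfer.
order = service_order([("file-server", 1), ("video-camera", 2)])
assert order == ["video-camera", "file-server"]
```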


Figure 4 A demand priority network


4.4 Access methods summary
The following table summarises the major features of each access method:
Feature/function        CSMA/CD           CSMA/CA           Token passing    Demand priority

Type of communication   Broadcast-based   Broadcast-based   Token-based      Hub-based
Type of access method   Contention        Contention        Non-contention   Contention

5. Common network architectures


The term network architecture refers to the combination of network topology,
communication method, hardware components and access method used to construct a
particular network. Over the years a number of network architectures have become
very popular.
5.1 Token ring
The token ring architecture was developed in the mid-1980s by IBM. It is the
preferred method of networking by IBM and is therefore found primarily in large
IBM mini- and mainframe installations.
We introduced the token ring architecture in Handout 1. The table below gives a
summary of the features of token ring LANs.
Feature                  Description

Physical topology        Star
Logical topology         Ring
Type of communication    Baseband
Access method            Token passing
Transfer speeds          4-16 Mbps
Cable type               STP or UTP
Hardware for token ring networks is centred on the hub, which houses the actual ring.
This combination of a logical ring and a physical star topology is sometimes referred
to as a star-shaped ring. A token ring network can have multiple hubs. STP or UTP
cabling connects the computers to the hubs. Fibre-optic cable, together with repeaters,
can be used to extend the range of token ring networks. Token ring networks are not
that commonly used these days.
5.2 The Ethernet
Ethernet has become the most popular way of networking desktop computers and is
still very commonly used today in both small and large network environments.
Standard specifications for Ethernet networks are produced by the Institute of
Electrical and Electronics Engineers (IEEE) in the USA, and there have been a large
number over the years. The original Ethernet standard used a bus topology,
transmitted at 10 Mbps, and relied on CSMA/CD to regulate traffic on the main cable
segment. The Ethernet media was passive, which means it required no power source
of its own and thus would not fail unless the media was physically cut or improperly
terminated. More recent Ethernet standards have different specifications.
Packets in Ethernet networks are referred to as frames. The format of an Ethernet
frame has remained largely the same throughout the various standards produced by
the IEEE, and is shown below.

ETHERNET FRAME FORMAT

Preamble   SFD      Destination   Source    Length    Data            FCS
                    Address       Address
7 bytes    1 byte   6 bytes       6 bytes   2 bytes   46-1500 bytes   4 bytes

Each frame begins with a 7-byte preamble. Each byte has the identical pattern
10101010, which is used to help the receiving computer synchronise with the sender.
This is followed by a 1-byte start frame delimiter (SFD), which has the pattern
10101011. Next are the source and destination addresses, which take up 6 bytes each.
The data can be of variable length (46-1500 bytes), so before the data itself there is a
2-byte field that indicates the length of the following data field. Finally there is a
4-byte frame check sequence, used for cyclic redundancy checking. Therefore the
minimum and maximum lengths of an Ethernet frame are 72 bytes and 1526 bytes
respectively.
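Those minimum and maximum frame lengths follow directly from the field sizes, which a short calculation confirms:

```python
# Frame-length arithmetic from the Ethernet field sizes: preamble (7) + SFD (1)
# + destination address (6) + source address (6) + length field (2) + FCS (4),
# plus a data field of 46-1500 bytes.

OVERHEAD = 7 + 1 + 6 + 6 + 2 + 4     # every non-data field, in bytes (= 26)

def frame_length(data_bytes):
    """Total frame size for a given (padded) data field size."""
    assert 46 <= data_bytes <= 1500, "data field must be 46-1500 bytes"
    return OVERHEAD + data_bytes

assert frame_length(46) == 72        # minimum Ethernet frame
assert frame_length(1500) == 1526    # maximum Ethernet frame
```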
Although there have been a number of different standards for the Ethernet architecture
over the years, a number of features have remained the same. The table below
summarises the general features of Ethernet LANs.

Feature                  Description

Traditional topology     Linear bus
Other topologies         Star bus
Type of communication    Baseband
Access method            CSMA/CD
Transfer speeds          10/100/1000 Mbps
Cable type               Thicknet/thinnet coaxial or UTP
The first phase of Ethernet standards had a transmission speed of 10Mbps. Three of
the most common of these are known as 10Base2, 10Base5 and 10BaseT. The
following table summarises some of the features of each specification.

ETHERNET STANDARDS

                           10Base2            10Base5             10BaseT
Topology                   Bus                Bus                 Star bus
Cable type                 Thinnet coaxial    Thicknet coaxial    UTP (Cat. 3 or higher)
Simplex/half/full duplex   Half duplex        Half duplex         Half duplex
Data encoding              Manchester,        Manchester,         Manchester,
                           asynchronous       asynchronous        asynchronous
Connector                  BNC                DIX or AUI          RJ45
Max. segment length        185 metres         500 metres          100 metres

Note that although the 10BaseT standard uses a physical star-bus topology, it still uses
a logical bus topology. This combination is sometimes referred to as a star-shaped
bus. In addition to these three, a number of standards existed for use with fibre-optic
cabling, namely 10BaseFL, 10BaseFB and 10BaseFP.
The next phase of Ethernet standards was known as fast Ethernet, and increased
transmission speed up to 100Mbps. Fast Ethernet is probably the most common
standard in use today. The Manchester encoding technique used in the original
Ethernet standards is not well suited to high-frequency transmission, so new encoding
techniques were developed for fast Ethernet networks. Three of the most common fast
Ethernet standards are summarised below, although others do exist (e.g. 100BaseT2).

FAST ETHERNET STANDARDS

                       100BaseT4                100BaseTX                100BaseFX
Topology               Star bus                 Star bus                 Star bus
Cable type             UTP (Cat. 3 or higher)   UTP (Cat. 5 or higher)   Fibre-optic
Connector              RJ45                     RJ45                     SC, ST or FDDI MIC
Max. segment length    100 metres               100 metres               2000 metres
Communication type     Half duplex              Full duplex              Full duplex

The most recent phase of Ethernet standards has increased transmission speeds up to
1000Mbps, although sometimes at the expense of some other features, such as

maximum segment length. Because of the transmission speed, it has become known
as Gigabit Ethernet, and the most common standards are summarised below.
                     GIGABIT ETHERNET STANDARDS
                     1000BaseT           1000BaseCX     1000BaseSX    1000BaseLX
Topology             Star bus            Star bus       Star bus      Star bus
Cable type           UTP                 Twinax         Fibre-optic   Fibre-optic
                     (Cat. 5 or higher)  (shielded
                                         copper wire)
Connector            RJ45                HSSC           SC            SC
Max. segment length  100m                25m            275m          316-550m
Communication type   Full duplex         Full duplex    Full duplex   Full duplex

Finally, the IEEE has also published a number of standards for wireless Ethernet
networks. The original standard, known as 802.11, was very slow (around 2Mbps) and
was quickly superseded by more efficient standards; 802.11 now usually refers to the
family of standards that followed this original one.

                        WIRELESS ETHERNET STANDARDS
                        802.11b    802.11a    802.11g
Max. speed              11Mbps     54Mbps     54Mbps
Ave. speed              4.5Mbps    20Mbps     20Mbps
Max. distance outdoors  120m       30m        30m
Max. distance indoors   60m        12m        20m
Broadcast frequency     2.4GHz     5GHz       2.4GHz

The CSMA/CA access method has become the standard access method for use in
wireless networking.
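The collision-avoidance idea can be illustrated with a toy sketch: before transmitting, each waiting station picks a random backoff delay, so that two stations that become ready at the same time are unlikely to start transmitting in the same instant. The station names, slot counts and function names below are invented for the example; this is a simplification, not the full 802.11 procedure.

```python
import random

# Toy illustration of the random-backoff idea behind CSMA/CA.

def choose_backoffs(stations, slots=8, rng=random):
    """Each waiting station picks a random backoff slot number."""
    return {s: rng.randrange(slots) for s in stations}

def next_transmitter(backoffs):
    """Stations count down together; the lowest backoff transmits first.
    If two stations pick the same smallest slot they would still collide
    and must back off again -- CSMA/CA reduces collisions, it cannot
    eliminate them entirely."""
    winner = min(backoffs, key=backoffs.get)
    contenders = [s for s, b in backoffs.items() if b == backoffs[winner]]
    return winner if len(contenders) == 1 else None   # None = collision

backoffs = choose_backoffs(["A", "B", "C"])
print(backoffs, "->", next_transmitter(backoffs))
```

With 8 slots and 3 stations, most rounds produce a single clear winner; the occasional tie models the residual collisions that the acknowledgement mechanism must still detect.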

Summary of Key Points

- Data on a network is not sent in one continuous stream; it is divided into
  smaller, more manageable packets. These packets of data make timely
  interaction and communication on a network possible
- Communication over a network using packets is known as packet switching
- Circuit switching is where communication uses a direct, dedicated connection
  for the duration of the transmission
- In connection-oriented (CO), or virtual circuit, packet switching, all packets
  follow the same predetermined route through the network
- In connectionless (CL) packet switching, no such route exists, and each
  packet is routed independently
- Packets of data contain extra control information to ensure that the data
  reaches its destination and can be reconstructed upon arrival. All packets
  contain at least the source address, the data and the destination address
- Routing is the process of determining the best route through a network for a
  packet to take
- Routing tables are used by network nodes to determine the outgoing link on
  which to send a packet
- Next-hop forwarding refers to a scheme in which each network node only
  knows the next location to send a packet to
- When designing a routing strategy, the following desirable characteristics
  should be considered: correctness, simplicity, robustness, stability, fairness,
  optimality and efficiency
- In fixed routing, a static routing table (i.e. one that will not change) is used by
  each network device
- Flooding is a robust routing strategy that is guaranteed to find an optimal
  route to the destination
- Random routing is a routing strategy that provides the robustness of flooding,
  but reduces the network traffic generated
- Adaptive routing strategies, such as distance vector routing and link state
  routing, try to adapt routing information based on changes to the network
- The set of rules that governs how network traffic is controlled is called the
  access method
- When using the CSMA/CD access method, a computer waits until the network
  is quiet and then transmits its data. If two computers transmit at the same time,
  the data packets will collide, both will be destroyed, and the data will have to
  be re-sent
- The ports of switches, bridges and routers are on separate collision domains,
  whereas those of hubs and repeaters are in the same collision domain
- When using the CSMA/CA access method, a computer broadcasts its intent to
  transmit before actually sending the data
- When using the token-passing access method, each computer must wait to
  receive the token before it can transmit data. Only one computer at a time can
  use the token
- When using the demand-priority access method, each computer communicates
  only with a hub. The hub then controls the flow of data
- The term network architecture refers to the combination of physical/logical
  topology, communication method, physical hardware and access method
  chosen to implement a network
- Ethernet and token ring are two of the most popular network architectures
- There have been many standards published by the IEEE for Ethernet
  networks: the original standards had a transmission speed of 10Mbps; fast
  Ethernet has a speed of 100Mbps; and Gigabit Ethernet has a speed of
  1000Mbps
- Wireless networking is becoming increasingly popular; the three most common
  wireless Ethernet standards are known as 802.11b, 802.11a and 802.11g
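The routing-table and next-hop forwarding ideas summarised above can be sketched in a few lines. Each node holds only a table mapping a destination to the neighbour to forward to; no node knows the full path. The node names, topology and function name below are invented for illustration.

```python
# Minimal sketch of next-hop forwarding: every node knows only the
# next hop towards each destination, never the complete route.

routing_tables = {
    "A": {"B": "B", "C": "B", "D": "D"},   # from A, reach C via B
    "B": {"A": "A", "C": "C", "D": "A"},
    "D": {"A": "A", "B": "A", "C": "A"},
}

def forward(source, destination):
    """Follow next-hop entries until the packet reaches its destination."""
    path, node = [source], source
    while node != destination:
        node = routing_tables[node][destination]   # consult this node's table only
        path.append(node)
    return path

print(forward("A", "C"))   # ['A', 'B', 'C']
```

A fixed-routing scheme would leave these tables static; an adaptive scheme such as distance vector routing would update the entries as links change.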
