HUAWEI TECHNOLOGIES CO., LTD.

Foreword
Enterprise business requirements highlight a need for networks that are
capable of adapting to ever-changing business demands, in terms of both
enterprise business growth and evolving services. It is therefore
imperative to understand the principles of what constitutes an enterprise
network, and how it is formed and adapted to support real-world business
demands.

Copyright 2014 Huawei Technologies Co., Ltd. All rights reserved.

Objectives
Upon completion of this section, trainees will be able to:
Explain what constitutes an enterprise network.
Describe the common enterprise network architecture types.
Describe some of the solutions commonly implemented within an
enterprise network to support business operations.


The enterprise network originally represented the interconnection of systems
belonging to a given functional group or organization, primarily to enable the
sharing of resources such as printers and file servers, communication support
through means such as email, and the evolution towards applications that
enable collaboration between users. Enterprise networks can be found today
within various industries, from office environments to larger energy,
finance and government-based industries, which often comprise enterprise
networks that span multiple physical locations.
The introduction of the Internet as a public network domain allowed the
existing enterprise network to be extended, so that geographically dispersed
networks belonging to a single organization or entity could be connected.
This brought with it a new set of challenges: establishing interconnectivity
between geographically dispersed enterprise networks, whilst maintaining the
privacy and security of data belonging to an individual enterprise.

Various challenges impact today's industries in providing solutions for
establishing interconnectivity between remote locations, which often take
the form of regional branch and head offices, as well as employees that
represent a non-fixed entity within the enterprise network, often being present
in locations beyond the conventional boundaries of the existing enterprise.
These challenges have created a demand for ubiquitous networks that
allow the enterprise network to be available from any location at any time,
ensuring access to the resources and tools that allow for the effective delivery
of support and services to industry partners and customers.
The evolution in enterprise solutions has enabled public and third-party IP
networks to provide this anywhere, anytime connectivity, along with the
development of technologies that establish private network connections over
this public network infrastructure. These extend the remote capabilities of the
enterprise network beyond its physical boundaries, allowing remote offices
and users alike to form a single enterprise domain that spans a large
geographic expanse.

Enterprise network architecture solutions vary significantly depending on the
requirements of the industry and the organization. Smaller enterprise
businesses often have very limited requirements in terms of complexity
and demand, opting to implement a flat form of network, mainly because the
organization is often restricted to a single geographical location or a few
sites, supporting access to common resources while enabling flexibility
within the organization to support a smaller number of users. The cost to
implement and maintain such networks is significantly reduced; however, the
network is often susceptible to failure due to a lack of redundancy, and
performance may vary based on daily operations and network demand.
Larger enterprise networks implement solutions to ensure minimal network
failure, controlled access, and provision for a variety of services to support
the day-to-day operations of the organization. A multi-layered architecture is
defined to optimize traffic flow, apply policies for traffic management and
controlled access to resources, and maintain network availability and
stable operation through effective network redundancy. The multi-layered
design also enables easy expansion and, together with a modular design,
provides for effective isolation and maintenance should problems in the
network occur, without impacting the entire network.

1. Small enterprise networks that implement a flat network architecture may
limit the capability to scale the network in the event of growth in the number of
users. Where it is expected that a larger number of users will need to be
supported, a hierarchical approach to enterprise networks should be
considered. Medium-sized networks will generally support a greater number
of users, and therefore will typically implement a hierarchical network
infrastructure to allow the network to grow and support the required user
base.
2. Small and medium-sized enterprise networks must take into account the
performance of the network as well as provide redundancy in the event of
network failure, in order to maintain service availability to all users. As the
network grows, the threat to the security of the network also increases, which
may also hinder services.

Thank you
www.huawei.com

---

Foreword
Establishment of an enterprise network requires a fundamental
understanding of general networking concepts. These concepts include
knowledge of what defines a network, as well as the general standards of
technology and physical components that are used to establish enterprise
networks. An understanding of the underlying network communications,
and the impact that such behavior has on the network, is also paramount
to ensuring an effective and performant implementation.


Objectives
Upon completion of this section, you will be able to:
Explain what constitutes a network.
Identify the basic components of a network.
Describe the primary mechanisms for communication over a network.


A network can be understood as the capability of two or more entities to
communicate over a given medium. The development of any network relies
on this same principle for establishing communication. Commonly, the entities
within a network that are responsible for the transmission and reception of
communication are known as end stations, while the means by which
communication is enabled is understood to be the medium. Within an
enterprise network, the medium exists in a variety of forms, from physical
cables to radio waves.

The coaxial cable represents a more historic form of transmission medium
that today sees limited usage within the enterprise network. As a
transmission medium, the coaxial cable generally comprises two standards,
the 10Base2 and 10Base5 forms, known as Thinnet (or Thinwire) and
Thicknet (or Thickwire) respectively.
Both standards support a transmission capacity of 10 Mbps, transmitted as
baseband signals, over respective distances of 185 and 500 meters. In today's
enterprise networks, this transmission capacity is too limited to be of
any significant application. The Bayonet Neill-Concelman (BNC) connector is
the common form of connector used for thin 10Base2 coaxial cables, while a
type N connector was applied to the thicker 10Base5 transmission medium.

Ethernet cabling has become the standard for many enterprise networks,
providing a transmission medium that supports a much higher transmission
capacity. The medium consists of four copper wire pairs contained within a
sheath, which may or may not be shielded against external electrical
interference. The transmission capacity is determined mainly by the
category of cable, with category 5 (CAT5) supporting a Fast Ethernet
transmission capacity of up to 100 Mbps, while the higher Gigabit Ethernet
transmission capacity is supported from the category 5 enhanced (CAT5e)
standard and higher.
Transmission over Ethernet as a physical medium is also susceptible to
attenuation, causing the transmission range to be limited to 100 meters. The
RJ-45 connector is used to provide connectivity, with wire pair cabling
requiring a specific pin ordering within the RJ-45 connector to ensure correct
transmission and reception by end stations over the transmission medium.

Optical media uses light as the means of signal transmission, as opposed to
the electrical signals found within both Ethernet and coaxial media types. The
optical fiber medium supports a range of transmission standards: 10 Mbps,
100 Mbps, 1 Gbps and also 10 Gbps (10GBASE). Single-mode or multi-mode
fiber defines how the optical transmission medium propagates light:
single mode refers to a single mode of optical transmission being
propagated, and is commonly used for high-speed transmission over long
distances.
Multi-mode fiber supports the propagation of multiple modes of optical
transmission, which are susceptible to attenuation as a result of the dispersion
of light along the optical medium, and is therefore not capable of supporting
transmission over longer distances. This mode is often applied to local area
networks, which encompass a much smaller transmission range. There are an
extensive number of fiber connector standards, with some of the more
common forms being the ST connector, the LC connector and the SC, or
snap, connector.

Serial represents a standard initially developed over 50 years ago to support
reliable transmission between devices, during which time many evolutions of
the standard have taken place. The serial connection is designed to support
the transmission of data as a serial stream of bits. The common standard
implemented is referred to as RS-232 (Recommended Standard 232), but it is
somewhat limited in both distance and speed. The original RS-232 standard
defines that supported communication speeds be no greater than 20 kbps,
based on a cable length of 50 ft (15 meters); transmission speeds for serial
today, however, are unlikely to be lower than 115 kbps. The general behavior
for serial means that as the length of the cable increases, the supported bit
rate will decrease; as an approximation, at a cable length of around 150
meters, or 10 times that of the original standard, the supported bit rate will
be halved.
Other serial standards have the capability to achieve much greater
transmission ranges, such as the RS-422 and RS-485 standards that span
distances of up to 4900 ft (1200 meters). These are often supported by V.35
connectors, which were made obsolete during the late 1980s but are still
often found and maintained today in support of technologies such as Frame
Relay and ATM, where implemented. RS-232 itself does not define connector
standards; however, two common forms of connector that support the RS-232
standard are the DB-9 and DB-25 connectors. Newer serial standards have
been developed to replace much of the existing RS-232 serial technology,
including both FireWire and the Universal Serial Bus (USB) standards, the
latter of which has become commonplace in many newer products and
devices.
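The rule of thumb above (the bit rate roughly halving for each tenfold increase in cable length beyond the original 15 m / 20 kbps baseline) can be sketched as a small estimator. This is an illustrative model of the text's approximation only; the function name, baseline values and the logarithmic interpolation between decades are assumptions, not part of any RS-232 specification.

```python
import math

def rs232_rate_estimate(cable_length_m: float,
                        base_rate_bps: int = 20_000,
                        base_length_m: float = 15.0) -> float:
    """Rule-of-thumb estimate: each tenfold increase in cable length
    beyond the original RS-232 baseline (20 kbps at 15 m) roughly
    halves the supported bit rate. Illustrative only."""
    if cable_length_m <= base_length_m:
        return float(base_rate_bps)
    # Number of "decades" (powers of ten) beyond the baseline length.
    decades = math.log10(cable_length_m / base_length_m)
    return base_rate_bps / (2 ** decades)

print(rs232_rate_estimate(15))    # baseline length -> full 20 kbps
print(rs232_rate_estimate(150))   # 10x the length -> roughly half
```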

In order to enable communication over physical links, signals must be
transmitted between the transmitting and receiving stations. The signal will
vary depending on the medium being used, as in the case of optical
and wireless transmission. The main purpose of the signal is to ensure that
synchronization (or clocking) between the sender and receiver over a physical
medium is maintained, as well as to support transmission of the data signal in
a form that can be interpreted by both the sender and receiver.
A waveform is commonly recognized as a property of line encoding, where the
voltage is translated into a binary representation of 0 and 1 values that can be
interpreted by the receiving station. Various line coding standards exist, with
10Base Ethernet standards supporting a line encoding standard known as
Manchester encoding. Fast Ethernet, with a frequency range of 100 MHz,
involves a higher frequency than can be supported when using Manchester
encoding.
An alternative form of line encoding known as NRZI is therefore used, which
itself contains variations dependent on the physical media: MLT-3 is used for
100Base-TX, while 100Base-FX uses NRZI directly, together with an extended
line encoding known as 4B/5B to deal with potential clocking issues.
100Base-T4, for example, uses another form known as 8B/6T extended line
encoding. Gigabit Ethernet supports 8B/10B line encoding, with the exception
of 1000Base-T, which relies on a complex block encoding referred to as
4D-PAM5.
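The Manchester encoding used by 10 Mbps Ethernet can be sketched in a few lines. This is a minimal illustration of the principle, assuming the IEEE 802.3 convention (a 1 is a low-to-high mid-bit transition, a 0 a high-to-low transition); it also makes visible why each data bit costs two signal half-periods, which is the inefficiency that pushed Fast Ethernet towards 4B/5B-based encodings.

```python
def manchester_encode(bits):
    """Manchester-encode a bit sequence using the IEEE 802.3
    convention: a 1 is sent as a low-to-high transition (0, 1) at
    mid-bit, and a 0 as a high-to-low transition (1, 0). Every data
    bit therefore occupies two signal half-periods, guaranteeing a
    transition per bit for receiver clock recovery."""
    signal = []
    for bit in bits:
        signal.extend((0, 1) if bit else (1, 0))
    return signal

print(manchester_encode([1, 0, 1]))  # -> [0, 1, 1, 0, 0, 1]
```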

Ethernet represents what is understood to be a multi-access network, in which
two or more end stations share a common transmission medium for the
forwarding of data. The shared network is, however, susceptible to
transmission collisions where data is forwarded by end stations
simultaneously over a common medium. A segment where such occurrences
are possible is referred to as a shared collision domain.
End stations within such a collision domain rely on contention for the
transmission of data to an intended destination. This contention-based
behavior requires that each end station monitor for incoming data on the
segment before making any attempt to transmit, in a process referred to as
Carrier Sense Multiple Access with Collision Detection (CSMA/CD). However,
even after taking such precautions, the potential for collisions as a result of
simultaneous transmission by two end stations remains highly probable.

Transmission modes are defined in the form of half and full duplex, and
determine the behavior involved with the transmission of data over the
physical medium.
Half duplex refers to the communication of two or more devices over a shared
physical medium in which a collision domain exists, and with it CSMA/CD is
required to detect such collisions. This begins with the station listening for
the reception of traffic on its own interface; where the medium is quiet for a
given period, the station will proceed to transmit its data. If a collision occurs,
transmission ceases, followed by the initiation of a backoff algorithm that
prevents further transmissions until a random value timer expires, after which
retransmission can be reattempted.
Full duplex defines simultaneous bidirectional communication over
dedicated point-to-point wire pairs, ensuring that there is no potential for
collisions to occur, and thus no requirement for CSMA/CD.
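The "random value timer" mentioned above is conventionally implemented as truncated binary exponential backoff. The sketch below shows the standard slot-selection rule; the function name and the 1024-slot cap expressed as `max_exponent = 10` follow the classic Ethernet scheme, and this is an illustration rather than a full MAC-layer implementation.

```python
import random

def backoff_slots(collision_count: int, max_exponent: int = 10) -> int:
    """Truncated binary exponential backoff as used by CSMA/CD:
    after the n-th successive collision on a frame, a station waits
    a random number of slot times chosen uniformly from
    0 .. 2**min(n, 10) - 1 before attempting retransmission."""
    exponent = min(collision_count, max_exponent)
    return random.randrange(2 ** exponent)

# After a first collision the wait is 0 or 1 slots; the window
# doubles with each further collision, capped at 1024 slots.
print(backoff_slots(1), backoff_slots(3), backoff_slots(16))
```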

1. Gigabit Ethernet transmission is supported by CAT5e cabling and higher,
and also any form of 1000Base fiber optic cabling or greater.
2. A collision domain is a network segment in which the same physical
medium is used for bi-directional communication. Data simultaneously
transmitted between hosts on the same shared network medium is
susceptible to a collision of signals before those signals reach the
intended destination. This generally results in malformed signals, either
larger or smaller than the acceptable size for transmission (64 to 1500
bytes), also known as runts and giants, being received by the recipient.
3. CSMA/CD is a mechanism for detecting and minimizing the possibility of
collision events that are likely to occur in a shared network. CSMA
requires that the transmitting host first listen for signals on the shared
medium prior to transmission. In the event that no transmissions are
detected, transmission can proceed. In the circumstance that signals are
transmitted simultaneously and a collision occurs, collision detection
processes are applied to cease transmission for a locally generated
period of time, to allow collision events to clear and to avoid further
collisions from occurring between transmitting hosts.
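The runt/giant classification in note 2 can be sketched as a simple size check. As an assumption not stated in the text, the upper bound here is expressed as the classic 1518-byte total frame size (1500-byte payload plus 14-byte header and 4-byte FCS, excluding VLAN tags); the function name is illustrative.

```python
def classify_frame_size(length_bytes: int) -> str:
    """Classify a received Ethernet frame by total size. Frames below
    the 64-byte minimum are 'runts' (often collision fragments);
    frames above the classic 1518-byte maximum (1500-byte payload
    + 14-byte header + 4-byte FCS, untagged) are 'giants'."""
    if length_bytes < 64:
        return "runt"
    if length_bytes > 1518:
        return "giant"
    return "valid"

print(classify_frame_size(60), classify_frame_size(512),
      classify_frame_size(1600))
```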

---

Foreword
Transmission over a physical medium requires rules that define the
communication behavior. The management of the forwarding behavior of
Ethernet based networks is controlled through IEEE 802 standards
defined for Ethernet data link technology. A fundamental knowledge of
these standards is imperative to fully understand how link layer
communication is achieved within Ethernet based networks.


Objectives
Upon completion of this section, trainees will be able to:
Explain the application of reference models to networks.
Describe how frames are constructed.
Explain the function of MAC addressing at the data link layer.
Describe Ethernet frame forwarding and processing behavior.


Communication over networks relies on the application of rules that govern
how data is transmitted and processed, in a manner that is understood by both
the sending and receiving entities. As a result, multiple standards have been
developed over the course of time, with some becoming widely adopted.
There exists, however, a clear distinction between the standards that manage
physical data flow and the standards responsible for the logical forwarding
and delivery of traffic.
The IEEE 802 standards represent a universal standard for managing the
physical transmission of data across the physical network, and comprise
standards including the 802.3 Ethernet standard for physical transmission
over local area networks. Alternative standards exist for transmission over
wide area networks operating over serial-based media, including Frame
Relay, HDLC and legacy standards such as ATM. TCP/IP has been widely
adopted as the protocol suite defining the upper layer standards, regulating
the rules (protocols) and behavior involved in managing logical forwarding
and delivery between end stations.

The TCP/IP reference model is primarily concerned with the core principles of
the protocol suite, which can be understood as the logical transmission and
delivery of traffic between end stations. As such, the TCP/IP reference
model provides a four-layer representation of the network, summarizing
physical forwarding behavior under the network interface layer, since lower
layer operation is not the concern of the TCP/IP protocol suite.
The primary focus remains on the network (or Internet) layer, which deals with
how traffic is logically forwarded between networks, and the transport
(sometimes referred to as host-to-host) layer, which manages the end-to-end
delivery of traffic, ensuring reliability of transportation between the source and
destination end stations. The application layer represents an interface,
through a variety of protocols, that enables services to be applied to end user
application processes.

Although the TCP/IP reference model is primarily supported as the standard
model based on the TCP/IP protocol suite, it does not clearly separate and
distinguish functionality when referring to lower layer physical transmission.
In light of this, the Open Systems Interconnection (OSI) reference model is
often recognized as the model for reference to the IEEE 802 standards, due to
its clear distinction and representation of the behavior of the lower layers,
which closely matches the LAN/MAN reference model standards defined as
part of the documented IEEE 802-1990 standards for local and metropolitan
area networks. In addition, the model, which generally references the ISO
protocol suite, provides an extended breakdown of upper layer processing.

As upper layer application data is determined for transmission over a network
from an end system, a series of processes and instructions must be applied to
the data before transmission can be successfully achieved. This process of
appending and prepending instructions to data is referred to as encapsulation,
and it is this process that each layer of the reference model is designed to
represent.
As instructions are applied to the data, the overall size of the data increases.
The additional instructions represent overhead to the existing data, and are
recognized as instructions only by the layer at which they were applied.
To other layers, the encapsulated instructions are not distinguished from the
original data. The final appending of instructions is performed as part of the
lower layer protocol standards (such as the IEEE 802.3 Ethernet standard)
before the result is carried as an encoded signal over a physical medium.
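The layering described above can be sketched as nested wrapping of an opaque payload. The header contents below are placeholders invented for illustration, not real protocol formats; the point is only that each layer treats everything handed down from above as data and adds its own instructions around it.

```python
def encapsulate(payload: bytes) -> bytes:
    """Illustrative-only sketch of encapsulation: each layer prepends
    its own instructions (a header) to whatever it receives from the
    layer above, treating it as opaque data; the data link layer also
    appends a trailer. Header bytes here are placeholders."""
    segment = b"TCP-HDR|" + payload            # transport layer
    packet = b"IP-HDR|" + segment              # network (Internet) layer
    frame = b"ETH-HDR|" + packet + b"|FCS"     # data link header + trailer
    return frame

print(encapsulate(b"app-data"))
# The frame grows at every layer: the added bytes are the overhead
# the text describes.
```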

As part of the IEEE 802.3 Ethernet standard, data is encapsulated with
instructions in the form of a header and a trailer before it can be propagated
over physical media on which Ethernet is supported. The result of each stage
of encapsulation is referred to as a protocol data unit (PDU), which at the data
link layer is known as a frame.
Ethernet frames contain instructions that govern how, and whether, data can
be transmitted over the medium between two or more points. Ethernet frames
come in two general formats, the selection of which is highly dependent on
the protocols that have been defined prior to the framing encapsulation.

Two frame formats are recognized as standard for Ethernet based networks.
The DIX version 2 frame type standard was originally developed during the
early 1980s, and today it is recognized as the Ethernet II frame type.
Ethernet II was eventually accepted and integrated into the IEEE 802
standards, highlighted as part of section 3.2.6 of the IEEE 802.3x-1997
standards documentation. The IEEE 802.3 Ethernet standard was originally
developed in 1983, with key differences between the frame formats including
a change to the type field, which is designed to identify the protocol to which
the data should be forwarded once the frame instructions have been
processed. In the IEEE 802.3 Ethernet format, this is represented as a length
field, which relies on an extended set of instructions referred to as 802.2 LLC
to identify the forwarding protocol.
Ethernet II and IEEE 802.3 associate with upper layer protocols that are
distinguished by a type value range: protocols supporting a value less than or
equal to 1500 (or 05DC in hexadecimal) will employ the IEEE 802.3 Ethernet
frame type at the data link layer, while protocols represented by a type value
greater than or equal to 1536 (or 0600 in hexadecimal) will employ the
Ethernet II standard, which represents the majority of all frames within
Ethernet based networks.
Other fields found within the frame include the destination and source MAC
address fields, which identify the sender and the intended recipient(s), as well
as the frame check sequence field, which is used to confirm the integrity of
the frame during transmission.

The Ethernet II frame references a hexadecimal type value which identifies
the upper layer protocol. One common example of this is the Internet Protocol
(IP), which is represented by a hexadecimal value of 0x0800. Since this value
for IP is greater than 0x0600, the Ethernet II frame type is applied during
encapsulation. Another common protocol that relies on the Ethernet II frame
type at the data link layer is ARP, represented by the hexadecimal value of
0x0806.

For the IEEE 802.3 frame type, the type field is contained as part of the SNAP
extension header, and is not so commonly applied by protocols in today's
networks, partially due to the requirement for additional instructions, which
results in additional overhead per frame. Some older protocols that have
existed for many years, but that are still applied in support of Ethernet
networks, are likely to apply the IEEE 802.3 frame type. One clear example of
this is found in the case of the Spanning Tree Protocol (STP), which is
represented by a value of 0x03 within the type field of the SNAP header.

Ethernet based networks achieve communication between two end stations
on a local area network using Media Access Control (MAC) addressing, which
allows end systems within a multi-access network to be distinguished. The
MAC address is a physical address that is burned into the network interface
card to which the physical medium is connected. This MAC address is
retrieved by the sender and used as the destination MAC address of the
intended receiver, before the frame is transferred to the physical layer for
forwarding over the connected medium.

Each MAC address is a 48-bit value, commonly represented in hexadecimal
(base 16) format, and comprises two parts that attempt to ensure that every
MAC address is globally unique. This is achieved by defining an
organizationally unique identifier (OUI) that is vendor specific, based on which
it is possible to trace the origin of a product back to its vendor from the first
24 bits of the MAC address. The remaining 24 bits of the MAC address are a
value that is incrementally and uniquely assigned to each product (e.g. a
network interface card, or a similar product supporting port interfaces for
which a MAC address is required).
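The OUI/serial split above can be shown with a few lines of string handling; the function name is illustrative, and the example MAC address below is an arbitrary value chosen for demonstration, not a real device.

```python
def split_mac(mac: str):
    """Split a 48-bit MAC address into its organizationally unique
    identifier (OUI, the first 24 bits, assigned per vendor) and the
    vendor-assigned 24-bit serial portion. Accepts ':' or '-'
    separated notation."""
    octets = mac.lower().replace("-", ":").split(":")
    assert len(octets) == 6, "a MAC address has six octets"
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, serial = split_mac("00-1B-44-11-3A-B7")
print(oui)     # vendor-identifying portion
print(serial)  # device-unique portion
```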

The transmission of frames within a local network is achieved using one of
three forwarding methods. The first of these is unicast, which refers to
transmission from a single source location to a single destination. Each host
interface is represented by a unique MAC address containing an
organizationally unique identifier, within which the 8th bit of the most
significant octet (or first byte) of the MAC address identifies the type of
address. This 8th bit is always set to 0 where the MAC address is a host MAC
address, and signifies that any frame containing this MAC address in the
destination MAC address field is intended for a single destination only.
Where hosts exist within a shared collision domain, all connected hosts will
receive the unicast transmission, but the frame will generally be ignored by
any host where the MAC address in the destination MAC field of the frame
does not match the MAC address of the receiving host on a given interface,
leaving only the intended host to accept and process the received data.
Unicast transmissions are forwarded from a single physical interface to the
intended destination only, even in cases where multiple interfaces may exist.

Broadcast transmission represents a forwarding method that allows frames
flooded from a single source to be received by all destinations within a local
area network. In order to allow traffic to be broadcast to all hosts within a local
area network, the destination MAC address field of the frame is populated
with a value defined in hexadecimal as FF:FF:FF:FF:FF:FF, which specifies
that all recipients of a frame carrying this address should accept the frame
and process its header and trailer.
Broadcasts are used by protocols to facilitate a number of important network
processes, including discovery and maintenance of network operation;
however, they also generate excessive traffic that often causes interrupts to
end systems and utilization of bandwidth, which tend to reduce the overall
performance of the network.

A more efficient alternative to broadcast, which has begun to replace the use
of broadcasts in many newer technologies, is the multicast frame type.
Multicast forwarding can be understood as a form of selective broadcast that
allows select hosts to listen for a specific multicast MAC address, in addition
to the unicast MAC address associated with the host, and to process any
frames containing the multicast MAC address in the destination MAC field of
the frame.
Since there is no other distinction between the unicast and multicast MAC
address formats, the multicast address is differentiated using the 8th bit of the
first octet. Where this bit is set to 1, it identifies that the address is part of the
multicast MAC address range, as opposed to unicast MAC addresses, where
this bit is always 0.
In a local area network, the true capability of multicast behavior at the data
link layer is limited, since forwarding remains similar to that of a broadcast
frame, in which interrupts are still prevalent throughout the network. The only
clear difference from broadcast technology is in the selective processing by
receiving end stations. As networks expand to support multiple local area
networks, the true capability of multicast technology as an efficient means of
transmission becomes more apparent.
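The three address types discussed above can be distinguished programmatically. The "8th bit" in the text, counting from the most significant bit of the first octet, is the octet's least significant bit, so the check below masks with `0x01`; the function name is illustrative, and the example addresses are arbitrary.

```python
def mac_address_type(mac: str) -> str:
    """Classify a destination MAC address: the all-ones address is
    broadcast; otherwise the least significant bit of the first octet
    (the '8th bit' counting from the most significant) distinguishes
    multicast (1) from unicast (0)."""
    octets = [int(o, 16) for o in mac.replace("-", ":").split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"
    return "multicast" if octets[0] & 0x01 else "unicast"

print(mac_address_type("FF:FF:FF:FF:FF:FF"))  # broadcast
print(mac_address_type("01:00:5E:00:00:01"))  # multicast
print(mac_address_type("00:1B:44:11:3A:B7"))  # unicast
```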

As traffic is prepared to be forwarded over the physical network, it is
necessary for hosts in shared collision domains to determine whether any
traffic is currently occupying the transmission medium. Transmission media
such as 10Base2 provide a shared medium over which CSMA/CD must be
applied to ensure collisions are handled should they occur. If the transmission
of a frame is detected on the link, the host will delay the forwarding of its own
frames until the line becomes available, following which the host will begin to
forward frames from the physical interface towards the intended destination.
Where two hosts are connected over a medium capable of supporting full
duplex transmission, as in the case of media such as 10BaseT, it is not
possible for transmitted frames to suffer collisions, since transmission and
receipt of frames occur over separate wire pairs, and there is therefore no
requirement for CSMA/CD to be implemented.

Once a frame is forwarded from the physical interface of the host, it is carried
over the medium to its intended destination. In the case of a shared network,
the frame may be received by multiple hosts, each of which will assess
whether the frame is intended for its interface by analyzing the destination
MAC address in the frame header. If the destination MAC address and the
MAC address of the host are not the same, and the destination MAC address
is not a broadcast or multicast MAC address for which the host is listening,
the frame will be ignored and discarded.
For the intended destination, the frame will be received and processed,
initially by confirming that the frame is intended for the host's physical
interface. The host must also confirm that the integrity of the frame has been
maintained during transmission, by taking the value of the frame check
sequence (FCS) field and comparing this value with a value calculated by the
receiving host. If the values do not match, the frame is considered corrupted
and is subsequently discarded.
For valid frames, the host will then determine the next stage of processing by
analyzing the type field of the frame header, identifying the protocol for which
this frame is intended. In this example, the frame type field contains a
hexadecimal value of 0x0800, which identifies that the data taken from the
frame should be forwarded to the Internet Protocol, prior to which the frame
header and trailer are discarded.
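The receive path described above (address check, integrity check, then type-field dispatch) can be sketched end to end. This is a simplified illustration: the frame layout assumed here is destination MAC (6 bytes) + source MAC (6) + type (2) + payload + FCS (4), and CRC-32 stands in for the real FCS calculation; the function name is illustrative.

```python
import zlib

def process_frame(frame: bytes, my_mac: bytes):
    """Simplified Ethernet receive path. Assumed layout:
    dst MAC (6) + src MAC (6) + type (2) + payload + FCS (4),
    with CRC-32 standing in for the real FCS computation.
    Returns the payload for IP-bound frames, otherwise None."""
    dst, body, fcs = frame[:6], frame[:-4], frame[-4:]
    if dst != my_mac:
        return None                          # not for this host: ignore
    if zlib.crc32(body).to_bytes(4, "big") != fcs:
        return None                          # corrupted in transit: discard
    frame_type = int.from_bytes(frame[12:14], "big")
    if frame_type == 0x0800:
        return frame[14:-4]                  # strip header/trailer, pass to IP
    return None

# Build and receive a well-formed frame addressed to this host.
my_mac, src_mac = b"\xaa" * 6, b"\xbb" * 6
body = my_mac + src_mac + b"\x08\x00" + b"hello"
frame = body + zlib.crc32(body).to_bytes(4, "big")
print(process_frame(frame, my_mac))
```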

1. Data link layer frames contain a Type field that references the next
protocol to which data contained within the frame should be forwarded.
Common examples of forwarding protocols include IP (0x0800) and ARP
(0x0806).
2. The destination MAC address contained within the frame header is
analyzed by the receiving end station and compared to the MAC address
associated with the interface on which the frame was received. If the
destination MAC address and interface MAC address do not match, the
frame is discarded.

---

Foreword
The Internet Protocol (IP) is designed to provide a means for internetwork
communication that is not supported by lower layer protocols such as
Ethernet. The implementation of logical (IP) addressing enables the
Internet Protocol to be employed by other protocols for the forwarding of
data in the form of packets between networks. A strong knowledge of IP
addressing must be attained for effective network design along with clear
familiarity of the protocol behavior, to support a clear understanding of
the implementation of IP as a routed protocol.

Copyright 2014 Huawei Technologies Co., Ltd. All rights reserved.

Page 2

~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:
Describe the fields and characteristics contained within IP.
Distinguish between public, private and special IP address ranges.

Successfully implement VLSM addressing.

Explain the function of an IP gateway.

Copyright 2014 Huawei Technologies Co., Ltd. All rights reserved.

Page 3

~~ HUAWEI

Prior to discarding the frame header and trailer, the next set of instructions
to be processed must be determined from the frame header. As highlighted, this
is identified by the value in the type field, which in this instance
represents a frame destined for the IP protocol following completion of frame
processing.
The key function of the frame is to determine whether the intended physical
destination has been reached, and that the integrity of the frame has remained
intact. The focus of this section is to identify how data is processed
following the discarding of the frame headers and the propagation of the
remaining data to the Internet Protocol.

The IP header is used to support two key operations, routing and
fragmentation. Routing is the mechanism that allows traffic from a given
network to be forwarded to other networks, since the data link layer
represents a single network for which network boundaries exist.
Fragmentation refers to the breaking down of data into manageable blocks
that can be transmitted over the network.
The IP header is carried as part of the data and represents an overhead of at
least 20 bytes that references how traffic can be forwarded between networks,
where the intended destination exists within a network different from the
network on which the data was originally transmitted. The version field
identifies the version of IP that is currently being supported, in this case the
version is known as version four, or IPv4. The DS field was originally
referred to as the type of service (ToS) field, however it now operates as a
field for supporting differentiated services, primarily used as a mechanism
for applying quality of service (QoS) for network traffic optimization, and is
considered to be outside the scope of this training.
The source and destination IP addresses are logical addresses assigned to
hosts, used to reference the sender and the intended receiver at the
network layer. IP addressing allows for assessment as to whether an intended
destination exists within the same network or a different network as a means
of aiding the routing process between networks in order to reach destinations
beyond the local area network.

Each IPv4 address represents a 32 bit value that is often displayed in a dotted
decimal format but for detailed understanding of the underlying behavior is
also represented in a binary (Base 2) format. IP addresses act as identifiers
for end systems as well as other devices within the network, as a means of
allowing such devices to be reachable both locally and by sources that are
located remotely, beyond the boundaries of the current network.
The IP address consists of two fields of information that are used to clearly
specify the network to which an IP address belongs as well as a host identifier
within the network range, that is for the most part unique within the given
network.

Each network range contains two important addresses that are excluded from
the assignable network range to hosts or other devices. The first of these
excluded addresses is the network address that represents a given network
as opposed to a specific host within the network. The network address is
identifiable by referring to the host field of the network address, in which the
binary values within this range are all set to 0, for which it should also be
noted that an all 0 binary value may not always represent a 0 value in the
dotted decimal notation.
The second excluded address is the broadcast address that is used by the
network layer to refer to any transmission that is expected to be sent to all
destinations within a given network. The broadcast address is represented
within the host field of the IP address where the binary values within this
range are all set to 1. Host addresses make up the range that exists between
the network and broadcast addresses.

The use of binary, decimal and hexadecimal notations is commonly applied
throughout IP networks to represent addressing schemes, protocols and
parameters, and therefore knowledge of the fundamental construction of
these base forms is important to understanding the behavior and application
of values within IP networks.
Each numbering system is represented by a different base value that
highlights the number of values used as part of the base notations range. In
the case of binary, only two values are ever used, 0 and 1, which in
combination can provide for an increasing number of values, often
represented as 2 to the power of x, where x denotes the number of binary
values. Hexadecimal represents a base 16 notation with values ranging from
0 to F, (0-9 and A-F) where A represents the next value following 9 and F thus
represents a value equivalent to 15 in decimal, or 1111 in binary.

A byte is understood to contain 8 bits and acts as a common notation within IP
networks; a byte can therefore represent 256 values, ranging from 0 through
255. This is clearly shown by translating decimal notation to binary, and
applying the base power to each binary digit, to achieve the 256-value
range. A translation of the numbering system for
binary can be seen given in the example to allow familiarization with the
numbering patterns associated with binary. The example also clearly
demonstrates how broadcast address values in decimal, binary and
hexadecimal are represented to allow for broadcasts to be achieved in both IP
and MAC addressing at the network and data link layers.
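The translations described above can be checked directly in Python, whose
built-in conversions make the relationship between the notations explicit.
The value 255 is used here because it is the all-ones byte that appears in
both IP and MAC broadcast addresses.

```python
# Decimal 255 -- the all-ones byte used in broadcast addressing -- in each notation
value = 255
print(bin(value))  # 0b11111111
print(hex(value))  # 0xff

# A byte of 8 bits can represent 2 to the power of 8 values, 0 through 255
assert 2 ** 8 == 256

# Converting between the notations used throughout IP networks
assert int("11111111", 2) == 255        # binary -> decimal
assert int("FF", 16) == 255             # hexadecimal -> decimal
assert format(255, "08b") == "11111111" # decimal -> 8-bit binary string
```

The same pattern applies per octet of an IPv4 address and per byte of a MAC
address, which is why a broadcast MAC address appears as FF-FF-FF-FF-FF-FF.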

The combination of 32 bits within an IP address correlates to four octets or
bytes, each of which can represent a value range of 256, giving a theoretical
number of 4294967296 possible IP addresses, however in truth only a
fraction of the total number of addresses are able to be assigned to hosts.
Each bit within a byte represents a base power and as such each octet can
represent a specific network class, with each network class being based on
either a single octet or a combination of octets. Three octets have been used
as part of this example to represent the network with the fourth octet
representing the host range that is supported by the network.

The number of octets supported by a network address is determined by
address classes that break down the address scope of IPv4. Classes A, B and
C are assignable address ranges, each of which supports a varied number of
networks, and a number of hosts that are assignable to a given network.
Class A for instance consists of 126 potential networks, each of which can
support 2 to the power of 24, or 16,777,216, potential host addresses, bearing in mind that
network and broadcast addresses of a class range are not assignable to
hosts.
In truth, a single Ethernet network could never support such a large number of
hosts since Ethernet does not scale well, due in part to broadcasts that
generate excessive network traffic within a single local area network. Class C
address ranges allow for a much more balanced network that scales well to
Ethernet networks, supplying just over 2 million potential networks, with each
network capable of supporting around 256 addresses, of which 254 are
assignable to hosts.
Class D is a range reserved for multicast, to allow hosts to listen for a specific
address within this range, and should the destination address of a packet
contain a multicast address for which the host is listening, the packet shall be
processed in the same way as a packet destined for the hosts assigned IP
address. Each class is easily distinguishable in binary by observing the bit
value within the first octet, where a class A address for instance will always
begin with a 0 for the high order bit, whereas in a Class B the first two high
order bits are always set as 1 and 0, allowing all classes to be easily
determined in binary.
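The class rules above can be expressed as a simple check on the high-order
bits of the first octet. The `address_class` helper below is a hypothetical
illustration of that binary test, not part of any standard library.

```python
def address_class(first_octet: int) -> str:
    """Determine the IPv4 class from the high-order bits of the first octet."""
    if first_octet >> 7 == 0b0:        # 0xxxxxxx -> class A (1..126; 127 reserved)
        return "A"
    if first_octet >> 6 == 0b10:       # 10xxxxxx -> class B (128..191)
        return "B"
    if first_octet >> 5 == 0b110:      # 110xxxxx -> class C (192..223)
        return "C"
    if first_octet >> 4 == 0b1110:     # 1110xxxx -> class D, multicast (224..239)
        return "D"
    return "E"                         # 1111xxxx -> class E, reserved

print(address_class(10))    # A
print(address_class(172))   # B
print(address_class(192))   # C
print(address_class(224))   # D
```

Shifting away the low-order bits leaves only the leading bit pattern, which is
exactly the distinction the class system defines in binary.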

Within IPv4, specific addresses and address ranges have been reserved for
special purposes. Private address ranges exist within the class A, B and C
address ranges to slow the rapid decline in the number of available IP
addresses. The number of actual end systems and devices that require IP
addressing in the world today exceeds the 4294967296 addresses of the 32
bit IPv4 address range, and therefore a solution to this escalating problem
was to allocate private address ranges that could be assigned to private
networks, to allow for conservation of public network addresses that facilitate
communication over public network infrastructures such as the Internet.
Private networks have become common throughout the enterprise network
but hosts are unable to interact with the public network, meaning that address
ranges can be reused in many disparate enterprise networks. Traffic bound
for public networks however must undergo a translation of addresses before
data can reach the intended destination.
Other special addresses include a diagnostic range denoted by the 127.0.0.0
network address, as well as the first and last addresses within the IPv4
address range, for which 0.0.0.0 represents any network and for which its
application shall be introduced in more detail along with principles of routing.
The address 255.255.255.255 represents a broadcast address for the IPv4
(0.0.0.0) network, however the scope of any broadcast in IP is restricted to the
boundaries of the local area network from which the broadcast is generated.
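Python's standard ipaddress module already knows these reserved ranges, so the
private and special addresses discussed above can be checked directly; the
sample addresses below are arbitrary illustrations.

```python
import ipaddress

# Private ranges reserved within classes A, B and C, plus the loopback range
samples = ["10.1.1.1", "172.16.5.4", "192.168.1.7", "8.8.8.8", "127.0.0.1"]

for text in samples:
    ip = ipaddress.IPv4Address(text)
    scope = "private" if ip.is_private else "public"
    note = " (loopback/diagnostic)" if ip.is_loopback else ""
    print(text, scope + note)
```

Addresses in the 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 blocks report as
private, while 127.0.0.0/8 is the diagnostic loopback range noted above.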

In order for a host to forward traffic to a destination, it is necessary for
the host to have knowledge of the destination network. A host is naturally
aware of the
network to which it belongs but is not generally aware of other networks, even
when those networks may be considered part of the same physical network.
As such hosts will not forward data intended for a given destination until the
host learns of the network and thus with it the interface via which the
destination can be reached.
For a host to forward traffic to another host, it must firstly determine whether
the destination is part of the same IP network. This is achieved through
comparison of the destination network to the source network (host IP address)
from which the data is originating. Where the network ranges match, the
packet can be forwarded to the lower layers where Ethernet framing presides,
for processing. In the case where the intended destination network varies
from the originating network, the host is expected to have knowledge of the
intended network and the interface via which a packet/frame should be
forwarded before the packet can be processed by the lower layers. Without
this information, the host will proceed to drop the packet before it even
reaches the data link layer.

The identification of a unique network segment is governed by the
implementation of a mask value that is used to distinguish the number of bits
that represent the network segment, for which the remaining bits are
understood as representing the number of hosts supported within a given
network segment. A network administrator can divide a network address into
sub-networks so that broadcast packets are transmitted within the boundaries
of a single subnet. The subnet mask consists of a string of continuous and
unbroken 1 values followed by a similar unbroken string of 0 values. The 1
values correspond to the network ID field whereas the 0 values correspond to
the host ID field.

For each class of network address, a corresponding subnet mask is applied to
specify the default size of the network segment. Any network considered to be
part of the class A address range is fixed with a default subnet mask
pertaining to 8 leftmost bits which comprise of the first octet of the IP address,
with the remaining three octets remaining available for host ID assignment.
In a similar manner, the class B network reflects a default subnet mask of 16
bits, allowing a greater number of networks within the class B range at the
cost of the number of hosts that can be assigned per default network. The
class C network defaults to a 24 bit mask that provides a large number of
potential networks but limits greatly the number of hosts that can be assigned
within the default network. The default networks provide a common boundary
to address ranges, however in the case of class A and class B address
ranges, do not provide a practical scale for address allocation for Ethernet
based networks.

Application of the subnet mask to a given IP address enables identification of
the network to which the host belongs. The subnet mask will also identify the
broadcast address for the network as well as the number of hosts that can be
supported as part of the network range. Such information provides the basis
for effective network address planning. In the example given, a host has been
identified with the address of 192.168.1.7 as part of a network with a 24 bit
default (class C) subnet mask applied. In distinguishing which part of the IP
address constitutes the network and host segments, the default network
address can be determined for the segment.
This is understood as the address where all host bit values are set to 0, in this
case generating a default network address of 192.168.1.0. Where the host
values are represented by a continuous string of 1 values, the broadcast
address for the network can be determined. Where the last octet contains a
string of 1 values, it represents a decimal value of 255, for which a broadcast
address of 192.168.1.255 can be derived.
Possible host addresses are calculated based on a formula of 2 to the power of
n, where n represents the number of host bits defined by the subnet mask. In
this instance n represents a value of 8 host bits, where 2 to the power of 8
gives a resulting value of 256. The number of usable host addresses however
requires that the network and broadcast addresses be deducted from this
result, to give a number of valid host addresses of 254.
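The worked example for 192.168.1.7 with a /24 mask can be reproduced with the
standard ipaddress module, which derives the same network address, broadcast
address and usable host count.

```python
import ipaddress

# Host 192.168.1.7 with the default class C (/24) subnet mask
iface = ipaddress.IPv4Interface("192.168.1.7/24")
network = iface.network

print(network.network_address)    # 192.168.1.0   (host bits all set to 0)
print(network.broadcast_address)  # 192.168.1.255 (host bits all set to 1)

# 2 to the power of 8 addresses, minus the network and broadcast addresses
usable = network.num_addresses - 2
print(usable)                     # 254
```

The same steps apply to the class B exercise that follows, simply by changing
the address and prefix length given to IPv4Interface.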

The case scenario provides a common class B address range for which it is
necessary to determine the network to which the specified host belongs,
along with the broadcast address and the number of valid hosts that are
supported by the given network. Applying the same principles as with the
class C address range, it is possible for the network address of the host to be
determined, along with the range of hosts within the given network.

One of the main constraints of the default subnet mask occurs when multiple
network address ranges are applied to a given enterprise in order to generate
logical boundaries between the hosts within the physical enterprise network.
The application of a basic addressing scheme may require a limited number
of hosts to be associated with a given network, for which multiple networks
are applied to provide the logical segmentation of the network. In doing so
however, a great deal of address space remains unused, displaying the
inefficiency of default subnet mask application.

As a means of resolving the limitations of default subnet masks, the concept
of variable length subnet masks is introduced, which enables a default subnet
mask to be broken down into multiple sub-networks, which may be of a fixed
length (a.k.a. fixed length subnet masks or FLSM) or of a variable length
known commonly by the term VLSM. The implementation of such subnet
masks consists of taking a default class based network and dividing the
network through manipulation of the subnet mask.
In the example given, a simple variation has been made to the default class C
network which by default is governed by a 24 bit mask. The variation comes in
the form of a borrowed bit from the host ID which has been applied as part of
the network address. Where the deviation of bits occurs in comparison to the
default network, the additional bits represent what is known as the subnet ID.
In this case a single bit has been taken to represent the sub-network for which
two sub-networks can be derived, since a single bit value can only represent
two states of either 1 or 0. Where the bit is set to 0 it represents a value of 0,
where it is set to 1 it represents a value of 128. In setting the host bits to 0,
the sub-network address can be found for each sub-network, by setting the
host bits to 1, the broadcast address for each sub-network is identifiable. The
number of supported hosts in this case represents a value of 2 to the power of 7, minus the
sub-network address and broadcast address for each sub-network, resulting
in each sub-network supporting a total of 126 valid host addresses.
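Borrowing the single host bit described above is equivalent to splitting the
default /24 into two /25 sub-networks, which the ipaddress module can do
directly.

```python
import ipaddress

# Borrow one host bit from the default /24 to create two /25 sub-networks
default = ipaddress.IPv4Network("192.168.1.0/24")
subnets = list(default.subnets(prefixlen_diff=1))

for subnet in subnets:
    # Each /25 leaves 7 host bits: 2**7 - 2 = 126 valid host addresses
    print(subnet, subnet.broadcast_address, subnet.num_addresses - 2)
```

The first sub-network covers 192.168.1.0 through 192.168.1.127 and the second
192.168.1.128 through 192.168.1.255, matching the subnet bit values of 0 and
128 derived in the text.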

In relation to the problem of address limitations, in which default networks
resulted in excessive address wastage, the concept of variable length subnet
masks
can be applied to reduce the address wastage and provide a more effective
addressing scheme to the enterprise network.
A single default class C address range has been defined, for which variable
length subnet masks are required to accommodate each of the logical
networks within a single default address range. Effective subnet mask
assignment requires that the number of host bits necessary to accommodate
the required number of hosts be determined, for which the remaining host bits
can be applied as part of the subnet ID, that represents the variation in the
network ID from the default network address.
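Determining how many host bits each logical network needs, as described above,
reduces to finding the smallest power of two covering the required hosts plus
the network and broadcast addresses. The helper names and example host counts
below are hypothetical.

```python
import math

def host_bits_needed(hosts: int) -> int:
    """Smallest number of host bits covering the hosts plus the network
    and broadcast addresses."""
    return math.ceil(math.log2(hosts + 2))

def prefix_for(hosts: int) -> int:
    """Subnet prefix length that accommodates the required host count."""
    return 32 - host_bits_needed(hosts)

# Hypothetical requirements for four logical networks within one default /24
for required in (100, 50, 20, 10):
    print(required, "hosts ->", "/%d" % prefix_for(required))
```

Allocating the largest requirement first (here a /25, then /26, /27 and /28)
keeps the sub-networks contiguous within the single default address range.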

Classless inter-domain routing (CIDR) was initially introduced as a solution
to handle problems that were occurring as a result of the rapid growth of what
is now known as the Internet. The primary concern was the imminent exhaustion
of the class B address space, commonly adopted by mid-sized organizations as
the most suitable address range where class C was inadequate and class A too
vast, and where management of the 65,534 host addresses could be achieved
through VLSM. Additionally, the continued
growth meant that gateway devices such as routers were beginning to
struggle to keep up with the growing number of networks that such devices
were expected to handle. The solution given involves transitioning to a
classless addressing system in which classful boundaries were replaced with
address prefixes.
This notation works on the principle that classful address ranges such as that
of class C can be understood to have a 24 bit prefix that represents the
subnet or major network boundary, and for which it is possible to summarize
multiple network prefixes into a single larger network address prefix that
represents the same networks but as a single address prefix. This has helped
to alleviate the number of routes that are contained particularly within large
scale routing devices that operate on a global scale, and has provided a more
effective means of address management. The result of CIDR has had far
reaching effects and is understood to have effectively slowed the overall
exhaustion rate of the IPv4 address space.
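The summarization of multiple prefixes into one larger prefix can be
demonstrated with the ipaddress module: four contiguous /24 prefixes collapse
into a single /22 supernet, reducing four routing entries to one.

```python
import ipaddress

# Four contiguous class C (/24) prefixes: 192.168.0.0 through 192.168.3.0
networks = [ipaddress.IPv4Network("192.168.%d.0/24" % i) for i in range(4)]

# Summarize them into the smallest set of covering prefixes
summary = list(ipaddress.collapse_addresses(networks))
print(summary)  # a single 192.168.0.0/22 prefix
```

Each bit removed from the prefix doubles the address space it covers, which is
why aligned, contiguous blocks summarize so efficiently in routing tables.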

The forwarding of packets requires that the host first determine a
forwarding path to the given network, and the interface via which the packet
should be forwarded, before the packet is encapsulated as a frame and
forwarded from the physical interface. In the case where the intended network
is different from the originating network, the packet must be forwarded to a
gateway via which the packet is able to reach its intended destination.
In all networks, the gateway is a device that is capable of handling packets
and making decisions as to how packets should be routed, in order to reach
their intended destination. The device in question however must be aware of a
route to the intended destination IP network before the routing of packets can
take place. Where networks are divided by a physical gateway, the interface
IP address (in the same network or sub-network) via which that gateway can
be reached is considered to be the gateway address.
In the case of hosts that belong to different networks that are not divided by a
physical gateway, it is the responsibility of the host to function as the gateway,
for which the host must firstly be aware of the route for the network to which
packets are to be forwarded, and should specify the host's own interface IP
address as the gateway IP address, via which the intended destination
network can be reached.

The data of forwarded packets exists in many formats and consists of varying
sizes, often the size of data to be transmitted exceeds the size that is
supported for transmission. Where this occurs it is necessary for the data
block to be broken down into smaller blocks of data before transmission can
occur. The process of breaking down this data into manageable blocks is
known as fragmentation.
The identification, flags and fragment offset fields are used to manage
reassembly of fragments of data once they are received at their final intended
destination. Identification distinguishes between data blocks of traffic flows
which may originate from the same host or different hosts. The flags field
indicates which of a number of fragments represents the last fragment; on its
arrival a reassembly timer is initiated and the destination is notified that
reassembly of the packet should commence.
Finally, the fragment offset labels the position of each fragment's data
within the original packet; the first fragment is set with a value of 0 and
subsequent fragments specify the position of the first byte following the
previous fragment. For example, where the initial fragment contains data bytes
0 through 1259, the following fragment will be assigned an offset value of
1260.
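The fragmentation process can be sketched as below. Note one assumption: the
offsets printed here are byte positions, whereas the actual fragment offset
field in the IP header stores the offset divided by 8, which is why every
fragment's data size (except the last) must be a multiple of 8 bytes. The
`fragment` helper is hypothetical.

```python
def fragment(payload_len: int, mtu: int, header_len: int = 20):
    """Split an IP payload into (offset, size, more_fragments) tuples.

    Offsets are byte positions; the IP header would carry offset // 8."""
    max_data = (mtu - header_len) // 8 * 8  # largest 8-byte-aligned data size
    fragments = []
    offset = 0
    while offset < payload_len:
        size = min(max_data, payload_len - offset)
        more = offset + size < payload_len   # the flags field's MF indication
        fragments.append((offset, size, more))
        offset += size
    return fragments

# Hypothetical example: 3000 bytes of data over a link with a 1500-byte MTU
for offset, size, more_fragments in fragment(3000, 1500):
    print(offset, size, more_fragments)
```

With a 1500-byte MTU and a 20-byte header, each fragment carries up to 1480
bytes, so 3000 bytes of data produce fragments at offsets 0, 1480 and 2960.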

As packets are forwarded between networks, it is possible for packets to fall
into loops where routes to IP networks have not been correctly defined within
devices responsible for the routing of traffic between multiple networks. This
can result in packets becoming lost within a cycle of packet forwarding that
does not allow a packet to reach its intended destination. Where this occurs,
congestion on the network will ensue as more and more packets intended for
the same destination become subject to the same fate, until such time as the
network becomes flooded with erroneous packets.
In order to prevent such congestion occurring in the event of such loops, a
time to live (TTL) field is defined as part of the IP header, that decrements by
a value of 1 each time a packet traverses a layer 3 device in order to reach a
given network. The starting TTL value may vary depending on the originating
source, however should the TTL value decrement to a value of 0, the packet
will be discarded and an (ICMP) error message is returned to the source,
based on the source IP address that can be found in the IP header of the
wandering packet.

Upon verification that the packet has reached its intended destination, the
network layer must determine the next set of instructions that are to be
processed. This is determined by analyzing the protocol field of the IP header.
As with the type field of the frame header, a hexadecimal value is used to
specify the next set of instructions to be processed.
It should be understood that the protocol field may refer to protocols at the
network layer, as in the case of the Internet Control Message Protocol
(ICMP), but may also refer to upper layer protocols such as the Transmission
Control Protocol (6/0x06) or User Datagram Protocol (17/0x11), both of which
exist as part of the transport layer within both the TCP/IP and OSI reference
models.
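The dispatch on the protocol field can be pictured as a simple lookup from the
protocol number in the IP header to the next set of instructions, using the
values named above.

```python
# Protocol field values referenced in the text, mapping the IP header's
# protocol number to the protocol that processes the packet's data next
IP_PROTOCOLS = {
    1: "ICMP",   # 0x01 - network layer control messaging
    6: "TCP",    # 0x06 - Transmission Control Protocol (transport layer)
    17: "UDP",   # 0x11 - User Datagram Protocol (transport layer)
}

protocol_number = 0x06  # value read from the protocol field of the IP header
print(IP_PROTOCOLS[protocol_number])  # TCP
```

This mirrors the role of the frame's type field one layer down: each header
names the protocol responsible for the data it carries.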

1. The IP subnet mask is a 32 bit value that describes the logical division
between the bit values of an IP address. The IP address is as such
divided into two parts for which bit values represent either a network or
sub-network, and the host within a given network or sub-network.
2. IP packets that are unable to reach the intended network are susceptible
to being indefinitely forwarded between networks in an attempt to discover
their ultimate destination. The Time To Live (TTL) feature is used to
ensure that a lifetime is applied to all IP packets, so as to ensure that in
the event that an IP packet is unable to reach its destination, it will
eventually be terminated. The TTL value may vary depending on the
original source.
3. Gateways represent points of access between IP networks to which traffic
can be redirected, or routed in the event that the intended destination
network varies from the network on which the packet originated.

Thank you
www.huawei.com

~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
ICMP is a protocol that works alongside IP as a form of messaging
protocol, in order to compensate for the limited reliability of IP. The
implementation of ICMP must be understood in order to become familiar with
the behavior of the numerous operations and applications that rely heavily
on ICMP for underlying messaging, based on which further processes are
often performed.

Copyright 2014 Huawei Technologies Co., Ltd. All rights reserved.

Page 2

~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:
Describe some of the processes to which ICMP is applied.

Identify the common type and code values used in ICMP.

Explain the function of ICMP in the ping and traceroute applications.

Copyright 2014 Huawei Technologies Co., Ltd. All rights reserved.

Page 3

~~ HUAWEI

The Internet Control Message Protocol is an integral part of IP, designed to
facilitate the transmission of notification messages between gateways and
source hosts, providing requests for diagnostic information, support for
routing, and a means of reporting errors in datagram processing. The
purpose of these control messages is to provide feedback about problems in
the communication environment, and does not guarantee that a datagram will
be delivered, or that a control message will be returned.

ICMP Redirect messages represent a common scenario where ICMP is used
as a means of facilitating routing functions. In the example, a packet is
forwarded to the gateway by host A based on the gateway address of host A.
The gateway identifies that the packet received is destined to be forwarded to
the address of the next gateway which happens to be part of the same
network as the host that originated the packet, highlighting non-optimal
forwarding behavior between the host and the gateways.
In order to resolve this, a redirect message is sent to the host. The redirect
message advises the host to send its traffic for the intended destination
directly to the gateway with which the destination network is associated,
since this represents a shorter path to the destination. The gateway
nevertheless proceeds to forward the data of the original packet to its
intended destination.

ICMP echo messages represent a means of diagnosis for determining
primarily connectivity between a given source and destination, but also
provides additional information such as the round trip time for transmission as
a diagnostic for measuring delay. Data that is received in the echo message is
returned as a separate echo reply message.

ICMP provides various error reporting messages that often determine
reachability issues and generate specific error reports that allow a clearer
understanding from the perspective of the host as to why transmission to the
intended destination failed.
Typical examples include cases where loops may have occurred in the
network, and consequently caused the time to live parameter in the IP
header to expire, resulting in a TTL exceeded in transit error message being
generated. Other examples include an intended destination being
unreachable, which could relate to a more specific issue of the intended
network not being known by the receiving gateway, or of the intended host
within the destination network not being discovered. In all events an ICMP
message is generated with a destination based on the source IP address
found in the IP header, to ensure the message notifies the sending host.

ICMP messages are carried within a basic IP header, which functions as an
integral part of the ICMP message, as is the case with the TTL parameter
that is used to provide support for determining whether a destination is
reachable. The format of the ICMP message relies on two fields for message
identification in the form of a type/code format, where the type field
provides a general description of the message type, and the code field a
more specific parameter for the message type.
A checksum provides a means of validating the integrity of the ICMP
message. An additional 32 bits are included to provide variable parameters,
often unused and thus set as 0 when the ICMP message is sent, however in
cases such as an ICMP redirect, the field contains the gateway IP address to
which a host should redirect packets. The parameter field in the case of echo
requests will contain an identifier and a sequence number, used to help the
source associate sent echo requests with received echo replies, especially in
the event multiple requests are forwarded to a given destination.
As a final means of tracing data to a specific process, the ICMP message may
carry the IP header and a portion of the data that contains upper layer
information that enables the source to identify the process for which an error
occurred, such as cases where the ICMP TTL expires in transit.
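The message format described above can be sketched by building a type 8
(echo request) message with Python's struct module. The checksum routine below
is the standard ones' complement sum used to validate ICMP messages; the
helper names and payload are illustrative.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """Ones' complement sum of 16-bit words, used to validate ICMP messages."""
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                        # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(identifier: int, sequence: int, payload: bytes) -> bytes:
    """Type 8, code 0 message; identifier and sequence fill the parameter field."""
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload

message = build_echo_request(identifier=1, sequence=1, payload=b"abcdefgh")
# A receiver re-checksums the entire message; a result of 0 confirms integrity
print(internet_checksum(message))  # 0
```

The identifier and sequence parameters are what allow a source to match each
echo reply to the echo request that produced it, as described above.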

A wide number of ICMP type values exist to define clearly the different
applications of the ICMP control protocol. In some cases the code field is not
required to provide a more specific entry to the type field, as is found with
echo requests that have a type field of 8 and the corresponding reply, which is
generated and sent as a separate ICMP message to the source address of
the sender, and defined using a type field of 0.
Alternatively, certain type fields define a very general type for which the
variance is understood through the code field, as in the case of the type 3
parameter. A type field of 3 specifies that a given destination is unreachable,
while the code field reflects the specific absence of either the network, host,
protocol, port (TCP/UDP), ability to perform fragmentation (code 4), or source
route (code 5) in which a packet, for which a forwarding path through the
network is strictly or partially defined, fails to reach its destination.

The application of ICMP can be understood through the use of tools such as
Ping. The Ping application may be used as a tool in order to determine
whether a destination is reachable as well as collect other related information.
The parameters of the Ping application allow an end user to specify the
behavior of the end system in generating ICMP messages, with consideration
of the size of the ICMP datagram, the number of ICMP messages generated
by the host, and also the duration in which it is expected a reply is received
before a timeout occurs. This is important where a large delay occurs since a
timeout may be reported by the Ping application before the ICMP message
has had the opportunity to return to the source.

The general output of an ICMP response to a Ping generated ICMP request
details the destination to which the datagram was sent and the size of the
datagram generated. In addition the sequence number of the sequence field
that is carried as part of the echo reply (type 0) is displayed along with the
TTL value that is taken from the IP header, as well as the round trip time
which again is carried as part of the IP options field in the IP header.

Another common application of ICMP is traceroute, which provides a means
of measuring the forwarding path and delay on a hop by hop basis between
multiple networks, through association with the TTL value within the IP
header.
For a given destination, reachability to each hop along the path is measured
by initially setting a TTL value of 1 in the IP header. The TTL expires before
the receiving gateway can propagate the packet any further, causing the
gateway to generate a TTL expired in transit message. By incrementing the TTL
for each successive round of probes, a hop by hop assessment of the path
taken through the network to the destination is built, together with a
measurement of the round trip time to each hop. This provides an effective
means of identifying the point of any packet loss or delay incurred in the
network, and also aids in the discovery of routing loops.

The implementation of traceroute in Huawei ARG3 series routers adopts the
UDP transport layer protocol, with a service port defined as the destination.
Three probe packets are sent for each hop; the TTL value is initially set to 1
and is incremented after every three packets. In addition, a UDP destination
port of 33434 is specified for the first packet and incremented for every
successive probe packet sent. A hop by hop result is generated, allowing the
path to be determined, as well as any general delay along it to be discovered.
This is achieved by measuring the duration between the sending of each probe
packet and the receipt of the corresponding TTL expired in transit ICMP error.
When a probe finally reaches the ultimate destination, the destination is
unable to find an application listening on the specified port, and thus
returns an ICMP Type 3, Code 3 (Port Unreachable) packet; after three such
attempts the traceroute test ends. The test result of each probe is displayed
by the source, in accordance with the path taken from the source to the
destination. If a fault occurs when the traceroute command is used, the
following information may be displayed:
!H: The host is unreachable.
!N: The network is unreachable.
!: The port is unreachable.
!P: The protocol type is incorrect.
!F: The packet is incorrectly fragmented.
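The TTL and destination port numbering described above can be sketched as a small generator. This is an illustration of the probe schedule only, not ARG3 code; the function name and parameters are assumptions of this sketch:

```python
def probe_schedule(max_hops: int, probes_per_hop: int = 3, base_port: int = 33434):
    """Yield (ttl, udp_dst_port) pairs following the scheme described
    above: the TTL starts at 1 and increments after every three probes,
    while the UDP destination port starts at 33434 and increments for
    every single probe sent."""
    port = base_port
    for ttl in range(1, max_hops + 1):
        for _ in range(probes_per_hop):
            yield ttl, port
            port += 1
```

For a two-hop trace this yields ports 33434-33436 at TTL 1 and 33437-33439 at TTL 2, which is why each hop's replies can be matched back to the probe that triggered them.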

1. The Ping application uses the echo request message of type 8 to attempt
to discover the destination. A separate echo reply message, defined by a
type field of 0, is returned to the original source based on the source IP
address in the IP header field.
2. In the event that the TTL value of an IP datagram reaches 0 before the
   datagram is able to reach the intended destination, the gateway device
   receiving the datagram will proceed to discard it and return an ICMP
   message to the source, notifying it that the datagram in question was
   unable to reach the intended destination. The specific reason is defined
   by the code value, reflecting for example whether the failure was due to
   an inability to reach the host or a port on the host, or a lack of support
   for the service of a given protocol.

Thank you
www.huawei.com

~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
In order for data transmission to a network destination to be achieved it is
necessary to build association between the network layer and lower layer
protocols. The means by which the Address Resolution Protocol is used
to build this association and prevent the unnecessary generation of
additional broadcast traffic in the network should be clearly understood.


~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:
Explain how the MAC address is resolved using ARP.
Explain the function of the ARP cache table.


~~ HUAWEI

As data is encapsulated, the IP protocol at the network layer is able to
specify the target IP address for which the data is ultimately destined, as
well as the interface via which the data is to be transmitted. Before
transmission can occur, however, the source must be aware of the target
Ethernet (MAC) address to which data should be transmitted. The Address
Resolution Protocol (ARP) represents a critical part of the TCP/IP protocol
suite that enables discovery of MAC forwarding addresses to facilitate IP
reachability. The Ethernet next hop must be discovered before data
encapsulation can be completed.

The ARP packet is generated as part of the physical target address discovery
process. Initial discovery will contain partial information since the destination
hardware address or MAC address is to be discovered. The hardware type
refers to Ethernet with the protocol type referring to IP, defining the
technologies associated with the ARP discovery. The hardware and protocol
length identifies the address length for both the Ethernet MAC address and
the IP address, and is defined in bytes.
The operation code specifies one of two operations: an ARP discovery is sent
as a REQUEST, indicating to the receiving destination that a response should
be generated, while the response itself is sent as a REPLY, requiring no
further action from the receiving host, after which the ARP packet is
discarded. The source hardware address refers to the MAC address of the
sender on the physical segment on which ARP is generated, and the source
protocol address refers to the IP address of the sender.
The destination hardware address specifies the physical (Ethernet) address to
which data can be forwarded by the Ethernet protocol standards, however this
information is not present in an ARP request, instead replaced by a value of 0.
The destination protocol address identifies the intended IP destination for
which reachability over Ethernet is to be established.
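The field layout described above can be illustrated by packing an ARP request in Python. The function name and raw byte-string arguments are assumptions of this sketch; the field values follow the description above:

```python
import struct

def build_arp_request(sender_mac: bytes, sender_ip: bytes, target_ip: bytes) -> bytes:
    """Build a 28-byte ARP request for Ethernet (hardware type 1) over
    IPv4 (protocol type 0x0800). The destination hardware address is the
    value being discovered, so it is set to zero as described above."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,            # hardware type: Ethernet
        0x0800,       # protocol type: IP
        6,            # hardware address length, in bytes
        4,            # protocol address length, in bytes
        1,            # operation code: 1 = REQUEST (2 = REPLY)
        sender_mac,   # source hardware address
        sender_ip,    # source protocol address
        b"\x00" * 6,  # destination hardware address: unknown, set to 0
        target_ip,    # destination protocol address
    )
```

An ARP reply uses the same 28-byte layout with the operation code set to 2 and the previously unknown hardware address filled in.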

The network layer represents a logical path between a source and a
destination. Reaching an intended IP destination relies on firstly being able
to establish a physical path to the intended destination, and in order to do
that, an association must be made between the intended IP destination and the
physical next hop interface to which traffic can be forwarded.
For a given destination the host will determine the IP address to which data is
to be forwarded, however before encapsulation of the data can commence,
the host must determine whether a physical forwarding path is known. If the
forwarding path is known encapsulation to the destination can proceed,
however quite often the destination is not known and ARP must be
implemented before data encapsulation can be performed.

The ARP cache (pronounced as [kash]) is a table that associates host
destination IP addresses with their physical (MAC) addresses. Any host that
is engaged in communication with a local or remote destination will first
need to learn the destination MAC via which communication can be
established.
Learned addresses populate the ARP cache table and remain active for a
fixed period of time, during which the intended destination can be reached
without the need for additional ARP discovery processes. Following this fixed
period, the ARP cache table will remove ARP entries to maintain the ARP
cache table's integrity, since any change in the physical location of a
destination host may result in the sending host inadvertently addressing data
to a location at which the destination host no longer resides.
The ARP cache lookup is the first operation that an end system performs when
determining whether it is necessary to generate an ARP request. For
destinations beyond the boundaries of the host's own network, an ARP cache
lookup is performed to discover the physical address of the gateway, via
which the intended destination network can be reached.
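A minimal sketch of the cache-and-age behavior described above might look as follows in Python. The class name, the lifetime value, and the injectable clock are assumptions of this sketch; real implementations differ in their timers and refresh policies:

```python
import time

class ArpCache:
    """A minimal ARP cache sketch: IP -> (MAC, learned_at), with entries
    aged out after a fixed lifetime (an illustrative value here)."""

    def __init__(self, lifetime=1200.0, clock=time.monotonic):
        self.lifetime = lifetime
        self.clock = clock       # injectable, so aging can be tested
        self.entries = {}

    def learn(self, ip, mac):
        """Record an IP-to-MAC association with the current timestamp."""
        self.entries[ip] = (mac, self.clock())

    def lookup(self, ip):
        """Return the cached MAC, or None if absent or expired, in which
        case an ARP request would have to be broadcast."""
        entry = self.entries.get(ip)
        if entry is None:
            return None
        mac, learned = entry
        if self.clock() - learned > self.lifetime:
            # Stale: the destination's physical location may have changed.
            del self.entries[ip]
            return None
        return mac
```

The expiry in `lookup` models the fixed active period described above: once an entry ages out, the next transmission triggers a fresh ARP discovery rather than addressing data to a possibly relocated host.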

Where an ARP cache entry cannot be found, the ARP request process is
performed. This process involves generation of an ARP request packet, and
population of the fields with the source and destination protocol addresses,
as well as the source hardware address. The destination hardware address is
unknown, and as such is populated with a value equivalent to 0. The ARP
request is encapsulated in an Ethernet frame header and trailer as part of
the forwarding process. The source MAC address of the frame header is set as
the address of the sending host.
The host is currently unaware of the location of the destination and therefore
must send the ARP request as a broadcast to all destinations within the same
local network boundary. This means that a broadcast address is used as the
destination MAC address. Once the frame is populated, it is forwarded to the
physical layer where it is propagated along the physical medium to which the
host is connected. The broadcasted ARP packet will be flooded throughout
the network to all destinations including any gateway that may be present,
however the gateway will prevent this broadcast from being forwarded to any
network beyond the current network.

If the intended network destination exists, the frame will arrive at the physical
interface of the destination at which point lower layer processing will ensue.
ARP broadcasts mean that all destinations within the network boundary will
receive the flooded frame, but destinations whose IP address does not match
the destination protocol address will discard the ARP request without
further processing.
Where the destination IP address does match that of the receiving host, the
ARP packet will be processed. The receiving host will firstly process the frame
header and then process the ARP request. The destination host will use the
information from the source hardware address field in the ARP header to
populate its own ARP cache table, thus allowing for a unicast frame to be
generated for any frame forwarding that may be required, to the source from
which the ARP request was received.

The destination will determine that the ARP packet received is an ARP
request and will proceed to generate an ARP reply that will be returned to the
source, based on the information found in the ARP header. A separate ARP
packet is generated for the reply, for which the source and destination
protocol address fields will be populated. However, the destination protocol
address in the ARP request packet now represents the source protocol
address in the ARP reply packet, and similarly the source protocol address of
the ARP request becomes the destination protocol address in the ARP reply.
The destination hardware address field is populated with the MAC address of
the source, discovered as a result of receiving the ARP request. The hardware
address that the ARP request sought to discover is carried as the source
hardware address of the ARP reply, and the operation code is set to REPLY,
informing the receiver of the purpose of the received ARP packet, following
which the receiver is able to discard the ARP packet without any further
communication. The ARP reply is encapsulated in an Ethernet frame header and
trailer, with the destination MAC address of the frame taken from the newly
learned ARP cache entry, allowing the frame to be forwarded as a unicast
frame back to the host that originated the ARP request.

Upon receiving the ARP reply, the originating host will validate that the
intended destination is correct based on the frame header, identify that the
packet header is ARP from the type field and discard the frame headers. The
ARP reply will then be processed, with the source hardware address of the
ARP reply being used to populate the ARP cache table of the originating host
(Host A).
Following the processing of the ARP reply, the packet is discarded and the
destination MAC information is used to facilitate the encapsulation process of
the initial application or protocol that originally requested discovery of the
destination at the data link layer.

The ARP protocol is also applied to other cases such as where transparent
subnet gateways are to be implemented to facilitate communication across
physical networks, where hosts are considered to be part of the same
subnetwork. This is referred to as Proxy ARP since the gateway operates as a
proxy for the two physical networks. When an ARP request is generated for a
destination that is considered to be part of the same subnet, the request will
eventually be received by the gateway. The gateway is able to determine that
the intended destination exists beyond the physical network on which the ARP
request was generated.
Since ARP requests cannot be forwarded beyond the boundaries of the
broadcast domain, the gateway will proceed to generate its own ARP request
to determine the reachability to the intended destination, using its own
protocol and hardware addresses as the source addresses for the generated
ARP request. If the intended destination exists, an ARP reply will be
received by the gateway, and the destination's source hardware address will
be used to populate the ARP cache table of the gateway.
The gateway upon confirming the reachability to the intended destination will
then generate an ARP reply to the original source (Host A) using the hardware
address of the interface on which the ARP reply was forwarded. The gateway
will as a result operate as an agent between the two physical networks to
facilitate data link layer communication, with both hosts forwarding traffic
intended for destinations in different physical networks to the relevant physical
address of the Proxy gateway.

In the event that new hardware is introduced to a network, it is imperative
that the host determine whether or not the protocol address to which it has
been assigned is unique within the network, so as to prevent duplicate
address conflicts. An ARP request is generated as a means of determining
whether the protocol address is unique, by setting the destination protocol
address in the ARP request equal to the host's own IP address.
The ARP request is flooded throughout the network to all link layer
destinations by setting the destination MAC as broadcast, to ensure all end
stations and gateways receive the flooded frame. All destinations will process
the frame, and should any destination discover that the destination IP
address within the ARP request matches the address of a receiving end station
or gateway, an ARP reply will be generated and returned to the host that
generated the ARP request.
Through this method the originating host is able to identify duplication of
the IP address within the network, and flag an IP address conflict so as to
request that a unique address be assigned. This means of generating a request
based on the host's own IP address defines the basic principle of gratuitous
ARP.

1. The host is required to initially determine whether it is already aware
   of a link layer forwarding address within its own ARP cache. If an entry
   is discovered, the end system is capable of creating the frame for
   forwarding without the assistance of the Address Resolution Protocol. If
   an entry cannot be found however, the ARP process will initiate, and an
   ARP request will be broadcast on the local network.
2. Gratuitous ARP messages are commonly generated at the point in which
an IP address is configured or changed for a device connected to the
network, and at any time that a device is physically connected to the
network. In both cases the gratuitous ARP process must ensure that the
IP address that is used remains unique.


~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
The transport layer is associated with the end-to-end behavior of protocols
whose operation is defined between a source and the intended destination.
TCP and UDP represent the transport layer protocols commonly supported
within IP networks. The characteristics of the data, such as sensitivity to
delay and the need for reliability, often determine the protocol used at
the transport layer. This section focuses on how such characteristics are
supported through the behavior of each protocol.


~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:
Describe the common differences between TCP and UDP.
Describe the forms of data to which TCP and UDP are applied.

Identify well known TCP and UDP based port numbers.


~~ HUAWEI

TCP is a connection-oriented, end-to-end protocol that exists as part of the
transport layer of the TCP/IP protocol stack, in order to support applications
that span multi-network environments. The Transmission Control Protocol
provides a means of reliable inter-process communication between pairs of
processes in host computers that are attached to distinct but interconnected
computer communication networks. TCP relies on lower layer protocols to
provide reachability between the hosts supporting these processes, over which
a reliable connection service is established between pairs of processes. The
connection-oriented behavior of TCP involves prior exchanges between the
source and destination, through which a connection is established before
transport layer segments are communicated.

As a means of allowing many processes within a single host to use TCP
communication facilities simultaneously, TCP provides a set of logical ports
within each host. The port value together with the network layer address is
referred to as a socket, and a pair of sockets provides a unique identifier
for each connection, in particular where a socket is used simultaneously in
multiple connections. That is to say, a process may need to distinguish among
several communication streams between itself and another process (or
processes), for which each process may have a number of ports through
which it communicates with the port or ports of other processes.
Certain processes may own ports, and these processes may initiate
connections on the ports that they own. These ports are understood as IANA
assigned system ports, or well known ports, and exist in the port value range
of 0-1023. A range of IANA assigned user or registered ports also exists in
the range of 1024-49151, with dynamic ports, also known as private or
ephemeral ports, in the range of 49152-65535, which are not restricted to
any specific application. Hosts will generally be assigned a user port value
for which a socket is generated to a given application.
Common examples of TCP based applications for which well known port
numbers have been assigned include FTP, HTTP, TELNET, and SMTP, which
often will work alongside other well known mail protocols such as POP3 (port
110) and IMAP4 (port 143).
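The three port ranges can be summarized in a short Python helper. The range boundaries follow the IANA values given above; the function and label names are illustrative assumptions of this sketch:

```python
def port_range(port: int) -> str:
    """Classify a TCP/UDP port number into the IANA ranges
    described above."""
    if not 0 <= port <= 65535:
        raise ValueError("port out of range")
    if port <= 1023:
        return "well-known/system"
    if port <= 49151:
        return "registered/user"
    return "dynamic/private"
```

For example, HTTP's port 80 and POP3's port 110 fall in the well-known range, while a client's ephemeral source port typically comes from the upper ranges.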

The TCP header allows TCP based applications to establish connection-oriented
data streams that are delivered reliably, and to which flow control is
applied. A source port number is generated where a host intends to establish
a connection with a TCP based application, for which the destination port
will relate to a well known/registered port with which a well known/registered
application is associated.
Code bits represent functions in TCP, and include the urgent bit (URG), used
together with the urgent pointer field for user directed urgent data
notifications; acknowledgement of received octets in association with the
acknowledgement field (ACK); the push function for data forwarding (PSH);
connection reset operations (RST); synchronization of sequence numbers (SYN);
and indication that no more data is to be received from the sender (FIN).
Additional code bits were introduced in the form of the ECN-Echo (ECE) and
Congestion Window Reduced (CWR) flags, as a means of supporting congestion
notification for delay sensitive TCP applications.
The explicit congestion notification (ECN) nonce sum (NS) was introduced as
a follow-up alteration to eliminate the potential abuse of ECN where devices
along the transmission path may remove ECN congestion marks. The Options
field contains parameters that may be included as part of the TCP header,
often used during the initial connection establishment, as in the case of the
maximum segment size (MSS) value, that may be used to define the size of
the segment that the receiver should use. The TCP header size must be a
multiple of 32 bits, and where this is not the case, padding with 0 values
will be performed.

When two processes wish to communicate, each TCP must first establish a
connection (initialize the synchronization of communication on each side).
When communication is complete, the connection is terminated or closed to
free the resources for other uses. Since connections must be established
between unreliable hosts and over the unreliable Internet domain, a
handshake mechanism with clock-based sequence numbers is used to avoid
erroneous initialization of connections.
A connection progresses through a series of states during establishment. The
LISTEN state represents a TCP waiting for a connection request from any
remote TCP and port. SYN-SENT occurs after sending a connection request
and before a matching request is received. The SYN-RECEIVED state occurs
while waiting for a confirming connection request acknowledgment, after
having both received and sent a connection request. The ESTABLISHED
state occurs following the handshake at which time an open connection is
created, and data received can be delivered to the user.
The TCP three-way handshake mechanism begins with an initial sequence
number being generated by the initiating TCP as part of the synchronization
(SYN) process. The initial TCP segment is then set with the SYN code bit, and
transmitted to the intended IP destination TCP to achieve a SYN-SENT state.
As part of the acknowledgement process, the peering TCP will generate an
initial sequence number of its own to synchronize the TCP flow in the other
direction. This peering TCP will transmit this sequence number, as well as an
acknowledgement number that equals the received sequence number
incremented by one, together with set SYN and ACK code bits in the TCP
header to achieve a SYN-RECEIVED state.

The final step of the connection handshake involves the initial TCP
acknowledging the sequence number of the peering TCP by setting the
acknowledgement number to equal the received sequence number plus one,
together with the ACK bit in the TCP header, allowing an ESTABLISHED state
to be reached.
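The sequence and acknowledgement arithmetic of the handshake can be sketched as follows. This Python function simply models the three segments as tuples and is not a protocol implementation; the names are assumptions of this sketch:

```python
def three_way_handshake(client_isn: int, server_isn: int):
    """Model the TCP three-way handshake described above, returning the
    three segments as (flags, seq, ack) tuples. Each side acknowledges
    the peer's initial sequence number plus one."""
    syn = ("SYN", client_isn, None)                    # client enters SYN-SENT
    syn_ack = ("SYN+ACK", server_isn, client_isn + 1)  # server enters SYN-RECEIVED
    ack = ("ACK", client_isn + 1, server_isn + 1)      # both sides ESTABLISHED
    return [syn, syn_ack, ack]
```

With a client ISN of 100 and server ISN of 300, the server's SYN+ACK acknowledges 101 and the final ACK acknowledges 301, after which the connection is ESTABLISHED in both directions.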

Since the TCP transmission is sent as a data stream, every octet can be
sequenced, and therefore each octet can be acknowledged. The
acknowledgement number is used to achieve this by responding to the sender
as confirmation of receipt of data, thus providing data transport reliability. The
acknowledgement process however is cumulative, meaning that a string of
octets can be acknowledged by a single acknowledgement by reporting to the
source the sequence number that immediately follows the sequence number
that was successfully received.
In the example a number of bytes (octets) are transmitted together before
TCP acknowledgement is given. Should an octet fail to be transmitted to the
destination, the sequence of octets transmitted will only be acknowledged to
the point at which the loss occurred. The resulting acknowledgement will
reflect the octet that was not received in order to reinitiate transmission from
the point in the data stream at which the octet was lost.
The ability to acknowledge multiple octets together enables TCP to operate
much more efficiently, however a balance is necessary to ensure that the
number of octets sent before an acknowledgement is required is not too
extreme, for if an octet fails to be received, the entire stream of octets
from the point of the loss must be retransmitted.
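The cumulative acknowledgement rule can be expressed as a small function: given the octets received so far, the ACK number is the first sequence number not yet received. A Python sketch, with an illustrative function name:

```python
def cumulative_ack(start_seq: int, received: set) -> int:
    """Return the cumulative acknowledgement number: the first sequence
    number not yet received, so a single ACK confirms every octet up to
    the point of loss."""
    seq = start_seq
    while seq in received:
        seq += 1
    return seq
```

If octets 1-3 and 5-6 arrive but octet 4 is lost, the receiver keeps acknowledging 4, signalling the sender to retransmit from that point in the stream.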

The TCP window field provides a means of flow control that governs the
amount of data sent by the sender. This is achieved by returning a "window"
with every TCP segment for which the ACK field is set, indicating a range of
acceptable sequence numbers beyond the last segment successfully
received. The window indicates the permitted number of octets that the
sender may transmit before receiving further permission.
In the example, TCP transmission from host A to server A contains the current
window size for host A. The window size for server A is determined as part of
the handshake, which based on the transmission can be assumed as 2048.
Once data equivalent to the window size has been received, an
acknowledgement will be returned, equal to the number of bytes received plus
one. Following this, host A will proceed to transmit the next batch of data.
A TCP window size of 0 effectively denies processing of incoming segments,
with the exception of segments in which the ACK, RST or URG code bits are
set. Where a window size of 0 exists, the sender must still periodically
probe the window size status of the receiving TCP to ensure any change in
the window size is effectively reported; the period for retransmission is
generally two minutes. When the sender sends these periodic segments, the
receiving TCP must still acknowledge them, announcing its current window
size of 0.
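The window-governed exchange in the example can be modeled with a simple loop. This sketch uses illustrative names, applies the "bytes received plus one" ACK rule described above, and ignores loss and retransmission:

```python
def transmit(data_len: int, window: int):
    """Sketch of window-governed transmission: the sender may have at
    most `window` unacknowledged octets outstanding; each ACK here
    confirms a full batch and permits the next one."""
    sent, rounds = 0, []
    while sent < data_len:
        batch = min(window, data_len - sent)  # limited by the receiver's window
        sent += batch
        rounds.append((batch, sent + 1))      # (octets sent, ACK number returned)
    return rounds
```

Sending 5000 octets against a 2048-octet window therefore takes three batches, with the final ACK number of 5001 confirming all data.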

As part of the TCP connection termination process, a number of states are
defined through which TCP will transition. These include FIN-WAIT-1, which
represents waiting for a connection termination (FIN) request from the remote
TCP, or an acknowledgement of a connection termination request that was
previously sent. FIN-WAIT-2 represents waiting for a connection termination
request from the remote TCP, following which a transition to a TIME-WAIT
state will generally occur. A CLOSE-WAIT state indicates waiting for a
locally defined connection termination request, typically while a server's
application is in the process of closing.
The LAST-ACK state represents waiting for an acknowledgment of the
connection termination request previously sent to the remote TCP (which
includes an acknowledgment of its connection termination request). Finally, a
TIME-WAIT state occurs and waits for enough time to pass to ensure that the
remote TCP received the acknowledgment of its connection termination
request. This period is managed by the Max Segment Lifetime (MSL) timer
that defines a waiting period of 2 minutes. Following a wait period equal to two
times the MSL, the TCP connection is considered closed/terminated.

The User Datagram Protocol, or UDP, represents an alternative to TCP,
applied where TCP is found to act as an inefficient transport mechanism,
primarily in the case of highly delay sensitive traffic. Where the TCP
Protocol Data Unit (PDU) is considered a segment, the UDP PDU is recognized
as a datagram, which can be understood to be a self-contained, independent
entity of data carrying sufficient information to be routed from the source
to the destination end system without reliance on earlier exchanges between
the source and destination end systems and the transporting network, as
defined in RFC 1594. In effect this means that UDP traffic does not require
the establishment of a connection prior to the sending of data.
The simplified structure and operation of UDP makes it ideal for application
programs to send messages to other programs with a minimum of protocol
mechanism, avoiding, for example, the acknowledgements and windowing found
in TCP segments. In balance however, UDP guarantees neither delivery of data
transmission nor protection from datagram duplication.

The UDP header provides a minimalistic approach to the transport layer,
implementing only a basic construct to identify the source and destination
ports to which the UDP based traffic relates, along with a length field and
a checksum value that supports an integrity check of the datagram. The
minimal overhead, compared with TCP's 20 byte header, allows more data to be
carried per packet, favoring real time traffic such as voice and video
communications, and UDP avoids the mechanisms that introduce delay, such as
acknowledgements. The absence of such fields, however, means that datagram
delivery is not guaranteed.
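The 8 byte header layout can be illustrated in Python. The checksum is left at zero in this sketch (conventionally meaning "not computed"), since a full UDP checksum also covers an IP pseudo-header; the function name is an assumption:

```python
import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Build the minimal 8-byte UDP header described above: source port,
    destination port, length (header plus payload), and checksum."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, 0)
```

Compare the fixed 8 bytes here with TCP's 20 byte minimum header: the missing fields (sequence, acknowledgement, window, code bits) are precisely the mechanisms UDP dispenses with.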

Since UDP datagram transmission is not sent as a data stream, transmission
of data is susceptible to datagram duplication. In addition, the lack of
sequence numbers within UDP means that data delivered over various paths is
likely to be received at the destination in an incorrect, non-sequenced
order.
Where stream data is transported over UDP, such as in the case of voice and
video applications, additional protocol mechanisms may be applied to enhance
the capability of UDP, as in the case of the Real-time Transport Protocol
(RTP), which compensates for the limitations of UDP by providing a
sequencing mechanism using timestamps to maintain the order of such audio/
video data streams, effectively supporting partial connection oriented
behavior over a connectionless transport protocol.

The general UDP forwarding behavior is highly beneficial to delay sensitive
traffic such as voice and video. It should be understood that where a
connection-oriented transport protocol is concerned, lost data would require
retransmission following a delay period, during which time an
acknowledgement is expected by the sender. Should the acknowledgement not be
received, the data will be retransmitted.
For delay sensitive data streams, this would result in incomprehensible
audio and video transmissions, due to both the delay and the duplication
that result from retransmission from the point at which acknowledgements
were generated. In such cases, minimal loss of the data stream is preferable
to retransmission, and as such UDP is selected as the transport mechanism in
support of delay sensitive traffic.

1. The acknowledgement field in the TCP header confirms receipt of the
   segment received by the TCP process at the destination. The sequence
   number in the TCP header of the received IP datagram is taken and
   incremented by 1. This value becomes the acknowledgement number in the
   returned TCP header and is used to confirm receipt of all data, before
   being forwarded, along with the ACK code bit set to 1, to the original
   sender.
2. The three-way handshake involves SYN and ACK code bits in order to
establish and confirm the connection between the two end systems,
between which transmission of datagrams is to occur.


~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
The TCP/IP protocol suite operates as a collection of rules in order to
support the end-to-end forwarding of data, together with lower layer
protocols such as those defined in the IEEE 802 standards. The
knowledge of the lifecycle of data forwarding enables a deeper
understanding of the IP network behavior for effective analysis of network
operation and troubleshooting of networking faults. The entire
encapsulation and decapsulation process therefore represents a
fundamental part of all TCP/IP knowledge.


~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:

Explain the process steps for data encapsulation and decapsulation.

Troubleshoot basic data forwarding issues.


~~ HUAWEI

Data forwarding can be collectively defined as either local or remote for which
the forwarding process relies on the application of the protocol stack in order
to achieve end-to-end transmission. End systems may be part of the same
network, or located in different networks, however the general forwarding
principle to enable transmission between hosts follows a clear set of protocols
that have been introduced as part of this unit. How these protocols work
together shall be reinforced here, as well as the relationship between the
upper layer TCP/IP protocols and the lower, link layer Ethernet protocol
standards.

An end system that intends to forward data to a given destination must initially
determine whether or not it is possible to reach the intended destination. In
order to achieve this, the end system must go through a process of path
discovery. An end system should be understood to be capable of supporting
operation at all layers since its primary function is as a host to applications. In
relation to this, it must also be capable of supporting lower layer operations
such as routing and link layer forwarding (switching) in order to be capable of
upper/application layer data forwarding. The end system therefore contains a
table that represents network layer reachability to the network for which the
upper layer data is destined.
End systems will commonly be aware of the network in which they reside, but
may be without a forwarding path in cases where remote network discovery
has not been achieved. In the example given, host A is in possession of a
path to the destination network through the "any network" (default) address
that was briefly introduced as part of the IP Addressing section. The
forwarding table identifies that traffic should be forwarded to the gateway
as a next hop via the interface associated with the logical address of
10.1.1.1.
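The lookup described above can be sketched as a longest-prefix match against a small forwarding table. This is an illustrative Python sketch, not device behavior; the table entries mirror the example's addressing.

```python
import ipaddress

# forwarding table: destination prefix -> next hop (illustrative entries)
forwarding_table = {
    ipaddress.ip_network("10.1.1.0/24"): "directly connected",
    ipaddress.ip_network("0.0.0.0/0"): "10.1.1.1",  # "any network" default route
}

def lookup(dst):
    """Return the next hop for the longest matching prefix, if any."""
    dst = ipaddress.ip_address(dst)
    matches = [net for net in forwarding_table if dst in net]
    if not matches:
        return None                      # destination unreachable
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(lookup("10.1.1.20"))    # on-link destination
print(lookup("192.168.5.9"))  # remote network: forward via the gateway
```

A remote destination falls through to the 0.0.0.0/0 entry, which is why the gateway at 10.1.1.1 is selected as the next hop.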

Following discovery of a feasible route towards the intended destination
network, a physical next hop must also be discovered to facilitate frame
forwarding. The TCP/IP protocol suite is responsible for determining this
before packet encapsulation can proceed. The initial step involves
determining whether a physical path exists to the next hop identified as part of
the path discovery process.
This requires that the ARP cache table be consulted to identify whether an
association between the intended next hop and the physical path is known.
From the example it can be seen that an entry to the next hop gateway
address is present in the ARP cache table. Where an entry cannot be found,
the Address Resolution Protocol (ARP) must be initiated to perform the
discovery and resolve the physical path.
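The cache consultation step can be reduced to a simple lookup that either returns a known binding or signals that ARP discovery is required. The table contents below are invented for illustration.

```python
# ARP cache: next hop IP -> MAC address (illustrative entry)
arp_cache = {"10.1.1.1": "00-1e-10-a1-b2-c3"}

def resolve(next_hop_ip):
    """Return the cached MAC, or signal that ARP discovery is needed."""
    mac = arp_cache.get(next_hop_ip)
    if mac is None:
        return "ARP request required"   # broadcast a who-has query
    return mac
```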

When both the logical and physical path forwarding discovery is complete, it is
possible for encapsulation of data to be performed for successful transmission
over IP/Ethernet based networks. Upper layer processes in terms of
encryption and compression may be performed following which transport layer
encapsulation will occur, identifying the source and destination ports via which
upper layer data should be forwarded.
In the case of TCP, sequence and acknowledgement fields will be populated,
and code bits set as necessary, with the ACK bit commonly applied. The window
field will be populated with the currently supported window size, by which the
host notifies the peer of the maximum data buffer that can be supported before
data must be acknowledged.
Values representing the TCP fields are included as part of the checksum,
which is calculated using a one's complement calculation process, to ensure
TCP segment integrity is maintained once the TCP header is received and
processed at the ultimate destination. In the case of basic TCP code
operations, upper layer data may not always be carried in the segment, as in
the case of connection synchronization and acknowledgements of received
data.
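The one's complement checksum mentioned above can be sketched in a few lines, summing 16-bit words with end-around carry and complementing the result (the RFC 1071 algorithm used by TCP, UDP and the IP header).

```python
def ones_complement_checksum(data: bytes) -> int:
    """RFC 1071 style checksum used by TCP, UDP and the IP header."""
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF

# Worked example from RFC 1071:
print(hex(ones_complement_checksum(b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7")))  # -> 0x220d
```

The receiver repeats the same sum over the received header; a mismatch against the transmitted checksum indicates corruption.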

Following transport layer encapsulation, it is generally required that
instructions be provided, detailing how transmission over one or more
networks is to be achieved. This involves listing the IP source as well as the
ultimate destination for which the packet is intended. IP packets are generally
limited to a size of 1500 bytes by Ethernet, inclusive of the network and
transport layer headers as well as any upper layer data. The initial packet
size is determined by Ethernet as the maximum transmission unit (MTU), to
which packets will conform; therefore fragmentation will not occur at the
source.
In the case that the MTU changes along the forwarding path, only then will
fragmentation be performed. The time to live (TTL) field will be populated
with a set value depending on the system; in ARG3 series routers, it is set
with an initial value of 255. The protocol field is populated based on the protocol
encapsulated prior to IP. In this case the protocol in question is TCP for which
the IP header will populate the protocol field with a value of 0x06 as
instruction for next header processing. Source and destination IP addressing
will reflect the originating source and the ultimate destination.
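The field values just described can be assembled into a 20-byte IP header with Python's struct module. This is a minimal sketch; the addresses are invented, the checksum routine is the one's complement sum described earlier, and no options are carried.

```python
import struct

def checksum(data: bytes) -> int:
    """One's complement sum over 16-bit words, then complemented."""
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Fields: version/IHL, TOS, total length, identification, flags/fragment
# offset, TTL=255 (the ARG3 initial value), protocol=0x06 (TCP), checksum
# placeholder, then source and destination addresses (invented here).
header = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 40, 1, 0, 255, 0x06, 0,
                     bytes([10, 1, 1, 2]), bytes([10, 2, 2, 2]))
header = header[:10] + struct.pack("!H", checksum(header)) + header[12:]
```

A receiver recomputing the checksum over the completed header obtains zero, which is how header integrity is confirmed at each hop.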

Link layer encapsulation relies on IEEE 802.3 Ethernet standards for physical
transmission of upper layer data over Ethernet networks. Encapsulation at the
lower layers is performed by initially determining the frame type that is used.
Where the upper layer protocol is represented by a type value greater than
1536 (0x0600) as is the case with IP (0x0800), the Ethernet II frame type is
adopted. The type field of the Ethernet II frame header is populated with the
type value of 0x0800 to reflect that the next protocol to be processed following
frame processing will be IP. The destination MAC address determines the next
physical hop, which in this case represents the network gateway.
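The framing decision above hinges on the 1536 (0x0600) threshold, which can be expressed as a small helper. The protocol map lists only two well-known type values for illustration.

```python
# Two well-known EtherType values; the full registry is much larger.
ETHERTYPES = {0x0800: "IPv4", 0x0806: "ARP"}

def frame_type(type_or_length: int) -> str:
    """Values of 1536 (0x0600) and above denote an Ethernet II type field;
    smaller values are an IEEE 802.3 length field."""
    if type_or_length >= 0x0600:
        return "Ethernet II ({})".format(ETHERTYPES.get(type_or_length, "unknown"))
    return "IEEE 802.3 length"

print(frame_type(0x0800))  # IP traffic uses Ethernet II framing
```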

As part of the link layer operation, it is imperative to ensure that the
transmission medium is clear of signals in a shared collision domain. The host
will first listen for any traffic on the network as part of CSMA/CD and should
the line remain clear, will prepare to transmit the data. It is necessary for the
receiving physical interface to be made aware of the incoming frame, so as to
avoid loss of initial bit values that would render initial frames as incomplete.
Frames are therefore preceded by a 64 bit value indicating to the link layer
destination the frame's imminent arrival.
The initial 56 bits, an alternating 1,0 pattern, are called the preamble,
and are followed immediately by an octet known as the Start of Frame
Delimiter (SFD). The final two bits of the SFD deviate from the alternating
pattern to a 1,1 bit combination, notifying that the bits that follow
represent the first bits of the destination MAC address and therefore the
start of the frame header.

As a frame is received by the link layer destination, it must go through a
number of checks to determine its integrity as well as validity. If the frame was
transmitted over a shared Ethernet network, other end stations may also
receive an instance of the frame transmitted, however since the frame
destination MAC address is different from the MAC address of the end station,
the frame will be discarded.
Frames received at the intended destination will perform error checking by
calculating a cyclic redundancy check (CRC) value based on the current frame
fields and comparing this against the value in the Frame Check Sequence
(FCS) field. If the values do not match, the frame will be discarded. Receiving intermediate
and end systems that receive valid frames will need to determine whether the
frame is intended for their physical interface by comparing the destination
MAC address with the MAC address of the interface (or device in some
cases).
If there is a match, the frame is processed and the type field is used to
determine the next header to be processed. Once the next header is
determined, the frame header and trailer are discarded.

The packet is received by the network layer, and in particular IP, at which
point the IP header is processed. A checksum value exists at each layer of the
protocol stack to maintain the integrity at all layers for all protocols. The
destination IP is used to determine whether the packet has reached its
ultimate destination. The gateway however determines that this is not the
case since the destination IP and the IP belonging to the gateway do not
match.
The gateway must therefore determine the course of action to take with
regards to routing the packet to an alternate interface, and forward the packet
towards the network for which it is intended. The gateway must firstly however
ensure that the TTL value has not reached 0, and that the size of the packet
does not exceed the maximum transmission unit value for the gateway. In the
event that the packet is larger than the MTU value of the gateway,
fragmentation will generally commence.
Once a packet's destination has been located in the forwarding table of the
gateway, the packet will be encapsulated in a new frame header consisting of
new source and destination MAC addresses for the link layer segment, over
which the resulting frame is to be forwarded, before being once again
transmitted to the next physical hop. Where the next physical hop is not
known, ARP will again be used to resolve the MAC address.
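The gateway's decision sequence (TTL check, MTU check, then forward) can be sketched as follows. The return strings are invented labels; real routers also honor the DF bit, which is omitted here.

```python
def forward(packet_size, ttl, egress_mtu):
    """Sketch of a gateway's handling of a received packet."""
    ttl -= 1                              # decrement at each hop
    if ttl <= 0:
        return "discard (TTL expired)"
    if packet_size > egress_mtu:
        return "fragment, then forward"   # unless DF forbids it
    return "forward to next hop"
```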

Frames received at the ultimate destination will initially determine whether the
frame has arrived at the intended location. The example shows two servers
on a shared Ethernet network over which both receive a copy of the frame.
The frame is ultimately discarded by server B since the destination MAC value
and the interface MAC address of server B do not match. Server A however
successfully receives the frame and finds that the destination MAC matches
its own interface address; the integrity of the frame based on the FCS can
also be confirmed as correct. The frame will use the type field to identify 0x0800 as the next header,
following which the frame header and trailer are discarded and the packet is
received by IP.

Upon reaching the ultimate destination, the IP packet header must facilitate a
number of processes. The first includes validating the integrity of the packet
header through the checksum field, again applying a ones compliment value
comparison based on a sum of the IP header fields. Where correct, the IP
header will be used to determine whether the destination IP matches the IP
address of the current end station, which in this case is true.
If any fragmentation occurred during transmission between the source and the
destination, the packet must be reassembled at this point. The identification
field collects the fragments belonging to a single datagram together, the
offset field determines their order, and the flags field specifies when
reassembly may commence, since all fragments must first be received and a
fragment with the MF (More Fragments) flag set to 0 is recognized as the
last fragment.
A reassembly timer then runs, within which time the reassembly must be
completed; should reassembly fail in this period, all fragments are
discarded. The protocol field is used to identify the next header for
processing, after which the packet header is discarded. It should be noted
that the next header may not always be a transport layer header; a clear
example of this is ICMP, which is understood to also be a network layer
protocol with a protocol field value of 0x01.
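The ordering and completion rules above can be sketched as a reassembly routine. Real IP offsets are expressed in 8-byte units; byte offsets are used here for clarity, and the sketch assumes no loss, overlap or duplication.

```python
def reassemble(fragments):
    """fragments: (byte_offset, more_fragments_flag, payload) tuples.
    Returns the reassembled payload once all fragments are present."""
    fragments = sorted(fragments, key=lambda f: f[0])
    offset, mf, payload = fragments[-1]
    if mf != 0:                 # MF=0 marks the final fragment
        return None             # last fragment not yet received
    data = b"".join(p for _, _, p in fragments)
    # complete only if the joined data reaches the end of the last fragment
    return data if len(data) == offset + len(payload) else None
```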

In the case where a packet header is discarded, the resulting segment or
datagram is passed to the transport layer for application-to-application based
processing. The header information is received in this case by TCP (0x06).
In the example it can be understood that a TCP connection has already been
established and the segment represents an acknowledgement for the
transmission of HTTP traffic from the HTTP server to the acknowledging host.
The host is represented by the port 1027 as a means to distinguish between
multiple HTTP connections that may exist between the same source host and
destination server. In receiving this acknowledgement, the HTTP server will
continue to forward to the host within the boundaries of the window size of the
host.
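The demultiplexing behavior described here amounts to keying each connection on the full address pair, so that sessions sharing destination port 80 are separated by the client's source port (1027 in the text's example). The addresses and payloads below are illustrative.

```python
# Per-connection delivery queues, keyed on (source IP, source port, dest port).
sessions = {}

def deliver(src_ip, src_port, dst_port, payload):
    """File each arriving segment under its connection key."""
    key = (src_ip, src_port, dst_port)
    sessions.setdefault(key, []).append(payload)
    return key

deliver("10.1.1.1", 1027, 80, b"GET /index.html")
deliver("10.1.1.1", 1028, 80, b"GET /logo.png")   # a second, parallel session
```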

1. Prior to the encapsulation and forwarding of data, a source must have
knowledge of the IP destination or an equivalent forwarding address such
as a default address to which data can be forwarded. Additionally it is
necessary that the forwarding address be associated with a physical next
hop to which the data can be forwarded within the local network.
2. Any frame that is received by a gateway or end system (host) to which it
is not intended, is subsequently dropped, following inspection of the
destination MAC address in the frame header.
3. The delivery of data relies on the destination port number in the TCP and
UDP headers to identify the application to which the data is intended.
Following analysis of this value by the TCP or UDP protocol, the data is
forwarded.
4. The source port of the TCP header for the HTTP traffic distinguishes
between the different application sessions that are active. Return HTTP
traffic from the HTTP server is able to identify each individual web
browser session based on this source port number. For example, the
source port of two separate requests for HTTP traffic originating from IP
source 10.1.1.1 may originate from source ports 1028 and 1035, however
the destination port in both cases remains as port 80, the HTTP server.


---


~ Foreword
As more and more end stations in the form of host devices, networkable
printers, and other similar products are introduced into the local area network,
an increase in the density of devices results in a limitation in terms of port
interfaces, along with problems of collisions within any shared network
topology. Switching has evolved as the means for supporting this growth.
VRP is used within Huawei products as a means to configure and
operate such managed devices for which familiarity and hands-on skills
must be developed.



Objectives
Upon completion of this section, trainees will be able to:
Explain the role switches play in the Ethernet networks.
Describe the difference between collision and broadcast domains.
Explain the general operation of VRP in Huawei products.



The Ethernet network until now has been understood to be a collection of
devices communicating over shared media such as 10Base2, through which
hosts are able to communicate with neighboring hosts or end systems. It has
been determined that the Ethernet network is a contention-based network,
meaning that hosts must compete for media access, which becomes increasingly
limited as more and more devices are connected over the shared media,
causing additional limitations in scalability and an increasing potential
for collisions.
As a result, the need for collision detection in the form of CSMA/CD is ever
present in such shared Ethernet networks. Following the adoption of switched
media such as that of 100BaseT, data transmission and reception became
isolated within channels (wire pairs), enabling the potential for collisions to
occur to be eliminated. This medium as a form of non-shared Ethernet
provides only a means for point-to-point communication, however used
together with other devices such as hubs, a shared Ethernet network is once
again possible, along with the potential for collisions.
The switch was introduced as part of the evolution of the bridge, and is
capable of breaking down the shared collision domain into multiple collision
domains. The collision domains operate as a collection of point-to-point links
for which the threat of collisions is removed and link-layer traffic is isolated, to
allow higher transmission rates that optimize traffic flow within the Ethernet
network.

A broadcast domain may comprise a single collision domain or multiple
collision domains, and any broadcast transmission is contained within the
boundary of a broadcast domain. The edge of a broadcast domain's boundary
is typically defined by a gateway that acts as the medium via which other
networks are reachable, and will restrict the forwarding of any broadcast
traffic beyond the interface on which the broadcast is received.
Routers are synonymous with the term gateway for which the two are often
used interchangeably. A single IP network can generally be understood to
make up a broadcast domain, which refers to the scope of a link-layer
segment. Routers are generally responsible for routing Internet datagrams (IP
packets) to a given destination based on the knowledge of a forwarding
address for the destination network, found within an internally managed
forwarding table.

The Versatile Routing Platform (VRP) represents the foundation of many
Huawei products, including routers and switches. Its design has been through
many evolutions to enable continuous improvement of data management and
forwarding. The architectural design has resulted in ever enhanced modularity
that allows for greater overall performance. The configuration, management
and monitoring of devices using VRP is based on a standardized and
hierarchical command line system for which a familiarity should be developed
to support navigation and operation of Huawei products managed using VRP
software.

A familiarity with the versions of VRP network operating system (NOS) aids in
ensuring that the version currently being used is up to date and supports
certain features that may be required in an enterprise network. The general
trend for most Huawei devices is to operate using VRP version 5.x currently,
where x may vary depending on the product and VRP release. VRP version 8
is a recent revision of VRP built with a highly refined architecture for the next
generation of technologies and constructed around the need for greater
efficiency, but is not present in all Huawei products.

AR series enterprise routers (AR) include the AR150, AR200, AR1200,
AR2200, and AR3200. They are next-generation Huawei products, and
provide routing, switching, wireless, voice, and security functionality. The AR
series are positioned between the enterprise network and a public network,
functioning as an ingress and egress gateway for data transmitted between
the two networks. Deployment of various network services over the AR series
routers reduces operation & maintenance (O&M) costs as well as costs
associated with establishing an enterprise network. AR series routers of
different specifications can be used as gateways based on the user capacity
of an enterprise.
The Sx7 Series Ethernet Switch provides data transport functionality, and has
been developed by Huawei to meet the requirements for reliable access and
high-quality transmission of multiple services on the enterprise network. This
series of switch is positioned for access or aggregation layer operation in the
enterprise network, and provides a large switching capacity, high port density,
and cost-effective packet forwarding capabilities.
Management of the ARG3 series routers and Sx7 series of switch can be
achieved through establishing a connection to the console interface, and in
the case of the AR2200, a connection is also possible to be established via a
Mini USB interface.

A console cable is used to debug or maintain a locally established device
such as a router or switch, and will interface with the console port of such
devices. The console interface of the S5700 series switch and the AR2200
router is an RJ-45 type connection, while the interface to which a host
connection is made, represents an RS-232 form of serial connector. Often
such serial connectors are no longer present on newer devices that can be
used for establishing connectivity, such as laptop computers, and therefore an
RS-232 to USB conversion is performed. For most desktop devices however,
an RS-232 based console connection can be established to a COM port on
the host device.

Console establishment is set up through one of a number of available terminal
emulation programs. Windows users often apply the HyperTerminal application
as shown in the example to interface with the VRP operating system.
Following specification of the COM port that is to be used to establish the
connection, port settings must be defined.
The example defines the port settings that should be applied, which the
Restore Defaults button will automatically restore should any change have
been made to these settings. Once the OK button is pressed, a session will
be established with the VRP of the device. If the device is operating using
factory default settings, the user will be prompted for a password, which will
be assigned as the default login password for future connection attempts.

The Huawei AR2200 router additionally supports the means for terminal
connectivity via a USB connection. A type B mini USB interface exists on the
front panel of the AR2200 series router through which hosts are able to
establish a USB based connection as a serial alternative to that of RS-232.

A slight variation in the setup process requires that the mini USB driver
first be installed to allow USB functionality. The mini USB driver can be
obtained by visiting http://support.huawei.com/enterprise, and under the path
Support > Software > Enterprise Networking > Router > Access Router > AR
> AR2200, choose the relevant VRP version & patch path option, and
download the file labeled AR&SRG_MiniUSB_driver.zip. It should be noted
that the mini USB driver supports only Windows XP, Windows Vista, and
Windows 7 operating systems.
When upgrading the device software or installing a patch, the MD5 hash value
can be checked to confirm software validity. In order to prevent the software
from being modified or replaced, you are advised to perform this operation.
Installing requires the user to firstly double-click the driver installation file on
the PC and click Next. Secondly select I accept the terms in the license
agreement and click Next. Click the Change button to change the driver
directory if required, and click Next. Click Install and decompress the driver.
When the system finishes decompressing the driver, click Finish.
Users should then find the DISK1 folder in the specified driver directory, and
double-click the file setup.exe. Following the opening of a second installation
window click Next. Users should again select I accept the terms in the license
agreement and click Next to install the driver. Once complete, click Finish to
finish installing the driver. Right-click My Computer, and choose Manage >
Device Manager > Ports(COM&LPT). The system should display the
TUSB3410 Device indicating the driver that has been installed.

As with the RS-232 console connection, the Mini USB serial connection
requires establishment to terminal emulation software to enable interaction
with the VRP command line.
Use the terminal emulation software to log in to the device through the mini
USB port, (for which the Windows HyperTerminal is used as an example). On
the host PC, start the HyperTerminal application, for which the location may
vary for each version of Windows, and create a connection by providing a
suitable terminal connection name and click OK. Select the relevant
connection (COM) port and then set the communication parameters for the
serial port of the PC. These parameters should match the default values that
are set when pressing the Restore Defaults button.
After pressing Enter, the console information is displayed requesting a login
password. Enter a relevant password and confirmation password, and the
system will save the password.

1. Any broadcast that is generated by an end system within a local network
will be forwarded to all destinations within that network. Once a frame is broadcast to a router
or device acting as a gateway for the network, the frame will be analyzed
and should it be determined that the destination is for a locally defined
host other than the gateway, the frame will be dropped. This as such
defines the boundary of any broadcast domain.
2. VRP version 5 is supported by a large number of current Huawei
products, while high end products may often be supported by VRP
version 8.


---


~ Foreword
The implementation of Huawei devices in an enterprise network requires
a level of knowledge and capability in the navigation of the VRP
command line interface, and configuration of system settings. The
principal command line architecture is therefore introduced as part of this
section along with navigation, help functions, and common system
settings that are required to be understood for the successful
configuration of any VRP managed device.



Objectives
Upon completion of this section, trainees will be able to:
Navigate the VRP command line interface.

Configure basic VRP system settings.

Perform basic VRP interface configuration and management.



The startup/boot process is the initial phase of operation for any administrator
or engineer accessing Huawei based products operating with VRP. The boot
screen informs of the system startup operation procedures as well as the
version of the VRP image that is currently implemented on the device,
along with the storage location from where it is loaded. Following the initial
startup procedure, an option for auto-configuration of the initial system
settings prompts for a response, for which the administrator can choose
whether to follow the configuration steps, or manually configure the basic
system parameters. The auto-configuration process can be terminated by
selecting the yes option at the given prompt.

The hierarchical command structure of VRP defines a number of command
views that govern the commands for which users are able to perform
operations. The command line interface has multiple command views, of
which common views have been introduced in the example. Each command
is registered to run in one or more command views, and such commands can
run only after entering the appropriate command view. The initial command
view of VRP is the User View, which operates as an observation command
view for observing parameter statuses and general statistical information. For
application of changes to system parameters, users must enter the System
View. A number of sub command levels can also be found, in the form of the
interface and protocol views for example, where sub system level tasks can
be performed.
The command line view can be determined from the brackets that enclose the
device name, and the information contained within them. Angle brackets
(chevrons) identify that the user is currently in the User View, whereas
square brackets show that a transition to the System View has occurred.

The example demonstrates a selection of common system defined shortcut
keys that are widely used to simplify the navigation process within the VRP
command line interface. Additional commands are as follows:
CTRL+B moves the cursor back one character.
CTRL+D deletes the character where the cursor is located.
CTRL+E moves the cursor to the end of the current line.
CTRL+F moves the cursor forward one character.
CTRL+H deletes the character on the left side of the cursor.
CTRL+N displays the next command in the historical command buffer.
CTRL+P displays the previous command in the historical command buffer.
CTRL+W deletes the word on the left side of the cursor.
CTRL+X deletes all the characters on the left side of the cursor.
CTRL+Y deletes all the characters on the right side of the cursor.
ESC+B moves the cursor one word back.
ESC+D deletes the word on the right side of the cursor.
ESC+F moves the cursor forward one word.

Additional key functions can be used to perform similar operations: the
backspace operation has the same behavior as using CTRL+H to delete a
character to the left of the cursor. The left (←) and right (→) cursor keys
can be used to perform the same operation as the CTRL+B and CTRL+F shortcut
key functions. The down cursor key (↓) functions the same as CTRL+N, and the
up cursor key (↑) acts as an alternative to the CTRL+P operation.
Additionally, the command line functions support a means of auto completion
where a command word is unique. The example demonstrates how the
command word interface can be auto completed by partial completion of the
word to such a point that the command is unique, followed by the tab key
which will provide auto completion of the command word. Where the
command word is not unique, the tab function will cycle through the possible
completion options each time the tab key is pressed.
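The Tab behavior described above can be sketched as a prefix match: a unique prefix expands to the full command word, while an ambiguous one yields the candidate list to cycle through. The command words below are examples, not a complete VRP vocabulary.

```python
def complete(prefix, commands):
    """Sketch of VRP-style auto-completion over a list of command words."""
    matches = [c for c in commands if c.startswith(prefix)]
    if len(matches) == 1:
        return matches[0]       # unique prefix: expand to the full word
    return matches              # ambiguous: Tab cycles through these

print(complete("inter", ["interface", "ip", "info-center"]))  # unique
print(complete("i", ["interface", "ip", "info-center"]))      # ambiguous
```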

There are two forms of help feature that can be found within the VRP, these
come in the form of partial help and complete help functions. In entering a
character string followed directly by a question mark (?), VRP will implement
the partial help function to display all commands that begin with this character
string. An example of this is demonstrated. In the case of the full help feature,
a question mark (?) can be placed on the command line at any view to display
all possible command names, along with descriptions for all commands
pertaining to that view. Additionally the full help feature supports entry of a
command followed by a question mark (?) that is separated by a space. All
keywords associated with this command, as well as simple descriptions, are
then displayed.

For the majority of industries, it is likely that multiple devices will exist, each of
which needs to be managed. As such, one of the first important tasks of
device commissioning involves setting device names to uniquely identify each
device in the network. The system name parameter on AR2200 series router
is configured as Huawei by default, for the S5700 series of switch the default
system name is Quidway. The implementation of the system name takes
effect immediately after configuration is complete.

The system clock reflects the system timestamp, and is able to be configured
to comply with the rules of any given region. The system clock must be
correctly set to ensure synchronization with other devices and is calculated
using the formula: Coordinated Universal Time (UTC) + Time zone offset +
Daylight saving time offset. The clock datetime command is used to set the
system clock following the HH:MM:SS YYYY-MM-DD formula. It should be
noted however that if the time zone has not been configured or is set to 0, the
date and time set are considered to be UTC, therefore it is recommended that
the clock timezone be set firstly before configuring the system time and date.
The setting of the local timezone is achieved using the clock timezone
command and is implemented based on the time-zone-name { add | minus }
offset formula, where the add value indicates that the time of time-zone-name
is equal to the UTC time plus the time offset and minus indicates the time of
time-zone-name is equal to the UTC time minus the time offset.
Certain regions require that the daylight saving time be implemented to
maintain clock synchronization with any change in the clock timezone during
specific periods of the year. VRP is able to support daylight saving features for
both fixed dates and dates which are determined based on a set of
predetermined rules. For example, daylight saving in the United Kingdom
occurs on the last Sunday of March and the last Sunday of October, therefore
rules can be applied to ensure that changes occur based on such fluctuating
dates.
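The clock formula quoted above (Coordinated Universal Time + time zone offset + daylight saving offset) can be checked with a short calculation. The dates and offsets are illustrative.

```python
from datetime import datetime, timedelta

def local_time(utc, tz_offset_hours, dst_offset_hours=0):
    """Local time = UTC + time zone offset + daylight saving time offset."""
    return utc + timedelta(hours=tz_offset_hours + dst_offset_hours)

# UTC noon in a UTC+8 zone, with and without a one-hour DST offset
print(local_time(datetime(2014, 6, 1, 12, 0), 8))     # 20:00 local
print(local_time(datetime(2014, 6, 1, 12, 0), 8, 1))  # 21:00 local
```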

The header command provides a means for displaying notifications during the
connection to a device. The login header indicates a header that is displayed
when the terminal connection is activated, and the user is being authenticated
by the device. The shell header indicates a header that is displayed when the
session is set up, after the user logs in to the device. The header information
can be applied either as a text string or retrieved from a specified file. Where
a text string is used, a start and end character must be defined as a marker to
identify the information string; in the example, a chosen marker character
delimits the information string. The string represents a value in the range of 1 to 2000
characters, including spaces. The information based header command follows
the format of header { login | shell } information text where information
represents the information string, including start and end markers.
In the case of a file based header, the format header { login | shell } file filename is applied, where file-name represents the directory and file from which
the information string can be retrieved.
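As an illustrative sketch of both forms, using the double quote as the start and end marker and a hypothetical banner.txt file:

```
[Huawei] header login information "Warning: authorized access only!"
[Huawei] header shell information "Welcome. All activity may be logged."
[Huawei] header shell file flash:/banner.txt
```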

The system structures access to command functions hierarchically to protect
system security. The system administrator sets user access levels that grant
specific users access to specific command levels. The command level is a
value ranging from 0 to 3, whilst the user access level is a value ranging from
0 to 15. Level 0 defines the visit level, which grants access to commands that
run network diagnostic tools (such as ping and traceroute), commands such
as telnet client connections, and select display commands.
The Monitoring level is defined at a user level of 1, for which command levels
0 and 1 can be applied, allowing the majority of display commands to be
used, with the exception of display commands showing the current and saved
configuration. A user level of 2 represents the Configuration level for which
command levels up to 2 can be defined, enabling access to commands that
configure network services provided directly to users, including routing and
network layer commands. The final level is the Management level which
represents a user level of 3 through to 15 and a command level of up to 3,
enabling access to commands that control basic system operations and
provide support for services.
These commands include file system, FTP, TFTP, configuration file switching,
power supply control, backup board control, user management, level setting,
system internal parameter setting, and debugging commands for fault
diagnosis. The given example demonstrates how a command privilege can be
changed, where in this case, the save command found under the user view
requires a command level of 3 before the command can be used.
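The privilege change described in the example might be entered as follows; note that the command-privilege syntax can differ slightly between VRP releases:

```
[Huawei] command-privilege level 3 view user save
```

After this change, only users whose access level maps to command level 3 can execute the save command from the user view.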

Each user interface is represented by a user interface view or command line


view provided by the system. The command line view is used to configure and
manage all the physical and logical interfaces in asynchronous mode. Users
wishing to interface with a device will be required to specify certain
parameters in order to allow a user interface to become accessible. Two
common forms of user interface implemented are the console interface (CON)
and the virtual teletype terminal (VTY) interface.
The console port is an asynchronous serial port provided by the main control
board of the device, and uses a relative number of 0. VTY is a logical terminal
line that allows a connection to be set up when a device uses telnet services
to connect to a terminal for local or remote access to a device. A maximum of
15 users can use the VTY logical user interface to log in to the device, by
extending the default range of 0 to 4 with the user-interface maximum-vty 15
command. If the maximum number of login users is set to 0, no users are
allowed to log in to the router through telnet or SSH. The display
user-interface command can be used to display relevant information regarding
the user interface.
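A brief sketch of extending the VTY range and then verifying the user interfaces:

```
[Huawei] user-interface maximum-vty 15
[Huawei] user-interface vty 0 14
[Huawei-ui-vty0-14] quit
[Huawei] quit
<Huawei> display user-interface
```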

For both the console and VTY terminal interfaces, certain attributes can be
applied to modify the behavior as a means of extending features and
improving security. A connection that is allowed to remain idle for a given
period of time presents a security risk to the system, so the system waits for a
timeout period before automatically terminating the connection. This idle
timeout period on the user interface is set to 10 minutes by default.
Where it may be necessary to increase or reduce the number of lines
displayed on the screen of a terminal when using the display command for
example, the screen-length command can be applied. This by default is set to
24 however is capable of being increased to a maximum of 512 lines. A
screen-length of 0 however is not recommended since no output will be
displayed.
For each command that is used, a record is stored in the history command
buffer, which can be retrieved through navigation using the up arrow key (or
Ctrl+P) and the down arrow key (or Ctrl+N). The number of recorded
commands in the history command buffer can be increased using the
history-command max-size command to define up to 256 stored commands.
The number of commands stored by default is 10.
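These attributes might be applied to the console interface as follows; the values shown are illustrative:

```
[Huawei] user-interface console 0
[Huawei-ui-console0] idle-timeout 5 0
[Huawei-ui-console0] screen-length 40
[Huawei-ui-console0] history-command max-size 20
```

Here the idle timeout is reduced to 5 minutes, 40 lines are displayed per screen, and up to 20 commands are retained in the history buffer.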

Access to user terminal interfaces provides a clear point of entry for
unauthorized users to access a device and implement configuration changes.
As such, the capability to restrict access and limit what actions can be
performed is necessary as a means of device security. The configuration of
user privilege and authentication are two means by which terminal security
can be improved. User privilege allows a user level to be defined which
restricts the capability of the user to a specific command range. The user level
can be any value in the range of 0 to 15, where values represent the visit level
(0), monitoring level (1), configuration level (2), and management level (3 to
15) respectively.
Authentication restricts a user's capability to access a terminal interface by
requesting that the user be authenticated using a password, or a combination
of username and password, before access via the user interface is granted. In
the case of VTY connections, all users must be authenticated before access
is possible. For all user interfaces, three possible authentication modes exist,
in the form of AAA, password authentication and non-authentication. AAA
provides user authentication with high security for which a user name and
password must be entered for login. Password authentication requires that
only the login password is needed therefore a single password can be applied
to all users. The use of non-authentication removes any authentication
applied to a user interface. It should be noted that the console interface by
default uses the non-authentication mode.
It is generally recommended that each user granted telnet access be
identified through a username and password, to allow individual users to be
distinguished. Each user should also be granted a privilege level appropriate
to the commands that user requires.
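A sketch of securing the VTY interfaces with AAA authentication; the username, password and privilege level shown are illustrative only:

```
[Huawei] user-interface vty 0 4
[Huawei-ui-vty0-4] authentication-mode aaa
[Huawei-ui-vty0-4] quit
[Huawei] aaa
[Huawei-aaa] local-user admin password cipher Huawei@123
[Huawei-aaa] local-user admin service-type telnet
[Huawei-aaa] local-user admin privilege level 3
```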

In order to run IP services on an interface, an IP address must be configured
for the interface. Generally, an interface needs only the primary IP address,
but in special cases a secondary IP address may also be configured. For
example, when an interface of a router such as the AR2200 connects to a
physical network, and hosts on this physical network belong to two network
segments, a primary IP address and a secondary IP address can be
configured for the interface so that the AR2200 can communicate with all the
hosts on the physical network. The interface has only one primary IP
address. If a new primary IP
address is configured on an interface that already has a primary IP address,
the new IP address overrides the original one. The IP address can be
configured for an interface using the command ip address ip-address
{ mask | mask-length }, where mask represents the dotted-decimal subnet
mask (e.g. 255.255.255.0) and mask-length represents the equivalent mask
length in bits (e.g. 24); the two forms can be used interchangeably.
The loopback interface represents a logical interface that is applied to
represent a network or IP host address, and is often used as a form of
management interface in support of a number of protocols through which
communication is made to the IP address of the loopback interface, as
opposed to the IP address of the physical interface on which data is being
received.
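The primary/secondary addressing scenario and a loopback interface can be sketched as follows; all addresses are hypothetical:

```
[Huawei] interface GigabitEthernet 0/0/0
[Huawei-GigabitEthernet0/0/0] ip address 192.168.1.254 255.255.255.0
[Huawei-GigabitEthernet0/0/0] ip address 192.168.2.254 24 sub
[Huawei-GigabitEthernet0/0/0] quit
[Huawei] interface LoopBack 0
[Huawei-LoopBack0] ip address 10.0.0.1 32
```

Note how the mask and mask-length forms are interchangeable, and how the sub keyword marks the secondary address.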

1. The console interface is capable of supporting only a single user at any
given time; this is represented by the console 0 user interface view.
2. The loopback interface represents a logical interface that is not present in
a router until it is created. Once created, the loopback interface is
considered up. On ARG3 devices, the loopback interfaces can however
be shut down.

Thank you
www.huawei.com


---


~ Foreword
The file system represents the underlying platform on which VRP
operates, and where system files are stored within the physical storage
devices of the product. The capability to navigate and manage this file
system is necessary to ensure effective management of the configuration
files, VRP software upgrades and ensure that the physical devices
contained within each product are well maintained.


Objectives
Upon completion of this section, trainees will be able to:

Successfully navigate the device file system

Manipulate the file system files and folders.


Manage Huawei router and switch storage devices.


The file system manages files and directories on the storage devices. It can
create, delete, modify, or rename a file or directory, or display the contents of
a file.
The file system has two functions: managing storage devices and managing
the files that are stored on those devices. A number of directories are defined
within which files are stored in a logical hierarchy. These files and directories
can be managed through a number of functions which allow the changing or
displaying of directories, displaying files within such directories or subdirectories, and the creation or deletion of directories.
Common examples of file system commands for general navigation include
the cd command used to change the current directory, pwd to view the
current directory and dir to display the contents of a directory as shown in the
example. Access to the file system is achieved from the User View.
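A brief navigation sketch, assuming a flash: storage device:

```
<Huawei> pwd
<Huawei> dir
<Huawei> cd flash:/
```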

Making changes to the existing file system directories generally relates to the
capability to create and delete directories within the file system. Two common
commands are used in this case. The mkdir directory command is used to
create a folder in a specified directory on a designated storage device, where
directory refers to the name given to the directory, which can be a string of 1
to 64 characters. In order to delete a folder within the file system, the rmdir
directory command is used, with directory again referring to the name of the
directory. It should be noted that a directory can only be deleted if there are
no files contained within that directory.
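For example, creating and then removing a hypothetical backup directory:

```
<Huawei> mkdir flash:/backup
<Huawei> rmdir flash:/backup
```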

Making changes to the files within a file system includes copying, moving,
renaming, compressing, deleting, undeleting, deleting files in the recycle bin,
running files in batches and configuring prompt modes. Creating a duplicate
of an existing file can be done using the copy source-filename
destination-filename command; if destination-filename is the same as the
name of an existing file, the system will display a message indicating that the
existing file will be replaced. A target file name cannot be the same as that of
a startup file, otherwise the system displays a message indicating that the
operation is invalid and that the file is a startup file.
The move source-filename destination-filename command can be used to
move files to another directory. After the move command has been
successfully executed, the original file is removed from its source and placed
at the defined destination. It should be noted, however, that the move
command can only move files within the same storage device.
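As an illustration, with hypothetical file and directory names:

```
<Huawei> copy flash:/vrpcfg.zip flash:/vrpcfg.bak
<Huawei> move flash:/vrpcfg.bak flash:/backup/vrpcfg.bak
```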

For the removal of files within a file system, the delete function can be applied
using the command delete [ /unreserved ] [ /force ] { filename | device-name }.
Generally, files that are deleted are directed to a recycle bin, from where they
can be recovered using the undelete { filename | device-name } command;
however, should the /unreserved parameter be used, the file will be
permanently deleted. The system will generally display a message asking for
confirmation of file deletion, however if the /force parameter is included, no
prompt will be given. The filename parameter refers to the file which is to be
deleted, while the device-name parameter defines the storage location.
Where a file is directed to the recycle bin, it is not permanently deleted and
can easily be recovered. In order to ensure that files in the recycle bin are
deleted permanently, the reset recycle-bin [ filename ] command can be
applied, where the filename parameter can be used to define a specific file for
permanent deletion.
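Each of the deletion and recovery operations can be sketched as follows, where each line illustrates an independent operation on a hypothetical file:

```
<Huawei> delete flash:/test.cfg
<Huawei> undelete flash:/test.cfg
<Huawei> delete /unreserved flash:/test.cfg
<Huawei> reset recycle-bin test.cfg
```

The first form moves the file to the recycle bin, from which undelete can recover it; the /unreserved form and reset recycle-bin both remove the file permanently.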

When powered on, the device retrieves configuration files from a default save
path to initialize itself, which is then stored within the RAM of the device. If
configuration files do not exist in the default save path, the router uses default
initialization parameters.
The current-configuration file indicates the configurations in effect on the
device when it is actually running. When the configuration is saved, the
current configuration is stored in a saved-configuration file within the storage
location of the device. If the device loaded the current configuration file based
on default initialization parameters, a saved-configuration file will not exist in
the storage location of the default save path, but will be generated once the
current configuration is saved.

Using the display current-configuration command, device parameters that
take effect can be queried. If default values of certain parameters are being
used, these parameters are not displayed. The command supports a number
of parameters that allow the output to be filtered during use of the display
function. The display current-configuration | begin { regular-expression }
command is an example of how the current configuration can be used to
display active parameters that begin with a specific keyword or expression.
An alternative to this command is display current-configuration | include
{ regular-expression }, which displays parameters that include a specific
keyword or expression within the current-configuration file.
The display saved-configuration [ last | time ] shows the output of the stored
configuration file used at startup to generate the current-configuration. Where
the last parameter is used it displays the configuration file used in the current
startup. The configuration file is displayed only when it is configured for the
current startup. The time parameter will display the time when the
configuration was last saved.

Using the save [ configuration-file ] command will save the current
configuration information to the default storage path. The configuration-file
parameter allows the current configuration information to be saved to a
specified file; running the save command with this parameter does not affect
the current startup configuration file of the system. When configuration-file is
the same as the configuration file stored in the default storage path of the
system, the function of this command is the same as that of the save
command without parameters.
The example demonstrates the use of the save command to save the
current-configuration, which by default will be stored to the vrpcfg.zip file in
the default storage location of the device.
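A short sketch of the variants described; backup.cfg is a hypothetical file name:

```
<Huawei> save
<Huawei> save backup.cfg
<Huawei> display saved-configuration time
```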

The currently used saved-configuration file can be discovered through the
use of the display startup command. In addition, the display startup command
can be used to query the name of the current system software file, the name
of the next system software file, the name of the backup system software file,
the names of the four currently used (if used) system files, and the names of
the next four system files. The four system files are the aforementioned
configuration file, along with the voice file, patch file, and license file.

Following discovery of the startup saved-configuration file, it may be
necessary to define a new configuration file to be loaded at the next startup. If
a specific configuration file is not specified, the default configuration file will be
loaded at the next startup.
The filename extension of the configuration file must be .cfg or .zip, and the
file must be stored in the root directory of a storage device. When the router is
powered on, it reads the configuration file from the flash memory by default to
initialize. The data in this configuration file is the initial configuration. If no
configuration file is saved in the flash memory, the router uses default
parameters to initialize.
Through the use of the startup saved-configuration [configuration-file] where
the configuration-file parameter is the configuration file to be used at startup, it
is possible to define a new configuration file to initialize at the next system
startup.
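For example, assuming a previously saved file named backup.cfg in the root directory of the storage device:

```
<Huawei> startup saved-configuration backup.cfg
<Huawei> display startup
```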

When the compare configuration [ configuration-file ] [ current-line-number
save-line-number ] command is used, the system performs a line-by-line
comparison of the saved configuration with the current configuration, starting
from the first line. If the current-line-number and save-line-number
parameters are specified, the system skips the non-relevant configuration
before the compared lines and continues to find differences between the
configuration files.
The system will then output the configuration differences between the saved
configuration and the current configuration. The comparison output is
restricted to 150 characters by default; if the comparison requires fewer than
150 characters, all variations until the end of the two files are displayed.

The reset saved-configuration command is used to delete a device's startup
configuration file from the storage device. When performed, the system
compares the configuration files used in the current startup and the next
startup when deleting the configuration file from the router.
If the two configuration files are the same, they are deleted at the same time
after this command is executed. The default configuration file is used when
the router is started next time. If the two configuration files are different, the
configuration file used in the current startup is deleted after this command is
executed.
If no configuration file is configured for the device current startup, the system
displays a message indicating that the configuration file does not exist after
this command is executed. Once the reset saved-configuration command is
used, a prompt will be given to confirm the action, for which the user is
expected to confirm, as shown in the example.

The storage devices used are product dependent, and include flash memory,
SD cards, and USB flash drives. The AR2200 router, for example, has a
built-in flash memory and a built-in SD card (in slot sd1), and provides two
reserved USB slots (usb0 and usb1) and an SD card slot (sd0). The S5700
includes built-in flash memory with a capacity that varies depending on the
model, with 64MB supported in the S5700C-HI, S5700-LI, S5700S-LI and
S5710-EI models, and 32MB in all others. Details regarding the storage
devices of a Huawei product can be displayed using the display version
command as shown.

Formatting a storage device is likely to result in the loss of all files on the
storage device, and the files cannot be restored, therefore extra care should
be taken when performing any format command and should be avoided
unless absolutely necessary. The format [storage-device] command is used
along with the storage-device parameter to define the storage location which
is required to be formatted.

When the terminal display indicates that the file system has failed, the fixdisk
command can be used to attempt to fix the abnormal file system on the
storage device; however, there is no guarantee that the file system can be
restored successfully. Since the command is used to rectify problems, it is not
recommended that it be run if no problem has occurred in the system. It
should also be noted that this command does not rectify device-level
problems.

1. The file system attribute d represents that the entry is a directory in the file
system. It should be noted that this directory can only be deleted once
any files contained within the directory have been deleted. The remaining
rwx values refer to whether the directory (or file) can be read, written to,
and/or executed.
2. A configuration may be saved under a separate name from the default
vrpcfg.zip file name and stored within the storage device of the router or
switch. If this file is required to be used as the active configuration file in
the system, the command startup saved-configuration
<configuration-file-name> should be used, where configuration-file-name
refers to the file name and file extension.


---


~ Foreword
Effective network administration and management within an enterprise
network relies on all devices maintaining backup files in the event of
system failures or other events that may result in loss of important
systems files and data. Remote servers that use the file transfer protocol
(FTP) service are often used to ensure files are maintained for backup
and retrieval purposes as and when needed. The means for establishing
communication with such application services is introduced in this
section.


Objectives
Upon completion of this section, trainees will be able to:
Explain the importance of maintaining up-to-date versions of VRP.
Establish a client relationship with an FTP server.

Successfully upgrade a VRP system image.


The VRP platform is constantly updated to maintain alignment with changes in
technology and to support new advancements in hardware. The VRP image
is generally defined by a VRP version and a product version number. Huawei
ARG3 and Sx7 series products generally align with VRP version 5, with which
different product versions are associated.
As the product version increases, so do the features that are supported by the
version. The product version format includes a product code (Vxxx); Rxxx
denotes a major version release and Cxx a minor version release. If a service
pack is used to patch the VRP product version, an SPC value may also be
included in the VRP product version number. Typical examples of the VRP
version upgrades for the AR2200 include:
Version 5.90  (AR2200 V200R001C00)
Version 5.110 (AR2200 V200R002C00)
Version 5.120 (AR2200 V200R003C00)

File transfer refers to the means by which files are sent to or retrieved from a
remote server or storage location. Within the IP network this application can
be implemented for a wide range of purposes. As part of effective practice, it
is common for important files to be duplicated and backed up to a remote
storage location to prevent any loss that would affect critical systems
operations. This includes files such as the VRP image of a product, which
(should the existing image be lost through use of the format command or
other forms of error) can be retrieved remotely and used to recover system
operations. Similar principles apply to important configuration files and to
records of device activity stored in log files, which may be retained long term
on the remote server.

FTP is a standard application protocol based on the TCP/IP protocol suite,
used to transfer files between local clients and remote servers. FTP uses two
TCP connections to copy a file from one system to another. The TCP
connections are usually established in client-server mode, one for control (the
server port number is 21) and the other for data transmission (the server port
number is 20). The control connection carries commands issued from the
client (RTA) to the server and replies from the server to the client, minimizing
the transmission delay, while the data connection transmits data between the
client and server, maximizing the throughput.
Trivial File Transfer Protocol (TFTP) is a simple file transfer protocol over
which a router can function as a TFTP client to access files on a TFTP server.
Unlike FTP, TFTP has no complex interactive access interface and
authentication control. Implementation of TFTP is based on the User
Datagram Protocol (UDP). The client initiates the TFTP transfer. To download
files, the client sends a read request packet to the TFTP server, receives
packets from the server, and returns an acknowledgement to the server. To
upload files, the client sends a write request packet to the TFTP server, sends
packets to the server, and receives acknowledgement from the server.

The example demonstrates how connection between an FTP server and client
is established in order to retrieve a VRP image that can be used as part of the
system upgrade process. Prior to any transfer of data, it is necessary to
establish the underlying connectivity over which files can be transferred. This
begins by providing suitable IP addressing for the client and the server. Where
the devices are directly connected, interfaces can be applied that belong to
the same network. Where devices belong to networks located over a large
geographic area, devices must establish relevant IP addressing within their
given networks and be able to discover a relevant network path over IP via
which client/server connectivity can be established.

A user must determine for any system upgrade as to whether there is


adequate storage space in which to store the file that is to be retrieved. The
file system commands can be used to determine the current status of the file
system, including which files are currently present within the file storage
location of the device and also the amount of space currently available.
Where the storage space is not adequate for file transfer, certain files can be
deleted or uploaded to the FTP server in the event that they may still be
required for future use.
The example demonstrates the use of the delete file system command to
remove the existing image file. It should be noted that the system image,
while deleted will not impact the current operation of the device as long as the
device remains operational, therefore the device should not be powered off or
restarted before a new VRP image file is restored within the storage location
of the device, and set to be used during the next system startup.

The retrieval of files from an FTP server requires that a connection first be
established before any file transfer can take place. Within the client device,
the FTP service is initiated using the ftp <ip-address> command, where the
IP address relates to the address of the FTP server to which the client wishes
to connect. FTP connections are established using TCP, and require
authentication in the form of a username and password defined by the FTP
server. Once authentication has been successfully achieved, the client will
have established access to the FTP server and will be able to use a variety of
commands to view existing files stored within the current working directory of
the server.
Prior to file transmission, the user may be required to set the file type for
which two formats exist, ASCII and Binary. ASCII mode is used for text, in
which data is converted from the sender's character representation to "8-bit
ASCII" before transmission, and then to the receiver's character
representation. Binary mode on the other hand requires that the sender send
each file byte for byte. This mode is often used to transfer image files and
program files, and should be applied when sending or retrieving any VRP
image file. In the example, the get vrp.cc command has been issued in order
to retrieve the new VRP image located within the remote server.
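The client-side steps might look as follows; the server address, credentials and file name are hypothetical:

```
<Huawei> ftp 192.168.1.100
  (enter the username and password defined on the FTP server)
[Huawei-ftp] binary
[Huawei-ftp] get vrp.cc
[Huawei-ftp] quit
```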

In the event that the client wishes to retrieve a VRP image from a TFTP
server, a connection to the server need not be established first. Instead, the
client must define the path to the server within the command line, along with
the operation that is to be performed. It should also be noted that the AR2200
and S5700 models serve as the TFTP client only and transfer files only in
binary format. As can be seen from the example, the get command is applied
for retrieval of the VRP image file from the TFTP server once the destination
address of the TFTP server has been defined.
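By contrast, a TFTP retrieval is a single command; the server address and file name here are hypothetical:

```
<Huawei> tftp 192.168.1.100 get vrp.cc
```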

Once the transfer of the VRP image file to the client has been successfully
achieved, the image must be enabled as the startup system software for the
next system startup. In order to change the system software version, the
startup system-software command must be run, specifying the system
software file to be used at the next startup. A system software file must use
.cc as the file name extension, and the system software file used in the next
startup cannot be the one used in the current startup.
Additionally, the storage directory of a system software file must be the root
directory, otherwise the file will fail to run. The display startup command
should be used to verify that the change to the startup system software has
been performed successfully. The output for the startup system software
should show the existing VRP image, while the next startup system software
should display the transferred VRP image that is now present within the root
directory of the device.
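The upgrade steps described above can be sketched as follows, assuming the transferred image is named vrp.cc and resides in the root directory:

```
<Huawei> startup system-software vrp.cc
<Huawei> display startup
```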

Confirmation of the startup system software allows for the safe initiation of the
system software during the next system boot. In order to apply the changes
and allow for the new system software to take effect, the device must be
restarted. The reboot command can be used in order to initiate the system
restart. During the reboot process, a prompt will be displayed requesting
confirmation regarding whether the configuration file for the next system
startup be saved.
In some cases, the saved-configuration file may have been erased by the
user in order to allow a fresh configuration to be implemented. Should this
have occurred, the user is expected to define a response of no at the
Continue? prompt. If the user chooses yes at this point, the
current-configuration will be rewritten to the saved-configuration file and
applied once again during the next startup. If the user is unaware of the
changes for which the save prompt is providing a warning, it is recommended
that the user select no (or n) and perform a comparison of the saved and
current configurations to verify the changes. For the reboot prompt, a
response of yes (or y) is required to complete the reboot process.

1. A client device must have the capability to reach the FTP server over IP,
requiring an IP address be configured on the interface via which the FTP
server can be reached. This will allow a path to be validated to the FTP
server at the network layer if one exists.
2. The user can run the display startup command to validate that the current
startup system software (VRP) is active, identified by the .cc extension.


---


~ Foreword
The introduction of a switching device as part of the enterprise network
demonstrates how networks are able to expand beyond point-to-point
connections, and shared networks in which collisions may occur. The
behavior of the enterprise switch when introduced to the local area
network is detailed along with an understanding of the handling of unicast
and broadcast type frames, to demonstrate how switches enable
networks to overcome the performance obstacles of shared networks.


Objectives
Upon completion of this section, trainees will be able to:
Explain the decision making process of a link layer switch

Configure parameters for negotiation on a link layer switch


As the enterprise network expands, multiple users need to be established as
part of a multi-access network. The evolution of network technologies has
seen a shift away from shared local networks towards networks which
support multiple collision domains, using forms of media such as 100BaseT
that isolate the transmission and reception of data over separate wire pairs,
thus eliminating the potential for collisions and allowing higher, full duplex
transmission rates. The introduction of a switch brings the capability for
increased port density, enabling the connection of a greater number of end
system devices within a single local area network. Each end system or host
within a local area network is required to be connected as part of the same IP
network in order for communication to be facilitated at the network layer. The
IP address, however, is only relevant to the host systems, since switch
devices operate within the scope of the link layer and therefore rely on MAC
addressing for frame forwarding.

As a link layer device, each switch relies on a MAC based table that provides
association between a destination MAC address and the port interface via
which a frame should be forwarded. This is commonly referred to as the MAC
address table.
The initiation of a switch begins with the switch having no knowledge of end
systems and how frames received from end systems should be forwarded. It
is necessary that the switch build entries within the MAC address table to
determine the path that each frame received should take in order to reach a
given destination, so as to limit broadcast traffic within the local network.
These path entries are populated in the MAC address table as a result of
frames received from end systems. In the example, Host A has forwarded a
frame to Switch A, which currently has no entries within its MAC address
table.

The frame that is forwarded from Host A contains a broadcast MAC address
entry in the destination address field of the frame header. The source address
field contains the MAC address of the originating device, in this case Host A.
This source MAC address is used by the switch in order to populate the MAC
address table, by associating the MAC entry in the source address field with
the switch port interface upon which the frame was received. The example
demonstrates how the MAC address is associated with the port interface to
allow any returning traffic to this MAC destination to be forwarded directly via
the associated interface.
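
On a Huawei switch, the associations learned through this process can be inspected directly from the command line; a minimal sketch:

```
# List the MAC address table entries, showing each learned MAC
# address together with the port interface it is associated with.
<Huawei> display mac-address
```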

The general behavior of an ARP request involves the frame being flooded to
all possible destinations, primarily due to the broadcast MAC address
(FF:FF:FF:FF:FF:FF) that represents the destination. The switch is
therefore responsible for forwarding this frame out of every port interface,
with the exception of the port interface on which the frame was received, in
an attempt to locate the intended IP destination listed within the ARP
header, for which an ARP reply can be generated. As demonstrated in the
example, individual frames are flooded from the switch via port interfaces
G0/0/2 and G0/0/3 towards Host B and Host C respectively.

As a result of the ARP request header, the receiving host is able to determine
that the request is intended for the IP destination of 10.1.1.3, and also
learns the source (MAC) address from which the frame originated, using this
information to generate a unicast reply. The MAC address of Host A is
associated with the IP address of Host A and stored within the ARP cache of
Host C. In doing so, the generation of broadcast traffic is minimized,
thereby reducing the number of interrupts to local destinations as well as
the number of frames propagating through the local network.
Once the frame is received from Host C by Switch A, the switch will populate
the MAC address table with the source MAC address of the frame received,
and associate it with the port interface on which the frame was received. The
switch then uses the MAC address table to perform a lookup, in order to
discover the forwarding interface, based on the destination MAC address of
the frame. In this case the MAC address of the frame refers to Host A, for
which an entry now exists via interface G0/0/1, allowing the frame to be
forwarded to the known destination.

Early Ethernet systems operated based on a 10Mbps half duplex mode and
applied mechanisms such as CSMA/CD to ensure system stability. The
transition to a twisted pair medium gave rise to the emergence of full-duplex
Ethernet, which greatly improved Ethernet performance and meant two forms
of duplex could be negotiated. The auto-negotiation technology allows newer
Ethernet systems to be compatible with earlier Ethernet systems.
In auto-negotiation mode, interfaces on both ends of a link negotiate their
operating parameters, including the duplex mode, rate, and flow control. If the
negotiation succeeds, the two interfaces work with the same operating
parameters. In some cases however it is necessary to manually define the
negotiation parameters, such as where Gigabit Ethernet interfaces that are
working in auto-negotiation mode are connected via a 100 Mbps network
cable. In such cases, negotiation between the interfaces will fail.

If the negotiation parameters are changed from auto-negotiation to manually
defined values, the defined parameters should be checked using the display
interface <interface> command, to verify that link layer negotiation between
the interfaces is successful. This is verified by the line protocol current
state being displayed as UP. The displayed information reflects the current
parameter settings for the interface.
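
Assuming an electrical port such as GigabitEthernet0/0/1, manually defining the negotiation parameters and verifying the result might be sketched as follows:

```
<Huawei> system-view
[Huawei] interface GigabitEthernet 0/0/1
# Disable auto-negotiation before fixing the rate and duplex mode.
[Huawei-GigabitEthernet0/0/1] undo negotiation auto
[Huawei-GigabitEthernet0/0/1] speed 100
[Huawei-GigabitEthernet0/0/1] duplex full
[Huawei-GigabitEthernet0/0/1] quit
# Verify the line protocol state and current parameter settings.
[Huawei] display interface GigabitEthernet 0/0/1
```

Both ends of the link must be set to matching values, otherwise the link layer negotiation will fail and the line protocol will remain down.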

1. When a host or other end system is connected to a switch port interface, a gratuitous ARP is generated that is designed to ensure that IP
addresses remain unique within a network segment. The gratuitous ARP
message however also provides the switch with information regarding the
MAC address of the host, which is then included in the MAC address
table and associated with the port interface on which the host is
connected.
If the physical connection of a host connected to a switch port interface is
removed, the switch will discover the physical link is down and remove the
MAC entry from the MAC address table. Once the medium is connected
to another port interface, the port will detect that the physical link is active
and a gratuitous ARP will be generated by the host, allowing the switch to
discover and populate the MAC address table with the MAC address of
the connected host.

Thank you
www.huawei.com

~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
As the enterprise network expands, multi-switched networks are
introduced to provide link layer communication between a growing
number of end systems. As new interconnections are formed between
multiple enterprise switches, new opportunities for building ever more
resilient networks are made possible; however, the potential for switching
failure as a result of loops also grows. It is therefore necessary that the
spanning tree protocol (STP) be understood, both in terms of its behavior
in preventing switching loops, and how it can be manipulated to suit
enterprise network design and performance.


~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:
Describe the issues faced when using a multi-switched network.
Explain the loop prevention process of the spanning tree protocol.

Configure parameters for managing the STP network design.


~~ HUAWEI

Enterprise growth results in the commissioning of multiple switches in order to support the interconnectivity of end systems and services required for daily
operations. The interconnection of multiple switches however brings additional
challenges that need to be addressed. Switches may be interconnected over
single point-to-point links, via which end systems are able to forward frames
to destinations located via other switches within the broadcast domain. The
failure however of any point-to-point switch link results in the immediate
isolation of the downstream switch and all end systems to which the link is
connected. In order to resolve this issue, redundancy is highly recommended
within any switching network.
Redundant links are therefore generally used on an Ethernet switching
network to provide link backup and enhance network reliability. The use of
redundant links, however, may produce loops that cause the communication
quality to drastically deteriorate, and major interruptions to the communication
service to occur.

One of the initial effects of redundant switching loops comes in the form of
broadcast storms. This occurs when an end system attempts to discover a
destination of which neither it nor the switches along the switching path are
aware. A broadcast is therefore generated by the end system, which is
flooded by the receiving switch.
The flooding effect means that the frame is forwarded via all interfaces, with
the exception of the interface on which the frame was received. In the example,
Host A generates a frame, which is received by Switch B which is
subsequently forwarded out of all other interfaces. An instance of the frame is
received by the connected switches A and C, which in turn flood the frame out
of all other interfaces. The continued flooding effect results in both Switch A
and Switch C flooding instances of the frame from one switch to the other,
which in turn is flooded back to Switch B, and thus the cycle continues. In
addition, the repeated flooding effect results in multiple instances of the frame
being received by end stations, effectively causing interrupts and extreme
switch performance degradation.

Switches must maintain records of the path via which a destination is reachable. This is identified through association of the source MAC address of
a frame with the interface on which the frame was received. Only one
instance of a MAC address can be stored within the MAC address table of a
switch, and where a second instance of the MAC address is received, the
more recent information takes precedence.
In the example, Switch B updates the MAC address table with the MAC
address of Host A and associates this source with interface G0/0/3, the port
interface on which the frame was received. As frames are uncontrollably
flooded within the switching network, a frame is again received with the same
source MAC address as Host A, however this time the frame is received on
interface G0/0/2. Switch B must therefore assume that the host that was
originally reachable via interface G0/0/3 is now reachable via G0/0/2, and will
update the MAC address table accordingly. The result of this process leads to
MAC instability and continues to occur endlessly between both the switch port
interfaces connecting to Switch A and Switch C since frames are flooded in
both directions as part of the broadcast storm effect.

The challenge for the switching network lies in the ability to maintain switching
redundancy to avoid isolation of end systems in the event of switch system or
link failure, and the capability to avoid the damaging effects of switching loops
within a switching topology which implements redundancy. The resulting
solution for many years has been to implement the spanning tree protocol
(STP) in order to prevent the effects of switching loops. Spanning tree works
on the principle that redundant links be logically disabled to provide a loop
free topology, whilst being able to dynamically enable secondary links in the
event that a failure along the primary switching path occurs, thereby fulfilling
the requirement for network redundancy within a loop free topology. The
switching devices running STP discover loops on the network by exchanging
information with one another, and block certain interfaces to cut off loops. STP
has continued to be an important protocol for the LAN for over 20 years.

The removal of any potential for loops serves as the primary goal of spanning
tree, for which an inverted tree type architecture is formed. At the base of this
logical tree is the root bridge/switch. The root bridge represents the logical
center, but not necessarily the physical center, of the STP-capable network.
The designated root bridge is capable of changing dynamically with the
network topology, as in the event where the existing root bridge fails to
continue to operate as the root bridge. Non-root bridges are considered to be
downstream from the root bridge and communication to non-root bridges
flows from the root bridge towards all non-root bridges. Only a single root
bridge can exist in a converged STP-capable network at any one time.
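
On a Huawei switch, the identity of the currently elected root bridge can be confirmed from the command line; a usage sketch:

```
# Display spanning tree status, including the root bridge ID
# (bridge priority plus MAC address) and the local bridge ID.
<Huawei> display stp
# A per-port summary of STP roles and states is also available.
<Huawei> display stp brief
```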

Discovery of the root bridge for an STP network is a primary task performed in
order to form the spanning tree. The STP protocol operates on the basis of
election, through which the role of all switches is determined. A bridge ID is
defined as the means by which the root bridge is discovered. It comprises
two parts: a 16-bit bridge priority, followed by a 48-bit MAC address.
The device that is said to contain the highest priority (smallest bridge ID) is
elected as the root bridge for the network. The bridge ID comparison takes
into account initially the bridge priority, and where this priority value is unable
to uniquely identify a root bridge, the MAC address is used as a tie breaker.
The bridge ID can be manipulated through alteration to the bridge priority as a
means of enabling a given switch to be elected as the root bridge, often in
support of an optimized network design.

The spanning tree topology relies on the communication of specific information to determine the role and status of each switch in the network. A
Bridge Protocol Data Unit (BPDU) facilitates communication within a spanning
tree network. Two forms of BPDU are used within STP. A Configuration BPDU
is initially created by the root and propagated downstream to ensure all non-root bridges remain aware of the status of the spanning tree topology and
importantly, the root bridge. The TCN BPDU is a second form of BPDU, which
propagates information in the upstream direction towards the root and shall be
introduced in more detail as part of the topology change process.
Bridge Protocol Data Units are not directly forwarded by switches; instead,
the information carried within a received BPDU is used to generate a switch's
own BPDU for transmission. A Configuration BPDU carries a number of
parameters that are used by a bridge to determine primarily the presence of a
root bridge and ensure that the root bridge remains the bridge with the highest
priority. Each LAN segment is considered to have a designated switch that is
responsible for the propagation of BPDU downstream to non-designated
switches.
The Bridge ID field is used to determine the current designated switch from
which BPDU are expected to be received. The BPDU is generated and
forwarded by the root bridge based on a Hello timer, which is set to 2 seconds
by default. As BPDU are received by downstream switches, a new BPDU is
generated with locally defined parameters and forwarded to all non-designated switches for the LAN segment.

Another feature of the BPDU is the propagation of two parameters relating to path cost. The root path cost (RPC) is used to measure the cost of the path to
the root bridge in order to determine the spanning tree shortest path, and
thereby generate a loop free topology. When the bridge is the root bridge, the
root path cost is 0.
The path cost (PC) is a value associated with the root port, which is the port
on a downstream switch that connects to the LAN segment on which a
designated switch or root bridge resides. This value is used to generate the
root path cost for the switch: the path cost is added to the RPC value
received from the designated switch on the LAN segment, to define a new root
path cost value. This new root path cost value is carried in the BPDU of the
designated switch and is used to represent the path cost to the root.

Huawei Sx7 series switches support a number of alternative path cost standards that can be implemented based on enterprise requirements, such as where a multi-vendor switching network may exist. The Huawei Sx7 series
of switches use the 802.1t path cost standard by default, providing a stronger
metric accuracy for path cost calculation.

A converged spanning tree network defines that each interface be assigned a specific port role. Port roles are used to define the behavior of port interfaces
that participate within an active spanning tree topology. For the spanning tree
protocol, three port roles of designated, root and alternate are defined.
The designated port is associated with a root bridge or a designated bridge of
a LAN segment and defines the downstream path via which Configuration
BPDU are forwarded. The root bridge is responsible for the generation of
configuration BPDU to all downstream switches, and thus root bridge port
interfaces always adopt the designated port role.
The root port identifies the port that offers the lowest cost path to the root,
based on the root path cost. The example demonstrates the case where two
possible paths exist back to the root, however only the port that offers the
lowest root path cost is assigned as the root port. Where two or more ports
offer equal root path costs, the decision of which port interface will be the root
port is determined by comparing the bridge ID in the configuration BPDU that
is received on each port.
Any port that is not assigned a designated or root port role is considered an
alternate port, and is able to receive BPDU from the designated switch for the
LAN segment for the purpose of monitoring the status of the redundant link,
but will not process the received BPDU. The IEEE 802.1D-1990 standard for
STP originally defined this port role as backup, however this was amended to
become the alternate port role within the IEEE 802.1D-1998 standards
revision.

The port ID represents a final means for determining port roles alongside the
bridge ID and root path cost mechanism. In scenarios where two or more
ports offer a root path cost back to the root that is equal and for which the
upstream switch is considered to have a bridge ID that is equal, primarily due
to the upstream switch being the same switch for both paths, the port ID must
be applied to determine the port roles.
The port ID is tied to each port and comprises a port priority and a port
number that is associated with the port interface. The port priority is a value
in the range of 0 to 240, assigned in increments of 16, with a default value of
128. Where both port interfaces carry an equal port priority value, the unique
port number is used to determine the port roles. The port with the
highest-priority port ID (that is, the lowest port number) is assigned as the
root port, with the remaining port defaulting to an alternate port role.
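
Where the port priority needs to be adjusted to influence this tie-break, a hedged configuration sketch on a Huawei Sx7 series switch would be:

```
<Huawei> system-view
[Huawei] interface GigabitEthernet 0/0/1
# Lower the port priority (default 128, increments of 16) so this
# port is preferred where root path cost and bridge ID are equal.
[Huawei-GigabitEthernet0/0/1] stp port priority 16
```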

The root bridge is responsible for the generation of configuration BPDU based
on a BPDU interval that is defined by a Hello timer. This Hello timer by default
represents a period of 2 seconds. A converged spanning tree network must
ensure that, in the event of a failure within the network, switches within the
STP enabled network are made aware of the failure. A Max Age timer is
associated with each BPDU and represents the life span of a BPDU from the
point of generation by the root bridge, and ultimately controls the validity
period of a BPDU before it is considered obsolete. This MAX Age timer by
default represents a period of 20 seconds.
Once a configuration BPDU is received from the root bridge, the downstream
switch is considered to take approximately 1 second to generate a new
BPDU, and propagate the generated BPDU downstream. In order to
compensate for this time, a message age (MSG Age) value is applied to each
BPDU to represent the offset between the MAX Age and the propagation
delay, and for each switch this message age value is incremented by 1.
As BPDU are propagated from the root bridge to the downstream switches, the
MAX Age timer is refreshed. The MAX Age timer counts down and expires
when the message age value exceeds the MAX Age value, ensuring that the
lifetime of a BPDU is limited to the MAX Age, as defined by the root bridge.
In the event that a BPDU is not received before the MAX Age timer expires,
the switch will consider the BPDU information currently held as obsolete and
assume an STP network failure has occurred.

The spanning tree convergence process is an automated procedure that initiates at the point of switch startup. All switches at startup assume the role
of root bridge within the switching network. The default behavior of a root
bridge is to assign a designated port role to all port interfaces to enable the
forwarding of BPDU via all connected port interfaces. As BPDU are received
by peering switches, the bridge ID will be compared to determine whether a
better candidate for the role of root bridge exists. In the event that the
received BPDU contains an inferior bridge ID with respect to the root ID, the
receiving switch will continue to advertise its own configuration BPDU to the
neighboring switch.
Where the BPDU is superior, the switch will acknowledge the presence of a
better candidate for the role of root bridge, by ceasing to propagate BPDU in
the direction from which the superior BPDU was received. The switch will also
amend the root ID field of its BPDU to advertise the bridge ID of the root
bridge candidate as the current new root bridge.

An elected root bridge, once established, will generate configuration BPDU to all other non-root switches. The BPDU will carry a root path cost that will
inform downstream switches of the cost to the root, to allow for the shortest
path to be determined. The root path cost carried in the BPDU that is
generated by the root bridge always has a value of 0. The receiving
downstream switches will then add this cost to the path cost of the port
interfaces on which the BPDU was received, and from which a switch is able
to identify the root port.
In the case where equal root path costs exist on two or more LAN segments
to the same upstream switch, the port ID is used to discover the port roles.
Where an equal root path cost exists between two switches as in the given
example, the bridge ID is used to determine which switch represents the
designated switch for the LAN segment. Where the switch port is neither a
root port nor designated port, the port role is assigned as alternate.

As part of the root bridge and port role establishment, each switch will
progress through a number of port state transitions. Any port that is
administratively disabled will be considered to be in the disabled state.
Enabling of a port in the disabled state will see a state transition to the
blocking state.
Any port considered to be in a blocking state is unable to forward any user
traffic, but is capable of receiving BPDU frames. Any BPDU received on a port
interface in the blocking state will not be used to populate the MAC address
table of the switch, but instead to determine whether a transition to the
listening state is necessary. The listening state enables communication of
BPDU information, following negotiation of the port role in STP, but
maintains restriction on the populating of the MAC address table with
neighbor information.
A transition to the blocking state from the listening or other states may
occur in the event that the port is changed to an alternate port role. The
transition from listening to learning, and from learning to forwarding, is
greatly dependent on the forward delay timer, which exists to ensure that any
propagation of BPDU information to all switches in the spanning tree topology
is achievable before the state transition occurs.
The learning state maintains the restriction of user traffic forwarding to ensure
prevention of any switching loops however allows for the population of the
MAC address table throughout the spanning tree topology to ensure a stable
switching network. Following a forward delay period, the forwarding state is
reached. The disabled state is applicable at any time during the state
transition period through manual intervention (i.e. the shutdown command).

Events that cause a change in the established spanning tree topology may
occur in a variety of ways, for which the spanning tree protocol must react to
quickly re-establish a stable and loop free topology. The failure of the root
bridge is a primary example of where re-convergence is necessary. Non-root
switches rely on the intermittent pulse of BPDU from the root bridge to
maintain their individual roles as non-root switches in the STP topology. In the
event that the root bridge fails, the downstream switches will fail to receive a
BPDU from the root bridge and as such will also cease to propagate any
BPDU downstream. The MAX Age timer is typically reset to the set value (20
seconds by default) following the receipt of each BPDU downstream.
With the loss of any BPDU however, the MAX Age timer begins to count down
the lifetime for the current BPDU information of each non-root switch, based
on the (MAX Age − MSG Age) formula. At the point at which the MSG Age
value is greater than the MAX Age timer value, the BPDU information
received from the root becomes invalid, and the non-root switches begin to
assume the role of root bridge. Configuration BPDU are again forwarded out
of all active interfaces in a bid to discover a new root bridge. The failure of the
root bridge invokes a recovery duration of approximately 50 seconds due to
the Max Age + 2x Forward Delay convergence period.

In the case of an indirect link failure, a switch loses connection with the root
bridge due to a failure of the port or media, or due possibly to manual
disabling of the interface acting as the root port. The switch itself will become
immediately aware of the failure, and since it only receives BPDU from the
root in one direction, will assume immediate loss of the root bridge, and assert
its position as the new root bridge.
From the example, switch B begins to forward BPDU to switch C to notify of
the position of switch B as the new root bridge, however switch C continues to
receive BPDU from the original root bridge and therefore ignores any BPDU
from switch B. The alternate port will begin to age its state through the MAX
Age timer, since the interface no longer receives BPDU containing the root ID
of the root bridge.
Following the expiry of the MAX Age timer, switch C will change the port role
of the alternate port to that of a designated port and proceed to forward BPDU
from the root towards switch B, which will cause the switch to concede its
assertion as the root bridge and converge its port interface to the role of root
port. This represents a partial topology failure however due to the need to wait
for a period equivalent to MAX Age + 2x forward delay, full recovery of the
STP topology requires approximately 50 seconds.

A final scenario involving spanning tree convergence recovery occurs where multiple LAN segments are connected between two switch devices, for which
one is currently the active link while the other provides an alternate path to the
root. Should an event occur that causes the switch that is receiving the BPDU
to detect a loss of connection on its root port, such as in the event that a root
port failure occurs, or a link failure occurs, for which the downstream switch is
made immediately aware, the switch can instantly transition the alternate port.
This will begin the transition through the listening, learning and forwarding
states and achieve recovery within a 2x forward delay period. In the event of
any failure, where the link that provides a better path is reactivated, the
spanning tree topology must again re-converge in order to apply the optimal
spanning tree topology.

In a converged spanning tree network, switches maintain filter databases, or MAC address tables, to manage the propagation of frames through the
spanning tree topology. The entries that provide an association between a
MAC destination and the forwarding port interface are stored for a finite period
of 300 seconds (5 minutes) by default. A change in the spanning tree topology
however means that any existing MAC address table entries are likely to
become invalid due to the alteration in the switching path, and therefore must
be renewed.
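
The 300 second default aging period referred to above can be adjusted where required; a hedged sketch, assuming the system view of a Huawei Sx7 series switch:

```
<Huawei> system-view
# Set the MAC address table aging time in seconds (default 300).
[Huawei] mac-address aging-time 300
```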
The example demonstrates an existing spanning tree topology for which
switch B has entries that allow Host A to be reached via interface Gigabit
Ethernet 0/0/3 and Host B via interface Gigabit Ethernet 0/0/2. A failure is
simulated on switch C for which the current root port has become inactive.
This failure causes a recalculation of the spanning tree topology to begin and
predictably the activation of the redundant link between switch C and switch
B.
Following the re-convergence however, it is found that frames from Host A to
Host B are failing to reach their destination. Since the MAC address table
entries have yet to expire based on the 300 second rule, frames reaching
switch B that are destined for Host B continue to be forwarded via port
interface Gigabit Ethernet 0/0/2, and effectively become black holed as
frames are forwarded towards the inactive port interface of switch C.

An additional mechanism must be introduced to handle the MAC entry timeout issue that results in invalid path entries being maintained
following spanning tree convergence. The process implemented is referred to
as the Topology Change Notification (TCN) process, and introduces a new
form of BPDU to the spanning tree protocol operation.
This new BPDU is referred to as the TCN BPDU and is distinguished from the
original STP configuration BPDU through the setting of the BPDU type value
to 128 (0x80). The function of the TCN BPDU is to inform the upstream root
bridge of any change in the current topology, thereby allowing the root to send
a notification within the configuration BPDU to all downstream switches, to
reduce the timeout period for MAC address table entries to the equivalent of
the forward delay timer, or 15 seconds by default.
The flags field of the configuration BPDU contains two fields for Topology
Change (TC) and Topology Change Acknowledgement (TCA). Upon receiving
a TCN BPDU, the root bridge will generate a BPDU with both the TC and TCA
bits set, to respectively notify of the topology change and to inform the
downstream switches that the root bridge has received the TCN BPDU, and
therefore transmission of the TCN BPDU should cease.
The TCA bit shall remain active for a period equal to the Hello timer (2
seconds), following which configuration BPDU generated by the root bridge
will maintain only the TC bit for a duration of (MAX Age + forward delay), or 35
seconds by default.

The effect of the TCN BPDU on the topology change process ensures that the
root bridge is notified of any failure within the spanning tree topology, for
which the root bridge is able to generate the necessary flags to flush the
current MAC address table entries in each of the switches. The example
demonstrates the results of the topology change process and the impact on
the MAC address table. The entries pertaining to switch B have been flushed,
and new updated entries have been discovered for which it is determined that
Host B is now reachable via port interface Gigabit Ethernet 0/0/1.

The Huawei Sx7 series of switches, to which the S5700 series model belongs,
is capable of supporting three forms of spanning tree protocol. Using the stp
mode command, a user is able to define the mode of STP that should be
applied to an individual switch. The default STP mode for Sx7 series switches
is MSTP, and therefore the mode must be reconfigured before STP can be used.
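
Reconfiguring the mode might be sketched as follows:

```
<Huawei> system-view
# Change the spanning tree mode from the MSTP default to STP.
[Huawei] stp mode stp
```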

As part of good switch design practice, it is recommended that the root bridge
be manually defined. The positioning of the root bridge ensures that the
optimal path flow of traffic within the enterprise network can be achieved
through configuration of the bridge priority value for the spanning tree
protocol. The stp priority [priority] command can be used to define the priority
value, where priority refers to an integer value between 0 and 61440,
assigned in increments of 4096. This allows for a total of 16 increments, with
a default value of 32768. It is also possible to assign the root bridge for the
spanning tree through the stp root primary command.
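The priority constraints described above (16 increments of 4096, default 32768) can be illustrated with a small sketch; the helper function is hypothetical and simply checks the rule, it is not a device command:

```python
# Valid STP bridge priorities: multiples of 4096 from 0 to 61440.
valid_priorities = list(range(0, 61440 + 1, 4096))

DEFAULT_PRIORITY = 32768

def is_valid_priority(value):
    """A priority is valid only if it is one of the 16 multiples of 4096."""
    return value in valid_priorities

print(len(valid_priorities))                # 16
print(is_valid_priority(DEFAULT_PRIORITY))  # True
print(is_valid_priority(4097))              # False
```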

As described previously, Huawei Sx7 series switches support three path cost
standards in order to provide compatibility where required, but default to the
IEEE 802.1t path cost standard. The path cost
standard can be adjusted for a given switch using the stp pathcost-standard
{ dot1d-1998 | dot1t | legacy } command, where dot1d-1998, dot1t and legacy
refer to the path cost standards described earlier in this section.
In addition, the path cost of each interface can also be assigned manually to
support a means of detailed manipulation of the stp path cost. This method of
path cost manipulation should be used with great care however as the path
cost standards are designed to implement the optimal spanning tree topology
for a given switching network and manipulation of the stp cost may result in
the formation of a sub-optimal spanning tree topology.
The command stp cost [cost] is used, for which the cost value should follow
the range defined by the path cost standard. If a Huawei legacy standard is
used, the path cost ranges from 1 to 200000. If the IEEE 802.1D standard is
used, the path cost ranges from 1 to 65535. If the IEEE 802.1t standard is
used, the path cost ranges from 1 to 200000000.
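The three cost ranges listed above can be captured in a small lookup sketch (an illustrative check, not device behaviour):

```python
# Valid path cost range per standard, as described in this section.
PATH_COST_RANGES = {
    "legacy": (1, 200000),       # Huawei legacy standard
    "dot1d-1998": (1, 65535),    # IEEE 802.1D standard
    "dot1t": (1, 200000000),     # IEEE 802.1t standard (the default)
}

def cost_in_range(standard, cost):
    """Return True if the cost is valid under the given standard."""
    low, high = PATH_COST_RANGES[standard]
    return low <= cost <= high

print(cost_in_range("dot1d-1998", 65535))  # True
print(cost_in_range("legacy", 300000))     # False
```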

If the root switch on a network is incorrectly configured or attacked, it may
receive a BPDU with a higher priority, and thus the root switch becomes a
non-root switch, which causes a change of the network topology. As a result,
traffic may be switched from high-speed links to low-speed links, causing
network congestion.
To address this problem, the switch provides the root protection function. The
root protection function protects the role of the root switch by retaining the role
of the designated port. When the port receives a BPDU with a higher priority,
the port stops forwarding packets and turns to the listening state, but it still
retains a designated port role. If the port does not receive any BPDU with a
higher priority for a certain period, the port status is restored from the listening
state.
The configured root protection is valid only when the port is the designated
port and the port maintains the role. If a port is configured as an edge port, or
if a command known as loop protection is enabled on the port, root protection
cannot be enabled on the port.

Using the display stp command, the current STP configuration can be
determined. A number of timers exist for managing the spanning tree
convergence, including the hello timer, max age timer, and forward delay, for
which the values displayed represent the default timer settings, and are
recommended to be maintained.
The current bridge ID can be identified for a given switch through the CIST
Bridge configuration, comprised of the bridge ID and MAC address of the
switch. Statistics provide information regarding whether the switch has
experienced topology changes, primarily through the TC or TCN received
value along with the last occurrence as shown in the time since last TC entry.

For individual interfaces on a switch it is possible to display this information
via the display stp command to list all interfaces, or using the display stp
interface <interface> command to define a specific interface. The state of the
interface follows MSTP port states and therefore will display as either
Discarding, Learning or Forwarding. Other valid information such as the port
role and cost for the port are also displayed, along with any protection
mechanisms applied.

1. Following the failure of the root bridge for a spanning tree network, the
next best candidate will be elected as the root bridge. In the event that the
original root bridge becomes active once again in the network, the
process of election for the position of root bridge will occur once again.
This effectively causes network downtime in the switching network as
convergence proceeds.
2. The Root Path Cost is the cost associated with the path back to the root
bridge, whereas the Path Cost refers to the cost value defined for an
interface on a switch, which is added to the Root Path Cost, to define the
Root Path Cost for the downstream switch.

Thank you
www.huawei.com

---

~ Foreword
The original STP standard was defined in 1998 for which a number of
limitations were discovered, particularly in the time needed for
convergence to occur. In light of this, Rapid Spanning Tree Protocol
(RSTP) was introduced. The fundamental characteristics of RSTP are
understood to follow the basis of STP, therefore the characteristic
differences found within RSTP are emphasized within this section.


Objectives
Upon completion of this section, trainees will be able to:
Describe the characteristics associated with RSTP.

Configure RSTP parameters.


STP ensures a loop-free network but has a slow network topology
convergence speed, leading to service deterioration. If the network topology
changes frequently, the connections on the STP capable network are
frequently torn down, causing regular service interruption.
RSTP employs a proposal and agreement process which allows for
immediate negotiation of links to take place, effectively removing the time
taken for convergence based timers to expire before spanning tree
convergence can occur. The proposal and agreement process tends to follow
a cascading effect from the point of the root bridge through the switching
network, as each downstream switch begins to learn of the true root bridge
and the path via which the root bridge can be reached.

Switches operating in RSTP mode implement two separate port roles for
redundancy. The alternate port represents a redundant path to the root bridge
in the event that the current path to the root bridge fails. The backup port role
represents a backup for the path for the LAN segment in the direction leading
away from the root bridge. It can be understood that a backup port represents
a method for providing redundancy to the designated port role in a similar way
that an alternate port provides a method of redundancy to the root port.
The backup port role is capable of existing where a switch has two or more
connections to a shared media device such as that of a hub, or where a single
point-to-point link is used to generate a physical loopback connection between
ports on the same switch. In both instances however the principle of a backup
port existing where two or more ports on a single switch connect to a single
LAN segment still applies.

In RSTP, a designated port on the network edge is called an edge port. An
edge port directly connects to a terminal and does not connect to any other
switching devices. An edge port does not receive configuration BPDU, so it
does not participate in the RSTP calculation.
It can directly change from the Disabled state to the Forwarding state without
any delay, just like an STP-incapable port. If an edge port receives bogus
configuration BPDU from attackers, it is deprived of the edge port attributes
and becomes a common STP port. The STP calculation is implemented
again, causing network flapping.

RSTP introduces a change in port states that are simplified from five to three
types. These port types are based on whether a port forwards user traffic and
learns MAC addresses. If a port neither forwards user traffic nor learns MAC
addresses, the port is in the Discarding state. The port is considered to be in a
learning state where a port does not forward user traffic but learns MAC
addresses. Finally where a port forwards user traffic and learns MAC
addresses, the port is said to be in the Forwarding state.
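The two criteria above fully determine the three RSTP port states, which can be expressed as a small decision function (an illustrative sketch, not device code):

```python
def rstp_port_state(forwards_traffic, learns_mac):
    """Classify an RSTP port state from its two defining behaviours."""
    if forwards_traffic and learns_mac:
        return "Forwarding"
    if learns_mac:
        return "Learning"
    return "Discarding"

print(rstp_port_state(False, False))  # Discarding
print(rstp_port_state(False, True))   # Learning
print(rstp_port_state(True, True))    # Forwarding
```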

The BPDU format employed in STP is also applied to RSTP with variance in
some of the general parameters. In order to distinguish STP configuration
BPDU from Rapid Spanning Tree BPDU, thus known as RST BPDU, the
BPDU type is defined. STP defines a configuration BPDU type of 0 (0x00) and
a Topology Change Notification BPDU (TCN BPDU) type of 128 (0x80), while
RST BPDUs are identified by the BPDU type value 2 (0x02). Within the flags field of
the RST BPDU, additional parameter designations are assigned to the BPDU
fields.
The flags field within STP implemented only the use of the Topology Change
(TC) and Acknowledgement (TCA) parameters as part of the Topology
Change process while other fields were reserved. The RST BPDU has
adopted these fields to support new parameters. These include flags
indicating the proposal and agreement process employed by RSTP for rapid
convergence, the defining of the port role, and the port state.

In STP, after the topology becomes stable, the root bridge sends configuration
BPDU at an interval set by the Hello timer. A non-root bridge does not send
configuration BPDU until it receives configuration BPDU sent from the
upstream device. This renders the STP calculation complicated and time-consuming. In RSTP, after the topology becomes stable, a non-root bridge
sends configuration BPDU at Hello intervals, regardless of whether it has
received the configuration BPDU sent from the root bridge; such operations
are implemented on each device independently.

The convergence of RSTP follows some of the basic principles of STP in
determining initially that all switches upon initialization assert the role of root
bridge, and as such assign each port interface with a designated port role.
The port state however is set to a discarding state until such time as the
peering switches are able to confirm the state of the link.

Each switch proclaiming to be the root bridge will negotiate the port states for
a given LAN segment by generating an RST BPDU with the proposal bit set in
the flags field. When a port receives an RST BPDU from the upstream
designated bridge, the port compares the received RST BPDU with its own
RST BPDU. If its own RST BPDU is superior to the received one, the port
discards the received RST BPDU and immediately responds to the peering
device with its own RST BPDU that includes a set proposal bit.

Since timers do not play a role in much of the RSTP topology convergence
process as found with STP, it is important that the potential for switching loops
during port role negotiation be restricted. This is managed by the
implementation of a synchronization process that determines that following
the receipt of a superior BPDU containing the proposal bit, the receiving
switch must set all downstream designated ports to discarding as part of the
sync process.
Where the downstream port is an alternate port or an edge port however, the
status of the port role remains unchanged. The example demonstrates the
temporary transition of the designated port on the downstream LAN segment
to a discarding state, and therefore blocking any frame forwarding during the
upstream proposal and agreement process.

The confirmed transition of the downstream designated port to a discarding
state allows for an RST BPDU to be sent in response to the proposal sent by
the upstream switch. During this stage the port role of the interface has been
determined to be the root port and therefore the agreement flag and port role
of root are set in the flags field of the RST BPDU that is returned in response
to the proposal.

During the final stage of the proposal and agreement process, the RST BPDU
containing the agreement bit is received by the upstream switch, allowing the
designated port to transition immediately from a discarding state to forwarding
state. Following this, the downstream LAN segment(s) will begin to negotiate
the port roles of the interfaces using the same proposal and agreement
process.

In STP, a device has to wait a Max Age period before determining a
negotiation failure. In RSTP, if a port does not receive configuration BPDUs
sent from the upstream device for three consecutive Hello intervals, the
communication between the local device and its peer fails, causing the
proposal and agreement process to be initialized in order to discover the port
roles for the LAN segment.

Topology changes affect RSTP similarly to the way STP is affected, however
there are some minor differences between the two. In the example, a failure of
the link has occurred on switch C. Switch A and switch C will detect the link
failure immediately and flush the address entries for ports connected to that
link. An RST BPDU will begin to negotiate the port states as part of the
proposal and agreement process, following which a Topology Change
notification will occur, together with the forwarding of the RST BPDU
containing the agreement.
This RST BPDU will have both the Agreement bit and also the TC bit set to 1,
to inform upstream switches of the need to flush their MAC entries on all port
interfaces except the port interface on which the RST BPDU containing the
set TC bit was received.
The TC bit will be set in the periodically sent RST BPDU, and forwarded
upstream for a period equivalent to Hello Time + 1 second, during which all
relevant interfaces will be flushed and shall proceed to re-populate MAC
entries based on the new RSTP topology. The red (darker) x in the example
highlights which interfaces will be flushed as a result of the topology change.

The implementation of STP within an RSTP based switching topology is
possible; however, it is not recommended since any limitation pertaining to STP
becomes apparent within the communication range of the STP enabled
switch. A port involved in the negotiation process for establishing its role within
STP must wait for a period of up to 50 seconds before convergence can be
completed, as such the benefits of RSTP are lost.
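The 50 second figure follows from the default STP timers, assuming the usual worst-case breakdown of one Max Age expiry followed by the Listening and Learning states:

```python
# Default STP timers (seconds)
MAX_AGE = 20
FORWARD_DELAY = 15

# Worst case for an STP port: wait for Max Age to expire, then pass
# through the Listening and Learning states (one Forward Delay each).
worst_case_convergence = MAX_AGE + 2 * FORWARD_DELAY
print(worst_case_convergence)  # 50
```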

The configuration of the spanning tree mode of Sx7 switches requires that the
stp mode command be used to set the mode to RSTP. In doing so the Sx7
series switch will generate RST BPDU in relation to RSTP, as opposed to
other spanning tree implementations. This command is configured from the
system-view and should be applied to all switches participating in the rapid
spanning tree topology.

The display stp command will provide relative information regarding RSTP
configuration as many of the parameters follow the principle STP architecture.
The mode information will determine as to whether a switch is currently
operating using RSTP.

An edge interface defines a port that does not participate in the spanning tree
topology. These interfaces are used by end systems to connect to the
switching network for the purpose of forwarding frames. Since such end
systems do not require to negotiate port interface status, it is preferable that
the port be transitioned directly to a forwarding state to allow frames to be
forwarded over this interface immediately.
The stp edged-port enable command is used to configure a port as an
edge port, as all ports are considered non-edge ports on a switch by default.
In order to disable the edge port the stp edged-port disable command is used.
These commands apply only to a single port interface on a given switch. It is
important to note that the edge port behavior is associated with RSTP as
defined in the IEEE 802.1D-2004 standards documentation, however due to
the VRP specific application of the underlying RSTP state machine to STP
(which also results in the RSTP port states being present in STP), it is also
possible to apply the RSTP edge port settings to STP within Huawei Sx7
series products.

In the event that multiple ports on a switch are to be configured as edge ports,
the stp edged-port default command is applied which enforces that all port
interfaces on a switch become edge ports. It is important to run the stp edged-port disable command on the ports that need to participate in STP calculation
between devices, so as to avoid any possible loops that may be caused as a
result of STP topology calculations.

The port that is directly connected to a user terminal such as a PC or a file
server, is understood to be configured as an edge port to ensure fast
transition of the port status. Usually, no BPDU are sent to edge ports,
however if the switch is attacked by pseudo BPDU, the switch sets edge ports
as non-edge ports. After these edge ports receive a BPDU the spanning tree
topology is recalculated, and as a result network flapping occurs.
To defend against pseudo BPDU attacks, RSTP provides BPDU protection.
After BPDU protection is enabled, the switch shuts down the edge port that
receives BPDU and informs any active network management station (NMS).
The edge ports that are shut down by the switch can be manually started only
by the network administrator. The stp bpdu-protection command should be
used to enable BPDU protection and is configured globally within the system-view.

The switch maintains the status of the root port and blocked ports by
continually receiving BPDU from the upstream switch. If the root switch
cannot receive BPDU from the upstream switch due to link congestion or
unidirectional link failure, the switch re-selects a root port. The previous root
port then becomes a designated port and the blocked ports change to the
forwarding state. As a result, loops may occur on the network.
The switch provides loop protection to prevent network loops. After the loop
protection function is enabled, the root port is blocked if it cannot receive
BPDU from the upstream switch. The blocked port remains in the blocked
state and does not forward packets. This prevents loops on the network. If an
interface is configured as an edge interface or root protection is enabled on
the interface, loop protection cannot be enabled on the interface. The stp
loop-protection command should be applied to enable this feature in the
interface-view.

Validation of the RSTP configuration for a given interface is attained through
the display stp interface <interface> command. The associated information
will identify the port state of the interface as either Discarding, Learning or
Forwarding. Relevant information for the port interface including the port
priority, port cost, the port status as an edge port or supporting point-to-point
etc, are defined.

1. The sync is a stage in the convergence process that involves the blocking
of designated ports while RST BPDU are transmitted containing proposal
and agreement messages to converge the switch segment. The process
is designed to ensure that all interfaces are in agreement as to their port
roles in order to ensure that no switching loops will occur once the
designated port to any downstream switch is unblocked.


---

~ Foreword
The forwarding of frames and switching has introduced the data link layer
operations, and in particular the role of IEEE 802 based standards as the
supporting underlying communication mechanism, over which upper layer
protocol suites generally operate. With the introduction of routing, the
principles that define upper layer protocols and internetwork communication
are established. An enterprise network domain generally consists of
multiple networks for which routing decisions are needed to ensure
optimal routes are used, in order to forward IP packets (or datagrams) to
intended network destinations. This section introduces the foundations on
which such IP routing is based.


Objectives
Upon completion of this section, trainees will be able to:
Explain the principles that govern IP routing decisions.
Explain the basic requirements for packet forwarding.


An enterprise network generally can be understood as an instance of an
autonomous system. As defined within RFC 1930, an autonomous system, or
AS as it is also commonly known, is a connected group of one or more IP
prefixes run by one or more network operators which has a SINGLE and
CLEARLY DEFINED routing policy.
The concept of autonomous systems originally considered the existence of a
single routing protocol, however as networks have evolved, it is possible to
support multiple routing protocols that interoperate through the injection of
routes from one protocol to another. A routing policy can be understood to be
a set of rules that determine how traffic is managed within an autonomous
system, to which a single, or multiple operator(s) must adhere to.

The principles surrounding switching have dealt mainly with the forwarding of
traffic within the scope of a local area network and the gateway, which has
until now defined the boundary of the broadcast domain. Routers are the
primary form of network layer device used to define the gateway of each local
area network and enable IP network segmentation. Routers generally function
as a means for routing packets from one local network to the next, relying on
IP addressing to define the IP network to which packets are destined.

The router is responsible for determining the forwarding path via which
packets are to be sent en route to a given destination. It is the responsibility of
each router to make decisions as to how the data is forwarded. Where a
router has multiple paths to a given destination, route decisions based on
calculations are made to determine the best next hop to the intended
destination. The decisions governing the route that should be taken can vary
depending on the routing protocol in use, ultimately relying on metrics of each
protocol to make decisions in relation to varying factors such as bandwidth
and hop count.

Routers forward packets based on routing tables and a forwarding information
base (FIB), and maintain at least one routing table and one FIB. Routers
select routes based on routing tables and forward packets based on the FIB.
A router uses a local routing table to store protocol routes and preferred
routes. The router then sends the preferred routes to the FIB to guide packet
forwarding. The router selects routes according to the priorities of protocols
and costs stored in the routing table. A routing table contains key data for
each IP packet.
The destination & mask are used in combination to identify the destination IP
address or the destination network segment where the destination host or
router resides. The protocol (Proto) field, indicates the protocol through which
routes are learned. The preference (Pre) specifies the preference value that is
associated with the protocol, and is used to decide which protocol is applied
to the routing table where two protocols offer similar routes. The router selects
the route with the highest preference (the smallest value) as the optimal route.
A cost value represents the metric that is used to distinguish when multiple
routes to the same destination have the same preference, the route with the
lowest cost is selected as the optimal route. A next hop value indicates the IP
address of the next network layer device or gateway that an IP packet passes
through. In the example given a next hop of 127.0.0.1 refers to the local
interface of the device as being the next hop. Finally the interface parameter
indicates the outgoing interface through which an IP packet is forwarded.

In order to allow packets to reach their intended destination, routers must
make specific decisions regarding the routes that are learned and which of
those routes are applied. A router is likely to learn about the path to a given
network destination via routing information that is advertised from neighboring
routers; alternatively, static routes may be manually implemented through
administrator intervention.
Each entry in the FIB table contains the physical or logical interface through
which a packet is sent in order to reach the next router. An entry also indicates
whether the packet can be sent directly to a destination host in a directly
connected network. The router performs an "AND" operation on the
destination address in the packet and the network mask of each entry in the
FIB table.
The router then compares the result of the "AND" operation with the entries in
the FIB table to find a match. The router chooses the optimal route to forward
packets according to the best or "longest" match. In the example, two entries
to the network 10.1.1.0 exist with a next hop of 20.1.1.2. Forwarding to the
destination of 10.1.1.1 will result in the longest match principle being applied,
for which the network address 10.1.1.0/30 provides the longest match.
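The longest-match lookup described above can be sketched in Python using the standard ipaddress module; the table entries are the two routes of the example, and the membership test mirrors the "AND" comparison (an address belongs to a network when address AND mask equals the network address):

```python
import ipaddress

# FIB entries from the example: two routes to 10.1.1.0 with different masks.
fib = [
    ipaddress.ip_network("10.1.1.0/24"),
    ipaddress.ip_network("10.1.1.0/30"),
]

def longest_match(destination, entries):
    """Return the most specific entry containing the destination address."""
    addr = ipaddress.ip_address(destination)
    candidates = [n for n in entries if addr in n]
    return max(candidates, key=lambda n: n.prefixlen) if candidates else None

# 10.1.1.1 falls within both entries; the /30 is the longest match.
print(longest_match("10.1.1.1", fib))  # 10.1.1.0/30
```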

A routing table may contain the routes originating from multiple protocols to a
given destination. Not all routing protocols are considered equal, and where
the longest match for multiple routes of differing routing protocols to the same
destination are equal, a decision must be made regarding which routing
protocol (including static routes) will take precedence.
Only one routing protocol at any one time determines the optimal route to a
destination. To select the optimal route, each routing protocol (including the
static route) is configured with a preference (the smaller the value, the higher
the preference). When multiple routing information sources coexist, the route
with the highest preference is selected as the optimal route and added to the
local routing table.
In the example, two protocols are defined that provide a means of discovery
of the 10.1.1.0 network via two different paths. The path defined by the RIP
protocol appears to provide a more direct route to the intended destination,
however due to the preference value, the route defined by the OSPF protocol
is preferred and therefore installed in the routing table as the preferred route.
A summary of the default preference values of some common routing
mechanisms are provided to give an understanding of the default preference
order.
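The preference comparison in the example can be sketched as follows; the preference values used here are the common Huawei defaults for the protocols concerned, stated as assumptions rather than taken from the summary table itself:

```python
# Assumed default route preferences (smaller value wins) for the
# protocols in the example; these follow common Huawei defaults.
PREFERENCE = {"Direct": 0, "OSPF": 10, "Static": 60, "RIP": 100}

# Two candidate routes to 10.1.1.0/24 learned via different protocols.
candidates = [("RIP", "10.1.1.0/24"), ("OSPF", "10.1.1.0/24")]

# The route whose protocol has the smallest preference value is
# installed in the routing table as the preferred route.
best = min(candidates, key=lambda route: PREFERENCE[route[0]])
print(best[0])  # OSPF
```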

Where the route is unable to be distinguished by either a longest match value
or preference, the cost metric is taken as the decision maker in identifying the
route that should be installed in the routing table. Cost represents the length
of a path to a destination network.
Each segment provides a cost metric value along a path that is combined to
identify the cost of the route. Another common factor is network bandwidth, on
which the cost mechanism is sometimes based. A link with a higher speed
(capacity) represents a lower cost value, allowing preference of one path over
another to be made, whilst links of equal speed are given a balanced cost for
efficient load balancing purposes. A lower metric always takes precedence
and therefore the metric of 50 as shown in the example, defines the optimal
route to the given destination for which an entry can be found in the routing
table.

The capability of a router to forward an IP packet to a given destination
requires that certain forwarding information be known. Any router wishing to
forward an IP packet must firstly be aware of a valid destination address to
which the packet is to be forwarded, this means that an entry must exist in the
routing table that the router is able to consult. This entry must also identify the
interface via which IP packets must be transmitted and the next hop along the
path, to which the packet is expected to be received before consultation for
the next forwarding decision is performed.

1. Routing decisions are made initially based on the longest match value,
regardless of the preference value assigned for routes to the same
network. If the longest match value for two routes to the same destination
is equal, the preference shall be used, where the preference is also equal,
the metric shall be used. In cases where the metric value is also the
same, protocols will commonly apply a form of load balancing of data over
the equal cost links.
2. The preference is typically used to denote the reliability of a route over
routes that may be considered less reliable. Vendors of routing
equipment may however assign different preference values for protocols
that are supported within each vendor's own product. The preference
values of some common routing protocols supported by Huawei routing
devices can be found within this section.


---

~ Foreword
The implementation of routes within the IP routing table of a router can be
defined manually using static routes or through the use of dynamic
routing protocols. The manual configuration of routes enables direct
control over the routing table, however may result in route failure should a
router's next hop fail. The configuration of static routes however is often
used to complement dynamic routing protocols to provide alternative
routes in the event dynamically discovered routes fail to provide a valid
next hop. Knowledge of the various applications of static routes and
configuration is necessary for effective network administration.


Objectives
Upon completion of this section, trainees will be able to:

Explain the different applications for static routes.

Successfully configure static routes in the IP routing table.


A static route is a special route that is manually configured by a network
administrator. The disadvantage of static routes is that they cannot adapt to
the change in a network automatically, so network changes require manual
reconfiguration. Static routes are suited to networks with comparatively simple
structures. It is not advisable to configure and maintain static routes for a
network with a complex structure. Static routes do however reduce the effect
of bandwidth and CPU resource consumption that occurs when other
protocols are implemented.

Static routes can be applied to networks that use both serial and Ethernet
based media, however in each situation the conditions of applying the static
route vary in which either the outbound interface or the next-hop IP address
must be defined.
The serial medium represents a form of point-to-point (P2P) interface for
which the outbound interface must be configured. For a P2P interface, the
next-hop address is specified after the outbound interface is specified. That is,
the address of the remote interface (interface on the peer device) connected
to this interface is the next-hop address.
For example, the protocol used to encapsulate over the serial medium is the
Point-to-Point protocol (PPP). The remote IP address is obtained following
PPP negotiation, therefore it is necessary to specify only the outbound
interface. The example also defines a form of point-to-point Ethernet
connection, however Ethernet represents a broadcast technology in nature
and therefore the principles of point-to-point technology do not apply.

In the case of broadcast interfaces such as Ethernet, the next hop must be
defined. Where the Ethernet interface is specified as the outbound interface,
multiple next hops are likely to exist and the system will not be able to decide
which next hop is to be used. In determining the next hop, a router is able to
identify the local connection over which the packet should be received.
In the example, packets intended for the destination of 192.168.2.0/24 should
be forwarded to the next hop of 10.0.123.2 to ensure delivery. Alternatively
reaching the destination of 192.168.3.0 requires that the next hop of
10.0.123.3 be defined.

The configuration of a static route is achieved using the ip route-static
ip-address { mask | mask-length } { nexthop-address | interface-type
interface-number [ nexthop-address ] } command, where ip-address refers to
the network or host destination address. The mask field can be defined either
as a dotted-decimal mask value or as a prefix length. In the case of a
broadcast medium such as Ethernet, the next hop address is used. Where a
serial medium is used, the interface-type and interface-number (e.g. Serial
1/0/0) are supplied to the command to define the outgoing interface.
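As a sketch of the two cases (the Ethernet route uses the addresses from this section's example; the serial destination network and interface numbering are assumed for illustration only):

```
# Ethernet (broadcast) medium - the next-hop address must be specified
[RTA] ip route-static 192.168.2.0 24 10.0.123.2

# Serial (point-to-point) medium - the outbound interface is sufficient
[RTA] ip route-static 192.168.4.0 24 Serial 1/0/0
```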

Where equal cost paths exist between the source and destination networks,
load balancing can be implemented to allow traffic to be carried over both
links. To achieve this using static routes, the routes must share the same
longest-match prefix length, preference and metric value. A separate static
route must be configured for each next hop, or for each outbound interface in
the case of serial media.
The example demonstrates how two ip route-static commands are
implemented, each defining the same IP destination address and mask, but
alternate next hop locations. This ensures that the longest match (/24) is
equal, and naturally so is the preference value, since both routes are static
routes that carry a default preference of 60. The cost of both paths is also
equal allowing load balancing to occur.
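Based on the behavior described above, the two equal-cost static routes might be configured as follows (a sketch; the destination and next-hop addresses follow this section's examples and depend on the actual topology):

```
[RTA] ip route-static 192.168.2.0 24 10.0.123.2
[RTA] ip route-static 192.168.2.0 24 10.0.123.3
```

Both entries share the same destination, prefix length, preference (60) and metric, so both are installed and traffic is balanced across the two next hops.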

The routing table can be queried to verify the results by running the display ip
routing-table command after the static routes are configured. The static route
is displayed in the routing table, and results show two entries to the same
destination, with matching preference and metric values. The different next
hop addresses and variation in the outbound interface identifies the two paths
that are taken, and confirms that load balancing has been achieved.

Static routes can be manipulated in a number of ways to achieve routing
requirements. The preference of a static route can be changed so that one
static route is preferred over another, or, where static routes are used
together with other protocols, to ensure that either the static route or the
alternative routing protocol is preferred.
The default preference value of a static route is 60, therefore by adjusting this
preference value, a given static route can be treated with unequal preference
over any other route, including other static routes. In the example given, two
static routes exist over two physical LAN segments. While normally both would
be considered equal, the second route has been given a lower preference (a
higher preference value) and is therefore not installed in the routing table.
The principle of a floating static route is that this less preferred route
will be installed in the routing table should the primary route ever fail.
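A minimal sketch of a floating static route, reusing the addresses from the load-balancing example (the preference value 100 matches the value discussed below; any value greater than 60 achieves the same effect):

```
[RTA] ip route-static 192.168.2.0 24 10.0.123.2
[RTA] ip route-static 192.168.2.0 24 10.0.123.3 preference 100
```

Only the first route is installed while its link is up; the second waits as a backup.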

Using the display ip routing-table command, the effect of the changed
preference value that creates the floating static route can be observed.
Normally two equal-cost routes to the same destination would be displayed,
each with its own next hop and outbound interface. Here however only one
entry can be seen, carrying the default static route preference value of 60.
Since the second static route now has a preference value of 100, it is not
included in the routing table, as it is no longer considered the optimal
route.

In the event that the primary static route should fail as a result of physical link
failure or through the disabling of an interface, the static route will no longer
be able to provide a route to the intended destination and therefore will be
removed from the routing table. The floating static route is likely to become
the next best option for reaching the intended destination, and will be added
to the routing table to allow packets to be transmitted over a second
alternative path to the intended destination, allowing continuity in light of any
failure.
When the physical connection for the original route is restored, the original
static route takes over again from the floating static route: it is
reinstalled in the routing table, and the floating static route once more
stands by as a backup.

The default static route is a special form of static route that is applied to
networks in which the destination address is unknown, in order to allow a
forwarding path to be made available. This provides an effective means of
routing traffic for an unknown destination to a router or gateway that may have
knowledge of the forwarding path within an enterprise network.
The default route relies on the all-networks address 0.0.0.0/0, which matches
any destination for which no specific match can be found in the routing
table, and provides a default forwarding path to which packets for all
unknown network destinations are routed. In the example, a default static
route has been implemented on RTA, so that packets received for an unknown
network are forwarded to the next hop 10.0.12.2.
In terms of routing table decision making, as a static route, the default route
maintains a preference of 60 by default, however operates as a last resort in
terms of the longest match rule in the route matching process.
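Matching the example, the default static route on RTA would take the following form (a sketch; the next-hop address is the one given in this section):

```
[RTA] ip route-static 0.0.0.0 0 10.0.12.2
```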

Once configured, the static route will appear in the routing table of the
router; the display ip routing-table command is used to view this detail. In
the example, all packets whose destination matches no other route in the
routing table will be forwarded to the next hop 10.0.12.2 via the interface
GigabitEthernet 0/0/0.

1. A floating static route can be implemented by adjusting the preference
value of one of two static routes that would otherwise support load
balancing.
2. A default static route can be implemented in the routing table by
specifying the any-network address 0.0.0.0 as the destination address, along
with the next hop address to which packets captured by this default static
route are to be forwarded.

Thank you
www.huawei.com

~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
Distance vector routing protocols are a form of dynamic routing protocol
that work on the principle of the Bellman-Ford algorithm to define the
route that packets should take to reach other network destinations. The
application of the Routing Information Protocol (RIP) is often applied in
many small networks and therefore remains a valid and popular protocol
even though the protocol itself has been in existence much longer than
other dynamic routing protocols in use today. The characteristics of such
distance vector protocols are represented in this section through the
Routing Information Protocol.


~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:
Describe the behavior of the Routing Information Protocol.

Successfully configure RIP routing and associated attributes.


~~ HUAWEI

The Routing Information Protocol, or RIP as it is commonly known, represents
one of the simpler forms of routing protocol applied to enterprise networks.
RIP operates as an interior gateway protocol (IGP) based on the principles of
the Bellman-Ford algorithm; it operates on the basis of distance vector,
defining the path that traffic should take according to the optimal distance,
measured using a fixed metric value.
The RIP protocol contains a minimal number of parameters and requires
limited bandwidth, configuration and management time, making it ideal for
smaller networks. RIP however was not designed with the capability to handle
subnets, to interact with other routing protocols, or to authenticate its
messages, since its creation predated the introduction of these principles.

Routers that are RIP enabled participate in the advertisement of routing
information to neighboring routers. Route advertisements are generated that
contain information regarding the networks known by the sending router, and
the distance to reach those networks. RIP enabled routers advertise to each
other, but carry only their best routing information in these advertisements.

Each router advertisement contains a number of routes, each associated with
a given metric. The metric is used to determine the distance between a router
and the destination with which the route advertisement is associated. In RIP
the metric is associated with a hop count mechanism where each hop
between routers represents a fixed hop count, typically of one. This metric
does not take into account any other factors such as the bandwidth for each
link or any delay that may be imposed to the link. In the example, router RTB
learns of a network via two different interfaces, each providing a hop metric
through which, the best route to the destination can be discovered.

As each router processes a route advertisement, the metric value is
incremented before the advertisement is forwarded to the neighboring router.
Where routes become inaccessible, however, there is the potential for the
hop count to grow towards infinity.
In order to resolve the problem of infinite route metrics, a value
representing infinity was defined, restricting the number of possible hops to
a limit of 15. This limit is deemed sufficient for the size of network to
which the RIP routing protocol is suited, and beyond the scale that any such
network is expected to reach.
A hop count of 16 would assume the route to be unreachable and cause the
network status for the given network to be changed accordingly. Routing loops
can occur through a router sending packets to itself, between peering routers
or as a result of traffic flow between multiple routers.

The example demonstrates how a loop can potentially form where RIP is the
routing protocol. A network (10.0.0.0/8) has been learned through the sending
of route advertisements from RTA to RTC, for which RTC will have updated its
routing table with the network and the metric of 1, in order to reach the
destination.
In the event of failure of the connection between router RTA and the network
to which it is directly connected, the router will immediately detect loss of the
route and consider it unreachable. Since RTC still possesses knowledge of the
network, a route advertisement is forwarded containing information regarding
network 10.0.0.0/8. Upon reception of this, RTA will learn a new route entry
for 10.0.0.0/8 with a metric of 2. Since RTC originally learned the route
from RTA, the change must be propagated back to RTC as well, with a route
advertisement being sent to RTC carrying a metric of 3.
This will repeat for an infinite period of time. A metric of 16 allows a cap to be
placed on infinity, thereby allowing any route reaching a hop count of 16 to be
deemed unreachable.

Mechanisms have been implemented as part of the RIP routing protocol to
address the routing loop issues that occur when routes become inaccessible.
One of these mechanisms is known as split horizon and operates on the
principle that a route that is learned on an interface, cannot be advertised
back over that same interface. This means that network 10.0.0.0/8 advertised
to router RTC cannot be advertised back to RTA over the same interface,
however will be advertised to neighbors connected via all other interfaces.

The implementation of the poison reverse mechanism allows erroneous routes to
be timed out almost instantly: a route learned on an interface is advertised
back to the originating router with a metric of 16, immediately removing any
consideration of a better path should the route become invalid.
In the example, RTA advertises a metric of 1 for the network to RTC, while
RTC advertises the same network back to RTA to ensure that if 10.0.0.0/8
network fails, RTA will not discover a better path to this network via any other
router. This involves however an increase in the size of the RIP routing
message, since routes containing the network information received now must
also carry the network update, deeming the route unreachable, back to the
neighboring router from which the advertisement originated. In Huawei
AR2200 series routers, split horizon and poisoned reverse cannot be applied
at the same time, if both are configured, only poisoned reverse will be
enabled.

The default behavior of RIP involves updates of the routing table being sent
periodically to neighbors as a route advertisement, which by default is set to
occur approximately every 30 seconds. Where links fail however, it also
requires that this period be allowed to expire before informing the neighboring
routers of the failure.
Triggered updates shorten the network convergence time: when the local
routing information changes, the local router immediately notifies its
neighbors of the change by sending a triggered update packet, rather than
waiting for the next periodic update.

RIP is a UDP-based protocol. Each router that uses RIP uses a routing
process that involves all communications directed at another router being sent
to port 520, including all routing update messages. RIP generally transmits
routing update messages as broadcasts, destined for the limited broadcast
address 255.255.255.255. Each router generates its own broadcast of routing
updates following every update period.
The command and version fields are used once per packet, with the
command field detailing whether the packet is a request or response
message, for which all update messages are considered response messages.
The version refers to the version of RIP, which in this case is version 1. The
remaining fields are used to support the network advertisements for which up
to 25 route entries can be advertised in a single RIP update message.
The address family identifier lists the protocol type that is being supported by
RIP, which in this example is IP. The remaining fields are used to carry the IP
network address and the hop metric that contains a value between 1 and 15
(inclusive) and specifies the current metric for the destination; or the value 16
(infinity), which indicates that the destination is not reachable.

The introduction of a new version of RIP, known as RIP version 2, does not
change RIP as such but rather provides extensions to the current RIP protocol
to allow for a number of ambiguities to be resolved. The format of the RIP
datagram applies the same principles of the original RIP protocol with the
same command parameters. The version field highlights the extended fields
are part of version 2.
The address family identifier continues to refer to the protocol being supported
and also may be used in support of authentication information as explained
shortly. The route tag is another feature that is introduced to resolve
limitations that exist with support for interaction between autonomous systems
in RIP, the details of which however fall outside of the scope of this course.
Additional parameter extensions have been made part of the route entry
including the Subnet Mask field which contains the subnet mask that is
applied to the IP address, to define the network or sub-network portion of the
address.
The Next Hop field now allows for the immediate next hop IP address, to
which packets destined for the destination address specified in a route entry,
should be forwarded. In order to reduce the unnecessary load on hosts that
are not listening for RIP version 2 packets, an IP multicast address is used
in place of broadcasts for periodic updates; the multicast address used is
224.0.0.9.

Authentication represents a means by which malicious packets can be filtered,
by ensuring that all packets received can be verified as originating from a
valid peer through the use of a key value. This key value originally
represents a plaintext password string that can be configured for each
interface, as recognized by the authentication type of 2. The authentication
configured between peers must match before RIP messages can be
successfully processed. For authentication processing, if the router is not
configured to authenticate RIP version 2 messages, then RIP version 1 and
unauthenticated RIP version 2 messages will be accepted; authenticated RIP
version 2 messages shall be discarded.
If the router is configured to authenticate RIP version 2 messages, then RIP
version 1 messages and RIP version 2 messages which pass authentication
testing shall be accepted; unauthenticated and failed authentication RIP
version 2 messages shall be discarded.
RIP version 2 originally supported only simple plaintext authentication that
provided only minimal security since the authentication string could easily be
captured. With the increased need for security for RIP, cryptographic
authentication was introduced, initially with the support for a keyed-MD5
authentication (RFC 2082) and further enhancement through the support of
HMAC-SHA-1 authentication, introduced as of RFC 4822. While Huawei
AR2200 series routers are capable of supporting all forms of authentication
mentioned, the example given demonstrates the original authentication for
simplicity.
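A sketch of the original plaintext (simple) authentication on a VRP-based router follows; the interface name and password string are placeholders, and the same key must be configured on the peering interface before RIP messages will be accepted (the exact command options may vary by device and software version):

```
[RTA] interface GigabitEthernet 0/0/0
[RTA-GigabitEthernet0/0/0] rip authentication-mode simple Huawei123
```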

If a network has multiple redundant links, a maximum number of equal-cost
routes can be configured to implement load balancing. In this manner,
network resources are more fully utilized, situations where some links are
overloaded while others are idle can be avoided, and long delays in packet
transmissions can be prevented. The default and maximum number of equal
cost routes supported by RIP is 8 at any one time.
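The maximum is adjusted under the RIP process view; as an illustrative sketch (the process ID and the value 6 are assumptions, and the permitted range may vary by device):

```
[RTA] rip 1
[RTA-rip-1] maximum load-balancing 6
```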

All routers supporting the RIP routing process must first enable the process.
The rip [ process-id ] command is used to do so, with the process-id
identifying the specific process to which the configuration is associated. If
the process ID is not specified, it defaults to 1. Where variation in the
process ID exists, the local router will create separate RIP routing table
entries for each process that is defined.
The version 2 command enables the RIP version 2 extension to RIP allowing
for additional capability for subnets, authentication, inter-autonomous system
communication etc. The network <network-address> command specifies the
network address for which RIP is enabled, and must be the address of the
natural network segment.
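Putting the three commands above together, a basic RIPv2 configuration might look as follows (the natural network 10.0.0.0 follows this section's examples):

```
[RTA] rip 1
[RTA-rip-1] version 2
[RTA-rip-1] network 10.0.0.0
```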

RIP is also capable of supporting manipulation of RIP metrics to control the
flow of traffic within a RIP routing domain. One means to achieve this is to
adjust the metric associated with a route entry when it is received by a
router.
When an interface receives a route, RIP adds the additional metric of the
interface to the route, and then installs the route into the routing table, thereby
increasing the metric of an interface which also increases the metric of the
RIP route received by the interface.
The rip metricin <metric value> command allows for manipulation of the
metric, where the metric value refers to the metric that is to be applied. It
should also be noted that for the rip metricin command the metric value is
added to the metric value currently associated with the route. In the example,
the route entry for network 10.0.0.0/8 contains a metric of 1, and is
manipulated upon arrival at the interface of RTC, resulting in the metric value
of 3 being associated with the route.
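Matching the example, the additional inbound metric would be applied on the receiving interface of RTC (the interface name is an assumption): a route arriving with a metric of 1 has 2 added, and is installed with a metric of 3.

```
[RTC] interface GigabitEthernet 0/0/0
[RTC-GigabitEthernet0/0/0] rip metricin 2
```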

The rip metricout command allows for the metric to be manipulated for the
route when a RIP route is advertised. Increasing the metric of an interface
also increases the metric of the RIP route sent on the interface but does not
affect the metric of the route in the routing table of the router to which the rip
metricout command is applied.
In its most basic form the rip metricout command defines the value that must
be adopted by the forwarded route entry, but is also capable of supporting
filtering mechanisms to selectively determine to which routes the metric
should be applied. The general behavior of RIP is to increment the metric by
one before forwarding the route entry to the next hop. If the rip metricout
command is configured, only the metric value referenced in the command is
applied.
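As an illustrative sketch (interface name and metric value are assumptions), routes advertised out of this interface would carry the configured metric of 3 in place of the usual increment by one, without affecting the local routing table:

```
[RTA] interface GigabitEthernet 0/0/0
[RTA-GigabitEthernet0/0/0] rip metricout 3
```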

The configuration of both split horizon and poisoned reverse are performed on
a per interface basis, with the rip split-horizon command being enabled by
default (with exception to NBMA networks) in order to avoid many of the
routing loop issues that have been covered within this section. The
implementation of both split horizon and poisoned reverse is not permitted on
the AR2200 series router, therefore where poisoned reverse is configured on
the interface using the rip poison-reverse command, split horizon will be
disabled.
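Both mechanisms are configured per interface; a sketch of enabling poison reverse (the interface name is an assumption):

```
[RTA] interface GigabitEthernet 0/0/0
[RTA-GigabitEthernet0/0/0] rip poison-reverse
```

Since split horizon is enabled by default, applying this command on an AR2200 means only poison reverse takes effect on the interface.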

The configuration of the routing information protocol on a per interface basis


can be verified through the display rip <process_id> interface <interface>
verbose command. The associated RIP parameters can be found in the
displayed output, including the RIP version applied along with other
parameters such as whether poison-reverse and split-horizon have been
applied to the interface. Where the display command references that both the
poison-reverse and split-horizon as both enabled, only the poison-reverse
command will take effect.

The rip output command is applied to the interface of a router participating
in RIP routing and allows RIP to forward update messages out from the
interface. Where the undo rip output command is applied to an interface, the
RIP update messages will cease to be forwarded out from that interface. This
is useful where an enterprise network does not wish to share its internal
routes via an interface that connects to an external network, in order to
protect the network; a default route is often applied on that interface
instead for traffic destined for external networks.

The undo rip input command allows an interface to reject all RIP update
messages and prevent RIP information from being added to the routing table
for a given interface. This may be applied in situations where the flow of traffic
may require to be controlled via certain interfaces only, or prevent RIP from
being received by the router completely. As such any RIP update messages
sent to the interface will be discarded immediately. The rip input command
can be used to re-enable an interface to resume receipt of RIP updates.
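On an external-facing interface, both directions can be suppressed as follows (a sketch; the interface name is an assumption, and rip output / rip input restore the default behavior):

```
[RTA] interface GigabitEthernet 0/0/1
[RTA-GigabitEthernet0/0/1] undo rip output
[RTA-GigabitEthernet0/0/1] undo rip input
```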

The display rip <process_id> interface <interface> verbose command can also
be used to confirm the implementation of restrictions on the interface.
Where the interface has been configured with the undo rip input, the capability
to receive RIP routes will be considered disabled as highlighted under the
Input parameter.

The silent interface allows RIP route updates to be received and used to
update the routing table of the router, but prevents the interface from
sending its own RIP updates. The silent-interface command has a higher
precedence than both the rip input and rip output commands. Where the
silent-interface all command is applied, it takes the highest priority,
meaning no single interface can be activated. The silent-interface command
must instead be applied per interface to allow for a combination of active
and silent interfaces.
A common application of the silent interface is for non broadcast multi access
networks. Routers may be required to receive RIP update messages but wish
not to broadcast/multicast its own updates, requiring instead that a
relationship with the peering router be made through the use of the peer <ip
address> command.
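A sketch of this arrangement under the RIP process view (the interface name and peer address are assumptions for illustration):

```
[RTA] rip 1
[RTA-rip-1] silent-interface GigabitEthernet 0/0/0
[RTA-rip-1] peer 10.0.123.2
```

The silenced interface still receives updates, while updates to the specified neighbor are sent as unicast rather than broadcast/multicast.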

The display rip command provides a more comprehensive router-based output
through which global parameters can be verified, along with certain
interface-based parameters. The implementation of the silent-interface
command on a given interface, for example, can be observed through this
command.

1. The metric is incremented prior to the forwarding of the route
advertisement from the outbound interface.
2. The advertisement of RIP routes is achieved through the configuration of
the network command. For each network that is to be advertised by a router, a
network command should be configured.

Thank you
www.huawei.com

~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
OSPF is an interior gateway protocol (IGP) designed for IP networks, that
is founded on the principles of link state routing. The link state behavior
provides many alternative advantages for medium and even large
enterprise networks. Its application as an IGP is introduced along with
information relevant to the understanding of OSPF convergence and
implementation, for supporting OSPF in enterprise networks.


~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:
Explain the OSPF convergence process
Describe the different network types supported by OSPF

Successfully configure single area OSPF networks


~~ HUAWEI

Open Shortest Path First, or OSPF, is regarded as a link state protocol
capable of quickly detecting topological changes within the autonomous system
and establishing loop-free routes in a short period of time, with minimal
additional communication overhead for negotiating topology changes between
peering routers. OSPF also deals with scalability issues that occur when
communication between an expanding number of routers becomes so
extreme that it begins to lead to instability within the autonomous system. This
is managed through the use of areas that limits the scope of router
communication to an isolated group within the autonomous system allowing
small, medium and even large networks to be supported by OSPF. The
protocol is also able to work over other protocols such as MPLS, a label
switching protocol, to provide network scalability even over geographically
disperse locations. In terms of optimal path discovery, OSPF provides rich
route metrics that provides more accuracy than route metrics applied to
protocols such as RIP to ensure that routes are optimized, based on not only
distance but also link speed.

The convergence of OSPF requires that each and every router actively
running the OSPF protocol have knowledge of the state of all interfaces and
adjacencies (relationship between the routers that they are connected to), in
order to establish the best path to every network. This is initially formed
through the flooding of Link State Advertisements (LSA) which are units of
data that contain information such as known networks and link states for each
interface within a routing domain. Each router will use the LSA received to
build a link state database (LSDB) that provides the foundation for
establishing the shortest path tree to each network, the routes from which are
ultimately incorporated into the IP routing table.

The router ID is a 32-bit value assigned to each router running the OSPF
protocol. This value uniquely identifies the router within an Autonomous
System. The router ID can be assigned manually, or it can be taken from a
configured address. If a logical (loopback) interface has been configured, the
router ID will be based upon the IP address of the highest configured logical
interface, should multiple logical interfaces exist.
If no logical interfaces have been configured, the router will use the highest IP
address configured on a physical interface. Any router running OSPF can be
restarted using the graceful restart feature to renew the router ID should a
new router ID be configured. It is recommended that the router ID be
configured manually to avoid unexpected changes to the router ID in the
event of interface address changes.
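A sketch of the recommended manual approach on a VRP-based router: a loopback interface carries a stable address, which is then assigned explicitly as the router ID when the OSPF process is created (all addresses and the area/network statements are illustrative):

```
[RTA] interface LoopBack 0
[RTA-LoopBack0] ip address 10.0.1.1 32
[RTA-LoopBack0] quit
[RTA] ospf 1 router-id 10.0.1.1
[RTA-ospf-1] area 0
[RTA-ospf-1-area-0.0.0.0] network 10.0.1.0 0.0.0.255
```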

OSPF supports various network types, and in each case will apply a different
behavior in terms of how neighbor relationships are formed and how
communication is facilitated. Ethernet represents a form of broadcast network
that involves multiple routers connected to the same network segment. One of
the primary issues faced regards how communication occurs between the
neighboring routers in order to minimize OSPF routing overhead. If an
Ethernet network is established, the broadcast network type will be applied
automatically in OSPF.

Where two routers are established in a point-to-point topology, the applied
network type will vary depending on the medium and link layer technology
applied. As mentioned, the use of an Ethernet medium will result in the
broadcast network type for OSPF being assigned automatically. Where the
physical medium is serial, the network type is considered point-to-point.
Common forms of protocols that operate over serial media at the link layer
include Point to Point Protocol (PPP) and High-level Data Link Control
(HDLC).

OSPF may operate over multi access networks that do not support
broadcasts. Such networks include Frame Relay and ATM that commonly
operate using hub and spoke type topologies, which rely on the use of virtual
circuits in order for communication be to achieved. OSPF may specify two
types of networks that can be applied to links connected to such
environments. The Non Broadcast Multi Access (NBMA) network type
emulates a broadcast network and therefore requires each peering interface
be part of the same network segment. Unlike a broadcast network, the NBMA
forwards OSPF packets as a unicast, thereby requiring multiple instances of
the same packet be generated for each destination.
Point-to-Multipoint may also be applied as the network type for each interface,
in which case a point-to-point type behavior is applied. This means that each
peering must be associated with different network segments. Designated
Routers are associated with broadcast networks, and therefore are
implemented by NBMA networks. Most importantly is the positioning of a DR
which must be assigned on the hub node of the hub and spoke architecture to
ensure all nodes can communicate with the DR.
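On VRP-based routers, the network type applied to an interface can be overridden at the interface level; as an illustrative sketch (the interface name and the choice of point-to-multipoint are assumptions for a hub-and-spoke deployment):

```
[RTA] interface Serial 1/0/0
[RTA-Serial1/0/0] ospf network-type p2mp
```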

In order to address and optimize the communication of OSPF over broadcast
networks, OSPF implements a Designated Router (DR) that acts as a central
point of communication for all other routers associated with a broadcast
network on at least one interface. In a theoretical broadcast network without
a DR, communication follows the n(n-1)/2 formula, where n represents the
number of router interfaces participating in OSPF. In the example given, this
amounts to 6 adjacencies between all routers. When the DR is applied, all
routers establish a relationship with the DR, which is responsible for acting
as the central point of communication for all neighboring routers in the
broadcast network.
A Backup Designated Router (BDR) is a router that is elected to take over
from the DR should it fail. As such it is necessary that the BDR establish a link
state database as that of the DR to ensure synchronization. This means that
all neighboring routers must also communicate with the BDR in a broadcast
network. With the application of the DR and BDR, the number of associations
is reduced from 6 to 5 since RTA and RTB need only communicate with the
DR and BDR. This may appear to have a minimal effect however where this is
applied to a network containing for example 10 routers, i.e. (10*9)/2 the
resulting communication efficiency becomes apparent.

OSPF creates adjacencies between neighboring routers for the purpose of
exchanging routing information. Not every two neighboring routers will become
adjacent, particularly where neither of the two routers is the DR or BDR.
Such routers are known as DROthers; two DROthers acknowledge each other's
presence but do not establish full communication, a state known as the
neighbor state. DROther routers do however form a full adjacency with both
the DR and BDR, allowing the link state databases of the DR and BDR to be
synchronized with each of the DROther routers. This synchronization is
achieved by establishing an adjacent state with each DROther.
An adjacency is bound to the network that the two routers have in common. If
two routers have multiple networks in common, they may have multiple
adjacencies between them.

Each router participating in OSPF will transition through a number of link
states to achieve either a neighbor state or an adjacent state. All routers begin
in the down state upon initialization and go through a neighbor discovery
process, which involves firstly making a router's presence known within the
OSPF network via a Hello packet. In performing this action the router will
transition to an init state.
Once the router receives a response in the form of a Hello packet containing
the router ID of the router receiving the response, a 2-way state will be
achieved and a neighbor relationship formed. In the case of NBMA networks,
an attempt state is achieved when communication with the neighbor has
become inactive and an attempt is being made to re-establish communication
through periodic sending of Hello packets. Routers that do not achieve an
adjacent relationship will remain in a neighbor state with a 2-way state of
communication.
Routers such as DR and BDR will build an adjacent neighbor state with all
other neighboring routers, and therefore must exchange link state information
in order to establish a complete link state database. This requires that peering
routers that establish an adjacency first negotiate for exchange of link state
information (ExStart) before proceeding to exchange summary information
regarding the networks they are aware of. Neighbors may identify routes they
are either not aware of or do not have up to date information for, and therefore
request additional details for these routes as part of the loading state. A fully
synchronized relationship between neighbors is determined by the full state at
which time both peering routers can be considered adjacent.
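The state progression above can be summarized as an ordered list. The sketch below is illustrative only (not an implementation of the OSPF state machine); the point it captures is that a pair of DROther routers stops at 2-Way, while a relationship involving the DR or BDR continues through ExStart, Exchange and Loading to Full.

```python
# Illustrative sketch of the OSPF neighbor state progression described above.

NEIGHBOR_STATES = [
    "Down", "Attempt", "Init", "2-Way",
    "ExStart", "Exchange", "Loading", "Full",
]

def final_state(forms_adjacency):
    """Return the state a neighbor relationship settles in."""
    return "Full" if forms_adjacency else "2-Way"

# Two DROther routers acknowledge each other but stay at 2-Way:
print(final_state(False))   # 2-Way
# A DROther and the DR synchronize databases and reach Full:
print(final_state(True))    # Full
```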

Neighbor discovery is achieved through the use of Hello packets that are
generated at intervals based on a Hello timer, which by default is every 10
seconds for broadcast and point-to-point network types, whereas for NBMA
and point-to-multipoint network types the hello interval is 30 seconds. The
hello packet contains this interval period, along with a router priority field that,
together with the router ID, allows neighbors to identify the DR and BDR in
broadcast and NBMA networks.
A period specifying how long a hello packet is valid before the neighbor is
considered lost must also be defined, and this is carried as the router dead
interval within the hello packet. This dead interval is set by default to be four
times the hello interval, thus being 40 seconds for broadcast and point to point
networks, and 120 seconds for NBMA and Point to Multipoint networks.
Additionally, the router ID of both the DR and BDR are carried, where
applicable, based on the network for which the hello packet is generated.
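The default timer values above can be captured as a small lookup table. The table and helper below are a sketch using the figures from the text; the dead interval is simply four times the hello interval.

```python
# Default OSPF Hello intervals (seconds) per network type, as stated above.
HELLO_INTERVAL = {
    "broadcast": 10,
    "point-to-point": 10,
    "nbma": 30,
    "point-to-multipoint": 30,
}

def dead_interval(network_type):
    """Dead interval defaults to four times the hello interval."""
    return 4 * HELLO_INTERVAL[network_type]

print(dead_interval("broadcast"))   # 40
print(dead_interval("nbma"))        # 120
```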

Following neighbor discovery, the DR election may occur depending on the
network type of the network segment. Broadcast and NBMA networks will
perform DR election. The election of the DR relies on a priority that is
assigned for each interface that participates in the DR election process. This
priority value is set to 1 by default, and a higher priority represents a better
DR candidate.
If a priority of 0 is set, the router interface will no longer participate in the
election to become the DR or BDR. Where point-to-point connections (using
Ethernet as the physical medium) are set to support a broadcast network
type, unnecessary DR election will occur, which generates excessive protocol
traffic. It is therefore recommended that the network type be configured as
point-to-point in such cases.
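The selection rule described above can be sketched as follows. This is a simplified model of my own: it ranks eligible interfaces by priority and then by router ID, and omits the real protocol's refinement that an existing DR/BDR is not pre-empted by a later, higher-priority arrival. The single-digit router IDs keep the string comparison valid for the example.

```python
# Hedged sketch of DR/BDR selection: highest interface priority wins,
# priority 0 excludes a router, and the highest router ID breaks ties.

def elect_dr_bdr(candidates):
    """candidates: list of (router_id, priority) tuples on one segment."""
    eligible = [c for c in candidates if c[1] > 0]
    # Rank by priority first, then router ID, both descending.
    ranked = sorted(eligible, key=lambda c: (c[1], c[0]), reverse=True)
    dr = ranked[0][0] if ranked else None
    bdr = ranked[1][0] if len(ranked) > 1 else None
    return dr, bdr

print(elect_dr_bdr([("1.1.1.1", 1), ("2.2.2.2", 1), ("3.3.3.3", 0)]))
# -> ('2.2.2.2', '1.1.1.1'): 3.3.3.3 is excluded, router ID breaks the tie
```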

In order to improve the efficiency of transition to a new Designated Router, a
Backup Designated Router is assigned for each broadcast and NBMA
network. The Backup Designated Router is also adjacent to all routers on the
network, and becomes the Designated Router when the previous Designated
Router fails. If there were no Backup Designated Router present, new
adjacencies would have to be formed between the new Designated Router
and all other routers attached to the network.
Part of the adjacency forming process involves the synchronizing of link-state
databases, which can potentially take quite a long time. During this time, the
network would not be available for the transit of data. The Backup Designated
Router obviates the need to form these adjacencies, since they already exist.
This means the period of disruption in transit traffic lasts only as long as it
takes to flood the new LSAs (which announce the new Designated Router).
The Backup Designated Router is also elected by the Hello packet. Each
Hello packet has a field that specifies the Backup Designated Router for the
network.

In a link-state routing algorithm, it is very important for all routers' link-state
databases to stay synchronized. OSPF simplifies this by requiring that only
adjacent routers remain synchronized. The synchronization process begins as
soon as the routers attempt to bring up the adjacency. Each router describes
its database by sending a sequence of Database Description packets to its
neighbor. Each Database Description packet describes a set of LSAs
belonging to the router's database.
When the neighbor sees an LSA that is more recent than its own database
copy, it makes a note that this newer LSA should be requested. This sending
and receiving of Database Description packets is called the "Database
Exchange Process". During this process, the two routers form a master/slave
relationship. Each Database Description packet has a sequence number.
Database Description packets sent by the master are acknowledged by the
slave through echoing of the sequence number.

During and after the Database Exchange Process, each router has a list of
those LSAs for which the neighbor has more up-to-date instances. The Link
State Request packet is used to request the pieces of the neighbor's
database that are more up-to-date. Multiple Link State Request packets may
need to be used.
Link State Update packets implement the flooding of LSAs. Each Link State
Update packet carries a collection of LSAs one hop further from their origin.
Several LSAs may be included in a single packet. On broadcast networks, the
Link State Update packets are multicast. The destination IP address specified
for the Link State Update Packet depends on the state of the interface. If the
interface state is DR or Backup, the address AllSPFRouters (224.0.0.5)
should be used. Otherwise, the address AllDRouters (224.0.0.6) should be
used. On non-broadcast networks, separate Link State Update packets must
be sent, as unicast, to each adjacent neighbor (i.e. those in a state of
Exchange or greater). The destination IP addresses for these packets are the
neighbors' IP addresses.
When the Database Description Process has completed and all Link State
Requests have been satisfied, the databases are deemed synchronized and
the routers are marked fully adjacent. At this time the adjacency is fully
functional and is advertised in the two routers' router-LSAs.

OSPF calculates the cost of an interface based on the bandwidth of the
interface. The calculation formula is: cost of the interface = reference
bandwidth value / interface bandwidth. The reference bandwidth value is
configurable, with a default of 100 Mbps. With the formula
100000000/bandwidth, this gives a cost metric of 1562 for a 64 kbit/s serial
port, 48 for an E1 (2.048 Mbit/s) interface, and a cost of 1 for Ethernet
(100 Mbit/s) or higher.
To be able to distinguish between higher speed interfaces, it is imperative
that the cost metric be adjusted to match the speeds currently supported. The
bandwidth-reference command allows the metric to be altered by changing
the reference value of the bandwidth in the cost formula; the higher the value,
the more accurate the metric. Where speeds of 10 Gbit/s are being supported,
it is recommended that the bandwidth-reference value be increased to 10000
(i.e. 10^10/bandwidth) to provide metrics of 1, 10 and 100 for 10 Gbit/s,
1 Gbit/s and 100 Mbit/s bandwidth links respectively.
Alternatively the cost can be manually configured by using the ospf cost
command to define a cost value for a given interface. The cost value ranges
from 1 to 65535 with a default cost value of 1.
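The cost formula above can be checked numerically. The sketch below uses the figures from the text and clamps the result to a minimum of 1, since OSPF costs are positive integers; the function name and kbit/s units are my own choices for the illustration.

```python
# Sketch of: cost = reference bandwidth / interface bandwidth, minimum 1.
# bandwidth_kbps is the interface speed in kbit/s; reference_mbps defaults
# to the 100 Mbps reference value described above.

def ospf_cost(bandwidth_kbps, reference_mbps=100):
    raw = (reference_mbps * 1000) // bandwidth_kbps
    return max(raw, 1)

print(ospf_cost(64))        # 1562 for a 64 kbit/s serial port
print(ospf_cost(2048))      # 48 for an E1
print(ospf_cost(100_000))   # 1 for 100 Mbit/s Ethernet

# Raising bandwidth-reference to 10000 restores differentiation at 10G:
print(ospf_cost(10_000_000, reference_mbps=10_000))  # 1 for 10 Gbit/s
print(ospf_cost(1_000_000, reference_mbps=10_000))   # 10 for 1 Gbit/s
print(ospf_cost(100_000, reference_mbps=10_000))     # 100 for 100 Mbit/s
```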

A router that has achieved a full state is considered to have received all link
state advertisements (LSA) and synchronized its link state database (LSDB)
with that of the adjacent neighbors. The link state information collected in the
link state database is then used to calculate the shortest path to each
network. Each router only relies on the information in the LSDB in order to
independently calculate the shortest path to each destination, as opposed to
relying on select route information from peers which is deemed to be the best
route to a destination. The calculation of the shortest path tree however
means that each router must utilize additional resources to achieve this
operation.

Smaller networks may involve a select number of routers which operate as
part of the OSPF domain. These routers are considered to be part of an area
which is represented by an identical link state database for all routers within
the domain. As a single area, OSPF can be assigned any area number,
however for the sake of future design implementation it is recommended that
this area be assigned as area 0.

The need to forward link state advertisements and the subsequent calculation
of the shortest path based on the link state database becomes increasingly
complex as more and more routers become a part of the OSPF domain. As
such, OSPF is capable of supporting a hierarchical structure to limit the size
of the link state database, and the number of calculations that must be
performed when determining the shortest path to a given network.
The implementation of multiple areas allows an OSPF domain to
compartmentalize the calculation process based on a link state database that
is identical only within each area, but still provides the information needed to
reach all destinations within the OSPF domain. Certain routers known as
area border routers (ABR) operate between areas and maintain a separate
link state database for each area to which the ABR is connected. Area 0 must
be configured where multi-area OSPF exists, and all traffic sent between
areas is generally required to traverse area 0, in order to ensure routing loops
do not occur.

Establishing OSPF within an AS domain requires that each router that is to
participate in OSPF first enable the OSPF process. This is achieved using the
ospf [process id] command, where the process ID can be assigned and
represents the process with which the router is associated. If routers are
assigned different process ID numbers, separate link state databases will be
created based on each individual process ID. Where no process ID is
assigned, the default process ID of 1 will be used. The router ID can also be
assigned using the command ospf [process id] [router-id <router-id>], where
<router-id> refers to the ID that is to be assigned to the router, bearing in
mind that, where priorities are equal, a higher router ID value is favored in DR
election on broadcast and NBMA networks.
The parentheses in the command prompt reflect the OSPF process view, the
level at which OSPF parameters can be configured, including the area to
which each link (or interface) is associated. Networks that are to be
advertised into a given area are determined through the use of the network
command. The mask is represented as a wildcard mask, in which a bit value
of 0 marks the bits that are fixed (e.g. the network ID), while the bits set to 1
in the mask may take any value in the address.
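The wildcard-mask rule above can be sketched as a bitwise check: 0 bits in the mask must match exactly, while 1 bits are "don't care". The function name and addresses below are illustrative; the stdlib ipaddress module is used only to convert dotted notation to integers.

```python
# Sketch of wildcard-mask matching as used by the OSPF network command.
import ipaddress

def wildcard_match(address, network, wildcard):
    a = int(ipaddress.IPv4Address(address))
    n = int(ipaddress.IPv4Address(network))
    w = int(ipaddress.IPv4Address(wildcard))
    # Bits where the wildcard is 0 must agree between address and network.
    return (a & ~w) == (n & ~w)

# network 10.0.1.0 0.0.0.255 covers any host in 10.0.1.0/24:
print(wildcard_match("10.0.1.37", "10.0.1.0", "0.0.0.255"))   # True
print(wildcard_match("10.0.2.37", "10.0.1.0", "0.0.0.255"))   # False
```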

Configuration of the neighbor relationship between OSPF peers is verified
through the display ospf peer command. The attributes associated with the
peer connection are listed to provide a clear explanation of the configuration.
Important attributes include the area in which the peer association is
established, the state of the peer establishment, the master/slave association
for adjacency negotiation in order to reach the full state, and also the DR and
BDR assignments which highlights that the link is associated with a broadcast
network type.

OSPF is capable of supporting authentication to ensure that routes are
protected from malicious actions that may result from manipulation or damage
to the existing OSPF topology and routes. OSPF allows for the use of simple
authentication as well as cryptographic authentication, which provides
enhanced protection against potential attacks.
Authentication is assigned on a per-interface basis. The command for simple
authentication is ospf authentication-mode { simple [ plain <plain-text> |
[ cipher ] <cipher-text> ] | null }, where plain applies a clear-text password,
cipher a cipher-text password to hide the original contents, and null indicates
null authentication.
Cryptographic authentication is applied using the ospf authentication-mode
{ md5 | hmac-md5 } [ key-id { plain <plain-text> | [ cipher ] <cipher-text> } ]
command. MD5 represents a cryptographic algorithm for securing
authentication over the link, with its configuration demonstrated within the
given example. The key ID identifies a unique authentication key of the
cipher authentication on the interface, and must be consistent with that of the
peer.
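The reason both peers must share the same key ID and key string can be illustrated with a keyed digest. The sketch below is not the exact OSPF digest construction over the packet header; it simply shows that an HMAC-MD5 digest verifies only when both sides hold the same key. The key and packet bytes are hypothetical placeholders.

```python
# Illustrative sketch of keyed-digest authentication (not the RFC-exact
# OSPF packet digest): both peers must share the same key to verify.
import hashlib
import hmac

def hmac_md5_digest(packet_bytes, key):
    return hmac.new(key, packet_bytes, hashlib.md5).hexdigest()

key = b"huawei-secret"            # hypothetical shared key
pkt = b"ospf-hello-packet-body"   # placeholder packet contents

sender = hmac_md5_digest(pkt, key)
receiver = hmac_md5_digest(pkt, key)
print(sender == receiver)                          # True: keys match
print(hmac_md5_digest(pkt, b"wrong") == sender)    # False: keys differ
```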

Where authentication is applied, it is possible to implement debugging on the
terminal to view the authentication process. Since the debugging may involve
many events, the debugging ospf packet command should be used to specify
that debugging should only be performed for OSPF specific packets. As a
result the authentication process can be viewed to validate that the
authentication configuration has been successfully implemented.

It is often necessary to control the flow of routing information and limit the
range to which such routing protocols can extend. This is particularly the
case where connecting with external networks from which knowledge of
internal routes is to be protected. In order to achieve this, the silent-interface
command can be applied as a means to restrict all OSPF communication via
the interface on which the command is implemented.
After an OSPF interface is set to be in the silent state, the interface can still
advertise its direct routes. Hello packets on the interface, however, will be
blocked and no neighbor relationship can be established on the interface. The
command silent-interface [interface-type interface-number] can be used to
define a specific interface that is to restrict OSPF operation, or alternatively
the command silent-interface all can be used to ensure that all interfaces
under a specific process be restricted from participating in OSPF.

The implementation of the silent interface on a per interface basis means that
the specific interface should be observed to validate the successful
application of the silent interface command. Through the display ospf
<process_id> interface <interface> command, where the interface represents
the interface to which the silent interface command has been applied, it is
possible to validate the implementation of the silent interface.

1. The dead interval is a timer value that is used to determine whether the
propagation of OSPF Hello packets has ceased. This value is equivalent
to four times the Hello interval, or 40 seconds by default on broadcast
networks. In the event that the dead interval counts down to zero, the
OSPF neighbor relationship will terminate.
2. The DR and BDR use the multicast address 224.0.0.6 to listen for link
state updates when the OSPF network type is defined as broadcast.

Thank you
www.huawei.com

---

~ Foreword
An enterprise network may often consist of a substantial number of host
devices, each requiring network parameters in the form of IP addressing
and additional network configuration information. Manual allocation is
often a tedious and inaccurate business which can lead to many end
stations facing address duplication or failure to reach services necessary
for smooth network operation. DHCP is an application layer protocol that
is designed to automate the process of providing such configuration
information to clients within a TCP/IP network. DHCP therefore aids in
ensuring correct addressing is allocated, and reduces the burden on
administration for all enterprise networks. This section introduces the
application of DHCP within the enterprise network.


Objectives
Upon completion of this section, trainees will be able to:
Describe the function of DHCP in the enterprise network.
Explain the leasing process of DHCP.

Configure DHCP pools for address leasing.


Enterprise networks often comprise multiple end systems that require IP
address assignment in order to connect with the network segment to which
the end system is attached. For small networks, a minimal number of end
systems attached to the network allows for simple management of the
addressing for all end systems.
For medium and large-scale networks however, it becomes increasingly
difficult to manually configure IP addresses with increased probability of
duplication of addressing, as well as misconfiguration due to human error, and
therefore the necessity to implement a centralized management solution over
the entire network becomes ever more prominent. The Dynamic Host
Configuration Protocol (DHCP) is implemented as a management solution to
allow dynamic allocation of addresses for existing fixed and temporary end
systems accessing the network domain.
In some cases there may also be more hosts than available IP addresses on
a network, and only a few hosts on the network require fixed IP addresses.
The remaining hosts cannot each be allocated a fixed IP address and need to
dynamically obtain IP addresses from the DHCP server.

DHCP supports three mechanisms for IP address allocation. The method of
automatic allocation involves DHCP assigning a permanent IP address to a
client. The use of dynamic allocation employs DHCP to assign an IP address
to a client for a limited period of time or at least until the client explicitly
relinquishes the IP address.
The third mechanism is referred to as manual allocation, for which a client's IP
address is assigned by the network administrator, and DHCP is used only to
handle the assignment of the manually defined address to the client. Dynamic
allocation is the only one of the three mechanisms that allows automatic reuse
of an address that is no longer needed by the client to which it was assigned.
Thus, dynamic allocation is particularly useful for assigning an address to a
client that will be connected to the network only temporarily, or for sharing a
limited pool of IP addresses among a group of clients that do not need
permanent IP addresses.
Dynamic allocation may also be a good choice for assigning an IP address to
a new client being permanently connected to a network, where IP addresses
are sufficiently scarce that addresses are able to be reclaimed when old
clients are retired. Manual allocation allows DHCP to be used to eliminate the
error-prone process of manually configuring hosts with IP addresses in
environments where it may be more desirable to meticulously manage IP
address assignment.

A DHCP server and a DHCP client communicate with each other by
exchanging a range of message types. Initial communication relies on the
transmission of a DHCP Discover message. This is broadcast by a DHCP
client to locate a DHCP server when the client attempts to connect to a
network for the first time. A DHCP Offer message is then sent by a DHCP
server to respond to a DHCP Discover message and carries configuration
information.
A DHCP Request message is sent after a DHCP client is initialized, in which it
broadcasts a DHCP Request message to respond to the DHCP Offer
message sent by a DHCP server. A request message is also sent after a
DHCP client is restarted, at which time it broadcasts a DHCP Request
message to confirm the configuration, such as the assigned IP address. A
DHCP Request message is also sent after a DHCP client obtains an IP
address, in order to extend the IP address lease.
A DHCP ACK message is sent by a DHCP server to acknowledge the DHCP
Request message from a DHCP client. After receiving a DHCP ACK message,
the DHCP client obtains the configuration parameters, including the IP
address. Not all cases however will result in the IP address being assigned to
a client. The DHCP NAK message is sent by a DHCP server in order to reject
the DHCP Request message from a DHCP client when the IP address
assigned to the DHCP client expires, or in the case that the DHCP client
moves to another network.
A DHCP Decline message is sent by a DHCP client, to notify the DHCP server
that the assigned IP address conflicts with another IP address. The DHCP
client will then apply to the DHCP server for another IP address.

A DHCP Release message is sent by a DHCP client to release its IP address.
After receiving a DHCP Release message, the DHCP server may assign this
IP address to another DHCP client.
A final message type is the DHCP Inform message, which is sent by a DHCP
client to obtain other network configuration information, such as the gateway
address and DNS server address, after the DHCP client has obtained an IP
address.

The AR2200 and S5700 series devices can both operate as a DHCP server to
assign IP addresses to online users. Address pools are used in order to define
the addresses that should be allocated to end systems. There are two general
forms of address pools which can be used to allocate addresses, the global
address pool and the interface address pool.
The use of an interface address pool enables only end systems connected to
the same network segment as the interface to be allocated IP addresses from
this pool. The global address pool once configured allows all end systems
associated with the server to obtain IP addresses from this address pool, and
is implemented using the dhcp select global command to identify the global
address pool. In the case of the interface address pool, the dhcp select
interface command identifies the interface and network segment to which the
interface address pool is associated.
The interface address pool takes precedence over the global address pool. If
an address pool is configured on an interface, the clients connected to the
interface obtain IP addresses from the interface address pool even if a global
address pool is configured. On the S5700 switch, only logical VLANIF
interfaces can be configured with interface address pools.

The acquisition of an IP address and other configuration information requires
that the client make contact with a DHCP server and retrieve through request
the addressing information to become part of the IP domain. This process
begins with the IP discovery process in which the DHCP client searches for a
DHCP server. The DHCP client broadcasts a DHCP Discover message and
DHCP servers respond to the Discover message.
The discovery of one or multiple DHCP servers results in each DHCP server
offering an IP address to the DHCP client. After receiving the DHCP Discover
message, each DHCP server selects an unassigned IP address from the IP
address pool, and sends a DHCP Offer message with the assigned IP
address and other configuration information to the client.
If multiple DHCP servers send DHCP Offer messages to the client, the client
accepts the first DHCP Offer message received. The client then broadcasts a
DHCP Request message with the selected IP address. After receiving the
DHCP Request message, the DHCP server that offers the IP address sends a
DHCP ACK message to the DHCP client. The DHCP ACK message contains
the offered IP address and other configuration information.
Upon receiving the DHCP ACK message, the DHCP client broadcasts
gratuitous ARP packets to detect whether any host is using the IP address
allocated by the DHCP server. If no response is received within a specified
time, the DHCP client uses this IP address. If a host is using this IP address,
the DHCP client sends a DHCP Decline packet to the DHCP server, reporting
that the IP address cannot be used, following which the DHCP client applies
for another IP address.
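The four-message exchange above (Discover, Offer, Request, ACK) can be modeled as a simple ordered dialogue. This sketch shows ordering only; real DHCP carries these messages over UDP ports 67/68, and the names below come straight from the text.

```python
# Minimal sketch of the successful DHCP lease exchange described above.
LEASE_SEQUENCE = [
    ("client", "DHCP Discover"),   # broadcast: find a server
    ("server", "DHCP Offer"),      # propose an unassigned address
    ("client", "DHCP Request"),    # broadcast: accept the first offer
    ("server", "DHCP ACK"),        # confirm address and parameters
]

def next_message(previous):
    """Return the message that follows `previous` in a successful lease."""
    names = [m for _, m in LEASE_SEQUENCE]
    i = names.index(previous)
    return names[i + 1] if i + 1 < len(names) else None

print(next_message("DHCP Discover"))   # DHCP Offer
print(next_message("DHCP ACK"))        # None: the exchange is complete
```
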

After obtaining an IP address, the DHCP client enters the binding state. Three
timers are set on the DHCP client to control lease update, lease rebinding,
and lease expiration. When assigning an IP address to a DHCP client, a
DHCP server specifies values for the timers.
If the DHCP server does not set the values for the timers, the DHCP client
uses the default values. The default values define that when 50% of the lease
period remains, the lease renewal process should begin, at which point a
DHCP client is expected to renew its IP address lease. The DHCP client
automatically sends a DHCP Request message to the DHCP server that has
allocated an IP address to the DHCP client.
If the IP address is valid, the DHCP server replies with a DHCP ACK message
to entitle the DHCP client a new lease, and then the client re-enters the
binding state. If the DHCP client receives a DHCP NAK message from the
DHCP server, it enters the initializing state.

After the DHCP client sends a DHCP Request message to extend the lease,
the DHCP client remains in an updating state and waits for a response. If the
DHCP client does not receive a reply from the DHCP server before the
rebinding timer expires, which by default occurs when 12.5% of the lease
period remains, the DHCP client assumes that the original DHCP server is
unavailable and starts to broadcast a DHCP Request message, to which any
DHCP server on the network can reply with a DHCP ACK or NAK message.
If the received message is a DHCP ACK message, the DHCP client returns to
the binding state and resets the lease renewal timer and server binding timer.
If all of the received messages are DHCP NAK messages, the DHCP client
goes back to the initializing state. At this time, the DHCP client must stop
using this IP address immediately and request a new IP address.

The lease timer is the final timer in the expiration process, and if the DHCP
client does not receive a response before the lease expiration timer expires,
the DHCP client must stop using the current IP address immediately and
return to the initializing state. The DHCP client then sends a DHCP
DISCOVER message to apply for a new IP address, thus restarting the DHCP
cycle.
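The three timers above can be computed from the lease length using the defaults stated in the text: renewal at 50% of the lease, rebinding when 12.5% remains (i.e. at 87.5% elapsed), and expiry at 100%. The function below is an illustrative sketch; times are seconds from the start of the lease.

```python
# Sketch of the default DHCP lease timers described above.
def lease_timers(lease_seconds):
    return {
        "renew (T1)": lease_seconds * 0.5,     # renewal process begins
        "rebind (T2)": lease_seconds * 0.875,  # 12.5% of the lease remains
        "expire": float(lease_seconds),        # address must be given up
    }

one_day = 24 * 3600          # the default one-day lease
timers = lease_timers(one_day)
print(timers["renew (T1)"])   # 43200.0 (12 hours in)
print(timers["rebind (T2)"])  # 75600.0 (21 hours in)
print(timers["expire"])       # 86400.0 (24 hours in)
```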

There are two forms of pool configuration supported in DHCP: a global pool
and an interface-based pool. The dhcp select interface command is used to
associate an interface with the interface address pool in order to provide
configuration information to connected hosts. The example demonstrates how
interface GigabitEthernet 0/0/0 has been assigned as part of an interface
address pool.

Each DHCP server will define one or multiple pools which may be associated
globally or with a given interface. For determining the pool attributes
associated with an interface, the display ip pool interface <interface>
command is used. The DHCP pool will contain information including the lease
period for each IP address that is leased, as well as the pool range that is
supported. In the event that other attributes are supported for DHCP related
propagation to clients such as with the IP gateway, subnet mask, and DNS
server, these will also be displayed.

The example demonstrates the DHCP configuration for a global address pool
that is assigned to the network 10.2.2.0. The dhcp enable command is the
prerequisite for configuring DHCP-related functions, and takes effect only
after the dhcp enable command is run. A DHCP server requires the ip pool
command be configured in the system view to create an IP address pool and
set IP address pool parameters, including a gateway address, the IP address
lease period etc. The configured DHCP server can then assign IP addresses
in the IP address pool to clients.
A DHCP server and its client may reside on different network segments. To
enable the client to communicate with the DHCP server, the gateway-list
command is used to specify an egress gateway address for the global
address pool of the DHCP server. The DHCP server can then assign both an
IP address and the specified egress gateway address to the client. The
address is configured in dotted decimal notation for which a maximum of eight
gateway addresses, separated by spaces, can be configured.

The information regarding a pool can also be observed through the use of
the display ip pool command. This command provides an overview of the
general configuration parameters supported by a configured pool, including
the gateway and subnet mask for the pool, as well as general statistics that
allow an administrator to monitor current pool usage, determine the number
of addresses allocated, and review other usage statistics.

1. IP addresses that are statically assigned for server use, such as the
addresses of any local DNS servers, should be excluded from the address
pool in order to avoid address conflicts.
2. The default lease period for DHCP-assigned IP addresses is set at a
period equal to one day.

Thank you
www.huawei.com

---

~ Foreword
Early development of standards introduced the foundations of a file
transfer protocol, with the aim of promoting the sharing of files between
remote locations that were not impacted by variations in file storage
systems among hosts. The resulting FTP application was eventually
adopted as part of the TCP/ IP protocol suite. The FTP service remains an
integral part of networking as an application for ensuring the reliable and
efficient transfer of data, commonly implemented for effective backup and
retrieval of files and logs, thereby improving overall management of the
enterprise network. This section therefore introduces the means for
engineers and administrators to implement FTP services within Huawei
products.


Objectives
Upon completion of this section, trainees will be able to:
Explain the file transfer process of FTP.

Configure the FTP service on supporting Huawei devices.


The implementation of an FTP server within the enterprise network allows for
effective backup and retrieval of important system and user files, which may
be used to maintain the daily operation of an enterprise network. Typical
examples of how an FTP server may be applied include the backing up and
retrieval of VRP image and configuration files. This may also include the
retrieval of log files from the FTP server to monitor the FTP activity that has
occurred.

The transfer of FTP files relies on two TCP connections. The first of these is a
control connection set up between the FTP client and the FTP server. The
server listens on well-known port 21 and waits for a connection request from
the client; the client then initiates the connection to this port. The control
connection remains open for the duration of the session, carrying commands
from the client to the server, as well as responses from the server back to the
client.
The server uses TCP port 20 for data connections. Generally, the server can
either open or close a data connection actively. For files sent from the client to
the server in stream mode, however, only the client can close the data
connection. FTP transfers each file as a stream of bytes, using an End of File
(EOF) indicator to identify the end of a file, so a new data connection is
required for each file or directory listing to be transferred. A data connection
therefore exists only while a file is actually being transferred between the
client and the server.

Huawei devices support two FTP transmission modes: ASCII mode and binary
mode. ASCII mode is used for text, in which data is converted from the
sender's character representation to 8-bit ASCII before transmission, with line
endings (carriage return and line feed characters) translated between the
conventions of the two systems. In binary mode the sender transmits each file
byte for byte. This mode is often used to transfer image files and program files,
for which the content must be delivered without any format conversion.

Implementing the FTP service is achievable on both the AR2200 series router
and S5700 series switch, for which the service can be enabled through the ftp
server enable command. After the FTP server function is enabled, users can
manage files in FTP mode. The set default ftp-directory command sets the
default working directory for FTP users. Where no default FTP working
directory is set, the user will be unable to log into the router, and will instead
be prompted with a message notifying that the user has no authority to access
any working directory.
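The steps above can be sketched as a minimal VRP configuration; exact
command syntax may vary slightly between products and software versions,
and flash: is just one possible working directory:

```
<Huawei> system-view
[Huawei] ftp server enable
[Huawei] set default ftp-directory flash:
```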

Access to the FTP service can be managed on a per user basis by assigning
individual user logins. AAA is used to configure local authentication and
authorization. Once the AAA view is entered, a local user can be created by
defining a user account and password. The account can be associated with a
variety of services, which are specified using the service-type command, to
allow the ftp service type to be supported by AAA.
If the FTP directory of the user should vary from the default directory, the
ftp-directory command can be used to specify the directory for the user. If the
number of simultaneous connections allowed for a local user account is to be
limited, the access-limit command can be applied. This can range from 1 to
800, or unlimited where an access limit is not applied.
The configuration of an idle timeout helps to prevent unauthorized access in
the event that a session window is left idle for a period of time by a user. The
idle-timeout command is defined in terms of minutes and seconds, with an idle
timeout of 0 0 meaning that no timeout period is applied. Finally, the privilege
level defines the authorization level of the user in terms of the commands that
can be applied during an FTP session. This can be set to any level from 0
through 15, with a greater value indicating a higher privilege level.
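As an illustration only, a local FTP user might be defined under AAA as
follows; the username, password, and values shown here are hypothetical,
and the exact syntax varies by VRP version:

```
[Huawei] aaa
[Huawei-aaa] local-user admin1 password cipher Huawei@123
[Huawei-aaa] local-user admin1 service-type ftp
[Huawei-aaa] local-user admin1 ftp-directory flash:
[Huawei-aaa] local-user admin1 access-limit 4
[Huawei-aaa] local-user admin1 idle-timeout 10 0
[Huawei-aaa] local-user admin1 privilege level 15
```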

Following configuration of the FTP service on the FTP server, users can
establish a connection between the client and the server. Issuing the ftp
command on the client establishes a session, during which the user is
validated by AAA password authentication. Once correctly authenticated, the
client is able to manage files as well as send and retrieve them to and from
the FTP server.
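A client session following this process might, for example, look as follows;
the address, username, and file names are hypothetical, and the prompts and
output are abbreviated sketches rather than verbatim device output:

```
<Client> ftp 192.168.1.1
User(192.168.1.1:(none)):admin1
Enter password:
[ftp] get vrpcfg.zip
[ftp] put backup.cfg
[ftp] quit
```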

1. In order for the control connection and data connection of the FTP service
to be established successfully, TCP ports 20 and 21 must be enabled.
2. When a user is considered to have no authority to access any working
directory, a default FTP directory needs to be defined. This is done by
using the command set default ftp-directory <directory location>, where
the directory name may be, for example, the system flash.

---

~ Foreword
As the enterprise network expands, supported devices may exist over a
larger geographical distance due to the presence of branch offices that
are considered part of the enterprise domain, and that require remote
administration. Additionally the administration of the network is often
managed from a central management location from which all devices are
monitored and administered. In order to facilitate this administration, the
telnet protocol, one of the earliest protocols to be developed, is applied to
managed devices. The principles surrounding the protocol and its
implementation are introduced in this section.


Objectives
Upon completion of this section, trainees will be able to:

Explain the application and principles surrounding telnet

Establish the telnet service on supporting Huawei devices.


The Telecommunication Network Protocol (Telnet) enables a terminal to log in
remotely to any device which is capable of operating as a telnet server, and
provides an interactive operational interface via which the user can perform
operations in the same manner as is achieved locally via a console
connection. Remote hosts need not be connected directly to a hardware
terminal, allowing users instead to take advantage of the ubiquitous
capabilities of IP in order to remotely manage devices from almost any
location in the world.

Telnet operates on a client/server model, for which a TCP connection is
established between a user port on the client and the server telnet port,
which by default is port 23. The server listens on this well-known port for
incoming connections. A TCP connection is full duplex and identified by its
source and destination ports, so the server can engage in many simultaneous
connections involving its well-known port and user ports assigned from the
non-well-known port range.
The telnet terminal driver interprets the keystrokes of the user and translates
them into a universal character standard, based on a network virtual terminal
(NVT) which operates as a form of virtual intermediary between systems,
before transmission over the TCP/IP connection to the server. The server
decodes the NVT characters and passes them to a pseudo-terminal driver,
which allows the operating system to receive the decoded characters.

Access to the telnet service commonly involves authentication of the user
before access is granted. There are three main modes defined for telnet
authentication.

Establishing a connection to a device operating as a telnet server commonly
uses a general password authentication scheme, which is applied to all users
connecting through the vty user interfaces. Once IP connectivity is established
through the implementation of a suitable addressing scheme, the
authentication-mode password command set is implemented for the vty
range, along with the password to be used.
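A sketch of this password authentication configuration on the telnet server
might look as follows; the vty range and password shown are illustrative:

```
[Huawei] user-interface vty 0 4
[Huawei-ui-vty0-4] authentication-mode password
[Huawei-ui-vty0-4] set authentication password cipher Huawei@123
```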

Following configuration of the remote device that is to operate as a telnet
server, the client is able to establish a telnet connection using the telnet
command, and receives a prompt for authentication. The password entered
should match the password implemented on the telnet server as part of the
prior password authentication configuration. The user will then be able to
establish a remote connection via telnet to the remote device operating as a
telnet server and emulate the command interface on the local telnet client.
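From the client side, the connection might then be established as in this
sketch; the address is hypothetical and the prompts are abbreviated rather
than verbatim device output:

```
<Client> telnet 10.0.0.1
Trying 10.0.0.1 ...
Password:
<Huawei>
```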

1. If a user is unable to establish a telnet connection, the user should verify
that the device supporting the telnet service is reachable. If the device can be
reached, the password should be verified. If the password is considered to be
correct, the number of users currently accessing the device via telnet should
be checked. If it is necessary to extend the number of users accessing the
device through telnet, the user-interface maximum-vty <0-15> command
should be used, where the value denotes the number of supported vty user
interfaces.

---

~ Foreword
An expanding enterprise business requires much more than just a
fundamental network through which the transmission of data is
supported. Enterprise networks are expected to support a growing
number of services, which involves the integration of technologies that are
not native to packet switched networks. The enterprise network should
provide high resiliency against network failure and threats, both internal and
external, along with performance growth through the implementation of
solutions that optimize data flow. It is therefore necessary that skills be
attained for the implementation of technologies that enable an
established enterprise network to apply services and solutions that
complement ever growing enterprise networks for true industry
application.

Objectives

Upon completion of this section, trainees will be able to:

Describe the architecture of an established enterprise network.

Describe the business considerations for enterprise networks.


The establishment of local and internetwork connectivity through TCP/IP
protocols represents the foundation of the enterprise network, but does not in
itself make an enterprise network business ready. As an enterprise grows, so
do the requirements for the network on which it is supported. This includes
the implementation of effective network designs capable of supporting an
expanding business, where user density may grow in a short period of time,
where operation as a mobile office may constantly be required, where
growing technological requirements need to be smoothly facilitated, and
where the traffic generated is managed efficiently without disruption to base
network operations.
It is therefore imperative that a clear understanding be built of how solutions
can be applied to establish an enterprise network capable of supporting ever
changing industry needs. An enterprise network can be logically divided into
five zones: a core network, data center, DMZ, enterprise edge, and
operations and maintenance (O&M). Huawei's campus network solution
focuses on the core network zone. The core network implements a three-layer
architecture consisting of a core, aggregation, and access layer.
This three-layer architecture has the advantage of providing a multi-layer
design in which each layer performs specific functions and establishes a
stable topology, simplifying network expansion and maintenance, with a
modular design facilitating fault location. Network topology changes in one
department can be isolated to avoid affecting other departments.

Huawei enterprise networks must be capable of providing solutions for a
variety of scenarios, such as dense user access, mobile office, VoIP,
videoconferencing and video surveillance, access from outside the campus
network, and all-round network security.
Huawei enterprise solutions must therefore meet customer requirements on
network performance, scalability, reliability, security, and manageability, whilst
also simplifying network construction.

The enterprise network is required to be capable of establishing connectivity
via a multitude of telecom service provider networks, building on an ever
growing requirement for support of integrated and converged networks. In
taking advantage of the ubiquitous nature of IP, it is important that the
enterprise network be capable of supporting all services necessary in
enterprise based industries, to provide access to internal resources through
any type of device, at any time and in any location.

Maintaining efficiency of operation in an enterprise network requires that
traffic flow is greatly optimized and that redundancy is implemented to ensure
that, in the case of any device or link failure, isolation of users and resources
does not occur. A two-node redundant design is implemented as part of
enterprise network design to enhance network reliability; however a balance
must be maintained, since too many redundant nodes are difficult to maintain
and increase overall organizational expenditure.

Network security plays an ever increasing role in the enterprise network.
TCP/IP was not initially developed with security in mind, and therefore
security implementations have gradually been introduced to combat ever
growing threats to IP networks. Security threats can originate both from inside
and outside of the enterprise network, and solutions to tackle both types of
threat have therefore become prominent. Huawei network security solutions
have grown to cover terminal security management, service security control,
and network attack defense.

The growth of intelligent network designs helps to facilitate operations and
maintenance (O&M), for which Huawei network solutions provide means for
intelligent power consumption management, intelligent fast network
deployment, and intelligent network maintenance.

As the industry continues to evolve, new next generation enterprise solutions
are introduced, including the prominence of cloud technology providing cloud
service solutions for infrastructure, platforms, software and so on, to meet the
needs of each customer. Along with this is the need to support enterprise built
data centers and infrastructure designs that allow for constant expansion, in
order to keep up with the growing number of services required by customers.
This involves the realization of technologies such as virtualization and storage
solutions, which continue to play a prominent role in ensuring that the
enterprise industry's expansion into the cloud is facilitated at all service
levels.

1. The DMZ represents a location that is part of the enterprise network, but
one that allows its services to be accessed both externally and internally,
without granting external users permission to access locations associated
with internal users. This provides a level of security that ensures data never
flows directly between internal and external user locations.
2. The core provides a means for high speed forwarding of traffic between
different locations in the enterprise network and to external locations
beyond the enterprise network. As such the devices used in the core must
be capable of supporting higher performance in terms of processing and
forwarding capacity.

---

~ Foreword
As a means of optimizing the throughput of data, link aggregation
enables the binding of multiple physical interfaces into a single logical
pipe. This effectively introduces solutions for providing higher utilization of
available links, as well as extended resilience in the event that failure of
individual links were to occur. Engineers are required to have a clear
understanding of the conditions that define the behavior of link
aggregation, and the skills and knowledge for its application, to ensure
effective link aggregation solutions can be applied to enterprise networks.


Objectives
Upon completion of this section, trainees will be able to:
Explain the use of link aggregation in the enterprise network.
Describe the various forms of link aggregation supported.

Configure link aggregation solutions.


Link aggregation refers to the implementation of a trunk link that acts as a
direct point-to-point link between two devices, such as peering routers,
switches, or a router and switch combination at each end of the link. The link
aggregation comprises links that are considered members of an Ethernet
trunk, and builds an association which allows the physical links to operate as
a single logical link. The link aggregation feature supports high availability: if
the physical link of a member interface fails, traffic is switched to another
member link. In aggregating the links, the bandwidth of the trunk interface
equals the sum of the bandwidth of all member interfaces, providing an
effective bandwidth increase for traffic over the logical link. Link aggregation
can also implement load balancing on a trunk interface. This enables the
trunk interface to disperse traffic among its member interfaces, and then
transmit the traffic over the member links to the same destination, thus
minimizing the likelihood of network congestion.

Link aggregation is often applied in areas of the enterprise network where
high speed connectivity is needed and the potential for congestion is likely.
This generally equates to the core network, where responsibility for high
speed switching resides, and where traffic from all parts of the enterprise
network generally congregates before being forwarded to destinations either
in other parts of the network or beyond the boundaries of the enterprise
network. The example demonstrates how core switches (SWA & SWB)
support link aggregation over member links that interconnect the two core
switch devices, as a means of ensuring that congestion does not build at a
critical point in the network.

Link aggregation supports two modes of implementation: manual load
balancing mode and static LACP mode. In load balancing mode, member
interfaces are manually added to a link aggregation group (LAG), and all
interfaces are set to a forwarding state. The AR2200 can perform load
balancing based on destination MAC addresses, source MAC addresses, the
exclusive-OR of source and destination MAC addresses, source IP
addresses, destination IP addresses, or the exclusive-OR of source and
destination IP addresses. The manual load balancing mode does not use the
Link Aggregation Control Protocol (LACP), so the AR2200 can use this mode
when the peer device does not support LACP.
In static LACP mode, devices at the two ends of a link negotiate aggregation
parameters by exchanging LACP packets. After negotiation is complete, the
two devices determine the active and inactive interfaces. In this mode it is still
necessary to manually create an Eth-Trunk and add members to it; LACP
negotiation then determines which interfaces are active and which are
inactive. Static LACP mode is also referred to as M:N mode, where M
signifies the active member links which forward data in a load balancing
fashion, and N represents the inactive links that provide redundancy. If an
active link fails, data forwarding is switched to the backup link with the highest
priority, and the status of that backup link changes to active. The main
difference between the two modes is that in static LACP mode some links
may function as backup links, whereas in manual load balancing mode all
member interfaces work in the forwarding state.
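As an illustrative sketch, an Eth-Trunk using static LACP mode might be
created as follows; the interface numbering is hypothetical, manual load
balancing is the default mode on many models, and on some software
versions the mode keyword differs slightly from that shown:

```
[SWA] interface Eth-Trunk 1
[SWA-Eth-Trunk1] mode lacp-static
[SWA-Eth-Trunk1] trunkport GigabitEthernet 0/0/1 to 0/0/2
```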

As a logical interface binding multiple physical interfaces and relaying
upper-layer data, a trunk interface must ensure that the parameters of the
physical (member) interfaces on both ends of the trunk link are consistent.
This includes the number of physical interfaces, their transmission rates and
duplex modes, and their traffic-control modes; it should be noted that member
interfaces can be layer 2 or layer 3 interfaces. Where interface speeds are not
consistent, the trunk link can still operate; however the interfaces operating at
the lower rate are likely to experience frame loss.
In addition, the sequence of each data flow must be preserved. A data flow
can be considered as a group of frames with the same MAC address and IP
address; for example, a telnet or FTP connection between two devices can be
considered a data flow. Without link aggregation, frames belonging to a data
flow reach their destination in the correct order because they are transmitted
over a single physical link. When trunk technology is used, multiple physical
links are bound to the same trunk link, and frames may be distributed across
these physical links. If the first frame is transmitted over one physical link and
the second frame over another, the second frame may reach the destination
earlier than the first.

To prevent frame disorder, a frame forwarding mechanism is used to ensure
that frames in the same data flow reach the destination in the correct
sequence. This mechanism differentiates data flows based on their MAC
addresses or IP addresses, so that frames belonging to the same data flow
are transmitted over the same physical link. After the frame forwarding
mechanism is applied, frames are transmitted based on the following rules:
Frames with the same source MAC addresses are transmitted over the same
physical link.
Frames with the same destination MAC addresses are transmitted over the
same physical link.
Frames with the same source IP addresses are transmitted over the same
physical link.
Frames with the same destination IP addresses are transmitted over the
same physical link.
Frames with the same source and destination MAC addresses are transmitted
over the same physical link.
Frames with the same source and destination IP addresses are transmitted
over the same physical link.

Establishment of link aggregation is achieved using the interface Eth-Trunk
<trunk-id> command. This command creates an Eth-Trunk interface and
allows the Eth-Trunk interface view to be accessed. The trunk-id is a value
used to uniquely identify the Eth-Trunk, and can be any integer from 0
through 63. If the specified Eth-Trunk already exists, the interface Eth-Trunk
command directly enters its interface view. An Eth-Trunk can only be deleted
if it contains no member interfaces. When adding an interface to an Eth-Trunk,
member interfaces of a layer 2 Eth-Trunk must be layer 2 interfaces, and
member interfaces of a layer 3 Eth-Trunk must be layer 3 interfaces. An
Eth-Trunk can support a maximum of eight member interfaces. A member
interface cannot have any service or static MAC address configured.
Interfaces added to an Eth-Trunk should be hybrid interfaces (the default
interface type). An Eth-Trunk interface cannot have other Eth-Trunk interfaces
as member interfaces, and an Ethernet interface can be added to only one
Eth-Trunk interface.
To add the Ethernet interface to another Eth-Trunk, it must first be deleted
from the current Eth-Trunk. Member interfaces of an Eth-Trunk must be of the
same type; for example, a Fast Ethernet interface and a Gigabit Ethernet
interface cannot be added to the same Eth-Trunk interface. The peer interface
directly connected to a member interface of the local Eth-Trunk must also be
added to an Eth-Trunk, otherwise the two ends cannot communicate. When
member interfaces have different rates, the interfaces with lower rates may
become congested and packet loss may occur. After an interface is added to
an Eth-Trunk, MAC address learning is performed by the Eth-Trunk rather
than by the member interfaces.

In order to configure layer 3 link aggregation on an Ethernet trunk link, it is
necessary to transition the trunk from layer 2 to layer 3 using the undo
portswitch command in the Eth-Trunk logical interface view. Once the undo
portswitch command has been applied, an IP address can be assigned to the
logical interface, and the physical member interfaces to be associated with
the Ethernet trunk link can be added.
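The layer 3 procedure described above might be sketched as follows; the
address and interface numbers are illustrative only:

```
[RTA] interface Eth-Trunk 1
[RTA-Eth-Trunk1] undo portswitch
[RTA-Eth-Trunk1] ip address 10.0.0.1 24
[RTA-Eth-Trunk1] trunkport GigabitEthernet 0/0/1 to 0/0/2
```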

Using the display interface eth-trunk <trunk-id> command, it is possible to
confirm the successful implementation of link aggregation between the two
peering devices. The command can also be used to collect traffic statistics
and locate faults on the interface.
When the current state of the Eth-Trunk is UP, the interface is operating
normally. Where the interface shows as DOWN, an error has occurred at the
physical layer, whereas an administratively down state reflects that the
shutdown command has been used on the interface. The specific error in the
event of a failure can be discovered by verifying the status of the ports, all of
which are expected to show an UP status. Load balancing is supported when
the weight of all links is equal.

1. A Fast Ethernet interface and a Gigabit Ethernet interface cannot be added
to the same Eth-Trunk interface; any attempt to establish member links of
different types will result in an error specifying that the trunk has added a
member of another port-type. It should be noted that the S5700 series switch
supports Gigabit Ethernet interfaces only; however this behavior applies to
other models, including the S3700 switch.
2. Only the static LACP mode is capable of supporting backup member links
and therefore should be used if backup links are required.

---

~ Foreword
A Virtual Local Area Network (VLAN) represents a form of administrative
network that defines a logical grouping of hosts or end system devices
that are not limited to a physical location, and may be defined based on a
wide range of parameters that allow for a greater flexibility in the way that
logical groups are defined. The application of VLAN technology has
expanded to support many aspects of enterprise networking as a means
of logical data flow management and isolation.


Objectives
Upon completion of this section, trainees will be able to:
Explain the application of VLAN tagging.
Describe the different port link types and characteristics.

Successfully establish port based VLANs.


As local networks expand, traffic increases and broadcasts become more
common. There are no real boundaries within such an expanding network,
causing interruptions and growing traffic utilization. Traditionally, the
alternative was to implement a layer 3 device within the local network to
create separate broadcast domains; however in doing so additional expense
was incurred, and the forwarding behavior of such devices did not provide
throughput as efficient as that of switches, leading to bottlenecks at transit
points between broadcast domains.
VLAN technology was introduced to enable traffic isolation at the data link
layer, with the added advantage of traffic isolation without the limitation of
physical boundaries. Users can be physically dispersed but still be associated
as part of a single broadcast domain, logically isolating them from other user
groups at the data link layer. Today VLAN technology is applied as a solution
to a variety of challenges.

VLAN frames are identified using a tag header inserted into the Ethernet
frame, distinguishing frames associated with one VLAN from those of
another. The VLAN tag format contains a Tag Protocol Identifier (TPID) and
associated Tag Control Information (TCI). The TPID identifies the frame as a
tagged frame; currently this refers only to the IEEE 802.1Q tag format, for
which a value of 0x8100 is used. The TCI contains fields associated with the
tag format type.
The Priority Code Point (PCP) is a traffic classification field used to
differentiate one form of traffic from another, so as to prioritize traffic generally
based on a classification such as voice, video or data. It is a three bit value
allowing a range of 0-7, and can be understood in terms of general 802.1p
class of service (CoS) principles. The Drop Eligibility Indicator (DEI) is a
single bit value, existing in either a true or false state, that determines the
eligibility of a frame for discard in the event of congestion.
The VLAN ID indicates the VLAN with which the frame is associated,
represented as a 12 bit value. VLAN ID values range from 0x000 through
0xFFF, of which the lowest and highest values are reserved, allowing 4094
possible VLANs. The Huawei VRP implementation of VLANs uses VLAN 1 as
the default VLAN (PVID), in line with IEEE 802.1Q standards.

VLAN links can be classified into two types: access links and trunk links. An
access link refers to the link between an end system and a switch device
participating in VLAN tagging; the links between host terminals and switches
are all access links. A trunk link refers to a link over which VLAN tagged
frames are likely to be carried; the links between switches are generally
understood to be trunk links.

Each interface of a device participating in VLAN tagging is associated with a
VLAN. The default VLAN for the interface is recognized as the Port VLAN ID
(PVID). This value determines the behavior applied to any frames received or
transmitted over the interface.

Access ports associate with access links; frames that are received are
assigned a VLAN tag equal to the Port VLAN ID (PVID) of the interface.
Frames transmitted from an access interface typically have the VLAN tag
removed before forwarding to an end system that is not VLAN aware. If the
VLAN tag and the PVID differ, however, the frame will not be forwarded and is
discarded.
In the example, an untagged frame is forwarded to the interface of the switch,
which can be understood to forward it toward all other destinations. Upon
receiving the frame, the switch associates it with VLAN 10 based on the PVID
of the receiving interface. At each outbound port, the switch compares the
frame's VLAN ID with the port's PVID to decide whether the frame can be
forwarded. In the case of Host C, the PVID matches the VLAN ID in the
VLAN tag, so the tag is removed and the frame forwarded. For Host B,
however, the frame's VLAN ID and the PVID differ, and the frame is therefore
restricted from being forwarded to this destination.
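A port based access VLAN assignment as described might be configured as
in the following sketch; the VLAN and interface numbers are illustrative:

```
[SWA] vlan 10
[SWA-vlan10] quit
[SWA] interface GigabitEthernet 0/0/1
[SWA-GigabitEthernet0/0/1] port link-type access
[SWA-GigabitEthernet0/0/1] port default vlan 10
```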

For trunk ports that are associated with trunk links, the Port VLAN ID (PVID)
will identify which VLAN frames are required to carry a VLAN tag before
forwarding, and which are not. The example demonstrates a trunk interface
assigned with a PVID of 10, for which it should be assumed that all VLANs
are permitted to traverse the trunk link. Only frames associated with VLAN 10
will be forwarded without the VLAN tag, based on the PVID. For all other
VLAN frames, a VLAN tag must be included with the frame and be permitted
by the port before the frame can be transmitted over the trunk link. Frames
associated with VLAN 20 are carried as tagged frames over the trunk link.
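The trunk behavior described could be sketched as follows, assigning a PVID
of 10 and permitting VLANs 10 and 20 across the trunk; the interface number
and VLAN values are illustrative:

```
[SWA] interface GigabitEthernet 0/0/24
[SWA-GigabitEthernet0/0/24] port link-type trunk
[SWA-GigabitEthernet0/0/24] port trunk pvid vlan 10
[SWA-GigabitEthernet0/0/24] port trunk allow-pass vlan 10 20
```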

Hybrid represents the default port type for Huawei devices supporting VLAN
operation and provides a means of managing the tag switching process
associated for all interfaces. Each port can be considered as either a tagged
port or an untagged port. Ports which operate as access ports (untagged) and
ports which operate as trunk ports (tagged).
Ports which are considered untagged will generally receive untagged frames
from end systems and are responsible for adding a tag to each frame based on
the Port VLAN ID (PVID) of the port. One of the key differences is the hybrid
port's ability to selectively remove VLAN tags from frames belonging to
VLANs that differ from the PVID of the port interface. In the example, Host D
is connected to a port with a Port VLAN ID of 20, which is at the same time
configured to remove the tag from frames received from VLAN 10, thereby
allowing Host D to receive traffic from both VLANs 10 and 20.
Hybrid ports that are tagged operate in a similar manner to a regular trunk
interface, with one major difference: VLAN frames that both match the PVID
and are permitted by the port will continue to be tagged when forwarded.

VLAN assignment can be implemented based on one of five different methods:
port based, MAC based, IP subnet based, protocol based, and policy based
implementations. The port based method represents the
default and most common method for VLAN assignment. Using this method,
VLANs are classified based on the port numbers on a switching device. The
network administrator configures a Port VLAN ID (PVID), representing the
default VLAN ID for each port on the switching device. When a data frame
reaches a port, it is marked with the PVID if the data frame carries no VLAN
tag and the port is configured with a PVID. If the data frame carries a VLAN
tag, the switching device will not add a VLAN tag to the data frame even if the
port is configured with a PVID.
Using the MAC address assignment method, VLANs are classified based on
the MAC addresses of network interface cards (NICs). The network
administrator configures the mappings between MAC addresses and VLAN
IDs. In this case, when a switching device receives an untagged frame, it
searches the MAC-VLAN table for a VLAN tag to be added to the frame
according to the MAC address of the frame. For IP subnet based assignment,
upon receiving an untagged frame, the switching Device adds a VLAN tag to
the frame based on the IP address of the packet header.
Where VLAN classification is based on protocol, VLAN IDs are allocated to
packets received on an interface according to the protocol (suite) type and
encapsulation format of the packets. The network administrator configures the
mappings between types of protocols and VLAN IDs. The Policy based
assignment implements a combination of criteria for assignment of the VLAN
tag, including the IP subnet, port and MAC address, in which all criteria must
match before the VLAN is assigned.

The implementation of VLANs begins with the creation of the VLAN on the
switch. The vlan <vlan-id> command is used to create the VLAN, which can be
understood to exist once the user enters the VLAN view for the given VLAN,
as demonstrated in the configuration example. The VLAN ID ranges from 1 to
4094. Where it is necessary to create multiple VLANs on a switch, the vlan
batch <vlan-id1 to vlan-id2> command can be used where a contiguous VLAN
range needs to be created, and the vlan batch &<1-4094> form used for
noncontiguous VLAN IDs, where & represents a space between the individual
IDs. All ports are associated with VLAN 1 as the default VLAN by default, and
therefore forwarding is initially unrestricted.


Once the VLANs have been created, the creation can be verified using the
display vlan command. If no parameter is specified, brief information about all
VLANs is displayed. Additional parameters include the display vlan <vlan-id>
verbose command, used to display detailed information about a specified
VLAN, including the ID, type, description, and status of the VLAN, the status
of the traffic statistics function, the interfaces in the VLAN, and the mode in
which the interfaces were added to the VLAN. The display vlan <vlan-id>
statistics command displays traffic statistics on interfaces for a specified
VLAN, and the display vlan summary command provides a summary of all
VLANs in the system.
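The verification commands described above can be summarized as follows, with VLAN 10 used only as an example ID:

```
<Huawei> display vlan                 // brief information about all VLANs
<Huawei> display vlan 10 verbose      // detailed information for VLAN 10
<Huawei> display vlan 10 statistics   // traffic statistics for VLAN 10
<Huawei> display vlan summary         // summary of all VLANs in the system
```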

The configuration of the port link type is performed in the interface view for
each interface on a VLAN-active switch. The default port link type on Huawei
switch devices is hybrid. The port link-type <type> command is used to
configure the port link type of the interface, where the type can be set as
access, trunk, or hybrid. A fourth QinQ option exists but is considered outside
the scope of this course. It should also be noted that if no port link type
appears in the displayed configuration, the default hybrid type is in use. Prior
to changing the interface type, it is also necessary to restore the default VLAN
configuration of the interface so that the interface belongs only to the default
VLAN 1.
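As a brief sketch (the interface numbering is assumed for illustration), the port link type might be configured as follows:

```
[Huawei] interface GigabitEthernet 0/0/5
[Huawei-GigabitEthernet0/0/5] port link-type access    // alternatively trunk or hybrid
```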

The association of a port with a created VLAN can be achieved using two
configuration methods. The first is to enter the VLAN view and associate the
interface with the VLAN using the port <interface> command. The second
involves accessing the interface view of the interface to be added and
applying the port default vlan <vlan-id> command, where the vlan-id refers to
the VLAN to which the port is to be added.
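Both assignment methods can be sketched as follows; the interface and VLAN numbering are assumed for illustration, and on common VRP releases the interface-view form of the command is port default vlan <vlan-id>:

```
// Method 1: assign the port from the VLAN view
[Huawei] vlan 2
[Huawei-vlan2] port GigabitEthernet 0/0/5

// Method 2: assign the VLAN from the interface view
[Huawei] interface GigabitEthernet 0/0/7
[Huawei-GigabitEthernet0/0/7] port default vlan 3
```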

The display vlan command can be used to verify the changes made to the
configuration and confirm the association of port interfaces with the VLANs to
which they have been assigned. In the display example, port interfaces
GigabitEthernet 0/0/5 and GigabitEthernet 0/0/7 can be identified as being
associated with VLANs 2 and 3 respectively. The UT value identifies that the
port is considered untagged, either through assignment of the access port
link type or as an untagged hybrid port. The current state of the link can also
be determined as either up (U) or down (D).

Assigning the trunk port link type to an interface enables the trunk to support
the forwarding of frames for multiple VLANs between switches; however, in
order for frames to be carried over the trunk interface, permissions must be
applied. The port trunk allow-pass vlan <vlan-id> command is used to set the
permission for each VLAN, where vlan-id refers to the VLANs to be permitted.
The PVID of the trunk interface must also be included in this command if
untagged (PVID) traffic is to be carried over the trunk link. The example
demonstrates changing the default Port VLAN ID (PVID) of the interface to 10
and applying permission for VLANs 2 and 3 over the trunk link. In this case,
any frames associated with VLAN 10 will not be carried over the trunk, even
though VLAN 10 is now the default VLAN for the trunk port. The command
port trunk allow-pass vlan all can be used to allow all VLANs to traverse the
trunk link.
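The trunk behavior described in the example might be configured as follows (interface numbering assumed; port trunk pvid vlan is the usual VRP form for changing the trunk PVID):

```
[Huawei] interface GigabitEthernet 0/0/1
[Huawei-GigabitEthernet0/0/1] port link-type trunk
[Huawei-GigabitEthernet0/0/1] port trunk pvid vlan 10        // change the default PVID to 10
[Huawei-GigabitEthernet0/0/1] port trunk allow-pass vlan 2 3 // permit VLANs 2 and 3 only
```

Since VLAN 10 does not appear in the allow-pass list here, frames associated with VLAN 10 would not traverse the trunk, as the example notes.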

The changes to the VLAN permissions can again be monitored through the
display vlan command, for which the application of VLANs over the trunk link
are reflected. The TG value identifies that VLANs have been associated with a
tagged interface either over a trunk or tagged hybrid port interface. In the
display example, VLANs 2 and 3 have been given permission to traverse the
tagged interface GigabitEthernet0/0/1, an interface that is currently active.

Hybrid represents the default port type on switch port interfaces, and
therefore the command port link-type hybrid is generally only necessary when
converting an access or trunk port back to the hybrid link type. Each port,
however, may need to be associated with a default Port VLAN ID (PVID)
determining the VLAN with which untagged incoming frames are associated.
The port hybrid pvid vlan <vlan-id> command enables the default PVID to be
assigned on a port-by-port basis, following which it is also necessary to define
the forwarding behavior of the port.
For ports that are to operate as access ports, this is achieved using the port
hybrid untagged vlan <vlan-id> command. It should be clearly noted that
using this command multiple times under the same interface view results in
the interface being associated with all VLANs specified, with the associated
VLAN frames being untagged before forwarding. The undo port hybrid vlan
command can be used to restore the default setting of VLAN 1 and return to
the default untagged mode.

For ports that are to operate as trunk ports, the port hybrid tagged vlan
<vlan-id> command is used. It should be clearly noted that using this
command multiple times under the same interface view results in the
interface being associated with all VLANs specified, with the associated VLAN
frames being tagged before forwarding. In the example, the hybrid port
interface GigabitEthernet 0/0/1 is expected to tag all frames associated with
VLANs 2 and 3 before such frames are forwarded over the interface.
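Bringing the hybrid commands together, a sketch of the untagged (access-like) and tagged (trunk-like) configuration might look as follows, with interface and VLAN numbering assumed for illustration:

```
// Access-like hybrid port toward a host in VLAN 2
[Huawei] interface GigabitEthernet 0/0/7
[Huawei-GigabitEthernet0/0/7] port hybrid pvid vlan 2       // untagged ingress joins VLAN 2
[Huawei-GigabitEthernet0/0/7] port hybrid untagged vlan 2   // strip tags on egress for VLAN 2
[Huawei-GigabitEthernet0/0/7] quit

// Trunk-like hybrid port carrying VLANs 2 and 3 tagged
[Huawei] interface GigabitEthernet 0/0/1
[Huawei-GigabitEthernet0/0/1] port hybrid tagged vlan 2 3
```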

Through the display vlan command, the results of the tagged and untagged
hybrid port configuration can be verified. Interface Gigabit Ethernet 0/0/7 has
been established as a VLAN 2 untagged interface, while interface Gigabit
Ethernet 0/0/5 has been established as an untagged interface associated with
VLAN 3. In terms of both VLAN 2 and VLAN 3, frames associated with either
VLAN will be carried as a tagged frame over interface Gigabit Ethernet 0/0/1.

Switch port interfaces can use the port hybrid untagged vlan <vlan-id> [to
<vlan-id>] command to apply untagged behavior for multiple VLANs in a
single batch command. This enables hybrid interfaces to permit the untagged
forwarding of traffic from multiple VLANs to a given end system. All traffic
forwarded from the end system is associated with the PVID assigned to the
port and tagged accordingly.

The command port hybrid untagged vlan 2 to 3 on interface GigabitEthernet
0/0/4 results in the interface applying untagged behavior to both VLAN 2 and
VLAN 3. This means that any traffic forwarded from a host associated with
either VLAN, to an end system attached to interface GigabitEthernet 0/0/4,
can be successfully received.

The growth of IP convergence has seen the integration of multiple
technologies that allows High Speed Internet (HSI) services, Voice over IP
(VoIP) services, and Internet Protocol Television (IPTV) services to be
transmitted over a common Ethernet and TCP/IP network. These technologies
originate from networks with fundamentally different forwarding behavior.
VoIP originates from circuit switched network technologies that involve the
establishment of a fixed circuit between the source and destination, over
which a dedicated path is created, ensuring that voice signals arrive with little
delay and in first-in-first-out order.
High Speed Internet operates over a packet switched network involving
contention, and packet forwarding with no guarantee of orderly delivery, for
which packet re-sequencing is often necessary. Guaranteeing that
technologies originating from a circuit switched network concept are capable
of functioning over packet switched networks has brought about new
challenges, chief among them ensuring that the network is capable of
differentiating voice traffic from other data. The solution involves isolating
VoIP traffic in its own VLAN and assigning it a higher priority to ensure voice
quality. A special voice VLAN can be configured on the switch, which allows
the switch to assign a pre-configured VLAN ID and a higher priority to VoIP
traffic.

Configuration of the voice VLAN involves enabling a specified VLAN as the
voice VLAN using the voice-vlan <vlan-id> enable command. The voice VLAN
can be associated with any VLAN between 2 and 4094. The voice-vlan mode
<mode> command specifies the working mode by which a port interface is
added to the voice VLAN; by default this occurs automatically, but it can also
be performed manually. The voice-vlan mac-address <mac-address> mask
<mask> command allows voice packets originating from an IP phone to be
identified and associated with the voice VLAN based on the Organizationally
Unique Identifier (OUI), ultimately allowing a higher priority to be given to
voice traffic.
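A minimal voice VLAN sketch follows, assuming VLAN 100 as the voice VLAN and an example OUI; note that the view (system or interface) in which each command is entered may vary by device model:

```
[Huawei] vlan 100
[Huawei-vlan100] quit
[Huawei] interface GigabitEthernet 0/0/2
[Huawei-GigabitEthernet0/0/2] voice-vlan 100 enable    // enable the voice VLAN on the port
[Huawei-GigabitEthernet0/0/2] quit
// Match IP phone traffic by OUI (example address and mask)
[Huawei] voice-vlan mac-address 0011-2200-0000 mask ffff-ff00-0000
```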

The display voice-vlan status command allows voice VLAN information to be
viewed, including the status, security mode, aging time, and the interfaces on
which the voice VLAN function is enabled. The status indicates whether the
voice VLAN is currently enabled or disabled. The security mode can be one of
two modes, either normal or security. The normal mode allows an interface
enabled with the voice VLAN to transmit both voice data and service data, but
remains vulnerable to attack by invalid packets. It is generally used when
multiple services (HSI, VoIP, and IPTV) are transmitted to a Layer 2 network
through one interface, and the interface transmits both voice data and
service data. The security mode checks whether the source MAC address of
each packet entering the voice VLAN matches the OUI, and is applied where
the voice VLAN interface transmits only voice data. The security mode can
protect the voice VLAN against attack by invalid packets; however, checking
packets consumes system resources.
The Legacy option determines whether the interface can communicate with
voice devices of other vendors, where an enabled interface permits this
communication. The Add-Mode determines the working mode of the voice
VLAN. In auto mode, an interface is automatically added to the voice VLAN
after the voice VLAN function is enabled on the interface, provided the source
MAC address of packets sent from the attached voice device matches the
OUI. The interface is automatically removed if it does not receive any voice
data packets from the voice device within the aging time. In manual mode, an
interface must be added to the voice VLAN manually after the voice VLAN
function is enabled on the interface.

1. The PVID on a trunk link defines only the tagging behavior that will be
applied at the trunk interface. If the port trunk allow-pass vlan 2 3
command is used, only frames associated with VLAN 2 and VLAN 3 will
be forwarded over the trunk link.
2. An access port configured with a PVID of 2 will tag all received untagged
frames with a VLAN 2 tag. This will be used by the switch to determine
whether a frame can be forwarded via other access interfaces or carried
over a trunk link.

Thank you
www.huawei.com

~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
In networks with extensive numbers of devices, changes are at times
required for certain attributes such as the implementation or removal of a
VLAN. The manual configuration of each switch to recognize and support
the new attribute may lead to extensive administration and the potential
for human error. The Generic Attribute Registration Protocol (GARP) is
used to define the architecture on which attributes can be propagated.
The application of GARP, along with GVRP in support of VLAN
technology, is introduced as a solution for optimized administration.


~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:
Describe the construct and messaging characteristics of GARP.

Configure GVRP.


~~ HUAWEI

The Generic Attribute Registration Protocol (GARP) is the architecture on
which the registration, deregistration, and propagation of attributes between
switches is enabled. GARP is not an entity in itself; rather, it is employed by
GARP applications such as GVRP as a shell that defines the rules of
operation. Interfaces that are associated with GARP applications are
considered GARP participants.
The primary application of GARP is to allow greater efficiency in the
management of multiple switches in medium to large networks. In general,
the maintenance of multiple switches can become a significant burden on
administrators when, for example, configuration details need to be manually
applied to each active switch. GARP helps to automate this process for any
application able to employ this capability. GARP generally relies on the
spanning tree protocol to define an active topology for propagation; the
GVRP protocol, however, can run only in the realm of the Common and
Internal Spanning Tree (CIST).

PDUs are sent from a GARP participant and use the multicast MAC address
01-80-C2-00-00-21 as the destination MAC address. When a device receives
a packet with this destination MAC address, it identifies the packet
accordingly and delivers it to the corresponding GARP application (such as
GVRP). GARP uses messages within the PDU to define attributes, identified
based on an attribute type field and an attribute list.
The list contains multiple attributes of the specific attribute type, and each
attribute is described through attribute length, event, and value fields. The
length of the attribute can be anywhere from 2 to 255 bytes, the value
specifies a particular value for the attribute, and the event is one of a number
of events that GARP supports, each represented by a value. These events
include 0: LeaveAll event, 1: JoinEmpty event, 2: JoinIn event, 3: LeaveEmpty
event, 4: LeaveIn event, and 5: Empty event.

When a GARP participant expects other devices to register its attributes, it
sends Join messages to those devices. When a GARP participant receives a
Join message from another participant, or is statically configured with
attributes, it sends Join messages to other devices to allow them to register
the new attributes. Join messages are classified into JoinEmpty messages and
JoinIn messages: JoinEmpty messages are used to declare an unregistered
attribute, whereas JoinIn messages are used to declare a registered attribute.

When a GARP participant expects other devices to deregister its attributes, it
sends Leave messages to those devices. When a GARP participant receives a
Leave message from another participant, or some of its attributes are
statically deregistered, it also sends Leave messages to other devices. Leave
messages are classified into LeaveEmpty messages and LeaveIn messages:
LeaveEmpty messages are used to deregister an unregistered attribute,
whereas LeaveIn messages deregister a registered attribute.

LeaveAll messages are sent when a GARP participant wishes to request that
other GARP participants deregister all of the sender's attributes.
The Join, Leave, and Leave All messages are used to control registration and
deregistration of attributes. Through GARP messages, all attributes that need
to be registered are sent to all GARP-enabled devices on the same LAN.

GVRP is an application of GARP. Based on the working mechanism of GARP,
GVRP maintains dynamic VLAN registration information in a device and
propagates the registration information to other devices.
After GVRP is enabled on the switch, it can receive VLAN registration
information from other devices, and dynamically update local VLAN
registration information. VLAN registration information includes which VLAN
members are on the VLAN and through which interfaces their packets can be
sent to the switch. The switch can also send the local VLAN registration
information to other devices. Through exchanging VLAN registration
information, all devices on the same LAN maintain the same VLAN
information. The VLAN registration information transmitted through GVRP
contains both static local registration information that is manually configured
and dynamic registration information from other devices.

Registration of a VLAN within the device can be achieved either statically or
dynamically. A manually configured VLAN is a static VLAN, and a VLAN
created through GVRP is a dynamic VLAN. The way in which registration is
performed depends on the registration mode that has been configured. There
are three registration modes that can be set: normal, fixed, and forbidden.

In the normal registration mode, the GVRP interface can dynamically register
and deregister VLANs, and transmit both dynamic VLAN registration
information and static VLAN registration information.

In the fixed mode, the GVRP interface is restricted from dynamically
registering and deregistering VLANs and can transmit only static registration
information. If the registration mode of a trunk interface is set to fixed, the
interface allows only the manually configured VLANs to pass, even if it is
configured to allow all VLANs to pass.

In the forbidden mode, the GVRP interface is disabled from dynamically
registering and deregistering VLANs and can transmit only information about
VLAN 1. If the registration mode of a trunk interface is set to forbidden, the
interface allows only VLAN 1 to pass, even if it is configured to allow all
VLANs to pass.

The configuration of GVRP relies on the protocol first being enabled in the
system view before it can be applied in the interface view. The gvrp command
is used to enable GVRP on the device. Once an interface has been configured
to operate as part of the VLAN trunk, GVRP can be applied to the interface
using the gvrp command in the interface view. The registration mode can also
be set using the gvrp registration <mode> command, where the mode may be
normal, fixed, or forbidden. The registration mode is normal by default.
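Combining these steps, a GVRP configuration sketch might look as follows; the interface numbering is assumed, and the inter-switch link must already operate as a trunk:

```
[Huawei] gvrp                                          // enable GVRP globally
[Huawei] interface GigabitEthernet 0/0/1
[Huawei-GigabitEthernet0/0/1] port link-type trunk
[Huawei-GigabitEthernet0/0/1] port trunk allow-pass vlan all
[Huawei-GigabitEthernet0/0/1] gvrp                     // enable GVRP on this participant
[Huawei-GigabitEthernet0/0/1] gvrp registration normal // default; fixed/forbidden also possible
```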

Verifying the configuration for GVRP involves entering the display gvrp status
command. This will simply identify whether GVRP has been enabled on the
device. The display gvrp statistics command can provide a little more
information regarding the configuration for each interface (participant) that is
currently active in GVRP. From the example, it is possible to identify the
current status of GVRP on the interface and also the registration type that has
been defined in relation to the interface.

1. The normal registration mode is used by default.


2. The ports for the links between each of the devices supporting GVRP
must be established as VLAN trunk ports in order to allow the VLAN
information to be propagated.

Thank you
www.huawei.com

~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
The implementation of VLAN technology within an enterprise network
effectively establishes broadcast domains that control the scope of traffic.
One of the limitations of broadcast domains is that communication at the
link layer is hindered between hosts that are not part of the same VLAN.
Traditional link layer switches supporting VLANs are not capable of
forwarding traffic between these broadcast domains, and therefore
routing must be introduced to facilitate communication. The application of
VLAN routing using link layer switches together with a device capable of
routing VLAN traffic is introduced, along with details of how layer three
switches capable of network layer operations can enable communication
across VLAN-defined broadcast domains.


~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:

Explain the purpose of VLAN routing.

Explain how VLAN routing is achieved for layer 2 & layer 3 switches.

Configure VLAN routing.


~~ HUAWEI

The general principle of VLAN implementation is to isolate networks as a
means of minimizing the size of the existing broadcast domain. In doing so,
however, many users are cut off from users in other VLAN domains, and layer
three (IP) communication must be established in order for those broadcast
domains to re-establish communication through reachable routes. The
implementation of a layer three switch offers an ideal means of supporting
VLAN routing whilst reducing operating costs. One of the constraints of VLAN
routing, however, is the need for strict IP address management.
Generally, the VLAN routing principle is applicable to small scale networks on
which users belong to different network segments and the IP addresses of
users are seldom changed.

After VLANs are configured, the hosts in different VLANs are unable to
directly communicate with each other at Layer 2. It is therefore necessary to
facilitate the communication through the creation of routes between VLANs.
There are generally two main methods via which this is achieved, the first
relies on the implementation of a router connected to the layer 2 switch. VLAN
communication is then routed through the router before being forwarded to
the intended destination. This may be over separate physical links, which
leads to port wastage and extra link utilization, or via the same physical
interface as shown in the example.
The second method relies on the use of a layer 3 switch that is capable of
performing the operation of both the switch and the router in one single device
as a more cost effective mechanism.

In order to allow communication over a single trunk interface, it is necessary
to logically segment the physical link using sub-interfaces. Each sub-interface
represents a logical link for the forwarding of VLAN traffic before it is routed
by the router via other logical sub-interfaces to other VLAN destinations.
Each sub-interface must be assigned an IP address in the same network
segment as the VLAN for which it is created, as well as 802.1Q encapsulation
to allow for VLAN association as traffic is routed between VLANs.
It is also necessary to configure the Ethernet port of the switch that connects
to the router as either a trunk or hybrid link type, and to allow frames of the
associated VLANs (VLAN 2 and VLAN 3 in this case) to pass.

The trunk link between the switch and the router must be established to
support traffic for multiple VLANs, through the port link-type trunk or port
link-type hybrid command, together with the port trunk allow-pass vlan 2 3 or
port hybrid tagged vlan 2 3 command respectively. Once the trunk is
established, VLAN sub-interfaces must be implemented on the router to allow
the logical forwarding of traffic between VLANs over the trunk link.

The sub-interface on a router is defined in the interface view using the
interface <interface-type interface-number.sub-interface-number> command,
where the sub-interface number represents a logical channel within the
physical interface. The dot1q termination vid <vlan-id> command performs
two specific functions. Where a port receives a VLAN packet, it initially
removes the VLAN tag from the frame and forwards the packet via layer three
routing. For packets being sent out, the port adds a tag to the frame before
sending it, in accordance with the respective VLAN and IP settings of the
router's logical interface. Finally, the arp-broadcast enable command is
applied to each logical interface. This is necessary because ARP broadcasting
on sub-interfaces is not enabled by default. If ARP broadcasts remain
disabled on the sub-interface, the router will directly discard packets; the
route to the sub-interface is generally considered a blackhole route in these
cases, since the packet is effectively lost without a trace. If ARP broadcasts
are enabled on the sub-interface, the system is able to construct a tagged
ARP broadcast packet and send the packet from the sub-interface.
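The sub-interface approach can be sketched as follows; the device name, interface numbers, and gateway addresses are assumed for illustration, with each sub-interface acting as the gateway for one VLAN:

```
// Router sub-interface terminating VLAN 2
[RTA] interface GigabitEthernet 0/0/1.2
[RTA-GigabitEthernet0/0/1.2] dot1q termination vid 2
[RTA-GigabitEthernet0/0/1.2] ip address 192.168.2.1 255.255.255.0
[RTA-GigabitEthernet0/0/1.2] arp-broadcast enable

// Router sub-interface terminating VLAN 3
[RTA] interface GigabitEthernet 0/0/1.3
[RTA-GigabitEthernet0/0/1.3] dot1q termination vid 3
[RTA-GigabitEthernet0/0/1.3] ip address 192.168.3.1 255.255.255.0
[RTA-GigabitEthernet0/0/1.3] arp-broadcast enable
```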

Following the configuration of VLAN routing between VLAN 2 and VLAN 3,
the ping application can be used to verify reachability. The example
demonstrates how Host A (192.168.2.2) in VLAN 2 is capable of reaching
Host B (192.168.3.2) in VLAN 3. The TTL reflects that the packet has
traversed the router to reach the destination in VLAN 3.

The implementation of layer 3 switches brings benefits to the process of
VLAN routing that are not possible through the use of a router. One of these
is the ability to forward VLAN traffic with very little delay, due to support for
what is known as line speed forwarding, enabled by bottom layer ASIC chips
that allow traffic to be forwarded in hardware rather than software. In
addition, a single device is used, with no trunk link that might otherwise face
congestion under heavy traffic loads. VLAN routing using a layer 3 switch
relies on the implementation of VLAN interfaces (VLANIF). If multiple users on
a network belong to different VLANs, each VLAN requires a VLANIF that acts
as the VLAN gateway, and so must be associated with an IP address relevant
to the network of the VLAN. If a large number of VLANs exist, however, this
can amount to a large number of IP addresses being required to support each
VLANIF, as well as the hosts that are part of the VLAN with which each
VLANIF is associated. Through the VLANIF, routing between different VLANs
can be supported.

Configuration of VLAN routing on a switch operating at layer 3 requires that
the VLANs be initially created and the interfaces be assigned to those
respective VLANs. The configuration follows the principles for the
configuration of VLANs covered as part of the VLAN principles section. This
involves defining the port link-type for each port and the PVID that is
associated with each port interface.

Configuration of VLAN routing is implemented by creating VLAN interfaces
that are to operate as gateway interfaces for each VLAN within the layer 3
switch. The VLANIF view is entered via the interface vlanif <vlan-id>
command, where the vlan-id refers to the associated VLAN. The IP address
for the interface should be in the same network segment as the hosts. This IP
address represents the gateway for the hosts and supports inter-VLAN
communication.

1. The dot1q termination vid <vlan-id> command is used to perform two
specific functions. Where a port receives a VLAN packet, it will initially
remove the VLAN tag from the frame and forward the packet via layer 3
routing. For packets being sent out, the port adds a tag to the packet
before sending it, in accordance with the respective VLAN and IP
settings of the router's logical interface.
2. The switch must be configured to allow frames carried over the switch/
router medium to be tagged, either through the use of the trunk command
or using tagged hybrid interfaces. Additionally the VLAN traffic must be
permitted over this link using the port trunk allow-pass vlan <vlan> or port
hybrid tagged vlan <vlan> command.

Thank you
www.huawei.com

~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
Wireless local area networks (WLAN) represent a natural evolution of the
fixed Ethernet network as a solution to demands for flexible and mobile
business environments. WLAN applies radio frequency (RF) technology
to the enterprise Ethernet network and is governed primarily by standards
defined by the IEEE standards organization. A general understanding of
WLAN requires knowledge of the principles of these standards, as well as
the benefits that WLAN technology brings to the enterprise network. This
section provides a brief overview of WLAN technology and its application
in the enterprise, as a viable solution for bringing benefits to an existing
enterprise network infrastructure.


~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:

Explain the basic application of WLAN in the enterprise network.

Describe the benefits that WLAN introduces to the enterprise network.


~~ HUAWEI

The Wireless Local Area Network (WLAN) is seen as a rapidly developing
network technology to which many enterprise networks are gradually
transitioning, with the expectation that the wired Ethernet networks used
today as the primary network infrastructure in nearly all enterprise
businesses will eventually be superseded by WLAN solutions providing
reliable, ubiquitous access.
Recent evolutions in technology have introduced a need for change in the
way in which enterprise industries operate, as a wave of tablet computing and
smart mobile device usage paves the way for Bring Your Own Device (BYOD)
solutions, in which users enhance their work capability through personal
devices for which, in most cases, wired Ethernet connectivity is not supported.
Additionally, wireless networks face many new challenges in supporting a
greater density of devices in the enterprise network, providing media-based
(voice and video) services without signal loss or periodic connection outages,
and delivering non-intrusive security to users.
Wireless networks continue to play a secondary role to wired networks but
with the constant push to change the way enterprise networks support users,
it is expected that WLAN will continue to play an increasingly dominant role
throughout all enterprise industries.

The evolution of enterprise networks involving WLAN has undergone three
general phases since the 1980s: from the fixed office, to early mobile
office solutions using standalone access points (AP) that provided only
limited coverage and mobility for users, to a more centralized form of WLAN
in which multiple (thin) AP clients are managed by a central controller.
Standards growth has also enabled the support of different services. Early
standards supported only data throughput, however as the capacity of the
wired Ethernet network continues to develop, it is generally expected that
WLAN be capable of supporting the same services. Services such as
real-time voice and video require increasing amounts of bandwidth, which
new 802.11 standards are developed to support, absorbing an increasingly
large portion of the 2.4GHz spectrum in order to do so.
With the introduction of BYOD, WLAN faces new challenges in ensuring that
a high density of devices is supported with suitable bandwidth and connection
reliability for the applications in use, whilst defending the enterprise network
against malicious entities including spyware and malware.

IEEE 802.11 represents the working group that supports the development of
all wireless LAN standards, originating in 1997 with initial standards that
operated in the 2.4GHz band at relatively low data rates of up to 2Mbps.
The evolution of standards saw the introduction of the 802.11a and 802.11b
standards, which operate in the 5GHz and 2.4GHz signal bands
respectively.
Each of these standards provides a variation in signal range. Due to
increased signal fading in the 5GHz band, adoption leaned strongly towards
2.4GHz, which generally provides a greater transmission range and thus
allows fewer access points (AP) to be deployed over a broader area.
Over time, however, the 2.4GHz band has become increasingly crowded,
making interference ever more likely where transmission is concerned. In
addition, higher data rates require a larger portion of the frequency spectrum,
which has become increasingly difficult to accommodate within the 2.4GHz
band. Recent years have therefore seen a transition to the less crowded
5GHz band, where the frequency ranges needed for higher rates can be
accommodated, at the cost however of transmission range, owing to the
attenuation that naturally affects the 5GHz band.

Wireless network deployment solutions over existing Ethernet networks
commonly apply a two-layer architecture that is capable of meeting customer
requirements with minimal impact to the physical structure of the enterprise
campus.
Access Controllers (AC) are deployed at the core layer of the network and
operate, in what is known as bypass mode, as a general practice. This means
that access controllers that manage the access points are not directly
connected to each AP that they manage, mainly to allow for a wireless
network overlay to be achieved whilst minimizing physical change to the
existing enterprise network architecture.

Each Access Point (AP) within a wireless local area network is designed to
provide a level of coverage that encompasses the surrounding area of the
established enterprise campus. A single AP is considered to have a finite
range that may vary depending on a number of factors and objects that are
capable of interfering with the general signal range, through imposing greater
attenuation to, or refraction of signals.
Wireless coverage is therefore generally extended by implementing multiple
APs that operate as cells, with overlapping cell ranges that allow users to
effectively hop between APs as they move within the area in which coverage
is provided. Any good wireless deployment should allow for complete
coverage over the entire campus, eradicating any grey areas or black spots
where WLAN coverage may suddenly be lost.
Another important factor involving wireless coverage is security. Unlike
wired Ethernet connections, the scope of a wireless network may extend
beyond the physical boundaries of the building or site for which the network
is intended, potentially allowing unknown external users to access resources
without authority, and therefore imposing a great risk to the integrity of the
network.

Multiple security mechanisms have been devised for maintaining the overall
integrity of the wireless enterprise. The implementation of perimeter security
as a means of protecting 802.11 networks from threats such as
implementation of unauthorized APs and users, ad-hoc networks, and denial
of service (DoS) attacks is an example of a typical wireless solution.
A wireless intrusion detection system (WIDS) can be used to detect
unauthorized users and APs. A wireless intrusion prevention system (WIPS)
can protect an enterprise network against unauthorized access by wireless
network devices such as a rogue AP.
User access security is another common solution where link authentication,
access authentication, and data encryption are used as forms of Network
Access Control (NAC) to ensure validity and security of user access on
wireless networks, essentially managing user access based on defined
permissions. Service security is another feature that may also be
implemented to protect service data of authorized users from being
intercepted by unauthorized users during transmission.

1. A growing majority of employees require mobility within an enterprise
   network as part of daily work procedures, whether for meetings or
   collaboration, which fixed-line networks generally limit. Adoption of WLAN
   allows for greater mobility and also flexibility in the number of users
   connecting to the enterprise network.
2. With flexibility of access comes a greater need for security to monitor user
access and prevent sensitive information being accessed from within the
network. As a greater number of employees begin to rely on personal
devices and connect to the enterprise network over the WLAN, the
potential for viruses, malware and spyware amongst others, becomes a
greater potential threat to the network as a whole.
3. As the need to support a growing number of services and users increases,
   greater bandwidth is required, which translates into a larger portion of
   the wireless spectrum being adopted by standards. The 5GHz band has begun
   to take a prominent role in newer standards due to spectrum limitations in
   the 2.4GHz range, which results in a shorter range for AP signal
   transmissions under future standards.

Thank you
www.huawei.com

~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
Serial technology has in recent years been slowly phased out across
networks in favor of Ethernet technology, however it remains active as
a legacy technology in a great number of enterprise networks alongside
Ethernet. Serial has traditionally provided solutions for communication
over long distances and therefore remains a prominent technology for
Wide Area Network (WAN) communication, for which many protocols and
legacy WAN technologies remain in operation at the enterprise edge. A
thorough knowledge of these technologies is required to support many
aspects of WAN operation.

Copyright 2014 Huawei Technologies Co., Ltd. All rights reserved.

Page 2

~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:
Explain how data is carried over a serial based medium.

Configure link layer protocols for serial links.

Copyright 2014 Huawei Technologies Co., Ltd. All rights reserved.

Page 3

~~ HUAWEI

Serial connections represent a form of legacy technology that has commonly
been used for the support of Wide Area Network (WAN) transmissions. The
transmission of data as electrical signals over a serial link again requires a
form of signaling to control the sending and receiving of frames, as found with
Ethernet. Serial connections define two forms of signaling that may be used
for synchronization of transmissions, known as asynchronous and
synchronous communication. Asynchronous signaling works on the principle
of sending additional bits, referred to as start and stop bits, with each byte or
frame to make the receiving node aware of the incoming frame, and thus
periodically reset the timing between frames to ensure that the rates of
transmission and reception are maintained. The start bit is always
represented as a 0 bit value, while the stop bit represents a 1 bit value. One of
the main concerns with this signaling method is the additional overhead that
results for each frame delivered, with the start and stop bits representing a
large percentage of the frame overall. This method is commonly associated
with technologies such as Asynchronous Transfer Mode (ATM), a form of cell
switching technology that generates fixed-size frames (cells) of 53 bytes as a
means of supporting lower jitter through minimized queue processing times,
making it ideal for real-time communication such as voice. ATM has however
begun to make way for newer technologies such as MPLS, due to the loss of
its advantage over the frame processing speeds that are now possible with
routers and switches.
Synchronous serial connections rely on a clocking mechanism between the
peering devices in which one side (DCE) provides the clocking to synchronize
communication. This clocking is maintained through the carrying of clocking
information between the sender and receiver as part of the data signal.

The High-level Data Link Control (HDLC) is a bit-oriented data link protocol
that is capable of supporting both synchronous and asynchronous data
transmissions. A complete HDLC frame consists of Flag fields that mark the
start and end of the HDLC frame, often as 01111110, or 01111111 when a
frame is to be suddenly aborted and discarded; an address field, which
supports multipoint situations where one or multiple secondary terminals
communicate with a primary terminal in a multipoint (multidrop) topology
known as an unbalanced connection, as opposed to the more commonly
applied balanced (point-to-point) connection; a control field, which defines the
frame type as either information, supervisory, or unnumbered; and a frame
check sequence (FCS) field for ensuring the integrity of the frame.
Of the control field frame types, only the information frame type is supported
by Huawei ARG3 series routers and is used to carry data. The information
frame type carries send N(S) and receive N(R) sequence numbers, as well as
Poll and Final bits (P/F) for communicating status between primary and
secondary stations. Supervisory frame types in HDLC are used for error and
flow control and unnumbered frame types are used to manage link
establishment for example between primary and secondary stations.

Establishment of HDLC as the link layer protocol over serial connections
requires simply that the link protocol be assigned using the link-protocol hdlc
command under the interface view for the serial interface that is set to use the
protocol. The configuration of the link protocol must be performed on both
peering interfaces that are connected to the point-to-point network before
communication can be achieved.
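A minimal sketch of such a configuration, assuming hypothetical interface numbers and addresses, might be:

```
[RTA] interface Serial 1/0/0
[RTA-Serial1/0/0] link-protocol hdlc
[RTA-Serial1/0/0] ip address 10.0.1.1 24

[RTB] interface Serial 1/0/0
[RTB-Serial1/0/0] link-protocol hdlc
[RTB-Serial1/0/0] ip address 10.0.1.2 24
```

The router prompts for confirmation before the encapsulation protocol of the link is changed.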

When an interface has no IP address, it cannot generate routes or forward
packets. The IP address unnumbered mechanism allows an interface without
an IP address to borrow an IP address from another interface. The IP address
unnumbered mechanism effectively enables the conservation of IP addresses,
and does not require that an interface occupy an exclusive IP address all of
the time. It is recommended that the interface from which the unnumbered IP
address is borrowed be a loopback interface, since this type of interface is
more likely to be always active and as such able to supply an available
address.
When using an unnumbered address, a static route or dynamic routing
protocol should be configured so that the interface borrowing the IP address
can generate a route between the devices. If a dynamic routing protocol is
used, the length of the learned route mask must be longer than that of the
lender's IP address mask, because ARG3 series routers use the longest
match rule when searching for routes. If a static route is used and the IP
address of the lender uses a 32-bit mask, the length of the static route mask
must be shorter than 32 bits. If a static route is used and the IP address of the
lender uses a mask less than 32 bits, the length of the static route mask must
be longer than that of the lender's IP address mask.
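Bringing these points together, a sketch of IP unnumbered with a static route (interface numbers and addresses are assumed for illustration) might be:

```
[RTA] interface LoopBack 0
[RTA-LoopBack0] ip address 10.0.1.1 32
[RTA-LoopBack0] quit
[RTA] interface Serial 1/0/0
[RTA-Serial1/0/0] link-protocol hdlc
[RTA-Serial1/0/0] ip address unnumbered interface LoopBack 0
[RTA-Serial1/0/0] quit
[RTA] ip route-static 10.0.2.0 24 Serial 1/0/0
```

Note that the static route mask (24 bits) is shorter than the 32-bit mask of the lender's loopback address, in line with the rule described above.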

Through the display ip interface brief command, a summary of the address
assignment is output. In the event of assigning an unnumbered address, the
address value will display as being present on multiple interfaces, showing
that the IP address has been successfully borrowed from the logical loopback
interface for use on the physical serial interface.

The Point-to-Point Protocol (PPP) is a data link layer protocol that
encapsulates and transmits network layer packets over point-to-point (P2P)
links. PPP supports point-to-point data transmission over full-duplex
synchronous and asynchronous links.
PPP evolved from the Serial Line Internet Protocol (SLIP). PPP supports both
synchronous and asynchronous links, whereas other data link layer protocols
such as Frame Relay (FR) support only synchronous links. PPP is an
extensible protocol, facilitating the extension of not only IP but also other
protocols, and is capable of supporting the negotiation of link layer attributes.
PPP supports multiple Network Control Protocols (NCP) such as the IP
Control Protocol (IPCP) and Internetwork Packet Exchange Control Protocol
(IPXCP) to negotiate the different network layer attributes. PPP provides the
Password Authentication Protocol (PAP) and Challenge Handshake
Authentication Protocol (CHAP) for network security authentication. PPP has
no retransmission mechanism, reducing the network cost and speeding up
packet transmission.

PPP encapsulation provides for multiplexing of different network-layer
protocols simultaneously over the same link, however in today's networks
this capability of PPP is generally required for an IP-only solution. The
versatility of PPP to accommodate a variety of environments is well supported
through the Link Control Protocol (LCP). In order to establish communications
over a point-to-point link, each end of the PPP link must first send LCP
packets to configure and test the data link. More specifically, LCP is used to
negotiate and establish agreement for encapsulation format options, manage
the MRU of packets, detect a looped-back link through magic numbers, and
determine errors in terms of parameter misconfigurations, as well as
terminate an established link. Peer authentication on the link, and
determination of when a link is functioning properly and when it is failing,
represent other optional facilities that are provided by LCP.
After the link has been established and optional facilities have been
negotiated as required by the LCP component of PPP, NCP packets must
then be sent to choose and configure one or more network-layer protocols.
Typical IP based Network Control Protocols enable features such as address
configuration (IPCP), and (van Jacobson) compressed TCP/IP.

This initiation and termination of a PPP link begins and ends with the dead
phase. When two communicating devices detect that the physical link
between them is activated (for example, carrier signals are detected on the
physical link), PPP will transition from the Dead phase into the Establish
phase. In the Establish phase, the two devices perform an LCP negotiation to
negotiate the working mode as either single-link (SP) or multi-link (MP), the
Maximum Receive Unit (MRU), authentication mode etc.
If the authentication mode is defined, the optional Authenticate phase will be
initiated. PPP provides two password authentication modes: PAP
authentication and CHAP authentication. Two CHAP authentication modes
are available: unidirectional CHAP authentication and bidirectional CHAP
authentication. In unidirectional CHAP authentication, the device on one end
functions as the authenticating device, and the device on the other end
functions as the authenticated device. In bidirectional CHAP authentication,
each device functions as both the authenticating device and authenticated
device. In practice however, only unidirectional CHAP authentication is used.
Following successful authentication, the Network phase initiates, through
which NCP negotiation is performed to select and configure a network
protocol and to negotiate network-layer parameters. Each NCP may be in an
Opened or Closed state at any time. After an NCP enters the Opened state,
network-layer data can be transmitted over the PPP link.
PPP can terminate a link at any time. A link can be terminated manually by an
administrator, or be terminated due to the loss of carrier, an authentication
failure, or other causes.

PPP generally adopts an HDLC-like frame architecture for transmission
over serial connections. Flag fields are adopted to denote the start and the
end of a PPP frame which is identifiable from the binary sequence 01111110
(0x7E). The address field, although present, is not applied to PPP as is the
case with HDLC and therefore must always contain a 11111111 (0xFF) value,
which represents an All-Stations address. The control field is also fixed with a
value of 00000011 (0x03) representing the unnumbered information
command.
The frame check sequence (FCS) is generally a 16-bit value used to maintain
the integrity of the PPP frame. PPP additionally defines an 8- or 16-bit protocol
field that identifies the datagram encapsulated in the Information field of the
packet. Typical examples may include 0xc021 for Link Control Protocol,
0xc023 for Password Authentication Protocol, and 0xc223 for the Challenge
Handshake Authentication Protocol. The Information field contains the
datagram for the protocol specified in the Protocol field.
The maximum length for the Information field, (not including the Protocol
field), is defined by the Maximum Receive Unit (MRU), which defaults to 1500
bytes. Where the value 0xc021 is implemented, communicating devices
negotiate by exchanging LCP packets to establish a PPP link.
The LCP packet format carries a code type field that references various
packet types during PPP negotiation, for which common examples include
Configure-Request (0x01), Configure-Ack (0x02), Terminate-Request (0x05)
etc. The Data field carries various supporting type/length/value (TLV) options
for negotiation, including the MRU, authentication protocols, etc.

As part of the LCP negotiation, a number of packet types are defined that
enable parameters to be agreed upon before a PPP data link is established. It
is necessary that the two communicating devices negotiate the link layer
attributes such as the MRU and authentication mode. In order to achieve this,
various packet types are communicated.
The Configure-Request packet type allows initiation of LCP negotiation
between peering devices and must be transmitted at such times. Any
Configure-Request packet sent must be responded to, and may be answered
through one of a number of response packet types. Where every
configuration option received in a Configure-Request is recognizable and all
values are acceptable, a Configure-Ack packet will be transmitted.
Where all received configuration options in the Configure-Request packet
are recognized, but some values are not accepted, a Configure-Nak packet
will be transmitted, containing only the unaccepted configuration options
originally received in the Configure-Request packet. A Configure-Reject is
used when certain configuration options received in a Configure-Request
are not recognizable, and thus are not accepted for negotiation.

Some of the common configuration options that are negotiated and carried as
part of the LCP packet include the MRU, Authentication protocol supported by
the sending peer as well as the magic number.
The magic number provides a method to detect looped-back links and other
anomalies at the data link layer. Where a Configure-Request is received
containing a Magic-Number configuration option, the received Magic-Number
is compared with the Magic-Number of the last Configure-Request sent to the
peer. If the two Magic-Numbers are different, the link is understood to be a
non-looped-back link, for which a Configure-Ack can be given in response. If
the two Magic-Numbers are equal however, a possibility exists that the link is
looped back, and further checking must be performed for this
Configure-Request; this is done by sending a Configure-Nak to effectively
request a different Magic-Number value.

The sequence of events leading to the establishment of PPP between two
peers is initiated by the sending of a Configure-Request packet to the peering
device. Upon receiving this packet, the receiver must assess the configuration
options to determine the packet format to respond with. In the event that all
configuration options received are acceptable and recognized, the receiver
will reply with a Configure-Ack packet.

Following the initial transmission of the Configure-Request as part of PPP
negotiation, it is also possible that a Configure-Nak be returned, in particular
where all configuration options are recognized, but some values are not
accepted. On reception of the Configure-Nak packet, a new Configure-Request
is generated and sent, with the configuration options generally modified to the
specifications in the received Configure-Nak packet.
Multiple values for a configuration option may be specified by the
Configure-Nak packet, from which the peer is expected to select a single value
to include in the next Configure-Request packet.

For PPP LCP negotiation in which one or more configuration options
received in a Configure-Request are unrecognized or considered not
acceptable for negotiation, a Configure-Reject packet is transmitted.
Reception of a valid Configure-Reject indicates that a new Configure-Request
must be sent, and that any configuration options carried in the
Configure-Reject packet must be removed from the configuration options
sent as part of the following Configure-Request packet.

Establishment of PPP requires that the link layer protocol on the serial
interface be specified. For ARG3 series of routers, PPP is enabled by default
on the serial interface. In the event where the interface is currently not
supporting PPP, the link-protocol ppp command is used to enable PPP at the
data link layer. Confirmation of the change of encapsulation protocol will be
prompted, for which approval should be given as demonstrated in the
configuration example.
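As a sketch, assuming a hypothetical serial interface and address:

```
[RTA] interface Serial 1/0/0
[RTA-Serial1/0/0] link-protocol ppp
[RTA-Serial1/0/0] ip address 10.0.1.1 24
```

The same commands are applied on the peering router before LCP negotiation can complete.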

The Password Authentication Protocol (PAP) is a two-way handshake
authentication protocol that transmits passwords in plain text. PAP
authentication is performed during initial link establishment. After the Link
Establishment phase is complete, the user name and password are
repeatedly sent by the peer to the authenticator until authentication is
acknowledged or the connection is terminated. PAP authentication effectively
simulates login operations in which plain text passwords are used to establish
access to a remote host. The authenticated device sends the local user name
and password to the authenticator. The authenticator checks the user name
and password of the authenticated device against a local user table and
sends an appropriate response to the authenticated device to confirm or reject
authentication.

The Challenge Handshake Authentication Protocol (CHAP) is used to
periodically verify the identity of the peer using a three-way handshake. This
is done upon initial link establishment, and can be repeated periodically. The
distinguishing principle of CHAP lies in the protection it provides by avoiding
transmission of any password over the link, relying instead on a challenge
and response process that can only succeed if both the authenticator and the
authenticated device share a value referred to as a secret. An algorithm such
as MD5 is commonly used to hash the challenge together with the secret,
ensuring the integrity of the response; the resulting hash value is compared
to a result generated by the authenticator. If the response and the value
generated by the authenticator match, the authenticated peer is approved.

The IP Control Protocol (IPCP) is responsible for configuring, enabling, and
disabling the IP protocol modules on both ends of the point-to-point link. IPCP
uses the same packet exchange mechanism as the Link Control Protocol
(LCP). IPCP packets may not be exchanged until PPP has reached the
Network phase; IPCP packets received before this phase is reached are
expected to be silently discarded. The address negotiation configuration
option provides a way to negotiate the IP address to be used on the local end
of the link, for which a statically defined method allows the sender of the
Configure-Request to state which IP-address is desired. Upon configuration of
the IP address a Configure-Request message is sent containing the IP
address requested to be used, followed by a Configure-Ack from the peering
device to affirm that the IP address is accepted.

A local device operating as a client and needing to be assigned an IP address
in the range of the remote device (server) must request a valid address by
applying the ip address ppp-negotiate command on the physical interface
with which the client peers with the server. Through this method, a client can
retrieve a valid address. This is applicable in scenarios such as where a
client accesses the Internet through an Internet Service Provider (ISP)
network, and through which it can obtain an IP address from the ISP. An
address is proposed to the client upon the server receiving a
Configure-Request for which no IP address has been defined. The PPP
server (RTB) will respond with a Configure-Nak containing the suggested IP
address parameters for RTA. A follow-up Configure-Request message with
the IP address amended accordingly enables the (NCP) IPCP to successfully
establish the network layer protocol.
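A sketch of this negotiated addressing, with RTA as the client and RTB as the server (interface numbers, addresses, and the server-side address assignment command are assumptions for illustration), might be:

```
[RTA] interface Serial 1/0/0
[RTA-Serial1/0/0] link-protocol ppp
[RTA-Serial1/0/0] ip address ppp-negotiate

[RTB] interface Serial 1/0/0
[RTB-Serial1/0/0] link-protocol ppp
[RTB-Serial1/0/0] ip address 10.0.1.1 24
[RTB-Serial1/0/0] remote address 10.0.1.2
```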

The establishment of PAP authentication requires that one peer operate as
the authenticator in order to authenticate an authenticated peer. The PPP PAP
authenticator is expected to define the authentication mode, a local user
name and password, and the service type. If a domain is defined to which the
local user belongs (as defined by AAA), the authentication domain is also
expected to be specified under the PAP authentication mode.
An authenticated peer requires that an authentication user name, and
authentication password be specified in relation to the username and
password set by the authenticator. The ppp pap local-user <username>
password { cipher | simple } <password> command is configured on the
authenticated device to achieve this.
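A minimal sketch of PAP between an authenticator (RTA) and an authenticated peer (RTB), using a hypothetical user name and password, might be:

```
[RTA] aaa
[RTA-aaa] local-user huawei password cipher huawei123
[RTA-aaa] local-user huawei service-type ppp
[RTA-aaa] quit
[RTA] interface Serial 1/0/0
[RTA-Serial1/0/0] ppp authentication-mode pap

[RTB] interface Serial 1/0/0
[RTB-Serial1/0/0] ppp pap local-user huawei password cipher huawei123
```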

Through debugging commands, which provide a real-time output of events in
relation to specific protocols, the authentication request process can be
viewed. As displayed in the example, a PAP authentication request is
performed, for which authentication establishment is deemed successful.

In CHAP authentication, the authenticated device sends only the user name
to the authenticating device. CHAP is understood to feature higher security
since passwords are not transmitted over the link, instead relying on hashed
values to provide challenges to the authenticated device based on the
configured password value on both peering devices. In its simplest form,
CHAP may be implemented based on local user assignments as with PAP, or
may involve more stringent forms of authentication and accounting achieved
through AAA and authentication/accounting servers.
As demonstrated, the configuration of CHAP based on locally defined users
requires limited configuration of local user parameters and the enablement of
PPP CHAP authentication mode on the authenticator device. Where domains
exist, the authenticator may also be required to define the domain being used
if different from the default domain.
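A comparable sketch for CHAP based on locally defined users (the user name and password are again hypothetical) might be:

```
[RTA] aaa
[RTA-aaa] local-user huawei password cipher huawei123
[RTA-aaa] local-user huawei service-type ppp
[RTA-aaa] quit
[RTA] interface Serial 1/0/0
[RTA-Serial1/0/0] ppp authentication-mode chap

[RTB] interface Serial 1/0/0
[RTB-Serial1/0/0] ppp chap user huawei
[RTB-Serial1/0/0] ppp chap password cipher huawei123
```

Since the CHAP secret is never sent over the link, the password configured on RTB must match the one defined for the user on RTA.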

Debugging of the CHAP authentication processes displays the stages
involved with CHAP authentication, originating from listening on the interface
for any challenges received following the LCP negotiation. In the event that a
challenge is sent, the authenticated device must provide a response
containing a generated hash value, involving the authentication parameters
(password) set on the authenticated peer, which the authenticator will
promptly validate before providing a success or failure response.

1. A Configure-Ack packet is required in order to allow the link layer to be
   successfully established when using PPP as the link layer encapsulation
   mode.
2. The Internet Protocol Control Protocol (IPCP) is used to negotiate the IP
protocol modules as part of the NCP negotiation process. This occurs
during the Network phase of PPP establishment.

Thank you
www.huawei.com

~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
The introduction of Frame Relay during the 1990s saw a rapid growth in
its implementation and adoption by many enterprise businesses. Over
time however, newer carrier-based solutions such as Multi Protocol Label
Switching (MPLS) have been introduced, allowing more advanced
service opportunities to be offered by providers, and so existing Frame
Relay networks continue to be maintained for existing customers as the
technology is slowly phased out. Frame Relay is primarily a service
provider technology, however enterprise administration requires that
engineers are capable of establishing connectivity over Frame Relay virtual
circuits at the edge of the enterprise network, as well as being familiar with
the circuit establishment process to identify and resolve issues that may
occur.
Copyright 2014 Huawei Technologies Co., Ltd. All rights reserved.

Page 2

~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:
Describe the function of the DLCI in a Frame Relay network
Describe the LMI negotiation process.

Configure Frame Relay using both static and dynamic mapping.

Copyright 2014 Huawei Technologies Co., Ltd. All rights reserved.

Page 3

~~ HUAWEI

Working at the data link layer of the Open System Interconnection (OSI)
model, Frame Relay (FR) is a technique that uses simple methods to transmit
and exchange data. A distinct feature of Frame Relay is that it simplifies
processes of error control, confirmation and re-transmission, traffic control,
and congestion prevention relative to its predecessor protocol X.25 on
packet-switched networks, thus reducing processing time. This is important for the
effective use of high-speed digital transmission channels. Frame Relay
networks are generally in decline with little in the way of the establishment of
new Frame Relay networks due to the emergence of technologies such as
MPLS that utilize IP networks to provide a vast array of service options for
enterprise customers. Where Frame Relay networks are currently maintained
however, capability for support and maintenance must be covered at the
customer edge.

Frame Relay is a packet-switched network technology that emulates circuit-switched
networks through the establishment of virtual circuits (VCs), which
define a path between the Frame Relay devices at each end of the network.
Every VC uses a Data Link Connection Identifier (DLCI) to define a Frame
Relay channel that is mapped across the Frame Relay network to a generally
fixed destination. The user devices are called Data Terminal Equipment (DTE)
and represent the points in the Frame Relay network where the circuit is
established and terminated. DTE establish virtual circuits with Data
Circuit-terminating Equipment (DCE), which provide the clocking mechanism
between the Frame Relay network and the DTE at the edge of the customer
network.

Frame Relay is a statistical multiplexing protocol: it provides multiple
virtual circuits (VCs) on a single physical line, with DLCIs used to
differentiate the VCs. A DLCI is valid only on the local interface and the
directly connected peer interface; it is not globally valid. On a Frame
Relay network, the same DLCI on different physical interfaces does not
indicate the same VC.
Available DLCIs range from 16 to 1022, of which DLCIs 1007 to 1022 are
reserved. Because a Frame Relay VC is connection-oriented, different local
DLCIs are connected to different remote devices, and so the local DLCI can
be considered the "Frame Relay address" of the remote device. Frame Relay
address mapping associates the protocol address of the peer with the Frame
Relay address (the local DLCI), so that the upper-layer protocol can locate
the remote device. When transmitting IP packets over Frame Relay links, a
router first searches for the next-hop address in the routing table, and
then finds the corresponding DLCI in a Frame Relay address mapping table,
which maintains the mapping between remote IP addresses and next-hop DLCIs.
The address mapping table can be configured manually or maintained
dynamically using a discovery process known as Inverse ARP. A VC can be
understood as a logical circuit built on the shared network between two
network devices. VCs are divided into permanent VCs (PVCs) and switched VCs
(SVCs).
A PVC is a manually created, always-up circuit that defines a fixed path to
a given destination, whereas an SVC is a VC that can be created or cleared
automatically through negotiation, similar to the way a telephone call is
established and terminated. Generally, however, only PVCs are implemented
for Frame Relay.

With PVCs, both the network devices and the user devices need to maintain an
awareness of the current status of each PVC. The protocol that monitors PVC
status is called the Local Management Interface (LMI) protocol. The LMI
protocol maintains the link and PVC status of Frame Relay through status
enquiry messages and status messages. The LMI module manages the PVC,
including adding and deleting PVCs, detecting PVC link integrity, and
reporting PVC status. The system supports three LMI protocols: LMI complying
with ITU-T Q.933 Appendix A, LMI complying with ANSI T1.617 Appendix D, and
a nonstandard compatible protocol. A status enquiry message is used to query
the PVC status and link integrity, while a status message is used to reply
to the status enquiry message, notifying the peer of the PVC status and link
integrity.
The interface in DTE mode periodically sends status enquiry messages to the
interface in DCE mode. After receiving a status enquiry message, the
interface in DCE mode replies with a status message to the interface in DTE
mode. The interface in DTE mode determines the link status and PVC status
according to the received status messages. If the interfaces in DCE and DTE
modes can exchange LMI negotiation messages normally, the link status
changes to Up and the PVC status changes to Active, indicating successful
LMI negotiation.
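As an illustration, the LMI protocol variant can be selected explicitly on a Frame Relay interface. The following is a minimal sketch assuming Huawei VRP syntax; the interface name is illustrative, the fr lmi type command form should be verified against the product documentation, and the # annotations are explanatory notes rather than CLI input:

```
[RTA] interface Serial1/0/0
[RTA-Serial1/0/0] link-protocol fr        # set the link layer protocol to Frame Relay
[RTA-Serial1/0/0] fr lmi type q933a       # LMI complying with ITU-T Q.933 Appendix A
```

Both the DTE and DCE ends must agree on the LMI type for the status enquiry and status message exchange to succeed.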

The main function of Inverse ARP is to resolve the IP address of the remote
device connected to each VC. If the protocol address of the remote device
connected to a VC can be learned, the mapping between the remote protocol
address and the DLCI can be created on the local end, avoiding manual
configuration of the address mapping.
When a new VC is found, Inverse ARP sends a request packet to the remote end
of this VC if the local interface is configured with a protocol address.
This request packet contains the local protocol address. When the remote
device receives the request, it obtains the local protocol address, creates
the address mapping, and sends an Inverse ARP response packet, after which
the address mapping is also created on the local end. If a static mapping
has been configured manually or a dynamic mapping already exists, no Inverse
ARP request packet is sent to the remote end on this VC, regardless of
whether the remote address in the dynamic mapping is correct; an Inverse ARP
request packet is sent only when no mapping exists. If the receiver of an
Inverse ARP request packet finds that the remote protocol address is the
same as that in a locally configured mapping, it does not create a dynamic
mapping.

Routing protocols such as RIP and OSPF provide the means for routing traffic
at the network layer between source and destination, while Frame Relay
provides the underlying technology that facilitates data link communication.
Frame Relay interfaces may naturally carry multiple virtual circuits over a
single physical interface. Network layer protocols, however, generally
implement split horizon, which, as a defense against routing loops, does not
allow a router to send routing updates out of the interface on which they
were received. One potential solution would be to disable split horizon in
the routing protocol; this, however, would leave the interface susceptible
to routing loops. Using multiple physical interfaces to connect multiple
adjacent nodes is another alternative, but it requires the router to have
multiple physical interfaces and increases the overall implementation cost.

A more effective means of resolving issues arising from the implementation
of split horizon over Frame Relay networks is to apply logical
sub-interfaces to a single physical interface. Frame Relay sub-interfaces
can be classified into two types.
Point-to-point sub-interfaces are used to connect a single remote device.
Each point-to-point sub-interface can be configured with only one PVC. In
this case, the remote device can be determined uniquely without static
address mapping: when the PVC is configured for the sub-interface, the peer
is identified.
Point-to-multipoint sub-interfaces are used to connect multiple remote
devices. Each sub-interface can be configured with multiple PVCs, each of
which maps to the protocol address of its connected remote device, so that
different PVCs can reach different remote devices. The address mappings must
be configured manually, or set up dynamically through the Inverse Address
Resolution Protocol (InARP).
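The two sub-interface types might be configured as follows. This is a hedged sketch assuming Huawei VRP syntax; the interface numbers, DLCIs, and IP addresses are illustrative, and the # annotations are explanatory notes rather than CLI input:

```
[RTA] interface Serial1/0/0.1 p2p             # point-to-point sub-interface, one PVC only
[RTA-Serial1/0/0.1] fr dlci 100               # the single PVC uniquely identifies the peer
[RTA-Serial1/0/0.1] ip address 10.0.12.1 24

[RTA] interface Serial1/0/0.2 p2mp            # point-to-multipoint sub-interface
[RTA-Serial1/0/0.2] fr map ip 10.0.13.3 200   # manual mapping of a peer address to a DLCI
[RTA-Serial1/0/0.2] ip address 10.0.13.1 24
```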

The configuration of dynamic address mapping for a Frame Relay PVC requires
that the Inverse Address Resolution Protocol (InARP) be used. Implementing
dynamic mapping requires that the link layer protocol type be set to Frame
Relay using the link-protocol fr command. Once the interface link layer
protocol has been changed, the interface on the customer edge device must
operate as DTE; this is the default on Huawei ARG3 series routers, and
therefore need not be set in any basic configuration. Finally, to allow
dynamic mapping to occur, the fr inarp command is applied to the interface.
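The dynamic mapping steps above can be sketched as follows, assuming Huawei VRP syntax; the interface name and IP address are illustrative, and the # annotations are notes rather than CLI input:

```
[RTA] interface Serial1/0/0
[RTA-Serial1/0/0] link-protocol fr          # change the link layer protocol to Frame Relay
[RTA-Serial1/0/0] fr interface-type dte     # DTE is the default, shown here for completeness
[RTA-Serial1/0/0] fr inarp                  # enable Inverse ARP for dynamic address mapping
[RTA-Serial1/0/0] ip address 10.0.12.1 24
```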

Using the display fr pvc-info command, it is possible to discover all
permanent virtual circuits (PVCs) associated with the local interface. In
this instance a single PVC exists, and the circuit has been established with
the support of Inverse ARP. The presence of inbound and outbound packets
indicates that the Frame Relay mapping through Inverse ARP has been
successful and that traffic is flowing over the circuit.

The fr map ip destination-address [ mask ] dlci-number command configures a
static mapping by associating the protocol address of a peer device with the
Frame Relay address (DLCI value) of the local device. This configuration
helps upper-layer protocols locate a peer device based on its protocol
address. If the DCE is configured with a static address mapping and the DTE
is enabled with the dynamic address mapping function, the DTE can
communicate with the DCE without being configured with a static address
mapping. If the DTE is configured with a static address mapping and the DCE
is enabled with the dynamic address mapping function, however, the DTE and
DCE cannot communicate with each other. Where broadcast packets must be
carried over the PVC, the broadcast parameter can be appended to the fr map
ip destination-address [ mask ] dlci-number command.
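A static mapping equivalent might be configured as follows. This is a VRP-style sketch; the peer address, DLCI, and interface name are illustrative, and the # annotations are notes rather than CLI input:

```
[RTA] interface Serial1/0/0
[RTA-Serial1/0/0] link-protocol fr
[RTA-Serial1/0/0] undo fr inarp                       # disable dynamic mapping where static is preferred
[RTA-Serial1/0/0] fr map ip 10.0.12.2 100 broadcast   # map peer 10.0.12.2 to local DLCI 100
[RTA-Serial1/0/0] ip address 10.0.12.1 24
```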

The state of a PVC can be determined through the display fr pvc-info
command. Each PVC that has been created is listed along with the interface
with which it is associated, the local DLCI number, and the current DLCI
status. A fully operational PVC is shown with an active status, whereas a
non-operational PVC is given an inactive status. The usage parameter
indicates how the PVC was obtained: LOCAL indicates that the PVC was
configured locally; UNUSED indicates that the PVC was learned from the DCE
side.

1. The Frame Relay PVC status will display as INACTIVE in the event that
LMI negotiation messages have not been successfully exchanged. This may
occur when the interface carrying the PVC is inadvertently shut down, or
when the interface configured with the PVC fails to negotiate with the
peer. An administrator would therefore be required to check the LMI
negotiation process to discover the point of negotiation failure. The
display fr lmi-info command can be used to view the related LMI details,
in which discarded messages may be seen as LMI negotiation attempts fail.
2. Inverse ARP is used to resolve the IP address of the peer device
connected to the local device through a virtual circuit (VC), allowing the
mapping between the peer IP address and the DLCI to be automatically
created on the local device.


---


~ Foreword
The application of DSL technology relies strongly on the existing telephone
infrastructure found in almost every household and office globally. With the
continued development of newer DSL standards allowing rates of up to
100 Mbps, the application of DSL as a WAN technology for home and enterprise
remains firmly valid. Traditional DSL connections were established over
legacy ATM networks; however, Ethernet has continued to emerge as the
underlying technology on which many service providers build their networks,
and therefore knowledge of PPPoE remains valuable for establishing DSL
connectivity at the enterprise edge.


Objectives
Upon completion of this section, trainees will be able to:
Describe the PPPoE connection establishment process.
Configure a PPPoE session.


DSL represents a form of broadband technology that utilizes existing
telephony networks to allow for data communications. Communication is
facilitated through a remote transceiver unit, or modem, at the customer
premises, which communicates over the existing telephone lines with a
central office transceiver unit in the form of the Digital Subscriber Line
Access Multiplexer (DSLAM), where traffic is multiplexed onto a high-speed
ATM or Ethernet network before reaching the broadband remote access server
(BRAS), or PPPoA/PPPoE server, within the service provider network.
The distance between the two transceivers can vary depending on the
specific DSL technology applied. In the case of an Asymmetric Digital
Subscriber Line (ADSL), the distance extends up to around 18,000 feet or
5,460 meters, traditionally over an ATM network, whereas with Very High
Speed Digital Subscriber Line 2 (VDSL2), local loop distances of only around
1,500 meters are supported, with fiber (FTTx) technology applied to provide
Ethernet-based backhaul to the BRAS (PPPoE server).

PPPoE refers to the encapsulation of PPP within Ethernet frames to support a
connection over broadband technologies to a PPPoE broadband remote access
server (BRAS) located within the service provider network. This supports the
authentication and accounting process before access to a remote network such
as the Internet is provided. The router RTA operates as the client for
establishment of the PPPoE session with the PPPoE server, via which an
authorized connection is established for access to remote networks and
resources. The DSL modem provides the modulation and demodulation of signals
across the copper telephone wire infrastructure traditionally found in homes
and offices.

PPPoE has two distinct stages: an initial Discovery stage followed by a PPP
Session stage. When a client wishes to initiate a PPPoE session, it must
first perform Discovery, which involves four steps relying on four
individual packet types: the PPPoE Active Discovery Initiation (PADI), PPPoE
Active Discovery Offer (PADO), PPPoE Active Discovery Request (PADR), and
PPPoE Active Discovery Session-confirmation (PADS) packets. The termination
process of a PPPoE session uses a PPPoE Active Discovery Terminate (PADT)
packet to close the session.

In the Discovery stage, when a client (RTA) accesses a server using PPPoE,
the client is required to identify the Ethernet MAC address of the server
and establish a PPPoE session ID. When the Discovery stage is complete, both
peers know the PPPoE Session_ID and the peer Ethernet address, which
together define the unique PPPoE session. The client broadcasts a PPPoE
Active Discovery Initiation (PADI) packet containing the service information
required by the client. After receiving this PADI packet, all servers in
range compare the requested services to the services that they can provide.
In the PADI packet, the Destination_address field is a broadcast address,
the Code field is set to 0x09, and the Session_ID field is set to 0x0000.

The servers that can provide the requested services send back PPPoE Active
Discovery Offer (PADO) packets. The client (RTA) can receive more than one
PADO packet from servers. The destination address is the unicast address of
the client that sent the PADI packet. The Code field is set to 0x07 and the
Session_ID field must be set to 0x0000.

Since the PADI packet is broadcast, the client can receive more than one
PADO packet. It looks through all the received PADO packets and chooses one
based on the AC-Name (the Access Concentrator name, generally a value that
uniquely distinguishes one server from another) or on the services offered
in the PADO packet. Having chosen a server, the client sends a PPPoE Active
Discovery Request (PADR) packet to it. The destination address is set to the
unicast address of the access server that sent the selected PADO packet; the
Code field is set to 0x19 and the Session_ID field is set to 0x0000.

When receiving a PADR packet, the access server prepares to begin a PPPoE
session. It generates a unique session ID for the PPPoE session and replies
to the client with a PPPoE Active Discovery Session-confirmation (PADS)
packet. The destination address field is the unicast Ethernet address of the
client that sent the PADR packet; the Code field is set to 0x65 and the
Session_ID field is set to the value created for this PPPoE session. The
server sends this unique session identifier to the client in the PADS
packet, identifying the PPPoE session with that client. If no errors occur,
both the server and the client enter the PPPoE Session stage.

Once the PPPoE session begins, PPP data is sent as in any other PPP
encapsulation. All Ethernet packets are unicast. The ETHER_TYPE field is set
to 0x8864. The PPPoE code must be set to 0x00. The SESSION_ID must not
change for that PPPoE session and must be the value assigned in the
discovery stage. The PPPoE payload contains a PPP frame. The frame
begins with the PPP Protocol-ID.

In the PPPoE session establishment process, PPP LCP negotiation is
performed, at which point the maximum receive unit (MRU) is negotiated.
Ethernet typically has a maximum payload size of 1500 bytes; since the PPPoE
header is 6 bytes and the PPP Protocol ID is 2 bytes, the PPP maximum
receive unit (MRU) cannot be greater than 1492 bytes, as a larger value
would likely cause packet fragmentation. When LCP terminates, the client and
access concentrator (server) stop using that PPPoE session. If the client
needs to start another PPP session, it returns to the Discovery stage.

The PPPoE Active Discovery Terminate (PADT) packet can be sent anytime
after a session is set up to indicate that a PPPoE session is terminated. It can
be sent by the client or access server. The destination address field is a
unicast Ethernet address. The Code field is set to 0xA7 and the session ID
must be set to indicate the session to be terminated. No tag is required in the
PADT packet. When a PADT packet is received, no PPP traffic can be sent
over this PPPoE session. After a PADT packet is sent or received, even
normal PPP termination packets cannot be sent. A PPP peer should use the
PPP protocol to end a PPPoE session, but the PADT packet can be used
when PPP cannot be used.

Three individual steps are required in the configuration of a PPPoE client,
beginning with the configuration of a dialer interface. This enables the
connection to be established on demand and a session to be disconnected
after remaining idle for a period of time. The dialer-rule command enables
the dialer-rule view to be accessed, from which the dialer rule can be set
to define conditions for initiating the PPPoE connection.
The dialer user command chooses a dialer interface on which to receive a
call according to the remote user name negotiated by PPP. It enables the
dial control center and indicates the remote end user name, which must be
the same as the PPP user name on the remote end server. If the user name
parameter is not specified when using the undo dialer user command, the dial
control center function is disabled and all of the remote user names are
deleted; if the parameter is specified, only the specified user name is
deleted. The dialer bundle <number> command on the AR2200 is used to bind
physical interfaces to dialer interfaces.
PPP authentication parameters are defined for authentication of the client
connection to the server, along with the ip address ppp-negotiate command
for address negotiation on the interface, allowing the interface to obtain
an IP address from the remote device.
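The first step of the client configuration can be sketched as follows, assuming Huawei VRP syntax; the user names, password, and numbering are illustrative assumptions, and the # annotations are notes rather than CLI input:

```
[RTA] dialer-rule
[RTA-dialer-rule] dialer-rule 1 ip permit         # condition for initiating the connection
[RTA-dialer-rule] quit
[RTA] interface Dialer1
[RTA-Dialer1] dialer user enterprise              # enable the dial control center
[RTA-Dialer1] dialer-group 1                      # associate with dialer-rule 1
[RTA-Dialer1] dialer bundle 1                     # bundle later bound to the physical interface
[RTA-Dialer1] ppp chap user user1@isp             # PPP authentication parameters
[RTA-Dialer1] ppp chap password cipher Huawei@123
[RTA-Dialer1] ip address ppp-negotiate            # obtain an IP address from the server
```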

The second step involves binding the dialer bundle to the physical interface
over which negotiation of the configured dialer parameters is to take place,
establishing the PPPoE session. The pppoe-client dial-bundle-number <number>
command is used to achieve this, where the number refers to the bundle
number configured as part of the dialer parameters.
If on-demand is not specified, the PPPoE session works in permanent online
mode; if this parameter is specified, the PPPoE session works in on-demand
dialing mode. The AR2200 supports the packet triggering mode for on-demand
dialing. In permanent online mode, the AR2200 initiates a PPPoE session
immediately after the physical link comes up; the PPPoE session persists
until the undo pppoe-client dial-bundle-number command is used to delete it.
In triggered (on-demand) mode, the AR2200 does not initiate a PPPoE session
immediately after the physical link comes up; instead, it initiates a PPPoE
session only when data needs to be transmitted on the link. If no data is
transmitted on the PPPoE link within the interval specified by dialer timer
idle <seconds>, the AR2200 terminates the PPPoE session, and sets it up
again when data needs to be transmitted.
The final step requires that a default static route be configured, so that
any traffic for which no longer prefix match exists in the routing table
initiates the PPPoE session through the dialer interface.
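The second and third steps might then be completed as follows. This is a VRP-style sketch; the Ethernet interface facing the DSL modem is an illustrative assumption, and the # annotations are notes rather than CLI input:

```
[RTA] interface GigabitEthernet0/0/1
[RTA-GigabitEthernet0/0/1] pppoe-client dial-bundle-number 1   # bind dialer bundle 1 to this interface
[RTA-GigabitEthernet0/0/1] quit
[RTA] ip route-static 0.0.0.0 0.0.0.0 Dialer1                  # default route via the dialer interface
```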

The display interface dialer command is used to verify the current status of
the dialer configuration and allows faults on a dialer interface to be
located. A dialer current state of UP indicates that the physical status of
the interface is active, while a DOWN state confirms that a fault currently
exists. The line protocol state refers to the link layer protocol status,
for which an UP status confirms an active connection.
A link that has no IP address assigned to the interface, or that is
suppressed, will show a DOWN state; suppression can be identified by
dampening on the interface, mainly due to persistent interface flapping. The
hold timer represents the heartbeat of the PPP dialer connection after PPP
LCP negotiation transitions to the Opened state. The Internet address is
negotiated from the IP pool that exists within the PPPoE server; where no
address is present, the system displays "Internet protocol processing:
disabled". The IP address is assigned from the pool range when a connection
is negotiated, for example when triggered by a ping request.
The LCP negotiation status defines the current state, where an initial state
indicates that the physical layer is faulty or no negotiation has been
started. An opened state indicates that the system has sent a Configure-Ack
packet to the peer device and has received a Configure-Ack packet from the
peer device, confirming that the link layer is operating correctly.

The display pppoe-client session summary command provides information
regarding the PPPoE sessions on the Point-to-Point Protocol over Ethernet
(PPPoE) client, including the session status and statistics. Two examples
are given to highlight the difference between the PPPoE session states. The
ID represents the PPPoE session ID, while the bundle ID and dialer ID relate
to the values in the dialer parameter configuration.
The interface field defines the interface on which the client side
negotiation is performed. The MAC addresses of the PPPoE client and server
are also defined; however, the server MAC address is only determined after
negotiation is established. Four PPPoE session states are possible. An IDLE
state indicates that the PPPoE session is currently not established and no
attempt to establish the connection has been made. A PADI state indicates
that the PPPoE session is at the Discovery stage and a PPPoE Active
Discovery Initiation (PADI) packet has been sent. A PADR state indicates
that the session is at the Discovery stage and a PPPoE Active Discovery
Request (PADR) packet has been sent. Finally, an UP state indicates that the
PPPoE session has been established successfully.

Much of this discussion of PPPoE has focused on a simple point-to-point
connection reflective of a typical lab environment; in real-world
applications, however, the PPPoE client often represents the gateway between
an enterprise network and the Wide Area Network (WAN), through which
external destinations and resources are accessed.
The internal network of an enterprise commonly employs a private addressing
scheme in order to conserve addresses, and these addresses cannot be used to
establish IP communication over the public network infrastructure. This
requires that the AR2200 provide the network address translation (NAT)
function to translate private IP addresses into public IP addresses, a
feature that is commonly applied to facilitate internal network
communications over PPPoE.

1. Ethernet frames have a maximum payload size of 1500 bytes; when used with
PPPoE, however, the PPPoE header requires an additional 6 bytes and the
PPP protocol ID another 2 bytes, which would cause a full-sized packet to
exceed the maximum supported payload size. As such, the LCP-negotiated
MRU of a packet supporting PPPoE must be no larger than 1492 bytes.
2. The dialer bundle command binds the physical interface to the dialer
interface that is used to establish the PPPoE connection.


---


~ Foreword
The continued growth of IP networks has placed ever increasing pressure on
the IPv4 address space, creating a need to delay its depletion until long
term solutions are established. Network Address Translation (NAT) has become
the accepted interim solution and is widely implemented within enterprise
networks. Many variations of NAT have been developed, conserving the public
address space whilst enabling continued public network communication. This
section introduces the concept of NAT, along with examples of common NAT
methods applied to maintain internetworking between the enterprise network
and the public network domain.


Objectives
Upon completion of this section, trainees will be able to:
List some of the different forms of Network Address Translation.
Explain the general behavior of NAT.
Configure NAT to suit application requirements.


One of the main issues facing the expanding network has been the progressive
depletion of IP addresses as a result of growing demand. The existing IPv4
addressing scheme has struggled to keep up with the constant growth in the
number of devices that make up the public IP network commonly recognized as
the Internet. The IPv4 address space has already faced depletion at IANA,
the industry body responsible for the global allocation of addressing.
One makeshift solution was to allocate a range of private addresses, based
on the existing IP address classes, that could be reused. This has allowed
network domains to implement addressing schemes drawn from the private
address range, scaled to the size of the network domain, so that traffic
originating from and destined for locations within the same domain can
communicate without consuming valued public addresses.
A problem arises, however, in facilitating communication beyond the private
network domain, where destinations exist within the public domain or in
another private domain beyond it. Network Address Translation has become the
standard solution to this issue, allowing end stations in a private network
to forward traffic via the public network domain.

Network Address Translation (NAT) uses the generally established boundary of
the gateway router to identify the network domains between which translation
is performed; domains are considered to be either internal private networks
or external public networks. The main principle lies in the reception of
traffic with a source address in the private network and a destination
address representing a location beyond the private network domain.
The router is expected to implement NAT to translate the private source
address into a public address, so that the public destination receives a
valid public return address to which reply packets can be sent. NAT must
also create a mapping table within the gateway, allowing the gateway to
determine the private network destination address to which a packet received
from the public network should be sent; address translation is again
performed along the return path.

A number of implementations of NAT are possible, each suited to different
situations. Static NAT represents a direct one-to-one mapping that allows
the IP address of a specific end system to be translated to a specific
public address. On a large scale, the one-to-one mapping of static NAT does
nothing to alleviate the address shortage; it is applicable, however, in
cases where a host requires certain privileges associated with an address to
which it is statically bound. The same principle applies to servers that
must be reachable from the external network at a specific public address.
In the example given, packets originating from source 192.168.1.1 are
destined for the public network address 1.1.1.1. The network gateway RTA
builds a mapping between the private address 192.168.1.1 and the public
address 200.10.10.5, which is assigned as the source address of the packet
before it is forwarded by the gateway towards the intended destination. Any
return packet will be sent as a reply with the destination address
200.10.10.5, which the gateway receives and statically translates before
forwarding the packet to the associated host at 192.168.1.1. The static
mapping of addresses requires no real management of address allocation for
users, since addressing is manually assigned.
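The static NAT behavior described above might be configured on the gateway as follows. This is a VRP-style sketch; the outbound interface name is an illustrative assumption, and the # annotation is a note rather than CLI input:

```
[RTA] interface GigabitEthernet0/0/1
[RTA-GigabitEthernet0/0/1] ip address 200.10.10.1 24
[RTA-GigabitEthernet0/0/1] nat static global 200.10.10.5 inside 192.168.1.1   # one-to-one mapping
```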

Dynamic NAT works on the principle of address pools, by which internal end
systems wishing to forward traffic across the public network are each
associated with a public address from an address pool. End systems needing
to communicate with destinations in the public network domain must associate
with a unique public address obtained from the public address range of the
pool.
An address is assigned from the NAT address pool as each end system attempts
to forward traffic to a public network destination. The number of public IP
addresses owned by the NAT gateway is far smaller than the number of
internal hosts, because not all internal hosts access external networks at
the same time; the pool size is usually determined according to the number
of internal hosts that access external networks at peak hours.
The example demonstrates a case where two internal hosts generate packets
intended for the destination 1.1.1.1/24, in which each internal host is
allocated a unique address from the address pool, allowing each host to be
distinguished from the other in the public network. Once communication over
the public network is no longer required, the address mapping is removed and
the public address is returned to the address pool.
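Dynamic NAT of this kind might be configured as follows. This is a VRP-style sketch; the pool range, ACL number, and interface are illustrative, and the # annotations are notes rather than CLI input:

```
[RTA] nat address-group 1 200.10.10.9 200.10.10.12                 # pool of public addresses
[RTA] acl 2000
[RTA-acl-basic-2000] rule 5 permit source 192.168.1.0 0.0.0.255    # internal hosts eligible for NAT
[RTA-acl-basic-2000] quit
[RTA] interface GigabitEthernet0/0/1
[RTA-GigabitEthernet0/0/1] nat outbound 2000 address-group 1 no-pat   # many-to-many, addresses only
```

The no-pat parameter restricts the translation to IP addresses only, giving the many-to-many behavior described above.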

In addition to the many-to-many address translation found within Dynamic


NAT, Network Address Port Translation (NAPT) can be used to implement
concurrent address translation. NAPT allows multiple internal addresses to be
mapped to the same public address. It is also called many-to-one address
translation or address multiplexing. NAPT maps IP addresses and interfaces.
The datagrams from different internal addresses are mapped to interfaces
with the same public address and different port numbers, that is, the
datagrams share the same public address.
The router receives a request packet sent from the host on the private
network for accessing the server on the public network. The packet's source
IP address is 192.168.1.1, and its port number is 1025. The router selects an
idle public IP address and an idle port number from the IP address pool, and
sets up forward and reverse NAPT entries that specify the mapping between
the source IP address and port number of the packet and the public IP
address and port number. The router translates the packet's source IP
address and port number to the public IP address and port number based on
the forward NAPT entry, and sends the packet to the server on the public
network. After the translation, the packet's source IP address is 200.10.10.11,
and its port number is 2843.
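Based on the addresses in this walkthrough, a NAPT configuration could be sketched along the following lines. This is an illustrative sketch only: the interface name and pool number are assumptions, and the pool is deliberately reduced to a single address. Omitting the no-pat parameter is what enables port translation.

```
acl 2000
 rule 5 permit source 192.168.1.0 0.0.0.255
quit
nat address-group 1 200.10.10.11 200.10.10.11
interface Serial1/0/0
 nat outbound 2000 address-group 1
```

Because port translation is in effect, many internal hosts can share the single public address 200.10.10.11, distinguished from one another by port number.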

Easy IP is applied where hosts on small-scale local area networks require
access to the public network or Internet. Small-scale LANs are usually
deployed where only a few internal hosts are used and the outbound interface
obtains a temporary public IP address through dial-up. The temporary public
IP address is used by the internal hosts to access the Internet. Easy IP allows
the hosts to access the Internet using this temporary public address.
The example demonstrates the Easy IP process. The router receives a
request packet sent from the host on the private network for accessing a
server in the public network. The packet's source IP address in this case is
192.168.1.1, and its port number is 1025. The router sets up forward and
reverse Easy IP entries that specify the mapping between the source IP
address and port number of the packet and the public IP address and port
number of the interface connected to the public network. The router translates
the source IP address and port number of the packet to the public IP address
and port number, and sends the packet to the server on the public network.
After the translation, the packet's source IP address is 200.10.10.1, and its
port number is 2843.
After receiving a response from the server, the router queries the reverse
Easy IP entry based on the packet's destination IP address and port number.
The router translates the packet's destination IP address and port number to
the private IP address and port number of the host on the private network,
and sends the packet to the host. After the translation, the packet's
destination IP address is 192.168.1.1, and its port number is 1025.

NAT can shield hosts on private networks from public network users. When a
private network needs to provide services such as web and FTP services for
public network users, servers on the private network must be accessible to
public network users at any time.
The NAT server can address the preceding problem by translating the public
IP address and port number to the private IP address and port number based
on the preconfigured mapping.
Address translation entries of the NAT server are configured on the router,
after which the router receives an access request sent from a host on the
public network. The router queries the address translation entry based on the
packet's destination IP address and port number. The router translates the
packet's destination IP address and port number to the private IP address and
port number based on the address translation entry, and sends the packet to
the server on the private network. The destination IP address of the packet
sent by the host on the public network is 200.10.10.5, and the destination port
number is 80. After translation is performed by the router, the destination IP
address of the packet is 192.168.1.1, and its port number is 8080. After
receiving a response packet sent from the server on the private network, the
router queries the address translation entry based on the packet's source IP
address and port number. The router translates the packet's source IP
address and port number to the public IP address and port number based on
the address translation entry, and sends the packet to the host on the public
network. The source of the response packet sent from the server on the private
network is 192.168.1.1, and its port number is 8080. After translation by the
router, the source IP address of the packet is 200.10.10.5, and the port
number is again 80.

Static NAT indicates that a private address is statically bound to a public
address when NAT is performed. The public IP address in static NAT is used
only for translation of the unique and fixed private IP address of a host. The
nat static [protocol {<tcp> | <udp>}] global {<global-address> | current-interface}
[<global-port>] inside {<host-address> [<host-port>]} [vpn-instance
<vpn-instance-name>] [netmask <mask>] [acl <acl-number>] [description
<description>] command is used to create the static NAT entry and define the
parameters for the translation.
The key parameters applied as in the example are the global parameter to
configure the external NAT information, specifically the external address, and
the inside parameter that allows for the internal NAT information to be
configured. In both cases the address and port number can be defined for
translation. The global port specifies the port number of the service provided
for external access. If this parameter is not specified, the value of global-port
is 0. That is, any type of service can be provided. The host port specifies the
service port number provided by the internal server. If this parameter is not
specified, the value of host-port is the same as the value of global-port.
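As a sketch of the command just described, a one-to-one static mapping might be configured under the public-facing interface as follows. The addresses and interface name are illustrative assumptions, not taken from a specific example.

```
interface Serial1/0/0
 nat static global 200.10.10.5 inside 192.168.1.1
```

With no protocol or port parameters specified, any type of service destined for 200.10.10.5 is translated to the internal host 192.168.1.1.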

The configuration of static NAT can be viewed through the display nat static
command. The command displays the network address translation
information with regards to the interface through which the address translation
is performed, the global and inside addresses which are translated along with
the used ports. Where the ports are not defined, a null result will be displayed.
The configuration for translation may be protocol specific to either TCP or
UDP protocol traffic, in which case the Protocol entry will be defined.

The configuration of dynamic network address translation involves
implementation of the nat outbound command. It relies on the prior
configuration of a network access control list that is used to specify a rule to
identify specific traffic to which an event or operation will be applied. The
details regarding access control lists are covered in a later unit. An
association is made between these access control list rules and the NAT
address pool. In this manner, the addresses specified in the access control list
can be translated by using the NAT address pool.
The example demonstrates how the nat outbound command has been
associated with an access control list with an identifier of 2000, allowing traffic
from the 192.168.1.0/24 network range to be translated using an address
group referred to as address-group 1. This group defines a pool range from
200.10.10.11 through 200.10.10.16 from which internal addresses obtain their
translations. The no-pat parameter in the command means that no port
address translation will occur for the addresses in the pool; each host must
therefore translate to a unique global address.

Two display commands enable the detailed information regarding dynamic
address translation to be verified. The display nat address-group <group-index>
command allows the network address translation pool range to be determined.
In addition, the display nat outbound command provides specific details of any
dynamic network address translation configuration applied to a given interface.
In the example it can be seen that interface Serial1/0/0 is associated with an
access control list rule together with address group 1 for address translation
on that interface. The no-pat output confirms that port address translation is
not in effect for this network address translation.

The Easy IP configuration is very similar to that of dynamic
network address translation, relying on the creation of an access control list
rule for defining the address range to which translation is to be performed and
application of the nat outbound command. The main difference is in the
absence of the address group command since no address pool is used in the
configuration of Easy IP. Instead the outbound interface address is used to
represent the external address, in this case that being external address
200.10.10.1 of interface serial1/0/0. Additionally it is necessary that port
address translation be performed, and as such the no-pat parameter cannot
be implemented where an address group does not exist. The nat outbound
2000 represents a binding between the NAT operation and the access control
list rule detailing the address range to which the translation will apply.
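A minimal Easy IP sketch consistent with this description might look as follows; the ACL number and internal network are taken from the example, and the interface is assumed to be the Serial1/0/0 link holding address 200.10.10.1.

```
acl 2000
 rule 5 permit source 192.168.1.0 0.0.0.255
quit
interface Serial1/0/0
 nat outbound 2000
```

No address group is referenced, so the interface's own address is used for translation, with port address translation performed automatically.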

The same display nat outbound command can be used to observe the results
of the nat outbound configuration and verify its correct implementation. The
interface and access control list (ACL) binding can be determined as well as
the interface (in this case) for the outbound translation. The type listing of
easyip makes it clearly understood when Easy IP has been successfully
configured.

Where it is necessary to provide internal access to external users, such as in
the case of a public server which is part of an internal network, the NAT server
configuration can be performed to enable traffic destined for an external
destination address and port number to be translated to an internal
destination address and port number.
The nat server [protocol {<tcp> | <udp>}] global {<global-address> |
current-interface} [<global-port>] inside <host-address> [<host-port>]
[vpn-instance <vpn-instance-name>] [acl <acl-number>] [description
<description>] command enables the external-to-internal translation to be
performed, where protocol identifies a specific protocol (either TCP or UDP
depending on the service) to be permitted for translation, the global address
indicates the external address for translation, and the inside address relates
to the internal server address.
The port number for the external address should be defined, and commonly
relates to a specific service such as http (www) on port 80. As a means of
further improving the overall shielding of internal port numbers from external
threats, an alternative port number can be applied to the inside network and
translated through the same means as is used for address translation.
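Following this syntax, the mapping used in the verification example that follows could be sketched as below. The values come from that example (www is the keyword for port 80); the interface name is an assumption.

```
interface Serial1/0/0
 nat server protocol tcp global 202.10.10.5 www inside 192.168.1.1 8080
```

External requests to 202.10.10.5 on port 80 are translated to the internal server 192.168.1.1 on port 8080, shielding the real service port from external view.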

The display nat server command details the configuration results to verify the
correct implementation of the NAT server. The interface defines the point at
which the translation will occur. The global and inside IP addresses and
associated port numbers can be verified. In this case the global address of
202.10.10.5 with a port number of 80 (www) will be translated to an inside
server address of 192.168.1.1 with a port number of 8080 as additional
security against potential port based attacks. Only TCP traffic that is destined
for this address and port will be translated.

1. The NAT internal server configuration will allow a unique public address to
be associated with a private network server destination, allowing inbound
traffic flow from the external network after translation. Internal users are
able to reach the server location based on the private address of the
server.
2. The PAT feature will perform translation based on the port number as well
   as the IP addresses. It is used as a form of address conservation where
   the number of public addresses available for translation is limited, or
   insufficient to support the number of private addresses that require
   translation.

Thank you
www.huawei.com

~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
A large number of enterprise networks rely on a single network provider
solution that may be in some cases protected by a service level
agreement (SLA), as protection where downtime is likely to be critically
damaging to a company's operation; and also where supporting multiple
networks is unnecessarily costly. Traditional approaches involve failover
solutions using technologies such as ISDN, however newer and more
viable solutions in the form of cellular technologies provide effective
failover measures in the event that primary networks fail. This section
introduces how 2G and 3G cellular network failover solutions can be used
to protect the enterprise network.

Copyright 2014 Huawei Technologies Co., Ltd. All rights reserved.

Page 2

~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:
Explain the application of 2G and 3G networks as an enterprise
failover solution.
Explain the process for implementing cellular interface failover
solutions.

Copyright 2014 Huawei Technologies Co., Ltd. All rights reserved.

Page 3

~~ HUAWEI

Wireless Wide Area Networks (WWAN) have become a central part of the
world's communications infrastructure, allowing for the evolution of the mobile
network, through which various call and data services are becoming increasingly
prominent in everyday life. The WWAN comprises a multitude of
technologies and standards that allow for near ubiquitous voice and data
communication globally. Communication is commonly achieved through a
mobile handset, referred to as the Mobile Station (MS) or as User
Equipment (UE) in 3G (UMTS) and 4G (LTE) networks respectively,
providing an interface to the Radio Access Network via which services are
accessed. It should be clearly noted, however, that although similar in design,
the architectures of 3G and 4G networks are separate, with some level
of technology overlay. Internetworking between the technologies provides for
seamless communication where one network becomes unavailable.
Communication is achieved through a base station that operates as a cell for
a given region and allows voice and data traffic to be transmitted to the core
network for processing.
Communications are handled over a circuit switched gateway (calls) or a
packet switched gateway (data), via which transmission signals carrying
information are used by the core for call and data processing.
The core network handles a range of services and management processes.
The core is made up of components that include home location registers for
maintaining records of all SIM details authorized to use the network, visitor
location registers for tracking location as the user roams between regions,
SIM authentication, identity registration for banning and tracking (typically
stolen) mobile handsets, services such as short message and multimedia
messaging services, voicemail systems, and customer billing.

The evolved support of packet switching in mobile networks has provided a
new avenue for communications with remote workers that may often be
stationed beyond the range of an existing enterprise network and any WLAN
solutions that may be implemented. One of the major concerns has often
been with regard to the speed and services that are capable of being
supported by WWAN solutions. As the evolution of the mobile network continues,
the supported speeds continue to grow, allowing newer services to become
available and enabling true mobility in enterprise business.
The integration of packet switched technologies into the mobile network has
seen the evolution through 2G (GSM), the aptly named 2.5G (GPRS) and
2.75G (EDGE), to 3G (UMTS), for which evolutions in the form of HSPA and
HSPA+ have occurred, the former supporting capacity of up to approximately
14 Mbps and the latter supporting up to 168 Mbps downlink. A continued evolution
toward higher capacity for services has now resulted in 4G (LTE) technologies with
data transmission capability pushing the 1 Gbps mark. This has enabled a
relationship between mobile and enterprise networks to form: remote users
are able to utilize the mobile data networks for access to company resources
and maintain communications through data channels without physical
restriction of location.

Enterprise networks are able to utilize the packet switched capabilities of the
3G network as a method of redundancy through which network failover can be
supported in the event that the main service becomes unavailable. One of the
limitations that exists with Wireless WAN solutions is the associated costs that
limit the use as an always on solution, however where business continuity is
paramount, it can enable services to be maintained during any unforeseen
downtime. Huawei AR2200 series routers are capable of supporting 3G
interfaces that can be implemented as a failover solution, ensuring that data
costs rely on more cost efficient means primarily, but ensuring that a sound
failover solution can be implemented when necessary over increasingly less
favorable legacy technologies such as ISDN.

Support for WWAN solutions on the AR2200 series router is achieved through
the implementation of supported hardware including the 3G-HSPA+7 interface
card that allows interfacing with 2G and 3G networks. The 3G-HSPA+7
interface card provides two 3G antenna interfaces to transmit and receive 3G
service data. One interface is the primary interface, and the other is the
secondary interface with capability for support of 2G GSM/GPRS/EDGE and
3G WCDMA, HSPA and HSPA+ standards. Whip antennas are directly
installed on an AR router and are recommended when a router is desk
installed or wall mounted. Indoor remote antennas have a 3 m long feed line
and are recommended when an AR router is installed in a cabinet or rack.
GSM CS: Upstream (Tx): 9.6 kbit/s; Downstream (Rx): 9.6 kbit/s
GPRS/EDGE: Multi-slot Class 12, Class B (236.8 kbit/s)
WCDMA CS: Upstream (Tx): 64 kbit/s; Downstream (Rx): 64 kbit/s
WCDMA PS: Upstream (Tx): 384 kbit/s; Downstream (Rx): 384 kbit/s
HSPA: Upstream (Tx): 5.76 Mbit/s; Downstream (Rx): 14.4 Mbit/s
HSPA+: Upstream (Tx): 5.76 Mbit/s; Downstream (Rx): 21 Mbit/s

A 3G cellular interface is a physical interface supporting 3G technology. It
provides users with an enterprise-level wireless WAN solution. After
configuring 3G cellular interfaces, voice, video, and data services can be
transmitted over the 3G network, providing a variety of WAN access methods
for enterprises.
The 3G Cellular interface transmits radio signals and supports PPP as the link
layer protocol and an IP network layer. The ip address ppp-negotiate
command is used to allow an IP address to be obtained for the cellular
interface from the remote device in the event where failover may occur. A
parameter profile is created using the profile create <profile-number> {static
<apn> | dynamic} command where the profile number refers to an identifier for
the profile.
The static and dynamic options allow the static assignment or dynamic
allocation of an access point name (APN). When a 3G modem has a
universal subscriber identity module (USIM) installed, the mode wcdma
command can be used to configure a WCDMA network connection mode for
the 3G modem, where the wcdma-precedence parameter allows WCDMA to be
used preferentially over other modes. The 3G modem is connected to the USB
module interface.

The dial control centre (DCC) is implemented in order to allow the link to
activate the cellular connection. The dialer-rule command initiates the
dialer-rule view, where rules are defined to enable IPv4 traffic to be carried over
the interface with which the dialer group is associated. The <number> in the
dialer-rule <number> command refers to the dialer access group number. Under
the cellular interface the dial control centre (DCC) is enabled.
The AR2200 supports two DCC modes: circular DCC (C-DCC) and
resource-shared DCC (RS-DCC). The two modes are applicable to different
scenarios. C-DCC is applicable to medium or large-scale sites that have many
physical links, whereas RS-DCC is applicable to small or medium-scale sites that
have a few physical links but many connected interfaces. The two ends in
communication can adopt different modes.
If C-DCC is enabled on a physical interface, only this interface can initiate
calls to remote ends. If C-DCC is enabled on a dialer interface, one or multiple
interfaces can initiate calls to remote ends. The dialer-group <number>
associates the dialer access group created under the dialer-rule command
with the given interface. The dialer-number <number> configures a dialer
number for calling the remote end.
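Drawing the commands above together, a C-DCC cellular configuration might be sketched as follows. This is a sketch only: the dial string *99# and APN 3GNET are the values stated for the scenario that follows, the profile number is an assumption, and exact command availability depends on the router model and 3G modem installed.

```
dialer-rule
 dialer-rule 1 ip permit
quit
interface Cellular0/0/0
 ip address ppp-negotiate
 dialer enable-circular
 dialer-group 1
 dialer number *99#
 profile create 1 static 3GNET
 mode wcdma wcdma-precedence
```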

NAT is required to allow packets generated with a source address in the
private address range of internal users to be translated to the public address
of the cellular interface. As only one cellular interface address exists, Easy IP
is implemented to allow the range of internal users to make full use of a single
interface address through port address translation.
An ACL is used to identify the private network range for which translation to
the public interface address will occur and be bound to the network address
translation, through the Easy IP nat outbound <acl-number> command. A
default static route is also created to generate a route via the cellular 0/0/0
interface that will initiate the PPP negotiation and allow a public address to be
assigned to the cellular interface.
Ultimately the enterprise intranet uses only one network segment
192.168.1.0/24 and has subscribed to the Wideband Code Division Multiple
Access (WCDMA) service, and the router uses the DCC function to connect
the enterprise to the Internet. The enterprise has obtained APN 3GNET and
dial string *99# from a carrier.
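These elements might be sketched as follows; the internal network and ACL number 3002 match the verification output discussed later, while the route form assumes the outbound interface itself is referenced to trigger dialing.

```
acl 3002
 rule 5 permit source 192.168.1.0 0.0.0.255
quit
interface Cellular0/0/0
 nat outbound 3002
quit
ip route-static 0.0.0.0 0.0.0.0 Cellular0/0/0
```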

Following the successful configuration of the parameters for establishing the


3G network through the circular DCC and PPP negotiation, an IP address will
be assigned to the cellular interface. The results of the established connection
can be verified through the display interface cellular command where the state
of the interface and the negotiated address can be determined. If no IP
address is assigned to the interface, possibly due to no communication
activity on the link or a failure during link negotiation, the system will display
"Internet protocol processing : disabled". The Modem State field is used to
determine whether a 3G modem has been connected to the 3G cellular
interface.

The display nat outbound command can again be used to validate the
configuration of Easy IP in relation to the establishment of the cellular backup
link. The output verifies that the cellular interface is implementing Easy IP for
address translation, as well as providing verification of the external WAN link
address to which internal (private network) addresses are being translated.
Additionally the ACL enables verification of the address range to which the
translation is applicable. The ACL 3002 is implemented in this case and as
defined in the ACL rules, network 192.168.1.0/24 is the address range that is
capable of being translated to the public cellular address.

1. In the event that the primary route fails, a default static route will provide
an alternative route via the cellular interface that will initiate the cellular
connection. The dial control center will support the initiation of the
connection that is negotiated over PPP.

Thank you
www.huawei.com

~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
Many technologies and protocols depend on Access Control Lists (ACL)
for greater management and filtering of traffic as part of security
measures or application requirements. The implementation of ACLs, both
in support of other technologies and as a form of security, needs to be
understood, and as such common forms of ACL solutions are introduced.

Copyright 2014 Huawei Technologies Co., Ltd. All rights reserved.

Page 2

~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:
Describe the applications for ACL in the enterprise network.
Explain the decision making behavior of Access Control Lists

Successfully implement Basic and Advanced Access Control Lists.

Copyright 2014 Huawei Technologies Co., Ltd. All rights reserved.

Page 3

~~ HUAWEI

An Access Control List (ACL) is a mechanism that implements access control
for a system resource by listing, based on specific parameters, the entities that
are permitted to access the resource, as well as the mode of access that is
granted. In general, an ACL can be understood as a means for filtering, and
can be applied to control the flow of traffic as well as identify traffic to which
special operations should be performed.
A common application involves a gateway that may have a forwarding
destination to multiple networks, however may contain an ACL that manages
which traffic can flow to which destination. In the example given, the network
192.168.1.0/24 is generally seen as capable of accessing the external
network, in this case the Internet, whereas hosts that are represented by the
192.168.2.0/24 network are unable to forward traffic in the same manner,
therefore resulting in a transmission failure. In the case of Server A, the
reverse applies with permission for access being granted by the gateway to
network 192.168.2.0/24 but restricted for all hosts that are part of the
192.168.1.0/24 network.

Where filtering is performed based on interesting traffic, there is no general
restriction made; instead, additional operations are likely to be performed
that affect the current data. The example demonstrates a scenario where
inbound data is filtered based on certain criteria such as in this case, the
source IP address, and where an access control list is found to apply to data,
associated actions are taken. Common actions may involve the changing of
parameters in routed IP traffic for protocols such as the route metrics in RIP
and OSPF, and also in initiating encrypted network communications for the
interesting traffic, as is often applied as part of technologies such as Virtual
Private Networks (VPN).

There are three general ACL types defined as part of the ARG3
series: basic, advanced, and Layer 2 access control lists. A basic
ACL matches packets based on information such as source IP addresses,
fragment flags, and time ranges, and is defined by a value in the range of
2000-2999. An advanced ACL provides greater accuracy in
parameter matching, and matches packets based on information such as
source and destination IP addresses, source and destination port numbers,
and protocol types.
Advanced ACLs are associated with a value range from 3000-3999. Lastly, a
Layer 2 ACL matches packets based on Layer 2 information, such as source
MAC addresses, destination MAC addresses, and Layer 2 protocol types.
Traffic is filtered based on rules containing the parameters defined by each
type of ACL.

Access Control Lists work on the principle of ordered rules. Each rule contains
a permit or deny clause. These rules may overlap or conflict. One rule can
contain another rule, but the two rules must be different. The AR2200
supports two types of matching order: configuration order and automatic
order. The configuration order indicates that ACL rules are matched in
ascending order of rule identifiers, while the automatic order follows the depth
first principle that allows more accurate rules to be matched first. The
configuration order is used by default and determines the priorities of the rules
in an ACL based on a rule ID. Rule priorities are as such able to resolve any
conflict between overlapping rules. For each rule ID the ACL will determine
whether the rule applies. If the rule does not apply, the next rule will be
considered. Once a rule match is found, the rule action will be implemented
and the ACL process will cease. If no rule matches the packet, the ACL
takes no action on it.
In the example, packets originating from two networks are subjected to an
ACL within RTA. Packets from networks 172.16.0.0 and 172.17.0.0 will be
assessed based on rule ID (configuration) order by default. Where the rule
discovers a match for the network based on a wildcard mask, the rule will be
applied. For network 172.16.0.0, rule 15 will match any packets with the
address of 172.16.0.X, where X may refer to any value in the octet. No
specific rule matching the network 172.17.0.0 is found in the access control
list, so that traffic would not otherwise be subjected to the ACL process;
however, in the interests of good ACL design practice, a catch-all rule has
been defined in rule 20 to ensure that all networks for which there is no
specifically defined rule are explicitly permitted to be forwarded.
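The rules described for RTA might be sketched as below. Rule 15's wildcard and the catch-all rule 20 follow the description above; whether rule 15 permits or denies is not stated, so permit is assumed for illustration.

```
acl 2000
 rule 15 permit source 172.16.0.0 0.0.0.255
 rule 20 permit
```

Under the default configuration matching order, rules are evaluated in ascending rule-ID order, so rule 15 is tested before the catch-all rule 20.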

The creation of a basic access control list requires that the administrator first
identify the traffic source to which the ACL is to apply. In the example given,
this refers to the source locations containing an IP address in the
192.168.1.0/24 range for which all packets containing a source IP address in
this range will be discarded by the gateway. In the case of hosts that make up
the 192.168.2.0/24 network range, traffic is permitted and no further action is
taken for these packets. The basic ACL is applied to the interface Gigabit
Ethernet 0/0/0 in the outbound direction, therefore only packets that meet both
the interface and direction criteria will be subjected to ACL processing.
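A sketch of this basic ACL and its outbound application follows. The rule contents come from the scenario described; the traffic-filter binding shown is one common way of applying an ACL to an interface on Huawei routers, and should be verified against the device documentation.

```
acl 2000
 rule 5 deny source 192.168.1.0 0.0.0.255
 rule 10 permit source 192.168.2.0 0.0.0.255
quit
interface GigabitEthernet0/0/0
 traffic-filter outbound acl 2000
```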

The validation of the configured basic ACL can be achieved through the
display acl <acl-number> where the acl-number refers to the basic ACL
number that has been assigned to the configured ACL. The resulting output
confirms the rules that have been created to deny (drop) any IP packets with
the source IP address in the range 192.168.1.0/24 and permit addressing in
the range 192.168.2.0/24.
It should also be noted that each rule is automatically assigned a rule number
as part of the access control list creation. The rule number defines the order in
which the rules are processed and set in increments of 5 by default in Huawei
ARG3 series routers. There is an ACL step between rule numbers. For
example, if an ACL step is set to 5, rules are numbered 5, 10, 15, and so on. If
an ACL step is set to 2 and rule numbers are configured to be automatically
generated, the system automatically generates rule IDs starting from 2. The
step makes it possible to add a new rule between existing rules. It is possible
to configure the rule number as part of the basic ACL where required.

Advanced access control lists enable filtering based on multiple parameters to
support a more detailed selection process. While a basic ACL
provides filtering based on the source IP address, advanced ACLs are capable
of filtering based on the source and destination IP, source and destination port
numbers, protocols of both the network and transport layer and parameters
found within each layer such as IP traffic classifiers and TCP flag values
(SYN|ACK|FIN etc).
An advanced ACL is defined by an ACL value in the range of 3000-3999 as
displayed in the example, for which rules are defined to specify restriction of
TCP based packets that originate from all source addresses in the range of
192.168.1.1 through to 192.168.1.255 where the destination IP is 172.16.10.1
and the destination port is port 21. A similar rule follows to define restriction of
all IP based packets originating from sources in the 192.168.2.0/24 range
from reaching the single destination of 172.16.10.2. A catch all rule may
generally be applied to ensure that all other traffic is processed by the ACL,
generally through a permit or deny statement for all IP based packets.
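The three rules described could be sketched as below, using the values from the example; the rule numbering follows the default step of 5.

```
acl 3000
 rule 5 deny tcp source 192.168.1.0 0.0.0.255 destination 172.16.10.1 0 destination-port eq 21
 rule 10 deny ip source 192.168.2.0 0.0.0.255 destination 172.16.10.2 0
 rule 15 permit ip
```

The wildcard 0 on a destination address matches that single host, while 0.0.0.255 matches the whole /24 source range.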

The validation of the configured advanced ACL can be achieved through the
display acl <acl-number> command, where the acl-number refers to the
advanced ACL number that has been assigned to the configured ACL. The
resulting output confirms that three rules have been created: one to deny any
TCP packets with a source IP address in the range 192.168.1.0/24 and
destined for port 21 (ftp) on 172.16.10.1, one to deny any IP packets with a
source address in the range 192.168.2.0/24 from reaching the destination IP
address 172.16.10.2, and one permitting all other IP traffic.

ACLs can also be applied to the Network Address Translation (NAT) operation
to allow filtering of hosts based on IP addresses, determining which internal
networks are translated via which specific external address pool should
multiple pools exist. This may occur where an enterprise network subscribes
to connections from multiple service providers, and various internal users
belonging to different networks or sub-networks are to be translated and
forwarded based on a specific address group, potentially over alternative
router uplink interfaces to the different service provider networks.
The example recreates a simplified version of this scenario, where hosts within the internal network are filtered based on basic ACL rules, for which a dynamic NAT solution is applied that allows translation to a given public address of a certain address group. In particular, hosts originating from the 192.168.1.0/24 network will be translated using public addresses in the pool associated with address group 1, while hosts in the 192.168.2.0/24 network will be translated using the pool associated with address group 2.
The nat outbound <acl-number> address-group <address-group number>
command is implemented under the Gigabit Ethernet interface view to bind
NAT in the outbound direction with the ACL, and the related IP address range
is referenced by the specified address group.
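A sketch of this configuration follows; the public address ranges, ACL numbers and interface are illustrative assumptions, not values from the example:

```
[RTA] nat address-group 1 200.10.10.1 200.10.10.4
[RTA] nat address-group 2 200.10.10.5 200.10.10.8
[RTA] acl 2000
[RTA-acl-basic-2000] rule permit source 192.168.1.0 0.0.0.255
[RTA-acl-basic-2000] quit
[RTA] acl 2001
[RTA-acl-basic-2001] rule permit source 192.168.2.0 0.0.0.255
[RTA-acl-basic-2001] quit
[RTA] interface GigabitEthernet 0/0/1
[RTA-GigabitEthernet0/0/1] nat outbound 2000 address-group 1
[RTA-GigabitEthernet0/0/1] nat outbound 2001 address-group 2
```

Each nat outbound statement binds one basic ACL to one address group, so hosts matched by ACL 2000 are translated from pool 1 and hosts matched by ACL 2001 from pool 2.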

1. Advanced ACLs are capable of filtering based on the source and destination IP addresses, source and destination port numbers, network and transport layer protocols, and parameters found within each layer, such as IP traffic classifiers and TCP flag values (SYN, ACK, FIN, etc.).
2. Once a rule match is found in the access control list for a tested condition, the rule action is implemented and the remaining ACL rules are not processed.

---

~ Foreword
AAA defines a security architecture that is comprised of three functions
referred to as Authentication, Authorization and Accounting. Each of
these functions represents a modular component which can be applied
as components of the security framework implemented by an enterprise,
and often managed through the use of client/server based protocols such
as RADIUS and HWTACACS. Implementation of the AAA architecture as
a solution for enhanced functionality is introduced to reinforce the overall
security framework of the enterprise network.


Objectives
Upon completion of this section, trainees will be able to:

Describe the schemes of the AAA security architecture.

Successfully configure Authentication and Authorization schemes.


Authentication, Authorization, and Accounting (AAA) is a technology used to check whether a user has permission to access a network, authorize exactly what a user is allowed to access, and record the network resources used by a user. VRP is capable of supporting AAA authentication and authorization services locally within the ARG3 series of routers, in which case the router is commonly referred to as a Network Access Server (NAS); however, accounting services are generally supported through an external AAA accounting server. The example here demonstrates how users that are considered part of the Huawei domain are able to gain access to resources located within the displayed destination network. The Network Access Server (NAS) operates as the gateway device that may itself perform authentication and authorization of users, or support the AAA server's authentication and authorization of users. In the case of the AAA server, those users that are authenticated and authorized to access the destination network may also initiate accounting within the AAA server for the duration of a user's active session.

AAA supports three authentication modes. Non-authentication completely
trusts users and does not check their validity. This is seldom used for obvious
security reasons. Local authentication configures user information, including
the user name, password, and attributes of local users, on a Network Access
Server (NAS). Local authentication has advantages such as fast processing
and low operation costs. The disadvantage of local authentication is the
limited information storage because of the hardware. Remote authentication
configures user information including the user name, password, and attributes
on the authentication server. AAA can remotely authenticate users using the
Remote Authentication Dial In User Service (RADIUS) protocol or the Huawei
Terminal Access Controller Access Control System (HWTACACS) protocol. As
the client, the NAS communicates with the RADIUS or HWTACACS server.
If several authentication modes are used in an authentication scheme, these authentication modes take effect in the sequence in which they were configured. If remote authentication was configured before local
authentication and if a login account exists on the local device but is
unavailable on the remote server, the AR2200 considers the user using this
account as having failed to be authenticated by the remote authentication,
and therefore local authentication is not performed. Local authentication
would be used only when the remote authentication server did not respond. If
non-authentication is configured, it must be configured as the last mode to
take effect.

The AAA authorization function is used to determine the permission for users to gain access to specific networks or devices; as such, AAA supports various authorization modes. In non-authorization mode, users are not authorized.
Local authorization however authorizes users according to the related
attributes of the local user accounts configured on the NAS. Alternatively,
HWTACACS can be used to authorize users through a TACACS server.
An If-authenticated authorization mode can be used where users are
considered authorized in the event that those users are able to be
authenticated in either the local or remote authentication mode. RADIUS
authorization authorizes users after they are authenticated using a RADIUS
authentication server. Authentication and authorization of the RADIUS
protocol are bound together, so RADIUS cannot be used to perform only
authorization. If multiple authorization modes are configured in an
authorization scheme, authorization is performed in the sequence in which the
configuration modes were configured. If configured, non-authorization must be
the last mode to take effect.

The accounting process can be used to monitor the activity and usage of authorized users who have gained access to network resources. AAA accounting supports two specific accounting modes. Non-accounting provides free services for users without keeping any records of user activity.
Remote accounting on the other hand supports accounting using the RADIUS
server or the HWTACACS server. These servers must be used in order to
support accounting due to the requirement for additional storage capacity
necessary to store information regarding access and activity logs of each
authorized user. The example demonstrates a very general representation of
some of the typical information that is commonly recorded within user
accounting logs.

The device uses domains to manage users. Authentication, authorization, and
accounting schemes can be applied to a domain so that the device can
authenticate, authorize, or charge users in the domain using the schemes.
Each user of the device belongs to a domain. The domain to which a user
belongs is determined by the character string suffixed to the domain name
delimiter that can be @, |, or %.
For example, if the user name is user@huawei, the user belongs to the
huawei domain. If the user name does not contain an @, the user belongs to
the default domain named default in the system. The device has two default
domains: default (global default domain for common access users) and
default_admin (global default domain for administrators). The two domains
can be modified but cannot be deleted.
If the domain of an access user cannot be obtained, the default domain is
used. The default domain is used for access users such as NAC access
users. Local authentication is performed by default for users in this domain.
The default_admin domain is used for administrators such as the
administrators who log in using HTTP, SSH, Telnet, FTP, and terminals. Local
authentication is performed by default for users in this domain. The device
supports a maximum of 32 domains, including the two default domains.

The AR2200 router can be used as a Network Access Server (NAS) in order
to implement authentication and authorization schemes. The example
demonstrates the typical process that is necessary in order to successfully
implement local AAA. Users for authentication must be created using the local-user <user-name> password [ cipher | simple ] <password> privilege level <level> command. This command specifies a user name. If the user-name contains a domain name delimiter such as @, the string before the @ is the user name and the string after the @ is the domain name. If the value does not contain an @, the entire character string is the user name and the domain name is the default domain.
An authentication scheme is created in order to authenticate users and must
be created before additional authentication-relevant configuration can be
performed. The authentication scheme must be defined as either local, radius,
hwtacacs or none. With the exception of the none parameter, the authentication modes can be listed in the order in which authentication is to be attempted; for example, should the authentication-mode hwtacacs local command be used and HWTACACS authentication fail, the AR2200 router will start local authentication. An authorization scheme must also be created to authorize users (except in the case of RADIUS server support), by defining the authorization-mode. The authorization-mode command supports the hwtacacs, local, if-authenticated and none authorization modes.
The domain <domain-name> command is used to create a new domain, and
the implementation of the authentication-scheme <authentication-scheme>
and the authorization-scheme <authorization-scheme> under the domain view
will apply the authentication and authorization schemes to that domain.
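The steps above can be sketched as the following local AAA configuration; the user name, password, scheme names and domain name are illustrative assumptions:

```
[RTA] aaa
[RTA-aaa] local-user user1@huawei password cipher Huawei123 privilege level 3
[RTA-aaa] authentication-scheme auth1
[RTA-aaa-authen-auth1] authentication-mode local
[RTA-aaa-authen-auth1] quit
[RTA-aaa] authorization-scheme author1
[RTA-aaa-author-author1] authorization-mode local
[RTA-aaa-author-author1] quit
[RTA-aaa] domain huawei
[RTA-aaa-domain-huawei] authentication-scheme auth1
[RTA-aaa-domain-huawei] authorization-scheme author1
```

Because the user name carries the @huawei suffix, the user is associated with the huawei domain and is therefore subject to the schemes applied under that domain view.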

The configuration of the local AAA domain and schemes for both
authentication and authorization can be verified through the display domain or
display domain name <domain-name> commands. Using the display domain
command provides brief information regarding all domains that have been
created, including the domain name and a domain index that is used to
reference each created domain.
The display domain name <domain-name> command provides specific configuration details in reference to the domain defined under the domain-name parameter. Along with the domain-name is the domain-state, which presents as either Active or Block, where Block refers to a domain that is in the blocking state and prevents users in this domain from being able to log in. This is implemented through an optional state [ active | block ] command implemented under the AAA domain view. A domain is in an active state after being created, by default.
The authentication-scheme-name associates the created authentication scheme with the domain; the same applies for the authorization-scheme. The
accounting-scheme is not configured locally and therefore an accounting
scheme has not been associated with the created domain, and as such the
default accounting scheme is listed for the domain by default. In the event
that non-local (i.e. RADIUS or HWTACACS) configuration is implemented to
support AAA services, these will be associated with the domain under the
server-template fields.

1. The configuration of VRP in local mode supports both authentication and authorization schemes; the accounting scheme requires the support of remote management through a HWTACACS or RADIUS server.
2. If a user is created without defining the domain to which the user belongs,
the user will be automatically associated with the default domain, named
default.

---

~ Foreword
Early TCP/IP protocol development did very little to ensure the security of communications between peering devices. As networks evolved, so did the need for greater protection of the data transmitted. Solutions for data protection were developed, from which IPsec emerged as a security architecture for the implementation of confidentiality, integrity and data origin authentication, primarily through the support of underlying protocols. IPsec remains a key framework in the protection of data, and has seen an integration of IPsec components adopted into the next generation of TCP/IP standards.


Objectives
Upon completion of this section, trainees will be able to:

Explain the basic principles of the IPsec security architecture.

Configure IPsec peering between two devices.


Internet Protocol security (IPsec) is a protocol suite defined by the Internet
Engineering Task Force (IETF) for securing Internet Protocol (IP)
communication by authenticating and/or encrypting each IP packet of a
communication session. Two communicating parties can encrypt data and/or
authenticate the data originating at the IP layer to ensure data confidentiality,
integrity and service availability.
Confidentiality is the security service that protects against the disclosure of typically application level data, but also encompasses the disclosure of communication, which can also be a concern in some circumstances. Traffic flow
confidentiality is the service that is applied to conceal source and destination
addresses, message length, or frequency of communication.
IPsec supports two forms of integrity: connectionless and a form of partial sequence integrity. Connectionless integrity is a service that detects modification of an individual IP datagram, without regard to the ordering of the datagram in a stream of traffic. The form of partial sequence integrity offered in IPsec is referred to as anti-replay integrity, and it detects the arrival of duplicate IP datagrams (within a constrained window). This is in contrast to connection-oriented integrity, which imposes more stringent sequencing requirements on traffic, e.g., to be able to detect lost or re-ordered messages. Although authentication and integrity services are often cited separately, in practice they are intimately connected and almost always offered in tandem.
Availability, when viewed as a security service, addresses the security
concerns engendered by attacks against networks that deny or degrade
service. For example, in the IPsec context, the use of anti-replay mechanisms
in the underlying protocols, specifically AH and ESP, support availability to
prevent attacks that malicious users initiate by resending captured packets.

IPsec uses two protocols to provide traffic security, Authentication Header
(AH) and Encapsulating Security Payload (ESP). The IP Authentication
Header (AH) provides connectionless integrity, data origin authentication, and
an optional anti-replay service. The Encapsulating Security Payload (ESP)
protocol may provide confidentiality (encryption), and limited traffic flow
confidentiality.
ESP optionally provides confidentiality for traffic. The strength of the
confidentiality service depends in part on the encryption algorithm employed,
for which three main forms of encryption algorithm are possible. ESP may
also optionally provide authentication, however the scope of the
authentication offered by ESP is narrower than for AH since the external IP
header (tunnel mode) and the ESP header is/are not protected under ESP
authentication.
Data Encryption Standard with Cipher Block Chaining (DES-CBC) is a
symmetric secret key algorithm that uses a key size of 64-bits, but is
commonly known as a 56-bit key as the key has 56 significant bits; the least
significant bit in every byte is the parity bit. Its use today is limited due to the
capability for computational power to perform brute force attacks within a
significantly limited time frame. Triple Data Encryption Standard (3DES) is a 168-bit encryption algorithm that involves an iteration of the DES process with the addition of an initialization vector (IV), a value used to disguise patterns in encrypted data to improve the overall confidentiality of data. The Advanced Encryption Standard (AES) supports up to 256-bit encryption, employing again the CBC and IV for additional strength of encryption. As encryption becomes more complex, so does the processing time needed to achieve it.

AH supports an MD5 algorithm that takes as input a message of arbitrary
length and produces as output a 128-bit "fingerprint" or "message digest" of
that input, while the SHA-1 produces a 160-bit message digest output. SHA-2
represents a next generation of authentication algorithms to extend security to
authentication following discovery of certain weaknesses within SHA-1.
The term SHA-2 is not officially defined but is usually used to refer to the
collection of the algorithms SHA-224, SHA-256, SHA-384, and SHA-512. VRP
currently supported by the AR2200 however provides support only to
SHA-256, SHA-384 and SHA-512, where the value represents the bit digest
length.

A Security Association (SA) denotes a form of connection that is established
in a single direction through which security services relevant to the traffic are
defined. Security services are attributed to an SA in the form of either AH, or
ESP, but not both. If both AH and ESP based protection is applied to a traffic
stream, then two (or more) SAs are created to attribute protection to the traffic
stream. In order to secure bi-directional communication between two hosts, or
two security gateways as shown in the example, two Security Associations
(one in each direction) are required.
IPsec SAs are established in either a manual mode or Internet Key Exchange
(IKE) negotiation mode. Establishing SAs in manual mode requires all information, such as the parameters displayed in the example, to be configured manually. The SAs established in manual mode, however, will never
age. Establishing SAs using IKE negotiation mode is simpler as IKE
negotiation information needs to be configured only on two peers and SAs are
created and maintained by means of IKE negotiation, for which the complexity
exists mainly in the IKE automated negotiation process itself, and as such will
not be covered in detail here. The SA established in IKE negotiation has a
time-based or traffic-based lifetime. When the specified time or traffic volume
is reached, an SA becomes invalid. When the SA is about to expire, IKE will
negotiate a new SA.

A transport mode SA is a security association between two hosts. In IPv4, a
transport mode security protocol header appears immediately after the IP
header, any options, and before any higher layer protocols (e.g., TCP or
UDP). In transport mode the protocols provide protection primarily for upper
layer protocols. An AH or ESP header is inserted between the IP header and
the transport layer protocol header. In the case of ESP, a transport mode SA
provides security services only for these higher layer protocols, not for the IP
header or any extension headers preceding the ESP header. In the case of
AH, the protection is also extended to selected portions of the IP header, as
well as selected options.

In the IPsec context, using ESP in tunnel mode, especially at a security
gateway, can provide some level of traffic flow confidentiality. The outer IP
header source address and destination address identify the endpoints of the
tunnel. The inner IP header source and destination addresses identify the
original sender and recipient of the datagram (from the perspective of this tunnel), respectively.
The inner IP header is not changed except to decrement the TTL, and
remains unchanged during its delivery to the tunnel exit point. No change
occurs to IP options or extension headers in the inner header during delivery
of the encapsulated datagram through the tunnel. An AH or ESP header is
inserted before the original IP header, and a new IP header is inserted before
the AH or ESP header.
The example given shows the IPsec tunnel mode during TCP based packet
transmission. If AH is also applied to a packet, it is applied to the ESP header,
Data Payload, ESP trailer, and ESP Authentication Data (ICV), if fields are
present.

The implementation of an IPsec Virtual Private Network (VPN) requires that
network layer reachability between the initiator and the recipient is verified to
ensure non-IPsec communication is possible before building any IPsec VPN.
Not all traffic is required to be subjected to rules of integrity or confidentiality,
and therefore traffic filtering needs to be applied to the data flow to identify the
interesting traffic for which IPsec will be responsible. A data flow is a collection
of traffic and can be identified by the source address/mask, destination
address/mask, protocol number, source port number, and destination port
number. When using VRP, data flows are defined using ACL groups. A data
flow can be a single TCP connection between two hosts or all traffic between
two subnets. The first step to configure IPsec involves defining these data
flows.
An IPsec proposal defines the security protocol, authentication algorithm,
encryption algorithm, and encapsulation mode for the data flows to be
protected. Security protocols AH and ESP can be used independently or
together. AH supports MD5, SHA-1 and SHA-2 authentication algorithms.
ESP supports three authentication algorithms (MD5, SHA-1 and SHA-2) and
three encryption algorithms (DES, 3DES, and AES). IPsec also supports both
transport and tunnel encapsulation modes. To transmit the same data flow,
peers on both ends of a security tunnel must use the same security protocol,
authentication algorithm, encryption algorithm, and encapsulation mode. To
implement IPsec between two gateways, it is advised to use tunnel mode to
shield the actual source and destination IP addresses used in communication.

An IPsec policy defines the security protocol, authentication algorithm,
encryption algorithm, and encapsulation mode for data flows by referencing
an IPsec proposal. The name and sequence number uniquely identify an
IPsec policy. IPsec policies are classified into IPsec policies used for manually
establishing SAs, and IPsec policies used for establishing SAs through IKE
negotiation.
To configure an IPsec policy used for manually establishing SAs, set
parameters such as the key and SPI. If the tunnel mode is configured, the IP
addresses for two endpoints of a security tunnel also must be set. When
configuring an IPsec policy used for establishing SAs through IKE negotiation,
parameters such as the key and SPI do not need to be set as they are
generated through IKE negotiation automatically. IPsec policies of the same
name but different sequence numbers comprise an IPsec policy group.
In an IPsec policy group, an IPsec policy with a smaller sequence number has
a higher priority. After an IPsec policy group is applied to an interface, all
IPsec policies in the group are applied to the interface. This enables different
SAs to be used for different data flows.

Communication at the network layer represents the initial prerequisite of any
IPsec VPN connection establishment. This is supported in this example
through the creation of a static route pointing to the next hop address of RTB.
It is necessary to implement static routes in both directions to ensure
bidirectional communication between networks. An advanced ACL is created
to identify the interesting traffic for which the IPsec VPN will be initiated,
whereby the advanced ACL is capable of filtering based on specific
parameters and will choose to either Discard, Bypass or Protect the filtered
data.
The ipsec proposal command creates an IPsec proposal and enters the IPsec
proposal view. When configuring an IPsec policy, it is necessary to reference
an IPsec proposal for specifying the security protocol, encryption algorithm,
authentication algorithm, and encapsulation mode used by both ends of the
IPsec tunnel. A new IPsec proposal created using the ipsec proposal
command by default uses the ESP protocol, DES encryption algorithm, MD5
authentication algorithm, and tunnel encapsulation mode. These can be
redefined using various commands under the IPsec proposal view. The transform [ ah | ah-esp | esp ] command enables the current transform set to be altered, while the encapsulation-mode [ transport | tunnel ] command can alter the means used to encapsulate packets.
The authentication algorithm used by the ESP protocol can be set using the esp authentication-algorithm [ md5 | sha1 | sha2-256 | sha2-384 | sha2-512 ] command, with the esp encryption-algorithm [ des | 3des | aes-128 | aes-192 | aes-256 ] command for ESP encryption assignment. For the AH protocol, the ah authentication-algorithm [ md5 | sha1 | sha2-256 | sha2-384 | sha2-512 ] command enables AH-based authentication algorithm assignment.
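Drawing on the commands above, a proposal might be configured as in the following sketch; the proposal name tran1 and the specific algorithm choices are illustrative assumptions:

```
[RTA] ipsec proposal tran1
[RTA-ipsec-proposal-tran1] transform esp
[RTA-ipsec-proposal-tran1] encapsulation-mode tunnel
[RTA-ipsec-proposal-tran1] esp authentication-algorithm sha2-256
[RTA-ipsec-proposal-tran1] esp encryption-algorithm aes-128
```

The peer gateway would require a proposal with matching protocol, algorithms and encapsulation mode for the tunnel to establish.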

Verification of the parameters for the IPsec proposal can be achieved through
the display ipsec proposal [name <proposal-name>] command. As a result it
is possible to view select proposals or all proposals that have been created.
The name defines the name given to the IPsec proposal created, while the
encapsulation mode lists the current mode that is being used for this given
proposal, which may either be transport or tunnel. Transform refers to the
algorithms being applied where this may be AH, ESP or a combination of AH
and ESP transforms. Depending on the transform defined, the associated
algorithms will be listed.

The policy-name and seq-number parameters identify an IPsec policy.
Multiple IPsec policies with the same IPsec policy name constitute an IPsec
policy group. An IPsec policy group contains a maximum of 16 IPsec policies,
and an IPsec policy with the smallest sequence number has the highest
priority. After an IPsec policy group is applied to an interface, all IPsec policies
in the group are applied to the interface to protect different data flows. Along
with the policy-name and seq-number, the policy must define the creation of
SA either manually or through IKE negotiation.
If IKE negotiation is used, parameters must be set using the ipsec-policy-template command, whereas when set manually, parameters are configured as
shown in the example. The created ACL is bound with the policy along with
the created proposal. The tunnel local and tunnel remote commands define
the interface at which the IPsec tunnel will establish and terminate. A source
parameter index (SPI) is defined both inbound and outbound.
The inbound SPI on the local end must be the same as the outbound SPI on
the remote end. The outbound SPI on the local end must be the same as the
inbound SPI on the remote end. This number is used to reference each SA,
bearing in mind that an SA applies in only one direction therefore requiring
SPI in both directions to be specified. Finally authentication keys must be
defined again both inbound and outbound. The outbound authentication key
on the local end must be the same as the inbound authentication key on the
remote end. Where IKE negotiation is used, both the SPI and authentication
keys are automatically negotiated.
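Combining the parameters described above, a manually established policy on RTA might be sketched as follows; the policy name, ACL number, tunnel addresses, SPI values and key string are all illustrative assumptions:

```
[RTA] ipsec policy P1 10 manual
[RTA-ipsec-policy-manual-P1-10] security acl 3001
[RTA-ipsec-policy-manual-P1-10] proposal tran1
[RTA-ipsec-policy-manual-P1-10] tunnel local 200.1.1.1
[RTA-ipsec-policy-manual-P1-10] tunnel remote 200.2.2.2
[RTA-ipsec-policy-manual-P1-10] sa spi outbound esp 12345
[RTA-ipsec-policy-manual-P1-10] sa spi inbound esp 54321
[RTA-ipsec-policy-manual-P1-10] sa string-key outbound esp huawei
[RTA-ipsec-policy-manual-P1-10] sa string-key inbound esp huawei
```

On RTB the values would be mirrored: RTB's outbound SPI would be 54321 and its inbound SPI 12345, with the key strings matched in the opposing directions.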

Following the creation of a policy and the binding of the proposal and ACL to
the policy, the policy itself can be applied to the physical interface upon which
traffic will be subjected to IPsec processing. In the example given an IPsec
tunnel is to be established between interface Gigabit Ethernet 0/0/1 of RTA
and Gigabit Ethernet 0/0/1 of RTB. Under the interface view, the ipsec policy
command is used to apply the created policy to the interface.
One interface can use only one IPsec policy group. To apply a new IPsec policy group to the interface, the previous policy group must first be removed. An IPsec policy group that establishes an SA through IKE
negotiation can be applied to multiple interfaces, whereas an IPsec policy
group that establishes an SA in manual mode can be applied only to one
interface. When sending a packet from an interface, the AR2200 matches the
packet with each IPsec policy in the IPsec policy group. If the packet matches
an IPsec policy, the AR2200 encapsulates the packet according to the policy.
If not, the packet is sent without encapsulation.
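Assuming a policy group named P1 has been created as described earlier, applying it to the outbound interface is a single step; the interface number here is an illustrative assumption:

```
[RTA] interface GigabitEthernet 0/0/1
[RTA-GigabitEthernet0/0/1] ipsec policy P1
```

From this point, traffic leaving the interface is matched against each policy in group P1 and encapsulated only where a match is found.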

Using the display ipsec policy [ brief | name policy-name [ seq-number ] ]
command, a specific IPsec policy or all IPsec policies can be viewed. The
parameters of the IPsec policy displayed include the policy name and the
sequence number of the policy, typically used to distinguish the priority of
policies in a group where a lower sequence number represents a higher
priority. The policy bindings such as the name of the proposal and ACL that is
used to filter the traffic in a policy is displayed along with the local and remote
tunnel addresses for the IPsec policy.

Specific parameter settings are also defined within the display ipsec policy
command for SA in both the inbound and outbound directions. The SPI
references the SA in each direction which can be used as part of the
troubleshooting process to ensure that the SPI on the outbound local interface
matches to that set for the inbound remote interface on the peer interface
(that matches the tunnel remote address). The key string will also be defined
for both the inbound and outbound SA to which the same local and remote
configuration assessment can be performed.

1. A security association can be understood as a unidirectional connection or management construct that defines the services that will be used in order to negotiate the connection's establishment. The connection relies on matching SAs in both directions in order to successfully establish secure bidirectional communication.
2. All traffic that is destined to be forwarded via the outbound interface on
which IPsec is established will firstly be subjected to filtering via a Security
Policy Database (SPD). This is generally in the form of an ACL that will
determine whether the traffic should be either dropped (Discard),
forwarded normally (Bypass), or forwarded over a secure IPsec boundary
(Protect).

---

~ Foreword
Limitations within IPsec VPN restrict the ability for routes to be carried between disparate site-to-site based networks, allowing only for static route solutions. GRE provides a mechanism for the encapsulation of packets of one protocol into packets of another protocol. The application of GRE is as such implemented as a primary solution to the IPsec VPN limitations, for which knowledge of GRE is necessary to complement the existing knowledge of IPsec VPN.


Objectives
Upon completion of this section, trainees will be able to:
Explain how GRE can be applied to provide various solutions.
Describe the principle behavior of GRE.

Configure GRE over IPsec.


Cost-effective solutions for private internetwork communication have utilized
technologies such as IPsec to enable shared public networks to act as a
medium for bridging the gap between remote offices. This solution provides
secure transmission to IP packet payloads but does not enable routing
protocols such as RIP and OSPF to be carried between the two tunnel
endpoints. GRE was originally developed as a means of supporting the
transmission of protocols such as IPX through IP networks, however over time
the primary role of GRE has transitioned towards the support of routing based
protocol tunneling.

GRE can be applied within an autonomous system to overcome certain
limitations found in IGP protocols. RIP routes offer for example a hop limit of
15 hops beyond which all routes are subjected to the count to infinity rule. The
application of GRE in such environments will allow for a tunnel to be created
over which routing updates can be carried, allowing the scale of such distance
vector routing to be increased significantly where necessary.

One of the major limitations of GRE is its lack of ability to secure packets as
they are carried across a public network. The traffic carried within a GRE
tunnel is not subjected to encryption, since such features are not supported by
GRE. As a result, IPsec solutions are employed together with GRE, enabling
GRE tunnels to be established within IPsec tunnels and extending features
including integrity and confidentiality to GRE.

After a packet is received on the interface connected to the private network, it
is delivered to the private network protocol module for processing. The
private network protocol module checks the destination address field in the
private network packet header, searches for the outgoing interface in the
routing table or forwarding table of the private network, and determines how
to route this packet. If the outgoing interface is the tunnel interface, the
private network protocol module sends the packet to the tunnel module.
After receiving the packet, the tunnel module first encapsulates the packet
according to the passenger protocol type and the checksum parameters
configured for the current GRE tunnel; that is, the tunnel module adds a GRE
header to the packet. It next adds an IP header according to the
configuration: the source address of this IP header is the source address of
the tunnel, and the destination address of the IP header is the destination
address of the tunnel. The packet is then delivered to the IP module, which
searches for the appropriate outgoing interface in the public network routing
table based on the destination address in the IP header, and forwards the
packet. The encapsulated packet is then transmitted over the public network.
After the packet is received on the interface connected to the public network,
the receiving device analyzes the IP header, determines the destination of
the packet, and discovers that the Protocol field is 47, indicating that the
payload protocol is GRE.

The packet is then delivered to the GRE module for processing. The GRE
module removes the IP header and the GRE header, and learns from the
Protocol Type field in the GRE header that the passenger protocol is the
protocol run on the private network; the GRE module then delivers the
packet to this protocol, which handles it as it would any ordinary packet.

Key authentication refers to authentication on a tunnel interface. This
security mechanism prevents the tunnel interface from incorrectly identifying
and accepting packets from other devices. If the K bit in the GRE header is
set to 1, the Key field is inserted into the GRE header, and both the receiver
and the sender perform key authentication on the tunnel.
The Key field contains a four-byte value, which is inserted into the GRE
header during packet encapsulation. The Key field is used to identify the traffic
in the tunnel; packets of the same traffic flow have the same Key field. When
packets are decapsulated, the tunnel end identifies packets of the same
traffic flow according to the Key field. Authentication passes only when the
Key fields set on both ends of the tunnel are consistent; otherwise the
packets are discarded.
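As an illustrative sketch, the Key field might be configured on a Huawei VRP tunnel interface as follows; the interface number and key value are hypothetical, and the same key must be set on both ends of the tunnel:

```
interface Tunnel0/0/1
 tunnel-protocol gre
 gre key 12345
```

If the keys configured at the two ends differ, received packets fail authentication and are discarded.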

The keepalive detection function is used to detect at any time whether the
tunnel link is in the keepalive state, that is, whether the peer of the tunnel is
reachable. If the peer is not reachable, the tunnel is disconnected to prevent
black holes from occurring. After the keepalive function is enabled, the local
end of the GRE tunnel periodically sends a keepalive detection packet to the
peer. If the peer is reachable, the local end receives a reply packet from the
peer. The keepalive function operates as long as at least one end is
configured with it; the peer does not need to have the keepalive function
enabled. If the peer receives a keepalive detection packet, it sends a reply
packet to the local end, irrespective of whether it is configured with the
keepalive function.
After the keepalive function is enabled, the source of a GRE tunnel creates a
counter, periodically sends the keepalive detection packets, and counts the
number of detection packets. The number increments by one after each
detection packet is sent.
If the source receives a reply packet before the counter value reaches the
preset value, the source considers that the peer is reachable. If the source
does not receive a reply packet before the counter reaches the preset value
that defines the number of retries, the source considers that the peer is
unreachable and will proceed to close the tunnel connection.

The establishment of GRE initially requires the creation of a tunnel interface.
After creating a tunnel interface, specify GRE as the encapsulation type, set
the tunnel source address or source interface, and set the tunnel destination
address. In addition, set the tunnel interface network address so that the
tunnel can support routes. Routes for a tunnel must be available on both the
source and destination devices so that packets encapsulated with GRE can
be forwarded correctly. A route passing through tunnel interfaces can be a
static route or a dynamic route.
One other key point that should be taken into account during the configuration
phase is the MTU. GRE results in an additional 24 bytes of overhead being
applied to the existing headers and payload, creating the potential for
unnecessary fragmentation. This can be resolved by adjusting the MTU size
using the mtu <mtu> command so that packets compensate for the
additional overhead. An MTU of 1476 is considered a sufficient adjustment to
reduce additional load on the interface.
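The configuration steps described above can be sketched as follows for a Huawei VRP based device; the interface numbers, addresses, and the static route are hypothetical, and command forms should be verified against the device documentation:

```
interface Tunnel0/0/1
 tunnel-protocol gre
 ip address 10.0.0.1 255.255.255.252
 source 200.1.1.1
 destination 200.2.2.2
 mtu 1476
quit
ip route-static 192.168.2.0 255.255.255.0 Tunnel0/0/1
```

Here mtu 1476 compensates for the 24 bytes of GRE and outer IP overhead (1500 − 24), and the static route directs traffic for the remote network into the tunnel.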

The running status and routing information for the GRE tunnel interface can
be observed following the creation of the tunnel interface. An active tunnel
interface can be confirmed when the tunnel's current state and link layer (line)
protocol state are both showing as UP. Any adjustment to the existing MTU
can also be verified, to ensure that additional overhead is not created as a
result of fragmentation. The Internet address represents the configured IP
address of the local tunnel interface. The tunnel source and destination
display the IP addresses of the physical interfaces over which the GRE
tunnel is established.

The reachability of networks over the GRE tunnel can be determined from
the IP routing table, viewed using the display ip routing-table command. In
this case it is possible to identify the internal source network address of a
local network and its ability to reach the destination network address of a
remote network over a GRE-based tunnel interface. The destination address
refers to the network address that is reached over the GRE tunnel, while the
next hop address references the IP address of the remote GRE tunnel
interface.

The keepalive function can be enabled on the GRE tunnel interface by
specifying the keepalive [ period <period> [ retry-times <retry-times> ] ]
command, where the period determines the number of seconds between
keepalives and is set to 5 seconds by default. The number of times a
keepalive will be resent is defined using the retry-times <retry-times>
parameter, which by default is set to 3 retries. If the peer fails to respond
within the number of retries defined, communication between the two ends of
the tunnel is considered to have failed and the GRE tunnel will be torn down.
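As a sketch, enabling keepalive with the default values made explicit might look as follows (the interface number is hypothetical):

```
interface Tunnel0/0/1
 keepalive period 5 retry-times 3
```

Only one end of the tunnel needs this command; the peer replies to keepalive detection packets regardless of its own keepalive configuration.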

The display interface tunnel command can again be applied to assess
whether the tunnel is still active following a loss of communication between
the GRE tunnel endpoints. In the example it is found that the link layer status
of the GRE tunnel has transitioned to a down state, denoting that a failure in
communication has occurred. The keepalive function is currently enabled and
displays a keepalive period of three seconds between each message sent.
The number of retries allows a loss of replies up to three times.

1. GRE provides a means for establishing dynamic routing between remote
networks that commonly belong to a single administrative domain, such as
in the case of branch offices. IPsec VPN is generally used to provide a
site-to-site private tunnel over which dynamic routing information may
need to be transmitted; however, IPsec VPN tunnels will not support the
forwarding of routing updates over IPsec directly, therefore GRE is
implemented.

2. The internet address represents a virtual tunnel network address over
which the GRE tunnel is established, while the tunnel source references
the entry point at which the tunnel is established.

---

~ Foreword
Management framework solutions for TCP/IP networks were introduced
as hardware and software diversity increased, in order to support rapid
network growth. SNMP was originally adapted from the simpler SGMP
protocol for use as the basis for common network management
throughout the system. SNMP has since experienced version revisions,
but remains the standard protocol for network management. The SNMP
framework, as well as the supporting Management Information Base, act
as the foundation for network management, and are introduced in
support of a well-rounded understanding of the network management
framework for TCP/IP.


Objectives
Upon completion of this section, trainees will be able to:
Describe the SNMP architecture and messaging behavior.
Describe the function of the Management Information Base (MIB).

Configure general SNMP parameters and traps.


The Simple Network Management Protocol (SNMP) is a network
management protocol widely used in TCP/IP networks. SNMP is a method
of managing network elements from a network console workstation which
runs network management software.
SNMP may be used to achieve a number of communication operations. The
Network Management Station (NMS) relies on SNMP to define sources for
network information and obtain network resource information. SNMP is also
used to relay reports in the form of trap messages to the NMS so that the
station can obtain network status in near real time, allowing the network
administrator to quickly take action in the event of system discrepancies and
failures.
SNMP is largely used to manage application programs, user accounts, and
read/write permissions (licenses) and so on, as well as to manage the
hardware that makes up the network, including workstations, servers,
network cards, routing devices, and switches. Commonly, these devices are
located far from the central office where the network administrator is based.
When faults occur on the devices, it is expected that the network
administrator can be notified automatically. SNMP effectively operates as a
communications medium between the network elements and the network
administrator/NMS.

Network elements such as hosts, gateways, terminal servers etc, contain two
important components that support the network management functions
requested by the network management stations. The management agent
resides on the network element in order to retrieve (get) or alter (set)
variables.
Network Management Stations (NMS) associate with management agents
that are responsible for performing the network management functions
requested by the NMS. The MIB stores a number of variables associated with
the network element, with each of these variables being considered an MIB
object. The exchange of SNMP messages within IP requires only the support
of UDP as an unreliable datagram service for which each message is
independently represented by a single transport datagram.

A Management Information Base (MIB) specifies the variables maintained by
network elements. These variables are the information that can be queried
and set by the management process. A MIB presents a data structure
collecting all possible managed objects over the network. The SNMP MIB
adopts a tree structure similar to that found in the Domain Name System
(DNS). The object naming tree has three top objects: ISO, ITU-T (originally
CCITT), and the joint organizations branch. Under ISO there are four objects,
of which number 3 is the identified organization. A sub-tree of the US
Department of Defense, dod (6), is defined under the identified organization
(3), under which the Internet (1) sub-tree is located. The object of interest
under the Internet is mgmt (2). What follows mgmt (2) is MIB-II, originally
MIB until 1991 when the new edition MIB-II was defined. The tree path itself
can be expressed as an object identifier (OID) value {1.3.6.1.2.1}.
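The path described above can be traced branch by branch down the object naming tree:

```
iso(1)
  identified organization(3)
    dod(6)
      internet(1)
        mgmt(2)
          mib-2(1)        OID: {1.3.6.1.2.1}
```

Reading the labels from root to leaf gives the dotted OID used to address MIB-II objects.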

SNMP defines five types of Protocol Data Units (PDUs), namely SNMP
packets, to be exchanged between the management process and the agent
process. The get-request operation indicates that the management process
reads one or more parameter values from the MIB of the agent process. The
get-next-request indicates that the management process reads the next
parameter value in lexicographic order from the MIB of the agent process.
The set-request indicates that the management process sets one or more
parameter values in the MIB of the agent process. The get-response returns
one or more parameter values; this operation is performed by the agent
process in response to the preceding three operations. Lastly, the trap is
actively sent by the agent process to inform the management process of
important or critical events.

SNMPv1 is the original application protocol by which the variables of an
agent's MIB may be inspected or altered. The evolution of SNMP involved not
only changes to the protocol but also to the MIB that was used. New objects
were defined in the MIB, resulting in MIB-II (or MIB-2) being defined, including
for example sysContact, sysName, sysLocation, and sysServices to provide
contact, administrative, location, and service information regarding the
managed node in the system group, and ipRouteMask, ipRouteMetric5, and
ipRouteInfo objects included as part of the IP route table object.
The transition to SNMP version 2 involved a number of revisions that resulted
in SNMPv2c being developed, including the introduction of a new PDU type in
the form of the GetBulkRequest-PDU, to allow information from multiple
objects to be retrieved in a single request, and the InformRequest, a
manager-to-manager communication PDU used where one manager sends
information from a MIB view to another manager. Specific objects also use
counters as a syntax, which in SNMP version 1 represented a 32-bit value.
This meant that for objects such as the byte count of interfaces it was easy
for the counter to complete a full cycle of values and wrap, similar to the
odometer that measures mileage in vehicles.
Using the 32 bit counter, octets on an Ethernet interface transmitting at
10Mbps would wrap in 57 minutes, at 100Mbps the counter would wrap in 5.7
minutes, and at 1Gbps it would take only 34 seconds before the counter fully
cycled. Objects are commonly polled (inspected) every 1 or 5 minutes, and
problems arise when counters wrap more than once between object polling as
a true measurement cannot be determined.

To resolve this, new counters were defined in SNMP version 2c in the form of
64-bit counters for any situation where 32-bit counters wrap too fast, which
translates to any interface that counts faster than 650 million bits per second.
In comparison, a 64-bit counter counting octets on a 1 Tbps (1,000 Gbps)
link will wrap in just under 5 years, and it would take a link of roughly 82,000
Tbps to cause a 64-bit counter to wrap in 30 minutes.
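The wrap times quoted above follow directly from the counter width and the line rate; for example, for a 32-bit octet counter on a 10 Mbps interface:

```
2^32 octets × 8 bits/octet = 34,359,738,368 bits
34,359,738,368 bits ÷ 10,000,000 bit/s ≈ 3,436 s ≈ 57 minutes
```

The same arithmetic at 100 Mbps and 1 Gbps yields the 5.7 minute and 34 second figures respectively.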

One of the key improvements in SNMPv3 regards the security of the
transmission of MIB object information. Various threats can be identified:
modification of object information by an unauthorized entity during transit;
the performing of unauthorized management operations by users
masquerading as another authorized user; eavesdropping on message
exchanges; and modification of the message stream through means such
as message replay.
SNMPv3 enhances security through applying four principal measures. Data
integrity is applied to ensure that data has not been altered or destroyed in an
unauthorized manner, nor data sequences altered to an extent greater than
can occur non-maliciously.
Data origin authentication is supported to ensure that the claimed identity of
the user on whose behalf received data was originated is corroborated, using
MD5 and SHA-1. Data confidentiality is applied to ensure information is not
made available or disclosed to unauthorized individuals, entities, or
processes. Additionally, solutions for limited replay protection provide a means
of ensuring that a message whose generation time is outside of a specified
time window is not accepted.

The SNMP agent is an agent process on a device on the network. The SNMP
agent maintains managed network devices by responding to NMS requests
and reporting management data to the NMS. To configure SNMP on a device,
the SNMP agent must be enabled, for which the snmp-agent command is
applied.
The snmp-agent sys-info command sets the SNMP system information and is
also used to specify the version(s) of SNMP that are supported, for which
snmp-agent sys-info version [ [ v1 | v2c | v3 ] * | all ] is used; it should be
noted that all versions of SNMP are supported by default. The snmp-agent
trap enable command activates the function of sending traps to the NMS,
following which the device will proceed to report any configured events to the
NMS.
In addition it is necessary to specify the interface via which trap notifications
will be sent. This should be the interface pointing towards the location of the
NMS, as in the example where the NMS is reached via interface
GigabitEthernet 0/0/1.
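A minimal configuration sketch combining the commands above might look as follows; the NMS address and community name are hypothetical, and exact command forms should be checked against the device documentation:

```
snmp-agent
snmp-agent sys-info version v2c v3
snmp-agent trap enable
snmp-agent trap source GigabitEthernet0/0/1
snmp-agent target-host trap address udp-domain 10.1.1.100 params securityname public
```

The final command identifies the NMS (10.1.1.100 here) to which traps are sent, reaching the station on UDP destination port 162.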

The display snmp-agent sys-info command displays the contact information
of personnel responsible for system maintenance, the physical location of
the device, and the currently supported SNMP version(s). The information
given in the example represents the typical default system information found
within Huawei AR2200 series routers; however, this can be altered through
the snmp-agent sys-info [ contact | location | version ] parameters to reflect
contact and location details relevant to each individual device.

1. In the Huawei AR2200 series router, all versions of SNMP (SNMPv1,
SNMPv2c and SNMPv3) are enabled by default.
2. The agent forwards trap messages to the Network Management Station
(NMS) using UDP destination port 162.

---

~ Foreword
As networks scale and the number of enterprise network applications
continues to grow, more and more devices require support, including
multi-service routers, security gateways, and WLAN Access Points (APs)
that implement communication and collaboration services in decentralized
networks, such as enterprise campus networks and branch office networks.
An enterprise network is also increasingly comprised of multi-vendor
devices, each of which may have its own management system, creating a
nightmare for system and network administrators. This section introduces
the eSight Unified Network Management Platform as an effective unified
network management station (NMS) solution for the monitoring and
management of all network and system resources.


Objectives
Upon completion of this section, trainees will be able to:
Explain the application of eSight in the enterprise network
Describe the services provided by eSight as part of a complete
network management solution.


The eSight system is developed by Huawei for the management of enterprise
networks, such as enterprise park, campus, branch, and data center
networks. It implements unified management of, and intelligent interaction
between, enterprise resources, services and users. In addition, eSight can
manage devices from multiple manufacturers, including network devices from
Huawei, H3C, Cisco, and ZTE, and IT devices from IBM, HP, and Sun
Microsystems.
Three editions of eSight exist to support enterprise networks of various scales
and complexity, referred to as compact, standard and professional. The
compact edition offers alarm management, performance management,
topology management, configuration file management, network element (NE)
management, link management, log management, physical resources,
electronic labeling, IP topology, smart configuration tools, custom device
management, security management, terminal access management, system
monitoring tools, database backup/restoration tools, fault collection tools, and
MIB management.
In addition to the functions of the compact edition, the standard edition offers
support for WLAN management, network traffic analysis (NTA), service level
agreement (SLA) management, quality of service (QoS) management, MPLS
VPN Management, MPLS tunnel management, IPsec VPN management,
report management, LogCenter, Secure Center, and SNMP alarm northbound
interface (NBI). The professional edition provides all functions of the standard
edition as well as Data center nCenter management, hierarchical network
management, and support for Linux two-node cluster hot standby.

eSight can manage devices from multiple manufacturers, and can monitor,
analyze, and manage network services and elements. The SLA Manager, for
example,
diagnoses network links immediately or at the scheduled time through SLA
tasks, which facilitates assessment of network service quality. The Network
Traffic Analyzer (NTA) analyzes network traffic packets based on the packet
source, destination, protocol, and application, which enables clear
understanding of the distribution of network traffic. The eSight Log Center is a
unified security service management system developed by Huawei for
telecom carriers and industry customers. Characterized by high integration
and reliability, eSight Log Center offers comprehensive network element (NE)
management, security service analysis, and security audit over Huawei
security products.
Fault management provides measures such as real-time alarm browsing,
alarm operations, alarm rule settings (alarm masking rule and alarm sound
setting), and remote alarm notification to monitor exceptions on the network in
real time. This helps the network administrator take timely measures to
recover network operation. Resource Management enables devices to be
divided into different subnets based on their actual locations on the network,
allows devices to be classified into groups so that batch operations can be
performed on devices in the same group, and supports configuring and
querying resources such as device systems, frames, boards, subcards, and
ports. eSight offers the
management information base (MIB) tool that can read, compile, store, and
use .mib files. eSight reads and monitors MIB data through SNMP v1, v2c, or
v3, which helps perform effective network management. eSight Secure Center
allows the management of large-scale security policies for Huawei firewalls or
devices deployed in Unified Threat Management (UTM) environments.

eSight allows administrators to ensure the security of device configuration
files by providing functions for backing up and restoring device configuration
files. FTP operates as a service within eSight, from which the FTP service
parameters are configured. These include the username and password used
to access the FTP service, the location within eSight where received
configuration files are stored, as well as the FTP service type (FTP, SFTP
and TFTP are supported), for which the default is FTP.
An SNMP trap is defined within the Network Element (NE), which in this case
refers to the router. The NE defines the parameters for the SNMP trap using
the snmp-agent trap enable command, following which the parameters can
be set for sending traps to the target NMS to perform periodic file backup.
eSight can be configured to periodically (daily, weekly, or monthly) back up
the configuration files of devices specified in a backup task, at a specified
time. It can also be configured to trigger a backup upon the generation of a
device configuration change alarm, as well as perform instant backups.

As part of the standard edition, the WLAN Manager manages wireless
resources, for example access controllers (ACs) and access points (APs),
and configurations of wireless campus networks, diagnoses device and
network faults, and displays both wired and wireless resources in topology
views.
The WLAN Manager is capable of end-to-end delivery and deployment of AP
services, which improves deployment efficiency by approximately 90%
compared to manual deployment. An access controller (AC) is used to control
and manage access points (APs) in the wireless LAN. With AC management,
it is possible to connect an AP to the WLAN and confirm AP identities, add an
AP in offline mode, and add those APs to a whitelist to authorize them to
become active in the WLAN.
Unauthorized or rogue APs can be detected as a result of not being listed in
the whitelist and be prevented from becoming active in the WLAN, or can be
authorized by being added to the whitelist. Details regarding each AP and AC
can be observed from eSight, including AC status, name, type, IP address, AP
authentication mode, region information, and performance statistics. This
information is communicated through AC management service data generated
by each AC.

Each AP generates radio spectrum signals that are susceptible to
interference. After the AP radio spectrum function is enabled on devices,
users can view the signal interference information around APs in the NMS.
Users are able to judge the channel quality and surrounding interference
sources on spectrum charts. Spectrum charts include real-time, depth,
channel quality, channel quality trend, and device percentage charts.
Furthermore, users can deploy APs in regions, view the hotspot coverage,
and detect signal coverage blind spots and conflicts promptly, as displayed in
the given example. In regions where location is enabled, the topology
refreshes the latest location of users and unauthorized devices in real time. It
is then possible to view the hotspot location and radio signal coverage in the
location topology and mark conflict regions, as well as pre-deploy APs, view
the simulated radio coverage, and review the actual radio coverage after APs
are brought online.

eSight Network Traffic Analyzer (NTA) enables users to detect abnormal traffic
in a timely manner based on the real-time entire-network application traffic
distribution, and plan networks based on the long-term network traffic
distribution. As such, NTA helps implement transparent network management.
eSight NTA allows users to configure devices, interfaces, protocols,
applications, IP groups, application groups, and interface groups.

NTA provides the traffic dashboard function, displaying real-time traffic for the
entire network. The dashboard offers rankings for interface traffic, interface
utilization, device traffic, application traffic, host traffic, DSCP traffic, and
session traffic, for which it is possible to customize the display format and
view specific content by drilling down through the network traffic analysis
details, and to generate detailed traffic reports.

The eSight NMS allows report information to be represented in a number of
formats for the presentation of traffic and other analysis data. Report formats
such as pie chart, table, line graph, and region can be supported for
displaying report information. General summary types are also defined, such
as application summary, session summary, DSCP summary, source host
summary, destination host summary, and interface summary; however,
filtering conditions can also be applied to filter traffic by source address,
destination address, application, and so on.
Information may be output as an instant report or periodically based on a
period defined by the administrator, after which the status is displayed on the
page, reporting detailed traffic statistics, which may also be exported as a
batch report or forwarded as an email.

1. The eSight network management platform editions include the compact,
standard, and professional editions.

---

~ Foreword
With the gradual exhaustion of the IPv4 address space, new solutions for
continued address space were needed. Temporary measures in the form
of NAT were applied; however, long-term solutions were required. The
IPv6 addressing architecture is a developing solution for IP to provide for
the next generation of networks and beyond. The transition to an all-IPv6
architecture, while progressive, requires a major overhaul of many
protocols and applications as well as standards. The IPv6 network
however aims to resolve many limitations within the current TCP/IP suite,
most notably addressing the need for integrated security measures and
the streamlining of protocols to minimize overhead. A deep knowledge of
the IPv6 architecture is required by engineers as IPv6 continues to evolve
as an integral part of the enterprise network.


Objectives
Upon completion of this section, trainees will be able to:

Explain the characteristics of IPv6

Explain the IPv6 address format and addressing types.

Describe the process for IPv6 stateless address autoconfiguration.


As a set of specifications defined by the Internet Engineering Task Force


(IETF), Internet Protocol version 6 (IPv6) is the next-generation network layer
protocol standard and the successor to Internet Protocol version 4 (IPv4). The
most obvious difference between IPv6 and IPv4 is that IP addresses are
lengthened from 32 bits to 128 bits. In doing so, IPv6 is capable of supporting
an increased number of levels of addressing hierarchy, a much greater
number of addressable nodes, and simpler auto-configuration of addresses.
The existing IPv4 address range was implemented at a time when such
networks as ARPANET and the National Science Foundation Network
(NSFNET) represented the mainstream backbone network, and at a time
when IPv4 was considered more than ample to support the range of hosts
that would connect to these forming networks. The unforeseen evolution of
the Internet from these ancestral networks resulted in the rapid consumption
of the IPv4 address space of 4.3 billion addresses (of which many are
reserved), for which counter measures in the form of NAT and CIDR were put
in place to alleviate the expansion, and give time for a more permanent
solution to be formed. Additionally, early IPv4 network address allocation was
highly discontiguous making it difficult for addressing to be clustered into
effective address groups and ease the burden on global IP routing tables used
in autonomous system based routing protocols such as BGP.
Eventually however the IP network is expected to shed its IPv4 addressing to
make way for IPv6 and provide the addressing capacity of over 340
undecillion unique addresses, considered more than necessary for continued
IP network growth. Along with this, address allocation by the Internet Assigned
Numbers Authority (IANA) ensures that address allocation for IPv6 is
contiguous for efficient future management of IP routing tables.
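The address-space figures quoted above can be checked with a few lines of Python (a quick arithmetic sketch, not part of the original material):

```python
# Compare the 32-bit IPv4 address space with the 128-bit IPv6 space.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(f"{ipv4_space:,}")    # 4,294,967,296 -> roughly 4.3 billion addresses
print(f"{ipv6_space:.3e}")  # about 3.403e+38 -> over 340 undecillion addresses
```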

The IPv6 header required some level of expansion to accommodate the


increased size of the IP address, however many other fields that were
considered redundant in many cases of packet forwarding were removed in
order to simplify the overall design of the IPv6 header, and optimize the level
of overhead that is necessary during each packet that is forwarded across the
network. In comparison with the IPv4 packet header, the IPv6 packet header
no longer contains an Internet Header Length (IHL), identifier, flags, fragment
offset, header checksum, options, or padding fields, but instead carries a flow
label field.
The term flow can be understood as a continuous stream of unicast, multicast
or anycast packets relating to a specific application or process, that originate
from a given source and are transmitted to a given destination or multiple
destinations. The natural behavior of a packet switched network means that
such traffic flows may be carried over multiple paths and arrive at the intended
destination out of sequence. System resources are then required to resequence the packet flow before it can be processed by the upper layers.
The flow label intends to identify such traffic flows, together with the source
and destination address fields, to maintain a common path across networks
for all packets associated with the traffic flow. In doing so packets are capable
of maintaining sequence over the IP network, and this optimizes process
efficiency.

As IP network convergence introduces technologies such as voice, which rely
on circuit switched traffic flow behavior and near real time performance,
the importance of the flow label field becomes more paramount.
Various options can also now be supported without changing the existing
packet format, with the inclusion of extension header information fields. These
extension header fields are referenced through the Next Header field that
replaces the protocol field found in IPv4, in order to provide additional
flexibility in IPv6.

The extension header is one of the primary changes to IPv6 for enabling the
IPv6 header to be more streamlined. There are a number of extension
headers that exist in IPv6 and are referenced directly through a hexadecimal
value in the IPv6 Next Header field, in the same way that ICMP, TCP, UDP
headers are referenced in the protocol field of the IPv4 header. Each
extension header also contains a next header field to reference the next
header following the processing of the extension header. Multiple extension
headers can be associated with a packet and each referenced in sequence.
The list of extension headers and the sequence in which the extension
headers are processed are as follows.
IPv6 header
Hop-by-Hop Options header
Destination Options header
Routing header
Fragment header
Authentication header
Encapsulating Security Payload header
Destination Options header
Upper-layer header (as in IPv6 tunneling)
Each extension header should occur at most once, except for the Destination
Options header which should occur at most twice (once before a Routing
header and once before the upper-layer header). Another key aspect of the
extension headers is the introduction of the IPsec protocols in the form of the
Authentication Header (AH) and Encapsulating Security Payload (ESP).

The IPv6 address is a 128 bit identifier that is associated with an interface or a
set of interfaces (as in the case of anycast address assignment, covered later).
The address notation due to size is commonly defined in hexadecimal format
and written in the form of xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, where the
xxxx relates to the four hexadecimal digits that translate to a 16 bit binary
value. The eight such 16-bit groups together comprise the 128-bit
IPv6 address.
The address itself is made up of an IPv6 prefix that is used to determine the
IPv6 network, followed by an interface ID. The basic addressing scheme
represents a two layer architecture of prefix and interface ID, however IPv6
address types may contain many sublevels that enable an extensive
hierarchical addressing architecture to be formed in the IPv6 network, to
support effective grouping and management of users and domains.

The extensive 128 bit size of the IPv6 address means that the general written
notation can be impractical in many cases. Another factor that will remain
common for quite some time is the presence of long strings of zero bits,
primarily due to the unfathomable enormity of the address scale involved. This
however brings a means for simplification and practicality when writing IPv6
addresses that contain zero bits. One initial means of simplification is the
removal of any leading zero values found in each set of 16 bit hexadecimal
values.
A special syntax is available to further compress the zeros, represented in the
form of a double colon, or "::". This value indicates one or more 16 bit groups
of zeros. In order for the string length to remain determinable however, the "::"
can only appear once in an address. The double colon "::" can also be used to
compress zero groups at the start of an address, as displayed in the given example.
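As a sketch of the compression rules described above, Python's standard `ipaddress` module applies both simplifications (leading-zero removal and a single "::"); this example is illustrative and not part of the original material:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0DB8:0000:0000:0000:0000:032A:2D70")

# .compressed drops leading zeros in each 16-bit group and collapses the
# longest run of all-zero groups into a single "::".
print(addr.compressed)  # 2001:db8::32a:2d70

# .exploded restores the full eight-group written notation.
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:032a:2d70
```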

The IPv6 address space has yet to be completely defined: firstly due to the
magnitude of the addressing space, most of which is not likely to be required
anytime in the near future; and secondly because the formulation of addressing
schemes is still somewhat in its infancy, with much discussion occurring with
regards to the address types that should be applied.
Currently a fraction of the address range has been sectioned for use in the
form of a 2000::/3 address range that represents a global unicast address
range. This can be understood to be any address that is routable in the public
IP network. The governing body IANA (now a part of ICANN) is responsible for
distributing portions of this address range to various Regional Internet
Registries (RIR) that manage addressing for one of five regions in the world.
Supersets of IPv6 addresses in the form of 2400::/12 (APNIC), 2600::/12
(ARIN), 2800::/12 (LACNIC), 2A00::/12 (RIPE NCC) and 2C00::/12 (AfriNIC)
have been allocated, allowing the potential for addressing for a given region to
be referenced and managed in the meantime using a single address prefix.
Also contained within the 2000::/3 prefix range are reserved address fields,
including 2001:0DB8::/32 that is used to reference fictional addressing within
documentation.
Multicast addressing is defined within IPv6 as FF00::/8, with much of the
addressing scope being reserved to specific address ranges (such as link
local) and in support of routing protocols, in a manner similar to how multicast
addresses are used in IPv4. One major change to IPv6 is that there are no
broadcast addresses, their function being superseded by multicast addresses,
which reduces unnecessary processing by end stations at the MAC layer
within IPv6 networks, since hosts are able to filter unwanted multicast MAC
addresses at the interface level.

Some special addresses also exist within IPv6 including the unspecified
address ::/128 which represents an interface for which there is no IP address
currently assigned. This should not be confused, however, with the default
route ::/0, which is used as a default value matching any network in the
same way the 0.0.0.0/0 default route is used within IPv4. The equivalent of
the IPv4 loopback address (127.0.0.1) is defined in IPv6 as the reserved address ::1/128.

Unicast addressing comprises primarily of global unicast and link local unicast
address prefixes, however other forms also exist. All global unicast addresses
have a 64 bit interface ID field, with the exception of addresses for which the
first three bits are 000. For such addresses there is no constraint
on the size or structure of the interface ID field. The address format for global
unicast addresses comprise of a hierarchy in the form of a global routing
prefix and a subnet identifier, followed by the interface identifier. The global
routing prefix is designed to be structured hierarchically by the Regional
Internet Registries (RIR) and the Internet Service Providers (ISP) to whom the
RIR distribute IP address prefixes. The 16-bit subnet field is designed to be
structured hierarchically by site administrators to provide up to 65,536
individual subnets.
In terms of the Link Local unicast address, the FE80::/10 prefix means that the
first 10 bits of the link local address are clearly distinguishable as 1111111010.
The 64 bit interface ID range is more than sufficient for the addressing of
hosts, and therefore the remaining 54 bits within the Link Local address are
maintained as 0. The example displays the address formats that can be
typically associated with each of the common unicast address types.
It should be noted also that a third unicast addressing scheme, referred to as
site local addressing (FEC0::/10) was originally proposed in RFC 3513 and may
appear in some implementations of IPv6, however the addressing range has
been deprecated as of RFC4291 and should be avoided as part of future
implementations.
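The unicast prefixes described above can be tested programmatically; a brief sketch using Python's standard `ipaddress` module (the addresses used here are hypothetical documentation examples):

```python
import ipaddress

link_local = ipaddress.IPv6Network("fe80::/10")
global_unicast = ipaddress.IPv6Network("2000::/3")

# A link local address matches the FE80::/10 prefix (first 10 bits 1111111010).
print(ipaddress.IPv6Address("fe80::1") in link_local)          # True

# A documentation address from 2001:db8::/32 falls within the 2000::/3
# global unicast range distributed by IANA.
print(ipaddress.IPv6Address("2001:db8::1") in global_unicast)  # True
print(ipaddress.IPv6Address("2001:db8::1").is_link_local)      # False
```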

A multicast address is an identifier for a set of interfaces (typically belonging


to different nodes). A packet sent to a multicast address is delivered to all
interfaces identified by that address. The multicast address range is
determined from a FF00::/8 address prefix and is clearly distinguishable from
the first 8 bits which are always set to 11111111. The address architecture for
multicast addresses comprises of flags, scope and group ID fields.
The flags field, though 4 bits in length, is currently used to identify whether a
multicast address is well known (as in assigned and recognized by the global
internet numbering authority) or transient, meaning the multicast address is
temporarily assigned. A second bit value within the flags field determines
whether or not the multicast address is based on the network prefix. The
remaining higher order two bits are reserved and remain set to 0.
The scope defines a number of potential ranges to which the multicast is
applied. Generally this may refer to the node local scope FF01::, or to the link
local scope FF02::, which refers to listeners within the scope of the link layer,
such as an Ethernet segment for which a gateway defines the boundary.
Common well known link local multicast addresses include FF02::1, which is
used to forward to all nodes, and FF02::2, which allows multicast
transmission to all routers on the link.
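The multicast address structure described above (flags in the third hex digit, scope in the fourth) can be illustrated briefly in Python; this is a sketch and not part of the original material:

```python
import ipaddress

all_nodes = ipaddress.IPv6Address("ff02::1")  # all nodes on the link

print(all_nodes.is_multicast)       # True (falls within FF00::/8)

# The 4-bit scope field is the low nibble of the first 16-bit group.
scope = (int(all_nodes) >> 112) & 0xF
print(scope)                        # 2 -> link-local scope

# The 4-bit flags field is the nibble above it.
flags = (int(all_nodes) >> 116) & 0xF
print(flags)                        # 0 -> well known (permanently assigned)
```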

Anycast represents an identifier for a set of interfaces (typically belonging to


different nodes). A packet sent to an anycast address is delivered to one of
the interfaces identified by that address (the "nearest" one, according to the
routing protocols' measure of distance). The anycast architecture commonly
involves an anycast initiator and one or multiple anycast responders.
The anycast initiator may commonly be a host that is querying a service,
such as in the case of a DNS lookup, or requesting the retrieval of specific
data, such as HTTP web page information. Anycast addresses are in no way
distinguishable from unicast addresses, with the exception that the same
address is applied to multiple devices.
The application for anycast over traditional methods of addressing brings
many potential benefits to enterprise network operation and performance.
Service redundancy is one such application for anycast that allows a service
such as HTTP to be used across multiple servers, for which the same address
is associated with one or more servers that operate as anycast responders. In
the event that service failure occurs on one server, clients would traditionally
need to search for the address of another server and then restart or
reestablish communication.
Using anycast addressing, the anycast initiator automatically establishes
communication with another server that is using the same address, providing
an effective redundancy solution.

In terms of performance, multi-server access can be applied, whereby an
anycast initiator establishes connections to the same anycast address, and
each connection is automatically established to one of multiple servers
specifying that address, such as in the case of a client browsing a company
web page.
The client will download the HTML file and individual image files from separate
mirror servers. As a result, the anycast initiator utilizes bandwidth from
multiple servers to download the web page much more quickly than is
possible with a unicast approach.

Upon physically connecting to an IPv6 network, hosts must establish a
unique IPv6 address and associated parameters such as the prefix of the
network. Routers send out Router Advertisement messages periodically, and
also in response to Router Solicitations (RS) to support router discovery, used
to locate a neighboring router and learn the address prefix and configuration
parameters for address auto-configuration.
IPv6 supports stateless address autoconfiguration (SLAAC) that allows hosts
to obtain IPv6 prefixes and automatically generate interface IDs without
requiring an external service such as DHCP. Router Discovery is the basis for
IPv6 address auto configuration and is implemented through two message
formats.
Router Advertisement (RA) messages allow each router to periodically send
multicast RA messages that contain network configuration parameters, in
order to declare the router's existence to Layer 2 hosts and other routers. An
RA message is identified by a value of 134 in the type field of the message.
Router Solicitation (RS) messages are generated after a host is connected to
the network.
Routers will periodically send out RA messages; however, should a host wish
to prompt for an RA message, the host will send an RS message. Routers on
the network will generate an RA message to all nodes to notify the host of the
default router on the segment and related configuration parameters. The RS
message generated by a host can be distinguished by the type field which
contains a value of 133.

In order to facilitate communication over an IPv6 network, each interface must


be assigned a valid IPv6 address. The interface ID of the IPv6 address can be
defined either through manual configuration by a network administrator,
generated through the system software, or generated using the IEEE 64-bit
Extended Unique Identifier (EUI-64) format.
Due to practicality, the most common means for IPv6 addressing is to
generate the interface ID using the EUI-64 format. IEEE EUI-64 standards
use the interface MAC address to generate an IPv6 interface ID. The MAC
address however represents a 48-bit address whilst the required interface ID
must be composed of a 64 bit value. The first 24 bits (expressed by c) of the
MAC address represent the vendor (company) ID, while the remaining 24 bits
(expressed by e) represents the unique extension identifier assigned by the
manufacturer.
The seventh most significant bit of the address represents a universal/local
(U/L) bit, used to enable interface identifiers with universal scope. In a MAC
address, a value of 0 in this bit indicates a universally administered (globally
unique) address. During conversion, the EUI-64 process inserts the two octets
0xFF and 0xFE (totaling 16 bits) between the vendor identifier and extension
identifier of the MAC address, and the universal/local bit is changed from 0 to
1 to indicate that the interface ID represents a globally unique address value.
A 64 bit interface ID is created and associated with the interface prefix to
generate the IPv6 interface address.
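The EUI-64 conversion described above can be sketched in Python. The MAC address and prefix below are hypothetical example values, and the helper function names are illustrative rather than from the original material:

```python
import ipaddress

def eui64_interface_id(mac: str) -> int:
    """Derive a modified EUI-64 interface ID from a 48-bit MAC address."""
    octets = bytes(int(part, 16) for part in mac.split("-"))
    # Insert the two octets 0xFF and 0xFE between the 24-bit vendor ID
    # and the 24-bit extension identifier.
    eui64 = bytearray(octets[:3] + b"\xff\xfe" + octets[3:])
    # Flip the universal/local bit (the seventh bit of the first octet).
    eui64[0] ^= 0x02
    return int.from_bytes(eui64, "big")

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Combine a /64 prefix with the EUI-64 interface ID."""
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | eui64_interface_id(mac))

print(slaac_address("2001:db8::/64", "00-1E-10-2B-3C-4D"))
# -> 2001:db8::21e:10ff:fe2b:3c4d
```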

Before an IPv6 unicast address is assigned to an interface, Duplicate Address


Detection (DAD) is performed to check whether the address is used by
another node. DAD is required if IP addresses are configured automatically.
An IPv6 unicast address that is assigned to an interface but has not been
verified by DAD is called a tentative address. An interface cannot use the
tentative address for unicast communication but will join two multicast groups:
ALL-nodes multicast group and Solicited-node multicast group.
A Solicited-Node multicast address is generated by taking the last 24 bits of a
unicast or anycast address and appending them to the FF02::1:FF00:0/104
prefix. In the case where the address 2000::1 is used, the solicited node
multicast address FF02::1:FF00:1 would be generated.
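The solicited-node derivation described above amounts to a simple bit operation; a Python sketch (the second sample address is a hypothetical documentation value):

```python
import ipaddress

def solicited_node(addr: str) -> ipaddress.IPv6Address:
    """Append the low-order 24 bits of an address to the ff02::1:ff00:0/104 prefix."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    prefix = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(prefix | low24)

print(solicited_node("2000::1"))            # ff02::1:ff00:1
print(solicited_node("2001:db8::a4:72b1"))  # ff02::1:ffa4:72b1
```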
IPv6 DAD is similar to the gratuitous ARP protocol used in IPv4 to detect
duplicated IPv4 host addresses upon address assignment or connection of
the host to the network. A node sends a neighbor solicitation (NS) message,
specifying the tentative address as the target, to the Solicited-node multicast
group of that address.
If the node receives a neighbor advertisement (NA) reply message, the
tentative address is confirmed as being used by another node and the node
will not use this tentative address for communication, following which manual
assignment of an address by an administrator would be necessary.

1. The address 2001:0DB8:0000:0000:0000:0000:032A:2D70 can be


condensed to 2001:DB8::32A:2D70. The double colon can be used to
condense strings of zeroes that are often likely to exist due to the
unfathomable size of the IPv6 address space, but may only be used once.
Leading zero values can be condensed, however trailing zero values in
address strings must be maintained.
2. The 64-bit Extended Unique Identifier (EUI-64) format can be used to
generate a unique IPv6 address. This is achieved by inserting two FF
and FE octet values into the existing 48-bit MAC address of the interface,
to create a unique 64-bit value that is adopted as the IPv6 address of the
end station interface. Validation of the address as a unique value is
confirmed through the Duplicate Address Detection (DAD) protocol.

Thank you
www.huawei.com

~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
The changes to the address architecture have introduced the need for
routing protocols that are capable of supporting IPv6. Two such routing
protocols include RIPng and OSPFv3. The characteristics and operation
of each of these protocols generally reflect those used in IPv4, however
they contain some distinct differences that must be understood to
support the implementation of IPv6 based routing protocols within an
IPv6 founded enterprise network.


~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:
Describe the characteristics and operation of RIPng
Describe the characteristics and operation of OSPFv3

Configure RIP and OSPF routing protocols for IPv6


~~ HUAWEI

RIPng represents the next generation of distance vector routing for IPv6. It is
largely based on the distance vector principles associated with the prior IPv4
versions of RIP. The protocol maintains many of the same conditions,
including a network diameter of 15 hops, beyond which networks become
unreachable as a result of the count to infinity principle. Each hop in RIPng is
defined with a cost metric value of 1.
The major differences within RIPng boil down to the addressing mechanisms
used in order to support communication between peering routers that
participate in RIPng based distance vector routing. Periodic updates are
transmitted using the IPv6 link local multicast range FF02::; for RIPng, the
reserved multicast address FF02::9 is specified, a logical translation of the
RIPv2 multicast address 224.0.0.9.
The addressing features of most routing protocols within IPv6 rely more
heavily on the use of link local addresses than within IPv4. The equivalent link
local address within IPv4 is taken from the range of 169.254.0.0/16 in order to
facilitate communication where no address is defined and no DHCP server is
available. IPv4 based routing protocols rely on public or privately defined
address ranges to determine the next hop, whereas in IPv6 routing protocols
default to specifying the link local address as the next hop. The example
demonstrates communication between routers on the same broadcast
segment in which RTA and RTB are configured with loopback interfaces using
global unicast addresses. The physical interfaces however depend only on
the link local address to define the next hop when communicating routing
information. For example, the destination address of 2001::B on RTB will be
advertised to RTA with RTB's link local address as the next hop.

One of the main concerns with this is that where EUI-64 is used to define the
link local address, the link local address may change any time that an
interface card is replaced since the MAC of the interface will also change. It is
possible however to manually define the link local address, which is generally
recommended to provide stability; alternatively the global unicast
addressing can be manually defined. If a global unicast address is applied to
the logical interface, the link local addressing may not be automatically
defined on the physical interfaces by the router. Where this occurs the ipv6
address auto link-local command can be used on the physical interface.

RIPng is, as with prior IPv4 versions, a UDP-based protocol with a RIPng
routing process on each router that sends and receives datagrams on UDP
port number 521. All routing update messages are communicated between
the RIPng ports of both the sender and the receiver, including commonly
periodic (unsolicited) routing update messages. There may be specific queries
sent from ports other than the RIPng port, but they must be directed to the
RIPng port on the target machine. The command field of the RIPng header
defines a request for routing table information either in part or in full of the
responding peer, or as a response message that contains all or part of the
sender's routing table. The message may be carried as a result of a response
to a request or as part of the periodic (unsolicited) routing update. The version
field refers to the version of RIPng, which currently is defined as version 1.
Each message may contain one, or a list of Routing Table Entries (RTE) that
define RIPng routing information, including the destination prefix, the length of
the prefix and also the metric value that defines the cost, in order to reach the
destination network defined in the prefix.

In order to implement RIPng on the router it is first necessary that the router
establish support for IPv6 at the system view. The system view command ipv6
should be implemented to achieve this. At the interface view IPv6 capability is
disabled by default, therefore the ipv6 enable command must be used to
enable IPv6 for each interface and allow the forwarding of IPv6 unicast
packets as well as the sending and receiving of link local IPv6 packets. The
ripng [process-id] command at the interface level associates the router interface with a
process instance of the RIPng protocol. The process ID can be any value
between 1 and 65535, but should be the same for all router interfaces
associated with a single RIPng protocol database.
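Following the commands described above, a minimal RIPng configuration might resemble the sketch below. The router name, interface, and process ID are illustrative, and exact command syntax can vary between VRP releases:

```
[RTA] ipv6                                  # enable IPv6 globally
[RTA] ripng 1                               # create RIPng process 1
[RTA-ripng-1] quit
[RTA] interface GigabitEthernet 0/0/0
[RTA-GigabitEthernet0/0/0] ipv6 enable      # IPv6 is disabled per interface by default
[RTA-GigabitEthernet0/0/0] ripng 1 enable   # associate the interface with process 1
```

The state of the process can then be checked with the display ripng command.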

Using the display ripng command, the process instance will be shown along
with the parameters and the statistics for the instance. RIPng supports a
preference of 100 by default and maintains an update period of 30 seconds as
found in IPv4 instances of RIP. The operation of RIPng can be verified by the
sending of periodic updates, however these alone may not contain any routes
if only link local addressing is enabled on the peering interface. The number of
routes in the database should represent a value greater than zero to show
that RIPng is actively aware of the route, and that it is advertising routes,
through the total number of routes in the advertising database.

OSPFv3 is the version of OSPF that is used over IPv6 networks, and in terms
of communication, assumes that each router has been assigned link-local
unicast addresses on each of the router's attached physical links. OSPF
packets are sent using the associated link-local unicast address of a given
physical interface as the source address. A router learns the link-local
addresses of all other routers attached to its links and uses these addresses
as next-hop information during packet forwarding. This operation is true for all
physical links with the exception of virtual links which are outside the scope of
this material.
A reserved AllSPFRouters multicast address has been assigned the value
FF02::5, reflecting the multicast address 224.0.0.5 used in OSPFv2, for which
it should be noted that the two versions are not compatible. All routers running
OSPFv3 should be prepared to receive packets sent to this address. Hello
packets are always sent to this destination, as well as certain OSPF protocol
packets during the flooding procedure.

The router ID plays a prominent role in OSPFv3 and is now always used to
identify neighboring routers. Previously, they had been identified by an IPv4
address on broadcast, NBMA (Non-Broadcast Multi-Access), and point-to-multipoint
links. Each router ID (as well as the area ID) is maintained as a 32-bit
dotted decimal value and cannot be configured using an IPv6 address.
The router ID also continues to actively act as a tie breaker in the event that
the priority associated with each OSPFv3 enabled router is equal. In such
cases the Designated Router (DR) is identified as the router possessing the
highest router ID. Hello messages sent will contain a router ID set to 0.0.0.0 if
there is no Designated Router. The same principle applies for the Backup
Designated Router, identified by the next highest router ID. A priority of 0 set
on a router interface participating in OSPFv3 will deem the router as ineligible
to participate in DR/BDR elections. Router Priority is only configured for
interfaces associated with broadcast and NBMA networks.
The multicast address AllDRouters has been assigned the value FF02::6,
the equivalent of the 224.0.0.6 multicast address used in OSPFv2 for IPv4.
The Designated Router and Backup Designated Router are both required to
be prepared to receive packets destined to this address.

IPv6 refers to the concept of links to mean a medium between nodes over
which communication at the link layer is facilitated. Interfaces therefore
connect to links and multiple IPv6 subnets can be associated with a single
link, to allow two nodes to communicate directly over a single link, even when
they do not share a common IPv6 subnet (IPv6 prefix). OSPFv3 as a result
operates on a per-link basis instead of per-IP-subnet as is found within IPv4.
The term link therefore is used to replace the terms network and subnet which
are understood within OSPFv2. OSPF interfaces are now understood to
connect to a link instead of an IP subnet. This change affects the receiving of
OSPF protocol packets, the contents of Hello packets, and the contents of
network-LSAs.
The impact of links can be understood from the OSPFv3 capability to support
multiple OSPF protocol instances on a single link. This applies where separate
OSPF routing domains wish to remain separate, but operate over one or more
physical network segments (links) that are common to the different domains.
IPv4 required isolation of such domains through authentication, which did not
provide a practical solution to this design requirement.
The per-link operation also means the Hello packet no longer contains any
address information, and instead it now includes an Interface ID that the
originating router assigns to uniquely identify (among its own interfaces) its
interface to the link.

Authentication in OSPFv3 is no longer supported, and as such the


authentication type and authentication (value) fields from the OSPF packet
header no longer exist. In IPv6, changes to the IP header have allowed for
the utilization of security protocols that were present in IPv4 only as part of
the IPsec suite; the initial development of TCP/IP was never designed with
security in mind, since security of TCP/IP was not at the time foreseen as a
future requirement.
With the security improvements made to IPv6 and the utilization of IP
extension headers in the protocol suite, OSPFv3 is capable of relying on the
IP Authentication Header and the IP Encapsulating Security Payload to
ensure integrity as well as authentication & confidentiality, during the
exchange of OSPFv3 routing information.

Implementing OSPFv3 between peers requires, as with RIPng, that the router
firstly be capable of supporting IPv6. Additionally the router must also enable
the OSPFv3 protocol globally in the system view. Each interface should be
assigned an IPv6 address. During the configuration of RIPng it was
demonstrated how the interface can be automatically configured to assign a
link local address. In this example an alternative and recommended method of
link local address assignment is demonstrated. The link local IPv6 address is
manually assigned and designated as a link local address using the ipv6
address <link-local address> link-local command. If the address associated with the
physical interface is a global IPv6 unicast address, the interface will also
automatically generate a link local address.
In order to generate an OSPFv3 route, the example demonstrates assigning
global unicast addresses to the logical loopback interface of each router. The
physical and logical interfaces both should be associated with the OSPFv3
process and be assigned a process ID (typically the same ID unless they are
to operate as separate instances of OSPFv3) and also be assigned to an
area, which in this case is confined to area 0.
As a reminder, OSPFv3 nodes rely on identification by neighboring routers
through the use of the router ID, and therefore a unique router ID should be
assigned for each router under the ospfv3 protocol view following the
configuration of the ospfv3 command.
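The steps above can be summarized in a configuration sketch. As with the earlier examples, the router name, interface, router ID, and link local address are illustrative values, and exact syntax may differ across VRP releases:

```
[RTA] ipv6
[RTA] ospfv3                                    # enable OSPFv3 globally (process 1)
[RTA-ospfv3-1] router-id 1.1.1.1                # a unique 32-bit router ID
[RTA-ospfv3-1] quit
[RTA] interface GigabitEthernet 0/0/0
[RTA-GigabitEthernet0/0/0] ipv6 enable
[RTA-GigabitEthernet0/0/0] ipv6 address FE80::1 link-local
[RTA-GigabitEthernet0/0/0] ospfv3 1 area 0      # bind the interface to process 1, area 0
```

Adjacency state can then be verified with the display ospfv3 command.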

Following the configuration of neighboring routers participating in OSPFv3,


the display ospfv3 command is used to verify the operation and parameters
associated with OSPFv3. Each router will show a running OSPFv3 process
and a unique router ID. If the adjacency between neighbors is established, the
number of FULL (state) neighbors will show a value greater than zero.

1. RIPng listens for route advertisements using UDP port 521.


2. A 32-bit router ID is used to uniquely distinguish each node running
OSPFv3.

Thank you
www.huawei.com

~~
HUAWEI TECHNOLOGIES CO., LTD.

---

HUAWEI

~ Foreword
The IPv6 architecture has led to the redesign of many aspects of network
operation. One such design change involves Neighbor Discovery, which
in itself now defines a means for Stateless Address Autoconfiguration
(SLAAC). DHCP for IPv6 (DHCPv6) includes a number of design
changes, including support for both SLAAC and stateful IPv6
addressing. DHCPv6 remains a client/server based application layer
protocol, however it includes a significant number of changes to align with
the design aspects of IPv6. As such, DHCPv6 stateful and stateless
implementations and characteristics are explained.


~~ HUAWEI

Objectives
Upon completion of this section, trainees will be able to:
Describe the features of DHCPv6.
Explain the stateful and stateless behavior of DHCPv6.

Successfully configure DHCPv6 services.


~~ HUAWEI

The Dynamic Host Configuration Protocol for IPv6 (DHCPv6) is a technology
that dynamically manages and configures IPv6 addresses in a centralized
manner. It is designed to assign IPv6 addresses and other network
configuration parameters to hosts. DHCPv6 uses the client/server model: a
client requests configurations such as the IPv6 address and DNS server
address from the server, and the server replies with the requested
configurations based on policies.
In stateless address autoconfiguration (SLAAC), routers do not record the
IPv6 addresses of hosts, so stateless address autoconfiguration has poor
manageability. In addition, hosts configured through stateless address
autoconfiguration cannot obtain other configuration parameters such as the
DNS server address, and ISPs do not provide a mechanism for automatic
allocation of IPv6 prefixes to routers. Users therefore need to manually
configure IPv6 addresses for devices during IPv6 network deployment.
As a stateful protocol for configuring IPv6 addresses automatically, DHCPv6
solves this problem. During stateful address configuration, the DHCPv6 server
assigns a complete IPv6 address to a host and provides other configuration
parameters such as the DNS server address and domain name. A relay agent
may be used to forward DHCPv6 packets, however this lies outside the scope
of this material. The DHCPv6 server binds the IPv6 address to a client,
improving overall network manageability.
Clients and servers exchange DHCP messages using UDP. The client uses a
link-local address, determined through other mechanisms, for transmitting and
receiving DHCP messages. Clients listen for DHCP messages on UDP port
546, whilst servers (and relay agents) listen for DHCP messages on UDP port
547.

Prior to the allocation of addresses, it should be clearly understood that an IPv6 node
(client) is required to generate a link-local address and have it successfully evaluated by
the Duplicate Address Detection (DAD) process. Following this, a link router discovery
process is involved, in which the IPv6 client node multicasts a Router Solicitation
(RS) message, and the link router responds with a Router Advertisement (RA)
message after receiving the RS message.
Contained within the RA message are numerous fields carrying configuration
parameters for the local network. One field in particular, referred to as the Autoconfig
Flags field, is an 8-bit field that contains two specific bit values to determine the
autoconfiguration process for the local network. The managed address configuration
flag (M) is a 1-bit value that defines whether stateful address configuration
should be used, commonly applied in the presence of a DHCPv6 server on the local
network. Where the value is set to 1, stateful addressing should be implemented,
meaning the IPv6 client should obtain IPv6 addressing through stateful DHCPv6.
The other stateful configuration flag (O) represents the second flag bit value in the
Autoconfig Flags field, and defines whether other network configuration parameters
such as DNS and SNTP (for time management servers) should be determined through
stateful DHCPv6. RFC 2462 defines that where the M bit is true (a value of 1), the O bit
must also be implicitly true; in practice, however, the M bit and the O bit may be set
independently to support stateless addressing services in DHCPv6, in which an IPv6
address is not assigned but configuration parameters are.
It should also be noted that the managed address flag and the other configuration flag
are managed through VRP on the router, and are not set in the RA message by default. In
order to set these flags, the commands ipv6 nd autoconfig managed-address-flag and
ipv6 nd autoconfig other-flag should be configured on the gateway responsible for
generating RA messages.
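The M/O decision process described above can be sketched in Python. The bit positions follow the RA flags layout from the Neighbor Discovery specification (M is the high-order bit of the flags octet, O the next); the function name and return strings are illustrative only:

```python
# Interpret the M and O flags from the 8-bit Autoconfig Flags field of a
# Router Advertisement. M (0x80) selects stateful address configuration;
# O (0x40) selects stateless retrieval of other parameters via DHCPv6.
def autoconfig_mode(flags_octet: int) -> str:
    m = bool(flags_octet & 0x80)  # managed address configuration flag
    o = bool(flags_octet & 0x40)  # other stateful configuration flag
    if m:
        return "stateful DHCPv6"   # address and parameters from DHCPv6
    if o:
        return "stateless DHCPv6"  # SLAAC address, parameters from DHCPv6
    return "SLAAC only"            # no DHCPv6 involvement

print(autoconfig_mode(0xC0))  # M=1, O=1 -> stateful DHCPv6
print(autoconfig_mode(0x40))  # M=0, O=1 -> stateless DHCPv6
print(autoconfig_mode(0x00))  # -> SLAAC only
```

Note that the M=1 case returns the stateful mode regardless of O, reflecting the RFC 2462 behavior described above where a true M bit implies the O bit.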

Client nodes initiated on a network supporting stateful addressing may be
serviced by one or more DHCPv6 servers. The IPv6 client uses a link-local
address assigned to the interface for which it is requesting configuration
information as the source address in the header of the IPv6 datagram.
The multicast address FF02::1:2 is a reserved multicast address that
represents All_DHCP_Relay_Agents_and_Servers, and is used by a client
to communicate with neighboring servers. All servers (and relay agents) are
members of this multicast group. For any client sending a DHCP message to
the All_DHCP_Relay_Agents_and_Servers address, it is expected that the
client send the message through the interface for which configuration
information is being requested; however, exceptions may occur to this rule
where two interfaces on the client are associated with the same link, in which
case it is possible for the alternative interface to be used. In either case the
link-local address of the forwarding interface must be used as the source address.
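The addressing rules above can be summarized in a short Python sketch. The function and the returned structure are illustrative, not an on-wire implementation; the source port shown is the client's listening port described earlier:

```python
# Addressing used when a client transmits a DHCPv6 message: the
# destination is the All_DHCP_Relay_Agents_and_Servers multicast group
# on the server port, and the source is the link-local address of the
# sending interface, here paired with the client's listening port.
ALL_DHCP_RELAY_AGENTS_AND_SERVERS = "ff02::1:2"
CLIENT_PORT = 546  # clients listen here
SERVER_PORT = 547  # servers and relay agents listen here

def solicit_addressing(link_local_source: str) -> dict:
    return {
        "src": (link_local_source, CLIENT_PORT),
        "dst": (ALL_DHCP_RELAY_AGENTS_AND_SERVERS, SERVER_PORT),
    }

print(solicit_addressing("fe80::1"))
```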

Obtaining stateful addressing and other parameters from a DHCPv6 server
requires that a series of messages be sent. A client initially sends a Solicit
message to locate servers, from which addressing and configuration
parameters can be received.
Following the Solicit message, a DHCPv6 server supporting the link will
generate an Advertise message in response, indicating to the client the IPv6
address of the server providing the required DHCPv6 service. The client is
then capable of using this IPv6 address to reference the DHCPv6 server and
generate a Request message. Where multiple servers respond to the Solicit
message, the client will need to decide which DHCPv6 server should be used,
typically based on a server preference value defined by the DHCPv6
administrator on the server and carried in the Advertise message. Additionally
the server may carry options, including a server unicast option, which enables
the client to use the IPv6 address of the DHCPv6 server to transmit further
correspondence with this server as unicast messages.
The Request message is transmitted to the selected DHCPv6 server to request
configuration parameters and one or multiple IPv6 addresses to be assigned.
Finally the DHCPv6 server responds with a Reply message that contains the
confirmed addresses and network configuration parameters.
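The client's server-selection step can be sketched as follows. The message representation is purely illustrative (a list of dictionaries, not the on-wire format) and assumes each Advertise carries the administrator-defined preference value mentioned above:

```python
# A minimal sketch of server selection: among the Advertise messages
# received in response to a Solicit, the client chooses the server
# whose Advertise carries the highest preference value.
def select_server(advertisements):
    """advertisements: list of dicts with 'server' and 'preference' keys."""
    return max(advertisements, key=lambda adv: adv["preference"])["server"]

offers = [
    {"server": "2001:db8::1", "preference": 10},
    {"server": "2001:db8::2", "preference": 255},  # highest preference wins
    {"server": "2001:db8::3", "preference": 50},
]
print(select_server(offers))  # -> 2001:db8::2
```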

DHCP may also be employed to support stateless configuration in the event
where a host is capable of retrieving IPv6 addressing information through
stateless configuration means, and requires only specific configuration
parameters from the DHCPv6 server. In such events Information-request
messages are generated and sent by clients to a server to request
configuration parameters. The client is able to obtain configuration information
such as server addresses and domain information, as a list of available
configuration parameters, using only a single message and reply exchanged
with a DHCP server.
The Information-Request message is sent to the
All_DHCP_Relay_Agents_and_Servers multicast address following which
servers respond with a Reply message containing the configuration
information for the client. Since no dynamic state is being maintained (i.e. in
the form of IPv6 address assignment) the allocation of configuration
information is understood to be stateless.

A DHCP Unique Identifier (DUID) is a value that is used to distinguish
between each client and DHCP server, for which only one DUID is present in
each case. Clients may have one or multiple interfaces, each of which will be
assigned an IPv6 address along with other configuration parameters and is
referenced using an Identity Association Identifier (IAID). These are used
together with the DUID to allow DHCPv6 servers to reference a client and the
interface address/configuration parameters that should be assigned.
In the case of each client, the DUID will be used to identify a specific DHCP
server with which a client wishes to communicate. The length of the DUID
value can vary anywhere in the range of 96 bits (12 bytes) to 160 bits (20
bytes), depending on the format that is used. Three such formats exist, using
either the link-layer address (DUID-LL), an enterprise number together with a
vendor-assigned value defined at the point of device manufacture (DUID-EN),
or a combination of the link-layer address and a timestamp value (DUID-LLT)
generated at the point of DUID creation, in seconds from midnight, January 1st
2000 (GMT), modulo 2^32.
The initial 16-bit value represents the format used, where 00:01
denotes the DUID-LLT format, 00:02 the DUID-EN format, and 00:03 the
DUID-LL format. In the case of the DUID-LL and DUID-LLT formats, the 16
bits immediately after represent the hardware type based on IANA
hardware type parameter assignments, with 00:01 representing Ethernet
(10Mb) and 00:06 defining IEEE 802 network standards. A timestamp follows
in the DUID-LLT format and finally the link-layer address value. For DUID-LL
only the link-layer address follows.
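Assuming the field layout described above, a minimal Python parser for the DUID-LL and DUID-LLT formats might look like this. The sample MAC address is illustrative:

```python
# Parse the leading fields of a DUID from its raw bytes: a 16-bit type
# code (1 = LLT, 2 = EN, 3 = LL), then for the LL/LLT formats a 16-bit
# hardware type, for LLT a 32-bit timestamp, and finally the
# link-layer address itself.
import struct

def parse_duid(raw: bytes) -> dict:
    (duid_type,) = struct.unpack_from("!H", raw, 0)
    if duid_type == 1:  # DUID-LLT: type, hw type, time, link-layer address
        hw_type, timestamp = struct.unpack_from("!HI", raw, 2)
        return {"type": "DUID-LLT", "hw_type": hw_type,
                "time": timestamp, "ll_addr": raw[8:].hex()}
    if duid_type == 3:  # DUID-LL: type, hw type, link-layer address
        (hw_type,) = struct.unpack_from("!H", raw, 2)
        return {"type": "DUID-LL", "hw_type": hw_type,
                "ll_addr": raw[4:].hex()}
    return {"type": "DUID-EN"}  # enterprise number + vendor-assigned value

# DUID-LL over Ethernet: type 00:03, hardware type 00:01, then the MAC
duid = bytes.fromhex("0003" "0001" "00e0fc123456")
print(parse_duid(duid))  # {'type': 'DUID-LL', 'hw_type': 1, 'll_addr': '00e0fc123456'}
```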

The DUID format can be assigned through the dhcpv6 duid command, for
which either the DUID-LL or DUID-LLT format can be applied. The DUID-LL
format is applied by default. For the DUID-LLT, the timestamp value will
reference the time from the point at which the dhcpv6 duid llt command is
applied. The display dhcpv6 duid command can be used to verify the current
format based primarily on the length of the DUID value, as well as the DUID
value itself.
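The timestamp recorded for the DUID-LLT format, as described earlier, can be illustrated with a short calculation sketch:

```python
# The DUID-LLT timestamp is the number of seconds since midnight,
# January 1st 2000 (GMT), modulo 2^32, captured at the point of DUID
# creation (for VRP, when the dhcpv6 duid llt command is applied).
from datetime import datetime, timezone

DUID_EPOCH = datetime(2000, 1, 1, tzinfo=timezone.utc)

def duid_llt_time(now: datetime) -> int:
    return int((now - DUID_EPOCH).total_seconds()) % 2**32

# One full day after the epoch yields 86400 seconds
print(duid_llt_time(datetime(2000, 1, 2, tzinfo=timezone.utc)))  # 86400
```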

The implementation of stateful addressing requires that an address pool be
defined, with an address prefix defined for the given range, as well as
pool-specific parameters. The example demonstrates how a pool is created
through the definition of a pool and associated pool name, as well as the
address prefix from which the range of host addresses will be allocated.
Excluded addresses refer to addresses that fall within the pool range but
should not be allocated through DHCP, since they are commonly used for
other purposes such as the interface address of the DHCP server.
Additional configuration parameters will also be specified for a given pool with
examples such as server addresses and domain names being specified for
parameter allocation to DHCPv6 clients.
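The relationship between a pool's address prefix and its excluded addresses can be illustrated with Python's ipaddress module. The prefix and addresses below are example values, not part of any particular configuration:

```python
# An excluded address falls within the pool prefix but is skipped
# during allocation, e.g. because it is the DHCPv6 server's own
# interface address.
import ipaddress

pool_prefix = ipaddress.ip_network("2001:db8:1::/64")
excluded = {ipaddress.ip_address("2001:db8:1::1")}  # e.g. server interface

def allocatable(addr: str) -> bool:
    a = ipaddress.ip_address(addr)
    return a in pool_prefix and a not in excluded

print(allocatable("2001:db8:1::1"))    # False: excluded from allocation
print(allocatable("2001:db8:1::100"))  # True: in the prefix, not excluded
```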

A created DHCPv6 pool is required to be associated with an interface through
which it will service DHCP clients. An IPv6 address is assigned to the
interface of the DHCPv6 server and the interface is then associated with the
address pool. In this case the excluded address value has been used to
represent the interface address in order to ensure no attempts are made by
the DHCPv6 server to assign the interface address to a DHCP client.

The resulting configuration of the DHCPv6 server can be verified through the
display dhcpv6 pool command, from which point the defined pool(s) can be
identified and the address prefix associated with each pool determined.
Additional information such as the lifetime can be viewed, for which the default
lifetime of a leased address is 86400 seconds (1 day), and can be
reconfigured as necessary using the information-refresh command under the
dhcpv6 pool <pool-name> view. Where active clients have leased addresses
from the DHCP server, the related statistics can be found here.

1. The AR2200 series router is capable of supporting Link Layer (DUID-LL)
and Link Layer with Time (DUID-LLT) formats.
2. When a router advertisement is received containing M and O bit values
set to 1, the client will perform discovery of a DHCPv6 server for stateful
address configuration. The configuration will include an IPv6 address and
also other configuration information such as prefixes and addressing of
servers providing services such as DNS.

Thank you
www.huawei.com
