Training Program Overview
www.huawei.com
Preface
Huawei career certification framework:

Levels: Associate (HCNA), Professional (HCNP), and Expert (HCIE), built on BASIS and the Capability Library and IT Learning Platform, with vertical industries on top.
CT tracks: Routing & Switching, WLAN, Security, Wireless, Transmission, UC&C, VC.
IT tracks: Cloud, Storage & Server.
Career certifications (Datacom): HCNA-R&S, HCNA-WLAN, HCNA-Security; HCNP-R&S, HCNP-WLAN, HCNP-Security; HCIE-R&S, HCIE-WLAN, HCIE-Security.
Specialist certifications: Sales Specialist (HCS-Sales-IP); Pre-sales Specialist (HCS-Pre-sales-IP Network (Datacom), HCS-Pre-sales-IP Network (Security)); Solution Specialist (HCS-Solution-IP); Field Specialist (HCS-Field-R&S, HCS-Field-WLAN, HCS-Field-NMS).
Courses Overview
Courses are delivered in half-day sessions (AM/PM), each as a lecture or lab exercise.

Products Introduction (lectures):
- Huawei Box Switches Introduction
- Huawei Chassis Switches Introduction
- S12700 Agile Switches Introduction
- AR G3 Routers Product Introduction
- Huawei NE Series Routers Introduction
- Migration from IOS to VRP

Operation & Maintenance (lectures):
- Huawei Datacom Products Routine Maintenance
- Huawei Datacom Products Troubleshooting
- Huawei Datacom Products Software Upgrade
- S Series Chassis Switches ISSU Feature Introduction

Security Features (lectures and lab exercises):
- Huawei Datacom Products Security Features and Application (with lab guide)
- AR Firewall Features and Application (with lab guide)
- Huawei Datacom Products AAA Features and Application (with lab guide)

High Availability (lectures):
- Huawei Switches iStack Features and Application
- Huawei Switches CSS Features and Application

Management Features (lectures and lab exercises):
- Huawei Datacom Products NTP Features and Application
- Huawei Datacom Products NQA Features and Application
- Huawei Datacom Products LLDP Features and Application
- Huawei Datacom Products SNMP Features and Application
Changes from the old version (courses deleted, added, or updated):

Old version categories: Products Introduction, Operation & Maintenance, High Availability, Security Features, Management Features.

New version:

Products Introduction:
- Huawei Box Switches Introduction
- Huawei Chassis Switches Introduction
- S12700 Agile Switches Introduction
- AR G3 Routers Product Introduction
- Huawei NE Series Routers Introduction
- Migration from IOS to VRP

Operation & Maintenance:
- Huawei Datacom Products Routine Maintenance
- Huawei Datacom Products Troubleshooting
- Huawei Datacom Products Software Upgrade
- S Series Chassis Switches ISSU Feature Introduction

Security Features:
- Huawei Datacom Products Security Features and Application
- AR Firewall Features and Application
- Huawei Datacom Products AAA Features and Application

High Availability:
- Huawei Switches iStack Features and Application
Learning recommendations for each course: the multimedia contains not only the lectures but also practice demonstrations.
Learning suggestions
http://learning.huawei.com/en/
HedEx Lite
Q&A
learning@huawei.com
Thank you
S2700 series switches are Layer 2 switches; all downlink ports are 100M.
S3700 series switches are Layer 3 switches; all downlink ports are 100M.
S5700-LI series switches are Layer 2 switches, while S5700-SI/EI/HI series are Layer 3 switches; all S5700 series downlink ports are 1000M.
S6700 series switches are Layer 3 switches; all downlink ports are 10000M.
The S2700 and S3700 support software versions up to V1R6; the other series support later versions, currently V2R1.
A: Switch.
TP: The device has combo interfaces supporting both optical and electrical connections.
C: The device supports interface subcards; an interface subcard can provide two or four uplink interfaces.
The number of interfaces on an S3700 can be 26, 28, or 52, depending on the
device model.
NOTE:
G: Downlink interface type. The value 24S indicates that 24 downlink interfaces of the
S3700-52P-EI-24S are optical interfaces.
NOTE: If this letter is not displayed, all downlink interfaces are electrical interfaces.
H: Powering mode:
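The naming fields above can be illustrated with a small sketch. The field meanings come from the text; the parser itself (function name, dictionary layout, field positions) is a hypothetical illustration, not a Huawei tool.

```python
# Hypothetical sketch: decode the fields of a model name such as
# "S3700-52P-EI-24S", using the field meanings described above.
# The exact field layout assumed here is for illustration only.

def decode_model(model: str) -> dict:
    parts = model.split("-")
    info = {"series": parts[0]}  # e.g. "S3700": S = switch, 37 = sub-series
    # Port field, e.g. "52P" or "26TP": the digits give the interface count.
    info["ports"] = int("".join(ch for ch in parts[1] if ch.isdigit()))
    info["version"] = parts[2] if len(parts) > 2 else None  # e.g. "SI", "EI", "HI"
    # A trailing field such as "24S" means 24 downlink optical interfaces;
    # if it is absent, all downlink interfaces are electrical.
    info["optical_downlinks"] = 0
    for extra in parts[3:]:
        if extra.endswith("S") and extra[:-1].isdigit():
            info["optical_downlinks"] = int(extra[:-1])
    return info

print(decode_model("S3700-52P-EI-24S"))
```

For example, "S3700-52P-EI-24S" decodes to 52 interfaces, EI version, 24 optical downlinks.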
The S3700 is positioned at the access or aggregation layer of enterprise networks.
The S5700 is positioned at the access or aggregation layer of enterprise networks.
The Quidway S6700 series Ethernet switches (hereinafter referred to as the
S6700) provide the access, aggregation, and data transport functions. They are
developed by Huawei to meet the requirements for reliable access and high-quality
transmission of multiple services on the enterprise network and the data center
network.
SX7 series switches provide large capacity, high port density, and cost-effective forwarding performance. In addition, the SX7 switches provide multi-service access capabilities, excellent extensibility, quality of service (QoS) guarantee, powerful multicast replication, and carrier-class security, and can be used to build ring topologies of high reliability.
The S2700 Ethernet switches adopt an integrated hardware platform. An S2700 consists
of the chassis, power supply unit, fan, switch control unit (SCU), and interface subcard.
The width of an S2700 complies with industry standards, and the S2700 can be installed in
an IEC297 cabinet or an ETSI cabinet.
On the left of the S2726TP-EI panel is the power supply module; the middle part is the SCU. The SCU board has one console port, 24 10/100BASE-T Ethernet ports, and two gigabit combo ports (10/100/1000BASE-T + 100/1000BASE-X).
The dimensions of the S3700-28TP-EI-MC-AC, S3700-28TP-SI-AC, S3700-28TP-EI-AC, S3700-28TP-SI-DC, S3700-28TP-EI-DC, S3700-26C-HI, S3700-28TP-EI-24S-AC, S3700-52P-SI-AC, S3700-52P-EI-DC, and S3700-52P-EI-AC are 442.0 mm x 220.0 mm x 43.6 mm (width x depth x height).
The dimensions of the S5700-24TP-SI-AC, S5700-24TP-SI-DC, S5700-28C-HI, and S5700-28C-HI-24S are 442.0 mm x 220.0 mm x 43.6 mm (width x depth x height).
In the event of a mains power failure, the battery can power the switch, so services are not interrupted.
Compared with switches using external power supply units, the S5700-LI-BAT
occupies less space and is easier to install.
Battery LAN switches on the entire network can be managed centrally using a web
system, facilitating network operation and maintenance. As the battery lifetime is
predictable, you do not need to replace batteries periodically, reducing hardware
costs.
The S5700 and S5710 series Ethernet switches (hereinafter referred to as the S5700) provide
the access, aggregation, and data transport functions. They are developed by Huawei to
meet the requirements for reliable access and high-quality transmission of multiple services
on the enterprise network.
The S5710-108C-PWR-HI is Huawei's most powerful fixed switch. It is 2U high and can serve as the core of a small or medium enterprise network. It has three highlights: (1) a very compact design, with uplinks of up to 40G; (2) a full MPLS solution, so it can act as the branch core of an enterprise; (3) an integrated AC (access controller) that supports BYOD solutions.
Fully programmable, energy-efficient Gbit/s access switches for building high-density, agile Ethernet networks.
Innovative virtualization technology and specialized electronics greatly simplify
management of converged, wired and wireless networks, provide more granular
quality monitoring and error recovery, and enable rapid provisioning of new
services and network features.
The S5720-SI series are next-generation standard gigabit Layer 3 Ethernet switches
with high-performance hardware, providing high-density GE, and 10 GE uplink
interfaces. With extensive service features and IPv6 forwarding capabilities, the
S5720-SI can be used as access or aggregation switches on campus networks or
access switches in data centers. Integrating many advanced reliability, security, and energy-saving technologies, they are simple and convenient to install and maintain, reducing customers' OAM costs.
The chassis is 1 U (1 U = 44.45 mm) high, and its dimensions are 442.0 mm x 420.0 mm x 43.6 mm (width x depth x height).
The industry's highest-performing fixed switches, the S6720 series provides 24/48
full line-speed 10 GE ports, which are scalable to 6 x QSFP+ full line-speed ports.
The S6720 supports long-distance stacking with up to 480 Gbit/s bidirectional
stack bandwidth. It also supports 1+1 backup of AC and DC power modules that
can be installed on the same device.
These switches offer various service features, support comprehensive security policies and QoS capabilities, and are best suited for data center servers and the core campus network.
Functions
The G2S provides two 1000M SFP optical interfaces to implement data access and
line-speed switching.
The S3700HI SCU powers on or off the G2S, detects whether the G2S is installed
or not, and manages PHY chips and optical interfaces on the G2S. The G2S works
with the entire system to provide enhanced service features such as OAM and BFD.
Applications
The G2S can be inserted into the front subcard slot of the S3700HI and is hot
swappable.
The E2XX can be inserted into the front subcard slot of the S5700-28C-EI, S5700-52C-EI, S5700-28C-EI-24S, S5700-28C-SI, S5700-52C-SI, S5700-28C-PWR-EI, or S5700-52C-PWR-EI. The E2XX is not hot swappable.
The E2XY can be inserted into the front subcard slot of the S5700-28C-EI, S5700-52C-EI, S5700-28C-EI-24S, S5700-28C-SI, S5700-52C-SI, S5700-28C-PWR-EI, or S5700-52C-PWR-EI. The E2XY is not hot swappable.
The E4XY can be inserted into the front subcard slot of the S5700-28C-SI, S5700-52C-SI, S5700-28C-EI, S5700-52C-EI, S5700-28C-PWR-EI, S5700-52C-PWR-EI, or S5700-28C-EI-24S. The E4XY is not hot swappable.
The E4GF can be inserted into the front subcard slot of the S5700-28C-EI, S5700-52C-EI, S5700-28C-EI-24S, S5700-28C-PWR-EI, or S5700-52C-PWR-EI.
The E4GFA can be inserted into the front subcard slot of the S5700-28C-SI or S5700-52C-SI.
These three subcards can be installed in S5710 series switches and support hot swap.
In the intelligent mode, the fans start to operate only when the ambient temperature goes
higher than a specified value.
Only S5700-28C-HI and S5700-28C-HI-24S support the 170 W DC power supply unit.
PoE provides power for terminals such as IP phones, access points (APs), portable device
chargers, point-of-sale (POS) machines, cameras, and data collectors. These terminals are
powered when they access the network, so the indoor power supply systems are not
required. Complying with IEEE 802.3af and IEEE 802.3at, the PoE SX7 is able to remotely
provide power for the devices of different vendors. IEEE 802.3af supports a maximum of
15.4 W power and IEEE 802.3at supports a maximum of 30 W power.
The PoE function transmits power together with data to terminals over cables or transmits
power without data over idle lines. The SX7 can transmit power together with data at a
rate of up to 1000 Mbit/s.
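The per-port power limits above can be turned into a quick budgeting check. The numbers (15.4 W for IEEE 802.3af, 30 W for IEEE 802.3at) come from the text; the helper function itself is an illustrative sketch, not a device API.

```python
# Pick the minimum PoE standard able to power a device (PD), using the
# per-port maximums stated above: IEEE 802.3af = 15.4 W, IEEE 802.3at = 30 W.
# (Illustrative helper; the function name is an assumption.)

POE_LIMITS_W = {"IEEE 802.3af": 15.4, "IEEE 802.3at": 30.0}

def required_standard(pd_power_w: float) -> str:
    # Try standards in ascending power order and return the first that fits.
    for standard, limit in sorted(POE_LIMITS_W.items(), key=lambda kv: kv[1]):
        if pd_power_w <= limit:
            return standard
    raise ValueError(f"{pd_power_w} W exceeds the PoE standards listed here")

print(required_standard(12.95))  # IEEE 802.3af
print(required_standard(25.5))   # IEEE 802.3at
```

A device drawing 12.95 W fits within 802.3af; one drawing 25.5 W needs 802.3at.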
When the 500 W PoE power module is used in S6700 series switches, the power module can only provide power for the device itself; it cannot provide the PoE function.
Functions
The RPS1800 is a redundant power supply unit that takes over when the internal power supply unit of a switch fails. It:
Eliminates the power-off risk when the internal power supply unit of a switch fails.
Provides power backup for a maximum of six network devices, and can take over power supply for one device when its internal power supply unit fails.
Stops supplying power when the faulty internal power supply of the switch recovers.
Provides 800 W of PoE power for two switches concurrently after software configuration.
Applications
All the member switches belong to the same series. The EI series and SI series
cannot form a stack.
All the member switches are connected by using stack cables and stack modules.
The stack rear subcard cannot be used together with the E4GF/E4GFA or E4XY
front subcard.
The ETPC can be inserted into the S5700-24TP-SI-AC, S5700-24TP-SI-DC, S5700-48TP-SI-AC, S5700-48TP-SI-DC, S5700-28C-EI, S5700-52C-EI, S5700-28C-EI-24S, S5700-28C-SI, S5700-52C-SI, S5700-24TP-PWR-SI, S5700-48TP-PWR-SI, S5700-28C-PWR-EI, or S5700-52C-PWR-EI for stacking.
The ETPB can be inserted into the S5700-28C-EI, S5700-52C-EI, S5700-28C-EI-24S, S5700-28C-SI, S5700-52C-SI, S5700-28C-PWR-EI, or S5700-52C-PWR-EI for interface extension.
The ETPB extended rear subcard must be used together with the E4GF/E4GFA or E4XY front subcard to provide four SFP GE interfaces or SFP+ 10GE interfaces.
If the E4XY front subcard is used, only ports 1 and 3 are available.
If the E4GFA or E4GF front subcard is used, only ports 1 and 2 are available.
The S2700/S3700/S5700/S6700 integrate an HTTP server. When the HTTP server is enabled, the switch can be accessed through various web browsers.
Switches can be configured with PoE (Power over Ethernet) power supplies of different power levels (250 W and 500 W), so that remote PDs (powered devices, e.g. IP phones, WLAN APs, security devices, Bluetooth APs) are provided with -48 V DC power over the twisted pair.
Switches that use service interfaces and switches that use stack cards cannot be combined
in a single stack. Switches from different series cannot be combined in a single stack.
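The stacking constraints just described (same series only, and no mixing of service-port stacking with stack-card stacking) can be sketched as a simple validity check. The series and method names used here are illustrative assumptions, not Huawei identifiers.

```python
# Sketch of the stack-membership rules above: all members must be from the
# same series (e.g. EI and SI cannot mix), and all must use the same stacking
# method (service ports vs. stack cards). Illustrative model only.

def can_stack(switches: list) -> bool:
    series = {sw["series"] for sw in switches}         # e.g. "S5700-EI"
    methods = {sw["stack_method"] for sw in switches}  # "service-port" or "stack-card"
    return len(series) == 1 and len(methods) == 1

members = [
    {"series": "S5700-EI", "stack_method": "stack-card"},
    {"series": "S5700-SI", "stack_method": "stack-card"},
]
print(can_stack(members))  # False: EI and SI cannot form a stack
```

Changing both members to the same series would make the check pass.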
Large capacity: Large-capacity S7700/S6700/S5700 switches ensure a rate of 1000M on each floor.
High availability: VRRP runs on the two core switches to provide backup. Dual links converge services from each floor to the core switches.
High security: Terminals must pass RADIUS authentication before accessing the network. A Eudemon firewall is deployed at the edge of the server zone to secure the servers.
S: Switch
37: Sub-Series
The S-Series chassis switch provides powerful multicast functions, comprehensive QoS
guarantee, effective security management mechanisms, and carrier-class high reliability to
meet the requirements of high-end customers for multi-service support, high reliability,
large capacity, and modular design. With the switch, costs in network construction and
maintenance are lowered. Additionally, the S-Series chassis switch can be used as a core
switch or an aggregation switch for IP MANs and large campus networks.
The S-Series Chassis Switch is a next-generation high-end switch of Huawei. The switch
features large capacity, wire-speed forwarding, and high density and it is a future-proof
switch for MANs. The switch can be used as an aggregation switch or a core switch for
enterprise networks, campus networks, and data centers.
The S-Series Chassis Switch uses a fully distributed architecture and the latest
hardware forwarding engine technology. The services supported by all the ports
can be forwarded at wire speed, including IPv4 services, MPLS services, and Layer
2 forwarding services. The switch can use ACLs to forward packets at wire speed.
The hardware implements 2-level multicast replication: The SFU replicates multicast
packets to the LPU. Then the forwarding engine of the LPU replicates the multicast
packets to the ports on the LPU.
The S7700/S9300 switch supports 2 Tbit/s switching capacity and various high-density boards to meet the large-capacity, high-density port requirements of core and aggregation layer devices. The switch can meet increasing bandwidth requirements while minimizing investment.
Adopting the full mesh architecture, the SXX03 provides 16 Gbit/s bandwidth in
each HIG group, that is, 4 x 5 Gbit/s x 8/10 (8B/10B code). The channel between
each slot and the backplane supports eight HIG groups; therefore, the total
bandwidth for each slot is 128 Gbit/s.
There is no switching fabric unit in the full mesh architecture.
Adopting the switching fabric architecture, the switch provides 16 Gbit/s bandwidth
in each HIG group, that is, 4 x 5 Gbit/s x 8/10 (8B/10B code). The channel between
each slot and the switching fabric unit supports four HIG groups (an active SRU and
a standby SRU); therefore, the total bandwidth for each slot is 64 Gbit/s. Each
12x10G LPU slot supports eight HIG groups; therefore, the total bandwidth is 128
Gbit/s. (Only two 12x10G LPUs of the S7712/S9312 support wire-speed
forwarding.)
The maximum switching capability of the switch is 2048 Gbit/s, that is, 16 Gbit/s x
16 (ports) x 1 (switching fabric unit) x 2 (bidirectional) x 4 (SRUs).
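The bandwidth figures above can be checked term by term; the arithmetic below simply reproduces the factors stated in the text.

```python
# Worked check of the bandwidth figures above.

# One HIG group: 4 lanes x 5 Gbit/s, with 8B/10B coding efficiency 8/10.
hig_group = 4 * 5 * 8 / 10           # 16 Gbit/s

# Full mesh (SXX03): 8 HIG groups per slot.
full_mesh_slot = 8 * hig_group       # 128 Gbit/s

# Switching fabric architecture: 4 HIG groups per ordinary slot
# (an active SRU plus a standby SRU).
fabric_slot = 4 * hig_group          # 64 Gbit/s

# Maximum switching capability:
# 16 Gbit/s x 16 ports x 1 switching fabric unit x 2 directions x 4 SRUs.
max_capacity = 16 * 16 * 1 * 2 * 4   # 2048 Gbit/s

print(hig_group, full_mesh_slot, fabric_slot, max_capacity)
```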
The S9700 series switch is designed for an integrated multi-service network architecture. It is a high-end intelligent terabit routing switch.
The S9700 provides 16x10GE inter-board wire-speed switching and will support 40GE/100GE standards in the future.
The S7700 and S9300 have the same power supply system. The output capability of a single AC power module is 800 W (220 V input), so 2+2 hot standby can provide a maximum of 1600 W. The output power is 400 W when the input voltage is 110 V.
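The AC power budget above (800 W per module at 220 V, 400 W at 110 V, with 1+1 or 2+2 redundancy between Area A and Area B) can be sketched as follows; the function and its names are illustrative assumptions.

```python
# Sketch of the AC power budget described above: a module outputs 800 W at
# 220 V input but only 400 W at 110 V; modules are installed in 1+1 or 2+2
# redundant configurations (one area backs up the other), so only one area's
# modules count toward the usable load.

MODULE_OUTPUT_W = {220: 800, 110: 400}

def usable_power(input_volt: int, config: str) -> int:
    per_module = MODULE_OUTPUT_W[input_volt]
    active_modules = {"1+1": 1, "2+2": 2}[config]
    return per_module * active_modules

print(usable_power(220, "1+1"))  # 800 W
print(usable_power(220, "2+2"))  # 1600 W
print(usable_power(110, "1+1"))  # 400 W
```

This matches the text: 2+2 at 220 V gives the 1600 W maximum, while 1+1 at 110 V provides only 400 W.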
The fan trays, AC power supplies, DC power supplies, LPUs, cables, and cabinet handles can be used in all models of the switch. The handles can be removed from the cabinet.
The SXX12 and the SXX06 share monitoring boards and control boards of the same type.
Highly reliable fan tray mode. Each fan tray supports two overlapping fans.
PWR1 and PWR2 belong to Area A. PWR3 and PWR4 belong to Area B. Area A and Area B
work in backup mode. PWR1 and PWR2 work in load balancing mode, and PWR3 and
PWR4 also work in load balancing mode.
The SXX12 supports the DC power supply and the AC power supply.
When the DC power supply is used, the S7712/S9312 supports 1+1 DC power
supplies, achieving a maximum power of 1600 W. Area A and Area B each is
configured with a DC power module. The filler panels are installed in other slots of Area A and Area B. For example:
DC power modules are installed on PWR1 and PWR3, and filler panels are
installed on PWR2 and PWR4.
DC power modules are installed on PWR1 and PWR4, and filler panels are
installed on PWR2 and PWR3.
DC power modules are installed on PWR2 and PWR3, and filler panels are
installed on PWR1 and PWR4.
DC power modules are installed on PWR2 and PWR4, and filler panels are
installed on PWR1 and PWR3.
When the AC power supply is used, the S7712/S9312 supports 1+1 or 2+2 AC power
supplies.
When 1+1 220 V AC is selected, the maximum power of the S7712/S9312 is 800
W. This 1+1 configuration can be used if the S7712/S9312 has a few LPUs and the
total power consumption of the device is less than 800 W. Area A and Area B each
is configured with an AC power module. The filler panels are installed in other slots
of Area A and Area B. For example:
AC power modules are installed on PWR1 and PWR3, and filler panels are
installed on PWR2 and PWR4.
AC power modules are installed on PWR1 and PWR4, and filler panels are
installed on PWR2 and PWR3.
AC power modules are installed on PWR2 and PWR3, and filler panels are
installed on PWR1 and PWR4.
AC power modules are installed on PWR2 and PWR4, and filler panels are
installed on PWR1 and PWR3.
When 2+2 220V AC is selected, the maximum power of the S7712 is 1600 W. Four
AC power supplies are configured in Area A and area B. PWR1 to PWR4 slots are all
equipped with AC power modules.
The S7712/S9312 does not support the configuration of 110 V AC power supplies.
The S9712 does not support PoE, and its PoE power area is reserved. The S7712/S9312 support PoE LPUs; PoE power modules are installed in this area.
Highly reliable fan tray mode. Each fan tray supports two overlapping fans.
The SXX06 supports the DC power supply and the AC power supply.
When the DC power supply is used, the S7706/S9306 supports 1+1 DC power
supplies, achieving a maximum power of 1600 W. Area A and Area B each is
configured with a DC power module. The filler panels are installed in other slots of
Area A and Area B. For example:
DC power modules are installed on PWR1 and PWR3, and filler panels are
installed on PWR2 and PWR4.
DC power modules are installed on PWR1 and PWR4, and filler panels are
installed on PWR2 and PWR3.
DC power modules are installed on PWR2 and PWR3, and filler panels are
installed on PWR1 and PWR4.
DC power modules are installed on PWR2 and PWR4, and filler panels are
installed on PWR1 and PWR3.
When the AC power supply is used, the S7706/S9306 supports 1+1 or 2+2 AC
power supplies.
When 1+1 220 V AC is selected, the maximum power of the S7706/S9306 is 800
W. Area A and Area B each is configured with an AC power module. The filler
panels are installed in other slots of Area A and Area B. For example:
AC power modules are installed on PWR1 and PWR3, and filler panels are installed on PWR2 and PWR4.
AC power modules are installed on PWR1 and PWR4, and filler panels are installed on PWR2 and PWR3.
AC power modules are installed on PWR2 and PWR3, and filler panels are
installed on PWR1 and PWR4.
AC power modules are installed on PWR2 and PWR4, and filler panels are
installed on PWR1 and PWR3.
When 2+2 110 V AC is selected: at 110 V input, each AC power module provides only 400 W, so AC power modules in 1+1 mode cannot meet the power requirements. To meet the power requirements, the S7706/S9306 must use 2+2 mode for 110 V AC power supplies. PWR1 to PWR4 slots are all equipped with AC power modules.
The S9706 does not support PoE, and its PoE power area is reserved. The S7706/S9306 support PoE LPUs; PoE power modules are installed in this area.
Highly reliable fan tray mode. Each fan tray supports two overlapping fans.
The SXX03 supports the DC power supply and the AC power supply.
When the DC power supply is used, the S7703/S9303 supports 1+1 DC power
supplies, achieving a maximum power of 1600 W. Area A and Area B each is
configured with a DC power module.
When the AC power supply is used, the S7703/S9303 supports 1+1 or 2+2 AC
power supplies.
When 1+1 110 V AC is selected, the S7703/S9303 provides 400 W of power. The total power consumption of the S7703 is less than 400 W; therefore, AC power modules can be configured in 1+1 mode when the input voltage is 110 V.
The S9703 does not support PoE, and its PoE power area is reserved. The S7703/S9303 support PoE LPUs; PoE power modules are installed in this area.
The SXX06 and the SXX12 use the same system architecture. The hardware structure is
divided into three planes: data plane, control plane, and management plane. The three
planes are separated from each other. The data plane is exclusively used for transmitting
service data. The data plane is physically divided into several star HIG planes. The control
plane is exclusively used for transmitting protocol data, such as the data of routing
protocols. The control plane transmits data by way of dedicated channels. The
management plane is exclusively used for managing devices. The management plane uses
the CANBUS and independent power supply system.
At the core of the SXX06 and the SXX12 is the SRUA. The SRUA is responsible for data
forwarding and control protocol processing. The SRUAs work in hot standby mode. In
addition, the switching fabric unit supports load balancing mode.
The CMU is the centralized management unit. It monitors and manages the fans, power
supplies, external trunk nodes, and POE power supplies. The CMUs support hot standby.
The SXX03 uses the full mesh structure. The hardware structure is divided into three
planes: data plane, control plane, and management plane. The three planes are separated
from each other. The data plane is exclusively used for transmitting service data. The data
plane uses the mesh topology and is physically divided into several HIG planes. The control
plane is exclusively used for transmitting protocol data, such as the data of routing
protocols. The control plane transmits data by way of dedicated outband channels. The
management plane is exclusively used for managing devices. The management plane uses
the CANBUS and independent power supply system.
The SXX03 is integrated with an equipment management module. The module monitors
and manages the fans, power supplies, external trunk nodes, and POE power supplies.
The SRU integrates the control and switching functions and provides the control plane,
management plane, and switching plane for the system.
Control module: Functions as the control and management plane for the SRUA and
the entire system to implement protocol processing, route calculation, forwarding
control, system management, and system security.
Power supply module: Provides power supplies for the SRUA, Flexible Service Unit
(FSU), and clock pinch board.
SRUD includes the control plane, management plane, and switching plane.
Usage Scenario
The FSU is an optional subcard on the SRU of the SXX12 and SXX06, and can be
removed and installed flexibly.
Users can choose to install the FSU based on service requirements, which improves
Equipment management module: Sends port control signals for equipment management.
Panel port module: Manages the external PoE power supply and passive devices.
Backplane port module: Sends all port signals on the backplane to control the power
supply, fans, and active/standby state of SRU.
Usage Scenario
The CMU applies to the SXX12 or SXX06. There are two CMU slots on the subrack, one
for the active CMU and the other for the standby CMU. You can configure one or two
CMUs as required.
The switch uses the independent monitoring unit and adopts the CBUS as the monitoring
channel. The CBUS uses the integrated ASIC to complete a variety of tasks, including
detecting the board temperature, checking clocks, monitoring and controlling voltage,
controlling power-on and power-off of boards, and supporting JTAG loading. The switch
provides intelligent power management and fan speed adjustment. On the SXX03, the
CMU is integrated on the MCUA and no independent slot is provided. The functions of the
CMU of the SXX03 are the same as those of the CMUs of the SXX12 and SXX06.
The CBUS is a device management bus and also an outband management platform, which
is separated from services. If the CBUS is powered off or reset, services are not interrupted
but certain system functions may be affected, for example, board power and temperature
monitoring. If a board is faulty, the CBUS is not affected.
Runs routing protocols. All routing packets are sent by the forwarding engine to
the MCU for processing. The MCU processes routing packets, updates routing
entries, and delivers the forwarding table to the forwarding engine. The MCU also
broadcasts and filters routing packets, and downloads routing policies from the
policy server.
Runs the signaling protocol. In MPLS application, the MCU runs the MPLS signaling
protocol to process call requests, perform admission control, and establish and
maintain label switched paths (LSPs).
Monitors system operation. The MCU collects operation data of different units
periodically. Based on the running status of the units, the MCU generates the
control information. The control information helps check availability of boards,
control the running status of the switching fabric, perform port switching, reset the
forwarding engine, and increase fan speeds.
Implements data configuration. The MCU stores the configuration data, startup file,
accounting data, upgrade software, and running logs of the SXX03. The MCU
provides a CF card to store data files.
Works in 1+1 backup mode to improve reliability. The active and standby MCUs
monitor the status of each other. When the active MCU fails, the standby MCU
takes over all services of the active MCU to ensure the normal running of the
system.
Ensuring security of the CPU on the SRU and limiting the rate of packets sent to
the CPU
Each VSTSA provides four 16G electrical interfaces to implement data access and line-speed switching. The switches connected through the interfaces on the VSTSA belong to one switching domain and are considered a single device. Users can manage all the switches in a stack from the master switch.
Note that an LPU type may be available for the S9700/S9300/S7700, but a specific board is suitable only for certain chassis.
For example, the S9700, S9300, and S7700 all support a 48-port GE electrical LPU, but the board used in the S9700 cannot be used in the S9300 or S7700.
The G48VA can be used only in the S7700/S9300, and must be installed in a chassis that supports PoE.
The WMNPA provides two subcard slots. Subcards can be inserted into one WMNPA.
The switching and routing engines, power modules, CMUs, and fan modules of the S-Series Chassis Switch are all in 1+1 or N+N redundancy backup mode. They are all hot-swappable.
The fan tray of the S-Series Chassis Switch supports double layers of fans for redundancy
backup, thereby improving the system reliability.
The DLDP protocol detects the link status of fibers or twisted pairs. If a unidirectional fault occurs, DLDP automatically shuts down the related port or requests the administrator to shut down the port.
DLDP can be associated with RRPP or SmartLink to carry out protection switchover.
Fault management
Performance management
Performance management is used to measure the packet loss ratio, delay, and jitter during
the transmission of packets. It also collects statistics on various types of packets.
Performance management is usually implemented at the user access points. By using
performance management tools, an ISP can monitor the network running status and
locate faults through an NMS. The ISP can then check whether the forwarding capacity of
the network complies with the Service Level Agreement (SLA) signed with users.
The S-Series Chassis Switch provides 32K OAM sessions that are bound to the specific
port, user, and service, ensuring fast fault detection and location.
The S-Series Chassis Switch uses an advanced heat dissipation architecture. The left-to-rear ventilation path, working in tandem with efficient heat dissipation management and cabling management, increases the number of usable lines and reduces the cabinet depth by 200 mm without affecting the system's heat dissipation. The S-Series Chassis Switch reduces the required equipment room space by up to 25% compared with traditional switches.
The S-Series Chassis Switch is designed with multiple innovative and patented heat dissipation technologies, such as monitoring of key components, zone-based fan control, intelligent fuzzy fan speed adjustment, and a left-to-rear ventilation path. These unique technologies improve heat dissipation efficiency and enable the S-Series Chassis Switch to work at 45°C for long periods.
It is estimated that the 45°C working temperature reduces the energy consumption for air conditioning by up to 39% compared with a 24°C working temperature.
Using the fuzzy fan speed adjustment algorithm, the S-Series Chassis Switch reduces noise
by 5 dB and decreases energy consumption by 4% compared with similar products.
The rotation speeds of the fans in each fan zone can be adjusted based on the LPU load.
The adjustment can maximally reduce electricity consumption.
If there is no LPU in a fan zone, the rotation speeds of the fans in the zone are reduced to
the minimum value (now the minimum value can be less than 2 W). This reduces power
consumption and noise and extends the service life of fans.
The S-Series Chassis Switch supports a wide range of LPU power, from 40 W to 250 W,
and possibly 350 W in the future. The zone-based fan speed adjustment can increase the
rotation speeds of the high-power LPUs while maintaining the low rotation speeds of
other LPUs. This approach reduces energy consumption and noise.
In traditional switches, all components are in full working state even when service traffic is
low, resulting in much waste of energy. In the S-Series Chassis Switch, energy
consumption is dynamically managed based on service traffic changes. This approach
reduces energy consumption by up to 8%.
The S-Series Chassis Switch can function as a core switch in enterprise campus networks.
The S-Series Chassis Switch provides powerful switching capabilities and supports multiple Layer 3 protocols. It also features easy service rollout and simplified management.
The S-Series Chassis Switch provides high-density 10GE ports and a large number of GE ports for aggregation and access. The S-Series Chassis Switch, firewalls, and service load-balancing servers combine to create a powerful Internet data center (IDC).
What are the S-Series Chassis Switch slot and board types?
The S-Series Chassis Switch comes in three series: S7700, S9300, and S9700. Each
series has three models: SXX03, SXX06, and SXX12.
The S-Series Chassis Switch provides a broad range of boards, such as SRU, MCU,
CMU, and LPU. For details about the boards, see the slides therein.
The S-Series Chassis Switch ensures high reliability from both hardware and
software. As for hardware, the S-Series Chassis Switch supports redundancy
backup for all major components, such as control boards, power modules, and fan
modules. In terms of software, the switch provides a rich set of features, such as
SSO, separation between the control plane and the forwarding plane, and
hardware Ethernet OAM.
Why is an agile network required? Let's look at the history of IP networks. A network is used
to transmit information, and service and application development drives network
development.
Initial phase: Typical services were mainly text services such as email and Telnet. Networks
were mainly used by specialists. Requirements for network bandwidth and
real-time performance were low; the networks only needed to provide basic connectivity. Typical
network devices included Ethernet hubs and software-forwarding routers.
Popularity phase: Starting from 1995, the Netscape browser drove Internet popularity.
Internet users increased sharply, services diversified, text services became web services,
download services became popular, and network traffic increased rapidly. Network
scale, bandwidth, and performance were the major problems in this phase. Layer 2/3
switches and hardware-forwarding routers came into use.
Multi-service phase: Starting from 2000, IP services became the major concern. The IP network
was required to transmit voice, video, and leased line services. Many new network
technologies came into use, including MPLS/TE, QoS, BFD, fast switching, and NSF/NSR/ISSU.
Multi-service transmission had to be ensured, and there were requirements for the network
quality of real-time services, reliability, and service isolation.
In recent years, new services have been emerging continuously. A large number of new services and
2 Resource cloud: Resources including computing and storage resources are integrated into
the cloud to improve resource use efficiency. Resource cloud greatly improves IT efficiency,
and reduces costs including investment costs and O&M costs. Computing and storage
resources can be dynamically allocated. Network resources need to be dynamic to meet fast
deployment requirements in the cloud era.
3 Real-time network performance: Social media includes many real-time services. Voice, video, and
future cloud desktop services are migrating to networks. In addition to data service
transmission, the network needs to consider the impact of real-time and interactive services.
4 Fast-changing network requirements and rapid increase of traffic, functions, and nodes:
Network scalability is required to meet fast-changing network requirements.
1 Network resources are statically configured, and cannot respond to dynamic services, for
example, dynamic allocation of computing resources, VM migration, and mobile terminal
position change.
2 The IP network is unaware of service experience. Smooth service experience cannot be
ensured.
3 The fault location efficiency is low. When faults occur and cannot be rectified automatically,
faulty points cannot be accurately located in a short time.
4 Borderless security problems involve egress security and security at any position and on any
device. In particular, as mobility develops, there may be attack points elsewhere.
5 The response is slow. Functions on network devices, especially Ethernet switches, are
hardware-defined. New components are required to support new functions. The response time
cannot meet requirements of software-defined service development.
The S12700 comes in three models: S12704, S12708, and S12712.
The ET1D2MPUA000 is the main control unit for the S12708 and S12712. It provides the
control plane and management plane for the entire system and reserves a slot for a clock
daughter card.
Control module: functions as the control and management plane for the
ET1D2MPUA000 and the entire system, implementing protocol processing, route
calculation, forwarding control, system management, and system security.
Local clock module: provides the working clock for the chips of the control module,
and device management and monitoring module on the ET1D2MPUA000.
S12700 fully configured with SFUAs provides a maximum of 160 Gbit/s slot
bandwidth
S12700 fully configured with SFUCs provides a maximum of 480 Gbit/s slot
bandwidth
S12700 fully configured with SFUDs provides a maximum of 640 Gbit/s slot
bandwidth
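The per-slot bandwidth above depends on which switch fabric units are installed. A minimal lookup sketch (the values come from the text above; the table and function names are my own):

```python
# Maximum per-slot bandwidth (Gbit/s) of a fully configured S12700,
# keyed by switch fabric unit type; values taken from the text above.
SLOT_BANDWIDTH_GBPS = {"SFUA": 160, "SFUC": 480, "SFUD": 640}

def slot_bandwidth(sfu_type: str) -> int:
    """Return the maximum per-slot bandwidth for the given SFU type."""
    return SLOT_BANDWIDTH_GBPS[sfu_type]

print(slot_bandwidth("SFUD"))  # -> 640
```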
The ET1D2SFU provides the data plane for the entire system. The data plane provides
high-speed and non-blocking data channels for service switching between service
modules.
Device management module: sends interface control signals for device management.
Note: 2,200W DC and 2,200W AC power supplies cannot be used together in a chassis.
Wireless networks deployed on top of wired networks: the two networks cost a lot, require
two management platforms, and double the workload
Higher QoS requirements with the increase of popular cloud desktop, video
conferencing, and VoIP services
Unified authentication and management for wired and wireless users; user policies
delivered to access devices automatically, with no manual configuration
Wired and wireless convergence: a converged network for wired and wireless access; simple
management and fast forwarding for thousands of APs
Fine-grained user management: unified authentication and effective policy control
for various terminals in different areas
Advanced network technology: network devices must be programmable, support
SDN, adapt to next-generation development
Increasing requirements for high bandwidth, high reliability, large routing table,
large ARP table, and intelligent O&M in campus network
Schools connect to the education network through VPNs, sharing teaching
information
Wired and wireless networks must be converged to provide wireless coverage in
every classroom
Data center: S12700 used as small enterprise's data center; reliability is critical
Huawei AR G3 series enterprise routers (AR G3) are next-generation routers dedicated to
enterprise customers. The AR G3 all-in-one router series integrates multiple services,
including routing, switching, 3G, WLAN, voice, and security functions, in one device.
These features combine to deliver industry-leading performance and extensibility, meeting
customer requirements for a robust, reliable, and flexible solution
for enterprise-grade network deployments. Thanks to strict adherence to industry standards,
the AR G3 router series is easily integrated into existing networks, accelerating multi-service network deployment while preserving existing network infrastructure investments.
ARs are located between an internal network and a public network. The deployment of
various network services over ARs reduces costs in enterprise network construction and
long-term operation & maintenance (O&M).
AR routers use a multi-core CPU and a non-blocking switching architecture to provide industry-leading system performance.
Models whose names contain V support voice, models with W support Wi-Fi, and models
with G support 3G uplink. The AR2200 and AR3200 series
support the voice function only when equipped with the DSP module.
To provide voice services for POTS users on AR1200, AR2200, and AR3200 series routers,
a 4FXS/1FXO board is required.
To provide voice services for ISDN users on AR1200, AR2200, and AR3200 series routers,
a 2BST board is required.
The AR150 and AR200 share the same simple logical architecture, which consists of a CPU
and an LSW (switching module).
The CPU is responsible for complex calculations; it is directly connected to the WAN
interface, and to the LSW through a GE bus.
The LSW is responsible for forwarding Layer 2 and Layer 3 Ethernet traffic.
In V2R1C00, the SRU40 is installed only on the AR2240, and the SRU80 is installed only on
the AR3260.
In V2R1C01 and later versions, the SRU40 and SRU80 can be installed on both the AR2240
and the AR3260.
In V2R3C00 and later versions, the SRU40, SRU60, and SRU80 can be installed on both the
AR2240 and the AR3260.
The SRU40, SRU60, and SRU80 panels are identical except for different silkscreen markings.
An SRU must be installed in the AR2240 and AR3260. You can install one SRU, or two
SRUs for redundancy.
Two SIC slots can be combined into one WSIC slot by removing the guide rail.
The two SIC slots and the WSIC slot below them can be combined into one XSIC slot by
removing the guide rail.
Two XSIC slots can be combined into one EXSIC slot by removing the guide rail.
Slots can be combined into one, but one slot cannot be divided into multiple slots.
After two slots are combined into one, the new slot takes the larger of the two original
slot IDs.
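The slot-combination rule above can be sketched as a tiny helper (the function name is my own; the rule itself comes from the text):

```python
def merged_slot_id(slot_a: int, slot_b: int) -> int:
    """Apply the combination rule from the text: the merged slot
    takes the larger of the two original slot IDs."""
    return max(slot_a, slot_b)

# Example: combining SIC slots 1 and 2 into one WSIC slot yields slot ID 2.
print(merged_slot_id(1, 2))  # -> 2
```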
In V200R002C00, a WSIC card can be inserted into an XSIC slot with a special component.
The WSIC card is in the lower side of the slot and uses the XSIC slot ID as its own slot ID.
The AR2201-48FE and AR2202-48FE have no slot for pluggable subcards, so they do not
support subcards.
Slots can be combined into one, but one slot cannot be divided into multiple slots.
The ID of the new merged slot equals the larger of the original slot IDs.
The 3G-HSPA+7 is a 3G access SIC card. It can function as the primary or backup link of an
enterprise to connect to the Internet and transmit voice, video, and data services.
Only a specific list of USB 3G modems is supported; contact Huawei TAC for the
latest list.
The 8FE1GE can be installed in the WSIC slots of the AR1200, AR2200, and AR3260. On
the AR1200 and AR2204, two SIC slots are combined into one WSIC slot.
The 24GE can be installed into the XSIC slot on the AR2220, AR2240, and AR3260. On the
AR2220, two WSIC slots are combined into one XSIC slot.
An FXS interface is an analog subscriber line interface that connects analog phones and
fax machines, and provides access to the AT0 loop trunk of a telephone exchange.
An FXO interface is a loop trunk interface and provides access to the telephone exchange
by using regular subscriber lines.
The 2BST is the ISDN module on the AR routers and provides two ISDN S/T interfaces,
which transmit voice service.
The 2BST implements the ISDN BRI function and provides the bandwidth of two B
channels and one D channel:
The total bandwidth of two B channels and one D channel is 144 kbit/s.
The S/T interface on the 2BST provides a rate of 192 kbit/s, including 144 kbit/s for data
transmission and 48 kbit/s for maintenance information transmission.
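The bandwidth arithmetic above can be checked directly. A minimal sketch (the 64 kbit/s and 16 kbit/s channel rates are standard ISDN BRI values, consistent with the 144 kbit/s total given in the text):

```python
# ISDN BRI (2B + D) bandwidth on the 2BST card.
B_CHANNEL_KBPS = 64    # each B channel carries 64 kbit/s (standard BRI value)
D_CHANNEL_KBPS = 16    # the BRI D channel carries 16 kbit/s (standard BRI value)
MAINTENANCE_KBPS = 48  # S/T interface maintenance overhead, from the text

data_rate = 2 * B_CHANNEL_KBPS + D_CHANNEL_KBPS   # 2B + D = 144 kbit/s
st_rate = data_rate + MAINTENANCE_KBPS            # S/T interface rate = 192 kbit/s

print(data_rate, st_rate)  # -> 144 192
```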
Power off the AR router before removing or reinstalling power modules.
If a single fan fails, the device will overheat and its performance will be affected.
When this occurs, replace the entire fan module immediately.
A network cable connects devices, enables a device to communicate with other network
devices, and allows users to maintain the device locally or remotely.
Single-mode and multimode optical fibers look the same in shape but differ in color: the
single-mode fiber is yellow, and the multimode fiber is orange.
The optical transmitting module of the multi-transverse mode is connected to the
multimode fiber.
The optical transmitting module of the single-longitudinal mode or multi-longitudinal
mode is connected to the single mode fiber.
E1 trunk cables are classified into 75-ohm unbalanced coaxial cables and 120-ohm
balanced twisted pair cables. The connectors of the cables are as follows:
A T1 trunk cable is a 100-ohm balanced twisted pair cable. Its appearance is the same as
the appearance of an E1 120-ohm balanced twisted pair cable.
A console cable connects the console port of the device to the serial port of an operation
terminal to transmit configuration data. A shielded cable or an unshielded cable can be
used according to the onsite situation.
The 8-pin RJ45 connector is inserted into the console port of the device.
The DB9 male connector is connected to an operation terminal, which is usually a PC.
Ethernet LAN-Ethernet LAN Layer 3 (in a subcard): through LSW and Fabric
Ethernet LAN-fixed Ethernet WAN2 Layer 3 (in a subcard): through LSW, Fabric and CPU
Basic voice functions are provided by the built-in PBX, SIP server, and SIP access gateway
Value-added voice services include multi-party communication, IVR automatic connection,
ring-back tone, parallel ringing, sequential ringing, one number link you (ONLY), bill
management, and subscriber management.
The Quality of Experience (QoE) feature monitors voice service quality in real time.
Jitter buffer, echo cancellation, and packet loss compensation combine to deliver a
superior user experience.
Only the SRU80 with a TM card supports hardware-based QoS; all models can support HQoS.
While delivering enterprise-class network services, the AR router provides robust network
security. Comprehensive security solutions include user access control, packet detection,
and active attack defense.
The built-in firewall is the first line of defense.
Port authentication technologies include 802.1x authentication, MAC address
authentication, and portal authentication.
VPN technologies include IPSec VPN, GRE VPN, DSVPN, L2TP VPN, and SSL VPN.
Application:
Benefits:
The AR integrates routing, switching, voice, security, and WLAN functions. You need
to deploy only one device at the egress to meet multi-service requirements, which
reduces the TCO and protects investments.
The AR supports high-density voice card 32FXS and high-density Ethernet card 24GE
to connect many voice and data terminals.
The AR provides a built-in AC, an industry first. It provides a cost-efficient WLAN
access solution without requiring extra cards.
The AR supports dual SRUs and hot standby, ensuring nonstop service transmission.
The AR G3 routers function as the egress routers of enterprise branches and provide
flexible access methods to support remote network connections.
An AR G3 meets various access requirements, including leased line, Ethernet, xDSL, 3G,
and WLAN. This saves deployment and maintenance costs and delivers significant value to
customers.
The 100 Mbit/s Ethernet interfaces of the AR1220V and AR1220W (V2R1C01) support PoE
in compliance with IEEE 802.3af and 802.3at; therefore, the AR1220V and AR1220W
(V2R1C01) can provide power for powered devices (PDs), such as IP phones. An 802.3at
interface provides up to 30 W of power, ensuring power for high-power PDs.
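For reference, the standard per-port power budgets on the power-sourcing side are roughly as follows (these are general IEEE figures, not taken from this course material):

```python
# Approximate maximum power a PSE delivers per port, by PoE standard
# (general IEEE 802.3af/at figures, not from the course text).
POE_PSE_MAX_WATTS = {
    "802.3af": 15.4,  # PoE
    "802.3at": 30.0,  # PoE+ (suits high-power PDs such as some APs and phones)
}

print(POE_PSE_MAX_WATTS["802.3at"])  # -> 30.0
```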
The 8FE1GECombo and 24GE interface cards on the AR2200/AR3200 support inter-card
VLAN switching, spanning trees, link bundling, and Layer 2/Layer 3 data exchange.
The AR G3 provides a built-in PBX supporting the enterprise switchboard, IVR navigation,
and CDR query functions to enhance corporate image and improve enterprise
communication efficiency.
The AR G3 is located in a branch to provide the smart call routing function. When a fault
occurs on the WAN, the PSTN network is used as a backup for calls.
When the SIP server at the headquarters is unreachable, the built-in SIP server of the AR
G3 implements communication between the branch and the PSTN network. This ensures
reliability of voice services.
The AR G3 provides multiple security access functions such as GRE VPN tunnel and IPSec
VPN tunnel, implementing secure data access and transmission. The AR G3 implements
fast tunnel deployment and authentication for branches. Using a tunnel, partners can
access and share enterprise resources and users are authenticated and authorized.
As the PEs of an MPLS network, the AR G3 routers are located in the branches. Different
types of services are separated by MPLS L3VPN. The AR G3 implements flexible
deployment, fast distribution, and secure transmission of VPN services, and supports
enterprise service operation over networks.
The AR G3 complies with 3G standards including CDMA2000 EV-DO, WCDMA, and TD-SCDMA, meeting wireless communication requirements between branches and the
headquarters.
Users can use a 3G USB card to deploy 3G services on the AR G3, saving service card slots.
In addition, the 3G data link can be used as a backup for wired links, protecting the xDSL,
FE/GE, ISDN, and CPOS uplinks. The backup link improves network stability and reduces
network construction costs.
The AR G3 provides the NQA function to monitor 3G link quality, ensuring the SLA.
Answers:
ABCD
ACDE
Meanwhile, to give you a well-rounded view of Huawei NE series routers, we have attached
an introduction to the Huawei NE20E-X6 at the end of this course.
The Huawei NetEngine20E-X6 high-end service router (hereinafter referred to as the NE20E-X6) is a high-performance router designed by Huawei for customers in sectors such as finance,
power, government, education, enterprise, and carrier networks. It meets carriers'
requirements for carrier-class HA and the needs of enterprise aggregation and access networks.
This is an introduction to the NE40E product family. All LPUs can be used in the NE40E-X16,
X8, or X3. The main difference between LPUs is forwarding capability.
The NE40E-X adopts a system architecture as shown in Figure above. In this architecture,
the data plane, management and control plane, and monitoring plane are separated. This
design helps to improve system reliability and facilitates separate upgrade of each plane.
The SFU on the NE40E-X16 switches data for the entire system at wire speed of 640 Gbit/s
(320 Gbit/s for the upstream traffic and 320 Gbit/s for the downstream traffic). This
ensures a non-blocking switching network.
The NE40E-X16 has four SFUs working in 3+1 load balancing mode. The entire system
provides a switching capacity at wire speed of 2.56 Tbit/s.
The four SFUs load balance services at the same time. When one SFU is faulty or replaced,
the other three SFUs automatically take over its tasks to ensure normal running of services.
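The switching capacity stated above is consistent with the per-SFU wire speed; a quick arithmetic check (figures from the text):

```python
# NE40E-X16 switching capacity derived from the per-SFU wire speed.
SFU_WIRE_SPEED_GBPS = 640  # each SFU switches 640 Gbit/s at wire speed
NUM_SFUS = 4               # four SFUs in 3+1 load-balancing mode

total_tbps = NUM_SFUS * SFU_WIRE_SPEED_GBPS / 1000
print(total_tbps)  # -> 2.56
```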
As shown in figure above, the NE40E-X16 backplane is divided into four areas, with each
area having two power inputs. These eight power inputs work in backup mode.
In a DC power supply system of the NE40E-X16, eight 70 A PEMs work in 4+4 backup
mode.
The input AC power is converted into regulated DC power by an AC/DC converter. The
resulting DC power output is connected to the PEMs through external cables to supply
power for all boards and fan modules.
Two -48V power inputs are joined on the board.
After the low-frequency filtering, the two -48 V power inputs for fans are joined inside the
fan module.
The NE40E-X16 is divided into the upper chassis and the lower chassis, and draws air from
the front and exhausts air from the rear. The air intake vent on the upper chassis resides
above the board area on the front chassis; the air exhaust vent resides above the board
area on the rear chassis. The lower chassis mirrors the upper chassis. In
addition, the upper chassis and the lower chassis have separate heat dissipation systems.
The middle area of the chassis is for SFU slots. The air intake vent of this area resides on
the left of the chassis. Two upper SFU slots in the area draw air from the left. When
flowing to the right, the air joins the air from the upper chassis. Two lower SFU slots in the
area draw air from the left. When flowing to the right, the air joins the air from the lower
chassis.
The upper and lower chassis have separate air channels that draw air from the front
and exhaust air from the rear. The air filters at the air intake vents are vertically
installed. The curved face, large area, and small windage resistance of the air filters
help improve the heat dissipation efficiency. The two air filters on the upper and
The air channel in the SFU slot area is located on the left of the chassis. The air filter
adopts front access. The depth of the air filter is the same as that of an SFU, and the
height of the air filter is four times that of an SFU.
The SFU on the NE40E-X8 switches data for the entire system at wire speed of 480 Gbit/s
(240 Gbit/s for the upstream traffic and 240 Gbit/s for the downstream traffic). This
ensures a non-blocking switching network.
The NE40E-X8 has three SFUs working in 2+1 load balancing mode. The entire system
provides a switching capacity at wire speed of 1.44 Tbit/s.
The three SFUs load balance services at the same time. When one SFU is faulty or replaced,
the other two SFUs automatically take over its tasks to ensure normal running of services.
As shown in figure above, the NE40E-X8 backplane is divided into two areas, with each
area having two power inputs. These four power inputs work in backup mode.
After the low-frequency filtering, the two -48 V power inputs for fans join inside the
fan module.
Each DC power input contains one -48 V power input and one RTN input. Two
separated RTN inputs join on the board.
In the case of an AC power supply system, an AC power frame is placed outside the
chassis and installed with rectifier modules based on system power. The AC power frame
is then connected to the input terminals on the DC-PEMs to supply power for the system.
(In short, an external AC power frame is added to the DC power supply system to
constitute an AC power supply system.)
The heat dissipation system is responsible for dissipating heat for the entire system. The
heat generated by boards is dissipated through the heat dissipation system. In this
manner, the temperature of the components on boards is controlled within a normal
range, enabling the boards to work stably.
The heat dissipation system is composed of fan modules (one fan in each fan
module), fan control boards (FCBs), temperature sensors, air filters, air intake and
exhaust vents, and a system air channel.
When a single fan fails, the other fans automatically rotate at full speed. In this case,
the heat dissipation system enables the system to work for a short period of time at an
ambient temperature of 40°C.
Temperature sensors, located on the air exhaust vent and boards, are used to
monitor the temperature of the components on boards and adjust the fan speed
through the command delivered by the SRU to control the temperature in a normal
range.
The power modules of the system have two fans of their own for independent heat
dissipation.
As shown in the figure above, the NE40E-X8 draws air from the front and exhausts air from
the rear. The air intake vent resides above the board area on the front chassis; the air
exhaust vent resides above the board area on the rear chassis.
The two fan modules of the NE40E-X8 are located side by side at the air exhaust vent, with
each module containing one fan. The entire system dissipates heat by drawing in air.
Two AC power modules or two DC power modules work in 1+1 backup mode to improve
the reliability of power supply. The figure shows the diagram of the power supply system.
The NE40E-X3 draws in air from the left and exhausts air from the rear. The air intake vent
is located at the left side of the chassis and the air exhaust vent is located at the rear of
the chassis.
The fan module of the NE40E-X3 is located at the air exhaust vent. The system draws in air
for heat dissipation.
Supports FAT32-formatted USB storage of the largest available capacities, and supports
mainstream USB memory available in the market.
Two USB ports: supporting version downloading through USB devices and power
supply for USB devices
RJ-45/SMB connector: processing Stratum-3 clock and 1588 clock; supporting input
and output of 2MHz/2Mbps/1PPS clock signals
The bandwidth of the control bus between the MPU and the LPU is increased to 1
Gbit/s.
Provides two 1G or 2.5G SFP interfaces for future expansion into clusters
The architecture is designed to be compatible with the SFU function on future MPUs.
The control plane of the NE40E is separated from the data plane and the monitoring plane.
The SRU is adopted on the NE40E-X8. The SRU integrates an SFU used for data switching.
Supports FAT32-formatted USB storage of the largest available capacities, and supports
mainstream USB memory available in the market.
Two USB ports: supporting version downloading through USB devices and power
supply for USB devices
RJ-45/SMB connector: processing Stratum-3 clock and 1588 clock; supporting input
and output of 2MHz/2Mbps/1PPS clock signals
The MPU of the NE40E-X3 controls and manages the system and switches data. The MPUs
work in 1+1 backup mode. The MPU consists of the main control unit, system clock unit,
synchronous clock unit, and system maintenance unit. The functions of the MPU are
described from the following aspects.
A switching network is a key component of the NE40E and is responsible for switching
data between LPUs.
NE40E-X16 has four SFUs that work in 3+1 load balancing mode.
Indicators on the panel include the ACT, RUN, and OFL indicators.
Alarm detection of the smoke sensor: Supports connection to a smoke sensor
through the panel to detect alarm signals from the chassis or equipment room.
Detection of the ambient temperature: Supports connection to a temperature
sensor through the panel to detect the temperature of the chassis or equipment
room.
Device alarm output: The CMU provides two-level alarm output signals.
Main contact point inspection: The CMU provides six main contact points to
detect signal input and monitor whether devices outside the chassis work
normally.
One RS-232 and one RS-485 serial interface: Provides an RS-232 serial interface, which is
connected to the panel. You can use it to query or locate information about the
CMU. In addition, the CMU provides an RS-485 serial port, which is connected to the
panel. You can connect a device to this interface. The interface supports full-duplex
mode.
As universal service routers, the NE40E-X series supplies diverse interfaces, such as
Ethernet, POS, CPOS, and E1.
LPUI
LPUS
SPU
The LPUF-40-B supports all software features except L3VPN, MVPN, and IPv6, and
can be upgraded through licenses to support all features of the LPUF-40-A.
100G line cards come in two types: the high-queue LPUF-100 and the medium-queue LPUI-100.
The LPUF-100 is a flexible line card that provides 512K flow queues and supports flexible
combinations of 10GE, GE, 10G POS, and 40G POS interfaces. The LPUI-100 is an integrated
Ethernet line card that provides 256K flow queues and meets different networking
requirements.
In V6R3, 100G line cards provide 8*10GE, 10*10GE, 16*10GE oversubscribed, 96*GE,
8*10G POS, 2*40G POS, and 1*100GE options. The NE40E 100G line cards offer the most
abundant interface types and the highest port density in the industry.
Note:
To use 100G boards, the SFU (and the SRU on the NE40E-X8) must be replaced with the
200G version. Moreover, the 200G SFU and the corresponding SRU cannot be used
together with the 40G SFU, LPUA, LPUB, or LPUG.
An SPUC implements the NetStream function and processes tunnel services related to GRE
and NAT and multicast VPNs.
An SPUC does not have any physical interfaces and can be inserted into any LPU slot.
NetStream mode
In NetStream mode, the SPUC board implements centralized NetStream processing.
Tunnel mode
Centralized multicast VPN: If multicast VPN runs on SPUC boards, the number of
configured MVPN licenses must match the number of SPUCs.
Tunnel: The SPUC board can provide centralized tunnels, currently including GRE and
4over6 tunnels. If tunnels run on SPUC boards, the number of configured tunnel licenses
must match the number of SPUCs.
NAT mode
The SPUC board supports NAT; NAT licenses must be configured in a 1:1 ratio with SPUCs.
The NE40E supports complete HQoS solutions. Huawei is the only vendor that supports
HQoS, DS-TE, and MPLS HQoS together; other vendors support only one or two. Thus, Huawei
can provide a complete HQoS solution to meet various carrier-class service scenarios.
The main scenarios of the NE40E router are campus and IDC interconnection, large branch
access, and key WAN nodes.
Answers:
ABCD
Both IOS and VRP provide command line interface (CLI), and CLI is the main user interface
for both of them.
The Network Migration Tool can also translate H3C Comware commands.
Note:
Routine maintenance requires fewer technical skills from operators. With predefined
policies and procedures, operators can perform routine maintenance easily.
The environment or the running condition includes the equipment room, power supply,
heat dissipation, etc. A device can work normally only after the running conditions are
met.
All Huawei Datacom Products (including routers, Ethernet switches, and firewalls) use VRP
as the software platform. Maintaining Huawei Datacom Products mainly means maintaining
the VRP.
The temperature and humidity requirements may differ between products. For
example, the AR requires that the ambient temperature in the equipment room range
from 0°C to 40°C, and that the ambient humidity range from 5% RH to 90% RH.
Short-term operation means that the continuous working time does not exceed 48 hours
and the accumulated time per year does not exceed 15 days.
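The "short-term operation" definition above can be expressed as a simple check (the function name is my own; the thresholds come from the text):

```python
def is_short_term_operation(continuous_hours: float,
                            accumulated_days_per_year: float) -> bool:
    """Short-term operation per the text: continuous working time does
    not exceed 48 hours AND accumulated time per year does not exceed 15 days."""
    return continuous_hours <= 48 and accumulated_days_per_year <= 15

print(is_short_term_operation(40, 10))  # -> True
print(is_short_term_operation(72, 10))  # -> False
```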
The cleanliness of the equipment room and its surroundings is closely related to the heat
dissipation of the devices.
Sundries in the equipment room may clog the outlet of the air channel of the device.
Too much dust on the air filter will also clog the outlet of the air channel.
Sometimes a device needs a license to provide advanced functions. The user needs to buy
the right license and activate it on the device. Some licenses are time-limited, so we
need to monitor the license state.
Different devices use different local storage, and the prompts are different too.
For the dual MPU equipment, it is necessary to check both the master and the
backup MPU.
Other information
Note: there is no command on NE routers to display reset information, because of their
complicated structure. If an unexpected reset occurs on an NE router, please
contact professional personnel.
There are chapters in the product documentation to describe the alarms, logs and traps in
detail.
It is recommended to use an NMS (such as Huawei eSight) to gather alarms, logs, and
traps. An NMS makes gathering and analyzing this information easy.
The display reset-reason command displays the reason for board reset. It is used on the
devices that support modular cards.
Above is an example on an S9706. We can see that one slot has reset only once, while
another slot has reset 967 times, and we can see the reset reasons.
The display interface brief command displays brief information about interfaces, including
the physical status, link layer protocol status, inbound and outbound bandwidth usage
within a certain period, and numbers of sent and received error packets. This information
helps locate faults on interfaces.
The service check items depend on what services the network is providing.
Too much information in one window is difficult to analyze. For example, if you use
the display logbuffer command to display the log buffer, hundreds of lines may be
displayed, which are very difficult to look up and analyze.
If you need help from others, you need to send them some information. Everything
you typed and everything displayed in the CLI is the best raw information.
In many organizations, what the operators did should be saved for auditing.
Therefore, logging all information displayed in the CLI is a good habit in maintenance.
The above shows the commonly used terminal programs SecureCRT and HyperTerminal.
Both of them can capture the displayed information to a text file.
Answers:
ABC
Security statement
For brief analysis, the course uses FTP as an example to describe corresponding
technologies. The device supports file transfer using FTP, TFTP and SFTP. FTP, TFTP,
and SFTPv1 have security risks, so SFTPv2 is recommended.
If you are a maintenance engineer, read the following precautions before doing your
work:
Check whether the fault is an emergency fault. If so, use the pre-defined
troubleshooting methods to recover the faulty module immediately and then restore
services.
Strictly conform to operation rules and industrial safety standards, ensuring
personnel and device safety.
Take electrostatic discharge (ESD) measures and wear an ESD wrist strap when
replacing or maintaining devices.
Record original information in detail during troubleshooting.
Make records when performing important operations such as restarting the device
or erasing the database. Before performing important operations, confirm the
operation feasibility, back up data, and prepare emergency and security measures.
Only qualified personnel can perform important operations.
Some faults cause resource or money loss for customers, so maintenance engineers should
focus on how to prevent faults and quickly rectify faults. Backing up key data helps you
quickly locate and rectify faults. Back up key data as soon as possible when the network
runs properly.
Fault symptoms differ, but the root causes are technical issues.
For example, a user cannot access the Internet. Pinging the gateway from the PC fails;
that is, the PC cannot connect to its gateway.
If you have trouble locating the fault, collect fault information and send the information to
Huawei or Huawei agent for fault analysis.
Fault occurrence time, network topology (for example, location of the faulty device
on the network, and upstream and downstream devices connected to the faulty
device), operations triggering the fault, measures that you have taken and results,
symptom and influence of the fault (for example, on which ports services are
affected).
Name, version, current configurations, and interfaces of the faulty device. For the
method of obtaining this information, see Collecting Diagnostic
Information and Common display Commands.
Logs generated when the fault occurs. For the method of obtaining the log
information, see Obtaining Logs and Alarms.
Executing this command requires a long time. You can press Ctrl+C to pause diagnosis
information display on screen.
When a large amount of diagnostic information is displayed, the CPU usage may be high
in a short period.
Therefore, do not use this command when the system is running properly. Running
the display diagnostic-information command simultaneously on multiple terminals
connected to the device is prohibited, because the CPU usage of the device may
increase significantly and device performance may be degraded.
When a device is faulty, collect logs and alarms on the device immediately. These logs and
alarms help you know what happened during device operation and where the fault
occurred.
Logs, including user logs and diagnostic logs, record user operations, system faults, and
system security.
Devices that support log files periodically save them to the local storage device.
Taking Sx7 chassis switches as an example: by default, the switch records all logs and
alarms in log files and saves the log files in the logfile folder. The file name is *.log or
*.dblg, and the default file size is 8 MB. When the size of a log file exceeds 8 MB, the
system compresses the log file into a zip file named <saving time>.log.zip or
<saving time>.dblg.zip, for example, 2013-06-03.19-49-37.log.zip or
2013-09-11.10-54-52.dblg.zip. The system then records logs and alarms in a new log file.
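The rotation policy described above can be sketched as follows. This is an illustrative simplification, not the switch's actual implementation; the function names are hypothetical, but the 8 MB threshold and the timestamped archive names follow the description.

```python
from datetime import datetime

MAX_LOG_SIZE = 8 * 1024 * 1024  # default log file size limit: 8 MB

def rotated_name(saved_at: datetime, suffix: str = "log") -> str:
    """Build an archive name in the '<saving time>.log.zip' style described above."""
    return saved_at.strftime(f"%Y-%m-%d.%H-%M-%S.{suffix}.zip")

def should_rotate(current_size: int) -> bool:
    """Rotate (compress and start a new log file) once the 8 MB limit is exceeded."""
    return current_size > MAX_LOG_SIZE
```

For instance, a log file saved on 2013-06-03 at 19:49:37 would be archived as 2013-06-03.19-49-37.log.zip.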
About 80% of network faults are caused by simple reasons, for example, cable failures and
incorrect configurations.
Analyze problems from simple to complex. In the OSI model, analyze problems from the
physical layer first. Then analyze the data link layer and network layer.
If no fault occurs at the network layer, the transport layer will work properly. TCP/IP has
been running for dozens of years and is mature. Most application faults are caused by
application software.
Problem analysis depends on our knowledge and experience to some degree. Having a
good understanding of network protocols helps rapidly analyze and locate network faults.
Note: Before performing any operation, ensure that the operation has the minimal impact
on network services.
Highlights of HedEx
Knowledge base is where you learn and share experience. A large collection of cases and
technical articles is available. You are more than welcome to submit your own article to
share with others.
There are abundant cases to help you solve common issues and complete
installation or maintenance tasks quickly.
Support-E boasts Chinese and English technical communities, including many specialized
forums. Personal space is also supported.
All registered users can browse the forums and comments. Huawei engineers are
there to give you a real-time response.
When seeking technical assistance, analyze the trouble first. Being on site, we are the
ones most familiar with the problem.
If we contact others for help without doing our own analysis, we cannot provide enough
information at once, and time may be wasted gathering information again and again.
If we contact others for help after gathering enough information and necessary analysis,
we can provide enough information at once, and time will be saved.
Huawei provides different hotline telephone numbers and email addresses for different
regions. You can find the details on the following web page: Home page > Contact Us > Aftersale Support
http://support.huawei.com/enterprise/NewsReadAction.action?contentId=NEWS100
0000563
If a network fault occurs after a configuration operation, it does not mean that the fault is
caused by the configuration.
Analyzing the problem and finding out the root cause must be done before deciding
whether to recover the configuration.
The compare configuration command checks whether the current configuration is identical
to the next startup configuration file.
Note: only the first difference is displayed each time. You need to run the command
several times to make sure there is no difference between the running and the saved
configurations.
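This run-it-several-times behavior can be mimicked with a small sketch. It is a simplification under assumed names (the real command compares full configuration files, not lists of lines):

```python
def first_difference(running, saved):
    """Return (index, running line, saved line) of the first difference, or None
    if identical -- like 'compare configuration', which reports one hit per run."""
    for i in range(max(len(running), len(saved))):
        r = running[i] if i < len(running) else ""
        s = saved[i] if i < len(saved) else ""
        if r != s:
            return i, r, s
    return None

def all_differences(running, saved):
    """Repeat the comparison past each hit, like rerunning the command until
    no further difference is reported."""
    diffs, i = [], 0
    while (d := first_difference(running[i:], saved[i:])) is not None:
        diffs.append((i + d[0], d[1], d[2]))
        i += d[0] + 1
    return diffs
```

The second function shows why a single run is not enough: each run only uncovers the next difference.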
The configuration can be recovered only when a backup configuration file exists, so back
up the configuration before any configuration modification.
Answers:
ABCD
ABCD
Hardware is the foundation of a device. The basic functions of a device are determined by
its hardware, but the hardware capabilities cannot be realized without software.
When designing a product, vendors often choose an advanced hardware architecture and
then keep updating the software to improve and optimize the product's features.
Products manufactured years ago might not support IPv6, but most of them can support
IPv6 after the system software is upgraded to a newer version.
A new software version is usually compatible with older hardware, but new hardware is
generally not compatible with old software versions.
This course introduces only online upgrade, software patch installation, and upgrade
using the BootRom menu.
First of all, assess the feasibility of the upgrade and confirm that the upgrade is
necessary.
Then, obtain the version software and related documents through official channels.
For service stability, a software upgrade is not recommended unless it is necessary.
A variety of risks exist during an upgrade; be sure to perform a full assessment and
prepare risk mitigation measures.
Upgrading a running device may affect services, so operability is not confined to the
technical level.
Risk assessment covers both technical and business factors, as well as the risk control
measures.
If a risk cannot be circumvented, seek product support instead of operating at risk.
The operability of the upgrade is whether the upgrade is operable when the technical
conditions are satisfied.
Various factors are considered when assessing the operability of the upgrade: the user's
services, the operations personnel, maintenance habits, the operator's technical level,
technical support, and so on.
You can find the version software on the Huawei enterprise website: Software > Enterprise
Networking. A set of release documents is provided for every software release (including
patches). Generally, they are:
Upgrade guide. Generally, the upgrade steps are similar for all Huawei datacom
products, but there may be some exceptions. Read the upgrade guide carefully
before upgrading.
Command, alarm and MIB delta information, which describes the changes in
command, alarm and MIB.
Feature delta information, which describes the changes of the product features.
Release notes, which describe the compatibility and the unresolved issues.
When analyzing the risks of upgrade, the command, alarm and MIB delta information, the
feature delta information and the release notes should be read.
How many devices should be upgraded? Where are they located? What version are
they currently running? Can that version be upgraded to the new version directly?
Can we operate remotely or locally? ...
The upgrade method is closely related to the upgrade objects. Generally, online
upgrade through the command line is used.
Verification Method
The verification contains two aspects: before and after the upgrade:
Before the upgrade, make sure what the exact problem is and no other problems
exist.
After the upgrade, make sure that the exact problem is solved and no new problem is
introduced.
Rollback Scheme
The risks must be fully considered; if the upgrade does not achieve the desired
goal, the rollback scheme should be applied.
The rollback steps are roughly the same as the upgrade steps; just set the startup
version back to the old one.
Configuration file, license file, and some other files need to be backed up.
Some devices lack storage space and can keep only one software version. Before
deleting the old version file, back it up to a PC.
Files can be transferred to the device via FTP, TFTP, or a USB drive.
If the old configuration is not compatible with the new software version, a
compatible new configuration should be transferred as well.
Restarting the device will interrupt services; ask the service owner for approval in advance.
This course takes S3700 switch as an example to introduce the steps of upgrade using CLI.
In this example, the device is used as the FTP client. The device has been configured to
communicate with the FTP server normally, and the FTP server has a user name and
password configured and provides the related files.
Use the get command to download the version files from the FTP server (an upload, from
the PC's point of view).
When the new system software is specified as the startup file, the device automatically
asks whether the BootRom should be upgraded. Generally, choose yes to upgrade the
BootRom immediately; otherwise the BootRom will be upgraded the next time the device
starts.
Use the reboot command to restart the device. The device will prompt you to save the
configuration. Choose Y to save the current configuration, or N to keep the old one.
After the device restarts, use the display version command to check whether the device is
running the new software version.
At the same time, check if the problem has been solved and no new problem is
introduced.
Some patches are developed based on another patch, so check the patch information as
well as the basic version.
Use the patch delete all command to delete patches that are not needed.
In this example, the device is used as the FTP client and the PC as the FTP server. Use the
get command on the device to download the patch file.
Using the patch load command, you can activate a patch in one-click mode; that is, load,
activate, and run the patch in one step.
The BootRom menu provides the most basic operation options of a device, like the BIOS of
a PC. Upgrades through the BootRom menu are usually performed because the device
cannot start properly. In that case, services are already interrupted, so there is less
concern about service impact.
At the beginning of the devices starting, the Console CLI will prompt: Press Ctrl+B
to enter the BootRom menu
Some basic network parameters can be configured using the BootRom menu so that the
device can connect to the FTP or TFTP server to transfer files.
A variety of methods can be used to transfer files: Xmodem (console), FTP, or TFTP.
Considering the transfer speed, FTP or TFTP is generally chosen.
Upload the version file to the device with the method chosen.
At the beginning of the devices starting, the Console CLI will prompt: Press Ctrl+B to
enter the BootRom menu.
Press Ctrl+B before the countdown timer expires, and then enter the password. Different
devices may have different passwords. The default password of S3700 switch is huawei.
Follow the menu options to modify network parameters. In this example, we choose ftp as
the transfer protocol.
If the device has a management Ethernet port, use this port for the transfer; if not, use an
ordinary Ethernet port. In this example, Ethernet 1/0/1 is used.
The device must connect to the server first, and the server must be configured in advance.
On the other hand, there must be enough storage space for the new version files on the
device.
You can enter the Filesystem Submenu (the 5th option of the main menu) to operate on
the files on the device.
FILESYSTEM SUBMENU
1. Erase Flash
2. Format flash
It is recommended to display the files first, and then decide which files should be deleted.
Different devices may use different storage media, perhaps a flash memory or a CF card.
The files on the device may include the system software, the patches, and so on.
The device will not reboot automatically; reboot it from the main menu.
Remember to check the version information and the service status after the upgrade.
Answers:
ABCD
Note:
To simplify the problem description, this course uses FTP as an example to describe
the related technologies. The device can transfer files through FTP, TFTP, and
SFTP. Using FTP, TFTP, or SFTPv1 has potential security risks; SFTPv2 is
recommended.
Software upgrades are used to fix device bugs and to expand capacity and features.
On most networks, upgrading system software of a device requires the device to restart.
The device restart interrupts service operation and traffic forwarding. Setting up multiple
equal-cost paths can relieve the impact of system software upgrade on services. Services
can be switched to the backup paths during an upgrade. This method requires the
network configuration to be modified, which increases error probability and upgrade time.
In addition, traffic may concentrate on one path after traffic switching and then be
interrupted due to congestion.
For the S series chassis switches, a traditional upgrade always requires a complete
reboot, which causes 15 to 25 minutes of service interruption. ISSU can reduce the
interruption time to about 2 minutes.
ISSU is a mechanism that reduces service forwarding interruption time when the system
software is being upgraded or rolled back. This mechanism reduces the traffic interruption
time and improves service reliability during system software upgrade.
The system checks whether the ISSU conditions are met. Two ISSU modes are defined,
but currently only lossy ISSU is supported.
If one of the LPUs does not support ISSU, it is marked for fast reboot (and then reboots
automatically or manually), and the ISSU continues.
ISSU check: The system checks whether ISSU conditions are met and restarts the
standby MPU using the new system software.
ISSU start: The system backs up data between the master MPU and standby MPU.
ISSU switchover: The standby MPU becomes the new master MPU.
ISSU confirm: The old master MPU restarts using the new system software and
becomes the new standby MPU after the restart.
Before the ISSU confirm phase, you can use the ISSU abort function to abort ISSU and roll
back the system to the old version.
ISSU provides the version rollback mechanism to allow the system to return to the old
version. This mechanism reduces version upgrade risks.
The issu precheck command enables the system to perform in-service software upgrade
(ISSU) precheck. This helps determine whether ISSU can be performed.
Before an ISSU upgrade, run this command to perform an ISSU precheck and
determine whether the current system resources (software version, number of VTY
users, and memory space on the cards) and card types support ISSU. If the current
software version does not support ISSU or there is more than one VTY user in the
system, the system stops the precheck and the ISSU upgrade cannot be performed.
If an SPU or WAN card is used or a card does not have sufficient memory space, this
card does not respond to the ISSU instructions and must be upgraded by resetting
it.
The ISSU precheck hardly affects system operation and can be performed at any time
except during an ISSU process.
Different from issu check, the issu precheck command only checks whether the current
system resources and card types support ISSU. The upgrade type displayed in the
precheck result may not be the final upgrade type used on the cards. The final upgrade
type is displayed in the check result after you run the issu check command.
The issu timer rollback command sets the ISSU rollback timer value. The default value is
120 minutes.
After the system enters the ISSU check phase, the ISSU rollback timer is automatically
activated. If the timer expires in different situations, the system uses different processing
methods:
If the timer expires before the standby MPU restarts using the new version, the
system considers that you abort ISSU and exits ISSU.
If the timer expires after the standby MPU restarts using the new version and before
the ISSU plane is successfully switched, the system considers that you abort ISSU, the
standby MPU rolls back to the old version, and then the system aborts ISSU.
If the timer expires after the ISSU plane is successfully switched, the system extends
the upgrade time and considers that ISSU is successful by default.
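The three timer-expiry cases above can be summarized in a small sketch. The phase names and return strings are illustrative, not actual system states or messages:

```python
def rollback_timer_action(phase: str) -> str:
    """Return the system's action when the ISSU rollback timer expires,
    following the three cases described above."""
    if phase == "before_standby_restart":
        # The system treats expiry as the user aborting ISSU.
        return "abort ISSU and exit"
    if phase == "after_standby_restart":
        # Standby MPU already runs the new version: roll it back first.
        return "roll standby MPU back to old version, then abort ISSU"
    if phase == "after_plane_switchover":
        # Past the point of no return: treat the upgrade as successful.
        return "extend upgrade time; ISSU considered successful"
    raise ValueError(f"unknown ISSU phase: {phase}")
```

The key point the sketch captures is that the same timer has opposite effects depending on whether the ISSU plane switchover has already happened.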
Before the ISSU plane is switched, you can run the issu timer rollback command to reset
the ISSU rollback timer value. In ISSU confirm phase, if ISSU is confirmed successful, the
ISSU rollback timer is automatically disabled.
The issu check command enables the system to enter the ISSU phase and perform ISSU
check. The ISSU check affects the system in the following aspects:
The standby MPU restarts using the new version and becomes the new master MPU.
The ISSU rollback timer is activated. You can adjust the ISSU rollback timer value
during ISSU. If the timer expires before ISSU is complete, ISSU may fail and then the
system will roll back to the old version.
Before the ISSU check phase ends, you can press Ctrl+C to abort ISSU. Then the system
continues ISSU check until ISSU check finishes. You cannot continue ISSU but only run
the issu abort command to exit ISSU.
The issu start command starts ISSU. The issu check command must be used to enable
the system to perform ISSU check before using the issu start command.
After the system starts ISSU, you can press Ctrl+C to abort ISSU. Then the system aborts
ISSU. You cannot continue to switch the ISSU plane but only run the issu abort command
to exit ISSU.
The issu switchover command configures the system to switch the ISSU plane.
The ISSU plane switchover will interrupt a Telnet connection for 30s. Wait for 30s and
then press Enter to re-log in to the device for ISSU.
After you run the issu confirm command to confirm ISSU and restart the old master MPU
using the new version, check the status of the master MPU and standby MPU again. Then
you can find that the new master MPU is in Master state because the hardware switchover
has completed.
The issu confirm command confirms ISSU after the ISSU plane is successfully switched so
that the old master MPU restarts using the new version. The issu switchover command
must have been used to switch the ISSU plane. After the issu confirm command is
executed, the new system software is specified as the software for the next startup. The
ISSU is complete.
NOTE: the commands above are executed on the new master MPU (the former standby
MPU).
Answers:
ABC
ABC
Confidentiality
Integrity
Availability
Availability means that the information must be available when it is needed. High
availability systems aim to remain available at all times, preventing service
disruptions due to power outages, hardware failures, and system upgrades. Ensuring
availability also involves preventing denial-of-service attacks.
Controllability
Non-Repudiation
In law, non-repudiation implies one's intention to fulfill one's obligations under a contract.
It also implies that one party to a transaction cannot deny having received the
transaction, nor can the other party deny having sent it.
Electronic commerce uses technologies such as digital signatures and public key
encryption to provide non-repudiation.
Information security and network security are seemingly the same, but they differ in
reality.
Information security not only includes network security, but also includes computer
system security, application security, and a variety of security management
(personnel, technology, operations management).
Information Security is a very large system, and Network Security is only a part of it.
Different devices from different vendors may share the same security features, such as
AAA and 802.1X, or each may have its own security features, such as local attack defense.
This course mainly introduces the security features of Huawei devices, including ACL,
traffic suppression, local attack defense, and IP address spoofing defense.
ACLs are used to identify the packets that need to be filtered. After identifying the
packets, the ACL permits or denies the passage of the packets based on a certain policy
created by the set of rules.
ACLs configured on a device can be referenced by function modules, such as the policy-based routing (PBR), route filtering, QoS, device security, firewall, and IPSec modules.
Different devices may support different types of ACLs, but most of them support basic
ACLs, advanced ACLs, and Layer 2 ACLs, and the number ranges are the same.
In addition to the classification above, ACLs can also be classified into ACL4 and ACL6 by
IP version, or into numbered ACLs and named ACLs by naming mode.
An ACL can consist of multiple deny and permit statements. Each statement describes a
rule. These rules may overlap, that is, one rule contains another rule but the two rules are
not completely the same. The matching order of ACL rules determines the priorities of ACL
rules to be matched with a packet.
The rule that defines the smallest source IP address range is matched first. A greater
number of 0 bits in the wildcard mask indicates a smaller source IP address range.
The rule that is configured first is matched first if the source IP address ranges are
the same.
The rule that defines the smallest destination IP address range is matched first if the
protocol types and source IP address ranges are the same. A greater number of 0
bits in the wildcard mask indicates a smaller destination IP address range.
The rule that defines the smallest Layer 4 port number (TCP/UDP port number) range
is matched first if the protocol types, source IP address ranges, and destination IP
address ranges are the same.
The rule that is configured first is matched first if all the preceding ranges are the same.
The rule that defines the smallest source MAC address range is matched first. A
greater number of 1 bits in the mask indicates a smaller source MAC address range.
The rule that defines the smallest destination MAC address range is matched first if
the source MAC address ranges are the same. A greater number of 1 bits in the mask
indicates a smaller destination MAC address range.
The rule that is configured first is matched first if the source and destination MAC
address ranges are the same.
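The automatic ordering above boils down to counting 0 bits in the wildcard mask (for IP ACLs) and falling back to configuration order on ties. The sketch below illustrates this for the source address of basic ACL rules; it is a simplified model with hypothetical names, not device code:

```python
def wildcard_zero_bits(wildcard: str) -> int:
    """Count 0 bits in a dotted wildcard mask; more 0 bits = smaller range."""
    value = sum(int(octet) << (8 * i)
                for i, octet in enumerate(reversed(wildcard.split("."))))
    return 32 - bin(value).count("1")

def auto_order(rules):
    """Sort basic ACL rules for automatic matching order: the smallest source
    range first, then configuration order. Each rule is
    (config_index, source_wildcard)."""
    return sorted(rules, key=lambda r: (-wildcard_zero_bits(r[1]), r[0]))
```

For example, a rule with wildcard 0.0.0.255 (24 zero bits, a /24 range) is matched before one with 0.0.255.255 (16 zero bits, a /16 range) regardless of configuration order.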
If the number of a named ACL is not specified, the device automatically allocates a number
to the named ACL. The following situations are involved:
If only the type of a named ACL is specified, the device allocates the largest
available number of that ACL type to the named ACL.
If neither the number nor the type of a named ACL is specified, the device treats the
named ACL as an advanced ACL and allocates the largest available advanced ACL
number to it.
The device does not allocate the number to a named ACL repeatedly.
A rule ID can be specified manually by the user or allocated automatically by the device. If
the rule ID is not specified, the device allocates an ID to the new rule. Rule IDs are sorted
in ascending order, and the device allocates IDs according to the step. The step value is
set by using the step command; the default step is 5. With this step value, the device
creates ACL rules with IDs 5, 10, 15, and so on. This parameter is valid only when ACL
rules are matched in configuration mode.
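The step-based allocation can be sketched as follows. This is a simplified model of the behavior described above (allocate the next multiple of the step after the largest existing ID); the function name is hypothetical:

```python
def next_rule_id(existing_ids, step: int = 5) -> int:
    """Allocate the next rule ID: the next multiple of the step after the
    largest existing ID. With the default step of 5 and no manual IDs,
    this yields 5, 10, 15, ..."""
    if not existing_ids:
        return step
    return (max(existing_ids) // step + 1) * step
```

For example, if a rule was manually given ID 7, the next automatically allocated ID is 10, the next multiple of the default step.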
Time-range is a user-defined parameter that limits the time that the rules take effect.
Many parameters can be configured in the advanced ACL view: protocol (IP, TCP, UDP,
ICMP), port number, flags, and so on. Refer to the product manual for details.
Compared with basic and advanced ACLs, Layer 2 ACLs and user-defined ACLs are used
less often, but they can meet application requirements under certain conditions. User-defined ACLs also enhance the flexibility of the device.
There are two network segments, and the interoperability of the devices has been
verified. The configuration requirements are:
Hosts 192.168.10.2~7 can access the Internet and cannot be accessed by the hosts
in the 192.168.20.0/24 segment;
Except for 192.168.10.2~7, hosts in the 192.168.10.0/24 segment cannot access
the Internet; all hosts can access the 192.168.20.0/24 segment;
Different devices may have different ways to filter packets, but ACLs are applied in all of
them. Switches and routers use ACLs to define traffic classifiers and bind them to permit
or deny behaviors.
Using the traffic-filter command, you can configure packet filtering based on the ACL
rule.
The traffic-filter command can be used globally, under the VLAN view, or under the
interface view, but only the inbound direction can be configured.
When a Layer 2 Ethernet interface on the device receives broadcast packets, multicast
packets, or unknown unicast packets, it forwards these packets to the other Layer 2
Ethernet interfaces in the same VLAN. If there are a large number of such packets,
interface bandwidth is consumed and the performance of the device is diminished. To
resolve this problem, suppress these packets to ensure that their rate stays within the
desired range.
Generally, LAN switches support traffic suppression, and routers support traffic
suppression only when a LAN module is installed.
In addition to traffic suppression, the Sx7 switches also support storm control functions.
The commands above can be configured under the interface view or VLAN view, different
parameters can be carried depending on the different product models and views.
For example, the AR G3 routers support suppression based only on packet rate, but the
Sx7 switches also support suppression based on a percentage of the interface bandwidth.
In addition, the S7700 switch can reference a CAR template for suppression in the VLAN
view.
If min-rate-value is specified, the interface enters the forwarding state when the rate of
packets received on the interface falls below min-rate-value within the storm detection
interval.
If max-rate-value is specified, storm control is performed on an interface when the rate of
packets received on the interface exceeds max-rate-value within the storm detection
interval.
Block means the interface blocks the packets; shutdown means the interface is shut
down, and a shut-down interface can only be re-enabled manually.
Using the storm-control enable command, you can enable the function of recording logs
or reporting traps during storm control.
By default, the interval for detecting storms is 5s.
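The min-rate/max-rate decision described above can be sketched per detection interval as follows. This is an illustrative simplification under assumed names, not the switch's actual state machine:

```python
def storm_control_state(rate_pps: int, min_rate: int, max_rate: int,
                        current: str, action: str) -> str:
    """Decide the interface state for one detection interval (default 5 s).
    Rates are in packets per second; 'action' is 'block' or 'shutdown'."""
    if rate_pps > max_rate:
        return action          # punish: block packets or shut the interface down
    if current == "shutdown":
        return "shutdown"      # a shut-down interface must be re-enabled manually
    if rate_pps < min_rate and current == "block":
        return "forwarding"    # a blocked interface recovers below min-rate-value
    return current
```

Note the asymmetry the sketch makes explicit: a blocked interface recovers automatically once traffic drops, while a shut-down interface never does.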
For security purposes, configure traffic suppression on VLAN 10 and VLAN 20, and
configure storm control on Ethernet 0/0/48 of SWA.
Notice: traffic suppression and storm control cannot be configured in the same view.
Use the display flow-suppression interface command to view the traffic suppression
configuration on a specified interface.
Use the display storm-control command to view the storm control configuration on a
specified interface.
With the development and wide application of networks, users pose higher requirements
for the security of the network and network devices. On the network, a large number of
packets, including malicious attack packets, are sent to the central processing unit (CPU).
These packets cause high CPU usage, degrade system performance, and affect service
provisioning. Malicious packets aimed at attacking the CPU keep it busy processing attack
packets for a long period, so other normal services are interrupted and the system may
even fail.
To protect the CPU and enable the CPU to process and respond to normal services, the
packets to be sent to the CPU need to be limited. For example, filtering and classifying
packets to be sent to the CPU, limiting the number of such packets and their rate, and
setting the priority of such packets. Packets that do not conform to certain rules are
directly discarded to ensure that the CPU can process normal services.
The local attack defense feature of the device is specially designed for packets directed at
the CPU. It is mainly used to protect the device from attacks and ensure that the existing
services run normally during attacks.
Level 1: The device filters invalid packets sent to the CPU by using blacklists.
Level 2: The device limits the rate of packets sent to the CPU based on the protocol
type to prevent excess packets of a protocol from being sent to the CPU.
Level 3: The device schedules packets sent to the CPU based on the protocol priority
to ensure that packets with higher protocol priorities are processed first.
Level 4: The device uniformly limits the rate of packets and randomly discards the
packets that exceed the rate limit to protect the CPU.
Analyze packets based on users and ports. Users are identified by MAC addresses;
ports are identified by physical port numbers and VLAN IDs (including inner VLAN
IDs).
Count the number of received packets based on protocols and MAC addresses (or
port information).
When the number of packets exceeds the threshold, the system considers that an
attack occurs.
When detecting an attack, the system reports a log and a trap, or carries out
Punishment. The current Punishment action is Deny.
Different devices may support different features; refer to the product manuals for details.
Users on 192.168.10.0/24 often attack the network and are added to the blacklist.
In this manner, they cannot access the network.
Set the CAR for sending ARP Request packets to the CPU to prevent attacks of ARP
Request packets.
Set the CIR for sending FTP packets to the CPU when FTP connections are set up.
Rate limit of sending FTP packets to the CPU when FTP connection is set up:
128kbps.
Using the cpu-defend policy command, you can create an attack defense policy and
enter the attack defense policy view.
Using the car command, you can set the rate of sending packets to the CPU.
Using the linkup-car command, you can set the CIR and CBS to limit the rate of protocol
packets after a protocol connection is set up.
Using the cpu-defend-policy command, you can enable the attack defense policy.
Using the display cpu-defend configuration command, you can view the CAR
configuration.
Using the display cpu-defend-policy command, you can view information about the
attack defense policy.
As shown above, if Attacker fakes User's IP address to access the network, User cannot
access the network normally, or the data sent to User may be received by Attacker.
If Attacker queries SWA with a source IP of RTA or the Internet, SWA will keep responding
to requests for addresses that do not exist, and network resources are wasted.
To defend against such attacks, Huawei devices provide IP source guard and Unicast
Reverse Path Forwarding (URPF).
Before the device forwards an IP packet, it compares the source IP address, source MAC
address, interface number, and VLAN ID in the IP packet with entries in the binding table.
If a binding entry is matched, the device considers the IP packet as a valid packet and
forwards the IP packet. Otherwise, the device considers the IP packet as an attack packet
and discards the IP packet.
As shown above, SWA binds the user's IP address, MAC address, and access port. The switch forwards a packet only when all items in the packet match the binding table. If the attacker spoofs the IP address, the MAC address, or both, it cannot access the network.
Before a packet is forwarded, URPF obtains the source address and inbound interface of the packet, looks up the source address in the routing table, and compares the outbound interface of the matching route with the inbound interface of the packet. If they do not match, URPF considers the source address a spoofed address and discards the packet. This allows URPF to effectively protect a device against malicious attacks by blocking packets with bogus source addresses.
As shown above, URPF is enabled on SWA. The attacker sends requests to SWA with the spoofed source address 100.99.98.97. SWA checks the routing table for 100.99.98.97, finds that the matching route does not point to the interface that received the packet, and discards the packets.
IP Source Trail is a function added to IP source guard. When the host is under attack,
configure IP Source Trail on the device connected to the host, then the attack source can
be traced.
The device supports the following types of URPF check modes:
Strict check: A packet can pass the check only when the FIB table of the device has a routing entry whose destination address is the source address of the packet and the inbound interface of the packet matches the outbound interface in that routing entry. Unmatched packets are discarded.
Loose check: A packet can pass the check as long as the FIB table of the device has a routing entry whose destination address is the source address of the packet.
Different devices support different features; refer to the product manuals for details.
Configure IP source guard on SWA: apply static binding check to PC1, and dynamic DHCP snooping binding check to the other hosts.
The IP address, MAC address and interface are all included in the binding table.
For a large number of hosts, it is preferable to configure IP source guard in VLANs, which greatly reduces the amount of configuration.
Using the ip source check user-bind enable command, you can enable the IP source
guard function on an interface or in a VLAN to check the received IP packets.
Using the ip source check user-bind check-item command, you can configure the items
to be checked in IP packet checking.
Under interface view, IP address, MAC address and VLAN are checked by default.
Under VLAN view, IP address, MAC address and interface are checked by default.
Using the user-bind static command, you can configure a static user binding entry.
Using the ip source check user-bind alarm enable command, you can enable the alarm
function for checking the received IP packets.
Using the ip source check user-bind alarm threshold command, you can set the alarm
threshold for checking the received IP packets.
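Putting the commands above together, a minimal sketch for this scenario (a static binding for PC1 plus checking in the VLAN) might look as follows. The interface name, addresses, and alarm threshold are illustrative assumptions.

```
# Static binding entry for PC1: IP address, MAC address, interface, and VLAN
[SWA] user-bind static ip-address 192.168.1.10 mac-address 0001-0203-0405 interface gigabitethernet 0/0/1 vlan 10
# Enable IP source guard in the VLAN; by default the IP address,
# MAC address, and interface are checked in the VLAN view
[SWA] vlan 10
[SWA-vlan10] ip source check user-bind enable
# Generate an alarm when the number of discarded IP packets exceeds the threshold
[SWA-vlan10] ip source check user-bind alarm enable
[SWA-vlan10] ip source check user-bind alarm threshold 200
```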
There may be IP source address attacks from the network connected to Ethernet 0/0/1 of SWA.
By default, the URPF check function is enabled globally, but the URPF check function is
disabled on an interface.
Strict: Indicates URPF strict check. A packet is forwarded only when the source address of
the packets exists in the FIB table and the outgoing interface of the matching entry is the
same as the incoming interface of the packet.
Loose: Indicates URPF loose check. When the source address of a packet exists in the FIB
table, the packet is forwarded according to URPF regardless of whether the outgoing
interface of the matching entry is the same as the incoming interface of the packet.
URPF determines how to process a default route by specifying the allow-default-route parameter.
When the allow-default-route parameter is not specified and the source address of packets does not exist in the FIB table, the packets are discarded in URPF strict or loose check mode even if a corresponding default route is found.
When the allow-default-route parameter is specified and the source address of packets does not exist in the FIB table:
In URPF strict check mode, packets pass the URPF check and are forwarded if the outgoing interface of a matching default route is the same as the incoming interface of the packets; packets are discarded if the outgoing interface of a matching default route differs from the incoming interface.
In URPF loose check mode, packets pass the URPF check and are forwarded as long as a default route exists.
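Assuming the interface-view command is urpf { strict | loose } [ allow-default-route ] (the command name and its availability per interface type are assumptions based on common VRP syntax; verify in the product manual), the check modes could be sketched as:

```
# Enable strict URPF check on the interface facing the possibly spoofed network
[SWA] interface ethernet 0/0/1
[SWA-Ethernet0/0/1] urpf strict
# Alternatively, loose check that also accepts packets matching a default route
[SWA-Ethernet0/0/1] urpf loose allow-default-route
```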
Address Resolution Protocol (ARP) security prevents ARP attacks and ARP-based network
scanning attacks using a series of methods such as strict ARP learning, dynamic ARP
inspection (DAI), ARP anti-spoofing, and rate limit on ARP packets.
ARP flood attack: ARP flood attacks, also called denial of service (DoS) attacks, occur
in the following scenarios:
System resources are consumed when the device processes ARP packets and
maintains ARP entries. To ensure that ARP entries can be queried efficiently, a
maximum number of ARP entries is set on the device. Attackers send a large
number of bogus ARP packets with variable source IP addresses to the device.
In this case, ARP entries on the device are exhausted and the device cannot
generate ARP entries for ARP packets from authorized users. Consequently,
communication is interrupted.
When attackers scan hosts on the local network segment or other network segments, they send many IP packets with irresolvable destination IP addresses to attack the device. As a result, the device triggers many ARP Miss messages, generates a large number of temporary ARP entries, and broadcasts ARP Request packets to resolve the destination IP addresses, leading to CPU overload.
ARP spoofing attack: An attacker sends bogus ARP packets to network devices. The
devices then modify ARP entries, causing communication failures.
Attackers initiate ARP spoofing attacks to intercept user packets to obtain accounts
and passwords of systems such as the game, online bank, and file server, leading to
losses.
The VLANIF interface replicates ARP Request packets in each sub-VLAN when learning ARP
entries. If a large number of sub-VLANs are configured for the super-VLAN, the device
generates a large number of ARP Request packets. As a result, the CPU is busy processing
ARP Request packets, and other services are affected. To prevent this problem, limit the
rate of ARP packets on the VLANIF interface of a super-VLAN.
The following commands are optional for the rate limit on ARP packets, globally or on an interface:
The arp anti-attack rate-limit alarm enable command enables the alarm function
for ARP packets discarded when the rate of ARP packets exceeds the limit.
The arp anti-attack rate-limit alarm threshold command sets the alarm threshold
of ARP packets discarded when the rate of ARP packets exceeds the limit. The
default value is 100.
The arp-miss anti-attack rate-limit alarm enable command enables the alarm
function for ARP Miss messages discarded when the rate of ARP Miss messages
exceeds the limit.
The arp-miss anti-attack rate-limit alarm threshold command sets the alarm
threshold for ARP Miss messages discarded when the rate of ARP Miss packets
exceeds the limit. The default value is 100.
The arp-fake expire-time command sets the aging time of temporary ARP entries. The
default value is 1 second.
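A minimal global configuration sketch using only the commands described above; the threshold and aging-time values are illustrative.

```
# Alarm for ARP packets discarded when the rate of ARP packets exceeds the limit
[Huawei] arp anti-attack rate-limit alarm enable
[Huawei] arp anti-attack rate-limit alarm threshold 200
# Alarm for ARP Miss messages discarded when their rate exceeds the limit
[Huawei] arp-miss anti-attack rate-limit alarm enable
[Huawei] arp-miss anti-attack rate-limit alarm threshold 200
# Aging time of temporary ARP entries generated on ARP Miss (default 1 second)
[Huawei] arp-fake expire-time 5
```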
The arp learning strict (system view) command enables strict ARP learning globally.
The arp learning strict (interface view) command enables strict ARP learning on the interface. This command can be used only on Layer 3 interfaces. There are 3 parameters:
Trust: Indicates that the configuration of strict ARP learning is the same as the
global configuration.
The strict ARP learning function can also be used to prevent ARP spoofing attacks.
The interface view (NOT VLANIF) includes: GE interface view, GE sub-interface view,
Ethernet interface view, Ethernet sub-interface view, Eth-Trunk interface view, Eth-Trunk
sub-interface view, VE interface view, or port group view.
The arp-limit command sets the maximum number of ARP entries that an interface can
dynamically learn. By default, the maximum number of ARP entries that an interface can
dynamically learn is the same as the number of ARP entries supported by the device.
Different products may have different maximum value, please refer to the product
documentation.
It is advised to configure the maximum value according to the number of devices (usually terminal devices, such as PCs) under an interface or a VLAN.
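A sketch combining strict ARP learning with an entry limit on an access interface. The interface name and maximum value are illustrative, and the exact arp-limit syntax (in particular whether a VLAN must be specified on a Layer 2 interface) differs between products, so verify it in the product documentation.

```
# Enable strict ARP learning globally
[Huawei] arp learning strict
# Limit the number of ARP entries the access interface can learn dynamically
[Huawei] interface gigabitethernet 0/0/1
[Huawei-GigabitEthernet0/0/1] arp-limit vlan 10 maximum 20
```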
All the above functions are advised to be enabled on the gateway. The ARP packet
validity check function is also advised to be enabled on the access devices.
To defend against ARP address spoofing attacks, configure ARP entry fixing. The fixed-mac, fixed-all, and send-ack modes are applicable to different scenarios and are mutually exclusive:
fixed-mac mode: When receiving an ARP packet, the device discards the packet if the MAC
address does not match that in the corresponding ARP entry. If the MAC address in the ARP
packet matches that in the corresponding ARP entry while the interface number or VLAN ID
does not match that in the ARP entry, the device updates the interface number or VLAN ID in
the ARP entry. This mode applies to networks where user MAC addresses are unchanged but
user access locations often change. When a user connects to a different interface on the
device, the device updates interface information in the ARP entry of the user timely.
fixed-all mode: When the MAC address, interface number, and VLAN ID of an ARP packet
match those in the corresponding ARP entry, the device updates other information about the
ARP entry. This mode applies to networks where user MAC addresses and user access
locations are fixed.
send-ack mode: When the device receives an ARP packet with a changed MAC address,
interface number, or VLAN ID, it does not immediately update the corresponding ARP entry.
Instead, the device sends a unicast ARP Request packet to the user with the IP address
mapped to the original MAC address in the ARP entry, and then determines whether to
change the MAC address, VLAN ID, or interface number in the ARP entry depending on the
response from the user. This mode applies to networks where user MAC addresses and user
access locations often change.
You can configure ARP entry fixing globally. If ARP entry fixing is enabled globally, all interfaces
have this function enabled by default.
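Assuming the global command is arp anti-attack entry-check { fixed-mac | fixed-all | send-ack } enable (the command name is an assumption based on common S-series syntax and is not stated in the text; verify it in the product manual), ARP entry fixing could be enabled as:

```
# Fix ARP entries in fixed-mac mode: discard ARP packets whose MAC address
# does not match the existing entry, but still allow interface/VLAN updates
[Huawei] arp anti-attack entry-check fixed-mac enable
```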
DAI is short for Dynamic ARP inspection. This function is available only for DHCP snooping
scenarios.
The arp anti-attack check user-bind alarm enable command enables the alarm
function for ARP packets discarded by DAI.
The arp anti-attack check user-bind alarm threshold command sets the alarm
threshold for ARP packets discarded by DAI. The default value is 100.
To prevent bogus gateway attacks, enable ARP gateway anti-collision on the gateway. The
gateway considers that a gateway collision occurs when a received ARP packet meets
either of the following conditions:
The source IP address in the ARP packet is the same as the IP address of the VLANIF
interface matching the physical inbound interface of the packet.
The source IP address in the ARP packet is the virtual IP address of the inbound
interface but the source MAC address in the ARP packet is not the virtual MAC
address of the Virtual Router Redundancy Protocol (VRRP) group.
The device generates an ARP anti-collision entry and discards the received packets with the
same source MAC address and VLAN ID in a specified period. This function prevents ARP
packets with the bogus gateway address from being broadcast in a VLAN.
The arp gratuitous-arp send enable command enables gratuitous ARP packet sending.
The arp gratuitous-arp send interval command sets the interval for sending gratuitous
ARP packets. The default value is 90 seconds.
This function defends against attacks from bogus ARP packets in which the source and
destination MAC addresses are different from those in the Ethernet frame header.
This function enables the gateway to check the MAC address consistency in an ARP packet
before ARP learning. If the source and destination MAC addresses in an ARP packet are
different from those in the Ethernet frame header, the device discards the packet as an
attack. If the source and destination MAC addresses in an ARP packet are the same as
those in the Ethernet frame header, the device performs ARP learning.
After receiving an ARP packet, the device checks validity of the ARP packet, including:
Packet length
Validity of the source and destination MAC addresses in the ARP packet
IP address length
The arp validate command can be used to configure the device to check whether the
source MAC address in an ARP packet is the same as that in the Ethernet frame header.
This command is different from the arp anti-attack packet-check sendermac command.
The arp validate command configures ARP packet validity check only on a physical
interface. The arp anti-attack packet-check sender-mac command configures
ARP packet validity check globally.
The arp validate command checks whether the source and destination MAC
addresses in an ARP packet are the same as those in the Ethernet frame header.
The arp anti-attack packet-check sender-mac command checks whether the
source MAC address in an ARP packet is the same as that in the Ethernet frame
header.
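The two commands compared above could be applied as follows; the interface name is illustrative, and the source-mac/destination-mac keywords of arp validate are assumptions based on common S-series syntax.

```
# Global check: compare the source MAC address in the ARP packet
# with the source MAC address in the Ethernet frame header
[Huawei] arp anti-attack packet-check sender-mac
# Interface check on both source and destination MAC addresses
[Huawei] interface gigabitethernet 0/0/1
[Huawei-GigabitEthernet0/0/1] arp validate source-mac destination-mac
```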
When there are a large number of DHCP users, the device needs to learn many ARP entries
and age them. This affects device performance.
ARP learning triggered by DHCP prevents this problem on the gateway. When the DHCP
server allocates an IP address for a user, the gateway generates an ARP entry for the user
based on the DHCP ACK packet received on the VLANIF interface.
Answers:
ABCD
ABC (To create a static ARP entry, an interface number is not specified.)
In the following slides, the router icon is replaced by the firewall icon to highlight security
functions.
Zone is used in firewall. The firewall is often located at the network boundary and uses
zones to represent networks. The firewall adds the interface to the zone and implements
security check between zones, filtering traffic passing through different zones. The security
check is often based on ACLs and the application layer status.
For AR G3 series routers, the number of zones that one router can support depends on its specific type. For instance, AR150, AR160, AR200, AR1200, and AR220x series products support 16 configurable zones; AR2220 series and AR3200 with the SRU40/SRUC board or SRU60 board support 64 configurable zones; AR3200 with the SRU80/SRUF board supports 128 configurable zones; and AR3200 with the SRU200 board or SRU400 board supports 1024 configurable zones. There is only one zone by default, namely the local zone with the highest priority; all other zones must be created manually.
Association between zones and networks must comply with the following principles:
Networks that provide limited services for external networks must be arranged in medium-security zones, such as the demilitarized zone (DMZ).
The security check is triggered on the AR G3 router only when data is transmitted between
zones. The security check of the AR G3 router is implemented based on interzones, for
example, an interzone between an Untrust zone and a Trust zone. Different interzones
have different security policies, such as the packet filtering policy and status filtering
policy.
An incoming packet refers to a packet sent from a low-priority zone to a high-priority zone.
An outgoing packet refers to a packet sent from a high-priority zone to a low-priority zone.
Note:
Packets in the same zone from different interfaces are directly forwarded.
The AR G3 router is located between the internal network and external network. On the
firewall, interfaces connected to the internal network, external network, and demilitarized
zone (DMZ) must be on different network segments. In addition, the network topology
needs to be changed. In this case, the firewall functions as a router. After the routing
mode is enabled, ACL-based packet filtering, ASPF filtering, and NAT functions can be
used.
Session is the basis of the stateful firewall. A session table is created on the firewall when
packets pass through the firewall. The session table uses the quintuple as the Key value,
that is, source IP address, destination IP address, source port, destination port, and
protocol number. The firewall provides a higher security for high-priority zones by
establishing a dynamic session table.
Create a zone: set the priority for the zone based on security requirements
Add the interface to the zone: specify the zone for the interface
Under the interzone view, the firewall enable command enables the firewall function in an interzone.
The firewall-nat session { dns | ftp | ftp-data | http | icmp | tcp | tcp-proxy | udp | sip | sip-media | rtsp | rtsp-media | pptp | pptp-data } aging-time time-value command sets the timeout interval of each entry in the session table.
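The zone configuration steps above can be sketched as follows on an AR G3 router. The zone names, priority values, and interface are illustrative assumptions.

```
# Create zones and set priorities based on security requirements
[Huawei] zone trust
[Huawei-zone-trust] priority 85
[Huawei-zone-trust] quit
[Huawei] zone untrust
[Huawei-zone-untrust] priority 5
[Huawei-zone-untrust] quit
# Add the interface to a zone (specify the zone for the interface)
[Huawei] interface gigabitethernet 0/0/1
[Huawei-GigabitEthernet0/0/1] zone trust
[Huawei-GigabitEthernet0/0/1] quit
# Enable the firewall function in the interzone
[Huawei] interzone trust untrust
[Huawei-interzone-trust-untrust] firewall enable
```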
The dynamic session table is a resource. To prevent the resource from being exhausted,
the session table on the firewall has an aging time, which is set based on applications.
Session entries are aged when no packet is transmitted in the session within a specified
duration.
The display firewall session aging-time command displays the aging time of the
firewall session table. This value is the same as the Time To Live (TTL) value in the
display firewall session table verbose command output.
You can change the aging time. For example, you can change the aging time of ICMP
packets to 15 seconds.
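For example, changing the ICMP aging time to 15 seconds and then verifying the result, using the commands described above:

```
# Set the aging time of ICMP session entries to 15 seconds
[Huawei] firewall-nat session icmp aging-time 15
# Verify; the value also appears as TTL in the output of
# display firewall session table verbose
[Huawei] display firewall session aging-time
```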
Blacklist is a way to filter packets based on source IP addresses. Compared with ACL-based packet filtering, the blacklist uses simpler matching fields to implement high-speed packet filtering, so packets from certain IP addresses can be filtered out. The AR G3 router can add IP addresses to or delete them from the blacklist dynamically. By analyzing packet behaviors, the firewall detects a potential attack from an IP address and adds the IP address to the blacklist so that packets from the attacker are filtered out and discarded.
Blacklist features:
If the expire-time interval parameter is used, blacklist entries will be automatically deleted after the specified aging time. Packets with the corresponding IP addresses will then no longer be filtered.
The entries configured with the aging time are not written into the configuration file, but
can be viewed by using the display firewall blacklist command.
Description: The display firewall blacklist command displays the operation status and entry
information about the blacklist. The [sour-address] parameter displays information about
blacklist entries. If the IP address is not specified, the command displays brief information
about all the blacklist entries. If the IP address is specified, the command displays detailed
information about the specified blacklist entry.
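A sketch of adding a blacklist entry with an aging time. Only display firewall blacklist is taken from the text; the enable and entry-creation command names are assumptions based on common AR syntax, so verify them against the product manual.

```
# Enable the blacklist function and add an attacker's address for 120 seconds
[Huawei] firewall blacklist enable
[Huawei] firewall blacklist 10.1.1.2 expire-time 120
# View brief information about all blacklist entries
[Huawei] display firewall blacklist
```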
Port mapping creates and maintains a system-defined and user-defined port identification
table for each type of application protocol.
The AR G3 router supports host port identification based on basic ACLs. Host port identification uses user-defined port numbers and application protocols created for certain types of packets. For example, a TCP packet sent to a host on the network segment 10.110.0.0 through port 8080 is identified as an HTTP packet. The range of hosts can be specified by a basic ACL.
The ACL used for host port identification differs from the ACL used for packet filtering in the following aspects:
The packet filtering firewall only permits packets transmitted from specified source addresses to specified destination addresses.
In host port identification, a basic ACL can define the hosts by matching source or destination IP addresses of the packets.
port-mapping { dns | ftp | http | sip | rtsp | pptp } port port-number acl acl-number
command configures the mappings between ports and application-layer protocols
Parameter:
acl-number: specifies the number of a basic ACL. It ranges from 2000 to 2999.
Description:
The port-mapping command matches the port numbers with application layer
protocols.
The AR G3 router supports host port identification based on basic ACLs. Host port identification based on basic ACLs uses user-defined port numbers and application protocols to identify packets sent to specific hosts. The range of hosts can be specified by the basic ACL.
display port-mapping [ dns | ftp | http | rtsp | sip | port port-number | pptp ] command
displays mappings between the specified application-layer protocols and ports
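The 8080-to-HTTP example above can be sketched with the port-mapping command; the ACL number and rule are illustrative.

```
# Basic ACL (2000-2999) defining the hosts on network segment 10.110.0.0
[Huawei] acl 2001
[Huawei-acl-basic-2001] rule permit source 10.110.0.0 0.0.255.255
[Huawei-acl-basic-2001] quit
# Identify TCP port 8080 traffic to those hosts as HTTP
[Huawei] port-mapping http port 8080 acl 2001
# View the mappings between HTTP and ports
[Huawei] display port-mapping http
```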
ASPF maintains the connection status information in the data structure of the session table
and maintains access rules of sessions based on the connection status information. ASPF
saves important connection status information that cannot be saved by ACLs. The firewall
detects each packet in data flows and ensures that the packet status and the packet meet
user-defined security policies. The firewall dynamically permits or rejects packets based on
the connection status information. When a session is terminated, session entries are
deleted and the session in the firewall is ended.
ASPF can intelligently detect the TCP three-way handshake and the handshakes for connection teardown. The system rejects packets of incomplete TCP handshake connections.
The ACL-based IP packet filtering technology alone cannot fully protect networks. For example, it is difficult to configure the firewall for multi-channel protocols such as FTP.
ASPF enables Eudemon series firewalls to support application of multiple data connection
protocols over one control connection. In addition, ASPF facilitates enforcement of various
security policies in the scenarios where multiple applications are used. Most multimedia
application protocols (for example, SIP and FTP) use designated interfaces to initialize
control connections and then dynamically select interfaces to transmit data. The interfaces
to be selected are random. Some applications may occupy multiple interfaces. The packet
filtering firewall only blocks application transmission on a single channel to protect internal
networks against attacks. That is, it only blocks applications using fixed interfaces. This
introduces security risks to a network. ASPF monitors the interface used by each connection
of each application, opens a proper channel to allow session data to pass through the
firewall, and closes the channel upon session termination. In this manner, ASPF implements
access control on the applications using dynamic interfaces.
Under the HSB group view, the hsb enable command enables HSB for an HSB group.
NAT log
A NAT log includes the following information: source address, source port, destination address, and destination port of a flow; start and end time of the flow; and status of the flow. The log identifies flows on which NAT is performed based on the translated addresses.
Attack defense log
When a large number of attacks occur, the AR G3 router uses the queue mechanism to
provide alarm logs. The AR G3 router generates alarms in Syslog mode, including the
attack source (the source address) and attack type.
Based on zones and IP addresses, the AR G3 router monitors traffic to check whether the
rate or number of connections exceeds the upper or lower limit.
If the rate or number of connections exceeds the upper limit, the AR G3 router will trigger
and record alarm logs.
If the rate or number of connections exceeds the lower limit, the AR G3 router will trigger
alarms.
Blacklist log
The AR G3 router adds the source IP address of an unauthorized user detected to the
blacklist and generates a blacklist log. The log records the host address and the reason of
adding the host to the blacklist.
Traffic statistics
The log records the traffic statistics to learn the firewall operation status. The traffic
statistics include the total number of connections, number of current connections and split
connections, peak traffic volume, and number of discarded packets. The log also records
the number of attack packets so that you can learn attack events.
The AR G3 router generates a small number of logs for attack defense, traffic
monitoring, and blacklist and address binding. Therefore, the router outputs logs in
the text file in Syslog mode. The log information must be managed and redirected
by the VRP information center, displayed on the terminal screen, or stored and
analyzed by the log collection server.
The AR G3 router generates a large number of logs for NAT/ASPF, so logs are
output in binary mode. The AR G3 router outputs logs directly to the log collection
server for storage and analysis. The information center module on the AR G3 router
does not participate in this process. The logs are transmitted more efficiently in
binary mode than in Syslog mode.
On the AR G3 router, Ethernet 1/0/0 is configured in the trust zone, Ethernet 2/0/0 is
configured in the untrust zone, and Ethernet 2/0/1 is configured in the DMZ. The firewall
outputs the attack defense log to the host.
The info-center loghost ip-address [ channel { channel-number | channel-name } | facility local-number | { language language-name | binary [ port ] } | { vpn-instance vpn-instance-name | public-net } ] command configures the device to output log information to the log server.
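For example, outputting NAT/ASPF logs to a log collection server in binary mode, using the command syntax above; the server address is illustrative.

```
# Output logs to the log collection server 192.168.100.10 in binary mode
[Huawei] info-center loghost 192.168.100.10 binary
```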
Answers:
AD
ABCD
Note:
Authentication: authenticates remote users and checks whether the user is valid.
Authorization: authorizes a user to use specific services. For example, when a user
logs in to the server, the administrator grants the right to users to access and print
files in the server.
Accounting: records all the operations performed by a user and the service type,
start time, and data traffic.
To obtain the right to access certain networks or to use certain network resources, a user
needs to set up a connection with the NAS over a network. In this case, the NAS
authenticates the user and coordinates the connection. The NAS delivers authentication,
authorization, and accounting information to an AAA server (a RADIUS server or an
HWTACACS server).The RADIUS and HWTACACS protocols define user information
exchanged between the NAS and the AAA server.
There are multiple servers in the AAA architecture. Users can determine the servers that
perform authentication, authorization, and accounting. For example, the HWTACACS
server can be used for authentication and authorization, and the RADIUS server for
accounting.
Users can use one or two security services provided by AAA. For example, if a company
only needs to authenticate employees that access certain network resources, only an
authentication server is needed. If the company also needs to record operations performed
by employees, an additional accounting server is needed.
Huawei Datacom Product uses the RADIUS protocol and the HWTACACS protocol to communicate with AAA servers.
When only a small number of access users need to be authenticated and authorized,
deploying an independent AAA server may increase the workload of a network
administrator.
Huawei routers and switches support local authentication. You can maintain a small user
information database on Huawei routers and switches to authenticate and authorize
access users.
Some devices also support local accounting for access users. However, most devices do
not support local accounting because the accounting information is too large to maintain.
RADIUS was originally an AAA protocol designed for dial-in user access. As access modes
become diversified, RADIUS is applicable to more access modes, such as Ethernet access
and ADSL access. RADIUS authenticates and authorizes users to provide access services,
and performs accounting to collect and record usage information about network
resources.
The RADIUS protocol defines the RADIUS packet format and message transmission
mechanism, and specifies UDP as the transmission layer protocol of RADIUS packets. UDP
port 1812 is used as the authentication port and port 1813 as the accounting port.
Client/Server model
Client: The RADIUS client runs on the NAS. It transmits user information to the
specified RADIUS server and processes responses from the RADIUS server. For
example, the RADIUS server accepts or rejects user access requests.
Server: The RADIUS server runs on the core computer or workstation and maintains information relevant to user authentication and network service access. It receives connection requests, authenticates users, and returns the processing results to the clients.
User: stores user information, such as user names, passwords, protocols, and IP addresses.
Client: stores information about the RADIUS client, such as the shared key and the IP address of the client.
The RADIUS client and server exchange authentication messages using a shared key.
The shared key cannot be transmitted on the network, which enhances security. In
addition, to protect the user password from theft on an insecure network, the
password is encrypted during transmission.
The RADIUS server authenticates users in multiple modes, such as PPP-based PAP and
CHAP authentication. Besides, the RADIUS server can be used as the RADIUS client to
communicate with other RADIUS authentication servers and forward RADIUS
authentication and accounting packets.
A user sends a request packet containing the user name and password to the
RADIUS client.
The RADIUS client sends an Access-Request packet, including the user name and
password, to the RADIUS server. The password is encrypted using the MD5
algorithm through a shared key.
The RADIUS server authenticates the user name and password. If authentication succeeds, the RADIUS server sends a RADIUS Access-Accept packet to the RADIUS client. If authentication fails, the RADIUS server sends a RADIUS Access-Reject packet to the RADIUS client. The RADIUS Access-Accept packet contains authorization information.
The RADIUS client permits or rejects the user according to the authentication result.
If the user is permitted, the RADIUS client sends an accounting-start packet to the
RADIUS server.
The RADIUS server sends a response packet to the RADIUS client and starts
accounting.
The user requests to disconnect from the network. The RADIUS client sends an accounting-stop packet to the RADIUS server.
The RADIUS server sends a response packet to the RADIUS client and stops
accounting.
RADIUS messages are transmitted in User Datagram Protocol (UDP) packets. RADIUS
ensures reliability of information exchanged between the RADIUS server and client by
using the timer, retransmission mechanism, and secondary server. RADIUS integrates
authentication and authorization. Each field is described as follows:
(1) Code field: indicates the type of the RADIUS packet. The field is 1 byte long. The value
of the Code field is described as follows:
1. Access-Request packets are sent from the client to the server. The server determines
whether to connect to the user. The Access-Request packet must carry the User-Name
attribute. NAS-IP-Address, User-Password, NAS-Port are optional in the Access-Request
packet.
2. Access-Accept packets are sent from the server to the client. If all the Attribute
values in the Access-Request packet are accepted (authentication succeeds), the type of
this packet is transmitted.
3. Access-Reject packets are sent from the server to the client. If any Attribute value in
the Access-Request packet is rejected (authentication fails), the type of this packet is
transmitted.
4. Accounting-Request packets are sent from the client to the server, requesting
accounting start or stop. The value of the Acct-Status-Type attribute in the packet
distinguishes whether to start or stop accounting.
5. Accounting-Response packets are sent from the server to the client. The server
notifies the client that it has received the Accounting-Request packet and recorded the
accounting information.
(2) Identifier field: used to match response packets with request packets and to detect
request packets retransmitted within a certain period. The field is 1 byte long. Request
and response packets of the same type have the same Identifier value.
(3) Length field: indicates the length of a RADIUS packet, including the Code, Identifier,
Length, Authenticator, and Attribute fields. The field is 2 bytes long, and its value ranges
from 20 to 4096. Bytes beyond the Length value are treated as padding and ignored. If a
received packet is shorter than the value of the Length field, the packet is discarded.
(4) Authenticator field: used to check the response packet of the RADIUS server and encrypt
the password. The field is 16 bytes long. Authenticator has two types: Request
Authenticator and Response Authenticator.
(5) Attribute field: see the next page.
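The fixed header layout described above can be sketched in Python. This is a hypothetical helper for illustration only, not part of any Huawei tool:

```python
import struct

# RADIUS packet codes described above
CODES = {1: "Access-Request", 2: "Access-Accept", 3: "Access-Reject",
         4: "Accounting-Request", 5: "Accounting-Response"}

def parse_radius_header(packet: bytes):
    """Parse the fixed 20-byte RADIUS header: Code (1 byte),
    Identifier (1 byte), Length (2 bytes), Authenticator (16 bytes)."""
    if len(packet) < 20:
        raise ValueError("RADIUS packets are at least 20 bytes long")
    code, identifier, length = struct.unpack("!BBH", packet[:4])
    if not 20 <= length <= 4096:
        raise ValueError("Length must be in the range 20-4096")
    if len(packet) < length:
        raise ValueError("packet shorter than Length field: discard")
    authenticator = packet[4:20]
    attributes = packet[20:length]   # bytes beyond Length are padding
    return CODES.get(code, "Unknown"), identifier, authenticator, attributes

# Example: a minimal Access-Request with no attributes
pkt = struct.pack("!BBH16s", 1, 42, 20, b"\x00" * 16)
print(parse_radius_header(pkt)[0])  # Access-Request
```

The discard rule at the end mirrors the Length-field behavior described above.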
The commonly used RADIUS attributes, such as the user name, password, and NAS IP
address, are defined in RFC 2865, RFC 2866, RFC 2867, and RFC 2868.
Different RADIUS packets carry different attributes. For example, the user name is used
only in authentication request packets and accounting request packets.
As shown in the figure, a sub-attribute encapsulated in attribute 26 (Vendor-Specific)
includes the following four fields:
Vendor-ID: indicates the vendor ID. The leftmost byte is 0. For the code of the other
3 bytes, see RFC 1700. The Vendor-ID of Huawei is 2011.
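The attribute 26 encapsulation can be illustrated with a small Python encoder. This is an illustrative sketch; the helper itself is hypothetical, and only the Vendor-ID value (2011 for Huawei) comes from the text above:

```python
import struct

HUAWEI_VENDOR_ID = 2011  # the leftmost of the 4 Vendor-ID bytes is 0

def build_vsa(sub_type: int, value: bytes,
              vendor_id: int = HUAWEI_VENDOR_ID) -> bytes:
    """Encode RADIUS attribute 26 (Vendor-Specific) carrying one vendor
    sub-attribute: Vendor-ID (4 bytes), then sub-type, sub-length, value."""
    sub = struct.pack("!BB", sub_type, 2 + len(value)) + value
    body = struct.pack("!I", vendor_id) + sub
    return struct.pack("!BB", 26, 2 + len(body)) + body

vsa = build_vsa(1, b"example")
print(vsa[0])                              # attribute type: 26
print(struct.unpack("!I", vsa[2:6])[0])    # vendor ID: 2011
```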
The display radius-attribute command displays RADIUS attributes supported by the device.
Attribute names starting with "HW" are Huawei proprietary attributes.
HWTACACS is used to authenticate and authorize terminal users who need to log in to
Huawei datacom products, and to record the operations performed by those users. As the
HWTACACS client, a Huawei datacom product sends the user name and password to the
HWTACACS server for authentication. If authentication succeeds, the user can log in to the
device. The HWTACACS server records user operations on the device.
19. The HWTACACS server sends an accounting-stop response packet, indicating that the accounting-stop packet has been received.
By default, HWTACACS packet bodies are encrypted for transmission, while the packet
headers are in plain text.
Minor Version: 0x0 by default. In some cases, it must be set to 0x1 for compatibility.
Type: indicates the packet type:
0x01 (Authentication)
0x02 (Authorization)
0x03 (Accounting)
Flags: some functions can be enabled by setting flag bits. For example, when the Flags
value is 0x01, the packet body is not encrypted; when the value is 0x04, multiple
HWTACACS sessions can share one TCP connection.
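The flag bits mentioned above can be decoded with a minimal Python sketch (illustrative only; the constant names are made up, the bit values come from the text):

```python
# Flag bits as given in the text above
FLAG_UNENCRYPTED    = 0x01  # packet body is not encrypted
FLAG_SINGLE_CONNECT = 0x04  # multiple sessions share one TCP connection

def decode_flags(flags: int) -> dict:
    """Report which of the two documented flag bits are set."""
    return {
        "unencrypted": bool(flags & FLAG_UNENCRYPTED),
        "single_connect": bool(flags & FLAG_SINGLE_CONNECT),
    }

print(decode_flags(0x05))  # both bits set
```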
Both the HWTACACS protocol and the RADIUS protocol implement authentication,
authorization, and accounting. They have the following similarities: 1. Adopt the
client/server model. 2. Encrypt the user information by a shared key. 3. Have good
flexibility and extensibility.
Compared with RADIUS, HWTACACS is more reliable in transmission and encryption, and
is more suitable for security control.
A Huawei device can manage users based on their domains. The default authorization,
RADIUS/HWTACACS template, and authentication and accounting schemes can be
configured in a domain.
Authentication, authorization, and accounting are implemented by applying the
authentication scheme, authorization scheme, and accounting scheme to a domain, so you
must preconfigure the authentication scheme, authorization scheme, and accounting
scheme in the AAA view.
The device supports the combination of local, RADIUS, and HWTACACS authentication,
authorization, and accounting. For example, the device provides local authentication, local
authorization, and RADIUS accounting. In practice, the following schemes are used
separately:
Local authentication is often used when there are few users and no accounting
requirements. In this case, the AAA server does not need to be deployed, reducing the
workload of the network administrator.
By default, the administrator uses AAA schemes in the domain default_admin, that is, local
authentication, local authorization, and non-accounting.
You can check the local authentication configuration by logging in to 127.0.0.1 through
Telnet.
The command output shows that the user level of VTY 0 user is set to level 3.
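The local authentication scenario above can be sketched in VRP configuration. This is an illustrative outline only, not a verified procedure; exact command syntax varies by product and software version:

```
aaa
 authentication-scheme default        # local authentication is the default mode
 domain default_admin                 # administrators use this domain by default
local-user admin password cipher huawei
local-user admin privilege level 3    # matches the VTY 0 user level above
local-user admin service-type telnet
user-interface vty 0 4
 authentication-mode aaa
```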
There is one administrator on the server. The user name is admin@admin and the
password is huawei.
You can configure authentication and accounting functions on multiple servers to load
balance server resources.
You can configure the secondary authentication server and accounting server to ensure
nonstop service transmission when the primary authentication server and accounting
server fail.
The RADIUS server template, authentication scheme, and accounting scheme can be
associated with the domain. You can deploy AAA flexibly based on networking. For
example, you can use local authentication and RADIUS accounting.
The configuration in the VTY user view is the same as for local authentication and is not
repeated here.
The display radius-server configuration command can be used to display the RADIUS
server template configuration. This command also displays some default parameters.
Shared key
Add and configure users. If the device is not configured to strip the domain name from
user names, the user names configured on the server must carry the domain name.
Huawei Datacom Product sends an accounting request packet to the RADIUS server. If
Huawei Datacom Product receives a response, the RADIUS server is working properly. The
RADIUS server checks the user name and password during accounting.
Most configurations on the HWTACACS server are the same as those on the RADIUS server.
The following settings also need to be configured on the HWTACACS server:
Shared key
The HWTACACS server needs to grant default user levels to different users.
If there are a small number of devices, use local authentication so that you do not need to
deploy the AAA server.
Information about user access authentication will be described in the Network Access
Control (NAC) course.
Answers:
ABC
BD
Currently, two types of communication devices are available on the network: box devices
and chassis devices.
Box devices are cost-effective, but do not support high availability or uninterrupted
service protection, so they cannot be used at the core layer, at the aggregation
layer, or in data centers. In complex networking environments, because box devices
have low scalability, you have to maintain more network devices and change the
original network structure when adding new devices.
Chassis devices have advantages such as high availability, high performance, and
high port density. Therefore, chassis devices are often used at the core layer, at the
aggregation layer, and in data centers. Compared with box devices, chassis devices
have disadvantages such as high initial investment and high single port cost.
The iStack technology combines the advantages of both box devices and chassis devices. It
virtualizes multiple devices supporting the stacking function into one logical device. This
logical device has advantages like cost-effectiveness of box devices and high scalability and
reliability of chassis devices.
Powerful network scalability: You can increase ports, bandwidth, and processing
capacity of a stack by simply adding member switches to the stack.
Simplified configuration and management: After a stack is set up, multiple physical
devices are virtualized into one logical device. You can log in to the stack through any
member device to configure and manage all the member devices in a unified manner.
Switch roles: Each switch in a stack is a member switch. Member switches are classified
into the following roles:
Master switch: The master switch manages the entire stack. A stack has only one
master switch.
Standby switch: The standby switch is the backup to the master switch. A stack has
only one standby switch.
Slave switch: In a stack, all member switches except the master switch are slave
switches. The standby switch is also a slave switch.
Stack ID: A stack ID, also called a member ID, is used to identify and manage member
switches in a stack. Each member switch in a stack has a unique stack ID.
Stack priority: The stack priority is an attribute of member switches, which helps determine
the role of member switches in role election. A larger priority value indicates a higher
priority. The member switch with a higher stack priority has a higher probability of
becoming the master switch.
Physical member interface: Switches connect to each other to form a stack using physical
member interfaces. Physical member interfaces forward service packets and stack protocol
packets between member switches.
Stack interface: A stack interface is a logical interface that is bound to physical member
interfaces to implement the stacking function. Each member switch has two stack
interfaces, which are named Stack-Portn/1 and Stack-Portn/2. n specifies the stack ID of the
member switch.
A stack contains multiple member switches, each of which has a role. During the setup of
a stack, member switches exchange packets to elect the master, standby, and slave
switches. The master switch collects member information, calculates the stack topology,
and synchronizes the stack topology to all the other member switches.
The rules for electing the master switch are as follows. Start from the first rule until the
master switch is elected.
The switch that has started is preferred over the switch that is starting.
When a slave switch runs a different software version from the master switch, it
synchronizes the software version with the master switch, restarts, and then rejoins the stack.
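The election order can be sketched in Python. The text above states the first rule (a switch that has started beats one still starting); the remaining tie-breakers used here (higher stack priority, then smaller MAC address) are the commonly documented ones and should be treated as assumptions:

```python
def elect_master(switches):
    """switches: list of dicts with 'running' (bool), 'priority' (int),
    'mac' (str, e.g. '0018-8201-0001'). Returns the election winner.
    Order: running first, then higher priority, then smaller MAC."""
    return min(switches,
               key=lambda s: (not s["running"], -s["priority"], s["mac"]))

members = [
    {"id": 1, "running": True,  "priority": 100, "mac": "0018-8201-0002"},
    {"id": 2, "running": True,  "priority": 200, "mac": "0018-8201-0003"},
    {"id": 3, "running": False, "priority": 250, "mac": "0018-8201-0001"},
]
print(elect_master(members)["id"])  # 2: running, with the highest priority
```

Comparing MACs as equal-length strings gives the same order as comparing them numerically.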
Before a stack is set up, each switch is an independent entity and has its own IP address.
You need to manage the switches separately. After the stack is set up, the switches in the
stack form a logical entity, and you can use a single IP address to manage and maintain
the switches uniformly. The IP address and MAC address of the stack are the IP address and
MAC address of the master switch when the stack is set up.
On a single switch that does not join any stack, the interface number is in the format
0/subcard ID/interface sequence number. After the switch joins a stack, the interface
number is in the format stack ID/subcard ID/interface sequence number. For example,
when a switch does not join a stack, the interface number is GigabitEthernet0/0/1; when
the switch joins a stack, the interface number is GigabitEthernet2/0/1 if the stack ID is 2.
A ring topology is more reliable than a chain topology. When a link fault occurs in the
chain topology, the stack splits. When a link fault occurs in the ring topology, a chain
topology is formed, which prevents stack services from being affected.
During stack maintenance, the master device collects new topologies, and takes different
actions when a new member device is added to the stack.
If the newly added device is not in a stack (for example, it has the stacking function
configured and then is powered off, connected to the stack using a stack cable, and
powered on for startup), the device is elected as the slave device, and the master
and standby devices in the stack remain unchanged.
If the new device has been in a stack (for example, it has the stacking function
configured and then is connected to the existing stack using a stack cable), the two
stacks are merged.
When two stacks merge, the master switches of the two stacks compete, and the better one
becomes the master switch of the new stack. The stack that contains the new master switch
remains unchanged, and its services are not affected. All switches in the other stack restart
and join the new stack, and the master switch synchronizes its configuration to them.
Services in that stack are interrupted.
You can remove a member switch from a stack. The stack may be affected differently
depending on member switch roles:
If the master switch leaves the stack, the standby switch becomes the new master
switch, updates the stack topology, and specifies a new standby switch.
If the standby switch leaves the stack, the master switch updates the stack topology
and specifies a new standby switch.
If a slave switch leaves the stack, the master switch updates the stack topology.
After a stack is set up, if the master switch becomes faulty or leaves the stack, the standby
switch becomes the new master switch. The new master switch then specifies a new standby
switch and synchronizes data with it.
After a stack is set up for the first time, the MAC address of the master switch is the
stack MAC address. When the master switch is faulty or leaves the stack, the MAC
address of the new master switch becomes the stack MAC address if the function
that delays the stack MAC address switchover is disabled. By default, the function
that delays the stack MAC address switchover is enabled, and the delay time is 10
minutes.
When the master switch is faulty or leaves the stack, the new master switch changes
the stack MAC address to its own MAC address if the previous master switch does
not rejoin the stack before the MAC address switchover timer expires. If the previous
master switch rejoins the stack before the MAC address switchover timer expires,
the switch becomes a slave switch and the stack MAC address remains unchanged.
In this case, the stack MAC address is the MAC address of a slave switch.
When a slave switch leaves a stack, the master switch changes the stack MAC
address to its own MAC address, if the MAC address of the leaving switch is the
same as the stack MAC address and the leaving switch does not rejoin the stack
before the MAC address switchover timer expires.
As shown in the figure, a stack splits into multiple stacks when some powered-on member
switches are removed from a running stack, or when multiple nodes on the stack cable fail.
The resulting stacks have the same configuration, which causes conflicts of IP addresses
and MAC addresses.
When a stack splits, two stacks with the same IP address and MAC address exist on the
network. That is, a dual-active scenario occurs. To improve system availability after a stack
splits, a mechanism is required to detect a dual-active scenario and take recovery action.
Dual-active detection (DAD) is a method to detect a dual-active scenario and take recovery
action, ensuring network stability.
DAD in direct mode: DAD is performed over a dedicated direct link between member switches.
DAD in relay mode: DAD is configured on the inter-chassis Eth-Trunk in a stack, and the
relay function is configured on the proxy device.
After a stack splits into multiple stacks, the stacks exchange DAD packets on the DAD link,
and each stack compares the information in the received packets with its local information.
The rules for electing the master stack are the same as the rules for electing the master
switch. If a stack is elected as the master stack, its switches remain in the Active state
and continue forwarding service packets. If a stack is elected as the standby stack, its
switches enter the Recovery state, shut down all service interfaces except those excluded
from shutdown, and stop forwarding service packets.
As technology develops, both the hardware and software of Huawei box switches are
continuously upgraded, and iStack features may change between versions.
In addition, after a stack is set up, some features may be affected. For example, the N:1
VLAN Mapping feature is not available for a stack.
Confirming that the switches support the stacking function and starting the devices
You need to perform this step only when a stack is connected using stack card
connection. If a stack is connected using service interface connection, skip this step.
In a stack connected using stack card connection, you can configure commands only
when a stack card is inserted.
After enabling or disabling the stacking function on a device, restart the device to
make the configuration take effect.
As the network size rapidly increases, the number of access interfaces provided by an
access switch needs to be increased, and the network must be easy to manage and
maintain. However, a single access switch cannot meet these requirements.
The configuration on S5700LI is used as an example.
The stack port enable command configures service interfaces as physical member
interfaces. After a service interface is configured as a physical member interface, the
service configuration on the interface becomes invalid.
On the S5700LI, only the last four service interfaces can be configured as physical
member interfaces.
On the S5710EI, only the last four service interfaces can be configured as physical
member interfaces. This number can be increased by installing a subcard. The
ES5D21X02S00 can be installed to allow four more interfaces to be configured as
physical member interfaces.
On the S6700EI, only four or eight service interfaces with contiguous IDs can be
configured as physical member interfaces. The last ID of the service interfaces that
can be added to the same stack interface must be a multiple of four, for example,
interfaces with IDs from 1 to 4, 5 to 8, or 1 to 8 can be added to the same stack
interface but interfaces with IDs from 2 to 5 or 3 to 6 cannot.
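The S6700EI rule above can be expressed as a small Python check (an illustrative helper, not a Huawei tool):

```python
def valid_s6700ei_group(ids):
    """Check whether a set of service-interface IDs can be bound to one
    stack interface on the S6700EI, per the rules above: four or eight
    contiguous IDs, with the last ID a multiple of four."""
    ids = sorted(ids)
    contiguous = ids == list(range(ids[0], ids[0] + len(ids)))
    return len(ids) in (4, 8) and contiguous and ids[-1] % 4 == 0

print(valid_s6700ei_group([1, 2, 3, 4]))   # True
print(valid_s6700ei_group([2, 3, 4, 5]))   # False: last ID not a multiple of 4
```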
The interface stack-port command displays the stack interface view. Each member
switch has two stack interfaces, named Stack-Portn/1 and Stack-Portn/2, where n
specifies the stack ID of the member switch. After entering the stack interface view
using the interface stack-port command, you can configure attributes for the
stack interface.
The stack slot priority command sets the stack priority of a member switch in a stack.
A larger priority value indicates a higher priority and a greater probability that the
switch is elected as the master switch.
In a stack connected using stack card connection, enable the iStack function before
running the command.
The stack slot renumber command changes the stack ID of a specified member switch in
a stack.
After the stack ID is configured, the configuration takes effect only after the device
restarts.
In a stack connected using stack card connection, enable the iStack function before
running the command.
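The iStack commands described above can be combined into a configuration sketch. This is an illustrative outline only; exact syntax and the interfaces usable as physical member interfaces vary by model and software version:

```
stack slot 0 priority 200      # higher priority: more likely to become master
stack slot 0 renumber 1        # takes effect after a restart
interface stack-port 0/1
 port interface gigabitethernet 0/0/27 enable   # service port becomes a physical member port
```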
The display stack command displays information about the member switches in a stack.
MAC switch delay time: MAC address switchover time of the stack. You can run the stack
timer mac-address switch-delay command to set the MAC address switchover time of
the stack.
Answers:
BD
High reliability
You can increase ports, bandwidth, and processing capacity of a CSS by simply
adding member switches to the CSS.
After a CSS is set up, multiple physical devices are virtualized into one logical device.
You can log in to the CSS through any member device to configure and manage all
the member devices in a unified manner.
Cluster Switch System Generation 2 (CSS2): This technology sets up a CSS by connecting
CSS cards on switch fabric units (SFUs). In addition to the existing features, CSS2 supports
1+N backup of MPUs. CSS2 is supported by the S12700.
Compared with the traditional CSS, CSS2 has the following advantages:
Clustering through CSS cards on the SFUs: Compared with the service port connection
mode, control packets and data packets of a cluster only need to be forwarded once by
the SFUs and do not go through service cards. CSS2 minimizes the impact of software
failures, reduces risks of service interruption caused by service cards, and significantly
shortens the transmission latency. Compared with the CSS connection mode through CSS
cards on MPUs, this mode allows for simpler cable connections and faster system startup
because the SFUs and MPUs start up simultaneously.
1+N backup of MPUs in a cluster: A dual-chassis CSS can work normally as long as one
MPU in either chassis is running normally. Compared with the service port connection mode,
in which each chassis must have at least one MPU working normally, CSS2 is more
reliable. Compared with the CSS connection mode through CSS cards on MPUs, in which
each chassis must have two MPUs installed, CSS2 is more flexible.
Master switch: A master switch manages the CSS. A CSS has only one master
switch.
Standby switch: The standby switch is the backup to the master switch. When the
master switch becomes faulty, the standby switch takes over the master role. A CSS
has only one standby switch.
CSS ID
CSS IDs are used to identify and manage member switches in a CSS. Each member
switch has a unique CSS ID.
CSS Priority
The CSS priority of a member switch determines the role of the member switch in
role election. A larger value indicates a higher priority and higher probability that the
member switch is elected as the master switch.
After a CSS is set up, member switches send competition packets to each other to elect
the master switch. A switch is elected as the master switch to manage the CSS. The other
switch becomes the standby switch.
The rules for electing the master switch among S series switches are as follows. Start from
the first rule until the master switch is elected.
The switch that has started is preferred over the switch that is starting.
When the standby switch runs a different software version from the master switch, it
synchronizes the software version with the master switch, restarts, and then rejoins the CSS.
After the CSS is set up, on the control plane, the active SRU of the master switch
becomes the active SRU of the CSS and manages the CSS. The active SRU of the standby
switch becomes the standby SRU of the CSS and plays a standby role in CSS management.
The standby SRUs of the master switch and the standby switch are candidate standby
SRUs of the CSS.
Before a CSS is set up, each switch is an independent entity and has its own IP address.
You need to manage the switches separately. After the CSS is set up, the switches in the
CSS form a logical entity, and you can use a single IP address to manage and maintain the
switches uniformly. The IP address and MAC address of the CSS is the IP address and MAC
address of the master switch when the CSS is set up for the first time.
After a CSS is set up, all member switches function as one logical switch on the network,
and the master switch manages the resources of all member switches. You can log in to
the CSS through a service interface on an LPU, or through the management interface or
serial interface on the active SRU, to manage and maintain the CSS.
On a single switch that does not join any CSS, the interface number is in the format slot
ID/subcard ID/interface sequence number. After the switch joins a CSS, the interface
number is in the format CSS ID/slot ID/subcard ID/interface sequence number. For
example, when a switch does not join a CSS, the interface number is GigabitEthernet1/0/1;
after the switch joins a CSS, the interface number is GigabitEthernet2/1/0/1 if the CSS ID
is 2.
Different from the service traffic forwarded on a single switch, the service traffic
forwarded between two switches of the CSS needs to pass through the switching fabric
twice. Similar to the single switch, the CSS also processes the packet content in both
inbound direction and outbound direction.
Two switches form a CSS. To ensure reliable traffic transmission, an inter-chassis
Eth-Trunk interface is configured as the outbound interface for traffic. Data from a
downstream switch enters the CSS through the Eth-Trunk interface and is forwarded
preferentially through the local upstream Eth-Trunk to an upstream switch.
When a member interface in the Eth-Trunk fails, traffic can be transmitted between
devices through CSS links. This ensures reliable data transmission.
After you enable the CSS function and then restart the device, the system adds the file
name extension .bak to the previous configuration file.
If the original file name extension of the configuration file is .cfg, the file name
extension of the backup configuration file is .cfg.bak.
If the original file name extension of the configuration file is .zip, the file name
extension of the backup configuration file is .zip.bak.
If you want to restore the original configuration after disabling the CSS function, change
the extension of the backup configuration file and specify the file as the configuration file
for next startup. Then restart the device to restore the original configuration.
Automatic configuration file backup prevents configuration loss caused by incorrect
operations. It is recommended that you save the configuration manually when upgrading
the system software and adjusting the network.
After an active/standby switchover is performed on the SRUs of the master switch or the
standby switch, the roles of the switches and SRUs change as follows:
Switchover on the master switch: The standby switch becomes the master switch,
and the previous standby SRU of the CSS becomes the new active SRU of the CSS.
The previous active SRU of the CSS restarts. The candidate standby SRU of the
previous master switch becomes the standby SRU of the CSS and synchronizes data
from the new active SRU of the CSS.
Switchover on the standby switch: The master switch and the standby switch do not
change their roles. The active SRU of the standby switch (the previous standby SRU
of the CSS) restarts. A candidate standby SRU of the CSS becomes the standby SRU
of the CSS and synchronizes data from the active SRU of the CSS.
Clustering through CSS cards on the MPUs: Inter-chassis unicast packets are forwarded
along the following path: SFU on the MPU of the local chassis > CSS card on the local
chassis > cluster cable > SFU on the MPU of the peer chassis > LPU on the peer chassis >
the corresponding port.
Service port connection mode: Inter-chassis unicast packets are forwarded in a similar
manner as in clustering through CSS cards on the MPUs. Packets are forwarded through
the SFUs of both the local and peer chassis before they reach the corresponding port.
Clustering through CSS cards on the SFUs: Inter-chassis unicast packets are forwarded
along the following path: SFU of the local chassis > CSS card on the local chassis > cluster
cable > SFU of the peer chassis > LPU on the peer chassis > the corresponding port.
Compared with clustering through CSS cards on the MPUs and the service port connection
mode, clustering through CSS cards on the SFUs forwards packets through the SFUs
directly, without sending packets to the MPUs. In addition, the S12700 provides an
independent monitoring/power supply plane, which ensures that LPUs can function
normally and data packets can be forwarded without interruption even when all MPUs in
either chassis are faulty or removed. If all MPUs of the master switch are faulty or
removed, a master/standby switchover is triggered. The S12700 cluster can run normally
even when the standby chassis has no MPU installed. During this process, the cluster
sends an alarm every 30 minutes, indicating that the standby chassis has no MPU
installed. The alarm ID and name are CSSM_1.3.6.1.4.1.2011.5.25.183.3.3.2.16 and
hwCssStandbyError. You can run only display commands, not configuration commands,
when the standby chassis has no MPU installed. You are advised to install an MPU in the
standby chassis as soon as possible.
In the CSS system, if the MPU fails to receive the heartbeat packet from the LPU within 20
seconds, the possible causes are as follows:
Two MPUs of the standby switch are reset using management devices.
If the two switches run normally after the CSS splits, they use the same IP address and
same MAC address to communicate with other devices on the network because their
global configurations are the same. This causes conflicts of IP addresses and MAC
addresses and faults on the entire network. To rectify these faults, use Multi-Active
Detection.
When a CSS splits, two CSSs with the same IP address and MAC address exist on the
network. That is, a multi-active scenario occurs. To improve system availability after a CSS
splits, a mechanism is required to detect a multi-active scenario and take recovery action.
MAD in direct mode: MAD is performed between member switches in a CSS over a
dedicated direct link.
MAD in relay mode: MAD is configured on the inter-chassis Eth-Trunk in a CSS, and the
relay function is configured on the proxy device.
After a CSS splits into multiple CSSs, the CSSs exchange MAD competition packets on the
MAD link. The CSS compares information in the received packet with local competition
information. The rules for electing the CSS master are the same as the rules for electing
the master switch. If the switch in the CSS is elected as the master switch, the switch
remains in Active state and continues forwarding service packets. If the switch in the CSS
is elected as the standby switch, the switch shuts down all its service interfaces except
those excluded from shutdown, enters the Recovery state, and stops forwarding service
packets.
After the CSS link recovers, the switch in the Recovery state restarts and restores all the service interfaces that were shut down.
CSS fast upgrade provides a non-stop traffic forwarding mechanism when the software of
member devices is being upgraded. This minimizes the upgrade impact on services.
When fast upgrade is performed, the standby switch restarts with the new version and
completes the upgrade. Data is forwarded by the master switch. If the upgrade fails, the
standby switch restarts and rolls back to the previous version. After completing the
upgrade, the standby switch becomes the master switch and transmits data. The previous
master switch restarts with the new version and becomes the standby switch after the
upgrade as shown in the figure.
Devices need to be dual-homed to the CSS and the CSS is configured to preferentially
forward local traffic. Otherwise, data transmission may be interrupted.
When fast upgrade is performed, the cluster sends an alarm indicating that the standby
chassis has no MPU installed. This alarm is normal during the upgrade and can be ignored.
A physical member port is a service port used to set up a cluster link between
CSS member switches. Physical member ports forward service packets or CSS
protocol packets between member switches.
A logical CSS port is bound to physical member ports for CSS connection. Each
CSS member switch supports two logical CSS ports.
For details of Software and Hardware Requirements of a CSS, see the product document.
The numbers in red in this figure are sequence numbers of labels. You must connect all the
ports in full meshed mode based on the sequence numbers.
Each CSS card provides eight SFP+ ports. Ports numbered from 1 to 4 belong to one
group, and ports numbered 5 to 8 belong to the other group. Ports in one group can be
randomly connected to any port in the corresponding group and at least one port in a
group must be connected. Full meshed connection is recommended.
For details of Software and Hardware Requirements of a CSS, see the product document.
1+0 networking: Each member switch has one logical CSS port and connects to
another member switch through the physical member ports located on the same
service card.
1+1 networking: Each member switch has two logical CSS ports, and physical
member ports of the logical CSS ports are located on two service cards. Cluster links
on the two service cards implement link redundancy.
For details of Software and Hardware Requirements of a CSS, see the product document.
It is recommended that you connect the same number of cluster cables to the CSS cards (if
not, the total cluster bandwidth will be affected) and connect CSS ports on the two
member switches based on port numbers.
If the SFU model used in the member switches is ET1D2SFUD000, it is recommended that
the number of cluster cables connected to each CSS card be an even number.
As the network scale rapidly increases, a core switch cannot satisfy data forwarding
requirements. The data forwarding capability needs to be doubled while the return on
investment (ROI) is maximized. In addition, network reliability needs to be improved
through redundancy backup to manage and maintain the network easily.
Configure the CSS ID, CSS priority, and connection mode of a switch to form a CSS.
Multiple physical member interfaces can be added to a CSS interface to improve the
bandwidth and reliability of a CSS link.
Enable the CSS function on switches to make the configuration take effect and set up a
CSS successfully. Use CSS cables or optical fibers to connect CSS interfaces on devices and
restart the devices.
The set css priority command sets the CSS priority of a member switch in a CSS.
After setting a CSS priority, restart the switch to make the configuration take effect.
If you do not specify chassis chassis-id in the command, the command takes effect
only on the master switch.
The set css mode command configures the CSS connection mode of devices.
After the CSS connection mode is changed, the configuration takes effect only after
the device is restarted.
The two connection modes cannot be used at the same time and cannot be
switched dynamically.
When devices have SRUA/SRUBs installed, both connection modes can be used.
When devices have SRUDs installed, only service interface connection is supported
and this command is not supported.
The interface css-port command configures a CSS interface and displays the CSS
interface view.
Only a CSS connected through service interface connection supports this command.
If a CSS interface is canceled, all physical member interfaces added to the CSS interface
will be deleted. If all the CSS interfaces are canceled, a CSS splits.
A CSS interface maps to one board, so physical member interfaces on two different
boards cannot be added to the same CSS interface.
A CSS interface can be connected to only one CSS interface, not two.
Before running the undo interface css-port command to cancel a CSS interface, run
the shutdown interface command to shut down all the physical member interfaces of
the CSS interface.
The port interface enable command configures a specified interface as a physical member
interface, and adds the physical member interface to a CSS interface.
Only a CSS connected through service interface connection supports this command.
After running this command, save the configuration immediately to make the configuration
take effect. Otherwise, the configuration does not take effect and you need to run the
command again.
You can use the css enable command only when the CSS function is disabled.
You can use the undo css enable command only when the CSS function is enabled.
If you do not specify the chassis chassis-id parameter in the undo css
enable command, the CSS function is disabled only on the master switch.
If you use the all keyword in the undo css enable command, the CSS function is
disabled on both the master and standby switches. If the configuration fails on one
switch, the CSS function fails to be disabled on the two switches.
The set css id command sets the frame ID of a member switch in a CSS.
When setting up a CSS, run the set css id command to set different frame IDs for
the two member switches. The frame IDs of the two switches in a CSS must be 1
and 2; otherwise, the CSS cannot be set up.
After setting a frame ID, restart the switch to make the configuration take effect.
If you do not specify chassis chassis-id in the command, the command takes effect
only on the master switch.
On: enabled
Off: disabled
CSS master force: Whether the local switch is specified as the master switch in the CSS. To
specify the master switch, run the css master force command.
CSS status: Status of the device in the CSS
When the CSS splits because of a CSS link fault and there are two CSSs with the same
configuration on the network, you can use MAD to reduce the impact of a CSS split on
the network.
The mad detect mode direct command configures Multi-Active detection (MAD) in
direct mode on an interface.
After MAD in direct mode is configured on an interface, the interface is blocked and
service forwarding is affected. Disabling MAD in direct mode on an interface
restores the forwarding function on the interface. If a loop exists on the network, a
broadcast storm occurs.
The display mad command displays the Multi-Active detection (MAD) configuration.
After MAD detects a multi-active scenario, the CSS that fails the master election
enters the Recovery state and blocks all of its service interfaces except those
excluded from shutdown.
After a CSS is divided because of a link failure, configuration collision between two CSSs
occurs. MAD can be used to detect a Multi-Active scenario caused by CSS division and
take recovery action, ensuring network stability.
The mad detect mode relay command configures Multi-Active detection (MAD) in relay
mode on an interface.
The mad relay command enables the relay function on a specified interface of a proxy
device.
In MAD in relay mode, you need to use the mad relay command to configure the
relay function on a specified Eth-Trunk interface of a proxy device. Member
interfaces of the Eth-Trunk interface forward MAD packets to each other so that
member switches can exchange MAD packets.
When higher uplink bandwidth is required, you can connect a new member switch to the
original one using cluster cables so that the two switches set up a CSS. Then bundle
physical links of the member switches into a link aggregation group to increase the uplink
bandwidth.
Two switches are virtualized into a logical switch. This simplified network does not require
MSTP or VRRP, so network configuration is much simpler. Inter-chassis link aggregation
also speeds up network convergence and improves network reliability.
Long-distance clustering enables switches far from each other to form a CSS. For example,
users on each floor of two buildings connect to the aggregation switches through
respective corridor switches, and the aggregation switches connect users to the external
network. The aggregation switches in the two buildings can be connected using cluster
cables to form a CSS. The two aggregation switches then work like one device, simplifying
the network structure and reducing device management and maintenance costs. In addition,
two links to the external network are available to users in each building, which improves
network reliability.
Answers:
False. When a device enters the CSS state, it automatically backs up the previous
configuration file.
NTP evolved from the Time Protocol and the ICMP Timestamp message but is specifically
designed to maintain accuracy and robustness.
NTPv1 puts forward complete NTP rules and algorithms for the first time, but it does
not support authentication and control messages.
NTPv2 supports authentication and control messages.
NTPv3 uses correctness principles and improves clock selection and filter algorithms,
and it is widely used.
NTPv3 applies only to IPv4 networks. As IPv6 developed and network security
requirements grew, NTPv4 was introduced. NTPv4 is an extension of NTPv3 and is
compatible with NTPv3.
NTP applies to the following situations, where the clocks of all the devices on a network
need to be consistent: In network management, analysis of logs or debugging messages
collected from different routers requires a common time reference.
An accounting system requires that the clocks of all the devices be consistent.
When several systems work together to process a complicated event, they have to
refer to the same clock to ensure a correct execution order.
Incremental backup between a backup server and clients requires that their clocks
be synchronized.
Some applications need to obtain the time at which a user logs in to a system or a
document is modified.
Presuming that:
Before the clocks of RouterA and RouterB are synchronized, the clock of RouterA is
10:00:00 a.m. and the clock of RouterB is 11:00:00 a.m.
RouterB acts as an NTP time server, and RouterA must synchronize its clock with
that of RouterB.
It takes one second to unidirectionally transmit an NTP message between RouterA
and RouterB.
Both RouterA and RouterB take one second to process an NTP message.
When the NTP message leaves RouterB, RouterB adds the transmit timestamp
11:00:02 a.m. (T3) to the NTP message, indicating the time when the message
leaves RouterB.
When RouterA receives this response message, it adds a new receive timestamp,
10:00:03 a.m. (T4).
NOTE:
The preceding example is only a brief description of the operating principle of NTP.
In fact, NTP uses the standard algorithms in RFC 1305 to ensure the precision of
clock synchronization.
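The offset and delay that RouterA derives from these timestamps follow the standard RFC 1305 formulas. The sketch below uses the assumed values of the example (a one-hour clock difference, one second of transit each way, one second of processing on RouterB); the helper names are illustrative:

```python
# Sketch of the RFC 1305 offset/delay computation for the RouterA/RouterB
# example above. Timestamp values are the assumed ones from the example.

def hms(h, m, s):
    """Convert a wall-clock time to seconds since midnight."""
    return h * 3600 + m * 60 + s

def ntp_offset_and_delay(t1, t2, t3, t4):
    """t1: client send, t2: server receive, t3: server send, t4: client receive.
    Returns (clock offset, round-trip delay) in seconds."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

t1 = hms(10, 0, 0)   # RouterA sends the request (RouterA's clock)
t2 = hms(11, 0, 1)   # RouterB receives it (RouterB's clock)
t3 = hms(11, 0, 2)   # RouterB sends the reply (RouterB's clock)
t4 = hms(10, 0, 3)   # RouterA receives the reply (RouterA's clock)

offset, delay = ntp_offset_and_delay(t1, t2, t3, t4)
print(offset, delay)  # offset: 3600.0 s (one hour), round-trip delay: 2 s
```

RouterA then advances its clock by the computed one-hour offset to synchronize with RouterB.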
In a synchronization subnet, the primary time server sends time information to other
secondary time servers using the NTP protocol. The secondary time servers then
synchronize their clocks with the primary time server. These servers are hierarchically
connected, and each level of the hierarchy is called a stratum and assigned a layer number.
For example, the primary time server is a stratum 1 server, the secondary time servers are
stratum 2 servers, and so on. A larger stratum number indicates lower clock precision.
Synchronization subnet consists of the primary time server, secondary time servers,
clients, and interconnecting transmission paths.
Primary time server directly synchronizes its clock with a standard reference clock
using a cable or radio. The standard reference clock is usually a radio clock or the
Global Positioning System (GPS).
Secondary time server synchronizes its clock with the primary time server or other
secondary time servers on the network. A secondary time server transmits the time
information to other hosts on a LAN through NTP.
Under normal circumstances, the primary time server and the secondary time servers in a
synchronization subnet are arranged in a hierarchical-master-slave structure. In this
structure, the primary time server is located at the root, and the secondary time servers are
arranged close to leaf nodes. As their strata increase, the precision decreases accordingly.
The extent to which the precision of the secondary time servers decreases depends on
stability of network paths and the local clock.
NOTE: When the synchronization subnet has multiple primary time servers, the optimal
server can be selected using an algorithm.
When faults occur in one or more primary/secondary time servers or network paths
interconnecting them, the synchronization subnet will automatically be
reconstructed into another hierarchical-master-slave structure to obtain the most
precise and reliable time.
When all primary time servers in the synchronization subnet become invalid, a
standby primary time server runs.
When all primary time servers in the synchronization subnet become invalid, other
secondary time servers are synchronized among themselves. These secondary time servers
become independent of the synchronization subnet and automatically run at the last
determined time and frequency. When a router with a stable oscillator becomes
independent of the synchronization subnet for an extended period of time, its timing error
can be kept less than several milliseconds in a day because of highly precise calculations.
A device on the network can synchronize its clock in the following manners.
Synchronizing with the local clock: The local clock is used as the reference clock.
Synchronizing with another device on the network: This device is used as an NTP
clock server to provide a reference clock for the local clock.
If both manners are configured, the device selects an optimal clock source by comparing
the clocks determined in the two manners. The clock of a lower stratum is preferred.
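The "lower stratum is preferred" rule amounts to a simple selection over the candidate sources. A minimal sketch (the candidate list and its values are illustrative, not from a real device):

```python
# Minimal sketch: the device prefers the clock source with the lowest stratum.
candidates = [
    {"source": "local clock", "stratum": 8},
    {"source": "ntp server 10.0.0.1", "stratum": 2},
]

best = min(candidates, key=lambda c: c["stratum"])
print(best["source"])  # the stratum-2 server wins over the local clock
```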
You can select an appropriate operating mode as required. When an IP address of the NTP
server or peer device cannot be determined or a large number of devices require
synchronization on a network, the broadcast or multicast mode can be used for clock
synchronization. In server and peer mode, the devices synchronize their clocks with a
specified server or peer, which increases clock reliability.
The unicast server/client mode runs at the higher strata of a synchronization subnet. In this
mode, devices need to obtain the IP address of the server in advance.
Client: A host running in client mode (client for short) periodically sends packets to
the server. The Mode field in the packets is set to 3, indicating that the packets are
coming from a client. After receiving a reply packet, the client filters and selects
clock signals, and synchronizes its clock with the server that provides the optimal
clock. A client does not check the reachability and stratum of the server. Usually, a
host running in this mode is a workstation on a network. It synchronizes its clock
with the clock of a server but does not change the clock of the server.
Server: A host running in server mode (server for short) receives the packets from
clients and responds to the packets received. The Mode field in reply packets is set
to 4, indicating that the packets are coming from a server. Usually, the host running
in server mode is a clock server on a network. It provides synchronization
information for clients but does not change its own clock.
During and after the restart, the host operating in client mode periodically sends NTP
request messages to the host operating in server mode. After receiving the NTP request
message, the server swaps the position of destination IP address and source IP address,
and the source port number and destination port number, fills in the necessary
information, and sends the message to the client. The server does not need to retain state
information when the client sends the request message. The client freely adjusts the
interval for sending NTP request messages according to the local conditions.
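The stateless server behavior described above (swap the addresses and ports, set the Mode field, fill in timestamps, keep no per-client state) can be sketched as a pure function. Field names and addresses are illustrative:

```python
# Sketch of stateless server-side handling of a client (Mode 3) request.

def build_reply(request, recv_time, send_time):
    """Swap src/dst address and port, set Mode to 4 (server), and fill in
    the receive and transmit timestamps. No client state is retained."""
    return {
        "src_ip": request["dst_ip"], "dst_ip": request["src_ip"],
        "src_port": request["dst_port"], "dst_port": request["src_port"],
        "mode": 4,                         # reply comes from a server
        "originate": request["transmit"],  # client's transmit time (T1)
        "receive": recv_time,              # server receive time (T2)
        "transmit": send_time,             # server transmit time (T3)
    }

req = {"src_ip": "10.1.1.1", "dst_ip": "10.1.1.2",
       "src_port": 123, "dst_port": 123, "mode": 3, "transmit": 36000}
reply = build_reply(req, recv_time=39601, send_time=39602)
print(reply["dst_ip"], reply["mode"])  # reply goes back to 10.1.1.1 with Mode 4
```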
In symmetric peer mode, the symmetric active peer initiates an NTP packet with the Mode
field set to 3 (the client mode), and the symmetric passive peer responds with an NTP
packet with the Mode field set to 4 (the server mode). This initial exchange measures the
network delay, after which the devices at both ends enter the symmetric peer mode.
Symmetric active peer: A host that functions as a symmetric active peer sends
packets periodically. The value of the Mode field in a packet is set to 1. This
indicates that the packet is sent by a symmetric active peer, without considering
whether its symmetric peer is reachable and which stratum its symmetric peer is on.
The symmetric active peer can provide time information about the local clock for its
symmetric peer, or synchronize the time information about the local clock based on
that of the symmetric peer clock.
Symmetric passive peer: A host that functions as a symmetric passive peer receives
packets from the symmetric active peer and sends reply packets. The value of the
Mode field in a reply packet is set to 2. This indicates that the packet is sent by a
symmetric passive peer. The symmetric passive peer can provide time information
about the local clock for its symmetric peer, or synchronize the time information
about the local clock based on that of the symmetric peer clock.
The prerequisite for having a host run in symmetric passive mode is that: The host receives
an NTP packet from a symmetric peer running in symmetric active peer mode. The
symmetric active peer has a stratum lower than or equal to that of the host, and its clock is synchronized.
The broadcast mode is applied to the high speed network that has multiple workstations
and does not require high accuracy. In a typical scenario, one or more clock servers on the
network periodically send broadcast packets to the workstations. The delay of packet
transmission in a LAN is at the milliseconds level.
Broadcast server: A host that runs in broadcast mode sends clock synchronization
packets to the broadcast address 255.255.255.255 periodically. The value of the
Mode field in a packet is set to 5. This indicates that the packet is sent by a host that
runs in broadcast mode, without considering whether its peer is reachable and
which stratum its peer is on. The host running in broadcast mode is usually a clock
server running high-speed broadcast media on the network, which provides
synchronization information for all of its peers but does not alter the clock of its
own.
Broadcast client: The client listens to the clock synchronization packets sent from
the server. When the client receives the first clock synchronization packet, the client
and server exchange NTP packets whose values of Mode fields are 3 (sent by the
client) and the NTP packets whose values of Mode fields are 4 (sent by the server).
In this process, the client enables the server/client mode for a short time to
exchange information with the remote server. This allows the client to obtain the
network delay between the client and the server. Then, the client returns the
broadcast mode, and continues to sense the incoming clock synchronization packets
to synchronize the local clock.
Multicast mode is useful when a large number of clients are distributed on a network,
which normally results in a large number of NTP packets on the network. In multicast
mode, a single NTP multicast packet can potentially reach all the clients on the network
and thus reduces the control traffic on the network.
Multicast client: The client listens to the multicast packets from the server. When the
client receives the first multicast packet, the client and server exchange NTP packets
whose values of Mode fields are 3 (sent by the client) and the NTP packets whose
values of Mode fields are 4 (sent by the server). In this process, the client enables the
server/client mode for a short time to exchange information with the remote server.
This allows the client to obtain the network delay between the client and the server.
Then, the client returns the multicast mode, and continues to sense the incoming
multicast packets to synchronize the local clock.
Manycast mode is applied to a small set of servers scattered over the network. Clients can
discover and synchronize to the closest manycast server. Manycast can especially be used
where the identity of the server is not fixed and a change of server does not require
reconfiguration of all the clients in the network.
Manycast server: The manycast server continuously listens to the packets. If a server
can be synchronized, the server returns a packet (the Mode field is set to 4) by using
the unicast address of the client as the destination address.
Manycast client: The client in manycast mode periodically sends request packets (the
Mode field is set to 3) to an IPv4/IPv6 multicast address. After receiving a reply
packet, the client filters and selects clock signals, and synchronizes its clock with the
server that provides the optimal clock.
To prevent the client from constantly sending NTP request packets to the manycast server
and to reduce the load on the server, NTP defines a minimum number of connections. In
manycast mode, the client records the number of connections established every time it
synchronizes its clock with the server, and the smallest recorded value is taken as the
minimum number of connections. The client also uses the time to live (TTL) mechanism
to ensure that it can successfully synchronize with the server. Every time the client sends
an NTP packet, the TTL of the packet increases (the initial value is 1) until the minimum
number of connections is reached or the TTL reaches the upper limit. If the TTL reaches
the upper limit, or the number of connections reaches the minimum but the connections
still cannot complete the synchronization process, the client stops data transmission for a
timeout period to tear down all connections. The client then repeats the preceding process.
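The expanding-TTL search described above can be simulated as follows. The server hop distances and the two limits are hypothetical example values, not protocol constants:

```python
# Simulation of the manycast client's expanding-TTL search.
MIN_CONNECTIONS = 2   # minimum number of connections (assumed value)
TTL_LIMIT = 8         # upper limit on the TTL (assumed value)

server_distances = [2, 3, 6]  # hops to each reachable manycast server (assumed)

def manycast_search():
    ttl = 1  # initial TTL value is 1
    while True:
        # Servers within 'ttl' hops can receive the request and reply.
        connections = sum(1 for d in server_distances if d <= ttl)
        if connections >= MIN_CONNECTIONS or ttl >= TTL_LIMIT:
            return ttl, connections
        ttl += 1  # the TTL increases with every request round

ttl, connections = manycast_search()
print(ttl, connections)  # stops at TTL 3, when 2 servers can answer
```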
If a source address from which NTP packets are sent is specified on the server, the address
must be the same as the server IP address configured on the client. Otherwise, the client
cannot process the NTP packets sent by the server, resulting in failed clock
synchronization.
Only after the clock on the server is synchronized, the server can function as a clock server
to which other devices can be synchronized.
Access Authority
A device provides access authority, which is a simple security measure, to protect the local
clock.
NTP access control is implemented based on an access control list (ACL). NTP supports five
levels of access authority, and a corresponding ACL rule can be specified for each level. If
an NTP access request matches the ACL rule for a level of access authority, the request is
granted the access authority at that level.
When an NTP access request reaches the local end, the access request is successively
matched with the access authority from the maximum one to the minimum one. The first
successfully matched access authority takes effect. The matching order is as follows:
peer: indicating the maximum access authority. A time request may be made for the
local clock and a control query may be performed on the local clock. The local clock
can also be synchronized to a remote server.
server: indicating that a time request may be made for the local clock and a control
query may be performed on the local clock, but the local clock cannot be
synchronized with the clock of the remote server.
synchronization: indicating that only a time request can be made for the local clock.
query: indicating the minimum access authority. Only a control query can be
performed on the local clock.
limited: taking effect only when the KOD function is enabled. After the KOD function
is enabled, the rate of incoming packets is controlled and kiss codes are sent.
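The first-match rule over the ordered authority levels can be sketched as a small function. The ACL match itself is stubbed out with a set lookup, and the rule table is illustrative:

```python
# Sketch of the access-authority matching order: levels are checked from the
# widest authority (peer) to the narrowest (query); the first hit wins.
ORDER = ["peer", "server", "synchronization", "query"]

def match_authority(src_ip, acl_rules):
    """acl_rules maps an authority level to the set of permitted sources
    (a stand-in for a real ACL rule match)."""
    for level in ORDER:
        if src_ip in acl_rules.get(level, set()):
            return level
    return None  # no rule hit: the request gets no access authority

rules = {"server": {"10.1.1.1"}, "query": {"10.1.1.1", "10.1.1.2"}}
print(match_authority("10.1.1.1", rules))  # "server": matched before "query"
print(match_authority("10.1.1.2", rules))  # "query": the only level that matches
```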
KOD
The Kiss-o'-Death (KOD) mechanism is a new access control technology introduced in
NTPv4. KOD is mainly used by a server to provide information, such as a status report and
access control information, to clients.
After KOD is enabled on the server, the server sends the kiss code DENY or the kiss
code RATE to the client according to the operating status of the system.
When receiving the kiss code DENY, the client terminates all connections with the
server, and stops sending packets to the server.
When receiving the kiss code RATE, the client immediately increases its poll interval
to the server. Every time the kiss code RATE is received thereafter, the poll interval is
increased further.
Authentication
The NTP authentication function can be enabled on networks demanding high security.
Different keys may be configured in different operating modes.
When a user enables the NTP authentication function in a certain NTP operating mode, the
system records the key ID in this operating mode.
Sending process
Receiving process
After receiving a packet, the system determines whether the packet needs to be
authenticated. If the packet does not need to be authenticated, the system directly
performs subsequent processing on the packet. If the packet needs to be
authenticated, the system authenticates the packet using the key ID and a decryption
algorithm. If the authentication fails, the system directly discards the packet. If the
authentication succeeds, the system processes the received packet.
Configuration Roadmap
As required by the user, NTP is used to synchronize clocks. The configuration
roadmap is as follows:
1. Configure IP addresses, and configure reachable routes between any two of RouterA,
RouterB, RouterC, and RouterD.
2. Because RouterA and RouterB are connected through a network that is not secure,
enable the NTP authentication function.
You can configure multiple keys for each device. After the NTP authentication key is
configured, you need to set the key to reliable using the ntp-service reliable
authentication-keyid command. If you do not set the key to reliable, the NTP key does
not take effect.
Currently, the device supports only the MD5 key authentication algorithm.
If the NTP authentication key is a reliable key, it automatically becomes unreliable when
you delete the key. You do not need to run the undo ntp-service reliable
authentication-keyid command.
Check the NTP status of RouterB, and you can find that the clock status is "synchronized",
indicating that the synchronization is complete. The stratum of the clock is 3, which is one
stratum lower than that of the clock of the server RouterA.
Check the NTP status of RouterC and RouterD, and you can find that the clock status is
"synchronized", indicating that the synchronization is complete. The stratum of the clock is
4, which is one stratum lower than that of the clock of the server RouterB.
Answer
ABCDE
To visualize the quality of network services and allow users to check whether the quality of
network services meets requirements, the following measures must be taken:
The preceding measures require devices to provide statistical parameters such as the delay,
jitter, and packet loss ratio and require dedicated probe devices. These requirements
increase investments on devices.
NQA can precisely test the network operating status and output statistics without using
dedicated probe devices, effectively saving costs.
NQA measures the performance of different protocols running on the network. It allows
you to collect network operation indexes in real time, such as the total HTTP connection
delay, TCP connection delay, DNS resolution delay, file transmission rate, FTP connection
delay, and DNS resolution error ratio.
Compared with NQA, the Ping program obtains only limited information though it can also
monitor the quality of network services. The following are differences between NQA and
Ping.
Functions Comparisons
The Ping program can test only the round-trip time (RTT) of a packet and the
reachability of the destination. A client sends an Internet Control Message Protocol
(ICMP) Echo Request message to a remote client, expecting an ICMP Echo packet.
Besides testing the round-trip time (RTT) of ICMP packets, NQA can detect whether
network services, such as TCP, UDP, DHCP, FTP, HTTP, SNMP, DNS, Traceroute, and
LSP Ping/Traceroute, are enabled. In addition, NQA can test the response time of
each service and the network jitter through Jitter tests.
Configuration Comparisons
Run the ping command on the Console to test the reachability of a specified IP
address. The RTT or timeout period of every packet is also displayed in real time.
To display the statistics, you can run the command on the NQA client.
BFD is an internationally standardized technology, and mainstream network vendors have
achieved interoperability with it, whereas NQA is an independently developed technology.
Each has its own advantages.
Both NQA and BFD provide test results to other modules so that those modules can take
measures according to the results.
Network quality analysis (NQA) is a feature that the system provides to monitor network
quality of service (QoS) in real time and to locate and diagnose network faults.
Independent of the hardware, NQA functions on the link layer to measure the
performance of protocols running at the network layer, transport layer, and application
layer.
NQA requires two test ends, an NQA client and an NQA server (or called the source
and destination). The NQA client (or the source) initiates an NQA test. You can
configure test instances through command lines or the NMS. Then NQA places the
test instances into test queues for scheduling.
When starting an NQA test instance, you can choose to start the test instance
immediately, at a specified time, or after a delay. A test packet is generated based
on the type of a test instance when the timer expires. If the size of the generated
test packet is smaller than the minimum size of a protocol packet, the test packet is
generated and sent out with the minimum size of the protocol packet.
After a test instance starts, the protocol-related running status can be collected
according to response packets. The client adds a timestamp to a test packet based
on the local system time before sending the packet to the server. After receiving the
test packet, the server sends a response packet to the client. The client then adds a
timestamp to the received response packet based on the current local system time.
This helps the client calculate the round-trip time (RTT) of the test packet based on
the two timestamps.
NOTE: In a jitter test instance, both the client and server add a timestamp to the sent and
received packets based on the local system time. In this manner, the client can calculate the
jitter value.
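The jitter calculation hinted at in the note can be sketched as the variation between consecutive one-way delays, computed from the two sets of timestamps. The timestamp pairs below are illustrative sample data:

```python
# Sketch of a jitter computation from per-packet timestamps: both ends
# timestamp each packet, giving a one-way delay per packet; jitter is the
# variation between consecutive delays. Sample values are illustrative.
send_times = [0.000, 0.020, 0.040, 0.060]   # client transmit timestamps (s)
recv_times = [0.010, 0.032, 0.049, 0.071]   # server receive timestamps (s)

delays = [r - s for s, r in zip(send_times, recv_times)]
jitters = [abs(delays[i + 1] - delays[i]) for i in range(len(delays) - 1)]

print([round(j * 1000, 1) for j in jitters])  # per-packet jitter in ms: [2.0, 3.0, 2.0]
```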
You can view the test results to learn about the network operating status and service
quality.
The source then can calculate the time for communication between the source and the
destination by subtracting the time the source sends the ICMP Echo Request packet from
the time the source receives the ICMP Echo Reply packet. The calculated data can reflect
the network operating status.
The ICMP test results and historical records are collected in test instances. You can run
commands to view the test results and historical records.
An NQA trace test detects the forwarding path between the source and the destination
and collects statistics about each device along the forwarding path.
According to the ICMP packet received from each hop, the source obtains information
about the forwarding path from the source to the destination and statistics about each
device along the forwarding path. These statistics can reflect the forwarding path status.
The process of a trace test:
1. The source constructs a UDP packet, with the TTL as 1, and sends the packet to the
destination.
2. After the first-hop router receives the UDP packet, it checks the TTL field and finds
that the TTL decreases to 0. The first-hop router then returns an ICMP Time Exceeded packet.
3. After the source receives the ICMP Time Exceeded packet, it obtains the IP address
of the first-hop router and reconstructs a UDP packet, with the TTL as 2.
4. After the second-hop router receives the UDP packet, it checks the TTL field and
finds that the TTL decreases to 0. The router then returns an ICMP Time Exceeded
packet.
5. The preceding process is repeated until the packet reaches the destination, which
then returns an ICMP Port Unreachable packet to the source.
In this example, the process is 1-2-3-5.
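The trace-test loop above can be simulated without sending real packets: probes go out with increasing TTL, each intermediate hop reveals itself with Time Exceeded, and the destination answers with Port Unreachable. The path below is a hypothetical example:

```python
# Simulation of the NQA trace-test logic described in steps 1-5 above.
path = ["10.0.1.1", "10.0.2.1", "10.0.3.9"]  # hops; the last is the destination

def probe(ttl):
    """Return (responding hop, reply type) for a UDP probe with this TTL."""
    hop = path[min(ttl, len(path)) - 1]
    if ttl < len(path):
        return hop, "time-exceeded"      # an intermediate hop decremented TTL to 0
    return hop, "port-unreachable"       # the destination itself replies

def trace():
    discovered, ttl = [], 1              # first probe carries TTL 1
    while True:
        hop, reply = probe(ttl)
        discovered.append(hop)
        if reply == "port-unreachable":  # destination reached: trace complete
            return discovered
        ttl += 1                         # reconstruct the probe with a larger TTL

print(trace())  # each hop in order, ending at the destination
```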
Through the DHCP test, you can get the time taken by the client to obtain its IP address
from the DHCP server. After the test is complete, the leased IP address is released. In the
DHCP test, you need to configure the source interface that sends the discovery packet to
the NQA server.
3. The client broadcasts a DHCP Request packet to the network segment where the
interface resides. The Request packet contains the IP address of the DHCP server.
4. After receiving the Request packet, the DHCP server returns a DHCP ACK packet
carrying an IP address assigned to the interface.
After receiving the DHCP ACK packet, the client calculates the time taken to obtain
an IP address from the DHCP server by subtracting the time the client sends the
Discovery packet from the time the client receives the ACK packet.
A DHCP test only uses an interface to send DHCP packets and releases the DHCP lease
after obtaining an IP address for the interface. Therefore, the DHCP test does not consume
address resources of the DHCP server. The interface used in a DHCP test must be in the Up
state.
The DHCP test results and historical records are collected in test instances. You can run
commands to view the test results and historical records.
FTP tests are performed to obtain the time taken for the FTP client to set up a connection
with the FTP server and the time spent on packet transmission. To set up the connection
with the FTP server, you must first enter the IP address, user name, and password on the
FTP client. In FTP tests, you can perform the Put operation on a specified file and specify
the file size. In the Get operation, the time for downloading the file is recorded, whereas in
the Put operation, the time for uploading the file is recorded.
The FTP test, based on TCP, measures the speed of downloading a file from or uploading a
file to an FTP server. The FTP test provides the response time of two stages:
Control connection time: the time for the client to set up the three-way handshake
with the FTP server and exchange signaling.
Data connection time: the time for the client to download a specified file from or
upload a specified file to the FTP server.
In an FTP test, the following data can be calculated based on the information in the
packets received by the client:
The FTP test results and historical records are collected in test instances. You can run
commands to view the test results and historical records.
DNS tests are used to check whether the client can set up a DNS connection with the DNS
server and collect the time taken to respond to a DNS request packet. In DNS tests,
domain names are resolved into IP addresses. In addition, the time taken to set up a DNS
connection and return the response packet is recorded.
The DNS test measures the speed of resolving a specific domain name into an IP address.
The process of a DNS test is as follows:
1. The DNS client sends a DNS Query packet to the DNS server, requesting the server
to resolve a specified DNS name.
2. After receiving the Query packet, the DNS server constructs a Response packet and
sends it to the client.
3. After receiving the Response packet, the client calculates the difference between
the time the client sends the Query packet and the time the client receives the
Response packet to obtain the time taken to resolve the DNS name. This can reflect
DNS protocol performance on the network.
A DNS test only simulates the DNS resolution process; it does not save the mapping
between domain names and IP addresses.
The DNS test results and historical records are collected in test instances. You can run
commands to view the test results and historical records.
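The DNS test described above can be configured on a VRP device roughly as follows. This is a sketch only: the test and instance names, the domain name, and the DNS server address are illustrative, and the exact command syntax may vary by device model and software version.

```
[RouterA] nqa test-instance admin dns
[RouterA-nqa-admin-dns] test-type dns
[RouterA-nqa-admin-dns] destination-address url www.example.com
[RouterA-nqa-admin-dns] dns-server ipv4 10.1.1.10
[RouterA-nqa-admin-dns] start now
[RouterA-nqa-admin-dns] quit
[RouterA] display nqa results test-instance admin dns
```

The displayed result includes the resolution time, which reflects DNS protocol performance on the network.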
DNS resolution: You can obtain the DNS resolution time, the period from when the
client (RouterA) sends a DNS packet to the resolver to resolve the name of the HTTP
server to an IP address to the time when the client receives a DNS resolution packet
containing the IP address.
TCP connection setup: You can obtain the time taken to set up a TCP connection
between the client and the HTTP server through three-way handshake.
TCP transaction: You can obtain the transaction time, the period from the time the
client sends a Get or Post packet to the HTTP server to the time the client receives a
response packet from the HTTP server.
NOTE: You can also obtain the time taken for the HTTP client to set up a connection with
the HTTP server and the time of packet transmission by directly entering the IP address of
the HTTP server.
In an HTTP test, the following data can be calculated based on the information in the
packets received by the client:
In TCP tests, you must configure the TCP service on the NQA server. The client then
originates a test to the specified IP address and port of the server. This test is used to
collect the time taken to set up a TCP connection.
The TCP test measures the speed of the three-way handshake between the client and the
TCP server. The process of a TCP test is as follows:
1. The client sends a TCP SYN packet to the TCP server to request a TCP connection.
2. Upon receiving the request, the TCP server responds with a TCP SYN+ACK packet.
3. The client receives the packet and replies with an ACK packet to set up the
connection.
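A sketch of the TCP test configuration described above, with the TCP service configured on the NQA server first. The addresses, port number, and instance names are illustrative:

```
[RouterB] nqa-server tcpconnect 10.2.2.2 9000

[RouterA] nqa test-instance admin tcp
[RouterA-nqa-admin-tcp] test-type tcp
[RouterA-nqa-admin-tcp] destination-address ipv4 10.2.2.2
[RouterA-nqa-admin-tcp] destination-port 9000
[RouterA-nqa-admin-tcp] start now
```

The test result records the time taken to complete the three-way handshake with the specified IP address and port.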
After receiving the UDP packet, the source calculates the time taken for communication
between the source and the destination by subtracting the time the source sends the UDP
packet from the time the source receives the UDP packet. This can reflect network UDP
performance.
In SNMP tests, SNMPv1, SNMPv2c, and SNMPv3 packets are sent to the SNMP agent
simultaneously to query the status of the managed device. The agent returns an SNMP
response packet of a certain version. That is, if SNMPv1 is enabled on the agent, an
SNMPv1 response packet is returned. You can calculate the interval from the time a query
packet is sent to the time a response packet is received, based on the timestamp carried in
the packets.
The SNMP test measures the speed of communication between the client and the SNMP
agent. The process of an SNMP test is as follows:
1. The source sends a request packet to the SNMP agent to obtain the system time.
2. After receiving the request packet, the SNMP agent queries the system time,
constructs a reply packet, and sends it to the source.
After receiving the reply packet, the source calculates the time taken for
communication between the source and the SNMP agent by subtracting the time
the source sends the request packet from the time the source receives the reply
packet. This can reflect network SNMP performance.
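A minimal sketch of the SNMP test instance on the client. The agent address and instance names are illustrative, and SNMP must already be enabled on the agent for it to respond:

```
[RouterA] nqa test-instance admin snmp
[RouterA-nqa-admin-snmp] test-type snmp
[RouterA-nqa-admin-snmp] destination-address ipv4 10.1.1.2
[RouterA-nqa-admin-snmp] start now
[RouterA-nqa-admin-snmp] quit
[RouterA] display nqa results test-instance admin snmp
```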
Jitter Tests
In Jitter tests, the sender periodically sends packets to the remote end, with every
packet being marked with a timestamp. After receiving a packet, the remote end
also marks the packet with a timestamp based on the local system time and returns
the packet to the sender. The sender then calculates the jitter time based on the
timestamp carried in the received packet. Jitter tests support the sending of a
maximum of 3000 packets continuously to simulate voice traffic. You can adjust the
number of packets to be sent through Licenses.
After receiving a packet, the destination adds a timestamp to the packet and sends
it back to the source.
After receiving the returned packet, the source calculates the jitter by subtracting
the interval at which the source sends two consecutive packets from the interval at
which the destination receives the two consecutive packets.
The following data can be calculated based on information in the packets received by the
source:
Maximum, minimum, and average jitter of the packets from the source to the
destination and from the destination to the source.
Maximum unidirectional delay from the source to the destination or from the
destination to the source.
Networking Requirements
Configuration Roadmap
As shown in the figure, RouterA functions as an NQA client to test whether RouterB
is reachable.
Perform the NQA ICMP test to check whether the packets sent by RouterA can reach
RouterB.
Perform the NQA ICMP test to obtain the round-trip time (RTT) of the packets.
nqa test-instance admin-name test-name: The nqa test-instance command creates an
NQA test instance and enters the NQA view.
test-type { dhcp | dns | ftp | http | icmp | jitter | lspping | lsptrace | snmp | tcp | trace |
udp }: The test-type command configures the test type for an NQA test instance.
destination-address { ipv4 ipv4-address | ipv6 ipv6-address }: The destination-address
command specifies the destination address of an NQA test instance.
The start command sets the start mode and end mode for an NQA test instance. The start
now command performs a test instance immediately.
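Putting the commands above together, the ICMP test instance on RouterA might look like this. The destination address and instance names are illustrative:

```
[RouterA] nqa test-instance admin icmp
[RouterA-nqa-admin-icmp] test-type icmp
[RouterA-nqa-admin-icmp] destination-address ipv4 10.2.2.2
[RouterA-nqa-admin-icmp] start now
[RouterA-nqa-admin-icmp] quit
[RouterA] display nqa results test-instance admin icmp
```

The result shows whether RouterB is reachable and the RTT of the probe packets.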
NQA test results cannot be displayed automatically on the terminal. To view NQA test
results, run the display nqa results command.
If no test instance is specified, the test result of the test instance is displayed in the
corresponding test instance view, and the test result of all test instances is displayed in the
system view or other views irrelevant to test instances. If a test instance is specified, the
test result of only this test instance is displayed.
The display nqa results collection command is used to display the statistics on
accumulative results. At present, only the jitter test supports the query of accumulated
results.
Output information description
Networking Requirements
As shown in the figure, the performance of the FTP download function needs to be
checked.
Configuration Roadmap
Configure RouterB as the FTP server. Log in to the FTP server using user name user1
and password Helloword@6789 to download file test.txt.
Create and start an FTP test instance on RouterA to check whether RouterA can set
up a connection with the FTP server and to obtain duration for downloading the file
from the FTP server.
The ftp server enable command enables the FTP server function to allow FTP users
to log in to the FTP server.
The ftp-operation command sets the operation mode for an NQA FTP test instance. The
ftp-operation command can be used to specify the FTP operation mode as put or get. A
connection with the FTP server is set up using the IP address, the user name, and the
password of the FTP server, and the time to set up FTP connection is recorded.
The ftp-username command sets the user name for logging in to the FTP server in an FTP
test instance.
The ftp-password command sets a password for logging in to the FTP server in an NQA
FTP test instance.
The ftp-filename command configures the file name and file path for an NQA FTP test
instance.
NOTE:
During the FTP test, select a file of a small size. If the file is too large, the test may
fail because of timeout.
The file download operation cannot save the file to the local file system, but only
count the time taken to download the file. The system releases the memory
immediately after obtaining the data.
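Based on the commands above, the FTP test instance on RouterA might be configured as follows. The FTP server address and instance names are illustrative; the user name, password, and file name are those given in the configuration roadmap:

```
[RouterB] ftp server enable

[RouterA] nqa test-instance admin ftp
[RouterA-nqa-admin-ftp] test-type ftp
[RouterA-nqa-admin-ftp] destination-address ipv4 10.2.2.2
[RouterA-nqa-admin-ftp] ftp-operation get
[RouterA-nqa-admin-ftp] ftp-username user1
[RouterA-nqa-admin-ftp] ftp-password Helloword@6789
[RouterA-nqa-admin-ftp] ftp-filename test.txt
[RouterA-nqa-admin-ftp] start now
```

Remember the note above: choose a small file, because a large file may cause the test to time out.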
CtrlConnTime Min/Max/Average: the minimum/maximum/average time taken to set up a
control connection.
Networking Requirements
As shown in the figure, a UDP Jitter test needs to be performed to obtain the jitter
time of transmitting a packet from RouterA to RouterC.
Configuration Roadmap
Configure the monitoring service type and port number on the NQA server.
nqa-server udpecho [ vpn-instance vpn-instance-name ] { ip-address | ipv6 ipv6-address }
port-number: The nqa-server udpecho command configures the monitoring IP address
and port number for the UDP server in an NQA test.
UDP packets are transmitted in a jitter test. The test is used to obtain the packet
delay, jitter, and packet loss ratio by comparing timestamps in the request and
response packets. A UDP server needs to be configured for an NQA test to respond
to probe packets. If the client and the server are connected through a VPN, you
need to specify the VPN instance name.
The destination-port command configures the destination port number for an NQA
test.
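A sketch of the UDP jitter test, with RouterC acting as the NQA server. The addresses, port number, and instance names are illustrative:

```
[RouterC] nqa-server udpecho 10.3.3.3 6000

[RouterA] nqa test-instance admin jitter
[RouterA-nqa-admin-jitter] test-type jitter
[RouterA-nqa-admin-jitter] destination-address ipv4 10.3.3.3
[RouterA-nqa-admin-jitter] destination-port 6000
[RouterA-nqa-admin-jitter] start now
```

The port number configured with destination-port on the client must match the port configured with nqa-server udpecho on the server.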
The following data can be calculated based on information in the packets received by the
source:
Maximum, minimum, and average jitter of the packets from the source to the
destination and from the destination to the source.
Maximum unidirectional delay from the source to the destination or from the
destination to the source.
Min Positive SD: Minimum positive jitter from the source to the destination.
Min Positive DS: Minimum positive jitter from the destination to the source.
Positive SD Sum: Sum of the positive jitter from the source to the destination.
Positive SD Square Sum: Square sum of the positive jitter from the source to
the destination.
You may often encounter such problems as intermittent network disconnections, failure to
access websites, slow Internet access, and slow file downloading. When this occurs, you
need to collect statistics about the device to locate the faults. These statistics need to be
provided by the device. As shown in the figure, users in different places connect to each
other over a VPN network. However, users report that the network is disconnected
intermittently and the network connection is slow.
You can deploy NQA on PEs to analyze network quality. Perform an ICMP test between
the PEs and CEs to check the continuity of the network. After confirming that the network
is correctly connected, perform a jitter test to measure the network jitter. Then perform
the same tests between the PEs. Analyze the test data and the faults that users encounter
for fault location.
The service level agreement (SLA) is a service agreement between a service provider and a
customer to ensure the service performance and reliability under specified costs.
Answer: ACD
Answer: ABD
CDP (Cisco Discovery Protocol) is similar to LLDP, but it is the Cisco proprietary link layer
discovery protocol.
An NMS must be capable of managing multiple network devices with diverse functions
and complex configurations. Most NMSs can detect Layer 3 network topologies, but they
cannot detect detailed Layer 2 topologies or detect configuration conflicts. A standard
protocol is required to exchange Layer 2 information between network devices.
The LLDP protocol provides a standard link-layer discovery method. Layer 2 information
obtained from LLDP allows the NMS to detect the topology of neighboring devices, and
display paths between clients, switches, routers, application servers, and network servers.
The NMS can also detect configuration conflicts between network devices and identify
causes of network failures. Enterprise users can use an NMS to monitor the link status on
devices running LLDP and quickly locate network faults.
The LLDP module uses an LLDP agent to interact with the Physical Topology MIB,
Entity MIB, Interfaces MIB, and other MIBs to update the LLDP local system MIB and
LLDP local organizationally defined extended MIB.
The LLDP agent encapsulates local device information in LLDP frames and sends the
LLDP frames to remote devices.
After receiving LLDP frames from remote devices, the LLDP agent updates the LLDP
remote system MIB and LLDP remote organizationally defined extended MIB.
By exchanging LLDP frames with remote devices, the local device can obtain
information about remote devices, including remote interfaces connected to the
local device and MAC addresses of remote devices.
The LLDP local system MIB stores local device information, including the device ID, port ID,
system name, system description, port description, and management address.
The LLDP remote system MIB stores neighbor information, including the device ID, port ID,
system name, system description, port description, and management address of each
neighbor.
An LLDP frame is an Ethernet frame encapsulated with an LLDP data unit (LLDPDU).
An LLDPDU contains local device information and is encapsulated in an LLDP frame. Each
LLDPDU consists of several information elements known as TLVs that each includes Type,
Length, and Value fields. The local device encapsulates its local information in TLVs,
constructs an LLDPDU with several TLVs, and encapsulates the LLDPDU in the data field of
an LLDP frame.
LLDPDUs can encapsulate basic TLVs, TLVs defined by IEEE 802.1 working groups, TLVs
defined by IEEE 802.3 working groups, and Media Endpoint Discovery (MED) TLVs. Basic
TLVs are used for basic device management. The TLVs defined by IEEE 802.1 and IEEE
802.3 working groups, and MED TLVs defined by other organizations are used for
enhanced device management functions. A device determines whether to encapsulate
organizationally specific TLVs.
MED TLVs are related to voice over IP (VoIP) applications and provide functions such as
basic configuration, network policy configuration, address management, and directory
management. These TLVs meet the requirements of voice device manufacturers for cost
efficiency, easy deployment, and easy management. Use of these TLVs allows the
deployment of voice devices on Ethernet networks, which brings great convenience for
voice device deployment and management.
Additionally, in the Data Center Network scenario, DCBX TLV is introduced. Neighboring
nodes on data center networks use the Data Center Bridging Exchange (DCBX) protocol to
exchange and negotiate DCB information so that they have the same DCB information.
This prevents packet loss on the data center network. DCBX encapsulates DCB information
in DCBX TLVs and uses LLDP frames to exchange DCB information between neighboring
nodes.
After LLDP is enabled on a device, the device periodically sends LLDP frames to
neighbors. When the local configuration changes, the device sends LLDP frames to
notify neighbors of the changes. To reduce the number of LLDP frames sent when
the local information changes frequently, the device waits for a period before
sending the next LLDP frame.
The device starts fast transmission of LLDP frames in the following scenarios: when it
receives an LLDP frame with device information not in its MIB (a new neighbor is
discovered), when LLDP is enabled, or when a port transitions from Down to Up
state. When fast transmission starts, the local device sends LLDP frames at 1-second
intervals. After a specified number of LLDP frames have been sent at this interval,
the device reverts to the previous transmission interval.
An LLDP-capable device checks the validity of received LLDP frames and the TLVs in
those frames. When determining that an LLDP frame and its TLVs are valid, the local
device saves neighbor information and sets the aging time of neighbor information
on the local device to the TTL value carried in the received LLDPDU. If the TTL value
carried in the received LLDPDU is 0, the neighbor information ages out immediately.
CDP refers to Cisco Discovery Protocol and is the Cisco proprietary link layer discovery
protocol.
LLDP can be enabled in the system view and the interface view.
After LLDP is enabled in the system view, all interfaces are enabled with LLDP.
After LLDP is disabled in the system view, all LLDP settings are restored to the
default settings except the setting of LLDP trap. Therefore, LLDP is also disabled on
all interfaces.
An interface can send and receive LLDP packets only after LLDP is enabled in both
the system view and the interface view.
After LLDP is disabled globally, the commands for enabling and disabling LLDP on an
interface do not take effect.
The display lldp neighbor brief command displays brief information about neighbors of
the device.
The lldp compliance cdp txrx command enables an interface to exchange information
with CDP-capable devices.
The display cdp neighbor brief command displays brief information about CDP
neighbors of the device.
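The commands above can be combined into a minimal LLDP configuration sketch. The interface name is illustrative; once LLDP is enabled in the system view, it is enabled on all interfaces, and lldp compliance cdp txrx is needed only on interfaces connected to CDP-capable devices:

```
[Switch] lldp enable
[Switch] interface GigabitEthernet0/0/1
[Switch-GigabitEthernet0/0/1] lldp compliance cdp txrx
[Switch-GigabitEthernet0/0/1] quit
[Switch] display lldp neighbor brief
[Switch] display cdp neighbor brief
```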
Answers:
ABCD
ABC
Communication between the manager and the agent takes place in two ways:
1. The manager queries the agent for specific parameter values (for example: how
many ICMP port-unreachable messages have occurred?).
2. The agent actively reports to the manager that an important event has occurred (for
example: a connection has dropped).
The manager can also change values on the agent (for example: change the default IP
TTL value to 64).
Except for traps, these operations follow a simple request-response model: the manager
issues a request, and the agent returns a response.
SNMP is an application-layer protocol built on the TCP/IP protocol suite. SNMP provides
the NMS with a simple command set; packets are encoded according to the Basic
Encoding Rules (BER) and exchanged between the NMS and the managed device over
UDP.
SNMP PDU (Protocol Data Unit): the SNMP protocol data unit, which carries the
SNMP payload.
SNMP runs over UDP. Requests issued by the manager are sent to the agent on UDP port
161, while Trap operations issued by the agent are sent to the manager on UDP port 162.
Trap is not one of the basic request-response operations initiated by the NMS; it is sent
spontaneously by the managed device. When a trigger condition is met on the device, the
agent sends a trap to the NMS workstation on the network to notify it of the event. The
NMS receives trap messages from the agent on UDP port 162, so that network managers
can handle network events in a timely manner. For example, when the status of an
interface changes, the device sends a trap to the NMS, and the network manager analyzes
the specific situation and takes further action.
Prerequisites
Before configuring basic SNMPv2c functions, complete the following tasks:
Configure routes so that the router and the NM station are reachable to each other.
Steps
The community name is saved in encrypted format in the configuration file.
After the read-write community name is set, an NMS using this name has rights to
the ViewDefault view (OID: 1.3.6.1). To restrict the access rights of the NMS,
configure acl acl-number; you must create the ACL first.
If both the include and exclude parameters are configured for MIB objects that have
an inclusion relationship, whether to include or exclude the lowest MIB object will be
determined by the parameter configured for the lowest MIB object. For example, the
snmpV2, snmpModules, and snmpUsmMIB objects are from top down in the MIB
table. If the exclude parameter is configured for snmpUsmMIB objects and include is
configured for snmpV2, snmpUsmMIB objects will still be excluded.
Run snmp-agent target-host trap-hostname hostname address { ipv4-addr [ udp-port
udp-portid ] [ public-net | vpn-instance vpn-instance-name ] | ipv6 ipv6-addr [ udp-port
udp-portid ] } trap-paramsname paramsname. The destination host for receiving trap
messages and error codes is specified.
Note the following when running the command:
The default destination UDP port number is 162. To ensure secure communication
between the NMS and managed devices, run the udp-port command to change the
UDP port number to a non-well-known port number.
If trap messages sent from the managed device to the NMS need to be transmitted
over a public network, the parameter public-net needs to be configured. If trap
messages sent from the managed device to the NMS need to be transmitted over a
private network, the parameter vpn-instance vpn-instance-name needs to be
configured to specify a VPN that will take over the transmission task.
(Optional) Run snmp-agent trap enable. The trap function is enabled for all modules.
Notice: In VRP, alarms generated by interface name changes (port standard alarms)
take effect only after snmp-agent trap enable is run.
The source interface that sends traps must have an IP address; otherwise, the
commands will fail to take effect. To ensure device security, it is recommended that
you set the source IP address to the local loopback address.
The source interface for sending traps on the device must be the same as the source
interface specified on the NMS; otherwise, the NMS discards the trap messages.
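The steps above can be sketched as a basic SNMPv2c configuration. The community names, host name, parameter name, and NMS address are illustrative, and the loopback source interface follows the security recommendation above:

```
[Router] snmp-agent sys-info version v2c
[Router] snmp-agent community read Read123
[Router] snmp-agent community write Write123
[Router] snmp-agent trap enable
[Router] snmp-agent target-host trap-hostname nms1 address 10.1.1.100 trap-paramsname param1
[Router] snmp-agent target-host trap-paramsname param1 v2c securityname Read123
[Router] snmp-agent trap source LoopBack0
```

On the NMS side, the same read and write community names must be configured for Get and Set operations to succeed.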
Prerequisites
Before configuring basic SNMPv2c functions, complete the following tasks:
Configure routes so that the switch and the NM station are reachable to each other.
Steps
Run snmp-agent sys-info version v2c. The SNMP version is set to SNMPv2c.
(Optional) Run snmp-agent trap enable. The trap function is enabled for all modules.
Notice: In VRP, alarms generated by interface name changes (port standard alarms)
take effect only after snmp-agent trap enable is run.
The default destination UDP port number is 162. To ensure secure communication
between the NMS and managed devices, run the udp-port command to change the
UDP port number to a non-well-known port number.
The parameter securityname identifies, on the NMS, the devices that send traps.
If the NMS and managed device are both Huawei products, the parameter
private-netmanager can be configured to add more information to trap messages, such as
the alarm type, alarm serial number, and alarm sending time. This information will
help you locate and solve problems more quickly.
If trap messages sent from the managed device to the NMS need to be transmitted
over a public network, the parameter public-net needs to be configured. If trap
messages sent from the managed device to the NMS need to be transmitted over a
private network, the parameter vpn-instance vpn-instance-name needs to be
configured to specify a VPN that will take over the transmission task.
The source interface that sends traps must have an IP address; otherwise, the
commands will fail to take effect. To ensure device security, it is recommended that
you set the source IP address to the local loopback address.
The source interface for sending traps on the device must be the same as the source
interface specified on the NMS; otherwise, the NMS discards the trap messages.
Note: For device security, routers and switches of different versions may impose
complexity requirements on community names, such as a default minimum length or a
requirement that a community name contain at least two kinds of characters.
SNMP parameters
SNMP version: Currently, the SNMPv1, SNMPv2c, and SNMPv3 versions are
supported. The SNMPv3 version is applied in the scenario requiring high parameter
security level.
Read community: The read community name for eSight to send a read request to an
NE. The read operation is available when the read community name is the same as
that acknowledged by the NE.
Write community: The write community name for eSight to send a write request to
an NE. The write operation is available when the write community name is the same
as that acknowledged by the NE.
Timeout interval(s): The time that eSight waits for a response to an operation
request.
Resending times: The maximum number of times that eSight resends an operation
request to configure SNMP parameters for an NE when the timer expires. If this
value is exceeded, the operation fails.
NE port: SNMP communication port of the NE.
If you configure simple network management protocol (SNMP) parameters for an
NE, click Save Protocol Template to save the settings as an SNMP parameter
configuration template. If you need to configure SNMP parameters again, click Select
Protocol Template to select the saved protocol template to apply.
On eSight, the topology view displays the NE status and links between NEs. With topology
management, it is easy to visualize the architecture and determine the running status of all
NEs.
The network management system can operate and control the devices. For example, it
can enable or disable a port on a device.
Through SNMP, devices actively report alarms to the network management system
(NMS), which receives and displays them.
Answer: ABC