RAN17.1
Capacity Monitoring Guide (BSC6900-Based)
Issue 05
Date 2017-06-15
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their
respective holders.
Notice
The purchased products, services and features are stipulated by the contract made between
Huawei and the customer. All or part of the products, services and features described in this
document may not be within the purchase scope or the usage scope. Unless otherwise specified
in the contract, all statements, information, and recommendations in this document are
provided "AS IS" without warranties, guarantees or representations of any kind, either express
or implied.
The information in this document is subject to change without notice. Every effort has been
made in the preparation of this document to ensure accuracy of the contents, but all statements,
information, and recommendations in this document do not constitute a warranty of any kind,
express or implied.
1.1 Overview
Growing traffic in mobile networks requires more and more network resources. Lack of network
resources will affect user experience. Therefore, monitoring network resources, locating
bottlenecks, and performing capacity expansion in real time are critical to the provision of high
quality services.
This document describes how to monitor the usage of various network resources and locate
network resource bottlenecks.
This document applies to BSC6900s and 3900 series base stations.
For details about the MML commands, parameters, alarms, and performance counters, see
section "Operation and Maintenance" in the BSC6900 UMTS Product Documentation or 3900
Series Base Station Product Documentation.
05 (2017-06-15)
Compared with Issue 04 (2017-01-16) of RAN17.1, this issue includes the following changes.

Change Type        Change Description
Technical change   None.
Editorial change   Optimized the descriptions of how to monitor uplink CE usage.

04 (2017-01-16)
Compared with Issue 03 (2016-10-28) of RAN17.1, this issue includes the following changes.

Change Type        Change Description
Technical change   None.
Editorial change   Added monitoring methods and suggestions for the physical port usage of interface boards.

03 (2016-10-28)
Compared with Issue 02 (2015-05-08) of RAN17.1, this issue includes the following changes.

Change Type        Change Description
Technical change   None.
Editorial change   Added descriptions of how to monitor the capacity of the NIU and DEU boards.

02 (2015-05-08)
Compared with Issue 01 (2015-03-25) of RAN17.1, this issue includes the following changes.

Change Type        Change Description
Technical change   Added HS-PDSCH Code Usage, HS-SCCH Code Usage, Iub Bandwidth Usage, and Iur Bandwidth Usage.
Editorial change   None.

01 (2015-03-25)
This issue does not include any changes.

Draft A (2015-01-15)
This is the Draft A release of RAN17.1.
Compared with Issue 01 (2015-01-15) of RAN17.0, Draft A of RAN17.1 includes the following changes.

Change Type        Change Description
Technical change   Added RNC license monitoring.
Editorial change   None.
Contents
About This Document
1.1 Overview
1.2 Intended Audience
1.3 Change History
1 Overview
1.1 Network Resources
1.2 Monitoring Methods
2 Network Resource Monitoring
2.1 Monitoring Metrics and Procedure
2.2 SPU CPU Load
2.3 MPU CPU Load
2.4 DPU DSP Load
2.5 Interface Board CPU Load
2.6 SCU CPU Load
2.7 RNC License
2.8 NIU CPU Load
2.9 DEU CPU Load
2.10 Common Channel Usage
2.11 Downlink Load
2.12 Uplink Load
2.13 OVSF Code Usage
2.14 HS-PDSCH Code Usage
2.15 HS-SCCH Code Usage
2.16 Iub/Iu/Iur Bandwidth Usage
2.17 CE Usage
2.18 NodeB CNBAP Load
2.19 HSPA Users
3 Network Resource Troubleshooting
3.1 Possible Block and Failure Points in the Basic Call Flow
3.2 Counters Related to Call Congestion
3.3 Resource Usage Analysis
4 Metrics Definitions
5 Appendix
1 Overview
This chapter introduces network resources and monitoring methods.
Metrics are defined to monitor the usage or load of UTRAN resources. Resource thresholds are
also recommended based on specific criteria.
Defining peak hours is important for monitoring metrics. It is recommended that you set the peak
hours to the time when the corresponding resource usage is at its highest.
To help ensure stable network operation, capacity monitoring uses the following thresholds:
Warning threshold: If this threshold is reached, the corresponding resource status becomes
"Warning" on the PRS.
Capacity expansion threshold: If this threshold is reached, the corresponding resource status
becomes "Critical" on the PRS. This threshold is lower than the alarm threshold.
The monitored resources, their counters, and the recommended capacity expansion thresholds are as follows:

CE usage
  Counters: VS.LC.DLCreditUsed.Mean, VS.HW.DLCreditAvailable, VS.HW.ULCreditAvailable
NodeB CNBAP load
  Counter: VS.BTS.CnbapCap.UseRate
HSPA users (percentage of the number of HSPA users to the configured number of HSPA users)
  Counters: VS.BOARD.UsedHsdpaUserRatio.Mean, VS.BOARD.UsedHsupaUserRatio.Mean
  Capacity expansion threshold: 60%
FACH usage
  Counter: VS.CRNCIubBytesFACH.Tx
  Capacity expansion threshold: 70%
PCH usage
  Counters: VS.CRNCIubBytesPCH.Tx, VS.CRNC.IUB.PCH.Bandwidth
RACH usage
  Counters: VS.CRNCIubBytesRACH.Rx, VS.TRBNum.RACH
  Capacity expansion threshold: 70%
DL load
  Non-HSPA power usage: VS.MeanTCP.NonHS; capacity expansion threshold: 70%
  TCP usage: VS.MeanTCP; capacity expansion threshold: 85%
  HSDPA throughput: VS.HSDPA.MeanChThroughput
UL load
  RTWP: VS.MeanRTWP, VS.MinRTWP; capacity expansion threshold: higher than –100 dBm or lower than –110 dBm (for VS.MinRTWP); higher than –90 dBm (for VS.MeanRTWP)
  ENU: VS.RAC.UL.EqvUserNum, UlTotalEqUserNum; capacity expansion threshold: 75%
OVSF codes (DCH OVSF usage)
  Counter: VS.RAB.SFOccupy
  Capacity expansion threshold: 70%
HS-PDSCH codes (HS-PDSCH code usage)
  Counter: VS.PdschCodeAvail.Mean
  Capacity expansion threshold: 70%
HS-SCCH codes (HS-SCCH code usage)
  Counter: VS.ScchCodeUtil.Mean
  Capacity expansion threshold: 70%
This section discusses the network resource monitoring procedure, which can be easily
implemented and works in most scenarios.
The procedure starts with individual resource monitoring. When a resource exceeds a predefined
threshold, it should be cross-checked against other resources.
For example, if CE usage is greater than 70% but RTWP, TCP, or OVSF resources are normal,
this indicates that the cell load is normal but CEs are insufficient. In this scenario, increase the
number of CE licenses or add baseband processing boards rather than expand the NodeB
capacity.
Figure 2-1 shows the resource monitoring flowchart.
Figure 2-1 Resource monitoring flowchart
As shown in Figure 2-1, if SPU CPU usage is greater than 50% during peak hours, the SPU is
overloaded. In this situation, it is normally unnecessary to continue analysis on other resources.
This procedure applies to most resource monitoring scenarios, but sometimes the system may be
overloaded because of other abnormalities instead of traffic increases. Some of these
abnormalities can be discovered by resource cross-checking. If more extensive resource cross-
checking is unsuccessful, advanced troubleshooting is required. Chapter 3 "Network Resource
Troubleshooting" addresses advanced troubleshooting procedures.
For the sake of simplicity, it is assumed that there are no other abnormalities on the network
during the proactive monitoring process described in chapter 2 "Network Resource Monitoring."
An RNC can be configured with multiple SPU boards. SPU boards are
classified into main control SPU boards and non-main control SPU boards,
depending on the loaded software. Each SPUa board contains four CPUs and
each SPUb/SPUc board contains eight CPUs. Each CPU represents a
subsystem. Subsystems can be classified into SPU subsystems and Main
Processing Unit (MPU) subsystems.
Subsystem 0 on the main control SPU board is the MPU subsystem, which manages the user-
plane, control-plane, and transmission resources of the RNC.
Other subsystems on the main control SPU board and all the subsystems on the non-main
control SPU board are the SPU subsystems, also called CPU for Service (CPUS) subsystems.
They process control-plane services.
To set the logical function type of an SPU board, run the ADD BRD command
and set the parameter as follows:
To add a main control SPU board, set the Logical function type parameter to RUCP.
To add a non-main control SPU board, set the Logical function type parameter to UCP.
This section describes SPU subsystems only. For details about the MPU subsystem, see section
2.3.
The functions of SPU subsystems are as follows:
Processing upper-layer signaling over the Uu, Iu, Iur, and Iub interfaces
Processing transport layer signaling
Allocating and managing the various resources that are necessary for service setup, and
establishing signaling and service connections
Processing RNC frame number (RFN) signaling
The SPU CPU load is classified as follows:
Shareable load caused by call management. When an SPU subsystem is overloaded, the load
sharing mechanism works automatically to forward new calls to lightly loaded subsystems and
share the loads caused by the new calls.
Non-shareable load caused by cell resource management, NodeB resource management, and
signaling management at the transport layer. By setting the appropriate parameters, you can have
these functions and the loads they cause processed by a specific subsystem instead of forwarding
them to other subsystems. The non-shareable load on a specific subsystem can be reduced
by enabling the Automatic NodeB and Cell Allocation in the RNC feature.
The ratio of shareable load to non-shareable load is approximately 2:1. The ratio varies slightly
according to different traffic models but it applies to most traffic models. Figure 2-2 shows the
CPU load allocation on the SPU.
Figure 2-2 CPU load allocation on the SPU
2.2.2 Monitoring Methods
RNC-level Monitoring
RNC-level monitoring covers all SPU subsystems in an RNC. The formulas used to calculate the
RNC-level SPU CPU load are as follows:
If the RNC is configured with only SPUa or SPUb/SPUc boards, use the following formula:
SPU CPU load = Average of all SPU CPU (VS.XPU.CPULOAD.MEAN)
If the RNC is configured with both SPUa and SPUb/SPUc boards, use the following formula:
SPU CPU load = (1.33 x ∑VS.XPU.CPULOAD.MEAN of SPUa + ∑VS.XPU.CPULOAD.MEAN of SPUb + ∑VS.XPU.CPULOAD.MEAN of SPUc)/(1.33 x N + M + P)
N, M, and P indicate the number of SPU subsystems on SPUa, SPUb, and SPUc boards,
respectively.
The processing capacity of an SPU subsystem on an SPUa board is 1.33 times that on an
SPUb/SPUc board.
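The weighted average above can be sketched in code. This is an illustrative helper only; the function name and sample values are hypothetical, and the inputs are the per-subsystem VS.XPU.CPULOAD.MEAN values grouped by board type.

```python
# Sketch of the RNC-level SPU CPU load formula for a mixed SPUa/SPUb/SPUc
# configuration. An SPUa subsystem has 1.33x the processing capacity of an
# SPUb/SPUc subsystem, so its samples and its subsystem count are weighted.

SPUA_WEIGHT = 1.33

def rnc_spu_cpu_load(spua_loads, spub_loads, spuc_loads):
    """Weighted average of per-subsystem VS.XPU.CPULOAD.MEAN values (%)."""
    weighted_sum = (SPUA_WEIGHT * sum(spua_loads)
                    + sum(spub_loads) + sum(spuc_loads))
    weighted_count = (SPUA_WEIGHT * len(spua_loads)
                      + len(spub_loads) + len(spuc_loads))
    return weighted_sum / weighted_count

# Example: four SPUa subsystems at 40% and eight SPUb subsystems at 30%
print(round(rnc_spu_cpu_load([40.0] * 4, [30.0] * 8, []), 1))  # 34.0
```

If only one board type is configured, the weights cancel and the formula reduces to the plain average given above.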
When the RNC detects that the CPU load on an SPU subsystem exceeds a specified threshold,
ALM-20256 CPU Overload is reported. You can query the overload threshold by running the
LST CPUTHD command.
Based on the analysis of RNC-level SPU CPU and SPU subsystem-level CPU loads,
optimization suggestions are provided to address problems in scenarios listed in the following
table.
No.  Scenario                                                                           Optimization Suggestion
1    The RNC-level SPU CPU load is high.                                                Add SPU boards.
2    The CPU load on an SPU subsystem is much less than the RNC-level SPU CPU load.     Adjust the load sharing threshold.
3    The CPU load on an SPU subsystem is much greater than the RNC-level SPU CPU load.  Enable the Automatic NodeB and Cell Allocation in the RNC feature.
Scenario 2: The CPU load on an SPU subsystem is much less than the RNC-
level SPU CPU load.
Assume that the load sharing threshold is set to 50% and the CPU load on SPU subsystem A is
less than the RNC-level SPU CPU load by more than 10%, as shown in Figure 2-4.
Figure 2-4 CPU load on an SPU subsystem much less than RNC-level SPU CPU load
The RNC-level SPU CPU load is less than the load sharing threshold. Therefore, the load sharing
mechanism is not triggered, and the shareable load is not allocated to SPU subsystem A. In this
situation, run the MML command SET UCTRLPLNSHAREPARA with the Control Plane
Sharing Out Threshold parameter set to a smaller value to trigger the load sharing mechanism.
Scenario 3: The CPU load on an SPU subsystem is much greater than the RNC-
level SPU CPU load.
Assume that the load sharing threshold is set to 35%, the CPU load on SPU subsystem B is
greater than the load sharing threshold (50%) of a single subsystem and greater than the RNC-
level SPU CPU load by 10%, as shown in Figure 2-5.
Figure 2-5 CPU load on an SPU subsystem much greater than RNC-level SPU CPU load
As shown in Figure 2-5, the non-shareable load on SPU subsystem B is significantly high and
cannot be transferred to other subsystems through load sharing. In this situation, it is
recommended that you enable the Automatic NodeB and Cell Allocation in the RNC feature.
For details about the Automatic NodeB and Cell Allocation in the RNC feature, see Flow
Control Feature Parameter Description in the RAN Feature Documentation.
The MPU is located in subsystem 0 on the main control SPU board. It manages the user-plane
resources, control-plane resources, and transmission resources in the RNC. The number of MPUs
on an SPU board can be configured based on service requirements. The MPU allocates resources
such as SPU, DPU, and interface board to each call. More than 60% of the MPU load is used for
transmission resource management.
The MPU resource pool function, used to manage transmission resources, is enabled by default.
After this function is enabled, all the MPUs work in a resource pool, and the RNC allocates
transmission resources based on MPU CPU loads to balance the MPU CPU loads in the RNC.
In effective mode, run the MML command DSP BRD to query the logical function type of an
SPUa or SPUb/SPUc board. If Logic function type is set to RUCP, subsystem 0 on this SPU is
the MPU subsystem.
In ineffective mode, find the MML command ADD BRD in the RNC configuration file. If the
value of Logic function type is RUCP, subsystem 0 on this SPU is the MPU subsystem.
2.3.2 Monitoring Methods
The RNC counter VS.XPU.CPULOAD.MEAN measures the CPU usage on a single MPU
subsystem, which indicates the CPU load on the MPU subsystem.
To query the real-time CPU usage on an MPU subsystem, run the MML command DSP
CPUUSAGE.
It is recommended that you monitor the RNC-level MPU CPU load, which indicates the average
CPU load on all MPU subsystems in an RNC. The formulas used to calculate the RNC-level
MPU CPU load are as follows:
If the RNC is configured with either SPUa or SPUb/SPUc boards, use the following formula:
MPU CPU load = Average of all MPU CPU (VS.XPU.CPULOAD.MEAN)
If the RNC is configured with both SPUa and SPUb/SPUc boards, use the following formula:
MPU CPU load = (1.7 x ∑VS.XPU.CPULOAD.MEAN of SPUa + ∑VS.XPU.CPULOAD.MEAN of SPUb + ∑VS.XPU.CPULOAD.MEAN of SPUc)/(1.7 x N + M + P)
N, M, and P indicate the number of MPU subsystems on SPUa, SPUb, and SPUc boards,
respectively.
The processing capacity of an MPU subsystem on an SPUa board is 1.7 times that on an
SPUb/SPUc board.
When the RNC detects that the CPU load on an MPU subsystem exceeds a specified threshold,
ALM-20256 CPU Overload is reported. You can query the overload threshold by running the
LST CPUTHD command.
Run the MML command SET TNLOADBALANCEPARA with MPU Load Sharing
Threshold set to 40 and MPU Load Difference Threshold set to 10 to adjust the load sharing
threshold and offset.
After the load sharing threshold is adjusted, perform capacity expansion in the following
scenarios:
The RNC-level MPU CPU load during peak hours is greater than the expansion threshold
(50%) for n days (n is specified according to the actual situation; 3 days by default) in a week.
When the RNC-level MPU CPU load during peak hours is greater than the emergency
expansion threshold (60%) for n days (n is specified according to the actual situation; 3 days by
default) in a week, the RNC may report ALM-20256 CPU Overload and enable the flow control
function. In this situation, perform capacity expansion immediately.
Perform capacity expansion as follows:
If there are vacant slots, add SPU boards.
If there are no vacant slots, perform the following to configure more MPU subsystems:
− Change the logical type from non-main control SPU boards to main control SPU boards
when the CPU loads on non-main control SPU boards are less than or equal to 40%.
− Add an RNC when the CPU loads on non-main control SPU boards are greater than 40%.
The digital signal processing (DSP) subsystems are logical subsystems on the DPUb or DPUe
board. A DPUb board contains 22 logical subsystems and a DPUe board contains 28 logical
subsystems.
The DPU board processes and distributes UMTS user-plane service data. The main functions of
the DPU board are as follows:
Multiplexing and demultiplexing
Processing frame protocols
Selecting and distributing data
Performing the functions of the GTP-U, Iu UP, PDCP, RLC, MAC, and FP protocols
Performing encryption, decryption, and paging
Processing internal communication protocols between the SPU and DPU boards
Processing the Multimedia Broadcast and Multicast Service (MBMS) at the RLC and MAC
layers
If a DSP subsystem in the RNC is overloaded, new calls will be transferred to lightly loaded
subsystems. It is recommended that you monitor the RNC-level DSP loads.
For details about resource sharing on the user plane, see System Reliability Feature Parameter
Description in the RAN Feature Documentation.
The RNC counter VS.DSP.UsageAvg measures the CPU usage on a single DSP subsystem,
which indicates the CPU load on the DSP subsystem.
RNC-level monitoring involves all the DSP subsystems in an RNC.
The DSP load on an RNC can be indicated by the average load on all DSP subsystems in the
RNC during peak hours. The formulas used to calculate the DSP load on an RNC are as follows:
If the RNC is configured with either DPUb or DPUe boards, use the following formula:
DSP load = Average of all DSP (VS.DSP.UsageAvg)
If the RNC is configured with both DPUb and DPUe boards, use the following formula:
DSP load = (0.5 x ∑VS.DSP.UsageAvg of DPUb + ∑VS.DSP.UsageAvg of DPUe)/(0.5 x N + M)
N and M indicate the number of DSP subsystems on DPUb and DPUe boards, respectively.
The processing capacity of a DSP subsystem on a DPUb board is half of that on a DPUe board.
When the RNC detects that the CPU load on a DSP subsystem exceeds a specified threshold,
ALM-20256 CPU Overload is reported. You can query the overload threshold by running the
LST CPUTHD command.
If the average load on all DSP subsystems managed by the RNC exceeds a specified overload
threshold, ALM-22305 Resource overload on the user plane is reported. You can run the LST
UALMTHD command to query the overload thresholds.
Based on an analysis of resource pool- and RNC-level DSP loads, optimization suggestions are
provided to address problems in scenarios listed in the following table.
No.  Scenario                                                                                         Optimization Suggestion
1    The RNC-level DSP load is high.                                                                  Add DPU boards, or replace DPUb boards with DPUe boards.
2    The RNC-level DSP load is less than the expansion threshold, and DSP loads are severely          Adjust the load sharing threshold.
     unbalanced.
Scenario 2: The RNC-level DSP load is less than the expansion threshold, and
DSP loads are severely unbalanced among physical subracks.
Assume that the expansion threshold is set to 60% and that the DSP load on physical subrack A is
lower than that on the other physical subracks by more than 10%, as shown in Figure 2-7.
Figure 2-7 RNC-level DSP load less than expansion threshold and DSP loads severely
unbalanced among physical subracks
In this situation, run the MML command SET UUSERPLNSHAREPARA to set Percentage of
User Plane CPU Usage Sharing Out Threshold(UserPlnCpuSharingOutThd).
For details about load sharing, see Controller Resource Sharing Feature Parameter Description
in the RAN Feature Documentation.
In effective mode, run the MML command LST BRD to query information about a specific
board, for example, whether the board is an interface board.
In ineffective mode, find the MML command ADD BRD in the RNC configuration file. If the
value of the BRDCLASS parameter is INT, the board is an interface board. You can obtain
information about this interface board using this command.
The load of an interface board is reflected in the following aspects: physical port usage, CPU
usage on the control plane, forwarding load on the user plane, channel identifier (CID) usage of
the interface board, and average usage of user datagram protocol (UDP) ports or tunnel endpoint
IDs (TEIDs).
Calculate the physical port usage of an interface board using the following formula:
Physical port usage =
Maximum(VS.FEGE.RXMEANSPEED,VS.FEGE.TXMEANSPEED)/Bandwidth of the
physical Ethernet port
The counters used to calculate the physical port usage are as follows:
VS.FEGE.RXMEANSPEED
VS.FEGE.TXMEANSPEED
Physical ports include FE and GE ports, whose bandwidths are 100 Mbit/s and 1000 Mbit/s,
respectively. The ports on the GOUa, GOUc, and GOUe boards are GE ports. The ports on the
FG2a, FG2c, and FG2e boards are FE or GE ports; you can query the port type by running the
LST ETHPORT command. If the auto-negotiation mode is used, run the DSP ETHPORT
command to query the negotiated bandwidths.
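The physical port usage formula above can be sketched as follows. The function name and traffic values are illustrative; the real inputs are the VS.FEGE.RXMEANSPEED and VS.FEGE.TXMEANSPEED counter values and the (possibly negotiated) port bandwidth.

```python
# Sketch of the physical-port usage check for an interface board:
# usage = max(RX mean speed, TX mean speed) / physical port bandwidth.

GE_BANDWIDTH = 1000.0  # Mbit/s
FE_BANDWIDTH = 100.0   # Mbit/s

def physical_port_usage(rx_mean_speed, tx_mean_speed, port_bandwidth):
    """All arguments in the same unit (e.g. Mbit/s); returns a fraction."""
    return max(rx_mean_speed, tx_mean_speed) / port_bandwidth

# Example: a GE port carrying 620 Mbit/s downlink and 180 Mbit/s uplink
print(f"{physical_port_usage(620.0, 180.0, GE_BANDWIDTH):.0%}")  # 62%
```

The direction with the higher mean speed dominates, which matches the Maximum(...) term in the formula.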
The counters used to monitor the interface board CPU load are as follows:
VS.INT.CPULOAD.MEAN: Average CPU Usage of the CPU
VS.INT.TRANSLOAD.RATIO.MEAN: Average Forwarding Ratio of Interface Boards
VS.INT.TRM.FLOW.UDP.RESUSAGE.MEAN: Average Usage of UDP Ports of Interface
Boards
VS.INT.TRM.FLOW.CID.RESUSAGE.MEAN: Average Usage of CID of Interface Boards
VS.INT.TRM.FLOW.TEID.RESUSAGE.MEAN: Average Usage of TEIDs of Interface
Boards
Whether to expand capacity for interface boards or physical ports is determined by the preceding
evaluation results.
For details about the Transmission Resource Pool in RNC feature, see Transmission Resource
Pool in RNC Feature Parameter Description in the RAN Feature Documentation.
2.6 SCU CPU Load
2.6.1 Monitoring Principles
Both the SCU CPU load and inter-subrack bandwidth need to be monitored for SCU boards.
If the SCU CPU load reaches 60%, contact Huawei for technical support.
If the value of Frame Mean Usage exceeds 40%, contact Huawei for technical support.
If VS.Frame.Flux.DropPackets exceeds the threshold (0.02%), ALM-20277 Communication
Between Subracks Faulty is reported. In the case of other faults, contact Huawei for technical
support.
KPIs such as the licensed RNC-level CS and PS traffic volumes as well as hardware capacity can
be obtained using RNC license files. The usage of each license can be evaluated based on the
obtained KPIs and the RNC's actual traffic volume.
The licenses related to the traffic volume are listed in the following table.
License Item                       Abbreviation    Sales Unit
Voice Erlang (per Erlang)          LQW1CS01        Erl
PS throughput only (per Mbit/s)    LQW1PO01        Mbit/s
HSDPA Throughput (per Mbit/s)      LQW1HSDPAT01    Mbit/s
HSUPA Throughput (per Mbit/s)      LQW1HSUPAT01    Mbit/s
The license related to the hardware capacity is listed in the following table.
License Item                          Abbreviation   Sales Unit             Hardware Dependency
Hardware Capacity License (165Mbps)   LQW1HWCL01     165Mbps (per board)    DPUe
Hardware Capacity License (300Mbps)   LQW1HWCL02     300Mbps (per board)    DPUe
The licenses related to the cell-level HSDPA/HSUPA user number capacity are listed in the
following table.
License Item Abbreviation Sales Unit
If the peak-hour utilization rate reaches the capacity expansion threshold for n days (n is
specified based on the actual situation; 3 days by default), additional licenses must be purchased.
The NIU is an evolved network intelligence processing board that implements service awareness
and collects PS service statistics. The NIU boards can be inserted into slots 0 to 5, 8 to 11, 14 to
23, 26, and 27 of the MPS, and into slots 0 to 5 and 8 to 27 of the EPS.
NIU overload will adversely affect service awareness and PS service statistics.
The VS.NIU.CPULOAD.MEAN counter provides the average CPU usage of a subsystem in the
measurement period. It indicates the load and performance of the CPU on the subsystem in the
measurement period.
The NIU CPU load is calculated using the following formula:
NIU CPU Load = AVG(VS.NIU.CPULOAD.MEAN)
If the peak-hour NIU CPU usage reaches 50% within n days (n is specified based on site
requirements; 3 days by default) in a week, you are advised to initiate pre-expansion; if the usage
exceeds 60%, you are advised to initiate emergency capacity expansion.
To expand the capacity, insert more NIU boards.
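The NIU pre-expansion/emergency decision described above can be sketched as follows. The 50% and 60% thresholds and the default n = 3 days come from the text; the function name and daily load values are illustrative.

```python
# Sketch of the NIU CPU load evaluation: if the peak-hour load reaches 50%
# on at least n days in a week, start pre-expansion; if it reaches 60% on
# at least n days, start emergency capacity expansion.

def niu_expansion_action(peak_hour_loads, days_required=3):
    """peak_hour_loads: daily peak-hour NIU CPU load (%) over one week."""
    if sum(load >= 60 for load in peak_hour_loads) >= days_required:
        return "emergency expansion"
    if sum(load >= 50 for load in peak_hour_loads) >= days_required:
        return "pre-expansion"
    return "no action"

# Example week: three days at or above 50%, none at 60%
print(niu_expansion_action([48, 52, 55, 51, 47, 40, 42]))  # pre-expansion
```

The same pattern (threshold reached for n days in a week) recurs for most resources in this guide, only the threshold values differ.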
The DEU is an evolved data processing board that implements enhanced UMTS data processing.
DEU overload will adversely affect enhanced UMTS data processing.
Even when the CPU is not overloaded, DEU performance will be affected if the resources
required for the following features exceed the preset threshold: eTrFO, Crystal Voice Coverage
Improvement by Rate Adaption, and Always-HD Voice.
The VS.DEU.DSP.UsageAvg counter provides the average CPU usage of a DEU subsystem in
the measurement period. It indicates the load and performance of the CPU on the subsystem
during the measurement period.
The DEU CPU load is calculated using the following formula:
DEU Load = AVG(VS.DEU.DSP.UsageAvg)
The DEU specification usage for the eTrFO feature is calculated using the following formula:
DEU Specification Usage (eTrFO) = (VS.AMRNB.TrFO.TransCoder.Frame.Num x
20)/1000/60/Total DEU Capacity
where
Total DEU capacity equals the number of DEU boards multiplied by the specifications of each
DEU board. The eTrFO specifications of a DEU board are 1000.
The DEU specification usage for the Crystal Voice Coverage Improvement by Rate Adaption
feature is calculated using the following formulas:
− DEU Specification Usage (Crystal Voice Coverage Improvement by Rate Adaption) =
VS.AMRNB.CrystalAdapt.MEAN/Total DEU Capacity
where
The VS.AMRNB.CrystalAdapt.MEAN counter provides the average number of AMR-NB voice
UEs enabled with Crystal Voice Coverage Improvement by Rate Adaption in a cell.
Total DEU Capacity equals the number of DEU boards multiplied by the specifications of each
DEU board. The specifications of the average number of AMR-NB voice UEs enabled with
Crystal Voice Coverage Improvement by Rate Adaption for each DEU are 1000.
− DEU Specification Usage (Crystal Voice Coverage Improvement by Rate Adaption) =
VS.AMRWB.CrystalAdapt.MEAN/Total DEU Capacity
where
The VS.AMRWB.CrystalAdapt.MEAN counter provides the average number of AMR-WB
voice UEs enabled with Crystal Voice Coverage Improvement by Rate Adaption in a cell.
Total DEU Capacity equals the number of DEU boards multiplied by the specifications of each
DEU board. The specifications of the average number of AMR-WB voice UEs enabled with
Crystal Voice Coverage Improvement by Rate Adaption for each DEU are 500.
The DEU specification usage for the Always-HD Voice feature is calculated using the following
formula:
DEU Specification Usage (Always-HD Voice) = VS.RAB.AlwaysHDVoice.Erlang.cell/Total
DEU Capacity
where
The VS.RAB.AlwaysHDVoice.Erlang.cell counter provides the number of Erlangs of Always-
HD Voice services based on the call duration in a cell.
Total DEU Capacity equals the number of DEU boards multiplied by the specifications of each
DEU board. The specifications of the number of Erlangs of Always-HD Voice services for each
DEU are 260.
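The four DEU specification-usage formulas above can be collected into one sketch. The function names and input values are hypothetical; the per-board specifications (1000 for eTrFO, 1000 for AMR-NB and 500 for AMR-WB Crystal Voice, 260 Erlang for Always-HD Voice) are taken from the text.

```python
# Sketch of the DEU specification-usage formulas. Each "Total DEU
# Capacity" is the number of DEU boards times the per-board specification.

def deu_usage_etrfo(trfo_frame_num, num_deu_boards):
    # (VS.AMRNB.TrFO.TransCoder.Frame.Num x 20)/1000/60/Total DEU Capacity
    return trfo_frame_num * 20 / 1000 / 60 / (num_deu_boards * 1000)

def deu_usage_crystal_nb(amrnb_crystal_mean, num_deu_boards):
    # VS.AMRNB.CrystalAdapt.MEAN / Total DEU Capacity (1000 per board)
    return amrnb_crystal_mean / (num_deu_boards * 1000)

def deu_usage_crystal_wb(amrwb_crystal_mean, num_deu_boards):
    # VS.AMRWB.CrystalAdapt.MEAN / Total DEU Capacity (500 per board)
    return amrwb_crystal_mean / (num_deu_boards * 500)

def deu_usage_always_hd(hd_voice_erlang, num_deu_boards):
    # VS.RAB.AlwaysHDVoice.Erlang.cell / Total DEU Capacity (260 per board)
    return hd_voice_erlang / (num_deu_boards * 260)

# Example: two DEU boards carrying 130 Erlang of Always-HD Voice traffic
print(round(deu_usage_always_hd(130.0, 2), 2))  # 0.25
```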
Common channels include paging channels (PCHs), forward access channels (FACHs), and
random access channels (RACHs).
Table 2-8 Common channel monitoring
PCH: PCHs are downlink channels used to transmit paging messages. If PCHs are insufficient,
paging messages may be lost.
FACH: FACHs are downlink transmission channels used to transmit signaling data and a small
amount of user data. FACH insufficiency will cause a large number of state transitions for PS
services, occurrence of RRC signaling storms, and loss of signaling messages and user data.
RACH: RACHs are uplink transmission channels used to transmit signaling data and a small
amount of user data. RACH insufficiency will decrease the network access success rate and
deteriorate user experience.
Based on the RNC's default parameter settings, the usages of PCHs, FACHs, and RACHs are
calculated using the following formulas, respectively:
PCH Usage
PCH usage = VS.UTRAN.AttPaging1/(<SP> x 60 x 5/0.01)
where
<SP> indicates the measurement period expressed in minutes.
60 indicates 60 seconds in a minute.
5 indicates the typical configuration (that is, five users in each TTI) of the PCH.
0.01 indicates the length (that is, 0.01s) of a PCH TTI.
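The PCH usage formula above can be sketched as follows. The function name and counter value are illustrative; the capacity term assumes the default configuration of five paging users per 0.01 s TTI, as in the formula.

```python
# Sketch of: PCH usage = VS.UTRAN.AttPaging1 / (<SP> x 60 x 5 / 0.01),
# where <SP> is the measurement period in minutes.

def pch_usage(att_paging, sp_minutes):
    """att_paging: VS.UTRAN.AttPaging1 over a period of sp_minutes."""
    capacity = sp_minutes * 60 * 5 / 0.01  # paging opportunities in period
    return att_paging / capacity

# Example: 180,000 paging attempts in a 60-minute measurement period
print(f"{pch_usage(180_000, 60):.1%}")  # 10.0%
```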
FACH Usage
If a secondary common control physical channel (SCCPCH) carries both a FACH and a PCH,
the FACH usage is calculated using the following formula:
Usage of a FACH carried on an SCCPCH = VS.CRNCIubBytesFACH.Tx x 8/[(60 x <SP> x 168 x 1/0.01) x VS.PCH.Bandwidth.UsageRate x 6/7 + (60 x <SP> x 360 x 1/0.01) x (1 - VS.PCH.Bandwidth.UsageRate x 6/7)]
where
VS.PCH.Bandwidth.UsageRate =
<VS.CRNCIubBytesPCH.Tx>/(<VS.CRNC.IUB.PCH.Bandwidth> x <SP> x 60)
8 indicates 8 bits in a byte.
60 indicates 60 seconds in a minute.
<SP> indicates the measurement period in minutes.
1 indicates one packet transmitted per TTI.
0.01 indicates the length (that is, 0.01s) of a PCH TTI.
6/7 indicates the payload ratio in a PCH frame.
168 indicates the size of a TB on the signaling plane of a FACH.
360 indicates the size of a TB on the user plane of a FACH.
Associated counters are as follows:
VS.CRNCIubBytesFACH.Tx
VS.CRNCIubBytesPCH.Tx
VS.CRNC.IUB.PCH.Bandwidth
If an SCCPCH only carries a FACH, the FACH usage is calculated using the following formula:
FACH Usage = ((VS.SRBNum.FACH - VS.OneSRBTTINum.FACH)/2 +
VS.OneSRBTTINum.FACH + VS.IndepTRBNum.FACH)/(<SP> x 60/0.01)
where
2 indicates scenarios where two SRB TBs are transmitted within one TTI.
<SP> indicates the measurement period in minutes.
60 indicates 60 seconds in a minute.
0.01 indicates the length (that is, 0.01s) of a FACH TTI.
Associated counters are as follows:
VS.SRBNum.FACH
VS.OneSRBTTINum.FACH
VS.IndepTRBNum.FACH
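The second (FACH-only SCCPCH) formula above can be sketched as follows. The function name and counter values are illustrative; the denominator assumes one 0.01 s TTI slot per transmission, and two SRB TBs share one TTI as stated in the formula.

```python
# Sketch of: FACH Usage = ((VS.SRBNum.FACH - VS.OneSRBTTINum.FACH)/2
#   + VS.OneSRBTTINum.FACH + VS.IndepTRBNum.FACH) / (<SP> x 60 / 0.01)

def fach_usage(srb_num, one_srb_tti_num, indep_trb_num, sp_minutes):
    """TTIs consumed divided by TTIs available in the period."""
    ttis_used = ((srb_num - one_srb_tti_num) / 2   # paired SRB TBs
                 + one_srb_tti_num                 # single-SRB TTIs
                 + indep_trb_num)                  # independent TRBs
    ttis_available = sp_minutes * 60 / 0.01
    return ttis_used / ttis_available

# Example over a 60-minute measurement period
print(f"{fach_usage(200_000, 40_000, 50_000, 60):.1%}")  # 47.2%
```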
RACH Usage
Each cell has only one RACH. When signaling and user data coexist on the RACH, the RACH
usage is calculated using the following formula:
RACH Usage = ((VS.CRNCIubBytesRACH.Rx - VS.TRBNum.RACH x 360/8) x 8/168)/(<SP>
x 60 x 4/0.02) + VS.TRBNum.RACH/(<SP> x 60 x 4/0.02)
where
360 indicates the size of a TB on the user plane of a FACH.
8 indicates 8 bits in a byte.
168 indicates the size of a TB on the signaling plane of a FACH.
<SP> indicates the measurement period in minutes.
60 indicates 60 seconds in a minute.
4 indicates that each TTI on the RACH can accommodate four UEs.
0.02 indicates the TTI of 0.02s (or 20 ms).
Associated counters are as follows:
VS.CRNCIubBytesRACH.Rx
VS.TRBNum.RACH
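The RACH usage formula above can be sketched as follows. The function name and counter values are illustrative; the sketch assumes four UEs per 0.02 s TTI, 168-bit signaling TBs, and 360-bit user-data TBs, as stated in the formula.

```python
# Sketch of: RACH Usage =
#   ((VS.CRNCIubBytesRACH.Rx - VS.TRBNum.RACH x 360/8) x 8/168) / slots
#   + VS.TRBNum.RACH / slots, with slots = <SP> x 60 x 4 / 0.02.

def rach_usage(rach_bytes_rx, trb_num, sp_minutes):
    """Signaling TBs plus user-data TBs over available RACH TTI slots."""
    slots = sp_minutes * 60 * 4 / 0.02
    signaling_tbs = (rach_bytes_rx - trb_num * 360 / 8) * 8 / 168
    return signaling_tbs / slots + trb_num / slots

# Example: 6.6 MB received, 100,000 user-data TBs, 60-minute period
print(f"{rach_usage(6_600_000, 100_000, 60):.1%}")  # 27.8%
```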
If the peak-hour usage of a measurement item reaches the capacity expansion threshold within n
days (n is specified based on the actual situation; 3 days by default) in a week, prepare for
capacity expansion as instructed in Figure 2-11.
Figure 2-11 Capacity expansion procedure
2.11 Downlink Load
2.11.1 Monitoring Principles
The downlink capacity of a cell is limited by its total available transmit power, which is
determined by the NodeB power amplifier's capability and the power configured for the cell.
The downlink transmit power consists of the following, as shown in Figure 2-12:
Common channel (CCH) power
Non-HSPA power without CCH
HSPA power
Power margin
Figure 2-12 Dynamic power resource allocation
To obtain MAXTXPOWER, run the LST UCELL command to query the value of the Max
Transmit Power of Cell parameter, and convert the parameter value from the unit "0.1 dBm" to
"W".
Associated counters are as follows:
VS.MeanTCP
VS.MeanTCP.NonHS
VS.HSDPA.MeanChThroughput
If the peak-hour usage of a measurement item reaches the capacity expansion threshold within n
days (n is specified based on the actual situation; 3 days by default) in a week, prepare for
capacity expansion as instructed in Figure 2-14.
Figure 2-14 Capacity expansion procedure
The relationship between the RoT and the uplink load factor is as follows:
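The equation for this relationship appears to have been lost during document conversion. A standard form, reconstructed to be consistent with the 3 dB/50% and 6 dB/75% examples that follow (with \(\eta_{UL}\) denoting the uplink load factor), is:

```latex
% Reconstructed RoT-load relationship; \eta_{UL} is the uplink load factor.
\mathrm{RoT}\ \text{[dB]} = -10\log_{10}\!\left(1-\eta_{UL}\right)
\qquad\Longleftrightarrow\qquad
\eta_{UL} = 1 - 10^{-\mathrm{RoT}/10}
```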
For example, a 3 dB noise rise corresponds to an uplink load of 50%, and a 6 dB noise rise
corresponds to an uplink load of 75%.
Figure 2-15 Relationship between RTWP, noise increase, and uplink load
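The noise-rise/load relationship can be sketched as follows (a standalone illustrative helper, not part of any product tool):

```python
import math

def uplink_load_from_rot(rot_db):
    """Uplink load factor implied by a given rise over thermal (dB)."""
    return 1 - 10 ** (-rot_db / 10)

def rot_from_uplink_load(load):
    """Noise rise (dB) implied by a given uplink load factor (0..1)."""
    return -10 * math.log10(1 - load)
```

A 3 dB rise yields a load factor of about 0.5 and a 6 dB rise about 0.75, matching the examples above.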
A large RTWP value in a cell is caused by traffic overflow, hardware faults (for example, poor
quality of antennas or feeder connectors), or external interference.
The RTWP and Equivalent Number of Users (ENU) are indicated by the following counters:
VS.MeanRTWP: Mean Power of Totally Received Bandwidth for Cell
VS.MinRTWP: Minimum Power of Totally Received Bandwidth for Cell
VS.RAC.UL.EqvUserNum: Mean Number of UL Equivalent Voice UEs in CELL_DCH State
for Cell
UlTotalEqUserNum: total number of equivalent users in the uplink, which can be queried
using the RNC MML command LST UCELLCAC.
The UL ENU Ratio is calculated using the following formula:
UL ENU Ratio = VS.RAC.UL.EqvUserNum/UlTotalEqUserNum x 100%
In some areas, the background noise increases to -106 dBm or above due to external interference
or hardware faults. If this occurs, the value of the VS.MinRTWP counter (the RTWP value
obtained when the cell carries no traffic) is considered the background noise.
The RTWP of a cell is considered high when the value of the VS.MeanRTWP counter is greater
than -100 dBm during off-peak hours or greater than -90 dBm during peak hours for two or three
consecutive days in a week.
A cell is considered heavily loaded if the UL ENU Ratio exceeds 75% during peak hours for two
or three consecutive days in a week.
If the value of the VS.MeanRTWP counter is greater than –90 dBm during peak hours for n
days (n is specified based on the actual situation; 3 days by default) in a week, there may be
hardware faults or external interference. Locate and rectify the faults and attempt to eliminate the
interference. If the value of the VS.MeanRTWP counter is still greater than –90 dBm after
hardware faults are rectified and external interference is eliminated, enable the following features
as required:
− WRFD-140215 Dynamic Configuration of HSDPA CQI Feedback Period
− WRFD-010712 Adaptive Configuration of Traffic Channel Power offset for HSUPA
If the uplink capacity of the cell still does not meet the requirements after the preceding features
are enabled, add carriers. If there are no additional UARFCNs available, add NodeBs.
For details about how to enable the WRFD-140215 Dynamic Configuration of HSDPA CQI
Feedback Period feature, see Dynamic Configuration Based on the Uplink Load Feature
Parameter Description in the RAN Feature Documentation.
For details about how to enable the WRFD-010712 Adaptive Configuration of Traffic Channel
Power offset for HSUPA feature, see Power Control Feature Parameter Description in the RAN
Feature Documentation.
If the number of uplink ENUs is insufficient and the amount of uplink power is sufficient, run
the MOD UCELLCAC command with the UL total equivalent user number parameter set to
a larger value. In addition, run the SET UADMCTRL command with the AF of hsupa
interactive service and AF of hsupa background service parameters set to 10.
The system reserves code resources for HSDPA services, and these code resources can be shared
among HSDPA services. Therefore, HSDPA services do not require admission control based on
cell code resources.
Figure 2-19 shows NodeB-controlled dynamic code allocation.
Figure 2-19 NodeB-controlled dynamic code allocation
NodeB-controlled dynamic code allocation is more flexible than RNC-controlled dynamic code
allocation. It shortens the response time and saves the Iub signaling used for code allocation.
Huawei RNCs monitor the average usage of an OVSF code tree based on the number of
equivalent codes with SF256, which is measured by the VS.RAB.SFOccupy counter.
The number of codes used by the dedicated channel (DCH) can be calculated using the following
formula:
DCH_OVSF_CODE = (<VS.SingleRAB.SF4> + <VS.MultRAB.SF4>) x 64 +
(<VS.MultRAB.SF8> + <VS.SingleRAB.SF8>) x 32 + (<VS.MultRAB.SF16> +
<VS.SingleRAB.SF16>) x 16 + (<VS.SingleRAB.SF32> + <VS.MultRAB.SF32>) x 8 +
(<VS.MultRAB.SF64> + <VS.SingleRAB.SF64>) x 4 + (<VS.SingleRAB.SF128> +
<VS.MultRAB.SF128>) x 2 + (<VS.SingleRAB.SF256> + <VS.MultRAB.SF256>)
The maximum number of codes available for the DCH can be calculated using the following
formula:
DCH_OVSF_CODE_Ava = 256 - (Codes occupied by CCHs + Codes occupied by E-AGCHs +
Codes occupied by E-RGCHs and E-HICHs + Codes reserved for HS-PDSCHs + HS-SCCH
codes)
For example, if the following conditions are met:
A cell that supports HSPA is configured with one SCCPCH, one E-AGCH, one E-RGCH/E-
HICH, and two HS-SCCHs.
At least one code is reserved for HSDPA services.
Then, DCH_OVSF_CODE_Ava = 256 - (8 + 1 + 2 + 16 + 4) = 225.
OVSF code usages are calculated as follows:
OVSF Usage = VS.RAB.SFOccupy/256 x 100%
DCH OVSF Usage = DCH_OVSF_CODE/DCH_OVSF_CODE_Ava
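The code-counting arithmetic above can be sketched as follows (a hypothetical helper; each code of spreading factor SF occupies 256/SF SF256-equivalent codes, and the default reservations mirror the example cell above):

```python
def dch_ovsf_code(rab_counts_by_sf):
    """SF256-equivalent codes used by the DCH.

    rab_counts_by_sf: {SF: VS.SingleRAB.SFn + VS.MultRAB.SFn}
    """
    return sum(count * (256 // sf) for sf, count in rab_counts_by_sf.items())

def dch_ovsf_usage(rab_counts_by_sf, cch=8, eagch=1, ergch_ehich=2,
                   hspdsch_reserved=16, hsscch=4):
    """DCH OVSF usage against codes left after CCH/HSPA reservations.

    The defaults reproduce the example: 256 - (8 + 1 + 2 + 16 + 4) = 225.
    """
    available = 256 - (cch + eagch + ergch_ehich + hspdsch_reserved + hsscch)
    return dch_ovsf_code(rab_counts_by_sf) / available
```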
If the peak-hour usage reaches the capacity expansion threshold within n days (n is specified
based on the actual situation; 3 days by default) in a week, prepare for capacity expansion as
instructed in Figure 2-21.
Figure 2-21 Capacity expansion procedure
For details about how to enable the WRFD-010631 Dynamic Code Allocation Based on NodeB
feature and the WRFD-010653 96 HSDPA Users per Cell feature, see HSDPA Feature
Parameter Description in the RAN Feature Documentation.
The HS-PDSCH code usage in a measurement period is calculated using the following formula:
HS-PDSCH Code Usage =
Sum_AllCells_of_NodeB(VS.PdschCodeUsed.Mean)/Sum_AllCells_of_NodeB(VS.PdschCodeAvail.Mean)
where
Sum_AllCells_of_NodeB means a sum of this counter value for all cells under a NodeB.
Associated counters are as follows:
VS.PdschCodeUsed.Mean
VS.PdschCodeAvail.Mean
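The NodeB-level aggregation can be sketched as follows (a hypothetical helper; each cell is represented as a dictionary of counter values):

```python
def hs_pdsch_code_usage(cells):
    """HS-PDSCH code usage summed over all cells of a NodeB."""
    used = sum(c["VS.PdschCodeUsed.Mean"] for c in cells)
    avail = sum(c["VS.PdschCodeAvail.Mean"] for c in cells)
    return used / avail
```

Note that the counters are summed before dividing, so heavily loaded cells weigh more than a simple per-cell average would suggest.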
The following suggestions are provided on the assumption that the HS-PDSCH Code Usage is
greater than 70% during peak hours for n days (n is specified based on the actual situation; 3
days by default) in a week and the value of VS.HSDPA.MeanChThroughput is less than what is
required by users (for example, 300 kbit/s).
If the number of licensed HS-PDSCH codes is greater than the number of
available HS-PDSCH codes, add carriers with the TCP usage taken into
consideration.
If the number of licensed HS-PDSCH codes is less than or equal to the
number of available HS-PDSCH codes, purchase more licenses.
If the HS-SCCH Code Usage is greater than 70% during peak hours for n days (n is specified
based on the actual situation; 3 days by default) in a week and the number of configured HS-
SCCHs is less than 4, add more HS-SCCHs.
ATM Transmission
On an ATM transport network, the RNC and NodeB can monitor the uplink and downlink load
on their interface boards. The Iub/Iu/Iur bandwidth usage is indicated by the ratio of the actual
uplink or downlink load to the configured bandwidth. For the Iub interface, the NodeB can also
monitor the uplink and downlink load on its interface boards.
The RNC monitors the admission success rate for each NodeB.
1. Bandwidth-based admission success rate
The bandwidth-based admission success rate is calculated using the following formula:
Bandwidth-based admission success rate = VS.AAL2.CAC.Succ/VS.AAL2.CAC.Att
Associated counters are as follows:
− VS.AAL2.CAC.Succ: Number of Successful AAL2 Path Admissions
− VS.AAL2.CAC.Att: Number of AAL2 Path Resource Admissions
2. Bandwidth usage
(1) Control plane
The uplink and downlink bandwidth usages of the SAAL links (for NCP, CCP, and ALCAP) on
the control plane over the Iub/Iu/Iur interface are calculated using the following formulas,
respectively:
− DL Iub/Iu/Iur Usage on Control Plane = VS.SAALLNK.PVCLAYER.TXBYTES x
8/1000/<SP>/TX BW_CFG
− UL Iub/Iu/Iur Usage on Control Plane = VS.SAALLNK.PVCLAYER.RXBYTES x
8/1000/<SP>/RX BW_CFG
where
− 8 indicates 8 bits in a byte.
− 1000 indicates 1 kbit/s.
− <SP> indicates the measurement period in minutes.
− TX BW_CFG indicates the configured transmit bandwidth, which can be queried by running
the LST ATMBW command.
− RX BW_CFG indicates the configured receive bandwidth, which can be queried by running
the LST ATMBW command.
Associated counters are as follows:
− VS.SAALLNK.PVCLAYER.TXBYTES
− VS.SAALLNK.PVCLAYER.RXBYTES
(2) User plane
The uplink and downlink bandwidth usages of the user plane on the Iub/Iu/Iur interface are
calculated using the following formulas, respectively:
− UL Iub/Iu/Iur Usage on User Plane = Sum (VS.AAL2PATH.PVCLAYER.RXBYTES) x
8/<SP>/1000/RX BW_CFG
− DL Iub/Iu/Iur Usage on User Plane = Sum (VS.AAL2PATH.PVCLAYER.TXBYTES) x
8/<SP>/1000/TX BW_CFG
where
− 8 indicates 8 bits in a byte.
− 1000 indicates 1 kbit/s.
− <SP> indicates the measurement period in minutes.
− TX BW_CFG indicates the configured transmit bandwidth, which can be queried by running
the LST ATMBW command.
− RX BW_CFG indicates the configured receive bandwidth, which can be queried by running
the LST ATMBW command.
Associated counters are as follows:
− VS.AAL2PATH.PVCLAYER.TXBYTES
− VS.AAL2PATH.PVCLAYER.RXBYTES
IP Transmission
On an IP transmission network, the RNC and NodeB can monitor the average uplink and
downlink loads on their interface boards. The Iub/Iu/Iur bandwidth usage is represented by the
ratio of the average uplink or downlink load to the configured Iub/Iu/Iur bandwidth. For the Iub
interface, the NodeB can also monitor the uplink and downlink load on its interface boards. In
addition, the RNC and NodeB can dynamically adjust the bandwidth of a service based on the
QoS requirements of the service and user priority and improve the Iub bandwidth usage using the
reserve pressure algorithm on their interface boards.
1. Admission success rate (or IP connection setup success rate)
The IP connection setup success rate is calculated using the following formula:
IP connection setup success rate = VS.ANI.IP.Conn.Estab.Succ/VS.ANI.IP.Conn.Estab.Att
Associated counters are as follows:
− VS.ANI.IP.Conn.Estab.Succ
− VS.ANI.IP.Conn.Estab.Att
− VS.ANI.IP.FailResAllocForBwLimit
2. Bandwidth usage
(1) Physical port usage
Calculate the physical port usage of an interface board to determine whether the physical
ports are congested. For details, see "2.5.2 Monitoring Methods."
(2) Control plane
The uplink and downlink traffic volumes of the SCTP links (for NCP, CCP, and ALCAP) on the
control plane over the Iub interface are calculated using the following formulas, respectively:
− SCTP UL KBPS = VS.SCTP.RX.BYTES x 8/<SP>/1000
− SCTP DL KBPS = VS.SCTP.TX.BYTES x 8/<SP>/1000
where
− 8 indicates 8 bits in a byte.
− <SP> indicates the measurement period in minutes.
− 1000 indicates 1 kbit/s.
Associated counters are as follows:
− VS.SCTP.TX.BYTES
− VS.SCTP.RX.BYTES
(3) User plane
In non-transmission resource pool networking, the number of bytes received or transmitted on an
IP path of each NodeB is measured to calculate the bandwidth usage:
− UL IUB User Plane Utility Ratio = Sum (VS.IPPATH.IPLAYER.RXBYTES) x
8/<SP>/1000/RX BW_CFG
− DL IUB User Plane Utility Ratio = Sum (VS.IPPATH.IPLAYER.TXBYTES) x
8/<SP>/1000/TX BW_CFG
where
− 8 indicates 8 bits in a byte.
− 1000 indicates 1 kbit/s.
− <SP> indicates the measurement period in minutes.
− TX BW_CFG indicates the configured transmit bandwidth.
− RX BW_CFG indicates the configured receive bandwidth.
Associated counters are as follows:
− VS.IPPATH.IPLAYER.TXBYTES
− VS.IPPATH.IPLAYER.RXBYTES
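The non-pooled user-plane calculation above can be sketched as follows (a hypothetical helper; per-IP-path byte counts and the configured bandwidth in kbit/s are assumed inputs):

```python
def iub_user_plane_usage(path_bytes, sp_minutes, bw_cfg_kbps):
    """Iub user-plane usage in non-pooled IP networking, as printed above.

    path_bytes:  per-IP-path VS.IPPATH.IPLAYER.TXBYTES (or RXBYTES) values
    sp_minutes:  measurement period <SP> in minutes
    bw_cfg_kbps: configured TX (or RX) bandwidth in kbit/s
    """
    # Sum over IP paths, convert bytes to kbits, normalize by period and
    # configured bandwidth, mirroring the formula term by term.
    return sum(path_bytes) * 8 / sp_minutes / 1000 / bw_cfg_kbps
```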
In transmission resource pool networking, the uplink and downlink Iub bandwidth usages on a
transmission resource pool are calculated using the following formulas, respectively:
− UL Iub/Iu/Iur Usage on User Plane = VS.IPPOOL.ANI.IPLAYER.RXBYTES x
8/<SP>/1000/RX BW_CFG
− DL Iub/Iu/Iur Usage on User Plane = VS.IPPOOL.ANI.IPLAYER.TXBYTES x
8/<SP>/1000/TX BW_CFG
where
− 8 indicates 8 bits in a byte.
− 1000 indicates 1 kbit/s.
− <SP> indicates the measurement period in minutes.
− TX BW_CFG indicates the configured transmit bandwidth.
− RX BW_CFG indicates the configured receive bandwidth.
Associated counters are as follows:
− VS.IPPOOL.ANI.IPLAYER.TXBYTES
− VS.IPPOOL.ANI.IPLAYER.RXBYTES
The following part uses the Iub interface as an example to describe how to perform capacity
expansion.
Scenario 1: Bandwidth Congestion on the ATM Control Plane
If an SAAL link is congested, increase the bandwidth configured for the SAAL link by running
the following commands:
ADD ATMTRF: TRFX=NewIndex, ST=NRTVBR, UT=KBIT/S, PCR=NewValue, SCR=
NewValue;
MOD SAALLNK: SRN=xxx, SN=xxx, SAALLNKN=xxx, CARRYT=IMA, CARRYSRN=1,
CARRYSN=NewIndex, CARRYIMAGRPN=NewIndex;
The activity factor equals the ratio of actual bandwidth occupied by a UE to the bandwidth
allocated to the UE during its initial access. It is used to predict the bandwidth required by
admission. Each NodeB can be configured with its own activity factor. The default activity factor
for voice services is 70%, and the default activity factor for PS BE services is 40%.
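Admission-time bandwidth prediction with these default activity factors can be sketched as follows (a hypothetical helper; the service-name keys are made up for illustration):

```python
# Default activity factors stated above (illustrative keys, not product names).
DEFAULT_ACTIVITY_FACTOR = {"voice": 0.70, "ps_be": 0.40}

def predicted_bandwidth(allocated_kbps, service):
    """Bandwidth the admission algorithm expects a UE to actually occupy."""
    return allocated_kbps * DEFAULT_ACTIVITY_FACTOR[service]
```

For example, a PS BE bearer allocated 100 kbit/s is predicted to occupy 40 kbit/s at admission.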
----End
2.17 CE Usage
2.17.1 Monitoring Principles
Uplink CE resources can be shared in an uplink resource group, but not between uplink resource
groups. Downlink CE resources are associated with the baseband processing boards where a cell
is set up. CE resources allocated by licenses are shared among services on the NodeB. (CE
resources are shared on a per operator basis in MOCN scenarios.)
The NodeB sends the RNC a response message that carries its CE capability. The NodeB's CE
capability is limited by both the installed hardware and the configured software licenses.
The methods of calculating the credit resource usage of admitted UEs are different before and
after the CE Overbooking feature is enabled. Table 2-18 describes the details.
Table 2-18 Credit resources consumed by admitted UEs before and after CE Overbooking is
enabled
Before CE Overbooking is enabled: The RNC calculates the usage of CEs for admitted UEs by
adding up the credit resources reserved for each UE. R99 UE: The RNC calculates the usage of
credit resources fo
If the uplink/downlink license- or hardware-based CE usage is greater than 70% during peak
hours for n days (n is specified based on the actual situation; 3 days by default) in a week,
perform capacity expansion as follows:
If the license-based CE usage exceeds its capacity expansion threshold, CE resources are
limited by the license. In this situation, upgrade the license file.
If the hardware-based CE usage exceeds the capacity expansion threshold, CE resources are
limited by the hardware capacity. In this situation, add WBBP boards.
If capacity expansion is inapplicable, perform the following operations:
Run the RNC MML command SET UCORRMALGOSWITCH. In this step, select the
DRA_DCCC_SWITCH and DRA_BASE_ADM_CE_BE_TTI_RECFG_SWITCH check
boxes under the Dynamic Resource Allocation Switch parameter to enable the DCCC
algorithm and the TTI dynamic adjustment algorithm for admission CE-based BE services,
respectively.
Run the RNC MML command SET UUSERGBR with the Uplink GBR for BE service
parameter set to D32.
Newly added CE resources can absorb traffic in hotspots and relieve CE congestion caused by
traffic overflow.
The features mentioned in Figure 2-27 refer to WRFD-010202 UE State in Connected Mode
(CELL_DCH, CELL_PCH, URA_PCH, CELL_FACH) and WRFD-020500 Enhanced Fast
Dormancy.
For details about how to enable the WRFD-010202 UE State in Connected Mode
(CELL_DCH, CELL_PCH, URA_PCH, CELL_FACH) feature, see State Transition Feature
Parameter Description in the RAN Feature Documentation.
For details about how to enable the WRFD-020500 Enhanced Fast Dormancy feature, see
Enhanced Fast Dormancy Feature Parameter Description in the RAN Feature Documentation.
The maximum CNBAP capability of a NodeB is 1500. When the CNBAP capability
configured for a NodeB is less than 1500, replace boards to expand the capacity if CNBAP
overload occurs. For the CNBAP capabilities of the WBBP boards, see 3900 Series Base Station
Technical Description.
The following NodeB counters are used to monitor the ratio of the number of HSPA users on a
BBP board to the configured number of HSPA users:
VS.BOARD.UsedHsdpaUserRatio.Mean
VS.BOARD.UsedHsupaUserRatio.Mean
If the ratio of the number of HSDPA/HSUPA users on a WBBP or UBBP board to the
configured number of HSDPA/HSUPA users exceeds 60% during peak hours within n days (n is
specified based on the actual situation; 3 days by default) in a week, prepare for capacity expansion
as instructed in Figure 2-28.
Figure 2-28 Capacity expansion procedure
For details about how to enable the WRFD-150242 HSDPA Scheduler Pool feature, see
HSDPA Scheduler Pool Feature Parameter Description in the RAN feature documentation. For
details about how to enable the WRFD-170205 HSUPA Scheduler Pool feature, see HSUPA
Scheduler Pool Feature Parameter Description in the RAN feature documentation.
The number of HSPA users supported by a WBBP board varies according to board type. For
detailed numbers of HSPA users supported by different WBBP boards, see 3900 Series Base
Station Technical Description.
3 Network Resource Troubleshooting
The monitoring method described in chapter 2 "Network Resource Monitoring" is effective in
most scenarios. However, other scenarios may require more detailed troubleshooting to
determine if high resource occupancy is caused by traffic increases or other abnormalities.
This chapter describes how to locate network resource problems that require more complex
troubleshooting and is intended for personnel who:
Have a deep understanding of WCDMA networks
Are familiar with the signaling procedure
Are familiar with the operation and maintenance of Huawei products
3.1 Possible Block and Failure Points in the Basic Call Flow
When network resources are insufficient, accessibility-related KPIs are likely to be affected first.
Figure 3-1 uses a mobile terminated call (MTC) as an example to illustrate the basic call flow,
with possible block/failure points indicated. For details about the basic call process, see 3GPP TS
25.931.
Figure 3-1 Basic call flow chart and possible block/failure points
The basic call flow illustrated in Figure 3-1 is described as follows:
Step 1 The CN sends a paging message to the RNC.
Step 2 Upon receiving the paging message, the RNC broadcasts the message on a PCH. If the PCH is
congested, the RNC may drop the message at block point #1.
Step 3 The UE may not receive the paging message or may fail to access the network, and therefore
does not respond to the RNC's paging message. See failure point #2.
Step 4 If the UE receives the paging message, it sends an RRC connection setup request to the RNC.
Step 5 If the RNC is congested when receiving the RRC connection setup request, the RNC may drop
the request message. See block point #3.
Step 6 If the RNC receives the RRC connection setup request and does not discard it, the RNC
determines whether to accept or reject the request. The request may be rejected due to
insufficient resources, as shown in block point #4.
Step 7 If the RNC accepts the request, the RNC instructs the UE to set up an RRC connection. The
UE may not receive the RNC's response message or may find that the configuration does not support
RRC connection setup. See failure points #5 and #6.
Step 8 After the RRC connection is set up, the UE sends NAS messages to negotiate with the CN
about service setup. If the CN determines to set up a service, the CN sends an RAB assignment
request to the RNC.
Step 9 The RNC accepts or rejects the RAB assignment request based on the resource usage of RAN.
See block point #7.
Step 10 If the RNC accepts the RAB assignment request, it initiates an RB setup process. During the
process, the RNC sets up radio links (RLs) to the NodeB over the Iub interface and sends an RB
setup message to the UE over the Uu interface. The RL or RB setup process may incur failures.
See failure points #8 and #9.
----End
The counters measuring RNC- and cell-level paging loss are as follows:
RNC-level paging loss
The counters measuring paging loss caused by Iu-interface flow control, CPU overload, or RNC-
level PCH congestion are as follows:
− VS.RANAP.CsPaging.Loss: number of failed responses after the RNC receives CS paging
messages from the CN
− VS.RANAP.PsPaging.Loss: number of failed responses after the RNC receives PS paging
messages from the CN
− VS.RANAP.CsPaging.Att: number of CS paging messages received by the RNC from the
CN
− VS.RANAP.PsPaging.Att: number of PS paging messages received by the RNC from the CN
Calculation formula:
Iu-interface paging loss ratio (RNC) = [(VS.RANAP.CsPaging.Loss +
VS.RANAP.PsPaging.Loss)/(VS.RANAP.CsPaging.Att + VS.RANAP.PsPaging.Att)] x 100%
Cell-level paging loss
The counters measuring paging loss caused by cell-level PCH congestion are as follows:
− VS.RRC.Paging1.Loss.PCHCong.Cell: number of discarded paging messages due to PCH
congestion in a cell
− VS.UTRAN.AttPaging1: number of paging messages of paging type 1 sent by the RNC in a
cell
Calculation formula:
Iu-interface paging loss ratio (cell) =
(VS.RRC.Paging1.Loss.PCHCong.Cell/VS.UTRAN.AttPaging1) x 100%
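Both paging loss ratios can be sketched as follows (hypothetical helpers; counter values are passed in as plain numbers):

```python
def rnc_paging_loss_ratio(cs_loss, ps_loss, cs_att, ps_att):
    """Iu-interface paging loss ratio (RNC), in percent.

    cs_loss/ps_loss: VS.RANAP.CsPaging.Loss / VS.RANAP.PsPaging.Loss
    cs_att/ps_att:   VS.RANAP.CsPaging.Att / VS.RANAP.PsPaging.Att
    """
    return (cs_loss + ps_loss) / (cs_att + ps_att) * 100

def cell_paging_loss_ratio(pch_cong_loss, att_paging1):
    """Cell-level paging loss ratio, in percent.

    pch_cong_loss: VS.RRC.Paging1.Loss.PCHCong.Cell
    att_paging1:   VS.UTRAN.AttPaging1
    """
    return pch_cong_loss / att_paging1 * 100
```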
The following table describes the mapping between reasons for RRC connection setup rejections
and the corresponding counters.
Rejection Reason: Counter
Uplink power congestion: VS.RRC.Rej.ULPower.Cong
Downlink power congestion: VS.RRC.Rej.DLPower.Cong
Uplink CE congestion: VS.RRC.Rej.ULCE.Cong
Downlink CE congestion: VS.RRC.Rej.DLCE.Cong
Uplink Iub bandwidth congestion: VS.RRC.Rej.ULIUBBand.Cong
Downlink Iub bandwidth congestion: VS.RRC.Rej.DLIUBBand.Cong
Downlink code resource congestion: VS.RRC.Rej.Code.Cong
The following table describes the mapping between reasons for RAB setup rejections and the
corresponding counters.
Rejection Reason: Counter
Power congestion: VS.RAB.FailEstabCS.ULPower.Cong, VS.RAB.FailEstabCS.DLPower.Cong,
VS.RAB.FailEstabPS.ULPower.Cong, VS.RAB.FailEstabPS.DLPower.Cong
Uplink CE congestion: VS.RAB.FailEstabCS.ULCE.Cong, VS.RAB.FailEstabPS.ULCE.Cong
Downlink CE congestion: VS.RAB.FailEstabCs.DLCE.Cong, VS.RAB.FailEstabPs.DLCE.Cong
Iub bandwidth congestion: VS.RAB.FailEstabPS.DLIUBBand.Cong,
VS.RAB.FailEstabPS.ULIUBBand.Cong
Figure 3-5 illustrates the procedure for analyzing Iub bandwidth congestion.
Figure 3-5 Procedure for analyzing Iub bandwidth congestion
If the uplink RTWP and downlink TCP values are greater than the preset thresholds, power
congestion occurs.
If power congestion occurs in the downlink, enable the load reshuffling (LDR) and overload
control (OLC) functions. If power congestion occurs in the uplink, analyze whether the problem
is caused by interference or traffic increases.
If the RTWP value remains greater than -97 dBm, identify root causes and troubleshoot the
problem.
If the problem is caused by heavy traffic instead of signaling storms, perform the following
operations:
Enable LDR and OLC for temporary troubleshooting.
Add carriers or split cells for a long-term solution.
Figure 3-6 illustrates the procedure for analyzing power resource usage.
Figure 3-6 Procedure for analyzing power resource usage
Generally, adding a carrier is the most effective means of relieving uplink power congestion. If
no additional carrier is available, add a NodeB or reduce the downtilt of the antenna.
Among all RNC CPUs, SPU CPUs are most likely to cause system bottlenecks because
smartphones often cause signaling storms on networks.
If the SPU CPU usage exceeds the SPU CPU alarming threshold, RNCs will enable the flow
control function to discard some RRC connection setup or paging requests. Therefore, the SPU
CPU usage should be less than the SPU CPU alarming threshold.
Figure 3-7 illustrates the procedure for analyzing SPU CPU usage.
Figure 3-7 Procedure for analyzing SPU CPU usage
As shown in Figure 3-7, if the SPU CPU usage exceeds 50%, analyze the causes to prevent a
continuous increase in CPU usage. If this problem is caused by signaling storms, check
whether the system configuration is correct and whether any radio link is interrupted. If this
problem is caused by a traffic increase, add SPU boards.
In addition, check whether SPU subsystem loads are balanced. If they are unbalanced, adjust the
load sharing thresholds or evenly allocate NodeBs to SPU subsystems.
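The balance check can be sketched as follows (a hypothetical helper; the 50% trigger comes from the text above, while the 20-percentage-point spread tolerance is an assumption, not a documented value):

```python
def spu_action(subsystem_usages, busy_threshold=50.0, max_spread=20.0):
    """Suggest a next step from per-subsystem SPU CPU usages (percent).

    busy_threshold mirrors the 50% analysis trigger above; max_spread is
    an assumed imbalance tolerance.
    """
    # Imbalance first: a large spread means NodeBs should be redistributed.
    if max(subsystem_usages) - min(subsystem_usages) > max_spread:
        return "rebalance NodeBs across SPU subsystems"
    # Otherwise, a busy subsystem warrants root-cause analysis.
    if max(subsystem_usages) > busy_threshold:
        return "analyze causes (signaling storm vs. traffic increase)"
    return "no action"
```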
3.3.6 DPU DSP Load and Interface Board CPU Usage Analysis
If the DPU DSP or interface board CPUs are overloaded, the RNC discards some user data.
Figure 3-8 illustrates the procedure for analyzing DPU DSP load and interface board CPU usage.
Figure 3-8 Procedure for analyzing DPU DSP load and interface board CPU usage
If the DPU DSP load exceeds 60% or the interface board CPU usage exceeds 50%, start the
troubleshooting procedure to prevent a continuous increase in load.
If traffic increase or unbalanced transmission resources cause the overload, expand hardware
capacity.
In most cases, PCHs are overloaded because an LA covers too many cells. Re-plan LAs to
resolve the PCH overload.
Figure 3-9 illustrates the procedure for analyzing PCH usage.
Figure 3-9 Procedure for analyzing PCH usage
FACH congestion is less likely to occur when UE state transition is disabled. However, the RNC
usually enables UE state transition to transfer low-traffic services to FACHs. This saves radio
resources but increases traffic on FACHs.
Methods of relieving FACH congestion are as follows:
Shorten the period during which PS services are carried on FACHs to enable fast UE state
transition to the CELL_PCH state or idle mode. In addition, set up RRC connections on DCHs if
DCH resources are sufficient.
Add an SCCPCH to carry FACHs.
Figure 3-10 illustrates the procedure for analyzing FACH usage.
Figure 3-10 Procedure for analyzing FACH usage
4 Metrics Definitions
Congestion-related Metrics

Call Block Rate
Counter: Vs.Call.Block.Rate (customized)
Description: Vs.RRC.Block.Rate + (<RRC.SuccConnEst
<VS.RRC.AttConnEstab.CellFACH>)) x Vs

RRC Congestion Rate
Counter: Vs.RRC.Block.Rate (customized)
Description: (<VS.RRC.Rej.ULPower.Cong> + <VS.RR
<VS.RRC.Rej.DLIUBBand.Cong> + <VS.R
<VS.RRC.Rej.Code.Cong>)/<VS.RRC.AttC

RAB Congestion Rate
Counter: Vs.RAB.Block.Rate (customized)
Description: (<VS.RAB.FailEstabCS.ULPower.Cong> +
<VS.RAB.FailEstabPS.ULPower.Cong> +
<VS.RAB.FailEstabCS.ULCE.Cong> + <V
<VS.RAB.FailEstabCs.DLCE.Cong> + <VS
<VS.RAB.FailEstabCs.Code.Cong> + <VS.
<VS.RAB.FailEstabCS.DLIUBBand.Cong>
<VS.RAB.FailEstabPS.DLIUBBand.Cong>
<VS.RAB.FailEstabPS.ULIUBBand.Cong>
where
MAXTXPOWER is the maximum power configured for the cell

Mean UL RTWP
Counter: VS.MeanRTWP
Description: VS.MeanRTWP

Min UL RTWP
Counter: VS.MinRTWP
Description: VS.MinRTWP

UL ENU Rate
Counter: VS.RAC.UL.EqvUserNum
Description: VS.RAC.UL.EqvUserNum/UlTotalEqUserNum x 100%

Iub Interface Resource Usage-related Metrics

Bandwidth-based Admission Success Rate
Counters: VS.AAL2.CAC.Succ, VS.AAL2.CAC.Att
Description: VS.AAL2.CAC.Succ/VS.AAL2.CAC.Att

DL Iub Usage on Control Plane
Counters: VS.SAALLNK.PVCLAYER.TXBYTES, VS.SAALLNK.PVCLAYER.RXBYTES
Description: DL Iub Usage on Control Plane = VS.SAALLNK.PVCLAYER.TXBYTES x
8/1000/<SP>/TX BW_CFG

UL Iub Usage on Control Plane
Description: UL Iub Usage on Control Plane = VS.SAALLNK.PVCLAYER.RXBYTES x
8/1000/<SP>/RX BW_CFG

DL Iub Usage on User Plane
Counters: ATM transmission: VS.AAL2PATH.PVCLAYER.TXBYTES,
VS.AAL2PATH.PVCLAYER.RXBYTES; IP transmission in non-pooled networking:
VS.IPPATH.IPLAYER.TXBYTES, VS.IPPATH.IPLAYER.RXBYTES; IP transmission in
resource pool networking: VS.IPPOOL.ANI.IPLAYER.TXBYTES,
VS.IPPOOL.ANI.IPLAYER.RXBYTES
Description:
ATM transmission: DL Iub Usage on User Plane = Sum (VS.AAL2PATH.PVCLAYER.TXBYTES)
x 8/<SP>/1000/TX BW_CFG
IP transmission in non-pooled networking: DL Iub Usage on User Plane = Sum
(VS.IPPATH.IPLAYER.TXBYTES) x 8/<SP>/1000/TX BW_CFG
IP transmission in resource pool networking: DL Iub Usage on User Plane =
VS.IPPOOL.ANI.IPLAYER.TXBYTES x 8/<SP>/1000/TX BW_CFG

UL Iub Usage on User Plane
Description:
ATM transmission: UL Iub Usage on User Plane = Sum (VS.AAL2PATH.PVCLAYER.RXBYTES)
x 8/<SP>/1000/RX BW_CFG
IP transmission in non-pooled networking: UL Iub Usage on User Plane = Sum
(VS.IPPATH.IPLAYER.RXBYTES) x 8/<SP>/1000/RX BW_CFG
IP transmission in resource pool networking: UL Iub Usage on User Plane =
VS.IPPOOL.ANI.IPLAYER.RXBYTES x 8/<SP>/1000/RX BW_CFG

IP Connection Setup Success Rate
Counters: VS.ANI.IP.Conn.Estab.Succ, VS.ANI.IP.Conn.Estab.Att
Description: VS.ANI.IP.Conn.Estab.Succ/VS.ANI.IP.Conn.Estab.Att

PCH/FACH Usage-related Metrics

PCH Usage
Counter: VS.UTRAN.AttPaging1
Description: VS.UTRAN.AttPaging1/(<SP> x 60 x 5/0.0

FACH Usage
Counters: VS.CRNCIubBytesFACH.Tx, VS.PCH.Bandwidth.UsageRate, VS.SRBNum.FACH,
VS.OneSRBTTINum.FACH, VS.IndepTRBNum.FACH
Description: If only one SCCPCH is configured, the following formula applies:
FACH Usage = VS.CRNCIubBytesFACH.T
VS.PCH.Bandwidth.UsageRate x 6/7 + (60
6/7))
where
VS.PCH.Bandwidth.UsageRate = <VS.CRN
<SP> x 60.0)
If two SCCPCHs are configured, the follow