CT53331EN53GLA0
The picture shows an overview of a mobile network supporting both 2G and 3G. The core network (CN) is divided into the Circuit Switched and Packet Switched domains. The 3G radio access network, or UTRAN (UMTS Terrestrial Radio Access Network), consists of Node Bs and RNCs. One RNC, together with all the Node Bs it controls, forms an RNS (Radio Network Subsystem).
The IPA2800 platform is used in the RNC and the MGW.
The general functional architecture of the IPA2800 Packet Platform based network elements is shown above. At a high level, a network element consists of switching functions, interface functions, control functions, signal processing functions, and system functions (such as timing and power feed).
Functionality is distributed to a set of functional units capable of accomplishing a special purpose.
These are entities of hardware and software or only hardware.
Operation and Maintenance Unit (OMU) for performing centralized parts of system maintenance
functions; peripherals such as Winchester Disk Drive (WDU) and Floppy Disk Drive (FDU) (i.e.
magneto-optical disk in the ATM Platform) connected via SCSI interface;
Distributed Control Computers (signaling and resource management computers) which consist of
common hardware and system software supplemented with function specific software for control,
protocol processing, management, and maintenance tasks;
Network Interface Units (NIU) for connecting the network element to various types of transmission
systems (e.g. E1 or STM-1); (Please note that actual names of functional units are different, e.g.
NIS1 and NIP1 instead of NIU)
Network Interworking Units (NIWU, IWS1) for connecting the network element to non-ATM
transmission systems (e.g. TDM E1);
ATM Multiplexer (MXU) and ATM Switching Fabric Unit (SFU) for switching both circuit and packet
switched data channels, for connecting signaling channels, as well as for system internal
communications;
AAL2 switching unit (A2SU) performs switching of AAL type 2 packets;
Timing and Hardware Management Bus Unit (TBU) for timing, synchronization and system
maintenance purposes; and
Distributed Signal Processing units (DMCU/TCU) which provide support for e.g. transcoding, macro
diversity combining, data compression, and ciphering.
Units are connected to the SFU either directly (in the case of units with high traffic capacity) or via
the MXU (in the case of units with lower traffic capacity). The order of magnitude of the
interconnection capacity for both cases is shown in the figure.
A more formal way to view the generic functional architecture is the generic block diagram. Note that the naming of functional units differs in the actual network elements based on the platform. Here, more generic terms are used to describe the concepts (for example, NIU, SPU and CU). Such generic terms are marked with an asterisk (*).
To achieve higher reliability, many functional units are redundant: there is a spare unit designated
for one or more active units. There are several ways to manage these spare units. All the
centralized functions of the system are protected in order to guarantee high availability of the
system.
To guarantee high availability, the ATM Switching Fabric and ATM Multiplexer as core functions
of the system are redundant. Power feed, hardware management bus, and timing supply are also
duplicated functions. Hot standby protected units and units that have management or mass memory interfaces are always duplicated. Hard disks and the buses connecting them to control units are always duplicated.
The computing platform provides support for the redundancy. The hardware and software of the system are constantly supervised. When a defect is detected in an active functional unit, a spare unit is set active by an automatic recovery function. The number of spare units and the method of synchronization vary, but redundancy always operates at the software level.
If the spare unit is designated for only one active unit, the software in the unit pair is kept synchronized so that taking the spare into use in fault situations (switchover) is very fast. This is called the 2N redundancy principle, or duplication.
For less strict reliability requirements, the spare unit may also be designated to a group of
functional units. The spare unit can replace any unit in the group. In this case the switchover is a
bit slower to execute, because the spare unit synchronization (warming) is performed as a part of
the switchover procedure. This redundancy principle is called replaceable N+1.
A unit group may be allocated no spare unit at all, if the group acts as a resource pool. The number of units in the pool is selected so that some extra capacity is available. If a few units of the pool are disabled because of faults, the rest of the group can still perform its designated functions. This redundancy principle is called complementary N+1 or load sharing.
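The load-sharing dimensioning described above can be sketched as a simple capacity check; the per-unit capacity and required load below are hypothetical values, chosen only for illustration:

```python
# Sketch of complementary N+1 (load-sharing) dimensioning.
# Hypothetical figures: each pool unit carries 100 units of load.

UNIT_CAPACITY = 100  # assumed capacity of one pool unit

def pool_survives(total_units: int, failed_units: int, required_load: int) -> bool:
    """True if the remaining pool units can still carry the required load."""
    remaining = total_units - failed_units
    return remaining * UNIT_CAPACITY >= required_load

# A pool of 5 units sized for a load of 350 has extra capacity:
print(pool_survives(5, 1, 350))  # True: 4 units still cover the load
print(pool_survives(5, 2, 350))  # False: 3 units are no longer enough
```

The extra capacity in the pool is exactly what allows the group to absorb a faulty unit without any dedicated spare.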
The IPA2800 Packet Platform consists of the Switching Platform Software, the Fault
Tolerant Computing Platform Software, Signal Processing Platform Software, and the
Hardware Platform. In addition, adjunct platforms can be used if needed in an application.
The Switching Platform Software provides common telecom functions (for example,
statistics, routing, and address analysis) as well as generic packet switching/routing
functionality common for several application areas (for example, connection control, traffic
management, ATM network operations and maintenance, and resource management).
The Fault Tolerant Computing Platform Software provides a distributed and fault tolerant
computing environment for the upper platform levels and the applications. It is ideal for use
in implementing flexible, efficient and fault tolerant computing systems. The Computing
Platform Software includes basic computer services as well as system maintenance
services, and provides DX Light and POSIX application interfaces.
The Computing Platform Software is based upon general-purpose computer units with inter-processor communications implemented using ATM virtual connections. The number of computer units can be scaled according to application and network element specific processing capacity requirements.
The Hardware Platform based on standard mechanics provides cost-efficiency through the
use of modular, optimized and standardized solutions that are largely based on
commercially available chipsets.
The Signal Processing Platform Software provides generic services for all signal processing
applications. Digital signal processing (DSP) is needed in providing computation intensive
end-user services, such as speech transcoding, echo cancellation, or macrodiversity
combining.
The Adjunct Platform (NEMU) provides a generic platform for O&M application services and
different NE management applications and tools.
The platform concept and its layer structure should in this context be seen as a modular set of closely related building blocks which provide well-defined services. The structure must not be seen as static and monolithic, as the subset of services needed for an application (a specific network element) can be selected.
The IPA2800 platform introduces a new mechanics concept, with new cabinet, new
sub rack (EMC shielded), and new plug-in unit dimensions. Fan units are needed
inside the cabinet for forced cooling.
The M2000 mechanics comprises the basic mechanics concept based on ETSI 300
119-4 standard and IEC 917 series standards for metric dimensioning of electronic
equipment.
The concept supports the platform architecture, which allows modular scalability of configurations varying from modest to very large capacity. It also allows the performance to be configured using only a few hardware component types.
The mechanics consists of following equipment:
cabinet mechanics
19-slot subrack, its backplane and front plate mechanics
connector and cabling system
cooling equipment.
Dimensions of the cabinet are: width 600 mm, depth 600 mm, and height 1800/2100
mm (based on standard ETS 300 119-2 and IEC 917-2).
The subrack has a height of 300 mm, a depth of 300 mm, and a width of 500 mm. The nominal plug-in unit slot width in the sub rack is 25 mm, which results in 19 slots per sub rack. The basic construction allows dividing a part of a sub rack vertically into two slots, with optional guiding mechanics for the use of half-height plug-in units.
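The slot count above follows directly from the dimensions: 19 slots of 25 mm occupy 475 mm of the 500 mm subrack width, leaving margin for the frame. As a quick check:

```python
# Subrack slot arithmetic from the dimensions given in the text.
SUBRACK_WIDTH_MM = 500
SLOT_PITCH_MM = 25
SLOTS = 19

occupied = SLOTS * SLOT_PITCH_MM
assert occupied <= SUBRACK_WIDTH_MM  # 475 mm fits within the subrack width
print(f"{SLOTS} slots x {SLOT_PITCH_MM} mm = {occupied} mm of {SUBRACK_WIDTH_MM} mm")
```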
The backplane and cabling system provides reliable interconnections between plug-in units. In addition, the backplane provides an EMC shield at the rear side of the sub rack. Common signals are delivered via the backplane, and all other interconnection signals are connected via cabling. This allows backplane modularity and flexibility in different configurations. Because of the flexible cabling and redundancy, it is possible to scale the system to a larger capacity in an active system without shutting down the whole system.
Cabinet power distribution equipment and four sub racks with cooling equipment can
be installed in one cabinet. Openings in the sides of the cabinet behind the subrack
backplanes allow direct horizontal cabling between cabinets.
2N redundancy (duplication) is used when two units are dedicated to a task for which one is enough at any given time. One of the units is always active, that is, in the working state. The other unit is kept in the hot standby state, the spare state.
For example:
2N in RNC: OMU, SFU, MXU, RSMU
2N in BSC: OMU, GSW, MCMU
When a unit is detected faulty, it is taken into the testing state, and the fault location and testing programs are activated. On the basis of the diagnosis, the unit is taken to the separated state if a fault is confirmed, or back into use automatically if no fault is found.
If the spare unit is designated for only one active unit, the software in the spare unit is kept synchronized so that taking it into use in fault situations (switchover) is very fast. The spare unit can be said to be in hot standby. This redundancy principle is called duplication, abbreviated "2N".
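The fault-handling flow above (working, testing, then separated or back into use) can be sketched as a small sequence; the state names follow the text, while the function itself is illustrative:

```python
# Sketch of the 2N fault-handling flow described in the text.

def handle_fault(diagnosis_finds_fault: bool) -> list[str]:
    """Return the state sequence of an active unit detected as faulty."""
    states = ["working"]            # the unit starts as the active one
    states.append("testing")        # fault detected -> diagnostics activated
    if diagnosis_finds_fault:
        states.append("separated")  # fault confirmed -> taken out of use
    else:
        states.append("in use")     # no fault found -> taken back into use
    return states

print(handle_fault(True))   # ['working', 'testing', 'separated']
print(handle_fault(False))  # ['working', 'testing', 'in use']
```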
Replaceable N+1 / N+m redundancy is used when there are just one or a few spare units for a set of N units of a given type. The spare unit is not used by the applications and is not permanently bound to one of the N active units, but can take over the load of any one of them. When a command-initiated changeover for a replaceable N+1 unit is performed, a pair is made up, the spare unit is warmed up to the hot standby state, and the changeover takes place without major interruptions.
When a unit is detected faulty, it is automatically replaced without interruptions to other parts of the system.
For example:
N+1 in RNC: ICSU
N+1 in BSC: BCSU
For example:
RNC: OMS
BSC: ET
MSP is the SDH name for the Multiplex Section Protection scheme, as defined in ITU-T recommendation G.783. In SONET, the equivalent term APS (Automatic Protection Switching) is used instead. Throughout the rest of the document, the term MSP is used for both SDH and SONET. In the basic MSP functionality, the service line is protected using another line, called the protection line: if an error occurs, for instance a loss of signal (LOS), the protection mechanism switches over to the protection line.
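The basic protection behaviour described above can be sketched as follows; the class and method names are illustrative, not taken from G.783:

```python
# Sketch of basic MSP: traffic uses the service (working) line until a
# defect such as loss of signal (LOS) triggers a switchover to the
# protection line.

class MspPair:
    def __init__(self) -> None:
        self.active = "working"  # service line carries traffic by default

    def report_defect(self, line: str, defect: str) -> None:
        """Switch to the protection line when the working line reports a defect."""
        if line == "working" and defect == "LOS":
            self.active = "protection"

pair = MspPair()
pair.report_defect("working", "LOS")
print(pair.active)  # protection
```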
Sub racks
The sub rack mechanics consist of a sub rack frame, a backplane, and a front plate, which together form the electromagnetic shielding for the electronics to fulfill EMC requirements.
The basic construction allows dividing a part of a subrack vertically into two slots with
optional guiding mechanics for the use of half-height plug-in units.
Plug-in unit
The RNC is constructed by using a total of approximately 11 plug-in unit types. The
basic mechanical elements of the plug-in units are PCB, connectors and front plate
mechanics. Front plate mechanics include insertion/extraction levers, fixing screws
and EMC gasket.
External PDH lines are connected to the RNC cabinet using a back interface plug-in
unit which allows modular backplane connections. One back interface plug-in unit
supports one E1 plug-in unit. The back interface plug-in unit is installed in the same
row as the plug-in unit, but at the rear of the cabinet. There are two kinds of connector
panels available:
connector panel with RJ45 connectors for balanced E1/T1 line connection to/from the
cabinet
connector panel with SMB connectors for coaxial E1 line connection to/from the
cabinet
External timing requires a specific connector panel. PANEL 1 in the RNAC cabinet provides the physical interface connectors.
Picture on top:
Cabling cabinet IC183 installed next to IC186. Notice the balanced cabling between
rear transition cards and cabling cabinet patch panels.
Topmost patch panel in IC186 is CPSAL.
Picture on bottom:
BIE1C (SMB connectors) and BIE1T (RJ45 connectors) rear transition cards installed in the SRBI at the rear side of the cabinet.
Acoustic noise emitted by one fully equipped IPA2800 cabinet is 67 dBA (power level) / 61 dBA (pressure level) in normal conditions (4 FTR1 fan trays containing 32 fans). The acoustic noise increases by 3 dB per new cabinet. FTR1 fan trays meet the ETS 300-753 requirements.
The expected lifetime L10 (the time when 10% of the fans have failed) is about 8 years (at +40 °C).
Fan tray replacement is possible in a live system. Without the fan tray, a live system will overheat in approximately 5 minutes.
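The "3 dB per new cabinet" figure above matches the standard rule for combining equal, incoherent noise sources: n identical sources raise the level by 10·log10(n) dB, so a second cabinet adds about 3 dB. A quick sketch:

```python
import math

# Combined sound power level of n identical, incoherent noise sources.
# The single-cabinet level (67 dBA power level) is taken from the text.

def combined_level_dba(single_dba: float, n_cabinets: int) -> float:
    """Level of n identical cabinets, in dBA."""
    return single_dba + 10 * math.log10(n_cabinets)

print(round(combined_level_dba(67, 2), 1))  # 70.0 -> the second cabinet adds ~3 dB
```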
Faulty FTRA fan tray replacement procedure:
-Remove front cable conduit if present (move cables carefully away)
-Unscrew the fan tray from mounting flanges
-Unplug the control cable first from sub rack side and secondly from fan tray side.
-Extract the faulty fan tray from cabinet and insert the spare fan tray unit
-Plug the control cable first in fan tray and secondly to the sub rack side
-Screw the fan tray to the cabinet flanges
-Install cable conduit and cables (if present)
Faulty FTRA-A and FTRA-B replacement procedure:
-Remove fan tray front grill and extract air filter
-Unplug the control cable from fan tray side (rear side of cabinet)
-Open two thumb-screws behind the grill
-Lower and extract the fan assembly by opening the locking latches (drawer assembly and
cable conduit is still mounted to cabinet)
-Insert spare fan assembly and secure latches and thumb-screws
-Plug the control cable
-Insert new air filter and close the fan tray front grill.
The functions are distributed to a set of functional units capable of accomplishing a special
purpose. These are entities of hardware and software. The main functional units of the RNC
are listed below:
The control computers (ICSU and RSMU) consist of common hardware and system software
supplemented with function-specific software.
The AAL2 switching units (A2SU) perform AAL2 switching.
The Data and Macro Diversity Unit (DMCU) performs RNC-related user and control plane L1
and L2 functions.
The Operation and Maintenance Unit (OMU) performs basic system maintenance functions.
The O&M Server (OMS) is responsible for RNC element management tasks. The OMS has
hard disk units for program code and data.
The Magneto-Optical Disk Drive (FDU) is used for loading software locally to the RNC.
The Winchester Disk Unit (WDU) serves as a non-volatile memory for program code and data
for the OMU.
The Timing and Hardware Management Bus Unit (TBU) takes care of timing, synchronization
and system maintenance functions.
The Network Interface Unit (NIU) STM-1/OC-3 (NIS1/NIS1P) provides STM-1 external
interfaces and the means to execute physical layer and ATM layer functionality.
Network interface and processing unit 2x1000Base-T/LX provides Ethernet external interfaces
and the means to execute physical layer and IP layer functionality.
The NIU PDH (NIP1) provides 2 Mbit/s / 1.5 Mbit/s (E1/T1) PDH external interfaces and the means to execute physical layer and ATM layer functionality.
The GPRS Tunneling Protocol Unit (GTPU) performs RNC-related Iu user plane functions
towards the SGSN.
The External Hardware Alarm Unit (EHU) receives external alarms and sends indications of
them as messages to the OMU-located external alarm handler through HMS. Its second
function is to drive the Lamp Panel (EXAU), the cabinet-integrated lamp and other possible
external equipment.
The Multiplexer Unit (MXU) and the Switching Fabric Unit (SFU) are required for switching
both circuit- and packet-switched data channels, for connecting signaling channels and for the
system's internal communication.
The functions are distributed to a set of functional units capable of accomplishing a special
purpose.
These are entities of hardware and software. The main functional units of the RNC are listed
below.
The control computers (ICSU and RSMU) consist of common hardware and system software
supplemented with function-specific software.
The Data and Macro Diversity Unit (DMCU) performs RNC-related user and control plane L1
and L2 functions.
The Operation and Maintenance Unit (OMU) performs basic system maintenance functions.
The Operation and Maintenance Server (OMS) is responsible for RNC element management
tasks.
The OMS has hard disk units for program code and data.
From RU20/RN5.0, standalone OMS is recommended for new RNC2600 deliveries.
Both standalone and integrated OMS are supported in RU20/RN5.0 release.
The Winchester Disk Unit (WDU) serves as a non-volatile memory for program code and data.
The Timing and Hardware Management Bus Unit (TBU) takes care of timing, synchronization and system maintenance functions.
The Network interface and processing unit 8xSTM-1/OC-3 (NPS1/NPS1P) provides STM-1
external interfaces and the means to execute physical layer and ATM/AAL2 layer functionality.
It also terminates the GTP protocol layer in Iu-ps interface.
Network interface and processing unit 2x1000Base-T/LX (NPGE/NPGEP) provides Ethernet
external interfaces and the means to execute physical layer and IP layer functionality.
The External Hardware Alarm Unit (EHU) receives external alarms and sends indications of them as messages to the OMU-located external alarm handler via HMS. Its second function is to drive the lamp panel (EXAU), the cabinet-integrated lamp and other possible external equipment.
The Multiplexer Unit (MXU) and the Switching Fabric Unit (SFU) are required for switching
both circuit and packet-switched data channels, for connecting signaling channels and for the
system's internal communication.
RNC196/48M
The smallest capacity step, RNC196/48M, includes the first cabinet and the plug-in units.
NIS1 and NIS1P share the same unit locations and are mutually exclusive. If redundancy is to be used, RNC196 can be configured to use NIS1 or NIS1P in the case of STM-1 ATM transport, and NPGE or NPGEP in the case of IP transport.
RNC196/85M to 196M
In capacity steps 2 to 5, the capacity is expanded by taking additional sub racks 1 to
4 into use from the second cabinet.
RNC196/300M
The capacity of RNC196/196M is increased to 300Mbit/s (Iub) by removing some
units and replacing them with other functional units.
NIP1 and FDU are removed. Optionally, one NIP1 can be left in the configuration.
The FDU (magneto-optical disk drive) functionality is replaced by an external USB memory stick supported by the OMU. The external USB memory stick can be used for transferring data to or from the RNC. The OMU unit must be upgraded to another hardware variant (CCP18-A) that supports the USB interface.
Additional A2SU, ICSU, MXU, and GTPU units are added.
The number of NIS1/NIS1P units can be increased.
The HDS-A plug-in unit is replaced by another variant (HDS-B) that supports two hard disk units on one card.
If the RAN1754: HSPA optimized configuration is used, the maximum possible R99 data capacity is 67% of the maximum throughput of the configuration defined in the table Capacity and reference call mix model.
RNC450/150
The smallest capacity step, RNC450/150, includes the first cabinet and the plug-in units.
RNC450/300
The capacity can be expanded to 300 Mbit/s by adding another cabinet and the necessary plug-in units, and by connecting the internal cabling between the cabinets.
RNC450/450
The capacity can be expanded to 450 Mbit/s by adding the necessary plug-in units into the two sub racks.
Note: NIS1 and NIS1P share the same unit locations and are mutually exclusive.
If redundancy is to be used, the RNC can be configured to use NIS1 or NIS1P in the case of STM-1 ATM transport, and NPGE or NPGEP in the case of IP transport.
Reference: DN0628405 : RNC capacity extensions and upgrade
RNC2600/step 1
The smallest configuration step, RNC2600/step 1, includes the first cabinet and the plug-in units.
Note that NPS1 and NPS1P / NPGE and NPGEP are mutually exclusive.
RNC2600/step 2
Configuration extension to RNC2600/step 2 is obtained by adding the new cabinet and the necessary plug-in units.
There are more reserved slots for NPGE(P) and NPS1 units than can be installed at the same time; the combined maximum is 14.
RNC2600/step 3
Configuration extension to RNC2600/step 3 is obtained by adding the necessary plug-in units into the two sub racks.
There is a restriction on the number of NPS1 and NPGE units.
There is a total of 28 slots and 16 SFU ports available:
1 NPS1 occupies 2 slots and 1 SFU port
1 NPGE occupies 1 slot and 1 SFU port
As a result, the configuration may exceed neither the number of available slots nor the number of SFU ports.
For PIU details, please check DN70474741: RNC Capacity extension and upgrade.
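The slot and SFU-port budget above amounts to a simple feasibility check; the sketch below uses exactly the limits quoted in the text:

```python
# Feasibility check for RNC2600/step 3 interface units:
# 28 slots and 16 SFU ports in total; 1 NPS1 takes 2 slots + 1 port,
# 1 NPGE takes 1 slot + 1 port.

TOTAL_SLOTS = 28
TOTAL_SFU_PORTS = 16

def config_fits(nps1: int, npge: int) -> bool:
    """True if the NPS1/NPGE mix fits both the slot and SFU-port budgets."""
    slots = 2 * nps1 + npge
    ports = nps1 + npge
    return slots <= TOTAL_SLOTS and ports <= TOTAL_SFU_PORTS

print(config_fits(10, 6))   # True: 26 slots, 16 ports
print(config_fits(14, 4))   # False: 32 slots and 18 ports both exceed the budget
```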
Availability performance calculations describe the system from the availability point of view.
Availability performance values are calculated for the complete system, that is, redundancy principles are taken into account.
In reference to ITU-T Recommendation Q.541, intrinsic unavailability is the unavailability of an exchange (or part of it) due to the exchange (or unit) failure itself, excluding the logistic delay time (for example, travel times, unavailability of spare units, and so on) and planned outages.
The results of the availability performance calculations for the complete system are presented in the Predicted availability performance values.
Some units from earlier releases no longer exist, because
the functionalities are embedded in other units, or
the unit is no longer supported.
The units are:
GTPU, functionalities embedded in NPS1(P) and/or NPGE(P)
A2SU, functionalities embedded in NPS1(P)
RRMU, functionalities distributed to ICSU and OMU/RSMU
NIS1(P), replaced with NPS1(P)
NIP1, PDH interfaces are no longer supported
Duplication (2N)
If the spare unit is designated for only one active unit, the software in the spare unit is kept synchronized so that taking it into use in fault situations (switchover) is very fast. The spare unit can be said to be in hot standby. This redundancy principle is called duplication, abbreviated "2N".
Replacement (N+1)
For less strict reliability requirements, one or more spare units may also be designated to a group of functional units. One spare unit can replace any unit in the group. In this case, the execution of the switchover is a bit slower, because the spare unit synchronization (warming) is performed as a part of the switchover procedure. The spare unit is in cold standby. This redundancy principle is called replacement, abbreviated "N+1".
Load sharing (SN+)
A unit group may be allocated no spare unit at all, if the group acts as a resource pool. The number of units in the pool is selected so that there is a certain amount of extra capacity. If a few units of the pool are disabled because of faults, the rest of the group can still perform its designated functions. This redundancy principle is called load sharing, abbreviated "SN+".
None
Some functional units have no redundancy at all, because a failure in them does not prevent the function or cause any drop in capacity.
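The four redundancy principles above can be summarized as a small lookup table (a sketch only; which principle a given unit uses varies per network element):

```python
# Summary of the redundancy principles described in the text.
REDUNDANCY = {
    "2N":   {"spare": "hot standby",  "switchover": "very fast"},
    "N+1":  {"spare": "cold standby", "switchover": "slower (warming during switchover)"},
    "SN+":  {"spare": "none (pool carries extra capacity)", "switchover": "not needed"},
    "none": {"spare": "none", "switchover": "not needed"},
}

def standby_type(principle: str) -> str:
    """Return the standby arrangement for a redundancy principle."""
    return REDUNDANCY[principle]["spare"]

print(standby_type("2N"))   # hot standby
print(standby_type("N+1"))  # cold standby
```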
SF10
The main function of the SF10 plug-in unit is to switch ATM cells from 16 input ports to 16 output ports. The cell switching uses self-routing, where the cell is forwarded by hardware to the target output port based on the given output port address. The correct cell sequence at the output port is guaranteed. The switching fabric supports spatial multicasting.
The total switching capacity of the SF10 is 10 Gbit/s, with a 16x16 switching fabric whose port interfaces each have a capacity of 622 Mbit/s. The port interfaces are duplicated for redundant multiplexer units and redundant network interface units. The active input is selected inside the SF10.
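The SF10 figures quoted above are mutually consistent: 16 ports at 622 Mbit/s give just under the 10 Gbit/s aggregate:

```python
# SF10 aggregate capacity check from the figures in the text.
PORTS = 16
PORT_RATE_MBIT = 622

aggregate_gbit = PORTS * PORT_RATE_MBIT / 1000
print(aggregate_gbit)  # 9.952 -> roughly the quoted 10 Gbit/s
```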
SF10E
The main function of the SF10E (C110899) plug-in unit is to switch cells from input to
output ports. Within the SF10E switching is protocol independent, meaning that
before the cells are sent to the fabric they are encapsulated inside a special fabric
frame. In the case of APC based legacy port cards, the cells are always ATM cells,
but network processor based units (such as MX1G6) are able to process any
protocol.
SF20H
The main function of the SF20H plug-in unit is to switch cells from input to output
ports. Within the SF20H, switching is protocol independent. This means that before
the cells are sent to the fabric, they are encapsulated inside a special fabric frame.
With a total of 32 ports, the SF20H provides a 2.5 Gbit/s serial switching fabric interface (SFPIF2G5). Several SFPIF2G5 ports can be combined for higher-capacity ports.
MX622
The ATM Multiplexer Plug-in Unit 622 Mbit/s MX622 multiplexes and demultiplexes
ATM cells and performs ATM Layer functions and Traffic Management functions.
MX1G6 and MX1G6-A
The MX1G6 and MX1G6-A are 1.6 Gbit/s ATM multiplexer plug-in units. They
multiplex and demultiplex ATM cells and perform ATM layer and traffic management
functions. The MX1G6 and MX1G6-A enable connecting low speed units to the
switching fabric and improve the use of switching fabric port capacity by multiplexing
traffic from up to twenty tributary units to a single fabric port.
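The multiplexing gain described above can be sketched as a fan-in calculation; the tributary rates below are hypothetical, while the port rate and the limit of twenty tributaries come from the text:

```python
# Sketch of MX1G6 fan-in: up to 20 tributary units share one
# 1.6 Gbit/s switching fabric port.
FABRIC_PORT_GBIT = 1.6
MAX_TRIBUTARIES = 20

def port_utilization(tributary_rates_gbit: list[float]) -> float:
    """Fraction of the fabric port consumed by the multiplexed tributaries."""
    assert len(tributary_rates_gbit) <= MAX_TRIBUTARIES
    return sum(tributary_rates_gbit) / FABRIC_PORT_GBIT

# Hypothetical: twenty low-speed units at 0.05 Gbit/s each.
print(round(port_utilization([0.05] * 20), 3))  # 0.625
```

Without the multiplexer, each of those low-speed units would consume a whole fabric port on its own, which is the utilization improvement the text refers to.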
A2SU
The AAL2 Switching Unit (A2SU) performs switching of AAL type 2 CPS packets between external interfaces and signal processing units. The A2SU operates in the load-sharing redundancy configuration (SN+).
AAL type 2 guarantees bandwidth-efficient transport of information with limited transfer delay in the RAN transmission network.
If Iub, Iu-CS, and Iur have been upgraded to IP, A2SU units are not used.
CCP10
The Control Computer with 800 MHz Pentium III-M processor (CCP10) acts as the
central processing resource in the IPA2800 system computer units.
The CCP10 incorporates an Intel Mobile Pentium III-M microprocessor with 133 MHz SDRAM memory on DIMM modules.
The CCP10 has ATM connections to other plug-in units. This is done via an interface to the ATM multiplexer (MX622-B/-C).
CCP10 has an interface to the Hardware Management System (HMS) which is
implemented in CCP10 as two Hardware Management Nodes (HMN): the HMS
Master Node (HMSM) and HMS Slave Node (HMSS).
CCP10 has a 16 bit wide Ultra3 SCSI bus. It is possible to connect up to 16 devices
into the SCSI bus (including CCP10). CCP10 has two SCSI interfaces because the
mass memory system is 2N redundant. Current Ultra2 SCSI is also supported.
The timing and synchronization of CCP10 is provided by Timing and Synchronization
plug-in unit (TSS3). TSS3 provides 19.44 MHz clock signal for real time clock and
UX ASIC.
There are two V.24/V.28 based serial interfaces for service terminals to provide an
interface for controlling and monitoring CCP10.
CCP10 has two 10 Base-T /100 Base-TX /1000 Base-T Ethernet interfaces to
connect to LAN.
In addition to the interfaces described above, the CCP10 gets the -48 V DC supply and the HMNs' power feed through the backplane connectors.
The CCP10 is assembled into sub racks SRA1 and SRA2. There can be more than two CCP10 units in a sub rack.
OMU has two dedicated hard disk units, which serve as a redundant storage for the
entire system software, the event buffer for intermediate storing of alarms, and the
radio network configuration files.
Backup copies are made onto a USB memory stick that is connected to the CCP18-A front plate. Only memory sticks can be used.
The FDU is the functional unit when using the USB memory stick. No separate configuration in the HW database is needed, because the USB memory stick is an external device. When removing the USB memory stick, set its state to blocked, because the system does not do it automatically.
In previous deliveries, the MDS-(A/B) magneto-optical drive with a SCSI interface was used. The FDU is the functional unit. No separate configuration is needed.
The USB stick is an optional external device that is not automatically delivered. The operator can choose to use the USB memory stick for backup purposes in RN2.2 new deliveries. When the USB memory stick is used (the functional unit is FDU), it is plugged into one CPU card. There is no direct connection to the other CPUs. Only the USB memory stick that is connected to the active OMU can be used. For OMU switchover, two USB memory sticks are needed: one for each OMU.
In previous deliveries, the MDS-A plug-in unit was used (the functional unit is FDU). When the MDS-A is used, the FDU connects to the SCSI 0 bus. It has been left without backup, since it is primarily used for facilitating temporary service operations.
CCP18-A, CCP18-C
The Control Computer with Pentium M 745 processor (CCP18-A and CCP18-C) acts
as the central processing resource in the IPA2800 system computer units.
The CCP18-A/-C incorporates an Intel Pentium M 745 microprocessor with DDR200 SDRAM memory on board.
The CCP18-A and CCP18-C have ATM connections to other plug-in units. This is
done by an interface to the ATM multiplexer (MXU).
The CCP18-A and CCP18-C have an interface to the Hardware Management
System (HMS). CCP18-A has two Hardware Management Nodes (HMN): the HMS
Master Node (HMSM) and HMS Slave Node (HMSS-B). CCP18-C has only the HMS
Slave Node (HMSS-B).
The CCP18-A has a 16 bit wide Ultra3 SCSI bus. It is possible to connect up to 16
devices into the SCSI bus (including CCP18-A). The CCP18-A has two SCSI
interfaces because the mass memory system is 2N redundant. Current Ultra2 SCSI
is also supported. CCP18-C does not have a SCSI bus.
The timing and synchronization of the CCP18-A and CCP18-C is provided by the
Timing and Synchronization plug-in unit (TSS3). TSS3 provides 19.44 MHz clock
signal for real time clock and UX2 FPGA.
There are two V.24/V.28 based serial interfaces for service terminals to provide an
interface for controlling and monitoring the CCP18-A and the CCP18-C.
The CCP18-A and CCP18-C have two 10 Base-T /100 Base-TX /1000 Base-T
Ethernet interfaces to connect to LAN.
In addition to the interfaces described above, the CCP18-A and CCP18-C get the -48 V DC supply and the HMNs' power feed through the backplane connectors.
GTPU
The GTPU performs the RNC-related Iu user plane functions towards the SGSN.
The unit is SN+ redundant.
The unit is responsible for the following tasks:
Iu-PS transport level IP protocol processing and termination
GPRS Tunnelling Protocol User Plane (GTP-U) protocol processing
CCP18-A, CCP18-C
The Control Computer with Pentium M 745 processor (CCP18-A and CCP18-C) acts
as the central processing resource in the IPA2800 system computer units.
The CCP18-A/-C incorporate an Intel Pentium M 745 Microprocessor with DDR200
SDRAM memory on board
The CCP18-A and CCP18-C have ATM connections to other plug-in units. This is
done by an interface to the ATM multiplexer (MXU).
The CCP18-A and CCP18-C have an interface to the Hardware Management
System (HMS). CCP18-A has two Hardware Management Nodes (HMN): the HMS
Master Node (HMSM) and HMS Slave Node (HMSS-B). CCP18-C has only the HMS
Slave Node (HMSS-B).
The CCP18-A has a 16-bit wide Ultra3 SCSI bus. Up to 16 devices (including the
CCP18-A itself) can be connected to the SCSI bus. The CCP18-A has two SCSI
interfaces because the mass memory system is 2N redundant. Ultra2 SCSI is also
supported. The CCP18-C does not have a SCSI bus.
The timing and synchronization of the CCP18-A and CCP18-C are provided by the
Timing and Synchronization plug-in unit (TSS3). The TSS3 provides a 19.44 MHz
clock signal for the real-time clock and the UX2 FPGA.
Two V.24/V.28-based serial interfaces for service terminals provide an interface
for controlling and monitoring the CCP18-A and the CCP18-C.
The CCP18-A and CCP18-C have two 10Base-T/100Base-TX/1000Base-T Ethernet
interfaces for connecting to a LAN.
In addition to the interfaces described above, the CCP18-A and CCP18-C get the
-48 V DC supply and the HMNs' power feed through backplane connectors.
OMS
The Operation and Maintenance Server (OMS) is a computer unit which provides an
open and standard computing platform for applications which do not have strict
real-time requirements. The OMS provides functions related to external O&M
interfaces, for example:
Post-processing of fault management data
Post-processing of performance data
Software upgrade support
These functions include both generic interfacing to the data communication network
(DCN) and application specific functions such as processing of fault and
performance management data, implementation of the network element user
interface and support for configuration management of the network element. This
way the OMS provides easy and flexible interfacing to the network element.
The OMS is implemented on Red Hat Enterprise Linux 4. It contains its own disk
devices, interfaces for a keyboard, mouse, and display for debugging purposes,
and a LAN (10/100/1000 Mbit/s Ethernet) interface. Communication between the OMS
and the rest of the network element uses Ethernet.
The basic services of the OMS are:
An MMI interface implemented over the Telnet protocol, through which the user can
execute the existing MML commands
Alarm transfer from the network element to the network management system (NMS)
A statistical interface for the NMS
MCP18-B
The MCP18-B plug-in unit is used as the management computer unit in network
elements. For OMS, MCP18-B B01 or later must be used.
The MCP18-B is a Pentium M-based, PC-compatible, single-slot computer designed
to interface to the internal standard PCI bus. The Pentium M 745 central
processing unit (CPU) comes in an Intel 479-ball micro-FCBGA form factor. The
Intel chipset (E7501 MCH & P64H2) provides the PCI and PCI-X interfaces.
Integrated PCI peripherals provide dual Ethernet, dual SCSI, SVGA, and USB
interfaces.
Scalability
The RNC OMS is capable of handling the capacity of the RNC2600: 2 800 WCDMA BTSs
and 4 800 cells.
The RNC OMS is capable of handling different types of mass management operations
under the control of NetAct, so that management operations can run in parallel
towards several elements. Mass operations are used when certain management
operations need to be done to a certain group of network elements. Examples of
this are configuration data and software downloads to new base stations.
HDS-B
The HDS-B serves as a non-volatile memory for program code and data in the MGW
and RNC. It connects via the SCSI bus to the OMU and NEMU units.
Operating environment of HDS-B
The HDS-B plug-in unit is used with the OMU and NEMU units. The computer units
serve two 16-bit wide Ultra SCSI buses which connect to the HDS-B through external
shielded back-cables. The HDS-B has two independent SCSI buses for two
computer units. In the case of OMU the SCSI buses pass through the HDS-B and
continue to the other unit of the duplicated pair (OMU only). In the case of NEMU the
SCSI buses pass into the HDS-B and end there. It is possible to connect other SCSI
devices on the same bus. The maximum number of installed SCSI devices, not
counting computer units, is 14.
The HDS-B plug-in unit is connected to the hardware management bus via the bus
interface of the HMSS. The HDS-B has an interface to two HMS transmission lines
via back connectors.
The HDS-B has automatically functioning SCSI bus terminators.
The HMSS has a separate 2N redundant power feed.
The HDS-B also gets the -48 V DC supply and the HMSS power feed through back
connectors.
The NIP1 contains PDH E1/T1/JT1 interfaces with the Inverse Multiplexing for ATM
(IMA) function, which allows flexible grouping of physical links into logical IMA
groups. Normally, the PDH lines are used for connections between the RNC and the
BTSs.
NI16P1A
The NI16P1A plug-in unit implements sixteen PDH E1/T1/JT1-based ATM interfaces.
The NI16P1A supports IMA, that is, several E1/T1/JT1 interfaces can be grouped
into one group that appears as a single interface to the upper protocol layers.
The NI16P1A performs ATM layer processing related to traffic management and
Utopia address embedding. The NI16P1A also provides a reference clock (recovered
from the incoming E1/T1/JT1 lines) for the TSS3 plug-in unit.
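To give a feel for the arithmetic behind IMA grouping, the sketch below estimates the usable cell rate of an IMA group. The 1 920 kbit/s E1 payload figure assumes the ITU-T G.804 cell mapping, and the default IMA frame length of 128 cells (one ICP overhead cell per frame per link) is an assumption taken from the ATM Forum IMA specification.

```python
E1_ATM_PAYLOAD_BPS = 1_920_000  # ATM cells in timeslots 1-15 and 17-31 (G.804)
ATM_CELL_BITS = 53 * 8          # one 53-octet ATM cell

def ima_group_cell_rate(links: int, ima_frame_len: int = 128) -> float:
    """Approximate usable cell rate (cells/s) of an IMA group.

    One ICP (IMA Control Protocol) cell is inserted per IMA frame on each
    link, so a fraction 1/ima_frame_len of the capacity is overhead.
    """
    cells_per_link = E1_ATM_PAYLOAD_BPS / ATM_CELL_BITS  # ~4528 cells/s per E1
    return links * cells_per_link * (ima_frame_len - 1) / ima_frame_len
```

For example, a four-link IMA group yields roughly 18 000 cells/s towards the upper layers, slightly less than four bare E1 ATM interfaces because of the ICP overhead.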
NI4S1-B
The main functions of the NI4S1-B plug-in unit are the following:
implementing the adaptation between SDH transport technology and ATM
performing ATM layer functions
implementing the interface to the ATM Switch Fabric.
The NI4S1-B can also be used to implement four SONET OC-3 interfaces.
Operating environment of NI4S1-B
NI4S1-B has the following interfaces with its environment:
four interfaces with physical medium
interface with ATM Switch Fabric (SF10)
interface with the Hardware Management System (HMS)
interface with TSS3 or TBUF plug-in unit.
The filter is enclosed in an EMC-tight enclosure. Both supply lines go through
the filter. The supply lines share a common choke but have X- and Y-capacitors of
their own. In both supply branches, large electrolytic capacitors are located on
two dedicated capacitor boards. After the electrolytic capacitors, each supply
branch is divided further into five sub-branches, which are protected by fast
glass-tube fuses and then fed to the backplane. One of the sub-branches in each
supply branch is for the fan tray.
PD20
The Power Distribution Unit 20 A (PD20 plug-in unit) is a sub rack level power
distribution unit in the IPA2800 network element power feed system. The PD20
provides filtering, power distribution and fan control functions.
PD30
The Power Distribution Unit 30 (PD30 plug-in unit) is a sub rack level power
distribution unit in the Nokia IPA2800 Network Elements power feed system. The
PD30 provides filtering, power distribution, and fan control functions. In addition, the
PD30 also provides over-current and overvoltage protection, and power dropout
stretching.
The PD30 incorporates reverse battery voltage protection against accidental
installation errors. It resumes operation once the correct battery voltage
polarity and voltage level have been applied to it.
The use of the older fan tray models FTR1 and FTRA damages the equipment. Use
only the FTRA-A and FTRA-B fan trays with the PD30.
The CPD120-A allows either grounding the 0 V lead from the battery or using a
separate grounding cable to achieve a floating battery voltage. From the
CPD120-A unit, the voltage is fed through the subrack-specific PD30 power
distribution plug-in units, which have individual 10 A fuses for each outgoing
distribution line, to the other plug-in units in the same manner as to the
cabinets, that is, through two mutually redundant supply lines. The two
distribution lines are finally combined in the power converter blocks of the
individual plug-in units, which adapt the voltage so that it is appropriate for
the plug-in unit components.
The new clock plug-in unit variant TSS3-A is introduced in RN5.0-based RNC2600
deliveries. However, the TSS3-A can be used with RN4.0 software if the Bridge
HMX1BNGX version inside the plug-in unit is newer than the one in the RN4.0
release package.
Due to 2N redundancy, a mixed configuration of TSS3 and TSS3-A is not allowed.
The same variant must be used for both clock units in each RNC.
The TSS3/-A units generate the clock signals necessary for synchronizing the
functions of the RNC. Normally, the TSS3/-A operates in synchronous mode, that
is, it receives an input timing reference signal from an upper level of the
network and adjusts its local oscillator to the long-term mean value of the
reference by filtering jitter and wander from the timing signal. It transmits the
reference to the plug-in units in the same subrack (all plug-in units are
equipped with on-board PLL blocks), as well as to the TBUF units, which
distribute the signals to units not directly fed by the TSS3/-A units.
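The jitter and wander filtering described above can be pictured as a low-pass filter on the recovered reference. The first-order filter below is an illustrative simplification of a real PLL loop filter, not the TSS3 implementation; the jitter amplitude and filter coefficient are made-up values.

```python
import random

def lowpass(samples, alpha=0.01):
    """First-order (exponential) low-pass filter: each output moves only a
    small step alpha towards the input, so fast jitter averages out while the
    long-term mean of the reference is tracked."""
    state = samples[0]
    out = []
    for s in samples:
        state += alpha * (s - state)
        out.append(state)
    return out

# A 2048 kHz reference (in Hz) with additive Gaussian jitter, for illustration
random.seed(1)
noisy = [2_048_000 + random.gauss(0, 50) for _ in range(5_000)]
smoothed = lowpass(noisy)
```

A smaller alpha corresponds to a narrower loop bandwidth: the filter rejects more jitter but follows genuine wander of the reference more slowly, which is the same trade-off a hardware PLL makes.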
The TSS3/-A has inputs both for synchronization references from other network
elements (via the network interfaces) and for references from external sources
(the options are 2048 kbit/s, 2048 kHz, 64+8 kHz, 1544 kHz, or 1544 kbit/s
(TSS3-A)). The TSS3-A input is 5 V tolerant.
If all synchronization references are lost, the TSS3/-A can operate in
plesiochronous mode, that is, it independently generates the synchronization
reference for the units in the network element.
TSS3/-As are also involved in the functioning of the HMS bus. They convey HMS
messages through the HMS bridge node to the HMS master node. Each OMU has
one master node.
The TSS3-A is designed to conform to the ITU-T G.813 and G.703 and Bellcore
GR-1244 recommendations.
Synchronization
Usually the distribution of synchronization references for RAN NEs (BTS, RNC) is
based on a master-slave architecture, where the transport network is used for
carrying the synchronization references. In particular, this is the case for base
stations.
In a master-slave synchronization architecture, a synchronization reference traceable
to the Primary Reference Clock (PRC) is carried via the transport network to RAN
NEs. Traceability to the PRC means that the synchronization reference originates
from a timing source of PRC quality. The characteristics of primary reference clocks
are specified in ITU-T Recommendation G.811 [8].
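To put the G.811 figure in perspective, the long-term fractional frequency accuracy required of a PRC is 1×10⁻¹¹. The worked example below shows the worst-case time error such a constant offset accumulates; the accuracy value is the G.811 requirement, while the time intervals are arbitrary.

```python
PRC_ACCURACY = 1e-11  # ITU-T G.811 long-term fractional frequency accuracy

def accumulated_time_error_ns(fractional_offset: float, seconds: float) -> float:
    """Worst-case time error (in ns) for a clock with a constant
    fractional frequency offset running for the given duration."""
    return fractional_offset * seconds * 1e9

per_day = accumulated_time_error_ns(PRC_ACCURACY, 86_400)  # about 864 ns per day
```

In other words, a PRC-quality source drifts by less than a microsecond per day, which is why a PRC-traceable reference can anchor an entire synchronization distribution chain.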
The hierarchical master-slave principle is generally used in traditional TDM based
synchronization, where a PRC traceable reference is carried through a
synchronization distribution chain via intermediate nodes to RAN NEs. In RAN NEs
(BTSs shown in the following figure) the timing reference is recovered from the
incoming transport interface (e.g. E1, T1, STM-1). The recovered reference is
frequency locked to the original PRC signal, but due to impairments in the transport
network there is some jitter and wander in the recovered synchronization reference.
RNC has two separate timing and synchronization distribution buses to ensure 2N
redundancy for the internal timing signal distribution. Each bus has its own system
clock (a TSS3/-A plug-in unit), distribution cabling, and timing buffers (TBUF plug-in
units).
The two TSS3/-A units backing up each other are placed in different subracks
(subracks 1 and 2), each of which is powered by a power supply plug-in unit of its
own to ensure redundancy for the power supply. Each of these subracks is also
equipped with a TBUF plug-in unit, which connects the equipment in the sub rack to
the other clock distribution bus. The RNAC subracks 3 and 4 and all RNBC subracks
have two separate TBUF units, which connect to different clock distribution buses by
means of cables of their own.
In order to function correctly, the differential buses need terminations at both
ends of the bus by means of a termination cable. As the network element is
expanded through the capacity steps, the end of the bus, and with it the
termination point, changes. When a new subrack is taken into use in a capacity
step, the termination cabling must always be moved to the new subrack.
Duplicated buses need two terminations each, which means that four terminators
altogether are required in each cabinet for the HMS bus and the timing and
synchronization distribution bus.
The optional peripheral EXAU-A/EXAU provides a visual alarm of the fault
indications of the RNC. The EXAU-A/EXAU unit is located in the equipment room.
The CAIND/-A is located on top of the RNAC cabinet and provides a visual alarm
indicating a network element with a fault.
The Hardware Management System (HMS) provides a duplicated serial bus between
the master node (located in the OMU) and every plug-in unit in the system. The
bus provides a fault-tolerant message transfer facility between the plug-in
units and the HMS master node.
The HMS is used in supporting auto-configuration, collecting fault data from plug-in
units and auxiliary equipment, collecting condition data external to network elements
and setting hardware control signals, such as restart and state control in plug-in
units.
The hardware management system is robust: for example, it is independent of
system timing, and it can read hardware alarms from a plug-in unit without
power. The HMS supports power alarms and a remote power on/off switching
function.
The hardware management system forms a hierarchical network. The duplicated
master network connects the master node with the bridge node of each sub-rack.
The sub-rack level networks connect the bridge node with each plug-in unit in the
sub-rack.
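The hierarchy described above can be sketched as a small tree model. The node names and the upward routing helper below are invented for illustration and do not correspond to real HMS software interfaces.

```python
class HmsNode:
    """One node in the hierarchical HMS network (master, bridge, or slave)."""

    def __init__(self, name: str, parent: "HmsNode | None" = None):
        self.name = name
        self.parent = parent
        self.children: "list[HmsNode]" = []
        if parent is not None:
            parent.children.append(self)

    def route_to_master(self) -> "list[str]":
        """Names of the nodes a message traverses on its way to the master."""
        node, path = self, []
        while node is not None:
            path.append(node.name)
            node = node.parent
        return path

# Master node in the OMU, one bridge node per subrack, slave nodes on plug-in units
master = HmsNode("HMSM")
bridge1 = HmsNode("bridge/subrack-1", master)
slave = HmsNode("HMSS/CCP18", bridge1)
```

A fault report raised at a plug-in unit thus travels slave → bridge → master, mirroring the subrack-level networks feeding the duplicated master network.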