01.09/ EN Standard
February 2010
© 2009-2010 Alcatel-Lucent
All rights reserved.
UNCONTROLLED COPY: The master of this document is stored on an electronic database and is write
protected; it may be altered only by authorized persons. While copies may be printed, it is not recommended.
Viewing of the master electronically ensures access to the current issue. Any hardcopies taken must be
regarded as uncontrolled copies.
ALCATEL-LUCENT CONFIDENTIAL: The information contained in this document is the property of Alcatel-Lucent.
Except as expressly authorized in writing by Alcatel-Lucent, the holder shall keep all information contained
herein confidential, shall disclose the information only to its employees with a need to know, and shall protect
the information from disclosure and dissemination to third parties. Except as expressly authorized in writing
by Alcatel-Lucent, the holder is granted no rights to use the information contained herein. If you have
received this document in error, please notify the sender and destroy it immediately.
Alcatel-Lucent
PUBLICATION HISTORY
March 2008
Issue 01.00 / EN, Draft
- Document creation
March 2008
Issue 01.01 / EN, Draft
- Update after review
July 2008
Issue 01.02 / EN, Preliminary
Engineering information with regard to the following features:
-
September 2008
Issue 01.03 / EN, Preliminary
-
September 2008
Issue 01.04 / EN, Preliminary
-
New section added describing the WMS server failure scenarios and consequences.
The list of 939x Node B models is: 9391, 9392, 9393 and 9396.
UMT/OAM/APP/024291
December 2008
Issue 01.05 / EN, Preliminary
-
Capacity figure restriction added (NPO cluster configuration) with the 15-minute counters feature activated
New Optical Switch Brocade 300 replacing the FC Switch Brocade 200
March 2009
Issue 01.06 / EN, Standard
-
April 2009
Issue 01.07 / EN, Standard
-
August 2009
Issue 01.08 / EN, Standard
-
February 2010
Issue 01.09 / EN, Standard
-
Update on PC RAM requirements for WIPS usage to 4 GB (in the case of an X-large network); the client simultaneous-usage table updated accordingly
Additional information in the backup and restore section, with a table describing the tape drive and server/domain compatibility matrix
Update on HMI server with the new HP PROLIANT DL320 G6 and the support of Windows Server 2003 SP2 (instead of SP1)
TABLE OF CONTENTS
1.
2.
1.2.
NOMENCLATURE ...........................................................................................12
1.3.
SCOPE .........................................................................................................12
1.4.
REFERENCES................................................................................................13
OVERVIEW.................................................................................................................. 14
2.1.
3.
4.
3.2.
3.3.
3.4.
3.5.
3.6.
CAPACITY .....................................................................................................23
3.7.
3.8.
OVERVIEW....................................................................................................46
4.2.
4.3.
4.4.
4.5.
4.6.
4.7.
4.8.
4.9.
4.10.
4.11.
4.12.
4.13.
5.
WMS CLIENTS AND SERVER OF CLIENTS ENGINEERING
CONSIDERATIONS ............................................................................................................. 54
6.
7.
5.1.
5.2.
5.3.
5.4.
OVERVIEW....................................................................................................57
6.2.
6.3.
6.4.
6.5.
6.6.
6.7.
NETWORK ARCHITECTURE.................................................................................. 79
7.1.
7.2.
REFERENCE ARCHITECTURE.................................................................79
7.3.
7.4.
7.5.
7.6.
7.7.
7.8.
7.9.
7.10.
8.
9.
8.2.
8.3.
8.4.
9.2.
9.3.
9.4.
9.5.
9.6.
9.7.
9.8.
9.9.
SSL...........................................................................................................109
9.10.
RADIUS/IPSEC ....................................................................................110
9.11.
9.12.
SNMP.................................................................................................112
9.13.
IP FILTERING........................................................................................112
9.14.
FIREWALL ............................................................................................112
9.15.
10.
10.1.
10.2.
COMPATIBILITIES ..................................................................................114
10.3.
10.4.
10.5.
10.6.
10.7.
10.8.
10.9.
11.
11.1.
OVERVIEW ...........................................................................................119
11.2.
11.3.
CAPACITY CONSIDERATIONS..................................................................120
11.4.
11.5.
QOS MANAGEMENT...............................................................................123
11.6.
TOPOLOGY GRANULARITIES...................................................................124
11.7.
11.8.
11.9.
11.10.
11.11.
11.12.
12.
MS PORTAL............................................................................................................... 131
12.1.
OVERVIEW ............................................................................................131
12.2.
12.3.
CAPACITY CONSIDERATIONS..................................................................133
12.4.
12.5.
12.6.
13.
13.1.
13.2.
13.3.
13.4.
14.
14.1.
OVERVIEW ...........................................................................................140
14.2.
14.3.
15.
15.1.
15.2.
15.3.
15.4.
15.5.
15.6.
15.7.
15.8.
15.9.
15.10.
15.11.
15.12.
16.
16.1.
16.2.
NE SOFTWARE .....................................................................................152
17.
LIST OF FIGURES
Figure 1 : NSP Overview ................................................................................................................................. 17
Figure 2 : Fault Management Architecture ................................................................................................. 20
Figure 3 : Configuration Management Architecture................................................................................. 20
Figure 4: Performance Management Architecture ................................................................................... 22
Figure 5: Dual Main Server configuration.................................................................................................. 26
Figure 6 : SMG architecture ............................................................................................................................ 28
Figure 7: Alarm Correlation Functional Diagram...................................................................................... 32
Figure 8: Storage Area Network Architecture............................................................................................ 44
Figure 9: 3GPP FM High Level architecture ............................................................................................... 47
Figure 10 : Basic CM/Kernel CM High Level architecture....................................................................... 48
Figure 11 : Bulk CM High Level architecture ............................................................................................. 49
Figure 12 : PM High Level architecture ....................................................................................................... 49
Figure 13 : 3GPP Output Building Block Deployment within a ROC ................................................... 50
Figure 14 : WMS East-West Interface .............................................................................................................. 53
Figure 15 : RAMSES Solution Architectural Diagram ................................................................................... 76
Figure 16 : Reference Architecture ................................................................................................................... 80
Figure 17 : Example of E4900 with System controller and ST6140 connectivity ............................. 87
Figure 18 : M5000 with System controller and ST2540 connectivity ................................................... 88
Figure 19 : Example of NETRA T5440 with System controller connectivity ...................................... 88
Figure 20 : M4000 with System controller and ST2540 connectivity ................................................... 91
Figure 21 : M4000 with System controller and ST2540 connectivity ................................................... 93
Figure 22 : Magnified View of M4000-4 CPU Interface connectivity in Cluster Mode...................... 94
Figure 23 : Subnet Groups in a NPO Cluster ................................................................................................... 95
Figure 24 : NPO Cluster Fibre Channel Switch Connectivity ........................................................................ 96
Figure 25 : NPO Cluster Fibre Channel Switch Redundancy......................................................................... 97
Figure 26 : Terminal Server Connections......................................................................................................... 98
Figure 27 : Recommended Time Synchronization Architecture................................................................... 116
Figure 28 : NPO Architecture.......................................................................................................................... 119
Figure 29 : NPO Cluster Architecture ............................................................................................................ 122
Figure 30 : NPO Backup and restore overview ....................................................................................... 124
Figure 31 : Centralized Backup & Restore architecture........................................................................ 125
Figure 32 : MS-PORTAL architecture......................................................................................................... 131
Figure 33 : WQA Architecture ........................................................................................................................ 137
Figure 34 : WQA Backup & Restore............................................................................................................... 138
Figure 35 : NetworkStations in a 5620 network ............................................................................................. 143
Figure 36 : 7670 Network Management from WMS...................................................................................... 149
LIST OF TABLES
Table 1: WMS Nominal Main Server Capacity............................................................................................ 24
Table 2: WMS legacy Main Server Capacity............................................................................................... 24
Table 3 : WMS failure scenarios and consequences............................................................................... 29
Table 4 : Maximum recommended threshold alarms per server type ................................................. 30
Table 5 : Simultaneous software downloads to Access NE (Nominal machines) ........................... 33
Table 6: Simultaneous software downloads to Access NE (legacy machines)................................ 34
Table 7: Typical software size per Access NE .......................................................................................... 34
Table 8: Supported GPM data granularities............................................................................................... 34
Table 9: Call Trace Type Definitions ............................................................................................................ 37
Table 10: Call Trace Engineering guidelines and daily recommended volumes of data ............... 38
Table 11: Maximum number of standing alarms per hardware type ................................................... 39
Table 12: Tape drive and Domain/Server matrix compatibility ............................................................. 43
Table 13: 3GPP FMBB Specifications.......................................................................................................... 51
Table 14 : 3GPP CM BB Specifications ....................................................................................................... 51
Table 15 : Number of concurrent clients per Main Server type ............................................................ 54
Table 16 : Number of Registered users per ROC ..................................................................................... 54
Table 17 : Sun N240 Hardware Requirements........................................................................................... 58
Table 18 : SUN SPARC ENTERPRISE T5220 Hardware Requirements............................................... 58
Table 19 : Sun V890 Hardware Requirements ........................................................................................... 59
Table 20 : SUN NETRA T5440 Hardware Requirements.......................................................................... 60
Table 21 : SF E4900 Hardware Requirements ........................................................................................... 60
Table 22 : Sun StorEdge 6140 Hardware Requirements......................................................................... 60
Table 23 : SUN ENTERPRISE M5000 Hardware Requirements ............................................................. 61
Table 24 : SF V490 Hardware Requirements.............................................................................................. 62
Table 25 : SUN SPARC ENTERPRISE M4000............................................................................................. 62
Table 26 : Sun Ultra 45 Hardware Requirements...................................................................................... 63
Table 27 : Windows PC Hardware Requirements for WMS.................................................................... 64
Table 28 : Windows PC Hardware Requirements for MS-PORTAL...................................................... 65
Table 29: RAM requirements for client simultaneous usage ................................................................. 67
Table 30 : WQA Hardware Specifications........................................................................................................ 67
Table 31 : VPN Firewall Brick Platform .......................................................................................................... 76
Table 32 : VPN Router Platform ....................................................................................................................... 76
Table 33 : Terminal Server Console Specifications ......................................................................................... 78
Table 34 : Lexmark Printer Hardware Requirements............................................................................... 78
Table 35 : Interface Configuration - Configuration A..................................................................................... 81
Table 36 : Interface Configuration - Configuration B ..................................................................................... 82
Table 37 : Interface Configuration - Configuration C..................................................................................... 82
Table 38 : Interface Configuration - Configuration D..................................................................................... 82
Table 39 : Interface Configuration - Configuration E ..................................................................................... 82
Table 40 : Supported Interface Configurations per server type (Nominal)................................................... 82
Table 41 : Supported Interface Configurations per server type (legacy)....................................................... 83
Table 42 : Interface Naming Convention.......................................................................................................... 83
Table 43 : WMS IP Requirements Summary .................................................................................................. 89
Table 44 : Interface configuration on NPO or MS-Portal............................................................................... 90
Table 45 : T5220/T5440 NPO / MS PORTAL IP Requirements Summary.................................................. 90
Table 46 : Interface configuration on NPO or MS-Portal............................................................................... 91
Table 47 : M4000-2CPU NPO / MS PORTAL IP Requirements Summary................................................. 92
Table 48 : Interface configuration on NPO or MS-Portal............................................................................... 92
Table 49 : Subnet and IP Addressing configuration on NPO or MS-Portal.................................................. 92
Table 50 : M4000-4CPU NPO / MS PORTAL IP Requirements Summary................................................. 94
Table 51 : Protocols used on southbound Interfaces........................................................................................ 98
Table 52 : Bandwidth Requirements for RNC Call Trace (maximum value) .................................... 103
Table 53 : Bandwidth Requirements for RNC CN Observation counters......................................... 103
Table 54 : Maximum number of simultaneous software downloads ................................................. 104
1.
This document details the engineering rules for the WMS Main Server, OAM Server
hardware/software requirements, OAM DCN recommendations, backup and restore, remote access
and other OAM engineering information for WMS.
1.1.
This WMS Engineering guide has been specifically prepared for the following audience:
- Network Engineers
- Installation Engineers
- Network & System Administrators
- Network Architects
1.2.
NOMENCLATURE
<Engineering rule>: The OAM rules (non-negotiable), typically OAM capacity values or IP addressing parameters (subnet, range, etc.).
<System Restrictions>: A system restriction can be a feature that is not applicable to an OAM hardware model.
<Engineering recommendations>: Mainly recommendations related to performance (QoS, capacity, KPI) to get the best out of the network.
<Engineering note>: Can be an option suggestion, or a configuration note that can be operator dependent.
1.3.
SCOPE
Alcatel-Lucent MS-PORTAL including the 9959 MS-NPO (Multi Standard - Network Performance
Optimizer) and/or the 9953 MSSUP (Multi Standard - Supervision Portal)
Throughout the document, these different management components will be referred to by their main
names such as WMS, WPS, etc.
1.4.
REFERENCES
All references for Alcatel-Lucent's WMS can be found in the following Alcatel-Lucent Technical Publications.
-
Alcatel-Lucent 9300 W-CDMA Product Family - Document Collection Overview and Update
Summary (NN-20500-050).
Additional updates and corrections can be found in the OAM Release Notes for the particular release.
For further information on how to obtain these documents, please contact your local Alcatel-Lucent
representative.
2. OVERVIEW
2.1.
WMS focuses on efficiently delivering the foundation on which to deploy and maintain Wireless Internet network resources, deliver services, and account for network and service use by subscribers.
The key functions of the network management layer are described below.
2.1.1.1
FAULT MANAGEMENT
NSP Fault Management tools provide an integrated set of fault surveillance, diagnosis and resolution tools that span the radio access domain as well as the service-enabling platforms and the IP/ATM backbone, giving the operator a single alarm view across the entire network. These tools enable the operator to identify and resolve network- or service-affecting issues quickly and efficiently.
WMS Fault Management functionality for wireless network includes: Alarm Management (real-time
alarm surveillance, delivered as an integral part of the NSP), Historical Fault Management (Historical
Fault Browser) and the Trouble Ticketing Interface.
Also included in WMS Fault Management functionality is the ability to perform alarm filtering, specifically the support of alarm delay and alarm inhibit capabilities on the alarm stream. The ability to modify the alarm severity attribute of the alarm stream also allows operators to optimize their alarm handling.
2.1.1.2
PERFORMANCE MANAGEMENT
WMS Performance Management functionality for wireless networks includes, as a base, near real-time Performance Monitoring, together with collection/mediation and conversion of counters to 3GPP-compliant XML format for use with any 3rd-party Performance Management tools. From OAM06, the Performance Server functionality (previously hosted on a separate Sun server) co-resides on the WMS Main Server.
The WMS performance management tools are designed for viewing and optimizing network element
and service performance across Alcatel-Lucent radio access (UMTS). Performance Management
helps service providers to pinpoint and resolve potential network performance issues before they
become a problem to their end customers.
For Performance Reporting (historical), the NPO (Network Performance Optimizer), a powerful tool, is offered as an option.
To optimize neighbouring cells, the WQA (W-CDMA Quality Analyzer) tool, based on neighbouring-cell Call Traces, is introduced as an option.
Finally, to post-process Call Trace data (CTx), the Radio Frequency Optimizer (RFO), a powerful tool based on years of industry experience, is introduced in OAM06.
2.1.1.3
CONFIGURATION MANAGEMENT
An integrated set of capabilities designed to configure parameters of all network elements within the wireless network is provided as part of WMS. Configuration Management has two aspects. Off-line configuration tools make the most time-consuming configuration activities efficient and effective through pre-integrated assistants for standard configuration activities. On-line configuration is performed via an integrated set of network element-focused configuration tools, accessible directly from the management platform via a context-sensitive launch capability, ensuring network element configuration can be done quickly, easily and with minimal risk of errors. WMS Configuration Management functionality includes off-line and on-line configuration for the radio access network (UMTS), combined with on-line configuration reach-through across the entire network.
2.1.1.4
WMS offers 3GPP OAM standards-compliant interfaces to allow customers' OSSs to manage the Alcatel-Lucent wireless networks. The 3GPP-compliant ITF-N interfaces are based on the 3GPP standards, and the solutions offered include support for the Alarm IRP, the BasicCM IRP and the BulkCM IRP (UMTS Access), as well as support of XML transfer of 3G performance counters. The Alarm IRP allows fault OSSs to receive, through a 3GPP-compliant interface, alarm information from the Alcatel-Lucent wireless networks.
The BasicCM IRP allows the OSS to discover network elements as well as attributes of the network
element. The BulkCM IRP allows the OSSs to bulk provision standards based attributes of the UTRAN
networks.
The support of the XML interface for performance allows performance OSSs to gather performance
statistics from the Alcatel-Lucent wireless networks using standards compliant mechanisms.
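As an illustration of how an OSS might consume the XML performance interface, the sketch below parses a simplified 3GPP-style measurement file. The element names and counter names here are hypothetical simplifications for illustration only; the exact schema for a given release is defined by the 3GPP bulk-PM file format and the product's interface documentation.

```python
# Hedged sketch: extracting counters from a simplified 3GPP-style
# measurement XML file. Element/counter names below are illustrative.
import xml.etree.ElementTree as ET

SAMPLE = """<measCollecFile>
  <measData>
    <measInfo measTypes="attTCHSeizures succTCHSeizures">
      <measValue measObjLdn="RncFunction=1,UtranCell=Cell-42">
        <measResults>234 198</measResults>
      </measValue>
    </measInfo>
  </measData>
</measCollecFile>"""

def parse_counters(xml_text):
    """Return {measured object: {counter name: value}} from the file."""
    root = ET.fromstring(xml_text)
    result = {}
    for info in root.iter("measInfo"):
        names = info.get("measTypes").split()
        for mv in info.iter("measValue"):
            obj = mv.get("measObjLdn")
            values = [int(v) for v in mv.find("measResults").text.split()]
            result[obj] = dict(zip(names, values))
    return result

counters = parse_counters(SAMPLE)
print(counters["RncFunction=1,UtranCell=Cell-42"]["attTCHSeizures"])  # 234
```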
3.
This chapter gives an architectural overview and describes the general engineering rules for the WMS
Main Server.
The Main server is the heart of the Network Management platform for managing the Alcatel-Lucent
Radio access Network.
From OAM06, the Main Server functionality is enhanced to provide Performance Management of the UTRAN network, in addition to Fault, Configuration, User Access and System Management, the Software Repository of the wireless network, and the 3GPP-compliant Itf-N.
The different components of the Main server are as follows:
3.1.
NSP OVERVIEW
Device adapters collect real-time data from the network (either from the network elements
themselves or from the element management systems such as MDM, Access Module) and
translate network element data into a format that the NSP applications can process.
The collected data from the various device adapters is passed to distributed CORBA applications
(building blocks).
These building blocks (SUMBB, FMBB, TUMSBB) process the data and where necessary
summarize it.
This processed data is then provided to client applications. Java-based multi-platform enabled
GUI clients display the processed data.
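The data flow described above (device adapter collects and normalizes, building block summarizes, client displays) can be sketched as a toy pipeline. All class and field names here are invented for illustration; the real components are distributed CORBA servers, not in-process Python objects.

```python
# Toy sketch of the NSP data flow: a DA normalizes raw NE records,
# a summary building block aggregates them, and a client reads the
# per-NE summary (as the NSP GUI reads from SUMBB).
from collections import Counter
from dataclasses import dataclass

@dataclass
class Alarm:
    ne_name: str
    severity: str  # e.g. "critical", "major", "minor"

class DeviceAdapter:
    """Translates raw NE records into objects the building blocks can process."""
    def collect(self, raw_records):
        return [Alarm(r["ne"], r["sev"]) for r in raw_records]

class SummaryBB:
    """Summarizes per-NE alarm counts for display by a GUI client."""
    def __init__(self):
        self.counts = Counter()
    def ingest(self, alarms):
        for a in alarms:
            self.counts[(a.ne_name, a.severity)] += 1
    def summary_for(self, ne_name):
        return {sev: n for (ne, sev), n in self.counts.items() if ne == ne_name}

da = DeviceAdapter()
bb = SummaryBB()
bb.ingest(da.collect([{"ne": "RNC-1", "sev": "major"},
                      {"ne": "RNC-1", "sev": "major"},
                      {"ne": "NodeB-7", "sev": "minor"}]))
print(bb.summary_for("RNC-1"))  # {'major': 2}
```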
[Diagram: device adapters (DAs) collect data from the Network Elements over ASCII, SNMP, CORBA and CMIP; alarm information flows to FMBB, NE discovery and detailed NE/topology information to TUMSBB, and alarm counts and NE information up to SUMBB.]
Figure 1 : NSP Overview
3.1.1.1
The NSP GUI is a Java-based GUI with point-and-click navigation. It provides integrated real-time fault management capabilities, the ability to view OSI node state information for data devices (where supported) and context-sensitive reach-through to underlying EMSs and devices. The NSP GUI also provides application launch, customer-configurable custom commands, nodal discovery of devices, technology-layer filtering (e.g. wireless, switching, IP transport), access controls, network partitioning, and multiple independent views.
NSP provides the ability to launch other applications (e.g. element provisioning) directly from NSP using Application Launch scripts, delivering a single point of access to multiple applications. NSP enables easy, in-context reach-through to underlying Network Element interfaces or Element Management Systems (EMS), via the drop-down menu accessible from each NE's icon.
3.1.1.2
FMBB acts as the common point of contact to provide integrated alarm information for the entire
network. FMBB provides the following fault management interfaces:
- Alarm Log Monitor interface to allow its clients to retrieve a current snapshot of alarms within the system
- Alarm Manager interface to allow clients to monitor alarms on an ongoing basis
- Control interface to allow clients to acknowledge alarms and manually clear alarms
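The three interfaces above can be sketched in miniature. This is an illustrative Python model only (the real interfaces are CORBA IDL, and the names below are invented): a snapshot of active alarms, callback-based monitoring, and acknowledge/clear control.

```python
# Hedged sketch of the three FMBB fault management interfaces.
class FaultManagementBB:
    def __init__(self):
        self._alarms = {}       # alarm_id -> {"acked": bool, "cleared": bool}
        self._listeners = []

    # Alarm Log Monitor interface: current snapshot of active alarms.
    def snapshot(self):
        return {aid: a for aid, a in self._alarms.items() if not a["cleared"]}

    # Alarm Manager interface: ongoing monitoring via callbacks.
    def subscribe(self, callback):
        self._listeners.append(callback)

    def raise_alarm(self, alarm_id):
        self._alarms[alarm_id] = {"acked": False, "cleared": False}
        for cb in self._listeners:
            cb("raise", alarm_id)

    # Control interface: acknowledge and manually clear alarms.
    def acknowledge(self, alarm_id):
        self._alarms[alarm_id]["acked"] = True

    def clear(self, alarm_id):
        self._alarms[alarm_id]["cleared"] = True
        for cb in self._listeners:
            cb("clear", alarm_id)

fm = FaultManagementBB()
events = []
fm.subscribe(lambda kind, aid: events.append((kind, aid)))
fm.raise_alarm("A1")
fm.acknowledge("A1")
fm.clear("A1")
print(len(fm.snapshot()))  # 0 (the only alarm was cleared)
```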
FMBB communicates with application clients via the Object Request Broker (ORB) to service requests for alarm and event information. FMBB also communicates with Device Adaptors (DAs) via the ORB to retrieve the required data requested by the application clients.
FMBB is solely concerned with the current alarms/events for the network, that is, alarm conditions as they occur and those which are still active on the network elements. Other service assurance applications, such as the Historical Fault Browser (HFB), address the requirement for alarm history. There is one FMBB per WMS Main Server.
For scalability, multiple instances of FMBB can be deployed. Typically, a network could be subdivided
into sub-domains. In such a deployment, one instance of FMBB would be responsible for the alarm
information from a single sub-domain.
3.1.1.3
The WMS Topology Unified Modelling Service (TUMS) is used for NE Discovery and Network Layer
Management.
Network Element Discovery is done using an interface between TUMS and the DA. When the DA
discovers new NEs it reports these to the TUMS. TUMS registers this information with the NSP GUI
(via SUMBB) and the NE is available to be added to a Network Layout.
3.1.1.4
The Summary Server (SUMBB) is involved with summarizing fault information passed to it via FMBB,
and NE information passed to it via TUMSBB. The NSP GUI then uses this information to report to the
user.
As well as summarizing alarm information, SUMBB is used to store and process all of the information
that identifies layouts, groups and NEs within NSP. This provides the means to partition NEs into
groups and layouts for different sets of users.
3.2.
The following applications provide the operator with additional service assurance features to manage
their network:
3.2.1.1
Historical Fault Browser (HFB) provides a generic event history capability across WMS managed
network elements. It has a flexible query mechanism allowing users to aggregate selective alarm
history information. Specifically, HFB captures all alarm data for historical analysis, incident reporting,
and customer impact analysis. A Web-based graphical user interface (GUI) provides easy accessibility
to fault information for troubleshooting in Operations Centres or remote locations. The HFB allows the
user to perform the following tasks:
-
HFB retrieves raise alarm and clear alarm events from the network via the WMS (Building Block)
architecture. Historical Fault Browser automatically supports newly added network elements without
any additional configuration required. Alarm events are stored in an Oracle Relational Database
Management System (RDBMS).
HFB Query Interface
In OAM06, a new feature called the HFB query interface is introduced. It generates advanced tabular and graphical reports from the HFB and stores them in a comma-separated values (CSV) plain-text file. Users can download the file from the primary Main Server to create specific reports using standard tools like Excel.
To build a report, users issue one WICL command with appropriate parameters. At a minimum, the command must wrap the SQL statement used to query the result record set, together with the location of the file to be returned for the user to download. The user's WICL commands are reassembled into one or more pure Oracle PL/SQL statements, which are passed through the WICL engine to a Shell/Tcl script. The script launches SQL*Plus and executes the SQL statements within a predefined procedure. Finally, the procedure saves the query result to a CSV-formatted data file at the location denoted by the argument in the WICL command.
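Once the CSV file produced by the HFB query interface has been downloaded from the primary Main Server, it can be post-processed with any standard tool. A minimal sketch with Python's csv module; the column names below are invented for illustration and would in practice depend on the SQL statement in the WICL command.

```python
# Hedged sketch: filtering a downloaded HFB report CSV. Column names
# (ne_name, severity, raise_time) are hypothetical.
import csv
import io

downloaded = io.StringIO(
    "ne_name,severity,raise_time\n"
    "RNC-1,major,2010-02-01 10:00\n"
    "RNC-1,minor,2010-02-01 10:05\n"
    "NodeB-7,major,2010-02-01 10:07\n"
)

# Keep only the major alarms for a simple incident report.
majors = [row for row in csv.DictReader(downloaded) if row["severity"] == "major"]
print(len(majors))  # 2
```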
3.2.1.2
The Trouble Ticketing Interface provides an interface between the WMS software and trouble management software systems. It gives network operators the ability to create trouble reports with complete fault and originator information. Trouble tickets for alarm events raised within WMS are seamlessly managed, directly from WMS and for all network elements managed by WMS, by third-party trouble ticketing systems that support Simple Mail Transfer Protocol (SMTP) interfaces. The Trouble Ticketing Interface provides the ability to:
- Retrieve all existing open trouble tickets and their relationship to the network alarm selected when the ticket was created
- Request the creation of a trouble ticket, associating a unique network alarm as a related object to the created ticket
- Register and receive notification when a creation request has been completed
The Trouble Ticketing Interface accepts responses from the trouble management systems, allowing bi-directional, inter-working communication between the two. This means that the trouble ticket identifier assigned by the third-party trouble management system is tagged to the WMS alarm object and displayed in the Alarm Manager. Alarms that are cleared in WMS are forwarded to the trouble ticketing system using the previously assigned identifier, allowing the alarm to be properly cleared in the trouble ticketing system. This bi-directional capability thus resolves the time-consuming and error-prone process of manually synchronizing the two systems.
The Trouble Ticketing application provides inter-working with the following trouble management
systems:
- Clarify's Clear Support Trouble Management system
- Remedy's Action Request System
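Since the creation requests travel over SMTP, they are ordinary mail messages carrying the alarm and originator information. A minimal sketch of how such a request could be assembled is shown below; the addresses, subject layout and body fields are invented for illustration, as the real message format is defined by the third-party trouble ticketing system.

```python
# Sketch of an SMTP-based trouble-ticket creation request. All field
# names, addresses and the subject/body layout are hypothetical.
from email.message import EmailMessage

def build_ticket_request(alarm_id, ne_name, severity, description):
    """Build a creation-request mail associating one network alarm."""
    msg = EmailMessage()
    msg["From"] = "wms@operator.example"
    msg["To"] = "tickets@remedy.example"
    msg["Subject"] = f"CREATE TICKET alarm={alarm_id} severity={severity}"
    msg.set_content(
        f"Network element: {ne_name}\n"
        f"Related alarm:   {alarm_id}\n"
        f"Description:     {description}\n"
    )
    return msg

msg = build_ticket_request("ALM-0042", "RNC-3", "Major", "Loss of Iub link")
# smtplib.SMTP(host).send_message(msg) would submit it to the SMTP gateway
```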
3.3.
This section gives a high-level architecture overview of the fault and configuration management within
WMS.
[Figure: High-level fault and configuration management architecture - GUI clients and the NSP at the Network Management Layer, with the 3GPP FM BB, 3GPP CM BB and TUMS BB interfaces to the OSS; the DA Layer (Access DA); the EMS Layer (Access Object Model Manager, Multiservice Data Manager (MDM), WPS and their GUIs); and the Network Element Layer (Access network elements and other MSS devices) over the IP and ATM backbones.]
The following sections describe the tools used within the components:
3.3.1 ACCESS-DA
The Access Object Model Manager portion of the Access Modules sends fault information to the
Access DA. The Access DA ensures the OAM facilities of the Access network by providing the
following functionalities for fault management:
- Receive and store the notifications from the NEs in the access network
- Convert the notifications into alarms
- Transmit the alarms to the GUI
The Access DA also receives fault information from the RNC I-Node via MDM APIs.
3.4.
SRS FUNCTIONALITY
The Software Repository Server (SRS) is used to store the software installed on the wireless network. This server contains software in a format ready to be used by all the installation tools. The software is obtained from a web server, by e-mail or on CD-ROM.
The WMS software tar files are available from the Alcatel-Lucent web site (e-delivery), by e-mail or on CD, in compressed format. The third-party tools supported for compressing these files are gzip (extension .gz), compress (extension .Z) and zip (extension .zip).
There is only one SRS per ROC, located on the WMS Main Server; an SRS can also be shared by several ROCs. The SRS functionality on the WMS Main Server covers WMS load patches and Access NE software loads. The SRS contains dedicated software accessible by any web browser. This tool helps the end-user install the delivery files at the right location on the SRS.
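The three compressed delivery formats above each call for a different unpacking tool. The sketch below shows one way such a dispatch could look; the file names and command strings are illustrative assumptions, not the SRS tool's actual behaviour.

```python
# Minimal sketch: pick the decompression tool for a delivery file from
# the extensions listed above (.gz, .Z, .zip). File names are examples.
import os

DECOMPRESSORS = {
    ".gz": "gzip -d",      # gzip-compressed tar file
    ".Z": "uncompress",    # compress-compressed tar file
    ".zip": "unzip",       # zip archive
}

def decompress_command(filename):
    """Return the shell command that unpacks a delivery file, or None."""
    ext = os.path.splitext(filename)[1]
    tool = DECOMPRESSORS.get(ext)
    return f"{tool} {filename}" if tool else None
```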
3.5.
[Figure: Performance management architecture - the APM and MDP collectors on the Main Server gather data from the RNC, Node B and MSS-based network elements; the ADI and PDI mediate it into the 3GPP XML interface, from which NPO retrieves the files by FTP pull from the Main Server.]
3.5.1.1 APM
The Access Performance Manager (APM) is the Access collector. It collects the raw performance data from the access network elements (RNC and Node B) using FTP.
3.5.1.2 MDP
The Management Data Provider (MDP) is the collector for the Multiservice Switch devices (i.e. those without any wireless-specific software). It retrieves the Multiservice Switch counters onto the WMS Main Server.
3.5.2.1 ADI
The ADI is the interface providing Access performance and configuration data to the performance reporting application. The ADI mediates counters and call trace data from the devices' native format into XML files. It converts the raw data to the XML file format in the 3GPP XML interface directory, and aggregates the supported performance data into hourly XML files, which are also placed in the 3GPP XML interface directory.
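The hourly roll-up the ADI performs can be pictured as summing the finer-grained samples per counter per hour. The sketch below uses a simplified in-memory data layout and an assumed counter name; the real input and output are 3GPP XML observation files.

```python
# Sketch of time aggregation: 15-minute counter samples rolled up into
# hourly totals. The tuple layout and counter name are assumptions.
from collections import defaultdict
from datetime import datetime

def aggregate_hourly(samples):
    """samples: list of (timestamp, counter_name, value) tuples."""
    hourly = defaultdict(int)
    for ts, counter, value in samples:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        hourly[(hour, counter)] += value   # sum all samples of that hour
    return dict(hourly)

samples = [
    (datetime(2009, 4, 1, 10, 0), "RAB.AttEstab", 12),
    (datetime(2009, 4, 1, 10, 15), "RAB.AttEstab", 8),
    (datetime(2009, 4, 1, 10, 30), "RAB.AttEstab", 5),
    (datetime(2009, 4, 1, 11, 0), "RAB.AttEstab", 7),
]
hourly = aggregate_hourly(samples)
```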
3.5.2.2 MDP
The CC files collected from the Multiservice Switch based devices are converted into BDF files. After conversion, the BDF files are further processed by the PDI.
3.5.2.3 PDI
The PDI converts the files from Multiservice Switch based devices to an XML format. The PDI does not perform time-based counter aggregation. However, new functionality on the PDI supports the merging of the multiple files which a Multiservice Switch shelf can generate within a single 15-minute period.
3.5.2.4
All XML data interfaces support XML compression. When compression is used, files have an added gzip extension. Compression is recommended to increase storage time on the WMS Server as well as to lower bandwidth requirements for transfers to an external OSS. The external OSS must be compatible with, or have a mechanism to decompress, the XML files.
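The scheme above (gzip-compressed XML with an added .gz extension, decompressed on the OSS side) can be demonstrated in a few lines. The file name is a hypothetical example.

```python
# Sketch of the XML compression scheme: files are gzip-compressed and
# gain a .gz extension; the receiving OSS must decompress them.
import gzip

xml = b"<measCollecFile><measData>...</measData></measCollecFile>"

with gzip.open("A20090401.xml.gz", "wb") as f:   # hypothetical file name
    f.write(xml)

with gzip.open("A20090401.xml.gz", "rb") as f:   # OSS-side decompression
    restored = f.read()
```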
3.6.
CAPACITY
Hardware Platform                                   RNC   NODE B (max 3G cells)
M5000 (4 CPU) + 1*ST2540 & expansion tray ST2501    50    4000 (24000)
NETRA T5440 (2 CPU)                                 20    2000 (12000)
T5220 (1 CPU)                                       7     700 (4200)
-                                                   3     150 (900)

Hardware Platform      RNC   NODE B (max 3G cells)
SF4900 (12 CPU)        30    3000 (18000)
SF4900 (8 CPU)         20    2000 (12000)
SF4800 (12 CPU)        24    2400 (14400)
SF4800 (8 CPU)         16    1600 (9600)
SF v890 (4 CPU)        -     700 (4200)
SF v880 (4 CPU)        -     550 (3300)
N240 (2 CPU)           -     150 (900)
Engineering Restriction: Features restriction on the SF v250, Sun N240 and SF v880
Three features are currently restricted on the SF v250, Sun N240 and SF v880 due to the 8 GB RAM limitation. These features are:
- Alarm Correlation
- WMS East-West Interface
- Extended Interface for integration into MS-Portal
Note: The list of 939x Node B models is: 9391, 9392, 9393 and 9396.
of the NEs themselves. Ideally the planning of a Dual Main server deployment will occur during the
CIQ data fill process.
[Figure: Dual Main Server deployment - WMS clients, the HFB and the OSS 3GPP interface connect to the primary Main Server (SUMBB, Security, FMBB, TUMSBB); each Main Server runs its own FMBB, TUMSBB, data collection and XML converter, and Access DA, each managing a subset of the RNCs (RNC 1-4) and their Node Bs and producing XML observation files.]
Customers that deploy a dual Main server configuration should give preference to the secondary
server when deploying NEs and workload across the two Main servers. The number of clients must
still be balanced across the servers.
Engineering Note: Mixed server configurations
A mixed configuration may be required to address a capacity extension scenario by adding a secondary main server based on the nominal hardware platform.
It is mainly applicable between the same hardware platforms, and it is mandatory to keep the primary main server model superior to the secondary one to support the module distribution.
For a ROC footprint with a legacy hardware platform (SF E4900 or SF V890), a mixed configuration with a nominal hardware platform (M5000 or NETRA T5440) is not supported.
The Performance Management application collects observation files individually on the primary and secondary main servers; the files are then transferred by FTP to the relevant OSS (including NPO) for post-processing.
Nature of Impact          Comments
Loss of all WMS Clients   Supervision and operation are no longer available from any WMS Client; GUIs are not available (SUMBB down)
Loss of FM supervision    -
WICL                      WICL not available
-                         CM not available
Hardware Platform         Maximum number of threshold alarms
SF v250, N240, SE T5220   750
SF v880                   1000
-                         1500
In the case of a ROC composed of 2 main servers, the numbers in this table can be assumed to apply to each server (primary or secondary).
In assessing the number of alarm events that a threshold can generate, the number of instances that each threshold applies to needs to be considered (for example, a single FDDCell threshold can have thousands of instances). It is recommended that the threshold definitions be tested (or simulated) against actual counter values prior to implementation on the server, to ensure that they are well defined and do not produce an excess of alarms. When assessing worst-case conditions and threshold alarm rates, keep in mind that the threshold evaluation was probably done over a network in normal running condition; some network conditions could increase rates beyond what was measured with sample data.
When implementing thresholds, it is recommended to take a progressive approach to setting the threshold values (i.e. setting them initially to a threshold crossing value which generates few alarms, and then adjusting the crossing value incrementally over a longer period of time). Also, for UMTS Access, the hysteresis capability of the thresholding feature can be useful, especially when the threshold crossing value is somewhat close to the normal average value of the counter or metric against which the threshold is defined.
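The hysteresis behaviour described above can be sketched as a simple state machine: the alarm is raised when the value crosses the raise threshold and only cleared once the value falls back below a lower clear threshold, which avoids alarm flapping when the counter hovers near the threshold. The values below are illustrative.

```python
# Sketch of threshold evaluation with hysteresis. Raise/clear values and
# the sample series are invented for illustration.
def evaluate(samples, raise_at, clear_at):
    """Return the list of (index, event) transitions for one instance."""
    events, alarmed = [], False
    for i, value in enumerate(samples):
        if not alarmed and value >= raise_at:
            events.append((i, "raise"))
            alarmed = True
        elif alarmed and value < clear_at:
            events.append((i, "clear"))
            alarmed = False
    return events

# Counter hovering around 100: with hysteresis (raise at 100, clear at
# 90) the sequence produces a single raise/clear pair instead of flapping.
events = evaluate([80, 101, 99, 102, 95, 85], raise_at=100, clear_at=90)
```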
Some examples for the evaluation of the impact of the use of this feature are given below and are
specific to UMTS Access.
Case Example 1:
An operator wants to define 3 different thresholds against some specific FDDCell counters and 5 other thresholds against base RNC counters. The network is composed of 1500 Node Bs with 3 cells each and 15 RNCs (100 Node Bs per RNC). The concern is whether these thresholds could have an impact on the main server.
Assessment No. 1: worst case with flood alarms
A very extreme worst-case scenario would be for all the threshold instances to raise a maximum number of threshold alarms simultaneously (in one evaluation period). In this particular case, the FDDCell thresholds would reach their limit of 300 alarm events per RNC and would all be replaced by 1 flood alarm.
The 5 RNC thresholds cannot generate more than 15 alarms each (15 RNCs total), and the 3 FDDCell thresholds would each generate only 1 flood alarm per RNC. So in this worst-case analysis, these definitions would generate a burst of 15 RNCs x 8 alarms = 120 alarms.
Assessment No. 2: worst case without flood alarms
This assessment shows a worst-case analysis based on scenarios which generate the maximum number of alarms (in one granularity period) without generating a flood alarm. In this case, only the FDDCell thresholds can reach a possible amount of 299 alarm raise instances per threshold in one interval. Since there are 15 RNCs and 3 FDDCell thresholds, such a worst-case scenario would yield 15 (RNCs) x 3 (thresholds) x 299 (alarms) = 13455 alarm raise events! This far exceeds the recommended limit for any server type (the impact of such a burst would be that other alarms from NEs could be delayed by many minutes). This example shows that the best way to do this type of assessment is the technique used in Case Example 2 below, based on probabilities rather than on worst-case scenarios.
We continue this case assessment assuming that a more detailed study of the nature of the alarms generated by these particular threshold definitions has shown that it is practically impossible for the 3 FDDCell thresholds to generate a high number of alarms on more than 1 RNC at any point in time. The maximum number of threshold alarms which could be raised in one period then becomes 1 RNC x 3 thresholds x 299 alarms = 897 alarms, a number which can be managed on servers of type 890, 4800 and 4900.
Case Example 2: (recommended assessment methodology)
An operator is interested in implementing many thresholds on a series of FDDCell based counters on a SF 4800 based ROC which is managing 6000 cells. The operator sets the threshold crossing values in such a way that, under normal conditions, only 0.1% of components (cells) have a threshold alarm raised against them. To be safe, the operator assumes that in some extreme conditions this number can increase 20 times (to 2%). It has been observed that when a threshold alarm is raised it normally stays raised for 2 intervals. It has also been observed that threshold crossings are statistically independent from one cell to another and more or less uniformly distributed over time (this keeps the example simple).
In this case, using the assumed worst-case value of 2% of 6000 cells, at any point in time 120 cells are in an alarmed state for each threshold. With an average hold time of 2 periods, these alarms produce 120 raise events and 120 clear events per 2 periods, i.e. 120 alarm events per period. The maximum recommended value for the number of threshold alarms for a SF 4800 main server is 1000. We could therefore support 8 of these thresholds, a number which is below the maximum number of thresholds which can be applied to the FDDCell counter group.
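The arithmetic of the two case examples above can be collected in one place, which also makes it easy to re-run the assessment with different network sizes:

```python
# The alarm-burst arithmetic from Case Examples 1 and 2 above.
def burst_with_flood(rncs, rnc_thresholds, cell_thresholds):
    # Each FDDCell threshold collapses to 1 flood alarm per RNC; each
    # RNC threshold raises at most 1 alarm per RNC.
    return rncs * (rnc_thresholds + cell_thresholds)

def burst_without_flood(rncs, cell_thresholds, flood_limit=300):
    # Up to flood_limit - 1 raise events per threshold and per RNC.
    return rncs * cell_thresholds * (flood_limit - 1)

assert burst_with_flood(15, 5, 3) == 120       # Case 1, assessment 1
assert burst_without_flood(15, 3) == 13455     # Case 1, assessment 2
assert burst_without_flood(1, 3) == 897        # refined: 1 RNC at a time

# Case 2: 2% of 6000 cells alarmed, alarms held for 2 periods.
cells_alarmed = int(6000 * 0.02)                         # 120 cells
events_per_period = (cells_alarmed + cells_alarmed) // 2  # raise + clear
max_thresholds = 1000 // events_per_period                # SF 4800 budget
```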
[Figure: Alarm correlation architecture - UTRAN alarms and events flow from the Node B and RNC through the UTRAN EMS alarm management (EMS layer) up to the WMS platform alarm management (NMS layer); the Correlation Asset (CA) engine applies predefined UTRAN rules against the UTRAN topology links (extracted from a CM XML snapshot by the topology extract script) and updates the alarms with correlation fields.]
CorrelationGroupId: Correlation group identifier. This field is a string with a maximum length of 10 characters (set to "-" at GUI level if not correlated).
CorrelationAlarmType: Primary Main, Main or Symptom (set to "-" at GUI level if not correlated).
CorrelationGroupCount: Number of alarms in the group (Main and Symptom alarms); the Primary Main alarm is not counted. This value is only available on the Primary Main alarm (set to "-" at GUI level if not correlated).
Alarm correlation rule groups are predefined, i.e. the predefined rule groups cannot be changed.
Topology changes during the day (such as Node B re-parenting or Node B addition) are not automatically taken into account by the alarm correlation feature. The user must either launch the topology extractor script manually or wait for the automatic extract during the next night.
There are no predefined rules for POC and Transport Node 7670.
Hardware Platform      Node B   RNC
SE M5000 (8-CPU)       48       6
SE M5000 (4-CPU)       32       4
NETRA T5240 (2-CPU)    32       4
SE T5220               8        1

Hardware Platform      Node B   RNC
SF E4900 (12-CPU)      48       6
SF4800 (12-CPU)        24       3
SF E4900 (8-CPU)       32       4
SF4800 (8-CPU)         16       2
SF V890 (4-CPU)        16       2
SF V880 (4-CPU)        8        1
SF V250/N240 (2-CPU)   4        1
The table below shows the data granularities supported by this release, used to determine the storage and capacity considerations of the server. The data granularity is the rate at which performance counters are generated by a network element, and is usually the rate at which performance data can be transferred from the network element to the server.
Network Element Type   Granularity (Minutes)
NODE B                 15, 60
RNC                    15, 30, 60
3.6.8.2
In OAM06, the RNC introduces a counters list mechanism. Counters list management allows users to specify the list of counters to be activated on the RNC, with the collection and mediation layer of WMS aligned with the counters list. The counters list is defined through csv-formatted ASCII files, and activation of the counters list using the ASCII file is done through WICL.
The advantage of this feature is twofold:
- To have the RNC dedicate its resources to call management, as opposed to call monitoring.
Some counters are linked to a UTRAN feature; if the feature is not activated, the counter is useless. Other counters need to be activated only when a new feature or functionality is introduced, or for a specific optimization service, but do not need to be active on a day-to-day operational basis. This mechanism allows customers to select exactly what they want.
The counters list csv file has several parameters per counter, including isActivated, group, name, measuredObjectClass, weight and priority. The user can estimate the weight associated with a customized RNC counter list by using the weight values of the counters whose isActivated value is set to Y, assuming all counters from the same group have an identical isActivated value.
The total weight of the RNC counter list can be estimated by considering only the counters whose measuredObjectClass field is set to RNCFunction/Cell, summing up the values of their weight field, and multiplying the result by the number of cells configured or projected for the RNC considered.
The RNC counter list total weight can then be compared to the RNC max counter capacity. The RNC max counter capacity depends on the RNC's INode platform type. The RNC platform type is given by the value of the INode's attribute EM/RncIn/hardwareCapability, available through WICL.
o Example - For the INode platform type all6mPktServSP (PSFP, CP3, 16pOC3STM1), the RNC max counter capacity is 4.75 million counter instances.
Note that even if counters are deactivated in the csv counter list file, they will still appear in the
3GPP observation xml files produced by WMS with null values.
If the number of cell-counters records reaches or exceeds 80% of the RNC capacity limit, the RNC
raises a Warning alarm.
The priority field value indicates the priority associated with the counter, as defined by R&D and implemented in the UA06 RNC. The higher the priority value, the less important the counter is considered. In case of resource shortage, the RNC stops collecting counters starting with the highest priority values.
If the isActivated field is set to Y for a given counter, all the counters sharing the same group field value will be activated. This means that setting a single isActivated field value to Y impacts all the counters of that group. For clarity, the user is advised to set the isActivated field of all the counters sharing the same group to Y when at least 1 counter from this group needs to be activated.
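The weight estimate described above (sum the weights of the activated RNCFunction/Cell counters, multiply by the number of cells) can be sketched as follows. The csv content and counter names here are made-up examples; real counter list files carry the full per-counter parameter set.

```python
# Sketch of the RNC counter-list weight estimate. COUNTER_LIST is a
# fictional example of the csv format described in the text.
import csv
import io

COUNTER_LIST = """\
isActivated,group,name,measuredObjectClass,weight,priority
Y,RAB,VS.RabAttEstab,RNCFunction/Cell,3,1
Y,RAB,VS.RabSuccEstab,RNCFunction/Cell,3,1
N,HO,VS.HardHoAtt,RNCFunction/Cell,2,4
Y,IUB,VS.IubLoad,RNCFunction,5,2
"""

def cell_weight(csv_text, nb_cells):
    """Total weight of the activated per-cell counters, scaled by cells."""
    reader = csv.DictReader(io.StringIO(csv_text))
    per_cell = sum(
        int(row["weight"])
        for row in reader
        if row["isActivated"] == "Y"
        and row["measuredObjectClass"] == "RNCFunction/Cell"
    )
    return per_cell * nb_cells

total = cell_weight(COUNTER_LIST, nb_cells=1000)
# Compare `total` against the platform's max counter capacity (e.g.
# 4.75 million instances for the all6mPktServSP INode platform type).
```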
3.6.8.3
For a server which is fully loaded with its maximum number of NEs, the catch-up time ratio is around 1:1. This means that if the server is down (outage, network outage, patch installation, maintenance, etc.) for one hour, it will take one hour to catch up (or 1 day of catch-up for 1 day down). Once caught up, the server is back to its normal steady state and all the XML files are delivered as per their normal schedule.
Note that in some special circumstances like during intensive use of RNC call trace, the time required
to recover from an outage can increase. For planned outages (for example during an upgrade), it is
recommended to stop call trace sessions on the RNC before the outage.
3.6.8.4
NPO is compatible with XML files in gzip format and there will be no performance impact on NPO to
read compressed XML files.
As a general rule, the compression ratio achieved (gzip format) is typically 90% or better, so up to 10x more data can be stored using this format.
Global WMS purge functionality
The purging algorithm is applied on the Main Server to global partitions. The XML storage (for observations/counters, call trace data, etc.) resides in the /opt/nortel/data partition, and the purge algorithm attempts to maintain this global partition at 80% usage. This leads to a more efficient use of the disk space, so that in general a WMS server is expected to store more days of XML data than was possible in previous releases.
The purge functionality can noticeably change the number of days stored for different types of networks. One reason for this is that the number of days of XML data which can be kept is dynamically assessed, so this parameter can actually vary over time. Also, the number of storage days is applied uniformly across all data on the server. In all cases, the WMS server should be able to keep a minimum of 3 days of XML data.
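A simplified model of the purge behaviour described above is "delete the oldest files until usage falls to the 80% target". The sketch below is only an illustration of that policy; file sizes and the data layout are invented, and the real algorithm operates on the /opt/nortel/data partition.

```python
# Simplified model of the purge-to-80% policy. Sizes are invented.
def purge(files, partition_mb, target=0.80):
    """files: list of (age_days, size_mb). Returns the files kept."""
    files = sorted(files, key=lambda f: f[0], reverse=True)  # oldest first
    used = sum(size for _, size in files)
    while files and used > target * partition_mb:
        _, size = files.pop(0)    # drop the oldest file
        used -= size
    return files

# 5 daily files of 200 MB on a 1000 MB partition: usage is 100%, so the
# oldest file is purged, bringing usage down to the 80% target.
kept = purge([(5, 200), (4, 200), (3, 200), (2, 200), (1, 200)], 1000)
```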
3.6.8.5
The WMS server supports the following types of call trace:

Call Trace Session                      Purpose
Neighbouring Call Trace (CTn)           To trace mobility-specific events and handovers between neighbouring cells
Core Network Invoked Call Trace (CTa)   To trace one or several UE calls selected by the Core Network and to trace UE emergency calls
Access Invoked Call Trace (CTb)         To trace dedicated data for calls based on a predefined UE identity (TMSI, P-TMSI, IMSI or IMEI)
Geographic Call Trace (CTg)             To trace dedicated data for calls established within a geographical area in the UTRAN (may be a cell, a set of cells or all the cells in the RNS)
The Call Trace functionality generates a large number of records at the RNC that are post-processed by the WMS APM and ADI modules running on the WMS server. As an example, the RNC can generate a few megabytes (up to 5.0 MB) of call trace data per minute.
The table below contains call trace engineering guidelines for OAM06.

Server type              Minutes of traced    Monitoring guidelines   Maximum nominal Call
                         calls per day        (minutes per day)       Trace CPU usage
V250/N240                3000                 3100                    7.50%
V880-4 CPU-900 MHz       5700                 6000                    6%
V880-4 CPU-1200 MHz      7500                 8000                    6%
V890-4 CPU-1200 MHz      15000                16000                   6%
4800-8 CPU-900 MHz       11500                12000                   6%
4800-8 CPU-1200 MHz      15000                16000                   6%
4800-12 CPU-900 MHz      17000                18000                   4%
4800-12 CPU-1200 MHz     23000                24000                   4%
V4900-8 CPU-1200 MHz     30000                32000                   3%
V4900-12 CPU-1200 MHz    46000                48000                   2%

Table 10: Call Trace Engineering guidelines and daily recommended volumes of data
- For CTa or CTb, the Main Server can simultaneously process the traces of 10 identified user equipments.
- For CTg, CTn or OTCell, the Main Server supports (number of RNCs / 10) simultaneously active sessions, with a minimum of 1 session. Note that 1 RNC supports only 1 CTg, CTn or OTCell session of a given type at a time, but CTg and CTn sessions can run simultaneously on the same RNC (this was restricted in UA05).
- The maximum number of simultaneous calls a CTn session can trace per TMU is 300 calls. For a fully configured RNC (12+2 TMU), the maximum number of simultaneous calls is 300 x 12 = 3600 calls.
- Intensive usage of call trace counters should be avoided.
- For planned server outages, it is recommended to stop call trace sessions prior to any shutdown.
In general, there are no software controls implemented on the Call Trace Wizard or the NEs to ensure that the user complies with most of the recommendations in this section.
The number of standing alarms is defined by the number of active alarms present in the global layout of the NSP GUI, i.e. the total number of alarms indicated by the network banner in NSP for the entire network. A very large number of active alarms in the NSP GUI can use server resources such as memory and can cause a degradation in server performance, leading to increased alarm latency, slower server response times, loss of alarms and issues with alarm re-synchronization. Accordingly, limits have been set (see the table below, "Maximum number of standing alarms per hardware type") on the maximum number of standing alarms supported on the server at any one time. These limits vary with the WMS server hardware type. The operator must manually clear non-essential alarms on a regular basis to help ensure that this maximum number is not reached.
Main Server type:
- SF E4900 & SF 4800 & M5000
- SF V890 & SF V880 & T5440
- SF V250 & N240 & T5220
3.6.9.2
After a system restart, the NSP GUI will not display all the NE and alarm information until all DAs have
"synchronized" with the building blocks.
The time it takes for the synchronization to complete varies. The delay is primarily dependent upon:
- The number of NEs being managed
- The number of active alarms
- The number of DAs registered with TUMSBB
- Where each NE is in their polling cycle
The DAs synchronize with each NE individually. When the last NE is synchronized with its DA, that DA performs another update of NE and alarm information before passing a "synchronization complete" message to TUMSBB. Sometimes the DA must wait for the next polling cycle of an NE before it can synchronize that particular NE.
When TUMSBB receives a "synchronization complete" message from all DAs, TUMSBB sends its synchronize message to SUMBB. Only when the SUMBB synchronization is complete are all NEs guaranteed to appear in their layouts, with ACIs applied, etc.
Given how the synchronization process works, and all the variables that affect its duration, it is understandable that the synchronization time varies from system to system.
After a system restart it is recommended to allow 10 minutes for the synchronization to complete
before performing any actions.
It has been observed in some large networks with 10000 to 20000 Active alarms, or when receiving
high alarm rates, that the synchronization process can take in excess of one hour.
3.6.9.3
The Multiservice Switch and SNMP alarms are stored in the MDM GMDR on the Main Server(s). By
default, GMDR stores up to 6000 alarms. If the number of active alarms is more than the GMDR alarm
list limit, some active alarms will be lost from the alarm list on the Main Server.
It is important that the number of active alarms does not exceed the GMDR limit, so that active alarms are not lost. To avoid reaching the GMDR alarm list limit, it is recommended to monitor the number of active alarms periodically using the GMDR Administration GUI, which is launched from the MDM Toolset via the NSP GUI. If the number of active alarms approaches the configured GMDR maximum, the operator may need to take action to reduce the number of active alarms, or to increase this limit.
3.7.
The following types of backup of the WMS Main Servers (Primary and Secondary) to local tapes are supported:
-
In addition to the local backup and restore solution, centralized backup and restore of the WMS
servers using VERITAS NetBackup 6.0 is also available. Using a Veritas NetBackup 6.0 DataCenter
server, it is possible to perform all the B&R operations previously mentioned.
Please refer to the Wireless Network Management System Backup and Restore User Guide, NTP NN10300-035, for more information on the backup and restore procedures, including the centralized solution.
- To maintain consistency and integrity of the data on the WMS servers, when a backup or restore is performed on one server, it must also be performed on all the other servers in the ROC.
- One SUN StorEdge tape drive per server is standard equipment.
- It is recommended to perform a system backup at least once after installation of every release. Performing a system backup is also suggested after installation of an important set of patches.
- It is recommended to back up the essential data at least once a day. Since the data backup is done online, its frequency can be increased.
- The capacity of a DDS-5 tape is 36 GB and the capacity of a DAT 72 tape is 36 GB. The maximum data transfer rate of SUN StorEdge tape drives before data compression is 3 MB/s.
- The data compression ratio can vary depending on the type of data. Assuming a compression ratio of 2:1, 72 GB can be stored per DDS-5 tape and 72 GB per DAT 72 tape, and the data transfer rate becomes 6 MB/s.
3.7.2
DAT 72 (4): The standalone rack-mount dual DAT72 tape drive (available in AC or DC power) is delivered with one DAT72 tape. The DAT 72 solution is mainly proposed with the small WMS solution based on NETRA 240 hardware. (The DAT 72 tape drive is delivered by default within the WMS Sun Fire SFV890 and E4900.)
LTO4HH: The Sun StorageTek HP LTO4 Half Height SAS tape drive (AC power) is delivered with one tape drive. Another tape drive (Sun StorageTek Bare HP LTO4 drive) can be purchased to be installed within the Sun StorageTek HP LTO4 enclosure. Each Sun StorageTek HP LTO4 tape drive has a capacity of 800 GB.
LTO4HH is mainly proposed for the small-scale server. If a standalone bare LTO4 drive is added, the resulting Sun StorageTek HP LTO4 box can be shared by two servers (e.g. two SE T5220), each server being connected to one LTO4 tape drive. LTO4HH is also applicable to a medium MS-NPO based on the M4000 2-CPU.
SL24 LTO4HH: For more simplicity and the value of high-capacity automated backup and recovery, the SL24 LTO4HH SAS tape autoloader can be used. It automatically reinserts a tape in the drive when the previous one has been ejected. The SL24 arrives rack-ready for installation into a standard 19-inch rack, or an optional kit can be used to integrate it into a tabletop environment. The SL24 ships with one drive and includes two removable 12-slot magazines, with one slot dedicated to the import/export of data cartridges.
The table below describes the compatibility between the tape solutions and the server models according to the domain (MS-NPO, WMS, etc.).
Tape solution compatibility by domain and server model (tape drives: SDLT600, DAT 72, LTO4HH, SL24 LTO4HH SAS tape autoloader; N.A = not applicable):

WMS domain:
- NETRA 240 (NEBS product): DAT 72 recommended.
- SF V890: DAT 72 recommended (pre-installed).
- SF E4900: DAT 72 recommended (pre-installed).
- SE M5000: SL24 LTO4HH recommended for automatic cartridge management.
- SE T5220: LTO4HH recommended.
- NETRA T5440 (NEBS product, available in AC or DC power): DAT 72 recommended with the T5440 to address full NEBS requirements (DAT 72 available in AC or DC); SL24 LTO4HH recommended for automatic cartridge management if the full NEBS requirements are not required (in case of a T5440 in DC, make sure a DC-to-AC power converter is available on site for the SL24 LTO4HH).

NPO domain:
- NETRA 240 (NPO only): SDLT600 supported, DAT 72 recommended.
- SE T5220: LTO4HH recommended.
- SFV490 - 2 CPU (NPO only): SDLT600 recommended.

MS-NPO domain:
- SFV490 - 4 CPU: SDLT600 recommended.
- M4000 - 2 CPU: LTO4HH recommended.
- M4000 - 4 CPU: LTO4HH supported (local backup to tape is not recommended for large MS-NPO; see the note below).

MS-SUP domain:
- NETRA T5440 (MS SUP only, NEBS product, available in AC or DC power): LTO4HH recommended (in case of a T5440 in DC, make sure a DC-to-AC power converter is available on site for the LTO4HH).

Note: Please contact your Alcatel-Lucent representative to identify the Backup and Restore solution that suits the customer's infrastructure.
Note: DAT 72 is NEBS compliant.
The time required to perform an essential and non-essential data backup, or a historical data archival, depends on the amount of data accumulated on the servers. Therefore, it is not possible to give a precise figure for the time required to perform data backups. The minimum time can be estimated if the amount of data to be backed up is known:
Time in hours = x MB / (4.5 MB/sec * 3600)
The restore time can be estimated at about 15% more than the backup time, plus the reboot time of the server.
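The estimate above can be expressed as a small helper. The 4.5 MB/s rate and the 15% restore overhead come from the guideline text; the reboot time must be supplied separately.

```python
# Backup/restore time estimates from the formula above.
def backup_time_hours(data_mb, rate_mb_s=4.5):
    """Time in hours = x MB / (4.5 MB/sec * 3600)."""
    return data_mb / (rate_mb_s * 3600)

def restore_time_hours(data_mb, reboot_hours=0.0):
    """Restore is roughly 15% more than the backup time, plus reboot."""
    return 1.15 * backup_time_hours(data_mb) + reboot_hours

hours = backup_time_hours(32400)   # 32400 MB of data to back up
```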
Note: A local backup solution through tape drives is not recommended for a large MS-NPO. Local backup to disk, or a centralized backup and restore solution with a partner (e.g. LEGATO or VERITAS infrastructure), has to be considered.
3.8.
The WMS server supports Direct Attached Storage (DAS), where disk arrays are connected directly to the SF E4x00 servers. A new feature in OAM06 allows integration of the WMS SF E4x00 servers into a Storage Area Network (SAN) to provide a more flexible solution to the customer.
With the SAN solution, the /opt/nortel/data partition is transferred to the SAN disk volumes while the internal system and application data are kept on the local disks of the host server.
[Figure: SAN architecture for a WMS ROC — the primary and secondary Main Servers (MS) each connect through 2 dual-port FC optical HBA cards and redundant Fibre Channel switches to Metadevice #1 and Metadevice #2, built from SAN virtual disks LUN@1 through LUN@6.]
The SAN is expected to support the Sun StorageTek Traffic Manager software (STMS), formerly Sun MPxIO, for multipathing. This is integrated in the Sun Solaris 10 Operating System.
The SAN is expected to support the Solaris format command with the EFI (Extensible Firmware Interface) label.
- The operator is expected to provide 2 or more SAN volumes/LUNs (Logical Unit Numbers)
- The customer is expected to group these volumes into 2 sets of SVM sub-mirrors to optimize performance, load balancing, redundancy and upgrade downtime
- Each SAN set should be 1.5 TB in size (and less than 2 TB)
- The bandwidth required between the SAN and the WMS servers is as follows: 1 Gbps for SF E4800 and 4 Gbps for SF E4900 WMS servers
- SAN is supported on the following WMS hardware platforms: SF E4x00 (8-CPU and 12-CPU) and SE M5000
4.
4.1.
OVERVIEW
The OAM solution also offers support for 3GPP compliant (XML format) performance management
interfaces.
In addition, Alcatel-Lucent's 3GPP offering includes the implementation of the Alcatel-Lucent-specific Security BB, which is mandatory for all customers. The Security BB authenticates the IRP Manager's identity and provides the authenticated IRP Manager with the Inter-operable Object Reference (IOR) of the Entry Point IRP Agent.
For detailed information on the 3GPP External Interfaces, please refer to the following Technical
Manuals:
-
4.2.
4.3.
The 3GPP Notification BB implements the CORBA interface needed to access required event
channels. There is one event channel per notification category (e.g. Kernel CM event is one
notification category while Alarm event is another notification category). There is only one Notification
BB per 3GPP BB instance.
4.4.
BB)
The 3GPP FM BB is Alcatel-Lucent's implementation of the 3GPP Alarm IRP Agent CORBA solution set. The purpose of the Alarm IRP is to define an interface through which an Alarm IRP Agent can communicate alarm information (for its managed objects) to one or several Alarm IRP Managers.
The 3GPP FM BB connects to the WMS underlying system instances within a ROC, collects alarms, performs the mediation, and forwards 3GPP-formatted alarms to the Alarm IRP Manager(s). The BB also periodically polls for new alarm-related events.
The 3GPP FM BB communicates with the 3GPP Notification BB, informing it of all alarm-specific events. Alarm events are then propagated to the subscribed Alarm IRP Managers by the Notification Service.
The 3GPP Communication Surveillance IRP
As part of the 3GPP FM BB, Alcatel-Lucent has also implemented the Communication Surveillance solution and supports the sending of the notifyHeartbeat notification through the 3GPP FM BB channel. This feature provides the Alarm IRP Managers with the ability to monitor the communication between the 3GPP FM BB and themselves through the notification channels of the CORBA Notification Service.
The solution consists of periodically broadcasting, when activated, a specific standard notification called notifyHeartbeat to all subscribed Alarm IRP Managers.
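The heartbeat mechanism described above amounts to a timer that, while activated, broadcasts the same standard notification to every subscriber. The Python sketch below illustrates only that scheduling pattern; it is not the CORBA Notification Service, and all names are illustrative:

```python
import threading
import time

class HeartbeatBroadcaster:
    """Illustrative sketch of the notifyHeartbeat pattern: when activated,
    a notifyHeartbeat is sent periodically to every subscribed manager.
    This stands in for the scheduling logic only, not the CORBA service."""

    def __init__(self, period_sec):
        self.period_sec = period_sec
        self.subscribers = []   # callbacks standing in for Alarm IRP Managers
        self._active = False

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def _broadcast(self):
        # Send the same standard notification to all subscribed managers
        for notify in self.subscribers:
            notify("notifyHeartbeat")

    def activate(self):
        self._active = True
        threading.Thread(target=self._run, daemon=True).start()

    def deactivate(self):
        self._active = False

    def _run(self):
        while self._active:
            self._broadcast()
            time.sleep(self.period_sec)
```

A manager detects a communication failure by noticing that no heartbeat arrived within the expected period.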
The figure below illustrates Alcatel-Lucent's 3GPP FM high-level architectural implementation.
[Figure: a 3GPP Alarm IRP Manager connects through the 3GPP standard CORBA interface to the 3GPP Output Building Block, which contains the 3GPP Notification BB and the 3GPP FM BB (includes CS IRP), on top of the WMS Underlying System and the WMS Security Services Framework.]
4.5.
The 3GPP Basic CM BB is Alcatel-Lucent's implementation of both the 3GPP Basic CM IRP and the 3GPP Kernel CM IRP Agent CORBA solution sets. It connects to the underlying OAM system instances within a ROC, retrieves NE information, performs the mediation, and provides the Basic CM IRP Manager(s) with 3GPP-compliant network information.
The Basic CM part of this IRP Agent provides operations for the manager to retrieve all supported Managed Objects and attributes.
The Kernel CM part, which is the main part of the Building Block, interacts with the Notification IRP Agent to get the event channel reference dedicated to Kernel CM events. All ongoing events will be sent to subscribed Kernel CM IRP Managers via this event channel by the Notification service.
The figure below illustrates Alcatel-Lucent's 3GPP Basic CM / Kernel CM high-level architecture.
[Figure: the 3GPP Basic CM BB (includes Kernel CM IRP) within the 3GPP Output Building Block, together with the 3GPP Notification BB; managers authenticate via the Alcatel-Lucent Security BB (includes Entry Point IRP); an Alcatel-Lucent-specific CORBA interface links the BBs to the WMS Security Services Framework and the WMS Underlying System.]
4.6.
The Bulk CM BB implements the Bulk CM IRP Agent CORBA interface defined by the 3GPP standards. The supported Bulk CM Managed Objects are within Alcatel-Lucent's UMTS Terrestrial Radio Access Network (UTRAN) domain.
The Bulk CM IRP Agent interacts with the Notification IRP Agent to get the event channel reference dedicated to Bulk CM events. All applicable events will be sent to subscribed Bulk CM IRP Managers via this event channel by the Notification service. The main operations available to IRP Managers are: upload, download, activate and fallback. Also, the IRP Agent uses XML configuration data files to exchange data with the OSS client (IRP Manager).
Upload operations request the upload of configuration data to the OSS. XML files are sent describing the Network Resource Model (NRM) of the Alcatel-Lucent Bulk-supported 3GPP NRM.
For download requests, the OSS sends an XML configuration file to perform active configuration management for Alcatel-Lucent's Bulk-supported 3GPP NRM.
The figure below illustrates Alcatel-Lucent's 3GPP Bulk CM high-level architecture.
[Figure: a 3GPP Bulk CM IRP Manager connects through the 3GPP standard CORBA interface to the 3GPP Output Building Block (3GPP Notification BB and 3GPP Bulk CM BB), authenticated by the Alcatel-Lucent Security BB (includes Entry Point IRP); an Alcatel-Lucent-specific CORBA interface links to the WMS Security Services Framework and the WMS Underlying System, which manages the UTRAN (BTS, RNC).]
4.7.
The PM BB implements the PM IRP Agent CORBA interface defined by the 3GPP standards. This IRP solution set supports Alcatel-Lucent's UMTS Terrestrial Radio Access Network (UTRAN) domain.
Fundamentally, the PM BB is dependent on the Access Data Interface (ADI). The ADI is responsible for collecting performance data, creating the proprietary performance data files, and notifying the PM BB of the availability of these files. The PM BB, in turn, receives events from the underlying ADI instance, mediates and translates the data from Alcatel-Lucent's proprietary format to the 3GPP standard format, and sends notifications to the PM IRP Manager via the Notification Service. The 3GPP PM BB also handles measurement job operation requests (e.g. creating, listing, and stopping measurement jobs) from the PM IRP Manager.
The figure below illustrates Alcatel-Lucent's 3GPP PM high-level architecture.
[Figure: a 3GPP PM IRP Manager connects through the 3GPP standard CORBA interface to the 3GPP Output Building Block (3GPP Notification BB and 3GPP PM BB), authenticated by the Alcatel-Lucent Security BB (includes Entry Point IRP); an Alcatel-Lucent-specific CORBA interface links to the W-NMS Security Services Framework and the Access Data Interface (ADI, Performance Server), which collects from the UTRAN (BTS, RNC).]
4.8.
Typically, one 3GPP FM BB, one 3GPP Basic CM BB, one 3GPP Bulk CM BB, and one 3GPP PM BB instance are deployed per ROC. All BBs co-reside with the primary WMS Main Server.
The figure below shows a typical 3GPP Output Block configuration within a ROC.
[Figure: typical 3GPP Output Block configuration within a ROC — a 3GPP IRP Manager connects through the 3GPP standard CORBA interface to the 3GPP Output Building Block on the primary Main Server, which hosts the 3GPP FM BB (includes CS IRP), the 3GPP Basic CM BB (includes Kernel CM IRP), the 3GPP Bulk CM BB, the 3GPP PM BB, the 3GPP Notification BB and the Alcatel-Lucent Security BB (includes Entry Point IRP); an Alcatel-Lucent-specific CORBA interface links to the WMS Underlying System and the secondary Main Server.]
80,000 alarms
20,000 alarms
200 alarms/second
150 events/second for one user; 90 events/second for two users
120 alarms/second with one (1) IRP Manager; 60 alarms/second with two (2) IRP Managers
2
10,000
2
[Figure: WMS and CP_UMTS data flows — each WMS exports a snapshot and activates a WO towards its CP_UMTS via CMXML; the CP_UMTS instances synchronize and publish CSV files; aggregation of cell info is exchanged over the 3G-3G E/W interface between WMS instances managing 3G BTSs, and over the 2G-3G E/W interface with a 2G OMC managing 2G BTSs.]
5.
This section focuses on the usage of WMS from an OAM user perspective and the related client engineering considerations (excluding OSS clients).
The following clients are in the scope of this section: WMS Clients (PC & UNIX), WMS Server of Client.
For RFO, NPO and WQA clients, please refer to the individual chapters on this topic.
5.1.
The number of concurrent WMS Clients (i.e. the number of simultaneously active clients) supported by the Main server depends on the number and type of servers in the ROC, as per the table below:
Main Server Type
SE M5000-8CPU
SF E4900-12 CPU, SF 4800-12 CPU
SE M5000-4CPU
SF E4900-8 CPU, SF 4800-8 CPU
NETRA T5440
SF V890, SF V880
SE T5220
SF V250, N240
Maximum number of
concurrent clients
70
50
40
30
20
5
Maximum number of
registered users
600
500
480
400
240
200
50
5.2.
Client usage can significantly impact the performance of the main server; this section defines user profiles in order to provide a baseline for overall client usage.
In order to determine the relative overall performance impact of the entire client user population of a ROC, it is necessary to understand the relative tasks of each type of user.
NSP Level usage model
-
Groups are set up to have a maximum of 201 NEs per group (this means an RNC and all of its associated Node Bs).
As end-to-end network monitoring is most likely to happen at the OSS level, only 15% of users are assumed to monitor the entire network. Other users look at one or a few groups at a time.
Active Alarm Managers are used against the opened groups. It is assumed that the total number of alarm managers used is equal to 100% of the total number of WMS users supported, with a maximum of 5 alarm manager windows per user.
At any point in time, the Historical Fault Browser (HFB) is used by about 50% of all main server users (with an assumed 1 instance of HFB per HFB user), and the overall query rate for this group of HFB users averages 1 query per active HFB user every 5 minutes.
Equipment Monitors: although the limit on the number of equipment monitors an individual user can launch is 5, it is assumed the average is 2 per user.
SRS for NE-specific patch downloads to the main server, and TIL and TMN GUIs (occasional use only): 5% of users.
It is anticipated that 5% of the total number of users are system administrators needing to use tools such as SunMC, add users to the LDAP directory, download patches using the SRS tools, etc.
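The usage model above can be turned into a rough aggregate load estimate. The sketch below derives per-ROC figures from the stated percentages; the 70 concurrent clients are a hypothetical input and the outputs are illustrative planning numbers, not measured limits:

```python
def client_load_profile(concurrent_users):
    """Rough sizing estimate derived from the usage model above.
    Ratios come from the stated profile; the outputs are illustrative."""
    hfb_users = concurrent_users * 0.50        # ~50% of users run the HFB
    hfb_queries_per_min = hfb_users / 5        # 1 query per HFB user / 5 min
    alarm_windows = concurrent_users * 1.0     # alarm managers = 100% of users
    equipment_monitors = concurrent_users * 2  # average of 2 monitors per user
    admins = concurrent_users * 0.05           # ~5% are system administrators
    return {
        "hfb_queries_per_min": hfb_queries_per_min,
        "alarm_windows": alarm_windows,
        "equipment_monitors": equipment_monitors,
        "admins": admins,
    }

# Hypothetical ROC with 70 concurrent clients
profile = client_load_profile(70)
print(profile["hfb_queries_per_min"], "HFB queries per minute")
print(profile["equipment_monitors"], "equipment monitor windows")
```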
5.3.
There is no platform-specific validation done during the WMS Client installation. However, it is important that the minimum client hardware specification be met. Please note that there is a difference in specifications for WMS Clients depending on whether it is a ROC or NOC client; see the Server Hardware Specifications section.
A guideline of 1 GB of memory is used for ROC clients. There are many different GUIs in WMS and, under most usage scenarios, 1 GB should provide sufficient memory to avoid memory swapping, which could impact individual client performance.
At the NOC, connectivity from the NOC WMS Client to multiple ROCs is possible; an increased minimum memory requirement of 2 GB has therefore been set to support more GUIs being open on the NOC client. This should ensure that the level of performance obtained on a single client station can be closely matched on a NOC station connected to two different ROCs (with twice the number of concurrent GUIs opened).
For the usage of other OAM tools, please refer to the relevant section (e.g. NPO client considerations are described in section "PC Hardware Requirements" of chapter "Network Performance Optimizer").
The difference between PC-based and Solaris-based WMS Clients is that the Solaris-based WMS Clients include OS hardening and the Network Time Protocol (for time synchronization).
6.
HARDWARE SPECIFICATIONS
6.1.
OVERVIEW
This section provides hardware recommendations on the following server components in the WMS
OAM solution:
-
The 5620 NM tool is covered in its dedicated section, as its hardware is ordered separately. Please contact your Alcatel-Lucent representative to order the equipment.
The hardware requirements listed in this section focus on the major requirements such as CPU, memory and disk space. For complete hardware specifications, please refer to the following documentation: Alcatel-Lucent 9353 WMS Architecture, Hardware Strategy and Requirements OAM06.
The nominal hardware is RoHS compliant since July 2006. The RoHS Directive (Restriction of Hazardous Substances Directive) bans 6 substances:
- Lead (solders, electrical/mechanical components)
- Hexavalent Chromium (corrosion-resistant coating)
- Polybrominated biphenyls & Polybrominated diphenyl ethers (flame retardants for PCBs, plastics)
- Mercury & Cadmium
The following legacy hardware is supported in OAM06 but is no longer orderable:
-
SF V250
SF V880
SF E4800
SF E4900
SF V490
Sun StorEdge 6120 Array
Sun T3 Storage Array
SunBlade 150
SunBlade 1500
Multi Service Switch 8600 (Nortel)
Ethernet Switch 470 (Nortel)
Ethernet Switch 5510 (Nortel)
For more information on the hardware specifications of this legacy hardware, please refer to the following documentation: Alcatel-Lucent W-NMS Engineering Guide for OAM 5.1/5.2.
6.2.
The following hardware specifications are based on capacity that needs to be supported for the
different network elements.
Software
Operating
System
RAM: 16 GB
Hard Disk: 8 x 146 GB Internal Disk Drives
Ethernet board: 2 x Sun Quad Gigabit Ethernet 100/1000BaseT PCI Adapter
Tape drive
Software
Operating
System
Note: The native capacity of an LTO4 tape cartridge is 800 GB with an I/O speed of 120 MB per second.
Ethernet board
Tape drive
DAT 72 is recommended with the T5440 to address full NEBS requirements.
SL24 LTO4 is recommended for automatic cartridge management and when full NEBS compliance is not required.
(External SL24 LTO4 HH SAS rack-mounted tape autoloader for backup & restore, 24 LTO tape cartridge slots)
Software
Operating
System
Note: The native capacity of an LTO4 tape cartridge is 800 GB with an I/O speed of 120 MB per second.
Ethernet board
Tape drive
Software
Operating
System
Extension Hardware
CPU, RAM
Ethernet board
Tape drive
(Optional)
Software
Operating
System
Extension
CPU, RAM
RAM
Hard Disk
Ethernet board
Tape drive
(Optional)
Software
Operating
System
Extension
CPU, RAM
Hard Disk
Note: The native capacity of an LTO4 tape cartridge is 800 GB with an I/O speed of 120 MB per second.
For more details on the proposed tape drive equipment and its compatibility with each server and domain, please refer to the backup and restore section 3.7.2.
6.3.
WMS Clients are supported on both Windows based PC Clients and Solaris based Sun Client
workstations.
Engineering Note: Purchasing
PC Clients are not orderable through Alcatel-Lucent. It is left to the customer to purchase PC Clients (including all software associated with the client, such as the OS, X-Display, etc.) from their preferred supplier, following the hardware recommendations described in this section.
Since Sun Microsystems has retired the SPARC processor-based workstations, the Ultra 45 UNIX workstation is no longer orderable. However, the Sun Ultra 45 units described in Table 26 below that are already deployed on site are still supported.
SUN ULTRA 45
Product Classification
9353 WMS Server
9359 NPO Server
9953 MS-SUP Server
9959 MS-NPO Server
Base Hardware
CPU
RAM
Hard Disk
Ethernet board
Software
Operating
System
WMS CLIENT
X Display
6.3.2.2
6.3.2.3
WPS CLIENT
If you encounter memory problems with WPS when managing a large network, it is possible to increase the RAM used by WPS. Please contact the Alcatel-Lucent support team.
6.3.2.4
WQA CLIENT
6.3.2.5
RFO CLIENT
6.3.2.6
The following table describes the minimum RAM requirements for a PC running the WMS application simultaneously with another client application.
Medium Network: 3 GB / 3 GB / 2 GB / 2 GB
Large Network: 4 GB / 5 GB / 2 GB / 3 GB
Usage
6.4.
CPU: 2
RAM: 8 GB
Hard Disk: 600 GB (please see Engineering Notes below)
Ethernet board: 100/1000 Mb/sec Ethernet boards
Operating System: Microsoft Windows Server 2003 Enterprise Edition 32 bits, Service Pack 2
6.5.
Hardware Serviceability
To reduce the Mean Time To Repair (MTTR), the following strategies can be employed:
- Spare FRUs at the customer sites,
- A Sun Spectrum Platinum contract: 7x24 on-site support, 7x24 phone support, and 2-hour response time
- A protected server environment: rooms with access control and air-conditioning, protected power supplies (an uninterruptible power supply (UPS) shall be used to ensure high availability), etc.
Hardware reliability
The reliability of Sun servers and their sub-components is excellent, such that the MTBF (Mean Time Between Failures) is maximized. Most outages on customer sites are typically due to software issues, human errors, or acts of God.
Environmental monitoring
The Sun server's environmental monitoring and control system continuously monitors temperatures at several critical locations throughout the machine. Readings from thermal sensors provide input to the server's airflow management system, which automatically adjusts fan speeds as necessary to keep component operating temperatures within acceptable ranges.
If measured temperatures exceed safe operating limits and the fans are already operating at maximum speed, the system automatically notifies the console and suspends operation.
6.5.1.1
SF 4X00
The architecture of the Sun Fire 4x00 server family is built around the redundant re-configurable Sun
Fireplane Interconnect.
Hardware Redundancy
Sun Fire 4x00 servers provide full hardware redundancy.
Should any key component fail (whether it is a CPU, memory, system controller, power supply, cooling
unit, interconnect or system clock) the system is able to recover, and in many cases continue to run
uninterrupted.
Full hardware redundancy includes the following components:
-
Redundant CPUs
Memory
Memory boards
I/O assemblies
I/O adapters (if configured)
Redundant system controllers
Redundant system clock with automatic fail-over
Redundant Sun Fireplane switches
Redundant AC power sources, facilitated by the Redundant Transfer Switch
Redundant power supplies and intelligent power switching mechanism that will fail-over to
remaining power modules
Automatic System Recovery (ASR)
The Sun Fire 4x00 provides automatic system recovery from the following types of hardware faults:
-
CPU modules
Memory modules
PCI buses
System IO interfaces
The automatic system recovery allows the system to resume operation after experiencing certain
hardware failures. The automatic self-test feature enables the system to detect failed hardware
components. An auto-configuration capability designed into the system's boot firmware allows the
system to de-configure failed components and restore system operation.
Redundant Power Distribution System
Three power supplies provide the power. These modular hot swap units are installed and removed
from the rear of the system, even while the machine is fully operational. Maximally configured systems
operate continuously with two power supplies installed.
Power supplies feed all active system components through a common power distribution bus. Power is
drawn equally from all supplies installed in the system. If the service from one power supply is
interrupted, the system power demand shifts automatically to the remaining active supplies. If the
combined output of the remaining supplies satisfies the system's requirements, the machine continues
to operate with no interruption of service.
With a hot standby, or "n+1" power supply installed, the system can continue operating while a
replacement power supply is being installed. If power demands exceed the output of the active
supplies, the system automatically notifies the console and suspends operation.
6.5.1.2
SF V8X0
Hardware Redundancy
This powerful server incorporates many key RAS features such as Automatic System Recovery
(ASR), multi-pathing support to the storage subsystems and networks, hot-swap power supplies,
cooling fans, internal disks and PCI slots. All systems are configured with redundant (N+1) power
supplies and a redundant set of cooling fan trays.
Automatic System Recovery (ASR)
The SUN Fire V8x0 provides automatic system recovery from the following types of hardware faults:
-
CPU modules
Memory modules
PCI buses
System IO interfaces
The automatic system recovery allows the system to resume operation after experiencing certain
hardware failures. The automatic self-test feature enables the system to detect failed hardware
components. An auto-configuration capability designed into the system's boot firmware allows the
system to de-configure failed components and restore system operation.
Redundant Power Distribution System
Three power supplies provide the power. These modular hot swap units are installed and removed
from the rear of the system, even while the machine is fully operational. Maximally configured systems
operate continuously with two power supplies installed.
Power supplies feed all active system components through a common power distribution bus. Power is
drawn equally from all supplies installed in the system. If the service from one power supply is
interrupted, the system power demand shifts automatically to the remaining active supplies. If the
combined output of the remaining supplies satisfies the system's requirements, the machine continues
to operate with no interruption of service. With a hot standby, or "n+1" power supply installed, the
system can continue operating while a replacement power supply is being installed. If power demands
exceed the output of the active supplies, the system automatically notifies the console and suspends
operation.
6.5.1.3
NETRA 240
Netra 240 is NEBS Level 3 certified. The Sun Netra 240 Server is a high-performance, reliable server
for enterprise network computing based upon the UltraSPARC IIIi microprocessor technology. All
memory is accessible by any processor as workgroup servers do not implement domains or partitions.
An internal storage array supports 2 Ultra 160 SCSI disks.
Hardware Redundancy
This powerful server incorporates many key RAS features, such as multi-pathing support to the storage subsystems and networks, hot-swap power supplies, cooling fans and internal disks. All systems are configured with redundant (1+1) power supplies and a redundant set of cooling fan trays.
6.5.1.4
Sun Enterprise T5220 servers incorporate the following key features to increase RAS:
-
ILOM provides administrators with the capability to monitor and control T5220 servers over a dedicated Ethernet connection, and supports secure shell (SSH), Web, and Intelligent Platform Management Interface (IPMI) access. ILOM functions can also be accessed through a dedicated serial port for connection to a terminal or terminal server.
6.5.1.5
The Sun Enterprise T5440 servers incorporate the following key features to increase RAS:
- Lower heat generation, reducing hardware failures
- Hot-pluggable hard drives
- Redundant, hot-swappable power supplies (four)
- Redundant N+1 hot-swappable fan modules
- Environmental monitoring
- Internal hardware drive mirroring (RAID 1)
- Error detection and correction for improved data integrity
- Easy access for most component replacements
Hot-Pluggable and Hot-Swappable Components
Sun Enterprise T5440 server hardware is designed to support hot-plugging of the chassis-mounted hard drives, and hot-swapping of the fan units and power supplies.
6.5.1.6
To deliver reliability, availability and serviceability, the Sun Enterprise M4000/M5000 offers the following features:
-
Supports redundant configurations and active replacement of power supplies and fans.
Periodically performs a memory patrol to detect memory soft errors and stuck-at faults.
Supports redundant configurations, mirroring, and active replacement of disks.
XSCF (detailed below) collects fault information and supports preventive maintenance using different types of warnings.
Shortens downtime by using automatic system reboot and reducing the time taken for system start-up.
Status LEDs mounted on the main components and the operator panel display which active components need replacement.
Centralized systematic monitoring, such as with SNMP.
The system features two (M4000) or four (M5000) hot-swappable power supplies, any one of which is capable of handling the system's entire load. Thus, the system provides N+1 redundancy, allowing the system to continue operating should one of the power supplies or its AC power source fail.
eXtended System Control Facility Unit (XSCFU)
The eXtended System Control Facility Unit (XSCFU) is a service processor that operates and administers the M4000/M5000. The XSCFU diagnoses and starts the entire server, configures domains, offers dynamic reconfiguration, and detects and reports various failures. The XSCFU enables standard control and monitoring functions over the network; using these functions, the server can be started, configured, and managed from remote locations.
The XSCFU uses the eXtended System Control Facility (XSCF) firmware to provide the following
functions:
- Controls and monitors the main unit hardware
- Monitors the Solaris Operating System (Solaris OS), power-on self-test (POST), and the
OpenBoot PROM
- Controls and manages the interface for the system administrator (such as a terminal console)
- Administers device information
- Controls remote messaging of various events
The XSCF firmware provides the system control and monitoring interfaces listed below:
- Serial port through which the command-line interface (XSCF shell) can be used
- Two LAN ports:
o XSCF shell
o XSCF Web (browser-based user interface)
6.5.1.7
The ST6140 is a stackable fail-safe storage array with 16 x 146 GB disks at 15K RPM. The configuration supporting the OAM solution is a dual ST6140:
- 1 controller tray
o 16 x 146 GB 4 Gbps Fibre Channel disks, 15K rpm
o 2 redundant FC RAID controllers
o 2 redundant FC connections to the server
o 2 redundant 100Base-T Ethernet connections, 2 IP addresses
o 2 redundant AC power supplies
o 2 redundant cooling fans
- 1 expansion tray
o 16 x 146 GB 4 Gbps Fibre Channel disks, 15K rpm
o 2 redundant I/O modules, connection to the controller tray
o 2 redundant AC power supplies
o 2 redundant cooling fans
6.5.1.8
SF V490
To deliver high levels of reliability, availability and serviceability, the Sun Fire V490 system offers the
following features:
-
Standby servers (also called cold standby servers) can be made available in the event of a failure of a
WMS Primary / Secondary Main Server. In the event of a failure of one of these servers, operators
can manually switch over from the failed server to the standby server.
This strategy can be implemented according to the following two scenarios:
Scenario 1: If only 1 standby server is available
- The standby server must be initially installed and configured as a WMS Primary Main Server.
- In the event of a failure of a WMS Primary Main Server, the standby server can be manually
activated and becomes the WMS Primary Main Server.
Scenario 2: If more than one standby server is available
- The standby servers must be initially installed and configured as the servers that they are standby
for.
- In the event of a failure of any of the supported servers the associated standby server can be
manually activated to replace the faulty server.
Once the standby server is activated, the operator must restore the last data backup that was
performed on the failed server. Please refer to the Wireless Management System Backup and Restore
User Guide, for more information.
The standby server hardware must have the same hardware configuration as the original server.
Thus, if there are different types of servers deployed, there must be one standby server of each type
available to have a backup plan for each of them.
Disaster recovery (or emergency Recovery) is provided as an enhanced service by Alcatel-Lucent.
The customer should contact their local Alcatel-Lucent representative for more information.
6.6.
This section provides hardware recommendations for the various DCN components that are part of the
OAM reference architecture as well as recommendations that will help engineer the OAM DCN.
Hardware specifications and recommendations are provided for:
-
Firewalls
LAN/WAN equipment of the OAM reference architecture
Alcatel-Lucent's VPN Firewall portfolio offers a broad range of carrier-class platforms for delivering advanced security, VPN, bandwidth management, and other high-demand IP services.
Specifications:
- Throughput: 0.33 Gbps
- Session connections per second: 20 000
- Total concurrent sessions: 245 000
Model: VPN Brick 150
- Memory: 128 MB (base)
- Interfaces: 4 built-in 10/100 Ethernet LAN ports (standard)
- Encryption: 128-bit encryption
- Number of tunnels: license allows up to 1000 simultaneous tunnels
- Software: ALSMS V9.1 Package
[Figure: RAMSES remote access topology - a PC Linux Mediation Device at the customer premises
connects through a Netscreen NS-5GT gateway and the public IP network to the Alcatel-Lucent secure
premises with authorised users and servers.]
The monitored Customer Network is accessed via the public IP network from Alcatel-Lucent's secure
premises. The Customer Network and the RAMSES data flow are protected from the Internet by a
dedicated gateway (firewall) that denies all flows except an IPSec tunnel from a central gateway on
Alcatel-Lucent premises.
The IPSEC Tunnel characteristics are:
-
At the customer side, the Netscreen firewall is connected to a single point called the RAMSES Mediation
Device (a Linux PC).
The remote commands from Alcatel-Lucent premises towards the NE of the monitored network are
issued from:
- RAMSES application servers on an isolated and protected sub-network (DMZ)
- RAMSES authorized users on the Alcatel-Lucent Intranet
These commands are controlled and relayed (when authorized) by proxy software running on the
Mediation Device.
6.6.3.1
OMNISWITCH 6850
The OmniSwitch 6850 is recommended for LAN connections including local client site connections
and for remote/local NE sites. The OmniSwitch 6850 is available in 2 models:
-
Characteristics:
- AC or DC power supplies
- 4 Ethernet 1000BaseX ports
- NEBS compliant
- Stackable to 8 switches (8*48=384 Ethernet ports)
MRV LX-8020S:
- 2 Ethernet ports for connection to the LAN
- 20 RS-232 RJ45 ports for connection to servers
- 2 redundant power supplies (AC or DC)
MRV LX-4016T:
- 2 Ethernet ports for connection to the LAN
- 16 RS-232 RJ45 ports for connection to servers
- 2 redundant power supplies (AC or DC)
6.7.
OTHER EQUIPMENT
This section provides recommendations for optional equipment that can be added to the WMS
offering. Note that this equipment is not orderable through Alcatel-Lucent.
For printers needed as part of the WMS solution, Alcatel-Lucent recommends the Lexmark printer
with the specifications in the table below. The driver for this printer is included in the WMS
software load. No integration from Alcatel-Lucent is available if another network printer is chosen for WMS.
Hardware
Print Technology
Print Resolution
Print Speed
Processor
Standard Ports
7.
NETWORK ARCHITECTURE
This chapter contains network architecture considerations for the WMS network. It also covers
features which have an impact on WMS network design.
7.1.
The National Operation Centre (NOC) provides a view of the whole network. A NOC is composed of
WMS Clients. From the WMS Clients in a NOC, it is possible to monitor several ROCs. This allows a
complete network to be managed remotely from a single location.
A Regional Operation Centre (ROC) is designed to manage a region of the network. It is designed so
that it is autonomous and independent of the other ROCs and the NOC.
All OAM servers in a ROC are co-located at one site and it is recommended that they share the same
LANs (i.e. OAM LAN & Network Element LAN) and can provide an integrated view of the alarms and
performance counters for all the NEs managed by that ROC.
A WMS network is composed of the following components:
- Primary Main Server (Mandatory)
- WMS Client (one Mandatory)
- RAMSES (Mandatory)
- Secondary Main Server (Optional)
- Server of Clients (Optional)
- WQA server (Optional)
- NPO server (Optional)
- WPS Client (Optional)
- RFO tool (Optional)
- Other networking equipment (Optional)
7.2.
REFERENCE ARCHITECTURE
The OAM network represented in figure below serves as a reference to the network architecture
considerations that will be outlined throughout this chapter.
[Figure: OAM reference architecture - PC and UNIX clients connect via the Internet through VPN
switches to the OAM network (Primary Main Server, Secondary Main Server, NPO server/servers,
WQA server, Server of Clients, Terminal Server). IP/ATM switches link the OAM network to the
NE network of UMTS Network Elements, the Mediation Device, external OSS/BSS and the
public/customer network. Legend: solid lines are Ethernet; * denotes OmniSwitch.]
7.3.
FIREWALL IMPLEMENTATION
Firewalls (and packet filters) are implemented for security reasons, to enforce flow control and allow
only the required communications between different networks.
Firewall support information (i.e. the list of protocols used by the WMS servers) is available for
point-to-point communications in the OAM network.
Recommendations for the deployment of firewalls are as follows:
- Firewalls should be placed between the OAM Client network (NOC) and the rest of the OAM
network (mainly the server network).
- Firewalls should be used on any communications path which goes from one site to another. This
includes the communication between the OAM-NE network on which the servers reside and
any remote OAM-NE networks.
- There are no connectivity requirements between the OAM interfaces of the NEs themselves (i.e.
the OAM interface on one NE doesn't need to communicate with the OAM interfaces of other NEs;
they only need to communicate with the OAM servers).
- Firewalls should be implemented between OAM server interfaces which lead to other non-OAM
networks (for example DHCP, DNS, or centralized B&R interfaces on WMS servers). This should be
based on a security assessment.
- Firewalls are not recommended within a ROC to subdivide the server network (OAM-OAM) or the
NE network (i.e. there should be no firewalls in the communication paths between the WMS
servers in a ROC). The only firewalls should be those leading to the client network or a remote NE network.
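The placement rules above amount to a default-deny policy with a small set of explicit allows. A minimal sketch follows; the network names and rule format are illustrative only, not an actual firewall configuration:

```python
# Sketch of the firewall placement rules above as a default-deny policy.
# Network names and the rule format are hypothetical illustrations.

RULES = [
    # (source network,      destination network, action)
    ("OAM-Client (NOC)",    "OAM-Server",        "allow"),  # clients to servers
    ("Remote OAM-NE site",  "OAM-Server",        "allow"),  # inter-site traffic
    ("NE OAM interface",    "NE OAM interface",  "deny"),   # NEs never talk OAM to each other
]

def decide(src, dst):
    for s, d, action in RULES:
        if (src, dst) == (s, d):
            return action
    return "deny"  # anything not explicitly allowed is denied

print(decide("NE OAM interface", "NE OAM interface"))  # prints: deny
print(decide("OAM-Client (NOC)", "OAM-Server"))        # prints: allow
```

In a real deployment the allow rules would be derived from the per-protocol firewall support information mentioned above.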
7.4.
This section gives some high-level information about the standard hardware configurations that are used
for WMS servers. The specific OAM server hardware components are available via different predefined
bundles; from an engineering and network design point of view, these bundles are therefore not a
variable which needs to be covered in the context of the DCN architecture. Note that the WMS
software installation is fully automated using scripts which do not allow for variations of the hardware
bundle outside of the scope of what has been defined by WMS.
This section will outline the network connectivity variations for each WMS server.
Tables below define different server network interface configurations. The configurations define how
the different network segments can be connected directly to one or more server network interfaces.
Where an interface is used to connect to more than one type of network, the assumption is that these
networks are merged together into one single combined network (same subnet).
Interface numbers start at 0 and go up to (n-1), where n is the total number of interfaces (for example,
for the SF V8x0, the interface numbers go from 0 to 7, but given IPMP redundancy, we only need to
specify connectivity for the first half).
Interface   Communication Group
0           All Networks
Interface   Communication Group
0           OAM_OAM + OAM_Backup
1           OAM_NE
Interface   Communication Group
0           OAM_OAM
1           OAM_Backup
2           OAM_NE
3           Reserved for future use
Interface   Communication Group
0           OAM_OAM + OAM_NE
1           OAM_Backup
Interface   Communication Group
0           OAM_OAM
1           OAM_Backup
2           Citrix ICA Clients
3           Reserved for future use
Hardware model   Supported Interface Configuration
SE T5220         A, B, C, D
NETRA T5440      A, C, D
M5000            A, C, D
OAM Application Server Type                                    Supported Interface Configuration
SF v8x0, SF v250, SF4900 with quad gigabit "copper" cards      A, C, D
SF 4x00 with optical Ethernet cards, N240                      A, B
Server of Clients SF V8x0                                      A, E
[Table: mapping of logical interface numbers (0-7) to physical ports (ce0-ce11, qfe0-qfe7) for each
server type.]
- Failure Detection - the ability to detect when a network adapter has failed and fail over to the
alternative network adapter.
- Repair Detection - the ability to detect when a network adapter that has previously failed has been
repaired, and fall back.
The implementation of Solaris IPMP covers the following four types of communication failures:
-
The WMS servers provide physical redundancy on the network interface cards by adding a redundant
card. In the WMS context, each group of interfaces has active interface(s) (on the primary network
interface card) and corresponding failover standby interface(s) (on the second network interface card).
IPMP on the WMS servers is configured in Active - Standby mode. In this mode, only one network
interface in a group is active, while the standby interface carries no traffic besides
the traffic required to test the health status of the interface.
IPMP in OAM06 uses link-based failure detection, a new Solaris 10 feature, rather than
the probe-based failure detection of previous releases. In link-based failure detection, the IPMP
daemon (in.mpathd) watches for standard "Link Up"/"Link Down" reports from the network adapter. If the
adapter reports "Link Down", it moves the IP configuration for that interface to another adapter in the
same group that has a link.
The advantage of using the link-based IPMP is that it does not require Test IP addresses. It only
requires the use of a single Data IP address that is used for the standard server communication and
migrates automatically between interfaces in the event of an interface failure. The Standby Interface
does not have a data IP address assigned. Should the Active Interface fail, the data address will
migrate to the Standby Interface.
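The active/standby migration described above can be sketched as a small state machine. This is a hypothetical Python model of the behaviour, not actual in.mpathd code; interface names and the data IP are illustrative:

```python
# Minimal model of link-based IPMP failover in an Active-Standby group.
# Interface names (ce0/ce4) and the data IP are illustrative only.

class IpmpGroup:
    def __init__(self, active, standby, data_ip):
        self.links = {active: True, standby: True}  # True = "Link Up"
        self.active = active                        # holds the single data IP
        self.standby = standby                      # no data IP assigned
        self.data_ip = data_ip

    def report(self, iface, link_up):
        """Process a "Link Up"/"Link Down" report from an adapter."""
        self.links[iface] = link_up
        # If the active interface loses link and the standby has link,
        # migrate the data IP to the standby interface.
        if not self.links[self.active] and self.links[self.standby]:
            self.active, self.standby = self.standby, self.active

    def owner(self):
        return self.active  # interface currently holding the data IP

grp = IpmpGroup("ce0", "ce4", "10.0.0.10")
grp.report("ce0", False)   # active adapter reports "Link Down"
print(grp.owner())         # prints: ce4
```

Note how no test IP addresses appear anywhere in the model, matching the advantage of link-based detection stated above.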
Rules which apply to OAM server interface IPMP groups are:
-
7.4.4.1
Disk arrays for the SF 4800 require one IP address each. This interface is only used for configuration and
administration of the disk array; it is not used for disk data traffic between the OAM server and
the disk array (which actually flows over a dedicated dual fibre channel optical connection).
The Disk Array IP addresses should be on the same subnet as the server network (OAM-OAM).
7.4.4.2
The two SE 6120 disk arrays used with the SF 4900 have one Ethernet connection each; however, they
use multipathing and therefore only require one IP address for both. These interfaces are only used
for configuration and administration of the disk array and are not used for disk data traffic between
the OAM server and the disk array (which actually flows over a dedicated dual fibre channel
optical connection).
The Disk Array IP address should be on the same subnet as the server network (OAM_OAM).
7.4.4.3
The ST6140 contains two Ethernet boards (dual Ethernet 10/100BaseT cards) hosted in the controller
tray. In the context of WMS, one pair of Ethernet ports is used and connected to the OAM Ethernet
LAN.
These interfaces are only used for configuration, administration and supervision of the disk array and
are not used for disk data traffic between the OAM server and the disk array (which actually flows
over a dedicated dual fibre channel optical connection).
They do not support IPMP multipathing. Two physical IP addresses are provisioned (one for each
card) and managed internally by the CAM (Common Area Manager). CAM is the unique entry point for
the administration and configuration of the ST6140 (alarms are also sent to Sun MC
through CAM). Since the data IP@ are known by CAM (declared at I&C time), the usage of the effective
IP@ is managed by CAM.
The Disk Array IP addresses should be on the same subnet as the server network (OAM_OAM).
7.4.4.4
The ST2540 is used with the M5000. It contains two Ethernet boards (dual Ethernet 10/100BaseT cards)
hosted in the controller tray. In the context of WMS, one pair of Ethernet ports is used and connected
to the OAM Ethernet LAN.
These interfaces are only used for configuration, administration and supervision of the disk array and
are not used for disk data traffic between the OAM server and the disk array (which actually flows
over a dedicated dual fibre channel optical connection).
They do not support IPMP multipathing. Two physical IP addresses are provisioned (one for each
card) and managed internally by the CAM (Common Area Manager). CAM is the unique entry point for
the administration and configuration of the disk array (alarms are also sent to Sun MC through
CAM). Since the data IP@ are known by CAM (declared at I&C time), the usage of the effective IP@ is
managed by CAM.
The Disk Array IP addresses should be on the same subnet as the server network (OAM_OAM).
Engineering Rule: IP addresses for ST2540 Disk Array
The ST2540 Disk Array IP addresses should be on the same subnet as the server network (OAM-OAM).
7.4.4.5
The SF 4x00 servers use System Controller cards to reboot and configure the system. These cards
are used in a 1+1 redundancy configuration: with two System Controller cards in each system, if one
System Controller card fails, the other can take control of the system without
causing a disruption in system operation.
Each of these cards requires one IP address. One supplemental IP address is required for the active SC
card and is also referred to as the SC logical IP address. This logical IP will automatically switch over to the
standby card upon failure of the active card. This concept is not to be confused with IPMP. Both
system controller cards have a valid IP address by which they can be reached; the active system controller
card can also be reached by the logical IP address. Therefore a total of 3 IP addresses is
required for the system controllers on the SF 4x00.
The system controller IP addresses should be on the same subnet as the server network (OAM-OAM).
The following picture summarizes the Ethernet connectivity for an integrated E4900 with system
controller cards and ST6140 disk array:
7.4.4.6
The SF V250 and Netra 240 servers also have an SC card. The same description as for the SF 4x00
applies, except that the SF V250 and Netra 240 only have one SC card; therefore the concept of a
logical IP does not apply, and only one IP address is required.
7.4.4.7
The M5000 uses one System Controller card to reboot and configure the system. XSCF (eXtended
System Control Facility) is the system management firmware that is preinstalled on the system
controller (SC) of the M5000 server. XSCF provides a browser-based web interface, a command-line
interface and an SNMP user interface, and also includes a SunMC Agent Platform.
The XSCF controller card also contains a Sun MC agent to communicate with the Sun MC server
(running on the M5000 Primary Main Server) through dedicated ports 10.
One IP address is required for the XSCF-LAN interface (a 100-BASE-T Ethernet connection with one
cable is required); no redundancy is available in the XSCF controller card. (The serial link on the
controller card can still be reached through a console server.)
The system controller IP address should be on the same subnet as the server network (OAM-OAM).
Engineering Rule: IP address for XSCF System Controller
The system controller IP address should be on the same subnet as the server network (OAM-OAM).
Note 10: Please refer to "UMT/OAM/APP/024293 Alcatel-Lucent 9353 WMS - Ports and Services" for
the list of ports used within the WMS ROC perimeter.
7.4.4.8
In addition to its Quad Ethernet interface cards, the NETRA T5440 server uses its ILOM card to
manage and configure the system. Integrated Lights Out Manager (ILOM) is system management
firmware that provides a browser-based web interface and a command-line interface, as well as a
Syslog interface, an SNMP user interface and an IPMI user interface.
The NETRA T5440 SP-ILOM card has a 100-BASE-T Ethernet connection; one cable is required.
It requires one IP address, and no redundancy is available in the ILOM controller card. (The serial link
on the controller card can still be reached through a console server.)
7.4.4.9
The same ILOM system controller card is available on the T5220 hardware machine (see figure
above).
7.5.
As a summary of the above sections, the following table provides the IP address requirements for
different supported WMS hardware:
[Table: IP address requirements per hardware platform (network interface groups, SC cards, ST2540,
minimum and maximum IP addresses) for the SE T5220, SUN NETRA T5440, SE M5000 4 CPU and
SE M5000 8 CPU.]
7.6.
The NPO (or MS Portal) server communicates with the Primary and Secondary Main server and NPO
(or MS Portal) Clients. It can optionally communicate to centralized backup systems such as
Legato/VERITAS.
The network segmentation for a standalone NPO or MS PORTAL on a T5220 or T5440 server can be
configured as:

Subnet Group   Communication to
0              OAM-OAM
1              OAM-BR
As a summary of the above sections, the following table provides the IP address requirements for
different supported NPO or MS PORTAL hardware:
[Table: IP address requirements for the SE T5220 (network interface groups, SC cards, ST2540,
minimum and maximum IP addresses).]
The network segmentation for a standalone NPO or MS PORTAL on an M4000-2CPU server can be
configured as:
Subnet Group   Communication to
0              OAM-OAM
1              OAM-BR
A separate IP address should also be considered for the system controller card (XSCF) specific to
the Sun Enterprise M4000. Please refer to previous section for more information.
As a summary of the above sections, the following table provides the IP address requirements for
different supported NPO or MS PORTAL hardware:
[Table: IP address requirements for the SE M4000 - 2 CPU (network interface groups, SC cards,
ST2540 (2), minimum and maximum IP addresses).]
Note: the two IP@ of the ST2540 for the M4000 are set automatically with default values
(192.168.128.101 & 192.168.128.102), and the ST2540 has direct Ethernet connectivity with the M4000.
For the M4000-4CPU, the NPO server needs to be installed in cluster mode as a Master node.
The network segmentation for a cluster-mode NPO or MS PORTAL on an M4000-4CPU server can be
configured as:

Subnet Group   Communication to
0              OAM-OAM
1              OAM-BR
- Floating IP address - this is the NPO cluster IP address as seen from the NPO clients; it is the
"external" IP address of the master node and the only IP known to the NPO clients.
- Public IP address - this is the default IP address of each NPO server and the IP address used to
interact with the WMS Servers. Note that all NPO servers in a cluster communicate with the WMS
servers. The Public IP address is the address assigned during the installation of Solaris on the NPO
server.
- Private and Virtual IP addresses - these are IP addresses reserved for the Oracle RAC clustering
activities and communication between the NPO servers in a cluster.
The table below shows the subnet to which each of the IP addresses in an NPO cluster belongs.

Network Interface   Subnet Group   Contains IP Address   Communication to
0                   OAM-OAM        Public IP Address     WMS Servers
0                   OAM-OAM        Floating IP Address   NPO Clients
0                   OAM-OAM        Virtual IP Address    Other NPO Servers
1                   OAM-BR         B&R IP Address        Centralized Backup & Restore
1                   OAM-BR         Private IP Address    Other NPO Servers
A separate IP address should also be considered for the system controller card (XSCF) specific to
the Sun Enterprise M4000. Please refer to previous section for more information.
[Figure: M4000 Ethernet connectivity - nxge0-nxge7 ports with IPMP redundancy on subnets
OAM_OAM and OAM_BR, an SC IP@ on subnet OAM_OAM, a connection to external ST2540 storage
with two internal IP@ for controller A and controller B (used for administration purposes only; the rest
passes over the FC cables as usual), and a link to be connected to the Slave node.]
[Table: IP address requirements for the SE M4000 - 4 CPU (network interface groups, SC cards,
ST2540 (2), minimum and maximum IP addresses).]
Note: the two IP@ of the ST2540 for the M4000 are set automatically with default values
(192.168.128.101 & 192.168.128.102), and the ST2540 has direct Ethernet connectivity with the M4000.
Note 2: of these, the Private IP address is an internal IP address and can be left at a default such as
192.168.0.11 at installation time.
In an NPO cluster, the following IP addresses are used to interact with the data source (WMS Servers),
the NPO Clients, and between NPO servers:
- Floating IP address - this is the NPO cluster IP address as seen from the NPO clients; it is the
"external" IP address of the master node and the only IP known to the NPO clients.
- Public IP address - this is the default IP address of each NPO server and the IP address used to
interact with the WMS Servers. Note that all NPO servers in a cluster communicate with the WMS
servers.
- Private and Virtual IP addresses - these are IP addresses reserved for the Oracle RAC clustering
activities and communication between the NPO servers in a cluster.
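The address roles described above can be restated as a small data structure. This is a plain summary of the address plan, not configuration data; the subnet-group assignments follow the tables in this section and concrete addresses are deliberately omitted:

```python
# NPO cluster IP address roles, as described in this section.
# Subnet-group names follow the document; the mapping is a summary only.

npo_cluster_addresses = {
    "Public":   {"subnet": "OAM-OAM",     "talks_to": "WMS Servers"},
    "Floating": {"subnet": "OAM-OAM",     "talks_to": "NPO Clients"},
    "Virtual":  {"subnet": "NPO Cluster", "talks_to": "Other NPO Servers"},
    "Private":  {"subnet": "NPO Cluster", "talks_to": "Other NPO Servers"},
    "B&R":      {"subnet": "OAM-BR",      "talks_to": "Centralized Backup & Restore"},
}

# Only the floating address is visible to NPO clients:
client_facing = [name for name, role in npo_cluster_addresses.items()
                 if role["talks_to"] == "NPO Clients"]
print(client_facing)  # prints: ['Floating']
```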
[Figure: NPO cluster connectivity - the NPO Master and Slave servers are connected to the OAM-OAM,
NPO Cluster and OAM-BR groups; the NPO Client reaches the cluster via the OAM-OAM group.]
Network Interface   Subnet Group   Contains IP Address   Communication to
0                   OAM-OAM        Public IP Address     WMS Servers
0                   OAM-OAM        Floating IP Address   NPO Clients
1                   NPO Cluster    Virtual IP Address    Other NPO Servers
1                   NPO Cluster    Private IP Address    Other NPO Servers
2                   OAM-BR         B&R IP Address        Centralized Backup & Restore
An additional IP address is required for access and management of each Brocade Silkworm 300
FC switch; this IP address can reside on the OAM-OAM subnet group.
[Figure: NPO cluster with SE3510 disk arrays - Master and Slave SF490 servers, each with an FC0
connection to an SE3510 array, attached to the OAM-OAM, NPO Cluster and OAM-BR groups.]
[Figure: non-redundant FC switch topology (both NPO servers and SE3510 arrays attached to a single
FC switch) versus redundant FC switch topology (two FC switches, each connecting the NPO servers
to the SE3510 arrays).]
7.7.
The WQA server requires one IP address on the OAM_OAM communication group. For more information
on WQA, please refer to the W-CDMA Quality Analyser (WQA) section.
7.8.
OAM Clients are used to run different OAM applications, including the WMS GUI, WQA client, NPO
Client, SDA tool, RFO tool and WPS tool. One or multiple OAM Clients, based on Windows or Unix, can
be used to access the different OAM applications. These clients require one IP address each on the
OAM_OAM communication group. For more information on clients, please refer to the WMS Clients and
Server of Clients Engineering Considerations section.
7.9.
- Server Console Access - provided by a Terminal Server such as the LX8020S series from MRV.
- VPN Remote Access for Alcatel-Lucent Support - provided by the RAMSES VPN solution.
- VPN Remote Access for Customer users - provided by Alcatel-Lucent's Brick VPN solution.
[Figure: terminal server connectivity - the Terminal Server connects over serial links to the OAM server
and other devices with serial ports, and is reached over Ethernet via Telnet.]
Protocols used (RNC; Node B models 931x, 932x, 933x and 934x; Node B model 939x): SEPE
(proprietary), FMIP (proprietary), FTP, Telnet, RADIUS, IKE (IPSec), NTP, ICMP.
Engineering Recommendation: Node B
For networks without 939x Node B models, it is recommended to block the SYSLOG port.
- All NEs communicating with the WMS servers in the ROC need to also point to the new IP
address.
- Static routes on other servers / NEs need to be updated as required.
- The IP address of the new default router must be known.
- All firewalls and packet filters between the servers and NEs, or between servers and clients, need to
be configured with the new IP addresses of the WMS servers.
- The VPN remote access solution must also be verified and re-configured as necessary to ensure
that the servers with the new IP addresses will be reachable after the networking changes have
been applied. In particular, the firewall rules in the Secure Router used for remote access need to
be updated with the new IP addresses of the WMS servers.
It is recommended that the WMS CIQ be filled out with all the new networking information prior to the
network parameter change procedure as if it was a new installation.
Note 11: The list of 939x Node B models is: 9391, 9392, 9393 and 9396.
8.
BANDWIDTH REQUIREMENTS
This chapter describes the bandwidth requirements for WMS. It covers bandwidth requirements
between the WMS Clients and the ROC, between the ROC and the Network Elements, and towards
the OSS.
8.1.
Within the Network composed of WMS servers, the Ethernet LAN which provides the first level
connectivity to the OAM servers (i.e. WMS Primary / Secondary Main Server, WQA, NPO, etc.) must
operate at 1000 Mbps (1 Gbps).
Engineering Recommendation: Routing Switch and bandwidth considerations
Every Ethernet port of every server must be connected to an Ethernet switch through a 1000 Mbps link.
The 1000 Mbps LAN should be extended to the Routing Switch, which provides routing and aggregation
of the ATM/WAN interfaces to remote OAM networks.
The provisioning of bandwidth for these ATM circuits should be based on the bandwidth specifications
provided in the bandwidth requirements section of this document. For a given link, first determine the
major communication flows that need to go through it on a point-to-point basis. For each of these
point-to-point pairs, look up the required bandwidth and add it up; the total gives the required bandwidth
for that link.
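The per-link sizing just described is a simple summation over the flows that traverse the link. A minimal sketch follows; the flow names and kbps figures are illustrative placeholders, not values from the specification tables:

```python
# Sketch of per-link bandwidth sizing: sum the point-to-point flows that
# traverse the link. Flow names and kbps figures are illustrative only.

flows_on_link = {
    ("RNC-1", "WMS Main Server"): 443,          # e.g. observation counters
    ("RNC-2", "WMS Main Server"): 241,
    ("Node B cluster", "WMS Main Server"): 120,
}

required_kbps = sum(flows_on_link.values())
print(f"Required effective throughput: {required_kbps} kbps")
```

The result is the effective throughput the link must sustain; headroom should then be added for bursty traffic such as software download.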
In addition, extra considerations will be required for the software download link given the high number
of NODE B and the parallel download feature supporting several simultaneous downloads.
MULTIPLE NETWORKS
From a bandwidth perspective, using multiple networks is recommended in the following cases:
- Use of (or plans to use in the future) the centralized backup and restore features, which perform
the backup over the network (multiple networks are mandatory in this case, with a separate network for
BU&R), as a backup can involve transferring hundreds of gigabytes of data over the network.
- In a ROC managing many NEs and with a high number of users, it is recommended
to separate client traffic from NE-facing traffic for improved GUI response.
8.2.
This section gives the OAM bandwidth requirements between a ROC and different NE types.
In general, performance data dominates the bandwidth requirements from the NEs to the
OAM servers. Fault management requires much less bandwidth than performance
management, but is more bursty. Software downloads from the WMS Main Server to the NEs
which support software download are bandwidth intensive. A trade-off needs to be made between
the bandwidth requirements for software download and the response time of a software download,
given that it is not performed on a regular basis.
With the exception of software download or similar bulk provisioning features, most of the
management traffic originates at the NEs and terminates on the OAM servers and therefore the
bandwidth figures given in this section are for those flows in this direction. However, it is
recommended that the provisioning of the bandwidth of the links to the NEs be symmetrical in order to
support software downloads.
Unless noted otherwise, bandwidth requirements listed in this section are for all FCAPS functions and
combined for the ROC.
All bandwidth requirements in this section represent the effective throughput required.
8.2.1 LATENCY
For the links to the NEs it is recommended to keep latency below 500 msec and for the links to the
clients, we recommend keeping latency below 100 msec in order to not impact WMS client
performance.
Firewalls can be responsible for the biggest introduction of latency in some networks. In order to
ensure that firewalls do not degrade latency, it is important to ensure that they are adequately
dimensioned to meet the traffic loads.
The bandwidth requirements listed in this section represent the throughput required for TCP
communications. Data transfer tests using FTP can be used to validate that a given link meets the
required specifications.
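Such a validation amounts to timing a bulk TCP transfer and converting bytes per second into kbps. The sketch below sends its payload to a local sink socket so the example is self-contained; against a real link you would transfer a file to the remote server (e.g. via FTP) instead:

```python
# Sketch: estimate effective TCP throughput by timing a bulk transfer,
# as suggested for FTP-based link validation. Loopback is used here only
# to keep the example self-contained; real tests run across the link.

import socket
import threading
import time

PAYLOAD = b"x" * (2 * 1024 * 1024)  # 2 MiB test payload

def sink(server_sock):
    conn, _ = server_sock.accept()
    while conn.recv(65536):   # drain until the sender closes
        pass
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=sink, args=(srv,), daemon=True).start()

cli = socket.socket()
cli.connect(srv.getsockname())
start = time.time()
cli.sendall(PAYLOAD)
cli.close()
elapsed = time.time() - start

kbps = (len(PAYLOAD) * 8 / 1000) / max(elapsed, 1e-9)
print(f"Effective throughput: {kbps:.0f} kbps")
```

The measured figure can then be compared against the bandwidth requirement for the flows expected on that link.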
8.2.2.1
CALL TRACE
The bandwidth requirements for call trace can vary greatly based on the usage of this functionality.
The configuration of the NEs (i.e. the number of TMUs), the load on the UMTS access network
(number of active subscribers, cells, etc.) and the type of template used when invoking call trace all
have a major impact on the quantity of data generated by the UA NEs from the call trace feature. The
numbers given below are worst-case examples of the amount of data generated by call trace. Should
the amount of data generated from CT exceed what can be transferred over the link from the
NEs to the WMS Main Server, the data will accumulate on the NEs. Given that there is a limit to how
much CT data can be accumulated on the NEs, as of UA4.2/OAM5.0 there is a protection mechanism
on the NEs and the OAM server to prevent overflow (CT data may be discarded, however).
Call trace type    Maximum Bandwidth Requirement (kbps)
1 CTg or OTCell    1000
1 CTa or CTb       100
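The accumulation behaviour described above can be modelled with a small sketch. All rates and the overflow cap below are illustrative figures, not values from the specification:

```python
# Sketch of call-trace data accumulation on an NE when the generation
# rate exceeds the link rate, with an overflow cap beyond which CT data
# is discarded (modelling the UA4.2/OAM5.0 protection mechanism).
# All figures are illustrative.

def backlog_after(seconds, gen_kbps, link_kbps, cap_kbits):
    backlog = 0.0
    discarded = 0.0
    for _ in range(seconds):
        backlog += gen_kbps                  # data produced this second
        backlog -= min(backlog, link_kbps)   # data drained over the link
        if backlog > cap_kbits:              # protection: discard the excess
            discarded += backlog - cap_kbits
            backlog = cap_kbits
    return backlog, discarded

backlog, discarded = backlog_after(60, gen_kbps=1000, link_kbps=600,
                                   cap_kbits=20000.0)
print(backlog, discarded)
```

With these figures the backlog grows by 400 kbit per second until the cap is reached, after which the excess is discarded, mirroring the worst-case behaviour the text warns about.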
Where:
FDDCells = Number of FDDCells managed by this RNC
RncNei = Number of Neighboring RNCs to this RNC
DT = allocated time for a download of a RNC_CN in seconds.
Examples of the application of this equation with DT=150 seconds are given in the table below:
FDDCells   Neighbouring RNCs   BW Requirement (kbps)
100        3                   83
300        4                   228
300        8                   241
600        5                   443
600        10                  459
900        8                   665
900        16                  691
1200       10                  884
1200       20                  916
files would only be about 15% longer than that predicted by the DT variable used in the
previous equation (which covered only the RNC CN observation files).
The same considerations apply to the download of the Node B observation files. The bandwidth
requirements for the download of the Node B observation files are already small, given that they are
normally distributed over the hour and that an algorithm is used to spread the downloads over multiple
RNCs. Given that the download of the RNC CN observation file only takes a few minutes out of every
15 minutes, there is unused bandwidth during the rest of the time to allow for a rapid download of
these files.
Therefore, the total bandwidth per RNC, including the CN, IN and the POC, should be equal to the
bandwidth required to support the download of the RNC CN observation counters, plus optionally
the bandwidth required to support the call trace functionality.
The following rule recommends a minimum limit for the amount of bandwidth that should be available
on the link from the RNC to the OAM:
Server model        Node-B   RNC
SF E4900 - 12 CPU   48       6
SF4800 - 12 CPU     24       3
SF E4900 - 8 CPU    32       4
SF4800 - 8 CPU      16       2
SF V890             16       2
SF V880             8        1
SF V250/N240        4        1
Note: Please refer to the RNC Product Engineering Guide UMT/IRC/APP/13736 for RNC configuration information.
Note that although it is possible to simultaneously download software to multiple Node Bs and RNCs, it
is recommended, in order to minimize the time required for software download operations across the entire
network, to spread the Node B download operations across different RNCs. Software download
to multiple Node Bs associated with the same RNC, simultaneously with an RNC software download,
represents the worst case.
The following table provides the typical SysLog message characteristics per NODE B.
SysLog characteristics
110
214
526
Based on these average values, the additional workload on a WMS Large server E4900 (8 or 12
CPU) is negligible. It does not affect WMS performance in terms of disk usage, average network load
and disk I/O.
8.3.
13: The list of 939x Node B models is: 9391, 9392, 9393 and 9396.
For NPO Client bandwidth requirements, refer to section 11.10 in the NPO Chapter.
WQA Client is a web-based client and requires very low bandwidth (128 kbps) per client.
RFO is an offline call trace tool whose bandwidth needs depend on the number of call
trace files downloaded and the frequency of download.
WPS is an offline configuration tool whose bandwidth needs depend on the number of
work orders, cmXML imports/exports, etc. performed daily by the user.
8.4.
The table below lists bandwidth requirements from the WMS Main Server to the Fault and Performance
OSS. The bandwidth requirements for the Performance OSS cover the transfer of XML files in gzip
format per Network Element.
9. SECURITY
Several security topics are covered throughout the WMS engineering guide. The intent of this security
section is to summarize these topics and to cover other miscellaneous security topics related to WMS.
This section also points to other documents containing security information related to WMS.
9.1. OS HARDENING
Operating system hardening scripts are included with the WMS software load. The OS hardening
scripts disable all services that are not used by the software application residing on the server (and
are therefore potentially unsafe).
It is important from a security perspective to ensure that all WMS servers have been hardened.
Furthermore, some OS hardening features are optional, e.g.:
- sendmail can optionally be turned off if the trouble ticketing option is not installed on the Primary Main Server.
- Graphical login support (i.e. X11) can be disabled on screenless servers. From a security perspective, it is recommended to disable X11 on the WMS servers and to always use client workstations/PCs to run any graphical application.
- Oracle security can be increased on the Primary Main Server by enabling the Oracle valid node checking feature. It is recommended to activate this feature.
- Other daemons, such as the router discovery daemon (i.e. in.rdisc), can optionally be turned off on all servers.
It is recommended that all these optional OS hardening features be activated where possible.
Please see "WMS Security - NN-10300-031", for procedures and detailed information on operating
system hardening.
9.2. AUDIT TRAIL
On the WMS servers, several events are logged as part of the "audit trail" functionality that is provided
through the Solaris SunShield Basic Security Module (BSM). Events audited include creating/deleting
users, login/logout attempts, FTP sessions, switch-user commands, etc.
All logged events are stored locally in audit log files. A subset of these events is sent to the
centralized security audit log (CSAL) mechanism located on the WMS Main Server, and some of the data
in the audit log files is sent to the Logging Service.
The CSAL system is a service in the Network Services Platform that manages the network audit log
files. You can use the CSAL system to view the data in the flat text files.
The Logging Service enables applications or system components to create and manage logs. It also
provides central access to security logs and events.
From a security perspective the audit log files as well as the CSAL data should be properly stored to
tape. The historical data archive & retrieval feature can be used to backup the audit log files.
To avoid filling up the disks to 100% with audit log files, there is an automatic purging mechanism in
place which deletes old audit log files when the disk capacity hits a certain threshold. Therefore, to
prevent the potential loss of audit log files, the archive and retrieval backup must be executed at the
appropriate frequency. It is recommended that the historical data archive be done daily.
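The purging mechanism can be illustrated with the sketch below; the threshold value, directory layout and oldest-first deletion policy are assumptions for illustration, not the product's actual implementation:

```python
import os
import shutil

def purge_oldest_logs(log_dir: str, threshold: float = 0.90) -> None:
    """Delete the oldest files in log_dir while the filesystem holding it
    is fuller than `threshold` (fraction of total capacity)."""
    usage = shutil.disk_usage(log_dir)
    while usage.used / usage.total > threshold:
        logs = sorted(os.listdir(log_dir),
                      key=lambda f: os.path.getmtime(os.path.join(log_dir, f)))
        if not logs:
            break  # nothing left to purge
        os.remove(os.path.join(log_dir, logs[0]))
        usage = shutil.disk_usage(log_dir)
```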
Please see "WMS Security - NN-10300-031", Security Fundamentals and Implementing Security
Chapters for procedures and detailed information on audit trail.
9.3. ACCESS CONTROL
WMS allows different levels of access control to be set for the different users of the solution. It is possible
to restrict or allow access to certain features, tools and operations for specific groups of users.
It is recommended to create a different user role for each user function, e.g. one role for
directors with viewing privileges only, another group for operators of certain regions of the network,
another group for different NE types/domains (i.e. Access), and so on. Each user is allowed to access
only the resources that belong to his/her user group and that are needed to perform his/her functions. This is a
recommended security practice.
E.g. WMS is used to manage an end-to-end wireless network spanning an entire country. Different
OAM user groups can be configured for different functions or different regions of the network.
Each group is configured to have access only to the resources pertaining to that group.
Groups can be divided so that, say, a Supervisors group is allowed to view alarms, a Technicians
group has access to alarms and configuration tools, a Support group has access to the Network
Elements for troubleshooting, and so on.
Please see "WMS Security User Configuration and Management NN-10300-074", for procedures
and detailed information on access control.
9.4.
The WMS servers communicate with the managed NEs using, in most cases, the management port
of these NEs. It is good security practice to enable access control on these NEs to restrict connections
to the management port to only the known hosts/networks that need to communicate with it,
such as the WMS servers. It is recommended to utilize this security feature on all
NEs that support ACLs for the management port.
Currently Network Elements based on Multi Service Switch 7400, 15000, and 20000 (previously
Passport) support this functionality.
9.5.
Account lockout
WMS allows an account to be locked out after a configurable number of failed login attempts.
9.6.
The IP Discovery tool is part of Multiservice Data Manager, which is available on the WMS
Main Server. The IP Discovery tool is used to add/delete NEs from the NSP GUI. By default, no password
authentication is required to launch this tool, aside from the NSP access control restrictions that are
configured. This means that any user with access to the MDM toolset is able to launch and use the
IP Discovery tool, which might not be the desired behaviour.
It is recommended to enable password authentication of the IP Discovery tool. For related procedures,
please refer to the Nortel Multiservice Data Manager Fault Management tools - 241-6001-011.
9.7.
Centralized WMS User Administration reduces security vulnerabilities by providing OAM user
authentication, authorization privileges and administration across the supported WMS applications.
All users, including NE users, are able to authenticate against the centralized WMS user directory. For
NE users, RADIUS authentication is used with the NEs (supported on the RNC and on the xCCM boards of the
Node B), and the WMS embedded RADIUS server authenticates the access request against the
WMS user directory.
Please see "Nortel Multiservice Data Manager Security Fundamentals - NN10400-065" for
information about centralized OAM authentication and RADIUS.
9.8.
This allows the operator to identify the list of currently logged-in active users and gives the
administrator the ability to lock out or force out users.
9.9. SSL
WMS supports X.509 certificates in Privacy Enhanced Mail (PEM) format for Secure Sockets Layer
(SSL). SSL secures the authentication channels of the network between the clients and the servers
(Primary and Secondary main servers).
WMS supports the maximum level of security with regards to key encryption methods allowed by any
Web browser (1024 bits).
Please see "WMS Security NN-10300-031", for procedures on enabling SSL on WMS.
Engineering Note: SSL
SSL does not secure all the network traffic between the client and the servers; it secures only the
authentication channels between the clients and the servers. For complete security of
the packets between client and server, IPSec is recommended
(the Alcatel-Lucent Brick 150 can be used as a secure VPN IPSec gateway).
9.10. RADIUS/IPSEC
As an option, the external authentication module provides authentication of the WMS
operators against an external authentication server (provided by the customer) using the Radius protocol.
Support of IPSEC between WMS and NEs secures the OAM traffic between the NEs and WMS.
The table below lists the NEs that support Radius and/or IPSEC.
NE                     Radius   IPSEC
RNC                    X        X
Node B 14              X        -
Node B (Model: 939x)   X        X
MSS 74X0
IPSec connections for OAM communication are supported between the primary and secondary Main
servers and Multiservice Switch based network elements such as the RNC. IPSec provides
confidentiality and integrity of the OAM messaging, credentials are encrypted, and data origin
authentication is provided by a shared secret key.
The main objective for implementing IPSec is to protect ftp, telnet, FMIP, NTP, and RADIUS services.
ICMP and ftpdata are not encrypted since the data is not sensitive, so two bypass policies are created
for those two services. All other traffic is discarded.
Refer to the following restrictions, limitations, and supported configuration:
-
IKE (Internet Key Exchange) was introduced in OAM5.1. It is a standardized protocol used to automate
the deployment of IPSEC SAs and keys. IKE is based on a framework protocol called ISAKMP and
implements semantics from the Oakley key exchange; IKE is therefore also known as
ISAKMP/Oakley.
The IKE protocol has two phases:
- Phase 1 establishes a secure channel between the two peers,
- Phase 2 negotiates IPSEC SAs through the secure channel established in Phase 1.
Please see "WMS Security - NN-10300-031" for procedures and detailed
information on Radius and IPSEC configuration.
14
9.11. SSH
This section provides some basic security considerations when using SSH.
All WMS servers are installed with Solaris SSH.
Protocol Version
WMS supports only SSHv2, as it is the most secure version.
Access Control
For added security, access allow/deny lists should be created to filter incoming connections to the ssh
daemon. This administrative task can be done by creating the following access allow/deny files (these
files should be owned and writable only by the root user):
"/etc/hosts.allow" and "/etc/hosts.deny"
The syntax of these files is as follows:
#> more /etc/hosts.allow
sshd: 50.0.0.0/255.0.0.0, 10.10.10.0/255.255.255.0
#> more /etc/hosts.deny
sshd:ALL
In the example above, the SSH daemon will only accept client connections from SSH clients on the
50.0.0.0 and 10.10.10.0 networks only. All other connection requests will be denied.
Logging
As a good security practice, it is recommended to use the syslog daemon to capture SSH daemon
logs and to monitor these logs periodically for any suspicious activities. To enable this feature, the
following operations need to be performed by an administrator on each of the WMS servers:
Edit the /etc/syslog.conf file to include the following entry:
auth.info <tab> /var/log/authlog (It is important to use a tab and not spaces!)
Also, the "/etc/ssh/sshd_config" file must contain the following lines:
SyslogFacility AUTH
LogLevel INFO
E.g. Using a UNIX client, an administrator needs to launch an application on one of the WMS servers
and have the display of that application sent back to the UNIX Client.
It is possible to perform this operation using the X11 forwarding option of ssh by entering the following
command from the command prompt of the UNIX client:
UNIX Client#> ssh -X <Main Server IP@> <application>
<application> can be any application using a GUI such as xterm, xclock or other.
9.12. SNMP
Several components of WMS are managed using the SNMP protocol. See below for SNMP versions
supported:
MDM 16.1: SNMP V1, V2
SMC 3.6.1: SNMP V1, V2, V2 USEC entities (i.e. User-based Security Model)
Security Recommendation (Community Strings)
When the WMS servers are installed or when new devices are integrated with the Main Server, it is
important to change the community strings to values other than the defaults. Leaving the default
community strings unchanged on devices in a network can allow a malicious user to gain access to
sensitive information about the configuration of the devices in that network.
9.13. IP FILTERING
This feature implements IP Filter within Solaris 10. Support of IP Filter gives the WMS servers a
host-based IP filtering capability within the server solution. This allows flexibility in
provisioning the desired firewall policy rules to ensure that the WMS servers comply with
customer security firewall policies.
IP Filtering provides a first level of security directly on the server.
The integrated support of WMS-specific IP Filter rules can also help further harden the deployed WMS
server solutions by restricting the accessibility of any weak services still
used within the WMS solution, where applicable, as well as restricting the visibility of any local ports
and services used within the WMS servers to external systems.
This feature provides a set of default IP Filter firewall rules that help further harden
the WMS servers by ensuring that any local ports used within the WMS servers are not visible to
external systems.
This feature also allows customers and/or Alcatel-Lucent personnel to make changes to IP Filter rules,
if and as desired, and ensure that any such changes are reflected after upgrades and installations
(over and above the default set of rules).
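As an illustration of the kind of host-based policy this feature enables, a minimal IP Filter rule set might look as follows; the addresses and the choice of permitted service are assumptions for illustration, and the actual default rules shipped with WMS are defined by the product:

```
# /etc/ipf/ipf.conf (illustrative sketch only)
block in all                    # default deny for inbound traffic
pass in quick proto tcp from 10.10.10.0/24 to any port = 22 keep state  # SSH from the admin LAN
pass out quick all keep state   # allow outbound, keep state for replies
```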
9.14. FIREWALL
As detailed in the Network Architecture section, it is recommended to deploy a VPN Router firewall
between the NOC and the ROC (i.e. between client and servers).
The recommended firewall is the VPN Firewall Brick 150. It is proposed as firewall between the OAM
LAN and the WAN access to the Network Elements and the access to another OAM ROC.
If a Client PC on the network is accessed by multiple users or a machine gets its IP address
dynamically, it might be necessary to allow access to a particular user only and not to the machine
itself.
Hardware and software requirements for the Brick 150 are provided in WMS Solution Hardware
Specifications section of this document.
10. TIME SYNCHRONIZATION
Proper time synchronization is useful and should be considered mandatory in order to adequately
support the requirements for accounting/billing CDR's, network and fault management, efficient
troubleshooting & support, security and audit logs, and performance counter correlation. Network Time
Protocol (NTP) is the main protocol used in Alcatel-Lucent Wireless OAM network to synchronize the
time of day (TOD) on servers and the NEs together.
10.2. COMPATIBILITIES
NTP version 3 should be deployed as part of the UMTS OAM solution. NTP V3 (RFC 1305) is the
most popular version (and the default for most devices).
Implementation of NTP within the UMTS network is straightforward since support for NTP
already exists on the NEs and servers, including the RNC. All Solaris-based OAM servers also support
NTP, such as the WMS Main Servers, the NPO server, the Server of Clients and the Unix Clients.
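On a Solaris-based OAM server or client, the NTP associations towards the Main Servers are declared in /etc/inet/ntp.conf; a minimal sketch follows (the IP addresses are placeholders):

```
# /etc/inet/ntp.conf (sketch)
server 10.0.0.1 prefer     # Primary Main Server, preferred NTP source
server 10.0.0.2            # Secondary Main Server, backup source
driftfile /var/ntp/ntp.drift
```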
Figure: NTP time distribution. A public time server (Internet) provides NTP to the Primary and Secondary Main Servers in the ROC; an intermediary (secured) NTP server can be used as backup. The NEs and other OAM servers use the Main Servers as preferred and secondary NTP sources; other ROCs follow the same scheme.
accumulated to stratum-1 source). Note that this is the accuracy of the time at the System/OS level
(there can actually be some extra internal delays on a server or a NE in associating a time stamp on
an event).
This estimate is based on the fact that the actual OAM network design follows two standard
engineering guidelines which ensure optimal NTP accuracy: a) symmetrical transport times between
server and client, and b) avoidance of sustained congestion conditions.
Note on time accuracy convergence - after initially starting the NTP processes, it may take several
minutes, or even up to an hour to adjust a system's time to the ultimate degree of accuracy. To avoid
long stabilization times, one could do an initial manual adjustment to the local clock before starting the
NTP processes.
UMTS access network will create inconsistent time stamp information and will complicate network
management.
Engineering Note: daylight saving time
There is a known issue with the daylight saving time change on the RNC. If a time
synchronization discrepancy materializes between the RNC and the OAM, some alarms, such as bad data
file alarms related to counters, may be generated via the WMS GUI due to a time stamp inconsistency.
These alarms must be cleared manually.
The WMS Client must be set to the same time zone as the OAM servers.
The consequence of choosing this alternate strategy is that correlating time information from
nodes in different time zones will not be as straightforward as when they are all in the same
time zone.
11. NPO
11.1. OVERVIEW
This section describes the NPO (Network Performance Optimizer), a performance management
reporting application for WMS.
NPO is an option that offers a full range of multi-standard QoS monitoring and radio network
optimization facilities. It requires a dedicated server in the LAN and the installation of the NPO client
application running on a PC.
Figure: NPO architecture. Counter files and network configuration files are transferred by FTP from the WMS Main servers to the Oracle-based NPO server, together with the metadata definition; the NPO client runs on a PC.
NPO Packaging   Network Size in Cells/Node B   Maximum Number of RNCs
Small           1400/500                       15
Medium          4500/1500                      27
Large           9000/3000                      38
X-Large         18000/6000                     15
The complete description and characteristics of each server model are available in the hardware
specifications section 6.2.
NPO Cluster was introduced in NPO Release M2.3 (OAM 5.2) with one Master and up to one Slave
server. In M3.0 (OAM6.0), the cluster configuration allows a maximum of one Master and
two Slave servers. There is no cluster configuration in releases prior to M2.3.
NPO Cluster allows multiple NPO servers to group together in order to improve computing
performance and increase supported capacity of cells.
NPO cluster relies on Oracle Real Application Cluster (RAC) solution which allows multiple servers to
run the Oracle RDBMS software simultaneously while accessing a single database.
11.4.2 ARCHITECTURE
The NPO cluster is composed of a master node and one or more slave nodes. Only the master node
communicates with the NPO clients. The slave nodes are in charge of providing computing performance
to ensure file loading and data processing activities.
Between the NPO cluster nodes, the following flows are implemented:
- the clustering activity flow (Oracle RAC flow), which ensures that tasks or computations can be spread over the various nodes,
- the file sharing activity flow (NFS and Rsync), which allows NPO files to be shared,
- the time synchronization flow (NTP), which is mandatory for clustering,
- the IIOP (Internet Inter-ORB Protocol) flow, used mainly for process monitoring and logging activities.
Figure below shows the implementation of the NPO Cluster in the UTRAN OAM solution.
Figure: NPO Cluster implementation. NPO clients reach the cluster through a floating IP address. The master and slave nodes expose public IP addresses towards the NPO clients and the WMS servers, and use private and virtual IP addresses for the internal cluster flows.
In this case, a failover procedure can be applied so that the slave node is re-declared as the master
node and the LDAP database (used for the list of users and their access rights) is automatically
exported to the new master node. The slave node detects the failure and takes over the floating IP
address, thus not impacting access for the NPO Clients.
Once the Master server is recovered, consolidation of data restarts with minimum loss of data and the
server runs with degraded performance till all the data is recovered and consolidated.
Slave node crashes
-
Once the Slave server is recovered, consolidation of data restarts with minimum loss of data and the
server runs with degraded performance till all the data is recovered and consolidated.
Backup and Restore
In case of cluster configuration, the backup and restore operations can be performed only with a
centralized backup and restore solution (e.g.: LEGATO) and must be done only on the master node.
OSB (Oracle Secure Backup) backup and restore on cluster configuration is not supported.
Local Tape Backup and Restore (OSB, Oracle Secure Backup, to tape drives): this solution is
recommended for small and medium networks (not applicable to NPO in cluster configuration) and
applies to the NPO essential data only.
Centralized Backup & Restore: this solution covers the two levels of data and is used to interact
with any 3rd-party backup and restore infrastructure. The purpose of the centralized method is to
provide generic interfaces to be used by any 3rd party: the 3rd-party agent interacts on one
hand with the Media Manager for the management of the Oracle and LDAP databases, and on
the other hand with the management of the system data separately, based on the system
catalogue 15 description (e.g. usually through a dedicated policy in the 3rd-party engine).
The solution already supports the LEGATO 16 centralized solution.
For other 3rd parties (e.g. HP Data Protector or VERITAS), the Media Manager needs to be configured
accordingly.
Figure: centralized backup architecture. The Media Manager (MM) and the 3rd-party agent on the NPO server interact with the 3rd-party server of the centralized backup infrastructure.
11.8.2 POLICY
A backup and restore policy consists in producing the best NPO image in order to restore the
system in any disaster scenario within the best time. In case of an Oracle database
crash or anomaly, restoring the NPO essential data is enough. In case of a software crash, the
complete NPO image (essential and system data) becomes useful to avoid re-installing the whole
NPO application.
Engineering recommendation: Operational Effectiveness
It is recommended to use the centralized backup solution to allow quicker recovery of the NPO
system. By considering all the NPO data, including the system part, it prevents re-installation of
the NPO application: the system and essential data are simply restored on the machine.
15: The catalogue describing the backup of the NPO system is described in 9953 MP / NPO Platform
Administration Guide - 3BK 21315 AAAA PCZZA.
16: If LEGATO is chosen, the LEGATO agent can be installed and configured as part of the NPO
installation procedure. The system data is not part of the default installation procedure; it still needs to
be configured based on the system catalogue description (refer to 9953 MP / NPO Platform
Administration Guide - 3BK 21315 AAAA PCZZA).
However, for more critical disaster scenarios such as a server or OS (operating system) crash, a full
re-installation of the Unix OS and NPO application becomes necessary, followed by the restoration of the
Oracle and LDAP databases.
When performing a backup, there is no interruption to NPO. Backups can be performed manually or
scheduled for automatic execution. Backup and restore of the Oracle database is performed by the RMAN
(Recovery Manager) utility. After a successful backup, RMAN clears the archive log, which avoids
filling up disk space.
The backup can be launched with the incremental mode or with the full mode. The Oracle database
runs in the ARCHIVELOG Mode in order to allow backup of data. In this mode, the Oracle database
constantly produces archive logs, which are needed for online backups and point-in-time recoveries.
The tape drives supported per server model (SFV490 2CPU, SFV490 4CPU, SE M4000 2CPU and
SE M4000 4CPU) are combinations of the DAT72, SDLT600 and LT04H drives.
The following table provides the supported tape drive types with their native capacity and throughput:
Tape drive type   Capacity, throughput
DAT 72            36 GB, 3 MB/s
SDLT 600          300 GB, 36 MB/s
LTO4H             800 GB, 120 MB/s
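From the native throughput figures above, a rough lower bound on the time needed to stream a backup to tape can be derived (this sketch ignores compression, tape changes and protocol overhead):

```python
# Native tape throughput (MB/s), from the table above.
DRIVE_MBPS = {"DAT 72": 3, "SDLT 600": 36, "LTO4H": 120}

def backup_hours(data_gb: float, drive: str) -> float:
    """Hours needed to stream data_gb gigabytes at the drive's native rate."""
    return data_gb * 1024 / DRIVE_MBPS[drive] / 3600

# Example: a 300 GB backup to an SDLT 600 drive.
print(round(backup_hours(300, "SDLT 600"), 2))  # 2.37
```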
This method performs the backup and restore of the NPO Oracle Database only which includes
the LDAP user management database.
DAT 72, SDLT600 and LTO4H are the tape drives supported by the OSB method. Other types of
tape drives are not supported.
OSB backup and restore on cluster configuration is not supported.
For more details on the proposed tape drive equipment and its compatibility with each server and domain,
please refer to the backup and restore section.
For more information on the procedure to perform backup and restore of MS-NPO servers, please
refer to document 9953 MP / MS-NPO Platform Administration Guide - 3BK 21315 AAAA PCZZA
Backup method support per server model (OSB applies to the Oracle and LDAP data; per the restriction above, OSB is not supported on the cluster configuration):

Server model              OSB (Oracle/LDAP)   Legato   Other 3rd party
SE T5220                  YES                 YES      YES
SFV490 2CPU               YES                 YES      YES
SFV490 4CPU               YES                 YES      YES
SE M4000 2CPU             YES                 YES      YES
SE M4000 4CPU             YES                 YES      YES
Cluster (n*SFV490 4CPU)   NO                  YES      YES
Engineering Note: Centralized Backup and Restore uses the Media Manager (MM) to support
integration with other centralized backup solutions such as HP Data Protector.
The MM must be configured, and the required expertise and support relative to the 3rd party must be
available. The catalogue describing the complete list of system data to be backed up, and the
integration points with the MM, are described in 9953 MP / NPO Platform Administration Guide - 3BK 21315 AAAA PCZZA.
For more details on the procedure to perform backup and restore of NPO servers, please refer to
document 9953 MP / NPO Platform Administration Guide - 3BK 21315 AAAA PCZZA
For MS-NPO, the server can be deployed in a different data centre from the WMS server. In such
conditions, it communicates through ATM/WAN interfaces to remote OAM networks.
The minimum throughput requirements with regard to data transfer between peers through a WAN
with IP routers must be calculated according to the volume of data and the GPO (General Permanent
Observation) period.
Please refer to annexes section 16.1 to determine the minimum throughput requirements for the
transfer of Observation files through a WAN.
Engineering Note: Interface and bandwidth with a centralized Backup and Restore infrastructure
For large configurations using a centralized Backup and Restore infrastructure, it is highly
recommended to use a dedicated Gigabit Ethernet link on the second interface (OAM-BR).
(Refer to the 9953 MP / MS-NPO Platform Administration Guide - 3BK 21315 AAAA PCZZA for the
configuration of a dedicated Gb/s network interface for backup and restore flows.)
KPI:
- Overnight activity (Full Day Re-import): every night NPO re-examines all the performance data from the previous day and imports any missing counters. This KPI is defined assuming less than 25% of the data from the previous day was missing. Target: < 4 hours.
- Catch-up mode: after an outage, the rate at which NPO for WMS can recuperate. Maximum: 25 minutes.
12. MS PORTAL
The dimensioning model and capacity figures provided in this section give the hardware models on
which the Multi Standard product is qualified.
Engineering Note: Common rules for 9959 MS-NPO & 9359 NPO
9959 MS-NPO is the performance reporting application for a customer's mixed 2G/3G radio-access
network. The 9359 NPO solution is proposed for customers who want to manage 3G networks only.
9959 MS-NPO and 9359 NPO rely on the same product architecture; the previous section gives more
details of the solution architecture.
Except for rules specific to the multi-standard context, the engineering rules and considerations
described in the previous section for 9359 NPO apply to 9959 MS-NPO.
12.1. OVERVIEW
MS-Portal is a Multi Standard OAM Portal able to manage networks of the same or different
technologies (2G/3G). It is composed of the following optional software applications running on the
platform (based on SUN server):
-
9953 MS-SUP server offering a common supervision window for the 2G and 3G alarms
9959 MS-NPO server offering a common follow-up of the QoS (counters, indicators, reports) with
checks, diagnosis and tuning facilities.
Figure 32: MS-PORTAL architecture. The MS-Portal client accesses the MS-SUP and MS-NPO applications, which interface with the 2G OMC-R (A1353-RA) for the GSM network and with the 3G OMC-R (9353 WMS) for the UMTS network.
Engineering Rule: Capacity model for MS-NPO in a multi-standard 2G/3G scenario
The total number of reference cells used to determine the right MS-NPO model is computed as follows:
[0.75 * nb of 2G cells + 1 * nb of 3G cells]
The maximum number of OMC servers is limited to 5 (up to 5 data sources can be configured
within an MS-NPO). A ROC in 3G may contain two OMCs, i.e. 2 data sources. Therefore,
when considering a ROC configuration, the total capacity should be taken into account when
choosing the right MS-NPO model.
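The rule above can be applied directly; for example (the network sizes are illustrative):

```python
def reference_cells(cells_2g: int, cells_3g: int) -> float:
    """Total number of 'reference cells' used to size an MS-NPO server:
    0.75 * (number of 2G cells) + 1 * (number of 3G cells)."""
    return 0.75 * cells_2g + 1.0 * cells_3g

# Example: a mixed network with 8000 GSM cells and 6000 UMTS cells.
print(reference_cells(8000, 6000))  # 12000.0
```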
Network Capacity

Hardware | Maximum Number of Users | Maximum Number of Cells (2G+3G)
SUN NETRA T5440 - 2CPU (12 x 146 GB Internal Disk Drives) | 54 | 27000
SUN ENTERPRISE T5220 - 1CPU (8 x 146 GB Internal Disk Drives) | 32 | 16000
Network Capacity

Hardware | Maximum Number of Users | Maximum Number of Cells (2G+3G) | Maximum Number of 3G Node B | Maximum Number of RNC | Maximum Number of "reference cells"
SUN ENTERPRISE M4000 - 4CPU (2 x 146 GB Internal Disk Drives) | 27 | 24000 | 6000 | 60 | 18000
SUN ENTERPRISE M4000 - 2CPU (2 x 146 GB Internal Disk Drives) | 18 | 12000 | 3000 | 15 | 9000
SUN ENTERPRISE T5220 - 1CPU (8 x 146 GB Internal Disk Drives) | | 1860 | 500 | | 1400
Engineering Recommendation: Routing Switch and bandwidth considerations through the WAN
The MS-NPO server can be deployed in a different data centre from the WMS server. In such conditions, it communicates through ATM/WAN interfaces with remote OAM networks.
The minimum throughput requirements for data transfer between peers through a WAN with IP routers must be calculated according to the volume of data and the GPO (General Permanent Observation) period.
Please refer to annexes section 16.1 to determine the minimum throughput requirements for the transfer of Observation files through a WAN.
The following table provides the recommended hardware (with an example of hardware type) and the number of simultaneous MS-Portal users that can be supported on it. To support more users than listed in the table, multiple HMI servers need to be deployed in a Citrix farm.
Type: HMI Server
Server name: HP ProLiant DL320 G6
Operating system: Windows 2003 Server Enterprise Edition SP2
Applications: Citrix Presentation Server 4.5 Enterprise Edition, Microsoft Office
CPU:
RAM: 12 GB
Ethernet Ports:
Disks: 1 disk of 160 GB
Supported users: 10
It is recommended to have a minimum of 1 Mbps per-user bandwidth between the HMI and the MS-Portal servers.
It is recommended to have a minimum of 256 kbps per-user bandwidth between the HMI Citrix Server and the Citrix client.
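The two per-user bandwidth rules and the 10-users-per-server figure above can be combined in a minimal sizing sketch (the helper names are illustrative):

```python
import math

USERS_PER_HMI_SERVER = 10    # supported users per HMI server, per the table above
PORTAL_MBPS_PER_USER = 1.0   # minimum HMI <-> MS-Portal servers bandwidth
CITRIX_KBPS_PER_USER = 256   # minimum HMI Citrix Server <-> Citrix client bandwidth

def hmi_farm_sizing(n_users: int):
    """Return (HMI servers in the Citrix farm, Mbps to MS-Portal, Mbps to clients)."""
    servers = math.ceil(n_users / USERS_PER_HMI_SERVER)
    portal_mbps = n_users * PORTAL_MBPS_PER_USER
    citrix_mbps = n_users * CITRIX_KBPS_PER_USER / 1000.0  # kbps -> Mbps
    return servers, portal_mbps, citrix_mbps
```

For example, 25 simultaneous users need a 3-server Citrix farm, 25 Mbps towards the MS-Portal servers and 6.4 Mbps towards the clients.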
If the HMI clients' network must be separated from the MS-Portal server network, two Ethernet interfaces can be used.
For more details on the procedure to install and set up an HMI server, please refer to document Install 9753 OMC, 9953 MS-SUP, NPO HMI Server and Client Using Citrix 4.5 - 3BK 17436 4022 RJZZA.
13. WQA
Figure: WQA data collection (CTn activation on the UTRAN produces XML call trace files, held in an XML storage buffer and collected through the ADI into WQA for matrices computation and performance management; configuration management data is provided as CM XML files from the WMS)
The WQA application is composed of:
- A data collection and transformation layer responsible for collecting the XML trace files and populating the information into a database.
- A database based on Oracle.
- A 3rd party reporting interface which runs reports and delivers them to the clients over a web interface.
In terms of architecture, an instance of the WQA application is hosted on a Windows-based server. A single instance of WQA can support multiple users. WQA users access the WQA GUIs and reporting from separate client platforms (PCs).
Figure: WQA data flow (XML observation files are uploaded from the WMS into temporary download tables, processed by the ETL into the ROLAP data model used for report pre-calculation and reports; weekly full and daily incremental DB dumps can be scheduled and stored on tape by the system administrator, covering the typical case of DB corruption; XML files are archived after loading)
For CTn engineering considerations, please refer to the UTRAN Call Trace section of the WMS Main server Engineering Considerations chapter.
14. RFO
14.1. OVERVIEW
The Alcatel-Lucent RFO is a standalone PC application that replaces the NIMS PrOptima Call Trace functionality in OAM06. It provides a tool for the examination of RF links between the mobile equipment and the radio network.
RFO does this by post-processing call traces from the radio network, decoding and analysing the following data:
-
Because it is standalone, the user can work without being connected to the WMS or the radio network.
To learn more about the RFO, please refer to the document Alcatel-Lucent UMTS RFO Product User
Manual - NN-20500-181.
Call trace data is generated by the WMS using the Call Trace Wizard. Call Trace cannot be initiated by the RFO.
Once the call trace data is generated, it is stored locally on the WMS Main Server.
The user can manually transfer the call trace data, which consists of several files or directories, and import it to the RFO PC's hard disk. Ensure that sufficient bandwidth (as mentioned in the hardware requirements) is available to allow quick transfer of data from the WMS server to the RFO.
Once imported, the RFO processes these physical files and converts them to logical files. It simultaneously parses the physical files, decoding each supported message and storing it in the SQL database which is part of the RFO software.
Physical files imported from WMS servers can then be deleted, as RFO uses the logical files and the data in its SQL database to analyse the data.
15. 5620 NM
This section describes the 5620 Network Management product suite, which manages the Alcatel-Lucent 7670 Routing Switch Platform (RSP) and 7670 Edge Service Extender (ESE); these replace the Passport 7k and 15k in ALU's UMTS network.
The Alcatel-Lucent 5620 Network Manager (NM) is a reliable and scalable network management
solution. It provides network operators with a full range of capabilities on multi-technology networks.
Traffic and service parameters on frame relay, ATM, X.25, SONET/SDH and ISDN links and paths can be configured through a point-and-click graphical user interface. It allows multiple operators to access the same system simultaneously, and thus manage the network from different sites.
The different components of 5620 to manage the 7670 switch are described below.
Standalone NetworkStation
Redundant Pair Database NetworkStation
The Standalone Database NetworkStation is, as its name implies, a standalone platform. With this type of management, there is no database redundancy.
The Redundant Pair Database NetworkStations maintain a synchronised database between the Active and Standby database platforms. The role of the Active Database is to manage and control the network and network management elements by maintaining an up-to-date database. The role of the Standby Database is to constantly access the active database and ensure that its own database is identical to it. As part of the redundancy features, the Active and Standby Database NetworkStations constantly check their network connectivity and visibility, verifying that the Active Database platform is communicating with more than 50% of the network at any given time. If this situation changes and the Standby communicates with more of the network than the Active for a specified time period, an activity switch occurs between the two stations.
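The activity-switch condition described above can be sketched as follows. This is a simplification: the function names, the default hold time and the exact comparison are assumptions for illustration, not the 5620 NM implementation:

```python
def active_has_majority(active_visible: int, total_nodes: int) -> bool:
    # The Active Database must communicate with more than 50% of the network.
    return active_visible > total_nodes / 2

def should_activity_switch(active_visible: int, standby_visible: int,
                           total_nodes: int, condition_duration_s: float,
                           hold_time_s: float = 60.0) -> bool:
    # An activity switch occurs when the Standby sees more of the network
    # than the Active and the situation has persisted for the specified
    # time period (hold_time_s is an assumed, configurable value).
    standby_sees_more = standby_visible > active_visible
    return standby_sees_more and condition_duration_s >= hold_time_s
```

The hold time guards against a switch being triggered by a transient visibility dip on the Active side.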
Both types cannot operate at the same time in the same network domain.
The 5620 NM Release 8.0 runs on Solaris 10 and uses Informix as its database system.
- Integrated (default)
- Distributed
To collect statistics for up to 100 000 paths, install the Integrated Statistics Collector. In an integrated
configuration, the Aggregator and Collector software are installed on a 5620 NM Database
NetworkStation.
In a distributed configuration, the Collector/Aggregator and Collector NetworkStations run as separate
products from the 5620 NM.
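The choice between the two configurations can be reduced to a one-line rule. The 100 000-path threshold is the only figure given above; treating anything larger as requiring the distributed configuration is an inference:

```python
# Choose the 5620 NM statistics collection configuration based on the
# number of paths to collect statistics for.
def collector_configuration(n_paths: int) -> str:
    if n_paths <= 100_000:
        # Aggregator and Collector installed on a 5620 NM Database NetworkStation.
        return "integrated"
    # Collector/Aggregator and Collector NetworkStations run as separate products.
    return "distributed"
```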
Figure: 5620 NM deployment (active and standby 5620 NM Database NetworkStations, a CPSS Router NetworkStation, a Statistics Collector NetworkStation, a CMIP/CORBA server, an Operator Server and operator positions connected over the IP network to the 7670 RSP and 7670 ESE)
These requirements are appropriate for test/trial networks with up to two 7670 switches and 5 user sessions. Such a server can perform the following functions simultaneously:
-
It is mandatory to use a separate Sun server for the CPSS Router NetworkStation in a network that has more than 24 CPSS links terminating at the 5620 NM Database NetworkStation. Please refer to Section "7670 Node Types" to optimize the number of CPSS links to the 5620 NetworkStations. In this case, the minimum platform requirements for a standalone CPSS Router NetworkStation are a Sun server with:
-
The minimum platform requirement for a CMIP/CORBA server (which is mandatory on a separate
server) is a Sun server with:
-
For more information on required hardware to manage the 7670 in your network, please contact your
local Alcatel-Lucent representative.
A logical log backup is an incremental backup, which includes only those changes that have been
made to the database since the last backup. You enable the logical log during installation of the 5620
NM.
When the 5620 NM database is lost or inconsistent, you can restore the database from your backup
directory on your disk or tape. You can use the db_restore script to perform a full system restore. The
script also asks you whether you want to salvage the logical log and perform a restore of all logical log
backups.
For the db_restore script to be successful, the hardware configuration must be the same as it was
when the database backup was performed.
Choosing to salvage the logical log makes the recovery process more complete. The salvage process
backs up the logical log files that were not backed up before the failure occurred. By recovering these
files, the recovery process recovers all changes that were made to the database, up to the point of
failure.
For the 5620 CMIP/CORBA server, it is recommended to back up the MIB to a backup directory so that, if the MIB becomes corrupted, the backup MIB can be used to restore it. The CE_control mib-backup command saves a copy of the MIB in a backup file that can be recovered using the CE_control mib-restore command. The 5620 CORBA/CMIP database must be running to perform the backup or restore procedure.
- control information
- alarm information
- performance information
- configuration status information
- timing information
- routing messages
CPSS messages are delivered by means of address indicators (CPSS addresses). Each element in a network (except the CMIP/CORBA server and Operator Positions) must be assigned a unique address to enable this identification process. The CPSS address is made up of:
- Domain Number: identifies the top level of network messaging subdivision, or domain. Domain numbers can be from 1 to 60. Each domain number must be unique within the network.
- Major Node Number: identifies the node (e.g. 3600, 8230, etc.) that is part of the specified domain. Node numbers can be from 1 to 1023. Each major node number must be unique within the domain.
- Minor Node Number: identifies individual card types (e.g. control, FRE, FRS, DCP, etc.) resident on a node that have addressable capabilities. These cards have individual functions and operate as a node within a node.
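The address constraints above can be captured in a small sketch. The class is illustrative; since no range is stated here for the minor node number, it is not validated:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CpssAddress:
    domain: int       # 1..60, unique within the network
    major_node: int   # 1..1023, unique within the domain
    minor_node: int   # card within the node ("node within a node")

    def __post_init__(self):
        # Enforce the documented ranges at construction time.
        if not 1 <= self.domain <= 60:
            raise ValueError("domain number must be in 1..60")
        if not 1 <= self.major_node <= 1023:
            raise ValueError("major node number must be in 1..1023")
```

Uniqueness of domain numbers across the network, and of major node numbers within a domain, would have to be checked at the network-plan level rather than per address.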
Node Number | Server Type
1023 |
1021 |
769 to 1020 (except 1000, 1001) |
- Stub node: a 7670 that terminates or originates CPSS traffic. Stub nodes do not route CPSS traffic.
- Routing node: a 7670 that can route CPSS traffic. Such nodes connect to more than one node.
- Gateway node: a routing node that handles CPSS communications between the Network Management System (5620) and a CPSS domain. Every domain in the network has at least one routing node that is designated as a gateway. A gateway node has a direct CPSS link to the NMS and routes CPSS packets to the rest of the domain. Gateway links from the gateway nodes must terminate on either a Database or Router NetworkStation.
- Leaf node: a 7670 that can route CPSS traffic only to the single node to which it is linked. The leaf node derives its CPSS address from the node to which it is linked.
Figure: Management split (the WMS provides CM/FM/PM for the RNCs and Node Bs; the 5620 provides FM and reach-through for the 7670 RSP/ESE)
16. ANNEXES
In this section, volumetric information is provided with regard to the sizes of the different data per application. This information is an average value, with some examples of configurations.
It is intended for customer & services teams in case of a volumetric exercise (e.g. data space reservation within the backup & restore infrastructure, manual file management within a given repository or media, etc.).
In this section, the minimum throughput requirements apply to data transfer between applications through a WAN with IP routers, except for remote connection to Network Elements.
Within the Ethernet LAN of the ROC, every client should be connected to an Ethernet switch through a 100/1000 Mbps connection, and every Ethernet port of every server must be connected to an Ethernet switch through a 1000 Mbps connection. This guarantees the best KPI for the data transfer within the LAN.
In the context of a remote connection through a WAN, the throughput information below has to be considered as a minimum to comply with the WMS KPI. It is intended for customer & services teams for the configuration of the transmission nodes through the WAN, to guarantee that the traffic rate operates at the specified level.
For communication with Network Elements, the Routing Switch which provides aggregation of the ATM/WAN interfaces to remote Network Elements must comply with the generic Minimum Throughput Requirements provided in section 8.2.
UA6 observation file sizes:

RNC: 150 kiloBytes | BTS: 4 kiloBytes
RNC: 185 kiloBytes | BTS: 6 kiloBytes
RNC: 680 kiloBytes | BTS: 6 kiloBytes
Figure: Minimum throughput through the WAN (the data centre hosting MS PORTAL, NPO and OSS PM and the data centre hosting the ROC WMS exchange data across a private/public IP backbone; the minimum throughput requirement applies to each WAN access)
The maximum deadline for file availability in NPO, including loading, must in general be under 1/3 of the configured GPO (General Permanent Observation) period.
This 1/3 GPO is an absolute period within which file transfer occurs continuously, including regular polling activity, file parsing, and the loading of data into the NPO Oracle database. The pure file transfer activity usually takes 10% of the 1/3 GPO.
To guarantee the NPO performance with regard to basic recovery scenarios (e.g. missing data, or a loss of connection, which imply managing more data within a same GPO), the quantity of data to be managed by NPO has to be doubled accordingly.
As a consequence, the general Minimum throughput requirement for a nominal NPO usage is defined as follows:
Srnc i = size of the RNC i observation file (in kiloBytes) under a given configuration (e.g. 185 kiloBytes for each RNC with CP3, configured with 100 dNodeB2U BTS)
Nbrnc i = number of RNC i
Sbts i = size of the BTS i observation file under a given configuration (e.g. 6 kiloBytes for each dNodeB2U BTS configured with about 3 cells)
Nbbts i = number of BTS
GPO = the minimum general permanent observation period (in seconds) configured on the BTS Network Elements (e.g. 900 seconds)
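The sizing inputs above can be combined in a short sketch. The exact formula is not reproduced in this extract, so `transfer_window_s` (the share of the GPO available for the transfer) is left as a free parameter rather than derived; the 2x factor is the recovery-scenario doubling stated above:

```python
def observation_volume_kb(rnc_files, bts_files):
    # rnc_files / bts_files: lists of (file size in kiloBytes, count)
    # pairs, i.e. (Srnc_i, Nbrnc_i) and (Sbts_i, Nbbts_i).
    return (sum(size * n for size, n in rnc_files)
            + sum(size * n for size, n in bts_files))

def min_throughput_mbps(volume_kb, transfer_window_s, recovery_margin=2.0):
    # kiloBytes -> bits, spread over the transfer window, with the
    # doubling margin for recovery scenarios applied.
    bits = volume_kb * 1000 * 8
    return recovery_margin * bits / transfer_window_s / 1e6

# Inputs from the worked example below: 30 RNC at 185 kB, 3000 BTS at 6 kB.
volume = observation_volume_kb([(185, 30)], [(6, 3000)])  # 23550 kB per GPO
```

The resulting Mbps figure depends directly on the transfer window chosen, which must be aligned with the 1/3-GPO and 10% rules defined earlier.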
Example for 30 RNC (with CP3 board) and 3000 BTS (15-minute GPO activated), considering that each RNC is configured on average with 100 dNodeB2U BTS (185 kiloBytes): the general Minimum throughput requirement through a remote channel becomes 19 Mbps.
Engineering note: Defining Minimum throughput requirements
The size of the Network Element observation files, and the corresponding average value, is specific to a customer network configuration (RNC card configuration, number of cells per BTS, number of counters activated per RNC, etc.).
It has to be measured before applying the formula above, to determine the Minimum throughput requirements in the best conditions.
16.2. NE SOFTWARE
The scenario for the transfer of NE software files through a WAN applies to remote Software repository management.
NE Software Size in MegaBytes (UA6):

RNC | 1000
BTS (iBTS *) | 50
BTS (OneBTS) | 40
BTS (Pico/Micro) |

(*) iBTS can be iNode-B, Macro Node-B, distributed Node-B, digital 2U Node-B, digital Compact Node-B, RRH (Remote Radio Head) or Radio Compact Node-B.
17.
ABBREVIATIONS
ADI
API  Application Programming Interface
ATM  Asynchronous Transfer Mode
ASCII  American Standard Code for Information Interchange
BB  Building Block
CD  Compact Disk
CMIP  Common Management Information Protocol
CORBA  Common Object Request Broker Architecture
CPSS
CPU  Central Processing Unit
DCN  Data Communication Network
DHCP  Dynamic Host Configuration Protocol
DNS  Domain Name System
IPMP  IP Multipathing
IPSec  Internet Protocol Security
LAN  Local Area Network
MDM
MIB  Management Information Base
NE  Network Element
NEBS  Network Equipment Building System
NOC  Network Operations Centre
NM  Network Manager
NMS  Network Management System
NPO  Network Performance Optimizer
OAM  Operations, Administration and Maintenance
OSS  Operations Support System
RADIUS  Remote Authentication Dial-In User Service
RAMSES
RFO
RNC  Radio Network Controller
ROC
UMTS  Universal Mobile Telecommunications System
UTRAN  UMTS Terrestrial Radio Access Network
VPN  Virtual Private Network
WAN  Wide Area Network
WMS  Wireless Management System
WQA
Document number: UMT/OAM/APP/024291
Document issue: 01.09 / EN
Document status: Standard
Product Release: OAM 6.0
Date: February 2010