Document Number: SEB-08-00-010
Document Status: Draft
Version: 9.0.3
Publication Date: July 18, 2005
Security Status: Nortel Confidential
Document Owner: Randy Tuttle
Document Prime(s): Randy Tuttle
NO NORTEL LIABILITY FOR CERTAIN USES OF ITS PRODUCTS. The information contained in this release relates to certain Nortel products that have been designed by or on behalf of Nortel to conform to applicable Nortel and third-party specifications and requirements, including, for example, NEBS compliance. In addition, such Nortel products are designed to be used for their intended purpose as specified in this documentation. In the event, and to the extent, that such Nortel products are used by the user for a purpose other than their intended purpose, or used with third-party products that do not conform to any such applicable Nortel and third-party specifications and requirements, including, for example, NEBS compliance, Nortel shall have no liability of any kind to the user for any problems which may arise, including, without limitation, any related failure of the Nortel product to perform in accordance with its specification.
© 2005 Nortel. All rights reserved. Published in the United States. NORTEL CONFIDENTIAL: The information contained in this document is the property of Nortel. Except as specifically authorized in writing by Nortel, the holder of this document shall keep the information contained herein confidential, shall protect same in whole or in part from disclosure and dissemination to third parties, and shall use same for evaluation, operation, and maintenance purposes only. Information is subject to change without notice.
Document History
Version 8.0.1, Feb. 24, 2004, Succession System Engineering JB20
Reason for reissue: Preliminary version for SN08.

Version 8.0.2, Mar. 11, 2005, Succession System Engineering JB20
Reason for reissue: Updated version for SN08. Changes include:
- Corrections to various sections from internal review
- Addition of List of figures and List of tables
- Propagation of content from (I)SN07, where appropriate

Version 8.0.3, Mar. 25, 2005, Succession System Engineering JB20
Reason for reissue: Preliminary draft version for (I)SN09. Changes include:
- Updated information for engineered capacity on GWCs
- Additional descriptive information on CS 2000-Compact
- QoS section refined, with solution-specific sections moved to solution or gateway documents

Version 9.0.3, July 18, 2005, Succession System Engineering JB20
Reason for reissue: Update to propagate changes from the (I)SN08 document. Changes include:
- Corrections to information on introduction of MCPN905 GWC cards
- Clarified the use of the "inter-domain" flag, used to determine when to insert media portals for certain calls
Document Family
The following documents comprise the Carrier VoIP Engineering Rules. They are published as System Engineering Bulletins (SEBs), numbered in the form:

SEB-XX-YY-NNN

where XX is the Document Family, YY is the Document Type, and NNN is the Document Number.
XX (Document Family):
    08  Carrier VoIP

YY (Document Type):
    00  Engineering Rules
    01  Engineering Bulletins
    02  Capacity Reports
    03  White Papers

NNN (Document Number):
    000-999
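As an illustration only, the numbering convention above can be decoded mechanically. The helper below is hypothetical (it is not part of any Nortel tooling); its family and type tables are taken directly from this section:

```python
import re

# Code tables from this section (XX = Document Family, YY = Document Type).
FAMILIES = {"08": "Carrier VoIP"}
TYPES = {
    "00": "Engineering Rules",
    "01": "Engineering Bulletins",
    "02": "Capacity Reports",
    "03": "White Papers",
}

def parse_seb_number(doc_id):
    """Split an SEB-XX-YY-NNN identifier into family, type, and number."""
    m = re.fullmatch(r"SEB-(\d{2})-(\d{2})-(\d{3})", doc_id)
    if m is None:
        raise ValueError(f"not a valid SEB document number: {doc_id!r}")
    family, doc_type, number = m.groups()
    return {
        "family": FAMILIES.get(family, f"unknown family {family}"),
        "type": TYPES.get(doc_type, f"unknown type {doc_type}"),
        "number": number,
    }

print(parse_seb_number("SEB-08-00-010"))
# {'family': 'Carrier VoIP', 'type': 'Engineering Rules', 'number': '010'}
```

For example, this document itself, SEB-08-00-010, decodes to the Carrier VoIP family, Engineering Rules type, document number 010.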
SEB-08-00-000  Carrier VoIP Document Roadmap
SEB-08-00-001  Carrier Voice over IP CS-LAN Engineering Rules
SEB-08-00-002  Small Line Gateway and Access Engineering Rules
SEB-08-00-003  V5.2 Gateway Engineering Rules
SEB-08-00-004  Carrier Voice over IP Carrier Hosted Services Engineering Rules
SEB-08-00-005  BASG Series 7000 DSLAM & MG 9000 Multi-service on MSS 15000
SEB-08-00-006  Packet Access - Cable Access Engineering Rules
SEB-08-00-007  Universal Access MG 9000 Engineering Rules
SEB-08-00-008  BGP Engineering for CS-LAN
SEB-08-00-009  Carrier Voice over IP Packet Trunking Engineering Rules
SEB-08-00-010  Carrier Voice over IP Solution Engineering Rules
SEB-08-00-011  Carrier Voice over ATM Solution Engineering Rules
SEB-08-00-012  MG 15000 SVoIP Engineering Guidelines
SEB-08-00-013  Trimodal I/W Engineering Rules
SEB-08-00-015  Carrier Voice over IP Nuera Engineering Rules
SEB-08-00-016  Carrier Voice over IP Keymile Engineering Rules
SEB-08-00-017  Carrier Voice over IP SIP Lines Engineering Rules
SEB-08-00-018  CS2000 Compact Performance Monitoring
SEB-08-00-019  MG 3200 Engineering Rules
SEB-08-00-020  Cisco 6509 Based Carrier VoIP CS-LAN
SEB-08-00-021  Media Gateway 7000/15000/20000 Switched Voice over ATM
SEB-08-00-024  Carrier Voice over AAL2 Engineering Rules

Related documents:

NTP 297-5551-100  Centrex IP Client Manager (CICM) Engineering Guide
NN10314-191  Multimedia Communications Server 5200 Network Deployment and Eng Guide
NN10420-191  Multimedia Communications Server 5200 MAS Network Deployment and Eng Guide
Purpose

The purpose of this document is to provide configuration and engineering rules for deploying Carrier VoIP solutions.
Audience
This document is intended for Nortel internal use and for external customer use only. The audience for this document consists of the following individuals or groups:
- Network Planners/Engineers and Sales Engineers, who are responsible for implementing the guidelines described in this document.
- System Verification teams, who are responsible for verifying that the implementation matches the design.
- Field Engineering groups, who are responsible for implementing the final customer networks using these guidelines.
Terminology
The term CS LAN used in this document refers to the components that provide packet connectivity for CS2000 network devices, traditionally located in the Central Office (CO). The CS LAN also provides network connectivity to the core packet network, either IP or ATM, as well as to other external networks, typically Operations Support Systems (OSS) or other OAM&P networks. Words or phrases used interchangeably in this document to refer to the CS LAN include:
- CS LAN
- CO-LAN
- CS LAN components (or devices)
- Components (or devices) that comprise the CS LAN
- CS2000 CS LAN

Throughout this document the term Media Gateway refers to all voice gateways. These are:
- Trunk Gateways (e.g., MG 15000s)
- Line Gateways (Cable Modems, E-MTAs, IADs)
- IP Phones and soft clients (e.g., i2004 and i2050)

In this document, a trunk gateway is referred to as:
- A PVG, representing a combination of a VSP FP and its assigned ATM/TDM FP cards.
- An MG 7000, representing a number of VSP FPs and ATM/TDM FPs housed in a Multiservice Switch (MSS) 7000 shelf.
- An MG 15000, representing a number of VSP FPs and ATM/TDM FPs housed in an MSS 15000 shelf.
- A fully populated MG 15000, representing a shelf that maximizes capacity (i.e., the number of trunks of a PVG) with regard to the interconnecting IP network.
- A PVG: the terms MG 15000, MG 7000, and PVG may be used interchangeably unless specifically noted.
Words or phrases used interchangeably in this document to refer to the CS 2000 Management Tools Server include:
- CMT, CMT/IEMS
- SSPFS
- Packet Telephony Manager (PTM)
- Succession Element and Subnetwork Manager (SESM)
Table of contents
1.0 Introduction
    1.1 Carrier Voice over IP solution objective
    1.2 Carrier Voice over IP network
        1.2.1 CS-LAN
        1.2.2 Core network
        1.2.3 Media Gateway sites
    1.3 IP network planning and traffic engineering process
2.0
    2.2
    2.3
    2.4
        2.4.3.2 OSPF
            2.4.3.2.1 OSPF area design
            2.4.3.2.2 Route announcement
            2.4.3.2.3 OSPF Domain ID
            2.4.3.2.4 Routing information loop
            2.4.3.2.5 OSPF Down-Bit
            2.4.3.2.6 OSPF VPN route tag
            2.4.3.2.7 Site of Origin
        2.4.3.3 BGP
            2.4.3.3.1 BGP neighbor relation
            2.4.3.3.2 Routing information loop
            2.4.3.3.3 MED
        2.4.3.4 Route announcement
        2.4.4 End-to-End network outage
            2.4.4.1 Within BGP/MPLS VPN backbone
            2.4.4.2 Within Carrier VoIP VPN site
        2.4.5 Redundancy
            2.4.5.1 BGP route aggregation
            2.4.5.2 IP address scheme
            2.4.5.3 BGP Route Refresh and Soft Reconfiguration
            2.4.5.4 BGP route dampening
3.0
4.0
5.0
6.0
    6.2
    6.3
        6.3.1.2 Redundancy
            6.3.1.2.1 Link Aggregation Group
            6.3.1.2.2 Layer-3 routing
        6.3.1.3 IP addressing
        6.3.1.4 Routing rules
            6.3.1.4.1 OSPF
            6.3.1.4.2 Static routes
            6.3.1.4.3 Interaction between OSPF and PSR
        6.3.2 IP over ATM networks (AAL5)
        6.3.3 Multiservice Switch 15000 core network interconnect scalability considerations
    6.4 CS-LAN to WAN MLT link engineering
7.0 IP addressing
    7.1 IP addressing considerations
    7.2 Network view and general strategy for IP addressing
        7.2.1 Private address space
        7.2.2 Strategy summary
            7.2.2.1 Call Processing subnets
            7.2.2.2 The OAM&P subnet
            7.2.2.3 The NOC
            7.2.2.4 Out-of-band management
            7.2.2.5 The Media Gateways
                7.2.2.5.1 Media Gateways in Packet Access - Cable
                7.2.2.5.2 Media Gateways in Packet Access - Integrated Access
                7.2.2.5.3 Media Gateways in Packet Trunking
            7.2.2.6 IP address assignment
            7.2.2.7 Ethernet Routing Switch 8600 addressing
    7.3 Addressing schemes for CS-LAN and NOCs
        7.3.1 CS-LAN overview
        7.3.2 Addressing for the Call Processing subnet
        7.3.3 Addressing for the Media Gateways in the CS-LAN
        7.3.4 Addressing for the OAM&P subnet
        7.3.5 In-band management for the MSS 15000
        7.3.6 Connecting the CS-LAN to the Corporate Network
        7.3.7 NOCs and future expansion
        7.3.8 Out-of-band management, router interfaces and loopback interfaces
8.0
    8.4
    8.5
    8.6
    8.7 Media Portal insertion rules and guidelines
        8.7.1 VoIP gateway locations
        8.7.2 Carrier Hosted solutions
            8.7.2.1 Mandatory use of Media Portal for CICM
            8.7.2.2 When is a Media Portal inserted for CHS
            8.7.2.3 Type of Media Portal ports allocated for CHS
        8.7.3 Inter-Carrier solutions
            8.7.3.1 When should Inter-domain be set
            8.7.3.2 When is a Media Portal inserted for Inter-Carrier
            8.7.3.3 Type of Media Portal ports allocated for Inter-Carrier
            8.7.3.4 Network considerations
9.0
    9.3
    9.4
    9.5
        9.5.2 Session Server Trunks overview
        9.5.3 Physical network connectivity
            9.5.3.1 Protection mechanism
            9.5.3.2 Third party CS-LAN
        9.5.4 IP address requirements
            9.5.4.1 LAN links
            9.5.4.2 PTP links
        9.5.5 Session Server Trunks engineering
            9.5.5.1 Capacity engineering
                9.5.5.1.1 Channel capacity
            9.5.5.2 Scalability
                9.5.5.2.1 IP bandwidth engineering
            9.5.5.3 Signaling delay tolerance
            9.5.5.4 NAT consideration
            9.5.5.5 Network Monitor
            9.5.5.6 NTP considerations
            9.5.5.7 Session Server Trunks OA&M
            9.5.5.8 Software upgrade
            9.5.5.9 Session Server Trunks security considerations
            9.5.5.10 CS 2000 considerations
    9.6 Policy Controller
        9.6.1 Configuration and Connectivity
            9.6.1.1 Protection mechanism
            9.6.1.2 3rd Party CS-LAN
            9.6.1.3 Geographic Survivability
        9.6.2 IP addressing
            9.6.2.1 LAN Links
            9.6.2.2 PTP Links
        9.6.3 Security
        9.6.4 Network Time Protocol
        9.6.5 Network Monitor
        9.6.6 Billing
        9.6.7 Engineering the Policy Controller
            9.6.7.1 SOC Option
            9.6.7.2 Treatment
            9.6.7.3 Policy Controller and Inter-domain SIP-T trunks setting
            9.6.7.4 Capacity and Scalability
        9.6.8 Surveillance
            9.6.8.1 Alarms and Logs
            9.6.8.2 OMs
    9.7 Packet Media Anchor
        9.7.1 Engineering the Packet Media Gateway
    9.8 ERS 8600 engineering
        9.8.1 Capacity
        9.8.2 Hardware engineering
            9.8.2.1 ERS 8600 Series E-Modules
    9.9 Engineering an IW-SPM
        9.9.1 IW-SPM capacity
        9.9.2 Determining the number of IW-SPMs needed in an office
        9.9.3 Bearer subnet sizing considerations
        9.9.4 Monitoring IW-SPMs
            9.9.4.1 IW-SPM average bridge attempt rate
            9.9.4.2 IW-SPM bridge attempt failure rate
    9.10 Anchor Packet Gateway capacity
    9.11 Universal Audio Server
    9.12 RTP Media Portal engineering
        9.12.1 Component capacities
            9.12.1.1 Capacity with G.711 10 ms
        9.12.2 Minimum Media Portal configuration for CICM mandatory use
        9.12.3 Determining the number of Media Portals needed in an office
        9.12.4 Determining the assignment of Media Portals to GWCs
        9.12.5 Media Proxy insertion and selection
        9.12.6 Unsupported functions with RTP Media Portals
        9.12.7 RTP Media Portal engineering and sizing for Media Gateway sites
            9.12.7.1 Determining the number of Media Portals needed in a geographic region
            9.12.7.2 Determining the assignment of Media Portals to GWCs in a geographic region
    9.13 OAM&P server engineering
        9.13.1 CS 2000 Management Tools Server engineering
            9.13.1.1 Traffic engineering
            9.13.1.2 Number of clients supported
            9.13.1.3 Bandwidth and network performance
        9.13.2 SuperNode Data Manager (SDM)
            9.13.2.1 Traffic engineering
            9.13.2.2 Number of clients supported
            9.13.2.3 Bandwidth
        9.13.3 Core and Billing Manager (CBM)
            9.13.3.1 Traffic engineering
            9.13.3.2 Number of clients supported
            9.13.3.3 Bandwidth
        9.13.4 MG 15000 Element Manager
    9.14 OAM messaging, bandwidth, latency and loss requirements
        9.14.1 MDM to SDM
        9.14.2 MSS 15000 to MDM
        9.14.3 SDM to OSS network
        9.14.4 MDM to OSS network
10.0
    10.2
        10.2.2 Media Gateways
            10.2.2.1 Media Gateway 15000
                10.2.2.1.1 IPSec on MDM
                10.2.2.1.2 UDP ports for bearer flows
            10.2.2.2 IW-SPM
            10.2.2.3 Universal Audio Server (UAS)
            10.2.2.4 Media Server 2010 (MS 2010)
            10.2.2.5 MG9000 IP
            10.2.2.6 Media Portal
        10.2.3 OAM&P network elements
        10.2.4 Operations Support System (OSS) / Network Operations Center (NOC)
    10.3 Packet filtering and security on the ERS 8600
        10.3.1 Introduction and general concepts
            10.3.1.1 Source and destination filters
            10.3.1.2 Global filters
        10.3.2 Summary of filter characteristics on the ERS 8600
        10.3.3 Capacity engineering on filtered ports
        10.3.4 Important information on configuring filtering features
            10.3.4.1 Enabling ARP traffic
            10.3.4.2 Configuring a range of UDP ports
        10.3.5 Securing the management of the ERS 8600
        10.3.6 Layer 2 filtering on the ERS 8600
    10.4 Securing the Media Gateway site
        10.4.1 Anti-spoofing
        10.4.2 TFTP flows
    10.5 Securing the enterprise network
        10.5.1 Network Address Translation (NAT)
10.2.2
11.0
12.0
12.2
12.2.3.4 Queues in the MSS 7000/15000 ............................................................ 231
12.3 MPLS QoS in Carrier VoIP ............................................................ 231
12.4 QoS on the ERS 8600 ............................................................ 232
12.4.1 DiffServ access port ............................................................ 233
12.4.1.1 Tagged traffic ............................................................ 233
12.4.1.2 Untagged traffic ............................................................ 235
12.4.2 DiffServ core port ............................................................ 236
12.4.2.1 Tagged traffic ............................................................ 236
12.4.2.2 Untagged traffic ............................................................ 236
12.4.3 Priority queuing and servicing ............................................................ 237
12.4.4 Mapping Carrier VoIP traffic to queues ............................................................ 238
12.5 QoS on the Media Gateway site router ............................................................ 241
12.6 QoS in the Enterprise Network ............................................................ 243
13.0
14.0
14.7
14.8
Version: 9.0.3
15.0 Element distance limits ............................................................ 277
16.0 Appendix: Bearer path insertion considerations for DPT ............................................................ 279
16.1 General overview ............................................................ 279
16.2 DPT trunk services ............................................................ 279
16.3 Other services requiring bearer-path insertion ............................................................ 280
List of figures
Figure 1 Basic IP network architecture for Carrier Voice over IP solution ............................................................ 2
Figure 2 Metropolitan area distribution networks ............................................................ 4
Figure 3 Carrier VoIP network for intra-CS 2000 office traffic ............................................................ 10
Figure 4 Carrier VoIP network for inter-CS 2000 office traffic ............................................................ 11
Figure 5 Carrier VoIP BGP/MPLS VPN architecture ............................................................ 20
Figure 6 PE-to-CE physical connections ............................................................ 21
Figure 7 Route Target example 1 ............................................................ 27
Figure 8 Route Target example 2 ............................................................ 29
Figure 9 Route Target example 3 ............................................................ 31
Figure 10 PE-to-CE with OSPF Area 0.0.0.0 support ............................................................ 35
Figure 11 PE-to-CE without OSPF Area 0.0.0.0 support ............................................................ 36
Figure 12 Routing information loop ............................................................ 38
Figure 13 MP-iBGP and EBGP neighbors ............................................................ 40
Figure 14 Route propagation delay ............................................................ 43
Figure 15 BGP route aggregation ............................................................ 45
Figure 16 Route aggregation with OSPF Area 0.0.0.0 support ............................................................ 46
Figure 17 Route confusion ............................................................ 47
Figure 18 Example rate-limit and cp-limit on a CS-LAN port ............................................................ 58
Figure 19 Fully meshed redundant configuration into IP core network ............................................................ 59
Figure 20 Dual link redundant configuration with dual IP core edge routers ............................................................ 60
Figure 21 Fully meshed redundant configuration into Avici IP core network ............................................................ 62
Figure 22 Fully meshed configuration ............................................................ 64
Figure 23 Square configuration ............................................................ 64
Figure 24 Single MSS 15000 configuration ............................................................ 65
Figure 25 Fully meshed configuration with less than 1 Gbps traffic ............................................................ 66
Figure 26 Square configuration with less than 1 Gbps traffic ............................................................ 67
Figure 27 Single MSS 15000 configuration with less than 1 Gbps traffic ............................................................ 68
Figure 28 Fully meshed configuration with more than 1 Gbps but less than 2 Gbps traffic ............................................................ 69
Figure 29 Square configuration with more than 1 Gbps but less than 2 Gbps traffic ............................................................ 70
Figure 30 Single MSS 15000 configuration with more than 1 Gbps but less than 2 Gbps traffic ............................................................ 71
Figure 31 OSPF area planning for fully meshed configuration ............................................................ 73
Figure 32 OSPF area planning for square configuration ............................................................ 73
Figure 33 OSPF area planning for single MSS 15000 configuration ............................................................ 74
Figure 34 Static routes in fully meshed configuration ............................................................ 77
Figure 35 Static routes in square configuration ............................................................ 78
Figure 36 Static routes in single MSS 15000 configuration ............................................................ 79
Figure 37 Dual link redundant configuration with single Multiservice Switch 15000 ............................................................ 80
Figure 38 Dual link redundant configuration with dual Multiservice Switch 15000s ............................................................ 81
Figure 39 High level view of IP addressing ............................................................ 85
Figure 40 MSS 15000 in-band management ............................................................ 93
Figure 41 Network Address Translation basics ............................................................ 97
Figure 42 VoIP NAT unknown translated address problem ............................................................ 98
Figure 43 NAT bind expiry issues ............................................................ 99
Document Number: SEB-08-00-010 Version: 9.0.3 Carrier Voice over IP Solution Engineering Rules
Figure 44 Overview of the solution for NAT traversal ............................................................ 100
Figure 45 Media Portal solution ............................................................ 101
Figure 46 Usage of Media Proxies for internetwork communications ............................................................ 102
Figure 47 Internet Transparency generic topology ............................................................ 103
Figure 48 Additional traffic on certain links ............................................................ 104
Figure 49 RTP Media Portal location ............................................................ 105
Figure 50 MP in the CS-LAN ............................................................ 106
Figure 51 MP in a remote site ............................................................ 107
Figure 52 Carrier inter-working ............................................................ 112
Figure 53 Session Server Trunks in a Carrier VoIP environment ............................................................ 147
Figure 54 CS 2000 to CS 2000 DPT SIP-T signaling paths ............................................................ 154
Figure 55 CS 2000 to SIP Proxy signaling paths ............................................................ 155
Figure 56 A network topology with Policy Controller ............................................................ 159
Figure 57 Packet Media Anchor call topology ............................................................ 167
Figure 58 Remote Portals at Media Gateway sites ............................................................ 182
Figure 59 Carrier VoIP IP high-level network diagram ............................................................ 197
Figure 60 The Media Gateway route ............................................................ 213
Figure 61 Protecting the enterprise ............................................................ 215
Figure 62 Protecting the corporate network ............................................................ 217
Figure 63 CS-LAN router ............................................................ 233
Figure 64 QoS in the Media Gateway router ............................................................ 241
Figure 65 QoS in the Enterprise ............................................................ 243
Figure 66 Impact of network impairments on voice quality ............................................................ 245
Figure 67 Voice quality: G.711 vs. G.729 ............................................................ 246
Figure 68 Enabling QoS collection on the GWC ............................................................ 252
Figure 69 Provisioning the QCA server within the CMT ............................................................ 253
Figure 70 Associating the GWC to the QCA server ............................................................ 254
Figure 71 R Value to MOS score mapping ............................................................ 259
Figure 72 Hybrid echo example ............................................................ 261
Figure 73 Graph of echo signal ............................................................ 262
List of tables
Table 1 Carrier VoIP document family ............................................................ vi
Table 2 Routing table relevant to MPLS on Juniper ............................................................ 15
Table 3 RT assignment - example 1 ............................................................ 28
Table 4 RT assignment - example 2 ............................................................ 30
Table 5 RT assignment - example 3 ............................................................ 32
Table 6 ERS 8600 ATM MDA voice connections ............................................................ 50
Table 7 Number of 4-port GigE on a MSS 15000 shelf ............................................................ 66
Table 8 OSPF GigE interface parameters ............................................................ 74
Table 9 Throughput of 4-Port OC12 ATM card on Multiservice Switch 15000 core router ............................................................ 82
Table 10 Initial subnetting ............................................................ 90
Table 11 Subnetting to obtain CS 2000 blocks ............................................................ 90
Table 12 Internet Transparency terms and acronyms ............................................................ 95
Table 13 NAT conventions ............................................................ 96
Table 14 CS 2000 / CS 2000-Compact Call Handling Capacity ............................................................ 113
Table 15 750 GWC capacities ............................................................ 115
Table 16 905 GWC capacities ............................................................ 117
Table 17 USP messaging type vs. number of office Code Points ............................................................ 127
Table 18 USP system node engineering ............................................................ 128
Table 19 USP link capacity ............................................................ 128
Table 20 USP messages per call type ............................................................ 129
Table 21 USP message rate and message size ............................................................ 130
Table 22 IPS7 Link system node throughput ............................................................ 132
Table 23 ABI and H.248 lines combinations ............................................................ 138
Table 24 Session Server Trunks supported signaling and transport protocols ............................................................ 148
Table 25 Session Server Trunks VLAN or IP address assignment ............................................................ 150
Table 26 Sample Session Server Trunks IP address configuration ............................................................ 151
Table 27 Session Server Trunks engineering limits ............................................................ 153
Table 28 IP bandwidth for Session Server Trunks on SIP trunking (SIP/UDP) ............................................................ 156
Table 29 Policy Controller VLAN assignment ............................................................ 162
Table 30 Sample Policy Controller IP address configuration ............................................................ 162
Table 31 North American announcement and 3-port conferencing model ............................................................ 174
Table 32 UAS provisioning results and sensitivities ............................................................ 174
Table 33 MDM to SDM data path requirements ............................................................ 186
Table 34 SDM to MDM traffic characteristics ............................................................ 187
Table 35 MSS 15000 to MDM data path requirements ............................................................ 188
Table 36 MSS 15000 to MDM traffic characteristics ............................................................ 189
Table 37 SDM to OSS Network data path requirements ............................................................ 190
Table 38 SDM to OSS Network traffic characteristics ............................................................ 190
Table 39 MDM to OSS Network data path requirements ............................................................ 192
Table 40 MDM to OSS Network traffic characteristics ............................................................ 192
Table 41 Protocol stacks ............................................................ 199
Table 42 MG 15000s bearer port ranges ............................................................ 202
Table 43 UAS bearer port ranges ............................................................ 204
Table 44 MS 2010 bearer port ranges ............................................................ 204
Table 45 MG9000 bearer port ranges ............................................................ 205
Table 46 Media Portal bearer port ranges ............................................................ 206
Table 47 Summary of filter parameters ............................................................ 209
Table 48 Common CS-LAN to NOC flows - Corporate firewalls ............................................................ 218
Table 49 IETF DSCPs ............................................................ 223
Table 50 Examples of the NSCs with eight classes ............................................................ 224
Table 51 Examples of the NSCs with four classes ............................................................ 224
Table 52 Default DSCP ............................................................ 226
Table 53 Protocol stacks ............................................................ 227
Table 54 Mapping to queues in routers with four queues ............................................................ 230
Table 55 Mapping to queues in BPS2000 ............................................................ 230
Table 56 DSCP to MPLS EXP mapping ............................................................ 231
Table 57 Ingress DSCP and IEEE 802.1p to QoS level mapping ............................................................ 234
Table 58 Egress QoS level to DSCP and IEEE 802.1p mapping ............................................................ 235
Table 59 DiffServ access point behavior ............................................................ 236
Table 60 DiffServ core port behavior ............................................................ 237
Table 61 Traffic service classes mapping to QoS levels ............................................................ 238
Table 62 Mapping to queues in the ERS 8600 ............................................................ 238
Table 63 QoS in the Media Gateway router - Common flows ............................................................ 242
Table 64 Fields of PACKET_QOS_OM_THRESHOLDS in table OFCVAR ............................................................ 250
Table 65 SLR and RLR for various equipment ............................................................ 263
Table 66 A/D and D/A attenuation for some small NCS gateways ............................................................ 264
Table 67 Loss plan and ECAN characteristics ............................................................ 264
Table 68 Jitter buffer characteristics ............................................................ 265
Table 69 MG 9000 PADDATA recommendation ............................................................ 272
Table 70 Definition of standard pad group keys ............................................................ 273
Table 71 Voice gateway loss plan ............................................................ 274
Table 72 Port definition ............................................................ 274
Table 73 Distance limits for network elements ............................................................ 277
Table 74 DPT trunk features requiring bearer interaction ............................................................ 279
Table 75 Analog line service ............................................................ 280
Table 76 ACD services ............................................................ 282
Table 77 EBS services ............................................................ 282
Table 78 XA-Core Destination Protection configuration parameters ............................................................ 285
1.0 Introduction
1.1 Carrier Voice over IP solution objective
The Carrier Voice over IP Network solutions were developed to support the shared use of a single network by voice, signaling, OAM, and data traffic. The objective of this document is to describe the overall engineering required to ensure that all Carrier Voice over IP solutions are highly scalable and maintain optimal network performance and reliability as the customer's network grows to serve an increasing number of subscribers. Specifically, this document addresses engineering of the core, or backbone, network, including the use of MPLS and of BGP/MPLS Virtual Private Networks (VPNs). It also discusses the specific engineering of the various elements that make up the diverse Carrier Voice over IP portfolio. This document does not, however, address the specifics of media gateway engineering; those rules can be found in the other System Engineering Bulletins (SEBs) within the Carrier Voice over IP engineering documentation family, listed earlier in this document.
Figure 1
1.2.1 CS-LAN

The CS-LAN consists of dual Nortel Ethernet Routing Switch 8600s (ERS 8600), the CS 2000, the Gateway Controllers (GWCs), the Universal Signaling Point (USP), the Succession Data Manager (SDM)/Core and Billing Manager (CBM), Audio Servers, and in some instances, media gateways. The OAM network management clients can be located anywhere within the network, but it is recommended that the servers be located in the CS-LAN.

Note: Throughout this document, it is assumed that the CS-LAN routing switches are dual ERS 8600s. However, the rules described here are equally applicable to third-party devices, unless otherwise noted.
1.2.2 Core network

The IP core network in its basic form is comprised of one or more routers (possibly from multiple vendors) which interconnect the CS-LAN to the gateway sites and to other CS 2000 nodes within the customer's network. The IP core network may also carry significant data traffic beyond the Carrier Voice over IP traffic. A typical network uses a number of distribution networks, each located in a single metropolitan area, to aggregate multiple access networks. Figure 2 on page 4 shows two different metropolitan areas. It also shows two alternative access technologies, Ethernet and Digital Subscriber Line (DSL); a combination of these technologies may be used within any particular area. Major connections between sites should be designed with redundant paths for added reliability.
Figure 2

[Figure: End-to-end network topology showing the CS-LAN and core network connected to two metropolitan areas, the OAMP network, and the Internet. Each metro area aggregates customer premises access via an Ethernet switch or a DSLAM toward the media gateways.]
It is recommended that the metropolitan distribution network (MDN) also be used as an aggregation point for directing Internet traffic to and from the internet service provider's network, and therefore, the Internet. This aggregation would be accomplished by using a device such as the Nortel Services Edge Router 5500 (SER 5500), with connectivity to the SER 5500 (or similar) equipment in each MDN. The metropolitan distribution network will probably have connectivity to an ISP network for non-Carrier Voice over IP traffic, and will definitely have access to the core network. There are no specific rules for
what is supported and unsupported in the distribution network, but this document, and the appropriate Carrier Voice over IP media gateway (MG) engineering rules documents, contain guidelines for what should, and should not, be done in this layer of the network hierarchy. (See the Document Roadmap in the front section of this document.) It may be possible to keep the bulk of traffic (especially media) switched at Layer 2 only, for example, in a Gigabit Ethernet access solution where traffic stays in the same broadcast domain.

1.2.3 Media Gateway sites

Remotely located from the CS-LAN, media gateways interconnect at distribution networks to aggregate the traffic prior to connecting to the managed IP core network. In (I)SN09, media portals may be located at remote media gateway sites (also known as remote media points of presence (POPs)), with the support of a geographic selection algorithm. The rules in this document have been verified for the various media gateway interconnects using ERS 8600 Series switches, Cisco 6509 switches, Juniper M-series routers, Avici QSR routers, and Nortel Multiservice Switch 15000 ATM switches.

The acceptable distance of an MG from the CS-LAN is largely driven by the latency in the network. Latency, or delay across a network, can be influenced by many factors including, but not limited to, codec encoding/decoding, cable/fiber distance, network element processing time, and network element congestion/overload (queuing delay). If latency is excessive, communication between the MG and the CS 2000 may incur call processing problems, because the call control signaling exchanged between the two may not be received and acknowledged before the associated transmission timers expire. See section 2.1.1 Performance requirement on page 7 for information on signaling latency requirements for media gateways.
3. CS-LAN Traffic Engineering and Configuration
   a. Size the Central Office equipment, the capacity required in the CS 2000, the number of GWCs, and the number of UASs.
4. Network Loss Planning.
5. Core Network Engineering.
   a. Specify the Quality-of-Service (QoS) objectives.
   b. Size the transport links interconnecting the media gateway sites, CS-LAN and core router based on the forecasted amount of voice, signaling and OAM traffic.
   c. Verify that signaling and voice transport latency objectives will be met.
   d. Identify the traffic control mechanisms that will be used to control congestion and assist in meeting QoS objectives.
   e. Consider centralized versus decentralized media gateways.
6. Size the network management platforms required.
7. Ensure appropriate equipment redundancy is in place to meet network reliability objectives.
8. Network Qualification.
- < 100 msecs for trunk gateways
- Maximum packet loss of 0.1%
- Maximum jitter of 10 msecs
The objectives aim for a performance level similar to that familiar to users of the current PSTN. Note that the packet loss requirement is driven by modem and fax requirements; voice and call control signaling are significantly more tolerant of packet loss.

2.1.2 Availability requirements

The key availability requirements for carrier-grade performance can be summarized as:
- A maximum of a 1 second break in the speech path on any link failure
- A cumulative maximum of < 30 seconds per year of isolation for a trunk gateway

These requirements drive the need for redundant, resilient interconnection between the gateways and the access and/or core network, and between the CS-LAN and the core network. The gateway-to-switch/router configurations supported by Carrier VoIP for optimum carrier-grade performance are detailed in other sections of these guidelines. Optimum interconnection of Nortel gateways to third-party routers which are not part of the standard Carrier VoIP product offering is not covered in these guidelines; a separate integration effort must be allowed for when planning to interconnect with third-party routers.
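As an illustration, the latency objective in section 2.1.1 can be checked against a simple one-way delay budget. This is a minimal sketch: the helper function and all component values are illustrative assumptions, not Nortel engineering figures.

```python
# Hypothetical one-way delay budget for a trunk gateway path.
# All component values are illustrative assumptions.

def one_way_delay_ms(codec_ms, packetization_ms, fiber_km,
                     hops, per_hop_ms, jitter_buffer_ms):
    """Sum the major fixed contributors to one-way voice path delay."""
    propagation_ms = fiber_km * 0.005  # roughly 5 microseconds per km of fiber
    return (codec_ms + packetization_ms + propagation_ms
            + hops * per_hop_ms + jitter_buffer_ms)

delay = one_way_delay_ms(codec_ms=0.125, packetization_ms=10,
                         fiber_km=2000, hops=8,
                         per_hop_ms=0.5, jitter_buffer_ms=10)
print(f"{delay:.1f} ms", "within" if delay < 100 else "exceeds", "the 100 ms objective")
```

A budget of this form makes it easy to see which component (for example, a longer fiber route or a deeper jitter buffer) consumes the objective first.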
2.2.2 Traffic estimation when planning networks

When upgrading an existing network, the media traffic across the backbone network between two CS 2000s may be estimated from the current operational measurements (OMs) on the trunk groups between the two offices. In a live network, this information would be gathered from the OMs associated with the DPT trunk groups.

2.2.3 MSS 15000 core

The MSS 15000 supports transport and routing of IP over ATM using AAL5, and Gigabit Ethernet, for bearer traffic, signaling and OAM. MG 15000s within a single CS 2000 node are required to implement the Virtual Router (VR) on the MG 15000 shelf. This configuration is required to optimize capacity as well as to aggregate the signaling and OAM traffic back to the CS-LAN.

2.2.3.1 Single node CS 2000 office

The MG 15000s served from a single CS 2000 are fully meshed, with PVCs interconnecting each MG 15000. Static routes are used to route the bearer traffic between MG 15000s. OSPF is used to route the signaling control messages, OAM and any bearer traffic between the MG 15000s and the ERS 8600 CS-LAN.
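The trunk-group based estimation in section 2.2.2 can be sketched as follows. The helper function and the per-call bandwidth figure are assumptions for illustration (G.711 with 20 ms packetization plus IP/UDP/RTP overhead), not values taken from this bulletin.

```python
# Illustrative sketch: convert busy-hour trunk-group traffic (Erlangs),
# as reported by operational measurements, into an inter-office bearer
# bandwidth estimate. The per-call rate is an assumed G.711/IP figure.

def bearer_bandwidth_mbps(erlangs, per_call_kbps=96.0):
    """One Erlang corresponds to one continuously occupied call."""
    return erlangs * per_call_kbps / 1000.0

# For example, 600 Erlangs measured on the DPT trunk groups:
print(bearer_bandwidth_mbps(600), "Mbps")
```

In practice the per-call rate depends on the codec, packetization interval and transport overhead actually deployed, so it should be replaced with the figure applicable to the network being engineered.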
Figure 3
For this scenario, bearer traffic is exchanged between MG 15000 VRs. An ERS 8600 is located in the OSPF backbone and provides the route to the GWCs for all VSP cards. Static routes are added on the MG 15000 VRs to route the bearer traffic between them.

2.2.3.2 Multiple CS 2000 offices

When there are multiple CS 2000 nodes in the network, bearer path interconnection between gateways supported by two different CS 2000s can be routed over the MSS 15000 backbone. This functionality is termed Dynamic Packet Trunking. In this instance, an aggregation VR, or router, should be deployed to groom the interoffice traffic, as it becomes impractical to fully mesh all the MG 15000s between nodes. The aggregation VR will be fully meshed with every MG 15000 supported by its associated CS 2000. On the WAN side, the aggregation VR will be fully meshed with all other CS 2000s in the network.
Figure 4

[Figure: Multiple CS 2000 office interconnection. The CS 2000, UAS, STP and USP attach to the CO LAN; signaling paths use 100BT MGCP+ (ASPEN 2.1)/H.248 and ISUP/M3UA/SCTP. Intra-complex bearer paths run over OC-3 ATM between MSS 15000 (PP 15K) PVG VRs; inter-complex bearer paths and SIP-T interoffice signaling traffic run over APS-protected OC-12 ATM to other Succession offices, with TDM and channelized OC-3 ISUP connections at the PVGs.]
A later chapter details the interconnection of the MG 15000 to the MSS 15000, and also details the capacity of the Multiservice Switch Virtual Router. Interconnection with the CS-LAN is detailed in the section titled CS-LAN Core Network Connectivity in SEB 08-00-001.

Note: Carrier VoIP solutions which utilize gateways, such as the IW-SPM, that only support Gigabit Ethernet connectivity should be based on native IP access or core routers. See 2.2.3 MSS 15000 core on page 9 for details regarding this configuration.

2.2.4 Juniper core network

A later section details the interconnection of the MG 15000 to the Juniper M-series routers. The section on Core Network Connectivity in SEB 08-00-001 discusses the connectivity of the ERS 8600 to the core network. Specific network engineering guidelines for configuring a Juniper core network for Carrier VoIP are not considered in this document.
2.2.5 Cisco core network

Specific engineering and configuration details for interconnecting Cisco core routers with Carrier VoIP are expected to be available under a Nortel integration service.
2.3.2.1 Information distribution - Interior Gateway Protocol (IGP)

MPLS TE relies on an IGP routing protocol to distribute link state information such as maximum link bandwidth per priority, reserved bandwidth per priority, attribute flags per interface, and administrative weight per interface.

Distance vector routing protocols are not suitable for TE by their nature: they rely only on the number of hops to determine the best route, and the routing protocol itself does not convey any network topology or link status information. Existing link state routing protocols such as OSPF and IS-IS are also not adequate by themselves: their routing decisions are based on metric or cost and do not take traffic characteristics into account. Extensions must be enabled so that OSPF or IS-IS can carry TE information in their advertisements. Please note that in this Carrier VoIP release Nortel only supports OSPF as the IGP in an MPLS network.

OSPF

If customers already have an OSPF-based IP architecture, it is necessary to enable support for the Type 10 Opaque Area-Local Link State Advertisement (LSA) on all related interfaces. The flooding scope of a Type 10 LSA is only within one area (not necessarily Area 0.0.0.0). This restriction exists because, when trying to set up an inter-area LSP, the headend router cannot compute a full path to the tailend router, as it does not have TE details for another area. Computation of an inter-area path using MPLS TE has well-known restrictions for almost all vendors. Nortel recommends that all Carrier VoIP MPLS TE nodes be located in the same OSPF area. In the core network it is recommended to overlap OSPF Area 0.0.0.0 with the MPLS backbone. In other words, to set up an LSP through the core network, all devices, including the headend router, the tailend router and all routers along the path, must be in OSPF Area 0.0.0.0. Inter-area MPLS TE is not supported in the Carrier Voice over IP solutions.
Note: There are two other OSPF LSA types defined for MPLS (Type 9 and Type 11); however, most vendors do not currently support them.
TED accuracy

One critical element of MPLS TE is the Traffic Engineering Database (TED) created by the IGP extensions. The TED gives the device a complete view of all nodes and links participating in traffic engineering. The TED is maintained by the IGP, while the currently available bandwidth is advertised by RSVP. When the bandwidth of a link changes, RSVP informs the IGP, which in turn updates the TED and forwards the new bandwidth information to its neighbors. Bandwidth availability on a link changes with LSP establishment, refresh, and disconnection, potentially leading to inconsistency between the TED and the reality of the network. As a result, an LSP may be set up on a sub-optimal path or fail completely. To improve the reliability of the TED, Juniper provides an option to change the threshold at which a change in bandwidth triggers an IGP update. By default, the threshold is set to 10, which means that if the change of bandwidth on a link exceeds 10% of the overall reserved bandwidth (either an increase or a decrease), RSVP will force the IGP to issue an update.

Note: In the case of overbooking, the overall bandwidth on a port will differ from the physical bandwidth. For example, for a Gigabit Ethernet port with an oversubscription rate of 300%, the overall bandwidth becomes 3 Gbps; based on the default threshold value, a bandwidth change below 300 Mbps will not trigger an update. Please note that the threshold must be provisioned in BOTH directions. Nortel recommends that customers keep the default value.

2.3.2.2 Path calculation - CSPF

Since path calculation (CSPF) is essentially identical between vendors and normally is not visible to end users, this section does not deal with this subject.

2.3.2.3 Path setup

Path setup in an MPLS TE environment requires two fundamental functions: label distribution and bandwidth reservation.
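The IGP-update threshold behaviour described under TED accuracy above can be sketched as follows. The function is a hypothetical model of the rule, using the 10% default threshold and the 300% oversubscription example from the note.

```python
# Sketch of the threshold rule: a bandwidth change triggers an IGP
# update only when it exceeds threshold_pct of the overall bandwidth,
# which overbooking may inflate beyond the physical rate.

def triggers_igp_update(change_bps, physical_bps,
                        oversub_pct=100, threshold_pct=10):
    overall_bps = physical_bps * oversub_pct / 100
    return abs(change_bps) > overall_bps * threshold_pct / 100

GIG_E = 1_000_000_000
# With 300% oversubscription, overall bandwidth is 3 Gbps, so only
# changes above 300 Mbps cause RSVP to force an IGP update:
print(triggers_igp_update(250_000_000, GIG_E, oversub_pct=300))  # False
print(triggers_igp_update(350_000_000, GIG_E, oversub_pct=300))  # True
```

The sketch makes the practical consequence visible: the more a port is overbooked, the larger a change must be before the TED is refreshed.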
Label distribution

There are two widely deployed label distribution protocols in the industry: the Label Distribution Protocol (LDP) and the Resource Reservation Protocol (RSVP). Only RSVP is supported by Nortel.

Bandwidth reservation
Although Juniper supports automatic bandwidth allocation, which allows the router to automatically adjust the bandwidth size based on the current traffic volume, it is timer- and threshold-based. When more traffic enters the LSP, sufficient bandwidth may not already be in place. In addition, the bandwidth size cannot be adjusted until the percentage difference exceeds the pre-defined threshold. Thus, leaving an LSP without a bandwidth requirement could cause an extended outage for Carrier VoIP applications. The bandwidth value for an LSP must be explicitly defined at the headend router. The bandwidth requirement should be calculated according to the worst case; in other words, assuming 100% of local subscribers are calling other subscribers behind the tailend router, the LSP must be large enough to handle this traffic.

When the headend router needs to re-route the path for an existing LSP, the new and the old LSP might traverse a common link. Since the old LSP has already reserved bandwidth on the link, the request from the new LSP may be rejected due to insufficient bandwidth. To eliminate this double-counting problem on shared links, Nortel requires that customers enable the adaptive feature for each Carrier VoIP LSP.

2.3.2.4 Packet forwarding

Building LSPs in the network does not necessarily mean the router will automatically route traffic down the tunnels. By default, IGPs maintain their own routing information in the main routing table (called inet.0 on Juniper). MPLS routing information is stored in a separate table (called inet.3 on Juniper), which consists of the IP addresses of the LSP egress points. MPLS also has a forwarding table (called mpls.0) listing the label mappings used for switching incoming packets to outgoing interfaces. Neither the MPLS routing nor the forwarding tables are accessible to the IGP. Additional steps need to be taken to inform IGPs of the existence of MPLS LSPs.
The following sections will discuss this topic in detail.
Unicast routing table (inet.0): All unicast routing protocols install their best route information into this table.
MPLS routing table (inet.3): RSVP enters destinations reachable via an LSP into this table. These destination prefixes are always the IP addresses of LSP egress points.
MPLS switching table (mpls.0): This table contains the MPLS label binding information used to switch incoming labeled packets to outgoing interfaces.
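As a toy model only (not actual Juniper behaviour), the table separation above and its effect on route resolution can be illustrated like this; all table contents, prefixes and interface names are invented for the example.

```python
# Toy model of the table separation described above: inet.0 holds the
# IGP's unicast routes, inet.3 holds destinations reachable via LSPs.

inet0 = {"10.1.0.0/16": "ge-0/0/0"}          # IGP best routes
inet3 = {"192.0.2.9/32": "lsp-to-tailend"}   # LSP egress loopbacks

def resolve(prefix, merge_inet3=False):
    """Without merging, a lookup sees only inet.0; merging models the
    effect of making LSP destinations visible to ordinary routing."""
    table = {**inet0, **inet3} if merge_inet3 else inet0
    return table.get(prefix)

print(resolve("192.0.2.9/32"))         # None: the IGP cannot see the LSP
print(resolve("192.0.2.9/32", True))   # lsp-to-tailend
```

The later discussion of IGP Shortcuts and Traffic Engineering BGP-IGP describes the real mechanisms that achieve what the `merge_inet3` flag stands in for here.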
IGP Shortcuts

By default, the IP information associated with an LSP is not visible to the IGP. Hence, any prefix announced by the tailend router and its downstream neighbors is not known to the headend router via the LSP. When IGP Shortcuts are used, the LSP is presented to the IGPs as a unidirectional point-to-point logical interface. This allows the LSP ingress router to consider the LSP in its SPF calculations, and the result is entered into the inet.3 table.

Note: IGPs can use LSPs in their SPF calculations only if the egress address of the LSP is a loopback interface. IGP Shortcuts do not use LSPs whose endpoints are physical interfaces.

Traffic Engineering BGP-IGP

When IGP Shortcuts is enabled, the results of an SPF calculation are installed into the inet.3 table, but not into the main routing table. BGP is the only protocol with visibility into the inet.3 table for the purpose of route resolution. In a network which only runs OSPF, a feature called Traffic Engineering BGP-IGP is required to merge the inet.3 table into the inet.0 table. Working in conjunction with IGP Shortcuts, the IGP can then use the LSP in its calculations; since there is no longer a separate inet.3 table, the results of the calculations are entered into the inet.0 table. Nortel requires that customers activate both IGP Shortcuts and Traffic Engineering BGP-IGP in order to obtain the proper routing information on the router. Failure to follow this rule will result in a suboptimal routing path in the network.

2.3.3 Fault protection

2.3.3.1 RSVP Hello extension

The RSVP Hello extension is optional for MPLS TE. By sending Hello packets, an RSVP node is able to detect the loss of its neighbors faster (on the order of seconds). This feature provides node-to-node failure protection when a next-hop failure is not detectable by link layer mechanisms. A node running RSVP Hello periodically sends a Hello request out of its interfaces, and the receiving end responds with a Hello Ack.
The declaration of a neighbor's state change depends on two factors: the Hello interval and the keep-multiplier. An RSVP node will declare its neighbor's state to be down when a loss of (2 x keep-multiplier + 1) consecutive Hello packets occurs. Nortel recommends decreasing the RSVP Hello timer to 1 second from the default of nine (9) seconds. Nortel also recommends decreasing the keep-multiplier to 1 from the default value of 3. It is worth mentioning that an RSVP Hello instance may be expensive because of its large volume of traffic and the strain put on router resources. The above recommendations are based on the assumption that all nodes in an MPLS TE network support the RSVP Hello protocol and that they all have sufficient resources to handle it.
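The failure-detection time implied by the (2 x keep-multiplier + 1) rule can be computed directly; this sketch simply transcribes the formula.

```python
# Time to declare an RSVP neighbor down: a loss of
# (2 * keep_multiplier + 1) consecutive Hello packets.

def rsvp_hello_detect_secs(hello_interval_s, keep_multiplier):
    return (2 * keep_multiplier + 1) * hello_interval_s

print(rsvp_hello_detect_secs(9, 3))  # defaults: 63 seconds
print(rsvp_hello_detect_secs(1, 1))  # recommended values: 3 seconds
```

The comparison shows why the recommendation matters: the default timers leave a neighbor failure undetected for roughly a minute, while the recommended values reduce this to a few seconds.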
2.3.3.2 RSVP PathErr message

The RSVP PathErr message can report a wide range of problems in the network. The ingress router can monitor PathErr packets transmitted from downstream routers. If an LSP cannot be built due to insufficient bandwidth or unavailable links, an MPLS TE node can integrate this information into a subsequent calculation. Although Juniper provides an option to turn this feature off, Nortel recommends leaving it enabled.

2.3.3.3 Protection mechanisms

There are several ways to protect an MPLS LSP against outages:
- Headend Reroute
- Secondary LSP
- Fast Reroute

Headend reroute

Without assistance from other mechanisms, only the headend router of an LSP can perform path restoration. Before an impairment occurs along the path, the headend router has NO backup plan in its record for finding an alternative path. Upon reception of either an IGP notification of a topology change, or an RSVP notification of an LSP error, the headend router will re-calculate and re-signal a new path. The time it takes to resume traffic on an alternative path is calculated as follows:
Min(Time(IGP reaction), Time(RSVP reaction)) + Time(Run CSPF for detour path) + Time(Signal a new path)
The total amount of time largely depends on the IGP or RSVP reaction to the link/node failure, which can be on the order of seconds. In this calculation, the computation and RSVP-TE signaling times can be ignored.

Secondary LSP

A secondary LSP is provisioned at the headend router to protect the primary LSP ONLY. The primary LSP is the preferred path, and the secondary path is not used until the primary LSP fails. On Juniper routers, secondary LSPs can run in two modes:
- Standby: pre-calculated and pre-signaled
- Non-standby: calculated and signaled only after the primary path fails
In non-standby mode, an RSVP PATH message needs to be sent from the headend router after it is notified by either the IGP or RSVP of the primary path failure. In standby mode, this step is skipped because the backup path is already established in advance. Please note that in standby mode the secondary LSP reserves the same amount of resources as the primary LSP, even while the primary LSP is active; the additional bandwidth consumption is the trade-off for improved recovery time. Either the IGP or RSVP can trigger the traffic switchover. Upon receiving a link-down LSA or an RSVP error message for a segment that is part of the primary LSP, the headend router will tear down the primary path and try to reroute the traffic through the backup tunnel. The time it takes to resume the traffic on the alternative path is calculated as follows:
Non-Standby Mode: Min(Time(IGP reaction), Time(RSVP reaction)) + Time(Signal a new path)
Standby Mode: Min(Time(IGP reaction), Time(RSVP reaction))
Fast Reroute

Fast Reroute (FRR) is also called local repair in some articles. FRR allows the router immediately upstream of the outage to quickly shift the LSP away from the failed link or node before informing the headend router. Two FRR modes can be used on Juniper: FRR Link Protection and FRR Node Protection. FRR Node Protection provides more robustness because it bypasses both the failed link and the node. However, Juniper believes Graceful Restart is more suitable for protecting against node failure.

Note: Only the headend router is allowed to send node-link protection requests. All links capable of fast reroute along the path have only link-protection specified.

The backup path is pre-calculated and pre-signaled immediately after the primary path is established. By default, the backup path inherits the same constraints as its primary LSP. However, because restoration with FRR is assumed to be only a temporary solution, the reservation is NOT made until the primary LSP fails. In FRR, the link-down IGP message is ignored; the headend router must wait for an RSVP error message before tearing down the protected path.
The time it takes to resume the traffic on an alternative path is calculated as follows:
Time (RSVP Reaction)
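The restoration-time formulas for the three mechanisms can be written side by side. This sketch only transcribes the formulas; the reaction, computation and signaling times vary by network, so they are passed in as parameters (the sample values are illustrative assumptions in milliseconds).

```python
# Transcription of the restoration-time formulas above (times in ms).

def headend_reroute_time(igp, rsvp, cspf, signal):
    # No backup plan: detect the failure, run CSPF, signal a new path.
    return min(igp, rsvp) + cspf + signal

def secondary_lsp_time(igp, rsvp, signal, standby):
    # Standby mode skips signaling because the backup is pre-signaled.
    return min(igp, rsvp) if standby else min(igp, rsvp) + signal

def fast_reroute_time(rsvp):
    # Local repair acts on the RSVP reaction alone.
    return rsvp

# Illustrative values in milliseconds (assumed, not measured):
igp, rsvp, cspf, signal = 2000, 1500, 50, 50
print(headend_reroute_time(igp, rsvp, cspf, signal))         # 1600
print(secondary_lsp_time(igp, rsvp, signal, standby=False))  # 1550
print(secondary_lsp_time(igp, rsvp, signal, standby=True))   # 1500
print(fast_reroute_time(rsvp))                               # 1500
```

With realistic inputs, the dominant term in every case is the failure-detection time, which is why the Summary below concentrates on how quickly each mechanism learns of the failure.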
Summary

With all of these protection mechanisms in hand, Nortel uses them to build a Carrier VoIP MPLS core network with maximum redundancy and reliability.

For obvious reasons, headend reroute is not desirable: the detour path is neither pre-calculated nor pre-signaled, and the reaction time for the IGP or RSVP could potentially take several seconds. Thus, Nortel does not recommend this method to any Carrier VoIP customer.

Secondary LSP and Fast Reroute each have their own pros and cons. From the technical description, Fast Reroute may appear to be the better solution. However, the network administrator should keep in mind that restoration with local protection is ONLY temporary: in the post-failure stage, a new LSP will be computed by the headend, potentially leading to another traffic switchover. Furthermore, based on observations on Juniper M-series routers, Nortel believes that physical alarms cannot aid the RSVP reaction to link failure. In other words, on Juniper routers, RSVP has to wait until the Hello packet time-out (the minimal value is one (1) second) before notifying its adjacent neighbors. For topologies using secondary LSPs, this concern does not exist because the IGP can detect the link failure much faster.

To improve protection in the core network, these guidelines must be followed to ensure the LSPs can be used to carry Carrier VoIP:
- A secondary LSP is required and must be pre-configured at the headend router.
- The secondary LSP must run in Standby mode so the required bandwidth is always in place and ready to serve the switched traffic.
- The secondary LSP must be defined to minimize, as much as possible, the sharing of physical links/nodes with the primary LSP. In this way, a single point of failure will not tear down both LSPs.
- Fast Reroute is strongly recommended.
On Juniper routers, Nortel requires customers to enable both link-protection and node-protection so that routers along the path will try to compute detour paths as diverse as possible. The best protection is given when both a secondary path and fast reroute (link-node protection) are configured for Carrier VoIP LSPs. When a link or node failure occurs, fast reroute can perform a local repair to provide a temporary bypass link. In the meantime, the router immediately upstream of the failed link or node will notify the headend router to switch the traffic to the secondary LSP.
remote sites. This document provides the general guidelines for integrating Carrier VoIP sites into a BGP/MPLS VPN backbone. In addition, the methods to minimize the end-to-end outage are highlighted. Although inter-carrier BGP/MPLS VPN is possible, it is not covered in this discussion; only the intra-carrier configuration is considered, which may contain multiple gateway (GW) sites and one or more CS 2000 sites. In addition, it is assumed that the BGP/MPLS VPN backbone is composed of a single BGP Autonomous System (AS). The rules provided here can be easily extended to engineer a BGP/MPLS VPN core network with multiple Autonomous Systems.
Figure 5
On top of the above assumptions, this document does not cover the following topics:
- Engineering rules for MPLS within the backbone, for instance, MPLS Traffic Engineering, MPLS LSP Protection, etc.
- Engineering rules to optimize MP-BGP within the backbone, for instance, BGP Route Reflection, BGP Confederations, etc.
- Engineering rules for the IGP protocol within a VPN site.
- OSPF sham links between VPN sites.
2.4.1 PE-to-CE connections

For maximum redundancy, Nortel requires a pair of CE routers to aggregate the traffic from a Carrier VoIP VPN site. These CE routers must be connected to dual PE routers in a fully-meshed mode. By this means, each site has redundant entry points into the backbone. If a Carrier VoIP routing-capable device (for instance, an MG 15000 with VR in a small VPN site) is directly attached to the BGP/MPLS VPN backbone, this device must be dual-homed to a pair of PE routers by redundant uplinks.
Figure 6
2.4.2 PE router planning

2.4.2.1 VRF

A PE router maintains separate VRFs (Virtual Routing and Forwarding instances) for each VPN in which its CE
routers participate. Each VRF acts as a collection of all related subnets inside a VPN. Nortel requires one dedicated VRF on every PE router connected to a Carrier VoIP site. If the site is also a member of another VPN, non-Carrier VoIP subnets cannot be leaked into the Carrier VoIP VRF on the PE router; route filters may be required. Please note that a VRF is only required on PE routers, not on CE routers.

Note: In some corner cases, more than one VRF may be required for a VPN per site. These scenarios are not considered here; if a customer requires this type of design, please contact Nortel for further support.

The Carrier VoIP VRF on a PE router should include the interfaces to its directly attached CE routers, so the routes learned across these interfaces will be associated with the proper VPN. The interface does not have to be a physical interface; it can also be defined as a sub-interface. If a site has both Carrier VoIP and non-Carrier VoIP networks behind the CE routers, the PE-to-CE connection must be separated physically or logically for the two services. There are several options:
- The customer can install additional cable connections only for non-Carrier VoIP subnets, and leave the existing cables for Carrier VoIP subnets. In this case, only the physical ports serving Carrier VoIP traffic are included in the Carrier VoIP VRF.
- The customer can divide the existing cable connections into two sub-interfaces, one for Carrier VoIP subnets and one for non-Carrier VoIP subnets. In this case, only the sub-interface serving Carrier VoIP traffic is included in the Carrier VoIP VRF.
- The customer can enable the ports as VLAN trunking interfaces. One of the VLANs carried on the links can be used to announce Carrier VoIP subnets. In this case, only the VLAN interface serving Carrier VoIP traffic is included in the Carrier VoIP VRF.

Whichever option is selected, please make sure that the bandwidth requirement for Carrier VoIP related traffic is met.
2.4.2.2 Route Distinguisher

The PE router will combine any IPv4 address learned from a CE router with an eight-byte RD (Route Distinguisher). In Carrier VoIP, all VRFs belonging to the Carrier VoIP VPN must share the same globally unique RD. The RD can be specified in two different numbering schemes:
- A 16-bit AS number followed by a colon and a 32-bit arbitrary number: ASN:nn. This option is recommended when the BGP/MPLS VPN backbone uses a public AS number.
- A 32-bit IP address followed by a colon and a 16-bit arbitrary number: IP:nn
This option is recommended when the BGP/MPLS VPN backbone uses a private AS number. In this case, Nortel strongly encourages customers to use a public IP address in the form of a PE router's Router ID, or one of the loopback IP addresses which is unique across the network.

Using RDs will eliminate any ambiguity in the network if there is address overlap between different VPNs. However, it does not solve the problem caused by overlapped address space within a VPN. Thus, as a requirement, no duplicate IP subnets are allowed among Carrier VoIP VPN sites.

Note: In some corner cases, if more than one VRF on a PE router belongs to a VPN, then each VRF must have a different RD. These scenarios are not considered in this document; if a customer requires this type of design, please contact Nortel for further support.

2.4.2.3 Route Target

The RT (Route Target) extended BGP community attribute is used to create import and export policies on a PE router to control route distribution between the VPN-IPv4 and IPv4 tables. When a VPN route is injected into BGP, it has an RT value attached to it. A receiving PE router will match the RT against its import policy to see whether the route is eligible to be imported into a local VRF. The RT value is independent of the RD value; they do not have to share the same value, and customers can pick any number they want. However, keep in mind that an RT carries a certain meaning with it: planning is very important before deployment so that each RT clearly identifies a service the end users may be interested in.

2.4.2.3.1 Export policy

Before advertising local routes to other PE routers, the ingress PE router attaches a route target attribute to each route learned from directly connected sites. The route target attached to the route is based on the configured export policy associated with the VRF. The Carrier VoIP VPN needs more than one RT. The following rules for RT assignment must be obeyed for maximum flexibility.
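For reference, the two RD numbering schemes described in section 2.4.2.2 correspond to the eight-byte encodings defined in RFC 4364 (type 0 for ASN:nn, type 1 for IP:nn). The following is a minimal sketch of that encoding; the AS number, address and value used are examples only.

```python
import struct

def rd_asn(asn, nn):
    """Type 0 RD: 2-byte type field, 2-byte AS number, 4-byte value."""
    return struct.pack("!HHI", 0, asn, nn)

def rd_ip(ipv4, nn):
    """Type 1 RD: 2-byte type field, 4-byte IPv4 address, 2-byte value."""
    octets = bytes(int(o) for o in ipv4.split("."))
    return struct.pack("!H", 1) + octets + struct.pack("!H", nn)

print(rd_asn(65001, 100).hex())        # 0000fde900000064
print(rd_ip("192.0.2.1", 100).hex())   # 0001c00002010064
```

Both forms occupy exactly eight bytes on the wire, which is why the arbitrary number field shrinks from 32 bits to 16 bits when the longer IP-address form is chosen.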
Please note that the rules for export policy definition are identical for the case with a single CS 2000 complex and the one with more than one CS 2000 complex. First we categorize all VPN sites into three major groups:
CS2000 VPN Site: It contains all CallP devices including the CS-LAN itself. It may or may not have local gateway devices. A NOC center may or may not subtend to the CS-LAN.
Gateway VPN Site: It contains one or more remote gateways. Each CS2000 Complex must have at least one Gateway VPN Site.
NOC VPN Site: It contains OAM&P devices which are running in server mode.
For a CS2000 VPN Site, several route targets are defined for the EXPORT policy:
CallP Carrier (Required): This route target identifies all CallP subnets which can be advertised within the BGP/MPLS VPN carrier (i.e. intra-carrier).
CallP Public (Required): This route target identifies all CallP subnets which can be released between multiple carriers, or to the Internet (i.e. inter-carrier). This route target is only used when customers have more than one BGP/MPLS VPN service provider.
CS 2000 Bearer (Optional): This route target identifies all Bearer subnets if a gateway device is present in the CS 2000 VPN site. It is also used if a subnet within the site is shared for both signaling and bearer purposes. For instance, an MG 15000 with VR is connected to the CS-LAN. Its VSP FPs may have their signaling (CTRL/MG) and bearer (IPMCONN) addresses in the same subnet. In this case, the subnet is marked with CS 2000 Bearer only.
CS 2000 OAM&P (Required): This route target identifies all management subnets within the site, including the routes learned from a directly attached NOC center.
For a Gateway VPN Site, several route targets are defined for the EXPORT policy. Please note that the same RT value is shared across all Gateway VPN sites within the same CS2000 complex:
Site CallP (Required): This route target identifies all signaling subnets for local gateway devices.
Site Bearer (Required): This route target identifies all bearer subnets for local gateway devices.
Site OAM&P (Required): This route target identifies all management IP subnets for local gateway devices.
For a NOC VPN Site, one route target is defined for the EXPORT policy:
NOC OAM&P (Optional): This route target identifies all management IP subnets inside the NOC center with which all OAM&P clients need to communicate. It is required only if the NOC center is connected to the CS 2000 VPN site.
Please note that the numbering scheme of all RTs listed above must follow the form X:Y:
X is the CS 2000 Complex ID, which must be unique across the network.
Y is a random number, which must be unique inside a CS 2000 Complex.
In order to attach the appropriate RT to a route, an export policy is required on a PE router. If a subnet is shared by multiple services, then all corresponding RTs should be appended to the BGP update. For instance, an existing MG 15000 in a Gateway VPN site may already have allocated its signaling and bearer IP addresses in the same subnet. In this case, this route will be associated with the Site CallP RT together with the Site Bearer RT.
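The export rules above can be sketched as a small lookup: every locally learned subnet is tagged with all of the RTs its services require before the VPN-IPv4 update is built. The following Python model is purely illustrative, not router configuration; the prefixes and the policy table are hypothetical values for a Gateway VPN site in Complex 100.

```python
# Sketch of RT export policy: attach all matching route targets to a
# route before it is advertised as a VPN-IPv4 update. RT format is
# "X:Y" where X is the CS 2000 Complex ID. All values are illustrative.

# Hypothetical export policy for a Gateway VPN site in Complex 100:
# a subnet shared by several services carries every matching RT.
EXPORT_POLICY = {
    "10.1.1.0/24": ["100:400"],              # Site CallP (signaling only)
    "10.1.2.0/24": ["100:400", "100:500"],   # shared signaling + bearer subnet
    "10.1.3.0/24": ["100:600"],              # Site OAM&P
}

def export_route(prefix):
    """Return the BGP update for a locally learned prefix: the route
    plus every RT its export policy assigns (empty list = not exported)."""
    rts = EXPORT_POLICY.get(prefix, [])
    return {"prefix": prefix, "route_targets": rts}

update = export_route("10.1.2.0/24")
# The shared subnet is tagged with both the Site CallP and Site Bearer RTs.
```

This mirrors the MG 15000 example above: a subnet carrying both signaling and bearer traffic is exported with both RTs in a single update.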
2.4.2.3.2 Import policy
Before installing remote routes that have been distributed by another PE router, the VRF on an egress PE router is configured with an import policy. A PE router can only install a VPN-IPv4 route in a VRF if the route target attribute carried with the route matches one of the acceptable route targets in the import policy.
The import policy differs slightly between the case with a single CS 2000 Complex and the one with more than one CS 2000 Complex. We will discuss them separately:
Single CS 2000 Complex
A CS2000 VPN Site needs to import all routes associated with at least one of the following RTs:
Site CallP (Required)
Site Bearer (Optional) - Only if the CS 2000 site has gateway devices
Site OAM&P (Optional) - Only if the CS 2000 site has management devices running in server mode
NOC OAM&P (Optional) - Only if the NOC center is located in a different VPN site
A Gateway VPN site needs to import all routes associated with at least one of the following RTs:
CallP Carrier or CallP Public (Required)
Site Bearer (Required)
CS 2000 Bearer (Optional) - Only if the CS 2000 site has gateway devices
CS 2000 OAM&P (Optional) - Only if the CS 2000 site has management devices running in server mode
NOC OAM&P (Optional) - Only if the NOC center is located in a different VPN site
A NOC VPN site needs to import all routes associated with at least one of the following RTs:
CS 2000 OAM&P (Required)
Site OAM&P (Required)
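The import side can be sketched the same way: a received VPN-IPv4 route enters a VRF only if its RT set intersects the VRF's import list. A minimal Python illustration, with hypothetical RT values and prefixes:

```python
# Sketch of RT import matching: a PE installs a received VPN-IPv4
# route into a VRF only if the route carries at least one RT listed
# in that VRF's import policy. All values are illustrative.

def import_route(vrf_import_rts, route):
    """Return True if any RT on the route matches the VRF import policy."""
    return bool(set(route["route_targets"]) & set(vrf_import_rts))

# A Gateway VPN site VRF importing CallP Carrier, Site Bearer and NOC OAM&P:
gateway_vrf = ["100:100", "100:500", "100:700"]

callp_route = {"prefix": "10.2.0.0/24", "route_targets": ["100:100", "100:300"]}
oam_only    = {"prefix": "10.3.0.0/24", "route_targets": ["100:300"]}

import_route(gateway_vrf, callp_route)  # True  - 100:100 matches
import_route(gateway_vrf, oam_only)     # False - no RT in the import list
```

Note that a single matching RT is sufficient; the other RTs on the route are simply ignored by this VRF.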
Multiple CS 2000 Complexes
In addition to the RTs listed above for local devices within the same complex, a CS2000 VPN site also needs to import all routes with at least one of the following RTs:
CallP Carrier or CallP Public from the other CS2000 complexes (Required)
Site Bearer from the other CS2000 complexes (Required)
CS2000 Bearer from the other CS2000 complexes (Optional) - Only if the remote CS 2000 site has gateway devices
In addition to the RTs listed above for local devices within the same complex, a Gateway VPN site also needs to import all routes with at least one of the following RTs:
Site Bearer from the other CS2000 complexes (Required)
CS2000 Bearer from the other CS2000 complexes (Optional) - Only if the remote CS 2000 site has gateway devices
Worth noting is that if a customer would like to apply security policies (traffic filters, etc.) in addition to the import/export RT policies, they should be applied on the CE routers.
2.4.2.4 Route Target examples
2.4.2.4.1 Single CS2000 complex examples
Example 1:
Assuming that no gateway device resides in the CS2000 VPN site, here is a sample RT implementation where the NOC center is separate from the CS-LAN.
Figure 7
The above figure shows that the PE routers export the CallP VLAN (Carrier) subnet with a route target of 100:100 and the OAM&P VLAN subnet with a route target of 100:300. Both Gateway VPN Sites have MG 15000 shelves, which are provisioned with a VRAP port on each VR to host VSP subnets. Because the VRAP subnet actually carries both signaling and bearer traffic, this route is marked with two RTs: 100:400 for signaling and 100:500 for bearer. Also, the route for the out-of-band management IP address is assigned a route target of 100:600. The OAM&P IP subnet from the NOC center is exported to the BGP/MPLS VPN backbone with a route target of 100:700. In the following table, we summarize the export policies and list the corresponding import policies for each site:
VPN Site       | RT for Exported Routes    | RT for Imported Routes
CS2000 Site    | 100:100 100:300           | 100:400 100:700
Gateway Site A | 100:400 100:500 100:600   | 100:100 100:500 100:700
Gateway Site B | 100:400 100:500 100:600   | 100:100 100:500 100:700
NOC            | 100:700                   | 100:300 100:600
Example 2: Here is another example assuming that the CS-LAN contains an MG 15000 shelf with a VR and VRAP port provisioned. The rest of the conditions are the same as in the previous example. The import and export policies should be altered accordingly: the CS2000 VPN site now needs to mark the MG 15000 subnet with a new route target of 100:200, which is designated for bearer traffic. The gateway VPN sites need to import this route target in order to make voice calls to/from the CS-LAN MG 15000.
Figure 8
In the following table, we summarize the export policies and list the corresponding import policies for each site:
VPN Site       | RT for Exported Routes    | RT for Imported Routes
CS2000 Site    | 100:100 100:200 100:300   | 100:400 100:500 100:700
Gateway Site A | 100:400 100:500 100:600   | 100:100 100:200 100:500 100:700
Gateway Site B | 100:400 100:500 100:600   | 100:100 100:200 100:500 100:700
NOC            | 100:700                   | 100:300 100:600
2.4.2.4.2 Multiple CS2000 complex examples
Example 3: In this example, we have two CS2000 complexes, both managed by the same NOC center. Each CS2000 complex contains one Gateway VPN site, which runs an MG 15000 for voice service. There is also an MG 15000 in each CS2000 VPN site. To differentiate the services offered by the two CS 2000 Complexes, the RTs for CS2000 Complex #1 begin with 100 and the RTs for CS2000 Complex #2 begin with 200.
Figure 9
In the following table, we summarize the export policies and list the corresponding import policies for each site:
VPN Site       | RT for Exported Routes    | RT for Imported Routes
CS2000 Site A  | 100:100 100:200 100:300   | 100:400 100:500 100:700 200:100 200:200 200:500
Gateway Site A | 100:400 100:500 100:600   | 100:100 100:200 100:500 100:700 200:200 200:500
CS2000 Site B  | 200:100 200:200 200:300   | 200:400 200:500 100:700 100:100 100:200 100:500
Gateway Site B | 200:400 200:500 200:600   | 200:100 200:200 200:500 100:700 100:200 100:500
NOC            | 100:700                   | 100:300 100:600 200:300 200:600
2.4.2.5 MP-iBGP
After the VRF is populated with the VPN routes, these routes need to be advertised across the BGP/MPLS VPN backbone. Multiprotocol BGP (MP-BGP), rather than plain BGP-v4, performs this job because the route update conveys more information than just an IP subnet. Assuming that the BGP/MPLS VPN backbone is composed of a single AS, only MP-iBGP sessions are required. The BGP standard states that a BGP speaker must peer with all other BGP speakers within the same AS. This rule still applies in the BGP/MPLS VPN scenario. In other words, all PE routers in the backbone must be fully meshed by MP-iBGP sessions, and the VPN routes in the form of VPN-IPv4 prefixes are exchanged between them. The routing information inside the route update between a pair of PE neighbors will be invisible to any P router along the path. So the VPN routes are only stored on PE routers, not on P routers.
Note: Although the restriction for fully-meshed iBGP neighbors can be lifted by using BGP Confederation or Route Reflection, we do not cover these methods in this document. Please refer to the vendor's technical documents for details.
Always run MP-iBGP sessions between loopback interfaces. MP-iBGP sessions can stay active as long as there is a valid path between them. By this means, MP-iBGP sessions can even survive a short loss of connectivity with minimal impact on data forwarding. After a failure, the IGP within the BGP/MPLS VPN backbone will re-establish the path between PE routers. Remember that, in this configuration, a PE router sets the BGP NEXT_HOP in the route update to its own loopback IP address. If a BGP speaker does not possess an IGP route to the BGP NEXT_HOP address, the BGP route will be ignored. Thus, the IP address of the loopback interface on any PE router must be announced by the IGP so that the BGP NEXT_HOP address of all locally originated routes can be successfully resolved by other PE routers.
The MP-iBGP update must carry the extended community attributes such as the route target or the route origin. On some vendors' platforms, this process is not done automatically. Additional provisioning is required to instruct the PE router to support the extended community attributes on a per-neighbor basis.
2.4.2.6 MPLS interfaces
On a PE router, all interfaces connected to other PE routers or P routers must participate in MPLS. Some vendors also require the PE-to-CE interfaces to be part of MPLS so that the PE router can create an MPLS label for the private interface. Please refer to the vendor's technical documents for details.
2.4.3 Routing protocol between PE and CE
There are several possible ways that a PE router can receive routes from a CE router.
Carrier VoIP only supports the following protocols:
Static Routes
OSPF
EBGP
2.4.3.1 Static routes
Static routes provide a simple solution for a VPN site. On each PE router, two equal-cost static routes need to be provisioned for every Carrier VoIP subnet, using the direct redundant links to the dual CE routers as the outgoing interfaces. If the IP address space for all Carrier VoIP related devices in the site cannot be
summarized by one prefix, multiple static routes are required per PE router per link. Please note that these static routes must be configured in the Carrier VoIP VRF, not in the global routing table. Also worth mentioning: on the PE router, equal-cost multipath must be enabled in the Carrier VoIP VRF. This feature allows multiple static routes for the same destination to be installed at the same time. If one static route is removed due to a network failure, the presence of another static route in the VRF table will suppress the BGP update if BGP route aggregation is used. Please refer to section 2.4.5 Redundancy on page 44 for details.
On the CE router side, static default routes using the redundant uplinks to the dual PE routers are required. Since the CE routers do not have any knowledge of the BGP/MPLS VPN network, the static routes are provisioned in the normal way, i.e. in the global routing table.
Carrier VoIP VPN routes for a site must be advertised via the MP-iBGP sessions to all other PE routers as VPN-IPv4 prefixes. This process is NOT done automatically if the route is learned via a static route. Inside the VPN-IPv4 address family under the BGP protocol, a redistribution policy must be configured in order to announce local Carrier VoIP routes across the network. The RD and RTs will be attached to all locally generated routes according to the export policies. Please note that using Static Routes between PE and CE routers may lead to a routing blackhole in the network. Thus, Nortel does not recommend this method for customers using BGP/MPLS VPN.
2.4.3.2 OSPF
This option can be desirable if customers already have OSPF running in their existing sites and would like to exchange routing information between them across the BGP/MPLS VPN core network. Unlike Static Routes, using a dynamic routing protocol such as OSPF can eliminate the blackhole problem.
2.4.3.2.1 OSPF area design
To support OSPF connectivity between sites, the MPLS VPN cloud acts as a Super Backbone on top of the OSPF two-level hierarchy. This third layer interconnects the VPN sites which are running OSPF.
Figure 10
In the configuration with Site Area 0 support, all links between PE and CE routers are placed into Area 0. More than one site can run Area 0 as long as the Area 0 is directly connected to the MPLS VPN backbone. In the OSPF update sent to a CE router, the advertising router IP address is set to the OSPF Router ID of the PE router itself. So from the CE router's perspective, if the received LSA is a Type 3 Summary, the subnet appears to be located in an OSPF area directly attached to the PE router, which acts as an ABR. If the received LSA is a Type 5 External, then the subnet appears to be redistributed by the PE router from another routing protocol. Here, the PE router acts as an ASBR. In either case, the BGP/MPLS VPN backbone is totally transparent to the CE routers. From a PE router's standpoint, the CE routers become ABRs for the non-backbone areas residing in the same VPN site. As a result, CE routers gain the capability of performing route summarization. Nortel recommends that customers enable route summarization on both CE routers so the total number of OSPF updates sent to the PE routers is reduced.
Figure 11
In the configuration without Site Area 0 support, all links between PE and CE routers are placed into a non-backbone area. In comparison to the previous configuration, PE routers are still deemed ABRs and ASBRs, but CE routers are no longer ABRs. Route summarization has to be done on the PE routers by the BGP Route Aggregation feature. Worth noting is that a non-backbone area cannot span multiple sites. Otherwise, this area will appear to be partitioned either by the OSPF backbone or by the MPLS VPN Super Backbone.
2.4.3.2.2 Route announcement
Depending on the vendor's implementation, redistribution between OSPF and MP-BGP may not happen automatically, or only routes with certain OSPF route types will be redistributed by default. Thus, caution should be taken to make sure that a PE router exports all OSPF VPN routes to the BGP/MPLS VPN backbone with the proper commands and that all relevant route types are included. In the opposite direction, both PE routers need to advertise the default route to the local VPN site. This
can be done by redistributing a static default route into OSPF. The next-hop address of this static default route can point to a loopback interface so it stays active independently of any physical interface.
2.4.3.2.3 OSPF Domain ID
When propagating OSPF routes across the BGP/MPLS VPN backbone, a PE router marks every route with an attribute called the OSPF Domain ID to indicate whether the route originated within the same OSPF domain or outside it. Technically, all Carrier VoIP routes learned from CE routers via OSPF should be in the same OSPF domain. However, the default setting of the Domain ID on different vendors' platforms can lead to unexpected results. On Cisco routers, the OSPF Domain ID is derived from the OSPF process number. If the OSPF process numbering is inconsistent between two PE routers, the receiving PE router will convert all incoming OSPF updates with a different Domain ID to Type 5 External routes. On Juniper routers, the default Domain ID is the null value 0.0.0.0. Juniper routers treat all routes either without a Domain ID or with a different Domain ID value as Type 5 External routes. As we can see, the OSPF Domain ID controls the LSA translation during route propagation. If a customer would like to advertise OSPF-specific routes into a VPN site, converting Intra-area (Type 1 & Type 2) or Inter-area (Type 3) LSAs into Type 5 LSAs may lead to slower convergence. Remember that OSPF External LSAs are stored in a separate topology table; when SPF runs, OSPF External LSAs are examined last. Thus, to keep the original LSA type across various router platforms, Nortel recommends setting the same OSPF Domain ID on all PE routers serving Carrier VoIP traffic.
2.4.3.2.4 Routing information loop
A routing information loop can occur when using a dynamic routing protocol such as OSPF in a multihomed site (i.e. the site is subtended to the BGP/MPLS VPN backbone by more than one link).
As illustrated in the following diagram, when a CE router announces a route to a PE router, the PE router publishes the route to its PE router peer via MP-iBGP, which in turn advertises the OSPF route back to another CE router in the same site.
Figure 12
In the other direction, a VPN-IPv4 route learned from the MP-iBGP sessions on a PE router will be advertised to a CE router. After installing the route into its routing table, the CE router will announce this route back to another PE router, and in turn it gets propagated to the BGP/MPLS VPN backbone again. Although the routing information loop may be harmless, it consumes unnecessary resources on all routers involved. There are several means to correct this behavior. However, each of them has its own limitations.
2.4.3.2.5 OSPF Down-Bit
For any Type 3 Summary Route received through MP-iBGP, a PE router will set the high-order bit of the LSA Options field, known as the DN Bit (Down Bit), before generating LSAs to the attached CE routers. When a PE receives a Type 3 LSA from a CE router with the DN bit set, the information from that LSA will not be used during the OSPF route calculation, because the PE router knows this route was introduced into OSPF by another PE router in the network. As a result, the LSA is not translated into a BGP route.
2.4.3.2.6 OSPF VPN route tag
Marking the DN-bit is done automatically on PE routers. However, not all vendors support this feature. In addition, the DN bit is ignored in other LSA types; it can only prevent the routing information loop for Type 3 Summary Routes. OSPF route tagging is required to ensure that a Type 5 LSA generated by a PE router will be ignored by any other PE routers connected to the same site. A special tag called the VPN Route Tag or Domain Tag is added for this purpose. For every Type 5 LSA advertised to CE routers, the PE router sets the OSPF route tag equal to its locally configured VPN Route Tag. This tag identifies the route as having come from a PE router, and it remains the same as the LSA traverses the site. When another PE receives the same LSA via OSPF, it compares the OSPF route tag against its own value. If they are equal, then the information from the LSA cannot be used in the SPF computation. The VPN Route Tag is derived from the AS number of the PE router. To keep consistency across multi-vendor platforms, PE routers attached to the same site must be explicitly provisioned with the same VPN Route Tag, and its value must be unique across the network.
2.4.3.2.7 Site of Origin
In general, a PE may distribute to a CE any route which the PE has placed in the forwarding table. However, there is one exception: if a route contains the Route of Origin attribute, also called Site of Origin (SOO), which identifies a particular site, then this route must never be redistributed back to any CE belonging to the same site. By this means, when a PE router advertises a route learned from its CE router via OSPF out to its MP-iBGP sessions, it appends the SOO attribute to the update. When another PE router receives the update, it will not redistribute the route to OSPF if the SOO attribute of the route is equal to its locally configured SOO value. This stops the routing information loop.
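The loop-prevention checks can be sketched in a few lines of Python, assuming simplified LSA and route records whose field names are illustrative rather than any vendor's data model: the DN-bit check covers Type 3 LSAs, the VPN Route Tag check covers Type 5 LSAs, and the SOO check guards redistribution from MP-iBGP back into the originating site.

```python
# Sketch of the loop-prevention checks described above. A PE ignores a
# Type 3 LSA with the DN bit set, ignores a Type 5 LSA whose tag equals
# its own VPN Route Tag, and never redistributes a BGP route back into
# a site whose SOO it carries. Field names are illustrative.

def accept_ospf_lsa(pe_vpn_route_tag, lsa):
    """Return True if a PE may use the LSA in its SPF/redistribution."""
    if lsa["type"] == 3 and lsa.get("dn_bit"):
        return False                      # originated by another PE
    if lsa["type"] == 5 and lsa.get("tag") == pe_vpn_route_tag:
        return False                      # tagged by a peer PE for this site
    return True

def redistribute_to_site(site_soo, bgp_route):
    """Return True if a BGP route may be redistributed to CEs of a site."""
    return bgp_route.get("soo") != site_soo

accept_ospf_lsa("100", {"type": 3, "dn_bit": True})              # False
accept_ospf_lsa("100", {"type": 5, "tag": "100"})                # False
redistribute_to_site("100:1", {"prefix": "10.0.0.0/24",
                               "soo": "100:1"})                  # False
```

Each check rejects only routes that a peer PE has already injected, so legitimate intra-site routes still pass.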
Please note that all three methods must be utilized at the same time on a PE router. From the BGP/MPLS VPN backbone to a VPN site, the PE router checks the SOO attribute to determine whether a BGP update should be redistributed into OSPF. From a VPN site to the BGP/MPLS VPN backbone, the PE router examines the DN-bit or Route Tag to decide whether a Type 3 or Type 5 OSPF LSA is valid. Omitting any of these methods in the implementation will lead to unexpected behavior for Carrier VoIP traffic.
2.4.3.3 BGP
2.4.3.3.1 BGP neighbor relation
A Carrier VoIP site can be attached to the BGP/MPLS VPN backbone through the BGP protocol. The AS number of the site must be different from the one used by the PE routers and must be unique across the network. Unlike MP-iBGP, the EBGP peer sessions between a PE and a CE router must be configured using the physical IP addresses of the directly connected interfaces. Because of this, a link failure to an external neighbor will immediately trigger the teardown of the corresponding EBGP session, instead of waiting for the hold
timer to expire. From the standpoint of the site, the PE-to-CE connection is just a standard BGP peering through which the exchange of IPv4 prefixes can be achieved. Since both CE routers reside in the same AS, an iBGP session is required between them.
Figure 13
If possible, the Keepalive interval and Hold Time interval must be reduced to 1 second and 3 seconds respectively between PE and CE routers. Surveillance is required to monitor the peer establishment process in order to make sure that the KEEPALIVE and Hold Timer are successfully negotiated to the predefined values on both sides. No redistribution is required in this configuration; the translation from IPv4 routes to VPN-IPv4 routes is done automatically on PE routers. As in the OSPF configuration, both PE routers must originate a default route into the local VPN site. For any unknown destination, the CE routers will forward the packets to the PE routers following the default route learned via EBGP.
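The hold-time negotiation that the surveillance step verifies follows RFC 4271: each speaker proposes a hold time in its OPEN message, the session uses the smaller of the two, and the keepalive interval is commonly derived as one third of the negotiated hold time. A sketch of that arithmetic:

```python
# Sketch of BGP hold-time negotiation (RFC 4271): the session hold
# time is the minimum of the two proposed values, and the keepalive
# interval is commonly one third of the negotiated hold time. This is
# why both sides must be checked after peering, not just one.

def negotiate_timers(local_hold, peer_hold):
    hold = min(local_hold, peer_hold)
    keepalive = hold // 3
    return hold, keepalive

# A CE tuned to a hold time of 3 peering with a PE left at the default 180:
negotiate_timers(3, 180)   # (3, 1) - the aggressive timers win
negotiate_timers(180, 180) # (180, 60)
```

Because the minimum wins, configuring 3/1 on only one side of the session is sufficient to pull the timers down, but the surveillance check above confirms the negotiation actually succeeded.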
2.4.3.3.2 Routing information loop
A routing information loop can also occur when BGP is used between the BGP/MPLS VPN backbone and a VPN site. As mentioned in the previous section, a BGP extended community called Route of Origin (or Site of Origin, SOO) is designed to prevent routing information loops in complex network topologies. When this feature is enabled, the BGP process on a PE router checks each received route for the SOO value and filters based on the following conditions:
If a route is received from a CE router without any SOO value, the PE router knows it originated internally from the site, and the SOO value configured on the receiving interface is appended to the route before it is redistributed into MP-iBGP.
If a route is received from MP-iBGP with an associated SOO that identifies the same local VPN site, the update is discarded and never redistributed to any CE for that site.
If a route is received from MP-iBGP with an associated SOO that does not match the one for the local VPN site, then the route is accepted and will be redistributed to the CE routers.
This filtering is designed to prevent transient routes from being relearned from the originating site, which prevents transient routing loops from occurring.
2.4.3.3.3 MED
As part of path selection, PE routers compare the MED value for the same route advertised by both CE routers. This attribute is normally used to influence the entry point selection for inbound traffic into a VPN site. Different MED values can lead to asymmetric traffic flows; in other words, one CE router is always preferred over the other for incoming traffic. Thus, Nortel recommends that customers use the same MED value on both CE routers to advertise the VPN routes.
2.4.3.4 Route announcement
From a VPN site's standpoint, it needs to know how to reach remote Carrier VoIP subnets through the BGP/MPLS VPN backbone.
A CE router does not have to maintain the whole VPN routing information before it can start routing Carrier VoIP traffic. Remember that the only entry points to the BGP/MPLS VPN backbone are the PE routers. The outbound traffic from a VPN site can simply be passed to the PE routers and in turn get routed from there. For maximum stability, Nortel recommends announcing only the default route from PE routers to CE routers via a dynamic routing protocol. In the case of static routes, only the default static routes are required on the CE routers.
A PE router, however, must have a comprehensive view of the network. For every single Carrier VoIP prefix, the PE router needs to have a valid path in the MP-iBGP table with MPLS labels associated with it. Thus, it is very important for CE routers to announce ALL local VPN routes to the PE routers via the dynamic routing protocol. In the case of static routes, the static route entries configured on the PE routers must fully cover all local VPN subnets. Failure to do so will cause unexpected service outages.
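The asymmetry above falls out of longest-prefix-match forwarding, which the following Python sketch illustrates with hypothetical prefixes and next-hop labels: a CE with only a default route can still reach every remote destination through its PE, while a PE missing a specific VRF route has nowhere to send the packet.

```python
# Sketch of why a CE can run on a default route alone while the PE
# cannot: forwarding uses longest-prefix match, so the CE's 0.0.0.0/0
# entry catches every remote destination, but the PE must hold a
# specific VRF route (with its MPLS label) to deliver the packet.
import ipaddress

def lookup(table, dst):
    """Longest-prefix match: return the entry for dst, or None."""
    dst = ipaddress.ip_address(dst)
    best = None
    for prefix, nexthop in table.items():
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, nexthop)
    return best[1] if best else None

ce_table = {"0.0.0.0/0": "PE1"}                    # default route only
pe_vrf   = {"10.20.1.0/24": ("PE2", "label 30"),   # every Carrier VoIP prefix
            "10.20.2.0/24": ("PE3", "label 31")}

lookup(ce_table, "10.20.1.5")  # 'PE1' - the default route suffices on the CE
lookup(pe_vrf, "10.20.1.5")    # ('PE2', 'label 30')
lookup(pe_vrf, "10.99.0.1")    # None - a missing VRF route drops the traffic
```

The `None` result in the last lookup is exactly the "unexpected service outage" the rule above warns about when a CE fails to announce a local subnet.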
2.4.4 End-to-End network outage
Network outages are critical to voice applications. Unlike data traffic, Carrier VoIP elements are sensitive to delay and packet loss. Some gateway devices such as the MG 15000 will tear down a call if no bearer traffic is present for more than 5 seconds. When any type of impairment (link cutoff, node reboot, etc.) takes place in the network, routers need to learn about the topology changes and synchronize their view with other devices. The time they take primarily depends on how the routing protocol reacts to the failure.
Transmitting voice packets across the BGP/MPLS VPN backbone implies that MP-BGP must be involved. Unlike an IGP, BGP was designed to provide a stable network environment at large scale; in general, it is not good at fast re-convergence in the event of failure. In addition, the introduction of the BGP/MPLS VPN architecture impacts the end-to-end convergence time between members of a VPN. Network failures can occur in two areas:
Within the BGP/MPLS VPN backbone
Within a Carrier VoIP VPN site
2.4.4.1 Within BGP/MPLS VPN backbone
The end-to-end network outage in this scenario must be assessed from two aspects:
IGP convergence time within the backbone
The time for backbone routers to reroute the traffic
At the time of a failure, the IGP will find an alternative path toward the BGP next-hop if the backbone is designed in a redundant way. The IGP convergence time should be fast enough to restore the connectivity between two BGP speakers before the BGP Hold Timer expires. In addition, MPLS features like Fast Reroute (FRR) will perform a temporary local repair before the IGP finds a permanent replacement. Thus MP-iBGP sessions inside the BGP/MPLS VPN backbone should be quite stable even when one network segment becomes unstable. The time for backbone routers to reroute the traffic can be minimized by MPLS protection methods such as FRR or Standby Secondary LSP, achieving failover times at the millisecond level.
As we can see from the above discussion, a network failure inside the BGP/MPLS VPN backbone will not trigger any BGP convergence for the routes originated from Carrier VoIP VPN sites, because the stability of the MP-iBGP sessions is maximized by the IGP together with MPLS. The only remaining concern is the interruption of Carrier VoIP traffic flowing between VPN sites, which is minimal.
2.4.4.2 Within Carrier VoIP VPN site
In comparison with the above case, a network failure within a VPN site imposes more challenges on Carrier VoIP network engineering. First, the VPN site loses the assistance of all MPLS protection methods because MPLS LSPs do not extend beyond the PE routers. Second, IGP convergence within the site is not sufficient by itself; the responsibility for the end-to-end convergence delay is primarily handed over to the BGP/MPLS VPN backbone. In the following paragraphs, we discuss several factors which are essential for the end-to-end convergence speed as perceived by BGP/MPLS VPN customers:
Figure 14
1. The route propagation between a local CE router and a local PE router
This item applies only to implementations where a dynamic routing protocol is used. Running OSPF or BGP between PE routers and CE routers definitely has an effect on convergence; however, this impact is not specific to the BGP/MPLS VPN service.
2. The route redistribution process from the relevant VRF table to the VPN-IPv4 table on the local PE router
After the PE router reflects the change in its VRF table, it needs to redistribute the route to the BGP VPN-IPv4 table. This process is event-driven and should take place at the millisecond level.
3. The advertisement of a VPN-IPv4 prefix across the MP-BGP backbone between PE and P routers
The BGP protocol intentionally adds delay between successive advertisements for the same destination sent to the same BGP neighbor. In a stable network, the impact of this timer on end-to-end convergence is not significant because a new VPN route only triggers the BGP update once. In addition, route withdrawal is not restricted by this timer: if a route becomes unreachable, the BGP protocol will immediately revoke it from all of its neighbors. However, this timer plays an important role in an unstable environment. For example, a route fluctuation as the result of a malfunctioning port can lead to continuous route advertisements and withdrawals. The reappearance of the route will not be seen by a P/PE router until the advertisement timer expires on its upstream BGP neighbor. A smaller advertisement interval results in faster convergence; however, it may also stress a router which has a large number of BGP sessions. Thus Nortel does not recommend that customers modify the advertisement interval from the default value.
4. The route redistribution process from the VPN-IPv4 table on a remote PE router to the relevant VRF table
This process is the most important factor for end-to-end delay because it applies to all newly learned routes. The BGP protocol periodically scans the VPN-IPv4 routing table to discover any new route that needs to be announced to the CE routers. Some major vendors default this timer to 15 seconds with a minimum value of 5 seconds. This means that if a PE router receives a BGP update which announces a new route, the change may not be recognized by its attached CE router until the next scan runs, up to 15 seconds later. Like the BGP General Scanning process, the BGP import scan routine is very CPU-intensive. A shorter BGP scan time for a router with a large BGP table can significantly impact performance. Thus, changing the scan time from the default is strongly discouraged.
5. The route propagation between the remote PE router and a remote CE router
The delay in this stage is the same as in Step 1. Using BGP/MPLS VPN does not cause extra delay when the remote PE router advertises the route to a CE router.
2.4.5 Redundancy
As discussed in the previous section, reducing BGP timers can be an option to improve the end-to-end convergence time. However, it may lead to router performance degradation. In addition, if a local network issue is propagated across the backbone, all PE/P routers plus the remote CE routers are forced to react accordingly. This consumes more local resources, including physical memory and CPU time.
2.4.5.1 BGP route aggregation

Given the requirement that the network within a VPN site must be engineered with maximum redundancy, the solution to minimize the end-to-end outage is to confine the instability within a VPN site as long as an alternative path still exists internally. A network failure in a VPN site need not be propagated across the whole BGP/MPLS VPN domain. All routers beyond the PE routers for that site should keep directing traffic as normal because they are not aware of any topology change. By this means, only intra-site devices are involved in the route updates and route calculations. To achieve this goal, we must utilize BGP route aggregation. This feature can suppress multiple more specific routes into a single advertisement by using the summary-only option. The aggregate will be announced as long as there is at least one network falling within the specified range in the BGP table. When a PE router learns multiple Carrier VoIP subnets (10.10.1.0/24 - 10.10.3.0/24) from its attached CE routers via OSPF, all of them will be installed in the VRF table and, in turn, advertised as multiple individual BGP routes. However, if BGP route aggregation is enabled, we can instruct the PE router to generate a single update of 10.10.0.0/22 to its MP-iBGP neighbors. The more specific routes will be marked as suppressed in the BGP VPN-IPv4 routing table and become inactive.
Figure 15
If one of the routes learned from the CE routers becomes unreachable, for instance 10.10.2.0/24, that route will be removed by OSPF after SPF calculation. The route deletion in the VRF table will immediately trigger the invalidation of the corresponding BGP route in the local VPN-IPv4 table. However, the
PE router will not generate any route update because it still has two other valid routes falling within the 10.10.0.0/22 range in its VPN-IPv4 table. If any packet arrives at the PE router for the unreachable subnet, the PE router will silently discard it. As we can see, BGP route aggregation hides the instability of a VPN site from the rest of the network. If the links between PE routers and CE routers are located in OSPF Area 0.0.0.0, this improvement can be further enhanced by using OSPF route summarization. Remember that, in this configuration, the CE routers act as ABRs, which are capable of converting intra-area routes (Type 1 and Type 2) into inter-area routes (Type 3). If the VPN site has a two-tier OSPF implementation, we can reduce the number of routes advertised to the PE routers by summarizing the routes from all subtended non-backbone areas.
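The suppression behavior described above can be sketched with Python's ipaddress module (a minimal illustration, not router code; the prefixes are the Carrier VoIP subnets from the example):

```python
import ipaddress

# The 10.10.0.0/22 aggregate from the example above: it remains
# advertised as long as at least one more specific route from the
# VRF table still falls within its range.
AGGREGATE = ipaddress.ip_network("10.10.0.0/22")

def aggregate_still_valid(vrf_routes):
    """Return True if any VRF route falls within the aggregate range."""
    return any(ipaddress.ip_network(r).subnet_of(AGGREGATE)
               for r in vrf_routes)

# 10.10.2.0/24 has been withdrawn by OSPF, but two specifics remain,
# so the PE router keeps advertising 10.10.0.0/22 unchanged.
remaining = ["10.10.1.0/24", "10.10.3.0/24"]
still_advertised = aggregate_still_valid(remaining)  # True
```

This is why the failure of a single subnet stays invisible to the rest of the VPN: the aggregate, not the specifics, is all the remote PE routers ever see.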
Figure 16
The benefit of BGP route aggregation is maximized in the redundant configuration. In Carrier VoIP, Nortel requires fully meshed connections between dual PE routers and dual CE routers for a VPN site. Since both CE routers advertise the same prefix, if OSPF load balancing is enabled, the PE router will install two different routing entries for the same destination in its local VRF table. When the BGP scanning process begins, BGP will walk through the VRF table and redistribute both of them into the BGP VPN-IPv4 table as two individual paths. If the route through one CE router becomes invalid, the PE router will not send out an update because the route through the other CE router still exists. It is worth noting that extreme caution should be taken when using BGP route aggregation: an incautious implementation can lead to route confusion in the network. Please refer to the example in the next
section for details.

2.4.5.2 IP address scheme

In order to fully utilize BGP route aggregation, IP address planning is very important in the field implementation. Nortel requires that the subnets for Carrier VoIP elements within a VPN site be assigned in a contiguous manner so they can be easily summarized. Furthermore, the aggregated subnet for one VPN site must be unique within the Carrier VoIP VPN. Failure to obey this rule will cause routing confusion in the network, as the following example shows.
Figure 17
Route confusion
As shown in the above diagram, VPN Site #3 receives two route updates, one from VPN Site #1 and one from VPN Site #2, for the same prefix of 10.10.0.0/22. Since both sites aggregate their local subnets into a single advertisement, the more specific routes are not visible to router PE3. After the BGP path selection process, router PE3 only installs the best path for the route 10.10.0.0/22. Assuming router PE2 is preferred as the nexthop, if a gateway device within VPN Site #3 originates a voice call to
another gateway device in VPN Site #1, the packets will be sent to router PE2 instead of PE1.

2.4.5.3 BGP Route Refresh and Soft Reconfiguration

When a BGP policy on a PE router is changed, for instance by adding more import route targets to an existing VRF, the PE router might need to obtain VPN-IPv4 routes that were previously discarded. This presents a problem with conventional BGP4 because once BGP peers synchronize their routing tables, they do not exchange routing information again until there is a change in the network. As a solution, all MP-iBGP peer sessions from the local PE router to other PE routers need to be reset in order to trigger route re-advertisement. The new route policy is applied as the PE router repopulates its VRFs. Clearly, this process will cause a large amount of routing updates across the network. Soft reconfiguration is a feature that processes the new policy without bouncing the peer session. Although it can apply a new policy in a non-intrusive manner, its drawback must be explained too. Soft reset in the inbound direction introduces additional memory overhead. When a route policy is modified to reject a route received from a peer, that prefix information may not exist in the BGP table. Thus, by setting the soft-reconfiguration-in parameter to enable for a particular neighbor, the BGP process must maintain all prefix information advertised by that neighbor, even if a route is not installed in the routing table due to the policy. Compared with soft reconfiguration, Route Refresh is a better approach because it does not require any local copies of BGP updates. With this feature enabled, a BGP router requests a retransmission of routing updates from its neighbors to obtain any missing information. Route Refresh is advertised as an optional capability during BGP neighbor establishment. Not all routers support this feature yet; please check the vendor's technical documentation for details.
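The memory trade-off between Soft Reconfiguration and Route Refresh can be sketched as follows (an illustrative model, not vendor code; the policy and prefixes are invented):

```python
# With soft-reconfiguration-in enabled, the BGP process must keep an
# unfiltered copy of everything the peer advertised (including routes
# the import policy rejects) so a new policy can be re-applied locally.
# Route Refresh avoids this copy by re-requesting updates from the peer.
def process_updates(advertised, import_policy, soft_reconfig_in):
    accepted = [p for p in advertised if import_policy(p)]
    unfiltered_copy = list(advertised) if soft_reconfig_in else []
    return accepted, unfiltered_copy

import_policy = lambda prefix: prefix.startswith("10.10.")
advertised = ["10.10.1.0/24", "192.168.5.0/24", "10.10.3.0/24"]

accepted, stored = process_updates(advertised, import_policy,
                                   soft_reconfig_in=True)
# accepted holds two prefixes, but stored retains all three, including
# the rejected 192.168.5.0/24 -- the memory overhead described above.
```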
If Route Refresh is available, Nortel recommends that customers use this feature on all PE routers. It is also recommended on CE routers if BGP is running between the backbone and a VPN site. If Route Refresh is not available, Nortel recommends that customers use Soft Reconfiguration. However, extreme caution should be taken to make sure no PE router suffers from memory exhaustion.

2.4.5.4 BGP route dampening

The general idea of route dampening is to penalize misbehaving routes, while stable routes are advertised with minimal delay. However, under certain circumstances, normal network events can lead to route suppression in a Carrier VoIP network. In contrast to data traffic, voice traffic is very sensitive to network outages. The hold-down period can easily cause a severe service outage on a Carrier VoIP element. Due to this concern, if BGP routers support setting dampening parameters for individual prefixes, Nortel requires that all Carrier VoIP related routes be excluded from dampening. Otherwise, Nortel requires
that BGP flap dampening is disabled on all PE routers, and CE routers if eligible.
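The hold-down risk can be seen from the dampening arithmetic itself. The sketch below uses commonly cited vendor defaults (1000 penalty per flap, suppress limit 2000, reuse limit 750, 15-minute half-life); these figures are assumptions for illustration, not Nortel requirements:

```python
import math

PENALTY_PER_FLAP = 1000      # assumed vendor default
SUPPRESS_LIMIT = 2000        # route suppressed above this penalty
REUSE_LIMIT = 750            # route re-advertised below this penalty
HALF_LIFE_SECONDS = 900      # 15-minute exponential decay half-life

def decayed_penalty(penalty, elapsed_seconds):
    """Penalty remaining after exponential decay with the configured half-life."""
    return penalty * math.exp(-math.log(2) * elapsed_seconds / HALF_LIFE_SECONDS)

# Three quick flaps push the penalty to 3000, above the suppress limit...
penalty = 3 * PENALTY_PER_FLAP
suppressed = penalty > SUPPRESS_LIMIT   # True: route is held down

# ...and roughly two half-lives (30 minutes) must pass before the
# penalty decays to the reuse limit (3000 -> 1500 -> 750). A voice
# route unreachable for that long is a severe service outage.
minutes_held_down = 2 * HALF_LIFE_SECONDS / 60
```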
Encoding OC3 # voice connections Management threshold # voice connectionsa Ports supported at 3.6ccs 930 820
7585
9303
9788
1908
Table 6 ERS 8600 ATM MDA voice connections
a. This value is the threshold at which traffic may begin to degrade due to packet loss. It is based on lines generating 3.6 ccs High Day traffic. Different ccs rates will change the number of ports supported.
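The arithmetic behind these thresholds can be checked directly: with 1 Erlang = 36 ccs, a population of lines at a given ccs rate translates into an offered load that is compared against the channel threshold (a worked sketch, not an official engineering tool):

```python
CCS_PER_ERLANG = 36  # 1 Erlang = 36 ccs (hundred call-seconds per hour)

def offered_erlangs(lines, ccs_per_line):
    """Offered load in Erlangs for a population of lines."""
    return lines * ccs_per_line / CCS_PER_ERLANG

def within_threshold(lines, ccs_per_line, channel_threshold):
    """True if High Day traffic stays at or under the channel threshold."""
    return offered_erlangs(lines, ccs_per_line) <= channel_threshold

# 7585 lines at 3.6 ccs offer 758.5 Erlangs, which fits under the
# 820-connection management threshold from the table.
load = offered_erlangs(7585, 3.6)
ok = within_threshold(7585, 3.6, 820)
```

A different ccs rate shifts the supported line count proportionally, which is what footnote (a) cautions.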
Unlike when connecting an MG 15000 to the ERS 8600, there is no admission control when using the ERS 8600 as a (concentrating) access router for small line gateways. The traffic from the remote gateways
should be engineered such that High Day Busy Hour traffic, as measured on a 30 minute sample, does not exceed the threshold on the number of speech channels identified above. This engineering allows normal short-term peakedness of the traffic to stay below the threshold where speech quality would be degraded due to packet loss. Referring to the last two rows of the table above, 820 channels can carry traffic from 7585 lines generating 3.6 ccs High Day traffic. The customer will be required to monitor the OC3 link usage to ensure the traffic stays below this level. In the event that the traffic approaches this level, a second OC3 WAN interface may potentially be added, utilizing ECMP to spread the traffic over the two links. This configuration is NOT currently verified within a Carrier VoIP solution. There will be an increased need to monitor the link utilization to ensure traffic stays below the limits above.

3.1.2 Access ports

Redundant Gigabit Ethernet links are supported to carry both voice and data traffic from Layer 2 aggregation devices in the access network. SMLT should be configured on these links. Redundant OC3 links may also be used, with the same caution on throughput and thresholds on the number of simultaneous voice channels as stated above.

3.1.3 Other configuration items

3.1.3.1 Dynamic Host Configuration Protocol (DHCP) relay

Since the line media gateways in the Customer Premise Equipment Access Hub (CPEAH) get their configuration from the service provider's network, the DHCP server providing the auto-configuration capability will be in the service provider's network, specifically in an OAMP subnet.
Since the DHCP server is not in the same subnet as the line media gateways in the CPEAH, the ERS 8600 will need to relay the DHCP requests to the Carrier VoIP DHCP servers. The DHCP Offer message sent back by the DHCP server provides the IP address, netmask, Fully Qualified Domain Name (FQDN), Domain Name Service (DNS) server address, Trivial File Transfer Protocol (TFTP) server, configuration file path, and Remote Media Gateway Controller (RMGC) address, as well as other parameters. In case Point-to-Point Protocol over Ethernet (PPPoE) is not used for the data service, data subscribers will obtain their IP addresses via DHCP. In this case the ERS 8600 will need to relay DHCP requests based on VLAN membership. The ERS 8600 has the ability to relay DHCP messages toward several DHCP servers based on the VLAN membership of the DHCP clients. This feature of the ERS 8600 will need to be used.
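The per-VLAN relay behavior amounts to a lookup from the client's VLAN to the server list configured for it (a hypothetical sketch; the VLAN names and server addresses are invented, not ERS 8600 configuration):

```python
# Each VLAN is configured with its own list of DHCP servers: line media
# gateways relay to the Carrier VoIP OAMP servers, data subscribers to
# the data-service server. All identifiers below are illustrative.
RELAY_TARGETS = {
    "voice-vlan": ["10.1.0.10", "10.1.0.11"],  # Carrier VoIP DHCP servers
    "data-vlan": ["10.2.0.10"],                # data-subscriber DHCP server
}

def relay_destinations(client_vlan):
    """Servers to which a DHCP request from this VLAN is relayed."""
    return RELAY_TARGETS.get(client_vlan, [])

voice_servers = relay_destinations("voice-vlan")  # both OAMP servers
```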
A complete CAC solution must provide elements of all of the above. This requires that all of the network's policy enforcement points (PEPs) support the policy control interfaces. Regrettably, not all vendors are at a point to support these, and the take-rate among the vendors varies widely. As such, Virtual Call Admission Control was implemented in Succession for the early releases, with subsequent releases supporting full Network Requested CAC. In SN07, the CS2000 delivered admission control functionality by means of virtual call admission control (VCAC) on the Gateway Controller (GWC) component of the CS2000 architecture. The limitations of this as a policy control platform, along with the need to support CS2000 and potentially other application servers together behind the same access links, drove the requirements to move VCAC onto a common platform which could serve multiple applications. In SN08, the Policy Controller provides policy control and management mechanisms for many different types of solutions, including Carrier VoIP and Carrier Hosted Services, through network-based requests. The focus of the Session Policy Controller is enforcement of resource control plane policies such as resource reservation, usage policies, and QoS/CoS policies. The Policy Controller combines awareness of subscribers with awareness of network resources to assure reliable and predictable network handling of media flows. Policy-based call admission control decisions in the Session Policy Controller take into account the
requester's resource reservation request and available capacity to determine whether or not to accept the request. Note: A known condition exists with audible ringback for the originator of a call between CS 2000 servers, or between a CS 2000 server and an MCS 5200, when SIP-based protocols are used between these servers AND Virtual Call Admission Control (VCAC) is used for call admission. For inter-CS 2000 calls that are subject to denial of call admission due to VCAC, this condition results in the originating subscriber experiencing less than 200 ms of audible ringback before call takedown. For CS 2000 to MCS 5200 calls that are subject to denial of call admission due to VCAC, the possibility also exists of audible ringback to the originator until the call is answered, prior to call takedown [due to VCAC denial]. The condition occurs when the terminating office allows the call control signaling (SIP 180) to be forwarded back to the originating office before VCAC has determined that the call cannot be allowed to terminate. While physical ringing does not occur at the terminating office, the originating office permits audible ringing to begin, only to be notified later that the call has been denied by VCAC. Currently, there is no known workaround or resolution to this condition.
4. In a physical topology where the routing of packet flows becomes non-deterministic, that segment of the network should be modeled as a NetworkZone. The intra-Zone bandwidth attribute should be set either to unlimited or to the sum of all the possible aggregated traffic passing through this NetworkZone from subtending Endpoints or other NetworkZones. For example, if a physical router has two northbound interfaces and a flow could take either one, that segment of the network should be modeled as a NetworkZone.

5. Network Address Translation (NAT) devices should be modeled as a NetworkZone.

6. The Core, or backbone, network should be modeled as the Root of the topology, and therefore as a NetworkZone. Its intra-Zone bandwidth should be set to unlimited.
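The modeling rules above can be sketched as a small data structure (the names and bandwidth figures are illustrative, not a product schema):

```python
from dataclasses import dataclass, field
from typing import List

UNLIMITED = float("inf")

@dataclass
class NetworkZone:
    """A network segment modeled per the rules above."""
    name: str
    intra_zone_bandwidth_mbps: float   # UNLIMITED for the core (rule 6)
    children: List["NetworkZone"] = field(default_factory=list)

# Rule 6: the core is the root zone with unlimited intra-zone bandwidth.
core = NetworkZone("core", UNLIMITED)
# Rule 5: a NAT device becomes its own zone under the root.
nat = NetworkZone("nat-device", 622.0)
# Rule 4: a router whose two northbound links make routing
# non-deterministic is modeled as a zone, with bandwidth set to the
# sum of the possible aggregated traffic through it.
dual_uplink = NetworkZone("dual-uplink-router", 2000.0)
core.children.extend([nat, dual_uplink])
```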
Network surveillance
Note: The recommendation for cp-limit and rate-limit settings assumes routed links to these edge devices. For additional explanation of port cp and rate limits see SEB 08-00-001.
Figure 18
Example rate-limit and cp-limit on a CS-LAN port

Port 1/21 :
        lock                    : false
        block-traffic           : false
        name                    : CORE NETWORK
        auto-negotiate          : true
        enable-diffserv         : false
        access-diffserv         : false
        qos-level               : 1
        routing                 : enable
        unknown-mac-discard     : disable
        high-secure             : false
        default-vlan-id         : 200
        untag-port-default-vlan : disable
        tagged-frames-discard   : disable
        perform-tagging         : disable
        svlan-porttype          : normal
        untagged-frames-discard : disable
        loop-detect             : disable action port-down
        state                   : up
        linktrap                : enable
        multicast rate-limit    : 1000
        broadcast rate-limit    : 1000
        sffd                    : disabled
        cp-limit                : enabled multicast-limit 1000 broadcast-limit 1000
6.1.1 Fully meshed redundant configuration

To interconnect the CS-LAN Ethernet Routing Switch 8600s with an IP or other (e.g. AAL5) routed core network, connect one (1) Gigabit Ethernet link between each Ethernet Routing Switch 8600 and each of two (2) different edge routers in the core network.
Figure 19
This fully meshed redundant configuration avoids a single point of failure between the CS-LAN and the core network. Additionally, it is carrier grade because Call Processing is unaffected by WAN link failures. For added redundancy, the links between each CS-LAN Ethernet Routing Switch 8600 and the two (2) edge routers should use ports from different modules on the Ethernet Routing Switch 8600.

6.1.2 ECMP and fully meshed redundant configuration

In order to load balance the routed IP traffic across the multiple links in a fully meshed redundant configuration, it is recommended that Equal Cost Multi Path (ECMP) be enabled on the CS-LAN Ethernet Routing Switch 8600s and the edge routers. The following is an example of ECMP configuration on the CS-LAN Ethernet Routing Switch 8600:
config ip ecmp enable
config ip ecmp-max-path 4
6.1.3 Dual link redundant configuration

Alternatively, to interconnect the CS-LAN Ethernet Routing Switch 8600s with an IP core network, connect one (1) Gigabit Ethernet link between each Ethernet Routing Switch 8600 and two (2) different edge routers in the core network.
Figure 20 Dual link redundant configuration with dual IP core edge routers
This dual link redundant configuration avoids a single point of failure between the CS-LAN and the core network. However, while less expensive than the fully meshed solution, it is less robust and offers reduced redundancy. Additionally, this configuration has a single point of WAN uplink failure per chassis; a chassis with a failed WAN uplink routes WAN traffic over the MLT to the other ERS 8600 chassis.

6.1.4 Engineering considerations for IP over Ethernet interconnect

This section discusses items related to the sizing of the WAN links from the CS-LAN to an IP over Ethernet core network.
When planning and building out the interconnect to the core network, it is worthwhile considering all points where bearer traffic may originate and/or terminate. The sources include, but may not be limited to:
- UAS/MS 2010 bearer traffic to/from the CS-LAN in the form of announcements and 3-way conferencing
- IW-SPM bearer traffic to/from the CS-LAN interworking with legacy DMS peripherals such as Line Trunk Controllers (LTCs), Digital Trunk Controllers (DTCs), and SPMs
- MG 15000 to MG 15000 bearer traffic as inter- or intra-CS 2000 calls

Considering these key factors, multi-link trunks (MLTs) may be necessary in order to provide adequate bandwidth to/from the WAN. When providing MLT links to the WAN, it is not necessary to provide mated links since the WAN connections are already redundant, and routing to the redundant WAN connection during a failure is provided through the CS-LAN MLT.
Figure 21
This fully meshed redundant configuration avoids a single point of failure between the CS-LAN and the Avicis. The links between each CS-LAN Ethernet Routing Switch 8600 and the two (2) Avici routers should use ports from different modules on the routers and the Edge Routing Switches to minimize the impact of a hardware failure. Layer 3 redundancy is provided by OSPF; the protocol is configured on both Edge Routing Switch 8600s and the Avicis. It is strongly recommended to set the Hello timers to one (1) second to guarantee the quickest failover time in case of a failure. Enabling Equal Cost Multi Path (ECMP) on all routers (Edge Routing Switches and Avicis) is also recommended to minimize SPF runs and further decrease outages due to failures in the network. In addition, to increase ease of management, it is recommended to configure a loopback interface on the Avici routers and use this interface as the router-id of the OSPF process.
Figure 22
Figure 23
Square configuration
Figure 24
In addition to the links to the CS-LAN ERS 8600s, there must be another connection between the MSS 15000 shelves in square mode. This connection is located in the same OSPF area as the ERS 8600s. As a backbone router, the MSS 15000 will aggregate signaling and bearer traffic from all remote gateway sites into the CS-LAN. The traffic volume is the essential factor for determining how many 4-port GigE FPs should be deployed on an MSS 15000 shelf. Remember that each port on a 4-port GigE FP can route IP packets at line speed (1 Gbps); however, the maximum throughput per FP is limited to 2.5 Gbps. Thus, the number of ports for each connection varies in different scenarios:
Traffic Volume                       Number of Ports              Number of 4-port GigE FPs
Less than 1 Gbps                     Fully Meshed Mode: 2         Fully Meshed Mode: 2
                                     Square Mode: 2               Square Mode: 1
                                     Single MSS 15000: 2          Single MSS 15000: 2
More than 1 Gbps,                    Fully Meshed Mode: 4         Fully Meshed Mode: 2
less than 2 Gbps                     Square Mode: 4               Square Mode: 2
                                     Single MSS 15000: 4          Single MSS 15000: 2
More than 2 Gbps                     Not Supported                Not Supported

The information in the table is explained in detail below:

Less than 1 Gbps
Figure 25
Fully-meshed mode: MSS 15000 VR is connected to dual ERS 8600s via two physical ports across different 4-port GigE FPs.
Figure 26
Square mode: MSS 15000 VR is connected to one of ERS 8600s via a GigE port. Another port on the same FP is used for inter-MSS 15000 connection.
Figure 27
Single MSS 15000 mode: MSS 15000 VR is connected to both ERS 8600s via two GigE ports located on different 4-port GigE FPs.
Figure 28
Fully meshed configuration with more than 1 Gbps but less than 2 Gbps traffic
Fully-meshed mode: The MSS 15000 VR is connected to dual ERS 8600s via two LAG groups. Each LAG group is composed of two GigE ports on the same 4-port GigE FP. However, ports belonging to different LAG groups must be located on different 4-port GigE FPs.
Figure 29
Square configuration with more than 1 Gbps but less than 2 Gbps traffic
Square mode: The MSS 15000 VR is connected to one of the ERS 8600s via a LAG group. The LAG group is composed of two GigE ports on the same 4-port GigE FP. Another LAG group, containing two GigE ports on a different 4-port GigE FP, must be provisioned for the connection between the MSS 15000 shelves.
Figure 30
Single MSS 15000 configuration with more than 1 Gbps but less than 2 Gbps traffic
Single MSS 15000 mode: The MSS 15000 VR is connected to both ERS 8600s via two LAG groups. Each LAG group is composed of two GigE ports on the same 4-port GigE FP. However, ports belonging to different LAG groups must be located on different 4-port GigE FPs.
More than 2 Gbps

This is not supported due to traffic engineering considerations.
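The port/FP rules from the table and scenarios above can be condensed into a single decision function (a sketch of the engineering rule, not provisioning software):

```python
def gige_provisioning(traffic_gbps, mode):
    """Return (ports, fps) for an MSS 15000 VR connection, or None.

    mode is one of "fully-meshed", "square", or "single"; the
    breakpoints mirror the table above.
    """
    if traffic_gbps > 2:
        return None                      # more than 2 Gbps: not supported
    if traffic_gbps > 1:
        return (4, 2)                    # two-port LAG groups across two FPs
    # Up to 1 Gbps: two single GigE ports; square mode places both
    # ports (uplink plus inter-shelf link) on one FP.
    return (2, 1) if mode == "square" else (2, 2)

ports, fps = gige_provisioning(0.8, "fully-meshed")   # (2, 2)
```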
6.3.1.2 Redundancy

6.3.1.2.1 Link Aggregation Group

Link Aggregation Group (LAG) provides a mechanism to aggregate bandwidth across multiple GigE links. In addition, it also facilitates link protection, with quicker recovery from a local link failure compared with Layer 3 mechanisms. When connecting the MSS 15000 VR to the CS-LAN ERS 8600s via a 4-port GigE FP, there must be no more than two GigE links per LAG group. As a requirement, the minimum number of active links per LAG group must be set to two (2) so that the failure of one link will disable the whole LAG interface and, in turn, trigger Layer-3 reactions. When interworking with ERS 8600s, LACP must be enabled on both devices. On the MSS 15000 side,
the LACP must run in PASSIVE mode. On the ERS 8600 side, the LACP must run in ACTIVE mode. Note: If the CS-LAN is built upon different vendors' equipment, configurations using LAG are not supported on the MSS 15000.

6.3.1.2.2 Layer-3 routing

The 4-port GigE FP doesn't support card-level redundancy. Traffic switchover is actually triggered by the routing protocol as a result of a topology change. In the event of a link failure, 4-port GigE card failure, or next-hop router failure, the routing protocol, such as OSPF, or Static Routes will react by moving the traffic from the failed path to the redundant one.

6.3.1.3 IP addressing

To efficiently utilize the IP address space, Nortel recommends that customers use a /30 subnet for the routed connection between an MSS 15000 VR and an ERS 8600. The same rule applies to the connection between MSS 15000 shelves.

6.3.1.4 Routing rules

6.3.1.4.1 OSPF

OSPF is the only dynamic routing protocol Nortel supports when connecting the MSS 15000 VR to the CS-LAN. As a general rule, Nortel recommends a hierarchical design with multiple OSPF areas to reduce the size of the routing table and minimize the impact of flapping connections. Assuming the MSS 15000 VR acts as an Area Border Router (ABR), there should be at least one interface on the MSS 15000 VR residing in OSPF Area 0.0.0.0. However, all GigE connections to the ERS 8600s, and the GigE connections between MSS 15000 VRs in square mode, must be located in a non-backbone area.
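The /30 recommendation in 6.3.1.3 can be checked with Python's ipaddress module: a /30 provides exactly the two host addresses a point-to-point routed link needs (the prefix below is a documentation range, used purely for illustration):

```python
import ipaddress

# A /30 holds four addresses: network, two hosts, broadcast. The two
# usable hosts are one per end of the MSS 15000 VR <-> ERS 8600 link,
# so no address space is wasted.
link = ipaddress.ip_network("192.0.2.0/30")
hosts = list(link.hosts())           # [192.0.2.1, 192.0.2.2]
mss_end, ers_end = hosts             # one address per device
spare_addresses = link.num_addresses - 2 - len(hosts)   # 0
```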
Figure 31
Figure 32
Figure 33
When OSPF is enabled on GigE links, some parameters need to be adjusted from default values. The following table lists the rules for planning and provisioning OSPF on 4-port GigE FPs:
Item                              Required Value
Area ID                           x.x.x.x (a)
Interface Type (ifType)           broadcast
Hello Timer (helloInterval)       1 second
Dead Timer (rtrDeadInterval)      4 seconds if HA mode is not enabled on ERS 8600s;
                                  5 seconds if HA mode is enabled on ERS 8600s
Note: Extreme caution should be used when manipulating the above parameters on the MSS 15000 VR. The values should be adjusted and verified on the ERS 8600s too; otherwise, the two parties cannot form an adjacency.

In fully-meshed mode, the OSPF cost of the GigE interfaces on an MSS 15000 VR should be designed such that traffic to a route which is only advertised by one ERS 8600 uses the direct interface to that device. The recommended approach is to configure an identical OSPF cost on both IP interfaces. In addition, ECMP for OSPF must be disabled globally. As a result, for a route which is learned from both ERS 8600s with equal cost, only one interface carries traffic at a given time. The standby interface is only used when the primary interface is impaired.

In square mode, the OSPF cost of the GigE interfaces on an MSS 15000 VR should be designed such that traffic to a route which is only advertised by the directly connected ERS 8600 uses the direct GigE interface to that device. In square mode with geographic survivability support, traffic to a route only advertised by the ERS 8600 which is not directly connected to the MSS 15000 must use the inter-connection between MSS 15000s instead of the MLT link across the dual ERS 8600s.

In a single MSS 15000 configuration, the OSPF cost of the GigE interfaces on the MSS 15000 VR should be designed such that traffic to a route which is only advertised by one ERS 8600 uses the direct interface to that device. The recommended approach is to configure an identical OSPF cost on both IP interfaces. In addition, ECMP for OSPF must be disabled globally. As a result, for a route which is learned from both ERS 8600s with equal cost, only one interface carries traffic at a given time. The standby interface is only used when the primary interface is impaired.

In any case, the OSPF cost design on the ERS 8600 side should follow the engineering rules in the SEB 08-00-001 document.
OSPF Summarization
OSPF route summarization must be enabled on the MSS 15000s, which act as ABRs to the CS-LAN. The IP addressing scheme must be carefully designed so that the range of subnets assigned within the CS-LAN OSPF area is contiguous. By this method, the MSS 15000s receive all specific routes from the ERS 8600s and, in turn, publish them in summarized form into other areas.

6.3.1.4.2 Static routes

Static routes must be implemented along with OSPF to guarantee hitless voice service in the event of a failure. Static routes provide an alternative path during OSPF reconvergence: a destination which is temporarily not learned by OSPF will follow a static route to one of the downstream routers. In fully-meshed mode, static routes for all CS-LAN subnets must be provisioned on each MSS 15000 using the direct GigE interfaces to the ERS 8600s as the next hops.
In square mode, static routes for all CS-LAN subnets must be provisioned on each MSS 15000 using the direct GigE interface to the ERS 8600 as the nexthop with lower cost. Another set of static routes for the same CS-LAN subnets must be provisioned using the inter-MSS 15000 GigE interface as the nexthop with higher cost. In single MSS 15000 mode, static routes for all CS-LAN subnets must be provisioned on the MSS 15000 using the direct GigE interfaces to the ERS 8600s as the next hops.
Please note that ECMP load-balancing for static routes is not supported in this release and must be disabled globally. Static routes only protect the outgoing traffic from the MSS 15000s to the CS-LAN. For the reverse direction, the installed static routes have no influence. In order to reinforce carrier grade support, Nortel recommends that the customer install static default routes on each CS-LAN ERS 8600:

In fully-meshed mode, static default routes must be provisioned using the direct links to the MSS 15000s as the next hops.

In square mode, one static default route must be provisioned using the direct link to the MSS 15000 as the nexthop with lower metric. Another static default route is installed using the MLT interface to its peer as the nexthop with higher metric.

In single MSS 15000 mode, one static default route must be provisioned using the direct link to the MSS 15000 as the nexthop with lower metric. Another static default route is installed using the MLT interface to its peer as the nexthop with higher metric.
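The fallback behavior these static routes rely on is plain protocol-preference selection: the route with the lowest preference value wins, so a static route demoted to 253 (as required in the next subsection) carries traffic only while the OSPF route is absent. A minimal sketch, with the OSPF preference value assumed for illustration:

```python
def best_route(candidates):
    """Pick the route with the lowest (most preferred) preference value.

    candidates: list of (protocol, preference) tuples.
    """
    return min(candidates, key=lambda route: route[1]) if candidates else None

# Normal operation: OSPF (lower preference) masks the static route.
normal = best_route([("ospf", 10), ("static", 253)])       # ("ospf", 10)

# During OSPF reconvergence the OSPF route disappears, and the
# provisioned static route carries traffic until adjacency recovers.
fallback = best_route([("static", 253)])                   # ("static", 253)
```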
Figure 34
Figure 35
Figure 36
6.3.1.4.3 Interaction between OSPF and PSR

As a requirement of this release, the route preference for static routes must be set to 253 (least preferred) on the MSS 15000 VR. By default, an OSPF route, regardless of its route type, has a lower preference value and is more preferred by the MSS 15000. As a result, the provisioned static routes will only be installed in the routing table when the OSPF adjacencies to the ERS 8600s are impaired. Furthermore, the MSS 15000s also need to redistribute the above static routes into OSPF as External Type 1 routes. On the CS-LAN ERS 8600s, the route preference for static default routes must be increased to 253 (least preferred). The static default route should not be redistributed into OSPF.

6.3.2 IP over ATM networks (AAL5)

There are two options to interconnect the CS-LAN Ethernet Routing Switch 8600s with a Multiservice Switch 15000 IP over ATM core network:
- Connect an OC12 MDA from an 8672ATM-E module of each Ethernet Routing Switch 8600 to two (2) OC12 interfaces on one Multiservice Switch 15000 (Figure 37 on page 80), or
- Connect an OC12 MDA from an 8672ATM-E module of each Ethernet Routing Switch 8600 to an OC12 interface on two (2) separate Multiservice Switch 15000s (Figure 38 on page 81).
Figure 37
Note: Hitless Software Migration (HSM) is not available on the Multiservice Switch 15000 for unprotected ATM OC-n Function Processors (FP). As a consequence, an upgrade of the single Multiservice Switch 15000 will result in an outage. Therefore, this configuration is not recommended for CS-LAN core network interconnect.
Figure 38
This dual-link redundant configuration avoids a single point of failure between the CS-LAN and the ATM core network. This configuration also works with the Multiservice Switch 15000 VR feature enabled.

6.3.3 Multiservice Switch 15000 core network interconnect scalability considerations

The intent of this section is to address the CS-LAN to Multiservice Switch 15000 core network interconnect, and the scalability considerations for aggregating traffic from a subtending MG 15000 network. When planning and building out the interconnect to the core network, it is worthwhile to consider all points where bearer traffic may originate and/or terminate. The sources include, but may not be limited to:
- UAS/MS 2010 bearer traffic to/from the CS-LAN in the form of announcements and 3-way conferencing,
Document Number: SEB-08-00-010 Version: 9.0.3 Carrier Voice over IP Solution Engineering Rules
- IW-SPM bearer traffic to/from the CS-LAN interworking with legacy DMS peripherals such as LTCs, DTCs, and SPMs,
- MG 15000 to MG 15000 bearer traffic as inter- or intra-CS 2000 calls.
The CS-LAN Ethernet Routing Switch 8600 ATME MDAs have limits on their throughput capacities. Therefore, if the aggregate amount of bearer and signaling traffic coming from either the UAS or the IW-SPM exceeds the specified limits, additional ATME MDAs may need to be added. Furthermore, as more bearer traffic enters (and exits) the CS-LAN, additional OC-12 ports and/or OC-12 cards may also be required on the Multiservice Switch 15000 Virtual Router (VR). If physical space in the chassis of the Ethernet Routing Switch 8600 or Multiservice Switch 15000 is limited, then other alternatives must be considered. Lastly, one should also consider the scalability limits of the Multiservice Switch 15000 VR. When the traffic to/from the CS-LAN (discussed previously) is combined with the DPT-associated bearer traffic traversing to/from other CS 2000 networks, the Multiservice Switch 15000 VR may reach its OC-12/OC-48 forwarding throughput (PQC12) limit, as shown in the following table. When this point is reached, it may become necessary to add Multiservice Switch 15000 VRs and reconfigure the VCCs from the subtending MG 15000 network as needed. However, the scalability limits of the Ethernet Routing Switch 8600 ATME MDAs should not be ignored.
Table 9 Throughput of 4-Port OC12 ATM card on Multiservice Switch 15000 core router
a. Constrained by ATM bandwidth of OC12 port
Considering these key factors on scalability, it may be prudent to consider other alternatives if capacities beyond these levels are required and/or expected.
7.0 IP addressing
This section intends to provide an end-to-end view of IP addressing strategies for new customers as well as migrating customers. The following addressing strategy was written with three main requirements in mind: scalability, flexibility, and ease of management. One of the most serious problems facing the evolution of the Internet is IP address depletion. The Internet community, in particular the IETF, has developed the IPv6 standard to remedy this situation. Unfortunately, it will be a long time before IPv6 sees widespread use; consequently, IPv6 is only a long-term solution. Therefore, we recommend the use of private IP addresses in the Carrier VoIP network. Depending on the customer's requirements or current network architecture, this can be done in two ways:
- Using only private IP addresses
- Using a mix of both public and private IP addresses.
It is important to know that throughout this chapter the term public address means an IP address that belongs to a block that is considered public for the customer. This block can either be assigned directly by IANA or be defined in [RFC1918]. Note: As described in the following paragraphs, some caution needs to be taken when using private IP addressing strategies, especially Network Address Translation (NAT) and Network Address and Port Translation (NAPT).
It is important to remember that the Carrier VoIP line solutions that are Enterprise-based (H.323 and MCS) use the RTP Media Portal to avoid the issues introduced by NAT/NAPTs at the Enterprise edge. Also important to note is that Nortel does not currently support NAT/NAPT between the CS-LAN OAM&P network and the Corporate Network. In addition, in other Carrier VoIP solutions such as the Cable Voice over IP solution, the end-subscriber's PC used for internet browsing would typically have a public address. This addressing allows accessibility from the outside (if the customer runs private servers, this may be an important requirement) and also avoids possible problems with new applications that do not seamlessly traverse NATs. Unfortunately, as discussed previously, most service providers will not have enough public IP addresses to accommodate both data and voice services. Therefore, a private addressing scheme for the telephony interfaces only will be the best solution. Service providers will enjoy many benefits from their usage of private address space. Mainly, a lot of flexibility in network design is gained because of greater address availability. As a direct consequence of the larger address space, operationally and administratively convenient addressing schemes, as well as easier growth paths, become possible. When private addressing is used in the Carrier space, the following guidelines must be observed:
- Routing protocols (and possibly static routes) need to be configured properly in order to prohibit private routes from being published to the Internet.
- Depending on a customer's network topology, a segmentation of the Media Gateway sites into VLANs (subnet-based and/or protocol-based) might be required to separate the data and voice broadcast domains.
- No overlap of IP addresses is possible without the use of NAT.
Note: The private subnets described in RFC1918 are used across this document as an example.
Depending on the customer's requirements, current addressing, system size and future growth, only one of the RFC1918 subnets or a public subnet might be selected for addressing the entire solution. The key point of this document is the size of the subnets described throughout it. During interaction sessions with customers, the Network Sales Engineer should investigate whether the described private subnets and the subnetting scheme agree with the customer's requirements.
Figure 39
For more details, the following sections provide an example of the recommended addressing scheme. In addition, the diagram shows an example of the subnets used for all the logical components.

7.2.1 Private address space

The Internet Assigned Numbers Authority (IANA) has reserved the following three blocks of the IP address space for private internets (the reader can review RFC1918 [1] for additional details):
- From 10.0.0.0 to 10.255.255.255 (10.0.0.0/8 prefix)
- From 172.16.0.0 to 172.31.255.255 (172.16.0.0/12 prefix)
- From 192.168.0.0 to 192.168.255.255 (192.168.0.0/16 prefix)
Nortel recommends the use of these blocks to provide an extremely scalable design. At the same time, the different areas and entities of the solution will be logically separated for better management and troubleshooting when problems occur.
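The three RFC1918 blocks above can be verified with Python's standard `ipaddress` module. This is a quick sketch of the address arithmetic only, not part of the solution itself.

```python
import ipaddress

# The three private blocks reserved by IANA in RFC1918.
rfc1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

for net in rfc1918:
    # Every RFC1918 block is flagged private by the stdlib.
    assert net.is_private

# First and last addresses of the 172.16.0.0/12 block
# match the range quoted above.
assert str(rfc1918[1][0]) == "172.16.0.0"
assert str(rfc1918[1][-1]) == "172.31.255.255"
```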
7.2.2 Strategy summary

The addressing strategy is mainly designed around the following logical areas:
- The Call Processing elements
- The OAM&P elements
- The Media Gateways
- Out-of-band management
- The Network Operations Center (NOC)

7.2.2.1 Call Processing subnets

The Call Processing elements are common across all Carrier VoIP solutions and are located in the CS-LAN Call Processing subnets. To address these elements, Nortel recommends the following steps:
- Subnetting the 172.16.0.0/12 block with a 19-bit mask,
- Subnetting each resulting block with a 23-bit mask to address the CS-LAN Call Processing subnets for a single CS 2000.
It is important to remember that one of these subnets might need to be allocated in a public address space for the Carrier VoIP line solutions that have Enterprise-based gateways (for instance, H.323). This recommendation is based on the Carrier topology and its overall addressing requirements.

7.2.2.2 The OAM&P subnet

The OAM&P elements are located in the CS-LAN OAM&P subnet. It is recommended to use a public address block for this subnet. To address these elements, Nortel recommends the following step:
- Reserving a /26 public subnet to address the CS-LAN OAM&P subnet.

7.2.2.3 The NOC

The NOC address space typically already exists in a customer's network. In case a NOC is not already present, Nortel recommends the following step:
- Reserving a /27 public subnet to address the NOC/OSS subnet.

7.2.2.4 Out-of-band management

Out-of-band management is a solution mainly used to add extra-reliable network management. If the customer uses this strategy, Nortel recommends the following steps:
- Using the 192.168.0.0/16 block for out-of-band management (if and where it is required),
- Re-subnetting this block according to the number of elements that need to be managed and their physical location.
7.2.2.5 The Media Gateways

The Media Gateways are located in a Media Gateway site (either local to the CS-LAN or remote). Their addressing is dependent on the Carrier VoIP solution. This document describes the details of the addressing strategy for each solution in later chapters. As a general initial step for all solutions, Nortel recommends:
- Using the 10.0.0.0/8 subnet,
- Re-subnetting it with a 9-bit subnet mask (255.128.0.0),
- Re-subnetting the resulting subnets with a 15-bit mask.
The next subnetting steps are related to the specific Carrier VoIP solutions. Please see the following chapters for the summary of the steps.

7.2.2.5.1 Media Gateways in Packet Access-Cable

For Packet Access-Cable Carrier VoIP solutions, Nortel recommends the following steps:
- Using the lower block of the 10.0.0.0/8 subnet (10.0.0.0/9, from 10.0.0.0 to 10.127.255.255) for Cable Modems, in-band management (where it is required) and router-to-router links,
- Using the higher block of the 10.0.0.0/8 subnet (10.128.0.0/9, from 10.128.0.0 to 10.255.255.255) for E-MTAs (voice interfaces of the Cable Modems) and trunk MGs (MG 15000s, APGs and IW-SPMs),
- Re-subnetting both blocks with a 17-bit mask (255.255.128.0).

7.2.2.5.2 Media Gateways in Packet Access-Integrated Access

For Packet Access-Integrated Access Carrier VoIP solutions, Nortel recommends the following step:
- Using the 10.0.0.0/8 subnet for the MGs and the routers/switches in the Carrier VoIP domain.

7.2.2.5.3 Media Gateways in Packet Trunking

For Packet Trunking Carrier VoIP solutions, Nortel recommends the following steps:
- Subnetting a 15-bit subnet with a 21-bit mask,
- Reserving one of the resulting subnets to address all MG 15000s, APGs and IW-SPMs.
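The Media Gateway subnetting steps above can be checked with Python's stdlib `ipaddress` module. This is a sketch of the arithmetic only, using the masks quoted in the text.

```python
import ipaddress

mg = ipaddress.ip_network("10.0.0.0/8")

# Step 1: a 9-bit mask (255.128.0.0) splits 10.0.0.0/8 into two halves.
halves = list(mg.subnets(new_prefix=9))
assert [str(n) for n in halves] == ["10.0.0.0/9", "10.128.0.0/9"]

# Step 2: a 15-bit mask on a half yields 2**(15-9) = 64 subnets.
fifteens = list(halves[0].subnets(new_prefix=15))
assert len(fifteens) == 64

# Packet Access-Cable: each /9 re-subnetted with a 17-bit mask
# (255.255.128.0) yields 2**(17-9) = 256 subnets.
seventeens = list(halves[1].subnets(new_prefix=17))
assert len(seventeens) == 256

# Packet Trunking: one /15 subnetted with a 21-bit mask
# yields 2**(21-15) = 64 subnets.
trunking = list(fifteens[0].subnets(new_prefix=21))
assert len(trunking) == 64
```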
7.2.2.6 IP address assignment

The supported design requires dynamic IP address assignment (via BOOTP) for the UAS, the GWCs, and the SCs. In addition, all line Media Gateways will typically obtain their addresses dynamically (for Packet Access-Cable and Packet Access-Integrated Access solutions only). All the other devices (OAM&P platforms, CS 2000, trunk Media Gateways, routers and switches) will be given static IP addresses.

7.2.2.7 Ethernet Routing Switch 8600 addressing

The IP addresses required on the pair of ERS 8600s are the following:
- One (1) per chassis for the Management Interface (only needed for out-of-band management)
- One (1) per chassis for each configured VLAN. Please note that VLANs and subnets are equivalent in the CS-LAN. The typically configured VLANs are:
  - Up to three (3) VLANs for Call Processing elements (Callp-Private, Callp-Carrier, Callp-Public)
  - One (1) VLAN for OAM&P elements
  - One (1) VLAN for Media Gateways
  - One (1) VLAN for intra-ERS 8600 communication
  - One (1) VLAN for NOC connectivity
- One (1) for each VLAN/subnet that requires a VRRP instance. The subnets that require VRRP are:
  - Each Call Processing subnet
  - The OAM&P subnet
  - The Media Gateways subnet
Therefore, at least 15 IP addresses (1x2 + 5x2 + 3x1) are required for the suggested configuration for Carrier VoIP solutions.
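The tally of 15 addresses can be reproduced directly from the counts listed above (assuming the minimum of five configured VLANs and three VRRP instances):

```python
# IP address tally for a pair of ERS 8600s with five configured VLANs
# (one Call Processing, OAM&P, Media Gateways, intra-ERS 8600, NOC)
# and three VRRP instances, per the counts in the text.
chassis = 2
management = 1 * chassis      # one management address per chassis
vlan_addrs = 5 * chassis      # one address per VLAN per chassis
vrrp_virtual = 3 * 1          # one virtual IP per VRRP instance

total = management + vlan_addrs + vrrp_virtual
assert total == 15            # matches 1x2 + 5x2 + 3x1
```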
For the VLAN interconnecting the ERS 8600s, it is suggested to use the same addressing scheme that the customer currently uses to interconnect routers in its Corporate Network. The typical subnet size for these links is 255.255.255.252. The CS-LAN elements requiring IP addresses are:
- A Nortel VPN Router will require one (1) address for the CS-LAN interface plus one (1) for the NMI,
- An ERS 8600 will require one (1) address per configured VLAN plus one (1) for the NMI. In a dual configuration where VRRP is used for redundancy, another IP address (the Virtual IP address) must be assigned for each configured VRRP instance.
- A fully configured USP will require four (4) IP addresses.

7.3.2 Addressing for the Call Processing subnet

Since the Call Processing elements will have to interact only with other Carrier VoIP elements, Nortel recommends using private addresses for this subnet. The network elements requiring IP addresses that will typically be in this subnet are:
- XA-Core will require six (6) addresses for the HIOPs.
- A fully configured SAM21 will need 36 IP addresses (four (4) per GWC pair, four (4) per SC pair). Please note that in UA-AAL1 up to eight (8) SAM21s can be supported.
- The SDM requires one (1) address in the Call Processing subnet. Please note that two other addresses are required: one (1) address in the OAM&P subnet and one (1) address on the DS-512 link to the XA-Core (see section 7.3.8 Out-of-band management, router interfaces and loopback interfaces on page 94).
- A SAM16 (UAS) will require one (1) IP address per CPU card. Therefore, a fully configured SAM16 with two domains will require two (2) IP addresses. Please note that 12 additional addresses are required in the Media Gateway subnet.
- USP will require one (1) address for each of the M3UA IP Link System cards. A total of six (6) addresses in the Call Processing subnet are required for a fully configured USP.
Please note that two other addresses are required: one (1) address for each of the RTC cards in the OAM&P subnet. Because the second largest private block (from 172.16.0.0 to 172.31.255.255) has up to 1,048,574 usable addresses, it is the perfect choice to address the CS-LAN devices. Nortel recommends using a 19-bit subnet mask (255.255.224.0). In this fashion, 128 subnets are obtained with 8190 usable addresses each (as shown in the example of Table 10 on page 90). Each of the /19 subnets will support up to 16 CS 2000 nodes, and each node will be able to grow beyond 64 Gateway Controllers.
Table 10

Host Range                        Usage
172.16.0.1 to 172.16.31.254       1st block with 16 CS 2000 nodes
172.16.32.1 to 172.16.63.254      2nd block with 16 CS 2000 nodes
172.16.64.1 to 172.16.95.254      3rd block with 16 CS 2000 nodes
...                               ...
172.31.160.1 to 172.31.191.254    126th block with 16 CS 2000 nodes
172.31.192.1 to 172.31.223.254    127th block with 16 CS 2000 nodes
172.31.224.1 to 172.31.255.254    128th block with 16 CS 2000 nodes
Nortel recommends using the first eight (8) subnets (172.16.0.0/19, 172.16.32.0/19, 172.16.64.0/19, 172.16.96.0/19, 172.16.128.0/19, 172.16.160.0/19, 172.16.192.0/19 and 172.16.224.0/19) to address up to 128 CS 2000s. It is suggested to reserve the other 120 subnets (from 172.17.0.0/19 to 172.31.224.0/19) for other uses (i.e. the addressing of the OAM&P subnets or NOC/OSS) or future growth. The next step is to subnet the 172.16.0.0/255.255.224.0 subnet with a 23-bit mask (255.255.254.0), yielding 16 subnets with 510 usable addresses each (see Table 11 on page 90 for an example).
Table 11

Host Range                     Usage
172.16.0.1 to 172.16.1.254     CS-LAN-1, CS 2000 node 1
172.16.2.1 to 172.16.3.254     CS-LAN-2, CS 2000 node 2
...                            ...
172.16.28.1 to 172.16.29.254   CS-LAN-15, CS 2000 node 15
172.16.30.1 to 172.16.31.254   Reserved
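The /19 and /23 figures used in Tables 10 and 11 can be verified with Python's stdlib `ipaddress` module (a quick check of the arithmetic only):

```python
import ipaddress

cslan = ipaddress.ip_network("172.16.0.0/12")

# A 19-bit mask (255.255.224.0) yields 128 subnets
# with 8190 usable hosts each.
nineteens = list(cslan.subnets(new_prefix=19))
assert len(nineteens) == 128
assert nineteens[0].num_addresses - 2 == 8190

# Subnetting one /19 with a 23-bit mask (255.255.254.0)
# yields 16 subnets with 510 usable hosts each.
twentythrees = list(nineteens[0].subnets(new_prefix=23))
assert len(twentythrees) == 16
assert twentythrees[0].num_addresses - 2 == 510

# The last /23 (reserved for CS-LAN Media Gateways) is 172.16.30.0/23.
assert str(twentythrees[-1]) == "172.16.30.0/23"
```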
The first 15 of the above /23 subnets will be used to address all the Call Processing devices in the CS-LAN. The last subnet (172.16.30.0/23) is reserved for CS-LAN based Media Gateways. Please see the next section for more details.

7.3.3 Addressing for the Media Gateways in the CS-LAN

Depending on the customer's requirements and network topology, the Carrier VoIP solutions can have Media Gateways co-located with the CS 2000 in the CS-LAN. Nortel recommends using the subnet reserved in the previous paragraph to address these elements. The network elements requiring IP addresses that will typically be in this subnet are:
- A SAM16 (UAS) will require one (1) IP address per CG6000 card. Therefore, a fully configured redundant SAM16 configuration will require 12 CG6000 cards and 12 IP addresses. Please note that two (2) additional addresses are required in the Call Processing subnet.
- IW-SPM will require three (3) IP addresses for each GEM card.
- MG 15000 (Gigabit Ethernet only) will require three (3) addresses per VSP card. Please see the IP Addressing section of SEB 08-00-009 for more details.

7.3.4 Addressing for the OAM&P subnet

The OAM&P subnet will be connected (logically and/or physically) with the NOC/OSS, a network that would traditionally use public addresses. For this reason, it is recommended to use public IP addresses for the OAM&P subnet. However, if a NOC/OSS does not currently exist or the customer prefers using private IP addresses, this choice does not impair functionality. The network elements requiring IP addresses that will typically be in this subnet are:
- CS 2000 Management Tools server: refer to SEB 08-00-001 for IP addressing information.
- The SDM requires one (1) address in the OAM&P subnet. Please note that another two addresses are required: one (1) address in the Call Processing-Private subnet and one (1) address on the DS-512 link to the XA-Core (see section 7.3.8 Out-of-band management, router interfaces and loopback interfaces on page 94).
- Core and Billing Manager (CBM): refer to SEB 08-00-001 for IP addressing information.
- Integrated Element Management System (IEMS): refer to SEB 08-00-001 for IP addressing information.
- USP will require one (1) address for each of the RTCs. A total of two (2) addresses in the OAM&P subnet are required for a fully configured USP. Please note that another six addresses
are required: one (1) address for each of the M3UA IP Link System cards in the Call Processing subnet.
- MDM server: for a clustered solution with two servers, each will require two (2) addresses: one external plus one for internal communication only. For a stand-alone configuration, four (4) addresses are required if multipathing is enabled.
- CS 2000 RTP Media Portal EM: refer to SEB 08-00-001 for IP addressing information.
- MG 9000 Server: refer to SEB 08-00-001 for IP addressing information.
- MG 9000 Mid Tier: refer to SEB 08-00-001 for IP addressing information.
In addition, the following network elements may have interfaces on the OAM&P subnet:
- Nortel VPN Router and/or Firewall (for Remote Access)
- CM and CMTS Managers (for Packet Access-Cable only)
- Line Gateway Manager (for Packet Access-Integrated Access only)
Please note that there may be some network elements provided by the customer residing in the OAM&P subnet. Typically, these elements will all require one (1) address each. Examples are:
- DCE Server.
If a customer prefers or requires private addresses, it is suggested to use one of the reserved /19 subnets (from 172.16.32.0/19 to 172.31.224.0/19). Independently of the choice of a public or private subnet, Nortel recommends applying the same subnetting scheme described in section 7.3.2 Addressing for the Call Processing subnet on page 89 (where a /23 subnet mask is used) and then using a /26 subnet mask (255.255.255.192) to reduce the subnet size. Nortel suggests using a single /26 subnet to address the OAM&P subnet in each CS 2000 node. Such a scheme will yield 62 usable IP addresses.

7.3.5 In-band management for the MSS 15000

In the Carrier VoIP solutions where MSS 15000s are present, a topology with a centralized MDM can be the preferred solution to manage all MSS 15000s in the network. If this is the case, Nortel recommends using a public block (i.e. visible from the centralized MDM) to address the MSS 15000 logical in-band management interface. The CP of the MSS 15000 collocated with the CS-LAN (PP15000-1 in Figure 40 on page 93) is connected to the ERS 8600 via a 100Base-T link. All other remote MSS 15000s will have one IP over AAL5 PVC (used for management) to the CS-LAN MSS 15000. PP15000-1 will have the task of routing the traffic coming from the management PVC to the CS-LAN. The IP addresses of the remote and CS 2000 location CPs can be in the same subnet. Alternatively, multiple 30-bit mask subnets (i.e. one per remote MSS 15000) can be used.
Figure 40
7.3.6 Connecting the CS-LAN to the Corporate Network

It is recommended to configure a VLAN on the ERS 8600 for the sole purpose of interfacing logically with the access routers of the Corporate Network. For this reason, it is suggested to use the same addressing scheme that the customer currently uses to interconnect routers in its Corporate Network. The typical subnet size for these links is 255.255.255.252.

7.3.7 NOCs and future expansion

Depending on the customer's requirements, the NOC/OSS might be using public or private IP addresses. Given the number of addressable elements, it is suggested to reserve a /27 subnet. Therefore, Nortel recommends again applying the same subnetting scheme described in section 7.3.2 Addressing for
the Call Processing subnet on page 89 (where a /24 subnet mask is used) and then using a /27 subnet mask (255.255.255.224) to reduce the subnet size to 30 usable addresses. It is recommended to use private IP addresses for the Network Operations Center/OSS when such networks do not yet exist. In that case, it is suggested that one of the reserved /19 subnets (from 172.16.32.0/19 to 172.31.224.0/19) be used. The following are the typical clients needed for managing Carrier VoIP solutions. Please note that the physical platform where the clients run is typically provided by the customer:
- MDM Client
- SESM Client
- MG 9000 Manager Client
- CM and CMTS Manager client (for Packet Access-Cable only)
- Line Gateway Manager client (for Packet Access-Integrated Access only)
- SAM21 Manager Client
- ERS 8600 Manager Client (Device Manager)
- DCE Server
- Clients for other OAM&P network servers (DNS, DHCP, NTP, etc.)
Note: For security reasons, OAM&P Clients should NOT be provisioned on the OAM&P VLAN. For further details, see the section on Connectivity to the NOC in SEB 08-00-001.

7.3.8 Out-of-band management, router interfaces and loopback interfaces

Nortel recommends the use of the 192.168.0.0/16 subnet for providing fixed addressing to the out-of-band interfaces of the devices (ERS 8600, other routers/ATM switches, MSS 15000/Media Gateway 15000s) supporting this functionality. In addition, it is recommended to reserve a subnet in this space for the private communication between the SDM and the XA-Core. This strategy, paired with a parallel management network, allows network administrators access to the network elements even in case of catastrophic failures. We further subnet the 192.168.0.0/16 block by using a 24-bit mask (255.255.255.0), yielding 256 subnets with 254 usable addresses each. We then use the first two subnets (192.168.0.0/24 and 192.168.1.0/24) and reserve the others for future expansion.
In addition, a further subnetting of the above blocks would yield enough subnets for router interfaces and loopback addresses where these are required.
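The out-of-band subnetting above can be checked the same way with the stdlib `ipaddress` module:

```python
import ipaddress

oob = ipaddress.ip_network("192.168.0.0/16")

# A 24-bit mask (255.255.255.0) yields 256 subnets
# with 254 usable addresses each.
twentyfours = list(oob.subnets(new_prefix=24))
assert len(twentyfours) == 256
assert twentyfours[0].num_addresses - 2 == 254

# The first two subnets are used; the rest are reserved.
assert str(twentyfours[0]) == "192.168.0.0/24"
assert str(twentyfours[1]) == "192.168.1.0/24"
```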
8.1 Terminology
The following table describes the most common terms and acronyms used throughout this section.
Term              Description
NAT               Network Address Translator
NAPT              Network Address and Port Translator
MP                Media Portal. A network intermediary that proxies media packets. In this chapter, the Media Portal is the Nortel RTP Media Portal.
Firewall          A network element that has at least packet filtering capabilities, preventing specific packets from reaching particular destinations based on various criteria
GWC               Gateway Controller
                  A set of parameters identifying flows (unique packet flows or groups of flows)
Inside address    An address belonging to an address scheme used inside a specific network located at the boundary of a NAT. Typically, this address follows the [RFC1918] addressing scheme guidelines.
Outside address   An address belonging to an address scheme used outside a specific network located at the boundary of a NAT. Typically, this is a registered address (keep in mind that this address could still be one defined in [RFC1918]).

Conventions used

Term              Description
Bind              A set of information that includes, at least, the private (or inside) address and transport port, the translated external source and destination transport ports, and the transport protocol type
ITA               Internet Transparency Agent. A process that controls the insertion of a Media Portal in the path of media flows
ITSP              Internet Telephony Service Provider
a.b.c.d:k         An IP address a.b.c.d with transport port k
a.b.c.d:k, e.f.g.h:I   A pair of address-and-port endpoints
NAT traversal
Figure 41
In Carrier VoIP, a VoIP endpoint (a CPE Media Gateway, an IP Phone or PC with a soft client) behind a NAT will not be aware of the NAT and will not know about the translated address and ports on which it could receive traffic. This configuration is shown in Figure 42 on page 98.
Figure 42
The other main issue with NATs is that the binds have a specific lifetime. This issue is depicted in Figure 43 on page 99. After a specific inactivity time, when no packets are sent from the inside to the outside of the network, the NAT bind will expire and no packets can be received on the translated address and port pair.
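The bind-lifetime behavior above can be sketched as follows. This is schematic Python only; the class name and the 60-second timeout are illustrative, not an actual NAT implementation (real NAT timeouts vary widely).

```python
# Schematic model of a NAT bind with an inactivity timeout.
# The 60-second lifetime is illustrative only.

class NatBind:
    def __init__(self, inside, outside, created_at, lifetime=60.0):
        self.inside = inside          # e.g. ("10.1.20.3", 25000)
        self.outside = outside        # e.g. ("193.2.10.4", 16000)
        self.last_activity = created_at
        self.lifetime = lifetime

    def refresh(self, now):
        """An inside-to-outside packet keeps the bind alive."""
        self.last_activity = now

    def expired(self, now):
        return now - self.last_activity > self.lifetime

bind = NatBind(("10.1.20.3", 25000), ("193.2.10.4", 16000), created_at=0.0)
assert not bind.expired(now=30.0)    # still within the lifetime
bind.refresh(now=30.0)               # outbound packet resets the timer
assert not bind.expired(now=80.0)    # 50 s after refresh: still valid
assert bind.expired(now=120.0)       # 90 s idle: bind expired, no more
                                     # inbound packets are deliverable
```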
Figure 43
8.3.2 Overview of the solution for NAT traversal

Nortel's solution uses a Media Portal to determine the translated address and port pair on which the NATed VoIP endpoints can receive media. As shown in the example of Figure 44 on page 100, all calls between VoIP endpoints whose media packets will traverse NATs will be proxied by the MP. The Media Portal will be seen as the remote endpoint of the VoIP communication.
Figure 44
User A and User B are told by the CS 2000 GWC to send their media to the MP, but neither end user is aware that it is an MP. Once User A starts sending media packets to the MP (at 20.1.10.9:40000), the MP will send media traffic to User A using the translated source address and transport port of the media packets (193.2.10.4:16000). Figure 45 on page 101 illustrates the details of the RTP Media Portal solution.
Figure 45
This mechanism is described step by step below:
1. When the gateway behind the NAT (User A) initiates a call, the GWC knows that it is located behind a NAT and therefore inserts the Media Portal in the call.
2. User A places all RTP packets in a UDP datagram (with source port 25000) and then in an IP packet with source address 10.1.20.3.
3. The NAT device translates 10.1.20.3:25000 into 193.2.10.4:16000.
4. The IP packet arrives at the Media Portal, and an IP address substitution is performed on the source address (User A's NATed address 193.2.10.4 becomes 20.1.10.9) and on the destination address (20.1.10.9 becomes 222.4.10.7, which is User B) in the IP header.
5. The UDP datagram is also changed so that the source port is updated (from 16000 to 40000) and the destination port is updated (from 40000 to 17000).
All traffic leaving User B going to User A will undergo the same process.
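The address and port substitutions in the steps above can be traced with a small sketch. This is schematic Python using the example addresses from the text; the helper function is illustrative, not RTP Media Portal code.

```python
# Trace of the header rewrites from the example above.
# Packets are modeled as (src_ip, src_port, dst_ip, dst_port) tuples.

def rewrite(pkt, src_map, dst_map):
    """Substitute source and destination (address, port) pairs."""
    src, dst = (pkt[0], pkt[1]), (pkt[2], pkt[3])
    new_src = src_map.get(src, src)
    new_dst = dst_map.get(dst, dst)
    return (*new_src, *new_dst)

# User A sends from 10.1.20.3:25000; the NAT translates the source.
pkt = ("10.1.20.3", 25000, "20.1.10.9", 40000)
nat = rewrite(pkt, {("10.1.20.3", 25000): ("193.2.10.4", 16000)}, {})
assert nat == ("193.2.10.4", 16000, "20.1.10.9", 40000)

# The Media Portal substitutes both the source (becomes the MP itself)
# and the destination (becomes User B at 222.4.10.7:17000).
out = rewrite(nat,
              {("193.2.10.4", 16000): ("20.1.10.9", 40000)},
              {("20.1.10.9", 40000): ("222.4.10.7", 17000)})
assert out == ("20.1.10.9", 40000, "222.4.10.7", 17000)
```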
Since access from external networks to the carrier-hosted Media Gateways is allowed and authorized by the CS 2000 on a per-call basis, this solution is much more secure than deploying traditional firewalls where the network administrator defines static policy rules on the allowed port ranges. Figure 46 on page 102 shows a generic network topology where two carriers interconnect. In addition, using an MP allows the co-existence of overlapped address realms within the ITSP network. Consequently, every address domain can access media resources through the Media Portals.
Figure 46
Figure 47
As mentioned above, bearer flows proxied by the MPs do not follow the typical logical route (i.e. directly from Enterprise A to Enterprise B via the Carrier network) but are instead diverted to the MP. As a result, proxied flows require more bandwidth than non-proxied flows on certain links of the network. Figure 48 on page 104 shows an example of the resulting additional bandwidth required on such links. It is important to carefully evaluate the location of the RTP Media Portal within the network. There are cost, bandwidth and potential voice quality implications to be taken into account. Therefore, it is essential
that this extra traffic in the network is evaluated properly and that, if necessary, new interfaces and/or links are added on the network elements traversed by these additional flows. In addition, it is clear that extra latency is added by the Media Portal; therefore, an improper location of the MP can increase the flows' end-to-end packet delays.
Figure 48
The total incremental delay is calculated by adding the additional propagation time and the delays required to transmit the packet on the various traversed links to reach the MPs. Since the propagation delay is the biggest component of the total delay, it is safe to use it as a good approximation of the total delay. The propagation delay can be estimated as follows: for every kilometer of fiber optics, five (5) µs of propagation time should be added ([G114]); for 1000 km, five (5) ms of propagation time is added. The recommendation on the location of the RTP Media Portal is shown in Figure 49 on page 105. Wherever possible, the Media Portal should be located in a Point of Presence (POP) so that it is closer to
the majority of the enterprise gateways. This configuration guarantees that bandwidth and delay are optimized across the Access network. Note: The geographical proximity algorithm is implemented manually by assigning Media Gateways and Media Portals from the same geographic location to the same GWC.
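The 5 µs/km rule of thumb cited from [G114] translates directly into a distance budget for placing a Media Portal. A simple arithmetic sketch follows; the helper name is illustrative.

```python
# Propagation delay over fiber, using the ~5 microseconds-per-kilometer
# rule of thumb cited from [G114].

US_PER_KM = 5.0

def propagation_delay_ms(distance_km):
    """Approximate one-way propagation delay in milliseconds."""
    return distance_km * US_PER_KM / 1000.0

# 1000 km of fiber adds about 5 ms of one-way propagation delay.
assert propagation_delay_ms(1000) == 5.0

# A Media Portal 400 km off the direct path adds ~2 ms each way.
assert propagation_delay_ms(400) == 2.0
```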
Figure 49 RTP Media Portal location
8.5.1 RTP Media Portal in the CS-LAN The Media Portal can be located in the CS-LAN and connected to a pair of Nortel Ethernet Routing Switch 8600s (ERS 8600s). Figure 50 on page 106 illustrates the logical connectivity of this configuration. Refer to SEB-08-00-001 for additional information on RTP Media Portal connectivity to the CS-LAN.
Figure 50 MP in the CS-LAN
8.5.2 RTP Media Portal in a Media Gateway site The Media Portal can be located in a remote site and also be connected to a pair of ERS 8600s. Figure 51 on page 107 illustrates the logical connectivity of this configuration. Refer to section 9.12.7 RTP Media Portal engineering and sizing for Media Gateway sites on page 181 for additional information on RTP Media Portal engineering.
Figure 51  MP in a remote site
Pickup In-Progress Call: If the user logs in to a terminal on the CICM while the call is in progress, the CICM will set up the call on the terminal and the RTP Media Portal will adopt the incoming RTP stream from the terminal so that a bidirectional audio path is established.
Therefore, an RTP Media Portal becomes a mandatory requirement in all CICM deployments, regardless of network topology and the existence of a NAT along the media path. However, this does not mean that all calls go through the RTP Media Portal. Only a small percentage of calls would use the Media Portal, and only for a short time: the resources on the Media Portal are used only while the call is terminating on the CICM. When the call is forwarded to voice mail (e.g., after a few seconds), the Media Portal is released. As such, if an RTP Media Portal is already provisioned for the purpose of media NAT traversal, Lawful Intercept, or carrier-to-carrier interconnect, it is generally believed that no additional capacity is required for CICM media sink support. However, if no RTP Media Portal is provisioned for those purposes, it is recommended that a minimum RTP Media Portal configuration be provisioned for CICM media sink support. Refer to the Media Portal Engineering section for the details of this minimum RTP Media Portal configuration.

8.7.2.2 When is a Media Portal inserted for CHS
The Media Portal is inserted into the media flow in the following scenarios:
1. Calls between two different Enterprise VPNs
2. Calls between an Enterprise VPN and the Public network
3. Calls between an Enterprise VPN and the Carrier Private network
4. Calls between the Public Network and the Carrier Private Network
5. CICM calls terminating on terminals not logged in (see section 8.7.2.1 Mandatory use of Media Portal for CICM on page 108)
The Media Portal is not inserted into the media flow in the following scenarios:
1. Calls between two gateways located in the same Enterprise VPN
2. Calls between two gateways located on the Public network
3. Calls between two gateways located on the Carrier network

8.7.2.3 Type of Media Portal ports allocated for CHS
The Media Portal has two types of interfaces to use for connections to VoIP gateways. One interface is connected to the Carrier Private network and the other interface is connected to the Public network. Whenever the Media Portal is inserted into the media flow, two (2) RTP ports (and two (2) UDP ports if T.38 fax is enabled on the GWCs) are allocated from among the two available interfaces. It is always the
case that at least one of the two ports is allocated on the Carrier Public interface. The type of each port allocated depends on the location of each of the two gateways involved in the call.
- A port on the Media Portal's Carrier Private interface (private port) is always allocated to connect to a gateway located in the Carrier Private VPN.
- A port on the Media Portal's Carrier Public interface (public port) is always allocated to connect to a gateway located in an Enterprise VPN or on the Carrier Public network.

8.7.3 Inter-Carrier solutions
In the SN06.2 Carrier VoIP release, a new option, called Inter-domain, was introduced in table TRKOPTS for use with SIP-T trunks. This option is set to Y to identify a SIP-T trunk that is used to set up calls between two carrier domains. By default, this option is set to N on all SIP-T trunks during the initial SN06.2 upgrade. In this discussion, a trunk that has this option set to Y is referred to as an inter-domain trunk; otherwise, it is referred to as an intra-domain trunk.

8.7.3.1 When should Inter-domain be set
Once RTP Media Portals are available in the CS 2000 office, the Inter-domain option should be set to Y in the following cases:
- The SIP-T trunk connects call servers between two different service providers that do not share a common network topology.
- The SIP-T trunk connects call servers of the same service provider that do not share a common network topology.
- The SIP-T trunk connects call servers where at least one call server is in a network that is considered private or must be isolated.
- One end of the SIP-T trunk does not understand the CS 2000 proprietary SDP extensions. These devices include:
  - Third-party call servers
  - MCS 5200
  - CS 2000 with a software load prior to SN06
This option should be set to the same value on both call servers serving either end of the trunk, whenever possible. The Inter-domain option should not be set to Y for looparound trunks.
Because the Inter-domain option has multiple uses, including VCAC, this option should be set to Y on all SIP-T trunks connecting the office to MCS 5200 or third-party call servers, even if the CS 2000 office does not include portals for inter-carrier calls. Calls utilizing these inter-domain SIP-T trunks will work regardless of portal availability in the office. However, whenever an office is not set up to use portals for inter-carrier calls, IP routing must be implemented to allow routing between the site's VoIP gateways and other sites.

8.7.3.2 When is a Media Portal inserted for Inter-Carrier
The Media Portal is inserted into the media flow in the following scenarios:
1. Both ends of the trunk are marked as inter-domain: each call server independently inserts a Media Portal on its end of the call if:
   - The VoIP gateway on its end is located in the Carrier VPN, or
   - The VoIP gateway on its end is located in an Enterprise VPN
2. One end of the trunk is marked intra-domain and the other end is marked inter-domain: the call server with the inter-domain end of the trunk inserts a Media Portal if:
   - The VoIP gateway on its end is located in the Carrier VPN, or
   - The VoIP gateway on its end is located in an Enterprise VPN
3. Both ends of the trunk are marked as intra-domain: the call server on the terminating side of the call inserts a Media Portal on its side of the call for calls between the Public Network and the Carrier Private Network, considering the locations of the VoIP gateways on both ends of the call, and using the insertion rules as outlined in section 8.7.3.2 When is a Media Portal inserted for Inter-Carrier on page 111.
4. For SIP-T to SIP-T trunk calls, whenever one trunk is inter-domain and the other trunk is intra-domain, the call server inserts a Media Portal if:
   - The VoIP gateway on the far end of the intra-domain trunk is located in the Carrier VPN, or
   - The VoIP gateway on the far end of the intra-domain trunk is located in an Enterprise VPN

8.7.3.3 Type of Media Portal ports allocated for Inter-Carrier
The Media Portal has two types of interfaces to use for connections to VoIP gateways. One interface is connected to the Carrier Private network and the other interface is connected to the Carrier Public network.
Whenever the Media Portal is inserted into the media flow, two (2) RTP ports (and two (2) UDP ports if T.38 fax is enabled on the GWCs) are allocated from among the two available interfaces. It is always the case that at least one of the two ports is allocated on the Carrier Public interface. The type of each port allocated depends on the location of each of the two gateways and/or the SIP-T trunk(s) involved in the call.
- A port on the Media Portal's Carrier Private interface (private port) is always allocated to connect to a gateway located in the Carrier Private VPN.
- A port on the Media Portal's Carrier Public interface (public port) is always allocated to connect to a gateway located in an Enterprise VPN or on the Carrier Public network.
- A port on the Media Portal's Carrier Public interface (public port) is always allocated to connect to the side of the other call server if the SIP-T trunk is an inter-domain trunk.
- The type of the Media Portal's port allocated to connect to the side of a call server on the far end of an intra-domain trunk depends on the location of the VoIP gateway on that far end of the trunk. Refer to section 8.7.3.2 When is a Media Portal inserted for Inter-Carrier on page 111, Bullet 2.
8.7.3.4 Network considerations
The following recommendations need to be followed to assure correct interworking between two different carriers:
- The Call Processing subnets of the two carriers are advertised to each other (i.e., as shown in Figure 52 on page 112, Carrier A's Call Processing subnet is advertised to Carrier B, and vice versa).
- The Media Portal's Carrier Public subnets of the two carriers are advertised to each other (i.e., as shown in Figure 52 on page 112, Carrier A's MP subnet is advertised to Carrier B, and vice versa).
Figure 52  Carrier inter-working
In addition, the engineering of the routing protocol between the two carriers (typically BGP) needs to be carefully considered to avoid outages that might disrupt telephony services (i.e., in case of failure the system should be able to re-route around the failed element in under six (6) seconds).
NA SN09 CS 2000 call capacity

CS 2000 (XA-Core) PE Configuration:
- XA-Core Rhino PE Config: 3+1
- XA-Core Atlas PE Config (min/max): 2+1/3+1
- Maximum BHCA (hybrid tandem): 2.0M BHCA

CS 2000-Compact (NTRX51GZ):
- Maximum BHCA: 1.4M BHCA

CS 2000-Compact (NTRX51HZ):
- Maximum BHCA: 2.4M BHCA
While the supported capacity of the CS 2000 XA-Core 3+1 configurations is no higher than that of the XA-Core 2+1 configuration, the CS 2000 XA-Core 3+1 configurations are able to process more CPU-intensive calls. When the call processing occupancy of the CS 2000 XA-Core 2+1 configuration reaches 80% or more, an additional Processor Element (PE) should be considered. This example illustrates a call model that fully engineers a CS 2000 XA-Core:
- Call interworkings: 100% enbloc trunk calls (ISUP => ISUP)
- Number of call agents: A = 112,000
- Call hold time: H = 90 s
- Erlang: E = 0.74
- Capacity: C = A/2 * 3600/H * E = 1.65M BHCA
A similar equation could be used for PRI => PRI calls because they have a similar call processing cost to
ISUP => ISUP calls. But call models including any other type of call (e.g. calls involving Analogue lines, DPT, overlap signalling, IN) would push the call processing occupancy over 80%. For these call models, the number of call agents should be reduced, or the number of Processor Elements increased. Due to the multi-processing and resource sharing nature of the CS 2000 XA-Core architecture, the call processing cost (and hence the BHCA capacity) can only be determined empirically.
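The call-model arithmetic above can be checked with a short sketch (values are the example's: A = 112,000 agents, H = 90 s, E = 0.74; the function name is illustrative):

```python
def bhca_capacity(agents: int, hold_time_s: float, erlang: float) -> float:
    """C = A/2 * 3600/H * E for a 100% enbloc trunk (ISUP => ISUP) call model.

    Each call consumes two call agents (one per half-call), hence A/2;
    3600/H is the hourly turnover of each call slot.
    """
    return agents / 2 * 3600 / hold_time_s * erlang

# The document quotes 1.65M BHCA; the exact product is 1,657,600.
print(f"{bhca_capacity(112_000, 90, 0.74):,.0f} BHCA")  # 1,657,600 BHCA
```

As the text notes, this simple product only holds for call models with uniform per-call cost; mixed call models must be characterized empirically.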
Trunks
  Max # trunks supported (IP or TriModal):        200,000             200,000
  Max # AAL1 trunks:                              150,000             150,000
  Max # IP trunks:                                200,000             200,000
  Max # ISUP trunks:                              200,000             200,000
  Max # PRI trunks:                               50,025 (note 2)     60,000
  Max # DPTs:                                     Note 1              Note 1
  Max # H.323 connections:                        ~80,000 (note 3)    ~80,000 (note 3)
Lines
  Max # lines supported (packet + legacy):        180,000             180,000
  Max lines (RES or IBN / CTX):                   180,000             180,000
  Max # legacy lines including ABI lines:         150,000             150,000
  Max # cable lines:                              150,000             150,000
  Max CICM/CTX_IP + p-phone/Meridian lines:       125,000             125,000
  Max # MG9000 lines (ex ABI):                    110,000             110,000
  Max # SIP lines:                                179,025             179,025
Trunks + Lines
  Max # lines + trunks combined (IP and TriModal): 230,000            230,000
  Max # IP and/or UA-IP endpoints in PTM:         230,000             230,000
  Max simultaneous calls:                         100,000             100,000
Note 1: The number of DPTs is not inherently limited in IP solutions. The expectation is that the percentage of trunks which are DPT would be less than 50%. Note that the Session Server Trunks limit is 50K ports/NGSS.
Note 2: This is based on the limit of 2175 PRI trunk groups on the core * 23 or 30 channels, non-NFAS.
Note 3: This number is primarily limited by the # of GWCs and the # of H.323 trunk groups.
9.2.1 CS 2000 number of IBN lines
IBN is the only line type supported for POTS/analog lines on the International CS 2000. SOC option MDC00058 is used in North American loads with an upper limit set to 180,000 lines. Two new soft-limit usage SOC options (ILIN0100 for GWC-hosted lines and LNBA0002 for XPM-hosted lines) have been introduced for international Carrier VoIP loads only. The two new usage SOCs introduced in the World-Trade DRU are ILIN0100 (for Carrier VoIP lines) and LNBA0002 (for legacy lines). For international loads, all existing updates and queries of SOC option MDC00058 are replaced by a query or update to SOC option ILIN0100 or LNBA0002 as appropriate (depending on whether the line being updated is legacy or Carrier VoIP).

9.2.2 GWC port and BHCA capacity
The Nortel Carrier Voice over IP solution offers great flexibility in GWC planning. There are two basic types of GWC hardware platform: one is known as the 750 GWC and the other is known as the 905 GWC. The 750 GWC supports lines, trunks and audio as shown below.
Table 15  750 GWC capacities

GWC                               Number of Ports          Half-Calls / Hour
North American Trunk (ISUP) GWC   4094 (a)                 96,000
Trunk (PTS) GWC                   4094 (a)                 40,000
Line GWC                          6400                     38,000
Trunk (PRI) GWC                   4094 D and B channels    76,800
H.323 GWC                         1032                     30,000
Audio Controller                  300/85 (b)               80,000
SIP-T GWC                         4094                     96,000
VRDN                              4000                     900,000 (c)
APG GWC                           6048 (d)                 72,500
Table 15  750 GWC capacities (continued)
a. The number of ports is 4094, but is constrained by the integral number of spans; therefore, the actual number of usable endpoints may be slightly lower. See SEB 08-00-009 for more details.
b. Announcements/conference circuits.
c. If the VRDN uses SCTP (pre-SN04) rather than UDP (SN04 and later), the BHCA is 200,000.
d. The APG GWC supports 3024 simultaneous calls.
The total number of GWC pairs supported is 60. Each 750 GWC supports a maximum of 6400 small gateways or 27 large gateways. A small gateway, in general, is any gateway containing no more than 30 endpoints; large gateways are gateways with more than 30 endpoints. The 905 GWC offers flexibility and ease of engineering. The 905 GWC provides increased port density per GWC and allows lines, trunks, and audio to be configured on the same GWC.
905 card
- Max Endpoints (native Lines only)
- Max Simultaneous calls (without IPSEC)
- Max Simultaneous calls (with IPSEC)
- Max Small Line GW (IAD, MTA) per GWC
- Max Large Line GW per GWC
- Maximum ABI pairs per GWC (note)
- BHHCA
Note: The Max Endpoint value is not applicable when ABI lines are attached.

Combo Trunk & Small Line GWC (MGCP, NCS, Megaco)
- Max Line Endpoints (native Lines only)
- Max Small Line GW (IAD, MTA) per GWC
- Max Trunk Endpoints
- Max Audio Ports
- Max GW per GWC
- Max Simultaneous line calls (with or without IPSEC)
- Max Simultaneous trunk calls
- BHHCA

- Max Endpoints (native Lines only)
- Max Simultaneous calls (without IPSEC)
- Max Large Line GW per GWC
- Max Small Line GW (IAD, MTA) per GWC
- BHHCA
NTLX02CA (RHINO): N+1, where N = 3 Processor Elements + 1 spare
NTLX02DA (ATLAS): N+1, where N = 2 or 3 Processor Elements + 1 spare
Memory configurations from 768 MB to 1728 MB are supported in 192 MB increments.

9.3.2 CS 2000-Compact
Three flavors of the CS 2000-Compact are available based on the capacity requirements: the first generation, known as the 750 CCA (NTRX51FZ); the second generation, known as the 7410 CCA (NTRX51GZ); and the third generation, known as the 905 CCA (NTRX51HZ). Both the 7410 CCA and 905 CCA offer 1.5 GByte of memory and roughly 1 GByte of data store. The 750 CCA offered 1 GByte of memory and roughly 600 MByte of data store. The 750, 7410, and 905 CCA have a point-to-point Fibre Channel connector used for synchronization. The 905 CCA is planned to move to a point-to-point Gigabit Ethernet link for synchronization.

9.3.3 IOP, HIOP, HCMIC engineering
CS 2000 Ethernet I/O messaging interfaces consist of the old IOP with Ethernet packlet (EIOP), the HIOP, and the new HCMIC.
Note: CMIC and RTIF IOP packlets are not supported with HCMIC. If an EIOP is replaced by an HCMIC, the CMIC and RTIF links must move onto their respective HCMIC interfaces.
1. For SN07+ Initials or Legacy-to-IP-conversion IP offices: Configure 1+1 HCMIC for RTIF, CMIC and Ethernet. Monitor OM IOCAP; if one of the services exceeds 80% utilization (see the next section for the IOCAP description), then add 1+1 HIOP.
2. For Existing XA-Core offices upgrading to SN07+ IP offices with HIOP: No configuration change. Monitor OM IOCAP, if one of the services exceeds 80% bandwidth utilization, then add 1+1 HCMIC. Move CMIC and RTIF from IOP to HCMIC and configure Ethernet on both HCMIC and HIOP (2+2). IP offices with EIOP: No configuration change as long as BHCA < 400 K.
If BHCA > 400K, then replace the RTIF/CMIC/Ethernet packlets with 1+1 HCMIC (RTIF and CMIC packlets are not supported with HCMIC), or move Ethernet to 1+1 HIOP.
9.3.4 CS 2000 surveillance
Various tools exist for monitoring the capacity of the CS 2000. Proper engineering recommends that planning for a future capacity upgrade starts when the switch utilization reaches 80%.

CS 2000 Core Utilization Monitoring
>capci
CATMP/HR  931920    ENGCATMP  1107475
SCHED     162%      ENGLEVEL  ABOVE
UTIL      84%       CCOVRLD   OFF
FORE      59%       SYNC      YES
OM        4%        IDLE      NO
GTERM     0%        BKG       144%
NETM      0%        SNIP      49%
MAINT     19%       DNC       0%
AUXCP     1%
Measurements at the CS 2000 XA-Core or CS 2000-Compact (customers typically collect these measurements at the SDM) are accumulated just as they are on the DMS100. Accumulation classes are defined and specific OM groups are added to the accumulation class. The OM group BRSTAT can be used on the CS 2000c to monitor switch utilization on an hourly, daily or monthly basis, depending on how the OM group is set up. The OM group XASTAT can be used on CS 2000 XA-Core switches to monitor switch utilization on an hourly, daily or monthly basis, depending on how the OM group is set up.
>OMSHOW BRSTAT ACTIVE
BRSTAT CLASS: ACTIVE
START: 2003/06/20 10:30:00 FRI; STOP: 2003/06/20 10:52:55 FRI;
SLOWSAMPLES: 14 ; FASTSAMPLES: 138 ;

BRSCAP  BRSCMPLX  BRSSCHED  BRSFORE  BRSMAINT  BRSDNC  BRSOM
     0        84        18      171         0       0      0
BRSGTERM  BRSBKG  BRSIDLE  BRSAUXCP  BRSNETM  BRSSNIP
     165       3        1        59        0       0
> iocapci; status
IOCAP Version 1.0
Time    CMICUTIL  ETHRUTIL  AMDIUTIL
17:16         32        29        15
IO_WARNING_THRESHOLD
Measurements at the CS 2000 XA-Core (customers typically collect these measurements at the SDM) are accumulated just as they are on the DMS100. Accumulation classes are defined and specific OM groups are added to the accumulation class. The OM group IOCAP can be used on CS 2000 XA-Core switches to monitor IO utilization on an hourly, daily or monthly basis, depending on how the OM group is set up.
>OMSHOW IOCAP ACTIVE
IOCAP CLASS: ACTIVE
START: 2003/06/20 10:30:00 FRI; STOP: 2003/06/20 10:52:55 FRI;
SLOWSAMPLES: 14 ; FASTSAMPLES: 138 ;

IO_SERVICE  IOUTIL  IOHWM  TXMSGPS  TXSIZE  RXMSGPS  RXSIZE  IOTHRESH
CMIC             0      0        0       0        0       0         0
ETHER            0      0        0       0        0       0         0
AMDI             0      0        0       0        0       0         0
9.3.4.1 Call rate and Grade Of Service monitoring
Available at the office level from the CS 2000, the existing OM group OFZ has registers that record call disposition, such as Line-to-Line, Line-to-Trunk, and Trunk-to-Line calls. OFZ can continue to be used as it is today on the DMS100. This OM group includes peg counts from both the Carrier VoIP packet gateways and the existing legacy peripherals. For example, the NORIG register includes the number of line originations for both small and large line gateway-originated calls and legacy LCM line-originated calls. Key registers are:
- NORIG: Number of line originations
- ORIGTRM: Number of Line-to-Line calls
- ORIGOUT: Number of Line-to-Trunk calls
- OUTNWAT: Number of outgoing call attempts, including Line-to-Trunk and Trunk-to-Trunk calls
- NIN: Number of incoming call attempts to the office
- INTRM: Incoming calls that terminate on a line
Each of the registers noted above has an associated extension register which is scored when the register reaches a count of 65535. The extension register is identified by the number 2 after the name. For example, the extension register for NORIG is NORIG2. Using OFZ to determine the Calls Per Hour:
CPH = (NIN + (NIN2 * 65535)) + (NORIG + (NORIG2 * 65535))
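The register reassembly above can be sketched as follows (the register values used here are illustrative, not from the document):

```python
# OFZ registers roll over at a count of 65535; each has an extension
# register (e.g. NORIG2) that counts the rollovers.
def with_extension(base: int, ext: int) -> int:
    """Reassemble a full count from a base register and its extension."""
    return base + ext * 65535

def calls_per_hour(nin: int, nin2: int, norig: int, norig2: int) -> int:
    """CPH = incoming attempts (NIN) + line originations (NORIG)."""
    return with_extension(nin, nin2) + with_extension(norig, norig2)

# One rollover of NIN plus 1,200 counts, and 40,000 line originations.
print(calls_per_hour(nin=1200, nin2=1, norig=40000, norig2=0))  # 106735
```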
The OM group DTSRPM, available at the XA-Core, provides a measurement of the Grade-of-Service (Dial Tone Speed Recording) by line type. Each LGRP has its own key in DTSRPM, just as an LCM does on the DMS100. Below is an example of the DTSRPM OM showing an LGRP key, where CC04 is the four-digit site identifier, 00 is the frame number, and 0 is the shelf number within the frame.
DTSRPM CLASS: HOLDING
START: 2003/xx/xx 12:15:00 TUE; STOP: 2003/xx/xx 12:20:00 TUE;
SLOWSAMPLES: 3 ; FASTSAMPLES: 30 ;

DGTTOT  DGTDLY
   227
Register DGTTOT provides a peg count of the total originations from DT lines at the LGRP. Register DGTDLY provides a peg count of the total originations that experienced a Dial Tone Delay > 3 seconds. These registers can be used to calculate a GOS as follows:
%DTD = 100% * (originations with DTD > 3 seconds) / (total line originations) = 100% * DGTDLY / DGTTOT
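As a sketch, the Grade-of-Service ratio can be computed directly from the two registers (DGTTOT = 227 comes from the sample report above; the DGTDLY value of 2 is an assumption, since it is not legible in the sample):

```python
def dial_tone_delay_gos(dgtdly: int, dgttot: int) -> float:
    """Percentage of originations that waited more than 3 s for dial tone."""
    return 100.0 * dgtdly / dgttot

# DGTTOT from the sample report; DGTDLY assumed for illustration.
print(f"{dial_tone_delay_gos(2, 227):.2f}%")  # 0.88%
```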
OM group LMD, available at the XA-Core, provides a peg count on a per-shelf basis of the originations and terminations, as well as a traffic usage (CCS) register. From these registers, the total call volume and CCS per line can be calculated. Below is an example of the LMD OM group. As with the DTSRPM OM, LMD provides a key for each LGRP.
LMD CLASS: HOLDING
START: 2003/xx/xx 15:00:00 TUE; STOP: 2003/xx/xx 16:00:00 TUE;
SLOWSAMPLES: 36 ; FASTSAMPLES: 360 ;

CC02 02 1   INFO (LMD_OMINFO)
NTERMATT  NORIGATT  ORIGFAIL  PERCLFL  MADNTATT  ORIGBLK  TERMBLK  REVERT
     600       800         0        0         0        0        0       0
Total CA (Call Attempt) volume on an MG 9000 shelf is found using the following formula:
Total CA = NORIGATT + NTERMATT
The result should be used to verify that the shelf is operating within the call attempt capacity.
Other tools such as SPMS can be used to monitor switch performance and DMSMON to monitor CPU utilization and memory usage.
9.3.5 Traffic surveillance
The Call Server provides many Operational Measurements beyond those listed in section 9.3.4 CS 2000 surveillance on page 119. The following list of OM groups should be used to monitor the solution-level traffic disposition (call mix), call server utilization, feature activations, trunk group usage, etc. An OM accumulation class should be established with these OM groups collected during the office Busy Hour.
CP - Call Server call processing resources
CP2 - Additional Call Server call processing resources
EXT - Records seizures, overflows, high water mark of various software resources such as AMA recording units, feature extension blocks
OFZ - Office level call mix, i.e. Line-Line, Line-Trunk, Trunk-Line, Trunk-Trunk
BRSTAT - CS 2000c call server call processing utilization
XASTAT - CS 2000 XA-Core call processing utilization
IOCAP - HIOP / HCMIC link utilization
TRK - Per trunk group traffic measurements, attempts, CCS or Erlang
LMD - Per LGRP traffic measurements, attempts, CCS
AMA - Automatic Message Accounting attempts
TCAPUSAG - TCAP messaging measurements
ISUPUSAG - ISUP messaging measurements
DPTNODE - Dynamic Packet Trunk node level usage
DPTOFC - Dynamic Packet Trunk office level usage
IBNGRP - Attempts per IBN customer group
ACDGRP - Automatic Call Distribution, per group measurements
CND - CLASS feature calling number delivery activations
AIN - Advanced Intelligent Network measurements
LNP - Local Number Portability measurements
CNAMD - CLASS feature calling name delivery activations
ANN - Announcement attempts and usage
CF3P - 3-port conference circuit attempts and usage
CF6P - 6-port conference circuit attempts and usage
C7ROUTER - External Router utilization (LPP platform only)
C7RTESET - SS7 route set measurements
C7LKSET - SS7 linkset measurements
C7LINK1 - SS7 link measurements
C7LINK2 - SS7 link measurements
COT - CLASS feature Customer Originated Trace measurements
ACB - CLASS feature Automatic Call Back measurements
AR - CLASS feature Automatic Recall measurements
SCRJ - Selective Call Rejection measurements
SCF - Selective Call Forwarding measurements
SCA - Selective Call Acceptance measurements
UCDGRP - Universal Call Distribution group measurements
SCPOTS - Speed Calling on POTS measurements
TWCPOTS - Three-Way Calling on POTS lines
TWCIBN - Three-Way Calling on IBN lines
CWTPOTS - Call Waiting measurements on POTS lines
CALLFWD - Call Forwarding activations
CALLWAIT - Call Waiting activations
XPMOCC - Gateway Controller call processing occupancy and call attempts
SPMACT - SPM, IW-SPM, DPT SPM and MG4K call processing occupancy and call attempts
SPMUSAGE - SPM, IW-SPM, DPT SPM and MG4K DSP resource utilization
IWBM - InterWorking SPM attempts
STORE - Call Server memory measurements
9.3.5.1 OM accumulation class
The following steps can be used as an example to set up an OM accumulation class:
OMACCGRP <NEW ACC GRP>
ADD GROUP <OM GROUPS>
OMDUMP CLASS <NEW ACC GRP> COMMANDS
TABLE OMPRT pos <rep #> ..... CHA Y N ALLCLASS <NEW ACC GRP>
9.3.6 Trunk traffic engineering
Time Division Multiplexed (TDM) trunk traffic engineering on Carrier VoIP is the same as TDM trunk traffic engineering on the legacy DMS100/DMS250 products, and is consistent with the use of the trunk operational measurements (OM group TRK). Members of the same trunk group can be supported across VSPs, across MG 15000s, and across switching fabrics. For example, a TDM trunk group may contain members subtending both an SPM and an MG 15000.
Note: For Carrier VoIP deployments, Nortel recommends using a Most-Idle (MIDL) / Least-Idle (LIDL) trunk selection algorithm for trunk groups.
On the other hand, Dynamic Packet Trunks (DPT) between any two CS 2000 nodes are engineered as separate trunk groups (note 1). However, a DPT group may not contain both DPTs and TDM trunks. As well, a DPT trunk group is limited to 256K members (note 2). However, it is constrained by the maximum number of trunks for the office, which is currently 165K ports (although it is not likely that the number of DPT
trunks in the office would grow beyond 50% of the total trunks in the office).
Note: The number of other SIP-based application servers (e.g. other CS 2000 nodes or MCS 5200 nodes) is limited to 100, which is the size of table MGCINV. This implies that any single CS 2000 can communicate with no more than 100 other SIP-based application servers.

9.3.7 LIU7 SS7 traffic engineering
SEB 99-01-001 details the traffic engineering of the LIU7 External Routers and LIU7s for the estimated volume of SS7 ISUP and TCAP messaging. SIP-T signaling between CS 2000s is not counted as part of this volume, as it is transported through the IP network via the SIP-T GWC and VRDN.
Note: To allow table C7TRKMEM to grow beyond 100,000 entries, the 32Meg LIU7s (NTEX22CA) must be provisioned in the LPP.
Note: Any office containing SPM-based peripherals, including IW-SPM, must use external routers. A minimum of two (2) must be used. The engineering of external routers can be found in SEB 99-01-001.

9.3.7.1 Monitoring SS7 link utilization
To monitor LIU7 link capacity for purposes of determining if more links are required, the user should utilize the following registers from the Operational Measurement group C7LINK2 for all links in the linkset:
- C7BYTTX: Number of bytes transmitted during the interval period.
- C7BYTTX2: Extension register for C7BYTTX.
- C7BYTRX: Number of bytes received during the interval period.
- C7BYTRX2: Extension register for C7BYTRX.
Next, using the contents of these registers, apply the following algorithm for each link being verified:

LinkUtil = ( MAX(bytesTX, bytesRX) / (maxUtil * OMSamplePeriodSeconds) ) * 100
where,
1. Trunk groups are limited to 2048 trunk members each.
2. For SIP-T.
and maxUtil is 7000 (bytes/sec) for 56 kbit/s links or 8000 (bytes/sec) for 64 kbit/s links at 100% utilization. If the result is greater than 40%, then additional link capacity is needed. The following is used as an example of a 56 kbit/s link with a 5-minute (300-second) sample:
CLASS: HOLDING
START: 2002/09/25 07:00:00 WED; STOP: 2002/09/25 07:05:00 WED;
SLOWSAMPLES: 3 ; FASTSAMPLES: 30 ;

KEY (C7_LINKSET_NUMBER)  0 SP1LK
INFO (C7LINK_OMINFO)
C7MSUTX  C7MSUTX2  C7MSURX  C7BYTTX  C7BYTTX2  C7BYTRX
C7BYTRT  C7BYTRT2  C7MSUDSC  C7ONSET2  C7ONSET3  C7ONSETV
C7ABATE2  C7ABATE3  C7ABATEV  C7MSUDC2  C7MSUDC3  C7STRET
C7MSGLOS  C7MSGMSQ  C7MSUOR  C7MSUTE  C7MSUTE2  C7MSUTS
0 36671 63105 0 0 0 0 0 25228 0 6 0 0 0 0 0 0 25226 60923 0 0 0 1 36674 0
0 11 0 0 0 1 0 0
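Applying the algorithm to this sample can be sketched as follows (treating 63,105 and 36,671 as the transmit/receive byte counts is an assumption, since the report above is column-scrambled; the function name is illustrative):

```python
def link_util_percent(bytes_tx: int, bytes_rx: int,
                      link_kbps: int, sample_seconds: int) -> float:
    """SS7 link utilization per the C7LINK2 method.

    maxUtil is the byte capacity per second at 100% utilization:
    7000 bytes/s for 56 kbit/s links, 8000 bytes/s for 64 kbit/s links.
    """
    max_util = 7000 if link_kbps == 56 else 8000
    return max(bytes_tx, bytes_rx) / (max_util * sample_seconds) * 100

# 56 kbit/s link, 5-minute (300 s) sample window.
util = link_util_percent(63_105, 36_671, 56, 300)
print(f"{util:.2f}%")  # well under the 40% threshold
```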
9.3.8 Universal Signaling Point link engineering The Universal Signaling Point is introduced as an alternative to the LPP/LIU, thereby eliminating the need for the DMS-BUS (Message Switch). With the USP, two types of messaging are provided for ISUP messages internal to the CS 2000. One type is direct messaging and the other is message bounce. Direct messaging sends messages to/from the GWC, whereas message bounce directs the messages to/from the XA-Core for routing to/from the GWC. Note that BICC DPT GWCs must always use direct messaging with the USP. For SN06 and later, only direct messaging is supported. The following table illustrates pre-SN06 limits for direct messaging based on the number of point codes required in the office. These limits have been removed since SN06.
Table 17  USP messaging type vs. number of office Point Codes
a. The XA-Core SS7 capacity is limited to 1.5M BHCA when this option is used.
9.3.8.1 System Node engineering The following tables illustrate the engineering of the USP in USP8.0.
Table 18  USP system node engineering
System Node types: RTC, CC, SS7 DS0A Link, SS7 V.35 Link, IP Link, ATM HSL Link, SS7 IP HSL Link
a. The sum of the SS7 IP HSL Link nodes, IP-Gateway-Link nodes, and ATM HSL Link nodes must not exceed 8 per shelf.
b. The maximum number of IP Link nodes in the USP must not exceed 16.
9.3.8.2 SS7 link engineering and capacity USP-based SS7 links are traffic provisioned. Each link operates at either 56- or 64-kbits/sec in each direction, and should be engineered in mated pairs to meet the dual STP routing in the SS7 network. Each V.35 link (SS7 V.35 Link System Node) or DS0A link (SS7 DS0A System Node) will support a full 0.8 Erlang traffic load under a single-failure condition.
Message size (bytes)   56 kbit/s (112 kbit/s bi-directional)   64 kbit/s (128 kbit/s bi-directional)
                       Msg/sec @ 0.4E   Msg/sec @ 0.8E         Msg/sec @ 0.4E   Msg/sec @ 0.8E
25a                    224              448                    256              512
33a                    170              339                    206              388
80b                    70               140                    80               160
a. ISUP messaging.
b. Average message size for TCAP or LNP query/response pairs.
Description                                 ISUP/TUP Msg/CA   TCAP Msg/CA
Line to SS7 Trunk                           6
SS7 Trunk to Line                           6
DP/MF Trunk to SS7 Trunk                    6
SS7 Trunk to SS7 Trunk                      12
E800 Service (E800)                                           2
Exchange Alternate Billing Service (EABS)                     2
Private Virtual Network (PVN)                                 2, 4
Automatic Callback (ACB)                                      13
Automatic Recall (AR)                                         2
Ring Again (RAG)                                              4
Local Number Portability (LNP)                                2
Using the tables above, the link capacity in terms of call attempts per hour can be determined using the following algorithm:

LinkCapacity (CA/hour) = (Msg/sec at the engineered load ÷ Msg/CA) × 3600
For example, the following is the link capacity of a 56 kbit/s link at 0.8E for a message size of 33 bytes in a tandem (SS7 Trunk to SS7 Trunk) application (12 msgs/CA):
LinkCapacity = (339 ÷ 12) × 3600 = 101,700 CA/hour
For example, assume the HDBHCA rate for ISUP is 760K BHCA for a tandem application:

NumberOfLinkPairs = 760,000 ÷ 101,700 = 7.5
From the above example, rounding up to the next even integer, it can be seen that eight (8) link pairs (16 links, to support dual routing to the STPs) are required to support 760K BHCA of ISUP traffic. A similar approach should be used for the expected office-wide HDBHCA rates for LNP and other TCAP-based services (E800, ACB, RAG, AR, etc.). The following table shows the number of messages per second for additional message sizes.
Message size (bytes)   56 kbit/s (112 kbit/s bi-directional)   64 kbit/s (128 kbit/s bi-directional)
                       Msg/sec @ 0.4E   Msg/sec @ 0.8E         Msg/sec @ 0.4E   Msg/sec @ 0.8E
80a                    70               140                    80               160
100                    56               112                    64               128
a. ISUP messaging.
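The link-capacity and link-pair arithmetic above can be sketched in a short script. This is an illustrative sketch only: the helper names are not from the document, and the 339 msg/sec and 12 msgs/CA figures come from the tables above.

```python
import math

def link_capacity_ca_per_hour(msg_per_sec: float, msg_per_ca: int) -> float:
    """Call-attempt capacity of one SS7 link: messages per second divided
    by messages per call attempt, scaled up to an hour."""
    return msg_per_sec / msg_per_ca * 3600

def link_pairs_required(bhca: float, ca_per_hour_per_link: float) -> int:
    """Link pairs needed for a given busy-hour call-attempt load, rounded up."""
    return math.ceil(bhca / ca_per_hour_per_link)

# A 56 kbit/s link at 0.8E carries 339 msg/sec of 33-byte messages;
# a tandem (SS7 Trunk to SS7 Trunk) call uses 12 messages per attempt.
capacity = link_capacity_ca_per_hour(339, 12)   # 101,700 CA/hour
pairs = link_pairs_required(760_000, capacity)  # 8 pairs (16 links)
```

Note that each pair doubles into two physical links for dual STP routing, matching the 16-link result in the worked example.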
Once all the link-pair requirements have been determined for all messaging, the number of system nodes (V.35 or DS0A) can be determined. Recall from an earlier section that the V.35 and DS0A System Nodes each contain four (4) links. So when determining the number of these system nodes required, multiply the total link pairs required by two (2), divide the result by four (4), and round up to the next highest integer. So using the above
TotalSystemNodes = (8 × 2) ÷ 4 = 4
example, four (4) system nodes would be required.

9.3.8.3 Monitoring SS7 link utilization

To monitor USP link capacity for purposes of determining whether more links are required, the user should utilize the following registers from the Operational Measurement Application Group Link Traffic on the USP Element Manager for all links in the linkset:

Octets Received Count: Number of bytes received during the interval period.
Octets Transmitted Count: Number of bytes transmitted during the interval period.

Next, using the contents of these registers, apply the following algorithm for each link being verified:

LinkUtil = MAX(bytesTX, bytesRX) ÷ (maxUtil × OMSamplePeriodSeconds) × 100
where bytesTX is the value from the Octets Transmitted Count and bytesRX is the value from the Octets Received Count for the interval period. The value used for maxUtil is 7000 for 56 kbit/s links or 8000 for 64 kbit/s links at 100% utilization. If the result is greater than 40%, then additional link capacity is needed. Note: The number of octets received in a 5-minute interval should never exceed 1,680,000 octets for a 56 kbit/s link, or 1,920,000 octets for a 64 kbit/s link. If this condition is true, then the links are operating above 0.8 erlang.
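The utilization check above can be expressed directly as a small function. This is a sketch under stated assumptions: the function name is illustrative and a 5-minute OM sample period is assumed as the default.

```python
def link_utilization_pct(bytes_tx: int, bytes_rx: int,
                         link_kbps: int = 56,
                         sample_seconds: int = 300) -> float:
    """Percent utilization of an SS7 link over one OM sample period.
    maxUtil is 7000 bytes/sec for 56 kbit/s links, 8000 for 64 kbit/s."""
    max_util = 7000 if link_kbps == 56 else 8000
    return max(bytes_tx, bytes_rx) / (max_util * sample_seconds) * 100

# 1,680,000 octets in a 5-minute sample on a 56 kbit/s link is 0.8 erlang:
util = link_utilization_pct(1_680_000, 1_500_000)  # 80.0, well over the 40% mark
```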
9.3.8.4 IPS7 link engineering

The IPS7 Links are traffic provisioned. At 0.8E, the IPS7 Links can handle a continuous throughput of 1792 messages per second. Typically, the IPS7 Links must be engineered in mated pairs at 40% of their maximum rated capacity in order to meet the reliability requirements for transporting SS7 messages. In failure mode, the IPS7 Link can support a sustained message throughput rate of 2240 messages per second for approximately 30 seconds.
Message Size   20% of rated capacity   Normal (40% of rated capacity)
19 bytes       448 msg/sec             896 msg/sec
273 bytes
In order to correctly engineer the number of IPS7 links required by the USP, the following data is required:
a. Number of GWCs controlling ISUP trunks
b. Number of DS0/V.35 SS7 links
c. Number of ATM SS7 links
d. Number of IP SS7 links
e. Maximum engineered traffic rate on the SS7 links (i.e., 40%)
The number of IPS7 Link cards required is the greater of the following values (all values must be rounded up to the nearest even number, e.g., 4.1 becomes 6):

SS7 traffic limited:
IPS7Links = (b × e) ÷ 3.2 + c + d
or a minimum of two (2) IPS7 Link cards. For example:
a. The number of ISUP Gateway Controllers required has been determined to be 25 (see 9.4 Gateway Controller engineering on page 137).
b. Sixteen (16) DS0A or V.35 signalling links (see 9.3.8 Universal Signaling Point link engineering on page 127).
c. Zero (0) ATM SS7 links.
d. Zero (0) IP SS7 links.
e. Maximum engineered traffic rate on the SS7 links of 40%.
Using the above algorithms and taking the larger of the two, the number of IPS7 links can be determined as follows:

SS7 traffic limited:
IPS7Links = (16 × 0.40) ÷ 3.2 + 0 + 0 = 2
then taking the larger of the two results, with two (2) being the minimum required.

9.3.8.4.1 Application Server Paths

Note: Each IPS7 Link card has a limit of 16 paths to Application Servers such as the Gateway Controllers or the XA-Core. If the total number of ISUP-based Gateway Controllers exceeds 16, then two (2) additional IPS7 Link cards should be added, since each Gateway Controller should have two paths. When configuring the ASP, paths to both the CS 2000 (XA-Core or Compact) and the Gateway Controllers are required. For the paths to the CS 2000, up to the first four (4) mated IPS7 cards (2 pairs) should each have a path between the USP and the CS 2000, for a total of four (4) paths. The remaining 15 paths per card should be directed to the Gateway Controllers. If still more IPS7 cards are required to support the Gateway Controllers, then the paths should be placed on these cards, up to 16 paths each.
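The SS7-traffic-limited calculation above can be sketched as follows. This is a sketch only: the function name is illustrative, the even-rounding and two-card minimum follow the rules stated above, and the separate GWC-count limit is not modeled here.

```python
import math

def ips7_link_cards(ss7_links: int, atm_links: int, ip_links: int,
                    max_traffic: float = 0.40) -> int:
    """SS7-traffic-limited IPS7 Link card count: (b * e) / 3.2 + c + d,
    rounded up to the nearest even number, with a minimum of two cards."""
    raw = ss7_links * max_traffic / 3.2 + atm_links + ip_links
    cards = math.ceil(raw)
    if cards % 2:
        cards += 1  # round up to the next even number
    return max(cards, 2)

print(ips7_link_cards(16, 0, 0))  # 2, matching the worked example
```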
Note: IMPORTANT PROVISIONING RULE FOR ALL SN07 SITES: When provisioning the USP ASP for paths to the GWC, ensure the SNM broadcast checkbox is NOT selected.

9.3.8.5 Monitoring M3UA IP link capacity

To monitor M3UA IP Link capacity for purposes of determining whether more links are required, the user should utilize the following registers on the USP OM Form, ASP Path Traffic:

Transmit path:
Through-Switched MSU Count: Messages from the SS7 network to the IPS7 (CS-LAN side) network.
Originated MSU Count: Messages originating from the USP to the XA-Core.

Receive path:
Terminated MSU Count: Messages from the XA-Core, terminating on the USP.
Received MSU Count: Messages from the XA-Core such as TCAP query/response pairs or LNP queries.

Applying the following algorithm with data from the above sampled registers, one can determine link utilization for purposes of engineering more links as needed:

MSUTX = Sum of Through-Switched MSU Count + Originated MSU Count
MSURX = Sum of Terminated MSU Count + Received MSU Count
If the combined utilization of the mated links exceeds 80%, an additional set of mated links should be added.

9.3.9 USP-Compact

The USP-Compact supports a maximum of 16 links and linksets on two (2) USP-Compact blades, and supports both channelized T1/E1 SS7 links (four (4) or eight (8) channels per card) and IPS7 connections. The eight (8) channelized DS0A SS7 channels on a single T1 carrier can all be configured to utilize either the 64 kbit/s or the 56 kbit/s data rate. These eight channels may be assigned to any of the available time slots on the T1. SS7 messaging capacity is restricted by the bandwidth of the SS7 channels. Each channel is engineered at 0.4 erlang, much the same as the USP described in the previous section. Please refer to section 9.3.8 Universal Signaling Point link engineering on page 127 for engineering each of the SS7 links.
Note: USP-Compact does not support M2PA IP high-speed SS7 links, ATM-based high-speed SS7 links, DS0A SS7 links, or V.35 SS7 links.

9.3.10 MS2010 and Audio GWC engineering

The MS2010 is controlled by the Audio Gateway Controller (GWC). In SN07, GWC capacity was increased to 1280 simultaneous announcements to accommodate Music on Hold, a feature expected to have longer holding times than treatment announcements. Features supported on the MS2010 are listed below:
Announcements (ANN)
Music on Hold (MoH)
3-Port Conferences (C3P)
6-Port Conferences (C6P)
Meet-Me Conferences (CMM)
Lawful Intercept (LI)
Trunk Testing (TT)

9.3.10.1 MS2010 provisioning

Provisioning the Audio GWCs and MS2010 typically begins with customer input on expected demand for the various supported features at the site. Demand is stated in terms of how many calls will access each feature and the average holding time for usage of the feature. When needed, trunk-testing demand is based on the number of trunks at the site. Traffic calculations then translate demand into busy hour call attempts (BHCA) and port requirements. A provisioning tool is available to assist with the planning/provisioning calculations. Contact your Nortel network engineer for provisioning tool assistance. Demand for MS2010 services is typically higher in end offices than in tandems. This section illustrates provisioning calculations for a hypothetical large IP end office with 650,000 BHCA. The demand for features is assumed to be:
Treatment announcements: 7% with 12 second AHT (average holding time)
MoH announcements: 1% with 30 second AHT
3-Port conferences: 1% with 340 second AHT
Packet Media Anchored (PKTMA) calls: 5% of eligible4 calls with 240 second AHT. For this example, assume 10% of total office calls are carried on DPT and may require PKTMA insertion at 5%.
4. DPT trunked calls are PKTMA eligible, meaning some types of calls over these trunks MAY have a PKTMA inserted.
LI and trunk testing are not included, for simplicity and because they are not expected to create large demand for BHCA or ports. Although 6-Port conferences could require a large number of ports in a particular office, field OMs from a few study sites did not show substantial 6-Port usage. No field data is currently available on Meet-Me conferences. Both 6-Port and Meet-Me conferences are ignored in the provisioning calculations below. The total BHCA load from the above assumed demand is:
BHCA = 650,000*(7% for ANN + 1% for MoH + 1%*3 conference legs for 3-Port Conf. + 5%*10%*4 legs for PKTMA calls) = 84,500
The traffic load in erlang for each service, and the port requirements using an approximate traffic-efficiency factor of 0.8, are:
ANN erl = 650,000 × (7% × 12)/3600 = 152 erlang, which translates into about 152/0.8 = 190 ports
MoH erl = 650,000 × (1% × 30)/3600 = 54 erlang, requiring about 68 ports
C3P erl = 650,000 × (1% × 3 × 340)/3600 = 1,842 erlang, requiring about 2,304 ports
PKTMA erl = 650,000 × 10% × (5% × 4 × 240)/3600 = 867 erlang, requiring about 1,084 ports
The total BHCA is 84,500, which requires two (2) Audio Controller GWCs, each rated at 80,000 BHHCA. The total MS2010 port requirement for the four services is 3,646. Both the 84,500 BHCA load and the 3,646 required ports can be accommodated by 16 240-port MS2010s and two (2) GWC pairs. After satisfying the N+1 sparing requirement, the equipment needed consists of: 17 (N+1) 240-port MS2010s and 2 GWCs. This configuration of 2 GWCs with a sufficient number of MS2010s to handle the port requirements and sparing is likely to cover most applications. In that case, all announcement ports are controlled by MS2010s under one (or more) GWC and all conferencing ports are controlled by a different GWC. N+1 sparing of MS2010s must be carried out for both the announcement MS2010s and the conferencing MS2010s.
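The demand-to-erlang-to-ports translation above can be sketched with a small helper. This is an illustrative sketch: the helper name is not from the document, and the 0.8 efficiency factor is the approximate figure used in the text.

```python
import math

def service_load(office_bhca: int, fraction: float, aht_seconds: float,
                 legs: int = 1, efficiency: float = 0.8) -> tuple:
    """Return (erlang load, approximate ports) for one MS2010 service."""
    erlang = office_bhca * fraction * legs * aht_seconds / 3600
    return erlang, math.ceil(erlang / efficiency)

office = 650_000
ann = service_load(office, 0.07, 12)                    # ~152 erlang, 190 ports
moh = service_load(office, 0.01, 30)                    # ~54 erlang, 68 ports
c3p = service_load(office, 0.01, 340, legs=3)           # ~1842 erlang
pktma = service_load(office, 0.10 * 0.05, 240, legs=4)  # ~867 erlang, 1084 ports
```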
Note: A GWC may not be provisioned in table SERVSINV with a UAS and an MS2010 if the MS2010 is performing the PKTMA function. It is permissible to have a UAS and an MS2010 on the GWC if that MS2010 is configured with LI. In all cases, after the equipment is in service, it is important to monitor performance to determine if additional equipment must be provisioned.
9.4 Gateway Controller engineering

The MCPN750 (Nortel PEC NTRX51BL) can reside in the same shelf with the MCPN905 blades.

9.4.1 Line-based Gateway Controllers

The rated capacity of a line-based GWC is 38,000 BHHCA. The gateway controllers can accommodate up to 6400 H.248 lines (native MG 9000 lines). (However, the 12-shelf and 512-lines-per-shelf limits described below reduce this number in practice to 6144 H.248 lines.) Busy Hour Half Call Attempts and simultaneous calls must be checked to ensure that the limits identified in Table 27 are not exceeded. Each shelf on the MG 9000 is a separate logical gateway, or LGRP, from a call-processing perspective. All the lines in an MG 9000 shelf are supported from the same GWC. When a new MG 9000 shelf is provisioned, the system will reserve the maximum number of lines defined by the gateway provisioning template, 512 lines (end points). A GWC can support a maximum of 12 MG 9000 shelves. A separate GWC is needed for the UAS. The ABI DS-512 supports connection of ESMA, LGCI, LGC, LTC and LTCI off the MG 9000. DS-512
cards must be housed on a master shelf of the MG 9000 and are provisioned in pairs to service both planes (0 and 1) of the subtended XPM. Up to four (4) pairs of ABI DS-512 cards can be provisioned on an MG 9000 master shelf. Up to eight (8) pairs of ABI DS-512 cards may be provisioned on an individual Line Gateway Controller (GWC). The lines subtending the ABI peripherals do not count as part of the 6400 H.248 lines that can be used on the GWC. A GWC can support the 6400 H.248 lines (on POTS32, SAA12, or DSL line cards) in addition to the zero (0) to four (4) subtending ABI XPMs. If eight (8) ABI DS-512 cards are supported by one GWC, no MG 9000 H.248 lines (LGRPs) are supported on that GWC. GR303 and BRI lines do not count toward the 6400 limit. As shown in the table below, four GWC configurations are defined based on the maximum number of H.248 lines and the maximum number of ABI DS-512 cards supported.
GWC configurations: A, B, C, D
Given the various GWC configurations discussed above, a tool has been developed to simplify GWC provisioning and check that Busy Hour Half Call Attempts and simultaneous half call capacity limits are not exceeded. Contact your Nortel representative to obtain access to the tool. If the Call Server complex is not equipped with ABI DS-512 cards then the GWC provisioning rule is:
Number of MG 9000 shelves in the complex divided by 12 rounded up to the next integer plus 1 for the UAS.
For administrative reasons or other considerations, customers may want to deploy more GWCs than the minimum calculated by the formula above. In that case the formula results can still be useful in showing how close the actual deployment comes to the minimum possible. Also, it is recommended to spread the ABI DS-512 cards across GWCs. The gateway controllers have sufficient real time to accommodate up to 6400 lines, not to exceed six (6) call attempts/line/hour at a maximum average hold time of 187 seconds. To determine the number of
line-based GWCs required, simply divide the total number of lines in the office by 6400, rounding up to the nearest integer. If the average per-port call rate exceeds the rate specified in the previous paragraph, or the number of simultaneous calls exceeds the value specified in Table 15 on page 115, then each of the capacity limits must be considered:

1. Determine the number of GWC pairs based on port limits.
NumberofGWC_ports = TotalNumberofPorts ÷ 6400
2. Determine the number of GWC pairs based on engineered BHHCA limits.

NumberofGWC_BHHCA = (TotalNumberofPorts × BHHCAperPort) ÷ 38,000
3. Determine the number of GWC pairs based on simultaneous calls. Note that the following equation assumes a grade of service of 0.1% blocking for the 2000 simultaneous calls, thereby producing an Erlang load of 1907.
NumberofGWC_simulCalls = (TotalNumberofPorts × ErlangLoadperPort) ÷ 1907
4. Determine the number of GWC pairs based on the maximum of the previous steps, rounding up to the next whole integer.
For example, consider an office requiring 100,000 lines at 4.5 BHHCA per port with an average hold time of 250 seconds.

1.
NumberofGWC_ports = 100,000 ÷ 6400 = 15.6
2.
NumberofGWC_BHHCA = (100,000 × 4.5) ÷ 38,000 = 11.8
3.
NumberofGWC_simulCalls = (100,000 × 0.3125) ÷ 1907 = 16.3
4.
NumberofGWC = ROUNDUP(MAX(15.6, 11.8, 16.3)) = 17
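The three checks can be combined in a small function. This is a sketch only: the function and parameter names are illustrative, the limits are those quoted in the text, and the 1907-erlang figure is the 0.1%-blocking load for 2000 simultaneous calls stated above.

```python
import math

def line_gwc_pairs(ports: int, bhhca_per_port: float, aht_seconds: float,
                   port_limit: int = 6400, bhhca_limit: int = 38_000,
                   erlang_limit: int = 1907) -> int:
    """GWC pairs needed: the worst of the port, BHHCA, and
    simultaneous-call (erlang) limits, rounded up."""
    by_ports = ports / port_limit
    by_bhhca = ports * bhhca_per_port / bhhca_limit
    erlang_per_port = bhhca_per_port * aht_seconds / 3600
    by_calls = ports * erlang_per_port / erlang_limit
    return math.ceil(max(by_ports, by_bhhca, by_calls))

print(line_gwc_pairs(100_000, 4.5, 250))  # 17, as in the example
```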
In this example, 17 GWC pairs are required to satisfy the capacity needs.

9.4.2 Trunk-based Gateway Controllers

The gateway controllers have sufficient real time to accommodate up to 4094 trunks, not to exceed 23 call attempts/trunk/hour. The engineering approach is to divide the total number of ISUP (or PRI, PTS) trunks in the office by 4094 to determine the number of trunk GWCs required. Attention should also be paid to optimizing the engineering between the gateway and the GWC. If the average per-port call rate exceeds the rate specified in the previous paragraph, or the number of simultaneous calls exceeds the value specified in Table 15 on page 115, then each of the capacity limits must be considered:

1. Determine the number of GWC pairs based on port limits.
NumberofGWC_ports = TotalNumberofPorts ÷ 4094
2. Determine the number of GWC pairs based on engineered BHHCA limits, where GWC_BHHCA is the value for ISUP, PTS, or PRI, respectively, in Table 15 on page 115.
NumberofGWC_BHHCA = (TotalNumberofPorts × BHHCAperPort) ÷ GWC_BHHCA
3. Determine the number of GWC pairs based on the maximum of the previous steps, rounding up to the next whole integer.
NumberofGWC = MAX(NumberofGWC_ports, NumberofGWC_BHHCA)
For example, consider an office requiring 95,000 ISUP trunks at 16 BHHCA per port and 5,000 PRI trunks at 20 BHHCA per port.

1.
NumberofGWC_ports = (95,000 + 5,000) ÷ 4094 = 24.4
2.
NumberofGWC_BHHCA (ISUP) = (95,000 × 16) ÷ 96,000 = 15.8
NumberofGWC_BHHCA (PRI) = (5,000 × 20) ÷ 76,800 = 1.3
3.
NumberofGWC = ROUNDUP(MAX(24.4, 15.8 + 1.3)) = ROUNDUP(MAX(24.4, 17.1)) = 25
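With per-trunk-type BHHCA limits, the same dimensioning pattern becomes the following sketch. The function name is illustrative, and the limits are those quoted from Table 15 in the example above.

```python
import math

def trunk_gwc_pairs(groups, port_limit: int = 4094) -> int:
    """groups: iterable of (ports, bhhca_per_port, gwc_bhhca_limit) tuples,
    one per trunk type (ISUP, PRI, PTS, ...)."""
    groups = list(groups)
    by_ports = sum(p for p, _, _ in groups) / port_limit
    by_bhhca = sum(p * rate / limit for p, rate, limit in groups)
    return math.ceil(max(by_ports, by_bhhca))

# 95,000 ISUP at 16 BHHCA (96,000 limit) and 5,000 PRI at 20 BHHCA (76,800 limit):
print(trunk_gwc_pairs([(95_000, 16, 96_000), (5_000, 20, 76_800)]))  # 25
```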
In this example, 25 GWC pairs are required to satisfy the capacity needs.

9.4.3 Audio Controller-based Gateway Controllers

One AC GWC pair (active + spare) can be deployed with one UAS and/or MS 2010. The engineering of the MS 2010 and UAS is detailed in section 9.3.10 MS2010 and Audio GWC engineering on page 135.

9.4.4 Anchor Packet Gateway-based Gateway Controller

Note: A decision has been made to restrict further deployment of the APG, which has been replaced by PMA. The engineering of PMA is detailed in section 9.3.10 MS2010 and Audio GWC engineering on page 135. Further deployment of the APG is restricted, requiring approval from PLM.

9.4.5 SIP-T DPT Gateway Controllers

The rated capacity of the SIP-T GWC is 96,000 BHHCA. The SIP-T GWC has sufficient real time to accommodate up to 4094 ports at 23 call attempts/trunk/hour. Each port corresponds to a Dynamic Packet Trunk (DPT). The engineering approach is to divide the total number of Dynamic Packet Trunks in the office by 4094 to determine the number of SIP-T GWCs required. If the average per-port call rate exceeds the rate specified in the previous paragraph, or the number of simultaneous calls exceeds the value specified in Table 15 on page 115, then apply the following algorithm to calculate the correct number of gateway controllers:

1. Determine the number of GWC pairs based on port limits.

NumberofGWC_ports = TotalNumberofPorts ÷ 4094

2. Determine the number of GWC pairs based on engineered BHHCA limits.

NumberofGWC_BHHCA = (TotalNumberofPorts × BHHCAperPort) ÷ 96,000
3. Determine the number of GWC pairs based on the maximum of the previous steps, rounding up to the next whole integer.
NumberofGWC = MAX(NumberofGWC_ports, NumberofGWC_BHHCA)
For example, consider an office requiring 50,000 DPT trunks at 20 BHHCA per port.

1.
NumberofGWC_ports = 50,000 ÷ 4094 = 12.2
2.
NumberofGWC_BHHCA = (50,000 × 20) ÷ 96,000 = 10.4
3.
NumberofGWC = ROUNDUP(MAX(12.2, 10.4)) = 13
In this example, 13 GWC pairs are required to satisfy the capacity needs.

9.4.6 VRDN Gateway Controllers

The VRDN is used to route SIP-T signaling messages to other CS 2000 nodes while supporting signaling to a number of other CS 2000 nodes. In a CS 2000 office, multiple VRDNs can exist; however, the signaling path between any two (2) specific CS 2000 nodes is supported on only a single VRDN. That is, individual DPT trunk groups to an adjacent CS 2000 cannot exist on more than a single VRDN.
Trunk traffic engineering is required to ensure that the sizing of the DPT trunk groups keeps the call volume through the VRDN below the rated capacity of 900K calls/hour.

Note: If an SN06-based VRDN is communicating with a far-end VRDN at another CS 2000, and that far-end VRDN is running a pre-SN05 load, the two VRDNs will negotiate to use SCTP rather than UDP. This lowers the capacity of the near-end VRDN.

9.4.7 H.323 Gateway Controllers

The H.323 Gateway Controller is a non-concentrating GWC that is capable of the capacities shown in Table 15 on page 115 (including the signaling path, or D-channel). For the most part, media gateways subtending H.323 Gateway Controllers appear much like a PBX (Private Branch Exchange). As such, the PRI trunks6, or H.323 VoIP trunks7 for the following discussion, are configured on the CS 2000 to communicate with the H.323 media gateway and should be engineered based on the amount of traffic expected to/from the media gateway. The total H.323 VoIP trunks provisioned on a GWC to the various media gateways must not exceed the values shown in Table 15 on page 115. They must also be datafilled on the CS 2000 Core component. When engineering the number of H.323-based GWCs required, each of the capacity limits (BHHCA and ports) must be considered using the following algorithm.
ErlangTraffic = Σn (Pn × Cn × Tn × Mn) ÷ 3600

where
Pn is the number of ports of each port type n
Cn is the per-port call-attempt rate of each port type n
Tn is the per-port hold time of each port type n
Mn is the per-port percentage expected to access the VoIP trunks to/from the CS 2000 for each port type n
As an illustration of engineering, determine the amount of erlang traffic expected to originate out of, or terminate to, an H.323 media gateway. For example, consider a gateway with 48 trunks at 0.8 erlang with 180-second AHT and 200 lines at 6 ccs each (0.167 erlang) with 250-second AHT, where 75% and 65%, respectively, of their traffic goes to/from the CS 2000. Also assume that subtending H.323 media gateways can carry
6. It should be noted that the total number of PRI trunk groups provisionable on a CS 2000 is limited to 2175. Since H.323 VoIP trunks are provisioned as PRI appearances on the CS 2000, they contribute to the number of PRI trunk groups in the office. 7. These trunks are actually logical in that there is no physical connectivity other than through the routers to the device. It is more of a signalling path.
100 erlang of traffic each. In the above example, the ports on the media gateway are capable of generating 71.8 erlang of traffic. Each trunk handles 16 call attempts/hour and each line 2.4 call attempts/hour. From this, the number of H.323 VoIP trunks to/from the media gateway to the CS 2000 can be determined from the above algorithm. Applying the algorithm yields the following result:
(48 × 16 × 180 × 0.75 + 200 × 2.4 × 250 × 0.65) ÷ 3600 = 50.5
From this, it can be seen that 50.5 erlang of traffic requires access outside of the media gateway. Applying standard erlang tables and a 0.1% GoS for this media gateway, 72 H.323 VoIP trunks are required to support this level of traffic. H.323 trunks can be allocated in any number between four (4) and 672. Terminal-id space (the internal representation of the VoIP trunks) is allocated when the corresponding gateway is provisioned, not when the carriers are provisioned. Hence, the reserved terminal-id space in the GWC is equal to the number of ports provisioned on the gateway, not the number of carriers provisioned for the gateway. This process is repeated for all H.323 media gateways. Then sum all H.323 VoIP trunks required and divide by the number of VoIP trunks per GWC, rounding up to the next highest integer, keeping in mind that each individual GWC cannot exceed 1024 (International) or 1032 (North America) H.323 VoIP trunks minus the number of D-channels required. If a given media gateway causes a GWC to exceed this limit, then this media gateway must be provisioned on another GWC. For this example, assume all H.323 media gateways require the same number of VoIP trunks. In this case, ten (10) H.323 media gateways can subtend a single Gateway Controller. Since the number of gateways is ten (10), we
NumberofGWCs = ROUNDUP((10 × 96) ÷ (1024 − 10)) = ROUNDUP(0.947) = 1
must also remove one (1) D-channel per gateway from the total VoIP trunks supported, since these are reserved for signalling. Finally, the 30,000 BHHCA constraint must be considered. If each of the ten (10) H.323 media gateways offers 50.5 erlang of traffic, then the BHHCA required is determined as follows:
(10 × 50.5 × 3600) ÷ (180 × 0.75 + 250 × 0.65) = 6111
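The per-gateway erlang sum and the BHHCA check above can be sketched as follows. The function name is illustrative; the port-group tuples follow the symbols Pn, Cn, Tn, Mn defined earlier, with the example's figures.

```python
def offered_erlang(port_groups) -> float:
    """Sum of Pn * Cn * Tn * Mn over all port types, divided by 3600."""
    return sum(p * c * t * m for p, c, t, m in port_groups) / 3600

# 48 trunks at 16 CA/hr, 180 s AHT, 75% to the CS 2000;
# 200 lines at 2.4 CA/hr, 250 s AHT, 65% to the CS 2000:
erl = offered_erlang([(48, 16, 180, 0.75), (200, 2.4, 250, 0.65)])  # ~50.5

# BHHCA offered by ten such gateways, weighted by the traffic mix:
bhhca = 10 * erl * 3600 / (180 * 0.75 + 250 * 0.65)  # ~6100, well under 30,000
```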
The result above is well within the 30,000 BHHCA constraint of the Gateway Controller. So, for this example, the GWC is port limited rather than capacity constrained.

9.4.8 CICM Gateway Controllers

Please refer to NTP 297-5551-100, Centrex IP Client Manager (CICM) Engineering Guide, for information on CICM GWC capacity.

9.4.9 GWC surveillance

Table SERVRINV can be used to identify the number of GWCs. Table LGRPINV identifies the LGRPs (field srvrname), as discussed above, with their associated GWC. Also, table LTCINV (field OPT-ATTR, datafilled as EXTDS512) should be used to determine the ABI DS-512s associated with a particular GWC. All of these tables are datafilled in the XA-Core. An important part of determining proper engineering of GWCs is monitoring their performance. This is achieved by using the XPMOCC OM group. Available at fifteen-minute intervals, these registers provide the processor occupancy of the GWC. Most processor-occupancy registers peg the number of one-minute intervals the CPU was at a given level. The following registers are of key importance to the discussion of surveillance:
AVGCPOCC: Average call-processing occupancy (expressed as a percentage)
PMORIGS: Total call origination attempts
PMTERMS: Total call termination attempts
NUMRPTS: Total number of fifteen-minute reports added to the accumulation registers during the accumulation period, used to normalize AVGCPOCC
The following algorithm should be used to calculate the number of half-call BHCA:
BHHCA = (PMORIGS + PMTERMS) ÷ S × 3600
where S is the number of seconds in the accumulation period. If this result is approaching the associated value in Table 15 on page 115, additional ports should not be provisioned on this GWC pair but rather on another GWC pair. Additionally, the average call-processing occupancy should also be monitored to ensure proper engineering, or that the engineering point is not being exceeded. The following algorithm should be applied:
CallProcessingOcc = AVGCPOCC ÷ NUMRPTS
If the result is approaching 80%, then it is recommended that no additional ports be provisioned on this GWC pair, but rather on another GWC pair.

Note: Nortel recommends an accumulation period of at least one hour for more accurate results. Also, this should be done, if possible, during the office's HDBH.
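The two surveillance formulas can be sketched together as follows. The register values in the usage lines are hypothetical examples, not taken from a real office.

```python
def bhhca_from_oms(pmorigs: int, pmterms: int, accumulation_seconds: int) -> float:
    """Half-call busy-hour rate from the XPMOCC origination/termination pegs."""
    return (pmorigs + pmterms) / accumulation_seconds * 3600

def call_processing_occupancy(avgcpocc: float, numrpts: int) -> float:
    """Average occupancy normalized by the number of 15-minute reports."""
    return avgcpocc / numrpts

# One-hour accumulation (four 15-minute reports), hypothetical register values:
rate = bhhca_from_oms(12_000, 11_500, 3600)   # 23,500 BHHCA
occ = call_processing_occupancy(280, 4)       # 70.0 percent, below the 80% mark
```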
9.5 CS 2000 Session Server Trunks

The focus of this section is to provide rules and guidelines for the two (2) RFC 3261-compliant SIP offerings of the CS 2000, known as CS 2000 SIP Inter-Softswitch Signaling and CS 2000 SIP Application Server Signaling (formerly known as NGSS). Throughout this section, the two IP Trunking applications (and the corresponding hardware platform) will be referred to as the CS 2000 Session Server Trunks or Session Server Trunks (SST), and the terms may be used interchangeably. Also, CS 2000 and CS 2000-Compact are used interchangeably.

9.5.1 Introduction

The Session Server Trunks provides the CS 2000 with an IETF-compliant (RFC 3261), open communication interface using the Session Initiation Protocol (SIP), while maintaining the previously supported Session Initiation Protocol for Telephones (SIP-T). Prior to Session Server Trunks, the Carrier VoIP SIP offering was a proprietary implementation of the Session Initiation Protocol for Telephones (RFC 3372), also known as SIP-T. This architecture was based
on the Virtual Router Distribution Node (VRDN) and Gateway Controllers. Session Server Trunks is an evolution path for the VRDN, expanding the CS 2000's SIP/SIP-T functionality as well as its market compatibility. Session Server Trunks, similar to the VRDN, communicates with a pool of DPT GWCs; however, with the Session Server Trunks all SIP call processing is done within the Session Server Trunks rather than in the GWCs. Session Server Trunks communicates with the DPT GWCs using a generic call control protocol (GCP), which triggers the usage of the Dynamic Packet Trunking (DPT) infrastructure to set up and tear down calls. Session Server Trunks provides real-time active sessions, as it maintains all call states. By mapping ISUP messages to SIP and vice versa, Session Server Trunks creates a trunking interface from the CS 2000 TDM interface to any SIP domain. The CS 2000 is now capable of communicating with other CS 2000s, third-party Call Servers, or Application Servers utilizing RFC 3261-compliant SIP. Figure 53 on page 147 illustrates the supported Session Server Trunks implementations in a Carrier VoIP environment.
Figure 53
As shown in Figure 53 on page 147, Session Server Trunks inter-operates with other CS 2000s' Session Server Trunks, VRDNs, MCSs, or third-party Application Servers or Call Servers. The far-end systems are provisioned in the local Session Server Trunks as remote SIP servers. Session Server Trunks supports both TCP and UDP as transport mechanisms. For Session Server Trunks to Session Server Trunks communications, SIP is utilized; SIP messages can be carried over either UDP or TCP. For Session Server Trunks to VRDN, DPT is utilized, and the messages are carried over
UDP. Finally, for Session Server Trunks to MCS communications, SIP is utilized and carried over UDP. Table 24 on page 148 summarizes the supported protocols.
Connection                                        Signaling
Session Server Trunks to Session Server Trunks    SIP or SIP-T
Session Server Trunks to VRDN                     SIP-T
Session Server Trunks to MCS                      SIP
Session Server Trunks to 3P Apps                  SIP or SIP-T
Currently, Session Server Trunks and the VRDN may coexist in a single CS 2000 office only during an upgrade scenario. Once Session Server Trunks is installed and commissioned, as DPT GWCs are provisioned in Session Server Trunks, the GWCs will start routing calls through the Session Server Trunks. Upon full call transition from the VRDN to Session Server Trunks, the VRDN should be decommissioned. Please refer to the appropriate documentation for migrating from the VRDN to Session Server Trunks. The following sections provide an overview of the Session Server Trunks and its underlying hardware/software platform.

9.5.2 Session Server Trunks overview

The Session Server Trunks consists of a mated pair of Services Application Module - eXtreme Thin Server (SAM-XTS) units (active and inactive hot standby). Each unit is a fully functional server connected via 10/100 or 1000Base-T interfaces to the CS-LAN. Each server provides processor capacity, local disk storage, and high-bandwidth network connectivity to the Session Server Trunks. The applications listed earlier run at the top level of a multi-layer software platform. All Session Server Trunks provisioning is done via a web interface (i.e., IEMS) communicating with the resident web server running as part of the CS 2000 Session Server's base-layer functionality. Session Server Trunks inter-operates with all User Agents on the SIP network(s). The SIP signaling from Session Server Trunks is tunneled into the CS 2000 as Dynamic Packet Trunks (DPT). The DPTs in the CS 2000 are datafilled as ISUP trunks. The supported ISUP trunk types are: IBNT2, IBN7, IMT, IT, ATC, and EANT.
Carrier Voice over IP Solution Engineering Rules Version: 9.0.3 Document Number: SEB-08-00-010
For SIP-T, any variant is allowed for the DPTs, as Session Server Trunks encapsulates the ISUP message as a SIP message payload. For SIP gateways that do not support SIP-T and do not provide an ISUP payload in the SIP message, Session Server Trunks can generate an ISUP payload. However, the payload will be based on the ETSI or ANSI variants and only for the IBNT2, IBN7, and IT DPT trunk types.

9.5.3 Physical network connectivity

The Session Server Trunks is a CS-LAN-based device, meaning it must be connected to redundant L2/L3 devices such as the ERS 8600s and must meet the CS-LAN equipment redundancy requirements. Please refer to SEB 08-00-001 for detailed physical network connectivity. Note: Although the Session Server Trunks is capable of operating at 1000 Base-T, from an engineering point of view the use of 100 Base-T is recommended; it is also the more cost-effective alternative.

9.5.3.1 Protection mechanism

Session Server Trunks, as a carrier-grade platform, provides both link and node protection. For node protection, the two (2) units of the Session Server Trunks operate in an active/inactive mode. On a unit activity switch, if the switch-over is due to a warm SWACT, stable calls are preserved and calls in progress may be lost. If the activity switch is due to a cold SWACT, all calls are impacted. For link protection, the LAN links on each Session Server Trunks unit inter-connect the units to dual L2/L3 devices in the CS-LAN (i.e., ERS 8600s, also in an active/inactive role). Any link or port failure is detected via loss of signal and results in sub-second switchover between the links. NO calls are impacted as a result of a link failure. In addition, the two (2) units are connected via two (2) Gigabit Ethernet PTP links for inter-unit communications. These links provide synchronization between the active and inactive units.
9.5.3.2 Third party CS-LAN

Third party CS-LAN equipment must meet the following requirements in order to guarantee the expected failure detection between the Session Server Trunks and the LAN connectivity. The third party CS-LAN must:
- Consist of dual L2/L3 devices,
- Support 100 Base-T or 1000 Base-T connectivity,
- Be capable of sharing/extending the subnet where the Session Server Trunks is subtending,
- Support VRRP.
9.5.4 IP address requirements

IP address assignment on the Session Server Trunks is ONLY applicable to the links that interconnect the Session Server Trunks to the CS-LAN (noted as LAN Links). However, there are additional reserved IP addresses used by the Session Server Trunks solely for inter-unit connectivity (noted as PTP Links). This section describes the two link types and the IP address assignment options.

9.5.4.1 LAN links

Session Server Trunks requires a block of eight consecutive IP addresses where the last octet of the highest IP address MUST be divisible by eight (8). The highest IP address is used as the primary address (Logical Active IP) for Session Server Trunks communications. This address floats between the two units of the Session Server Trunks and always represents the Active unit. Session Server Trunks must be able to communicate with all the provisioned remote SIP servers (e.g. remote Session Server Trunks, VRDN, etc.). Therefore, the IP address assignment may differ depending on the IP address domain of the remote servers. As of the SN07 release, the CS-LAN supports several VLANs which meet the Session Server Trunks IP address domain requirements, removing the need to dedicate a separate VLAN for Session Server Trunks communications. In summary:
- If the remote SIP servers that the local Session Server Trunks must communicate with are in the same Carrier IP address domain, the Session Server Trunks should be configured in the existing CallP-Carrier VLAN.
- If the remote SIP servers that the local Session Server Trunks must communicate with are NOT in the same Carrier IP address domain, the Session Server Trunks MUST be configured in the existing CallP-Public VLAN.
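As an illustration of this selection rule, the check can be sketched in Python. Treating a "Carrier IP address domain" as a single IPv4 prefix is an assumption made for the example (real domain definitions are deployment-specific), and the function name is hypothetical:

```python
import ipaddress

# Hypothetical helper: modelling a Carrier IP address domain as one
# IPv4 prefix is an assumption for illustration only.
def sst_vlan(carrier_domain: str, remote_server_ips: list) -> str:
    """Pick the VLAN for a local Session Server Trunks based on where
    the remote SIP servers sit relative to the Carrier domain."""
    domain = ipaddress.ip_network(carrier_domain)
    if all(ipaddress.ip_address(ip) in domain for ip in remote_server_ips):
        return "CallP-Carrier"  # every remote server is inside the domain
    return "CallP-Public"       # at least one remote server is outside

print(sst_vlan("10.20.0.0/16", ["10.20.5.9", "10.20.7.3"]))   # CallP-Carrier
print(sst_vlan("10.20.0.0/16", ["10.20.5.9", "172.16.0.8"]))  # CallP-Public
```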
Table 25 on page 150 summarizes the VLAN/IP address assignment for a local Session Server Trunks, based on the IP address of the remote servers.
Remote SIP servers                                                      VLAN assignment
In the same Carrier IP domain as the local Session Server Trunks        CallP-Carrier
In a different Carrier IP domain than the local Session Server Trunks   CallP-Public
In a scenario where the existing VLAN's IP address range does not meet the Session Server Trunks IP address requirements, either the subnet mask for the VLAN must be extended, or a new VLAN with sufficient IP addresses must be created (e.g. a subnet with at least a /28 network mask). Note: Please refer to SEB 08-00-001 for a detailed description of the required VLANs. If the mentioned VLANs are not already part of your local configuration, consult with your local network administrator. The following table illustrates a Session Server Trunks IP address assignment in an existing VLAN. As shown below, the first three (3) IP addresses are assigned to the ERS 8600s' physical VLAN addresses as well as the VRRP address. For the Session Server Trunks units, the highest address has a last octet of 48, which meets the divisible-by-eight (8) requirement.
Table 26 Sample Session Server Trunks IP address configuration

IP address   Description                  Assigned by
—            ERS 8600s (VRRP)             —
—            ERS 8600-0 (VLAN Physical)   —
—            ERS 8600-1 (VLAN Physical)   —
a.b.c.41     Unit 0 Physical (Link 0)     Session Server Trunks internally
a.b.c.42     Unit 0 Physical (Link 1)     Session Server Trunks internally
a.b.c.43     Unit 0 (Logical) (a)         User at commission time
a.b.c.44     Unit 1 Physical (Link 0)     Session Server Trunks internally
a.b.c.45     Unit 1 Physical (Link 1)     Session Server Trunks internally
a.b.c.46     Unit 1 (Logical) (b)         User at commission time
a.b.c.47     Inactive node                Session Server Trunks internally
a.b.c.48     Active node                  Session Server Trunks internally

Unit 0 is the lower-numbered unit; Unit 1 is the higher-numbered unit.
a. Unit 0 IP address = (Active node) - 5
b. Unit 1 IP address = (Active node) - 2
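The address arithmetic behind this assignment (a block of eight, highest last octet divisible by eight, logical addresses at Active minus 5 and Active minus 2) can be sketched as follows. The function and key names are illustrative, not part of any Nortel tooling:

```python
import ipaddress

def sst_address_plan(active_ip: str) -> dict:
    """Derive the Session Server Trunks per-unit addresses from the
    Active-node (highest) IP of the eight-address block.

    Rules from the engineering text: the block is eight consecutive
    addresses; the last octet of the highest address must be divisible
    by 8; Unit 0 logical = Active - 5, Unit 1 logical = Active - 2.
    """
    active = ipaddress.IPv4Address(active_ip)
    # 256 is a multiple of 8, so the whole-address modulus equals the
    # last-octet modulus.
    if int(active) % 8 != 0:
        raise ValueError("last octet of the highest address must be divisible by 8")
    base = active - 7  # lowest address of the block
    return {
        "unit0_link0": str(base),
        "unit0_link1": str(base + 1),
        "unit0_logical": str(active - 5),
        "unit1_link0": str(base + 3),
        "unit1_link1": str(base + 4),
        "unit1_logical": str(active - 2),
        "inactive": str(active - 1),
        "active": str(active),
    }

plan = sst_address_plan("10.1.1.48")  # block 10.1.1.41 .. 10.1.1.48
```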
As shown in Table 26 on page 151, these addresses are assigned to the two units of the Session Server Trunks as well as the two ERS 8600s. On the Session Server Trunks, each unit has one (1) logical and two (2) physical addresses. You are ONLY required to configure the logical unit addresses at commissioning time. The physical, Active, and Inactive addresses are internally assigned by the Session Server Trunks. Please refer to the Session Server Trunks commissioning IMs for further information. As mentioned earlier, in release SN07 only one Session Server Trunks is supported in a CS 2000 office. However, it is possible that this restriction will be lifted in future Carrier VoIP releases, so it is recommended to select a VLAN size which will not limit future growth.

9.5.4.2 PTP links

Session Server Trunks reserves two private IP addresses for its inter-unit communications, assigning 192.168.1.1 and 192.168.1.2 to the PTP links between the mate units. These addresses are NOT visible to the outside world. However, it is recommended NOT to use the PTP Links addresses as part of the Session Server Trunks LAN Links address range, to avoid any possible conflicts.

9.5.5 Session Server Trunks engineering

Co-existence of the Session Server Trunks and VRDN is supported only for the duration of the VRDN to Session Server Trunks migration. Please note that ONLY UDP calls can be migrated from VRDN, because SCTP is not supported by Session Server Trunks. Upon successful migration, the VRDN must be de-commissioned. Please refer to the appropriate documentation for VRDN to Session Server Trunks upgrade procedures. Session Server Trunks maintains call state for all active calls and provides real-time status of the current call volume. In addition, Session Server Trunks provides 5-, 20-, and 30-minute real-time unit CPU utilization. Both metrics are available via IEMS.
9.5.5.1 Capacity engineering

The capacity of Session Server Trunks is related to the number of available DPT GWCs and the DPT trunk groups in the CS 2000 DPT GWC pool. Each DPT GWC supports 4093 endpoints and 96K BHHCA of DPT calls. DPT GWCs can be added to the CS 2000 with NO impact on the Session Server Trunks. Once the DPT trunk members are brought into service, they are automatically selected for call admission. DPT calls are distributed amongst the DPT GWCs using a round-robin algorithm.
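To make the dimensioning concrete, the GWC count implied by the per-GWC limits quoted above (4093 endpoints, 96K BHHCA per DPT GWC) can be sketched as follows; the function name is illustrative:

```python
import math

# Per-DPT-GWC limits quoted in the engineering text.
GWC_ENDPOINTS = 4093
GWC_BHHCA = 96_000

def dpt_gwcs_required(endpoints: int, bhhca: int) -> int:
    """Number of DPT GWCs needed for the given endpoint count and
    busy-hour half-call attempts; the binding constraint wins."""
    by_endpoints = math.ceil(endpoints / GWC_ENDPOINTS)
    by_load = math.ceil(bhhca / GWC_BHHCA)
    return max(by_endpoints, by_load)

# e.g. 50,000 endpoints at 1.2M BHHCA:
print(dpt_gwcs_required(50_000, 1_200_000))  # 13
```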
Currently, for Session Server Trunks to support its POR numbers, a maximum (8) of thirteen (13) DPT GWCs is required, assuming the DPT GWCs are operating at their POR levels. For more details on DPT GWC engineering, refer to section 9.4 Gateway Controller engineering on page 137.

9.5.5.1.1 Channel capacity

Each Session Server Trunks server pair supports a maximum of 50,000 simultaneous call states.

9.5.5.2 Scalability

To achieve higher port and BHHCA capacity when UDP is used, multiple Session Server Trunks servers may be deployed in the same CS 2000 site. The maximum office DPT call capacity is 1.5M BHHCA and 100K DPT endpoints. Table 27 on page 153 summarizes the required number of Session Server Trunks servers to achieve the supported BHHCA as well as DPT endpoints.
When multiple servers are deployed, several options are available for configuring the signaling paths to the far-end SIP applications (9). Specifically, a given configuration should leverage the alternate IP address, or sub-channel, capability available on a SIPLINK when the far-end has multiple SIP applications present. For example, CS 2000 to CS 2000 DPT trunking may utilize multiple Session Server Trunks at each of the CS 2000 sites. This is illustrated in the following diagram.
(8) "Maximum" implies maximum simultaneous calls only.
(9) SIP applications include other Session Server Trunks servers, SIP Proxies, or MCS 5020.
Figure 54 Multiple Session Server Trunks between CS 2000 sites (DPT GWCs at CS2K_1 and CS2K_2; SIPLINK_A/TRKGRP_A, SIPLINK_B/TRKGRP_B, and SIPLINK_C/TRKGRP_C)
In this figure, CS2K_1 on the right side of the diagram is configured with one trunk group, and therefore a single SIPLINK for this trunk group, to CS2K_2 on the left side of the diagram. CS2K_2 is configured with three SST servers while CS2K_1 is provisioned with two SST servers. For a given SST server, the SIPLINK defined to the remote-end Session Server Trunks should have a sub-channel defined to each possible IP address at the near-end Session Server Trunks. Up to seven (7) alternate sub-channels may be defined per SIPLINK. While the CS 2000 Session Server platform is fully redundant, this configuration assures connectivity between the CS 2000 sites in the event of multiple failures on the same Session Server Trunks server pair. A second (or third) trunk group, and therefore SIPLINK, should be added if trunking requirements are beyond the per-Session Server Trunks port or busy-hour limits as stated in Table 27 on page 153. A new route selector in (sub)table RTEREF may need to be added to reach the new trunk group. Moreover, while not required, the additional trunk group provides added resiliency. Similar to the above configuration, the Session Server Trunks may also be configured to communicate with multiple SIP proxy devices. The following figure illustrates this similar approach.
Figure 55 Session Server Trunks connectivity to multiple SIP Proxies across the packet network (DPT GWC; SIPLINK_A/TRKGRP_A and SIPLINK_B/TRKGRP_B from the CS2K)
In this diagram, a trunk group, and therefore a SIPLINK, is configured between a Session Server Trunks and the SIP Proxies. Each Session Server Trunks should define alternate sub-channels to each remote SIP Proxy with which it must communicate. Up to seven (7) alternate sub-channels may be defined per SIPLINK. This configuration protects the SIP clients behind the SIP proxies in the event that one should fail: the Session Server Trunks continues to have signaling connectivity to the remaining SIP proxies. Note: The above diagram is a simplification of the network. Additional SIPLINKs may be defined from the same Session Server Trunks server but terminate on other SIP Proxies, if so required. That is, in addition to SIPLINK_A, the same Session Server Trunks server may define SIPLINK_C on TRKGRP_C to another set of SIP Proxies, as long as the port and busy-hour loads do not exceed the per-Session Server Trunks engineering limits shown in Table 27 on page 153. A second (or third) trunk group, and therefore SIPLINK, should be added if trunking requirements are beyond the per-Session Server Trunks port or busy-hour limits as stated in Table 27 on page 153. A new route selector in (sub)table RTEREF may need to be added to reach the new trunk group. Moreover, while not required, the additional trunk group provides added resiliency.
9.5.5.2.1 IP bandwidth engineering

The IP bandwidth calculation for SIP/SIP-T is related to the amount of messaging to and from the Session Server Trunks, which is directly related to the amount of BHHCA. From a SIP/SIP-T perspective, each half call is equivalent to a call leg. Table 28 on page 156 shows the IP bandwidth comparison for Session Server Trunks communication for SIP and SIP-T. These average message sizes and message counts are based on the basic SIP methods used by the SIP trunking application. In future releases of Session Server Trunks, where other SIP applications may be introduced, a wider range of methods may be applicable and may therefore result in different IP bandwidth usage.
Table 28 IP bandwidth for Session Server Trunks communications

Signaling                                        Messages per call leg   Avg. message size (bytes)   BHHCA     Bandwidth (Mbit/s)
Session Server Trunks to Session Server Trunks   8                       —                           900,000   —
Session Server Trunks to VRDN                    15                      570                         900,000   17.10
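Assuming 15 messages averaging 570 bytes per call leg at 900,000 BHHCA (the VRDN figures quoted above), the bandwidth arithmetic works out as follows; the function name is illustrative:

```python
def sip_signaling_mbps(bhhca: int, msgs_per_leg: int, avg_msg_bytes: int) -> float:
    """IP signaling bandwidth: message count x message size x call-leg
    rate, where each half call is one call leg."""
    legs_per_second = bhhca / 3600.0           # BHHCA spread over the hour
    bits_per_leg = msgs_per_leg * avg_msg_bytes * 8
    return bits_per_leg * legs_per_second / 1e6

# 900,000 BHHCA, 15 messages of ~570 bytes per leg:
print(sip_signaling_mbps(900_000, 15, 570))  # 17.1
```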
9.5.5.3 Signaling delay tolerance

Far-end servers can be located remotely from the near-end Session Server Trunks with a tolerance of up to 500 ms one-way latency when UDP is used as the transport layer. When TCP is used, the one-way latency tolerance is up to 200 ms without any impact on the call completion rate.

9.5.5.4 NAT consideration

Currently, Session Server Trunks does not support NAT.

9.5.5.5 Network Monitor

Session Server Trunks requires a Network Monitor IP address as a network health mechanism. The VRRP address of the VLAN where the Session Server Trunks is assigned must be used as the Network Monitor IP address.
9.5.5.6 NTP considerations

At commissioning time, Session Server Trunks requires at least one (1) NTP server and can be provisioned for up to three (3) servers. At least two (2) NTP servers must be configured on the Session Server Trunks to avoid a single point of failure. The SDM/CBM should be used as one of the NTP servers for the Session Server Trunks. SSPFS can be used as the second server. Please refer to the NTP section for further recommendations.

9.5.5.7 Session Server Trunks OA&M

The Session Server Trunks must be configured to use the Integrated Element Manager System (IEMS) between the customer operations LAN and the Call Server 2000 (CS 2000) CS-LAN. IEMS communicates with the Session Server Trunks OAMP platform via the Logical Active IP address of the Session Server Trunks.

9.5.5.8 Software upgrade

Each unit of the Session Server Trunks must be upgraded individually. The software upgrade must start on the inactive unit. Once the inactive unit is fully upgraded, a unit SWACT must be performed to transfer activity to the newly upgraded unit, and then the newly inactive unit can be upgraded. Please refer to the proper software upgrade documentation for detailed instructions.

9.5.5.9 Session Server Trunks security considerations

Session Server Trunks supports both UDP and TCP transport protocols. The standard SIP port 5060 is used for both SIP and SIP-T communications by the SIP GW application. Since Session Server Trunks is a CS-LAN-based device, all security implementations must follow the recommended guidelines as stated in section 10.0 General security on page 193. Please refer to the appropriate section for all CS-LAN-based elements.

9.5.5.10 CS 2000 considerations

This section contains information about the Session Server Trunks OA&M-related requirements on the CS 2000 datafill for proper Session Server Trunks-to-CS 2000 operations.
The following is a list of tables that must be properly datafilled in the CS 2000, in the listed order, for appropriate commissioning of the Session Server Trunks:
SERVRINV - Stores provisioned data for gateway controllers including IP addresses, exec information, etc. Gateway controllers are provisioned in SESM and then the system automatically enters the required datafill in SERVRINV.
SERVSINV - Contains the names of the server subtending nodes and their associated gateways. The subtending nodes are the Audio Controller and the Dynamic Packet Trunk (DPT).
SIPLINK - Contains the access link information that is used to direct calls to the appropriate SIP server. This information is also used by Session Server Trunks devices for mapping incoming calls.
TRKGRP - Contains customer-defined data associated with each trunk group that exists in the switch.
TRKSGRP - Lists the supplementary information for each subgroup that is assigned to one of the trunk groups listed in table TRKGRP.
TRKOPTS - Provisions special options on trunk groups. For the Session Server Trunks, that option is dynamic and the signaling type is SIPT.
DPTRKMEM - Used to provision ISDN User Part and SIP trunks dynamically for both ATM and IP networks.
9.6 Policy Controller
of the resources available to it. Once the topology is built, counting of available resources across Limited Bandwidth Links (LBLs) and connection admission decisions are carried out on the Policy Controller. At this point, GWCs communicate with the Policy Controller to determine whether a call can be set up. Each Service Provider's network is unique in its implementation; no two are alike. However, the figure below depicts a possible topology that could be modeled in the Policy Controller.
Figure 56 A possible network topology modeled in the Policy Controller
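The LBL bandwidth counting described above can be sketched with a toy model; the class and function names are illustrative, and the real Policy Controller model (zones, media portals, NAT handling) is far richer:

```python
# Minimal sketch of LBL-based connection admission under assumed names.
class LimitedBandwidthLink:
    def __init__(self, name: str, capacity_kbps: int):
        self.name = name
        self.capacity_kbps = capacity_kbps
        self.used_kbps = 0

    def can_admit(self, kbps: int) -> bool:
        return self.used_kbps + kbps <= self.capacity_kbps

def admit_call(path, kbps: int) -> bool:
    """Admit a call only if every LBL on its path has headroom,
    then count the bandwidth against each link."""
    if all(link.can_admit(kbps) for link in path):
        for link in path:
            link.used_kbps += kbps
        return True
    return False  # the CS 2000 would route a denied call to treatment

access = LimitedBandwidthLink("zone-to-core", 256)
print(admit_call([access], 100))  # True  - first call fits
print(admit_call([access], 100))  # True  - second call fits
print(admit_call([access], 100))  # False - third exceeds the 256 kbps LBL
```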
Throughout this document, Policy Controller and PC may be used interchangeably.

9.6.1 Configuration and Connectivity

The Policy Controller consists of a mated pair of Services Application Module - eXtreme Thin Server (SAM-XTS) units (Active and Inactive hot standby). Each unit is a fully functional server connected via 10/100 or 1000 Base-T interfaces to the CS-LAN, providing processor capacity, local disk storage, and high-bandwidth network connectivity to the Policy Controller. All Policy Controller provisioning is done via a web interface (i.e., IEMS) communicating with the resident web server running as part of the Policy Controller base layer functionality. The Policy Controller is a CS-LAN-based device and must be connected to redundant L2/L3 devices such as the ERS 8600s and meet the CS-LAN equipment redundancy requirements. Please refer to SEB 08-00-001 for detailed physical network connectivity. Note: Although the Policy Controller is capable of operating at 1000 Base-T, from an engineering point of view the use of 100 Base-T is recommended; it is also the more cost-effective alternative. The Policy Controller must be configured to use the Integrated Element Manager System (IEMS) between the customer operations LAN and the CS 2000 CS-LAN. IEMS communicates with the Policy Controller OAMP platform via the Logical Active IP address of the Policy Controller.

9.6.1.1 Protection mechanism

The Policy Controller, as a carrier-grade platform, provides both link and node protection. For node protection, the two units of the Policy Controller operate in an Active/Inactive mode. On a unit activity switch, if the switch-over is due to a warm SWACT, stable calls and their corresponding enforced policies that were active prior to the SWACT are preserved; however, any calls in progress may be lost. If the activity switch is due to a cold SWACT, all calls and their corresponding enforced policies are impacted. For link protection, the LAN links on each Policy Controller unit inter-connect the units to dual L2/L3 devices in the CS-LAN (i.e., ERS 8600s, also in an Active/Inactive role). Any link or port failure is detected via loss of signal and results in sub-second switchover between the links. NO calls are impacted as a result of a link failure. In addition, the two units are connected via two Gigabit Ethernet PTP links for inter-unit communications. These links provide a means to synchronize the Active and Inactive units.
9.6.1.2 3rd Party CS-LAN

3rd party CS-LAN equipment must meet the following requirements in order to guarantee the expected failure detection between the Policy Controller and the LAN connectivity. The 3rd party CS-LAN must:
- Consist of dual L2/L3 devices,
- Support 100 Base-T or 1000 Base-T connectivity,
- Be capable of sharing/extending the subnet where the Policy Controller is subtending,
- Support VRRP.
9.6.1.3 Geographic Survivability

Please refer to the Geographic Survivability CS-LAN chapter in SEB 08-00-001.

9.6.2 IP addressing

IP address assignment on the Policy Controller is ONLY applicable to the links that interconnect the Policy Controller units to the CS-LAN (noted as LAN Links). However, there are additional reserved IP addresses used by the Policy Controller solely for inter-unit connectivity (noted as PTP Links). This section describes the two link types and the IP address assignment options.

9.6.2.1 LAN Links

The Policy Controller requires a block of eight (8) consecutive IP addresses where the last octet of the highest IP address MUST be divisible by 8. The highest IP address is used as the primary address (Logical Active IP) for Policy Controller communications. This address floats between the two units of the Policy Controller and always represents the Active unit. The CS-LAN supports several VLANs which meet the Policy Controller's IP address domain requirements, removing the need to dedicate a separate VLAN for Policy Controller communications. In the SN08 release, since the Policy Controller only performs VCAC Bandwidth Policy enforcement and does not communicate with external Policy Enforcement Points (PEPs), the Policy Controller can be configured within the CallP-Private subnet. However, since the Policy Controller will communicate with external PEPs in future releases, the Policy Controller's IP address assignment may differ depending on the IP address domain of the PEPs. Therefore, it is recommended to assign the Policy Controller to a VLAN that provides for future needs and eliminates readdressing later. In summary:
- If the PEPs that the local Policy Controller must communicate with are in the same Carrier IP address domain, the Policy Controller should be configured in the existing CallP-Carrier VLAN.
- If the PEPs that the local Policy Controller must communicate with are NOT in the same Carrier IP address domain, the Policy Controller MUST be configured in the existing CallP-Public VLAN.
Table 25 on page 150 summarizes the VLAN/IP address assignment for a local Policy Controller, based on the IP address of the PEPs.
PEPs                                                          VLAN assignment
In the same Carrier IP domain as the Policy Controller        CallP-Carrier
In a different Carrier IP domain than the Policy Controller   CallP-Public
In a scenario where the existing VLAN's IP address range does not meet the Policy Controller's IP address requirement, either the subnet mask for the VLAN must be extended, or a new VLAN with sufficient IP addresses must be created (e.g. a subnet with at least a /28 network mask). Note: Please refer to SEB 08-00-001 for a detailed description of the required VLANs. If the mentioned VLANs are not already part of your local configuration, consult with your local network administrator. The following table illustrates a Policy Controller IP address assignment in an existing VLAN. As shown below, the first three IP addresses are assigned to the ERS 8600s' physical VLAN addresses as well as the VRRP address. For the Policy Controller units, the highest address has a last octet of 48, which meets the divisible-by-8 requirement.
Table 30 Sample Policy Controller IP address configuration

IP address   Description                  Assigned by
x.y.z.41     Unit 0 Physical (Link 0)     Policy Controller internally
x.y.z.42     Unit 0 Physical (Link 1)     Policy Controller internally
x.y.z.43     Unit 0 (Logical) (a)         User at commission time
x.y.z.44     Unit 1 Physical (Link 0)     Policy Controller internally
x.y.z.45     Unit 1 Physical (Link 1)     Policy Controller internally
x.y.z.46     Unit 1 (Logical) (b)         User at commission time
x.y.z.47     Inactive node                Policy Controller internally
x.y.z.48     Active node                  Policy Controller internally

Unit 0 is the lower-numbered unit; Unit 1 is the higher-numbered unit.
a. Unit 0 IP address = (Active node) - 5
b. Unit 1 IP address = (Active node) - 2
As shown in Table 30 on page 162, these addresses are assigned to the two units of the Policy Controller as well as the two ERS 8600s. On the Policy Controller, each unit has one (1) logical and two (2) physical addresses. You are ONLY required to configure the logical unit addresses at commissioning time. The physical, Active, and Inactive addresses are internally assigned by the Policy Controller. Please refer to the proper Policy Controller commissioning IMs for further information. Currently, only one Policy Controller is supported in a CS 2000 office. However, it is possible that this restriction may be lifted in future Carrier VoIP releases, so it is recommended to select a VLAN size which will not limit future growth.

9.6.2.2 PTP Links

The Policy Controller reserves two private IP addresses for its inter-unit communications, assigning 192.168.1.1 and 192.168.1.2 to the PTP links between the mate units. These addresses are NOT visible to the outside world. However, it is recommended NOT to use the PTP Link addresses as part of the Policy Controller LAN Links address range, to avoid any possible conflicts.

9.6.3 Security

Since the Policy Controller is a CS-LAN-based device, all security implementations must follow the recommended guidelines as stated in the security section of the Engineering Rules document. However, in the SN08 release, since the Policy Controller does not communicate with any external devices (i.e., PEPs), the default security filters should cover Policy Controller security concerns. Please refer to the appropriate section for all CS-LAN-based elements. The Policy Controller supports TCP transport for COPS communication over the standard COPS port 3288. For all other communications (i.e., OAM and OSS, as IEMS is used as the secure proxy for Policy Controller access), all related traffic is contained within the CS-LAN and should not be a security concern.
9.6.4 Network Time Protocol

At commissioning time, the Policy Controller requires at least one NTP server and can be provisioned for up to three servers. Two NTP servers must be configured on the Policy Controller to avoid a single point of failure. The SDM/CBM should be used as one of the NTP servers. SSPFS can be used as the second server. Please refer to the NTP section for further recommendations.

9.6.5 Network Monitor

The Policy Controller requires a Network Monitor IP address as a network health mechanism. The VRRP address of the VLAN where the Policy Controller is assigned must be used as the Network Monitor IP address.

9.6.6 Billing

Policy Controller billing records of resource usage are generated in Internet Protocol Detail Record (IPDR) format and stored on the internal hard disk. The billing records are based on half-call sessions. The IPDR-formatted records can be transported to the OSS via QCA using the Restricted FTP Shell command.

9.6.7 Engineering the Policy Controller

As outlined above, the Policy Controller must be provisioned, in parallel with the CS 2000 Manager (CS2M), with a logical view of the network. The logical model does not have to match the exact physical topology of the network. However, the following rules apply:
- The highest level in the network must be the Core network, with unrestricted bandwidth,
- A top-level zone must be connected to the Core network,
- Zones can not have multiple links (paths) to higher-level zones,
- In a scenario where there is a zone with NAT capability in at least one GW's path, a media portal will be required. The media portal will be in the Core network, and network resource calculations will be based on the full path to the media portal and back, even if a shorter path is available.

9.6.7.1 SOC Option

SOC order code CS2Q0002 must be in the ON state for the Policy Controller in order for Network VCAC to operate.

9.6.7.2 Treatment

For the Policy Controller to report VCAC failures on the CS 2000 (i.e., due to lack of bandwidth on an LBL), an NBLN treatment must be datafilled in the CS 2000. The appropriate tables are: TMTCNTL, TMTMAP, and FLXCMAP. Note: The NBLN treatment must be set to a tone, e.g. T120. Setting the treatment to an announcement can result in a treatment loop, as there is insufficient bandwidth to play the announcement. Please consult the appropriate customer NTPs for further instructions on setting the proper treatment.

9.6.7.3 Policy Controller and Inter-domain SIP-T trunks setting

Because the Inter-domain option has multiple uses, including VCAC, this option should be set to Y on all SIP-T trunks connecting the office to MCS 5200 or 3rd party call servers, even if the CS 2000 office does not include portals for inter-carrier calls. Calls utilizing these inter-domain SIP-T trunks will work regardless of portal availability in the office. However, whenever an office is not set up to use portals for inter-carrier calls, IP routing must be implemented to allow routing between the site's VoIP gateways and other sites. Please refer to the Internet Transparency section of SEB 08-00-010 for full details on the SIP-T trunk setting and its impact on possible call blocking by VCAC.

9.6.7.4 Capacity and Scalability

The Policy Controller's capacity is measured by the number of sessions that it processes for network resource usage authorization. Each session is equivalent to a half call; however, the concept of BHHCA is applied in terms of Busy Hour Session Attempts (BHSA). In the SN08 release, the Policy Controller supports up to 650K BHSA. Only one (1) Policy Controller system is supported per CS 2000. MCS is not supported in SN08. The Policy Controller supports call attempts made from up to 100,000 endpoints.

9.6.8 Surveillance

The Policy Controller provides functionality for application maintenance such as logs, alarms, and OMs. The OM, log, and alarm information is recorded on the internal hard disk. These subsystems provide an interface to export the data off-board to IEMS. The web server can also retrieve active alarms and logs so that they can be displayed via the IEMS GUI.
9.6.8.1 Alarms and Logs

Alarm notifications are sent to the IEMS via SNMP traps utilizing the Nortel Standard Alarm MIB. In addition, as stated above, alarms and logs are distributed to the web server and can be viewed from the web GUI.
Document Number: SEB-08-00-010 Version: 9.0.3 Carrier Voice over IP Solution Engineering Rules
NORTEL CONFIDENTIAL
Policy Controller Alarms and Logs cover the following areas:
- Application server signaling communication
- Database connection loss
- Reservation request failures
- Endpoint number exceeds SOC limitation
- Topology manager database connection loss
- Policy Controller startup
- Policy Controller shutdown
- Audit results
- CAC request denial
- Topology changes

9.6.8.2 OMs
The OMSHOW utility can be used to view the OMs on the Policy Controller. In addition, the OMs can be retrieved via FTP for off-board processing. The OM subsystem of the Policy Controller stores 5-, 15-, and 30-minute OM data in CSV files. The 5-minute files are kept for 30 minutes; the 15- and 30-minute files are kept for 24 hours. Policy Controller OMs cover the following areas:
- The Request and Response Events on the Application Server Control Interface
- Session Setup and Terminate Events
- The CAC Attempt, Success, and Failure Events related to Network Segment
- The CAC Attempt, Success, and Failure Events related to Endpoint
- Topology Update Success and Failure Events on the Topology Server Interface
- Retry time on the Topology Server interface with SESM

The 15- or 30-minute CSV files are stored at:
/opt/apps/ngsspm/stdhist
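As an illustration of the retention policy above (5-minute files kept for 30 minutes; 15- and 30-minute files kept for 24 hours), the following sketch scans the history directory for files past their retention period. The file-naming test is a hypothetical convention, not the Policy Controller's actual one.

```python
import os
import time

# Retention periods from the text: 5-minute OM files are kept for 30 minutes;
# 15- and 30-minute files are kept for 24 hours.
RETENTION_SECS = {"5min": 30 * 60, "15min": 24 * 3600, "30min": 24 * 3600}

def interval_of(name):
    """Guess the OM interval from the file name (hypothetical convention).

    Longest keys are checked first so '15min' is not mistaken for '5min'.
    """
    for key in sorted(RETENTION_SECS, key=len, reverse=True):
        if key in name.lower():
            return key
    return None

def expired_files(directory, now=None):
    """Return the OM CSV files in `directory` older than their retention."""
    now = time.time() if now is None else now
    expired = []
    for name in os.listdir(directory):
        key = interval_of(name)
        if key is None or not name.lower().endswith(".csv"):
            continue
        age = now - os.path.getmtime(os.path.join(directory, name))
        if age > RETENTION_SECS[key]:
            expired.append(name)
    return expired
```

A cleanup job could unlink the returned files on each pass.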
Figure 57
The MS2010 is required to supply media anchoring functionality. The Packet Media Anchor solution uses the bearer channel tandeming (BCT) capability of the MS2010 to provide media stream anchoring. The media anchor is directed by the CS 2000, which is responsible for managing call topology, resource allocation/deallocation, and resource usage. Media anchoring through the MS2010 does not require conversion from packet to TDM and back to packet again, so the impact on bearer path latency is minimal.

9.7.1 Engineering the Packet Media Gateway
Refer to section 9.3.10 MS2010 and Audio GWC engineering on page 135 for additional information.
sideration), the traffic generated will still be well within the 96 Mpps maximum forwarding rate of the CS-LAN ERS 8600s.

Note: As more voice gateways are added to the CS-LAN and the expected bearer traffic leaving the CS-LAN surpasses 3-4 Gbps, it is recommended for cost-effectiveness to upgrade the ERS 8600 WAN uplink interfaces from 1-GigE ports to 10-GigE ports.

9.8.2 Hardware engineering
The 8010co chassis is a ten (10) slot chassis, which can be configured with various combinations of interface modules depending on the customer's requirements. With two (2) slots reserved solely for switching fabric modules, the total number of interface modules supported on the ERS 8600 is eight (8). The minimum number of interface modules required for a particular solution depends strictly on how many ports are needed for the VRRP/MLT connectivity, to uplink to the Core network, and to interconnect the subtending devices (e.g. GWC, HIOP, UAS, CMTS).

Note: Due to the extensive recovery time of the ERS 8600 with dual CPUs installed, it is a requirement that ONLY one (1) CPU be installed per redundant ERS 8600, unless High Availability mode is enabled on the ERS 8600. See the section on configuring HA for the ERS 8600 in SEB 08-00-001 for additional information.

Note that additional ports/modules may need to be included when scalability is considered. If the network needs to be scaled, then:
- Read through all of this chapter to determine the additional GWCs, UASs, etc. required, and resize the network accordingly
- Reapply the following engineering rules
- Order additional parts if necessary

9.8.2.1 ERS 8600 Series E-Modules
The ERS 8600 Series E-modules are direct replacements for the original ERS 8600 I/O modules and are the baseline for newly commissioned sites. The release notes are available on the Nortel Customer Service Documentation Web page (http://www.nortelnetworks.com/documentation).
The Series E-modules contain changes to all modules that improve manufacturing yields, and they are completely compatible with non-E-modules in a mixed environment. Furthermore, minor changes have been made to ASICs for the RAPTillion Address Resolution Unit (RAPTARU4) and the Octal Port Interface Device (OCTAPID3B) to support egress port mirroring.

The following step-by-step rules were defined to engineer the redundant CS-LAN ERS 8600s. Two (2) 8632TXE routing switch modules are required per chassis as a baseline; each includes 32 10/100Base-T ports and two (2) GBIC ports.
Note: The MLT group should only be configured using two (2) GigE interfaces per ERS 8600 chassis. For more details, refer to the MLT guidelines in the ERS 8600 section of the CS-LAN Common Components chapter.

With the remaining six (6) interface slots, determine the number of 10/100Base-T modules required to interconnect with the CallP and OAM&P devices not already linked to the 8632TXE modules. The number of 48-port 8648TXE modules required per redundant chassis is defined by:
1. Determine the number of devices remaining in the CallP and OAM VLANs with 100Base-T Ethernet interfaces:
   a. GWC
   b. XA-Core
   c. MSS 15000
   d. SAM21 SC
   e. UAS/MS 2010
   f. SDM: one for CallP and one for OAM&P VLANs. Or CBM: one for OAM&P VLAN
   g. OAM&P: CS 2000 Management Tools Server, CMTS EM, NMS, etc.
2. From this total, subtract the number of 10/100Base-T ports not used on the 8632TXE modules.
3. Divide this difference by 48 and round up.
4. Order/install this many 8648TXE modules per chassis as a baseline.

With the remaining interface slots, determine the number of 8-port 8608GBE GigE modules required per redundant chassis for VRRP/MLT and WAN uplinks once the GBICs on the 8632TXE modules have already been populated. The number of 8608GBE GigE modules required per redundant chassis is defined by:
1. If the WAN uplink is GigE, then order/install an additional GBIC Media Dependent Adapter (MDA) per chassis as a baseline.
2. Calculate the total number of GBIC MDAs used.
3. Divide this total by eight (8) and round up.
4. Order/install this many 8-port 8608GBE GigE baseboard modules per chassis as a baseline.

Note: To avoid a single point of failure, the VRRP/MLT links need to be evenly separated across multiple modules.
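The module-count arithmetic in the two procedures above can be expressed as a short sketch; the site figures in the example are hypothetical.

```python
import math

def count_8648txe(remaining_100bt_devices, unused_8632_tx_ports):
    """48-port 8648TXE modules per redundant chassis (steps 1-4 above)."""
    # Subtract the 10/100Base-T ports still free on the two 8632TXE modules,
    # then divide by 48 ports per module and round up.
    needed = remaining_100bt_devices - unused_8632_tx_ports
    return max(0, math.ceil(needed / 48))

def count_8608gbe(gbic_mdas_used, wan_uplink_is_gige):
    """8-port 8608GBE GigE modules per redundant chassis (steps 1-4 above)."""
    if wan_uplink_is_gige:
        gbic_mdas_used += 1  # extra GBIC MDA per chassis for the GigE WAN uplink
    return math.ceil(gbic_mdas_used / 8)

# Hypothetical site: 70 remaining 100Base-T devices, 10 unused 8632TXE ports,
# six GBIC MDAs already in use, GigE WAN uplink.
txe_modules = count_8648txe(70, 10)   # ceil(60 / 48) = 2 modules
gbe_modules = count_8608gbe(6, True)  # ceil(7 / 8) = 1 module
```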
Engineering an IWSPM
added into this table, up to the maximum of 32. As bearer flows end, their entries are removed, thereby allowing new bearer flows to be added. Note that additional bearer flows are permitted to land on any one of the 32 entries already in the table without blocking. Only when a new bearer flow is added to a 33rd host will the bearer flow block. Two (2) additional entries for the local loopback and default gateway MAC address are not included in these 32; these are maintained elsewhere and therefore do not impact the maximum number of active bearer flows.

Note: These 32 table entries are applicable only to the bearer plane, not the control plane.

The size of this routing table limits to 32 the number of hosts (media gateways) for which the IW-SPM can have active bearer flows. If the number of other gateway hosts is large, or is expected to grow, it is recommended to limit the number of hosts in the same bearer subnet as the IW-SPM to 32 or fewer.

9.9.4 Monitoring IWSPMs
A new Operational Measurement group, IWBM, is available to monitor bridge usage at the office level. Two registers should be considered:
- IWGBATT/IWGBATT2: Contains a count of the number of bridge attempts for both DPT and Gateway trunks to/from legacy peripherals such as SPMs and DTCs.
- IWGBFAIL: Contains a count of the number of bridge attempt failures for both DPT and Gateway trunks to/from legacy peripherals such as SPMs and DTCs.

In addition to the registers mentioned above, a number of other registers are defined to provide a threshold view of the number of bridges in use:
- IWONSET1: Indicates that the number of IW bridges in use exceeds 70% of the office total.
- IWONSET2: Indicates that the number of IW bridges in use exceeds 90% of the office total.
13. An active bearer flow is a flow of cells from the on-board DSP for a call in progress.
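The 32-entry host-table behavior described above can be modeled with the following sketch (a simplified illustration, not the actual IW-SPM implementation):

```python
MAX_BEARER_HOSTS = 32  # bearer-plane table entries; the loopback and default
                       # gateway MAC entries are held elsewhere

class BearerHostTable:
    """Minimal model of the IW-SPM bearer host table.

    Flows to a host already in the table are always admitted; a flow to a
    new host blocks once 32 distinct hosts have active flows.
    """

    def __init__(self):
        self.flows_per_host = {}  # host -> count of active bearer flows

    def add_flow(self, host):
        if (host not in self.flows_per_host
                and len(self.flows_per_host) >= MAX_BEARER_HOSTS):
            return False  # a 33rd distinct host: the bearer flow blocks
        self.flows_per_host[host] = self.flows_per_host.get(host, 0) + 1
        return True

    def end_flow(self, host):
        # When the last flow to a host ends, its entry is removed,
        # freeing room for a new host.
        count = self.flows_per_host.get(host, 0) - 1
        if count <= 0:
            self.flows_per_host.pop(host, None)
        else:
            self.flows_per_host[host] = count
```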
9.9.4.1 IWSPM average bridge attempt rate
Using the register definitions discussed in the previous section, it is possible to determine the average number of bridge attempts per second for the IWSPM. Recall that the IWSPM is rated at 12 bridges per second. The following represents a calculation over a busy-hour data collection period.
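Since the IWBM registers accumulate counts over the collection period, the average attempt rate is the busy-hour total divided by 3600 seconds, which can then be compared against the 12 bridges-per-second rating. A minimal sketch with hypothetical register values:

```python
IWSPM_RATED_BRIDGES_PER_SEC = 12

def avg_bridge_attempt_rate(iwgbatt, iwgbatt2=0, period_secs=3600):
    """Average bridge attempts per second over a collection period.

    IWGBATT holds the attempt count; IWGBATT2 is its extension register.
    """
    return (iwgbatt + iwgbatt2) / period_secs

# Hypothetical busy-hour data: 36,000 bridge attempts in one hour.
rate = avg_bridge_attempt_rate(36000)         # 10.0 attempts per second
within_rating = rate <= IWSPM_RATED_BRIDGES_PER_SEC  # within the 12/sec rating
```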
Operational Measurement (OM) data can be used to develop UAS models. In OM group ANN, register ANNATT counts total announcement attempts, and register ANNTRU provides traffic usage. Suppose these registers give the following busy hour values:
ANNATT= 5700 ANNTRU= 684 (based on 100 sec scans)
In 3-port conferencing OM group CF3P, analogous attempt and usage registers with example data are:
CNFSZRS= 570 CNFTRU= 19380 (based on 10 sec scans)
In 6-port conferencing OM group CF6P, attempt and usage registers with example data are:
CF6SZRS= 285 CF6TRU= 19950 (based on 10 sec scans)
When total call attempts in the switch are known (for example, by summing OFZ registers NIN with extension NIN2, and NORIG with extension NORIG2), the percentage of call attempts with announcements can be found. Suppose there are 82,000 call attempts and 5,700 announcement attempts as above; then about 7% of call attempts require an announcement. For North American end offices, the following model was developed using OMs from several offices. An Asian site showed similar 3-Port Conference attempt rates and similar AHTs in the 300 sec range. However, announcements at the Asian site accounted for less than 3% of attempts, with an AHT of 10 sec.
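These model quantities follow directly from the register definitions: usage registers are scan counts, so traffic in erlangs is count x scan interval / 3600, and AHT is usage x scan interval / attempts. A short check against the example values above:

```python
def erlangs(usage_count, scan_secs):
    """Traffic in erlangs from a usage register scanned every scan_secs."""
    return usage_count * scan_secs / 3600.0

def aht_secs(usage_count, scan_secs, attempts):
    """Average holding time in seconds from usage and attempt registers."""
    return usage_count * scan_secs / attempts

ann_traffic = erlangs(684, 100)       # 19.0 erlangs of announcement traffic
ann_aht = aht_secs(684, 100, 5700)    # 12.0 sec announcement AHT
cf3p_aht = aht_secs(19380, 10, 570)   # 340.0 sec 3-port conference AHT
cf6p_aht = aht_secs(19950, 10, 285)   # 700.0 sec 6-port conference AHT
ann_pct = 5700 / 82000                # about 7% of call attempts
```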
Table 31: North American end office UAS traffic model
Service           | % of Attempts | AHT
Announcements     | 7%            | 12 sec
3-Port Conference | 1%            | 340 sec
To illustrate UAS engineering, we apply the model percentages in Table 31 on page 174 to call volumes of 300,000 and 600,000 BHCAs. It is expected that Lawful Intercept (LI) will generate some load on the UAS, but the amount of LI traffic is not expected to require more than two CG6000 cards with redundancy. Table 32 on page 174 provides UAS provisioning results for the number of UASs, cards, and ports, as well as sensitivity results. The base case is as follows:
- 600K BHCA
- 7% of CAs require announcements with 12 sec AHT
- 1% of CAs are 3-Port Conference calls with 340 sec AHT
- G.711 codec with 10 ms packet size
- LI required
- N+1 redundancy
- GWC restrictions in effect
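Applying the standard offered-load formula E = BHCA x fraction x AHT / 3600 to the base case gives the erlang loads that drive the Table 32 port counts (the printed totals additionally reflect ports per conference leg, blocking objectives, and N+1 sparing, which this sketch omits):

```python
def offered_erlangs(bhca, fraction, aht_secs):
    """Offered load in erlangs for one traffic class."""
    return bhca * fraction * aht_secs / 3600.0

BHCA = 600_000
ann_load = offered_erlangs(BHCA, 0.07, 12)    # 140 erlangs of announcement traffic
conf_load = offered_erlangs(BHCA, 0.01, 340)  # about 567 erlangs of 3-port conference
```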
Table 32: UAS provisioning results and sensitivity results
Case           | Provisioning Results (Including Spares)
Base           | 8 UAS, CG6000: 34 AC & 2 LI, 4 GWC; Ports: 309 Ann, 2435 Conf, 176 LI
20 ms Pkt Size | 7 UAS, CG6000: 29 AC & 2 LI, 4 GWC; Ports: 309 Ann, 2598 Conf, 176 LI
G.729 Codec    | Same as Base Case
275K BHCA      | 6 UAS, CG6000: 16 AC & 2 LI, 1 GWC; Ports: 154 Ann, 1169 Conf, 176 LI
3P AHT 200 sec | 7 UAS, CG6000: 24 AC & 2 LI, 4 GWC; Ports: 309 Ann, 1528 Conf, 176 LI
No LI          | 8 UAS, CG6000: 33 AC & 0 LI, 4 GWC; Ports: 269 Ann, 2435 Conf, 0 LI
The first row in the table above is for the base case identified above. The base case requires eight (8) UASs containing a total of 34 CG6000 cards for Announcements and Conferencing (AC), and two (2) for
Lawful Intercept. In addition, four (4) GWC pairs are needed. The ports required break down into 309 for announcements, 2435 for conferencing, and 176 for Lawful Intercept. Using this information, the required number of ports can be datafilled. The next row shows provisioning requirements for the case when the IP packet size is increased from 10 ms in the Base Case to 20 ms. The table also shows that results are unchanged if the codec is G.729 rather than G.711. The impacts of a reduced call volume, a lower holding time for 3-Port Conference calls, and removing Lawful Intercept are also shown.

The UAS also supports T1 trunk testing via a Sage trunk testing device. Additional UAS provisioning for T1 trunk testing is as follows:
- One (1) additional CG6000 card (i.e. in addition to the Provisioning Results provided above)
- The additional card can be accommodated in any UAS which is not already filled with cards (i.e. six (6) CG6000 cards)
- If all UASs provisioned are completely filled and have no room for a trunk testing card, then one (1) additional UAS must be provisioned to house the CG6000 card needed for trunk testing.
Explanatory Notes:
1. The trunk testing card is a dedicated resource, unavailable for any other activities on the UAS. It does NOT require sparing.
2. Trunk testing is carried out in a period of low traffic and does not affect the busy hour erlang load and call rates given above.
3. Given Note 2, there is no need for additional GWC pairs based on erlang and call rate loads.
Up to 20 RTP Media Portals can be controlled by each GWC. An RTP Media Portal can be controlled by up to 20 GWCs.
- Each SAM16 chassis supports two (2) non-redundant RTP Media Portals, with up to six (6) media blades per portal.
- Each Media Blade supports 400 simultaneous voice calls (G.711/10 ms, or G.711/20 ms with silence suppression), with up to six (6) media blades per portal. This equates to 2400 simultaneous voice calls per media portal. See section 9.12.1.1 Capacity with G.711 10 ms on page 176 for engineering rules when using the G.711/10 ms codec.
- The maximum number of Media Portals per CS 2000 is 60.
- The maximum number of components managed by the MCS 5200 Management Server (Portal EM) is 64 (portals, as well as the MCS 5200 Database Server and the MCS 5200 SIP Server, are managed components).
Note: CS 2000 and MCS 5200 require two separate RTP Portal pools.

Note: Starting with SN08, CS 2000 requires a dedicated CS 2000 Portal EM. Sharing of the MCS 5200 Management Server from a co-located MCS 5200 is not supported.

Beginning in SN07, only GWCs controlling GWs that are behind a NAT or outside the Carrier VoIP VPN need to be provisioned with RTP Media Portals.

9.12.1.1 Capacity with G.711 10 ms
The following rules must be observed when engineering the RTP Media Portal for use with the G.711 codec at a 10 ms sample rate:
- The Media Portal supports 2400 simultaneous calls (400 calls per Media Blade) at G.711/10 ms or G.711/20 ms. When using the 10 ms sample size, this assumes that at most 60% of the calls will use a public-public portal connection and at least 40% of the calls will use a public-private portal connection. This is necessary in order to ensure that the utilization on the public interface of each media blade does not exceed 80% of the bandwidth.
- In an end-office configuration, it is extremely unlikely that intra-calling within the office, requiring public-public portal connections, would exceed 25%, well under this 60% figure.
- In the unlikely event that more than 60% of the calls require public-public connections, Media Portals should be engineered for 1800 simultaneous call capacity (300 calls per Media Blade).
9.12.2 Minimum Media Portal configuration for CICM mandatory use
If RTP Media Portals are provisioned for the purpose of media NAT traversal, Lawful Intercept, or inter-carrier traffic, then no additional Portal engineering is required for the CICM's mandatory use of ports. However, if RTP Media Portals are being introduced to the office solely to meet the CICM's mandatory use of the RTP Media Portal, a minimum RTP Media Portal configuration should be provisioned for CICM media sink support purposes. The following RTP Media Portal configuration is required to meet the CICM's mandatory Media Portal use requirements:
- Two half shelves (i.e. RTP Media Portals) on one portal frame
- One SAM16 portal chassis
- Two host CPU cards
- Two media blades

This configuration provides two RTP Media Portals on a single SAM16 chassis, with one Media Blade per portal.

9.12.3 Determining the number of Media Portals needed in an office
Each GWC can have some or all of its gateways behind a NAT or outside the Carrier VoIP VPN. However, a portal is only inserted once in a call flow. In addition, only a certain percentage of the simultaneous calls on a GWC would require portal insertion. GWCs that may require portals are:
- Line GWCs: Packet Access-Integrated Access GWCs, CICM GWCs
- Trunk GWCs: SIP-T GWCs for inter-carrier, H.323 GWCs

Once the number of GWCs that require portals is determined, compute the required number of portal ports using the following information:
- Each Line GWC supports a maximum of 4000 portal ports.
- Each SIP-T GWC supports a maximum of 4093 portal ports.
- Each H.323 GWC supports a maximum of 1032 portal ports (North America) or 1024 portal ports (International).
- Percentage of calls that will require insertion of a media portal (% Portal Calls).

The following list of terms will be used in the algorithms below:
L = number of Line GWCs requiring portals
T = number of SIP-T GWCs requiring portals
H = number of H.323 GWCs requiring portals
P = percentage of calls requiring Media Portal insertion
Using the above information, the following algorithm yields the number of Media Portals required for the office.
1. Determine the number of Media Portal ports required for each set of GWCs, rounding up to the next integer:
   Number of Portal Ports for Line GWCs = L * 4000 * P
   Number of Portal Ports for Trunk GWCs = T * 4093 * P
   Number of Portal Ports for H.323 GWCs = H * 1032 * P
2. Next, determine the total number of portal ports required for the office:
   Number of Portal Ports = Sum of Portal Ports for all GWCs
3. Next, determine the number of Portal Blades required, rounding up to the next integer:
   Number of Portal Blades = Number of Portal Ports / 400
4. Next, determine the number of Media Portals required, rounding up to the next integer:
   Number of Active Media Portals = Number of Portal Blades / 6
5. Next, allow for spare portals, using a 1+1 sparing configuration:
   Number of Media Portals = Number of Active Portals * 2
Note: The total number of Media Portals required in an office includes spare portals. It is important to note that spare Media Portals are active spares that actually process calls.
Example:
To illustrate Media Portal engineering for an office with three Line GWCs, two Trunk GWCs, and five H.323 GWCs needing portals, assuming 35% of calls require Media Portal insertion and using 1+1 sparing, here are the results:
Number of Portal Ports for Line GWCs = 3 * 4000 * 35% = 4200
Number of Portal Ports for Trunk GWCs = 2 * 4093 * 35% = 2866
Number of Portal Ports for H.323 GWCs = 5 * 1032 * 35% = 1806
Number of Portal Ports = 4200 + 2866 + 1806 = 8872
Number of Portal Blades = 8872 / 400 = 23
Number of Active Portals = 23 / 6 = 4
Number of Portals = 4 * 2 = 8
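The worked example can be reproduced with a short script implementing steps 1 through 5 of the algorithm (per-GWC port maxima as printed above):

```python
import math

PORTS_PER_BLADE = 400
BLADES_PER_PORTAL = 6

def media_portals_required(line_gwcs, sipt_gwcs, h323_gwcs, portal_call_pct):
    """Media Portals for an office, per steps 1-5, with 1+1 sparing."""
    # Step 1-2: portal ports per GWC type, each rounded up, then summed
    ports = (math.ceil(line_gwcs * 4000 * portal_call_pct)
             + math.ceil(sipt_gwcs * 4093 * portal_call_pct)
             + math.ceil(h323_gwcs * 1032 * portal_call_pct))
    # Steps 3-4: blades, then active portals, rounded up
    blades = math.ceil(ports / PORTS_PER_BLADE)
    active = math.ceil(blades / BLADES_PER_PORTAL)
    # Step 5: 1+1 sparing doubles the active portal count
    return active * 2

media_portals_required(3, 2, 5, 0.35)  # -> 8 Media Portals, as in the example
```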
That is, eight (8) Media Portals, or four (4) SAM16 chassis fully configured with a total of 48 Media Blades, are required.

9.12.4 Determining the assignment of Media Portals to GWCs
With the ability to provision up to 20 GWCs on each portal, and up to 20 portals on each GWC, assigning portals to GWCs is very flexible. Because no geographic preference can be assigned to portals, if the portals are located at remote sites, careful consideration must be given to the GWC-to-portal provisioning. See section 9.12.7 RTP Media Portal engineering and sizing for Media Gateway sites on page 181 for further details. When the Media Portals are directly connected to the ERS 8600s, each GWC provisioned with Media Portals must be assigned media portals in the following combination:
- Assign media portals from both Domain A and Domain B to the GWC. This is necessary to ensure that the GWC has the necessary media portal capacity in the case of an ERS 8600 chassis failure. See the CS-LAN chapter of SEB 08-00-001 for Portal connectivity details.
- If more than one SAM16 chassis is deployed, assign media portals to the GWC from different SAM16 chassis. This is necessary to avoid a single point of failure on the SAM16 chassis.
9.12.5 Media Proxy insertion and selection
The CS 2000 has the capability to determine if both ends of a call are part of the same Enterprise VPN, based on provisioned topological information. This topological information includes whether a NAT is deployed at the end of a network hosting a VoIP device. The CS 2000 decides to insert Media Proxies in the voice media path in the following cases:
- Calls initiated between an Enterprise network and the Carrier VoIP network
- Calls initiated between a public network and the Carrier VoIP network
- Calls initiated between two different Enterprise networks
- Calls initiated between an Enterprise network and a public network
Once the CS 2000 decides, for a particular call, that a Media Portal needs to be inserted in the bearer path, it needs to select a Media Portal. The CS 2000's GWCs use a round-robin allocation algorithm through their pool of RTP Media Portals for load balancing purposes. A three-tiered resource management system is implemented to allocate resources:

Tier 1 - RTP Media Portal Selection: Selection of a particular RTP Media Portal from the resource pool is accomplished by the GWCs using a round-robin mechanism to distribute the load. The round-robin order is determined by the sequence in which the RTP Media Portals requested membership in the resource pool. If the selected Media Portal cannot handle the call, the next Media Portal in the round-robin order is selected. If no Media Portal in the resource pool can handle the call, the call is dropped and a log is generated.

Tier 2 - Blade Selection: Once a specific RTP Media Portal has been identified, the most available blade (up to six (6) blades per RTP Media Portal) is selected. The most available blade is the one with the most (UDP) ports available. If a blade cannot handle the call, the next most available blade is selected.
If no blade in the Media Portal can handle the call, the request is rejected, and the GWC will continue with the RTP Media Portal selection algorithm.
Tier 3 - Port Selection: When a blade is allocated to process a media session, it allocates random ports from its port pool. The port pool is a randomized set of ports that reside within a configured range.

9.12.6 Unsupported functions with RTP Media Portals
The following is a list of unsupported functions with the RTP Media Portal:
- Sharing of media portals between MCS 5200 and CS 2000 is not supported. Two separate media portal pools are required in a shared CS-LAN.
- Starting with SN08, CS 2000 requires a dedicated CS 2000 Portal EM. Sharing of the MCS 5200 Management Server from a co-located MCS 5200 is not supported.

9.12.7 RTP Media Portal engineering and sizing for Media Gateway sites
This section provides recommendations for selecting geographic locations for the RTP Media Portals. Two configurations are common:
- Media Portals in the CS-LAN
- Media Portals in a remote site (often together with other carrier-based media gateways, like the MG 15000s)

In this context, remote site means a geographic region which encompasses multiple media gateway sites that have trunk and line media gateways sharing common GWCs. In order for the media path to be optimized as media portals are inserted into the media path, GWCs from one geographic region should not be controlling media gateways and media portals that are located in other geographic regions. Referring to Figure 58 on page 182, if a GWC controls media gateways in Enterprise A and Enterprise B and media portals in Region 1 and Region 2, then when the GWC is inserting a portal into the media stream for a call from the Enterprise A network, the media portal in Region 2 might end up being the selected portal for the call. This is because the GWC has no knowledge of the geographic location of the media gateways or the media portals that it controls. Ideally, the media portal in Region 1 should be selected for calls from Enterprise A.
Figure 58
The recommendation on the location of the RTP Media Portal is shown in Figure 58 on page 182. Because bearer flows proxied by the media portals do not follow the typical logical route between the two media gateways and are instead diverted to the media portal, whenever possible the media portal should be located close to the majority of the remote gateways in the geographic region. This configuration ensures that bandwidth and delay are optimized across the Access network.
Note: In SN06.1, the portal selection algorithm does not take into account the geographic location of media gateways or media portals; thus, a geographic proximity algorithm is implemented manually by assigning media gateways and media portals from one and only one geographic location to each GWC.

9.12.7.1 Determining the number of Media Portals needed in a geographic region
The same algorithm used for determining the number of media portals needed for the entire office can be used to determine the number of media portals needed for a geographic region. The only modification to the algorithm is to take into account only the media gateways in that region. The CS-LAN site can be treated as a geographic region that could include remote media gateway sites.

9.12.7.2 Determining the assignment of Media Portals to GWCs in a geographic region
The same strategy used for determining media portal assignment to GWCs for the entire office can be used for each geographic region. Each region can be managed independently of the other regions when it comes to portal provisioning, as long as GWCs do not control media gateways from multiple geographic regions.
9.13.1.2 Number of clients supported
The large-system capacity and performance verification of the CS 2000 Management Tools Server has used a model of no greater than:
- Ten (10) client machines (GUIs) open for alarm browsing / maintenance
- Four (4) client machines (GUIs) open for LMM/TMM

9.13.1.3 Bandwidth and network performance
It is recommended to interconnect the CS 2000 Management Tools Server with Ethernet 100Base-T, full duplex.

9.13.2 SuperNode Data Manager (SDM)
9.13.2.1 Traffic engineering
The SDM is an application server that resides between the CS 2000 and the OAM&P management plane. The SDM supports the following functions:
- Bootp Server, up to 60 GWCs
- TFTP Server, up to 60 GWCs (for reboots)
- DMS Data Management System (DDMS)
- Event Record Manager (ERM)
- Operational Measurements Data Delivery (OMDD)
- SuperNode Billing Application (SBA), up to 1.5M billing records/hour
- Secure File Transfer (SFT)
- Enhanced Terminal Access (ETA)
- Network Time Protocol daemon (NTPd)
- ASCII Terminal Access (ATA)
- Engineering and Administrative Data Acquisition System (EADAS) over TCP/IP
- Log Delivery
- Remote Procedure Call (RPC)

The SDM resides on the Nortel F/X platform, a high-performance, fault-tolerant, UNIX-based processing platform composed of a Motorola PowerPC 750 dual-processor system running the IBM AIX operating system. System I/O is achieved using fault-tolerant I/O buses, mirrored disk storage, and redundant communication links. The SDM/FT is housed in a C28 Model B streamlined cabinet.
9.13.2.2 Number of clients supported
The SDM will support up to 32 client machines (GUIs).

9.13.2.3 Bandwidth
The SDM must interconnect to the CS-LAN with Ethernet 100Base-T, full duplex links.

9.13.3 Core and Billing Manager (CBM)
9.13.3.1 Traffic engineering
The CBM is an application server that resides between the CS 2000 and the OAM&P management plane. The CBM supports the following functions:
- Bootp Server, up to 60 GWCs
- TFTP Server, up to 60 GWCs (for reboots)
- DMS Data Management System (DDMS)
- Event Record Manager (ERM)
- Operational Measurements Data Delivery (OMDD)
- SuperNode Billing Application (SBA), up to 1.5M billing records/hour
- Secure File Transfer (SFT)
- Enhanced Terminal Access (ETA)
- Network Time Protocol daemon (NTPd)
- ASCII Terminal Access (ATA)
- Engineering and Administrative Data Acquisition System (EADAS) over TCP/IP
- Log Delivery
- Remote Procedure Call (RPC)

9.13.3.2 Number of clients supported
The CBM will support up to 32 client machines (GUIs).

9.13.3.3 Bandwidth
The CBM must interconnect to the CS-LAN with Ethernet 100Base-T, full duplex links.
9.13.4 MG 15000 Element Manager The MG 15000 Element Manager utilizes the Multiservice Data Manager (MDM) platform to provide provisioning, SNMP alarm collection, and performance measurements display in support of the MG 7000 or the MG 15000. Up to 30 MG 15000s can be supported on a single MDM. Multiple MDMs will need to be deployed to support greater than 30 MG 15000s.
MDM to SDM Data Path
Bandwidth: 1.23 x 10^6 bps
Performance: Maximum latency = less than one second; Maximum packet loss = not applicable
Protocols: IP, ICMP, TCP, ARP, FTP
Physical Connection Characteristics:
- SDM: 10Base-T Ethernet, two links, one IP address
- CBM: 100Base-T Ethernet, two links, one IP address
- MDM: 10Base-T Ethernet, one link/T1400 server, one IP address
Notes: 1. The link connecting T1400s/N240s is on a separate (private) subnet.
Security Needs: A network isolated from all non-Carrier VoIP traffic, with a dedicated path from the SDM to the MDM
The following table illustrates examples of OAM capacity data for the two Ethernet links associated with the SDM.

Application | Bandwidth (bits/second) | Packets (per second)
ARP | 4.64 x 10^2 | 1.00
Alarms (from MDM to SDM) | 6.69 x 10^2 | 0.02
5-minute performance measurements (from MDM) | 1.26 x 10^5 | 72.00
5-minute performance measurements (from SDM) | 1.55 x 10^5 | 72.00
30-minute performance measurements (from MDM) | 1.26 x 10^5 | 72.00
SDM to MDM connection audit alarms | 4.64 x 10^2 | 1.00
SDM to SDM link audit | 3.36 x 10^2 | 1.00
SDM to MDM connection audit performance measurements | 2.58 x 10^0 | 0.01
Total | Total Bandwidth | Total Packets
9.14.2 MSS 15000 to MDM
The traffic on the path from the MSS 15000 to the MDM consists of the following message types:
- Alarms
- Performance measurements (PMs)
- State change notifications (SCNs)
- Security log files
- Time-of-day (TOD)
- Statewalk audits

The 10Base-T Ethernet links between the MSS 15000 and the IP edge equipment are load-spared (only one is active at any given time). The following table specifies requirements that apply to this path:
MSS 15000 to MDM Data Path
Bandwidth: 3.39 x 10^5 bps
Performance: Maximum latency = one second; Maximum packet loss = not applicable; Tolerance for out-of-order packets = not applicable
Protocols: IP, ICMP, ARP, TCP, UDP, FMIP, Telnet, FTP, NTP
Physical Connection Characteristics:
- MSS 15000: 10Base-T Ethernet
- MDM: 10Base-T Ethernet
Notes: 1. Both the MSS 15000 and the MDM support 10/100Base-T Ethernet, but only the 10Base-T Ethernet has been tested in the Carrier VoIP configuration.
Security Needs: Not applicable, but a user ID and password are required for all access
The following table illustrates examples of capacity data for the 10Base-T Ethernet path from the MSS 15000 to the MDM:
Application | Bandwidth (bit/s) | Packets/s | Link
ARP | 4.64 x 10^2 | 1.00 | 1
Alarms from MSS 15000 | 1.42 x 10^3 | 0.40 | 2
SCNs from MSS 15000 | 7.10 x 10^2 | 0.20 | 2
State walk to MSS 15000 | 3.92 x 10^1 | 0.02 | 2
5-minute performance measurements from MSS 15000 | 3.85 x 10^4 | 24.00 | 2
Network time of day | 1.00 x 10^4 | 8.33 | 1
Download to MSS 15000 | 2.47 x 10^5 | 53.67 | 1
MDM connection query for MSS 15000 | 2.56 x 10^1 | 0.22 | 1
Total | ~2.98 x 10^5 | 87.84 |
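The link totals in these capacity tables are straight sums of the per-application rows. A minimal sketch of that arithmetic, using the MSS 15000-to-MDM values transcribed above (bandwidth in bit/s, packets per second):

```python
# Sum per-application OAM bandwidth and packet rates into link totals.
# Values are transcribed from the MSS 15000-to-MDM capacity table.
ROWS = {
    "ARP": (4.64e2, 1.00),
    "Alarms from MSS 15000": (1.42e3, 0.40),
    "SCNs from MSS 15000": (7.10e2, 0.20),
    "State walk to MSS 15000": (3.92e1, 0.02),
    "5-minute PMs from MSS 15000": (3.85e4, 24.00),
    "Network time of day": (1.00e4, 8.33),
    "Download to MSS 15000": (2.47e5, 53.67),
    "MDM connection query": (2.56e1, 0.22),
}

def totals(rows):
    """Return (total bandwidth in bit/s, total packets per second)."""
    bw = sum(b for b, _ in rows.values())
    pps = sum(p for _, p in rows.values())
    return bw, pps

bw, pps = totals(ROWS)
```

The same summation applies to the SDM tables; only the row values change.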
9.14.3 SDM to OSS network
The traffic on the path from the SDM to the OSS Network consists of the following message types:
- Performance measurements (PMs), generated at five- or 30-minute intervals
- Operational measurements (OMs), generated at five-minute intervals
- Logs
- Automatic Message Accounting (AMA) billing records
- MAPCI data forwarded to the OSS Network from the SDM during remote MAPCI sessions
The CS 2000 sends AMA billing records to the SDM. This path is implemented via the Nortel proprietary DS512 protocol. If a CBM is used, the path from the CS 2000 to the CBM is via the HIOP/HCMIC.
SDM to OSS Network Data Path
Bandwidth | 3.49 x 10^6 bps
Performance | Maximum latency = less than one second; Maximum packet loss = not applicable
Protocols | IP, ICMP, SNMP, ARP, TCP, FTP
Physical Connection Characteristics | SDM: 10Base-T Ethernet; CBM: 100Base-T Ethernet, two links, one IP address; OSS: 10Base-T Ethernet
Security Needs | Firewall protection is recommended on the OSS side.
The following table illustrates examples of capacity data for the two OAM&P Ethernet links between the SDM and the OSS Network:
Application | Bandwidth (bit/s) | Packets/s
DMS-100 alarms from SDM to OSS | 3.34 x 10^2 | 0.10
5-minute performance measurements from SDM | 1.55 x 10^5 | 72.00
30-minute performance measurements from SDM | 1.55 x 10^5 | 72.00
AMA billing from SDM to OSS | 8.62 x 10^5 | 411.11
5-minute operational measurements from SDM to OSS | 1.07 x 10^5 | 51.00
 | 1.42 x 10^5 | 31.00
 | 1.23 x 10^4 | 3.00
 | 1.42 x 10^5 | 31.00
Total | ~1.58 x 10^6 | 671.21
9.14.4 MDM to OSS network
The traffic on the path from the MDM to the OSS Network consists of the following message type:
- Remote display of the graphical user interface (GUI)
The following table specifies requirements that apply to this path:
MDM to OSS Network Data Path
Bandwidth | 1.64 x 10^5 bps
Performance | Maximum latency = less than one second; Maximum packet loss = not applicable
Protocols | IP, TCP, ICMP, FTP, ARP
Physical Connection Characteristics | OSS: 10Base-T Ethernet; MDM: 10Base-T Ethernet
Security Needs | Firewall protection is recommended on the OSS side.
The following table illustrates examples of capacity data for the two OAM&P Ethernet links between the MDM and the OSS Network:
- UAS
- MS 2010
- Media Portal
- Cable Modems
- Other line gateways
OAM&P Systems:
- CBM
- CMT
- IEMS
Depending on the solution and the customer requirements, the Call Processing components might be split into three subnets/VLANs:
- Call Processing-Private, used for CS-LAN-only communication
- Call Processing-Carrier, used for communication with internal Carrier components
- Call Processing-Public, used for communication with the rest of the network (including end-customers)
Nortel recommends that the three logical blocks above (Call Processing, Media Gateways, and OAM&P Systems) be assigned their own VLANs. As shown below, segregating the CS-LAN into VLANs increases the overall security.
Throughout this document, the following assumptions are used:
- All Call Processing network elements are located in the same subnets (the Call Processing subnets), based on the three Call Processing subnets described above (where applicable).
- All OAM&P servers are located in the same subnet (the OAM&P subnet), which is different from the Call Processing subnets.
- At least one subnet is created for each type of CS-LAN based gateway (i.e. one for UAS/MS 2010, one for MG 15000s, one for IW-SPMs). These are the Media Gateway subnets.
- The Media Portal, when present, is configured in its own subnet. For more details on the Media Portal, please see SEB 08-00-001.
- All the subnets above are connected to the same router/Layer 3 switch (ERS 8600).
- The NOC and the OSS are considered a single entity with access to the CS-LAN through its own subnet. All OAM&P GUIs are located in the NOC/OSS.
- In-band management is used, unless otherwise specifically recommended.
In addition, this section provides the information required to fill in a Customer Interaction (CI) security questionnaire. The CI session is fundamental to obtaining a detailed description of the physical locations of all the network elements. These sessions should be used to identify which portions of the network are considered insecure or untrusted by the customer. In addition, during these meetings it is
also important to include any customer-specific applications in the overall filtering strategy.

10.1.1 Isolation of the CS-LAN logical blocks via VLANs
Nortel recommends separating the Call Processing, Media Gateway, and OAM&P elements into three different port-based VLANs, because port-based VLANs are completely isolated from one another (each in its own broadcast domain). Because of the ERS 8600 architecture, each packet is analyzed independently of the preceding one, which allows complete traffic isolation. In addition, the ERS 8600 can be configured to discard untagged traffic on tagged ports or tagged traffic on untagged ports.

10.1.2 Routing security
10.1.2.1 Routing policies
Added security can be achieved by tuning the overall routing processes of the Carrier VoIP system. Nortel recommends using route policies to selectively announce the CS-LAN subnets and to block the propagation of some of these subnets to the rest of the network. It is strongly recommended to limit the announcement of the CS-LAN Call Processing subnets to the rest of the Carrier VoIP network and the Enterprise network, as described in SEB 08-00-001.
10.1.2.2 Routing protocol protection
Some areas of a customer's routed network may not be trusted. In such situations, since malicious users might potentially create routing black holes, Nortel recommends protecting the OSPF updates with an MD5 key on each interface connected to a non-trusted router, as described in RFC 2178 (OSPF cryptographic authentication with the MD5 algorithm).

10.1.3 Traffic filtering
It is very important to guarantee that only authorized devices can communicate with the Carrier VoIP elements. Traffic filtering is used to control access to Carrier VoIP elements and ensure that only authorized traffic can reach them. As shown below, traffic filtering is enabled at the demarcation points between trusted and untrusted networks.
The most efficient way to design an effective traffic filtering strategy is to keep in mind the overall system topology and the traffic flows present in the system. This document addresses this by:
- Identifying all network elements
- Identifying the locations of all network elements
- Identifying the logical communication flows between all network elements
- Identifying which protocols are used for this communication
- Identifying which devices are allowed to communicate with each other
- Identifying the IP addresses and/or subnets for all devices
It is also very important to identify which networks/subnets/domains are trusted and which ones are considered a security risk. Nortel always assumes that traffic filtering is applied at the ingress ports that might carry untrusted/insecure traffic.
Throughout this document, all flows initiated and terminated inside the following subnets are considered trusted:
- All the CS-LAN Call Processing subnets
- The CS-LAN OAM&P subnet
- The CS-LAN Media Gateway subnets (if present)
Again, Nortel assumes that these subnets are all located off the same router/Layer 3 switch (ERS 8600). Therefore, it is not necessary to filter at the ingress ports that subtend these subnets.
Throughout this document, all flows that traverse or are initiated from the following subnets are considered untrusted:
- The remote Media Gateway subnets, with the exception of the MG 15000s. Please note that Nortel recommends that the MG 15000s be considered trusted, since they are not accessible by end-users.
- The customer's Corporate Network subnet (please note that not all customers will consider this domain untrusted)
Figure 59 on page 197 shows the trusted and untrusted networks using different colors (see the legend of that figure). In the generalized topology for IP-based Carrier VoIP solutions (as shown in Figure 59 on page 197), packet filtering is required in the following network elements:
- The CS-LAN Router. This protects the CS-LAN (Call Processing, OAM&P, and the media gateways, if present at this location). In addition, it filters NOC/OSS traffic if that traffic transits insecure networks. Please note that the NOC/OSS might have a direct physical connection into the CS-LAN Router, in which case the traffic filtering policies do not change.
- The Media Gateway Router. This protects the MGs.
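The trusted/untrusted classification above reduces to a subnet membership test on both flow endpoints. A minimal sketch, where the prefixes are hypothetical placeholders and not addresses from any deployment:

```python
# Classify a flow as trusted only when both endpoints sit inside
# CS-LAN subnets. The prefixes below are illustrative examples.
import ipaddress

TRUSTED_SUBNETS = [
    ipaddress.ip_network("10.10.1.0/24"),  # Call Processing (example)
    ipaddress.ip_network("10.10.2.0/24"),  # OAM&P (example)
    ipaddress.ip_network("10.10.3.0/24"),  # Media Gateways (example)
]

def is_trusted(ip):
    """True if the address belongs to any trusted CS-LAN subnet."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in TRUSTED_SUBNETS)

def flow_is_trusted(src, dst):
    """A flow is trusted only if it starts and ends inside the CS-LAN."""
    return is_trusted(src) and is_trusted(dst)
```

Flows that fail this test are the ones subject to ingress filtering at the CS-LAN Router or the Media Gateway Router.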
Figure 59 Trusted and untrusted networks
10.1.3.1 Layer 2 filtering
In some Packet Access-Integrated Access solutions, enabling Layer 2 filtering on the Media Gateway site router might add extra security. This feature restricts access to the network based on the MAC addresses connected to the switch. For more details, please see section 10.3.6 Layer 2 filtering on the ERS 8600 on page 211.
10.1.3.2 Traffic filtering limitations
It is important to remember that traffic filtering does not guarantee total protection. The main limitations are:
- IP address spoofing. The filtering policies described in the filtering sections mitigate (though do not completely eliminate) IP address spoofing.
- DoS attacks (both single and distributed) using allowed traffic flows.
Please see below for more details on how to minimize IP packet spoofing.
Note: The combination of traffic filtering and IPSec offers better protection of the key Carrier VoIP elements from Denial of Service (DoS) attacks.
Special attention should be given to ICMP. This protocol is very useful for debugging network connectivity issues, but it introduces several security risks involving DoS attacks. For this reason, Nortel recommends that ICMP flows not be allowed from untrusted networks into the CS-LAN.
10.1.3.3 Minimizing spoofing attacks
Using a strong anti-spoofing policy greatly decreases the risk of Denial of Service (DoS) attacks. The indirect anti-DoS strategy recommended in this section is therefore built on blocking all unauthorized traffic. This is achieved by applying traffic filtering on the router/switch closest to the possible source of attacks, where stopping spoofed IP packets is easiest. Spoofed IP packets should be stopped by configuring the ERS 8600 to ensure that only IP packets containing a correct source IP address are forwarded. The basic idea of anti-spoofing protection is to assign a filter rule/configuration to the interface facing the non-trusted network, which examines the source address of all outside packets crossing that interface. If that address does not belong to the external site/domain as assigned by the network administrator, the packet is dropped. It is particularly important that this strategy is applied throughout the Carrier VoIP network, especially at the external connections to all non-trusted sites/domains. By denying all invalid source IP addresses, the chances of a spoofed DoS attack are greatly reduced. It is strongly recommended that for all line solutions (for instance, Packet Access-Cable Access and Packet Access-Integrated Access), most of the DoS-related filtering be performed at the Media Gateway site where the end-subscribers access the Carrier network.
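The per-interface anti-spoofing check described above can be sketched as follows; the site prefix is a hypothetical example:

```python
# Anti-spoofing sketch: on an interface facing a non-trusted site,
# accept only packets whose source address belongs to the prefix
# assigned to that site by the network administrator.
import ipaddress

def antispoof_verdict(src_ip, site_prefix):
    """Return 'forward' if src_ip lies inside the site's assigned
    prefix, otherwise 'drop' (the packet claims an address the site
    cannot legitimately own)."""
    if ipaddress.ip_address(src_ip) in ipaddress.ip_network(site_prefix):
        return "forward"
    return "drop"
```

Applied at every external connection, this rule denies invalid source addresses before they reach the Carrier VoIP core.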
10.1.4 Protocol stacks
Table 41 on page 199 shows the protocols used in the network element communication flows described in the packet filtering sections. Note that this stack is a representation of the actual protocol encapsulation used in Carrier VoIP solutions, not true ISO layers.
Table 41 Protocol stacks (part 1 of 2). Upper-layer protocols, each carried over UDP, TCP, or SCTP:
AFT, Bootp/DHCP, Citrix, CORBA, CORBA Socks, DCE, DNS, FMIP, FTP, FTS, FTS Audit, GR740/EADAS, HTTP, HTTPS, M3UA, MDM Alarms, MDS, MGCP/H.248, NFS, NTP, NTSTD Export, OMDD, OSSGate, PAM, PManagement, PP OM, PPVM, Q.931
Network elements
Table 41 Protocol stacks (part 2 of 2):
Radius, RMI, RMI Socks, SCC2, SFT2 (SSH-based), SIP (CS 2000 Session Server), SNMP (client, trap and agent), SNMP Trap, SSH, SSL, Sync Manager, Syslog (client and server), Telnet, TFTP, Tomcat, TomcatSSL, UTA, Version Port, VRL Look-up, X11
Please note that the SDM also has one interface in the Call Processing subnet, even though it does not participate in any Call Processing function. It is also important to note that all flows from Call Processing elements that use both physical and logical IP addresses in their redundancy strategy (i.e. GWC, XA-Core) use the logical IP address as the source address for all flows.

10.2.2 Media Gateways
The Media Gateways can be divided into the following groups:
- trunk gateways (i.e. MG 15000, MG 7000, MG 3200)
- large line gateways (MG9000 IP)
- small line gateways (MTAs and IADs)
The exceptions to this division are the audio servers (UAS and MS 2010) and the Media Portal.

10.2.2.1 Media Gateway 15000
The security strategy for the MG 15000 is more complex than for the other Media Gateways because it can be located in different parts of the network: the CS-LAN or a Media Gateway site. Three different types of traffic flows are directed to the MG 15000s:
- Call Processing
  - Aspen 2.1 on UDP port 2427
  - H.248 on UDP port 2944
  - Q.931 PRI signaling over SCTP
- Bearer - variable UDP ports, depending on VSP2s and VSP3s
- OAM&P
  - IPSec
  - SSH
  - Telnet
  - FTP
  - NTP
Please note that the Call Processing port is configurable. As indicated in section 7.0 IP addressing on page 83, a unique IP address is assigned to each of the Call Processing, OAM&P, and Media functions. This means that it is simple to identify the different types of
traffic flows based on addresses and port numbers, with the end result being stronger security. In the following chapters, where flows are listed per function, it is assumed that these flows originate from the corresponding logical entity. This means that all bearer traffic uses the IPMCON address, while signaling uses the SG and MG addresses.
10.2.2.1.1 IPSec on MDM
It is important to note that the MG 15000/MSS 15000 Element Manager (PMDM) supports IPSec. This means that an IPSec pass-through filter needs to be enabled to allow IPSec flows to traverse packet filtering elements and firewalls. All IPSec flows are indicated in the flow tables later in this section.
10.2.2.1.2 UDP ports for bearer flows
The following table shows the default port ranges for bearer flows on both VSP2 and VSP3 cards, including T.38 values for fax. It is important to note that, depending on the choice of the UDP Base Port, the range of RTP/RTCP ports changes. This is illustrated in the table below.
Card Type | UDP Base Port | RTP range (even ports) | RTCP range (odd ports) | T.38 range
VSP2 | 8192 | 8192-14190 | 8193-14191 | N/A
VSP2 | 16384 | 16384-22382 | 16385-22383 | N/A
VSP2 | 24576 | 24576-30574 | 24577-30575 | N/A
VSP2 | 32768 | 32768-38766 | 32769-38767 | N/A
VSP2 | 40960 | 40960-46958 | 40961-46959 | N/A
VSP2 | 49152 | 49152-55150 | 49153-55151 | N/A
VSP2 | 57344 | 57344-63342 | 57345-63343 | N/A
VSP3 | 16384 | 16384-23998 | 16385-23999 | 24576-32190
VSP3 | 32768 | 32768-40382 | 32769-40383 | 40960-48574
VSP3 | 49152 (a) | 49152-56766 | 49153-56767 | 57344-64958
Table 42 MG 15000 bearer port ranges
a. Default value for udpBasePort is 49152
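The relationship between the UDP Base Port and the resulting ranges can be expressed as fixed offsets. The offsets below are inferred from the values in Table 42 (VSP2 ranges span 5999 ports, VSP3 ranges span 7615, and the T.38 range sits 8192 above the RTP base); they are not taken from a product specification:

```python
# Derive MG 15000 bearer port ranges from the configured UDP base
# port, using offsets inferred from Table 42.
def bearer_ranges(udp_base_port, card_type):
    """Return the RTP/RTCP (and, for VSP3, T.38) port ranges as
    inclusive (low, high) tuples for the given UDP base port."""
    span = 5998 if card_type == "VSP2" else 7614
    ranges = {
        "rtp": (udp_base_port, udp_base_port + span),           # even ports
        "rtcp": (udp_base_port + 1, udp_base_port + span + 1),  # odd ports
    }
    if card_type != "VSP2":  # VSP3 cards also carry T.38 fax
        ranges["t38"] = (udp_base_port + 8192, udp_base_port + 8192 + span)
    return ranges
```

For example, the default VSP3 base of 49152 reproduces the last table row: RTP 49152-56766, RTCP 49153-56767, T.38 57344-64958.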
In addition, please note that even though the UDP port ranges can be configured, Nortel does not recommend changing the default UDP base of 49152. The customer should contact Nortel if network requirements force a change to these values. It is important to note that, even though a wide range of UDP ports needs to be opened on a firewall or on a router with traffic filtering functions, the security risk is minimal, since the source IP address is also filtered. This means that only traffic coming from the IP addresses of the Media Gateways is accepted. In addition, with VSP2s and IP-over-AAL5 VSP3s, there is a separation of traffic (Call Processing, Bearer, and OAM, if present) at Layer 2, since different PVCs are used for these flows. Such separation adds additional security. Please note that the same strategy can be applied in MPLS networks.
Note: The udpBasePort must be 49152 if a Gigabit Ethernet VSP3 is used.
10.2.2.2 IW-SPM
The IW-SPM is located in the CS-LAN and communicates with MG 15000s and other IW-SPMs for DPT. The current UDP port range is as follows:
- Bearer - UDP ports 30000-34031 (even numbers for RTP and odd numbers for RTCP)
10.2.2.3 Universal Audio Server (UAS)
Three different types of traffic flows are directed to the UAS:
- Call Processing - UDP port 2427
- Bearer - see Table 43 on page 204
- OAM&P - TCP port 23 for Telnet, UDP port 161 for SNMP, and TCP/UDP ports 5631-5632 for pcAnywhere
The following table shows the default port ranges for bearer flows in the UAS.
Please note that the UDP port range used for bearer always spans 240 ports. The initial value of 30000 is configurable; if another value is selected, the range needs to be modified accordingly.
10.2.2.4 Media Server 2010 (MS 2010)
Three different types of traffic flows are directed to the MS 2010:
- Call Processing - UDP port 2944
- Bearer - see Table 44 on page 204
- OAM&P - HTTP, SNMP and FTP
The following table shows the default port ranges for bearer flows in the MS 2010.
Please note that the UDP port range used for bearer always spans 1200 ports. The base value of 4000 is configurable. If a different base value is chosen, the last value of the range needs to be adjusted accordingly.
10.2.2.5 MG9000 IP
Three different types of traffic flows are directed to the MG9000:
- Call Processing - UDP port 2944
- ABI
  - PPVM - SCTP (listens on port 4683, sends on port 23767)
  - Link Maintenance - UDP port 23766
  - CSM - UDP port 22533
  - ESA - UDP port 23768
- Bearer - see Table 45 on page 205
- OAM&P
  - SNMP - TCP ports 8002, 8004, 8006
  - ICMP (optional) - messages for OAM&P and Call Control heartbeat pings
  - HTTPS - TCP port 443
  - SSH - TCP port 24
  - SSH LCI debug port - TCP port 69
  - FTP (available when cards are at IBL) - TCP port 21
  - SSH FTP - TCP port 22 (available starting from SN08)
The following table shows the default port ranges for bearer flows in the MG9000.
10.2.2.6 Media Portal
The Media Portal is used to hide the real addresses of the two Media Gateways involved in a call. The MGs participating in a call that goes through a Media Portal are not aware that there is a proxy on the path: MG A sends its bearer stream to the Media Portal, which then sends it to MG B. To increase security, the Media Portal inspects all incoming packets. This verifies that packets arriving on a given port are really meant for that port and are not malicious. The Media Portal performs the following checks:
- Source verification - the Media Portal checks that the source address of each received packet matches the address provided in the signaling information at call establishment. If it does not match, the packet is discarded.
- Upper layer verification - the Media Portal only accepts packets that carry RTP, RTCP and UDP.
The following table shows the default port ranges for bearer flows in the RTP Media Portal.
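The Media Portal source check described in this section can be sketched as a per-port session table; the data layout and names are illustrative, not the product's implementation:

```python
# Sketch of the Media Portal source check: for each allocated local
# port, record the remote endpoint learned from call signaling and
# discard bearer packets that arrive from anywhere else.
class MediaPortalCheck:
    def __init__(self):
        self.sessions = {}  # local port -> expected remote (ip, port)

    def signal(self, local_port, remote_ip, remote_port):
        """Record the remote endpoint announced at call establishment."""
        self.sessions[local_port] = (remote_ip, remote_port)

    def verdict(self, local_port, src_ip, src_port):
        """Forward only packets arriving from the signaled endpoint."""
        expected = self.sessions.get(local_port)
        if expected == (src_ip, src_port):
            return "forward"
        return "discard"
```

Any packet arriving on an unallocated port, or from an endpoint other than the signaled one, is discarded.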
Please keep in mind that these port ranges are configurable on the Media Portal.
10.2.3 OAM&P network elements
There are many network element managers, including clients and servers, that may be connected to the CS-LAN. The traffic flows originating from or going to these OAM&P network elements might need to be secured if they transit an insecure or untrusted network. The following are the generic OAM&P elements:
- Call Management Tools (CMT)
- SDM
- Core Billing Manager (CBM)
- SAM21 Element Manager
- UAS Audio Provisioning Server (APS)
- UAS Element Manager Server
- USP EM
- Media Portal EM (only for solutions with Media Portals)
- Optional Generic Servers (TFTP, FTP, NTP, DNS, DHCP, KDC, DCE)
- Network Management System (optionally in the CS-LAN)
- PMDM (MG 15000/MSS 15000 Element Manager)
- CM/CMTS EM Server (only for Packet Access-Cable Access)
- Line Gateway EM Server (only for Packet Access-Integrated Access)
- MG9000 EM Server (only for UA-IP and UA-AAL1)
It is important to remember that the MG 15000s can be managed in-band or out-of-band. The most secure and recommended approach is out-of-band management, since the Management interface then has an IP address on a different addressing plane from the rest of the system. Nevertheless, for completeness and management flexibility, the in-band flows are shown in the OAM tables later in this section. Please keep in mind that the same strategy can be applied to other third-party gateways or CMTSs. It is important to note that all references to CMT in the flow tables are applicable to both the Netra 1400 and
clustered N240. In addition, as indicated in the CS-LAN and IP addressing sections, a unique IP address is assigned to each of the Call Processing and OAM&P functions of the SDM. In the following chapters, all SDM flows originate from the OAM&P logical interface. It is also important to note that all flows from OAM&P elements that are clustered or multi-homed (i.e. CMT, IEMS, CBM) use the logical IP address as the source address.
10.2.4 Operations Support System (OSS) / Network Operations Center (NOC)
The Operations Support Systems (OSS) and the Network Operations Center (NOC) are both OAM&P networks, and they are not typically directly connected to the CS-LAN.
that should be performed when a match is made. Filtering actions include:
- Forward
- Forward to next hop
- Drop
- Prioritize
- Mirror
- Stop-on-match
Two types of filters can be applied: source/destination filters and global filters.
10.3.1.1 Source and destination filters
Source filters must specify a source IP address and mask, and may optionally specify a destination IP address and mask. Destination filters must specify a destination IP address and mask, and may optionally specify a source IP address and mask. The minimum mask length is 8 bits. A source or destination filter can cause the following actions to be applied to a packet that matches the filter record:
- Forward the packet (when the filter is applied with a forward action)
- Drop the packet (when the filter is applied with a drop action)
- Mirror the packet to the defined mirror port
- Match the DS field
- Modify the DS code point (only on DiffServ access ports, as shown in section 12.0 Quality of Service (QoS) on page 223)
- Modify IEEE 802.1p
10.3.1.2 Global filters
Global filters can specify a source IP address and mask, a destination IP address and mask, both, or neither. Global filters have the following characteristics:
- No minimum or maximum mask length exists.
- Up to eight global filters can be applied on any given set of eight 10/100 Mb/s ports or one 1000 Mb/s port.
10.3.2 Summary of filter characteristics on the ERS 8600
Filters on the ERS 8000 Series switches have the following characteristics and requirements:
- Up to 3071 filter IDs can be defined among all ports or on a single port, including source/destination and global filters.
- Up to 200 filter sets can be defined for source/destination filters, while up to 100 filter sets can be defined for global filters.
- A collection of source/destination filters is defined in a set (not exceeding thirty-two per set), and the set is applied to a port or group of ports. Multiple sets can be assigned to any given port, but the maximum number of source/destination sets that can be enabled on a given port set is thirty-two.
- A collection of global filters is defined in a global set (not exceeding eight per set), and the set is applied to a port or group of ports. Multiple sets may be applied to a given port or set of ports, but the maximum number of global filters that can be enabled on a given port set is eight.
- Filter counters are maintained for all active filters. Each time an active filter is hit by a packet, its counter is incremented by one. These counters are maintained chassis-wide and may be viewed or reset administratively at any time.
The following table contains a summary of the value of the most significant parameters in the configuration of traffic filtering on the ERS 8600.
Parameter | Value/Range
Filter IDs | 3071 (including global and source/destination filters)
Range of Global Filter Set IDs | 1 to 100
Number of Global Filters per Set | Maximum 8
Number of Global Filters | Maximum 8 per group of eight 10/100 Mb/s ports or one 1000 Mb/s port
Range of Source/Destination Filter Set IDs | 300 to 1000
Number of Filters per Source/Destination Set | Maximum 32
Number of sets per port (any type) | Maximum 32
Number of filters per port | Maximum 1024 (32 * 32)
Number of characters in a filter name | 15
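A planned filter layout for a port can be sanity-checked against these limits, together with the 8-bit minimum mask for source/destination filters described earlier. This is a sketch of the arithmetic only, not switch code:

```python
# Validate a per-port filter plan against the ERS 8600 limits
# summarized in the parameter table above.
LIMITS = {
    "sets_per_port": 32,
    "filters_per_srcdst_set": 32,
    "global_filters_per_set": 8,
    "filters_per_port": 1024,
}

def mask_ok(filter_kind, mask_bits):
    """Source/destination filters need at least an 8-bit mask;
    global filters have no minimum."""
    if filter_kind == "global":
        return 0 <= mask_bits <= 32
    return 8 <= mask_bits <= 32

def plan_ok(srcdst_sets, global_sets):
    """srcdst_sets/global_sets are lists of per-set filter counts."""
    if len(srcdst_sets) + len(global_sets) > LIMITS["sets_per_port"]:
        return False
    if any(n > LIMITS["filters_per_srcdst_set"] for n in srcdst_sets):
        return False
    if any(n > LIMITS["global_filters_per_set"] for n in global_sets):
        return False
    return sum(srcdst_sets) + sum(global_sets) <= LIMITS["filters_per_port"]
```

For example, 31 full source/destination sets plus one full global set (992 + 8 filters) fits within the per-port limits.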
For more details on ERS 8600 filtering, refer to the specific ERS 8600 documentation.
10.3.3 Capacity engineering on filtered ports
A typical Carrier VoIP solution requires several traffic filters (between 40 and 60 on average) on any interface facing a potentially untrusted network, depending on the customer requirements and security strategy.
The majority of traffic in Carrier VoIP solutions uses UDP as the Layer 4 protocol. In addition, the packet sizes for both media and signaling streams are much smaller than traditional data streams (the average media packet size is around 160 bytes). Therefore, to avoid any potential issues, it is strongly recommended to engineer all filtered links at 50% of link capacity. If more capacity is needed, additional Ethernet links can be combined with the physical link into a logical trunk (MLT). In addition, whenever possible, it is recommended to use global filters.
10.3.4 Important information on configuring filtering features
The following paragraphs describe the configuration of important features that are needed for Carrier VoIP.
10.3.4.1 Enabling ARP traffic
On ports where filtering is enabled, the ERS 8600 switch needs to accept and process ARP traffic on port-based VLANs when the default port action is set to drop. To permit ARP traffic, the network administrator must use the command line interface to do the following:
- Configure a user-defined protocol-based VLAN for the ARP EtherType (byprotocol usrDefined 0x0806).
- Set the ports with a default port action of DROP.
- Add these ports to the VLAN as static members.
- Finally, set the port Default VLAN ID to the correct port-based VLAN where the ARPs will be processed.
10.3.4.2 Configuring a range of UDP ports
The following steps describe how to configure UDP port ranges (i.e. allow all traffic between UDP port X and port Y, inclusive). Two filters need to be configured:
- The first filter drops all traffic with UDP ports smaller than port X. The option Stop on Match is set to True.
- The second filter forwards traffic with UDP ports smaller than port Y+1. The option Stop on Match is set to True.
- The port action must be set to drop.
10.3.5 Securing the management of the ERS 8600
The ERS 8600 can be managed via several services.
To optimize security, the SSH, FTP, rlogin, Telnet and TFTP daemons are disabled by default. Nortel recommends that customers manually enable only the
required services. The HTTP server allows only the reading of parameters (read-only mode) and can also be disabled. Access policies allow the network administrator to control management access by setting policies for services to prevent or allow access to the switch. If management access to the switch is permitted, the administrator can specify which hosts or networks can access the switch through which services. Nortel recommends defining the network stations and/or networks that are explicitly allowed or denied access to the switch. For each service, the network administrator can also specify the level of access, such as read-only or read/write/all. For more details, please see the specific ERS 8600 documentation. Access policies define who can access the switch management functions remotely. To enable access services (i.e. how the switch management functions are accessed), use the flags or config bootconfig flags command. For a detailed guide on configuring SSH on the ERS 8600, please see the appendix on Management via SSH in SEB 08-00-001.
10.3.6 Layer 2 filtering on the ERS 8600
On the ERS 8600, it is possible to set individual ports to discard packets that originate from, or are destined to, a MAC address that is not known to the switch. This feature is configured on a per-port basis with the following command:
config ethernet <slot/port> unknown-mac-discard <options>
The table of allowed MAC addresses holds a maximum of 2048 entries. The switch learns these addresses in one of two ways:
- Manually (statically), with config ethernet <slot/port> unknown-mac-discard add-allow-mac <mac>. In this case, the MAC addresses are saved in the config file and restored following a switch reboot.
- Dynamically:
config ethernet <slot/port > unknown-mac-discard autolearn <enable|disable>
Additionally, the switch can be configured to dynamically learn these MAC addresses in one of two auto-learn modes:
- One-shot (config ethernet <slot/port> unknown-mac-discard autolearn-mode one-shot): the switch learns addresses until the table maximum is reached. Entries are never aged out.
- Continuous (config ethernet <slot/port> unknown-mac-discard autolearn-mode continuous): MAC addresses are never aged out.
After some addresses have been learned, the learned entries can be locked or unlocked with the following command:
config ethernet <slot/port > unknown-mac-discard lock-autolearn-mac <enable|disable>
A MAC address can be removed from the list using the following command:
config ethernet <slot/port > unknown-mac-discard remove-allow-mac <mac>
The ERS 8600 can be configured to disable a port in the case of a MAC address violation, as follows:
config ethernet <slot/port > unknown-mac-discard violation-downport <enable|disable>
To bring the port back up, the selected port must be manually enabled, or the switch must be rebooted.
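The static allow-list configuration above can be scripted as a simple generator of the CLI lines quoted in this section; the slot/port and MAC values are placeholders:

```python
# Emit the ERS 8600 unknown-mac-discard CLI lines (as quoted in this
# section) for a static allow list on one port. Purely a text
# generator for illustration.
def unknown_mac_discard_config(slot_port, allowed_macs, down_on_violation=True):
    """Return the CLI lines that add each allowed MAC and, optionally,
    enable port shutdown on a MAC violation."""
    base = f"config ethernet {slot_port} unknown-mac-discard"
    lines = [f"{base} add-allow-mac {mac}" for mac in allowed_macs]
    if down_on_violation:
        lines.append(f"{base} violation-downport enable")
    return lines
```

Running this over an inventory of gateway MAC addresses yields a pasteable configuration fragment per port.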
Figure 60 Logical view of the Media Gateway site interfaces
Additionally, other filtering rules need to be applied to protect the Media Gateways from potential attacks coming from the untrusted Backbone IP Network (see Figure 60 on page 213). This set of rules is applied to Interface 0 (I/F 0). Figure 60 on page 213 shows a logical view of all the interfaces at the Media Gateway site; in particular, the diagram shows the logical interfaces where traffic filtering needs to be enabled. Interface 0 (I/F 0) filters traffic flows coming from potentially untrusted networks. Interfaces 1 and 2 (I/F 1 and I/F 2) are connected to the Media Gateways and only allow selected flows. Please note that in a Layer 3 switch, a logical interface typically includes several physical ports configured in the same VLAN. Special attention should be given to ICMP. This protocol is very useful for debugging network connectivity issues, but it introduces several security risks involving DoS attacks. For this reason, Nortel recommends that ICMP flows not be allowed from untrusted networks into the Media Gateway subnets.
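The recommended ingress behavior for I/F 0 (drop ICMP from untrusted networks, forward only explicitly listed flows, drop everything else) can be sketched as first-match rule evaluation. The rule set below is an illustrative example, not a tested production filter:

```python
# First-match evaluation of an ingress rule list for I/F 0:
# drop ICMP, allow only explicitly listed flows, default deny.
RULES = [
    ("drop", {"proto": "icmp"}),
    ("forward", {"proto": "udp", "dst_port": 2427}),  # example: call control
    ("forward", {"proto": "udp", "dst_port": 2944}),  # example: H.248
]

def verdict(packet):
    """Return the action of the first matching rule, else 'drop'."""
    for action, match in RULES:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"  # default deny for anything not explicitly allowed
```

The ordering matters: the ICMP drop rule sits before the allow rules, and the trailing default deny mirrors setting the port action to drop.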
10.4.1 Anti-spoofing
The Media Gateway site router is the network element that can help significantly to stop spoofing attacks before they reach the Carrier VoIP components. Since this router is the closest to the possible source of attacks, stopping spoofed IP packets and blocking all unauthorized traffic is easiest here. It is important to note that a strong anti-spoofing policy greatly decreases the risk of Denial of Service (DoS) attacks. It is strongly recommended that for the Packet Access-Cable Access and Packet Access-Integrated Access solutions, most of the anti-spoofing related filtering be performed at the Media Gateway site where the end subscribers access the network.
10.4.2 TFTP flows
Please note that TFTP flows present a unique challenge in a traffic filtering strategy. The main reason is that after the initial request to UDP port 69 on the server, the server chooses a random port for the rest of the connection. Without the use of IPSec, this kind of connection presents an interesting challenge to the network administrator. Nortel recommends using a TFTP server that allows a flexible configuration so that a certain range of UDP ports can be selected. This minimizes the number of ports that must be open on the filtering elements.
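The anti-spoofing idea can be sketched as a simple ingress check: a packet arriving on a subscriber-facing interface is accepted only if its source address belongs to the prefixes assigned to that interface, in the spirit of RFC 2827 ingress filtering. The prefix shown is an invented example, not taken from this document.

```python
# Hedged sketch of source-address anti-spoofing at the Media Gateway site
# router: drop any packet whose source address does not belong to the
# prefixes assigned to the ingress interface.

import ipaddress

def antispoof_permit(ingress_prefixes, src_ip):
    """Permit only packets whose source lies inside the interface's prefixes."""
    src = ipaddress.ip_address(src_ip)
    return any(src in ipaddress.ip_network(p) for p in ingress_prefixes)

# Example Media Gateway subnet on I/F 1 (illustrative addressing only)
mg_subnet = ["10.20.30.0/24"]
```

A packet sourced from outside the Media Gateway subnet on that interface would be dropped before it can reach the Carrier VoIP components.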
Figure 61
Again, as seen in the packet filtering sections, the firewall needs to allow all Carrier VoIP flows to enter the enterprise network. 10.5.1 Network Address Translation (NAT) Typically, most Enterprise networks do not internally use IANA publicly assigned addresses but instead use private addresses and then utilize NAT devices to access public networks (i.e. the Internet).
Figure 62
Special attention should be given to ICMP. This protocol is very useful for debugging network connectivity issues, but it introduces several security risks involving DoS attacks. For this reason, Nortel recommends that ICMP flows not be allowed from untrusted networks into the Corporate Network.
Table 48 on page 218 lists the OAM&P/NOC flows that can be common to all Carrier VoIP solutions.
11.1.1 Flow analysis of traffic ingressing the Corporate network
The following table shows all the flows that need to be allowed through the customer's firewalls.
Protocol
Source Port Range 23 20-21 123 1024 - 65535 22 N/A 5505 (configurable)
Destination Port Range 1024 - 65535 1024 - 65535 123 1812 1024 - 65535 N/A 1024 - 65535
Protocol | Source Port Range | Destination Port Range
CORBA | 22 | 1024 - 65535
Socks | 10080 (Configurable) | 1024 - 65535
HTTP | 80 | 1024 - 65535
HTTPS | 443 | 1024 - 65535
OSSGate | 10023 and 11023 | 1024 - 65535
Tomcat | 8080 | 1024 - 65535
TomcatSSL | 8443 | 1024 - 65535
FTP | 20-21 | 1024 - 65535
Telnet | 23 | 1024 - 65535
SNMP | 161 | 1024 - 65535
SNMP Trap | 1024 - 65535 | 162
FTP over SSH PORT FWD. | 9999 | 1024 - 65535
Version Port | 24482 | 1024 - 65535
VRL Look-up | 24484 | 1024 - 65535
DNS | 1024 - 65535 | 53
NTP | 123 | 123
PAM | Configurable | Configurable
Router/Switch NOC/OSS:
SNMP | 161 | 1024 - 65535
SNMP Trap | 1024 - 65535 | 162
Telnet | 23 | 1024 - 65535
SSH | 22 | 1024 - 65535
Protocol | Source Port Range | Destination Port Range
Telnet | 23 | 1024 - 65535
FTP (Server) | 20-21 | 1024 - 65535
FTP (Client) | 1024 - 65535 | 20-21
SSH Server | 22 | 1024 - 65535
SSH Client | 1024 - 65535 | 22
Radius | 1812 | 1024 - 65535
DNS | 1024 - 65535 | 53
NTP | 123 | 123
SCC2 | Configurable | Configurable
Adapter | 9550-9556 | 1024 - 65535
GR740/EADAS | 7701 | 1024 - 65535
UTA | 1646, 1647, 3197 | 1024 - 65535
PP OM | 7100 | 1024 - 65535
OMDD | |
NFS Server | 111, 2049, 4045, 32771-32773, 32781 | 1024 - 65535
AFT | 30000 | 1024 - 65535
SFT2 | 50022 | 1024 - 65535
Protocol | Source Port Range | Destination Port Range
Telnet | 23 | 1024 - 65535
FTP (Server) | 20-21 | 1024 - 65535
FTP (Client) | 1024 - 65535 | 20-21
SSH | 1024 - 65535 | 22
SSH | 22 | 1024 - 65535
Radius | 1812 | 1024 - 65535
DNS | 1024 - 65535 | 53
NTP | 123 | 123
SCC2 | Configurable | Configurable
Adapter | 9550-9556 | 1024 - 65535
GR740/EADAS | 7701 | 1024 - 65535
UTA | 1646, 1647, 3197 | 1024 - 65535
PP OM | 7100 | 1024 - 65535
OMDD | |
NFS Server | 111, 2049, 4045, 32771-32773, 32781 | 1024 - 65535
AFT | 30000 | 1024 - 65535
SFT2 | 50022 | 1024 - 65535
PAM Proxy | 80, 8080 | 1024 - 65535
PAM | 1024 - 65535 | 1024 - 65535
SNMP | 1024 - 65535 | 161
Protocol | Source Port Range | Destination Port Range
PAM - IEMS | 7777 | 1024 - 65535
HTTP | 9090, 9091, 443, 8443, 80, 8080 | 1024 - 65535
SSL | 9004, 9005 | 1024 - 65535
FTP | 1024 - 65535 | 20-21
SSH | 1024 - 65535 | 22
Radius | 1812 | 1024 - 65535
Tomcat | 18005, 18009 | 1024 - 65535
Syslog Client | 1024 - 65535 | 514
SNMP | 1024 - 65535 | 161
SNMP Trap | 162 | 1024 - 65535
SNMP Agent | 1024 - 65535 | 8001 (configurable)
NTSTD Export | 8555 | 1024 - 65535
SCC2 | 8556 | 1024 - 65535
It is important to note that when the CMT and CBM are deployed in the clustered N240 configuration, NTP needs to be allowed for the CMT and CBM physical server addresses on both N240s in the cluster.
DS PHB | DSCP
EF | '101110'
AF11 (Class 1) | '001010'
AF12 (Class 1) | '001100'
AF13 (Class 1) | '001110'
AF21 (Class 2) | '010010'
AF22 (Class 2) | '010100'
AF23 (Class 2) | '010110'
DF | '000000'
CS7 | '111000'
CS6 | '110000'
CS5 | '101000'
CS4 | '100000'
CS3 | '011000'
CS2 | '010000'
CS1 | '001000'
CS0 | '000000'
For AF[Y][X]: Y = Class Number, X = Drop Precedence (discard priority), 1 = lowest, 3 = highest.
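The code points above follow directly from the DiffServ bit layout, which can be verified with a few lines of arithmetic:

```python
# The AF and CS DSCP values are pure bit patterns: AF[Y][X] puts the class Y
# in the high three bits and the drop precedence X in the next two bits;
# CS[N] puts N in the high three bits with the low bits zero. EF is the
# fixed value 101110.

def af_dscp(y, x):
    """6-bit DSCP for AF[Y][X] (class y = 1..4, drop precedence x = 1..3)."""
    return (y << 3) | (x << 1)

def cs_dscp(n):
    """6-bit DSCP for class selector CS[N] (n = 0..7)."""
    return n << 3

assert format(af_dscp(1, 1), "06b") == "001010"   # AF11
assert format(cs_dscp(6), "06b") == "110000"      # CS6
```

The same arithmetic reproduces every AF and CS entry in the table.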
It is typically recommended that all network elements pre-mark their traffic according to their class category. A detailed description of the flows and their DSCP can be found in SEB 08-00-001, Carrier VoIP CS-LAN Engineering Rules, and in Section 12.5 on page 241. If the elements are unable to mark packets with the appropriate DSCP value, or the elements are not trusted, then the re-marking must be performed on the DS edge node.
The following paragraphs give a high-level overview of these classes. Table 50 on page 224 shows an example of the Nortel Service Classes (NSC).
NSC | DSCP in NSC | Example Application
Critical | CS7 | Critical Alarms
Network | CS6 | Routing, Billing, Critical OAM
Premium | EF or CS5 | VoIP
Platinum | AF41, AF42, AF43, CS4 | Interactive Gaming, Interactive Video Conferencing
Gold | AF31, AF32, AF33, CS3 | Streaming audio/video
Silver | AF21, AF22, AF23, CS2 | Client/Server transactions
Bronze | AF11, AF12, AF13, CS1 | E-mail
Standard | DF | Best Effort
It should be noted that Carrier VoIP solutions that support IP Telephony, priority data, and best-effort data services require only a subset of the Nortel Service Classes. For such solutions, four NSCs MUST be supported, as summarized in Table 51 on page 224. Additional service classes (NSCs) MAY be supported in the Managed IP Network for additional services/applications such as video conferencing, streaming video services, and various differentiated data services.
NSC | DSCP | Example Application
Network | CS6 | Critical alarms, heartbeats, Routing, Critical OAM, Signaling between Call Servers (SIP-T)
Premium | EF, CS5 | IP Telephony (voice G.711 and compressed, DTMF tones, voice-band data, clear-channel data, lawful intercept, signaling between GW/phone and call server)
Bronze | AF11, AF12, AF13, CS1 | Non-critical OAM&P, Priority data, Billing
Standard | DF | Best effort or unclassified data
12.1.1 Critical class
The Critical NSC is used only for network-to-network device communications within an administrative domain (for example, critical heartbeats between nodes). Network devices should pre-mark their packets with the CS7 DSCP to receive Critical NSC treatment. An example of the Critical NSC is VRRP heartbeats.
12.1.2 Network class
The Network NSC is used for communications between network devices within one administrative domain (for example, routing protocols, ICMP) if the Critical NSC is not supported, as well as for control and signaling communication between administrative domains (for example, SIP-T, DNS, DHCP/BootP, RSVP). Network devices should pre-mark their packets with the CS6 DSCP to receive the Network NSC edge-to-edge treatment.
12.1.3 Premium class
This class is intended for support of telephony service over IP networks. It is required that the end equipment (voice media gateways, IP phones, and call servers) set the DSCP as indicated below:
EF = Voice data (bearer traffic)
CS5 = Telephony signaling between media gateways and call server, and also T.38 encoded fax calls
12.1.4 Platinum class
The Platinum NSC is used for Interactive Video (Video Conferencing) and Interactive Gaming. Packets marked with Assured Forwarding 4 (AF4) or Class Selector 4 (CS4) belong to this NSC.
12.1.5 Gold class
The Gold NSC is recommended for streaming audio and video. Packets marked with Assured Forwarding 3 (AF3) or Class Selector 3 (CS3) belong to this NSC.
12.1.6 Silver class
The Silver NSC is used for fast response for TCP and HTTP short-lived flows (for example, interactive TCP traffic, eCommerce). Packets marked with Assured Forwarding 2 (AF2) or Class Selector 2 (CS2) belong to this NSC.
12.1.7 Bronze class
The Bronze NSC is used for long-lived TCP and HTTP flows (E-mail, priority data, non-critical OAM&P). Packets marked with Assured Forwarding 1 (AF1) or Class Selector 1 (CS1) belong to this NSC.
12.1.8 Standard class
The Standard NSC is used for all traffic that has not been characterized into one of the other supported NSCs in the DS network domain. Packets marked with DSCP value 000000 (or any other DSCP value that is not mapped to the supported classes described above) must be mapped to the Standard NSC.
Flow Type | Application
Voice Media (Bearer Traffic) | RTP, RTCP
Voice Signaling (Control) and T.38 Fax | Unistim, SIP, H.248, MGCP, H.323, NCS, SIP-T, SIP
Critical OAM&P (OAM&P Servers - MG) | DHCP, BootP, DNS, high-priority OAM
Non-critical OAM&P | FTP, HTTP, Telnet, SSH, SNMP, Syslog, Billing record transfers, NFS, NTP, TFTP
It is important to note that there is a project in the IETF that could cause the DSCP for UDP-based non-critical OAM&P traffic to change from CS1 to CS2. This recommendation will be reflected in this document if the IETF approves the change.
12.2.2 Detailed protocol stacks
Table 53 on page 227 shows the protocols used in the network element communication flows that are used in Carrier VoIP. The table lists layer 4 and upper-layer information for each of the applications and maps them to the recommended DSCP. It is important to know that it is not necessary to set a QoS filter for each of these applications. The best approach is to aggregate them and use multi-field information to aggregate traffic efficiently.
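The aggregation approach can be sketched as a small multi-field classifier. The rules below are illustrative only: the DSCP names come from this section, but the RTP port range is an invented assumption, not taken from this document.

```python
# Sketch of multi-field aggregation: instead of one QoS filter per
# application, a few (protocol, port-range) rules stamp whole groups of
# flows with a shared DSCP. First matching rule wins.

DSCP = {"EF": 46, "CS5": 40, "CS1": 8, "AF11": 10}

RULES = [
    ("udp", range(16384, 32768), DSCP["EF"]),   # RTP bearer (example range)
    ("udp", [2427, 2727], DSCP["CS5"]),         # MGCP signaling
    ("udp", [161, 162, 514], DSCP["CS1"]),      # SNMP/syslog OAM
    ("tcp", [20, 21, 23, 80], DSCP["AF11"]),    # FTP/Telnet/HTTP OAM
]

def classify(protocol, dst_port, default=0):
    for proto, ports, dscp in RULES:
        if protocol == proto and dst_port in ports:
            return dscp
    return default  # unmatched traffic stays best effort (DF)
```

Four rules here cover what would otherwise be a dozen per-application filters, which is the efficiency point made above.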
Application | DSCP
AFT | AF11
Bootp/DHCP | CS6
Citrix | AF11
CORBA | AF11
CORBA Sock | AF11
DCE | AF11
DNS | CS6
FMIP | AF11
FTP | AF11
FTS | AF11
FTS Audit | AF11
GR740/EADAS | AF11
Application | DSCP
H.225 | CS5
H.245 | CS5
H.248 | CS5
HTTP | AF11
HTTPS | AF11
M3UA | CS5
MDM Alarms | AF11
MDS | AF11
MGCP | CS5
NCS | CS5
NFS | CS1
NTP | CS1
NTSTD Export | AF11
OMDD | AF11
OSSGate | AF11
PAM | CS1
PManagement | AF11
PP OM | AF11
PPVM | CS5
Q.931 | CS5
RADIUS | CS1
RMI | AF11
RTP Portal Management Console | AF11
RTP/RTCP | EF
SCC2 | AF11
SFT2 (SSH-based) | AF11
SIP (NGSS) | CS5
SIP-T (used UDP up to SN04) | CS6
SNMP (client, trap and agent) | CS1
Application | DSCP
SNMP Trap | CS1
SSH | AF11
SSL | AF11
Sync Manager | AF11
Syslog (client and server) | CS1
Telnet | AF11
TFTP | CS1
Tomcat | AF11
TomcatSSL | AF11
UTA | AF11
Version Port | AF11
VRDNS (Sync Manager for VRDN GWC) | AF11
VRL Look-up | AF11
X11 | AF11
12.2.3 Mapping to queues
All the service classes defined above need to be mapped to the internal queues of the routers in the DS domain for optimal servicing. Nortel recommends network elements that support eight (8) queues. However, this is not always possible, especially in a multi-vendor network.
12.2.3.1 Queues in the Ethernet Routing Switch 8600
Since the router used for the CS-LAN and for aggregating Media Gateway sites is the ERS 8600, Table 62 on page 238 shows this mapping for all the Carrier VoIP logical flows. The ERS 8600 supports eight output queues per port. Each of the eight queues is mapped to one of the eight QoS levels, and queues are serviced using guaranteed Weighted Round Robin. More information on ERS 8600 QoS engineering can be found in Section 12.4, QoS on the ERS 8600, on page 232.
12.2.3.2 Queues in routers with four queues Other routers that might be present in Carrier VoIP solutions would typically support four (4) internal queues. Table 54 on page 230 shows the mapping of NSC to these queues for all the Carrier VoIP logical flows. It is strongly recommended not to mix data with voice in the same queue.
Traffic Type | CoS Code Points | NSC | PHB | Queue Number
Network control | 111 | Network | | 3
OAM | 110 | Network | | 3
Signaling | 101 | Premium | Expedited-forwarding | 2
Bearer | 101 | Premium | Expedited-forwarding | 2
Priority IP Data | 011 | Bronze | Assured-forwarding | 1
Priority IP Data | 010 | Bronze | Assured-forwarding | 1
Best Effort Data | 000 | Standard | Best-effort | 0
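A minimal sketch of this collapse onto four queues, assuming Premium occupies queue 2 (the extracted table lists queues 3, 1, and 0 explicitly):

```python
# Four-queue routers collapse the NSCs onto queues 0-3, keeping voice
# (Premium) out of the data queues as recommended above. The Premium
# queue number is an assumption for this sketch.

QUEUE_OF_NSC = {
    "Network": 3,    # network control and OAM
    "Premium": 2,    # signaling and bearer
    "Bronze": 1,     # priority IP data
    "Standard": 0,   # best-effort data
}

def queue_for(nsc):
    return QUEUE_OF_NSC.get(nsc, 0)  # unknown classes fall to best effort
```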
12.2.3.3 Queues in the BPS Another network element that can be present in a Carrier VoIP solution is the BPS2000, used in the Packet Access - Integrated Access solutions. Table 55 on page 230 describes the default DSCP, QoS class, IEEE 802.1p, and egress queue assignment for packets in each traffic class.
Incoming or re-marked DSCP CS7 CS6 EF, CS5 AF41, AF42, AF43, CS4 AF31, AF32, AF33, CS3 AF21, AF22, AF23, CS2
12.2.3.4 Queues in the MSS 7000/15000 See NN10600-591, Nortel Multiservice Switch 7400/15000/20000 Layer 3 Traffic Management Configuration for information on DSCP mapping in the MSS 7000/MSS 15000.
Application Network Control VoIP Inter-Office Traffic VoIP (Media) VoIP (Signaling) Real-time Interactive Multimedia Conferencing Broadcast Video Multimedia Streaming, High Throughput Data, Low Latency Data, OAM&P Low Priority Data Standard CS7 CS6 EF CS5 CS4 AF41, AF42, AF43 CS3
DSCP
AF31, AF21, AF11, CS2 AF12, AF22, AF32, AF13, AF23, AF33 CS1 DF, CS0
Please note that MPLS EXP 101 is reserved for future implementations.
Figure 63: CS-LAN router
In Figure 63 on page 233, interfaces I/F 0, I/F 1 and I/F 2 are Access ports. 12.4.1 DiffServ access port The DiffServ access element is at the edge of the DiffServ network and classifies traffic by marking it with the appropriate DSCP and assigning it to the internal QoS level based on the filters and traffic policies that are enabled. The traffic filters allow setting criteria for identifying a micro flow or an aggregate flow by matching on multiple fields in the IP packet. 12.4.1.1 Tagged traffic In tagged IP packets, the QoS level is derived from the IEEE 802.1p bits or its DSCP, depending on whether the traffic is bridged or routed. In bridged IP packets, the switch examines the IEEE 802.1p bits as the packets enter the DiffServ access port and bases the QoS level on the ingress Tag-to-QoS mapping (Table 57 on page 234).
DSCP | IEEE 802.1p | QoS Level | Traffic Service Class
CS7, CS6 | 7 | 7 | Network
EF, CS5 | 7 | 6 | Premium
AF41, AF42, AF43, CS4 | 6 | 5 | Platinum
AF31, AF32, AF33, CS3 | 5 | 4 | Gold
AF21, AF22, AF23, CS2 | 4 | 3 | Silver
AF11, AF12, AF13, CS1 | 3 | 2 | Bronze
DE and all undefined code points | 0 | 1 | Standard
User-defined code points | 2 | 0 | Standby
As the tagged, bridged packet leaves the DiffServ network via an access port, the switch re-marks the IEEE 802.1p bits and DSCP based on whether the packet is part of a port-based VLAN or a policy-based VLAN, and whether global filters are enabled. Re-marking occurs as follows:
A bridged, tagged packet's IEEE 802.1p bits are re-marked by the global filter only if the IEEE 802.1p value in the filter is higher than the value already marked on the packet.
If a global filter is present and set to modify the DSCP, the bridged, tagged packet's DSCP is re-marked according to the filter.
When no global filter is present, packets that are part of a port-based VLAN are re-marked with a DSCP according to the egress mapping based on the port QoS level.
When no global filter is present, packets that are part of a policy-based VLAN are re-marked with a DSCP according to the egress mapping based on the VLAN QoS level.
In tagged, routed IP packets entering the access port, the switch sets the DSCP to the default (000000 binary) unless the traffic filters set for that port indicate otherwise. Based on the new DSCP marking, the packets are then given the appropriate PHB and assigned a QoS level based on the ingress DSCP-to-QoS mapping (Table 57 on page 234). When a tagged IP packet leaves the DiffServ network, the switch sets the IEEE 802.1p bits based on the egress mapping table (Table 58 on page 235), unless traffic filters indicate otherwise.
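The 802.1p re-marking rule for bridged, tagged packets condenses to a couple of lines; this is a sketch of the rule as stated above, not switch code:

```python
# For a bridged, tagged packet, a global filter's 802.1p priority wins only
# when it is strictly higher than the value already carried in the tag.

def remark_p_bits(packet_p, filter_p=None):
    """Return the IEEE 802.1p value after applying an optional global filter."""
    if filter_p is not None and filter_p > packet_p:
        return filter_p
    return packet_p
```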
DSCP | IEEE 802.1p | QoS Level | Traffic Service Class
| 7 | 7 | Network
EF | 7 | 6 | Premium
AF41, AF42, AF43, CS4 | 6 | 5 | Platinum
AF31, AF32, AF33, CS3 | 5 | 4 | Gold
AF21, AF22, AF23, CS2 | 4 | 3 | Silver
AF11, AF12, AF13, CS1 | 3 | 2 | Bronze
DE | 0 | 1 | Standard
| 2 | 0 | Standby
12.4.1.2 Untagged traffic
All bridged, untagged IP traffic ingressing a DiffServ access port is assigned a QoS level based on the QoS setting at the port, the VLAN or MAC address level, or the DSCP-to-QoS mapping. When the packet enters a DiffServ access port, the assigned QoS level is one of the QoS levels assigned at the MAC address, port, or VLAN level. If the untagged packet uses a port-based or policy-based VLAN, the QoS level assigned is the greatest of the QoS levels found at:
The MAC address level
The port level
The VLAN level
If global filters are used, the QoS level is determined differently. A global filter re-marks the packet DSCP according to that filter, and the QoS level is then the highest QoS level derived from the:
DSCP-to-QoS mapping, port, or MAC level if the packet is using a port-based VLAN
DSCP-to-QoS mapping, VLAN, or MAC level if the packet is using a policy-based VLAN
As the packet leaves the DiffServ network via an access port, the switch re-marks the DSCP based on the egress QoS-to-DSCP mapping (Table 58 on page 235). The DSCP of routed, untagged IP traffic is reset to the default DSCP (000000 binary) unless a traffic filter indicates otherwise. The switch can use an IP filter to re-mark the DSCP of untagged IP traffic entering the DiffServ access port. When a routed, untagged packet leaves the DiffServ network, its DSCP remains unchanged.
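The level selection for bridged, untagged traffic is simply the greatest of the configured levels, which can be sketched in one line:

```python
# For bridged, untagged packets without a global filter, the assigned QoS
# level is the greatest of the levels configured at the MAC address, port,
# and VLAN levels.

def untagged_qos_level(mac_level, port_level, vlan_level):
    return max(mac_level, port_level, vlan_level)
```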
An Access port, when receiving the four types of traffic as described above, will behave according to Table 59 on page 236.
Type of traffic: Bridged, tagged
Ingress: Use the Tag-to-QoS mapping to map the IEEE 802.1p bits to a QoS level, if no global filter is present. With a global filter, re-mark the IEEE 802.1p bits with the filter value only if the filter value is higher. Preserve the DSCP.
Egress: Use the QoS-to-Tag mapping to reset the IEEE 802.1p bits based on the QoS level. Use the QoS-to-DSCP mapping to set the DSCP, if no filter is present. Otherwise, re-mark the DSCP to the filter DSCP.

Type of traffic: Routed, tagged
Ingress: Reset the DSCP to zero, and use traffic filters to set the new DSCP. Use the DSCP-to-QoS mapping to map the new DSCP to a QoS level. Ignore the IEEE 802.1p bits.
Egress: Use the QoS-to-Tag mapping to reset the IEEE 802.1p bits based on the QoS level.

Type of traffic: Bridged, untagged
Ingress: If a global filter is used, set the new DSCP according to that filter. Assign the QoS level based on the highest QoS level in the DSCP mapping or at the port, VLAN, or MAC address level.
Egress: Use the QoS-to-DSCP mapping to re-mark the DSCP.

Type of traffic: Routed, untagged
Ingress: Reset the DSCP to zero, and use traffic filters to set the new DSCP. Use the DSCP-to-QoS mapping to map the new DSCP to a QoS level.
Egress: No action is performed.
12.4.2 DiffServ core port
The DiffServ core port does not change packet classification or marking done in the DiffServ access port. The core port preserves the DSCP or IEEE 802.1p bit marking of all incoming packets and uses these markings to assign the packet to an internal queue. In the network diagram in Figure 63 on page 233, Interfaces 3 and 4 are Core ports.
12.4.2.1 Tagged traffic
For all tagged IP traffic, the switch examines the DSCP of the packets and places them in the appropriate queue based on the ingress DSCP-to-QoS mapping table (Table 57 on page 234). All DSCP and IEEE 802.1p bit markings are preserved.
12.4.2.2 Untagged traffic
For untagged IP traffic, the switch examines the DSCP of the packets and places them in the appropriate
queue based on the ingress DSCP-to-QoS mapping table (Table 57 on page 234). A Core port, when receiving traffic, behaves according to the following table.
Type of traffic: Bridged, tagged
Ingress: Place the packet in the QoS queue based on the DSCP-to-QoS mapping. Ignore the IEEE 802.1p bits.
Egress: No action is performed.

Type of traffic: Routed, tagged
Ingress: Place the packet in the QoS queue based on the DSCP-to-QoS mapping. Ignore the IEEE 802.1p bits.
Egress: Use the QoS-to-Tag mapping to reset the IEEE 802.1p bits based on the QoS level.

Type of traffic: Bridged, untagged
Ingress: Place the packet in the QoS queue based on the DSCP-to-QoS mapping.
Egress: No action is performed.

Type of traffic: Routed, untagged
Ingress: Place the packet in the QoS queue based on the DSCP-to-QoS mapping.
Egress: No action is performed.
12.4.3 Priority queuing and servicing
The ERS 8600 supports eight output queues per port. Each of the eight queues is mapped to one of the eight QoS levels, and queues are serviced using guaranteed Weighted Round Robin (WRR). Table 61 on page 238 lists the eight traffic service classes corresponding to the QoS levels. The priority is assigned from the highest (7) to the lowest (0). The default queue for all traffic is QoS level 1, the standard traffic service class. After the packets are queued, the queues are serviced according to the guaranteed WRR mechanism. This mechanism ensures strict priority for the queue assigned to the Premium class; the other queues are serviced according to WRR. The WRR mechanism uses the queue's packet transmit opportunity to determine which queue is serviced first. It is important to note that the Network class is not configurable and is reserved for network-node-initiated traffic.
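The servicing model can be sketched as follows. This is an illustrative simulation; the weights and queue names are invented for the example and are not the ERS 8600 defaults.

```python
# Strict priority for the Premium queue, then one weighted-round-robin pass
# over the remaining queues: each queue may send up to its weight's worth of
# packets per round, until the transmit budget is exhausted.

from collections import deque

def service(queues, weights, premium, budget):
    """Drain up to `budget` packets; `queues` maps name -> deque of packets."""
    sent = []
    while budget > 0 and queues[premium]:
        sent.append(queues[premium].popleft())   # strict priority first
        budget -= 1
    while budget > 0 and any(queues[q] for q in weights):
        for q, w in weights.items():             # one WRR round
            for _ in range(min(w, len(queues[q]))):
                if budget == 0:
                    break
                sent.append(queues[q].popleft())
                budget -= 1
    return sent
```

With two Premium packets queued, they are always transmitted before any weighted queue is touched, which is the guarantee described above.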
Traffic Service Class | QoS level | PHB
Network | 7 |
Premium | 6 | Expedited-forwarding
Platinum | 5 | Assured-forwarding
Gold | 4 | Assured-forwarding
Silver | 3 | Assured-forwarding
Bronze | 2 | Assured-forwarding
Standard | 1 | Default
User-defined | 0 |
When the packet transmit opportunity allocated to a particular time slot arrives and the queue at that level contains data, the queue is serviced. If two queues contain data and their time slots arrive simultaneously, the queue with the higher priority is serviced first. See Table 61 on page 238 for the relationship between the QoS level, packet transmit opportunity, and percentage weight. For each port, every queue level, except for the network class, can be configured to own any, all, or none of the packet transmit opportunities. The switch uses the percentage weight to configure the packet transmit opportunity for each queue.
12.4.4 Mapping Carrier VoIP traffic to queues
The following table shows the recommended mapping for Carrier VoIP traffic to the eight queues on the ERS 8600.
Traffic Type | Code Points | Class Name | Queue Number
Network control | CS7 (111000) | Network | 7
Critical OAM & Inter-office Signaling | CS6 (110000) | Network | 7
VoIP Signaling | CS5 (101000) | Premium | 6
VoIP Bearer | EF (101110) | Premium | 6
Priority IP Data | AF41 (100010) | Platinum | 5
Traffic Type | Code Points | Class Name | Queue Number
Priority IP Data | AF31 (011010) | Gold | 4
Priority IP Data | AF21 (010010) | Silver | 3
Non-Critical OAM | CS1 (001000) | Bronze | 2
Priority IP Data | AF11 (001010) | Bronze | 2
Best Effort Data | DF (000000) | Standard | 1
Best Effort Data | CS0 (000000) | Standard | 1
As previously discussed, the filters that perform marking will be applied on logical interfaces I/F 1 and I/F 2, as shown in Figure 64 on page 241 below.
Please note that in a Layer 3 switch a logical interface typically includes several physical ports that are configured in the same VLAN.
Figure 64
The following tables show the VoIP flows; each flow is assigned a DSCP based on its NSC (see Section 12.2, DiffServ in Carrier VoIP, on page 226 for more details). Table 63 on page 242 illustrates all the common flows that originate from the Media Gateway site. The traffic filters derived from the flows described in Table 63 on page 242 will be applied on interfaces I/F 1 and I/F 2.
Figure 65
Figure 66
The next figure shows a comparison of the voice quality achieved with the G.711 and G.729 codecs. G.729 may be considered where it is necessary to conserve bandwidth: it compresses speech to 8 kbit/s. However, once the packetization overhead has been added, the effective bandwidth saving is around 50%. The bandwidth saving comes at the price of lower voice quality. The figure below shows the drop in voice quality when using G.729 instead of G.711. Note, when reading this figure, that G.729 encoding introduces an additional 25 ms of latency. The network planner should also be aware that the capacity and number of voice channels supported by some gateways may be lower when using G.729.
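The bandwidth point can be made concrete with a small IP-layer calculation. The figures below assume 20 ms packetization and 40 bytes of IP/UDP/RTP headers per packet; layer-2 encapsulation and different packetization intervals change the exact result, so the effective saving in practice lands nearer the figure quoted above.

```python
# Per-call IP-layer bandwidth for a codec: payload bytes per packet plus
# fixed per-packet header overhead, times packets per second.

def ip_rate_kbps(codec_kbps, ptime_ms=20, header_bytes=40):
    payload = codec_kbps * 1000 / 8 * ptime_ms / 1000   # bytes per packet
    packets_per_s = 1000 / ptime_ms
    return (payload + header_bytes) * packets_per_s * 8 / 1000

g711 = ip_rate_kbps(64)   # 160-byte payload + 40-byte headers -> 80 kbit/s
g729 = ip_rate_kbps(8)    # 20-byte payload + 40-byte headers -> 24 kbit/s
saving = 1 - g729 / g711  # far less than the 8:1 ratio of the raw codecs
```

The per-packet header cost is fixed, so the smaller the payload, the larger the share of bandwidth the overhead consumes, which is why G.729's saving is much less than its 8:1 compression ratio suggests.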
Figure 67
Note: Some small line gateways may only support 20 msec encoding. Also, some network elements have greater throughput when handling 20 msec encoded voice, due to the smaller number of packets per second that need to be routed. These elements are noted in the appropriate sections in these guidelines.
13.3.2 Quality-of-Service control mechanisms
Carrier VoIP is a multi-service network supporting packet voice, call signaling, OAM signaling and subscriber data traffic, each of which has different performance requirements. A number of capabilities exist to maintain the QoS requirements.
13.3.2.1 Bandwidth
Bandwidth on the various transport links can be engineered to accommodate the expected simultaneous peak traffic from the multiple services. This engineering is most straightforward on links that carry only a mix of call signaling, voice and OAM traffic, such as the network router to CS-LAN link, because the call signaling and voice traffic is time consistent. It is a different problem on links that carry a mix of voice and subscriber data traffic. The subscriber data traffic is expected to be self-similar in nature, with an extreme peak-to-average ratio. For these links it is advisable to deploy additional QoS mechanisms, as follows:
13.3.2.2 DiffServ
DiffServ is recommended on the Access router and CS-LAN to network router links. DiffServ gives preferential transport to the voice, call signaling, and OAM traffic over subscriber data traffic. The basic DiffServ elements are implemented within the network and include:
Packet classification functions
A small set of per-hop forwarding behaviors
Traffic metering, marking, and policing
Details on the DiffServ strategies and configuration are included in the QoS section of this document.
13.3.2.3 802.1P
To guarantee end-to-end QoS in Ethernet networks, it is important to implement a QoS strategy in switched networks. Since at Layer 2 there is no awareness of which type of traffic is carried at upper layers, it is necessary to use IEEE 802.1P to differentiate priority traffic at the MAC level. IEEE 802.1P is an OSI Layer 2 standard for prioritizing network traffic at the data link/MAC sublayer. Traffic is classified and sent to the destination, but no bandwidth reservations are established.
Since 802.1P is a spin-off of the 802.1Q (VLANs) standard, the 802.1P field is contained in the VLAN tag (which carries VLAN information). The VLAN tag has two parts: the VLAN ID (12-bit) and the Prioritization (3-bit). 802.1P therefore establishes eight levels of priority, which network adapters and switches use to prioritize traffic. The use of a Layer 3 switch (such as the ERS 8600) allows the network administrator to map 802.1P prioritization schemes to DiffServ schemes before forwarding to routers.
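The tag layout can be shown with a few bit operations. Note that the 16-bit Tag Control Information field also carries a 1-bit CFI/DEI flag between the two parts mentioned above:

```python
# IEEE 802.1Q Tag Control Information: 3-bit 802.1p priority (top bits),
# 1-bit CFI/DEI, and a 12-bit VLAN ID (bottom bits).

def build_tci(priority, vlan_id, cfi=0):
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 0xFFF
    return (priority << 13) | (cfi << 12) | vlan_id

def parse_tci(tci):
    return tci >> 13, (tci >> 12) & 1, tci & 0xFFF  # priority, cfi, vlan_id
```

The 3-bit field is what yields exactly eight priority levels, which a Layer 3 switch can then map onto DSCP values.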
13.3.3 Voice quality verification
In Voice over IP networks, Quality of Service (QoS) can be adversely affected by the components in the network. Unlike TDM networks, where the voice quality is consistent for all calls, VoIP networks can experience different voice quality on every call. The common parameters that make up voice quality are:
Latency
Packet loss
Jitter
Most gateways in VoIP solutions report these statistics via end-of-call reporting mechanisms specific to the protocol used for MGC-to-MG communication.
13.3.3.1 Voice quality monitoring in a live network
Nortel recommends that network planning focus on achieving the latency, packet loss and jitter metrics required for good quality speech. Network surveillance for voice quality should therefore be focused on the specific metrics measuring network impairments. This focus can be achieved by monitoring the Quality-of-Service metrics reported by the gateways at the end of every call. The metrics are collected by the GWC (through end-of-call stats) and forwarded to the QoS Collector Application (QCA), located on the CMT server. These metrics identify packet loss, latency and jitter. Additionally, a new trunk group-based Operational Measurement group is introduced that contains pegs of these same parameters when they cross threshold values defined by the telco. Together, these tools can be used for the purposes of:
Network engineering
Trend analysis
Troubleshooting network problems
Service Level Agreement (SLA) validation
13.3.3.2 Trunk group based Quality of Service OM - TRKQOSOM
A capability introduced in SN06 provides an operational measurement group on the CS 2000. The operational measurement values record instances in which QoS threshold values have been exceeded for calls handled by a particular GWC-based TDM trunk group. The QoS statistics that are included in the OM for each GWC trunk group are packet loss, jitter, and delay (latency).
As GWC-based trunk calls are released, the QoS statistics for that call are compared to the QoS OM threshold values. If a statistic for the call exceeds the QoS OM threshold value for that statistic, the OM value for that statistic on the GWC-based trunk group will increment.
The OM group will contain entries for each GWC-based trunk group datafilled on the CS 2000. In order for the trunk group to appear in the OM group, at least one member of the trunk group must be datafilled to reside on a GWC node via table TRKMEM. Each entry in the OM group will contain a CLLI as well as threshold crossing counters for packet loss, jitter, and delay (latency). The active registers will contain data for the current reporting interval. The holding registers will contain data for the previous reporting period. The interval for OM peg count transfer from the active to holding registers is controlled by the OM system. Refer to the OM system documentation for details on the available intervals for active to holding transfer. The following table describes the range of values possible for each QoS OM threshold office parameter datafilled in table OFCVAR.
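The threshold comparison described above can be sketched as follows. The field and register names are illustrative, not the actual OM register names.

```python
# At call release, each QoS statistic for the call is compared with its
# office-parameter threshold; if the statistic exceeds the threshold, the
# matching counter for the GWC-based trunk group is incremented.

def peg_trkqosom(registers, trunk_clli, stats, thresholds):
    """stats/thresholds: dicts keyed by 'pktloss', 'jitter', 'delay'."""
    regs = registers.setdefault(
        trunk_clli, {"pktloss": 0, "jitter": 0, "delay": 0})
    for metric, value in stats.items():
        if value > thresholds[metric]:
            regs[metric] += 1
    return regs
```

Each released call can therefore increment zero or more of the three counters for its trunk group, which is what the active registers accumulate over a reporting interval.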
Each QoS OM threshold office parameter in table OFCVAR carries an activation field (Y/N) together with a range and units for each threshold value.
The QOS OM threshold values defined in table OFCVAR will be applied to all GWCs datafilled for the call server. The values may be changed at any time without the need for core/GWC restart. The threshold values may be set in table OFCVAR by positioning on and changing the appropriate threshold as follows:
>table ofcvar
JOURNAL FILE UNAVAILABLE - DMOS NOT ALLOWED
TABLE: OFCVAR
>pos packet_qos_om_threshold
PACKET_QOS_OM_THRESHOLDS Y 10 40 0 001
In the example above, QOS OM threshold reporting was changed such that reporting is active and the thresholds were set to jitter = 10 ms, delay = 40 ms, and packet loss = 0.001%.
>OMSHOW TRKQOSOM HOLDING
TRKQOSOM
CLASS:   HOLDING
START: 2003/04/18 09:10:00 FRI; STOP: 2003/04/18 09:15:00 FRI;
SLOWSAMPLES:   3; FASTSAMPLES:  30;
KEY (COMMON_LANGUAGE_NAME)
                  PKTLOSS  JITTER  DELAY
227 INET4SS7OG1         0       2      2
228 INET4SS7OG2        13       0      0
To enable the collection of these pegs, the GWC must be provisioned to send this data to the CS 2000. The following figure illustrates how this is achieved. Note: The QoS Collector Application Server does not need to be provisioned in order to receive these pegs; only QoS Collection needs to be enabled.
Figure 68
13.3.3.3 QoS Collector Application

QoS reports are delivered from the GWC to the QoS Collector Application (QCA). QoS reports are correlated to the applicable billing records with a correlation ID (CID). Each CID references a set of QoS report pairs. Applicable AMA billing records contain zero or more CIDs, thus referencing zero or more QoS report pairs. Each GWC can have up to two links to different QCAs, each receiving identical information. The QCA stores the QoS reports in XML format to be off-loaded by the OSS for processing.

The QoS Collector Application must be datafilled in CMT, and individual GWCs must enable QoS record collection. Only two QoS Collectors can be datafilled. In the CS 2000 Management Tools GUI (formerly PTM), select Gateway Controller under Device Types. Then, under Network Devices, select the QoS Collectors tab and click Add. You will be prompted for the following:

QoS Collector Name: A name for the collector
IP Address: The address of your SESM server.
Port: The port number of your QoS Collector.
Figure 69
Next, QoS record collection must be enabled on each GWC. Select a GWC, and click on its Provisioning tab. Then, select the QoS Collectors tab. Click the Associate button, and select the QoS Collector. Then, check Enable QoS Collection.
Figure 70
The QCA records are stored in XML format. The following XML tags correspond to the QoS parameters of interest:

Latency - <averagePacketLatency>
Packet Loss - <inboundLostPacketCount> and <inboundPacketCount>
Jitter - <packetDelayVariation>
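An OSS that off-loads these records could extract the metrics as sketched below. Only the four tag names come from this document; the surrounding <qosReport> element and the sample values are hypothetical:

```python
import xml.etree.ElementTree as ET

# Minimal, hypothetical QCA report fragment; real records contain
# additional fields (correlation ID, host name, and so on).
sample = """
<qosReport>
  <averagePacketLatency>15</averagePacketLatency>
  <inboundLostPacketCount>3</inboundLostPacketCount>
  <inboundPacketCount>1500</inboundPacketCount>
  <packetDelayVariation>2</packetDelayVariation>
</qosReport>
"""

root = ET.fromstring(sample)
latency_ms = int(root.findtext("averagePacketLatency"))
lost = int(root.findtext("inboundLostPacketCount"))
total = int(root.findtext("inboundPacketCount"))
jitter_ms = int(root.findtext("packetDelayVariation"))
loss_pct = 100.0 * lost / total  # packet loss as a percentage
```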
To enable the Core to populate the billing records with Correlation IDs, the following tuple must be set in table AMAOPTS:

TABLE: AMAOPTS
RECORD_QOS ON
Each billing record corresponds to two QoS records, one for each half-call. All three records carry the same correlation ID. In the QCA record the correlation ID is in hexadecimal, while in the billing record it is in Binary Coded Decimal (BCD). The following example shows how to convert the BCD number to its hexadecimal representation, using the data from the sample records.
096 016 024 124 016 = 60 10 18 7C 10
164 068 133 061 000 = A4 44 85 3D 00
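The conversion can be done mechanically by treating each three-digit group as a decimal byte value and printing it in hexadecimal; this reproduces the sample data above. The function name is illustrative, not a Nortel tool:

```python
def correlation_id_to_hex(digits_field: str) -> str:
    """Convert an AMA DIGITS_DIALED field (groups of three decimal
    digits) to the hexadecimal form used in the QCA record."""
    return " ".join(f"{int(group):02X}" for group in digits_field.split())

print(correlation_id_to_hex("096 016 024 124 016"))  # 60 10 18 7C 10
print(correlation_id_to_hex("164 068 133 061 000"))  # A4 44 85 3D 00
```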
Record data:
------------
RDW                        00630000
HEX_ID                     aa
STRUCTURE_CODE             40510C
CALL_CODE                  006C
SENSOR_TYPE                036C
SENSOR_ID                  0000000C
RECORD_OFFICE_TYPE         036C
RECORD_OFFICE_ID           0000000C
DATE                       30410C
TIMING_INDICATOR           00000C
STUDY_INDICATOR            0200000C
ANSWER                     0C
SERVICE_OBSERVED           0C
OPERATOR_ACTION            0C
SERVICE_FEATURE            000C
SIG_DIGITS_NEXT_FIELD      010C
ORIGINATING_OPEN_DIGITS_1  08602120279C
ORIGINATING_OPEN_DIGITS_2  FFFFFFFFFF
ORIGINATING_CHARGE_INFO    FFFF
DOMESTIC_INTL_INDICATOR    9C
SIG_DIGITS_NEXT_FIELD      007C
TERMINATING_OPEN_DIGITS_1  00003120779C
TERMINATING_OPEN_DIGITS_2  FFFFFFFFFF
CONNECT_TIME               1152445C
ELAPSED_TIME 000000133C
Subrecord data:
---------------
MODULE_CODE_ID        612C
GENERIC_CONTEXT_ID    8006600C
DIGITS_DIALED_1_EUR   096016024124016C
DIGITS_DIALED_2_EUR   164068133061000C
Record:
Start Time = 1970-01-01T00:00:00.000Z
End Time = 1970-01-01T00:00:00.000Z
TimeZone Offset = 0
Call Comp Code = CC
Orig Dest Id =
Host name = GWC1@RTP4
Subscriber id = aaln/1@cg1131.rtp4.net
Correlation Id = 4d006531001cbeb9a601
IP Address = 10.129.129.30
Port Number = 2427
Sequence Number = 1576
Latency = 15
Octets Received = 141036
Octets Sent = 141220
Packets Received = 1533
Packets Sent = 1535
Packets Lost = 0
Jitter = 0
Record:
Start Time = 1970-01-01T00:00:00.000Z
End Time = 1970-01-01T00:00:00.000Z
TimeZone Offset = 0
Call Comp Code = CC
Orig Dest Id =
Host name = GWC24@RTP4
Subscriber id = STS/11/0/3/VT15/1/1/3/17@PVG3809
Correlation Id = 4c00732d4091cdb9a601
IP Address = 172.16.203.4
Port Number = 2944
Sequence Number = 184139
Latency = 0
Octets Received = 122720
Octets Sent = 122000
Packets Received = 1534
Packets Sent = 1525
Packets Lost = 0
Jitter = 0
In addition, network management systems can be used to monitor frame/packet discard on any of the router ports.

13.3.3.4 Nortel Product Test voice quality verification

When used in the context of packet networks, Quality of Service (QoS) refers not to voice quality, but to mechanisms intended for bandwidth management and optimization of service quality. Examples of QoS mechanisms include call admission control, queue management and packet classification, traffic shaping techniques, and QoS protocols such as DiffServ, RSVP, and MPLS. The acceptability of voice quality varies depending on the user's expectation of the particular service; for instance, users expect better quality from local wireline than from cellular. Several standards exist for measuring QoS (quality of service) and VQ (voice quality), and many white papers have been written on why one standard is better than another. In brief:

13.3.3.4.1 ITU-T G.107

This recommendation is based on the use of the E-Model, which estimates the perceived voice quality. The E-Model calculates a metric called Transmission Rating (R) with a range of 0 to 100. R takes into account all factors known to contribute to telephony conversation quality, including distortion, delay, echo, and listening levels, as well as any interactions between these factors. R is more difficult to measure than PESQ-LQ because the E-Model requires a number of input parameters to compute the final value.
A major advantage of using the E-Model R as the quality metric is the ability to do predictive modeling in advance of building the system, and then to measure R for the system and compare it to the predicted value. The E-Model and R-Factor are the basis of the Nortel voice quality test program.

13.3.3.4.2 ITU-T P.800

This recommendation describes the accepted methodology for conducting subjective Mean Opinion Score (MOS) testing. Listeners rate speech samples on a five-category scale, ranging from Bad (1) to Excellent (5). Using the associated numerical values, the ratings given to each test case by each listener are averaged to obtain a mean, i.e., the MOS. The absolute MOS for any test case depends on the full set of conditions tested in the session as well as the parameter values used in defining a particular condition. MOS tests are very good at comparing the quality of conditions tested in the same session, but are not as useful for comparing quality across different studies where different sets of conditions have been included (such as might be done by different vendors). Subjective methods are time-consuming and labor-intensive, and so are not generally used for design and verification testing.

13.3.3.4.3 ITU-T P.861

This recommendation describes an objective method for estimating the subjective quality of telephone-band (300-3400 Hz) speech codecs. The measurement algorithm ignores echo and delay. It cannot factor out temporal shifts where silence is added or removed. The range is from 0 (best) to 5 (worst). P.861 has been replaced by P.862.

13.3.3.4.4 ITU-T P.862

This recommendation describes the current standard objective estimator of subjective speech quality. Similar to P.861, the method addresses only distortion, ignoring signal level, echo, and delay. Its main advantage over P.861 is in removing temporal shifts in silent periods before calculating the difference between the input and output signals to estimate the distortion.
The range of the raw PESQ score is -0.5 to 4.5. This raw score is sometimes called PESQ MOS, although the transformed value, called PESQ-LQ, is better correlated with subjective MOS. TIA TSB116 provides the following comparison:
Figure 71 - Relationship between Transmission Rating (R) and estimated MOS (TIA TSB116):

R 90-100: Users satisfied (94 is the practical maximum)
R 80-90:  Some users dissatisfied (MOS 4.0 at R = 80)
R 70-80:  Many users dissatisfied (MOS 3.6 at R = 70)
R 60-70:  Nearly all users dissatisfied (MOS 3.1 at R = 60)
R 50-60:  Not recommended (MOS 2.6 at R = 50; MOS 1.0 at R = 0)
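The mapping between R and estimated MOS follows the well-known ITU-T G.107 conversion, which can be sketched as:

```python
def r_to_mos(r: float) -> float:
    """Estimated MOS from Transmission Rating R (ITU-T G.107 mapping)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6
```

With this mapping, R = 80, 70, 60 and 50 give approximately the boundary MOS values 4.0, 3.6, 3.1 and 2.6 quoted from TSB116 above.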
13.3.3.5 Impact of latency and jitter on modem speed

Modems are much more sensitive to delay and jitter than voice; they will fall back to slower rates or even disconnect when encountering network conditions that may not adversely affect voice calls. The data rate of a CCD connection via the IW-SPM is 64 kbps. When no IW bridge is involved, the throughput will reach 64 kbps, the actual bandwidth. The reason for the reduced throughput with IW bridges is the end-to-end delay in a handshake-oriented protocol like Z-modem. The end-to-end delay consists of the packetization delay (from TDM to packet network) and a 50 ms fixed jitter buffer delay (from packet network to TDM). The reason to use a 50 ms fixed jitter buffer for clear channel data (CCD) (and voice band data (VBD)) calls is to reduce the possibility of buffer under-runs when there is real network jitter. The IW will play out silence towards the TDM side in the under-run condition; the data will then no longer be bit-exact and the bit error rate will increase. It is therefore a trade-off between end-to-end delay and data reliability. The following measurements were taken:
JITTARG=50: throughput 44300 bps
JITTARG=40: throughput 51500 bps
JITTARG=30: throughput 54900 bps
JITTARG=20: throughput 60100 bps
JITTARG=10: throughput 62700 bps
JITTARG=0:  throughput 66400 bps
The results showed that throughput improved when the target jitter buffer size was reduced. In fact, with 10 ms packetization and a 0 ms jitter buffer, the throughput (66400 bps) was about the same as that obtained (68100 bps) going through the ATM IW bridges with 5.5 ms packetization and 3 ms jitter. The call setup for the measurement used two IW bridges connected internally in the same IW-SPM. That was the ideal situation, since jitter was almost non-existent.
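Why added delay cuts handshake-protocol throughput can be illustrated with a simple stop-and-wait model. This is an idealization, not the actual Z-modem or IW-SPM behavior, and the 1 KB block size is arbitrary:

```python
def stop_and_wait_throughput(link_bps: float, block_bytes: int, rtt_s: float) -> float:
    """Throughput of an idealized stop-and-wait transfer: each block
    must be acknowledged before the next is sent, so every added
    millisecond of round-trip delay (packetization plus jitter
    buffers) stretches the per-block cycle time."""
    block_bits = block_bytes * 8
    cycle_s = block_bits / link_bps + rtt_s
    return block_bits / cycle_s

# Illustrative only: a 1 KB block on a 64 kbps channel.
t_0ms  = stop_and_wait_throughput(64000, 1024, 0.000)  # no added delay
t_50ms = stop_and_wait_throughput(64000, 1024, 0.100)  # ~50 ms added each way
```

The model reproduces the trend of the measurements: with no added delay the channel rate is achieved, and throughput falls as the jitter buffer delay grows.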
Figure 72 - Talker echo: a telco 4-wire trunk connected through a gateway to the customer premise; the talker's speech is reflected back as echo.
Figure 73 - Signal amplitude (dB) versus time for the reference signal and the echo signal.
Several means exist to eliminate or reduce echo: attenuation, echo suppressors, and echo cancellers.

Attenuation: A common method used by LEC and IP line gateway devices. It is the basis of network loss plan planning.

Echo Suppressors: Defined in ITU-T G.164. Not often used outside of hands-free sets, and not efficient. These attempt to create a one-way speech path by adding excessive attenuation in the send path when speech is detected in the receive path. Many problems can occur with echo suppressors, such as muffling of the voice.

Echo Cancellers: Defined in ITU-T G.165 and G.168. The intent here is to have the canceller read the voice in one direction and subtract an estimated echo from the return path, thus removing the echo.

The best performance is often achieved when a properly engineered loss plan is used in conjunction with echo cancellers.
present in all calls becomes more noticeable. Each IP voice gateway typically adds an additional 10 ms or 20 ms of network delay, depending on its jitter buffer size, on top of the codec and gateway processing delays.
Equipment Type                                    SLR (incl. loop loss)   RLR TIA-470-B   RLR TIA-470.110-C
Short Loop Analog Set (600 ohm MG)                4 dB                    -6 dB           -7 dB
Long Loop Analog Set (typ. 900 ohm MG @ 2.7 km)   8 dB                    -3 dB           -3 dB
Digital Set                                       8 dB                     2 dB            2 dB
ITU-T P.79 specifies the optimum at Overall Loudness Rating (OLR) = 10 dB.
OLR = SLR + Loss A/D + Loss D/D + Loss D/A + RLR
T1.508 and TIA-912-A recommend the following attenuation on the media gateway:
A/D LOSS = 0 dB
D/A LOSS = 6 dB
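The OLR formula above can be applied directly. The sample values below (a digital set with the T1.508/TIA-912-A gateway losses and no digital-to-digital loss) are illustrative only:

```python
def overall_loudness_rating(slr, loss_ad, loss_dd, loss_da, rlr):
    """OLR = SLR + Loss A/D + Loss D/D + Loss D/A + RLR, as given in
    the text (after ITU-T P.79); all values in dB."""
    return slr + loss_ad + loss_dd + loss_da + rlr

# Hypothetical connection: digital set (SLR 8 dB, RLR 2 dB) with the
# recommended media-gateway losses (A/D 0 dB, D/A 6 dB).
olr = overall_loudness_rating(8, 0, 0, 6, 2)  # 16 dB
```

With these example values the connection sits 6 dB above the P.79 optimum of OLR = 10 dB.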
Some small line NCS gateways still implement the older A/D & D/A values listed below.
Table 66 - A/D and D/A attenuation for some small NCS gateways

Standard
TIA 912 Pan European
TIA 912 North American
PacketCable
Telcordia GR 909
a. An ECR is open with CableLabs to revise the loss plan recommendation for small IP line gateways. The PacketCable specification will reflect the new values. Refer to ECR emta-r-02221. The MTA loss plan values listed below are subject to change.
A PADGRP of PKNIL exists in table PADDATA. PKNIL should be used for media gateways that do not support CS 2000 downloaded loss plan values, i.e., MGCP- and NCS-controlled gateways.
Gateway            Fixed / Configurable                               ERL dB   Tail ms
I2002/I2004        -                                                  -        -
CICM I2002/I2004   -                                                  -        -
MCS 5200           -                                                  -        -
Askey VG201        Range -1 / -10                                     14       16
Askey VG401        Range -1 / -10                                     14       16
Askey VG601        Range -1 / -10                                     14       16
Ambit LG1          Range +4 / -10                                     14       16
Ambit LG32         Range +4 / -10 gain                                14       16
Ambit MG1000       Range +4 / -10 gain                                14       16
Mediatrix 1102     Based on country selection; modifiable with MIB    17       16
Mediatrix 1104     Based on country selection; modifiable with MIB    17       16
Mediatrix 1124     Based on country selection; modifiable with MIB    17       16
Gateway            Fixed / Configurable   Loss / ERL                            Tail ms
MG 9000            Configurable           Uses CS 2000 PADDATA; Range 14L, 7G   128
I/W SPM            Configurable           Uses CS 2000 MNIPPARM; Range -6,+6    32
MG 15000 VSP2      Configurable           Range 0-6                             55
MG 15000 VSP3      Configurable           Range 0-6                             20
Motorola CG3500    Fixed                  4                                     20
Motorola CG4500    Fixed                  4                                     20
Motorola CG4501    Fixed                  4                                     128
Motorola SBV4500   Fixed                  4                                     8
Arris TTM102       Configurable           11                                    8
Arris TTM202       Configurable           4                                     8
Gateway          Jitter Buffer Adaptive / Static   Jitter Buffer Defaults   Jitter Buffer Range
Askey VG201      Adaptive                          0/120                    0/600
Askey VG401      Adaptive                          0/120                    0/600
Askey VG601      Adaptive                          0/120                    0/600
Ambit LG1        Adaptive                          60/120                   60/120
Ambit LG32       Adaptive                          60/120                   60/120
Ambit MG1000     Adaptive                          60/120                   60/120
Mediatrix 1102   Adaptive                          30/150                   0/300
Mediatrix 1104   Adaptive                          30/150                   0/300
Mediatrix 1124   Adaptive                          30/150                   0/300
MG 9000          Adaptive                          0/100                    0/100
Gateway            Jitter Buffer Adaptive / Static   Settable via
I/W SPM            Adaptive                          Table MNIPPARM
MG 15000 VSP2      Fixed                             CLI
MG 15000 VSP3      Adaptive                          CLI
Motorola CG3500    Fixed                             Config File MIB
Motorola CG4500    Fixed                             Config File MIB
Motorola CG4501    Fixed                             Config File MIB
Motorola SBV4500   -                                 -
Arris TTM102       -                                 -
Arris TTM202       -                                 -
Arris TTM402       -                                 -
If the ECSTAT field of table TRKSGRP is INTERNAL, then ECAN will be turned on if the NatureofConnection field in the incoming IAM indicates that no ECAN has been enabled at the adjacent office. The ACM forwarded back to this office will indicate that the CS 2000 has enabled ECAN and the connection control message to the MG 15000 will enable ECAN for the call. If the ECSTAT field of table TRKSGRP is EXTERNAL, the ACM will also indicate the NatureofConnection field as ECAN enabled, but the connection control message to the MG 15000 will turn off ECAN. Here it is assumed that there is an external device to handle echo.
In the backwards direction
If the ECSTAT field of table TRKSGRP is INTERNAL, then the outgoing IAM will indicate the NatureofConnection field has echo enabled and echo will be on at the MG 15000 (since it is provisioned at the brag/bragS component). However, when the ACM coming back from the adjacent office in the NatureofConnection field indicates that ECAN has been enabled, the connection control message sent to the MG 15000 will disable ECAN at the MG 15000 for this call. If the ECSTAT field of table TRKSGRP is EXTERNAL, again the IAM will indicate the NatureofConnection field has echo enabled but the connection control message to the MG 15000 will turn off ECAN. Again, here it is assumed that there is an external device to handle echo. For trunk groups that have a mix of legacy and packet trunks, it is suggested that the customer create two subgroup indices in table TRKSGRP, one for packet and one for legacy. Then apply these appropriately to the trunk group field SGRP in table TRKGRP. The following illustrates how to utilize the MG 15000s ability to cancel echo via provisioning in table TRKSGRP.
Example: TRKSGRP
LDBLSS7IC 0 DS1SIG C7UP 2W N N INTERNAL NONE Q764 TLRH 0 ISUP $ NIL CIC
The CS 2000 will then inform the MG 15000 Trunk Gateway, via the connection control message, whether or not to apply ECAN, based on whether it was applied on the previous segment. It uses the NATURE OF CONNECTION field in the IAM and ACM SS7 messages to carry this information. When the SS7 message
reports ISUP_NO_HALF_ECHO_SUP (ECAN not previously applied), it informs the GW via the E: parameter to apply ECAN on the MG 15000.
GWC: 01:21:17:11.46  CC: 18:13:31:27.81
MDCX 16194 ASPEN 2.1
C: callid
Z: PVG4206.DS3_20.7.1
I: 3
M: sendrecv
L: e:on
Message 10 [sent]
Figure - IP-to-IP call between two CG4501 MTAs through the CS2K: each MTA applies 4 dB A/D and 4 dB D/A attenuation and its own ECAN, with echo reflected back towards each talker over the IP distance.
The CG4501 provides 4 dB of attenuation in the send direction and 4 dB in the receive direction. The echo canceller provides an additional 20 dB. The echo heard by the talker equals SLR + MTA A/D + ECAN ERL + MTA D/A + RLR. Thus, for a 600 ohm short loop set, the Talker Echo Return Loss (TERL) would provide 30 dB of attenuation. Assuming the talker's loudness level is -11 dBm, the echo amplitude reflected towards the talker would be at -41 dBm (Talker Echo Loudness Rating - TELR). With ECAN disabled on the MTA, the TERL would be 24 dB, due to the Hybrid ERL specification of 14 dB. Clearly, the reflected signal is attenuated more when ECAN is used together with a loss plan.
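The arithmetic of this example can be sketched as follows, using the TERL values stated in the text:

```python
def talker_echo_level(talker_dbm: float, terl_db: float) -> float:
    """Echo level heard by the talker: the talker's level reduced by
    the Talker Echo Return Loss (TERL), as in the example above."""
    return talker_dbm - terl_db

with_ecan    = talker_echo_level(-11, 30)  # TERL 30 dB with MTA ECAN -> -41 dBm
without_ecan = talker_echo_level(-11, 24)  # TERL 24 dB, ECAN disabled -> -35 dBm
```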
Figure - IP-to-PSTN call path (CS2K, DMS500, line gateway/MG 9000, TDM network) showing the PVG ECAN, a T-L 3 dB loss, the talker at -11 dB, and the echo paths at each end.
The MG 15000 ECAN, along with the loss plan, will protect the IP subscriber from echo. The MTA ECAN, along with the loss plan, will protect the PSTN subscriber from echo. Trunk groups between the IP network and the PSTN network should be provisioned with the ECSTAT parameter set to EXTERNAL in table TRKSGRP in the Call Server. The TERL for an IP subscriber calling a PSTN user within the same LATA would be 68 dB (assuming a 55 dB attenuation in the MG 15000 ECAN). The TERL for a PSTN subscriber calling an IP subscriber within the same LATA would be 31 dB (assuming an MTA ECAN attenuation of 20 dB). If these calls are routed over an LD network, the IP-to-PSTN subscriber would have a TERL of 74 dB and the PSTN-to-IP subscriber a TERL of 37 dB, since the local office applies an additional 6 dB of loss on a trunk-to-trunk call.

In order for the MG 7000/MG 15000 VSP2 ECAN to remove 100% of the echo, the reflected signal from the hybrid 4-2 wire conversion in the LCM must be below -26 dBm and the latency between the LCM and the MG 15000 must be less than 32 msec. If the signal is above -26 dBm, the ECAN will still work; however, a small burst of echo may be heard at the beginning while the ECAN converges. In this example, the reflected signal is at -26 dBm for intra-LATA calls and -32 dBm for calls going through an LD tandem network, assuming a talker loudness of -11 dBm. With the use of VSP3, this is not an issue and the MG 15000 provides a tail of 64 msec.
Figure - MTA-to-MTA call across two CS2Ks and MG 15000s (PVG15K) over IP/DS3 through a CMTS, showing the MTA ECAN and PVG ECAN locations and the echo paths at each end.
In this case, the MG 15000 should not apply echo cancellation. Since the MTA (or any IP gateway) supports ECAN, both MTAs (IP gateways) should apply echo cancellation and the network is considered echo-free. However, since the call server is unaware whether the local line gateway supports ECAN, the SS7 message will inform the MG 15000 to apply ECAN. The only way to force ECAN on the MG 15000 to be disabled is to set the ECSTAT parameter to UNEQ for the trunk group between the two MG 15000s, which will cause a CRCX message with e:off to be sent, turning off the ECAN in the MG 15000. By default, it is recommended to leave ECAN on at the MG 15000, since double ECAN does not cause any voice degradation issues.
with NIL pad control in table PADDATA. For an End Office Telco, it is important that a 6 dB loss is added on any incoming call from the packet network.

For the Universal Access (AAL1/IP) solutions, call processing makes use of the PADDATA values and sends these to the gateway. The pad group (PADGRP) defined as PKLNL (Packet Line Long) is recommended for the MG 9000 subtending lines. This includes both H.248 lines and AAL1 lines served by the ABI DS-512 interface. Legacy line to packet line or packet trunk calls should be configured with a 6 dB loss pad in table PADDATA. Refer to the table below for additional PADDATA settings.

To create new pad groups, add the groups as keys in fields PADGRP1 and PADGRP2 of table PADDATA. To use pad groups as line or trunk data, ensure that the pad groups are in table PADDATA. Tables LNINV, TRKGRP, CONF3PR and CONF6PR can then use the above pad group. An example follows for some of the recommended PADGRP values. The following table is not a complete set for interworking and is included for illustration purposes only. An L represents a loss and a G represents a gain.
PADGRP1   PADGRP2   PAD 1to2   PAD 2to1
PKLNL     STDLN     6L         6L
PKLNL     PKLNL     0 to 6L    0 to 6L
PKLNL     PPHONE    6L         6L
PKLNL     ELO       0          6L
PKLNL     TLA       0          6L
PKLNL     PRA       0          6L
PKLNL     CONF      0          6L
PKLNL     TPOS      0          6L
PKLNL     ETLS      0          6L
PKLNL     ETLL      0          6L
PKLNL     TLD       0          6L
PKLNL     IAO       0          6L
PKLNL     LCO       0          6L
PKLNL     CPOS      0          6L
Pad Group   Description
STDLN       standard line
UNBAL       unbalanced line
PKLNL       packet line long
PKNIL       packet line, short loop, with NIL pad control
PPHONE      P-phone
ELO         POTS interoffice trunk
TLA         POTS toll connecting trunk (TCT) to toll trunk
CONF        conference circuit
TPOS        Traffic Operator Position System (TOPS) position
ETLS        POTS end office trunk (short distance)
ETLL        POTS end office trunk (long distance)
TLD         POTS TCT to toll trunk (digital)
IAO         plain ordinary telephone (POTS) intraoffice trunk
LCO         POTS collocated step-by-step (SXS) trunk
CPOS        centralized automatic message accounting (CAMA) position
DAVLN       data above voice line
PRAC        primary node access (PRA)
NPDGP       no pad group
LRLM        remote line module (RLM)
The following table is taken from the TSB122-A specification and is included for illustration only. It shows the expected loss plan values between various call types.
Detect dial tone
Send digits
Calling fax sends a 1100 Hz tone
Receiving fax sends a 2100 Hz tone (~3 secs)
Receiving fax sends pre-image handshake sequence
Data rate, resolution, compression, image size, ...
Line testing event message
Image transmission
Post-image handshake
Message confirmation
To avoid data corruption, you can configure MG 15000 to disable echo cancellation when it detects 2100 Hz tones. If you set the echoCancellation attribute to g164Mode, the system disables echo cancellation when it detects 2100 Hz tones. If you set the echoCancellation attribute to g165Mode, the system disables echo cancellation when it detects 2100 Hz tones with phase reversals. For g164Mode, echo cancellation is re-enabled after three (3) seconds of silence is detected in both directions. For g165Mode, echo cancellation is re-enabled when 150 to 350 milliseconds of silence is detected in both directions. If you set the echoCancellation attribute to alwaysOn, the system will not disable echo cancellation for any calls, regardless of the presence of tones. This setting can cause call connection problems for some modems.
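The mode-dependent behavior can be summarized as a small decision function. This is a sketch of the behavior described above, not the gateway's actual code:

```python
def ecan_disabled(mode: str, tone_2100hz: bool, phase_reversal: bool) -> bool:
    """Whether the gateway disables echo cancellation for a call,
    per the echoCancellation attribute behavior described above."""
    if mode == "g164Mode":
        # Disable on any 2100 Hz tone.
        return tone_2100hz
    if mode == "g165Mode":
        # Disable only on 2100 Hz tones with phase reversals.
        return tone_2100hz and phase_reversal
    if mode == "alwaysOn":
        # Never disable, regardless of tones.
        return False
    raise ValueError(f"unknown mode: {mode}")
```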
Element A                                       Element B                                      Medium                                        Distance / Bandwidth
CS-LAN ERS 8600                                 XA-Core, UAS, GWC, SDM, NGSS                   Gigabit Ethernet, single mode 10 microns      5 km
                                                                                               Gigabit Ethernet, multi-mode 62.5 microns     275 meters
                                                                                               Gigabit Ethernet, multi-mode 50 microns       550 meters
CS-LAN ERS 8600                                 Compact 3PC & Storm                            100 Base-T Fast Ethernet CAT5                 100 meters
Compact 3PC                                     Compact 3PC                                    100 Base-T Fast Ethernet CAT5                 100 meters
                                                                                               Multimode fiber 62.5 microns                  275 meters
CS-LAN ERS 8600                                 OAM&P Platforms                                100 Base-T, Gigabit Ethernet, or remoted GE   Depends on medium chosen
Co-located Trunk Gateway Site with M40 router   Core Router and/or Remote Trunk Gateway Site   Gigabit Ethernet, single mode 10 microns,     1 Gbps
                                                                                               multi-mode 62.5 or 50 microns                 1 Gbps
Service   Recommendation
DISA      Use translations-based TDM looparound to forward the call to a looparound trunk and provide the service there.
Several further services use the same workaround: use translations-based TDM looparound to forward the call to a looparound trunk and provide the service there.
Service   Recommendation
ACRJ      Use announcement treatment or release back with cause. If tones are required, then TDM loopback must be used.
CFRA      Use translations-based TDM looparound to forward the call to a looparound trunk and provide the service there.
Service                                                         Recommendation
Call Hold (CHD)                                                 Configure music on hold using announcement, instead of audible ringback. Alternatively, configure for silence.
Call Park (PPK)                                                 Configure music on hold using announcement, instead of audible ringback. Alternatively, configure for silence.
Call Transfer (CXR)                                             Audible ringback is not heard in case of blind call transfer.
CEPT Int'l Call Waiting (ICWT)                                  Configure music on hold using announcement, instead of audible ringback. Hold tone is not supported on SIP-T trunks.
CEPT Int'l Three Way Call incl. Consultation Hold (I3WC)        Configure music on hold using announcement, instead of audible ringback. Hold tone is not supported on SIP-T trunks.
Denied Termination (DTR)                                        Use announcement treatment or release back with cause. If tones are required, then TDM loopback must be used.
Directed Call Park (DCPK)                                       Configure music on hold using announcement, instead of audible ringback.
Do Not Disturb (DND)                                            Use announcement treatment or release back with cause. If tones are required, then TDM loopback must be used.
Plug Up (PLP)                                                   Use announcement treatment or release back with cause. If tones are required, then TDM loopback must be used.
Requested Suspend Service (RSUS)                                Use announcement treatment or release back with cause. If tones are required, then TDM loopback must be used.
Selective Call Acceptance (SCA)                                 Use announcement treatment or release back with cause. If tones are required, then TDM loopback must be used.
Selective Call Rejection (SCRJ)                                 Use announcement treatment or release back with cause. If tones are required, then TDM loopback must be used.
Service       Recommendation
Agent Queue   Use announcement on media server as audio source, not audible ringback. If audible ringback is required, then engineer TDM loopback.
Service                     Recommendation
ACRJ                        Use announcement treatment or release back with cause. If tones are required, then TDM loopback must be used.
Call Park (PPK)             Configure music on hold using announcement, instead of audible ringback. Alternatively, configure for silence.
Call Transfer (CXR)         Audible ringback will not be heard during blind call transfer.
Denied Termination (DTR)    Use announcement treatment or release back with cause. If tones are required, then TDM loopback must be used.
Directed Call Park (DCPK)   Configure music on hold using announcement, instead of audible ringback.
Do Not Disturb (DND)        Use announcement treatment or release back with cause. If tones are required, then TDM loopback must be used.
Plug Up (PLP)               Use announcement treatment or release back with cause. If tones are required, then TDM loopback must be used.
Service                           Recommendation
Selective Call Acceptance (SCA)   Use announcement treatment or release back with cause. If tones are required, then TDM loopback must be used.
Selective Call Rejection (SCRJ)   Use announcement treatment or release back with cause. If tones are required, then TDM loopback must be used.
Suspend (SUS)                     Use announcement treatment or release back with cause. If tones are required, then TDM loopback must be used.
Look for the GWC nodes. Make sure the HW column is less than the Max column under the Overflow Q Occs banner as seen in Table 78 on page 285.
                Capacity      Space Min   Max              Overflow Que Occs
Node  Buff Size (bytes/Sec)   (bytes)     (bytes)   Occ   Max (# of msg)   HW   Cong   Type       Name
26    64        128000        4096        0         0     60               50   NO     GWC_NODE   GWC 0
27    64        128000        4096        0         0     60               50   NO     GWC_NODE   GWC 1
28    64        128000        4096        0         0     60               50   NO     GWC_NODE   GWC 2
29    64        128000        4096        0         0     60               50   NO     GWC_NODE   GWC 3
If the DP parameters are not configured correctly, use the following commands to change and verify the configuration.
DPMON: > set_default nodetype ioui gwc_node
DPMON: > show all
Look for the GWC nodes again and make sure the HW column is less than the Max column under the Overflow Q Occs banner as in Table 78 on page 285.
18.0 References
[1] [RFC 1918] Address Allocation for Private Internets, Y. Rekhter, B. Moskowitz, D. Karrenberg, G. J. de Groot, E. Lear, February 1996.
[2] [RFC 2474] Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, K. Nichols, S. Blake, F. Baker, D. Black, December 1998.
[3] [RFC 2597] Assured Forwarding PHB Group, J. Heinanen, F. Baker, W. Weiss, J. Wroclawski, June 1999.
[4] [RFC 2598] An Expedited Forwarding PHB, V. Jacobson, K. Nichols, K. Poduri, June 1999.
[5] [Networking] Networking Concepts for the Passport 8000 Series Switch (10/2001, Rev A).
[6] [Management] Managing the Passport 8000 Series Switch Using the Command Line Interface, Release 3.2 (October 2001, Rev 00).
[7] MD-2000.0173 PVG PRI Backhaul.
19.0 Glossary
AAL1      ATM Adaptation Layer 1
AAL5      ATM Adaptation Layer 5
AHT       Average Hold Time
APG       Anchor Packet Gateway
APS       Audio Provisioning Server
APS       Automatic Protection Switching
ATM       Asynchronous Transfer Mode
BHCA      Busy Hour Call Attempts
BHHCA     Busy Hour Half Call Attempts
CallP     Call Processing
CCS       100 Call Seconds
CHT       Call Hold Time
CLI       Command Line Interface
CMTS      Cable Modem Termination System
CoS       Class of Service
CP        Control Processor
CS 2000   Communications Server 2000
CS-LAN    Communications Server Local Area Network
CPS2000   Cornerstone Provisioning System 2000
DCE       Distributed Computing Environment
DHCP      Dynamic Host Configuration Protocol
DNS       Domain Name Server
DOS       Denial of Service
DP        Destination Protection
DSCP      DiffServ Code Point
EIOP      Ethernet Input/Output Processor
EM        Element Manager
GEM       Gigabit Ethernet Module
GWC       Gateway Controller
HCMIC     High-Capacity Core Messaging Integrated Circuit
HFC       Hybrid Fiber Coax
HIOP      High-Performance Input/Output Processor
IW-SPM    Interworking SPM
LDAP      Lightweight Directory Access Protocol
LMM       Line Maintenance Manager
LTM       Line Test Manager
MADN      Multiple Appearance Directory Number
MBS       Maximum Burst Size
MG        Media Gateway
MLT       Multi-Link Trunking
MTA       Multimedia Terminal Adapter
NCS       Network-based Call Signaling
NMI       Network Management Interface
NOC       Network Operations Center
NPM       Network Patch Manager
NTP       Network Time Protocol
OOB       Out of Band
PCR       Peak Cell Rate
POP       Point of Presence
PSTN      Public Switched Telephone Network
PTM       Packet Telephony Manager
MG 15000  Media Gateway 15000
QoS       Quality of Service
RTP       Real-time Transport Protocol
SAM16     Service Application Module 16 slot
SAM21     Service Application Module 21 slot
SAM21EM   SAM21 Element Manager
SC/SCU    Shelf Controller/Shelf Controller Unit
SCR       Sustained Cell Rate
SDM       SuperNode Data Manager
SPM       Spectrum Peripheral Module
TDM       Time Division Multiplex
TMM       Trunk Maintenance Manager
UAS       Universal Audio Server
USP       Universal Signaling Point
VPN       Virtual Private Network
VR        Virtual Router
VRRP      Virtual Router Redundancy Protocol
VSP       Voice Services Processor
WAN       Wide-Area Network
XA-Core   eXtended Architecture Core