
Subject: An MPLS Network Design Case Study
Date: 2013-4-10
From: Ahmet Akyamac, (732) 949-5413, akyamac@lucent.com; Benjamin Tang, (732) 949-6477, btang@lucent.com; Ramesh Nagarajan, (732) 949-2761, rameshn@lucent.com
Advanced Technologies, Bell Laboratories, Holmdel, NJ 07744

1 Introduction
MPLS (Multi-Protocol Label Switching) has become the packet transport technology of choice for meeting multiple service requirements in the next generation network (NGN). More and more service providers, both wireline and wireless, are starting to deploy MPLS as the common packet backbone to converge existing heterogeneous TDM and packet (X.25, ATM/FR, best-effort IP) networks and services, and to offer new MPLS-enabled services such as IP VPN and VPLS that increase revenue generation. For these service providers, the MPLS network must be designed to be scalable, highly available and capable of supporting various quality of service (QoS) requirements, and it must be operated in an efficient and cost-effective manner.

Lucent Worldwide Services (LWS) offers a complete suite of MPLS network design and optimization services to help service providers resolve key issues encountered in the rollout and operation of MPLS networks. The LWS MPLS services, powered by rigorous methods and procedures (M&P) and by intelligent algorithms and tools developed by Bell Labs, are geared to address such key issues as service characterization and traffic modeling, Greenfield IP/MPLS network design, multi-class traffic engineering design and optimization, and protection design based on mixed protection policies for various types of traffic.

This paper presents a case study of MPLS network design that was performed for a service provider using the service capabilities available from the LWS MPLS network design and optimization services. In the remainder of this study, this service provider will be referred to as Service Provider A, or SPA. Furthermore, due to the sensitive nature of the information, the geographical and traffic information has been modified from the original study.

The case study addresses a specific traffic engineering (TE) challenge facing service providers in the operation and growth of an MPLS network. Based on the service provider's existing MPLS network and a selected set of services, we performed a DiffServ-based traffic engineering (DS-TE) design to find the routing of both the primary and backup LSPs carrying that multi-service traffic. The LSP routing is driven by minimizing a particular cost measure of the MPLS network chosen for this study, namely the total link bandwidth consumed by the routing of LSPs. The details of the case study are presented in the remainder of this paper as follows:

Section 2: A brief summary of SPA's current ATM and MPLS networks and how they are used in the case study.
Section 3: A description of our design methodology.


Section 4: How traffic modeling for the two selected services, SPA's existing Internet access service and its new IP VPN service, was conducted. By using historical SPA network data and reported growth rates, and by leveraging traffic modeling tools developed by Bell Labs, we were able to derive a reasonable estimate of the Internet access and IP VPN traffic, which was used as input to the MPLS TE design.
Section 5: The alternative design scenarios addressed, the M&P and design tool used in the study, and a comparison and analysis of the results. Depending on the TE design philosophy used (whether link bandwidth is pre-partitioned among various class types (CTs), and whether single-class or multi-class LSPs are used), three different design scenarios are defined and addressed in the study. For each design scenario, a DS-TE design was conducted to find the routing of primary and backup LSPs. The results for these scenarios are then compared and analyzed, with a look at how these TE designs can help service providers meet their MPLS requirements for efficient utilization, low cost, assured QoS support and high availability for multi-class traffic.
Section 6: Conclusion and possible future studies.

2 SPA MPLS Network


SPA currently operates a large ATM backbone network based on Lucent equipment and an overlay MPLS backbone based on Juniper routers. The ATM backbone consists of two tiers: a national backbone of seven core nodes fully meshed via OC-12 links (in this document we will refer to these nodes as Gate1, Gate2, Gate3, Core1, Core2, Core3 and Core4), and thirty tier-2 provincial backbone networks homed to the core nodes. The ATM backbone supports multiple services including ATM, FR, Internet access and VoIP.

In 2004 SPA started to deploy an overlay MPLS backbone initially consisting of seven core nodes collocated with the ATM core nodes, each containing one Juniper T640, interconnected via OC-48 links. Over time SPA plans to migrate traffic from the ATM backbone to the MPLS backbone. At present, Internet access traffic previously carried over the ATM national backbone has been diverted to the MPLS backbone at the core node locations (Figure 1). Additional existing traffic in the ATM backbone (such as VoIP) and new IP-based services such as IP VPN and VPLS will be carried over MPLS in the future.

(Figure 1 shows the two-tier ATM backbone, provincial and national, built on GX 550 switches, and the overlay MPLS backbone built on T640 routers, with Internet access and IP VPN traffic entering at the core node locations.)

Figure 1: SPA ATM and MPLS Backbone Networks

This MPLS network design study is based on SPA's current MPLS backbone network and traffic demands comprising the existing Internet access traffic and new IP VPN traffic assumed to be carried over the MPLS backbone. The IP VPN traffic may be terminated directly on a T640 (as shown in Figure 1) or may come from PE routers connected to the T640s via logical links provisioned through the provincial ATM backbones. In either case, we are concerned only with the total IP VPN traffic aggregated at the T640s, which is used as input to the MPLS network design.


Section 4 describes how the Internet access and IP VPN traffic is modeled in this study.

3 Network Design Methodology


The following figure shows a flow diagram of the general MPLS network design methodology used in this study.

(Figure 2 flow diagram: multi-class traffic and/or LSP information (VoIP, VPN, 3G, IA, etc.) with protection options (path, FRR, etc.), network topology with link class partitioning information, and TE constraints (subscription, hops, delay, etc.) feed the MPLS multi-class network design step, whose output goes to routing and performance analysis; both steps produce reports, with dashed feedback lines closing the loop back to the inputs.)

Figure 2: General MPLS Network Design Methodology

Typical inputs to the MPLS network design procedure include topology information (node locations, capacities and configurations; link types, connections, class type partitioning and subscription constraints) and traffic information (multi-class traffic and/or LSP information, and the protection option: path-based, FRR, etc.). The gathering of this type of information for this study is discussed further in Sections 4 and 5. The traffic engineering constraints (hops, delay, etc.) and design objectives (such as minimizing bandwidth or minimizing maximum subscription) are further inputs to the design process. Subsequent to network design, routing and performance analysis can be performed to estimate network performance in terms of capacity utilization and traffic quality of service measures such as delay and loss. The study presented in this document focused on capacity utilization as the main performance measure. As a final step, the design output information is collected in a series of reports, which can be analyzed to compare design results against design objectives and specifications. This could result in recommended changes to the network, which would then be used to perform further network studies as part of a closed-loop process (shown by the dashed lines in Figure 2).
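To make these inputs concrete, the following minimal sketch (our illustration, not taken from the design tool) models them as simple Python structures. The node names, capacity figure and constraint values are placeholders, not values from the study.

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    a: str                     # endpoint node names
    b: str
    capacity_mbps: float       # usable bandwidth after SONET framing overhead
    ct_partition: dict = field(default_factory=dict)  # e.g. {"CT3": 0.50, ...}

@dataclass
class Demand:
    src: str
    dst: str
    mbps: float
    class_type: str            # "CT0" for IA; "CT3".."CT6" for VPN0..VPN3
    setup_priority: int        # lower value is routed earlier
    protected: bool = True     # end-to-end path protection requested

# Illustrative inputs in the spirit of Figure 2 (all values are placeholders)
links = [Link("Gate1", "Core4", 2400.0), Link("Gate1", "Gate2", 2400.0)]
demands = [Demand("Core1", "Gate1", 296.0, "CT0", 7)]
te_constraints = {"max_hops": 5, "max_subscription": 1.0}
design_objective = "minimize total subscribed link bandwidth"
```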

4 Traffic Modeling
For this study we assumed two types of traffic: Internet access (IA) and IP Virtual Private Network (VPN).

4.1 Internet Access


As discussed earlier, the SPA MPLS backbone consists of seven nodes. Of these, Gate1, Gate2 and Gate3 are considered to be the gateway nodes for IA traffic. IA traffic aggregated at the gateway nodes does not traverse the MPLS core network but goes directly to the Internet from those gateway nodes. On the other hand, IA traffic aggregated at any of the four non-gateway core nodes (Core1 through Core4) must traverse the core network to reach one of the gateway nodes. We assumed that each non-gateway node sends IA traffic to the two geographically closest gateway nodes, split equally between the two.


The IA traffic demand used in this study is based on our estimate of the current IA traffic in SPA's network. Since current traffic information was not available from SPA, our estimate is based on the IA traffic levels of August 2003 available from earlier SPA studies, augmented to account for IA traffic growth since then. We used the Internet subscriber growth data available for SPA's geographical region of operation and extrapolated to August 2005, as seen in Table 1. Based on our extrapolation (110.6/68 = 1.627), the IA traffic levels of August 2003 were multiplied by a factor of 1.627, and the current IA traffic is estimated to be 5508 Mbps in total.

Internet access subscribers (millions):
Jan-02: 33.7   Aug-02: 45.8   Jan-03: 59.1   Aug-03: 68   Jan-04: 79.5   Aug-04: 87   Jan-05 (extrapolated): 98.4   Aug-05 (extrapolated): 110.6

Table 1: IA subscribers in SPA's geographical region

The total estimated IA traffic was proportionally distributed among the seven nodes based on their weights (i.e., the total amount of traffic aggregated at each node in the August 2003 traffic data). As mentioned before, IA traffic aggregated at gateway nodes does not traverse the MPLS core and thus does not contribute input traffic to the MPLS network design. For the non-gateway nodes, IA traffic is split between the two geographically closest gateways, resulting in the 16 unidirectional IA traffic demands shown in Table 2. IA traffic is assumed to be of class type CT0.
Each non-gateway node exchanges a symmetric IA demand with each of its two geographically closest gateway nodes: Core1, 296 Mbps; Core2, 188 Mbps; Core3, 460 Mbps; Core4, 433 Mbps (16 unidirectional demands in total).

Table 2: IA Traffic Demand Matrix (Mbps)
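As a quick check on the numbers above, the sketch below reproduces the growth extrapolation and the equal split toward the two closest gateways. The per-core totals are those implied by Table 2; the closest-gateway pairings are our assumption, since the study's geography was anonymized.

```python
# Growth factor extrapolated from Table 1 (subscribers, millions)
growth = 110.6 / 68.0          # Aug-05 / Aug-03, approximately 1.627

# Per-core IA totals implied by Table 2 (Mbps): twice the per-gateway demand
ia_core_total = {"Core1": 592.0, "Core2": 376.0, "Core3": 920.0, "Core4": 866.0}

# Assumed closest-gateway pairings (hypothetical; not given in the study)
closest = {"Core1": ("Gate1", "Gate2"), "Core2": ("Gate1", "Gate3"),
           "Core3": ("Gate2", "Gate3"), "Core4": ("Gate1", "Gate2")}

# Build the 16 unidirectional CT0 demands: equal split, both directions
demands = {}
for core, gateways in closest.items():
    for gw in gateways:
        half = ia_core_total[core] / 2.0
        demands[(core, gw)] = half   # core -> gateway
        demands[(gw, core)] = half   # gateway -> core

assert len(demands) == 16 and abs(sum(demands.values()) - 5508.0) < 1e-6
```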

4.2 IP VPN
SPA currently does not offer an IP VPN service. For the purposes of the study, we assumed VPN traffic demands exist between each pair of core nodes. The IP VPN traffic demands were generated by first calculating the total IP VPN traffic in the network based on the IP VPN service demand model illustrated in Table 3. The model takes the input parameters specified in the table and calculates the consequent total IP VPN traffic; using the figures shown, the total IP VPN traffic is estimated to be 15,655 Mbps.


Number of IP-VPN customers: 100
IP VPN customers yearly growth rate: 5%

                                           Large          Medium         Small
IP-VPN customer distribution               10%            30%            60%
Avg. number of nodes per IP VPN customer   50             30             10
Access speed distribution
  (NxDS0 / T1 / T3 / OC3)                  50%/30%/20%    65%/25%/10%    80%/20%/0%

Avg. IP-VPN traffic (as % of access speed), NxDS0 / T1 / T3 / OC3: 50% / 30% / 30%

Table 3: IP VPN Service Demand Model (All Class Types)

Next, the total IP VPN traffic was converted into a traffic demand matrix (42 source-destination pairs), shown in Table 4, based on a weighted-distribution traffic model (again, the weights being the total amount of traffic aggregated at the seven nodes in the August 2003 traffic data). The total VPN traffic between each pair of nodes was then split into the four VPN classes, for a total of 168 VPN demands, using the following percentages: 50% for VPN0, 25% for VPN1, 20% for VPN2 and 5% for VPN3. The VPN traffic was assumed to be of class types CT3 (for VPN0) through CT6 (for VPN3).

The demands between each pair of nodes are symmetric. The 21 distinct pairwise demand values (Mbps) recovered for the matrix are 2209.5, 1502.5, 736, 690, 456.5, 451.5, 283, 234.5, 221, 170.5, 160.5, 149, 108.5, 94, 85, 68.5, 58, 54.5, 36.5, 34.5 and 23.5; the 42 unidirectional demands together total 15,655 Mbps.

Table 4: Total VPN Traffic Demand Matrix (Mbps)
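The weighted-distribution step can be sketched as a simple gravity model: total VPN traffic is spread over the 42 ordered node pairs in proportion to the product of the endpoint weights, then split 50/25/20/5 across the four VPN grades. The weights below are hypothetical stand-ins for the August 2003 traffic shares, so the resulting matrix is illustrative rather than a reproduction of Table 4.

```python
from itertools import permutations

total_vpn = 15655.0  # Mbps, from the demand model in Table 3
weights = {"Gate1": 0.28, "Gate2": 0.25, "Gate3": 0.12,   # hypothetical shares
           "Core1": 0.13, "Core2": 0.06, "Core3": 0.09, "Core4": 0.07}

# Gravity model over the 42 ordered pairs: demand(i, j) proportional to w_i * w_j
pairs = list(permutations(weights, 2))
norm = sum(weights[i] * weights[j] for i, j in pairs)
vpn_matrix = {(i, j): total_vpn * weights[i] * weights[j] / norm for i, j in pairs}

# Split each pairwise demand into the four VPN grades (168 demands in total)
class_split = {"VPN0": 0.50, "VPN1": 0.25, "VPN2": 0.20, "VPN3": 0.05}
vpn_demands = {(i, j, c): bw * frac
               for (i, j), bw in vpn_matrix.items()
               for c, frac in class_split.items()}
```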

5 Detailed MPLS Network TE Design


In this section, we discuss the different aspects of the SPA multi-class MPLS network TE design. The MPLS network TE design is based on SPA's current MPLS backbone and on the existing Internet access and new IP VPN traffic as modeled in the previous section. The goal of the TE design is to find a routing of the LSPs carrying multi-class IA and IP VPN traffic such that the network cost, defined as the total consumed link bandwidth, is minimized. In the descriptions that follow, we first define the three design scenarios addressed, followed by the procedure and design tool used in the MPLS network TE design. Lastly, we present the TE design results with a comparison and further analysis.


5.1 Design Scenarios


For the SPA MPLS network, all traffic demands are carried on LSPs. IA demands are carried on LSPs with zero TE bandwidth and, as a result, are routed over shortest paths. IP VPN demands are carried on LSPs with non-zero TE bandwidth, where the LSPs can be single-class or multi-class depending on the design philosophy. In the single-class case, an LSP carries traffic from a single grade of IP VPN service and contains a bandwidth request (with MPLS framing overhead included) for that grade of service; in the multi-class case, an LSP carries traffic from multiple grades of IP VPN service and contains multiple bandwidth requests, one for each grade of service. In either case, each LSP is assigned a setup and holding priority that, combined with the class type(s) associated with the LSP, is used to decide the routing priority of the LSP in the TE design.

All LSPs are protected against single link failure by end-to-end path protection backup paths. The backup paths share capacity as long as the corresponding primary paths are link disjoint.¹ Both primary and backup LSPs are routed over the given SPA MPLS network. On all network links, 100% of the available bandwidth (after deducting SONET framing overhead) is available to the routing of LSPs. In addition, the available bandwidth on a network link may be pre-partitioned among various class types (again depending on the design philosophy), and the routing of LSPs must be subject to this constraint. When class-based partitioning of link bandwidth is applied, the Maximal Allocation Model (MAM) is used as the bandwidth constraint model on all network links (a sketch of the MAM check follows Table 6 below). The routing of primary and backup LSPs is decided through traffic engineering with the objective of minimizing the total link bandwidth consumed (or subscribed) by the routing of LSPs.

Based on different design philosophies, three design scenarios were considered in the MPLS TE design. These are summarized in Table 5 and Table 6 and discussed below.

                      Class-Based Link Bandwidth Partitioning
LSP Type              No              Yes
Single Class          Scenario 1      Scenario 2
Multi Class                           Scenario 3

Table 5: Summary of Design Scenarios

Setup Priorities    VPN3    VPN2    VPN1    VPN0    IA
Scenario 1          1       2       3       4       7
Scenario 2          1       2       3       4       7
Scenario 3          1 (one multi-class LSP; 5% / 20% / 25% / 50% of LSP BW for VPN3 / VPN2 / VPN1 / VPN0)    7

Table 6: LSP Class Setup Priorities
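As referenced above, the sketch below illustrates the Maximal Allocation Model constraint as we understand it: each class type may reserve at most its configured share of a link's bandwidth, independently of the other class types. The partition follows the 0/50/25/20/5 split used in Scenarios 2 and 3; the link capacity figure is a placeholder.

```python
def mam_admit(link_capacity, reserved_by_ct, partition, ct, request):
    """Admit a reservation of `request` Mbps for class type `ct` under MAM.

    reserved_by_ct: current reservations per class type (Mbps)
    partition: maximum share of the link per class type, e.g. {"CT3": 0.50}
    """
    limit = partition.get(ct, 0.0) * link_capacity
    return reserved_by_ct.get(ct, 0.0) + request <= limit

# Scenario 2/3 partitioning: IA (CT0) 0%, VPN0..VPN3 (CT3..CT6) 50/25/20/5%
partition = {"CT0": 0.0, "CT3": 0.50, "CT4": 0.25, "CT5": 0.20, "CT6": 0.05}
reserved = {"CT3": 900.0}
print(mam_admit(2400.0, reserved, partition, "CT3", 250.0))  # True: 1150 <= 1200
```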

Single-class LSPs allow for more granular bandwidth control, whereas multi-class LSPs present operational advantages.

Design Scenario 1. This design scenario corresponds to a traffic-engineered network with single-class LSPs and no class-based partitioning of link bandwidth. Each LSP carries one traffic class; thus there are 5 types of LSPs: IA and VPN0 through VPN3. Each traffic type is bound to its corresponding LSP type; for example, VPN3 traffic is carried on VPN3 LSPs.
¹ In some cases, sharing of backup bandwidth is subject to the Shared Risk Group (SRG) constraint: backup paths for link-disjoint primary LSPs cannot share capacity if some of the disjoint links from the primary LSPs are in the same SRG, i.e., those disjoint links are actually provisioned through physical conduits that have shared risk, such that one failure in the physical network may bring down all of the primary LSPs at the same time. Although we have the capability to address SRGs in the protection design, we do not consider them in this case study.
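To illustrate the capacity-sharing rule for backup paths (with the SRG caveat of footnote 1 set aside, as in the study), the sketch below computes the backup bandwidth a link must reserve under single-link failures: since backups whose primaries are link disjoint never activate together, each link only needs the worst case over individual primary-link failures. Link names and bandwidths are hypothetical.

```python
from collections import defaultdict

def backup_reservation(lsps):
    """lsps: dicts with 'bw', 'primary' (list of links), 'backup' (list of links).
    Returns the required backup bandwidth per link under single-link failures."""
    # need[backup_link][failed_primary_link] = backup bw activated together
    need = defaultdict(lambda: defaultdict(float))
    for lsp in lsps:
        for failed in lsp["primary"]:
            for blink in lsp["backup"]:
                need[blink][failed] += lsp["bw"]
    # Reservation per link = worst single failure; disjoint primaries share it
    return {blink: max(by_failure.values()) for blink, by_failure in need.items()}

lsps = [  # two LSPs with link-disjoint primaries sharing one backup link
    {"bw": 100.0, "primary": ["L1"], "backup": ["L3"]},
    {"bw": 150.0, "primary": ["L2"], "backup": ["L3"]},
]
print(backup_reservation(lsps))  # {'L3': 150.0}: shared, not 250.0
```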


As discussed above, IA LSPs have zero TE bandwidth. For VPN LSPs, the TE bandwidth is set to the amount of traffic carried (plus required MPLS overhead). The links are not partitioned among classes, and 100% of the link bandwidth is available for TE subscription. All LSPs have a holding priority of 1. The setup priorities are as follows: 1 for VPN3, 2 for VPN2, 3 for VPN1, 4 for VPN0 and 7 for IA LSPs. Thus the VPN3 LSPs have the highest setup priority.

Design Scenario 2. This design scenario corresponds to a traffic-engineered network with single-class LSPs and class-based partitioning of link bandwidth. Each LSP carries one traffic class; thus there are 5 types of LSPs (IA and VPN0 through VPN3), the same as in Design Scenario 1. All of the link bandwidth is available for TE subscription, partitioned among the class types as follows: IA 0%, VPN0 50%, VPN1 25%, VPN2 20%, VPN3 5%, which is the same as the distribution used in the IP VPN traffic model in Section 4. Thus the link partitioning uses a priori knowledge of the class type bandwidth requirements. LSPs have the same holding and setup priorities as in Design Scenario 1.

Design Scenario 3. This design scenario corresponds to a traffic-engineered network with multi-class LSPs and class-based partitioning of link bandwidth. Single-class LSPs are still used to carry IA traffic, while the four classes of VPN traffic are carried together on multi-class VPN LSPs. Each multi-class LSP contains bandwidth requests for each of the four classes of VPN traffic. All of the link bandwidth is available for TE subscription, partitioned as in Design Scenario 2 (IA 0%, VPN0 50%, VPN1 25%, VPN2 20%, VPN3 5%). All LSPs have a holding priority of 1. Single-class IA LSPs have a setup priority of 7 and multi-class VPN LSPs have a setup priority of 1.
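To make the single-class versus multi-class distinction concrete, this sketch builds the LSP bandwidth requests for one node pair under Scenarios 1 and 2 versus Scenario 3. The traffic figures and the 5% MPLS framing overhead factor are illustrative assumptions, not values from the study.

```python
OVERHEAD = 1.05  # illustrative MPLS framing overhead factor (assumed)

pair_traffic = {"VPN0": 500.0, "VPN1": 250.0, "VPN2": 200.0, "VPN3": 50.0}
setup_prio = {"VPN3": 1, "VPN2": 2, "VPN1": 3, "VPN0": 4}

# Scenarios 1 and 2: one single-class LSP per VPN grade (4 LSPs per pair)
single_class_lsps = [
    {"classes": {c: t * OVERHEAD}, "setup": setup_prio[c], "holding": 1}
    for c, t in pair_traffic.items()
]

# Scenario 3: one multi-class LSP carrying all four grades (1 LSP per pair)
multi_class_lsp = {
    "classes": {c: t * OVERHEAD for c, t in pair_traffic.items()},
    "setup": 1, "holding": 1,
}
```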

5.2 MPLS TE Design Procedure and Tool


The MPLS network design was performed using a commercially available MPLS network design and simulation tool customized for Lucent Technologies. The tool is capable of conducting TE design and of performing flow analysis and packet-based discrete event simulation based on the output of the TE design. Using this tool, the following steps were followed in the MPLS TE design:

- Import the network topology and traffic demands for multiple CTs
- Import the LSP bandwidth requests for single or multiple CTs
- Run the design action to find routing paths for primary and backup LSPs, with the objective of minimizing the total consumed link bandwidth
- Run flow analysis, which places traffic onto the routed LSPs and collects network performance data (such as hop counts and link bandwidth subscription by both primary and backup LSPs)

The MPLS network topology (nodes and links) was imported into the tool via Juniper configlet files provided by SPA. Figure 3 shows a non-geographical display² of the SPA MPLS network, consisting of 7 Juniper T640 routers and 10 OC-48 links, as viewed in the graphical user interface (GUI) of the tool after the import of the configlet files. As shown in the figure, there is only one OC-48 link between Core4 and Gate1. For redundancy purposes, a second OC-48 link between the pair was added in the MPLS TE design (for a total of 11 OC-48 links).

² Since no coordinates are provided in the Juniper configlet files, the display of network nodes in Figure 3 does not reflect their geographical locations. The nodes were later moved to locations on a map (Figure 4).


Figure 3: SPA MPLS network after configlet file import to the tool


Figure 4: Network model after geographical modifications

The traffic demands and LSP information (including bandwidth requests) were imported into the tool via CSV-based text files. Figure 5 shows the network model in Design Scenario 3 after the traffic demands and LSP information are imported.

Figure 5: Network model for Design Scenario 3 showing IA and multi-class VPN LSPs

5.3 MPLS Network Design


The TE LSP routing problem is NP-hard; primary path routing alone has a solution space on the order of 2^(|D|·|A|), where |D| is the number of LSPs and |A| is the number of unidirectional links (for example, for the first two design scenarios there are |D| = 184 LSPs and 12 bidirectional links, so |A| = 24). The complexity of a protection design is higher still, including an additional factor related to the number of failures to be considered. The primary/backup path routing problem for networks of the size considered in this study is notoriously difficult computationally, and finding provably optimal solutions is impractical. Our design therefore uses a heuristic-based approach to arrive at a quality solution satisfying the design objectives.

After the network topology, traffic demands and LSPs were imported, the routing of primary and backup LSPs was determined by the mpls_ds_te design action available in the tool, custom built for Lucent Technologies. The mpls_ds_te design action uses a heuristic approach (as mentioned above) based on LSP ordering to arrive at a network design solution with a minimum-cost objective; a simplified sketch of an ordering heuristic of this flavor follows the metric list below. For this study, the cost is defined to be the total link bandwidth subscribed by the LSPs across the network. The outcome of the TE design is a set of explicitly routed LSPs, which can be evaluated based on the following metrics:

- Minimum, maximum and average hop count of LSP explicit routes


- Minimum, maximum and average link TE subscription³
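The mpls_ds_te action itself is proprietary, but the flavor of an LSP-ordering heuristic can be sketched as follows: sort LSPs by setup priority, then route each one by hop count over the subgraph of links that still have enough residual capacity. This simplified sketch uses the open-source networkx library (not used by the study's tool) and omits class-type partitions and backup routing.

```python
import networkx as nx

def order_and_route(graph, lsps):
    """Route LSPs in setup-priority order over links with enough residual capacity.

    graph: nx.Graph with a 'capacity' attribute on edges
    lsps: list of dicts with 'src', 'dst', 'bw', 'setup'
    Returns {lsp_index: path or None}. A greedy heuristic, not an optimal design.
    """
    residual = {tuple(sorted(e)): graph.edges[e]["capacity"] for e in graph.edges}
    routes = {}
    for i, lsp in sorted(enumerate(lsps), key=lambda x: x[1]["setup"]):
        feasible = [e for e, cap in residual.items() if cap >= lsp["bw"]]
        sub = graph.edge_subgraph(feasible)
        try:
            path = nx.shortest_path(sub, lsp["src"], lsp["dst"])  # min hops
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            routes[i] = None        # unroutable under current residual capacity
            continue
        for u, v in zip(path, path[1:]):
            residual[tuple(sorted((u, v)))] -= lsp["bw"]
        routes[i] = path
    return routes
```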

Subsequent to the TE design, we ran the flow analysis (FLAN) available in the tool, in which traffic demands were placed on the primary LSP ER paths and link utilization⁴ was measured. We did not conduct failure analysis in this study.
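Conceptually, the flow-analysis step accumulates the traffic carried over each primary explicit route onto the links it traverses and divides by link capacity. A minimal sketch (our illustration, not the tool's FLAN implementation; names and numbers are hypothetical):

```python
from collections import defaultdict

def link_utilization(routes, traffic, capacity):
    """routes: {demand_id: [node1, node2, ...]} primary ER paths
    traffic: {demand_id: Mbps actually carried}
    capacity: {(u, v): usable Mbps after SONET overhead}, keys sorted"""
    load = defaultdict(float)
    for d, path in routes.items():
        for u, v in zip(path, path[1:]):
            load[tuple(sorted((u, v)))] += traffic[d]
    # Utilization can exceed 1.0 when zero-TE-bandwidth IA traffic shares a link
    return {e: load[e] / capacity[e] for e in load}

routes = {"ia1": ["Core1", "Gate1"], "vpn1": ["Core1", "Gate1"]}
traffic = {"ia1": 296.0, "vpn1": 2300.0}
print(link_utilization(routes, traffic, {("Core1", "Gate1"): 2400.0}))
# {('Core1', 'Gate1'): ~1.08} -> 108% utilization, as seen later in Figure 11
```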

5.4 Study Results and Analysis


The tool is capable of generating a rich set of class-based and summary reports on the outcome of the TE design and flow analysis. For example, Figure 6 through Figure 8 show portions of the LSP explicit route and link TE subscription reports generated after the TE design, and of the link utilization report generated after FLAN, all for Design Scenario 2.

Figure 6: LSP Explicit Routes Report for Design Scenario 2 (Partial)

Figure 7: Link TE Subscription Report for Design Scenario 2 (Partial)

³ Link TE subscription refers to the portion of a link's capacity that is reserved for all LSPs routed over the link during TE design.
⁴ Link utilization, as opposed to link TE subscription, refers to the portion of a link's capacity that is consumed by actual traffic. It is usually measured through a flow analysis in which actual traffic is placed onto the network. A link's utilization may be higher or lower than its TE subscription.



Figure 8: Link Utilization Report for Design Scenario 2 (Partial)

Based on the information generated by these reports, we summarize the study results below with a comparison among the three alternative design scenarios. The results are presented in terms of three metrics: LSP hop count, link TE subscription and link utilization.

LSP Hop Count. Minimum, average and maximum hop counts⁵ of primary and backup LSPs are summarized in Figure 9. Note that all primary LSPs were successfully routed in all design scenarios. However, in Design Scenario 1 backup paths failed for 5 VPN LSPs (corresponding to 5 unprotected VPN demands), and in Design Scenario 3 backup paths failed for 4 multi-class VPN LSPs (corresponding to 16 unprotected VPN demands). In Design Scenario 2, all backup LSPs were successfully routed.
LSP Hops (in Core Links)

             Scenario 1 (184 LSPs)    Scenario 2 (184 LSPs)    Scenario 3 (58 LSPs)
             IA     VPN     All       IA     VPN     All       IA     VPN      All
Primary
  Min Hops   1      1       1         1      1       1         1      1        1
  Avg Hops   1.125  1.54    1.50      1.125  1.57    1.53      1.125  1.57     1.53¹
  Max Hops   2      2       2         2      3       3         2      3        3
Backup
  Min Hops   1      1       1         1      1       1         1      1        1
  Avg Hops   2      2.63²   2.57²     2      2.77    2.71      2      2.95³    2.86¹,³
  Max Hops   3      5       5         3      5       5         3      5        5

¹ Normalized to the number of demands per VPN LSP
² Backup paths failed for 5 LSPs (5 VPN demands unprotected)
³ Backup paths failed for 4 LSPs (16 VPN demands unprotected)

Figure 9: LSP Hop Counts


⁵ The calculation of hop count is based on the number of links traversed by the LSPs.



For IP VPN, the hop count for primary and backup LSPs increases from Design Scenario 1 to Scenarios 2 and 3, since the class-based link bandwidth partitioning employed in Design Scenarios 2 and 3 reduces the chances of picking topologically shortest paths. The hop count for backup LSPs increases further from Design Scenario 2 to Scenario 3, since in Design Scenario 3 it is more difficult to route multi-class backup LSPs, with a much higher total bandwidth (equivalent to routing 4 single-class LSPs simultaneously), over the residual capacity left after the routing of primary LSPs, forcing the backup LSPs onto longer paths. The same difficulty also accounts for the 4 unroutable backup LSPs in Design Scenario 3. For IA, the LSP hop counts are the same in all design scenarios, since IA LSPs with zero TE bandwidth are always routed over shortest paths.

Link TE Subscription by Primary and Backup LSPs. Link TE subscription, by primary and backup LSPs respectively, is summarized in Figure 10. In Design Scenario 1, where link bandwidth is not partitioned, LSPs of higher priority with smaller bandwidth get routed first⁶, causing LSPs of lower priority with larger bandwidth to be routed over longer paths (this also accounts for the 5 unroutable backup IP VPN LSPs in Design Scenario 1 noted in Figure 9). Class-based link partitioning, on the other hand, was employed in Design Scenarios 2 and 3, preventing higher-priority LSPs from using up link capacity and leaving room for lower-priority IP VPN LSPs to find shorter paths. As a result, the average link TE subscription by primary LSPs (referred to as primary link TE subscription in the discussion below) in Design Scenario 1 is higher than in Design Scenarios 2 and 3. The average primary link TE subscriptions in Design Scenarios 2 and 3 are identical, since the class-based link partitioning was chosen to be proportional to the distribution of multi-class traffic. Finally, the difficulty of routing multi-class LSPs mentioned above led to a higher average backup link TE subscription in Design Scenario 3 than in the other two scenarios.

                  Link TE Subscription      Link TE Subscription      Link TE Subscription
                  by Primary LSPs (%)       by Backup LSPs (%)        by All LSPs (%)
Design Scenario   Min    Avg    Max         Min    Avg    Max         Min    Avg    Max
Scenario 1        0.00   41.32  98.86       0.00   35.78  64.63       52.73  77.02  99.90
Scenario 2        0.00   36.30  96.85       0.00   18.68  54.59       23.67  54.98  99.37
Scenario 3        0.00   36.30  96.85       0.00   38.07  85.87       52.29  74.37  97.81

Figure 10: Link TE Subscription by Primary and Backup LSPs

Link Utilization by Traffic Carried on the Primary LSPs. Link utilization⁷, in both forward and return directions, by traffic carried on the primary LSPs is summarized in Figure 11. Note that some links have a link utilization greater than 100%. This is because IA LSPs are routed over shortest paths with zero TE bandwidth in this study, leaving them with no bandwidth reservation on the links; on certain links where IA traffic is routed and there is a high level of IP VPN traffic, the link utilization exceeds 100%. Counting both forward and return directions, the total link capacity consumed by traffic carried on primary LSPs is 28,257 Mbps for Design Scenario 1 and 25,632 Mbps for both Design Scenarios 2 and 3.

As a general technical summary:
- Scenario 1 is best in terms of minimum hop counts.
- Scenario 2 is best if all LSPs are to be successfully protected.
- Scenario 2 is best in terms of overall TE subscription.
- Scenarios 2 and 3 are best in terms of link utilization.

⁶ This behavior is a result of the heuristic approach based on LSP ordering.
⁷ The calculation of link utilization is based on the link bandwidth after deducting the SONET framing overhead.



                  Forward Direction                          Return Direction
Design            Min     Avg     Max      Total Link        Min     Avg     Max      Total Link
Scenario          Util(%) Util(%) Util(%)  Consumed (Mbps)   Util(%) Util(%) Util(%)  Consumed (Mbps)
Scenario 1        12.36   55.51   117.84   14,520            0.00    52.52   116.19   13,737
Scenario 2        12.36   49.00   112.03   12,816            0.00    49.00   102.35   12,816
Scenario 3        12.36   49.00   112.03   12,816            0.00    49.00   102.35   12,816

Figure 11: Link Utilization by Traffic Carried on Primary LSPs

Scenario 2 generates a TE design with the lowest link utilization and overall link capacity consumed while maintaining only slightly higher hop counts for primary and backup LSPs. We discuss further considerations for service providers in the next section.

5.5 What Does It Mean To Service Providers?


From a service provider's perspective, the design scenarios addressed in the study represent different options for designing its MPLS network. Each of the network performance metrics shown above corresponds to a key requirement for the operation of the MPLS network. For example, a maximum LSP hop count for a particular class of traffic may be needed in order to meet the end-to-end delay requirement for that class (such as the maximum end-to-end delay for VoIP). Link subscription by LSPs reflects how efficiently the capacity of the MPLS network is used by multi-class traffic, and could be used to derive the total network cost or the cost per unit bandwidth carried.

As the study showed, the design scenarios led to different measurements of the network performance metrics. Depending on the particular requirements the service provider sets for its MPLS network, one design scenario may be the best option to adopt. For example, Design Scenario 2 (single-class LSPs with class-based link bandwidth partitioning), which generates a TE design with the lowest link utilization and overall link capacity consumed while maintaining only slightly higher hop counts for primary and backup LSPs, would be a good choice for a service provider looking to enhance MPLS network efficiency and reduce unit bandwidth cost. Single-class LSPs represent more granular routing and thus make better use of available capacity; from an operational point of view, however, multi-class LSPs could present advantages. And while partitioning bandwidth may result in under-utilization of existing resources, it also creates fairness in that a certain amount of bandwidth is always available for each class type.

There are additional design parameters that can be used in the TE design to meet other MPLS requirements desired by the service provider. For example, a different protection policy such as MPLS Fast Re-Route (FRR) can be used in the TE design to meet a requirement for fast recovery in the event of network failures. Overall, LWS has the full capability and innovative tools to help service providers design an MPLS network that meets their specific requirements for efficient network utilization, low cost, and assured QoS support and high availability for multi-class traffic carried over the MPLS network.

6 Conclusion and Future Study Opportunities


The following presents some additional areas where we would like to conduct future studies:

- MPLS TE design with class-based link bandwidth partitioning that does not rely on a priori knowledge of the multi-class traffic. In the current study, the class-based link bandwidth partitioning used a priori knowledge of the relative bandwidths required by the various classes of IP VPN traffic. In reality, the actual IP VPN traffic for



each class type may deviate from the original forecast, due to uncertainty in traffic forecasting or growth, and therefore would not match the partitioning of link bandwidth. A further study would employ a class-based link bandwidth partitioning that is not proportional to the distribution of multi-class traffic and examine how it affects the routing of LSPs and the consequent network performance.

- MPLS TE design with different protection policies. This study assumed backup LSPs are derived in the TE design based on end-to-end path protection. Different protection policies could be used in the TE design, such as one-to-one or facility-based FRR, or a mix of end-to-end path protection and FRR applied differently to various classes of traffic depending on their availability requirements.

- Failure analysis to assess the robustness of the various TE designs. In this new study we would run failure simulations (failing a single link or a single node at a time) and examine how backup LSPs take over to protect traffic on the affected primary LSPs, and the consequent network performance.

- End-to-end performance analysis for multi-class traffic over an MPLS network. The current study focused on measuring network performance at the flow level, that is, LSP hop count and link utilization. The tool we have is capable of performing end-to-end performance analysis at the packet level (packet delay, jitter and loss) for multi-class traffic carried over a traffic-engineered MPLS network.

- Analysis for cases where IA traffic takes non-zero LSP bandwidth.


