
MEE10:38

Improving the Quality of Multicast Networks by Using the OPNET Modeler


Krishna Kumaran Vienjipuram
Asmeer Babu Abdul

This thesis is presented as part of the Degree of Master of Science in Electrical Engineering with Emphasis on Telecommunication at Blekinge Institute of Technology.

June 2010

Master of Science Program in Electrical Engineering
School of Computing
Blekinge Institute of Technology
Advisor: Dr. Doru Constantinescu
Examiner: Dr. Doru Constantinescu


Abstract
This thesis presents different Internet Protocol (IP) multicasting design models together with their implementation in OPNET Modeler 14.5. The models presented in this report are designed for handling various types of advanced multimedia applications and services when using the multicast communication paradigm. The architectural models presented in this thesis are independent of the routing protocols used as well as of the underlying networking environment. Several existing challenges related to high bandwidth requirements, low network delay, and jitter are discussed together with these design models, and are simulated using the OPNET Modeler. The emerging demands for new multicast-capable applications, for both real-time and non-real-time traffic, are considered together in this report in order to better evaluate multicast traffic requirements. Efficient routing methods, group management policies, and shortest path algorithms are also discussed in the proposed models. The pros and cons of IP multicasting are described using the Protocol Independent Multicast Sparse Mode (PIM-SM) method, in light of the proposed design models. Several results related to link utilization, end-to-end delay, packet loss, PIM control traffic, Internet Group Management Protocol (IGMP) traffic, and network convergence are obtained by using the statistical methods and modules available in OPNET Modeler. Our major contribution in this thesis work relates to components associated with network convergence, routing delay, and processing delay; these components are illustrated using various OPNET metrics. Moreover, several issues related to protocol scalability, supportability, performance optimization, and efficiency are also presented by using OPNET's built-in reports, i.e., the executive flow and survivability analysis reports.


Preface and Acknowledgements


This project is a work required for the fulfillment of our Master's Degree in Electrical Engineering with emphasis on Telecommunication, completed in June 2010. The thesis work was carried out with OPNET Modeler 14.5 at Blekinge Institute of Technology (BTH), Karlskrona, Sweden. We are grateful to our honorable thesis supervisor Dr. Doru Constantinescu at the School of Computing, Blekinge Institute of Technology, who guided our thesis work with his scholarly advice. Without his guidance and encouragement this task would have been unachievable, and we express our heartfelt gratitude to him. We would like to thank our friends and parents who supported us during our master thesis. We would also like to thank the program managers Mikael Åsman and Anders Nelsson, as well as other BTH staff, too numerous to mention here.

Krishna Kumaran Vienjipuram, Asmeer Babu Abdul, June 2010


Goals
The main goal of this thesis is to investigate the performance of different types of multicast traffic using OPNET Modeler 14.5. Protocol overhead in steady state is observed by using Discrete Event Simulations (DES) [44, 51]. The performance as well as the future capacity planning of the multicast-enabled networks was analyzed by using OPNET's built-in features. Various application statistics such as delay, delay variation, application type, and traffic sent and received were collected for further analysis in this thesis report.


Table of Contents

ABSTRACT
GOALS
LIST OF FIGURES
LIST OF TABLES
ACRONYMS
CHAPTER I
1 INTRODUCTION
  1.1 INTRODUCTION
  1.2 PROBLEM DESCRIPTION
  1.3 INTENDED AUDIENCE
  1.4 SCOPE
  1.5 MOTIVATION
  1.6 LIMITATIONS
  1.7 STRUCTURE OF THE REPORT
  1.8 MULTICAST HISTORY
CHAPTER II
2 BACKGROUND & RELATED WORK
  2.1 BACKGROUND
  2.2 RELATED WORK
  2.3 COMPARISON TO THIS WORK
CHAPTER III
3 ROUTING PROTOCOLS
  3.1 INTERIOR GATEWAY ROUTING PROTOCOLS
    3.1.1 DISTANCE VECTOR ROUTING
    3.1.2 LINK STATE ROUTING
    3.1.3 HYBRID ROUTING
  3.2 EXTERIOR GATEWAY ROUTING PROTOCOLS
  3.3 CLASSLESS ROUTING
    3.3.1 OPEN SHORTEST PATH FIRST
    3.3.2 BORDER GATEWAY PROTOCOL
  3.4 CLASSFUL ROUTING
  3.5 STATIC AND DYNAMIC ROUTING
    3.5.1 STATIC ROUTING
    3.5.2 DYNAMIC ROUTING
  3.6 ROUTING ALGORITHMS
    3.6.1 GLOBAL ROUTING ALGORITHM
  3.7 HIERARCHICAL ROUTING
  3.8 IP COMPONENTS
  3.9 ERROR IDENTIFICATION
    3.9.1 ICMP MESSAGE TYPES
CHAPTER IV
4 MULTICAST ADDRESSING
  4.1 MULTICAST ADDRESS ALLOCATION
    4.1.1 DYNAMIC ALLOCATION
    4.1.2 STATIC ALLOCATION
  4.2 MULTICAST ADDRESS MAPPING
  4.3 MULTICAST ROUTING PROTOCOLS
    4.3.1 MULTICAST ROUTING PROTOCOLS
    4.3.2 PROTOCOL INDEPENDENT MULTICAST VERSIONS
    4.3.3 PROTOCOL INDEPENDENT MULTICAST
    4.3.4 MSDP
    4.3.5 DENSE MODE PROTOCOLS
    4.3.6 DISTANCE VECTOR MULTICAST ROUTING PROTOCOL
    4.3.7 SPARSE MODE PROTOCOLS
    4.3.8 PIM SPARSE-DENSE MODE
  4.4 ROUTING MULTICAST TRAFFIC
    4.4.1 TRANSPORT PROTOCOLS AND MULTICASTING
  4.5 MULTICAST FORWARDING
  4.6 MULTICAST TRAFFIC ENGINEERING
    4.6.1 RP POSITION
    4.6.2 MULTICAST ROUTING CONFIGURATION
    4.6.3 DEPENDENCY ON UNICAST
  4.7 IGMP
    4.7.1 IGMP v1
    4.7.2 IGMP v2
    4.7.3 IGMP v3
    4.7.4 IGMP MESSAGES
    4.7.5 IGMP PROTOCOL OPERATION
  4.8 CGMP
  4.9 IP MULTICAST MODEL ARCHITECTURE & MULTICAST OPERATIONS
    4.9.1 JOINING A GROUP
    4.9.2 SENDING TRAFFIC TO A GROUP
    4.9.3 MULTICAST TUNNELING
    4.9.4 DVMRP TUNNELS
    4.9.5 GRE TUNNEL
    4.9.6 MULTICAST APPLICATIONS
    4.9.7 PERFORMANCE METRICS IN MULTICASTING
CHAPTER V
5 MODEL DESIGN I
  5.1 MODEL DESIGN EXPERIMENT 1
CHAPTER VI
6 MODEL DESIGN II
  6.1 DESIGN MODEL EXPERIMENT II
CHAPTER VII
7 SIMULATION RESULTS AND ANALYSIS
  7.1 EXPERIMENT I
    7.1.1 INTRODUCTION TO NEW DESIGN AND SCENARIO EXPERIMENT I
  7.2 EXPERIMENT II
    7.2.1 INTRODUCTION TO NEW DESIGN AND EXPERIMENT II
  7.3 INTERNET STANDARDS USED IN THIS REPORT
CHAPTER VIII
8 CONCLUSIONS AND FUTURE WORK
BIBLIOGRAPHY


LIST OF FIGURES

FIGURE 1: MULTICAST DEPLOYMENT FROM 1992 TO 2006
FIGURE 2 ROUTING PROTOCOLS
FIGURE 3 CLASS D RANGE IP MULTICAST
FIGURE 4 IEEE 802.3 MAC ADDRESS STRUCTURE
FIGURE 5 PIM SPARSE-DENSE MODE
FIGURE 6 CGMP OPERATION WITH ROUTERS AND SWITCHES
FIGURE 7 IGMP GROUP JOINING PROCEDURE
FIGURE 8 MULTICAST TRAFFIC FORWARDING TO GROUPS
FIGURE 9 NEW MODEL DESIGN EXPERIMENT I
FIGURE 10 RONNEBY SUBNET
FIGURE 11 KARLSHAMN SUBNET
FIGURE 12 KARLSKRONA SUBNET WITH R1 AS RP
FIGURE 13 NEW MODEL DESIGN EXPERIMENT II
FIGURE 15 ROUTING BETWEEN DEVICES IN RONNEBY
FIGURE 16 BGP ROUTING BETWEEN DEVICES IN RONNEBY
FIGURE 17 KARLSKRONA JUNCTION ROUTING INFORMATION
FIGURE 18 CONNECTIVITY BETWEEN DEVICES FROM KARLSKRONA JUNCTION
FIGURE 19 KARLSHAMN JUNCTION ROUTING INFORMATION
FIGURE 20 GLOBAL IP STATISTICS SUMMARY IN EXP I
FIGURE 21 GLOBAL STATISTICS PIM-SM SUMMARY
FIGURE 22 GLOBAL STATISTICS SUMMARY BGP
FIGURE 23 GLOBAL STATISTICS EIGRP SUMMARY
FIGURE 24 POINT-TO-POINT QUEUING DELAY BETWEEN MAJOR DEVICES
FIGURE 25 TRAFFIC SENT/RECEIVED IN PACKETS/SEC
FIGURE 26 FLOW ANALYSIS EXP 1
FIGURE 27 NET DOCTOR FINAL REPORT EXP I
FIGURE 28 NET DOCTOR SURVIVABILITY SCORE
FIGURE 29 CASE VIOLATION HISTORY PIE CHART
FIGURE 30 FAILURE IMPACT SUMMARIES
FIGURE 31 ELEMENT SURVIVABILITY REPORT
FIGURE 32 NETWORK PERFORMANCE REPORT
FIGURE 33 NETWORK PERFORMANCE (AVERAGE LINK UTILIZATION)
FIGURE 34 NET REPORTS OVER-UTILIZATION OF LINKS
FIGURE 35 OVER-UTILIZATION AND LINK FAILURES
FIGURE 36 CAPACITY PLANNING FLOW ANALYSIS REPORT
FIGURE 37 END-TO-END DELAY DISTRIBUTION CAPACITY PLANNING REPORT
FIGURE 38 IP BACKGROUND TRAFFIC
FIGURE 39 PIM-SM CONTROL TRAFFIC SENT/RECEIVED
FIGURE 40 BGP AND EIGRP TRAFFIC EXP 1
FIGURE 41 UDP TRAFFIC EXP 1
FIGURE 42 CLIENT FTP TRAFFIC RECEIVED/SENT
FIGURE 43 FTP TRAFFIC EXP 1
FIGURE 44 MULTICAST TRAFFIC, LAN INBOUND/OUTBOUND
FIGURE 45 NEW DESIGN SCENARIO MAIN TOPOLOGY EXP II
FIGURE 46 CONFERENCING APPLICATION ROUTING WITH GROUP ADDRESS 224.0.6.1
FIGURE 47 FTP APPLICATION ROUTING WITH GROUP ADDRESS 224.0.6.11
FIGURE 48 DATABASE APPLICATION ROUTING WITH GROUP ADDRESS 224.0.6.12
FIGURE 49 DISCRETE EVENT SIMULATION EXP 2
FIGURE 50 GLOBAL STATISTICS PIM-SM EXP 2
FIGURE 51 FLOW ANALYSIS EXP 2
FIGURE 52 EXECUTIVE SUMMARY IP MULTICAST GROUP
FIGURE 53 NET DOCTOR REPORT EXECUTIVE SUMMARY EXP 2
FIGURE 54 OSPF AREAS TOPOLOGY II PIE CHART EXP 2
FIGURE 55 OSPF AREAS BAR CHART EXP 2
FIGURE 56 SURVIVABILITY ANALYSIS EXECUTIVE REPORT EXP 2
FIGURE 57 POINT-TO-POINT QUEUING DELAY (SEC)
FIGURE 58 PIM-SM DETAILS EXP 2
FIGURE 59 VIDEO CONFERENCING AND OSPF NETWORK CONVERGENCE
FIGURE 60 AVERAGE IP TRAFFIC, VLAN, MANAGEMENT TRAFFIC, FTP TRAFFIC
FIGURE 61 FTP DOWNLOAD/UPLOAD RESPONSE TIMES, VIDEO CONFERENCING TRAFFIC
FIGURE 62 IP END-TO-END DELAY TRAFFIC, IP MULTICAST TRAFFIC SENT/RECEIVED/DROPPED
FIGURE 63 DATABASE TRAFFIC


List of tables

TABLE 1 ACRONYMS
TABLE 2 ROUTING PROTOCOL COMPARISON
TABLE 3 ICMP MESSAGE TYPES
TABLE 4 MULTICAST RANGE AND DESCRIPTION
TABLE 5 RESERVED MULTICAST ADDRESSES
TABLE 6 PIM OPERATION
TABLE 7 IP MULTICAST MODEL ARCHITECTURE
TABLE 8 SIMULATION LOG DESCRIPTION EXP 2
TABLE 9 OSPF AREAS TABULAR FORMAT EXP 2
TABLE 10 INTERNET STANDARDS


ACRONYMS
Table 1 Acronyms

BGP           Border Gateway Protocol
BSR           Boot Strap Router
CGMP          Cisco Group Management Protocol
DVMRP         Distance Vector Multicast Routing Protocol
EIGRP         Enhanced Interior Gateway Routing Protocol
Hop Count     Estimated number of hops
ICMP          Internet Control Message Protocol
IETF          Internet Engineering Task Force
IGMP          Internet Group Management Protocol
IS-IS         Intermediate System to Intermediate System
LAN           Local Area Network
MAC           Medium Access Control
MBGP          Multicast extension to Border Gateway Protocol
MBone         Multicast Backbone
MOSPF         Multicast Open Shortest Path First
MRIB          Multicast Routing Information Base
MTU           Maximum Transmission Unit
Multicast     One-to-many communication
OSPF          Open Shortest Path First
Overhead      The average number of control bits generated
Packet delay  The average time required to deliver data packets to the destination
PIM           Protocol Independent Multicast
PIM-DM        PIM Dense Mode
PIM-SM        PIM Sparse Mode
Reliability   Refers in this report to reliable delivery of data
RFC           Request For Comments
RP            Rendezvous Point
RPF           Reverse Path Forwarding
SLA           Service Level Agreement
SPT           Shortest Path Tree
TCP           Transmission Control Protocol
Throughput    Average number of data bits received by multicast group members
UDP           User Datagram Protocol
Unicast       One-to-one communication
VLSM          Variable Length Subnet Masking
WAN           Wide Area Network


CHAPTER I
1 INTRODUCTION
1.1 INTRODUCTION
IP multicast is an efficient way to distribute information from a single source to multiple destinations. Network resources of the existing Internet are limited, and the number of applications that utilize IP multicast services is rising every day. The majority of applications require reliable Quality of Service (QoS) at the cost of the limited available network resources. The continuous rise of applications requiring a one-to-many communication paradigm demands further research into new designs and implementations of IP multicasting. Such research can be supported by using various simulators, which ease the estimation of future network growth and the control over limited resources. Among such limited resources we mention the limited IP address span, limited network bandwidth, and the limited number of multicast-capable routers in a network segment [24, 56, 57]. In this thesis we suggest several design methods, and provide their implementations for achieving reliable IP multicast services, using the popular industrial research tool OPNET Modeler 14.5 [27, 52]. A vast majority of real-time multicast applications are based on the User Datagram Protocol (UDP). Consequently, most of the simulations presented in this report use UDP as the transport protocol in our design models [25, 52, 58, 64].

1.2 PROBLEM DESCRIPTION


The purpose of this work is to identify problem areas of application-level multicast traffic using different routing protocols, and to find an efficient multicast path (i.e., the shortest path from a multicast source to a destination) for delivering multicast traffic [59, 60]. Problems related to path failures, survivability, group management, and performance tuning, along with multicast protocol complexity, are presented in this thesis through various types of built-in reports from OPNET Modeler 14.5.

1.3 INTENDED AUDIENCE


The intended audience for this report consists of researchers, network engineers, and developers who have a strong knowledge of group-based media distribution as well as of efficient multicast distribution mechanisms and multicast routing protocols for a heterogeneous networking environment. It is assumed that the reader has a fair understanding of the TCP/IP architecture, in particular UDP, TCP, and IP, along with an understanding of specialized multicast routing protocols such as Protocol Independent Multicast Sparse Mode (PIM-SM), Protocol Independent Multicast Dense Mode (PIM-DM), the Internet Group Management Protocol (IGMP), the Cisco Group Management Protocol (CGMP), and IGMP group management issues.


Nevertheless, several chapters in this thesis are dedicated to providing basic background information on commonly used routing protocols and configuration issues, with the main focus on multicast routing protocols.

1.4 SCOPE
The scope of this thesis focuses on the multicast methods currently available in OPNET Modeler 14.5 for achieving reliable group communication, Shortest Path Tree (SPT) construction, and reliable data delivery in Local Area Network (LAN)/Wide Area Network (WAN) environments [43, 55, 63]. This thesis also focuses on network design, configuration issues, failure potential, and alternate route diversion for optimal path selection in case of critical network failures.

1.5 MOTIVATION
The motivation behind the design models presented in this report is to discuss issues related to aggressive traffic behavior, network congestion, and heavy bandwidth utilization in an ever-growing, IP multicast-enabled, and resource-demanding network.

1.6 LIMITATIONS
The limitations in this project are as follows [44, 51]:
1. The multicast addressing is restricted to IPv4 only, i.e., no IPv6 is implemented in the current design.
2. Security issues are not taken into consideration in our case studies due to existing restrictions in OPNET.
3. No wireless clients or wireless technology are implemented.
4. Pure PIM-DM is not implemented due to existing restrictions in OPNET.
5. IP tunneling between different zones is not implemented.

1.7 STRUCTURE OF THE REPORT


This report is organized as follows. Chapter 1 briefly explains the problem formulation, the intended audience for this report, the scope of the project and its limitations, the thesis structure, as well as a brief history of IP multicast.

Chapter 2 presents background and related work, and provides a comparison of this thesis work to other work. This chapter also discusses fault recovery, performance tuning, shortest path switch-over, distributed decision making, and the complexity of routing protocols, together with their issues and problems and how this thesis has dealt with them.


Chapter 3 presents basic information about routing protocols, routing types, and the importance of routing algorithms. Several components of various routing protocols, ICMP message types, and error identification methods are presented here.

Chapter 4 describes multicast addressing. It covers multicast address allocation methods, multicast address mapping methods, multicast routing protocols (PIM-SM, PIM-DM, the Multicast Source Discovery Protocol (MSDP), and the Distance Vector Multicast Routing Protocol (DVMRP)), multicast tree construction, interoperability issues, Rendezvous Point (RP) mapping, and the usage of Boot Strap Routers (BSRs). This chapter also describes implementation details, transport methods, group management protocols, the model architecture of IP multicast, multicast tunneling issues, and the performance metrics used in our multicast simulation models.

Chapters 5 and 6 describe the proposed model designs. The simulation design models are explained with the various evaluation methods available in OPNET Modeler. Several types of network traffic, such as FTP, voice, video conferencing, and database traffic, are generated in the designed models, and statistics over traffic utilization are collected after simulating them.

Chapter 7 covers the description of the implementation work with reports, and contains the analysis of different graphs and the comparison between the different routing protocols used in our models and their statistics.

Chapter 8 concludes this report and presents the work done and goals achieved. Future enhancements to this project are also explained in this chapter.

1.8 MULTICAST HISTORY


The IP multicast background history is given in Figure 1. The figure illustrates the multicast technologies ranging from approximately 1992 to 2006. The figure clearly indicates the PIM development and implementation during the period 1996 to 1998 [11].
[Figure 1: Multicast deployment from 1992 to 2006. Timeline labels: MBone overlay multicast deployment; native PIM multicast deployment in ISPs; native PIM multicast on production networks; SSM deployment.]


The ever-growing demand for IP multicasting capability plays a crucial role in everyday real-time streaming applications. Multicast uses UDP, so reliability must be handled by the application [20, 54]. Multicast group addressing allows different hosts to join or leave a multicast group regardless of their current place in the network topology. In this project the multicast source provides a variety of services such as video conferencing, FTP, VoIP, and database access, and hosts can register with any of the services available in the network. The IP multicast solution was originally provided by using IGMP in most of the proposed designs [41]. For this report, applications such as streaming, voice, video, and database traffic were simulated in OPNET Modeler 14.5 by using PIM-SM. The pros and cons of multicast traffic are discussed in detail in the following chapters.


CHAPTER II
2 BACKGROUND & RELATED WORK


2.1 BACKGROUND
Fault recovery and performance tuning of Protocol Independent Multicast (PIM) enabled native IP multicast networks are extremely complex to attain in both LAN and WAN environments [14]. Network performance can be improved by switching from a Rendezvous Point (RP) tree to a Shortest Path Tree (SPT) in multicast-enabled networks [6]. Distributed decision making regarding receiver access control has influence over the SPT and provides better scalability [35]. A multicast routing protocol is considerably more complex than its unicast counterpart, and multicast traffic takes an ever-increasing share of network traffic; improper design models give poor network performance for real-time multicast transmissions [15].

2.2 RELATED WORK


Fault recovery behavior depends on many factors, such as the protocols used, compatibility issues between different network devices and the environment they operate in, and in particular the multicast group management employed. For instance, an IP video conference simulated in OPNET demonstrates the complexity behind multicast traffic forwarding and traffic engineering issues through detailed analysis and evaluation. Simulation results show details of PIM control traffic and provide a comparison of the end-to-end delay for video traffic; that study, however, presents only one type of traffic for analyzing multicast behavior in a LAN/WAN environment. A distributed decision making approach allows routers other than an RP to take part in tree decision making, and provides a practical solution for receiver access control within the PIM-SM multicast routing protocol. Distributing the tree is a key management task for all RPs in a network, and this approach provides predictable results. Delivering data to multiple receivers in an efficient way requires that multicast is implemented in that network segment; multicast routing is independent of other routing methods, and is far more complex than unicast routing. The multicast performance analyzer in OPNET monitors multicast messages as well as other details of the employed multicast routing protocol. Experiment I in our thesis shows the important task of assigning multiple RPs in a network. These PIM routers assist in distributing the multicast traffic to the entire network according to the requests obtained from a particular receiver.

2.3 COMPARISON TO THIS WORK


The proposed design models in this thesis use different routing protocols, group management, path failure recovery, performance tuning, survivability analysis, etc. Both qualitative and quantitative analyses of our design models are made using OPNET. PIM-SM control traffic, OSPF, EIGRP, and RIP end-to-end delay, network delay, and throughput are presented in the results section of Chapter VII. The failure scenarios consider single or multiple simultaneous device failures. The automatic web report facility was utilized to ease the understanding of a failure and of the survivability validation analysis. This thesis considers different types of real-time and non-real-time traffic along with video conferencing traffic. In particular, the flow of multicast traffic through the SPT is clearly shown in our second experiment.


CHAPTER III
3 ROUTING PROTOCOLS


[Figure 2 Routing Protocols: hierarchy of routing protocols, split into interior protocols (distance vector: RIP, IGRP; link state: OSPF; hybrid: EIGRP) and exterior protocols (BGP, EGP).]

Figure 2 illustrates a possible hierarchy of different routing protocols. Routing protocols are used to route data packets along best, loop-free paths; their responsibilities include adding new routes and replacing lost ones. Routing is the process of forwarding packets from a source to a destination in a systematic way. Most routing protocols in use today are based on one of two shortest path algorithms: Dijkstra's algorithm or the Bellman-Ford algorithm.

A least-cost-path algorithm computes the shortest path from a source to all other nodes in a network; it is also called a link state algorithm, named for its capability of showing link status through update messages exchanged between routers. One such algorithm is Dijkstra's algorithm, which uses an iterative procedure to find the least-cost paths to all destination nodes. Link state routing sends information about all directly connected links to every router using link state packets. The algorithm provides complete cost information about each link in a network, since the least cost among all possible paths is available. The advantages are quick response to topology changes and node failures, at the price of computation overhead and extra space requirements [69].

The Bellman-Ford algorithm finds the shortest path using one of two approaches: centralized or distance vector. In the centralized approach, the least cost is calculated from each node to a destination node, and the same procedure is repeated once per destination. The iterations may run asynchronously or all at once. In the distance vector approach, each node maintains a table of distance, destination, and next-node information, and periodically triggered updates are sent to all neighboring nodes when the vector changes. This method is used in ARPANET, RIP, DECnet, and Novell IPX. The distance vector approach suffers from infinite-loop and propagation delay problems, which can be mitigated by selecting loop-free paths and using split-horizon techniques [65, 69].
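To make the two algorithm families concrete, the following minimal Python sketch implements Dijkstra's least-cost-path computation, the core of any link state protocol. The topology, router names, and link costs are invented for illustration and do not correspond to the thesis models.

import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> {neighbor: link cost}.
    # Returns the least-cost table and previous-hop table, as a
    # link state router would derive them from its topology database.
    costs = {node: float("inf") for node in graph}
    prev = {node: None for node in graph}
    costs[source] = 0
    visited = set()
    queue = [(0, source)]  # priority queue ordered by path cost
    while queue:
        cost, node = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < costs[neighbor]:  # relax the edge
                costs[neighbor] = new_cost
                prev[neighbor] = node
                heapq.heappush(queue, (new_cost, neighbor))
    return costs, prev

# Illustrative four-router topology with symmetric link costs.
topology = {
    "R1": {"R2": 1, "R3": 4},
    "R2": {"R1": 1, "R3": 2, "R4": 7},
    "R3": {"R1": 4, "R2": 2, "R4": 3},
    "R4": {"R2": 7, "R3": 3},
}
costs, prev = dijkstra(topology, "R1")
print(costs)  # {'R1': 0, 'R2': 1, 'R3': 3, 'R4': 6}

A distance vector router arrives at the same answers using only the vectors supplied by its neighbors instead of a full topology database, which is exactly what exposes it to the count-to-infinity problem discussed above.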

A routing algorithm specifies the best path for data packets to take from a source to a destination by maintaining a so-called routing table. The forwarding method for data packets differs between connection-oriented and connection-less services. The selection of a routing algorithm is based upon networking requirements such as accuracy, simplicity, optimality, stability, convergence, load balancing, etc. [1, 13, 55, 64].
A brief comparison of several routing protocols is given in Table 2 [1, 9].

Table 2 Routing protocol comparison

Property                    EIGRP                      OSPF                       IS-IS               BGP
Method                      Advanced distance vector   Link state                 Link state          Path vector
Summary                     Auto / arbitrary           Arbitrary                  Arbitrary           Arbitrary
VLSM                        Yes                        Yes                        Yes                 Yes
Convergence                 Seconds                    Seconds                    Seconds             Seconds
Timers (update/hello/dead)  Triggered                  LSA refresh every 30 min   Triggered (10/30)   Triggered (60/80)

3.1 INTERIOR GATEWAY ROUTING PROTOCOLS


In Interior Gateway Routing, routing information is exchanged within the same routing domain. IGPs can be classified into three different classes: distance vector, link state, and hybrid routing protocol algorithms. The Enhanced Interior Gateway Routing Protocol (EIGRP), the Routing Information Protocol (RIP), and Open Shortest Path First (OSPF) are examples of interior gateway routing protocols. Such protocols can also be divided based on their addressing paradigm, i.e., classful or classless addressing [39, 55].

3.1.1 DISTANCE VECTOR ROUTING


In the Distance Vector Routing Algorithm (DVRA), routes are selected based on an agreed distance metric between networks. The distance metric can be the number of hops, or routers, between them. The distance to all known networks is regularly recalculated using the distance vector routing algorithm. DVRA is also known as the Bellman-Ford algorithm. RIP and IGRP are examples of distance vector routing protocols [2]. In experiments I and II, the RIP protocol was configured at the high-end servers.

3.1.2 LINK STATE ROUTING


In the link state algorithm, routes are dynamically selected based on the shortest path first method. Routers maintain current topology information by exchanging link state information through the network. Routes calculated using link state are based on realistic metrics, such as true cost. A link state router keeps immediate neighbor information for all other networks, as well as the best path to each destination. OSPF and IS-IS are examples of link state routing protocols [2]. In experiment II, all routers are configured with OSPF as the routing protocol, using OSPF's area concept. Some of the end devices are configured with RIP as the routing protocol; RIP was selected because of its simple configuration, widespread usage and support, and low computational overhead. As the network size grows it is advisable to move to OSPF, although the configuration complexity increases proportionally.

3.1.3 HYBRID ROUTING


Hybrid routing protocols are balanced routing protocols which make use of both distance vector and link state routing mechanisms for routing and for maintaining neighbor relationships with other routers in a network. Hybrid routing protocols allow rapid convergence while requiring less processing power in routers. For instance, EIGRP is a balanced routing protocol which uses a distance vector technique for a more accurate routing metric, and a link state technique for better neighbor information exchange [64]. In experiment I, EIGRP is configured between the devices within each subnet, while BGP is configured between the three subnets.

3.2 EXTERIOR GATEWAY ROUTING PROTOCOLS


Exterior Gateway Routing Protocols (EGRP) are used to exchange routing information between different domains; the Border Gateway Protocol (BGP) is the main example. BGP uses a path vector algorithm [50, 54]. A path vector algorithm changes the operational behavior of the protocol by including path information, which improves convergence time. In a path vector algorithm, a node receives the full path information as well as the distance from its neighbor. BGP advertises the complete path information as a list of Autonomous Systems (AS) in its routing mechanism. Paths with loops are detected locally, and routing policies can be expressed over the AS path when using BGP as a path vector protocol. Best path computation is possible with a path vector algorithm [68]. In our experiment I, BGP is configured between the Ronneby subnet and the Karlskrona and Karlshamn subnets.
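To illustrate the path vector idea, the short sketch below shows the local loop check a BGP speaker performs on a received advertisement: if its own AS number already appears in the AS path, the route is discarded. The AS numbers and prefix are invented for the example.

def accept_route(local_as, prefix, as_path):
    # Path vector loop detection: a route whose AS_PATH already
    # contains the local AS number has looped and must be discarded.
    if local_as in as_path:
        return False
    return True

# A router in AS 65001 receiving two advertisements for 10.0.0.0/8:
print(accept_route(65001, "10.0.0.0/8", [65002, 65003]))         # True
print(accept_route(65001, "10.0.0.0/8", [65002, 65001, 65000]))  # False (loop)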

3.3 CLASSLESS ROUTING


Classless routing, also called Classless Inter-Domain Routing (CIDR), allows each router interface to be configured with different networks based on arbitrary bit boundaries. Aggregation and summarization are key features of CIDR: summarization reduces the size of the routing table, whereas aggregation provides scalability. Classless routing provides extensibility in IP addressing, such as Variable Length Subnet Masking (VLSM), along with Fixed Length Subnet Masking (FLSM). By using classless routing, IP address space can be conserved [38, 52, 64].
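As a small illustration of summarization, the sketch below uses Python's standard ipaddress module to collapse four contiguous /26 subnets into a single /24 summary route; the prefixes are arbitrary examples, not addresses from the thesis models.

import ipaddress

subnets = [
    ipaddress.ip_network("192.168.1.0/26"),
    ipaddress.ip_network("192.168.1.64/26"),
    ipaddress.ip_network("192.168.1.128/26"),
    ipaddress.ip_network("192.168.1.192/26"),
]
# collapse_addresses merges contiguous prefixes into the shortest
# covering set -- here a single summary route.
print(list(ipaddress.collapse_addresses(subnets)))
# [IPv4Network('192.168.1.0/24')]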


3.3.1 OPEN SHORTEST PATH FIRST


OSPF is a classless, open standard IETF link state routing protocol. OSPF uses flooding for updating its link state information, and Dijkstra's algorithm for finding the shortest, least-cost path between nodes. OSPF is mostly used for intra-AS routing. OSPF runs a shortest path first (SPF) algorithm against its own topology database to calculate the best route to each network. OSPF has the added advantage of fast convergence in case of configuration changes. The neighbor database is maintained with Hello packets [9, 19].

3.3.2 BORDER GATEWAY PROTOCOL


BGP is the complex inter-domain routing protocol of the Internet. Internet Service Providers (ISPs) use BGP to establish and share routing information among different autonomous systems. BGP is a path vector protocol, in which routing information is stored as a combination of a destination and the path attributes to that destination. BGPv1 was a classful routing protocol without support for route aggregation; BGP-4 supports CIDR. BGP messages are sent over reliable transport connections, with a maximum BGP message size of 4096 bytes. BGP operates in two modes: External BGP (eBGP), which is used between different ASs, and Internal BGP (iBGP), which is used within the same AS for exchanging BGP updates. BGP speakers establish peer relationships with each of their neighboring routers using eBGP and iBGP [20]. The eBGP/iBGP concept is implemented in experiment I and presented in Chapter V, Section 5.1.

3.4 CLASSFUL ROUTING


Classful routing forwards packets based on the default network address class boundary. The major drawback of this addressing technique is an enormous waste of IP addresses [39, 52].

3.5 STATIC AND DYNAMIC ROUTING


3.5.1 STATIC ROUTING
In static routing, the path between a source and a destination node is fixed, and no routing protocol is required for exchanging routing information. A static route has to be configured manually in all routers, and is simple to configure and maintain. Static routing is not scalable and does not adapt the routes in case of topology changes; explicit addition of routes to all networks is needed. Static routing is relatively secure, because only well-known, explicitly defined routes can reach a particular network [52, 54, 55]. For the sake of comparison, static routing is also implemented in experiment I, alongside eBGP between neighboring routers.

3.5.2 DYNAMIC ROUTING


In dynamic routing, the state of the network is dynamically propagated after each and every route update. In this method, each router can recalculate the best path to a destination after receiving new neighbor information. Dynamic routing protocols are classified into Interior Gateway routing Protocols (IGP) and Exterior Gateway routing Protocols (EGP) [7, 52, 54, 55].

3.6 ROUTING ALGORITHMS


3.6.1 GLOBAL ROUTING ALGORITHM
In global routing, the least-cost path is determined from source to destination by using a shortest path routing algorithm. This type of routing algorithm has complete information about link costs and connection types, and is also known as a link state routing algorithm. Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS) are examples of link state routing protocols. Link state has several advantages over distance vector algorithms: it is more stable, converges faster, discovers the network topology faster, and provides better source routing and routing services. Distance vector suffers from count-to-infinity, slower network convergence, and routing loop problems [13, 21, 31]. The shortest path from source to destination is presented in experiment II, for four different types of traffic, using OSPF as the main routing protocol. With distance vector, the periodic exchange of routing information between neighboring routers causes routing information to propagate very slowly, which contributes to slower network convergence. Network convergence is triggered by a change in either router status or link status. Routing loops arise when wrong routing information spreads throughout the network, either by passing the same information recursively or by never being able to reach a destination (a phenomenon also known as count to infinity) [5].

3.7 HIERARCHICAL ROUTING


The growth of routing tables and of the number of nodes in a network creates scalability problems. The large number of flooded packets causes improper communication between routers and large protocol overhead, which is already prohibitive in today's Internet. Several routing problems can be solved by using hierarchical routing techniques. Hierarchical routing is achieved by segmenting the network and using Autonomous System (AS) routing; this way, the burden of the routing process is reduced. An AS consists of a group of routers sharing routing services under the same administrative control.


3.8 IP COMPONENTS
Internet addressing and forwarding are the most important components of the Internet Protocol. IP addressing is of two types: IPv4 [RFC 791] and IPv6 [RFC 2460, RFC 3513]. All experiments performed in this thesis are restricted to IPv4 addressing.

3.9 ERROR IDENTIFICATION


The Internet Control Message Protocol (ICMP) is an error and information reporting protocol. It is used by the Packet InterNet Groper (PING) command to send ICMP echo request and echo response messages, to check connectivity between hosts and routers, and by communicating parties to retrieve network-related information. Foremost, ICMP is used for error reporting. Several examples of network-related information available through ICMP are: destination host unreachable, unable to find path to host, destination host unknown, and Time to Live (TTL) expired. The connectivity between different network devices was tested in this thesis using the PING command [3, 66].
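Such connectivity checks are easy to script. The sketch below is a minimal example that shells out to the system ping utility; it assumes Linux-style -c/-W flags, and the host addresses are placeholders rather than addresses from our models.

import subprocess

def is_reachable(host, count=2, timeout_s=2):
    # Send ICMP echo requests via the system ping command and report
    # success; raw ICMP sockets would require elevated privileges.
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for host in ("192.168.0.1", "192.168.0.254"):  # placeholder addresses
    print(host, "reachable:", is_reachable(host))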

3.9.1 ICMP MESSAGE TYPES


ICMP message types are classified as either error messages or queries and responses, and are presented in Table 3 [66].

Table 3 ICMP Message Types

ICMP Type   Code   Description
0           0      Echo reply (to ping)
3           0      Destination network unreachable
3           1      Destination host unreachable
3           2      Destination protocol unreachable
3           3      Destination port unreachable
3           6      Destination network unknown
3           7      Destination host unknown
4           0      Congestion control (source quench)
8           0      Echo request
9           0      Router advertisement
10          0      Router discovery


CHAPTER IV
4 MULTICAST ADDRESSING


According to the Internet Assigned Numbers Authority (IANA), the assignment of IP multicast addresses is given in Table 4 and Table 5. A class D address is indicated by the high-order 4 bits set to binary 1110, as illustrated in Figure 3. The class D address range for IP multicast is 224.0.0.0 - 239.255.255.255, and all multicast addresses have to be within these limits. This address range is used only for the group address, or destination address, of IP multicast traffic [20, 23, 47].

[Figure 3 Class D range IP multicast: the 4 high-order bits 1110 followed by a 28-bit multicast group ID.]

Table 4 Multicast range and description

Starting Range   Ending Range      Description
224.0.0.0        224.0.0.255       Locally scoped
224.0.1.0        238.255.255.255   Globally scoped
239.0.0.0        239.255.255.255   Limited scoped

Locally scoped addresses are reserved for network protocol use, and routers will not forward data addressed to them: multicasts to locally scoped addresses never leave the local network. These addresses are also used for stateless autoconfiguration of hosts.
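The scope boundaries of Table 4 can be restated as a small Python helper using the standard ipaddress module; this is only an illustration of the table, not code from the thesis models.

import ipaddress

def multicast_scope(addr):
    # Classify an IPv4 multicast address by the scopes of Table 4.
    ip = ipaddress.ip_address(addr)
    if not ip.is_multicast:
        raise ValueError(f"{addr} is not a class D (multicast) address")
    if ip in ipaddress.ip_network("224.0.0.0/24"):
        return "locally scoped"   # link local, never forwarded
    if ip in ipaddress.ip_network("239.0.0.0/8"):
        return "limited scoped"   # administratively scoped
    return "globally scoped"

print(multicast_scope("224.0.0.5"))      # locally scoped (all OSPF routers)
print(multicast_scope("239.1.1.1"))      # limited scoped
print(multicast_scope("224.2.127.254"))  # globally scoped (MBone range)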

Table 5 Reserved Multicast addresses

S No | Reserved Multicast Address | Description
1    | 224.0.0.1                  | All multicast-enabled systems in a network
2    | 224.0.0.2                  | All multicast-enabled routers in a network
3    | 224.0.0.4                  | All DVMRP (Distance Vector Multicast Routing Protocol) routers
4    | 224.0.0.5                  | All OSPF routers
5    | 224.0.0.6                  | All OSPF Designated Routers
6    | 224.0.0.9                  | All RIPv2 routers
7    | 224.0.0.10                 | All EIGRP routers
8    | 224.0.0.13                 | All PIMv2 routers


Table 5 lists the IANA-recommended reserved link-local addresses for network protocols on a local network segment. Packets with these addresses should never be forwarded by a router, but should remain local to a particular network segment. Globally scoped addresses are assigned dynamically over the Internet. The 224.2.X.X address range is used by Multicast Backbone (MBone) applications. The MBone concept was introduced by the IETF in 1992 to support applications with large numbers of users, such as audio and video meetings, through a virtual multicast channel. Routers enabled with multicast features are called MBone routers. Encapsulating multicast packets inside IP packets and transferring the multicast data over the existing physical media through tunnels between the islands of MBone subnets remains an in-demand feature for ubiquitous multicast applications. Normal IP routers on which multicast capability is disabled, e.g., for the sake of bandwidth tracking and billing, can make use of the MBone concept to provide multicast services over the Internet [34].

4.1 MULTICAST ADDRESS ALLOCATION


4.1.1 DYNAMIC ALLOCATION
Dynamic allocation assigns group addresses to multicast applications on demand. It provides an address for a specified lifetime, according to need and demand. Applications obtain this type of address depending on their needs [52].

4.1.2 STATIC ALLOCATION


Static allocation is a well-known addressing scheme for specific protocols. These addresses are assigned by IANA for global interworking, and each addressing scheme is tied to a specific protocol [52, 47].

4.2 MULTICAST ADDRESS MAPPING


The IEEE 802.3 MAC address structure is illustrated in Figure 3.

Figure 3 IEEE 802.3 multicast MAC address structure: the OUI 01-00-5E, one bit fixed to 0, and the low-order 23 bits of the group address

The fields are defined as follows. OUI: the Organizational Unique Identifier is always 01-00-5E. The remaining 23 bits carry the mapping between the destination IP multicast address and the MAC address, and vice versa.


The multicast MAC address is derived from the IP multicast address: the destination IP address of an IP multicast packet maps to a multicast MAC address [52].
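This derivation is easy to state in code. The sketch below maps an IPv4 group address to its IEEE 802.3 multicast MAC by appending the low-order 23 bits of the group address to the fixed 01-00-5E(-0) prefix; because only 23 of the 28 group-ID bits are copied, 32 IP groups share each MAC address:

    import ipaddress

    def multicast_mac(group):
        """Map an IPv4 multicast group address to its IEEE 802.3 MAC:
        the fixed 25-bit prefix 01-00-5E-0 followed by the low-order
        23 bits of the group address."""
        low23 = int(ipaddress.ip_address(group)) & 0x7FFFFF
        return "01-00-5E-%02X-%02X-%02X" % (
            low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)

    print(multicast_mac("224.0.0.5"))    # 01-00-5E-00-00-05
    print(multicast_mac("239.128.0.5"))  # 01-00-5E-00-00-05 (same MAC!)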

4.3 MULTICAST ROUTING PROTOCOLS


4.3.1 MULTICAST ROUTING PROTOCOLS
Multicast routing protocols have a major role in constructing a distribution tree for forwarding multicast packets. The basic approaches used by most multicast protocols for building distribution trees are Sparse Mode (SM) and Dense Mode (DM) [24, 26, 34, 67].

4.3.2 PROTOCOL INDEPENDENT MULTICAST VERSIONS


PIMv1: Provides automatic RP discovery with Auto-RP (Cisco proprietary).
PIMv2: Automatic RP discovery is accomplished by the bootstrap router method (standards based).

4.3.3 PROTOCOL INDEPENDENT MULTICAST


The Protocol Independent Multicast (PIM) protocol is independent of the unicast routing protocols already running in a network. PIM is also the most widely used Internet multicast routing protocol. PIM in conjunction with MBGP (the multicast extension to BGP) provides scalable multicast solutions for enterprise-wide multicast-enabled networks. The PIM router does not participate in updating the routing information. MBGP uses BGP to keep an additional Routing Information Base (RIB) for protocols other than IPv4 unicast. The Multicast RIB (MRIB) allows unicast and multicast routing to coexist in the same place when performing the RPF check [20, 26, 51, 53, 54].

4.3.4 MSDP
Multicast Source Discovery Protocol (MSDP) can be used to connect RPs between different SM domains. Each PIM-SM domain uses its own RP and need not depend on the RPs of other domains [5]. Tree construction protocols: PIM-SM, PIM-DM, and sparse-dense mode.
Table 6 PIM operation

Mode of Operation | Description
PIM-DM            | Tree initialized with all multicast-enabled routers; branches are pruned back if IGMP is absent
PIM-SM            | Tree with a shared point, the Rendezvous Point (RP), between source and destination
SPARSE-DENSE      | Tree can operate in either Sparse Mode or Dense Mode


Table 6 [51] describes the PIM modes of operation.

4.3.5 DENSE MODE PROTOCOLS


PIM Dense Mode (PIM-DM) is mainly designed for multicast LAN applications based on source trees. It uses a technique called flood and prune in order to stop sending multicast packets to stations no longer interested in receiving multicast traffic. It is best suited to environments where a large number of hosts want to receive the multicast data and the network segment can cope with the flooding. PIM-DM initially floods traffic out of all non-RPF interfaces where there is another PIM-DM neighbor or a directly connected member of the multicast group. As each router receives multicast traffic via its RPF interface (the interface in the direction of the source), it forwards the multicast traffic to all of its PIM-DM neighbors. This approach results in some traffic arriving via a non-RPF interface; when this happens, the packets arriving via the non-RPF interface are discarded. Prune messages are also sent on non-RPF interfaces to shut off the flow of multicast traffic. Prune messages are sent on an RPF interface only when the router does not have any downstream receivers for the multicast traffic. In PIM-DM, all prune messages expire after 3 minutes; after that, the multicast traffic is flooded again to all of the participating routers. This periodic flood-and-prune behavior is normal and must be taken into consideration when a network is designed to use PIM-DM. When performing RPF checks, PIM-DM can use unicast routing tables populated by OSPF, IS-IS, or BGP, or a special multicast RPF table populated by MBGP or M-ISIS [54, 61, 62, 64].

4.3.6 DISTANCE VECTOR MULTICAST ROUTING PROTOCOL


DVMRP is the oldest and most widely used reverse-path-flooding multicast protocol of the Internet. DVMRP functions within an autonomous system (AS) and routes multicast datagrams within that AS only, not between autonomous systems. It uses reverse path flooding to forward multicast data and pruning to reduce the amount of traffic sent [5].

4.3.7 SPARSE MODE PROTOCOLS


PIM Sparse Mode (PIM-SM) is defined in RFC 2362. PIM-SM uses shared multicast distribution trees, but it can also switch to the SPT. In PIM-SM, traffic is forwarded only to those parts of the network that need it. PIM-SM uses an RP to coordinate the forwarding of multicast traffic from a source to the receivers. Senders register with the RP and send a single copy of the multicast data through it to the registered receivers. Group members join the shared tree through their local Designated Router (DR); a shared tree built this way is always rooted at the RP. PIM-SM is appropriate for wide-scale deployment, for both densely and sparsely populated groups in the enterprise network, and it is the optimal choice for all production networks, regardless of size and membership density. Sparse mode protocols are widely used in network environments where multicast receivers are sparsely distributed across the network and bandwidth is limited.


PIM-SM is currently the most scalable IP multicast routing protocol. In this case, group members are spread sparsely throughout the network, where flooding would cause unnecessary bandwidth and performance problems. The distribution tree starts out empty, and branches are added depending on requests from hosts. Information is passed from a sender to a receiver through a common point called the RP. Receivers always need to register with the RP in order to request data from the multicast source. In the SM environment, an optimized path from sender to receiver is always calculated. PIM bidirectional mode is designed for many-to-many applications. Source Specific Multicast (SSM) is a variant of PIM-SM that builds only source-specific SPTs and does not need an active RP [20, 34, 51, 52].

Interoperability: PIM-SM, PIM-DM, and DVMRP can operate at the same time on the same interface, using either IPv4 or IPv6 addressing. In DM the multicast source broadcasts the data throughout the network and the unwanted branches are pruned back, whereas in SM only receivers that have requested the data are eligible to receive multicast traffic.

Rendezvous Point (RP): The RP is a central place to which all senders and receivers send their multicast information. Receivers always have to send join messages to the RP, and senders start sending according to the requests from the respective hosts. The RP should always be placed at an optimal point in the network. The switch to the SPT occurs automatically after the first transmission attempt [51, 52].

Characteristics of an RP:
1. Defined on all multicast-enabled routers
2. Reachable by all sources and destinations
3. Located at an optimal point in the network

Groups-to-RP mapping: There are several possible ways of mapping between multicast groups and RPs, as follows (a small sketch of such a mapping is given after this list):

Static RP: It must be configured on all routers. The configuration information is local to the router, and no protocol propagates this information through the network. At least one static RP must be configured for a multicast group operating in SM [51, 52].

Auto-RP: A Cisco proprietary protocol that must be configured with candidate RPs and so-called mapping agents. Cisco's reserved address space assigned by IANA for this purpose is 224.0.1.39 for announcements and 224.0.1.40 for discovery. It simplifies the automatic distribution of RP-to-group mappings for different multicast groups, and it avoids misconfiguration issues between similar RPs or between mapping agents. A Cisco router automatically listens for RP-related information. It is mostly used with PIM-DM or with PIM sparse-dense mode [51, 52].

Bootstrap Router (BSR): The BSR capability was added in PIM version 2 and is enabled by default on Cisco routers supporting PIMv2. It simplifies the Auto-RP process. As it supports only PIMv2, interoperability issues may arise. A single BSR is elected from multiple candidate BSRs. The BSR periodically sends BSR messages to all routers in the network, and the candidate BSR with the highest priority is elected as BSR.
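As a loose illustration of static group-to-RP mapping, the sketch below resolves a group address against configured group ranges. The group ranges and RP addresses are invented, and real routers perform this lookup inside PIM rather than in user code:

    import ipaddress

    # Invented static group-range-to-RP mapping; with static RP this is
    # configured locally on every router, no protocol propagates it.
    RP_MAP = {
        ipaddress.ip_network("224.0.6.0/24"): "10.0.0.1",
        ipaddress.ip_network("232.0.0.0/8"): "10.0.0.9",
    }

    def rp_for(group):
        """Return the RP for a group via longest-prefix match over the
        configured group ranges, or None if no RP is configured."""
        ip = ipaddress.ip_address(group)
        matches = [net for net in RP_MAP if ip in net]
        if not matches:
            return None
        return RP_MAP[max(matches, key=lambda net: net.prefixlen)]

    print(rp_for("232.32.32.32"))  # 10.0.0.9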


Redundancy through RP configuration methods: a static RP cannot provide any redundancy. Auto-RP and BSR may each provide some redundancy in a different way: similar to Auto-RP, candidate RPs are configured under BSR and the information is propagated through the network. This approach requires PIMv2 or above [24, 36, 39].

4.3.8 PIM SPARSE-DENSE MODE


Maximum efficiency can be achieved by keeping multiple RPs at optimal locations, but such a multi-RP approach is complex to configure and troubleshoot. PIM sparse-dense mode provides automatic selection of RPs for each multicast scenario. It thereby addresses several problems: the scalability limits and heavy router resource consumption of PIM-DM, and the limited RP configuration options of PIM-SM. PIM sparse-dense mode mostly retains the DM features, so automatic RP discovery is implemented. This method also permits Auto-RP and BSR, along with statically defined RPs.

Figure 4 PIM Sparse-Dense Mode

Figure 4 [20 (module 7.3.7, Figure 1), 53] illustrates two multicast sources, each with an RP in an optimal location. Manual configuration of RPs in such a setup is difficult to manage and troubleshoot; PIM sparse-dense mode instead supports automatic selection of RPs for each multicast group. Router A in the figure could be the RP for source 1, and router F could be the RP for source 2. PIM sparse-dense mode is the solution recommended by Cisco for IP multicast, because PIM-DM does not scale well and requires heavy router resources, while PIM-SM offers limited RP configuration options. If no RP is discovered for a multicast group and none is manually configured, PIM sparse-dense mode operates in DM.


Designated Router: Controls the multicast routes on a directly connected network. When multiple PIM routers are present in a network, the router with the highest IP address becomes the DR [5, 52].

4.4 ROUTING MULTICAST TRAFFIC


Identifying multicast traffic is more complex than unicast or broadcast traffic, because multicast-enabled routers have to translate multicast addresses to host addresses. The routers must interact with each other in order to exchange their status with neighboring routers and to establish the shortest paths among themselves. Before forwarding multicast traffic, the establishment of distribution trees among designated routers and multicast-enabled devices is mandatory. The tree-building process may be performed in several ways, but the most common approaches are the source-specific tree and the shared tree. Source-specific tree: multiple delivery trees are built from a source to the receivers, most of them along the shortest path from source to receiver. Shared tree: the path is shared by different receivers and senders and originates at one common point, the RP. Pruning: heavily flooded or unwanted multicast traffic can be removed using a technique called pruning; PIM-DM uses the flood-and-prune reverse path forwarding technique [51, 54, 57].

4.4.1 TRANSPORT PROTOCOLS AND MULTICASTING


In the TCP/IP architecture, the two transport layer protocols have a major role in information transmission. TCP guarantees data delivery, whereas UDP does not guarantee data transfer but is faster. UDP is a connectionless, unreliable mechanism that does not provide ordered delivery of packet contents. A vast majority of multicast applications use UDP as their transport layer protocol, which also implies that multimedia applications are error prone; at the same time, these applications experience little delay. Multicast is UDP based, so reliable delivery of data is not possible with best-effort delivery. The lack of TCP windowing and slow-start techniques can result in congestion. Occasional duplication of packets is also observed in multicasting, along with out-of-order packet delivery; the network topology has a great impact on the order of delivery in multicast transmission. UDP can perform error detection to a certain extent, but not at the same level as TCP, and the lack of a handshaking mechanism makes UDP an unreliable protocol. Even so, UDP provides useful services compared to TCP: faster delivery, support for a larger number of receivers in a multicast-enabled network, smaller segment overhead, etc. Streaming media, audio and video conferencing, and software upgrades to large groups of receivers gain an added advantage from UDP-based multicast transmission. FTP transfers over multicast have the advantage of lower network bandwidth usage. Multicast transmission is used for better bandwidth utilization, less host/router processing, and simultaneous delivery of data streams [4, 64].
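To make the transport discussion concrete, the following minimal Python sketch sends one best-effort UDP datagram to a multicast group. The group 232.32.32.32 (used later in experiment I), the port, and the TTL of 8 are example values; note that nothing acknowledges or retransmits the datagram:

    import socket

    GROUP, PORT = "232.32.32.32", 5004   # example group and port
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                         socket.IPPROTO_UDP)
    # TTL limits how many multicast-router hops the datagram may cross.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
    # A single send reaches every registered group member; there is no
    # acknowledgement, retransmission, or ordering guarantee.
    sock.sendto(b"media frame", (GROUP, PORT))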

4.5 MULTICAST FORWARDING


Multicast sources send traffic to an arbitrary group of hosts specified in the destination address field of an IP packet. All multicast-capable routers must create a distribution tree in order to control the IP multicast flow before sending multicast traffic into a network. All multicast routing protocols make use of the Reverse Path Forwarding (RPF) check to decide whether to forward or drop IP multicast packets. At Layer 2, switches forward frames based on multicast MAC addresses. A multicast router always forwards data from a source (the upstream direction) to the participating receivers (the downstream direction) after performing the RPF check [51].
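A schematic version of the RPF check just described, assuming a simplified unicast table keyed by exact source prefix (a real router would perform a longest-prefix match on the source address):

    def rpf_check(source_prefix, arrival_ifc, unicast_table):
        """Accept a multicast packet only if it arrived on the interface
        this router would itself use to reach the source; otherwise the
        packet is dropped, which prevents forwarding loops."""
        return unicast_table.get(source_prefix) == arrival_ifc

    # Invented unicast table: source prefix -> interface toward it.
    table = {"10.1.0.0/16": "eth0", "10.2.0.0/16": "eth1"}
    print(rpf_check("10.1.0.0/16", "eth0", table))  # True  -> forward
    print(rpf_check("10.1.0.0/16", "eth1", table))  # False -> drop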

4.6 MULTICAST TRAFFIC ENGINEERING


The traffic flow in multicast-enabled networks depends on various factors. The most common issues are as follows [51]:
a) the position of the RP in the network
b) the multicast routing configuration
c) the IP unicast routing table

4.6.1 RP POSITION
Optimal placement of an RP in a network is necessary in order to get the best performance. RP placement plays a crucial role when the router's default behavior is changing. In our scenarios, experiment I has three RPs defined, while experiment II has one RP defined.

4.6.2 MULTICAST ROUTING CONFIGURATION


The interface configuration has to be changed depending on the demands and on the performance analysis of the throughput. Most of the newest network devices come with extensive support for multicast configuration.

4.6.3 DEPENDENCY ON UNICAST


Multicast packets are sent from a source to a receiver as unicast packets before the switchover to the shortest path. The shortest path tree route can be modified by making changes to the unicast routing table.


4.7 IGMP
The multicast data flow between two IP sub-networks can be controlled by the Internet Group Management Protocol (IGMP), which allows receivers to subscribe to a particular multicast group. IGMP is an integral part of IP. Hosts that want to receive multicast data have to inform their immediately neighboring routers of their interest in a multicast transmission through IGMP messages. A multicast-enabled router periodically checks for new group members on each configured network. Hosts use IGMP to register with the router, e.g., to join and leave a specific multicast group, and by using IGMP multicast routers keep track of multicast memberships. There are three versions of this protocol [24, 27, 49, 51].

4.7.1 IGMP v1
Multicast-enabled routers periodically send queries to the 224.0.0.1 multicast address in order to discover hosts waiting for multicast traffic. In this version, the router cannot by itself identify a host that has left a particular multicast group. IGMP messages are of fixed size and are encapsulated in IP datagrams. This version was widely deployed on the Internet [49, 53, 64].

4.7.2 IGMP v2
This version adds a special IGMP leave message, sent by hosts no longer interested in multicast traffic. With the added support for low leave latency, routers can identify hosts that are no longer interested in group membership. Queries can be sent to a specific multicast group instead of to the all-hosts address [41, 46, 49]. Most of the experiments in this thesis use IGMPv2 as the group management protocol.

4.7.3 IGMP v3
The most mature version of IGMP adds source filtering: besides reporting its membership to the nearby router, a host can express from which sources it will accept multicast traffic, which also provides support for SSM. Version 3 is interoperable with versions 1 and 2 [49, 53, 64].

4.7.4 IGMP MESSAGES


There are three types of IGMP messages: query, membership report, and leave report. Membership reports are sent to the group multicast address. The leave report is used to inform all routers on a particular network, using 224.0.0.2 as the standard address.


Host membership query: all systems on a particular network are addressed using 224.0.0.1 as the standard address. These membership queries are further divided into 2 subtypes:
a) General query: used to learn which groups have members on an attached network.
b) Group-specific query: used to learn whether a specified group has members on an attached network [52].

4.7.5 IGMP PROTOCOL OPERATION


A host sends an IGMP report message when it joins a multicast group. After this first message, no report is sent when the host leaves the group. A multicast router multicasts an IGMP query to all hosts at regular intervals, and interested hosts respond with an IGMP report message.
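On an end host, joining a group is a socket operation. In the sketch below, setting IP_ADD_MEMBERSHIP makes the host's IP stack emit the IGMP membership report discussed above; the group and port are example values:

    import socket
    import struct

    GROUP, PORT = "232.32.32.32", 5004   # example group and port
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                         socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Joining the group causes the host's IP stack to send an IGMP
    # membership report to the local multicast router.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))  # any local interface
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, sender = sock.recvfrom(2048)  # blocks until group traffic arrives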

4.8 CGMP
Cisco Group Management Protocol (CGMP) is a protocol developed by Cisco that allows Catalyst switches to learn about the existence of multicast clients from Cisco routers and Layer 3 switches. Cisco Catalyst switches use CGMP to forward traffic at Layer 2 based on IGMP operations. With CGMP running, any router receiving a multicast join message via a switch replies to the switch with a CGMP join message [20]. CGMP is a legacy multicast switching protocol and is not compatible with IGMPv3. Denial of Service (DoS) attacks are possible on CGMP-enabled switches; the latest switches support IGMP snooping instead of CGMP because of CGMP's lack of security, flood control, etc. [68]. CGMP follows the client/server paradigm, where the router is considered the CGMP server and the switch the client. CGMP packets contain information about join and leave messages, the MAC addresses of IGMP clients, and the multicast MAC address of the multicast group [1].


Figure 5 CGMP Operation with Routers and switches

Figure 5 [20 (module 7.2.7, Figure1)] illustrates the CGMP operation on routers and switches.

4.9 IP MULTICAST MODEL ARCHITECTURE & MULTICAST OPERATIONS


The IP multicast model suite contains the following process models, defined inside its layer technology. IP multicasting is implemented using the process models presented in Table 7 [49, 51].
Table 7 IP Multicast Model Architecture

Process Model    | Description
ip_igmp_host     | Implements IGMP in hosts
ip_igmp_rte_intf | Implements IGMP in routers
ip_igmp_rte_grp  | IGMP implementation in routers for every multicast group in a defined network
ip_pim_sm        | Implements PIM-SM in router nodes

4.9.1 JOINING A GROUP


Figure 6 IGMP Group joining Procedure

Figure 6 [51 (Figure 14.37)] illustrates the group joining procedure. An application joins a multicast group by sending a join request to the ip_igmp_host process, which then sends an IGMP membership report to the neighboring routers. The ip_pim_sm process sets up a distribution tree and starts sending packets to the designated group.

4.9.2 SENDING TRAFFIC TO A GROUP

Figure 7 Multicast traffic forwarding to groups


Figure 7 [51 (Figure 14-38)] describes multicast traffic forwarding using a broadcasting method between the sender and the broadcast interface. Applications start sending packets to a multicast group. The router forwards the multicast packets to the ip_pim_sm process, which creates and sends one copy of each multicast packet for every outgoing interface specified in the routing table.
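A toy version of the per-interface replication performed by the ip_pim_sm process, with an invented (*, G) outgoing-interface list:

    def forward_multicast(packet, group, mroute_table, send):
        """Send one copy of the packet on every outgoing interface listed
        for the group, duplicating only where the tree branches."""
        for ifc in mroute_table.get(group, []):
            send(ifc, packet)

    mroutes = {"232.32.32.32": ["eth1", "eth2"]}  # invented (*, G) entry
    forward_multicast(b"data", "232.32.32.32", mroutes,
                      lambda ifc, pkt: print("copy sent on", ifc))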

4.9.3 MULTICAST TUNNELING


Multicast tunneling: Tunneling is very useful when there is a lack of contiguous connectivity between multicast routers; a link dedicated to multicast traffic also provides security. Multicast traffic can be encapsulated using two techniques, DVMRP tunnels and GRE tunnels, and tunneling provides the added advantage of load splitting [64].

4.9.4 DVMRP TUNNELS


DVMRP tunnels cannot be used between two Cisco routers; they are possible between a Cisco router and another DVMRP router.

4.9.5 GRE TUNNEL


GRE tunnels can be used between two Cisco routers and resemble a point-to-point connection.

4.9.6 MULTICAST APPLICATIONS


The following multicast applications are simulated in this report: video conferencing, FTP, database, and voice.

4.9.7 PERFORMANCE METRICS IN MULTICASTING


The following performance metrics are evaluated in this thesis (see the sketch after this list):
Throughput: the average number of data bits received by multicast group members (per second).
Overhead (bits/sec): the average number of control bits generated (per second) by all nodes in a network.
Hop count: the total number of hops necessary to reach the multicast destination.
Packet delay (sec): the average time required to deliver data packets to the required destination [51].
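The first and last of these metrics can be computed directly from per-packet records. The sketch below does so for an invented list of (send time, receive time, bits) tuples; it is a simplified calculation, not the OPNET statistic implementation:

    def summarize(deliveries):
        """deliveries: list of (send_time, recv_time, bits) per delivered
        packet. Returns throughput (bits/sec) and average packet delay."""
        total_bits = sum(bits for _, _, bits in deliveries)
        duration = (max(recv for _, recv, _ in deliveries)
                    - min(sent for sent, _, _ in deliveries))
        throughput = total_bits / duration
        delay = sum(recv - sent for sent, recv, _ in deliveries) / len(deliveries)
        return throughput, delay

    # Two invented packet records: sent at 0.0 s and 1.0 s, 4000 bits each.
    print(summarize([(0.0, 0.8, 4000), (1.0, 1.9, 4000)]))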


CHAPTER V

5 MODEL DESIGN I


5.1 MODEL DESIGN EXPERIMENT 1

Figure 8 New model design experiment I

Figure 8 illustrates the topology used for experiment I, with 3 subnets located at different places: Ronneby, Karlskrona, and Karlshamn. The Ronneby subnet was set up as an ISP and was connected to the remaining subnets using EBGP and static routing.


Figure 9 Ronneby subnet

Figure 9 shows the ISP-connected interface for Ronneby, having the following configuration: eBGP runs between the ISP and the Ronneby.Karlskrona.junction router, and between the ISP and the Ronneby.Karlsham.junction router; iBGP runs between the junction routers, i.e., ISP-SANJOSE1 and ISP-SANJOSE2. Static routing was established between the other subnets and the junction routers.


Figure 10 Karlsham Subnet

Figure 10 illustrates the design in Karlshamn, with R9 as RP. The RP has a crucial role in finding the SPT.


Figure 11 Karlskrona subnet with R1 as RP

Figure 11 illustrates the Karlskrona subnet, with the end devices each connected to their respective switches. The subnets are interconnected through the Ronneby subnet. In the Karlskrona subnet we configured a video server. The LANs are interconnected through a switch, and the Karlsham subnet contains the heavy and low profile usage video servers.


CHAPTER VI

6 MODEL DESIGN II


6.1 DESIGN MODEL EXPERIMENT II

Figure 12 New Model Design Experiment II

Figure 12 illustrates the new model design for experiment II. The topology shows the type of routing used between the interconnected devices and the services involved, along with the group management concepts. There are a total of 9 Cisco routers, some of which are not multicast enabled, along with the end devices shown in the figure. Three OSPF areas are defined, one acting as the backbone area.


CHAPTER VII

7 SIMULATION RESULTS AND ANALYSIS


7.1 EXPERIMENT I

Figure 13 New design model topology I

7.1.1 INTRODUCTION TO NEW DESIGN AND SCENARIO EXPERIMENT I


PIM-SM is simulated in this network, which has 3 subnets located in different regions. As shown in Figure 13, the Ronneby subnet is connected to the ISP (loopback address), and EBGP is established between 2 more routers configured with static routes towards the remaining two subnets. The karlskrona.junction and ronneby.junction routers are interconnected with IBGP.


The whole network is designed using Cisco AS5100 series routers: 3 in Ronneby, 6 in Karlskrona, and 6 in Karlsham, along with 2 Cisco Catalyst 3524-PWR XL switches. A total of 5 RP routers are selected for the whole network. Routing between the different subnets is static. EIGRP is configured to run inside the Karlskrona and Karlsham subnets, and some of the servers are enabled with RIP according to the scenario requirements. The application configuration uses the default applications, which support nearly 16 applications, while the profile configuration is set up for database and FTP traffic only. FTP Heavy, FTP Low, Database (Heavy), and Database (Low) are simulated in the designed network, and the given interfaces are configured with PIM-SM. The applications' operation mode is set to Simultaneous, and the start time is set to constant(100). The simulation duration is set to End of Simulation. The application start time offset is set to uniform(0, 300), and the duration is set to End of Simulation. The end hosts are changed according to the scenario definition. Every simulation scenario is run in different configurations for clearer analysis. Four types of analysis are available for this design, as follows:
1. Discrete Event Simulation
2. Flow Analysis
3. Net Doctor Report
4. Survivability Analysis
The performance of the PIM-SM enabled network can be estimated after collecting different statistics. PIM-SM statistics such as control traffic sent and received and the network convergence duration are collected, along with EIGRP, BGP, and other performance statistics. The link utilization is varied from 0% to 30%, 40%, and 50% at different times. The main links between the routers are of DS0 type, and the end hosts have 10BaseT duplex links.


Ronneby subnet routing information in detail

Figure 14 Routing between devices in Ronneby

Figure 14 illustrates the routing information obtained, e.g., by using the show ip route command on different routers of the Ronneby subnet. The figure also describes the routing between these devices, including detailed interface information.

Figure 15 BGP Routing between devices in Ronneby


Figure 16 Karlskrona Junction routing information

The ISP router information is shown in Figure 15, which illustrates the neighbor relations and the loopback details of the interconnected interfaces, together with the BGP information. Figure 16 illustrates the routing information from the Karlskrona junction router, the multicast status, and the BGP neighbors.


Figure 17 Connectivity between devices from Karlskrona Junction

Figure 17 illustrates the routing information from Ronneby to all connected interfaces. All connected interfaces were checked with the ping command. The average round-trip time and the connectivity results obtained using ICMP echo messages are displayed in this figure.


The Karlsham junction configuration in detail

Figure 18 Karlsham junction routing information

The BGP AS information in Figure 18 was obtained with the show ip bgp summary command. The figure also illustrates the multicast routing status on the Karlshamn junction router.


The statistics were collected by varying the traffic on the links, and the variation of these parameters is shown in different graphs for clearer analysis. The DES (Discrete Event Simulation) study is organized so as to also collect flow analysis, Net Doctor, and survivability reports.

Discrete Event Simulation

Figure 19 Global IP statistics summary in Exp I

Figure 19 illustrates the average, maximum, and minimum IP statistics obtained in experiment I. The average IP background traffic delay is observed to be 0.9782 sec and the network convergence duration 2.8 sec, while the maximum number of hops varies from 1 to 32.


Figure 20 Global statistics PIM-SM summary

Figure 20 illustrates the PIM-SM control traffic sent and received (packets/sec). A small loss of control traffic, less than 1.5%, is observed. This loss causes the control exchange between the devices to fail and be resumed at the next attempt.

Figure 21 Global statistics summary BGP

Figures 21 and 22 illustrate BGP and EIGRP global statistics such as network convergence activity and traffic sent and received. According to these figures, the network convergence duration for BGP is shorter than that of EIGRP. A 1.1% loss of BGP traffic is observed, whereas no loss of EIGRP data is observed.


Figure 22 Global statistics EIGRP summary

Point-to-point queuing delay between the devices in the Karlsham subnet

Top Objects Report: Point-to-Point Queuing Delay (sec)

Rank | Object Name                       | Minimum   | Average | Maximum
1    | Karlskrona_LAN <-> SWITCH [0] --> | 0.0072349 | 0.78418 | 1.1555
2    | Karlskrona.R5 <-> R4 [0] -->      | 0.0025000 | 0.10804 | 1.1064
3    | Karlskrona.R1 <-> R2 [0] -->      | 0.0025000 | 0.08469 | 1.1053
4    | Karlsham.R9 <-> R10 [0] -->       | 0.0025000 | 0.08437 | 1.0421
5    | Karlskrona.R5 <-> R6 [0] -->      | 0.0025000 | 0.08245 | 0.9812
6    | Karlskrona.R5 <-> R4 [0] <--      | 0.0025000 | 0.08064 | 1.1020
7    | Karlsham.R10 <-> R11 [0] -->      | 0.0025000 | 0.07976 | 1.0924
8    | Karlsham.R11 <-> R13 [0] <--      | 0.0025000 | 0.07746 | 1.0201
9    | Karlsham.R10 <-> R11 [0] <--      | 0.0025000 | 0.07406 | 0.9875
10   | Karlskrona.R4 <-> R2 [0] <--      | 0.0025000 | 0.07147 | 1.0506

Figure 23 Point-to-point queuing delay between major devices

Figure 23 illustrates the point-to-point queuing delay in seconds. The maximum queuing delay, 1.1064 sec, was observed between Karlskrona.R5 <-> R4, while a minimum queuing delay of 0.0025000 sec was observed between several devices.


Average traffic sent and received in packets/sec in experiment I

Top Objects Report: Traffic Sent (packets/sec)

Rank | Object Name      | Min | Average | Maximum | Std Dev
1    | R10 --> R2       | 0   | 11.210  | 108.09  | 31.765
2    | R11 --> R5       | 0   | 11.057  | 105.67  | 31.494
3    | R1 --> Sanjose1  | 0   | 10.965  | 105.10  | 31.188
4    | R6 --> R13       | 0   | 10.956  | 104.44  | 31.131
5    | Sanjose2 --> R5  | 0   | 10.940  | 108.12  | 31.141
6    | Sanjose1 --> R1  | 0   | 10.930  | 105.99  | 31.123
7    | R12 --> R13      | 0   | 10.887  | 105.63  | 31.040
8    | R11 --> R6       | 0   | 10.866  | 104.21  | 30.934
9    | R1 --> R2        | 0   | 10.850  | 105.79  | 30.903
10   | R13 --> R4       | 0   | 10.804  | 105.01  | 30.844

Top Objects Report: Traffic Received (packets/sec)

Rank | Object Name  | Min | Average | Maximum | Std Dev
1    | R12 --> R13  | 0   | 5.9170  | 56.120  | 16.857
2    | R1 --> R2    | 0   | 5.9098  | 59.344  | 16.861
3    | R2 --> R6    | 0   | 5.7426  | 56.150  | 16.399
4    | R6 --> R5    | 0   | 5.6695  | 56.548  | 16.193
5    | R9 --> R10   | 0   | 5.4788  | 56.578  | 15.902
6    | R5 --> R2    | 0   | 5.4443  | 55.099  | 16.026
7    | R3 --> R2    | 0   | 5.0859  | 55.536  | 15.407
8    | R13 --> R10  | 0   | 5.0316  | 55.678  | 15.239
9    | R10 --> R11  | 0   | 4.8782  | 57.109  | 15.215
10   | R14 --> R13  | 0   | 4.7890  | 56.153  | 15.243

Figure 24 Traffic Sent/Received in packets/sec

Figure 24 shows the maximum traffic sent, 108.12 packets/sec, between the network objects Sanjose2 --> R5, and the maximum traffic received, 59.344 packets/sec, between R1 --> R2. The figure also shows the maximum standard deviations (Std Dev): 31.765 for the traffic sent and 16.857 for the traffic received.


I. Flow Analysis

Figure 25 Flow analysis Exp 1

Figure 25 illustrates the flow analysis executive summary. The number of successful demands and failed routes can be found with the help of the flow analysis. The figure also shows the maximum link utilization (67%) and the average utilization (22.8%). The node configuration shows the routing information configured on particular devices. The figure also shows that 47 interfaces are configured with the IPv4 addressing scheme, of which 34 interfaces are configured with EIGRP routing and 3 with BGP on the main links.

II. Net Doctor

The given topology was checked against the given rules and summaries. The rules listed below were checked, and success shows up as a green tick on the left side. Services not used in this work are shown with an N/A symbol (i.e., not available). For example, the rule duplicate address with a green tick shows that there were no IP address conflicts between devices. This practically means that each interface has a unique IP address assigned for identification, connectivity with other interfaces, and flow of data between the configured devices.


IP Addressing:
- Duplicate Address
- Invalid Interface IP Address
- Invalid Subnet Mask
- Overlapping Subnets
- Peers In Different Subnets

IP Multicast:
- Group List for PIM Candidate RP Configuration References Undefined ACL
- Group List for Static RP References Undefined ACL
- IGMP Access Group References Undefined ACL
- Invalid PIM Register Source
- Invalid Static RP Address
- PIM RP-Announce-Filter Configuration References Undefined ACL

IP Routing:
- Interface Not Advertised by Router
- Routing Protocols
- Verify Scheduler Allocate
- Verify Scheduler Interval
- Inconsistent Routing Protocols
- Mismatched Interface MTU
- Cisco Express Forwarding Not Enabled
- NetFlow Not Enabled
- SPD Disabled

Static Routing:
- Default Route Results in Routing Loop
- Invalid Administrative Distance
- Invalid Default Network Address


Figure 26 Net Doctor final report Exp I

Figure 26 illustrates the executive summary of the Net Doctor report. The score of 100% indicates that the rules mentioned above are followed according to the standards and that there are no configuration issues between the devices. For example, passing the rule invalid static RP address means that the routers have correct IP addressing with no address conflicts, and in particular that the RP definition and configuration follow the defined standards.


III. Survivability Analysis

Executive Summary Report 2


Network Survivability Score
Figure 27 illustrates the network survivability score, which provides an overall measure of the network's ability to recover from failures. It is computed as the percentage of investigated failure cases that did not result in critical violations of any of the performance metrics. A higher value indicates a more resilient network.

Figure 27 Network survivability score

Impact Summary
Figure 28 shows that out of the 44 failure cases analyzed, 19 cases (43% of the total) had a critical impact on the network, 0 cases (0%) had a moderate impact, and 25 cases (56%) had only a benign impact or no impact on the network performance. More information on the most severe failures can be found in the Worst Case Failures section.


Figure 28 Case violation history pie chart

Figure 28 gives a pie chart representation of the case violations: out of the 44 cases considered, 25 cases have no violations, while the remaining 19 cases are in a critical condition. This would practically require immediate attention from a designated network administrator.

Breakdown by Performance Metrics


2 out of 6 performance metrics were critically impacted in at least one failure case. The table below summarizes the impact of failure on each of the performance metrics:

Performance Metric                    | Worst Case Impact | % of Failure Cases
Affected Flows                        | Critical          | 43%
Average Link Utilization              | Benign            | N/A
Peak Link Utilization                 | Benign            | N/A
Overutilized Links                    | Benign            | N/A
Site-Pairs with Affected Connectivity | Critical          | 18%
Overutilized Interfaces               | Benign            | N/A

Figure 29 Failure impact summaries

Figure 29 illustrates the failure impact. From this figure we see that 2 out of the 6 performance metrics are seriously affected when some critical devices in the network go down.

Element Survivability
Figure 30 illustrates the element survivability, measured as the percentage of failure cases in which the element was successful. In the following survivability table, element types with poor average survivability numbers (below 95%) have been marked as failed:

Element Category | Average Survivability in Category | Worst Survivability in Category
Test Flows       | 93% of failure cases              | 81% of failure cases
Links            | 100% of failure cases             | 100% of failure cases
Site-Pairs       | 96% of failure cases              | 90% of failure cases
Interfaces       | 100% of failure cases             | 100% of failure cases


Figure 30 Element survivability report

Network Performance: Affected Flows


This graph classifies the analyzed failure cases according to the resulting number of affected test flows. In a highly survivable network, most of the cases would be in the green region of the chart. For example, the graph shows that there is 1 failure case in which the percentage of affected test flows is between 24.0% and 26.0%, placing the network in a critical state.

Figure 31 Network performance report

Figure 31 illustrates the number of affected flows versus the number of registered failure cases. Affected flows are divided into 3 categories: normal, moderate, and critical. For example, 23 failure cases affect at most 2% of the flows.


Network Performance: Average Link Utilization


This graph places the analyzed failure cases according to the resulting average link utilization. In a highly survivable network, most of the cases would be in the green region of the chart. For example, the graph shows that there are 4 failure cases where the average link utilization is between 70.0 % and 72.0 %, placing the network in a benign state.

Figure 32 Network performance (average link utilization)


Network Performance: Over-utilized Links


Figure 33 classifies the analyzed failure cases according to the resulting number of over-utilized links. In a highly survivable network, most of the cases would be in the green region of the chart. For example, the graph shows 44 failure cases where the number of over-utilized links is 0, placing the network in a benign state.

Figure 33 Net reports over-utilization of links


Network Performance: Over-utilized Interfaces


Figure 34 illustrates the analyzed failure cases according to the resulting number of over-utilized IP interfaces. In a highly survivable network, most of the cases would be in the green region of the chart. For example, the graph shows that there are 44 failure cases where the number of over-utilized IP interfaces is 0, placing the network in a benign state.

Figure 34 Over-utilization and link failures


Tabular format for link utilization, average affected flows, and total number of violations (baseline case: 0 critical violations)

Failed Object | Affected Flows | Average Link Utilization | Peak Link Utilization | Overutilized Links | Site-Pairs with Affected Connectivity | Overutilized Interfaces
None (baseline case)                                               | 0%  | 67.0% | 67.0% | 0   | 0%  | 0
Karlsham.R9                                                        | 24% | 67.0% | 67.0% | 0   | 16% | 0
Karlskrona.R1                                                      | 23% | 67.0% | 67.0% | 0   | 16% | 0
Karlskrona.R2                                                      | 20% | 67.0% | 67.0% | 0   | 16% | 0
Karlsham.R10                                                       | 20% | 67.0% | 67.0% | 0   | 16% | 0
Ronneby <-> Karlskrona                                             | 15% | 67.0% | 67.0% | 0   | 16% | 0
Ronneby <-> Karlsham                                               | 15% | 67.0% | 67.0% | 0   | 16% | 0
Karlskrona.R1 <-> R2                                               | 12% | 67.0% | 67.0% | 0   | 16% | 0
Karlsham.R9 <-> R10                                                | 12% | 67.0% | 67.0% | 0   | 16% | 0
Karlskrona.R5                                                      | 14% | 67.0% | 67.0% | 0   | 0%  | 0
Karlsham.R13                                                       | 13% | 67.0% | 67.0% | 0   | 0%  | 0
Karlsham.R11                                                       | 11% | 67.0% | 67.0% | 0   | 0%  | 0
Karlskrona.R3                                                      | 11% | 67.0% | 67.0% | 0   | 0%  | 0
Karlsham.R12                                                       | 10% | 67.0% | 67.0% | 0   | 0%  | 0
Karlskrona.R4                                                      | 10% | 67.0% | 67.0% | 0   | 0%  | 0
Karlskrona.R6                                                      | 10% | 67.0% | 67.0% | 0   | 0%  | 0
Karlsham.R14                                                       | 10% | 67.0% | 67.0% | 0   | 0%  | 0
Ronneby.Ronneby_karlsham_junction <-> Ronneby_Karlskrona_junction  | 10% | 44.7% | 44.7% | 0   | 0%  | 0
Ronneby.Ronneby_Karlskrona_junction                                | 10% | 44.7% | 44.7% | 0   | 0%  | 0
Ronneby.Ronneby_karlsham_junction                                  | 10% | 44.7% | 44.7% | n/a | 0%  | n/a


Network Health Summary: Breakdown by Performance Metrics


Each of the 44 failure cases was analyzed for its impact on the individual performance metrics. This chart shows the results of the analysis. The metrics most impacted by the failures are listed ahead of the others.


IV. Capacity Planning

Figure 35 Capacity planning flow analysis report

Figure 35 illustrates that 23 links in the network have between 20% and 40% bandwidth utilization.


Figure 36 End-to-end delay distribution capacity planning report

V. Results

Routed demands: The number of routable and non-routable demands can be viewed after running the flow analysis.
Link utilization: The current load in the network can be estimated from individual link statistics or from the most frequently recurring statistics.


Figure 37 IP background traffic

Figure 37 shows IP traffic information such as the background traffic delay, convergence activity, network convergence duration, and packet drop rate during the simulation.


Figure 38 PIM-SM control traffic sent/received

Figure 38 illustrates the PIM control traffic for the multicast group 232.32.32.32. The average network convergence duration varies from 2 to 7 sec during the simulation. A drop in PIM-SM control traffic is also observed.


Figure 39 BGP and EIGRP traffic Exp 1

Figure 39 shows the average drop of EIGRP vs. BGP traffic. The figure shows that the traffic rate for EIGRP is higher than for BGP.


Figure 40 UDP traffic Exp 1

Figure 40 illustrates the large drop in UDP traffic during the peak hours of the simulation.


Figure 41 Client FTP traffic received/sent

Figure 41 illustrates the FTP client traffic. Here, the received traffic is higher in most cases, meaning that the download rate at a client station is higher than the upload rate.


Figure 42 FTP traffic Exp 1

Figure 42 illustrates the FTP traffic information in detail. When heavy FTP traffic is used, a large drop is observed during all stages of the simulation. The client FTP upload response time is always higher than the download response time.


Figure 43 Multicast traffic, LAN inbound/outbound

Figure 43 illustrates the LAN and multicast traffic. A heavy drop in IP multicast traffic is observed, and the average LAN delay varies from 0 to 0.016 sec. The figure also shows the traffic flows of the LAN, i.e., the inflow, outflow, and average flow differences, indicated in separate graphs. It also shows that the average amount of information received was always higher than the amount of information sent outside the LAN. The server FTP load shows the number of requests the FTP server receives, together with the saturation point and the remaining active sessions.


7.2 EXPERIMENT II
7.2.1 INTRODUCTION TO NEW DESIGN AND EXPERIMENT II

Figure 44 New design scenario main topology Exp II

This topology makes use of 3 types of customized applications: video conferencing, database, and FTP traffic. The traffic defined in this topology was designed to carry different types of data traffic. Here, we do not provide any guarantee for the provided services, because the type of service is defined as best effort and UDP is used as the transport protocol; the provision of QoS does not come with any guarantee. In addition, in this experiment the sources send only to multicast addresses. The following multicast groups are defined: video conferencing, FTP, and database.

Scenario: The network is composed of 3 different OSPF areas. The sender is in OSPF area 0, and the receivers are in area 1 and area 2. The RP router is in the backbone area, and a static RP is configured on one of the routers in the backbone area. Out of the 9 routers in the network, one takes the role of RP; this is the central reference point in the given topology. Not all routers in the given topology are multicast enabled. The multicast traffic takes the shortest path through the PIM-SM RP router, and the RP router forwards all multicast-related join/leave/register services using the PIM protocol. The type of multicast tree built using PIM in this scenario is the shared tree. The PIM_RP can be a common reference point for the source hosts in the given topology.


The sender requests to register as a source with the RP router before starting to transmit multicast traffic. After the source registers with the RP, the destinations are able to receive their requested information through a shared tree.

Figure 45 Conferencing application routing with group address 224.0.6.1

Figure 45 illustrates the video conferencing routing from source to destination. The group 224.0.6.1 was used for video conferencing; no other groups are allowed to interact with this traffic unless they register with the RP to receive the video traffic. This is also the shortest path from the source to all receivers interested in receiving the video traffic. Any device can join or leave at will by informing the RP. Here, the RP takes care of the tree distribution from the source to all destinations in the network.


Figure 46 FTP application routing with group address 224.0.6.11

Figure 46 illustrates the shortest path taken by FTP traffic from sender to receiver. Routing goes through the PIM_RP router after SM is enabled in the given network.

Figure 47 Database application routing with group address 224.0.6.12


Figure 47 illustrates the database traffic flow from source to destination; the shortest path goes through the PIM_RP router. Globally scoped multicast addresses were defined for these groups; accordingly, the requests of a receiver are served by the source of the defined multicast group anywhere in the network.

Application Config: The video application is defined with a constant distribution with a mean outcome of 0.001 for both the incoming and outgoing streams, and the frame size for incoming and outgoing frames is constant (5000). The FTP application is defined with an exponential distribution for the time between file transfers and a constant file size (50000). The database application is defined with an exponential distribution with a mean outcome of 30 for database transactions; the transaction size is constant with a mean outcome of 1024.

Profile Config: User profiles are created in the Profile Config object from the applications defined in the Application Config object. In this design, 3 profiles were defined, for the video, FTP, and database traffic. In all user profiles the applications start one after the other, in a serial fashion. All application profile sessions start at times drawn from a uniform(100, 110) distribution, and only the video profile is set to repeat once at its start time.
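For illustration, the timing parameters above can be reproduced outside OPNET. The sketch below draws profile start times from uniform(100, 110) and inter-transfer gaps from an exponential distribution; the mean gap of 360 s is an invented value, since the exponential mean is not stated above:

    import random

    def profile_start_time():
        """Profile sessions start uniformly within [100, 110] seconds."""
        return random.uniform(100, 110)

    def ftp_gaps(n, mean_gap=360.0):
        """Exponentially distributed times between file transfers; the
        mean of 360 s is an invented value for illustration only."""
        return [random.expovariate(1.0 / mean_gap) for _ in range(n)]

    print(profile_start_time())
    print(ftp_gaps(3))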

I. Discrete Event Simulation


The DES contains descriptions and procedures for collecting individual and advanced statistics, along with the facility of Service Level Agreement (SLA) and threshold definitions for individual statistics collection. Simulation details such as events, configuration issues, and error messages are written to a log file at the end of each simulation.


Figure 48 Discrete event simulation Exp 2

Figure 48 shows the DES node details. Each node description and event number, together with its category and messages about configuration issues, is displayed for further action after the simulation.
Table 8 Simulation log description, Exp II

Simulation Log Field | Description
Label    | Log entry
Time     | Simulation time
Event    | Current simulation event number
Node     | Individual node event details
Category | Low-level simulation errors
Class    | Messages generated by the protocol
Message  | Possible errors and suggestions as a log message


Figure 49 Global statistics PIM-SM Exp 2

Figure 49 illustrates the global statistics summary of the PIM-SM control traffic, network convergence activity, and register messages. Statistics such as the average, maximum, and minimum are collected for further statistical analysis.

II. Flow Analysis

Flow Analysis analyzes IP, ATM, Frame Relay, and circuit-switched networks by considering the traffic flows in the network as well as detailed models of network addressing and routing protocol implementations. Network planners, traffic engineers, and network operations staff can use Flow Analysis to help diagnose current network problems and to predict future network performance. The Flow Analysis functionality can further be used to conduct routing analysis, survivability analysis, demand performance analysis, link performance analysis, capacity planning, and VoIP readiness assessment. Flow analysis can also assist in viewing traffic volumes, traffic types, equipment failures, or device configurations. It also provides information about utilization and performance statistics for each network object, end-to-end routing for each flow, steady-state delay estimates, routing table information, IP forwarding tables, detailed protocol configurations, and network inventory reports.


Figure 50 Flow analysis Exp 2

Figure 50 illustrates the executive summary of the flow analysis. It shows the number of links between the different devices, their IPv4 addressing scheme, and the routing protocols used, as well as detailed node and interface configuration information.


Figure 51 Executive summary IP multicast group

Figure 51 illustrates the IP multicast group addressing. Different traffic uses different group addresses. The figure also shows the source address, group address, size, mode of operation, and interface details.

III. Net Doctor (Exp 2)

Net Doctor is a rule-based engine that identifies incorrect device configurations, policy violations, and potential problems related to availability, performance, and security in a network. Net Doctor is also used to set up periodic network audit reports and to identify errors in a network. Rules can be applied against configuration data from a production network, and inconsistencies in a network are easily identified using Net Doctor. In this experiment, the connectivity between the different OSPF area border routers and the backbone zone was evaluated using the Net Doctor rules. The issues related to duplicate IP addressing, invalid interface IP addresses, invalid subnet masks, overlapping subnets, IP multicast RP configuration, IGMP, and IP routing were checked with the Net Doctor flow analysis.


Figure 52 Net Doctor report executive summary Exp 2

Figure 52 shows the 16 tested devices checked against 107 rules, with a score of 100, which means the design passes these rules and no misconfiguration issues were found.

Figure 53 OSPF areas topology II pie chart Exp 2

Figure 53 gives a pie chart representation, for experiment II, of the OSPF areas and the number of interfaces configured in areas 0, 1, and 2. There were 8 interfaces configured in area 2, 11 interfaces in area 1, and 9 interfaces in area 0. The RP router was located in the backbone area.


Figure 54 OSPF areas bar chart Exp 2

Figure 54 illustrates the OSPF area summary as a bar chart. The figure shows the number of active OSPF areas during the snapshot period, and the total number of routers is clearly observable: area 0 had 4 routers, area 2 had 4 routers, and area 1 had 5 routers.
Table 9 OSPF areas in tabular format, Exp II

OSPF Area Summary (3 areas, 9 devices, 28 interfaces)

Area    | Devices | Interfaces
0.0.0.0 | 4       | 9
0.0.0.1 | 5       | 11
0.0.0.2 | 4       | 8

IV. Survivability Analysis

Survivability analysis is useful for identifying communication issues between the devices and for performance evaluation, by defining IP test flows. Test flows can be defined between all IP-capable devices for later survivability analysis and network performance studies. Failure analysis is used to identify individual or simultaneous failures of specified network devices. Figure 55 illustrates the survivability analysis when 44 failure cases are simulated simultaneously, out of which 35 cases are close to poor performance. The threshold limits can be defined based on different criteria. The survival statistics of individual devices and links give a good estimation for future failure cases.


Figure 55 Survivability analysis executive report Exp 2

Figure 55 illustrates the network health summary and the worst case failures. For example, MC2 has 1 critical violation, with 55% of the flows affected. The figure also indicates that every flow has at least one critical violation among its affected flows.

Top Objects Report: Point-to-Point Queuing Delay (sec)

Rank | Object Name              | Minimum  | Average  | Maximum | Std Dev
1    | R3 <-> Dest1 [0] <--     | 0.016727 | 0.041375 | 0.25673 | 0.045055
2    | MC3 <-> MC2 [0] <--      | 0.016242 | 0.034264 | 0.09964 | 0.024999
3    | MC3 <-> MC2 [0] -->      | 0.014303 | 0.032260 | 0.12904 | 0.027064
4    | R3 <-> Dest1 [0] -->     | 0.017212 | 0.030176 | 0.09479 | 0.021094
5    | R3 <-> Source1 [0] -->   | 0.008875 | 0.020015 | 0.13237 | 0.022856
6    | R3 <-> Source1 [0] <--   | 0.008875 | 0.016692 | 0.05138 | 0.011907
7    | R4 <-> R3 [0] -->        | 0.008375 | 0.016332 | 0.07888 | 0.017579
8    | MC3 <-> Source1 [0] <--  | 0.008875 | 0.014827 | 0.04888 | 0.009622
9    | R4 <-> R3 [0] <--        | 0.008875 | 0.014802 | 0.06287 | 0.011147
10   | MC3 <-> Source1 [0] -->  | 0.008875 | 0.014674 | 0.04888 | 0.009786

Figure 56 Point-to-point queuing delay (sec)

Figure 56 illustrates the point-to-point queuing delay between different network objects. For example, a maximum queuing delay of 0.13237 sec is observed between R3 <-> Source1 [0], whereas a minimum queuing delay of 0.008875 sec is observed between many network objects.

V. Capacity Planning


The Capacity Planning module of Flow Analysis estimates future network scalability issues over a period of time. It lets you extend your analysis to cover future time periods so that you can see how the network will perform over time. The capacity planning feature includes several reports that summarize the performance of the network.


Results, Exp II

Figure 57 PIM-SM details Exp 2

In the given topology, the network convergence activity varies slightly depending on the type of traffic. In this scenario only one RP was defined, so the final convergence time varies from 0 to 4 sec. Load balancing is provided by the PIM-RP router, with a minute drop of control traffic after the traffic increases. The network convergence time increases slightly with an increase in multicast traffic.


Figure 58 Video conferencing and OSPF network convergence

A huge drop in packets is observed for video conferencing in the designed network. A sudden increase in packet end-to-end delay is observed after a certain simulation time, due to the heavy usage of multicast traffic. The health of a network depends on the routing protocol convergence time; here the network uses OSPF, RIP, and PIM as routing protocols.


Figure 59 Average IP traffic, VLAN, management traffic, FTP traffic

Figure 59 illustrates the average IP traffic, FTP traffic, VLAN traffic, and network convergence activity. The average traffic received is lower than the traffic sent, which means that uploading was higher than downloading. The VLAN traffic drop shows the packet loss in the switches for all types of management traffic. The average network convergence duration varies from 0 to 20 sec. The maximum number of hops is 30 on average, and the IP traffic drop rate is at most 1500 packets during peak simulation periods.


Figure 60 FTP Download/Upload response times, video conferencing traffic

The FTP upload response time is higher than the download response time. There is a large drop in video traffic after a certain point. The average packet end-to-end delay varies from 0 to 28 sec.


Figure 61 IP End-to-end delay traffic, IP multicast traffic Sent/Received/Dropped

Figure 61 illustrates the average IP end-to-end delay, the IP traffic sent, dropped, and received, and the average IP multicast traffic sent and received.
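The end-to-end delay collected here can be understood, per hop, as the sum of transmission, propagation, queuing, and processing delays. The Python sketch below sums these components over an assumed three-hop path; the link rates and per-hop delays are illustrative assumptions, not simulation output.

# Minimal sketch: end-to-end delay as the per-hop sum of transmission,
# propagation, queuing, and processing delay. All values are assumed.

PACKET_BITS = 1500 * 8  # assumed 1500-byte packet

# (link rate bps, propagation s, queuing s, processing s) per hop -- assumed
hops = [
    (10_000_000,  0.002, 0.015, 0.001),
    (10_000_000,  0.005, 0.030, 0.001),
    (100_000_000, 0.001, 0.008, 0.001),
]

total = 0.0
for rate, prop, queue, proc in hops:
    total += PACKET_BITS / rate + prop + queue + proc

print(f"end-to-end delay over {len(hops)} hops: {total * 1000:.2f} ms")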


Figure 62 Database traffic

Figure 62 illustrates the database entry and query response times, together with the average client DB query and response times. A heavy drop in client database traffic is observed. The average client database entry response time was observed to be 11 sec, and the query response time 20 sec. A small drop in client database entry traffic was also observed.


7.3 INTERNET STANDARDS USED IN THIS REPORT


The Internet Engineering Task Force (IETF) deals with Internet standardization through a series of official publications called Requests for Comments (RFCs). The IETF is an open community. RFCs can be divided into standards, historical, informational, and experimental documents. A standards-track document can be a proposed standard, a draft standard, or an Internet standard. Standards do not last forever, but keep changing with new versions according to development requirements. The World Wide Web Consortium (W3C) is another standards body, responsible for World Wide Web standards.
Table 10 Internet standards

Standard Type      Definition
1. IETF            Open community contributing to Internet development
2. Working Group   The IETF division dealing with a particular standard type
3. Internet Draft  Standard document before approval
4. RFC             An official document related to Internet standards


CHAPTER VIII

8 CONCLUSIONS AND FUTURE WORK


The purpose of this thesis was to design and implement a new design model for IP multicast communication across LAN/WAN, using UDP as the transport protocol, PIM-SM as the multicast routing protocol, and IGMP as the group management protocol. The research and implementation of the new design models led to the conclusion that there are five issues that need to be addressed in order to implement multicast routing in any new design: the routing protocols at hand, the traffic type used, the network environment (LAN or WAN), the multicast protocols in use, and the multicast group management protocols in use. The approach of this thesis may be improved in the future by enhancing it with IPv6 addressing and IP tunneling, and by using the latest available multimedia applications for forthcoming new design models in OPNET. QoS problems related to multicasting, traffic engineering, and efficient delivery of data from a source to the multicast groups may be improved by using the Multi-Protocol Label Switching (MPLS) technology. We believe that the above-mentioned issues will be addressed by the latest trends in multicast delivery for real-time applications.


BIBLIOGRAPHY
[1] Rick Graziani and Allan Jonson, Routing Protocols and Concepts: CCNA Exploration Companion Guide, Pearson Education, London, 2008.
[2] Catherine Boutremans, Gianluca Iannaccone, and Christophe Diot, Impact of Link Failures on VoIP Performance, in Proceedings of the NOSSDAV Workshop, ACM Press, pages 63-71, Florida, USA, May 2002.
[3] Renata Teixeira and Jennifer Rexford, Managing Routing Disruptions in Internet Service Provider Networks, IEEE Communications Magazine, March 2006.
[4] Douglas E. Comer, Internetworking with TCP/IP: Principles, Protocols and Architecture, 5th ed., Vol. 1, Pearson Prentice Hall, 2006.
[5] Rita Puzmanova, Routing and Switching: Time of Convergence?, Pearson Education Limited, 2002.
[6] Wang Jilu, Yu Zhenwei, and Zhang Yi, Research on Simulation of Multicast Protocol, International Conference on Computer Science and Software Engineering, Volume 4, 2008.
[7] Jeff Doyle, Dynamic Routing Protocols, 2001, http://www.informit.com/articles/
[8] Online source (2004, Aug 27), Advanced IP Addressing Management, Cisco Systems, http://www.informit.com/articles/
[9] Radia Perlman, A Comparison Between Two Routing Protocols: OSPF and IS-IS, IEEE Network Magazine, September 1991.
[10] Cisco Internet Technology Handbook, http://www.cisco.com/en/US/docs/internetworking/technology/handbook/Enhanced_IGRP.html
[11] Brian Adams, Ed Cheng, Tina Fox, Andy Kessler, Mark Manzanares, Bryan McLaughlin, Jim Rushton, Beverly Tai, and Kevin Tran, Inter-domain Multicast Solutions Guide, Cisco Press, July 08, 2002.
[12] TCP/IP suite, http://www.protocols.com/pbook/tcpip3.htm
[13] Ravi Malhotra, IP Routing, ISBN 0-596-00275-0, January 2002, http://oreilly.com/catalog/iprouting/chapter/ch04.html#45434
[14] Xin Wang, C. Yu, H. Schulzrinne, P. Stirpe, and Wei Wu, IP Multicast Fault Recovery in PIM over OSPF, Network Protocols, 2000, pages 116-125.
[15] S. Uemo, T. Kato, and K. Suzuki, Analysis of Internet Multicast Traffic Performance Considering Multicast Routing Protocol, in Proceedings of the 2000 International Conference on Network Protocols, 2000, pages 95-104.


[16] EIGRP, Javvin network management and security, http://www.javvin.com/protocolEIGRP.html
[17] IS-IS: Intermediate System to Intermediate System Routing Protocol, Javvin network management and security, http://www.javvin.com/protocolOSPF.html
[18] Todd Lammle, Cisco Certified Network Associate, 5th edition, 2005.
[19] Microsoft TechNet, OSPF Operation, http://technet.microsoft.com/en-us/library/cc940481.aspx
[20] CCNP Building Scalable Cisco Internetworks v5.0, Cisco Network Academy Program, Modules 1-7, Cisco Systems.
[21] Christian Huitema, Routing in the Internet, 2nd ed., Prentice Hall, 2000.
[22] Faraz Shamim, Zaheer Aziz, Johnson Liu, and Abe Martey, Troubleshooting IP Routing Protocols, Cisco Press, 912 pages, May 07, 2002.
[23] OSPF, http://www.rhyshaden.com/ospf.htm
[24] Doru Constantinescu, Overlay Multicast Networks: Elements, Architectures and Performance, Blekinge Institute of Technology Doctoral Dissertation Series No. 2007:17.
[25] W. Stallings, High-Speed Networks and Internets: Performance and Quality of Service, Prentice Hall, 2002.
[26] R. Wittmann and M. Zitterbart, Multicast Communication, Morgan Kaufmann, 2001.
[27] Samuel Strandström Burlin, Security and Group Management in IP Multicast, Master's Thesis, Luleå Tekniska Universitet, August 2, 2001.
[28] S. Wu and S. Banerjee, Improving the Performance of Overlay Multicast with Dynamic Adaptation, in Proceedings of the IEEE Consumer Communications and Networking Conference (CCNC 2004), pp. 152-157, Las Vegas, USA, 2004.
[29] B. Zhang, S. Jamin, and L. Zhang, Host Multicast: A Framework for Delivering Multicast to End Users, in Proceedings of IEEE INFOCOM 2002, Vol. 3, pp. 1366-1375, New York, USA, 2002.
[30] O. Østerbø, Models for Calculating End-to-End Delay in Packet Networks, in Proceedings of the 18th International Teletraffic Congress (ITC-18), pp. 1231-1240, Berlin, Germany, 2003.
[31] L. Lao, J.-H. Cui, M. Gerla, and D. Maggiorini, A Comparative Study of Multicast Protocols: Top, Bottom, or In the Middle?, in Proceedings of IEEE INFOCOM 2005, Vol. 4, pp. 2809-2814, Miami, USA, 2005.
[32] W. Stallings, High-Speed Networks and Internets: Performance and Quality of Service, Prentice Hall, 2002.


[33] A. M. Law and W. D. Kelton, Simulation Modeling & Analysis, Third Edition, McGraw-Hill, 2000.
[34] Diane Teare and Catherine Paquet, Authorized Self-Study Guide: Building Scalable Cisco Internetworks (BSCI), Third Edition, Cisco Press, USA, http://www.ciscopress.com/safarieenabled
[35] T. Hardjono, Router-Assistance for Receiver Access Control in PIM-SM, Computers and Communications, Fifth IEEE Symposium, 2000, pages 687-692.
[36] Debora Estrin, Mark Handley, Ahmed Helmy, Poly Huang, and David Thaler, High-Speed Networks and Internets: Performance and Quality of Service, Prentice Hall, 2002.
[37] Tim Szigeti and Christina Hattingh, End-to-End QoS Network Design, Cisco Press, November 09, 2004.
[38] Diane Teare and Catherine Paquet, Building Scalable Cisco Internetworks (BSCI) (Authorized Self-Study Guide), Cisco Press, 3rd Edition.
[39] Wade Edwards et al., CCNP Complete Study Guide, SYBEX Inc., 2004.
[40] CCNP: Cisco Internetwork Troubleshooting Study Guide, SYBEX Inc., 2004.
[41] W. Fenner, Internet Group Management Protocol, Version 2, RFC 2236, November 1997.
[42] Cisco Systems, Inc., Configuring Basic IP Multicast, http://www.cisco.com/en/US/docs/ios/ipmulti/configuration/guide/imc_basic_cfg_ps6350_TSD_Products_Configuration_Guide_Chapter.html
[43] Protocol Independent Sparse Mode, http://www.javin.com/protocolPIMSM.html
[44] OPNET Modeler Discrete Event Simulator, OPNET Technologies Inc., http://www.opnet.com/support/des_model_library/
[45] OPNET Modeler Simulator, Version 14.5, OPNET Technologies, 1986-2008.
[46] Deploying IP Multicast, Session RST-2701, Cisco Systems, 1999-2004.
[47] Introduction to IP Multicast, Session RST-2214, Cisco Systems, 2000.
[48] PIM Multicast Routing, Session RST-2215, Cisco Systems, 2000.
[49] Advanced IP Multicast Routing, Session RST-2217, Cisco Systems, 2000.
[50] Deploying BGP, Session RST-2209, Cisco Systems, 2000.
[51] OPNET Modeler Documentation Set, Version 14.5, OPNET Technologies, 1986-2008.


[52] Brent Stewart and Denise Donohue, CCNP BSCI Quick Reference 642-901, Cisco Press, 2007.
[53] IP Multicast, Courtesy of Cisco Enterprise Marketing, Cisco Systems.
[54] Jeremy Stretch, IPv4 Multicast, http://packetlife.net/
[52] Brian Hill, CISCO: The Complete Reference, Osborne, Cisco Press, 2002.
[53] Yes Home Page Multicasting, http://www.hep.ucl.ac.uk/~ytl/multi-cast/pim-dm_01.html
[54] Jeff Doyle and Jennifer Dehaven Carrol, Routing TCP/IP, Volume II (CCIE Professional Development), Cisco Press, April 11, 2001.
[55] Jeff Doyle and Jennifer Dehaven Carrol, Routing TCP/IP, Volume I (CCIE Professional Development), Second Edition, Cisco Press, April 11, 2001.
[56] D. Constantinescu and A. Popescu, Modeling of One-Way Transit Time in IP Routers, in Proceedings of the IEEE Advanced International Conference on Telecommunications (AICT 2006), pp. 16-26, Guadeloupe, French Caribbean, 2006.
[57] B. Quinn and K. C. Almeroth, IP Multicast Applications: Challenges and Solutions, IETF RFC 3170, 2001.
[58] M. Handley, S. Floyd, B. Whetten, R. Kermode, L. Vicisano, and M. Luby, The Reliable Multicast Design Space for Bulk Data Transfer, RFC 2887, August 2000.
[59] A. El-Sayed, Application Level Multicast Transmission Techniques Over the Internet, PhD dissertation, Institut National Polytechnique de Grenoble, Grenoble, France, 2004.
[60] M. Castro, P. Druschel, A.-M. Kermarrec, and A. I. T. Rowstron, Scribe: A Large-Scale and Decentralized Application-Level Multicast Infrastructure, IEEE Journal on Selected Areas in Communications, Vol. 20, No. 8, pp. 1489-1499, 2002.
[61] Protocol Independent Multicast Dense Mode (PIM-DM), http://www.javvin.com/protocolPIMDM.html
[62] PIM-DM, http://www.javvin.com/protocolPIMDM.html
[63] S. Deering et al., Protocol Independent Multicast-Sparse Mode (PIM-SM), Version 2, RFC 2362, June 1998.
[64] Lydia Parziale, David T. Britt, Chuck Davis, Wie Liu, and Nicolas Rosselot, TCP/IP Tutorial and Technical Overview, IBM Redbooks, ibm.com/redbooks.


[65] Distance Vector Routing Algorithm, http://en.wikipedia.org/wiki/Distance-vector_routing_protocol
[66] ICMP Message Types, http://www.networksorcery.com/enp/protocol/icmp.htm
[67] M. Ramalho, Intra- and Inter-Domain Multicast Routing Protocols: A Survey and Taxonomy, IEEE Communications Surveys and Tutorials, First Quarter 2000, Vol. 3, No. 1, pp. 2-25, 2000.
[68] Eric Vyncke and Christopher Paggen, LAN Switch Security: What Hackers Know About Your Switches, Cisco Press.
[69] Deepankar Medhi and Karthick Ramasamy, Network Routing: Algorithms, Protocols, Architectures, The Morgan Kaufmann Series in Networking.

