This thesis is presented as part of the Degree of Master of Science in Electrical Engineering with Emphasis in Telecommunication
Blekinge Institute of Technology
June 2010

Master of Science Program in Electrical Engineering
School of Computing
Blekinge Institute of Technology
Advisor: Dr. Doru Constantinescu
Examiner: Dr. Doru Constantinescu
MEE10:38
Abstract
This thesis presents different Internet Protocol (IP) multicasting design models together with their implementation in OPNET Modeler 14.5. The models presented in this report are designed to handle various types of advanced multimedia applications and services under the multicast communication paradigm. The architectural models presented in this thesis are independent of both the routing protocols used and the underlying networking environment. Several existing challenges related to high bandwidth requirements, low network delay, and jitter are discussed together with these design models and are simulated using the OPNET Modeler. The emerging demands for new multicast-capable applications, for both real-time and non-real-time traffic, are considered together in this report in order to better evaluate multicast traffic requirements. Efficient routing methods, group management policies, and shortest path algorithms are also discussed in the proposed models. The pros and cons of IP multicasting are described using the Protocol Independent Multicast Sparse Mode (PIM-SM) method, in light of the proposed design models. Several results related to link utilization, end-to-end delay, packet loss, PIM control traffic, Internet Group Management Protocol (IGMP) traffic, and network convergence are obtained using the statistical methods and modules already available in OPNET Modeler. Our major contribution in this thesis relates to components associated with network convergence, routing delay, and processing delay; these components are illustrated using various OPNET metrics. Moreover, several issues related to protocol scalability, supportability, performance optimization, and efficiency are presented using OPNET's built-in reports, i.e., the executive flow and survivability analysis reports.
Goals
The main goal of this thesis is to investigate the performance of different types of multicast traffic using OPNET Modeler 14.5. Protocol overhead in steady state is observed using Discrete Event Simulations (DES) [44, 51]. The performance, as well as future capacity planning, of the multicast-enabled networks is analyzed using OPNET's built-in features. Various application statistics, such as delay, delay variation, application type, and traffic sent and received, were collected for further analysis in this report.
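The delay and delay-variation statistics referred to above can be illustrated with a small sketch that summarizes (send time, receive time) samples. The smoothing estimator below mirrors the interarrival-jitter formula of RFC 3550; the function names and sample values are our own illustrations, not OPNET's.

```python
def mean_delay(samples):
    """Average one-way delay, in seconds, over (send_time, recv_time) pairs."""
    delays = [recv - sent for sent, recv in samples]
    return sum(delays) / len(delays)

def interarrival_jitter(samples):
    """Smoothed delay variation, following the RFC 3550 estimator
    J = J + (|D| - J) / 16, where D is the difference between the
    transit times of consecutive packets."""
    jitter = 0.0
    prev_transit = None
    for sent, recv in samples:
        transit = recv - sent
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0
        prev_transit = transit
    return jitter
```

For a flow whose transit time is constant, the jitter estimate stays at zero; any variation in transit time pushes it up by one sixteenth of the new difference, so it reacts smoothly rather than spiking on a single late packet.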
Table of Contents
ABSTRACT
GOALS
LIST OF FIGURES
LIST OF TABLES
ACRONYMS
CHAPTER I
1 INTRODUCTION
1.1 INTRODUCTION
1.2 PROBLEM DESCRIPTION
1.3 INTENDED AUDIENCE
1.4 SCOPE
1.5 MOTIVATION
1.6 LIMITATIONS
1.7 STRUCTURE OF THE REPORT
1.8 MULTICAST HISTORY
CHAPTER II
2 BACKGROUND & RELATED WORK
2.1 BACKGROUND
2.2 RELATED WORK
2.3 COMPARISON TO THIS WORK
CHAPTER III
3.1 INTERIOR GATEWAY ROUTING PROTOCOLS
3.1.1 DISTANCE VECTOR ROUTING
3.1.2 LINK STATE ROUTING
3.1.3 HYBRID ROUTING
3.2 EXTERIOR GATEWAY ROUTING PROTOCOLS
3.3 CLASSLESS ROUTING
3.3.1 OPEN SHORTEST PATH FIRST
3.3.2 BORDER GATEWAY PROTOCOL
3.4 CLASSFUL ROUTING
3.5 STATIC AND DYNAMIC ROUTING
3.5.1 STATIC ROUTING
3.5.2 DYNAMIC ROUTING
3.6 ROUTING ALGORITHMS
3.6.1 GLOBAL ROUTING ALGORITHM
3.7 HIERARCHICAL ROUTING
3.8 IP COMPONENTS
3.9 ERROR IDENTIFICATION
3.9.1 ICMP MESSAGE TYPES
CHAPTER IV
4 MULTICAST ADDRESSING
4.1 MULTICAST ADDRESS ALLOCATION
4.1.1 DYNAMIC ALLOCATION
4.1.2 STATIC ALLOCATION
4.2 MULTICAST ADDRESS MAPPING
4.3 MULTICAST ROUTING PROTOCOLS
4.3.1 MULTICAST ROUTING PROTOCOLS
4.3.2 PROTOCOL INDEPENDENT MULTICAST VERSIONS
4.3.3 PROTOCOL INDEPENDENT MULTICAST
4.3.4 MSDP
4.3.5 DENSE MODE PROTOCOLS
4.3.6 DISTANCE VECTOR MULTICAST ROUTING PROTOCOL
4.3.7 SPARSE MODE PROTOCOLS
4.3.8 PIM SPARSE-DENSE MODE
4.4 ROUTING MULTICAST TRAFFIC
4.4.1 TRANSPORT PROTOCOLS AND MULTICASTING
4.5 MULTICAST FORWARDING
4.6 MULTICAST TRAFFIC ENGINEERING
4.6.1 RP POSITION
4.6.2 MULTICAST ROUTING CONFIGURATION
4.6.3 DEPENDENCY ON UNICAST
4.7 IGMP
4.7.1 IGMP v1
4.7.2 IGMP v2
4.7.3 IGMP v3
4.7.4 IGMP MESSAGES
4.7.5 IGMP PROTOCOL OPERATION
4.8 CGMP
4.9 IP MULTICAST MODEL ARCHITECTURE & MULTICAST OPERATIONS
4.9.1 JOINING A GROUP
4.9.2 SENDING TRAFFIC TO A GROUP
4.9.3 MULTICAST TUNNELING
4.9.4 DVMRP TUNNELS
4.9.5 GRE TUNNEL
4.9.6 MULTICAST APPLICATIONS
4.9.7 PERFORMANCE METRICS IN MULTICASTING
CHAPTER V
5 MODEL DESIGN I
5.1 MODEL DESIGN EXPERIMENT 1
CHAPTER VI
6 MODEL DESIGN II
6.1 DESIGN MODEL EXPERIMENT II
CHAPTER VII
7 SIMULATION RESULTS AND ANALYSIS
7.1 EXPERIMENT I
7.1.1 INTRODUCTION TO NEW DESIGN AND SCENARIO EXPERIMENT I
7.2 EXPERIMENT II
7.2.1 INTRODUCTION TO NEW DESIGN AND EXPERIMENT II
7.3 INTERNET STANDARDS USED IN THIS REPORT
CHAPTER VIII
LIST OF FIGURES
FIGURE 1 MULTICAST DEPLOYMENT FROM 1992 TO 2006
FIGURE 2 ROUTING PROTOCOLS
FIGURE 3 CLASS D RANGE IP MULTICAST
FIGURE 4 IEEE 802.3 MAC ADDRESS STRUCTURE
FIGURE 5 PIM SPARSE-DENSE MODE
FIGURE 6 CGMP OPERATION WITH ROUTERS AND SWITCHES
FIGURE 7 IGMP GROUP JOINING PROCEDURE
FIGURE 8 MULTICAST TRAFFIC FORWARDING TO GROUPS
FIGURE 9 NEW MODEL DESIGN EXPERIMENT I
FIGURE 10 RONNEBY SUBNET
FIGURE 11 KARLSHAM SUBNET
FIGURE 12 KARLSKRONA SUBNET WITH R1 AS RP
FIGURE 13 NEW MODEL DESIGN EXPERIMENT II
FIGURE 15 ROUTING BETWEEN DEVICES IN RONNEBY
FIGURE 16 BGP ROUTING BETWEEN DEVICES IN RONNEBY
FIGURE 17 KARLSKRONA JUNCTION ROUTING INFORMATION
FIGURE 18 CONNECTIVITY BETWEEN DEVICES FROM KARLSKRONA JUNCTION
FIGURE 19 KARLSHAM JUNCTION ROUTING INFORMATION
FIGURE 20 GLOBAL IP STATISTICS SUMMARY IN EXP I
FIGURE 21 GLOBAL STATISTICS PIM-SM SUMMARY
FIGURE 22 GLOBAL STATISTICS SUMMARY BGP
FIGURE 23 GLOBAL STATISTICS EIGRP SUMMARY
FIGURE 24 POINT-TO-POINT QUEUING DELAY BETWEEN MAJOR DEVICES
FIGURE 25 TRAFFIC SENT/RECEIVED IN PACKETS/SEC
FIGURE 26 FLOW ANALYSIS EXP 1
FIGURE 27 NET DOCTOR FINAL REPORT EXP I
FIGURE 28 NET DOCTOR SURVIVABILITY SCORE
FIGURE 29 CASE VIOLATION HISTORY PIE CHART
FIGURE 30 FAILURE IMPACT SUMMARIES
FIGURE 31 ELEMENT SURVIVABILITY REPORT
FIGURE 32 NETWORK PERFORMANCE REPORT
FIGURE 33 NETWORK PERFORMANCE (AVERAGE LINK UTILIZATION)
FIGURE 34 NET REPORTS OVER-UTILIZATION OF LINKS
FIGURE 35 OVER-UTILIZATION AND LINK FAILURES
FIGURE 36 CAPACITY PLANNING FLOW ANALYSIS REPORT
FIGURE 37 END-TO-END DELAY DISTRIBUTION CAPACITY PLANNING REPORT
FIGURE 38 IP BACKGROUND TRAFFIC
FIGURE 39 PIM-SM CONTROL TRAFFIC SENT/RECEIVED
FIGURE 40 BGP AND EIGRP TRAFFIC EXP 1
FIGURE 41 UDP TRAFFIC EXP 1
FIGURE 42 CLIENT FTP TRAFFIC RECEIVED/SENT
FIGURE 43 FTP TRAFFIC EXP 1
FIGURE 44 MULTICAST TRAFFIC, LAN INBOUND/OUTBOUND
FIGURE 45 NEW DESIGN SCENARIO MAIN TOPOLOGY EXP II
FIGURE 46 CONFERENCING APPLICATION ROUTING WITH GROUP ADDRESS 224.0.6.1
FIGURE 47 FTP APPLICATION ROUTING WITH GROUP ADDRESS 224.0.6.11
FIGURE 48 DATABASE APPLICATION ROUTING WITH GROUP ADDRESS 224.0.6.12
FIGURE 49 DISCRETE EVENT SIMULATION EXP 2
FIGURE 50 GLOBAL STATISTICS PIM-SM EXP 2
FIGURE 51 FLOW ANALYSIS EXP 2
FIGURE 52 EXECUTIVE SUMMARY IP MULTICAST GROUP
FIGURE 53 NET DOCTOR REPORT EXECUTIVE SUMMARY EXP 2
FIGURE 54 OSPF AREAS TOPOLOGY II PIE CHART EXP 2
FIGURE 55 OSPF AREAS BAR CHART EXP 2
FIGURE 56 SURVIVABILITY ANALYSIS EXECUTIVE REPORT EXP 2
FIGURE 57 POINT-TO-POINT QUEUING DELAY (SEC)
FIGURE 58 PIM-SM DETAILS EXP 2
FIGURE 59 VIDEO CONFERENCING AND OSPF NETWORK CONVERGENCE
FIGURE 60 AVERAGE IP TRAFFIC, VLAN, MANAGEMENT TRAFFIC, FTP TRAFFIC
FIGURE 61 FTP DOWNLOAD/UPLOAD RESPONSE TIMES, VIDEO CONFERENCING TRAFFIC
FIGURE 62 IP END-TO-END DELAY TRAFFIC, IP MULTICAST TRAFFIC SENT/RECEIVED/DROPPED
FIGURE 63 DATABASE TRAFFIC
List of Tables

TABLE 1 ACRONYMS
TABLE 2 ROUTING PROTOCOL COMPARISON
TABLE 3 ICMP MESSAGE TYPES
TABLE 4 MULTICAST RANGE AND DESCRIPTION
TABLE 5 RESERVED MULTICAST ADDRESSES
TABLE 6 PIM OPERATION
TABLE 7 IP MULTICAST MODEL ARCHITECTURE
TABLE 8 SIMULATION LOG DESCRIPTION EXP 2
TABLE 9 OSPF AREAS TABULAR FORMAT EXP 2
TABLE 10 INTERNET STANDARDS
ACRONYMS
Table 1 ACRONYMS
BGP - Border Gateway Protocol
BSR - Boot Strap Router
CGMP - Cisco Group Management Protocol
DVMRP - Distance Vector Multicast Routing Protocol
EIGRP - Enhanced Interior Gateway Routing Protocol
Hop Count - Estimated number of hops
ICMP - Internet Control Message Protocol
IETF - Internet Engineering Task Force
IGMP - Internet Group Management Protocol
IS-IS - Intermediate System to Intermediate System
LAN - Local Area Network
MAC - Medium Access Control
MBGP - Multicast extension to Border Gateway Protocol
MBone - Multicast Backbone
MOSPF - Multicast Open Shortest Path First
MRIB - Multicast Routing Information Base
MTU - Maximum Transmission Unit
Multicast - One-to-many communication
OSPF - Open Shortest Path First
Overhead - The average number of control bits generated
Packet delay - The average time required to deliver the data packets to the destination
PIM - Protocol Independent Multicast
PIM-DM - PIM Dense Mode
PIM-SM - PIM Sparse Mode
Reliability - Refers in this report to reliable delivery of data
RFC - Request For Comments
RP - Rendezvous Point
RPF - Reverse Path Forwarding
SLA - Service Level Agreement
SPT - Shortest Path Tree
TCP - Transmission Control Protocol
Throughput - Average number of data bits received by multicast group members
UDP - User Datagram Protocol
Unicast - One-to-one communication
VLSM - Variable Length Subnet Masking
WAN - Wide Area Network
CHAPTER I

1 INTRODUCTION
1.1 INTRODUCTION
IP multicast is an efficient way to distribute information from a single source to multiple destinations. The network resources of the existing Internet are limited, and the number of applications that utilize IP multicast services is rising every day. The majority of applications require reliable Quality of Service (QoS) at the cost of limited available network resources. The continuous rise of applications requiring a one-to-many communication paradigm demands further research into new designs and implementations of IP multicasting. Such research can be carried out using various simulators, which ease future estimations of network growth and control over its limited resources. Among such limited resources we mention the limited IP address span, limited network bandwidth, and the limited number of multicast-capable routers in a network segment [56, 57, 24]. In this thesis we suggest several design methods and provide their implementations, aiming at reliable IP multicast services, using the popular industrial research tool OPNET Modeler 14.5 [27, 52]. A vast majority of real-time multicast applications are based on the User Datagram Protocol (UDP). Consequently, most of the simulations presented in this report use UDP as the transport protocol in our design models [25, 52, 58, 64].
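The one-to-many delivery and the UDP transport described above can be sketched at the socket level. The group address, port, and function names below are illustrative assumptions, not values taken from the simulation models; any address in the Class D range 224.0.0.0/4 identifies a group.

```python
import socket
import struct

# Illustrative group address and port, chosen for this sketch only.
GROUP = "224.0.6.1"
PORT = 5007

def make_receiver(group=GROUP, port=PORT):
    """Bind a UDP socket and join the multicast group. Setting
    IP_ADD_MEMBERSHIP makes the host emit an IGMP membership report,
    which signals the local router to forward the group's traffic."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def make_sender(ttl=1):
    """Return a UDP socket for sending to a group; the multicast TTL
    bounds how many router hops a datagram may traverse."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock
```

A sender would then call `make_sender().sendto(payload, (GROUP, PORT))`, and every joined receiver obtains its own copy of each datagram. Note that UDP gives no delivery guarantee, which is one reason the delay and packet loss statistics studied in this thesis matter.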
Nevertheless, several chapters in this thesis are dedicated to providing basic background information on commonly used routing protocols and configuration issues, with the main focus on multicast routing protocols.
1.4 SCOPE
This thesis focuses on the multicast methods currently available in OPNET Modeler 14.5 for achieving reliable group communication, Shortest Path Trees (SPTs), and reliable data delivery in Local Area Network (LAN) / Wide Area Network (WAN) environments [55, 63, 43]. It also covers network design, configuration issues, failure potential, and alternate route diversion for optimal path selection in case of critical network failures.
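The Shortest Path Tree mentioned above is, at its core, the tree a router computes with a shortest-path algorithm such as Dijkstra's. Below is a minimal sketch over a hypothetical three-router topology; the router names and link costs are ours, for illustration only.

```python
import heapq

def shortest_path_tree(graph, root):
    """Dijkstra's algorithm: map each reachable node to (cost, parent)
    in the shortest-path tree rooted at `root`.
    `graph` maps node -> {neighbor: link cost}."""
    tree = {}
    pq = [(0, root, None)]  # (cost so far, node, upstream parent)
    while pq:
        cost, node, parent = heapq.heappop(pq)
        if node in tree:  # already settled at a cheaper or equal cost
            continue
        tree[node] = (cost, parent)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in tree:
                heapq.heappush(pq, (cost + w, nbr, node))
    return tree

# Hypothetical topology with symmetric link costs, loosely echoing
# the R1/R2/R3 router naming used later in the report.
topology = {
    "R1": {"R2": 1, "R3": 4},
    "R2": {"R1": 1, "R3": 2},
    "R3": {"R1": 4, "R2": 2},
}
```

With this topology, the tree rooted at R1 reaches R3 at cost 3 via R2 rather than over the direct cost-4 link, which is exactly the kind of optimal path selection that matters when links fail and traffic must be diverted.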
1.5 MOTIVATION
The motivation behind the design models presented in this report is to discuss issues related to aggressive traffic behavior, network congestion, and heavy bandwidth utilization in an ever growing, IP multicast enabled, and resource demanding network.
1.6 LIMITATIONS
The limitations in this project are as follows [44, 51]:
1. The multicast addressing is restricted to IPv4 only, i.e., no IPv6 is implemented in the current design.
2. Security issues are not taken into consideration in our case studies due to existing restrictions in OPNET.
3. No wireless clients or wireless technology is implemented.
4. Pure PIM-DM is not implemented due to existing restrictions in OPNET.
5. IP tunneling between different zones is not implemented.
Chapter 3 presents some basic information about routing protocols, routing types, and the importance of routing algorithms. Several components of various routing protocols, ICMP message types, and error identification methods are presented there. Chapter 4 describes multicast addressing. It covers multicast address allocation methods, multicast address mapping methods, the multicast routing protocols (PIM-SM, PIM-DM, the Multicast Source Discovery Protocol (MSDP), and the Distance Vector Multicast Routing Protocol (DVMRP)), multicast tree construction, interoperability issues, Rendezvous Point (RP) mapping, and the usage of Boot Strap Routers (BSRs). This chapter also describes implementation details, transport methods, group management protocols, the model architecture of IP multicast, multicast tunneling issues, and the performance metrics used in our multicast simulation models. Chapters 5 and 6 describe the proposed model designs. The simulation design models are explained with various evaluation methods available in OPNET Modeler. Several types of network traffic, such as FTP, voice, video conferencing, and database traffic, are generated in the designed models, and several statistics on traffic utilization are collected after simulating them. Chapter 7 covers the description of the implementation work with reports, and contains the analysis of the different graphs and the comparison between the different routing protocols used in our models and their statistics. Chapter 8 concludes this report and presents the work done and the goals achieved. Future enhancements to this project are also explained in this chapter.
[Figure: timeline (1992–2006) of native PIM multicast deployment in ISPs and of SSM deployment.]
The ever growing demand for IP multicasting capability plays a crucial role in everyday real-time streaming applications. Multicast uses UDP, so reliability must be handled by the application [20, 54]. Multicast group addressing allows different hosts to join or leave a multicast group regardless of their current place in the network topology. In this project the multicast source provides a variety of services such as video conferencing, FTP, VoIP, and database access. The hosts can register with any of the services available in the network. The IP multicast solution was originally provided by using IGMP in most of the proposed designs [41]. For this report, applications such as streaming, voice, video, and database traffic were simulated in OPNET Modeler 14.5 using PIM-SM. The pros and cons of multicast traffic are discussed in detail in the following chapters.
important task of assigning multiple RPs in a network. These PIM routers assist in distributing the multicast traffic to the entire network according to the requests obtained from a particular receiver.
[Figure 1: hierarchy of routing protocols — interior protocols (distance vector: RIP, IGRP; link state: OSPF; hybrid: EIGRP) and exterior protocols (BGP, EGP).]
Figure 1 illustrates a possible hierarchy of different routing protocols. Routing protocols are used to route data packets along the best, loop-free paths; their responsibilities include adding new routes and replacing lost ones. Routing is the process of forwarding packets from a source to a destination in a systematic way. Most routing protocols in use today are based on one of two shortest path algorithms: Dijkstra's algorithm or the Bellman-Ford algorithm. The least-cost-path algorithm computes the shortest path from a source to all other nodes in a network and is also called the link state algorithm, since it advertises the status of links using update messages between routers. One such algorithm is Dijkstra's algorithm, which uses an iterative procedure to find the least cost paths to all destination nodes. Link state routing distributes information about all directly connected links using link state packets. This approach provides complete cost information about each link in a network, so the least cost among all possible paths is available. Its advantages are quick response to topology changes and node failures, but it suffers from computation overhead and larger space requirements [69]. The Bellman-Ford algorithm can be applied in two ways: a centralized approach and a distance vector approach. In the centralized approach the least cost is calculated from each node to a destination node, and the same computation is repeated once per destination; the iterations may run asynchronously or all at once. In the distance vector approach each node maintains a table with distance, destination, and next-node information, and periodically triggered updates are sent to all neighboring nodes whenever the distance vector changes. This method is used in ARPANET, RIP, DECnet, and Novell IPX. The distance vector approach suffers from routing loops (counting to infinity) and propagation delay problems, which can be mitigated by selecting loop-free paths and by split-horizon techniques [65, 69].
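As a concrete illustration of the link state computation described above, the following sketch applies Dijkstra's algorithm to a hypothetical four-router topology (the router names and link costs are invented for the example; this is not OPNET code):

```python
import heapq

def dijkstra(graph, source):
    """Compute least-cost paths from source to every reachable node.

    graph: dict mapping node -> dict of neighbor -> link cost.
    Returns dict of node -> (cost, predecessor on the least-cost path).
    """
    dist = {}
    visited = set()
    heap = [(0, source, None)]          # (cost so far, node, predecessor)
    while heap:
        cost, node, pred = heapq.heappop(heap)
        if node in visited:
            continue                     # already finalized with a lower cost
        visited.add(node)
        dist[node] = (cost, pred)
        for neighbor, link_cost in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(heap, (cost + link_cost, neighbor, node))
    return dist

# Hypothetical four-router topology with symmetric link costs.
topology = {
    "R1": {"R2": 1, "R3": 4},
    "R2": {"R1": 1, "R3": 2, "R4": 5},
    "R3": {"R1": 4, "R2": 2, "R4": 1},
    "R4": {"R2": 5, "R3": 1},
}
paths = dijkstra(topology, "R1")
# Least cost R1 -> R4 is 4, reached via R2 and R3.
```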
A routing algorithm specifies the best path for data packets to take from a source to a destination by maintaining a so-called routing table. The forwarding of data packets differs between connection-oriented and connection-less services. The selection of a routing algorithm is based upon networking requirements such as accuracy, simplicity, optimality, stability, convergence, and load balancing [1, 13, 55, 64].
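The distance vector behavior described above can likewise be sketched as a single Bellman-Ford relaxation step, in which a node merges a neighbor's advertised vector into its own routing table (a minimal sketch; the node names and table layout are hypothetical):

```python
def dv_update(own_table, neighbor, neighbor_vector, link_cost):
    """One distance-vector (Bellman-Ford) update step.

    own_table: dict destination -> (cost, next_hop).
    neighbor_vector: dict destination -> cost, as advertised by the neighbor.
    link_cost: cost of the direct link to that neighbor.
    Returns True if any route changed (a triggered update would follow).
    """
    changed = False
    for dest, advertised in neighbor_vector.items():
        candidate = link_cost + advertised
        current = own_table.get(dest, (float("inf"), None))[0]
        if candidate < current:          # relax: a cheaper path via the neighbor
            own_table[dest] = (candidate, neighbor)
            changed = True
    return changed

table = {"B": (1, "B")}                  # direct link to B, cost 1
vector_from_b = {"C": 2, "D": 5}         # B's advertised distances
dv_update(table, "B", vector_from_b, 1)
# table now also routes to C (cost 3) and D (cost 6) via B
```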
Table 2 Routing protocol comparison
OSPF Link state Arbitrary Yes Seconds LSA refreshes every 30 min
router keeps information about its immediate neighbors as well as the best path to each destination network. OSPF and IS-IS are examples of link state routing protocols [2]. In experiment II all routers are configured with OSPF as the routing protocol, using OSPF's area concept. Some of the end devices are configured with RIP as routing protocol. The reasons behind the selection of RIP are its simplicity of configuration, widespread usage, broad support, and low computational overhead. As the network size grows it is advisable to move to OSPF, although the complexity also increases proportionally.
Static routing is secure because only well known, explicitly defined routes can access a particular network [52, 55, 54]. For the sake of comparison, static routing was also implemented in experiment I, along with eBGP between neighboring routers.
3.8 IP COMPONENTS
Internet addressing and forwarding are the most important components of the Internet Protocol. IP addressing is of two types: IPv4 [RFC 791] and IPv6 [RFC 2460, RFC 3513]. All the experiments performed in this thesis work are restricted to IPv4 addressing.
ICMP Type  Code  Description
0          0     Echo reply to ping
3          0     Destination network unreachable
3          1     Destination host unreachable
3          2     Destination protocol unreachable
3          3     Destination port unreachable
3          6     Destination network unknown
3          7     Destination host unknown
4          0     Congestion control
8          0     Echo request
9          0     Route advertisement
10         0     Route discovery
[Figure 2: class D IP multicast address format — a fixed high-order prefix followed by a 28-bit multicast group ID.]
Locally scoped addresses are intended for network protocol use only; routers will not forward data addressed to them off the local network. These addresses are also used for stateless auto-configuration of hosts.
No.  Reserved Multicast Address  Description
1    224.0.0.1                   All multicast enabled systems in a network
2    224.0.0.2                   All multicast enabled routers in a network
3    224.0.0.4                   All DVMRP (Distance Vector Multicast Routing Protocol) routers
4    224.0.0.5                   All OSPF routers
5    224.0.0.6                   All OSPF Designated Routers
6    224.0.0.9                   All RIPv2 routers
7    224.0.0.10                  All EIGRP routers
8    224.0.0.13                  All PIMv2 routers
Table 5 illustrates the IANA recommended reserved link local addresses for network protocols on a local network segment. Packets with these addresses should never be forwarded by a router but should remain local to a particular network segment. Globally scoped addresses are assigned dynamically over the Internet. The 224.2.X.X address range is used by Multicast Backbone (MBone) applications. The MBone concept was introduced by the IETF in 1992 to support applications, such as audio and video meetings, with a large number of users through a virtual multicast channel. Routers enabled with multicast features are called MBone routers. Multicast packets are encapsulated inside unicast IP packets and tunneled over the existing physical media between the islands of MBone subnets, a feature in continued demand for ubiquitous multicast applications. Normal IP routers on which multicast capability is disabled, e.g., for bandwidth tracking and billing reasons, can make use of the MBone concept to provide multicast services over the Internet [34].
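The scoping rules above can be expressed as a small classifier. This is an illustrative sketch using Python's standard ipaddress module; the administratively scoped 239.0.0.0/8 range is added as a commonly used third category, beyond what the text above discusses:

```python
import ipaddress

def multicast_scope(addr):
    """Classify an IPv4 multicast address by scope (illustrative sketch).

    224.0.0.0/24 -> link-local (never forwarded off the local segment)
    239.0.0.0/8  -> administratively scoped
    any other address in 224.0.0.0/4 -> globally scoped
    """
    ip = ipaddress.IPv4Address(addr)
    if not ip.is_multicast:
        raise ValueError(f"{addr} is not a multicast address")
    if ip in ipaddress.IPv4Network("224.0.0.0/24"):
        return "link-local"
    if ip in ipaddress.IPv4Network("239.0.0.0/8"):
        return "admin-scoped"
    return "global"
```

For example, the reserved protocol addresses from the table (such as 224.0.0.5 for OSPF) classify as link-local, while MBone-style 224.2.X.X addresses classify as global.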
[Figure 3: IEEE 802.3 multicast MAC address structure — the OUI (Organizational Unique Identifier), which is always 01-00-5E, followed by a 0 bit and a 23-bit group address field.]
The multicast MAC address is derived from the IP multicast address: the destination IP address of an IP multicast packet maps to a corresponding multicast MAC address [52].
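The mapping from Figure 3 can be sketched as follows: the fixed OUI 01-00-5E, a zero bit, and the low-order 23 bits of the group address are concatenated, so 32 IP multicast groups share each MAC address. This is an illustrative sketch, not part of the OPNET models:

```python
def multicast_ip_to_mac(group_addr):
    """Map an IPv4 multicast group address to its IEEE 802.3 MAC address.

    Only the low-order 23 bits of the group address survive the mapping,
    so 32 different multicast groups map to the same MAC address.
    """
    octets = [int(o) for o in group_addr.split(".")]
    if not 224 <= octets[0] <= 239:
        raise ValueError("not an IPv4 multicast address")
    # Keep 7 bits of the second octet plus the last two octets (23 bits).
    low23 = ((octets[1] & 0x7F) << 16) | (octets[2] << 8) | octets[3]
    return "01:00:5e:{:02x}:{:02x}:{:02x}".format(
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

# e.g. 224.1.1.1 and 239.129.1.1 both map to 01:00:5e:01:01:01,
# illustrating the 32-to-1 ambiguity of the mapping.
```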
4.3.4 MSDP
Multicast Source Discovery Protocol (MSDP) can be used to connect RPs between different SM domains. Each PIM-SM domain uses its own RP and does not need to depend on RPs in other domains [5]. The tree construction protocols are PIM-SM, PIM-DM, and PIM sparse-dense mode.
Table 6 PIM operation
Dense Mode (DM): the tree is initialized with all multicast enabled routers, and branches are pruned back where IGMP membership is absent.
Sparse Mode (SM): the tree uses a shared point, the Rendezvous Point (RP), between source and destination.
Sparse-Dense Mode: the tree can operate in either sparse mode or dense mode.
Table 6 [51] illustrates the PIM mode of operation and their description.
Currently this is the most scalable IP multicast routing protocol. In this case, group members are spread sparsely throughout the network, where flooding would cause unnecessary bandwidth and performance problems. The distribution tree starts empty and branches are added depending upon the requests from hosts. Information is passed from a sender to a receiver by using a common point called the RP. Receivers always need to register with the RP in order to request data from the multicast source. The optimized path from sender to receiver is always calculated in the SM environment. PIM bidirectional mode is designed for many-to-many applications. Source Specific Multicast (SSM) is a variant of PIM-SM that builds only source-specific SPTs and does not need an active RP [20, 34, 51, 52].
Interoperability: PIM-SM, PIM-DM, and DVMRP can operate at the same time on the same interface, using either IPv4 or IPv6 addressing. In DM the multicast source broadcasts the data throughout the network and unwanted branches are pruned back, whereas in SM only the requesting receivers are eligible to receive multicast data.
Rendezvous Point (RP): the RP is a central place to which all senders and receivers send their multicast information. Receivers have to send join messages to the RP, and senders start sending according to the requests from the respective hosts. The RP is always placed at an optimal point in the network. The SPT switch occurs automatically after the first transmission attempt [51, 52].
Characteristics of an RP: 1. Defined on all multicast enabled routers. 2. Reachable by all sources and destinations. 3. Placed at an optimal location in the network.
Groups to RP mapping: there are several possible ways of mapping between multicast groups and RPs, as follows:
Static RP: it must be configured on all routers. The configuration information is local to the router and there is no protocol that propagates this information to the network.
At least one static RP must be configured for a multicast group operating in SM [51, 52].
Auto-RP: a Cisco proprietary protocol in which candidate RPs are configured together with mapping agents. Cisco's reserved address space assigned by IANA for this purpose is 224.0.1.39 for announcements and 224.0.1.40 for discovery. Auto-RP simplifies the automatic distribution of RP-to-group mappings for different multicast groups, and avoids misconfiguration issues between similar RPs or between mapping agents. A Cisco router automatically listens for RP related information. Auto-RP is mostly used with PIM-DM or with PIM sparse-dense mode [51, 52].
Boot Strap Router (BSR): the BSR capability was added in PIM version 2 and is enabled by default on Cisco routers supporting PIMv2. It simplifies the Auto-RP process. As it supports only PIMv2, interoperability issues may arise. A single BSR is elected from multiple candidate BSRs. The BSR periodically sends BSR messages to all routers in the network, and the candidate BSR with the highest priority is elected as the BSR.
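The BSR election rule described above (the candidate with the highest priority wins, with the higher BSR address as the usual tie-breaker) can be sketched as follows; the candidate list and its representation are hypothetical:

```python
def elect_bsr(candidates):
    """Elect the BSR among candidate BSRs (illustrative sketch).

    candidates: list of (priority, ip_address) pairs, IP as a dotted string.
    Highest priority wins; the higher IP address breaks ties.
    """
    def ip_key(addr):
        # Compare addresses numerically, octet by octet.
        return tuple(int(o) for o in addr.split("."))
    return max(candidates, key=lambda c: (c[0], ip_key(c[1])))

# Hypothetical candidate set: two candidates share the highest priority,
# so the one with the numerically higher address wins.
winner = elect_bsr([(0, "10.0.0.1"), (64, "10.0.0.5"), (64, "10.0.0.12")])
```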
Redundancy through RP configuration methods: a static RP cannot provide any redundancy, whereas Auto-RP and BSR may each provide some redundancy in a different way. Similar to Auto-RP, candidate RPs are configured and the information is propagated through the network. This approach is supported by PIMv2 and above [24, 36, 39].
Figure 4 [20 (module 7.3.7, Figure 1), 53] illustrates two multicast sources, each with an RP in an optimal location. Manual configuration of RPs is difficult to configure, manage, and troubleshoot; PIM sparse-dense mode instead supports automatic selection of an RP for each multicast group. Router A in the figure could be the RP for source 1, and router F could be the RP for source 2. PIM sparse-dense mode is the recommended solution from Cisco for IP multicast, because PIM-DM does not scale well and requires heavy router resources, while PIM-SM offers limited RP configuration options. If no RP is discovered for the multicast group and none is manually configured, PIM sparse-dense mode operates in DM.
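The sparse-dense fallback behavior can be sketched as a simple mode selection: if a group-to-RP mapping is known (e.g., learned via Auto-RP), sparse mode is used, otherwise the router falls back to dense mode. This is an illustrative sketch with hypothetical data structures:

```python
def select_pim_mode(group, rp_mappings):
    """PIM sparse-dense mode selection per multicast group (sketch).

    rp_mappings: dict multicast group -> RP address, as learned via
    Auto-RP (hypothetical representation).
    Returns (mode, rp): sparse with the mapped RP, or dense with none.
    """
    rp = rp_mappings.get(group)
    return ("sparse", rp) if rp is not None else ("dense", None)

# Hypothetical mapping: only one group has a known RP.
mappings = {"224.1.1.1": "10.0.0.1"}
```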
Designated Router: controls the multicast routes on a directly connected network. When multiple PIM routers are present on a network, the router with the highest IP address becomes the DR [5, 52].
conferencing, and software upgrades to a large group of receivers gain an added advantage from UDP based multicast transmission. FTP transfers using multicast transmission have the advantage of lower network bandwidth usage. Multicast transmission is used for better bandwidth utilization, less host/router processing, and simultaneous delivery of data streams [4, 64].
4.6.1 RP POSITION
Optimal placement of an RP in a network is necessary in order to get the best performance. RP placement plays a crucial role when the default router behavior is changed. In our scenarios, experiment I has three RPs defined while experiment II has one RP defined.
4.7 IGMP
The multicast data flow between two IP sub-networks can be controlled by the Internet Group Management Protocol (IGMP), which allows receivers to subscribe to a particular multicast group. IGMP is an integral part of IP. Hosts that want to receive multicast data have to inform their immediate neighboring routers of their interest in a multicast transmission through IGMP messages. A multicast enabled router periodically checks for new group members on each configured network. Hosts use IGMP to register with the router, i.e., to join and leave a specific multicast group, and by using IGMP multicast routers keep track of multicast memberships. There are three versions of this protocol [24, 27, 49, 51].
4.7.1 IGMP v1
Multicast enabled routers periodically send queries to the 224.0.0.1 multicast address in order to check which hosts are waiting for multicast traffic. In this version, the router cannot detect by itself when a host leaves a particular multicast group. IGMPv1 messages are of fixed size and encapsulated in IP datagrams. This is a widely deployed version on the Internet [49, 53, 64].
4.7.2 IGMP v2
This version introduces a special IGMP Leave message, sent by hosts that are no longer interested in multicast traffic. With added support for low leave latency, routers can quickly identify hosts that are no longer interested in group membership. Here, queries are sent to a specific multicast group instead of to the all-hosts address [41, 46, 49]. Most of the experiments in our thesis use IGMPv2 as the group management protocol.
4.7.3 IGMP v3
IGMPv3 is the most mature version of IGMP. In addition to the membership reports of the earlier versions, it lets a host inform the nearby router from which sources it will accept multicast traffic. It adds source filtering, i.e., expressing interest in receiving data from a particular source only, and support for SSM. Version 3 is interoperable with versions 1 and 2 [49, 53, 64].
Host membership query: all systems on a particular network are addressed with the standard address 224.0.0.1. These membership queries are further divided into two subtypes: a) General query: the router learns which groups have members on an attached network. b) Group specific query: the router learns whether a specified group has members on an attached network [52].
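A router's group membership tracking via reports and leaves can be sketched as follows. Real IGMP relies on queries and timers, and IGMPv2 hosts suppress duplicate reports, so tracking individual hosts per group is a simplifying assumption made here only for illustration:

```python
class IgmpRouterState:
    """Minimal sketch of per-interface IGMP group state on a router."""

    ALL_SYSTEMS = "224.0.0.1"   # destination of general queries

    def __init__(self):
        self.members = {}       # multicast group -> set of reporting hosts

    def on_report(self, group, host):
        """A membership report announces interest in a group."""
        self.members.setdefault(group, set()).add(host)

    def on_leave(self, group, host):
        """An IGMPv2 Leave message; prune the group when it empties."""
        hosts = self.members.get(group)
        if hosts:
            hosts.discard(host)
            if not hosts:
                del self.members[group]

    def has_members(self, group):
        """What a group-specific query ultimately determines."""
        return group in self.members

router = IgmpRouterState()
router.on_report("224.5.5.5", "host1")
router.on_report("224.5.5.5", "host2")
router.on_leave("224.5.5.5", "host1")
# the group still has one member, so multicast forwarding continues
```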
4.8 CGMP
Cisco Group Management Protocol (CGMP) is a protocol developed by Cisco which allows Catalyst switches to learn about the existence of multicast clients from Cisco routers and layer 3 switches. Cisco Catalyst switches use CGMP to forward layer 2 information based on IGMP operations. With CGMP running, any router receiving a multicast join message via a switch replies to the switch with a CGMP join message [20]. CGMP is a legacy multicast switching protocol and is not compatible with IGMPv3. Denial of Service (DoS) attacks are possible on CGMP enabled switches. The latest switches support IGMP snooping instead of CGMP because of CGMP's lack of security, flood control, etc. [68]. CGMP follows the client/server paradigm, where the router is considered the CGMP server and the switch the client. CGMP packets contain information about join and leave messages, the MAC addresses of IGMP clients, and the multicast MAC address of the multicast group [1].
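The CGMP join handling described above can be sketched as a switch that constrains multicast flooding to the ports of known group members. The data structures and method names are hypothetical, not the Catalyst implementation:

```python
class CgmpSwitch:
    """Sketch of a switch using CGMP join messages to limit flooding."""

    def __init__(self):
        self.group_ports = {}            # group MAC -> set of member ports

    def on_cgmp_join(self, group_mac, client_mac, mac_table):
        """The router's CGMP join carries the client MAC and group MAC;
        the switch looks the client up in its learned unicast MAC table
        to find the port behind which the member sits."""
        port = mac_table[client_mac]
        self.group_ports.setdefault(group_mac, set()).add(port)

    def forward_ports(self, group_mac, all_ports):
        """Without a CGMP entry the frame is flooded to all ports."""
        return self.group_ports.get(group_mac, set(all_ports))

switch = CgmpSwitch()
learned_macs = {"aa:aa:aa:aa:aa:01": 1, "aa:aa:aa:aa:aa:02": 2}
switch.on_cgmp_join("01:00:5e:01:01:01", "aa:aa:aa:aa:aa:01", learned_macs)
# frames for 01:00:5e:01:01:01 now go only to port 1
```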
Figure 5 [20 (module 7.2.7, Figure 1)] illustrates the CGMP operation on routers and switches.
Description
- To implement IGMP in hosts
- To implement IGMP in routers
- IGMP implementation in routers for every multicast group in a defined network
- To implement PIM-SM in router nodes
Figure 6 [51 (Figure 14-37)] illustrates the group joining procedure. Applications join a multicast group by sending a join request to the ip_igmp_host process. The ip_igmp_host process sends an IGMP membership report to neighboring routers. The ip_pim_sm process sets up a distribution tree and starts sending packets to the designated group.
Figure 7 [51 (Figure 14-38)] describes multicast traffic forwarding between the sender and the outgoing interfaces. Applications start sending packets to a multicast group. The router forwards the multicast packets to the ip_pim_sm process, which creates and sends one copy of each multicast packet for every outgoing interface specified in the routing table.
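The replication step performed by the ip_pim_sm process can be sketched as follows, with hypothetical packet and routing entry structures; one copy of the packet is produced per outgoing interface:

```python
def replicate_multicast(packet, routing_entry):
    """Sketch of the forwarding step described above: one copy of the
    multicast packet per outgoing interface in the multicast routing
    entry (the dict layouts are invented for illustration)."""
    return [dict(packet, out_interface=intf)
            for intf in routing_entry["out_interfaces"]]

pkt = {"group": "232.32.32.32", "payload": b"data"}
entry = {"group": "232.32.32.32", "out_interfaces": ["IF1", "IF2", "IF3"]}
copies = replicate_multicast(pkt, entry)
# three copies, one per outgoing interface, each carrying the same payload
```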
Figure 8 illustrates the topology used for experiment I, with 3 subnets located at different places: Ronneby, Karlskrona, and Karlshamn. The Ronneby subnet was set up as an ISP and was connected to the remaining subnets by using eBGP and static routing.
Figure 9 shows the ISP connected interface for Ronneby, with the following configuration: eBGP running between the ISP and the Ronneby.Karlskrona junction router, and between the ISP and the Ronneby.Karlsham junction router; iBGP running between the junction routers, i.e., ISP-SANJOSE1 and ISP-SANJOSE2. Static routing was established between the other subnets and the junction routers.
Figure 10 illustrates the design in Karlshamn with R9 as RP. The RP has a crucial role in finding the SPT.
Figure 11 illustrates the Karlskrona subnet, with end devices each connected to their respective switches. The subnets are interconnected through the Ronneby subnet. In the Karlskrona subnet we configured a video server. The LANs are interconnected through a switch, and the Karlsham subnet contains the video servers with heavy and low usage profiles.
Figure 12 illustrates the new model design for experiment II. The topology describes the type of routing used between the interconnected devices and the services involved, along with the group management concepts. There are a total of 9 Cisco routers, some of which are not multicast enabled, along with the end devices shown in the figure. Three OSPF areas are defined here, one acting as the backbone area.
The whole network is designed using Cisco AS5100 series routers: 3 in Ronneby, 6 in Karlskrona, and 6 in Karlsham, along with 2 Cisco Catalyst 3524-PWR XL switches. A total of 5 RP routers are selected for the whole network. Routing between the different subnets is static, while EIGRP runs inside the Karlskrona and Karlsham subnets. Some of the servers are enabled with RIP according to the scenario requirements. The application configuration uses the default applications, which support nearly 16 applications, while the profile configuration is set up for database and FTP traffic only. FTP (heavy), FTP (low), database (heavy), and database (low) traffic are simulated in the designed network, and the given interfaces are configured with PIM-SM. The applications' operation mode is set to Simultaneous, and the start time is set to constant (100). The simulation duration is set to End of Simulation. The application start time offset is set to uniform (0, 300), and the duration is set to End of Simulation. The end hosts are changed according to the scenario definition. Every simulation scenario is run in different configurations for a clearer analysis. There are 4 types of analysis available for this design, as follows:
1. Discrete Event Simulation
2. Flow Analysis
3. Net Doctor Report
4. Survivability Analysis
The performance of the PIM-SM enabled network can be estimated after collecting different statistics. PIM-SM statistics such as control traffic sent and received and network convergence duration are collected, along with EIGRP, BGP, and other performance statistics. The link utilization is varied from 0% to 30%, 40%, and 50% at different times. The main links between routers are of DS0 type, while the end hosts have 10BaseT duplex links.
Figure 14 illustrates the routing information obtained by, e.g., using the show ip route command on different routers of the Ronneby subnet. This figure also describes the routing between these devices, including detailed interface information.
The ISP router information is shown in Figure 15. This figure illustrates the neighbor relations, the loopback interface details of the interconnected interfaces, and the BGP information. Figure 16 illustrates the routing information from the Karlskrona junction router, the multicast status, and the BGP neighbors.
Figure 17 illustrates the routing information from Ronneby to all connected interfaces. All connected interfaces were checked with the ping command. The average round trip time and the connectivity issues, probed using ICMP echo messages, are displayed in this figure.
The BGP AS information shown above was obtained with the show ip bgp summary command. The figure also illustrates the multicast routing status on the Karlshamn junction router.
The statistics were collected by varying the traffic between the links. The variation of these parameters is shown in different graphs for a clearer analysis. The DES (Discrete Event Simulation) runs are designed in such a way as to also collect Flow Analysis, Net Doctor, and Survivability reports.
Figure 19 illustrates the results obtained for the average, maximum, and minimum IP statistics in experiment I. The average IP background traffic delay is observed to be 0.9782 sec., the network convergence duration was 2.8 sec., while the maximum number of hops varies from 1 to 32.
Figure 20 illustrates the PIM-SM control traffic sent and received (packets/sec). A small loss of control traffic, less than 1.5%, was observed. This loss causes the control exchange between the devices to fail, to be resumed in the next attempt.
Figures 21 and 22 illustrate the BGP and EIGRP global statistics, such as network convergence activity and traffic sent and received. According to these figures, the network convergence duration for BGP is shorter than that of EIGRP. A 1.1% loss of BGP traffic was observed, whereas no loss of EIGRP data was observed.
Top Objects Report: Point-to-Point Queuing Delay (sec)

Rank  Object Name                        Minimum    Average  Maximum
1     Karlskrona_LAN <-> SWITCH [0] -->  0.0072349  0.78418  1.1555
2     Karlskrona.R5 <-> R4 [0] -->       0.0025000  0.10804  1.1064
3     Karlskrona.R1 <-> R2 [0] -->       0.0025000  0.08469  1.1053
4     Karlsham.R9 <-> R10 [0] -->        0.0025000  0.08437  1.0421
5     Karlskrona.R5 <-> R6 [0] -->       0.0025000  0.08245  0.9812
6     Karlskrona.R5 <-> R4 [0] <--       0.0025000  0.08064  1.1020
7     Karlsham.R10 <-> R11 [0] -->       0.0025000  0.07976  1.0924
8     Karlsham.R11 <-> R13 [0] <--       0.0025000  0.07746  1.0201
9     Karlsham.R10 <-> R11 [0] <--       0.0025000  0.07406  0.9875
10    Karlskrona.R4 <-> R2 [0] <--       0.0025000  0.07147  1.0506
Figure 23 illustrates the point-to-point queuing delay in seconds. The maximum queuing delay was observed to be 1.1064 sec. between Karlskrona.R5 <-> R4. A minimum queuing delay of 0.0025000 sec. was observed between several devices.
Top Objects Report: Traffic Sent (packets/sec)

Rank  Object Name      Min  Average  Maximum  Std Dev
1     R10 --> R2       0    11.210   108.09   31.765
2     R11 --> R5       0    11.057   105.67   31.494
3     R1 --> Sanjose1  0    10.965   105.10   31.188
4     R6 --> R13       0    10.956   104.44   31.131
5     Sanjose2 --> R5  0    10.940   108.12   31.141
6     Sanjose1 --> R1  0    10.930   105.99   31.123
7     R12 --> R13      0    10.887   105.63   31.040
8     R11 --> R6       0    10.866   104.21   30.934
9     R1 --> R2        0    10.850   105.79   30.903
10    R13 --> R4       0    10.804   105.01   30.844

Top Objects Report: Traffic Received (packets/sec)

Rank  Object Name  Min  Average  Maximum  Std Dev
1     R12 --> R13  0    5.9170   56.120   16.857
2     R1 --> R2    0    5.9098   59.344   16.861
3     R2 --> R6    0    5.7426   56.150   16.399
4     R6 --> R5    0    5.6695   56.548   16.193
5     R9 --> R10   0    5.4788   56.578   15.902
6     R5 --> R2    0    5.4443   55.099   16.026
7     R3 --> R2    0    5.0859   55.536   15.407
8     R13 --> R10  0    5.0316   55.678   15.239
9     R10 --> R11  0    4.8782   57.109   15.215
10    R14 --> R13  0    4.7890   56.153   15.243
Figure 24 illustrates the maximum traffic sent (108.12 packets/sec), between the network objects Sanjose2 --> R5. The maximum traffic received was 59.344 packets/sec, between R1 --> R2. The figure also shows the maximum standard deviation (Std Dev): 31.765 for the traffic sent and 16.861 for the traffic received.
I. Flow Analysis
Figure 25 illustrates the flow analysis executive summary. The number of successful demands and failed routes can be found with the help of the flow analysis. The figure also illustrates the maximum link utilization (67%) and the average utilization (22.8%). The node configuration shows the routing information configured on particular devices. The figure also shows that there are 47 interfaces configured with the IPv4 addressing scheme, out of which 34 interfaces are configured with EIGRP routing and 3 with BGP on the main links.
II. Net Doctor
The given topology was checked against the given rules and summarized. The rules listed below were checked; successful checks show up with a green tick on the left side, while services not used in this work are shown with an N/A symbol (i.e., not available). For example, a green tick for Duplicate Address shows that there were no IP address conflicts between devices. This practically means that each interface has a unique IP address assigned for identification, connectivity with other interfaces, and flow of data between the configured devices.
IP Addressing: Duplicate Address; Invalid Interface IP Address; Invalid Subnet Mask; Overlapping Subnets; Peers In Different Subnets.
IP Multicast: Group List for PIM Candidate RP Configuration References Undefined ACL; Group List for Static RP References Undefined ACL; IGMP Access Group References Undefined ACL; Invalid PIM Register Source; Invalid Static RP Address; PIM RP-Announce-Filter Configuration References Undefined ACL.
IP Routing: Interface Not Advertised by Routing Protocols; Verify Scheduler Allocate; Verify Scheduler Interval; Inconsistent Routing Protocols; Mismatched Interface MTU; Cisco Express Forwarding Not Enabled; NetFlow Not Enabled; SPD Disabled.
Static Routing: Default Route Results in Routing Loop; Invalid Administrative Distance; Invalid Default Network Address.
Figure 26 illustrates the executive summary of the Net Doctor report. 100% indicates that the rules mentioned above are followed according to standards and that there are no configuration issues between the devices. For example, the rule Invalid Static RP Address passing means that the routers are given correct IP addressing with no address conflicts, and in particular that the RP definition and configuration follow the defined standards.
Impact Summary
Figure 28 shows that, out of the 44 failure cases that were analyzed, 19 cases (43% of the total) had a critical impact on the network, while 0 cases (0%) had a moderate impact. There were 25 cases (56% of the total) that had only a benign or no impact on the network performance. More information on the most severe failures can be found in the Worst Case Failures section.
Figure 29 illustrates a pie chart representation of the case violations: out of the 44 cases considered, 25 have no violations, while the remaining 19 are in critical condition. In practice, this would require immediate attention from a designated network administrator.
[Table residue: failure impact per performance metric — Overutilized Links rated Critical (18%), Overutilized Interfaces rated Benign, remaining metrics Benign or N/A.]
Figure 29 illustrates the failure impact. From this figure we see that 2 out of the 6 performance metrics show a serious effect on the network when some critical devices are brought down.
Element Survivability
Figure 30 illustrates element survivability, which is measured as the percentage of failure cases in which the element was successful. In the survivability table, element types with poor average survivability, below 95%, are marked as failed. For the Test Flows category, the average survivability is 93% of failure cases and the worst survivability is 81% of failure cases. The other element categories are Links, Site-Pairs, and Interfaces.
Figure 31 illustrates the number of affected flows versus the number of registered failure cases. The affected flows are divided into 3 categories: normal, moderate, and critical. For example, 2% of the flows have 23 failure cases.
Tabular format for link utilization, average affected flows, and total number of violations:

Failed Object                        Affected Flows   Overutilized Links   Overutilized Interfaces   Violations
Ronneby.Ronneby_karlsham_junction    10%              44.7%                44.7%                     0%
IV. Capacity Planning
Figure 35 illustrates the 23 links with bandwidth utilization between 20% and 40% in the network.
V. Results
Routed Demands: The number of routable and non-routable demands can be viewed after running the flow analysis. Link Utilization: The current load in the network can be estimated from individual link statistics or from the most frequently reported statistics.
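Per-link utilization is simply the offered load divided by the link capacity. A minimal sketch of the estimate (the link names and rates below are hypothetical):

```python
def utilization_pct(load_bps, capacity_bps):
    """Utilization of a link as a percentage of its capacity."""
    return 100.0 * load_bps / capacity_bps

# Hypothetical steady-state loads (bit/s) on 10 Mbit/s links.
links = {
    "R1<->R2": (2_000_000, 10_000_000),
    "R2<->R3": (4_500_000, 10_000_000),
}
report = {name: utilization_pct(load, cap) for name, (load, cap) in links.items()}
# report -> {"R1<->R2": 20.0, "R2<->R3": 45.0}
```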
Figure 37 describes the IP traffic information such as background traffic delay, convergence activity, network convergence duration, and packet drop rate during simulation time.
Figure 38 illustrates the PIM control traffic information for the multicast group 232.32.32.32. The average network convergence duration varies between 2 and 7 seconds during the simulation. A drop in PIM-SM control traffic is also observed.
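The group address used above can be checked programmatically. A quick sanity check with Python's standard `ipaddress` module: 232.32.32.32 lies inside the IPv4 multicast block 224.0.0.0/4, and also inside the 232.0.0.0/8 block (which is reserved for source-specific multicast, although the scenario here uses PIM-SM with a rendezvous point):

```python
import ipaddress

group = ipaddress.ip_address("232.32.32.32")

# Any IPv4 address in 224.0.0.0/4 is multicast.
is_multicast = group.is_multicast

# 232.0.0.0/8 is the source-specific multicast (SSM) block.
in_232_block = group in ipaddress.ip_network("232.0.0.0/8")
# is_multicast -> True, in_232_block -> True
```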
Figure 39 describes the average drop of EIGRP versus BGP traffic. The figure shows that the traffic rate for EIGRP is higher than for BGP.
Figure 40 illustrates the large drop of UDP traffic during the peak hours of the simulation.
Figure 41 illustrates the FTP client traffic. Here, the received traffic is higher in most cases, which means that the download rate at a client station is higher than the upload rate.
Figure 42 illustrates the FTP traffic information in detail. When heavy FTP traffic is used, a large drop is observed during all stages of the simulation. The client FTP upload response time is always higher than the download response time.
Figure 43 illustrates the LAN and multicast traffic. A heavy drop of IP multicast traffic is observed, and the average LAN delay varies from 0 to 0.016 sec. The figure also shows, in separate graphs, the traffic flows of the LAN: inflow, outflow, and average flow differences. It further shows that the average traffic received was always higher than the traffic sent outside the LAN. The server FTP load shows the number of requests the FTP server receives, together with the saturation point and the remaining active sessions.
7.2 EXPERIMENT II
7.2.1 INTRODUCTION TO NEW DESIGN AND EXPERIMENT II
This topology makes use of three types of customized applications: video conferencing, database, and FTP traffic. The traffic defined in this topology was designed for carrying different types of data traffic. No guarantee is given for the provided services, because the type of service is defined as best effort; UDP is used as the transport protocol, and the QoS provisioning comes with no guarantees. In addition, in this experiment the source considers only multicast addresses. The following multicast groups are defined: video conferencing, FTP, and database.
Scenario: The network is composed of three different OSPF areas. The sender is in OSPF Area 0 and the receivers are in Area 1 and Area 2. The RP router is in the backbone area, and a static RP is configured on one of the routers in the backbone area. Out of the 9 routers in the network, one takes the role of RP; this is the central reference point in the given topology. Not all routers in the topology are multicast enabled. All multicast traffic takes the shortest path through the PIM-SM RP router. The RP router forwards all multicast-related join/leave/register messages using the PIM protocol. The type of multicast tree built using PIM in this scenario is a shared tree. The PIM RP serves as a common reference point for the source hosts in the given topology.
The sender requests to register as a source with the RP router before it starts transmitting multicast traffic. After the source registers with the RP, the destinations are able to receive their requested information through the shared tree.
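The register/join interplay can be pictured with a toy shared-tree model. This is an illustration of the idea only, not OPNET's or PIM-SM's actual state machine; all names are our own:

```python
class RendezvousPoint:
    """Toy model of the shared-tree state kept at the RP."""

    def __init__(self):
        self.sources = {}   # group -> set of registered sources
        self.members = {}   # group -> set of joined receivers

    def register(self, group, source):
        # A sender must register with the RP before transmitting.
        self.sources.setdefault(group, set()).add(source)

    def join(self, group, receiver):
        self.members.setdefault(group, set()).add(receiver)

    def leave(self, group, receiver):
        self.members.get(group, set()).discard(receiver)

    def deliver(self, group):
        """Receivers reached via the shared tree, once a source is registered."""
        if self.sources.get(group):
            return set(self.members.get(group, set()))
        return set()

rp = RendezvousPoint()
rp.join("G1", "Dest1")          # receiver joins first: no traffic flows yet
before = rp.deliver("G1")       # -> set()
rp.register("G1", "Source1")    # source registers with the RP
after = rp.deliver("G1")        # -> {"Dest1"}
```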
Figure 45 illustrates the video conferencing routing from source to destination. The group 24.0.6.1 was involved in video conferencing; no other groups are allowed to interact with this traffic unless they register with the RP to receive the video traffic. This is also the shortest path taken from the source to all receivers interested in the video traffic. Any device can join or leave at will by informing the RP. The RP takes care of the tree distribution from the source to all destinations in the network.
Figure 46 illustrates the shortest path taken by the FTP traffic from sender to receiver. Routing goes through the PIM RP router after enabling sparse mode in the given network.
Figure 47 illustrates the database traffic flow from source to destination. The shortest path is taken through the PIM RP router. Globally scoped multicast addresses were defined for these groups; accordingly, the requests of a receiver are served by the source of the defined multicast group anywhere in the network.
Application Config: The video application is defined with a constant distribution with a mean outcome of 0.001 for both the incoming and the outgoing stream; the frame size for incoming and outgoing frames is constant (5000). The FTP application is defined with an exponential distribution for the amount of time between file transfers and a constant file size (50000). The database application is defined with an exponential distribution with a mean outcome of 30 for the database transactions; the transaction size is constant at 1024.
Profile Config: User profiles are created in the Profile Config object by making use of the applications defined in the Application Config object. In this design, three profiles were defined, for video, FTP, and database traffic. In all user profiles, the applications start one after the other in a serial fashion. All application profile sessions start at intervals drawn from a uniform(100, 110) distribution. Only for video is the repeatability option set to once at start time.
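The distribution parameters above can be mimicked with Python's standard `random` module. A sketch under the stated assumptions (the mean values come from the configuration described in the text; the function names are our own):

```python
import random

random.seed(42)  # reproducible sampling, for illustration only

def profile_start_time():
    """Profile sessions start at a uniform(100, 110) offset (seconds)."""
    return random.uniform(100, 110)

def db_interarrival():
    """Database transactions: exponential inter-arrival time with mean 30 s."""
    return random.expovariate(1 / 30)

FTP_FILE_SIZE = 50_000        # constant FTP file size
DB_TRANSACTION_SIZE = 1024    # constant database transaction size

starts = [profile_start_time() for _ in range(5)]
gaps = [db_interarrival() for _ in range(1000)]
```

With enough samples, the empirical mean of `gaps` approaches the configured mean of 30 seconds.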
Figure 48 illustrates the DES node details. Here, each node description, event number, and category, along with messages about configuration issues, is displayed for further action after the simulation.
Table 8: Simulation log description, Experiment 2. The log lists, for each entry: the simulation time, the current simulation event number, individual node event details, low-level simulation errors, messages generated by protocols, and possible errors and suggestions reported as log messages.
Figure 49 illustrates the global statistics summary of PIM-SM control traffic, network convergence activity, and register messages. Statistics such as average, maximum, and minimum are collected for further statistical analysis.
II. Flow Analysis
Flow Analysis analyzes IP, ATM, Frame Relay, and circuit-switched networks by considering the traffic flows in the network as well as detailed models of network addressing and routing protocol implementations. Network planners, traffic engineers, and network operations staff can use Flow Analysis to help diagnose current network problems and to predict future network performance. The Flow Analysis functionality can further be used to conduct routing analysis, survivability analysis, demand performance analysis, link performance analysis, capacity planning, and VoIP readiness assessment. Flow Analysis can also assist in viewing traffic volumes, traffic types, equipment failures, or device configurations. It also provides information about utilization and performance statistics for each network object, end-to-end routing for each flow, steady-state delay estimates, routing table information, IP forwarding tables, detailed protocol configurations, and network inventory reports.
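End-to-end routing for each flow ultimately rests on shortest-path computation over the topology. A compact Dijkstra sketch over a toy topology (the node names and link costs below are made up for illustration):

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over an adjacency dict {node: {neighbor: cost}}.
    Returns (total_cost, path) or (inf, []) if dst is unreachable."""
    pq = [(0, src, [src])]
    visited = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in visited:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# Toy topology with symmetric link costs.
topo = {
    "R1": {"R2": 1, "R3": 4},
    "R2": {"R1": 1, "R3": 1},
    "R3": {"R1": 4, "R2": 1},
}
cost, path = shortest_path(topo, "R1", "R3")
# cost -> 2, path -> ["R1", "R2", "R3"]  (two cheap hops beat the direct link)
```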
Figure 50 illustrates the executive summary of the flow analysis. It shows the number of links between the different devices, their IPv4 addressing scheme, the routing protocols used, and detailed node and interface configuration information.
Figure 51 illustrates the IP multicast group addressing. Different traffic types use different group addressing schemes. The figure also shows the source address, group address, size, mode of operation, and interface details.
III. Net Doctor
Net Doctor is a rule-based engine that identifies incorrect device configurations, policy violations, and potential problems related to availability, performance, and security in a network. Net Doctor is also used to set up periodic network audit reports and to identify errors in a network. Rules can be applied against configuration data from a production network, so inconsistencies in a network are easily identified. In this experiment, the connectivity between the different OSPF area border routers and backbone zones is evaluated using Net Doctor rules. Issues related to duplicate IP addresses, invalid interface IP addresses, invalid subnet masks, overlapping subnets, IP multicast RP configuration, IGMP, and IP routing are checked with the Net Doctor analysis.
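Two of these checks, duplicate addresses and overlapping subnets, are easy to sketch with Python's standard `ipaddress` module. The interface names and prefixes below are hypothetical:

```python
import ipaddress

def duplicate_ips(interfaces):
    """interfaces: {name: 'addr/prefixlen'}. Flags addresses assigned twice."""
    seen, dups = {}, []
    for name, cidr in interfaces.items():
        addr = ipaddress.ip_interface(cidr).ip
        if addr in seen:
            dups.append((seen[addr], name, str(addr)))
        else:
            seen[addr] = name
    return dups

def overlapping_subnets(prefixes):
    """Returns every pair of prefixes whose address ranges overlap."""
    nets = [ipaddress.ip_network(p) for p in prefixes]
    return [(str(a), str(b))
            for i, a in enumerate(nets) for b in nets[i + 1:]
            if a.overlaps(b)]

dups = duplicate_ips({
    "R1/Fa0": "10.0.0.1/30",
    "R2/Fa0": "10.0.0.2/30",
    "R3/Fa0": "10.0.0.1/30",   # misconfigured duplicate
})
overlaps = overlapping_subnets(["10.0.0.0/24", "10.0.0.128/25", "10.1.0.0/24"])
# dups -> [("R1/Fa0", "R3/Fa0", "10.0.0.1")]
# overlaps -> [("10.0.0.0/24", "10.0.0.128/25")]
```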
Figure 52 illustrates the 16 tested devices checked against 107 rules, with a score of 100, which means the design passes these rules and no misconfiguration issues were found.
Figure 53 illustrates, as a pie chart for Experiment II, the OSPF areas and the number of interfaces configured in Areas 0, 1, and 2. There were 8 interfaces configured in Area 2, 11 interfaces in Area 1, and 9 interfaces in Area 0. The RP router is located in the backbone area.
Figure 54 illustrates the OSPF area summary as a bar chart, showing the number of active OSPF areas during the snapshot period. The total number of routers per area is clearly observed in this chart: Area 0 has 4 routers, Area 2 has 4 routers, and Area 1 has 5 routers.
Table 9: OSPF area summary, Experiment 2 (3 areas, 9 devices, 28 interfaces)

Area       Configured interfaces
0.0.0.0    9
0.0.0.1    11
0.0.0.2    8
Survivability analysis is useful for identifying communication issues between devices and for performance evaluation by defining IP test flows. Test flows can be defined between all IP-capable devices for later survivability analysis and network performance studies. Failure analysis is used to identify individual or simultaneous failures of specified network devices. Figure 56 illustrates the survivability analysis when 44 failure cases are simulated simultaneously, out of which 35 cases are close to poor performance. The threshold limits can be defined based on different criteria. The survival statistics of individual devices and links give a good estimate for future failure cases.
Figure 55 illustrates the network health summary and worst case failures. For example, MC2 has 1 critical violation over 55% of the affected flows. The figure also indicates that every flow has at least one critical violation among its affected flows.
Top Objects Report: point-to-point queuing delay (sec), with minimum, average, maximum, and standard deviation statistics.

Rank  Object Name               Minimum     Average     Maximum    Std Dev
1     R3 <-> Dest1 [0] <-       0.016727    0.041375    0.25673    0.045055
2     MC3 <-> MC2 [0] <-        0.016242    0.034264    0.09964    0.024999
3     MC3 <-> MC2 [0] -->       0.014303    0.032260    0.12904    0.027064
4     R3 <-> Dest1 [0] -->      0.017212    0.030176    0.09479    0.021094
5     R3 <-> Source1 [0] -->    0.008875    0.020015    0.13237    0.022856
6     R3 <-> Source1 [0] <-     0.008875    0.016692    0.05138    0.011907
7     R4 <-> R3 [0] -->         0.008375    0.016332    0.07888    0.017579
8     MC3 <-> Source1 [0] <-    0.008875    0.014827    0.04888    0.009622
9     R4 <-> R3 [0] <-          0.008875    0.014802    0.06287    0.011147
10    MC3 <-> Source1 [0] -->   0.008875    0.014674    0.04888    0.009786
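The per-object statistics in the table above can be reproduced from raw queuing-delay samples with Python's standard `statistics` module (the sample values below are made up for illustration):

```python
import statistics

def delay_stats(samples):
    """Min/average/max/sample-standard-deviation of a series of delay samples."""
    return {
        "min": min(samples),
        "avg": statistics.fmean(samples),
        "max": max(samples),
        "std": statistics.stdev(samples),  # sample standard deviation
    }

stats = delay_stats([0.010, 0.020, 0.030])
# min 0.010, max 0.030, avg ~0.020, std ~0.010
```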
Figure 56 illustrates the point-to-point queuing delay between different network objects. For example, a maximum queuing delay of 0.13237 sec is observed on R3 <-> Source1 [0], whereas a minimum queuing delay of 0.008875 sec is observed for many network objects.

IV. Capacity Planning
Capacity Planning Module: Capacity planning in Flow Analysis estimates future network scalability issues over a period of time. The Capacity Planning module lets you extend your analysis to cover future time periods so that you can see how the network will perform over time. The capacity planning feature includes several reports that summarize the performance of the network.
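The core of such a projection is compound traffic growth measured against link capacity. A simple sketch (the growth rate and starting utilization below are hypothetical):

```python
def project_utilization(current_pct, growth_per_period, periods):
    """Project link utilization under compound growth and report the first
    period (if any) in which the link saturates (>= 100%)."""
    series = [current_pct * (1 + growth_per_period) ** t
              for t in range(periods + 1)]
    saturated_at = next((t for t, u in enumerate(series) if u >= 100), None)
    return series, saturated_at

# A link at 40% utilization whose traffic grows 50% per planning period.
series, when = project_utilization(40, 0.5, 3)
# series -> [40.0, 60.0, 90.0, 135.0]; the link saturates in period 3
```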
Results, Experiment II
In the given topology, the network convergence activity varies slightly depending on the type of traffic. In this scenario only one RP was defined, so the final convergence time varies from 0 to 4 sec. Load balancing is provided by the PIM RP router, with a minute drop of control traffic after the traffic increase. The network convergence time increases slightly with an increase in multicast traffic.
A huge drop in packets is observed for video conferencing in the designed network. A sudden increase in packet end-to-end delay is observed after a certain simulation time, due to the heavy usage of multicast traffic. The health of the network depends on the routing protocol convergence time. Here the network uses OSPF, RIP, and PIM as routing protocols.
Figure 59 illustrates the average IP traffic, FTP traffic, VLAN traffic, and network convergence activity. The average traffic received is lower than the traffic sent, which means that uploading was higher than downloading. The VLAN traffic drop shows the packet loss in the switches for all types of management traffic. The average network convergence duration varies from 0 to 20 sec. The maximum number of hops is 30 on average, and the IP traffic drop rate is at most 1500 packets during peak simulation periods.
The FTP upload response time is higher than the download response time. There is a huge drop in video traffic after a certain point. The average packet end-to-end delay varies from 0 to 28 sec.
Figure 61 illustrates the average IP end-to-end delay; the IP traffic sent, dropped, and received; and the average IP multicast traffic sent and received.
Figure 62 illustrates the database entry and query response times, together with the average client DB query and response times. A heavy drop of client database traffic is observed. The average client database entry response time was observed to be 11 sec, and the query response time 20 sec. A small drop of client database entry traffic was also observed.
CHAPTER VIII