
Authored By: Anthony Sequeira CCIE# 15626 (R&S), CCDP, CCSP

Terry Vinson CCNP


Technical Editors: Carl Yost Jr. CCIE# 30486 (R&S), Jason Gooley CCNP
Technical Consultant: Scott Morris CCIEx4 #4713 (R&S) (ISP-Dial) (Security) (SP), CCDE #2009::13
Editor: Tiffany Pagan

IPv4/6 Multicast Operation and Troubleshooting

Before We Begin
This product is part of the IPexpert suite of materials that provide CCIE candidates and network
engineers with a comprehensive training program. For information about the full solution, contact an
IPexpert Training Advisor today.

Telephone: +1.810.326.1444
Email: sales@ipexpert.com

Congratulations! You now possess one of the ULTIMATE CCIE™ Lab preparation and network operation
resources available today! Senior engineers, technical instructors, and authors boasting decades of
internetworking experience produced this resource.

In order to enjoy technical support from IPexpert and your CCIE community, be sure to visit the
following Internet resources:

http://blog.ipexpert.com
http://onlinestudylist.com

IPexpert is proud to lead the industry with multiple support options at your disposal free of charge. Our
online communities have attracted a membership of over 20,000 of your peers from around the world!
At blog.ipexpert.com, you can keep up to date with everything IPexpert does and read the latest in
technical articles from world-renowned IPexpert instructors. At OnlineStudyList.com, you may subscribe
to multiple SPAM-free, moderated CCIE-focused email lists.

Feedback

Do you have a suggestion or other feedback regarding this book or other IPexpert products? At IPexpert,
we look to you, our valued clients, for the real-world, frontline evaluation that we believe is necessary
so that we may always improve. Please send an email with your thoughts to feedback@ipexpert.com or
call 1.866.225.8064 (international callers dial +1.810.326.1444).

In addition, for those using this book as CCIE™ preparation, when you pass the CCIE™ Lab exam, we
want to hear about it! Email your CCIE™ number to success@ipexpert.com and let us know how
IPexpert helped you succeed. We would like to send you a gift of thanks and congratulations.



Copyright © by IPexpert, Inc. All Rights Reserved.


Additional CCIE™ Preparation Material



IPexpert, Inc. is committed to developing the most effective Cisco CCIE™ R&S, Security, Voice, and
Wireless Lab certification preparation tools available. Our team of certified networking professionals
develops the most up-to-date and comprehensive materials for networking certification, including
self-paced workbooks, online Cisco hardware rental, classroom training, online (distance learning)
instructor-led training, audio products, and video training materials. Unlike other certification-training
providers, we employ the most experienced and accomplished teams of experts to create, maintain, and
constantly update our products. At IPexpert, we are focused on making your CCIE™ Lab preparation
more effective.

Issues with this Book



This book is carefully edited to ensure the accuracy of all content. Should you find any error whatsoever,
please email a page reference and detailed comment to compsolv@me.com. Your email will be
responded to promptly.


IPEXPERT END-USER LICENSE AGREEMENT


END USER LICENSE FOR ONE (1) PERSON ONLY

IF YOU DO NOT AGREE WITH THESE TERMS AND CONDITIONS,
DO NOT OPEN OR USE THE TRAINING MATERIALS.

This is a legally binding agreement between you and IPEXPERT, the Licensor, from whom you have
licensed the IPEXPERT training materials (the "Training Materials"). By using the Training Materials, you
agree to be bound by the terms of this License, except to the extent these terms have been modified by
a written agreement (the "Governing Agreement") signed by you (or the party that has licensed the
Training Materials for your use) and an executive officer of Licensor. If you do not agree to the License
terms, the Licensor is unwilling to license the Training Materials to you. In this event, you may not use
the Training Materials, and you should promptly contact the Licensor for return instructions.

The Training Materials shall be used by only ONE (1) INDIVIDUAL who shall be the sole individual
authorized to use the Training Materials throughout the term of this License.

Copyright and Proprietary Rights



The Training Materials are the property of IPEXPERT, Inc. ("IPEXPERT") and are protected by United
States and International copyright laws. All copyright, trademark, and other proprietary rights in the
Training Materials and in the text, graphics, design elements, audio, and all other
materials originated by IPEXPERT at its site, in its workbooks, scenarios and courses (the "IPEXPERT
Information") are reserved to IPEXPERT.

The Training Materials cannot be used by or transferred to any other person. You may not rent, lease,
loan, barter, sell or time-share the Training Materials or accompanying documentation. You may not
reverse engineer, decompile, or disassemble the Training Materials. You may not modify, or create
derivative works based upon the Training Materials in whole or in part. You may not reproduce, store,
upload, post, transmit, download or distribute in any form or by any means, electronic, mechanical,
recording or otherwise any part of the Training Materials and IPEXPERT Information other than printing
out or downloading portions of the text and images for your own personal, non-commercial use without
the prior written permission of IPEXPERT.

You shall observe copyright and other restrictions imposed by IPEXPERT. You may not use the Training
Materials or IPEXPERT Information in any manner that infringes the rights of any person or entity.


Exclusions of Warranties

THE TRAINING MATERIALS AND DOCUMENTATION ARE PROVIDED "AS IS." LICENSOR HEREBY DISCLAIMS
ALL OTHER WARRANTIES, EXPRESS, IMPLIED, OR STATUTORY, INCLUDING WITHOUT LIMITATION, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. SOME STATES
DO NOT ALLOW THE LIMITATION OF INCIDENTAL DAMAGES OR LIMITATIONS ON HOW LONG AN
IMPLIED WARRANTY LASTS, SO THE ABOVE LIMITATIONS OR EXCLUSIONS MAY NOT APPLY TO YOU. This
agreement gives you specific legal rights, and you may have other rights that vary from state to state.


Choice of Law and Jurisdiction
This Agreement shall be governed by and construed in accordance with the laws of the State of
Michigan, without reference to any conflict of law principles. You agree that any litigation or other
proceeding between you and Licensor in connection with the Training Materials shall be brought in the
Michigan state or federal courts located in Port Huron, Michigan, and you consent to the jurisdiction of such
courts to decide the matter. The parties agree that the United Nations Convention on Contracts for the
International Sale of Goods shall not apply to this License. If any provision of this Agreement is held
invalid, the remainder of this License shall continue in full force and effect.

Limitation of Claims and Liability



ANY ACTION ON ANY CLAIM AGAINST IPEXPERT MUST BE BROUGHT BY THE USER WITHIN ONE (1) YEAR
FOLLOWING THE DATE THE CLAIM FIRST ACCRUED, OR SHALL BE DEEMED WAIVED. IN NO EVENT WILL
THE LICENSOR'S LIABILITY UNDER, ARISING OUT OF, OR RELATING TO THIS AGREEMENT EXCEED THE
AMOUNT PAID TO LICENSOR FOR THE TRAINING MATERIALS. LICENSOR SHALL NOT BE LIABLE FOR ANY
SPECIAL, INCIDENTAL, INDIRECT, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, REGARDLESS OF WHETHER LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES. WITHOUT LIMITING THE FOREGOING, LICENSOR WILL NOT BE LIABLE FOR LOST
PROFITS, LOSS OF DATA, OR COSTS OF COVER.


Entire Agreement

This is the entire agreement between the parties and may not be modified except in writing signed by
both parties.


U.S. Government - Restricted Rights



The Training Materials and accompanying documentation are "commercial computer Training
Materials" and "commercial computer Training Materials documentation," respectively, pursuant to
DFAR Section 227.7202 and FAR Section 12.212, as applicable. Any use, modification, reproduction,
release, performance, display, or disclosure of the Training Materials and accompanying documentation
by the U.S. Government shall be governed solely by the terms of this Agreement and shall be prohibited
except to the extent expressly permitted by the terms of this Agreement.

IF YOU DO NOT AGREE WITH THE ABOVE TERMS AND CONDITIONS, DO NOT OPEN OR USE THE
TRAINING MATERIALS AND CONTACT LICENSOR FOR INSTRUCTIONS ON RETURN OF THE TRAINING
MATERIALS.


Table of Contents
Before We Begin ...................................................................................................................................... 1
Feedback .................................................................................................................................................. 1
Additional CCIE™ Preparation Material .................................................................................. 2
IPEXPERT END-USER LICENSE AGREEMENT ............................................................................................. 3
Copyright and Proprietary Rights ............................................................................................................ 3
Exclusions of Warranties ......................................................................................................................... 4
Choice of Law and Jurisdiction ................................................................................................................. 4
Limitation of Claims and Liability ............................................................................................................. 4
Entire Agreement .................................................................................................................................... 4
U.S. Government - Restricted Rights ....................................................................................................... 5
Chapter 1: Introduction to IPv4/6 Multicast Operation and Troubleshooting ......................................... 1-1
About the Authors ................................................................................................................................ 1-2
About the Technical Editors .................................................................................................................. 1-2
About the Technical Consultant ........................................................................................................... 1-2
About the Editor ................................................................................................................................... 1-3
Who Should Read this Book? ................................................................................................................ 1-3
How to Use this Book ........................................................................................................................... 1-3
An Introduction to IPv4/6 Multicast ..................................................................................................... 1-4
An Introduction to IPv4/6 Multicast Troubleshooting .......................................................................... 1-5
Chapter 2: Internet Group Management Protocol (IGMP) ....................................................................... 2-1
IGMP Technology Review ..................................................................................................................... 2-2
IGMP Version 1 ................................................................................................................................. 2-3
IGMP Version 2 ................................................................................................................................. 2-4
IGMP Version 3 ................................................................................................................................. 2-5
IGMP Leave Process .......................................................................................................................... 2-6
The Operation and Troubleshooting of IGMP ...................................................................................... 2-7
IGMP Version 1 ................................................................................................................................. 2-8


IGMP Version 2 ............................................................................................................................... 2-10
IGMP Version 3 ............................................................................................................................... 2-12
IGMP and Multicast Forwarding ..................................................................................................... 2-13
Common Issues with IGMP ................................................................................................................. 2-14
Host Fails to Send IGMP joins ......................................................................................................... 2-14
Switch Fails To Forward IGMP Packets ........................................................................................... 2-14
IGMP Packet Filtering ..................................................................................................................... 2-15
IGMP Sample Troubleshooting Scenarios ............................................................................................... 2-16
Host Fails To Send IGMP Joins ........................................................................................................ 2-16
Switch Fails To Forward IGMP Packets ........................................................................................... 2-18
IGMP Packet Filtering ..................................................................................................................... 2-20
IGMP Show Command Tools .................................................................................................................. 2-21
IGMP Debug Command Tools ................................................................................................................. 2-24
Chapter Challenge: IGMP Sample Trouble Tickets ................................................................................. 2-26
Trouble Ticket #1 ............................................................................................................................ 2-26
Trouble Ticket #2 ............................................................................................................................ 2-26
Trouble Ticket #3 ............................................................................................................................ 2-26
Chapter Challenge: IGMP Sample Trouble Tickets Solutions .................................................................. 2-27
Trouble Ticket #1 Solution .............................................................................................................. 2-27
Trouble Ticket #2 Solution .............................................................................................................. 2-29
Trouble Ticket #3 Solution .............................................................................................................. 2-31
Chapter 3: Protocol Independent Multicast - Dense Mode (PIM-DM) ..................................................... 3-1
PIM-DM Technology Review ................................................................................................................. 3-2
PIMv1 ................................................................................................................................................ 3-2
PIMv2 ................................................................................................................................................ 3-2
PIM Dense Mode .............................................................................................................................. 3-4
The Operation and Troubleshooting of PIM-DM .................................................................................. 3-5
Reverse Path Forwarding .............................................................................................. 3-22


Common Issues with PIM-DM ............................................................................................................ 3-25
RPF Failures ..................................................................................................................................... 3-25
Hub and Spoke Designs .................................................................................................................. 3-25
Multicast Threshold Problems ........................................................................................................ 3-26
PIM-DM Sample Troubleshooting Scenarios .......................................................................................... 3-27
Fault isolation in PIM-DM ............................................................................................................... 3-27
PIM-DM show Command Tools .............................................................................................................. 3-38
PIM-DM debug Command Tools ............................................................................................................. 3-41
Chapter Challenge: PIM-DM Sample Trouble Tickets ............................................................................. 3-43
Trouble Ticket #1 ............................................................................................................................ 3-43
Trouble Ticket #2 ............................................................................................................................ 3-43
Trouble Ticket #3 ............................................................................................................................ 3-43
Chapter Challenge: PIM-DM Sample Trouble Tickets Solutions ............................................................. 3-44
Trouble Ticket #1 Solution .............................................................................................................. 3-44
Trouble Ticket #2 Solution .............................................................................................................. 3-46
Trouble Ticket #3 Solution .............................................................................................................. 3-49
Chapter 4: Protocol Independent Multicast - Sparse Mode (PIM-SM) ..................................................... 4-1
PIM-SM Technology Review ................................................................................................................. 4-2
The Operation and Troubleshooting of PIM-SM .................................................................................. 4-3
Merging the Trees ........................................................................................................................... 4-12
The Shortest Path Tree (SPT) .......................................................................................................... 4-17
Common Issues with PIM-SM ............................................................................................................. 4-21
RPF Failures ..................................................................................................................................... 4-21
Unicast Routing and Forwarding Problems .................................................................................... 4-21
Multicast Routing and Forwarding Problems ................................................................................. 4-22
PIM-SM Sample Troubleshooting Scenarios ........................................................................................... 4-23
PIM-SM show Command Tools ............................................................................................................... 4-32
PIM-SM debug Command Tools ............................................................................................................. 4-36


Chapter Challenge: PIM-SM Sample Trouble Tickets ............................................................................. 4-38
Trouble Ticket #1 ............................................................................................................................ 4-38
Trouble Ticket #2 ............................................................................................................................ 4-38
Trouble Ticket #3 ............................................................................................................................ 4-38
Chapter Challenge: PIM-SM Sample Trouble Tickets Solutions .............................................................. 4-39
Trouble Ticket #1 Solution .............................................................................................................. 4-39
Trouble Ticket #2 Solution .............................................................................................................. 4-42
Trouble Ticket #3 Solution .............................................................................................................. 4-45
Chapter 5: Protocol Independent Multicast Sparse-Dense Mode (PIM-S-DM) ..................................... 5-1
PIM-S-DM Technology Review .............................................................................................................. 5-2
The Operation and Troubleshooting of PIM-S-DM ............................................................................... 5-3
Introduction of the Topology ............................................................................................................ 5-3
The Problem with PIM-S-DM ............................................................................................................ 5-8
Common Issues with PIM-S-DM ......................................................................................................... 5-10
RPF Failures ..................................................................................................................................... 5-10
Unicast Routing and Forwarding Problems .................................................................................... 5-11
Multicast Routing and Forwarding Problems ................................................................................. 5-11
PIM-S-DM Sample Troubleshooting Scenarios ....................................................................................... 5-13
RPF Fault Isolation in PIM-S-DM (for Sparse Mode Traffic) ............................................................ 5-13
RPF Fault isolation in PIM-S-DM (for Dense Mode Traffic) ............................................................. 5-16
Unicast Routing and Forwarding Problems .................................................................................... 5-18
Multicast Routing and Forwarding problems ................................................................................. 5-22
PIM-S-DM show Command Tools ........................................................................................................... 5-28
PIM-S-DM debug Command Tools ......................................................................................................... 5-32
Chapter Challenge: PIM-S-DM Sample Trouble Tickets .......................................................................... 5-34
Trouble Ticket #1 ............................................................................................................................ 5-34
Trouble Ticket #2 ............................................................................................................................ 5-34
Chapter Challenge: PIM-S-DM Sample Trouble Tickets Solutions .......................................................... 5-35


Trouble Ticket #1 Solution .............................................................................................................. 5-35
Trouble Ticket #2 Solution .............................................................................................................. 5-40
Chapter 6: Bidirectional Protocol Independent Multicast (BIDIR-PIM) .................................................... 6-1
BIDIR-PIM Technology Review .............................................................................................................. 6-3
The Operation and Troubleshooting of BIDIR-PIM ............................................................................... 6-5
BIDIR-PIM RP .................................................................................................................................... 6-5
Host-to-RP Shared Tree .................................................................................................................... 6-7
Source-to-RP Shared Tree ................................................................................................................. 6-9
BIDIR-PIM Neighbors and Designated Forwarder Election ............................................................. 6-11
Common Issues with BIDIR-PIM ......................................................................................................... 6-13
RP and DR Failures .......................................................................................................................... 6-13
Multicast Routing and Forwarding Problems ................................................................................. 6-13
BIDIR-PIM Sample Troubleshooting Scenarios ....................................................................................... 6-15
RP Failure in BIDIR-PIM ................................................................................................................... 6-15
DF Failure in BIDIR-PIM ................................................................................................................... 6-17
Multicast Routing and Forwarding Failure in BIDIR-PIM ................................................................ 6-20
BIDIR-PIM show Command Tools ........................................................................................................... 6-24
BIDIR-PIM debug Command Tools .......................................................................................................... 6-28
Chapter Challenge: BIDIR-PIM Sample Trouble Tickets .......................................................................... 6-30
Trouble Ticket #1 ............................................................................................................................ 6-30
Trouble Ticket #2 ............................................................................................................................ 6-30
Chapter Challenge: BIDIR-PIM Sample Trouble Tickets Solutions ............................................. 6-31
Trouble Ticket #1 Solution .............................................................................................................. 6-31
Trouble Ticket #2 Solution .............................................................................................................. 6-33
Chapter 7: Static Rendezvous Points (RPs) ............................................................................................... 7-1
Static RP Technology Review ................................................................................................................ 7-2
The Operation and Troubleshooting of Static RP ................................................................................. 7-3
Introduction to Load Balancing between RPs ................................................................................... 7-3


Common Issues with Static RP .............................................................................................................. 7-7
Static RP Sample Troubleshooting Scenarios ............................................................................................ 7-8
Incorrect RP Assignment ................................................................................................................... 7-8
ACL Issue ......................................................................................................................................... 7-10
Static Rendezvous Points show Command Tools ................................................................................... 7-13
Static Rendezvous Points debug Command Tools .................................................................................. 7-17
Chapter Challenge: Static RP Sample Trouble Tickets ............................................................................ 7-19
Trouble Ticket #1 ............................................................................................................................ 7-19
Chapter Challenge: Static RP Sample Trouble Tickets Solutions ............................................................ 7-20
Trouble Ticket #1 Solution .............................................................................................................. 7-20
Chapter 8: AutoRP .................................................................................................................................... 8-1
AutoRP Technology Review .................................................................................................................. 8-2
The Operation and Troubleshooting of Auto-RP .................................................................................. 8-4
C-RP Announcements ....................................................................................................................... 8-4
Mapping Agent Assignment and Placement ..................................................................................... 8-9
Multicast Routing Topology ............................................................................................................ 8-14
Auto-RP Listener ............................................................................................................................. 8-18
One Last Step .................................................................................................................................. 8-31
Common Issues with Auto-RP ............................................................................................................ 8-33
RPF Failures ..................................................................................................................................... 8-33
Multicast Routing and Forwarding Problems ................................................................................. 8-33
Auto-RP Sample Troubleshooting Scenarios .......................................................................................... 8-35
RPF failures ..................................................................................................................................... 8-37
Multicast Forwarding and Routing Failures .................................................................................... 8-40
AutoRP show Command Tools ................................................................................................................ 8-45
AutoRP debug Command Tools .............................................................................................................. 8-50
Chapter Challenge: Auto-RP Sample Trouble Tickets ............................................................................. 8-52
Trouble Ticket #1 ............................................................................................................................ 8-52


Trouble Ticket #2 ............................................................................................................................ 8-52
Trouble Ticket #3 ............................................................................................................................ 8-52
Chapter Challenge: Auto-RP Sample Trouble Tickets Solutions ............................................................. 8-53
Trouble Ticket #1 Solution .............................................................................................................. 8-53
Trouble Ticket #2 Solution .............................................................................................................. 8-55
Trouble Ticket #3 Solution .............................................................................................................. 8-59
Chapter 9: Bootstrap Router (BSR) Protocol ............................................................................................ 9-1
BSR Technology Review ........................................................................................................................ 9-2
The Operation and Troubleshooting of BSR ......................................................................................... 9-6
BSR Election/Announcements .......................................................................................................... 9-6
Candidate-Rendezvous Point Announcement .................................................................................. 9-9
Propagation of Group-to-RP Mappings .......................................................................................... 9-10
Load Balancing Between Candidate-RPs ........................................................................................ 9-12
The Final Step ................................................................................................................................. 9-14
Common Issues with BSR .................................................................................................................... 9-16
RPF Failures ..................................................................................................................................... 9-16
Unicast Routing and Forwarding Problems .................................................................................... 9-16
Multicast Routing and Forwarding Problems ................................................................................. 9-17
BSR Sample Troubleshooting Scenarios ................................................................................................. 9-18
BSR show Command Tools ..................................................................................................................... 9-31
BSR debug Command Tools .................................................................................................................... 9-35
Chapter Challenge: BSR Sample Trouble Tickets .................................................................................... 9-37
Trouble Ticket #1 ............................................................................................................................ 9-37
Trouble Ticket #2 ............................................................................................................................ 9-37
Trouble Ticket #3 ............................................................................................................................ 9-37
Chapter Challenge: BSR Sample Trouble Tickets Solutions .................................................................... 9-38
Trouble Ticket #1 Solution .............................................................................................................. 9-38
Trouble Ticket #2 Solution .............................................................................................................. 9-42


Trouble Ticket #3 Solution .............................................................................................................. 9-45
Chapter 10: Multicast Source Discovery Protocol (MSDP) ..................................................................... 10-1
MSDP Technology Review .................................................................................................................. 10-2
The Operation and Troubleshooting of MSDP .................................................................................... 10-4
Source Active Messages ................................................................................................................. 10-4
MSDP RPF Failure ............................................................................................................................ 10-5
SA Message Arrives on the RP in the Other Multicast Domain ...................................................... 10-5
SA Cache ......................................................................................................................................... 10-6
Common Issues with MSDP ................................................................................................................ 10-7
Incorrect Peering Configuration ..................................................................................................... 10-7
No PIM Enabled Path Between MSDP peers .................................................................................. 10-7
MSDP Passwords and Filters ........................................................................................................... 10-7
MSDP Sample Troubleshooting Scenarios .............................................................................................. 10-8
Incorrect Peering Configuration ..................................................................................................... 10-8
No PIM Enabled Path Between MSDP Peers ................................................................................ 10-12
MSDP Passwords and Filters ......................................................................................................... 10-16
MSDP Show Command Tools ................................................................................................................ 10-21
MSDP Debug Command Tools .............................................................................................................. 10-23
Chapter Challenge: MSDP Sample Trouble Tickets ............................................................................... 10-24
Trouble Ticket #1 .......................................................................................................................... 10-24
Trouble Ticket #2 .......................................................................................................................... 10-24
Chapter Challenge: MSDP Sample Trouble Tickets Solutions ............................................................... 10-25
Trouble Ticket #1 Solution ............................................................................................................ 10-25
Trouble Ticket #2 Solution ............................................................................................................ 10-27
Chapter 11: Anycast-RP .......................................................................................................................... 11-1
Anycast-RP Technology Review .......................................................................................................... 11-2
The Operation and Troubleshooting of Anycast-RP ........................................................................... 11-3
Common Issues with Anycast-RP ........................................................................................................ 11-7


MSDP Peering Issues ....................................................................................................................... 11-7
Unicast Routing Problems ............................................................................................................... 11-8
Anycast-RP Sample Troubleshooting Scenarios ...................................................................................... 11-9
MSDP Peering Issues ....................................................................................................................... 11-9
Unicast Routing Problems ............................................................................................................. 11-11
Anycast-RP Show Command Tools ....................................................................................................... 11-15
Anycast-RP debug Command Tools ...................................................................................................... 11-17
Chapter Challenge: Anycast-RP Sample Trouble Tickets ...................................................................... 11-18
Trouble Ticket #1 .......................................................................................................................... 11-18
Chapter Challenge: Anycast-RP Sample Trouble Tickets Solutions ...................................................... 11-19
Trouble Ticket #1 Solution ............................................................................................................ 11-19
Chapter 12: Multiprotocol-BGP (MP-BGP) and Interdomain Multicast .................................................. 12-1
Multiprotocol-BGP Technology Review .............................................................................................. 12-3
The Operation and Troubleshooting of MP-BGP ................................................................................ 12-4
MP-BGP ........................................................................................................................................... 12-5
Common Issues with MP-BGP .......................................................................................................... 12-12
Peer Rejects All MSDP SA Messages ............................................................................................. 12-12
Failure to Advertise the MSDP Peer Network ............................................................................... 12-12
Using Incorrect Addresses to form MSDP Peers ........................................................................... 12-12
MP-BGP Sample Troubleshooting Scenarios ........................................................................................ 12-13
Peer Rejects all MSDP SA Messages ............................................................................................. 12-13
Failure to Advertise the MSDP Peer Network ............................................................................... 12-15
Incorrect Address Used to form MSDP Peers ............................................................................... 12-19
MP-BGP Show Command Tools ............................................................................................................ 12-22
MP-BGP Debug Command Tools .......................................................................................................... 12-25
Chapter Challenge: MP-BGP Sample Trouble Tickets ........................................................................... 12-27
Trouble Ticket #1 .......................................................................................................................... 12-27
Trouble Ticket #2 .......................................................................................................................... 12-27


Chapter Challenge: MP-BGP Sample Trouble Tickets Solutions ........................................................... 12-28
Trouble Ticket #1 Solution ............................................................................................................ 12-28
Trouble Ticket #2 Solution ............................................................................................................ 12-30
Chapter 13: Multicast Security and Advanced Features ......................................................................... 13-1
The Operation and Troubleshooting of Multicast Security and Advanced Features .............................. 13-2
Multicast Filtering on a Cisco Catalyst Switch ................................................................................. 13-2
Multicast Route Limiting ................................................................................................................. 13-8
Multicasting Through a GRE Tunnel ............................................................................................... 13-9
Chapter Challenge: Multicast Security and Advanced Features Sample Trouble Tickets ..................... 13-15
Trouble Ticket #1 .......................................................................................................................... 13-15
Trouble Ticket #2 .......................................................................................................................... 13-15
Chapter Challenge: Multicast Security and Advanced Features Sample Trouble Tickets Solutions ..... 13-16
Trouble Ticket #1 Solution ............................................................................................................ 13-16
Trouble Ticket #2 Solution ............................................................................................................ 13-19
Chapter 14: IPv6 Multicast ..................................................................................................................... 14-1
IPv6 Multicast Technology Review ..................................................................................................... 14-2
IPv6 Multicast Addressing ............................................................................................................... 14-2
Protocol Independent Multicast Version 2 (PIMv2) for IPv6 .......................................................... 14-3
Multicast Listener Discovery (MLD) Protocol ................................................................................. 14-3
IPv6 PIM Bootstrap Router Protocol (BSR) ..................................................................................... 14-4
IPv6 Embedded RP .......................................................................................................................... 14-4
The Operation and Troubleshooting of IPv6 Multicast ....................................................................... 14-5
IPv6 Multicast Addressing ............................................................................................................... 14-5
Protocol Independent Multicast Version 2 (PIMv2) for IPv6 .......................................................... 14-6
Multicast Listener Discovery (MLD) Protocol ................................................................................. 14-8
IPv6 PIM Bootstrap Router Protocol (BSR) ..................................................................................... 14-9
IPv6 Embedded RP .......................................................................................................................... 14-9


Chapter 1: Introduction to IPv4/6 Multicast Operation and Troubleshooting

Chapter 1:
Introduction to
IPv4/6 Multicast
Operation and
Troubleshooting



Chapter 1: Introduction to IPv4/6 Multicast Operation and Troubleshooting introduces the team of
authors, consultants, and editors that completed this book and describes the book's purpose. The
chapter also provides suggestions for how to use this written work, along with a basic overview of
multicast operation and the related troubleshooting concerns. Readers who are very familiar with
basic multicast principles may safely skip this section.


About the Authors


Anthony Sequeira, CCIE No. 15626 (R&S), formally began his career in the information technology
industry in 1994 with IBM in Tampa, Florida. He quickly formed his own computer consultancy,
Computer Solutions, and then discovered his true passion: teaching and writing about Microsoft and
Cisco technologies. Anthony is currently pursuing his second CCIE, in the area of Security, and is a
full-time instructor for the next generation of KnowledgeNet: www.StormWindLive.com. He recently
achieved his VMware Certified Professional certification. When Anthony is not writing or lecturing about
the latest innovations in networking technologies, you may find him flying a Cessna in the Florida skies.
Terry Vinson, CCNP, is a highly experienced training consultant, specializing in
documentation development, validation, verification, and communications. For the last 10 years, Terry
has worked in the private sector as a Senior Technology Consultant and Trainer for several consulting
firms in Washington, DC, and Northern and Central Virginia. In this capacity, he has provided services to
Major Metropolitan Health Systems, the Mexican Embassy, and the Executive Office of the President of
the United States of America (EOP).

About the Technical Editors


Carl Yost Jr., CCIE No. 30486 (R&S), currently works as a Network Engineer/Director of I.T. for a health
care company in Buffalo, NY. He has worked in numerous roles in I.T. since 1998. Carl is currently
preparing for the CCIE in Security while living with his wife and children in Western New York. When not
surrounded by Cisco devices, Carl truly enjoys working with Redhat Linux.
Jason Gooley, CCNP, is a highly motivated network engineer with over 17 years of experience in
the communications industry. Based in Chicago, Jason currently manages the network for the nation's
most famous next-day carpet company. Jason is currently in the process of pursuing his CCIE certification
for Routing and Switching while also expanding his knowledge in Unified Communications and Security.

About the Technical Consultant


Scott Morris, CCIEx4, CCDE, JNCIEx2, CISSP, has been one of the most well-known figures in the IT
industry for over 25 years. He has fulfilled a number of important roles within both the public and
private sectors. As a Certified Cisco Systems Instructor (CCSI) and Juniper Networks Certified Instructor
(JNCI), Scott has provided world-renowned CCIE training since 2002. He has delivered courses to a wide
variety of audiences including internal training at Cisco Systems.


About the Editor


Tiffany Pagan began her career in editing in 1997. Throughout her career, she has worked with several
private individuals and companies such as Moffitt Cancer Center and Tampa General Hospital. Tiffany is
currently working on writing her own series of short stories as well as working as an editor and personal
assistant. Tiffany resides in Tampa, Florida with her husband and three beautiful children.

Who Should Read this Book?


This text has two primary audiences. The first audience is those CCIE candidates who are searching
for the most comprehensive and error-free materials available for the operation and troubleshooting of
key technologies presented in the various tracks of the CCIE written and practical lab exams. These
students should possess a home rack of equipment for CCIE-level command-line practice, they should
possess an equipment emulator, or they should rent equipment from a company like
www.proctorlabs.com. The authors and technical editors exhaustively tested all of the demonstrations
found throughout the text, and the important end-of-chapter Trouble Ticket challenges, against all
practice rack options described earlier. Where issues arise with popular equipment emulators, the text
makes note. This book is the most remarkably thorough and technically accurate book written on the
subject of multicast to date.
The book's second audience is those readers who must support multicast technologies in their actual
network environments. This book serves as an amazing guide and reference for real-world problem
solving within production networks that deploy these specific technologies. In fact, while many courses
and texts purport to have certification success as a by-product of a thorough investigation of all
protocols, this book actually succeeds in this approach.

How to Use this Book


This book breaks specific multicast technologies down on a chapter-by-chapter basis for a complete and
thorough review of this broad set of topics. Each chapter begins with a review of the selected multicast
technology. Following this, the text provides an intense examination of the operation of the protocols,
including key aspects of troubleshooting for the specific technology. After this, the chapter presents
some of the most common issues that can result with a particular technology, and most importantly,
details the simple troubleshooting tools and steps that succeed for remediation.
Each chapter then presents sample troubleshooting scenarios that provide a full walkthrough of a
well-designed approach for troubleshooting each major issue. The text provides reference guides for the
most popular and powerful show and debug commands for a specific technology.
Each chapter concludes with sample Trouble Tickets on the specific technology. Readers may download
initial configurations, or install them in a simple Graphical User Interface (GUI) on
www.proctorlabs.com. These sample Trouble Tickets allow students to build confidence and expertise
by actually troubleshooting issues in the multicast domain presented in the chapter.
Students are encouraged to follow along on a rack of equipment for every section of every chapter. This
really enhances and strengthens the learning process.

An Introduction to IPv4/6 Multicast


IP multicast is a bandwidth-conserving technology that delivers a single stream of information
simultaneously to a large or small number of endpoints within the network. Video conferencing,
corporate communications, distance learning, software distribution, stock quotes, and news delivery
are common applications that take advantage of a multicast traffic delivery approach.
IP multicast routing enables a source host to send packets to a group of receivers anywhere within the
IP network by using a special form of IP address called the IP multicast group address. This special
multicast group address forms the destination IP address of the packet. Multicast-enabled routers and
switches forward these incoming multicast packets out all interfaces that lead to members of the
multicast group. Any host, regardless of whether it is a member of a group, can send to a group.
However, only the members of a group receive the message.
A quick question arises when examining this multicast approach: how can hosts that are
interested in receiving the multicast information actually join this multicast group? Internet Group
Management Protocol (IGMP) makes this possible in IPv4, while the Multicast Listener Discovery (MLD)
protocol makes this possible in IPv6.
Network administrators who assign multicast group addresses must make sure the addresses conform
to the multicast address range assignments reserved by the Internet Assigned Numbers Authority
(IANA). The IANA assigns the IPv4 Class D address space for multicast. The first four high-order bits of a
Class D address are 1110. This causes the multicast group addresses to fall in the range 224.0.0.0 to
239.255.255.255. This book fully describes multicast addressing for IPv6 networks in Chapter 14: IPv6
Multicast.
To provide predictable behavior for various address ranges and for address reuse within smaller
domains, the overall multicast address range shown above is subdivided.

- Reserved link-local addresses - 224.0.0.0 to 224.0.0.255
- Globally scoped addresses - 224.0.1.0 to 238.255.255.255, for use on the Internet
- Source-specific multicast addresses - 232.0.0.0 to 232.255.255.255, for the source-specific delivery model
- GLOP addresses - 233.0.0.0 to 233.255.255.255, reserved for use on the Internet by companies that possess a publicly registered AS
- Administratively (limited) scoped addresses - 239.0.0.0 to 239.255.255.255, reserved for internal corporate use
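
As a quick worked example of the GLOP mapping (the AS number here is purely illustrative), a registered 16-bit AS number is written into the middle two octets of the 233.0.0.0/8 block:

AS 5662 = 0x161E -> high-order byte 22 (0x16), low-order byte 30 (0x1E)
GLOP /24 for AS 5662: 233.22.30.0/24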

The Protocol Independent Multicast (PIM) protocol is responsible for routing multicast traffic through
the network infrastructure. As the name of this protocol conveys, it is not dependent on a specific
unicast routing protocol; it is IP routing protocol independent and can leverage the unicast routing
protocol used to populate the unicast routing table, including simple static routes. PIM relies upon this
unicast routing information to perform the multicast forwarding function.
PIM uses the unicast routing table to perform the reverse path forwarding (RPF) check function instead
of building up a completely independent multicast routing table. The RPF process is key to the
operation and subsequent troubleshooting of multicast. As such, this book covers the RPF process in
depth throughout its chapters. Unlike other routing protocols, PIM does not send and receive routing
updates between routers.
PIM can operate in dense mode or sparse mode. The mode determines how the router populates its
multicast routing table and how the router forwards multicast packets it receives from its directly
connected LANs.
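
As a minimal configuration sketch of these modes (the interface name and the mode chosen are placeholders, not a recommendation for any particular design):

ip multicast-routing
!
interface FastEthernet0/0
 ! one of: ip pim dense-mode, ip pim sparse-mode, or ip pim sparse-dense-mode
 ip pim sparse-dense-mode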

An Introduction to IPv4/6 Multicast Troubleshooting


This book takes a common sense approach to multicast troubleshooting. First, the text carefully
dissects the operation of each technology for an intense understanding of the protocol's operation.
Without this intense level of knowledge, troubleshooting is difficult at best and impossible at worst. The
troubleshooting approach separates the control-plane from the data-plane and relies heavily on the use
of the show ip mroute command for control-plane verification.

As you will discover in this text, a common reason for most issues with multicast routing is incongruence
between the Protocol Independent Multicast (PIM) topology and the logical/physical topology. In a
perfect world, one deploys multicast in a single Interior Gateway Protocol (IGP) domain with PIM
enabled on all links running the IGP. These links should be point-to-point or broadcast multi-access in
structure. Should you have multicast running across a domain with multiple IGPs, or you do not have
PIM enabled on all links, or you have Non Broadcast Multi-Access (NBMA) links, you are at a much
greater risk for issues. As one might guess, exam authors (proctors) are very familiar with these issues
and intentionally construct CCIE practical lab exam scenarios as such.

The next chapter of this text, Chapter 2: Internet Group Management Protocol (IGMP), begins with a
thorough analysis of the most logical starting point in the journey: the protocol responsible for allowing
hosts to indicate their desire to join multicast groups.


Chapter 2: Internet Group Management Protocol (IGMP)



In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and functionality of
the Internet Group Management Protocol (IGMP) are examined in great depth. Once the operational
characteristics of this important protocol are detailed completely, the focus becomes that of
troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and
the implementation of repairs for the Internet Group Management Protocol (IGMP). The chapter begins
with a thorough review of IGMP, and then quickly launches into an exhaustive analysis of the art of
troubleshooting this multicast support protocol. This important chapter concludes with sample
troubleshooting scenarios, reference materials for the most important show and debug commands, and
exciting challenges that allow readers to practice implementing the troubleshooting skills they have
obtained.


IGMP Technology Review


Internet Group Management Protocol (IGMP) dynamically registers individual hosts in a multicast group
on a particular LAN. Enabling Protocol Independent Multicast (PIM) on an interface also enables IGMP.
IGMP provides a means to automatically control and limit the flow of multicast traffic throughout your
network with the use of special multicast queriers and hosts. A querier is a router that sends query
messages to discover which network devices are members of a given multicast group. A host, on the
other hand, is a receiver that sends report messages (in response to query messages) to inform the
querier of a host membership. Hosts use IGMP messages to join and leave multicast groups.
Hosts identify group memberships by sending IGMP messages to their local multicast router. These
multicast routers listen to IGMP messages and periodically send out queries to discover which groups
are active or inactive on a particular subnet. Figure 2-1 illustrates a sample IGMP topology.


Figure 2-1: A Sample IGMP Topology

This text covers the following three versions of IGMP:

- IGMP version 1 - provides a basic query-response mechanism that allows the multicast router to determine which multicast groups are active; also enables hosts to join and leave a multicast group


- IGMP version 2 - introduces the IGMP leave process, group-specific queries, and an explicit maximum response time field; also adds the capability for routers to elect the IGMP querier without dependence on the multicast protocol to perform this task
- IGMP version 3 - provides source filtering; supports the link-local address 224.0.0.22, which is the destination IP address for IGMP version 3 membership reports

Note: By default, enabling PIM on an interface enables IGMP version 2 on that interface. IGMP version 2
was designed to be as backward compatible with IGMP version 1 as possible.
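
As a minimal sketch of this default behavior (the interface name is a placeholder), enabling PIM brings up IGMP version 2 automatically, and the version can then be overridden per interface:

interface FastEthernet0/0
 ip pim sparse-dense-mode
 ! IGMP version 2 is now running by default; override only if required:
 ip igmp version 3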
IGMP Version 1
IGMP version 1 routers send IGMP queries to the "all-hosts" multicast address of 224.0.0.1 to solicit
multicast groups with active multicast receivers. The multicast receivers also send IGMP reports to the
router to notify it that they are interested in receiving a particular multicast stream. Hosts can send the
report independently or in response to the IGMP queries sent by the router. If more than one multicast
receiver exists for the same multicast group, only one of these hosts sends an IGMP report message; the
other hosts suppress their report messages.
In IGMP version 1, there is no election of an IGMP querier. If more than one router on the segment
exists, all the routers send periodic IGMP queries. IGMP version 1 has no special mechanism by which
the hosts can leave the group. If the hosts are no longer interested in receiving multicast packets for a
particular group, they simply do not reply to the IGMP query packets sent from the router. The router
continues sending query packets. If the router does not hear a response in three IGMP queries, the
group times out and the router stops sending multicast packets on the segment for the group.
If there are multiple routers on a LAN, a designated router (DR) must be elected to avoid duplicating
multicast traffic for connected hosts. PIM routers follow an election process to select a DR. The PIM
router with the highest IP address becomes the DR.
The DR is responsible for the following tasks:

- Sending PIM register, PIM Join, and Prune messages toward the rendezvous point (RP) to inform it about host group memberships
- Sending IGMP host-query messages
- Sending host-query messages in order to keep the IGMP overhead on hosts and networks very low


IGMP Version 2
IGMP version 2 improves the query messaging capabilities of IGMP version 1. The query and
membership report messages in IGMP version 2 are identical to the IGMP version 1 messages with two
exceptions:

- IGMP version 2 query messages are broken into two categories: general queries (identical to IGMP version 1 queries) and group-specific queries
- IGMP version 1 membership reports and IGMP version 2 membership reports have different IGMP type codes

IGMP version 2 also enhances IGMP by providing support for the following capabilities:

- Querier election process - IGMP version 2 routers can elect the IGMP querier without having to rely on the multicast routing protocol to perform the process
- Maximum Response Time field - a new field in query messages permits the IGMP querier to specify the maximum query-response time; this feature permits the tuning of the query-response process to control response burstiness and to fine-tune leave latencies
- Group-Specific Query messages - permit the IGMP querier to perform the query operation on a specific group instead of all groups
- Leave-Group messages - provide hosts with a method of notifying routers on the network that they wish to leave the group

Unlike IGMP version 1, in which the DR and the IGMP querier are typically the same router, in IGMP
version 2 the two functions are decoupled. The DR and the IGMP querier are selected based on different
criteria and may be different routers on the same subnet. The DR is the router with the highest IP
address on the subnet, whereas the IGMP querier is the router with the lowest IP address.
Query messages are used to elect the IGMP querier as follows:
Step 1 - when IGMP version 2 routers start, they each multicast a general query message to the all-
systems group address of 224.0.0.1, with their interface address in the source IP address field of the
message.
Step 2 - when an IGMP version 2 router receives a general query message, the router compares the
source IP address in the message with its own interface address. The router with the lowest IP address
on the subnet is elected the IGMP querier.
Step 3 - all routers (excluding the querier) start the query timer, which is reset whenever a general
query message is received from the IGMP querier. If the query timer expires, it is assumed that the
IGMP querier has gone down, and the election process is performed again to elect a new IGMP querier.


By default, the timer is two times the query interval.
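
Both of these timers are tunable at the interface level; a minimal sketch using the IOS default values (the interface name is a placeholder):

interface GigabitEthernet0/0
 ip igmp query-interval 60     ! send general queries every 60 seconds (the IOS default)
 ip igmp querier-timeout 120   ! declare the querier dead after two query intervals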


IGMP Version 3
IGMP version 3 adds support in the IOS for source filtering, which enables a multicast receiver host to
signal to a router which groups it wants to receive multicast traffic from, and from which sources this
traffic is expected. This membership information enables Cisco IOS software to forward traffic only from
those sources from which receivers requested the traffic.
IGMP version 3 supports applications that explicitly signal sources from which they want to receive
traffic. With IGMP version 3, receivers signal membership to a multicast group in the following two
modes:

- INCLUDE mode - the receiver announces membership to a group and provides a list of IP addresses (the INCLUDE list) from which it wants to receive traffic
- EXCLUDE mode - the receiver announces membership to a group and provides a list of IP addresses (the EXCLUDE list) from which it does not want to receive traffic; to receive traffic from all sources, as in the Internet Standard Multicast (ISM) service model, a host expresses EXCLUDE mode membership with an empty EXCLUDE list

IGMP version 3 is the industry-designated standard protocol for hosts to signal channel subscriptions in
a Source Specific Multicast (SSM) network. For SSM to rely on IGMP version 3, IGMP version 3 must be
available in the network stacks of the operating systems running on the last hop routers and hosts, and
it must be used by the applications running on those hosts.
In IGMP version 3, hosts send their membership reports to 224.0.0.22; all IGMP version 3 routers,
therefore, must listen to this address. Hosts, however, do not listen or respond to 224.0.0.22; they only
send their reports to that address. In addition, in IGMP version 3, there is no membership report
suppression because IGMP version 3 hosts do not listen to the reports sent by other hosts. Therefore,
when a general query is sent out, all hosts on the wire respond.
When a host wants to join a multicast group, the host sends one or more unsolicited membership
reports for the multicast group it wants to join. The IGMP join process is the same for IGMP version 1
and IGMP version 2 hosts.
In IGMP version 3, the join process for hosts proceeds as follows:

- When a host wants to join a group, it sends an IGMP version 3 membership report to 224.0.0.22 with an empty EXCLUDE list
- When a host wants to join a specific channel, it sends an IGMP version 3 membership report to 224.0.0.22 with the address of the specific source included in the INCLUDE list


- When a host wants to join a group excluding particular sources, it sends an IGMP version 3 membership report to 224.0.0.22 excluding those sources in the EXCLUDE list
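
On IOS releases that support the source keyword of ip igmp join-group, this host behavior can be emulated at the interface level; a minimal sketch (the group and source addresses are hypothetical):

interface GigabitEthernet0/0
 ip igmp version 3
 ! subscribe to channel (10.1.1.1, 232.7.7.7) - sends an IGMPv3 report carrying
 ! the source in its INCLUDE list
 ip igmp join-group 232.7.7.7 source 10.1.1.1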

IGMP Leave Process


The method that hosts use to leave a group varies depending on the version of IGMP in operation.
IGMP Version 1 Leave Process
There is no leave-group message in IGMP version 1 to notify the routers on the subnet that a host no
longer wants to receive the multicast traffic from a specific group. The host simply stops processing
traffic for the multicast group and ceases responding to IGMP queries with IGMP membership reports
for the group. As a result, the only way IGMP version 1 routers know that there are no longer any active
receivers for a particular multicast group on a subnet is when the routers stop receiving membership
reports. To facilitate this process, IGMP version 1 routers associate a countdown timer with an IGMP
group on a subnet. When a membership report is received for the group on the subnet, the timer is
reset. For IGMP version 1 routers, this timeout interval is typically three times the query interval (3
minutes). This timeout interval means that the router may continue to forward multicast traffic onto the
subnet for up to 3 minutes after all hosts have left the multicast group.
IGMP Version 2 Leave Process
IGMP version 2 incorporates a leave-group message that provides the means for a host to indicate that
it wishes to stop receiving multicast traffic for a specific group. When an IGMP version 2 host leaves a
multicast group, if it was the last host to respond to a query with a membership report for that group, it
sends a leave-group message to the all-routers multicast group (224.0.0.2).
IGMP Version 3 Leave Process
IGMP version 3 enhances the leave process by introducing the capability for a host to stop receiving
traffic from a particular group, source, or channel in IGMP by including or excluding sources, groups, or
channels in IGMP version 3 membership reports.


The Operation and Troubleshooting of IGMP


Multicast technology has developed significantly since its introduction in the late 1990s. This is
abundantly evident when considering the large number of multicast-enabled applications developed in
the last few years. Some multicast applications used on a daily basis include webinars, video
conferencing, Internet radio, IPTV, and network gaming. These applications share one critical element:
they require real-time data flow between a group of receivers and a set of sources. In instances where
multiple receivers have a need for the same data, multicast technology is a natural fit, because it
enables the efficient transfer of data from a single source or a set of sources to a dynamically formed
group of receivers.
The concept of IP multicast was the perfect solution to this one-to-many model of data distribution.
However, IP multicast brought with it its own issues. This section focuses on the first of these issues:

- Maintenance of dynamic group membership information - a router must maintain group member information so that it can efficiently forward the necessary multicast data to interested receivers.

IP multicast environments propagate membership information via the Internet Group Management Protocol
(IGMP). IGMP propagates membership information from the host toward the routers attached to
discrete sources. This is odd behavior when compared to the typical unicast routing model. So odd, in
fact, that many phrases exist to describe this process, including "upside-down routing" or "bottoms-up
routing". It is important to understand this concept early on. Multicast routes packets away from the
source (toward the receivers), not toward a given destination. Understanding this one concept will make
deploying and troubleshooting IP multicast much simpler.
In summary, routers attached to segments with group members learn about those members via IGMP. This
section takes an exhaustive look at the operation and troubleshooting of the three versions of the IGMP
protocol using the topology in Figure 2-2.


Figure 2-2: IGMP Lab Topology

IGMP Version 1
A critical analysis of the inner workings of IGMP version 1 must begin with the protocol message types it
supports:

- IGMPv1 Membership Query Messages - generated by the IGMP querier - one multicast router per LAN must periodically transmit host membership query messages. These messages identify which groups have members on a directly connected network. Query messages use an address of 224.0.0.1. An adjacent router does not forward query messages to any other multicast-enabled router.

- IGMPv1 Membership Report Messages - generated by the hosts - when a host receives an IGMP query message, it responds with a membership report. The membership report identifies the groups that a host has joined.

These two message types are part of a two-phase mechanism in which an IGMP version 1 host sends a
report when it joins a multicast group. In order to configure a router to utilize the IGMP version 1
protocol, it is necessary to apply the ip igmp version 1 command at the interface level:
interface FastEthernet0/0
ip address 172.16.100.2 255.255.255.0
ip igmp version 1

An IGMP version 1 router (known as a querier) queries periodically using query messages to dynamically
identify active members of groups. Whenever a host receives a query message, it responds with
membership report messages for all its associated multicast groups. The host sends an individual
membership report for each multicast group it has joined to the querier. A host will wait a random


period between responses to queries (no more than 10 seconds) for each group it has joined. This delay
affords the host time to receive a valid report sent by another device on the segment. If a host does not
receive a report from another host on the same segment during the delay period, it will generate a
membership report itself. If a host does receive a membership report from another host for one of its
multicast group associations, it will suppress its own membership report for that group. This process
prevents a "storm" of membership report messages. Observe this behavior in the output provided by
the debug ip igmp command on R2:
R2#
IGMP(0): Received v1 Query on GigabitEthernet0/0 from 172.16.100.1
IGMP(0): Set report delay time to 0.2 seconds for 224.7.7.7 on GigabitEthernet0/0
IGMP(0): Send v1 Report for 224.7.7.7 on GigabitEthernet0/0

On R1 we can see the report for the group address 224.0.1.40 being canceled.
R1#
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.5 for 224.0.1.40
IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.100.5 for 0
sources
IGMP(0): Cancel report for 224.0.1.40 on FastEthernet0/0

When the querier receives a membership report for a given group, it will start an EXCLUDE group timer.
The querier will remove group membership information that is not refreshed, within the configured
EXCLUDE group timer, by a subsequent membership report sent in response to periodic general queries. First,
we see the EXCLUDE timer update in the output of the debug ip igmp command on R4 when the
membership report arrives:
R4#
IGMP(0): Received v1 Report on FastEthernet0/0 from 172.16.100.2 for 224.7.7.7
IGMP(0): Received Group record for group 224.7.7.7, mode 2 from 172.16.100.2 for 0
sources
IGMP(0): Updating EXCLUDE group timer for 224.7.7.7

This output clearly illustrates the process we have just described. The querier received the IGMP version
1 membership report from 172.16.100.2 for the group 224.7.7.7. The router in question has no
knowledge of an active source, but it still resets the EXCLUDE group timer.
At first blush, the IGMP version 1 protocol seems to accomplish all the goals we have discussed to date.
However, this version of IGMP does have one huge Achilles' heel: hosts have no special mechanism to
allow them to leave a group. As stated earlier in the IGMP Technology Review, in IGMP version 1 a host
that no longer needs to receive multicast packets for a particular group simply stops replying to the
IGMP query packets sent from the router.


The router will continue sending queries, and it only stops forwarding multicast packets on the network
for the group after three consecutive query messages go unanswered. If a host on the segment wants to
receive multicast packets after this timeout period, it simply sends a new IGMP join to the router, and
the router will begin forwarding the packets once more. Some call this long period needed to identify the
loss of an active multicast member "leave latency".
IGMP Version 2
IGMP version 2 introduced a number of critical changes to IGMP. These modifications made IGMP more
efficient by adding a new operational mechanism, and two additional message types. The new
operational mechanism involves the election of the IGMP querier. In IGMP version 1, more than one
device on a segment could, and often did, send IGMP version 1 queries, as illustrated by the following
logged debug ip igmp output on R2:
R2#show logging | inc Received v1 Query
IGMP(0): Received v1 Query on GigabitEthernet0/0 from 172.16.100.5
IGMP(0): Received v1 Query on GigabitEthernet0/0 from 172.16.100.4
IGMP(0): Received v1 Query on GigabitEthernet0/0 from 172.16.100.1

This output demonstrates that R2 has received IGMP version 1 query messages on GigabitEthernet0/0
from each of its neighbors on the VLAN 1245 segment. Recognizing that having more than one device on
a segment constantly sending query messages was less than ideal, IGMP version 2 changed this behavior.
Once we execute the ip igmp version 2 command at the interface level on R1, R2, R4, and R5, the devices
elect a single querier for the VLAN 1245 segment. The router with the lowest IP address on the segment
becomes the querier. In the case of this topology, the process chooses R1.
Verify this with show ip igmp interface on any device connected to VLAN 1245:
R2#show ip igmp interface
GigabitEthernet0/0 is up, line protocol is up
Internet address is 172.16.100.2/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
IGMP query interval is 60 seconds
IGMP querier timeout is 120 seconds
IGMP max query response time is 10 seconds
Last member query count is 2
Last member query response interval is 1000 ms
Inbound IGMP access group is not set
IGMP activity: 4 joins, 1 leaves
Multicast routing is enabled on interface
Multicast TTL threshold is 0
Multicast designated router (DR) is 172.16.100.5
IGMP querying router is 172.16.100.1
Multicast groups joined by this system (number of users):

224.0.1.40(1)  224.7.7.7(1)

This command provides a lot of output. The third line from the bottom identifies R1 as the IGMP
querying router by its IP address, 172.16.100.1. Note that the output indicates that the current IGMP
host version is 2, and that the current IGMP router version is 2. IGMP version 2 is backward compatible
with IGMP version 1. This is possible because IGMP version 2 still supports IGMP version 1 query and
report messages. Where IGMP version 1 had two types of messages, IGMP version 2 has three: query
messages, report messages, and leave group messages.
IGMP version 2 - Query Messages
An IGMP version 2 querier sends two types of query messages:

- IGMPv2 General Query Messages - created by the querier - allow dynamic discovery of all multicast group members on a segment.
- IGMPv2 Group-Specific Query Messages - created by the querier - used to identify the existence of any members of a specific group.

Using the debug ip igmp command on R1 illustrates each message type:


R1#
IGMP(0): Send v2 general Query on FastEthernet0/1
IGMP(0): Received v2 Report on FastEthernet0/1 from 172.16.17.7 for 224.0.1.40
IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.17.7 for 0
sources

We see that R1 has sent an IGMP version 2 general query. Removing the group membership to 224.7.7.7
from the GigabitEthernet0/0 interface of R2 will force R1 to send a group-specific query:
R2(config)#interface GigabitEthernet0/0
R2(config-if)#no ip igmp join-group 224.7.7.7

R1 should now send a group-specific query:


R1#
IGMP(0): Send v2 Query on FastEthernet0/0 for group 224.7.7.7
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.5 for 224.0.1.40
IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.100.5 for 0
sources

Note that R1 sent an IGMP version 2 query specifically for the group 224.7.7.7.
IGMP version 2 - Report Messages
An IGMP version 2 host will send IGMP version 2 report messages under the following circumstances:


- On joining a multicast group - the host will immediately notify the elected querier that it has actively joined a multicast group.
- Upon receiving a General Query - the host will send a report after a random delay, in the same fashion employed in IGMP version 1 in response to a general query.
- Upon receiving a Group-Specific Query - the host will send a report in response to a group-specific query if it is a member of the group queried.

IGMP Version 2 - Leave Messages


An IGMP version 2 host will send a leave group message when it is no longer interested in a multicast
group. Once this message reaches the elected querier a group-specific query is generated. As discussed
in the section on IGMP version 2 query messages, the group-specific query determines if other members
of a group exist.
If a report message for a group-specific query arrives, the IGMP version 2 router maintains the
membership information for that specific group. In the case where no report messages arrive for a given
group-specific query, the querier will discard the membership information. Typically, an IGMP version 2
router will transmit two queries with an interval of 1 second before discarding any information.
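
Those two values are tunable at the interface level; a minimal sketch using the defaults seen in this chapter's show ip igmp interface output (the interface name is a placeholder, and command availability varies by IOS release):

interface GigabitEthernet0/0
 ip igmp last-member-query-count 2          ! group-specific queries sent after a leave
 ip igmp last-member-query-interval 1000    ! interval between them, in milliseconds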
Leave messages enable IGMP version 2 to find multicast groups without active members quickly. This
alleviates the "leave latency" issues common in the older version of the protocol. In an effort to
illustrate this process, R2 will remove its membership to the group 224.7.7.7 (debug ip igmp is running):
R2(config)#interface GigabitEthernet0/0
R2(config-if)#no ip igmp join-group 224.7.7.7
R2(config-if)#end
R2#
IGMP(0): IGMP delete group 224.7.7.7 on GigabitEthernet0/0
IGMP(0): Received Group record for group 224.7.7.7, mode 3 from 172.16.100.2 for 0
sources
IGMP(0): Send Leave for 224.7.7.7 on GigabitEthernet0/0
IGMP(0): Received v2 Query on GigabitEthernet0/0 from 172.16.100.1
IGMP(0): Lower expiration timer to 2000 msec for 224.7.7.7 on GigabitEthernet0/0


R2 deletes the group, sends the leave message, and adjusts the expiration timer for its 224.7.7.7 entry
to 2 seconds. This results in an efficient leave process, and the IGMP version 2 router will stop forwarding
the multicast packets for this group.
IGMP Version 3
This enhancement to IGMP introduced the concept of Group-Source Report messages. As stated earlier
in the IGMP Technology Review, these messages allow a host to elect to receive traffic from specific
sources of a multicast group, a concept referred to as Source Specific Multicast (SSM). Chapter 13:
Multicast Security and Advanced Features will cover this topic in depth.
IGMP and Multicast Forwarding
IGMP is part of the last leg of multicast packet delivery. This is because IGMP is only concerned with
forwarding multicast traffic from the local router to a group of members that share a common network.
IGMP is a host-to-router communications protocol. For router-to-router delivery of multicast services,
multicast routing protocols are required. These protocols are outside the scope of this chapter, but
IGMP is an integral component of them.
The role of IGMP is to notify the querier/IGMP router, of the existence of devices that have joined
multicast groups or group ranges. We have thoroughly examined the operational mechanisms for each
version of IGMP in the previous sections of this chapter. Now we will look at the general operational
process of IGMP as it relates to the overall multicast process:
Step One - The client/host sends an IGMP join message to a designated multicast router. The
destination MAC address maps to the Class D address of the group being joined, not to the MAC address
of the router. The body of the IGMP datagram also includes the Class D group address.

Step Two - The IGMP router logs the join message and uses a multicast routing protocol (covered later
in this document) to add this segment to the multicast distribution tree.

Step Three - IP multicast traffic is then transmitted from the server via the designated router. The
designated router manages the distribution of multicast packets to the host's subnet. The destination
MAC address that is used corresponds to the Class D address of the multicast group.
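
As a worked example of that Class D-to-MAC mapping (using a group address from this chapter's topology), the low-order 23 bits of the group address are appended to the fixed multicast OUI 01:00:5e:

224.2.2.2 in binary:   11100000.00000010.00000010.00000010
low-order 23 bits:               0000010.00000010.00000010
resulting MAC address: 01:00:5e:02:02:02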

Step Four - The switch receives the multicast packet and examines its forwarding table. If no entry exists
for the MAC address, the packet will be flooded to all ports within the broadcast domain. If an entry
does exist in the switch table, only the designated ports will forward packets.

Step Five - With IGMP version 2, the client can end a group membership by sending an IGMP leave
message to the router. With IGMP version 1, the client remains a member of the group until it fails to
send a join message in response to a query from the router. Multicast routers also periodically send an
IGMP query to the "all multicast hosts" group (224.0.0.1) or if using IGMP version 2, to a specific
multicast group on the subnet to determine which groups are still active within the subnet. Each host
delays its response to a query for a small random period and will only respond if no other hosts in the
group have already responded. This mechanism prevents multiple hosts from congesting the network
with simultaneous reports.


Common Issues with IGMP


IGMP is a very simple protocol to troubleshoot. IGMP uses a simple operational mechanism to
accomplish its duties in forwarding multicast packets. Even though the overall process is simple, dividing
it into specific phases makes troubleshooting more straightforward. To simplify troubleshooting common
issues while deploying IGMP, we identify three categories of problems: Host Fails to Send IGMP Joins,
Switch Fails to Forward IGMP Packets, and IGMP Packet Filtering.
Host Fails to Send IGMP joins
In the preceding sections, this text discussed the different versions of IGMP and the message types
they utilize. The remainder of this chapter will deal specifically with IGMP version 2 because it is the
default IGMP version employed by Cisco IOS. It is important to note that successful troubleshooting of
IGMP depends on role-based operations within the protocol. A host failing to send an IGMP membership
report is a common issue encountered while deploying IGMP. In effect, this means that a host is not
notifying the IGMP router that it has joined a multicast group.
Failure of a host to send IGMP joins would most probably be the result of a poorly written multicast
application or a configuration mistake in a testing scenario. Configuration mistakes may include, but are
not limited to, failure to apply an ip igmp join-group or ip igmp static-group command under an
interface. Additionally, the wrong multicast group applied to an interface prevents a host router from
sending a membership report for the correct multicast group.
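
For reference, a minimal host-emulation sketch (group addresses borrowed from the chapter topology; the interface name is a placeholder):

interface GigabitEthernet0/0
 ! emulate a receiver: the router answers queries and processes group traffic
 ip igmp join-group 224.2.2.2
 ! alternatively, attract the group's traffic without acting as a receiver
 ip igmp static-group 224.4.4.4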
Switch Fails To Forward IGMP Packets
Thus far, we have addressed the fact that multicast involves both a method of delivery and a method of
discovery of senders and receivers of multicast data. This information is transmitted via IP multicast
addresses called groups. A multicast address that includes both a group and a source IP address is a
channel or stream. What has not yet been addressed is the fact that the successful operation of IGMP
also depends on the correct configuration of the upstream switch. By default, switches like the Catalyst
3560 utilize a concept called IGMP snooping.
IGMP snooping scopes the flooding of multicast traffic by dynamically configuring Layer 2 interfaces so
that multicast traffic is forwarded only to those interfaces associated with IP multicast devices. IGMP
snooping requires the LAN switch to monitor IGMP transmissions between the host and the router and
to keep track of multicast groups and member ports.
When a switch receives an IGMP report from a host for a particular multicast group, the switch adds the
host port number to its forwarding table entry; when it receives an IGMP Leave Group message from a


host, it removes the host port from the table entry. It also periodically deletes entries if it does not
receive IGMP membership reports from any multicast clients.
The most common issues that cause a switch not to forward IGMP messages are IGMP filtering or
throttling that has been erroneously configured, or previous configurations that have not been
completely removed.

- IGMP Filtering - filters multicast joins on a per-port basis by configuring IP multicast profiles and associating them with individual switch ports. IGMP filtering controls only group-specific queries and membership reports, including join and leave reports; it does not control general IGMP queries. IGMP filtering is applicable only to the dynamic learning of IP multicast group addresses, not static configuration.

- IGMP Throttling - sets the maximum number of IGMP groups that Layer 2 interfaces can join. Utilization of this technique can cause a switch to drop IGMP membership reports.

Apply IGMP filtering using the ip igmp filter command. Apply IGMP throttling using the ip igmp max-
groups command. Both are interface-level commands.
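
A minimal Catalyst sketch combining both features (the profile number, group range, and interface are hypothetical):

! profile 1 permits dynamic joins only for the listed range
ip igmp profile 1
 permit
 range 224.4.4.4 224.4.4.10
!
interface FastEthernet0/4
 switchport access vlan 1245
 ip igmp filter 1        ! apply the filtering profile to the port
 ip igmp max-groups 5    ! throttle: at most five dynamically learned groups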
IGMP Packet Filtering
IGMP Throttling and Filtering deal with IGMP packets at Layer 2 on a switch. There are versions of the
same concepts designed to function at Layer 3. On routers, the ip igmp access-group and ip igmp limit
commands accomplish these same goals:

- ip igmp access-group - used to filter groups from IGMP membership reports by applying a standard access list. This command restricts hosts on a subnet to joining only the multicast groups permitted by the configured standard IP access list.

- ip igmp limit - used to configure a limit on the number of mroute states that are created as a result of IGMP membership reports (IGMP joins). Membership reports exceeding the configured limit do not enter the IGMP cache.
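
A minimal router-side sketch of these two controls (the ACL number, group address, and limit are hypothetical):

access-list 1 permit 224.2.2.2
!
interface FastEthernet0/0
 ip igmp access-group 1   ! hosts on this subnet may join only 224.2.2.2
 ip igmp limit 10         ! allow at most ten IGMP-driven mroute states here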

The most common issues that cause a router not to accept IGMP join messages are IGMP filtering or
limiting that has been erroneously configured, or previous configurations that have not been
completely removed.
In the IGMP Sample Troubleshooting Scenarios section that follows, troubleshooting of these issues is
demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each
symptom, isolate the cause, and remediate the issue.


IGMP Sample Troubleshooting Scenarios


This section provides a detailed look at how to best approach troubleshooting some of the common
issues discussed in previous sections. It includes coverage of a methodology for identification, isolation,
and remediation of faults in the IGMP operational process. The intent here is to hone and develop
troubleshooting skills tailored to first identify if a problem is IGMP related, and then how to begin
isolating the cause of the fault in the most efficient manner possible. Figure 2-3 illustrates the topology
used to explore this topic. Note that R1 is the IGMP Router (Querier), and R2, R4, and R5 are emulating
hosts:

Figure 2-3: A Sample IGMP Topology

In the Common Issues with IGMP section, three primary types of problems were identified: Host Fails to
Send IGMP Joins, Switch Fails to Forward IGMP Packets, and IGMP Packet Filtering. This section explores
these three categories of failure by directing our attention to the commands necessary to identify that a
problem exists. There are three types of devices in this topology: hosts (R2, R4, and R5), a FastEthernet
switch (CAT1), and an IGMP router (R1).
Host Fails To Send IGMP Joins
This situation is where a host does not send membership reports for a specific multicast group address.
The best tool available in our troubleshooting arsenal to verify the existence of this type of problem is
the debug ip igmp command:
R2#debug ip igmp
IGMP debugging is on
R2#
IGMP(0): Received v2 Query on GigabitEthernet0/0 from 172.16.100.1
IGMP(0): Set report delay time to 3.1 seconds for 224.0.1.40 on GigabitEthernet0/0
R2#
IGMP(0): Received v2 Report on GigabitEthernet0/0 from 172.16.100.1 for 224.0.1.40

IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.100.1 for 0 sources
IGMP(0): Cancel report for 224.0.1.40 on GigabitEthernet0/0
IGMP(0): Updating EXCLUDE group timer for 224.0.1.40
IGMP(0): MRT Add/Update GigabitEthernet0/0 for (*,224.0.1.40) by 0

R2 is receiving IGMP membership reports, but it is not sending any for 224.2.2.2. Based on Figure 2-3,
R2's GigabitEthernet0/0 interface should have joined the group 224.2.2.2. The quickest and simplest
way to determine what multicast groups a device has joined on an interface-by-interface basis is to
execute the show ip igmp interface command:
R2#show ip igmp interface
GigabitEthernet0/0 is up, line protocol is up
Internet address is 172.16.100.2/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
IGMP query interval is 60 seconds
IGMP querier timeout is 120 seconds
IGMP max query response time is 10 seconds
Last member query count is 2
Last member query response interval is 1000 ms
Inbound IGMP access group is not set
IGMP activity: 9 joins, 7 leaves
Multicast routing is enabled on interface
Multicast TTL threshold is 0
Multicast designated router (DR) is 172.16.100.5
IGMP querying router is 172.16.100.1
Multicast groups joined by this system (number of users):
224.0.1.40(1)

This output indicates that R2 has only joined the multicast group 224.0.1.40 on GigabitEthernet0/0.
There is no listing for 224.2.2.2. According to Figure 2-3, this address should appear here. To discover
why, the next step is to look for an ip igmp join-group command under GigabitEthernet0/0 for the
group 224.2.2.2:
R2#show run interface GigabitEthernet0/0
Building configuration...
Current configuration : 116 bytes
!
interface GigabitEthernet0/0
ip address 172.16.100.2 255.255.255.0
ip pim dense-mode
duplex auto
speed auto
end


The join command is missing. Applying the ip igmp join-group 224.2.2.2 command on the
GigabitEthernet0/0 interface of R2 will force R2 to begin sending IGMP version 2 membership reports on
the VLAN 1245 segment for 224.2.2.2.
R2(config)#interface GigabitEthernet0/0
R2(config-if)#ip igmp join-group 224.2.2.2
R2(config-if)#end
%SYS-5-CONFIG_I: Configured from console by console
R2#debug ip igmp
IGMP debugging is on
IGMP(0): Send v2 Report for 224.2.2.2 on GigabitEthernet0/0
IGMP(0): Received v2 Report on GigabitEthernet0/0 from 172.16.100.2 for 224.2.2.2
IGMP(0): Received Group record for group 224.2.2.2, mode 2 from 172.16.100.2 for 0
sources
IGMP(0): Updating EXCLUDE group timer for 224.2.2.2
IGMP(0): MRT Add/Update GigabitEthernet0/0 for (*,224.2.2.2) by 0

This indicates that R2 has joined the group 224.2.2.2 as expected.


Switch Fails To Forward IGMP Packets
This situation occurs when a host transmits a membership report for a given multicast group, but the
Layer 2 switch drops or fails to forward these packets to the IGMP router. The quickest method to
detect this type of scenario is a three-step process.
Step One: Is the host sending membership reports for the specific multicast group?
In this scenario, we will look at R4 to determine if it is sending membership report messages for
224.4.4.4. This is accomplished with the debug ip igmp command on R4:
R4#debug ip igmp
IGMP debugging is on
R4#
IGMP(0): Send v2 Report for 224.4.4.4 on FastEthernet0/0
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.4 for 224.4.4.4
IGMP(0): Received Group record for group 224.4.4.4, mode 2 from 172.16.100.4 for 0
sources
IGMP(0): Updating EXCLUDE group timer for 224.4.4.4
IGMP(0): MRT Add/Update FastEthernet0/0 for (*,224.4.4.4) by 0

Based on the debug output, IGMP membership reports are being transmitted for the group 224.4.4.4.
What device is acting as the querier on the VLAN 1245 segment? This can be determined with the
show ip igmp interface command:
R4#show ip igmp interface FastEthernet0/0
FastEthernet0/0 is up, line protocol is up
Internet address is 172.16.100.4/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
IGMP query interval is 60 seconds
IGMP querier timeout is 120 seconds
IGMP max query response time is 10 seconds
Last member query count is 2
Last member query response interval is 1000 ms
Inbound IGMP access group is not set
IGMP activity: 11 joins, 7 leaves
Multicast routing is enabled on interface
Multicast TTL threshold is 0
Multicast designated router (DR) is 172.16.100.5
IGMP querying router is 172.16.100.1
Multicast groups joined by this system (number of users):
224.4.4.4(1) 224.0.1.40(1)

The third line from the bottom of this show output identifies R1 (172.16.100.1) as the querier, the
router toward which the membership reports are being sent.
Step 2: Are the IGMP version 2 membership reports making it to the querier?
The answer to this question is best provided by the output of the show ip igmp groups command:
R1#show ip igmp groups
IGMP Connected Group Membership
Group Address    Interface        Uptime    Expires   Last Reporter   Group Accounted
224.4.4.4        FastEthernet0/0  00:11:12  00:02:19  172.16.100.4
224.5.5.5        FastEthernet0/0  01:05:44  00:02:21  172.16.100.5
224.2.2.2        FastEthernet0/0  00:37:20  00:02:14  172.16.100.2
224.0.1.40       FastEthernet0/1  1d02h     00:02:21  172.16.17.7
224.0.1.40       FastEthernet0/0  1d02h     00:02:13  172.16.100.4

The output indicates that R1 does indeed know about the multicast group 224.4.4.4, and that it learned
about the group membership from the "Last Reporter," R4 (172.16.100.4). In a situation where the host
is sending the membership report but the IGMP router is not receiving it, a logical assumption is that the
switch is blocking it.
Step Three: Is the switch blocking or limiting the IGMP traffic?
The switch in this topology is CAT1. The easiest way to learn what multicast enabled IGMP devices are
connected to the switch is by using the show ip igmp snooping mrouter command:
CAT1#show ip igmp snooping mrouter
Vlan      ports
----      -----
1245      Gi0/2(dynamic), Fa0/1(dynamic), Fa0/4(dynamic), Fa0/5(dynamic)

After identifying which ports are connected to IGMP-enabled routers on the Catalyst switch, the next
step is to verify whether any commands configured on the switch will alter the forwarding of IGMP
packets. The first things to look for are the ip igmp filter and/or ip igmp max-groups command(s)
applied to any of these interfaces:
CAT1#show run interface FastEthernet0/5
Building configuration...
Current configuration : 150 bytes
!
interface FastEthernet0/5
switchport access vlan 1245
switchport mode access
spanning-tree portfast
ip igmp max-groups 5
ip igmp filter 1
end

The output of this command on CAT1 for the interface connected to R5 illustrates examples of the
commands we are looking for. In this particular instance, the configured commands are not affecting the
current environment.
IGMP Packet Filtering
This situation occurs when a host transmits a membership report for a given multicast group, but the
Layer 3 router drops or fails to forward these packets. The quickest method to isolate this type of
problem is to look for Inbound IGMP access groups or Interface IGMP State Limits configured on the
IGMP router via the show ip igmp interface command:
R1#show ip igmp interface FastEthernet0/0
FastEthernet0/0 is up, line protocol is up
Internet address is 172.16.100.1/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
IGMP query interval is 60 seconds
IGMP querier timeout is 120 seconds
IGMP max query response time is 10 seconds
Last member query count is 2
Last member query response interval is 1000 ms
Inbound IGMP access group is 1
IGMP activity: 14 joins, 10 leaves
Interface IGMP State Limit : 4 active out of 10 max
Multicast routing is enabled on interface
Multicast TTL threshold is 0
Multicast designated router (DR) is 172.16.100.5
IGMP querying router is 172.16.100.1 (this system)
Multicast groups joined by this system (number of users):
224.0.1.40(1)


Note that the seventh line up from the bottom notifies us that the interface will support a maximum of
only 10 IGMP groups. The ninth line up from the bottom identifies that a standard access-list has been
applied to the interface. In this environment, there are no issues on R1, but if there were, a closer
examination of the access-list or the maximum IGMP state limit could isolate a fault.
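
If a fault were suspected, the referenced access-list could be inspected directly; a sketch of the kind of output to expect (the ACL contents shown here are hypothetical):

R1#show access-lists 1
Standard IP access list 1
    10 permit 224.2.2.2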

IGMP Show Command Tools


As a quick reference, here are the show command tools utilized in this chapter. This section utilizes the
IGMP topology in Figure 2-4 for all example output.

Figure 2-4: A Sample IGMP Topology

show COMMAND:
show ip igmp [vrf vrf-name] interface [interface-type interface-number]
This command displays multicast-related information about an interface.
Where:

- vrf vrf-name - optional; specifies the name of the multicast VRF instance
- interface-type interface-number - optional; filters the information based on the interface

EXAMPLE OUTPUT:
R2#show ip igmp interface GigabitEthernet0/0
GigabitEthernet0/0 is up, line protocol is up
Internet address is 172.16.100.2/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
IGMP query interval is 60 seconds
IGMP querier timeout is 120 seconds
IGMP max query response time is 10 seconds
Last member query count is 2
Last member query response interval is 1000 ms
Inbound IGMP access group is not set
IGMP activity: 11 joins, 7 leaves
Multicast routing is enabled on interface
Multicast TTL threshold is 0
Multicast designated router (DR) is 172.16.100.5
IGMP querying router is 172.16.100.1
Multicast groups joined by this system (number of users):
224.0.1.40(1) 224.2.2.2(1)

show COMMAND:
show ip igmp [vrf vrf-name] groups [group-name | group-address | interface-type interface-number]
[detail]
This command displays the multicast groups with receivers directly connected to the router and learned
through IGMP.
Where:

- vrf vrf-name - optional; specifies the name of the multicast VRF instance
- group-name - optional; name of the multicast group, as defined in the Domain Name System (DNS) hosts table
- group-address - optional; address of the multicast group, as a multicast IP address in four-part, dotted-decimal notation
- interface-type interface-number - optional; interface type and interface number
- detail - optional; provides a detailed description of the sources known through IGMP Version 3 (IGMPv3), IGMPv3lite, or URL Rendezvous Directory (URD)

EXAMPLE OUTPUT:
R1#show ip igmp groups
IGMP Connected Group Membership
Group Address    Interface        Uptime    Expires   Last Reporter   Group Accounted
224.4.4.4        FastEthernet0/0  02:04:36  00:02:38  172.16.100.4    Ac
224.5.5.5        FastEthernet0/0  00:00:24  00:02:35  172.16.100.5    Ac
224.2.2.2        FastEthernet0/0  02:04:31  00:02:34  172.16.100.2    Ac
224.0.1.40       FastEthernet0/1  1d06h     00:02:37  172.16.17.7     Ac
224.0.1.40       FastEthernet0/0  1d06h     00:02:35  172.16.100.4


show COMMAND:
show ip igmp snooping [groups [count | vlan vlan-id [ip-address | count]] | mrouter [[vlan vlan-id] |
[bd bd-id]] | querier | vlan vlan-id | bd bd-id]
This command displays the IGMP snooping configuration of a device.
Where:

- groups - optional; displays group information
- count - optional; displays the number of multicast groups learned by IGMP snooping
- vlan vlan-id - optional; specifies a VLAN; valid values are 1 to 1001; if this keyword is not configured, information is displayed for all VLANs
- bd bd-id - optional; specifies a bridge domain; valid values are 1 to 1001; if this keyword is not configured, information is displayed for all bridge domains
- count - optional; displays the group count inside a VLAN
- mrouter - optional; displays information about dynamically learned and manually configured multicast router ports
- querier - optional; displays IGMP querier information

EXAMPLE OUTPUT:
CAT1#show ip igmp snooping mrouter
Vlan      ports
----      -----
1245      Gi0/2(dynamic), Fa0/1(dynamic), Fa0/4(dynamic), Fa0/5(dynamic)


IGMP Debug Command Tools


As a quick reference, here are the debug command tools utilized in this chapter. This section utilizes the
IGMP topology in Figure 2-5 for all example output.

Figure 2-5: A Sample IGMP Topology

debug COMMAND:
debug ip igmp [vrf vrf-name] [group-address]
This command displays IGMP packets received and sent, and IGMP-host related events.
Where:

- vrf vrf-name - optional; specifies the name of the multicast VRF instance
- group-address - optional; address of a particular group about which to display IGMP information

EXAMPLE OUTPUT:
IGMP(0): Received v2 Query on FastEthernet0/0 from 172.16.100.1
IGMP(0): Set report delay time to 1.5 seconds for 224.0.1.40 on FastEthernet0/0
IGMP(0): Set report delay time to 8.4 seconds for 224.2.2.2 on FastEthernet0/0
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.4 for 224.0.1.40
IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.100.4 for 0 sources
IGMP(0): Cancel report for 224.0.1.40 on FastEthernet0/0
IGMP(0): Updating EXCLUDE group timer for 224.0.1.40
IGMP(0): MRT Add/Update FastEthernet0/0 for (*,224.0.1.40) by 0


debug COMMAND:
debug ip igmp snooping {group | management | router | timer}
This command displays debugging messages for IGMP snooping activity on a switch.
Where:

- group - displays debugging messages related to multicast groups
- management - displays debugging messages related to management services
- router - displays debugging messages related to the local routers
- timer - displays debugging messages related to the IGMP timer

EXAMPLE OUTPUT:
CAT1#debug ip igmp snooping router
IGMPSN: router: Received IGMP pak on Vlan 1245, port Fa0/1
IGMPSN: router: Is a router port on Vlan 1245, port Fa0/1
IGMPSN: router: Learning port: Fa0/1 as rport on Vlan 1245


Chapter Challenge: IGMP Sample Trouble Tickets


The following section includes three sample Trouble Tickets designed to challenge the troubleshooting
skills that have been developed in all previous sections of this chapter. These Trouble Tickets were
designed using the Routing & Switching rental racks at www.ProctorLabs.com with the initial
configurations provided in the file MCAST-CH2-IGMP-TT-INITIAL.txt. Keep in mind these sample Trouble
Tickets were also tested against home practice racks and the most popular router emulators.
The network topology used in this section is shown in Figure 2-6 below:

Figure 2-6: The Chapter Challenge Topology

Trouble Ticket #1
Your supervisor has brought to your attention that R2 is not generating IGMP version 2 membership
reports for the multicast group 224.2.2.2. Correct this issue.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that membership reports generated by R4
for the multicast group 224.4.4.4 are not making it to the IGMP router. Correct this issue.
Trouble Ticket #3
Your supervisor has notified you that membership reports generated by R2 for the multicast group
224.2.2.2 are not making it into the IGMP Groups table on R1. Correct this issue without the removal of
existing configurations.


Chapter Challenge: IGMP Sample Trouble Tickets Solutions


The following section includes the solutions to the three Trouble Tickets presented in the previous
section. Figure 2-7 provides a flowchart that outlines a "quick fire" approach to isolating and
remediating issues associated with IGMP.


Figure 2-7: IGMP Quick Fire Troubleshooting Flowchart


Trouble Ticket #1 Solution
Your supervisor has brought to your attention that R2 is not generating IGMP version 2 membership
reports for the multicast group 224.2.2.2. Correct this issue.
Step 1 - Fault Verification:
Is R2 generating IGMP version 2 membership reports?
R2#debug ip igmp
IGMP debugging is on
R2#
IGMP(0): Received v2 Query on GigabitEthernet0/0 from 172.16.100.1
IGMP(0): Set report delay time to 3.8 seconds for 224.0.1.40 on GigabitEthernet0/0


R2#
IGMP(0): Received v2 Report on GigabitEthernet0/0 from 172.16.100.4 for 224.0.1.40
IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.100.4 for 0 sources
IGMP(0): Cancel report for 224.0.1.40 on GigabitEthernet0/0
IGMP(0): Updating EXCLUDE group timer for 224.0.1.40
IGMP(0): MRT Add/Update GigabitEthernet0/0 for (*,224.0.1.40) by 0

R2 is receiving reports and queries but is not sending reports for 224.2.2.2. This verifies that the problem
actually exists.

Step 2 - Fault Isolation:
The next course of action is to use the show ip igmp interface command on R2.

R2#show ip igmp interface
GigabitEthernet0/0 is up, line protocol is up
Internet address is 172.16.100.2/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
IGMP query interval is 60 seconds
IGMP querier timeout is 120 seconds
IGMP max query response time is 10 seconds
Last member query count is 2
Last member query response interval is 1000 ms
Inbound IGMP access group is not set
IGMP activity: 11 joins, 9 leaves
Multicast routing is enabled on interface
Multicast TTL threshold is 0
Multicast designated router (DR) is 172.16.100.5
IGMP querying router is 172.16.100.1
Multicast groups joined by this system (number of users):
224.0.1.40(1)

The last line of the output does not list 224.2.2.2 as a group that R2 has joined. This is most likely a
missing or erroneous configuration of the ip igmp join-group command under the GigabitEthernet0/0
interface. This is verified with the show run interface command on R2:
R2#show run interface GigabitEthernet0/0
Building configuration...
Current configuration : 116 bytes
!
interface GigabitEthernet0/0
ip address 172.16.100.2 255.255.255.0
ip pim dense-mode


duplex auto
speed auto
end


This isolates the fault.

Step 3 - Fault Remediation:
In this scenario, the ip igmp join-group 224.2.2.2 command needs to be applied to the
GigabitEthernet0/0 interface of R2.

R2(config)#interface GigabitEthernet0/0
R2(config-if)#ip igmp join-group 224.2.2.2
R2(config-if)#end

Step 4 - Verification of Remediation


Once the error has been isolated and remediated it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially.

R2#debug ip igmp
IGMP debugging is on
R2#
IGMP(0): Send v2 Report for 224.2.2.2 on GigabitEthernet0/0
IGMP(0): Received v2 Report on GigabitEthernet0/0 from 172.16.100.2 for 224.2.2.2
IGMP(0): Received Group record for group 224.2.2.2, mode 2 from 172.16.100.2 for 0 sources
IGMP(0): Updating EXCLUDE group timer for 224.2.2.2
IGMP(0): MRT Add/Update GigabitEthernet0/0 for (*,224.2.2.2) by 0

R2 is now sending membership reports for the group 224.2.2.2. The solution has successfully
remediated the problem.
Trouble Ticket #2 Solution
After solving Trouble Ticket #1, your supervisor has observed that membership reports generated by R4
for the multicast group 224.4.4.4 are not making it to the IGMP router. Correct this issue.
Step 1 - Fault Verification:
Does R1 have a record of the IGMP joins for the group 224.4.4.4?
R1#show ip igmp groups
IGMP Connected Group Membership
Group Address    Interface        Uptime    Expires   Last Reporter   Group Accounted
224.5.5.5        FastEthernet0/0  00:47:43  00:02:37  172.16.100.5    Ac
224.0.1.40       FastEthernet0/1  1d06h     00:02:16  172.16.17.7
224.0.1.40       FastEthernet0/0  1d06h     00:02:30  172.16.100.2    Ac


R1 has no record of the group 224.4.4.4, thus proving that the problem exists.

Step 2 - Fault Isolation:
Now the first step is to determine whether R1 even receives the membership reports from R4 at all.
R1#debug ip igmp
IGMP debugging is on
R1#
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.2 for 224.2.2.2
IGMP(*): Group 224.2.2.2 access denied on FastEthernet0/0
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.5 for 224.5.5.5
IGMP(0): Received Group record for group 224.5.5.5, mode 2 from 172.16.100.5 for 0 sources
IGMP(0): Updating EXCLUDE group timer for 224.5.5.5
IGMP(0): MRT Add/Update FastEthernet0/0 for (*,224.5.5.5) by 0

R1 receives membership reports for the groups 224.2.2.2 and 224.5.5.5 but not 224.4.4.4. This means
that the next step of the verification needs to be on CAT1. Use the debug ip igmp filter command on
CAT1 to identify any interfaces that may have profiles filtering specific multicast groups:
CAT1#debug ip igmp filter
event debugging is on
CAT1#
IGMPFILTER: igmp_filter_process_pkt() checking group from Gi0/2 : no profile attached
CAT1#
IGMPFILTER: igmp_filter_process_pkt(): checking group 224.4.4.4 from Fa0/4: deny

CAT1 has an igmp-filter applied on interface FastEthernet0/4 that denies the multicast group 224.4.4.4.
The actual parameters of the profile can be seen via the show ip igmp profile command:
CAT1#show ip igmp profile
IGMP Profile 1
range 224.4.4.4 224.4.4.4

This has isolated our fault.
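For context, the filtering configuration that produces this behavior is typically built along the following lines. This is a sketch based on the profile output above; note that the default action of an IGMP profile is to deny the configured range:

CAT1(config)#ip igmp profile 1
CAT1(config-igmp-profile)#range 224.4.4.4 224.4.4.4
CAT1(config-igmp-profile)#exit
CAT1(config)#interface FastEthernet0/4
CAT1(config-if)#ip igmp filter 1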



Step 3 - Fault Remediation:
In this scenario, the ip igmp filter 1 command needs to be removed from the FastEthernet0/4 interface
of CAT1:
CAT1(config)#interface FastEthernet0/4
CAT1(config-if)#no ip igmp filter 1


Step 4 - Verification of Remediation


Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially:
R1#show ip igmp groups
IGMP Connected Group Membership
Group Address    Interface        Uptime    Expires   Last Reporter   Group Accounted
224.4.4.4        FastEthernet0/0  00:00:28  00:02:31  172.16.100.4    Ac
224.5.5.5        FastEthernet0/0  01:06:49  00:02:24  172.16.100.5    Ac
224.0.1.40       FastEthernet0/1  1d07h     00:02:17  172.16.17.7
224.0.1.40       FastEthernet0/0  1d07h     00:02:22  172.16.100.1    Ac


R1, the IGMP router, now sees the group 224.4.4.4, verifying that the error has been corrected.
Trouble Ticket #3 Solution
Your supervisor has notified you that membership reports generated by R2 for the multicast group
224.2.2.2 are not making it into the IGMP Groups table on R1. Correct this issue without the removal of
existing configurations.
Step 1 - Fault Verification:
Are membership reports from R2 for the group 224.2.2.2 making it to R1?
R1#debug ip igmp
IGMP debugging is on
R1#
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.2 for 224.2.2.2
IGMP(*): Group 224.2.2.2 access denied on FastEthernet0/0
R1#

R1 is receiving the membership reports from R2 but they are being actively denied on FastEthernet0/0,
thus verifying the validity of the trouble ticket.

Step 2 - Fault Isolation:
To determine why the IGMP packets for the group 224.2.2.2 are being dropped, use the show ip igmp interface command.

R1#show ip igmp interface FastEthernet0/0
FastEthernet0/0 is up, line protocol is up
Internet address is 172.16.100.1/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
IGMP query interval is 60 seconds


IGMP querier timeout is 120 seconds


IGMP max query response time is 10 seconds
Last member query count is 2
Last member query response interval is 1000 ms
Inbound IGMP access group is 1
IGMP activity: 16 joins, 13 leaves
Interface IGMP State Limit : 3 active out of 10 max
Multicast routing is enabled on interface
Multicast TTL threshold is 0
Multicast designated router (DR) is 172.16.100.5
IGMP querying router is 172.16.100.1 (this system)
Multicast groups joined by this system (number of users):
224.0.1.40(1)


We see that there is an IGMP state limit of 10 maximum groups, but only 3 are active. This rules out the max-groups setting as a cause. However, we also notice that there is an access-group applied to the interface. This access-group references the standard numbered access-list 1. At this point, the contents of that access-list are significant and can be viewed with the show ip access-list 1 command:
R1#show ip access-list 1
Standard IP access list 1
10 deny   224.2.2.2 (39 matches)
20 permit any (139 matches)


Looking at this output, we see that the multicast group 224.2.2.2 is being denied by access-list 1.
Additionally, the output tells us that IGMP report messages are arriving on R1, but we have denied 39 of
them. This is the problem with the configuration.
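For reference, the interface configuration producing this combination of symptoms would look something like the following sketch; the access-list number and group limit are taken from the output above, and the commands shown are the standard IOS interface forms:

R1(config)#interface FastEthernet0/0
R1(config-if)#ip igmp access-group 1
R1(config-if)#ip igmp limit 10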

Step 3 - Fault Remediation:
In this scenario, access-list 1 should be edited such that line 10 is removed.
R1(config)#ip access-list standard 1
R1(config-std-nacl)#no 10
R1(config-std-nacl)#exit

To verify that the editing worked we use the show ip access-list command:
R1(config)#do show ip access-list 1
Standard IP access list 1
20 permit any (151 matches)
R1(config)#

Step 4 - Verification of Remediation


Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially.

R1#debug ip igmp
IGMP debugging is on
R1#
IGMP(0): Send v2 general Query on FastEthernet0/0
IGMP(0): Set report delay time to 1.1 seconds for 224.0.1.40 on FastEthernet0/0
R1#
IGMP(0): Received v2 Report on FastEthernet0/0 from 172.16.100.2 for 224.2.2.2
IGMP(0): Received Group record for group 224.2.2.2, mode 2 from 172.16.100.2 for 0 sources
IGMP(0): Updating EXCLUDE group timer for 224.2.2.2
IGMP(0): MRT Add/Update FastEthernet0/0 for (*,224.2.2.2) by 0


R1 is no longer denying the membership report from R2 for the group 224.2.2.2. As a final verification
this group should now appear in the output of the show ip igmp groups command:
R1#show ip igmp groups
IGMP Connected Group Membership
Group Address    Interface        Uptime    Expires   Last Reporter   Group Accounted
224.4.4.4        FastEthernet0/0  00:22:48  00:02:09  172.16.100.4    Ac
224.5.5.5        FastEthernet0/0  01:29:10  00:02:04  172.16.100.5    Ac
224.2.2.2        FastEthernet0/0  00:03:54  00:02:05  172.16.100.2    Ac
224.0.1.40       FastEthernet0/1  1d07h     00:02:50  172.16.17.7
224.0.1.40       FastEthernet0/0  1d07h     00:02:01  172.16.100.5    Ac

R1 now has 224.2.2.2 in the list of active IGMP groups, verifying that the issue has been corrected.


Chapter 3: Protocol Independent Multicast - Dense Mode (PIM-DM)



This chapter of IPv4/6 Multicast Operation and Troubleshooting examines the processes and the
functionality of the PIM dense-mode (PIM-DM) protocol in great depth. Once the operational
characteristics of this important protocol are detailed completely, the focus becomes that of
troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and
the implementation of repairs for the PIM dense-mode (PIM-DM) protocol. The chapter begins with a
thorough review of PIM-DM, and then quickly launches into an exhaustive analysis of the art of
troubleshooting this multicast routing protocol. This important chapter concludes with sample
troubleshooting scenarios, reference materials for the most important show and debug commands, and
exciting challenges that allow readers to practice implementing the troubleshooting skills they have
obtained.


PIM-DM Technology Review


Chapter 2: Internet Group Management Protocol (IGMP) discussed the important control protocol of
IGMP at great length. We learned that the nature of IGMP's role in multicast routing was to fulfill the last leg
of the multicast routing process. IGMP reports the existence of hosts that have joined multicast groups
to the underlying multicast routing protocol. Several types and varieties of multicast protocols exist, but
the most common protocol used in Cisco networks today is Protocol Independent Multicast (PIM).
Where IGMP is the protocol used to exchange information between hosts and routers, PIM is used
between routers to build the multicast tree from the sender down to the interested hosts. PIM is
protocol independent because no topology information is exchanged while the multicast tree is created.
Instead, PIM relies on the underlying interior gateway protocol running in the network. This means that
the multicast tree is built with no concern for the existence of loops in the topology. There is, however, a convention built into the multicast forwarding and routing logic that works in unison with the routing protocol to prevent multicast loops. This convention, Reverse Path Forwarding (RPF), ensures that multicast packets are dropped when they arrive on an interface that is not used to reach the source of those packets. RPF will be covered in detail later in this chapter, but first it will be necessary to discuss the
two current versions of PIM that can be deployed in a network.
Just like IGMP, PIM has evolved over the years. By default, current versions of IOS run PIM version 2. In order to better understand the nature of the enhancements made to the protocol in its current version, a close examination of PIM version 1 is in order.
PIMv1
PIM version 1 is a Cisco proprietary protocol that can dynamically map RPs to multicast groups in concert with a standalone protocol called Auto-RP. PIM version 1 uses a time-to-live value to scope its announcements. PIM version 1 packets are transmitted inside IGMP packets, and PIM routers that create the multicast tree use these PIM-laden IGMP packets. IGMP packets containing PIM packets are designated as Type 5 IGMP messages, or "Router PIM Messages".
PIMv2
PIM version 2 is a standards-track protocol that made several improvements on the earlier Cisco proprietary version. These improvements include the concept of a single active RP per multicast group with multiple alternate RPs. PIM version 1 supported multiple active RPs for the same group. In version
2, PIM packets are stand-alone packets and no longer embedded in IGMP messages. These new PIM
version 2 packets have support for automated fault tolerant RP discovery and distribution called a
Bootstrap router (BSR). This means that PIMv2 does not need any standalone protocols like Auto-RP
does to allow routers to dynamically learn group-to-RP mappings. Additional modifications to the PIM
version 1 protocol also include more flexible encoding of future capability options inside PIM join and


prune messages, as well as a robust and flexible Hello Packet format that replaces the old Query packet
operation adopted from IGMP.
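Although PIM version 2 is the default in modern IOS images, the version can be set, or verified, on a per-interface basis. A minimal sketch:

R1(config)#interface FastEthernet0/0
R1(config-if)#ip pim version 2
R1(config-if)#end

The running version is then visible in the Ver column of the show ip pim neighbor output seen later in this chapter.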
To better understand the operation of multicast in IP networks, it is best to separate the construction of
the PIM control plane from the actual forwarding of multicast data (the data plane). Once the control
plane protocols have actually constructed the multicast tree the actual forwarding of multicast packets
will take place in the data plane. The data plane uses Reverse Path Forwarding (RPF) to ensure traffic is
not forwarded in a fashion that results in a loop of the multicast stream. We mentioned briefly the fact
that the multicast tree is built with no concern for loops in the tree. In fact, it is worth mentioning that
commonly there will be valid deployments where the tree is built as a looped topology. Realizing this
fact, PIM was created to use RPF at all times for all multicast traffic. Specifically, the reverse path
forwarding check is going to ensure that multicast packets will only be forwarded when they arrive on
interfaces that are designated as being loop-free unicast paths back to the source of the multicast
stream. The mechanics of the reverse path forwarding check are as follows: every time a device receives a multicast packet on an interface, the router performs a lookup based on the source IP address. This lookup
recurses to the interface used by the underlying routing protocol to reach the source of the packet. This
interface will be compared to the interface the packet actually arrived on. If the interfaces match then
the RPF check passes. If the interfaces do not match then the RPF check fails and the packet is dropped.
It is important to understand that these RPF checks are a data plane protection process, separate and
apart from the control plane and they are performed against each and every multicast packet a device
receives. So in the event of a loop in the IGP topology, or if PIM was not able to create a loop-free tree, it is assured that multicast packets will not loop as they are forwarded. This essentially guarantees that even if a looped tree is built, the data plane will determine which interfaces should or should not be used in forwarding, all based on the underlying unicast routing table.
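The RPF decision for any given source can be inspected directly with the show ip rpf command. A sketch of its use from R9, where the source address, the neighbor, and the EIGRP autonomous system number are shown purely as illustrative values:

R9#show ip rpf 172.16.15.1
RPF information for ? (172.16.15.1)
  RPF interface: FastEthernet0/1
  RPF neighbor: ? (172.16.79.7)
  RPF route/mask: 172.16.15.0/24
  RPF type: unicast (eigrp 100)
  RPF recursion count: 0
  Doing distance-preferred lookups across tables

If the RPF interface shown here does not match the interface on which a multicast feed actually arrives, the RPF check fails and the stream is dropped.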
The majority of problems encountered when deploying or troubleshooting PIM will actually result in
some part of the physical network design failing the RPF check mechanism. This means that most
multicast issues are going to be related to RPF check failures that will need to be remediated either by
changing the underlying unicast routing, implementing static multicast routes, deploying tunnels or by
using multicast BGP.
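As an illustration of the static multicast route option, an RPF failure toward a given source can be corrected without altering unicast routing at all. A minimal sketch with illustrative addresses:

R9(config)#ip mroute 172.16.15.0 255.255.255.0 172.16.79.7

A static mroute of this kind influences only RPF calculations; it does not affect the forwarding of unicast traffic.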
The last stage of data plane forwarding is going to be the creation of the multicast routing table for a
particular device. PIM neighbors exchange messages, specifically join and prune messages that are used
to create the multicast tree, these messages are also used in the creation of the multicast routing table.
The role of the multicast routing table is to keep track of what interfaces point to multicast sources, and
what interfaces lead to multicast receivers. This table can at first seem confusing and difficult to

Copyright by IPexpert, Inc. All Rights Reserved.

3-3

IPv4/6 Multicast Operation and Troubleshooting

Chapter 3: PIM - Dense Mode (PIM-DM)

interpret, but a short amount of time looking at how the information is organize will reveal just how
powerful this table is when trying to troubleshoot any multicast routing problem.
Two classifications of information found in the multicast routing table correspond to the interface
classification mentioned previously. Interfaces that point toward the source of a multicast feed
(upstream facing) are classified as incoming interfaces. These interfaces are placed in the section of the
multicast routing table called the "incoming interface list". The remaining links (downstream facing) lead
toward any possible multicast receivers. These interfaces compose what is called the Outgoing Interface
List or "OIL".
At this point it is important to note that there is a "split-horizon-like" behavior that multicast follows.
This behavior prevents an interface from being able to be classified as both incoming and outgoing for a
given multicast group simultaneously. This behavior can be disabled in some categories of PIM, but not
in others. For troubleshooting purposes, it is important to remember this behavior as it can cause
significant issues in network designs that have multipoint non-broadcast interfaces like some
deployments of frame relay. It should also be noted that this behavior could also result in some network
designs that make it impossible to deploy some modes of PIM. As a rule, it is best to run multicast over a
Layer 2 technology such as frame-relay using point-to-point PIM enabled links everywhere.
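A minimal sketch of that recommendation, assuming a frame-relay point-to-point subinterface with an illustrative address and DLCI number:

interface Serial0/0/0.1 point-to-point
 ip address 172.16.46.4 255.255.255.0
 frame-relay interface-dlci 406
 ip pim dense-mode

Because a point-to-point subinterface has exactly one neighbor, the split-horizon-like behavior described above cannot trap traffic between two neighbors reached over the same interface.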
Once the multicast tree has been created using the associated PIM join and prune messages, and
assuming the RPF checks pass, multicast routes (mroutes) will begin to populate the multicast routing
table. These routes utilize an annotation scheme unique to multicast. This annotation format follows the
model of (S,G) where "S" is the source and "G" is the group. This means that the router knows the
identity of the source for a specific group as opposed to the (*,G) entry. The *,G entry means that the
device knows the group but not the identity of the source. The (S,G) is often referred to as the "Source
Tree", where the (*,G) is known as the "Shared Tree".
Multicast routing has one other thing in common with unicast routing. The most specific route or
"longest match" will always be preferred. This means that a *,G entry will always be less specific than a
S,G entry in the multicast routing table. This being the case, once a multicast packet arrives on an
interface the router will switch the packet from the incoming interface, to all interfaces in the OIL. The
mechanism being described here is one where a "single" packet arrives on an interface, and a layer
three replication of that single packet takes place and it is forwarded out all the interfaces in the OIL.
PIM Dense Mode
We have discussed the basic operation of PIM. The next step in this critical analysis of the protocol is to
look at different modes of PIM operation. There are three modes to this essential multicast routing
protocol: dense-mode (PIM-DM), sparse-mode (PIM-SM), and sparse-dense-mode (PIM-SM-DM).
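For reference, each mode corresponds to an interface-level command in IOS:

R1(config-if)#ip pim dense-mode
R1(config-if)#ip pim sparse-mode
R1(config-if)#ip pim sparse-dense-mode

Only one of these modes may be active on a given interface at a time.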


The mode that this chapter will explore is PIM-DM. The remaining operational modes each have their own dedicated chapters. It was decided to start with PIM-DM because the protocol is the simplest of the three, and it affords a very streamlined technology with which to introduce the basic and advanced verification commands, tools, and processes associated with troubleshooting all of the PIM mode types.
Figure 3-1 demonstrates a sample PIM-DM topology.


Figure 3-1: A Sample PIM-DM Topology

In this chapter, we are going to look critically at the specifics of PIM-DM to include the different
verification techniques used to validate its operation, as well as the commands used to isolate faults in
the protocol.
PIM-DM operates via an implicit join paradigm. This implicit join model is often called a "push model"
because the routers are going to flood all multicast traffic they receive out every single interface running
PIM-DM (except the interface the packet arrived on). This means that the individual adjacent routers are
responsible for deciding if they are interested in the multicast feed or not. If the router has no interest in
the multicast feed, it will send a PIM Prune message. This message is the equivalent of an un-join
instruction sent out the originating link. This "flood and prune" behavior makes PIM-DM operation
unwieldy due to the sheer volume of state information it must maintain. The larger the network, the more state information must be maintained; therefore, PIM-DM's biggest drawback is its lack of scalability.

The Operation and Troubleshooting of PIM-DM


To better understand the process used by PIM-DM to create the multicast tree, the protocol's operation will be divided into four individual steps.


Step One: Application of PIM-DM on one router


A PIM-DM router is first going to attempt to identify all the PIM-DM enabled neighbors on its individual
links. In an effort to find these PIM enabled neighbors PIM-DM will use the reserved link-local multicast
address 224.0.0.13. As a result of this methodology it is essential to ensure that any Layer 2 protocols,
like frame-relay, are configured to support multicast transport when it comes to the neighbor discovery
process or multicast forwarding. This is accomplished by ensuring that these links are configured to
support pseudo-broadcast. The application of the "broadcast" command allows a link to support
broadcast addresses like 255.255.255.255, but by extension, the command also supports multicast
packets.
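On a multipoint frame-relay interface, pseudo-broadcast is enabled by adding the broadcast keyword to each map statement. A minimal sketch, with an illustrative peer address and DLCI:

interface Serial0/0/0
 encapsulation frame-relay
 frame-relay map ip 172.16.46.6 406 broadcast

Without the broadcast keyword, PIM hellos sent to 224.0.0.13 would never cross the frame-relay cloud, and no neighbor relationship would form.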
A closer look at the operation of PIM-DM can be accomplished by using the debug ip packet detail
command. Using this command before actually configuring multicast routing will afford valuable insight
into the processes that take place on the router. Using the topology in Figure 3-2, we will demonstrate
this in detail.

Figure 3-2: PIM-DM Topology

Looking at the output of debug ip packet on any of these devices after we apply the ip multicast-routing
and the ip pim dense-mode commands we will see the underlying process take place on the console. In
order to filter out traffic associated with our internal routing protocol we will apply an ACL to the debug
ip packet command that will prevent EIGRP's locally process switched traffic from cluttering the console
messages. Also, note that like other chapters in this text we have disabled the service timestamps
feature, again in an effort to reduce any possible confusion regarding the interpretation of the debug
output.
R1(config)#access-list 101 deny eigrp any any
R1(config)#access-list 101 permit ip any any


R1(config)#end
R1#
R1#debug ip packet detail 101
IP packet debugging is on (detailed) for access list 101

With this accomplished our next task is to enable ip multicast-routing and apply the ip pim dense-mode
command under the FastEthernet0/0 interface of R1. Once completed, we will observe the output of the
debug ip packet command:
R1#conf t
Enter configuration commands, one per line.
R1(config)#ip multicast-routing
R1(config)#interface FastEthernet0/0
R1(config-if)#ip pim dense-mode
R1(config-if)#end

End with CNTL/Z.

Almost immediately, debug output begins to appear on the console of R1. It is this output we need to
interpret in order to understand the process that is taking place. Specifically, we need to look at these
three samples of output. Notice that each represents an instance where R1 is sending a
broadcast/multicast packet, but there are two different protocol types, and three different destination
addresses used.
IP: s=172.16.15.1 (local), d=224.0.0.13 (FastEthernet0/0), len 54, sending broad/multicast, proto=103
IP: s=172.16.15.1 (local), d=224.0.0.1 (FastEthernet0/0), len 28, sending broad/multicast, proto=2
IP: s=172.16.15.1 (local), d=224.0.1.40 (FastEthernet0/0), len 28, sending broad/multicast, proto=2

First, we will look at the protocol types. In the sample output above, we see protocols 103 and 2. What
are these protocol types? It is simple enough to find out by creating an extended access-list using the IP
protocol number. The default behavior of Cisco IOS is to translate these protocol numbers into a human
readable form. Taking advantage of this feature is a handy tool to identify the protocol types without
needing external resources or materials.
R1(config)#access-list 100 permit 103 any any
R1(config)#access-list 100 permit 2 any any
R1(config)#end
%SYS-5-CONFIG_I: Configured from console by console
R1#show ip access-list 100
Extended IP access list 100
10 permit pim any any
20 permit igmp any any


The output of the show ip access-list 100 command reveals that the protocol types in question are PIM
(type 103) and IGMP (type 2) messages respectively. Now that we know what type of messages R1 is sending, the next logical step is to look at where the messages are going. The type 103 or PIM messages
are being sent to the link-local multicast group 224.0.0.13. The type 2 messages are actually being sent
to two different destination groups. The first group is 224.0.0.1, a link-local multicast group employed by
all multicast hosts. This means that by virtue of enabling PIM on the interface R1 is now sending IGMP
query messages to all hosts on the VLAN 15 segment.
However, it is also observed that R1 is sending type 2 protocol messages to a new multicast destination address that has not been discussed as of yet: the multicast group 224.0.1.40. This address is used in the process of Auto-RP that was discussed briefly in Chapter 2: Internet Group Management Protocol (IGMP). Auto-RP will be discussed in detail in Chapter 8: AutoRP. At this point, it is enough to know that the group 224.0.1.40 is the multicast group that the Auto-RP mapping agent will use to disseminate Auto-RP information. The important thing to note is that as soon as the multicast routing process is enabled, the router will automatically join the group 224.0.1.40. Regardless of what PIM mode we are running, the router will attempt to dynamically learn the identity of a rendezvous point (RP). PIM-DM does not utilize an RP as part of the multicast routing process, but nonetheless the router will join the group.
This behavior affords us an ideal opportunity to look at the multicast routing table while it has only one
entry.
R1#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 00:42:27/00:01:54, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Dense, 00:42:27/00:00:00

Observe the (*, 224.0.1.40) entry. This entry is referred to as a "star comma gee", and fits the
annotation model of (*,G). This means that the router has joined this multicast group, but there are no


known sources for it at this time. How can we actually tell the router has joined this group? Remember
the behavior of IGMP. An IGMP host records all the groups it has joined in an IGMP membership list. The
contents of this list can be viewed with the show ip igmp membership command.
R1#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m>
- <n> reporter in include mode, <m> reporter in exclude
Channel/Group     Reporter        Uptime    Exp.   Flags  Interface
*,224.0.1.40      172.16.15.1     00:48:30  02:59  2LA    Fa0/0

The output verifies that R1 has actually joined the multicast group 224.0.1.40 on Interface
FastEthernet0/0.

These two show commands, show ip mroute and show ip igmp membership, demonstrate that we have IP multicast routing enabled globally and that we are running PIM-DM at the interface level. Also, based on the output we have observed, we know that R1 is sending PIM messages to 224.0.0.13 with a protocol number of 103, and IGMP messages related to the group 224.0.1.40. In fact, R1 has actually become an IGMP client for the Auto-RP mapping agent group.
Step Two: PIM-DM on the remaining routers
Now that we have observed the behavior of PIM-DM critically on one device and have an understanding of how it operates, it is time to enable ip multicast-routing and PIM-DM on every router interface with an IP address in the topology. At this point in our critical analysis, we will enable PIM-DM on all interfaces in an effort to prevent any failures in the reverse path forwarding mechanism. Once this is accomplished, it is very simple to identify and map the multicast routing topology based on the output of a single show command: show ip pim neighbor. Using this command on all the routers in this topology will quickly allow us to construct a logical drawing of the multicast routing topology.
Beginning on R1:
R1#show ip pim nei
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.15.5       FastEthernet0/0          00:07:26/00:01:41 v2    1 / DR S

We see that R1 has formed a PIM neighbor relationship with R5. The same command on R5 informs us
that R5 is neighbored with R1 and R4:
R5#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.15.1       FastEthernet0/0          00:08:28/00:01:38 v2    1 / S
172.16.45.4       FastEthernet0/1          00:08:47/00:01:18 v2    1 / S

Consistently repeating this command on all the routers in the topology will produce a topology drawing
matching Figure 3-3.

Figure 3-3: PIM neighbor relationships

Note: When using the show ip pim neighbor command be sure to check for adjacencies on both ends of
a link. There are scenarios where the PIM neighbor can show up on one side and not the other.
Now that we see the topology, there are issues that need discussion. Observe that the PIM neighbor
relationships between R4, R2 and R6 form a distinct loop. At first glance, this will seem extremely odd,
but remember the control plane is formed with no thought toward possible loops. This is where the RPF
check mechanism will come into play. Later in this section, this entire process will be laid out and
dissected step-by-step, but for now it is time to move to the next stage of the process.
Step Three: A host joins a multicast group


We have reviewed the nature and purpose of the PIM and IGMP messages that are sent between PIM-DM enabled devices; now it is time to look at the IGMP join process. In Chapter 2: Internet Group
Management Protocol (IGMP) we saw the inner workings of this protocol and we know that to force a
router to join a multicast group we can employ a number of commands at the interface level. In this
explanation, the FastEthernet0/1 interface of R9 will use the ip igmp join-group command to join the
multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end

Once this is accomplished, the show ip mroute or show ip igmp membership commands will tell us if
the router's interface has indeed joined the group 224.9.9.9. In order to develop more familiarity with
the show ip mroute command, we use it below.
R9#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:42:21/00:02:33, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:42:21/00:00:00
(*, 224.0.1.40), 00:42:22/00:02:38, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:42:22/00:00:00

The output reveals that the router has in fact joined the group 224.9.9.9 as well as the group 224.0.1.40.
Note that there is a (*,G) entry for each of these groups. In this instance, R9 is acting like a host and as a
result will send its IGMP membership reports to R7. This can be observed via the show ip igmp
membership command.
R7#show ip igmp membership
Flags: A - aggregate, T - tracked
       L - Local, S - static, V - virtual, R - Reported through v3
       I - v3lite, U - Urd, M - SSM (S,G) channel
       1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
       / - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
       <mac-or-ip-address> - last reporter if group is not explicitly tracked
       <n>/<m> - <n> reporter in include mode, <m> reporter in exclude

Channel/Group     Reporter        Uptime    Exp.   Flags  Interface
*,224.9.9.9       172.16.79.9     00:52:19  02:53  2A     Fa0/1
*,224.0.1.40      172.16.79.9     00:52:03  02:58  2A     Fa0/1
*,224.0.1.40      172.16.67.7     00:52:29  02:30  2LA    Fa0/0

R9 has notified the IGMP router R7 that it is interested in the multicast group 224.9.9.9. Does R7
forward any of this information to adjacent devices? No, R7 has no mechanism or requirement to
forward these IGMP report messages to any of its adjacent neighbors in PIM-DM. This can be seen with
either the show ip igmp membership or show ip mroute commands on R6.
R6#show ip mroute 224.9.9.9
Group 224.9.9.9 not found

There is no entry for a (*, 224.9.9.9) on R6, nor is there an entry for that group in the IGMP membership
list.
R6#show ip igmp membership 224.9.9.9
Flags: A - aggregate, T - tracked
       L - Local, S - static, V - virtual, R - Reported through v3
       I - v3lite, U - Urd, M - SSM (S,G) channel
       1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
       / - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
       <mac-or-ip-address> - last reporter if group is not explicitly tracked
       <n>/<m> - <n> reporter in include mode, <m> reporter in exclude

Channel/Group     Reporter        Uptime    Exp.   Flags  Interface

This clearly illustrates that the IGMP router is not forwarding any of the information learned from R9.
The reason for this behavior is that PIM-DM uses an implicit join methodology where the protocol
assumes all devices are interested in receiving the multicast feed. So rather than notifying any of its
neighbors that R9 has joined a specific group, R7 instead will assume that any multicast feed for the
group 224.9.9.9 will be flooded to it. This means that in PIM-DM, R7 will be listening for the multicast traffic to arrive on any of its interfaces, but it will not notify any other routers that it has an adjacent


host that has joined any particular group. This is the flooding portion of the PIM-DM "Flood and Prune"
behavior.
Step Four: Flooding of a Multicast Feed
The previous step outlined the relationship between the IGMP host and the IGMP router. The previous
chapter described the IGMP mechanism as the protocol that encompasses the last leg of the multicast
routing tree. We have observed that only the host and its adjacent IGMP router know about the join-group command under the FastEthernet0/1 interface of R9. The next step is to emulate a multicast source and observe how PIM-DM will flood the multicast stream. The source in our topology is R1, and to emulate a multicast feed we will initiate a ping destined to a multicast group. For testing purposes, this ping will not match the multicast group that R9 has joined. Because no host is interested in the multicast feed, the pings on R1 will fail. Nevertheless, the object of this exercise is to follow the
multicast feed through the network, observe its behavior, and note any related characteristics. We will
use a very high repeat count on R1 to make sure that the multicast feed remains active throughout our
testing.
R1#ping 224.1.1.1 repeat 100000000
Type escape sequence to abort.
Sending 100000000, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
................................... <output omitted>

The pings are not successful, as we expected, but what about the multicast routing tables on all the routers in the topology? We will look at R5 first:
R5#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.1.1.1), 00:02:43/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:02:43/00:00:00
FastEthernet0/0, Forward/Dense, 00:02:43/00:00:00


(172.16.15.1, 224.1.1.1), 00:02:43/00:00:16, flags: PT


Incoming interface: FastEthernet0/0, RPF nbr 172.16.15.1
Outgoing interface list:
FastEthernet0/1, Prune/Dense, 00:02:43/00:00:16

Now we observe something new. For the multicast group 224.1.1.1 there are two entries, a (*,G) entry
and a (S,G) entry. This affords us an ideal opportunity to look closely at the "more specific" match rule
that we discussed earlier. The output of this show command illustrates that R5 knows about both the
group and the source for that group. This constitutes the creation of the (S,G) entry. Observe that the
(*,G) entry is stopped, and that each of R5's PIM-DM enabled interfaces are in the OIL. The more specific
(S,G) entry is different. We see that there is now an expiration timer 00:00:16 in this output capture, and
that FastEthernet0/0 is in the Incoming Interface List and FastEthernet0/1 is in the OIL. Based on the
multicast "flooding" model used by PIM-DM, we can expect to see this (*,G), (S,G) pair on each router in
our topology. To reduce this output in this verification this test will only be done on R6 and R9, and the
output will be filtered to reduce the amount of output to a more manageable level.
R6#show ip mroute 224.1.1.1 | sec 224.1.1.1
(*, 224.1.1.1), 00:28:36/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/1/0.1, Forward/Dense, 00:28:36/00:00:00
FastEthernet0/1, Forward/Dense, 00:28:36/00:00:00
FastEthernet0/0, Forward/Dense, 00:28:36/00:00:00
(172.16.15.1, 224.1.1.1), 00:28:36/00:01:52, flags: T
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Dense, 00:01:08/00:00:00
Serial0/1/0.1, Prune/Dense, 00:13:18/00:01:52, A
R9#show ip mroute 224.1.1.1 | sec 224.1.1.1
(*, 224.1.1.1), 00:14:15/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:14:15/00:00:00
(172.16.15.1, 224.1.1.1), 00:02:11/00:00:48, flags: PT
Incoming interface: FastEthernet0/1, RPF nbr 172.16.79.7
Outgoing interface list: Null

Remember the traffic will be flooded throughout the multicast domain, because of the flood and prune
model employed by PIM-DM. So all PIM-DM routers will receive the multicast feed whether they want it
or not. This covers the flood portion of PIM-DM. What about the "prune" aspect of PIM-DM?
We need to look at the output of the show ip mroute command on R9 once more.


R9#show ip mroute 224.1.1.1 | sec 224.1.1.1


(*, 224.1.1.1), 00:18:29/00:02:38, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:18:29/00:00:00

Now there is only a (*,G) entry in the multicast routing table where before there was both a (*,G) and an
(S,G). What happened? We need to repeat the command.
R9#show ip mroute 224.1.1.1 | sec 224.1.1.1
(*, 224.1.1.1), 00:21:08/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:21:08/00:00:00
(172.16.15.1, 224.1.1.1), 00:00:04/00:02:55, flags: PT
Incoming interface: FastEthernet0/1, RPF nbr 172.16.79.7
Outgoing interface list: Null

Now both the (*,G) and the (S,G) entries are back. Is there a problem with our configuration?
No. We are observing the normal flood and prune behavior of PIM-DM. When a PIM-DM enabled router receives a multicast feed and has neither an IGMP join state for that feed nor a neighbor with a join state for it, the router sends a prune message back toward the source. This is evidenced by the output of the debug ip pim command on R9.
R9#debug ip pim
PIM debugging is on
R9#
PIM(0): Insert (172.16.15.1,224.1.1.1) prune in nbr 172.16.79.7's queue
PIM(0): Building Join/Prune packet for nbr 172.16.79.7
PIM(0): Adding v2 (172.16.15.1/32, 224.1.1.1) Prune
PIM(0): Send v2 join/prune to 172.16.79.7 (FastEthernet0/1)

Looking critically at the output of the debug command, we see that R9 built a Join/Prune packet. The router then added a Prune state to its own multicast routing table, and then sent the join/prune packet to its neighbor 172.16.79.7. This entire mechanism is known as pruning. Even without the debug commands, we can see in the multicast routing table whether a multicast feed is in a pruned state or not.
R9#show ip mroute 224.1.1.1 | sec 224.1.1.1
(*, 224.1.1.1), 02:09:53/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 02:09:53/00:00:00
(172.16.15.1, 224.1.1.1), 00:01:53/00:01:06, flags: PT
Incoming interface: FastEthernet0/1, RPF nbr 172.16.79.7
Outgoing interface list: Null


In this output, we see there is a field called "flags". For the (S,G) entry for 224.1.1.1 these flags are PT.
The flag legend of the show command tells us that the "P" flag means this multicast feed has been pruned.
Since there are no interested receivers in this topology, we expect the status of this feed to be pruned
on all routers between R1 and R9.
R7#show ip mroute 224.1.1.1 | sec 224.1.1.1
(*, 224.1.1.1), 04:27:39/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 04:27:39/00:00:00
FastEthernet0/0, Forward/Dense, 04:27:39/00:00:00
(172.16.15.1, 224.1.1.1), 00:03:25/00:02:42, flags: PT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Prune/Dense, 00:00:21/00:02:48
R6#show ip mroute 224.1.1.1 | sec 224.1.1.1
(*, 224.1.1.1), 04:28:18/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/1/0.1, Forward/Dense, 04:28:18/00:00:00
FastEthernet0/1, Forward/Dense, 04:28:18/00:00:00
FastEthernet0/0, Forward/Dense, 04:28:18/00:00:00
(172.16.15.1, 224.1.1.1), 04:28:18/00:02:03, flags: PT
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Prune/Dense, 00:01:00/00:01:59
Serial0/1/0.1, Prune/Dense, 00:01:04/00:01:58
R2#show ip mroute 224.1.1.1 | sec 224.1.1.1
(*, 224.1.1.1), 04:28:44/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Dense, 04:28:44/00:00:00
GigabitEthernet0/0, Forward/Dense, 04:28:44/00:00:00
(172.16.15.1, 224.1.1.1), 00:16:30/00:01:40, flags: PT
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Prune/Dense, 00:01:26/00:01:33
R4#show ip mroute 224.1.1.1 | sec 224.1.1.1
(*, 224.1.1.1), 04:29:13/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 04:29:13/00:00:00
FastEthernet0/0, Forward/Dense, 04:29:13/00:00:00
Serial0/0/0.1, Forward/Dense, 04:29:13/00:00:00
(172.16.15.1, 224.1.1.1), 00:22:59/00:01:09, flags: PT


Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.5


Outgoing interface list:
Serial0/0/0.1, Prune/Dense, 00:01:59/00:01:01, A
FastEthernet0/0, Prune/Dense, 00:01:55/00:01:04

Notice that the flags for each of these multicast routing table entries include P for pruned. Additionally, observe that the interfaces in the OIL all have a state/mode of Prune/Dense, as indicated by the value after the interface designator. This output tells us that the adjacent router on the segment has sent us a prune message, in effect informing the router that there are no interested devices on or beyond this segment.
As an additional test, we will have the FastEthernet0/1 interface of R6 join the multicast group
224.1.1.1.
R6(config)#interface FastEthernet0/1
R6(config-if)#ip igmp join-group 224.1.1.1
R6(config-if)#end

After implementing this command on R6, there will be a significant change in the multicast routing tables of the devices in the path from R1 to R6. Most notably, the state/mode value will change to Forward/Dense. This means that these devices are actively forwarding the multicast feed to R6, which can be seen by observing the contents of the multicast routing tables on R5, R4, R2, and R6.
R5#show ip mroute 224.1.1.1 | sec 224.1.1.1
(*, 224.1.1.1), 04:42:45/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 04:42:45/00:00:00
FastEthernet0/0, Forward/Dense, 04:42:45/00:00:00
(172.16.15.1, 224.1.1.1), 00:06:21/00:02:57, flags: T
Incoming interface: FastEthernet0/0, RPF nbr 172.16.15.1
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:03:54/00:00:00
R4#show ip mroute 224.1.1.1 | sec 224.1.1.1
(*, 224.1.1.1), 04:44:55/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 04:44:55/00:00:00
FastEthernet0/0, Forward/Dense, 04:44:55/00:00:00
Serial0/0/0.1, Forward/Dense, 04:44:55/00:00:00
(172.16.15.1, 224.1.1.1), 00:11:31/00:02:57, flags: T
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.5
Outgoing interface list:
Serial0/0/0.1, Prune/Dense, 00:02:19/00:00:41, A
FastEthernet0/0, Forward/Dense, 00:06:05/00:00:00


R2#show ip mroute 224.1.1.1 | sec 224.1.1.1


(*, 224.1.1.1), 04:45:13/stopped, RP 0.0.0.0, flags: DC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Dense, 04:45:13/00:00:00
GigabitEthernet0/0, Forward/Dense, 04:45:13/00:00:00
(172.16.15.1, 224.1.1.1), 00:32:59/00:02:51, flags: T
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Forward/Dense, 00:06:22/00:00:00

Once the multicast feed reaches R6 we will see that it is not forwarded beyond R6 because there are no
devices beyond R6 that have joined the multicast group 224.1.1.1. This is reflected by the flag of "P" and
the state/mode value of Prune/Dense found in the multicast routing table for the (S,G) pair.
R6#show ip mroute 224.1.1.1 | sec 224.1.1.1
(*, 224.1.1.1), 04:46:57/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/1/0.1, Forward/Dense, 04:46:57/00:00:00
FastEthernet0/1, Forward/Dense, 04:46:57/00:00:00
FastEthernet0/0, Forward/Dense, 04:46:57/00:00:00
(172.16.15.1, 224.1.1.1), 04:46:57/00:02:58, flags: PLTX
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Prune/Dense, 00:01:08/00:01:50
Serial0/1/0.1, Prune/Dense, 00:01:17/00:01:45

We should be seeing successful ICMP echo replies now on R1.


Reply to request 8837 from 172.16.26.6, 1 ms
Reply to request 8838 from 172.16.26.6, 1 ms
Reply to request 8839 from 172.16.26.6, 1 ms
Reply to request 8840 from 172.16.26.6, 1 ms
Reply to request 8841 from 172.16.26.6, 1 ms
Reply to request 8842 from 172.16.26.6, 1 ms
Reply to request 8843 from 172.16.26.6, 1 ms
Reply to request 8844 from 172.16.26.6, 1 ms
Reply to request 8845 from 172.16.26.6, 1 ms
Reply to request 8846 from 172.16.26.6, 1 ms
Reply to request 8847 from 172.16.26.6, 1 ms
Reply to request 8848 from 172.16.26.6, 1 ms
Reply to request 8849 from 172.16.26.6, 1 ms
Reply to request 8850 from 172.16.26.6, 1 ms
Reply to request 8851 from 172.16.26.6, 1 ms
Reply to request 8852 from 172.16.26.6, 1 ms

<output omitted>

This tells us that everything is working, but it is important to note that while successful ICMP echo replies tell us that the test is working, a lack of echo replies does not necessarily mean that the test is failing. Keep in mind that normal multicast operations are unidirectional in nature. Specifically, they will be unidirectional UDP packet flows travelling from the sender down to the receivers. Because this process uses UDP, it takes place without any explicit acknowledgement from any device in the multicast path. This means that in a normal multicast feed, the multicast application at the hosts will never reply to the source of the multicast stream. Though using pings and looking for echo replies is a useful tool, the important thing to observe is whether the multicast packets are actually arriving at the host, by using debug ip mpacket at the host. For instance, on R6 we see the packets arriving.
R6(config)#access-list 101 deny eigrp any any
R6(config)#access-list 101 permit ip any any
R6(config)#exit
R6#
%SYS-5-CONFIG_I: Configured from console by console
R6#
R6#debug ip mpacket detail list 101
IP multicast packets debugging is on (detailed) for access list 101
R6#
IP(0): MAC sa=000f.8f4a.1061 (FastEthernet0/1)
IP(0): IP tos=0x0, len=100, id=9132, ttl=251, prot=1
IP(0): s=172.16.15.1 (FastEthernet0/1) d=224.1.1.1 (FastEthernet0/0) id=9132, ttl=251, prot=1, len=100(100), mforward
R6#
IP(0): MAC sa=000f.8f4a.1061 (FastEthernet0/1)
IP(0): IP tos=0x0, len=100, id=9133, ttl=251, prot=1
IP(0): s=172.16.15.1 (FastEthernet0/1) d=224.1.1.1 (FastEthernet0/0) id=9133, ttl=251, prot=1, len=100(100), mforward
R6#
IP(0): MAC sa=000f.8f4a.1061 (FastEthernet0/1)
IP(0): IP tos=0x0, len=100, id=9138, ttl=251, prot=1
IP(0): s=172.16.15.1 (FastEthernet0/1) d=224.1.1.1 id=9138, ttl=251, prot=1, len=114(100), mroute olist null
R6#
IP(0): MAC sa=000f.8f4a.1061 (FastEthernet0/1)
IP(0): IP tos=0x0, len=100, id=9139, ttl=251, prot=1
IP(0): s=172.16.15.1 (FastEthernet0/1) d=224.1.1.1 id=9139, ttl=251, prot=1, len=114(100), mroute olist null
R6#
IP(0): MAC sa=000f.8f4a.1061 (FastEthernet0/1)
IP(0): IP tos=0x0, len=100, id=9140, ttl=251, prot=1
IP(0): s=172.16.15.1 (FastEthernet0/1) d=224.1.1.1 id=9140, ttl=251, prot=1, len=114(100), mroute olist null
R6#undebug all
All possible debugging has been turned off

This output demonstrates that multicast packets are arriving on R6 via the FastEthernet0/1 interface, sourced from the IP address 172.16.15.1 and destined to the multicast group 224.1.1.1. Also, observe the statement that reads "mroute olist null". This indicates that the outgoing interface list for the feed is empty: after the prune, no downstream interfaces remain interested in this feed, and R6 itself (which joined the group) consumes the packets locally rather than forwarding them on.
This leads us to the next question. What does the output of this command look like on a device in the
transit path where there should be an outbound interface in the OIL? We can see this by using debug ip
mpacket on a router in the path like R4.
R4(config)#access-list 101 deny eigrp any any
R4(config)#access-list 101 permit ip any any
R4(config)#end
%SYS-5-CONFIG_I: Configured from console by console
R4#debug ip mpacket detail list 101
IP multicast packets debugging is on (detailed) for access list 101

Oddly enough, no matter how long we wait, we see no output on this device, even though we know the multicast feed is transiting R4 to get to R6, as verified with the show ip mroute command.

R4#show ip mroute 224.1.1.1 | sec 224.1.1.1
(*, 224.1.1.1), 05:16:16/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 05:16:16/00:00:00
FastEthernet0/0, Forward/Dense, 05:16:16/00:00:00
Serial0/0/0.1, Forward/Dense, 05:16:16/00:00:00
(172.16.15.1, 224.1.1.1), 00:42:52/00:02:56, flags: T
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.5
Outgoing interface list:
Serial0/0/0.1, Prune/Dense, 00:03:00/00:00:00, A
FastEthernet0/0, Forward/Dense, 00:37:26/00:00:00

This confirms our expectation. The output verifies the forwarding of the multicast feed for 224.1.1.1 out the FastEthernet0/0 interface of R4. So what is blocking the results of the debug ip mpacket command? We cannot see the packets transit R4 because they are not being process switched, and the debug generates output only for process-switched packets. By default, only multicast traffic destined to or sourced from the router itself is process switched. Currently, multicast traffic on the FastEthernet interfaces of R4 is fast switched, as made evident via the show ip interface command.
R4#show ip interface FastEthernet0/0 | inc multicast
IP multicast fast switching is enabled
IP multicast distributed fast switching is disabled


R4#show ip interface FastEthernet0/1 | inc multicast


IP multicast fast switching is enabled
IP multicast distributed fast switching is disabled

Note that the output tells us that the router has fast switching enabled and distributed fast switching disabled. Distributed fast switching is hardware based, whereas regular fast switching is software based. The Cisco routers used in this topology do not support hardware switching.
What can we do to change this behavior? In order to disable multicast fast switching, we need to disable the multicast route-cache feature that is on by default on all interfaces. Doing so forces the router to process switch multicast packets.
R4(config)#interface FastEthernet0/0
R4(config-if)#no ip mroute-cache
R4(config-if)#interface FastEthernet0/1
R4(config-if)#no ip mroute-cache
R4(config-if)#end


Once this is accomplished, we will immediately begin to see output from the debug ip mpacket
command.

R4#
IP(0): MAC sa=0017.9486.c711 (FastEthernet0/1)
IP(0): IP tos=0x0, len=100, id=10022, ttl=253, prot=1
IP(0): s=172.16.15.1 (FastEthernet0/1) d=224.1.1.1 (FastEthernet0/0) id=10022, ttl=253, prot=1, len=100(100), mforward
R4#
IP(0): MAC sa=0017.9486.c711 (FastEthernet0/1)
IP(0): IP tos=0x0, len=100, id=10023, ttl=253, prot=1
IP(0): s=172.16.15.1 (FastEthernet0/1) d=224.1.1.1 (FastEthernet0/0) id=10023, ttl=253, prot=1, len=100(100), mforward
R4#
IP(0): MAC sa=0017.9486.c711 (FastEthernet0/1)
IP(0): IP tos=0x0, len=100, id=10024, ttl=253, prot=1
IP(0): s=172.16.15.1 (FastEthernet0/1) d=224.1.1.1 (FastEthernet0/0) id=10024, ttl=253, prot=1, len=100(100), mforward
R4#
IP(0): MAC sa=0017.9486.c711 (FastEthernet0/1)
IP(0): IP tos=0x0, len=100, id=10025, ttl=253, prot=1
IP(0): s=172.16.15.1 (FastEthernet0/1) d=224.1.1.1 (FastEthernet0/0) id=10025, ttl=253, prot=1, len=100(100), mforward


Now that the traffic is being process switched, we can see the multicast packets enter and leave R4. The
multicast feed is entering the FastEthernet0/1 interface and being multicast forwarded out the


FastEthernet0/0 interface. We can verify that we are process switching the multicast traffic with the
show ip interface command.

R4#show ip interface FastEthernet0/0 | inc multicast
IP multicast fast switching is disabled
IP multicast distributed fast switching is disabled
R4#show ip interface FastEthernet0/1 | inc multicast
IP multicast fast switching is disabled
IP multicast distributed fast switching is disabled


Notice that the router has now disabled multicast fast switching.

In this analysis of PIM-DM multicast operation, we have observed the flood-and-prune behavior used to propagate multicast packets, and we have looked at the valuable troubleshooting tools used to isolate specific behaviors within the multicast tree. This leaves only one aspect of PIM-DM to explore.
Reverse Path Forwarding
We have not introduced any errors or configuration issues into this topology. However, it is important to point out that there are currently RPF check failures occurring in the topology. We will observe this by re-enabling the ip mroute-cache capability on R4's interfaces. Once this is accomplished, we will execute the debug ip mpacket command, wait for the flood-and-prune process to run a few times, and then analyze the output of the debug command.

R4(config)#interface FastEthernet0/0
R4(config-if)#ip mroute-cache
R4(config-if)#interface FastEthernet0/1
R4(config-if)#ip mroute-cache
R4(config-if)#end
R4#
%SYS-5-CONFIG_I: Configured from console by console
R4#debug ip mpacket detail list 101
IP multicast packets debugging is on (detailed) for access list 101


It may take up to five minutes to get the following output.

R4#
IP(0): MAC sa=DLCI 406 (Serial0/0/0.1)
IP(0): IP tos=0x0, len=100, id=10599, ttl=250, prot=1
IP(0): s=172.16.15.1 (Serial0/0/0.1) d=224.1.1.1 id=10599, ttl=250, prot=1,
len=104(100), not RPF interface


After the flood and prune has run, we get this message. Why do we get output for transit traffic when we just discussed the need to disable fast switching? With multicast fast switching, the first packet of a feed is still process switched; therefore, we see the first packet that arrives during each flood refresh. The real issue with this output is the "not RPF interface" value.
We have discussed the fact that the PIM multicast routing topology forms with no concern for multicast
routing loops. Instead, PIM, in this instance PIM-DM, relies on the RPF check to prevent loops in the
multicast data plane. Here R4 is receiving an mpacket from R6 because there is a loop in the multicast
topology, but RPF prevents the looping of multicast packets by dropping packets that arrive on
interfaces that are not in the unicast path back to the source. The RPF check takes place as follows.
Once a multicast packet arrives on a given interface, the router immediately knows two things based on
the information in the packet: the source and group. This comprises the (S,G) entry in the multicast
routing table as illustrated by the show ip mroute command.
R4#show ip mroute 224.1.1.1 | sec 1, 224
(172.16.15.1, 224.1.1.1), 01:31:14/00:02:54, flags: T
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.5
Outgoing interface list:
FastEthernet0/0, Forward/Dense, 01:25:48/00:00:00
Serial0/0/0.1, Prune/Dense, 00:02:18/00:00:42, A

Using this information the router will perform a recursive lookup on the source IP address until it
determines the interface that would be used to reach the IP address of the source in its routing table.

R4#show ip route 172.16.15.1
Routing entry for 172.16.15.0/24
Known via "eigrp 100", distance 90, metric 30720, type internal
Redistributing via eigrp 100
Last update from 172.16.45.5 on FastEthernet0/1, 08:38:53 ago
Routing Descriptor Blocks:
* 172.16.45.5, from 172.16.45.5, 08:38:53 ago, via FastEthernet0/1
Route metric is 30720, traffic share count is 1
Total delay is 200 microseconds, minimum bandwidth is 100000 Kbit
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 1

The network containing 172.16.15.1 is reachable via the next-hop IP address 172.16.45.5. How will R4 reach 172.16.45.5?
R4#show ip route 172.16.45.5
Routing entry for 172.16.45.0/24
Known via "connected", distance 0, metric 0 (connected, via interface)
Redistributing via eigrp 100
Routing Descriptor Blocks:


* directly connected, via FastEthernet0/1


Route metric is 0, traffic share count is 1


This finally recurses to the physical interface FastEthernet0/1. This means that R4 will expect to receive
multicast packets inbound on the FastEthernet0/1 interface from a PIM neighbor with an IP address of
172.16.45.5. The router will drop any mpackets sourced from 172.16.15.1 not arriving on
FastEthernet0/1 as part of the multicast routing loop prevention mechanism. The reverse path
forwarding check is a normal part of the multicast environment. The important thing to learn when
troubleshooting multicast issues is how to determine when an RPF check failure is the cause of a
problem or a normal part of the multicast routing process.

On R4, a simple check to determine the RPF interface for a given source would be to use the show ip rpf
command.

R4#show ip rpf 172.16.15.1
RPF information for ? (172.16.15.1)
RPF interface: FastEthernet0/1
RPF neighbor: ? (172.16.45.5)
RPF route/mask: 172.16.15.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables

The output of this simple command provides us with a wealth of information. We see the multicast
source, the RPF interface for that source, as well as the IP address of the neighbor the router expects to
send the multicast packets. The output even tells us the unicast routing protocol used to perform the
RPF check.
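When troubleshooting reveals that the RPF interface derived from the unicast routing table is not the interface on which the feed actually arrives, the RPF lookup itself can be influenced without touching unicast routing. As a minimal sketch (the choice of the serial path and its next hop here is hypothetical, purely for illustration), a static multicast route on R4 would make RPF for the source network point at Serial0/0/0.1 instead:

R4(config)#ip mroute 172.16.15.0 255.255.255.0 172.16.46.6

Because the router is "Doing distance-preferred lookups across tables", a static mroute, with its default administrative distance of 0, would be preferred over the EIGRP-learned route when the RPF check runs, while unicast forwarding remains unchanged.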


Common Issues with PIM-DM


While not as problematic as other versions of PIM, PIM-DM has a number of issues that can surface
when deployed. The most common problems relate to the exchange of essential control plane
information. The control plane establishment in PIM-DM is as streamlined as its data plane process, and
when compared to its PIM-SM counterpart, PIM-DM is much easier to troubleshoot. For simplicity in
troubleshooting common issues while deploying PIM-DM, we identify three categories of problems:
Reverse Path Forwarding (RPF) failures, Hub and Spoke Designs, and Multicast Threshold problems.
RPF Failures
In the Troubleshooting PIM-DM section, this text discussed the phases of the PIM-DM operational mechanism. Since these mechanisms rely on messages communicated via multicast, they are all subject to Reverse Path Forwarding (RPF) checks. Logically, then, RPF issues can prevent optimal multicast routing or stop multicast forwarding entirely.
PIM-DM performs RPF checks in both the control and data plane.

Control Plane - The PIM-DM control plane is built from PIM messages. PIM sends these messages to the link-local multicast group 224.0.0.13, so they are subject to RPF checks. It is important to note that RPF checks in the control plane are performed against the source IP address of each PIM packet as it arrives. More often than not, this will be the IP address of the adjacent neighbor.

Data Plane - PIM-DM will perform RPF checks on each individual multicast packet before
deciding to forward it. This means that the source IP address of each multicast packet a router
receives must be reachable out the receiving interface before the router will forward it to an
adjacent neighbor. In PIM-DM, RPF always performs checks against the source of the multicast
feed.
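A quick way to test either case is to ask the router which interface it believes the RPF check resolves to, once for the feed's source and once for the neighbor sourcing the PIM messages (addresses drawn from the earlier topology, purely as an illustration):

R4#show ip rpf 172.16.15.1
R4#show ip rpf 172.16.45.5

If the RPF interface returned does not match the interface on which the multicast packets or PIM messages actually arrive, that traffic will be dropped.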

The RPF check mechanism can result in scenarios where the control plane fails to form correctly, or
multicast packets fail to transit the multicast tree. When only a few packets or no packets reach the
receivers, RPF failures will normally be the cause.
We will perform a walk through for each of these RPF issues in the PIM-DM Sample Troubleshooting
Scenarios section that follows.
Hub and Spoke Designs
It is important to remember the "split-horizon-like" behavior of PIM-DM. An interface cannot serve as the incoming interface and appear in the outgoing interface list (OIL) for the same multicast (S,G) pair at the same time, and there are no commands in PIM-DM that deactivate this behavior. So, in order to facilitate the forwarding of multicast packets in scenarios like multicast hub-and-spoke designs, where the hub may need to send a feed back out the interface on which it arrived, it is necessary to utilize other solutions. Solutions for these situations include, but are not limited to, tunnel interfaces or M-BGP, as sketched below.
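As a minimal sketch of the tunnel-based approach (all addressing here is hypothetical and chosen only for illustration), a GRE tunnel built between the hub and a spoke, with PIM-DM enabled on it, gives the router a second logical interface so that the incoming interface and the OIL no longer collide on the shared physical interface:

! Hub router - hypothetical addressing
interface Tunnel0
 ip address 10.99.99.1 255.255.255.0
 ip pim dense-mode
 tunnel source FastEthernet0/0
 tunnel destination 192.0.2.2

A mirror-image Tunnel0 would be configured on the spoke, and a static mroute (ip mroute) may also be required on each side so that the RPF check for sources behind the far end resolves to the tunnel rather than to the physical hub-and-spoke interface.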
Multicast Threshold Problems
Every multicast packet has a TTL value, just like its unicast IP counterpart. In many PIM-DM environments, this fact is used as a method to scope or contain multicast packets within the internal network: a multicast threshold is employed to keep multicast packets from leaking beyond the intended boundary. However, it is also possible to create a multicast routing fault by setting the multicast threshold on a given router interface.
If the packet's TTL is higher than the multicast threshold configured on an interface (and it passes the RPF check), the packet is forwarded. If the TTL of the packet is lower than the multicast threshold, the router drops the packet. The possible range for a multicast threshold value is 0 to 255, where 0 means all packets will be forwarded and 255 means virtually no packets will be forwarded.
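As an illustration, a border interface could be configured with a hypothetical threshold of 16, so that only packets arriving with a TTL greater than 16 are forwarded out that interface:

R2(config)#interface GigabitEthernet0/1
R2(config-if)#ip multicast ttl-threshold 16
R2(config-if)#end

Note why a threshold of 255 effectively blocks all multicast on an interface: a packet sourced with the maximum TTL of 255 arrives with a TTL of 254 or less after even one hop. We will encounter exactly this behavior again in the Trouble Tickets at the end of this chapter.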
In the PIM-DM Sample Troubleshooting Scenarios section that follows, troubleshooting of these issues is demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each symptom, isolate the cause, and remediate the issue.


PIM-DM Sample Troubleshooting Scenarios


This section provides a detailed look at how to best approach troubleshooting some of the common
issues discussed in previous sections. It includes coverage of a methodology for identification, isolation,
and remediation of faults in the PIM-DM operational process. The intent here is to hone and develop
troubleshooting skills tailored to first identify if a problem exists, and then how to begin isolating the
cause of the fault in the most efficient manner possible. Figure 3-4 illustrates the topology used to
explore this topic.

Figure 3-4: A Sample PIM-DM Topology

In the Common Issues with PIM-DM section, three primary types of problems were identified: RPF
failures, Hub and Spoke Designs, and Multicast Threshold problems. This section explores these three
categories of failure by directing our attention to the commands necessary to verify a problem, isolate it
and remediate it.
Fault isolation in PIM-DM
Setting the stage: R9 will join the multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end

Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the
group 224.9.9.9?
By generating a ping on R1 to the group 224.9.9.9 R1 can emulate a multicast feed:
R1#ping 224.9.9.9 repeat 100000000


Type escape sequence to abort.


Sending 100000000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
.......... <output omitted>

The output from the ping command is unsuccessful.


Troubleshooting Method One: Follow the multicast feed hop-by-hop.
On R5 look at the output of show ip mroute:
R5#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:01:44/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Dense, 00:01:44/00:00:00
(172.16.15.1, 224.9.9.9), 00:01:44/00:01:15, flags: PT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.15.1
Outgoing interface list: Null

Are there interfaces in the OIL for the (172.16.15.1, 224.9.9.9) pair? The output shows the OIL as "Null", indicating that there are no outgoing interfaces. Looking at the output of the show ip mroute count command for the group 224.9.9.9, we can see what is happening:
R5#show ip mroute 224.9.9.9 count
IP Multicast Statistics
3 routes using 1548 bytes of memory
2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts(neg(-) = Drops) per second/Avg Pkt Size/Kilobits per
second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)
Group: 224.9.9.9, Source count: 1, Packets forwarded: 0, Packets received: 86
Source: 172.16.15.1/32, Forwarding: 0/-1/0/0, Other: 86/0/86


This output tells us many things. First, we know we are dropping about one packet per second; this can be seen under the Forwarding Counts, where the negative value (-1) indicates drops per second. Next, we know that we do not have an RPF check issue, because the second field in the Other counts category is 0. Lastly, we see that we have received 86 multicast packets for the group 224.9.9.9 and have dropped all 86 of them for "Other" reasons. The command even points out some common causes (OIF-null, rate-limit, etc.).
Interpreting this output is simple. We have no interfaces in the OIL for this group. The reason will be
revealed when we look at the output of show ip pim neighbors on R5.
R5#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.15.1       FastEthernet0/0          01:06:25/00:01:16 v2    1 / S

Immediately, we can see there is no PIM-DM neighbor relationship with R4. Logically, the next step is to verify which interfaces are participating in PIM-DM on R5:

R5#show ip pim interface
Address          Interface                Ver/   Nbr    Query  DR       DR
                                          Mode   Count  Intvl  Prior
172.16.15.5      FastEthernet0/0          v2/D   1      30     1        172.16.15.5
172.16.45.5      FastEthernet0/1          v2/D   0      30     1        172.16.45.5

R5 is running PIM-DM on the FastEthernet0/1 interface toward R4. What about R4? Is it running PIM-DM on its interface toward R5?

R4#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.24.2       FastEthernet0/0          01:53:26/00:01:26 v2    1 / S
172.16.46.6       Serial0/0/0.1            01:52:41/00:01:43 v2    1 / S

R4 is not running PIM-DM on its FastEthernet0/1 interface. To correct this issue the ip pim dense-mode
command will be applied:

R4(config)#interface FastEthernet0/1
R4(config-if)#ip pim dense-mode


R4(config-if)#end
R4#
%PIM-5-NBRCHG: neighbor 172.16.45.5 UP on interface FastEthernet0/1
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 172.16.45.5 on interface
FastEthernet0/1
%SYS-5-CONFIG_I: Configured from console by console

The PIM neighbor relationship comes up with R5.



Is the ping successful on R1?

R1#
.......... <output omitted>


It is not successful. Are there interfaces in the OIL for the 224.9.9.9 S,G pair on R5 now?

R5#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 01:08:25/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:05:16/00:00:00
FastEthernet0/0, Forward/Dense, 01:08:25/00:00:00
(172.16.15.1, 224.9.9.9), 00:05:25/00:02:59, flags: T
Incoming interface: FastEthernet0/0, RPF nbr 172.16.15.1
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:05:17/00:00:00

R5 now has FastEthernet0/1 in the OIL of the (S,G) entry. Are multicast packets being forwarded to R4? We will clear the multicast routing table before we take a look at the counters.

R5#clear ip mroute *


R5#show ip mroute 224.9.9.9 count


IP Multicast Statistics
3 routes using 2046 bytes of memory
2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts(neg(-) = Drops) per second/Avg Pkt Size/Kilobits per
second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)
Group: 224.9.9.9, Source count: 1, Packets forwarded: 28, Packets received: 28
Source: 172.16.15.1/32, Forwarding: 28/1/100/0, Other: 28/0/0

The router has forwarded 28 and received 28 packets.



R5 is working but the pings on R1 are still not successful. This means that there is another issue
somewhere in the multicast routing tree. We will locate it by moving to the next device in the multicast
topology. What is the status of the multicast routing table on R4?

R4#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:17:30/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:17:30/00:00:00
Serial0/0/0.1, Forward/Dense, 00:17:30/00:00:00
FastEthernet0/0, Forward/Dense, 00:17:30/00:00:00
(172.16.15.1, 224.9.9.9), 00:17:30/00:02:57, flags: T
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.5
Outgoing interface list:
FastEthernet0/0, Prune/Dense, 00:01:43/00:01:26
Serial0/0/0.1, Forward/Dense, 00:17:31/00:00:00


We see the (S,G) entry for the pair, and there are two interfaces in the OIL. It is important to observe that FastEthernet0/0 and Serial0/0/0.1 have different interface states. FastEthernet0/0 is operating in


PIM-DM mode, but has a state of Pruned. We know this means no device on this segment has joined or knows of a member of the group 224.9.9.9. The interface Serial0/0/0.1, however, is in PIM-DM and in a forwarding state. Seeing that R4 is forwarding out this interface, the next logical step is to follow this link to the next adjacent PIM-DM neighbor, which we can find with show ip pim neighbor:

R4#show ip pim neighbor Serial0/0/0.1
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.46.6       Serial0/0/0.1            02:18:56/00:01:32 v2    1 / S


The next device we need to look at is R6 (172.16.46.6) according to this output. We need to look at the
multicast routing table on this router now:

R6#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:26:29/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/1/0.1, Forward/Dense, 00:26:29/00:00:00
FastEthernet0/0, Forward/Dense, 00:26:29/00:00:00
(172.16.15.1, 224.9.9.9), 00:02:29/00:00:30, flags:
Incoming interface: Null, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Dense, 00:02:30/00:00:00
Serial0/1/0.1, Forward/Dense, 00:02:30/00:00:00

We have interfaces in the OIL for the 224.9.9.9 (S,G) pair, and they are both operating in PIM-DM and in the forwarding state. However, something very important is missing in this output: the incoming interface is Null. This means R6 is learning the (S,G) entry for 224.9.9.9 but is not learning it from its


RPF neighbor. What is the RPF neighbor? The output demonstrates that R6 expects to learn about this
multicast source from R2 (172.16.26.2). This means that R6 is expecting to receive the multicast stream
sourced from R1's 172.16.15.1 unicast address on what interface?
R6#show ip route 172.16.15.1
Routing entry for 172.16.15.0/24
Known via "eigrp 100", distance 90, metric 35840, type internal
Redistributing via eigrp 100
Last update from 172.16.26.2 on FastEthernet0/1, 02:59:16 ago
Routing Descriptor Blocks:
* 172.16.26.2, from 172.16.26.2, 02:59:16 ago, via FastEthernet0/1
Route metric is 35840, traffic share count is 1
Total delay is 400 microseconds, minimum bandwidth is 100000 Kbit
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 3

Notice that 172.16.15.1 is reachable via the IP address 172.16.26.2 on FastEthernet0/1. This is the interface on which R6 expects to receive the stream. Is R6 forwarding multicast packets for the group 224.9.9.9?

R6#show ip mroute 224.9.9.9 count
IP Multicast Statistics
3 routes using 2170 bytes of memory
2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts(neg(-) = Drops) per second/Avg Pkt Size/Kilobits per
second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)
Group: 224.9.9.9, Source count: 1, Packets forwarded: 0, Packets received: 87
Source: 172.16.15.1/32, Forwarding: 0/0/0/0, Other: 87/87/0


R6 is not forwarding packets. Observe the Other Count section of the output. This tells us that 87
packets have arrived and that 87 packets were dropped because they failed the RPF check. At this point,
we know that the RPF interface should be FastEthernet0/1. Are multicast packets arriving on this
interface?

R6#show ip mroute interface FastEthernet0/1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group


Outgoing interface flags: H - Hardware switched, A - Assert winner


Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
R6#


There are no packets arriving on this interface. Are there any PIM neighbors out this interface?

R6#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.67.7       FastEthernet0/0          10:05:05/00:01:35 v2    1 / DR S
172.16.46.4       Serial0/1/0.1            10:05:21/00:01:38 v2    1 / S

FastEthernet0/1 has no neighbors. Is FastEthernet0/1 running PIM-DM?



R6#show ip pim interface
Address          Interface                Ver/   Nbr    Query  DR       DR
                                          Mode   Count  Intvl  Prior
172.16.67.6      FastEthernet0/0          v2/D   1      30     1        172.16.67.7
172.16.46.6      Serial0/1/0.1            v2/D   1      30     1        0.0.0.0

FastEthernet0/1 is not participating in PIM-DM. If we apply ip pim dense-mode to FastEthernet0/1, will it correct this issue?

R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#interface FastEthernet 0/1
R6(config-if)#ip pim dense-mode
R6(config-if)#end
R6#
%PIM-5-NBRCHG: neighbor 172.16.26.2 UP on interface FastEthernet0/1
%SYS-5-CONFIG_I: Configured from console by console
R6#
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 172.16.26.6 on interface
FastEthernet0/1

Observe that the neighbor 172.16.26.2 has come "UP". If this has corrected the issue on R6, we should now see FastEthernet0/1 as the incoming interface for the 224.9.9.9 (S,G) pair.

R6#show ip mroute 224.9.9.9


IP Multicast Routing Table


Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 08:19:47/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:03:08/00:00:00
Serial0/1/0.1, Forward/Dense, 08:19:47/00:00:00
FastEthernet0/0, Forward/Dense, 08:19:47/00:00:00
(172.16.15.1, 224.9.9.9), 00:04:47/00:02:59, flags: T
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Dense, 00:04:48/00:00:00
Serial0/1/0.1, Prune/Dense, 00:00:04/00:02:58

Observe that the interface is now the incoming interface, as we expected. We also see that the FastEthernet0/0 interface is in the OIL, operating in PIM-DM mode and in the forwarding state. Are packets being forwarded?

R6#clear ip mroute *
R6#show ip mroute 224.9.9.9 count
IP Multicast Statistics
3 routes using 2554 bytes of memory
2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts(neg(-) = Drops) per second/Avg Pkt Size/Kilobits per
second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)
Group: 224.9.9.9, Source count: 1, Packets forwarded: 16, Packets received: 16
Source: 172.16.15.1/32, Forwarding: 16/1/100/0, Other: 16/0/0

This output tells us that each of the 16 packets R6 has received has been forwarded; we used the clear ip mroute * command beforehand to reset the counters. The multicast issues on R6 have been remediated. Are the pings from R1 working now?

R1#

Reply to request 17230 from 172.16.79.9, 1 ms
Reply to request 17231 from 172.16.79.9, 1 ms
Reply to request 17232 from 172.16.79.9, 1 ms
Reply to request 17233 from 172.16.79.9, 1 ms
Reply to request 17234 from 172.16.79.9, 1 ms
Reply to request 17235 from 172.16.79.9, 1 ms
Reply to request 17236 from 172.16.79.9, 1 ms
Reply to request 17237 from 172.16.79.9, 1 ms
Reply to request 17238 from 172.16.79.9, 1 ms
Reply to request 17239 from 172.16.79.9, 1 ms
<output omitted>


This output demonstrates that R9 is now receiving the multicast feed sent from R1. This configuration
had multiple issues that required the hop-by-hop verification. Other methods exist for isolating the
problematic areas in a multicast routing topology.

Troubleshooting Method Two: Using mtrace to isolate the location of a single fault
The hop-by-hop method of fault isolation lends itself readily to environments where there may be
multiple issues associated with the multicast topology failing to operate. There are two primary tools
that can be used to quickly identify the location where a multicast fault may exist. In this section the
mtrace utility will be used to isolate a fault. In this section R9 is still joined to the group 224.9.9.9, and
R1 has failed to successfully send pings to that address. This test can be done with any multicast group
and does not require the receiver to have actually joined the group used.
Step One: Use mtrace on R1 to isolate the place in the topology where the multicast stream fails.
R1#mtrace 172.16.15.1 172.16.79.9 224.1.1.1
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 224.1.1.1
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM [172.16.15.0/24]
-2 * 172.16.79.7 None Multicast disabled [172.16.15.0/24]


This output demonstrates that the issue exists on R7. Specifically, the command notifies us that PIM is not enabled on the interface with the IP address 172.16.79.7, which indicates where to begin troubleshooting. The next step is to go to R7 and check whether the interface connected to R9 is running PIM-DM. This can be verified by running show ip pim neighbor.

R7#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.67.6       FastEthernet0/0          00:15:29/00:01:32 v2    1 / S
R7#

The interface connected to R9 is not running PIM-DM. This can be remediated by applying the ip pim dense-mode command under interface FastEthernet0/1.

R7(config)#interface FastEthernet0/1
R7(config-if)#ip pim dense-mode
R7(config-if)#end
%PIM-5-NBRCHG: neighbor 172.16.79.9 UP on interface FastEthernet0/1
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 172.16.79.9 on interface
FastEthernet0/1

The PIM-DM neighbor comes up. As verification, the mtrace command will be used again on R1.

R1#mtrace 172.16.15.1 172.16.79.9 224.1.1.1
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 224.1.1.1
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM [172.16.15.0/24]
-2 * 172.16.79.7 PIM [172.16.15.0/24]
-3 * 172.16.67.6 PIM [172.16.15.0/24]
-4 * 172.16.26.2 PIM [172.16.15.0/24]
-5 * 172.16.24.4 PIM [172.16.15.0/24]
-6 * 172.16.45.5 PIM [172.16.15.0/24]
-7 * 172.16.15.1 PIM [172.16.15.0/24]

The output clearly demonstrates that the path between R1 and R9 no longer has any issues. As a final
verification, the ping test will be repeated.
R1#ping 224.9.9.9 repeat 5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 8 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms

As expected everything works.


PIM-DM show Command Tools


As a quick reference, here are the show command tools utilized in this chapter. This section utilizes the
PIM topology in Figure 3-5 for all example output.

Figure 3-5: A Sample PIM-DM Topology

show COMMAND:
show ip igmp membership [group-address | group-name] [tracked] [all]
This command displays Internet Group Management Protocol (IGMP) membership information for
multicast groups and (S, G) channels.
Where:

group-address - optional; specifies the specific multicast group address
tracked - optional; displays the multicast groups with the explicit tracking feature enabled
all - optional; displays detailed information about the multicast groups with and without the
explicit tracking feature enabled

EXAMPLE OUTPUT:
R9#show ip igmp membership
Flags: A - aggregate, T - tracked
       L - Local, S - static, V - virtual, R - Reported through v3
       I - v3lite, U - Urd, M - SSM (S,G) channel
       1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
       / - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
       <mac-or-ip-address> - last reporter if group is not explicitly tracked
       <n>/<m>      - <n> reporter in include mode, <m> reporter in exclude

Channel/Group        Reporter        Uptime    Exp.   Flags  Interface
*,224.9.9.9          172.16.79.9     00:24:29  02:25  2LA    Fa0/1
*,224.0.1.40         172.16.79.9     00:24:29  02:31  2LA    Fa0/1


show COMMAND:
show ip mroute
This command displays the contents of the multicast routing (mroute) table.
EXAMPLE OUTPUT:
R7#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:54:11/stopped, RP 0.0.0.0, flags: DC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Dense, 00:16:16/00:00:00
FastEthernet0/1, Forward/Dense, 00:54:11/00:00:00
(172.16.79.7, 224.9.9.9), 00:00:06/00:02:53, flags: PT
Incoming interface: FastEthernet0/1, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Prune/Dense, 00:00:07/00:02:52
(*, 224.0.1.40), 00:54:15/00:02:50, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:54:14/00:00:00
FastEthernet0/0, Forward/Dense, 00:54:15/00:00:00



show COMMAND:
show ip pim interface


This command displays information about interfaces configured for Protocol Independent Multicast
(PIM).
EXAMPLE OUTPUT:
R7#show ip pim interface
Address          Interface                Ver/   Nbr    Query  DR       DR
                                          Mode   Count  Intvl  Prior
172.16.67.7      FastEthernet0/0          v2/D   1      30     1        172.16.67.7
172.16.79.7      FastEthernet0/1          v2/D   1      30     1        172.16.79.9

show COMMAND:
show ip pim [vrf vrf-name] neighbor [interface-type interface-number]
This command displays information about Protocol Independent Multicast (PIM) neighbors discovered
by PIM version 1 router query messages or PIM version 2 hello messages.
Where:

vrf vrf-name - optional; specifies the name of the multicast VRF instance
interface-type interface-number - optional; restricts the output to information about PIM
neighbors reachable on the specified interface

EXAMPLE OUTPUT:
R5#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.15.1       FastEthernet0/0          00:26:04/00:01:16 v2    1 / S
172.16.45.4       FastEthernet0/1          00:22:39/00:01:43 v2    1 / S


show COMMAND:
show ip rpf [vrf vrf-name] {route-distinguisher | source-address [group-address] [rd route-distinguisher]} [metric]
This command displays information that IP multicast routing uses to perform the Reverse Path Forwarding (RPF) check for a multicast source.
Where:

vrf vrf-name - optional; specifies the name of the multicast VRF instance


route-distinguisher - Route distinguisher (RD) of a VPNv4 prefix; entering the route-distinguisher
argument displays RPF information related to the specified VPN route
source-address - IP address or name of a multicast source for which to display RPF information
group-address - optional; IP address or name of a multicast group for which to display RPF information
rd route-distinguisher - optional; displays the Border Gateway Protocol (BGP) RPF next hop for the
VPN route associated with the RD specified for the route-distinguisher argument
metric - optional; displays the unicast routing metric

EXAMPLE OUTPUT:
R5#show ip rpf 192.1.2.2
RPF information for ? (192.1.2.2)
RPF interface: FastEthernet0/1
RPF neighbor: ? (172.16.45.4)
RPF route/mask: 192.1.2.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables

PIM-DM debug Command Tools


As a quick reference, here are the debug command tools utilized in this chapter. This section utilizes the
PIM-DM topology in Figure 3-6 for all example output.

Figure 3-6: A Sample PIM-DM Topology

debug COMMAND:
debug ip mpacket [vrf vrf-name] [detail | fastswitch] [access-list] [group]


This command displays multicast packets that are received and sent on the device.
Where:

vrf vrf-name - optional; specifies the name of the multicast VRF instance
detail - optional; displays IP header and MAC information
fastswitch - optional; displays IP packet information in the fast path
access-list - optional; restricts the output per the specified access-list

EXAMPLE OUTPUT:
IP(0): s=172.16.24.4 (FastEthernet0/0) d=224.9.9.9 id=7, ttl=254, prot=1, len=114(100), mroute olist null
IP(0): s=172.16.24.4 (FastEthernet0/0) d=224.9.9.9 id=8, ttl=254, prot=1, len=114(100), mroute olist null
IP(0): s=172.16.24.4 (FastEthernet0/0) d=224.9.9.9 id=9, ttl=254, prot=1, len=114(100), mroute olist null


debug COMMAND:
debug ip pim [vrf vrf-name] [bsr]
This command displays Protocol Independent Multicast (PIM) packets received and sent, and displays PIM-related events.
Where:

vrf vrf-name - optional; specifies the name of the multicast VRF instance

EXAMPLE OUTPUT:
R7#debug ip pim
PIM debugging is on
PIM(0): Received v2 Join/Prune on FastEthernet0/0 from 172.16.67.6, to us
PIM(0): Prune-list: (172.16.79.7/32, 224.9.9.9)
PIM(0): Prune FastEthernet0/0/224.9.9.9 from (172.16.79.7/32, 224.9.9.9)
IP(0): s=172.16.79.7 (FastEthernet0/1) d=224.9.9.9 id=8, ttl=254, prot=1,
len=114(100), mroute olist null


Chapter Challenge: PIM-DM Sample Trouble Tickets


The following section includes three sample Trouble Tickets designed to challenge the troubleshooting
skills that have been developed in all previous sections of this chapter. These Trouble Tickets were
designed using the Routing & Switching rental racks at www.ProctorLabs.com with the initial
configurations provided in the file MCAST-CH3-PIM-DM-TT-INITIAL.txt. Keep in mind these sample
Trouble Tickets were also tested against home practice racks and the most popular router emulators.
The network topology used in this section is shown in Figure 3-7 below:

Figure 3-7: The Chapter Challenge Topology

Trouble Ticket #1
Your supervisor has brought to your attention that users on the 192.1.2.0/24 network cannot receive
the multicast feed from the source R1. The feed should be destined to the multicast address 224.2.2.2.
You must correct the issue.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that users on the network 192.1.6.0/24
cannot receive traffic destined to the multicast group 224.6.6.6. You must correct this issue.
Trouble Ticket #3
Your supervisor has notified you that users on the network 172.16.79.0/24 are not receiving multicast
feeds. Use the multicast group 224.9.9.9 to verify this task. You must correct this issue.


Chapter Challenge: PIM-DM Sample Trouble Tickets Solutions


The following section includes the solutions to the three Trouble Tickets presented in the previous
section. Figure 3-8 provides a flowchart that outlines a "quick fire" approach to isolating and
remediating issues associated with PIM-DM.


Figure 3-8: PIM-DM Quick Fire Troubleshooting Flowchart


Trouble Ticket #1 Solution
Your supervisor has brought to your attention that users on the 192.1.2.0/24 network cannot receive
the multicast feed from the source R1. The feed should be destined to the multicast address 224.2.2.2.
You must correct the issue.

Step 1 - Fault Verification:
Does R2 reply to pings to the multicast group 224.2.2.2:
R1#ping 224.2.2.2 repeat 5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 224.2.2.2, timeout is 2 seconds:
.....


The pings are not successful. This verifies that the problem actually exists.

Step 2 - Fault Isolation:
The next course of action is to use the mtrace utility to rule out the possibility of an RPF issue. Make
certain to perform this process in both directions, first from R2 toward R7, then from R7 toward R2.

R1#mtrace 172.16.15.1 192.1.2.2
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 192.1.2.2 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.2.2
-1 172.16.24.2 PIM [172.16.15.0/24]
-2 172.16.24.4 None No route


This output indicates that the multicast traffic is stopping on R4. The show ip pim neighbor command
will tell us whether or not we have PIM-DM relationships with the neighboring PIM-DM enabled routers.

R4#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.24.2       FastEthernet0/0          1d09h/00:01:15    v2    1 / S
172.16.46.6       Serial0/0/0.1            1d09h/00:01:30    v2    1 / S

The verification clearly demonstrates that R4 has no neighbor relationship with R5 via FastEthernet0/1. This most likely means that the ip pim dense-mode command is missing on either R4 or R5. The show ip pim interface command will quickly reveal on which device.
R4#show ip pim interface
Address          Interface                Ver/   Nbr    Query  DR       DR
                                          Mode   Count  Intvl  Prior
172.16.24.4      FastEthernet0/0          v2/D   1      30     1        172.16.24.4
172.16.46.4      Serial0/0/0.1            v2/D   1      30     1        0.0.0.0

The ip pim dense-mode command is missing under the FastEthernet0/1 interface, thus blocking the multicast traffic. This has unquestionably isolated our fault.

Step 3 - Fault Remediation:


In this scenario, the ip pim dense-mode command should be applied under the interface, as we have done in the past.

R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#interface FastEthernet0/1
R4(config-if)#ip pim dense-mode
R4(config-if)#end


Step 4 - Verification of Remediation
Once the error has been isolated and remediated it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method of the initial fault verification.

R1#ping 224.2.2.2 repeat 5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 224.2.2.2, timeout is 2 seconds:
Reply to request 0 from 172.16.24.2, 4 ms
Reply to request 1 from 172.16.24.2, 4 ms
Reply to request 2 from 172.16.24.2, 1 ms
Reply to request 3 from 172.16.24.2, 1 ms
Reply to request 4 from 172.16.24.2, 1 ms


The issue has been corrected.
Trouble Ticket #2 Solution
After solving Trouble Ticket #1, your supervisor has observed that users on the network 192.1.6.0/24 cannot receive traffic destined to the multicast group 224.6.6.6. You must correct this issue.
Step 1 - Fault Verification:
Can R1 ping the group 224.6.6.6 successfully:
R1#ping 224.6.6.6 repeat 5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 224.6.6.6, timeout is 2 seconds:
.....


The ping test to the multicast group 224.6.6.6 fails. This verifies that the problem actually exists.

Step 2 - Fault Isolation:
In order to verify that RPF issues are not at fault, use the mtrace utility.



R1#mtrace 172.16.15.1 192.1.6.6
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 192.1.6.6 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.6.6
-1 172.16.26.6 PIM [172.16.15.0/24]
-2 172.16.26.2 PIM [172.16.15.0/24]
-3 172.16.24.4 PIM [172.16.15.1/32]
-4 172.16.45.5 PIM [172.16.15.0/24]
-5 172.16.15.1


There seem to be no issues involving multicast RPF failures. With this observation, we can next check whether the issue is TTL-threshold induced, using the mstat command:

R1#mstat 172.16.15.1 192.1.6.6
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 192.1.6.6 via RPF
From source (?) to destination (?)
Waiting to accumulate statistics......
Results after 10 seconds:

  Source        Response Dest    Packet Statistics For     Only For Traffic
172.16.15.1     172.16.15.1      All Multicast Traffic     From 172.16.15.1
     |       __/  rtt 0    ms    Lost/Sent = Pct  Rate     To 0.0.0.0
     v      /     hop 169  s     ---------------------     --------------------
172.16.15.5
172.16.45.5     ?
     |     ^      ttl   0
     v     |      hop -213 s     -2/0 = --%  0 pps         0/0 = --%  0 pps
172.16.45.4
172.16.24.4     ?
     |     ^      ttl   1
     v     |      hop -244 s     -3/0 = --%  0 pps         0/0 = --%  0 pps
172.16.24.2
172.16.26.2     ?
     |     ^      ttl   255
     v     |      hop -66 s      -2/0 = --%  0 pps         0/0 = --%  0 pps
172.16.26.6     ?
     |      \__   ttl   256
     v         \  hop -169 s     0 pps                     0 pps
192.1.6.6
172.16.15.1     Receiver         Query Source


Observe that the TTL values reported at each hop progress in the pattern 0, 1, 255, 256. The TTL requirement jumps massively at R2, specifically on the interface with the IP address 172.16.26.2 (GigabitEthernet0/1). The show run interface GigabitEthernet0/1 command will reveal any interface-specific commands that may cause this.


R2#show run interface GigabitEthernet0/1


Building configuration...
Current configuration : 147 bytes
!
interface GigabitEthernet0/1
ip address 172.16.26.2 255.255.255.0
ip pim dense-mode
ip multicast ttl-threshold 255
duplex auto
speed auto
end


We see the ip multicast ttl-threshold 255 command, which is causing the issue associated with this trouble ticket. This has isolated our fault.

Step 3 - Fault Remediation:
In this scenario, the ip multicast ttl-threshold 255 command needs to be removed from R2's GigabitEthernet0/1 interface:

R2#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#interface GigabitEthernet0/1
R2(config-if)#no ip multicast ttl-threshold 255
R2(config-if)#end


Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially:

R1#ping 224.6.6.6 repeat 5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 224.6.6.6, timeout is 2 seconds:
Reply to request 0 from 172.16.26.6, 4 ms
Reply to request 1 from 172.16.26.6, 1 ms
Reply to request 2 from 172.16.26.6, 1 ms
Reply to request 3 from 172.16.26.6, 1 ms
Reply to request 4 from 172.16.26.6, 1 ms


The issue has been corrected.


Trouble Ticket #3 Solution


Your supervisor has notified you that users on the network 172.16.79.0/24 are not receiving multicast
feeds. Use the multicast group 224.9.9.9 to verify this task. You must correct this issue.
Step 1 - Fault Verification:
Are pings destined for the group 224.9.9.9 successful on R1:
R1#ping 224.9.9.9 repeat 5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
.....


The pings are not successful. This verifies that the problem actually exists.

Step 2 - Fault Isolation:
To ensure that the issue is not RPF related, use the mtrace utility.

R1#mtrace 172.16.15.1 172.16.79.9
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 172.16.79.9
-1 172.16.79.9 PIM [172.16.15.0/24]
-2 172.16.79.7 PIM [172.16.15.0/24]
-3 172.16.67.6 PIM [172.16.15.0/24]
-4 172.16.26.2 PIM [172.16.15.0/24]
-5 172.16.24.4 PIM [172.16.15.0/24]
-6 172.16.45.5 PIM [172.16.15.0/24]
-7 172.16.15.1


There are no RPF issues in the multicast routing path. Now use the mstat utility to verify that the issue is
not a TTL-Threshold problem:

R1#mstat 172.16.15.1 172.16.79.9
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via RPF
From source (?) to destination (?)
Waiting to accumulate statistics......
Results after 10 seconds:

  Source        Response Dest    Packet Statistics For     Only For Traffic
172.16.15.1     172.16.15.1      All Multicast Traffic     From 172.16.15.1
     |       __/  rtt 3    ms    Lost/Sent = Pct  Rate     To 0.0.0.0
     v      /     hop 244  s     ---------------------     --------------------
172.16.15.5
172.16.45.5     ?
     |     ^      ttl   0
     v     |      hop 235  s     -2/0 = --%  0 pps         0/0 = --%  0 pps
172.16.45.4
172.16.24.4     ?
     |     ^      ttl   1
     v     |      hop -244 s     -2/0 = --%  0 pps         0/0 = --%  0 pps
172.16.24.2
172.16.26.2     ?
     |     ^      ttl   2
     v     |      hop -66 s      -2/0 = --%  0 pps         0/0 = --%  0 pps
172.16.26.6
172.16.67.6     ?
     |     ^      ttl   3
     v     |      hop 85 s       -3/0 = --%  0 pps         0/0 = --%  0 pps
172.16.67.7
172.16.79.7     ?
     |     ^      ttl   4
     v     |      hop -9 s       -2/0 = --%  0 pps         0/0 = --%  0 pps
172.16.79.9     ?
     |      \__   ttl   5
     v         \  hop -244 s     0 pps                     0 pps
172.16.79.9
172.16.15.1     Receiver         Query Source


The TTL values increment normally (0, 1, 2, 3, 4, and 5). This means that there are no TTL-threshold issues.
Having ruled this out, the only tool left to us is the hop-by-hop technique. Ping the multicast group
224.9.9.9 with a very high repeat count:

R1#ping 224.9.9.9 repeat 500000
Type escape sequence to abort.
Sending 500000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
........... <output omitted>


To demonstrate how to streamline the hop-by-hop process, we will use show ip mroute on all devices
between R1 and R9, looking to see if the (S,G) entry for (172.16.15.1, 224.9.9.9) has incoming interfaces,
and outgoing interfaces in the forward/dense state, beginning with R5:
R5#show ip mroute 224.9.9.9 | sec .1, 224.9
(172.16.15.1, 224.9.9.9), 00:03:03/00:02:51, flags: T
Incoming interface: FastEthernet0/0, RPF nbr 172.16.15.1
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:03:03/00:00:00


Now R4:
R4#show ip mroute 224.9.9.9 | sec .1, 224.9
(172.16.15.1, 224.9.9.9), 00:04:00/00:02:56, flags: T
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.5
Outgoing interface list:
Serial0/0/0.1, Prune/Dense, 00:00:55/00:02:04, A
FastEthernet0/0, Forward/Dense, 00:04:00/00:00:00

Now R2:
R2#show ip mroute 224.9.9.9 | sec .1, 224.9
(172.16.15.1, 224.9.9.9), 00:04:00/00:02:55, flags: T
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Forward/Dense, 00:04:00/00:00:00

Now R6:
R6#show ip mroute 224.9.9.9 | sec .1, 224.9
(172.16.15.1, 224.9.9.9), 00:04:00/00:03:02, flags: T
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Dense, 00:04:00/00:00:00
Serial0/1/0.1, Prune/Dense, 00:00:56/00:02:06

Then R7:

R7#show ip mroute 224.9.9.9 | sec .1, 224.9
(172.16.15.1, 224.9.9.9), 00:01:00/00:01:59, flags: T
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Int Limit 0 kbps
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:01:00/00:00:00

We see that the routers between R1 and R9 are all forwarding the multicast traffic for the group
224.9.9.9. However, on R7 we see something new in the show ip mroute output. Observe that the
incoming interface for the S,G pair states that multicast rate-limiting has been applied. The interface is
configured to allow 0 kbps of multicast traffic. This can be confirmed via show run interface
FastEthernet0/0:

R7#show run interface FastEthernet0/0
Building configuration...
Current configuration : 145 bytes


!
interface FastEthernet0/0
ip address 172.16.67.7 255.255.255.0
ip pim dense-mode
ip multicast rate-limit in 0
duplex auto
speed auto
end

Looking carefully at this output on R7 leads us to believe that the router is not going to allow any
multicast traffic to enter this interface. This has isolated the design fault.

Step 3 - Fault Remediation:
In this scenario, the ip multicast rate-limit command on FastEthernet0/0 should be removed.

R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#interface FastEthernet0/0
R7(config-if)#no ip multicast rate-limit in 0
R7(config-if)#end
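
If some amount of multicast policing were actually required on this link, a nonzero limit could be configured in place of the faulty command; a sketch, assuming an illustrative 512-kbps budget:

R7#conf t
R7(config)#interface FastEthernet0/0
R7(config-if)#ip multicast rate-limit in 512
R7(config-if)#end

The configured limit of 0 kbps found in this ticket is the degenerate case that permits no multicast traffic at all.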


Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially.

R1#ping 224.9.9.9 repeat 5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms


The pings to the group from R1 are now successful.


Chapter 4: Protocol Independent Multicast - Sparse Mode (PIM-SM)



In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and the functionality
of the Protocol Independent Multicast - Sparse Mode (PIM-SM) protocol are examined in great depth.
Once the operational characteristics of this important protocol are detailed completely, the focus
becomes that of troubleshooting. This includes the careful examination of symptoms, a fault isolation
methodology, and the implementation of repairs for PIM-SM. The chapter begins with a thorough
review of PIM-SM, and then quickly launches into an exhaustive analysis of the art of troubleshooting
this multicast protocol. This important chapter concludes with sample troubleshooting scenarios,
reference materials for the most important show and debug commands, and exciting challenges that
allow readers to practice implementing the troubleshooting skills they have obtained.


PIM-SM Technology Review


Unlike PIM dense mode (PIM-DM), PIM sparse mode (PIM-SM) uses a pull model to deliver multicast
traffic. Only network segments with active receivers that have explicitly requested the data will receive
the traffic. Sparse mode interfaces are added to the multicast routing table only when periodic Join
messages are received from downstream routers, or when a directly connected member is on the
interface. A rendezvous point (RP) is a special role introduced by PIM-SM. This special meeting place
on the network for multicast traffic allows the use of a shared tree for multicast distribution. This
important RP may be assigned to the network statically or dynamically. This book covers three possible
methods for this configuration, and each method receives its own chapter to ensure complete
coverage.
You should note that in PIM-SM, it is the RP that keeps track of multicast groups. Hosts that send
multicast packets are registered with the RP by the first hop router of that host. The RP then sends Join
messages toward the source. Packets are then forwarded on a shared distribution tree. If the multicast
traffic from a specific source is sufficient, the first hop router of the host may send Join messages toward
the source to build a source-based distribution tree. An administrator can force traffic to stay on the
shared tree by using the ip pim spt-threshold infinity command. Another powerful option for the
enforcement of shared tree topologies is Bidirectional Protocol Independent Multicast. This book
provides exhaustive coverage of this technology in Chapter 6: Bidirectional Protocol Independent
Multicast (BIDIR-PIM).
It is certainly worth noting that while the name sparse mode implies that this approach to multicast
routing would only be appropriate in topologies with a sparse distribution of receivers, PIM-SM scales
well to a network of any size and with any number of potential receivers.
To configure PIM-SM, we use the following command:
ip pim sparse-mode
Note: Remember, the rendezvous point (RP) is a critical component that should also be configured. The
various options for doing this appear in later chapters of this book.
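
Putting the pieces together, a minimal sketch of a PIM-SM deployment on a single router might look like the following (the interface name is illustrative, and the static RP assignment shown is just one of the methods covered later):

R1#conf t
R1(config)#ip multicast-routing
R1(config)#interface FastEthernet0/0
R1(config-if)#ip pim sparse-mode
R1(config-if)#exit
R1(config)#ip pim rp-address 192.1.2.2
R1(config)#end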

Copyright by IPexpert, Inc. All Rights Reserved.

4-2

IPv4/6 Multicast Operation and Troubleshooting

Chapter 4: Protocol Independent Multicast - Sparse Mode (PIM-SM)

The Operation and Troubleshooting of PIM-SM


PIM-SM eliminated the main factors holding back PIM-DM. Remember: the critical factor that makes
PIM-DM unattractive is its lack of scalability. In Chapter 3: Protocol Independent Multicast - Dense Mode
(PIM-DM), we discussed the mechanism that protocol employs to propagate multicast packets. The
"flood and prune" process that PIM-DM relies on makes the protocol unwieldy to deploy because of the
sheer volume of traffic used just to propagate multicast packets. PIM-SM introduces a much more
streamlined solution, which offers both scalability and optimized transportation of multicast
information. In doing so, this newer version of Protocol Independent Multicast changed the paradigm of
how routers create the control plane and how the data plane operates. This paradigm shift
introduces new device roles: the Rendezvous Point (RP) and the PIM Designated Router.
Rendezvous Point (RP)
PIM-SM introduced the concept of the Rendezvous Point. In an effort to make the multicast routing and
forwarding process more streamlined, the RP's role is to manage the multicast domain. The RP
accomplishes this by dynamically learning about respective sources and receivers. The RP discovers the
identity of any potential receivers through PIM join messages. When the RP learns about a receiver it
will place the interface the PIM join message arrived on into the Outbound Interface List (OIL) for the
multicast group identified in the message. This is already a change in behavior from PIM-DM. In PIM-DM,
the host sent its IGMP report message to the IGMP router. The IGMP router, recognizing that PIM-DM
was the multicast routing protocol, would wait for all multicast traffic to be flooded to it from any
respective source on the network. If a source arrived matching the IGMP join received from an
attached host, the IGMP router would then forward the multicast traffic onto the segment. PIM-SM
changes this behavior. Now the IGMP router will recognize that the multicast routing protocol is PIM-SM
and know that it needs to forward this information up to the RP. Keep in mind that the router-to-router
protocol in multicast is PIM version 2 by default, which uses the link-local multicast group 224.0.0.13
and is therefore subject to RPF checks.
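
Because these router-to-router messages only flow between established PIMv2 neighbors, a quick sanity check before chasing any control plane problem is to confirm that the expected adjacencies have formed. A sketch of the relevant commands (run on any router in the transit path; R6 is shown here per this book's topology):

R6#show ip pim neighbor
R6#show ip pim interface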
This discussion outlines the first issue that can affect the operation of PIM-SM. All devices must agree on
the identity of the RP for the protocol to function correctly. Using the topology in Figure 4-1, we will
systematically deploy PIM-SM and then designate a device to fulfill the role of RP.


Figure 4-1: Sample PIM-SM Topology

We will statically assign R2 to be the RP in this network, beginning with R9 and working our way toward
R2:
R9(config)#ip pim rp-address 192.1.2.2
R7(config)#ip pim rp-address 192.1.2.2
R6(config)#ip pim rp-address 192.1.2.2
R2(config)#ip pim rp-address 192.1.2.2


All the interfaces between each of these devices are running ip pim sparse-mode, and we have statically
assigned the RP on each device. The output of show ip pim rp mapping will demonstrate that these four
devices agree on the identity of the RP.
R9#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)
R7#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)
R6#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)


R2#show ip pim rp mapping


PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)

All four devices agree that R2 (192.1.2.2) is the RP for the entire multicast range of 224.0.0.0/4. Now
that this has been configured and verified, we will have the FastEthernet0/1 interface of R9 join the
multicast group 224.9.9.9.
R9#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end

Once this is accomplished, we will use show ip mroute to follow the formation of the tree between R9
and R2 (the RP). Here is the systematic process:
R9 has joined the group 224.9.9.9, as evidenced by that group appearing in the output of show ip igmp
groups.
R9#show ip igmp groups
IGMP Connected Group Membership
Group Address    Interface        Uptime    Expires   Last Reporter   Group Accounted
224.9.9.9        FastEthernet0/1  00:08:30  00:02:28  172.16.79.9
224.0.1.40       FastEthernet0/1  03:14:04  00:02:31  172.16.79.9

R9 has the group in the list as expected. Now R9 will send an IGMP report to R7; this packet will arrive
on the FastEthernet0/1 interface of that router, as illustrated by the output of debug ip igmp:
R7#
IGMP(0): Received v2 Report on FastEthernet0/1 from 172.16.79.9 for 224.9.9.9
IGMP(0): Received Group record for group 224.9.9.9, mode 2 from 172.16.79.9 for 0 sources
IGMP(0): Updating EXCLUDE group timer for 224.9.9.9
IGMP(0): MRT Add/Update FastEthernet0/1 for (*,224.9.9.9) by 0

Having received this message, R7 will create a (*, 224.9.9.9) entry in its multicast routing table, with the
FastEthernet0/1 interface in the OIL, as evidenced by show ip mroute:
R7#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,


X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,


U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:26:38/00:03:20, RP 192.1.2.2, flags: SJC
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:26:38/00:03:20


As expected, the FastEthernet0/1 interface is in the OIL, and the incoming interface is FastEthernet0/0,
the interface used to reach the RP. Now R7 will send PIM version 2 Join/Prune packets to its neighbor
172.16.67.6, as evidenced by the results of debug ip pim on R7:

R7#debug ip pim
PIM debugging is on
R7#
PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.9.9.9
PIM(0): Insert (*,224.9.9.9) join in nbr 172.16.67.6's queue
PIM(0): Building Join/Prune packet for nbr 172.16.67.6
PIM(0): Adding v2 (192.1.2.2/32, 224.9.9.9), WC-bit, RPT-bit, S-bit Join
PIM(0): Send v2 join/prune to 172.16.67.6 (FastEthernet0/0)
R7#


This join packet arrives on the FastEthernet0/0 interface of R6. Seeing this packet, R6 will create a (*,G)
entry in the multicast routing table for the group 224.9.9.9 and place FastEthernet0/0 in the OIL, as
evidenced by the output of show ip mroute.

R6#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:36:23/00:02:30, RP 192.1.2.2, flags: S
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2


Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:36:23/00:02:30

R6 has FastEthernet0/0 in the OIL and FastEthernet0/1 as the incoming interface, as expected. Now R6
will actively begin to send PIM Join/Prune messages to R2. These messages will arrive on the
GigabitEthernet0/1 interface of R2. Seeing this, R2 will create the same (*,G) entry for the group
224.9.9.9 with GigabitEthernet0/1 in the OIL. Again, this is evidenced by show ip mroute:
R2#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:40:31/00:03:15, RP 192.1.2.2, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:40:31/00:03:15


We have just observed the creation of the Rendezvous Point Tree (RPT), sometimes referred to as the
shared tree. In a working deployment of PIM-SM, there will always be a series of (*,G) entries in the
multicast routing tables of the RP and the hosts, and in the tables of the devices between them. This
condition is unique to PIM-SM and does not occur in PIM-DM. PIM-SM's reliance on both a shared and
a source-based tree makes it more flexible and scalable than PIM-DM. This raises the question, "How is
the source-based tree built?" That takes us to the next role-based device in the PIM-SM model.
PIM Designated Router (PIM-DR)
In PIM, there is an election process between two or more devices when they become PIM neighbors.
This election determines which device on the segment is responsible for sending PIM Register messages
to the RP. When the PIM-DR connected to a sender sees multicast traffic, it will send a unicast PIM
Register message to the RP. However, before this process can work, the devices in the source-based
tree (SPT) will need to know the identity of the RP. We will statically assign R2 to be the RP in this
network, beginning with R1 and working our way up to R2:
R1(config)#ip pim rp-address 192.1.2.2
R5(config)#ip pim rp-address 192.1.2.2
R4(config)#ip pim rp-address 192.1.2.2


All the interfaces between each of these devices are running ip pim sparse-mode, and we have statically
assigned the RP on each device. The output of show ip pim rp mapping will demonstrate that these four
devices agree on the identity of the RP.
R1#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)
R5#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)
R4#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)
R2#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)


Now PIM-SM messages can be sent because the identity of the RP has been established. If a source
were to appear, the PIM-DR can send the PIM Register message. If the RP accepts the PIM Register
message, it will send an acknowledgement to the PIM-DR known as a Register-Stop.
The Register-Stop informs the PIM-DR that the RP has received the PIM Register message and created an
(S,G) entry for the group identified in the message, and it tells the PIM-DR to stop sending Register
messages. To illustrate this process, R1 will send multicast traffic to the group 224.1.1.1.
R1#ping 224.1.1.1 repeat 100000000
Type escape sequence to abort.
Sending 100000000, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
.............................. <output omitted>


It should come as no surprise that the ping fails. No hosts have joined this particular group. We will
observe the exchange of information between the PIM-DR and the RP. First, the PIM-DR sends the PIM
register message to the RP via unicast:

R5#debug ip pim
PIM debugging is on
R5#
PIM(0): Send v2 Data-header Register to 192.1.2.2 for 172.16.15.1, group 224.1.1.1


We see that R5 sends the Register message to 192.1.2.2 for the group 224.1.1.1. This is a unicast packet.
R2 will then receive this message and send a Register-Stop, as evidenced by the output of debug ip pim:

R2#debug ip pim
PIM debugging is on
R2#
PIM(0): Received v2 Register on FastEthernet0/0 from 172.16.45.5
(Data-header) for 172.16.15.1, group 224.1.1.1
PIM(0): Send v2 Register-Stop to 172.16.45.5 for 172.16.15.1, group 224.1.1.1


The output demonstrates that R2 receives the Register message for the group 224.1.1.1 and responds
by sending the Register-Stop for the group 224.1.1.1. Observe that this process is taking place via
unicast. Before we look to see if the Register-Stop made it to R5 we will want to see if the (S,G) entry for
the multicast group 224.1.1.1 was created in R2's multicast routing table. Use show ip mroute to
accomplish this:

R2#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.1.1.1), 00:12:27/stopped, RP 192.1.2.2, flags: SP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null
(172.16.15.1, 224.1.1.1), 00:12:27/00:02:32, flags: P
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4


Outgoing interface list: Null

The S,G pair is clearly installed. Note that there are no interfaces in the OIL at this time. Now, back on
R5, we check for the Register-Stop:
R5#debug ip pim
PIM debugging is on
R5#
PIM(0): Received v2 Register-Stop on FastEthernet0/1 from 192.1.2.2
PIM(0):   for source 172.16.15.1, group 224.1.1.1
PIM(0): Clear Registering flag to 192.1.2.2 for (172.16.15.1/32, 224.1.1.1)

The Register-Stop was received for the group 224.1.1.1 from the RP (192.1.2.2), and the register flag for
the S,G pair (172.16.15.1, 224.1.1.1) was cleared. There will be an entry for this S,G pair in the multicast
routing table of R5:
R5#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.1.1.1), 00:19:36/stopped, RP 192.1.2.2, flags: SPF
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.4
Outgoing interface list: Null
(172.16.15.1, 224.1.1.1), 00:19:36/00:02:56, flags: PFT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.15.1
Outgoing interface list: Null


We see the S,G entry. As part of our critical analysis, we need to look at the flags. "P" indicates that the
group has been pruned (there are no interested hosts). "F" indicates the register flag (which we saw
being cleared). "T" indicates that the SPT-bit is set (we will look at this later in this chapter). When we
looked at the formation of the shared or RP tree, we observed that there were (*,G) entries on each
device from R9 to R2. The shared tree was created purely via multicast messages on a hop-by-hop basis.
Thus far, the source tree (SPT) has been built using unicast messages. This means that an entry for the
multicast S,G pair for 224.1.1.1 will only exist on R5 and R2, as evidenced by show ip mroute:



R5#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.1.1.1), 00:26:42/stopped, RP 192.1.2.2, flags: SPF
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.4
Outgoing interface list: Null
(172.16.15.1, 224.1.1.1), 00:26:42/00:02:50, flags: PFT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.15.1
Outgoing interface list: Null
R4#show ip mroute 224.1.1.1
Group 224.1.1.1 not found
R2#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.1.1.1), 00:27:29/stopped, RP 192.1.2.2, flags: SP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null
(172.16.15.1, 224.1.1.1), 00:27:29/00:01:30, flags: P
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list: Null


This behavior will change once a receiver appears for the group 224.1.1.1.


Merging the Trees


The normal role of the RP in PIM-SM is to facilitate the creation of a reliable and scalable multicast
routing environment. The previous sections in this chapter have explained the RP's role in the creation
of two operationally different multicast trees. First, we looked at the shared tree. PIM-SM creates the
shared tree using the multicast group 224.0.0.13. This process results in the establishment of *,G entries
in the multicast routing tables of all devices in the shared tree path from the host to the RP. Second, we
observed the exchange of unicast traffic between the PIM-DR and the RP. This unicast messaging allows
the RP to discover the existence of active multicast sources on the network. Once these two
components of PIM-SM have been completed, the RP's next task is to merge the two trees into one
uniform multicast tree. Initially, this tree will always be created to transit through the RP. We will discuss
how this process changes later in this section.

In order to illustrate how the RP works in the formation of the shared and source trees, we have used
two multicast groups: one to form the shared tree to the RP and the other to create the source tree to
the RP. Next, we will illustrate how this process works when we use the same multicast address for
both. We will stop the multicast traffic on R1:

....................<cntrl-6>
R1#

Wait for the multicast routing table entry for 224.1.1.1 to expire on R2:

R2#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.1.1.1), 00:44:19/stopped, RP 192.1.2.2, flags: SP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null
(172.16.15.1, 224.1.1.1), 00:44:19/00:00:39, flags: P
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list: Null


Thirty-nine seconds later:


R2#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.1.1.1), 00:45:15/00:02:44, RP 192.1.2.2, flags: SP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null

Two minutes and forty-four seconds later:



R2#show ip mroute 224.1.1.1
Group 224.1.1.1 not found


Note: This process could easily be facilitated with the clear ip mroute * command, but we wanted to
take the time to make the point that the expiration timers in multicast are long. This fact can create
anomalous behavior, complicate troubleshooting, or delay multicast security measures.
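
For reference, the command mentioned in the note would be issued on the RP as follows (use it with care in production, since it briefly clears active forwarding state):

R2#clear ip mroute *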

Now that the 224.1.1.1 pair is gone, we only have the 224.9.9.9 (*,G) entry remaining on R2:

R2#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 03:55:26/00:03:27, RP 192.1.2.2, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0


Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 03:55:26/00:03:27
(*, 224.0.1.40), 07:02:19/00:03:21, RP 192.1.2.2, flags: SJCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:54:19/00:03:21
FastEthernet0/1, Forward/Sparse, 06:26:08/00:03:15


Enable debug ip pim on R2:
R2#debug ip pim
PIM debugging is on
R2#

On R1, generate the multicast pings for the group 224.9.9.9 with a repeat count of 1.
R1#ping 224.9.9.9 repeat 1
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 32 ms


Observe that R9 replied to the ping. We need to look at the debug output on R2 to see what has
happened at the RP thus far.

PIM(0): Received v2 Register on GigabitEthernet0/0 from 172.16.45.5
for 172.16.15.1, group 224.9.9.9
PIM(0): Send v2 Register-Stop to 172.16.45.5 for 172.16.15.1, group 224.9.9.9

The RP sent the Register-Stop. This seems strange given the fact that we have a host that wants to
receive this multicast group. The things we need to point out are as follows:

R2 will now have both the (*,G) and (S,G) entries in its multicast routing table:

R2#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group


Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:25:15/stopped, RP 192.1.2.2, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 00:25:15/00:02:50
(172.16.15.1, 224.9.9.9), 00:00:11/00:02:48, flags:
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 00:00:11/00:02:48

This output tells us that the RP has learned the identity of the source, so once a PIM Join message for
this source arrives at the RP, it will send a PIM Join up the reverse path toward the source, as evidenced
by the debug ip pim output on R2:
R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/1 from 172.16.26.6, to us
PIM(0): Join-list: (*, 224.9.9.9), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update GigabitEthernet0/1/172.16.26.6 to (*, 224.9.9.9), Forward state, by PIM
*G Join
PIM(0): Update GigabitEthernet0/1/172.16.26.6 to (172.16.15.1, 224.9.9.9), Forward
state, by PIM *G Join


We see the join arrive from R6, and then the RP builds and sends its own join to R4 toward the source:
R2#
PIM(0): Building Join/Prune packet for nbr 172.16.24.4
PIM(0): Adding v2 (172.16.15.1/32, 224.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.24.4 (GigabitEthernet0/0)


This PIM join creates a (*,G) entry in all the devices between the RP and the source.

R4#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires


Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:00:06/stopped, RP 192.1.2.2, flags: SP
Incoming interface: FastEthernet0/0, RPF nbr 172.16.24.2
Outgoing interface list: Null
(172.16.15.1, 224.9.9.9), 00:00:06/00:03:23, flags:
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.5
Outgoing interface list:
Serial0/0/0.1, Forward/Sparse, 00:00:06/00:03:23
FastEthernet0/0, Forward/Sparse, 00:00:06/00:03:23

Right now, we are only interested in the (*,G) entry in the multicast routing table. Notice that the RPF
neighbor address on R4 is the IP address used to reach the RP (172.16.24.2).

R5#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:01:51/stopped, RP 192.1.2.2, flags: SPF
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.4
Outgoing interface list: Null
(172.16.15.1, 224.9.9.9), 00:01:51/00:01:40, flags: FT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.15.1, Registering
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:01:51/00:02:38

R5 has the (*,G) entry in the multicast routing table. This entry was created as a result of a PIM Join
arriving on the FastEthernet0/1 interface, and the RPF neighbor is 172.16.45.4. Also note that we have
(S,G) entries on these routers as well. The appearance of both the (*,G) and (S,G) entries in the table tells
us that the router has passed some packets. Following the pattern in the show command
output above, we see that initially the multicast tree is built end-to-end through the RP.


The Shortest Path Tree (SPT)


Once the end-to-end tree has been built through the RP, the default behavior of PIM-SM is to
immediately look for the shortest path to the source and switch to it. This is accomplished using
information extracted from the first packet that travels from one end of the multicast domain to
the other. This first packet has the IP address of the source in its header. The last-hop router before the
hosts will look at this information and determine the shortest path back to the source IP address.
Remember, initially the last-hop router only knew the identity of the host, the multicast group address,
and the RP. However, once the first packet arrives through the RP, this new information allows the
router to determine whether it is currently using the most efficient path to reach the source. If it is, the
router will decide to continue to use the path through the RP. If a shorter path exists in its routing table,
the router will send a prune message to the RP and a join message directly toward the source. This
process is called SPT switchover.
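
As a preview of the knob demonstrated at the end of this section, the switchover point can also be tuned rather than left at the default of immediate switchover; a sketch on the last-hop router, assuming an illustrative 64-kbps threshold:

R7#conf t
R7(config)#ip pim spt-threshold 64
R7(config)#end

With a numeric threshold, the switch to the SPT occurs only once the group's traffic rate exceeds that value; the infinity keyword, shown later, disables the switchover entirely.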
We will observe this behavior by sending just one ping as we did before. This time we will enable debug
ip pim on R2 and analyze the PIM messages we receive:
R2#
PIM(0): Insert (172.16.15.1,224.9.9.9) join in nbr 172.16.24.4's queue
PIM(0): Building Join/Prune packet for nbr 172.16.24.4
PIM(0): Adding v2 (172.16.15.1/32, 224.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.24.4 (GigabitEthernet0/0)
PIM(0): Received v2 Register on GigabitEthernet0/0 from 172.16.45.5
for 172.16.15.1, group 224.9.9.9
PIM(0): Forward decapsulated data packet for 224.9.9.9 on GigabitEthernet0/1
PIM(0): Insert (172.16.15.1,224.9.9.9) join in nbr 172.16.24.4's queue
PIM(0): Building Join/Prune packet for nbr 172.16.24.4
PIM(0): Adding v2 (172.16.15.1/32, 224.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.24.4 (GigabitEthernet0/0)
R2#

We see that the (S,G) entry is inserted into R2's multicast routing table. The RP builds a Join packet for
R4. This join is forwarded via the link-local multicast address 224.0.0.13, and once traffic arrives on R5
(the PIM-DR), the PIM Register message is sent back toward the RP. Observe that this time R2 does not
send a Register-Stop. Instead, R2 begins to forward multicast packets.

Once the first packet arrives at R7, the source IP address is learned, and R7 will send a PIM Prune
message to the RP once it determines that there is a shorter path toward the source. This prune
message is propagated in a hop-by-hop manner using the link-local multicast address 224.0.0.13. Once
this message arrives at the RP, the RP will stop forwarding multicast packets, create its own prune
message, and send it to its next-hop neighbor toward the source.


R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/1 from 172.16.26.6, to us
PIM(0): Prune-list: (172.16.15.1/32, 224.9.9.9) RPT-bit set
PIM(0): Prune GigabitEthernet0/1/224.9.9.9 from (172.16.15.1/32, 224.9.9.9)
PIM(0): Insert (172.16.15.1,224.9.9.9) prune in nbr 172.16.24.4's queue - deleted
PIM(0): Building Join/Prune packet for nbr 172.16.24.4
PIM(0): Adding v2 (172.16.15.1/32, 224.9.9.9), S-bit Prune
PIM(0): Send v2 join/prune to 172.16.24.4 (GigabitEthernet0/0)
R2#

In the output provided, we see the "S-bit Prune" value for the (S,G) pair. This indicates that the status of
the pair (172.16.15.1, 224.9.9.9) will transition to pruned:
R2#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:21:02/00:03:07, RP 192.1.2.2, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 00:21:02/00:03:07
(172.16.15.1, 224.9.9.9), 00:00:53/00:02:43, flags: PT
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list: Null


Observe that the status for the pair is pruned, as indicated by the P flag. Note also that there are no
interfaces in the OIL (Null). R2 is no longer participating in the multicast data plane. As a final
verification, we can see that R6 is receiving the multicast traffic directly from R4 via the Serial0/1/0.1
interface.

R6#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,


U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:26:34/00:03:28, RP 192.1.2.2, flags: S
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:26:34/00:03:28
(172.16.15.1, 224.9.9.9), 00:06:24/00:03:23, flags: T
Incoming interface: Serial0/1/0.1, RPF nbr 172.16.46.4
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:06:24/00:03:28

The incoming interface for the (S,G) pair is Serial0/1/0.1, as expected. Though this process of
switching from the RPT to the SPT is the default behavior, it is possible to change it with the ip
pim spt-threshold infinity command on the IGMP router.
R7#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R7(config)#ip pim spt-threshold infinity
R7(config)#end

Once this command has been applied to the IGMP router, the multicast tree will always be formed end-
to-end through the RP. We will demonstrate this by issuing pings from R1 for the group 224.9.9.9, and
observing the output of the debug ip pim on R2:

R2#
PIM(0): Received v2 Register on GigabitEthernet0/0 from 172.16.45.5
PIM(0): Send v2 Register-Stop to 172.16.45.5 for 0.0.0.0, group 0.0.0.0
PIM(0): Received v2 Register on GigabitEthernet0/0 from 172.16.45.5
for 172.16.15.1, group 224.9.9.9
PIM(0): Forward decapsulated data packet for 224.9.9.9 on GigabitEthernet0/1
PIM(0): Insert (172.16.15.1,224.9.9.9) join in nbr 172.16.24.4's queue
PIM(0): Building Join/Prune packet for nbr 172.16.24.4
PIM(0): Adding v2 (172.16.15.1/32, 224.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.24.4 (GigabitEthernet0/0)
R2#

In this instance, R2 receives and forwards the "S-bit Join" to its next-hop neighbor toward the source.
However, R2 never receives a PIM Prune from the IGMP router, so the multicast tree remains rooted at
the RP. This can also be verified via show ip mroute on R6. This output will show that the incoming
interface on R6 will be the FastEthernet0/1 interface:

R6#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:06:44/00:03:20, RP 192.1.2.2, flags: S
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:06:44/00:02:39
(*, 224.0.1.40), 00:06:44/00:02:40, RP 192.1.2.2, flags: SJCL
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:06:44/00:02:40

This output demonstrates two things. First, the incoming interface for the group 224.9.9.9 is
FastEthernet0/1, as we expected. Second, we have no (S,G) entries in the multicast routing table of R6.
There will be no (S,G) entries on R6, R7, or R9, because all of these routers send their PIM joins toward
the RP rather than toward the unicast IP address of the source.


Common Issues with PIM-SM


There are a number of issues that can surface with PIM-SM. The most common problems relate to the
exchange of essential control plane information. Control plane establishment in PIM-SM has far more
components than its PIM-DM counterpart, and therefore PIM-SM is somewhat more difficult to
troubleshoot. For simplicity in troubleshooting common issues while deploying PIM-SM, we identify
three categories of problems: Reverse Path Forwarding (RPF) failures, unicast routing issues, and
multicast routing problems.
RPF Failures
In the Troubleshooting PIM-SM section, this text discussed which phases of the PIM-SM operational
mechanisms use unicast and which use multicast. The multicast-driven portions of this protocol are all
subject to Reverse Path Forwarding (RPF) checks. Recall that of all the phases, only the PIM Register and
Register-Stop processes rely on unicast; everything else uses link-local multicast. Logically then, RPF
issues can prevent an RP from learning multicast routing information from the IGMP router. Additionally,
this problem can prevent the RP from successfully merging the multicast trees.
The following issues have a relatively high probability of occurring because of RPF failures.
Remember that these RPF checks are performed against the IP address of either the RP or the multicast
source. Be aware that anytime not all interfaces in a network are running PIM, these issues may arise.

RP is not learning the (*,G) entries for some or all multicast groups.
RP fails to notify the PIM-DR to begin forwarding multicast packets.

We will perform a walkthrough for each of these RPF issues in the PIM-SM Sample Troubleshooting
Scenarios section that follows.
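
In the meantime, a hedged sketch of the quickest first check, using addresses from this chapter's topology: verify the RPF interface toward the RP and, if the unicast path legitimately differs from the desired multicast path, override the RPF lookup with a static multicast route (the next hop shown is illustrative):

R6#show ip rpf 192.1.2.2
R6#conf t
R6(config)#ip mroute 192.1.2.0 255.255.255.0 172.16.26.2
R6(config)#end

Note that an ip mroute entry influences only RPF calculations; it does not change unicast forwarding.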
Unicast Routing and Forwarding Problems
From earlier portions of this chapter, it is clear that the ability of the PIM-DR to register an active source
with the RP depends on its ability to unicast to the RP. Of course, since this reachability is unicast, it
is not subject to RPF checks. As a result, common issues are:

The PIM-DR fails to send PIM Register messages to the RP.

RPF errors induced by the unicast routing environment.

This is a situation where it will be necessary to look at the underlying routing protocols used in the
network. Typically, this would be an issue of asymmetric routing, and it should be something obvious
once the routing tables of the source and transit devices are analyzed.
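
As a quick sketch of such checks, again using this chapter's addressing (the source address is illustrative), confirm how the PIM-DR reaches the RP and that the RP is reachable when sourcing the ping from one of the PIM-DR's own interface addresses:

R5#show ip route 192.1.2.2
R5#ping 192.1.2.2 source 172.16.15.5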


Multicast Routing and Forwarding Problems


These problems manifest themselves in more subtle ways than the previous points. As
discussed earlier, the majority of PIM-SM's operational mechanisms involve the formation of the control
plane so that the RP can manage the multicast domain and help maintain the multicast routing tables.
Situations like the following exist when information fails to propagate to any or all devices, even though
RPF checks and unicast routing seem to be functioning correctly:

Multicast data packets are lost in the multicast domain.

Multicast control plane packets are lost in the multicast domain.

In the PIM-SM Sample Troubleshooting Scenarios section that follows, troubleshooting these issues is
demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify the
symptom, isolate the cause, and remediate the issue.


PIM-SM Sample Troubleshooting Scenarios


This section provides a detailed look at how to best approach troubleshooting some of the common
issues discussed in previous sections. It includes coverage of a methodology for identification, isolation,
and remediation of faults in the PIM-SM operational process. The intent here is to hone and develop
troubleshooting skills tailored to first identify if a problem is multicast or unicast related, and then how
to begin isolating the cause of the fault in the most efficient manner possible. Figure 4-2 illustrates the
topology used to explore this topic. Note that R2 is the RP in this topology:

Figure 4-2: A Sample PIM-SM Topology

In the Common Issues with PIM-SM section, three primary types of problems were identified: RPF
failures, unicast routing failures, and multicast forwarding and routing failures. This section explores
these three categories of failure by directing our attention to the commands necessary to identify that a
problem exists. There are four types of devices in this topology: RP, host/receiver, source, and transit
devices (PIM-enabled routers).
Step One: Do all devices agree on the identity of the RP?
R1#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)
R5#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)


R4#show ip pim rp mapping


PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)
R2#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)
R6#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)
R7#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)
R9#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)


All the routers in the multicast domain agree that R2 is the RP for the entire multicast range 224.0.0.0/4;
additionally, we can see that this determination was made by static assignment. It is essential that all
devices in the topology agree on the identity of the RP on a group-by-group basis. If they do not, there is
no way for the PIM-SM control plane to form correctly.
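
For completeness, a hedged sketch of scoping a static RP assignment to a subset of groups with a standard ACL (the group address and ACL number are illustrative; without the ACL, the assignment covers all of 224.0.0.0/4, as seen above):

R1#conf t
R1(config)#access-list 10 permit 224.9.9.9
R1(config)#ip pim rp-address 192.1.2.2 10
R1(config)#end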

Step Two: Does the shared tree (RPT) form from the IGMP router to the RP?

This step is verified by having the host join the multicast group of interest.

R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.99.99.99
R9(config-if)#end

Now that R9 has joined the group 224.99.99.99, where does it send the IGMP join message?



R9#show ip igmp interface FastEthernet0/1
FastEthernet0/1 is up, line protocol is up
Internet address is 172.16.79.9/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
IGMP query interval is 60 seconds
IGMP querier timeout is 120 seconds
IGMP max query response time is 10 seconds
Last member query count is 2
Last member query response interval is 1000 ms
Inbound IGMP access group is not set
IGMP activity: 3 joins, 1 leaves
Multicast routing is enabled on interface
Multicast TTL threshold is 0
Multicast designated router (DR) is 172.16.79.9 (this system)
IGMP querying router is 172.16.79.7
Multicast groups joined by this system (number of users):
224.0.1.40(1) 224.99.99.99(1)
R9#


The identity of the IGMP querying router is 172.16.79.7 (R7). This router will have a record of the IGMP
report sent by R9:
R7#show ip igmp groups
IGMP Connected Group Membership
Group Address    Interface        Uptime    Expires   Last Reporter   Group Accounted
224.99.99.99     FastEthernet0/1  00:03:59  00:02:57  172.16.79.9
224.0.1.40       FastEthernet0/1  04:02:58  00:02:59  172.16.79.9
224.0.1.40       FastEthernet0/0  04:03:53  00:02:58  172.16.67.6


In this output, R7 has recorded the IGMP Report sent by R9 for the multicast group 224.99.99.99. Now
R7 will use link-local multicast messages to form the PIM-SM RPT, as evidenced by the output of show ip
mroute on the devices leading to the RP.
R7#show ip mroute 224.99.99.99
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,


Y - Joined MDT-data group, y - Sending to MDT-data group


Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.99.99.99), 00:07:49/00:02:13, RP 192.1.2.2, flags: SC
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:07:49/00:02:34

Based on the output, we see that R7 is sending PIM Join messages toward R6 via its
FastEthernet0/0, the interface used to reach the RP. If these messages arrive on R6, there will be a (*,G)
entry in its multicast routing table for the group, and FastEthernet0/1 should be in the incoming
interface list. This can be verified via show ip mroute:
R6#show ip mroute 224.99.99.99
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.99.99.99), 00:00:41/00:02:48, RP 192.1.2.2, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:41/00:02:48

Observe that the (*,G) entry is present, but the incoming interface is Null. Additionally, the RPF neighbor address is 0.0.0.0. Something is preventing FastEthernet0/1 from being selected as the incoming interface. An RPF neighbor value of 0.0.0.0 usually indicates that the RP cannot be resolved, and commonly occurs because of an RPF error. What interface is the RPF interface for the RP?

R6#show ip rpf 192.1.2.2
RPF information for ? (192.1.2.2) failed, no route exists

This output indicates that there is no multicast route to the RP. What interface would R6 use to reach
192.1.2.2?

R6#show ip route 192.1.2.2


Routing entry for 192.1.2.0/24
Known via "rip", distance 120, metric 1
Redistributing via rip
Last update from 172.16.26.2 on FastEthernet0/1, 00:00:09 ago
Routing Descriptor Blocks:
* 172.16.26.2, from 172.16.26.2, 00:00:09 ago, via FastEthernet0/1
Route metric is 1, traffic share count is 1

This output points to the FastEthernet0/1 interface. Is FastEthernet0/1 running PIM-SM?


R6#show ip pim interface

Address          Interface              Ver/   Nbr    Query  DR     DR
                                        Mode   Count  Intvl  Prior
172.16.67.6      FastEthernet0/0        v2/S   1      30     1      172.16.67.7
172.16.46.6      Serial0/1/0.1          v2/S   1      30     1      0.0.0.0

FastEthernet0/1 is not running PIM-SM. We correct this by applying ip pim sparse-mode under the interface:
R6#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R6(config)#interface FastEthernet0/1
R6(config-if)#ip pim sparse-mode
R6(config-if)#end


Now we will use show ip mroute to determine whether R6 can resolve the identity of the RP and install FastEthernet0/1 as the incoming interface:
R6#show ip mroute 224.99.99.99
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.99.99.99), 00:26:03/00:03:04, RP 192.1.2.2, flags: S
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2

Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:26:03/00:03:04


As the final verification that the multicast shared tree has been created correctly, we will use show ip mroute on the RP to look for the (*,G) entry for the group 224.99.99.99.
R2#show ip mroute 224.99.99.99
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.99.99.99), 00:09:00/00:02:52, RP 192.1.2.2, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 00:09:00/00:02:52


In this situation, we may initially think that there is an RPF issue like the one on R6. However, an RPF value of 0.0.0.0 can mean one of two things: either the RP cannot be resolved, or the router itself is the RP. In this instance, R2 is the RP. We see no interface in the incoming interface list because we have not yet emulated a multicast source.
Step Three: Does the PIM-DR send the PIM Register message to the RP?
This process is accomplished via unicast and, as such, is not subject to the RPF check mechanism. Initiating a multicast source on R1 will allow us to confirm whether the PIM Register message is sent.
R1#ping 224.99.99.99 r 5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 224.99.99.99, timeout is 2 seconds:
.....

The ping was not successful. Was an (S,G) record created on R2 for the group 224.99.99.99?
R2#show ip mroute 224.99.99.99
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,

L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.99.99.99), 00:22:07/00:02:37, RP 192.1.2.2, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 00:22:07/00:02:37

R2 has no record of the (S,G) group. Does R5 send the PIM Register message?
R2#debug ip pim
PIM debugging is on
R2#
PIM(0): Send RP-reachability for 224.0.1.40 on GigabitEthernet0/0
PIM(0): Send RP-reachability for 224.0.1.40 on GigabitEthernet0/1
R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/1 from 172.16.26.6, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update GigabitEthernet0/1/172.16.26.6 to (*, 224.0.1.40), Forward state, by
PIM *G Join
R2#


R2 is not receiving a PIM Register from R5. Does R5 have an (S,G) entry for the group 224.99.99.99?
R5#show ip mroute 224.99.99.99
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.99.99.99), 00:00:06/00:02:53, RP 192.1.2.2, flags: SP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null

Observe that once again we have an RPF neighbor value of 0.0.0.0. R5 is not the RP, so this means that R5 cannot resolve the RP. The next most logical question is, "Is there an RPF error?"
R5#show ip rpf 192.1.2.2
RPF information for ? (192.1.2.2) failed, no route exists

Once again, we see that there is no route to the RP. Are all the interfaces running PIM-SM?
R5#show ip pim interface

Address          Interface              Ver/   Nbr    Query  DR     DR
                                        Mode   Count  Intvl  Prior
172.16.15.5      FastEthernet0/0        v2/S   1      30     1      172.16.15.5
172.16.45.5      FastEthernet0/1        v2/S   1      30     1      172.16.45.5
Observe that both interfaces are running PIM-SM. What is the next step in this part of the PIM-SM
operational mechanism? R5 should now unicast the PIM Register message to the RP. Can R5 reach the
RP?

R5#ping 192.1.2.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.1.2.2, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)


R5 cannot ping the IP address of the RP. Is this prefix in the unicast routing table?
R5#show ip route 192.1.2.2
Routing entry for 192.1.2.2/32
Known via "static", distance 1, metric 0 (connected)
Routing Descriptor Blocks:
* directly connected, via Null0
Route metric is 0, traffic share count is 1


Observe that the prefix is being routed to Null0 via a "static" route. This can be corrected by removing
the static route for the loopback of R2.


R5(config)#no ip route 192.1.2.2 255.255.255.255 Null0
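Before re-testing the multicast ping from R1, it is reasonable to quickly re-verify unicast reachability and RPF state on R5; the exact output is omitted here, as it depends on the IGP in use:

R5#ping 192.1.2.2
R5#show ip rpf 192.1.2.2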

Now we can verify that the ping from R1 is successful:
R1#ping 224.99.99.99 repeat 10
Type escape sequence to abort.

Sending 10, 100-byte ICMP Echos to 224.99.99.99, timeout is 2 seconds:

Reply to request 0 from 172.16.79.9, 32 ms
Reply to request 1 from 172.16.79.9, 28 ms
Reply to request 2 from 172.16.79.9, 28 ms
Reply to request 3 from 172.16.79.9, 28 ms
Reply to request 4 from 172.16.79.9, 28 ms
Reply to request 5 from 172.16.79.9, 28 ms
Reply to request 6 from 172.16.79.9, 28 ms
Reply to request 7 from 172.16.79.9, 28 ms
Reply to request 8 from 172.16.79.9, 44 ms
Reply to request 9 from 172.16.79.9, 28 ms
We see that the ping is now successful.

PIM-SM show Command Tools


As a quick reference, here are the show command tools utilized in this chapter. This section utilizes the
PIM-SM topology in Figure 4-3 for all example output.

Figure 4-3: A Sample PIM-SM Topology

show COMMAND:
show ip igmp membership [group-address | group-name] [tracked] [all]
This command displays Internet Group Management Protocol (IGMP) membership information for
multicast groups and (S, G) channels.
Where:

group-address - optional; specifies the specific multicast group address
tracked - optional; displays the multicast groups with the explicit tracking feature enabled
all - optional; displays detailed information about the multicast groups with and without the explicit tracking feature enabled

EXAMPLE OUTPUT:
R9#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m> - <n> reporter in include mode, <m> reporter in exclude

Channel/Group    Reporter        Uptime    Exp.   Flags  Interface
*,224.9.9.9      172.16.79.9     00:36:54  02:06  2LA    Fa0/1
*,239.9.9.9      172.16.79.9     00:36:54  02:09  2LA    Fa0/1
*,224.0.1.40     172.16.79.9     00:36:54  02:05  2LA    Fa0/1
R9#


show COMMAND:
show ip mroute
This command displays the contents of the multicast routing (mroute) table.
EXAMPLE OUTPUT:
R6#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 239.9.9.9), 00:01:36/stopped, RP 192.1.7.7, flags: SP
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.7
Outgoing interface list: Null
(172.16.15.1, 239.9.9.9), 00:01:36/00:03:01, flags: T
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:01:36/00:03:01
(*, 224.0.1.40), 00:38:26/00:02:35, RP 192.1.2.2, flags: SJCL
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:38:26/00:02:34
R6#


show COMMAND:
show ip pim interface
This command displays information about interfaces configured for Protocol Independent Multicast
(PIM).

EXAMPLE OUTPUT:
R6#show ip pim interface

Address          Interface              Ver/   Nbr    Query  DR     DR
                                        Mode   Count  Intvl  Prior
172.16.67.6      FastEthernet0/0        v2/S   1      30     1      172.16.67.7
172.16.46.6      Serial0/0/0.1          v2/S   1      30     1      0.0.0.0
172.16.26.6      FastEthernet0/1        v2/S   1      30     1      172.16.26.6
R6#

show COMMAND:
show ip pim rp mapping
This command displays information about Protocol Independent Multicast (PIM) RP mappings.
EXAMPLE OUTPUT:
R6#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 192.1.7.7 (?), v2
    Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 00:40:34, expires: 00:01:34
  RP 192.1.5.5 (?), v2
    Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 00:40:36, expires: 00:01:34
Group(s): 224.0.0.0/4, Static
  RP: 192.1.2.2 (?)
R6#


show COMMAND:
show ip pim [vrf vrf-name] neighbor [interface-type interface-number]
This command displays information about Protocol Independent Multicast (PIM) neighbors discovered
by PIM version 1 router query messages or PIM version 2 hello messages.
Where:

vrf - optional; specifies the name of the multicast VRF instance
interface-type - optional; restricts the output to information about PIM neighbors reachable on the specified interface

EXAMPLE OUTPUT:
R6#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.67.7       FastEthernet0/0          00:39:27/00:01:40 v2    1 / DR S
172.16.46.4       Serial0/0/0.1            00:38:18/00:01:17 v2    1 / S
172.16.26.2       FastEthernet0/1          00:39:55/00:01:41 v2    1 / S
R6#


show COMMAND:
show ip rpf [vrf vrf-name] {route-distinguisher | source-address [group-address] [rd route-
distinguisher]} [metric]
This command displays information that IP multicast routing uses to perform the Reverse Path Forwarding (RPF) check for a multicast source.
Where:

vrf - optional; specifies the name of the multicast VRF instance
route-distinguisher - Route Distinguisher (RD) of a VPNv4 prefix; entering the route-distinguisher argument displays RPF information related to the specified VPN route
source-address - IP address or name of a multicast source for which to display RPF information
group-address - optional; IP address or name of a multicast group for which to display RPF information
rd route-distinguisher - optional; displays the Border Gateway Protocol (BGP) RPF next hop for the VPN route associated with the RD specified for the route-distinguisher argument
metric - optional; displays the unicast routing metric

EXAMPLE OUTPUT:
R6#show ip rpf 192.1.2.2
RPF information for ? (192.1.2.2)
RPF interface: FastEthernet0/1
RPF neighbor: ? (172.16.26.2)
RPF route/mask: 192.1.2.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables
R6#

PIM-SM debug Command Tools


As a quick reference, here are the debug command tools utilized in this chapter. This section utilizes the
PIM-SM topology in Figure 4-4 for all example output.

Figure 4-4: A Sample PIM-SM Topology

debug COMMAND:
debug ip mpacket [vrf vrf-name] [detail | fastswitch] [access-list] [group]
This command displays multicast packets that are received and sent on the device.
Where:

vrf - optional; specifies the name of the multicast VRF instance
detail - optional; displays IP header and MAC information
fastswitch - optional; displays IP packet information in the fast path
access-list - optional; restricts the output per the specified access-list
group - optional; restricts the output to the specified multicast group

EXAMPLE OUTPUT:
IP(0): s=172.16.26.6 (FastEthernet0/1) d=239.9.9.9 (FastEthernet0/0) id=1, ttl=254,
prot=1, len=100(100), mforward


debug COMMAND:
debug ip pim [vrf vrf-name] [bsr]
This command displays Protocol Independent Multicast (PIM) packets received and sent, and displays PIM-related events.
Where:

vrf - optional; specifies the name of the multicast VRF instance

EXAMPLE OUTPUT:
R6#
PIM(0): Received v2 Join/Prune on FastEthernet0/0 from 172.16.67.7, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update FastEthernet0/0/172.16.67.7 to (*, 224.0.1.40), Forward state, by PIM
*G Join
PIM(0): Received v2 Join/Prune on FastEthernet0/0 from 172.16.67.7, to us
PIM(0): Join-list: (172.16.46.6/32, 239.9.9.9), S-bit set
PIM(0): Update FastEthernet0/0/172.16.67.7 to (172.16.46.6, 239.9.9.9), Forward state,
by PIM SG Join
R6#
PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 239.9.9.9
PIM(0): Send v2 Null Register to 192.1.7.7
PIM(0): Received v2 Register-Stop on FastEthernet0/0 from 192.1.7.7
PIM(0):  for source 0.0.0.0, group 0.0.0.0
R6#
PIM(0): Insert (172.16.15.1,239.9.9.9) join in nbr 172.16.26.2's queue
PIM(0): Building Join/Prune packet for nbr 172.16.26.2
PIM(0): Adding v2 (172.16.15.1/32, 239.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.26.2 (FastEthernet0/1)
R6#
PIM(0): Received v2 Bootstrap on FastEthernet0/1 from 172.16.26.2
PIM(0): Update (224.0.0.0/4, RP:192.1.7.7), PIMv2
PIM(0): Update (224.0.0.0/4, RP:192.1.5.5), PIMv2
PIM(0): Received v2 Bootstrap on Serial0/0/0.1 from 172.16.46.4
R6#
PIM(0): Received v2 Join/Prune on FastEthernet0/0 from 172.16.67.7, to us
PIM(0): Join-list: (172.16.15.1/32, 239.9.9.9), S-bit set
PIM(0): Update FastEthernet0/0/172.16.67.7 to (172.16.15.1, 239.9.9.9), Forward state,
by PIM SG Join
R6#
PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.0.1.40
PIM(0): Insert (*,224.0.1.40) join in nbr 172.16.26.2's queue
PIM(0): Building Join/Prune packet for nbr 172.16.26.2
PIM(0): Adding v2 (192.1.2.2/32, 224.0.1.40), WC-bit, RPT-bit, S-bit Join
PIM(0): Send v2 join/prune to 172.16.26.2 (FastEthernet0/1)
R6#
PIM(0): Received v2 Join/Prune on FastEthernet0/0 from 172.16.67.7, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update FastEthernet0/0/172.16.67.7 to (*, 224.0.1.40), Forward state, by PIM
*G Join
R6#

Chapter Challenge: PIM-SM Sample Trouble Tickets


The following section includes three sample Trouble Tickets designed to challenge the troubleshooting
skills that have been developed in all previous sections of this chapter. These Trouble Tickets were
designed using the Routing & Switching rental racks at www.ProctorLabs.com with the initial
configurations provided in the file MCAST-CH4-PIM-SM-TT-INITIAL.txt. Keep in mind these sample
Trouble Tickets were also tested against home practice racks and the most popular router emulators.
The network topology used in this section is shown in Figure 4-5 below:

Figure 4-5: The Chapter Challenge Topology

Trouble Ticket #1
Your supervisor has brought to your attention that the RP in this topology is not learning about the
multicast group 224.9.9.9 that R9 has joined. You must correct the issue.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that the RP is not creating the S,G entry for
the group 224.9.9.9 and any pings sent to this group from R1 fail. Correct this issue.
Trouble Ticket #3
Your supervisor has instructed you to prevent any multicast traffic associated with the group 224.9.9.9
from traversing the point-to-point frame-relay circuit between R4 and R6. You are not allowed to change
or remove any pim sparse-mode commands at the interface level to accomplish this.

Chapter Challenge: PIM-SM Sample Trouble Tickets Solutions


The following section includes the solutions to the three Trouble Tickets presented in the previous
section. Figure 4-6 provides a flowchart that outlines a "quick fire" approach to isolating and
remediating issues associated with PIM-SM.


Figure 4-6: PIM-SM Quick Fire Troubleshooting Flowchart


Trouble Ticket #1 Solution
Your supervisor has brought to your attention that the RP in this topology is not learning about the
multicast group 224.9.9.9 that R9 has joined. You must correct the issue.

Step 1 - Fault Verification:
R9 has joined the multicast group 224.9.9.9. Does the RP (R2) have a (*,G) entry for this multicast group?
R2#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,

Y - Joined MDT-data group, y - Sending to MDT-data group


Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 00:09:55/00:03:17, RP 192.1.2.2, flags: SJCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 00:09:05/00:03:17
GigabitEthernet0/0, Forward/Sparse, 00:09:50/00:02:33
Loopback0, Forward/Sparse, 00:09:55/00:02:28


There is no (*, 224.9.9.9) entry in the multicast routing table. This verifies that the problem actually
exists.

Step 2 - Fault Isolation:
The next course of action is to use the mtrace utility to rule out the possibility of an RPF issue between R9 (the IGMP host) and the RP.
R9#mtrace 172.16.79.9 192.1.2.2
Type escape sequence to abort.
Mtrace from 172.16.79.9 to 192.1.2.2 via RPF
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 192.1.2.2
-1 172.16.26.2 PIM [172.16.79.0/24]
-2 172.16.26.6 None No route


We see that there is no multicast route on R6 for the next hop toward the host. This output indicates that there is a Reverse Path Forwarding error in the path from R2 to R9. With this confirmed, the next step in the process is to use show ip rpf on R6.

R6#show ip rpf 172.16.79.9
RPF information for ? (172.16.79.9) failed, no route exists
R6#

There is no RPF interface that can be used to reach R9. We need to verify that PIM-SM is running on the interface that would be used to reach R9. The quickest method to verify this is to execute the show ip pim interface command on R6:

R6#show ip pim interface

Address          Interface              Ver/   Nbr    Query  DR     DR
                                        Mode   Count  Intvl  Prior
172.16.26.6      FastEthernet0/1        v2/S   1      30     1      172.16.26.6
172.16.46.6      Serial0/1/0.1          v2/S   1      30     1      0.0.0.0

The ip pim sparse-mode command is not configured on the FastEthernet0/0 interface. This unquestionably isolates our fault.

Step 3 - Fault Remediation:
In this scenario, the ip pim sparse-mode command needs to be added to FastEthernet0/0.
R6(config)#interface FastEthernet0/0
R6(config-if)#ip pim sparse-mode
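As a quick sanity check before the formal verification that follows, show ip pim interface on R6 should now list FastEthernet0/0 as a PIM-SM enabled interface; the output is omitted here:

R6#show ip pim interface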


Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble Ticket has been repaired using the same method as the initial fault verification.

R2#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:00:22/00:03:07, RP 192.1.2.2, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 00:00:22/00:03:07
(*, 224.0.1.40), 00:21:40/00:03:18, RP 192.1.2.2, flags: SJCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 00:20:51/00:03:18
GigabitEthernet0/0, Forward/Sparse, 00:21:36/00:02:36
Loopback0, Forward/Sparse, 00:21:41/00:02:43


The (*, 224.9.9.9) entry now appears in the multicast routing table of the RP.


Trouble Ticket #2 Solution


After solving Trouble Ticket #1, your supervisor has observed that the RP is not creating the S,G entry for
the group 224.9.9.9 and any pings sent to this group from R1 fail. Correct this issue.
Step 1 - Fault Verification:
Do pings succeed for the group 224.9.9.9 from R1?
R1#ping 224.9.9.9 r 1000
Type escape sequence to abort.
Sending 1000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
...................................<output omitted>


Does the (S,G) entry for the multicast group 224.9.9.9 appear in the multicast routing table of the RP?

R2#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:09:20/00:03:00, RP 192.1.2.2, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 00:09:20/00:03:00


We see that the fault does exist.

Step 2 - Fault Isolation:
In order to verify that RPF issues are not at fault, use the mtrace utility from R2 toward the multicast
source. We will perform this test in both directions.

R2#mtrace 192.1.2.2 172.16.15.1
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 172.16.15.1 via RPF
From source (?) to destination (?)
Querying full reverse path...

 0  172.16.15.1
-1  172.16.15.1 PIM [192.1.2.0/24]
-2  172.16.15.5 PIM [192.1.2.0/24]
-3  172.16.45.4 PIM [192.1.2.0/24]
-4  172.16.24.2 PIM [192.1.2.0/24]
-5  192.1.2.2


Next we will reverse the direction of the check:
R2#mtrace 172.16.15.1 192.1.2.2
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 192.1.2.2 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.2.2
-1 192.1.2.2 PIM [172.16.15.0/24]
-2 172.16.24.4 PIM [172.16.15.0/24]
-3 172.16.45.5 PIM [172.16.15.0/24]
-4 172.16.15.1


There are no RPF issues between R1 and R2. However, during this test the following console message appeared:
%PIM-4-INVALID_SRC_REG: Received Register from 172.16.45.5 for (172.16.15.1,
224.9.9.9), not willing to be RP


R2 is refusing to be the RP when it receives the PIM Register message from 172.16.45.5. Two issues can cause a device to refuse to become the RP: a missing PIM-SM configuration on the loopback interface used as the RP address, or an incorrectly applied accept-register command. First, we will verify that the interface with the IP address 192.1.2.2 is operating in PIM-SM.
R2#show run interface loopback 0
Building configuration...
Current configuration : 83 bytes
!
interface Loopback0
ip address 192.1.2.2 255.255.255.0
ip pim sparse-mode
end


We see that Loopback0 is running PIM-SM. This leads us to suspect a filtering or security issue associated with PIM-SM. To see all the configuration entries dealing with PIM or interfaces, use the following show run command and filter:


R2#show run | inc interface | pim
interface Loopback0
ip pim sparse-mode
interface GigabitEthernet0/0
ip pim sparse-mode
interface GigabitEthernet0/1
ip pim sparse-mode
interface Serial0/1/0
interface Serial0/2/0
ip pim rp-address 192.1.2.2
ip pim accept-register list 100



Note that the last line of this output shows that a pim accept-register command has been applied to the
RP. The access-list being called by this configuration is extended access-list 100. What does this access-
list permit or deny?
R2#show ip access-list 100
Extended IP access list 100
10 deny ip any any (12 matches)


The access-list denies all IP traffic. This explains why R2 (the RP) is refusing to accept the Register messages being sent by R5.

Step 3 - Fault Remediation:
In this scenario, access-list 100 needs to be modified to permit the PIM Register messages for the multicast source 172.16.15.1:
R2#conf t
R2(config)#ip access-list extended 100
R2(config-ext-nacl)#5 permit ip host 172.16.15.1 any
R2(config-ext-nacl)#end
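Because the accept-register access-list is evaluated against the encapsulated (source, group) pair carried in the Register message, an equally valid and tighter entry would also match the group address; the following is an alternative sketch, not required by the ticket:

R2(config)#ip access-list extended 100
R2(config-ext-nacl)#5 permit ip host 172.16.15.1 host 224.9.9.9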


Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially:

R1#ping 224.9.9.9 r 1000
Type escape sequence to abort.
Sending 1000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 28 ms
Reply to request 1 from 172.16.79.9, 28 ms
Reply to request 2 from 172.16.79.9, 28 ms

Reply to request 3 from 172.16.79.9, 28 ms
Reply to request 4 from 172.16.79.9, 28 ms
Reply to request 5 from 172.16.79.9, 28 ms
<output omitted>


The pings are successful, so the RP is now accepting the Register messages. It is still worthwhile to verify that the RP has now created the (S,G) entry for 224.9.9.9:

R2#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:35:58/00:02:59, RP 192.1.2.2, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 00:35:58/00:02:59
(172.16.15.1, 224.9.9.9), 00:03:06/00:03:21, flags: T
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 00:03:06/00:02:59

The entry has been created, demonstrating that the fault has been corrected.
Trouble Ticket #3 Solution
Your supervisor has instructed you to prevent any multicast traffic associated with the group 224.9.9.9
from traversing the point-to-point frame-relay circuit between R4 and R6. You are not allowed to change
or remove any pim sparse-mode commands at the interface level to accomplish this.
Step 1 - Fault Verification:
Generate multicast traffic destined to the group 224.9.9.9 on R1 and verify whether it is transiting the point-to-point Frame Relay link between R4 and R6.
R1#ping 224.9.9.9 r 1000
Type escape sequence to abort.

Sending 1000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:

Reply to request 0 from 172.16.79.9, 28 ms
Reply to request 1 from 172.16.79.9, 28 ms
Reply to request 2 from 172.16.79.9, 28 ms
Reply to request 3 from 172.16.79.9, 28 ms
Reply to request 4 from 172.16.79.9, 28 ms
Reply to request 5 from 172.16.79.9, 28 ms
<output omitted>


Is the traffic traversing the Frame Relay circuit? This can be determined using show ip mroute on R6:
R6#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:43:57/00:03:17, RP 192.1.2.2, flags: S
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:43:57/00:03:17
(172.16.15.1, 224.9.9.9), 00:01:31/00:03:27, flags: T
Incoming interface: Serial0/1/0.1, RPF nbr 172.16.46.4
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:01:31/00:03:17


R6 is receiving this multicast feed via its Serial0/1/0.1 interface. This demonstrates that the fault does
indeed exist.

Step 2 - Fault Isolation:
What is the shortest path from R9 to R1's FastEthernet0/0 interface?

R9#traceroute 172.16.15.1
Type escape sequence to abort.
Tracing the route to 172.16.15.1

  1 172.16.79.7  0 msec 4 msec 0 msec
  2 172.16.67.6  0 msec 0 msec 0 msec
  3 172.16.46.4  28 msec 28 msec 28 msec
  4 172.16.45.5  28 msec 28 msec 28 msec
  5 172.16.15.1  28 msec * 36 msec


This output tells us that the shortest path from R9 to R1 runs through the 172.16.46.0/24 network, which is the point-to-point interface. Knowing that the default behavior of PIM-SM is to switch over to the source-based tree (SPT), we know this is normal behavior, and it can be prevented using the ip pim spt-threshold command.

Step 3 - Fault Remediation:
In this scenario, the ip pim spt-threshold command needs to be configured on R7, the last-hop router toward the receiver.

R7(config)#ip pim spt-threshold infinity group-list 1
R7(config)#access-list 1 permit 224.9.9.9 0.0.0.0
R7(config)#end


Note: After making this configuration, we should issue clear ip mroute * on all devices.
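For example, on R6 (and likewise on the other PIM routers in the topology):

R6#clear ip mroute *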

Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially.

R1#ping 224.9.9.9 r 1000
Type escape sequence to abort.
Sending 1000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 32 ms
Reply to request 1 from 172.16.79.9, 28 ms
Reply to request 2 from 172.16.79.9, 28 ms
Reply to request 3 from 172.16.79.9, 28 ms
Reply to request 4 from 172.16.79.9, 28 ms
<output omitted>


Is the incoming interface for the group 224.9.9.9 on R6 still the frame relay link?

R6#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,

X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:04:25/00:03:21, RP 192.1.2.2, flags: S
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:03:26/00:02:58

The incoming interface is now FastEthernet0/1 toward R2, as expected. This fault has been corrected.


Chapter 5: Protocol Independent Multicast Sparse-Dense Mode (PIM-S-DM)



This chapter of IPv4/6 Multicast Operation and Troubleshooting details the processes and functionality of the PIM sparse-dense mode (PIM-S-DM) protocol. Following the coverage of the operational characteristics of the protocol, the focus becomes that of troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs for the PIM sparse-dense mode (PIM-S-DM) protocol. The chapter begins with a thorough review of PIM-S-DM, and then quickly launches into an exhaustive analysis of the art of troubleshooting this multicast routing protocol. This important chapter concludes with sample troubleshooting scenarios, reference materials for the most important show and debug commands, and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.


PIM-S-DM Technology Review


Earlier chapters of this book examined PIM dense mode and PIM sparse mode in detail. Recall that the configuration of either sparse mode or dense mode on an interface applies that setting to the interface as a whole. However, what about a single region where it is desirable to run in sparse mode for some groups and in dense mode for other groups? This is the purpose of PIM sparse-dense mode.
With PIM sparse-dense mode (PIM-S-DM), the interface is treated as dense mode if the group is in dense mode; the interface is treated as sparse mode if the group is in sparse mode. Obviously, for sparse mode operation, there must be a Rendezvous Point (RP).
PIM sparse-dense mode solves issues with Auto-RP, covered in detail in Chapter 8: AutoRP. With sparse-
dense mode, dense mode operation can distribute the Auto-RP information, while the multicast groups
for user data can operate in sparse fashion. To successfully implement Auto-RP and prevent any groups
other than 224.0.1.39 and 224.0.1.40 from operating in dense mode, Cisco recommends configuring a
"sink RP" or "RP of last resort". A sink RP is a statically configured RP that may or may not actually exist
in the network. Configuring a sink RP does not interfere with Auto-RP operation since the default
behavior is for Auto-RP messages to supersede static RP configurations.
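A minimal sketch of such a sink RP configuration follows; the RP address 192.0.2.1 is an assumption chosen purely for illustration and, per the definition above, need not belong to a live router:

! Sink RP / "RP of last resort" (sketch). Auto-RP learned mappings will
! still supersede this static entry by default.
access-list 10 permit 224.0.0.0 15.255.255.255
ip pim rp-address 192.0.2.1 10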
When an interface operates in dense mode, it will be populated into the outgoing interface list of a multicast routing table entry when either of the following conditions is true:

Members or DVMRP neighbors are on the interface
There are PIM neighbors and the group has not been pruned

When an interface operates in sparse mode, it will be populated into the outgoing interface list of a multicast routing table entry when either of the following conditions is true:

Members or DVMRP neighbors are on the interface
An explicit Join message has been received from a PIM neighbor on the interface

To configure sparse-dense mode on the interface, use the following command:


ip pim sparse-dense-mode
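For example, applied under a single interface (the interface name here is illustrative):

R6(config)#interface FastEthernet0/0
R6(config-if)#ip pim sparse-dense-mode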


The Operation and Troubleshooting of PIM-S-DM


Thus far in this text, we have discussed the individual operational and troubleshooting characteristics of PIM-DM and PIM-SM. Now we introduce a new concept. Sparse-dense mode is a combination of the two modes of PIM discussed in earlier chapters. It does not actually change any characteristics of PIM or define any new extensions to it. The application of the ip pim sparse-dense-mode command discussed in the Technology Review section merely allows an individual interface to forward multicast traffic for both sparse and dense mode groups. Using the topology illustrated in Figure 5-1, we will demonstrate this behavior.

Figure 5-1: Sample PIM-S-DM Topology

Introduction of the Topology


In this environment, all devices agree on the identity of the RP for the multicast address range 224.0.0.0 to 231.255.255.255; therefore, this traffic will be treated as sparse. Any multicast addresses not in this scope will be forwarded using dense mode. In this example, R9 has joined the multicast groups 224.9.9.9 and 239.9.9.9, as evidenced by show ip igmp membership on R9:
R9#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m> - <n> reporter in include mode, <m> reporter in exclude

Channel/Group    Reporter        Uptime    Exp.   Flags  Interface
*,224.9.9.9      172.16.79.9     01:30:33  02:51  2LA    Fa0/1
*,239.9.9.9      172.16.79.9     01:30:33  02:50  2LA    Fa0/1
*,224.0.1.40     172.16.79.9     01:30:33  02:57  2LA    Fa0/1


By using the ping utility on R1, we can verify that R9 will receive traffic for both of these multicast groups.
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms

R1#ping 239.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 239.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 8 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms

This output on R1 clearly illustrates that the multicast pings are successful, but have they been routed in dense or sparse mode? The show ip mroute command will tell us how traffic has been routed on any given device:

R2#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,

X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,


U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 01:41:29/00:03:26, RP 192.1.2.2, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 01:41:29/00:03:26
(172.16.15.1, 224.9.9.9), 00:00:48/00:03:08, flags: T
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 00:00:48/00:03:26
(*, 239.9.9.9), 00:00:27/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 00:00:27/00:00:00
(172.16.15.1, 239.9.9.9), 00:00:27/00:02:58, flags: T
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 00:00:27/00:00:00
<output omitted>

Observe that the traffic sent to 224.9.9.9 was flagged as "S" (sparse), whereas the traffic to the group 239.9.9.9 was flagged as "D" (dense). What determines whether a group is treated as sparse or dense mode traffic? Simply put, if a device knows of an RP for a given multicast address or scope of addresses, that group or scope is treated as sparse mode traffic. If a device does not have an RP mapping for a given group or scope of addresses, then this traffic will be forwarded in dense mode. Observe that in this topology the first half of the multicast range has been assigned to the RP 192.1.2.2. This can be discovered using show ip pim rp mapping:

R1#show ip pim rp mapping
PIM Group-to-RP Mappings
Acl: 1, Static
RP: 192.1.2.2 (?)


This tells us that an access-list was used to assign groups to an RP (192.1.2.2); this is often referred to as a group-to-RP mapping. What multicast groups are defined in the access-list?

Copyright by IPexpert, Inc. All Rights Reserved.

5-5

IPv4/6 Multicast Operation and Troubleshooting

Chapter 5: PIM - Sparse-Dense Mode (PIM-S-DM)


R1#show access-list 1
Standard IP access list 1
10 permit 224.0.0.0, wildcard bits 7.255.255.255 (978 matches)
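Although the initial configurations are not reproduced here, a group-to-RP mapping of this kind would typically be produced by commands along these lines on each router (a sketch, not the verbatim lab configuration):

access-list 1 permit 224.0.0.0 7.255.255.255
ip pim rp-address 192.1.2.2 1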


As we can see, this means that any multicast address between 224.0.0.0 and 231.255.255.255 will have an RP assigned, and will therefore be forwarded in sparse mode. This can be verified with the mtrace utility:

R1#mtrace 172.16.15.1 172.16.79.9 224.1.1.1
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 224.1.1.1
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM [172.16.15.0/24]
-2 * 172.16.79.7 PIM [172.16.15.0/24]
-3 * 172.16.67.6 PIM [172.16.15.0/24]
-4 * 172.16.26.2 PIM Reached RP/Core [172.16.15.0/24]
-5 * 172.16.24.4 PIM [172.16.15.0/24]
-6 * 172.16.45.5 PIM Prune sent upstream [172.16.15.0/24]
-7 * 172.16.15.1 PIM [172.16.15.0/24]

Observe that the -4 hop arrives at R2, which is designated as the RP or "Core" router. This means that this traffic will be forwarded in sparse mode, as evidenced by the output on R2:
R2#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.1.1.1), 00:05:11/stopped, RP 192.1.2.2, flags: SP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null
(172.16.15.1, 224.1.1.1), 00:00:11/00:02:48, flags: P
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list: Null


Observe that the (*,G) flag is "S" for sparse. Now we will look at the last group in the range that has an RP assigned.
R1#mtrace 172.16.15.1 172.16.79.9 231.255.255.255
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 231.255.255.255
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM [172.16.15.0/24]
-2 * 172.16.79.7 PIM [172.16.15.0/24]
-3 * 172.16.67.6 PIM [172.16.15.0/24]
-4 * 172.16.26.2 PIM Reached RP/Core [172.16.15.0/24]
-5 * 172.16.24.4 PIM [172.16.15.0/24]
-6 * 172.16.45.5 PIM Prune sent upstream [172.16.15.0/24]
-7 * 172.16.15.1 PIM [172.16.15.0/24]

Observe that the -4 hop arrives at R2, which is designated as the RP or "Core" router. This means that this traffic will be forwarded in sparse mode, as evidenced by the output on R2:
R2#show ip mroute 231.255.255.255
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 231.255.255.255), 00:00:35/stopped, RP 192.1.2.2, flags: SP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null
(172.16.15.1, 231.255.255.255), 00:00:35/00:02:24, flags: P
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list: Null

Observe that the (*,G) flag is "S" for sparse. Now we will look at the first group in the range that does not have an RP assigned.
R1#mtrace 172.16.15.1 172.16.79.9 232.0.0.1
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 232.0.0.1


From source (?) to destination (?)


Querying full reverse path...
0 172.16.79.9
-1 172.16.67.7 PIM [172.16.15.0/24]
-2 172.16.67.6 PIM [172.16.15.0/24]
-3 172.16.26.2 PIM [172.16.15.0/24]
-4 172.16.24.4 PIM [172.16.15.0/24]
-5 172.16.45.5 PIM [172.16.15.0/24]
-6 172.16.15.1 PIM [172.16.15.0/24]
-7 172.16.15.1

Observe that this time the trace passes through R2 without the "Reached RP/Core" designation; R2 is not acting as the RP for this group. This means that this traffic will be forwarded in dense mode, as evidenced by the output on R2:

R2#show ip mroute 232.0.0.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 232.0.0.1), 00:00:08/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 00:00:08/00:00:00
(172.16.15.1, 232.0.0.1), 00:00:08/00:02:55, flags: T
Incoming interface: FastEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 00:00:08/00:00:00

Observe that the (*,G) flag is "D" for dense.



The Problem with PIM-S-DM
PIM-S-DM is very useful in environments where some multicast traffic needs to be delivered in a simple PIM-DM fashion. In Chapter 8: AutoRP, we discuss multicast addresses that are excellent examples of groups that require the flooding of packets in dense mode simultaneously with sparse mode traffic. More often than not, PIM-S-DM will not be needed except in some environments using AutoRP; therefore, these scenarios will be discussed in that chapter.


PIM-S-DM seems to be the most flexible of the PIM modes we have discussed thus far. However, there is one behavior of PIM-S-DM that makes it less than attractive as a PIM deployment option. Remember, any group that does not have an RP defined is forwarded in dense mode. Take a situation where significant amounts of multicast traffic are forwarded in the topology toward a handful of interested hosts. Assume now that the RP fails. In this situation, the routers will begin treating all traffic as dense and flooding the network with the multicast traffic. This means that all PIM-enabled devices will receive all multicast traffic. Additionally, the prune process we discussed in Chapter 3: Protocol Independent Multicast - Dense Mode (PIM-DM) will add further to the network congestion. The environment we are currently working with uses static RP assignment; this fallback to PIM-DM behavior only takes place when the RP has been dynamically learned, as with AutoRP, and it will be covered in depth in Chapter 8: AutoRP. It is ironic that the unwanted behavior of PIM-S-DM only takes place while using the very protocol it was meant to facilitate. This behavior can be stopped in a number of ways, including the "sink RP" mentioned in the Technology Review section, or the more commonly used no ip pim dm-fallback command on all PIM-S-DM enabled routers.
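A minimal sketch of the latter approach, applied globally on each PIM-S-DM router (R6 is shown only as an example):

R6(config)#no ip pim dm-fallback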


Common Issues with PIM-S-DM


While not as problematic as other versions of PIM, PIM-S-DM has a number of issues that can surface
when deployed. The most common problems relate to the exchange of essential control plane
information used to determine if traffic will be sparse or dense mode forwarded. It is somewhat more
difficult to isolate issues if the traffic is PIM-SM forwarded rather than PIM-DM. For simplicity in
troubleshooting common issues while deploying PIM-S-DM, we identify three categories of problems:
Reverse Path Forwarding (RPF) failures, Unicast Routing and Forwarding Problems, and Multicast
Routing and Forwarding Problems.
RPF Failures
In the Troubleshooting PIM-S-DM section, this text discussed the PIM-S-DM operational process. Since
these mechanisms utilize messages communicated using multicast they all are subject to Reverse Path
Forwarding (RPF) checks. Logically then, RPF issues can prevent optimal multicast routing, or stop
multicast forwarding entirely.
Whether traffic is forwarded as sparse or dense, RPF checks are performed in both the control and data
plane processes.

Control Plane - The PIM-S-DM control plane is created using PIM messages. PIM sends messages
via the link-local multicast group 224.0.0.13, and these messages are therefore subject to RPF
checks. It is important to note that control plane RPF checks are performed against the source IP
address of each PIM packet as it arrives. More often than not, this will be the IP address of the
adjacent neighbor.

Data Plane - PIM-S-DM performs an RPF check on each individual multicast packet before
deciding to forward it. This means that the source IP address of each multicast packet a router
receives must be reachable out the receiving interface before the router will forward it to an
adjacent neighbor. When traffic is forwarded as PIM-DM, the RPF check is always performed
against the source of the multicast feed. When traffic is treated as PIM-SM, RPF checks are first
done toward the RP and then toward the source as part of the shortest path tree failover process.

The RPF check mechanism can result in scenarios where the control plane fails to form correctly, or
multicast packets fail to transit the multicast tree. When only a few packets, or no packets at all, reach
the receivers, RPF failures will normally be the cause.
We will perform a walk through for each of these RPF issues in the PIM-S-DM Sample Troubleshooting
Scenarios section that follows.
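
As a quick first check for any suspected RPF problem, show ip rpf (covered in the show command reference later in this chapter) can be run against the address in question: the multicast source for data plane checks, or the upstream PIM neighbor for control plane checks. For example, on R7 in the sample topology:

R7#show ip rpf 172.16.15.1
RPF information for ? (172.16.15.1)
RPF interface: FastEthernet0/0
RPF neighbor: ? (172.16.67.6)
RPF route/mask: 172.16.15.0/24
RPF type: unicast (eigrp 100)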


Unicast Routing and Forwarding Problems


From what we learned in Chapter 4: Protocol Independent Multicast - Sparse Mode (PIM-SM), it is clear
that the ability of the PIM-DR to register an active source with the RP is dependent on its ability to
unicast to the RP. Of course, since this reachability is unicast, it is not subject to RPF checks. As a result,
common issues are:

- The PIM-DR fails to send PIM Register messages to the RP.
- RPF errors induced by the unicast routing environment.
- PIM-S-DM routers do not agree on the RP for the same source.

This is a situation where it will be necessary to look at the underlying unicast routing protocols used in the
network. Typically, this would be an issue of asymmetric routing, and it should be something obvious
once the routing tables of the source and transit devices are analyzed.
Multicast Routing and Forwarding Problems
These problems manifest themselves in more subtle ways when compared to the previous points. As
discussed earlier, the majority of PIM-S-DM's operational mechanisms involve the formation of the
control plane. When traffic is forwarded as sparse, the RP manages the multicast domain and helps
maintain the multicast routing tables.
When sparse mode forwarding is being used, situations like the following normally exist when
information fails to propagate to any or all devices, even though RPF checks and unicast routing seem to
be functioning correctly:

- Multicast data packets are lost in the multicast domain.
- Multicast control plane packets are lost in the multicast domain.

In instances when dense mode forwarding is being used, it is important to keep in mind that every
multicast packet has a TTL value, just like its unicast IP counterpart. In many environments using
PIM-DM, this fact is used as a method to scope or contain multicast packets to the internal network. A
multicast threshold is effectively employed to keep multicast packets from leaking into any
internetwork space. However, it is also possible to create a multicast routing fault by setting the
multicast threshold on a given router interface.
If a packet's TTL is higher than the multicast threshold configured on an interface (and it passes the RPF
check), the packet is forwarded. If the TTL of the packet is lower than the multicast threshold, the
router drops the packet. The possible range for a multicast threshold value is 0 to 255, with 0 meaning
all packets will be forwarded versus 255 where virtually no packets will be forwarded.
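
For reference, TTL scoping is applied with a single interface command; the following is a minimal sketch, with the interface and the threshold value of 16 chosen purely for illustration:

R2(config)#interface GigabitEthernet0/1
R2(config-if)#ip multicast ttl-threshold 16

With this in place, only multicast packets arriving with a TTL greater than 16 are forwarded out GigabitEthernet0/1, so a threshold set too high on a transit interface can silently drop wanted traffic, which is the fault type described above.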


In the PIM-S-DM Sample Troubleshooting Scenarios section that follows, troubleshooting these issues
is demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each
symptom, isolate the cause, and remediate the issue.


PIM-S-DM Sample Troubleshooting Scenarios


This section provides a detailed look at how to best approach troubleshooting some of the common
issues discussed in previous sections. It includes coverage of a methodology for identification, isolation,
and remediation of faults in the PIM-S-DM operational process. The intent here is to hone and develop
troubleshooting skills tailored to first identify whether a problem exists, and then to begin isolating the
cause of the fault in the most efficient manner possible. Figure 5-2 illustrates the topology used to
explore this topic.

Figure 5-2: A Sample PIM-S-DM Topology

In the Common Issues with PIM-S-DM section, three primary types of problems were identified: RPF
failures, Unicast Routing and Forwarding Problems, and Multicast Routing and Forwarding Problems.
This section explores these three categories of failure, by directing our attention to the commands
necessary to verify a problem, isolate it and remediate it.
RPF Fault Isolation in PIM-S-DM (for Sparse Mode Traffic)
Setting the stage: R9 will join the multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end

Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the
group 224.9.9.9?
By generating a ping on R1 to the group 224.9.9.9, R1 can emulate a multicast feed that will be
forwarded as sparse mode:
R1#ping 224.9.9.9 repeat 10


Type escape sequence to abort.


Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........

The output from the ping command is unsuccessful.


Step One: Verify possible RPF issues in the path between the RP and source bi-directionally.
On R1 look at the output of mtrace:
R1#mtrace 172.16.15.1 192.1.2.2 224.9.9.9
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 192.1.2.2 via group 224.9.9.9
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 192.1.2.2
-1 * 172.16.24.2 PIM Reached RP/Core [172.16.15.0/24]
-2 * 172.16.24.4 PIM [172.16.15.0/24]
-3 * 172.16.45.5 PIM Prune sent upstream [172.16.15.0/24]
-4 * 172.16.15.1 PIM [172.16.15.0/24]
R1#mtrace 192.1.2.2 172.16.15.1 224.9.9.9
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 172.16.15.1 via group 224.9.9.9
From source (?) to destination (?)
Querying full reverse path...
0 172.16.15.1
-1 172.16.15.1 PIM [192.1.2.0/24]
-2 172.16.15.5 PIM Prune sent upstream [192.1.2.0/24]
-3 172.16.45.4 PIM [192.1.2.0/24]
-4 172.16.24.2 PIM Reached RP/Core [192.1.2.0/24]


There are no issues in either direction.
Step Two: Verify RPF issues in the path between the RP and host bi-directionally.
R9#mtrace 192.1.2.2 172.16.79.9 224.9.9.9
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 172.16.79.9 via group 224.9.9.9
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 172.16.79.9 PIM [192.1.2.0/24]
-2 172.16.79.7 None No route


R9#mtrace 172.16.79.9 192.1.2.2 224.9.9.9


Type escape sequence to abort.
Mtrace from 172.16.79.9 to 192.1.2.2 via group 224.9.9.9
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 192.1.2.2
-1 * 172.16.26.2 PIM Reached RP/Core [172.16.79.0/24]
-2 * 172.16.26.6 PIM [172.16.79.0/24]
-3 * 172.16.67.7 None Multicast disabled [172.16.79.0/24]


The trace from R9 to the RP reveals that the interface with the IP address 172.16.67.7 on R7 is not
running PIM. Logically, the next step is to verify this on R7:

R7#show run interface FastEthernet0/0
Building configuration...
Current configuration : 96 bytes
!
interface FastEthernet0/0
ip address 172.16.67.7 255.255.255.0
duplex auto
speed auto
end


This interface is not running PIM-S-DM. To correct this issue, the ip pim sparse-dense-mode command
will be applied:

R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#interface FastEthernet0/0
R7(config-if)#ip pim sparse-dense-mode
R7(config-if)#end
R7#
%PIM-5-NBRCHG: neighbor 172.16.67.6 UP on interface FastEthernet0/0
%SYS-5-CONFIG_I: Configured from console by console
R7#
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 172.16.67.7 on interface
FastEthernet0/0


The PIM neighbor relationship comes up with R6.
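
If desired, the new adjacency can be confirmed directly before retesting; a sketch of the expected result (uptime and expiry values will naturally vary):

R7#show ip pim neighbor
PIM Neighbor Table
Neighbor Address   Interface          Uptime/Expires      Ver   DR Prio/Mode
172.16.67.6        FastEthernet0/0    00:00:15/00:01:30   v2    1 / S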

Is the ping successful on R1?

R1#ping 224.9.9.9 repeat 10


Type escape sequence to abort.


Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 8 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 4 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms


RPF Fault Isolation in PIM-S-DM (for Dense Mode Traffic)
Setting the stage: R9 will join the multicast group 239.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 239.9.9.9
R9(config-if)#end

Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the
group 239.9.9.9?
By generating a ping on R1 to the group 239.9.9.9, R1 can emulate a multicast feed that will be
forwarded as dense mode:
R1#ping 239.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 239.9.9.9, timeout is 2 seconds:
..........

The output from the ping command is unsuccessful.


Single Step Process: Verify possible RPF issues in the path between the host and source bi-directionally.
On R1 look at the output of mtrace:
R1#mtrace 172.16.15.1 172.16.79.9 239.9.9.9
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 239.9.9.9
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9


-1 * 172.16.79.9 PIM Prune sent upstream [172.16.15.0/24]
-2 * 172.16.79.7 PIM [172.16.15.0/24]
-3 * 172.16.67.6 PIM [172.16.15.0/24]
-4 * 172.16.26.2 PIM [172.16.15.0/24]
-5 * 172.16.24.4 PIM [172.16.15.0/24]
-6 * 172.16.45.5 None Multicast disabled [172.16.15.0/24]


In this instance, it is not necessary to run the command bi-directionally because we see the value of
None. This tells us that the interface with IP address 172.16.45.5 is not PIM enabled. This can be verified
with show run interface on R5:

R5#show run interface fa0/1
Building configuration...
Current configuration : 96 bytes
!
interface FastEthernet0/1
ip address 172.16.45.5 255.255.255.0
duplex auto
speed auto
end

To correct this issue, the ip pim sparse-dense-mode command will be applied:



R5#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#interface FastEthernet0/1
R5(config-if)#ip pim sparse-dense-mode
R5(config-if)#end
%PIM-5-NBRCHG: neighbor 172.16.45.4 UP on interface FastEthernet0/1
R5#
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 172.16.45.5 on interface
FastEthernet0/1
%SYS-5-CONFIG_I: Configured from console by console


The PIM neighbor relationship comes up with R4.

Is the ping successful on R1?

R1#ping 239.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 239.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 4 ms


Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms


Unicast Routing and Forwarding Problems
Unicast routing issues can only affect PIM-S-DM traffic that is being forwarded as sparse. They are
typified by the RP failing to receive a PIM Register message, either because the message is never
generated by the PIM-DR or because the RP never receives it. This means that the (S,G) entry for the
group in question will never appear in the multicast routing table of the RP.
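
Because the Register message is an ordinary unicast packet from the PIM-DR to the RP, a quick sanity check is a ping from the DR to the RP address, sourced from the address the DR registers with; a sketch using this chapter's addressing (the choice of source address is illustrative):

R5#ping 192.1.2.2 source 172.16.45.5

If this sourced ping fails while general connectivity works, the Register path itself is broken, exactly the condition isolated in the scenario that follows.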

Setting the stage: R9 will join the multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end

Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the
group 224.9.9.9?
By generating a ping on R1 to the group 224.9.9.9, R1 can emulate a multicast feed that will be
forwarded as sparse mode:
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........

The output from the ping command is unsuccessful.


Single Step Process: Verify the existence of an (S,G) entry for the group 224.9.9.9 on the next-hop router
from the source and on the RP with show ip mroute:

R5#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,


X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,


U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:03:59/stopped, RP 192.1.2.2, flags: SPF
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.4
Outgoing interface list: Null
(172.16.15.1, 224.9.9.9), 00:00:06/00:02:59, flags: PFT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.15.1, Registering
Outgoing interface list: Null


The (172.16.15.1, 224.9.9.9) entry tells us clearly that R5 is actively registering the source with the RP.
Is the RP getting the Register Message?

R2#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:06:31/00:02:52, RP 192.1.2.2, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 00:06:31/00:02:52


We see no (S,G) for the group and the (*,G) entry is not "stopped". This means something happened to
the unicast Register message. The quickest way to identify what has happened in this scenario will be to
use traceroute:

R5#traceroute 192.1.2.2
Type escape sequence to abort.
Tracing the route to 192.1.2.2


1 172.16.45.4 0 msec 0 msec 0 msec


2 172.16.24.2 !A * !A


When we see the value "!A" in the output of a traceroute, IOS is telling us that the packets are being
administratively blocked. This is usually the direct result of an access-list. The traceroute indicates that
the interface with the IP address 172.16.24.2 is where the ACL may be applied. This can be verified with
show run interface on R2:

R2#show run interface GigabitEthernet0/0
Building configuration...
Current configuration : 140 bytes
!
interface GigabitEthernet0/0
ip address 172.16.24.2 255.255.255.0
ip access-group 100 in
ip pim sparse-mode
duplex auto
speed auto
end

The access-group command under this interface is referencing the extended access-list 100. What is
being denied by this ACL?
R2#show access-list 100
Extended IP access list 100
10 deny ip host 172.16.45.5 host 192.1.2.2 (417 matches)
20 permit ip any any (113 matches)


Any traffic sourced from the FastEthernet0/1 interface of R5 destined to the Loopback0 interface of R2
is being blocked. This will include any PIM Register messages arriving from R5. This can be corrected by
modifying the ACL on R2. We will remove line 10.

R2(config)#ip access-list extended 100
R2(config-ext-nacl)#no 10
R2(config-ext-nacl)#end
%SYS-5-CONFIG_I: Configured from console by console
R2#show access-list 100
Extended IP access list 100
20 permit ip any any (139 matches)


Are pings from R1 successful now?

R1#ping 224.9.9.9 repeat 10


Type escape sequence to abort.


Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 4 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms


This tells us that the multicast stream is reaching R9. As a final verification, we can see on R2 that the
(S,G) entry is created and that the (*,G) entry is indeed sparse mode, as designated by the "S" flag.

R2#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:24:08/00:02:59, RP 192.1.2.2, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 00:24:08/00:02:59
(172.16.15.1, 224.9.9.9), 00:02:08/00:03:21, flags: T
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 00:02:08/00:03:20


The configuration is working as expected.


Multicast Routing and Forwarding Problems


Multicast Routing and Forwarding issues affect both sparse and dense mode traffic in PIM-S-DM. The
most common issues that cause these problems are multicast filters and thresholds.

Sparse Mode Multicast Routing and Forwarding Problem
Setting the stage: R9 will join the multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end

Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the
group 224.9.9.9?
By generating a ping on R1 to the group 224.9.9.9, R1 can emulate a multicast feed that will be
forwarded as sparse mode:
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........

The output from the ping command is unsuccessful.


Step One: Verify possible RPF issues in the path between the source and the RP.
R1#mtrace 172.16.15.1 192.1.2.2 224.9.9.9
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 192.1.2.2 via group 224.9.9.9
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 192.1.2.2
-1 * 172.16.24.2 PIM Reached RP/Core [172.16.15.0/24]
-2 * 172.16.24.4 PIM [172.16.15.0/24]
-3 * 172.16.45.5 PIM [172.16.15.0/24]
-4 * 172.16.15.1 PIM Prune sent upstream [172.16.15.0/24]

There are no RPF issues indicated.


Step Two: Initiate a ping with a high repeat count from R1:
R1#ping 224.9.9.9 repeat 500


Type escape sequence to abort.


Sending 500, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
...........<output omitted>


Step Three: Verify the (*,G) and (S,G) entries for the group 224.9.9.9 on the RP (R2):
R2#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:14:48/stopped, RP 192.1.2.2, flags: SP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null
(172.16.15.1, 224.9.9.9), 00:05:09/00:02:50, flags: P
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list: Null


Step Four: Enable debug ip pim on all devices in the path of the multicast tree, and watch for any
unusual messages regarding the group 224.9.9.9.

R7#debug ip pim
PIM debugging is on
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): No addition to olist at scoped boundary rpt-bit src 192.1.2.2 grp 224.9.9.9
R7#


This tells us that there is a "scoped boundary" assigned on the FastEthernet0/1 interface of R7; verified
with show run interface:

R7#show run interface FastEthernet0/1
Building configuration...
Current configuration : 151 bytes
!
interface FastEthernet0/1


ip address 172.16.79.7 255.255.255.0
ip pim sparse-dense-mode
ip multicast boundary 2 out
duplex auto
speed auto
end


This can be corrected by removing the ip multicast boundary command:

R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#interface FastEthernet0/1
R7(config-if)#no ip multicast boundary 2 out
R7(config-if)#end
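
Removing the boundary is the quickest fix for this lab. If the boundary were required to scope other groups, an alternative (hypothetical, since the contents of access-list 2 are not shown, and assuming no earlier entry in it denies the group) would be to permit the affected group in the ACL and leave the ip multicast boundary 2 out statement in place:

R7(config)#access-list 2 permit 224.9.9.9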


Are pings from R1 successful now?

R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms


They are successful.
Dense Mode Multicast Routing and Forwarding Problem
Setting the stage: R9 will join the multicast group 239.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 239.9.9.9
R9(config-if)#end

Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the
group 239.9.9.9?


By generating a ping on R1 to the group 239.9.9.9, R1 can emulate a multicast feed that will be
forwarded as dense mode traffic:
R1#ping 239.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 239.9.9.9, timeout is 2 seconds:
..........

The output from the ping command is unsuccessful.


Step One: Verify possible RPF issues in the path between the source and the host bi-directionally.

R1#mtrace 172.16.15.1 172.16.79.9 239.9.9.9
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 239.9.9.9
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM Prune sent upstream [172.16.15.0/24]
-2 * 172.16.79.7 PIM [172.16.15.0/24]
-3 * 172.16.67.6 PIM [172.16.15.0/24]
-4 * 172.16.26.2 PIM [172.16.15.0/24]
-5 * 172.16.24.4 PIM [172.16.15.0/24]
-6 * 172.16.45.5 PIM [172.16.15.0/24]
-7 * 172.16.15.1 PIM [172.16.15.0/24]
R1#mtrace 172.16.79.9 172.16.15.1 239.9.9.9
Type escape sequence to abort.
Mtrace from 172.16.79.9 to 172.16.15.1 via group 239.9.9.9
From source (?) to destination (?)
Querying full reverse path...
0 172.16.15.1
-1 172.16.15.1 PIM [172.16.79.0/24]
-2 172.16.15.5 PIM [172.16.79.0/24]
-3 172.16.45.4 PIM [172.16.79.0/24]
-4 172.16.24.2 PIM [172.16.79.0/24]
-5 172.16.26.6 PIM [172.16.79.0/24]
-6 172.16.67.7 PIM [172.16.79.0/24]
-7 172.16.79.9


This indicates that there are no evident RPF failures.

Step Two: Initiate a ping with a high repeat count from R1:
R1#ping 239.9.9.9 repeat 50000


Type escape sequence to abort.


Sending 50000, 100-byte ICMP Echos to 239.9.9.9, timeout is 2 seconds:
...........<output omitted>


Step Three: Verify the (*,G) and (S,G) entries for the group 239.9.9.9 on all devices between R1 and R9.
We will start with R9:
R9#sh ip mroute 239.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 239.9.9.9), 00:10:47/00:02:28, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse-Dense, 00:10:47/00:00:00


We see that the (S,G) entry is missing from R9. This is not a surprise, because R9 is not receiving the
multicast feed. Moving one hop closer to the source, we will look at R7:

R7#show ip mroute 239.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 239.9.9.9), 00:10:59/stopped, RP 0.0.0.0, flags: DC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse-Dense, 00:10:59/00:00:00
FastEthernet0/1, Forward/Sparse-Dense, 00:10:59/00:00:00, limit 0 kbps


(172.16.15.1, 239.9.9.9), 00:02:25/00:00:33, flags: T


Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse-Dense, 00:02:27/00:00:00, limit 0 kbps


We see that R7 has both the (*,G) and (S,G) entries, but we also see that the interface FastEthernet0/1 in
the OIL has a rate limit of 0 applied. This is confirmed with show run interface:

R7#show run interface FastEthernet0/1
Building configuration...
Current configuration : 166 bytes
!
interface FastEthernet0/1
ip address 172.16.79.7 255.255.255.0
ip pim sparse-dense-mode
ip multicast rate-limit out group-list 2 0
duplex auto
speed auto
end


This is most quickly corrected by removing the ip multicast rate-limit configuration:

R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#interface FastEthernet0/1
R7(config-if)#no ip multicast rate-limit out group-list 2 0
R7(config-if)#end
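
Had the intent been to police rather than block this feed, the same command accepts a nonzero kilobit-per-second rate; a sketch follows, with the 512 kbps value chosen purely for illustration:

R7(config)#interface FastEthernet0/1
R7(config-if)#ip multicast rate-limit out group-list 2 512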


Are the pings successful from R1?

R1#ping 239.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 239.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms

Reply to request 9 from 172.16.79.9, 1 ms


They are successful.

PIM-S-DM show Command Tools


As a quick reference, here are the show command tools utilized in this chapter. This section utilizes the
PIM-S-DM topology in Figure 5-3 for all example output.

Figure 5-3: A Sample PIM-S-DM Topology

show COMMAND:
show ip igmp membership [group-address | group-name] [tracked] [all]
This command displays Internet Group Management Protocol (IGMP) membership information for
multicast groups and (S, G) channels.
Where:

group-address - optional; specifies the specific multicast group address
tracked - optional; displays the multicast groups with the explicit tracking feature enabled
all - optional; displays detailed information about the multicast groups with and without the explicit tracking feature enabled

EXAMPLE OUTPUT:
R9#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel


1,2,3 - The version of IGMP the group is in


Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m> - <n> reporter in include mode, <m> reporter in exclude
Channel/Group     Reporter        Uptime    Exp.    Flags  Interface
*,224.9.9.9       172.16.79.9     00:12:33  02:31   2LA    Fa0/1
*,239.9.9.9       172.16.79.9     00:12:33  02:28   2LA    Fa0/1
*,224.0.1.40      172.16.79.9     00:12:33  02:35   2LA    Fa0/1
R9#


show COMMAND:
show ip mroute
This command displays the contents of the multicast routing (mroute) table.
EXAMPLE OUTPUT:
R7#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:13:45/00:03:12, RP 192.1.7.7, flags: SJC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse-Dense, 00:05:29/00:03:12
(*, 239.9.9.9), 00:13:45/00:03:12, RP 192.1.7.7, flags: SJC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse-Dense, 00:05:29/00:03:12
(172.16.15.1, 239.9.9.9), 00:00:19/00:02:40, flags:
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse-Dense, 00:00:19/00:03:11


(*, 224.0.1.40), 00:13:47/00:02:19, RP 192.1.2.2, flags: SJCL


Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
Loopback0, Forward/Sparse-Dense, 00:05:35/00:02:18
R7#


show COMMAND:
show ip pim interface
This command displays information about interfaces configured for Protocol Independent Multicast
(PIM).
EXAMPLE OUTPUT:
R7#show ip pim interface
Address          Interface             Ver/   Nbr    Query  DR     DR
                                       Mode   Count  Intvl  Prior
192.1.7.7        Loopback0             v2/SD  0      30     1      192.1.7.7
172.16.67.7      FastEthernet0/0       v2/SD  1      30     1      172.16.67.7
172.16.79.7      FastEthernet0/1       v2/SD  1      30     1      172.16.79.9
R7#

show COMMAND:
show ip pim rp mapping
This command displays information about Protocol Independent Multicast (PIM) RP mappings.
EXAMPLE OUTPUT:
R7#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is a candidate RP (v2)
Group(s) 224.0.0.0/4
RP 192.1.7.7 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime 150
Uptime: 00:13:30, expires: 00:02:12
RP 192.1.5.5 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime 150
Uptime: 00:13:30, expires: 00:02:11
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)
R7#


show COMMAND:


show ip pim [vrf vrf-name] neighbor [interface-type interface-number]


This command displays information about Protocol Independent Multicast (PIM) neighbors discovered
by PIM version 1 router query messages or PIM version 2 hello messages.
Where:

vrf - optional; specifies the name of the multicast VRF instance
interface-type - optional; restricts the output to information about PIM neighbors reachable on the specified interface

EXAMPLE OUTPUT:
R7#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires     Ver   DR
Address                                                             Prio/Mode
172.16.67.6       FastEthernet0/0          00:07:58/00:01:37  v2    1 / S
172.16.79.9       FastEthernet0/1          00:07:49/00:01:17  v2    1 / DR S
R7#


show COMMAND:
show ip rpf [vrf vrf-name] {route-distinguisher | source-address [group-address] [rd route-
distinguisher]} [metric]
This command displays information that IP multicast routing uses to perform the Reverse Path
Forwarding (RPF) check for a multicast source.
Where:

vrf - optional; specifies the name of the multicast VRF instance
route-distinguisher - route distinguisher (RD) of a VPNv4 prefix; entering the route-distinguisher argument displays RPF information related to the specified VPN route
source-address - IP address or name of a multicast source for which to display RPF information
group-address - optional; IP address or name of a multicast group for which to display RPF information
rd route-distinguisher - optional; displays the Border Gateway Protocol (BGP) RPF next hop for the VPN route associated with the RD specified for the route-distinguisher argument
metric - optional; displays the unicast routing metric


EXAMPLE OUTPUT:
R7#show ip rpf 172.16.15.1
RPF information for ? (172.16.15.1)
RPF interface: FastEthernet0/0
RPF neighbor: ? (172.16.67.6)
RPF route/mask: 172.16.15.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables
R7#

PIM-S-DM debug Command Tools


As a quick reference, here are the debug command tools utilized in this chapter. This section utilizes the
PIM-S-DM topology in Figure 5-4 for all example output.

Figure 5-4: A Sample PIM-S-DM Topology

debug COMMAND:
debug ip mpacket [vrf vrf-name] [detail | fastswitch] [access-list] [group]
This command displays multicast packets that are received and sent on the device.
Where:

vrf - optional; specifies the name of the multicast VRF instance
detail - optional; displays IP header and MAC information
fastswitch - optional; displays IP packet information in the fast path
access-list - optional; restricts the output per the specified access-list
group - optional; restricts the output to the specified multicast group


EXAMPLE OUTPUT:
IP(0): s=172.16.26.6 (FastEthernet0/1) d=239.9.9.9 (FastEthernet0/0) id=1, ttl=254,
prot=1, len=100(100), mforward


debug COMMAND:
debug ip pim [vrf vrf-name] [bsr]
This command displays Protocol Independent Multicast (PIM) packets received and sent, and displays
PIM-related events.
Where:

vrf - optional; specifies the name of the multicast VRF instance

EXAMPLE OUTPUT:
R7#debug ip pim
PIM debugging is on
R7#
PIM(0): Received v2 Register on FastEthernet0/0 from 172.16.45.5
for 172.16.15.1, group 239.9.9.9
PIM(0): Insert (172.16.15.1,239.9.9.9) join in nbr 172.16.67.6's queue
PIM(0): Forward decapsulated data packet for 239.9.9.9 on FastEthernet0/1
PIM(0): Building Join/Prune packet for nbr 172.16.67.6
PIM(0): Adding v2 (172.16.15.1/32, 239.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.67.6 (FastEthernet0/0)
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Join-list: (*, 224.9.9.9), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update FastEthernet0/1/172.16.79.9 to (*, 224.9.9.9), Forward state, by PIM *G
Join
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Join-list: (*, 239.9.9.9), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update FastEthernet0/1/172.16.79.9 to (*, 239.9.9.9), Forward state, by PIM *G
Join
PIM(0): Update FastEthernet0/1/172.16.79.9 to (172.16.15.1, 239.9.9.9), Forward state,
by PIM *G Join
R7#
PIM(0): Send RP-reachability for 239.9.9.9 on FastEthernet0/1
PIM(0): Send RP-reachability for 224.9.9.9 on FastEthernet0/1
R7#
PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.9.9.9
R7#
PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.0.1.40
PIM(0): Insert (*,224.0.1.40) join in nbr 172.16.67.6's queue


PIM(0): Building Join/Prune packet for nbr 172.16.67.6
PIM(0): Adding v2 (192.1.2.2/32, 224.0.1.40), WC-bit, RPT-bit, S-bit Join
PIM(0): Send v2 join/prune to 172.16.67.6 (FastEthernet0/0)
R7#
PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 239.9.9.9
R7#

Chapter Challenge: PIM-S-DM Sample Trouble Tickets


The following section includes two sample Trouble Tickets designed to challenge the troubleshooting
skills that have been developed in all previous sections of this chapter. These Trouble Tickets were
designed using the Routing & Switching rental racks at www.ProctorLabs.com with the initial
configurations provided in the file MCAST-CH5-PIM-S-DM-TT-INITIAL.txt. Keep in mind these sample
Trouble Tickets were also tested against home practice racks and the most popular router emulators.
The network topology used in this section is shown in Figure 5-5 below:

Figure 5-5: The Chapter Challenge Topology

Trouble Ticket #1
Your supervisor has brought to your attention that users on the VLAN79 segment connecting R7 and R9
cannot receive the multicast feed for 233.99.99.99. You have been instructed to use R9 and R1 for any
testing. You must correct the issue.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that users on the VLAN79 segment
connecting R7 and R9 cannot receive any sparse mode forwarded multicast traffic.
Again, you are to use R1 as the source and R9 as a simulated host. You have been assigned the multicast
group 224.99.99.99 for all testing. Correct this issue.


Chapter Challenge: PIM-S-DM Sample Trouble Tickets Solutions


The following section includes the solutions to the two Trouble Tickets presented in the previous
section.
Trouble Ticket #1 Solution
Your supervisor has brought to your attention that users on the VLAN79 segment connecting R7 and R9
cannot receive the multicast feed for 233.99.99.99. You have been instructed to use R9 and R1 for any
testing. You must correct the issue.
Step 1 - Fault Verification:
Does R9 reply to pings to the multicast group 233.99.99.99?
R1#ping 233.99.99.99 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 233.99.99.99, timeout is 2 seconds:
..........


The pings are not successful. This verifies that the problem actually exists.

Step 2 - Fault Isolation:
The next course of action is to use the mtrace utility to rule out the possibility of an RPF issue. Make
certain to perform this process in both directions, first from R1 toward R9, then from R9 toward R1.

R1#mtrace 172.16.15.1 172.16.79.9 233.99.99.99
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 233.99.99.99
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM [172.16.15.0/24]
-2 * 172.16.79.7 PIM [172.16.15.0/24]
-3 * 172.16.67.6 PIM [172.16.15.0/24]
-4 * 172.16.26.2 PIM [172.16.15.0/24]
-5 * 172.16.24.4 PIM [172.16.15.0/24]
-6 * 172.16.45.5 PIM [172.16.15.0/24]
-7 * 172.16.15.1 PIM [172.16.15.0/24]
R1#mtrace 172.16.79.9 172.16.15.1 233.99.99.99
Type escape sequence to abort.
Mtrace from 172.16.79.9 to 172.16.15.1 via group 233.99.99.99
From source (?) to destination (?)


Querying full reverse path...


0 172.16.15.1
-1 172.16.15.1 PIM [172.16.79.0/24]
-2 172.16.15.5 PIM [172.16.79.0/24]
-3 172.16.45.4 PIM [172.16.79.0/24]
-4 172.16.24.2 PIM [172.16.79.0/24]
-5 172.16.26.6 PIM [172.16.79.0/24]
-6 172.16.67.7 PIM [172.16.79.0/24]
-7 172.16.79.9


This output indicates that there are no evident RPF issues in the path between R1 and R9. This means
that we will need to check the status of the (S,G) pairs for this group on all devices between R1 and R9.
Use a ping with a high repeat count on R1 for this process:

R1#ping 233.99.99.99 repeat 5000
Type escape sequence to abort.
Sending 5000, 100-byte ICMP Echos to 233.99.99.99, timeout is 2 seconds:
.............................<output omitted>

Now use show ip mroute on all the devices and observe the output for the (S,G) pair (172.16.15.1,
233.99.99.99) on the transit devices.
R5#sh ip mroute 233.99.99.99
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 233.99.99.99), 00:01:41/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse-Dense, 00:01:41/00:00:00
FastEthernet0/0, Forward/Sparse-Dense, 00:01:41/00:00:00
(172.16.15.1, 233.99.99.99), 00:01:41/00:01:18, flags: PT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.15.1
Outgoing interface list:
FastEthernet0/1, Prune/Sparse-Dense, 00:01:42/00:01:17


R5 has the (S,G) pair, but it is in the Prune state for the interface in the OIL. We will now look at R4:
R4#show ip mroute 233.99.99.99
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 233.99.99.99), 00:11:57/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse-Dense, 00:11:57/00:00:00
FastEthernet0/0, Forward/Sparse-Dense, 00:11:57/00:00:00
(172.16.15.1, 233.99.99.99), 00:02:05/00:00:56, flags: PT
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.5
Outgoing interface list:
FastEthernet0/0, Prune/Sparse-Dense, 00:02:06/00:00:53

R4 has the (S,G) pair, but it is in the Prune state for the interface in the OIL. We will now look at R2:
R2#sh ip mroute 233.99.99.99
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 233.99.99.99), 00:02:23/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 00:02:23/00:00:00
GigabitEthernet0/0, Forward/Sparse-Dense, 00:02:23/00:00:00


(172.16.15.1, 233.99.99.99), 00:02:23/00:00:41, flags: PT


Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Prune/Sparse-Dense, 00:02:24/00:00:37

R2 has the (S,G) pair, and it is in the Prune state for the interface in the OIL. We will now look at R6:
R6#show ip mroute 233.99.99.99
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 233.99.99.99), 00:02:43/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse-Dense, 00:02:43/00:00:00
(172.16.15.1, 233.99.99.99), 00:02:43/00:00:16, flags: PT
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list: Null

R6 has the (S,G) pair, but there are no interfaces in the OIL. We need to examine the nature of the PIM-
S-DM configuration on R6:

R6#show ip pim interface
Address          Interface             Ver/   Nbr    Query  DR     DR
                                       Mode   Count  Intvl  Prior
172.16.67.6      FastEthernet0/0       v2/S   1      30     1      172.16.67.7
172.16.26.6      FastEthernet0/1       v2/SD  1      30     1      172.16.26.6

Observe that the FastEthernet0/0 interface is running PIM version 2, but it is in sparse mode, as
indicated by the value of "S". In this lab, this flag should be "SD" for sparse-dense. This can be confirmed
further by looking at the configuration under this interface:

R6#show run interface FastEthernet0/0
Building configuration...


Current configuration : 116 bytes


!
interface FastEthernet0/0
ip address 172.16.67.6 255.255.255.0
ip pim sparse-mode
duplex auto
speed auto
end


This interface cannot forward traffic in dense mode with the current configuration. This has isolated the
cause of the problem.

Step 3 - Fault Remediation:
In this scenario, the ip pim sparse-dense-mode command should be applied under the interface, as we
have done in the past.

R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#interface FastEthernet0/0
R6(config-if)#no ip pim sparse-mode
R6(config-if)#ip pim sparse-dense-mode
R6(config-if)#end
R6#
%PIM-5-NBRCHG: neighbor 172.16.67.7 UP on interface FastEthernet0/0
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 172.16.67.7 on interface
FastEthernet0/0
%SYS-5-CONFIG_I: Configured from console by console


Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method as the initial fault verification.

R1#ping 233.99.99.99 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 233.99.99.99, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms


Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms


The issue has been corrected.
Trouble Ticket #2 Solution
After solving Trouble Ticket #1, your supervisor has observed that users on the VLAN79 segment
connecting R7 and R9 cannot receive any sparse mode forwarded multicast traffic.
Again, you are to use R1 as the source and R9 as a simulated host. You have been assigned the multicast
group 224.99.99.99 for all testing. Correct this issue.
Step 1 - Fault Verification:
Can R1 ping the group 224.99.99.99 successfully?
R1#ping 224.99.99.99 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.99.99.99, timeout is 2 seconds:
..........


The ping test to the multicast group 224.99.99.99 fails. This verifies that the problem actually exists.

Step 2 - Fault Isolation:
In order to verify that RPF issues are not at fault, use the mtrace utility between the RP and the host,
and between the RP and the source.

R2#mtrace 192.1.2.2 172.16.79.9 224.99.99.99
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 172.16.79.9 via group 224.99.99.99
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM Prune sent upstream [192.1.2.0/24]
-2 * 172.16.79.7 PIM [192.1.2.0/24]
-3 * 172.16.67.6 PIM [192.1.2.0/24]
-4 * 172.16.26.2 PIM Reached RP/Core [192.1.2.0/24]

There are no issues between the RP and the host. What about the path between the RP and the source?

R2#mtrace 192.1.2.2 172.16.15.1 224.99.99.99
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 172.16.15.1 via group 224.99.99.99
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:


0 172.16.15.1
-1 * 172.16.15.1 PIM Prune sent upstream [192.1.2.0/24]
-2 * 172.16.15.5 PIM Prune sent upstream [192.1.2.0/24]
-3 * 172.16.45.4 PIM Prune sent upstream [192.1.2.0/24]
-4 * 172.16.24.2 PIM Reached RP/Core [192.1.2.0/24]


There are no issues in the creation of the control plane between the RP and the source. After enabling
debug ip pim on R2, initiate a ping from R1 with a high repeat count and see whether the next-hop
router and the RP create (S,G) entries for 224.99.99.99:

R2#debug ip pim
PIM debugging is on

Now on R1:
R1#ping 224.99.99.99 repeat 1000

Type escape sequence to abort.


Sending 1000, 100-byte ICMP Echos to 224.99.99.99, timeout is 2 seconds:
.................... <output omitted>

We will watch the output of debug ip pim on R2:


R2#
PIM(0): Received v2 Register on GigabitEthernet0/0 from 172.16.45.5
for 172.16.15.1, group 224.99.99.99
%PIM-4-INVALID_SRC_REG: Received Register from 172.16.45.5 for (172.16.15.1,
224.99.99.99), not willing to be RP
R2#
PIM(0): Register for 172.16.15.1, group 224.99.99.99 rejected
PIM(0): Send v2 Register-Stop to 172.16.45.5 for 172.16.15.1, group 224.99.99.99

We see that R2 is receiving the PIM Register message from R5, but the router is refusing to be the RP for
this group, claiming the registration is coming from an invalid source. This is most commonly caused by
a filter or security configuration. These commands are best located with show run:
R2#show run | include interface|pim
interface Loopback0
ip pim sparse-dense-mode
interface GigabitEthernet0/0
ip pim sparse-dense-mode
interface GigabitEthernet0/1
ip pim sparse-dense-mode
interface Serial0/1/0
interface Serial0/2/0


ip pim rp-address 192.1.2.2 1
ip pim accept-register list 199


We see the ip pim accept-register command. This command references the extended access-list 199.
What is permitted and denied in this ACL?

R2#show access-list 199
Extended IP access list 199
10 deny ip any any (2 matches)


Any PIM Register messages from any device will be denied. This isolates the cause of our fault.

Step 3 - Fault Remediation:
In this scenario, the ip pim accept-register command needs to be removed from R2:

R2#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#no ip pim accept-register list 199
R2(config)#end
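
Alternatively, if register filtering is required by policy, access-list 199 could be rewritten to permit the legitimate source-and-group pair instead of denying everything; in the extended ACL used by ip pim accept-register, the source field matches the multicast source and the destination field matches the group. A hypothetical sketch for this lab's addressing:

R2(config)#no access-list 199
R2(config)#access-list 199 permit ip host 172.16.15.1 host 224.99.99.99
R2(config)#ip pim accept-register list 199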


Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially:

R1#ping 224.99.99.99 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.99.99.99, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 4 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms


The issue has been corrected.


Chapter 6: Bidirectional Protocol Independent Multicast (BIDIR-PIM)



In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and the functionality
of the Bidirectional PIM (BIDIR-PIM) protocol are examined in great depth. Once the operational
characteristics of this important protocol are detailed completely, the focus becomes that of
troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and
the implementation of repairs for the Bidirectional PIM protocol. The chapter begins with a thorough
review of BIDIR-PIM, and then quickly launches into an exhaustive analysis of the art of
troubleshooting this multicast routing protocol. This important chapter concludes with sample
troubleshooting scenarios, reference materials for the most important show and debug commands, and
exciting challenges that allow readers to practice implementing the troubleshooting skills they have
obtained.


BIDIR-PIM Technology Review


The hard work spent in Chapter 4: Protocol Independent Multicast - Sparse Mode (PIM-SM) is about to
pay large dividends. Bidirectional PIM (BIDIR-PIM) is operationally very similar. Like PIM sparse-mode, it
builds a shared tree rooted at the RP; unlike PIM sparse-mode, it forwards source traffic unconditionally
toward the RP upstream on that shared tree and has no registering process for sources. This ensures all
routers send traffic using strictly
(*, G) multicast routing entries. This elimination of source-specific states allows much better scalability
in the case of many different sources.
Devices signal membership in a bidirectional PIM group using explicit Join messages. Sources send
multicast traffic up the shared tree toward the Rendezvous Point (RP). The RP then passes traffic down
the tree to any receivers on each branch. Note that for these packets passed downstream, there is no
fundamental difference between BIDIR-PIM and PIM-SM. The unique behavior is with the traffic that
passes from the various sources upstream to the RP.
In PIM-SM, traffic from sources destined for the RP does not flow upstream in the shared tree, but
downstream along the shortest path tree of the source until it reaches the RP. From the RP, traffic flows
along the shared tree toward all receivers. In BIDIR-PIM, devices can pass traffic up the shared tree
toward the RP. To avoid multicast packet looping, BIDIR-PIM introduces a new mechanism called the
designated forwarder (DF). This establishes a loop-free shortest path tree rooted at the RP.
The designated forwarder (DF) election takes place for all PIM routers on every network segment and
point-to-point link. The procedure selects one router as the DF for every RP of bidirectional groups. The
designated forwarder is responsible for forwarding multicast packets received on that network. Routers
use unicast routing metrics for this DF election process. The router with the most preferred unicast
routing metric to the RP becomes the designated forwarder. This ensures that only one copy of every
packet is sent to the RP, even if there are parallel equal-cost paths.
Note: Because a DF is selected for every RP of bidirectional groups, multiple routers may be elected as
DF on any network segment.
The procedure for joining the shared tree of a bidirectional group is almost identical to that used in PIM-
SM, except that with BIDIR-PIM, the role of the designated router (DR) is assumed by the designated
forwarder for the RP. On a network that has local receivers, only the router elected as the DF populates
the outgoing interface list (olist) upon receiving Internet Group Management Protocol (IGMP) Join
messages. This DF then sends (*, G) Join and Leave messages upstream toward the RP. When a
downstream router wishes to join the shared tree, the reverse path forwarding neighbor in the PIM Join
and Leave messages is always the DF elected for the interface that leads to the RP. When a router
receives a Join or Leave message, and the router is not the DF for the receiving interface, the message is
ignored. Otherwise, the router updates the shared tree in the same way as in sparse mode.


Another unique property of BIDIR-PIM is that there is no need to send PIM assert messages. This is
because the DF election procedure eliminates parallel downstream paths from any RP. An RP never joins
a path back to the source, nor will it send any register stops.
The configuration of BIDIR-PIM on the router is very simple. First, configure PIM sparse-mode on the
appropriate interfaces using the command ip pim sparse-mode. Then, use the global configuration
mode command for BIDIR-PIM:
ip pim bidir-enable
Finally, to configure the BIDIR-PIM RP, use the command:
ip pim rp-address rp-address [access-list] [override] bidir
Where:

access-list permits the specification of the bidirectional multicast group


override specifies that if dynamic and static group-to-RP mappings are used together and
there is an RP address conflict, the RP address configured for a static group-to-RP mapping will
take precedence
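
Pulling these commands together, here is a minimal configuration sketch for a BIDIR-PIM router, as described
above. The interface name and the ACL contents are illustrative assumptions rather than values taken from the
chapter topology:

ip multicast-routing
ip pim bidir-enable
!
interface FastEthernet0/0
 ip pim sparse-mode
!
access-list 10 permit 239.0.0.0 0.255.255.255
ip pim rp-address 192.1.2.255 10 bidir

In this sketch, the ACL restricts the bidirectional RP mapping to the 239.0.0.0/8 range; omitting the
access-list applies the mapping to the entire 224.0.0.0/4 range.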


The Operation and Troubleshooting of BIDIR-PIM


The primary operational stages and the role-based assignment of devices employed in BIDIR-PIM were
clearly described in the Technology Review section. Here in the Operation and Troubleshooting section,
we are going to take an analytical approach that will allow us to observe each of these processes.
BIDIR-PIM RP
In PIM-SM, the RP had many roles, the most notable being responding to PIM Register messages and
creating the source-based tree between the RP and multicast sources. As indicated in the Technology
Review of this chapter, BIDIR-PIM utilizes the concept of the RP and relies on it exclusively for the
forwarding and distribution of multicast traffic. This means, as stated earlier, that BIDIR-PIM only uses
shared trees. Thus, the formation of the source-based tree and responding to PIM Register messages
are no longer part of the RP's function in BIDIR-PIM. In BIDIR-PIM, the role of the RP is significantly
different from PIM-SM. The RP is still responsible for facilitating sources and hosts learning about each
other, but rather than being a physical device it is more of a logical construct that fills the role of a
destination vector. This means that the address of the RP does not have to be assigned to a physical
device; it can simply be part of a subnet on the device intended to be the RP. The topology
outlined in Figure 6-1 will be used to illustrate this concept.

Figure 6-1: Sample BIDIR-PIM Topology


In this topology, R2 is the RP; all devices are running BIDIR-PIM, as evidenced by the output of show ip
pim rp mapping on all devices:

R1#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static, Bidir Mode
RP: 192.1.2.255 (?)


R5#show ip pim rp mapping


PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static, Bidir Mode
RP: 192.1.2.255 (?)
R4#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static, Bidir Mode
RP: 192.1.2.255 (?)
R2#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static, Bidir Mode
RP: 192.1.2.255 (?)
R6#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static, Bidir Mode
RP: 192.1.2.255 (?)
R7#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static, Bidir Mode
RP: 192.1.2.255 (?)
R9#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static, Bidir Mode
RP: 192.1.2.255 (?)


In this instance, all devices agree that the RP for the entire multicast range of 224.0.0.0/4 will be R2. The
interesting thing, however, is that the address of the RP is 192.1.2.255. It should be pointed out that this
address does not exist in the network, as evidenced with the ping utility on R1.

R1#ping 192.1.2.255
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.1.2.255, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
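
How can a nonexistent address act as the RP? One plausible arrangement, consistent with the outputs shown in
this section (the exact loopback addressing is our assumption), is for R2 to carry the 192.1.2.0/24 subnet
on a loopback while the .255 host address itself is assigned to no device:

interface Loopback0
 ip address 192.1.2.2 255.255.255.0
 ip pim sparse-mode
!
ip pim rp-address 192.1.2.255 bidir

Because R2 advertises 192.1.2.0/24 into the unicast routing protocol, every router can resolve an RPF
interface toward 192.1.2.255 even though no device ever answers on that address.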


Can we successfully reach a host using this IP address for the RP? To find out, we will have R9 join the
multicast group 224.9.9.9.

R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end


With this accomplished we will then ping this group from R1:

R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 28 ms
Reply to request 1 from 172.16.79.9, 28 ms
Reply to request 2 from 172.16.79.9, 28 ms
Reply to request 3 from 172.16.79.9, 28 ms
Reply to request 4 from 172.16.79.9, 28 ms
Reply to request 5 from 172.16.79.9, 28 ms
Reply to request 6 from 172.16.79.9, 28 ms
Reply to request 7 from 172.16.79.9, 28 ms
Reply to request 8 from 172.16.79.9, 28 ms
Reply to request 9 from 172.16.79.9, 28 ms

The pings are successful. This illustrates that the RP address is not required to exist on a physical
interface, and reinforces the concept that in BIDIR-PIM the RP behaves more like a destination vector, as
mentioned previously. So in BIDIR-PIM the RP is not required to be a physical router as it is in PIM-SM.
Understanding this vector idea may prove useful in troubleshooting BIDIR-PIM problems.
Host-to-RP Shared Tree
Just like in PIM-SM, joins are processed toward the RP in the creation of the customary shared tree.
During this process the host sends an IGMP join, which triggers the creation of (*,G) entries in the
multicast routing tables of the routers in the transit path from the IGMP router to the RP. In this
demonstration, R9 has joined the multicast group 224.9.9.9. This means that it has sent an IGMP join to
R7, the IGMP router on its VLAN79 segment. That this IGMP membership report has made it to
R7 is evidenced by the output of show ip igmp groups:
R7#show ip igmp groups
IGMP Connected Group Membership
Group Address    Interface                Uptime    Expires   Last Reporter   Group Accounted
224.9.9.9        FastEthernet0/1          01:19:18  00:02:39  172.16.79.9
224.0.1.40       FastEthernet0/1          01:19:14  00:02:46  172.16.79.9
224.0.1.40       FastEthernet0/0          01:20:06  00:02:28  172.16.67.6

We see that 172.16.79.9 was the last reporter for the group 224.9.9.9. Now that R7 has this IGMP
membership message from the host, the router will send PIM joins toward the RP for this group. These
joins will be propagated hop-by-hop using the link-local address 224.0.0.13. We verify this process by
looking for the creation of the (*, 224.9.9.9) entries in the multicast routing tables of R7, R6, and R2 as
evidenced by show ip mroute:
R7#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 01:18:25/00:02:37, RP 192.1.2.255, flags: BC
Bidir-Upstream: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 01:12:14/00:02:50
FastEthernet0/0, Bidir-Upstream/Sparse, 01:12:14/00:00:00

The (*,G) entry is on R7 for the group 224.9.9.9. Observe the B flag on the entry. Also, observe that there
is no incoming interface list in the output. It has been replaced with the Bidir-Upstream entry. The
interface found in this section (FastEthernet0/0) is the RPF interface used to reach the RP. This means
any traffic received on this "upstream" interface will be forwarded "downstream" to any receivers on
the shared tree.
R6#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires


Interface state: Interface, Next-Hop or VCD, State/Mode


(*, 224.9.9.9), 01:18:25/00:02:52, RP 192.1.2.255, flags: B
Bidir-Upstream: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 01:12:14/00:02:52
FastEthernet0/1, Bidir-Upstream/Sparse, 01:12:14/00:00:00

The entry also exists on R6.


R2#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 01:18:25/00:02:47, RP 192.1.2.255, flags: B
Bidir-Upstream: Loopback0, RPF nbr 192.1.2.255
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 01:12:14/00:02:47
Loopback0, Bidir-Upstream/Sparse, 01:12:14/00:00:00

Source-to-RP Shared Tree


Here is where BIDIR-PIM deviates from the standard behavior of PIM-SM. We discussed the PIM Register
and Register-Stop functions in Chapter 4: Protocol Independent Multicast - Sparse Mode (PIM-SM).
In BIDIR-PIM, these processes no longer take place. Once a source for a given multicast group becomes
active in a BIDIR-PIM environment, a router configured for BIDIR-PIM will immediately forward that
traffic toward the RP. Once this traffic reaches the network for the RP, connectivity to the hosts is
completed. Again, in many ways it is the subnet and not the actual router that is acting as the RP. We
will generate a multicast source from R1 for the group 224.9.9.9 and observe this behavior hop-by-hop
toward the RP.
R1#show ip mroute 224.9.9.9
Group 224.9.9.9 not found
R1#ping 224.9.9.9 repeat 1000
Type escape sequence to abort.
Sending 1000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:


Reply to request 0 from 172.16.79.9, 32 ms


Reply to request 1 from 172.16.79.9, 28 ms
Reply to request 2 from 172.16.79.9, 28 ms
<output omitted>

Next we will look at the contents of the multicast routing table on the next hop router using show ip
mroute:
R5#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:00:37/00:02:50, RP 192.1.2.255, flags: BP
Bidir-Upstream: FastEthernet0/1, RPF nbr 172.16.45.4
Outgoing interface list:
FastEthernet0/1, Bidir-Upstream/Sparse, 00:00:37/00:00:00

Observe that we have the entry for 224.9.9.9; again note the absence of the Incoming Interface List,
which has been replaced with the Bidir-Upstream interface toward the RP. Next look at R4:
R4#show ip mroute 224.9.9.9
Group 224.9.9.9 not found
R4#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:01:00/00:02:51, RP 192.1.2.255, flags: BP
Bidir-Upstream: FastEthernet0/0, RPF nbr 172.16.24.2
Outgoing interface list:


FastEthernet0/0, Bidir-Upstream/Sparse, 00:01:00/00:00:00

R4 has the entry, and it is connected directly to the RP:


R2#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 01:54:24/00:03:25, RP 192.1.2.255, flags: B
Bidir-Upstream: Loopback0, RPF nbr 192.1.2.255
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 01:48:13/00:03:16
Loopback0, Bidir-Upstream/Sparse, 01:48:13/00:00:00

Observe that the RP has only the single entry for the group 224.9.9.9. As mentioned in the Technology
Review, BIDIR-PIM does not utilize (S,G) entries, nor do we need to be concerned with the multicast
stream reverting to the shortest path. In BIDIR-PIM, this behavior simply does not take place. The RP will
remain the root of the BIDIR-PIM environment and traffic will always travel upstream toward the RP or
downstream away from the RP.
Another deviation from typical PIM-SM behavior is how multicast packets are forwarded; this process,
as discussed, involves the election of a Designated Forwarder.
BIDIR-PIM Neighbors and Designated Forwarder Election
The good news is BIDIR-PIM does not use RPF checks for multicast traffic. BIDIR-PIM relies on another
mechanism for loop prevention. PIM neighbors exchange PIM Hello messages on adjacent links. If a
router is BIDIR-PIM enabled that fact is communicated inside the Hello messages it sends to its
neighbors. These messages and the BIDIR enabled status they contain is essential for the BIDIR-PIM
control plane to form properly. In order for BIDIR-PIM to operate, a single device on each network
segment must be elected to be the Designated Forwarder (DF) for a given group-to-RP mapping, on a
segment-by-segment basis.

It is the purpose of this DF to create a loop-free shared tree to the RP, and a DF will be elected for every
BIDIR-PIM group-to-RP mapping on each segment carrying BIDIR-PIM traffic. This mechanism eliminates the
need for RPF checks. The ultimate role of the DF is to forward multicast traffic received on its segment.
The role of DF is assigned based on the lowest metric to reach the RP; in the event of a tie in these
values, the highest IP address is the selection criterion. This can be observed in the output of debug ip
pim df on R1:

R1#debug ip pim df
PIM RP DF debugging is on
R1#clear ip route *
R1#
PIM(0): RP(192.1.2.255) metric changed from (NULL, unicast, 2147483647, -1)
PIM(0): to (FastEthernet0/0, unicast, 119, 3)
PIM(0): Elect DF for FastEthernet0/0, new RP 192.1.2.255
PIM(0): Send v2 Offer on FastEthernet0/0 (Non-DF) for RP 192.1.2.255
PIM(0): Sender 172.16.15.1, pref 2147483647, metric 2147483647
PIM(0): Receive DF Winner message from 172.16.15.5 on FastEthernet0/0 (Non-DF)
PIM(0): RP 192.1.2.255, pref 120, metric 2
PIM(0): Metric is better


When troubleshooting this protocol it is important to note that if adjacent routers are not both BIDIR-
PIM enabled then the DF election process will not take place. This fact is evident when a router receives
a PIM Hello message that does not contain the BIDIR flag. If the designated forwarder cannot be elected
then no BIDIR-PIM traffic can be forwarded to or from the segment. This process is designed to protect
the network from possible multicast routing loops.
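
A quick way to confirm that a neighbor is advertising the BIDIR capability is to check the Mode column of
show ip pim neighbor for the B flag. A representative sample follows; the addresses and timers shown here
are illustrative rather than taken from a capture in this section:

R5#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.45.4       FastEthernet0/1          00:05:12/00:01:33 v2    1 / B S

If the B flag is missing for a neighbor, that neighbor is not BIDIR-PIM enabled, and no DF will be elected
on the shared segment.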

The fact that BIDIR-PIM uses nothing but shared trees, eliminates the PIM Register process, and works
bidirectionally makes it a very attractive version of PIM to utilize in environments that require
applications like video conferencing. However, to reduce operational overhead by eliminating possibly
hundreds of multicast routing states, BIDIR-PIM has eliminated the source state (S,G) entries from the
multicast routing table. The most used tool in troubleshooting multicast issues to date has been these
source state entries. Therefore, the very thing that makes BIDIR-PIM more efficient also makes it more
difficult to troubleshoot.



Common Issues with BIDIR-PIM


BIDIR-PIM has a number of issues that can surface when deployed. The most common problems relate
to the exchange of essential control plane information. The control plane establishment in BIDIR-PIM is
very streamlined. For simplicity in troubleshooting common issues while deploying BIDIR-PIM, we
identify two categories of problems: RP and DF failures, and Multicast Routing and Forwarding
Problems.
RP and DF Failures
In the Troubleshooting BIDIR-PIM section, this text discussed the shared tree mechanism used to create
the BIDIR-PIM operational environment. The RP is so pivotal in BIDIR-PIM because all traffic is forwarded
to the RP and then from the RP to any member hosts. These trees can transport multicast packets bi-
directionally. Keep in mind that these bidirectional trees are created using a fail-safe design. This design
involves the Designated Forwarder (DF) election mechanism operating on each link in the multicast
topology.
With the assistance of the DF, multicast data is natively forwarded from sources to the Rendezvous-
Point (RP) and hence along the shared tree to receivers without requiring source-specific state
information being added to the multicast routing table. It is necessary to observe that this process only
works if all devices in the multicast path agree on the identity of the RP. In this section, we are only
working with a static mapping to a single RP, but in environments with multiple statically assigned RPs,
like those discussed in Chapter 7: Static Rendezvous Points (RPs), or with dynamically assigned RPs using
BSR or Auto-RP, agreement on the specific Group-to-RP mapping is essential on all devices in the multicast
domain. Fragmented agreement on the identity of the RP could result in loops or complete failure of the
BIDIR-PIM configuration.
Multicast Routing and Forwarding Problems
These problems manifest themselves in more subtle ways when compared to the previous points. As
discussed earlier, the majority of the BIDIR-PIM operational mechanism involves the formation of the
control plane so that the RP can manage the multicast domain and help maintain the multicast routing
tables.
When unicast routing is functioning correctly but multicast information nevertheless fails to propagate
to some or all devices, situations like the following exist:

Multicast data packets are lost in the multicast domain.


Multicast control plane packets are lost in the multicast domain.


In the BIDIR-PIM Sample Troubleshooting Scenarios section that follows, troubleshooting these issues
is demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each
symptom, isolate the cause, and remediate the issue.


BIDIR-PIM Sample Troubleshooting Scenarios


This section provides a detailed look at how to best approach troubleshooting some of the common
issues discussed in previous sections. It includes coverage of a methodology for identification, isolation,
and remediation of faults in the BIDIR-PIM operational process. The intent here is to hone and develop
troubleshooting skills tailored to first identify if a problem exists, and then how to begin isolating the
cause of the fault in the most efficient manner possible. Figure 6-2 illustrates the topology used to
explore this topic.

Figure 6-2: A Sample BIDIR-PIM Topology

In the Common Issues with BIDIR-PIM section, two primary types of problems were identified: RP and
DF failures, and Multicast Routing and Forwarding Problems. This section explores these two categories
of failure by directing our attention to the commands necessary to verify a problem, isolate it, and
remediate it.
RP Failure in BIDIR-PIM
Setting the stage: R9 will join the multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end

Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the
group 224.9.9.9?
By generating a ping on R1 to the group 224.9.9.9 R1 can emulate a multicast feed:
R1#ping 224.9.9.9 repeat 100000000
Type escape sequence to abort.


Sending 100000000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:


.......... <output omitted>

The output from the ping command is unsuccessful.


Step One: Follow the multicast feed hop-by-hop.
On R5 look at the output of show ip mroute:
R5#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:00:57/00:02:02, RP 192.2.2.255, flags: BP
Bidir-Upstream: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null

Are there interfaces in the OIL for the (*, 224.9.9.9) group? The output indicates that there are none;
the Outgoing interface list shows the value "Null". Observe also that the RP is listed as 192.2.2.255
while the RPF nbr is 0.0.0.0, meaning R5 cannot resolve a neighbor toward that RP. This tells us something
is wrong with the RP.

Step Two: Identify the RP network and verify reachability to it.

Find the configured RP address with show ip pim rp:

R5#show ip pim rp
Group: 224.9.9.9, RP: 192.2.2.255, uptime 00:01:01, expires never
Group: 224.0.1.40, RP: 192.2.2.255, uptime 00:07:42, expires never

This output tells us that the Group-to-RP mapping on R5 for 224.9.9.9 is for 192.2.2.255. Is this even
reachable in our topology?
R5#show ip route 192.2.2.255
% Network not in table


Immediately, we can see there is no route on R5 for this address. The drawing tells us that the RP is
supposed to be 192.1.2.255. To correct this issue, use the correct ip pim rp-address command:

R5#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#ip pim rp-address 192.1.2.255 bidir
R5(config)#end

Is the ping successful on R1?



R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 28 ms
Reply to request 1 from 172.16.79.9, 28 ms
Reply to request 2 from 172.16.79.9, 28 ms
Reply to request 3 from 172.16.79.9, 28 ms
Reply to request 4 from 172.16.79.9, 28 ms
Reply to request 5 from 172.16.79.9, 28 ms
Reply to request 6 from 172.16.79.9, 28 ms
Reply to request 7 from 172.16.79.9, 28 ms
Reply to request 8 from 172.16.79.9, 28 ms
Reply to request 9 from 172.16.79.9, 28 ms

DF Failure in BIDIR-PIM
Setting the stage: R9 will join the multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end

Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the
group 224.9.9.9?
By generating a ping on R1 to the group 224.9.9.9 R1 can emulate a multicast feed:
R1#ping 224.9.9.9 repeat 100000000
Type escape sequence to abort.
Sending 100000000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
.......... <output omitted>

The output from the ping command is unsuccessful.


Step One: Follow the multicast feed hop-by-hop.


On R5 look at the output of show ip mroute:
R5#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:01:47/00:02:52, RP 192.1.2.255, flags: BP
Bidir-Upstream: FastEthernet0/1, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Bidir-Upstream/Sparse, 00:00:50/00:00:00

Observe that the RPF nbr is 0.0.0.0. This means either this device is the RP, or the device believes the RP
address is invalid. This device has no interface in the network 192.1.2.0/24, so it cannot be the RP. This
leaves the latter possibility.
Step Two: Verify the identity of the designated forwarder for all interfaces.
On R5, use the show ip pim interface df command:
R5#show ip pim interface df
* implies this system is the DF
Interface               RP               DF Winner        Metric     Uptime
FastEthernet0/0         192.1.2.255      *172.16.15.5     2          00:11:05

This output indicates that no designated forwarder has been elected on the FastEthernet0/1 interface of
R5; that interface is missing from the DF listing entirely. Use show ip pim interface to see if both
interfaces are running sparse-mode.
R5#show ip pim interface

Address          Interface                Ver/   Nbr    Query  DR     DR
                                          Mode   Count  Intvl  Prior
172.16.15.5      FastEthernet0/0          v2/S   1      30     1      172.16.15.5
172.16.45.5      FastEthernet0/1          v2/S   1      30     1      172.16.45.5

This output indicates that PIM neighbor relationships exist on both interfaces.
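
Another revealing check at this point is show ip pim neighbor on R5; this step was not captured in the
original output, so the listing below is a hedged illustration. With R4 missing the ip pim bidir-enable
command, its PIM Hellos would lack the BIDIR option, and the B flag would be absent from the Mode column
for 172.16.45.4:

R5#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.15.1       FastEthernet0/0          00:12:44/00:01:29 v2    1 / B S
172.16.45.4       FastEthernet0/1          00:12:40/00:01:33 v2    1 / S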


Step Three: Check the next hop router in the multicast path for the (*,G).

Use show ip mroute on R4:

R4#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 00:30:07/00:02:32, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:30:07/00:02:21

There is no record in the multicast routing table for the group 224.9.9.9. The fact that the group entry
was created on R5 means R5 is participating in BIDIR-PIM. Also, remember that R5 considered the identity
of the RP to be invalid. This could be caused by a router in the transit path not being able to participate
in BIDIR-PIM, or by a lack of end-to-end PIM communication between R5 and the RP. Observe that the show ip
mroute output on R4 contains an Incoming interface entry, something that does not exist in BIDIR-PIM.
Use show run to see if the router is enabled for BIDIR-PIM:
R4#show run | inc bidir
R4#

We can see that R4 has not been configured with BIDIR-PIM. Without the ip pim bidir-enable command,
the router cannot participate in the DF election, and this breaks the connectivity to the RP. Correct this
issue by applying the ip pim bidir-enable and ip pim rp-address commands:
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#ip pim bidir-enable
R4(config)#ip pim rp-address 192.1.2.255 bidir
R4(config)#end
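
Before repeating the ping, the repair can also be confirmed with show ip pim interface df on R4. A
representative output (the metrics and uptimes below are assumed for illustration) would now list a DF
winner for each interface/RP pair:

R4#show ip pim interface df
* implies this system is the DF
Interface               RP               DF Winner        Metric     Uptime
FastEthernet0/0         192.1.2.255      172.16.24.2      0          00:00:40
FastEthernet0/1         192.1.2.255      *172.16.45.4     1          00:00:40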

Is the ping successful on R1?



R1#ping 224.9.9.9 repeat 10


Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 28 ms
Reply to request 1 from 172.16.79.9, 28 ms
Reply to request 2 from 172.16.79.9, 28 ms
Reply to request 3 from 172.16.79.9, 28 ms
Reply to request 4 from 172.16.79.9, 28 ms
Reply to request 5 from 172.16.79.9, 28 ms
Reply to request 6 from 172.16.79.9, 28 ms
Reply to request 7 from 172.16.79.9, 28 ms
Reply to request 8 from 172.16.79.9, 28 ms
Reply to request 9 from 172.16.79.9, 28 ms

Multicast Routing and Forwarding Failure in BIDIR-PIM


Setting the stage: R9 will join the multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end

Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the
group 224.9.9.9?
By generating a ping on R1 to the group 224.9.9.9 R1 can emulate a multicast feed:
R1#ping 224.9.9.9 repeat 100000000
Type escape sequence to abort.
Sending 100000000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
.......... <output omitted>

The output from the ping command is unsuccessful.


Step One: Follow the multicast feed hop-by-hop.
On R5 look at the output of show ip mroute:
R5#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,


Z - Multicast Tunnel, z - MDT-data group sender,


Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:01:53/00:02:54, RP 192.1.2.255, flags: BP
Bidir-Upstream: FastEthernet0/1, RPF nbr 172.16.45.4
Outgoing interface list:
FastEthernet0/1, Bidir-Upstream/Sparse, 00:01:53/00:00:00

There are no apparent issues on R5. Now go to the next hop:


R4#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:01:53/00:02:59, RP 192.1.2.255, flags: BP
Bidir-Upstream: FastEthernet0/0, RPF nbr 172.16.24.2
Outgoing interface list:
FastEthernet0/0, Bidir-Upstream/Sparse, 00:01:53/00:00:00

R4 has no issues. We see the RP, the RPF nbr used to reach the RP, and the FastEthernet0/0 interface
pointing to R2. Try the next hop:
R2#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode


(*, 224.9.9.9), 07:14:42/00:03:26, RP 192.1.2.255, flags: B


Bidir-Upstream: Loopback0, RPF nbr 192.1.2.255
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 07:08:31/00:03:17
Loopback0, Bidir-Upstream/Sparse, 07:08:31/00:00:00

No issues exist on R2 either. Note that the RP and the RPF nbr values match, meaning that this device is
the RP for this group.
R6#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 07:14:41/00:03:27, RP 192.1.2.255, flags: B
Bidir-Upstream: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 07:08:31/00:02:47
FastEthernet0/1, Bidir-Upstream/Sparse, 07:08:31/00:00:00

R6 is also non-problematic. What about the next hop?


R7#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 07:14:42/00:02:58, RP 192.1.2.255, flags: BC
Bidir-Upstream: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 07:08:31/00:02:58, Int limit 0 kbps
FastEthernet0/0, Bidir-Upstream/Sparse, 07:08:31/00:00:00


Note that R7 has FastEthernet0/1 in the OIL, and we can see that it will limit any outbound traffic to 0
kbps. This will effectively block all outbound traffic to R9. We can verify this with the show run
command:
R7#show run interface FastEthernet0/1
Building configuration...
Current configuration : 147 bytes
!
interface FastEthernet0/1
ip address 172.16.79.7 255.255.255.0
ip pim sparse-mode
ip multicast rate-limit out 0
duplex auto
speed auto
end

Correct this issue by removing the ip multicast rate-limit command:


R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#interface FastEthernet0/1
R7(config-if)#no ip multicast rate-limit out 0
R7(config-if)#end

Is the ping successful on R1?



R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 28 ms
Reply to request 1 from 172.16.79.9, 28 ms
Reply to request 2 from 172.16.79.9, 28 ms
Reply to request 3 from 172.16.79.9, 28 ms
Reply to request 4 from 172.16.79.9, 28 ms
Reply to request 5 from 172.16.79.9, 28 ms
Reply to request 6 from 172.16.79.9, 28 ms
Reply to request 7 from 172.16.79.9, 28 ms
Reply to request 8 from 172.16.79.9, 28 ms
Reply to request 9 from 172.16.79.9, 28 ms


BIDIR-PIM show Command Tools


As a quick reference, here are the show command tools utilized in this chapter. This section utilizes the
BIDIR-PIM topology in Figure 6-3 for all example output.

Figure 6-3: A Sample BIDIR-PIM Topology

show COMMAND:
show ip igmp membership [group-address | group-name] [tracked] [all]
This command displays Internet Group Management Protocol (IGMP) membership information for
multicast groups and (S, G) channels.
Where:

group-address optional; specifies the specific multicast group address


tracked optional; displays the multicast groups with the explicit tracking feature enabled
all - optional; displays the detailed information about the multicast groups with and without the
explicit tracking feature enabled

EXAMPLE OUTPUT:
R9#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m>             - <n> reporter in include mode, <m> reporter in exclude

Channel/Group        Reporter        Uptime    Exp.    Flags  Interface
*,224.9.9.9          172.16.79.9     00:12:33  02:31   2LA    Fa0/1
*,239.9.9.9          172.16.79.9     00:12:33  02:28   2LA    Fa0/1
*,224.0.1.40         172.16.79.9     00:12:33  02:35   2LA    Fa0/1
R9#


show COMMAND:
show ip mroute
This command displays the contents of the multicast routing (mroute) table.
EXAMPLE OUTPUT:
R7#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:01:07/00:02:07, RP 192.1.2.255, flags: BP
Bidir-Upstream: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null
(*, 224.0.1.40), 00:01:28/00:02:50, RP 192.1.2.255, flags: BPL
Bidir-Upstream: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null


show COMMAND:
show ip pim interface
This command displays information about interfaces configured for Protocol Independent Multicast
(PIM).
EXAMPLE OUTPUT:
R7#show ip pim interface

Address          Interface                Ver/   Nbr    Query  DR     DR
                                          Mode   Count  Intvl  Prior
172.16.67.7      FastEthernet0/0          v2/S   1      30     1      172.16.67.7
172.16.79.7      FastEthernet0/1          v2/S   1      30     1      172.16.79.9
R7#

show COMMAND:
show ip pim rp mapping
This command displays information about Protocol Independent Multicast (PIM) RP mappings.
EXAMPLE OUTPUT:
R7#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static, Bidir Mode
RP: 192.1.2.255 (?)
R7#


show COMMAND:
show ip pim [vrf vrf-name] neighbor [interface-type interface-number]
This command displays information about Protocol Independent Multicast (PIM) neighbors discovered
by PIM version 1 router query messages or PIM version 2 hello messages.
Where:

vrf optional; specifies the name of the multicast VRF instance


interface-type - optional; restricts the output to information about PIM neighbors reachable on
the specified interface

EXAMPLE OUTPUT:
R7#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.67.6       FastEthernet0/0          00:03:33/00:01:37 v2    1 / B S
172.16.79.9       FastEthernet0/1          00:03:15/00:01:28 v2    1 / DR B S
R7#


show COMMAND:
show ip rpf [vrf vrf-name] {route-distinguisher | source-address [group-address] [rd route-distinguisher]} [metric]

This command displays information that IP multicast routing uses to perform the Reverse Path
Forwarding (RPF) check for a multicast source.
Where:

vrf optional; specifies the name of the multicast VRF instance


route-distinguisher - Route distinguisher (RD) of a VPNv4 prefix; entering the route-distinguisher
argument displays RPF information related to the specified VPN route
source-address - IP address or name of a multicast source for which to display RPF information
group-address - optional; IP address or name of a multicast group for which to display RPF
information
rd route-distinguisher - optional; displays the Border Gateway Protocol (BGP) RPF next hop for
the VPN route associated with the RD specified for the route-distinguisher argument
metric - optional; displays the unicast routing metric

EXAMPLE OUTPUT:
R7#show ip rpf 172.16.15.1
RPF information for ? (172.16.15.1)
RPF interface: FastEthernet0/0
RPF neighbor: ? (0.0.0.0)
RPF route/mask: 172.16.15.0/24
RPF type: unicast (rip)
RPF recursion count: 0
Doing distance-preferred lookups across tables
R7#


BIDIR-PIM debug Command Tools


As a quick reference, here are the debug command tools utilized in this chapter. This section utilizes the
BIDIR-PIM topology in Figure 6-6 for all example output.

Figure 6-6: A Sample BIDIR-PIM Topology

debug COMMAND:
debug ip mpacket [vrf vrf-name] [detail | fastswitch] [access-list] [group]
This command displays multicast packets that are received and sent on the device.
Where:

vrf optional; specifies the name of the multicast VRF instance


detail optional; displays IP header and MAC information
fastswitch optional; displays IP packet information in the fast path
access-list optional; restricts the output per the specified access-list

EXAMPLE OUTPUT:
IP(0): s=172.16.26.6 (FastEthernet0/1) d=239.9.9.9 (FastEthernet0/0) id=1, ttl=254,
prot=1, len=100(100), mforward


debug COMMAND:
debug ip pim [vrf vrf-name] [bsr]
This command displays Protocol Independent Multicast (PIM) packets received and sent and displays
PIM-related events.


Where:

vrf optional; specifies the name of the multicast VRF instance

EXAMPLE OUTPUT:
R2#debug ip pim
PIM debugging is on
R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/1 from 172.16.26.6, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update GigabitEthernet0/1/172.16.26.6 to (*, 224.0.1.40), Forward state, by PIM *G Join
R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/0 from 172.16.24.4, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update GigabitEthernet0/0/172.16.24.4 to (*, 224.0.1.40), Forward state, by PIM *G Join
R2#
PIM(0): Building Periodic Join/Prune message for 224.0.1.40
R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/1 from 172.16.26.6, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update GigabitEthernet0/1/172.16.26.6 to (*, 224.0.1.40), Forward state, by PIM *G Join
R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/0 from 172.16.24.4, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update GigabitEthernet0/0/172.16.24.4 to (*, 224.0.1.40), Forward state, by PIM *G Join
R2#
PIM(0): Building Periodic Join/Prune message for 224.0.1.40
R2#
PIM(0): Received v2 Join/Prune on GigabitEthernet0/1 from 172.16.26.6, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update GigabitEthernet0/1/172.16.26.6 to (*, 224.0.1.40), Forward state, by PIM *G Join


Chapter Challenge: BIDIR-PIM Sample Trouble Tickets


The following section includes two sample Trouble Tickets designed to challenge the troubleshooting
skills that have been developed in all previous sections of this chapter. These Trouble Tickets were
designed using the Routing & Switching rental racks at www.ProctorLabs.com with the initial
configurations provided in the file MCAST-CH6-BDIR-PIM-TT-INITIAL.txt. Keep in mind these sample
Trouble Tickets were also tested against home practice racks and the most popular router emulators.
The network topology used in this section is shown in Figure 6-7 below:

Figure 6-7: The Chapter Challenge Topology

Trouble Ticket #1
Your supervisor has been experimenting with deploying BIDIR-PIM on the network in Figure 6-7. During
testing over the weekend, he discovered that when hosts on the VLAN79 segment between R7 and R9
join multicast groups, these group memberships are not propagated to the RP. You have been instructed
to use the multicast address 224.9.9.9 on R9 to isolate the problem. Once the fault has been found,
correct the issue.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor, while doing more testing, observed that multicast
sources generated by R1 never reach the RP. You have been instructed to use the group 224.9.9.9 on R1
to isolate this issue. Once the fault has been isolated, correct the issue.



Chapter Challenge: BIDIR-PIM Sample Trouble Ticket Solutions
The following section includes the solutions to the two Trouble Tickets presented in the previous
section.
Trouble Ticket #1 Solution
Your supervisor has been experimenting with deploying BIDIR-PIM on the network in Figure 6-7. During
testing over the weekend, he discovered that when hosts on the VLAN79 segment between R7 and R9
join multicast groups, these group memberships are not propagated to the RP. You have been instructed
to use the multicast address 224.9.9.9 on R9 to isolate the problem. Once the fault has been found,
correct the issue.
Step 1 - Fault Verification:
Does R2 create the (*, 224.9.9.9) entry in its multicast routing table?
R2#show ip mroute 224.9.9.9
Group 224.9.9.9 not found


The (*,G) entry is not created. This verifies that the problem actually exists.

Step 2 - Fault Isolation:
The next course of action is to use the show ip mroute command to see where the (*,G) entry appears.

R6#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 00:13:45/00:02:31, RP 192.1.2.255, flags: SJPL
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list: Null


The verification clearly demonstrates that R6 is not using BIDIR-PIM to communicate with the RP.
Observe that there is an Incoming interface entry in the output. This can be verified with show ip pim rp
mapping, where we see there is no Bidir Mode after Static:
R6#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.255 (?)

The output confirms our observation.



Step 3 - Fault Remediation:
In this scenario, the ip pim rp-address command should be applied using the bidir option on R6.

R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#ip pim rp-address 192.1.2.255 bidir


Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method of the initial fault verification.

R2#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:00:18/00:03:11, RP 192.1.2.255, flags: B
Bidir-Upstream: Loopback0, RPF nbr 192.1.2.255
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 00:00:18/00:03:11
Loopback0, Bidir-Upstream/Sparse, 00:00:18/00:00:00


The issue has been corrected.


Trouble Ticket #2 Solution


After solving Trouble Ticket #1, your supervisor, while doing more testing, has observed that multicast
traffic sourced by R1 never reaches the RP. You have been instructed to use the group 224.9.9.9 on R1
to isolate this issue. Once the fault has been isolated, correct the issue.
Step 1 - Fault Verification:
Can R1 ping the group 224.9.9.9 successfully?
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........


The ping test to the multicast group 224.9.9.9 fails. This verifies that the problem actually exists.

Step 2 - Fault Isolation:
The next course of action is to use the show ip mroute command to see where the (*,G) entry appears
or to observe the nature of the multicast routing table.

R5#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:00:23/00:02:36, RP 0.0.0.0, flags: SP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null
(*, 224.0.1.40), 00:32:52/00:02:09, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:32:52/00:02:09


The verification clearly demonstrates that R5 has no RP information for the group; observe the RP
address of 0.0.0.0 and the Null incoming interface in the output. This can be verified with show ip pim rp
mapping:

R5#show ip pim rp mapping
PIM Group-to-RP Mappings


Observe that the output indicates there is no RP for any multicast groups. This isolates our problem.
Step 3 - Fault Remediation:
In this scenario, the ip pim bidir-enable and ip pim rp-address commands need to be applied on R5:

R5#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#ip pim bidir-enable
R5(config)#ip pim rp-address 192.1.2.255 bidir
R5(config)#end


Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially:

R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 32 ms
Reply to request 1 from 172.16.79.9, 28 ms
Reply to request 2 from 172.16.79.9, 28 ms
Reply to request 3 from 172.16.79.9, 28 ms
Reply to request 4 from 172.16.79.9, 28 ms
Reply to request 5 from 172.16.79.9, 28 ms
Reply to request 6 from 172.16.79.9, 28 ms
Reply to request 7 from 172.16.79.9, 28 ms
Reply to request 8 from 172.16.79.9, 28 ms
Reply to request 9 from 172.16.79.9, 28 ms


The issue has been corrected.


Chapter 7: Static Rendezvous Points (RPs)



In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and the functionality
of static Rendezvous Points (RPs) are examined in great depth. Once the operational characteristics of
static RPs are detailed completely, the focus becomes that of troubleshooting. This includes the careful
examination of symptoms, a fault isolation methodology, and the implementation of repairs for static RP
assignments. The chapter begins with a thorough review of static RP assignment, and then quickly
launches into an exhaustive analysis of the art of troubleshooting. This important chapter concludes
with sample troubleshooting scenarios, reference materials for the most important show and debug
commands, and exciting challenges that allow readers to practice implementing the troubleshooting
skills they have obtained.


Static RP Technology Review


Recall from Chapter 4: Protocol Independent Multicast - Sparse Mode (PIM-SM) that a rendezvous
point (RP) is a critically important multicast role provided by a key router or routers in your network
infrastructure. Rendezvous points act as the meeting places for sources and receivers of multicast
traffic. As was detailed in Chapter 4, sources must send their traffic to the RP and then multicast routers
forward the traffic to receivers down a shared distribution tree. In most cases, the placement of the RP
in the network is not a difficult or complex decision. By default, multicast operations only require the RP
to start new sessions with sources and receivers; therefore, the RP experiences a small amount of
overhead from traffic flow or processing. Furthermore, in PIM version 2, the RP performs less processing
than in PIM version 1 because sources must only periodically register with the RP to create the required
state information.
In IP version 4, there are three main options for the dissemination of RP information to the multicast
domain. There is the manual (static) assignment of this information as detailed in this chapter. There is
the AutoRP protocol, and there is the Bootstrap Router Protocol (BSR). Chapters 8 and 9 of this book
cover these dynamic technologies in detail.
Statically assigning the RP information in the domain hinges upon a single command (a brief sketch follows the parameter list):
ip pim [vrf vrf-name] rp-address rp-address [access-list] [override] [bidir]
Where:

vrf optional; specifies that the static group-to-RP mapping be associated with the Multicast
Virtual Private Network VRF instance listed
rp-address the IP address of the RP to be used for the static group-to-RP mapping
access-list optional; the standard access list that defines the multicast groups to be statically
mapped to the RP; if no access list is defined, the RP will map to all multicast groups, 224/4
override optional; specifies that if dynamic and static group-to-RP mappings are used together
and there is an RP address conflict, the RP address configured for a static group-to-RP mapping
will take precedence
bidir optional; specifies that the static group-to-RP mapping be applied to a bidirectional PIM
RP; Chapter 6: Bidirectional Protocol Independent Multicast (BIDIR-PIM) covers bidirectional
PIM in detail
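
As a brief sketch of this command in practice (the addresses and ACL numbers below are illustrative, not taken from the topologies in this chapter), the following maps the administratively scoped 239.0.0.0/8 range to one RP and every remaining multicast group to a second RP:

Router(config)#access-list 10 permit 239.0.0.0 0.255.255.255
Router(config)#access-list 20 deny 239.0.0.0 0.255.255.255
Router(config)#access-list 20 permit 224.0.0.0 15.255.255.255
Router(config)#ip pim rp-address 10.1.1.1 10
Router(config)#ip pim rp-address 10.2.2.2 20

The override and bidir keywords would be appended to a given rp-address statement when that static mapping must take precedence over a dynamically learned one, or when the RP serves bidirectional groups as in Chapter 6.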


The Operation and Troubleshooting of Static RP


Thus far in this text we have used only a single device as an RP. This chapter will now introduce the
concept of multiple RPs serving a single multicast domain. The rules that were initially discussed in
Chapter 4: PIM-SM still apply in multiple static RP environments. In these configurations, the RPs can
operate in a fashion that permits Load Balancing.
Introduction to Load Balancing between RPs
When deploying PIM-SM, it may be necessary to designate more than one router to act as an RP. To do
this, it is necessary to specify which multicast groups map to which RPs. By applying an access-list to the ip
pim rp-address command, as specified in the Technology Review section, we tell the device which router
to use as the RP for those groups. Later in Chapter 8: AutoRP and Chapter 9: Bootstrap Router Protocol (BSR) we will look
at how load balancing can be performed when using dynamically assigned RPs. The important thing to
note is that with Static RP assignment any "load balancing" will require administrative overhead in the
form of time and effort to accomplish. We will explore this process using the topology outlined in Figure
7-1:

Figure 7-1: Sample Static RP Topology

In this topology, R4 will perform the duties of RP for the multicast groups ranging from 224.0.0.1 to
231.255.255.255, and R6 will be the RP for the groups ranging from 232.0.0.1 to 239.255.255.255.

In a working environment, we can see how this is configured by looking at the commands used on any
single device:

R2#show run | inc access-list | rp-address
ip pim rp-address 192.1.4.4 1
ip pim rp-address 192.1.6.6 2
access-list 1 permit 224.0.0.0 7.255.255.255


access-list 2 permit 232.0.0.0 7.255.255.255

In this situation we see that any groups matching the standard access-list 1 will be mapped to R4's
loopback0 interface, and that any matching standard access-list 2 will be mapped to R6. These define
the group-to-RP mappings we have been discussing. The nature of these mappings can be viewed on all
devices in the topology with show ip pim rp mapping:
R1#show ip pim rp mapping
PIM Group-to-RP Mappings
Acl: 1, Static
    RP: 192.1.4.4 (?)
Acl: 2, Static
    RP: 192.1.6.6 (?)


We see that R1 has two mappings defined by ACL1 and ACL2. We can see what each ACL matches by
using show ip access-list:
R1#show ip access-list
Standard IP access list 1
10 permit 224.0.0.0, wildcard bits 7.255.255.255
Standard IP access list 2
10 permit 232.0.0.0, wildcard bits 7.255.255.255
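
The wildcard arithmetic behind these two access lists deserves a quick sketch. Only the first octet is interesting, since the remaining octets are fully wildcarded:

224 = 1110 0000   wildcard 7 = 0000 0111   ->   matches 1110 0xxx = 224 through 231
232 = 1110 1000   wildcard 7 = 0000 0111   ->   matches 1110 1xxx = 232 through 239

Each statement therefore covers exactly half of the 224.0.0.0/4 multicast space, which is why the two ACLs together account for every possible group.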

This means that R1 will use R4 as the RP for all groups matched by the standard access-list 1. This can be
tested using mtrace:
R1#mtrace 172.16.15.1 172.16.79.9 224.1.1.1
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 224.1.1.1
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM [172.16.15.0/24]
-2 * 172.16.79.7 PIM [172.16.15.0/24]
-3 * 172.16.67.6 PIM [172.16.15.0/24]
-4 * 172.16.46.4 PIM Reached RP/Core [172.16.15.0/24]
-5 * 172.16.45.5 PIM Prune sent upstream [172.16.15.0/24]
-6 * 172.16.15.1 PIM [172.16.15.0/24]

By specifying the group address 224.1.1.1, we know, based on the access-lists in place, that R4 will be
chosen as the RP for this group. We can repeat this test using the group 231.255.255.255. This group
represents the highest address in the range matched by ACL1. Thus, this group should also use R4 as the RP.


R1#mtrace 172.16.15.1 172.16.79.9 231.255.255.255


Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 231.255.255.255
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM [172.16.15.0/24]
-2 * 172.16.79.7 PIM [172.16.15.0/24]
-3 * 172.16.67.6 PIM [172.16.15.0/24]
-4 * 172.16.46.4 PIM Reached RP/Core [172.16.15.0/24]
-5 * 172.16.45.5 PIM Prune sent upstream [172.16.15.0/24]
-6 * 172.16.15.1 PIM [172.16.15.0/24]

As we see, R4 is indeed the RP that is selected. But what about 232.0.0.1?


R1#mtrace 172.16.15.1 172.16.79.9 232.0.0.1
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 232.0.0.1
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM [172.16.15.0/24]
-2 * 172.16.79.7 PIM [172.16.15.0/24]
-3 * 172.16.67.6 PIM Reached RP/Core [172.16.15.0/24]
-4 * 172.16.46.4 PIM [172.16.15.0/24]
-5 * 172.16.45.5 PIM Prune sent upstream [172.16.15.0/24]
-6 * 172.16.15.1 PIM [172.16.15.0/24]

The multicast stream for 232.0.0.1 matches ACL2 so the RP for this group becomes R6 as specified by
the "Reached RP/Core" entry in the mtrace output.
From a troubleshooting point of view, what would happen if the Loopback0 interface of R6 goes down?
Will 232.0.0.1 be forwarded using R4 rather than R6?
R6(config)#interface Loopback0
R6(config-if)#shut
R6(config-if)#end

Now we will test again:


R1#mtrace 172.16.15.1 172.16.79.9 232.0.0.1
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 232.0.0.1
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM [172.16.15.0/24]

-2  * 172.16.79.7 PIM  [172.16.15.0/24]
-3  * 172.16.67.6 PIM  [172.16.15.0/24]
-4  * 172.16.46.4 PIM  [172.16.15.0/24]
-5  * 172.16.45.5 PIM  [172.16.15.0/24]

This output may seem confusing at first, but it is important to look carefully. The mtrace utility verifies
the multicast path hop-by-hop. The most important part of this output is what we do not see. Observe
that there are no entries for an "RP/Core". This tells us that R4 does not take over as the RP for this
group. This can be tested by having R9 join the group 232.0.0.1:
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 232.0.0.1
R9(config-if)#end

Now generate a multicast stream from R1:


R1#ping 232.0.0.1 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 232.0.0.1, timeout is 2 seconds:
..........

Observe that the test fails. This is because there is no longer an RP mapped to the group 232.0.0.1.
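
Before moving on, we would return the topology to its working state by reversing the earlier shutdown (a housekeeping step, shown for completeness):

R6(config)#interface Loopback0
R6(config-if)#no shutdown
R6(config-if)#end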


Common Issues with Static RP


When it comes to common issues associated with troubleshooting static RPs, the most problematic issue
facing us is incorrect configuration. The essential point when multiple static RPs are used is that each
device participating in the multicast domain is required to agree on the identity of the RP on a group-by-
group basis. Fortunately, very few other things cause issues with multiple static RP assignment.
In the Troubleshooting Static RP section, this text demonstrates how uniform configuration is required
between all devices in order for the PIM-SM environment to work properly. The most common issue
associated with the assignment of the RP in this type of environment is typographical mistakes. Most
commonly, these involve the IP address used in the individual rp-address statements, or improper
creation or application of the access lists involved.
In the Static RP Sample Troubleshooting Scenarios section that follows, troubleshooting these issues
is demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each
symptom, isolate the cause, and remediate the issue.


Static RP Sample Troubleshooting Scenarios


This section provides a detailed look at how to best approach troubleshooting some of the common
issues discussed in previous sections. It includes coverage of a methodology for identification, isolation,
and remediation of faults in the Static RP operational process. The intent here is to hone and develop
troubleshooting skills tailored to first identify if a problem exists, and then how to begin isolating the
cause of the fault in the most efficient manner possible. Figure 7-2 illustrates the topology used to
explore this topic.

Figure 7-2: A Sample Static RP Topology

In the Common Issues with Static RP section, two primary types of problems were identified: Incorrect
RP Assignment or ACL Issues. This section explores these two categories of failure, by directing our
attention to the commands necessary to verify a problem, isolate it and remediate it.
Incorrect RP Assignment
Setting the stage: R9 will join the multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end

Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the
group 224.9.9.9?
By generating a ping on R1 to the group 224.9.9.9, R1 can emulate a multicast feed:
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:

Reply to request 0 from 172.16.79.9, 56 ms
Reply to request 1 from 172.16.79.9, 56 ms
Reply to request 2 from 172.16.79.9, 56 ms
Reply to request 3 from 172.16.79.9, 56 ms
Reply to request 4 from 172.16.79.9, 56 ms
Reply to request 5 from 172.16.79.9, 56 ms
Reply to request 6 from 172.16.79.9, 56 ms
Reply to request 7 from 172.16.79.9, 68 ms
Reply to request 8 from 172.16.79.9, 56 ms
Reply to request 9 from 172.16.79.9, 56 ms

The output from the ping command is successful. What if we ping another address? This time we will use
239.9.9.9, from the second multicast range.
R1#ping 239.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 239.9.9.9, timeout is 2 seconds:
..........

The output indicates that the pings are unsuccessful.


Emulate a high repeat multicast feed: Generate a multicast feed for 239.9.9.9 on R1 with a very high
repeat count:
R1#ping 239.9.9.9 repeat 1000
Type escape sequence to abort.
Sending 1000, 100-byte ICMP Echos to 239.9.9.9, timeout is 2 seconds:
............. <output omitted>

Look to see what RP has been assigned for the group 239.9.9.9 on all devices in the topology. Keep in
mind that the group 239.9.9.9 should use R6:
R5#show ip pim rp 239.9.9.9
Group: 239.9.9.9, RP: 192.1.5.5, next RP-reachable in 00:00:36

This output indicates that R5 is being used as the RP for the group rather than R6. This means that the
incorrect address was used on R5 for the second RP address. This can be confirmed with show run:
R5#show run | inc rp-address
ip pim rp-address 192.1.4.4 1
ip pim rp-address 192.1.5.5 2

Correct this issue by modifying the second ip pim rp-address statement:


R5#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#no ip pim rp-address 192.1.5.5 2
R5(config)#ip pim rp-address 192.1.6.6 2
R5(config)#end

Verify that the correction has worked by repeating the multicast ping test from R1:
R1#ping 239.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 239.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 68 ms
Reply to request 1 from 172.16.79.9, 64 ms
Reply to request 1 from 172.16.79.9, 80 ms
Reply to request 2 from 172.16.79.9, 76 ms
Reply to request 3 from 172.16.79.9, 56 ms
Reply to request 4 from 172.16.79.9, 56 ms
Reply to request 5 from 172.16.79.9, 56 ms
Reply to request 6 from 172.16.79.9, 56 ms
Reply to request 7 from 172.16.79.9, 56 ms
Reply to request 8 from 172.16.79.9, 56 ms
Reply to request 9 from 172.16.79.9, 56 ms

ACL Issue
Setting the stage: R9 will join the multicast group 224.9.9.9.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end

Emulating a multicast feed: Can R1 successfully source a multicast stream from 172.16.15.1 to the
group 224.9.9.9?
By generating a ping on R1 to the group 224.9.9.9, R1 can emulate a multicast feed:
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........

The output from the ping command is unsuccessful. What if we ping another address? This time we will
use 239.9.9.9, from the second multicast range.
R1#ping 239.9.9.9 repeat 10


Type escape sequence to abort.


Sending 10, 100-byte ICMP Echos to 239.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 56 ms
Reply to request 1 from 172.16.79.9, 56 ms
Reply to request 2 from 172.16.79.9, 56 ms
Reply to request 3 from 172.16.79.9, 56 ms
Reply to request 4 from 172.16.79.9, 56 ms
Reply to request 5 from 172.16.79.9, 56 ms
Reply to request 6 from 172.16.79.9, 56 ms
Reply to request 7 from 172.16.79.9, 56 ms
Reply to request 8 from 172.16.79.9, 56 ms
Reply to request 9 from 172.16.79.9, 56 ms

The output indicates that the pings are successful.


Emulate a high repeat multicast feed: Generate a multicast feed for 224.9.9.9 on R1 with a very high
repeat count:
R1#ping 224.9.9.9 repeat 1000
Type escape sequence to abort.
Sending 1000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
.................... <output omitted>

Now we will verify the identity of the RP selected for 224.9.9.9 on all devices along the path from R1 up
to the RP. Remember that R4 should be selected for this group.
R5#show ip pim rp 224.9.9.9
Group: 224.9.9.9, RP: 192.1.4.4, v2, uptime 00:05:41, expires never
R4#show ip pim rp 224.9.9.9
Group: 224.9.9.9, RP: 192.1.6.6, uptime 00:45:31, expires never

Of the routers between R1 and R4, it is clear that R4 is not in agreement with the rest of the network,
because it identifies the RP as 192.1.6.6 rather than 192.1.4.4. R4 is making the incorrect decision
regarding the RP. The question is why?
R4#sh ip pim rp mapping
PIM Group-to-RP Mappings
Acl: 1, Static
RP: 192.1.4.4 (?)
Group(s): 224.0.0.0/4, Static
RP: 192.1.6.6 (?)


We see that we have an ACL applied to the first static entry, but there is no ACL assigned to the second.
Observe that the second group-to-RP mapping is for the entire 224.0.0.0/4 range. This means that on R4
there is an overlap between the static assignments. In instances like this when static RP is used and a
single group has been erroneously assigned to more than one RP, the RP with the highest IP address will
assume the role of RP.
This can be corrected by applying the standard access-list 2 to the second ip pim rp-address statement:
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#no ip pim rp-address 192.1.6.6
R4(config)#ip pim rp-address 192.1.6.6 2
R4(config)#end

Are the multicast pings from R1 successful now?


R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 60 ms
Reply to request 1 from 172.16.79.9, 56 ms
Reply to request 2 from 172.16.79.9, 56 ms
Reply to request 3 from 172.16.79.9, 56 ms
Reply to request 4 from 172.16.79.9, 56 ms
Reply to request 5 from 172.16.79.9, 56 ms
Reply to request 6 from 172.16.79.9, 56 ms
Reply to request 7 from 172.16.79.9, 56 ms
Reply to request 8 from 172.16.79.9, 56 ms
Reply to request 9 from 172.16.79.9, 56 ms

Static Rendezvous Points show Command Tools


As a quick reference, here are the show command tools utilized in this chapter. This section utilizes the
static RP topology in Figure 7-3 for all example output.

Figure 7-3: A Sample Static RP Topology

show COMMAND:
show ip igmp membership [group-address | group-name] [tracked] [all]
This command displays Internet Group Management Protocol (IGMP) membership information for
multicast groups and (S, G) channels.
Where:

group-address optional; specifies the specific multicast group address


tracked optional; displays the multicast groups with the explicit tracking feature enabled
all - optional; displays the detailed information about the multicast groups with and without the
explicit tracking feature enabled

EXAMPLE OUTPUT:
R9#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m>              - <n> reporter in include mode, <m> reporter in exclude

Channel/Group            Reporter        Uptime    Exp.   Flags  Interface
*,224.9.9.9              172.16.79.9     00:09:54  02:23  2LA    Fa0/1
*,239.9.9.9              172.16.79.9     00:09:54  02:23  2LA    Fa0/1
*,224.0.1.40             172.16.79.9     00:09:54  02:23  2LA    Fa0/1
R9#


show COMMAND:
show ip mroute
This command displays the contents of the multicast routing (mroute) table.
EXAMPLE OUTPUT:
R7#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:01:41/00:03:06, RP 192.1.2.2, flags: SJC
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:01:24/00:03:06
(*, 239.9.9.9), 00:01:41/stopped, RP 192.1.2.2, flags: SJC
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:01:24/00:03:05
(172.16.15.1, 239.9.9.9), 00:00:18/00:02:43, flags: JT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:00:18/00:02:41
(*, 224.0.1.40), 00:02:17/00:03:04, RP 192.1.2.2, flags: SJCL
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:01:25/00:03:04
Loopback0, Forward/Sparse, 00:02:18/00:02:19


R7#


show COMMAND:
show ip pim interface
This command displays information about interfaces configured for Protocol Independent Multicast
(PIM).
EXAMPLE OUTPUT:
R7#show ip pim interface
Address          Interface                Ver/    Nbr     Query   DR      DR
                                          Mode    Count   Intvl   Prior
192.1.7.7        Loopback0                v2/S    0       30      1       192.1.7.7
172.16.67.7      FastEthernet0/0          v2/S    1       30      1       172.16.67.7
172.16.79.7      FastEthernet0/1          v2/S    1       30      1       172.16.79.9
R7#

show COMMAND:
show ip pim rp mapping
This command displays information about Protocol Independent Multicast (PIM) RP mappings.
EXAMPLE OUTPUT:
R7#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
RP: 192.1.2.2 (?)
R7#


show COMMAND:
show ip pim [vrf vrf-name] neighbor [interface-type interface-number]
This command displays information about Protocol Independent Multicast (PIM) neighbors discovered
by PIM version 1 router query messages or PIM version 2 hello messages.
Where:

vrf optional; specifies the name of the multicast VRF instance


interface-type - optional; restricts the output to information about PIM neighbors reachable on
the specified interface


EXAMPLE OUTPUT:
R7#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
172.16.67.6       FastEthernet0/0          00:11:57/00:01:38 v2    1 / S
172.16.79.9       FastEthernet0/1          00:11:57/00:01:35 v2    1 / DR S
R7#


show COMMAND:
show ip rpf [vrf vrf-name] {route-distinguisher | source-address [group-address] [rd route-distinguisher]} [metric]
This command displays information that IP multicast routing uses to perform the Reverse Path
Forwarding (RPF) check for a multicast source.
Where:

vrf optional; specifies the name of the multicast VRF instance


route-distinguisher - Route distinguisher (RD) of a VPNv4 prefix; entering the route-
distinguisher argument displays RPF information related to the specified VPN route
source-address - IP address or name of a multicast source for which to display RPF information
group-address - optional; IP address or name of a multicast group for which to display RPF
information
rd route-distinguisher - optional; displays the Border Gateway Protocol (BGP) RPF next hop for
the VPN route associated with the RD specified for the route-distinguisher argument
metric - optional; displays the unicast routing metric

EXAMPLE OUTPUT:
R7#show ip rpf 192.1.2.2
RPF information for ? (192.1.2.2)
RPF interface: FastEthernet0/0
RPF neighbor: ? (172.16.67.6)
RPF route/mask: 192.1.2.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables
R7#


Static Rendezvous Points debug Command Tools


As a quick reference, here are the debug command tools utilized in this chapter. This section utilizes the
static RP topology in Figure 7-4 for all example output.

Figure 7-4: A Sample Static RP Topology

debug COMMAND:
debug ip mpacket [vrf vrf-name] [detail | fastswitch] [access-list] [group]
This command displays multicast packets that are received and sent on the device.
Where:

vrf optional; specifies the name of the multicast VRF instance


detail optional; displays IP header and MAC information
fastswitch optional; displays IP packet information in the fast path
access-list optional; restricts the output per the specified access-list

EXAMPLE OUTPUT:
IP(0): s=172.16.26.6 (FastEthernet0/1) d=239.9.9.9 (FastEthernet0/0) id=1, ttl=254,
prot=1, len=100(100), mforward


debug COMMAND:
debug ip pim [vrf vrf-name] [bsr]
This command displays Protocol Independent Multicast (PIM) packets received and sent, and displays
PIM-related events.


Where:

vrf optional; specifies the name of the multicast VRF instance

EXAMPLE OUTPUT:
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Join-list: (172.16.15.1/32, 239.9.9.9), S-bit set
PIM(0): Update FastEthernet0/1/172.16.79.9 to (172.16.15.1, 239.9.9.9), Forward state,
by PIM SG Join
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Join-list: (*, 239.9.9.9), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update FastEthernet0/1/172.16.79.9 to (*, 239.9.9.9), Forward state, by PIM *G
Join
PIM(0): Update FastEthernet0/1/172.16.79.9 to (172.16.15.1, 239.9.9.9), Forward state,
by PIM *G Join
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Join-list: (*, 224.0.1.40), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update FastEthernet0/1/172.16.79.9 to (*, 224.0.1.40), Forward state, by PIM
*G Join
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Join-list: (*, 224.9.9.9), RPT-bit set, WC-bit set, S-bit set
PIM(0): Update FastEthernet0/1/172.16.79.9 to (*, 224.9.9.9), Forward state, by PIM *G
Join
R7#
PIM(0): Insert (172.16.15.1,239.9.9.9) join in nbr 172.16.67.6's queue
PIM(0): Building Join/Prune packet for nbr 172.16.67.6
PIM(0): Adding v2 (172.16.15.1/32, 239.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.67.6 (FastEthernet0/0)
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Join-list: (172.16.15.1/32, 239.9.9.9), S-bit set
PIM(0): Update FastEthernet0/1/172.16.79.9 to (172.16.15.1, 239.9.9.9), Forward state,
by PIM SG Join
R7#
PIM(0): Building Periodic (*,G) Join / (S,G,RP-bit) Prune message for 224.0.1.40
PIM(0): Insert (*,224.0.1.40) join in nbr 172.16.67.6's queue
PIM(0): Building Join/Prune packet for nbr 172.16.67.6
PIM(0): Adding v2 (192.1.2.2/32, 224.0.1.40), WC-bit, RPT-bit, S-bit Join
PIM(0): Send v2 join/prune to 172.16.67.6 (FastEthernet0/0)
R7#


Chapter Challenge: Static RP Sample Trouble Tickets


The following section includes a sample Trouble Ticket designed to challenge the troubleshooting skills
that have been developed in all previous sections of this chapter. These Trouble Tickets were designed
using the Routing & Switching rental racks at www.ProctorLabs.com with the initial configurations
provided in the file MCAST-CH7-STATIC-RP-TT-INITIAL.txt. Keep in mind these sample Trouble Tickets
were also tested against home practice racks and the most popular router emulators.
The network topology used in this section is shown in Figure 7-5 below:

Figure 7-5: The Chapter Challenge Topology

Trouble Ticket #1
Your supervisor has informed you that multicast traffic sourced from the VLAN15 segment to the group
224.9.9.9 never reaches the PIM-SM RP. There are two RPs in this topology: R4 for the multicast range
224.0.0.1 - 231.255.255.255, and R6 for the range 232.0.0.1 - 239.255.255.255. You have been
instructed to use the multicast group 224.9.9.9 to isolate this issue. You must correct the problem.


Chapter Challenge: Static RP Sample Trouble Tickets Solutions


The following section includes the solution to the Trouble Ticket presented in the previous section.
Trouble Ticket #1 Solution
Your supervisor has informed you that multicast traffic sourced from the VLAN15 segment to the group
224.9.9.9 never reaches the PIM-SM RP. There are two RPs in this topology: R4 for the multicast range
224.0.0.1 - 231.255.255.255, and R6 for the range 232.0.0.1 - 239.255.255.255. You have been
instructed to use the multicast group 224.9.9.9 to isolate this issue. You must correct the problem.
Step 1 - Fault Verification:
Can R1 ping the group 224.9.9.9 successfully?
R1#ping 224.9.9.9 r 1000
Type escape sequence to abort.
Sending 1000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..... <output omitted>


The ping test to the multicast group 224.9.9.9 fails. The next question is, "does R4 create an (S,G) entry
for 224.9.9.9?"

R4#show ip mroute 224.9.9.9
Group 224.9.9.9 not found


There is no (S,G) entry. This verifies that the problem actually exists.

Step 2 - Fault Isolation:
The next course of action is to use show ip pim rp on all devices between R1 and R4.

R5#show ip pim rp 224.9.9.9
Group: 224.9.9.9, RP: 192.1.4.5, uptime 00:00:54, expires never

Next, we will look on R4.



R4#show ip mroute 224.9.9.9
Group 224.9.9.9 not found

The verification clearly demonstrates that R5 is not using the correct IP address for the RP. This can be
verified using show run on R5:
R5#show run | inc access-list | rp-address


ip pim rp-address 192.1.4.5 1
ip pim rp-address 192.1.6.6 2
access-list 1 permit 224.0.0.0 7.255.255.255
access-list 2 permit 232.0.0.0 7.255.255.255

The first ip pim rp-address command is not using the correct IP address. This has unquestionably
isolated our fault.

Step 3 - Fault Remediation:
In this scenario, the ip pim rp-address command will need to be corrected.

R5#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#no ip pim rp-address 192.1.4.5 1
R5(config)#ip pim rp-address 192.1.4.4 1
R5(config)#end


Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method as the initial fault verification. Ensure that the ping is
still running on R1 and verify that the (S,G) entry is now created:

R4#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:01:03/stopped, RP 192.1.4.4, flags: SP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null
(172.16.15.1, 224.9.9.9), 00:01:03/00:01:56, flags: P
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.5
Outgoing interface list: Null

The (S,G) entry has been created. The issue has been corrected.


Chapter 8: AutoRP



In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and the functionality
of the AutoRP protocol are examined in great depth. Once the operational characteristics of this
important protocol are detailed completely, the focus becomes that of troubleshooting. This includes
the careful examination of symptoms, a fault isolation methodology, and the implementation of repairs
for the AutoRP protocol. The chapter begins with a thorough review of AutoRP, and then quickly
launches into an exhaustive analysis of the art of troubleshooting this multicast support protocol. This
important chapter concludes with sample troubleshooting scenarios, reference materials for the most
important show and debug commands, and exciting challenges that allow readers to practice
implementing the troubleshooting skills they have obtained.


AutoRP Technology Review


AutoRP is a Cisco proprietary multicast mechanism that automates the distribution of multicast group-to-
rendezvous point (RP) mappings in a Protocol Independent Multicast (PIM) network. Recall from
Chapter 4: Protocol Independent Multicast - Sparse Mode (PIM-SM) that a rendezvous point is a
required component for the popular sparse mode operations.
AutoRP consists of two components: one or more routers acting as candidate RPs (C-RPs), and a router
designated as the RP mapping agent (MA). The RP mapping agent receives RP announcement messages from the
candidate RPs and arbitrates conflicts. The RP mapping agent is then responsible for communicating the
consistent multicast group-to-RP mappings to all other routers by way of dense mode flooding. This
allows all routers to discover the RP to use with the groups they support.
A major design flaw with AutoRP is the fact that the protocol uses multicast groups in its operation. The
Internet Assigned Numbers Authority (IANA) has assigned two group addresses, 224.0.1.39 and
224.0.1.40, for use with AutoRP. Using multicast groups for the dissemination of RP information creates
what Cisco terms a chicken-and-egg paradox. The multicast groups disseminate the RP information,
but these groups need an RP in order to function if a strict sparse mode environment is desired. The
Bootstrap Router Protocol (BSR) is an open standard protocol that solves this dilemma. Chapter 9:
Bootstrap Router Protocol (BSR) details this important protocol.
There are multiple solutions for the issues presented by AutoRP when it is used in conjunction with strict
sparse mode environments. Some are:

Use the Cisco invention of sparse-dense mode


Use the ip pim autorp listener command (sketched after this list)
Statically map an RP for use with the AutoRP groups (224.0.1.39 and 224.0.1.40)
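
As a minimal sketch of the ip pim autorp listener approach (the interface name is illustrative, not from this chapter's topology), the global command is paired with ordinary sparse mode interfaces; only the two AutoRP groups are then flooded in dense mode:

Router(config)#ip pim autorp listener
Router(config)#interface FastEthernet0/0
Router(config-if)#ip pim sparse-mode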

To configure a router as a candidate RP (C-RP), use the following command:


ip pim send-rp-announce {interface-type interface-number | ip-address} scope ttl-value [group-list access-list] [interval seconds] [bidir]
Where:

interface-type interface-number defines which IP address is to be used as the RP address


ip-address defines the IP address directly connected to the router to serve as the RP address
ttl-value defines a scope for the announcement
group-list defines an access list specifying multicast groups for the C-RP
interval defines an interval for the announcements


bidir specifies the groups are to function as bidirectional; bidirectional PIM is detailed in
Chapter 6: Bidirectional Protocol Independent Multicast (BIDIR-PIM)

To configure the mapping agent (MA), use the following command (a combined C-RP and MA sketch follows the parameter list):


ip pim send-rp-discovery [interface-type interface-number] scope ttl-value [interval seconds]
Where:

interface-type interface-number defines which IP address is to be used as the MA address


ttl-value defines a scope for the multicast group to RP mapping announcements
interval defines an interval for the announcements
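
Putting the two commands together, a minimal AutoRP sketch might look as follows (the ACL number, scope value, and group range are illustrative; this chapter's actual lab configurations appear in the sections that follow):

On a candidate RP:
Router(config)#access-list 20 permit 239.0.0.0 0.255.255.255
Router(config)#ip pim send-rp-announce Loopback0 scope 16 group-list 20

On the mapping agent:
Router(config)#ip pim send-rp-discovery Loopback0 scope 16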


The Operation and Troubleshooting of Auto-RP


To better understand how to troubleshoot Auto-RP, we will divide its basic operation into three distinct
stages: C-RP Announcements, Mapping Agent Assignment and Placement, and the Multicast Routing
Topology. Once each phase has been outlined and defined, we will see how its operation can be
negatively impacted by environmental variables found in the multicast, IP routing, and switching
domains.
C-RP Announcements
In the previous chapters of this text we have configured a static RP in both PIM-SM and PIM-S-DM. The
issue with statically making these assignments is the amount of effort that goes into managing the
process. Recognizing this issue, Cisco created the concept of Auto-RP. The primary goal is to
afford a network administrator the ability to configure devices to operate in role-based
assignments so that an RP can be dynamically elected for different multicast groups, or ranges of groups.
This process brings with it a mechanism used to identify devices that wish to be considered as candidates
for different roles. This section will explore the concept of a candidate RP.
A candidate RP is a router that is configured to send RP announcements identifying itself as a possible
RP for a given group or range of groups. We will explore this concept in depth using the topology illustrated in Figure 8-1:

Figure 8-1 Sample Auto-RP Topology

In this network R2 will be the Auto-RP Mapping Agent (covered later in this chapter), and both R4 and R6
will be configured as the C-RPs that concern us in this section. As mentioned in the
Technology Review, C-RPs are configured using the ip pim send-rp-announce command. We will
configure R4 and R6 with this command and then monitor their behavior. To
accomplish this we will use debug ip packet on both R4 and R6:
R4(config)#access-list 101 deny eigrp any any


R4(config)#access-list 101 permit ip any host 224.0.1.39


R4(config)#access-list 101 permit ip any host 224.0.1.40
R4(config)#end
R4#debug ip packet detail 101
IP packet debugging is on (detailed) for access list 101


Now we will enable ip pim send-rp-announce on R4:
R4(config)#interface Loopback0
R4(config-if)#ip pim sparse-dense-mode
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 192.1.4.4 on interface Loopback0
R4(config-if)#exit
R4(config)#ip pim send-rp-announce loopback 0 scope 16
R4(config)#end

Immediately after the command is applied, this C-RP tries to notify the Auto-RP Mapping Agent that it
has been configured as a C-RP.
IP: s=192.1.4.4 (Null0), d=224.0.1.39 (FastEthernet0/0), len 48, sending broad/multicast
    UDP src=496, dst=496
IP: s=192.1.4.4 (Null0), d=224.0.1.39 (FastEthernet0/1), len 48, sending broad/multicast
    UDP src=496, dst=496
IP: s=192.1.4.4 (Null0), d=224.0.1.39 (Serial0/0.1), len 48, sending broad/multicast
    UDP src=496, dst=496
IP: s=192.1.4.4 (local), d=224.0.1.39 (Loopback0), len 48, sending broad/multicast
    UDP src=496, dst=496

Observe that the packets now generated on R4 are sourced from the Loopback0 address and destined to
the multicast group 224.0.1.39. The role-based behavior of a C-RP is to send information to the MA.
What information does a C-RP send when it is configured to operate in Auto-RP? We can find out with
debug ip pim auto-rp:
R4#debug ip pim auto-rp
PIM Auto-RP debugging is on
R4#
Auto-RP(0): Build RP-Announce for 192.1.4.4, PIMv2/v1, ttl 16, ht 181
Auto-RP(0): Build announce entry for (224.0.0.0/4)
Auto-RP(0): Send RP-Announce packet of length 48 on FastEthernet0/0
Auto-RP(0): Send RP-Announce packet of length 48 on FastEthernet0/1
Auto-RP(0): Send RP-Announce packet of length 48 on Serial0/0/0.1
Auto-RP(0): Send RP-Announce packet of length 48 on Loopback0(*)

Observe that the RP-announce messages for 192.1.4.4 are being sent out all PIM-S-DM enabled
interfaces, and the messages state that R4 is configured to offer RP services for the entire multicast


range (224.0.0.0/4). Before we enable R6 to act as a C-RP in this topology we need to look at the
multicast routing table of R2, R6, R7 and R9:
R2#show ip mroute 224.0.1.39
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.39), 00:25:55/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 00:25:55/00:00:00
GigabitEthernet0/0, Forward/Sparse-Dense, 00:25:55/00:00:00
(192.1.4.4, 224.0.1.39), 00:01:03/00:02:05, flags: PT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Prune/Sparse-Dense, 00:01:04/00:01:55

R2 is actually routing this multicast traffic. It is being sent to R6 via GigabitEthernet0/1. R6 will also route
this multicast traffic:
R6#show ip mroute 224.0.1.39
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.39), 00:26:20/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/1/0.1, Forward/Sparse-Dense, 00:26:20/00:00:00
GigabitEthernet0/1, Forward/Sparse-Dense, 00:26:20/00:00:00


GigabitEthernet0/0, Forward/Sparse-Dense, 00:26:20/00:00:00


(192.1.4.4, 224.0.1.39), 00:01:28/00:01:36, flags: PT
Incoming interface: GigabitEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
GigabitEthernet0/0, Prune/Sparse-Dense, 00:01:29/00:01:29
Serial0/1/0.1, Prune/Sparse-Dense, 00:01:30/00:01:32

R6 is routing the traffic to R7 via GigabitEthernet0/0 and to R4 via Serial0/1/0.1. On R7 we see that it too
routes the traffic.
R7#show ip mroute 224.0.1.39
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.39), 00:26:36/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse-Dense, 00:26:36/00:00:00
FastEthernet0/0, Forward/Sparse-Dense, 00:26:36/00:00:00
(192.1.4.4, 224.0.1.39), 00:01:45/00:01:23, flags: PT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Prune/Sparse-Dense, 00:01:46/00:01:16

224.0.1.39 is being routed toward R9 via FastEthernet0/1:


R9#show ip mroute 224.0.1.39
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires


Interface state: Interface, Next-Hop or VCD, State/Mode


(*, 224.0.1.39), 00:27:02/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse-Dense, 00:27:02/00:00:00
(192.1.4.4, 224.0.1.39), 00:02:10/00:00:49, flags: PT
Incoming interface: FastEthernet0/1, RPF nbr 172.16.79.7
Outgoing interface list: Null

What this process illustrates is how multicast traffic destined to the group 224.0.1.39 is actually routed
throughout the multicast domain in order to reach the Auto-RP MA. Currently, in this topology, R2 is not
yet the MA. We will get to that part after both R4 and R6 have been configured to be C-RPs:
R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#ip pim send-rp-announce loopback 0 scope 16
R6(config)#end

Again, to drive home the point that this traffic is being multicast routed, we will look at the multicast
routing table on R2:
R2#show ip mroute 224.0.1.39
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.39), 00:35:49/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 00:35:49/00:00:00
GigabitEthernet0/0, Forward/Sparse-Dense, 00:35:49/00:00:00
(192.1.6.6, 224.0.1.39), 00:00:39/00:02:21, flags: PT
Incoming interface: GigabitEthernet0/1, RPF nbr 172.16.26.6
Outgoing interface list:
GigabitEthernet0/0, Prune/Sparse-Dense, 00:00:40/00:02:19


(192.1.4.4, 224.0.1.39), 00:02:59/00:00:09, flags: PT


Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Prune/Sparse-Dense, 00:02:59/00:00:00

Note that there is now an (S,G) entry for the group 224.0.1.39 from each C-RP.
Mapping Agent Assignment and Placement
As currently configured, the topology has no MA. This means multicast packets are being forwarded
throughout the domain each time the C-RPs send an RP announcement message. We will now configure
R2 to assume the role of Mapping Agent. Before doing so we will enable debug ip pim auto-rp:
R2#debug ip pim auto-rp
PIM Auto-RP debugging is on
R2#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#ip pim send-rp-discovery loopback 0 scope 16
R2(config)#end

What happens next?


R2#
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.6.6, RP_cnt 1, ht 181
Auto-RP(0): Update (224.0.0.0/4, RP:192.1.6.6), PIMv2 v1
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.6.6, RP_cnt 1, ht 181
Auto-RP(0): Update (224.0.0.0/4, RP:192.1.6.6), PIMv2 v1
R2#
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.4.4, RP_cnt 1, ht 181
Auto-RP(0): Update (224.0.0.0/4, RP:192.1.4.4), PIMv2 v1
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.4.4, RP_cnt 1, ht 181
Auto-RP(0): Update (224.0.0.0/4, RP:192.1.4.4), PIMv2 v1
R2#
Auto-RP(0): Build RP-Discovery packet
Auto-RP: Build mapping (224.0.0.0/4, RP:192.1.6.6), PIMv2 v1,
Auto-RP(0): Send RP-discovery packet of length 48 on GigabitEthernet0/0 (1 RP entries)
Auto-RP(0): Send RP-discovery packet of length 48 on GigabitEthernet0/1 (1 RP entries)
Auto-RP(0): Send RP-discovery packet of length 48 on Loopback0(*) (1 RP entries)

Observe that R2 is now receiving the individual RP-announcement packets from both R4 and R6. The last
portion of the screen capture illustrates that the Mapping Agent builds an RP-Discovery packet that
defines the group-to-RP mapping for use by all devices in the multicast topology. Here R4 and R6 are
both announcing their candidacy to be the RP for the range 224.0.0.0/4. R6 is elected by the MA
because it has the higher IP address. We can demonstrate this by changing the IP address used on R4 to
a higher value than that used on R6:


R4(config)#interface Loopback0
R4(config-if)#ip address 192.1.44.44 255.255.255.0
R4(config-if)#end

Now we will see that the MA will elect to use R4 rather than R6 because of the higher IP address:
R2#
Auto-RP(0): Build RP-Discovery packet
Auto-RP: Build mapping (224.0.0.0/4, RP:192.1.44.44), PIMv2 v1,
Auto-RP(0): Send RP-discovery packet of length 48 on GigabitEthernet0/0 (1 RP entries)
Auto-RP(0): Send RP-discovery packet of length 48 on GigabitEthernet0/1 (1 RP entries)
Auto-RP(0): Send RP-discovery packet of length 48 on Loopback0(*) (1 RP entries)

This process is very simple to see. Now we need to look at the traffic leaving R2 for the rest of the
network. What IP address is it using?
R2(config)#access-list 101 deny eigrp any any
R2(config)#access-list 101 permit ip any host 224.0.1.39
R2(config)#access-list 101 permit ip any host 224.0.1.40
R2(config)#end
R2#
%SYS-5-CONFIG_I: Configured from console by console
R2#debug ip packet detail 101
IP packet debugging is on (detailed) for access list 101

We see the packets leave the MA:


R2#
IP: s=192.1.2.2 (Null0), d=224.0.1.40 (GigabitEthernet0/0), len 48, sending
broad/multicast
UDP src=496, dst=496
IP: s=192.1.2.2 (Null0), d=224.0.1.40 (GigabitEthernet0/1), len 48, sending
broad/multicast
UDP src=496, dst=496
IP: s=192.1.2.2 (local), d=224.0.1.40 (Loopback0), len 48, sending broad/multicast
UDP src=496, dst=496
IP: s=172.16.24.2 (local), d=224.0.0.1 (GigabitEthernet0/0), len 28, sending
broad/multicast, proto=2

They are destined to the multicast address 224.0.1.40. Again, this group is multicast forwarded
throughout the multicast domain in order to allow all multicast speakers to learn the identity of the
elected RP from the MA. All routers running a current version of Cisco IOS automatically join the
multicast group 224.0.1.40 once multicast routing is enabled. This default behavior was created to
facilitate the deployment of Auto-RP. We will use show ip mroute on all devices in the topology to
illustrate that the multicast group 224.0.1.40 is being multicast forwarded. As such, we expect to see
a (*,G) and an (S,G) entry on each of these devices for the group 224.0.1.40.
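As an aside, a router's automatic membership in 224.0.1.40 can also be confirmed directly with show ip igmp interface. A trimmed, illustrative sketch from R1 (only the relevant lines are shown):

R1#show ip igmp interface FastEthernet0/0
FastEthernet0/0 is up, line protocol is up
  Internet address is 172.16.15.1/24
  IGMP is enabled on interface
  Multicast groups joined by this system (number of users):
      224.0.1.40(1)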


R1#show ip mroute 224.0.1.40


IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 01:20:15/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse-Dense, 01:20:15/00:00:00
(192.1.2.2, 224.0.1.40), 00:25:13/00:02:37, flags: PLT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.15.5
Outgoing interface list: Null

R1 has joined the group 224.0.1.40 from 192.1.2.2 via interface FastEthernet0/0 toward R5:
R5#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 01:20:00/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse-Dense, 01:19:42/00:00:00
FastEthernet0/0, Forward/Sparse, 01:19:56/00:02:57
Loopback0, Forward/Sparse-Dense, 01:20:00/00:00:00
(192.1.2.2, 224.0.1.40), 00:25:13/00:02:39, flags: LT
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.4
Outgoing interface list:
Loopback0, Forward/Sparse-Dense, 00:25:14/00:00:00
FastEthernet0/0, Forward/Sparse, 00:25:14/00:02:56


R5 has joined 224.0.1.40 from 192.1.2.2 via FastEthernet0/1 pointing to R4:


R4#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 01:19:46/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/0/0.1, Forward/Sparse-Dense, 01:13:54/00:00:00
FastEthernet0/1, Forward/Sparse-Dense, 01:19:42/00:00:00
FastEthernet0/0, Forward/Sparse-Dense, 01:19:46/00:00:00
(192.1.2.2, 224.0.1.40), 00:25:13/00:02:46, flags: LT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.24.2
Outgoing interface list:
FastEthernet0/1, Forward/Sparse-Dense, 00:25:14/00:00:00
Serial0/0/0.1, Prune/Sparse-Dense, 00:02:22/00:00:40

R4 has joined 224.0.1.40 sourced from 192.1.2.2, with incoming interface FastEthernet0/0 pointing to R2:
R2#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 01:18:34/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 01:14:17/00:00:00
GigabitEthernet0/0, Forward/Sparse-Dense, 01:18:31/00:00:00


Loopback0, Forward/Sparse-Dense, 01:18:34/00:00:00


(192.1.2.2, 224.0.1.40), 00:25:13/00:02:37, flags: LT
Incoming interface: Loopback0, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/0, Forward/Sparse-Dense, 00:25:14/00:00:00
GigabitEthernet0/1, Forward/Sparse-Dense, 00:25:14/00:00:00

R2 is the MA in this topology. Note that the incoming interface is the Loopback0 interface, and
that both GigabitEthernet0/0 and GigabitEthernet0/1 are in the OIL for that (S,G) pair.
R6#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 01:14:20/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/1/0.1, Forward/Sparse-Dense, 01:13:54/00:00:00
FastEthernet0/1, Forward/Sparse-Dense, 01:14:17/00:00:00
FastEthernet0/0, Forward/Sparse-Dense, 01:14:20/00:00:00
(192.1.2.2, 224.0.1.40), 00:25:13/00:02:38, flags: LT
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse-Dense, 00:25:14/00:00:00
Serial0/1/0.1, Prune/Sparse-Dense, 00:02:22/00:00:38, A

R6 has joined the group and the incoming interface is via FastEthernet0/1 pointing to R2:
R7#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group


Outgoing interface flags: H - Hardware switched, A - Assert winner


Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 01:13:35/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse-Dense, 01:13:17/00:00:00
FastEthernet0/0, Forward/Sparse-Dense, 01:13:32/00:00:00
Loopback0, Forward/Sparse-Dense, 01:13:35/00:00:00
(192.1.2.2, 224.0.1.40), 00:25:13/00:02:44, flags: LT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
Loopback0, Forward/Sparse-Dense, 00:25:14/00:00:00
FastEthernet0/1, Forward/Sparse-Dense, 00:25:14/00:00:00

R7 has joined 224.0.1.40, and is receiving the group via FastEthernet0/0 pointing toward R2:
R9#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 01:13:20/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse-Dense, 01:13:20/00:00:00
(192.1.2.2, 224.0.1.40), 00:25:13/00:02:37, flags: PLTX
Incoming interface: FastEthernet0/1, RPF nbr 172.16.79.7
Outgoing interface list: Null

Lastly, R9 is receiving the multicast information for 224.0.1.40 via FastEthernet0/1. Again, all of this is
actually being forwarded throughout the domain using routed multicast packets.
Multicast Routing Topology
The previous sections exposed the fact that Auto-RP utilizes multicast packets that must be routed
throughout the topology. More to the point, the multicast group addresses used by the C-RPs and the
MA to communicate group-to-RP information are all routed in PIM-DM by default. We discussed the use
of an RP for these multicast groups to avoid this behavior in the Technology Review section. We can
illustrate this point by using show ip mroute dense:
R2#show ip mroute dense
(*, 224.0.1.39), 02:14:16/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Loopback0, Forward/Sparse-Dense, 01:31:06/00:00:00
GigabitEthernet0/1, Forward/Sparse-Dense, 02:14:16/00:00:00
GigabitEthernet0/0, Forward/Sparse-Dense, 02:14:16/00:00:00
(192.1.44.44, 224.0.1.39), 01:19:21/00:02:44, flags: LT
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 01:19:21/00:00:00
Loopback0, Forward/Sparse-Dense, 01:19:21/00:00:00
(192.1.6.6, 224.0.1.39), 01:35:06/00:02:54, flags: LT
Incoming interface: GigabitEthernet0/1, RPF nbr 172.16.26.6
Outgoing interface list:
Loopback0, Forward/Sparse-Dense, 01:31:06/00:00:00
GigabitEthernet0/0, Forward/Sparse-Dense, 01:31:06/00:00:00
(*, 224.0.1.40), 02:25:45/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 02:21:29/00:00:00
GigabitEthernet0/0, Forward/Sparse-Dense, 02:25:43/00:00:00
Loopback0, Forward/Sparse-Dense, 02:25:48/00:00:00
(192.1.2.2, 224.0.1.40), 01:32:27/00:02:56, flags: LT
Incoming interface: Loopback0, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/0, Forward/Sparse-Dense, 01:32:27/00:00:00
GigabitEthernet0/1, Forward/Sparse-Dense, 01:32:27/00:00:00

The output of this command reveals that only the groups 224.0.1.39 and 224.0.1.40 are operating in PIM-
DM. This works because we are currently using the Cisco-proprietary PIM sparse-dense mode (PIM-S-DM). PIM-S-DM
was initially created to overcome the operational paradox where Auto-RP uses dense mode traffic to
elect an RP. Based on the current topology, if traffic were sourced from R1 for the multicast group
224.9.9.9, that traffic would be forwarded via PIM-SM, as evidenced by show ip mroute sparse:
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:

Reply to request 0 from 172.16.79.9, 4 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms

Now to pick a router for verification:


R2#show ip mroute sparse
(*, 224.9.9.9), 01:23:48/00:03:06, RP 192.1.44.44, flags: S
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 01:23:48/00:03:06
(172.16.15.1, 224.9.9.9), 00:00:44/00:03:11, flags: T
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 00:00:44/00:03:06

We see that traffic to 224.9.9.9 is PIM-SM forwarded. This is because there is a group-to-RP mapping for
224.9.9.9:
R2#show ip pim rp
Group: 224.9.9.9, RP: 192.1.44.44, v2, v1, uptime 01:26:42, expires 00:02:14

What would happen if both RPs failed?


R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#interface Loopback0
R4(config-if)#shut
R4(config-if)#end
R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#interface Loopback0
R6(config-if)#shut
R6(config-if)#end

Neither R4 nor R6 can be the RP any longer. How will traffic to 224.9.9.9 be forwarded now?
R1#show ip mroute 224.9.9.9


Group 224.9.9.9 not found


R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 4 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms

With no RP, all traffic in this topology will "fall back" to PIM-DM.


R2#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:00:46/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 00:00:46/00:00:00
GigabitEthernet0/0, Forward/Sparse-Dense, 00:00:46/00:00:00
(172.16.15.1, 224.9.9.9), 00:00:46/00:02:33, flags: T
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse-Dense, 00:00:47/00:00:00

Recognizing that this was less than efficient, Cisco created the no ip pim dm-fallback command. This
command, when applied to all PIM-S-DM speaking routers, will stop this fallback-to-dense-mode behavior
(here we illustrate the command just on R1):

R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#no ip pim dm-fallback

Now when traffic is generated from R1 to the group 224.9.9.9, the traffic will no longer use PIM-DM:
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........

Now we will look to see if the traffic is still dense mode forwarded.
R5#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:01:41/00:01:18, RP 0.0.0.0, flags: SP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null

Based on this output, it is obvious that R5 is not forwarding the traffic because there are no interfaces in
the OIL. There are no interfaces for this group because R5 has no valid RP address to use for the group,
as indicated by the RP 0.0.0.0, and based on the S flag for the (*,G) entry we know the group is being
handled in PIM-SM.
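
For reference, PIM-S-DM is enabled at the interface level with the sparse-dense keyword; a minimal sketch of the interface configuration assumed throughout this portion of the topology (interface names vary per device):

interface FastEthernet0/0
 ip pim sparse-dense-mode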
Auto-RP Listener
Auto-RP listener was created as a better solution than the no ip pim dm-fallback command. Auto-RP listener allows
the network to use pure PIM-SM by making one simple modification to the operational mechanism the
protocol uses. Without the ip pim autorp listener command, PIM-SM will attempt to forward traffic for
all multicast group addresses using sparse mode. However, with the command ip pim autorp listener, Cisco IOS
affords us an exception to this process. Once the command is deployed, all multicast groups except 224.0.1.39
and 224.0.1.40 remain PIM-SM. These two groups, and only these two groups, will be PIM-DM.
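A minimal sketch of the end-state configuration this behavior implies, assuming multicast routing is already enabled globally and every PIM interface runs pure sparse mode (interface names are illustrative):

ip multicast-routing
ip pim autorp listener
!
interface FastEthernet0/0
 ip pim sparse-mode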
In the following topology, illustrated in Figure 8-2, where all interfaces on all devices are running ip pim
sparse-mode, we will observe the application and operation of ip pim autorp listener:


Figure 8-2: Sample Auto-RP topology


In this topology, the ip pim autorp listener command has not been applied to any device. We are going to
look at the multicast routing tables of all devices. It needs to be pointed out that we are going to observe
what will initially appear to be strange behavior, but after we walk through what is happening, things will
make sense. Right now in the topology, R2 is the MA and the C-RPs are R4 and R6. We will look at R2 to
see if it has learned the identities of the C-RPs. If R2 is learning this information, we would expect to see
two (S,G) entries in the multicast routing table: one for R4 and the other for R6, as evidenced by the output of
show ip mroute 224.0.1.39:
R2#show ip mroute 224.0.1.39
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.39), 02:13:54/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 02:13:54/00:00:00
GigabitEthernet0/0, Forward/Sparse, 02:13:54/00:00:00
Loopback0, Forward/Sparse, 02:13:54/00:02:12
(192.1.4.4, 224.0.1.39), 01:54:49/00:02:19, flags: LT
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4


Outgoing interface list:


Loopback0, Forward/Sparse, 01:54:50/00:02:11
(192.1.6.6, 224.0.1.39), 01:58:57/00:02:08, flags: LT
Incoming interface: GigabitEthernet0/1, RPF nbr 172.16.26.6
Outgoing interface list:
Loopback0, Forward/Sparse, 01:58:57/00:02:11

This output indicates that two sources are active for the group 224.0.1.39. Many aspiring students see
this behavior and think that the ip pim autorp listener command is not necessary. Can you imagine
why R2 is learning the information generated on these two C-RPs? We will look closer at this process by
using debug ip pim auto-rp:
R2#debug ip pim auto-rp
PIM Auto-RP debugging is on
R2#
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.6.6, RP_cnt 1, ht 181
Auto-RP(0): Update (224.0.0.0/4, RP:192.1.6.6), PIMv2 v1
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.6.6, RP_cnt 1, ht 181
Auto-RP(0): Update (224.0.0.0/4, RP:192.1.6.6), PIMv2 v1
R2#
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.4.4, RP_cnt 1, ht 181
Auto-RP(0): Update (224.0.0.0/4, RP:192.1.4.4), PIMv2 v1
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.4.4, RP_cnt 1, ht 181
Auto-RP(0): Update (224.0.0.0/4, RP:192.1.4.4), PIMv2 v1
R2#
Auto-RP(0): Build RP-Discovery packet
Auto-RP: Build mapping (224.0.0.0/4, RP:192.1.6.6), PIMv2 v1,
Auto-RP(0): Send RP-discovery packet of length 48 on GigabitEthernet0/0 (1 RP entries)
Auto-RP(0): Send RP-discovery packet of length 48 on GigabitEthernet0/1 (1 RP entries)
Auto-RP(0): Send RP-discovery packet of length 48 on Loopback0(*) (1 RP entries)

R2 is actually receiving the messages from the two C-RPs, and building the RP-Discovery packet. In this
scenario, messages from R4 and R6 are making it to R2 because they are adjacent, and the RP-discovery
packets are making it to R4 and R6 for the same reason. To better illustrate this point, we will disable the
point-to-point link between R4 and R6:
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#interface Serial0/0/0.1
R4(config-subif)#shut
R4(config-subif)#end

With this accomplished, we will use show ip mroute to get a snapshot of what information has been
propagated.


R2#show ip mroute 224.0.1.39


IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.39), 00:01:05/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Loopback0, Forward/Sparse, 00:01:05/00:02:47
(192.1.4.4, 224.0.1.39), 00:00:14/00:02:55, flags: LT
Incoming interface: GigabitEthernet0/0, RPF nbr 172.16.24.4
Outgoing interface list:
Loopback0, Forward/Sparse, 00:00:14/00:02:47
(192.1.6.6, 224.0.1.39), 00:00:22/00:02:43, flags: LT
Incoming interface: GigabitEthernet0/1, RPF nbr 172.16.26.6
Outgoing interface list:
Loopback0, Forward/Sparse, 00:00:22/00:02:46

This means that R2 has learned about the 224.0.1.39 group from both R4 and R6. Now we will look at R4
and R6 respectively:
R4#show ip mroute 224.0.1.39
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.39), 00:00:14/stopped, RP 0.0.0.0, flags: DC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:14/00:02:45


(192.1.4.4, 224.0.1.39), 00:00:14/00:02:45, flags: T


Incoming interface: Loopback0, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:14/00:02:45
R6#show ip mroute 224.0.1.39
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.39), 00:01:05/stopped, RP 0.0.0.0, flags: DC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:01:05/00:02:53
(192.1.6.6, 224.0.1.39), 00:00:21/00:02:38, flags: T
Incoming interface: Loopback0, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:00:21/00:02:53

We will add this to our topology drawing:

Figure 8-3: Propagation of (S,G) entries for 224.0.1.39 based on adjacency


Because the C-RP information is generated by R4 and R6, we see in Figure 8-3 that R2 and R5
learn the source and destination addresses from R4, while R2 and R7 learn them from R6. The
next question is: will R5 and R7 forward these multicast packets?
R5#show ip mroute 224.0.1.39
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.39), 00:44:19/stopped, RP 0.0.0.0, flags: DP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null
(192.1.4.4, 224.0.1.39), 00:02:19/00:00:40, flags: PT
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.4
Outgoing interface list: Null

This output lets us know that R5 will not forward any information for this (S,G) pair. This can be verified on
R1 with show ip mroute:
R1#show ip mroute 224.0.1.39
Group 224.0.1.39 not found

We will repeat this process on R7:


R7#show ip mroute 224.0.1.39
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.39), 00:00:21/stopped, RP 0.0.0.0, flags: DP


Incoming interface: Null, RPF nbr 0.0.0.0


Outgoing interface list: Null
(192.1.6.6, 224.0.1.39), 00:00:21/00:02:38, flags: PT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list: Null

Again, there are no interfaces in the OIL for the (S,G) pair; thus there will be no knowledge of the group
224.0.1.39 on R9:
R9#show ip mroute 224.0.1.39
Group 224.0.1.39 not found

In Figure 8-4 we will apply this information to our drawing to better illustrate the issues at hand:

Figure 8-4: Incomplete Propagation of (S,G) entries for 224.0.1.39 based on adjacency
Figure 8-4 makes it very clear that the information regarding the possible C-RPs is not being properly
propagated through the network.
We have looked at the multicast group 224.0.1.39, and admittedly, in this topology there is only one MA,
so this issue may not affect us. But now we need to look at the address 224.0.1.40 that is used to
propagate the Auto-RP Discovery messages. Starting at R2, where these messages are originated, we will
follow the multicast stream from R2 toward R1:
R4#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,


Z - Multicast Tunnel, z - MDT-data group sender,


Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 01:13:41/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 01:13:41/00:02:12
(192.1.2.2, 224.0.1.40), 01:13:37/00:02:47, flags: PLTX
Incoming interface: FastEthernet0/0, RPF nbr 172.16.24.2
Outgoing interface list: Null

R4 is learning the (S,G) for 224.0.1.40 from R2, but we can see that it is not forwarding the traffic from
that source out any interface. This means that R5 will not have an (S,G) entry, as evidenced by show ip
mroute:
R5#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 01:18:11/00:02:53, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 01:18:11/00:02:53
Loopback0, Forward/Sparse-Dense, 01:18:11/00:00:00

We will see that this process repeats itself between R6 and R7:
R6#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,


U - URD, I - Received Source Specific Host Report,


Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 01:14:03/stopped, RP 0.0.0.0, flags: DPL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null
(192.1.2.2, 224.0.1.40), 01:13:59/00:02:31, flags: PLT
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list: Null

No interfaces in the OIL means R6 will not forward the multicast stream from R2:
R7#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 01:18:38/00:02:32, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 01:18:38/00:02:29
Loopback0, Forward/Sparse-Dense, 01:18:38/00:00:00

Placing all this information in the drawing will allow us to get a better understanding of where the
configuration has failed. Figure 8-5 presents all this information:


Figure 8-5: Incomplete Propagation of (S,G) entries for 224.0.1.39 and 224.0.1.40 based on adjacency
Based on this illustration, we can assume that only R4 and R6 will have knowledge of any group-to-RP
mappings, as evidenced by show ip pim rp mapping on R4 and R6:
R4#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP (Auto-RP)
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 03:55:32, expires: 00:02:33
R6#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP (Auto-RP)
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 03:55:32, expires: 00:02:33

But no other devices beyond R4 and R6 will have this information, because the groups necessary to
propagate the Auto-RP information are being dropped in this topology:
R5#show ip pim rp mapping
PIM Group-to-RP Mappings
R1#show ip pim rp mapping
PIM Group-to-RP Mappings
R7#show ip pim rp mapping


PIM Group-to-RP Mappings


R9#show ip pim rp mapping
PIM Group-to-RP Mappings

The best way to correct this issue is to execute ip pim autorp listener on all devices (we
demonstrate this on R1):
R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#ip pim autorp listener
R1(config)#end

Now that this has been accomplished, we will look at the topology again for the multicast group
224.0.1.40, and verify that each router in the topology agrees on the identity of the RP. We will start
with R1 and work our way to R2:
R1#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 00:02:26/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:02:26/00:00:00
(192.1.2.2, 224.0.1.40), 00:01:36/00:02:22, flags: PLT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.15.5
Outgoing interface list: Null

R1 now knows the (S,G) entry from 192.1.2.2, and therefore can receive the group-to-RP mappings
from the MA:
R1#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1


Info source: 192.1.2.2 (?), elected via Auto-RP


Uptime: 00:01:43, expires: 00:02:15

We will see the same behavior on R5:


R5#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 00:02:26/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:02:26/00:00:00
FastEthernet0/0, Forward/Sparse, 00:02:26/00:00:00
Loopback0, Forward/Sparse-Dense, 00:02:26/00:00:00
(192.1.2.2, 224.0.1.40), 00:01:36/00:02:27, flags: LT
Incoming interface: FastEthernet0/1, RPF nbr 172.16.45.4
Outgoing interface list:
Loopback0, Forward/Sparse-Dense, 00:01:37/00:00:00
FastEthernet0/0, Forward/Sparse, 00:01:37/00:00:00
R5#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 00:01:43, expires: 00:02:13

R4 is next in the topology where we expect to see the same results:


R4#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,


Z - Multicast Tunnel, z - MDT-data group sender,


Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 00:02:26/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:02:26/00:00:00
FastEthernet0/0, Forward/Sparse, 00:02:26/00:00:00
(192.1.2.2, 224.0.1.40), 00:01:36/00:02:31, flags: LT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.24.2
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:01:37/00:00:00
R4#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP (Auto-RP)
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 04:04:49, expires: 00:02:12

Just to complete the verification we will check the MA:


R2#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 00:02:26/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
GigabitEthernet0/1, Forward/Sparse, 00:02:26/00:00:00
GigabitEthernet0/0, Forward/Sparse, 00:02:26/00:00:00
Loopback0, Forward/Sparse, 00:02:26/00:00:00
(192.1.2.2, 224.0.1.40), 00:01:36/00:02:22, flags: LT


Incoming interface: Loopback0, RPF nbr 0.0.0.0


Outgoing interface list:
GigabitEthernet0/0, Forward/Sparse, 00:01:37/00:00:00
GigabitEthernet0/1, Forward/Sparse, 00:01:37/00:00:00
R2#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback0)
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.6.6 (?), elected via Auto-RP
Uptime: 04:10:10, expires: 00:02:49
RP 192.1.4.4 (?), v2v1
Info source: 192.1.4.4 (?), via Auto-RP
Uptime: 04:06:03, expires: 00:02:52

Observe that the MA knows the identity of both C-RPs, but it is only propagating information about the
RP that it has selected for the topology.
One Last Step
Now that we have verified half the topology, we will check everything by initiating an mtrace from R1 to
the 172.16.79.0/24 interface of R9:
R1#mtrace 172.16.15.1 172.16.79.9 224.9.9.9
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 224.9.9.9
From source (?) to destination (?)
Querying full reverse path...
0 172.16.79.9
-1 172.16.67.7 PIM [172.16.15.0/24]
-2 172.16.67.6 PIM Reached RP/Core [172.16.15.0/24]
-3 172.16.26.2 PIM [172.16.15.0/24]
-4 172.16.24.4 PIM [172.16.15.0/24]
-5 172.16.45.5 PIM [172.16.15.0/24]
-6 172.16.15.1 PIM Prune sent upstream [172.16.15.0/24]

We see exactly what we would expect: the multicast path forms with R6 as the RP/Core. Further verification
would be to have R9 join the multicast group 224.9.9.9, and generate a multicast feed for that address
on R1:
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end


R1#ping 224.9.9.9 repeat 10


Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 4 ms
Reply to request 0 from 172.16.79.9, 4 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms

This demonstrates that the topology is fully functional after the application of the ip pim autorp listener
command.



Common Issues with Auto-RP


Auto-RP is somewhat more complicated to deploy than static RPs, which means there are a number of
issues that can surface with Auto-RP. The most common problems relate to the exchange of essential
control plane information. The control plane establishment in Auto-RP has many more
components than its data plane process, and compared to its static RP counterpart, Auto-
RP is much more involved to troubleshoot. For simplicity in troubleshooting common issues while
deploying Auto-RP, we identify two categories of problems: Reverse Path Forwarding (RPF) failures, and
multicast routing problems.
RPF Failures
In the Troubleshooting Auto-RP section, this text discussed which phases of the Auto-RP operational
mechanisms were subject to Reverse Path Forwarding (RPF) checks. Recall that all phases of Auto-RP
are subject to the RPF process. Logically then, RPF issues can prevent an MA from learning about C-RPs.
Additionally, this problem can prevent any device in the domain from successfully learning the identity
of the RP elected by the MA for a given group-to-RP mapping.
The following issues have a relatively high probability of occurring as a result of RPF failures.
Remember that the RPF checks performed by C-RPs are done against the IP address of the MA itself,
whereas RPF checks made by the MA are performed against the IP addresses of the C-RPs. A verification
sketch follows the list below.

Candidate-RPs cannot communicate their RP-Set information to the MA.


Mapping Agent cannot communicate the elected group-to-RP mapping information to C-RPs or
multicast-enabled devices.
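
When an RPF failure between these devices is suspected, show ip rpf run against the address of the MA or a C-RP quickly confirms which interface and neighbor the check resolves to. A sketch using addresses from our topology (the RPF type and route fields shown are illustrative and will vary with the unicast routing protocol in use):

R2#show ip rpf 192.1.4.4
RPF information for ? (192.1.4.4)
  RPF interface: GigabitEthernet0/0
  RPF neighbor: ? (172.16.24.4)
  RPF route/mask: 192.1.4.0/24
  RPF type: unicast (eigrp 100)
  RPF recursion count: 0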

We will perform a walkthrough for each of these RPF issues in the Auto-RP Sample Troubleshooting
Scenarios section that follows.
Multicast Routing and Forwarding Problems
These problems manifest themselves in more subtle ways when compared to the previous points. As
discussed earlier, the majority of the Auto-RP operational mechanisms involve the formation of the
control plane so that a device can be assigned as the MA, and so that C-RP group-to-RP mappings can be
communicated to the MA. From that point, the elected group-to-RP mapping can be propagated
throughout the multicast domain by the MA.
Situations like the following exist when information fails to propagate to any or all devices, but RPF
checks and unicast routing seem to be functioning correctly:

One or more devices fail to receive the elected group-to-RP-set information from the MA.


One or more C-RPs fail to communicate their candidacy as a C-RP for a given group or
scope.

In the Auto-RP Sample Troubleshooting Scenarios section that follows, troubleshooting these issues is
demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each
symptom, isolate the cause, and remediate the issue.


Auto-RP Sample Troubleshooting Scenarios


This section provides a detailed look at how to best approach troubleshooting some of the common
issues discussed in previous sections. It includes coverage of a methodology for identification, isolation,
and remediation of faults in the Auto-RP operational process. The intent here is to hone and develop
troubleshooting skills tailored to first identify whether a problem is MA- or C-RP-related, and then to begin
isolating the cause of the fault in the most efficient manner possible. Figure 8-6 illustrates the topology
used to explore this topic. Note that R4 and R6 operate as C-RPs and R2 is the MA:

Figure 8-6: A Sample Auto-RP Topology

In the Common Issues with Auto-RP section, two primary types of problems were identified: RPF
failures, and multicast forwarding and routing failures. This section explores these categories of failure
by directing our attention to the commands necessary to identify that a problem exists. There are three
types of devices in this topology: C-RPs, an MA, and PIM-enabled routers. We will verify that this
environment is operating correctly by checking that the topology agrees with Figure 8-6.
Step One: Is R2 the Mapping Agent?
The fastest way to verify that R2 is the Mapping Agent would be show ip pim autorp:
R2#show ip pim autorp
AutoRP Information:
AutoRP is enabled.
AutoRP groups over sparse mode interface is enabled
PIM AutoRP Statistics: Sent/Received
RP Announce: 0/54, RP Discovery: 45/0


The output indicates that R2 is sending Discovery messages. Only the MA will do this in an Auto-RP
configuration.
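
Recall the single command, applied earlier in this chapter, that makes R2 the MA:

R2(config)#ip pim send-rp-discovery loopback 0 scope 16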

Step Two: Are R4 and R6 configured as C-RPs?
The fastest way to verify that R4 and R6 are Candidate-RPs would be show ip pim autorp:
R4#show ip pim autorp
AutoRP Information:
AutoRP is enabled.
AutoRP groups over sparse mode interface is enabled
PIM AutoRP Statistics: Sent/Received
RP Announce: 68/0, RP Discovery: 0/18
R6#show ip pim autorp
AutoRP Information:
AutoRP is enabled.
AutoRP groups over sparse mode interface is enabled
PIM AutoRP Statistics: Sent/Received
RP Announce: 68/0, RP Discovery: 0/18

Observe that R4 and R6 are indeed sending RP announcement messages.
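
For completeness, the announcements seen here imply a C-RP configuration of this general form on R4 and R6 (shown for R4; the scope value is an assumption, chosen to match the discovery scope configured on R2):

R4(config)#ip pim send-rp-announce Loopback0 scope 16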



Step Three: Are test pings successful from our designated source router?
Before conducting this test, we will need to have a receiver join the multicast group used for verification. In this
instance, R9 will join the group 224.9.9.9:
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end

With this accomplished, are pings to this group successful from R1?
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 4 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms

Step Four: Which of the two possible C-RPs was elected to serve as the RP for the group 224.9.9.9?

This is verified with show ip pim rp:
R1#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, v1, uptime 00:21:42, expires 00:02:07
R5#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, uptime 00:21:42, expires 00:02:06
R4#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, v1, uptime 00:21:42, expires 00:02:04
R2#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, v1, uptime 00:21:42, expires 00:02:16
R6#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, v1, next RP-reachable in 00:00:47
R7#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, v1, uptime 00:21:42, expires 00:02:06
R9#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, v1, uptime 00:21:42, expires 00:02:04

This output clearly identifies R6 as the RP for the group 224.9.9.9. All things being equal in the
configuration between R4 and R6, we would expect this based on R6's higher IP address.
RPF failures
Pings to the group 224.9.9.9 are no longer successful from R1:
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........


There are a number of reasons why this may be happening; a logical approach would be to determine whether
R1 has an RP mapping for the group 224.9.9.9:
R1#show ip pim rp mapping
PIM Group-to-RP Mappings
R1#

R1 has no mapping for this or any group. Are RP Discovery messages arriving on R1 from the MA?
R1#show ip pim autorp
AutoRP Information:
AutoRP is enabled.
AutoRP groups over sparse mode interface is enabled
PIM AutoRP Statistics: Sent/Received
RP Announce: 0/0, RP Discovery: 0/32

We see that 32 RP Discovery messages have arrived. We know that these messages will be sent at
periodic intervals, so logically, we would expect this value to increment over time. After 2 minutes, we
will execute the command again.
R1#show ip pim autorp
AutoRP Information:
AutoRP is enabled.
AutoRP groups over sparse mode interface is enabled
PIM AutoRP Statistics: Sent/Received
RP Announce: 0/0, RP Discovery: 0/32

The value is not incrementing. We know that these messages are subjected to RPF checks, and that they
are sourced from the Loopback0 interface of R2. To verify the multicast path between R2 and R1, we will
use mtrace:
R1#mtrace 192.1.2.2
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 172.16.15.1 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 172.16.15.1
-1 172.16.15.1 PIM [192.1.2.0/24]
-2 172.16.15.5 PIM [192.1.2.0/24]
-3 172.16.45.4 PIM Multicast disabled [192.1.2.0/24]
-4 172.16.24.2 PIM [192.1.2.0/24]


The output of mtrace shows us that PIM is disabled on the 172.16.45.4 interface of R4, as
evidenced with show run on that device:
R4#show run interface FastEthernet0/1
Building configuration...
Current configuration : 96 bytes
!
interface FastEthernet0/1
ip address 172.16.45.4 255.255.255.0
duplex auto
speed auto
end

This can be corrected by applying the ip pim sparse-mode command under this interface:
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#interface FastEthernet0/1
R4(config-if)#ip pim sparse-mode
R4(config-if)#end
R4#
%PIM-5-NBRCHG: neighbor 172.16.45.5 UP on interface FastEthernet0/1
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 172.16.45.5 on interface
FastEthernet0/1

We see the PIM neighbor come up with R5. Now do we see any group-to-RP mappings on R1?
R1#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 00:01:26, expires: 00:02:31

We see the mapping. Are pings to the group 224.9.9.9 successful now?
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 4 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms

The pings are now successful.


Multicast Forwarding and Routing Failures
Pings to the group 224.9.9.9 are no longer successful from R1:
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........


There are a number of reasons why this may be happening; a logical approach would be to determine whether
R1 has an RP mapping for the group 224.9.9.9:
R1#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 00:17:24, expires: 00:02:22

Do all devices between R1 and R9 have the same Group-to-RP mapping?


R5#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 00:18:50, expires: 00:01:59
R4#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP (Auto-RP)
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 01:08:09, expires: 00:02:02


R2#show ip pim rp mapping


PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback0)
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.6.6 (?), elected via Auto-RP
Uptime: 01:08:09, expires: 00:02:50
RP 192.1.4.4 (?), v2v1
Info source: 192.1.4.4 (?), via Auto-RP
Uptime: 00:35:36, expires: 00:02:24

R6#show ip pim rp mapping


PIM Group-to-RP Mappings
This system is an RP (Auto-RP)
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 01:08:09, expires: 00:01:59
R7#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 01:08:09, expires: 00:02:02
R9#show ip pim rp mapping
PIM Group-to-RP Mappings

No, they do not. R9 has no RP mappings assigned. This could be an RPF error between R9 and the MA. If
so, this can be identified via mtrace:


R9#mtrace 192.1.2.2
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 172.16.79.9 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 172.16.79.9
-1 172.16.79.9 PIM [192.1.2.0/24]
-2 172.16.79.7 PIM [192.1.2.0/24]
-3 172.16.67.6 PIM [192.1.2.0/24]
-4 172.16.26.2 PIM [192.1.2.0/24]
-5 192.1.2.2


There does not appear to be an RPF issue. That leaves a multicast routing and forwarding issue
somewhere in the path. By looking at the multicast routing table on R9 we can see if it is learning any
information from the MA 192.1.2.2 for the group 224.0.1.40:
R9#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 00:20:14/00:02:37, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:20:14/00:00:00

This output demonstrates that R9 has only the *,G entry for the group 224.0.1.40, and has no incoming
interface. This means that R9 is not receiving any multicast packets for this group. Logically, by looking
at Figure 8-6, we know that R9 should receive these packets from R7. We need to look at the multicast
routing table on R7 now:
R7#show ip mroute 224.0.1.40
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.0.1.40), 00:21:28/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:21:28/00:02:09
(192.1.2.2, 224.0.1.40), 00:20:51/00:02:10, flags: PLTX
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6


Outgoing interface list: Null

We see that R7 has both the *,G and the S,G entry for the group 224.0.1.40, but we also see that the OIL
is empty (Null). Note again the flag of "D" for this traffic. This means that the packets are dense mode
forwarded. We are running pim sparse-mode under both interfaces on R7, as evidenced by show run:
R7#show run interface FastEthernet0/0
Building configuration...
Current configuration : 116 bytes
!
interface FastEthernet0/0
ip address 172.16.67.7 255.255.255.0
ip pim sparse-mode
duplex auto
speed auto
end
R7#show run interface FastEthernet0/1
Building configuration...
Current configuration : 116 bytes
!
interface FastEthernet0/1
ip address 172.16.79.7 255.255.255.0
ip pim sparse-mode
duplex auto
speed auto
end

We know that we need the ip pim autorp listener command to allow R7 to forward the multicast groups
224.0.1.39 and 224.0.1.40 in dense mode. We can see if this command is configured on R7 with show
run:
R7#show run | inc listener
R7#

The command is not configured. This can be corrected by adding the command on R7:
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#ip pim autorp listener
R7(config)#end

Now are pings successful from R1?


R1#ping 224.9.9.9 repeat 10


Type escape sequence to abort.


Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 4 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms

We have corrected the multicast routing and forwarding issue.


AutoRP show Command Tools


As a quick reference, here are the show command tools utilized in this chapter. This section utilizes the
AutoRP topology in Figure 8-7 for all example output.

Figure 8-7: A Sample AutoRP Topology

show COMMAND:
show ip igmp membership [group-address | group-name] [tracked] [all]
This command displays Internet Group Management Protocol (IGMP) membership information for
multicast groups and (S, G) channels.
Where:

group-address - optional; specifies the specific multicast group address
tracked - optional; displays the multicast groups with the explicit tracking feature enabled
all - optional; displays the detailed information about the multicast groups with and without the
explicit tracking feature enabled

EXAMPLE OUTPUT:
R9#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m> - <n> reporter in include mode, <m> reporter in exclude


Channel/Group     Reporter        Uptime    Exp.    Flags  Interface
*,224.9.9.9       172.16.79.9     00:20:07  02:22   2LA    Fa0/1
*,239.9.9.9       172.16.79.9     00:20:07  02:22   2LA    Fa0/1
*,224.0.1.40      172.16.79.9     00:20:07  02:22   2LA    Fa0/1
R9#


show COMMAND:
show ip mroute
This command displays the contents of the multicast routing (mroute) table.
EXAMPLE OUTPUT:
R7#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:21:15/00:02:38, RP 192.1.6.6, flags: SJC
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:01:05/00:03:01
(*, 239.9.9.9), 00:21:15/00:03:04, RP 192.1.6.6, flags: SJC
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:01:05/00:03:04
(172.16.15.1, 239.9.9.9), 00:01:00/00:02:03, flags: T
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:01:00/00:03:28
(*, 224.0.1.39), 00:03:07/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:03:07/00:00:00
FastEthernet0/0, Forward/Sparse, 00:03:09/00:00:00


(192.1.4.4, 224.0.1.39), 00:00:59/00:02:05, flags: PT


Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Prune/Sparse, 00:00:59/00:02:02
(*, 224.0.1.40), 00:03:32/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:03:32/00:00:00
FastEthernet0/0, Forward/Sparse, 00:03:32/00:00:00
Loopback0, Forward/Sparse, 00:03:32/00:00:00
(192.1.2.2, 224.0.1.40), 00:01:08/00:02:55, flags: LT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
Loopback0, Forward/Sparse, 00:01:08/00:00:00
FastEthernet0/1, Forward/Sparse, 00:01:08/00:00:00
R7#


show COMMAND:
show ip pim interface
This command displays information about interfaces configured for Protocol Independent Multicast
(PIM).
EXAMPLE OUTPUT:
R7#show ip pim interface

Address          Interface                Ver/    Nbr     Query   DR      DR
                                          Mode    Count   Intvl   Prior
192.1.7.7        Loopback0                v2/S    0       30      1       192.1.7.7
172.16.67.7      FastEthernet0/0          v2/S    1       30      1       172.16.67.7
172.16.79.7      FastEthernet0/1          v2/S    1       30      1       172.16.79.9
R7#

show COMMAND:
show ip pim rp mapping
This command displays information about Protocol Independent Multicast (PIM) RP mappings.
EXAMPLE OUTPUT:
R7#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4


RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 00:02:07, expires: 00:02:51
R7#


show COMMAND:
show ip pim [vrf vrf-name] neighbor [interface-type interface-number]
This command displays information about Protocol Independent Multicast (PIM) neighbors discovered
by PIM version 1 router query messages or PIM version 2 hello messages.
Where:

vrf - optional; specifies the name of the multicast VRF instance


interface-type - optional; restricts the output to information about PIM neighbors reachable on
the specified interface

EXAMPLE OUTPUT:
R7#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires      Ver   DR
Address                                                              Prio/Mode
172.16.67.6       FastEthernet0/0          00:22:16/00:01:36   v2    1 / S
172.16.79.9       FastEthernet0/1          00:22:40/00:01:23   v2    1 / DR S
R7#


show COMMAND:
show ip rpf [vrf vrf-name] {route-distinguisher | source-address [group-address] [rd route-distinguisher]} [metric]
This command displays information that IP multicast routing uses to perform the Reverse Path
Forwarding (RPF) check for a multicast source.
Where:

vrf - optional; specifies the name of the multicast VRF instance


route-distinguisher - Route distinguisher (RD) of a VPNv4 prefix; entering the route-distinguisher
argument displays RPF information related to the specified VPN route
source-address - IP address or name of a multicast source for which to display RPF information


group-address - optional; IP address or name of a multicast group for which to display RPF
information
rd route-distinguisher - optional; displays the Border Gateway Protocol (BGP) RPF next hop for
the VPN route associated with the RD specified for the route-distinguisher argument
metric - optional; displays the unicast routing metric

EXAMPLE OUTPUT:
R7#show ip rpf 192.1.2.2
RPF information for ? (192.1.2.2)
RPF interface: FastEthernet0/0
RPF neighbor: ? (172.16.67.6)
RPF route/mask: 192.1.2.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables
R7#


AutoRP debug Command Tools


As a quick reference, here are the debug command tools utilized in this chapter. This section utilizes the
AutoRP topology in Figure 8-8 for all example output.

Figure 8-8: A Sample AutoRP Topology

debug COMMAND:
debug ip mpacket [vrf vrf-name] [detail | fastswitch] [access-list] [group]
This command displays multicast packets that are received and sent on the device.
Where:

vrf - optional; specifies the name of the multicast VRF instance
detail - optional; displays IP header and MAC information
fastswitch - optional; displays IP packet information in the fast path
access-list - optional; restricts the output per the specified access-list

EXAMPLE OUTPUT:
IP(0): s=172.16.26.6 (FastEthernet0/1) d=239.9.9.9 (FastEthernet0/0) id=1, ttl=254,
prot=1, len=100(100), mforward


debug COMMAND:
debug ip pim [vrf vrf-name] [bsr]
This command displays Protocol Independent Multicast (PIM) packets received and sent, and displays
PIM-related events.


Where:

vrf - optional; specifies the name of the multicast VRF instance

EXAMPLE OUTPUT:
R7#debug ip pim
PIM debugging is on
R7#
PIM(0): Insert (172.16.15.1,239.9.9.9) join in nbr 172.16.67.6's queue
PIM(0): Building Join/Prune packet for nbr 172.16.67.6
PIM(0): Adding v2 (172.16.15.1/32, 239.9.9.9), S-bit Join
PIM(0): Send v2 join/prune to 172.16.67.6 (FastEthernet0/0)
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Prune-list: (192.1.4.4/32, 224.0.1.39)
PIM(0): Prune FastEthernet0/1/224.0.1.39 from (192.1.4.4/32, 224.0.1.39)
PIM(0): Insert (192.1.4.4,224.0.1.39) prune in nbr 172.16.67.6's queue
PIM(0): Building Join/Prune packet for nbr 172.16.67.6
PIM(0): Adding v2 (192.1.4.4/32, 224.0.1.39) Prune
PIM(0): Send v2 join/prune to 172.16.67.6 (FastEthernet0/0)
R7#
PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 172.16.79.9, to us
PIM(0): Prune-list: (192.1.4.4/32, 224.0.1.39)
R7#


Chapter Challenge: Auto-RP Sample Trouble Tickets


The following section includes three sample Trouble Tickets designed to challenge the troubleshooting
skills that have been developed in all previous sections of this chapter. These Trouble Tickets were
designed using the Routing & Switching rental racks at www.ProctorLabs.com with the initial
configurations provided in the file MCAST-CH8-AUTO-RP-TT-INITIAL.txt. Keep in mind these sample
Trouble Tickets were also tested against home practice racks and the most popular router emulators.
The network topology used in this section is shown in Figure 8-9 below:

Figure 8-9: The Chapter Challenge Topology

Trouble Ticket #1
Your supervisor has brought to your attention that the router R5 refuses to use R6 as the RP for any
multicast group. This behavior is not acceptable and is preventing R9 from being able to receive
multicast traffic. You have been instructed to isolate this issue with the multicast group 224.9.9.9. You
must correct the issue once it is identified.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that the MA (R2) is only recording RP
Announcements from R6 in the group-to-RP mapping table. R2 needs to be configured such that it
accepts RP announcements from R5 and R4 only. Be advised that this task was previously in the hands of
a junior technician. Correct this issue.
Trouble Ticket #3
Your supervisor has notified you that R7 will be assuming the role of RP for all multicast groups in this
topology. Previous testing has been performed using R7 after business hours as the RP. You have been
instructed to place R7 into operation immediately.


Chapter Challenge: Auto-RP Sample Trouble Tickets Solutions


The following section includes the solutions to the three Trouble Tickets presented in the previous
section. Figure 8-10 provides a flowchart that outlines a "quick fire" approach to isolating and
remediating issues associated with Auto-RP.


Figure 8-10: Auto-RP Quick Fire Troubleshooting Flowchart


Trouble Ticket #1 Solution
Your supervisor has brought to your attention that the router R5 refuses to use R6 as the RP for any
multicast group. This behavior is not acceptable and is preventing R9 from being able to receive
multicast traffic. You have been instructed to isolate this issue with the multicast group 224.9.9.9. You
must correct the issue once it is identified.

Step 1 - Fault Verification:
Initiate a ping test from R1 to the group 224.9.9.9 with a high repeat count:
R1#ping 224.9.9.9 repeat 100000


Type escape sequence to abort.


Sending 100000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........................<output omitted>


These pings are not successful. What RP is in use on R5?

R5#show ip pim rp
Group: 224.9.9.9, RP: 192.1.5.5, next RP-reachable in 00:00:22

R5 does not choose R6 as the RP for this group, thus verifying the problem.
Step 2 - Fault Isolation:
The next course of action is to use the show ip pim rp mapping command to determine what R5 is
learning as possible candidate-RPs:

R5#show ip pim rp mapping 224.9.9.9
PIM Group-to-RP Mappings
This system is an RP (Auto-RP)
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.2.2 (?), elected via Auto-RP
Uptime: 00:45:18, expires: 00:02:44
Group(s): 224.0.0.0/4, Static-Override
RP: 192.1.5.5 (?)


We see that R5 is learning about R6's desire to be the RP, but it is not choosing it as the RP because of a
static RP assignment. Note that this output tells us that the static-override option has been used. This
means that the static RP assignment will always override any dynamically learned RP information. We
can see this via show run:

R5#show run | inc override
ip pim rp-address 192.1.5.5 override

The override option will prevent R5 from using the dynamic RP assignment via Auto-RP. This has
unquestionably isolated our fault.

Step 3 - Fault Remediation:
In this scenario, the ip pim rp-address command needs to be removed.

R5#conf t


Enter configuration commands, one per line. End with CNTL/Z.


R5(config)#no ip pim rp-address 192.1.5.5 override
R5(config)#end


Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method as the initial fault verification.

Initiate a ping test from R1 to the group 224.9.9.9:
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 4 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms


These pings are successful. What RP is in use on R5?

R5#show ip pim rp
Group: 224.9.9.9, RP: 192.1.6.6, v2, uptime 00:56:01, expires 00:02:55

R5 now chooses R6 as the RP, demonstrating that the issue has been corrected.
Trouble Ticket #2 Solution
After solving Trouble Ticket #1, your supervisor has observed that the MA (R2) is only recording RP
Announcements from R6 in the group-to-RP mapping table. R2 needs to be configured such that it
accepts RP announcements from R5 and R4 only. Be advised that this task was previously in the hands of
a junior technician. Correct this issue.
Step 1 - Fault Verification:
Is R2 only learning the information from R6?
R2#show ip pim rp mapping
PIM Group-to-RP Mappings


This system is an RP-mapping agent (Loopback0)


Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.6.6 (?), elected via Auto-RP
Uptime: 03:14:19, expires: 00:02:38


The MA is only learning of R6's desire to be considered as a C-RP, thus verifying that the problem exists.

Step 2 - Fault Isolation:
Are R4 and R5 advertising RP Announcement messages?

R4#show ip pim autorp
AutoRP Information:
AutoRP is enabled.
AutoRP groups over sparse mode interface is enabled
PIM AutoRP Statistics: Sent/Received
RP Announce: 769/0, RP Discovery: 0/200

Note that R4 is sending RP Announcements. This is confirmed by the fact that the counter increments
over time:
R4#show ip pim autorp
AutoRP Information:
AutoRP is enabled.
AutoRP groups over sparse mode interface is enabled
PIM AutoRP Statistics: Sent/Received
RP Announce: 773/0, RP Discovery: 0/200


The same test will be repeated on R5:

R5#show ip pim autorp
AutoRP Information:
AutoRP is enabled.
AutoRP groups over sparse mode interface is enabled
PIM AutoRP Statistics: Sent/Received
RP Announce: 201/0, RP Discovery: 0/183

Note that R5 is also sending RP Announcements. This is confirmed by the fact that the counter
increments over time:
R5#show ip pim autorp


AutoRP Information:
AutoRP is enabled.
AutoRP groups over sparse mode interface is enabled
PIM AutoRP Statistics: Sent/Received
RP Announce: 204/0, RP Discovery: 0/185


This means that either the messages are not reaching R2, or they are being ignored by R2. An RPF error
could cause this issue. We will verify whether this is the case via mtrace:

R4#mtrace 192.1.2.2
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 172.16.24.4 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 172.16.24.4
-1 172.16.24.4 PIM [192.1.2.0/24]
-2 172.16.24.2 PIM [192.1.2.0/24]
-3 192.1.2.2
R5#mtrace 192.1.2.2
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 172.16.45.5 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 172.16.45.5
-1 172.16.45.5 PIM [192.1.2.0/24]
-2 172.16.45.4 PIM [192.1.2.0/24]
-3 172.16.24.2 PIM [192.1.2.0/24]
-4 192.1.2.2


The output does not seem to indicate an RPF issue. Now we will look to see if the messages are arriving
on R2 (MA) via debug ip pim auto-rp:
R2#debug ip pim auto-rp
PIM Auto-RP debugging is on
R2#
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.6.6, RP_cnt 1, ht 181
Auto-RP(0): Update (224.0.0.0/4, RP:192.1.6.6), PIMv2 v1
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.6.6, RP_cnt 1, ht 181
Auto-RP(0): Update (224.0.0.0/4, RP:192.1.6.6), PIMv2 v1

Here we see the RP-announcement arrive from R6, but look closely at the messages for R4 and R5:
R2#
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.4.4, RP_cnt 1, ht 181
Auto-RP(0): Filtered 224.0.0.0/4 for RP 192.1.4.4
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.4.4, RP_cnt 1, ht 181
Auto-RP(0): Filtered 224.0.0.0/4 for RP 192.1.4.4
R2#
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.5.5, RP_cnt 1, ht 181
Auto-RP(0): Filtered 224.0.0.0/4 for RP 192.1.5.5
Auto-RP(0): Received RP-announce packet of length 48, from 192.1.5.5, RP_cnt 1, ht 181
Auto-RP(0): Filtered 224.0.0.0/4 for RP 192.1.5.5

These messages are being filtered and therefore dropped. This is not default behavior and must be
related to a filter of some kind, as evidenced by show run:

R2#show run | inc filter
ip pim rp-announce-filter rp-list 1


This command references an access-list. What is being permitted and denied by standard access-list
1?

R2#show ip access-list 1
Standard IP access list 1
10 deny 192.1.6.6 (146 matches)
20 permit any (296 matches)

This access-list has been incorrectly configured. Line 10 states that RP Announcements sourced from
the IP address 192.1.6.6 are not to be filtered; therefore, any announcement matching sequence
number 20 will be filtered. The junior technician should have explicitly permitted 192.1.6.6 and denied
all other traffic sources. This has isolated our fault.
Step 3 - Fault Remediation:
In this scenario, access-list 1 needs to be rewritten to permit 192.1.6.6 and deny all other traffic. The
implicit deny at the end of an access-list would normally accomplish this, but we will make the deny
explicit so that we can see the packet count for those packets being denied.

R2#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#no access-list 1
R2(config)#access-list 1 permit 192.1.6.6
R2(config)#access-list 1 deny any
R2(config)#end


Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially:



R2#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback0)
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2v1
Info source: 192.1.6.6 (?), elected via Auto-RP
Uptime: 03:36:03, expires: 00:00:53
RP 192.1.5.5 (?), v2v1
Info source: 192.1.5.5 (?), via Auto-RP
Uptime: 00:00:24, expires: 00:02:33
RP 192.1.4.4 (?), v2v1
Info source: 192.1.4.4 (?), via Auto-RP
Uptime: 00:00:30, expires: 00:02:28


Now the MA is learning about R5 and R4. We still see the entry for 192.1.6.6, but notice that the
expiration timer has only 53 seconds remaining. After waiting a minute, R6 should age out, leaving only
R4 and R5:

R2#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback0)
Group(s) 224.0.0.0/4
RP 192.1.5.5 (?), v2v1
Info source: 192.1.5.5 (?), elected via Auto-RP
Uptime: 00:02:14, expires: 00:02:46
RP 192.1.4.4 (?), v2v1
Info source: 192.1.4.4 (?), via Auto-RP
Uptime: 00:02:20, expires: 00:02:39

This is the desired behavior thus proving the issue has been corrected.
Trouble Ticket #3 Solution
Your supervisor has notified you that R7 will be assuming the role of RP for all multicast groups in this
topology. Previous testing has been performed using R7 after business hours as the RP. You have been
instructed to place R7 into operation immediately.
Step 1 - Fault Verification:
Is R2 learning anything from R7?


R2#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback0)
Group(s) 224.0.0.0/4
RP 192.1.5.5 (?), v2v1
Info source: 192.1.5.5 (?), elected via Auto-RP
Uptime: 00:05:40, expires: 00:02:18
RP 192.1.4.4 (?), v2v1
Info source: 192.1.4.4 (?), via Auto-RP
Uptime: 00:05:46, expires: 00:02:11


This output notifies us that the MA does not know about R7's C-RP status. Is R7 sending RP
Announcement messages?

R7#show ip pim autorp
AutoRP Information:
AutoRP is enabled.
AutoRP groups over sparse mode interface is enabled
PIM AutoRP Statistics: Sent/Received
RP Announce: 276/0, RP Discovery: 0/228

R7 has sent 276 RP Announce messages; if we wait and repeat the verification, this number should
increment:
R7#show ip pim autorp
AutoRP Information:
AutoRP is enabled.
AutoRP groups over sparse mode interface is enabled
PIM AutoRP Statistics: Sent/Received
RP Announce: 279/0, RP Discovery: 0/229


We see 279 messages now after about a minute or so. This tells us that R7 is configured to act as C-RP,
but the MA is not learning this fact. This verifies the issue exists.

Step 2 - Fault Isolation:
Realizing that these messages are subject to RPF checks, we will need to verify any possible RPF issues
via mtrace:

R2#mtrace 192.1.7.7
Type escape sequence to abort.
Mtrace from 192.1.7.7 to 172.16.26.2 via RPF


From source (?) to destination (?)


Querying full reverse path...
0 172.16.26.2
-1 172.16.26.2 PIM [192.1.7.0/24]
-2 172.16.26.6 PIM [192.1.7.0/24]
-3 172.16.67.7 PIM [192.1.7.0/24]
-4 192.1.7.7


Now we will check the reverse direction from R7:

R7#mtrace 192.1.2.2
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 172.16.67.7 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 172.16.67.7
-1 172.16.67.7 PIM [192.1.2.0/24]
-2 172.16.67.6 PIM [192.1.2.0/24]
-3 172.16.26.2 PIM [192.1.2.0/24]
-4 192.1.2.2


The packets take the same path in both directions. This seems to eliminate the possibility of an RPF
check failure, which means that something non-RPF related is stopping the packets from arriving at R2.
We will monitor these packets as they leave R7 and make their way toward R2 by using the debug ip pim
auto-rp command:

R7#debug ip pim auto-rp
PIM Auto-RP debugging is on
R7#
Auto-RP(0): Build RP-Announce for 192.1.7.7, PIMv2/v1, ttl 1, ht 181
Auto-RP(0): Build announce entry for (224.0.0.0/4)
Auto-RP(0): Send RP-Announce packet of length 48 on FastEthernet0/0
Auto-RP(0): Send RP-Announce packet of length 48 on FastEthernet0/1
Auto-RP(0): Send RP-Announce packet of length 48 on Loopback0(*)
R7#

If we are not careful, we could miss some very critical information in this debug output. Note that we
are sending an announcement for 192.1.7.7, but note the value of the TTL. The Time-to-Live has been
set to a value of 1 on this device, which means that the packet will expire after making one "hop". As
mentioned in the Technology Review section of this chapter, the scope keyword can be used to "bound"
Auto-RP information. In this instance, the value has been set too low for the packets to reach the MA,
as evidenced by show run:

R7#sh run | inc send-rp-announce


ip pim send-rp-announce Loopback0 scope 1


This value is inadequate to allow packets to reach the MA. This has isolated our issue.
Step 3 - Fault Remediation:
In this scenario, ip pim send-rp-announce needs to be configured with a scope large enough for the
announcements to reach the MA. Per the earlier mtrace output, the announcements must traverse two
hops (R7 to R6, then R6 to R2), so a scope of 2 is the minimum value that will work.

R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#ip pim send-rp-announce loopback 0 scope 2
R7(config)#end


Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially.

R2#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback0)
Group(s) 224.0.0.0/4
RP 192.1.7.7 (?), v2v1
Info source: 192.1.7.7 (?), elected via Auto-RP
Uptime: 00:00:07, expires: 00:02:48
RP 192.1.5.5 (?), v2v1
Info source: 192.1.5.5 (?), via Auto-RP
Uptime: 00:26:05, expires: 00:02:52
RP 192.1.4.4 (?), v2v1
Info source: 192.1.4.4 (?), via Auto-RP
Uptime: 00:26:11, expires: 00:02:45


The MA is now learning the information from R7. Based on R7's higher IP address, it is being elected by
the MA as the RP for the range 224.0.0.0/4. This means that an mtrace from R1 to the
FastEthernet0/1 interface of R9 using the multicast group 224.9.9.9 should show R7 as the RP/Core
device:

R1#mtrace 172.16.15.1 172.16.79.9 224.9.9.9
Type escape sequence to abort.
Mtrace from 172.16.15.1 to 172.16.79.9 via group 224.9.9.9
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.79.9
-1 * 172.16.79.9 PIM Prune sent upstream [172.16.15.0/24]


-2 * 172.16.79.7 PIM Reached RP/Core [172.16.15.0/24]
-3 * 172.16.67.6 PIM [172.16.15.0/24]
-4 * 172.16.26.2 PIM [172.16.15.0/24]
-5 * 172.16.24.4 PIM [172.16.15.0/24]
-6 * 172.16.45.5 PIM [172.16.15.0/24]
-7 * 172.16.15.1 PIM Prune sent upstream [172.16.15.0/24]

We see this is indeed the case, demonstrating that the fault has been corrected.


Chapter 9: Bootstrap Router (BSR) Protocol



In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and the functionality
of the Bootstrap Router (BSR) protocol are examined in great depth. Once the operational
characteristics of this important protocol are detailed completely, the focus becomes that of
troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and
the implementation of repairs for the Bootstrap Router (BSR) protocol. The chapter begins with a
thorough review of BSR, and then quickly launches into an exhaustive analysis of the art of
troubleshooting this multicast support protocol. This important chapter concludes with sample
troubleshooting scenarios, reference materials for the most important show and debug commands, and
exciting challenges that allow readers to practice implementing the troubleshooting skills they have
obtained.


BSR Technology Review


Bootstrap router (BSR) was created as an open standard solution meant to address many of the
shortcomings in the Cisco proprietary AutoRP technology. The Protocol Independent Multicast Sparse
Mode (PIM-SM) version 2 specification introduced BSR.
Note: Many texts refer to BSR as simply PIM-SM version 2.
While BSR addresses many issues with AutoRP, it operates in a very similar manner when examined
from a high level. There are router(s) that act as candidate-Rendezvous Points (RPs) and router(s) that
act similarly to the Mapping Agent (MA) found in AutoRP. In BSR terminology, the equivalent of the
Mapping Agent is the Bootstrap Router itself.
Note: AutoRP is detailed in Chapter 8: AutoRP.
Figure 9-1 demonstrates a sample BSR topology.


Figure 9-1: A Sample BSR Topology


A major design improvement over Cisco's AutoRP is the fact that BSR requires no dense mode operation
whatsoever. Rendezvous Point (RP) information is within BSR messages, which are carried inside of
Protocol Independent Multicast (PIM) messages themselves. These PIM messages are link-local
multicast messages. When a router receives a BSR message containing RP information, the router
applies the Reverse Path Forwarding (RPF) check and then floods the message out all of the PIM-enabled
interfaces. Remember, the link-local multicast address used for PIM messages is 224.0.0.13.
Since the PIM messages that carry the BSR information are link-local in scope, notice that there is no
Time to Live (TTL) scoping that can be used with BSR.
Note: BSR and AutoRP cannot interoperate directly with each other.
Obviously, a key element to the BSR process is the device or devices that want to serve as the
Rendezvous Point for multicast groups in the Sparse Mode domain. To configure a candidate-RP in BSR,
use the following command (an illustrative example follows the parameter list):
ip pim [vrf vrf-name] rp-candidate interface-type interface-number [bidir] [group-list access-list] [interval seconds] [priority value]
Where:

vrf - configures the router to advertise itself as the candidate-RP to the Bootstrap Router for the
Virtual Routing and Forwarding (VRF) instance specified for the vrf-name argument
interface-type interface-number - the interface bound to the IP address to serve as the
candidate-RP IP address; for availability purposes, consider the use of a loopback interface; this
interface needs to be PIM enabled
bidir - optional - indicates that the multicast groups specified by the access-list argument are to
operate in PIM bidirectional mode; PIM bidirectional mode is covered in Chapter 6:
Bidirectional PIM
group-list - optional - specifies the prefixes that are advertised in association with the RP
address; note that unlike AutoRP, this list cannot contain DENY entries
interval - optional - specifies the candidate-RP advertisement interval, in seconds; the range is
from 1 to 16383 with a default value of 60 seconds
priority - optional - specifies the priority for the candidate-RP; the range is from 0 to 255; with a
default priority value of 0; the BSR candidate-RP with the lowest priority value is preferred; be
aware that other vendor implementations of BSR might default priority to 192 as this is the
recommended default priority by the IETF
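
As an illustrative sketch only (the access-list number and group range here are hypothetical, not drawn from the chapter topology), a candidate-RP advertising itself for the administratively scoped 239.0.0.0/8 range every 30 seconds could be configured as follows:

R5(config)#access-list 10 permit 239.0.0.0 0.255.255.255
R5(config)#ip pim rp-candidate Loopback0 group-list 10 interval 30

Because the group-list cannot contain DENY entries, the access-list should simply permit the ranges that the C-RP intends to serve.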


As stated earlier, the Bootstrap Router itself in the topology is similar to the Mapping Agent in AutoRP
with some subtle differences. Like the AutoRP Mapping Agent, the BSR listens to the candidate-RP
announcements, but the BSR does not actually select the best RP for every group range. Instead, the BSR
builds a set of candidate-RPs for each group range and disseminates this information to the topology
using PIM. Multicast routers that receive these BSR messages select the preferred candidate-RP using a
special hash function.
To configure the BSR router itself, use the following command (an example follows the parameter list):
ip pim [vrf vrf-name] bsr-candidate interface-type interface-number [hash-mask-length [priority] ]
Where:

vrf - configures the router to advertise itself as the Bootstrap Router for the Virtual Routing and
Forwarding (VRF) instance specified for the vrf-name argument
interface-type interface-number - the interface bound to the IP address to serve as the BSR
device IP address; for availability purposes, consider the use of a loopback interface; this
interface needs to be PIM enabled and the IP address is sent in BSR messages as the BSR IP
address
hash-mask-length - optional - the length of the mask to be ANDed with the group address
before the PIMv2 hash function; all groups with the same seed hash correspond to the same RP;
the hash mask length allows one RP to be used for multiple groups; the default length is 0
priority - priority of the candidate-BSR; the range is from 0 to 255 with a default priority of 0;
the candidate-BSR with the highest priority value is preferred; RFC 5059 specifies that 64 be
used as the default priority value
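
For illustration (the hash mask length and priority values below are hypothetical), R2 could advertise itself as a candidate-BSR using its Loopback0 address as follows:

R2(config)#ip pim bsr-candidate Loopback0 30 10

Because the candidate-BSR with the highest priority value is preferred, a router configured with priority 10 would win the BSR election over a router left at the Cisco default of 0.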

It is important to remember that in BSR, the multicast routers determine the RP to use based on RP-set
information received from the BSR itself. The RP selection process for a particular multicast group is as
follows:
Step 1 - a longest match lookup is performed on the group prefix that is announced by the BSR
candidate-RPs
Step 2 - if more than one candidate-RP is found by the longest match lookup, the candidate-RP with the
lowest priority (configured with the ip pim rp-candidate command) is preferred
Step 3 - if more than one candidate-RP has the same priority, the BSR hash function is used to select
the RP for a group; this hash function is covered in detail in the Operation and Troubleshooting BSR
section of this chapter


Step 4 - if more than one candidate-RP returns the same hash value derived from the BSR hash function,
the candidate-RP with the highest IP address is preferred
Note: RFC 2362 does not specify the longest match lookup step; to ensure compatibility with this
standard, configure the same group prefix length for redundant candidate-RPs.
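
To make Steps 1 and 2 concrete, consider a hypothetical RP-set (the addresses and prefixes are purely illustrative): 192.1.5.5 announces 224.0.0.0/4 with priority 0, while 192.1.7.7 announces 239.0.0.0/8 with priority 10. For the group 239.1.1.1, the longest match lookup in Step 1 selects 192.1.7.7 despite its numerically worse priority, because priority is only compared among the candidate-RPs that survive the longest match lookup.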


The Operation and Troubleshooting of BSR


To better understand how to troubleshoot BSR, we will divide its basic operation into three distinct
stages: BSR election/announcements, candidate-Rendezvous Point announcements, and the
propagation of group-to-RP mappings. Once each phase has been outlined and defined, we will see how
its operation could be negatively impacted by environmental variables found in the multicasting, IP
routing, and switching domains.
BSR Election/Announcements
In this stage of the BSR operation, a Bootstrap Router is either assigned or elected on the basis of
its configured priority. This works as a two-stage process whereby the identity of the BSR is discovered.
During this stage, every router that has been configured to work as a Bootstrap Router will begin to
flood "bootstrap messages" while simultaneously listening for "bootstrap messages" from other
candidate-BSRs in the domain. Once a BSR learns of another BSR with a higher priority, it will
immediately relinquish its role as BSR. This demonstrates that the BSR election process is preemptive in
nature and is designed to provide alternate availability during equipment or process failures. It must be
observed that this process ultimately, if configured properly, will result in the election of a single BSR.
Other devices may exist in the multicast domain that can assume the role of the BSR, but they will
always remain in a standby state until the existing BSR goes down or the priority settings are changed.
After the BSR is elected, it will begin attempting to discover the identity of any existing candidate-RPs (C-
RPs). The BSR will also actively listen for messages coming from these C-RPs as they are discovered.
In an effort to find the C-RPs in the domain, the BSR will first begin to inform the other devices in the
topology of its existence. BSR accomplishes this using PIM-SM version 2 protocol messages. These
messages are flooded on a hop-by-hop basis between all devices in the multicast domain. Figure 9-2
illustrates this process.

Figure 9-2: The BSR Announcing its Presence


Clearly, BSR can only be employed between PIM version 2 enabled devices. It is important to note that,
because of this, BSR is not compatible with PIM-SM version 1. The primary difference between PIM-SM
versions 1 and 2 is that in version 2, messages are no longer encapsulated inside IGMP. PIM version 2
messages are encapsulated in IP packets with a protocol number of 103. Another significant difference is
that PIM-SM version 2 messages are propagated throughout the domain via the link-local multicast
group 224.0.0.13 (ALL-PIM-ROUTERS). As described in the BSR Technology Review section, this means
that BSR does not require any legacy dense mode functionality to announce its presence throughout the
multicast domain as is the case with AutoRP. Fortunately, these distinctions reduce the level of difficulty
associated with troubleshooting all phases of the BSR operational process. Placement of the BSR is no
longer as sensitive an issue as with its AutoRP counterpart, the Mapping Agent (MA). Furthermore, the
use of a link-local multicast address for hop-by-hop flooding of BSR announcements creates fewer
overall issues compared to AutoRP in general.
The fact that BSR uses a multicast address to communicate its identity and presence to the multicast
domain means that this phase of BSR is subjected to Reverse Path Forwarding (RPF) checks. Specifically,
any messages destined for PIM speakers in the domain will need to pass the RPF check toward the IP
address of the BSR from the C-RP. The multicast process drops messages that fail this check.
The fastest method to determine that BSR announcements are being propagated successfully is to
execute the show ip pim bsr-router command on each of the respective candidate-RPs. In a working
environment, like that shown in Figure 9-3, we would expect to see output similar to the following for
this command:

Figure 9-3: A Sample Working BSR Topology


R5#show ip pim bsr-router


PIMv2 Bootstrap information
BSR address: 192.1.2.2 (?)
Uptime: 00:00:21, BSR Priority: 0, Hash mask length: 0
Expires: 00:02:00
Candidate RP: 192.1.5.5(Loopback0)
Holdtime 150 seconds
Advertisement interval 60 seconds
Next advertisement in 00:00:50
Notice the identity of the Bootstrap Router is 192.1.2.2, which is the Loopback0 interface of R2. We have
two C-RPs in this topology so we will repeat this show command on the second C-RP:
R7#show ip pim bsr-router
PIMv2 Bootstrap information
BSR address: 192.1.2.2 (?)
Uptime: 00:12:40, BSR Priority: 0, Hash mask length: 0
Expires: 00:01:59
Candidate RP: 192.1.7.7(Loopback0)
Holdtime 150 seconds
Advertisement interval 60 seconds
Next advertisement in 00:00:49
The second C-RP knows the identity of the BSR as well. Clearly, the BSR announcements have
successfully propagated to all C-RPs in the topology.
Note: Notice the (?) entry next to the BSR address. This is not a point for concern. This simply means
that the C-RP cannot resolve the IP address to a hostname.
Earlier it was described how the elected BSR begins to actively listen for messages coming from the C-
RPs. It is clear now that the C-RPs know where to send their messages thanks to the propagation of the
BSR information. Notice also in the output of the show commands used above that there is candidate-
RP information on both R5 and R7 that needs to be communicated. Also note the holdtime and
advertisement intervals for each.
Before the next stage of verification, however, execute the same show command on a device
participating in the multicast domain that is not a C-RP. For example, R4:
R4#sh ip pim bsr-router
PIMv2 Bootstrap information
BSR address: 192.1.2.2 (?)
Uptime: 00:41:02, BSR Priority: 0, Hash mask length: 0
Expires: 00:01:18


Note that R4 knows the identity of the BSR, but has no candidate-RP information to communicate. This
is a normal condition for this stage of the BSR operation. It is time to take a closer look at the second
stage of BSR.
Candidate-Rendezvous Point Announcement
In the previous section, the election of the BSR was observed, and the process of PIM-SM version 2
announcements were monitored. This resulted in the C-RPs and other PIM devices participating in the
multicast domain learning the identity of the BSR. This section focuses on how the BSR dynamically
learns the identity of all devices configured as RP candidates.
After a C-RP discovers the BSR, it will immediately begin to send periodic C-RP advertisements directly to
the BSR every 60 seconds via unicast. This is the default advertisement interval and can be changed. The
default holdtime is 150 seconds. In addition to notifying the BSR of its identity, the RP candidates also
communicate what group-to-RP mappings they possess. Figure 9-4 illustrates this unicast process.

Figure 9-4: The C-RP Announcements

Let us examine the BSR to see if it is receiving the information unicast by the individual C-RPs. Use the
show ip pim rp mapping command on the BSR itself to accomplish this:


R2#show ip pim rp mapping


PIM Group-to-RP Mappings
This system is the Bootstrap Router (v2)
Group(s) 224.0.0.0/4
RP 192.1.7.7 (?), v2
Info source: 172.16.67.7 (?), via bootstrap, priority 0, holdtime 150
Uptime: 00:57:49, expires: 00:01:36
RP 192.1.5.5 (?), v2
Info source: 172.16.45.5 (?), via bootstrap, priority 0, holdtime 150
Uptime: 00:35:57, expires: 00:01:29

Note the BSR has learned the identity of both candidate-RP devices.
The fact that C-RP advertisements are sent via unicast means that RPF checks are unnecessary. This
makes this phase of the BSR operational mechanism very streamlined and easy to troubleshoot. Should
any information from any RP candidate fail to appear in the BSR's group-to-RP mappings table, it is most
likely an IP routing issue. This can quickly be identified by pinging the IP address of the C-RP in question
from the BSR. To ensure the proper testing of bidirectional unicast reachability, always source this ping
from the BSR's IP address. For example:
R2#ping 192.1.7.7 source loopback 0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.1.7.7, timeout is 2 seconds:
Packet sent with a source address of 192.1.2.2
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms

In the third and final stage of the BSR operational mechanism, the BSR begins to flood group-to-RP
mapping information to other devices in the multicast domain.
Propagation of Group-to-RP Mappings
In this phase of the BSR operation, the BSR floods all of the group-to-RP C-RP advertisements in its PIM
group-to-RP mappings table throughout the multicast domain. In this stage, the information that the
BSR communicates is known as the "candidate RP-set" or just "RP-set" for short.
This propagation of mappings phase uses the same methodology that the BSR employed to
communicate its presence to the RP candidates in the earlier step. This means that if the initial phase of
the BSR announcement process operated without complication, then it is highly likely that this stage will


perform likewise. Figure 9-5 illustrates the hop-by-hop flooding process employed to propagate the
RP-set information.

Figure 9-5: The C-RP RP-Set Announcements



Remember, this process uses the PIM-SM version 2 multicast address of 224.0.0.13 to communicate
with the PIM enabled devices in the network on a hop-by-hop basis. This process, because it employs
multicast, is susceptible to the RPF check.
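
Because of this, a quick sanity check on any PIM router in the domain is to verify the RPF result toward the BSR address itself; with the BSR at 192.1.2.2, a check along these lines should return a valid RPF interface and neighbor (if it does not, RP-set messages arriving on that router will be dropped):

R4#show ip rpf 192.1.2.2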
The most efficient method employed to test whether or not the BSR has successfully communicated the
RP-set information to each of the devices in the topology is to execute the show ip pim rp mapping
command on each device participating in the multicast domain. This should produce identical output on
all devices similar to the following:
R4#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 192.1.7.7 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime 150
Uptime: 01:41:24, expires: 00:01:44
RP 192.1.5.5 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime 150
Uptime: 01:19:31, expires: 00:01:43

The critical component here is that all devices should have the same group-to-RP mappings. Note that
both RP candidates are mapped to provide RP services to the entire multicast group range of
224.0.0.0/4.


Which C-RP will assume the role of RP if a multicast source is introduced to the topology? Emulate a
multicast source destined to the group address of 224.9.9.9 on the FastEthernet interface of R1 and
examine where the source-based tree terminates:
R1#ping 224.9.9.9 repeat 100000

This ping will not be successful because there are no multicast receivers for this group in the topology,
but it provides a way to verify what C-RP will assume the role of RP for the group 224.9.9.9. This is
proven using the show ip pim rp command on both R5 and R7:
R5#show ip pim rp
Group: 224.9.9.9, RP: 192.1.7.7, v2, uptime 01:34:58, expires 00:01:30
R7#show ip pim rp
Group: 224.9.9.9, RP: 192.1.7.7, v2, next RP-reachable in 00:00:16

The output indicates that the RP is R7 (192.1.7.7). The BSR process selects R7 as the RP because the
assigned priority for each of the candidates is the same. In this topology, the Cisco default priority of 0
is used. In instances where the priority is a tie, the determining factor for RP selection is the highest IP
address as described in the BSR Technology Review section.
Once again, it is critical to note that this process of RP selection is not performed by the BSR. Unlike the
mapping agent in AutoRP, the Bootstrap Router communicates the entire RP-set to the devices in the
multicast domain. The individual devices accept the RP-set and then make logical decisions as to which
C-RP they select. Network administrators can manipulate or influence this election process through the
use of hashes, priorities, filters, and message constraints.
Load Balancing Between Candidate-RPs
It is possible to distribute RP functionality between multiple C-RPs when configuring BSR. This load
balancing is not always a perfectly balanced approach between individual RPs. However, using factors
like the total number of available C-RPs and an optional value known as the hash mask length can
provide an approximation of load balancing. Assigning a value to the hash mask length greater than the
default of "0" directly affects the normal RP selection algorithm that runs on all PIM-SM version 2
enabled devices. Hash mask length is communicated to all devices inside the BSR announcements.
Generally, the longer the hash length, the more evenly the BSR process will try to assign groups to
individual RPs in the candidate RP-Set. The assumptions are that the same hash mask length is
communicated to each PIM-SM version 2 device by the BSR, and that each of those devices has the
same candidate RP-set. As a result, each PIM device runs the same algorithm and makes the same RP
selection for each multicast group.

Copyright by IPexpert, Inc. All Rights Reserved.

9-12

IPv4/6 Multicast Operation and Troubleshooting

Chapter 9: Bootstrap Router (BSR) Protocol

There are three values used by this mathematical function in BSR: the candidate-RP address, a multicast
group address, and the hash mask length. These values are all hashed together on a group-by-group
basis in order to approximate the load balancing.
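
For reference, the hash function defined in RFC 2362 combines these values as follows, where G is the multicast group address, M is the hash mask, and C is the candidate-RP address (all treated as 32-bit values):

Value(G, M, C) = (1103515245 * ((1103515245 * (G & M) + 12345) XOR C) + 12345) mod 2^31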
This process can be summarized as follows:
1 - A hashing algorithm is run for each multicast group address against each C-RP in the RP-set, and a
hash value is obtained.
2 - The C-RP with the highest calculated hash value becomes the RP for that particular multicast group.
3 - There is the possibility that the hashing algorithm will result in equal hash values for different C-RPs.
In this case, the C-RP with the highest IP address will become the RP for that group.
With all this taken into account, the outcome should be predictable:
2^(32 - hash mask length) = # of RPs that will be used in load balancing*
* This assumes that there are enough C-RPs in the RP-set to allow an even distribution.
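
As a quick sanity check against the example that follows: with a hash mask length of 31, 2^(32 - 31) = 2, which matches the two candidate-RPs available for load balancing in Figure 9-6.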
Figure 9-6 shows a sample topology for load balancing.

Figure 9-6: A Sample C-RP Load Balancing Topology


In this example, there are two candidate-RPs. In order to achieve the most even distribution between
these devices, apply a hash mask length of 31 to the ip pim bsr-candidate command on R2:
R2(config)#ip pim bsr-candidate Loopback0 31
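Note that a hash mask length of 31 corresponds to the hash mask 255.255.255.254 seen in the show ip pim rp-hash output below.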
Now verify what RP each device in the topology will use for any given group through the use of the show
ip pim rp-hash command. Here is such a test on R4 using the multicast group addresses 224.1.1.1,
224.1.1.2, and 224.1.1.3:


R4#show ip pim rp-hash 224.1.1.1


RP 192.1.7.7 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime 150
Uptime: 00:18:30, expires: 00:02:06
PIMv2 Hash Value (mask 255.255.255.254)
RP 192.1.7.7, via bootstrap, priority 0, hash value 1483128991
RP 192.1.5.5, via bootstrap, priority 0, hash value 1211976133

Note the hash value of R7 = 1,483,128,991. This is greater than R5's value of 1,211,976,133, so R7 is
selected as the RP for the group 224.1.1.1. Now for the group 224.1.1.2:
R4#show ip pim rp-hash 224.1.1.2
RP 192.1.5.5 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime 150
Uptime: 00:51:56, expires: 00:02:16
PIMv2 Hash Value (mask 255.255.255.254)
RP 192.1.7.7, via bootstrap, priority 0, hash value 423840189
RP 192.1.5.5, via bootstrap, priority 0, hash value 694993047
The hashing process selects R5 as the RP because of its higher hash value. Now for 224.1.1.3:
R4#show ip pim rp-hash 224.1.1.3
RP 192.1.5.5 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:52:00, expires: 00:02:12
PIMv2 Hash Value (mask 255.255.255.254)
RP 192.1.7.7, via bootstrap, priority 0, hash value 423840189
RP 192.1.5.5, via bootstrap, priority 0, hash value 694993047
Notice the hashing process selects R5 again. One might logically expect this group to fall to R7, but
this is not the case. The algorithm tries to distribute the load evenly between the two available
candidate-RPs, but it does so via a deterministic yet effectively random-looking hash, so consecutive
group addresses do not necessarily alternate between RPs.
The Final Step
As one last step to fully demonstrate that the BSR configuration and multicast domain are working
properly, R9 joins the multicast group 224.9.9.9. Verify that a ping test from R1 is successful.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end


This results in successful pings on R1:


R1#ping 224.9.9.9 repeat 100000
Type escape sequence to abort.
Sending 100000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:

Reply to request 0 from 172.16.79.9, 4 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms


Common Issues with BSR


While not as problematic as AutoRP, a number of issues can surface with the Bootstrap Router
protocol. The most common problems relate to the exchange of essential control plane
information. BSR's control plane establishment has far more components than its data plane process,
but compared to its AutoRP counterpart, BSR is much easier to troubleshoot.
For simplicity in troubleshooting common issues while deploying BSR, we identify three categories of
problems: Reverse Path Forwarding (RPF) failures, unicast routing issues, and multicast routing
problems.
RPF Failures
In the Troubleshooting BSR section, this text discussed which phases of the BSR operational mechanisms
were subject to Reverse Path Forwarding (RPF) checks. Recall that of the three phases, only the BSR
election/announcement phase and the propagation of group-to-RP mappings phase are subject to the
RPF process. Logically then, RPF issues can prevent a candidate-BSR from learning about other
candidate-BSRs. Additionally, this problem can prevent an elected BSR from successfully communicating
the candidate RP-set to any, some, or all of the other PIM enabled devices in the multicast domain.
The following list of issues has a relatively high probability of occurring thanks to RPF failures.
Remember that these RPF checks are performed against the IP address of the BSR itself. Be aware that
anytime not all interfaces in a network are running PIM, these issues may arise.

Candidate-BSRs do not agree on the identity of the BSR for the multicast domain.
All or some of the PIM-SM version 2 enabled devices in the multicast domain do not receive any
candidate RP-set information from the elected BSR.

We will perform a walkthrough for each of these RPF issues in the BSR Sample Troubleshooting
Scenarios section that follows.
Unicast Routing and Forwarding Problems
From earlier portions of this chapter, it is clear that the ability of the RP candidates to communicate
their candidate group-to-RP mapping information directly to the BSR is dependent on their ability to
unicast to the advertised IP address of the BSR. Of course, since this reachability is unicast, it is not
subject to RPF checks. As a result, a common issue is:

An elected BSR fails to learn candidate group-to-RP mappings from some or all of the C-RPs in
the topology, and the IP addresses of those candidate RP(s) are not reachable when ICMP echoes
are sourced from the IP address of the BSR.


This is a situation where it will be necessary to look at the underlying routing protocols used in the
network. Typically, this is caused by a missing or incorrect route (or by asymmetric routing), and it
should be obvious once the routing tables of the source and transit devices are analyzed.
Multicast Routing and Forwarding Problems
These problems manifest themselves in more subtle ways when compared to the previous points. As
discussed earlier, the majority of the BSR operational mechanisms involve the formation of the control
plane so that a device can be assigned as the BSR, and so that C-RP group-to-RP mappings can be
communicated to the BSR. From that point, the candidate RP-set information can be propagated
throughout the multicast domain.
Situations like the following exist when information fails to propagate to any or all devices, but RPF
checks and unicast routing seem to be functioning correctly:

One or more candidate-RP(s) fail to receive any candidate RP-set information from the BSR.
One or more candidate-BSRs fail to participate in the BSR election process, resulting in the
election of more than one BSR.

In the BSR Sample Troubleshooting Scenarios section that follows, troubleshooting these issues is
demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each
symptom, isolate the cause, and remediate the issue.


BSR Sample Troubleshooting Scenarios


This section provides a detailed look at how to best approach troubleshooting some of the common
issues discussed in previous sections. It includes coverage of a methodology for identification, isolation,
and remediation of faults in the BSR operational process. The intent here is to hone troubleshooting
skills tailored to first identifying whether a problem is multicast or unicast related, and then isolating
the cause of the fault in the most efficient manner possible. Figure 9-7 illustrates the
topology used to explore this topic. Note that R4 and R6 operate as C-RPs and R5 and R7 are C-BSRs:

Figure 9-7: A Sample BSR Topology

In the Common Issues with BSR section, three primary types of problems were identified: RPF failures,
unicast routing failures, and multicast forwarding and routing failures. This section explores these three
categories of failure by directing our attention to the commands necessary to identify that a problem
exists. There are three types of devices in this topology: C-RP(s), C-BSR(s), and transit devices
(PIM-enabled routers).
Step One: Which device won the BSR election - R5 or R7?
R5#show ip pim bsr-router
PIMv2 Bootstrap information
This system is the Bootstrap Router (BSR)
BSR address: 192.1.5.5 (?)
Uptime:
00:01:35, BSR Priority: 0, Hash mask length: 0
Next bootstrap message in 00:00:25


R5 believes that it is the PIM version 2 Bootstrap Router. Given that the priority is zero, this seems odd,
because R7 has a higher IP address. Issue the same show command on R7:


R7#show ip pim bsr-router


PIMv2 Bootstrap information
This system is the Bootstrap Router (BSR)
BSR address: 192.1.7.7 (?)
Uptime:
00:13:11, BSR Priority: 0, Hash mask length: 0
Next bootstrap message in 00:00:48


R7 has also elected itself as the Bootstrap Router.
It is not possible for this to happen in a correctly configured BSR environment. This issue seems to
indicate that the two candidate-BSR devices have failed to exchange their BSR announcement messages.
How are those BSR announcement messages exchanged?
As discussed previously, PIM-SM version 2 messages exchange the BSR information, and these Bootstrap
messages are how the C-BSRs discover each other and decide which assumes the role of the BSR.
The link-local multicast group 224.0.0.13 accomplishes this process and is subject to the RPF check
mechanism. There are a number of ways to isolate RPF issues (mstat, mtrace, show ip rpf, debug ip pim
bsr), but mstat and mtrace cannot be used with link-local multicast as they result in a "% bad IP group
address" message.
Eliminating the mstat and mtrace commands leaves either show ip rpf or debug ip pim bsr, or some
combination of both. However, this brings up an issue that should be considered. The BSR
advertisement interval is fixed at 60 seconds and cannot be changed. This means valuable time could be
wasted waiting for results using debug ip pim bsr on all devices in the multicast path. This leaves show
ip rpf as the best option to isolate this issue.
Does R5 have any RPF issues reaching R7's Loopback0 interface?
R5#show ip rpf 192.1.7.7
RPF information for ? (192.1.7.7)
RPF interface: FastEthernet0/1
RPF neighbor: ? (172.16.45.4)
RPF route/mask: 192.1.7.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables


No, it does not. Does R7 have any issues reaching R5's Loopback0 interface?


R7#show ip rpf 192.1.5.5


RPF information for ? (192.1.5.5) failed, no route exists


R7 does in fact have an issue related to RPF failure. How would R7 reach the IP address 192.1.5.5?


R7#show ip route 192.1.5.5
Routing entry for 192.1.5.0/24
Known via "eigrp 100", distance 90, metric 163840, type internal
Redistributing via eigrp 100
Last update from 172.16.67.6 on FastEthernet0/0, 01:59:59 ago
Routing Descriptor Blocks:
* 172.16.67.6, from 172.16.67.6, 01:59:59 ago, via FastEthernet0/0
Route metric is 163840, traffic share count is 1
Total delay is 5400 microseconds, minimum bandwidth is 100000
Kbit
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 4


R7 will use FastEthernet0/0 as the RPF interface. An RPF interface must be configured to participate in
PIM-SM version 2 for BSR messages to be exchanged successfully. Use the show ip pim interface
command to most quickly verify if this is taking place.
R7#show ip pim interface

Address          Interface        Ver/    Nbr     Query   DR      DR
                                  Mode    Count   Intvl   Prior
192.1.7.7        Loopback0        v2/S    0       30      1       192.1.7.7
172.16.79.7      FastEthernet0/1  v2/S    1       30      1       172.16.79.9


FastEthernet0/0 is not in the interface list. To remediate this, enable PIM-SM version 2 on R7's
FastEthernet0/0 interface.


R7(config)#int FastEthernet0/0
R7(config-if)#ip pim sparse-mode


Note the PIM neighborship between R7 and R6 immediately comes up.


R7(config-if)#
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 172.16.67.7 on
interface FastEthernet0/0



Verification is now necessary to determine if one BSR has been elected. Based on the equal priority
values and R7's higher IP address, R7 should be elected as the BSR.


R7#show ip pim bsr-router
PIMv2 Bootstrap information
This system is the Bootstrap Router (BSR)
BSR address: 192.1.7.7 (?)
Uptime:
00:47:31, BSR Priority: 0, Hash mask length: 0
Next bootstrap message in 00:00:29


And R5 should agree:


R5#show ip pim bsr
PIMv2 Bootstrap information
BSR address: 192.1.7.7 (?)
Uptime:
00:03:15, BSR Priority: 0, Hash mask length: 0
Expires:
00:01:54
This system is a candidate BSR
Candidate BSR address: 192.1.5.5, priority: 0, hash mask length: 0


This output indicates that both R5 and R7 agree that R7 (192.1.7.7) is the BSR. Note that R5 maintains its
Candidate-BSR status; it will opt to elect itself BSR should R7 stop functioning. This is part of the normal
Active/Passive failover mechanism employed by BSR.

Having corrected the issue related to the actual election of the BSR, the next step is to determine
whether or not the BSR is learning each of the C-RP RP-sets. This is best accomplished with the show ip
pim rp mapping command on the BSR itself.


R7#show ip pim rp mapping


PIM Group-to-RP Mappings
This system is the Bootstrap Router (v2)
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 172.16.67.6 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:21:10, expires: 00:02:16


The BSR is only learning about R6's RP-Set information for the multicast scope of 224.0.0.0/4. How are
the C-RPs communicating this information to the BSR?

Unicast routing is used to deliver this information from the C-RP, but multicast is used by the BSR to
communicate its presence to the individual C-RPs. Has the BSR successfully communicated its existence
to both C-RPs?
R4#show ip pim bsr-router
PIMv2 Bootstrap information
BSR address: 192.1.7.7 (?)
Uptime:
00:03:21, BSR Priority: 0, Hash mask length: 0
Expires:
00:01:48
Candidate RP: 192.1.4.4(Loopback0)
Holdtime 150 seconds
Advertisement interval 60 seconds
Next advertisement in 00:00:35
R6#show ip pim bsr-router
PIMv2 Bootstrap information
BSR address: 192.1.7.7 (?)
Uptime:
00:54:57, BSR Priority: 0, Hash mask length: 0
Expires:
00:01:12
Candidate RP: 192.1.6.6(Loopback0)
Holdtime 150 seconds
Advertisement interval 60 seconds
Next advertisement in 00:00:22


R4 and R6 know that R7 is the BSR. These C-RPs now unicast their RP-Set information to the BSR for
dissemination throughout the multicast domain. Knowing that this is a unicast problem, the most
effective tool now is ping. Remember, the unicast of the RP-Set information will be sourced and
destined to specific IP addresses, and the easiest method of testing reachability is to verify from the BSR.
Specifically, pings should be sourced from the IP address of the BSR to the IP address of each C-RP.


R7#ping 192.1.6.6 source 192.1.7.7
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.1.6.6, timeout is 2 seconds:
Packet sent with a source address of 192.1.7.7
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms


Based on the fact that the BSR learned the RP-set information for R6, it should come as no surprise that
unicast reachability exists. R4, however, is the C-RP in question:


R7#ping 192.1.4.4 source 192.1.7.7
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.1.4.4, timeout is 2 seconds:
Packet sent with a source address of 192.1.7.7
.....
Success rate is 0 percent (0/5)


This output is proof of a unicast routing problem between R7 and R4. Several options exist, including
pinging almost every interface between the two devices, but the best course of action in this scenario
is to utilize the traceroute command from the BSR using the same source and destination used in the
ping test:


R7#traceroute 192.1.4.4 source 192.1.7.7


Type escape sequence to abort.
Tracing the route to 192.1.4.4
1 172.16.67.6 0 msec 0 msec 0 msec
2 172.16.26.2 4 msec 0 msec 0 msec
3 * * *
<output omitted>


This output clearly illustrates that the unicast issue exists on the router immediately after R2.
According to the topology, this is R4 itself.

Go to R4 and verify the contents of the routing table. Specifically, the IP address of interest is the
Loopback0 interface of R7 (192.1.7.7):


R4#show ip route 192.1.7.7
Routing entry for 192.1.7.7/32
Known via "static", distance 1, metric 0 (connected)
Routing Descriptor Blocks:
* directly connected, via Null0
Route metric is 0, traffic share count is 1


Based on this output, any traffic destined to R7's Loopback0 interface is immediately forwarded to the
Null0 interface via the static route configured. To remediate this problem, the best course of action is to
remove this static route, and then check if R7 begins to learn RP-sets from both R4 and R6.
R4(config)#no ip route 192.1.7.7 255.255.255.255 null 0
Verification on R7 should show that both R4 and R6 are now sending their respective RP-Set information
to the BSR:


R7#show ip pim rp mapping


PIM Group-to-RP Mappings
This system is the Bootstrap Router (v2)
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 172.16.67.6 (?), via bootstrap, priority 0, holdtime
150
Uptime: 01:59:04, expires: 00:02:20
RP 192.1.4.4 (?), v2
Info source: 172.16.24.4 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:00:27, expires: 00:01:58


R6 (192.1.6.6) and R4 (192.1.4.4) have actually succeeded in communicating their RP-sets to the BSR.
Now that the BSR has learned each of these sets, the BSR will communicate this information to all PIM-
SM version 2 enabled devices in the multicast domain. This is observed by issuing the show ip pim rp
mapping command on each device:

R1#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
  RP 192.1.6.6 (?), v2
    Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 00:12:33, expires: 00:01:53
  RP 192.1.4.4 (?), v2
    Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 01:11:37, expires: 00:01:55


R2#show ip pim rp mapping

PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
  RP 192.1.6.6 (?), v2
    Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 02:54:14, expires: 00:01:56
  RP 192.1.4.4 (?), v2
    Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 00:11:37, expires: 00:01:56

R4#show ip pim rp mapping

PIM Group-to-RP Mappings
This system is a candidate RP (v2)
Group(s) 224.0.0.0/4
  RP 192.1.6.6 (?), v2
    Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 00:12:33, expires: 00:01:54
  RP 192.1.4.4 (?), v2
    Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 01:11:37, expires: 00:01:53


R5#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
  RP 192.1.6.6 (?), v2
    Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 00:12:33, expires: 00:01:55
  RP 192.1.4.4 (?), v2
    Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 01:11:37, expires: 00:01:56


R6#show ip pim rp mapping

PIM Group-to-RP Mappings
This system is a candidate RP (v2)
Group(s) 224.0.0.0/4
  RP 192.1.6.6 (?), v2
    Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 02:54:14, expires: 00:01:53
  RP 192.1.4.4 (?), v2
    Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 00:11:37, expires: 00:01:56


R7#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is the Bootstrap Router (v2)
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 172.16.67.6 (?), via bootstrap, priority 0, holdtime
150
Uptime: 02:10:14, expires: 00:02:14
RP 192.1.4.4 (?), v2
Info source: 172.16.24.4 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:11:37, expires: 00:01:49


R9#show ip pim rp mapping
PIM Group-to-RP Mappings


R9 has not received any RP-Set information from the BSR. How is this information being communicated?
Recall that BSR announcements are sent via multicast. Multicast traffic is susceptible to RPF checks.
Failure of the multicast traffic to pass the RPF check can be verified via the show ip rpf command on R9.
This test should be done toward the IP address of the BSR.


R9#show ip rpf 192.1.7.7


RPF information for ? (192.1.7.7)
RPF interface: FastEthernet0/1
RPF neighbor: ? (172.16.79.7)
RPF route/mask: 192.1.7.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables


This output indicates that there are no RPF issues. This begs the question, "If there are no RPF failures,
what else can cause problems with multicast traffic?" The answer - issues related to the forwarding,
routing, and filtering of multicast traffic.
The debug ip pim bsr command is the best tool for troubleshooting multicast forwarding issues on a
single device and how they can specifically affect BSR messages:
R9#debug ip pim bsr
PIM-BSR debugging is on
R9#
PIM-BSR(0): bootstrap dropped


In this particular instance, the output of the debug command states specifically that the bootstrap
packets are being dropped. This message will appear every 60 seconds, as new BSR announcements
arrive from R7. Why are the packets being dropped?
Careful observation will show that under the FastEthernet0/1 interface of R9 someone has configured
the ip pim bsr-border command.


R9#show run int f0/1


Building configuration...
Current configuration : 165 bytes
!
interface FastEthernet0/1
ip address 172.16.79.9 255.255.255.0
ip pim bsr-border
ip pim sparse-mode
ip igmp join-group 224.9.9.9
duplex auto
speed auto
end
When this command is configured on an interface, no PIM-SM version 2 BSR messages will be sent or
received through the interface. Removal of this command will allow R9 to receive the RP-set
information.

R9(config)#interface fastethernet0/1
R9(config-if)#no ip pim bsr-border

Once this is accomplished, perform the verification again:

R9#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:00:51, expires: 00:01:37
RP 192.1.4.4 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:00:51, expires: 00:01:37

The remediation has worked and now all devices have received complete RP-set information from the
information source: 192.1.7.7.


As a final verification, a simulated source generated on R1 bound for the multicast group 224.9.9.9 can
successfully reach R9's FastEthernet0/1 interface:

R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms

BSR show Command Tools


As a quick reference, here are the show command tools utilized in this chapter. This section utilizes the
BSR topology in Figure 9-8 for all example output.

Figure 9-8: A Sample BSR Topology

show COMMAND:
show ip pim [vrf vrf-name] bsr-router
This command displays information about a bootstrap router (BSR)
Where:

vrf - optional; specifies the name of the multicast VRF instance

EXAMPLE OUTPUT:
R1#show ip pim bsr-router
PIMv2 Bootstrap information
BSR address: 192.1.2.2 (?)
Uptime:
00:26:57, BSR Priority: 0, Hash mask length: 0
Expires:
00:01:12


show COMMAND:
show ip pim [vrf vrf-name] rp-hash {group-address | group-name}
This command displays the mappings for the PIM group to the active Rendezvous Point(s).
Where:

vrf - optional; specifies the name of the multicast VRF instance
group-address - the multicast group address

EXAMPLE OUTPUT:
R4#show ip pim rp-hash 224.9.9.9
RP 192.1.7.7 (?), v2
Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime
150
Uptime: 03:32:33, expires: 00:01:27
PIMv2 Hash Value (mask 0.0.0.0)
RP 192.1.7.7, via bootstrap, priority 0, hash value 390961567
RP 192.1.5.5, via bootstrap, priority 0, hash value 119808709
show COMMAND:
show ip pim [vrf vrf-name] rp mapping [rp-address]
This command displays the mappings for the PIM group to the active Rendezvous Point(s).
Where:

vrf - optional; specifies the name of the multicast VRF instance
rp-address - optional; allows the specification of a specific RP IP address in order to filter the
output

EXAMPLE OUTPUT:
R4#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
  RP 192.1.7.7 (?), v2
    Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 03:45:33, expires: 00:01:29
  RP 192.1.5.5 (?), v2
    Info source: 192.1.2.2 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 03:45:47, expires: 00:01:30

show COMMAND:
show ip rpf [vrf vrf-name] {route-distinguisher | source-address [group-address] [rd route-distinguisher]} [metric]
This command displays information that IP multicast routing uses to perform the Reverse Path
Forwarding (RPF) check for a multicast source.
Where:

vrf - optional; specifies the name of the multicast VRF instance


route-distinguisher - Route distinguisher (RD) of a VPNv4 prefix; entering the route-
distinguisher argument displays RPF information related to the specified VPN route
source-address - IP address or name of a multicast source for which to display RPF information
group-address - optional; IP address or name of a multicast group for which to display RPF
information
rd route-distinguisher - optional; displays the Border Gateway Protocol (BGP) RPF next hop for
the VPN route associated with the RD specified for the route-distinguisher argument
metric - optional; displays the unicast routing metric

EXAMPLE OUTPUT:
R5#show ip rpf 192.1.2.2
RPF information for ? (192.1.2.2)
RPF interface: FastEthernet0/1
RPF neighbor: ? (172.16.45.4)
RPF route/mask: 192.1.2.0/24
RPF type: unicast (eigrp 100)
RPF recursion count: 0
Doing distance-preferred lookups across tables


show COMMAND:
show ip pim [vrf vrf-name] neighbor [interface-type interface-number]
This command displays information about Protocol Independent Multicast (PIM) neighbors discovered
by PIM version 1 router query messages or PIM version 2 hello messages.


Where:

vrf - optional; specifies the name of the multicast VRF instance
interface-type - optional; restricts the output to information about PIM neighbors reachable on
the specified interface

EXAMPLE OUTPUT:
R4#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface              Uptime/Expires     Ver   DR
Address                                                           Prio/Mode
172.16.45.5       FastEthernet0/1        01:17:41/00:01:25  v2    1 / DR S
172.16.46.6       Serial0/0/0.1          01:16:39/00:01:19  v2    1 / S


BSR debug Command Tools


As a quick reference, here are the debug command tools utilized in this chapter. This section utilizes the
BSR topology in Figure 9-9 for all example output.

Figure 9-9: A Sample BSR Topology

debug COMMAND:
debug ip mpacket [vrf vrf-name] [detail | fastswitch] [access-list] [group]
This command displays multicast packets that are received and sent on the device.
Where:

vrf - optional; specifies the name of the multicast VRF instance
detail - optional; displays IP header and MAC information
fastswitch - optional; displays IP packet information in the fast path
access-list - optional; restricts the output per the specified access-list

EXAMPLE OUTPUT:
IP(0): s=172.16.24.4 (FastEthernet0/0) d=224.9.9.9 id=7, ttl=254,
prot=1, len=114(100), mroute olist null
IP(0): s=172.16.24.4 (FastEthernet0/0) d=224.9.9.9 id=8, ttl=254,
prot=1, len=114(100), mroute olist null
IP(0): s=172.16.24.4 (FastEthernet0/0) d=224.9.9.9 id=9, ttl=254,
prot=1, len=114(100), mroute olist null


debug COMMAND:
debug ip pim [vrf vrf-name] [bsr]
This command displays PIM protocol activity; the optional bsr keyword restricts the output to bootstrap and candidate-RP message processing.
Where:

vrf optional; specifies the name of the multicast VRF instance

EXAMPLE OUTPUT:
R4#debug ip pim bsr
PIM-BSR debugging is on
R4#
PIM-BSR(0): 192.1.2.2 bootstrap forwarded on FastEthernet0/1
PIM-BSR(0): 192.1.2.2 bootstrap forwarded on Serial0/0/0.1
PIM-BSR(0): bootstrap (192.1.2.2) on non-RPF path Serial0/0/0.1 or from non-RPF neighbor 172.16.24.2 discarded

R2#debug ip pim bsr
PIM-BSR(0): RP-set for 224.0.0.0/4
PIM-BSR(0):   RP(1) 192.1.7.7, holdtime 150 sec priority 0
PIM-BSR(0):   RP(2) 192.1.5.5, holdtime 150 sec priority 0
PIM-BSR(0): Bootstrap message for 192.1.2.2 originated
R2#
PIM-BSR(0): RP 192.1.5.5, 1 Group Prefixes, Priority 0, Holdtime 150
R2#
PIM-BSR(0): RP 192.1.7.7, 1 Group Prefixes, Priority 0, Holdtime 150
R2#

R5#debug ip pim bsr
PIM-BSR(0): Build v2 Candidate-RP advertisement for 192.1.5.5 priority 0, holdtime 150
PIM-BSR(0): Candidate RP's group prefix 224.0.0.0/4
PIM-BSR(0): Send Candidate RP Advertisement to 192.1.2.2
R5#


Chapter Challenge: BSR Sample Trouble Tickets


The following section includes three sample Trouble Tickets designed to challenge the troubleshooting
skills that have been developed in all previous sections of this chapter. These Trouble Tickets were
designed using the Routing & Switching rental racks at www.ProctorLabs.com with the initial
configurations provided in the file MCAST-CH9-BSR-TT-INITIAL.txt. Keep in mind these sample Trouble
Tickets were also tested against home practice racks and the most popular router emulators.
The network topology used in this section is shown in Figure 9-10 below:

Figure 9-10: The Chapter Challenge Topology

Trouble Ticket #1
Your supervisor has brought to your attention that the C-BSR routers R2 and R7 do not agree on the
identity of the BSR. You must correct the issue.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that a new C-BSR (R5) that has just been
introduced in the network does not agree with R2 and R7 regarding the identity of the Bootstrap Router.
Correct this issue.
Trouble Ticket #3
Your supervisor has notified you that R1 is not receiving any RP-set information from the BSR. You must
correct this issue.


Chapter Challenge: BSR Sample Trouble Tickets Solutions


The following section includes the solutions to the three Trouble Tickets presented in the previous
section. Figure 9-11 provides a flowchart that outlines a "quick fire" approach to isolating and
remediating issues associated with BSR.


Figure 9-11: BSR Quick Fire Troubleshooting Flowchart


Trouble Ticket #1 Solution
Your supervisor has brought to your attention that the C-BSR routers R2 and R7 do not agree on the
identity of the BSR. You must correct the issue.


Step 1 - Fault Verification:


R2 and R7 are the C-BSRs that are of interest in this trouble ticket:
R2#show ip pim bsr-router
PIMv2 Bootstrap information
This system is the Bootstrap Router (BSR)
BSR address: 192.1.2.2 (?)
Uptime:
00:59:48, BSR Priority: 200, Hash mask length: 0
Next bootstrap message in 00:00:12
R7#show ip pim bsr
PIMv2 Bootstrap information
This system is the Bootstrap Router (BSR)
BSR address: 192.1.7.7 (?)
Uptime:
03:35:07, BSR Priority: 255, Hash mask length: 0
Next bootstrap message in 00:00:53

These two C-BSRs each think they are the BSR in this topology. This verifies that the problem actually
exists.

Step 2 - Fault Isolation:
The next course of action is to use the mtrace utility to rule out the possibility of an RPF issue. Make
certain to perform this process in both directions, first from R2 toward R7, then from R7 toward R2.

R2#mtrace 192.1.2.2 192.1.7.7
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 192.1.7.7 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.7.7
-1 172.16.67.7 PIM [192.1.2.0/24]
-2 172.16.67.6 PIM [192.1.2.0/24]
-3 172.16.26.2 PIM [192.1.2.0/24]
-4 192.1.2.2

There are no problems in the path from R2 to R7. Now reverse the testing:



R7#mtrace 192.1.7.7 192.1.2.2


Type escape sequence to abort.
Mtrace from 192.1.7.7 to 192.1.2.2 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.2.2
-1 172.16.26.2 PIM [192.1.7.0/24]
-2 172.16.26.6 PIM [192.1.7.0/24]
-3 172.16.67.7 PIM [192.1.7.0/24]
-4 192.1.7.7

This output indicates that there are no Reverse Path Forwarding errors in the path between the C-BSRs.
With this confirmed, the next step in the process is to utilize debug ip pim bsr on all candidate-BSRs and
the devices in the path between them.

R2#debug ip pim bsr
PIM-BSR(0): Bootstrap message for 192.1.2.2 originated
R6#debug ip pim bsr
PIM-BSR(0): 192.1.2.2 bootstrap forwarded on FastEthernet0/0
R7#debug ip pim bsr
PIM-BSR(0): bootstrap dropped
The verification clearly demonstrates that R2 generates a Bootstrap message. R6 forwards that
Bootstrap message, and R7 drops it. This means that either there is a PIM neighborship issue or a
filter/border/boundary command on R7. The FastEthernet0/0 interface of R7 is the only interface
capable of receiving any BSR messages from R2 (192.1.2.2). The quickest method to verify this is to
execute the show run interface FastEthernet0/0 command on R7:


R7#show run interface FastEthernet0/0


Building configuration...
Current configuration : 135 bytes
!
interface FastEthernet0/0
ip address 172.16.67.7 255.255.255.0
ip pim bsr-border
ip pim sparse-mode
duplex auto
speed auto
end
The ip pim bsr-border command under the interface stops the BSR messages as they arrive at or exit R7.
This has unquestionably isolated our fault.

Step 3 - Fault Remediation:
In this scenario, the ip pim bsr-border command needs to be removed.

R7(config)#interface FastEthernet0/0
R7(config-if)#no ip pim bsr-border

Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method as the initial fault verification.

R2#show ip pim bsr-router
PIMv2 Bootstrap information
BSR address: 192.1.7.7 (?)
Uptime:
00:01:51, BSR Priority: 255, Hash mask length: 0
Expires:
00:01:18
This system is a candidate BSR
Candidate BSR address: 192.1.2.2, priority: 200, hash mask length: 0



R7#show ip pim bsr-router


PIMv2 Bootstrap information
This system is the Bootstrap Router (BSR)
BSR address: 192.1.7.7 (?)
Uptime:
04:14:14, BSR Priority: 255, Hash mask length: 0
Next bootstrap message in 00:00:46

Both C-BSRs agree that R7 is the BSR (based on its priority of 255), and R2 continues to announce
itself as a C-BSR should R7 fail.
Trouble Ticket #2 Solution
After solving Trouble Ticket #1, your supervisor has observed that a new C-BSR (R5) that has just been
introduced in the network does not agree with R2 and R7 regarding the identity of the Bootstrap Router.
Correct this issue.
Step 1 - Fault Verification:
R2, R7, and the newly introduced R5 are the C-BSRs of interest in this trouble ticket:
R2#show ip pim bsr-router
PIMv2 Bootstrap information
BSR address: 192.1.7.7 (?)
Uptime:
00:01:51, BSR Priority: 255, Hash mask length: 0
Expires:
00:01:18
This system is a candidate BSR
Candidate BSR address: 192.1.2.2, priority: 200, hash mask length: 0

R7#show ip pim bsr-router
PIMv2 Bootstrap information
This system is the Bootstrap Router (BSR)
BSR address: 192.1.7.7 (?)
Uptime:
04:14:14, BSR Priority: 255, Hash mask length: 0
Next bootstrap message in 00:00:46

R5#show ip pim bsr-router
PIMv2 Bootstrap information
This system is the Bootstrap Router (BSR)
BSR address: 192.1.5.5 (?)
Uptime:
04:15:26, BSR Priority: 255, Hash mask length: 0
Next bootstrap message in 00:00:33


R2 and R7 agree that R7 is the BSR, but R5 is reporting itself as the BSR in the topology. This verifies that
the problem actually exists.

Step 2 - Fault Isolation:
In order to verify that RPF issues are not at fault, use the mtrace utility. Perform this check along
both legs of the path, first from R2 toward R7, and then from R5 toward R2.

R2#mtrace 192.1.2.2 192.1.7.7
Type escape sequence to abort.
Mtrace from 192.1.2.2 to 192.1.7.7 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.7.7
-1 172.16.67.7 PIM [192.1.2.0/24]
-2 172.16.67.6 PIM [192.1.2.0/24]
-3 172.16.26.2 PIM [192.1.2.0/24]
-4 192.1.2.2

R5#mtrace 192.1.5.5 192.1.2.2
Type escape sequence to abort.
Mtrace from 192.1.5.5 to 192.1.2.2 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.2.2
-1 172.16.24.2 PIM [192.1.5.0/24]
-2 172.16.24.4 PIM [192.1.5.0/24]
-3 172.16.45.5 PIM [192.1.5.0/24]
-4 192.1.5.5

Next is the verification of the BSR messaging. Use the debug ip pim bsr command on R2, R4 and R5:

R2#debug ip pim bsr
PIM-BSR debugging is on
R2#
PIM-BSR(0): 192.1.7.7 bootstrap forwarded on Loopback0
PIM-BSR(0): 192.1.7.7 bootstrap forwarded on GigabitEthernet0/0

R2 is forwarding R7's BSR announcements out the Gi0/0 interface toward R4.


R4#debug ip pim bsr


PIM-BSR debugging is on
R4#
PIM-BSR(0): 192.1.7.7 bootstrap forwarded on Serial0/0/0.1
PIM-BSR(0): 192.1.7.7 bootstrap forwarded on Loopback0

We see that R4 is forwarding BSR messages from R7 out Serial0/0/0.1 and on Loopback0, but not out
FastEthernet0/1 toward R5. The next step is to examine PIM neighbor relationships and inspect for
multicast boundaries/filters.

R4#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface              Uptime/Expires     Ver   DR
Address                                                           Prio/Mode
172.16.24.2       FastEthernet0/0        22:34:25/00:01:23  v2    1 / S
172.16.46.6       Serial0/0.1            22:35:35/00:01:17  v2    1 / S
172.16.45.5       FastEthernet0/1        22:35:14/00:01:06  v1    1 / DR S

Looking carefully at this output on R4 demonstrates that PIM version 2 neighbor relationships have
formed across the FastEthernet0/0 and Serial0/0/0.1 interfaces, but a PIM version 1 neighbor
relationship has formed across FastEthernet0/1 toward R5. BSR requires the use of PIM-SM version 2 in
order to operate. This has isolated our fault.

Step 3 - Fault Remediation:
In this scenario, ip pim version 2 needs to be configured between R4 and R5:

R4(config)#int f0/1
R4(config-if)#no ip pim version 1
R5(config)#int f0/1
R5(config-if)#no ip pim version 1

Step 4 - Verification of Remediation


Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially:

R2#show ip pim bsr-router
PIMv2 Bootstrap information
BSR address: 192.1.7.7 (?)
Uptime:
00:36:11, BSR Priority: 255, Hash mask length: 0
Expires:
00:01:58
This system is a candidate BSR
Candidate BSR address: 192.1.2.2, priority: 200, hash mask length: 0

R7#show ip pim bsr-router
PIMv2 Bootstrap information
This system is the Bootstrap Router (BSR)
BSR address: 192.1.7.7 (?)
Uptime:
04:48:37, BSR Priority: 255, Hash mask length: 0
Next bootstrap message in 00:00:23

R5#show ip pim bsr-router
PIMv2 Bootstrap information
BSR address: 192.1.7.7 (?)
Uptime:
00:02:06, BSR Priority: 255, Hash mask length: 0
Expires:
00:02:03
This system is a candidate BSR
Candidate BSR address: 192.1.5.5, priority: 250, hash mask length: 0

All three C-BSRs agree that R7 is the BSR.
Trouble Ticket #3 Solution
Your supervisor has notified you that R1 is not receiving any RP-set information from the BSR. You must
correct this issue.
Step 1 - Fault Verification:
R1 is the router of interest in this trouble ticket:


R1#sh ip pim rp mapping


PIM Group-to-RP Mappings
R1#

R1 is not receiving the C-RP RP-set information from the BSR. This verifies that the problem actually
exists.

Step 2 - Fault Isolation:
To ensure that BSR messages have made it to all PIM devices, use the mtrace utility. Make certain to
perform this process from the C-RPs to the BSR.

R4#mtrace 192.1.4.4 192.1.7.7
Type escape sequence to abort.
Mtrace from 192.1.4.4 to 192.1.7.7 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.7.7
-1 172.16.67.7 PIM [192.1.4.0/24]
-2 172.16.67.6 PIM [192.1.4.0/24]
-3 172.16.26.2 PIM [192.1.4.0/24]
-4 172.16.24.4 PIM [192.1.4.0/24]
-5 192.1.4.4

There are no problems in the path from R4 to R7. Now repeat the test from R6 to R7:

R6#mtrace 192.1.6.6 192.1.7.7
Type escape sequence to abort.
Mtrace from 192.1.6.6 to 192.1.7.7 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.7.7
-1 172.16.67.7 PIM [192.1.6.0/24]
-2 172.16.67.6 PIM [192.1.6.0/24]
-3 192.1.6.6

This indicates that there are no RPF errors. Next, execute the debug ip pim bsr command on R1, R4, R5,
R2, R6 and R7.


R1#debug ip pim bsr


PIM-BSR debugging is on
R1#
R1#

The output on R1 indicates it is not receiving any BSR messages on Fa0/0 from R5. On R5:
R5#debug ip pim bsr
PIM-BSR debugging is on
R5#
PIM-BSR(0): 192.1.7.7 bootstrap forwarded on Loopback0
PIM-BSR(0): 192.1.7.7 bootstrap forwarded on FastEthernet0/0
R5#

R5 is forwarding BSR messages from R7 out FastEthernet0/0 toward R1. The previous output on R1
indicated that no BSR messages are arriving. The next verification is to look for RPF failures on R1.

R1#sh ip rpf 192.1.7.7
RPF information for ? (192.1.7.7) failed, no route exists
R1#
The issue is an RPF failure on R1 toward the BSR. This is best investigated by examining the PIM-SM
neighbors on R1:

R1#sh ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface              Uptime/Expires     Ver   DR
Address                                                           Prio/Mode
R1#

There is no neighbor relationship between R1 and R5. A show run interface FastEthernet0/0 command
will reveal the issue.



R1#sh run interface FastEthernet 0/0


Building configuration...
Current configuration : 96 bytes
!
interface FastEthernet0/0
ip address 172.16.15.1 255.255.255.0
duplex auto
speed auto
end

Looking carefully at this output demonstrates that PIM-SM version 2 is not enabled on R1's
FastEthernet0/0 interface.

Step 3 - Fault Remediation:
In this scenario, ip pim sparse-mode needs to be configured on FastEthernet0/0.

R1(config)#int f0/0
R1(config-if)#ip pim sparse-mode

Step 4 - Verification of Remediation
Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially.

R1#sh ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 192.1.6.6 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 0, holdtime
150
Uptime: 00:00:15, expires: 00:02:13
RP 192.1.4.4 (?), v2
Info source: 192.1.7.7 (?), via bootstrap, priority 255, holdtime
150
Uptime: 00:00:15, expires: 00:02:12

R1 now has the complete C-RP RP-set information as expected.


Chapter 10: Multicast Source Discovery Protocol (MSDP)



In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and functionality of
the Multicast Source Discovery Protocol (MSDP) are examined in great depth. Once the operational
characteristics of this important protocol are detailed completely, the focus becomes that of
troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and
the implementation of repairs for the Multicast Source Discovery Protocol (MSDP). The chapter begins
with a thorough review of MSDP, and then quickly launches into an exhaustive analysis of the art of
troubleshooting this multicast support protocol. This important chapter concludes with sample
troubleshooting scenarios, reference materials for the most important show and debug commands, and
exciting challenges that allow readers to practice implementing the troubleshooting skills they have
obtained.


MSDP Technology Review


It is the job of Multicast Source Discovery Protocol (MSDP) to connect multiple Protocol Independent
Multicast Sparse Mode (PIM-SM) domains. Thanks to MSDP, a rendezvous point (RP) can dynamically
discover active sources outside of its domain. The main advantage of MSDP is that it reduces the
complexity of interconnecting multiple PIM-SM domains by allowing these domains to use an
interdomain source tree as opposed to a common shared tree.
With MSDP, RPs in different domains can exchange information. An RP can join the interdomain source
tree for sources that are sending to groups for which it has receivers. When a last-hop router learns of a
new source outside the PIM-SM domain (through the arrival of a multicast packet from the source down
the shared tree), it then can send a join toward the source and join the interdomain source tree. If the
RP has no shared tree for a particular group or it has a shared tree whose outgoing interface list is null, it
does not send a join to the source in another domain.
With MSDP, an RP in a PIM-SM domain maintains MSDP peering relationships with MSDP-enabled
routers in other domains. This peering relationship occurs over a TCP connection (port 639). MSDP relies
on BGP or multiprotocol BGP (MP-BGP) for interdomain operation.
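As a point of reference, a minimal MSDP peering between two RPs might look like the following sketch. The loopback interfaces and addresses here are illustrative assumptions, not taken from a figure:

! On RP-A (assuming its Loopback0 address is 192.1.5.5)
ip msdp peer 192.1.7.7 connect-source Loopback0
ip msdp originator-id Loopback0
! On RP-B (assuming its Loopback0 address is 192.1.7.7)
ip msdp peer 192.1.5.5 connect-source Loopback0
ip msdp originator-id Loopback0

The TCP session (port 639) is established between the two connect-source addresses, which is why each side must peer with the exact address the other side sources its session from.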
When utilizing Multicast Source Discovery Protocol, a PIM designated router (DR) registering a source
with its RP causes the RP to send a Source-Active (SA) message to all of its MSDP peers. The DR sends the
encapsulated data to the RP only once per source when the source goes active. The SA message
identifies the source address, the group that the source is sending to, and the address of the RP. Each
MSDP peer that receives the SA message floods the message to all of its peers downstream from the
originator. In some cases, an RP may receive a copy of an SA message from more than one MSDP peer.
To prevent looping, the RP consults the BGP next-hop database to determine the next hop toward the
originator of the SA message. That next-hop neighbor is the RPF-peer for the originator. SA messages
that are received from the originator on any interface other than the interface to the RPF peer are
dropped.
When an RP receives an SA message, it checks to see whether there are any members of the advertised
groups in its domain by checking to see whether there are interfaces on the group's (*, G) outgoing
interface list. If there are no group members, the RP does nothing. If there are group members, the RP
sends an (S, G) join toward the source. As a result, a branch of the interdomain source tree is
constructed across autonomous system boundaries to the RP. As multicast packets arrive at the RP, they
are then forwarded down its own shared tree to the group members in the RP's domain. The members'
DRs then have the option of joining the shortest-path tree (SPT) toward the source using standard
PIM-SM procedures.


The originating RP continues to send periodic SA messages for the (S, G) state every 60 seconds for as
long as the source is sending packets to the group. When an RP receives an SA message, it caches the SA
message.
There are four basic MSDP message types, each encoded in its own Type, Length, and Value (TLV)
data format. These messages are:

SA Messages
SA Request Messages
SA Response Messages
Keepalive Messages

SA messages are used to advertise active sources in a domain. They contain the IP address of the
originating RP and one or more (S, G) pairs being advertised, and may also carry, encapsulated within
them, the initial multicast data packet sent by the source.
SA request messages are used to request a list of active sources for a specific group. These messages
are sent to an MSDP peer that maintains a list of active (S, G) pairs in its SA cache. Join latency can be
reduced by using SA request messages to request the list of active sources for a group instead of
waiting up to 60 seconds for all active sources in the group to be readvertised by originating RPs.
SA response messages are sent by the MSDP peer in response to an SA request message. SA response
messages contain the IP address of the originating RP and one or more (S, G) pairs of the active sources
in the originating RP's domain that are stored in the cache.
Keepalive messages are sent every 60 seconds in order to keep the MSDP session active. If no keepalive
messages or SA messages are received for 75 seconds, the MSDP session is reset.


The Operation and Troubleshooting of MSDP


Based on the observations made in the Technology Review section, it is apparent that MSDP can be
considered a corner-case solution that allows all multicast sources for a given group or range of groups
to be communicated between Rendezvous Points in different multicast domains. Because MSDP allows
RPs to exchange the information necessary for multicast packets to transit between domains, it relies
on PIM-SM or PIM-S-DM to operate correctly. This process is accomplished by configuring the RPs
located in the disparate domains to use Transmission Control Protocol (TCP) sessions to discover and
exchange information about multicast sources.
This exchange of sources sending packets to multicast groups happens as a result of the MSDP peering
relationship that takes place via the TCP session described previously. For this process to work properly,
there must be a PIM-enabled path between the RPs, and the underlying routing protocol must contain
enough information to support the creation of the necessary MSDP peering sessions. Once this listing of
active sources is exchanged, the receiving RP uses it to establish a source path to a specific group
between the two multicast domains. This operation and behavior will be demonstrated using the
topology provided in Figure 10-1. Observe that there are two statically assigned RPs, one for each
multicast domain: R1, R4, and R5 use R5 as the RP for domain "A", while R6, R7, and R9 use R7 as the
RP for domain "B".

Figure 10-1: MSDP Lab Topology
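Under these assumptions, the static RP assignment would resemble the following sketch (the RP addresses 192.1.5.5 and 192.1.7.7 are assumed to be the Loopback0 addresses of R5 and R7, following the addressing used elsewhere in this text):

! On R1, R4, and R5 (multicast domain "A")
ip pim rp-address 192.1.5.5
! On R6, R7, and R9 (multicast domain "B")
ip pim rp-address 192.1.7.7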

Source Active Messages


When an RP in a PIM-SM or PIM-S-DM environment using MSDP peering receives a PIM Register
message, that RP creates a Source-Active (SA) message and sends it to all of the device's MSDP peers.
This means that before SA messages can be exchanged, all RPs that could originate or receive SA
messages must peer with all other RPs, either directly or via an intermediate MSDP peer shared with
another device. SA messages employ fields that describe:

The source address of the data source
The group address the data source is sending multicast packets to
The IP address of the originating RP

These SA messages are forwarded away from the RP address using what is referred to as a peer-RPF
flooding process. In this method of flooding, the route toward the originating RP (the BGP next-hop
database described in the Technology Review) determines which peer would be used to reach the
originating RP of a given SA message; this peer is called the MSDP "RPF peer".
MSDP RPF Failure
There can be situations where an MSDP peer receives an SA message from what it considers a non-RPF
MSDP peer, meaning that peer would not be used to reach the originating RP. In this situation, the
receiving peer drops the SA message. If the message arrives from the RPF peer toward the originating
RP, the SA message is forwarded to all of the device's other MSDP peers. Note that the message is not
forwarded back to the peer it arrived from, in order to prevent looping of SA messages.
SA Message Arrives on the RP in the Other Multicast Domain
As a result of the peer-RPF flooding process, an SA message will ultimately arrive at the actual RP in
the other multicast domain. At this time, the RP determines whether there are any group members in
its domain interested in the group defined by the (S, G) pair carried in the SA message.
This determination is made by looking for a (*, G) entry in the multicast routing table with at least one
interface in the outgoing interface list (OIL). An OIL with any value other than "Null" implies that a host
in the domain is interested in the group. In this situation, the RP triggers an (S, G) join message that is
sent toward the multicast source. This emulates the exact mechanism used when a Join/Prune message
addressed to the RP itself is received.
This process is how the source-based tree is created to reach this domain. Any data packets that
subsequently arrive at the RP via this source-based tree will be forwarded down the shared tree inside
the domain toward the receivers. At this point, if any leaf routers choose to join the source-based tree,
they can do so using the standard PIM-SM mechanisms described in previous chapters. With this in
mind, it is useful to note that if an RP in a domain receives a PIM Join message for a new group (G), the
RP should trigger a PIM Join/Prune message for each active (S, G) entry it has learned from its MSDP
peers via SA messages. This process is oftentimes referred to as flood-and-join, a play on the term
flood-and-prune. In flood-and-join, however, if an RP is not interested in a group it can simply ignore
SA messages for that group.

SA Cache
An MSDP speaker caches SA messages, allowing them to be stored locally. This mechanism reduces join
latency for new receivers of a multicast group for which the originating RP already has MSDP (S,G)
state, and it paces the replication of SA messages between MSDP peers. An additional benefit is that
caching makes diagnosing and debugging various problems easier.
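As a brief illustration, the cache can be inspected directly; a minimal sketch (on modern IOS releases SA
caching is enabled by default, and the optional group argument narrows the output):

R5#show ip msdp sa-cache
R5#show ip msdp sa-cache 224.1.1.1

Each cached entry lists the (S,G) pair along with the RP that originated the SA message.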

Common Issues with MSDP


MSDP is a very simple protocol to troubleshoot. It uses a simple operational mechanism to accomplish
its duty: exchanging active source information between RPs in different multicast domains. Even
though the overall process is simple, dividing it into specific phases makes troubleshooting more
straightforward. To simplify troubleshooting common issues while deploying MSDP, we identify three
categories of problems: Incorrect Peering Configuration, No PIM Enabled Path Between MSDP Peers,
and MSDP Passwords and Filters.
Incorrect Peering Configuration
In the Technology Review section, this text discussed the different configuration commands needed to
enable MSDP peering between RPs in different multicast domains. More often than not, there are issues
associated with applying these commands and parameters. Most commonly, configuration or
typographical errors are the cause of failures in this type of multicast deployment. MSDP configuration
is very much like BGP in that it relies on TCP session initiation to operate. This means that it is necessary
to use the connect-source keyword whenever the session should be sourced from an interface other
than the one used to directly reach the MSDP peer, and the address of that source interface must match
the peer address configured on the other MSDP-enabled RP.
Another issue that can affect the formation of MSDP peerings is incomplete routing information, which
can render the IP addresses used during the peering process unreachable.
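For reference, a correct peering for the topology in Figure 10-1 might look like the following sketch,
where each RP sources the TCP session from its loopback and each connect-source address matches the
peer address configured on the other side:

R5(config)#ip msdp peer 192.1.7.7 connect-source Loopback0
R7(config)#ip msdp peer 192.1.5.5 connect-source Loopback0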
No PIM Enabled Path Between MSDP peers
In order for MSDP to work, there must be a PIM-enabled path across all devices between the MSDP-
peered RPs. This necessity is part of the successful operation of the protocol because MSDP does not
provide a replacement for PIM. PIM is still required for the successful transfer of multicast packets, and
it is part of the data plane RPF check mechanism.
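A quick sketch of verifying this requirement, using tools demonstrated later in this chapter; mtrace
walks the reverse path in each direction, and show ip pim neighbor confirms adjacencies on any suspect
transit router:

R5#mtrace 192.1.5.5 192.1.7.7
R5#mtrace 192.1.7.7 192.1.5.5
R2#show ip pim neighbor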
MSDP Passwords and Filters
In order to maintain some level of secure deployment, MSDP is designed to leverage MD5 digests as
part of its authentication process. These digests must match exactly and are easily misconfigured.
Additionally, MSDP employs a number of filter mechanisms that can block communications:

filter-sa-request - Filter SA-Requests from peer
sa-filter - Filter SA messages from peer
sa-limit - Configure SA limit for a peer
ttl-threshold - Configure TTL Threshold for MSDP Peer


Beware of issues where these have been deployed incorrectly or in situations where previous
configurations have not been completely removed.
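For reference, a minimal sketch of how these features are applied (the password and access-list values
here are examples; the sa-filter commands also accept a route-map):

R7(config)#ip msdp password peer 192.1.5.5 CISCO
R7(config)#ip msdp sa-filter in 192.1.5.5 list 100
R7(config)#ip msdp ttl-threshold 192.1.5.5 8

Remember that the MD5 password must match on both peers, and that an empty or overly broad ACL
referenced by sa-filter silently discards SA messages.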

MSDP Sample Troubleshooting Scenarios


This section provides a detailed look at how to best approach troubleshooting some of the common
issues discussed in previous sections. It includes coverage of a methodology for identification, isolation,
and remediation of faults in the MSDP operational process. The intent here is to hone and develop
troubleshooting skills tailored to first identify whether a problem is MSDP related, and then to begin
isolating the cause of the fault in the most efficient manner possible. Figure 10-2 illustrates the topology
used to explore this topic.

Figure 10-2: A Sample MSDP Topology

In the Common Issues with MSDP section, three primary types of problems were identified: Incorrect
Peering Configuration, No PIM Enabled Path Between MSDP Peers, and MSDP Passwords and Filters.
This section explores these three categories of failure by directing our attention to the commands
necessary to identify that a problem exists. There are three types of devices in this topology: Sources
(R1), Hosts (R9), and MSDP-peered static RPs (R5 and R7).
Incorrect Peering Configuration
This situation, in which an MSDP-enabled RP is not peering correctly, is very common.
Setting the Stage:
Generate a multicast ping from R1 for the multicast group 224.1.1.1 with a high repeat count:
R1#ping 224.1.1.1 repeat 10000
Type escape sequence to abort.
Sending 10000, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
................ <output omitted>

Now that R1 is generating the multicast stream, we need to see if R5 is sending an SA message to its
MSDP peer. This is accomplished via show ip msdp peer:
R5#show ip msdp peer 192.1.7.7 advertised-SAs
MSDP SA advertised to peer 192.1.7.7 (?) from mroute table
224.1.1.1
172.16.15.1 (?)
MSDP SA advertised to peer 192.1.7.7 (?) from SA cache

This output indicates that R5 is advertising an SA message for the S,G pair of 224.1.1.1, 172.16.15.1 to
the peer located at 192.1.7.7. What is the status of the TCP session to this MSDP Peer?
R5#show ip msdp peer 192.1.7.7
MSDP Peer 192.1.7.7 (?), AS ?
Connection status:
State: Down, Resets: 0, Connection source: Loopback0 (192.1.5.5)
Uptime(Downtime): 00:04:03, Messages sent/received: 0/0
Output messages discarded: 0
Connection and counters cleared 00:04:03 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled

This output indicates that the status of the connection is Down. We also see that R5 is using the
connection source 192.1.5.5. It would be worthwhile to see the output of the same command on R7:

R7#show ip msdp peer 192.1.5.5
MSDP Peer 192.1.5.5 (?), AS ?
Connection status:
State: Down, Resets: 0, Connection source: none configured
Uptime(Downtime): 00:06:55, Messages sent/received: 0/0
Output messages discarded: 0
Connection and counters cleared 00:06:55 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none

SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled


The status on this MSDP RP is also Down, but observe that the connection source is not configured. This
can be confirmed with show run:

R7#show run | inc msdp
ip msdp peer 192.1.5.5

This indicates that R7 is not using the correct source for the MSDP peering. It is corrected by adding the
connect-source keyword:
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#ip msdp peer 192.1.5.5 connect-source loopback0
%MSDP-5-PEER_UPDOWN: Session to peer 192.1.5.5 going up
R7(config)#end

We see that R7 now has a peering with 192.1.5.5 (R5). Is R7 receiving the SA message for the (S,G) pair
(172.16.15.1, 224.1.1.1) now?
R7#show ip msdp peer 192.1.5.5 accepted-SAs
MSDP SA accepted from peer 192.1.5.5 (?)
224.1.1.1

172.16.15.1 (?) RP: 192.1.5.5

The SA message has been learned. The question now is, "Is the (S,G) pair added to the multicast routing
table of R7?"
R7#show ip mroute 224.1.1.1
Group 224.1.1.1 not found

This behavior is normal based on our discussion of the mechanisms used by MSDP. The (S,G) entry will
only be added to the multicast routing table if a host joins the multicast group 224.1.1.1:
R9#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.1.1.1
R9(config-if)#end


Now to repeat the command on R7:


R7#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.1.1.1), 00:00:57/stopped, RP 192.1.7.7, flags: SJC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:00:57/00:02:32
(172.16.15.1, 224.1.1.1), 00:00:57/00:02:02, flags: M
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:00:57/00:02:32

Are the pings successful on R1 now?


R1#ping 224.1.1.1 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms

No PIM Enabled Path Between MSDP Peers


This situation is where multicast packets cannot travel between the MSDP peers.
Setting the Stage:
Generate a multicast ping from R1 for the multicast group 224.9.9.9 with a high repeat count:
R1#ping 224.9.9.9 repeat 100000
Type escape sequence to abort.
Sending 100000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 4 ms......... <output omitted>

We see that the first packet actually succeeds, but all subsequent packets fail. Is R5 sending the SA
message? This is verified via show ip msdp peer:
R5#show ip msdp peer 192.1.7.7 advertised-SAs
MSDP SA advertised to peer 192.1.7.7 (?) from mroute table
224.9.9.9
172.16.15.1 (?)
MSDP SA advertised to peer 192.1.7.7 (?) from SA cache

This output indicates that R5 is advertising an SA message for the S,G pair of 224.9.9.9, 172.16.15.1 to
the peer located at 192.1.7.7. What is the status of the TCP session to this MSDP Peer?
R5#show ip msdp peer 192.1.7.7
MSDP Peer 192.1.7.7 (?), AS ?
Connection status:
State: Up, Resets: 0, Connection source: Loopback0 (192.1.5.5)
Uptime(Downtime): 00:16:55, Messages sent/received: 20/16
Output messages discarded: 0
Connection and counters cleared 00:26:50 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled

This output indicates that the status of the connection is Up. We also see that R5 is using the
connection source 192.1.5.5. It would be worthwhile to see the output of the same command on R7:

R7#show ip msdp peer 192.1.5.5
MSDP Peer 192.1.5.5 (?), AS ?
Connection status:
State: Up, Resets: 0, Connection source: Loopback0 (192.1.7.7)
Uptime(Downtime): 00:17:48, Messages sent/received: 17/21
Output messages discarded: 0
Connection and counters cleared 00:17:49 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 2
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled


The status on this MSDP RP is also Up, and the connection source is correctly configured. Is R7 receiving
the SA message?

R7#show ip msdp peer 192.1.5.5 accepted-SAs
MSDP SA accepted from peer 192.1.5.5 (?)
224.9.9.9

172.16.15.1 (?) RP: 192.1.5.5

Is the S,G entry making it into the multicast routing table on R7?
R7#show ip mroute 224.9.9.9
Group 224.9.9.9 not found

Has a receiver joined the multicast group?



R9#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (*,G))

Reporter:
<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m> - <n> reporter in include mode, <m> reporter in exclude

Channel/Group          Reporter         Uptime    Exp.   Flags  Interface
*,224.0.1.40           172.16.79.9      04:11:11  02:53  2LA    Fa0/1

The (S,G) entry is not added to the multicast routing table of R7 because R9 is not a member of the
group. This can be corrected via ip igmp join-group on R9:
R9#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end

Now is the S,G entry in R7's multicast routing table?


R7#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:00:40/stopped, RP 192.1.7.7, flags: SJC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:00:40/00:02:49
(172.16.15.1, 224.9.9.9), 00:00:40/00:02:19, flags: M
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:00:40/00:02:49

The pair was added. Are the pings successful on R1?


R1#ping 224.9.9.9 repeat 10000
Type escape sequence to abort.
Sending 10000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
...................................... <output omitted>

This means that the packets are not making it across the transit path between R5 and R7. This situation
is most likely associated with a problem in the multicast routing and forwarding plane. It could be an
RPF issue, an asynchronous routing issue, or a corrupt multicast routing table between the MSDP peers.
All of these issues can be isolated via mtrace (used bidirectionally):
R5#mtrace 192.1.5.5 192.1.7.7
Type escape sequence to abort.
Mtrace from 192.1.5.5 to 192.1.7.7 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.7.7
-1 172.16.67.7 PIM [192.1.5.0/24]
-2 172.16.67.6 PIM [192.1.5.0/24]
-3 172.16.26.2 PIM Multicast disabled [192.1.5.0/24]
-4 172.16.24.4 PIM [192.1.5.0/24]
-5 172.16.45.5 PIM [192.1.5.0/24]

Now in the other direction.


R5#mtrace 192.1.7.7 192.1.5.5
Type escape sequence to abort.
Mtrace from 192.1.7.7 to 192.1.5.5 via RPF
From source (?) to destination (?)
Querying full reverse path...
0 192.1.5.5
-1 192.1.5.5 PIM [192.1.7.0/24]
-2 172.16.45.4 PIM [192.1.7.0/24]
-3 172.16.24.2 None No route

This output informs us that on R2 the interface with the IP address 172.16.26.2 is not enabled with ip pim
sparse-mode, as evidenced by show run interface:
R2#show run interface GigabitEthernet0/1
Building configuration...
Current configuration : 116 bytes
!
interface GigabitEthernet0/1
ip address 172.16.26.2 255.255.255.0
no ip mroute-cache
duplex auto
speed auto
end

To correct this problem we will apply ip pim sparse-mode under the GigabitEthernet0/1 interface of R2:


R2#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#interface GigabitEthernet0/1
R2(config-if)#ip pim sparse-mode
R2(config-if)#end
R2#
%PIM-5-NBRCHG: neighbor 172.16.26.6 UP on interface GigabitEthernet0/1
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 172.16.26.6 on interface
GigabitEthernet0/1

We see the PIM neighbor come up on GigabitEthernet0/1. Are the pings successful from R1?
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms

The pings are successful, demonstrating that the issue has been corrected.
MSDP Passwords and Filters
This situation is one in which SA messages are either not sent as a result of failed authentication, or are
dropped as a result of message filters.
Setting the Stage:
Generate a multicast ping from R1 for the multicast group 224.3.3.3 with a high repeat count:
R1#ping 224.3.3.3 repeat 10000
Type escape sequence to abort.
Sending 10000, 100-byte ICMP Echos to 224.3.3.3, timeout is 2 seconds:
......... <output omitted>

Is R5 sending the SA Message? This is confirmed via show ip msdp peer:

R5#show ip msdp peer 192.1.7.7 advertised-SAs
MSDP SA advertised to peer 192.1.7.7 (?) from mroute table
224.3.3.3
172.16.15.1 (?)
MSDP SA advertised to peer 192.1.7.7 (?) from SA cache

This output indicates that R5 is advertising an SA message for the S,G pair of 224.3.3.3, 172.16.15.1 to
the peer located at 192.1.7.7. What is the status of the TCP session to this MSDP Peer?
R5#show ip msdp peer 192.1.7.7
MSDP Peer 192.1.7.7 (?), AS ?
Connection status:
State: Up, Resets: 0, Connection source: Loopback0 (192.1.5.5)
Uptime(Downtime): 00:16:55, Messages sent/received: 20/16
Output messages discarded: 0
Connection and counters cleared 00:26:50 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled

This output indicates that the status of the connection is Up. We also see that R5 is using the
connection source 192.1.5.5. It would be worthwhile to see the output of the same command on R7:

R7#show ip msdp peer 192.1.5.5
MSDP Peer 192.1.5.5 (?), AS ?
Connection status:
State: Up, Resets: 0, Connection source: Loopback0 (192.1.7.7)
Uptime(Downtime): 00:07:08, Messages sent/received: 8/9
Output messages discarded: 0
Connection and counters cleared 00:07:09 ago
SA Filtering:
Input (S,G) filter: everything, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0

Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled


The status on this MSDP RP is also Up, and the connection source is correctly configured. However, note
that there is an (S,G) filter applied on R7 that is filtering everything. Is R7 receiving the SA
message?

R7#show ip msdp peer 192.1.5.5 accepted-SAs
MSDP SA accepted from peer 192.1.5.5 (?)

What is the nature of the filter applied on R7?



R7#show run | inc msdp
ip msdp peer 192.1.5.5 connect-source Loopback0
ip msdp sa-filter in 192.1.5.5

An MSDP sa-filter is blocking all inbound SA messages sourced from 192.1.5.5. To correct this issue, the
filter should be removed:
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#no ip msdp sa-filter in 192.1.5.5
R7(config)#end

Is R7 receiving the SA Message now?


R7#show ip msdp peer 192.1.5.5 accepted-SAs
MSDP SA accepted from peer 192.1.5.5 (?)
224.3.3.3

172.16.15.1 (?) RP: 192.1.5.5

Has the pair been added to the multicast routing table of R7?
R7#show ip mroute 224.3.3.3
Group 224.3.3.3 not found

Is there a host that is a member of the group 224.3.3.3?


R9#show ip igmp membership
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:

<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m> - <n> reporter in include mode, <m> reporter in exclude

Channel/Group          Reporter         Uptime    Exp.   Flags  Interface
*,224.0.1.40           172.16.79.9      04:44:23  02:41  2LA    Fa0/1

There is no member of this group. To correct this issue, have R9's interface FastEthernet0/1 join the
group:
R9#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R9(config)#interface FastEthernet0/1
R9(config-if)#ip igmp join-group 224.3.3.3
R9(config-if)#end

Has the S,G pair been added to the multicast routing table on R7 now?
R7#show ip mroute 224.3.3.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.3.3.3), 00:00:54/stopped, RP 192.1.7.7, flags: SJC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:00:54/00:02:35
(172.16.15.1, 224.3.3.3), 00:00:54/00:03:20, flags: MT
Incoming interface: FastEthernet0/0, RPF nbr 172.16.67.6
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:00:54/00:03:03

Are the pings successful on R1?


R1#ping 224.3.3.3 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.3.3.3, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 4 ms

Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms
This indicates that the issue has been corrected.


MSDP Show Command Tools


As a quick reference, here are the show command tools utilized in this chapter. This section utilizes the
MSDP topology in Figure 10-3 for all example output.

Figure 10-3: A Sample MSDP Topology

show COMMAND:
show ip msdp peer ip_address
This command displays detailed information about Multicast Source Discovery Protocol (MSDP) peers.
Where:

ip_address - the IP address of the MSDP peer

EXAMPLE OUTPUT:
R7#show ip msdp peer 192.1.5.5
MSDP Peer 192.1.5.5 (?), AS ?
Connection status:
State: Up, Resets: 0, Connection source: Loopback0 (192.1.7.7)
Uptime(Downtime): 00:07:19, Messages sent/received: 7/10
Output messages discarded: 0
Connection and counters cleared 00:07:28 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0

SAs learned from this peer: 1
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled
R7#


show COMMAND:
show ip msdp peer ip_address advertised-SAs
This command displays the Source Active (SA) messages that have been advertised to the specified MSDP peer.
Where:

ip_address - the IP address of the MSDP peer

EXAMPLE OUTPUT:
R5#show ip msdp peer 192.1.7.7 advertised-SAs
MSDP SA advertised to peer 192.1.7.7 (?) from mroute table
224.1.1.1
172.16.15.1 (?)
MSDP SA advertised to peer 192.1.7.7 (?) from SA cache
R5#

MSDP Debug Command Tools


As a quick reference, here are the debug command tools utilized in this chapter. This section utilizes the
MSDP topology in Figure 10-4 for all example output.

Figure 10-4: A Sample MSDP Topology

debug COMMAND:
debug ip msdp detail
This command displays detailed information regarding MSDP operations.
EXAMPLE OUTPUT:
R5#debug ip msdp detail
MSDP Detail debugging is on
R5#
MSDP(0): Received 3-byte TCP segment from 192.1.7.7
MSDP(0): Append 3 bytes to 0-byte msg 12 from 192.1.7.7, qs 1
R5#
MSDP(0): Sent entire mroute table, mroute_cache_index = 0, Qlen = 0
MSDP(0): start_index = 0, sa_cache_index = 0, Qlen = 0
MSDP(0): Sent entire sa-cache, sa_cache_index = 0, Qlen = 0
R5#
MSDP(0): Received 3-byte TCP segment from 192.1.7.7
MSDP(0): Append 3 bytes to 0-byte msg 13 from 192.1.7.7, qs 1
R5#

Chapter Challenge: MSDP Sample Trouble Tickets


The following section includes two sample Trouble Tickets designed to challenge the troubleshooting
skills that have been developed in all previous sections of this chapter. These Trouble Tickets were
designed using the Routing & Switching rental racks at www.ProctorLabs.com with the initial
configurations provided in the file MCAST-CH10-MSDP-TT-INITIAL.txt. Keep in mind these sample
Trouble Tickets were also tested against home practice racks and the most popular router emulators.
The network topology used in this section is shown in Figure 10-5 below:

Figure 10-5: The Chapter Challenge Topology

Trouble Ticket #1
Your supervisor has brought to your attention that the RP in Multicast Domain "A" is not forming an
MSDP peering relationship with the RP in Multicast Domain "B". You have been instructed to correct the
issue causing this problem. You must use the most secure method possible to accomplish this task.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that R7 is failing to accept SA Messages
from R5 for the multicast group 224.1.1.1.

Chapter Challenge: MSDP Sample Trouble Tickets Solutions


The following section includes the solutions to the two Trouble Tickets presented in the previous
section.
Trouble Ticket #1 Solution
Your supervisor has brought to your attention that the RP in Multicast Domain "A" is not forming an
MSDP peering relationship with the RP in Multicast Domain "B". You have been instructed to correct the
issue causing this problem. You must use the most secure method possible to accomplish this task.
Step 1 - Fault Verification:
What is the status of the MSDP peering connection between R5 and R7?
R5#show ip msdp peer 192.1.7.7
MSDP Peer 192.1.7.7 (?), AS ?
Connection status:
State: Down, Resets: 1, Connection source: Loopback0 (192.1.5.5)
Uptime(Downtime): 00:18:12, Messages sent/received: 0/0
Output messages discarded: 0
Connection and counters cleared 00:25:02 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled
R5#

The output clearly indicates that the connection status with the peer is Down, thus verifying that the
problem exists.

Step 2 - Fault Isolation:
The next course of action is to use the show ip msdp peer command on R7 and compare the results with
those seen on R5.

R7#show ip msdp peer 192.1.5.5
MSDP Peer 192.1.5.5 (?), AS ?
Connection status:
State: Listen, Resets: 0, Connection source: Loopback0 (192.1.7.7)

Uptime(Downtime): 00:17:33, Messages sent/received: 0/0
Output messages discarded: 0
Connection and counters cleared 00:17:33 ago
SA Filtering:
Input (S,G) filter: 100, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: enabled


Looking at this output, we can see that both R5 and R7 are using their loopbacks to form the connection,
and both devices have specified the correct connection source. However, R7 has enabled MD5 signature
protection for the TCP connection whereas R5 has not. We can see the nature of the authentication
configuration with show run on each of these MSDP RPs:

R5#show run | inc msdp password
R5#
R7#show run | inc msdp password
ip msdp password peer 192.1.5.5 CISCO


We see that R5 is not using password protection for the TCP session. This isolates the fault.

Step 3 - Fault Remediation:
In this scenario, the ip msdp password peer command needs to be applied to R5.

R5#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#ip msdp password peer 192.1.7.7 CISCO
R5(config)#end

Step 4 - Verification of Remediation


Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially.

R5#show ip msdp peer
MSDP Peer 192.1.7.7 (?), AS ?
Connection status:
State: Up, Resets: 1, Connection source: Loopback0 (192.1.5.5)

Uptime(Downtime): 00:00:00, Messages sent/received: 1/1
Output messages discarded: 0
Connection and counters cleared 00:34:30 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: enabled

We see that the MSDP peer connection with 192.1.7.7 is now up. Additionally, we see a console message
notifying us:
%MSDP-5-PEER_UPDOWN: Session to peer 192.1.7.7 going up

The solution has successfully remediated the problem.


Trouble Ticket #2 Solution
After solving Trouble Ticket #1, your supervisor has observed that R7 is failing to accept SA Messages
from R5 for the multicast group 224.1.1.1.
Step 1 - Fault Verification:
Generate a multicast stream for the group 224.1.1.1 on R1 with a high repeat count:
R1#ping 224.1.1.1 repeat 10000
Type escape sequence to abort.
Sending 10000, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
....................................... <output omitted>


Does R7 accept the SA Message for the group 224.1.1.1?
R7#show ip msdp peer 192.1.5.5 accepted-SAs
MSDP SA accepted from peer 192.1.5.5 (?)
R7#


R7 has no record of accepting the SA for 224.1.1.1, thus proving that the problem exists.

Step 2 - Fault Isolation:


Now, the first step is to determine whether R5 even sends an SA message for the group.
R5#show ip msdp peer 192.1.7.7 advertised-SAs
MSDP SA advertised to peer 192.1.7.7 (?) from mroute table
224.1.1.1
172.16.15.1 (?)
MSDP SA advertised to peer 192.1.7.7 (?) from SA cache

R5 is sending the SA Messages. We can use show ip msdp peer on R7 to see what is happening to these
messages:
R7#show ip msdp peer
MSDP Peer 192.1.5.5 (?), AS ?
Connection status:
State: Up, Resets: 0, Connection source: Loopback0 (192.1.7.7)
Uptime(Downtime): 00:29:08, Messages sent/received: 30/34
Output messages discarded: 0
Connection and counters cleared 00:56:48 ago
Elapsed time since last message: 00:00:06
Local Address of connection: 192.1.7.7
Local Port: 639, Remote Port: 23779
SA Filtering:
Input (S,G) filter: 100, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: enabled
Message counters:
RPF Failure count: 0
SA Messages in/out: 21/0
SA Requests in: 0
SA Responses out: 0
Data Packets in/out: 2/0

Note that SA filtering is taking place on R7. Specifically, an input (S,G) filter has been applied using
extended access-list 100. What is the nature of the access-list being called?
R7#show access-list 100
Extended IP access list 100
10 deny ip any host 224.1.1.1 (18 matches)
20 permit ip any any

This ACL is blocking any SA Messages for the group 224.1.1.1 sourced from any sender. This has isolated
our fault.

Step 3 - Fault Remediation:
In this scenario, the ip access-list extended 100 command needs to be used to remove sequence
number 10 from the access-list:
R7#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R7(config)#ip access-list extended 100
R7(config-ext-nacl)#no 10
R7(config-ext-nacl)#end

To verify that this ACL has been edited:


R7#show access-list 100
Extended IP access list 100
20 permit ip any any (1 match)

Step 4 - Verification of Remediation


Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially:
R7#show ip msdp peer 192.1.5.5 accepted-SAs
MSDP SA accepted from peer 192.1.5.5 (?)
224.1.1.1

172.16.15.1 (?) RP: 192.1.5.5


R7 is now accepting the SA messages for the group 224.1.1.1, thus verifying that the error has been
corrected. Additionally, we see that pings from R1 are now successful:

R1#ping 224.1.1.1 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms


Chapter 11: Anycast-RP


In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and functionality of
Anycast-RP are examined in great depth. Once the operational characteristics of this important
technology are detailed completely, the focus becomes that of troubleshooting. This includes the
careful examination of symptoms, a fault isolation methodology, and the implementation of repairs for
Anycast-RP. The chapter begins with a thorough review of Anycast-RP, and then quickly launches into an
exhaustive analysis of the art of troubleshooting this multicast support technology. This important
chapter concludes with sample troubleshooting scenarios, reference materials for the most important
show and debug commands, and exciting challenges that allow readers to practice implementing the
troubleshooting skills they have obtained.


Anycast-RP Technology Review


In Chapter 10: Multicast Source Discovery Protocol (MSDP), you learned all about that important
protocol for interdomain multicast implementations. Anycast-Rendezvous Point (RP) is a powerful
feature that arises from MSDP.
Originally developed for interdomain multicast applications, Anycast-RP provides redundancy and load-
sharing capabilities for the critical RP role within multicast. While the original intent was interdomain
multicast, enterprises today typically use Anycast-RP for configuring a PIM-SM network to meet fault
tolerance requirements within a single multicast domain.
With Anycast-RP, we configure two or more rendezvous points with the same IP address and 32-bit
mask on loopback interfaces. Configure all downstream routers in the multicast domain with this
Anycast-RP IP address. IP routing automatically selects the closest RP for each source and receiver
based on the IP routing protocol information. Careful placement of the Anycast-RP devices can help
ensure adequate load balancing.
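A minimal sketch of this portion of the configuration for the RPs in Figure 11-1 (the shared address
192.1.100.100 appears throughout this chapter's output; the loopback number is an assumption for
illustration):

! On both R4 and R6
interface Loopback100
 ip address 192.1.100.100 255.255.255.255
!
! On all routers in the domain
ip pim rp-address 192.1.100.100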
In Anycast-RP, MSDP shares information between the redundant RPs. This is important because a source
may register with one RP and receivers may join a different RP. When a source registers with one RP, an
SA message will be sent to the other RPs informing them that there is an active source for a particular
multicast group. The result is that each RP will know about the active sources in the area of the other
RPs. If any of the RPs were to fail, IP routing would converge, and one of the RPs would become the
active RP. Receivers join the new RP and connectivity is maintained.
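Continuing the sketch, the redundant RPs then peer with each other over MSDP using their unique
loopback addresses. Note that the ip msdp originator-id command, while not called out elsewhere in
this chapter, is typically added so that each RP stamps its unique address, rather than the shared
Anycast address, in the SA messages it originates:

! On R4
ip msdp peer 192.1.6.6 connect-source Loopback0
ip msdp originator-id Loopback0
!
! On R6
ip msdp peer 192.1.4.4 connect-source Loopback0
ip msdp originator-id Loopback0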


The Operation and Troubleshooting of Anycast-RP


Anycast-RP is a method of deploying MSDP in a fashion that allows multiple devices to fill the role of
RP simultaneously. This application of MSDP supports both RP high availability and RP load balancing
within a single multicast domain. Typically, this deployment will be between two RPs employing the
same IP address. It eliminates the often cumbersome processes used by the dynamic RP discovery
protocols Auto-RP and BSR mentioned in previous chapters.
The concept of Anycast-RP between multiple devices presents an attractive solution to deploying RPs in
a multicast domain. In Anycast-RP, all RPs are configured with identical IP addresses, typically assigned
to a loopback interface. This address is then advertised into the native IGP employed in the routing
environment. Oftentimes, these routes are advertised as external routes, in order to more readily allow
routing metrics to be manipulated to obtain the desired routing behavior or RP selection.
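As one hedged example of such an external advertisement, the loopback carrying the shared address
could be redistributed into OSPF rather than advertised with a network statement (the route-map name
here is hypothetical):

route-map ANYCAST-RP permit 10
 match interface Loopback100
!
router ospf 1
 redistribute connected subnets route-map ANYCAST-RP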
This process of advertising these matching prefixes into the unicast routing domain allows devices to
utilize the RP with the lowest IGP cost to build multicast trees. There will normally be several trees per
group, and at least one per RP. The issue in this situation is that, by default, RPs do not natively exchange
information regarding PIM Registrations or PIM Joins for multicast groups. The solution to this intra-
domain conundrum is the same as that used between RPs in inter-domain multicast deployments.
Deploying MSDP between these devices allows this exchange to take place via the TCP peering
session formed between the RPs. Once configured, SA messages will notify the other RPs when a source
is active. This ensures all RPs are aware of all active sources, so that they can facilitate joining the SPT to
receive a given multicast stream directly.
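Once the peering is up, the SA exchange can be spot-checked with the same commands used throughout
Chapter 10; a quick sketch using this chapter's RPs:

R4#show ip msdp peer 192.1.6.6 advertised-SAs
R6#show ip msdp peer 192.1.4.4 accepted-SAs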
We will use Figure 11-1 to illustrate the operational process employed by Anycast-RP.


Figure 11-1: Anycast-RP


This process, as mentioned earlier in this section, is the same as that used in inter-domain multicast.
Step One - The first-hop router (R5) unicasts a PIM Register message (containing an encapsulated
multicast packet) to the closest RP, in this case R4.
This process can be observed by generating a ping from R1 to a multicast group and observing the
output of debug ip pim on R4:
R4#debug ip pim
PIM debugging is on
R4#

Now to generate the ping on R1:


R1#ping 224.10.10.10 repeat 100000
Type escape sequence to abort.
Sending 100000, 100-byte ICMP Echos to 224.10.10.10, timeout is 2 seconds:
...... <output omitted>

Now we will look at R4 to see the Register Message arrive from R5:
R4#
PIM(0): Received v2 Register on FastEthernet0/1 from 172.16.45.5
for 172.16.15.1, group 224.10.10.10
PIM(0): Check RP 192.1.100.100 into the (*, 224.10.10.10) entry
PIM(0): Send v2 Register-Stop to 172.16.45.5 for 172.16.15.1, group 224.10.10.10

As expected, the Register message arrives from 172.16.45.5 (R5).


Step Two - R4 then advertises an MSDP SA message to R6, notifying it that there is an active source
(S) for the group (G). The first SA message contains the encapsulated multicast packet; subsequent SA
messages for this group will not carry this packet.
This can be verified by using the show ip msdp peer command:
R4#show ip msdp peer 192.1.6.6 advertised-SAs
MSDP SA advertised to peer 192.1.6.6 (?) from mroute table
224.10.10.10
172.16.15.1 (?)
MSDP SA advertised to peer 192.1.6.6 (?) from SA cache

Step Three - If R6 receives the SA message and it has a receiver for that particular group, it joins the SPT
toward the source by sending an (S,G) Join.


We can see if the SA Message arrives by using the show ip msdp peer command on R6:
R6#show ip msdp peer 192.1.4.4 accepted-SAs
MSDP SA accepted from peer 192.1.4.4 (?)
224.10.10.10

172.16.15.1 (?) RP: 192.1.100.100

We see that the SA message arrives, and as a result of R9 having joined the multicast group
224.10.10.10, we see the following Join message go out with the SPT bit set:
R6#
PIM(0): Insert (172.16.15.1,224.10.10.10) join in nbr 172.16.26.2's queue
PIM(0): Building Join/Prune packet for nbr 172.16.26.2
PIM(0): Adding v2 (172.16.15.1/32, 224.10.10.10), S-bit Join
PIM(0): Send v2 join/prune to 172.16.26.2 (FastEthernet0/1)

Step Four - R6 forwards the multicast packet it received in the SA message down the shared tree toward
the hosts.
As demonstrated by the output of the show ip mroute command on R6:
R6#show ip mroute 224.10.10.10
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.10.10.10), 00:04:54/stopped, RP 192.1.100.100, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:04:54/00:02:31
(172.16.15.1, 224.10.10.10), 00:00:04/00:02:55, flags: MT
Incoming interface: FastEthernet0/1, RPF nbr 172.16.26.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:04/00:03:25

Step Five - When the last-hop router receives the multicast packet, it joins the SPT unless it is configured
not to do so (ip pim spt-threshold infinity).
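For reference, a sketch of disabling the switchover on a last-hop router; an optional group-list ACL can
limit the behavior to specific groups:

R7(config)#ip pim spt-threshold infinity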


In an MSDP environment, the behavior is the same when the first encapsulated multicast packet arrives in
the SA message. After receiving the SA message, the RP extracts the multicast data and sends it down
the RPT to the DRs at the receiver side. The MSDP RP acts as a transfer station for all multicast packets.
The whole process involves the following issues:

Multicast packets could be delivered via the MSDP peering relationship along a path that might
not be the shortest path.
An increase in multicast traffic could add potential congestion on the RP, increasing the risk of
failure.

To solve these issues, MSDP permits the normal PIM-SM process of allowing the DR at the receiver side
to initiate the SPT switchover process. After receiving the first multicast packet, the receiver-side DR
initiates an SPT switchover process, as follows:

The receiver-side DR sends an (S,G) Join message hop by hop toward the multicast source.
When the Join message reaches the source-side DR, all the routers on the path have installed
the (S,G) entry in their forwarding tables, and thus an SPT branch is established.
When the multicast packets travel to the router where the RPT and the SPT diverge, the router
drops the multicast packets received from the RPT and sends an RP-bit Prune message hop by
hop to the RP. After receiving this Prune message, the RP sends a Prune message toward the
multicast source (assuming only one receiver exists). Thus, SPT switchover is completed.
Multicast data is then sent directly from the source to the receivers along the SPT.


Common Issues with Anycast-RP


Anycast-RP is a very simple technology to troubleshoot. It uses MSDP to ensure that multicast
source information is exchanged between Anycast-RPs. Even though the overall process is simple,
dividing it into specific categories will make troubleshooting more straightforward. To simplify
troubleshooting common issues while deploying Anycast-RP, we identify two categories of problems:
MSDP Peering Issues and Unicast Routing Problems.
MSDP Peering Issues
Incorrect Peering Configuration
In Chapter 10: Multicast Source Discovery Protocol (MSDP), this text discussed the different
configuration commands needed to enable MSDP peering between RPs in different multicast domains.
This same process is the one used in intra-domain deployments. More often than not, there are issues
associated with applying these commands and parameters. Most commonly, configuration or
typographical errors are the cause of failures in this type of multicast deployment. MSDP configuration
is very much like BGP in that it relies on TCP session initiation to operate. This means that it is necessary
to use the connect-source keyword whenever the session should be sourced from an interface other than
the one used to directly reach the MSDP peer, and the address of that source interface must match the
peer address configured on the other MSDP-enabled RP.
No PIM Enabled Path Between MSDP peers
In order for MSDP to work, there must be a PIM-enabled path across all devices between the MSDP-
peered RPs. This necessity is part of the successful operation of the protocol because MSDP does not
provide a replacement for PIM. PIM is still required for the successful transfer of multicast packets, and
it is part of the data plane RPF check mechanism.
MSDP Passwords and Filters
In order to maintain some level of secure deployment, MSDP is designed to leverage MD5 digests as
part of its authentication process. These digests must match exactly and are easily misconfigured.
Additionally, MSDP employs a number of filter mechanisms that can block communications:

filter-sa-request - Filter SA-Requests from peer
sa-filter - Filter SA messages from peer
sa-limit - Configure SA limit for a peer
ttl-threshold - Configure TTL Threshold for MSDP Peer


Beware of issues where these have been deployed incorrectly or in situations where previous
configurations have not been completely removed.
Unicast Routing Problems
Individual multicast-enabled routers utilize their own unicast routing tables to determine which RP is
the closest. The multicast routing trees are built using the same set of route selection rules used by the
unicast routing protocol. This means that a router will always prefer the longest match over
administrative distance. Ties are broken by using the actual routing metric to the respective RPs.
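A quick sketch of determining which RP a given router will actually select is simply to inspect the
unicast route, and the resulting RPF information, for the shared address:

R7#show ip route 192.1.100.100
R7#show ip rpf 192.1.100.100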
The process explained above is intended to make you aware that the routers in your domain may select
RPs you do not expect them to choose, based on the dynamic nature of the routing protocols you
employ on the network. Three such circumstances that are commonly encountered are:

More Than One Routing Protocol - Different administrative distances can have unexpected
effects on the RP selection process.
Matching Metrics To Each Of The RPs - This will result in a router alternating between the
RPs. (A less than favorable situation.)
Unequal Cost Load Balancing - This situation will introduce a complex weighted round robin
selection process that can make troubleshooting exceedingly difficult.

PIM will always forward Join/Prune messages toward the RP selected by the unicast routing table, but if
you do not know the identity of the RP in use, you will have difficulty troubleshooting any protocol faults.
In the Anycast-RP Sample Troubleshooting Scenarios section that follows, troubleshooting these issues
is demonstrated. For each problem, the text demonstrates how to quickly and efficiently verify each
symptom, isolate the cause, and remediate the issue.


Anycast-RP Sample Troubleshooting Scenarios


This section provides a detailed look at how to best approach troubleshooting some of the common
issues discussed in previous sections. It includes coverage of a methodology for identification, isolation,
and remediation of faults in the Anycast-RP operational process. The intent here is to hone and develop
troubleshooting skills tailored to first identify whether a problem is Anycast-RP related, and then to begin
isolating the cause of the fault in the most efficient manner possible. Figure 11-2 illustrates the topology
used to explore this topic.

Figure 11-2: A Sample Anycast-RP Topology

In the Common Issues with Anycast-RP section, two primary types of problems were identified: MSDP
Peering Issues and Unicast Routing Problems. This section explores these two categories of failure by
directing our attention to the commands necessary to identify that a problem exists.
MSDP Peering Issues
This situation is one in which two or more MSDP peers fail to successfully negotiate a TCP session. It is
most commonly verified via show ip msdp peer:
R4#show ip msdp peer
MSDP Peer 192.1.6.6 (?), AS ?
Connection status:
State: Down, Resets: 0, Connection source: Loopback0 (192.1.4.4)
Uptime(Downtime): 00:00:13, Messages sent/received: 0/0
Output messages discarded: 0
Connection and counters cleared 00:00:13 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none


SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: enabled

Observe that the output of this command indicates that the connection state is Down. More often than
not, a direct comparison between the two RPs in a domain will reveal the cause of this type of issue:
R6#show ip msdp peer
MSDP Peer 192.1.4.4 (?), AS ?
Connection status:
State: Listen, Resets: 1, Connection source: Loopback0 (192.1.6.6)
Uptime(Downtime): 00:02:10, Messages sent/received: 0/0
Output messages discarded: 0
Connection and counters cleared 05:27:55 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled

Note that R4 is running MD5 authentication and R6 is not. This can be verified via show run:

R4#sh run | inc msdp password
ip msdp password peer 192.1.6.6 CISCO
R6#show run | inc msdp password
R6#


Applying a password to R6 will correct this issue:
R6(config)#ip msdp password peer 192.1.4.4 CISCO
R6(config)#end
R6#
%MSDP-5-PEER_UPDOWN: Session to peer 192.1.4.4 going up

Repeating the show ip msdp peer command will now tell us that the connection is up:


R6#show ip msdp peer
MSDP Peer 192.1.4.4 (?), AS ?
Connection status:
State: Up, Resets: 1, Connection source: Loopback0 (192.1.6.6)
Uptime(Downtime): 00:00:54, Messages sent/received: 1/1
Output messages discarded: 0
Connection and counters cleared 05:35:32 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: enabled

This indicates that our fault has been corrected.


Unicast Routing Problems
As we discussed in the previous sections, the ability of a device to use the features of Anycast-RP depends on the correct operation of the individual devices' unicast routing tables. The fact that Anycast-RP relies on unicast routing to communicate with the closest RP can cause issues. A baseline configuration sketch follows.
Step One: Pings from R1 do not reach the member host on R9
In this scenario, we will generate a multicast stream for the group 224.9.9.9 on R1:
R1#ping 224.9.9.9 repeat 10000
Type escape sequence to abort.
Sending 10000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........................<output omitted>

The ping is not successful. We need to determine if R4 is sending the SA Messages to its peer:
R4#show ip msdp peer 192.1.6.6 advertised-sa
MSDP SA advertised to peer 192.1.6.6 (?) from mroute table
224.9.9.9
172.16.15.1 (?)
MSDP SA advertised to peer 192.1.6.6 (?) from SA cache

We see that the message is sent. Do the messages arrive on R6?


R6#show ip msdp peer 192.1.4.4 accepted-SAs

MSDP SA accepted from peer 192.1.4.4 (?)
224.9.9.9
172.16.15.1 (?) RP: 192.1.100.100

The SA messages are accepted by R6. Is the S,G entry added to R6's multicast routing table?
R6#show ip mroute 224.9.9.9
Group 224.9.9.9 not found

The entry is not added. Oddly enough, we do not even see a *,G entry for the group. Has R9 joined 224.9.9.9?
R9#show ip igmp membership
Flags: A - aggregate, T - tracked
       L - Local, S - static, V - virtual, R - Reported through v3
       I - v3lite, U - Urd, M - SSM (S,G) channel
       1,2,3 - The version of IGMP the group is in
Channel/Group-Flags:
       / - Filtering entry (Exclude mode (S,G), Include mode (*,G))
Reporter:
       <mac-or-ip-address> - last reporter if group is not explicitly tracked
       <n>/<m> - <n> reporter in include mode, <m> reporter in exclude

 Channel/Group                  Reporter        Uptime   Exp.  Flags  Interface
 *,224.9.9.9                    172.16.79.9     00:26:33 02:29 2LA    Fa0/1
 *,224.0.1.40                   172.16.79.9     00:26:31 02:28 2LA    Fa0/1

R9 has joined the group. Is this fact being communicated to R7?


R7#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group,
V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:08:10/00:03:09, RP 192.1.100.100, flags: SJC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:08:10/00:03:09

R7 is learning that R9 has joined the group and has also added the *,G entry, with the interface facing R9 in the OIL for this entry. The odd thing is that the RPF nbr is 0.0.0.0. We know from past experience that this indicates that the router either thinks that it is the RP or that the address for the RP is invalid. We know that R7 should not be the RP, but we still need to verify:

R7#show ip pim rp
Group: 224.9.9.9, RP: 192.1.100.100, next RP-reachable in 00:00:03
Group: 224.0.1.40, RP: 192.1.100.100, next RP-reachable in 00:00:02

We see that the RP is 192.1.100.100, which is correct. Can R7 reach this address?
R7#ping 192.1.100.100
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.1.100.100, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms


We see the address is reachable. But what route will R7 take to reach this address?

R7#show ip route 192.1.100.100
Routing entry for 192.1.100.0/24
Known via "connected", distance 0, metric 0 (connected, via interface)
Redistributing via eigrp 100
Routing Descriptor Blocks:
* directly connected, via Loopback100
Route metric is 0, traffic share count is 1

We see that R7 thinks the prefix 192.1.100.100 is reachable via its Loopback100 interface. This means that the address resides on this router, as evidenced by show run:
R7#show run interface loopback 100
Building configuration...
Current configuration : 69 bytes
!
interface Loopback100
ip address 192.1.100.100 255.255.255.0
end

This interface needs to be removed because it is causing an issue with the unicast reachability to the RP.
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#no interface loopback 100
% Not all config may be removed and may reappear after reactivating the logicalinterface/sub-interfaces

R7(config)#
%LINK-5-CHANGED: Interface Loopback100, changed state to administratively down
%LINEPROTO-5-UPDOWN: Line protocol on Interface Loopback100, changed state to down
R7(config)#end

Are pings from R1 successful now?


R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 1 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 1 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms

Anycast-RP Show Command Tools


As a quick reference, here are the show command tools utilized in this chapter. This section utilizes the
Anycast-RP topology in Figure 11-3 for all example output.

Figure 11-3: A Sample Anycast-RP Topology

show COMMAND:
show ip msdp peer ip_address
This command displays detailed information about Multicast Source Discovery Protocol (MSDP) peers.
Where:

ip_address - the IP address of the MSDP peer

EXAMPLE OUTPUT:
R6#show ip msdp peer 192.1.5.5
MSDP peer 192.1.5.5 not found
R6#show ip msdp peer 192.1.4.4
MSDP Peer 192.1.4.4 (?), AS ?
Connection status:
State: Up, Resets: 0, Connection source: Loopback0 (192.1.6.6)
Uptime(Downtime): 01:17:18, Messages sent/received: 77/86
Output messages discarded: 0
Connection and counters cleared 01:17:49 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none

Peer ttl threshold: 0


SAs learned from this peer: 1
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled
Message counters:
RPF Failure count: 0
SA Messages in/out: 69/0
SA Requests in: 0
SA Responses out: 0
Data Packets in/out: 4/0
R6#


show COMMAND:
show ip msdp peer ip_address advertised-SAs
This command displays detailed information about Multicast Source Discovery Protocol (MSDP) peers.
Where:

ip_address - the IP address of the MSDP peer

EXAMPLE OUTPUT:
R4#show ip msdp peer 192.1.6.6 advertised-SAs
MSDP SA advertised to peer 192.1.6.6 (?) from mroute table
224.9.9.9
172.16.15.1 (?)
MSDP SA advertised to peer 192.1.6.6 (?) from SA cache
R4#


Anycast-RP debug Command Tools


As a quick reference, here are the debug command tools utilized in this chapter. This section utilizes the
Anycast-RP topology in Figure 11-4 for all example output.

Figure 11-4: A Sample Anycast-RP Topology

debug COMMAND:
debug ip msdp detail
This command displays detailed information regarding MSDP operations.
EXAMPLE OUTPUT:
R4#debug ip msdp detail
MSDP Detail debugging is on
R4#
MSDP(0): Received 3-byte TCP segment from 192.1.6.6
MSDP(0): Append 3 bytes to 0-byte msg 92 from 192.1.6.6, qs 1
R4#
MSDP(0): Sent entire mroute table, mroute_cache_index = 0, Qlen = 0
MSDP(0): start_index = 0, sa_cache_index = 0, Qlen = 0
MSDP(0): Sent entire sa-cache, sa_cache_index = 0, Qlen = 0
R4#
MSDP(0): Received 3-byte TCP segment from 192.1.6.6
MSDP(0): Append 3 bytes to 0-byte msg 93 from 192.1.6.6, qs 1
R4#

Chapter Challenge: Anycast-RP Sample Trouble Tickets


The following section includes a sample Trouble Ticket designed to challenge the troubleshooting skills
that have been developed in all previous sections of this chapter. This Trouble Ticket is designed using
the Routing & Switching rental racks at www.ProctorLabs.com with the initial configurations provided in
the file MCAST-CH11-ANYCAST-RP-TT-INITIAL.txt. Keep in mind this sample Trouble Ticket has also been tested against home practice racks and the most popular router emulators.
The network topology used in this section is shown in Figure 11-6 below:

Figure 11-6: The Chapter Challenge Topology

Trouble Ticket #1
Your supervisor has brought to your attention that pings sourced from the FastEthernet0/0 interface of R1 for the group 224.9.9.9 are not reaching receivers on R9's VLAN79 segment. Using this group, you have been instructed to isolate and correct this issue.

Chapter Challenge: Anycast-RP Sample Trouble Tickets Solutions


The following section includes the solutions to the Trouble Ticket presented in the previous section.
Trouble Ticket #1 Solution
Your supervisor has brought to your attention that pings sourced from the FastEthernet0/0 interface of R1 for the group 224.9.9.9 are not reaching receivers on R9's VLAN79 segment. Using this group, you have been instructed to isolate and correct this issue.
Step 1 - Fault Verification:
Are pings from R1 successful for the group 224.9.9.9?
R1#ping 224.9.9.9 repeat 1000
Type escape sequence to abort.
Sending 1000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
.....................<output omitted>

The pings are not successful. This verifies that the problem actually exists.

Step 2 - Fault Isolation:
The next course of action is to verify that R4 is sending SA messages to R6.

R4#show ip msdp peer 192.1.6.6 advertised-SAs
MSDP SA advertised to peer 192.1.6.6 (?) from mroute table
224.9.9.9
172.16.15.1 (?)
MSDP SA advertised to peer 192.1.6.6 (?) from SA cache

The SA messages are being sent. Are they being accepted by the MSDP peer?

R6#show ip msdp peer 192.1.4.4 accepted-SAs
MSDP SA accepted from peer 192.1.4.4 (?)
224.9.9.9
172.16.15.1 (?) RP: 192.1.100.100
R6#

R6 is accepting the SA messages. Is the S,G pair being added to the multicast routing table for the group
(172.16.15.1, 224.9.9.9) on R6?
R6#show ip mroute 224.9.9.9
Group 224.9.9.9 not found

We know that the S,G entry will not be added unless we have a *,G entry for the particular group. This *,G is missing. Has R9 joined the group?

R9#show ip igmp interface Fa0/1
FastEthernet0/1 is up, line protocol is up
Internet address is 172.16.79.9/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
IGMP query interval is 60 seconds
IGMP querier timeout is 120 seconds
IGMP max query response time is 10 seconds
Last member query count is 2
Last member query response interval is 1000 ms
Inbound IGMP access group is not set
IGMP activity: 2 joins, 0 leaves
Multicast routing is enabled on interface
Multicast TTL threshold is 0
Multicast designated router (DR) is 172.16.79.9 (this system)
IGMP querying router is 172.16.79.7
Multicast groups joined by this system (number of users):
224.9.9.9(1) 224.0.1.40(1)

We see that R9 has joined the group 224.9.9.9; this IGMP version 2 join information should be sent to the IGMP querying router (172.16.79.7). This means that R7 should have the *,G entry:

R7#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group,
V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:12:07/00:03:12, RP 192.1.100.100, flags: SJC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Sparse, 00:12:07/00:03:12

Note that we see the *,G entry for the group. However, observe the RPF nbr of 0.0.0.0. This means the
router thinks it is the RP or that the RP address is invalid. What RPF interface will be used to check the
multicast traffic learned from this RP address?

R7#show ip rpf 192.1.100.100
RPF information for ? (192.1.100.100) failed, no route exists

This output tells us that no route exists. We need to take a closer look at the routing table of R7:

R7#show ip route 192.1.100.100
Routing entry for 192.1.100.0/24
Known via "connected", distance 0, metric 0 (connected, via interface)
Redistributing via eigrp 100
Routing Descriptor Blocks:
* directly connected, via Loopback100
Route metric is 0, traffic share count is 1

This tells us that the network 192.1.100.0/24 resides on this router on interface Loopback100, as
evidenced by show run:
R7#show run int loopback 100
Building configuration...
Current configuration : 69 bytes
!
interface Loopback100
ip address 192.1.100.100 255.255.255.0
end


This interface should not exist in the topology on this router. This isolates the fault.

Step 3 - Fault Remediation:
In this scenario, the no interface loopback100 command needs to be applied on R7 to eliminate the
configuration artifact from a previous lab.

R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#no interface loopback100
% Not all config may be removed and may reappear after reactivating the logicalinterface/sub-interfaces
R7(config)#end

Step 4 - Verification of Remediation


Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble Ticket has been repaired using the same method used to verify the fault initially.

R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 4 ms
Reply to request 1 from 172.16.79.9, 1 ms
Reply to request 2 from 172.16.79.9, 1 ms
Reply to request 3 from 172.16.79.9, 1 ms
Reply to request 4 from 172.16.79.9, 4 ms
Reply to request 5 from 172.16.79.9, 1 ms
Reply to request 6 from 172.16.79.9, 1 ms
Reply to request 7 from 172.16.79.9, 1 ms
Reply to request 8 from 172.16.79.9, 1 ms
Reply to request 9 from 172.16.79.9, 1 ms

Pings from R1 are now successful. The solution has successfully remediated the problem.


Chapter 12: Multiprotocol-BGP (MP-BGP) and Interdomain Multicast



In this chapter of IPv4/6 Multicast Operation and Troubleshooting, the processes and functionality of
the Multiprotocol-BGP (MP-BGP) protocol are examined in great depth. Once the operational
characteristics of this important protocol are detailed completely, the focus becomes that of
troubleshooting. This includes the careful examination of symptoms, a fault isolation methodology, and
the implementation of repairs for Multiprotocol-BGP. The chapter begins with a thorough review of Multiprotocol-BGP, and then quickly launches into an exhaustive analysis of the art of
troubleshooting this multicast support protocol. This important chapter concludes with sample
troubleshooting scenarios, reference materials for the most important show and debug commands, and exciting challenges that allow readers to practice implementing the troubleshooting skills they have obtained.


Multiprotocol-BGP Technology Review


Multiprotocol-BGP (MP-BGP) is an important technology that enables multicast routing policy
throughout the Internet. These enhancements to Border Gateway Protocol (BGP) connect multicast
topologies within and between BGP autonomous systems.
Multiprotocol-BGP is actually quite simple. By extending the Network Layer Reachability Information
(NLRI) that BGP can carry, the protocol is capable of carrying IP multicast routes. With MP-BGP, BGP
ends up carrying two sets of routes, one set for unicast routing and one set for multicast routing.
Protocol Independent Multicast (PIM) uses the routes associated with multicast routing to build its
multicast data distribution trees as described in previous chapters of this book. Multiprotocol-BGP
allows you to have a unicast routing topology different from a multicast routing topology, thus providing more control over the network and its resources. A minimal configuration sketch follows.
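As a minimal sketch of what this looks like in configuration (borrowing the AS number and neighbor addressing used later in this chapter, and showing only the address-family activation rather than a complete BGP configuration):

router bgp 154
 neighbor 172.16.45.5 remote-as 154
 !
 address-family ipv4
  neighbor 172.16.45.5 activate
 exit-address-family
 !
 ! The same neighbor, activated under the multicast AFI, exchanges a
 ! second set of NLRI that PIM will consult for RPF checks
 address-family ipv4 multicast
  neighbor 172.16.45.5 activate
 exit-address-family

The unicast AFI continues to drive packet forwarding; the multicast AFI only influences which routes the RPF checks consult.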


The Operation and Troubleshooting of MP-BGP


MP-BGP provides many advantages over BGP because it provides a distinction between multicast and
unicast-only networks. MP-BGP allows you to advertise which networks in the topology are multicast
capable. Additionally, MP-BGP allows us to deploy multicast capabilities between multiple isolated
domains. In this working explanation, a domain is identified as a group of devices under a single
administrative control. A perfect example of this would be two or more ISPs. In the situation where
multiple ISPs have a need to provide inter-domain multicast services, each ISP will need to utilize the
following protocols:

Multi-Protocol Border Gateway Protocol (MP-BGP) - for interdomain routing.


Multicast Source Discovery Protocol (MSDP) - for interdomain source discovery.

In this situation both MP-BGP and MSDP will connect the individual PIM-SM domains. MP-BGP in this
scenario provides a unique capability. MP-BGP serves as a policy-based inter-domain routing protocol.
This means that the MP-BGP protocol will be used for choosing best paths through an IP internetwork.
As discussed in Chapter 10: MSDP, MSDP is the protocol that enables RPs from different domains to
exchange information about active sources.
Having spent an entire chapter covering the operation and troubleshooting of MSDP, we will not go into any detail regarding the capabilities or deployment of that protocol here. However, MP-BGP is new to us
and will need to be discussed to obtain a better understanding of what capabilities it brings to the inter-
domain multicast topology that we will be referencing in Figure 12-2.

Figure 12-2: MP-BGP Lab Topology


MP-BGP
The biggest advantage that MP-BGP brings to an inter-domain multicast deployment is the ability for a
Service Provider to selectively determine what prefixes they will use for RPF checks in the multicast
environment. We have discussed in many of the previous chapters the importance of the RPF
mechanism that devices use to create multicast forwarding trees in a loop free and efficient manner.
This process determines what trees will be used to successfully route multicast packets between sources
and receivers.

MP-BGP defines Multiprotocol Extensions for BGP version 4. As such, it is an extension of the BGP protocol itself that defines all the administrative mechanisms necessary to allow service providers and customers to independently manipulate their inter-domain routing environment. MP-BGP allows tools traditionally used in unicast BGP to now be employed for the benefit of inter-domain multicast needs. These tools include inter-AS mechanisms that filter and control routing, such as route-maps. This means that any network running iBGP or eBGP can now use MP-BGP to apply multiple policy control mechanisms traditionally used in BGP to specify routing and forwarding policies for multicast.

The two primary attributes that we will discuss in this chapter are MP_REACH_NLRI and MP_UNREACH_NLRI. These two attributes were introduced in BGP version 4 to create a simple way for the protocol to carry two categories of routing information: unicast routing and multicast routing. This means that the routes communicated via MP-BGP that are associated with multicast routing are the routes used for RPF checking at the inter-domain borders.

This extension to the traditional BGP version 4 protocol has one huge advantage. MP-BGP allows an
inter-domain network to support non-congruent unicast and multicast topologies. However, almost as importantly, when the unicast and multicast topologies are congruent, MP-BGP can support different policies for each (see the sketch below). This provides unparalleled policy-based inter-domain scalability for our multicast and unicast routing protocols.
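For illustration only, here is a hedged sketch of per-address-family policy; the route-map and prefix-list names are hypothetical and not part of this chapter's initial configurations. The point is simply that each AFI can carry its own inbound policy toward the same neighbor:

ip prefix-list MCAST-SOURCES permit 172.16.15.0/24
!
route-map RPF-ONLY permit 10
 match ip address prefix-list MCAST-SOURCES
!
router bgp 154
 address-family ipv4
  ! unicast NLRI from this neighbor is accepted unfiltered
  neighbor 172.16.46.6 activate
 exit-address-family
 !
 address-family ipv4 multicast
  ! multicast NLRI (consulted for RPF checks) is limited to known source ranges
  neighbor 172.16.46.6 activate
  neighbor 172.16.46.6 route-map RPF-ONLY in
 exit-address-family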

The operational deployment of MP-BGP for the purpose of allowing inter-domain multicast functionality
follows a simple five-step approach. This approach provides not only an organized method for deploying the protocol, but also a controlled and optimized technique for isolating and troubleshooting issues when MP-BGP is used in unison with MSDP.

Step 1 - Configure MP-BGP to exchange unicast and multicast routing information

This process means that we will need to configure all BGP-speaking devices in the topology to utilize the MP-BGP multicast extensions. Specifically, we will need to use the multicast Address Family Identifier:


R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#router bgp 154
R1(config-router)#address-family ipv4 multicast
R1(config-router-af)#neighbor 172.16.15.5 activate
R1(config-router-af)#end
R5#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#router bgp 154
R5(config-router)#address-family ipv4 multicast
R5(config-router-af)#neighbor 172.16.15.1 activate
R5(config-router-af)#neighbor 172.16.45.4 activate
R5(config-router-af)#neighbor 172.16.15.1 next-hop-self
R5(config-router-af)#neighbor 172.16.45.4 next-hop-self
R5(config-router-af)#neighbor 172.16.15.1 route-reflector-client
R5(config-router-af)#neighbor 172.16.45.4 route-reflector-client
R5(config-router-af)#end


R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#router bgp 154
R4(config-router)# address-family ipv4
R4(config-router-af)# redistribute ospf 1
R4(config-router-af)#exit
R4(config-router)#address-family ipv4 multicast
R4(config-router-af)#neighbor 172.16.45.5 activate
R4(config-router-af)#neighbor 172.16.45.5 next-hop-self
R4(config-router-af)#redistribute ospf 1
R4(config-router-af)#neighbor 172.16.46.6 activate
R4(config-router-af)#end

Observe that R4 in our topology is the boundary device between AS154 and AS679. Additionally, this
device is the RP for the AS154 domain. This means that R4 will require NLRI for the other RP located in
AS679. To accomplish this we will redistribute OSPF 1 into both the ipv4 unicast and multicast address
families. This will propagate the reachability information from one AS to the other. Now we will repeat
this process in AS679.
R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#router bgp 679
R6(config-router)# address-family ipv4
R6(config-router-af)#redistribute eigrp 100
R6(config-router-af)#exit
R6(config-router)#address-family ipv4 multicast
R6(config-router-af)#neighbor 172.16.67.7 activate


R6(config-router-af)#neighbor 172.16.67.7 next-hop-self
R6(config-router-af)#neighbor 172.16.46.4 activate
R6(config-router-af)#redistribute eigrp 100
R6(config-router-af)#end
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#router bgp 679
R7(config-router)#address-family ipv4 multicast
R7(config-router-af)#neighbor 172.16.67.6 activate
R7(config-router-af)#neighbor 172.16.67.6 next-hop-self
R7(config-router-af)#neighbor 172.16.67.6 route-reflector-client
R7(config-router-af)#neigh 172.16.79.9 activate
R7(config-router-af)#neigh 172.16.79.9 next-hop-self
R7(config-router-af)#neigh 172.16.79.9 route-reflector-client
R7(config-router-af)#end
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#router bgp 679
R9(config-router)#address-family ipv4 multicast
R9(config-router-af)#neighbor 172.16.79.7 activate
R9(config-router-af)#end


Once MP-BGP has been enabled throughout the domain, we need to verify that information is being exchanged for both AFIs. This is done by using show ip bgp ipv4 for each AFI:

R4#show ip bgp ipv4 multicast regexp ^679_
BGP table version is 11, local router ID is 192.1.100.100
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 172.16.79.0/24   172.16.46.6          30720             0 679 ?
*> 192.1.6.0        172.16.46.6              0             0 679 ?
*> 192.1.7.0        172.16.46.6         156160             0 679 ?
*> 192.1.9.0        172.16.46.6         158720             0 679 ?

We clearly see that R4 is learning about all the addresses in AS679. Now we need to see if R6 is learning all four of the prefixes from AS154:
R6#show ip bgp ipv4 multicast regexp ^154_
BGP table version is 9, local router ID is 192.1.6.6
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 172.16.24.0/24   172.16.46.4              0             0 154 ?
*> 172.16.45.0/24   172.16.46.4              0             0 154 ?
*> 172.16.46.0/24   172.16.46.4              0             0 154 ?
*> 192.1.4.0        172.16.46.4              0             0 154 ?

We see that the multicast AFI is working correctly, but we need to keep in mind that the multicast AFI is used for RPF checks, not for reachability and routing; that is the role of the unicast AFI. To verify that we have reachability between the two domains, we will perform tests at each of the most distant ends. First, we will use show ip bgp ipv4 unicast, and then we will use traceroute:
R1#show ip bgp ipv4 unicast regexp ^679_
BGP table version is 21, local router ID is 192.1.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
   Network          Next Hop            Metric LocPrf Weight Path
*>i172.16.26.0/24   172.16.45.4              0    100      0 679 ?
*>i172.16.67.0/24   172.16.45.4              0    100      0 679 ?
*>i172.16.79.0/24   172.16.45.4          30720    100      0 679 ?
*>i192.1.6.0        172.16.45.4              0    100      0 679 ?
*>i192.1.7.0        172.16.45.4         156160    100      0 679 ?
*>i192.1.9.0        172.16.45.4         158720    100      0 679 ?

R1#traceroute 192.1.9.9 source loopback 0


Type escape sequence to abort.
Tracing the route to 192.1.9.9

  1 172.16.15.5 0 msec 0 msec 0 msec
  2 172.16.45.4 0 msec 0 msec 4 msec
  3 172.16.46.6 28 msec 24 msec 28 msec
  4 172.16.67.7 [AS 679] 28 msec 28 msec 28 msec
  5 172.16.79.9 [AS 679] 28 msec * 28 msec

It is clear that the unicast AFI has learned all the information needed for reachability and forwarding. However, to be thorough, we will repeat this test from R9:
R9#show ip bgp ipv4 unicast regexp ^154_
BGP table version is 22, local router ID is 192.1.9.9
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
   Network          Next Hop            Metric LocPrf Weight Path
*>i172.16.15.0/24   172.16.67.6              2    100      0 154 ?
*>i172.16.24.0/24   172.16.67.6              0    100      0 154 ?
*>i172.16.45.0/24   172.16.67.6              0    100      0 154 ?
*>i192.1.1.0        172.16.67.6              3    100      0 154 ?
*>i192.1.4.0        172.16.67.6              0    100      0 154 ?
*>i192.1.5.0        172.16.67.6              2    100      0 154 ?

R9#traceroute 192.1.1.1 source loopback0


Type escape sequence to abort.
Tracing the route to 192.1.1.1

  1 172.16.79.7 0 msec 0 msec 4 msec
  2 172.16.67.6 0 msec 0 msec 0 msec
  3 172.16.46.4 28 msec 28 msec 28 msec
  4 172.16.45.5 [AS 154] 32 msec 28 msec 28 msec
  5 172.16.15.1 [AS 154] 28 msec * 28 msec

Step 2 - Configure MSDP peering sessions


Configure peering sessions from the local RP to the RP in another AS using the commands referenced in
the Technology Review section of this chapter. Since we have a BGP peering session with this MSDP
peer, it is necessary to use the IP address used to form the eBGP peering relationship for MSDP as well.
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#ip msdp peer 172.16.46.6 remote-as 679
R4(config)#
R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#ip msdp peer 172.16.46.4 remote-as 154
R6(config)#
%MSDP-5-PEER_UPDOWN: Session to peer 172.16.46.4 going up
R6(config)#end

This configuration can easily be verified using the show ip msdp peer command:
R4#show ip msdp peer
MSDP Peer 172.16.46.6 (?), AS 679 (configured AS)
Connection status:
State: Up, Resets: 0, Connection source: none configured
Uptime(Downtime): 00:01:39, Messages sent/received: 2/2
Output messages discarded: 0
Connection and counters cleared 00:02:39 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none


Output (S,G) filter: none, route-map: none


Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled

Now on the other RP:


R6#show ip msdp peer
MSDP Peer 172.16.46.4 (?), AS 154 (configured AS)
Connection status:
State: Up, Resets: 0, Connection source: none configured
Uptime(Downtime): 00:02:10, Messages sent/received: 2/3
Output messages discarded: 0
Connection and counters cleared 00:02:43 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled

Observe that the connection status is Up.


Step 3 (Optional) - Configure recommended SA filters
By default, all SA messages that are received will be forwarded to an MSDP peer. There is, however, an access-list argument. This argument is an extended access list that describes the particular S,G pairs that are allowed to pass through the filter. Additionally, a route-map map-name keyword and argument can be specified that allow filtering based on the route-map's match criteria. If these criteria are true, a permit from the route map allows routes to pass through the filter; a deny statement will filter routes. If both keywords are used, all conditions must be true to pass or filter any specific S,G pairs in outgoing SA messages. If no keyword is specified, all S,G pairs will be filtered. A configuration sketch follows.
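As a minimal sketch only; the access-list number and the source/group ranges here are illustrative rather than taken from this chapter's initial configurations. The extended ACL permits sources in 172.16.15.0/24 sending to groups in 224.9.0.0/16, and the filter is applied outbound toward the peer:

access-list 124 permit ip 172.16.15.0 0.0.0.255 224.9.0.0 0.0.255.255
!
ip msdp sa-filter out 172.16.46.6 list 124

The sa-filter in form applies the same logic to SAs received from the peer, which is exactly the mechanism we will see misapplied in the first troubleshooting scenario later in this chapter.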


Step 4 - Verify that MSDP peers are working properly


The simplest method suggested to discover if MSDP peers are operating as anticipated is to use the show ip msdp peer command used previously. This command tells us if the sessions are up, but another useful command is show ip msdp sa-cache:
R6#show ip msdp sa-cache
MSDP Source-Active Cache - 1 entries
(172.16.15.1, 224.9.9.9), RP 192.1.4.4, MBGP/AS 154, 00:00:06/00:05:53, Peer
172.16.46.4

Step 5 (Optional) - Configure multicast borders appropriately



When deploying inter-domain multicast solutions, it is considered best practice to bound, or administratively scope, multicast control-plane traffic to its respective domain. This is typically accomplished using one of two configuration commands (a combined sketch follows the list):

ip multicast boundary [access-list] - This command configures an administratively scoped
boundary on the interface for multicast group addresses in the range defined by an access-list
argument. No multicast data packets flow across this type of boundary from either direction,
allowing reuse of the same multicast group address in different administrative domains.
ip pim bsr-border - This command configures an interface to be the PIM domain border.
Bootstrap messages will not be able to pass through this border in either direction. The PIM
domain border effectively partitions the network into regions using different RPs that are
configured using the bootstrap router feature. No other PIM messages are dropped by this
domain border setup. [Please also note that this command does not set up any multicast
boundaries].
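Here is a minimal interface-level sketch combining both commands; the interface and access-list number are illustrative rather than taken from this chapter's topology:

access-list 10 deny   239.0.0.0 0.255.255.255
access-list 10 permit 224.0.0.0 15.255.255.255
!
interface FastEthernet0/0
 ip multicast boundary 10
 ip pim bsr-border

The access list keeps administratively scoped (239.0.0.0/8) traffic inside the domain while permitting other multicast groups, and ip pim bsr-border stops bootstrap messages at the same edge.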



Common Issues with MP-BGP


MP-BGP is a moderately complex protocol to troubleshoot. MP-BGP uses the extensions that it adds to
BGP version 4 to accomplish its duties. Specifically, in relation to an inter-domain multicast deployment
MP-BGP is not used for the forwarding of multicast packets. This is still accomplished via the PIM
protocol, but the decision regarding what links to use for RPF checks is where MP-BGP offers its most
significant advantages. To simplify troubleshooting common issues while deploying MP-BGP, we identify
three categories of problems: Peer Rejects all MSDP SA Messages, Failure to Advertise the MSDP Peer
Network, and Using Incorrect Addresses to form MSDP Peers.
Peer Rejects All MSDP SA Messages
In the event that a peer rejects all MSDP SA messages that are being advertised to it, the most likely cause will be SA message filtering or an MP-BGP issue. In environments with more than two MSDP peers, this situation most notably comes in the form of the ipv4 multicast AFI not being correctly configured, resulting in an RPF failure when the peer MSDP-enabled RP attempts to assemble the tree back toward the source. Unable to isolate an RPF interface, the RP will drop all SA messages as they arrive.
Failure to Advertise the MSDP Peer Network
In order to configure an operational environment for inter-domain multicast, the MSDP peers must have reachability between them. This necessitates the exchange of routing information for each of the respective peering networks. This can be accomplished via redistribution or by advertising the routes into MP-BGP. These specific prefixes need to be found in both the unicast and multicast AFI sections of the MP-BGP routing tables.
Using Incorrect Addresses to form MSDP Peers
In the situation where MSDP peers are configured via information shared between MP-BGP-enabled autonomous systems, the address used for the MSDP peer must be the same address used to form the MP-BGP peering relationship. Failure to do this will prevent the MSDP peering session from forming and thus will not allow the inter-domain multicast environment to be created. A short sketch of a correct pairing follows.
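A minimal sketch, using the R4/R6 addressing seen later in this chapter, of keeping the two peerings aligned:

! R4: the eBGP session to AS679 rides on the directly connected
! 172.16.46.0/24 link, so MSDP must peer on those same addresses
router bgp 154
 neighbor 172.16.46.6 remote-as 679
!
ip msdp peer 172.16.46.6 remote-as 679

Peering MSDP to a Loopback0 address while the eBGP session uses the physical link addresses is precisely the fault dissected in the third scenario below.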


MP-BGP Sample Troubleshooting Scenarios


This section provides a detailed look at how to best approach troubleshooting some of the common
issues discussed in previous sections. It includes coverage of a methodology for identification, isolation,
and remediation of faults in the MP-BGP inter-domain multicasting operational process. The intent here
is to hone and develop troubleshooting skills tailored to first identify if a problem is MP-BGP related, and
then how to begin isolating the cause of the fault in the most efficient manner possible. Figure 12-3
illustrates the topology used to explore this topic.

Figure 12-3: A Sample MP-BGP Topology

In the Common Issues with MP-BGP section, three primary types of problems were identified: Peer
Rejects all MSDP SA Messages, Failure to Advertise the MSDP Peer Network, and Using the Incorrect
Address to Form MSDP Peers. This section explores these three categories of failure by directing our attention to the commands necessary to identify that a problem exists.
Peer Rejects all MSDP SA Messages
As explained, this is a situation where an MSDP peer in another autonomous system refuses to accept MSDP SA messages. This situation can be verified by generating a ping to a test multicast address to see if SA messages are sent to a peer. We will do this by starting the ping from R1:
R1#ping 224.9.9.9 repeat 10000
Type escape sequence to abort.
Sending 10000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
................ <output omitted>

Now we will check to see if R4 is sending an SA message toward its MSDP peer:
R4#show ip msdp peer 172.16.46.6 advertised-SAs


MSDP SA advertised to peer 172.16.46.6 (?) from mroute table
224.9.9.9
172.16.15.1 (?)
MSDP SA advertised to peer 172.16.46.6 (?) from SA cache

This output indicates that R4 is sending the SA messages; we next need to verify if they arrive at R6:
R6#show ip msdp peer 172.16.46.4 accepted-SAs
MSDP SA accepted from peer 172.16.46.4 (?)
R6#

R6 is not accepting any of the SA messages from R4. The first method employed to ascertain why will be to look at the MSDP peering arrangement between R4 and R6. This is done with the show ip msdp peer command:
R4#show ip msdp peer
MSDP Peer 172.16.46.6 (?), AS 679 (configured AS)
Connection status:
State: Up, Resets: 2, Connection source: none configured
Uptime(Downtime): 00:26:09, Messages sent/received: 31/27
Output messages discarded: 0
Connection and counters cleared 01:02:39 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled
R6#show ip msdp peer
MSDP Peer 172.16.46.4 (?), AS 154 (configured AS)
Connection status:
State: Up, Resets: 0, Connection source: none configured
Uptime(Downtime): 00:26:55, Messages sent/received: 27/31
Output messages discarded: 0
Connection and counters cleared 00:27:48 ago
SA Filtering:
Input (S,G) filter: everything, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:


Input filter: none


Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled

The output on R6 indicates that an input S,G filter has been applied that blocks "everything". Removing this filter will correct the issue we have encountered.
R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#no ip msdp sa-filter in 172.16.46.4
R6(config)#end

Now pings sourced from R1 to group 224.9.9.9 should be successful:


R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 56 ms
Reply to request 1 from 172.16.79.9, 56 ms
Reply to request 2 from 172.16.79.9, 56 ms
Reply to request 3 from 172.16.79.9, 56 ms
Reply to request 4 from 172.16.79.9, 56 ms
Reply to request 5 from 172.16.79.9, 56 ms
Reply to request 6 from 172.16.79.9, 56 ms
Reply to request 7 from 172.16.79.9, 56 ms
Reply to request 8 from 172.16.79.9, 56 ms
Reply to request 9 from 172.16.79.9, 56 ms

Failure to Advertise the MSDP Peer Network


This is caused when the network used to establish MSDP peering is not advertised into MP-BGP, which results in an inability of the MSDP peers to establish peering. This can be caused by a number of issues, but most commonly it is caused by failing to have the NLRI information needed to select an RPF interface. This situation can be verified by generating a ping to a test multicast address to see if SA messages are sent to a peer. We will do this by starting the ping from R1:
R1#ping 224.9.9.9 repeat 10000
Type escape sequence to abort.
Sending 10000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
................ <output omitted>


Now we will check to see if R4 is sending an SA message toward its MSDP peer:
R4#show ip msdp peer 172.16.46.6 advertised-SAs
MSDP SA advertised to peer 172.16.46.6 (?) from mroute table
224.9.9.9
172.16.15.1 (?)
MSDP SA advertised to peer 172.16.46.6 (?) from SA cache

This output indicates that R4 is sending the SA messages; we next need to verify if they arrive at R6:
R6#show ip msdp peer 172.16.46.4 accepted-SAs
MSDP SA accepted from peer 172.16.46.4 (?)
224.9.9.9
172.16.15.1 (?) RP: 192.1.4.4

R6 is accepting the SA messages. The next step is to determine if the S,G pair for the group we see in the
SA message is added to the multicast routing table:
R6#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:31:48/00:03:08, RP 192.1.6.6, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:31:48/00:03:08

This output tells us that the S,G entry is not added. Knowing that we have the *,G entry for the group, the S,G should have been added. Perhaps there is a filter applied on either R4 or R6:
R4#show ip msdp peer
MSDP Peer 172.16.46.6 (?), AS 679 (configured AS)
Connection status:
State: Up, Resets: 2, Connection source: none configured
Uptime(Downtime): 00:46:57, Messages sent/received: 54/47
Output messages discarded: 0
Connection and counters cleared 01:23:27 ago


SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled
R6#show ip msdp peer
MSDP Peer 172.16.46.4 (?), AS 154 (configured AS)
Connection status:
State: Up, Resets: 0, Connection source: none configured
Uptime(Downtime): 00:47:18, Messages sent/received: 48/54
Output messages discarded: 0
Connection and counters cleared 00:48:11 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 1
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled

There are no filters applied, and the MSDP peering between the RPs is working as expected. The issue is definitely looking like an RPF failure. Knowing that the group 224.9.9.9 is being sourced from 172.16.15.1, we can verify the RPF status by using the show ip rpf command:
R6#show ip rpf 172.16.15.1
RPF information for ? (172.16.15.1) failed, no route exists

There is no RPF interface toward this source address. We know that we should be learning this prefix via MP-BGP, so we will look at the routing tables for the AFIs we have running in this topology. First, we know that RPF information is exchanged using the ipv4 multicast AFI, so we will look there first:
R6#show ip bgp ipv4 multicast
BGP table version is 39, local router ID is 192.1.6.6
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 172.16.79.0/24   172.16.67.7          30720         32768 ?
*> 192.1.6.0        0.0.0.0                  0         32768 ?
*> 192.1.7.0        172.16.67.7         156160         32768 ?
*> 192.1.9.0        172.16.67.7         158720         32768 ?

We only see the addresses originating in our own AS, as indicated by the weight attribute of 32768. We know these routes are redistributed into the MP-BGP process because of the incomplete (?) origin code. What about the unicast AFI?
R6#show ip bgp ipv4 unicast
R6#

There are no routes at all in the ipv4 unicast AFI. Are any prefixes being advertised to us from R4?
R4#show ip bgp ipv4 unicast neighbors 172.16.46.6 advertised-routes
Total number of prefixes 0
R4#show ip bgp ipv4 multicast neighbors 172.16.46.6 advertised-routes
Total number of prefixes 0

Nothing is being advertised from R4 for either of the AFIs. We can look at the BGP configuration via the
show run command:
R4#show run | sec bgp
router bgp 154
bgp log-neighbor-changes
neighbor 172.16.45.5 remote-as 154
neighbor 172.16.46.6 remote-as 679
!
address-family ipv4
neighbor 172.16.45.5 activate
neighbor 172.16.45.5 next-hop-self
neighbor 172.16.46.6 activate
no auto-summary
no synchronization
exit-address-family
!
address-family ipv4 multicast
redistribute ospf 1 match internal
neighbor 172.16.45.5 activate
neighbor 172.16.45.5 next-hop-self
neighbor 172.16.46.6 activate


no auto-summary
no synchronization
exit-address-family

We see that OSPF routes are not being redistributed into the unicast AFI. This can be corrected via:
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#router bgp 154
R4(config-router)#address-family ipv4 unicast
R4(config-router-af)#redistribute ospf 1
R4(config-router-af)#end

Are pings successful on R1 now?


R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 56 ms
Reply to request 1 from 172.16.79.9, 56 ms
Reply to request 2 from 172.16.79.9, 56 ms
Reply to request 3 from 172.16.79.9, 56 ms
Reply to request 4 from 172.16.79.9, 56 ms
Reply to request 5 from 172.16.79.9, 56 ms
Reply to request 6 from 172.16.79.9, 56 ms
Reply to request 7 from 172.16.79.9, 56 ms
Reply to request 8 from 172.16.79.9, 56 ms
Reply to request 9 from 172.16.79.9, 56 ms

Incorrect Address Used to form MSDP Peers


This scenario describes a situation where the MSDP peering relationship never forms. In our working environment, this peering should take place between R4 and R6. When this mechanism fails, it is shown in the output of show ip msdp peer:
R4#show ip msdp peer
MSDP Peer 192.1.6.6 (?), AS 679 (configured AS)
Connection status:
State: Down, Resets: 0, Connection source: none configured
Uptime(Downtime): 00:01:36, Messages sent/received: 0/0
Output messages discarded: 0
Connection and counters cleared 00:01:36 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none


SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled
R6#show ip msdp peer
MSDP Peer 192.1.4.4 (?), AS 154 (configured AS)
Connection status:
State: Down, Resets: 0, Connection source: none configured
Uptime(Downtime): 00:03:52, Messages sent/received: 0/0
Output messages discarded: 0
Connection and counters cleared 00:03:52 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled

This output informs us that the MSDP peering is down. We see that authentication is not configured, that the MSDP peering addresses used were the Loopback0 interfaces, and that no connection source was specified. This might initially look like our problem, but closer examination of the output tells us that we are peering with a remote AS. Knowing that MSDP must use the same IP address used to form the MP-BGP peering, we will need to change this configuration. What address was used to form the MP-BGP peering session between R4 and R6?
R4#show ip bgp ipv4 unicast summary
BGP router identifier 192.1.4.4, local AS number 154
BGP table version is 8, main routing table version 8
7 network entries using 840 bytes of memory
7 path entries using 364 bytes of memory
8/3 BGP path/bestpath attribute entries using 992 bytes of memory
1 BGP AS-PATH entries using 24 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
Bitfield cache entries: current 2 (at peak 2) using 64 bytes of memory
BGP using 2284 total bytes of memory
BGP activity 73/55 prefixes, 83/65 paths, scan interval 60 secs
Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
172.16.45.5     4   154     193     188        8    0    0 00:18:57        0
172.16.46.6     4   679     144     164        8    0    0 00:18:57        0

The Frame Relay point-to-point interface addresses were used, so we will need to use those addresses for the MSDP peering:
R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#no ip msdp peer 192.1.6.6 remote-as 679
R4(config)#ip msdp peer 172.16.46.6 remote-as 679
R4(config)#end
R6#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#no ip msdp peer 192.1.4.4 remote-as 154
R6(config)#ip msdp peer 172.16.46.4 remote-as 154
R6(config)#end
R6#
%MSDP-5-PEER_UPDOWN: Session to peer 172.16.46.4 going up

The console messages tell us the MSDP session has been formed. As a final test, are pings from R1 successful?
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 76 ms
Reply to request 1 from 172.16.79.9, 56 ms
Reply to request 2 from 172.16.79.9, 56 ms
Reply to request 3 from 172.16.79.9, 56 ms
Reply to request 4 from 172.16.79.9, 56 ms
Reply to request 5 from 172.16.79.9, 56 ms
Reply to request 6 from 172.16.79.9, 56 ms
Reply to request 7 from 172.16.79.9, 56 ms
Reply to request 8 from 172.16.79.9, 56 ms
Reply to request 9 from 172.16.79.9, 56 ms


MP-BGP Show Command Tools


As a quick reference, here are the show command tools utilized in this chapter. This section utilizes the
MP-BGP topology in Figure 12-4 for all example output.

Figure 12-4: A Sample MP-BGP Topology

show COMMAND:
show ip msdp peer ip_address
This command displays detailed information about Multicast Source Discovery Protocol (MSDP) peers.
Where:

ip_address - the IP address of the MSDP peer

EXAMPLE OUTPUT:
R6#show ip msdp peer 192.1.5.5
MSDP peer 192.1.5.5 not found
R6#show ip msdp peer 192.1.4.4
MSDP Peer 192.1.4.4 (?), AS ?
Connection status:
State: Up, Resets: 0, Connection source: Loopback0 (192.1.6.6)
Uptime(Downtime): 01:17:18, Messages sent/received: 77/86
Output messages discarded: 0
Connection and counters cleared 01:17:49 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:


Input filter: none


Peer ttl threshold: 0
SAs learned from this peer: 1
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled
Message counters:
RPF Failure count: 0
SA Messages in/out: 69/0
SA Requests in: 0
SA Responses out: 0
Data Packets in/out: 4/0
R6#


show COMMAND:
show ip msdp peer ip_address advertised-SAs
This command displays detailed information about Multicast Source Discovery Protocol (MSDP) peers.
Where:

ip_address - the IP address of the MSDP peer

EXAMPLE OUTPUT:
R4#show ip msdp peer 192.1.6.6 advertised-SAs
MSDP SA advertised to peer 192.1.6.6 (?) from mroute table
224.9.9.9
172.16.15.1 (?)
MSDP SA advertised to peer 192.1.6.6 (?) from SA cache
R4#

show COMMAND:
show ip bgp ipv4 multicast summary
This command displays IPv4 multicast database information.
EXAMPLE OUTPUT:
R4#show ip bgp ipv4 multicast summary
BGP router identifier 192.1.4.4, local AS number 154
BGP table version is 12, main routing table version 12
9 network entries using 1188 bytes of memory
9 path entries using 432 bytes of memory
7/6 BGP path/bestpath attribute entries using 1176 bytes of memory
1 BGP AS-PATH entries using 24 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
Bitfield cache entries: current 2 (at peak 2) using 64 bytes of memory

BGP using 2884 total bytes of memory
BGP activity 15/0 prefixes, 17/2 paths, scan interval 60 secs

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
172.16.45.5     4   154      14      13       12    0    0 00:03:40        0
172.16.46.6     4   679       7      12       12    0    0 00:01:31        3
R4#

show COMMAND:
show ip bgp ipv4 multicast
This command displays detailed information about IPv4 multicast prefixes learned via MP-BGP.
EXAMPLE OUTPUT:
R4#show ip bgp ipv4 multicast
BGP table version is 12, local router ID is 192.1.4.4
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 172.16.15.0/24   172.16.45.5              2         32768 ?
*> 172.16.45.0/24   0.0.0.0                  0         32768 ?
*> 172.16.46.0/24   0.0.0.0                  0         32768 ?
*> 172.16.79.0/24   172.16.46.6          30720             0 679 ?
*> 192.1.1.1/32     172.16.45.5              3         32768 ?
*> 192.1.4.0        0.0.0.0                  0         32768 ?
*> 192.1.5.5/32     172.16.45.5              2         32768 ?
*> 192.1.7.0        172.16.46.6         156160             0 679 ?
*> 192.1.9.0        172.16.46.6         158720             0 679 ?

MP-BGP Debug Command Tools


As a quick reference, here are the debug command tools utilized in this chapter. This section utilizes the
MP-BGP topology in Figure 12-5 for all example output.

Figure 12-5: A Sample MP-BGP Topology

debug COMMAND:
debug ip msdp detail
This command displays detailed information regarding MSDP operations.
EXAMPLE OUTPUT:
R4#debug ip msdp detail
MSDP Detail debugging is on
R4#
MSDP(0): Received 3-byte TCP segment from 192.1.6.6
MSDP(0): Append 3 bytes to 0-byte msg 92 from 192.1.6.6, qs 1
R4#
MSDP(0): Sent entire mroute table, mroute_cache_index = 0, Qlen = 0
MSDP(0): start_index = 0, sa_cache_index = 0, Qlen = 0
MSDP(0): Sent entire sa-cache, sa_cache_index = 0, Qlen = 0
R4#
MSDP(0): Received 3-byte TCP segment from 192.1.6.6
MSDP(0): Append 3 bytes to 0-byte msg 93 from 192.1.6.6, qs 1
R4#

debug COMMAND:
debug ip bgp ipv4 multicast
This command displays detailed information regarding bgp ipv4 multicast operations.

EXAMPLE OUTPUT:
R4#debug ip bgp ipv4 multicast
BGP debugging is on for address family: IPv4 Multicast
BGPNSF state: 172.16.46.6 went from nsf_not_active to nsf_not_active
BGP: 172.16.46.6 went from Established to Idle
%BGP-5-ADJCHANGE: neighbor 172.16.46.6 Down User reset
R4#
BGP: 172.16.46.6 closing
R4#
BGP: 172.16.46.6 went from Idle to Active
BGP: 172.16.46.6 open active, local address 172.16.46.4
BGP: 172.16.46.6 read request no-op
BGP: 172.16.46.6 went from Active to OpenSent
BGP: 172.16.46.6 sending OPEN, version 4, my as: 154, holdtime 180 seconds
BGP: 172.16.46.6 send message type 1, length (incl. header) 61
BGP: 172.16.46.6 rcv message type 1, length (excl. header) 42
BGP: 172.16.46.6 rcv OPEN, version 4, holdtime 180 seconds
BGP: 172.16.46.6 rcv OPEN w/ OPTION parameter len: 32
BGP: 172.16.46.6 rcvd OPEN w/ optional parameter type 2 (Capability) len 6
BGP: 172.16.46.6 OPEN has CAPABILITY code: 1, length 4
BGP: 172.16.46.6 OPEN has MP_EXT CAP for afi/safi: 1/1
BGP: 172.16.46.6 rcvd OPEN w/ optional parameter type 2 (Capability) len 6
BGP: 172.16.46.6 OPEN has CAPABILITY code: 1, length 4
BGP: 172.16.46.6 OPEN has MP_EXT CAP for afi/safi: 1/2
BGP: 172.16.46.6 rcvd OPEN w/ optional parameter type 2 (Capability) len 2
BGP: 172.16.46.6 OPEN has CAPABILITY code:
R4#128, length 0
BGP: 172.16.46.6 OPEN has ROUTE-REFRESH capability(old) for all address-families
BGP: 172.16.46.6 rcvd OPEN w/ optional parameter type 2 (Capability) len 2
BGP: 172.16.46.6 OPEN has CAPABILITY code: 2, length 0
BGP: 172.16.46.6 OPEN has ROUTE-REFRESH capability(new) for all address-families
BGP: 172.16.46.6 rcvd OPEN w/ optional parameter type 2 (Capability) len 6
BGP: 172.16.46.6 OPEN has CAPABILITY code: 65, length 4
BGP: 172.16.46.6 OPEN has 4-byte ASN CAP for: 679
BGP: 172.16.46.6 rcvd OPEN w/ remote AS 679, 4-byte remote AS 679
BGP: 172.16.46.6 went from OpenSent to OpenConfirm
BGP: 172.16.46.6 went from OpenConfirm to Established
%BGP-5-ADJCHANGE: neighbor 172.16.46.6 Up
R4#

Chapter Challenge: MP-BGP Sample Trouble Tickets


The following section includes two sample Trouble Tickets designed to challenge the troubleshooting
skills that have been developed in all previous sections of this chapter. These Trouble Tickets were
designed using the Routing & Switching rental racks at www.ProctorLabs.com with the initial
configurations provided in the file MCAST-CH12-MP-BGP-TT-INITIAL.txt. Keep in mind these sample
Trouble Tickets were also tested against home practice racks and the most popular router emulators.
The network topology used in this section is shown in Figure 12-6 below:

Figure 12-6: The Chapter Challenge Topology

Trouble Ticket #1
Your supervisor has brought to your attention that R4 and R6 are not successfully forming a MSDP
peering relationship. Correct this issue.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that multicast pings from R1 to the group
224.9.9.9 do not reach test clients on R9's VLAN79 segment. Correct this issue.

Chapter Challenge: MP-BGP Sample Trouble Tickets Solutions


The following section includes the solutions to the two Trouble Tickets presented in the previous
section.
Trouble Ticket #1 Solution
Your supervisor has brought to your attention that R4 and R6 are not successfully forming a MSDP
peering relationship. Correct this issue.
Step 1 - Fault Verification:
Has the peering relationship formed between R4 and R6?
R4#show ip msdp peer
MSDP Peer 172.16.46.6 (?), AS 679 (configured AS)
Connection status:
State: Down, Resets: 1, Connection source: none configured
Uptime(Downtime): 00:03:38, Messages sent/received: 0/0
Output messages discarded: 0
Connection and counters cleared 00:06:55 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: not enabled

The status of the peering relationship to 172.16.46.6 is Down. This verifies that the problem actually exists.

Step 2 - Fault Isolation:
Failures of MSDP peerings to form are caused by either misconfiguration or unicast routing failures. We will
rule out misconfiguration by comparing the output of the show ip msdp peer command on R6:

R6#sh ip msdp peer
MSDP Peer 172.16.46.4 (?), AS 154 (configured AS)
Connection status:
State: Listen, Resets: 0, Connection source: none configured
Uptime(Downtime): 00:06:10, Messages sent/received: 0/0
Output messages discarded: 0
Connection and counters cleared 00:06:10 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: enabled

The last line of the output on R6 shows that MD5 authentication is enabled. Authentication is not enabled
on R4; this isolates our problem.

Step 3 - Fault Remediation:
In this scenario, the ip msdp password command needs to be applied to R4:

R4#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R4(config)#ip msdp password peer 172.16.46.6 CISCO
R4(config)#end
R4#
%MSDP-5-PEER_UPDOWN: Session to peer 172.16.46.6 going up
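For completeness, the corresponding configuration presumably already present on R6 (which produced the one-sided MD5 protection we observed) would be the mirror image of this command:

R6(config)#ip msdp password peer 172.16.46.4 CISCO

Because MSDP runs over a TCP session, an MD5 password configured on only one peer prevents the session from establishing, exactly as observed in this Trouble Ticket.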

Step 4 - Verification of Remediation


Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially.

R4#show ip msdp peer
MSDP Peer 172.16.46.6 (?), AS 679 (configured AS)
Connection status:
State: Up, Resets: 1, Connection source: none configured
Uptime(Downtime): 00:00:36, Messages sent/received: 1/1
Output messages discarded: 0
Connection and counters cleared 00:12:50 ago
SA Filtering:
Input (S,G) filter: none, route-map: none
Input RP filter: none, route-map: none
Output (S,G) filter: none, route-map: none
Output RP filter: none, route-map: none
SA-Requests:
Input filter: none
Peer ttl threshold: 0
SAs learned from this peer: 0
Input queue size: 0, Output queue size: 0
MD5 signature protection on MSDP TCP connection: enabled

The MSDP peering session status is: Up. The solution has successfully remediated the problem.
Trouble Ticket #2 Solution
After solving Trouble Ticket #1, your supervisor has observed that multicast pings from R1 to the group
224.9.9.9 do not reach test clients on R9's VLAN79 segment. Correct this issue.
Step 1 - Fault Verification:
Are pings from R1 to 224.9.9.9 successful?
R1#ping 224.9.9.9 repeat 10000
Type escape sequence to abort.
Sending 10000, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
................. <output omitted>


The pings fail, thus proving the problem exists.

Step 2 - Fault Isolation:
The first step is to see if R4 generates SA messages for the group 224.9.9.9 and sends them to its peer.
R4#show ip msdp peer 172.16.46.6 advertised-SAs
MSDP SA advertised to peer 172.16.46.6 (?) from mroute table
224.9.9.9        172.16.15.1 (?)
MSDP SA advertised to peer 172.16.46.6 (?) from SA cache

R4 advertises the SA messages. Does R6 accept the messages?


R6#show ip msdp peer 172.16.46.4 accepted-SAs
MSDP SA accepted from peer 172.16.46.4 (?)
224.9.9.9        172.16.15.1 (?) RP: 192.1.4.4

R6 accepts the SA messages. Under normal circumstances R6 should add an (S, G) pair to its multicast
routing table for this group if there is a (*, G) entry learned from an interested receiver. This can be
verified with show ip mroute:
R6#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 01:55:40/00:02:58, RP 192.1.6.6, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 01:55:40/00:02:58

The (*, G) entry is in the table but not the (S, G) for the source 172.16.15.1. This could be a reachability
issue or an RPF issue. We will check the RPF status first.

R6#show ip rpf 172.16.15.1
RPF information for ? (172.16.15.1) failed, no route exists

This clearly tells us that we have an RPF issue. Remember that RPF checks in MP-BGP are performed
against the contents of the ipv4 multicast AFI. We can look at the contents of this table via show ip bgp
ipv4 multicast:

R6#show ip bgp ipv4 multicast
BGP table version is 53, local router ID is 192.1.6.6
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 172.16.79.0/24   172.16.67.7          30720         32768 ?
*> 192.1.6.0        0.0.0.0                  0         32768 ?
*> 192.1.7.0        172.16.67.7         156160         32768 ?
*> 192.1.9.0        172.16.67.7         158720         32768 ?

This output reflects that we only see prefixes redistributed (?) locally into BGP on R6 (based on the weight
value of 32768). Where are the routes learned from AS 154?
R4#show ip bgp ipv4 multicast neighbors 172.16.46.6 advertised-routes
Total number of prefixes 0

R4 is not advertising any IPv4 multicast prefixes; additionally, it is not advertising any IPv4 unicast routes
either:
R4#show ip bgp ipv4 unicast neighbors 172.16.46.6 advertised-routes
Total number of prefixes 0

A show run will reveal that R4 is not redistributing ospf information into the address-family ipv4 unicast:
R4#show run | sec bgp
router bgp 154
bgp log-neighbor-changes
neighbor 172.16.45.5 remote-as 154
neighbor 172.16.46.6 remote-as 679
!
address-family ipv4
neighbor 172.16.45.5 activate
neighbor 172.16.45.5 next-hop-self
neighbor 172.16.46.6 activate
no auto-summary
no synchronization
exit-address-family
!
address-family ipv4 multicast
redistribute ospf
neighbor 172.16.45.5 activate
neighbor 172.16.45.5 next-hop-self
neighbor 172.16.46.6 activate
no auto-summary
no synchronization
exit-address-family


Without unicast reachability this configuration cannot work. This has isolated our fault.

Step 3 - Fault Remediation:
In this scenario, the redistribute ospf 1 command needs to be applied under the ipv4 unicast AFI on R4:
R4(config)#router bgp 154
R4(config-router)#address-family ipv4 unicast
R4(config-router-af)#redistribute ospf 1
R4(config-router-af)#end

Step 4 - Verification of Remediation


Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially:
R1#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.79.9, 60 ms
Reply to request 1 from 172.16.79.9, 56 ms
Reply to request 2 from 172.16.79.9, 56 ms
Reply to request 3 from 172.16.79.9, 56 ms
Reply to request 4 from 172.16.79.9, 56 ms
Reply to request 5 from 172.16.79.9, 56 ms
Reply to request 6 from 172.16.79.9, 56 ms
Reply to request 7 from 172.16.79.9, 56 ms
Reply to request 8 from 172.16.79.9, 56 ms
Reply to request 9 from 172.16.79.9, 56 ms

Pings are now successful, thus verifying that the error has been corrected.

Chapter 13: Multicast Security and Advanced Features



This chapter of IPv4/6 Multicast Operation and Troubleshooting details the security features of
Multicast technologies in great depth. This chapter also covers several advanced multicast features. This
chapter includes the careful examination of symptoms, a fault isolation methodology, and the
implementation of repairs for these various features. The chapter begins with a thorough review of
these various features, and then quickly launches into an exhaustive analysis of the art of
troubleshooting. This important chapter concludes with sample troubleshooting scenarios and exciting
challenges that allow readers to practice implementing the troubleshooting skills they have obtained.


The Operation and Troubleshooting of Multicast Security and Advanced Features
Traditionally, the chapters of this book have dealt with the granular explanation of the operational
mechanisms and troubleshooting of protocols. In this chapter that paradigm is going to be altered
somewhat. The concept in this chapter is to introduce the topics of Multicast Security and a
number of advanced features. Many of these mechanisms are very straightforward, and most have been
used in previous sections to induce system faults meant to reflect improper application or incomplete
removal of previously operational filters or multicast limiting commands.
As in all other chapters in this text, we will utilize a single deployment environment to demonstrate the
operation and troubleshooting of the mechanisms outlined in this chapter. This topology, illustrated in
Figure 13-2, will allow us to deploy switch and router versions of different commands as well as
technologies that only work on Catalyst switches.

Figure 13-2: Basic Multicast Security and Advanced Features Topology

Using the topology outlined in Figure 13-2, this chapter will take an intuitive approach to describing the
operation and troubleshooting issues associated with the following multicast security topics:
Multicast Filtering on a Cisco Catalyst Switch
This section details the two primary tools used to filter multicast packets. In this instance we are using
the term multicast packets to describe control plane packets. This means we are looking at both IGMP
and PIM messages. However, most specifically this section will deal with IGMP. Using either a static
IGMP join filter or the more dynamic IGMP snooping protocol, certain multicast security processes can
be put into place. We will begin with filtering IGMP Join messages.


IGMP Join Filter on a Cisco Catalyst Switch


In situations where it is deemed undesirable to allow all hosts on a Layer 2 interface to join one or more
multicast groups, one option is to filter specific groups at specific interfaces. This is accomplished through
the application of what is known as an IGMP profile to a particular interface. It is important to note that
these IGMP filters can only be applied to a Layer 2 physical interface. This means that they cannot be
used on routed ports, SVIs, or any port participating in EtherChannel. Additionally, a single port can only
have one profile affixed to it. In our topology we will prevent R9 from joining the multicast group
224.9.9.9. First, we will have the FastEthernet0/0 interfaces of R8 and R9 join the multicast group 224.9.9.9:
R8#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R8(config)#interface FastEthernet0/0
R8(config-if)#ip igmp join-group 224.9.9.9
R8(config-if)#end

R9#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R9(config)#interface FastEthernet0/0
R9(config-if)#ip igmp join-group 224.9.9.9
R9(config-if)#end

We can clearly see that both devices have joined the multicast group 224.9.9.9 by looking at the output
of a ping generated from R5 to the group 224.9.9.9. Because this topology is running PIM-DM, we expect
to obtain echo replies from both R8 and R9:
R5#ping 224.9.9.9 r 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.200.8, 4 ms
Reply to request 0 from 172.16.200.9, 4 ms
Reply to request 1 from 172.16.200.9, 1 ms
Reply to request 1 from 172.16.200.8, 1 ms
Reply to request 2 from 172.16.200.9, 1 ms
Reply to request 2 from 172.16.200.8, 1 ms
<output omitted>


We see the results match what we expected. However, in the situation where we would not want R9 to
join the multicast group 224.9.9.9, we would apply an IGMP profile like this:

CAT2(config)#ip igmp profile 1
CAT2(config-igmp-profile)#deny
CAT2(config-igmp-profile)#range 224.9.9.9 224.9.9.9
CAT2(config-igmp-profile)#exit
CAT2(config)#interface FastEthernet0/9
CAT2(config-if)#ip igmp filter 1
CAT2(config-if)#end

The most common issue associated with IGMP filtering is a failure to create the profile, or the
misapplication of the ip igmp filter command. Issues associated with this process can best be isolated
using debug ip igmp filter on the switch where the process has been applied:
CAT2#show ip igmp filter
IGMP filter enabled
CAT2#
IGMPFILTER: igmp_filter_process_pkt() checking group from Fa0/7 : no profile attached
CAT2#
IGMPFILTER: igmp_filter_process_pkt(): checking group 224.9.9.9 from Fa0/9: deny
CAT2#
IGMPFILTER: igmp_filter_process_pkt() checking group from Fa0/8 : no profile attached

IGMP Snooping on a Cisco Catalyst Switch


IGMP Snooping is another Layer 2 function that helps manage and secure multicast traffic traversing
Catalyst switches. IGMP Snooping helps reconcile many of the issues associated with multicast traffic
on a switched network.
IGMP Snooping is a method that actually snoops or inspects IGMP traffic on a switch. When enabled, a
switch will watch for IGMP messages passed between a host and a router, and will add the necessary
ports to its multicast table, ensuring that only the ports that require a given multicast stream actually
receive it. IGMP Snooping suffers from one major drawback: it requires the switch to inspect all IGMP
traffic, on top of its other responsibilities. This inspection of the IGMP transmissions between the host
and the router permits the device to keep track of multicast groups and member ports. When the switch
receives an IGMP report from a host for a particular multicast group, the switch adds the host port
number to the forwarding table entry; when it receives an IGMP Leave Group message from a host, it
removes the host port from the table entry. It also periodically deletes entries if it does not receive
IGMP membership reports from the multicast clients.
On Catalyst 3560 switches this protocol is on by default, and utilizes a concept known as IGMP Report
Suppression. IGMP report suppression is used to forward only one IGMP report per multicast router query
to multicast devices. When IGMP report suppression is enabled (the default), the switch sends the first
IGMP report from all hosts for a group to all the multicast routers. The switch does not send the
remaining IGMP reports for the group to the multicast routers. This feature prevents duplicate reports

Copyright by IPexpert, Inc. All Rights Reserved.

13-4

IPv4/6 Multicast Operation and Troubleshooting

Chapter 13: Multicast Security and Advanced Features

from being sent to the multicast devices and in effect is filtering IGMP messages for the purpose of
bandwidth and processor conservation.
There are no commands required to activate this protocol, but it can be turned off via the no ip igmp snooping command.
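As a minimal sketch, assuming a Catalyst 3560 and an arbitrary VLAN number, the following illustrates how IGMP Snooping and its report suppression behavior might be toggled globally or per VLAN, and then verified:

CAT2(config)#no ip igmp snooping
CAT2(config)#ip igmp snooping
CAT2(config)#no ip igmp snooping vlan 100
CAT2(config)#no ip igmp snooping report-suppression
CAT2(config)#end
CAT2#show ip igmp snooping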
BSR-Border Filter
In situations where there are two or more multicast domains it is often desirable to bound or scope
multicast control plane protocols to their respective domains. This is accomplished with regard to BSR
environments by applying the ip pim bsr-border command at the multicast domain boundaries. When
this command is used on an interface, no PIM BSR messages will be allowed to enter or be sent out the
interface. This prevents BSR information from being exchanged between multicast interdomain
neighbors. Typically, this is to prevent the accidental election of an RP not found in a particular multicast
domain. The most common issue associated with the use of this command is incorrect placement. This
command can in effect split what should be an otherwise contiguous domain into two dysfunctional
domains.
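As a minimal example, assuming FastEthernet0/1 on R1 faced a neighboring multicast domain, the boundary would be applied as follows:

R1(config)#interface FastEthernet0/1
R1(config-if)#ip pim bsr-border
R1(config-if)#end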
PIM Neighbor Filter
Two primary factors will facilitate the need or desire to use a PIM neighbor filter. The first reason is for
security purposes. There are circumstances where all devices on a WAN or LAN are not under a single
administrative control. In order to manage what multicast trees are formed over the network it is
necessary to prevent any devices that you do not manage from being able to participate in your PIM
environment. This more often than not means that a network administrator will take all necessary steps
to ensure that no unmanaged device can become the DR for a given segment, and to prevent any undesirable
devices from discovering the identity of, or using, any RPs designated in our domain.
The second most common rationale for this feature is to create a multicast "stub routing environment."
In stub routing, all routers, even those we do not manage, are still able to take part in the
forwarding of multicast packets, but they must do so by exchanging IGMP Join and Leave packets with
designated devices in our managed domain. In this scenario, it is possible to conserve resources by
reducing the number of PIM neighbor states to track. Typically, multicast stub networks use PIM-DM
and therefore conserve resources on your RPs.
In our topology we create a multicast stub environment by preventing R1 from accepting PIM joins from R7
via the VLAN17 segment connecting them:
R1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#interface FastEthernet0/1
R1(config-if)#ip pim neighbor-filter 1
R1(config-if)#exit
R1(config)#access-list 1 deny 172.16.17.7
R1(config)#end

Keep in mind that this PIM relationship will have to expire before we see visible results:
R1#
%PIM-5-NBRCHG: neighbor 172.16.17.7 DOWN on interface FastEthernet0/1 DR
%PIM-5-DRCHG: DR change from neighbor 172.16.17.7 to 172.16.17.1 on interface
FastEthernet0/1

The issue now is that we no longer have a PIM neighbor relationship between R1 and R7.
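If verification is desired, the absence of the neighbor can presumably be confirmed with show ip pim neighbor on R1, where 172.16.17.7 should no longer be listed for FastEthernet0/1:

R1#show ip pim neighbor FastEthernet0/1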
Multicast Helper Address
There are situations where limitations in the network or applications running on the network
necessitate converting multicast traffic into broadcast traffic. Situations where all
devices on a segment may require the packets associated with a multicast stream, or where
applications need the packets but do not support multicast, lead to the deployment of the
ip multicast helper-map command. In this scenario, we will remove the PIM neighbor relationship between R1
and R7:
R1(config)#interface FastEthernet0/1
R1(config-if)#no ip pim dense-mode
R1(config-if)#end

R7#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R7(config)#interface FastEthernet0/1
R7(config-if)#no ip pim dense-mode
R7(config-if)#end

This deployment facilitates the conversion of multicast traffic to some method of transfer that will be
able to reach hosts across the now PIM-disabled connection. We will illustrate the issue by having R8 join
the multicast group 224.9.9.9 and then test from R5 using that group:
R8#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R8(config)#interface FastEthernet0/0
R8(config-if)#ip igmp join-group 224.9.9.9
R8(config-if)#end

R5#ping 224.9.9.9 repeat 10


Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........


The pings are unsuccessful. Nevertheless, we can utilize the ip multicast helper-map command to use
broadcast traffic to overcome this configuration issue.
R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface FastEthernet0/0
R1(config-if)# ip multicast helper-map 224.9.9.9 172.16.100.5 100
R1(config-if)#!
R1(config-if)#interface FastEthernet0/1
R1(config-if)# ip directed-broadcast
R1(config-if)#!
R1(config-if)#ip forward-protocol udp 5001
R1(config)#!
R1(config)#access-list 100 permit udp any any eq 5001

At this point, the modification to R1 is going to take any traffic sourced from the IP address 172.16.100.5
for the group 224.9.9.9 and translate it to UDP broadcast traffic destined to port 5001. The next portion
will be to go to R7 and translate it back to multicast so that it can be multicast forwarded the rest of the
way toward the hosts.
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#interface FastEthernet0/1
R7(config-if)# ip multicast helper-map broadcast 239.1.1.1 100
R7(config-if)#!
R7(config-if)#ip forward-protocol udp 5001
R7(config)#!
R7(config)#access-list 100 permit udp any any eq 5001


This configuration can now be tested on R5 by repeating the multicast ping.

R5#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.100.1, 4 ms
Reply to request 1 from 172.16.100.1, 1 ms
Reply to request 2 from 172.16.100.1, 1 ms
Reply to request 3 from 172.16.100.1, 1 ms
Reply to request 4 from 172.16.100.1, 1 ms
Reply to request 5 from 172.16.100.1, 1 ms
Reply to request 6 from 172.16.100.1, 1 ms
Reply to request 7 from 172.16.100.1, 1 ms
Reply to request 8 from 172.16.100.1, 1 ms
Reply to request 9 from 172.16.100.1, 1 ms

This configuration is working perfectly. In scenarios where the traffic can remain broadcast in nature, it is
not necessary to translate it back to multicast. Common issues impacting the deployment of this
solution include failure to enable directed broadcast, use of the wrong UDP port number, or incorrectly
configured access-lists.
Multicast Route Limiting
In certain situations, it is deemed necessary or important to restrict the number of routes that are added
to a multicast routing table. The use of the command ip multicast route-limit accomplishes the
establishment of this upper threshold. Note that this command will generate messages that continue
to occur so long as the configured limit is being exceeded. We will configure R1 such that it will attempt to
limit the number of routes in the multicast routing table to 2:
R1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#ip multicast route-limit 2

The most common issue that causes problems in the deployment of this configuration is the failure to
take into account multicast groups like 224.0.1.40 that all routers will join. However, the router will send
messages explaining that an issue exists so long as the route-limit is exceeded.
R1#
%MROUTE-4-ROUTELIMIT_ATTEMPT: Attempt to exceed multicast route-limit of 2 -Process=
"IGMP Input", ipl= 0, pid= 250

This fact simplifies troubleshooting issues generated by this type of scenario.
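Note that the command also accepts an optional threshold value that begins generating warnings before the hard limit is reached. As a hedged example (the values here are arbitrary, not from the chapter topology), the following would permit up to 200 mroutes while warning once 150 entries exist:

R1(config)#ip multicast route-limit 200 150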


Multicast Rate Limiting
Sometimes large volumes of multicast traffic can become an issue for bandwidth-challenged connections
between devices. The command ip multicast rate-limit is a very simple solution to this problem, but the
utilization of this command can be problematic: the logic of defining which sources and multicast
groups to rate-limit is counterintuitive.
Most times in Cisco IOS it is necessary to define something based on a source (a multicast source in this
case) and a destination (multicast group(s)), starting with the source followed by the destination. The
most common problem with this command is following that customary logic; in multicast rate limiting
the group is specified first. On R1 we will deploy the ip multicast rate-limit command for the group
224.9.9.9 coming from the source 172.16.100.5 to block this particular multicast flow:
R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#access-list 1 permit 224.9.9.9
R1(config)#access-list 2 permit 172.16.100.5
R1(config)#interface FastEthernet0/0
R1(config-if)#ip multicast rate-limit in group-list 1 source-list 2 0
R1(config-if)#end

We will first test from R5 and expect the test to fail:


R5#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........

If the command has worked as anticipated, the ping will work for the same group from R2:
R2#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.200.8, 1 ms
Reply to request 1 from 172.16.200.8, 1 ms
Reply to request 2 from 172.16.200.8, 1 ms
Reply to request 3 from 172.16.200.8, 1 ms
Reply to request 4 from 172.16.200.8, 1 ms
Reply to request 5 from 172.16.200.8, 1 ms
Reply to request 6 from 172.16.200.8, 1 ms
Reply to request 7 from 172.16.200.8, 1 ms
Reply to request 8 from 172.16.200.8, 1 ms
Reply to request 9 from 172.16.200.8, 1 ms

Multicasting Through a GRE Tunnel


GRE tunneling provides a useful tool that can be used to encapsulate multicast traffic. This is very often
used to take advantage of GRE's IPsec tunneling capabilities. GRE offers much more granular QoS
because routers can see into the IP packet header. But another important aspect of multicast through a
GRE tunnel is the fact that multicast can be transported through what would normally be a unicast-only
network.
There are two common problems when deploying GRE tunnels. The first issue is that GRE tunnels do not
maintain state information. This means that once a tunnel is configured it will show an up/up state. To
prevent confusion it is recommended to enable keepalive support under the tunnel with the keepalive
command. The next most common issue is known as a recursive lookup failure. This is where the source
or destination of a tunnel is learned through the tunnel itself. To prevent this, it is advised not to
advertise the tunnel interfaces into the running IGP, but this makes it necessary

Copyright by IPexpert, Inc. All Rights Reserved.

13-9

IPv4/6 Multicast Operation and Troubleshooting

Chapter 13: Multicast Security and Advanced Features

to use a multicast static route to correct any RPF issues. We will remove the PIM relationship between
R1 and R7 once more, but this time we will create a GRE tunnel between R1 and R7.
R1(config)#interface FastEthernet0/1
R1(config-if)#no ip pim dense-mode
R1(config-if)#end
R7(config)#interface FastEthernet0/1
R7(config-if)#no ip pim dense-mode
R7(config-if)#end

This will prevent pings to 224.9.9.9 from reaching R8 from R5:


R5#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........

Now we will create the GRE tunnel:


R1(config)#interface tunnel 17
%LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel17, changed state to down
R1(config-if)#ip address 17.17.17.1 255.255.255.0
R1(config-if)#tunnel source lo 0
R1(config-if)#tunnel destination 7.7.7.7
R1(config-if)#keepalive 1 3
R1(config-if)#ip pim dense-mode
R1(config-if)#end
R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#interface tunnel 17
R7(config-if)#ip address 17.17.17.7 255.255.255.0
R7(config-if)#tunnel source lo 0
R7(config-if)#tunnel destination 1.1.1.1
R7(config-if)#keepalive 1 3
R7(config-if)#ip pim dense-mode
R7(config-if)#end
%LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel17, changed state to up

We can look at the status of the tunnel and test that it works via pings:
R7#show ip int brief tunnel 17
Interface              IP-Address      OK? Method Status                Protocol
Tunnel17               17.17.17.7      YES manual up                    up

R7#ping 17.17.17.1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 17.17.17.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms

Pings from R5 will not succeed because of an RPF failure on R7:


R5#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........

This can be corrected by applying an ip mroute for the source 172.16.100.5 that specifies the tunnel 17
interface:
R7(config)#ip mroute 172.16.100.5 255.255.255.255 tunnel 17

Now a repeat of the ping from R5 will succeed:


R5#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.200.8, 4 ms
Reply to request 1 from 172.16.200.8, 1 ms
Reply to request 2 from 172.16.200.8, 1 ms
Reply to request 3 from 172.16.200.8, 1 ms
Reply to request 4 from 172.16.200.8, 1 ms
Reply to request 5 from 172.16.200.8, 1 ms
Reply to request 6 from 172.16.200.8, 1 ms
Reply to request 7 from 172.16.200.8, 1 ms
Reply to request 8 from 172.16.200.8, 1 ms
Reply to request 9 from 172.16.200.8, 1 ms

Most notably, this demonstration illustrates the need to account for the fact that the GRE tunnel will not
appear in the unicast routing table, and thus cannot be employed to reach any source.
Source-Specific Multicast
RFC 3569 describes source-specific multicast (SSM). It is a datagram delivery model that best supports
audio and video-based one-to-many applications. SSM depends on Protocol Independent Multicast
source-specific mode (PIM-SSM) and Internet Group Management Protocol Version 3 (IGMPv3) for its
implementation. PIM-SSM is based on PIM sparse mode (PIM-SM). Review Chapter 4: Protocol
Independent Multicast - Sparse Mode (PIM-SM) should you need more information on this important
protocol.
A multicast network must maintain knowledge about which hosts in the network are actively sending
multicast traffic. With source-specific multicast, this information is provided by receivers through the
source addresses relayed to the last-hop routers using IGMPv3. In SSM, receivers must subscribe or
unsubscribe to (S, G) channels to receive or not receive traffic from specific sources. The proposed
standard approach for channel subscription signaling utilizes IGMP INCLUDE mode membership reports,
which are supported only in IGMP Version 3.
SSM coexists with normal PIM and IGMP operations by applying the SSM delivery model to a
configured subset of the IP multicast group address range. The Internet Assigned Numbers Authority
(IANA) has reserved the address range from 232.0.0.0 through 232.255.255.255 for SSM applications
and protocols. When an SSM range is defined, an existing IP multicast receiver application will not
receive any traffic when it tries to use addresses in the SSM range unless the application is modified to
use explicit (S, G) channel subscription or is SSM-enabled through a URL Rendezvous Directory (URD).
SSM can be deployed alone in a network without the full range of protocols that are required for
interdomain PIM-SM. In other words, SSM does not require a rendezvous point (RP), so there is no need
for an RP mechanism such as Auto-RP, MSDP, or bootstrap router (BSR).

Deploying SSM in a network that is already configured for PIM-SM simply requires that the last-hop routers
be upgraded to a software image that supports SSM. Non-last-hop routers need only run PIM-SM
in the SSM range. The SSM mode of operation is enabled by configuring the SSM range using the ip pim
ssm global configuration command. For groups within the SSM range, (S, G) channel subscriptions are
accepted through IGMPv3 INCLUDE mode membership reports.
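In addition to the default 232.0.0.0/8 range, a custom SSM range can be defined by referencing a standard access-list. A minimal sketch, assuming an arbitrary range of 239.232.0.0/16 and ACL number 10:

R1(config)#access-list 10 permit 239.232.0.0 0.0.255.255
R1(config)#ip pim ssm range 10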
PIM operations within the SSM range of addresses change to PIM-SSM, a mode derived from PIM-SM. In
this mode, only PIM (S, G) Join and Prune messages are generated by the router. Incoming messages
related to rendezvous point tree (RPT) operations are ignored or rejected, and incoming PIM register
messages are immediately answered with Register-Stop messages. PIM-SSM is backward-compatible
with PIM-SM unless a router is a last-hop router. Therefore, routers that are not last-hop routers can run
PIM-SM for SSM groups (for example, if they do not yet support SSM). For groups within the SSM range,
no MSDP Source-Active (SA) messages within the SSM range will be accepted, generated, or forwarded.
IGMPv3 is the third version of the IETF standards track protocol in which hosts signal membership to
last-hop routers of multicast groups. IGMPv3 introduces the ability for hosts to signal group membership
that allows filtering capabilities with respect to sources. A host can signal either that it wants to receive
traffic from all sources sending to a group except for some specific sources (a mode called EXCLUDE) or
that it wants to receive traffic only from some specific sources sending to the group (a mode called
INCLUDE). IGMPv3 can operate with both ISM and SSM. In ISM, both EXCLUDE and INCLUDE mode
reports are accepted by the last-hop router. In SSM, only INCLUDE mode reports are accepted by the
last-hop router.
In some environments it is necessary to adapt the typical PIM protocol deployment in such a fashion as
to allow it to better support one-to-many packet exchange models. This adaptation takes the form of a
special extension called Source Specific Multicast (SSM). This extension enables a receiver to select
content directly from a specified source. This results in the creation of a source based tree, thus
bypassing the need for a using an RP.

In most multicast implementations, applications must "join" an IP multicast group, because traffic is
distributed to group members. If two applications with different sources and receivers use the same IP
multicast group address, receivers of both applications will receive traffic from the senders of both the
applications. Even though the receivers, if programmed appropriately, can filter out the unwanted
traffic, using filters like those discussed previously, this situation generates large amounts of unwanted
traffic. However, in an SSM multicast network, the router closest to the receiver will be aware of a
request coming from an application to join to a particular multicast source by using the include mode in
IGMPv3.

The multicast router now forwards the request directly to the source rather than sending the request to
an RP. The source will send packets directly to the receiver using the shortest path. In SSM, routing of
multicast traffic relies solely on source-based trees. This means that an RP is not required.

The ability for SSM to explicitly include and exclude particular sources allows for a limited amount of
security. Traffic from a source to a group that is not explicitly listed on the include list will not be
forwarded to uninterested receivers.

SSM also solves IP multicast address collision issues associated with one-to-many type applications.
Routers running in SSM mode will route data streams based on the full (S, G) address. Assuming that a
source has a unique IP address to send on the internet, any (S, G) from this source also would be unique.
We will apply SSM in our environment on R1 as an example:

R1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#interface FastEthernet0/0
R1(config-if)#ip pim sparse-mode
R1(config-if)#interface FastEthernet0/1
R1(config-if)#ip pim sparse-mode
R1(config-if)#exit
R1(config)#ip pim ssm default
R1(config)#end

This command application tells the router that multicast groups in the range 232.0.0.0/8 will be
multicast routed using source-specific multicast. Next, on the client it is necessary to employ an IGMP
version that supports SSM. In the case of R8, we will have its FastEthernet0/0 interface join the
multicast group 232.9.9.9 specifically from the source located at 172.16.100.5 only:
R8#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R8(config)#ip pim ssm default
R8(config)#interface FastEthernet0/0
R8(config-if)#ip igmp join-group 232.9.9.9 source 172.16.100.5
R8(config-if)#ip igmp version 3
R8(config-if)#end

This means that R8 will accept the multicast stream for the group 232.9.9.9 from the source
172.16.100.5 and no other source.
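Although not demonstrated here, the resulting state could presumably be verified on R8 with commands such as show ip igmp groups 232.9.9.9 detail (to confirm the INCLUDE-mode membership) and show ip mroute 232.9.9.9 (to confirm that only the source-specific entry exists, with no (*, G) entry):

R8#show ip igmp groups 232.9.9.9 detail
R8#show ip mroute 232.9.9.9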



Chapter Challenge: Multicast Security and Advanced Features Sample Trouble Tickets
The following section includes two sample Trouble Tickets designed to challenge the troubleshooting
skills that have been developed in all previous sections of this chapter. These Trouble Tickets were
designed using the Routing & Switching rental racks at www.ProctorLabs.com with the initial
configurations provided in the file MCAST-CH13-SEC-ADV-TT-INITIAL.txt. Keep in mind these sample
Trouble Tickets were also tested against home practice racks and the most popular router emulators.
The network topology used in this section is shown in Figure 13-3 below:

Figure 13-3: The Chapter Challenge Topology

Trouble Ticket #1
Your supervisor has brought to your attention that R8 is not receiving any multicast traffic sent from R5
to the multicast address 224.9.9.9. Correct this issue.
Trouble Ticket #2
After solving Trouble Ticket #1, your supervisor has observed that multicast traffic being sent to the
group 224.99.99.99 is not being received by R9. Correct this issue.


Chapter Challenge: Multicast Security and Advanced Features Sample Trouble Tickets Solutions

The following section includes the solutions to the two Trouble Tickets presented in the previous
section.
Trouble Ticket #1 Solution
Your supervisor has brought to your attention that R8 is not receiving any multicast traffic sent from R5
to the multicast address 224.9.9.9. Correct this issue.
Step 1 - Fault Verification:
First, have R8 join the group for testing purposes:
R8#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R8(config)#interface FastEthernet0/0
R8(config-if)#ip igmp join-group 224.9.9.9
R8(config-if)#end

Now ping that group from R5:



R5#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
..........

This verifies the fault.



Step 2 - Fault Isolation:
The next course of action will be to attempt to isolate the fault. First, we will try an mtrace
from the source to the host:

R5#mtrace 172.16.100.5 172.16.200.8 224.9.9.9
Type escape sequence to abort.
Mtrace from 172.16.100.5 to 172.16.200.8 via group 224.9.9.9
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.200.8
-1 * 172.16.200.8 PIM [172.16.100.0/24]
-2 * 172.16.200.7 PIM [172.16.100.0/24]
-3 * 172.16.17.1 PIM [172.16.100.0/24]
-4 * 172.16.100.5 PIM [172.16.100.0/24]


The mtrace output seems to indicate that there is no RPF fault. This means the next most logical step will be
to look at the multicast routing tables of the devices in the network between the source and the
destination devices:

R1#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:07:09/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:07:09/00:00:00
FastEthernet0/0, Forward/Dense, 00:07:09/00:00:00
(172.16.100.5, 224.9.9.9), 00:07:09/00:02:50, flags: T
Incoming interface: FastEthernet0/0, RPF nbr 172.16.100.5
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:07:10/00:00:00

We see the (S, G) pair in the multicast routing table of R1; what about R7?

R7#show ip mroute 224.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.9.9.9), 00:09:06/stopped, RP 0.0.0.0, flags: DC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:09:06/00:00:00
FastEthernet0/0, Forward/Dense, 00:09:06/00:00:00
(172.16.100.5, 224.9.9.9), 00:01:24/00:01:35, flags: T
Incoming interface: FastEthernet0/1, RPF nbr 172.16.17.1
Outgoing interface list:
FastEthernet0/0, Forward/Dense, 00:01:25/00:00:00, limit 0 kbps


We see the (S, G) pair, but we also observe that the pair is being limited to 0 kbps. This isolates the fault.

Step 3 - Fault Remediation:
In this scenario, the ip multicast rate-limit command needs to be removed from R7's FastEthernet0/0
interface:

R7#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R7(config)#interface FastEthernet0/0
R7(config-if)#no ip multicast rate-limit out group-list 1 source-list 2 0
R7(config-if)#end

Step 4 - Verification of Remediation


Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially.

R5#ping 224.9.9.9 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.9.9.9, timeout is 2 seconds:
Reply to request 0 from 172.16.200.8, 1 ms
Reply to request 1 from 172.16.200.8, 1 ms
Reply to request 2 from 172.16.200.8, 1 ms
Reply to request 3 from 172.16.200.8, 1 ms
Reply to request 4 from 172.16.200.8, 1 ms
Reply to request 5 from 172.16.200.8, 1 ms
Reply to request 6 from 172.16.200.8, 1 ms
Reply to request 7 from 172.16.200.8, 1 ms
Reply to request 8 from 172.16.200.8, 1 ms
Reply to request 9 from 172.16.200.8, 1 ms

Pings to the group are now successful, thus telling us that we have remediated the problem.


Trouble Ticket #2 Solution


After solving Trouble Ticket #1, your supervisor has observed that multicast traffic being sent to the
group 224.99.99.99 is not being received by R9. Correct this issue.
Step 1 - Fault Verification:
Have R9 join the group for testing purposes:
R9#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R9(config)#interface FastEthernet0/0
R9(config-if)#ip igmp join-group 224.99.99.99
R9(config-if)#end

Now ping that group from R5:



R5#ping 224.99.99.99 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.99.99.99, timeout is 2 seconds:
..........

This verifies the fault.



Step 2 - Fault Isolation:
The next course of action will be to attempt to isolate the fault. First, we will try mtrace from the source
to the host:

R5#mtrace 172.16.100.5 172.16.200.9 224.99.99.99
Type escape sequence to abort.
Mtrace from 172.16.100.5 to 172.16.200.9 via group 224.99.99.99
From source (?) to destination (?)
Querying full reverse path... * switching to hop-by-hop:
0 172.16.200.9
-1 * 172.16.200.9 PIM [172.16.100.0/24]
-2 * 172.16.200.7 PIM [172.16.100.0/24]
-3 * 172.16.17.1 PIM [172.16.100.0/24]
-4 * 172.16.100.5 PIM [172.16.100.0/24]


The mtrace output seems to indicate that there is no RPF fault. This means the next most logical step will be
to look at the multicast routing tables of the devices in the network between the source and the
destination devices:

R1#show ip mroute 224.99.99.99
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.99.99.99), 00:02:31/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:02:30/00:00:00
FastEthernet0/0, Forward/Dense, 00:02:31/00:00:00
(172.16.100.5, 224.99.99.99), 00:02:31/00:02:55, flags: T
Incoming interface: FastEthernet0/0, RPF nbr 172.16.100.5
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:02:32/00:00:00

We see the (S, G) pair in the multicast routing table of R1; what about R7?

R7#show ip mroute 224.99.99.99
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.99.99.99), 00:03:50/stopped, RP 0.0.0.0, flags: DC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/1, Forward/Dense, 00:03:50/00:00:00
FastEthernet0/0, Forward/Dense, 00:03:50/00:00:00
(172.16.100.5, 224.99.99.99), 00:03:00/00:02:50, flags: T
Incoming interface: FastEthernet0/1, RPF nbr 172.16.17.1
Outgoing interface list:
FastEthernet0/0, Forward/Dense, 00:03:01/00:00:00

We see the (S, G) pair, but what about the table on R9?

R9#show ip mroute 224.99.99.99
Group 224.99.99.99 not found

The output says that R9 does not have any record of the group. Not long after executing the show
command, the console reports the following message:
R9#
%MROUTE-4-ROUTELIMIT: Current count of 2 exceeds multicast route-limit of 1 -Process=
"<interrupt level>", ipl= 1

This output tells us that we have a route-limit parameter configured on R9, as evidenced by the output of
show run:
R9#show run | inc route-limit
ip multicast route-limit 1

This isolates the fault.



Step 3 - Fault Remediation:
In this scenario, the ip multicast route-limit command needs to be adjusted on R9 to allow the correct number of
multicast entries, so that the deployment operates correctly without exceeded-limit messages being
generated:

R9#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R9(config)#ip multicast route-limit 4
R9(config)#end

Step 4 - Verification of Remediation


Once the error has been isolated and remediated, it is highly recommended to verify that the Trouble
Ticket has been repaired using the same method used to verify the fault initially.

R5#ping 224.99.99.99 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.99.99.99, timeout is 2 seconds:
Reply to request 0 from 172.16.200.9, 1 ms
Reply to request 1 from 172.16.200.9, 1 ms
Reply to request 2 from 172.16.200.9, 1 ms

Reply to request 3 from 172.16.200.9, 1 ms
Reply to request 4 from 172.16.200.9, 1 ms
Reply to request 5 from 172.16.200.9, 1 ms
Reply to request 6 from 172.16.200.9, 1 ms
Reply to request 7 from 172.16.200.9, 1 ms
Reply to request 8 from 172.16.200.9, 1 ms
Reply to request 9 from 172.16.200.9, 1 ms
Pings to the group are now successful, telling us that we have remediated the problem.


Chapter 14: IPv6 Multicast



This chapter of IPv4/6 Multicast Operation and Troubleshooting examines IPv6 multicast in great
depth. Once the operational characteristics of IPv6 multicast are detailed completely, the focus becomes
that of troubleshooting. This includes the careful examination of symptoms, a fault isolation
methodology, and the implementation of repairs. The chapter begins with a thorough review of IPv6
multicast technologies, and then quickly launches into an exhaustive analysis of the art of
troubleshooting. This important chapter concludes with sample troubleshooting scenarios, reference
materials for the most important show and debug commands, and exciting challenges that allow
readers to practice implementing the troubleshooting skills they have obtained.


IPv6 Multicast Technology Review


All of the hard work that readers have done in previous chapters is about to pay off: IPv6 multicast
builds very well upon the technologies learned in those chapters. This book starts with the most
logical of all starting places for this topic: IPv6 multicast addressing.
IPv6 Multicast Addressing
As in IP version 4, a multicast address identifies a set of nodes, so that copies of data are sent to all
nodes that possess the appropriate multicast address. Multicast allows for the elimination of broadcasts
in IPv6. Broadcasts in IP version 4 were problematic, since a copy of the data was delivered to every node
in the network, whether the node cared to receive the information or not.
You can quickly spot an IPv6 multicast address by examining the initial bit settings. A multicast address
begins with the first 8 bits set to 1 (11111111). The corresponding IPv6 prefix notation is FF00::/8.
Following the initial 8 bits, there are 4 bits (labeled 0RPT) which are flag fields. The high-order flag is
reserved, and must be initialized to 0. If the R bit is set to 1, then the P and T bits must also be set to 1.
This indicates there is an embedded Rendezvous Point (RP) address in the multicast address.
The next four bits are scope. The possible scope values are:

0 - reserved
1 - Interface-Local scope
2 - Link-Local scope
3 - reserved
4 - Admin-Local scope
5 - Site-Local scope
6 - (unassigned)
7 - (unassigned)
8 - Organization-Local scope
9 - (unassigned)
A - (unassigned)
B - (unassigned)
C - (unassigned)
D - (unassigned)
E - Global scope
F - reserved


The remaining 112 bits of the address make up the multicast Group ID. An example of an IPv6 multicast
address is FF0E:0:0:0:0:0:0:101, which addresses all of the NTP servers on the Internet.
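Working through that example field by field makes the format concrete:

FF0E::101
FF    - the first 8 bits, all set to 1 (multicast)
0     - the 0RPT flag bits, all clear (a permanently assigned address with no embedded RP)
E     - the scope value E (Global scope)
::101 - the remaining 112 bits, the multicast Group ID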
Keep in mind that just like in IPv4 multicast, there are many reserved addresses of link-local scope. Here
are some examples:

FF02:0:0:0:0:0:0:1 - all nodes
FF02:0:0:0:0:0:0:2 - all routers
FF02:0:0:0:0:0:0:9 - all RIP routers

A special, reserved IPv6 multicast address that is very important is the Solicited-Node multicast address:
FF02:0:0:0:0:1:FFXX:XXXX
A Solicited-Node multicast address is created automatically by the router. The router takes the low-
order 24 bits of the IPv6 address (unicast or anycast) and appends those bits to the prefix
FF02:0:0:0:0:1:FF00::/104. This results in a multicast address within the range FF02:0:0:0:0:1:FF00:0000
to FF02:0:0:0:0:1:FFFF:FFFF. These addresses are used by the IPv6 Neighbor Discovery (ND) protocol in
order to provide a much more efficient address resolution protocol than Address Resolution Protocol
(ARP) of IPv4.
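For example, a router whose unicast address is 2001:1515::1 (as configured on R1 later in this chapter)
takes the low-order 24 bits, 00:0001, and appends them to the prefix, producing the Solicited-Node
address FF02::1:FF00:1. That is precisely the FF02::1:FF00:1 entry visible in the show ipv6 interface
fa0/0 output shown later in this chapter.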
Protocol Independent Multicast Version 2 (PIMv2) for IPv6
In Chapter 4: Protocol Independent Multicast - Sparse Mode (PIM-SM) you learned the detailed
operation of PIM-SM. It is important that you review that chapter now. Protocol Independent Multicast
version 2 for IPv6 features a single mode of operation: sparse mode.
Just like the version 4 PIM-SM, PIMv2 for IPv6 utilizes concepts such as Designated Routers, Assert
Messages, and Rendezvous Points. Reverse Path Forwarding (RPF) checks are performed against the
underlying IPv6 routing database. Again, be sure to review Chapter 4 should any of these important
concepts need a review.
Multicast Listener Discovery (MLD) Protocol
IPv6 multicast renames IGMP (detailed in Chapter 2: Internet Group Management Protocol (IGMP)) to
the Multicast Listener Discovery Protocol (MLD). Version 1 of MLD is similar to IGMP Version 2, while
Version 2 of MLD is similar to Version 3 IGMP. As such, MLD Version 2 supports Source Specific Multicast
(SSM) for IPv6 environments.
Using the Multicast Listener Discovery Protocol, hosts can indicate they want to receive multicast
transmissions for select groups. Routers (queriers) can control the flow of multicast in the network
through the use of MLD.


MLD uses the Internet Control Message Protocol for IPv6 (ICMPv6) to carry its messages. All such
messages are link-local in scope, and they all have the router alert option set.
MLD uses three types of messages: Query, Report, and Done. The Done message is like the Leave
message in IGMP version 2. It indicates a host no longer wants to receive the multicast transmission.
This triggers a Query to check for any more receivers on the segment.
IPv6 PIM Bootstrap Router Protocol (BSR)
Options for Rendezvous Point (RP) assignment in IPv6 multicast are:

Static
BSR
Embedded RP

Note: There is no longer an Auto-RP option in IPv6 multicast.


As you might guess, BSR is functionally very similar to its IPv4 counterpart as covered in Chapter 6:
Bidirectional Protocol Independent Multicast (BIDIR-PIM).
IPv6 Embedded RP
The embedded RP concept of IPv6 multicast takes direct advantage of the enormous size of the IPv6
multicast address. This functionality also helps fill the gap left by the absence of the Multicast Source
Discovery Protocol (MSDP) in IPv6 multicast. Embedded RP allows the rendezvous point address
information to be embedded directly in the multicast group address.


The Operation and Troubleshooting of IPv6 Multicast


The examples in this chapter will use the sample IPv6 multicast topology shown in Figure 14-1.

[Figure: network diagram. Recoverable elements: routers R1, R2, R4, R5, R6, R7, and R9; a Source and
a Receiver host; and the segments 2001:1515::/64, 2001:4545::/64, 2001:4646::/64, 2001:6767::/64,
and 2001:7979::/64.]
Figure 14-1: The IPv6 Multicast Sample Topology

IPv6 Multicast Addressing


Let us examine multicast addressing on the router.
R1#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#ipv6 unicast-routing
R1(config)#interface fa0/0
R1(config-if)#ipv6 address 2001:1515::1/64
R1(config-if)#no shutdown
R1(config-if)#
*Mar 1 00:03:32.627: %LINK-3-UPDOWN: Interface FastEthernet0/0, changed state to up
*Mar 1 00:03:33.627: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0,
changed state to up
R1(config-if)#do show ipv6 interface fa0/0
FastEthernet0/0 is up, line protocol is up
IPv6 is enabled, link-local address is FE80::20A:B8FF:FE1A:5030
No Virtual link-local address(es):
Global unicast address(es):
2001:1515::1, subnet is 2001:1515::/64
Joined group address(es):
FF02::1
FF02::2
FF02::A
FF02::D
FF02::16
FF02::1:FF00:1
FF02::1:FF1A:5030


MTU is 1500 bytes


ICMP error messages limited to one every 100 milliseconds
ICMP redirects are enabled
ICMP unreachables are sent
ND DAD is enabled, number of DAD attempts: 1
ND reachable time is 30000 milliseconds (using 26344)
ND advertised reachable time is 0 (unspecified)
ND advertised retransmit interval is 0 (unspecified)
ND router advertisements are sent every 200 seconds
ND router advertisements live for 1800 seconds
ND advertised default router preference is Medium
Hosts use stateless autoconfig for addresses.


Notice that because IPv6 routing capabilities are enabled for this device, one of the multicast groups
joined is ALL ROUTERS for the local link (FF02::2).
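The other joined groups are equally telling: FF02::1 is all nodes, FF02::A is all EIGRP routers, FF02::D is
all PIM routers, FF02::16 is all MLDv2-capable routers, and the two FF02::1:FFxx:xxxx entries are the
Solicited-Node addresses derived from the interface's global and link-local addresses.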
Protocol Independent Multicast Version 2 (PIMv2) for IPv6
Typing the following command on the Cisco router automatically enables IPv6 PIM version 2 on all
IPv6-enabled interfaces:
ipv6 multicast-routing
Should you need to disable this functionality on a particular interface, simply type no ipv6 pim under
that interface as shown below:
R1(config)#ipv6 multicast-routing
R1(config)#exit
R1#
R1#show ipv6 pim interface
Interface          PIM  Nbr    Hello  DR
                        Count  Intvl  Prior

Loopback0          on   0      30     1
    Address: FE80::20A:B8FF:FE1A:5030
    DR     : this system
Null0              off  0      30     1
    Address: FE80::1
    DR     : not elected
VoIP-Null0         off  0      30     1
    Address: ::
    DR     : not elected
FastEthernet0/0    on   1      30     1
    Address: FE80::20A:B8FF:FE1A:5030
    DR     : FE80::20A:B8FF:FE2C:80E0
FastEthernet0/1    off  0      30     1
    Address: ::
    DR     : not elected
Loopback10         on   0      30     1
    Address: FE80::20A:B8FF:FE1A:5030
    DR     : this system
Tunnel0            off  0      30     1
    Address: FE80::20A:B8FF:FE1A:5030
    DR     : not elected

R1#conf t
R1(config)#interface lo10
R1(config-if)#no ipv6 pim
R1(config-if)#end
R1#show ipv6 pim interface
Interface          PIM  Nbr    Hello  DR
                        Count  Intvl  Prior

Loopback0          on   0      30     1
    Address: FE80::20A:B8FF:FE1A:5030
    DR     : this system
Null0              off  0      30     1
    Address: FE80::1
    DR     : not elected
VoIP-Null0         off  0      30     1
    Address: ::
    DR     : not elected
FastEthernet0/0    on   1      30     1
    Address: FE80::20A:B8FF:FE1A:5030
    DR     : FE80::20A:B8FF:FE2C:80E0
FastEthernet0/1    off  0      30     1
    Address: ::
    DR     : not elected
Loopback10         off  0      30     1
    Address: FE80::20A:B8FF:FE1A:5030
    DR     : not elected
Tunnel0            off  0      30     1
    Address: FE80::20A:B8FF:FE1A:5030
    DR     : not elected
R1#


Static assignment of the RP for the topology is simple. Use the following command:
ipv6 pim rp-address ipv6_address
A significant difference between PIM for IP version 4 and PIM for IP version 6 involves the source
registration process. Immediately upon learning the RP, an IPv6 multicast Cisco router constructs a


tunnel interface leading to the RP. The tunnel is also immediately enabled for IPv6 multicast and is used
for the duration of the registration process. After registration completes, multicast receivers switch to
the optimal path, which removes the need for the tunnel. Examine the automatic creation of the tunnel
below:
R1(config)#ipv6 pim rp-address 2001:2222::2
R1(config)#
%LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel1, changed state to up
R1(config)#end
R1#


In order to examine a list of currently active tunnels, Cisco provides the command show ipv6 pim
tunnel.
R1#show ipv6 pim tunnel
Tunnel0*
 Type  : PIM Encap
 RP    : Embedded RP Tunnel
 Source: 2001:1111::1
Tunnel1*
 Type  : PIM Encap
 RP    : 2001:2222::2
 Source: 2001:1111::1
R1#

Multicast Listener Discovery (MLD) Protocol


As one might guess, the configuration of MLD is very similar to the configuration of IGMP as detailed in
Chapter 2: Internet Group Management Protocol (IGMP). In order to define the maximum number of
groups joined by hosts on an interface of a Cisco router, use the following command:
ipv6 mld limit number
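As a minimal sketch at the interface level (the limit value of 2 here is arbitrary and not taken from the
scenario):

R9(config)#interface FastEthernet0/1
R9(config-if)#ipv6 mld limit 2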
In order to statically join a group on the interface, use the command:
ipv6 mld join-group group_address
Verifications follow IGMP logic as shown below:
R9(config-if)#ipv6 mld join-group ff08::9
R9(config-if)#do show ipv6 mld groups
MLD Connected Group Membership
Group Address                Interface          Uptime    Expires
FF08::9                      FastEthernet0/1    00:00:18  never

R9(config-if)#


Various MLD-related query parameters may also be set on the Cisco router, once again following IGMP
logic. The following commands are used, as illustrated in the sketch after this list:

ipv6 mld query-interval seconds
ipv6 mld query-timeout seconds
ipv6 mld query-max-response-time seconds
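For illustration, these might be tuned together on a receiver-facing interface; the values below are
arbitrary (note that the query timeout is conventionally about twice the query interval):

R9(config)#interface FastEthernet0/1
R9(config-if)#ipv6 mld query-interval 60
R9(config-if)#ipv6 mld query-timeout 120
R9(config-if)#ipv6 mld query-max-response-time 20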

You may also use an IPv6 access control list in order to control groups joined by a host. In order to apply
the ACL to the MLD configuration, use the command:
ipv6 mld access-group acl_name
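A brief sketch, reusing the FF08::9 group joined earlier in this chapter (the ACL name MLD-GROUPS is
arbitrary):

R9(config)#ipv6 access-list MLD-GROUPS
R9(config-ipv6-acl)#permit ipv6 any host FF08::9
R9(config-ipv6-acl)#exit
R9(config)#interface FastEthernet0/1
R9(config-if)#ipv6 mld access-group MLD-GROUPS

With this applied, hosts on FastEthernet0/1 may join only the group FF08::9.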
IPv6 PIM Bootstrap Router Protocol (BSR)
As was pointed out earlier in this chapter, BSR is functionally very similar to its IPv4 counterpart as
covered in Chapter 6: Bidirectional Protocol Independent Multicast (BIDIR-PIM). The command
required for the candidate rendezvous point (C-RP) configuration is:
ipv6 pim bsr candidate rp ipv6_address
The command to configure the bootstrap router itself is:
ipv6 pim bsr candidate bsr ipv6_address
An interesting enhancement to the IPv6 version of BSR is the fact that you may statically configure the
bootstrap router itself with a list of candidate RPs (C-RPs) using the command:
ipv6 pim bsr announced rp ipv6_address
Remarkably, this altogether eliminates the need for the dynamic candidate RP announcements from
other devices in the topology.
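Pulling these together, a hypothetical configuration might place both roles on a single router, assuming
2001:2222::2 is a reachable interface address on R2 in this topology:

R2(config)#ipv6 pim bsr candidate bsr 2001:2222::2
R2(config)#ipv6 pim bsr candidate rp 2001:2222::2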
IPv6 Embedded RP
As described earlier, following the initial 8 bits of an IPv6 multicast address, there are 4 bits (labeled
0RPT) which are flag fields. The high-order flag is reserved, and must be initialized to 0. If the R bit is set
to 1, then the P and T bits must also be set to 1. This indicates there is an embedded Rendezvous Point
(RP) address in the multicast address. With embedded RP, Cisco routers no longer rely on BSR or static
configuration for the RP assignment; instead, senders and receivers in IPv6 multicast agree on the RP by
embedding its IPv6 address in the IPv6 multicast group address itself.
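As a sketch of how such an address is constructed (per RFC 3956; the group ID 1111 is arbitrary, and the
RP address 2001:2222::2 is the one used earlier in this chapter):

FF7E:240:2001:2222::1111
FF          - all 1s: multicast
7           - flags 0RPT = 0111 (R, P, and T all set: embedded RP)
E           - Global scope
240         - 4 reserved bits (0), RIID = 2, plen = 0x40 (a 64-bit prefix)
2001:2222:: - the 64-bit network prefix of the RP
1111        - the 32-bit group ID (arbitrary)

A router derives the RP address by appending the RIID to the embedded prefix, yielding 2001:2222::2.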
