Friday, May 10, 2013

FlexConnect Local Switching - Guest/BYOD

Overview
BYOD is all the rage lately, so it is imperative that we get it working properly.  One environment that I found particularly challenging was getting Central Web Authentication (CWA) working with FlexConnect and locally switched WLANs.  In this article I will step through a working configuration from both the ISE (Identity Services Engine) and WLC (Wireless LAN Controller) perspectives.  
NOTE: All components and configurations were performed in a lab environment.  If you are working in a production environment be sure to validate the steps in your scenario.
We will cover the configuration requirements in two major sections:  WLC Configuration and ISE Configuration
My lab consists of the following components: 
    Cisco vWLC version 7.5 (beta)
    Cisco 1131 AG Access Point
    Cisco CSR1000v
    Cisco ISE version 1.1.2 in Stand Alone Mode
    ESXi 5.1
    Ubuntu Server for DNS, NTP, etc
    Catalyst 2950 Lab for basic connectivity
WLC Configuration
Here is a summary of the steps to be performed in this section: 
1. Create the locally switched SSID
2. Create the access control list for Authenticated users
3. Create the access control list for web authentication redirect
4. Define the FlexConnect group
5. Configure the Access Point
1. Create new SSID named flex-guest
You can use any name you choose for your SSID; I picked one that would be easily identifiable in my lab.  The SSID will be flex-guest in this example. 

[Screenshot: creating the flex-guest WLAN]
Once you have created the WLAN and are in the configuration details screen, set the L2 security to None and enable MAC Filtering as shown below.  This allows ISE to treat the request as wireless MAB (MAC Authentication Bypass), and it will flow through to the CWA profile to be created later. 
[Screenshot: Layer 2 security set to None with MAC Filtering enabled]
Make sure to select your RADIUS servers for authentication and accounting on the AAA Servers tab.
[Screenshot: AAA Servers tab with RADIUS authentication and accounting servers selected]
Enabling AAA Override and setting the NAC State to “Radius NAC” is required.
[Screenshot: AAA Override enabled and NAC State set to Radius NAC]

In the FlexConnect section we will make this WLAN locally switched. 
[Screenshot: FlexConnect Local Switching enabled on the WLAN]
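
For reference, here is a rough AireOS CLI equivalent of the WLAN settings above.  This is only a sketch: it assumes WLAN ID 2 and RADIUS server index 1, so adjust both and verify the syntax against your controller version.

config wlan create 2 flex-guest flex-guest
config wlan security wpa disable 2
config wlan mac-filtering enable 2
config wlan radius_server auth add 2 1
config wlan radius_server acct add 2 1
config wlan aaa-override enable 2
config wlan nac radius enable 2
config wlan flexconnect local-switching enable 2
config wlan enable 2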
2.  Create the FLEX-GUEST FlexConnect access control list
We will configure a couple of ACLs for this solution.  The first ACL controls what guest users can do once authenticated; it will be referenced by name in our Authorization Policy.  
This ACL can be tailored to your needs, but I have essentially allowed all traffic except ICMP to one of the IP addresses on the CSR1000v router, for testing purposes. 
[Screenshot: FLEX-GUEST FlexConnect ACL rules]
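
To make the intent of the rules clearer, here is the same logic expressed in IOS-style ACL notation (the actual rules are entered in the WLC GUI, and the 198.51.100.1 address is just a placeholder for the CSR1000v host I blocked in my lab):

deny   icmp any host 198.51.100.1
permit ip   any any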
3.  Create the WEBAUTH-REDIRECT-ACL access control list
The WEBAUTH-REDIRECT-ACL is a FlexConnect ACL.  The FlexConnect ACL is applied to the FlexConnect Group under the Policies tab later.
[Screenshot: WEBAUTH-REDIRECT-ACL FlexConnect ACL rules]
This ACL is used during central web auth and identifies the traffic that is allowed through without being redirected.  There are some good examples in the Cisco TrustSec docs you can copy.  I basically allowed any traffic to/from the ISE box and also DNS to get us going. 
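
As a sketch of the rule logic (again in IOS-style notation for readability; build the equivalent rules in the WLC GUI and substitute your ISE address for the 192.0.2.10 placeholder): traffic permitted here bypasses the redirect, while web traffic hitting the final deny is what gets sent to the guest portal.

permit udp any any eq 53
permit udp any eq 53 any
permit ip any host 192.0.2.10
permit ip host 192.0.2.10 any
deny   ip any any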
4.  Define the FlexConnect Group
Create the Group and associate the access point. 
[Screenshot: FlexConnect group with the access point added]
Associate the WEBAUTH-REDIRECT-ACL under the ACL Mapping -> Policies tab.
[Screenshot: WEBAUTH-REDIRECT-ACL mapped under ACL Mapping -> Policies]
Add the WLAN to VLAN mapping 
[Screenshot: WLAN-to-VLAN mapping for the FlexConnect group]
5.  Configure the AP for FlexConnect 
First, ensure the access point's AP Mode is set to FlexConnect.
[Screenshot: AP Mode set to FlexConnect]
Enable VLAN Support on the FlexConnect Tab.
[Screenshot: VLAN Support enabled on the FlexConnect tab]
Click the VLAN Mappings tab and add the VLAN to the AP. 
Then, at the bottom, associate the FlexConnect FLEX-GUEST ACL created earlier. 
[Screenshot: AP VLAN mappings and FLEX-GUEST ACL association]
ISE Related Configurations
The assumption is that you have a basic, functional ISE installation.  I will only cover the configurations specific to the FlexConnect lab I am working on.  
1. Guest Account Created and Active
2. Result Element Created for CWA
3. Result Element Created for FLEX-GUEST
4. AuthZ Policy Created for CWA
5. AuthZ Policy Created for Activated Guest
1.  Guest Account Creation 
Log in to your sponsor portal (default https://ise:8443) and create a guest account to be used during testing.
2.  Create the Result Policy Element for central web auth
[Screenshots: CentralWebAuth authorization profile configuration]
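
For reference, a CWA authorization profile of this type generally resolves to RADIUS attributes along these lines (shown for understanding only; ISE generates the exact redirect URL per session):

Access Type = ACCESS_ACCEPT
cisco-av-pair = url-redirect-acl=WEBAUTH-REDIRECT-ACL
cisco-av-pair = url-redirect=https://ip:port/guestportal/gateway?sessionId=SessionIdValue&action=cwa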
3.  Create the Result Policy Element for FLEX-GUEST 
[Screenshots: FLEX-GUEST authorization profile configuration]
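
This result only needs to return an accept plus the ACL name that the WLC/AP will enforce locally, so the attribute details end up looking something like:

Access Type = ACCESS_ACCEPT
Airespace-ACL-Name = FLEX-GUEST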
4. Create the AuthZ policy for CWA
I am using the default catch-all authorization policy in this example and have associated it with the CentralWebAuth result created in a prior step. 
[Screenshot: default authorization rule mapped to the CentralWebAuth result]
5.  Create the AuthZ policy for FLEX-GUEST
This rule uses the ActivatedGuest endpoint group for guests and the FLEX-GUEST result. 
[Screenshot: ActivatedGuest authorization rule mapped to the FLEX-GUEST result]
The basic flow will be: 
1. Client connects to the flex-guest SSID
2. WLC sends the RADIUS request to ISE as MAB
3. Since there is no endpoint with this MAC address in the database, authentication is allowed to “Continue” through to Authorization
4. There is not yet a match in the Authorization rules, so the session lands on the default rule (the CWA policy from above)
5. ISE returns an access-accept with the name of the webauth-redirect ACL to be used (and the redirect URL)
6. WLC uses the locally created webauth redirect ACL to determine which traffic qualifies for redirect and sends browser traffic to the ISE guest portal
7. The user enters the guest username and password created in the sponsor portal
8. Following successful authentication, a CoA is issued, the FLEX-GUEST ACL replaces the webauth redirect, and the client enters the RUN state.  
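
A quick way to watch this on the WLC is to check the client state before and after the guest login (the MAC below is just a placeholder):

show client detail aa:bb:cc:dd:ee:ff

Before login the client should show a CENTRAL_WEB_AUTH policy manager state with the redirect URL and WEBAUTH-REDIRECT-ACL applied; after the CoA it should move to RUN with the FLEX-GUEST ACL in place.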

Thursday, January 24, 2013

Multicast VPN



MPLS Multicast Routing (IOS)


Here are my notes for configuring multicast in an MPLS environment from a recent training.  This will be broken down into four major sections with output samples for each configuration step and validation:
  1. MPLS in the Core
  2. Multicast between the PE/CE
  3. Multicast Distribution Tree
  4. Final Validations and Test


BASIC TOPOLOGY

[Topology diagram]



MPLS IN THE CORE

Core Network Setup Steps

The assumption is that you already have a fully configured and functioning IGP.  These configuration steps are performed on the 'P' routers within the core of the MPLS network.
  1. Enable multicast routing
  2. Configure PIM on the Interface
  3. If using sparse-mode, configure rendezvous point in the core


1.  Enable Multicast routing
     ip multicast-routing

2.  Configure PIM on the Interface
     interface x/y
     ip pim sparse-mode
     
The easiest way to avoid trouble with the RPF check and multicast forwarding is to maintain a 1:1 ratio between your PIM enabled interfaces and your IGP enabled interfaces.  
     

3.  Configure the rendezvous point

The RP can be configured using Auto-RP, bootstrap router (BSR), or a static assignment.  I chose BSR in this case.

     ip pim bsr-candidate loopback0
     ip pim rp-candidate loopback0


Core Network Validation Steps
  1.  Validate the appropriate interfaces are configured with PIM
  2. Validate the PIM neighbors are seen successfully
  3. Validate the RP is seen on all multicast enabled core routers
  4. Enable a multicast receiver in the core
  5. Compare the unicast routing table, rpf for the rp and the outgoing interfaces list
  6. Source multicast traffic to the receiver from another device (see the test at the end of this section)



1.  Validate PIM Interfaces


Compare the output of the 'show ip pim interface' command to your diagram to ensure all required interfaces have been configured.  Perform this step on all routers in the MPLS core.

P5#sho ip pim int 

Address          Interface                Ver/   Nbr    Query  DR     DR
                                          Mode   Count  Intvl  Prior
20.4.5.5         FastEthernet0/0          v2/S   1      30     1      20.4.5.5
20.5.6.5         FastEthernet0/1          v2/S   1      30     1      20.5.6.6
20.3.5.5         FastEthernet1/0          v2/S   0      30     1      0.0.0.0
20.5.19.5        FastEthernet1/1          v2/S   1      30     1      20.5.19.19
5.5.5.5          Loopback0                v2/S   0      30     1      5.5.5.5


2.  Validate PIM Neighbors

Perform this test on all multicast enabled routers and validate all interfaces are appropriately configured.  In the sample topology all interfaces have been PIM enabled.

     show ip pim neighbor

SAMPLE OUTPUT
P5#sho ip pim neighbor 
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
20.4.5.4          FastEthernet0/0          00:25:59/00:01:35 v2    1 / S P G
20.5.6.6          FastEthernet0/1          00:39:45/00:01:44 v2    1 / DR S P G
20.5.19.19        FastEthernet1/1          00:38:53/00:01:15 v2    1 / DR S P G

3.  Validate RP Assignment

Execute this check on all core routers that are going to be multicast enabled.

show ip pim rp mapping

P5#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 4.4.4.4 (?), v2
    Info source: 4.4.4.4 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 00:37:23, expires: 00:01:46


4.  Enable Multicast Receiver

interface Loopback0
 ip address 6.6.6.6 255.255.255.255
 ip router isis 1
 ip pim sparse-mode
 ip igmp join-group 224.1.1.1


5.  Unicast routing, rpf check and OIL

Validate that the unicast route to the RP address and the RPF lookup are in agreement.

P5#sho ip route 4.4.4.4
Routing entry for 4.4.4.4/32
  Known via "isis", distance 115, metric 10, type level-2
  Redistributing via isis 1
  Last update from 20.4.5.4 on FastEthernet0/0, 01:05:42 ago
  Routing Descriptor Blocks:
  * 20.4.5.4, from 4.4.4.4, 01:05:42 ago, via FastEthernet0/0
      Route metric is 10, traffic share count is 1


P5#sho ip rpf 4.4.4.4
RPF information for ? (4.4.4.4)
  RPF interface: FastEthernet0/0
  RPF neighbor: ? (20.4.5.4)
  RPF route/mask: 4.4.4.4/32
  RPF type: unicast (isis 1)
  Doing distance-preferred lookups across tables
  RPF topology: ipv4 multicast base, originated from ipv4 unicast base


P5#sho ip mroute 
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, 
       Z - Multicast Tunnel, z - MDT-data group sender, 
       Y - Joined MDT-data group, y - Sending to MDT-data group, 
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 01:04:13/00:03:20, RP 4.4.4.4, flags: S
  Incoming interface: FastEthernet0/0, RPF nbr 20.4.5.4
  Outgoing interface list:
    FastEthernet0/1, Forward/Sparse, 01:04:13/00:03:20

(*, 224.0.1.40), 01:20:58/00:02:16, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 01:20:56/00:02:16

   The incoming interface should match the RPF interface toward the RP.
   The outgoing interface list (OIL) should contain the interface toward the router with the IGMP join statement.
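
To wrap up step 6, source traffic to the group from another core router and confirm the receiver answers; a minimal test, assuming you ping from P4:

P4#ping 224.1.1.1 repeat 2

A reply sourced from 6.6.6.6 confirms the shared tree is forwarding end to end.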




MULTICAST BETWEEN THE PE/CE


CE Router Configuration

The multicast configuration on the CE router is a standard enterprise multicast configuration; a minimal sketch follows the step list below.  The same practice of maintaining a 1:1 ratio between IGP-enabled and multicast-enabled interfaces will help avoid RPF issues later. 

  1. Enable multicast routing
  2. Enable PIM on the interfaces
  3. Configure a rendezvous point for sparse-mode multicast networks
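
As a minimal sketch, assuming CE1 hosts the customer RP on Loopback0 (1.1.1.1) and its PE-facing interface is FastEthernet0/0 (the non-RP CEs need only the first two pieces):

ip multicast-routing
!
interface FastEthernet0/0
 ip pim sparse-mode
!
ip pim bsr-candidate Loopback0
ip pim rp-candidate Loopback0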



Validate CE Multicast Configuration
  1. Validate PIM interfaces
  2. Validate PIM neighbors
  3. Validate the RP is seen within the local site that contains the RP (other sites won't see it until you configure the MDT piece)


** For brevity I have omitted the output of these commands, as they are the same as the previous examples and are repeated again later. 

  
PE to CE Configuration

  1. Enable multicast routing
  2. Enable multicast routing for the customer vrf 
  3. Configure PIM on the PE interface facing the CE


1.  Enable multicast routing

ip multicast-routing 

2.  Enable multicast routing on the customer VRF

ip multicast-routing vrf A 

3.  PIM on the PE interface facing the CE

interface FastEthernet0/0
 vrf forwarding A
 ip address 10.1.2.2 255.255.255.0
 ip pim sparse-mode


Validations
  1. Validate PIM interfaces in the VRF
  2. Validate the PIM neighbors in the VRF
  3. At the site where the RP exists, verify the PE sees the RP in the VRF


1.  PIM Interfaces

PE1#show ip pim vrf A interface 

Address          Interface                Ver/   Nbr    Query  DR     DR
                                          Mode   Count  Intvl  Prior
10.1.2.2         FastEthernet0/0          v2/S   1      30     1      10.1.2.2


2.  PIM Neighbors

PE1#sho ip pim vrf A neighbor 
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.1.2.1          FastEthernet0/0          00:55:20/00:01:38 v2    1 / S P G


3. Validate the RP is seen on PE1 (connected to site with RP)

PE1#sho ip pim vrf A rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 1.1.1.1 (?), v2
    Info source: 1.1.1.1 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 01:29:19, expires: 00:01:56





MULTICAST DISTRIBUTION TREE


The MDT (multicast distribution tree) solution uses PIM in the core of the network (ASM, bidir, or SSM) and builds multicast trees per VPN.  The state information maintained in the core is limited to what is specifically required to maintain those trees.  The default MDT configured in the VRF enables automatic discovery of the PE routers participating in the VPN (based on the default MDT group) for ASM.  Customer (VPN) multicast traffic is encapsulated in provider multicast groups.  



This basic configuration example only shows the use of the default MDT group.  MDT data groups can be used to further optimize the multicast traffic (see the sketch below).  ** See the reference links at the bottom for additional info. 
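
For completeness, data MDT groups are configured alongside the default MDT under the VRF address family.  A hedged sketch (the data group range and threshold are arbitrary example values):

vrf definition A
 address-family ipv4
  mdt default 239.255.255.255
  mdt data 239.255.1.0 0.0.0.255 threshold 10
 exit-address-family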



Configure MDT - Multicast Distribution Tree

  1. Enable the MDT address family in BGP and activate the existing VPNv4 PE neighbors under it
  2. Configure the default MDT group address in the vrf
  3. Ensure PIM is enabled on the loopbacks used for BGP peering



1.  Enable the MDT address family in BGP

This is added under the existing BGP process on each PE, activating the same neighbor you already peer with for VPNv4.

 address-family ipv4 mdt
  neighbor 19.19.19.19 activate
  neighbor 19.19.19.19 send-community both
 exit-address-family

2.  Configure the MDT VRF default address

vrf definition A
 rd 100:1
 !
 address-family ipv4
  mdt default 239.255.255.255
  route-target export 100:1
  route-target import 100:1
  route-target import 100:2
 exit-address-family
!

3.  Enable PIM on the loopbacks 

interface Loopback0
 ip address 2.2.2.2 255.255.255.255
 ip pim sparse-mode
end
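
One additional sanity check I find useful before moving on is confirming the PEs are exchanging MDT information over BGP; the remote PE loopbacks and the default MDT group should appear here:

PE1#show ip bgp ipv4 mdt all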


Final Validation Steps

  1. Validate on the PE the global (S,G) pairs for each PE
  2. Validate on the PE the output of 'show ip pim mdt'
  3. Validate in the core the (S,G) entries with the shortest-path tree flag
  4. Validate the PEs see each other as PIM neighbors within the vrf over the tunnel
  5. Validate the non-RP customer sites now see the RP appropriately
  6. Configure a multicast receiver
  7. Source multicast traffic and review results


1.  Global S,G pairs for PE routers



PE1#sho ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, 
       Z - Multicast Tunnel, z - MDT-data group sender, 
       Y - Joined MDT-data group, y - Sending to MDT-data group, 
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.255.255), 00:04:39/stopped, RP 4.4.4.4, flags: SJCFZ
  Incoming interface: FastEthernet1/0, RPF nbr 20.5.19.5
  Outgoing interface list:
    MVRF A, Forward/Sparse, 00:04:37/00:01:22

(2.2.2.2, 239.255.255.255), 00:01:23/00:01:36, flags: JTZ
  Incoming interface: FastEthernet0/0, RPF nbr 20.6.19.6
  Outgoing interface list:
    MVRF A, Forward/Sparse, 00:01:23/00:01:36

(19.19.19.19, 239.255.255.255), 00:04:39/00:03:00, flags: FT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:02:28/00:02:50

(*, 224.0.1.40), 01:14:14/00:02:35, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 01:14:12/00:02:35


2.  Validate the MDT with 'show ip pim mdt'

PE1#sho ip pim mdt

  * implies mdt is the default MDT

  MDT Group/Num     Interface   Source      VRF
* 239.255.255.255   Tunnel2     Loopback0   A


PIM Neighbors
PE1#sho ip pim vrf A neighbor 
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.1.2.1          FastEthernet0/0          00:18:11/00:01:35 v2    1 / S P G
19.19.19.19       Tunnel2                  00:00:48/00:01:24 v2    1 / DR S P G



3.  Validate the P routers in the core of the network see the (S,G) mroute entries for the PE routers



P4#sho ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, 
       Z - Multicast Tunnel, z - MDT-data group sender, 
       Y - Joined MDT-data group, y - Sending to MDT-data group, 
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.255.255), 00:53:11/00:03:22, RP 4.4.4.4, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet1/1, Forward/Sparse, 00:50:49/00:03:14
    FastEthernet0/1, Forward/Sparse, 00:53:11/00:03:22

(2.2.2.2, 239.255.255.255), 00:49:26/00:01:57, flags: PT
  Incoming interface: FastEthernet1/1, RPF nbr 20.2.4.2
  Outgoing interface list: Null

(19.19.19.19, 239.255.255.255), 00:52:09/00:03:00, flags: T
  Incoming interface: FastEthernet0/1, RPF nbr 20.4.5.5
  Outgoing interface list:
    FastEthernet1/1, Forward/Sparse, 00:50:49/00:03:29

(*, 224.1.1.1), 01:57:34/00:03:04, RP 4.4.4.4, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/1, Forward/Sparse, 01:57:34/00:03:04

(*, 224.0.1.40), 02:16:06/00:02:40, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 02:16:04/00:02:24


4.  Validate the PEs see each other as neighbors in the customer VRF 



PE1#sho ip pim vrf A neighbor 
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.1.2.1          FastEthernet0/0          01:01:10/00:01:34 v2    1 / S P G
19.19.19.19       Tunnel2                  00:43:47/00:01:40 v2    1 / DR S P G


5.  Validate RP Mapping at non-RP sites



CE2#sho ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 1.1.1.1 (?), v2
    Info source: 1.1.1.1 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 00:52:27, expires: 00:01:37


6.  Configure a multicast receiver



The RP is located on CE1 so I will join the 228.1.1.1 group on the CE3 router. 

interface Loopback0
 ip address 8.8.8.8 255.255.255.255
 ip pim sparse-mode
 ip igmp join-group 228.1.1.1
end

Mroute output from PE1:

We see the join from CE3 has traversed the MPLS core and the *,G is seen on the PE router. 

PE1#sho ip mroute vrf A
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, 
       Z - Multicast Tunnel, z - MDT-data group sender, 
       Y - Joined MDT-data group, y - Sending to MDT-data group, 
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 228.1.1.1), 00:21:47/00:03:24, RP 1.1.1.1, flags: S
  Incoming interface: FastEthernet0/0, RPF nbr 10.1.2.1
  Outgoing interface list:
    Tunnel2, Forward/Sparse, 00:21:47/00:03:24

(20.20.20.20, 228.1.1.1), 00:00:22/00:02:37, flags: 
  Incoming interface: Tunnel2, RPF nbr 19.19.19.19
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:00:22/00:03:07

(10.19.20.20, 228.1.1.1), 00:00:22/00:02:37, flags: 
  Incoming interface: Tunnel2, RPF nbr 19.19.19.19
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:00:22/00:03:07

(*, 224.0.1.40), 01:34:10/00:02:27, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 01:34:08/00:02:27


7.  Ping from CE2 to 228.1.1.1 and Validate

CE2#ping 228.1.1.1               
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 228.1.1.1, timeout is 2 seconds:

Reply to request 0 from 8.8.8.8, 512 ms


SUMMARY OUTLINE of STEPS

Configure the Core
           Enable multicast routing
           Enable PIM on the interfaces
           Configure the rendezvous point
Configure the PE/CE
           CE
                    Enable multicast routing
                    Enable PIM on the interfaces
                    Configure the rendezvous point
           PE
                    Enable multicast routing
                    Enable multicast routing in the vrf
                    Enable PIM on the VRF interfaces
Configure MDT
           Configure the MDT address family in BGP
           Configure MDT group addresses in the VRF
           Configure PIM on the peering loopbacks



REFERENCES and ADDITIONAL READING