mpls: pseudowire emulation / any transport over mpls – interconnecting sites via layer 2 – (part 2)

Hi again!
Now it's time for part 2 of the MPLS Layer 2 interconnection between two sites.

Challenge:
– interconnect two sites via PEM, but without an MPLS-enabled service provider network

Preconfigured & given:
– topology of Part 1
– CORE routers don't run MPLS on their backbone interfaces

So here we go. First we remove MPLS from the backbone interfaces of the core routers and check that everything is the way we want it.

CORE1(config)#int fa0/0
CORE1(config-if)#no mpls ip
!
CORE2(config)#int fa0/0
CORE2(config-if)#no mpls ip
CORE2(config)#int fa0/1
CORE2(config-if)#no mpls ip
!
CORE3(config)#int fa0/0
CORE3(config-if)#no mpls ip

CORE2#sh mpls int
Interface              IP            Tunnel   Operational

No interfaces listed for MPLS switching! Perfect! So now we can be sure that CORE2 definitely won't do any MPLS forwarding.
Now we want to use AToM/PEM, but the problem is that this technology relies on MPLS. Hm… so what are we going to do now? Well… let's use a GRE tunnel for that. The tunnel is built between the loopbacks of CORE1 and CORE3.

CORE1(config)#int tun 0
CORE1(config-if)#tunnel mode gre ip
CORE1(config-if)#tunnel source lo0
CORE1(config-if)#tunnel destination 3.3.3.3
CORE1(config-if)#ip address 13.13.13.1 255.255.255.252
!
CORE3(config)#int tun0
CORE3(config-if)#tunnel mode gre ip
CORE3(config-if)#tunnel source lo0
CORE3(config-if)#tunnel destination 1.1.1.1
CORE3(config-if)#ip address 13.13.13.2 255.255.255.252
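
A quick sanity check that the tunnel interfaces actually came up doesn't hurt – a GRE tunnel only goes up/up when there is a route to the tunnel destination (output illustrative):

CORE1#sh ip int brief | include Tunnel
Tunnel0          13.13.13.1      YES manual up                    up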

Check if the tunnels are working!

CORE3#ping 13.13.13.1 so tun0

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 13.13.13.1, timeout is 2 seconds:
Packet sent with a source address of 13.13.13.2
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 108/136/156 ms
CORE3#

Fine! Since any traffic going through the tunnel interfaces is encapsulated in GRE, LDP will be carried that way as well. So we now activate LDP on those two interfaces.

CORE1(config)#int tun0
CORE1(config-if)#mpls ip
!
CORE3(config)#int tun0
CORE3(config-if)#mpls ip

So far so good. Let's check the neighbors and the interfaces.

CORE3#sh mpls ldp neigh
    Peer LDP Ident: 1.1.1.1:0; Local LDP Ident 3.3.3.3:0
        TCP connection: 1.1.1.1.646 - 3.3.3.3.17830
        State: Oper; Msgs sent/rcvd: 11/11; Downstream
        Up time: 00:00:21
        LDP discovery sources:
          Tunnel0, Src IP addr: 13.13.13.1
        Addresses bound to peer LDP Ident:
          172.16.21.1     1.1.1.1         13.13.13.1
CORE3#sh mpls int
Interface              IP            Tunnel   Operational
Tunnel0                Yes (ldp)     No       Yes

OK, looking good. So now we will try to “xconnect” the two tunnel interface addresses.
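
The pw-class referenced here comes from Part 1; as a reminder, it is presumably nothing more than the minimal MPLS encapsulation class, roughly like this:

pseudowire-class PWC-SITE-1-2
 encapsulation mpls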

CORE1(config)#int fa0/1
CORE1(config-if)#xconnect 13.13.13.2 12 pw-class PWC-SITE-1-2
!
CORE3(config)#int fa0/1
CORE3(config-if)#xconnect 13.13.13.1 12 pw-class PWC-SITE-1-2

As soon as we configure it, we get an error message saying:

*Mar 1 03:02:07.303: %ATOM_TRANS-4-CONFIG: 13.13.13.1 mismatches the peer router id 1.1.1.1

Well, the problem is that by default the highest loopback address is chosen as the LDP router-id, and AToM needs to connect to the router-id IP address (at least to my knowledge). The tunnel IP address is not equal to the router-id. You could change that with the “mpls ldp router-id tunnel0 force” command, but using that in a production network can get you into trouble. Instead, you should create a new loopback, let's say Lo1, and then use that IP address for all PEM/AToM connections, with static routing over the tunnels. So let's just do this.

CORE1(config)#int lo1
CORE1(config-if)#ip address 11.11.11.11 255.255.255.255
CORE1(config-if)#mpls ldp router-id lo1 force
!
CORE3(config)#int lo1
CORE3(config-if)#ip address 33.33.33.33 255.255.255.255
CORE3(config-if)#mpls ldp router-id lo1 force

NOTE: When changing the router-id, all LDP neighbor relationships will go down! Keep that in mind if you do this in a production network.

The goal of this step was to create loopback addresses that can be used for the xconnect peerings AND to make sure that traffic between them goes through the tunnel interfaces. So let's create static routes for that.

CORE1(config)#ip route 33.33.33.33 255.255.255.255 tun0
!
CORE3(config)#ip route 11.11.11.11 255.255.255.255 tun0

Reachability check!

CORE3#ping 11.11.11.11 so lo1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 11.11.11.11, timeout is 2 seconds:
Packet sent with a source address of 33.33.33.33
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 64/129/236 ms
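
With that reachability in place, the LDP session should also have re-established itself between the new router-ids – a quick check (output shortened, along the lines of the earlier one):

CORE3#sh mpls ldp neigh
    Peer LDP Ident: 11.11.11.11:0; Local LDP Ident 33.33.33.33:0
...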

Looking good. So the next step is to reconfigure the xconnect statements.

CORE1(config)#int fa0/1
CORE1(config-if)#no xconnect 13.13.13.2 12
CORE1(config-if)#xconnect 33.33.33.33 12 pw-class PWC-SITE-1-2
!
CORE3(config)#int fa0/1
CORE3(config-if)#no xconnect 13.13.13.1 12
CORE3(config-if)#xconnect 11.11.11.11 12 pw-class PWC-SITE-1-2

Give it some time to converge and then check the VC details.

CORE1#sh mpls l2transport vc

Local intf     Local circuit              Dest address    VC ID      Status
-------------  -------------------------  --------------- ---------- ----------
Fa0/1          Ethernet                   33.33.33.33     12         UP
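
If you also want to see the negotiated labels and the output interface, “sh mpls l2transport vc 12 detail” is the place to look. The output below is trimmed and the label value is only illustrative, but note that the imposed label stack contains just a single label:

CORE1#sh mpls l2transport vc 12 detail
Local interface: Fa0/1 up, line protocol up, Ethernet up
  Destination address: 33.33.33.33, VC ID: 12, VC status: up
    Output interface: Tu0, imposed label stack {17}
...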

Looks good! Let's do a basic ICMP test with standard size and then have a look at the additional headers we created.

SITE1#ping 10.10.10.2 rep 10

Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 10.10.10.2, timeout is 2 seconds:
!!!!!!!!!!
Success rate is 100 percent (10/10), round-trip min/avg/max = 52/135/236 ms

Cool, it works. When we capture the traffic between CORE1 and CORE2, we should see GRE-encapsulated traffic, as MPLS is disabled on CORE2.

As we can see, there is an additional GRE header, which also takes up space (4 bytes), and since GRE is an IP encapsulation technology, the frames don't get encapsulated directly into MPLS. Everything that goes through the tunnel is wrapped into an IP packet, so here we get another 20 bytes.

Well, as you can see, there is only one MPLS header! Guess why! Penultimate Hop Popping (PHP) is in use by default, and as there is only one label-switched hop here (between the tunnel endpoints), there is no need for an outer transport label. So you only see the VC label.
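
You can verify this from the CLI as well: CORE3 advertises an implicit-null label for its own loopback, so CORE1 does not impose a transport label at all and only the VC label ends up on the wire (output illustrative; depending on the IOS version the columns read “tag” or “Label”):

CORE1#sh mpls forwarding-table 33.33.33.33
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id      switched   interface
16     Pop tag     33.33.33.33/32    0          Tu0        point2point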

Let's add everything up to calculate the required MTU size of the backbone interfaces.
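
Per frame we have (the last 4 bytes are an assumption on my part – depending on the setup they account for the AToM control word or a dot1q tag):

– 20 bytes outer IP header (the GRE tunnel)
– 4 bytes GRE header
– 4 bytes MPLS VC label (no transport label thanks to PHP)
– 14 bytes transported Ethernet header of the customer frame
– 1500 bytes customer IP packet
– 4 bytes control word / dot1q tag (assumption, see above)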

The new L2 payload size (MTU) for our backbone interfaces is therefore 20 + 4 + 4 + 14 + 1500 + 4 = 1546 bytes. Let's configure this now.

CORE1(config)#int fa0/0
CORE1(config-if)#mtu 1546
!
CORE2(config)#int fa0/0
CORE2(config-if)#mtu 1546
CORE2(config-if)#int fa0/1
CORE2(config-if)#mtu 1546
!
CORE3(config)#int fa0/0
CORE3(config-if)#mtu 1546

Take some time to let ISIS reconverge – by default ISIS pads its hello packets to the full interface MTU, so as long as the MTUs on both ends of a link don't match, the adjacency will drop and has to re-form.

SITE1#ping 10.10.10.2 rep 10 size 1500 df

Type escape sequence to abort.
Sending 10, 1500-byte ICMP Echos to 10.10.10.2, timeout is 2 seconds:
Packet sent with the DF bit set
!!!!!!!!!!
Success rate is 100 percent (10/10), round-trip min/avg/max = 100/165/208 ms

Works!
So have fun with that scenario and feel free to comment!

Regards!
Markus
