L2 VPN over MPLS over GRE over IPsec on a Juniper SRX!
Posted by prox, from North Brunswick, on November 20, 2012 at 21:28 local (server) time

Yep, it's a mouthful.  However, it works and is very useful.

Traditionally, Juniper's ScreenOS line of firewalls (NetScreen, SSG, and ISG) has not supported any sort of L2 VPN.  I'm referring specifically to connecting two LANs over an IPsec connection without the need for proxy ARP, static NAT, or any L3 hops.  This has been a severe limitation, considering other vendors such as Cisco Systems have supported this type of feature on their ASA product line for quite a while.  It's also been supported in various forms on GNU/Linux systems, although the easiest implementation is with OpenVPN and the oh-so-awesome TUN/TAP driver.

That being said, Juniper's SRX product line supports a couple of forms of L2 VPN over IPsec.

A couple of weeks ago I worked with our Juniper Networks resident engineer to try these out on the SRX240.  We connected a pair of SRX240 firewalls back to back in the lab and configured two types of L2 VPNs over them: VPLS and a Martini-style pseudowire.

The initial configuration we started with was a bit complex.  It was based on an example from Juniper, but we quickly changed it around and removed the need for the logical tunnel (lt-) interfaces.

We tested the configuration with VPLS on two SRX240s cabled together and the BreakingPoint network testing appliance configured downstream.  The "routing robot" test components were used in UDP mode, and we achieved roughly 70 Mbps of IMIX traffic with packet sizes ranging between 64 and 1400 bytes to avoid fragmentation.  The SPUs on the SRX240s were at 90-95% utilization.  There was no packet loss, though!

Unfortunately, since BGP-based VPLS signaling requires BGP (duh), some odd combination of IPsec IKE negotiations and BGP caused the complete tunnel setup time to be longer than expected; at one point it was on the order of minutes.  For our purposes, since we only need one port on either side, we decided to scrap VPLS and use an l2circuit instead.
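For reference, here's a minimal sketch of roughly what the BGP-signaled VPLS stage looked like before we scrapped it.  The instance and site names, route distinguisher, and route target below are illustrative, not our exact values:

routing-instances {
    VPLS-LAB {
        instance-type vpls;
        /* in this mode, ge-0/0/0 uses encapsulation ethernet-vpls and
           family vpls instead of the ccc setup shown later */
        interface ge-0/0/0.0;
        route-distinguisher 10.0.0.2:100;
        vrf-target target:65000:100;
        protocols {
            vpls {
                site-range 10;
                /* uses an LSI instead of a vt- tunnel-services interface */
                no-tunnel-services;
                site LAB-A {
                    site-identifier 1;
                }
            }
        }
    }
}
protocols {
    bgp {
        group IBGP {
            type internal;
            local-address 10.0.0.2;
            family l2vpn signaling;
            neighbor 10.0.0.1;
        }
    }
}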

The l2circuit is signaled with LDP rather than BGP (OSPF remains as the IGP).  The BGP process was deleted and, as a result, the setup time appeared to decrease slightly.  We also bumped the L3 MTU on the downstream physical interfaces up to 1600 in order to adhere to our standard MTU for WAN interfaces, knowing full well that this would result in fragmentation, since the Internet-facing interface is set to 1500.  We ended up with a configuration similar to the following:

interfaces {
    ge-0/0/0 {
        /* downstream port carried over the pseudowire */
        mtu 1614;
        encapsulation ethernet-ccc;
        unit 0 {
            family ccc {
                filter {
                    input CCC-packet-mode;
                }
            }
        }
    }
    gr-0/0/0 {
        /* GRE tunnel riding inside the st0.0 IPsec tunnel */
        unit 0 {
            tunnel {
                source 10.0.0.130;
                destination 10.0.0.129;
            }
            family inet {
                mtu 9000;
                address 10.0.0.134/30;
            }
            family mpls {
                mtu 9000;
                filter {
                    input MPLS-packet-mode;
                }
            }
        }
    }
    ge-0/0/15 {
        /* Internet-facing interface */
        unit 0 {
            family inet {
                address 192.0.2.10/30;
            }
        }
    }
    lo0 {
        unit 0 {
            family inet {
                filter {
                    input protect-re;
                }
                address 10.0.0.2/32;
                address 127.0.0.1/32;
            }
        }
    }
    st0 {
        /* route-based IPsec tunnel interface */
        unit 0 {
            family inet {
                address 10.0.0.130/30;
            }
        }
    }
}
routing-options {
    static {
        route 0.0.0.0/0 next-hop 192.0.2.9;
    }
}
protocols {
    mpls {
        interface gr-0/0/0.0;
    }
    ospf {
        area 0.0.0.0 {
            interface lo0.0 {
                passive;
            }
            interface gr-0/0/0.0;
        }
    }
    ldp {
        interface gr-0/0/0.0;
        interface lo0.0;
    }
    l2circuit {
        neighbor 10.0.0.1 {
            interface ge-0/0/0.0 {
                virtual-circuit-id 100000;
                encapsulation-type ethernet;
            }
        }
    }
}
firewall {
    family mpls {
        filter MPLS-packet-mode {
            term all-traffic {
                then {
                    packet-mode;
                    accept;
                }
            }
        }
    }
    family ccc {
        filter CCC-packet-mode {
            term 1 {
                then {
                    packet-mode;
                    accept;
                }
            }
        }
    }
}

I didn't include the security section or the protect-re firewall filter in the above configuration.  The security section consists of an IPsec VPN bound to st0.0 plus the associated zones and policies; st0.0, gr-0/0/0.0, and lo0.0 were all put in the same zone with permissive policies.  Also, the "family ccc" under the firewall section is new to me.  I wasn't aware that the ccc family existed prior to working on this!  It's apparently used here to instruct the SRX to process frames at L2 while also bypassing the flow module.
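For the curious, the omitted security section of a route-based SRX VPN generally looks something like the sketch below.  The policy, gateway, and VPN names are made up, and we used AES-256/SHA-1 proposals rather than the standard proposal sets shown here:

security {
    ike {
        policy IKE-POL {
            mode main;
            proposal-set standard;
            pre-shared-key ascii-text "..."; ## SECRET-DATA
        }
        gateway REMOTE-GW {
            ike-policy IKE-POL;
            address 192.0.2.127;
            external-interface ge-0/0/15.0;
        }
    }
    ipsec {
        policy IPSEC-POL {
            proposal-set standard;
        }
        vpn L2-TRANSPORT {
            /* binding the VPN to st0.0 is what makes it route-based */
            bind-interface st0.0;
            ike {
                gateway REMOTE-GW;
                ipsec-policy IPSEC-POL;
            }
            establish-tunnels immediately;
        }
    }
}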

Surprisingly, after testing with large packets (up to the MTU of the interface), everything worked well, even with the fragmentation.  The IMIX throughput was roughly the same.  We then put this into production, and everything has been working well for about a week now!  In fact, the final setup involves running MPLS over this whole stack, so we've got an additional layer of MPLS to add to the fun.
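As a back-of-the-envelope check on where the fragmentation comes from, here's roughly what gets stacked onto each customer frame before it reaches the 1500-byte Internet-facing interface.  The ESP figure varies with padding, and the MPLS transport label may be popped (PHP) on this one-hop LSP, so treat these as approximations:

  1614 bytes     customer frame (1600-byte L3 MTU + 14-byte Ethernet header)
  +  4 bytes     pseudowire control word (negotiated, per the output below)
  +  4 bytes     MPLS VC label
  +  4 bytes     MPLS transport label (if not popped)
  +  4 bytes     GRE header
  + 20 bytes     outer IPv4 header for the GRE tunnel
  + 58-73 bytes  ESP tunnel mode (outer IP, SPI/sequence, AES-CBC IV,
                 padding, and SHA-1 ICV)

A full-sized frame therefore comes out well north of 1700 bytes on the wire, so the outer packets get fragmented down to the 1500-byte MTU.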

Here's what things ultimately look like:

prox@srx240> show security ipsec security-associations
  Total active tunnels: 1
  ID    Algorithm       SPI      Life:sec/kb  Mon vsys Port  Gateway
  <131043 ESP:aes-256/sha1 cf2f7c23 1377/ unlim -  root 500   192.0.2.127
  >131043 ESP:aes-256/sha1 4cdcd526 1377/ unlim -  root 500   192.0.2.127

prox@srx240> show ospf neighbor
Address          Interface              State     ID               Pri  Dead
10.0.0.133       gr-0/0/0.0             Full      10.0.0.1         128    35

prox@srx240> show ldp session
  Address           State        Connection     Hold time
10.0.0.1            Operational  Open             20

prox@srx240> show l2circuit connections
Layer-2 Circuit Connections:

Legend for connection status (St)
EI -- encapsulation invalid      NP -- interface h/w not present
MM -- mtu mismatch               Dn -- down
EM -- encapsulation mismatch     VC-Dn -- Virtual circuit Down
CM -- control-word mismatch      Up -- operational
VM -- vlan id mismatch           CF -- Call admission control failure
OL -- no outgoing label          IB -- TDM incompatible bitrate
NC -- intf encaps not CCC/TCC    TM -- TDM misconfiguration
BK -- Backup Connection          ST -- Standby Connection
CB -- rcvd cell-bundle size bad  SP -- Static Pseudowire
LD -- local site signaled down   RS -- remote site standby
RD -- remote site signaled down  XX -- unknown

Legend for interface status
Up -- operational
Dn -- down
Neighbor: 10.0.0.1
    Interface                 Type  St     Time last up          # Up trans
    ge-0/0/0.0(vc 100000)     rmt   Up     Nov 12 15:49:15 2012           1
      Remote PE: 10.0.0.1, Negotiated control-word: Yes (Null)
      Incoming label: 299776, Outgoing label: 299776
      Negotiated PW status TLV: No
      Local interface: ge-0/0/0.0, Status: Up, Encapsulation: ETHERNET

prox@srx240>

We're probably going to use this solution again in the future, since it happens to work better than other solutions (proxy ARP, etc.).  However, it's unfortunate that we can't get more than roughly 70 Mbps over the connection.  I suspect an SRX650 can do much better.
