Network Configuration Example: Campus Fabric Core Distribution-CRB-Wired Assurance
Hello and welcome to Juniper’s Campus Fabric Core Distribution overview.
First, we’ll discuss why customers are deploying campus fabric core distribution.
We’ll then dive deep into each of the building blocks.
And then we’ll discuss which Juniper hardware platforms support this particular architecture.
Although this is not an exhaustive list, these are the top four technical challenges that we’ve experienced in legacy campus networks over the past couple of years.
And I’ll start with micro-segmentation.
There are many reasons why customers segment traffic within a broadcast domain.
That could be due to legacy equipment that can’t communicate across a Layer 3 boundary, or application requirements for Layer 2 adjacency.
The list is fairly extensive.
In any case, isolating traffic within a broadcast domain itself is very difficult and can really only be done through the use of private VLANs.
Private VLANs are problematic for a number of reasons.
They’re difficult to configure, they don’t interoperate well between third-party devices, and they lack scale.
The second challenge here is really inefficient ACL usage.
Most customers place ACLs, or firewall filters if you prefer, everywhere to further segment traffic: access, distribution, core, and then, of course, the firewalls themselves.
What happens is you end up with ACL sprawl, which becomes an operational challenge.
The third challenge is Layer 2 extensibility.
I haven’t talked to a customer that doesn’t have the need to extend at least a couple of VLANs across their campus network, even if they’re routing at the access layer.
And when that happens, you have to plumb VLANs from access to access across your network, which increases your blast radius and effectively exposes your entire network to large broadcast domains and, of course, spanning tree loops and so forth.
And then the last challenge here is really lack of standards.
Even though there are standards that are built, most of the time, they’re not exactly adhered to.
Perfect example would be the distribution layer here.
Those two switches are interconnected, which is a very typical MC-LAG deployment.
I’ve not yet seen a customer deployment with multi-vendor MC-LAG.
It’s always vendor specific.
And with multichassis LAG, you can’t scale past two devices.
So horizontal scaling, or scale in general, becomes problematic.
Juniper campus fabric solves many of these customer problems.
Let’s start with micro-segmentation.
Remember, we talked about the challenges of private VLAN earlier.
With an EVPN-VXLAN deployment, a customer can utilize group-based policy, a standard built on the VXLAN header, by applying scalable group tags to devices upon authentication.
This authentication happens against a NAC solution, a standards-based RADIUS server.
That server holds a scalable group tag, which is just a numeric identifier that is passed upon authentication to the Juniper access layer switch, which then applies that scalable group tag to the authenticated MAC address or device anywhere within the network, providing incredibly efficient ACL usage.
So now customers can define consistent firewall filters that look the same across all access switches.
One disclaimer is that the campus fabric IP Clos architecture is required for group-based policy.
And that is because VXLAN at the access layer is required for this technology.
EVPN-VXLAN supports Layer 2 extensibility, where a customer doesn’t have to plumb VLANs from end-to-end exposing physical interfaces to broadcast or control issues we talked about earlier.
VXLAN is a tunneling standard that allows for that flexibility.
And then the campus fabric is built on EVPN-VXLAN, which is a standard taken from work in our service provider realm with EVPN and work in the data center realm with VXLAN.
Combine those two together and you have a very strong standards based approach.
Juniper campus fabric core distribution has two deployment methodologies.
Here CRB means centrally routed bridging.
Effectively, we’re building an EVPN-VXLAN fabric between the core and distribution layers, where Layer 2 lives at the distribution layer and Layer 3 routing between VXLAN segments happens at the core, hence centrally routed bridging.
This is perfect for north-south traffic patterns.
The technology is based on EVPN-VXLAN.
So it’s a standard.
What you’ll notice down below is the access layer, which could be a Juniper Virtual Chassis or a standalone Juniper switch.
It could even be third-party switches, because to the access layer devices these are standard LAGs looking northbound.
So when the access layer device looks northbound and sees a single MAC address and a single system ID, it doesn’t have to worry about things like spanning tree on that LAG or on that link.
And so active multi-homing is supported.
Very simple to deploy this methodology, particularly when the access layer can remain the same.
No new hardware, software upgrades are required to support this technology.
Campus Fabric Core Distribution: ERB (Edge Routed Bridging).
You’ll notice, we still support EVPN-VXLAN between core distribution.
The difference here is we’re reducing the ‘blast radius’.
We’re keeping Layer 3 at the distribution layer, so that layer can route between VXLAN networks.
And that’s a perfect deployment model for customers who have, or are moving toward, more of an east-west traffic pattern within their infrastructure.
Same type of deployment methodology at the access layer – a standard LAG, no difference there between ERB and CRB.
ERB is also optimized for IP multicast.
So customers that might have push-to-talk, or might be using video content for training purposes, if they’re utilizing IP multicast, would find campus fabric core distribution in ERB mode the optimal method.
Juniper’s validated designs include all campus fabric architectures.
In this presentation, we focus on core distribution in two modes: centrally routed bridging and edge routed bridging.
You’ll notice the positioning advantages below.
Campus fabric EVPN-VXLAN building blocks are as follows.
We first build an underlay and then we build an overlay on top of that underlay.
We position our Layer 2 and Layer 3 VXLAN gateways.
And then we connect devices to the fabric through lag technologies.
You’ll notice here, we build a simple IP fabric underlay.
Now, this is common to both CRB and ERB modes of campus fabric core distribution.
Notice, we have core 1, core 2, distribution 1, and distribution 2.
We’re leveraging technologies such as eBGP or OSPF as routing protocols between these four devices.
When I say between these four devices, you’re really looking at the interconnect between these devices.
So the cores don’t interconnect.
The distribution switches don’t interconnect either.
This is what we call an IP Clos type of deployment within the core distribution model.
So it’s a simple Layer 3 fabric at the core and at the distribution layer.
What this technology really cares about is routing the loopback.
So you’ll notice we have loopbacks for each device.
Those loopbacks need to be reachable through the underlay, which is a high speed interconnect.
We leverage ECMP (Equal Cost Multipath) load balancing between the cores and distribution in the most efficient manner.
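Just to make that concrete, here is a minimal sketch of what an eBGP underlay might look like on one of these switches in Junos; the interface names, addresses, and AS numbers are placeholders and not taken from this presentation.
  # Point-to-point fabric link plus the loopback that the overlay will ride on
  set interfaces et-0/0/1 unit 0 family inet address 172.16.1.1/31
  set interfaces lo0 unit 0 family inet address 10.255.0.1/32
  set routing-options router-id 10.255.0.1
  set routing-options autonomous-system 65001
  # Advertise only the loopback into the underlay
  set policy-options policy-statement EXPORT-LO0 term 1 from interface lo0.0
  set policy-options policy-statement EXPORT-LO0 term 1 then accept
  set protocols bgp group UNDERLAY type external
  set protocols bgp group UNDERLAY export EXPORT-LO0
  set protocols bgp group UNDERLAY multipath multiple-as
  set protocols bgp group UNDERLAY neighbor 172.16.1.0 peer-as 65003
  # ECMP load balancing across the fabric links
  set policy-options policy-statement ECMP then load-balance per-packet
  set routing-options forwarding-table export ECMP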
The overlay control plane protocol is multiprotocol BGP with EVPN signaling.
It’s the same type of technology that we have in all of our campus fabric deployment models.
So whether it’s EVPN multihoming, core distribution, or IP Clos, it is multiprotocol BGP EVPN.
And in this case, it runs between the core and distribution layers.
So we built our underlay.
That’s our high speed transport.
On top of the underlay, we leverage the overlay control plane.
Multiprotocol BGP with an EVPN address family supports Layer 2 MAC learning and withdrawal in the control plane, which is different from what we normally have in the legacy world at the data plane level.
Notice that, in the legacy world, if I learn a MAC address or withdraw a device, that’s flooded to all devices within that VLAN, which becomes problematic the larger that network is and wherever that VLAN might reside.
With EVPN, we utilize BGP.
We don’t have to flood new MAC learns or withdrawals using broadcast methodologies.
We use standard BGP routing technologies.
You’ll notice the term VTEP.
VTEP is a software instantiation, typically tied to the loopback address.
And that’s where VXLAN tunnels are terminated.
So that we can extend Layer 2 amongst this network.
And no need for route reflectors, because we’re using eBGP as a control plane between cores and distribution.
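As a rough sketch of that overlay, again with placeholder loopback addresses and AS numbers, the eBGP EVPN peering and VTEP definition on a distribution switch might look something like this:
  # eBGP EVPN peering between loopbacks
  set protocols bgp group OVERLAY type external
  set protocols bgp group OVERLAY multihop ttl 2
  set protocols bgp group OVERLAY local-address 10.255.0.1
  set protocols bgp group OVERLAY family evpn signaling
  set protocols bgp group OVERLAY neighbor 10.255.0.3 peer-as 65003
  # VXLAN encapsulation and the loopback-based VTEP
  set protocols evpn encapsulation vxlan
  set protocols evpn extended-vni-list all
  set switch-options vtep-source-interface lo0.0
  set switch-options route-distinguisher 10.255.0.1:1
  set switch-options vrf-target target:65000:1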
So as we look at the Layer 2 VXLAN gateway concept, with CRB and ERB that is always going to be at the distribution layer; you’ll notice a difference when we look at Layer 3.
But Layer 2 is where we instantiate the mapping effectively between a standard VLAN and VXLAN network.
The concept of VNI is utilized here.
So a very common, kind of recommended deployment methodology is for VLANs that are part of a single domain, whereby you might have VLANs 100, 200, and 300 (that could be your IT department) that need to be isolated from your servers or from your PCI traffic.
And you can do that with what we call routing instances.
We’ll talk about it in the next couple of slides.
But what the VNI does is map to a VLAN.
So a common methodology is to take the VLANs that belong together, the same management domain, the same routing instance, and apply a consistent numbering scheme to each one of those VLANs.
So VLAN 100 would map to VNI 1100, VLAN 200 to VNI 1200, and VLAN 300 to VNI 1300.
And that first digit tells the administrator these are part of the same routing instance.
That’s kind of a best practice, if you will.
VXLAN is a tunneling methodology for Layer 2.
And it supports roughly 16 million (2^24) VXLAN network identifiers.
This was built in the data center world, where you have overlapping VLAN requirements and the need for multi-tenancy, and we are borrowing that technology for the campus space.
Not that we need 16 million identifiers, but we definitely need VXLAN to tunnel and basically extend VLANs across this EVPN-VXLAN fabric.
And once again, the gateway is at the distribution layer in both methodologies.
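A minimal sketch of that VLAN-to-VNI mapping on a distribution switch, using the example VLAN and VNI numbers mentioned above, might look like this:
  set vlans VLAN100 vlan-id 100
  set vlans VLAN100 vxlan vni 1100
  set vlans VLAN200 vlan-id 200
  set vlans VLAN200 vxlan vni 1200
  set vlans VLAN300 vlan-id 300
  set vlans VLAN300 vxlan vni 1300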
Layer 3 VXLAN gateway capabilities for centrally routed bridging reside at the core layer.
This is ideal for north-south traffic patterns and good for customers who want to centralize where routing occurs within their fabric.
The term IRB (integrated routing and bridging) refers to the interfaces that perform this Layer 3 routing.
One common use case would be for customers who don’t have nearly the security requirements for micro-segmentation or macro-segmentation.
And they’ll place all subnets in a single routing instance.
That means that all VLANs will route through the core with no problem.
VLAN 100 and 200, 200 and 300, Layer 3 routing, back to the core, no issue.
Now, customers might want to segment traffic, and one of the benefits of EVPN-VXLAN is logical separation of traffic, right?
This is done with routing instances, which are analogous to VRFs in the traditional world.
Juniper uses the term routing instances, and customers place their IRBs in specific routing instances.
So all Layer 3 traffic between IRBs within a routing instance is routed at the core.
And then inter-VRF traffic could be forced and routed through a firewall or a stateful device, northbound of this EVPN-VXLAN fabric.
And remember the Layer 3 VXLAN gateways all reside in core switches in a CRB mode.
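As a sketch of that CRB Layer 3 gateway on a core switch, with a placeholder subnet and gateway address, an IRB interface and its VLAN association might look something like this:
  set interfaces irb unit 100 family inet address 10.1.100.2/24 virtual-gateway-address 10.1.100.1
  set vlans VLAN100 vlan-id 100
  set vlans VLAN100 vxlan vni 1100
  set vlans VLAN100 l3-interface irb.100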
For customers who might be looking at east-west traffic patterns or a higher level of IP multicast, edge routed bridging would be preferred.
Notice the Layer 3 VXLAN gateway here is at the distribution layer, which allows the core to be a little leaner, without Layer 3 routing capabilities.
Inter-VRF traffic can still be passed through a northbound stateful firewall for advanced security between VRFs.
Very similar to what we have in CRB.
So we talked about one of the benefits of EVPN-VXLAN being logical separation, VRF segmentation.
In Junos, the term routing instance is used.
That’s analogous to a VRF.
So we can use VRF just for the concept of this discussion.
Notice, this is very popular: employee, guest, IoT.
These are three individual VRFs.
You notice the different VLANs and the routing capabilities within each VRF.
Once again, by default, those employee VLANs will all be routed within the fabric.
And then for a customer who really wants additional security and where they might want to completely segment this traffic from end to end, they’ll pass this traffic up to a third party cluster of firewalls or other devices, maybe advanced security devices.
And that device could then route between VRFs if needed.
As customers build these routing instances, or VRFs, within the campus fabric, there is no communication between those VRFs by default.
You either have to enable it explicitly or pass the traffic to a northbound device, which is the preferred methodology.
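As a sketch of that segmentation, placing the employee, guest, and IoT IRBs into separate routing instances might look something like this; the instance names, IRB units, route distinguishers, and route targets are placeholders:
  set routing-instances EMPLOYEE instance-type vrf
  set routing-instances EMPLOYEE interface irb.100
  set routing-instances EMPLOYEE route-distinguisher 10.255.0.1:100
  set routing-instances EMPLOYEE vrf-target target:65000:100
  set routing-instances GUEST instance-type vrf
  set routing-instances GUEST interface irb.200
  set routing-instances GUEST route-distinguisher 10.255.0.1:200
  set routing-instances GUEST vrf-target target:65000:200
  set routing-instances IOT instance-type vrf
  set routing-instances IOT interface irb.300
  set routing-instances IOT route-distinguisher 10.255.0.1:300
  set routing-instances IOT vrf-target target:65000:300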
And then, in campus fabric core distribution, we have LAGs connecting to the EVPN fabric.
I don’t include the core in this picture.
I show the distribution layer because that’s where the VXLAN tunnels really start and end.
And you’ll notice that this is a standard LAG.
So there is no ICL (inter-chassis link) requirement between the distribution devices.
Notice that distribution devices are not interconnected.
They are independent BGP peers.
We talked about this earlier using multi protocol BGP.
And BGP serves as both the underlay and overlay protocol.
So very flexible.
By default, active-active multi-homing is supported here.
So the southbound device, the access layer looking northbound, sees a standard LAG.
Once again, it could be a Virtual Chassis, an individual switch, a stack of third-party devices, what have you.
Really any access layer switch could be utilized here.
And you’ll hear the term ESI (Ethernet Segment Identifier).
Since the distribution devices are not physically attached to each other and are using eBGP, they discover that they both have the same Ethernet Segment Identifier (ESI) as they attach to the same access layer switches down below.
That’s how they know that they’re part of the same Ethernet segment, and they communicate accordingly for things like forwarding of broadcast, unknown unicast, and multicast (BUM) traffic.
One of those distribution devices will be the designated forwarder for that traffic.
And the other will be the backup.
But by default, it is an active-active, load-balancing multihoming deployment.
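A minimal sketch of such an ESI-LAG facing an access switch, configured the same way on both distribution devices with a matching ESI and LACP system ID (all values here are placeholders), might look like this:
  set chassis aggregated-devices ethernet device-count 5
  set interfaces et-0/0/10 ether-options 802.3ad ae0
  # Same ESI and LACP system ID on both distribution switches enables active-active multihoming
  set interfaces ae0 esi 00:11:11:11:11:11:11:11:11:11
  set interfaces ae0 esi all-active
  set interfaces ae0 aggregated-ether-options lacp active
  set interfaces ae0 aggregated-ether-options lacp system-id 00:00:01:01:01:01
  set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
  set interfaces ae0 unit 0 family ethernet-switching vlan members [ VLAN100 VLAN200 VLAN300 ]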
The config build out for CRB starts with the underlay.
Whether it be OSPF or eBGP, equal-cost multipath is leveraged to load-balance reachability to loopbacks across multiple links.
Once the underlay is built, we overlay with the control plane protocol called multi-protocol BGP, address family EVPN signaling.
EVPN is utilized to manage the fabric.
When MAC addresses are learned or removed from the network, we utilize BGP, which is much more scalable than data plane flooding.
VLAN-to-VNI mapping, which we talked about earlier, happens at the distribution layer.
Here we take our VLANs and map a VXLAN network identifier to each of them.
This allows us to extend Layer 2 across this campus fabric over an IP network without having to plumb VLANs end to end.
And then the Layer 3 routing between VXLAN segments happens at the core.
Once again, very applicable to a traditional north-south deployment.
You’ll notice the only difference between edge routed bridging and centrally routed bridging is where the Layer 3 gateway resides.
It’s at the distribution layer closest to the southbound infrastructure.
The reasons for this would be east-west traffic patterns and a smaller blast radius, as well as customers who might be looking to deploy IP multicast in larger environments.
Campus fabric core distribution platforms are shown here.
Notice the access layer is a myriad of Juniper access switches.
As we talked about earlier, this deployment methodology can also leverage any third-party standalone switch, a Juniper Virtual Chassis, or a third-party stacking solution, because it’s just a standard LAG connected to the fabric.
At the distribution layer, we have the QFX5110, QFX5120, and EX4650.
And at the core layer, we have the chassis-based EX9200, the QFX5120, and the EX4650.
Juniper switching platforms that are part of the AI-Driven Enterprise business unit within Juniper are shown here.
We have a nice array of access and core/distribution platforms, including the modular EX9200 for customers who prefer a chassis-based solution.
Notice two new switches in our access portfolio, the EX4100-F and EX4100, that provide EVPN-VXLAN, group-based policy, as well as flow-based telemetry.
We also have, through the Mist UI, a concept called Wired Assurance, which allows customers to carry out Day 0, Day 1, and Day 2 activities for the Juniper switching platforms.
Thank you for listening.
And we hope this session was informative.