Network Configuration Example: Campus Fabric IP Clos Wired Assurance
Hello and welcome to Juniper’s Campus Fabric IP Clos overview. We will discuss why companies are deploying Campus Fabric IP Clos, dive deep into each of the building blocks, and then wrap up with the Juniper platforms that are supported in this architecture.
Although this is not an exhaustive list, these are the top four technical challenges that we’ve experienced in legacy campus networks over the past couple of years, and we’ll start with micro-segmentation. There are many reasons why customers segment traffic within a broadcast domain: legacy equipment that can’t cross a layer three boundary, application requirements for layer two, and so on. The list is fairly extensive. At any rate, isolating traffic within a broadcast domain itself is very problematic and can only really be done through the use of private VLANs.
Private VLANs are problematic for a number of reasons: they’re difficult to configure, they’re difficult to interoperate between third-party devices, and they lack scale. The second challenge is inefficient ACL usage. Most customers place ACLs, or firewall filters if you prefer, everywhere to further segment traffic: access, distribution, core, and of course the firewalls. What you end up with is ACL sprawl, which becomes an operational challenge.
I haven’t talked to a customer that doesn’t need to extend at least a couple of VLANs across their campus network, even if they’re routing at the access layer. When that happens, you have to plumb VLANs from access switch to access switch across your network, which increases your blast radius and effectively exposes your entire network to large broadcast domains and spanning tree loops.
The last challenge is the lack of standards. Even where standards exist, most of the time they’re not exactly adhered to. A perfect example is the distribution layer here: those two switches are interconnected, which is a very typical MC-LAG deployment. I have not yet seen a customer deployment with multi-vendor MC-LAG; it’s always vendor-specific. And with multi-chassis LAG you can’t scale past two devices, so horizontal scaling, or scale in general, becomes problematic.
Juniper Campus Fabric solves many campus problems, starting with micro-segmentation. Remember the challenges with private VLANs. With an EVPN-VXLAN architecture extending all the way down to the access layer, customers can realize micro-segmentation through group-based policy, which is carried in the VXLAN header. This provides micro-segmentation both within a switch and across an EVPN-VXLAN fabric and is supported in Juniper’s Campus Fabric IP Clos architecture. It also makes ACL usage efficient across the access switches, allowing a customer to build a single firewall policy and apply it across access devices.
Layer two is easily extended using VXLAN, which is a standards-based tunneling method across an IP network. This limits broadcast domains: the fabric no longer needs to flood traffic to learn new MAC addresses or to withdraw them, because that is all done in the control plane. And the architecture is built on a standard EVPN-VXLAN framework.
Campus Fabric IP Clos addresses requirements that include higher scale, layer two stretch across a campus fabric, and micro-segmentation, which is performed here at the access layer. This is an EVPN-VXLAN architecture, which means it’s based on standards. Notice the layer three gateway is at the access layer, so the blast radius is much smaller, which is perfect for east-west traffic patterns. The architecture is also optimized for IP multicast.
Group-based policy is performed at the access layer where the device plugs into the network. The access switch forwards the authentication information to a RADIUS server or NAC service somewhere in the network. That server has a scalable group tag associated with the login and passes that information back to the access layer, so the device now has a scalable group tag associated with it, which provides micro-segmentation within a switch and across a fabric. The fabric also provides active-active multihoming to southbound devices, whether they are servers, Juniper access points, or other devices, and very fast convergence using equal-cost multipath (ECMP) with Bidirectional Forwarding Detection (BFD). From the core and distribution to the access layer, we’re leveraging a high-speed IP transport and overlaying multiprotocol BGP on top of that.
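To make that authentication flow a bit more concrete, here is a minimal sketch of 802.1X at the access layer pointing to a RADIUS server; the server address, shared secret, profile name, and interface are illustrative placeholders, and the group-based policy tag returned by the server is not shown here.

    # Illustrative RADIUS server and access profile (placeholder address and secret)
    set access radius-server 10.10.10.10 secret Radius-Secret-1
    set access profile DOT1X-PROFILE authentication-order radius
    set access profile DOT1X-PROFILE radius authentication-server 10.10.10.10

    # Enable 802.1X on an access port so logins are forwarded to the RADIUS/NAC server
    set protocols dot1x authenticator authentication-profile-name DOT1X-PROFILE
    set protocols dot1x authenticator interface ge-0/0/1.0 supplicant multiple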
Juniper’s validated designs include all Campus Fabric architectures; in this case, we’re talking about IP Clos. It is an end-to-end EVPN-VXLAN implementation starting at the access layer. It addresses medium- to large-scale campus requirements, it is perfect for east-west traffic patterns where the customer wants to keep layer two and layer three forwarding local to the access switch, and it is ideal for customers who wish to implement micro-segmentation both within a VLAN and between VLANs.
The Campus Fabric EVPN-VXLAN building blocks are as follows. We first build a high-speed underlay. We then build an overlay on top of that underlay with multiprotocol BGP. We then position layer two and layer three VXLAN gateways, and finally we attach end systems to the fabric using standard LAG technologies.
The IP underlay is a simple layer three fabric between the core, distribution, and, in this case, access layers, so we’re extending EVPN-VXLAN all the way down to the access layer. We show Virtual Chassis here, but it could be other Juniper switches that support layer two VXLAN. Since it’s a layer three network, there is no requirement for loop avoidance or spanning tree. The IP Clos topology provides a consistent scale-out architecture as well as predictable performance. We leverage either OSPF or eBGP to provide loopback reachability amongst all the devices, and you’ll notice multiple interconnects, so equal-cost multipath is leveraged in the underlay for load balancing.
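As a rough sketch of what that underlay could look like on one access switch in Junos, the snippet below uses an eBGP underlay with loopback export, BFD, and ECMP; the interface names, addressing, and AS numbers are placeholders for illustration, not values from the validated design.

    # Point-to-point fabric link toward a distribution switch, plus the loopback (placeholder addressing)
    set interfaces et-0/0/48 unit 0 family inet address 172.16.1.1/31
    set interfaces lo0 unit 0 family inet address 10.255.1.1/32

    # Advertise the loopback into the underlay so every VTEP is reachable
    set policy-options policy-statement UNDERLAY-EXPORT term LOOPBACK from interface lo0.0
    set policy-options policy-statement UNDERLAY-EXPORT term LOOPBACK then accept

    # eBGP underlay session with multipath so all uplinks are used
    set protocols bgp group UNDERLAY type external
    set protocols bgp group UNDERLAY export UNDERLAY-EXPORT
    set protocols bgp group UNDERLAY local-as 65001
    set protocols bgp group UNDERLAY multipath multiple-as
    set protocols bgp group UNDERLAY neighbor 172.16.1.0 peer-as 65101

    # BFD for fast failure detection on the fabric links
    set protocols bgp group UNDERLAY neighbor 172.16.1.0 bfd-liveness-detection minimum-interval 350
    set protocols bgp group UNDERLAY neighbor 172.16.1.0 bfd-liveness-detection multiplier 3

    # Install all equal-cost next hops in the forwarding table for load balancing
    set policy-options policy-statement ECMP-POLICY then load-balance per-packet
    set routing-options forwarding-table export ECMP-POLICY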
We then build the overlay, and as with the other technologies, it’s a multiprotocol BGP EVPN-VXLAN control plane that extends all the way down to the access layer. eBGP is used as the overlay protocol to extend reachability amongst all devices. The VTEP is a software construct that is typically anchored to the loopback interface. In this case, you’ll notice the Virtual Chassis, or at least access one and access two, have a VXLAN tunnel between them; that’s how we extend layer two across this very high-speed campus fabric. Since we’re using eBGP, there’s no need for route reflectors, and you’ll notice it is once again an IP Clos topology.
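A minimal sketch of that overlay configuration on one device might look like the following; the loopback addresses and AS numbers are illustrative placeholders.

    # eBGP overlay peering, sourced from the loopback, carrying EVPN reachability
    set protocols bgp group OVERLAY type external
    set protocols bgp group OVERLAY multihop no-nexthop-change
    set protocols bgp group OVERLAY local-address 10.255.1.1
    set protocols bgp group OVERLAY family evpn signaling
    set protocols bgp group OVERLAY local-as 65001
    set protocols bgp group OVERLAY neighbor 10.255.2.1 peer-as 65101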
The layer two VXLAN gateway resides at the access layer. It provides the mapping between your traditional VLANs and VXLAN. VXLAN is a tunneling protocol that extends layer two VLANs across an IP fabric, so the fabric doesn’t see the VLANs at all; it just routes the traffic as IP packets. VXLAN supports roughly 16 million virtual network identifiers (2 to the 24th power) and originated in the data center space to address multi-tenancy and overlapping VLAN spaces. Here the gateway is at the access layer, which is farther down in the network than in any of the other architectures, such as EVPN multihoming and core-distribution.
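The VLAN-to-VXLAN mapping on an access switch can be sketched as follows; the VLAN ID, VNI, route distinguisher, and route target are illustrative values only.

    # Anchor the VTEP to the loopback and set the EVPN route parameters
    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 10.255.1.1:1
    set switch-options vrf-target target:65000:1

    # Enable the EVPN-VXLAN control plane for the configured VNIs
    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list all

    # Map a traditional VLAN to a VXLAN network identifier
    set vlans VLAN100 vlan-id 100
    set vlans VLAN100 vxlan vni 10100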
In Campus Fabric IP Clos, the layer three VXLAN gateway also resides at the access layer. This is farther down in the network than in the EVPN multihoming and core-distribution architectures, which is perfect for customers who have east-west traffic patterns. Customers who have higher-scale IP multicast would deploy this technology. What is also very popular is segmenting or isolating traffic in various VRFs, or, in Junos nomenclature, routing instances; you’ll see on the next slide how we do that. Typically, all traffic within a routing instance routes at the access layer, while traffic destined outside that routing instance, or traffic that needs to hit a default route, is pushed up through a third-party router or a cluster of firewalls, which also keeps traffic isolated.
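As a sketch of the layer three gateway at the access layer, an IRB interface serves as the distributed gateway for the VLAN; the addresses below are placeholders.

    # IRB interface acts as the layer three gateway for VLAN100 on every access switch
    set interfaces irb unit 100 family inet address 10.100.0.2/24 virtual-gateway-address 10.100.0.1
    set interfaces irb unit 100 virtual-gateway-accept-data
    set vlans VLAN100 l3-interface irb.100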
As mentioned earlier, customers who wish to isolate traffic can do so with VRF, or routing instance, segmentation. This is a benefit of EVPN-VXLAN: the overlay can be sliced and diced based on different traffic requirements. In this case, we have an employee, a guest, and an IoT routing instance, and all three are logically separated. Routing within each routing instance is done at the access layer where the layer three gateway resides. It is very common to push inter-VRF traffic to a northbound third-party router or a cluster of firewalls and allow those devices to route between the routing instances, if needed, or apply additional security policies.
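One of those routing instances, say the employee VRF, could be sketched like this; the instance name, route targets, and VNI are illustrative, and the guest and IoT instances would follow the same pattern.

    # Place the VLAN's IRB gateway inside an isolated VRF
    set routing-instances EMPLOYEE instance-type vrf
    set routing-instances EMPLOYEE interface irb.100
    set routing-instances EMPLOYEE route-distinguisher 10.255.1.1:100
    set routing-instances EMPLOYEE vrf-target target:65000:100

    # Advertise this VRF's IP prefixes across the fabric as EVPN type-5 routes
    set routing-instances EMPLOYEE protocols evpn ip-prefix-routes advertise direct-nexthop
    set routing-instances EMPLOYEE protocols evpn ip-prefix-routes encapsulation vxlan
    set routing-instances EMPLOYEE protocols evpn ip-prefix-routes vni 19100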
Here we show devices connecting to the fabric, and you’ll notice a couple of variations. If I have a Juniper Virtual Chassis at my access layer, over to the left, the layer two and layer three VXLAN gateways terminate there in IP Clos. I can take multiple connections from that access layer down to different devices below, which could be other switches or APs. In that case, since the Virtual Chassis is managed as one device, it’s a standard LAG.
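Because the Virtual Chassis is managed as a single device, that downlink is just a standard LACP LAG; the member interfaces and VLAN membership below are illustrative.

    # One member link per Virtual Chassis member, bundled into a single LACP LAG
    set chassis aggregated-devices ethernet device-count 1
    set interfaces ge-0/0/10 ether-options 802.3ad ae0
    set interfaces ge-1/0/10 ether-options 802.3ad ae0
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae0 unit 0 family ethernet-switching vlan members VLAN100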
If I have two independent devices over to the right that are VXLAN gateways, we use a technology called ESI, Ethernet Segment Identifier. ESI is part of the EVPN standard, and it allows the two switches, although they are not physically interconnected, to negotiate who is the designated forwarder for this particular link. Notice that the server is connected through a standard LAG to both switches. Both switches understand each other’s presence through eBGP and EVPN signaling because they share the same ESI. The ESI is a 10-byte value that must be the same on the access switches that connect to the same southbound device, and by default this provides active-active multihoming.
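On the two independent access switches, the same LAG carries an ESI so EVPN treats both links as one all-active Ethernet segment; the ESI value and LACP system ID shown are placeholders and must be identical on both switches.

    # Same ESI and LACP system ID on both switches facing the multihomed server
    set interfaces ae0 esi 00:11:22:33:44:55:66:77:88:99
    set interfaces ae0 esi all-active
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:11:22:33
    set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae0 unit 0 family ethernet-switching vlan members VLAN100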
Campus Fabric IP Clos also introduces the concept of a services block: the ability to segment critical services behind a dedicated switch pair. This is for customers who wish to place critical servers, WAN routers, firewalls, DHCP servers, possibly a RADIUS server, and other mission-critical services in their own block and connect that to the core. The connection between the core and the services block is, once again, an EVPN-VXLAN architecture; it leverages ECMP for load balancing across the multiple links and provides a nice level of horizontal scale.
So the configuration steps for IP Clos are as follows. We build the underlay, which spans from the core to the distribution to the access layer; we leverage ECMP across the multiple links and use eBGP or OSPF. The overlay is then built amongst all the devices. There is no need for a route reflector because we’re using eBGP with multiprotocol BGP EVPN signaling as the control plane, and this overlay is what carries the fabric’s reachability information from a control plane perspective.
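Once those steps are committed, a few standard Junos show commands give a quick sanity check of the underlay, the overlay, and the VXLAN tunnels; this is just an illustrative checklist, not an exhaustive verification procedure.

    # Underlay and overlay BGP sessions should be established
    show bgp summary

    # EVPN routes learned across the fabric
    show route table bgp.evpn.0

    # MAC addresses learned through the EVPN control plane
    show evpn database

    # Remote VTEPs discovered for the VXLAN tunnels
    show ethernet-switching vxlan-tunnel-end-point remote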
The layer two gateway sits at the access layer, and the layer three gateway also sits at the access layer. This is the distinguishing model of IP Clos, where much of the intelligence sits at the access layer. One of the benefits of this technology, as mentioned earlier, is micro-segmentation: being able to micro-segment within a VLAN or between VLANs starting at the access layer, where security and authentication happen.
Here we show the Juniper platforms at each layer within a Campus Fabric IP Clos. At the access layer, since these must be EVPN-VXLAN-enabled devices, not as many Juniper switches can be positioned there: we have the EX4400, the EX4300 multigig (EX4300-MP), the EX4100, and the EX4100-F. At the distribution layer, we have the QFX5120 and EX4650. And at the core, we have our chassis-based EX9200.
Here we show Juniper switching within the AI-Driven Enterprise portfolio: a nice mix of access and core/distribution devices, including the chassis-based EX9200. Notice the differences in fixed versus modular power, PoE, PoE+, and PoE++ support, and features such as flow-based telemetry. The newer switches, the EX4100-F and EX4100, both support EVPN-VXLAN with group-based policy as well as flow-based telemetry. And a key deliverable for this architecture is the Mist UI, the Mist cloud, where customers get Day 0, Day 1, and Day 2 support, from deployment through troubleshooting, across these platforms.
Thank you for your time and we hope this session was valuable.