Hey, this is Joel from Juniper Networks.
And in this video, I’m going to show you how you can use Juniper EX switches with the Mist cloud.
Let’s jump right in and talk about switches that you might already have in your inventory, meaning the individual switches themselves.
So if I click on the Switches tab here, it’s very, very similar to the Access Points tab, except this time it’s switches.
So you can see that my Juniper switch is listed here.
Now, another thing that you might see here is a gray switch.
If you see a gray switch there, that means that you’ve plugged a Mist access point into a non-Juniper switch, or even an EX switch that you don’t have in your org.
It’s part of another org or it’s just not adopted or whatever.
If you plug a Mist AP into one of those switches, it’ll actually pull some data out of that switch with LLDP.
So we can tell which APs are plugged into which switches, how much PoE they’re drawing, what version of code the switch is running.
We can even do things like missing VLAN detection using AI and machine learning just by plugging Mist APs into third party switches.
But if you see green, that means it is a Juniper EX switch that you have adopted into your dashboard.
So let’s click on this and take a look.
So what we get here is we get a nice front panel view for what’s going on.
So you can see that my AP43 is plugged into port two there.
And you can see, I’ve got a bunch of clients plugged into ports 1 through 11 down here on the bottom.
And if you look, notice that this one’s called noble-1-carter.
It’s a Raspberry Pi.
We can see its IP address.
By the way, you don’t get the hostname unless it reports it via LLDP.
So I’m running the LLDP daemon on the Raspberry Pi so that we get that.
It looks really nice.
You don’t always get it for every client device, though.
But remember, you can use that for things like dynamic port configuration, which is really convenient.
So there’s all of our devices there.
We get CPU usage.
We’d see if any fans are dead, but this is a fanless model, so there’s obviously none of that there.
How’s the temperature doing?
We got all this stuff.
That’s built in.
And additionally, down here on the bottom, this is where you can do overrides at the individual switch level.
So if you want, you can override anything that you want here, but it will only override it for this switch.
And note that this is where I have defined this as being my access switch, so it gets that specific profile applied.
Now, we can go to the switch insights view.
If you remember the client insights view on the wireless side, this is going to look really, really familiar.
But instead of showing you what has happened with wireless clients, this is going to show you what’s happened with the switch.
You get memory, CPU, kind of all the stuff you’d expect.
But my favorite is the switch events.
And so if you see a device fail 802.1x or something along those lines, those events are going to show up here.
In fact, here are all the events that will show for a specific switch.
You can see the total bytes that have been moved.
I’ve had all my clients unplugged for the last few days.
In fact, let’s bump up to a bigger timespan.
Remember, the default is today, that’s midnight to now.
But we can actually go up to seven days worth of data.
So it’ll pull all that back.
And we should see a little bit more as far as bytes go here.
We’ll give it just a second to load.
There we go.
Yeah, you can see kind of this base load that I was applying.
And you can see all those events that have occurred over time as well as ports coming up and down.
You can see, a little earlier today, actually, these 15 events…
I can click here to actually go to those events.
This was actually this morning when I went and plugged in all my Raspberry Pis to get ready for this demo.
So let’s go back to our switches view.
And we’ll go back and click on our individual switch one more time.
And one thing that I want you to notice is that not only can you look at the insights for the whole switch, you can also look at insights for the client as well.
So, for example, noble-6 here, one of my Raspberry Pis.
If we click on this one and click on the Wired Client Insights, notice that you can do a port profile here as well.
If we click on that, this is also going to take us to a very similar events page.
If there were any events for this client, we’d see them here; there haven’t been any, so it’s very, very boring.
There’s nothing going on.
But if there are any events for this client, you’re going to see them here as well.
Let’s see, there are a couple more things I want to show you that are available in this view.
Don’t miss the ability to click on these to do a port bounce, basically.
You can do a cable test.
There’s a TDR built into the EX, as far as I understand, which is pretty cool.
And we can also bounce ports as well.
So if I want to risk corrupting SD cards, we can go bounce ports.
And that’ll bounce ports five and seven.
So I’m really hoping I don’t trash the SD cards on my Raspberry Pis, but it’ll probably be okay.
They seem to put up with a lot.
Additionally, if you go to utilities and go to testing tools, you can actually get a shell on the switch.
So here in just a couple seconds, you’ll see a shell up here.
But one thing we do not recommend is making configuration changes from the shell.
If you want to push a specific config to your switch, that is what the CLI commands are for.
So use that box down here.
But that is a really handy way to get in there and check status, look at logs, stuff like that.
That can be really, really useful.
Cool.
Okay, so that is what we can see in the front panel view of a Juniper EX switch.
By the way, we do support virtual chassis.
And so here’s another site.
This is our live demo site down in Cupertino.
And notice that if we look at the Mist APs column, there are actually two entries here.
That’s a good indicator that this is a virtual chassis or a VC.
And the icon is a little bit different here as well.
So if we click on that, you will see this represented as a VC here.
So there we go, here are our two chassis that are together.
And there’s our VC connection between the two.
So pretty cool stuff.
And so I just want you to see that really quickly.
Additionally, there is also a topology view.
And so in the topology view, you can see that we’ve got a gateway here.
That’s very early and not available at the time of recording this video.
But wait a couple of months and that will totally be there.
But know that you can click on your switches.
And we can see that there’s APs that are connected to this particular switch.
And so you get a nice topology view to see how everything is physically connected in the environment.
Now, what I’ve shown you, I think, is very nice.
I like that front panel stuff an awful lot.
But I want to show you something that I think is way, way cooler than that.
If we go to the monitor view and go to service levels, you are going to be greeted with the…
Oh, I had the wrong thing selected.
Let’s go over to wireless.
You are going to be greeted with the service level expectations that you’re used to seeing on the wireless side of Mist.
And remember that these are all about helping you understand what the client experience is like.
How quickly are things connecting to the network?
How good is coverage?
How good is roaming performance?
Things like that.
And I think one of the coolest things that we’ve done at Mist is to understand what the client state is like and use service level expectations to show what the client experience is like.
I love this.
I absolutely love it.
And what I think is great is that when you bring EX switches onto your network and connect them into the Mist cloud, we now get wired service level expectations.
I love this because at a glance, we can understand what the performance and what the experience looks like on the wired side.
So let’s take a quick look at these so that you can understand what they are.
So first off is throughput.
If we look at my throughput over the last seven days, I’m sitting at about 88% right now.
It’s okay.
It’s been alright.
And I’ll show you what’s broken on my home network specifically.
Notice that just like on the wireless side of the house, we also get classifiers to tell us why throughput is suffering, why it isn’t working well.
So if we click on that, we can see there’s Storm Control, Congestion Uplink, Congestion, Network, and Interface Anomaly.
Several different things here.
So Storm Control will kick in if there are any ports, switches, or VLANs that are impacted due to storm control problems.
Congestion uplink occurs when an uplink port starts dropping frames.
If there are any frame drops on a specific port that is an uplink port, then that will start impacting this classifier.
Now here’s the thing to remember about congestion uplink.
This will impact every user on the switch.
So we employ a concept called user minutes for normalizing data.
The idea behind a user minute is that it gives you a way to normalize that data and actually understand what things look like.
So let me show you an example of user minutes really quickly.
So let’s say that you have 10 users on the network and they’re all connected to the network for 60 minutes.
That’s going to equal 600 user minutes in total.
And so if half of those user minutes are good and half of those minutes are bad, then we would have a failure rate of 50%.
A user minute can either be good or a user minute can be bad.
And so it’s this constant process of looking at a user minute to understand: ‘Was that user minute good? Was that user minute bad?’
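To make that concrete, here’s the same arithmetic as a tiny sketch in Python (just an illustration of the user-minute concept, not Mist’s actual implementation):

    # Toy illustration of the user-minute math (not Mist's actual implementation).
    users = 10
    minutes_connected = 60
    total_user_minutes = users * minutes_connected        # 600 user minutes

    bad_user_minutes = total_user_minutes // 2            # say half of them were bad
    failure_rate = bad_user_minutes / total_user_minutes  # 0.5, i.e. a 50% failure rate

    print(f"{total_user_minutes} user minutes, {failure_rate:.0%} of them bad")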
The thing to remember about congestion on an uplink port is that that’s going to impact every user on that switch.
If the uplink port has congestion problems as frames are being dropped, then that could potentially impact all the users that are on that particular switch.
And so that’s going to show up in the user minutes in a very, very big way.
And so that’s an important part of why we use the concept of user minutes.
By the way, we also use machine learning to determine whether it’s an uplink port or not.
There are several signals, like: ‘Is there a switch on the other end of that port? Is it a higher-traffic port than all the other ports?’
And so we use machine learning to determine whether it truly is an uplink port or not.
It’s actually a pretty difficult engineering and computer science problem to solve.
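Just to give a feel for the idea, here’s a purely hypothetical rule-based version of that check (the field names and the threshold are made up for illustration; Mist’s actual classifier is a machine learning model, not a rule like this):

    # Hypothetical sketch of uplink detection from simple per-port features.
    # The field names and the threshold are invented purely to illustrate the idea.
    def looks_like_uplink(port: dict) -> bool:
        has_switch_neighbor = port["lldp_neighbor_type"] == "switch"
        traffic_ratio = port["avg_bps"] / max(port["switch_avg_port_bps"], 1)
        return has_switch_neighbor and traffic_ratio > 2.0   # arbitrary threshold

    port = {"lldp_neighbor_type": "switch", "avg_bps": 4_000_000, "switch_avg_port_bps": 1_000_000}
    print(looks_like_uplink(port))   # True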
Now, Congestion shows us anything that causes dropped frames on non-uplink ports, basically like you would expect.
The thing to keep in mind about congestion is that it’s not going to impact every port on the switch since it’s not an uplink port.
It’s only going to impact a specific user.
So we’ve talked about congestion on uplink ports.
Then there’s congestion for non-uplink ports.
So the big thing to keep in mind about these is they work, basically, the exact same way except these are for non-uplink ports.
And so there isn’t as big of an impact network wide because congestion on a standard port on a non-uplink port just doesn’t affect as many users.
It’s only going to affect the user on that specific port, that specific VLAN.
So it’s not going to generate as many user minutes when it starts to fail.
Now, next is the Network classifier.
And this is primarily focused around the WAN.
And so if we think about this for a second: for the past seven days, remember that my throughput has been at an 88% pass rate, which, if we do some math, means a 12% failure rate, right?
So I’ve failed throughput 12% of the time; of all my user minutes, 12% were bad.
And of that 12%, 7% was bad because of network problems.
And then we get sub-classifiers to tell us exactly why.
Remember, this is measuring the WAN; I think I mentioned that a moment ago.
9% of the time it was due to jitter, and 91% of the time it was due to latency.
That makes total sense because my DSL line here at home is absolutely terrible.
It’s been acting up for the past few weeks.
And you can absolutely see that in the service level expectations on the wired side of the network.
Finally, there’s Interface Anomalies.
And so these are bad user minutes due to things like MTU mismatches, cable issues, and failed negotiations.
As you can see, cable issues are causing a huge amount of failures.
In fact, remember, 12% of the time my throughput has been failing.
And of that 12%, 92% of the problems were interface anomalies.
And 99% of that is cable issues.
And so yeah, I got a bad cable or something going on this network.
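If you want to sanity-check those percentages, the arithmetic works out roughly like this (same numbers as above, rounded, just multiplied together):

    # Rough breakdown of my bad user minutes over the last seven days,
    # using the rounded percentages from above (illustration only).
    failure_rate = 0.12      # 12% of all user minutes failed the throughput SLE

    network_share = 0.07     # Network classifier: 7% of those failing minutes
    latency_share = 0.91     # 91% of the network failures were latency

    anomaly_share = 0.92     # Interface Anomaly: 92% of those failing minutes
    cable_share = 0.99       # 99% of the anomalies were cable issues

    print(f"Latency:      {failure_rate * network_share * latency_share:.2%} of all user minutes")
    print(f"Cable issues: {failure_rate * anomaly_share * cable_share:.2%} of all user minutes")

So roughly 11% of all my user minutes over the week went bad because of that cable.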