Ahh memories. In 2008 I was managing the network for a datacenter that was one of the first in Canada to use Nexus gear. The Nexus line had only just been introduced in January 2008, and NX-OS felt a little half-baked. In one minor-version-increment update they changed the default value of a core config flag; that earned me my worst outage ever. I should have spotted the change in the release notes, but it blew my mind that they made a breaking change to the config syntax in a minor update.
NX-OS is based on Linux. It runs an IOS-like command interpreter, but access to a regular shell was possible. You definitely could run Doom on it.
The supervisor boards in my 2008 Nexus 7010 had a separate “lights out” management computer that I think also ran Linux. It was used to coordinate software upgrades, and to manage the main supervisor config in the event it got messed up and couldn’t boot. I don’t see that module installed in your supervisor board; maybe they dropped the option later on.
I'm in awe at how deep those module cards are.
I have four of these switches in the two data centers where I'm responsible for the hardware. When fully populated, there is a LOT of fiber to sort through!
Very interesting! You find the coolest stuff to tear down and/or explore! 😃 Thanks for posting!! 👍
Indianapolis Motor Speedway... Cisco is a partner of the NTT Indycar Series and the Indianapolis Motor Speedway. They supply IT equipment for Penske Entertainment.
I love the old school Cisco serif font on parts of the machine. It would match the 2600 router that gets used with it!
I work in a large org full of networking as critical infrastructure, so you get a bit blasé working around this level of enterprise network hardware. What they can do at the densities they're built at is truly astonishing really, but the Cisco stuff is being outclassed in some regards by other brands. We're adopting a lot of Arista for some networks and use cases, and we went through a Juniper phase. But nobody ever got fired for buying Cisco (even if sometimes it isn't quite the right SKU for a specific task, resulting in loads of workarounds or compromises on system design 😅)
I'm glad we have Mikrotik and Unifi and other software-based router solutions to play around with at home nowadays. I'd hate to have a Catalyst or Nexus running my power bill up to insane levels nowadays 😁
Yeah, seems like many people are leaving the Cisco boat fast; most seem to end up with Arista nowadays.
Classic PWJ format, love it ❤
Totally! These are my faves from my fave channel.
The racing circuit could be the old Paul Ricard before the new chicanes were built :)
Interesting thermal design: it seems that instead of laying the large modules flat like in a normal server rack, flipping them on their side allows for that up-down air movement instead of the common front-to-back.
@10:42 Hey! There is a race track and race car on the board! I don't recognize the track layout so it must be an older grand prix track. Man, this thing is a beast!
Ah! It's Circuit Paul Ricard in France.
Nice video. Essentially obsolete now (the 10-slot model can essentially be replaced by a 1U switch with 48x4x25G ports, at a fraction of the cost and power); the 18-slot one probably still has some uses. But the newer models from the 9500 series are of course the new cool stuff. Still, pretty niche stuff: expensive, and it still has scalability limits. Big datacenters usually build a distributed switching fabric from smaller 48- or 96-port switches. But telcos, government, and some businesses still do like them for some reason. I do not like them, due to the cost, licenses, meh management, etc.
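Rough arithmetic behind that 48x4x25G comparison (just a sketch; the port/breakout layout is taken from the comment above and assumed, not a verified spec):

    # Approximate front-panel capacity of the 1U box mentioned above.
    ports, breakout, lane_gbps = 48, 4, 25       # assumed: 48 QSFP28 ports, each split into 4x25G
    total_gbps = ports * breakout * lane_gbps
    print(total_gbps, "Gb/s")                    # 4800 Gb/s, i.e. ~4.8 Tb/s in a single rack unit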
Yeah, but this wasn't always the case, and there are still very big devices like those; they just have up to tens or hundreds of 100 Gb/s or 400 Gb/s ports.
The large opening at the front of the fan tray is for the airflow from the fans below, the two vertically mounted ones. They suck air from below and pump it up. If you hide your stuff in there you will block airflow 🙂
Next time I'd like to see an F5 BIG-IP load balancer/software-defined router and its large number of extremely dense FPGAs.
Incredible!
Wow, we got 2 of these still in use at our datacenter! We are finally going to be replacing them within the next few months!
Seems these things are not used for very long. Do you replace them for power consumption or space savings? I would not expect them to fail after 10 years.
@chrisridesbicycles Changing vendors! We are moving away from Cisco.
A series of switches which at one time was a bit notorious for its buggy software. If you don't know what I am talking about you should see Felix "FX" Lindner's series of talks about that.
When your redundant power supply has a redundant power supply in it. You know, just in case you want to supply some redundant power.
It is good that it has those straps to tie it down, otherwise it would fly away 😀
Hello, very interesting device, what a monster...
What is the status of this switch?
Is it defective, or simply outdated, and will it be scrapped?
I guess the location is not your private space, so where was this video shot?
Recently, after a concert in the "Batschkapp", I passed by one of the many data processing centers here in Frankfurt / Main: a huge, massive building without any windows, a double fence for security reasons, and no sign outside indicating its purpose or the company behind it. I guess that building is full of such switches as well.
It's got several big power stations outside (supply and/or back-up, I guess), and now, after watching your video, I can imagine why these are so big.
The Batschkapp is a famous concert hall here, and is heated by the waste heat of the data center.
ITRIS One AG!
I guess the way I look at the light visibility issue is that most network administrators work away from the datacentre and manage it remotely, so viewing the lights may not be as important. That said, Cisco occasionally makes some dumb design decisions.
A switch (in this sense) that surely cost as much as a modest house, and still no redundant CMOS battery?
4:50: There is an empty 3rd slot that has the size of a power supply module. Is that really an unused slot for yet another power supply?
Yes it is.
Sometimes these units were used for entire floors, and then you would have uplinks running up the building to a switch unit per floor. The switch cards could do 30 W of PoE per port max, which would be about 11.5 kW, though that's wildly more than using all the ports for IP phones would draw.
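Quick back-of-the-envelope check on that 11.5 kW figure (a minimal sketch; the 48-port cards and 8 payload slots are assumptions for a fully loaded chassis, not confirmed specs):

    # Rough worst-case PoE budget for a fully populated chassis (assumed layout).
    ports_per_card = 48        # assumed 48-port access line card
    payload_slots = 8          # assumed payload slots left after the supervisors
    poe_per_port_w = 30        # 30 W PoE+ per port, as stated above
    total_w = ports_per_card * payload_slots * poe_per_port_w
    print(total_w / 1000, "kW")   # 11.52 kW, roughly the 11.5 kW quoted above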
Advertising...
The false front looks like a Meraki AP 😂
Cisco bought Meraki in 2012.
Looks pretty similar to the Juniper EX9214
Why would a switch have a high performance CPU in its supervisor board? What is the supervisor board doing? I thought 99% of the work is done by ASIC chips on the other 3 boards and the crossbar.
ASICs (well, TCAMs) need instructions on what to do too, and something has to calculate BGP routes on top of that; this is a Layer 3 switch :)
There is MUCH MUCH MUCH more to a switch than what you have at home built into the wifi router :)
Forwarding is done by ASICs, but something has to control those ASICs and program them with forwarding tables and what not.
I don't know if these supervisors can do it, but there looks to be a bit of a trend towards supervisor cards being able to run various applications as well. So sticking in a high performance CPU gives you some extra headroom for that kind of thing.
Modern routing engine cards for e.g. Juniper MX routers and I think SRX5800 firewalls run a Linux hypervisor with JunOS as a guest. I believe I've also seen the same thing on some QFX switches.
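To make that control-plane/data-plane split concrete, here's a toy Python sketch (purely illustrative; none of this is a real NX-OS or ASIC API): the "supervisor" holds the routing table and pushes entries down to an "ASIC" table that does the per-packet lookups.

    import ipaddress

    class ToyAsic:
        # Data plane: only does fast lookups in a table it was programmed with.
        def __init__(self):
            self.fib = {}                              # prefix -> next hop

        def program(self, prefix, next_hop):
            self.fib[ipaddress.ip_network(prefix)] = next_hop

        def forward(self, dst):
            addr = ipaddress.ip_address(dst)
            matches = [p for p in self.fib if addr in p]
            if not matches:
                return None                            # would be punted to the supervisor
            return self.fib[max(matches, key=lambda p: p.prefixlen)]   # longest-prefix match

    class ToySupervisor:
        # Control plane: computes/holds routes and programs the ASIC with them.
        def __init__(self, asic):
            self.rib = {"10.0.0.0/8": "192.0.2.1", "10.1.0.0/16": "192.0.2.2"}
            self.asic = asic

        def push_routes(self):
            for prefix, next_hop in self.rib.items():
                self.asic.program(prefix, next_hop)

    asic = ToyAsic()
    ToySupervisor(asic).push_routes()
    print(asic.forward("10.1.2.3"))                    # 192.0.2.2 (the more specific /16 wins)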
It's going to have to run DOOM at some point obviously
The ASICs are pretty much pattern matching and queuing engines: "if you see this combination of bits, then put the packet in that queue". If an unknown combination of bits is seen, the ASIC passes it to the supervisor, which does a route table lookup and then updates the ASIC pattern matching tables, so future packets can be forwarded without involving the supervisor. The ASICs end up with the patterns for all the most recently seen routes, but the fast pattern tables are limited and these routers are sold for backbone use and are expected to be able to handle the full global BGP route tables, which are up around a million routes for IPv4 alone. Each of these routes represents a set of constantly changing paths and costs, so there's quite a bit of data and processing involved.
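That "punt on miss, then install in hardware" flow is basically a cache in front of a slower full lookup. A minimal sketch of the idea (illustrative only, not how any specific Cisco TCAM is actually managed):

    from collections import OrderedDict

    FAST_TABLE_SIZE = 4                        # real hardware tables are far bigger, but still finite

    full_routes = {"10.0.0.0/8": "eth1", "192.168.0.0/16": "eth2"}   # stands in for the supervisor's full table
    fast_table = OrderedDict()                 # stands in for the ASIC's recently-used entries

    def forward(prefix):
        if prefix in fast_table:
            fast_table.move_to_end(prefix)     # hit: handled in "hardware", supervisor not involved
            return fast_table[prefix]
        port = full_routes.get(prefix)         # miss: punt to the supervisor for the slow lookup
        if port is not None:
            fast_table[prefix] = port          # install it so later packets take the fast path
            if len(fast_table) > FAST_TABLE_SIZE:
                fast_table.popitem(last=False) # evict the least recently used entry
        return port

    print(forward("10.0.0.0/8"))   # first lookup punts; subsequent ones hit the fast table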
Usually monitoring, control, updating routing tables, QoS setup on flows, etc. These switches were designed around 2012, and anything much slower probably wouldn't have worked too well.
The French Grand Prix track.