I work as a network architect and one of my customers had just one of them in each DC. We were working to add a redundant 7010 to each DC, but that was still weeks away. One day a plastic bag that was floating about in the DC was sucked into the intake of the 7010. This caused it to overheat and shutdown and the customer promptly lost their primary DC. 😂 Lesson quickly learnt about plastic bags and redundancy.
That's one of the reasons why datacenters are so picky about cardboard boxes and other waste. Normally you're not allowed to take that inside. We have to unpack smaller items outside.
That was on them for not installing a redundant pair initially. These kinds of core switches are always installed as redundant pairs because of how utterly catastrophic a failure is.
I have four of these switches in the 2 Data Centers where I am responsible for the hardware. When fully populated, there is a LOT of fiber to sort through!
I'm in awe at how deep those module cards are.
Ahh memories. In 2008 I was managing the network for a datacenter that was one of the first in Canada to use Nexus gear. The Nexus line had only just been introduced in January 2008, and NX-OS felt a little half-baked. In one minor-version-increment update they changed the default value of a core config flag; that earned me my worst outage ever. I should have spotted the change in the release notes, but it blew my mind that they made a breaking change to the config syntax in a minor update.
NX-OS is based on Linux. It runs an IOS-like command interpreter, but access to a regular shell was possible. You definitely could run Doom on it.
The supervisor boards in my 2008 Nexus 7010 had a separate “lights out” management computer that I think also ran Linux. It was used to coordinate software upgrades, and to manage the main supervisor config in the event it got messed up and couldn’t boot. I don’t see that module installed in your supervisor board, maybe they dropped the option later on.
A little half baked? It barely went in the oven haha.
Classic PWJ format, love it ❤
Totally! These are my faves on my fave channel.
Very cool video! Thanx 4 sharing this beautiful beast with us!!
The racing circuit could be the old Paul Ricard before the new chicanes were built :)
I work in a large org full of networking as critical infrastructure, so you get a bit blasé working around this level of enterprise network hardware. What they can do at the densities they're built at is truly astonishing really, but the Cisco stuff is being outclassed in some regards by other brands. We're adopting a lot of Arista for some networks and use cases, and we went through a Juniper phase. But nobody ever got fired for buying Cisco (even if sometimes it isn't quite the right SKU for a specific task, resulting in loads of workarounds or compromises on system design 😅)
I'm glad we have Mikrotik and Unifi and other software-based router solutions to play around with at home nowadays. I'd hate to have a Catalyst or Nexus running my power bill up to insane levels nowadays 😁
Yeah, seems like many people are leaving the Cisco boat fast; most seem to end up with Arista nowadays.
@@hariranormal5584 Cisco switches are still pretty decent but they are overpriced
After too much poking around on the web: it's the Paul Ricard Original Grand Prix Circuit (1970-2001), before they added a chicane on the back straight.
Interesting thermal design: it seems that instead of laying the large modules flat like in a normal server rack, flipping them on their side allows for that up-down air movement instead of the common front-to-back.
Nice video. Essentially obsolete now (the 10 slot model can essentially be replaced by a 1U switch with 48x4x25G ports, at a fraction of the cost and power; rough math sketched below), though the 18 slot one probably still has some uses. The newer 9500 series models are of course the new cool stuff. Still, pretty niche: expensive, and it still has scalability limits. Big datacenters usually build a distributed switching fabric from smaller 48 or 96 port switches. But telcos, government, and some businesses still like them for some reason. I do not like them, due to the cost, licensing, meh management, etc.
Yea, but this wasn't always the case, and there are still very big devices like these; they just have up to tens or hundreds of 100 Gbps or 400 Gbps ports.
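To put rough numbers on the capacity comparison above: a minimal sketch, assuming the 7010 is fully loaded with eight 48-port 10G line cards (my assumption; real configurations vary) versus a 1U switch with 48 ports running 4x25G breakouts.

```python
# Back-of-the-envelope port capacity comparison. The Nexus 7010 line-card
# configuration below is an assumption for illustration, not from the video.

N7010_IO_SLOTS = 8        # 10-slot chassis, 2 slots taken by supervisors
PORTS_PER_CARD = 48       # assumed 48-port 10G line cards
GBPS_PER_PORT = 10

n7010_gbps = N7010_IO_SLOTS * PORTS_PER_CARD * GBPS_PER_PORT
modern_1u_gbps = 48 * 4 * 25          # 48 ports, each broken out to 4x25G

print(f"Nexus 7010 (assumed config): {n7010_gbps / 1000:.2f} Tbps")      # 3.84 Tbps
print(f"Modern 1U switch:            {modern_1u_gbps / 1000:.2f} Tbps")  # 4.80 Tbps
```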
The power supply can output 50 V at 120 A, perfect for an ebike. I can get a full charge in less than 4 minutes 😂
And 10 minutes later you can explain to the fire chief what happened... 🙂
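For what it's worth, the numbers behind the joke roughly add up, assuming a typical e-bike pack size (my figure) and ignoring charge-rate limits, conversion losses, and the resulting fireball:

```python
# Rough arithmetic only; no real battery would accept this charge rate.
volts, amps = 50, 120
power_w = volts * amps                  # 6000 W from one PSU output

battery_wh = 400                        # assumed e-bike pack size
minutes = battery_wh / power_w * 60     # 4.0 minutes at full tilt
print(f"{power_w} W into a {battery_wh} Wh pack -> ~{minutes:.0f} minutes")
```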
Next time I'd like to see an F5 BIG-IP load balancer/software-defined router and its large number of extremely dense FPGAs.
The large opening at the front of the fan tray is for the airflow from the fans below, the two vertically mounted ones. They suck air from below and pump it up. If you hide your stuff in there you will block the airflow 🙂
The new data center networking standard is VXLAN spine-leaf networks. VXLAN lets you run L3 routing everywhere and not have to use STP, while still being able to tunnel L2 VLANs if needed. It also allows for over 16 million VXLAN virtual networks. A spine-leaf topology combined with 200, 400, or even 800 Gbps spine switches and leaf switches with matching uplink ports gives you a lot of cross-sectional bandwidth, which can be increased by just installing a new spine and connecting it to every leaf. It also gives incredibly consistent latency and jitter. If you have the money and the need, you can use giant chassis switches as your spine and connect a LOT of leaf switches.
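Two of the figures in that comment are easy to sanity-check; the sketch below uses made-up spine counts, leaf counts, and uplink speeds purely for illustration.

```python
# 1) VXLAN carries a 24-bit VNI, hence the "over 16 million" virtual networks.
vni_space = 2 ** 24
print(f"Possible VNIs: {vni_space:,}")           # 16,777,216

# 2) In a spine-leaf fabric every leaf uplinks to every spine, so adding a
#    spine adds one uplink's worth of capacity to every leaf at once.
uplink_gbps = 400                                # assumed uplink speed
leaves = 32                                      # assumed leaf count
for spines in (2, 4, 8):
    per_leaf = spines * uplink_gbps              # uplink capacity per leaf
    fabric = leaves * per_leaf                   # total leaf-to-spine capacity
    print(f"{spines} spines: {per_leaf} Gbps per leaf, {fabric / 1000:.1f} Tbps fabric-wide")
```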
Very interesting! You find the coolest stuff to tear down and/or explore! 😃 Thanks for posting!! 👍
"big fan of a big fans" 🤣
Indianapolis Motor Speedway... Cisco is a partner of the NTT Indycar Series and the Indianapolis Motor Speedway. They supply IT equipment for Penske Entertainment.
Indy is an oval.
I love the old school Cisco serif font on parts of the machine. It would match the 2600 router that gets used with it!
Ahh, just fill the rack with a 12k. I love their VFDs…
I'd buy this just knowing an engineer doodled race cars on those cards
Now you know what to look for on Ebay... 🙂
Funny you start the video with a thunderstorm. The first time I saw a 7010 back in 2013 it was delivered in the rain and the pallet wouldn't fit through the door so we had to unpack it outside
I want to see a teardown of those power supplies.
@10:42 Hey! There is a race track and race car on the board! I don't recognize the track layout so it must be an older grand prix track. Man, this thing is a beast!
Ah! It's Circuit Paul Ricard in France.
Wow, we got 2 of these still in use at our datacenter! We are finally going to be replacing them within the next few months!
Seems these things are not used for very long. Do you replace them for power consumption or space savings? I would not expect them to fail after 10 years.
@@chrisridesbicycles Changing vendors! We are moving away from Cisco.
Incredible!
When your redundant power supply has a redundant power supply in it. You know, just in case you want to supply some redundant power.
A series of switches which at one time was a bit notorious for its buggy software. If you don't know what I am talking about you should see Felix "FX" Lindner's series of talks about that.
Hello, very interesting device, what a monster…
What is the status of this switch?
Is it defective, or simply outdated, and will it be scrapped?
I guess the location is not your private space, so where was this video shot?
Recently, after a concert in the "Batschkapp", I passed by one of the many data processing centers here in Frankfurt / Main: a huge, massive building w/o any windows, a double fence for security reasons, and no sign outside indicating its purpose or the company behind it. I guess that this building is full of such switches as well.
It's got several big power stations outside (supply and/or back-up, I guess), and now, after watching your video, I can imagine why those are so big.
The Batschkapp is a famous concert hall here, and is heated by the waste heat of the data center.
4:50: There is an empty 3rd slot that has the size of a power supply module. Is that really an unused slot for yet another power supply?
Yes it is.
Sometimes these units were used for entire floors, and then you would have uplinks running up the building to a switch unit per floor. The switch cards could do 30 W of PoE per port max, which works out to about 11.5 kW for a full chassis, though that's wildly more than using all the ports for IP phones would actually draw.
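Here is where that ~11.5 kW figure comes from, assuming a fully loaded 10-slot chassis with eight 48-port line cards (my assumption) at the 30 W per-port maximum:

```python
# Worst-case PoE budget; real phone loads are a small fraction of this.
line_cards = 8
ports_per_card = 48
poe_w_per_port = 30

total_ports = line_cards * ports_per_card            # 384 ports
worst_case_w = total_ports * poe_w_per_port          # 11,520 W
print(f"{total_ports} ports x {poe_w_per_port} W = {worst_case_w / 1000:.2f} kW")
```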
Thanks for sharing.
It is good that it has those straps to tie it down, otherwise it would fly away 😀
I guess the way I look at the light visibility issue is that most network administrators work away from the datacentre and manage the gear remotely, so viewing the lights may not be that important. That said, Cisco occasionally makes some dumb design decisions.
Interesting video but not being familiar with these items, I would have liked to see a layman's explanation of what this 'switch' module does when in service.
Well it's basically just a huge ethernet switch with hundreds of ports.
The ports can be configured into one giant switch or into several independent virtual switches.
I hope you know what an ethernet switch is....
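To expand on that reply for the layman: at its core, an Ethernet switch learns which port each MAC address lives behind and forwards frames accordingly. The toy model below only illustrates the idea; the chassis in the video does this in hardware, at line rate, for hundreds of ports, and can additionally be carved into independent virtual switches.

```python
# Minimal MAC-learning switch model (illustration only, not how the ASICs work).
class ToySwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}                          # MAC address -> port number

    def receive(self, src_mac: str, dst_mac: str, in_port: int) -> list[int]:
        """Return the ports the frame is forwarded out of."""
        self.mac_table[src_mac] = in_port            # learn where the sender lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]         # known destination: single port
        # unknown destination: flood everywhere except the ingress port
        return [p for p in range(self.num_ports) if p != in_port]

sw = ToySwitch(num_ports=4)
print(sw.receive("aa:aa", "bb:bb", in_port=0))       # flood -> [1, 2, 3]
print(sw.receive("bb:bb", "aa:aa", in_port=2))       # learned -> [0]
```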
A switch (in this sense) that surely cost as much as a modest house, and still no redundant CMOS battery?
The lifecycle of these devices is less than the lifetime of the battery.
ITRIS One AG!
I always thought Cisco was a software and training scam, but the hardware looks kinda interesting. Maybe.
Advertising...
The false front looks like a Meraki AP 😂
Cisco bought Meraki in 2012.
Obsolete before the first one was installed. It was so lacking in CPU and memory....
Why would a switch have a high performance cpu in its supervisor board? What is the supervisor board doing? I thought the 99% of work is done by asic chips on the other 3 boards and crossbar.
ASICs (well, TCAMs) need instructions on what to do too, and something has to calculate the BGP routes on top of that; this is a Layer 3 switch :)
There is MUCH MUCH MUCH more to a switch than what you have at home built into the WiFi router :)
Forwarding is done by ASICs, but something has to control those ASICs and program them with forwarding tables and what not.
I don't know if these supervisors can do it, but there looks to be a bit of a trend towards supervisor cards being able to run various applications as well. So sticking in a high performance CPU gives you some extra headroom for that kind of thing.
Modern routing engine cards for e.g. Juniper MX routers and I think SRX5800 firewalls run a Linux hypervisor with JunOS as a guest. I believe I've also seen the same thing on some QFX switches.
It's going to have to run DOOM at some point obviously
The ASICs are pretty much pattern matching and queuing engines: "if you see this combination of bits, then put the packet in that queue". If an unknown combination of bits is seen, the ASIC passes it to the supervisor, which does a route table lookup and then updates the ASIC pattern matching tables, so future packets can be forwarded without involving the supervisor. The ASICs end up with the patterns for all the most recently seen routes, but the fast pattern tables are limited and these routers are sold for backbone use and are expected to be able to handle the full global BGP route tables, which are up around a million routes for IPv4 alone. Each of these routes represents a set of constantly changing paths and costs, so there's quite a bit of data and processing involved.
Usually monitoring, control, updating routing tables, QoS setup on flows, etc. These switches were designed around 2012, and a slower CPU probably wouldn't have worked too well.
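A toy sketch of the supervisor/ASIC split described in the replies above, in the route-cache style the longer reply describes: a small fast table (standing in for TCAM entries) answers known destinations, and misses get punted to the supervisor, which does the full lookup and programs the fast table for next time. Table sizes and routes are made up, and longest-prefix matching is left out entirely.

```python
from collections import OrderedDict

FAST_TABLE_SIZE = 4                          # real TCAMs hold far more, but still not everything

full_route_table = {"10.0.0.0/8": "port1",   # the supervisor's complete view
                    "192.168.0.0/16": "port2"}
fast_table = OrderedDict()                   # the "ASIC's" limited, LRU-evicted view

def forward(prefix: str) -> str:
    if prefix in fast_table:                 # hit: handled entirely in "hardware"
        fast_table.move_to_end(prefix)
        return fast_table[prefix]
    port = full_route_table.get(prefix, "drop")   # miss: punt to the supervisor CPU
    fast_table[prefix] = port                # program the fast path for future packets
    if len(fast_table) > FAST_TABLE_SIZE:
        fast_table.popitem(last=False)       # evict the least recently used entry
    return port

print(forward("10.0.0.0/8"))                 # first packet involves the supervisor
print(forward("10.0.0.0/8"))                 # later packets stay on the fast path
```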
No WIFI in it? 🤷♂️ Maybe that's why it got thrown out. 😉
For a few $10,000s more you can get a Cisco WLAN controller and some access points...
Looks pretty similar to the Juniper EX9214
The French Grand Prix track.