I'm someone who has lived in enterprise data centers for almost twenty years now, and have spent nearly all of it with enterprise storage arrays like the ones in the video. That PowerMax 8000 is a monster capable of pushing 14M I/Os per second (assuming all reads from cache, so a hero number). While it is an active/active array where all ports can service any IO, it can also do active/active arrays with another one in a metro distance, so an entire array could fail without host disruption. Isilon is no joke either. This data center is very tidy and well put together. I appreciate the in-depth tour into a place most people don't get to see.
This is amazing for a hardware/engineering nerd. One suggestion, if possible: a 3D model of how it's laid out (even approximately), or perhaps a layered floor plan of the building, would be extremely interesting to see as well, so it can be pictured as one building rather than as separate rooms in the mind of the viewer.
Should have commented 1/2 way through the video when my brain was still functioning properly :-O Roman, don't know what to say, first of all, amazing how up front and honest you are telling everyone how the videos will be done. When I heard you say that the 2nd video would be more in detail of the actual servers I was like, "Well that'll be the one to watch", but then I started watching this and my jaw just dropped. Honestly, I think you should re-edit this and split this into 3 or 4 parts, because my brain was already melting from the infrastructure and then you got into the servers and it completely melted, this is so, so insane.
@@der8auer-en thank you for not splitting it up but also smart move transitioning to the part where you talked about the redundancy, gave us a few minutes to actually decompress all the information before diving back in.
Man, I had to go and just lie down, cover my eyes and let my brain try to come to terms with all that, then try to stop visualizing it and comprehending it. That one server uses more power in half a day than I do in a month for my entire small house :-O
This video is great. I have been sharing it with my colleagues who need to go to power plant sites; they have "mini data rooms" over there with the same cooling, fire detection and energy backup systems. Thank you for the detailed tour!
That was really really cool. Thank you for sharing this. I worked at IBM back in the 90s, and we had a comparatively miniature tape robot connected to the mainframes. Insane to think they're storing terabytes of data on tape now. I'd love to know how they made that data relatively quickly accessible - it wasn't fun with 100MB.
I've worked with tape since the 1970s. Tape technology has leapt ahead of disk in terms of storage density. Fuji demonstrated the practicality of 185TB in an LTO tape package back in 2015. If you think of the surface area you have to write on in 2000 ft of 1/2" tape, compared to the platters on a disk, you get the idea.
@@charliestevenson1029 I get the storage density thing, what I'm curious about is the latency accessing the specific data you want. Restores of particular files like I said, were not fun on tapes holding much less data than these do just because of the time it would take the tape to get to where it needed to be.
@@ohkay8939 It's called cold storage for a reason. It's mainly intended for the worst case. It's even on a different site. Speeds have of course improved with data density, but thousands of feet of tape is still thousands of feet of tape; there's only so fast you can move it. If you buy even fairly small cold storage these days, the restore times are still 4+ hours. Why would it need to be quickly accessible when they have layers of SSD fabric?
@@mycosys It's called HSM - Hierarchical Storage Management, where you have different layers of access by the speed required. Spinning disks are very expensive to buy and run, so you don't keep rarely accessed data 'online', but you might keep a stub - just enough so the end user can start accessing the file quickly. The robot kicks in to get the tape with the rest of the data. Since 1985 most tape drives have recorded in a serpentine fashion, so it's not sequential access to the bit of data you want, it's a combination of horizontal and vertical movement. LTO8 for example records 5160 tracks on 1/2" tape. Worst case is a seek to the end of the physical tape. Data is (losslessly) compressed, with throughput in excess of 240MB/sec. Not all tape applications are for HSM; most are for offline backup security, often with remote robot infrastructure linked by fibre. Check out IBM's technical tape publications. The problem is, so many people think 'tape' is old and slow. The fact is, all large scale data centres use it - they have to. If you see inside any Google data centre, sure you see racks and racks of servers and disk, but you also see robotic tape libraries. Where I used to work, we had petabytes and petabytes of data on nearline tape (it was seismic processing); it didn't make economic sense to have everything on spinning disk.
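For anyone curious what that stub-and-recall idea looks like in logic, here is a minimal conceptual sketch in Python. Everything in it (the `HypotheticalHSM` class, the `recall` call, the 4 KB stub size) is invented purely for illustration; it is not how IBM's or any vendor's HSM software actually works.

```python
# Conceptual stub-and-recall sketch only; names and structure are hypothetical,
# not any vendor's real HSM API.
from dataclasses import dataclass

@dataclass
class Stub:
    path: str          # logical file path the user still sees "online"
    head_bytes: bytes  # small piece of the file kept on disk for instant access
    tape_id: str       # which cartridge holds the rest of the data
    offset: int        # position of the data on that tape

class HypotheticalHSM:
    def __init__(self, tape_library):
        self.tape_library = tape_library   # abstraction over robot + drives
        self.stubs = {}                    # path -> Stub

    def migrate(self, path, data, tape_id, offset, stub_size=4096):
        """Move cold data to tape, keeping only a small stub on disk."""
        self.stubs[path] = Stub(path, data[:stub_size], tape_id, offset)

    def read(self, path):
        """Serve the stub immediately; recall the rest from tape on demand."""
        stub = self.stubs[path]
        yield stub.head_bytes                                      # instant part from disk
        yield self.tape_library.recall(stub.tape_id, stub.offset)  # robot mounts, drive seeks
```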
This is really how a datacenter should be built: everything (mirrored) as a complete (backup) installation/system. Good video btw, you put a lot of effort into explaining almost everything about the datacenter and its installation.
Awesome tour, impressive amount of information shared! I did not even imagine such complex and effective redundant cooling existed. German infrastructure tech at its peak, not to mention the server side! It's at the level of bank/telecom data centers in terms of redundancy/safety. I wonder if they have geographic redundancy as well, e.g. synced across the country. I will definitely look differently at the Dr Oetker pudding in the supermarket after this :)
Cool video man, appreciate that you took the time to run through the infrastructure side before hitting the tech. Most people don't realize how much engineering goes into housing these things. I work in the hyperscale arena, but those smaller enterprise/hosting sites will always have a special place in my heart!
Very cool to see inside this DC. The company I work for has equipment in two datacentres, and as part of the DC team I get to visit often. I'm impressed with some of the stuff they have going on here that you don't see in a regular DC, such as the oxygen reduction - I didn't know that was a thing either!
I took a tour of a Big Ten university data center here in America, and while a lot of the things they had were similar, it was nowhere near as robust or impressive as this. This is some serious bleeding edge tech and it was really cool to see a tour. I would like to buy a shirt or baseball cap or something to support more of this content. Maybe try and work with LTT to produce merch?
This is the coolest video you have made to date. Nothing else comes close to this imho. Wow, just wow. Thank you for this. To see you geek out like that was awesome. I bet this was one of the coolest things you have done in a while, Roman? Love from Norway.
I learned early on to always wear hearing protection in datacenters. Any consideration desktop computer fans have for being quiet is taken out back and murdered with a claw hammer for servers. And at boot, they go 100%
That's some amazing setup. Appreciate the detailed video, this is the type of stuff you just can't get access to unless you're in high-end hosting or if you work for a place like that typically. The redundancy in every aspect, mildly out of my $ range, but very, very impressive how they managed to do this with the power efficiency stated.
Amazing. This looks thoroughly well designed, and so clean. Virtually everything thought of. Clearly targeting critical customers, and not some average Joe that wants to host his Wordpress website on a 10$/mo VPS.
Interesting video. Even more interesting is the timing. Linus just did a tour of IBM in NY and was able to take apart a mainframe which looked similar to the power 10 box. Is IBM on a PR campaign at the moment?
I was always wondering why the HANA in-memory database is still a thing when we have crazy fast NVMe drive technology, now I can finally make sense of that. TB level memory access bandwidth is absolutely amazing, no component in a normal desktop or x86 based server can come close to such a level, not even the GPU.
Muh, flash bad /s Ehhhhh 8 socket Xeon systems can easily have 8 TB (or more) of memory, and modern datacenter GPUs (Ponte Vecchio, MI250X, Hopper) put this to shame in raw memory bandwidth.
@@suntzu1409 Not sure about the multi-socket configuration of the Xeon platform, but I do know for Xeon scalable, the memory bottleneck is a big issue already in a single-socket configuration, about only 200-ish GBps theoretical bandwidth is achievable, and the speed via interconnects will be slower. As I did miss the H100/Hopper GPU announcement, I apologize for missing that 3TBps single card performance number, these do surpass the raw bandwidth of 1TBps, so thank you for that information. But as a note, based on my personal experience working with GPU computes, due to the "cores" in GPU having a tiny cache per hardware thread, they hit the memory much harder than modern server CPUs unless you work with very localized data or with data shapes optimized explicitly for this pattern, like crunching matrix multiplications with low precision, which is shown in NVIDIA's pages as the intended use case. Otherwise, the GPU could have worse stalls due to memory access or bandwidth saturation caused by the repetitive fetch of the same data. Also, their instruction set is not for general computing and has a sky-high penalty when branching operation is involved (heavily used by any database). Finally, the flash technology I mentioned is something like the Intel Optane series, which optimizing in-memory DB's startup latency is one of their main selling points. This is the part that initially confused me. Since they are already using PMems that have an access latency in the hundreds of nanoseconds and a maximum bandwidth of a couple tens of gigabytes, why still put everything 'in-memory' when using flash arrays would be a much more robust and also a non-volatile option. Bandwidth, size and access latency all contribute a critical part to the puzzle, and I do not think missing any one of them would make such in-mem DB configuration make sense. P.S. I did participate in the bidding process of an SAP HANA migration project from the old SAP + Oracle DB combination. Yes, they do need that 10+TiB memory as their operational database is absolutely enormous, even when the new system has more than one DB instead of the one central one in the old system.
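As a rough sanity check on the "200-ish GBps" figure above, peak DRAM bandwidth is just channels × transfer rate × bus width. The sketch below assumes an 8-channel DDR4-3200 socket and a ~3 TB/s HBM GPU figure purely for comparison; these are assumed configurations, and real sustained numbers are lower.

```python
# Back-of-the-envelope peak memory bandwidth; assumed configs, not measurements.
def peak_bandwidth_gbs(channels: int, transfer_rate_mts: int, bus_bytes: int = 8) -> float:
    """Theoretical peak in GB/s: channels * MT/s * bytes per transfer."""
    return channels * transfer_rate_mts * bus_bytes / 1000  # MB/s -> GB/s

xeon_socket = peak_bandwidth_gbs(channels=8, transfer_rate_mts=3200)  # ~204.8 GB/s
hbm_gpu = 3000.0  # GB/s-class figure quoted for a modern HBM datacenter GPU

print(f"8-ch DDR4-3200 socket: ~{xeon_socket:.0f} GB/s vs HBM GPU: ~{hbm_gpu:.0f} GB/s")
```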
I've seen a few data centers in videos before, including the last one that you showed us (which I thought was HUGE!!) but this place is TOP NOTCH!!!! I'm gonna have to say it TOPS the data center that 'Serve the Home' showed us in Arizona.... There were some differences but OEDIV knocks it OUT OF THE PARK!!!! The only thing that I saw in the AZ center that I thought was better than this one was the security systems... BUT I am thinking that OEDIV didn't want to put theirs on video, which makes complete sense. No matter, this place was AMAZING!!!! I am SO HAPPY that you shared this with us!! I just LOVE your channel and the AMAZING CONTENT you share with us!!! THANK YOU DER8AUER!!!! I CAN'T WAIT for the next part of this journey!!! SEE YOU THERE!!! :D
The cabinets to the left of the z15 are the DS8Ks, which are the mainframe's primary storage system. Another note, that TS4500 tape storage library is most likely part of a virtual tape library which interfaces with another system, a TS7700 which incorporates hard drives as high-speed cache and storage for the tape system. The TS7700 and TS4500 work together and use both hard drives and tapes to present itself as one big virtual tape library to the mainframe for secondary storage.
Impressive... It makes the data centers here in South Africa pale in comparison. I used to work in DCs in South Africa, and the tech was impressive then, but this is just bonkers! Great video!!
Many would think that the title is clickbait, this is not that. I am truly speechless, both the physical and server architecture is amazing. Also, next time I'm doing server work I'll be eating a Dr. Oetker pizza to increase my processing power.
WOW what an amazing video. This place is on the cutting edge of tech, which I love seeing. It's really rare to get to see inside a center like this; thank you for taking the time to show us around. All of those IBM Power E1080 nodes! Each costing about $335k USD, and it looks like there were 4 nodes in just that one rack. A 256GB DDR4 memory card costs $10K, so the 16TB option in those servers costs $630K per node. So at the very minimum each rack with 4 nodes cost $3.4 million USD... I work in a data center, while not quite at this level we do have the HPE Flex 280s.
This is a great video. I was surprised and impressed by all the technology behind running a data center. And the way you explained everything was fantastic. Thank you
Nice design on the blade servers. Air in, air out, plastic insulating the ram from the CPU heat and everything flowing front to back with heat sinks to suit.
Testing generators isn't just to make sure they work, but also to keep them fully lubricated. Additionally, while not likely in a temperature controlled environment, temperature fluctuations can cause condensation inside the crank case. For periodic maintenance runs, the engine should be run long enough to get up to full operating temperature for a while to steam off any accumulated moisture. Running them for a short time can be worse from a moisture perspective.
Fascinating! In the same lane as the big IBM Power10 system, I would recommend checking out an HPE Superdome Flex system if you ever have a chance. A very neat ccNUMA x86 Intel system that scales to 32 sockets (1792 CPU cores with HT enabled) and 48TB of memory. Its main market is SAP HANA, like the Power10, but it also has decent room for IO expansion and so sees use in the HPC space as well. Thanks for the tour!
Wow.
Two takeaways from this video. The first is, as ever, that your content and your commitment to your international viewers is through the roof. Thank you for the English content Roman. It's one thing to do a de-lidding video in two languages but this is unreal.
The second is this place! Amazing attention to detail in every single thing. Just fascinating. I have also learned that perhaps my hardware is not as top-tier as I thought /s
This is fantastic! Wow, I'm blown away by that site.
I work in a datacenter myself, or at least I thought it was one before I saw this, lol. This is a completely different level than what I'm used to.
Really looking forward to part 2. :)
Keep it up Roman, this was an amazing video.
It’s mind blowing isn’t it. Things I’d never thought of are in this installation eg. Four random routed fibre links that don’t cross & purified water.
It's not too surprising. WalMart was/is one of the largest users of data centers in the world. These types of businesses end up creating an enormous amount of data, and they need to know what to do with it.
@@MrMartinSchou Der Bauer is a German supremacist.
Haha. I was exactly the same. I work in a Data Centre and this one was totally next level!
i work in a data center as well, but seeing them handle the parts with no gloves made me wince a bit
OK, hands down this is the most awesome data centre tour I've ever found on RUclips.
My mind was blown multiple times: the non-crossing infrastructure, spill gate for battery acid and the reduced oxygen level.
Dr Oetker has an IT business?!! They really do absolutely everything, not just pizzas.
Just imagine; their company cantina is just frozen pizzas and plastic cups of chocolate mousse EVERY DAY!
@@devilboner I'll take the mousse, but I'll leave their pizzas.
Not just any pizzas, they have fishstick pizzas.
Their cantina was a restaurant where they cook fresh for everyone :D Actually pretty impressive, too
Just think.. In the USA, Amazon used to just be a book store. 😉
Thank you so much to everyone who made this project possible. I've been waiting for a data center walkthrough with this much in-depth knowledge for the last 5 1/2 years.
it's so fascinating how many different technologies (and of course therefore technological experts) have to work together perfectly to make an operation like this work so flawlessly. Awesome video!
Some manufacturers or companies make their own technologies solely for themselves, designing something clever that works well for exactly what they need. I find that awesome.
Data centers are always fun to go through. Some things I have come across which could make for better, more efficient data centers than the one you went through, and I am sure you have thought about some of this with the videos you have made, but some of it may be new to you:
1. 380V DC standard for power delivery. This is ±190V nominal, 300V to 425V range to the servers. What this standard does is, after the transformer steps down to 480V AC, one conversion to 380V DC nominal, with the batteries hooked up in series and parallel to equal 380V DC nominal. From here the power goes over power rails more efficiently, as DC travels over the wires more efficiently than AC, straight to the servers. The server power supplies have basically half of the hardware in them as they skip the AC/DC conversion, which takes a lot of power electronics to convert AC to DC efficiently and cleanly, instead going straight from 380V DC to 12V DC. This is far more efficient and uses far less hardware than traditional AC power delivery to the server. Something like 30% space savings in data centers that do this (and I spell out 'data center' in full to avoid confusion with direct current) while getting a big boost in efficiency. You mention the efficiency of data centers at the end of the video, which is an important metric, and this is a way to raise the actual efficiency and even save on costs when this tech is done at scale. (A rough comparison of the two conversion chains is sketched below, after point 7.)
2. Considering how much Germany relies on renewable energy, something else that I think should be done, especially when doing a 380V DC standard data center build, is to swap out the batteries in the battery room with LFP batteries and build for at least 4 hours of storage. As you may have noticed, the space used in the battery room is not that efficient, and they built around the notion of lead acid batteries spilling, which you will also see in old telco buildings where they built everything to run on 48V DC, which takes some crazy big bus bars at that voltage level. The idea of having these hours of storage is that you can balance against the green energy grid and may even stop taking power from the grid for a while when electricity prices / demand are the highest. At least when there is variable pricing involved / you make a deal with the power company to help balance the grid, such a setup can save / make a data center a lot of money, as you will already have the conversion hardware and the big power use case; just add more batteries to your battery room. With this, those diesel generators don't need to run as much during a power outage and can have a much longer grace period to warm up, saving on the electricity used to keep them warm, since the battery system has hours of storage; you just make sure to keep a certain minimum charge to give the generators more time to warm up. LFP batteries in the data center are also a good thing, as modern LFP batteries will last for decades. Also, by the end of the year (2022), Germany will have a large LFP battery manufacturing facility in operation run by CATL, one of the biggest names in LFP battery manufacturing. So the batteries used in the data center would likely be made in Germany.
3. Getting to where your expertise comes in, liquid cooling the high powered components in servers with a negative pressure liquid cooling loop. A number of data centers do this, especially for supercomputers, and it is extremely efficient and ironically uses a lot less water than the system you showed. The reason for this is that air is a very poor carrier of heat, so you have to cool the air to a certain low temperature, much lower than the temperature of the components you are cooling, or else the server hardware in the racks will get too hot, because the delta-T (change in temperature) with air is high. With liquid going directly to the hot running components using water blocks, you can run the coolant at much higher temperatures, as the delta-T is much lower. At these much higher coolant temperatures, even a hot summer day of say 37C outside is cool enough to not need any extra cooling measures such as evaporative cooling or air conditioners. Some data centers get down to a PUE of 1.1, where this one you specify as 1.35 and the bar is 2.0 (a worked comparison of those numbers is sketched below, after point 7). So yeah, efficiency can be better; granted, this one is pretty good, but they use a lot of water, which gets to be a problem in some places where there is not enough water to go around. This problem is getting worse with global climate change, so this thinking about evaporative cooling unfortunately has to change.
4. A number of data centers are moving to back of rack radiators.
5. Also in your wheelhouse, use of liquid metal thermal transfer compound and high W/mK thermal pads. The idea being that the more efficiently you transfer the heat to the heat sinks, with a smaller delta-T (change in temperature) between the die and heat sink, the less you have to work to keep the die at or below its max target temperature. Data centers are built around keeping the components down to a certain target temperature at max load, and when you throw in all of the inefficiencies of low end thermal transfer compounds, IHSs (Integrated Heat Spreaders), air cooling, and heat buildup as you go through long, high powered servers, you end up spending a lot of energy and often water to reach that target inlet temperature. Also, those super noisy server fans use a tonne of energy to spin that fast and get into significant air friction heating, so if you carry most of the heat away with high density water blocks, where you don't have to work as hard to move the more heat dense liquid coolant around, you can use much slower, more efficient fans for the remaining lower powered air cooled components. Anything you can do to allow the target temperature to be higher reduces your PUE and/or water consumption. (A small thermal-resistance sketch of this trade-off follows below, after point 7.)
6. Shifting gears a little, use of ZFS RAIDZ in the data center. While I have used ZFS RAIDZ level 2 on Solaris in the data center primarily on mechanical drives with SSD caching drives, ZFS under Linux and FreeBSD has gotten a lot better in recent years and supports TRIM on SSDs. RAID controllers do not support TRIM. If you have ever done SSD RAID arrays, those SSDs take a beating when used with hardware RAID controllers, especially as hardware RAID does not support TRIM and in general uses the SSDs in a very write intensive fashion. ZFS is set up a lot more intelligently in terms of how much writing it does, or I should say that is one of its optimizations, at a slight cost elsewhere (space usage), and TRIM support is icing on the cake, greatly reducing write amplification. I would venture to say that ZFS is a more reliable and flexible storage system than hardware RAID, based on my experience with it in the data center and my usage of it under Linux and FreeBSD. The thing is, where a data center may go for super expensive 10 DWPD (Drive Writes Per Day) SSDs when using hardware RAID controllers, they may find they can get the exact same job done with much less costly 1 DWPD drives when using ZFS RAIDZ. I mean, the improvement you will see with ZFS RAIDZ level 2 over RAID 6 is huge. As a bit of a side note on this RAID level: with RAID 1 mirrors, sometimes the mirror fails before you can rebuild it, causing data loss, and RAID 10 just amplifies this problem by adding more mirrors to an array. In a data center there are enough drives that you will see this happen; it is a guarantee when you are dealing with this many drives. RAID 5 also amplifies this problem as you add more drives to the array. RAID 6 is a lot better at not losing your data to random physical drive failures, as you still have redundancy with a single drive failure, so the occasional second drive failure or plain sector loss doesn't kill you; granted, in a data center, as soon as a drive starts losing sectors, you replace it right away. (At least any good admin would.) RAIDZ level 2 is basically ZFS's version of this, except a lot better in terms of data integrity and recovery capabilities. Standard practice where I worked is to have 8 drive vdevs and just add more vdevs to a zpool when increasing storage. In other words, you have an 8 drive array with 2 of the drives for redundancy, and then you just add more of these arrays into a single logical 'drive' / storage pool to get to the desired storage size (the space cost of this layout is quantified below, after point 7). If you have ever dealt with hardware RAID enough, you start finding there are ways to lose data in a more traditional overall setup where there should be ways to make it better and not lose that data. ZFS RAIDZ level 2 is a great answer to these issues. There is a lot to explain here, but this is something you can read about, so this already long post doesn't need to get a lot longer, and you can see why ZFS RAIDZ with direct access to the drives is just better. It is also cheaper, as you don't have to spend money on all of these fancy RAID controllers; you instead just need simple HBAs (Host Bus Adapters) to access the drives.
7. I was a bit surprised with all of that fiber you saw, there weren't any specialized high speed fiber connections such as InfiniBand. (Maybe you saw InfiniBand, but just didn't know what it was?)
I suppose this is a thing when you go to a bank's data center: they are going to be a bit more conservative in how they set things up than, say, a scientific supercomputing center, and their hardware suppliers are going to be a bit more traditional in their offerings, so you just won't see some of these ideas to make the data center even more efficient than the setup you saw. The most radical stuff tends to happen with hyperscale data centers. It is just that these are even harder to get into, as the operators tend to be a bit more secretive about how they do things.
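On point 1, the efficiency argument is easiest to see by multiplying conversion stages end to end. The per-stage efficiencies below are assumed round numbers for illustration only, not figures from this facility or any specific product.

```python
# Rough conversion-chain comparison for point 1; all stage efficiencies are assumptions.
def chain_efficiency(stages):
    eff = 1.0
    for _name, stage_eff in stages:
        eff *= stage_eff
    return eff

ac_path = [                               # classic double-conversion UPS + AC server PSU
    ("UPS rectifier (AC -> DC)", 0.96),
    ("UPS inverter (DC -> AC)", 0.95),
    ("server PSU (AC -> 12V DC)", 0.94),
]
dc_path = [                               # 380V DC distribution, one less stage
    ("central rectifier (AC -> 380V DC)", 0.96),
    ("server PSU (380V -> 12V DC)", 0.965),
]

print(f"AC path end-to-end: {chain_efficiency(ac_path):.1%}")      # ~85.7%
print(f"380V DC path end-to-end: {chain_efficiency(dc_path):.1%}") # ~92.6%
```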
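On point 3, PUE is defined as total facility energy divided by IT equipment energy, so the 1.35, 1.1 and 2.0 figures mentioned above translate directly into overhead power. The 1 MW IT load below is an assumed figure just to make the comparison concrete.

```python
# PUE = total facility power / IT power (standard definition).
# The 1 MW IT load is an assumption purely for illustration.
it_load_kw = 1000.0

def overhead_kw(pue: float, it_kw: float) -> float:
    """Non-IT power (cooling, UPS losses, lighting, ...) implied by a given PUE."""
    return (pue - 1.0) * it_kw

for pue in (2.0, 1.35, 1.1):
    print(f"PUE {pue}: {overhead_kw(pue, it_load_kw):.0f} kW overhead per 1000 kW of IT load")
# PUE 2.0 -> 1000 kW, PUE 1.35 -> 350 kW, PUE 1.1 -> 100 kW
```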
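On point 5, the reason better thermal interfaces allow warmer coolant is the usual lumped thermal-resistance model, T_die = T_coolant + P × ΣR. The resistance values in this sketch are assumed round numbers, not measurements of any real part.

```python
# Lumped thermal model for point 5; resistance values are assumptions for illustration.
def max_inlet_temp(t_die_limit_c: float, power_w: float, resistances_k_per_w) -> float:
    """Warmest coolant/inlet temperature that still keeps the die at its limit."""
    return t_die_limit_c - power_w * sum(resistances_k_per_w)

power = 300.0        # W, a high-power server CPU
die_limit = 95.0     # C, typical throttle point

air_cheap_paste = [0.08, 0.05, 0.12]          # K/W: paste, IHS, air heat sink (assumed)
waterblock_liquid_metal = [0.02, 0.05, 0.04]  # K/W: liquid metal, IHS, water block (assumed)

print(f"air + cheap paste: inlet up to ~{max_inlet_temp(die_limit, power, air_cheap_paste):.0f} C")
print(f"water block + liquid metal: coolant up to ~{max_inlet_temp(die_limit, power, waterblock_liquid_metal):.0f} C")
# ~20 C vs ~62 C, which is why direct liquid cooling can skip evaporative cooling on a 37 C day
```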
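On point 6, the space cost of the 8-drive RAIDZ2 vdev layout versus mirrors is easy to quantify (ignoring ZFS metadata and padding overhead):

```python
# Usable fraction of raw capacity for point 6; ignores ZFS metadata/padding overhead.
def usable_fraction(drives_per_group: int, redundancy_drives: int) -> float:
    return (drives_per_group - redundancy_drives) / drives_per_group

raidz2_8wide = usable_fraction(8, 2)   # any 2 drives per vdev can fail -> 75% usable
raid10_mirror = usable_fraction(2, 1)  # 50% usable, and data is lost if a mirror's partner fails mid-rebuild

print(f"8-wide RAIDZ2: {raidz2_8wide:.0%} usable, RAID 10 mirrors: {raid10_mirror:.0%} usable")
```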
As much as I love ZFS, it's not an FS made for scale. If you want to scale ZFS, you have to rely on a cluster FS like Gluster, or Ceph on top of it. These DELL EMC machines you saw are NOT using RAID controllers. They either use their own specific cards to do everything on ASICs or they use software to control it. Also, those are NVMe SSDs; the amount of memory bandwidth needed to support them is massive. Those machines are probably using some kind of DPU that connects the fiber and the storage directly, without even going through the CPU.
It's just amazing that you let us watch how this industry works. Thank you, Roman, for releasing this for the international community.
5:32 As someone working in the water industry, my guess is that those are for water softening. Hard water causes the formation of scale deposits on anything the water touches (the insides of the whole cooling system in this case), which reduces the effectiveness of heat transfer and ultimately may even lead to damage to the piping. This is especially problematic in this case, since the hotter the water, the more and faster the scaling. Depending on how hot the water gets, they may also be using a degassing system to remove oxygen from the water, since in hot water oxygen becomes very corrosive.
London water is so hard it kills kettles if you don't descale once every couple of months - unless you're like me and use bottled water, because you were put off drinking water boiled in London kettles pretty soon after arriving, once you saw what happens.
Absolutely. We have well water on my property here in the States, and we have to use a (much smaller) similar device to soften our water. Otherwise it clogs up the pipes and other stuff with iron rust build-up and turns everything yellow or brown. I would imagine they need to remove ANY minerals and biological stuff from any water used to cool something this large; otherwise they would be tearing the cooling down frequently trying to clean the crud out of it. The same can be seen in someone's desktop PC they water-cooled and ignorantly filled with plain tap water with no additives - the CPU and GPU blocks' fins will clog up super fast.
Only thing I would add is: there is no salt in a water softener. The salt in a softener system is used to create a brine which then backwashes the negatively charged resin beads inside the cylinders on a regular cycle. It's the resin beads which do all the work.
It's a water softener. Commonly used here in the states for reverse osmosis, boiler feedwater, and city water/industrial water
@@ZonkedCompanion "there is no salt in a water softener", yes, but also kinda no. It's true that the resin does the "work", but the way it softens the water is through "ion exchange", i.e. Calcium ions (the cause of scaling) from the water are exchanged with the salt ions (Na for example) from the resin, thus you could say that there's salt in the softener. When you wash the softener with the salt brine, the Calcium ions trapped in the resin get removed and replaced with salt ions, i.e. resin "regenerates".
You've outdone yourself on this video. Thoroughly enjoyed the tour; very interesting, all of the aspects of making a stable room for the servers to run smoothly. Thank you for the very unique chance to see behind the scenes of a data center. Danke Roman. ;)
I’ve been to a data centre here in Australia and it’s a similar story with their infrastructure, redundancy is the key word. They didn’t just heat the generators though, they convert the generator to a grid driven motor to keep the engine spinning, so if they are needed then all they need for starting is opening the fuel valve, and the motor automatically switches back to a generator.
I'd be interested to see how they've implemented that electrically.
It would probably shave (just) a couple of seconds off the time that UPS'es need to keep the load up, whilst using heaps of power to keep that motor running.
There's little to no cost saving on UPS'es because the two nets would still need to be switched in and out, and that doesn't happen in less than a couple of seconds.
That was a great tour, amazing to see into a modern datacenter.
this was great! We have been using ibm power for decades. We have both Z and power. Rotating out power 8. Right now i am implementing 2x 1080s and will be migrating from 980s to the 1080s. Running IBMi, AIX, and RHEL.
Congrats on getting in there to see and touch all those things and thank you for sharing it with us Roman. Seeing the datacenter up close was awesome, seeing your excitement as you described it all, even better.
Roman: Thank You again for the peek behind the curtain! Fascinating! And Mahalo to the crew at the data center as well! Just excellent in every way!
Thanks for the tour. Love the details you took care about. Can't wait for the continuation.
Stunning video Roman. Totally amazed at the complexity of the facility.
I used to do maintenance testing on data center power systems. We had been using bigger and bigger batteries and had similar issues with their hazardous properties. Now, a 1.2 MW diesel generator (Caterpillar, Cummins, Generac... like the one you showed) provides a second power source which, when used with a transfer switch, allows the use of smaller and smaller batteries that provide power only long enough to start the generator. After the loss of mains power, with the generator started, the transfer switch switches to the generator. The data center power is derived from those UPS systems like the ones you showed, whose inverter is always running off the batteries, which themselves are charged either from the mains or the generator.
As an electrician doing a lot of commercial renovations and new construction in San Francisco, I've seen a lot of much smaller server rooms, but this is next level! Very impressed with the electrical work (power in particular) , btw. Very clean and neat. Thanks for the absolutely impressive tour! And thanks to the company for giving you such access.
german style
Amazing video!! So grateful for the gents who allowed you to tour and record the facility 🙌🏾🙏🏾
Thanks for the tour, it reminds me of all the equipment I used to work on in my career.
I like shows like "How It's Made", but this video is even deeper than that! Major datacenters are a huge engineering achievement.
Always a pleasure to look inside a datacenter.
I had the chance to look inside a very small one in the basement of an IT service company in Cologne with my CompSci class back in school. We had a bit of extra luck, because the operator decided to reschedule the test run of one diesel generator (by one or two days) so we could witness it live in the gen room.
Sadly, no blade servers, POWER or z-Series there, just standard 19inch Intel x86 servers, Ethernet, and fiber access.
Eagerly awaiting part 2.
I like how everything there is clean and organized
That was great Roman. Thank you for the time and effort that went into this, and all your videos!!
Most fascinating info about datacenter! Just amazing video! Mad props to Roman for doing this in eng and de!!
That's out of this world, even when compared to other data centres that I have seen tours of.
Best. Content. Ever.
The gas suppression and aspirating systems were great to see for me. I work in that part of the fire industry here in Australia, and was interested in how (and what brand of) systems are used overseas. I have set up similar systems for data centres here.
Is it complex to do the installation? I am an engineering student and I am hugely interested in working in such fields
@@Kenia-sn1cg Depending on the local requirements , standards, and authorising bodies, it's a hard field to get into.
Here in Australia, there is a lot of licencing and training to install gas suppression systems like this. Even small systems ( single tank of suppresion agent) require a lot of licencing to install, test, commision and maintain.
Depending on the agent used, and volumes, and also age of the system, there is a lot of regulation. Some older system used ODP ( ozone depleteing) gases, so if accidently discharged , or even intentionally discharged ( like in a fire instance), there is a lot of paperwork to inform the relevent enviromental agencies.
Most suppression systems use non ODP gases these days, usually carbon dioxide, nitrogen or nitrogen mixed with other inert gases like argon, or other synthetic agents .
There is some skill required in designing gas suppression systems. You'll need to take in to account factors like, room size, temperature, room pressure, gas dispersal rates, type of equipement in room, types of fire likely to occur ( electrical, paper, plastics, etc) and many more.
As for the asprirating systems, most of them usually operate in a similar manner. The general manner they work by , is to constantly sample the air via capillary tubes, or sampling points , through a laser particle reader. This reader will measure obscuration percentage of the air. Some systems can measure as low as 0.01% , by contrast, a smoke detector will normally alarm at 6-8%. So having a dust free enviroment is very important.
The design of an aspirating system is generally easier, but there are still factors to consider.
In most cases, the aspirating systems will be part of larger smoke/fire management system used to trigger the release of the suppression system.
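Purely as an illustration of the aspirating-detector idea in the reply above (a laser particle reader measuring obscuration, with multi-stage alarms feeding the suppression release): the 0.01% sensitivity and 6-8% point-detector figures come from that reply, but the staging thresholds below are invented for the example.

```
# Illustrative multi-stage alarm logic for an aspirating smoke detector.
# Threshold values are made up; real systems are configured per site and standard.
STAGES = [                 # (% obscuration per metre, stage)
    (0.01, "alert"),       # earliest warning, investigate
    (0.05, "action"),      # pre-alarm, notify operators
    (0.50, "fire 1"),      # fire alarm
    (2.00, "fire 2"),      # typically the stage wired to release the gas suppression
]

def classify(obscuration_pct_per_m: float) -> str:
    """Return the highest stage reached for a measured obscuration level."""
    stage = "normal"
    for threshold, name in STAGES:
        if obscuration_pct_per_m >= threshold:
            stage = name
    return stage

print(classify(0.02))   # "alert"  - far below anything a point detector would notice
print(classify(7.0))    # "fire 2" - roughly where an ordinary smoke detector finally alarms
```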
I just love the attention to detail. If I had stayed in the industry after 2001, that is the kind of place that I would have wanted to be working in. Thanks so much for the tour. Loved it.
Oh and Dr Oetker make the best pizzas, we have them here in the Netherlands too.
The last section with the backup tape drives, I would also want a further backup of that in a separate city. Back in the day I argued for data to be backed up from Manchester, Cardiff, Glasgow and London into each of the other centres. My bosses thought I was being OTT.
There is no such thing as OTT in a datacentre. You have to not only prepare for the obvious, but also the unlikely! Everyone always moans at I.T. when things go wrong, but usually it is because they did not allow enough resources in the first place.
I have to disagree with you on the pizza claim. All frozen pizzas are like eating pieces of hot cardboard. Dr Oetker, Chicago Town, Goodfellas, etc.… they're all terrible.
@@AJ_UK_LIVE True. After I left, the company concerned forgot to keep backups up to date. They suffered a major data loss.
@@Jules_Diplopia A little schadenfreude there for you I'm sure.
@@leeroyjenkins0 Still doesn’t change the fact that they’re like eating cardboard and don’t even get me started on what comes out of the oven when compared to the packaging imagery. 😂😂
It amazed me how little the infrastructure in a new data centre has changed from one I worked in over 20 years ago in Australia.
Hahahahahaha australia.... lmao. You are funny.
The computer technology has moved along at a blistering pace but HVAC has not changed all that much.
@@Marin3r101 I am so sorry your meds are not working. Best of luck for the future.
@@Marin3r101 I probably didn't get the joke. I'm Indian, though. We have some of the world's biggest data centres for a population of 1.4 billion.
That fan spin up 😮 That SSD storage 😅 LTT's $1m unboxing looks quaint all of a sudden.
Ha ha, that's what I thought too :) Also I want to see Linus's reaction... he practically had an orgasm when he saw something like 100 GB/s, so what about 1.5 TB/s :D
This is one of the best videos I've seen. SO freaking cool they let you have that level of access. What an incredible place!
Have 14 years Data Center Experience working as an operating engineer taking care of equipment. Great video and explanation of everything. Really enjoy seeing different Data Centers.
Top notch content. Extremely interesting. Thank you for this glimpse into cutting edge enterprise stuff.
I would like to thank you truly and sincerely for this video. I watched it from start to finish with full attention. I am soon starting a new sales job at a DC company. Quite excited about it, because it's going to be my first experience in the DC industry. This video is very educational and helped me understand the nature of this business so much better. Again, thank you for all the hard work you have put in to help people like me. I appreciate it and I wish you more success! Looking forward to part 2.
I always appreciate the effort you put into publishing your videos on both your main and your EN channel, but this video goes above and beyond. Doing the full tour once must have been an ordeal to organize, but convincing them to let you go over everything twice really takes your content that extra mile.
😬 thanks. Yes was indeed a lot of effort 😁
I'm someone who has lived in enterprise data centers for almost twenty years now, and I've spent nearly all of it with enterprise storage arrays like the ones in the video. That PowerMax 8000 is a monster capable of pushing 14M I/Os per second (assuming all reads from cache, so a hero number). While it is an active/active array where all ports can service any I/O, it can also run active/active with another array at metro distance, so an entire array could fail without host disruption. Isilon is no joke either.
This data center is very tidy and well put together. I appreciate the in-depth tour into a place most people don't get to see.
Amazing video. I love these tours you're doing Roman.
What an amazing video! Very very cool! Thanks Roman.
This is amazing for a hardware/engineering nerd. One thing: if possible, a 3D model of how it's laid out (even approximately), or perhaps a layered plan of the building, would be extremely interesting to see as well, so the viewer can picture it as one building rather than as separate rooms.
It would take this full datacenter to make that 3D model
Jfc
@@suntzu1409 I can scan my entire house, have it processed and done in 5 minutes. ON MY IPHONE. What are you talking about
@@Thebadbeaver9
It was a joke
@@suntzu1409 because people usually end their jokes with "jesus fucking christ" 🤦
JFC
@@Thebadbeaver9
"Jesus fucking christ"
What the, uhhhh, fuck did you just bring upon this cursed land
Should have commented 1/2 way through the video when my brain was still functioning properly :-O Roman, don't know what to say, first of all, amazing how up front and honest you are telling everyone how the videos will be done. When I heard you say that the 2nd video would be more in detail of the actual servers I was like, "Well that'll be the one to watch", but then I started watching this and my jaw just dropped.
Honestly, I think you should re-edit this and split this into 3 or 4 parts, because my brain was already melting from the infrastructure and then you got into the servers and it completely melted, this is so, so insane.
Haha thanks 😁 I first thought about splitting it up but when I watched it first it didn't feel like an hour so I left it as one part 😁
@@der8auer-en thank you for not splitting it up but also smart move transitioning to the part where you talked about the redundancy, gave us a few minutes to actually decompress all the information before diving back in.
Man, I had to go and just lie down, cover my eyes and let my brain try to come to terms with all that, then try to stop visualizing and comprehending it. That one server uses more power in half a day than I do in a month for my entire small house :-O
Really enjoyed this. Thank you so much, and I hope you gave the owners a huge thank you, getting this level of access is pretty much unheard of.
This is amazing! Can't believe I'm just now finding this channel. You deserve a few hundred thousand more subscribers IMO
Wow what a high end facility. These guys know what they are doing, and to think this started in the food industry
This video is great. I have been sharing it with my colleagues who need to go to power plant sites; you have "mini data rooms" over there with the same cooling, fire detection and energy backup systems. Thank you for the detailed tour!
That was really really cool. Thank you for sharing this.
I worked at IBM back in the 90s, and we had a comparatively miniature tape robot connected to the mainframes. Insane to think they're storing terabytes of data on tape now. I'd love to know how they made that data relatively quickly accessible - it wasn't fun with 100MB.
LTO12 will be up to 144TB! Insanely cost effective
I've worked with tape since the 1970s. Tape technology has leapt ahead of disk in terms of storage density. Fuji demonstrated the practicality of 185TB in LTO tape package back in 2015. If you think of the surface area you have to write on in 2000 ft of 1/2" tape, compared to the platters on a disk you get the idea.
@@charliestevenson1029 I get the storage density thing, what I'm curious about is the latency accessing the specific data you want. Restores of particular files like I said, were not fun on tapes holding much less data than these do just because of the time it would take the tape to get to where it needed to be.
@@ohkay8939 It's called cold storage for a reason. It's mainly intended for the worst case. It's even on a different site. Speeds have of course improved with data density, but thousands of feet of tape is still thousands of feet of tape; there's only so fast you can move it. If you buy even fairly small cold storage these days, the restore times are still 4+ hours.
Why would it need to be quickly accessible when they have layers of SSD fabric?
@@mycosys It's called HSM - Hierarchical Storage Management - where you have different layers of access, tiered by the speed required. Spinning disks are expensive to buy and run, so you don't keep rarely accessed data 'online', but you might keep a stub - just enough so the end user can start accessing the file quickly. The robot kicks in to get the tape with the rest of the data. Since 1985 most tape drives have recorded in a serpentine fashion, so it's not sequential access to the bit of data you want, it's a combination of horizontal and vertical movement. LTO8 for example records 5160 tracks on 1/2" tape. Worst case is a seek to the end of the physical tape. Data is (losslessly) compressed, with throughput in excess of 240MB/sec. Not all tape applications are for HSM; most are for offline backup security, often with a remote robot infrastructure linked by fibre. Check out IBM's technical tape publications. The problem is, so many people think 'tape' is old and slow. The fact is, all large-scale data centres use it - they have to. If you see inside any Google data centre, sure you see racks and racks of servers and disk, but you also see robotic tape libraries. Where I used to work, we had petabytes and petabytes of data on nearline tape (it was seismic processing); it didn't make economic sense to have everything on spinning disk.
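To make the HSM idea above a bit more concrete, here is a toy sketch of the tiering and stub/recall pattern described in that comment. Class names, tier names and age thresholds are all invented; this is not how any particular product implements it.

```
# Toy sketch of hierarchical storage management (HSM) as described above.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FileRecord:
    path: str
    last_access: datetime
    tier: str = "ssd"         # ssd -> disk -> tape
    stub_only: bool = False   # True once the bulk of the data has moved to tape

def migrate(f: FileRecord, now: datetime) -> FileRecord:
    """Demote cold data down the tiers, keeping only a stub online for tape-resident files."""
    age = now - f.last_access
    if f.tier == "ssd" and age > timedelta(days=30):
        f.tier = "disk"
    elif f.tier == "disk" and age > timedelta(days=180):
        f.tier, f.stub_only = "tape", True
    return f

def open_file(f: FileRecord, now: datetime) -> FileRecord:
    """On access, the stub triggers a recall - this is the point where the robot mounts the tape."""
    if f.stub_only:
        f.tier, f.stub_only = "disk", False   # recall completes back into the disk tier
    f.last_access = now
    return f
```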
These videos are amazing Roman, great work. Thanks for taking the time to share all this.
Another amazing video.. nice to see this kind of insider perspective!
Just WOW. Thanks for taking us on this great tour. I appreciate the effort you put into this video and really enjoyed it. Very informative, too.
Wow, I am impressed at how professional and advanced everything is inside this data center.
This is really what a datacenter should be: everything mirrored, as a complete backup installation/system. Good video btw, you put a lot of effort into explaining almost everything about the datacenter and its installation.
Love this kind of content. Please try to find more companies that will showcase their hard work.
Awesome tour, and impressive how much information was shared! I didn't even imagine such complex and effective redundant cooling existed. German infrastructure tech at its peak, not to mention the server side! It's at the level of bank/telecom data centers in terms of redundancy and safety. I wonder if they have geographic redundancy as well, e.g. synced across the country. I will definitely look differently at the Dr Oetker pudding in the supermarket after this :)
This facility is insane. Very cool content Roman.
Truly fascinating, would love to see more DC tours.
The Internet knows Dr Oetker from their pizza with fishsticks on it!
Data center without wearing ear plugs?
Fun times...
Amazing piece of engineering. There's a single flaw in the entire datacenter: the Emergency Stop sign on the OxyReduct is in English 😄 Great video!
Cool video man, appreciate that you took the time to run through the infrastructure side before hitting the tech. Most people don't realize how much engineering goes into housing these things. I work in the hyperscale arena, but those smaller enterprise/hosting sites will always have a special place in my heart!
Awesome video. I miss my data center days working for EMC and this brought back lots of memories.
That was the cleanest, best laid out DCIM/SCIF I have ever seen. Great content :)
Wow. This is the best video on your channel. Very impressed. Best regards from Turkey.
Very cool to see inside this DC. The company I work for has equipment in two datacentres and I am part of the DC team, so I get to visit often. I'm impressed with some of the stuff they have going on here that you don't see in a regular DC, such as the oxygen reduction system; I didn't know that was a thing either!
Nice and educative, thank you and thank you to the IT data center dudes.
Thank you for producing these in both English and German!
As an architect and hardware enthusiast I'm finding this extremely extremely interesting. Thank you.
Okay this is awesome. I absolutely love this kind of content, the engineering challenges and super cool tech in high end datacentres is just so cool
Mind blowing stuff. Thanks guys.
Thank you for making the video.
I didn't expect to see mainframes there
I took a tour of a Big Ten university data center here in America, and while a lot of the things they had were similar, it was nowhere near as robust or impressive as this. This is some serious bleeding-edge tech and it was really cool to see a tour. I would like to buy a shirt or baseball cap or something to support more of this content. Maybe try to work with LTT to produce merch?
This is the coolest video you have made to date. Nothing else comes close, imho. Wow, just wow. Thank you for this. To see you geek out like that was awesome. I bet this was one of the coolest things you have done in a while, Roman? Love from Norway.
Yes I also enjoyed this a lot :D
I learned early on to always wear hearing protection in datacenters.
Any consideration desktop computer fans have for being quiet is taken out back and murdered with a claw hammer for servers.
And at boot, they go 100%
That's some amazing setup. Appreciate the detailed video, this is the type of stuff you just can't get access to unless you're in high-end hosting or if you work for a place like that typically. The redundancy in every aspect, mildly out of my $ range, but very, very impressive how they managed to do this with the power efficiency stated.
Super interesting and detailed tour! Thanks :)
This video was very interesting. Even though I know nothing about data centers, you explained everything in a way I can understand. Thank you.
Wow, thank you! This was amazing, seeing such an immense, complex piece of kit. An absolute privilege to see and understand it through this video.
Amazing. This looks thoroughly well designed, and so clean. Virtually everything thought of. Clearly targeting critical customers, and not some average Joe that wants to host his Wordpress website on a 10$/mo VPS.
Interesting video. Even more interesting is the timing. Linus just did a tour of IBM in NY and was able to take apart a mainframe which looked similar to the power 10 box. Is IBM on a PR campaign at the moment?
That was a coincidence. My video was not coordinated with IBM. I don't even have a direct IBM contact.
This was absolutely fascinating! My hats off to the people who build this.
I was always wondering why the HANA in-memory database is still a thing when we have crazy fast NVMe drive technology, now I can finally make sense of that. TB level memory access bandwidth is absolutely amazing, no component in a normal desktop or x86 based server can come close to such a level, not even the GPU.
Muh, flash bad /s
Ehhhhh, 8-socket Xeon systems can easily have 8 TB (or more) of memory, and modern datacenter GPUs (Ponte Vecchio, MI250X, Hopper) put this to shame in raw memory bandwidth.
@@suntzu1409 ok, now which one does both?
@@suntzu1409 Not sure about the multi-socket configuration of the Xeon platform, but I do know that for Xeon Scalable, the memory bottleneck is already a big issue in a single-socket configuration; only about 200 GB/s of theoretical bandwidth is achievable, and access via the interconnects will be slower.
As I did miss the H100/Hopper GPU announcement, I apologize for missing that 3TBps single card performance number, these do surpass the raw bandwidth of 1TBps, so thank you for that information. But as a note, based on my personal experience working with GPU computes, due to the "cores" in GPU having a tiny cache per hardware thread, they hit the memory much harder than modern server CPUs unless you work with very localized data or with data shapes optimized explicitly for this pattern, like crunching matrix multiplications with low precision, which is shown in NVIDIA's pages as the intended use case. Otherwise, the GPU could have worse stalls due to memory access or bandwidth saturation caused by the repetitive fetch of the same data. Also, their instruction set is not for general computing and has a sky-high penalty when branching operation is involved (heavily used by any database).
Finally, the flash technology I mentioned is something like the Intel Optane series, for which improving an in-memory DB's startup latency is one of the main selling points. This is the part that initially confused me: since they are already using PMem, which has access latencies in the hundreds of nanoseconds and a maximum bandwidth of a couple of tens of gigabytes per second, why still put everything 'in memory' when flash arrays would be a much more robust and non-volatile option? Bandwidth, size and access latency all contribute a critical part of the puzzle, and I do not think such an in-memory DB configuration would make sense if any one of them were missing (a rough scan-time comparison follows below).
P.S. I did participate in the bidding process of an SAP HANA migration project from the old SAP + Oracle DB combination. Yes, they do need that 10+TiB memory as their operational database is absolutely enormous, even when the new system has more than one DB instead of the one central one in the old system.
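As a back-of-the-envelope illustration of the bandwidth argument in this thread, here is the scan-time arithmetic for a roughly 10 TiB working set. The bandwidth figures are rough assumptions chosen for the comparison, not measurements of any specific HANA box.

```
# Back-of-the-envelope only; bandwidth numbers are assumptions, not benchmarks.
TIB = 2**40
working_set = 10 * TIB                       # roughly the database size mentioned above

scenarios = {
    "DRAM on an 8-socket box (~200 GB/s per socket)": 8 * 200e9,
    "Persistent memory tier (~40 GB/s)":              40e9,
    "Fast NVMe flash array (~25 GB/s)":               25e9,
}

for name, bandwidth in scenarios.items():
    print(f"{name}: full scan ~ {working_set / bandwidth:.0f} s")
# Roughly 7 s from DRAM versus several minutes from the other tiers - that
# order-of-magnitude gap, on top of the latency difference, is the case for in-memory.
```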
I've seen a few data centers in videos before, including the last one that you showed us (which I thought was HUGE!!) but this place is TOP NOTCH!!!! I'm gonna have to say it TOPS the data center that 'Serve the Home' showed us in Arizona.... There were some differences but OEDIV knocks it OUT OF THE PARK!!!!
The only thing I saw in the AZ center that I thought was better than this one was the security systems... BUT I am thinking that OEDIV didn't want to put theirs on video, which makes complete sense. No matter, this place was AMAZING!!!! I am SO HAPPY that you shared this with us!! I just LOVE your channel and the AMAZING CONTENT you share with us!!! THANK YOU DER8AUER!!!!
I CAN'T WAIT for the next part of this journey!!! SEE YOU THERE!!! :D
The cabinets to the left of the z15 are the DS8Ks, which are the mainframe's primary storage system. Another note, that TS4500 tape storage library is most likely part of a virtual tape library which interfaces with another system, a TS7700 which incorporates hard drives as high-speed cache and storage for the tape system. The TS7700 and TS4500 work together and use both hard drives and tapes to present itself as one big virtual tape library to the mainframe for secondary storage.
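A rough sketch of the virtual tape library idea described above (a disk cache fronting the physical library, so the mainframe only ever sees one big tape library). The class and method names are invented for illustration and do not reflect the actual TS7700/TS4500 interfaces.

```
# Invented illustration of a disk-cached virtual tape library front end.
class VirtualTapeLibrary:
    def __init__(self, physical_library):
        self.disk_cache = {}                 # volume id -> data held on hard drives
        self.physical_library = physical_library

    def mount(self, volume_id: str) -> bytes:
        """The host simply mounts a volume; whether it comes from the disk cache
        or has to be recalled from a physical cartridge is hidden from it."""
        if volume_id in self.disk_cache:                  # cache hit: disk speed
            return self.disk_cache[volume_id]
        data = self.physical_library.recall(volume_id)    # robot fetches the cartridge
        self.disk_cache[volume_id] = data                 # keep it hot for later mounts
        return data
```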
Great Video! I could only wish that the DC's I've been in were that well equipped and organised.
You could also ask IT service providers for banks, for example Finanz Informatik or Atruvia.
every time you step out of the datacenter it suddenly feels so peaceful
Impressive...
It makes the data centers here in South Africa pale in comparison. I used to work in DCs in South Africa... and the tech was impressive back then, but this is just bonkers! Great video!!
This is super cool. Im a fan of this sort of content 👌👍
Wow, awesome content, thanks!!!
Many would think that the title is clickbait, but this is not that.
I am truly speechless, both the physical and server architecture is amazing.
Also, next time I'm doing server work I'll be eating a Dr. Oetker pizza to increase my processing power.
Very Enlightening. Thank you :)
WOW, what an amazing video. This place is on the cutting edge of tech, which I love seeing. It's really rare to get to see inside a center like this; thank you for taking the time to show us around. All of those IBM Power E1080 nodes! Each costs about $335K USD, and it looks like there were 4 nodes in just that one rack. A 256 GB DDR4 memory card costs $10K, so the 16 TB option in those servers comes to about $640K per node. So at the very minimum, each rack with 4 nodes costs roughly $3.9 million USD... I work in a data center, and while not quite at this level, we do have the HPE Flex 280s.
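Taking the prices quoted in that comment at face value (they are the commenter's ballpark figures, not quotes), the per-rack arithmetic works out like this:

```
# Uses the rough prices from the comment above; all figures are assumptions.
node_base_usd   = 335_000             # one Power E1080 node
dimm_usd        = 10_000              # one 256 GB DDR4 card
dimms_per_node  = 16 * 1024 // 256    # 16 TB per node -> 64 cards
memory_per_node = dimms_per_node * dimm_usd          # 640,000
node_total      = node_base_usd + memory_per_node    # 975,000
rack_total      = 4 * node_total                     # 3,900,000 for the 4-node rack
print(dimms_per_node, memory_per_node, node_total, rack_total)
```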
Very impressive indeed! Well done, Dr. Oetker and Roman, thank you for the video!
this is amazing content! This DC is very impressive compared to some I have seen in Australia.
This is a great video. I was surprised and impressed by all the technology it takes to run a data center. And the way you explained everything was fantastic. Thank you.
Cool video. IBM is such a weird and fascinating technology company.
They built the datacenter in RAID 1. Nice!
Nicely efficient building, awesome video.
Nice design on the blade servers. Air in, air out, plastic insulating the ram from the CPU heat and everything flowing front to back with heat sinks to suit.
Testing generators isn't just to make sure they work, but also to keep them fully lubricated. Additionally, while not likely in a temperature controlled environment, temperature fluctuations can cause condensation inside the crank case. For periodic maintenance runs, the engine should be run long enough to get up to full operating temperature for a while to steam off any accumulated moisture. Running them for a short time can be worse from a moisture perspective.
Fascinating! In the same lane as the big IBM Power10 system, I would recommend checking out an HPE Superdome Flex system if you ever have the chance. A very neat ccNUMA x86 Intel system that scales to 32 sockets (1,792 logical cores with HT enabled) and 48 TB of memory.
Its main market is SAP HANA, like the Power10, but it also has decent room for I/O expansion and so sees use in the HPC space as well. Thanks for the tour!