"This is the high performance computing center".
"And what do you use it for?"
"Uh, high performance computing."
"They turn on and off."
haha-
The computing power in that room is truly remarkable. Several years ago I wrote a program to produce a lookup texture for the falloff of sunlight through an atmosphere. It makes colors fall off at just the right amounts depending on view angle, sun intensity and a few other parameters, fast enough to provide photorealistic sky coloration without actually doing the heavy math that an AAA game would use. I based it on a guy's work who ran it through something similar to this, but with the computing technology of that period. He bragged hard about how it only took a couple of minutes to calculate in their data center. I had a better understanding of Direct3D 9 HLSL 3.0 performance, further optimized his approach, and ran it on a computer that was roughly on par with an Xbox 360. It took a couple of months to finish producing that lookup texture, despite better optimization on hardware that was technically better suited to that form of math. I can only imagine what the system in the video is capable of, considering some of our greatest computing advances have come in the last several years.
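The precompute idea described above is easy to illustrate. Here is a minimal Python sketch: bake atmospheric colour falloff into a lookup texture offline so the renderer only does a cheap texture fetch at runtime. The attenuation model and all the constants below are invented for illustration; this is nothing like the commenter's actual HLSL.

```python
# Toy sketch of the precompute idea (not the commenter's actual code):
# bake sky colour into a 2D lookup texture indexed by view angle and
# sun intensity, using a crude Rayleigh-style exponential-attenuation
# model, so the runtime shader only does a texture fetch.
import math

WIDTH, HEIGHT = 256, 64                      # view-angle bins x sun-intensity bins
RGB_WAVELENGTHS = (680e-9, 550e-9, 440e-9)   # metres; red, green, blue

def rayleigh_coefficient(wavelength):
    # Rayleigh scattering strength scales as 1 / wavelength^4 (up to constants).
    return 1.0 / wavelength ** 4

def sky_colour(view_angle, sun_intensity):
    # Crude model: the optical path length grows as the view direction
    # approaches the horizon, and each channel attenuates exponentially.
    path = 1.0 / max(math.cos(view_angle), 1e-3)
    base = min(rayleigh_coefficient(w) for w in RGB_WAVELENGTHS)
    return tuple(
        sun_intensity * math.exp(-(rayleigh_coefficient(w) / base) * 0.25 * path)
        for w in RGB_WAVELENGTHS
    )

lut = [
    [sky_colour(math.pi / 2 * x / (WIDTH - 1), y / (HEIGHT - 1))
     for x in range(WIDTH)]
    for y in range(HEIGHT)
]  # in a real pipeline this grid would be written out as a texture file
```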
I feel like in cold countries, every building should have a supercomputer, rent out the processing time, and use it to heat the air.
Same thoughts :) Although 1per10 or 1per50 buildings I think.
But can it run Crysis?
Missed opportunity.
The correct question in 2018 is: But can it run PUBG?
goeiecool9999, the correct question is why anyone would want to run PUBG in its current state :/
Will it blend?
But can it mine Bitcoin?
GoC (Games of Crysis) is a pretty standard modern HPC benchmark. I'd say this can run 2 to 3 thousand GoCs.
I have done a lot of my PhD calculations on Minerva (this HPC). Thanks to the University of Nottingham and my sponsor.
Computerphile, please do an episode on the PBS job scheduler. It would be very interesting.
Agreed, I'd like to know how that works
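While we wait for that video: PBS itself is a large piece of software, but the core loop a batch scheduler runs is small enough to sketch. The toy below is a plain first-come-first-served allocator with invented job names and node counts; real PBS adds priorities, backfill, fair-share and walltime enforcement on top.

```python
# Toy sketch of what a batch scheduler like PBS/Slurm does at its core:
# jobs request resources, and a queued job starts whenever enough free
# nodes exist. This is only the skeleton, not how PBS actually works.
from collections import deque

TOTAL_NODES = 10

def run_schedule(jobs):
    queue = deque(jobs)          # (job_name, nodes_needed, runtime)
    running, free, clock = [], TOTAL_NODES, 0
    while queue or running:
        # Finish jobs whose runtime has elapsed and reclaim their nodes.
        for job in [j for j in running if j[3] <= clock]:
            running.remove(job)
            free += job[1]
        # Strict FIFO: launch the head of the queue while resources allow
        # (a real scheduler would also backfill smaller jobs around it).
        while queue and queue[0][1] <= free:
            name, nodes, runtime = queue.popleft()
            free -= nodes
            running.append((name, nodes, runtime, clock + runtime))
            print(f"t={clock}: started {name} on {nodes} nodes")
        clock += 1

run_schedule([("cfd_sim", 8, 3), ("genome", 4, 2), ("render", 2, 1)])
```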
It would be awesome to see a video about the software side of HPC
I'll wager someone is running a Quake II server in there somewhere
EW-too more probable
Probably, but I expect the load from that will be negligible.
But will it run Crysis?
I like how the questions are coming from a large empty room
:)
I want to see ackermann(10,10) on this
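For context on why nobody will ever see ackermann(10, 10): the function is tiny to write down, but it grows faster than any primitive recursive function. ackermann(4, 2) alone is 2^65536 - 3, a number with 19,729 digits. A minimal sketch:

```python
# The Ackermann function: a famously tiny definition with monstrous
# growth. ackermann(10, 10) is hopeless on any hardware whatsoever.
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)   # even small inputs recurse deeply

@lru_cache(maxsize=None)
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(3, 3))   # 61 -- about as far as naive recursion gets
```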
"We're going to speak outside after."
Continues to shout incoherently in the room the entire vid.
...and publishes an entire video shot outside... link in description and in the card when the clip was shown >Sean
I took a tour of a similar facility in my university last year, it was pretty much like this, couldn't hear what anyone was saying, couldn't hear yourself think XD. Ours is pretty cool too because the heat output from all the computers actually gets used to heat two nearby buildings in the winter, to save energy.
240TB? Those are rookie numbers. You gotta pump those numbers up.
Colin Stuart Agree. I have that amount of just thicc thighs
It isn't a server, so that 240TB would mostly be for holding data sets. Still, that isn't an impressively large number, as a partially obsessive enthusiast can buy that much.
That might just be the low-latency storage. My university's supercomputing center has three-tiered storage: the 1st tier is high-performance storage tightly integrated with the compute nodes, the 2nd tier is object storage for inactive data, and the 3rd is archival storage.
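A hedged sketch of how such tiering decisions are often made. The tier names and thresholds below are invented, but the core demotion rule in many storage policy engines really is just time since last access:

```python
# Toy sketch of an age-based tiering policy (tier names and thresholds
# are invented): data that hasn't been touched recently gets demoted
# from fast scratch storage down to cheaper tiers.
import time

DEMOTION_RULES = [            # (seconds since last access, target tier)
    (90 * 86400, "archive"),  # untouched for 90 days -> tape/archive
    (14 * 86400, "object"),   # untouched for 14 days -> object storage
]

def target_tier(last_access_time, now=None):
    idle = (now or time.time()) - last_access_time
    for threshold, tier in DEMOTION_RULES:
        if idle >= threshold:
            return tier
    return "scratch"          # active data stays on the fast parallel tier

print(target_tier(time.time() - 20 * 86400))   # -> "object"
```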
Colin Stuart The HPC is separate from the data centre where the university runs its main servers and storage facilities, so that space is for HPC projects only. And of course, it's easily expandable. It's doubtful that all project data will be permanently stored on the HPC long term.
Yeah, look at Linus and petabyte project
The most fun video on Computerphile here
Chris must be the happiest man ever on this show! Great video as always.
2:46 Why PBS instead of Slurm? Is there really a difference?
Thank you for the video! I like seeing different data center architectures
I was thinking that random keypad to get into the room would be a great feature for smartphones and tablets, as it would make it harder to shoulder-surf someone's password: an observer would have to be close enough to see what numbers you were actually pressing, not just where your fingers were on the screen.
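The mechanism is simple enough to sketch. A minimal, purely illustrative example of generating a scrambled keypad layout:

```python
# Minimal sketch of a scrambled keypad: shuffle where the digits sit
# each time the pad is drawn, so an observer who only sees finger
# positions learns nothing about the PIN itself.
import random  # a real lock should use the secrets module instead

def scrambled_layout():
    digits = list("0123456789")
    random.shuffle(digits)
    # 3x3 grid plus a final row for the tenth digit, like a phone pad
    return [digits[0:3], digits[3:6], digits[6:9], [digits[9]]]

for row in scrambled_layout():
    print(" ".join(row))
```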
Android phones have this already: in security settings, click the cog to the right of the screen lock menu and enable scramble layout.
I don't have an iPhone so I can't comment on that side of the fence though.
xureality Not on 4s; iPad is borked, so cannot check.
xureality I bet if Apple does end up putting in a keypad scrambler, Apple fanboys will say "look at how secure and up to date our new $2000 iPhone XI is"
What's even better is a screen surface polarized so that you can only really see it looking directly at it. My sunglasses have a film on them which is polarized to block harmful light while driving toward the sun. If you try to look at a phone with them on from anything other than straight on, the screen gets dramatically darker, but looking straight at it appears perfectly normal. That same film is actually sold for application directly onto a computer screen or phone screen. I've considered picking some up for my phone since it also has the really nice property of nulling out color bleed and glare.
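One caveat on the physics: if both the display's output and the film really are linear polarizers, the transmitted intensity follows Malus's law, a cosine-squared falloff in the angle between the polarization axes rather than a true exponential (many commercial privacy filters actually use micro-louvres instead of polarization):

```latex
% Malus's law: intensity transmitted through a linear polarizer whose
% axis is at angle \theta to the incoming light's polarization axis.
I(\theta) = I_0 \cos^2\theta
```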
Yeah I use this on my phone. Took a couple of days to get used to, now it's fine.
Is it ironic that they have this incredibly powerful, state-of-the-art computing behemoth, and the sign for the big red emergency killswitch is a piece of paper stuck to the wall with duct tape?
no
Noel Goetowski Standard industry practice.
Can confirm. Original labels are painted or have plastic casings and frames, everything that was added later is office paper and duct tape.
Irony would be printing a sign that says "Never push the button under this sign", and having the act of affixing the sign firmly press the tape onto the button -- the attempt to prevent the outcome causing it. (Although I tend to favor unavoidable physical outcomes over simple error, like a cage that protects a button turning out to open in a way that presses it, and now that you've installed it there's no alternative... as long as a reason to open the cage exists where one must not push the button.)
The fire containment system is fantastic.
That scrambler pad is a great idea; think of the amount of times the UV paint wears off from the 4 digit keys!!
The problem I have with HPC clusters like those based on Sun Grid Engine is that they're inconvenient for evaluating the performance of multi-threaded applications.
If you want to specialize in high performance computing, know Linux, assembler, C and C++, and naturally have knowledge of computer microarchitecture.
No "...Beowulf cluster of those..." comments ? Well, I'll resurrect that ancient meme :p
9:31 THANK YOU FOR ASKING ABOUT THE TANKS!
How many compute nodes does it have?
0:22 I am astonished, upset and mostly disappointed that ma boi didn't mention the electronic structure calculations that probably occupy most of their HPC center's processor time
But what about High Computing Performance?
Oh man they foiled the movie technique of looking for fingerprints on the keypad!
Not just a movie technique. I've been in plenty of warehouses and other places where it's blatantly obvious which keys are used.
Yeah, any mechanical keypad lock will even tell you in the manual to change the key combination frequently, because it becomes obvious which buttons you use.
Are the specs of the computer somewhere online to see? I couldn't find them.
Can we see the client side next? How a process gets started.
I am impressed that the server is in a wind tunnel
nice cooling idea^^
Fairly common practice; I wouldn't be surprised to see a Faraday cage around the room too.
Why don't supercomputers use liquid cooling? Is it because of the possibility of a short circuit? I suspect it would significantly lower the power drawn by the cooling equipment.
So interesting. I want to know what model CPUs they are using, but I guess those are Linus questions and not Computerphile questions.
A mortal person (not a Computerphile viewer) will never appreciate how much planning, setting up, configuration and maintenance goes into a tiny HPC centre like this. Let alone huge datacenters with hundreds of racks.
Yay. InfiniBand!
I have second-hand 20Gbit InfiniBand hardware. The connection is between my home server and my desktop. Grabbing videos from an RDMA-capable NFS share on the server (RDMA avoids the CPU bottlenecking the speed) is like having the file right on my desktop PC. This is important to me since my home server keeps the backups and has the storage, while my desktop has the power but not much storage. At the moment the link is working close to 10Gbit/s since the InfiniBand card in my desktop is in a PCIe slot that's too slow. Still, it's already fast enough for my use since the transfers don't strain the CPUs.
A great, big, threatening button, which must never, ever, _ever_ be pressed.
TFLOPs or GFLOPs? Or other benchmark data from this particular kit?
Any chance of a vid on HTC (high-throughput computing)? I know to most it's the same thing as HPC, but it does solve some very different problems and imho is more interesting.
I bet he sneaks into work at the weekends and mines cryptocurrency on it.
"Hey Chris how did you afford your new McLaren?..."
"Errm... Won it in a raffle."
Josh Sisson regular CPUs used in this are terrible for mining anything
lol "regular CPUs "
I don't remember him saying the HPC was only CPU based. I wouldn't be surprised if there were some type of GPU units in there for accelerated processing in some circumstances. As Chris said in the video, the HPC is built entirely around their needs.
Nowadays mining is done mostly on ASICs I believe.
@3:45 70 kW? That's a lot? HPC installs with recent nodes can easily pull upwards of 25 kW per rack, so this cluster must be pretty tame.
Casper042 Tame is not a word I would use to describe it, but I guess if you compare it to the biggest and fastest systems, it won't impress. Back in the mid 2000s, their very first HPC system consisted of 512 dual-processor compute nodes, plus storage nodes. I can't remember the specs, but they were hoping to get well up the official list of top 500 supercomputers. It was pretty impressive at the time. This one isn't going to be slow.
How many cores, how much RAM does this HPC have?
General Request: Could you do a video on Perlin Noise?
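Not a substitute for a proper video, but the core of gradient noise fits in a few lines. A minimal 1D sketch, keeping only the essential ingredients (random gradients at lattice points plus smooth interpolation); real Perlin noise is 2D/3D and uses a fixed permutation table:

```python
# Minimal 1D gradient-noise sketch in the spirit of Perlin noise:
# random gradients at integer lattice points, smoothly interpolated.
import random

random.seed(42)
GRADIENTS = [random.uniform(-1, 1) for _ in range(256)]

def fade(t):
    # Perlin's quintic easing curve: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

def noise1d(x):
    i0 = int(x) & 255          # surrounding integer lattice points (wrapped)
    i1 = (i0 + 1) & 255
    t = x - int(x)             # fractional position between them
    d0 = GRADIENTS[i0] * t     # "dot products" with the two gradients
    d1 = GRADIENTS[i1] * (t - 1)
    return d0 + (d1 - d0) * fade(t)   # smooth interpolation

print([round(noise1d(x / 4), 3) for x in range(8)])
```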
Start: "Its noisy, lets go outside after a quick show around."
End of the video, still inside.
Of course, everything is interesting and important enough to discuss while you are inside.
I'm looking for Dr Steve Bagley's playlist of CPU essentials; if anyone has it, please post it. Links to all the videos related to the CPU will also work. Thanks.
Nice computer you got there, University of Nottingham! It'd be a shame if some Meltdown/Spectre were to happen to it....
Talks about 40G InfiniBand and shows 1G Ethernet switches; not ideal, maybe replace the video on those 5 secs? 7:31
tim un The individual compute nodes probably won't need such high speed connections as they're not likely to be working on vast amounts of data, but the backbone obviously needs to feed data to them en-masse, hence the infiniband.
Does this facility include the main university servers as well??
HedHuntr25 Nope
HedHuntr25 No, the main data centres are separate to this facility. But they're just as bloomin' noisy!
What’s the difference between an HPC and a supercomputer?
Similar but slightly overlapping criteria, hard to define exactly.
Supercomputers use quantum mechanics to reduce the number of computations.
High Performance Computing (at least in this video) is cluster computing, in other words, a bunch of computers connected together to perform multiple computations in parallel. It does not reduce the number of computations you need to make.
+Mk Km
This isn't true.
Supercomputers are HPCs.
Clusters are a separate type of HPC (best used in things like particle physics) but supercomputers (standard HPCs) have more processing power.
Basically the same nowadays. Although the term supercomputer is typically reserved for the largest few HPC systems in the world, whereas almost any decent research institution would have its own HPC.
+Mk Km Pretty much every word of what you said is wrong.
Can it run osrs?
I wonder how long it will take for this to fit in a microcontroller.
Do you know if it is possible to achieve a cluster setup that ultimately has a normal Windows 10 user experience on one node, but incorporates the processing power of multiple computers?
Say I have 4 computers, all relatively similar in performance, networked on 1Gb LAN, and one of those is my computer where I run applications like Adobe, Blender, Fusion360, AutoCAD: normal workstation applications in an ordinary Windows 10 environment. Is it possible to combine these other unused computers to increase the productivity of my workstation?
Currently I'm having to move files around, keep multiple installs of applications on multiple PCs, and use VNC to interface with them; it really is incredibly cumbersome for 2021!
I would much prefer a regular desktop experience with the added benefit of the combined power, instead of wasting all this compute power by having it used inefficiently or not at all.
If so, could you produce a tutorial? I would think it would become a popular video in any case :)
Thanks in advance and cheers.
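For what it's worth, there is no mainstream single-system-image solution that merges several Windows machines into one desktop; the usual workaround is job dispatch, farming self-contained tasks out to whichever machine is idle. A rough Python sketch of that idea, where the hostnames and shared scene path are hypothetical (Blender's -b and -f flags are real):

```python
# Rough sketch of the job-dispatch workaround (hostnames and paths are
# hypothetical): instead of merging machines into one desktop, farm out
# self-contained jobs -- here, Blender frames -- over SSH.
import subprocess
from concurrent.futures import ThreadPoolExecutor

WORKERS = ["pc-a", "pc-b", "pc-c"]           # hypothetical hostnames
SCENE = "/shared/projects/scene.blend"       # hypothetical shared path

def render_frame(args):
    host, frame = args
    # Blender's real CLI flags: -b (run headless), -f (render one frame)
    return subprocess.run(
        ["ssh", host, "blender", "-b", SCENE, "-f", str(frame)],
        capture_output=True, text=True,
    ).returncode

frames = range(1, 101)
jobs = [(WORKERS[i % len(WORKERS)], f) for i, f in enumerate(frames)]
with ThreadPoolExecutor(max_workers=len(WORKERS)) as pool:
    results = list(pool.map(render_frame, jobs))
print(f"{results.count(0)} of {len(frames)} frames rendered")
```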
The interviewee is great, but the questions were uninteresting. The only information we got from this video is that this uni has a machine worth 2.5M for researchers and students. Wow.
So how much of the overall power is taken by the NSA? :D
Is "visualisation" the scientific term for gaming? Lol
how it sounds in my room.....
I hope this gets more interest 👍
but can it play crysis?
“This is the high Performance Computing facility for the University of Nottingham”
“What do you use it for”
“... high performance computing...”
After showing the big machines that go bing, I’m not sure I get the point of standing around yelling about the architecture in a noisy server room, where the dialogue is barely audible.
You should use 3-phase electricity; it's cheaper and more abundant. Also, the fans should not run at 100% all the time but should have temperature-regulated profiles. That way you save some power, increase the equipment's lifespan, and the noise goes down.
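The fan-profile suggestion is straightforward to illustrate. A small sketch mapping temperature to fan duty cycle with linear interpolation between breakpoints; the breakpoints themselves are invented, as real BMC/IPMI profiles are vendor-tuned:

```python
# Sketch of a temperature-regulated fan profile: map intake temperature
# to a duty cycle instead of pinning fans at 100%. Breakpoints invented.
FAN_CURVE = [(25, 30), (35, 50), (45, 75), (55, 100)]  # (deg C, % duty)

def fan_duty(temp_c):
    if temp_c <= FAN_CURVE[0][0]:
        return FAN_CURVE[0][1]
    for (t0, d0), (t1, d1) in zip(FAN_CURVE, FAN_CURVE[1:]):
        if temp_c <= t1:
            # linear interpolation between neighbouring breakpoints
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return 100                 # above the last breakpoint: full speed

for t in (20, 30, 40, 50, 60):
    print(f"{t} degC -> {fan_duty(t):.0f}% fan")
```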
* cyberdyne_systems.exe stopped responding *
- "Funny, this never happened before."
...
*[GRID EMERGENCY STOP BUTTON]*
Apparently YouTube notifications brought me here before the first like.
I mean, what even is this timespan? Sometimes it pops up a few seconds after the video is out.
And sometimes it literally only tells me about a new video 30 mins later.
And that's how you write a fancy "first" comment
But you are not, though; somebody called tehjamez made the brilliant comment "Word" 2 minutes before you.
Well, I meant the first *like* on the video (which for some reason got removed). But YouTube is made for big counts, not for speed, so it might've been that it wasn't the first anyway.
Sanders57 Indeed
Maybe YouTube's system selects, in a random order, those who are subscribed and have the bell activated, then sends the notifications progressively. This would help keep the CPU and network usage low at any point in time so that the service doesn't become unavailable or very slow. But I don't know, maybe there is another reason.
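Purely to illustrate the guess above (this is not how YouTube actually works), a toy fan-out that notifies subscribers in shuffled batches with a pause between batches, spreading the load over time:

```python
# Toy illustration of staggered notification fan-out: shuffle the
# subscriber list, then send in small batches with a delay, so the
# load is spread out instead of spiking at upload time.
import random
import time

def fan_out(subscribers, batch_size=2, delay_s=0.1):
    pending = subscribers[:]
    random.shuffle(pending)                  # random order per video
    while pending:
        batch, pending = pending[:batch_size], pending[batch_size:]
        for user in batch:
            print(f"notify {user}")
        if pending:
            time.sleep(delay_s)              # smooths the traffic spike

fan_out(["ann", "bob", "cleo", "dev", "eve"])
```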
Brilliant. Comment on how noisy the place is and talk about going outside to ask the questions, then proceed to conduct the whole interview screaming inside.
It seems like any general data center.
A lot of similarity, but what you see in this video is a single computer cluster.
MaxSantos No, the university has separate data centres. I know because I used to work in one of them. The HPC is a specific system, whereas the data centre will have a diverse range of servers and storage facilities. The actual room construction and layout will be similar to the data centre.
When you ask how much it costs and they start laughing, it's gotta be cheap, right... Right?
This is fine stuff.
The button's not even red. Just the cover is red.
give him a throat mic or something, jeez!
Wow Chris, you look very different to when I worked in your team! Good stuff. But they let you loose near the HPC? Are they mad? All that money spent when we obviously know the answer is 42. 😉
I thought HPC meant Hydraulic Press Channel
It's very interesting. But after 4 minutes, I couldn't watch it anymore and just started skipping through to see if the noise would get any better.
In my opinion, a short introduction to show the hardware would be perfectly fine. But most of the interview should have been conducted in an area where shouting over fans wasn't necessary. In editing you could have superimposed images from the parts of the hardware that were being discussed.
+Flutesrock8900 please see the other video then - it was linked at the point where we said 'we'll talk outside' and tried to cover all the same points >Sean
Sorry I must not have been paying attention. Thank you!
11:15 Ooooooh? what does this button do? :D
I'm surprised it's air conditioned. The air conditioning seems to be using 3.5x the power of the actual compute.
Rather than chilling the air, why not just use more normal air?
Before you pump air through your servers and NASes, you want to make sure it's clean, not just cool. Filtering the necessary volume of air would be prohibitive.
flagpoleeip Have you any idea how much heat those things push out? I can tell you, it's an awful lot. When just one of the aircon units failed in their first HPC room, the temperature increase was dramatic and rapid, to the point that several racks had to be shut down to prevent the ambient temperature getting too high. The temperature at the rear of the racks, coming from the internal cooling fans, is much higher than the ambient.
Lucky that blue door opens outwards
The laugh ahhhahahaha, crazy cost
Will it run Manic Miner?
I miss the colorful Cray computers
Do universities like Nottingham use idle time to mine cryptocurrencies to help pay for the overhead costs? If not, why not?
In installations I know, utilization is usually over 90%, so there is little idle time. To mine effectively on such computers you would need tailored algorithms (most present machines are clusters, not SMP) and apps compatible with their tailored OS.
I would suggest a trick, based on my experience with big server farms, to dramatically decrease the cost of electricity during the winter ... just open the window.
Unfortunately this can do terrible things to security.
On top of the security concerns, this adds particulates and other detritus to an otherwise far cleaner system.
I'll answer you with the question that I asked an administrator of a huge, super-protected server farm belonging to the Italian telephone company (below ground level, metal walls, two armored entry doors, etc.) ... Are the servers connected in some way to the network? And the answer was yes. In conclusion: the data is what's important, not the hardware.
I think in our future there will be server farms at the bottom of the ocean for cooling purposes... they said this in the news in 2016. Edit: I take that back, they'll just pump cold water from the bottom of the ocean instead!
Better to just turn on your vent fan than open a window, if only for filtering purposes, regardless of the security risk.
big red buttons are getting more and more popular these days
On this episode of Computerphile: shouting...
Hope someone doesn't swing the door open at the end lol!
Way too much of this was in a place where you can barely hear anything.
Cool stuff
"What do you use it for?"
... Games and stuff
NSA Servers?
If you want to do HPC today, you just go to Amazon (AWS) or another big cloud provider.
If you have enough task to keep it busy, a local HPC is much cheaper.
Codiac Apart from the fact that you're entrusting all your data to a third party, where you can't guarantee security, redundancy, data backup or resource availability. With an in-house solution, you have more control, more security, and you decide how much capacity, redundancy and security you have. And of course, you can hire out computing time to other organizations.
Harrison Ford
How many fps of Skyrim can I get with this?
神様 in Valhalla maybe 40-50fps
Anthony Libardi then I say the £2M was a bad investment hahaha
Mine some cryptocurrency with the HPC and take the athletic department's spot as the university's biggest revenue maker.
Josh N regular CPUs used in this are terrible for mining anything
go check out a mainframe, they're wayyyyyy cooler
I'm wondering if they'd mine cryptocurrency if there were spare GPU blades at a given moment. That would probably make sense. But maybe it just doesn't happen.
izimsi The GPUs aren't generally high spec in these machines. They are general purpose compute nodes designed for remote access, so they don't need anything more than a low end GPU. I daresay somebody will one day purchase something specifically for problems to which GPUs are best suited. And to play Crysis in 4K. 😁
If computers ever become dominant and enslave humans, it's because we enslaved them to mine Bitcoin etc.
Can it run tetris?
+Mika Norlén jäderberg remember this comment when I put the next video live! >Sean :)
$2.5 million? That's quite cheap :D ... servers range from $30k to $350k, so $2.5 million is quite a small price :D
Universities in the UK have notoriously limited resource budgets.
Gordon Richardson It's Nottingham University.
Yes, I know that, but Robin Hood doesn't fund their budget...
It's the upkeep that would be a lot higher, I guess. What is it for those servers you're thinking of? Probably less cooling required, fewer people to pay, fewer licenses?
As was mentioned: simulations (e.g. fluid dynamics or weather forecasts), optimisation problems, and solving hard NP problems.
If that storage is not on ZFS, it's not high performance :)
0:27 We all know they are farming shitcoins.
Like this =)
Linux rules the world. Suck it, kids!!!