🔒 Remove your personal information from the web at JoinDeleteMe.com/DROID and use code DROID for 20% off 🙌 DeleteMe international Plans: international.joindeleteme.com
Lol, the first Linux supercomputer was built in June 1998, and it's called the Avalon cluster.
Quantum-computer-proof encryption methods are already a thing.
In 1992 I worked for a company that would process geophysical data for oil exploration using a Cray supercomputer. At the time it was considered the most powerful computer on Earth. Flash forward to 2024 and the phone you might be holding in your hand can process the same data that would take hours for the Cray to do in seconds.
I think you hit the nail right on the head. There will always be a need for discrete supercomputers as long as there are computational matters of national security.
I didn't know that Cray was still in business!
My first exposure to Cray supercomputers was in the book Jurassic Park, where they used them to help with genetic engineering. It's crazy that my laptop has more computing power on tap than that supercomputer from the 1990s.
And I use my laptop to watch cat videos...
They're different entities. The original Cray was bought by SGI in 1996.
Yeah, they were acquired and carried on by Hewlett Packard Enterprise after it split from HP.
Oh, there are still many workloads that require massively parallel processing. Problem is most of them are NDA’d or classified, you cannot be told about them.
Wait, let me just run a full weather simulation for the next few days on my phone...
You totally can. Particularly the weather simulations they were running in the 90s. Probably even early 2000s. Maybe even just 10 years ago given how GPUs have progressed.
@tracyrreed True, but not the ones that go out to 100 years.
@engineerinhickorystripehat Those are climate models, and they are less complex than weather models. Climate models don't predict when and where it will rain, just general trends.
Data handling will be the issue. You need to handle hundreds of terabytes.
Brilliant video, as usual Curious Droid!
I still think about those commercials of the Apple PowerMac G4 touting it as the supercomputer for the home.
I worked at two facilities in Research Triangle Park, North Carolina, that had Cray computers. The first was MCNC, probably 1989-90. I think it was a Cray-1. Later I worked as a contractor for the EPA for 11 years; they had two Crays. I don't really remember what shape or model they were, even though I spent an enormous amount of time in the room. It was a pretty cool experience!
Long live the WOPR.
Colossus: "There is another system."
As a kid, I dreamed of having a Cray Y-MP (instead of my Commodore-64.) I knew nothing about the power supply and cooling demands, of course.
Thoughts of using it to do all my homework, run hyper-realistic versions of Super Mario Bros. & Aliens, and host the largest BBS on the planet filled my imagination. Especially because it would make me indisputably *_the_* most popular guy at school!
You accidentally called the 6800 the "sixty eight thousand" - which to be fair I'd likely do myself given the prevalence of that chip!
Maybe it was a typo and one zero was omitted. ;)
01:50 Wall-E's grandpa found!
There's one thing that hasn't changed. Temperature down, clock speeds up.
Supercomputers simulating nuclear explosions? ... Now THAT is REALLY useful, and definitely improves all our lives!
Thanks to the pioneering work done on supercomputers, we can all be efficiently annihilated in the blink of an eye, and the Earth reduced to charred cinder hanging in space!
Yes, thank God for the miracle of the supercomputer, and the REALLY useful research they carried out!
Michael Crichton mentioned Cray, and the movie mentioned Thinking Machines supercomputers, and now it all makes sense. Thank you so much!
Perfect... just a perfect video, as always. Merry Christmas!
As always, a perfect explanation, easy for anyone to understand. Thanks, Droid!
Brian Thompson was a happy client of Delete Me.
A very effective service, I must say!
you both smell like bots
@manitoba-op4jx You can smell me from there? Oh gosh, I should have a shower...
When I started in mainframe computing in 1981, the comparison was absurd. Then the $6M computer I worked on in 1991 would nearly match the desktop I am typing this on today.
Thank You Sir for this free and good knowledge.
None of these can run Crysis on full settings though.
10:22 As a kid in the 80s watching movies like Tron, this image is what I thought future computers would look like.
Wow! I learned a lot here. Thank you!
Quantum hype! Quantum hype! Quantum hype! Let's go!
Just an additional bit: TechTechPotato (I think that's how it's spelled) recently did a video on a new chip for scientific/supercomputing. While the tech focus of the day is AI, there are still new developments meant for scientific applications.
Talking about quantum computers as if they will exist; yes, and when they do they will be powered by fusion reactors
Quantum computers do exist
I got to see a Cray at the Computer History Museum in Mountain View, California, one very similar to the first picture you showed. That was a great visit, if anyone is ever in Silicon Valley.
I like your quantum shirt;-)
42. I couldn't help thinking about Deep Thought.
Those who haven't read the Hitchhiker's Guide won't relate.
1:00 Cat? 😸
Bird?
I prefer "Clown-based Computing" over Cloud Computing.
So, Azure, then?
@halfsourlizard9319 Either way you're relying on a giant corporation, right?
@8:14 I don't know about the flop number, but Shell, in collaboration with IBM, built a Linux supercomputer (Red Hat) for geological surveys as early as the year 2000.
It started off with about 1,000 IBM X-Series nodes (pizza boxes).
When I joined about a year later, it comprised almost 2,000 nodes.
And when I left, I think it must have been over 3,000 nodes.
Found it:
"Linux, the free computer operating system, is expected to win another high profile victory on Tuesday when Anglo-Dutch oil company Royal Dutch/Shell (RD.AS) will announce it is going to install the world's largest Linux supercomputer.
Shell's Exploration & Production unit will use the supercomputer, consisting of 1,024 IBM X-Series servers, to run seismic and other geophysical applications in its search for more oil and gas."
I remember when Beowulf clusters started. They put one together at a show once and it ranked in with the top supercomputers of the time. It was just a few shelf units of off-the-shelf (and back-on-a-shelf-again) PCs networked together.
Someday, we'll be able to make a computer that can perform dozens of calculations every second.
The question is not how many petaflops can be done but how many petabytes must be loaded into the supercomputer and processed. Getting enough data to keep a supercomputer running at full steam is quite hard. Where possible, problems are reformulated to use less data. But when the problem requires huge amounts of data - such as training an AI - you need a complete architecture change and go to distributed systems.
Depends on the workload ... some things are compute-bound; other things are IO-bound.
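To put rough numbers on that compute-bound vs data-bound distinction, here's a back-of-envelope sketch in Python. The peak figures are made-up assumptions, purely to show the roofline-style check, not any real machine's specs.

```python
# Rough roofline-style check: is a job compute-bound or data-bound?
# The peak numbers below are illustrative assumptions, not real machine specs.

PEAK_FLOPS = 2e15        # assumed peak compute: 2 petaflop/s
PEAK_BANDWIDTH = 5e13    # assumed data bandwidth: 50 TB/s

def bound_by(total_flops: float, total_bytes: float) -> str:
    """Classify a workload by its arithmetic intensity (flops per byte moved)."""
    intensity = total_flops / total_bytes          # flops per byte the job needs
    machine_balance = PEAK_FLOPS / PEAK_BANDWIDTH  # flops the machine can do per byte it can move
    return "compute-bound" if intensity > machine_balance else "data-bound"

# A dense simulation: lots of maths per byte, so the petaflops matter
print(bound_by(1e18, 1e15))   # -> compute-bound
# An AI-training-style pass streaming a petabyte with little reuse, so the data path matters
print(bound_by(1e16, 1e15))   # -> data-bound
```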
I can tell that my man Curious Droid was and may still very well be into the EDM scene
just a hunch :)
Y'know, if there were a way for people to enlist their home PC in some distributed computing project, people could get paid to heat their homes.
Mine crypto.
SETI@Home
Folding@Home.
@andyalder7910 Doesn't come close to covering energy costs 95% of the time.
@c1ph3rpunk @bertblankenstein3738 They don't pay at all.
What is or isn't a supercomputer?
You talk about CRAY and the Vector Processors, and the use of GPUs in powerful computers today.
The distinction, as far as I understand it: a server, or large general-purpose computer, like Amazon Web Services or Microsoft's Azure would use, has many cores, runs thousands of virtual machines, and may have GPUs present for certain classes of work, but it doesn't have 8 or 16 full-scale GPU units for each physical CPU.
A High Performance Computer ~ a supercomputer for our purposes ~ tends to have blades or units that combine one physical CPU and 4, 8 or 16 GPU processors stuck on a single package the size of a postal envelope. You line those up vertically in racks and repeat. The hard part ~ the trick ~ is the connections between them, the internal 'networking'. This high-speed, low-latency communication is (I understand) what AMD bought Xilinx for. For reasons I don't quite get, their Field Programmable Gate Arrays are streets better than anything else yet invented when it comes to communications in a high-performance computing system, like El Capitan for example. It's not just about having stupendous levels of parallelism ~ you also want to be able to do inter-process communication at high speed and low latency. They do have a secondary use, to place some 'processing' on a network card, so certain extremely time-sensitive tasks can be done right there on the network card and a reply got out in microseconds. When you're dealing with an automated / programmed stock-trading algorithm, you want to be able to buy or sell faster than anybody else, and that's something those brain-in-a-network-card things are better at than anything else.
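To see why that low-latency interconnect matters so much for tightly coupled work, here's a minimal MPI ping-pong sketch. It assumes mpi4py and an MPI runtime are installed, and the file name is just an example; run it with something like `mpirun -n 2 python pingpong.py`.

```python
# Minimal MPI ping-pong: measures round-trip latency between two ranks,
# the kind of number the interconnect (not the FLOPS) decides.
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1, dtype="b")   # 1-byte message: we're timing latency, not bandwidth
reps = 10000

comm.Barrier()
start = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
elapsed = time.perf_counter() - start

if rank == 0:
    print(f"avg round-trip latency: {elapsed / reps * 1e6:.1f} microseconds")
```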
That supercomputer in the thumbnail looks like one of Astro Bot's ancestors.
Great video, I'm a longtime viewer and really enjoy the short, easy-to-digest bits of aeronautical/space flight history. Shame about the sponsor though; if I didn't really enjoy the content I'd put in a report. I'll settle for a dislike, but I'd really prefer that people wouldn't try to push unethical flim-flam.
Vector processors do SIMD parallelism: one instruction operates on multiple pieces of data in parallel. This is different from the type of parallelism provided by having multiple processors, which allows each processor to execute a different program. That is called MIMD parallelism: Multiple Instruction, Multiple Data. In its most powerful form, each of these independent processors might itself provide SIMD parallelism (there's a toy sketch of the distinction after this comment).
Are supercomputers still relevant? Show me your problem and I'm going to answer :-)
Modern supercomputers are crazy systems. They might consume 20 or 30 MW of power - sometimes much more than even El Capitan with its 50 MW - and memory sizes are now marching towards a petabyte, the unit after terabyte. I was playing with systems capable of 1 TB of RAM in the last millennium, and 1 TB for most users today is still a metric sh*tload of memory. Disk space can be in the exabyte league, the unit after petabyte. That should give an idea how far ahead supercomputers are in terms of number of CPUs, GPUs, sometimes custom accelerators, memory, disk, and networking. Some of these systems are one-off systems built especially for certain applications; some were built as part of research programs. As they are very expensive - the most expensive system I know of cost more than 1 billion USD - these systems are very carefully sized for the needs of a specific use case and, obviously, the size of the customer's wallet. For example, a user might need huge memory but have little need for many processors because the software can't exploit much parallelism.
The best hardware is useless without software. Often users of supercomputers write their own software because there just is no off-the-shelf solution available for their specific needs.
Training AI models is the latest use case for specialized supercomputers. A further increase in system size is expected.
Cooling is funny - for some of the larger systems it's so critical that an orderly system shutdown is barely possible before the system overheats after an air conditioning failure. I don't mind if it's cold, but the air conditioning in such computing centres feels like a nasty storm in fall. I'm a software guy; I do most things remotely and avoid such rooms. People working in such places like to wear thick pullovers.
Most supercomputers have been running Linux for many years. In the past 20 or so years there were times when not a single system in the Top 500 list of the fastest systems on Earth was running anything other than Linux. More recently a few systems are running special OS software because the hardware is so unusual it couldn't run Linux.
Nor could Cray vector computers, btw.
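As promised above, here's a toy Python stand-in for the SIMD vs MIMD distinction. NumPy's element-wise operations play the role of data parallelism and multiprocessing plays the role of independent processors, so this is only an analogy to real vector hardware, not a model of it.

```python
# Toy illustration of the two kinds of parallelism: NumPy applies one operation to a
# whole array at once (SIMD-flavoured data parallelism), while multiprocessing runs
# independent workers, each free to execute its own code on its own data (MIMD-flavoured).
import numpy as np
from multiprocessing import Pool

# "SIMD": one instruction, many data elements
a = np.arange(1_000_000, dtype=np.float64)
b = np.sqrt(a) * 2.0          # the same operation applied element-wise across the array

# "MIMD": many independent tasks, each its own little program
def simulate_cell(seed: int) -> float:
    rng = np.random.default_rng(seed)
    return float(rng.normal(size=1000).sum())   # each worker does its own work

if __name__ == "__main__":
    with Pool(4) as pool:
        results = pool.map(simulate_cell, range(8))
    print(b[:3], results[:2])
```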
You might want to have a look at Elon Musk and xAI's Colossus cluster. It's used to train the Grok AI, which has just been made free to use by all X users. I'm pretty sure he called it Colossus as a nod towards the 1970 film Colossus: The Forbin Project. Back in 2017 Musk commented about that film, "not the ideal ending". NVIDIA called it "the world's largest, most-powerful supercomputer" back in October 2024, and it apparently has 200,000 NVIDIA Hopper GPUs.
A December 2024 update from Capacity Media: an announcement at the Greater Memphis Chamber Annual Chairman's Luncheon suggested that xAI plans to expand its Colossus site to incorporate "a minimum of one million GPUs".
To put it into perspective, El Capitan, which recently surpassed Frontier as the world’s most powerful supercomputer, houses 43,808 AMD 4th Gen EPYC CPUs and 43,808 AMD Instinct MI300A GPUs. Frontier, meanwhile, features around 38 thousand GPUs - meaning xAI’s Colossus would be over 26 times larger.
😮
Now we know why we can't get cheap video cards.
Let me check *google and daddy and US of A pointy rocket MoD says yes so I'll say...*
Wow, SUN computers. I haven't heard that name in a long time. What happened to them?
Bought by Oracle
@TheIceGryphon Where assorted ideas go to die ... like IBM before them and Microsoft after them ...
@TheIceGryphon Ohhh, I see. Thanks.
I, for one, welcome and am anxious to be of service to our coming quantum overlords.
They are. Even when quantum computing works out the kinks - mainly its insanely delicate sensitivity and error rates - conventional computing will still be necessary. Multiverse Willow or no!
Exactly, it's not gonna do what the general public thinks it will.
How does my (newest) MacBook Pro compare with a Cray-1 supercomputer?
Quantum computers designing computer AI architectures... Yeah... I feel really safe...
Can you do a video on running quantum computers in space and whether they would be worthwhile?
Quantum computers are still proofs-of-concept. None has actually done anything useful yet.
Speaking of power consumption, Microsoft is trying to buy, or has bought, Three Mile Island and is going to start up the nuclear generators again just to use the power for computing.
The last thing I want to hear is Microsoft trying to run a nuclear reactor. The blue screen of Cherenkov radiation death they put out can really kill you.
If you ask my dad we're still using mainframes
😂😂😂
Yeah, things have changed. Now we're reactivating and planning to build nuclear power plants so data centers don't drain the grid dry and drive the price of electricity even higher.
What about the WOPR computer?
Do a report about Willow, Google's state-of-the-art quantum chip that has error correction!
Is that similar to youtube's error correction?
Super Computers were superseded by the Super Duper Computer.
Have you heard of Chat GPT?
Yes but they need to be basically GPU farms. A room full of A100s. Not CPU supercomputers and mostly not consumer grade GPUs.
Oh don't go there, Master Control.
Well, we already know the ultimate answer to life, the universe, and everything.
BELGIUM! I say again, BELGIUM!
Top-end GPUs cost $100K each, and large corporations have tens or hundreds of thousands of them in data centres, also known as supercomputers.
Sure, Google had some supercomputer CPU in the news only a week or two ago.
That was a quantum computer, not a supercomputer.
Yes, yes it CAN run Crysis. About 1,590,000 of them, running in parallel.
They actually made quantum-proof encryption. New video idea?
1,700 "PS3"s (a supercomputer).
Elon Musk just built a new one. The biggest one in fact. It's called Colossus.
Skynet is real...
MOS made the 6502, not MOSTEK.
Great video otherwise!
short answer: yes
Never been in the comments this early. It's mysterious and powerful. He should do a video about it.
I'm gay daddy
I'm sure the artificially-intelligent supercomputing quantum big black beefcake daddies are glad to hear that ... but the rest of us would rather that you stfu ...
As long as aerodynamics is advancing, we will need supercomputers to mimic wind tunnel test results.
Is a cluster of x86 boxes running a commodity OS (Linux) a supercomputer? Then sure, we will have supercomputers. But that also means that I have a supercomputer sitting here next to me. Not a big deal.
If computers are still relevant, then it seems logical that the super variants are too.
Hi guys
Hi all!
Yes, they are more relevant than ever. They are being built all the time.
God I love it when someone reads Wikipedia to me....Oh and some Google links too.
Who's the idiot: The one who reads Wikipedia ... or the one who watches and comments on it?