I'm gonna need more thermal paste.
😂👍
😂😂
It should have its own cooling system, like a freezer!
I hear it's offered in 55 gal drums. 😂
Tubes per chip, rather than chips per tube
This cooking-range-sized CPU actually emits 10x more heat than a typical cooking range. That is just crazy.
you can literally heat your house with it, even in deepest winter :)
@@endeshaw1000 I too am a fan of house heating that can do computation as a side effect
It is about the same as a clothes dryer, which is far less than I thought it would need. 24kW isn't too bad; a basic water-cooling system with a pre-chiller radiator would be fine. The 5V or 3V bussing would be nuts though: the current would be 8000 A at 3V and just under 5000 A at 5V. 😮
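A quick sanity check of those currents (a sketch only; the ~24 kW figure is from the video, and the rail voltages are the commenter's guesses), using I = P / V:

```python
power_w = 24_000                  # claimed total power draw
for volts in (3.0, 5.0):          # hypothetical bus voltages
    print(f"{volts:.0f} V rail -> {power_w / volts:,.0f} A")
# 3 V rail -> 8,000 A
# 5 V rail -> 4,800 A
```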
Best Device for Training AI Cooking
Connect to a floor heating system 😂
The yield is 100%, because if it doesn't work you get a bangin' cool frisbee!
If a manufacturing defect knocks out a single core, you still have 900k minus 1 other cores. The design caters for that.
It's called "catch" because when you don't catch it, the game is over 🙃
@@carstenraddatz5279 If the average yield is 80%, you're going to have 20% of the chip be dead weight. I don't see the benefit of this design over just breaking the wafer down. You're not worried about size or space requirements at that scale.
@@richr161 Worries exist, especially with this type of chip. However, at that scale you are very worried if you are TSMC and only getting 80% yield. Customers won't come back if you don't improve that. Realistically you're aiming for north of 97% yield or so, I hear.
@@carstenraddatz5279 TSMC's yield is literally published: an average of 80%, with peaks of greater than 90% on a leading node.
I'd assume the nodes they keep around for companies who don't use leading edge are in that range, with all optimization going into yield rather than performance.
If only Cerebras hardware had OpenCL support and didn't need its own proprietary language! That would open doors to HPC/simulation workloads way beyond AI.
They do support HPC simulation, right? I do see the Cerebras SDK supporting scientific computing. I'd assume it will need some workarounds.
OpenCL? Vulkan is the real shit ;-)
Moore's law is based on the observation that transistor density used to double every 18-24 months. This product does not even use the latest process. If anything it indicates that Moore's law is no longer applicable. Moore's law was never about performance.
Strictly speaking, Moore's law was originally about the actual number of transistors per chip. Originally the TTL and NMOS and whatever chips were 5x5 mm max, so the chip size was fairly limited. Process generation improvements are of course what kept this cycle going until maybe 2012, but after that it's been a combination of increasing the chip size and shrinking the transistors. Moore's law was never thought to apply to one-square-foot silicon chips.
@@wombatillo Funnily enough, in a 1975 article(?) Moore actually noted that an increase in die size was part of how the doubling of transistor count was achieved
It never was a law and never held.
It's just tech priests moving goalposts because ppl will ooo and ahhh over the lies.
@@wombatillo It never held. Had it ever held, we'd be at subatomic densities by now.
@@Poctyk So his "law" isn't even clearly defined.
Don't think this was what Moore had in mind when he formulated the law 😅
Correct. He imagined a trillion-dollar company limiting users to 64GB of storage in order to push cloud solutions.
@@hrdcpy Good old Apple and its supporters
Bitcoin miners are now selling their waste heat into industrial processes. Datacentres may soon follow suit. They really need a way to recapture the energy costs.
Our monthly reminder that it was never really a "law" in the scientific sense
@@Ang3lUki That's why I always called it "Moore's lore"
24kW through that PAVER of a 'chip' (the term chip was meant for little pieces of silicon, if I recall correctly; we need another name...). That thing needs a proper cooling tower. How does one even route 24kW at low voltage through all that without it going whoosh? That's a feat of engineering proper.
This channel has a video about the Tesla Dojo chip. I'm guessing the power solution is similar.
if it's not a chip it's the whole potato
@@dnmr my potato brain hadn't made that link yet 🤣🥔
It's a SLAB.
@@dnmr But can it run Cyberpunk 2077... 100% path traced?
This is such a cool idea. Never can get over what a wild idea it is to have a die that is a full wafer with the round parts lopped off.
Imagine showing this video to someone 30 years ago.
They would probably approve. It's like a giant AI mainframe.
Imagine showing this video to someone in China today.
Imagine showing it to Alan Turing. It would be like that scene where the archeologists get to Jurassic Park and see actual dinosaurs.
Imagine showing this video to someone 30 years from now in the future?
What someone? John Connor. What future? 1984.
bro is holding the holy grail casually in his arms 😱
and literally taking a bite out it
Lol “bro” at the minimum Dr. Bro.
@@radugrigoras, esquire
That golden cocoa-bar-looking thing is probably more expensive than a normal cocoa-bar-looking thing
A La Monty Python 😂
I need two of those, that way I can have 1 core for each pixel to get 100,000 fps
I need a 900,000 core computer for blackjack and duck hunting, ah forget the duck hunting.
A fellow person of culture, I see. Always happy to see a Futurama reference.
@@jolness1 All I know is my gut says maybe
I can hook you up with a whitebox P133, 256 Mbit of RAM, a 28.8k softmodem, and an advanced Windows 95 OSR2 operating system. I can throw in a parallel-port scanner and an HP B&W letter-quality printer if you like; also built like a tank.
Now we can hunt ducks made of Dark Matter 😏
Nah, half a million should do fine.
I remember at one point in the 90s changing the jumpers on the motherboard to overclock my Pentium from 60 MHz to 66 MHz, but never finding a way to cool it enough to remain stable. At the time I would've been thrilled to have that 10% jump in performance. My brain may have melted knowing that in 2024 I'd have multiple machines (including portable ones!) that are not only multi-core, but run at BILLIONS of cycles per second.
It's crazy to think of the amount of work that goes into creating these, and then to sell 9 or 10 of them a year. It shows how niche the market is for this kind of processor.
Can't wait to see what kind of performance boost the next gen Wafer-Scale Engine 4 will bring us!!🤤 Imagine that it will be using 2nm Forksheet GAA or 1nm CFET tech
I asked. Was told to wait
@@TechTechPotato hold yer horses potato man they says!
Considering they went from 7nm to 5nm for WSE-3, the next logical step will be TSMC 3nm for WSE-4
Peter bites? He’s never bitten me.
Hey Lois, this reminds me of that time I made an AI chip out of a whole wafer. hehehehehe
I can store my home movies at last.❤
😂😂😂
Yesterday when this news broke I looked for a video on it; couldn't find one, so I watched your vid on WSE-2. Now today you deliver the news on WSE-3. Nice work
I do remember Cerebras claiming that their WSE-2 was better than Nvidia's offering at the time, but Nvidia seems to have the most hype out of all the companies involved in AI. My understanding is that having all the processing units on one massive wafer just makes everything move faster, instead of having many discrete GPUs connected together.
It uses a lot of power, but the sheer scale of it makes it more efficient. For example, the RTX 4090 tops out at 100 TFLOPS. That means it would take 10 of those to have 1 petaflop. This chip does 125x that, so you would need over a thousand RTX 4090s to equal the same processing power. Not to mention the 4090s would require over 400,000 watts of power, while this requires only 25,000. That alone gives it a huge win over Nvidia, to the point that if anyone is still using Nvidia for this, given the savings of some 375,000 watts, it looks like nothing but a money-laundering operation.
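For what it's worth, here is that arithmetic spelled out, taking the comment's own round numbers at face value (the precisions being compared are likely not like-for-like, so treat this as a sketch, not a benchmark):

```python
gpu_tflops, gpu_watts = 100, 400         # commenter's per-RTX-4090 figures
wafer_pflops, wafer_watts = 125, 25_000  # claimed WSE-3 figures
gpus_needed = wafer_pflops * 1_000 / gpu_tflops  # 1 PFLOPS = 1,000 TFLOPS
print(gpus_needed)                  # 1250.0 GPUs to match the wafer
print(gpus_needed * gpu_watts)      # 500,000 W, vs 25,000 W for the wafer
```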
Moore's law is about logic DENSITY. It's not about more logic in a chip the size of a chess board LOL
"the number of transistors in an integrated circuit (IC) doubles about every two years." -- Gordon Moore
@@n00blamer LOL what you posted is NOT Moore's Law. It's the simplistic theme-park version spread by the media. The actual law is laid out in his article "Cramming more components onto integrated circuits" from 1965, where he referenced the complexity in two-mil squares. Please do your own homework LOL
@@NoSpeechForTheDumb In his original article, Gordon Moore stated the essence of what became known as Moore's Law with the following quote:
"The complexity for minimum component costs has increased at a rate of roughly a factor of two per year... Certainly over the short term this rate can be expected to continue, if not to increase."
This statement captures the crux of Moore's Law, highlighting the exponential growth in the number of components (transistors) that can be integrated onto a semiconductor chip at minimal cost, with the expectation that this trend would persist into the foreseeable future.
2:14 His intrusive thoughts won there for a second
Ian's just kinda like that sometimes. Gotta have a little nibble from time to time.
Imagine sending an entire 100amp residential service into a single chip
Imagine if every cold country used these to heat buildings in winter. You could reduce heating costs to zero, and really get both compute and heat in virtually perfect harmony.
This is one of the most interesting chips on the market. Happy to hear they have earned more money than they have raised.
No potato for sir.
With these kinds of chips I can't get rid of the feeling I had around 1990. The 80386 was kinda in our grasp, but then RISC told us: nope, you won't. This feels kinda the same again.
Interesting following the progress of these chips. Mindblowing.
Since 2020, all the memes and parodies became reality.
You do a good job of pointing out the best features of the products you cover. It makes it easier to follow for us non-computer-scientists. I also like that you mention a product's shortcomings, with ways to work around them if possible.
Can't wait for the laptop version of the chip !!
How do you route 25,000 amps' worth of current to a pizza box? I know some ASIC miners from ages past got around this by stacking multiple chips/cores in series, meaning multiple 0.8V cores were combined to form a single 3V-something processing block, which reduces the current requirements and makes power distribution design easier/more reasonable. Does anybody know if they are doing something similar here? I'm too curious.
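A toy model of that series-stacking trick with hypothetical numbers (whether Cerebras does anything like this is unknown): putting N cores in series multiplies the block voltage by N and, since P = V x I, divides the current the power network must deliver by the same factor.

```python
core_v, total_power_w = 0.8, 24_000   # hypothetical core voltage and power
for n_stacked in (1, 4):
    block_v = core_v * n_stacked      # series stack raises block voltage
    print(f"{n_stacked} in series: {block_v:.1f} V block, "
          f"{total_power_w / block_v:,.0f} A")
# 1 in series: 0.8 V block, 30,000 A
# 4 in series: 3.2 V block, 7,500 A
```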
These numbers are mind-blowing. Also, one chip with that much memory to train models is lit.
Come on over, pop it on my Motherboard!! I want to try it!!
Astonishingly powerful, one hell of a CPU 😳
That single piece of silicon uses more power than my 200 amp and 180 amp welders combined, even maxed out. In fact it uses more than my entire house does 99% of the time.
The stitching of reticles is truly a remarkable innovation and something I want to learn more about.
If I had to guess, I'd think there'd be a buffer in from the edge of the conventional masks, and then a second 'stitching' mask would be used to overlap reticles and pattern across them, in a manner reminiscent of multi-patterning in conventional lithography. Regardless of how it's done, the level of precision is truly remarkable, and the fact that they can yield something this big on 5nm is actually insane.
It seems they've exceeded their own expectations from what they initially set out to achieve. They were initially talking about being a few nodes behind, but they're now basically on the leading edge and only one step from the absolute bleeding edge.
Imagine building a nuclear space station full of these things like a floating AI god.
I remember my first PC - ETI Magazine DIY computer called the Transam Triton. 8080-based and 256 BYTES of memory! Cost me £300 in 1978.
I still have an SDK 8085 kit.
Oh! Wafer Scale Integration was a hot topic in the 1980s. I didn't realise it was back in vogue.
A meeting room called "Cathedral Peak" that is located on the ground floor? 🤔
That has me thinking they have a South African around, with the meeting rooms following a famous peaks convention.
The numbers here are less important than "can you buy it? can you buy enough of it? can you easily deploy it instead of competing technologies like gpus?".
I guess the answer is no, since GPU prices are going up, not down.
Cerebras increased unit production 8x in 2023, and they're going 10x again this year. They've deployed over 200 and got an order for another 400 from one customer.
Yes, it also plays Crysis with max settings.
I had to read a lot of comments to find this joke.
One day we will have a solid black monolith of nothing but transistors and memory, like the one in 2001 Space Odyssey !
One thing I really want to see is smaller AI chips for personal/commercial use. I've messed with AI image generation and some other AI stuff, but you can't really go any higher than 512x512 image quality with a middle-ground GPU. If there are any products like this already, please tell me.
Wait? Moore's Law was never size limited? So it's not just density alone??
I am most excited about the Qualcomm Cloud AI100 Ultra card, tbh. It seems to be the best solution for workstations/researchers who mainly care about running evals, which purely require inference. And 128GB per card... it would take like two A100s to match, and those easily cost $30k+.
Please let Qualcomm know we want them! I am almost ready to pay $10k for a single card... if they can sell it to me, prove the software works, and finally release some accurate benchmarks. I want to know what a single card can do for throughput with, say, a 70B model at FP16/BF16.
Can they donate a WSE (1,2 or 3) to Fritz for dieshots?
Also the door behind you spells MOOR - surely that's on purpose
It is/was not about density but the total transistor count.
do we even have the data to train a hypothetical 24 trillion parameter model on this?
I can’t comprehend the scale of the capabilities these processors have anymore. It’s absolutely nuts.
Man... how big is the CPU cooler??? Gonna need more thermal grease.
Thermal grease?! Thermal pads are the way to go these days, my friend. ❤
@@orangejjay I mean, that's probably true, but who makes them this large 🤣😜😃
@3:33 Wow, 100% more performance from 7 to 5 nm, so there should be at least another 100% boost worth of room from 5 to 3~2nm?
that’s not how it works doofus
I cannot believe you held it that long without eating it 😊
Now THATs a big chip.
Have technology, must bite it.
Do they also make chips out of a single tile or a few tiles? Like from outside of the square.
It's an interesting method, gets one thinking about how else it could be applied, like a CPU getting 2 or 4 still-attached tiles instead of 2 or 4 of the same chiplet. Also, imagine if we were using 450mm wafers; that might not have been a profitable transition for most uses, but for this and silicon interconnect fabrics it would've been different.
It's a single wafer... normally chips are made from a wafer just like this and then diced up into smaller chips. The reason chips are normally limited to smaller sizes is that the projection system used to image the chip only covers a small portion: the rectangular areas you see on this wafer. Since they are doing all this on the same wafer, though, they can put ultra-high-bandwidth links between the normal reticle scan areas and link it all together. There is far more bandwidth available here than you would normally get even through an interposer, since all the layers are there, instead of it being just one layer through an interposer. Making GPUs like this might actually make sense... that said, planar latency on this thing is probably quite "bad". Part of the reason the vertically stacked cache on Ryzen X3D has low latency is that going vertical is faster than going sideways twice as far.
The only thing missing from this video, and from all of your other videos for a while now, is the meme-worthy "What's your minimum specification?" jingle you used to have... is anybody else missing that, or is it just me?
I mean, obviously you can just increase the size of the dies, and many companies are already doing that, but with that comes extra heat. And Moore's law still applies, since you're not packing more transistors into the same area; you're increasing the area to fit more transistors.
I really wonder how the software stack compares to Nvidia's, and what inference/training actually looks like
Me too, the devil is in the details.
What kind of software / framework do they provide? I take it, it's not PyTorch or JAX? How hard is it actually to implement those models and the training code?
PyTorch and TensorFlow, IIRC. I didn't show the slide, but they stood up gigaGPT in 565 lines of code, vs 20,000 for Megatron-LM. Both 175B parameters.
Moore's Law is about transistor density in a monolithic piece of silicon. There are creative ways of driving performance despite the end of Moore's Law!
I thought of this years ago: why not make a single chip out of all the chips on a wafer, or just one giant chip?
I think keeping it round and putting extra memory on the rounded areas would be a next level thing to do at some point. Also, make a bigger standard size wafer.
Although the Cerebras wafer chip on 5nm is a good step in the right direction, it is only an incremental step.
It is my firm belief that the disruptive step would be to integrate Non-Volatile Memory (NVM), especially MRAM (e.g. SOT-MRAM or VCMA-MRAM), on the wafer (1 out of every 2 800mm2 chips should be embedded NVM), as this would open up tremendous new architecture opportunities.
You could even envision wafer-on-wafer stacking of 2 wafers in such a way that each logic core is surrounded in 3D with MRAM non-volatile memory.
Furthermore, different kinds of AI cores on the same wafer could be envisioned as a better fit for multi-modal AI models.
It is still early days, but clearly it is the kind of technology Apple should be investing in…
Archeologists in 6000 years: no idea what this did. Maybe an element to heat your food?
That is not only wafer scale; that looks like it used an entire 12-inch (300mm) wafer. The fact that they can make a single IC die that big shows how far we have come. I can only imagine the amperage and heat extraction required for a CPU that size running at full power. 😮😮😮
How much does a 5nm-process 300mm wafer cost at TSMC these days? $15,000? That's a heck of an expensive chip even with no margins added for R&D, marketing, manufacturing outside the fab, distribution, sales, profit, etc.
@@wombatilloThe WSE chips are sold for around $2 million, so the cost of the wafer is a drop in the bucket.
@@ironicdivinemandatestan4262 The chip must really be worth it to command such a price. The distributed memory and sheer bandwidth are insane compared to H100 clusters and others.
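Putting the two figures from this thread together (both are rough numbers quoted above, so this is only a sketch):

```python
wafer_cost, system_price = 15_000, 2_000_000  # figures quoted in this thread
print(f"{wafer_cost / system_price:.2%}")     # 0.75% of the sale price --
                                              # the raw wafer really is a drop in the bucket
```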
I really want to know what the cooling situation is with this: what it looks like and what the temperatures are like.
1/4 ZF. Nice. FP16, but still damn. Nice. I think raw power is one thing, but the biggest advantage is efficiency of data transfer, because most of data and communication is on chip. That itself would save a lot of power.
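If the "1/4 ZF" refers to the maximum cluster configuration (my assumption: 2048 systems at ~125 PFLOPS of FP16 each), the arithmetic checks out:

```python
systems, pflops_each = 2048, 125              # assumed cluster size, FP16 per system
total_eflops = systems * pflops_each / 1_000  # 1 EFLOPS = 1,000 PFLOPS
print(total_eflops)   # 256.0 exaFLOPS, i.e. roughly a quarter of a zettaFLOP
```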
One of the few times Moore's law is used correctly, factoring in the cost. I'm not aware of another time it's actually held true in the last 10 years.
This must play a mean game of Crysis.
TBH this gives me hope I may just live long enough to be able to upload my mind to run on a CPU before I die.
How old are you?
@5:06 was meant to be "a quarter of an FP16 zettaflop", no?
Yeah it was. Jet lag hitting hard!
I had to double check the zeros in the title. Holy moly.
I might be wrong, but are you sure that'll fit in my phone? It looks like it might be a tad too big, but things can look bigger on screen, so who knows.
😂👌
Should other chip manufacturers follow them on yield redundancy?
Bet you'd be able to buy something with similar computing capability that only uses 150 watts and is 1/25th the size in just 10 to 15 years. Maybe less. It'll be cool to see what comes as we get closer to 2030.
Ok with that kind of power can we get a deep dive into the cooling system?
One of a kind, a truly unique solution only from Cerebras. They found a hole in the market, otherwise they would be out of business by now, and with inference cards and a rental model they can monetize pretty well too.
You've got it all wrong. Cerebras' WSE-3 chips will be used primarily for training. They are not for inference. They sell it as a whole supercomputer system.
@@catchnkill "Greetings to Chinese state hackers!" - u got it wrong and obviously u r not reading that I mentioned "inference cards" mentioning Qualcomm ASICs. Nice trolling though...
I wonder whether or not they could combine WSE-3 with photonic interconnects/interposers for between-chip communication, and fiber optics for data flow between rack units and even between data centers, to achieve an even faster system.
There was this achievement last year by NICT of 22.9 petabits per second of transmission through a single fiber (albeit one with 38 cores; 24.7 Pb/s with better-optimized coding). This just demonstrates how fast a fiber, and photonics in general, can get, and it is just the beginning; this skinny glass can get even faster in the future. If they can combine these two (wafer-scale engines and photonics), maybe we can achieve zettascale in the near future, say five years.
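For scale, a quick unit conversion of that NICT figure (a sketch; the per-core share assumes an even split across the 38 cores):

```python
fiber_bits_per_s = 22.9e15             # 22.9 Pb/s in one multi-core fiber
print(fiber_bits_per_s / 8 / 1e15)     # ~2.86 PB/s aggregate
print(fiber_bits_per_s / 38 / 1e12)    # ~603 Tb/s per fiber core
```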
Routing is not "just that simple". If you turn off a core, it will warp the wafer from thermal stress. Even with their thermal solutions I would bet that microfractures will render a wafer dead after a few months or a year.
If only they didn't fail at implementing simultaneous multithreading, I was this close to buy one. So disappointed
I understand the having built-in dummy cores for yield purposes... but imagine if, by some miracle, they get a wafer with zero defects.
They probably cut away something like 10% regardless, to keep everything uniform. The extra benefit of the extra 10% is really negligible.
Is that chip from a single silicon slice? Or, more likely, a 12 x 7 array of individual chiplets stitched together?
It's one piece of silicon (not silicone, that's a polymer). Hence the name wafer-scale.
Wafer Scale implies one piece of silicon.
It has a lot of engineering to get around defects, plus the cores are tiny.
They claim it is monolithic
@@unvergebeneid OK, I corrected the spelling.
@@unvergebeneid The yield rate for such a gigantic single piece of silicon with over a trillion transistors must be really low. I mean I think the yield rates for standard size CPUs are only in the range of 10-20%. The chances for one or more defects to be present on a giant chip that is 84x larger in size must be enormous.
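For intuition, the textbook Poisson yield model Y = exp(-A * D0) shows why a monolithic die this size would be hopeless without core-level redundancy. A sketch using the ~0.07 defects/cm² N5 figure quoted further down this thread:

```python
import math

d0 = 0.07                                          # defects per cm^2 (quoted below)
for name, area_cm2 in [("reticle-limit die", 8.0),  # ~800 mm^2
                       ("whole WSE", 462.25)]:      # ~46,225 mm^2
    print(f"{name}: {math.exp(-area_cm2 * d0):.1e} chance of zero defects")
# reticle-limit die: 5.7e-01 -> ~57% of max-size dies come out defect-free
# whole WSE: 8.9e-15         -> effectively zero without redundancy
```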
The number of transistors isn't really surprising, since most of them are memory bits; for DRAM, 1GB = 8 billion transistors. The more memory (cache), the more transistors. Apart from a few new instructions, processors are just getting smaller, using lower voltages, higher frequencies, more cores. I wonder when designers will hit an issue; when they go sub-nanometre, how far can they go?
I think they're using SRAM not DRAM on-chip, but yeah, a fair portion of the transistors are for memory.
@pyrus2814 Damn them all! That's confusing, but good to know.
Half of the core physically in die area is SRAM, fwiw.
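A rough sketch of that transistor budget, assuming standard 6-transistor SRAM cells and the 44 GB on-chip SRAM figure quoted for WSE-3:

```python
sram_bytes = 44e9                       # quoted on-chip SRAM capacity
sram_transistors = sram_bytes * 8 * 6   # 8 bits per byte, 6T per SRAM bit
print(f"{sram_transistors:.2e}")        # ~2.11e12 -- about half of the ~4e12 total
```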
@@JonathanSwiftUK You know that "since most of them are memory bits, for DRAM 1GB = 8 billion transistors", but don't know that for over a decade now the nm figure hasn't been related to actual transistor size?
@percywhitehead9228 because I'm not that interested I guess. It is odd I've been in IT for about 35 years, it never occurred to me that when they said nanometers they didn't mean actual scientific measurements, they were talking about some other esoteric nanometers that only people who have a very deep, and commendable, interest of cpus understand, and it's wonderful that people have expanded the definition of nanometers, I'm all for it you understand. 😀
Okay, but what's the GEMM/W? How are you guys solving non-stationary dataflow? Inter-core communication has an incredible power overhead, not to mention the developer nightmare of having to debug and troubleshoot non-deterministic compilation tools.
In finance, every time you see exponential growth, you know eventually it will level off. (Sometimes the bubble pops, that's not Moore's law, though.)
It's a mathematical certainty, based on the nature of exponential growth. If your investment grows exponentially faster than the economy, sooner or later the economy's going to start fighting back. That's why those exponential growth curves always level off to new equilibria.
With Moore's law, I think that inflection point happens when some minimum size is reached for transistors.
Then, it's system sizes that increase to keep the growth up. However, here we're energy limited. It's a finite planet.
The thing that's interesting is imagining what that new equilibrium state must represent, approaching practical energy limits.
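What this comment describes is essentially logistic growth: exponential at first, then bent into a plateau by a carrying capacity K (here, practical energy limits). In standard form:

$$
\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right),
\qquad
N(t) = \frac{K}{1 + \left(K/N_0 - 1\right)e^{-rt}}
$$

For small N the right-hand factor is close to 1 and growth looks purely exponential; as N approaches K, growth stalls at the new equilibrium.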
The actual equilibrium we end up reaching matters a lot to the state of civilization. If integrated circuits had reached equilibrium in the mid 1970s, microcomputers would be hobbyist curiosities. If it was reached in the 1980s, we might have word processing and spreadsheets, but nothing more. If equilibrium was reached in the 1990s, we might have GUIs and a primitive Internet, but nothing more. If equilibrium was reached in the early 2000s, we don't get iPhones. If our current state is close to the equilibrium, we will miss out on general purpose AI, AI drug development and screening, simulated synthetic biology, fast protein folding modeling, longevity/biological immortality breakthroughs, etc.
@@gregorymalchuk272 It's starting to look like planetary intelligence is taking shape.
In self preservation.
The equilibrium system size is proportional to the planet that houses it.
To stretch beyond those boundaries would take not just a revolution in analysis, but also in capacity to use a lot of energy. Perhaps over a long period of time.
Surely, when spreading across planets, Moore's law must take on the characteristics of a staircase function.
My issue with this company is them constantly being adamant that the chip has 100% yield… this is BS and they know it. Because they have added redundancy, they assume 100% yield. Not sure whether it's their marketing or some other non-engineering figure pushing this information out.
What would be useful is how much silicon, as a percentage of total wafer area, is actually working.
TSMC N5 wafers have 50 defects or so - a D0 of 0.07/cm2. 1.5% spare cores for redundancy is over 1000 cores spare. That also covers cores that can't reach minimum voltage/frequency limits, but as they're looking at efficiency at scale, it rarely comes to that. A true wafer gets ditched when enough of the off-chip interconnect gets defects. It's still near 100% yield.
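Those numbers hang together; a quick check, assuming a full 300 mm wafer and the 900k-core figure:

```python
import math

wafer_cm2 = math.pi * 15**2      # 300 mm wafer: radius 15 cm, ~707 cm^2
print(wafer_cm2 * 0.07)          # ~49.5 expected defects at D0 = 0.07/cm^2
print(0.015 * 900_000)           # 13,500 spare cores at 1.5% redundancy
```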
This is the kind of chip that is going to wake up and become conscious as soon as it is plugged in.
Wtf? How long has this been a thing? I feel like I'm looking through a window into a decade in the future
I haven’t heard “fail fast and fail often”; the closest thing I’ve heard was “fail fast and fail early”, which is a development strategy where a condition that may cause system failure is identified and reported as close as possible to where it cropped up.
And again you have to bite it... xD
Amazing rundown thanks man
Won’t that need a huge amount of current and cooling?
2:15 I was waiting for him to do that 😄
Someday we could see all this, but in roughly the same-size chip found in a typical home PC's CPU.
The difference though, Mr. Potato: how much more compute power does that 80 MW get them compared with one of the national labs' top systems, without integrating a Cerebras WSC unit (Wafer Scale Compute unit; I know *_technically_* they're called "engines", but 🙄 it's a MASSIVE compute unit, not simply an engine performing classical work; I know, I'm jaded at the marketing department) into their supercomputer? I have read that the national labs have started mixing these monsters into their design architecture.
That sucker looks about the same size as one of Seymour's custom CPUs from when he switched over to gallium arsenide for the Cray-3 & Cray-4. RIP. Way ahead of his time. I just wonder how much more powerful (and power efficient) our tech would be today had the national labs not had the budget cuts and cancelled their supercomputer orders (well, and had he not died in that car accident in 1996). We might all have GaAs-performance compute cores by now, cranking 40+ GHz. 😅
Anamartic and wafer-scale memory. Happy days.
4 trillion transistors; what is the optimal use case for this computing chip? And if we compare it to Nvidia's solutions, what are the main differences? Thanks 🙏
Still... will it break the 60 fps barrier in Skyrim SE/AE? LOL (Just thinking out loud.)
You could heat a village with that. Cooling it must be a nightmare.
How does one run this thing? How many hair dryers of power does it take? And what are the use cases? Automated debugging of a large code base is done in a snap, I imagine... curious about the business use cases
24 hair dryers actually
Awesome video
can't wait for 450mm wafers if they ever come along...
In 5 years, sitting in desktops. In 10 years, sitting in TV sets.
Why is it square and not round? Packaging reasons?
I have a video that explains just that!
@@TechTechPotato dammit, and here I thought I'm keeping up with your videos!
@@TechTechPotato But in that video you said, roughly, "that's why I don't expect to see round chips anytime soon, unless someone does a waferscale round chip". So since this IS a waferscale chip, why not skip trimming the edges and use as much of the wafer as possible?
The programming model changes a fair bit, especially with chained workloads. The edge/corner cores end up burning power and being underutilised. Also, cutting the thing would be trickier and more expensive, and then there's making similar cuts for power and IO. A rectangle keeps the shoreline identical and is easier to design for.
Check the comments on [Cerebras @ Hot Chips 34 - Sean Lie's talk, "Cerebras Architecture Deep Dive"]
That makes one hell of a schematic
Hi, I'm here from "Tech Linked" 🎉
I wonder if these sorts of Wafer Scale Engines can be combined with advanced packaging / memory stacking? To my understanding, large AI models are bottlenecked by memory capacity and throughput, so adding a closely-bundled cache or HBM stack on top could increase performance by a lot.
That said, with the energy this thing uses, powering and cooling extra memory stacked directly on top might be a problem. Maybe if it has separate power delivery, fluidic cooling channels through and between the chips, etc? There's probably high-end customers who would want that if it gives significant advantages over H100s for their applications.