Why We're Reaching the Theoretical Limit of Computer Power
- Published: 22 Dec 2024
- Get a free bag of fresh coffee with any Trade subscription at drinktrade.com/hai
Half as Interesting’s Crime Spree: nebula.tv/haic...
Get a Half as Interesting t-shirt: standard.tv/co...
Suggest a video: halfasinteresti...
Follow Sam from Half as Interesting on Instagram: / sam.from.wendover
Follow Half as Interesting on Twitter: / halfinteresting
Discuss this video on Reddit: / halfasinteresting
Video written by Amy Muller
Check out our other channels: / wendoverproductions
/ jetlagthegame
As an electrical engineer, this was probably the best explanation of transistors aimed at non-engineers I have ever heard. Props to Amy!
I have to agree. As someone who was a TA for a junior-level semiconductor course when I was in grad school, I thought this was fairly accurate and easy to follow for a novice.
Then again, it's been more than 20 years since I taught that course and I've forgotten a *lot*.
as an almost complete layman I understood they provide an off or on interface that can be interpreted by software to do all kinds of stuff....binary if you will 🙂
i'm a software developer and yes, this is a good explanation about how doped silicon (the process of adding germanium and boron is called "doping") manages the flow of electricity to create tiny switches turning on and off. It's a surprisingly simple concept
@@indoor_vaping i've never heard that they dope with germanium, only knew about the phosphorus/boron combination
@@Krokoklemmee Germanium, as well as many other elements, can be and is used for doping; just not for your standard CMOS silicon. Dunno where germanium is used, but when you go into high-power, extremely high precision or RF magic, the materials can get wild.
Amy is the backbone of this channel
Sam should invite Amy to be THE guest star on Jet Lag
I mean, she’ll either be the death of, or owner of, this channel once all the lawsuits are settled.
@@silverXnoise What lawsuits? Just bringing up lawsuits without context lol.
@@Zlysium HAI is being sued for allegedly shooting a missile at a family of seven because they claimed his videos are " quarter interesting "
@@Zlysium 1:09 it’s a joke from the video
it blows my mind how people figured out how to make computers
I can understand building a small computer with vacuum tubes or basic transistors that are visible to the human eye, I could probably build something very simple with enough time but building something that’s even as complex as what we had 40 years ago seems impossible but we did it somehow
The jump from binary to language is what baffles me.
Let's all have a Mark Rober moment and remember when humans come together sometimes they can do amazing things.
computers are simpler than you think. It's actually just a lot of copies of the same things over and over again. and computers can only do a single math operation....Addition. It just does a LOT of addition, over and over again. and we use certain numbers to represent letters and other symbols.
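For the curious, here's a tiny illustration of the "numbers represent letters" part — standard ASCII, nothing taken from the video, just a quick sketch:

```python
# Text is just integers the hardware adds, shifts and compares like any
# other number (standard ASCII mapping, purely illustrative).
message = "HAI"
codes = [ord(ch) for ch in message]        # letters -> numbers
print(codes)                                # [72, 65, 73]
print(''.join(chr(c + 32) for c in codes))  # adding 32 flips ASCII to lowercase: 'hai'
```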
@@SoloRenegade didnt know that lmao
I have a Ph.D. in physics and study theoretical chemistry for a living. Amy, you did an amazing job teaching yourself a topic that most people even with the proper training have a hard time describing! Being able to learn new and complex information is an impressive and valuable skill to have 😊. Nice job!
But it does not pay a lot, LOL. Sad but true: a hedge fund manager is richer than any scientist. And don't say Bill Gates or Musk; neither is a scientist in the true sense. They made billions on science, but on the backs of poor scientists who make their stuff work. Bill Gates knows less about computers than an average programmer or hardware tech, and Elon Musk could not build or design a Tesla; engineers did, the real scientists.
I'm a Physics PhD and totally agree. The y-axis on that wave function graph shouldn't be energy but everything else looked pretty correct.
You study for a living? How does this work? 🤣
I'm doing a Ph.D. in physics and I barely have time for the Ph.D. itself due to work.
Ummm... it's college-level knowledge there in a vid.
As others have noted, since the introduction of FinFETs (22nm process node) (FET stands for Field Effect Transistor), the "nanometer" naming of process nodes has become detached from any physical measurement of the transistor. Originally, it referred to the gate length of the transistor. However, as quantum tunneling and other current leakage started causing issues, gate length stopped shrinking as quickly. FinFETs enabled greater control over the transistor by surrounding the channel on three sides with the gate (rather than one side with planar).
Furthermore, there are two critical technologies that will keep Moore's Law alive:
1. New transistor architectures currently being developed:
1a) First, there is GAAFET (Gate All Around FETs) which enables greater control by surrounding the channel on all sides with the gate. It also flips the stack of fins on its side, so now channels are stacking vertically rather than horizontally. This naturally increases density further.
2a) Then we have fork GAAFETs, which seek to further increase density by shrinking the gap between the n-well and p-well of a pFET/nFET pair (all transistors come in positive and negative pairs, and this lets them sit next to each other when they would normally have to be kept far apart to avoid issues with overlapping doped silicon - the phosphorus and boron mentioned in the video).
3a) Lastly, we have CFETs (Complementary FETs), which seek to increase density even further by taking two separate pFET and nFET stacks and stacking one on top of the other! Naturally, that could double density once again! As such, transistor density scaling is still far from dead. It has been getting exponentially more difficult and more expensive to continue, but there's still a lot of momentum left.
2. Advanced packaging: Packaging is the process of taking a silicon die (the actual silicon with all the transistors) and putting it in a package. For example, a CPU will take one or more dies, bond them to a green PCB known as the substrate, and then cover it with what is known as an IHS (Integrated Heat Spreader) to help dissipate the heat generated by the CPU. Originally, packaging just meant finding some way to connect your one silicon die to the outside world. However, as the desire for more compute and more transistors never stops, we had to get a bit more creative with packaging. This is because if you just make the silicon die larger, you end up wasting more silicon, because we're cutting rectangular dies out of a circular silicon wafer. Larger rectangles mean more lost silicon along the edges. Larger rectangles also mean that a single little defect in the wafer (there's always some) will affect a larger piece of silicon. Better to have one bad die out of 100 rather than out of 20 (see the rough yield sketch at the end of this comment). So the question became: "How do we provide more transistors and more compute without further increasing die sizes and thus decreasing yields?" The answer was to split everything into multiple dies! So we started creating all sorts of techniques to put multiple dies next to each other on the substrate and allow them to communicate with each other. We even started stacking dies on top of each other! All of this allows us to keep dies small with high yields while not sacrificing performance and transistor counts. And so, as long as we can continue with advanced packaging, Moore's Law is still far from dead! This is because Moore's Law does not state how large an area the transistors have to be in. He simply said that the number of transistors in an *integrated circuit* will roughly double every two years. If we can economically increase the size of that integrated circuit, then we can still increase the number of transistors on the integrated circuit as we please and thus continue adhering to Moore's Law!
While Quantum Computers are a very interesting topic of research and worth a video of their own, most people agree that they won't replace classical computers, at least not for a long, long time. This is because virtually all quantum computers require ridiculously well-maintained environments (fractions of a degree above absolute zero and state-of-the-art magnetic shielding) in order to not become completely useless due to interference. Furthermore, in order to create properly reliable and stable quantum circuits, you need thousands of qubits, which we are well over a decade out from. On top of that, nothing can beat the price of silicon. A CPU with tens of billions of transistors can be had for a hundred bucks. And on the mature process nodes, you can have microcontrollers that cost pennies. As such, even when we get proper quantum circuits, they will likely remain cost prohibitive for all except some of the most well-funded institutions.
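Here's the rough yield sketch I mentioned above, using the classic Poisson yield model; the defect density and die areas are made-up illustrative values, not anyone's real fab data:

```python
# The Poisson yield model, Y = exp(-D * A), is one way to see why splitting
# a big die into smaller chiplets wastes less silicon: a defect only scraps
# the small die it lands on, and the good chiplets can be re-paired.
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Expected fraction of defect-free dies."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

D = 0.1  # assumed defects per cm^2 -- purely illustrative
print(f"one 6 cm^2 monolithic die : {poisson_yield(D, 6.0):.0%} good")
print(f"one 1.5 cm^2 chiplet      : {poisson_yield(D, 1.5):.0%} good")
# Roughly 45% of the wafer area gets scrapped in the monolithic case versus
# ~14% with chiplets, since only the defective chiplet is thrown away.
```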
I clicked on this video only to see if they would differentiate between the nanometer nomenclature and the actual physical size of the components. The answer is just a quick Google search away. Makes you think about all the other things YouTube videos get wrong
Wow what a comment. If it isn't just ML-generated, it has value comparable to the efforts of the entire video production team that caused it
@@mormatus I promise I'm not some LLM haha. I actually work at Intel (not in the fab, but I like to keep up-to-date on all the goings on in semiconductors). I should also note (as you can probably tell), brevity is not my strong suit 😅
If you mean the video, HAI happens to be a part of Wendover Productions, which is owned by Sam Denby (who voices every video) and they have several awesome writers. They're also behind the incredible "Jet Lag: The Game" show
You can pack whatever you want, but you won't be able to remove all the heat from the core... Rip.
@@Einheit101 Yes, this is certainly true. But think about it this way: the craze over dark silicon started around 2011... That was when Intel launched Sandy Bridge, which could get you 4 cores and 8 threads at a minimum of 65W of power. Today, you can get a 14100 with 4 cores and 8 threads that more than doubles the performance of the Sandy Bridge part across the board while consuming similar power (and thus outputting similar heat). Clearly, the thermal issue, while a very real obstacle, has not precluded us from improving performance and efficiency.
The thermal side of things in particular has been a concerted effort across the whole industry. Mechanical engineers researching new thermal interface materials to increase conductivity between the silicon die and IHS. Mechanical engineers researching die thinning to get the IHS as close to the transistors as physically possible.
Firmware engineers tuning algorithms to run the best cores at the best times, even taking physical location on the die into account to improve thermal performance.
Core architecture designers physically altering the layout of the core to move thermal hotspots as far away from each other as possible, to produce more even loading and better heat dissipation. There has also been an increasing emphasis on custom accelerators to provide massive performance and efficiency gains in common tasks. For example, Nvidia, AMD, and Intel have all implemented accelerators for video encode and decode, which do exactly this. Compare the performance of hardware encoding against CPU software encoding to see the massive gap in performance and efficiency.
There are a plethora of techniques implemented by a plethora of smart engineers across all disciplines that have been implemented to get around this obstacle. And there are still plenty of other techniques to explore! That said, nothing beats power efficiency improvements that you get from an improved process node. Dennard scaling died with the transition to FinFETs, but there are still power efficiency gains to be had by better controlling the gate and channel, tuning the channel resistance, reducing via resistance, and increasing capacitor density. We still see 10-15% more performance at the same power for a half node and 30-40% more performance at the same power for a full node, and there are still plenty more gains to be had here. Look at the power efficiency of Intel's Sierra Forest CPUs which leverage the Intel 3 process node. Intel, the laughingstock of power efficiency and thermals, providing best in class power efficiency.
Can we all appreciate just how much of a legend Amy is?
Will you virgin boys give it a rest?
And the fact that she is willing to call the NLRB
I'm not entirely convinced Amy isn't just Sam in drag
No, we can't
@@chair547 It's a non-zero probability
Gotta say: Amy proves to be a great addition to the channel.
Real
Who is Amy
This was an impressively accurate explanation of how transistors work. My only note is that Moore's law is already dead, in that it refers exclusively to transistors per area. For production, instead of implementing the increasingly small transistors that science can create, we've been improving ~30nm technology for like a decade now (GAAFETs and FinFETs, for example). Industry terms like "5nm technology" are actually just jargon that means "a technology a few generations after 40nm technology that happens to give a performance increase similar to what 5nm technology would look like if it was possible for production right now."
Thank you!! I'm amazed at the number of people who say it's still alive when there is literally a specified metric for it, and it isn't being met
Yep, this guy is correct
Also we've been getting over Moore's law by using processes that are fine tuned for specific applications. For example: Mobile? optimize the transistor for low power. Server? Optimize for performance.
It's almost like using historical data to make a prediction with no end never works, which is exactly why I doubt the inevitability of the technological singularity
But what about operations per second? Hz isn't limited because of quantum tunneling.
@@Phethario hz is limited by the planck second if nothing else.
1:20 "allegedly got those actors to the moon"💀 bro's wild for that
Imagine spending hundreds of millions of dollars just to fake the moon landing🤣 which was just a hoax btw😂
@@QWERTY-bc1uq anybody who thinks the moon landing is fake, lives under a rock
@@QWERTY-bc1uqyeah, kubrick decided to film it on scene to make it more believable to get people to buy the hoax
@@QWERTY-bc1uqcalm down buddy
I had to pause. Do my eyes deceive me?
I like how quantum tunnelling kind of looks like the behavior of a loosely coded hobby physics engine. Sure, there's a collider, but if you're moving fast enough you go right through.
this is an example of a literary device known as “foreshadowing” lol
Literally Roblox. Roblox is the glitchiest game engine.
best argument for "we live in a simulation" I've seen
@@QuasariumX Source 2004: Amateurs.
the universe is just Mario 64
I took years of physics in college and specifically learned about the physics of transistors as I was in computer engineering. Amy did a better job in like 30 seconds than any of my teachers. That's seriously impressive, especially considering it's not even that simplified, and doesn't lose any actually important information. Hats off to Amy!
Same. This explanation of the depletion layer was far more understandable than the fermi level clustercuss my senior-level nanoelectronics prof tried to explain to us.
I learned on the net in 5 seconds.. on/off BING! I'm a pctech so... binary clicked right away
Presumably because your professors were trying to teach future computer engineers, not lay people.
Professors tend to be handicapped in that they forget what it's like not to know.
"not even that simplified"
quasiparticles be like:
3:57 As a physics major, that's the most accurate depiction of a quantum physicist I've ever seen.
As a physics major, quantum information is only good because the feds put a ton of money into it and we can use that to get free food at events
Totally agree! The traffic/transit analogy really works as there’s a non zero chance you’ll go the wrong way 😂
Can you tell me wtf the barrier is in quantum tunneling? Like in 3D, does having a "barrier" that is thicker in any dimension (height, width, depth) make it a "larger" barrier in the wave function? And does the material of the barrier matter?
@@Jwellsuhhuh A barrier is basically anything that stops the particle from going there. Often this is an electrical charge in the way. For example, alpha decay can be thought of as whole alpha particles inside the atomic nucleus quantum tunneling outside of the nucleus. Nuclei are positively charged, and so is an alpha particle. Classically, the coulomb/electric force would hold that alpha particle inside the nucleus forever. It is the barrier in this example. Because of quantum tunneling, there is a nonzero chance that the alpha particle is not inside that nucleus despite the barrier in place. Nuclei are tiny so the barrier is thin enough that a somewhat significant number of alpha particles tunnel through it.
In a sense, that means that the material, physical thickness, etc. of the barrier can play important roles in the probability that a particle quantum tunnels. A thin barrier, such as a thin piece of paper, will do less to get in the way than a thick sheet of lead composed of many densely packed, very positively charged nuclei
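If anyone wants to play with the numbers, here's a back-of-the-envelope sketch of the standard rectangular-barrier estimate T ≈ exp(-2κL); the 1 eV barrier height and the widths are assumptions for illustration, not values from the video:

```python
# How brutally the tunneling probability depends on barrier width and height.
import math

HBAR = 1.054e-34   # J*s
M_E  = 9.109e-31   # electron mass, kg
EV   = 1.602e-19   # J per eV

def tunneling_probability(barrier_eV: float, width_nm: float) -> float:
    kappa = math.sqrt(2 * M_E * barrier_eV * EV) / HBAR   # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

for width in (5.0, 2.0, 1.0):   # nm, illustrative barrier widths
    print(f"{width} nm barrier, 1 eV tall: T ~ {tunneling_probability(1.0, width):.1e}")
# Halving the width raises the probability by many orders of magnitude,
# which is why leakage explodes as transistors shrink.
```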
@@Jwellsuhhuh there's no physical barrier (like a fence or a wall). It's more about the distance between the two poles of the transistor: if they're too close to each other, electrons may just "jump" the gap between the poles even though there are atoms in between. That "jump" is the tunneling.
So the distance is the barrier as it prevents too many electrons from jumping.
The thing with quantum tunneling is that it always happens - electrons can really be anywhere around the nucleus, they’re just most likely to stay close. But there’s a non-zero chance that one of “my” electrons is going through your body as you read this message - regardless of where you are.
As someone with a CS PHD, I feel a need to correct some incorrect parts of the video (and briefly talk about what the future holds). Warning: long and rambly
1) Transistor gate widths are actually about 32nm, not 3 nm. When the tech industry says "3nm process," those aren't referring to real measurements. They are marketing terms used to continue naming new transistors as if they were scaling at the same speed as they used to. You can read actual dimensionality stuff in the more technical releases (or Wikipedia/WikiChip).
2) That being said, there are still issues with quantum tunneling, of course, which is a big part of why we are about to stop shrinking transistors, but we still have until the 2031 cadence release before that stops
3) Speaking of which, after the 2031 transistor (probably will be called something like "1.0nm" or "10 angstroms"), the industry will only be fitting in more transistors by introducing 3D integration, adding a 3D element to our CPUs (which are currently logically flat). The gate width will probably be something like 16nm
4) 3D integration has a major heat issue though, which will require major redesigns of how we design our computers. This is part of the reason we are starting to see "power cores" versus "efficiency cores": the heat problem means we can have many slow efficiency cores, but not all problems are parallelizable, so we still need some "power cores" for the ones that are not. That will probably only get us another decade or two of 3D integration with designs specific to it before the heat issues are too problematic (and even if that doesn't happen, before too long we'll reach a height problem, like flash memory will start seeing in a decade or two).
5) Ideally, by that time some cool alternative things will propel us forward like better transistors (eg: carbon nanotube transistors) or alternative methods of computation.
6) No, quantum computing isn't a replacement. Realistically, quantum computing will largely see use via cloud services and be targeted at researchers or companies with specialized problems that quantum computing is good at and that have a large input size.
Even if quantum computing was a suitable replacement for classical computing, the average person isn't going to be walking around with a bottle of liquid nitrogen to supercool the q-bits in their phone.
Intel announced CFET transistors a few days ago, with many transistors 3D-stacked on top of each other
@@the_bi11iona1re7 to be clear, CFET as 3D transistor scaling has been known about for years, and has been on the IRDS roadmap for over a decade. Also to be clear, the 2028 release is supposedly where CFET begins to be introduced, but 2034 is the first time where improvements will probably come only through extra layers rather than shrinkage.
@@vrclckd-zz3pv They are making headway on overcoming the need for extreme cold in quantum computing, from what I heard recently, due to finding a new material. You can look into that if you're curious. I think it's still pretty absurd temperatures, but given that we're constantly able to push the envelope and find new things that could make it possible to operate at more normal temps, I couldn't discredit the possibility entirely.
Odd that you didn't notice how the 1:30 chart claims the NES, a 6502 based system, can perform 4 FLOPs per cycle.
2:18 - “….Amy, a SOCIOLOGY major”. The way Sam said that had me laughing really hard. 😂
With how small transistors currently are, there is already some level of quantum tunnelling occurring, but at levels that existing error correction codes can detect and correct.
Yes and no, quantum tunneling causes a steady OFF current that would need to be compensated by a higher ON voltage. The issue is that with a higher voltage comes more heat, meaning there's more thermal movement of electrons, which increases OFF current. That increased OFF current might surpass the ON threshold and cause a bit flip, which may be corrected with ECC.
Personally, I think we're at a point where the hardware is powerful enough, we need to focus on the software aspect.
@@BoopSnootthe hardware is powerful enough for our current uses, but it always has been. we don’t know what the future will bring.
I love that Sam is giving definitely not a slave Amy more publicity
I guess I need to mention that Moore, who formulated Moore's law, passed away two months ago (Mar 24, 2023). I believe his discovery and the law are the foundation of the current computational, semiconductor world. Rest in peace.
Moore was in the right place at the right time. He was certainly a capable individual in his field. But about all we're using today that he came up with is his Law. Maybe we're still growing silicon how Gordie did? 320mm wafer manufacturing surely must have blown his mind though. They started off on a much smaller scale. They wouldn't have known how to lift a modern behemoth crystal.
No way it's the legend himself (God I love your music). Also humorously morbid fact to make me aware of, I suppose the passing of Moore conveniently fits with the time frame of the death of his law.
Thank you for letting me know about the passing of Gordon Moore. He was truly a pioneer in the field of computing and his contributions have had a profound impact on the world we live in today. The concept of Moore's Law has been a guiding principle in the tech industry for decades, and his legacy will continue to be felt for many years to come. Rest in peace, Gordon Moore.
Never knew i would see camellia here
5:40 "humans are innovation gluttons making computers better until we die from them" one of the most accurate sentences said in history
Well, with the quantum mindset, that's hopefully still an uncertainty.
Can’t say it’s true if it isn’t the case, right?
@@JannesJustus we can extrapolate based on current data. Right now it is looking like our artificial creations will someday replace us. They're evolving a lot faster than we are.
@@1pcfred sure thing pal
It's not computers that are causing population collapse, at least not autonomously.
I know a thing or two about quantum mechanics, and I've taken courses on quantum computing, so I have the pleasure of announcing: F in the chat for quantum computers, at least commercial ones in your home. They can do some tasks orders of magnitude better than regular computers, and other tasks not at all. So you can solve certain very hard math problems on a quantum computer very well (which isn't as useless as it sounds, since many things can be turned into math problems), but it would never be able to, say, play Minecraft.
And similarly to how there are quantum limits on how small transistors can get, there are quantum limits on how big quantum computers can get. Currently, that's at around 100 qubits (a qubit is the quantum bit; rather than being 1 or 0, it can be a fun combination of 1 and 0), and even then you need to dedicate a third of them to error correction of your computations.
TLDR; sadge
wow took the words right out of my mouth. HAI made it seem like quantum computers just replace classical computers when that's the furthest thing from the truth
So just add "quantum chips" into the upgrade slots of your ancient ThinkPad, like you're slotting in a new graphics card, a lot more memory, and literally all the rest of the computer's internals?
I just want to say, as someone with a chemistry degree, that she KILLED the explanation of the transistors and how the hole and electron model works!! Honestly even a better explanation than my inorganic chemistry course! You rocked it!!!
That's because "Amy" is most likely ChatGPT or a similar AI.
Because you are just a simp who over praise anything women do.
Amy giving a full intro to pn junctions. What a legend
Yeah, I have a test coming up tomorrow and I literally screamed "depletion layer" 😅
@@eti-iniER homosexual
I'm beginning to wonder if Sam has a basement like Simon but instead of writers, he keeps the interns and researchers there.
If he does I hope it has sexy chrome door hardware like Factboi
More like Silence of the Lambs, "it puts the words on the script or else it gets the hose again!"
We NEED Amy on a Jet Lag season soon. With all these skills she's developed researching random stuff for HAI she'd mop the floor with the competition
She technically is now
"allegedly got those actors to the moon"
hahaha I lost it.
It’s an insult to the human ability to build something powerful enough to perform such a task in 1969.
Quantum computing isn't necessarily any faster than classical computing. It can solve certain problems faster, but most tasks will in practice be slower. It can certainly speed up public-key decryption, which is the main concern right now, and it can do some other things too, but it won't improve your frame rate running Crysis. More specifically, the advantage of a quantum computer is that it can solve problems in polynomial time that are expected to require superpolynomial time on a deterministic computer.
Is that last bit true? Famously, we don’t even know if NP-complete problems can be solved polynomially with classical computers. I’m not a computer scientist, but I thought that particular veil of ignorance extended into the quantum domain as well.
The usefulness of quantum computing is yet to be seen. Quantum Approximate Optimization Algorithms (QAOA) are the supposed NP-hard-solving methods, but these have yet to be formally proven. The only robust and relevant algorithm applicable to quantum computing is Shor's algorithm. If we want quantum computers to be more useful in any capacity than classical ones, it will require significant investment into the algorithms they run, which we have yet to really do, imo.
Nah, Ima play Crysis on my quantum computer in the future.
Why is it that processing power for most things will not speed up? I'm not saying you're wrong, I'm saying I don't quite understand. Something like Crysis is limited by graphics processing, which are a TON of really simple calculations, made over and over and over again, right? Is it unreasonable to make a quantum graphics card? If so, why?
From what I've gathered as an uneducated layperson, they do have one potential application in chemical modelling, which would be very useful in biochemistry - you can use one for brute-force drug discovery.
Murphy's Law was originally: “If there are two or more ways to do something, and one of those ways can result in a catastrophe, then someone will do it.” It's a maxim about defensive design in engineering.
“Anything that can go wrong, will” is actually Finagle's law or sometimes Sod's law.
Right, Murphy's law came about after someone put an accelerometer in upside down or wired it backwards before a really important high-speed deceleration test.
I was taught that Murphy's law was "if it can go wrong, it will"
And Sod's law as "if it can go wrong, it will, and at the worst possible time"
my personal favourite is coles law
@@johnsmith99997 the kind with mayonnaise or the kind with vinegar?
@@joermnyc A deceleration test using a live human as a guinea pig. The test produced no data because the sensors were installed backwards and they were installed that way simply because they could be installed that way. It's why one hole on an American outlet is slightly longer than the other. Humans can be trusted to find exciting new ways to screw up even the simplest task.
As a fellow sociology major I stand with Amy. People don't realize that since it's the softest of sciences, it needs the most discipline in order to stand scrutiny.
To a sociologist - you are the experiment - always! Amy rocks!
Economics has physics envy
@@blocks4857 Best I can tell economics care less about the why than sociology. We can look down at them together if you want
@GuardianNecro I meant mainstream economics. Yes we should laugh at them for trying to lump all heterogeneous humans into variable x
That's not to say economics isn't a valid social science. It's just that the mainstream treats it like a natural science
"Eat Rocks" - Amy. I love it. This is seriously one of the best descriptions of the topic I have seen. Thanks!
I'm an engineer and have had transistors "explained" to me many times. I put those quotations there because at the end of the day all I was ever told was their function within a system, not as a system. Thank you so much for this explanation!
@5:25 the main thing this video failed to mention is that, while this bit is true, we have 3 things to overcome it... some have been in use or in progress for a while now
- You can just go for EC, error correction: you get enough gates and throw in error correction, where it is almost a voting system, so the "random" jumps are outvoted by the ones that are not being too random (rough sketch after this comment). We have been doing this for a long time now.
- Using other metals and chemicals for the semiconductor and such: different metals, different chemicals during the doping process. Some chemicals may react better with each other and be less likely to jump across and out of the gates. People have been working out which good, cheap chemicals work in the same systems (like chip-making tools) but let you shrink smaller.
- Just get bigger, make the CPU larger! This has its own issues because of spacing between CPU and RAM, and such. But to overcome this, we can start to use many CPUs scattered around the board. Then the next problem is scheduling, syncing and queuing, with one CPU waiting on the next to provide its info.
I dumbed these ideas wayyyyyy down, but all are being worked on, and we have actively used EC for over a decade now. And it is becoming more and more common to avoid errors in data.
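Here's the promised rough sketch of the voting idea (triple modular redundancy). Real memories typically use parity/SECDED-style codes rather than literal triplication, but the outvoting intuition is the same; the bit values below are just an example:

```python
# Compute the same bit redundantly and let the majority win, so a single
# random flip (from tunneling, noise, cosmic rays...) gets outvoted.
from collections import Counter

def majority_vote(bits):
    """Return the most common value among redundant copies of one bit."""
    return Counter(bits).most_common(1)[0][0]

copies = [1, 1, 0]            # one copy got flipped; the other two outvote it
print(majority_vote(copies))  # -> 1
```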
A bigger CPU has a lower yield and therefore will be more expensive. AMD has proven a chiplet design is the way to go.
@@alexstromberg7696 Yup, very true: the bigger the die, the higher the chance of a lower yield.
You can see this very clearly with ALD processing, where the process should be accurate down to the atom, but one defect and you can lose that whole chip.
But there are things being worked on to lower debris and contamination, like putting many processes into one single system. This keeps the wafer in an N2-rich space without it having to leave to head over to another process area.
Instead of going from ETCH to EPI to RTP and back to ETCH, it can stay in a single mainframe, under vacuum, with N2 purging in, and never leave, while getting the 50 or so cycles done in one system.
Just purging your FOUPs during transit isn't cutting it for high quality chips nowadays
Another field of research that is in progress is optics: essentially doing the things we do with electrons, but with photons
That 30 second segment did a better job explaining how transistors work than all of the college classes I took to get an electrical and computer engineering degree.
I love this explanation. I also love that Amy is writing all this about Amy. "What can we make Sam read, today?" Seems like her favorite game, and I am 100% here for it! P.S. Give Amy a raise. ;)
“It’s just so mid.” And the hair pops on. Loved it.
3:24 for anyone trying to learn about transistors, it's important to note that the depletion layer is not negatively charged (that would break charge conservation). The P side (the material with holes, Boron-doped silicon in this example) becomes negatively charged because the number of (negative) electrons in the area outnumbers the (positive) protons in the nuclei. The N side becomes positively charged because it now lacks electrons.
sorta? Assuming both sides are equally saturated in doping then the charge would be neutral as the electrons and the "holes" meet each other and co-destruct.
3:44 "We have 2 nm " - that is almost true. Actually since FinFET (28 nm) we are not using planar technologies, i.e. we have a 30 nm wide gate, but there is a 3D structure that has the electical properties of a 2nm planar technology, thus we call it a "2nm node".... So physics is biting us, but by other means.
Basically since about 28nm they've just been making crap up. It's not like anyone can call them on their BS.
Physics is biting us, but the marketing department is biting us harder.
@@aelolul it is the ghost of the Steve Jobs reality distortion field warping our existence.
@@aelolul Exactly, customers compare products based on simple measures (megabytes of cache, number of cores); they won't interpret the complexity of the structure, so companies just pretend size continues decreasing 😂
I just finished my EE degree and honestly, this was such a good explanation! if amy really did all this studying up in 4 days, I'm very impressed!
and to drop the cleanest explanation of a FET in 30 seconds I have ever seen
Yes, ChatGPT can be quite good. 😄
everyone does engineering and the jobs aint there
@@pietjan2650 keep lookin, you'll find something bud
I was laughing so hard at "allegedly" that I almost missed "actors" haha
0:49 logic is out of the gate
As an applied physics engineer, this is very well explained! Took me several hours to understand this just from studying old school books. Hats off to you!
Ok, as much as I appreciate this video as a good explanation of semiconductors, as a chemist I cried at 3:00 when you showed each bond being only one valence electron. Each silicon has its own 4 valence electrons, 8 electrons after bonding to its neighbors and sharing. Basically you're missing half the electrons.
"a lucky guess that got a lot more publicity than it deserved."
-- Gordon Moore - 2005
The law still holds up pretty well - it's still exponential and still very close to 2x every 1.8 years (or more or less) if you plot from the original statement to now.
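For anyone who wants to check, here's a quick ballpark using two rough public data points (Intel 4004 in 1971 at ~2,300 transistors; recent flagship chips around 8×10^10). The exact cadence you get depends on which chips you pick, so treat it as a sketch:

```python
# Sanity-check the "roughly 2x every ~2 years" claim from two data points.
import math

t0, n0 = 1971, 2.3e3    # Intel 4004, ~2,300 transistors
t1, n1 = 2023, 8.0e10   # a recent flagship chip, ballpark
doublings = math.log2(n1 / n0)
print(f"{doublings:.1f} doublings in {t1 - t0} years "
      f"-> one doubling every {(t1 - t0) / doublings:.2f} years")
# Prints roughly 25 doublings -> about one every ~2.1 years.
```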
It is a guess that became an industry target. So it is a self-fulfilling prophecy.
@@1pcfred Not at all. Ample competition to push more if others were mired at some level. Though there is some "drag" on that due to capital investment in the wafer making apparatus.
@@AlanTheBeast100 Spot on.
@@AlanTheBeast100 I do not think you fully appreciate the challenge being at the cutting edge of the industry presents. What would be the benefit to pushing more anyways? If you're a little faster than everyone else that's all it takes to have a competitive advantage. If you're even in the ballpark you're in the game. At one time there were a lot of players. Today we're down to 3. AMD used to make their own chips. They don't anymore. They bowed out at 40 nm. They simply lacked the resources to go on. The last one we lost was Global Foundries. They called it at 16 nm. Now it looks like Intel themselves are teetering on the edge. They invented the monolithic integrated circuit too. Making high end chips is a tough racket. The most difficult thing our species has ever done.
@@1pcfred My comment isn't about what can be done going forward, only that up to recent data, it seems the 'law' more or less still holds despite some click-bait.
"Pushing more"? Always do that - but it doesn't have to be in the realm of getting closer to 1 nm (if that's even possible), but instead increasing core counts, specialized I/O, graphics and memory sizes to achieve higher processing bandwidth.
Look at Apple's M1 Ultra as an example of where things are going.
"allegedly got those actors to the moon" insert Buzz Aldrin Arthur meme.
6:09 I love how whenever Sam says a newfangled Gen Z word for the first time, he pauses afterwards as if he's looking around for approval
This video was way more technical and well researched than I expected from HAI and I love it. Well done Amy
That does it boys, we going to the moon using my old xbox 360 guidance system 😂
In space you have what's called the shotgun problem. You see, up there, away from the protection of the Earth's magnetosphere, there are a lot of cosmic rays flying around. When one of those high-energy particles goes through a digital gate, bad things can happen. What the impact is depends on the size of the gate. Imagine shooting a billboard and a postage stamp with a shotgun. On the billboard, you're still going to be able to read whatever was on it. The postage stamp, not so much. Little gates really take a beating from the shotgun problem. So we use special integrated circuits in space. They're about on par with the old 486 CPUs. You need hardware that clunky to survive the hits.
I'd say "Amy for Jet Lagged" but I don't think the channel could survive without her
We aren’t approaching the limit of computing power, but the limit of our current transistor tech.
0:25 - Minor correction to the common understanding of Moore's Law: He didn't predict "the number" would double every year, he predicted the "price per transistor" would halve, _either_ due to the doubling of transistors per area (thereby making the same chip able to hold twice as many for the same cost) or due to transistors just becoming cheaper through improvements in the manufacturing process. (Or any combination thereof.)
People were decrying Intel's stall of process improvements (the long delay between 14nm and 10nm) as "Moore's Law breaking down!" but it wasn't - Intel was decreasing the cost per transistor of 14nm during that time. By making the manufacturing significantly more reliable, so fewer wasted chips, making the cost per good chip less.
That was unbelievably accurate, especially for someone with no background in the subject matter. Kudos to Amy!
Moore's Law is only dead if you're looking at traditional silicon transistors. Graphene changes everything, and could potentially cause tech to leap ahead in efficiency by unimaginable amounts. We've been attempting to apply graphene to electronics production for the last three decades, so it has been hard to get a grasp on (the hardest part is guiding the path of the electricity on a sheet of material that is only one atom in thickness), but we are making strides.
I'm just utterly impressed how someone can explain such a complex topic in such a short amount of time in a way everybody can understand.
huge respect!
4:50 that's literally the first time I've seen quantum tunneling correctly explained in a general video not specifically about quantum physics.
14:23 that's an old tube train I haven't seen in over 10 years! Nice blast from the past! Those were built in the 1960s. Used to take them on my way to school in the early 2000s.
Today's Fact: In 2021, researchers discovered a new type of aurora in the skies above Earth, which is caused by high-speed plasma flows.
Old news
Facterino? What are you doing here
Its been so damn long since i saw a kripperino in a youtube comment
5:54 egad, the first occurrence of "dweeb" I've heard in ages.
Great stuff though I wish he talked about parallelism. Instead of squeezing more transistors we simply use more processors simultaneously. All modern computing including our phones makes use of this. It's pretty amazing and more relevant than quantum computing for at least the near future.
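As a toy illustration of that (nothing to do with the video's own examples), here's the same piece of work split across a few worker processes; the worker count and problem size are arbitrary assumptions:

```python
# Same answer, but with ~4 cores busy instead of one: the essence of
# getting more throughput from parallelism rather than faster clocks.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    chunks = [(i * n // workers, (i + 1) * n // workers) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(i * i for i in range(n)))  # True: same result either way
```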
How about if the same computing power can be produced at half the cost? Would that mean Moore's law lives on?
I can't wait to see the graphs at 4:08 and 4:56 in a future video about HAI's blunders.
It's anything else BUT the energy.
Still quite good explanation though.
3:15 - What you’re saying here about the depletion layer and such is really more about Bipolar Junction Transistors, and especially silicon diodes.
The transistors used in "chips" are Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs). They work very differently: you can think of them as a voltage-controlled resistor. The electrical resistance between the pins called "source" and "drain" is controlled by the voltage on the "gate" terminal.
In digital circuits, that resistance is pretty much either “zero or infinity” (loosely speaking). In other words, it’s an electronic on/off switch.
3:52 - It’s debatable whether it could be called “Quantum” Tunneling, but Tunneling, yes.
This is the best depiction of both how transistors work and how quantum tunneling works that I've ever seen.
(But quantum computing isn't the future of general purpose computing, it's only for solving a very limited set of problems.)
You're probably right about quantum computing not being generally useful, but that's also how conventional computers were seen when they were being invented.
@@michaelmicek True, but it's quite a bit different situation with quantum computers. The field of quantum computing is rapidly maturing and the scientific community started out optimistic about general applications only to have those expectations slowly dashed by the math. I think that most computer scientists and physicists outside of companies developing quantum computers have reached a consensus, but it's still a hotly debated topic. I personally think that the solution to the tunneling problem and scaling computing power lies elsewhere.
1:25 is complete BS. The AGC did not have a floating point unit, especially not one that conformed to the IEEE 754 floating point standard, so how would one even calculate that number?
I just wanna say this was actually an incredible video. It managed to succinctly summarize both my semiconductors class and my various quantum mechanics classes in a fairly straightforward way. Good job!
I propose a new law: Amy's law, which states that any HAI video that involves Amy is a guaranteed banger
I'm doing telecom engineering with an exam on MOSFETs, transistors and semiconductors on Thursday, and that explanation was golden
1:20 we just gonna ignore the fact that he just blatantly called the moon landing a hoax
I was looking for this comment, wtf
He sounds so mad abt it too
I hope that was a joke
Can't wait till Quantum computers jump onto the scene!
What's your use case?
@@AlanTheBeast100 Math.
Wdym? This is obviously our next step to be able to run Crysis!
Data security go brrr.
@@DoubsGaming More specifically?
An HAI video I know something about! I'm a graduate student researching Computer Architecture, and I wanted to both commend your team for the high-quality explanation of what is going on. Moore's law is dead, and has been, but we find better and better ways to use the tools provided to us.
I know "keep improving now that we can't do this" was a bit hand-waved, so I wanted to just throw in a bit of what that looks like. One thing we do to improve chips nowadays is all about better organization and use of the space we do have. Only a small portion of the chip is really used each cycle (cycle being essentially a a single high-then-low voltage of the chip), and finding how to make our chip space more efficient (without melting them) is going to be the way forward. We can make different parts of the chip run simultaneously to make progress on multiple instructions, and also dynamically make progress on instructions out-of-order if we find we may get performance out of it. In the end, parallelism is key, and making progress on multiple instructions at the same time will be the way toward the future for our industry.
We've shifted from faster clock speeds to being able to do much more per clock cycle. Some ways we've done this are the following:
Better prediction - When waiting for information, we can 'guess' what we will get back. If we are good at guessing, and have a way to fix mistakes, we can hide the wait time for that request for information and get an improvement in performance.
Near-memory computing - There is research into doing computation inside of memory, which means we don't have to wait for that computation to get to the processor of the computer, and certain types of computations can be done without the need to transfer information. Things like matrix addition have already been shown to be doable in memory.
Accelerators - We consider having dedicated hardware, such as a GPU, that is good at one type of computation that the machine may be doing a lot of (like rendering an HAI video).
Improving the structures we have - By making structures more effective or energy efficient, we can either reduce the number of resources used for the structure (freeing up precious space and resources for other structures), or by offering direct speedup if the structure can benefit from additional resources.
There is a lot more than that going on in our field, but these are some simple ways we go about making improvements beyond "shrink wire to go fast".
Amy did a fantastic job explaining how transistors work. It’s one of the best explanations I’ve ever heard!
at 4:25 but where did the electron go?
My idea goes like this: what if it's not popping but phasing or "falling" into a 4d hole and then "jumping" back into 3d space on the other side of the barrier?
That's probably the best description of quantum tunnelling written by a non-physicist I've ever heard, bravo. Amy got it as correct as is possible without spending three years asking silly questions like "But why?" and "But how?" to several physics professors.
Amy for jet lag when?
I'm a physicist and this is the best introductory explanation of quantum tunneling I've ever encountered. Amazing job Amy
They've been saying Moore's law is about to die for twenty years. They always have a perfectly plausible reason chips are reaching their highest possible level of complexity. They're always wrong.
well they aren't wrong as we are reaching the physical limit of shrinking transistors. As a result Moore's law has noticably slowed down.
that's not to say progress is dead. But we're needing to look for new ways to improve computers and it's taking more time to do it.
4:12 This bit of stock footage was recorded in southeast Prague, Czech Republic.
As an electronics engineering student who just did a course on semiconductors, there are a billion errors in this 6-minute video I feel obliged to point out:
1) Computers use MOSFETs. However, the transistor described was a BJT. (I will not explain the MOSFET; just know it's completely different. The best part is that MOSFETs are easier to understand, explain and use, in my opinion, than BJTs.)
2) In BJTs, one side must be considerably more heavily doped than the other: the Emitter (because it emits electrons). The other side is called the Collector (because it collects those electrons).
For future reference, I'll assume the Emitter is on the left, and the Collector on the right.
3) The size of the Boron layer is usually much smaller than the size of the Phosphorus layer in a BJT.
4) Although more transistors help, that is not the only reason the computer is faster. The smaller the channel, the faster the transistor (in MOSFETs). This means that you can increase the frequency of your computer and have more ticks per second, which translates to more operations per second.
5) BJT transistors don't work with electric fields (that's for the Field Effect Transistors, MOSFETs). They work by injecting a small current into the Base (the middle). This essentially allows more electrons than would be expected to pass the electrical barrier. This causes an excess of electrons in the base, which normally should recombine with the holes and disappear (that's how diodes work). However, the size of the base is so small that the electrons don't completely disappear, and they reach the other side by diffusion (there are more electrons near E than C, so they tend to move towards C). Here, they are actually favored by the depletion zone barrier to go towards the collector. Although electrons won't go from E->B and B
I did not know 9 was the same as 1 billion
(Sorry 😅)
So many nights spent studying Electronic Devices, and this confuses BJTs and MOSFETs 😅😅😅
I really hope he understands how hard it would be to learn all this, and explain it so well, in such a short amount of time. This is easily a semester-long course
Poor amy
There is more to the power of the computer than the transistor size. Classical computers will still get much faster every year for a long time
We got around the speed issue by adding multiple processors or cores to a processor. That sped up the computer by sending different processes to different cores like getting a job done faster through teamwork.
#FreeAmy
I think Amy is fake. Sam just forgot his meds
Amy, Jerry and Simon's minions in his basement, whose names I forget, are some of the biggest unsung heroes of YouTube channels
6:26 you’re welcome
Amy is such a great addition to the team, I love how Sam keeps messing with her XD
1:27 That is just wrong. FLOPS measure floating point throughput, and while I am not an expert on the Apollo guidance computer, I can say with certainty that the NES could not do 7 million floating point operations per second. Its CPU ran at about 1.7 MHz and didn't even support floating point numbers in hardware. Emulating them would take multiple cycles per operation. At best, I'd estimate that it can do 100 to 200 kFLOPS.
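Rough sanity check of that estimate; the cycles-per-operation figures below are my own assumptions, not measured numbers:

```python
# With no FPU, every float operation on a 6502-class CPU is emulated in
# software, costing many cycles per op.
clock_hz = 1.79e6                      # NES CPU clock, ~1.79 MHz
for cycles_per_op in (10, 100, 1000):  # wildly optimistic -> realistic -> heavy ops
    print(f"{cycles_per_op:>4} cycles/op -> {clock_hz / cycles_per_op:>9,.0f} FLOPS")
# Even the wildly optimistic 10-cycle case tops out under ~180 kFLOPS,
# a far cry from the 7 million FLOPS in the chart.
```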
If there's a way to change for the better it must be *AMZK25* . I had lost all hope with the decisions our big guys make which just keeps their power and imbalance up. Maybe at some point basic people like me and you have to take over, right?
Nice bot, like botted too. Lotta effort went into that.
@@Zlysiumright? 1.4K likes with 2 comments 😆
basic people like you and I* damn bots learn some grammar
Bro spent money on this. Imagine. I have my own AMZK(25): Aksahin, Munch Zis Kock (25 inches)
Actors to the moon huh .... instant bye
It's a joke...
whoosh
As an Electrical Engineer: this has been said for more than 10 years. But although it is true that transistors can hardly get any smaller, we still manage every year to cram more and more transistors into a space, keeping Moore's law alive. We can be proud of that.
It's like how we've reached the limit of data density on hard drives because we can't magnetise sections of them any smaller/closer, so now they're going back to using tape for mass storage, because they can just make the tape longer to increase capacity
(HDDs are still used if the data needs to be accessed frequently bc they’re way faster but if there’s a shit ton of data that just needs to be stored and only gets used once in a blue moon, tapes the way to go)
That explanation of MOSFETs was pretty good considering it was written by, and aimed towards, people who have probably never even heard of the term before.
Moore never actually dubbed his observation a law. It was originally just something he put in a pamphlet for prospective buyers when he was in charge of marketing at Fairchild. It wasn't until decades later that someone decided to start calling it a law, and the name stuck.
Lol, I remember debating this almost 20 years ago when we were just on the brink of the theoretical limit of computing power. We're not even close to the theoretical limit. We've barely scratched the surface of what's possible. If, however, you are referring to the limits of current architectures, okay, then we're close to the limit of those architectures.
A lot of comments have already said this, but you gave a much better explanation of what the video actually meant.
Working with Sam sounds like that one dream job where
after getting hired, the job becomes one with *you*
(and not the other way around)
If anyone in quantum physics knows please feel free to reply, but isn’t quantum tunneling like pushing off a wall in your pool? You eventually stop because water slows you, but if the pool is short enough you will reach the other wall without stopping. Is this a good analogy?
Transistors are nowhere near the scales at which quantum tunnelling would be an issue. There is so much ignorance out there, it is astounding. And worse, it is being repeated by people who should know better. Moore's law is not about the size of transistors but about the doubling of the number of transistors on an integrated circuit.
I'm so engrossed by the excellence of Sam's narration that even the sponsor is entertainment!
Where'd you pull the "7 Million FLOPS" figure on the NES from?
1:32 Wow. I never realized just how UNDERpowered the iPhone 4 was. Beaten out by a decent console from nearly a decade earlier.
In reality, Moore's Law was observed to affect the lowest price chip process. The transistor count on these was the doubling he described.
Most people are ignorant of that part.
We are not reaching the limit, we're reaching A limit where we start to tap into the secrets of the EM field itself, the fabric of the whole universe.
Just want to add that, unless I'm mistaken from learning more about this in a physics class recently, the Heisenberg uncertainty principle just governs the relationship between speed (I think) and location of subatomic particles. It doesn't say they're a wave so you can't know any of it, it just means that as you narrow the probability of one, the other becomes less knowable. You can't have both at once.
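For reference, the textbook statement is in terms of position and momentum (momentum rather than speed, though for a fixed mass the two track each other):

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```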
My fav part of semiconductor engineering is that silicon plates with boron/phosphorus atoms added in are called "drugged semiconductors".
Also, you only need an extra atom for every 10^3 silicon atoms to increase the conductivity by a LOT