And here I thought this was going to be some nonsense about how accurate it's been and how it will never end. Nice that you actually looked up the info first. Nicely done.
Listening to this almost makes me wonder if Moore's Law was more of a self-fulfilling prophecy: something that motivated people to push harder for technological advancement, which ended up making it come true.
I appreciate your take on what you called technological nihilism, that improving the speed (among other things) of electronics is a good and necessary thing. Even if the average consumer doesn't see it, the track record of these developments has indeed transformed the lives of many humans. Seemingly, for the better.
Good info. However, I just want to point out that "DRAM" should be pronounced "DEE-RAM", like SRAM is "S-RAM". The same logic should apply when you pronounce the word.
I would never argue that we don't need improvements in compute performance, but you can make the related statement that improvements in computing power (along with a seemingly never-ending thirst for more SW developers) have led to less efficient SW being developed, which fails to take adequate advantage of compute performance.
True. But this lack of code efficiency also comes with many advantages like ease of writing, prototyping, debugging, reviewing, correcting, improving, porting or even installing programs. All of the heavy lift is made by a few software bricks now (compilers, interpreters, OS, HALs, APIs, game engines, Web browsers). Even in embedded, it becomes way more practical to restrict ASM or RTOS usage only when it's imperatively required.
This claim is sort of absurd on its face. If it were more profitable to produce more efficient software, then that's what companies would make. However, the increasing complexity of business domains, infrastructure, and the sheer number of different platforms a piece of code must be compatible with makes it extremely inefficient (in terms of development costs) to attempt to squeeze every last bit of performance from a chip by writing code in low-level languages. Simply put, there's nothing stopping you from putting out highly optimized software today. But you would simply get out-competed unless you were working in a very specific domain or platform. So it's not that powerful hardware leads to less efficient software, but that less efficient software is usually more competitively produced and priced. Hardware performance simply dictates the minimum point at which software is not usable due to its inefficiency. For most applications, "adequate" advantage of compute performance IS the ability to produce less efficient software that is still usable, because this means you can produce MORE software or tackle problems that are more complex. If a browser already opens a webpage in less than 2 seconds, nobody realistically needs it to be faster. It (the browser or the webpage) just needs better features, bugfixes or increased stability. Or maybe just less costly maintenance.
@@son_guhun It still really depends on the application. Sure, Linux got rid of its old ASM pieces to be plain C (and now Rust), but in the meantime Android mostly got rid of the Java runtime to compile software during install like BSDs do, despite the vast improvement of ARM SoCs. Similarly, developing production software for microcontrollers is not done with extremely inefficient/portable code like MicroPython or Arduino, and sometimes ASM is still used for critical timing functions. Same thing with desktop applications: developing small tools and games with high-level languages and APIs can be done without taxing too much of the increasing computing resources (even on laptops and embedded), but it's never used for developing AAA games or production applications that want to use every resource available to speed up execution.
Excellent video, Jon! Among other things it explains why I was never sure what Moore's Law predicted. Was it a doubling every year, every 2 years, every 18 months? The thing itself, Moore's Law, has been redefined to fit the data: the doubling over a span of time.
I remember in the late 70s in Manila we were making a K&S 478 add-on that turned that manual wire bonder into one controlled by a microprocessor. Zilog, AMD, Intel were our customers... what an amazing time... our computer had no monitor, used a trackpad with a billiard ball in it, and had 12 kb of RAM!
Another great video Jon! The last few minutes were especially interesting for me - I'll read that paper. I'd like to know what other industries rely on increasing compute density. I think that mobile phones are an example, the industry plans around people needing to buy new phones to keep up with the latest software. But if phones stop getting more powerful, then that industry needs to rethink its financial plans. I'm also guessing that AI (especially the training side) will want more and more compute going forwards.
A few years ago I saw a Philosophy Tube video, in which for the first time I heard Moore's Law referred to as a marketing term, as opposed to a venerable guideline for technological progress. This video illustrates that label wasn't just left-wing cynicism, but kind of accurate -- it demonstrably instilled a confidence in progress that drove sales.
Yes. The shrinking of a transistor's area by 50%: 1) allows twice as many transistors on a chip with the same area; 2) the same area implies basically the same cost, for twice as many transistors (far from costing twice as much); 3) a transistor with half the area consumes half the power, so with twice as many transistors at half the power each, the total power consumption stays basically unchanged. Neither the power nor the cost doubles when the number of transistors doubles in the same unit of area. This scaling phenomenon is the really important thing. If transistors stopped shrinking, compute that needs twice as many transistors would start costing twice as much and using twice the power. Do this for a few generations and you're paying 8x as much, consuming 8x the power, in 8x the area... that will make you appreciate the transistor shrinkage advances we've had in the past.
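The arithmetic above can be sketched as a toy model (everything normalized to 1.0 at generation zero; the clean halving of area and power per transistor is the idealized scaling assumption the comment describes, not measured data):

```python
# Toy model of why the shrink matters (illustrative numbers only).
# Each generation doubles the transistor count; with a shrink, area and
# power per transistor halve, so total chip cost and power stay flat.

def chip_after(generations, shrinking):
    transistors, cost, power = 1.0, 1.0, 1.0  # normalized to generation 0
    for _ in range(generations):
        transistors *= 2
        if not shrinking:
            cost *= 2   # twice the area at the same cost per area
            power *= 2  # twice as many transistors at the same power each
    return transistors, cost, power

print(chip_after(3, shrinking=True))   # 8x transistors at 1x cost, 1x power
print(chip_after(3, shrinking=False))  # 8x transistors at 8x cost, 8x power
```

Three generations without shrinking gives exactly the 8x cost/power/area blow-up the comment describes.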
There ought to have been a discussion of Dennard scaling, which was a key driver of rapid processor improvement: node shrinks gave not only smaller and cheaper but also faster transistors at a constant energy cost, meaning there weren't the heat and power wall issues which later halted frequency scaling. I was a bit disappointed that this channel failed to mention that, as most people focus on the number of transistors, when multi-core is a response to physical limitations on uni-processor performance.
Wow, incredible history. I remember my first computer, a VIC-20, then upgrading to the C-64, then the 8086. The first computer I programmed was with Hollerith cards on a Univac that used magnetic ring memory and large magnetic tape drums. It was a simple employee hours/wages program. So many things have changed since then. My cell phone has more computing power than the biggest computer that our local college had at that time. Amazing!
So basically Moore's Law was a self-fulfilling prophecy. The industry followed it not because of some universal physical law, but because everyone in the industry tried their best to follow it, for various reasons.
Awesome video, Jon. I worked on Intel's 65nm node - you got most of the products of the time (Cedar Mill, Yonah, Tukwila). Great summary of the main drivers keeping Moore's Law alive (advancements in design and lithography). From 2000-2010, Intel's biggest worry was being categorized as a monopoly and broken up. Since then, Intel lost its once-massive competitive advantage and has been surpassed by TSMC and Samsung. The company was a juggernaut with Moore, but after his retirement, hubris crept into the company culture and mismanagement became the norm, IMO.
Moore's Law, like every exponential in nature, is bounded by physical optimization limits and more accurately follows a sigmoidal curve: the technology hits an inflection point, gives only diminishing returns, and the growth flattens out. The only way to advance is to switch to a new technology. Adding cores, new gate geometries, and 3D stacking have helped in many areas, but we'll likely have to switch from silicon MOSFETs to gallium nitride or other chemistries to get any sort of frequency improvement at this point.
If you take a technology that's developing that rapidly and is that early in its lifecycle, give me a ten-year roadmap into the future, and at year 10 you are actually at what you said would be year 9 - that's amazingly accurate. It wasn't perfectly accurate, but it's incredibly rare to see anything close to that, as far as I can see.
Since "Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years," all that is required to keep it valid forever is to make bigger ICs.
There's another wave coming in semiconductor manufacturing - the maturing of the industry. Optical scaling has a physical limit and wafer size is not going to go beyond 300mm. What we'll see is all chips becoming much cheaper, particularly complex ones as industry laggards catch up. I think the best is yet to come.
With GPT-4 this week and how much programmers are already saying it's helping them code, I can't help but feel we are just at the start of another massive burst of exponential growth. Soon we will be able to code 10x more things, 10x faster, for a fraction of the cost. That alone will be a massive boost to efficiency. As AI models grow it's only going to get better. Add to that how quickly quantum computing has grown these last few years. Sure, it's an entirely different field, but we have no idea what quantum computers could be capable of 20 years from now, because we haven't had the tools to start playing with them until yesterday.
It's hard to think of a single person who affected more human lives than Gordon Moore. RIP Gordon, thank you. I hope Pat doesn't run your company into the ground completely.
Yeah. Now think about the inventor, or one of the godfathers, of AI: Geoffrey Hinton. There is a CBS morning interview that is super interesting. And... I am not able to predict how much life will change in the next 10, 20 years.
I love the little bits of humor, wit and charm you add to each video, "...technically correct, the best kind of correct (from Futurama)", and that "Whomp Whomp". We see you, @Asianometry we see you. 😘
I still need more compute for my gaming rig. My 380 can still just barely drive current-gen VR titles; I will need several orders of magnitude more compute before even being able to drive something currently high-end like an XR3 at native resolution. If I also want high refresh rates (at least 144 Hz, but preferably in the 200s) without reprojection, plus raycasting for improved lighting, then we are still many, many orders of magnitude away from what is needed.
Appreciate the deep respect for Moore! My only comment is that the node cadence didn't really end in 2006 for Intel; they executed pretty well until about 2014-2015.
I disagree that regular people do not really care about it anymore. Software expects systems to advance. Take the web alone: try using the web on a 10-year-old device and you will rage like crazy at how slow everything is. Productivity goes up as well with more powerful units.
@@WaterZer0 Clearly you don't know how programming works then. The same software these days is way more efficient than old software doing the same task on the same machine. Software, like cars, advances and needs more supporting functions to achieve more efficient and more advanced processes. Badly optimized software these days is, in the majority of cases, not the fault of the programmers but of leadership, who push unrealistic time frames and cost constraints.
I absolutely adore your conclusion. I hate when people dismiss everything that technology could be in favour of what we already have. Recently at a party I heard an engineer at Bosch say that most things are already invented and there is so little innovation yet to come. Hearing that from an actual engineer, I felt disgusted tbh.
1978, the Motorola 6800. The applications exploded. The rocket was just launched. I loved all that was CMOS, like the RCA 4-bit. Now we are at around 2000 MIPS; the IBM System 360 was about 16.6 MIPS in 1970. Self-heating now increases with operating temperature due to the higher leakage currents of smaller geometries, around 90 nm.
There are of course business applications for more computing power. But for the first time in history, consumer products today are not limited by computing power but by things like network speed. The market for more advanced chips is a lot smaller if you're developing for a few HPC clusters at companies or universities rather than smartphones, which perform just fine with a 7nm chip for the vast majority of people
As your narration and explanation are unbeatable, it would be great if you could make a video on the evolution of technology from 1400 AD (the printing press) till date, and how humans were worried about jobs being replaced. This would be very relevant to the current GPT / AI revolution.
It's just a trend line. You can make your own right now. Moore himself (or probably some analyst in the company) revisited the data. And if you're famous enough you can call it the Gelsinger-Moore law. Or the RayJacket-Moore law. RIP
I have been looking into Graphene FETs for computational use and other alternatives to the existing Si. One of the main challenges facing any Si alternative is the repeated "extending" of Moore's law. Would you cover post Silicon technologies in a video ? I find your videos quite fascinating and they help keep me informed while I look for roles in the semiconductor industry. Thank you.
The industry can't move to larger wafers because of cost, so it's moving to other technologies... I would bet more on using carbon in its different forms as an addition to the silicon base, not a change of everything. At least I haven't heard anything suggesting it's viable yet; I think it requires a lot more research to make it to market. Researchers are overselling their inventions ;)
From a chip user's standpoint, I remember, at the market-wide semi level, the transition to new FET designs in the first half of the 1980s. The improved MOSFETs made faster CMOS chips, so big chips moved from NMOS to lower-power CMOS. At the discrete level, the creation of power MOSFETs. I thought Intel's big win was IBM choosing its uP for the IBM PC, and the clones having to stick with it to stay compatible - even moving to the 386 when IBM balked at using it, fearing stepping on its mainframe market.
20:17 Multicore is not related to Moore's law, which is a density thing. Multicore is a result of frequency scaling limits, which is more of a Dennard scaling thing.
Moore's Law reminds me of Hubble's Law, which describes the expansion of the Universe. Edwin Hubble used only a few data points and boldly drew a line connecting them to prove that galaxies farther from us are receding faster. The values on Hubble's chart were inaccurate, as were his predictions based on it (that the Universe was 2 billion years old). Nevertheless, Hubble's Law inspired others to replicate his work and refine their measurements, which ultimately led to plausible theories and predictions.
The less popular corollary to Moore's Law is the one about how, every 3-4 years, Moore's Law will be redefined in order to argue that it's not really meaningless.
Personally I would look at "Kurzweil's law" - the cost of computing as calculations/s per $1000 - because it extends back to the era of punched-card machines and ignores technical specifics, which is interesting as a story of how it changed over time. About the ending, I would give a different argument: "you don't need more computing for today's software, but only for today's", because enabling something new sometimes takes a few orders of magnitude more compute before it becomes available (real-time ray tracing in games, really smart AI, etc.). There are limits to human perception in resolution, refresh rate or interaction, but quality and complexity have a long way to go. Think autonomous cars, robots (humanoid or not, but ones which can deliver stuff to your door) and many, many more. And the AI revolution is only at its beginning, and the von Neumann architecture will not be the best for it.
His estimate of 65,000 components in 10 years wasn't that far off from a (roughly) factor of two. It's a bit unfair to say the word roughly was "doing a lot of heavy lifting" when the 65k component figure comes from a factor of 2.05 instead of 2.
"In any system of energy, Control is what consumes energy the most. Time taken in stocking energy to build an energy system, adding to it the time taken in building the system will always be longer than the entire useful lifetime of the system. No energy store holds enough energy to extract an amount of energy equal to the total energy it stores. No system of energy can deliver sum useful energy in excess of the total energy put into constructing it. This universal truth applies to all systems. Energy, like time, flows from past to future".
We're still a few gens away from not needing more compute for gaming, imo. We have been trading more performance for equally more power too often the last few gens. I'd be glad to go back to the days of an actual 50W-TDP CPU and a sub-200W max-TDP GPU that can handle all the latest games at 100-140fps without needing over-the-top cooling solutions and noise. Not to mention AMD and Nvidia seem to be abandoning the low and mid range as much as they can right now. So still a way to go if you ask me.
We need to adapt and change the way we compute itself. We need to quit looking for unicorn farts with dark matter detectors and tackle problems like P vs. NP. We need to make our computing more effective and efficient. Right now we run A LOT of flawed, bloated programs. I think things like gallium arsenide will surely help, but we need to go back to basics. We need to change the geometry and architecture of processors; the advent of the ARM processor is a perfect example.
Go back to the basics? So basically we've reached the "local maximum" for the current technological paradigms in place for compute. Sounds like some fundamental revolution is needed at the underlying architectural level to continue anything like Moore's law level growth.
RIP, Gordon Moore.
And his law too
Whatever happened to Wang computer corporation? I remember my school having one. The ones with those little monitors and a large system unit.
This is how I found out
@@ggboss8502 only God's Law exists
an omen
"...a lucky guess that got a lot more publicity than it deserved."
- Gordon Moore.
Well. With a span between 1 year and 2 year it was a pretty wide guess. Also they did throttle there development during periods of the 80s and 90s. Basically cheating.
@@matsv201 It was a very good guess IMO.
@@matsv201 Their*
@@AlanTheBeast100 You not correcting him means you must be low IQ as well
@@Connection-Lost No, it means that I know that predictions based on the very small data set that Moore had at the beginning have stood the test of time well. It's never going to be perfect. Indeed, look at the graph in the video ( @21:30 ): a pretty straight line on a log graph? Hmm? You do _know_ what that means, right? I mean, you passed grade 9 math? Right?
So, a little algebra
1) 1971: about 2500 transistors
2) 2020: about 35B transistors
Based on that, if the number of transistors goes up 1.4x every year (on average), then you get close to the 2020 count.
Of course you can adjust the limits as you will and get slightly different numbers. For example if I make the end number 50B, then the factor would be: 1.41 every year.
So before accusing people of a low IQ, maybe you should do some basic high school math first and see if your own IQ is up to a level from which to throw cheap shots.
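The algebra above is easy to check; a quick sketch using the same rough endpoints from the comment (~2,500 transistors in 1971, ~35B or ~50B in 2020 - approximations, not exact counts):

```python
# Compound annual growth factor implied by two endpoint transistor counts.
def yearly_factor(start, end, years):
    return (end / start) ** (1 / years)

# 1971 (~2,500 transistors) to 2020 (~35B transistors) is 49 years.
print(round(yearly_factor(2_500, 35e9, 49), 2))  # ~1.4x per year

# Moving the 2020 endpoint to 50B barely changes the factor.
print(round(yearly_factor(2_500, 50e9, 49), 2))  # ~1.41x per year
```

Note that 1.4x per year compounds to roughly 2x every two years (1.4^2 ≈ 1.96), which is why the classic "doubling every two years" phrasing still fits the same data.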
A forecast of a decade, based on a few years' data, just to fit sales targets... is now more than legendary. Mythological, as it were. Amazing. Thanks Jon. Really interesting stuff.
Another issue is that Moore’s Law only applies to microchips. Some politicians act as if similar advances apply to solar cells or batteries.
Yet it barely applies to integrated circuits, as most of the improvement during the last decades has come from the design of the chips/systems and from making smarter/fine-tuned algorithms. And the raw computational power gain has mainly been used to ease software programming and portability. (e.g. cache levels, multicore, SMT, interposers, ASICs, DSPs, FPGAs, GPGPU, higher-level compiled programming languages, HALs, APIs, a metric ton of interpreted languages, and cloud/server/Web-based apps)
One of my pet peeves. There will be some spillover of 300mm wafers to solar cells, which run mostly on 200mm because the equipment is cheaper and often second-hand. The switchover to 300mm claimed a cost advantage based on the wafer size ratio. The same should apply to solar, but I am not a solar manufacturing guy. The same could apply to LEDs perhaps...
@@julioguardado My understanding is that solar cells are already at a high percentage of their theoretical performance (as are windmills), so only more efficient manufacture is possible.
@@tomhalla426 Same here. Polysilicon is king and its efficiency hasn't changed from around 20% iirc. They're still looking for that high efficiency material that can be manufactured cheaply. Don't see any breakthroughs there.
I REJECT THIS FORM OF TECH NIHILISM
21:00 - when mythology is so ingrained you create a global industry of adherents. Gordon Moore's passing really struck something in us, even in very adjacent semiconductor research/industries.
I recall back in the 1990s [US], when it came to PCs, Moore's Law was getting into the lexicon of the engineering professionals who used PCs in their work, as the rapidly growing power of succeeding Pentium chips was rendering 18-month-old PCs obsolete. It was an amazing era, with the processing power of PCs increasing on an annual basis.
The engineering company I worked at, as a means to unload the _obsolete_ Pentium PCs they had [less than two years old], sold them to employees for around $100 [US]. Yet those Pentium PCs had been purchased at around $2.5K each, when new, two years prior.
Camera lenses got better because designers could simply let the software run random variations of the lens over and over again to find a better design; the more compute power, the more variations you can try and the more likely it is to find a better design. Modern smartphone camera lenses are so ridiculously complex: each element basically has a radial wavy surface that makes no sense but somehow focuses the light just the right way at the end with minimal aberrations.
Ah the ole brute force approach.
Could you show us an example?
Not entirely true - a lot is materials science and a lot is microfabrication costs
Also important is no longer needing to produce a geometrically correct image at the focal plane. Digital image corrections allow you to trade distortion for sharpness
@@kylinblue Search "polynomial optics" and get ready for a headache.
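The "run random variations over and over" loop described above is, at its core, just greedy random search. A deliberately tiny sketch (the three coefficients, the target shape, and the Gaussian step size are all made up for illustration; real lens design uses far more sophisticated merit functions and optimizers):

```python
import random

random.seed(0)
target = [0.3, -1.2, 0.8]   # pretend "ideal" surface coefficients

def aberration(design):
    # Merit function: squared distance from the ideal shape.
    return sum((d - t) ** 2 for d, t in zip(design, target))

design = [0.0, 0.0, 0.0]
for _ in range(20_000):      # more compute = more variations tried
    trial = [d + random.gauss(0, 0.05) for d in design]
    if aberration(trial) < aberration(design):
        design = trial       # keep a variation only if it's better

print([round(d, 2) for d in design])  # ends up close to the target shape
```

More iterations (i.e. more compute) directly buy a better final design, which is the point the comment makes.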
Really interesting video. One small quibble, 2:19 - 2:49 "That 'roughly' is doing some serious heavy lifting". Not really in my opinion. As you point out, 50*2^10 is only 51 200, but the power of the exponent is so large, that you only need to increase the base by 2.5% to 2.05 to make up the difference. 50*2.05^10 = 65 540
Personally I think 2.05 falls comfortably within the neighbourhood of "roughly" 2
That was my thought
I came here to make that comment. Thanks for saving me the trouble!
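The numbers in the thread above check out; here is the arithmetic spelled out (using the 65,000-component extrapolation and the ~50-component starting point discussed in the comment):

```python
# Strict doubling from ~50 components over 10 years:
strict = 50 * 2 ** 10
print(strict)  # 51200, short of 65,000

# Base actually needed to reach 65,000 after 10 doublings:
needed_base = (65_000 / 50) ** (1 / 10)
print(round(needed_base, 3))  # ~2.048, only about 2.4% above 2

# And 2.05 as the base lands right on the figure quoted above:
print(round(50 * 2.05 ** 10))  # 65540
```

So a ~2.5% nudge to the base absorbs the whole discrepancy, which is why "roughly 2" is a fair description.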
Thanks for telling us Moore's Lore, not just the over-quoted "Law".
Hearing the story of Moore's Law, it is a self-fulfilling prophecy:
We wouldn't have the current level of tech without the ambitions of Moore's Law.
Well, I'll take advantage of the fact that creators generally see the initial comments to thank you a lot for the amazing content you've been putting out. I wish you only the best! Hugs from (just) another one of your Brazilian viewers ❤
Nice to see that I'm not the only Brazilian that knows this amazing channel kkkkkkkkkk
@@gordonfreeman9965 Now there are three of us.
It was nice to see as well that a Brazilian was involved in the last paper that he presented in the video
GG izi shrink brazil to a micro size so we can be more efficient also. Fit more brazils inside brazil
Good video on the history, and I think this is also one of my favourite quotes, from "The Intel Trinity":
"Gordon more than anyone else understood that it wasn’t really a law in
the sense that its fulfillment over the years was inevitable, but rather that it was a
unique cultural contract made between the semiconductor industry and the rest
of the world to double chip performance every couple of years and thus usher in
an era of continuous, rapid technological innovation and the life-changing
products that innovation produced.
That much was understood pretty quickly by everyone in the electronics
industry, and it wasn’t long before most tech companies were designing their
future products in anticipation of the future chip generations promised by
Moore’s Law. But what Gordon Moore understood before and better than anyone
was that his law was also an incredibly powerful business strategy. As long as
Intel made the law the heart of its business model, as long as it made the
predictions of the law its polestar, and as long as it never, ever let itself fall
behind the pace of the law, the company would be unstoppable. As Gordon
would have put it, Moore’s Law was like the speed of light. It was an upper
boundary. If you tried to exceed its pace, as Gene Amdahl did at Trilogy, your
wings would fall off. Conversely, if you fell off the law’s pace, you quickly drew
a swarm of competitors. But if you could stay in the groove, as Intel did for forty
years, you were uncatchable"
Excellent presentation. RIP Dr. Moore.
Was a nice surprise to see a picture of mine used as the thumbnail and another used part way through the video!
Also thanks for listing the source :D
Thanks!
The Broken Silicon episode you were on was 🔥!!!
rip GM. a real visionary & titan of semiconductors!
Humanity would be immeasurably better off without tech, or those who would enrich themselves from the death of humanity.
@@chrisbova9686 you mean extinction? Without tech, you are back to middle age/caveman
@@chrisbova9686 ironic u need to spread this wisdom using technology 😅
@@---------c5741 indeed. Smoke signals aren't dependable, but won't ruin the entire life experience.
@@chrisbova9686 Wouldn't you say the transistor is one of mankind's greatest inventions?
Over the past weekend, I've been thinking of Moore, the traitorous 8, the last AMD video that you had out recently, and the impact these men made on the industry. Thanks for this video (or tribute?)
We need Moore transistors….
Thank you for yet another educational video. The timing of its release is uncanny. RIP Gordon Moore indeed. A parallel technology trend which doesn't get as much attention is magnetic storage (disk drives). This has been on a similar trajectory as integrated circuits have been, and has been every bit as important to the development of technology. Data storage is now so plentiful and cheap that we don't even think about it. YouTube allows anyone to upload unlimited video content for the world to watch. That is thanks to the magnetic storage revolution. Maybe you could do a video on that topic at some point.
I did notice a big gap in the development of data storage. The capacity growth has slowed down dramatically.
Gordon Moore a true pioneer RIP.
And here I thought this was going to be some nonsense about how accurate it's been and how it will never end. Nice that you actually looked up the info first. Nicely done.
Always look forward to your videos. Well researched and in depth.
Listening to this, almost makes me wonder if Moore's Law was more of a self-fulfilling prophecy. Something that motivated people to push harder for technological advancement, which ended up making it come true.
Look at Ray Kurzweil predictions, they are more interesting than limited Moore's Law.
I appreciate your take on what you called technological nihilism, that improving the speed (among other things) of electronics is a good and necessary thing. Even if the average consumer doesn't see it, the track record of these developments has indeed transformed the lives of many humans. Seemingly, for the better.
Good info.
However I just want to point out that "DRAM" should be pronounced as "DEE-RAM"
Like SRAM is "S - RAM"
They should be the same logic when you pronounce the word
yes please, I cringe every time I hear "dram" instead of "Dee-ram" :-) Great video.
Whose prescription was that?
Same with people who say “oh-led” instead of “O-L-E-D”.
not the first time he does it, most likely he is intentionally saying it like this
don't know why though
This is probably one of my top 10 channels. Keep it going!
How do you pump out so much great content
I would never argue that we don't need improvements in compute performance, but you can make the related observation that improvements in computing power (along with a seemingly never-ending thirst for more SW developers) have led to less efficient SW that fails to take adequate advantage of compute performance.
True. But this lack of code efficiency also comes with many advantages like ease of writing, prototyping, debugging, reviewing, correcting, improving, porting or even installing programs. All of the heavy lifting is done by a few software bricks now (compilers, interpreters, OS, HALs, APIs, game engines, Web browsers).
Even in embedded, it becomes way more practical to restrict ASM or RTOS usage to cases where it's imperatively required.
This claim is sort of absurd on its face. If it were more profitable to produce more efficient software, then that's what companies would make. However, the increasing complexity of business domains, infrastructure and the sheer number of different platforms a piece of code must be compatible with makes it extremely inefficient (in terms of development costs) to attempt to squeeze every last bit of performance from a chip by writing code in low-level languages.
Simply put, there's nothing stopping you from putting out highly optimized software today. But you would simply get out-competed unless you were working on a very specific domain or platform. So it's not that powerful hardware leads to less efficient software, but that less efficient software is usually more competitively produced and priced. Hardware performance simply dictates the minimum point at which software is simply not usable due to its inefficiency. For most applications, "adequate" advantage of compute performance IS the ability to produce less efficient software that is still usable, because this means you can produce MORE software or tackle problems that are more complex.
If a browser already opens a webpage in less than 2 seconds, nobody realistically needs it to be faster. It (the browser or the webpage) just needs better features, bugfixes or increased stability. Or maybe just less costly maintenance.
@@son_guhun it still really depends on the application.
Sure, Linux got rid of its old ASM pieces to be plain C (and now Rust), but in the meantime Android mostly got rid of the Java runtime to compile software during install like BSDs do, despite the vast improvement of ARM SoCs.
Similarly, developing production software for microcontrollers is not done with extremely inefficient/portable code like MicroPython or Arduino. And sometimes ASM is still used for critical timing functions.
Same thing with desktop applications: developing small tools and games with high-level languages and APIs can be done without taxing too much of the increasing computing resources (even on laptops and embedded), but it's never used for developing AAA games or production applications that want to use every resource available to speed up execution.
Excellent video, Jon! Among other things it explains why I was never sure what Moore's Law predicted. Was it a doubling every year, every 2 years, every 18 months? The thing itself, Moore's Law, has been redefined to fit the data: the doubling over a span of time.
I fucking love how this guy does not separate jokes and information. You have to actually pay attention to distinguish.
I remember in the late 70s in Manila we were making a K&S 478 add-on that turned that manual wire bonder into one controlled by a microprocessor. Zilog, AMD, Intel were our customers... what an amazing time.... our computer had no monitor, used a trackpad with a billiard ball in it, and had 12 kb of ram!
Another great video Jon! The last few minutes were especially interesting for me - I'll read that paper. I'd like to know what other industries rely on increasing compute density. I think that mobile phones are an example, the industry plans around people needing to buy new phones to keep up with the latest software. But if phones stop getting more powerful, then that industry needs to rethink its financial plans. I'm also guessing that AI (especially the training side) will want more and more compute going forwards.
Many people are buying new clothes each season, so I think the phone industry will find a way ;)
A few years ago I saw a Philosophy Tube video, in which for the first time I heard Moore's Law referred to as a marketing term, as opposed to a venerable guideline for technological progress. This video illustrates that label wasn't just left-wing cynicism, but kind of accurate -- it demonstrably instilled a confidence in progress that drove sales.
Modern SSD have really revolutionized computing for the base user - boot in sub 2 minutes - after Windows updates can still often be sub 5 minutes.
TWO MINUTES?!
JESUS CHRIST
What are you doing?!
There's no way it should take more than 30 seconds *maximum* to load the OS.
Shrinking transistor technology also led to lower power consumption solutions, particularly valuable for battery powered devices
Yes. The shrinking of a transistor’s area by 50%:
1) Allows twice as many transistors on a chip with the same area.
2) The same area implies basically the same cost, for twice as many transistors (far from costing twice as much).
3) A transistor with 1/2 the area consumes 1/2 the power. With twice as many at 1/2 the power, the power consumption stays basically unchanged, for a chip with twice as many transistors.
Neither the power nor the cost double when the number of transistors double in the same unit of area. This scaling phenomenon is the real important thing.
If transistors stopped shrinking, compute that needs twice as many transistors will start costing twice as much and using twice the power. Do this for a few generations and costing 8x as much and consuming 8x as much power and 8x the area …. that will make you appreciate the transistor shrinkage advances we’ve had in the past.
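The scaling arithmetic in the points above can be sketched numerically (an idealized Dennard-style model with made-up units, just to illustrate; real process scaling is messier):

```python
# Idealized scaling sketch (illustrative units, not real process data):
# each generation halves transistor area and per-transistor power on a
# fixed-size, fixed-cost die.
die_area = 1_000.0          # arbitrary units, held constant
area_per_transistor = 1.0
power_per_transistor = 1.0

for gen in range(1, 4):
    area_per_transistor /= 2                 # point 1: area shrinks 50%
    power_per_transistor /= 2                # point 3: per-device power halves
    transistors = int(die_area / area_per_transistor)  # twice as many fit
    total_power = transistors * power_per_transistor   # stays constant
    print(gen, transistors, total_power)
```

Same die, same total power, and the transistor count doubles each pass: that constancy is exactly the "scaling phenomenon" the comment describes, and what you lose when shrinking stops.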
There ought to have been a discussion of Dennard scaling, which was a key driver of rapid processor improvement: node shrinks gave not only smaller and cheaper but also faster transistors at a constant energy cost, meaning there weren't the heat and power wall issues that later halted frequency scaling.
I was a bit disappointed that this channel failed to mention that, as most people focus on the number of transistors when multi-core is really a response to physical limitations on uniprocessor performance.
Wow, incredible history. I remember my first computer, a VIC 20, then upgrading to the C-64, then the 8086. The first computer I programmed was with Hollerith cards on a Univac that used magnetic ring memory and large drum magnetic tape. It was a simple employee hours/wages program. So many things have changed since then. My cell phone has more computing power than the biggest computer that our local college had at that time. Amazing!
Which Univac -1108? Yes those were the days of dropped card stacks...
Obviously a grandma, but where on earth did the notion come from that boomers can't use computers? 🤔🧐
Never thought I'd hear an overlap between all the dead space lore channels I follow and my semiconductor manufacturing interests
I was surprised too.
Brilliant, as always. The world needs more compute. Long live Moore's Law!
So basically Moore's Law was a self-fulfilling prophecy. The industry followed it not because of some universal physical law but because everyone in the industry tried their best to follow it, for various reasons.
Awesome video, Jon. I worked on Intel's 65nm node - you got most of the products of the time (Cedarmill, Yonah, Tukwila). Great summary on the main drivers keeping Moore's Law alive (advancements in design and lithography).
From 2000 - 2010, Intel's biggest worry was being categorized a monopoly and broken up. Since then, Intel lost its once-massive competitive advantage, and has been surpassed by TSMC and Samsung. The company was a juggernaut with Moore, but after his retirement, hubris crept into the company culture, and mismanagement became the norm, IMO.
Sick dude that's awesome we on 4nm now
@@razorbackroar Is anybody naming their node 4nm? Intel 4, TSMC N4, maybe only Samsung, but they're lying anyway so who cares.
@@m_sedziwoj tsmc on 2nm soon bruh
@@大砲はピュEven TSMC doesn't call it with "nm"...
Moore's Law, like every exponential in nature, is bounded by physical optimization limits and more accurately follows a sigmoidal curve: the technology hits an inflection point, returns diminish, and growth flattens out. The only way to advance is to switch to a new technology. Adding cores, gate geometry, and 3D stacking have helped in many areas, but we'll likely have to switch from silicon MOSFETs to gallium nitride or other chemistries to get any sort of frequency improvement at this point.
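The exponential-versus-sigmoid point in the comment above can be shown with a toy logistic curve (all parameters here are made up; only the shape matters):

```python
import math

# Toy comparison: an unbounded exponential vs. a logistic ("S") curve
# that starts at the same value with the same early growth rate, but is
# bounded by a carrying capacity. Illustrative parameters only.
def exponential(t, r=0.5):
    return math.exp(r * t)

def logistic(t, r=0.5, cap=100.0):
    # classic logistic function, starting at 1 and saturating at `cap`
    return cap / (1 + (cap - 1) * math.exp(-r * t))

for t in (0, 5, 10, 20):
    # the two track each other early on, then the logistic flattens out
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Early on the two curves are nearly indistinguishable, which is why an S-curve looks like "a law" for decades before it bends.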
Moore's Law will never truly die. :(
Moore's Law, or Moore's observation, is dead, simple as that. We don't get to rename and redefine it so it's still correct.
@@benc3825 AMD and intel future server cpus want to know your location, also gpu density
@@RonnieMcNutt666 Which one specifically? The Turin family, Venice, Emerald Rapids, Granite Rapids, Diamond Rapids, Sierra Forest and/or Clearwater Forest. :-P
It's been dead for a while now
@@benc3825 3090 to 4090 having nearly 3x the transistors in a smaller die
Right on. Thanks for sharing.
If you take a technology that's developing that rapidly and is that early in its lifecycle, give a ten-year roadmap into the future, and at year 10 you are actually at what you said would be year 9, that's amazingly accurate. It wasn't perfectly accurate, but it's incredibly rare to see anything close to that, as far as I can see.
That truck backing up really sells this video.
Since "Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years," all that is required to keep it valid forever is to make bigger ICs.
really interesting...quick question, I'd like to watch the video about Wang Labs referred to at 6:05 but am not finding it?
There's another wave coming in semiconductor manufacturing - the maturing of the industry. Optical scaling has a physical limit and wafer size is not going to go beyond 300mm. What we'll see is all chips becoming much cheaper, particularly complex ones as industry laggards catch up. I think the best is yet to come.
With GPT-4 this week and how much programmers are already saying it's helping them code, I can't help but feel we are just at the start of another massive burst of exponential growth. Soon we will be able to code 10x more things, 10x faster, for a fraction of the cost. That alone will be a massive boost to efficiency. As AI models grow it's only going to get better. Add to that how quickly quantum computing has been growing these last few years. Sure, it's an entirely different field, but we have no idea what quantum computers could be capable of 20 years from now because we haven't had the tools to start playing with them until yesterday.
It's hard to think of a single person who affected more human lives than Gordon Moore. RIP Gordon, thank you. I hope Pat doesn't run your company into the ground completely.
Yeah. Now think about the inventor, or one of the godfathers, of AI: Geoffrey Hinton. There is a CBS morning interview that is super interesting. And... I am not able to predict how much life will change in the next 10, 20 years.
When you fart, you change the life of more bacteria than there are humans on the planet. Never forget; your asshole is a planet.
Interesting video, and thanks for the content! I only found out today that Dr. Moore passed away on Friday... RIP legend...
Did not expect to see Dead Space in an Asianometry video😂😂👍🏻👍🏻
I love the little bits of humor, wit and charm you add to each video, "...technically correct, the best kind of correct (from Futurama)", and that "Whomp Whomp".
We see you, @Asianometry we see you. 😘
I’m so glad you addressed the elephant in the room right at the beginning.
I still need more compute for my gaming rig. My 380 can still just barely drive current-gen VR titles. I will need several orders of magnitude more compute before even being able to drive something currently high-end like an XR3 at native resolution. If I also want high refresh rates (at least 144 Hz, but preferably in the 200s) without reprojection, plus ray casting for improved lighting, then we are still many, many orders of magnitude away from what is needed.
Plot twist: it was a log curve
The mainboards at 11:05 must have a virus. What the hell are they doing with that soldering iron?
Appreciate the deep respect for Moore!
My only comment is that the node charge didn't really end in 2006 for Intel. They executed pretty well until about 2014-2015.
Amazing analytical history of transistors and chip design.
I am simple man, I see an interesting article cited, and I upvote.
The actors at 11:05 haven't actually soldered a goddamn thing in their entire lives.
I disagree that regular people don't really care about it anymore. Software expects systems to advance. Take the web alone: try using it on a 10-year-old device and you will rage like crazy at how slow everything is. Productivity goes up as well with more powerful units.
That's because programmers are less and less efficient.
@@WaterZer0 Clearly you don't know how programming works then. The same software these days is way more efficient than old software doing the same task on the same machines. Software, like cars, advances and needs more supporting functions to achieve more efficient and more advanced processes. Badly optimized things these days are not the fault of programmers in the majority of cases, but the fault of leadership, who push unrealistic time frames and cost constraints.
@@thepenguin11 capitalism bad? loud and clear
Great video. If you are running out of topics, then the Wintel and x86 story could be one.
FYI…NOAA is typically called “no-ah.”
Thank you for great documentation that Moore’s Law is passé.
FOR YOUR INFORMATION you don’t need to be a dick
I absolutely adore your conclusion, I hate when people say things which ignore everything that technology could be, in favour of what we already have.
Recently at a party I heard someone who is an engineer at Bosch saying that most things are already invented and that there is very little innovation yet to come. When I heard that from an actual engineer, I felt disgusted tbh
I don’t understand how someone could say that today
1978, the Motorola 6800. The applications exploded. The rocket was just launched.
I loved all that was CMOS, the RCA 4-bit; now we are at around 2000 MIPS. The IBM System/360 was about 16.6 MIPS in 1970.
The self-heating now increases with operating temperature due to the higher leakage currents of smaller geometries, below around 90 nm.
There are of course business applications for more computing power. But for the first time in history, consumer products today are not limited by computing power but by things like network speed. The market for more advanced chips is a lot smaller if you're developing for a few HPC clusters at companies or universities rather than smartphones, which perform just fine with a 7nm chip for the vast majority of people
Moore's Law should be rewritten as an Old English epic poem, say in the style of "Beowulf".
Moore's law is dead, long live Gordon Moore.
As your narration and explanation are unbeatable, it would be great if you could make a video on the evolution of technology from 1400 AD (the printing press) till date and how humans worried about jobs being replaced. This would be very relevant to the current GPT / AI revolution.
It's just a trend line. You can make your own right now. Moore himself (or probably some analyst in the company) revisited the data. And if you're famous enough you can call it the Gelsinger-Moore law. Or the RayJacket-Moore law. RIP
I can’t find your wang labs video! I want to watch it! 6:09
Insanely insightful! Thank you, Asianometry.
Moore really got to see quite the transformation in computing power over his life.
It can't go on forever...
I have been looking into Graphene FETs for computational use and other alternatives to the existing Si. One of the main challenges facing any Si alternative is the repeated "extending" of Moore's law. Would you cover post Silicon technologies in a video ? I find your videos quite fascinating and they help keep me informed while I look for roles in the semiconductor industry. Thank you.
The industry can't move to larger wafers because of cost, so it's moving to other technologies... I would bet more on using carbon in different forms as an addition to the silicon base, not a change of everything. At least I haven't heard anything suggesting it's viable; I think it requires a lot more research to make it to market. Researchers are overselling their inventions ;)
From a chip user's standpoint, I remember the market-wide transition at the semi level to new FET designs in the 1st half of the 1980s. These improved MOSFETs made faster CMOS chips, so big chips moved from NMOS to lower-power CMOS. At the discrete level, the creation of power MOSFETs.
I thought Intel's big win was IBM choosing its uP for its IBM PC, and the clones, having to stick to it, to stay compatible. Even moving to the 386, when IBM balked at using it, fearing stepping on its mainframe market.
RIP Doctor Moore. Your fingerprints are all over everything, forever.
20:17 Multicore is not related to Moore's law, which is a density thing. Multicore is a result of frequency scaling limits, which is more of a Dennard scaling thing.
In the midnight hour, he cried Moore Moore Moore. With a rebel yell, he cried Moore Moore Moore
I couldn't find the video on wang labs, is it still around?
Rest in Peace, Gordon Moore (March 24, 2023)
Moore's Law reminds me of Hubble's Law, which describes the expansion of the Universe. Edwin Hubble used only a few data points and boldly drew a line connecting them to prove that galaxies farther from us are receding faster. The values on Hubble's chart were inaccurate, as were his predictions based on it (that the Universe was 2 billion years old). Nevertheless, Hubble's Law inspired others to replicate his work and refine their measurements, which ultimately led to plausible theories and predictions.
And I love egg fried rice :)
The less popular corollary to Moore's law is the one about how Moore's law will be re-defined every 3-4 years in order to argue that it's not really meaningless.
Next time you come across the acronym NOAA you can pronounce it like the name Noah and everyone will still understand you, as always great video man
Oh "dram", you’re right. He pronounces DRAM as ‘draah-m’ instead of ‘dee-ram’. Little quirks in such great content.
I was not expecting a Dead Space reference bravo
Personally I would look at "Kurzweil's law", the cost of computing as calculations/s per $1000, because it extends back to the era of punched-card machines and ignores technical aspects, which makes it interesting as a story of how things changed with time.
About the ending, I would give a different argument: "you don't need more computing for today's software, but only for today," because to allow something new you sometimes need a few orders of magnitude more before it becomes available (real-time RT in games, really smart AI, etc.). There are limits to human perception in resolution, refresh rate, or interaction, but quality and complexity have a long way to go.
Likewise autonomous cars, robots (humanoid or not, but ones which can deliver stuff to your door) and many, many more. The AI revolution is only at its beginning, and the Von Neumann architecture will not be the best fit for it.
This was a dope video
Could you make a video about Nanoimprint lithography?
According to the Dead Space analogy, who's Isaac Clarke in rl?
His estimate of 65,000 components in 10 years wasn't that far off from a (roughly) factor of two. It's a bit unfair to say the word roughly was "doing a lot of heavy lifting" when the 65k component figure comes from a factor of 2.05 instead of 2.
No mention of the physical limits of transistors only a few atoms thick?
"In any system of energy, Control is what consumes energy the most.
Time taken in stocking energy to build an energy system, adding to it the time taken in building the system will always be longer than the entire useful lifetime of the system.
No energy store holds enough energy to extract an amount of energy equal to the total energy it stores.
No system of energy can deliver sum useful energy in excess of the total energy put into constructing it.
This universal truth applies to all systems.
Energy, like time, flows from past to future".
Your videos are too cool!!!!!!!!!!!!!!
We're still a few gens away from not needing more compute for gaming imo. We have been trading more performance for equally more power too often these last few gens. I'd be glad to go back to the days of an actual 50W-TDP CPU and a sub-200W max-TDP GPU that can handle all the latest games at 100-140fps without needing over-the-top cooling solutions and noise. Not to mention AMD and Nvidia seem to be abandoning the low and mid range as much as they can right now. So still a way to go if you ask me.
Why do the subtitles stop when you talk about the Patreon?
We need 4k game running at 120hz refresh rate. That is another 6 years away
We need to adapt and change the way we compute itself. We need to quit looking for unicorn farts with dark matter detectors and tackle problems like P vs. NP. We need to make our computing more effective and efficient. Right now we run A LOT of flawed, bloated programs. I think things like gallium arsenide will surely help, but we need to go back to the basics. We need to change the geometry and architecture of processors. The advent of the ARM processor is a perfect example.
Go back to the basics? So basically we've reached the "local maximum" for the current technological paradigms in place for compute. Sounds like some fundamental revolution is needed at the underlying architectural level to continue anything like Moore's law level growth.
@@buzzsaw838 there are many ways the current model of computing can change.