Is Moore's Law Finally Dead?

  • Published: 27 Sep 2024

Comments • 1.9K

  • @SabineHossenfelder
    @SabineHossenfelder  Год назад +2

    This video comes with a quiz to help you better remember its content! quizwithit.com/start_thequiz/1694145758807x361000584255219800

  • @funtechu
    @funtechu Год назад +1086

    A few comments from a chip designer.
    1) Regarding the transistor size limit, we are pretty close to the absolute physical limit. Although the minimum gate length equivalent figure (the X nm name that is used to name the process node) only refers to one dimension (and even that's not quite that simple), we are talking dimensions now in the high single digits, or low double digits of atoms.
    2) Regarding electron tunneling, this is already quite common in all the current modern process nodes. It shows up as a consistent amount of background leakage current; as long as it's balanced (which it typically is) it doesn't cause logical errors per se. However, it does increase the amount of energy that is simply turned into heat instead of performing any useful processing, so it does slightly cut into the power savings of going to a smaller node.
    3) One of the biggest things impacting Moore's law in the context of the transistors-per-chip interpretation is manufacturing precision, crystal defects, and other manufacturing defects. Silicon wafers (and others as well) have random defects on the surface. Typically, when a design is arrayed up on the surface, a handful of the chips will not turn on at later wafer test due to these defects, and are thus discarded. The ratio of good chips to total chips is referred to as the wafer yield. As long as the chips are small, a single defect may only impact yield a little, because the overall wafer has hundreds or thousands of possible chips and only a few hundred defects that could kill a chip. But as chips get larger, yield tends to go down because there are fewer chips per wafer, so each defect kills a larger fraction of them (a rough numerical sketch follows after this comment). There are some techniques like being able to turn off part of a chip (this is how you got those 3-core AMD chips, for example), but ultimately as chips get larger, the yield goes down, and thus they get more expensive to manufacture.
    4) As discussed in this video, what people really care about isn't transistor density, or even transistors per package. Rather, they care about computing performance for common tasks, and in particular for tasks that take a long time. By creating custom ASIC parts for new tasks that are compute intensive (ML cores, dedicated stream processors, etc), the performance can be increased so that the equivalent compute capability is many times better. This is one of the areas of improvement that has helped a lot with performance, indeed even outpacing process improvement. For example, dedicated SIMD cores, GPUs with lots of parallel stream processors, voice and audio codec coprocessors, and so on.
    Anyway, overall a great video on the topic as always!
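
A rough numerical sketch of the yield effect described in point 3 above, assuming a simple Poisson defect model with a made-up defect density (illustrative only, not real process data):

```python
import math

def die_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Poisson yield model: probability that a die has zero killer defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

defect_density = 0.1  # defects per cm^2 -- an assumed, illustrative number
for area in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"die area {area:4.1f} cm^2 -> expected yield {die_yield(defect_density, area):6.1%}")
```

Under this model the cost per good die grows roughly like area × exp(defect_density × area), which is one reason large monolithic dice get disproportionately expensive and chiplets become attractive.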

    • @dr.zoidberg8666
      @dr.zoidberg8666 Год назад +31

      Surely these improvements in chip architecture (idk if that's the right way to put it) also have their limits. I know it's not very planned-obsolescence of me, but what I'd really like is for manufacturers (& programmers) to focus on increasing the longevity of computers.
      It'd be nice to only need to buy a phone once every 10 or 20 years instead of once every 2 or 3 years.

    • @Pukkeh
      @Pukkeh Год назад +59

      Process nodes like "X nm" don't refer to an actual physical channel length, gate pitch or any other transistor dimension anymore. Unfortunately this video doesn't help clear up that common misconception. They are merely marketing labels loosely correlated with performance or transistor density. In particular, channel length (i.e. transistor size) shrinking essentially halted a while ago at ~20 nm, so a 5 nm transistor isn't nearly as small as "5 nm" would indicate.

    • @funtechu
      @funtechu Год назад +45

      @@Pukkeh Yes, that's why I used the term "minimum gate length equivalent" instead of saying minimum gate length. The actual minimum feature sizes are larger than the marketing minimum-gate-length-equivalent number, but they are still quite small.

    • @Pukkeh
      @Pukkeh Год назад +35

      @@funtechu I understand, I'm just clarifying for the benefit of anyone else who might be reading this. Most people outside the field think the process node name refers to some physical transistor feature size. That hasn't been the case for years.

    • @funtechu
      @funtechu Год назад +50

      @@dr.zoidberg8666 Absolutely, though those design limits are much harder to put your finger on. For example, Amdahl's law sets a limit on how much you can parallelize something, but even that can sometimes be worked around by choosing a completely different algorithm that manages to convert what was previously thought to be strictly sequential into some clever parallel implementation (a small numerical sketch of that bound follows after this comment).
      As for longevity there are two major aspects of that. For phones, the biggest thing that impacts longevity is battery life, which could be remedied by having battery replacements. Personally I keep most of my phones about 5 years, with typically one battery swap in the middle of that period. Most people though buy newer phones just because they want the new features of a newer phone, not because their older phone died.
      The physical process limit on longevity is primarily driven by electromigration, where eventually the physical connections in a chip wear down and disconnect. In mission critical chips there is a fair amount of effort put into ensuring redundancy in current paths to try to improve longevity and reliability, but the fact is that heat dissipation is one of the largest factors that impact this in practice. Keep your electronics cool, make sure they have adequate cooling, and they will run longer in general. Note that this is also more of an issue with newer process technologies because metal layer trace widths are typically much smaller than they have been with older process nodes, meaning that electromigration doesn't have to work as much to break a connection. So with higher density comes the tradeoff of slightly shorter lifespan.
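
A minimal numerical sketch of the Amdahl's law bound mentioned above; the parallel fractions and core counts are illustrative, not taken from any real workload:

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Upper bound on speedup when only part of a task can run in parallel."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_cores)

for p in (0.50, 0.90, 0.99):
    for n in (4, 16, 1024):
        print(f"parallel fraction {p:.0%}, {n:5d} cores -> at most {amdahl_speedup(p, n):6.1f}x faster")
```

Even with 1024 cores, a task that is only 50% parallelizable tops out at roughly a 2x speedup, no matter how many cores you add.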

  • @GraemePayne1967Marine
    @GraemePayne1967Marine Год назад +119

    I am old enough to remember the transition from vacuum tubes to transistors. At one time, transistors were large enough to see with the unaided eye and were even sold individually in local electronics shops. It has been very interesting to watch the increasing complexity of all things electronics ...

    • @softgnome
      @softgnome Год назад +27

      They are still sold individually, and higher-power applications require large transistors; you can't pump a lot of power through a nanometer-scale component.

    • @johndododoe1411
      @johndododoe1411 Год назад +14

      ​@@softgnomeYep, and many current chips require a standalone transistor nearby for some key job .

    • @DamnedSilly
      @DamnedSilly Год назад +4

      Ah, the days when 'Solid State' meant something.

    • @iRossco
      @iRossco Год назад

      @@softgnome high power transistors for what type of applications?

    • @iRossco
      @iRossco Год назад +12

      I'm 60... Vacuum tubes were huge. Yeah, I remember the TV repair man coming out to replace a blown tube in my parents' B&W telly; you could watch them glowing through the vents. These guys here talking in terms of "high single to low double digit atoms", fuck me dead! 🤪 Where to in the next 50 yrs?
      I was forced to learn to use a slide rule in grade 12 instead of our 'modern' calculators in '79.🤦‍♂️

  • @JorenVaes
    @JorenVaes Год назад +351

    An interesting take on Moore's law that I heard from Marcel Pelgrom (a famous name in the semiconductor world) during one of his talks was that, for a lot of fields in electronics, there was little 'true' research into novel techniques until Moore's law started failing us. Up to that point, the solution to everything was 'you have more transistors now, so just throw more transistors and complexity at the problem'. By the time you came up with a novel technique to improve performance on the old process, people who had just added more transistors to the old technique in newer process nodes had already caught up with or surpassed your performance gains. You see this now where the calibration processor in some cheap opamp might technically be more 'powerful' than the entire computer network that put men on the moon, just to compensate for the crappy performance of the opamp; even a 'dumb' automotive temperature sensor in a car might have more area dedicated to digital post-processing to improve lifetime, stability and resolution.
    It is now that we no longer get these new gains from Moore's law (both because scaling has slowed and because it is just getting so, so, so expensive to do anything that isn't leading-edge CPUs, GPUs and FPGAs in these nodes) that people are going back to the drawing board and coming up with really cool stuff to get more out of the same process node.
    I can partially confirm this from my own experience in research on high-speed radio circuits. For a long time, people just used the smaller devices to get more performance (performance here being higher data rate and higher carrier frequencies). This went on for decades, up to the point that we hit 40 nm or 28 nm CMOS, and this improvement just... stopped. For the last 10 years it has just been an ongoing debate between 22 nm, 28 nm, and 40 nm over which is best. But still, using the same technology node as researchers 10 years ago, we achieve more than 50x the data rate, and do so at double the carrier frequencies, simply by using new techniques and a better understanding of what is going on at the transistor level.

    • @RobBCactive
      @RobBCactive Год назад +22

      That wasn't really true for CPUs: in the early 90s memory speed began falling behind CPU speed, and chips had to develop caches, instruction-level parallelism, pre-fetch and out-of-order execution.
      The key to the old automatic performance gains was Dennard scaling, which meant smaller transistors were faster, more power efficient and cheaper. Now process nodes require complicated design and power management to avoid excess heat due to power leakage, as well as more efficient 3D transistor structures like FinFET and now gate-all-around, all more expensive to make.
      Now cache isn't scaling with logic, so Zen 4 with V-cache bonds a 7 nm cache-optimised die to the 5 nm logic of the cores, interfaced externally via a 6 nm IOD chiplet.

    • @jehl1963
      @jehl1963 Год назад +24

      Yup. In many respects, I think that highlights the difference between science and Engineering. A lot of engineering resources are invested in incremental improvements. It may not be glamourous, but it is meaningful. Just by constantly smoothing out the rough edges in process, important gains are made. Let's hear it for all of those engineers, technicians, operators and staff who continue to make constant improvements!

    • @edbail4399
      @edbail4399 Год назад

      yay@@jehl1963

    • @allanolley4874
      @allanolley4874 Год назад +10

      Another complication is that, even in the 70s and 80s, although the change was often just making transistors smaller, each time you made transistors smaller you would be faced with new problems. New lithography techniques were required, transistors would show new physical effects when made smaller that had to be compensated for, and so on. Each new level of miniaturization was preceded by a lot of research into different techniques that might allow work at the new level, and many of those programs had false starts, failures and diversions (charge-coupled devices were originally used as memory before being used in digital cameras, etc.).
      It has been suggested that Moore's law became a goal for the semiconductor industry, and they anticipated and funded the research on the things they expected to need given that goal.
      This is all in addition to innovation or the lack of innovation in organization of chip sets and hardware as with cache, GPUs, math coprocessors, more attempts to implement parallel computing and so on.

    • @JorenVaes
      @JorenVaes Год назад +21

      @@RobBCactive Perhaps I should have made it a bit clearer in my original comment - I'm an analog/RF guy, and so is Marcel Pelgrom, so it might very well be that his comments only apply to RF/analog. At the same time, from my limited expertise in digital it does kinda apply there too - sure, there was lots of architectural stuff that happened (which also happened in analog, because you have to architect your complex blocks first), but at the true low-circuit level that wasn't really the case until recently, where you see people coming up with things like error detection per register stage to catch that 1-in-a-million event where you do use the critical path. That lets you run closer to (or even over) the edge 99% of the time and fix the issues in the 1% of the time they occur, instead of running at 3/4 the speed with 0% errors and pissing away 25% more power doing so.
      Sure, the design also becomes more complex (as my Cadence EDA bills indicate...), but from what I can tell a lot of that is handled at the EDA level.

  • @djvelocity
    @djvelocity Год назад +176

    I opened YouTube and the first video in my feed was “How Dead is Moore’s Law?”. Thank you, YouTube algorithm 🙌

    • @LyleAshbaugh
      @LyleAshbaugh Год назад +2

      Same here

    • @melodyecho4156
      @melodyecho4156 Год назад +1

      Same!

    • @RunicSigils
      @RunicSigils Год назад +2

      Well of course, you would be talking about your subscription feed, right?
      You're not a monkey using the homepage and letting a company decide what you watch, right?
      Right?

    • @BradleyLayton
      @BradleyLayton Год назад +5

      I opened my messaging app and saw the link forwarded from my daughter.
      Thanks, genetic algorithm!

  • @Broockle
    @Broockle Год назад +23

    Such a holistic approach to answering the question, going down so many avenues to show us how nuanced the subject is.
    This was awesome 😀

  • @viktorkewenig3833
    @viktorkewenig3833 Год назад +124

    "The trouble is the production of today's most advanced logical devices requires a whopping 600-100 steps. A level of complexity that will soon rival that of getting our travel reimbursements past university admin". I feel you Sabine

    • @Unknown-jt1jo
      @Unknown-jt1jo Год назад +9

      I love how she delivers this like it's an inside joke that 99% of her viewership can relate to :)

    • @AICoffeeBreak
      @AICoffeeBreak Год назад +5

      I laughed so hard at this. 😆

    • @viktorkewenig3833
      @viktorkewenig3833 Год назад +3

      we all know the feeling@@AICoffeeBreak

    • @dzidmail
      @dzidmail Год назад +1

      How many steps?

    • @nonethelesszero7950
      @nonethelesszero7950 Год назад +1

      Since you stated it as anywhere from 600 steps UP TO 100 steps, we're now talking about the complexity and logic of university HR processes.

  • @rammerstheman
    @rammerstheman Год назад +18

    Great video Sabine! I recently finished working at a lab that is trying to use graphene-like materials to replace and supersede silicon.
    One tiny error about 4 mins in: you described e-beam lithography as a technique for characterising these devices, but what you go on to describe is scanning electron microscopy. E-beam lithography is a device fabrication technique mostly used in research labs. Confusingly, the process takes place inside an SEM! Lithography is patterning of devices, not a characterisation technique.

    • @YouHaventSeenMeRight
      @YouHaventSeenMeRight Год назад +3

      E-beam lithography (a form of maskless lithography) was seen as a possible successor to "optical" or photolithography (which uses masks to form the structure layers). Unfortunately it is not fast enough to rival the throughput of current photolithography systems, which expose each layer through a mask all at once, almost like a high-powered slide projector, whereas e-beam systems need to scan across each chip on a wafer to create each layer's shape. So for now it is relegated to specialist applications and research work. One of the companies that was building e-beam lithography systems for chip production, Mapper Lithography, went bankrupt in 2018 and was acquired by ASML. ASML has not continued the work on e-beam lithography, as they are the sole producer of EUV photolithography machines.

    • @Hexanitrobenzene
      @Hexanitrobenzene Год назад

      "Confusingly the process takes place inside an SEM!"
      So what's the difference? Electron energy or something else?

    • @rammerstheman
      @rammerstheman Год назад +1

      @@Hexanitrobenzene In an SEM measurement, you want to scan the beam over your sample to collect a pixel-by-pixel map of a signal, like the number of electrons that are backscattered.
      While you're scanning, the beam has a lot of energy, so it interacts strongly with your sample. In e-beam lithography, the beam traces out the pattern you want to develop, and you use materials that harden or soften under the electron beam.
      So I guess the biggest difference is in the pattern the beam scans. I think one might possibly use higher electron doses for lithography too.

    • @Hexanitrobenzene
      @Hexanitrobenzene Год назад +1

      @@rammerstheman
      Hm, seems like the main difference is in the preparation of the sample, not electron beam. Perhaps an analogue of photoresist which is sensitive to electron irradiation is used.

  • @johneagle4384
    @johneagle4384 Год назад +95

    My first computer was a monstrous IBM mainframe. Punch cards and all. Ahhh... those were the days, which I do not miss.
    Things that would take hours then, I can do in a few seconds today.

    • @WJV9
      @WJV9 Год назад +17

      Mine was an IBM 360 mainframe: type up a Fortran program on punch cards, add the header cards to your deck, wrap it with a rubber band and submit it along with your EE course # and student ID#. That was back in 1965. Then wait an hour or two and find out you have syntax errors, so you go back and edit out the typos and syntax errors by punching new cards, submit the deck again and wait a few hours. Those were the days.

    • @Steeyuv
      @Steeyuv Год назад +7

      Started on those as a young man in 1980! These days, funnily enough, things that used to take me seconds, now take somewhat longer…

    • @tarmaque
      @tarmaque Год назад +2

      Good lord! And I thought _I_ was old!

    • @radekhn
      @radekhn Год назад +3

      @@Steeyuv, yes I remember. You switched the power on, and in a few hundred milliseconds you got an answer:
      READY
      What a time it was.

    • @appaio
      @appaio Год назад +3

      I started with an abacus. THOSE were the days! challenge won?

  • @johnhorner5711
    @johnhorner5711 Год назад +1

    Thank you for another fascinating video. I have a few nits to pick. 1) Back in 2018 GlobalFoundries stated that they were abandoning their 7 nm technology deal with Samsung and would no longer attempt to compete at the very highest levels of semiconductor technology, but rather would focus on other markets. Thus there are only three companies (TSMC, Samsung and Intel) still competing at the leading edge, and of these Intel is already well behind the curve while TSMC is leading. 2) Heterogeneous computing does not extend Moore's law. It is a back-to-the-future adaptation to the increasing cost of moving to ever higher transistor densities. In many ways it is a re-discovery of the advantage of chips being highly customized for particular tasks.

  • @adrianstephens56
    @adrianstephens56 Год назад +60

    I joined Intel in 2002. At that time people were predicting the end of Moore's Law.
    My first "hands on" computer was a PDP-8S, thrown away by the High Energy Physics group at the Cavendish and rescued by me for the Metal Physics group. From this you can correctly infer the relative funding of the two groups.

    • @johndododoe1411
      @johndododoe1411 Год назад +4

      I guess your Metal Physics group didn't focus on the extremely well funded tube alloys ... :-)

    • @sehichanders7020
      @sehichanders7020 Год назад +1

      Predicting the end of Moore's Law has really become quite akin to predicting when fusion power will be commercially available. Probably both will happen at the same time.

    • @chrisc62
      @chrisc62 Год назад

      I was told in 1984 that the limit of optical lithography was 1 µm, or 1000 nm; now they say they are going into production with a 10 nm half-pitch next year. I did my undergraduate physics project in the Metal Physics group at the Cavendish, on scanning electron acoustic microscopy.

  • @adriendecroy7254
    @adriendecroy7254 Год назад +18

    Sabine, maybe you could do a video on the reverse of Moore's law as it applies to efficiency of software, especially OSes, which get slower and slower every generation so that for the past 20 years, the real speed of many tasks in the latest OS on the latest hardware has basically stayed the same, whilst the hardware has become many times faster.

    • @BradleyLayton
      @BradleyLayton Год назад

      Yes please.
      Shannon's law?

    • @adriendecroy7254
      @adriendecroy7254 Год назад

      @@BradleyLayton lol perfect

    • @squirlmy
      @squirlmy Год назад

      Yeah, but that OS model is kinda specific to PCs, servers and workstations. For example, look into ITRON and TRON, or really any real-time OS. I don't think OSes are getting slower with any sort of consistency. There are a lot of embedded applications where throwing in a complete Linux distro is fast enough to replace a microcontroller with custom software. It's kind of the point of Moore's Law that software bloat just doesn't matter. Some commercial stuff is bloating in response, but other stuff isn't. And remote "software as a service" upends the models too; bandwidth becomes a lot more important than OS speed.

    • @afterthesmash
      @afterthesmash Год назад

      Every OS function on the frame path of a popular video game has gotten faster, while all the rest has stagnated. Bling in bullet time by popular demand.

    • @snnwstt
      @snnwstt Год назад

      Not sure you're comparing the same things. A single command-line task of the 70s is not a graphical interface of today, with tons of background services all active and competing. Different "clients" are targeted now than in the 70s.
      As for the goal, what is the point of detecting a mouse click one hundred times faster when the end user is as slow as ever, if not slower?
      And if your need is a specific job, you may use an MCU with no OS, flashed with a program that is itself developed and mostly debugged using a generic OS.

  • @maxfriis
    @maxfriis Год назад +53

    I was convinced that analog computing would be mentioned in such a comprehensive and detailed overview of computing. If you need to calculate a lot of products, as you do when training an AI, you can actually gain a lot by sacrificing some accuracy and using analog computing.

    • @traumflug
      @traumflug Год назад +7

      Well, giving up deterministic results would open a whole can of other worms.

    • @maxfriis
      @maxfriis Год назад

      @@traumflug Veritasium has an interesting video on the topic.

    • @BaddeJimme
      @BaddeJimme Год назад +17

      @@traumflug You don't actually need deterministic results for a neural network. They are quite robust.

    • @BaddeJimme
      @BaddeJimme Год назад +7

      @ralphmacchiato3761 I suppose you consider noise from quantum effects to be "deterministic" right?

    • @mateialexandrucoltoiu7207
      @mateialexandrucoltoiu7207 Год назад +2

      Analog chips are only a niche product for AI; they are not suitable for general computing tasks.

  • @MyrLin8
    @MyrLin8 Год назад +7

    Love this one. Your [Sabine's] comparison between the complexity of designing chips and getting reimbursement paperwork through an associated bureaucracy is sooo excellent. :)

  • @tonyennis1787
    @tonyennis1787 Год назад +14

    Back in the day, Moore's Law was that the number of transistors doubled every 18 months. So we're maintaining Moore's Law but simply changing the definition.
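
A quick back-of-the-envelope comparison of the two doubling periods people quote; the 18-month figure is usually attributed to Intel's David House rather than to Moore's own 1965/1975 papers:

```python
# Compound growth under the two doubling periods commonly quoted for "Moore's Law".
for years in (10, 20):
    by_18_months = 2 ** (years / 1.5)
    by_24_months = 2 ** (years / 2.0)
    print(f"after {years} years: x{by_18_months:,.0f} with 18-month doubling "
          f"vs x{by_24_months:,.0f} with 24-month doubling")
```

Over 20 years that is roughly a 10,000x factor versus a 1,000x factor, so which period you pick matters a lot for whether the "law" still looks alive.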

    • @ittaiklein8541
      @ittaiklein8541 Год назад +3

      TRUE! I definitely remember that. When she said 2 years, my first reaction was: NOPE! Moore's Law states 1.5 years! That was repeated over & over.
      Just had an idea: let's just change the definition of the law so as to fit actual progress. 😅

    • @NonsenseFabricator
      @NonsenseFabricator Год назад +6

      He actually revised it in 1975

    • @ittaiklein8541
      @ittaiklein8541 Год назад +3

      @@NonsenseFabricator - could be.
      Nevertheless, I maintain that the revised version did not garner the same popularity as the "Law" in its original form, for I saw it quoted in the original form way after 1975. But this debate has outlasted its importance, so I unilaterally call it over. 🙂

    • @davideyres955
      @davideyres955 Год назад

      @@ittaiklein8541 Works for most governments!

    • @amentco8445
      @amentco8445 Год назад +1

      If you change the definition it's not a law, thus it died long ago.

  • @haldorasgirson9463
    @haldorasgirson9463 Год назад +7

    I recall the first time I ever saw an op-amp back in the 1970's. It was an Analog Devices Model 150 module that contained a discrete implementation of an op-amp (individual transistors, resistors and capacitors). The thing was insanely expensive.

    • @kittehboiDJ
      @kittehboiDJ Год назад

      And had no temperature compensation...

    • @johndododoe1411
      @johndododoe1411 Год назад

      I've seen pictures of tube-based op-amp modules. Those handled temperature quite differently, needing actual built-in heaters to work.

  • @AAjax
    @AAjax Год назад +11

    Many people use "Moore's Law" as a shorthand for compute getting faster over time, rather than for a doubling of transistor density. Wrong though that is, it's a common usage, especially in the media. The speed increases we get from the doubling alone have stagnated, which is why CPU clock speeds are also stagnant.
    Nowadays, the extra transistors get us multiple cores (sadly most compute problems don't parallelize neatly) and other structures (cache, branch prediction, etc) that aren't as beneficial as the raw speed increases we used to get from the doubling.

    • @davidmackie3497
      @davidmackie3497 Год назад +6

      Yeah, in the early days, the new computer you'd get after 4 years had astounding performance compared to its predecessor. But also, how fast do you need to open a word processing document? My inexpensive laptop boots up from a cold start in about 5 seconds, and most applications spring to life with barely any noticeable lag.
      What we still notice are compute-intensive tasks like audio and video processing. And those tasks are amenable to speed-up from specialized chips. Pretty soon those tasks will be so fast that we'll barely notice them either. But by then we'll be running our own personal AIs, and complaining about how slow they are.

    • @johnbrobston1334
      @johnbrobston1334 Год назад

      Sadly, I've got a problem at work that would parallelize nicely, but it's written in an ancient language for which no parallel implementation has been produced, so it needs a complete rewrite in order to do that. And there's no time to do it.

  • @electronik808
    @electronik808 Год назад +4

    Hi, really great video. One clarification regarding chiplet design and 3D stacking: the chiplet idea is to produce different wafers using different processes and then assemble them together later. This is efficient because you can optimize the process for the individual parts and use lower-cost processes where they suffice. It also can improve yield, as you get multiple smaller chips.

  • @jehl1963
    @jehl1963 Год назад +51

    Well summarised, Sabine!
    By coincidence, I'm about the same age as the semiconductor industry, and have enjoyed working in that industry (in various support roles) my entire life. I've been blessed to have worked with many bright, creative people in that time. In the end, I hope that I've been able to contribute too.
    Kids! Spend the time to learn your math, sciences, and management skills, and then you too can join this great quest/campaign/endeavor.

    • @sgttomas
      @sgttomas Год назад

      I want to destroy your entire legacy 😂

    • @GlazeonthewickeR
      @GlazeonthewickeR Год назад +1

      Lol. YouTube comments, man.

    • @davidrobertson5700
      @davidrobertson5700 Год назад +3

      Summarised

    • @100c0c
      @100c0c Год назад +5

      ​@@GlazeonthewickeRwhat's wrong with the comment?

    • @GraemePayne1967Marine
      @GraemePayne1967Marine Год назад +3

      @jehl1963 - I strongly agree with your last paragraph. A solid education in mathematics, the sciences, and management (including process and quality management) will be VERY beneficial to almost any career path. Always strive to learn more. Never settle for being just average.

  • @viralsheddingzombie5324
    @viralsheddingzombie5324 Год назад +2

    Diodes are primarily on/off switches. Transistors are amplification and flow control devices.

  • @iatebambismom
    @iatebambismom Год назад +20

    I remember reading an article about how electrons will "fall off" chips with smaller than 100nm fabrication so it was a hard limit on fabrication. This was in the single chip x86 days. Technology is amazing.

    • @grizzomble
      @grizzomble Год назад +8

      That's not far from where we are hitting the wall. The transistors in "8 nm" chips are about 50 nm.

    • @Nefville
      @Nefville Год назад +1

      Oh do I remember those x86s, great chips. I once ran an 8086 w/o a heat sink or fan to get it to overheat and it simply refused.

    • @Furiends
      @Furiends Год назад

      Tech news is some of the worst out there, because it poses as a technical authority, using buzzwords and parroting some amount of truth woven into a more interesting narrative that just makes no sense. No, electrons do not "fall off" chips.

    • @woobilicious.
      @woobilicious. Год назад +2

      Oh they're definitely falling off, there's a reason why they use so much power and get so hot lol.

    • @johnbrobston1334
      @johnbrobston1334 Год назад

      @@Nefville Heat sink? On an 8086? I never saw such a thing. I did get my hands on a 20 MHz '286 and it needed a heat sink. It wasn't a very satisfactory machine though -- there was apparently a motherboard design issue on the particular board I had -- I had to power it up, let it run a minute, and then hit reset before it would boot. Never did figure out where the problem was.

  • @roccov3614
    @roccov3614 Год назад +1

    Saying we are coming to the end of Moore's Law is like saying we know everything there is to know.
    There will always be more to learn and newer technology to advance Moore's Law.

  • @markus9541
    @markus9541 Год назад +438

    My first computer was a Commodore VC-20, with 3 KB of RAM for BASIC. Taught myself programming at age 11 on that thing... good old times.

    • @memrjohnno
      @memrjohnno Год назад +31

      A Sinclair ZX80 for years, then a Dragon 32, which was a step up from the VC-20 but below the Commodore 64, which came out about a year after. Pre-internet: typing in code out of a magazine and loading/saving on audio tape. Heady days.

    • @ziegmar
      @ziegmar Год назад +9

      @@memrjohnno I remember that wonderful times 😊

    • @quigon6349
      @quigon6349 Год назад +4

      My first computer was the Tandy Color Computer 2 with 64K RAM.

    • @Yogarine
      @Yogarine Год назад +5

      My family’s first computer was an actual C128 as well. Though it spent most of its days in C64 mode because of the greater software library that it offered, I actually learned programming in the C128’s Microsoft BASIC 7.0 😆
      Like you said… good old times.

    • @imacmill
      @imacmill Год назад +11

      The VIC-20 was my first computer, too, but I moved to Atari PCs pretty quick. The 400, then 800, then ST. I taught myself to code on the Atari 400 using a language called 'Action'. Peeks and Pokes were the bleeding edge coding tech on it, and I also learned the ins and outs of creating 256-color sprites by using h-blank interrupts...sophisticated stuff back then. Antic Magazine, to which I had a subscription, was my source for all things Atari.
      Speaking of Antic Magazine, they once held an art competition for people to showcase their art skills on an Atari. My mom had bought me a tablet accessory for my 400 -- yup, a pen-driven tablet existed back then, and it was awesome -- and I used it to re-create the cover art of the novel 'Dune', a book I was smitten with at the time. I sent my art to Antic, on a tape!!, and I was certain I was going to win, but alas, nada.
      Fantastic memories, thanks for the reminder...

  • @ckmishn3664
    @ckmishn3664 Год назад +1

    1:47 Not to be pedantic, but what he noticed (and what your graph shows), was that the number of transistors was doubling every year. He later amended his observation to every 2 years as things were already slowing down.

  • @johndavidbaldwin3075
    @johndavidbaldwin3075 Год назад +22

    I can remember seeing transistors at school; they were cylinders about 1.5 cm long and 0.5 cm wide with three wires sticking out of one end. They were used for the first truly portable radios, which could operate on zinc-carbon batteries.

    • @markuskuhn9375
      @markuskuhn9375 Год назад +6

      Transistors for power applications remain that big, or bigger.

    • @chicken29843
      @chicken29843 Год назад

      ​@@markuskuhn9375yeah I mean isn't a transistor basically just a switch?

    • @jpt3640
      @jpt3640 Год назад +6

      @@chicken29843 Only true in the digital world. You use transistors for signal amplification in the analog world, e.g. your hi-fi. In that case it's a continuum between on and off, thus not a switch.

    • @gnarthdarkanen7464
      @gnarthdarkanen7464 Год назад +4

      What I find funny is that they finally got the "first truly portable radios" out to market with a slogan proudly and boldly scrawled across the front "Solid state", most popular (in my area) with a 9-Volt box-shaped battery in each one... AND around a decade later, the teenagers were throwing their backs out of whack with MASSIVE "boomboxes" on their shoulders, failing utterly to appreciate the convenience of a radio that fit in your pocket and a pair of earphones to hear it clean and clear anywhere... to have "speaker wars" that threatened the window integrity two houses down the block for a "radio" that weighed as much as an 80 year old phonograph unit... ;o)

    • @egilsandnes9637
      @egilsandnes9637 Год назад +1

      @@chicken29843 I'd say they generally are more like valves. If you always either turn your valve to completely stop the flow or open it fully, you've practically got a switch, and that's what's happening in digital circuits. There is a plethora of different kinds of transistors though, and for the most part they can be generalized as analog valves. One of the most common uses is as signal amplifiers, for audio and many other things.

  • @johnjakson444
    @johnjakson444 Год назад +1

    Another chip guy here.
    The transistor symbol at the start should have used the MOS FET gate even though the O for Oxide isn't used anymore.
    NPU used to mean Network Processor Unit, but Neural is so much more interesting.
    The real problem in my view is the Memory Wall, which forces CPUs to have ever deeper cache hierarchies and gives worse performance when the processor is not making much use of its massively parallel capabilities (rough numbers in the sketch after this comment). Cold-starting a PC today is not significantly faster than 40 years ago, 30 s vs 100 s or so, but many capabilities are orders of magnitude higher. If we had a kind of DRAM with, say, 40 times as much random throughput (an RLDRAM), processor design would be an order of magnitude or two simpler, but coding would have to be much more concurrent. Instead of using the built-in SIMD x86 opcodes to write codecs, we would explicitly use hundreds of threads to perform the inner operations; then we have a Thread Wall problem. And instead of including DSP-, NPU- and GPU-like functions to help the CPU, we would write code to implement those blocks across a very large number of simpler but slower CPUs. But that possibility has slipped away.
    I could also rant about compilers taking up 20-50 GB of disk space when they would once fit on floppy disks. Code that used to compile in 2 minutes now compiles in maybe a few seconds; that speed-up does not square with the million-fold transistor counts needed to do the job. And software complexity..... I think we all jumped the shark there.
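
A rough illustration of the "Memory Wall" point above, using ballpark latency figures that are assumptions rather than measurements of any particular system:

```python
# Cycles a core sits idle waiting on memory, for ballpark (assumed) numbers.
clock_ghz = 4.0          # assumed core clock
dram_latency_ns = 80.0   # assumed DRAM random-access latency
l1_latency_ns = 1.0      # assumed L1 cache hit latency

print(f"~{dram_latency_ns * clock_ghz:.0f} cycles stalled per DRAM access, "
      f"vs ~{l1_latency_ns * clock_ghz:.0f} cycles for an L1 hit")
# Roughly two orders of magnitude apart, which is why ever deeper
# cache hierarchies (and cache-friendly code) matter so much.
```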

  • @Weirdanimator
    @Weirdanimator Год назад +66

    Correction! GPU was just a marketing term from Nvidia; dedicated graphics microchips have existed since at least the 70s.

    • @SabineHossenfelder
      @SabineHossenfelder  Год назад +40

      Ah, sorry about that :/

    • @hughJ
      @hughJ Год назад +12

      ATI shortly thereafter (9700 series?) tried to one-up them by coining their product as the first ever "VPU" (visual processing unit, IIRC). Kind of funny seeing companies try to twist themselves into knots to explain how their product deserves an entirely new classification. I think part of Nvidia's "GPU" distinction at the time was that it's able to transform 10 million vertices per second, as if something is special about that number.

    • @majidaldo
      @majidaldo Год назад +14

      She's referring to the consumer desktop *3d* gpus. The correction is that they were pioneered by 3dfx not Nvidia

    • @zigmar2
      @zigmar2 Год назад +8

      @@majidaldo There were consumer desktop 3D graphics processors even before 3dfx, like S3 Virge and ATI Rage; they were terrible though.
      Fun fact: The term GPU was first used by Sony for PS1's graphics processor.

    • @deth3021
      @deth3021 Год назад +1

      ​@zigmar2 what about the matrix 2d, 3d, and 5d cards....

  • @visionscaper
    @visionscaper Год назад +1

    Extensive summary, thanks! But you might have missed an important, and currently very relevant, research direction: non-von-Neumann architectures and, in general, computation paradigms that are closer to computing in biological neural networks, where compute and memory are merged and signals are spikes instead of bit streams.

  • @jimsackmanbusinesscoaching1344
    @jimsackmanbusinesscoaching1344 Год назад +10

    There is another set of problems that are related but are a lot less obvious.
    Design complexity is a problem in itself. It is difficult to design large devices and get them correct. To accomplish this, large blocks of the device are essentially predesigned and just reused. This makes the design simpler but less efficient. Complementary to this is testing and testability. By testing, I mean the simulations that are run before the design is finalized, prior to design freeze. The bigger the device, the more complex this task is. It is also one of the reasons that big blocks are just inherited. Testability means the ability to test and show correctness in production. This too gets more complex as devices get bigger.
    These complexities have led to fewer new devices being made, as one has to be very sure that the investment in developing a new chip will pay off. The design and preparation-for-manufacturing costs are sunk well ahead of the revenue for the device, and these costs have also skyrocketed. The response has been the rise of the FPGA and similar ways of customizing designs built on standard silicon. These too are not as efficient as a fully custom device could have been, but they have the advantage of dramatically lowering the upfront costs.

  • @pro-libertatibus
    @pro-libertatibus Год назад +2

    Sabine meets Asianometry = fascinating and clear.

  • @XmarkedSpot
    @XmarkedSpot Год назад +4

    Shoutout to the channel Asianometry, an excellent first hand source for everything microchips and beyond!

  • @ctakitimu
    @ctakitimu Год назад +1

    You're so lucky to have had a Commodore 128! We started on a Sinclair ZX Spectrum before upgrading to a Commodore 64 (C64). Eventually upgrading to an Amiga 500. Then later came exploring how to build XT/AT systems, then 286 etc.

  • @casnimot
    @casnimot Год назад +5

    Yes, Moore's Law can continue for a while through parallelism. Heterogeneous approaches are working now and we're fairly close (I think) to practical use of resistor arrays for memory and compute (memristors). AI approaches to patterns of execution, and how that affects CPU, I/O and RAM, will also likely bear considerable fruit since we've spent quite a bit more time on miniaturization than optimization.
    In gaming, we're looking at a time of what will feel like only linear improvements in experience. But they will continue, and then pick up as we grow into new combined compute/storage/IO materials, structures and strategies.
    Costs and power consumption will be the real bear, and we MUST NOT leave solving such problems to the likes of NVIDIA (from whom I will purchase nothing). True competition is key.

  • @ellieban
    @ellieban Год назад +1

    “You see an exponential increase can’t continue indefinitely”
    Would someone please remind the economists 🙄

  • @brianmason9803
    @brianmason9803 Год назад +86

    Moore's law was never a law. It was an observed trend that manufacturers turned into an obsession. It caused an escalation of software that we will come to realise has made many systems unwieldy and unstable.

    • @michaelblacktree
      @michaelblacktree Год назад +15

      Moore's Law isn't really a law, just like the Drake Equation isn't really an equation.

    • @ernestgalvan9037
      @ernestgalvan9037 Год назад +24

      “Moore’s Law” rolls off the tongue a lot more smoothly than “Moore’s Observed Trend”
      😎
      P.S. She DID comment about the nomenclature.

    • @crocothemis
      @crocothemis Год назад +10

      Yes, so she said. Murphy's law isn't a law either.

    • @jamesdriscoll_tmp1515
      @jamesdriscoll_tmp1515 Год назад +2

      My observation has been the effort and ingenuity that amazing individuals have shown working to make this reality.

    • @binkwillans5138
      @binkwillans5138 Год назад +4

      Most basic software has not improved since the 90s.

  • @MrLeafeater
    @MrLeafeater Год назад +3

    I started with a Commodore 16 that plugged into the back of the TV. It was pretty horrifying, even at the time. Love your work!

  • @anttikarttunen1126
    @anttikarttunen1126 Год назад +4

    I first read it as "How Dead is Murphy's Law?", and wondered how Murphy's law could ever be dead!

    • @Anerisian
      @Anerisian Год назад +4

      …when you least expect it.

    • @bobjones2041
      @bobjones2041 Год назад

      You put it in a box and it could be alive or dead, just like cats in a box, too

    • @ForeverTemplar
      @ForeverTemplar Год назад

      @@bobjones2041 Wrong. Murphy would say the smart money is the cats are dead.

    • @bobjones2041
      @bobjones2041 Год назад

      @@ForeverTemplar they was before Murphy had gender reassignment surgery and took up toking weed

  • @ElvisRandomVideos
    @ElvisRandomVideos Год назад +1

    Great video. I work in the semiconductor industry and you nailed every point from Graphene to EUV.

  • @ArtisanTony
    @ArtisanTony Год назад +18

    Thank God! I outlived Moore :) My first computer was a 1982-model Compaq luggable that had two 5 1/4" floppy drives and a 9" amber screen. I traded it in a year later for the same computer, but this one had a 20 MB hard drive. I was a big timer in 1982 :)

    • @tarmaque
      @tarmaque Год назад +5

      Hahaha! That reminds me of when I scored a hard drive for my first Macintosh in the late 80's. Back then the trick was to shrink the Mac OS down enough so you could get both the system and Word on one floppy, hence not having to swap out floppies constantly. Then I got this shiny new _12 mb_ hard drive! Bliss! Not only enough room for the full OS and Word, but a few dozen documents. It was SCSI, huge, noisy, and I loved it. I held onto it for years simply for the novelty factor, but finally it got lost.

    • @ArtisanTony
      @ArtisanTony Год назад +1

      @@tarmaque Yes, I am mad for getting rid of my luggable lol

    • @billirwin3558
      @billirwin3558 Год назад

      Ah yes, the days when computers were almost comprehensible to their users. Anyone for machine language Ping Pong?

  • @joetuktyyuktuk8635
    @joetuktyyuktuk8635 Год назад +2

    My family's first computer was a Commodore 64. My brother and I would spend a day typing in code from a magazine in order to have a rudimentary game to play. One comma out of place and the game wouldn't run, and you had to go through everything line by line... ah, good times.

  • @michellowe8627
    @michellowe8627 Год назад +5

    My first computer was a Motorola M6800 design evaluation kit: a 6800 microprocessor, serial and parallel I/O chips, a clock, and a ROM monitor. It had a 48K memory board and a surplus power supply scavenged from a scrapped minicomputer, and I used my TV as a glass teletype.

  • @julioguardado
    @julioguardado Год назад +1

    The most amazing thing about Moore's law to me is not the increased density of transistors but the reduction in power per transistor. From the original PCs using 8086 chips to the current generation, power per transistor has dropped from about 10^-4 to 10^-9 watts.
    And here's another fun fact: the entire annual semiconductor, semiconductor-manufacturing-equipment, materials, and facilities markets combined total slightly more than one quarter of Walmart's sales. Talk about an inverted pyramid when you think about how much depends on that little industry.
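
A back-of-the-envelope check of the power-per-transistor figures above; the transistor counts and power draws below are rough public ballpark numbers, not exact specs:

```python
# Approximate watts per transistor, then and now (all figures are rough assumptions).
chips = {
    "Intel 8086 (1978)":       {"transistors": 29_000,         "watts": 1.7},
    "modern high-end CPU/GPU": {"transistors": 50_000_000_000, "watts": 300.0},
}
for name, c in chips.items():
    print(f"{name}: ~{c['watts'] / c['transistors']:.0e} W per transistor")
```

Both results land within an order of magnitude of the 10^-4 and 10^-9 W figures quoted above.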

  • @rredding
    @rredding Год назад +4

    My first computer was an Apricot PC, in 1984. A 16-bit system that was compatible with the Sirius 1. It was promoted in forums as WAY better than the IBM, but its DOS system was not IBM-compatible.
    The screen was high resolution, 800×400.
    The issue: since it was not IBM-compatible, hardly any software became available. 😭

  • @ivanerofeev4873
    @ivanerofeev4873 Год назад +1

    4:12: e-beam lithography is a way of making patterns on the wafer; what you describe is scanning electron microscopy (SEM). People also use atomic force microscopy (AFM), but both are mainly used during the development stage and for random quality checks, not in the production of every single device.

  • @thepicatrix3150
    @thepicatrix3150 Год назад +5

    We need a new Sabine song to tell us about the good scientific discoveries in 2023 and make fun of the BS

  • @koumeiseidai
    @koumeiseidai Год назад

    As a structural biologist, I rely on GPUs for processing EM data, basically hijacking the GPUs (hopefully more than one card) in parallel to do a variety of calculations like sorting images, adaptive learning, and building models. Running the same computations on a CPU would take days longer.

  • @seadog8807
    @seadog8807 Год назад +3

    Great update Sabine, many thanks for your continued production of this content!

  • @robjohnston1433
    @robjohnston1433 Год назад

    Is "Leagacy" just a typo, or a clever visual/verbal joke by Sabine that I'm too dumb to understand!
    Otherwise ... FANTASTIC video!
    I have been jumping from videos to journals to press releases to industrial backgrounders ... all to understand how these super-new chips will be made.
    Here it is, ALL in one package ... with THE BEST educator.
    Thanks Sabine!!!

  • @1873Winchester
    @1873Winchester Год назад +9

    We have a lot of improvements to be done at the software level too. Modern software is so bloated.

    • @hellomeow9590
      @hellomeow9590 Год назад +5

      Yeah, a lot a lot. Like, several orders of magnitude. A lot of modern software is built on layers upon layers of inefficient code.

    • @yeetdeets
      @yeetdeets Год назад +1

      Even if Moores law ends, compute will likely still become progressively cheaper due to production scale, as will energy due to new technologies. So society rewriting a bunch of the bloated software seems unlikely. I actually think there is/will be a small gold rush of utilizing the cheap compute in new areas.

    • @danielhadad4911
      @danielhadad4911 Год назад

      Like all the software that comes with your printer and serves no purpose.

  • @brunonikodemski2420
    @brunonikodemski2420 Год назад

    Totally agree with @funtech below. My son works in this area now, and even in metamaterials, with Moire patterns, there is a limit of about "a box" of about 6+hexagon in flat planes, or an "x-box" like a bucky-ball of only a dozen or so atoms. The main problem is interrogating the "bits" within these structures. The wavelengths of the interrogation laser/optical system is so large, that it washes out the adjacent bits, after/for distances of a couple of wavelengths. This "interrogation noise" causes local destruction of the bit patterns, thus memory is lost. There is a fundamental quantum-noise level, which conflicts with the interrogation frequencies/energies, and requires near-Zero-Kelvin temperatures. 3-dimensions do not change this size range, just makes it a larger Qubit.

  • @phoule76
    @phoule76 Год назад +3

    We're not getting old, Sabine; we're becoming legacy.

    • @fatguy321
      @fatguy321 Год назад +3

      “Leagacy”

    • @cbrew8775
      @cbrew8775 Год назад

      @@fatguy321 id like to buy vowel, a contanty thingish.. shit stupid keyboard

  • @Lucius_Chiaraviglio
    @Lucius_Chiaraviglio Год назад +1

    From looking at the graph of number of transistors over time, it looks like we reached an inflection point around 2010, with slower growth since then. I would expect growth to gradually level out rather than suddenly hitting a ceiling, as development becomes more expensive, requiring longer times to recover development costs, and indeed, judging from the graph, this seems to be happening.

  • @eonasjohn
    @eonasjohn Год назад +6

    Thank you for the video.

  • @Rpol_404
    @Rpol_404 Год назад +1

    I work in aerospace and we’d be in serious trouble if not for legacy chips. Our designs have to be supported for decades due to FAA rules (DO-178) concerning rehosting software to a different chip. It is very costly!

  • @oobihdahboobeeboppah
    @oobihdahboobeeboppah Год назад

    I always appreciate Sabine's lectures here on YT. Her delivery is scaled back to a level that even I can understand her material. We're close in age (I'm still older) so her references to past tech hits home for me. Moreover, her accent (from my American ears) isn't so strong that I can't follow her; I get a kick out of hearing the different ways some words are pronounced, I guess it's we Americans who are saying things wrong. Some of her topics are over my head even though I've been in tech all my life, and if I find my brain drifting off, her very pleasant demeanor and style reels me back in. Her contributions to teaching science are second to none!

  • @davidlinnartist
    @davidlinnartist Год назад

    I went to high school with Gordon Moore's son and used to hang out at his house in Los Altos Hills. Great guy!

  • @TLguitar
    @TLguitar Год назад +15

    I'm only at the beginning of the video so I'm not sure this wasn't addressed later, but I read that the "5nm", "3nm" etc. nomenclature of modern processors is actually just marketing and it doesn't indicate the actual size of their transistors, which in effect is currently considerably larger.
    So perhaps on the upside we are still quite far, definitely more than it might seem on paper, from the theoretical 250 picometer or so size limit using silicon atoms (although I assume, without having finished the video yet, this small of a scale would be unusable in standard processors due to quantum effects).

    • @HisBortness
      @HisBortness Год назад

      Tunneling seems to become a problem at far larger scales than that.

    • @FilmsSantaFe
      @FilmsSantaFe Год назад

      I think it refers to some type of feature size, like the characteristic dimension of the conductive paths or such, please someone correct me.

    • @hughJ
      @hughJ Год назад +1

      @@FilmsSantaFe It used to be, but I think that went out the window over a decade ago. Probably around the time we switched from planar cmos to finfet.

    • @andrewsuryali8540
      @andrewsuryali8540 Год назад +2

      The nomenclature is pure marketing, but not because the transistors are much larger. The transistors themselves are within the same order of magnitude in size as the nomenclature indicates, so it isn't that a 3nm transistor is 1 micron in size. What makes the nomenclature pure marketing is that it refers to the smallest "cut" they can make on the die. This means that there might be some part of the transistor that is roughly 3nm in size, but it's only that one part and nothing else. Furthermore, which part it is depends on the company, and in the past Intel has been more "honest" about which parts can be claimed to be the size their nomenclature indicates.
      With all that said, once we get to the angstrom scale, the nomenclature will become pure fantasy.

    • @TLguitar
      @TLguitar Год назад

      @@andrewsuryali8540 I don't know much about the actual architecture of such processors, but different sources state that the gates (which I assume are the points where current, or data, is transferred between transistors) in these CPUs are about 10 times as large as the _nm_ generation name would suggest. So considering Moore's Law spoke of doubling the number of transistors every two years, if we equate that with a doubling of density, then the marketing implies a theoretical 6-7 year gap between our actual technological standing and what the names suggest.

  • @jpuroila
    @jpuroila Год назад +2

    *3 companies. Global Foundries dropped out of the race at 14nm node and isn't investing into more advanced nodes, let alone EUV. On the other hand, we'll probably see more competitors from China if they figure out how to make EUV lithography machines (which, by the way, are produced by only one company).
    Also, Moore's law ("the density of transistors doubles every X months", where X is 12/18/24 depending on who you ask and when) is most definitely dead. That does not mean semiconductors no longer advance, but it's no longer exponential growth.

  • @Thomas-gk42
    @Thomas-gk42 Год назад +5

    Thankful for your work ❤

  • @EebstertheGreat
    @EebstertheGreat Год назад

    The way I have always understood it, "the end of Moore's Law" doesn't mean the end of progress in hardware scaling; it means the end of the breakneck pace of progress we've seen since the early '60s. And that certainly seems to be happening. New technologies are so expensive to implement that a doubling every two years is no longer a feasible target for any company. That's the end of Moore's Law.

  • @zachhoy
    @zachhoy Год назад +3

    a study-worthy episode Sabine, thanks for the diligence!

    • @AndrewMellor-darkphoton
      @AndrewMellor-darkphoton Год назад

      It feels sporadic and outdated

    • @zachhoy
      @zachhoy Год назад

      I respect your opinion but disagree @@AndrewMellor-darkphoton

  • @jonwesick2844
    @jonwesick2844 Год назад +2

    My first computer was a Commodore 64. I used it to write a program that scheduled experimenters' shifts at the TRIUMF particle accelerator.

  • @stalbaum
    @stalbaum Год назад

    TRS-80 CoCo. So, one thing we are all forgetting is that Moore's law is only in effect at the high end, where we have turned to parallelism - for decades - to compensate. Moore's law will live on for some time on the low end and super low ends. While gaming PCs may hit limits, we are indeed in a golden era of SBCs, edge devices, and radically declining costs.

  • @douglasmatthews2334
    @douglasmatthews2334 Год назад +4

    Sabine! You made my morning with your upload. Ty for explaining things in simple terms I can understand. Ty for all of your videos!

  • @KilgoreTroutAsf
    @KilgoreTroutAsf Год назад +1

    The original Moore's law is about CRAMMING more transistors into the same area, i.e. shrinking the node. At the time, this also meant increasing clock frequency, and thus performance, at roughly the same pace, since shrinking the distance between gates also meant the electric potential stabilized in less time.
    But this trend began to slow down dramatically in the early 2000s due to heat-dissipation problems, and as of 2023 it has been dead for 5+ years, regardless of marketing trying to redefine the concept.
    Of course there are still incremental improvements in the process, and chips with more cores and larger dies, but neither of these is the same as node shrinks, which is what the original law referred to.

  • @Harkmagic
    @Harkmagic Год назад +11

    There were concerns about this way back in 2000. Miniaturization has slowed dramatically. Increases in processing power have largely been the result of improvements in parallel processing. The early 2000s had faster processors than we do today. Instead, we now get more done with several parallel processes via multithreading.

    • @brainthesizeofplanet
      @brainthesizeofplanet Год назад +2

      Still, the nm race will hit a wall at around 2nm

    • @marcossidoruk8033
      @marcossidoruk8033 Год назад +8

      "The 2000s had faster(single threaded) processor than we do today" That is demonstrably wrong, totally ridiculous statement, at least if you are referring to non consumer research products thay I am not aware of that is just *extremely* untrue.

    • @MaGaO
      @MaGaO Год назад +2

      ​@@marcossidoruk8033
      I remember Pentiums hitting the wall at around 3.4GHz. Current Intel/AMD CPUs don't clock faster than that IIRC in single-thread mode even if they are more capable.

    • @aniksamiurrahman6365
      @aniksamiurrahman6365 Год назад

      Unfortunately no. The increase in computational power has slowed down even more than the slowdown in miniaturization. Just because we have more computational power than we need doesn't mean computational power is increasing at the same rate as before.

    • @marcossidoruk8033
      @marcossidoruk8033 Год назад

      @@MaGaO First, that is a high-end CPU of the late 2000s, overclocked. Most mid-range CPUs today have that as their base frequency and can be overclocked to 3.6 GHz or more; high-end CPUs can exceed 4 or even 5 GHz at peak.
      Second, higher frequency doesn't imply higher single-core performance at all. Cache sizes matter *a lot* for memory-intensive programs, and even in compute-bound programs most individual instructions are faster on today's CPUs. Modern CPUs absolutely smoke old ones even when running at the same frequency and on a single core, except maybe when executing a particular instruction that happens to be slow (I remember right-shift instructions being awfully slow in some modern Intel processors; also, MMX instructions are deprecated and made deliberately slow, so those will be slower as well). And if you add modern SIMD instructions like AVX-512 to the equation, it's not even remotely close: a modern CPU will easily 4x a relatively old one, or worse.
      The reason for all of this is that CPU manufacturers are not stupid. Single-threaded performance matters, since not every problem can or should be parallelized, and they are constantly trying to improve it. The only difference is that in the past the main bottleneck was floating-point operations; today those are blazingly fast and algorithmically very hard to make faster, so most of the optimization effort goes into better cache architectures (which also matter for multithreading), better branch prediction, more (and more flexible) ALUs, better pipelining and out-of-order execution, etc.
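      To make the SIMD point concrete, here's a minimal sketch of my own (not from the video; it assumes an AVX2-capable x86 CPU and gcc/clang with -mavx2, and AVX-512 just widens the registers further): the vector loop does eight float additions per instruction, which is part of why a modern core outruns an old one even at the same clock.

      #include <immintrin.h>  // AVX2 intrinsics
      #include <stdio.h>

      // Scalar sum: one addition per loop iteration.
      static float sum_scalar(const float *a, int n) {
          float s = 0.0f;
          for (int i = 0; i < n; i++) s += a[i];
          return s;
      }

      // AVX2 sum: eight additions per iteration on a 256-bit register.
      static float sum_avx2(const float *a, int n) {
          __m256 acc = _mm256_setzero_ps();
          int i = 0;
          for (; i + 8 <= n; i += 8)
              acc = _mm256_add_ps(acc, _mm256_loadu_ps(a + i));
          float lanes[8];
          _mm256_storeu_ps(lanes, acc);
          float s = lanes[0] + lanes[1] + lanes[2] + lanes[3]
                  + lanes[4] + lanes[5] + lanes[6] + lanes[7];
          for (; i < n; i++) s += a[i];   // leftover elements
          return s;
      }

      int main(void) {
          float data[1000];
          for (int i = 0; i < 1000; i++) data[i] = 1.0f;
          printf("scalar: %f  avx2: %f\n", sum_scalar(data, 1000), sum_avx2(data, 1000));
          return 0;
      }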

  • @hamesparde9888
    @hamesparde9888 Год назад +2

    This is a good video, but I feel that you've massively glossed over the difficulty of designing software to take advantage of accelerators and parallelism. A lot of people underestimate the difficulty of writing even "simple" software. It's much more complicated than a lot of people think.
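    To illustrate with a toy example of my own (a minimal C sketch, nothing from the video; compile with -pthread): two threads bumping a shared counter silently lose updates unless you add synchronization, and the bug only shows up sometimes, which is exactly what makes parallel code hard to get right.

    #include <pthread.h>
    #include <stdio.h>

    #define INCREMENTS 1000000

    static long unsafe_counter = 0;                 // no protection
    static long safe_counter = 0;                   // protected by a mutex
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < INCREMENTS; i++) {
            unsafe_counter++;                       // data race: read-modify-write is not atomic
            pthread_mutex_lock(&lock);
            safe_counter++;                         // correct, but serialized
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        // unsafe_counter is usually less than expected because increments get lost;
        // safe_counter always comes out exactly right.
        printf("unsafe: %ld  safe: %ld  expected: %d\n",
               unsafe_counter, safe_counter, 2 * INCREMENTS);
        return 0;
    }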

  • @philipp594
    @philipp594 Год назад +1

    I think they are overstressing this. Density scales with the square of the linear dimensions, so even a slightly smaller node goes a long way: a 30% linear shrink already gives roughly a 2x density gain (1/0.7² ≈ 2). The power-efficiency gains also allow for higher clocks (depending on which version of Moore's Law we are talking about). Memristor cache will also make a huge difference, because on today's chips half of the area is only cache.
    I think we still have a while before we need to go beyond silicon.

  • @celeroon89
    @celeroon89 Год назад +1

    It's pretty cool that a Swedish inventor came up with the solution for colour graphics in computers in the 70's.

  • @extremawesomazing
    @extremawesomazing Год назад +5

    I breathe a sigh of relief as I'm reassured that my dream of smart artificial nails will some day allow the world to watch Sabine on literal thumbnails and give her videos a literal thumbs up.

  • @someonerandom704
    @someonerandom704 Год назад +2

    I'm still finishing undergrad in computer engineering, so I'm definitely not an expert. However, I've been consistently shocked by the amount of functionality possible with previous gen hardware. The microprocessor in my old logitech mouse is powerful enough to run Linux. I think that even if Moore's law flattened out for a few years, we'd still be making significant performance improvements.

    • @ForeverTemplar
      @ForeverTemplar Год назад

      Well yeah, pretty much any PC from even ten years ago is still practical for everyday use. Most of the PC market today is driven by gamer/streamer bullshit and the ever present need to upgrade components for a few extra FPS.

  • @AnkhArcRod
    @AnkhArcRod Год назад

    Small correction at 4:20: e-beam lithography is not the same as scanning electron microscopy. They are different beasts. An EBL tool gets an entire room of its own, while SEMs are a dime a dozen. The cost of manufacturing has mainly gone up due to EUV lithography, whose throughput is lower than that of DUV scanners.

  • @RobertWGreaves
    @RobertWGreaves Год назад

    My first computer was a Radio Shack pocket computer, then I bought a Radio Shack Color Computer. Next came a Kaypro II. That was when I started to get serious about programming and getting them to “talk” to each other. Today as an audio engineer, all the audio work I do is done on a computer with various digital peripherals. Now that I am retired I am still amazed at the advances being made in the computer world.

  • @JustinCase-wy1wm
    @JustinCase-wy1wm Год назад +1

    Hello, small correction. The method used for defect inspection is (Scanning) Electron Microscopy, not Electron Beam Lithography. SEM is for imaging; EBL is used for resist patterning and subsequent structuring, usually in special cases where photolithography does not provide sufficient resolution.

  • @klin1klinom
    @klin1klinom Год назад +1

    Alternatives to silicon transistors have been actively researched for decades now, and all of them remain just that - "promising" - but nothing practical ever comes out of it. Clever layering or packaging is unlikely to keep prices at bay, since regardless of how you package transistors the yield per unit area remains nearly the same. Eventually, even chiplets will become too large and prohibitively expensive. Moore's law isn't dead yet, but in its original form it will be dead within 15 years or so.

  • @rayc7192
    @rayc7192 Год назад +1

    Excellent channel. This would be mandatory viewing for any science class I would teach.

  • @AEFisch
    @AEFisch Год назад +1

    Having invested in one of the key companies that enabled DUV and later EUV, I've seen that newer technology finds solutions. But maybe Moore's law isn't the definition we seek, as circuit density is only one measure of his point: greater computing ability for each chip. Lower power or faster cycling, as you mentioned, is one. And quantum computing as an adjunct to current systems is just happening.

  • @TrevKen
    @TrevKen Год назад +1

    It was never a "law" - it was a marketing slogan from Intel and Fairchild to keep customers on a regular upgrade cycle.

  • @woobilicious.
    @woobilicious. Год назад

    Minor nitpick/correction, Global Foundries have left the "high end" market, their 12nm+ process is now out of date (by only a few years), and they have officially announced they will not be migrating to a newer process.

  • @krashd
    @krashd Год назад

    I've always wondered why we don't just use more chips instead of trying to squeeze more onto a chip. If you lay chips flat in a socket (as we do now), then doubling the number of chips doubles the amount of board space required, which would be unfeasible. But we have used slots instead of sockets in the past (Pentium II): in the same space that currently fits a single CPU socket we could place multiple slots, where each slot could fit an individual processor standing on its end, inserted just like a wide, flat USB stick. The cooling block that sat over them would itself have slots that slid down over the chips to cool them; it would look just like a small upside-down toaster, but would obviously do the opposite of a toaster.
    In fact, to cut out all of the things that could go wrong when you combine intricate slots with an intricate chip setup and an intricate cooler, the chip manufacturers could just release the entire thing as a single package. Instead of a flat processor it would be block-shaped, and inside the block would be as many vertical dies as they could fit with cooling lanes in between, perhaps 5 to 7 dies. That would be a huge jump in processing power.
    The only reason we are fearing Moore's Law is that we are determined to stick to the current configuration of a 3cm-by-3cm chip placed flat in a socket. That is like someone placing four or five books flat on a bookshelf and complaining that they've run out of space on the shelf. We just need another configuration that allows more dies, and then "we can only fit 100 billion transistors on a die" becomes a problem we can deal with 50 years down the road, when we are running out of places to put dies.

  • @Tezza120
    @Tezza120 Год назад +1

    Every iteration of shrinking the transistors has been a huge task. It's why it's taken 40 years to get this far and not 4. I vaguely remember one company stuck on a 12nm process while another went to 7nm with relative ease, but I bet it wasn't easy. Very smart people are trying out various ways to manipulate nature to advance these technologies, while most people just want to voice opinions on Twitter.

  • @dougledbetter7039
    @dougledbetter7039 Год назад

    From my (inadequate) reading, it seems we have long since passed Moore's law. The last article I read on this topic said that we were essentially bumping up against the speed-of-light limit. I'd love to see Moore's law continue (and it might!). However, from a *mostly* consumer standpoint, computers are not getting very much faster (this "faster" is difficult to quantify, granted).

  • @battistazani8202
    @battistazani8202 Год назад

    1965: number of transistor on a chip will double every two years.
    2023: time has come where we have to double the chips every two years.

  • @douglasstrother6584
    @douglasstrother6584 Год назад +1

    Increased clock rates present new challenges: all of the fun characteristics of microwaves need consideration for interconnects, packaging, etc.

  • @Kiyoone
    @Kiyoone Год назад +1

    So... this explains why MOST modern electronics are not repairable. Circuits too small, impossible to reach, impossible to disassemble, etc.
    Some things are better old: repairable and easy to maintain (cars, washing machines, refrigerators, etc.).

  • @geraldmarcyk524
    @geraldmarcyk524 Год назад

    Gordon Moore once told me "Everyone has predicted the end of Moore's Law, even me. I keep being proven wrong."

  • @dr.a.w
    @dr.a.w Год назад

    It's been a bit of a golden age for computer performance. For quite some time, the hardware makers have been able to increase the speed of processing and bit schlepping faster than the coders can write ever sloppier and inefficient code. I'm afraid this is coming to an end. This article indicates that there may be physical limits to how fast we can compute, however, there is no limit to how bloated code can become.
    I remember installing Microsoft Word from a single master floppy onto a system floppy that had an operating system with a GUI, various drivers and a selection of fonts. There was still room on the floppy for several reasonably-sized documents, all on a computer with 512 K of RAM. (A Macintosh Plus in 1987) The laptop I obtained in January has 64 GB of RAM. The idea of having 64 GB of RAM in a single city, let alone in a single device that mere mortals could afford was in the realm of science fiction when I used that Mac +.

  • @robdielemans9189
    @robdielemans9189 Год назад

    Moore's law was about doubling the number of transistors on a chip. It doesn't say anything about the size of the chip. So whenever we reach the limits at the atomic level, we still have a couple of iterations where we simply make the chips bigger, or we can also stack them (not 3D, more matrix-like).

  • @walkabout16
    @walkabout16 Год назад +1

    "How dead is Moore's Law?" you ask with cheer,
    Let me assure you, it's still quite clear,
    A guiding star for progress bright,
    Though challenges may come to light.
    In silicon's realm, it paved the way,
    For shrinking transistors, come what may,
    But as we venture through each gate,
    New avenues of growth await.
    Technology's march, forever strong,
    Innovation's notes, a vibrant song,
    Beyond the silicon's shrinking stride,
    New paradigms shall be our guide.
    Quantum whispers, AI's embrace,
    Innovations bloom at rapid pace,
    Moore's Law, it lives in spirit true,
    Adapting, evolving, fresh and new.
    So fear not, friend, for progress thrives,
    Inventive minds, the future drives,
    "How dead is Moore's Law?" with hope we see,
    It lives in hearts, eternally free.

  • @kounaboy7011
    @kounaboy7011 Год назад +1

    17:14 these capitalisation circutice the technologies of flow dots. Could be reverse engineered into logarithmic functions of its analogic computational advantage. Introducing a core assistant and fractured, compartmentalized. If the analogical adherence coformulate. It would give us the technology to cool stacked transistors. Introducing the frame early. These can be powerful components surrounding the main Q computer. Softproblem toggle for example. Also coherence 😮

  • @bruderdasisteinschwerermangel
    @bruderdasisteinschwerermangel Год назад +1

    As a software engineer, I don't think this entire thing is worth it.
    We've had exponential growth of hardware performance; we have ridiculous amounts of computing power on even the cheapest devices.
    And yet we all feel like our devices have been getting slower and slower, mostly because software has been getting worse and worse.

    • @axle.australian.patriot
      @axle.australian.patriot Год назад

      "... mostly because software has been getting worse and worse." I fully agree with you on that. Just because a consumer PC has plenty of clock cycles and RAM, we may as well just use them all up, lol.
      People laugh at me because I still code in C and am learning assembly...

  • @Geekofarm
    @Geekofarm Год назад

    3D Architecture concepts go way back. They were referred to as "The Smoking, Hairy Golf Ball." The system would be spherical to minimise the distance signals had to travel, and hence increase the speed. The hair was the large number of connections the device would need to shove the data rapidly through. The smoke came from the several kilowatts of power that the thing would fail to dissipate.

  • @PavlosPapageorgiou
    @PavlosPapageorgiou Год назад +1

    ZX Spectrum here. 48k RAM but the upper 32k had a fault so I was wondering why it kept crashing until we got it repaired. Anyway one of the coolest cooling techniques is to let one core or smaller part of the chip run at a speed that makes it overheat, but only for about a microsecond. Then shift the computation to another part of the chip, and then another, eventually cycling back to the first as it cools down. That allows a single thread to run at a speed that would otherwise melt the CPU.
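    The real versions of this live in hardware and firmware at sub-microsecond timescales, but as a rough user-space illustration of the idea (my own sketch, Linux-only, with made-up slice sizes), you can rotate a hot loop across cores with sched_setaffinity:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long ncores = sysconf(_SC_NPROCESSORS_ONLN);   // number of online cores
        if (ncores < 1) ncores = 1;
        int core = 0;
        volatile double heat = 0.0;                    // stand-in for "hot" work

        for (long slice = 0; slice < 1000; slice++) {
            // Pin this process to one core for the current slice of work.
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(core, &set);
            sched_setaffinity(0, sizeof(set), &set);

            // Burn a short burst on that core, then move on before it warms up too much.
            for (long i = 0; i < 1000000; i++) heat += i * 0.5;

            core = (core + 1) % (int)ncores;           // hop to the next core
        }
        printf("ran 1000 slices across %ld cores (heat=%f)\n", ncores, heat);
        return 0;
    }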

  • @gljames24
    @gljames24 Год назад +1

    Technology is never exponential. It is always a sigmoidal curve as the tech reaches diminishing returns from the constraints of the system.
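    A quick way to see this (a toy sketch of my own with made-up parameters; compile with -lm): evaluate a logistic curve and print the step-to-step growth ratio. Early on the ratio sits near 2, indistinguishable from exponential doubling, and then it decays toward 1 as the curve saturates.

    #include <math.h>
    #include <stdio.h>

    // Logistic curve: f(t) = L / (1 + exp(-k * (t - t0)))
    static double logistic(double t, double L, double k, double t0) {
        return L / (1.0 + exp(-k * (t - t0)));
    }

    int main(void) {
        const double L = 1.0, k = 0.7, t0 = 20.0;   // made-up parameters
        double prev = logistic(0.0, L, k, t0);
        for (int t = 1; t <= 40; t++) {
            double cur = logistic((double)t, L, k, t0);
            // Well before t0 this ratio is ~exp(k) ≈ 2, i.e. Moore-like doubling;
            // past the inflection point it falls toward 1 (diminishing returns).
            printf("t=%2d  value=%.6f  growth ratio=%.3f\n", t, cur, cur / prev);
            prev = cur;
        }
        return 0;
    }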

  • @someb0dy2
    @someb0dy2 Год назад +1

    A couple of things.
    Global Foundries is also now a legacy-node fab. They have decided not to invest past their current 12nm node. I think this came out a couple of years ago, so it's not exactly news. They are innovating in the 14nm and 12nm (and older) nodes, but they are not planning to get into 7nm, 5nm, 3nm or smaller nodes. Of course this may change in the future.
    The only fabs still pushing on to leading-edge nodes are TSMC, Samsung and Intel. And probably a bunch of fabs in China as well (regardless of sanctions). It's probably a different topic to discuss what a 7nm or 3nm node even is, since everyone seems to define it differently nowadays.
    As for GPUs, Nvidia may have been the first to call its graphics chips "GPUs", but graphics processors were around before Nvidia. 3Dfx comes to mind, and even S3 and others had some as well. Not to mention big players like SGI, etc., before the x86 market basically ate them.
    Otherwise, a pretty decent video I think.

  • @szaszm_
    @szaszm_ Год назад +1

    I think the industry can still push the envelope for at least 5 or so more years, by using high NA EUV and multipatterning, but the costs are already starting to skyrocket on newer nodes. I don't know how fast we can keep shrinking nodes in the 2040s, I expect some fundamental change by that time.

  • @jamesnabors3643
    @jamesnabors3643 Год назад

    When the C128 was on the cutting edge of affordable home computers, we laughed at people paying huge money for a 5 MB hard drive. You'll NEVER fill that thing up!

  • @M0rn1n6St4r
    @M0rn1n6St4r Год назад

    Sabine H. (unfortunately) failed to mention this:
    Silicon loses its semiconducting properties when silicon transistors are only a few dozen silicon atoms thick. Silicon atoms are about 0.2 nm in diameter, and roughly 5 nm is the smallest (reliable) silicon transistor we can make.
    That's why researchers are exploring alternatives - _like molybdenum disulfide (MoS₂)_ - which is a 3-atoms-thick semiconductor, ≈0.65 nm in thickness, or ≈⅛th the minimum thickness of silicon transistors.

  • @TheGruntski
    @TheGruntski Год назад

    I recall sitting in a lecture hall listening to a lecture from a Stanford professor on the death of Moore's law. He was very impressive and even wore a three-piece suit. He told us we'd come to the end of the road at 3um, so we would be wise to start retraining - especially if we were analog designers. That was in 1982. So now I'm still designing, still in analog, and gate lengths are 3um. Of course the circuit designs are much different than they were in 1982, and I would not guess how much smaller transistors might get, or whether some other physical structure might replace them. I do know that professors can succeed in making fools of themselves.
    I will make a comment about the difference between the field of processing data and the field of new energy sources. When you're dealing with data, you have to understand that thoughts have no mass, and no energy intrinsic to the thoughts or concepts that one wants to transmit or process. The equipment required to transmit and process thoughts and concepts is what consumes power and material. So physicists who dare to predict the end of the road, i.e. the Moore's law limit, risk making fools of themselves. It is best they stick with power generation.