3 New Groundbreaking AI Chips Explained

  • Published: 11 Jun 2024
  • Visit l.linqto.com/anastasiintech and use my promo code ANASTASI500 during checkout to save $500 on your first investment with Linqto
    LinkedIn ➜ / anastasiintech
    Support me at Patreon ➜ / anastasiintech
    Sign up for my Deep In Tech Newsletter for free! ➜ anastasiintech.substack.com
    Timestamps:
    00:00 - Intro
    00:33 - New Blackwell GPU Explained
    06:35 - Demand for AI Chips
    07:01 - New 4-Trillion-Transistor Chip Explained
    10:04 - Why do we need Huge Chips?
    14:04 - New Analog Chip Explained
  • Science

Comments • 512

  • @AnastasiInTech
    @AnastasiInTech  2 months ago +31

    Go to l.linqto.com/anastasiintech and use my promo code ANASTASI500 during checkout to save $500 on your first investment with Linqto

    • @Zero-oq1jk
      @Zero-oq1jk 2 months ago +2

      You said before that we would get RISC-V. Will we? I mean for consumer PCs.

    • @langhans156
      @langhans156 2 months ago +3

      Cerebras is not! outperforming Moore's law. Moore's law states that the density of transistors will double every few years. In what way does the Cerebras chip outperform other chips according to this metric?

    • @JakeWitmer
      @JakeWitmer 2 months ago

      Please forgive me if you have already covered carbon nanotube bundle memory from Nantero, or nanodomain progress from Xyvex. If you have not covered extending Moore's law using Kurzweil's favored paradigm (nanotube arrays / "pistons"), could you? They might be able to store "state" without using energy or magnetism, but physical position read by lasers, for speed and resilience (MRIs + implants = safe). I know this is more theoretical than your recent videos, but Nantero did have a functional product...
      I apologize in advance if you've covered this already. Your channel is amazing! Thank you for the wonderful updates!

    • @clarkwallcraft2741
      @clarkwallcraft2741 2 months ago +1

      @AnastasiInTech would you consider sharing your investment portfolio? I am very new at doing my own investments and I would be very interested in which startups you like. It doesn't have to be $ amounts, and I will not consider your words as investment advice, "disclaimer".

    • @hdcomputerkeith
      @hdcomputerkeith 2 months ago

      I GPU mine and rent them out for AI or PoUW projects like Flux. I avoided the 4090s but am getting ready to buy the 5090. I hated the heat that GDDR6X VRAM puts out, so I hope the GDDR7 VRAM fixes that heat issue. I own 20 EVGA 3070 GDDR6 GPUs. So yeah, let's go 5090 GPUs! Before Xmas

  • @anirudhapandey1234
    @anirudhapandey1234 2 months ago +50

    As an engineer from this industry, I would like to say this channel is very informative and up to date.

  • @igorkogan9138
    @igorkogan9138 2 months ago +122

    Hi Anastasia. Congrats! Another “huge” podcast. You are the only YouTuber uniquely qualified to bring us the latest innovations in AI/processor technologies. I’ve been retired now for two years, after spending 40 years as a semiconductor equipment design engineer in Silicon Valley. So it’s especially fascinating to realize that not only is Moore's law still relevant and thriving, some technologies carry a promise to far exceed any expectations. Thank you again for being such an enthusiast of your profession. P.S. It is especially fascinating that the first time I ever heard about a private equity portfolio was on your program. Even your commercials are educational. 😊

    • @FloridaMeng
      @FloridaMeng 2 months ago +3

      Yep. She's the cornerstone for all ai and technology matters for me.

    • @tehgriefer9317
      @tehgriefer9317 2 months ago +6

      Unlike the self-proclaimed "Tech-Unicorn" Annia, she is the real deal.

    • @inspectorcrud
      @inspectorcrud 2 months ago +1

      I wonder what your personal liability could be with respect to private equity investments.

    • @Sven_Dongle
      @Sven_Dongle 2 months ago +3

      Morse law? Gordon Moore meets Samuel Morse? 40 years, eh?

    • @v1kt0u5
      @v1kt0u5 2 months ago +3

      She's amazing indeed... but the only one? That's a disrespect to some others!

  • @shahabdolatabadi4116
    @shahabdolatabadi4116 2 months ago +79

    Ma'am, if you are a chip designer, I think a profound and clear tutorial on computer architecture and practical chip design would be a great contribution.
    Thanks for your videos.

    • @v1kt0u5
      @v1kt0u5 2 months ago +4

      Okay 🤣

    • @Ubya_
      @Ubya_ 2 months ago +9

      "Profound and clear" and "tutorial" don't really go hand in hand. There's a reason why a degree takes years.

    • @shahabdolatabadi4116
      @shahabdolatabadi4116 2 months ago +5

      @@Ubya_
      Apparently, you haven't seen any of the many great courses available on YouTube, many of which are far better than those courses from Ivy League universities. Simply put, you're wrong, buddy.

    • @Gravity360
      @Gravity360 2 months ago

      @@shahabdolatabadi4116 Well, that is actually an opinion. People learn based on several factors. Some are audio learners, some kinesthetic, and then there are the ones who can just read the information. Some use a combination of methods. But when it comes down to the 'course', it's honestly gonna be better if they use one or more of the known learning styles to present the information. Sometimes it can come down to having an analogy that is easy for people to relate the information to. A good example of this is computer networking being compared to the mail/postal system. I have found some YouTube videos very informative because they meet these standards, and that makes it easier to digest the information for retention.

  • @RonLWilson
    @RonLWilson 2 months ago +39

    Way back when I was getting my BSEE in the '70s, I always loved analog computers, and I am glad to see them now making a comeback!

    • @kristas-not
      @kristas-not 2 months ago +2

      How many modern EE or CS majors understand what an operational amplifier (op-amp) is, or even realize what the "op" really means?

    • @k-c
      @k-c 2 months ago +2

      Analogue technology is way too underrated. Incredible knowledge has been lost.

    • @kristas-not
      @kristas-not 2 months ago

      @@monad_tcp those *are* digital, but there's a *lot* of talk/chatter about analog ai chips and analog neuromorphic chips.

  • @douginorlando6260
    @douginorlando6260 2 months ago +6

    You are creating for yourself a remarkable reputation as a source of accurate, knowledgeable, and relevant IC technology coverage. And this reputation is a valuable commodity that can’t be easily replaced. CONGRATS.
    I’ve been around chip tech since the PDP-8 computer was built on 7400-series chips that only had a handful of gates per chip. There have been many technology approaches and architectures over the years, and you seem to have an accurate perspective on how the horse race of chip technology plays out.

  • @nahkanukke
    @nahkanukke 2 months ago +12

    I have watched your content from the beginning and it has been very informative. I love the way you present the science of computing. Keep these coming. Thanks Anastasi🐿

  • @danmarquez3971
    @danmarquez3971 2 months ago +2

    Anastasi is insanely amazing! She is incredibly smart to understand this technology, and simultaneously, beautiful enough to be a super model! She represents the next step in human evolution!

  • @rafaelruiz-tagle358
    @rafaelruiz-tagle358 2 months ago +3

    As a person who has never been able to understand advanced math, which has prevented me from becoming an electrical or mechanical engineer, I have always found this kind of technology incredibly fascinating.
    Anastasia, you always explain things so well, and you always keep the entire segment fascinating. You make it "easy" for an inept person like myself to understand this material, and you make me "want" to learn more. Thank you for all the wonderful videos you make! I wish I would've had you as my math professor. Maybe I would've been able to fulfill my dream and become an engineer. 🙂

  • @SarahSchiffer-mj5el
    @SarahSchiffer-mj5el 2 months ago +11

    Any chance you could cover Extropic AI and their new thermodynamic processors? I think it seems like a promising area of development, but I haven't seen it covered by any experts I trust.

    • @AnastasiInTech
      @AnastasiInTech  2 months ago +11

      I've read their light paper. It's interesting. I would be curious to learn more if anyone can connect me to them..

  • @bbamboo3
    @bbamboo3 2 months ago +14

    My father taught me analog computing as a teenager; two pots could multiply. He was a rocket scientist and totally understood digital computing, but he also continued to say that analog computing was not being used effectively. Inertial guidance was partly analog in the beginning. How interesting to see the hybrid approach implemented in CMOS.

    • @chriswininger3022
      @chriswininger3022 2 months ago

      Jonathan Mills was one of my favorite professors in college. He was doing a lot of amazing work on modern analog computing and hybrids: cns.iu.edu/docs/netscitalks/j-mills.pdf. Sadly, he has since passed. Great thinker, good teacher.

  • @bakedbeings
    @bakedbeings 2 months ago +10

    It was a bit easier to imagine useful outcomes for FP8 calculations, but 4-bit (1/8th the precision!) is just wild. Fun times in computer science 🎉

    • @paultparker
      @paultparker 2 months ago +3

      Have you seen the paper about high-performing one-bit LLMs? They actually require two bits, but they’re still half the size of FP4.

  • @iliasiosifidis4532
    @iliasiosifidis4532 2 months ago +7

    I've been excited about acceleration since 2021, when Blender changed from CUDA to OptiX. Oh boy, same GPU, but 7-times-faster renders!
    Acceleration is the future for sure.

  • @DaveShap
    @DaveShap 2 months ago +2

    Love your coverage. Great work. Thanks!

  • @pythonyousufparyani8407
    @pythonyousufparyani8407 2 months ago +10

    Thank You for another video. I am gonna watch it now.

  • @bob38161
    @bob38161 2 months ago +1

    Was looking forward to your video ever since the Blackwell event!! Thank you!

  • @imjody
    @imjody 2 months ago +1

    Love your deep dives into the latest technology, Anastasi! Thank you. :)

  • @knofi7052
    @knofi7052 2 months ago +1

    I love your videos, Ana, because I am always learning amazing new things!😊 Happy Easter!😉

  • @BrianFedirko
    @BrianFedirko 2 months ago +1

    DTA: Decade of Technological Advancement. It's a good acronym. DTA does seem to be applicable to this era in time. Analog is a good use of the letter "A" too, as I love the concept of analog computing and chips. It would be nice to someday print 100 electronic devices on a single wafer, and use them out of pocket at a whim. The phone is starting to become this, but I'm sure some smarty pants can collect more divergent ideas into a new multi-device. I'd love to be able to microwave a pizza from my pocket... haha. Gr8! Peace ☮💜

  • @ZoOnTheYT
    @ZoOnTheYT 2 months ago +8

    After watching some videos of Neuralink's first participant, Noland... I wonder if, in the end, the analog component that digital is going to connect with, for the most powerful and most energy-efficient output, is our brains.

  • @joe_limon
    @joe_limon 2 months ago +12

    There is a research paper showing these networks scaled down to binary, with great gains in efficiency, speed, and memory.

  • @paullitzbarski2632
    @paullitzbarski2632 2 months ago +1

    Your videos are awesome! They are to the point, up to date, and with no distractions. I like your accuracy and enthusiasm; it is inspiring and whets the appetite for all the current processor developments.

  •  2 months ago +7

    Love how balanced your videos always are! You almost never succumb to the hype! :)

  • @nopponw2525
    @nopponw2525 2 months ago +2

    Thank you very much. You simplify a field that I thought I could never understand into a more reasonable conversation. Please keep making these videos 🎉

  • @RestlessBenjamin
    @RestlessBenjamin 2 months ago +6

    Great video today. I've heard a little bit about these new chips but your explanations really helped me understand why they are so important.

  • @JohnSmall314
    @JohnSmall314 2 months ago +6

    Love the content, I've sent it to my friends

  • @vrendus522
    @vrendus522 2 months ago +1

    Interesting talk, thanks Anastasi. Dan :)

  • @HansVanIngelgom
    @HansVanIngelgom 2 months ago +1

    I once profiled a program that I suspected to be slow because it copied gigabytes of data multiple times. The copying time was only marginal. Instead, it spent most of the time performing a log calculation, then rounding the results. I implemented a low-resolution log function that turned out to be faster than the FPU version. That's when I learned that calculating something to an insane number of digits is often a huge waste.
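    A minimal Python sketch of the kind of shortcut described above (illustrative only; the table size and function name are assumptions, not the commenter's actual code):

      import math

      # Coarse log2: look up the mantissa in a small table instead of calling a full-precision log.
      TABLE_SIZE = 64
      LOG2_TABLE = [math.log2(0.5 + i / (2 * TABLE_SIZE)) for i in range(TABLE_SIZE)]

      def fast_log2(x: float) -> float:
          """Low-resolution log2(x) for x > 0: exponent from frexp plus a table lookup."""
          mantissa, exponent = math.frexp(x)              # x = mantissa * 2**exponent, mantissa in [0.5, 1)
          index = int((mantissa - 0.5) * 2 * TABLE_SIZE)  # which table bucket the mantissa falls into
          return exponent + LOG2_TABLE[index]

      # Coarse, but often good enough when the result is rounded anyway, as in the story above.
      print(round(fast_log2(1000.0)), round(math.log2(1000.0)))  # both print 10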

  • @TickerSymbolYOU
    @TickerSymbolYOU 2 months ago +1

    Awesome breakdown of some of the most important technologies in the world!

  • @dchdch8290
    @dchdch8290 2 months ago +20

    from 8bit to 4bit ... we are looking at the next Turing Award candidate here :D

    • @colinmiddleton9444
      @colinmiddleton9444 2 months ago

      So they have found that you get better and more efficient intelligence with less accuracy. It all makes sense now. That explains everything.

    • @almightysapling
      @almightysapling 2 months ago +1

      No need to stop there. Just read a paper that claims you can drop to 1.58 bits (just 0 and +/- 1) and get basically the same quality.

    • @Sven_Dongle
      @Sven_Dongle 2 months ago +1

      @@almightysapling BS

    • @yahiiia9269
      @yahiiia9269 2 months ago

      @@Sven_Dongle It's true.

  • @aipsong
    @aipsong 2 months ago +1

    Another great video!!!! Thanks!!!

  • @rrangel1968
    @rrangel1968 2 months ago

    Awesome video Anastasi!

  • @amribraheem8674
    @amribraheem8674 2 months ago +4

    How does Cerebras deal with the cooling problem over such an area?

  • @dongedye3193
    @dongedye3193 2 months ago +2

    You have the best channel on YouTube! I really enjoyed the latest one: "This is Huge". Incredible info! Thanks!

  • @cosmicaug
    @cosmicaug 2 months ago +1

    5:32
    «Honestly, 4 bits is quite low and that makes me curious to see how well it's going to work for inference application.»
    There's a relatively recent paper out there that suggests you can do pretty well using a single ternary digit, i.e. values of -1, 0, +1 (about 1.585 binary bits equivalent), for your weights.
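    For readers wondering what ternary weights look like in practice, here is a minimal illustrative sketch in Python/NumPy (the thresholding rule and scale choice are common conventions, not necessarily the exact method of the paper mentioned above):

      import numpy as np

      def ternarize(weights: np.ndarray, threshold_ratio: float = 0.7):
          """Map weights to {-1, 0, +1} plus one shared scale (illustrative sketch)."""
          delta = threshold_ratio * np.mean(np.abs(weights))         # weights below this magnitude become 0
          ternary = np.where(np.abs(weights) > delta, np.sign(weights), 0.0)
          nonzero = ternary != 0
          scale = np.mean(np.abs(weights[nonzero])) if nonzero.any() else 0.0
          return ternary.astype(np.int8), float(scale)

      w = np.random.randn(4, 4).astype(np.float32)
      t, s = ternarize(w)
      print(t)                          # entries in {-1, 0, 1}: about 1.585 bits of information each
      print(np.abs(w - s * t).mean())   # reconstruction error of the ternary approximation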

  • @dchdch8290
    @dchdch8290 2 months ago +5

    thank you for this great video !

  • @marcovillani4427
    @marcovillani4427 2 months ago +4

    Amazing!!!! Congrats

  • @kinshin67
    @kinshin67 2 months ago +1

    Your insights are deep and your explanations are outstanding.

  • @sirousmohseni4
    @sirousmohseni4 2 months ago +2

    Thanks for the video.

  • @drewbizdev
    @drewbizdev 2 months ago +1

    Another excellent video Anastasi. Technology is going crazy. 🙂

  • @wintergreen4978
    @wintergreen4978 2 months ago +4

    Thanks and wish you more success ❤

  • @user-vh1td2tw1b
    @user-vh1td2tw1b 2 months ago +1

    I am grateful for your podcast. Thank you

  • @user-tf9fi7dh2n
    @user-tf9fi7dh2n 2 months ago

    Love the hard work and dedication that you put into making these videos for us Anastasi👏👏👍

  • @everettputerbaugh3996
    @everettputerbaugh3996 2 months ago +1

    IBM was the computer to beat in the 1950s, '60s, and '70s. They started out using 4-bit words (Binary Coded Decimal) and moved through Extended BCD (6-bit words) to 8-bit words (ASCII). Some modern equipment uses Unicode (16-bit words), which can represent nearly every written language in use today. The size of the word affects the amount of work the chip can do in a given bit of time, and especially how much physical memory is used for each character. If all you are doing is math, you don't need big words.

  • @Salara2130
    @Salara2130 2 months ago +1

    You have to love science. Everything always sounds so bombastic and highly complex. But then you get down to it and it comes down to "just use 2 chips", "just decrease the size of the numbers you calculate with", "just use some redundancy".

  • @BilichaGhebremuse
    @BilichaGhebremuse 1 month ago

    Excellent explanation

  • @Printman3332
    @Printman3332 2 months ago +1

    I like how well you explain things. Very good. 👍 👍

  • @billknight7342
    @billknight7342 2 months ago +3

    Wasn't Apple's Ultra chip a dual-die design well before this "first time" chip?

  • @zelogarno4478
    @zelogarno4478 2 months ago +1

    It is very interesting! Thanks!

  • @2smoulder
    @2smoulder 2 months ago

    Anastasi...you smashed this future new chip design review.

  • @tom-et-jerry
    @tom-et-jerry 2 months ago +3

    Whaoooo, I'm waiting for this new technology (analog chips with capacitors) with great impatience! It will blow my mind! I love all your videos! I'm a programmer.

  • @JohnSmith762A11B
    @JohnSmith762A11B 2 months ago +31

    Anastasi would make a great Chief Engineer on a starship. Now we just need to create the starship! Not the SpaceX one, more like the Enterprise.

  • @JD-jdeener
    @JD-jdeener 2 months ago +9

    It's so hard to concentrate on the content when it's being delivered by such an exceptional presenter. I do agree, these are exceptional times we are living in.

    • @shiccup
      @shiccup 2 months ago

      Lol it takes me like 3-4 times the watch time to actually finish the video because i have to pause and think about everything she brings up

  • @CliftonDavis-je7qu
    @CliftonDavis-je7qu 2 months ago +2

    Thank you Anastasia, you're great

  • @donaldhenderson1870
    @donaldhenderson1870 2 months ago +1

    Never heard about analog chips before. Very cool indeed!

  • @willykang1293
    @willykang1293 2 months ago +3

    1. Jensen Huang: It’s OK, Hopper.😂😂
    2. TSMC has to keep the fabs super clean, so they must change the filters on top of the fabs very frequently, maybe once every three months. And the activated carbon inside the filters must be completely new; they cannot bear the cost if the activated carbon is even reused.
    3. Morris Chang once lived in a house, maybe close to 20 years ago, that is right next to my brother's house now, by the way.

  • @justfellover
    @justfellover 2 months ago +3

    While there is statistical variation in the exact release date of individual advances, Moore's law marches on.

  • @jamesjohn2537
    @jamesjohn2537 2 months ago +1

    Hahahaha 😂😂😂 first time seeing you tell jokes out of the blue. Cheers 🎉 Have a great Easter

  • @BilichaGhebremuse
    @BilichaGhebremuse 2 months ago

    Great, but it takes a little bit of deep knowledge to understand. Great work, well summarized.

  • @Ryan-qt9jm
    @Ryan-qt9jm 2 months ago +1

    Awesome video! Great info here :) I especially appreciated the lead on private equity investing. Thanks

  • @ConstellationMushrooms
    @ConstellationMushrooms 1 month ago

    I've always wanted to hear Anastasi's educational story of her journey through college and work. She's multilingual, probably a total math whiz, absolutely gorgeous, and her excitement for chip design is always infectious. I wanna hear how that came to be! :)

  • @En1Gm4A
    @En1Gm4A 2 months ago +6

    You've got telescope images as your phone background; plus points for that.

  • @hightechfarmers
    @hightechfarmers 2 months ago +1

    Appreciate your expertise. Would love to hear your comparison with the Dojo architecture. They too use one of the special packaging processes from TSMC to put 25 dies on a single wafer, as I understand it. It seems like that would give a higher yield on the final integrated wafer than a single monolithic chip would all-in: you get to pick the 25 best-yielding smaller chips and assemble them on a larger package with only interconnect, which seems similar to Blackwell but at a whole other scale. Would love to hear your thoughts.

  • @michaelurban1937
    @michaelurban1937 2 months ago

    Love you, Anastasia!

  • @yougeo
    @yougeo 2 months ago +1

    I think her explanation of the dual-chip design is interesting; it makes me realize that it really isn't some great breakthrough but was actually the second-best choice Jensen had to make, because TSMC could not deliver the next great density improvement at the larger scale.
    That's interesting to know. It's also interesting to know that their costs are going to go up and their margins may not be as high. One thing I've never understood, though, is why TSMC isn't the company that makes all the margins, and why they let a company like Nvidia, which just gives them designs and doesn't make anything, capture the margins.

  • @mordokai597
    @mordokai597 2 months ago +2

    lol, i'm trying to pay attention, but i keep thinking "OMG! she's waving a wafer around again" xD I still remember "the incident" with the shattered wafer ;)

  • @janmagrot
    @janmagrot 2 months ago

    Nice, thank you.

  • @dchdch8290
    @dchdch8290 2 months ago +7

    We need bigger GPUs ! 😎

    • @mdo5121
      @mdo5121 2 months ago +1

      the man can really sell it...right

  • @HarryLewinASR
    @HarryLewinASR 2 months ago +1

    Great video. Where does Dojo fit in this scheme? Does it have any advantages?

  • @gamesndrinks
    @gamesndrinks 2 months ago +93

    Who else ran to the channel at sublight speed

    • @AustinThomasPhD
      @AustinThomasPhD 2 months ago +13

      I would be impressed by anyone running at greater than sublight speeds.

    • @yeejay6396
      @yeejay6396 2 months ago +1

      Beans

    • @JumpDiffusion
      @JumpDiffusion 2 months ago +3

      Everyone

    • @drunknmasta90
      @drunknmasta90 2 months ago +3

      Duh I have mass

    • @jtjames79
      @jtjames79 2 months ago

      ​@@AustinThomasPhD I identify as a tachyon, you bigot!

  • @danobrien3601
    @danobrien3601 2 months ago +1

    Thanks for the updates. Always thought analog computing was undervalued, and that giant chip is just incredible.

  • @alexb.6800
    @alexb.6800 2 months ago +1

    Idea for the next video: make a review of your stock picks related to semiconductors and related industry.

  • @glasperlinspiel
    @glasperlinspiel 2 months ago +1

    It’s fun to see you so excited about analog. I think we need an app that translates your body language into an investment indicator!

  • @CliftonDavis-je7qu
    @CliftonDavis-je7qu 2 months ago +5

    Mind blown 🤯

  • @JoeLion55
    @JoeLion55 2 months ago +2

    7:30 To be fair to Moore’s law… it doesn’t really state that transistors “per chip” double. It states that transistors “per area” double. The Cerebras chips are not breaking that - they are still putting the same number of transistors per square cm as the other N5 TSMC chips. It’s just that historically “chip sizes” have stayed roughly the same, more or less, generation to generation, due to manufacturing constraints, heat dissipation, system design constraints, package design, etc. So, assuming the chips are roughly the same size, if transistors shrink and the transistor count doubles from one generation to the next, that roughly means transistors per chip has doubled. But in reality, the Moore's law charts should all have transistors per square centimeter as the y-axis, not transistors per chip. (And for the pedants, of course Moore’s law has also been modified to indicate “performance per area” instead of just transistors per area.)
    Having said all that, breaking free of the standard maximum die size as constrained by the photomask, and interlinking multiple dies together so they effectively function as one chip, is a pretty incredible breakthrough, especially without using an interposer or other off-die layer like Intel’s “tiles” or AMD’s “chiplets” use.

    • @pentachronic
      @pentachronic 2 months ago

      Agreed, it is misleading to say this is breaking Moore’s law. Not true. Wafer scale has been done before. Nothing new.

  • @DrGazza
    @DrGazza 2 months ago

    Hi Anastasia, I enjoy and learn from your well-presented videos. Just one comment: in your discussion of analogue computing you seem to suggest that transistors do digital processing and (correctly) that capacitors and resistors do analogue processing. Is this correct? Transistors are commonly used in analogue circuits such as audio amplifiers. I know that in digital circuits they are used as switches; is there a cost in using them in analogue mode (i.e. energy, thermal costs)?

  • @zerorusher
    @zerorusher 2 months ago +1

    Since Nvidia was forced to use N4P, it's safe to say that another leap in performance is almost guaranteed in the next iteration from node improvement alone.
    Regarding the decision to make FP4 the standard, the trade-off of more parameters for less precision makes sense, since there are recent studies showing that quantized models suffer a negligible performance drop compared with full precision, and that training models at lower precision from scratch may close the gap even further (given the right architecture, of course).
    About the hybrid chips using capacitors for addition, I wonder if another future benefit could be the ability to asynchronously discharge the accumulated charge when a certain threshold is reached. Such an architecture could resemble neuron spikes and the way the human brain works asynchronously and at lower frequency.
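    As a rough illustration of the FP4 trade-off mentioned above, here is a Python/NumPy sketch of round-to-nearest quantization onto the FP4 (E2M1) value grid with a single per-tensor scale; real deployments typically use finer-grained block scaling, so treat this as a toy example:

      import numpy as np

      # The eight non-negative values representable in FP4 (E2M1); negative values mirror them.
      FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0], dtype=np.float32)

      def quantize_fp4(weights: np.ndarray):
          """Round-to-nearest FP4 quantization with one per-tensor scale (sketch only)."""
          scale = max(float(np.max(np.abs(weights))), 1e-12) / FP4_GRID[-1]  # map the largest |w| to 6.0
          scaled = np.abs(weights) / scale
          idx = np.abs(scaled[..., None] - FP4_GRID).argmin(-1)              # nearest grid value per weight
          return np.sign(weights) * FP4_GRID[idx], scale

      w = np.random.randn(3, 3).astype(np.float32)
      q, s = quantize_fp4(w)
      print(q)                          # each entry is one of +/- {0, 0.5, 1, 1.5, 2, 3, 4, 6}
      print(np.abs(w - s * q).mean())   # quantization error; per-block scales shrink this in practice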

  • @eagledctr7
    @eagledctr7 2 months ago

    Thanks!

  • @IvarDaigon
    @IvarDaigon 2 months ago +16

    Kind of makes you wonder why Cerebras doesn't just make round chips: if they are going to take up the entire wafer, why not just use all of it? (They could place notches on the edge to help with orientation.) I think the biggest problem for monolithic chip designs is that they can't easily be modularized or repurposed, and if the chip fries you lose the whole thing, so even though the chip has redundancy built in, it's still not a fully redundant system.

    • @supermodal
      @supermodal 2 months ago +2

      I assume a partial chip section is not a completed circuit.

    • @pentachronic
      @pentachronic 2 months ago

      Step and repeat is how IC reticles are exposed.

    • @IvarDaigon
      @IvarDaigon 2 months ago

      @@supermodal the cores are tiny so they could go right up to the edge of the wafer

    • @HelgeMoulding
      @HelgeMoulding 2 months ago

      I suspect it has to do with the circuit design software everyone is using. Laying out the logic for a compute unit in a curved segment at the edge of the wafer is possible, but the software isn't set up for that.

    • @pentachronic
      @pentachronic 2 months ago

      @@HelgeMoulding It has to do with the reticle step and repeat. X/Y grid step and repeat. The edge of the wafer is exposed, but due to the wafer being circular you never get full rectangular coverage.

  • @416dl
    @416dl 2 months ago +6

    Some of these concepts are so elusive for non-technical types such as myself at least. Thank you for presenting great information along with superb and clear explanations that include helpful, and I presume accurate, visuals. Will always look forward to delving deeper without fear with your particularly effective guidance. Buona Pasqua.

  • @Citrusautomaton
    @Citrusautomaton 2 months ago +1

    Commercial analog applications!? I’ve never heard of these EnCharge people, but they seem like the real deal! I’ve always been really fascinated with analog computers, so i’m really excited that they’re making a comeback!

  • @SurfinScientist
    @SurfinScientist 2 months ago +1

    Good video! You obviously know what you are talking about.

  • @twinkletoe-s
    @twinkletoe-s 2 months ago +1

    Wow, I had no idea someone was using the "interconnects" as a capacitor, considering that this is already occurring within a chip. That is genius and opens a massive area for alternative computing methods. It could reduce heat/power and the need for additional transistors. Very exciting stuff; I need to do some research on this, thank you.

  • @Hermas_360
    @Hermas_360 2 months ago +3

    Great content, but I must admit that the first time I got to your channel I thought that you were created by AI. Anyway, keep up the good work. Greetings.

  • @derasor
    @derasor 21 days ago +1

    EnCharge AI's approach is genius.

  • @sanfordschoolfield710
    @sanfordschoolfield710 2 months ago +3

    Thanks

  • @rsum123able
    @rsum123able 2 months ago +1

    Hi Anastasia. Always love your content! Tesla also creates its own chips, coupled with Nvidia's chips. 👍

  • @Integr8d
    @Integr8d 2 months ago +2

    When Anastasi goes to the car dealer to negotiate a purchase, they end up paying her.

  • @kostyanoob
    @kostyanoob 2 months ago +1

    Very good overview. Thanks, Anastasia.
    My comment is that a *solid software stack* is the real key to the adoption of every new hardware architecture out there. That's also the success behind nvidia...

  • @theosib
    @theosib 2 months ago

    I wrote a workshop paper years ago that shows that lower precision can be compensated for just by including that limit in the training process. Backprop has to be done in floating point to accumulate small weight updates. But all forward calculations have to be done in fixed point. The imprecision causes prediction error, which backprop compensates for naturally.
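    What the comment describes is essentially quantization-aware training. Below is a minimal PyTorch sketch of the usual trick (the forward pass sees fixed-point values while gradients flow through in floating point via a straight-through estimator); it is a generic illustration, not the commenter's paper:

      import torch

      def fake_quantize(x: torch.Tensor, bits: int = 8, scale: float = 0.1) -> torch.Tensor:
          """Forward: fixed-point values. Backward: gradients pass through untouched (STE)."""
          qmax = 2 ** (bits - 1) - 1
          q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale  # quantized forward value
          return x + (q - x).detach()                                       # value of q, gradient of x

      w = torch.randn(8, requires_grad=True)
      loss = fake_quantize(w, bits=4).sum()   # the forward pass only ever sees 4-bit fixed-point weights
      loss.backward()
      print(w.grad)                           # full-precision gradients, so small updates still accumulate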

  • @GuidedBreathing
    @GuidedBreathing 2 months ago

    3:20 this is a bit fun to watch ☺️ ‘pathos’ too 😁

  • @gazzacroy
    @gazzacroy 2 months ago +1

    wow. it's just totally mind blowing

  • @oldtools6089
    @oldtools6089 2 months ago +1

    giant chips comprised of compartmentalized chiplets running in parallel is *obviously* better than the current commercial paradigm cudas are jamming on, but I believe that the ideal hierarchy of space and appropriated cache looks more like IBM/Motorola's Cell. Visualizing answers to human problems should be given a structure that provides chip designers with an inherently optimized skeleton to experiment with while maximizing throughput for unique formulas.

  • @taith2
    @taith2 2 months ago +1

    When I think of it, regular RAM is also capacitor-based.
    Why don't chip manufacturers slap a lot of capacitors, in the form of metal layers, directly on the chip?
    After all, transistor cache doesn't scale well compared to logic gates.
    It would require rewriting EDA software from the ground up,
    but the gains would be astonishing, in my opinion.

  • @duncan94019
    @duncan94019 2 months ago

    Thank you for your video. In the mid-'70s I had a chance to be an early user of a hybrid computer (digital/analog computer), and in the '60s I had a chance to use an analog computer. I'm glad to see analog making a comeback. I think there are other interesting applications for analog. For example, maybe it can be used in ensemble modeling. (I recommend The Primacy of Doubt by Tim Palmer.)

  • @freddoflintstono9321
    @freddoflintstono9321 2 months ago

    I wonder how you're going to power such a massive chip. It'll need a large number of supply feeds. Fantastic episode - I learn something new every single time, thank you. I do like you getting animated about good ideas, nice.

  • @Nirvana504
    @Nirvana504 2 months ago +1

    Anastasi, what do you think about Qualcomm's future in this space?

  • @AdvantestInc
    @AdvantestInc 2 months ago

    Incredible insight into the cutting-edge developments in AI chip technology!

  • @anthonybelz7398
    @anthonybelz7398 2 months ago

    Haven't really done any machine learning, but otherwise I'm a very experienced software developer - 4-bit applications can go some distance, but I have to wonder how 4-bit node-weightings can yield real neural-net performance at lo-energy; That said, I would say the chip-architecture could isolate the lo-bit consumption from hi-bit consumption transparently from any given executable layers; Disappointing if they haven't attended to this, but I would guess they have. Enjoying your briefings on modern chip-landscapes thanks Anastasi (Finally subscribed) 🐄

  • @unixux
    @unixux 2 months ago +5

    Anastasia, I cannot overstate the distinction between memory and mammary. The difference is overwhelming.
    The video is awesome!