GPU vs TPU vs NPU - How do these different computer chips affect Tesla’s FSD and AI training?

  • Published: 10 Nov 2020
  • If you wish to support us in creating fun, informative content, please consider giving at our Patreon site here: / drknowitallknows . Thank you!
    This one's been requested a lot--hope you enjoy!
    Wherein Dr. Know-it-all discusses the differences between CPUs, GPUs, TPUs and NPUs. What do all these acronyms stand for? How will better chips help Tesla create better full self driving, autonomous vehicles? Watch and find out!
    If you are looking to purchase a new Tesla and would like 1,000 free Supercharger miles, just use our referral link: www.tesla.com/referral/john11286. Thank you!
    Music by Zenlee. Check out his amazing music on Instagram @zenlee_music or YouTube:
    / @zenlee_music
    If you enjoy more outdoorsy, adventure type videos--like climbing the Matterhorn or trekking to Mount Everest Base camp--and a few product reviews too, please check out our sister channel, Whole Nuts and Donuts, right here: / @wholenutsanddonuts5741
    Please ask questions in the comments below, or at: DrKnowItAllKnows@gmail.com. Thank you!

Comments • 123

  • @GreylanderTV
    @GreylanderTV 3 years ago +21

    Pretty sure "tops" refers to "tensor operations per second"; a single tensor operation is a multiplication and an addition. The T is not for "tera". You can have gigatops, teratops, petatops, etc.

    • @DrKnowitallKnows
      @DrKnowitallKnows  3 years ago +13

      Sorry, yes. You should see the outtakes on that little bit. And in the end I got it wrong anyway. Thanks for correcting me!

    • @tristanmanchester
      @tristanmanchester 3 years ago +8

      @@DrKnowitallKnows No, you were correct; Tesla's HW3 is said to have 37 TOPS per NPU core. I don't think they'd be very good at self-driving doing just 37 operations per second.

    • @Unknownthewiu
      @Unknownthewiu 21 days ago

      It's trillion, dumb
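
For context on the numbers in this thread: a chip's TOPS rating (however you expand the acronym) is usually derived from how many multiply-accumulate units it has and how fast they are clocked. Below is a rough back-of-envelope sketch in Python; the 96x96 MAC array and 2 GHz clock are the figures commonly cited for Tesla's HW3 NPU, so treat them as illustrative assumptions rather than official specs.

```python
# Back-of-envelope: how a TOPS rating falls out of MAC-array size and clock.
macs_per_npu = 96 * 96          # multiply-accumulate units in one NPU array (assumed)
ops_per_mac = 2                 # one multiply + one add per MAC per cycle
clock_hz = 2.0e9                # assumed 2 GHz clock

ops_per_second = macs_per_npu * ops_per_mac * clock_hz
print(f"{ops_per_second / 1e12:.2f} TOPS per NPU")                     # ~36.9 TOPS
print(f"{2 * ops_per_second / 1e12:.2f} TOPS per FSD chip (2 NPUs)")   # ~73.7 TOPS
```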

  • @jkev1425
    @jkev1425 3 years ago +36

    Would love to see more about NNs in general, maybe an overview of the different kinds?! Or a general overview 😁

  • @andresvidoza
    @andresvidoza 6 months ago

    Watching this 3 years later as I learn a bit more about Machine Learning... very well explained doctor!

  • @red878787
    @red878787 3 years ago +3

    Yes, please we need a deeper dive!

  • @petabb
    @petabb 3 years ago

    Thanks for a very high-level overview that still gives enough info to explore each individual topic in detail. Subscribed.

  • @carterspencer1787
    @carterspencer1787 3 years ago +1

    Thanks for the information 😀

  • @edeggermont
    @edeggermont 3 years ago

    Thanks for the lecture, Dr!

  • @stf0146
    @stf0146 3 years ago

    Thanks for sharing your knowledge! Subscribed.

  • @glenncook9078
    @glenncook9078 3 years ago

    Wow thank you for explaining! I truly learned something new today!

  • @nik8593
    @nik8593 3 years ago

    Thanks for this video! Will be following the whole A.I. series for sure!

  • @carvalhoribeiro
    @carvalhoribeiro 1 month ago

    Great explanation. Thanks for sharing this

  • @ljbotero1
    @ljbotero1 3 years ago +5

    Yes, please talk about convolutional NNs. When I did my M.S. 20 years ago, I did a minor in AI, and I do remember back-prop NNs, as I wrote one from scratch in C++ for a class homework assignment, but nowadays there are so many types that it's hard to keep track, especially if you don't use them in your typical day-to-day.

  • @eron17
    @eron17 3 years ago +2

    This channel is underrated

  • @joseluiz7017
    @joseluiz7017 3 years ago +1

    Great video, lots of info in this one. Do the other episodes; would love to see your breakdown of the topics you mentioned.

  • @peterbustin2683
    @peterbustin2683 10 months ago

    Fascinating ! Subbed.

  • @pascalg.8772
    @pascalg.8772 3 years ago

    Great video
    Your number of subs is growing fast! You absolutely deserve it

  • @juanjosefraga9310
    @juanjosefraga9310 2 years ago

    Fantastic. A really good introduction.

  • @IgraphyRupage
    @IgraphyRupage 3 months ago

    would like more about NN. Thanks for the video! Great content

  • @drdre3293
    @drdre3293 3 years ago

    I love the explanation; it's clear, deep and understandable.

  • @arjunannair2176
    @arjunannair2176 2 days ago

    Thank you very much sir ❤❤❤

  • @stevenhill3136
    @stevenhill3136 3 years ago

    New subscriber. Watched many of your vids and plan to watch them all. Keep up the great work!

  • @arjunannair2176
    @arjunannair2176 2 days ago

    Enjoyed the presentation. ❤❤❤

  • @kethibqere
    @kethibqere 3 years ago

    This was informative.

  • @MrRobix13
    @MrRobix13 3 years ago

    Great Video !

  • @V8burble
    @V8burble 3 years ago +1

    Love your lecture type video style, can’t wait until your Tesla is delivered and you begin the self driving journey 😉

  • @magnushelin007
    @magnushelin007 3 years ago

    Yes! Would love a series on NN.

  • @samirtouati2942
    @samirtouati2942 3 years ago

    This content is great, thanks Dr Know-it-all

  • @pasanperera8235
    @pasanperera8235 3 years ago

    Great explanation

  • @sparringmonkey
    @sparringmonkey 3 years ago +3

    Dr. Know-it-all: where were you when I was in high school? I'm envious of your ability to convey information. Damn, you're smart... I'm having a bit of a man-crush right now. More Dojo, pleeeeaaaase.

    • @DrKnowitallKnows
      @DrKnowitallKnows  3 years ago +3

      lol, thank you. I think my students all think I'm a goof, so glad to know someone thinks otherwise!

  • @atomicages8914
    @atomicages8914 3 years ago

    This channel has grown a LOT. Just a month ago it had less than 500 subscribers. Keep it up!

    • @DrKnowitallKnows
      @DrKnowitallKnows  3 years ago

      Right? It’s been a ride these past couple of weeks. I’m really humbled people are enjoying my videos. Thank you, everyone!

  • @UserName-di4cs
    @UserName-di4cs 3 years ago

    Perfect video as always.
    You should improve the sound; it would be even more perfect.

  • @elck3
    @elck3 3 years ago +10

    Would love a series on neural networks etc., focusing on self-driving and/or generalized intelligence. Don't know if you know, but Elon Musk saw and commented on an article that referenced your 4D video.

    • @DrKnowitallKnows
      @DrKnowitallKnows  3 years ago +2

      I saw that. I think I floated around the entire rest of the day :D If you want to see my "at the moment" reaction, check this out: ruclips.net/video/VRUgydmMq20/видео.html. Talk about star struck! (and PS I'll definitely work on some NN videos. I have a plan for at least 3 parts)

  • @mehul4mak
    @mehul4mak 1 year ago

    Good video on hardware knowledge. Just do more detailed videos on NPU

  • @easternpa2
    @easternpa2 1 year ago

    Just got my Coral edge TPU. I would love to see an update on this and what "regular people" can expect to achieve as advancements continue to be made in this space.

  • @glynwilliams4204
    @glynwilliams4204 3 years ago +1

    Back in the day, GPUs were limited by their ability to transform vertices, which led to rendering rates being bottlenecked at the triangle level. That is rarely the case today. Modern GPUs are measured by their ability to process fragments (pixel samples), which leads to games being bottlenecked by fill rate.
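
To make the fill-rate point concrete, here is a quick back-of-envelope budget. The resolution, refresh rate, and overdraw factor are illustrative assumptions, not figures from the video or the comment.

```python
# Rough fill-rate budget: how many pixels the GPU must shade per second.
width, height = 3840, 2160     # assumed 4K resolution
refresh_hz = 60                # assumed target frame rate
overdraw = 3.0                 # assumed average times each pixel is shaded per frame

pixels_per_second = width * height * refresh_hz * overdraw
print(f"{pixels_per_second / 1e9:.2f} Gpixels/s of fill rate needed")  # ~1.49 Gpixels/s
```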

  • @mragendds
    @mragendds 3 years ago

    Would love to see a video or 2 on neural nets

  • @privateerburrows
    @privateerburrows 2 years ago

    I would definitely like to learn what "deconvolution" is.

  • @trungcao2100
    @trungcao2100 2 years ago

    The photo you show for the GPU is actually a TPU. It's the Volta board from Nvidia.

  • @austincox809
    @austincox809 2 years ago

    Was that a Romulan warbird in the intro sequence? It is rare to see the Romulans so far from their home territory ;)

  • @peted8896
    @peted8896 3 years ago +3

    I appreciate your deep understanding of your field and of the YouTube algorithm and engagement.
    I'll make sure to like your videos. Love your content. Thank you.

  • @alialhalabi8615
    @alialhalabi8615 10 months ago

    5:56 "The more tops you can do the the better" 💀

  • @CyrusMurphy
    @CyrusMurphy 3 years ago

    Love the value-add, original content vs other channels that just regurgitate online articles for some views

  • @Daviddoesburg
    @Daviddoesburg 3 years ago +3

    Would love to hear your opinion on the new apple M1 chips

    • @DrKnowitallKnows
      @DrKnowitallKnows  3 years ago +4

      So many people have asked that of me today. Guess I better look into it and do an episode! Thanks.

  • @nilsfrederking62
    @nilsfrederking62 3 years ago

    It would be great if you could dive deeper into neural nets. An interesting aspect of Tesla's FSD Beta is that, with as much improvement as they have achieved, they have much less data to work through. The fewer driver interventions you have, the fewer flaws you have to iron out, especially as neural net training consumes a lot of time.

  • @toonikolai
    @toonikolai 3 years ago

    Yes, NNs please. Can you cover how it works using Tesla's NPU as an example? I.e., when a video frame comes through, what processing does it go through?

    • @DrKnowitallKnows
      @DrKnowitallKnows  3 years ago

      Cool. I like that way of making something a bit theoretical more practical. Thanks for the suggestion!

  • @flyboyutahfoy8896
    @flyboyutahfoy8896 3 years ago

    Thank you for your great videos on subjects not available anywhere else!

  • @retiredbitjuggler3471
    @retiredbitjuggler3471 3 years ago

    I recently saw a video about DOJO and I believe it was a Karpathy and Musk tag team, but I watch so much content that I may be wrong. However, if I recall the numbers correctly, the main point that knocked me off my feet was that Tesla’s current processor farm requires 3 days to run through a training cycle. The DOJO farm will do 3 cycles a day. If I can find that video, I will forward the link.

  • @giorgiorocchi8313
    @giorgiorocchi8313 3 years ago

    hahahahaha your intro is great

  • @darklywhite9017
    @darklywhite9017 3 years ago

    NPUs consume power like GPUs? I was under the impression they were much more efficient? Thanks for any clarification! Love your channel.

  • @ke6gwf
    @ke6gwf 3 years ago

    Excellent look into this aspect of the Tesla secret sauce!

  • @mheys1
    @mheys1 3 years ago

    Google does sell TPUs, albeit only the Edge TPUs, which you can use to train TensorFlow Lite models; these are missing certain types of layers.
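
For anyone curious what using one of those Edge TPUs looks like in practice, here is a hedged sketch of running inference on a Coral Edge TPU with the tflite_runtime package. The model filename is a placeholder, the model must already be compiled for the Edge TPU, and the delegate library name assumes a Linux install; consult Coral's documentation for the authoritative steps.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load a (hypothetical) Edge-TPU-compiled model and attach the Edge TPU delegate.
interpreter = Interpreter(
    model_path="model_edgetpu.tflite",                       # placeholder filename
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

# Feed a dummy input with the shape/dtype the model expects, then run inference.
input_detail = interpreter.get_input_details()[0]
dummy = np.zeros(input_detail["shape"], dtype=input_detail["dtype"])
interpreter.set_tensor(input_detail["index"], dummy)
interpreter.invoke()

output_detail = interpreter.get_output_details()[0]
print(interpreter.get_tensor(output_detail["index"]).shape)
```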

  • @woolfel
    @woolfel 3 years ago

    If people want to learn more about neural nets, Brandon Rohrer has some of the best explanations of deep neural nets.
    I don't believe the statement that GPUs aren't built for ML applies to the latest Nvidia chips. For example, the RT cores could be used to calculate the risk of collision with other moving objects. Calculating rays or emulating surround sound is part of a general class of problems that involve bounding volume hierarchy calculations. You can do the same calculations on CPU or CUDA cores, but the RT cores are optimized to do it faster and more efficiently.
    When Hinton, Bengio and LeCun get System 2.0 working, we're going to need different hardware that's closer to an FPGA or has much higher memory bandwidth. RNN models like LSTMs are much more calculation intensive and require more memory. The proposals from Bengio add memory to the network. Hinton wants to add smart routing with capsule networks.

  • @TeslaLifeEurope
    @TeslaLifeEurope 3 years ago +1

    Great video! Love to hear more about NN, Dojo and artificial intelligence techniques in general.

    • @DrKnowitallKnows
      @DrKnowitallKnows  3 years ago

      Check out the Dojo one that's up already. AI and such will be coming as soon as I can make them!

  • @eduardo.chaves
    @eduardo.chaves 2 years ago +1

    Can we use TPUs and NPUs for rendering video graphics for gaming? 😅

  • @keithnazworth2022
    @keithnazworth2022 3 years ago

    Cool video! (I think ASICs *can't* run something like an OS because they don't have a Turing-complete instruction set.)

    • @DrKnowitallKnows
      @DrKnowitallKnows  3 years ago +1

      Yes, I believe that's correct. My understanding is that you could _somehow_ manage to jury rig something together that could work as an OS--but of course it would run like crap. I definitely wasn't suggesting anyone try!

  • @spleck615
    @spleck615 3 years ago

    Did you see the NPU built into the new M1 MacBook architecture? I don't think many details were provided, but is that a first for Macs, or did I miss it previously?

    • @DrKnowitallKnows
      @DrKnowitallKnows  3 years ago +1

      Woah. Totally missed that. I need to look. I did notice the latest version of Photoshop has "neural filters" in it. I haven't had a chance to play with them yet, but NNs are invading the world, clearly ;)

    • @spleck615
      @spleck615 3 years ago

      @@DrKnowitallKnows more details blog.tensorflow.org/2020/11/accelerating-tensorflow-performance-on-mac.html

    • @DrKnowitallKnows
      @DrKnowitallKnows  3 years ago +1

      @@spleck615 Thanks! I'm definitely planning to do an episode on this soon as I have time to really look into it... maybe Thanksgiving break will be just the opportunity :)

  • @spleck615
    @spleck615 3 years ago

    Bring on the NN content :)

  • @runewinsevik8471
    @runewinsevik8471 3 years ago

    3:13 - Isn't VRAM "Video RAM" and not "Visual RAM"? Or is it considered the same thing?

  • @AmitVerma-iv4td
    @AmitVerma-iv4td 7 months ago

    Hi, I am from India. Really nice video.

  • @brookchen1988
    @brookchen1988 3 years ago

    Can you make a video comparing Tesla to Intel Mobileye in the race to full self driving?

  • @matthewnescio285
    @matthewnescio285 3 years ago

    You should do a video summarizing your background and professional interests.

  • @theprofessor4664
    @theprofessor4664 3 years ago

    If one were to make a Turing machine with Kleenex as the instruction tape,
    would this be a TPU ;-)?

  • @MrCesarification
    @MrCesarification 2 years ago

    For what it's worth, I thought the Kleenex analogy was very good.

  • @glynwilliams4204
    @glynwilliams4204 3 years ago

    The most significant single difference between a CPU and the others is branching. CPUs evaluate conditionals and go on to execute different code pathways. So while a branch is technically possible on a GPU, typically both paths are evaluated and the wrong-path result is multiplied by zero.
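
A tiny NumPy sketch of that "evaluate both paths and zero out the wrong one" idea. It is a CPU-side analogy for what GPU predication does per lane, not actual GPU code:

```python
import numpy as np

# Branchless select: compute both branch results for every element,
# then keep the right one with a 0/1 mask instead of branching.
x = np.array([-2.0, 3.0, -0.5, 4.0])

path_a = x * 10.0                      # "if" branch result, computed for all lanes
path_b = x * -1.0                      # "else" branch result, computed for all lanes
mask = (x > 0).astype(x.dtype)         # 1.0 where the condition holds, else 0.0

result = mask * path_a + (1.0 - mask) * path_b   # wrong path multiplied by zero
print(result)                          # [ 2.  30.   0.5 40. ]
```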

  • @martinkunev9911
    @martinkunev9911 2 years ago +1

    4:10 I don't think there is a relationship between a screen being 2D and matrices being 2D arrays.
    The matrix is 2D because affine transformations are binary relations.
    The dimensions of the screen correspond to the size of the matrix (minus 1; for 2D graphics you have a 3x3 matrix).
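
For readers who want to see the 3x3 matrix the comment is talking about, here is a minimal sketch of a 2D affine transform in homogeneous coordinates; the rotation angle and translation are arbitrary example values:

```python
import numpy as np

# A 2D affine transform packed into a 3x3 matrix: rotate 90 degrees,
# then translate by (5, 2). Points carry an extra homogeneous coordinate w=1.
theta = np.pi / 2
transform = np.array([
    [np.cos(theta), -np.sin(theta), 5.0],   # rotation + x-translation
    [np.sin(theta),  np.cos(theta), 2.0],   # rotation + y-translation
    [0.0,            0.0,           1.0],   # homogeneous row
])

point = np.array([1.0, 0.0, 1.0])           # the point (x=1, y=0), w=1
print(transform @ point)                    # ~[5. 3. 1.], i.e. the point (5, 3)
```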

  • @goingballisticmotion5455
    @goingballisticmotion5455 3 years ago

    I am surprised you didn't mention that GPUs are highly parallel processors which is why they are so fast.

  • @harsimranbansal5355
    @harsimranbansal5355 3 years ago

    Can you talk about the new Apple chip, which has a CPU, GPU, and a neural engine built in? I'd really like to know your thoughts.

    • @DrKnowitallKnows
      @DrKnowitallKnows  3 years ago

      Only just finding out about this. I will certainly check though. What a fascinating chip to release in a consumer focused product!

  • @Interestingworld4567
    @Interestingworld4567 3 years ago

    This gentleman needs to see the RTX 3090

  • @user-zi3nv8pu7s
    @user-zi3nv8pu7s 3 years ago

    Please make materials about ML

  • @eron17
    @eron17 3 years ago

    Feeding the algorithm!

  • @qzorn4440
    @qzorn4440 1 year ago

    Most interesting AI info. Even with lots of Musk money they do make mistakes, as with any learning process. The rest of the world has a few ideas. 😎 Thank you. The OPi5 is AI fun. 🥰

  • @gupttura2766
    @gupttura2766 3 years ago

    Just curious if you are planning to trade in your old vehicle with Tesla? If you are, are they giving you a good deal?

  • @Shiffo
    @Shiffo 3 years ago

    So how does Nvidia's DPU, the BlueField-2, compare to the Google TPU?
    It seems that BlueField's strength is its low-latency bandwidth at 200 Gbps.

    • @Interestingworld4567
      @Interestingworld4567 3 years ago

      A DPU is a data processing unit. It is like a really, really premium version of a network card or Wi-Fi card: it does work the CPU would otherwise be doing, but does it on the DPU because it has a dedicated processor for networking. It basically takes work off the CPU.

  • @nanonatronaviation6007
    @nanonatronaviation6007 2 years ago

    But you can buy TPUs, can't you? There is that Google dev board for AI training.

  • @natsidruk86
    @natsidruk86 3 years ago

    How about quantum computing? Tesla should try to build one

    • @DrKnowitallKnows
      @DrKnowitallKnows  3 years ago +1

      Eek!! Completely insane. Tesla is probably working on it now! ;)

    • @harsimranbansal5355
      @harsimranbansal5355 3 years ago

      Quantum computing is a very, very different style of computing and presents challenges of its own. Depending on the need, quantum computers can be incredible, but for traditional computing needs they're dog slow. If Tesla needs the performance, I'm sure they'll do it!

  • @D9ID9I
    @D9ID9I 7 months ago

    GPU memory is not much faster than CPU memory overall. It is just faster in GPU-oriented scenarios, but it is slow at random access.
    A GPU does not do individual computations very fast; it just does many of them in parallel.
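
A loose CPU-side analogy for "not faster per operation, just many at once": the same elementwise multiply done in bulk versus one element at a time. This is plain NumPy on a CPU and purely illustrative; real GPU behavior also depends on memory coalescing and thousands of hardware threads.

```python
import time
import numpy as np

n = 2_000_000
a = np.random.rand(n)
b = np.random.rand(n)

t0 = time.perf_counter()
c_vec = a * b                                  # whole array processed in bulk
t_vec = time.perf_counter() - t0

t0 = time.perf_counter()
c_loop = [a[i] * b[i] for i in range(n)]       # one multiply at a time
t_loop = time.perf_counter() - t0

print(f"bulk: {t_vec:.4f}s, one-at-a-time: {t_loop:.4f}s")
```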

  • @BassPlayerUnderGrace
    @BassPlayerUnderGrace 3 years ago

    What do you think of James Locke's new video that seemingly shows the FSD Beta learning on the fly through repeated use?
    ruclips.net/video/QtaWbaWi93Y/видео.html

  • @roundlaker007
    @roundlaker007 3 года назад

    How does tesla’s NPU chip compare to top tier bitcoin miner currently in the market?

  • @pasanperera8235
    @pasanperera8235 3 years ago

    everything is good. but please, please change the music and the font.

    • @DrKnowitallKnows
      @DrKnowitallKnows  3 years ago +1

      Awww. Zenlee will be sad. I've already adjusted the font once. Any suggestions on a better one? I went with a sans-serif to try to make it easier to read.

    • @pasanperera8235
      @pasanperera8235 3 years ago +1

      @@DrKnowitallKnows Pardon me for not seeing the new font. Now it’s perfect and simple. But honestly the soundtrack is annoying. If you can change the soundtrack that’ll be great. Awesome content. Appreciate it.

  • @glenncook9078
    @glenncook9078 3 years ago

    Go watch Sandy Munro. He talks about the 3.0 chip in Tesla

  • @fredt7518
    @fredt7518 3 years ago

    "They do not think": so how do you explain some FSD videos where FSD fails at something (a U-turn, avoiding a cone) and then, two minutes later, succeeds on the second and third attempts...?

    • @SodaPopin5ki
      @SodaPopin5ki 3 years ago

      Different lighting conditions could easily change how the FSD computer perceives the scene and reacts. It could easily succeed on the first, and fail on the second or third, but the driver doesn't chalk that up to un-learning, but to it being in "beta." So throw in some confirmation bias.

    • @fredt7518
      @fredt7518 3 years ago

      @@SodaPopin5ki Perhaps, but the different tries were just one after another (one or two minutes between tries), and so far nobody has reported a success on the first try and a failure on the second...? Puzzling...

    • @SodaPopin5ki
      @SodaPopin5ki 3 years ago

      ​@@fredt7518 Another possibility is the car retains the drivable space area within the immediate area. After going through it the first time, it could retain the info if the car hadn't left the area before the 2nd attempt.

    • @fredt7518
      @fredt7518 3 years ago

      @@SodaPopin5ki Yes, that makes sense. So it is like the car "learns" from its first try (like a child) :-)

    • @antoniomusico
      @antoniomusico 3 years ago

      I remember that video from another YouTuber. It seemed that the car was learning how to do the U-turn. So, if the car does not learn by itself, which makes sense, how is it possible that after the second time it did it perfectly six times in a row? It was definitely learning from the corrections the driver was making... I wonder if cars are transmitting the data in real time and being corrected, which does not make sense, but what if... I don't have an explanation. It would be great if DrKnow could give his opinion about that video. Thanks for the channel, really interesting :)

  • @desert915
    @desert915 2 years ago

    Such a strange obsession with Tesla…

  • @inolover5925
    @inolover5925 1 month ago

    Andrew tate: dont but the gpus they do matrix with ur life as their math

  • @giz02
    @giz02 2 years ago

    Sea of comments? There are 30 here..

  •  3 years ago

    +1 for CNN and others.

  • @____________________________.x
    @____________________________.x 7 months ago

    There was a lot of waffle here, so I skipped a lot and still don't know the difference between an NPU and a TPU; will try elsewhere.

  • @mz4637
    @mz4637 2 years ago

    d

  • @irwainnornossa4605
    @irwainnornossa4605 3 years ago

    01:44: RAM is in the wrong slots: single-channel operation only.
    03:04: talks about VRAM, zooms in on coils.
    03:51: talks about triangles, shows a picture with a "rectangle" mesh.
    07:17: most GPUs don't have memory on the chip. They have it in separate chips, GDDR5(X) or GDDR6(X). On-chip memory is HBM2, which is rare. Of course, I am talking about consumer GPUs, because what good does some Tegra card do me when I don't have it in my PC?
    07:55: Lol. That did not age well. :-D But wasn't there already a GPU shortage at the time of filming? I believe so.
    ~10:40: IDK, but you seem to live in a world where basically everyone has a Tesla. I'm sorry for the disappointment, but that is not true in this world. But if you came through a mirror: first of all, let me welcome you to our reality. It's kind of shitty. Unlike in yours, regular people don't have access to GPUs with HBM2, for example. I guess where you come from, the economic situation is better, and tech is probably a lot further along as well. Lucky you…
    12:10: The what? Miles? What Miles guy? Dude. SI units!
    13:07: Billions of… what? SI units! If you want to be taken seriously.
    13:27: No, I did not enjoy this video. I will not leave you a like. Or a dislike. Couldn't be bothered. Just this reaction. And the famous YT algorithm sucks when this is what it suggests to me.
    And all of this is ONLY what I caught and know. Someone more educated must feel tortured watching this.

  • @tripleplonk
    @tripleplonk 3 years ago

    Should you nit drop the Dr. in solidarity with Jill Biden? At least for one episode?

    • @tripleplonk
      @tripleplonk 3 years ago

      Typography is hard. Trading an I for an O.

  • @johnsonjjohnson100
    @johnsonjjohnson100 6 months ago

    You are not keeping up. Dojo is a complete flop.
    Musk fired the head module designer for non-performance.
    He designed the Dojo module with little to no memory.
    The idea was that it would make the system faster
    Turns out AI is a memory pig to work efficiently!!!
    So at the very least the entire Dojo project will need to be redesigned or scrapped altogether
    or
    Maybe repurpose the chips for a huge data center after adding memory
    The good news is that FSD AND Optimus are NOT dependent on Dojo
    Our original supercomputer is still the one we lean on for all FSD and Optimus iterations
    The 10,000 Nvidia H100 chips went to our old supercomputer -- NOT Dojo

  • @dt3268
    @dt3268 3 years ago

    A bit verbose. Maybe 1/3 fewer words and more concise sentences would be wonderful