NVIDIA REFUSED To Send Us This - NVIDIA A100

  • Published: 5 May 2024
  • Get 50% Off the First Year of Bull Phish ID and 50% off setup at it.idagent.com/Linus
    SmartDeploy: Claim your FREE IT software (worth $580!) at lmg.gg/Jpt4k
    We've experienced a lot of crazy, top-of-the-line graphics cards on LinusTechTips, but we've been unable to get our hands on one famed card, the NVIDIA A100... until now. ;)
    Discuss on the forum: linustechtips.com/topic/14139...
    Fan Duct 3D Print File: www.thingiverse.com/thing:514...
    Buy an NVIDIA A100: geni.us/DfTh3c
    Buy an MSI RTX 3090 Suprim X: geni.us/jPb6gB
    Purchases made through some store links may provide some compensation to Linus Media Group.
    ► GET MERCH: lttstore.com
    ► AFFILIATES, SPONSORS & REFERRALS: lmg.gg/sponsors
    ► PODCAST GEAR: lmg.gg/podcastgear
    ► SUPPORT US ON FLOATPLANE: www.floatplane.com/
    FOLLOW US ELSEWHERE
    ---------------------------------------------------
    Twitter: / linustech
    Facebook: / linustech
    Instagram: / linustech
    TikTok: / linustech
    Twitch: / linustech
    MUSIC CREDIT
    ---------------------------------------------------
    Intro: Laszlo - Supernova
    Video Link: • [Electro] - Laszlo - S...
    iTunes Download Link: itunes.apple.com/us/album/sup...
    Artist Link: / laszlomusic
    Outro: Approaching Nirvana - Sugar High
    Video Link: • Sugar High - Approachi...
    Listen on Spotify: spoti.fi/UxWkUw
    Artist Link: / approachingnirvana
    Intro animation by MBarek Abdelwassaa / mbarek_abdel
    Monitor And Keyboard by vadimmihalkevich / CC BY 4.0 geni.us/PgGWp
    Mechanical RGB Keyboard by BigBrotherECE / CC BY 4.0 geni.us/mj6pHk4
    Mouse Gamer free Model By Oscar Creativo / CC BY 4.0 geni.us/Ps3XfE
    CHAPTERS
    ---------------------------------------------------
    0:00 Intro
    1:06 How we got one, heh
    1:58 Our contenders
    2:38 A100 Specs
    3:49 Teardown
    8:38 Jake sucks at throwing
    8:48 Buildup
    9:43 Shroud
    10:39 How to get it to boot
    10:56 It works!
    11:24 Blender
    14:05 Can we trick Windows into running games on it? & GPU-Z
    15:27 Ethereum mining & afterburner options
    16:45 Folding@Home
    18:02 Resnet50 machine learning benchmark
    18:59 World's most expensive lint roller
    19:10 Resnet50 machine learning benchmark part 2
    21:12 Closing thoughts
  • Science

Comments • 7K

  • @klaushakon9986 (2 years ago, +25398)

    watching linus handle someone else's expensive hardware is like watching a thriller

    • @michaelbaldwin5953 (2 years ago, +289)

      now viewing it again whilst Michael Jackson plays in the background!

    • @harbl99 (2 years ago, +683)

      You just know that the guy who loaned this card to LMG is watching this video and cringing perceptibly every single time Linus does 'a Linus' to his $10,000 card.

    • @BLCKKNIGHT92 (2 years ago, +136)

      Lmao it was cringe as hell, i was traumatized throughout the entire video.

    • @VeritasEtAequitas (2 years ago, +41)

      @@BLCKKNIGHT92 ok soy

    • @micaelmarcos4323 (2 years ago, +20)

      Cuz this is THRILLERRR

  • @Drsmiley72 (2 years ago, +6697)

    Nvidia : "no linus you can't have that"
    Linus: "and I took that personally"

    • @generalgrievous2726 (2 years ago, +71)

      Linus always finds a way

    • @saameyr6605 (2 years ago, +60)

      @@generalgrievous2726 Well, the way has found him.

    • @Avetho (2 years ago, +42)

      Reminds me of Michael Reeves "You lied to me, Boston Dynamics." XD
      Similar energy, it's just that Linus has a lot more self-control and professionalism :P

    • @theshroomian2415 (2 years ago, +8)

      They just did not want him to drop it lol

    • @bruddaozzo (2 years ago, +26)

      Just makes no sense to send him this sort of card. The people buying them don't get tech advice from fucking linus lol.

  • @seangoulden (1 year ago, +1200)

    *Linus who has broken something, on everything, in every video created*
    Linus: “I don’t know why Nvidia wouldn’t send us the card”

    • @KenyoMurabu (1 year ago, +2)

      Soon To Be Every Video, jk, ^_-

    • @deez69nutshuge (1 year ago)

      Linus sex tips

    • @ltsBorrowed (1 year ago, +26)

      Some guy sends it and essentially says please don't fuck it up
      Linus: drops it almost immediately
      🤣

    • @noth606 (12 days ago)

      nVidia is unlikely to really care that much about that aspect of it, they can have bookkeeping write it off as a promo cost and deduct it from taxes, if they really care. What they DO care about is Linus shitting on the card with stuff that doesn't matter, really, but non-techie customers may think matter. See, the way that you quantify "value" for something like this isn't the most intuitive thing in the world, and has no relation to the shizz Linus is talking about, but the big kahunas of datacenters, and their investors - again don't understand how that stuff works and may misjudge it based on faulty reasoning.
      If I build high end workstations for a living, and a fortnite kiddie wants to review one - I will say FU NO! to the kiddie - not because my workstation can't play fortnite but because how well it does that is irrelevant, AND I gain absolutely nothing from that review, while risking a lot - hardware getting broken, bad rep possibly etc etc.

  • @lozzar1069 (1 year ago, +1518)

    LTT: NVIDIA refuses to send us a super powerful gpu
    Also LTT: drops a $10k CPU by accident, breaks it, and attempts to fix it with a vice grip

    • @Immadeus (1 year ago, +70)

      It's not a LTT video if something expensive doesn't get dropped

    • @Shadowclaw6612 (1 year ago, +11)

      @@Immadeus he literally knocks the thing over not too far in lmao

    • @BCR_ (1 year ago, +2)

      @@Shadowclaw6612 that's the silver one, not the one he got sent

    • @niklasknipschild695 (1 year ago, +13)

      If I were the owner of the GPU, my condition would be "You have to return it in working condition, or buy a replacement, but do whatever you want"

    • @sweetsurrender815 (1 year ago, +1)

      @@Shadowclaw6612 that's done for comedic effect.

  • @doderiolarkisso4038 (2 years ago, +8845)

    thanks for pointing out Jake, really helped me recognize him.

    • @qovro (2 years ago, +527)

      But who's that other guy with him?

    • @minimalrandom (2 years ago, +87)

      Just what I had in my mind

    • @PLAYCOREE (2 years ago, +111)

      @@qovro just what i wanted to say ...whos that dude doing all the work

    • @Jakewake52 (2 years ago, +12)

      I mean it's right there above my comment

    • @fspeshalxo69 (2 years ago, +60)

      Who's the other guy they didn't tag him

  • @cineblazer (2 years ago, +2595)

    A100s are no joke, no wonder AWS wants to bill me three arms and a leg to spin up instances with them just so I can get a "Sorry, we don't currently have enough capacity for this instance type" screen!

    • @CreativityNull (2 years ago, +126)

      The shortage is in the cloud! (Obviously but it's funny to say)

    • @cineblazer (2 years ago, +161)

      @@CreativityNull "It's... it's all in the cloud?"
      *cocks gun*
      "Always has been."

    • @bharatsingh430 (2 years ago, +65)

      I had to write custom scripts that run endlessly requesting p4d instances (which have 8 of those, but the 400W versions) on AWS, as they were not available in any AZs. Luckily the script managed to get one after 2 days in us-west-2

    • @lexffe_lol (2 years ago, +11)

      to be fair, p4d.24xlarges have 8 of these in them
      the reserved prices are not too bad, considering the hardware

    • @johannha (2 years ago, +3)

      Same for top end azure instances rn
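
The endless-retry approach described in the replies above can be sketched as a generic helper. This is a hypothetical illustration, not AWS's API: `InsufficientInstanceCapacity` is the error code EC2 returns when capacity runs out, and the stub below stands in for a real boto3 `run_instances` call.

```python
import time

def retry_until_capacity(launch, is_capacity_error, max_attempts=1000, delay_s=30):
    """Call `launch()` until it stops failing with a capacity error.

    `launch` is any zero-argument callable (e.g. a boto3 run_instances call);
    `is_capacity_error` decides whether an exception is worth retrying.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return launch()
        except Exception as exc:
            if not is_capacity_error(exc):
                raise  # a real error: don't mask it
            time.sleep(delay_s)
    raise RuntimeError(f"no capacity after {max_attempts} attempts")

# Stubbed usage: pretend the first two calls hit "InsufficientInstanceCapacity".
calls = {"n": 0}
def fake_launch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("InsufficientInstanceCapacity")
    return "i-0abc123"

instance = retry_until_capacity(
    fake_launch,
    lambda e: "InsufficientInstanceCapacity" in str(e),
    delay_s=0,  # no real waiting in the stub
)
print(instance)  # i-0abc123, after 3 attempts
```

In real use you would also want jittered backoff and a budget cap, since a loop like this can run for days (as it did in the comment above).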

  • @Wander4P (1 year ago, +3407)

    At this point, the GPU has become the real computer and the CPU is just there to get it going.

    • @TheWipal (1 year ago, +416)

      cpu is the coworker that got in because their relative works there

    • @cradlepen5621 (1 year ago, +191)

      CPU handles multitasking / software management. Without a CPU we wouldn't have multiplayer games.

    • @alladeenmdfkr2255 (1 year ago, +156)

      @@cradlepen5621 Singleplayer is the future

    • @tystin_gaming (1 year ago, +152)

      The computer is as fast as its slowest component. As an example, if you have a game that uses the GPU for everything but for some reason decides to do the shadow calculations on the CPU... you are limited by the CPU.
      As you increase the GPU, you need to increase the CPU.
      Then you'll be sitting at loading screens thinking "Why is this taking forever to load? My CPU and GPU are a beast"
      But you are running a standard HDD... Ahhhh, time to upgrade! NVMe SSD FTW!
      It's all a balance, and why building your own PC will always be better (when you know what you are doing) compared to just buying a PC.

    • @MisterFoxton (1 year ago, +39)

      This is the weirdest take I've read all week.

  • @tasty8186 (1 year ago, +719)

    I like how youtube has labelled this video as "Exclusive Access" as if Nvidia have allowed this at all lol

  • @UItEnthusiast (2 years ago, +5464)

    Linus: "We can't just go out and get an A100 because it costs almost $10,000"
    Also Linus: Creates a solid gold Xbox controller that's worth more than many people's houses

    • @monsterhunter445 (2 years ago, +468)

      Most people don't even have houses

    • @naamadossantossilva4736 (2 years ago, +615)

      That gold can be melted down, allowing you to recover most of its value. Try doing that with a graphics card.

    • @igorivanov2498 (2 years ago, +201)

      @@monsterhunter445 the comment would obviously apply to people that own houses.

    • @MrSongib (2 years ago, +19)

      You are the type of people who like to quote out of context. but it's okay

    • @derptyderp5287 (2 years ago, +73)

      Well, if he spent all his money on the golden gamepad, that could be why he can't afford the A100.

  • @mark3888 (2 years ago, +820)

    I replaced one of these cards for a customer who had 3 of them in total in a Dell 7515 server running dual AMD Epyc 7763 64 core processors. I remember thinking this APU is worth more than my car.

    • @megan00b8 (2 years ago, +81)

      At that point it may be worth more than a small apartment.

    • @janemba42 (2 years ago, +41)

      @@megan00b8 *Cries in Australian*

    • @sheedyaja6465 (2 years ago, +70)

      @@megan00b8 In my country, it's worth more than our life long income

    • @IgoByaGo (2 years ago, +41)

      It was always fun working in a customer's cage: you open up the shipment FedEx delivered, beat all to hell, find 6 server GPUs or a line card full of 100Gig optics, and realize the package is worth more than you make in 5-10 years.

    • @gregdaweson4657 (2 years ago, +1)

      @@IgoByaGo cage?

  • @miniscule_mule52 (1 year ago, +645)

    Amazing how I can understand so little yet be so thoroughly entertained. 10/10.

    • @xqzyu (1 year ago, +3)

      actually funny

    • @gamingtopps8975 (1 year ago)

      @@xqzyu ye

    • @deez69nutshuge (1 year ago)

      Linus sex tips

    • @ryanjofre (1 year ago, +16)

      I’ve been a PC guru for over two decades and even I’m outta my league here.

    • @jagomastic (1 year ago, +8)

      Oh thank God! I thought I was the only one

  • @hosseinsarshar5462 (1 year ago, +36)

    Nice comparison. You could've rented one of those bad boys on Azure for less than $4/hour for the benchmark. In fact, 8 A100 GPUs connected through NVLink are expected to be about 1.5x faster than stacking 8 A100s connected through the motherboard.
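
For scale, the rent-vs-buy arithmetic implied by the comment above works out as follows (list price and hourly rate taken from the thread; power, hosting, and resale value ignored):

```python
# Break-even point for renting vs. buying an A100, using the thread's figures:
# ~$10,000 to buy outright vs. ~$4/hour to rent.
card_price = 10_000
rent_per_hour = 4

breakeven_hours = card_price / rent_per_hour
print(breakeven_hours)       # 2500.0 hours of rental before buying wins
print(breakeven_hours / 24)  # ~104 days of continuous use
```

So for a one-off benchmark run, renting is clearly the cheaper route; buying only pays off under sustained, near-continuous load.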

  • @magno5157 (2 years ago, +8014)

    Nvidia should sell this kind of card to miners instead of selling consumer-grade GPUs in bulk to them.

    • @Whatismusic123 (2 years ago, +513)

      they would still bot buy them even with this

    • @magno5157 (2 years ago, +740)

      @@Whatismusic123 Despite the high price, they still would because it's like 100% more efficient for hashing. Just like Linus said, the running cost (electricity cost) of a gpu for mining far outweighs the price.

    • @newoperson2577 (2 years ago, +17)

      yes

    • @Shuroii (2 years ago, +235)

      Most miners wouldn't buy this because they're just not eligible to. For the cost of 10 of these cards you could've purchased like 50-60 3090s even at these high prices and given them proper cooling, which would far out-hash those enterprise cards. Yes, it's cheaper to run the enterprise cards long-term, but you'd be looking at how long Ethereum will last rather than how long the card will last

    • @jaredchampagne2752 (2 years ago, +66)

      This GPU would be pointless to a miner, because it costs $10,000 and it would take them months or even years for them to justify the cost of it from mining, it doesn’t take extremely powerful cards to mine.

  • @abdelfattahtoulaoui9789 (2 years ago, +852)

    Just wanted to point out that TensorFlow by default allocates the whole memory even if it's not using it, so the A100 may benefit from a larger batch size

    • @nailsonlandim (2 years ago, +16)

      Yeah! That's what this GPU is for. You can train really big stuff on it!

    • @JohnM-ch4to (2 years ago, +1)

      These are usually used in data centers, right? So this might be what we've been sharing in cloud computing

    • @ChristopherHallett (2 years ago, +1)

      @@jesusislord6545 Hey dude, remember when those little kids made fun of a guy for being bald, so God sent a bear to kill and eat them?

    • @valarionch (2 years ago, +1)

      @@ChristopherHallett man, the old testament God was way cooler than the new testament one. At least regarding roman-era like entertainment
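
As the top comment in this thread notes, TensorFlow grabs nearly all GPU memory up front regardless of model size, so a full-looking memory graph says little about the actual working set. A minimal sketch of opting out of that, using the standard `tf.config` calls (treat it as a config fragment; it needs a machine with TensorFlow and a visible GPU to do anything):

```python
import tensorflow as tf

# By default TF reserves (nearly) all GPU memory when the first op runs.
# Opting into on-demand growth makes reported usage reflect real allocations:
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# Equivalent without touching code: set the environment variable
#   TF_FORCE_GPU_ALLOW_GROWTH=true
# before the process starts.
```

With growth enabled, a larger batch size on the A100's 40/80 GB would show up as genuinely higher memory use rather than an always-full bar.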

  • @Tacz4005 (1 year ago, +6)

    6:58
    The nvidia employee watching the chip serial number: 👁️👄👁️

  • @Dsuranix (1 year ago, +30)

    so glad they contextualized the AI performance, wish they would branch into that field more, and even have a tech review integration for AI performance. they *must* know about stuff like Stable Diffusion, so it'd be useful and fun, especially when the 40 series comes out, or testing things like the ARC if they ever get an opportunity. i'm still curious if they ever explored AI performance of ARC.

  • @werewolfmoney6602 (2 years ago, +519)

    I'm glad Jake said "ah, it has an IHS" because for a split second I thought that was all GPU die and nearly had a stroke

    • @xDLiLi1337 (2 years ago, +11)

      amogus

    • @benjaminoechsli1941 (2 years ago, +53)

      Yeah, me too. That would've been the most monstrous die I've ever seen.

    • @rare6499 (2 years ago, +7)

      Same. I couldn’t believe what I was seeing!

    • @moldytexas (2 years ago, +4)

      fucking exactly

    • @Euronjuusto999 (2 years ago, +1)

      yesss, my exact thoughts

  • @Wetheuntitled (2 years ago, +640)

    I was one of the people handling repairs on amazon servers and I’ve seen thousands of them. They are crazy. Of course I can’t test them but just holding it you can tell it’s a beast

    • @EnsignLovell (2 years ago, +7)

      Wait, thousands went for repair.....? So they break often? 🤔.

    • @Sn1ffko (2 years ago, +49)

      @@EnsignLovell I think he meant it more in a metaphorical type of way

    • @GodlyAwesome (2 years ago, +132)

      @@Sn1ffko definitely meant he had to go to the datacenter itself and saw all the cards there in the racks

    • @jlj2169 (2 years ago, +1)

      That’s what she said

    • @Wetheuntitled (2 years ago, +64)

      @@GodlyAwesome yeah I’ve repaired thousands and thousands of server racks. And they have sections dedicated for graphics cards and stuff. In a single server it would have anywhere between 2-12 graphics cards.

  • @MsNikkieMichelle (1 year ago, +75)

    Linus you look great with a beard! Long time sub here from wayyyyy back when you and Luke did those build wars, and from watching your videos back when you had that old space where you connected each PC daisy-chained to a copper water-cooled setup. It's awesome to see your sub count and how far you've come since I last watched your videos. Hope you and your family are all well and enjoying the holiday season!

  • @jessefontainieohfwob (6 months ago, +3)

    When I interned at this machine learning lab I got the opportunity to train my models on a supercomputer node which had 4 of these cards. Even though my code was not optimized at all, it crunched my dataset of 500,000 images for 80 epochs in about 5 hours. For reference, my single RTX 2060 Super card was on track to do it in about 4 days.
    I think the main advantage of these cards in machine learning is the crazy amount of memory. My own GPU could handle batches of 64 images while the node could handle at least 512 with memory to spare (I didn't go further as bigger batch sizes give diminishing returns in training accuracy)

    • @moriwenne (6 months ago)

      I get what you're getting at, but that comparison seems a bit extreme. If you put your workload on one A100 that costs $10,000 and then on two 3090s that cost you $2,000, you would save a lot of money and get better performance. If you consider the power usage then yes, you'd be saving, but to reach $8,000 worth of difference would take many years. People of course pay for these things because they're made with tons of memory and linkability, and data centers need that; but comparing just processor power, these chips aren't better than the more affordable gaming cards. There's a big price hike that Nvidia applies to the pro cards because they can, and the clients can and do pay.
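
The speedup in the internship anecdote above works out roughly as follows (times taken from the comment; the per-card figure naively divides by four and ignores multi-GPU scaling overhead):

```python
# ~5 hours on the 4x A100 node vs. ~4 days (96 hours) on a single RTX 2060 Super.
node_hours = 5
single_gpu_hours = 4 * 24  # 96

speedup = single_gpu_hours / node_hours  # overall node speedup
per_card = speedup / 4                   # naive per-A100 speedup

print(f"{speedup:.1f}x node, ~{per_card:.1f}x per card")  # 19.2x node, ~4.8x per card
```

The ~5x per-card figure against a 2060 Super is plausible given the A100's wider memory bus and larger batch sizes, which is exactly the memory advantage the comment highlights.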

  • @nickelsey (2 years ago, +1643)

    Fun fact, our A100 servers (8× 80 GB SXM A100s per server) each have a max power draw of close to 5 kW. And Linus and Jake were right! Even with the 80 gig models, we still wish we had more memory. Never enough memory!

    • @beltaxxe (2 years ago, +62)

      Big Iron.

    • @velo1337 (2 years ago, +87)

      what are you doing with that hardware?

    • @nicholasvinen (2 years ago, +170)

      @@velo1337 Skynet, duh.

    • @seollenda (2 years ago, +60)

      @@velo1337 ur mum

    • @d2factotum (2 years ago, +173)

      @@velo1337 Playing Crysis, probably.

  • @maximilliantimofte4797 (1 year ago, +10)

    I would so buy this whole thing
    the promoted service at the beginning seems very tempting

  • @sudoertor2009 (8 months ago, +2)

    There's a typo at 3:25
    The A100 has ~54 Billion transistors on it. The 54.2 million listed would put the card firmly between a Pentium III and a Pentium 4 in terms of transistor count with a curiously big die for the lithographic node.

  • @tianmul8134 (2 years ago, +387)

    By default, Tensorflow allocates nearly all the GPU memory for itself regardless of the problem size. So you will see nearly full memory usage even for the smallest model.

    • @gfeie2 (2 years ago, +19)

      cuda_error = cudaMalloc((void **)&x_ptr, all_the_GPU_mem);

    • @kleingeoff (2 years ago, +55)

      As much as I like LTT, they never do benchmarks involving AI/Deep Learning properly.

    • @teknoman117 (2 years ago, +7)

      @@gfeie2 start with USIZE_MAX memory and binary search your way down to an allocation that doesn't fail XD

    • @sebastiane7556 (2 years ago)

      Oh that explains a lot. I was wondering how they managed to tune it so perfectly, because Pytorch would simply crash if you tried to use more memory than available.

    • @pu239 (2 years ago)

      should've used pytorch yeah
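
The "binary search your way down" joke in the replies is a real probing strategy for finding how much memory actually fits; here is a toy, pure-Python sketch with a stubbed allocator standing in for `cudaMalloc` (names and sizes are illustrative only):

```python
def largest_allocatable(try_alloc, hi, lo=0):
    """Binary-search the largest size for which try_alloc(size) succeeds.

    try_alloc(size) -> bool; assumes success is monotone (if `size` fits,
    anything smaller fits too), which is roughly true for one big buffer.
    """
    best = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if try_alloc(mid):
            best = mid
            lo = mid + 1   # fits: try bigger
        else:
            hi = mid - 1   # fails: try smaller
    return best

# Stubbed "GPU" with 10 GiB free; start the probe at 80 GiB:
FREE = 10 * 1024**3
print(largest_allocatable(lambda n: n <= FREE, hi=80 * 1024**3))
# 10737418240 -- exactly the stubbed free bytes
```

On a real device each probe would be an actual allocation that is immediately freed; fragmentation makes the predicate only approximately monotone, which is why frameworks prefer growing pools over this kind of search.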

  • @MartianDill (2 years ago, +1851)

    One day our grandkids will call this GPU the "potato/calculator", just like we call all the hardware that launched people into space 50 years ago...

    • @respectedcow1490 (2 years ago, +201)

      well we did hit the size limit for our logic gates and whatnot, and quantum tech is only used for crunching numbers. So that's unlikely.

    • @jorge69696 (2 years ago, +37)

      Crazy to think this much power could be available in a phone in 10 years.

    • @VividFlash (2 years ago, +121)

      @@jorge69696 Also no, size constraints

    • @sown-laughter4351 (2 years ago, +55)

      Ah yes the A100..
      An outdated historical relic compared to tech in 2077
      or the classic we have those in our phones now

    • @sharoyveduchi (2 years ago, +116)

      PS3 and Xbox360 games still look graphically impressive. We're not advancing as fast as before.

  • @ThePivotuserful123 (1 year ago, +2)

    man I wish I was half as smart and cool as you guys. I enjoy watching these videos as a casual consumer and it's so cool that you guys basically dance around code and near-instantly recognise items with ease! I mean, seeing the teardowns is always a joy too.

  • @jjb4531 (11 months ago, +7)

    It was my card. I feel okay admitting it now

  • @normiewhodrawsonpaper4580 (2 years ago, +1512)

    I always love the moments where I realize that 3090s aren't the peak of their generation.

    • @FX_ASHKN (2 years ago, +230)

      They probably have the technology for 10x the 3090, but it's not good for business to lay it all out now

    • @evanshireman5644 (2 years ago, +57)

      In terms of gaming cards, it is top of the line

    • @amashaziz2212 (2 years ago, +81)

      @@evanshireman5644 Well, it's not. The 6900 XT is mostly faster at 1080p, and even at Nvidia there's a 3090 Ti in existence.

    • @Ornithopter470 (2 years ago, +4)

      @@ProjectPhysX except for those that are memory limited.

    • @ProjectPhysX (2 years ago, +20)

      @@Ornithopter470 yep, you can never have enough memory... but 80GB is already quite a lot :D

  • @andrewcanavan295 (2 years ago, +201)

    I love getting to see the incredibly expensive equipment that runs data centers, even though I understand about half of what they are used for. The efficiency is just insane

    • @mayeven (2 years ago, +21)

      Understanding half of what goes on in a data center isn't too bad, though.

    • @ColonelXZ (2 years ago)

      Basically, it has half the GPU cores but way more AI cores to do AI tasks, at about half the power.

  • @Kormelev (1 year ago, +3)

    Would be interested in a follow-up that covers how this sort of card performs when generating AI artwork, using DALL-E or Stable Diffusion as an example.

  • @ebaystars (1 year ago, +9)

    bend a multi-layer PCB and you run the risk of breaking tracks, guys, which may "reconnect" temporarily then disconnect randomly under heating etc.

  • @supersimon126 (2 years ago, +447)

    Everyone with any pc building experience: "So graphics cards take pci-e power connectors and attempting to plug an eps connector in instead would be bad right?"
    Nvidia: "Well yes but no"

    • @snowyowlll (2 years ago, +12)

      It's a power connector. It's like saying NEMA 5-15P connectors can only be used in the USA.

    • @supersimon126 (2 years ago, +2)

      @@snowyowlll Well yeah i'm just referring to the pinout
      (Yes i know they're made so it's impossible or at least a lot harder to put one connector in the wrong spot)

    • @computersales (2 years ago, +5)

      So, funny thing about that. The keying for PCI Express 8-pin and EPS 12V is basically compatible. The only difference between the two connectors is that PCI Express has a little tab between pins seven and eight. If you were to plug a PCI Express power connector into an EPS 12V port, you'd basically end up shorting 12V to ground. I may or may not know that from experience 🤪

  • @sloppyglizzy8313 (2 years ago, +2280)

    Trying to imagine a world where fans reach out to you to give you a 10k GPU whilst I struggled to obtain a 3060 so much that I bought a whole prebuilt PC just to pull it lol

    • @IngwiePhoenix (2 years ago, +94)

      Influencer life is pretty dank, innit... ahh, the dreams...

    • @dankduck0247 (2 years ago, +5)

      lmaooo even i did the same thing recently

    • @iliasben7019 (2 years ago, +37)

      I have a 1060

    • @hak0bu (2 years ago, +21)

      For me, I bought a laptop instead: a Lenovo Legion 7 (16" 16:10 version) with an RTX 3060. You would think a laptop with that GPU wouldn't have the same performance as the desktop equivalent, but the laptop is big enough for the heat and everything that it's extremely close. It runs at the same performance as, and sometimes higher than, my friend's RTX 2070 desktop GPU

    • @hak0bu (2 years ago, +7)

      Oh and it was around £1600, one of the best bang for the buck price performance wise for a gaming laptop. beat only by Lenovo legion 5 pro which is a bit cheaper but looks quite uglier

  • @bideojames4222 (1 year ago)

    Thank you for the video. I've been a long time sub and actually missed this, but searched the channel just in case before we made a purchase. You covered all our questions, thank you

  • @TheMegaross91 (1 year ago, +5)

    "No I like to go in dry first" - accurate depiction of how Linus treats hardware

  • @Generalkidd (2 years ago, +544)

    I haven't messed around with an nVidia Tesla GPU past the Maxwell line but I do remember it is possible to switch them to WDDM mode through nVidia SMI in Windows command prompt which will let you use the Tesla GPU for gaming provided you have an iGPU passthrough. By default, nVidia Tesla GPUs like the A100 will run in compute mode which Task Manager and Windows advanced graphics settings won't recognize as a GPU that you can apply to games and apps. But idk if WDDM has been removed in later nVidia Tesla GPUs like the A100 or not.

    • @jehdudnen (2 years ago, +54)

      You said you did what in the who now 😕😵

    • @JJFX- (2 years ago, +12)

      I recall reading WDDM not being available by default on some modern Tesla cards because the standard drivers only support TCC mode and specific driver packages from Nvidia are needed to do it. I have no idea how this applies to Ampere but I imagine it's similar.

    • @SkullGamingNation (2 years ago, +12)

      You need more halo lore vids lol

    • @toowindy1177 (2 years ago, +4)

      @@SkullGamingNation fr

    • @AnonymouslyHidden (2 years ago, +16)

      so strange when two of my completely unrelated hobbies come together randomly like this
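
The mode switch described in the top comment goes through `nvidia-smi`'s driver-model option. The sketch below is a hypothetical helper that only builds the command line; the `-dm` values (0 = WDDM, 1 = TCC) are from older nvidia-smi documentation and may vary by driver release, so check `nvidia-smi -h` on your system first:

```python
import subprocess

def set_driver_model(gpu_index: int, wddm: bool) -> list[str]:
    """Build the nvidia-smi call that switches a Tesla card's driver model.

    0 = WDDM (graphics-capable), 1 = TCC (compute-only). Applying it needs
    Windows, an elevated prompt, a driver that still supports WDDM on the
    card, and a reboot.
    """
    return ["nvidia-smi", "-i", str(gpu_index), "-dm", "0" if wddm else "1"]

cmd = set_driver_model(0, wddm=True)
print(" ".join(cmd))  # nvidia-smi -i 0 -dm 0
# To actually apply it (Windows, elevated prompt):
#   subprocess.run(cmd, check=True)
```

As the replies note, some newer Tesla/datacenter parts ship TCC-only drivers, so the command can simply refuse; that matches the "idk if WDDM has been removed" caveat above.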

  • @Gartimus_Prime (2 years ago, +111)

    Linus - “I like to go in dry first.”
    Jake- *Please don’t look at me.*

    • @NoonMight (2 years ago)

      😂

    • @travisash8180 (2 years ago, +1)

      Why does Linus surround himself with fat dudes ???

    • @aryanluharuwala6407 (2 years ago, +5)

      @@travisash8180 they bring food with them

    • @travisash8180 (2 years ago)

      @@aryanluharuwala6407 I think that Linus is a chubby chaser !!!

    • @johngerity (2 years ago)

      That is definitely NOT what she said.

  • @AOTanoos22 (1 year ago, +1)

    I would love to see you testing these cards on windows with deepfacelab, really curious if one can do deep fakes with them and how fast they'd be.

  • @jarrettupton57 (1 year ago, +9)

    I'd be curious to see the difference between the 3090 and the A40 which has ray tracing cores.

  • @dddux (1 year ago, +969)

    250W for such a card is excellent. I was expecting more like 400W up.

  • @Daireishi (2 years ago, +879)

    @13:55 What you guys are totally missing is that the A100 has fewer CUDA cores, but they do INT64/FP64 at half the throughput of INT32/FP32. The 3090 is what, 1/16th throughput or something? It's meant for higher-precision calculation. The desktop and datacenter cores are different. You'd need a test of 64-bit calculations to compare.

    • @kvncnr8031 (2 years ago, +225)

      Didn't understand but you sound like you know your shit

    • @kvncnr8031 (2 years ago, +80)

      Nerd

    • @Daireishi (2 years ago, +145)

      @@kvncnr8031 It does 64-bit math like 10x faster than the 3090. So it's better where you need high precision. Neural networks in particular can get away with much smaller numbers, like 8-bit values in the network. A bit is basically 1 or 0 in a binary number, so a number can represent a larger value with more bits. Or if it's a floating point number, it can have more precision (i.e. represent more decimal places). For scientific computing, like modelling the weather or physics simulations, you want higher precision math. That's why the A100 is tailored for 64-bit math, whereas the 3090 is tailored for 32-bit math and below, which is the most common precision used for graphics.

    • @douwehuysmans5959 (1 year ago, +2)

      I think the amount of Tensor cores is also different. Not even sure the older graphics cards have Tensor cores

    • @Psi34ax (1 year ago, +90)

      @@Daireishi i like your funny words magic man
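
To put numbers on the thread above: using approximate public spec-sheet figures (non-tensor-core FP32 peaks, and the commonly quoted FP64:FP32 ratios of 1:2 for GA100 versus 1:64 for GeForce Ampere, not the 1/16th guessed above), the gap looks like this:

```python
# Approximate non-tensor-core peak figures from public spec sheets.
specs = {
    #            FP32 TFLOPS, FP64:FP32 ratio
    "A100":      (19.5, 1 / 2),
    "RTX 3090":  (35.6, 1 / 64),
}

fp64 = {name: fp32 * ratio for name, (fp32, ratio) in specs.items()}
print(fp64["A100"])                               # 9.75 TFLOPS FP64
print(round(fp64["RTX 3090"], 3))                 # 0.556 TFLOPS FP64
print(round(fp64["A100"] / fp64["RTX 3090"], 1))  # 17.5x A100 advantage
```

So the 3090 can win FP32 benchmarks while losing FP64 by more than an order of magnitude, which is why gaming-style tests undersell a datacenter card.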

  • @ChuckRayNoris (1 year ago, +1)

    I love Linus and all of his employees. He's done such an amazing job over time collecting all the right people!

  • @ItsCleedusBoyz (8 months ago, +1)

    You can hear him drop things so often that it's got to the point he just doesn't even panic

  • @liangyuanbeats (2 years ago, +1580)

    The reason I like Linus videos is that even though I don't understand 90% of the content, I still enjoy watching it without skipping a second. Keep it up dude

    • @DmanLucky_98 2 years ago +43

      A100 vs RTX 3090:
      The A100, with a similar or slightly higher number of lanes per core, runs at lower power consumption while having almost 2x the computing power.
      So the A100 is more efficient at number-crunching workloads, but not so much at graphical loads.

    • @Alexander-bx4ut 2 years ago +3

      Same thought mid video

    • @mistersebaa6245 2 years ago +23

      @@DmanLucky_98 The A100 looks like a big golden chocolate bar

    • @cejuonline 2 years ago +8

      @@DmanLucky_98 A100 go brrr

    • @polar5578 2 years ago +4

      Bro I'm here for the segues

  • @WiggglezMr 2 years ago +777

    Linus: "We'll mask the serial so they can't find the person."
    **Shows the chip serial instead**

    • @hrithvikkondalkar7588 2 years ago +74

      also device id at 14:36

    • @faithblack3851
      @faithblack3851 2 года назад +4

      ...hmm

    • @noxious89123 2 years ago +47

      @@hrithvikkondalkar7588 The device ID is not unique; every card of the same model will have the same device ID. For example, every 980 Ti of the same model as mine (I can't say which specific model of 980 Ti it is, as I bought it second-hand with a waterblock fitted) will show 10DE 17C8 - 10DE 1151. You can google that and see for yourself.

    • @aelithmackinnon8656 2 years ago +28

      That's not the chip serial. That's the model and revision.

    • @whatsmyusername1231 2 years ago +6

      Device ID is same across GPU models, it's part of the PCIe spec.

  • @erickstamand 9 months ago

    There's a pretty good reason to use the same chip for multiple cards. Oftentimes when they get fabricated, they end up with unusable sectors, and there's not much anyone can do about it. By binning like that, they can still sell the chip depending on which sectors aren't working.

  • @carblakaman 7 months ago

    I like how the fan duct on the A100 made it look like a rocket ship.

  • @MeltingCake 2 years ago +141

    So, a note about the VRAM usage. ML/DL libraries will auto-allocate most of the RAM on the GPU rather than scale it based on what is being used. This is something you can change in settings: www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth
    Another thing to note: one of the benefits of more RAM is that you can actually use larger batch sizes, which should lead to substantially more images/sec (time per batch doesn't grow linearly with batch size). So in fact, training on the A100 with the same batch size as the 3090, if you specifically care about im/sec, is actually limiting its performance!
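
The setting linked above looks like this in TensorFlow (a config fragment mirroring the linked guide; only relevant if you're running TF, and it must be set before any GPU work starts):

```python
import tensorflow as tf

# By default TensorFlow reserves (nearly) all VRAM on every visible GPU,
# which is why the usage graph pins at ~100% regardless of the model.
# Memory growth makes it allocate only what the workload actually needs.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```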

    • @MrAcuriteOf1337 1 year ago

      Torch only allocates the VRAM it needs. But other than that, yes. In terms of batch size, the operations can be so highly parallel that you won't see an increase in time per batch until you've "saturated" the available compute resources, at which point it will more or less scale linearly. But the relationship between batch size, available compute, and VRAM in terms of evaluating which card is more gooder varies greatly based on the actual task at hand.

  • @ThePlayfulJoker 2 years ago +298

    While machine learning can be sped up using more memory, there are things that you literally can not do without more VRAM. For example, increasing the batch size even further will very quickly overwhelm the 3090. Batch size, contrary to popular belief, is not "parallelizing" the task, but actually computing the direction of improvement with higher accuracy. Using a batch size of one for example would not usually even converge on some datasets, and even if it does, it would take ages to do so.

    • @alexzan1858 2 years ago +5

      Big batch sizes don't necessarily converge either, which is why you might want to start with a big one but lower it as training goes on.

    • @aedieal 2 years ago

      Also, it depends a lot on what is used. If you're running inference and your model is big, it will need a lot of VRAM (proportional to the model size) and won't run if it doesn't have enough. You *could* split the model between cards, but then you run into bandwidth and performance problems.

    • @nottheengineer4957 2 years ago

      I assume we're talking about neural networks. Using bigger batches just means feeding more data sets into the model before backpropagation. Why does this increase memory usage linearly?

    • @srinathvs2647 2 years ago +1

      @@nottheengineer4957 Imagine sending 32 images of 512x512 pixels with three channels; that's a batch of 32, which would be an fp32 tensor of size 32*512*512*3. A bigger batch size means a larger floating-point array for the GPU to handle. So a batch of 64 would be a tensor of 64*512*512*3, effectively doubling the total memory required to process the tensor.
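
The arithmetic in the reply above, sketched in Python (sizes are the commenter's example, fp32 = 4 bytes per value; note this counts only the input tensor, not the activations a network also keeps around):

```python
def batch_bytes(batch_size, height=512, width=512, channels=3, bytes_per_value=4):
    """Memory needed just to hold one fp32 input batch."""
    return batch_size * height * width * channels * bytes_per_value

print(batch_bytes(32) / 2**20)  # 96.0  (MiB)
print(batch_bytes(64) / 2**20)  # 192.0 (MiB) -- doubling batch size doubles it
```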

    • @crylune 2 years ago +3

      I understood like two words

  • @theassassin225 1 year ago +6

    "No, I like to go in dry first" 6:53
    - Linus 2022

    • @OfficialPadre 1 year ago

      "Your butt is nerds butt" Yea these guys super ghaaaaaayyyyyyyyy

  • @clay5251343 1 year ago +24

    Building a new server at work and I’m using one of these. Pretty excited

    • @jwaddy 1 year ago +1

      What sort of things do they even use these for, is it like protein folding models in biotech firms or something?

  • @TimeBucks 2 years ago +118

    Jake is really growing

  • @espi742 2 years ago +154

    As a tip, nvidia-smi runs on Windows too; it's included in the driver.
    I used to use it to lower the power target without needing to install anything.
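
A sketch of the power-target workflow the tip describes (a hardware-config fragment; the 250 W figure is just an example value, needs an elevated prompt, and your card's supported range should be checked first):

```shell
# Show the current, default, and min/max allowed power limits
nvidia-smi -q -d POWER

# Lower the board power target to 250 W (example; requires admin rights)
nvidia-smi -pl 250
```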

    • @tobiwonkanogy2975 2 years ago +1

      Mine always closes immediately and I can't change settings. I've been working to get a Tesla functional on my rig and haven't been able to just yet.

    • @TrentDitto 2 years ago +8

      @@tobiwonkanogy2975 Add that directory to the windows path.

    • @EricParker 2 years ago +5

      An interesting thing about NVIDIA drivers is that they are essentially the same cross-platform. That's why NV won't release the source.

    • @NVMDSTEvil 2 years ago +3

      smi can also be used to overclock and adjust memory timings, that 174MH could be 200+ with tweaks.

    • @tobiwonkanogy2975 2 years ago

      Thanks for the tips, I'll try 'em out

  • @jamestaylorii4546 1 year ago +46

    Jake's "It's just so thick, why would you ever use it?" about the "spiciest" 3090 DID NOT age well now that the 4000 series is out 😂

    • @everythingsalright1121 1 year ago +1

      But the 4000s suk

    • @jakesnussbuster3565 1 year ago +3

      @@everythingsalright1121 They're great GPUs, it's just the price that's out of this world

    • @laycey 1 year ago

      @@everythingsalright1121 Turns out that was a lie. the 4080 and 4090 are really good cards, they're just horribly overpriced.

  • @captainspirou 1 year ago +21

    A major difference is that the A100 is made to run constantly, perhaps for months at a time. The RTX 3090 just runs while gaming.

  • @calsimeth1588 2 years ago +596

    So glad you guys are now including AI benchmarks. Please continue to do so! Some of your viewers are gamers and data scientists!

    • @Matty.Hill_87 1 year ago

      What is a data scientist In terms an idiot can understand? 😂

    • @DakanX 1 year ago +3

      SOME of their viewers are gamers?

    • @calsimeth1588 1 year ago +20

      @@DakanX Some viewers are *BOTH* gamers and data scientists.

    • @KILLERMATYCZ 1 year ago +1

      It was just good here because of the GPUs involved.

  • @adamgreenhill110 2 years ago +77

    "You can do anything you want with it"
    Linus: *drops the card*

  • @elskepode2 4 days ago

    Just got 6 of these (80GB version) in for work. Can't wait to install them!

  • @MasterMoose04 1 year ago +1

    My school, University of Florida received a bunch of the 80GB versions for our AI research programs. It’s part of the partnership my school has with Nvidia involving the Nvidia supercomputer.

    • @Dasycottus 1 year ago

      I'll be using your HiperGator pal for nanoCT work later :D
      I'm pretty sure I could actually use all 640GB of VRAM on a node if I go HAM w/my scans lol

  • @imyourmaster77 2 years ago +431

    You'd get a significant boost in speed with Blender when you render on a GPU if you set big tile sizes, like 1024 or 2048, under the Performance panel.

    • @imyourmaster77 2 years ago

      @@Barnaclebeard me? What? why?

    • @pygmalion8952 1 year ago +5

      256 for 1080p renders and up; that is how you get the fastest speed. If it is 4K, you go with 1024.

    • @sayochikun3288 1 year ago

      I don't get it. I've got a 4GB doodoo GPU and Blender automatically sets it to 2048.

    • @1e1001 1 year ago +4

      @@sayochikun3288 Modern Blender doesn't use tiles the same way

    • @t0biascze644 1 year ago

      @@1e1001 But the video is on an older Blender, 2.9 or 2.8

  • @w3bv1p3r 2 years ago +225

    Can you imagine the process that guy probably had to go through for sending that card over? Like disclosures for if Linus drops it or Jake misplaces a screw lol

    • @danielkraemer5744 2 years ago +29

      The number one thing I thought of when I saw the title was that they're done with Linus dropping their shit 😂

    • @filonin2 2 years ago +11

      Well, since it was quasi-legal and trying to keep it on the DL, I'd say he just wrapped it up in bubble wrap and a box and sent it UPS.

    • @rowan-paul 2 years ago +39

      Pretty sure if Linus broke it he'd buy a new one

    • @LoisoPondohva 2 years ago +17

      @@filonin2 well, it's 100% legal, he just didn't want to ruin relationship with Nvidia.

    • @x0myspace0x 2 years ago +1

      I would not trust a shipping company to handle it appropriately during transit...

  • @broccoloodle 1 year ago +6

    The deep learning performance mostly comes from more memory and faster memory.

  • @JohnWilliams-gy5yc 8 months ago

    Even with only half the cores, the I/O side is the *KEY* to the whole picture. The 5120-bit bus and the memory bandwidth mean efficiency, and efficient compute means speed. I wish someone would hack Vulkan or DirectX to work on this.
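
Back-of-the-envelope peak-bandwidth math for that 5120-bit bus (a sketch; the per-pin data rates are assumptions taken from public spec sheets: ~2.43 Gb/s for the A100 40GB's HBM2, 19.5 Gb/s for the 3090's GDDR6X):

```python
# Peak memory bandwidth = bus width (bits) * per-pin data rate (Gb/s) / 8 -> GB/s
def peak_gb_per_s(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

a100_hbm2   = peak_gb_per_s(5120, 2.43)  # ~1555 GB/s: very wide, modest clocks
rtx3090_g6x = peak_gb_per_s(384, 19.5)   # ~936 GB/s: narrow bus, very fast pins
print(round(a100_hbm2), round(rtx3090_g6x))  # 1555 936
```

The wide-and-slow HBM2 approach is a big part of why the A100 hits higher bandwidth at lower power than GDDR6X.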

  • @daurdeh 2 years ago +159

    For training deep-learning AI, machine learning, or similar, this one is a beast. Also great for rendering, because both need lots of GPU memory.

    • @MrSteve-hy9yo 2 years ago +7

      Agree, my AI buddy already has his company ordering a few (80 GB model), two of which will go to his high-end workstation. How lucky. But like you said, if you have large-scale training data sets or are doing deep learning, these cards are at the top. For anything else, they're likely not worth it.

    • @stanleybochenek1862 2 years ago

      you know tbh
      this gpu could make a nasa supercomputer

  • @DSB1234567890 2 years ago +20

    The difference in finishes you see at 6:10 looks like part of the shroud was milled. The matte parts look to be as-is (likely stamped or cast depending on the thickness). The smoother parts with lines going in squares are milled (kind of like a 3D drill to cut away material). This means they were taking higher-volume parts and further customizing them for these cards (milling is done for much smaller production runs than stamping or casting/molding).

  • @loadb5985 2 months ago

    In the NVIDIA CLI you can enable OS graphics for the card, and then it will show in Task Manager.

  • @SmileFile_exe 1 year ago +4

    6:54 "No, I like to go in dry first" -Jake

    • @SmileFile_exe 1 year ago +2

      Jake's gf: Do you want to use the lube?
      Jake: No, I like to go in dry first.

  • @Funny9689 2 years ago +31

    Tensorflow allocates all of the GPU that you give it. That's why the VRAM usage is almost 100% in both cases. 512 batch size on a ResNet50 barely uses any memory, so this benchmark might not actually be pushing the cards to their limit.

  • @TechDove 2 years ago +124

    Linus: "I've found my gold"
    Jake: "what?"
    Linus: "Yvonne"
    Jake: *dies of cringe*

    • @WyattWinters 2 years ago +24

      the way he said "Yvonne" was so endearing tho

    • @TechDove 2 years ago +19

      @@WyattWinters I mean to be fair, that's how I feel about my wife and when you find the one you just know it

    • @BenjiiiB1 2 years ago +11

      Such a lovely moment! I hope she sees it accidentally and smiles

    • @kliajesal4592 2 years ago +9

      Jake: *dies of cringe*
      Audience: AAAWWWW that's so sweet!

    • @industrialvectors 2 years ago +7

      As a married man, I saw this coming from a mile. That's sweet.

  • @gienavak5425 1 year ago +5

    1:59 ✈️ 🏬🏬 bro 💀💀💀

  • @MasanaAnta 9 days ago

    your video left me feeling inspired and excited, thank you!

  • @TheBritishPatriot 2 years ago +249

    Whoever sent that in is literally putting their job on the line and in the hands of a clumsy Linus, watching him take this apart gave me huge anxiety! 😂

    • @RomboutVersluijs 2 years ago +10

      Kinda doubt it. If you need it that much, why send it? He now can't use it for X amount of days.

    • @CBourn48223 2 years ago +34

      Probably "borrowed" from his work, hoping his manager doesn't see any identifying marks on the missing card.
      Also, if you're gonna steal a 10k card, you're probably not that bright to begin with.

    • @CBourn48223 2 years ago +45

      @@RomboutVersluijs He wanted it back with a cooler and a mining benchmark he probably doesn't know how to use it. lol

    • @Oxytropis1 2 years ago +6

      Nope probably just a miner with extra cash looking to increase efficiency. 70% higher hash rate over a 3090 w/ 25-30% lower power consumption seems good but at 3-4x the initial cost. It will take 5 years of continuous running to pay for itself, assuming about 5.50 a day profit. 3090 will pay for itself in 2.5 years.. this is all of course assuming crypto remains completely flat, which is highly unlikely.
      If I can get some of these at a good discount I will probably pick some up.

    • @Lycon721995 2 years ago +3

      "Yo test mining with it", dude prolly jacked the thing from somewhere or got it from the market and threw it a linus before he puts it with the rest of his mining operation to see what he's dealing with.

  • @gabrielegaetanofronze6690 2 years ago +94

    When, after admiring Linus and the crew for years and counting, you realize you bought a pair of those babies at work and you have an ssh key to log in and use them, you immediately figure out how far you have come since the first inspiration you got from LTT. Thanks guys, you are a good part of what I've gotten to!

    • @IngwiePhoenix 2 years ago

      People encrypt their backup.
      You better ALSO encrypt your .ssh/ xD Holy crap. Congrats on your achievements tho!

  • @captaindunsell8568 9 months ago

    We did this in the '70s with an IBM System/370 drop-in card... the base processor is there to do I/O, disk, and network services to feed the co-processor. Cray computers did this also... we used to call them vector processors.

  • @arrow20711 8 months ago +2

    now every tech company is gonna refuse to send linus anything

  • @harrylane4 1 year ago +446

    That fan sending the GPU in for them to do whatever they want to it almost makes up for them being a cryptobro.
    Almost.

    • @MalwarePad 1 year ago +5

      You are the reason they decided to stay anonymous. You and the whole toxic gaming community.

    • @joeschmo123 1 year ago +9

      cringe

    • @AR15ORIGINAL 1 year ago +1

      @@joeschmo123 what is cringe

    • @joeschmo123 1 year ago +18

      @@AR15ORIGINAL you

    • @Akil69 1 year ago +4

      @@joeschmo123 W

  • @BlahBleeBlahBlah 2 years ago +85

    The HBM2 will be saving quite a bit of power vs the GDDR6X on the 3090. It'll also be a huge boost in some workloads. TSMC's 7nm process is no doubt better than Samsung's 8N; it'll be interesting to see how Lovelace and RDNA3 do on the same 5nm node.

    • @Nobody101guy 2 years ago +5

      TSMC N7 is better than Samsung's 8 nm for sure, but the reason the A100 is so much more efficient than the 3090 is not because of the die technology.

    • @ZackSNetwork 2 years ago

      RDNA3 will use MCM technology. This will possibly allow AMD to win in rasterization performance and be much more power efficient than Lovelace.

    • @PAcifisti 2 years ago +9

      People will shit on you if you even dare to mention that RDNA 2.0 is worse than Ampere as an _architecture_ because it has a pretty significant node advantage and still only trades blows with Ampere. But just look at this Ampere on TSMC's 7nm, it's quite darn efficient. It will indeed be interesting to see the Lovelace vs RDNA3 on the same node.

    • @zzzZniitemareZzzz 2 years ago

      @@PAcifisti Well yes, but this A100 card also has a die like 3x the size, so it spreads heat out better than any of the gaming cards.

    • @neutronpcxt372 2 years ago

      @@PAcifisti To be fair, the A100 has low clocks and has a massive die size at 830mm2.
      It's not even fair lmao.
      Same thing about current desktop Ampere: 20% larger than the largest RDNA2 die.

  • @vediovis 1 year ago

    Dropping an idea here! Could you give it a try with Stable Diffusion? Let's see how many it/s it can offer! Keep up the great content guys! ❤from 🇬🇷

  • @samk8587 1 month ago +2

    6:53 Linus: Nah, I like to go in dry first...
    great 😄

  • @freakysnuke2571 2 years ago +64

    I like how they always make a separate video for the top-of-the-line HPC/professional Nvidia card of each generation and hype it up like it's a gaming card that just released, instead of year(s) ago. I don't mean that in a negative way.

  • @drcyb3r 2 years ago +65

    6:15 This might be the point where the case was mounted to a big industrial suction cup. Manufacturers often do that when spray painting a piece of metal. You can see that on a lot of metal stuff that doesn't need to look good from the inside.

  • @kingofnorc5430 1 year ago +7

    Watching them tear apart my card was stressful, not gonna lie, but totally worth it! Great video guys, I'm happy I kind of got to be a part of it!

    • @fireboy2623 1 year ago +3

      bro what this video is 11 months old there is no way that was your card lol

    • @kingofnorc5430 1 year ago +1

      @@fireboy2623 well I actually just quit my job today, and that card was used in said job. Since I don't have to worry about being fired anymore I figured I'd finally leave a comment on the video! ^-^

    • @_Chontaduro_ 1 year ago

      I applaud you! I’d be shitting bricks worrying that Linus would drop something if I was you 😂

  • @mr.soyhair8888 9 months ago +1

    Oh yeah, also, for the fan shroud thing: I'm sure you know, but sucking air through a confined space is almost always more efficient than pushing it.

  • @savasilviu3194 2 years ago +474

    I think the "cooling solution" would have worked better if you would've reversed the airflow

  • @c4sualcycl0ps48 2 years ago +77

    Linus has so much power. He doesn't even have to ask Nvidia for the chance to try this card, or offer a bounty for someone to get him one via "other means"; fans just want to see him make the content.

    • @blackperal 2 years ago +28

      He also has enough integrity and funds that if he accidentally destroyed the card purely due to his own actions he'd probably refund the money.

    • @psp785 2 years ago +4

      It's a loan

    • @crylune 2 years ago +1

      @@blackperal yeah, keyword "probably"

  • @FloresdorfGaming 10 months ago +3

    22:08 that aged well...

  • @lmripper3659 1 year ago

    Could you guys do an in-depth episode on MSI Afterburner? When and what to boost, and by how much, to achieve something with minimal damage?

  • @l0n3w01f 2 years ago +26

    When Linus said "I found my gold" I thought "how sweet, he’s talking about his wife" and jake was just "pshhh please" xD

  • @wadalwadal 2 years ago +88

    5:15 Great, now Nvidia can super sample that fingerprint and find the technician that assembled the card, figure out where the technician worked, then where this card was assembled, then track down where it was sold, so they can find who it was sold to. oh no 😂😂😂

    • @lunakoala5053 2 years ago +14

      I know this is just a joke anyway, but that "technician" is probably some chinese kid assembling hundreds of those a day.
      You need to add some stupid detail to narrow it down further. Maybe there are 9/10 prints on the card and this is explained by the technician having a cut on his 10th finger and taping it. But only for 20 minutes for the bleeding to stop, so you can narrow it down to a handful of cards whose owners you can then manually check out.
      Yes, I totally should write episodes for Navy CIS.

    • @fjjwfp7819 2 years ago +1

      @@lunakoala5053 I mean imagine working in a factory to be assembling something this expensive

    • @wadalwadal 2 years ago

      @@lunakoala5053 Collab with Joel Haver maybe? damn ❤️

    • @wadalwadal 2 years ago

      @@fjjwfp7819 *imagine working in a factory to be assembling something this expensive, that ends up in Linus steady hands 😂😂

  • @Zaza_Cat 1 year ago

    Your QR584 mapping on the A100 XM89 is just sufficient enough to show that the 3090 has similar Curds at the same number of Kilohats per flamingman

  • @cantunerecordsalvinharriso2872 1 year ago +10

    you guys are the geekiest.....I don't understand a thing you are talking about, but I am fascinated and really enjoying watching and listening to you geek out....I will like and subscribe just to reward your enthusiasm!!

    • @spaghettiarmmachine7445 1 year ago

      how tf r they geeky

    • @cantunerecordsalvinharriso2872 1 year ago

      @@spaghettiarmmachine7445 here is the dictionary definition of "geek"
      "engage in or discuss computer-related tasks obsessively or with great attention to technical detail.
      "we all geeked out for a bit and exchanged ICQ/MSN/AOL/website information"
      It was not meant as an insult or derogatory. I do believe they engaged in computer related tasks with great attention to technical detail. Anyway I loved their enthusiasm for their subject and although I did not understand it.... I enjoyed watching their absolute joy discovering the technical intricacies of the product they were reviewing. Sorry if I offended you...

    • @redeyeskuriboh2839 1 year ago

      @@spaghettiarmmachine7445 How tf do you watch this video and *NOT* think they are LMFAO.
      Like bro when you're literally fawning over a piece of computer tech that pretty much no normal consumer will ever own in their life, and spitting nerdfacts and terminology that almost nobody will intricately understand unless you have a very deep grasp of the subject matter...at that point is literally the definition of the word.

    • @redeyeskuriboh2839 1 year ago

      @@cantunerecordsalvinharriso2872 Don't apologize to these spoon brains lol. They all dress up in their mothers undergarments.

    • @D3STRUCT3RSMURF 1 year ago

      @@spaghettiarmmachine7445 get

  • @Mr3ppozz 2 years ago +20

    I hope the guy that owns this card didn't die 14 times from a heart attack...
    Also, thank you, actual owner, for making it possible for all of us to watch this teardown and video!

  • @DerCribben 2 years ago +55

    I'd be interested in seeing how they compare rendering a single tile all at once. That's what got me when I switched from a 2080 Ti to a 3090, not realizing right away that it would render 2k tiles without sweating, but rendering out a bunch of smaller tiles they were rendering basically the same times.

  • @jonathanjeter6934 2 months ago

    The video was released a year ago and it's the most exciting computer video I've seen in a while 💀 The fact that it's already outdated is sick

  • @johneralddayrit7833 1 year ago +3

    "I'd like to go in dry first" -Linus Sebastian 2022

  • @mathieswedler 2 years ago +58

    Note on TensorFlow and VRam utilization: TensorFlow allocates all of the available VRam even though it might not use all of it. Furthermore, in my studies, models ran considerably slower with XLA enabled. Would be interesting to know how the cards perform with XLA off!

  • @dumpsterdawg 2 years ago +121

    Linus: "It's not ribbed for my pleasure"
    They're never ribbed for "Your" pleasure.

    • @whasian1487 2 years ago +4

      Flip it inside out? Lmao

    • @oldguy9051 2 years ago +1

      No, Linus said it quite right... ;-)

    • @ironicbobcat1 2 years ago +6

      @@oldguy9051 it means he's the one getting poked

    • @Chazbc 2 years ago +4

      Depends on your configuration.

    • @metroplexprime9901 2 years ago

      Hey, we don't know what Linus and Yvonne are into.

  • @dysennn 11 months ago +1

    the technician who assembled the card is gonna be identified by that fingerprint 😳

  • @VictorTorstensen 1 year ago

    Thank you for the Hitchhikers Guide to The Galaxy reference.

  • @jfolz 2 years ago +92

    "All 40 GB used!"
    Well, that's just how tensorflow works. It reserves all memory on the card. With batch size 512 and fp16 training ResNet 50 will use maybe 16 GB? Not sure, I use Pytorch.

    • @amanfizz 2 years ago +2

      Your comment makes me question my education.

    • @larine4459 2 years ago +1

      I like playing Minecraft and watching RUclips while I eat chickey nuggeys. We are not the same.

    • @Lodinn 2 years ago +3

      Came here for this comment. Although your estimate seems to be off: the memory usage for half-precision ResNet50 and 512 batch size should be closer to 26 GB, putting it out of reach for training on a 3090.
      I am glad this kind of production work finally gets coverage though.

    • @maaadkat 2 years ago

      That was probably the rate it was being filled though. When I load GPT-NeoX-20B PyTorch allocates 40GB almost instantly, and then fills it up. That's different to loading a model with HuggingFace transformers, where usage increases relatively gradually like the use case in the video.

    • @jfolz 2 years ago +1

      @@maaadkat ResNet 50 is a different model though. Comparatively tiny by today's standards. Most of that memory is used by intermediate activations.
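
To put rough numbers on the 16 GB vs 26 GB disagreement in this thread: a back-of-the-envelope sketch (assuming the standard 224x224 ImageNet input and ResNet-50's ~25.6M parameters) shows the input batch and the weights themselves are tiny, so whichever figure is right, the memory is almost entirely the per-layer activations kept for backprop:

```python
# fp16 = 2 bytes per value
batch, channels, height, width = 512, 3, 224, 224

input_bytes = batch * channels * height * width * 2  # one fp16 input batch
weight_bytes = 25_600_000 * 2                        # ~25.6M ResNet-50 weights

print(f"inputs:  {input_bytes / 2**20:.0f} MiB")   # ~147 MiB
print(f"weights: {weight_bytes / 2**20:.0f} MiB")  # ~49 MiB
# Everything beyond this (tens of GB) is intermediate activations and
# optimizer state, which is why usage scales with batch size.
```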