5 Questions about Dual GPU for Machine Learning (with Exxact dual 3090 workstation)

  • Published: 9 Jun 2024
  • In this video I cover how to use a dual GPU system for machine learning and deep learning. I look at five questions you might have about a dual GPU system.
    1:01 Question 1: Do two GPUs combine into one big GPU?
    1:42 Data Paralyzation
    2:40 Dual GPU Performance
    5:02 Model Paralyzation
    7:16 Question 2: Can I buy one now, and one later?
    9:37 Question 3: Can I mix multiple types of GPU?
    10:35 Question 4: How do you cool multiple GPUs?
    12:39 Question 5: Do I need NVLink?
    ** System Used **
    * TRX40 Motherboard
    * Threadripper 3960x
    * 128GB Memory (16GBx8)
    * 2x 4TB PCIe 4.0 NVME
    * 2x NVIDIA GeForce RTX 3090
    * NVLINK Bridge
    For more information about the machine featured in this video, please visit:
    www.exxactcorp.com/Deep-Learn...
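As the chapter list above suggests, the answer to Question 1 is data parallelism: the two GPUs don't fuse into one big GPU; each card holds a full copy of the model, the batch is split between them, and the per-card gradients are averaged. A framework-free Python sketch of that split-and-average step (real training would typically use e.g. PyTorch's DistributedDataParallel; the toy least-squares model and numbers here are illustrative only):

```python
# Sketch of data parallelism: split a batch across N "GPUs",
# compute a per-device gradient, then average them (as an all-reduce would).

def split_batch(batch, n_devices):
    """Divide a batch into near-equal shards, one per device."""
    k, r = divmod(len(batch), n_devices)
    shards, start = [], 0
    for i in range(n_devices):
        end = start + k + (1 if i < r else 0)
        shards.append(batch[start:end])
        start = end
    return shards

def local_gradient(shard, weight):
    """Toy gradient for least-squares fit of y = w*x on one shard."""
    return sum(2 * x * (weight * x - y) for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    """Average gradients across devices, as NCCL all-reduce would."""
    return sum(grads) / len(grads)

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
shards = split_batch(batch, n_devices=2)
grads = [local_gradient(s, weight=0.0) for s in shards]
print(all_reduce_mean(grads))  # → -30.0, identical to the full-batch gradient
```

With equal-sized shards, the averaged gradient matches the full-batch gradient exactly, which is why data parallelism speeds training up without changing the math.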

Comments • 104

  • @deltax7159
    @deltax7159 2 months ago +1

    just found your channel! I'm a graduate student studying statistics planning on building my own ML/DL PC upon graduation to use for gaming/ my own personal research and your channel is slowly becoming INVALUABLE! thanks for all this great content Jeff!

  • @GuillaumeVerdonA
    @GuillaumeVerdonA 2 years ago +4

    This is exactly the video I needed right now, Jeff! Thank you

  • @lenkapenka6976
    @lenkapenka6976 1 year ago

    Jeff, Fantastic video.... explained a lot of stuff I was slightly fuzzy on.... your explanations were first class

  • @adityay525125
    @adityay525125 2 years ago +28

    Can we get a 3090 vs A series, with mixed precision thrown in

  • @KhariSecario
    @KhariSecario 1 year ago

    Thank you! This answers many questions I had about building a parallel GPU setup.

  • @simondemeule3934
    @simondemeule3934 2 years ago +13

    Would love to see a 3090 vs A5000 vs A6000 comparison. These are all very closely related - they use the same processor die - what varies is the feature set that is enabled (notably performance on various data types and compute unit count), the memory type and size (GDDR6X vs ECC GDDR6, 24GB vs 48GB), clock speed, power consumption (350W vs 230W vs 300W), cooling form factor (consumer style vs datacenter style), and datacenter usage agreement. It costs a similar amount to get two 3090s, two A5000s or one A6000, and that can be a sweet spot for researchers, budget-wise. That yields the same total VRAM and a comparable amount of compute performance, but in practice these setups can behave drastically differently depending on how the workload parallelizes. Cooling also becomes a concern with more than two GPUs.

  • @zhyere
    @zhyere 4 months ago

    Thanks for sharing some of your knowledge in all your videos.

  • @hoblikdlouhovlasy2431
    @hoblikdlouhovlasy2431 2 years ago +1

    Great video as always! Thank you for your effort!

  • @silverback1861
    @silverback1861 2 years ago

    Thanks for this comparison. Learnt a lot to make a serious decision.

  • @harrythehandyman
    @harrythehandyman 2 years ago +25

    It would be nice to see RTX 3060 12GB vs RTX 3080Ti 12GB vs RTX 3090 24GB vs A6000 in FP16, FP32, FP64.

  • @datalabwork
    @datalabwork 2 years ago +2

    I have watched every single bit of your video... those IDS topics interest me.
    Would you kindly make a video reviewing DL-based IDS on GPU sometime in the future?

  • @wentworthmiller1890
    @wentworthmiller1890 2 years ago

    Comparison wishlist: 3090 vs (3080 ti, 3080, 3060, vs 3060). A combination also: 3090 + 3080 ti, 3090 + 3080, 3090 + 3060. That's a lot. Thought I'd ask 😊 😁. Thank you so much for putting these vids together - it's nice to see and understand various facets of DL, which are not covered in academics generally. Very helpful to get a holistic perspective for a noob like myself.

  • @weylandsmith5924
    @weylandsmith5924 2 years ago +11

    @Jeff: I don't concur that Exxact has built their workstation so that cooling is maximized. Quite the contrary: I've not managed to work out which 3090 model they are using, but nobody will convince me that two air-cooled 3090s, stacked tightly (not even one slot of separation), won't throttle. And indeed that's demonstrated in your very video.
    Note that you shouldn't watch for die throttling, BUT for gddr6x throttling. Unless you take some fairly drastic precautions, the memory will throttle, and this has been observed for all 3090s on the market (both open air and blower types). By drastic measures I mean: generous heatsinks on the backplate *and* at least two slot separation *and* a very good case airflow *and* reducing the TDP by at least 15% ("and", not "or").
    In any case, note that your upper 3090's die *IS* throttling as well: 86C engages thermal throttling for the die. It's not surprising that there is such a big difference from the lower one, since the upper card sucks in air heated by the lower card's very hot backplate. And you don't have any margin left: the fan is already at full speed. That's BAD.
    Stacking the gpus so close just so that you can use the A-series nvlink bridge is a bad policy: you trade a bit more nvlink bandwidth for a card that will severely overheat. Use the 4-slot nvlink bridge for the 3090s, and put MORE distance between the cards.
    Disclaimer: I'm not in the business of building workstations. I'm just an AI engineer who struggled with his own build's cooling (dual nvlinked 3090s as well), learning something in the process.
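Throttling like the commenter describes is easy to watch for by polling die temperature and fan speed; note that on GeForce cards `nvidia-smi` does not expose the GDDR6X junction temperature the comment warns about, so the die reading is only a lower bound. A small sketch that parses the CSV output of `nvidia-smi --query-gpu=index,temperature.gpu,fan.speed --format=csv,noheader,nounits` and flags cards near the ~86 C throttle point discussed here (the sample string below is invented for illustration):

```python
# Flag GPUs running near the ~86 C die-throttle point discussed above.
# SAMPLE mimics the CSV output of:
#   nvidia-smi --query-gpu=index,temperature.gpu,fan.speed --format=csv,noheader,nounits
SAMPLE = "0, 71, 62\n1, 86, 100"

THROTTLE_C = 83  # start worrying a few degrees before the die throttles

def parse_smi(csv_text):
    """Turn 'index, temp, fan%' CSV rows into dicts."""
    rows = []
    for line in csv_text.strip().splitlines():
        idx, temp, fan = (field.strip() for field in line.split(","))
        rows.append({"gpu": int(idx), "temp_c": int(temp), "fan_pct": int(fan)})
    return rows

def hot_gpus(rows, limit=THROTTLE_C):
    """Return the cards at or above the warning threshold."""
    return [r for r in rows if r["temp_c"] >= limit]

for r in hot_gpus(parse_smi(SAMPLE)):
    print(f"GPU {r['gpu']}: {r['temp_c']} C, fan {r['fan_pct']}% - near throttle")
    # → GPU 1: 86 C, fan 100% - near throttle
```

In a real loop you would feed it live `nvidia-smi` output (e.g. via `subprocess.run`) every few seconds instead of the sample string.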

    • @stanst2755
      @stanst2755 1 year ago

      this copper mod might help ruclips.net/video/f8f6ZHCPVpw/видео.html

    • @peterklemenc6194
      @peterklemenc6194 1 year ago +2

      So did you go the water-cooled option or just multi-fans experiments?

  • @69MrUsername69
    @69MrUsername69 2 years ago +1

    Hi Jeff, I would like to see more use cases and benchmarks with/without NVLink, as well as various precisions (FP16/32/64), to see whether Tensor Cores also benefit from NVLink memory. Please illustrate some multi-GPU use cases and benefits.

  • @FrancisJMotionDesigner
    @FrancisJMotionDesigner 2 months ago

    I'm trying to install a second GPU, a 3070, in my PC. I already have a 3080 Ti installed. I have enough power, but after installation there is lag when I move my mouse and frequent crashes. I tried removing all drivers and doing a fresh install with DDU. My motherboard is an ASUS ROG Strix X570-E... Please let me know what I'm doing wrong. Could it be something with PCIe lane support?

  • @atefamriche9531
    @atefamriche9531 2 years ago +4

    Not an expert here, but I think in terms of design, a triple- or quad-slot NVLink with more spacing between the two GPUs would help a LOT. The top GPU is choked.
    Also, have you checked the memory junction temp? Because if your GPU core is hitting 86 deg-C, then the memory junction temps are probably over 105 deg-C, and that is definitely in thermal throttling territory.

  • @eamoralesl
    @eamoralesl 1 year ago

    Great video, it helped me get a better picture of how dual GPUs are used. A question here: I got one of the newer 2060s with 12 GB and wanted to pair it with another GPU, but I can't find the same make and model. Would it matter if it's a different make? Is it worth getting 2x 2060 in 2023 just to have 24 GB of VRAM, or should I start saving for newer GPUs? Budget is a concern because latest-gen GPUs cost almost 3x their Amazon price in my country, so imagine those prices... Thanks, any opinion helps.

  • @absoluteRa07
    @absoluteRa07 1 year ago

    Thank you very much, very informative.

  • @arisioz
    @arisioz 12 days ago

    The infamous "Data Paralyzation"

  • @Mi-xp6rp
    @Mi-xp6rp 2 years ago +17

    I would love to see more use of the 12 GB RTX 3060.

    • @qjiao8204
      @qjiao8204 1 year ago

      I guess you must have been misguided by this guy. Don't buy the 3060; in this price range the memory is not important anymore. Get a 3070 or 3080, much much faster than the 3060.

  • @JamieTorontoAtkinson
    @JamieTorontoAtkinson 1 year ago

    Another gem, thank you!

  • @harry1010
    @harry1010 2 years ago

    Thank you for this!!!!!!

  • @plumberski8854
    @plumberski8854 1 year ago

    Interesting topics for a beginner with this new ML/DL hobby! Can I assume that the difference between the 3090 and 3060 GPUs here is processing time (assuming the data is small enough for the 3060)?

  • @seanreynoldscs
    @seanreynoldscs 2 years ago +1

    I find that when I'm working with real-world problems, my tuning can go quicker with multiple GPUs by just training two models back to back as I tune.

  • @Enterprise-Architect
    @Enterprise-Architect 6 months ago

    Thanks for this video. Could you please post a video on how to create a cluster using NVIDIA Tesla K80 24GB GDDR5?

  • @josephwatkins1249
    @josephwatkins1249 2 years ago

    Jeff, I have an 8 GPU 30 series rig that I'd like to use for machine learning. If I wanted to use these for data parallelization, how would I set this up?
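On a single machine like that 8-GPU rig, the usual route is PyTorch DistributedDataParallel launched with `torchrun --nproc_per_node=8 train.py`: one process per GPU, each process given a rank, and a DistributedSampler handing each rank a disjoint slice of the dataset. A framework-free sketch of that rank-based sharding (the dataset here is just a list of indices for illustration):

```python
# How DDP-style sharding assigns samples: rank r of world_size w
# takes every w-th sample starting at r (like torch's DistributedSampler).

def shard_indices(n_samples, rank, world_size):
    """Indices this rank will process in one epoch."""
    return list(range(rank, n_samples, world_size))

world_size = 8          # one process per GPU on the 8-GPU rig
n = 20
shards = [shard_indices(n, r, world_size) for r in range(world_size)]

# Every sample is covered exactly once across all ranks.
covered = sorted(i for s in shards for i in s)
print(covered == list(range(n)))   # True
print(shards[0])                   # rank 0 sees [0, 8, 16]
```

Each process then runs the same training loop on its own shard while DDP averages gradients between them after every backward pass.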

  • @0Zed0
    @0Zed0 2 years ago +8

    I'd like to see the 3090 compared to the 3060 and also a comparison of their power consumption, although with a remote system I doubt you'll be able to do that. Obviously the 3060 would be much slower to train on the same data as a 3090 but would it use more, less or the same power to do it?

    • @amanda.collaud
      @amanda.collaud 2 years ago +4

      @@kilosierraalpha I have a 2080 Ti and a 3060 in my computer, and they work well. The 3060 is not horribly slower than my 2080 Ti, so... please don't make it sound like the 3060 is not suitable for ML. You can overclock the memory bus btw, I did it as well and nothing bad has happened yet :D

  • @British_hunter
    @British_hunter 9 months ago

    Nailed my setup with custom water cooling on 2x RTX 3090 GPUs and a separate CPU loop.
    Temps on core, memory, and power delivery don't reach over 45 Celsius at full load.

  • @theccieguy
    @theccieguy 1 year ago +1

    Thanks

  • @DailyProg
    @DailyProg 5 months ago

    Jeff do you have a comparison between 3060 and 3090 and 4090? I have a 3060 and wondering if it is worth the 6x cost to upgrade to a 4090

  • @markhou
    @markhou 1 year ago

    In general, would the 3060 Ti be a better pick than the non-Ti 12 GB VRAM version?

  • @siddharthagrawal8300
    @siddharthagrawal8300 1 month ago

    In your tests, do you use NVLink on the 3090?

  • @sherifbadawy8188
    @sherifbadawy8188 1 year ago

    Would you suggest dual 3090 Ti with NVLink, or two RTX 4090s without NVLink?

  • @rahuls190
    @rahuls190 2 years ago

    Hello, can I use NVLink between a Quadro RTX 5000 and an RTX 3090? Kindly let me know.

  • @mamtasantoshvlog
    @mamtasantoshvlog 2 years ago +2

    Jeff, it seems you confused yourself both while editing and while shooting the video. It's data parallelization, not paralyzation. I hope I am correct; let me know if that's not the case. Also, I would love your advice on something.

  • @97pingo
    @97pingo 2 years ago +2

    I would like to ask your opinion regarding notebooks.
    My question is: which notebook might be worth buying in a scenario where I might have a server for heavy computing?
    The choice of notebook is linked to the need for mobility.

  • 11 months ago

    Hello Jeff. Thank you for sharing. However, I see an NVLink bridge in your system that looks like a 3-slot bridge. With this bridge, obviously your two GPUs had to be placed close to each other, as in the video. I think, although they may still be compatible with each other, this is not a good combination. This way, the GPU below will heat up the GPU above, and there is no gap to provide fresh air for the GPU above. This poses a risk of damage, even fire, if the system runs at full load for a long time. Looking at your temperature measurements, I also agree with a guy who commented earlier that the actual highest temperature your GPU can reach is over 100 degrees C at the hottest point (VRAM). Also, there is no 3-slot NVLink bridge dedicated to the RTX 3090 on the market; only 4-slot bridges are available for this GPU. And I think the manufacturers have their reasons, related to the temperature issue. With a 4-slot bridge, the spacing is wider, so there is more room for fresh air to circulate and cool the RTX 3090s better.
    I think your system should use another motherboard, one that has a wider gap between the two PCIe x16 slots than the current one, enough to fit a 4-slot NVLink bridge. I see that a motherboard like the ROG Strix TRX40-E Gaming meets this condition. And if anything I say is not accurate, please give feedback so I can update my knowledge. :D

  • @hungle2514
    @hungle2514 10 months ago +1

    Thank you for your video. I have a question. Suppose I have two monster 3090 GPUs and use NVLink to connect them. Will the system see only one card with 48GB, or two cards? Can I train a model that needs at least 32GB on the 3090s?
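On that question: even with NVLink connected, tools such as nvidia-smi and `torch.cuda.device_count()` still report two separate devices; to train a model bigger than one card's 24 GB you generally split the model itself across the cards (model parallelism), e.g. sending early layers to `cuda:0` and later ones to `cuda:1` in PyTorch. A framework-free toy of that placement decision (the layer sizes below are invented for illustration):

```python
# Toy planner: assign layers greedily to 24 GB cards so a ~32 GB model fits.
def place_layers(layer_sizes_gb, capacity_gb=24.0):
    """Return, for each layer, the index of the device it lands on."""
    placement, used, device = [], 0.0, 0
    for size in layer_sizes_gb:
        if used + size > capacity_gb:   # current card full: spill to the next
            device += 1
            used = 0.0
        placement.append(device)
        used += size
    return placement

layers = [6.0, 6.0, 6.0, 6.0, 4.0, 4.0]   # 32 GB of weights in total
print(place_layers(layers))  # [0, 0, 0, 0, 1, 1]
```

The first card takes 24 GB of layers and the second the remaining 8 GB; activations then hop between cards at the split point, which is exactly the traffic NVLink accelerates.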

  • @jakobw135
    @jakobw135 22 days ago

    Can you put in two GPUs from TWO DIFFERENT MANUFACTURERS and hook them up to the same monitor?

  • @mikahoy
    @mikahoy 1 year ago

    Does it need to be connected via NVLink, or is it just plug and play as-is?

  • @kailashj2145
    @kailashj2145 2 years ago

    Hoping to see your suggestions for this year's GTC, and hoping for some coupons for the conference.

    • @HeatonResearch
      @HeatonResearch 2 years ago +1

      Working on that now, actually.

    • @hanhan-jc5mh
      @hanhan-jc5mh 2 years ago

      @@HeatonResearch Thank you for your work. I would like to know which plan is better for a GAN project: 4x 3080 Ti or 2x 3090? Thank you.

  • @mohansathya
    @mohansathya 8 months ago +1

    Jeff, did the dual 3090 (NVLink) actually give you double the VRAM seamlessly?

    • @redmi26635
      @redmi26635 7 months ago

      I have the same problem

  • @wlyiu4057
    @wlyiu4057 9 months ago

    The upper GPU looks like it is going to overheat. I mean it is only barely drawing in air already heated by the lower card.

  • @whoseai3397
    @whoseai3397 1 year ago

    It's fine to install RTX2080+RTX3080 together, it works!

  • @QuirkyAvik
    @QuirkyAvik 2 years ago

    I bought one 3090 and was so amazed I got another one. Now I am considering building a proper workstation PC, since I have picked up a "hobby" of editing people's 4K (sometimes 8K) footage for them, along with learning 3D modelling, as I want to get into 3D printing as well.
    The dual 3090s were bought at more than twice MSRP, which has stopped me from building a workstation even though I finally have a case (no pun intended) for it.

  • @Rednunzio
    @Rednunzio 2 years ago

    Windows or Linux for ML in a multi-GPU system?

  • @abh830
    @abh830 1 year ago

    What's the recommended case for dual RTX 3090 Ti? Are dual-system cases better?

    • @HeatonResearch
      @HeatonResearch 1 year ago

      That is a 3-slot GPU, so make sure there is enough space and that you can fit it and have at least decent airflow. This is an area where the gamer recommendations on dual 3090 would apply directly to machine learning, and I've seen YT videos on dual 3090.

  • @maxser7781
    @maxser7781 1 year ago

    The word is "parallelization" derived from the word "parallel". The word "paralyzation" could be used as a synonym to "paralysis", which is irrelevant in this case.

  • @sigma_z
    @sigma_z 1 year ago

    I have 6x RTX 3090. Would it be possible to join all of them together? More importantly, is there any real advantage for machine learning, or is it better to just get an RTX 4090?

    • @andreas7278
      @andreas7278 1 year ago +1

      You can't "join all 6" together like you suggest. If you just plug in all 6 you can use them in parallel for machine learning, but then they don't share any memory (i.e. there is no memory pooling). You can get a nearly linear speedup as long as the model type you are training is parallelizable and no other PC component is creating a bottleneck. You can typically expect 1.92x for two cards and 3.84x for 4 cards, so for 6 identical GPUs you will get near-linear scaling. However, the RTX 3090 does not support bridging more than two cards. What you can (and should) do is get 3x NVLink bridges, which lets you bundle pairs of them together. By doing that you can effectively use 48 GB instead of 24 GB of memory, allowing for bigger models and larger batch sizes. So you can both get a nice speedup (large batch sizes are typically much faster for transformers etc.) and play around with larger models. Some software, like video editing, often does not support NVLink, but TensorFlow and PyTorch do (which is probably what you are using).

  • @MichaelDude12345
    @MichaelDude12345 1 year ago

    This is literally the only place I could find information on this subject. I am trying to decide between starting with a 3080 and either a 4070 or 4070ti. Can anyone share with me their thoughts? Price aside I like how much less power the 4070 uses, but I think it would be a performance drop. Either way I know I need the 12gb of vram for what I want to do. The 4070ti seems like it would make up the difference in the performance that the 4070 lacks, but I really like the price-point of the 3080/4070 range. My options are to get one of those and maybe eventually save up to add another card, or go for a cheaper range and get 2 cards for the data parallelization benefits. I really wasn't sure how much data parallelization would be helpful for me but it seems like it would just be a nice bonus, so I am now leaning more towards just starting with one of the cards I listed. Anyone with more knowledge than me on the topic, could you weigh in please? I could really use some pointers.

    • @Mr.AmeliasDad
      @Mr.AmeliasDad 1 year ago

      Hey man, I'm currently running a 3080. I know you said pricing aside, but the 3090 has come down to the same price as the 4070s, so I would strongly consider that. I have the 10GB model and would kill for the extra VRAM. Creating a convolutional neural network, I ran out of VRAM pretty fast when trying to expand my model. So I either had to split my model among different GPUs or go with a smaller model. That's why you want to try for more VRAM on a single GPU. That was also on a dataset with 510 classes for classification, which isn't the easiest. I recommend spending what you would on a 4070 or 4070 Ti and getting a used 3090 for the VRAM. Barring that, I would consider trying to get a used 3080 12GB and saving up for a second.

  • @BrianAnother
    @BrianAnother 2 years ago +1

    Parallelization

  • @Edward-un2ej
    @Edward-un2ej 1 year ago

    I have had two 3090s for almost two years. When I train with both cards together, one of them drops about 30% due to cooling.

  • @yosefali7729
    @yosefali7729 1 year ago

    Does using two 3090s with NVLink improve single-precision processing?

    • @HeatonResearch
      @HeatonResearch 1 year ago

      Yes, I had pretty good luck with NVLink; more here: ruclips.net/video/hBKcL8fNZ18/видео.html

  • @Lorphos
    @Lorphos 1 year ago

    In the video description you wrote "data Paralyzation" instead of "Data parallelization"

  • @KW-jj9uy
    @KW-jj9uy 8 months ago

    Yes, the Dual GPUs paralyze the data really well. stuns them for over 10 seconds

  • @danielklaffmo4506
    @danielklaffmo4506 2 years ago +3

    Jeff, thank you for making these videos. I think you are the right kind of YouTuber: you look at the practical rather than the overly theoretical. But I wish I could talk more with you because I have ideas I'd like to share (but under contract, of course). I have kinda maybe done it, and yeah, I kinda need a lot of ML engineers and personalities to gather up to make an event and annual meeting... ehmmm, please let's talk further.

  • @dmoneyballa
    @dmoneyballa 1 year ago

    I'd love to see NVIDIA compared to AMD now that ROCm is working with all of the 6000 and 7000 series.

  • @AOTanoos22
    @AOTanoos22 1 year ago +1

    Why can't you combine the memory of the 3090s to 48 GB when using NVLink and have a larger batch size? I thought this is what NVLink was made for: combining both VRAMs into a unified memory pool, in this case 48 GB. Correct me if I'm wrong.

    • @andreas7278
      @andreas7278 1 year ago

      That's exactly what NVLink is for; this is correct.

    • @clee5653
      @clee5653 1 year ago

      @@andreas7278 I'm still confused. Does that mean NVLink provides a 48 GB unified VRAM, but it's not a drop-in replacement and we still need to write some acrobatic code to run models larger than the VRAM of a single card?

    • @andreas7278
      @andreas7278 1 year ago

      It is indeed a drop-in replacement, if you want to call it that, i.e. 2x RTX 3090 (same goes for 2x NVIDIA Titan RTX from the previous generation) connected via NVLink indeed provide you with one unified 48 GB VRAM memory pool, which allows you to train larger models and use larger batch sizes. As long as the library you are using supports unified memory you don't need to do any additional trickery or coding, e.g. PyTorch or TensorFlow will handle this automatically if you use multi-GPU mode, so no further coding is needed. However, other math libraries such as NumPy won't make use of memory pooling. For modern deep learning this is sufficient though, since most people only need the high VRAM amounts for deep learning. This is what made these dual cards so popular with machine learning researchers. A lot of scientific ML papers have been using one of these two setups (with the exception of the big players with their gigantic server farms, like OpenAI, DeepMind, Google Research etc.). It was a very economic way to get nearly twice the performance of the corresponding 48 GB Quadro card (2 cards mostly end up at about 1.92x the performance of a single one in PyTorch; taking into consideration that Quadro cards with their ECC memory are usually a little bit slower, you end up at roughly twice the throughput) at the same memory size for an extremely competitive price.
      Now we finally have the RTX 4090, which pushes linear algebra calculations further, at a larger generational jump than ever before. But the reason the generational jump is higher is that they cut out the NVLink memory controller and used that space for more CUDA units. This means that the RTX 4090 has a larger generational jump over the RTX 3090 than the RTX 3090 had over the Titan RTX, at a very competitive price. Also, it means that the RTX 4090, in comparison to the RTX 4070 and RTX 4080, delivers exceptional value for money (just look at the total cost of proper water cooling, energy consumption, and ML throughput for an RTX 4090 compared to an RTX 4080: it's not just much faster, it's a better deal even though it's the high-end card). But if you work with any type of transformer model, which are very common right now, 24 GB is a very low ceiling. Often you may only choose the small models, and then in combination with ridiculously small batch sizes (not just making training slower but also changing the final network results, due to maximum likelihood estimation being applied to too few samples per epoch). More reasonable SOTA models require 50-60 GB upwards, and 48 GB of VRAM gives you much better options. There are crazy models out there, from the likes of OpenAI, which literally need hundreds of GB of VRAM, but well... you can't have everything, and you would only analyse or downstream-train them anyway. If the RTX 4090 allowed NVLink we could get a reasonably priced 48 GB setup, but as it stands, you need to buy the RTX 6000 Ada Lovelace, which costs a lot more, and you will also only be able to leverage your single-card throughput. Furthermore, going to 96 GB will be impossible with Quadro cards now, since these also no longer allow memory pooling via NVLink. So you will have to get Tesla cards, which are a whole price tier higher.
      Basically, this new generation is a disappointment for ML researchers if we take reasonable setups into consideration. Other than that, the new generation is pretty amazing.

    • @AOTanoos22
      @AOTanoos22 1 year ago

      @@andreas7278 thank you for this detailed explanation, very appreciated !
      I'm extremely disappointed that the Ada Lovelace 40-series cards have no NVLink anymore, not even the top-end RTX 6000 (Ada). Surely anyone who needs more than 48 GB will go with a last-gen RTX A6000 setup. Maybe that's another one of Nvidia's ways to get rid of Ampere oversupply? What really surprises me is that NVLink is supposedly removed from Ada Lovelace cards at the silicon design level... yet the new Nvidia L40 datacenter card, which has an Ada Lovelace chip, does have NVLink according to their website. I guess that makes it the "cheapest" card for ML with a >48 GB requirement.

    • @clee5653
      @clee5653 1 year ago

      @@andreas7278 You're awesome, man. Just to be specific: to train large models on NVLinked 2x 3090, all I have to do is enable DDP in PyTorch, no need for any model-parallelization code, right? Looks like Nvidia is not going to make any relatively cheap card with more than 48 GB of VRAM, so I'm definitely considering picking up another 3090. Having done two research projects on BERT-scale models, I'm fed up with not being able to lay my hands on SOTA mid-size models. My guess is they might ramp up next-gen 5090 cards to 32 GB, but that is not going to bridge the gap with demand anyway.

  • @mmehdig
    @mmehdig 1 year ago

    Data Parallelization

  • @TimGtmf
    @TimGtmf 1 year ago

    I have a question: can I run a 3090 Strix and a 3090 Zotac together? And what is the difference between running the same brand versus different brands of GPUs? Thank you!

  • @infinitelylarge
    @infinitelylarge 1 year ago +1

    I think you mean "parallelization", not "paralyzation". "Parallelization" is the process of making things to work in parallel. "Paralyzation" is the process of becoming paralyzed.

  • @zyxwvutsrqponmlkh
    @zyxwvutsrqponmlkh 2 years ago +1

    Run it on an RPI.

  • @synaestesia-bg3ew
    @synaestesia-bg3ew 1 year ago

    Your channel is for the rich kids only, you are the Mac Apple channel

  • @__--JY-Moe--__
    @__--JY-Moe--__ 2 years ago

    So you would need a software controller, like some of the software from Intel? 🥩🦖 Good luck!! I hope the NVIDIA 4000 series will be out soon! And AMD says it will make its 7000 series beat NVIDIA in scientific computing!! Some day, I guess!

    • @HeatonResearch
      @HeatonResearch 2 years ago

      AMD needs more cloud support; the day I can start to get AMD AWS instances, I will start to consider them. I like my local setup to mirror what I use in the cloud. I am excited about the 4000 series as well; all the rumor mills I follow suggest the 4000 series will be out this time next year.

  • @sigma_z
    @sigma_z 1 year ago

    Can we do more than 2 GPUs? Like 4 RTX 3090s?.😎😍🙈

    • @danielwit5708
      @danielwit5708 1 year ago

      yes

    • @sigma_z
      @sigma_z 1 year ago

      @@danielwit5708 how? NVLink appears to only connect 2x RTX 3090s, not 4. I have 6x RTX 3090s 😛

    • @danielwit5708
      @danielwit5708 1 year ago

      @@sigma_z your question didn't specify that you were asking about the NVLink bridge lol, I thought you were just asking about more than 2 cards 😅

  • @sergeysosnovski162
    @sergeysosnovski162 8 months ago

    1:43 - parallelization ...

  • @marvelousbless9128
    @marvelousbless9128 11 months ago

    RTX A4500 dual GPUs

  • @pramilapatil8957
    @pramilapatil8957 11 months ago

    are u the gamer grandpa?

  • @jonabirdd
    @jonabirdd 1 year ago

    Data paralyzation? Really?
    FYI, it's parallelisation.

  • @dhaneshr
    @dhaneshr 9 months ago +2

    it's "parallelization", not "paralyzation" 🙂

  • @ProjectPhysX
    @ProjectPhysX 1 year ago

    Sadly Nvidia killed the 2-slot consumer GPUs. You can't buy these anymore, only hilariously oversized 4-slot cards that don't fit next to each other. So that people have to buy the overpriced Quadros for dual-GPU workstations.

  • @ok6959
    @ok6959 2 years ago +2

    why is this guy so slow

    • @InnocentiusLacrimosa
      @InnocentiusLacrimosa 4 months ago

      People speak at different speeds. Often highly analytical people speak at a slower pace.