12VHPWR is just Garbage and will Remain a Problem!!

  • Published: 6 Oct 2024
  • Science

Comments • 2.7K

  • @web1bastler
    @web1bastler 9 months ago +2612

    In my engineering opinion, the 12VHPWR connector is an engineering failure. The connector used (Molex Micro-Fit 3.0) is rated for a maximum of 8.5A per pin before thermal derating. Every electrician/EE knows that cables and connectors, especially in bundles, have a thermal derating factor. The 12VHPWR connector uses 6 pins each for +12V and 0V, which puts the absolute maximum current at 51A. The PCIe 5.0 spec claims that the connector is good for a sustained 600W load. The problem is that 600W at 12V is 50A, only 1A away from the absolute maximum! And absolute maximum ratings are generally specified at 20°C, which is not what you would expect inside a PC case, especially directly on the GPU! Honestly mind-boggling.
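The headroom described in this comment can be checked with a quick back-of-the-envelope script. All constants (8.5 A per pin, 6 pins per rail, 600 W spec load) are the figures quoted above, not values verified against a Molex datasheet:

```python
# Figures as quoted in the comment above (assumptions, not datasheet-verified):
RATED_AMPS_PER_PIN = 8.5   # Molex Micro-Fit 3.0, before thermal derating
PINS_PER_RAIL = 6          # six +12V pins (and six 0V returns)
VOLTAGE = 12.0             # volts
SPEC_POWER = 600.0         # watts, PCIe 5.0 / 12VHPWR sustained rating

max_current = RATED_AMPS_PER_PIN * PINS_PER_RAIL  # 51.0 A absolute maximum
spec_current = SPEC_POWER / VOLTAGE               # 50.0 A at the full 600 W
headroom = max_current - spec_current             # only 1.0 A to spare
margin_pct = (max_current / spec_current - 1) * 100

print(f"max {max_current:.0f} A, spec load {spec_current:.0f} A, "
      f"headroom {headroom:.0f} A ({margin_pct:.0f}% margin)")
```

At the rated load the connector sits at roughly 98% of its quoted absolute maximum, which is the 1 A of headroom the comment objects to.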

    • @markmanderson
      @markmanderson 9 months ago +113

      Kind of makes you wonder who thought it was a good idea to begin with. Deserving of a Homer Simpson "D'oh". Well explained, thank you :)

    • @rawdez_
      @rawdez_ 9 months ago +110

      Also, the 16/18/20 AWG ratings are for the cable as a whole in actual use, not for a theoretical cable without strands. So I don't understand why Roman used that coefficient table and cut his PSU 6+2 pin at all.
      16 AWG is 18A at 90°C, period. So it can handle way more than what Roman calculated. Anyway, the main limit is actually the pins, not the wires.
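Taking the numbers in this reply at face value (18 A for 16 AWG at 90°C, 8.5 A per Micro-Fit 3.0 pin; both are quoted claims from the thread, not verified ratings), a short sketch shows why the pins rather than the copper set the limit:

```python
# Quoted figures from the thread (assumptions, not verified ratings):
AWG16_AMPS = 18.0   # 16 AWG wire at 90 °C insulation rating
PIN_AMPS = 8.5      # Micro-Fit 3.0 contact, before derating
CONDUCTORS = 6      # six +12V wires/pins in a 12VHPWR cable

wire_limit = AWG16_AMPS * CONDUCTORS  # 108 A the copper could carry
pin_limit = PIN_AMPS * CONDUCTORS     # 51 A the contacts allow

# The connector contacts, not the wires, are the bottleneck:
bottleneck = min(wire_limit, pin_limit)
print(f"wires: {wire_limit:.0f} A, pins: {pin_limit:.0f} A, "
      f"limit: {bottleneck:.0f} A")
```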

    • @Kalvinjj
      @Kalvinjj 9 months ago +124

      Exactly what I was thinking. I did the quick amps-per-contact math and facepalmed straight away at the stupidly high current per pin. I was going to check the datasheet, since I thought it looked like the Micro-Fit 3.0 indeed, but thanks for saving me the disappointment of checking myself. Running a connector near max like that is just ridiculous, especially a tight 12-contact bundle like that.
      Can't understand how in heck someone was paid to do this, and another one probably paid to sign it off.

    • @jamesgodfrey1322
      @jamesgodfrey1322 9 months ago +19

      I learned something today and now understand what has gone wrong. Thank you for the insight.

    • @flamixin
      @flamixin 9 months ago +68

      I don't understand how a giant company like Nvidia can make such a bad/basic mistake. I mean, it's the kind of defect that's impossible to miss during prototyping, right?

  • @DarthChrisJ
    @DarthChrisJ 9 months ago +652

    In general this is a completely ridiculous situation - pushing 600W through a cable safely, with a secure connector, is not some magic new thing that nobody ever needed to do before the 4090. This is an extremely well solved problem, but the new solution chosen by nVidia and the PCI SIG is so bad.

    • @jon4715
      @jon4715 9 months ago +57

      Temperamental connector with small gauge wires and terminals. So dumb…it’s largely just physics.

    • @1sonyzz
      @1sonyzz 9 months ago +57

      Yet most people and other YouTubers put this fault on the user, stating that it's a user problem... How come the 8-pin connector was never an issue in the first place?

    • @priitmolder6475
      @priitmolder6475 9 months ago +48

      It was evident since the first burn incident. Even before that, a simple rule-of-thumb comparison between the ATX 2.0 plug and the new one: the pins are way smaller, the construction is way more dodgy. Literally every ounce of safety margin was dropped. The 8-pin connector has literally a 3-5x safety margin based on DECADES of research and development. You just can't make a cheaper/"smarter" cable to save on manufacturing cost and expect end users to adapt overnight, or even EVER.

    • @mjc0961
      @mjc0961 9 months ago +12

      @@1sonyzz Because extensive testing revealed that it was user error... When the only way to make it catch fire is to not plug it in all the way, and it works just fine when it's installed correctly, that's user error.
      This design never needed to exist, we were fine with the old connectors, but if you want to prove that it's not user error, show examples of the connector melting despite being installed correctly.

    • @longjohn526
      @longjohn526 9 months ago +17

      Actually it is Intel (ATX power standard) and the PCI-SIG, who adopted Intel's new ATX standard. The entire reason for having standards and standards bodies is so this sh*t doesn't happen, and when it does, the blame falls on the standards bodies and NOT the manufacturers that adopted the standard. Blaming Nvidia gets you nowhere because they have no choice but to follow the standard, which is either one 12VHPWR connector or four 8-pin Molex connectors. If they use anything else, their devices are no longer compliant with the PCI-SIG standard, and those devices lose their licensing as PCIe devices and effectively can't be sold.

  • @kalmtraveler
    @kalmtraveler 9 months ago +565

    the fun aspect of owning a 4090 is having the privilege of constantly checking the cable connection to find out if your house is about to burn down or not. /s

    • @alesksander
      @alesksander 9 months ago +14

      Yay, I'm very happy for families with kids. Good job, Nvidia. /facepalm

    • @alexfleener
      @alexfleener 9 months ago +12

      I have several 4090s rendering & I'm constantly checking the power cables.

    • @mattjones4285
      @mattjones4285 9 months ago +21

      Just keep in mind they are rated for what, 30 unplug/plug cycles before they are deemed no good?

    • @666Necropsy
      @666Necropsy 9 months ago +20

      I won't buy a card with this style of connector. It's that simple.

    • @LA-MJ
      @LA-MJ 9 months ago +17

      The fun aspect of not owning a 4090 is using your money wisely.

  • @apollo_0wl
    @apollo_0wl 9 months ago +225

    I'm an electrical engineer by trade and can say that in my experience, the hardest part of a design to get right is the connectors. They're expensive relative to other components and notoriously unreliable unless you go ultra-premium. The problem with a consumer electronics part is that companies are *always* going to cost-down as much as possible, which was easy with the old Molex 8-pin connector, since it was comically oversized for the current per contact. That meant even a bad connector that significantly eroded the safety margin would still be okay, and could even be pushed a little beyond. This tiny Micro-Fit connector is great from a size perspective, but now we're putting more current (9A!) through each pin, and each pin is smaller. The thermal area available to dissipate heat from the connector is smaller, and it's always going to be harder to keep cool. It wouldn't surprise me to see reviewers having significant issues with these connectors going forward, because the realistic lifespan of this connector is going to be 20 mating cycles before degradation.
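The margin collapse this comment describes can be illustrated numerically. The per-pin ratings (13 A for the old Mini-Fit contacts, 8.5 A for Micro-Fit 3.0) and loads are the figures circulating in this thread, used here as assumptions; the comment's 9 A presumably includes excursions above 600 W:

```python
# Per-pin loading, old 8-pin PCIe vs 12VHPWR (figures are the
# thread's quoted ratings, treated here as assumptions).
def amps_per_pin(watts: float, volts: float, pins: int) -> float:
    """Current each 12 V contact carries at a given sustained load."""
    return watts / volts / pins

old_load = amps_per_pin(150, 12, 3)  # 8-pin PCIe: ~4.2 A per contact
new_load = amps_per_pin(600, 12, 6)  # 12VHPWR:   ~8.3 A per contact

old_margin = 13.0 / old_load  # ~3.1x headroom on the old connector
new_margin = 8.5 / new_load   # ~1.02x headroom on 12VHPWR

print(f"8-pin: {old_load:.1f} A/pin ({old_margin:.1f}x margin), "
      f"12VHPWR: {new_load:.1f} A/pin ({new_margin:.2f}x margin)")
```

The 3-5x margin mentioned elsewhere in the thread shows up here as ~3.1x for the old connector, versus essentially no margin at all for 12VHPWR.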

    • @tomaszszupryczynski5453
      @tomaszszupryczynski5453 9 months ago +3

      As someone living in the UK, I must say that Poland's round 240V connectors are better than the UK's square sockets; a round pin gives a better contact surface, and in the UK many sockets are loose. I bet it's the same with American 110V plugs, since they're flat. And it's the same with Nvidia: the 8-pin's pins are round and bigger, yet they're "outdated" and certified for less current than the new socket.

    • @zaxmaxlax
      @zaxmaxlax 9 months ago +23

      Just look at the 12VHPWR and imagine 50 amps through it, there's no fucken way it can handle it. Their engineers must have been on crack when they designed it.

    • @zaxmaxlax
      @zaxmaxlax 9 months ago

      @@tomaszszupryczynski5453 UK sockets are arguably better than EU Schuko plugs; they have built-in fuses and the leads are insulated up until the very tip.

    • @Foxhood
      @Foxhood 9 months ago +3

      @@zaxmaxlax Depends on how you look at it.
      Safety-wise the UK does win by going nuts on fuses and insulation,
      but mechanically the Schuko is more resistant to wear, as its curved contacts are better at retaining their shape compared to the flat springs of most UK sockets.

    • @Mr_Meowingtons
      @Mr_Meowingtons 9 months ago +8

      @@zaxmaxlax 50 amps is nuts. I have ham radios and a big 50-amp supply, and you need to run some good connectors or you will burn it up!

  • @primodragoneitaliano
    @primodragoneitaliano 9 months ago +120

    Something that's pretty clear to me is that this new standard was rushed out with arbitrary constraints and never tested in real-world cases. It really looks like PCI-SIG just developed it, tested it in open-bench setups, saw that the card was properly powered, went "Eh, good enough" and called it a day. Clearly there were no torture tests done, or tests that aim to simulate bad/improper connections to ensure that even if a bad connection were to happen it wouldn't cause problems. The tiny pins also make me question the scope of this new standard, because it seems born more out of aesthetic/size concerns than out of actual reliability issues with the existing standard. PCI-SIG could've gone with bigger connectors and ensured that the rigidity of the connector itself would prevent things from wiggling around, or chosen a different wire gauge, but instead... didn't.
    All in all this new standard is a dumpster fire and should be scrapped entirely.

    • @naptastic
      @naptastic 9 months ago +13

      PCI-SIG doesn't even go that far. The PCIe 6.0 spec was finalized almost 2 years ago, and no one has actually demonstrated working PCIe 6.0 hardware.

    • @primodragoneitaliano
      @primodragoneitaliano 9 months ago +4

      @@naptastic Ooof, thanks for the explanation.

    • @atta1798
      @atta1798 9 months ago +2

      Sure, and some manager with no technical background or experience signed it off, overriding the engineers.

    • @primodragoneitaliano
      @primodragoneitaliano 9 months ago

      @@atta1798 Yeah, not unlikely that this happened as well. A tale as old as time, essentially...

    • @devilzuser0050
      @devilzuser0050 9 months ago +3

      "It seems to be born more out of aesthetic/size concerns." And then manufacturers put the connector in the center of the VGA. LOL.

  • @wewillrockyou1986
    @wewillrockyou1986 9 months ago +612

    Gonna be honest, modern GPUs being stupidly wide but still refusing to move the power connectors away from the outer edge of the card is a huge problem that also needs to be addressed.
    The connector itself is too small and doesn't provide enough stability to the crimps; they should just go with the 8-pin EPS design, really.

    • @Ramog1000
      @Ramog1000 9 months ago +15

      As far as I know it's not about stability of the crimps, but that it can wiggle around even when plugged in with a click of the latch.
      Wiggling of course reduces the contact area on the connectors, and that causes unwanted heat. What I don't get is why they didn't make a 1.1 version of the connector with tightened tolerances.

    • @mjc0961
      @mjc0961 9 months ago +18

      I think the problem with moving the power connectors is that they introduced this power connector at the same time they decided to make the end of the card heatsink-only, for air to blow through. So now there's no PCB at the end of the card to put the power connectors on; they have to stay on the outer edge of the card, close to the case side panel.
      If I recall correctly, EVGA was going to extend the power connectors to the side in their 4090 prototypes.

    • @TheKazragore
      @TheKazragore 9 months ago +34

      Or make the 12-pin connector have pins the same size as the 8-pin's for a more robust interconnect.

    • @deivytrajan
      @deivytrajan 9 months ago +4

      Easy fix: cables with a 90-degree 12VHPWR end by default. Seasonic has that, but Corsair's cable is too thick and doesn't. lol

    • @Ramog1000
      @Ramog1000 9 months ago +5

      @@TheKazragore I mean, the pin size only magnifies other problems; the pin size is technically enough, it's the wiggle room that really causes the issues.
      They could have made the plug more robust by using two latches on the sides of the plug (instead of one in the middle). Or they could have kept one latch and used tighter tolerances.

  • @FrostedWolf323
    @FrostedWolf323 9 months ago +786

    It still blows my mind how this cable and its issues were not caught during manufacturing and testing. It blows my actual mind lmao.

    • @weeooh1
      @weeooh1 9 months ago +78

      It was likely tested outside of a case with straight cables.

    • @rawdez_
      @rawdez_ 9 months ago +103

      They don't care. The cost to manufacture a 4090 is like 200-300 bucks now. They sell them for 1600-2000 bucks. Let 'em burn = more dead cards = more money.
      A 4090 die at 608mm² costs 300 bucks MAX to make, according to wafer calculators and 2-year-old TSMC prices per wafer; it's more like 200 bucks now. The 300-buck number from the calculators is too high because it's based on 2-year-old prices, and ngreedia cut TSMC orders = they're getting better prices so they don't cut even more. Plus, on the same wafer where the big dies don't fit you can (theoretically) also make smaller dies like a 4080, a 4070 Ti etc., which ngreedia effectively gets "for free" if you account for them, unlike the wafer calculators do = that lowers the cost of a wafer significantly.
      $1600-$2000 for a 4090 die that costs less than 300 bucks (more likely $200 now)? Just lol. I'd understand if they wanted 800 bucks for the 4090, max, and even that would still be at least 200 bucks overpriced.
      But fanboys are buying overpriced-af GPUs anyway, so ngreedia keeps selling.

    • @iamdarkyoshi
      @iamdarkyoshi 9 months ago +40

      A room full of engineers will always be less capable of inventing problems than the consumer. Not everything gets thought of.

    • @thesolidsnek8096
      @thesolidsnek8096 9 months ago +29

      @@rawdez_ do you have any idea how a warranty works?

    • @stevenwest1494
      @stevenwest1494 9 months ago +57

      Coming from the company that made the 4060 perform worse than the card it replaced? This is completely in pattern for Nvidia.

  • @paulc0102
    @paulc0102 9 months ago +206

    As I've said elsewhere: at this point everything has been RMA'd except the device that's actually causing the problem. This is what happens when a company believes its position is unassailable :(

    • @stevenwest1494
      @stevenwest1494 9 months ago +32

      Exactly, Nvidia is behaving like Apple. How they're getting away with this is a lot like how Apple got around the sales ban for using stolen tech in their watches. They're given too much leniency and understanding, as if they're new to this and the risks are misunderstood.

    • @samson7294
      @samson7294 9 months ago +28

      I bought my 4090 in April and did sooooo much research to make sure I bought the best cable for it. I thank God I bought the Corsair 12VHPWR. To this day I can't find a single Reddit or Twitter post of that cable being involved in a connector melting.
      However, that level of meticulous research shouldn't be a requirement to make sure your $1,600+ card does not go up in smoke! I'm tired of people saying it's solely user error. It is a design flaw, full stop.

    • @korinogaro
      @korinogaro 9 months ago

      They lied to shareholders, they lied to stores, they lied to customers about selling cards directly to cryptominers, and they paid $5 million for it while they made billions selling to miners. Guess why they don't care.

    • @alesksander
      @alesksander 9 months ago +36

      @@samson7294 And yet you still buy it. Haha, Nvidia is laughing all the way to the bank. That's the truth.

    • @BrunodeSouzaLino
      @BrunodeSouzaLino 9 months ago +19

      Or when users bitch and moan about the company's products, then proceed to buy them anyway.

  • @xbiker321
    @xbiker321 9 months ago +390

    I've ridden this rollercoaster with a 4080 & 4090. I've used different brands of 12VHPWR cables as well as both v1.0 & v1.1 CableMod adapters. You nailed it, the issue isn't CableMod... it's the 12VHPWR design.

    • @Djinnerator
      @Djinnerator 9 months ago +20

      It's crazy how you name exactly the adapter that's in the middle (literally and figuratively) of the melting issue, but then say it's actually not that, but the connector...
      The 12VHPWR connector was introduced with RTX 30. How many issues with the connector have been reported on RTX 30? 0, right? I wonder why?
      Before you try to cite power draw: the 3090 Ti has the exact same power draw as the 4090, and still how many reported issues? 0. The 3080 draws more power than the 4080. Both use 12VHPWR. How many reported issues with the 3080? You guessed it: 0.
      How many CableMod 90-degree adapters are there for RTX 30? 0.
      Why is it that 12VHPWR PSU cables don't have this issue? How is it that Corsair's GPU Power Bridge, a (180-degree) adapter, does not have this issue?
      If the issue was 12VHPWR's design, we'd see issues with RTX 30 and with all the direct-to-PSU setups. It's absolutely crazy how we, as clear as day, see that these issues are isolated to CableMod, yet put the fault on the connector itself. This is the most bizarre stance I've seen the gaming community take in regard to a product problem. All of the evidence points towards specific adapters. There's no evidence that points toward the 12VHPWR connector/design...

    • @young-j731
      @young-j731 9 months ago +35

      @@Djinnerator Are you blind, or do you choose not to see?
      You don't know anything, yet you open your mouth to talk. The problem is INSIDE the cable itself.
      "If it worked on the 3090, it should work on the 4090." Then WHY ARE WE HAVING BURNING ISSUES IN THE FIRST PLACE?
      People had BURNING issues with the cable that was shipped with the card (the cable made by Nvidia).
      Burning issues with the FOUNDERS EDITION ITSELF.
      (Do they even check their own product, to see the limits and the problems that can happen?)
      People had burn issues with the cable even ON THE POWER SUPPLY SIDE, made by MSI, Corsair. Many companies now know that the issue EXISTS, like MSI with its colored cable, even though Nvidia said it was only 10 cases at best (they lied).
      Even a serious power supply brand like Seasonic, which is 18 years older than Nvidia, said the connector has to be fixed.
      Even INTEL said the connector needs to be redesigned, and how it should be.
      Did you take the time to find this information before writing your comment?
      Why did they have to create a new connector, the 12V-2x6, if there were no issues?
      Then CableMod tried to help with their adapter, but it was worse.
      This connector has to stay plugged in straight 24/7, and even if there's no bend and even if it's FULLY PLUGGED IN (don't listen to Gamers Nexus), the issues can still happen (like der8auer said).
      IT'S SIMPLY NOT GUARANTEED THAT THE ISSUE WILL 100% NOT HAPPEN.
      I hope this comment will help.

    • @HelipOfficial
      @HelipOfficial 9 months ago +26

      @@Djinnerator The level of ignorance you have is not even surprising. If you really want to stick with this connector, the solution would be to change the power rail to 24 volts or 48 volts.

    • @mycelia_ow
      @mycelia_ow 9 months ago +18

      @@Djinnerator Only Founders Edition cards used this connector; no AIB 30-series card used it, they used normal connectors.

    • @Djinnerator
      @Djinnerator 9 months ago +4

      @@mycelia_ow It doesn't matter; there would've been issues with the FE cards since they use the connector. If the 12VHPWR connector were the issue, there would be reported problems with those. There have been 0 reported issues with RTX 30, especially the 3090 Ti, which has the exact same power draw as the 4090 and the exact same connector.
      The issue is the adapters, not the connector. Until someone can explain how RTX 30 was immune to this issue, it makes no logical sense to say the issue is the connector.

  • @bigbill1467
    @bigbill1467 9 months ago +372

    I'm surprised there isn't a class action lawsuit going against them for this yet.

    • @Triro
      @Triro 9 months ago +21

      Against... who? Can't go after Nvidia, they're just using a standard.
      Can't go after the people who made the standard.
      And you can't go after CableMod since they issued a recall. That would make it very difficult for anyone to present a good case, since they have addressed the issue. Not to mention the amount of money a class action lawsuit would take.

    • @burrfoottopknot
      @burrfoottopknot 9 months ago +6

      I would guess the people who ratified the cable connector are the ones you would go after.

    • @matejsojka6683
      @matejsojka6683 9 months ago +59

      @@Triro it should go against nvidia. Nvidia is the one who forced vendors to use this connector.

    • @SianaGearz
      @SianaGearz 9 months ago +4

      @@matejsojka6683 It's of no use. If the PCI SIG says it'll do 55A, then Nvidia has no cause to question them!

    • @RamonInNZ
      @RamonInNZ 9 months ago

      @@Triro Nvidia wrote the standard for the plug/socket, then got Molex to build it for them.

  • @grasstreefarmer
    @grasstreefarmer 9 months ago +235

    I've been trying to explain this for a long time now and always get told "but the old one was only rated for 150W". I have used Molex 4.2mm (ATX) and 3mm (similar to 12VHPWR) connectors in 3D printers and CNC machines in applications far above 150W for years. The bigger connectors are much more reliable and robust and can handle higher current. It was always insane that a higher-power connector would use smaller pins and therefore smaller-diameter wire.

    • @Eondragon
      @Eondragon 9 months ago +31

      This is a fundamental principle in electricity; it's not for nothing that there are standards on cable diameter according to amperage and voltage. But apparently Intel, since they're the ones who launched the ATX 3.0 standard, are above the fundamental rules of electricity. And Nvidia too, for not having done what was necessary to correct it, claiming instead that it was the user's fault.

    • @kiararaine3636
      @kiararaine3636 9 months ago +15

      The R9 295X2, a 500W card, completely ignored the 8-pin connector specs; the reference model only had two 8-pin connectors, and it worked just fine.

    • @Arzack711
      @Arzack711 9 months ago +10

      @@Eondragon Even Intel themselves didn't use the standard on their own GPU.

    • @raulitrump460
      @raulitrump460 9 months ago +6

      The 8-pin can handle 340W or more; the 150W limit is just the old ATX standard.

    • @Kalvinjj
      @Kalvinjj 9 months ago +3

      @@kiararaine3636 Yeah, people have survived being shot in the head as well.
      Doesn't mean much.
      EDIT: also, the 4.2mm Mini-Fit connectors used on the 8-pin PCIe connector allow 13A per contact, so on the connectors alone we get 6 x 13 = 78A across the two connectors, not to mention the card can take another 75W from the PCIe slot. Even assuming it doesn't take a single watt from the slot, it's still only ~53% of the absolute maximum rating of the connectors.
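The ~53% figure in the edit above can be reproduced directly, under the same assumptions (13 A Mini-Fit contacts, three 12 V contacts per 8-pin plug, the full 500 W drawn from the two plugs):

```python
# Re-running the numbers from the edit above (quoted ratings, not verified):
contacts = 3 * 2               # three 12 V contacts per plug, two plugs
abs_max = contacts * 13.0      # 78 A absolute maximum across both plugs
load = 500.0 / 12.0            # ~41.7 A if the PCIe slot supplies nothing
utilization = load / abs_max   # fraction of the absolute maximum in use

print(f"load {load:.1f} A of {abs_max:.0f} A max "
      f"= {utilization:.0%} of absolute rating")
```

Compare this ~53% worst-case utilization on the old connector with the ~98% a 600 W 12VHPWR load implies on six 8.5 A Micro-Fit pins.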

  • @samson7294
    @samson7294 9 months ago +63

    I bought my 4090 in April and did sooooo much research to make sure I bought the best cable for it. I thank God I bought the Corsair 12VHPWR. To this day I can't find a single Reddit or Twitter post of that cable being involved in a connector melting.
    However, that level of meticulous research shouldn't be a requirement to make sure your $1,600+ card does not go up in smoke! I'm tired of people saying it's solely user error. It is a design flaw, full stop.

    • @alexturnbackthearmy1907
      @alexturnbackthearmy1907 9 months ago +5

      It's like the L85A1: it needs a complete rework to start working the way it should have from the beginning.

    • @bobbygetsbanned6049
      @bobbygetsbanned6049 9 months ago +6

      Yup, and if it's that easy for the cable to be installed wrong, because the installation has to be 100% perfect, it's still a design flaw. You can't put products on the market for the average consumer with no safety margin; it's negligent to assume every consumer will be perfect. Every 4090 owner should be joining a class action lawsuit against NVIDIA and contacting regulators to force a recall.

    • @dreamcat4
      @dreamcat4 9 months ago +1

      Absolutely, 100% agree my friend. If only the wider industry and its partners/working groups weren't so entirely dumb. Unfortunately, Corsair is really the exception rather than the industry norm here.

    • @GrimpakTheMook
      @GrimpakTheMook 9 months ago +7

      I wouldn't say it isn't user error. However, it is a design-induced user error, which is even worse than a simple design error.

    • @ChatGTA345
      @ChatGTA345 9 months ago +1

      Same reason here, but I just got ModDIY cables (not adapters) going from 12VHPWR to 3x 8-pin straight to the PSU, even though the PSU already has a native 12VHPWR connector. Risk is reduced to just one 12VHPWR connection. If I could somehow replace the connector on the card itself, I would too.

  • @ChannelSho
    @ChannelSho 9 months ago +191

    The biggest tragedy I'm finding in all of this is that nobody is stepping up to take the blame and explain what happened, nor is any government body making that call. I'm surprised the EU isn't doing something to the effect of slapping NVIDIA, or anyone involved in PCI-SIG, with penalties until they explain why there are literal fires going on in people's computers.

    • @fist003
      @fist003 9 months ago +25

      Gamers Nexus killed the momentum there, unfortunately.

    • @bobbygetsbanned6049
      @bobbygetsbanned6049 9 months ago +7

      @@fist003 How did they do that? This is an obvious fire hazard; regulators should be forcing a recall regardless of what GN says or does.

    • @fist003
      @fist003 9 months ago +36

      @@bobbygetsbanned6049 Based on their video at that point in time, the findings mostly pointed towards user error in not plugging it all the way in. This sort of gave a free pass to NVIDIA and the AIBs.
      Also, the failure is not as "spectacular" as the exploding Gigabyte GPUs; I guess the authorities didn't see it as a fire hazard.

    • @sammiller6631
      @sammiller6631 9 months ago +25

      @@fist003 And this video, with the not-fully-plugged-in warning light on the card, shows how easily "not plugging it in all the way" can happen. Even if you don't touch the wire, the wire moves on its own as it heats up and cools down.

    • @GameBacardi
      @GameBacardi 9 months ago +3

      What the EU will ban for consumers in the future is devices that draw more than some set wattage, for example 500W.
      They'll force people to use mobile devices only.

  • @THEpicND
    @THEpicND 9 months ago +143

    The moment you cautioned your viewers about buying your own product was when I liked the video. It would have been real easy for you to just recommend your product and rake in the cash from it, but you stayed true to reality despite it not being in your monetary best interest. Thanks for the great content :)

    • @shadowrealms2676
      @shadowrealms2676 9 months ago +4

      That's gold honestly

    • @JonnyJKF
      @JonnyJKF 9 months ago +7

      They're not raking in cash if they keep paying out of pocket for GPUs that melt due to a design failure beyond their control lol.

    • @THEpicND
      @THEpicND 9 months ago

      @@JonnyJKF Again, that is something only people with integrity will do, so the point still stands. He could easily just not offer that.

    • @blazingmatty123
      @blazingmatty123 9 months ago +1

      Yeah, that's the litmus test for a good manufacturer.

    • @nemtudom5074
      @nemtudom5074 9 months ago +2

      This is what good business practices look like, people.

  • @JTFish
    @JTFish 9 months ago +108

    4090s and these cables should be recalled. They're literally a threat not just to the user but to everyone in the same building. The cables can't safely supply the power that the 4090 draws. At this point it's not a matter of if but when a 4090 will kill someone.

    • @sammiller6631
      @sammiller6631 9 months ago

      Nvidia fans don't care if they kill someone, as long as they can overclock their 4090 for a big number.

    • @Djinnerator
      @Djinnerator 7 months ago

      Lol, yes it can? The 4090 is a 450W card. 12VHPWR is rated by Intel at 600W; it's 660W using conservative numbers. The 3090 Ti uses 12VHPWR, is also 450W, and has 0 issues. The issue is people using cheaply made adapters. The connector has _never_ been the issue. People are just running off with the idea that the connector is the issue without any evidence pointing to the connector actually being at fault, yet _all_ of the evidence points to two specific adapters that have been present in every melting case. It has absolutely nothing to do with the GPU. It's strictly people using one of two adapters, but people refuse to accept that and find anything to feed their confirmation bias against the connector.

    • @firghteningtruth7173
      @firghteningtruth7173 6 months ago

      @@Djinnerator Interesting. Is that why they already replaced it with the 12V-2x6 connector? 😂
      There is a video where they test and explain the failure of that connector. The leads were too short and made of shoddy material. 😂
      "12V-2×6 is almost identical to the 12VHPWR cable, with a few minor differences in the physical appearance of the connector. The sensing pins are 0.1mm shorter and the conducting terminals are 0.15mm longer."

  • @GreasyFox
    @GreasyFox 9 months ago +214

    A clear example of "if it ain't broke, don't fix it". The old format works just fine.

    • @lolzlolz69
      @lolzlolz69 9 months ago +9

      That's not how technology, or advances in technology, work. Don't get me wrong, Nvidia should have done better and needs to sort this out, but you can't blame a tech company for trying to advance the tech. Or would you prefer to keep decades-old standards?

    • @potatoes5829
      @potatoes5829 9 months ago +72

      @@lolzlolz69 Yes, I would prefer decades-old standards. The wheel has been around for millennia, yet I don't see anyone trying to replace it?

    • @kolle128
      @kolle128 9 months ago +40

      @@lolzlolz69 This is not really an advancement in technology. Nvidia has a very Nvidia problem: the 3x 8-pin takes up too much space on the PCB, so they would have to spend an additional 40 cents on the PCB. To solve this they engineered a new type of connector that has proven to be extremely unreliable and problematic in several ways. They must either switch to a new connector again, or go back to the old standard. I suggest going back to the old standard, because they will clearly need time to come up with a new design, and I would not like them to rush it after what just happened.

    • @lolzlolz69
      @lolzlolz69 9 months ago +4

      @@potatoes5829 Never heard of alloy or carbon fiber wheels then? I guess you don't understand how much technology there is in modern wheels and rubber, even at the general consumer level?
      You still running a 486 and USB 1.0?

    • @lolzlolz69
      @lolzlolz69 9 months ago +1

      @@potatoes5829 Also, you say you prefer the old standard, yet gloss over the fact that that standard has itself moved on from the previous standard. PSUs were not always modular, were they?

  • @NoName-st6zc
    @NoName-st6zc 9 months ago +168

    On top of all this, always keep in mind what they're charging for their GPUs. For those prices they should at least have included a fire extinguisher.

    • @3aitnmedia910
      @3aitnmedia910 9 months ago +1

      So true 😂

    • @Triro
      @Triro 9 months ago +1

      It's not even the GPU manufacturing, nor the cable, anymore. This was a fail on CableMod's part. They royally screwed this up. And of course we all blame it on everyone's current punching bag, the 12VHPWR, or Nvidia.
      It's not even either of their faults. The 12VHPWR spec fixed its cable with a better design, and Nvidia is just using it because it can deliver the power its GPU needs without a mess of cables.
      I dislike Nvidia just as much as the next guy, and personally rock a 7800XT. But we can't blame them for another company's actions.

    • @myta1837
      @myta1837 9 months ago +10

      ​@@Trirobot

    • @Triro
      @Triro 9 months ago

      @@myta1837 Uhm. Nothing says sad like liking your own comment 4 times.

  • @tomtomkowski7653
    @tomtomkowski7653 9 months ago +336

    This design is as broken as can be.
    We need a new standard ASAP because this will always be a problem.
    The people behind this project should be fired.

    • @XantheFIN
      @XantheFIN 9 months ago +49

      We need to go back to the old PCIe power connectors, which didn't have these problems.

    • @theyoungjawn
      @theyoungjawn 9 months ago +6

      @@XantheFIN PCIe connectors have melted before too. Not as widespread a failure as this, but it definitely happens.

    • @andreiga76
      @andreiga76 9 months ago +3

      Good luck with that.
      All new ATX 3.0 power supplies have this connector for graphics (even two on 1600W versions), and all power supply vendors have updated their models to it; even top models from Seasonic (like the PRIME TX) have moved to this standard.
      The most we can get will be some changes to graphics cards: better connectors, placed in better places, etc.

    • @deivytrajan
      @deivytrajan 9 months ago +1

      @@andreiga76 There is an updated 12VHPWR connector that stops working if it's not fully plugged in, which prevents meltdowns.

    • @tudalex
      @tudalex 9 months ago +28

      Server cards use the 8-pin EPS, the same connector you use for supplemental CPU power. If it's good enough for servers, I think it's good enough for desktops.

  • @VRGamingTherapy
    @VRGamingTherapy 9 months ago +47

    Nvidia tried to create a solution to a problem that didn't exist.
    So far so good on my ROG 4090. I thought about getting "fancy" cables, but I'll just stick to the single 16-pin to 16-pin cable that came with my PSU.
    If it ain't broke, don't fix it!

    • @arclyte1859
      @arclyte1859 9 months ago +1

      Don’t go for any cables other than what came with your PSU. I’ve spent the better part of a year chasing down instabilities in my new build, swapping boards, drives, RMAs, etc. When I finally got rid of my fancy cables and extensions, all my issues went away.

    • @formulaic78
      @formulaic78 8 months ago +1

      @@arclyte1859 So is it better to use the 12V 40-series cable provided with my Thermaltake 1000W PSU, or two PCIe 8-pins plus the adapter that came with my 4080? Is this even an issue for the less powerful 4080?

  • @blasg6242
    @blasg6242 9 months ago +32

    Your explanation is very informative!
    At university I studied the regulations for sizing the wiring of domestic electrical installations, which could be considered overkill because the safety margin is really big, even in the most permissive standard. Since this new 12V high power connector appeared, I've been wondering how such a small connector can carry so much power and still be considered safe, and now you've given the answer: the safety margin of this connector is awful.
    Thanks!
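
A quick sanity check of that safety margin in a short Python sketch. The 8.5 A/pin Micro-Fit figure quoted elsewhere in the thread and the standard 150 W / 600 W budgets are taken as given; the ~9 A/pin figure for the older Mini-Fit-style 8-pin is a commonly cited assumption, not a measurement:

```python
def amps_per_pin(watts, volts=12.0, pins=6):
    """Current each 12 V pin carries if the load shares evenly."""
    return watts / volts / pins

# 12VHPWR: 600 W over 6 pairs vs. the quoted 8.5 A/pin connector rating
hpwr = amps_per_pin(600, pins=6)   # ~8.33 A/pin
# PCIe 8-pin: 150 W budget over 3 pairs vs. an assumed ~9 A/pin rating
pcie8 = amps_per_pin(150, pins=3)  # ~4.17 A/pin

print(f"12VHPWR headroom: {8.5 / hpwr:.2f}x")   # roughly 1.02x
print(f"8-pin headroom:   {9.0 / pcie8:.2f}x")  # roughly 2.16x
```

The ~2% headroom on 12VHPWR versus more than 2x on the old 8-pin is the whole thread in two numbers.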

  • @GraceMcClain
    @GraceMcClain 9 months ago +78

    This issue was enough for my wife to decide to go with a 7900 XTX over the NVIDIA alternatives, and I can't say I blame her. I only have a 4070 Ti, and so am unlikely to suffer the same issues as 4090 owners, but I will likely also move to AMD products when my next upgrade cycle comes around. Quite why they thought this was clever, or even remotely necessary, baffles me, as does this insane push for 12VHPWR on motherboards, forcing all the power conversion hardware out of the PSU, where it is out of the way, and onto the motherboard, where it will take up valuable board real estate. Change for change's sake is what this seems to be, and I genuinely hope it comes back to bite these idiots.

    • @akcsegamedev
      @akcsegamedev 9 months ago +14

      It follows the saying "create a problem, then fix it"; it really wasn't needed, tbh.

    • @stangamer1151
      @stangamer1151 9 months ago

      It is just the 4090 that suffers from this issue. Even the 4080 is fine, since it draws only 320W (usually less). It looks like constant 400+W power usage is what makes these connectors melt. So even undervolted 4090s should be fine too.

    • @GraceMcClain
      @GraceMcClain 9 months ago +3

      @@stangamer1151 That may be, but as others have stated, this should never have happened in the first place, because we NEVER needed this connector. 8-pins were, and will continue to be, perfectly fine if not vastly better.

    • @stangamer1151
      @stangamer1151 9 months ago +2

      @@GraceMcClain I totally agree. They could have equipped only the 4070 Tis and 4080s with this connector, though. And the 4090 should have had either 2x 12-pin or 4x 8-pin. Then we would never have heard about any issues with connectors.

    • @seeibe
      @seeibe 9 months ago +6

      Yep. I got my 4090 mostly for the 24GB VRAM and machine learning capabilities. I cap the card at 350W since that gets me most of the performance, with reduced energy bill, reduced noise, and reduced risk of connector melting. For such a use case I feel like it's the perfect card. But if you really want to get the max FPS/$ out of your card and don't care about power consumption, the 7900XTX is definitely a better choice.

  • @iamdarkyoshi
    @iamdarkyoshi 9 months ago +66

    Still really annoyed we didn't just re-use the 8 pin CPU power connector on videocards when we originally started having cards with extra power. It has 4 conductors each way for carrying current, not 3 like the GPU power connector. The extra two ground pins usually don't even carry current, on many GPUs you can bridge them together and it'll work with a 6 pin cable. One of the pins is just *measuring* for a ground connection, not passing current through it so the extra wires are a glorified "cable check"
    I agree we need a new standard for GPUs but this microfit based connector ain't it. I work with microfit daily and I hate it.
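
The pin-count point above can be made concrete with a small comparison. The wattage budgets are the commonly cited spec values, the conductor counts assume an even current split, and the whole thing is illustrative rather than authoritative:

```python
# (rated watts, number of 12 V conductors) per connector, 12 V bus assumed
CONNECTORS = {
    "PCIe 6-pin": (75, 2),    # often built with 3 pairs in practice
    "PCIe 8-pin": (150, 3),
    "EPS 8-pin":  (300, 4),   # the CPU connector the comment suggests reusing
    "12VHPWR":    (600, 6),
}

def per_conductor_amps(name):
    """Even-split current per 12 V conductor at the connector's rated power."""
    watts, pairs = CONNECTORS[name]
    return watts / 12.0 / pairs

for name in CONNECTORS:
    print(f"{name:>10}: {per_conductor_amps(name):5.2f} A per 12 V conductor")
```

Even though EPS carries twice the power of a PCIe 8-pin, its extra conductor pair keeps the per-pin load well below what 12VHPWR asks of each pin.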

    • @mrfarts5176
      @mrfarts5176 9 months ago

      Nvidia just seems like they will not care if peoples houses burn down with them in it as long as it doesn't cost them money.

    • @sametekiz3709
      @sametekiz3709 9 months ago +1

      yes

    • @arturpaivads
      @arturpaivads 9 months ago +5

      Pretty much, yeah. EPS 12V makes sense for GPUs as well... That, alongside updating decades-old standards, would drastically lower the connector count on GPUs.

    • @milestailprower
      @milestailprower 9 months ago +7

      This, 100%. Nvidia was already doing this on their Tesla cards with no issues.
      Using the 12V EPS would have required less engineering and would have been safer anyway.
      Nvidia would still save space on their PCBs compared to 8-pin and 6-pin PCIe power connectors, and they wouldn't even need to engineer a brand new connector.

    • @dreamcat4
      @dreamcat4 9 months ago +5

      Yes, having 2x EPS connectors on the 4090 would have been the killer move here. Unfortunately somebody high enough up in Nvidia decided that only a single EPS-sized footprint would be sufficient, which is clearly the root of all these design choices. So you can squarely put the blame on Jensen and 'friends'... err, I mean employees. Err, I mean: an army of scared engineers (both inside and outside) who were too scared to speak up and get fired over all this nonsense, or who thought that their job was worthless politics instead of actual proper engineering.

  • @light3267
    @light3267 9 months ago +80

    Very informative and interesting video, good work! We're all going to need vertical GPU mounts and fire extinguishers.

    • @ahuman5592
      @ahuman5592 9 months ago +10

      Or, just work with AMD and pay less for a card with more vram(and without melting connectors)

    • @rawdez_
      @rawdez_ 9 months ago

      @@ahuman5592 still overpriced 0 progress crap. the rx 7800xt isn't faster than the rx 6800xt AFTER 3 YEARS of "progress". all corporations are milking the market with obsolete 0 progress crap now. ayyymd, ngreedia, shintel, sheeple etc. - EVERY CORPORATION sells overpriced af 0 progress hardware now. and stupid consumers keep buying 0 progress. nobody compares anything to 5-10 years ago anymore like they all have 5min fish memory. i.e. "tech-tubers" are a huge part of the problem. because they are silent about overprice and 0 progress and basically all make ads for overpriced af 0 progress crap nobody should be buying anyway. get used GPUs instead if you have to. don't feed greedy coprorations.

  • @AndroidBeacshire
    @AndroidBeacshire 9 months ago +18

    If the 12VHPWR pins were the same size as the 8-pin's pins, the connector would be fine.
    It's mind-boggling that Nvidia thought they could shrink the pins and launch 500+W through the plug without it melting.

    • @MetroidChild
      @MetroidChild 9 months ago +2

      It's not that crazy; the pins are rated for the current that passes through them.
      What happens in isolation rarely reflects real-world results though, which is why field studies are typically done.

    • @billy101456
      @billy101456 8 months ago +5

      @@MetroidChild To say nothing of the variation you get in manufacturing. Nothing is ever exactly on dimension in production. There is always a tolerance, and the smaller the part, the tighter the tolerance has to be. If anything less than a perfect pin, in a perfect connection, under perfect environmental conditions will cause you to go over your rating and fail, it's a bad system. If your connector will melt when operated in a hot environment, it's a bad connector. Building with such razor-thin margins, such that increasing the temperature is enough to overload the wire, is dangerous design.

    • @5etz3r
      @5etz3r 6 months ago

      @@billy101456 Well said. Tolerance matters. It's almost a spectrum, and if your tolerance is so razor-thin that failure is reached at such a low threshold on that spectrum, then I believe you could argue that the product is defective, despite the fact that it does function in absolutely perfect conditions.

    • @billy101456
      @billy101456 6 months ago +2

      @@5etz3r It's certainly not built with much safety margin. A 100-watt gap under max load, and was it 20 or 50 watts more for the design spec? They shouldn't be failing at less-than-perfect connections. I wonder if this problem will pop up again in a few years as the connectors and wires age. Let's hope the plastic doesn't turn brittle with age.

  • @gsuberland
    @gsuberland 9 months ago +10

    Small correction: the "rated at 20°C" part isn't actually correct for these cables; max current ratings are standardised at a high ambient temperature (e.g. 55°C or 70°C), and are almost fully dependent on the maximum temperature of the insulator. So for a 200°C rated insulator and a cable spec'd for up to 70°C ambient, the current rating of the wire is the maximum current that should never lead to an increase of +130°C in a normal scenario. AWG specifies the sum of the cross-sectional areas of the conductors, so for multi-stranded cables you end up with wider overall cable diameters due to gaps in circle packing, which can be a bit confusing - ultimately, 18AWG has the same amount of copper regardless of stranding. The reason for stranded cable current derating isn't anything to do with the amount of conductive material in the cross-section, but rather the poorer thermal contact between the strands and the trapped air between strands acting like an insulator, which means you need less current before a hotspot can occur which exceeds the insulation rating. Typical AWG charts showing current limits are generally based on a very simplified model using minimum insulation specs set by building regulations on plenum cable installs, because they're mostly made as a reference for electricians. If you want the correct rating for the exact cable you're looking at, you need to look at the markings on its insulation.

    • @atta1798
      @atta1798 9 months ago +2

      That's called operating temperature, etc.

  • @gerald8289
    @gerald8289 9 months ago +62

    I've always known the 8-pin was way underrated, just tied down to old spec requirements. I find it funny that most power supplies that come with the 12-pin high power cable terminate it at the PSU end via 2x 8-pin. I also learned from building some mining rigs that the 8-pin PCIe will take 300+ watts all day for months straight and be just fine. The OEM 4090 design should have been a 3x 8-pin, giving it a 525W max TDP within spec, and then let board partners do whatever they want.
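
The 525 W figure is just the in-spec budgets added up; a trivial sketch (150 W per PCIe 8-pin and 75 W from the slot are the official PCIe budgets):

```python
def board_power_budget(n_8pin, slot_watts=75):
    """In-spec board power: 150 W per PCIe 8-pin plus the slot's 75 W."""
    return n_8pin * 150 + slot_watts

print(board_power_budget(3))  # 3x 8-pin + slot = 525 W
print(board_power_budget(2))  # a typical 2x 8-pin card = 375 W
```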

    • @alexturnbackthearmy1907
      @alexturnbackthearmy1907 9 months ago +12

      8-pins have a lot of margin; their rated current is more than 2 times lower than what they can in theory take. 12VHPWR doesn't have any of that, so this game of chance comes down to the small manufacturing defects that happen all the time.

    • @evalonso89
      @evalonso89 9 months ago +3

      @@alexturnbackthearmy1907 that’s if they are using the max size wire rated for the contact. What if it’s 18 or 20AWG? I think the standard has to assume the worst case scenario

    • @alexturnbackthearmy1907
      @alexturnbackthearmy1907 9 months ago +3

      @@evalonso89 And this is why these older ratings are so conservative. ANYTHING can happen, and it's your job to make failure as close to impossible as is economically feasible.

    • @evalonso89
      @evalonso89 9 months ago +3

      @@alexturnbackthearmy1907honestly I think we solved the problem lol

    • @DigitalJedi
      @DigitalJedi 9 months ago +3

      @@alexturnbackthearmy1907 Exactly. I don't think any engineer worth their salt would sign off on 20AWG for a modern PSU, but at the time the standard was new and parts weren't as power hungry it would've been a consideration.

  • @Marfprojects
    @Marfprojects 9 months ago +32

    I had never had a 12VHPWR cable in my hands until now. Realizing how small the pins are while pushing hundreds of watts through them, it's no wonder it melts.

    • @Niosus
      @Niosus 9 months ago +8

      Especially at 12V. Another way to solve this issue is by making the connection 24 or even 48V. That cuts the current in half or by 4, instantly fixing the issue.
      But that means adapters aren't that easy to make anymore so in practice you will need a new power supply.
      The industry is going to have to get its shit together and figure out a proper solution, because this ain't it.

    • @kpakpatojournal1555
      @kpakpatojournal1555 9 months ago +1

      @RAM_845 Retailers do too, that's sad.

    • @nasone32
      @nasone32 9 months ago +1

      @@Niosus You can't do that without negatively impacting the VRM on the card. Stepping down from 24V to 1V for the GPU in one step is much more difficult than going from 12V to 1V.

    • @Niosus
      @Niosus 9 months ago

      @@nasone32 Sure, but if we want to keep pumping large amounts of power through small cables and connectors, something will have to give somewhere.
      It could be akin to USB Power Delivery, where there is a known voltage/current that's always safe (5V, 0.5A), and that can be negotiated upwards if both sides support it (20V, 5A). The RTX5060 can be just fine with 12V without the added complexity, while the RTX5090 can have the hardware on board to safely pull much more power through the same cable and connector. On an older PSU that's just connected through an adapter, the RTX5090 can fall back to a lower power consumption target so it still functions, just at a lower performance level.
      I know, it's not so simple and USB-PD has its flaws. But we're talking about 100B to trillion dollar companies (Nvidia, AMD, Intel) here. It really isn't too much to ask for a proper solution.

  • @bhume7535
    @bhume7535 9 months ago +52

    All I need to do is read the title. I agree.

  • @Mxyzptlk30
    @Mxyzptlk30 9 months ago +18

    I have to agree with you on all points! Not only was it a design failure to send even more current across fewer pins and fewer connectors, but also to place the power connector on top of a large card, where you have no choice but to bend the cable at a near right angle to fit in almost all popular cases. This essentially required 4090 owners to buy special adapters and cables that potentially added more risk. I hope NVidia learns from this major design flaw!

    • @phyde1885
      @phyde1885 7 months ago

      I seriously doubt it! They're TOO GREEDY!! And redesigns cost 5- and 6-digit figures, at least! I'm a QC EE who retired long ago, and back then I had discussions with my bosses about changing a simple non-electrical part. Even 40 years ago, they said the cost was massive. It was a phone handset shell!

  • @roboman2444
    @roboman2444 9 months ago +31

    Keep in mind that the 8-pin PCIe connector has 2 more ground pins than 12V pins. So even if two ground pins/wires were completely burnt or not contacting properly, it would still be able to push the entire "rated" 280 watts. On the 8-pin you only have to worry about the 12V pins being overloaded, while on the 12VHPWR you have to worry about both the 12V pins and the GND pins being overloaded or having contact issues.
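
That failure tolerance can be sketched numerically. Assuming an even current split across whatever pins still make contact, and using per-pin limits quoted in this thread (8.5 A for the 12VHPWR Micro-Fit pins, ~9 A assumed for the older Mini-Fit style), purely as an illustration:

```python
def per_pin_amps(watts, live_pins, volts=12.0):
    """Current per pin when only `live_pins` still carry the load evenly."""
    return watts / volts / live_pins

def stays_in_limit(watts, total_pins, failed_pins, pin_limit_amps):
    """True if the surviving pins stay under the per-pin current limit."""
    return per_pin_amps(watts, total_pins - failed_pins) <= pin_limit_amps

# 12VHPWR at 600 W: lose one of six 12 V pins and the rest exceed 8.5 A
print(stays_in_limit(600, 6, 1, 8.5))  # False (10 A per remaining pin)
# PCIe 8-pin at 150 W: lose one of three 12 V pins and it's still fine
print(stays_in_limit(150, 3, 1, 9.0))  # True (6.25 A per remaining pin)
```

One degraded contact on 12VHPWR pushes every other pin past its rating, which is exactly the cascade described in the surrounding comments.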

  • @kancheongspidergaming
    @kancheongspidergaming 9 months ago +71

    As soon as I saw the new 12VHPWR connector I got instantly suspicious given that 30-series already OCP-tripped power supplies when it first came out.

    • @SergioEduP
      @SergioEduP 9 months ago +21

      Same. Plus, as a general rule of thumb, if you want to draw more power you need thicker wires and beefier connectors - this was the opposite move.

    • @EmanuelHoogeveen
      @EmanuelHoogeveen 9 months ago +14

      That situation was a bit different - in that case the problem was that the power draw was very spiky, especially in the 1 millisecond range. Although the ATX 3.0 spec has provisions for exactly that problem, with the 40-series it looks like they managed to smooth things out so the spikes are much smaller.

    • @deancameronkaiser
      @deancameronkaiser 9 months ago +5

      ​@@SergioEduPyes yes yes yes yes fucking YES my guy. God you should be working for Nvidia because whoever they got to design the 12 volt high power connector really doesn't know what he's doing at all. Whoever thought using that trash was a good idea should be fired from Nvidia.

    • @deancameronkaiser
      @deancameronkaiser 9 months ago +3

      @@EmanuelHoogeveen Still doesn't mean you need to reinvent the wheel, lol. 🤦
      There's nothing wrong with a standard 8-pin connection. Nvidia just wanted to be fancy and it backfired. Plus, if your GPU is spiking out of control then something is not right at all. By that I mean jumping from, let's say, 200 watts to 600 watts.

    • @luvingyouu
      @luvingyouu 9 months ago +7

      It wasn't OCP being triggered; there was too much noise on the 12V sense pins because of transient spikes, so the PSU freaks out and shuts down. People fixed it by either removing the 12V sense pin on the 24-pin or putting a ferrite bead on the 12V sense wire. Most people don't know how to do this or don't want to risk it, and I don't blame them.

  • @arclyte1859
    @arclyte1859 9 months ago +13

    OMG! I've been struggling with stability and BSOD issues in my new build for over a year! Nothing made sense; I'd been swapping out parts and RMAing parts. I just swapped out my CableMod extension cables and all my issues went away. I never had an issue before, but ever since I dropped in my 4090 I had issues that I couldn't consistently replicate. When I look back at how much time and money I wasted troubleshooting, I want to cry.

    • @matta.1673
      @matta.1673 9 months ago +1

      Sorry to hear that, please reach out to our support team about this and we will send you a direct replacement cable if your PSU is one we support to make up for this. Just tell our support team that Matt sent you and we'll get you all taken care of.

    • @arclyte1859
      @arclyte1859 9 months ago

      @@matta.1673 Thanks, but after all I've been through with my current build, I'll pass. I don't blame CableMod - I've used many custom cables and extensions without issue. However, I think after years of use my extensions for my ATX power and PCIe crapped out, and just to be safe I went back to the cable that came with my PSU for the 12VHPWR.

  • @inmypaants
    @inmypaants 9 months ago +25

    One of the major reasons I avoided Nvidia this generation was the plug; I just couldn't be bothered dealing with any melting. I ended up grabbing a 7900XTX and the trusty 3x 8-pins.

  • @oceanbytez847
    @oceanbytez847 9 months ago +10

    This makes me even more glad I went with the 7900XTX. It uses 3 of the old 6+2 pin connectors. More pins, and trusty old reliability.

  • @thechaosis8993
    @thechaosis8993 9 months ago +15

    Interestingly enough, the 3090 Ti that EVGA built had a spot on the board for an additional 12V HPWR. It appears they might have been thinking the same thing der8auer mentioned, ahead of time.

  • @niezzayt3809
    @niezzayt3809 9 months ago +51

    Meanwhile USB-C & USB4 240 WATT Power Delivery:
    FINALLY A WORTHY OPPONENT!!
    OUR BATTLE WILL BE LEGENDARY

    • @whitygoose
      @whitygoose 9 months ago +4

      Honestly, I doubted that as well; I've never used it. 240W over USB-C sounds like a scam.

    • @der8auer-en
      @der8auer-en  9 months ago +64

      Let's just power RTX5090 by 3x USB-C

    • @BrianCairns
      @BrianCairns 9 months ago +25

      The USB-C connector is only rated up to 5 amps. For 240W Power Delivery it uses 48 volts at 5 amps.

    • @niezzayt3809
      @niezzayt3809 9 months ago +4

      @@BrianCairns That's exactly why it is a problem.
      Based on the laws of physics alone, it will generate quite a lot of heat.

    • @niter43
      @niter43 9 months ago +13

      @@niezzayt3809 What? Volts don't produce heat, amps do. And current cables have been shown to handle 5A just fine (you'd still need a special new 240W cable that's properly insulated for 48V, but that's irrelevant to conductor cross-section/heat/amperage rating).

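The voltage-versus-heat point above is easy to quantify: at fixed power, the I²R loss in a given contact resistance scales with 1/V². A small illustration (the 10 mΩ contact resistance is an arbitrary example value, not a measured one):

```python
def contact_loss_watts(power_w, bus_volts, contact_ohms=0.010):
    """Heat dissipated in one series contact resistance at fixed power."""
    amps = power_w / bus_volts
    return amps ** 2 * contact_ohms

print(contact_loss_watts(240, 48))  # USB-PD EPR: 5 A  -> 0.25 W
print(contact_loss_watts(240, 12))  # same power, 12 V: 20 A -> 4.0 W
```

Quadrupling the voltage cuts the contact heating by a factor of sixteen, which is why 240 W through a tiny USB-C plug is feasible at 48 V.
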
  • @jeffm2787
    @jeffm2787 9 months ago +32

    Contact surface area along with clamping force is clearly marginal at best for the 12VHPWR connectors. It's also a runaway situation: once one contact increases in contact resistance, it causes the remaining pins to start heating more, which causes those to also increase in resistance. 24 volts would solve this issue, but that would require a new PSU spec.

    • @bobbygetsbanned6049
      @bobbygetsbanned6049 9 months ago +5

      Yup, and I think that's how NVIDIA has been getting away with this: the cables are technically sufficient to carry the load, which is also why the cable isn't melting. The real problem is that the connector doesn't have enough contact area to carry the load unless conditions are 100% perfect.

    • @techsupport8967
      @techsupport8967 9 months ago +1

      24V would make a single connector practical (assuming they didn't minimize its size), as would a ZIF-type socket with a clamping lever for basically guaranteed engagement of the pins and resistance to pull-out and side loads.
      Add per-pin sense resistors to monitor current and clamp any runaways (one pin delivering more than the others), combined with proper use of a sense wire so the GPU can throttle or go into an error or low-power state when there's excessive pin/wire voltage droop.
      Lots of things could easily have been done, but they would have increased the BOM by $1, so that's out of the question.

    • @gsuberland
      @gsuberland 9 months ago +1

      24V would help on the connector side, but it'd increase the cost of the VRMs. We are starting to see GaN (or GaN-on-Si) processes with some of the new stages - iirc MPS has a line of them now - which would help alleviate the challenges of switching a larger high-side voltage, but it's the passives that get tricky. High-side capacitance would end up being much more challenging at twice the voltage. Alupoly / solidpoly caps with sufficiently high dielectric strength for 12V operation (typically rated for 20V) are fairly cheap, but once you bump that to 35V+ it gets much more expensive to design around due to loss tangent / dissipation factor typically being worse at those voltages unless you really crank the physical size (which is obviously problematic from a mechanical perspective, but also comes with worse ESL). Higher voltage isn't usually an issue for standard aluminium electrolytics, but they can't handle the ripple current of these designs, and by the time you parallel enough low-ESR alu caps to handle the thermals from DF you end up running into problems with parasitic inductance (e.g. forming resonances). MLCC derating on class II dielectrics is also generally tolerable at 12V, but at 24V it becomes much more problematic and you end up needing greater MLCC package sizes across the board, which is a pain for dense layouts and may exacerbate radiative EMI problems due to larger current loops.

    • @techsupport8967
      @techsupport8967 9 months ago +1

      @@gsuberland That's a great bit of perspective on the challenges of a higher supply voltage; I wouldn't have considered the implications for regulation/filtering. Everything is a trade-off I suppose, and maybe the added complexity and constraints wouldn't have made sense.
      Regardless, it seems the trend toward more and more power-hungry cards isn't going to end. Give it two or three more generations and we'll see cards doing 800W+ on stock settings; improvements to connectors alone aren't going to solve the issue in the long run.

  • @uther10
    @uther10 9 months ago +10

    Still dumbfounded that Nvidia went with this new adapter, and at the free pass they received for blaming the end users.

  • @VideoklipBG
    @VideoklipBG 9 months ago +4

    Don't forget that higher resistance causes voltage drop, and with voltage regulators that leads to higher current draw.
    Anybody who isn't completely illiterate in physics knows how dangerous this is. The die and memory require a certain amount of power. The VRM does everything to keep the output stable regardless of the input voltage. Input voltage drops due to increased resistance at the connector pins, so the VRM draws more current, which leads to even higher temperature and increased resistance - a thermal runaway situation.
    The fact that you can wiggle the connector slightly, causing the connection to cut off intermittently, shows exactly what people have been saying: this is a serious **design problem**, **not** a manufacturing tolerance issue from *every* PSU, GPU, adapter or cable manufacturer, nor exclusively a user issue.
    "With great power comes great responsibility" - and it should definitely come with suitable connectors.
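
The feedback loop described above can be written down: a constant-power load behind a series contact resistance satisfies P = V·I − I²·R, and the physical root of that quadratic shows the current (and therefore the I²R heating) climbing super-linearly as the contact degrades. A sketch with made-up resistance values, purely to illustrate the trend:

```python
import math

def bus_current(load_watts, supply_volts, contact_ohms):
    """Current a constant-power load pulls through a series contact
    resistance; solves P = V*I - I^2*R for the smaller (stable) root."""
    disc = supply_volts ** 2 - 4 * contact_ohms * load_watts
    if disc < 0:
        raise ValueError("load cannot be sustained through this resistance")
    return (supply_volts - math.sqrt(disc)) / (2 * contact_ohms)

for milliohms in (5, 10, 20):
    amps = bus_current(600, 12, milliohms / 1000)
    loss = amps ** 2 * milliohms / 1000
    print(f"{milliohms:>2} mOhm: {amps:5.1f} A, {loss:5.1f} W lost in the contact")
```

Doubling the contact resistance from 10 to 20 mΩ more than doubles the heat dumped into the connector, which is the runaway mechanism in a nutshell.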

  • @alpha007org
    @alpha007org 9 months ago +10

    I wanted to say that I saw someone testing a normal 6-pin connector, and the 6-pin could handle insane power before it got warm, but it's mentioned in this video.
    It's perplexing that in the 21st century, with the availability of sophisticated simulation tools, they made a worse connector that is susceptible to user error and worse than a 10+ year old solution.

    • @Djinnerator
      @Djinnerator 7 months ago

      Except the issue isn't the connector, it's the adapters. Notice how PSU cables don't have this issue; it only appears with one of two specific adapters. Every single melting case revolves around those two adapters. The 3090 Ti has the same power draw as the 4090 and uses 12VHPWR, yet there are zero melting cases - and it has no adapters. The issue has never been the 12VHPWR connector; it's always been two adapters. You would think they'd already simulated using their cables - because they already did...?

  • @p_mouse8676
    @p_mouse8676 9 months ago +4

    As an electronics engineer myself, using similar connectors, I am actually very confused about all these problems. We have been using similar connectors for the last 15 years, producing well over 10-20k units per year. Not one single faulty unit. We only use Molex- and JST-branded parts, and they have zero problem running at max ratings over long periods of time in a quite warm environment.
    I also don't agree with the explanation here. I think the main problem is the mix and match between brands of connectors across cables, adapters and video cards.
    Some video card and cable manufacturers are using very cheap, low-quality connectors as well as badly matched metal parts (pin and receptacle).
    Again, a high-quality Molex-to-Molex brand connection has no problem sustaining this amount of current.
    I test that on a regular basis.

  • @travisholt92
    @travisholt92 9 months ago +33

    Phenomenal explanation of the 12VHPWR spec's flaws. We've seen it to be very unstable since its introduction, and we now have a clear explanation as to why 12VHPWR is such a bad connector. Thank you for getting this information out there. 🤓

  • @daviddesrosiers1946
    @daviddesrosiers1946 9 months ago +67

    I never liked this connector from the moment I laid eyes on it. The outrageous prices, and my distrust of the new connector are why there's no 40 series in my system. My instinctual reaction was that Nvidia was cheaping out big time. As time went on, Nvidia's behavior reinforced my instinctual suspicions.

    • @mrfarts5176
      @mrfarts5176 9 months ago +14

      The planned obsolescence didn't turn you off from Nvidia? I stopped buying their products the moment I realized they were making their own cards obsolete by limiting the VRAM.

    • @daviddesrosiers1946
      @daviddesrosiers1946 9 months ago +13

      @@mrfarts5176 Oh, that's yet another reason there's a water cooled Red Devil 7900XTX in my rig, instead of a 4090. I was lucky to have a 4090 over for an extended sleepover and I concluded that the juice was sweet, but not worth the squeeze and I really hate the cut of Nvidia's leather jacket in recent years.

    • @mrfarts5176
      @mrfarts5176 9 months ago

      @daviddesrosiers1946 That water cooled 7900xtx is one of the best cards on the market in my opinion. I would have it now if I could find it near msrp.

    • @mintymus
      @mintymus 9 months ago

      @@mrfarts5176 Then how is the GTX 1080 still relevant?

    • @mrfarts5176
      @mrfarts5176 9 months ago +1

      @mintymus Have you tried loading up high-resolution textures on it? All cards are still relevant. I recently took one of my old favorite cards - a GTX 560 Ti, I think - and put together a build for someone who cannot afford a PC. It was the first computer part I ever spent over 300 dollars on. This PC will probably continue to be used until it stops working. Very relevant, but not for gaming with high textures...

  • @procrastinatingnerd
    @procrastinatingnerd 9 months ago +5

    I find it amusing how they touted this new connector as so much better - "this is the new standard" - and then connectors started melting...
    They should be honest: they wanted to make a connector that used the bare minimum of materials, with wiring that used the bare minimum of materials, so they could pad their margins as much as possible. The graphics card company that forced this connector should be the one resolving this.

  • @BikingChap
    @BikingChap 9 месяцев назад +5

    There seem to be two issues here: the current capacity of the connector, and its general reliability in maintaining good contact. If the current capacity is tight but within spec, there shouldn't be an issue. The real problem seems to be that when even lightly flexed, the connector loses connectivity or goes high-resistance on one or more pins, and that then pushes the other pins beyond safe limits. While multiple 12VHPWR connectors might solve the overheating issues on a 4090, this would appear to be only because, as pins disconnect or go high-resistance, there is still sufficient current-carrying capacity in the remaining pins to carry the load safely. The question surely has to be: why are we seeing pins disconnect or go high-resistance when the connectors are manually flexed? And given that others have had no issues at high currents with genuine Molex plug/socket combos, is the issue non-Molex brands and their lack of capability or adherence to spec?
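
The failure mode described above can be put into rough numbers. A quick sketch (assumptions: 600 W at 12 V split across the six +12V contacts, and a 9.5 A per-pin rating, which is the figure commonly cited for 12VHPWR terminals; actual terminal ratings vary by manufacturer, some as low as 8.5 A):

```python
# Per-pin current in a 12VHPWR connector as pins lose contact.
# 600 W @ 12 V and a 9.5 A/pin rating are assumptions, not datasheet values.
POWER_W = 600.0
VOLTAGE_V = 12.0
PIN_RATING_A = 9.5   # assumed per-contact rating; check your terminal's datasheet
TOTAL_PINS = 6       # six +12V contacts in the connector

total_current = POWER_W / VOLTAGE_V   # 50 A overall

for working_pins in range(TOTAL_PINS, 0, -1):
    per_pin = total_current / working_pins
    status = "OK" if per_pin <= PIN_RATING_A else "OVER RATING"
    print(f"{working_pins} pins carrying load: {per_pin:.1f} A/pin -> {status}")
```

With all six pins making contact, each carries about 8.3 A; lose even one pin to high resistance and the rest are already pushed past the assumed rating.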

  • @AsthmaQueen
    @AsthmaQueen 9 месяцев назад +60

    I was benching 3x 8-pin at 1000 W and didn't really have any problems; they have a huge margin, as you say.
    The resistance of many connectors is a factor, as is the impedance of the cable on older PSUs.
    This was from a 1200 W ATX 3.0 Seasonic, standard connectors into a Lian Li Strimer, into an EVC-unlocked 7900XTX; peak draw at the wall around 1250+ W from the PSU.

    • @sametekiz3709
      @sametekiz3709 9 месяцев назад

      yea

    • @deancameronkaiser
      @deancameronkaiser 9 месяцев назад +12

      So then I ask you, with tears in my eyes: why did we need a 12-volt high power connector if what you've said is true? I believe everything you said, because I see nothing wrong with the standard 8-pin design.
      Nvidia created a solution to a problem that never existed in the first place. What they thought was necessary and what they actually needed to do are two completely different things.

    • @Kalvinjj
      @Kalvinjj 9 месяцев назад +6

      @@deancameronkaiser And their solution to it is just a spit in the face of engineering.
      The 8 pin connector, one single connector, at the same safety margin they're using, would be capable of delivering about 450w, believe it or not. That's just 3 12v contacts mind you, the other 2 contacts on the side of the 6+2 are just ground to signal the card it can pull twice the current per pin.
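
The ~450 W claim above checks out as back-of-envelope arithmetic, under assumed per-pin ratings (Molex Mini-Fit Jr. HCS terminals are commonly rated around 13 A, Micro-Fit 3.0 style terminals around 9.5 A; verify against the datasheets for your exact terminals and wire gauge):

```python
# Raw contact capacity vs. specified power for both connector types.
# Per-pin amp ratings here are assumptions for illustration.
V = 12.0

# Classic 8-pin PCIe: three +12V contacts, spec'd at only 150 W.
eight_pin_raw = 3 * 13.0 * V          # ~468 W of raw contact capacity
eight_pin_margin = eight_pin_raw / 150.0

# 12VHPWR: six +12V contacts, spec'd at 600 W.
hpwr_raw = 6 * 9.5 * V                # ~684 W of raw contact capacity
hpwr_margin = hpwr_raw / 600.0

print(f"8-pin:   {eight_pin_raw:.0f} W capacity, {eight_pin_margin:.1f}x margin over spec")
print(f"12VHPWR: {hpwr_raw:.0f} W capacity, {hpwr_margin:.2f}x margin over spec")
```

The asymmetry is the point: the old 8-pin carries roughly a 3x safety margin over its 150 W spec, while 12VHPWR runs its 600 W spec within about 14% of the raw contact capacity.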

    • @StatusQuo209
      @StatusQuo209 9 месяцев назад +5

      I have also tested 1000w with that exact PSU (3090 w 1000w bios and 12900k). 8 Pins still go hard. We don't need 12pin yet imho.

    • @deancameronkaiser
      @deancameronkaiser 9 месяцев назад +2

      @@Kalvinjj Yes, agreed. So then I ask you: why the fuck did Nvidia decide to make this stupid 12-volt high power connector? With a stupid cable as well? Bro, I'm literally holding my head in my hands at this point. 🤦

  • @OTechnology
    @OTechnology 9 месяцев назад +30

    Dear Nvidia and AMD, just put EPS 8-Pin on GPUs PLEASE!

    • @thepatriot6966
      @thepatriot6966 9 месяцев назад +16

      Why would you include AMD? This is all fake leather jacket mans fault. 😂

    • @OTechnology
      @OTechnology 9 месяцев назад +23

      @@thepatriot6966 Because the EPS 8-pin is the superior connector, and their server divisions already know it and use EPS on the server GPUs. It uses 4x 12V pins and 4x GND pins.

    • @marcogenovesi8570
      @marcogenovesi8570 9 месяцев назад +2

      Is AMD even using this new connector at all? All the cards I've seen so far still use the "old" connector.

    • @darranrowe174
      @darranrowe174 9 месяцев назад +4

      @@marcogenovesi8570 AMD is using PCI-E power connectors. AKA the old ones.

    • @TwistedD85
      @TwistedD85 9 месяцев назад

      I missed it at first too. They're saying they should use the 12V EPS (CPU) connector instead, since it's already 12V-only and an existing standard compatible with the 12VO PSUs that brought about this dumpster fire of a new connector. I'm kind of surprised they didn't do this to start with. @@marcogenovesi8570

  • @mortifyedpenguin
    @mortifyedpenguin 9 месяцев назад +62

    I never understood why they replaced 4x 8-pin power delivery with 12+4 pins and thought these cables would never overheat. Was it just to save money on the PCB?

    • @pedro4205
      @pedro4205 9 месяцев назад +22

      They wanted a single connector for a large range of power requirements. The idea was that the 12+4 would serve every single GPU in their line-up, so it couldn't be too far over spec, or it would add cost to lower-end GPUs, while still being able to deliver power to the highest end. But they failed to think about cards being used in the wild; they were only thinking about lab results.

    • @haukionkannel
      @haukionkannel 9 месяцев назад +6

      Smaller… takes less space.

    • @xicofir3737
      @xicofir3737 9 месяцев назад +14

      @@haukionkannel Have you seen the size of graphics cards lately?
      I had a friend who RMA'd 2 AMD cards because his PC wouldn't boot.
      Turns out the cards were too long and the case wouldn't let them plug correctly into the PCIe slot.
      And I had to buy an RX 7900XTX reference card because it's the only small card that exists above an RTX 4060.

    • @ozanozkirmizi47
      @ozanozkirmizi47 9 месяцев назад +7

      Nvidia is always trying to milk every possible factor.
      Less cost = more money

    • @nguyenson7073
      @nguyenson7073 9 месяцев назад

      It's better to have only one connector that handles the whole load. If you noticed on the 3xxx series, one of the 2, 3, or 4 8-pin connectors often overheats; they're not balanced, and one will max out before its load is shared with the next connector.

  • @roboman2444
    @roboman2444 9 месяцев назад +40

    Thicker wire will also help the connector itself. Thicker wire = more thermal transfer of heat away from the connector pins, and higher current capacity without overheating. The wires act as a sort of heat-sink for the connector itself.
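
The heat-sinking point above is mostly about the pins, but the wire itself also dissipates less when it's thicker. A rough I²R sketch (the per-metre copper resistances are approximate reference values, and 8.33 A is one sixth of a 50 A, i.e. 600 W at 12 V, load):

```python
# Heat dissipated along each wire of the bundle, for two common gauges.
# Resistances are approximate copper values; treat the numbers as illustrative.
RES_PER_M = {"18 AWG": 0.0210, "16 AWG": 0.0132}  # ohms per metre, approx.
I = 50.0 / 6          # current per wire in amps (600 W @ 12 V over 6 wires)
LENGTH_M = 0.6        # typical PSU-to-GPU cable length

for awg, r in RES_PER_M.items():
    p = I**2 * r * LENGTH_M   # watts turned into heat along each wire
    print(f"{awg}: {p:.2f} W of heat per wire at {I:.2f} A")
```

Going from 18 AWG to 16 AWG cuts the resistive heating in each wire by roughly a third, and a cooler wire can pull more heat out of the pin it is crimped to.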

    • @lassebrustad
      @lassebrustad 9 месяцев назад +7

      Sure, but the connector itself is too small to handle 600W of power, let alone the almost 1000W that an unofficial BIOS flash can allow the GPU to draw without any hardware modifications.

    • @skK-xk3yc
      @skK-xk3yc 9 месяцев назад +1

      It won't help much; the bottleneck is still the pins, not the cable. Those tiny pins and that small connector are too small to carry 600 watts long-term.

    • @SianaGearz
      @SianaGearz 9 месяцев назад +17

      @@Triro And yet, 15+ years of the 6+2 connector and basically no issues with it. It definitely raises the question of whether the new standard is at all adequate, whether it gives manufacturers too much wiggle room to produce items of insufficient quality, or whether it makes it harder for end users to install correctly and easier to install incorrectly. There's no need to brown-nose PCI-SIG and Nvidia.

    • @RamonInNZ
      @RamonInNZ 9 месяцев назад

      The standard is faulty, and Nvidia pushed hard for it; thus we now have a replacement standard (though the pins are still not dimensioned to cope with 50A of continuous current) @@SianaGearz

    • @SianaGearz
      @SianaGearz 9 месяцев назад +4

      @@Triro I'm not likebotting anything.
      "Doesn't have very delicate pins" well that sounds like a massive advantage to me! Tell me again why we're "upgrading" to inferior hardware?

  • @OGParzoval
    @OGParzoval 9 месяцев назад +3

    What they should have done long ago is raise the voltage, which proportionally lowers the amperage required to carry the same amount of power: 1A x 12V = 12W, 1A x 24V = 24W. They'll have to do this eventually as wattage requirements go up, or you'll end up with stupidly thick cables. However, the real issue is the power contacts. With a large contact surface area you get normal temperatures, but with a small contact area, or worse a gap, you get high heat or arcing. Those power connections are most likely melting for that reason alone. Still, I think going to higher voltages would solve a lot of issues. You could take 2x 12V rails and output 24V with the right converter, but it would take space in a PC; it's a solution that could be done without the ATX standard being updated to higher-voltage rails. Besides, stepping home voltages down to low-voltage DC is lossy as well, so there's efficiency to be gained by moving up a notch in voltage. 48VDC would be a good spot, because so much telco equipment already runs on 48VDC; those vendors would welcome more business in that area when it comes to components.
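
The voltage argument above is simple Ohm's-law arithmetic; a minimal sketch for a fixed 600 W load:

```python
# For a fixed load, current (and thus connector and wire stress)
# scales inversely with the supply voltage: I = P / V.
POWER_W = 600.0

for volts in (12.0, 24.0, 48.0):
    amps = POWER_W / volts
    print(f"{volts:>4.0f} V rail -> {amps:5.1f} A for {POWER_W:.0f} W")
```

At 12 V, 600 W means 50 A through the connector; at 48 V, the same power needs only 12.5 A, which is why telco and server gear standardized on the higher bus voltages.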

  • @HaroHoro
    @HaroHoro 9 месяцев назад +27

    Finally, a video that works through the details of the spec, the differences from the 8-pin, how 12VHPWR meets the spec on paper, and the little real-world technicalities that explain why there are ultimately issues. A very good and informative video.
    Thanks for showing the engineering side of things where possible, too; being able to visually compare such data helps. When it's just writing, these details can be harder to comprehend for non-engineers.
    edit: grammar and better phrasing.

  • @PowellCat745
    @PowellCat745 9 месяцев назад +53

    All 12VHPWR cables and GPUs should be recalled. It’s a disgrace that Greedvidia is still forcing this fire hazard upon us.

    • @Uryendel
      @Uryendel 9 месяцев назад +6

      Nvidia is not forcing it; this is the new ATX norm. Also, the connector itself doesn't have an issue; the issue is bad adapters. Nobody had an issue while using this connector directly.

    • @dfv15
      @dfv15 9 месяцев назад +24

      @@Uryendel Look into the history of the spec, fanboi

    • @PinkSkinSisko
      @PinkSkinSisko 9 месяцев назад +12

      @@Uryendel Copium kills, bro

    • @blahx9
      @blahx9 9 месяцев назад +1

      Uhh, no, the issue is Nvidia adopting it; AMD doesn't use it... @@Uryendel Nvidia is lacking here

    • @Uryendel
      @Uryendel 9 месяцев назад +1

      @ParchedGoat1 The ATX norm is for the PSU, you simpleton. If you get an ATX 3.0 PSU, you will get the 12-pin on it for powering graphics cards...
      And yes, you can buy a GPU that still follows the old norm; that doesn't mean the new one doesn't exist...

  • @JoeL-xk6bo
    @JoeL-xk6bo 9 месяцев назад +8

    This is what I said from day 1. The highest-amperage, hottest, highest-TDP GPU using one small connector is THE problem. You only see adapters fail this often on the 4090. And the narrative now is all about CableMod, when every brand, native 12V PSUs, and Nvidia's included cable have all burned.

    • @jon4715
      @jon4715 9 месяцев назад +1

      Poorly designed connector, it’s microfit…tiny pins.

  • @m-copyright
    @m-copyright 9 месяцев назад +5

    Finally, someone in the tech YouTube space calls out this connector.
    Yes, I know others have tested it and "proved" the connector can work well, but here's the thing: while their testing is great, it doesn't cover "real world" scenarios, meaning actually using the thing.
    Running it on a test bench and seeing if it catches fire is not the real world. Putting the card in cases, tugging on the cable, things like that, is the real world.

  • @silentferret1049
    @silentferret1049 9 месяцев назад +2

    The problem with the connector is not the connection but the retention. Instead of relying on a push-and-clip, it should be a screw-down, which would remove all the problems people are having with it not being pushed in enough, and with the slightly loose tolerances the plugs need so they don't rock loose. It's the only smart option, and they don't want to do it.
    The other option is including 90- and 180-degree connectors with the GPU that lock onto the card frame itself rather than just the connector. Those who suggest thicker wire forget how much harder those wires are to route and how much they push back, which leads back to the old card-sag problem. It was never just the weight of the card, but also the weight of the wires and how much they pull or push on the card.
    Wire ratings also ship with a safety margin, so even if a wire has a 220 W low-end rating, say 30% of that is knocked off, putting it in the 150 W range. That's why it takes 4 connectors for 600 W. Onboard power from the motherboard is a better idea to consider than a cable alone. Mounting is the problem that needs fixing first: the GPU-in-motherboard arrangement was made for a fully horizontal layout, but since vertical cases have become the standard, that whole arrangement is a problem and needs to be rectified first.
    Some sort of mounting on the other side of the card, using the motherboard and its standoffs where it mounts to the case, needs to be implemented. This would allow secure connections and reduce the card-sag problem. It would also make a safe onboard power addition possible, which could deliver half or more of the power a 4090 needs, located at the back of the card near the slide-in case mount. Given how long cards are, I would also put the cables on the end of the card instead of the top, so they can use easy-to-secure 90-degree connectors that are easier to manage. That means less leverage and tension on the card and easier wiring, since the connector would sit almost right at the case's cable routing to the back, for both vertical and horizontal mounts. This could be a chance to remove a bunch of wires on future boards and cards, increase stability, and reduce the parts needed for a PC build.
    This cable connector is the tip of the problem mountain on PCs that needs to be fixed. Most of it is a pretty damn easy fix and wouldn't cost much on new motherboards or GPUs. PCs are sloppy and inefficient as hell and compound their own problems.
    One more point: the GPU could be on the "top" side of the motherboard (the opposite side from the CPU and RAM), so that the backplate and frame sit against the top of the case for better mounting; or the GPU could be flipped side-to-side and sit closer to the bottom of the case for solid mounting. That would also fully fix the sag problem in a vertical case, since the heatsink's weight would rest on the GPU's chips, where it should be.

  • @prwtc
    @prwtc 9 месяцев назад +7

    I have seen such melted connectors many, many times in cars. I am a car mechanic.
    It is simple: a relatively small connector carrying high current will fail sooner or later.
    Car manufacturers like to do things as cheaply as possible. So do GPU manufacturers; it is all about maximizing profit.

  • @web1bastler
    @web1bastler 9 месяцев назад +4

    Correction for @der8auer :
    The Table at 7:24 is a derating factor for bundles of wires. There is no derating factor between solid, multiple-solid, sector, stranded or fine stranded.

    • @Henrik_Holst
      @Henrik_Holst 9 месяцев назад

      Yes and no. I'd guess they compensate by making the stranded wire a bit thicker so the rating (e.g. AWG18) is the same for both; the reason being that the air gaps between the strands mean a stranded and a solid cable of exactly the same physical diameter don't have the same effective conductor cross-section. But of course that difference is not as huge as the table der8auer showed.

    • @riba2233
      @riba2233 9 месяцев назад

      Yep he messed up that part

  • @magfal
    @magfal 9 месяцев назад +11

    I would have bought the 4090 if a vendor had made a larger 3x 8-pin or, if needed, 4x 8-pin card.
    If they allowed one of the AIB partners I trust to do so, I'd buy it this week.

  • @quegyboe
    @quegyboe 9 месяцев назад +11

    I'm so glad I settled for an Asus Dual RTX 4070 12GB with the single 8-pin. I bought it purposely because it was the highest RTX 40-series card available with an 8-pin, and I was hearing so much about the 12VHPWR causing problems.

    • @ddd45125
      @ddd45125 8 месяцев назад

      Good for you. The nice thing is you don't need to run 600W to a 4090 when most of the time it is drawing 325-375 watts. Not worried at all.

  • @Immudzen
    @Immudzen 9 месяцев назад +5

    Jayz also found that it is almost exclusively 4090s that have this problem. The 4080 pulls a low enough amount of power that failure is extremely rare: even if you pull the wires the same way, the remaining wires can still handle the load. If you undervolt the card even a tiny bit, you can drop the power a lot with almost no impact on performance.

    • @elmalloc
      @elmalloc 9 месяцев назад

      the latter sentence pertains to 4080 or 4090?

    • @Immudzen
      @Immudzen 9 месяцев назад

      @@elmalloc 4080

    • @PabloB888
      @PabloB888 4 месяца назад

      I have also read about connectors melting on the RTX4080. It's definitely not as common as the RTX4090.

  • @talha7408
    @talha7408 9 месяцев назад +12

    Huge respect for you, bro. Even after Gamers Nexus's research, Nvidia just keeps leaning on "it's user error!", as if all those users must simply have done it wrong. And that Far Cry clip was in hilariously the right place.

  • @peterwroberts
    @peterwroberts 9 месяцев назад +4

    Hey Roman, just wanted to share that in English, at least in the UK, we say "gauge" like it rhymes with "cage".

    • @der8auer-en
      @der8auer-en  9 месяцев назад +2

      Thanks :) will try to remember

  • @FrozenThai
    @FrozenThai 9 месяцев назад +9

    I had thought 12VHPWR would have some backwards compatibility, but its power contacts are literally smaller. So it's weird that they locked themselves to this form factor if they were going to use a new connector anyway.

  • @__aceofspades
    @__aceofspades 9 месяцев назад +20

    Nvidia should have immediately recalled the RTX 4000 series and gone back to 8-pin. It's insane that they are still using a plug that is a fire hazard. Also, shame on GN and Nvidia for downplaying the issue.

    • @Tuhar
      @Tuhar 9 месяцев назад +4

      PCI-SIG, the standards group that developed the 12VHPWR connector, should be the ones under scrutiny. Imagine if your sole responsibility were to develop new PC standards, and this is what you came up with? Where is your head (lodged fairly far up somewhere unpleasant, I'd guess...)? Nvidia was stupid to adopt it for their highest-power-draw card ever without more testing, but really I think the failure lies with the group solely responsible for coming up with new standards.
      Never doubt the ability of idiots to infiltrate high-end positions. Idiots are everywhere.

    • @arch1107
      @arch1107 9 месяцев назад +3

      GN?
      Are you blaming Steve for the melting GPUs he bought to show that there was a problem?
      What reality do you live in? Your own, I think.

    • @rFey
      @rFey 9 месяцев назад

      @@arch1107 He's blaming him for downplaying it. Downplaying the fact that a connector that is slightly not plugged in all the way (which, yes, is user error) could burn your house down. This is definitely on Nvidia and the makers of the 12VHPWR connector for allowing this to ship to customers. This was never a problem with 8-pins, because years and years of development went into making it a relatively safe connector, even if you, for example, don't plug it in all the way ;)

    • @arch1107
      @arch1107 9 месяцев назад +1

      The conclusion there was that they could easily replicate the melting cables that way, but he mentioned it was not the only possible way; they could also cause it just by putting the connector at an angle, and you had a melted GPU power connector. I think none of you watched the entire video before deciding that Gamers Nexus had both the investigative job and the authority to decide who is guilty, when the problem and the solution were entirely in the hands of the company selling this crap.
      Gamers Nexus reported and demonstrated that it could indeed happen, and how easily it could happen; they had other theories too, like poor quality cables and poor quality connectors, among others.
      You are making declarations without knowing what happened, and trying to blame the reporter for what disgusting Nvidia did @@rFey

    • @rFey
      @rFey 9 месяцев назад

      @@arch1107 Bruh, I'm blaming Gamers Nexus for giving people the idea that it's purely user error, which in turn makes it easy for people like Nvidia's PR team to spin it into something less bad than it is.
      I'm not blaming him for the controversy as a whole, only for being part of the damage control afterwards.

  • @blackdragonx1186
    @blackdragonx1186 9 месяцев назад +7

    When I first saw this cable being speculated about, I knew it would become a problem. Smaller pins are never the way to go with increased current draw. It's clear Nvidia was going for a certain aesthetic at the cost of reliability. I run RC cars, and I realized many, many years ago to go with the biggest connector possible to minimize loss as heat.
    I'm currently running a 6900XT. I did some pretty hard overclocking where I let it draw as much power as it wanted, since it's water-cooled, and for benchmark runs it was fine. I let Fire Strike loop for over an hour, and while the GPU temps were right on the edge of acceptable (drawing over 400W), my cheap Amazon 180-degree connectors never went above the case's ambient temperature; I used a FLIR camera to verify it. It uses 3x 8-pin connectors, which, as you stated, are definitely the way to go. Yes, they are larger and a bit more bulky, but that really shouldn't bother people; it's just more surface area for cool-looking cables, kind of like the patterns you can create on the 24-pin connector.

  • @stavrevk
    @stavrevk 9 месяцев назад +4

    You might want to ask yourself what they were using more than 10 years ago to power 500W cards like the AMD R9 295X2 and the Nvidia GTX 295 ASUS MARS. Well, guess what: just two 8-pins, and it was more than enough. So why would any current 600W card need more than three 6-pin (or 8-pin) connectors, let alone the ridiculous 12VHPWR?

    • @Born_Stellar
      @Born_Stellar 9 месяцев назад +2

      used to draw close to 750w-800w on an overclocked 295x2.

  • @AMMO_x
    @AMMO_x 9 месяцев назад +33

    Great video, and I hope your influence reaches Nvidia and their partners so they recall all 4090s and provide owners with an updated version! Nobody wants someone's house to burn down!

    • @Skobeloff...
      @Skobeloff... 9 месяцев назад +8

      There is little reason for Nvidia to care when so many fanboys just blame the users.

    • @TheRisingMiles
      @TheRisingMiles 9 месяцев назад

      Facts

  • @sublime2craig
    @sublime2craig 9 месяцев назад +3

    I have been saying this since the beginning: people on Reddit etc. are roasting CableMod for their 12VHPWR connectors as if they made and designed said connector. People need to put the blame where it belongs, and that's Nvidia and Nvidia only...

  • @Kelekona_808
    @Kelekona_808 9 месяцев назад +7

    A better connector only addresses the symptom, which is these GPUs needing a ridiculous amount of power. I think we also have to look at making cards more power-efficient, so they don't have to have so much power pushed through them to reach improved performance levels.

  • @greggreg2458
    @greggreg2458 9 месяцев назад +7

    I've developed the habit of checking the temperature of my 12VHPWR connector from time to time... This shouldn't be necessary.

  • @mrmrgaming
    @mrmrgaming 9 месяцев назад +7

    There is also something odd with the cables. I used a CableMod cable (after my adapter melted) on its own and, because I was worried about it, attached a temp probe to the connector. I played Cyberpunk maxed out and saw the probe hit 61-64°C at most. Much of that was case temperature and heat from the GPU fan exhaust, but it gave me a baseline to work from.
    I changed my PSU a month later to an Asus ATX 3.0 unit and used the cable that came with it. I set everything up the same, placed the probe the same, and ran the same Cyberpunk save to compare. The probe hit 51-53°C max... 10°C cooler.
    That seems to show that the pin type, cable thickness, or plastic used by CableMod differs from Asus's.

    • @Born_Stellar
      @Born_Stellar 9 месяцев назад +1

      And neither of those temps is something I'm comfortable with my cables reaching. 50°C can burn you, and I know you aren't going to be touching it, but imagine if a plug in your wall were getting over 50°C.

    • @Born_Stellar
      @Born_Stellar 9 месяцев назад

      @@innopriest lol not dangerous. but I would be concerned if I picked up a cable that was 50C. I also watercool everything with a ton of rads, I just like keeping stuff cool.

    • @mrmrgaming
      @mrmrgaming 9 месяцев назад

      @@Born_Stellar As I mentioned, most of that is from the case and the heat the card throws out. Idle, it's around 30°C.
      It's a physical temp probe, so it picks up all the heat, but it's still a way to monitor changes as long as I have a max baseline, which was Cyberpunk but is now Avatar.

    • @mrmrgaming
      @mrmrgaming 9 месяцев назад

      @@innopriest The probe is a physical one, so it reads all heat: case, GPU fans and so on. It is 30°C idle and has hit 53/54°C with Avatar, but most of that is from everything else rather than the cable. I have a baseline from that, as Avatar maxed out now draws more than Cyberpunk did. I have an alarm set, so if it goes over 58°C, I know something is wrong.
      The only time I need to redo this is if a game comes out that pulls more power than Avatar, or if we have a good/hot summer.

  • @cloudcultdev
    @cloudcultdev 9 месяцев назад +13

    Best explanation of this issue I've seen, at least from everything I've read/watched so far. Guaranteed the 5000-series cards are already lined up and will have these connectors as well. Can you imagine how bad things will get before Nvidia admits there's an issue? And still, users will buy them expecting to be one of the lucky ones... 🤷🏻‍♂️

  • @noname-gp6hk
    @noname-gp6hk 9 месяцев назад +2

    Server GPUs are already moving away from 12V to 54V power. The big socketed GPUs are already pushing 750W, and there are public discussions preparing the industry for 3,000W per GPU within the next couple of generations. We won't be getting these mega-powered GPUs at the desktop; for one, standard US wall power can't support that kind of power level. But we are already running into power delivery issues with big consumer cards at 12V. I wonder if it's time for ATX to start working on 54V desktop power to reduce amperage across connectors.
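
The amperage gap behind the comment above can be sketched quickly. A rough illustration (the 9.5 A per-contact figure is an assumed rating, as discussed elsewhere in the thread, not a value from any particular datasheet):

```python
import math

# How many supply contacts the power levels mentioned above would need
# at 12 V vs 54 V, assuming ~9.5 A per contact (an illustrative rating).
PIN_A = 9.5

for watts in (750.0, 3000.0):
    for volts in (12.0, 54.0):
        amps = watts / volts
        pins = math.ceil(amps / PIN_A)
        print(f"{watts:6.0f} W @ {volts:2.0f} V: {amps:6.1f} A -> needs >= {pins} supply pins")
```

At 12 V, a 3,000 W GPU would need 250 A, dozens of contacts; at 54 V the same power fits in roughly the contact count of today's connectors.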

  • @AdamBrackney
    @AdamBrackney 9 месяцев назад +2

    The CableMod adapter was absolute trash. When mine showed up, I inspected it and immediately threw it in the trash. I'm currently using the WireView, and it's very well built; I have no concerns at all. I regularly push 550-600 watts through it for hours, no issues.

  • @pt0x
    @pt0x 9 месяцев назад +9

    Great to see that you are not scared to say what you think. Happy new year and keep it up in 2024 Roman!
    I 100% agree. I have a 4090 strix and the luxury of having it vertically mounted and watercooled. All in a lian li 011d XL. And still every time I open up my case I triple check the cable. Im never going to get comfortable with it.
    I wish I could post photo's of how I mounted the card and routed my cable just to show how straight up paranoid I got and still am about it.

    • @anorax001
      @anorax001 9 месяцев назад

      I'm also using the vertical mount adapter in my O11D XL with my 4080. I have the 12V cable feeding directly up and over the 24-pin motherboard cable, which supports it vertically and keeps it straight. I still check the connector every time I open the case to work on something.

  • @RoyaltyInTraining.
    @RoyaltyInTraining. 9 месяцев назад +7

    At this point, it would be easier to just introduce a new power supply standard with a higher voltage.

    • @THeBoZZHoGG
      @THeBoZZHoGG 9 месяцев назад

      The problem there is that the voltage would then have to be bucked back down to 12V by the device. You're not wrong, but it isn't a simple fix; it would mean an entire redesign of devices. Perhaps that is what's needed for the future. It's not like this amount of power hasn't been done in 12V systems before, though; car audio comes to mind, where 1000W systems are considered a casual load. If you look into that world, you see beefy connections and wires that are over-engineered for what they're doing.

  • @datriaxsondor590
    @datriaxsondor590 8 месяцев назад +3

    Thankfully, mine has held up so far, though I'm not pumping 400W through it either. I tend to go for stability over "max performance". The highest draw I've seen from my card is about 320W, but in most games I play it typically averages between 180 and 240W.
    I just hope I can get through this generation without any 12VHPWR problems, and I definitely won't be buying another card with this connector on it.

  • @JTTTTT850
    @JTTTTT850 7 месяцев назад +2

    The biggest problem is selling a 600-watt card for playing video games. Instead of improving their actual silicon, they're just pumping in more power and using software tricks to make the same generation outperform the last one, even though the hardware itself is basically identical, all while charging more and more money. If you spend $2,000 and burn 600W on a video game component, you need your head checked and some education on the power grid; the days of "unlimited" electricity are over, and there is not enough to go around. I'm usually not for government regulation, but they need to ban cards over 400W, and even that is ridiculous for fucking video games. The 4070 has literally the same number of CUDA cores as the 3070. The 4060 Ti has FEWER CUDA cores than the 3060 Ti. The 4090 only outperforms the 3090 Ti because it's fucking 600W vs 450W; that is more than my entire fucking system with a 5800X and a 3070. This generation is an absolute fucking scam, and Nvidia employees should be in jail.

  • @Rushifell
    @Rushifell 9 месяцев назад +5

    When I was building, the case-width issue was a nightmare, and I ended up with a case that was exorbitantly expensive and much bigger than I wanted just to accommodate it at all. I ended up with a MODDIY 90-degree adapter, which has thankfully been fine, but this build was the worst time I've had building a PC in 30 years of builds. The cable spec should have been designed with 90 degrees in mind; it clearly didn't take any aspect of reality into account, very much a works-on-paper-and-in-our-lab design.

  • @GiGaSzS
    @GiGaSzS 9 месяцев назад +5

    Thank you for comparing the connectors' datasheet specifications!
    There is only one thing I do not agree with: they need to completely discontinue this small, fragile 12-pin connector.
    Even if you use two of them on a beefy card, you will not fix the fragility, because of the short pins and the high torque caused by so many cables plugged into one small connector!

  • @notwhatitwasbefore
    @notwhatitwasbefore 9 месяцев назад +9

    I really hope some GPUs next gen don't have the 12VHPWR connector, because at this point I consider it a negative selling point and would much prefer the existing 8-pin and 6-pin connectors.
    I'm not planning on a 5090 anyway, as it's most likely going to cost way too much (and I like to actually play games, not overheat sitting in a chair). But when choosing between the options that fit my use case, I will pick the card without 12VHPWR connectors, even if that means going down a performance tier. If there are no problems over the next 2-3 generations, then fine, it will have proven itself safe; but at the moment I, as a consumer, do not consider cards using the connector safe, and I'm waiting for the big news story of a tragic house fire. You would think that association between "4090" and "house fire" would be enough for Nvidia to want to change something.

    • @Djinnerator
      @Djinnerator 9 months ago +1

      It's funny because the 3090 Ti doesn't have any of these issues, yet it uses the exact same connector with the exact same power draw. Makes you wonder if it's really the connector, as opposed to the adapters, which have been the same in every single melting case...
      RTX 30 introduced the connector: no issues, and more power draw in general.

  • @realtimeblog
    @realtimeblog 9 months ago +14

    I upgraded to an RTX 4070, and one of the reasons in favor of choosing it was the single old 8-pin PCIe power connector. Shame on Nvidia; they shrank the market, and every 4000-series graphics card model has disadvantages.

    • @valrond
      @valrond 9 months ago

      Shame on Nvidia, but I still buy their cards. Even though at that price point the 7800 XT is better. Oh well, that's why we are here. Nvidia can do whatever they want; the stupid PC gamer will still buy their shit.

  • @grev.
    @grev. 9 months ago +1

    13:37 I cannot believe how many people were blaming the consumer ("they weren't plugging it in correctly") during the original 4090 12VHPWR scandal. I think this definitively settles that it's just a terribly designed connector.

  • @MikenFox.
    @MikenFox. 9 months ago +1

    The video did not cover the issue of the common bus to which all pins of the connector are connected. As a result, if the resistances of the cables/connectors are not equal (e.g., some cables have not been used, and when you upgrade a graphics card, both the old and new cables are connected together), the current will be skewed and overheating will therefore occur, according to the laws of physics.
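
The skew described in the comment above falls out of Ohm's law for parallel paths: every pin sees the same voltage drop, so current divides inversely with resistance. A minimal sketch; the 5 mΩ / 10 mΩ contact resistances are assumed, illustrative values, not measurements:

```python
# Sketch: how unequal contact resistance skews current across parallel pins.

def branch_currents(total_current, resistances):
    """Split a total current across parallel resistive paths.

    Each branch sees the same voltage drop V, so I_k = V / R_k,
    where V = I_total / sum(1/R_k).
    """
    conductance = sum(1.0 / r for r in resistances)
    v_drop = total_current / conductance
    return [v_drop / r for r in resistances]

# Six +12V pins carrying 50 A total (600 W / 12 V): five good contacts
# at 5 mOhm, one worn/oxidized contact at 10 mOhm (assumed values).
currents = branch_currents(50.0, [0.005] * 5 + [0.010])
print([round(i, 2) for i in currents])
# -> [9.09, 9.09, 9.09, 9.09, 9.09, 4.55]
```

Note that the good pins end up above the ~8.5 A per-pin rating even though the total load is nominal, and contact heating scales with I²R on top of that.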

    • @PainterVierax
      @PainterVierax 9 months ago

      Nvidia has already implemented, for several generations now, a power-balancing circuit between the PCIe slot and the extra connectors. No idea if AMD or Intel do the same thing nowadays, but that was not the case with Vega and earlier.
      Though, as long as the card's power draw isn't too close to the max rating of the two connectors, this should be fine, as the current will balance itself through resistance, just like the 3 or 4 wires on the same connector already do.

  • @CalgarGTX
    @CalgarGTX 9 months ago +8

    When I was learning how to size electrical circuits, we always had to apply a 1.6x safety factor to all our results, because that's how the industry makes sure most edge cases are taken care of and sht doesn't burn down all the time.
    Making a cable that, brand new, can barely push more than 600 watts is quite insane. Of course, so is making GPUs that can pull that much power in the first place.
    Any deviation in manufacturing (remember, all this sht is made in China by the lowest bidder), extra twists and bends, or less-than-perfect connectors, and you are easily out of spec in the real world.
    Not sure where this fantasy of needing it to be 12-pin came from; we have had multiple-to-single cables for decades without any issues.
    If anything, they could have gone for a 16-pin cable, but I guess it would start getting too rigid for the 'muh cable management' crowd.

    • @DigitalJedi
      @DigitalJedi 9 months ago +1

      They could've also just used the same-sized contacts as the 8-pin. Since the 4.2mm Mini-Fit contacts are rated at 13A each, they could have used 10 power contacts for a 130A max, and all 12 would give a 156A max at 20°C.
      600W at 12V is 50A, so they would have a safety factor of at least 2.6x and at most 3.12x. 12x 4.2mm contacts would be exactly 2x the 8-pin, which only uses 6 of its 8 pins to carry 12V power and return current.
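
The arithmetic above can be sketched in a few lines. The 13 A Mini-Fit rating and the 600 W load come from the comment; the ~9.5 A per-contact figure for the 12VHPWR terminals is an assumption added for comparison:

```python
# Safety factor = connector current capacity / actual load current.
def safety_factor(power_pins, amps_per_pin, load_watts, volts=12.0):
    return (power_pins * amps_per_pin) / (load_watts / volts)

# Hypothetical Mini-Fit based connector from the comment above:
print(round(safety_factor(10, 13.0, 600), 2))  # -> 2.6
print(round(safety_factor(12, 13.0, 600), 2))  # -> 3.12
# 12VHPWR: 6 x +12V contacts at ~9.5 A each (assumed rating), 600 W:
print(round(safety_factor(6, 9.5, 600), 2))    # -> 1.14
```

Either Mini-Fit layout would put the connector back above the ~1.6x industrial safety factor mentioned elsewhere in this thread; the 12VHPWR figure barely clears 1.1x.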

  • @nizzen2
    @nizzen2 9 months ago +7

    My 4090 HOF has 2x cables :D No failure yet, even though I've been running a 1000W XOC BIOS for months...

    • @der8auer-en
      @der8auer-en  9 months ago +4

      Very well done by Galax :)

  • @tylerlloydboone
    @tylerlloydboone 9 months ago +19

    Maybe having a GPU that draws over 500 watts is the problem. Efficiency has taken a back seat.

    • @stuartfury3390
      @stuartfury3390 9 months ago +1

      A 3060 will be a great card for you!

    • @deancameronkaiser
      @deancameronkaiser 9 months ago

      Undervolting doesn't occur to some people, I see, but hey, when in doubt, read the manual.

    • @Khloya69
      @Khloya69 9 months ago

      The 4090 is a very efficient GPU; it's just that we're reaching the limits of silicon, so you have to push more power to see real performance gains.

  • @dil6969
    @dil6969 9 months ago +2

    A safety factor of 1.1 on a 4090, per the PCI-SIG specs, is absolutely insane. Even unmanned spacecraft and launch vehicles that are strictly weight-limited have a safety factor of at least 1.25. It's unheard of for any terrestrial piece of engineering to operate so close to its rated maximum under normal conditions. It's fair to assume that a lot of people running the old 6- and 8-pin connectors likely have a connector that isn't fully seated, yet incidents of those failing are exceedingly rare compared to 12VHPWR. I have no doubt the huge safety factor has contributed to their incredibly low failure rate.

    • @ddd45125
      @ddd45125 8 months ago

      Is 600 watts a normal load for a 4090? 😂😂 It's literally barely half that.

  • @Blazerdoom169
    @Blazerdoom169 9 months ago +1

    When the 40 series came out, I stated that the plug is not substantial enough to handle that wattage, with photos of comparable board failures from the automotive lighting industry. I was massively downvoted. One of the manufacturers uses the same 12-pin connectors, and they stated that without a heat sink to counteract aging connector resistance, the connector will not handle more than ~350 watts at "atmospheric temperature". Being inside a computer chassis lowers that threshold substantially. Hundreds of units failed after about 6 months, and I had to rewire all of them by hand to spread the load out across 24 pins. This issue will always be lurking around these cards, only getting more common as the years go on and oxidation takes its toll on the contact surfaces.

  • @EMU1
    @EMU1 9 months ago +7

    I have been skeptical about the connector since it was announced. So much power through such a small connector makes a lot of heat in one location. With multiple 8-pin, or multiple 12VHPWR, connectors, the heat at each connector is reduced, reducing resistance and power loss. Low-end cards can get away with one, but I think anything pulling over 400W should have 2 connectors.
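
The per-connector relief from splitting the load follows from I²R heating: halve the current through each contact and the resistive heat in it drops to a quarter. A rough sketch, where the 5 mΩ contact resistance is an assumed, illustrative value:

```python
# Resistive heating in one contact: P = I^2 * R.
def contact_heat_w(current_a, contact_res_ohm=0.005):  # 5 mOhm assumed
    return current_a ** 2 * contact_res_ohm

# 600 W at 12 V = 50 A. One connector spreads it over 6 +12V pins,
# two connectors over 12.
print(round(contact_heat_w(50 / 6), 3))   # one connector  -> 0.347 W/contact
print(round(contact_heat_w(50 / 12), 3))  # two connectors -> 0.087 W/contact
```

The total dissipated power also halves, but the key point is that each individual contact runs at a quarter of the heat, and that heat is spread over twice the area.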

    • @Djinnerator
      @Djinnerator 9 months ago

      When do you think it was announced? 12VHPWR has been out since RTX 30, and those cards draw more power overall than RTX 40. Both series' flagships have the same rated power draw: 450W. Why have there been no reported issues with the 3090 Ti, or RTX 30 in general, with 12VHPWR?
      The melting has occurred with two specific adapters, yet people are putting the blame on the connector itself. No one can explain why the issue didn't exist with RTX 30, but now they say it's the connector when it comes to RTX 40... which has lower power draw overall and the same power draw at the top.

    • @EMU1
      @EMU1 9 months ago +1

      @@Djinnerator I am well aware that the connector was introduced in the 30 series; my brother's 3090 Ti has it, and I didn't like it then and don't like it now.
      In my other hobbies, mainly RC racing, I spend a lot of time with various connectors by JST, Molex, Deans... and other variants. I always look at datasheets and want at minimum 2x the rating of the amperage that I will put through the connector. Overkill, yes, but it makes a difference even at smaller scales, and it makes a difference with these GPUs.
      I don't personally own any GPUs that use the connector, and don't plan to unless absolutely necessary. I don't think it's an upgrade over what we currently have, as it increases the thermal density of the connector and probably increases resistance due to fewer pins carrying the load.

    • @Djinnerator
      @Djinnerator 9 months ago +1

      @@EMU1 Yeah... 2x headroom is pretty overkill lol, but it does give you a lot of wiggle room in case anything happens.
      Not liking the connector is perfectly valid and fine. I'm still on the fence about it. I like that it reduced the cables needed to power my GPUs down to one, and when you have a multi-GPU setup, where each card would take 3x 8-pin connectors, that would've been 4-6x PCIe cables /just/ for the GPUs. Cabling would've been a nightmare. But like you mention, it can be questionable once you get to high-wattage devices. Before this video, I made many comments here, on Twitter, Reddit, etc. about 8-pin cables being 16 or 18AWG, and about how two cable bundles could easily power a 4090 (really just one), but of course people argued that wasn't true... sigh...
      But regardless of that, the hungriest GPU sits at 75% of the rated power delivery, 68% of the conservative max rated wattage of the cables/pins. While it's good for devices to be rated lower than the supply's max rating, I think it's a bit disingenuous to use the max ratings of two different cables as a metric of cable quality, in terms of the cable being good or bad. If the device connected to the cable doesn't exceed the rated power delivery, then the max rated power delivery is mostly irrelevant. The cable has never been marketed as supplying more than 600W, and if we go by the max rating of the cable, then the cable is doing exactly what it is marketed to do. In that context, given that there have been no issues with RTX 30, which uses the exact same cables and connector (in the case of the 3090 Ti), it makes no sense to put the fault on the connector for the RTX 40 melting issues.

  • @TheTardis157
    @TheTardis157 9 months ago +27

    This connector is one of the reasons I currently avoid Nvidia cards. I don't want to risk an easy $1000 card with a poorly designed connector.

    • @AndrewB23
      @AndrewB23 9 months ago +7

      The other reason is that you can't afford it

    • @machinainc5812
      @machinainc5812 9 months ago +2

      There are current-gen cards with the old connectors. I went with a 3090 TUF exactly because of that; that card still uses the older connectors.

    • @Mnorbert25
      @Mnorbert25 9 months ago

      And also the prices they throw at all of us, and the poor quality they give.

    • @testpilot.
      @testpilot. 9 months ago

      Watch out, that one has memory issues @@machinainc5812

    • @everope
      @everope 9 months ago

      4070 has a single 8 pin

  • @anonymoususer7985
    @anonymoususer7985 9 months ago +4

    You know the solution Nvidia will come up with: a dedicated 140mm fan on the 12VHPWR cable and socket.

    • @Qs_Internet_Cafe
      @Qs_Internet_Cafe 9 months ago +1

      Will it have RGB though? Asking the real questions here! /s

    • @anonymoususer7985
      @anonymoususer7985 9 months ago

      @@Qs_Internet_Cafe RGB should give at least -3°C to the cooling of the cable and socket. Guaranteed not to get hot enough to melt.

  • @DrivenKeys
    @DrivenKeys 9 months ago +1

    Thank you for this. Even when fully seated, this connector has caused melting. Repair shops are overwhelmed with these. The problem seems to be what you point out here: even when fully seated, the crimp contacts aren't supported well enough to guarantee proper contact.

  • @LBXZero
    @LBXZero 9 months ago +1

    If we look at something Gamers Nexus did in their experiments on melting 12VHPWR connectors: they cut one plug down so that only 2 +12V and 2 ground pins were making contact, and the RTX 4090 was still drawing full power. We already know that much power split between 2 circuits is well over the safety limit. That connector should have melted, so they should have studied why theirs didn't fail, or they faked it not failing. This leads to a bigger problem of playing with a low safety factor on +12V circuits: load balancing. Back to our laws of physics: current favors the path of least resistance. There is no way to guarantee that each path between the PSU's 12V rail(s) and the RTX 4090's power system has the same overall conductivity, so a route with slightly better conductivity will draw more amps than the other wires.

  • @3800S1
    @3800S1 9 months ago +3

    When I first saw these connectors I thought, yeah, probably a good idea for size constraints, as long as they bump the voltage up to say 36V or similar. But making a physically much smaller connector and trying to pump more current through it is the same thing you do in electronics/electrical engineering class when you are bored with nothing to do: slowly torture electrical things, Photonic Induction style, until they blow up or catch fire. Someone on that engineering team was bored and thought it would be funny; I mean, I would have too if there weren't any legal consequences lol. But as a customer I would be pissed!

  • @SavageOutlaw17
    @SavageOutlaw17 9 months ago +3

    This is a prime example of why I'm slow to adopt new technology sometimes.

  • @ronnyspanneveld8110
    @ronnyspanneveld8110 9 months ago +4

    Finally someone actually said it! The thing I have been saying since this crap came out:
    the 8-pin EPS12V CPU connector is rated for 336 watts maximum.
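
The 336 W figure checks out if you assume the common base Mini-Fit Jr. rating of 7 A per contact (the EPS12V 8-pin carries +12 V on four of its eight contacts); both of those figures are typical datasheet values, not taken from the video:

```python
# EPS12V 8-pin: 4 x +12V contacts, 7 A base rating per contact, at 12 V.
power_contacts, amps_per_contact, volts = 4, 7.0, 12.0
print(power_contacts * amps_per_contact * volts)  # -> 336.0 (watts)
```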

    • @Raivo_K
      @Raivo_K 9 months ago

      Exactly. No need to reinvent the wheel. This connector is already used on workstation cards - even Nvidia's own cards.

  • @David-yx3bd
    @David-yx3bd 9 months ago +2

    Every internet engineer says the same thing, ad nauseam: these are garbage and fire hazards, with a variety of different reasons given for why, depending on which one you look at. Okay, and according to those same sources you mentioned (Reddit, RUclips, etc.), this is happening all the way down to the 4070. Although admittedly most common with the 4090, you've got reported cases down to the 4070, and if you look at mirror sites for deleted posts, there are even claims about 4060 models that don't even use the 12-pin solution; I'm guessing that's why they were deleted.
    Okay, so my question is: why is the failure rate so low? If it's such an obvious point of failure, such bad engineering, and so prone to failure that it has everyone ignoring the fact that 8-pin connectors actually do melt on occasion as well, why aren't we seeing bigger numbers that would place real pressure on Nvidia to, say, recall all 40-series cards and replace the connectors, as some of those same sources you cited are demanding? There are millions of these cards in the wild, and yet even by the most dramatic of sources, the number of affected units isn't even a 1% problem; it's a zero-point-something-percent problem. That's the part I don't get.

  • @system450
    @system450 8 months ago +1

    I have an Asus 4090 TUF with a Seasonic 1000W Prime GX and its 12VHPWR cable. I tested the card for 30 minutes with the 3DMark Speed Way RT test (always more than 420W of consumption), and the temperature of the connector never went above 41 degrees Celsius (the card never went above 63°), even though the hot air that comes out of the heatsink goes right over the connector. The cable runs 3.5 cm out from the card before bending down, and it's fixed to the chassis, so the weight of the cable does not put force on the connector.

  • @ericthedesigner
    @ericthedesigner 9 months ago +10

    I don't understand why the PSU industry flat-out ignores basic volts-to-amps math relative to cable size. For instance, on my electric dirt bike, the wires I run need to meet a minimum thickness or gauge to handle the volts and, more importantly, the amps.

    • @ChannelSho
      @ChannelSho 9 months ago +1

      Reputable power supply companies do care about this sort of thing. If you get something from a reputable manufacturer, they usually use 16 or 18 AWG cables. Those that cheap out usually use more like 20 or 22 AWG.
      Of note, though: voltage matters zero in this regard. The most a PC needs is 12VDC.

    • @robertlee6338
      @robertlee6338 9 months ago +1

      This is not a PSU-maker issue but a GPU-maker issue. Nothing in this video indicated that the cables were under specification; in fact, the cables are built to a higher standard than required.
      It is up to the GPU maker to factor in the safety margin, hence the suggestion that the makers should use two high-power 12V cables.
      Think of it this way: a wire-rope factory makes a cable rated for 1000kg, to specification.
      It is up to the winch manufacturer selling a 950kg winch to decide whether one cable is safe or whether to use two, even though that is bulkier.

    • @rawdez_
      @rawdez_ 9 months ago

      PSUs are fixed at 12V and a fixed total power.

    • @ericthedesigner
      @ericthedesigner 9 months ago

      @@robertlee6338 It is a wire issue. Like I stated, PSUs do come with cables, do they not?