Nvidia RTX 5090s melting power connectors AGAIN!
- Published: 9 Feb 2025
- TL;DR: Nvidia should add extra shunts to monitor the current balance of the connector(like ASUS is doing on their cards).
Patreon: / buildzoid
Twitch(mainly gaming streams): / buildzoid
AHOC Shirts: actually-hardc...
Bandcamp: machineforscre...
My FPS game: buildzoid.itch...
Bluesky: bsky.app/profi...
Facebook: / actuallyhardcoreovercl...
#nvidia #rtx5090
there goes 10% of the 5090 population
That's like a dozen GPUs!
Hahaha😂
"Nvidia warranty is scam!!!!"
@@gonrico Just a user error. Warranty doesn't cover it ¯\_(ツ)_/¯
More like 50%+
Power connector melting is not surprising. Someone actually managing to find a 5090 card, that's the amazing part.
Probably bought off a scalper so they're SOL with warranty
@@austinsloop9774 No, the guy bought it from Best Buy
Nope, the amazing part is that the 5090 FE owner didn't bother to use the original power connector and kept using an old, worn 4090 cable.
I'm sure Nvidia won't do anything with the card; it won't go under warranty.
But overall it's another design flaw from Nvidia cheaping out, choosing compromise over quality.
@@austinsloop9774 heya! I'm the guy who smoked the 5090FE in the video.
I managed to buy it off Nvidia website in EU. Hence I have all the invoices etc. Not SOL.
I was closely monitoring the drops and managed to snag the card last Wednesday morning. I was actually about to let it go and stay on my 4090 FE until the 60 series, but since I lucked out and managed to actually buy the card, I decided to go through with the upgrade. WEEEEELLL :)
The 5090 is actually fully fine. The connector isn't tho.
@@supremeboy 3rd party cable
3000$ for the beta testing 💪
Nvidia should offer 30% off fire insurance to every 5090 buyer as a package deal. That's the only way I see it working out with this connector.
beta testers lol. spot on. at some point this is going to kill people. just waiting for the lawsuit.
If you had a decent job you wouldn't worry about what other people do with their money.
@alifahran8033 not if you use the correct power cable....if you go and use some other cable that isnt recommend...thats your fault.
$3,000 is a very small price to pay for beta testing a 5090. Get a better job instead of cleaning tables at a fast food joint, or check yourself 😅
This connector should warrant a trade commission recall and lawsuit. This is clearly a hazardous product. That the trade commission is allowing a fire hazard to be sold is pathetic.
$$$$
FTC recall and lawsuit? Not sure the FTC still exists, or probably won't much longer.
Little chance of winning a lawsuit against a monopoly
The lawsuit could be detrimental to Nvidia, the GPU brand, FPS worship, and the US government. Imagine the sadness of gamers if they found their favorite GPU maker bankrupt and unable to release GPUs every year. So accept your fate as an NVIDIA slave.
@@arik2216 LOl you have lost your mind if you think for one second that if this went down that NVidia would go bankrupt. That is a straw man to push your narrative.
This idiot was using a shitty third-party, years-old cable. He got the adapter in the box and a direct cable from his new PSU but kept using his crappy old cable from MODDIY. Probably installed it badly as well.
8 pin: works
Big tech: lets change it to something else!!
12 pin: causes trouble
Big tech: keep using it!!!
8 pin (x3) actually spreads the heat and power evenly, thus preventing overheating. Why won't a big tech company like Nvidia understand this? What's wrong with using 8 pin x3? LMAO
More reason I went with AMD. 3 to 4 sets of 8 pins i am fine with
@@CGFUN829 The more you buy, the more you save. ;)
@@CGFUN829 they understand, they just HAVE TO create proprietary junk.
it's like an itch for them.
physx, dlss, power connector.
if physx were non-proprietary, like FSR, it would have survived till now.
it's typical nvidia style
@@CGFUN829 It would need 4x 8 pin pcie for 600 watt delivery. Which would look badass with my sleeved cables.
Der8auer said in one of his videos that as long as they keep using this connector there are going to be melting problems.
The takeaway being they are just.. 💩
This person should have upgraded to the new cable instead of continuing to use the old design with the known issue.
@@ScoobyJoobyJew The cables are the same, only the connectors on the GPU and PSU side are different, obviously assuming the PSU is using the 12v-2x6 and not a proprietary pinout on that side.
@@ScoobyJoobyJew Cables are exactly the same. They changed the connector on the GPU.
@@ScoobyJoobyJew Yes, blame the consumer. New low, good job. Don't you think a $2000 GPU should have some kind of foolproofing to deal with this kind of common issue?
@@adilazimdegilx But the consumer used an aftermarket cable in the first place, not the original cable. That is the culprit.
It's wild to me as an electrician that the computer industry doesn't seem to grasp the concept of a true purpose-engineered high-current connector.
PCIe connectors are a flimsy joke, and 12VHPWR is even worse since the contact pins are even smaller and they send even more current through them.
I wish something like XT60 or XT90 plugs were the standard. MUCH safer and a crazy rated current. Things like current imbalance wouldn't be a thing anymore since those only have
one large + and one - pin.
I can wholeheartedly say: the current "high current" connectors such as 6 and 8 pin PCIe, 4+4 pin CPU power and the 12VHPWR are JANKY AF compared to industry-proven low-voltage high-current connectors.
At least AMD sticks to the PCIe connector for now, and a discounted 7900 XT is looking really good right now, especially with another price drop when the new cards launch.
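The per-pin arithmetic behind that complaint is easy to check. A minimal sketch in Python, assuming the nominal spec figures (8-pin PCIe: 150 W over 3 live 12 V pins; 12VHPWR: 600 W over 6 live 12 V pins) and an even current split, which a worn connector does not guarantee:

```python
# Rough per-pin current comparison: 8-pin PCIe vs 12VHPWR.
# Nominal spec figures assumed; real connectors and splits vary.

def amps_per_pin(watts, volts, live_pins):
    """Total current divided evenly across the live (12 V) pins."""
    return watts / volts / live_pins

# 8-pin PCIe: 150 W over 3 live 12 V pins -> ~4.2 A per (large) pin
pcie8 = amps_per_pin(150, 12.0, 3)
# 12VHPWR / 12V-2x6: 600 W over 6 live 12 V pins -> ~8.3 A per (smaller) pin
hpwr = amps_per_pin(600, 12.0, 6)

print(f"8-pin PCIe: {pcie8:.2f} A/pin, 12VHPWR: {hpwr:.2f} A/pin")
```

So each 12VHPWR pin carries roughly double the current of an 8-pin PCIe pin through a smaller contact, before accounting for any imbalance.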
Unbelievable that Nvidia is still pushing this flawed connector.
JayzTwoCents did a video on one of the 5090s drawing more than 600 watts of power, which is above the limit specification of the connector cable, and it was tripping the alarm on his PMD sensor. It was drawing 625 watts through a cable rated for only 600 lol. This whole 50 series launch is janky AF.
Personally I would de-solder this dumb plug, solder wires in its place, and use some 221-2401 connectors.
I prefer EC3 or EC6 thanks. HAHA but I get what you're saying.
I commented something similar before reading this. It makes too much sense.
@@luminatrixfanfiction Just power limit the GPU to 75% and it will pull 400w max and perform about the same.
GPU repair channels: STONKS
$400 to replace the connector, keep buying these GPUs. 20 mins of work for $400, easy money
Okay.
you're onto something get some game stonk in the company making replacement connectors
And what about when it can't be repaired? One more GPU sold.
You think paper-launch GPUs would satisfy GPU repairers? Or do they bill $4000 per GPU?
I ABSOLUTELY CANNOT believe this is happening again. I mean, who would've thought that pushing 30% more current through a fundamentally unchanged connector design, one that was already at its limits before, would result in more melted connectors? Nobody, and I mean absolutely nobody, could have predicted this.
Yes, it was melting last time... pushing more power actually doesn't sound like a good idea... but people are stubborn sometimes. One of the reasons I would probably never get anything above a 5070 Ti is these flimsy connectors... I don't think they are good enough even for the 5080... I think the 5090 probably needs 2 of them.
@@Slav4o911 I've been running my 4090 at 60% maximum and keeping my fingers crossed. If it ever burns out I'll probably be done with nvidia forever and just get something like a Strix Halo board.
Here we go again...
Wait for Steve to say "user error"
To be fair, it is usually user error. That's why it should be built in a way that kids wouldn't have a problem with it.
@kosajk The cable wire gauge is a joke for the high wattage it's pulling. The connector is a poor design. Cable requirements are all over the place. For what these cards cost, the cables should be overbuilt.
@@kosajk it is not user error if the cable can be plugged in incorrectly. It is a design error.
@@666Necropsy Well, I do think the design is bad, but it's still user error if you don't plug it in correctly, especially when the issue is well known.
We need to stop blaming this on user error. This has been shown time and time again that this happens with stock cables that have been inserted properly. We had individual repair shops getting thousands of damaged connectors a year.
This is a design flaw and we need to stop letting Nvidia or any creator bought out by Nvidia gaslight us. With any other brand, there would have been a class action lawsuit by now.
Yeah, if the connector melts from getting worn out by plugging in a handful of times or walking itself out while the locking tab is still engaged, that's definitely a design fault. These people would probably call the Cybertruck accelerator pedal sliding off and wedging itself to 100% user error too.
Then show proof that it is not user error. For the 4090 this was 50 out of 125,000 units, a 0.04% failure rate. I own an early 4090 and didn't see anything in 2 years. Gigabyte's PSUs that exploded had like a 60-80% failure rate when you met the right conditions. That's literally 2000x more. Pretty sure plugging your stock cable in all the way and listening for a click is not a hard requirement to meet. This was already proven countless times by Gamers Nexus, other youtubers, and Nvidia 2 years ago...
Those edgy power connector cables have always hurt my fingers, and I never knew for sure whether it was plugged in correctly or not. This was already a problem long before the 4090 even arrived. Now those power cables get bulkier the more power they need. Can't they simply create a PCI bus that provides all the power a GPU needs? It's definitely a design flaw and a stupid bottleneck that always poses a risk of bricking your GPU, or even worse, setting your home on fire.
@@w04h So who published that rate? Let me guess: Nvidia. Why should we believe them? Intel received reports for 2 years that its chips were burning out and lied about it. Companies can give misleading information. Nvidia stuck a slower memory module into the GTX 970's VRAM just for a few dollars of profit, and it only came out when the VRAM ran short. That was Nvidia blatantly lying. And most recently they announced the 5070 is equal to the 4090, with "with MFG" written on screen in tiny letters.
My friend's 4090 burned and they replaced it, but he spent 3 months trying to prove it, time he should have spent gaming. The device in your hands working doesn't mean others aren't living through this. How many Samsung Galaxys exploded? Maybe 10, out of a million units sold, yet they recalled all of them because of the fire risk. Now imagine a fire starting from your PC, or it damaging your entire electrical system, and then be sure of your life and property.
@@w04h We have singular shops that have had thousands of failures brought to them. Look at NorthridgeFix as an example. 40 units total? You are pulling numbers out of nowhere. The question is, are you a bot, a simp, or otherwise paid by Nvidia? Don't even respond, because your ridiculous claim shows that you are commenting in bad faith.
These connectors are the reason why I will never buy an NVIDIA card again.
why? they work fine
Nvidia does what AMDon't.
@@SD-vp5vo Yes, we can see in the video how fine they work; they burn great!
I don't understand what was wrong with the 8 pin PCIe power connectors. Using a high-resistance connector with tiny terminals is not the way to solve the problem.
Think different! Ah, my bad, wrong company.
Because Nvidia thought it looked bad on their stupid FE models. So they decided to make it "sleek". Everyone else followed suit.
The problem was that you need 4 of them for 600W.
@@TheBlackIdentety But even my rm750x comes with 4x 6+2 pin
It's because of the real estate the connectors take up on the PCB. Plus the new ATX standard for the cables, like he said, has data pins to detect voltage spikes, so overcurrent protection doesn't trip erroneously.
Nvidia… “And watch me do it AGAIN!”
It's even funnier when it happens a second time 😂
TBF, sorry for the guy who got his 2000$ GPU melted. Wonder if Nvidia will blame the cable or user. 😢
No watch dumb consumers buy cheap ass cables and not plug shit in properly lol.
@@stennan user… because if it was caused by something last time I guess it just makes sense to keep following the narrative
@stennan I don't feel bad at all. He should buy AMD next time. Fool me once - shame on you. Fool me twice - shame on me.
didn't expect you here 😂 hey man
I remember EVGA telling Nvidia ages ago that the way to fix this was to add two connectors to split the load. Did they listen? No, and here we are again.
Then EVGA saw the writing on the wall and stopped working with Nvidia. Good call on their part!
iirc it was originally Kingpin at EVGA who suggested this
@@davidice7454 Right, but they're not making AMD or Intel GPUs, either.
Nvidia does what big car companies did before their sales and reputation collapsed.
@@UmbraWeissThe problem is there's actual competition on the car market
In the world of car stereos, we've been supplying 200 amps of 12 volts to 5000-watt units for 40 years by squishing twisted wires with a screw. No problems.
well that would require nvidia to sell a gpu with wires coming out of it and I think they would rather kill themselves
In think Jensen would be sad when he says "do you like my jacket? 😢" Instead of "Do you like my JACKEEEET?😊"
Buddy - can you imagine the amount of fires if people had mains transformer supplied single 60 amp cables in their PCs?
I went from PC building to mechanics and dB drags... which is why I know the old PCIe power ("8 pin") is worse.
18 AWG x 2 ft x 6.25 A @ 12 V = 1.3% Vdrop (old cable)
16 AWG x 2 ft x 8.33 A @ 12 V = 1.1% Vdrop
Most of these guys have no idea what they're on about, and companies have started to call and label it PCIe 2x3 and 12V-2x6 so people can figure out how to do the math.
When used properly, the plug/cable is just about better in every way, and if people want better, you could solder some 4 AWG terminals on in place of the 12V-2x6 plug 💪 But nobody is doing that; they're just complaining that cables they bent to snot and didn't plug in properly are melting. 😊
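Those two Vdrop figures check out if you count the drop over both the supply and return conductors. A sketch, assuming standard copper resistance-per-foot values for 18 and 16 AWG:

```python
# Reproduce the thread's 1.3% / 1.1% voltage-drop figures.
# Assumes round-trip copper resistance (supply + return runs)
# and approximate 20 °C copper ohms-per-foot values.

OHMS_PER_FT = {18: 0.006385, 16: 0.004016}  # approx. solid copper at 20 °C

def vdrop_percent(awg, length_ft, amps, volts=12.0):
    """Percent voltage drop over a supply+return pair, each length_ft long."""
    r = OHMS_PER_FT[awg] * length_ft * 2  # out and back
    return amps * r / volts * 100

print(f"18 AWG, 2 ft, 6.25 A: {vdrop_percent(18, 2, 6.25):.1f}%")  # old cable
print(f"16 AWG, 2 ft, 8.33 A: {vdrop_percent(16, 2, 8.33):.1f}%")  # 12V-2x6 gauge
```

Note this is only the wire drop; the melting failures in the thread are about contact resistance at the connector, which this math doesn't capture.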
@@psychosis7325What?
The terminals are significantly miniaturized, and I am skeptical that the new standard's 16 AWG with smaller terminals is better than 18 AWG with larger terminals. I used 16 AWG when making my custom ATX cables.
Melting connectors on these 16-pin cables is a significant, widespread issue. Who cares if part of the problem is people not seating the connector fully; the connector should be designed with a failsafe in mind! It's definitely more flimsy and temperamental than the rock-solid ATX connectors of the past.
The most insane thing is that they didn't put 2 of those connectors on the 5090. I 100% didn't expect that.
That would be too close to admitting they made a mistake in the first place.
How about, instead of adding more complexity and unnecessary components to a bad connector design, we actually use a better designed connector that actually can handle the current?! Screw terminals would be better at this point!
💯
You mean the 8 pin? kekw
@@HiimAbyss Yes. Unironically three or four of them. Even a 750W PSU comes with four PCIe 8pins for GPUs.
Yeah, but then how would the GPU look like an Apple product? Don't you get it? If the cooler on the 5090 FE were larger (i.e. 3 slots), then the memory wouldn't be running at 95 °C. If they were using 4 8-pin power connectors (which is objectively the proper way to make a 600W card), then how would they force you to buy another one 2 weeks after the first purchase? And don't forget, the GPU repair services need customers too. I just think they should lower the degree of planned obsolescence a little bit.
@@alifahran8033At least Apple is actually good at making power efficient chips.
At this point, we should be using RC hobby grade connectors with thick gauge wiring. Something like XT60 or 90 connectors and be done with it. Fewer points of failure and more than enough amperage handling.
100%
It's multiple cables for safety reasons; the max each can deliver is generally 20A before tripping the overcurrent protection. A single 60A cable supplied from a mains transformer would be asking for fires, and against a bunch of regs.
Wires aren't the problem, it's the 12 V at 50 amps. As far as I know RC uses lower voltage and higher amperage, which would warrant thicker cable.
But 100%, there are previously existing connectors that would work better.
Der8auer showed a live demonstration of 12VHPWR monitoring in a German video on the 5080 Astral, at minute 10:40.
All of this could've been avoided if they made the 4090/5090 with two connectors. When engineering anything, you must make sure it can withstand at least twice the expected load. If the cable isn't rated for 1200W, it shouldn't be used with a 600W card.
Who ever designed that cable is the problem.
Intel & PCI-SIG created it; they also wanted to change the 24-pin ATX, good thing that didn't happen.
I agree
It should be someone very important if they are ready to burn 2 generations of GPUs... just to prove this connector is "safe"... it's not safe, there is no need for more proof. I think they need a more stable connector, not this flimsy thing. Also it has to have a clear click sound, so you can be sure it's actually connected.
wait, did you just discover hot water? this is crazy, we need to show the engineers your comment
@FoxSky-md1ul It's standard industry practice in the PC market. All connectors have a very wide safety margin. This one doesn't even have an 11% margin.
RIP to 40% of the entire 5090 stock in that picture 😢
40% of the entire stock is what, 17 cards worldwide?
I don't think the solution is more shunt resistors, it's to avoid buying GPUs with the 12VHPWR/12V-2x6 connector.
NVIDIA and PCI-SIG need to accept they tried to ignore the laws of physics by replacing 3 or 4 big 8-pin connectors rated correctly but conservatively for the power with a single tiny connector barely capable of the design power in perfect lab conditions.
Exactly. But the guys at Nvidia know they're dealing with gamers, actual addicts. As long as you're on top of the charts you get away with literally everything
@@tarron3237Nice way to put it. If you are into boats and there's one company that makes all the ones you like it's kind of hard to boycott them or punish them.
An extra 175w of power on the worst power connector ever made…didn’t redesign it? Good work.
This problem is not returning; it never went away. NorthridgeFix published a 4090 melted-connector repair video 2 weeks ago, and he said in it that just because he doesn't make a video every day about melted connectors doesn't mean he doesn't receive a GPU with a melted connector every day. Because he does.
and nonstop cracked PCBs.
He is also one of the few people doing these repairs, so him having a high number of them gives zero indication of how widespread the problem is.
@@alexatkin And how many PCIe connectors do you think melt?
user errors, both connector melting and cracked pcbs
One thing I learned from the various reviews of the cards is that on some benchmarks, I believe Cyberpunk 2077, they go up to 675 watts (56A). That's a lot for this failure of a connector.
People who don't have a 12VHPWR card can't imagine how small it is; it's barely larger than a single PCIe 8-pin connector. It's ridiculous.
56A is a lot of current to push through this little flimsy connector. I still can't believe why Nvidia is so stubborn about this...
Right, why didn't they just make a 12-pin connector with standard ATX Molex terminals? Miniaturizing while increasing power significantly makes no sense. Very frustrating to see this happen for years and years with no course correction.
I've heard from a GPU repair guy that 3090s didn't have this issue because they had 3 separate power rails on the board, not 1 rail like the 4090 and 5090. That acted as a sort of protection against excessive power on 1-2 pins when you have a contact issue on one of the pins.
yup 3090Tis treat the 12VHPWR as 3 separate power sources.
Gotta cut costs where you can, can't have too good of a product otherwise you can't give it the planned obsolescence treatment.
Engineers should be taught in school about safety tolerances in design. I remember when I was in school, they taught this standard and would preach a 20% tolerance for safety on load-bearing components. For example, if an elevator cable could theoretically handle a load of 10,000 lbs, they would sell it rated at 8,000 lbs, and if you designed your elevator to hold 10,000 lbs you would spec your cable at 12,000 lbs.
Is that why a shunt modded 3090 FE pulling 600W through the micro-fit 3.0 connector (this wasn't even called 12VHPWR yet) doesn't burn up?
I think Igor's Lab tested that this connector (not talking about the octopus adapter) can pull 660W before things start getting toasty. And that's sourced on the PSU side from 2x 8-pin connectors too.
@@Falkentyne07 The guys at TecLab put IIRC 1200W through a 12VHPWR without melting it. If connector is making good contact it can handle more than 600W. If you get unlucky it will melt at 450W.
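The 20% margin rule from the elevator-cable example above, and the 0.8 derate mentioned elsewhere in the thread, are the same idea from two directions. A quick sketch (the 20% figure is the commenter's convention, not a universal standard):

```python
# Safety-margin math from the elevator-cable example:
# either shave the margin off the rating you sell,
# or spec extra capacity above the load you expect.

def derated_rating(theoretical_max, margin=0.20):
    """Label rating after shaving the safety margin off the top."""
    return theoretical_max * (1 - margin)

def required_capacity(load, margin=0.20):
    """Capacity to spec so the expected load sits inside the margin."""
    return load * (1 + margin)

print(derated_rating(10_000))    # 10,000 lb cable sold as 8,000 lb
print(required_capacity(10_000)) # 10,000 lb elevator -> spec a 12,000 lb cable
print(derated_rating(600))       # a 600 W connector derated 20% -> 480 W usable
```

By that convention a 600 W connector would only ever be asked for 480 W, which matches the 0.8 derate another commenter cites, and is well under what these cards actually pull.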
Isn't it kind of funny: engineers can design a working chip with 90+ billion transistors, but can't make a reliable connector with two contacts, plus and minus, nice and thick and extending deep enough when connected. That's a task for any student who can open an electronics textbook and understand the letters written there.
Nvidia: If it ain't broke, find a way to break it...
How else are you going to buy the next sh1t product if the old one isn't broken.
Was waiting for this. No surprise and totally expected 😂
What a surprise: a 600W-rated cable used on a 575W TDP card, and that's the Founders; other models are usually rated for more, plus you want some room for overclocking. Sure, there is some power from the PCIe slot, but they should have thought about this more... Could have added 1 more 8-pin or something, idk. My MSI PSU comes with a cable that goes into the 12VHPWR on the PSU side but splits into 2x 8-pin, and for 3x 8-pin cards you use 1x 8-pin plus that 12VHPWR-to-2x-8-pin cable. So technically I could have, from my PSU, 1x 12VHPWR and 2x 8-pin, which would be close to 1000W if there were a GPU with enough connectors. PS: I'm saying this because if they did 2x 12VHPWR (what a lot of people are suggesting), they would need new PSUs with those connectors, while most people have 1x 12VHPWR and 2x or 3x 8-pin sitting unused.
I don't get why they didn't just add another 4 pins to the connector; it would hardly be any bigger.
victim blaming xD
you using your hydra connector that came with the MSI card? oh nvm you got the msi psu
@ I used 1x 8-pin and the 12VHPWR that splits into 2x 8-pin for my 7900 XT TUF, which had 3x 8-pin. The PSU is an MSI MPG A1000G PCIE5, and MSI recommends running it like that. My PSU has been fine for almost 3 years now, 0 issues.
@ I just shared my thoughts; I don't even own a GPU with such a connector.
I think the 16-pin HVPWR voltage reading in HWiNFO64 can notify you if there is a problem. If there is too much vdroop on this reading, then your cable isn't properly seated or is degrading. Monitor your voltage at 600W, and when it begins to drop below the norm, set an alarm in HWiNFO64 to notify you. This is what I've been doing, at least.
this is valuable information!
thank you.
Sort of. You really need to measure each pin individually, like ASUS is doing, for that to have meaningful protection, because if you don't, the voltage reading gets averaged out and one bad pin will likely still leave the vdroop in spec.
Seems like something right up GN's domain.
hwinfo won't necessarily show the per pin voltage which is all that matters. Some card vendors have their own software that can show such information but others don't.
Uh, this is fundamentally wrong. If you get an even drop across all pins, then yes, but clearly that is not the problem. Nobody has had a melted connector when all the pins connected properly. That's the very core of the problem....
Any drop difference would be tiny because each pin is shorted to the others; you would need to measure the pins individually for a meaningful result.
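The difference between one aggregate reading and per-pin monitoring (the ASUS-style shunt-per-pin approach the video's TL;DR advocates) can be sketched roughly like this. The thresholds and pin readings are made-up illustrative values, not real telemetry from any card:

```python
# Hypothetical sketch: with a shunt per 12 V pin you can flag a bad
# contact by current imbalance, which an aggregate reading hides.
# Limits and readings below are illustrative assumptions only.

def check_balance(pin_amps, limit_per_pin=9.5, max_imbalance=2.0):
    """Warn if any pin exceeds its limit or strays far from the mean."""
    mean = sum(pin_amps) / len(pin_amps)
    warnings = []
    for i, amps in enumerate(pin_amps):
        if amps > limit_per_pin:
            warnings.append(f"pin {i}: {amps:.1f} A over {limit_per_pin} A limit")
        elif abs(amps - mean) > max_imbalance:
            warnings.append(f"pin {i}: {amps:.1f} A vs mean {mean:.1f} A")
    return warnings

# Evenly loaded ~600 W card: ~8.3 A per pin, nothing to report.
print(check_balance([8.3] * 6))
# One bad contact: total is still ~50 A (looks fine in aggregate),
# but the current has shifted onto the remaining pins.
print(check_balance([0.5, 11.0, 11.0, 9.2, 9.2, 9.1]))
```

Both cases sum to the same ~50 A total, which is exactly why a single shunt on the whole connector can't catch the failure mode people in this thread describe.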
Hasn't Nvidia actually made the problem worse on the FE as well? Since they changed the orientation of the connector, that has 2 major consequences.
1. The pins furthest from the PCB will have slightly higher resistance just from the extra distance.
2. Those same pins now also have a harder time sinking residual heat through the pins into the PCB, which means they run hotter, increasing resistance even more.
1. is literally nothing. The difference is microscopic.
@@ПётрБ-с2ц Resistance-wise it is rather minor, yes, but it is also a question of the thermal conductivity of the pins themselves, and the area of PCB you sink that heat into.
That is not as minor a change as you think.
Wanted a 5090 but I think I'll be going with a 9070XT at this rate.
using third party cables with a 5090 is brave
or stupid...
It worked for 2 years with the 4090.
I would have done the same.
This is an nvidia problem.
The issue has nothing to do with the cables. Don't let Nvidia gaslight us.
@@Hathos9Maybe they should test the cables then, have him send them in for an independent tester and test various cables. Sounds like a good test for someone like GN or even this guy.
@@Pwnag3Inc they changed between the 4090 and 5090....
tbh, the person at nvidia that wanted a 12pin connector for 600W should get fired!!!
I bet it's Jensen
The guy who is scamming people that a 5070 is a 4090.
I heard they're shedding a lot of their engineering talent because Nvidia stock is now worth enough that they can comfortably retire early, so the company is likely trying to hold on to as many people as possible.
"Your power connector is about to melt. Would you like fries with that?"
Also ATX 3.1 something something.
"These cards are $2000 and Nvidia just can't be bothered to spend the money"..... This can come as a surprise to absolutely nobody who's been around PC gamers and Nvidia for any length of time.
People can't seem to be bothered to spend the money anyway when the guy started buying DIY cables on a modular 1000W PSU.
@ If he had no idea it was an issue, why wouldn't he just pick the cable that said it was compatible and that he liked the look and/or practicality of the most? Until all this shit began, nobody gave a rat's ass about cables. You just bought the ones that fit your build best.
@@andersjjensen Fitting builds isn't a prerequisite of high power cables... If you want to jerry rig secondhand cables, you can go back to a 200W GPU.
@ Stop sucking Nvidia's dick. It's a bad design when something that was previously never a concern is now a common pitfall... that sets fire to things. And who the hell is even talking about second hand cables?
In theory, these are pins that used to be certified for 25 watts each.
But Nvidia pretty much forced certification at 100 watts per pin.
It's much the same as the 2x6-pin supporting 150W while we run 600 watts through it.
Rule no.1: never push near the design limit of something power related
5090s are pulling 30-40W over spec. Multiple content creators are regularly seeing 630-640W from these cards. Nvidia needs to have their asses handed to them over this. It's ridiculous for people to go through everything they do to get one of these GPUs, only to have to constantly worry about whether it's going to burn up.
Nvidia's way of measuring it is, unfortunately, accepted. The power rating is post-VRM-losses. That is why there is a difference between TDP, TBP and TGP.
The 12VHPWR connector is rated for 600 watts continuous and 660 watts for a short duration. Not defending the design of the plug at all here but it is “rated” for the power you see people using them for.
@@Sir_Defyable Yeah, and junk transmissions are rated for 100K shifts and they die in 10K miles, like the 10L80. When it's designed wrongly and cheaply, the rating means nothing. Their rating isn't based on physics, aka reality. It's based on how many more leather jackets that scumbag can buy with the savings.
@@Sir_Defyable Every other industry derates components to 0.8... so in reality that 12VHPWR cable is only good for 480W anywhere else. Even then it's vastly inferior to just having properly current-rated cables, contacts, and connectors. 3 or 4 AWG superflex and something like PCB 75 Powerpole...
@@Sir_Defyable The rating also depends on ambient temperature, which no one controls. So it's overall bullcrap, as every cable and current rating is specified at a given temperature. What happens with the hot air from the GPU is anyone's guess.
Imagine having a perfectly functional RTX 4090 since launch and then going out and buying an RTX 5090 just to have it burn.
yeah the guy is an idiot
AMD and Intel are both very happy to have sat this disaster of a standard out.
You can find the odd AIB who uses the 12-pin.
I know AsRock put it on their 7900XT/X Creators
Intel, I doubt anyone will bother unless they get serious
Intel still isn't relevant here. Their paper launches don't make them a real GPU company.
@RyTrapp0 They seriously only launched the handful of B-series cards to front to their investors so they wouldn't dismantle them on the spot.
"See! We can follow our own roadmaps, I swear 😶🌫️😰"
If that happened to AMD cards, it would be "they are low quality junk". Since it's Nvidia, all is fine, nothing to see here.
Buildzoid, I'm surprised it happened so quickly, as there are almost no 5090s out there in the hands of non-distributors or non-influencers.
The real story here is that someone got their hands on a 5090 😂
Nvidia never misses an opportunity to create a problem in their half-hearted attempt to solve a problem that never actually existed. I'd rather have a 4-slot monster with 4x 8-pin connectors that could probably still handle 50% more power draw in extreme scenarios than the 16-pin can.
I don't want my $2000+ GPU to be powered by "hope".
I still can't believe this connector is being used. When I first saw it, I remember saying it was a terrible design. The connectors, the size, the position: everything about it was bad.
It was already pushing the limits on the 40 series, and they reuse it for the 50. Seriously?
Nvidia said a few days ago this is not going to happen... lol
Jensen also told me that if i buy more i save more. I can't save because there is nothing to buy.
@@bracusforge7964 💯
Nvidia also told everybody that the 5080 was $999. Good luck finding one .
@@bracusforge7964 If you CAN buy at retail, you save yourself from getting scalped or not having the latest hardware.
It was bound to happen; I think the 5090 will have worse melting problems. Also, these connectors are flimsy; that's why the 8-pin is so overprotected. They have to make the pins click in much more securely. Basically, all these connectors were designed when computers pulled much less power; then they just increased the ratings and thickened the cables... but the connectors are the same flimsy connectors from 30 years ago. We need a much more stable connection, otherwise things will continue to burn.
Take a shot each time "shunt resistor" is mentioned = 💀
You'd think Nvidia would learn that those small pins can't handle those amps
The first thing you notice when you look at 12VHPWR is how tiny its pins are compared to regular 8-pin GPU and CPU connectors. Nvidia should at least use the same size power pins, maybe even larger ones.
contact area matters, not pin size
@@mycosys small pin size always has small contact area unless u increase number of pins or increase ping length. 12VHPWR uses 6 tiny power pins of same length for 600W while regular 8 pin uses 3 much larger (probly with 8-12 times larger contact area) for much less watts. And the more amps/mm2 you have the more likely it is to melt as soon as your connection is not perfect bc of dust or too many connect/disconnect cycles.
@@honkhonk6219 No, there are a lot of contact styles, and a smaller, higher-quality contact can have a much larger contact area than a larger, badly designed one. Circular pins are a particular problem.
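The per-pin load comparison in this thread is easy to check. A rough sketch, assuming the commonly cited pin counts (3x 12V power pins on an 8-pin PCIe connector, 6 on 12VHPWR) and an even current split:

```python
# Back-of-envelope per-pin load comparison. Pin counts and wattage ratings
# are the commonly cited ones; treat the numbers as approximate.

def amps_per_pin(watts: float, volts: float, power_pins: int) -> float:
    """Average current through each power pin, assuming an even split."""
    return watts / volts / power_pins

pcie_8pin = amps_per_pin(150, 12, 3)   # 8-pin PCIe: 150 W over 3 x 12 V pins
hpwr_12v = amps_per_pin(600, 12, 6)    # 12VHPWR: 600 W over 6 smaller pins

print(f"8-pin PCIe: {pcie_8pin:.1f} A per (large) pin")
print(f"12VHPWR:    {hpwr_12v:.1f} A per (small) pin")
```

So the smaller pins are asked to carry roughly twice the current of the much larger 8-pin contacts.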
When Nvidia said: _"Our 12VHPWR is safe for the 50 series now! No burns!"_ I knew that was bullshit... especially when I've seen load tests showing spikes up to ~850W. Yeah, there it goes...
Even though it's kind of a user error, and 12VHPWR is rated for 600W constant and can cope with spikes up to 2x the load... it does not sound too great regardless. More vendors should've done the same as Galax/KFA2 did with their HOF 4090, which has two of those connectors for extra safety.
Yeah, it's pulling over 600W: 20W to 40W extra in other YT reviews. Of course it's gonna melt.
@eliezerortizlevante1122 Oh, there is a neat part. A tendency, rather.
It usually takes a couple of months before issues appear. I guess this case and the driver troubles are just the beginning. We might be witnessing something much bigger later on in the show...
The issue is that the connector was simply poorly engineered and rushed to meet the Ada Lovelace launch. Nvidia practically bribed the rest of the PCI-SIG into adopting it. No serious QC/QA was done. It is telling that Nvidia doesn't use 12VHPWR on their professional SKUs; they use the EPS12V connector.
Based TA enjoyer, I see.
RTX XX80 gang stay winning.
It's just not designed for 600W, no matter what Nvidia says about it. I have my 4090 limited to 350W for a reason, and I only lost 10fps from doing so.
Thanks for the pointer.
You lost 10fps for nothing; mine is not limited and everything is fine. Your hands just have to grow from the right place.
@ I have it limited because there's no point drawing 450W for 10fps.
Same here. Undervolted my 4090; gaming at 3440x1440 the card doesn't draw over 270 watts, and it didn't affect my performance. Maybe at 4K it would use more power and lose performance.
@@crm484 what programme did you use to undervolt the card?
If the connector standard is designed in such a way that any tiny deviation can cause it to destroy itself, it's a bad standard.
Exactly! You should be able to bend the cable, use a third party cable, and connect/disconnect 1000 times without any problems.
Let's put liquid metal on the chip, but safe power delivery? Naaa.
The 4090 has problems with the connector. Nvidia: "more power will solve the problem"
it doesn't
Well I for one am SHOCKED AND STUNNED in Alphabetical fucking order. Oh wait no IM NOT, I mean DUH
DRINK everytime he says "Shunt Resistor"
Alcohol poisoning incoming!
Inciting self harm via alcohol poisoning is against the law :P
Here I am, using 8-pin. Still works fine. The next card I get will also have 8-pin. I'm not spending $400+ to worry every day about whether the connector is seated properly.
I don’t think it was a contact issue like you’re suggesting since it also melted the PSU side on the same pin. Look at the other photos.
That’s what I’m thinking, the user mentioned they were using an ATX 3.0 PSU also which would still have the old 12vhpwr connectors
@@efcut_ Melting both ends simultaneously suggests some sort of short or something.
@@mackja It's not.
Let me explain as simply as I can.
12VHPWR has 6 +12V contacts and 6 GND. Let's assume you make bad contact, so 2 of the +12V contacts aren't used at all. The video card doesn't know that and still tries to "eat" all 600W of power, so instead of 100W on each +12V line it's now 150W: +50% load on every line you have left. Now imagine that 5 out of 6 pins lose contact. That's all 600 watts, which is 50 amps, going through one +12V line. It's not designed for such a load, and of course it will melt. The wire itself, as the weakest spot, works like a fuse: it just starts burning, from both ends, because it's a wire.
@@vladimag2 Especially when the cable is old and as badly zip-tied as the one in the picture. Guaranteed to put a sideways twisting torque on the connector, gradually loosening the pins on one side 😢
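The failure mode described above is just Ohm's-law arithmetic. A minimal sketch, assuming a constant 600W load that splits evenly across whatever pins still make contact (the 9.5 A per-pin figure is the commonly cited 12VHPWR contact rating; treat it as approximate):

```python
# Per-pin current on a 12VHPWR connector as pins lose contact,
# assuming the card keeps drawing the same total power regardless.

def per_pin_current(total_watts: float, volts: float, good_pins: int) -> float:
    """Current through each remaining pin if the load splits evenly."""
    total_amps = total_watts / volts
    return total_amps / good_pins

RATED_PER_PIN_A = 9.5  # commonly cited 12VHPWR per-contact rating (approx.)

for good in (6, 4, 2, 1):
    amps = per_pin_current(600, 12, good)
    status = "OK" if amps <= RATED_PER_PIN_A else "OVERLOAD"
    print(f"{good} good pins: {amps:.1f} A per pin -> {status}")
```

With all 6 pins making contact you get about 8.3 A per pin, already close to the rating; lose two pins and every remaining pin is over it.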
The new cables are beefed up a bit, and it does look like the stock cables are much better than any 3rd-party cable or adapter. I was obsessed with this problem; after a gaming session I would take a big flashlight and examine my 4090. I got worried because I kept plugging it in and out over and over to check. The 3rd-party cable would get hot to the touch; when I switched back to the stock cable it no longer got hot, and the GPU has been working fine since.
That's it: unfortunately cable manufacturers are not producing cables within the standard, which causes problems when card manufacturers push those cables to their limits.
A 450W TDP on the 4090 was already too much for this connector, so let's use the same connector on a 575W TDP 5090. What could go wrong :)
The shunt over current warning on the Asus cards should be something like "Danger to Manifold" ....
That was the first thing I thought about when I saw the first benchmarks showing that it pulls more than the 600W the cable is rated for. That is why I don't even feel secure about getting a 5090 (not that it's even possible right now). And the most worrying thing is that those reporting melted connectors aren't even using third-party cards, which ARE overclocked to pull even MORE power.
NVIDIA: the way it’s meant to be played
Upd: even better: the way it's meant to be paid
Nvidia: "The way you're meant to be played".
We will see this a lot more often, and the cable won't be to blame; it's the truly garbage connector. Also, a 600-watt connector on a card that draws 570 watts and peaks at over 600 watts... no-brainer that this would happen. I just didn't see it coming so soon, with only very few 5090s sold. Joke is on the people who got their 5080s and 5090s from scalpers... 🤣🤣🤣
My 4090 Strix melted its connector, but I used a cable mod connector, which ended up being recalled in Australia. Just don’t buy a third-party connector. Fortunately Asus repaired my card as it was still under warranty.
3rd party power extension cables have existed for decades and it's only been an issue with this shitty connector.
Kind of baffling how the ATX consortium could ratify such a connector. All the safety margins seem to be used up by regular use. To an amateur's eye, two 8-pin PCIe power connectors have more conductive material than one 12VHPWR connector, yet are only rated for 300W combined.
Those 12VHPWR things are rated for 30 mating cycles.
They didn't; Nvidia is forcing this on everyone.
Why else do you think there are so many adapters for these things?
Our power supplies will shut off if we draw too much power on a single rail. It shouldn't be too much of a stretch to expect our GPUs to do something similar during a literal meltdown.
Notwithstanding the fact that it is a fundamentally flawed design, the correct cheap fix for the 5090 was to use two cables/connectors, as it pulls over 600W from the cable at full load; 350-400W is about all that is safe on this connector.
Hey Nvidia, why are the 12V power pins not gold plated?
Jensen in gold plated jacket:
Because they don't need corrosion or wear protection and don't carry signals. Most PC power connectors have ridiculously low max mating-cycle ratings. Gold plating won't save this connector.
The worst part is that they will most likely make a 5080 Ti, and even a 5090 Ti, and they won't fix anything even for those models in about a year and a half from now. They never listen. To them, it's always an 'acceptable failure rate', they're isolated cases and of course, "user error". It's a non-issue for NVIDIA.
I am not surprised; Nvidia designed this flawed connector on purpose so we can "buy more to save more". Seriously, Nvidia just needs to make the connector bigger and use more metal. The 12-pin connector (600W) is barely any bigger than the 8-pin connector (150W). One 8-pin (150W) might use more metal than the entire 12-pin (600W). And my EVGA 3090 FTW3 required 3x PCIe 8-pin.
I have the same GPU. What I remember from early reviews of the card was poor power delivery circuitry: basically one cable was doing most of the work and the other two weren't doing much. I think the 10-series was the last well-designed generation; after the mining boom and Nvidia limiting the price of partner cards, they stopped improving the cards and just sold meh products. They knew that no matter what they sold, consumers would buy it. The EVGA 3090 had early failures and rear-memory cooling issues.
@@HN6studios I got my 3090 9 months after launch no issue so far. I heard about the thermal pad issues too. I think you can power the card with 2x PCIE.
@@adink6486 My friend's 3090 FTW3 failed in just a couple months. Not long after that my 3090 FTW3 also failed. We both bought our cards in late 2020. The FTW3 was one of the 3090 cards that had better memory cooling. The memory temps were fine, but the early 3090 FTW3 were failing due to bad soldering on the MOSFET.
3:20 Does the TUF 50 series also have the same features as the Astral line?
It's so funny coming from car audio, where we really think about the size of the conductors and the amount of amperage. These figures are insane for the size of the connector. I know automotive electrical and electronics are different, but still! 😂😂
Not in that regard; electricity is the same everywhere. The connectors on GPUs have been flimsy for a long time already... and instead of making a sturdier connector they made that 12-pin garbage.
OR go back to 8 pin. Doesn't matter how many it needs. It WORKS.
So how many failures per the 1000 that are out there?
So far, just one.
I doubt there are a thousand of 5090s out there.
Well, it's hard to answer that question. I doubt there are many 5090s actively in use (the vast majority are currently in the possession of scalpers). Of those in the hands of actual users, many cards have died due to a faulty driver (likely connected to PCIe 5.0 usage), and I'm not sure they can be brought back to life. And of those that worked properly and weren't scalped, one has started melting down. I'd say the reliability is probably higher than the 4090's, but that's not saying a lot; it's hard to even imagine a less reliable card than the 4090 in recent times.
this design needs to be improved. drastically
Why are you surprised? It's the same connector 😂😂😂
One detail that is missing here is that the cable melted on both sides, the gpu and the psu. That could mean that there was some issue with the cable.
Yeah, both Nvidia and Gamers Nexus said not to use 3rd-party cables.
It was a shit-quality 3rd-party cable he'd been using for years. Blaming Nvidia for this is like blaming a car manufacturer for a crash because the owner drove on bald tires on icy roads.
Even my 4080 had issues with this stupid connector...
The ONLY way to combat this is to have sensors monitoring the voltage and current on each pin. If any single pin makes bad contact, its resistance rises, the load shifts onto the remaining pins, and they run hotter and hotter until the connector turns into a fkn strip heater and melts.
An alarm that sounds when a cable becomes defective would avoid this. This power connector is a failure, and Nvidia needs a class action against them to stop it.
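A per-pin monitoring scheme like the one suggested could be sketched roughly like this, loosely modeled on the extra-shunt imbalance warning ASUS ships on the Astral cards. All function names and thresholds here are made up for illustration:

```python
# Hypothetical sketch of per-wire current monitoring: trip on a pin over
# its rating, warn when load is shifting onto fewer pins. Thresholds are
# illustrative, not from any real firmware.

def check_balance(currents_a: list[float],
                  max_per_pin_a: float = 9.5,
                  max_imbalance_ratio: float = 1.5) -> str:
    """Return 'ok', 'warn' or 'trip' from per-wire current readings."""
    avg = sum(currents_a) / len(currents_a)
    worst = max(currents_a)
    if worst > max_per_pin_a:
        return "trip"   # one pin is over its rating: cut power now
    if avg > 0 and worst / avg > max_imbalance_ratio:
        return "warn"   # load is concentrating on fewer pins: alert the user
    return "ok"

print(check_balance([8.3, 8.2, 8.4, 8.3, 8.1, 8.4]))      # evenly loaded
print(check_balance([2.0, 2.0, 2.0, 14.0, 15.0, 15.0]))   # melting scenario
```

The point is that the total current looks identical in both cases, so a single shunt across the whole connector (the way most cards are built) can never see the problem.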
That's what happens when Nvidia focuses on their AI chips and rushes out desktop GPUs, since their cash cow right now is AI... can't wait for that AI bubble to burst so we can all put this behind us.
They say the card is $2k. I'm gonna say good luck finding one for $2k.
That cable isn't going to handle 600W; it's under-specced, period. The 8-pin PCIe is over-specced, hence why it never melts.
I would prefer temperature sensors (for example, thermistors) inside the connector instead of shunts on the PCB. Measuring the current imbalance could work in some scenarios, but not every case, and the shunt system has to be thoroughly tested to avoid false triggers.
If the connector is overheating, the best way is to detect that directly with temperature sensors instead of extrapolating from shunts.
Supposition:
The concept of the Molex connector that the 12V.... connector is based on is that the cable-end receptacles must float. They must be able to move. If they are soldered to their neighbor, that makes the connector vulnerable to this sort of issue. While it is ALSO at risk from incomplete or crooked insertion, this is the issue that can cause a problem even when fully inserted.
Shocking! not
Melting!
I'm just a chill guy with my first-wave 3080, rolling 100+ fps at 4K, and not catching on fire.
I may skip this series and wait until 60 series when the connector is finally abandoned.
They will probably make a new one for 800w
Abandon nvidia
@@ArchPC9 I’m actually thinking of going AMD just to avoid this connector.
Never heard of anything like this on AMD cards. That's why I've been happy with all the GPUs I've gotten from AMD until now, and I'll be buying AMD GPUs till I die.
RTX 6090: 750W TDP, $3000 MSRP
Buildzoid out here single handedly saving the PC world.
Yeah, there's a reason why the US electrical code (and probably other countries' codes too) limits constant electrical draw (like charging an EV for hours/days on end) to 80% of a circuit's max rated draw when planning it out: on a 15-amp breaker, only use 12 amps; on a 20-amp breaker, 16 amps; and so on. 575 watts on a connector rated for 600 is flying too close to the sun.
Real-world limits and theoretical limits are very different. Just because a cable could theoretically handle 600 watts doesn't mean it can in real-world scenarios, because that doesn't account for impurities in the conductors introduced during manufacturing (higher impurities = higher resistance). And higher resistance translates into waste heat, which is why people are seeing thermal runaway with those melting cables. No two cables are created equal, so it's like the silicon lottery, to stretch the analogy. If they say it's rated for 600 watts, in reality it's only good for 480-540 watts.
@@luminatrixfanfiction I think the cable can handle more than 600W, the connector is the problem, that's why it burns.
@@luminatrixfanfictionThat is absolutely not how ratings work. Like, at all.
@@luminatrixfanfiction 🤨
@@luminatrixfanfiction Dude, it's capable of twice that; no connector is rated near its limit, for exactly that reason. You aren't even an engineer, let alone the first engineer.
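For what it's worth, the 80% continuous-load derating mentioned a few comments up is easy to apply to the connector's own numbers. Illustrative only: the NEC rule governs building wiring, not PC connectors, but the margin comparison is the point:

```python
# Apply an NEC-style 80% continuous-load derating to the 12VHPWR rating
# and compare against the 5090's TDP. Purely illustrative arithmetic.

CONNECTOR_RATING_W = 600
DERATE = 0.80                 # continuous loads held to 80% of rating

continuous_safe_w = CONNECTOR_RATING_W * DERATE   # derated continuous limit
tdp_5090_w = 575

headroom = CONNECTOR_RATING_W - tdp_5090_w        # margin vs full rating
print(f"80% continuous limit: {continuous_safe_w:.0f} W")
print(f"5090 TDP headroom vs rating: {headroom} W "
      f"({headroom / CONNECTOR_RATING_W:.0%})")
```

By that rule a 600W connector would be held to 480W continuous, while the 5090's 575W TDP leaves only about 4% of margin against the full rating.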
The 4090 and 5090 connector needs thicker-gauge pins and wires; the gauge is too thin to run that much power through. It's like rewiring your house from 12/14-gauge to 22/24-gauge wire: your house will catch fire if the circuit breaker doesn't catch the fault in time.
Someone needs to come up with a 48v plan for these cards, to cut down on the ridiculous amp draw
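The arithmetic behind the 48V suggestion: same wattage means a quarter of the current, and resistive heating in a contact scales with the square of the current. A quick sketch, assuming a fixed contact resistance:

```python
# Same power at a higher rail voltage means less current, and contact
# heating (I^2 * R for fixed contact resistance R) falls with the square.

def amps(watts: float, volts: float) -> float:
    """Current drawn for a given power at a given rail voltage."""
    return watts / volts

baseline = amps(600, 48)  # 48 V case used as the reference
for v in (12, 48):
    i = amps(600, v)
    rel_heat = (i / baseline) ** 2
    print(f"{v:2d} V rail: {i:.1f} A, relative contact heating x{rel_heat:.0f}")
```

600W at 12V is 50A; at 48V it's 12.5A, so for the same contacts the I²R heating drops by a factor of 16.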
What's needed is a daughterboard with the connectors and shunt resistors. If things are about to melt down, the daughterboard explodes, sending it zooming out of the GPU and isolating it, with the error message "I just averted a nuclear meltdown!"
3:28 blows my mind Nvidia doesn’t do this with FE’s
The 3090 FE had crappy cheap thermal pads across the whole first run. The FE coolers are brilliantly designed, but the rest seems to have a lot of oversights.
The 3090 FE has the worst 12V fusing. They fixed that on the 4090, but they still haven't fixed the connector.
I wonder if the TUF or Gigabyte 5090s have something useful like this too. It would be crazy if the only 5090 model with at least some safeguards built in were the Astral, seeing as it's the most expensive of them all.
Twelve Volt High Fire Cable