We've been working hard lately at posting more investigations, documentaries, and deep-dives! Check out some other ones below!
Intel CPU fabrication factory documentary in Arizona: ruclips.net/video/IUIh0fOUcrQ/видео.html
AMD lab tour & documentary in Texas: ruclips.net/video/7H4eg2jOvVw/видео.html
The Downfall of EK Water Blocks: ruclips.net/video/6VjYFdHMC3A/видео.html
And we're working on more! Support this content by grabbing something on the store at store.gamersnexus.net/ or by throwing a few bucks at us on Patreon at www.patreon.com/gamersnexus
HUGE THANKS to our fact checkers & peer reviewers. Find them below!
Aris (Hardware Busters & Cybenetics): @HardwareBusters & www.cybenetics.com/
Roman (Der8auer): @der8auer
Elmor Labs: www.elmorlabs.com/ (they sell great tools for PCs)
From what I've managed to observe, everything was fine until the eleventh-generation processors, before E-cores and PCIe 5.0 appeared. Or maybe companies just don't want to pay highly qualified employees because the columns in management's spreadsheet don't line up, and they think everyone can be replaced and thrown out on the street.
I like MSI's solution. Their PCIE-5 PSUs come with a 12V2x6 socket and a 12V2x6 cable with yellow pins. Essentially, if you can still see yellow after plugging in, it's not plugged in all the way. You have to push until you can no longer see yellow. Look up A750GL or A850GL.
For all its faults, MSI has some product managers we've met who are very build-conscious / ease-of-installation focused. Awesome to see those voices occasionally winning out against all the marketing!
You know the reporting will be next level (for RUclips) when they outright disclose the outside reviewers of the content they're presenting. Kudos for that.
I work as an electrician in Germany. I mostly saw this kind of failure in outlets which were used to their maximum capacity at regular intervals. They used normal 230V sockets, which are rated for 16A, for their forklift chargers. What they didn't know was that these plugs were actually rated for 10A continuous operation. At first nothing will happen. But every day the socket was overloaded and got warm during operation. When it cooled down again, a little bit of condensation formed inside the connector and the resistance got a little bit higher. This cycle continued for a while until the socket failed completely and caught on fire. So the actual failure was not that the connector was bad, but that it got overloaded every day and the cycling between cold and warm increased the resistance until the failure occurred. I could also imagine that this factor comes into play with the 12VHPWR. But I can't understand why they would put such a high load on such a small connector. And the fact that you have to be extra careful to not stress the pins while installing these is a huge design oversight. Just look at an XT60/90 connector and how easy it is to connect those. It would be nice if they would build some kind of failsafe into the connector, such as a PTC thermistor. If that were implemented in the future, you could give the user a message that the connection is bad before the connector melts and shut the card down into a low-power mode.
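(A minimal sketch of that failsafe idea, assuming a hypothetical thermistor reading at the connector; the thresholds and the connector_failsafe helper are illustrative, not any real GPU or driver API.)

```python
# Minimal sketch of the failsafe idea above: check a (hypothetical) thermistor
# reading at the connector and throttle the card before anything melts.
# Temperatures and limits are illustrative; this is not any real GPU/driver API.
WARN_TEMP_C = 70.0     # connector running unusually hot -> warn the user
CUTOFF_TEMP_C = 90.0   # connector housings soften not far above this -> throttle

def connector_failsafe(connector_temp_c: float) -> str:
    """Decide what the card should do for a given connector temperature."""
    if connector_temp_c >= CUTOFF_TEMP_C:
        return "throttle to low-power mode and warn: bad contact suspected"
    if connector_temp_c >= WARN_TEMP_C:
        return "warn: check that the 12VHPWR plug is fully seated"
    return "ok"

# Example: a poorly seated plug heating up under load
for temp in (35.0, 72.0, 95.0):
    print(f"{temp:5.1f} C -> {connector_failsafe(temp)}")
```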
Apparently, cable design is too hard and too advanced a technology for a maker of computer parts at the world's highest-valued company, NVIDIA.
"But I cant understand why they would put such a high load on such a small connector" I think that the answer to this is pretty simple: they wanted to go the Apple route with just one small, sleek & sexy connector to look "high end" engineering be damned. Thing is you can't cheat physics (something you know even better than I do) and well it's come to bite them in the ass.
Incredible video. Absolutely groundbreaking and amazing journalistic work here. A true embodiment of what the SPJ Code of Ethics stands for. Keep on keeping on, GN. Much love from Seattle
I think I speak for the community when I say: Thank you for your support to GN ❤ Some of us have financial difficulties and are unable to properly compensate GN for the work that they do. We appreciate you.
Wow - massive donation. Thank you for that. It is sincerely appreciated and will go straight back into the next one. Also, it's been years since I was out in Seattle (PAX Prime), but I love it out there! Great area.
I'm a new subscriber here, only been tuning in for a few months. I have to say the integrity of this channel and the depths you go to in order to ensure you're not providing false/misinfo is so refreshing. when the F was the last time you heard anyone on RUclips say the words "Peer Review" outside of a dedicated scientific journal? I doubt you'll see this comment (currently 3,396 other comments) but pls keep doing what you're doing, the education you provide is invaluable
These videos are exactly why I'm proud to be a paying supporter/member of this channel! Thank you Steve for your incredible integrity :) And also for signing my sweet modmat - I'll be breaking that out again in a week for another PC I'm building for a friend :)
I can’t believe how lucky we are to have GN. I feel like this level of quality, detail, professionalism, is something I never expected. Like this is real journalism, science, and dedication FROM A RUclips CHANNEL!! Love you Steve and Team ❤
@@GamersNexus I still don't like how you spent the earliest days of this issue putting 99% of the blame on the user for "not plugging it in right" despite how easy it was to have it be the tiniest bit out and not even realize, plus this not being a problem with any other plug before this. Great that you're being thorough now, but you jumped to a wild and unfair conclusion at the start that made me start questioning your impartiality.
@jkazos at that time we didn't know much at all about the issue. Improperly plugging in the connector is a failure point. It was a "new" connector type 2 years ago which most users are unfamiliar with. I think you're stretching. IIRC GN said (not an exact quote) "we don't have enough information, but here are our recommendations." Saying 99% is a bit on the edge, no? Edit: why can't I reply properly, it won't link the user?
@@jkazos Did you watch the video...? He quoted directly from his original video that if a design lends itself to not being plugged in right, the design is to blame as well. Sometimes companies are wrong, sometimes end users are wrong, or a combination of both.
Steve, I've been watching you ever since some of your very first videos. I cannot express my gratitude for you and your team's contribution to this industry and journalism as a whole. Keep on keeping on!
Consumer electronics are required to use lead-free solder. AFAIK the only places you can use leaded solder in mass production are healthcare, aviation and military equipment.
Yeah it's a real shame IMO, the tiny bit of lead in solder helps so much more than it hurts. If govts would be willing to properly dispose of stuff this would not be an issue & we'd have much sounder electrical connections on our electronics. Thankfully you can still get leaded solder for personal use, it'll be a real sad day when they outlaw leaded solder across the board. Lead free introduces so many issues into what should be a very easy task of soldering a connection together!
Soldering with lead-free solder is not that much more difficult. We do it all day, every day in the industry. With the proper prep that is always needed for long-lived solder connections, it is not that different.
@@robr640 It's not down to the government to actually put it in the correct bin, or take it to the correct waste collection facility to make sure it's disposed of effectively. They can provide all the safe disposal sites they want. Joe Bloggs is still going to f**k it in the bin though. It's a consumer conscience problem. They don't care.
This looks like a gigantic mess. I am not an engineer, but when I saw them change the PCIe 8-pin to the 12VHPWR connector, my first thought was: why? I didn't understand why they were trying to push more current through a smaller connector.
That's the best part! It is a gigantic mess! Saw you edited your comment: We do address why they did that (at face value) in this piece, if you're genuinely curious what their reasons were!
@@GamersNexus Yes, why? One reason is to reduce space; the other is maybe to get more power in one connector, so for the biggest cards they do 2x 12VHPWR instead of 4x 8-pin PCIe. There was also a big mistake in testing products after fabrication.
The big selling point was the 12vhpwr connector being able to “talk” to the power supply better than existing connections. I’ve been into computers north of a decade and have seen the shift away from Molex (not a bad thing) to PCIE power connectors and now this. This connector is plagued by the same “engineering” principles that have caused people to loathe getting a new car. It’s great on paper but a train wreck in practice.
@@jonathanjones7751 That’s the most ridiculous part because most cables have the sense pins wired within the male connector to use 600W - the sense pins on the GPU and the PSU actually don’t connect
@@jonathanjones7751 This power connector is cutting-edge, right up there with the new design wave! Just like Starliner and the Titan submersible! 🚀🌊 What could possibly go wrong? 😅
My first question when I saw the size of the 12VHPWR connect was, "why is it so small?" Why does everything HAVE to be small? Regardless, please keep up the good work. I look to you guys for information on current industry issues, despite being an enthusiast and not a professional.
It's like all connectors are getting smaller. It's the same with SATA and HDMI. They randomly have bad connections that can be fixed with wiggling. The old RCA connectors never had that problem.
@@tinkerman5220 I've never had a problem with either SATA or HDMI. My friend has a second phone with Type-C and his always starts to fail after about 1-2 years of usage; I've used Type-C phones for more than 5 years and never had that problem. I guess the problems appear because he charges his phones in his pocket with power banks.
I've had so many people tell me that my own card is all fixed now that the previous discussions had settled. I'm so glad you decided to revisit this Steve because the dialogue needs to be revisited to address this horrible spec. Even the basic nomenclature is completely backwards in the white paper. 😂
I have the original 12V connector that came with the power supply; it's been a year with no issues whatsoever. Strange how reports keep coming out about this issue.
Excellent video. One thing that I've been wondering about ever since the 12VHPWR connector was announced is why the PCI-SIG didn't switch to a blade-type contact connector design. Blade-type connectors have been the de-facto industry standard in high-current applications for decades, with nominal ratings well over 15A for a single contact, and some of them go up to 100A per contact in some of the off-the-shelf designs that have been available for quite some time.
@@hexarith You mean like Mini-Fit Sr at 50A per contact 😀 Given all the other connectors are Mini-Fit Jr, that would be the obvious option IMHO. Given the new plug means a new PSU anyway, a 20V option would give headroom for 1000W to the GPU on two pins, and while the individual pins are larger, since you only need two the connector would end up a similar size.
Then NVidia wouldn't have owned it and been able to take the credit. They tried to "push the envelope" by being cheap, and surprise surprise, engineers "overbuild" things for a reason. "Overbuilding" is about knowing everything's not going to be perfect. It's reminiscent of Apple's butterfly keyboards.
Man, these deep-dives and documentaries are simply top-notch. I don't know how you guys can afford to do these....I know it sure isn't the YT revenue lol....but I'm super glad you're doing more and more of these. Thank you!
As a hobbyist electronics guy that has built some PSU adapter cables and such for people, I decided to look into the difference between 12VHPWR and PCIe power connectors at a component level. PCIe 6-pin and 8-pin connectors, as well as ATX power supplies, the extra 4-pin ATX12V board connectors, etc. use Molex Mini-Fit Jr. pins. I got to looking into what pins are used for 12VHPWR, and someone pointed out that it's Amphenol Minitek PWR 3.0, plus listed some part numbers, so I got to digging.

Looking at the drawings for female pins from each family rated for 16AWG wire, the Minitek PWR 3.0 pins are smaller in the outside dimensions of the folded-rectangle shape of the female pins, and the pin contact length is shorter than the roughly-equivalent Mini-Fit Jr. female pins used for PCIe and other connectors. Oddly enough, the Amphenol docs list their pins as capable of up to 12.0A per circuit, while Molex lists theirs as up to 9.0A per circuit. I'm sure if I really wanted to dig and do some math, I could come up with the total contact area between the female receptacle and male pin for each family as well, but I think the point is clear by now. The Minitek PWR 3.0 pins are going to have a smaller contact surface area between the male and female pin than the Mini-Fit Jr. pins even when engaged properly, yet the Minitek PWR 3.0 pins are rated for a third *more* current than the Mini-Fit Jr. pins.

Since each pin in the 12VHPWR connector is rated for 9.5A by the PCI-SIG spec, that gives you a total supported power of 684W... in a perfect world. That gives you a 14% safety margin, again in a perfect world. Let's say that one of these six pins drops out due to a poor connection... then you've got 600W running over five pins. 5*9.5A*12V is 570W, but you're trying to push 600W over those five pins... Congrats, you just put yourself at high risk of thermal runaway. The safety margin isn't there with the Amphenol pins.

IMHO, it would be safer to run a 6x2 Molex Mini-Fit Jr. arrangement technically out-of-spec than the 6x2 Amphenol Minitek PWR 3.0 arrangement technically within spec in an improperly-installed plug/receptacle, simply due to the larger contact area with the Mini-Fit Jr. pins. Physics doesn't lie; the lower safety margin of the Amphenol setup puts the designer behind the eight ball right off the hop, and then you add a connector that doesn't positively latch? No wonder these things are melting. This was a bad design choice on NVIDIA's part. NVIDIA is 100% counting on a perfect engagement every time between those twelve pins, and even one pin being off puts you out-of-spec according to the PCI-SIG spec. Once the first pin heats up, its resistance goes up... which is going to drive more current to the lower-resistance pins, causing them to be over-spec even more and heat up as well.
I should state clearly here: the problem is in the pins. 16AWG will happily carry well over 10A for chassis applications. People have seen the same failure modes for 14AWG-wired 12VHPWR plugs. This is 100% a spec problem for the pins and plugs used for 12VHPWR, and the fault is entirely NVIDIA's.
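(For anyone who wants to sanity-check the arithmetic above, here it is as a tiny script, using the figures quoted in the comment (9.5A per pin, 12V, six current-carrying pins) rather than an independent reading of the spec.)

```python
# Worked version of the margin math above, using the per-pin figures quoted
# in the comment (9.5 A per pin per PCI-SIG, 12 V rail, six 12 V pins).
RAIL_V = 12.0
PIN_RATING_A = 9.5
PINS = 6
LOAD_W = 600.0

rated_w = RAIL_V * PIN_RATING_A * PINS          # 684 W total connector rating
margin = rated_w / LOAD_W - 1.0                 # ~14% headroom at 600 W
print(f"Rated: {rated_w:.0f} W, margin at {LOAD_W:.0f} W: {margin:.0%}")

# Lose one pin to a bad contact and the remaining five are over their rating:
five_pin_rating_w = RAIL_V * PIN_RATING_A * (PINS - 1)   # 570 W
per_pin_a = LOAD_W / RAIL_V / (PINS - 1)                 # 10 A per remaining pin
print(f"5-pin rating: {five_pin_rating_w:.0f} W, actual per-pin current: {per_pin_a:.1f} A")
```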
Yes, the very small engineering safety margin is shocking to me even (or especially) for a consumer oriented product. We often bake in 50% engineering margins or more for safety critical parts.
As I predicted when this whole 12VHPWR fiasco blew up: it's a connector designed and built for the bare minimum of "scraping by" in terms of overhead. I suspect this was a pennies-on-the-dollar decision, making the CHEAPEST possible connector that could do the job. They could have taken the old 8-pin PCI plug, added the extra 4 pins and sense wires... and STILL have had a way more reliable and redundant design with marginal overhead. Considering there's been literal DECADES of connector development, know-how, etc... this just seems LAUGHABLY undercooked (no pun intended). I truly hope whoever designed and signed off on this... does not work at PCI-SIG anymore.
If it was ready for daylight it would have been on the ARC cards, not Nvidia's. Intel WARNED PEOPLE about this ahead of time, as they were directly involved in creating it.
Thanks - I remember when you first posted coverage on the Cablemod adapters I had been having trouble with my PC resetting - that led me to replace the adapter and Cablemod cable with an "official" 8 pin to 12vhpwr cable from Corsair and the problem went away. I had originally planned to replace the entire PSU.
GN feels like a modern-day Sherlock Holmes going after criminals and I'm here for it. Impressive work all around. As a PhD-level scientist in medicine, I appreciate your efforts for transparency and reproducibility. It's refreshing to see rigorous science being applied in the consumer tech world. Cheers!
@@joshschmidt8784 Because Sherlock Holmes solved murder cases through deduction and didn't expose dodgy electrical engineering by asking some industry experts.
@@RegenTonnenEnte I think he was declaring his expertise in the field of discussion and I imagine paying a well deserved compliment to GN. But maybe English isn't your first language.
Awesome research, it's so cool that you guys worked hard and now are able to afford making this type of content! GN deserves every bit of success it has and then some. Also love that Aris and other experts were consulted. Thanks Patrick, Steve and team! Looking forward to the next awesome content.
If only there already was a connector able to deliver 300W in the same footprint as the PCIe 8pin, widely proven and already in mass production for all modular PSUs... Wait a minute
They could have just kept the pitch size of the old PCI-E power plugs and used the pin layout from 12VHPWR, so 6x 12V and 6x GND plus their sense pin thingy. The way I see it, the smaller size is the main culprit.
I've watched a ton of Northridge Fix's videos. Alex is a BEAST at microsoldering. I think he said it best. You can say what you want about angle of insertion, debris etc but the point stands it's a connector. You plug it in, it clicks and you don't think about it again until it fries your $2000 graphics card. You shouldn't HAVE to think about how you plug it in at all.
Dude is a rip-off artist. He literally has a video of him putting a new connector on a 4090 with the connection side of the plug facing INSIDE the card, meaning it was unusable for the customer, and he still charged them for a full fix. The guy is notorious for being incredibly shady and all about making as much money as he can off his customers.
@@QactisX Oh, so you want me to get into how he's not good at microsoldering and just floods PCBs with shit flux to make his job easy? Keep gassing up a dude who makes a living ripping people off. You are directly making a comment that would cause people to assume he's good at his job when he's not.
@@xnitropunkx You are making a comment that is extremely annoying and negative, cursing me out for no reason when I noted he's good at microsoldering and has some valid points about the 12V connector. Sounds like you work for Nvidia or Intel and don't like hearing that it sucks, or maybe you work at Joe Shmoe's Solder Shack down the road in Northridge and he's taking all your business, so you're on GN smearing. No, I don't want you to go into it.
"THIS IS BAD" Thank you Steve EXCELLENT coverage of a potentially multi level catastrophic event that could have been exasperated if the cards were in a more affordable price range. The initial buy-in cost of these cards greatly kept them out of reach of budget volume amounts of retail sales.
This video is the perfect example of why this channel is my go-to for information. You guys triple-check everything before making a statement to be as accurate as humanly possible. Thanks for all your hard work over the years.
I've worked a lot with high amperage/low voltage electronics. I knew from the beginning that those pcb adapters were going to be a problem from the moment they were announced. We did a lot of experimentation with high current lipos years ago and we constantly worried about the wired connectors we used, and those were from reputable, professional industry/military companies that have been making high current connectors for decades.
That's really what blows my mind. Hobby RC electronics, for example, have proven that more appropriate consumer-grade connectors are possible and even available quite cheaply as of recent years. Paralleling up small friction-fit connections is rarely a better solution than one solid low-resistance one. Some even contain sense pins already. And on top of that, silicone wire with its flexible jacket and high strand count would practically eliminate the bend strain caused by these atrociously stiff meshed/bundled PCI power cables. It even comes in all sorts of fun colors for your customization desires haha!
Compare the original connector's ampacity to the latest, along with the reduced contact area, and it's a mess. Any power engineer should have seen this coming. I did. Most did. All of this root-cause analysis is for those that lack any basic understanding. The root cause is obvious: not enough. Any connector that blows will cause a cascade. Any of these other things that might have been listed are just sensationalism. The how and the why will all point to the real root: ampacity and quality connections. There simply isn't enough. The mechanics and quality of the connector just make it worse. Delving into that is nothing more than entertainment. Yup, a crack at Steve, who blatantly said everyone was wrong. Basics are enough in most cases. Just jumped to the end to get this comment out. Test and test, build and build, technician. Leave the design to those who do. Comment on it, but stop throwing shade. Best Buy at its best.
@@Wiresgalore Funny you say that, because we actually tried those at first. We needed a large bus of separate power adapters all connecting to the batteries at the same time. We glued them all together after plugging them in, hoping to be able to plug and remove them all together to make it easier. That's when we realized how sloppy the tolerance to fully seat each connector actually is. The metal clips inside the connectors would not fully seat even when the plastic was flush a lot of the time.
Also if we're starting to expect GPUs with 600W and more, maybe we need to move away from 12V and go for 24V connectors. Nvidia could do it if they wanted and force everyone else to deal with it
@AlbertScoot That's fair. I guess the point I was reaching toward is that so many better options do or could exist, whereas this whole debacle feels like beating a dead horse, trying over and over to make something work that is fundamentally flawed for its purpose.
I've worked in electrical engineering for 25~ years and one thing I still can't get my head around is why PC cable design still uses multiple cables instead of a larger single conductor. I can only assume it's an economy-of-scale 'problem' and they only have to buy one cable size. If they started making interconnects using flexible 20sqmm or so (4AWG) cable and used a larger, beefier connector, literally none of these issues would ever exist. The problem with using lots of smaller connection points is that they won't always be equal - it only takes one connection point being not as well mated to start creating heat, oxidising, and creating more problems. They need to just start spending a little bit more on bigger connectors and bigger (fewer) cables and this will all be a thing of the past.
I miss using the "~" symbol as you just did. For some reason they neglected to add it to the keyboard on this computer and I have been left writing only in exactitudes ever since, with no approximations.
I do IT, but started with cars, and that's kind of what I thought: like, why not a big ol' screw-down connector like they do with car audio amps (and some batteries)? That, or I think Asus had a power connector slot just after the PCIe 16x slot with beefy connectors (GC_HPWR). Even easier if motherboards go full 12V (ATX12VO).
Stability is the reason why, by spreading the load out over multiple smaller cables you compensate for voltage drop and surges, with just one cable the card could crash trying to go from low to high load if there isn't enough power coming through that single cable. We're talking electronics running at less than 1 volt, the precision really does matter.
@@celeriumlerium8266 This has nothing to do with the number of cables you use, but the conductor cross-sectional area, which was the very point I was making. As long as you spec a cable of sufficient size to negate any voltage changes, stability has nothing to do with how many conductors you have. Another person mentioned car audio, which is a good example because it's also around 12V and you also don't want voltage drops because they can damage equipment. Car audio folks often use huge cable like 4/0 gauge (120sqmm) because they are dealing with massively higher current than anything in a PC, and don't want any voltage drop.
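(A rough illustration of that point: voltage drop is set by total copper cross-section and length, not by how many separate strands carry the current. The 0.6 m run and 50 A load are assumptions for illustration only.)

```python
# Rough illustration: voltage drop depends on total copper cross-section and
# length, not on how many separate strands carry it. 0.6 m run and 50 A load
# (~600 W at 12 V) are assumed values for illustration.
RHO_CU = 1.72e-8   # ohm*m, copper resistivity at ~20 C
LENGTH_M = 0.6     # one-way cable length (round trip doubles it)
CURRENT_A = 50.0

def drop_v(total_area_mm2: float) -> float:
    r = RHO_CU * (2 * LENGTH_M) / (total_area_mm2 * 1e-6)  # out-and-back resistance
    return CURRENT_A * r

# Six 16 AWG conductors (~1.31 mm2 each) vs one 4 AWG conductor (~21.2 mm2):
print(f"6x 16AWG: {drop_v(6 * 1.31) * 1000:.0f} mV drop")
print(f"1x 4AWG : {drop_v(21.2) * 1000:.0f} mV drop")
# Either way the drop scales inversely with total copper area; the melting risk
# comes from the many small contact points, not from the wire gauge itself.
```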
The craziest thing about this is that we could just use a re-branded EPS cable like they use in servers. The safety margin on them is so big that it could carry the same amount of power as the 12-pin and still have a larger safety margin.
Great compilation of events... a summary of a long and complicated story!!! Thanks for the measured pacing so I can follow along, even if only through the RUclips translation!!!
This wasn't even a reduction in temperature. More like +300°C if you dice-rolled wrong or plugged the socket in loose without double-checking. Though I wonder what the resistance difference is between regular PCIe and 12VHPWR.
Holy shit, I'd like to thank y'all from the bottom of my heart! I had recently put together my first PC build and I wasn't sure about my GPU cable. I pushed it in as far as it would go, made sure the cable angle was okay and assumed it would be fine, but this video made me double-check, and sure enough it wasn't fully seated! It took waaaaay more force than I would have liked (had to grip the back of my GPU to make sure I didn't break the connection to the motherboard @_@ ) but I finally got it seated right! Y'all may have just saved me a lot of heartache and money, thank you for all the info!
Steve, an interesting part you did not pay attention to is that the problem is not only in the 16-pin connector but also in the board's design. Nvidia changed the power design in the transition from the 3090 Ti to the 4090. As far as I know, the 3090 Ti had 3 separate power lines, and with a problem on one of them, the card turned off. The 4090 has all the lines working as one, so the card continues to work even when some of the power pins in the connector are not connected as they should be, or are overloaded because others are not connected or have poor contact. Nvidia can solve the problem, at least partially, by adding protection against overload to each of the 6 power lines of the connector, or at least to 3 lines of 2 pins each. When at least one of the contacts is not connected or transmits excess power, the video card should turn off. Please like so that Steve can see this.
Indeed. Any design that solely relies on the power distributing itself automatically over multiple parallel paths is flawed. The moment there's an imbalance, one of the paths will be overloaded and the whole thing will fail. Parallel paths either need to be redundant, i.e. every single path can take all the load, or they need to be at least monitored, but better actively managed.
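(A rough sketch of the per-line protection being described here, assuming hypothetical per-line current readings; the connector_ok helper and its limits are illustrative, not how any shipping card actually works.)

```python
# Sketch of the per-line protection idea above: given per-line current readings
# from (hypothetical) shunt sensors on the six 12 V lines, decide whether the
# card should shut down. Limits are illustrative, not from any real design.
from typing import List

PER_LINE_LIMIT_A = 9.5   # per-pin rating quoted elsewhere in this thread
OPEN_THRESHOLD_A = 0.5   # a line carrying ~nothing while others are loaded is suspect

def connector_ok(line_currents_a: List[float]) -> bool:
    total = sum(line_currents_a)
    for i, amps in enumerate(line_currents_a):
        if amps > PER_LINE_LIMIT_A:
            print(f"line {i}: {amps:.1f} A exceeds the {PER_LINE_LIMIT_A} A rating")
            return False
        if total > 10.0 and amps < OPEN_THRESHOLD_A:
            print(f"line {i}: ~0 A while the card is under load, likely a bad contact")
            return False
    return True

print(connector_ok([8.3, 8.3, 8.3, 8.4, 8.3, 8.4]))       # balanced -> True
print(connector_ok([10.0, 10.0, 10.0, 10.0, 10.0, 0.0]))  # one dead pin -> False
```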
as someone who works in aviation, specifically avionics/wiring for most of my job, this was a fun video to watch. get to see the small scale view of what happens when shit isn't made properly.
Steve, you and your team's work has always been the cream of the crop, but these days you are putting out next-level videos, man... you are, without question, the authority when it comes to real, accurate information in the CPU world. Thank you so much and keep up the great work!!
I like a few things about the channel. The main thing that keeps me watching this channel over others is the attention to integrity in reporting; Steve and the others fact-check early and often. They also aren't afraid to criticize their very own advertisers (EK Cooling Systems comes to mind), and they have to tread carefully on these sorts of issues because you don't want to lose access to future products and reviews in their line of work. They also haven't been afraid to admit that they're not subject matter experts, something that not many people on the platform say. I have to try to ignore the "attention grabbing" title cards on each video, but I get that's a part of trying to keep audience engagement on RUclips these days.
Literally just sent in my HX1200 (barely over a year in use) for warranty replacement because of inconsistent power delivery, and I was using the CableMod adapter on my 4090 up until the recall. I'm almost certain it caused damage to the card -.- As always: all the hard work, investigation and dedication you've all put into this sh*tshow is MASSIVELY appreciated.
It was complete for the units we tested. The conclusion was that it was a mix of improper insertion for the units we tested and the failures of that era plus design oversights, and this expanded as more cable designs came out later to trigger this piece (which could not have existed at the time, because many of these changes didn't exist yet).
Awesome work as always! - Just a reminder to new folk - if your GPU is using 2x8pin or even 2x6 pin etc - use one PSU VGA cable PER socket on the GPU. IE - use 2x individual cables for your 2x8 or 2x6 configs etc.
I've been saying for two years now: This plug cannot handle 50 amps, and the proper way to have handled this was to change power supply standards and GPU standards to power GPUS off of 48 volts instead of 12 volts. This would reduce the current by 75% for a given number of watts, and would completely eliminate the overheating and failure of these plugs. When I first started saying this, people mocked me, asking if I thought I knew more than the engineers at Nvidia. Plainly, on this subject, I did. Based on almost 50 years in electronics, I knew immediately on seeing this connector that it would not hold up at 50 amps of current draw. And renaming the plug is not going to help. These plugs need to go away, but they're not going to. So, if you're buying a 4090 or 5090, keep a fire extinguisher handy.
The issue with a 48V solution is that almost everything else in your PC is powered by 12V, meaning PSU manufacturers would need to build 2 separate circuits for 12V and 48V... And then, how do you even divide the power budget? Remember the times when PSUs were almost never able to provide their rated wattage on the 12V line because 5V ate a significant part? You're suggesting a similar story. Not to mention it would break almost all the intercompatibility (even if only via adapter cables) that the PSU spec currently has. [And it would increase the risk of the user error of building a system in a way that sends 48V instead of 12V... But this one can be avoided by unique connectors. Returning to the point about intercompatibility.]
People assume Nvidia engineers are also electricians. Well, these people are downright morons to think that. Maybe Nvidia engineers are similar to an electrician, but on a sub level. Engineer doesn't mean you're an expert in every field related to the project you're building. Most of these stupid shenanigans are the simple result of companies attempting to find a cheaper, cut-the-corners type scenario to make more on their margins. You can bet Nvidia pushed and backed this stupid connector.
Yes, higher-voltage transmission will be more efficient and will require thinner cables (hence newer 800V EVs), but the CPU EPS is also 12V, so what do you do with it? You would either have to add a new 4th rail on the PSU side (requires buying a brand new, more expensive PSU), or you would entirely replace 12V with 48V, which would require both a new PSU and new GPUs/motherboards with more expensive VRMs to handle 48V to ~1V conversion. Both solutions would generate a ton of e-waste.
@@ThunderingRoar You handle it the same way USB cables do. The base voltage is 12V for the connector, and through some handshake the PSU, cable and card can switch to a higher-voltage mode. Sure, it takes a bit more hardware to make it work, but making circuits that support 50A ain't cheap either. Backwards and forwards compatible. No extra e-waste. Gradual rollout is possible. And low-power devices can stick with 12V, where it's absolutely fine. Obviously this would require a bunch of coordination in the industry, but this is far from impossible to make work...
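(The arithmetic behind the higher-voltage argument in this thread, for a 600 W card; contact heating is taken as I²R with a fixed contact resistance, which is an assumption for illustration.)

```python
# For the same wattage, current (and therefore I^2*R heating in the contacts at
# a roughly fixed contact resistance) falls as the supply voltage rises.
POWER_W = 600.0
base_amps = POWER_W / 12.0                  # 50 A on today's 12 V rail
for volts in (12.0, 24.0, 48.0):
    amps = POWER_W / volts
    heat_ratio = (amps / base_amps) ** 2    # relative contact heating at fixed R
    print(f"{volts:4.0f} V -> {amps:5.1f} A, {heat_ratio:.0%} of the 12 V contact heating")
```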
In 15+ years of PCIe power, 3.3% of users have ever had a failure. In less than one Nvidia generation 4% had failures. That's huge! I'll go check my 4090 again 👀
Yep, that's something that needs to be reiterated. Plus, it should also be noted that of the PCIe 8-pin failures, most of them are due to PSU manufacturers having daisy-chained cables. The connector itself isn't at fault.
Ehm... that's how percentages work, all right. If the statistics are correct, then the difference is actually not that big... The reasons for the failure rate are important, though (for the older connector, daisy-chaining is a thing). But other than that, 3.3% in 15+ years and 3.3% in 1 year would be basically the same percentage. Granted, you for sure have a larger sample size with the longer-running measurement. And 4% is not that much larger than 3.3%... Hard to say anything about the statistical significance of this difference, though.
@@DimkaTsv Not necessarily. The 3.3% includes failures shortly after the product was made as well as long-term failures that took a long time to happen. The new one has only been around for a year, so 4% fail in the short term, and an unknown additional amount that fail after a longer period of use are not counted in that 4%.
I have been using a 4090, undervolted, since shortly after its launch. It eats around 300W. I have an older Corsair 750W TX-M PSU, for which I custom-made a 12VHPWR cable with a local electrician guy that does this as a side income. The cable is oversized and rigid AF, and it's secured in the slot so well it feels like it will never come out. :D It was a really good buy. Quite happy I didn't bother in the end with 3rd-party adapters.
I bought a new 4090 FE (12V-2x6 revision), and believe it or not, I managed to push in/break 1 pin of the GPU connector just by plugging in the supplied octopus cable the first time. I wanted to hear "the click" so I used some force. Does not inspire confidence, for sure.
I was just talking about the shit show around this adapter with a friend a couple of days ago! My thanks to the GN team for the show to go along with my breakfast!
I still can't believe they approved that retention clip design. It's one of the tiniest most pitiful clips I've ever seen, and it's for a thick, stiff bundle of high-power cables that have to run into the side of a graphics card. You can tell just from looking at it that it's destined to waste hundreds of thousands of hours in cumulative troubleshooting and RMAs.
Not just that, but it also sits in the centre of the long side. That's not how you secure a cable that experiences tension in random directions. For that, mechanical locks go on both short sides (e.g. Centronics, SCSI, VGA/any sub-D, DVI, ...) OR the connections are so deep that the plug cannot be canted far enough to lose contact (that's what automotive connectors do).
I have a background (degree) in computer engineering and I very much enjoyed my class specifically on optics and how to design cables.... Great topic. Now, as a design engineer your task is to understand the spec, design the product to meet the spec, and to work with certification to verify performance. If we want to take a step back I can give you 4-5 main things I would investigate or inquire further into for why this happened. This is a broader computer design engineering puzzle.... and I think many people failed to do their jobs along the way.

1. Molex failed to design connectors and cables that were able to operate correctly. They had to be redesigned and my instincts tell me that they will have to be revised again.

2. ATX, PCI-E, Nvidia, and Intel all failed to verify the spec appropriately. Specs are designed to be upgraded, improved. However, you should be able to have a standard that works the vast majority of the time, and the reason specs exist is to have a reliable product in manufacturing. This is because engineering calculations need to have a reliable tolerance. However you want to cut it, the spec(s) are not reliable and need to be revised..... Again. (Sidenote: imagine if this was ANSI and it was a screw on the space shuttle.... Netflix has great documentaries about this.)

3. Manufacturers had the issue of demand from the designers of the products (Nvidia, Corsair, Asus, etc.), which led to extremely terrible quality control due to a variety of factors. The common mantra is "ship it and we'll figure it out later" because the timeline of sales matters way more to the business mindset than the certification and engineering concerns. Think about profit margin vs. replacement costs. Often you will see inferior parts added onto quality designs due to availability, because at that specific moment the part was already manufactured and engineering is told to update their design to allow it. Certification is told they have to accept it through simple analysis. And this is what leads to Boeing having their massive issues with the Dreamliner.

4. No one considered the basic functionality of the product and NOBODY did long-term testing for fatigue failures. Things were done on such a brief timeline that it was not possible to test them thoroughly, which becomes clearer when you step back and think about why this all happened.

5. No one really cared about or understood what bend radius means in the design-requirements sense. You can even see this from the actual pictures in the spec. Computer cases have been designed in a particular way for how long? No one thought about the fact that every single design constraint on this cable+connector was run in an open-bench scenario where there aren't things jammed into some small enclosure. The stack-up of massive connector on the card into massive connector on the cable into computer literally does not work with bend radius (cable) design guidelines. Talk to a person who works with sheet metal and ask them about bend radius and why it matters. Ask them about different types of metal and how the bend requirements change based on the material itself that you're using. Now.... consider that the vast majority of phone cables fail because of the weight of the wire itself and the relatively small size of the connector. Flip that on its head.... now you have a massive connector trying to jam itself into a small space. It doesn't work when you need (in both situations) proper bend radius to prevent signal and connector failures.
How we fix this as a whole......

A. You need a minimum clearance and bend radius guideline for CASE MANUFACTURERS.

B. You need power supply manufacturers to meet a specific quality standard for clean power signals and standardization of what the cables need to do (this is from the ATX spec).

C. You need motherboards, video cards, and OEMs to follow the actual specs and not mess things up by trying to push limits (waves at Nvidia).

D. You need to design things in such a way that they aren't designed to fail. Step back and think about 6-pin and 8-pin connectors. You have a clip attached on one side to keep a power cable from disconnecting and sparking on the board or somewhere else. Now, you have 2x6 or 2x8 connectors, even 24-pin connectors, which have been used to send signals and various power connections from the PSU to the board reliably for years, decades. All of those connectors were not this big and didn't require this much power. It's a massive tolerance issue and the entire thing is centered around this very cheap clip that is supposedly going to magically hold this massive duct-taped connector design to the card while being jammed into place because the computer door won't shut, and the weight of everything alone is too heavy to keep it in place. You should have 2-4 clips. Not just one.

Fix the spec... Fix the connector.... Fix the quality issues..... Then maaaaaybe it'll be reliable.
A possible solution might be actually hiring engineers instead of calling "engineer" whoever has a job in tech regardless of their background... 🤷♂ Call me crazy, but I feel like the industry is trying to cheap out on skills, materials and QA while still trying to keep that aura of "we've got the best minds", when clearly either that is not true or these people do not sleep enough to make poised and well-reasoned decisions...
@@or1on89 I did my Engineering degree back in the 1990s and the fight had already been lost back then, I remember one of my lecturers going on an epic rant about the difference between fitters, technicians and engineers.
Nice work Patrick, Steve et al! One quibble, though: when multiple pins share the current at the connector block, but the 0 and 12 V groups are shorted together on either side of the connector, it's *not* the high resistance pins which will get hot, but the **LOW** resistance pins which are of course carrying the most current. This is because the power dissipated in each pin is V*I, and since the voltage drop across the pins is forced to be identical (...as I say, shorted on either side) the pins carrying more current are the ones which will melt. Add a bunch of resistance to 5 of the 6 12 V pins, for example, and it's the final one which will melt when you pass 50 A through the connector.
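(A small worked example of that point: because the pins are shorted together on both sides, each sees the same voltage drop, so current divides in inverse proportion to contact resistance and the best-mated, lowest-resistance pins dissipate the most heat. The milliohm values below are made up for illustration.)

```python
# Worked example of the parallel-pin point above. The pins are shorted together
# on both sides, so each sees the same voltage drop; current divides in inverse
# proportion to contact resistance. Contact resistances below are made up.
TOTAL_A = 50.0
contact_mohm = [5.0, 5.0, 5.0, 5.0, 5.0, 50.0]    # one pin with 10x the resistance

conductance = [1.0 / r for r in contact_mohm]
g_total = sum(conductance)
for i, g in enumerate(conductance):
    amps = TOTAL_A * g / g_total                  # current division
    watts = amps**2 * contact_mohm[i] / 1000.0    # I^2 * R heating in that contact
    print(f"pin {i}: {amps:4.1f} A, {watts:5.2f} W dissipated")
# The five well-mated (low-resistance) pins end up near 9.8 A each and dissipate
# the most heat, while the bad pin carries little current -- the healthy pins cook.
```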
The more I look at 12VHPWR / 12V-2x6, the more I think to myself that the connector should have been compression-based or a _really_ thick ribbon cable. With that many small attachments, shoe-horning it into a PCI-like form factor with smaller conductors seems to be the root problem. Whereas, if it were a compression-attached solution it would require a couple of screws, but it would be _guaranteed_ to be seated well; or if it were a giant ribbon, it would afford more (and firmer) contact. The execution was bad, and PCI-SIG should feel bad.
@@OGSumo *_Damn_* right. Thumbscrews would have been baller to have, and yet another accessory. Combine this with special shapes or ribbing that affix onto ratcheting teeth and the connector would _never_ come undone unless a tab were pulled down, and it would have _stayed._
@@GamersNexus speaking of coolers, I'm very intrigued by the Thermalright Burst Assassin 120 Evo Dark - it looks like an attempt to rival the NH-U12A at a much more reasonable price. Could we get a review on that?
I've worked as a terminal contact subject matter expert in automotive connection systems for 13 years now, and your investigation parallels any one of the professional root cause analyses I have worked on in that time. Great work; Linus, eat your heart out, you little weasel. I am concerned that the manufacturers of these products are seeing the same issues that we may see in the automotive industry due to extreme vibration profile validation testing and/or very high ambient temperature validation (150 degrees C or higher for at least a continuous 1008 hours), even though their products should only be sitting quietly in a computer case at comparatively mild elevated ambient temperatures. Seems like they need to take a step back and get some consultation for high-power connections.
The problem is "user error". That's how companies get away with bad designs. They BLAME YOU for their fuckups, people, and that's how they get lawsuits dismissed in court.
Too bad you had to pay extra for those "AI cores" that still aren't being utilized..... It could have been at least $50 cheaper without the wasted AI cores, or more powerful by using those million+ transistors for something like more shader cores, TMUs and ROPs, and thus come closer to actually competing with a 4090.
@longjohn526 Cool thing is, it doesn't compete with the 4090 and was never supposed to. AMD didn't bother making a competitor to that because no one would buy it. The XTX competes with the 4080, and beats it for cheaper in everything but ray tracing, which to many of us is a joke.
This problem is something that will easily become more prevalent with future GPUs that are almost certainly going to pull over 600 watts (theoretically we could see GPUs that pull 1000 watts within a few years from now), so if this 'connector issue' isn't fixed then we will be seeing way more than just 4% of high-end GPU owners reporting this issue.
@@miken3963 If you want more pixels, you'll have to make more horsepower. The 4090 barely does simracing on triple 1440p, and is totally inadequate for triple 4K. As long as pixel count increases faster than the chipmakers can make smaller, more efficient processors, the GPU size and power consumption have to increase in order to make more power.
Good luck trying to keep your circuit breakers from popping with a single machine pulling +1700 watts from a single outlet. People seem to forget that whole rooms (sometimes more) are serviced by a singular 15 amp (sometimes 20 amp) breaker. A 15 amp breaker in the USA gives you 1800 watts capability. Taking your hypothetical +1000 watt GPU and giving it a +300 watt cpu when combined with the power draw of all subsystems, fans, drives etc you're looking at +1300 watts from the PSU. So a 1300 watt system would require about 1500 watts from the wall with a 90% efficient PSU. I'm not aware of a PSU that is that efficient at 1300 watts but we're just making assumptions at this point. Point is every time you crank that system up it's going to pop a breaker unless you have nothing else running on that circuit. If CPUs continue to increase in watt demands you could easily surpass the ability for a 15 amp circuit to provide sufficient power. What then?
@@ToolofSociety In Germany you can hook up 16A at 230V per circuit. I can run my 1200W sim rig, my 2000W pizza oven and some power tools at the same time, no problem. Though I can refrain from using my angle grinder while baking pizza if that means I get a GPU that runs a 12288x2560 resolution for my racing sims.
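(The breaker arithmetic from this exchange as a tiny script; the 90% PSU efficiency and the hypothetical 1000 W GPU + 300 W CPU system are the numbers assumed above.)

```python
# Worked version of the breaker math in this exchange; 90% PSU efficiency and the
# hypothetical 1000 W GPU + 300 W CPU (1300 W) system are the assumptions used above.
def wall_draw_w(system_w: float, psu_efficiency: float = 0.90) -> float:
    return system_w / psu_efficiency

draw = wall_draw_w(1300.0)
for mains_v, breaker_a in ((120.0, 15.0), (230.0, 16.0)):
    circuit_w = mains_v * breaker_a
    print(f"{mains_v:.0f} V / {breaker_a:.0f} A circuit: {circuit_w:.0f} W available, "
          f"PC pulls ~{draw:.0f} W ({draw / circuit_w:.0%} of the breaker limit)")
```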
My take on 12VHPWR is I'm going to avoid purchasing a video card that uses it for as long as possible. I still don't trust it enough to use it in my primary PC which is usually on 24/7. Although I am encouraged by the updates to it in the past 2 years. Seems like many of those updates should have been worked out BEFORE it was widely used in some commercial products.
I'm hoping AMD will keep using the 2x6 and 2x8 plugs for as long as they can. AMD seemed to nope out of using it on the 7000 series. I open up my desktop case enough that I might bump a cable. It's getting less frequent (for example: hard drive and SSD storage capacity nowadays is huge), but it's mostly a habit. Still, I'd rather have some wiggle room when I use the electronic duster that might wiggle cables around.
We do that constantly though, and at similar scales. USB-C can support between 25 and 250 watts depending on the spec, yet the connector remains the same, as does the cross-section of the wires used (with some variation). The issue is not "smaller cannot support more power"; the issue is "smaller might require more skill to properly plug in/operate". The 12VHPWR connector is just not designed to be used by the average human. Its "safety", the seating of the clamp, is very easy to miss compared to the old connector, which was both audible and haptic due to more material being in the clamp. The old 6- and 8-pin was also bigger with fewer pins, making each pin easier to see and therefore gaps more obvious. The 12VHPWR connector was developed in a lab for lab use and never saw the eyes of someone who knows how people are. It's like car engineers and mechanics: some are hated by the mechanic, others are mechanics as well as engineers.
12:44 the survey is not normalized for time. Micro-Fit (6 and 8 pins) connectors have been used for video cards for around 2 decades, whereas 12VHPWR at the time of the survey had only been out for what, 2 years? Normalized for time, that means we are looking at about 0.2% and 2%, respectively. A staggering difference in failure rate.
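(The same normalization as a quick script, using the comment's own assumptions of roughly two decades for the 6/8-pin connectors versus roughly two years of 12VHPWR at survey time.)

```python
# The normalization the comment above is doing, using its own assumptions
# (~20 years of Mini-Fit style PCIe power vs ~2 years of 12VHPWR at survey time).
pcie_failures, pcie_years = 0.033, 20.0
hpwr_failures, hpwr_years = 0.040, 2.0

pcie_per_year = pcie_failures / pcie_years   # ~0.17% of users per year
hpwr_per_year = hpwr_failures / hpwr_years   # ~2% of users per year

print(f"PCIe 6/8-pin: {pcie_per_year:.2%} per year")
print(f"12VHPWR     : {hpwr_per_year:.2%} per year "
      f"(~{hpwr_per_year / pcie_per_year:.0f}x higher, on these assumptions)")
```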
They need to move away from using multi-pin connections to deliver high amperage. Instead they could use a simpler and more flexible 2-pin connection, like the ones I use that can easily deliver 600W+ of power. Better yet, move from 12V to a higher voltage like 24V or even 48V and be able to deliver more power at less amperage; it's the amperage that causes the melting in connections.
The very first time I saw this connector I thought "wow that's really small for 600w." How this was signed off by any of the PCI SIG members, let alone Nvidia actually _use_ it is beyond me. I've been building PCs for almost 3 decades and I've seen Molex and 8 pins melt if they don't get a solid connection. For anyone to think pushing 4X the power through a connector that's even smaller is a good idea is simply challenging the laws of physics. Especially when it could easily have been made much more robust while being barely bigger, and still be notably smaller than 2 8pins, let alone 3.
The stupid thing is that Molex has the Mini-Fit Sr range of plugs and sockets, which is rated at 50A per pin, so while the pins are bigger you only need two of them for 600W, and the connector ends up no bigger overall.
Well, USB C can do 240w so the size isn’t actually an issue, as other comments have pointed out it’s more the amperage. I hope I’m not mistaken lol but IMO 24 pin motherboard power connections bother me and I like small connections
@@raypav USB C 240w uses 48V at 5A max while 12VHPWR uses 12V at 50A max. Heat generated is I²R, so the increase from 5A to 50A leads to almost 100x increase in heat generation
@@x1000plusx You're missing the resistance part of the equation - V = IR, so 48V at 5A = 9.6Ohm resistance. 12V at 50A = 0.24Ohm. 5A^2 * 9.6Ohm = 240W, the heat generated by the 240W rated USB C spec. 50A^2 * 0.24Ohm = 600W the heat generated by the 600W 12VHPWR spec. Putting 2.5 times the power through a plug similar in size to USB is still ridiculous though.
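(The two calculations above measure different things: one is the heat in the connector contacts, the other is the power delivered to the load. For connector heating, what matters is I² times the contact resistance, which stays roughly fixed regardless of load. The 1 mΩ per contact below is a made-up illustrative value.)

```python
# The heating that matters in a connector is I^2 * R_contact, where R_contact is
# the (roughly fixed) resistance of the mated pins -- not the load resistance.
# The 1 milliohm per current path below is a made-up illustrative value.
R_CONTACT_OHM = 0.001

for name, volts, amps in (("USB-C EPR 240 W", 48.0, 5.0),
                          ("12VHPWR 600 W", 12.0, 50.0)):
    heat_w = amps**2 * R_CONTACT_OHM
    print(f"{name}: {volts:.0f} V x {amps:.0f} A, ~{heat_w * 1000:.0f} mW lost per contact")
# Same contact resistance, 10x the current -> ~100x the heat in the connector,
# which is the point the I^2*R comment above is making.
```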
I've never been a fan of this 12vHPWR mess. If I upgraded to a card using it, honestly? I'd trust it more if I hardwired it by soldering wires to the thing, rather than using 12vHPWR.
I remotely troubleshot one of these not being plugged in (a friend thought her new GFX card was broken). I almost lost a friend, as she swore the plug was in... I have a pic of a ~0.5mm gap and the card wouldn't even power up. She said it took all her might to seat it fully after I convinced her that it wasn't seated... These plugs are super flawed.
I like that the 12VHPWR cables from Seasonic just use two 6-pin PCIe plugs on the PSU side. Which to me shows that a 6-pin PCIe plug could easily provide 300 watts. And I'm sure they still have a safety margin.
Wow, crazy info! When I found a great deal on an RTX 40 series card, I made sure to spend the extra money to purchase a PSU that supported a direct 12VHPWR cable connection without use of adapters, just because I had heard that a handful of people had melting issues (likely through you guys). Based on this video it looks like I made the right decision! I even removed a cable comb setup I had planned to allow my GPU cable to have a lot more slack in it. Just indispensable info here guys. Thanks for all your hard work.
Oh FCK yeah dude, I'm sitting on a 12th-gen Intel build ATM with a CORSAIR RM850 CP-9020235-NA 850W PSU. I've been seeing yours and others' coverage of 12VHPWR and honestly it has kept me from upgrading my RTX 3080, in part because the entire circus around cramming more watts onto less copper seems just awful. Thanks for making this video - I still have to figure out what the hell to do moving forward, but hopefully all this sorts itself out by the time I go to upgrade!
@@IntegerOfDoom The spec doesn't cover the materials used on the connector if I understood this excellent coverage correctly. The use of substandard materials or manufacturing is a risk for any power standard.
Be afraid, be very afraid. This is what happens when you are terminally online and only follow negativity and clickbait thumbnails on RUclips. Imagine still believing the connector is a problem when an entire generation and millions of GPU users have used it without a problem for 2 years.
@@Glotttis I appreciate GN's approach to root cause analysis, as well as giving a full timeline of Events, ATX/PCI-SIG Revisions. I work in engineering and respect attention to detail. Fear mongering, this video clearly ain't. But thank you for your opinion on my post. I'll be sure to file your constructive post in the appropriate part of my mind.
@@FrenziedManbeast GN did plenty of fear mongering. For example, GN claimed that the PS5's memory chip cooling is bad and PS5s would start dying en masse. Hasn't happened. All electronics have a chance to fail, and you can make an hour-long video about one electric kettle that broke. It doesn't mean that all other electric kettles will break. Nvidia repeatedly said that RMA numbers for RTX 40 GPUs with the 12VHPWR connector are in line with all previous generations. I can appreciate some of the investigations that GN is doing, but they also often tend to harp on too long, to the point where it starts to get uncomfortable. It's a fact that negativity earns more money for RUclipsrs than positivity. Anyway, no point prolonging this discussion any longer, because RUclips comment sections tend to have a cult-like mentality. Even the slightest criticism of a beloved RUclipsr is seen as the biggest threat by fanboys.
I have an Asus RTX 4090 Strix White Edition and have been using the cable it came with. When I connected mine it gave an actual click and was fully inserted, and so far it has been doing its job perfectly.
As an engineer: when the 4090 was released... it intrigued me for my workload. But I refused to put one in my system because of the potential fire hazard. The math on the pin density never mathed for 600W. It only maths to 480W. And that leaves no overhead for the voltage droop, spikes, etc... There needs to be a lawsuit!
Also as an engineer, I just opted for the 7900 XTX and it’s been superb. Sure, Nvidia still takes the cake with ray tracing, but there’s yet to be a single game I’ve played over the past year or two where I felt like I wanted to enable RT, and I can apply that $800 savings from the 4090 to my upgrade next generation. :)
I've been using my 4090 TUF OC since November 2022. I've had zero issues so far, but my question is: even if the adapter is plugged in correctly, does the risk of failure grow with time and usage?
As an engineer you should also already know that any open flame or melting like this is a direct violation of consumer rights and regulations. Or in simple words, this shouldn't be legal. In fact it's strictly illegal by law. Full stop. It blows my mind how these companies get away with it. I don't even understand how somebody with half-decent morals could sign off on this. I also don't understand what crappy variant of these connectors they use. We have been using similar connectors from Molex (the brand), Würth and JST for decades under similar load conditions. I have never seen such insane failures over thousands of tests.
I appreciate the timeline. It was nice to see a revisit / summary of all the events that happened around the 12VHPWR connector, and the format of a chronological timeline is a great way to organize the information. However, two sections of the video were IMO presented a bit chaotically.

The first one: 1:40 "we realized that we have to start from the beginning, like PCIE 6- and 8-pin cable getting created level of beginning" and then Steve jumps straight into failure analysis of CableMod's adapter, instead of starting with the beginning of the timeline / with PCIE 6- and 8-pin cables. And after watching the whole video, the part of the timeline near the end about CableMod felt like it was missing something, so I think putting the results from the failure analysis lab in there instead of at the beginning would've been a good idea narration-wise, youtube gods notwithstanding.

The second one: the part about the different specs is somewhat hard to follow. First of all, it would be nice if the introductions of what specs there are and where they come from were all done up-front, instead of intermixing them with what they say about the connector. Then quoting what they say about the connector could be done in quick succession, without interruptions, so that it's easy to compare and see what the differences are. I had to rewind a few times to make sense of it, so I made a summary here:

22:40 PCIE CEM 5.0 v1.0: "may optionally implement any of these features", so the sense pins are fully optional at this point

24:54 ATX 3.0 rev 2.0: "support for both sense pins is required for a PSU", so sense pins are required for the PSU, and the card is required to check those pins unless 150W is enough for it

25:34 PCIE CEM 5.1 v1.0: "the sense pins were required on the PSU side, and the GPU was required to monitor them", so now it's required on both sides

Also, notice how the second PCIE CEM is a different version than the first one. I think Steve didn't mention that clearly enough - I initially thought this was the same PCIE CEM that was mentioned before, and was confused that it now says a different thing than it said 3 minutes earlier in the video. Also, Steve says "the sense pins were required on the PSU side, and the GPU was required to monitor them" and that this conflicts with ATX... the way Steve says it, no, this doesn't conflict with ATX. Both say sense pins are required, and if you combine what they say about cards, "optional" + "required" = "required", so yes, if you behave as if it was required on the card side, you satisfy both specs.

Except that's not what the text on the screen says - it's: "it is recommended that [...] sense pins be connected to ground or shorted in the power supply". Idk what vocabulary PCI-SIG uses in its specs, but in most standards I've read, "recommended" is not "required". "Recommended" is a suggestion; it means that in most cases doing the thing is a good idea, unless the implementer knows a good reason to do otherwise. But that's ok, right? PCIE's "recommended" + ATX's "required" = "required". But wait: "be connected to ground" - irrespective of the maximum power the PSU supports? Or does the PCIE spec require that connector to always support 600W now? "Or shorted in the power supply" - the heck? That'd leave both sense pins floating unless the circuitry on the GPU's side to read those pins is completely redesigned! It makes sense in combination with the table from 44:52, which AFAIU is a different part of the same spec - I wish both parts were shown together.
If the point GN were trying to get across is "there are two different specs, each with multiple revisions, and it's hard to figure out which one to follow", then yeah, I got that part. But for anyone who tries to understand what the actual differences between the specs are, it's difficult to make sense of that information due to the way it's presented. Anyway, thanks for the research and the video, and keep up the good work!
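To make the sense-pin discussion above more concrete, here is a minimal sketch of how a card could map the two sideband pins to a power budget. The SENSE0/SENSE1 encoding below is the commonly cited ATX 3.0-style table and is my assumption for illustration, not something quoted from the video or the specs:

```python
# Hypothetical illustration of 12VHPWR sideband (sense pin) decoding.
# Assumed encoding: each sense pin is either grounded through the cable/PSU
# or left open; the exact pin-to-row assignment here is an assumption.
POWER_LIMITS_W = {
    # (sense0_grounded, sense1_grounded): (initial_W, sustained_W)
    (True,  True):  (375, 600),
    (True,  False): (225, 450),
    (False, True):  (150, 300),
    (False, False): (100, 150),  # also what a missing/bad sideband looks like
}

def card_power_budget(sense0_grounded: bool, sense1_grounded: bool):
    """Return (initial, sustained) watts the card may draw for a given strap state."""
    return POWER_LIMITS_W[(sense0_grounded, sense1_grounded)]

print(card_power_budget(False, False))  # (100, 150): card must stay at 150 W
print(card_power_budget(True, True))    # (375, 600): full 600 W cable
```

This is also why "optional" vs. "required" monitoring matters so much: a card that ignores the pins has no way to know it is hanging off a 150 W-class cable.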
Couldn't they use a more server-grade solution with ZIF latches or side connectors, yada yada yada, one of the thousands of low-cost solutions that servers have used for years now, instead of stuffing 500W+ into a classic-Molex-ish connector that sticks out "beautifully" at the center point of the VGA, making cable management a nightmare. 😢
@nemezzyyzz yeah, probably something like "we don't really care about gamer-grade hardware anymore, just buy this byproduct of our datacenter hardware"
@5urg3x That seems the most appropriate solution. Even some AIBs made cableless VGA+MoBo combos; I just don't like that they implemented it in a way that kind of blocks ITX designs (if all GPU designs became these cable-less designs with the server-like slide-in power connectors). But the power slot could be moved to the second slot, since most VGAs, even the ITX ones, are dual-slot nowadays. And I'd wish for a modular connector design: not integrating the power connector on the motherboard, but mounting it on the case's MoBo tray instead. (Man, the whole ATX spec is too old. It works, but it's old, and it's not like the oddly over-integrated NUC-alikes with CPU cooling capped at ~100W will save us lol.)
@@5urg3x Power through the PCIe slot is limited, and unless a complete redesign is done (which is long overdue), we cannot draw enough power through the slot, so we have to have additional power connectors.
PROUDLY SPONSORED BY VALVE
From what I've managed to observe. Everything was fine until the eleventh generation processors. Before e-cores and pcie 5.0 appeared. Or companies don't want to pay highly qualified employees because the columns in the spreadsheet don't match in the management. And they think that everyone is going to be replaced and go to the streets.
This is absolutely incredible work. Fantastic insight. Extensively detailed investigation.
I have an idea. Just freaking solder the wires to it. There, fixed!
Kinda hilarious title from the people who famously just said that if you have issues with 12VHPWR it is user error.
I like MSI's solution. Their PCIE-5 PSUs come with a 12V2x6 socket and a 12V2x6 cable with yellow pins. Essentially, if you can still see yellow after plugging in, it's not plugged in all the way. You have to push until you can no longer see yellow. Look up A750GL or A850GL.
I just built a PC for someone with one of these and I was impressed.
Actual 5.0 compliant PSU released along the timeline of the 4090. This was the way 👍
For its faults, MSI has some product managers we've met who are very build conscious / ease-of-installation focused. Awesome to see those voices occasionally winning amongst all the marketing!
I like how ASUS does it with their BTF connection: an extra part of the PCB for the power.
You know the reporting will be next level (for RUclips) when they outright disclose the outside reviewers of their content they are presenting. Kudos for that
I work as an electrician in germany. I mostly saw this kind of failure in outlets which were used to their maximum capacity at regular intervals. They used normal 230V sockets which are rated for 16A for their forklift chargers. What they didnt know was, that these plugs were actually rated for 10A continuous operation. At first nothing will happen. But everyday the socket was overloaded and got warm during operation. When it cooled down again a little bit of condensation formed inside the connector and the resistance got a little bit higher. This cycle continued for while until the socket failed completely and caught on fire.
So the actual failure was not that the connector was bad, but it got overloaded everyday and the cycle between cold and warm increased the resistance until the failure accured.
I could also imagine that this factor can also come into play with the 12VHPWR.
But I cant understand why they would put such a high load on such a small connector. And the fact that you have to be extra careful to not stress the pins while installing these is a huge design oversight. Just look at an XT60/90 connector and how easy it is to connect those.
Would be nice if they would build some kind of failsafe into the connector such as a PTC thermistor. If that would be implemented in the future you could give the user a message that the connection is bad before the connector melts and shut down the card into a low Power mode.
Apparently, cable design is too hard, advanced a technology, for a manufacturer of computer parts from the world's highest-valued company, such as NVIDIA.
The main question is: do we need that much power on low-quality, crap-micro-component-filled PCBs designed by greedy idiots?
@@Miparwo they only made a few trillion on ai bullsh1t... They don't have the spare cash to engineer a proper cable.
"But I cant understand why they would put such a high load on such a small connector"
I think that the answer to this is pretty simple: they wanted to go the Apple route with just one small, sleek & sexy connector to look "high end" engineering be damned. Thing is you can't cheat physics (something you know even better than I do) and well it's come to bite them in the ass.
@@primodragoneitaliano This is also my guess. It's all about the aesthetic.
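Circling back to the PTC-thermistor failsafe suggested in the electrician's comment above: a rough sketch of what that could look like in a card's firmware. Everything here (helper names, thresholds) is hypothetical and only illustrates the warn-then-derate idea, not any real driver API:

```python
# Hypothetical connector-temperature failsafe, per the suggestion above.
# read_connector_temp_c() stands in for an ADC reading of a thermistor
# placed near the connector; set_power_limit_w() and notify_user() are
# likewise placeholders.
WARN_C = 70.0     # assumed warning threshold
DERATE_C = 85.0   # assumed threshold to force a low-power state

def check_connector(read_connector_temp_c, set_power_limit_w, notify_user):
    temp = read_connector_temp_c()
    if temp >= DERATE_C:
        set_power_limit_w(150)  # drop the card to a safe draw instead of melting
        notify_user(f"Connector at {temp:.0f} C: power limited, reseat the cable")
    elif temp >= WARN_C:
        notify_user(f"Connector running hot ({temp:.0f} C): check cable seating")
```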
Awesome to see that my now broken PSU got a spot in the video. Thank you so much for all your work for this community!
I can't believe I just spent an hour watching a video on a connector, but enjoyed every minute.
I didn't know this video was an hour long until I saw your comment
No way it was 60 minutes, now I wish it was longer. Thank you Patrick for the detailed analysis.
A connector I'm not even using in my case. GN has done some amazing journalism and it's always engaging to watch no matter the subject.
can recommend Technology Connections
you watch a 2 hour long video about Dishwasher Detergent without noticing
That's because it's user error.
Incredible video. Absolutely groundbreaking and amazing journalistic work here. A true embodiment of what the SPJ Code of Ethics stands for.
Keep on keeping on, GN.
Much love from Seattle
Coke much? 😅
I think I speak for the community when I say:
Thank you for your support to GN ❤
Some of us have financial difficulties and are unable to properly compensate GN for the work that they do. We appreciate you.
Wow - massive donation. Thank you for that. It is sincerely appreciated and will go straight back into the next one. Also, it's been years since I was out in Seattle (PAX Prime), but I love it out there! Great area.
Can you send me $100.00 I need to buy food.
@@emilgustavsson7310 I am just here for my free donut to go with the glaze
I'm a new subscriber here, only been tuning in for a few months. I have to say the integrity of this channel and the depths you go to in order to ensure you're not providing false info / misinfo are so refreshing. When the F was the last time you heard anyone on RUclips say the words "peer review" outside of a dedicated scientific journal? I doubt you'll see this comment (currently 3,396 other comments) but please keep doing what you're doing, the education you provide is invaluable
Comment seen. Thank you!
These videos are exactly why I'm proud to be a paying supporter/member of this channel!
Thank you Steve for your incredible integrity :)
And also for signing my sweet modmat - I'll be breaking that out again in a week for another PC I'm building for a friend :)
Thank you for the support! And glad that the modmat is getting use!
@@GamersNexus
My brain unit gets use when I watch your content
I can’t believe how lucky we are to have GN. I feel like this level of quality, detail, professionalism, is something I never expected. Like this is real journalism, science, and dedication FROM A RUclips CHANNEL!!
Love you Steve and Team ❤
Very kind words. Thank you. We will keep trying to live up to them!
I feel the same
@@GamersNexus I still don't like how you spent the earliest days of this issue putting 99% of the blame on the user for "not plugging it in right" despite how easy it was to have it be the tiniest bit out and not even realize, plus this not being a problem with any other plug before this. Great that you're being thorough now, but you jumped to a wild and unfair conclusion at the start that made me start questioning your impartiality.
@jkazos
At that time we didn't know much at all about the issue. Improperly plugging in the connector is a failure point. It was a "new" connector type 2 years ago which most users are unfamiliar with.
I think you're stretching.
IIRC GN said (not an exact quote) "we don't have enough information, but here are our recommendations."
Saying 99% is a bit on the edge, no?
Edit: why can't I reply properly? It won't link the user.
@@jkazos Did you watch the video...? He quoted directly from his original video that if a design lends itself to not being plugged in right, the design is to blame as well. Sometimes companies are wrong, sometimes end users are wrong, or a combination of both.
The journalism we don't deserve but needed, well done & bravo to those who are involved!
Steve, I've been watching you ever since some of your very first videos. I cannot express my gratitude for you and your team's contribution to this industry and journalism as a whole. Keep on keeping on!
Thank you for sticking around for so long! It's been a long journey. Still love it!
Consumer electronics are required to use lead-free solder. AFAIK the only place you can use leaded solder in mass production is healthcare, aviation and military equipment.
The amount of e-waste made by lead free solder because of misguided policy is staggering.
Yeah it's a real shame IMO, the tiny bit of lead in solder helps so much more than it hurts. If govts would be willing to properly dispose of stuff this would not be an issue & we'd have much sounder electrical connections on our electronics. Thankfully you can still get leaded solder for personal use, it'll be a real sad day when they outlaw leaded solder across the board. Lead free introduces so many issues into what should be a very easy task of soldering a connection together!
Soldering with lead-free solder is not that much more difficult. We do it all day, every day in the industry. With the proper prep that is always needed for long-lived solder connections, it is not that different.
@@robr640 It's not down to the government to actually put it in the correct bin, or take it to the correct waste collection facility to make sure it's disposed of effectively. They can provide all the safe disposal sites they want. Joe Bloggs is still going to f**k it in the bin though. It's a consumer conscience problem. They don't care.
@@robr640 I'm glad they don't mass produce products with brain damage included
Excellent tech journalism! Tons of time spent by several GN staffers, so here's to you fine folks.
This looks like a gigantic mess. I am not an engineer, but when I saw them change the PCIe 8-pin to the 12VHPWR connector, I asked: why? I didn't understand why they were trying to push more current through a smaller connector.
That's the best part! It is a gigantic mess! Saw you edited your comment: We do address why they did that (at face value) in this piece, if you're genuinely curious what their reasons were!
@@GamersNexus yes, why?
One reason is to reduce space.
The other is maybe more power in one connector for the biggest cards: instead of 4x 8-pin PCIe they do 2x 12VHPWR.
Also a big mistake in testing products after fabrication.
The big selling point was the 12vhpwr connector being able to “talk” to the power supply better than existing connections. I’ve been into computers north of a decade and have seen the shift away from Molex (not a bad thing) to PCIE power connectors and now this. This connector is plagued by the same “engineering” principles that have caused people to loathe getting a new car. It’s great on paper but a train wreck in practice.
@@jonathanjones7751 That’s the most ridiculous part because most cables have the sense pins wired within the male connector to use 600W - the sense pins on the GPU and the PSU actually don’t connect
@@jonathanjones7751 This power connector is cutting-edge-right up there as the new design wave! Just like Starliner and the Titan submersible! 🚀🌊
What could possibly go wrong? 😅
Grabbing my popcorn now! Thanks Steve.
Same 🍿
I'll grab the 12VHPWR to pop the popcorn.
Maybe we should just put all the longer videos into a playlist called "popcorn"
@@GamersNexus I'll watch the whole playlist!
@@GamersNexus please!
My first question when I saw the size of the 12VHPWR connect was, "why is it so small?" Why does everything HAVE to be small?
Regardless, please keep up the good work. I look to you guys for information on current industry issues, despite being an enthusiast and not a professional.
Can I have $20 to buy candy ? 😂
@@nitrowarrior-lj5ip here, take it. just write down here your credit card info and dont forget 3 digits from behind
It's like all connectors are getting smaller. It's the same with SATA and HDMI. They randomly have bad connections that can be fixed with wiggling. The old RCA connectors never had that problem.
@@tinkerman5220 Never had a problem with either SATA or HDMI. My friend's second phone with Type-C always starts to fail after about 1-2 years of usage, while I've used Type-C phones for more than 5 years and never had that problem. I guess the problems appear when he charges phones in his pocket with power banks.
I've had so many people tell me that my own card is all fixed now that the previous discussions had settled. I'm so glad you decided to revisit this Steve because the dialogue needs to be revisited to address this horrible spec. Even the basic nomenclature is completely backwards in the white paper. 😂
I had my original 12V connector that came with the power supply. It's been a year with no issues whatsoever; strange how reports keep coming out about this issue.
Excellent video.
One thing that I've been wondering about ever since the 12VHPWR connector was announced is: why didn't the PCI-SIG switch to a blade-type contact design? Blade-type connectors have been the de facto industry standard in high-current applications for decades, with nominal ratings well over 15A for a single contact, and some of them going up to 100A per contact in off-the-shelf designs that have been available for quite some time.
@@hexarith You mean like Mini-Fit Sr at 50A per contact 😀 Given all the other connectors are Mini-Fit Jr, that would be the obvious option IMHO. Since the new plug means a new PSU anyway, a 20V option would give headroom for 1000W to the GPU on two pins, and while the individual pins are larger, you only need two, so it would be similarly sized.
yeah that is the weirdest part to me
Then NVidia wouldn't have owned it and been able to take the credit. They tried to "push the envelope" by being cheap, and surprise surprise, engineers "overbuild" things for a reason. "Overbuilding" is about knowing everything's not going to be perfect. It's reminiscent of Apple's butterfly keyboards.
Ring terminal and a screw
@@jonathanbuzzard1376 * 20:29 * The GPU's PCB edge itself is a flat blade... and it's divided into small pins.
Man, these deep-dives and documentaries are simply top-notch. I don't know how you guys can afford to do these....I know it sure isn't the YT revenue lol....but I'm super glad you're doing more and more of these. Thank you!
As a hobbyist electronics guy that has built some PSU adapter cables and such for people, I decided to look into the difference between 12VHPWR and PCIe power connectors at a component level.
PCIe 6-pin and 8-pin connectors, as well as ATX power supplies, the extra 4-pin ATX12V board connectors, etc., use Molex Mini-Fit Jr. pins. I got to looking into what pins are used for 12VHPWR, and someone pointed out that it's Amphenol Minitek PWR 3.0, plus listed some part numbers, so I got to digging.
Looking at the drawings for female pins from each family rated for 16AWG wire, the MiniTek PWR 3.0 pins are smaller in the outside dimensions of the folded-rectangle shape of the female pins, and the pin contact length is shorter than the roughly-equivalent Mini-Fit Jr. female pins used for PCIe and other connectors. Oddly enough, the Amphenol docs list their pins as capable of up to 12.0A per circuit, while Molex lists theirs as up to 9.0A per circuit. I'm sure if I really wanted to dig and do some math, I could come up with the total contact area between the female receptacle and male pin for each family as well, but I think the point is clear by now. The MiniTek PWR 3.0 pins are going to have a smaller contact surface area between the male and female pin than the Mini-Fit Jr. pins even when engaged properly, yet the MiniTek PWR 3.0 pins are rated for a third *more* current than the Mini-Fit Jr. pins.
Since each pin in the 12VHPWR connector is rated for 9.5A by the PCI-SIG spec, that gives you a total supported power of 684W...In a perfect world. That gives you a 14% safety margin, again in a perfect world. Let's say that one of these six pins drops out due to a poor connection...Then you've got 600W running over five pins. 5*9.5A*12V is 570W, but you're trying to push 600W over those five pins...Congrats, you just put yourself at high risk of thermal runaway.
The safety margin isn't there with the Amphenol pins. IMHO, it would be safer to run a 6x2 Molex Mini-fit Jr arrangement technically out-of-spec than the 6x2 Amphenol MiniTek PWR 3.0 arrangement technically within spec in an improperly-installed plug/receptacle, simply due to the larger contact area with the Mini-Fit Jr. pins. Physics doesn't lie; the lower safety margin of the Amphenol setup puts the designer behind the eight ball right off the hop, then you add a connector that doesn't positively latch? No wonder these things are melting. This was a bad design choice on NVIDIA's part. NVIDIA is 100% counting on a perfect engagement every time between those twelve pins, and even one pin being off puts you out-of-spec according to the PCI-SIG spec. Once the first pin heats up, its resistance goes up...Which is going to drive more current to the lower-resistance pins, causing them to be over-spec even more and heat up as well.
I should state clearly here: the problem is in the pins. 16AWG will happily carry well over 10A for chassis applications. People have seen the same failure modes for 14AWG-wired 12VHPWR plugs. This is 100% a spec problem for the pins and plugs used for 12VHPWR, and the fault is entirely NVIDIA's.
Tack on recessed female end and lack of push/pull/squeeze tabs to avoid flexing the PCB during mating.
A design that has a single point of failure times twelve is... let's just say some concepts are only allowed in the computer world.
Yes, the very small engineering safety margin is shocking to me even (or especially) for a consumer oriented product. We often bake in 50% engineering margins or more for safety critical parts.
Thermal runaway is something that happens to transistors. Connectors simply overheat and catch fire.
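The margin math a few comments up is easy to reproduce. A small sketch using the 9.5 A-per-pin figure quoted from the PCI-SIG spec, and assuming (for illustration) that the load simply spreads evenly across whatever pins still make good contact:

```python
# Per-pin loading for a 600 W draw at 12 V across the six 12 V pins,
# assuming an even split over however many pins are actually in contact.
RATED_A_PER_PIN = 9.5
LOAD_W, VOLTS = 600, 12

def per_pin_current(pins_in_contact: int) -> float:
    return (LOAD_W / VOLTS) / pins_in_contact

for pins in (6, 5, 4):
    amps = per_pin_current(pins)
    margin = RATED_A_PER_PIN / amps - 1
    print(f"{pins} pins: {amps:5.2f} A/pin, margin {margin:+.0%}")
# 6 pins:  8.33 A/pin, margin +14%
# 5 pins: 10.00 A/pin, margin -5%   <- one lost pin already exceeds the rating
# 4 pins: 12.50 A/pin, margin -24%
```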
As I predicted when this whole 12VHPWR fiasco blew up: it's a connector designed and built for the bare minimum, just scraping by in terms of overhead. I suspect this was a pennies-on-the-dollar decision, making the CHEAPEST possible connector that could do the job. They could have taken the old 8-pin PCIe plug, added the extra 4 pins and sense wires... and STILL have had a way more reliable and redundant design with marginal overhead. Considering there have been literal DECADES of connector development, know-how, etc., this just seems LAUGHABLY undercooked (no pun intended). I truly hope whoever designed and signed off on this does not work at PCI-SIG anymore.
Or switched to the Mini-Fit Sr connector range, which is rated at 50A per pin, so you would only need 2 pins for 600W.
If it was ready for daylight it would have been on the Arc cards, not NVIDIA's. Intel WARNED PEOPLE about this ahead of time, as they were directly involved in creating it.
I blame DEI heir engineers and managers.
He does. It's John. John sucks.
@@markburton5292 That is one of the stupidest things I have ever seen
It is unimaginable that you go to such incredible lengths to be correct and accurate.
This is no doubt the best content creator of the genre.
Thanks - I remember when you first posted coverage on the Cablemod adapters I had been having trouble with my PC resetting - that led me to replace the adapter and Cablemod cable with an "official" 8 pin to 12vhpwr cable from Corsair and the problem went away. I had originally planned to replace the entire PSU.
Wow! Big donation. Thank you. Glad that you were able to save your PSU and keep it simple!
GN feels like modern day Sherlock Holmes going after criminals and I'm here for it. Impressive work all around. As a phd level scientist in medicine, I appreciate your efforts for transparency and reproducibility. Its refreshing to see rigorous science being applied in the consumer tech world. Cheers!
What a clumsy boast and misrepresentation of Sherlock Holmes.
@@RegenTonnenEnte I would love to hear your breakdown of your comment.
@@joshschmidt8784 Because Sherlock Holmes solved murder cases through deduction and didn't expose dodgy electrical engineering by asking some industry experts.
@@joshschmidt8784 got deleted by YT... well basically he just isn't and I fail to see *any* parallel.
@@RegenTonnenEnte I think he was declaring his expertise in the field of discussion and I imagine paying a well deserved compliment to GN. But maybe English isn't your first language.
You are the only working investigative tech journalists left on this planet.
Awesome research, it's so cool that you guys worked hard and now are able to afford making this type of content! GN deserves every bit of success it has and then some. Also love that Aris and other experts were used. Thanks Patrick, Steve and team! Looking forward to the next awesome content.
If only there already was a connector able to deliver 300W in the same footprint as the PCIe 8pin, widely proven and already in mass production for all modular PSUs...
Wait a minute
The one for the Outlet in the wall.
They could have just kept the pitch size of the old PCI-E power plugs and used the pin layout from 12VHPWR, so 6x 12V and 6x GND plus their sense pin thingy. The way I see it, the smaller size is the main culprit.
Nvidia DC GPUs even already use EPS for power which I think is what you're getting at!
But then how are the poor power supply companies supposed to make money?
Did Nvidia ever explain why they made this cable standard? I don't get it.
I've watched a ton of Northridge Fix's videos. Alex is a BEAST at microsoldering. I think he said it best. You can say what you want about angle of insertion, debris etc but the point stands it's a connector. You plug it in, it clicks and you don't think about it again until it fries your $2000 graphics card. You shouldn't HAVE to think about how you plug it in at all.
Dude is a rip-off artist. He literally has a video of him putting a new connector on a 4090 with the connection side of the plug facing INSIDE the card, meaning it was unusable for the customer, and he still charged them for a full fix. The guy is notorious for being incredibly shady and all about making as much money as he can off his customers.
@@xnitropunkx none of that has any bearing on my comment in the slightest
@@QactisX Oh, so you want me to get into how he's not good at microsoldering and just floods PCBs with shit flux to make his job easy? Keep gassing up a dude who makes a living ripping people off. You are directly making a comment that would cause people to assume he's good at his job when he's not.
@@xnitropunkx You are making a comment that is extremely annoying and negative, cursing me out for no reason when I noted he's good at microsoldering and has some valid points about the 12V connector. Sounds like you work for Nvidia or Intel and don't like hearing that it sucks, or maybe you work at Joe Shmoe's Solder Shack down the road in Northridge and he's taking all your business, so you're on GN smearing.
No I don’t want you to go into it
@@QactisX Some people just can't get over it
Thanks heaps for this Steve & GN team. I've been really curious and worried about this overall issue. You guys are doing fucking brilliant work.
Thanks so much for the kind words! We're really trying to focus on gradual improvement of the formats!
"THIS IS BAD"
Thank you Steve
EXCELLENT coverage of a potentially multi-level catastrophic event that could have been exacerbated if the cards were in a more affordable price range. The initial buy-in cost of these cards largely kept them out of the budget, high-volume segment of retail sales.
I'll always be thankful for your help with my warranty claim with ASUS, Steve. Thank you!
This video is the perfect example of why this channel is my go to for information you guys triple check every thing before making a statement to be as accurate as humanly possible thanks for all your hard work over the years
It's just pure fortune that I can watch this kind of quality video almost for free
I've worked a lot with high amperage/low voltage electronics. I knew from the beginning that those pcb adapters were going to be a problem from the moment they were announced. We did a lot of experimentation with high current lipos years ago and we constantly worried about the wired connectors we used, and those were from reputable, professional industry/military companies that have been making high current connectors for decades.
That's really what blows my mind. Hobby RC electronics, for example, have proven that more appropriate consumer-grade connectors are possible and even available quite cheaply as of recent years. Paralleling up small friction-fit connections is rarely a better solution than one solid low-resistance one. Some even contain sense pins already. And on top of that, silicone wire with its flexible jacket and high strand counts would practically eliminate the bend strain caused by these atrociously stiff meshed/bundled PCI power cables. It even comes in all sorts of fun colors for your customization desires haha!
On top of the reduced ampacity compared to the original connector, along with less contact area in the connection, it's a mess. Any power engineer should have seen this coming. I did. Most did. All this root-cause work is for those who lack any basic understanding; the root cause is obvious: not enough. Any connector pin that fails will cause a cascade. Any of the other things that might get listed are just sensationalism. The how and the why all point to the real root: ampacity and quality connections. There simply isn't enough. The mechanics and quality of the connector just make it worse. Delving into that is nothing more than entertainment. Yup, a crack at Steve, who blatantly said everyone was wrong. Basics are enough in most cases. Just jumped to the end to get this comment out. Test and test, build and build, technician. Leave the design to those who do it. Comment on it, but stop throwing shade. Best Buy at its best.
@@Wiresgalore Funny you say that, because we actually tried those at first. We needed a large bus of separate power adapters all connecting to the batteries at the same time. We glued them all together after plugging them in, hoping to be able to plug and remove them all together to make it easier. That's when we realized how sloppy the tolerance for fully seating each connector actually is. The metal clips inside the connectors would not fully seat a lot of the time, even when the plastic was flush.
Also if we're starting to expect GPUs with 600W and more, maybe we need to move away from 12V and go for 24V connectors. Nvidia could do it if they wanted and force everyone else to deal with it
@AlbertScoot That's fair. I guess the point I was reaching towards is that so many better options do or could exist, whereas this whole debacle feels like beating a dead horse, trying over and over to make something fundamentally flawed for its purpose work anyway.
You guys are a beacon of trust and knowledge in an environment ridden with shady and negligent companies.
A peer-reviewed channel. This is groundbreaking. We need more of this!
I've worked in electrical engineering for ~25 years and one thing I still can't get my head around is why PC cable design still uses multiple cables instead of a larger single conductor. I can only assume it's an economy-of-scale 'problem' and they only have to buy one cable size.
If they started making interconnects using flexible 20 sqmm or so (4 AWG) cable and used a larger, more beefy connector, literally none of these issues would ever exist.
The problem with using lots of smaller connection points is that they won't always be equal; it only takes one connection point that's not as well mated to start creating heat, oxidising, and creating more problems.
They just need to start spending a little bit more on bigger connectors and bigger (fewer) cables, and this will all be a thing of the past.
This sounds like a PERFECT solution! Like to upvote! 👍🏼
I miss using the "~" symbol as you just did. For some reason they neglected to add it to the keyboard on this computer and I have been left writing only in exactitudes ever since, with no approximations.
I do IT, but started with cars, and kind of what I thought, like why not a big ol screw down connector like they do with car audio amps (and some batteries). That or I think Asus had a power connector slot just after the PCI 16x slot with beefy connectors (GC_HPWR). Even easier if motherboards go full 12v (ATX12VO).
Stability is the reason why, by spreading the load out over multiple smaller cables you compensate for voltage drop and surges, with just one cable the card could crash trying to go from low to high load if there isn't enough power coming through that single cable.
We're talking electronics running at less than 1 volt, the precision really does matter.
@@celeriumlerium8266 This has nothing to do with the amount of cables you use, but the conductor surface area, which was the very point I was making. As long as you spec a cable of sufficient size to negate any voltage changes, stability has nothing to do with how many conductors you have.
Another person mentioned car audio, which is a good example because it's also around 12V and you also don't want voltage drops because they can damage equipment. Car audio folks often use huge cable like 4/0 gauge (120 sqmm) because they are dealing with massively higher current than anything in a PC and don't want any voltage drop.
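To put rough numbers on the "fewer, bigger conductors" argument in this thread: a quick voltage-drop comparison using standard copper resistivity. The wire sizes and the 0.6 m run length are my own assumptions for a typical PSU-to-GPU cable, not values from any spec:

```python
# Rough voltage-drop comparison for the 12 V side of a 600 W load (50 A):
# six paralleled 16 AWG strands vs one 4 AWG conductor, out and back.
RHO_CU = 1.72e-8                    # ohm*m, approximate copper resistivity
LENGTH_M = 0.6                      # assumed one-way cable length
AWG16_MM2, AWG4_MM2 = 1.31, 21.2    # nominal cross-sections

def drop_v(total_amps, area_mm2, parallel=1):
    r_one_way = RHO_CU * LENGTH_M / (area_mm2 * 1e-6) / parallel
    return total_amps * r_one_way * 2   # supply + return path

print(f"6x 16 AWG: {drop_v(50, AWG16_MM2, parallel=6) * 1000:.0f} mV drop")  # ~131 mV
print(f"1x 4 AWG:  {drop_v(50, AWG4_MM2) * 1000:.0f} mV drop")               # ~49 mV
```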
The craziest thing about this is that we could just use a re-branded EPS cable like they use in servers. The safety margin on them is so big it could carry the same amount of power as the 12-pin and still have a larger safety margin.
Bingo!
my RTX A6000 actually has an 8 pin EPS socket on it! no reason why they couldn't do that on consumer GPUs too
the whole PCIe connector thing is stupid, yes. Also would make wiring up a PC less confusing for beginners by standardizing on just one type of 8 pin
@@tommihommi1 confusion engineering 😹
@@tommihommi1 Not all power supplies have a single 12V rail, so this is probably why they decided to have different connectors.
Great compilation of the events... a summary of a long and complicated story!!!
Thanks for the measured pacing, which makes it possible to follow along even through RUclips' translation!!!
When the design spec calls for tight tolerances, but corner cutting is your business model:
someone pin this man's comment
"Did you achieve a 5 degree Celsius reduction in temperature?"
"Yes."
"What did it cost you?"
"Everything."
This wasn't even a reduction in temperature. More like +300°C if you rolled the dice wrong or plugged the socket in loose without double-checking.
Though I wonder what the resistance difference is between regular PCIe and 12VHPWR.
I don't see how a melting connector can run 5C cooler than the traditional connectors.
@@Lurch-Bot That's because you have not replaced 90% of your brain with CUDA cores. Poor choice friend.
Holy shit, I'd like to thank y'all from the bottom of my heart! I had recently put together my first PC build and I wasn't sure about my GPU cable. I pushed it in as far as it would go, made sure the cable angle was okay and assumed it would be fine, but this video made me double check, and sure enough it wasn't fully seated! It took waaaaay more force than I would have liked (had to grip the back of my GPU to make sure I didn't break the connection to the motherboard @_@ ) but I finally got it seated right!
Y'all may have just saved me a lot of heartache and money, thank you for all the info!
Steve, an interesting part you did not pay attention to is that the problem is not only in the 16-pin connector but also in the board's design. Nvidia changed the power design in the transition from the 3090 Ti to the 4090. As far as I know, the 3090 Ti had 3 separate power lines, and with a problem on one of them, the card turned off. The 4090 has all the lines working as one, so the card continues to work even when some of the power pins in the connector are not connected as they should be, or are overloaded because others are not connected or have poor contact.
Nvidia can solve the problem, at least partially, by adding overload protection to each of the 6 power lines of the connector, or at least to 3 lines of 2 pins each. When at least one of the contacts is not connected or carries excess power, the video card should turn off.
Please like it so that Steve can see this.
Indeed. Any design that solely relies on the power distributing itself automatically over multiple parallel paths is flawed. The moment there's an imbalance, one of the paths will be overloaded and the whole thing will fail. Parallel paths either need to be redundant, i.e. every single path can take all the load, or they need to be at least monitored, but better actively managed.
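A sketch of the kind of per-pair supervision being suggested here. The telemetry helper and thresholds are hypothetical; the point is only that a card with per-pair shunts could notice one pair hogging current and shut down instead of cooking the connector:

```python
# Hypothetical per-pair current supervision for a 6-pair 12 V connector.
# read_pair_currents() stands in for per-shunt ADC telemetry on the card;
# shutdown() stands in for whatever protective action the card takes.
MAX_A_PER_PAIR = 9.5     # assumed per-pin limit
MAX_IMBALANCE = 0.35     # assumed: no pair may exceed the mean draw by >35%

def supervise(read_pair_currents, shutdown):
    currents = read_pair_currents()   # e.g. [8.2, 8.4, 0.1, 11.9, 11.7, 9.7]
    mean = sum(currents) / len(currents)
    for i, amps in enumerate(currents):
        if amps > MAX_A_PER_PAIR or (mean > 0 and amps > mean * (1 + MAX_IMBALANCE)):
            shutdown(f"Pair {i}: {amps:.1f} A vs mean {mean:.1f} A, check the connector")
            return
```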
Oh boy here we go, another exhilarating 60 minutes of a deep dive that I don't truly understand but I watch because it's GN. Thanks Steve.
Thank you GN, for the excellent work. This is exactly why I'm a member, and I don't plan on stopping.
as someone who works in aviation, specifically avionics/wiring for most of my job, this was a fun video to watch. get to see the small scale view of what happens when shit isn't made properly.
Steve, you and your team's work has always been the cream of the crop, but these days you are putting out next-level videos, man... you are, without question, the authority when it comes to real, accurate information in the CPU world. Thank you so much and keep up the great work!!!
Yes worship him, he is tech Jesus after all 😂
I like a few things about the channel. The main thing that keeps me watching this channel over others is the attention to integrity in reporting; Steve and the others fact check early and often. They also aren't afraid to criticize their very own advertisers (EK Water Blocks comes to mind), and they have to tread carefully on these sorts of issues because you don't want to lose access to future products and reviews in this line of work. They also haven't been afraid to admit that they're not subject matter experts, something that not many people on the platform say. I have to try to ignore the "attention grabbing" title cards on each video, but I get that's a part of trying to keep audience engagement on RUclips these days.
Literally, just sent in my HX1200 (barely over a year old in use) for warranty replacement because of inconsistent power delivery and was using the CableMod adapter up until the recall on my 4090. I'm almost certain it caused damage to the card -.-
As always: all the hard work, investigation and dedication you've all put into this sh*tshow is MASSIVELY appreciated.
GN videos should be added to IMDb. The production quality is always top notch.
Glad to see you finally follow up on this. I always felt that the initial conclusion of "user error" was lacking and incomplete.
It was complete for the units we tested. The conclusion was that it was a mix of improper insertion for the units we tested and the failures of that era plus design oversights, and this expanded as more cable designs came out later to trigger this piece (which could not have existed at the time, because many of these changes didn't exist yet).
Awesome work as always! Just a reminder to new folks: if your GPU uses 2x 8-pin or even 2x 6-pin connectors, use one PSU VGA cable PER socket on the GPU, i.e. use 2 individual cables for your 2x8 or 2x6 configs.
I've been saying for two years now: This plug cannot handle 50 amps, and the proper way to have handled this was to change power supply standards and GPU standards to power GPUS off of 48 volts instead of 12 volts. This would reduce the current by 75% for a given number of watts, and would completely eliminate the overheating and failure of these plugs.
When I first started saying this, people mocked me, asking if I thought I knew more than the engineers at Nvidia. Plainly, on this subject, I did. Based on almost 50 years in electronics, I knew immediately on seeing this connector that it would not hold up at 50 amps of current draw. And renaming the plug is not going to help.
These plugs need to go away, but they're not going to. So, if you're buying a 4090 or 5090, keep a fire extinguisher handy.
The issue with the 48V solution is that almost everything else in your PC is powered by 12V.
Meaning PSU manufacturers would need to build 2 separate circuits for 12V and 48V... And then, how do you even divide the power budget? Remember the times when a PSU was almost never able to provide its rated wattage on the 12V line because 5V ate a significant part? You're suggesting a similar story.
Not to mention it would break almost all the intercompatibility (even with adapter cables) that the PSU spec currently has. [And it would increase the risk of user error, building a system in a way that sends 48V where 12V was expected... but that one can be avoided with unique connectors, which returns to the point about intercompatibility.]
People assume Nvidia engineers are also electricians. Well, those people are downright morons to think that. Maybe Nvidia engineers are similar to electricians, but on a sub level. Being an engineer doesn't mean you're an expert in every field the project you're building touches. Most of these stupid shenanigans are the simple result of companies attempting to find a cheaper, cut-the-corners scenario to make more on their margins. You can bet Nvidia pushed and backed this stupid connector.
Yes higher voltage transmission will be more efficient and will require thinner cables (hence newer 800V EVs), but CPU EPS is also 12V so what do you do with it?
You would either have to make a new 4th rail on PSU side (requires buying a brand new more expensive PSU), or you would entirely replace 12V with 48V, which would require both new PSU and new GPU/mother boards with more expensive VRMs to handle 48V to ~1V conversion. Both solutions would generate a ton of ewaste
Mini-Fit Sr enters the room at 50A per pin 😂
@@ThunderingRoarYou handle it the same way USB cables do. The base voltage is 12V for the connector, and through some handshake the PSU, cable and card can switch to a higher voltage mode. Sure it takes a bit more hardware to make it work, but making circuits that support 50A ain't cheap either.
Backwards and forwards compatible. No extra e-waste. Gradual rollout is possible. And low power devices can stick with 12V where it's absolutely fine.
Obviously this would require a bunch of coordination in the industry, but this is far from impossible to make work...
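For reference, the current-reduction claim at the top of this thread is plain Ohm's-law arithmetic; the numbers below are just for illustration:

```python
# Total current needed for a given GPU power at different bus voltages.
def amps(power_w: float, volts: float) -> float:
    return power_w / volts

for v in (12, 24, 48):
    print(f"600 W at {v:2d} V -> {amps(600, v):5.1f} A total")
# 600 W at 12 V ->  50.0 A total
# 600 W at 24 V ->  25.0 A total
# 600 W at 48 V ->  12.5 A total  (75% less current than at 12 V)
```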
In 15+ years of PCIe power, 3.3% of users have ever had a failure. In less than one Nvidia generation 4% had failures. That's huge! I'll go check my 4090 again 👀
Yep, that's something that need to be re-iterated. Plus it should also be noted that of the PCIe 8 pin failures, most of them are due to PSU manufacturers having daisy chained cables. The connector itself isn't at fault.
Ehm... That's how percentages work all right.
If the statistics are correct, then the difference is actually not as big... The reasons for the failure rate are important though (for the older connector, daisy-chaining is a thing).
But other than that: 3.3% over 15+ years and 3.3% over 1 year read as the same percentage. Granted, you for sure have a larger sample size with the longer-running measurement.
And 4% is not that much larger than 3.3%... Hard to say anything about the statistical significance of this difference, though.
0.22% vs 4%, that's ~18.2x the failure rate (3.3/15=0.22)
@@DimkaTsv Not necessarily. the 3.3% includes failures shortly after the product was made and long term failures that took a long time to happen, the new one has only been around for a year so 4% fail in the short term and an unknown additional amount that fail after a longer period of use is not counted in that 4%.
@@coopercummings8370 How is it a year? This shit plug has been with us since 2020.
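For what it's worth, the rough normalization being argued about in this thread looks like the sketch below. The exposure times are assumptions, and cumulative survey percentages are not true annual rates, so treat it strictly as a back-of-the-envelope comparison:

```python
# Naive annualized failure rates from the survey numbers.
# Assumed exposure: ~15 years for PCIe 8-pin, ~1 year for 12VHPWR at survey time.
pcie_annual = 3.3 / 15   # ~0.22% per year
hpwr_annual = 4.0 / 1    # ~4% per year
print(f"PCIe 8-pin: {pcie_annual:.2f}%/yr, 12VHPWR: {hpwr_annual:.1f}%/yr, "
      f"ratio ~{hpwr_annual / pcie_annual:.0f}x")
# -> ratio ~18x, with all the usual caveats about self-reporting and sample bias
```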
I love this channel so much. Thank you fellas/ladies behind it and Steve!!!
This is investigative journalism at its finest: FAIR. TRANSPARENT & ACCURATE. Because NO ONE else wants to do the dirty work...ain't that the truth!
I love how one timestamp is "CHAOS" 🤣
CHAOS...OUT OF CONTROL!
Excellent video. The way we're headed, I'm concerned we are going to go beyond what one circuit in our homes is capable of delivering.
I have been using a 4090 undervolted since shortly after its launch. It eats around 300W. I have an older Corsair 750W TX-M PSU, for which I had a custom 12VHPWR cable made with a local electrician guy who does this as a side income. The cable is oversized and rigid AF, and it's secured in the slot so firmly it feels like it will never come out. :D It was a really good buy. Quite happy I didn't bother with 3rd-party adapters in the end.
I bought a new 4090 FE (12V-2x6 revision), and believe it or not, I managed to push in/break 1 pin of the GPU connector just by plugging in the supplied octopus cable the first time. I wanted to hear "the click", so I used some force. Does not inspire confidence, for sure.
That's spooky, really.
The more you break the more you save
Mad respect for going through peer review! Y'all do truly great work!
I was just talking about the shit show around this adapter with a friend a couple of days ago! My thanks to the GN team for the show to go along with my breakfast!
It's amazing how 8 pin cables have existed for well over a decade and had zero issues that justified these terrible connectors.
HDDs also existed, as well as CDs, and IDE cables and floppy discs.
It’s impressive that you peer review with so many other people. Love the content
Tonight I am skipping the movies to watch this gem. Thanks Steve & GN Team!
I've spent some time designing connectors in the past and my first thought when I came across this connector was, "What in the actual fuck is this?"
Peer reviewed videos, I'm here for this! Top video, well done lads.
I still can't believe they approved that retention clip design. It's one of the tiniest most pitiful clips I've ever seen, and it's for a thick, stiff bundle of high-power cables that have to run into the side of a graphics card. You can tell just from looking at it that it's destined to waste hundreds of thousands of hours in cumulative troubleshooting and RMAs.
Not just that, but it also is in the centre of the long side. That's not how you secure a cable that experiences tension in random directions. For that, mechanical locks go on both short sides (e.g. Centronics, SCSI, VGA/any sub-D, DVI, ...) OR the connections are so deep that they cannot be canted far enough to lose contact (that's what automotive connectors do).
I have a background (degree) in computer engineering and I very much enjoyed my class specifically on optics and how to design cables.... Great topic.
Now, as a design engineer your task is to understand the spec, design the product to meet the spec, and to work with certification to verify performance.
If we want to take a step back I can give you 4-5 main things I would investigate or inquire further for why this happened. This is a broader term.... Computer design engineering puzzle.... And I think many people failed to do their jobs along the way.
1. Molex failed to design connectors and cables that were able to operate correctly. They had to be redesigned and my instincts tell me that they will have to be revised again.
2. ATX, PCI-E, Nvidia, and Intel all failed to verify the spec appropriately. Specs are designed to be upgraded, improved. However, you should be able to have a standard that works the vast majority of the time and the reason specs exist is to have a reliable product in manufacturing. This is because of engineering calculations needing to have a reliable tolerance. However you want to cut it, the spec(s) are not reliable and need to be revised..... Again.
(Sidenote: imagine if this was ANSI and it was a screw on the space shuttle.... Netflix has great documentaries about this)
3. Manufacturers had the issue of demand from the designers of the products (Nvidia, Corsair, Asus, etc.) which led to extremely terrible quality control due to a variety of factors. The common mantra is "ship it and we'll figure it out later" because the timeline of sales matters way more to the business mindset than the certification and engineering concerns. Think about profit margin vs. replacement costs. Often you will see inferior products added onto quality designs due to availability because at that specific moment the product was already manufactured and engineering is told to update their design to allow it. Certification is told they have to accept it through simple analysis. And this is what leads to Boeing having their massive issues with the dreamliner.
4. No one considered the basic functionality of the product and NOBODY did long term testing for fatigue failures. Things were done in such a brief timeline that it was not possible to test them thoroughly, which is more clear when you step back and think about why this all happened.
5. No one really cared or understood what bend radius means in the design requirements sense. You can even see this from the actual pictures in the spec.
Computer cases have been designed in such a way for how long? No one thought it was a good idea to run every single design constraint on this cable+connector on an open bench scenario where there aren't things jammed into some small enclosure. The stack up of massive connector on the card into massive connector on the cable into computer literally does not work with bend radius (cable) design guidelines. Talk to a person who works with sheet metal and ask them about bend radius and why it matters. Ask them about different types of metal and how the bend requirements change based on the material itself that you're using. Now.... Consider that the vast majority of phone cables fail because the weight of the wire itself and the relatively small size of the connector. Flip that on its head.... Now you have a massive connector trying to jam itself into a small space. It doesn't work when you need (in both situations) proper bend radius to prevent signal and connector failures.
How we fix this as a whole......
A. You need a minimum clearance and bend radius guideline for CASE MANUFACTURERS.
B. You need power supply manufacturers to meet a specific quality standard for clean power signals and standardization of what the cables need to do (this is from ATX spec)
C. You need motherboards, video cards, and OEMs to follow the actual specs and not mess things up by trying to push limits (waves at Nvidia)
D. You need to design things in such a way that they aren't designed to fail.
Step back and think about 6-pin and 8-pin connectors. You have a clip and it's attached on one side to keep a power cable from disconnecting and sparking on the board or somewhere else.
Now, you have 2x6 or 2x8 connectors, even 24-pin connectors, which have been used for a long time to send signals and various power connections from the PSU to the board reliably for years, decades. None of those connectors were this big or carried this much power. It's a massive tolerance issue, and the entire thing is centered around this very cheap clip that's supposedly going to magically hold this massive duct-taped connector design to the card while being jammed into place, because the computer door won't shut and the weight of everything alone is too heavy to keep it in place.
You should have 2-4 clips. Not just one.
Fix the spec... Fix the connector.... Fix the quality issues..... Then maaaaaybe it'll be reliable.
A possible solution might be actually hiring engineers, instead of calling whoever has a job in tech an engineer regardless of their background... 🤷♂
Call me crazy, but I feel like the industry is trying to cheap out on skills, materials and QA while still trying to keep that aura of "we've got the best minds" when clearly either is not true or these people do not sleep enough to take poised and well reasoned decisions...
@@or1on89 I did my Engineering degree back in the 1990s and the fight had already been lost back then, I remember one of my lecturers going on an epic rant about the difference between fitters, technicians and engineers.
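On point 5 / fix A in the long comment above, here is a rough way to sanity-check case clearance against a bend-radius rule. The connector depth, bundle diameter and the "radius = N x cable diameter" rule of thumb are assumptions for illustration, not values from any spec:

```python
# Rough clearance check behind a side-mounted GPU power connector.
# Assumed numbers: ~9 mm of housing standing above the card edge, an ~8 mm
# cable bundle diameter, and a 3x-diameter minimum bend radius rule of thumb.
def min_side_clearance_mm(housing_mm=9.0, cable_dia_mm=8.0, bend_multiple=3.0):
    return housing_mm + bend_multiple * cable_dia_mm

print(f"Need roughly {min_side_clearance_mm():.0f} mm between card edge and side panel")
# -> ~33 mm, which plenty of cases simply do not leave above a tall GPU
```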
Nice work Patrick, Steve et al!
One quibble, though: when multiple pins share the current at the connector block, but the 0 and 12 V groups are shorted together on either side of the connector, it's *not* the high resistance pins which will get hot, but the **LOW** resistance pins which are of course carrying the most current. This is because the power dissipated in each pin is V*I, and since the voltage drop across the pins is forced to be identical (...as I say, shorted on either side) the pins carrying more current are the ones which will melt.
Add a bunch of resistance to 5 of the 6 12 V pins, for example, and it's the final one which will melt when you pass 50 A through the connector.
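Here is the same point worked numerically for parallel pins that are bridged on both sides, so every pin sees the same voltage drop. The milliohm values are made up purely for illustration:

```python
# Per-pin current and heat for parallel pins shorted together on both ends.
# With a fixed 50 A total, current splits in proportion to conductance, so the
# LOW-resistance pins carry more current and dissipate more power.
TOTAL_A = 50.0
pin_resistances_ohm = [0.005] * 5 + [0.050]   # five good pins, one "bad" 50 mOhm pin

conductances = [1 / r for r in pin_resistances_ohm]
g_total = sum(conductances)
for i, (r, g) in enumerate(zip(pin_resistances_ohm, conductances)):
    amps = TOTAL_A * g / g_total
    watts = amps ** 2 * r
    print(f"pin {i}: {amps:5.2f} A, {watts:5.3f} W")
# The five 5 mOhm pins each carry ~9.8 A (~0.48 W), while the 50 mOhm pin
# carries only ~1 A and dissipates ~0.05 W, matching the comment above.
```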
The more I look at 12VHPWR / 12V-2x6, the more I think to myself that the connector should have been compression-based or a _really_ thick ribbon cable. With that many small attachments, shoe-horning it into a PCI-like form factor with smaller conductors seems to be the root problem. Whereas, if it was a compression-attached solution it would require a couple of screws, but it would be _guaranteed_ to be seated well, or if a giant ribbon, it would afford more (and firmer) contact.
The execution was bad, and PCI-SIG should feel bad.
Bring back the thumbscrews! VGA was a pain to swap in and out, but when it was in it was *locked in*
One or two XT90 connectors would deliver 480/960W with a wide margin. Proven reliable.
@@OGSumoand it's not like you're plugging in and out your gpu power cable every day
@@OGSumo *_Damn_* right. Thumbscrews would had been baller to have and yet another accessory. Combine this with special shapes or ribbing that affix onto ratcheting teeth and the connector would _never_ come undone unless a tab were pulled down, and it would had _stayed._
@@bluephreakr i want some bullet shaped thumb screws like i had on my bike tyre valves
Feels like everything sucks now.
Not everything! Cases and coolers have been awesome lately!
@@GamersNexus If only there was good new stuff to put in those cases and cool with the coolers
@@LivingParable693 It's simple - just buy any Sapphire, Powercolor or XFX GPU. It's safe and it works.
@@GamersNexus So basically the only things you "don't need" to run your PC are the ones that don't suck...
@@GamersNexus speaking of coolers, I'm very intrigued by the Thermalright Burst Assassin 120 Evo Dark - it looks like an attempt to rival the NH-U12A at a much more reasonable price. Could we get a review on that?
I've worked as a terminal contact subject matter expert in automotive connection systems for 13 years now, and your investigation parallels any one of the professional root cause analyses I have worked on in that time. Great work; Linus eat your heart out, you little weasel.
I am concerned that the manufacturers of these products are seeing the same issues that we see in the automotive industry under extreme vibration-profile validation testing and/or very high ambient temperature validation (150 degrees C or higher for at least a continuous 1008 hours), even though their products should only be sitting quietly in a computer case at comparatively mild elevated ambient temperatures. Seems like they need to take a step back and get some consultation on high-power connections.
The problem is "user error". That's how companies get away with bad designs. They BLAME YOU for their fuckups, people, and that's how they get lawsuits dismissed in court.
Or how to make NVIDIA angry right before RTX 5000 launch :D
dont worry - they can wipe their tears with billions of dollars
@@traiges414 more like trillions 😭
5090: Hold my beer 4090!
@@Tra-vis nvidia really needs the ubisoft treatment
Angry? They got lucky that houses did not end up as ashes and that people didn't die.
The sole reason I went with a 7900 XTX this gen.
Same here, that and the extra vram is nice.
same, i don't use Rt anyway. no reason to pay $1700 -$2200 for a headache.
Too bad you had to pay extra for those "AI cores" that still aren't being utilized ..... Those could have been at least $50 cheaper without the wasted AI cores or more powerful by making those million+ transistors used in something like more shader cores, TMUs and ROPs and thus closer to actually competing with a 4090
Yes, me too, I'm using a 7900 XTX.
@longjohn526 cool thing it doesn't compete with the 4090 and was never supposed to. AMD didn't bother making a competitor to that because no one would buy it. The XTX competes with the 4080, and beats it for cheaper in everything but ray tracing. Which to many of us is a joke
A smoke and a pancake. Just perfect for a one hour video on an industry standard failed power connector. Good job as always.
And all the while, the EPS connector sits idly by, able to handle 300 watts, but used only by the motherboard.
This problem will easily become more prevalent with future GPUs, which will almost certainly pull over 600 watts (theoretically we could see GPUs that pull 1000 watts within a few years from now), so if this 'connector issue' isn't fixed, we will be seeing way more than just 4% of high-end GPU owners reporting it.
I believe at this point it's more appropriate to call it "space heater with a graphic output" rather than "GPU"...
@@miken3963 If you want more pixels, you'll have to make more horsepower. The 4090 barely does simracing on triple 1440p, and is totally inadequate for triple 4K. As long as pixelcount increases faster than the Chipmakers can make smaller more efficient processors, the GPU size and powerconsumption has to increase in order to make more power.
Good luck trying to keep your circuit breakers from popping with a single machine pulling +1700 watts from a single outlet.
People seem to forget that whole rooms (sometimes more) are serviced by a single 15 amp (sometimes 20 amp) breaker. A 15 amp breaker in the USA gives you 1800 watts of capacity.
Take your hypothetical 1000+ watt GPU, pair it with a 300+ watt CPU, add the draw of all the subsystems, fans, drives, etc., and you're looking at 1300+ watts from the PSU.
So a 1300 watt system would require about 1500 watts from the wall with a 90% efficient PSU. I'm not aware of a PSU that's that efficient at 1300 watts, but we're just making assumptions at this point. Point is, every time you crank that system up it's going to pop a breaker unless you have nothing else running on that circuit (rough math sketched below the replies).
If CPUs keep increasing their power demands, you could easily exceed what a 15 amp circuit can supply. What then?
@@ToolofSociety In Germany you can hook up 16 A at 230 V per circuit. I can run my 1200 W sim rig, my 2000 W pizza oven, and some power tools at the same time, no problem. Though I can refrain from using my angle grinder while baking pizza if that means I get a GPU that runs a 12288x2560 resolution for my racing sims.
@@ToolofSociety be quiet! makes an 80 Plus Platinum 1500 W PSU with up to 93% efficiency.
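Since a few numbers get thrown around in this thread, here's a rough Python sketch of the wall-draw math. The wattages and the efficiency figure are assumptions for illustration, not measurements of any real system.

```python
# Rough wall-power sanity check for the hypothetical system discussed above.
# All wattages and the efficiency figure are illustrative assumptions.

GPU_W = 1000            # hypothetical future GPU
CPU_W = 300             # hypothetical high-end CPU
REST_W = 100            # fans, drives, motherboard, etc. (assumed)
PSU_EFFICIENCY = 0.90   # assumed efficiency at this load

dc_load_w = GPU_W + CPU_W + REST_W        # what the PSU must deliver
wall_draw_w = dc_load_w / PSU_EFFICIENCY  # what the outlet must supply

# A 15 A breaker on a 120 V US circuit:
breaker_limit_w = 15 * 120                  # 1800 W total
continuous_limit_w = breaker_limit_w * 0.8  # NEC-style 80% rule for continuous loads

print(f"DC load: {dc_load_w:.0f} W, wall draw: ~{wall_draw_w:.0f} W")
print(f"Breaker: {breaker_limit_w} W peak, ~{continuous_limit_w:.0f} W continuous")
```

Under those assumptions the build pulls roughly 1550 W from the wall, which already sits above the ~1440 W continuous limit of a shared 15 A circuit - which is the point being made above.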
Looking forward to watching later, but just from the opening, it's easy to see how you guys keep pushing and elevating yourselves. Kaizen as always!
15:48 Shaky table makes 4090 nervous
My take on 12VHPWR is I'm going to avoid purchasing a video card that uses it for as long as possible. I still don't trust it enough to use it in my primary PC which is usually on 24/7. Although I am encouraged by the updates to it in the past 2 years. Seems like many of those updates should have been worked out BEFORE it was widely used in some commercial products.
I bet Jensen was laughing at us for paying to be his beta testers 😂
I'm hoping AMD will use the 2x6 and 2x8 plugs when they can. AMD seemed to nope out of using it in the 7000 series. I open up my desktop case often enough that I might bump a cable. It's getting less frequent (for example, hard drive and SSD capacities are huge nowadays), but it's mostly a habit. Still, I'd rather have some wiggle room when I use the electric duster that might jostle cables around.
I am so grateful for this sum up guys. Great timing, thank you 💪
Pushing more power through a smaller connector. Seriously, what could go wrong?
We do that constantly though, and at similar scales.
USB-C can support anywhere from a couple dozen watts up to 240 watts depending on the spec, yet the connector remains the same, as does the cross-section of the wires used (with some variation).
The issue is not "smaller can not support more power" the issue is "Smaller might require more skill to properly plug in/operate"
The 12VHPWR connector is just not designed to be used by the average human. Its "safety" - the seating of the latch - is very easy to miss compared to the old connector, which gave both audible and haptic feedback thanks to there being more material in the clip. The old 6- and 8-pin was also bigger, with fewer pins, making each pin easier to see and therefore gaps more obvious.
The 12VHPWR connector was developed in a lab for lab use and never saw the eyes of someone who knows how people actually are. It's like car engineers and mechanics: some engineers are hated by the mechanics, others are mechanics as well as engineers.
I think this is the earliest I have started watching a GN video.
same
Bro what a blessing this channel is. Thanks Steve
12:44 The survey is not normalized for time. Mini-Fit (6- and 8-pin) connectors have been used on video cards for around two decades, whereas 12VHPWR at the time of the survey had only been out for what, 2 years? Normalized for time, that means we're looking at roughly 0.2% and 2% per year, respectively - a staggering difference in failure rate (quick normalization sketch a couple of replies down).
Yes, we talked about this.
@@GamersNexus ah I must have missed it! Thanks for covering this issue again.
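To make the per-year normalization above explicit, here's a tiny sketch. The raw percentages are placeholders standing in for the survey figures; only the divide-by-years step matters.

```python
# Illustrative time-normalization of reported connector failures.
# The raw percentages are placeholders, not the survey's exact figures.

reports = {
    #               (reported failure %, years the connector has been in use)
    "PCIe 6/8-pin": (4.0, 20),   # assumed raw figure, ~two decades in service
    "12VHPWR":      (4.0, 2),    # assumed raw figure, ~two years in service
}

for name, (pct, years) in reports.items():
    print(f"{name}: {pct}% over {years} years -> ~{pct / years:.1f}% per year")

# Roughly 0.2%/year vs 2%/year: about a 10x gap once exposure time is equalized.
```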
They need to move away from using multi-pin connections to deliver high amperage. Instead they could use a simpler, more flexible 2-pin connection, like the ones I use that can easily deliver 600 W+ of power. Better yet, move from 12 V to a higher voltage like 24 V or even 48 V and deliver more power at lower amperage - it's the amperage that causes the melting in connections.
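A quick sketch of why the voltage bump helps, under the simplifying assumption that the connector's contact resistance stays the same (the 5 mΩ figure is made up purely for illustration):

```python
# Why a higher bus voltage reduces connector stress for the same delivered power.
# The contact resistance is an assumed illustrative value; only the scaling matters.

POWER_W = 600
R_CONTACT_OHM = 0.005   # assumed total contact resistance of the power path

for volts in (12, 24, 48):
    amps = POWER_W / volts            # I = P / V
    heat_w = amps**2 * R_CONTACT_OHM  # connector loss ~ I^2 * R
    print(f"{volts:>2} V: {amps:5.1f} A, ~{heat_w:5.2f} W dissipated in the contacts")

# Doubling the voltage halves the current and cuts I^2*R losses by 4x;
# going from 12 V to 48 V cuts them by 16x.
```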
Nice hour of entertainment. Big kudos to the whole GN staff and partners
The very first time I saw this connector I thought "wow that's really small for 600w."
How this was signed off by any of the PCI SIG members, let alone Nvidia actually _use_ it is beyond me. I've been building PCs for almost 3 decades and I've seen Molex and 8 pins melt if they don't get a solid connection. For anyone to think pushing 4X the power through a connector that's even smaller is a good idea is simply challenging the laws of physics. Especially when it could easily have been made much more robust while being barely bigger, and still be notably smaller than 2 8pins, let alone 3.
The stupid thing is that Molex has the Mini-Fit Sr. range of connectors, which is rated at 50 A per pin, so while each pin is bigger, you only need two of them for 600 W - so it ends up no bigger overall.
"How this was signed off by any of the PCI SIG members, let alone Nvidia actually use it is beyond me."
One word: greed.
Well, USB-C can do 240 W, so the size isn't actually the issue; as other comments have pointed out, it's more the amperage. I hope I'm not mistaken, lol, but IMO 24-pin motherboard power connectors bother me and I like small connectors.
@@raypav USB C 240w uses 48V at 5A max while 12VHPWR uses 12V at 50A max.
Heat generated is I²R, so going from 5 A to 50 A leads to a 100x increase in heat generation for the same contact resistance.
@@x1000plusx You're missing the resistance part of the equation: V = IR, so 48 V at 5 A = 9.6 Ω and 12 V at 50 A = 0.24 Ω. 5 A² × 9.6 Ω = 240 W, the full power of the 240 W USB-C spec, and 50 A² × 0.24 Ω = 600 W, the full power of the 600 W 12VHPWR spec. Putting 2.5 times the power through a plug similar in size to USB-C is still ridiculous though.
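One nuance worth separating out in this exchange: the resistances computed above are the load's equivalent resistance, which just gives back the delivered power. For the connector itself, what matters is current squared times the contact resistance, and that resistance doesn't shrink when the load changes. A small sketch, with an assumed (made-up) contact resistance:

```python
# Connector self-heating at an equal, assumed contact resistance.
# The 6 milliohm figure is illustrative, not a measured value for either connector.

R_CONTACT_OHM = 0.006

cases = {
    "USB-C EPR (240 W)": 5.0,   # 48 V * 5 A
    "12VHPWR (600 W)":  50.0,   # 12 V * 50 A
}

for name, amps in cases.items():
    print(f"{name}: ~{amps**2 * R_CONTACT_OHM:.2f} W of heating in the contacts")

# 5 A -> ~0.15 W, 50 A -> ~15 W: the ~100x factor from the earlier comment, because
# heat in the contacts scales with current squared, not with delivered power.
```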
I've never been a fan of this 12vHPWR mess. If I upgraded to a card using it, honestly? I'd trust it more if I hardwired it by soldering wires to the thing, rather than using 12vHPWR.
😅 I'm seriously considering doing this on my next build!
Thanks for the video on this GN, and thanks for all the hard work Patrick.
I remotely troubleshot one of these not being plugged in (a friend thought her new graphics card was broken). I almost lost a friend, because she swore the plug was in. I have a pic of a roughly 0.5 mm gap, and the card wouldn't even power up. She said it took all her might to seat it fully after I convinced her it wasn't seated. These plugs are super flawed.
At least in her case the sense pins *may* have been doing their job, preventing the card from powering up and melting over time.
I like that the 12VHPWR cables from Seasonic just use two six-pin PCIe plugs on the PSU side, which to me shows that a six-pin PCIe plug could easily provide 300 watts. And I'm sure they still have a safety margin.
at least they know what they're doing. because Seasonic.
Wow, crazy info! When I found a great deal on an RTX 40 series card, I made sure to spend the extra money to purchase a PSU that supported a direct 12VHPWR cable connection without use of adapters, just because I had heard that a handful of people had melting issues (likely through you guys). Based on this video it looks like I made the right decision! I even removed a cable comb setup I had planned to allow my GPU cable to have a lot more slack in it. Just indispensable info here guys. Thanks for all your hard work.
Oh FCK yeah dude I'm sitting on a 12th gen Intel build ATM with a CORSAIR RM850 CP-9020235-NA 850W PSU. I've been seeing yours and others' coverage of 12VHPWR and honestly has kept me from upgrading my RTX 3080 in part because the entire circus around cramming more watts on less copper seems just awful. Thanks for making this video - I still have to figure out what the hell to do moving forward, but hopefully all this sorts out when I go to upgrade!
*More watts on copper-clad aluminium.
@@IntegerOfDoom The spec doesn't cover the materials used on the connector if I understood this excellent coverage correctly. The use of substandard materials or manufacturing is a risk for any power standard.
Be afraid, be very afraid. This is what happens when you are terminally online and only follow negativity and clickbait thumbnails on RUclips. Imagine still believing the connector is a problem when an entire generation and millions of GPU users have used it without a problem for 2 years.
@@Glotttis I appreciate GN's approach to root cause analysis, as well as giving a full timeline of Events, ATX/PCI-SIG Revisions. I work in engineering and respect attention to detail.
Fear mongering, this video clearly ain't. But thank you for your opinion on my post. I'll be sure to file your constructive post in the appropriate part of my mind.
@@FrenziedManbeast GN did plenty of fear mongering. For example, GN claimed that PS5 memory chip cooling is bad and that PS5s would start dying en masse. Hasn't happened. All electronics have a chance to fail, and you can make an hour-long video about one electric kettle that broke. Doesn't mean that all other electric kettles will break. Nvidia repeatedly said that RMA numbers for RTX 40 GPUs with the 12VHPWR connector are in line with all previous generations.
I can appreciate some of the investigations that GN is doing, but they often tend to harp on too long, to the point where it starts to get uncomfortable. It's a fact that negativity earns more money for RUclipsrs than positivity. Anyway, no point prolonging this discussion, because RUclips comment sections tend to have a cult-like mentality. Even the slightest criticism of a beloved RUclipsr is seen as the biggest threat by fanboys.
Uhhh... I'll just keep getting, and using, cards with three 8-pin GPU connectors, TYVM!
Same
TYVM has actually been replaced with HTYVM.
@@whlewis9164 What about HTYTYMVVM?
Same here, but I'm sticking with 2x8. I refuse to use a GPU over 225W TDP. I need a GPU not a space heater.
That's the reason I bought a 4070. Couldn't risk my house for a 4080 Super.
I have an Asus RTX 4090 Strix White Edition and have been using the cable it came with. When I connected mine it gave an actual click and was fully inserted, and so far it has been doing its job perfectly.
As an engineer, when the 4090 was released it intrigued me for my workloads, but I refused to put one in my system because of the potential fire hazard. The math on the pin density never mathed for 600 W - it only maths out to around 480 W - and that leaves no overhead for voltage droop, spikes, etc. (rough numbers sketched after the replies below). There needs to be a lawsuit!
I was saying what der8auer was saying from day one. It made zero sense.
^ This. Spot on!
Also as an engineer, I just opted for the 7900 XTX and it’s been superb. Sure, Nvidia still takes the cake with ray tracing, but there’s yet to be a single game I’ve played over the past year or two where I felt like I wanted to enable RT, and I can apply that $800 savings from the 4090 to my upgrade next generation. :)
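For anyone wondering where back-of-the-envelope numbers like that come from, here's the usual way the margin gets estimated. The per-pin current ratings are the approximate figures commonly quoted for these terminal families, not datasheet values for any specific part, so treat the outputs as illustrative.

```python
# Back-of-the-envelope safety-margin comparison for 12 V power connectors.
# Per-pin ratings are commonly quoted approximate figures (assumptions), not
# datasheet values for any specific terminal.

def margin(power_pins: int, amps_per_pin: float, spec_watts: int, volts: float = 12.0):
    capability_w = power_pins * amps_per_pin * volts
    return capability_w, capability_w / spec_watts

for name, pins, amps, spec in [
    ("8-pin PCIe", 3, 8.0, 150),  # ~8 A/pin assumed for the older Mini-Fit-style terminals
    ("12VHPWR",    6, 9.2, 600),  # ~9.2 A/pin assumed for the Micro-Fit-style terminals
]:
    cap, ratio = margin(pins, amps, spec)
    print(f"{name}: ~{cap:.0f} W capability vs {spec} W spec -> ~{ratio:.1f}x margin")

# Roughly 1.9x headroom for the old 8-pin vs ~1.1x for 12VHPWR under these assumptions,
# which is why small seating or contact problems bite so much harder on the new connector.
```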
I've been using my 4090 TUF OC since November 2022. I've had zero issues so far, but my question is: even if the adapter is plugged in correctly, does the risk of failure grow with time and usage?
As an engineer, you should also already know that any open flame or melting like this is a direct violation of consumer rights and regulations.
Or in simple words, this shouldn't be legal.
In fact it's strictly illegal by law.
Full stop.
It blows my mind how these companies get away with it.
I don't even understand how somebody with half decent morals could sign off on this.
I also don't understand what crappy variant of these connectors they use.
We have been using similar connectors from Molex (the brand), Würth and JST for decades under similar load conditions.
I have never seen such insane failures over thousands of tests
Remember the days when the thick Molex connector was so good, and made such a solid connection, that you needed pliers to remove it? I sure do.
I appreciate the timeline. It was nice to see a revisit / summary of all the events that happened around the 12VHPWR connector, and the format of a chronological timeline is a great way to organize the information.
However, two sections of the video were IMO presented a bit chaotically.
The first one:
1:40 "we realized that we have to start from the beginning, like PCIE 6- and 8-pin cable getting created level of beginning" and then Steve jumps straight into failure analysis of CableMod's adapter, instead of starting with the beginning of the timeline / with PCIE 6- and 8-pin cables.
And after watching the whole video, the part of the timeline near the end about CableMod felt like it was missing something, so I think putting the results from the failure analysis lab there, instead of at the beginning, would've been a good idea narration-wise, youtube gods notwithstanding.
The second one:
The part about different specs is somewhat hard to follow.
First of all, it would be nice if the introductions of what specs there are and where they come from was all done up-front, instead of intermixing it with what they say about the connector.
Then quoting what they say about the connector could be done in quick succession, without interruptions, so that it's easy to compare and see what the differences are.
I had to rewind a few times to make sense of it so I made a summary here:
22:40 PCIE CEM 5.0 v1.0: "may optionally implement any of these features", so the sense pins are fully optional at this point
24:54 ATX 3.0 rev 2.0: "support for both sense pins is required for a PSU", so sense pins are required for PSU, and card is required to check those pins unless 150W is enough for it
25:34 PCIE CEM 5.1 v1.0: "the sense pins were required on the PSU side, and the GPU was required to monitor them", so now it's required on both sides*
Also, notice how the second PCIE CEM is a different version than the first one. I think Steve didn't mention that clearly enough - I initially thought this was the same PCIE CEM that was mentioned before, and was confused that it now says a different thing than it said 3 minutes earlier in the video.
Also, Steve says "the sense pins were required on the PSU side, and the GPU was required to monitor them" and that this conflicts with ATX... the way Steve says it, no this doesn't conflict with ATX. Both say sense pins are required, and if you combine what they say about cards, "optional" + "required" = "required" so yes, if you behave as if it was required on card side, you satisfy both specs.
Except that's not what the text on the screen says - it's:
"it is recommended that [...] sense pins be connected to ground or shorted in the power supply"
idk what vocabulary PCI-SIG uses in its specs, but in most standards I've read, "recommended" is not "required". "recommended" is a suggestion, it means that in most cases doing the thing is a good idea, unless the implementer knows a good reason to do otherwise. But that's ok, right? PCIE's "recommended" + ATX's "required" = "required".
but wait "be connected to ground" - irrespective of maximum power the PSU supports? or does PCIE spec require that connector to always support 600W now?
"or shorted in the power supply" - the heck? that'd leave both sense pins floating unless the circuitry on GPU's side to read those pins is completely redesigned!
It makes sense in combination with the table from 44:52, which I AFAIU is a different part of the same spec - I wish both parts were shown together.
If the point GN were trying to get across is "there are two different specs, each with multiple revisions, and it's hard to figure out which one to follow" then yeah, I got that part.
But for anyone who tries to understand what the actual differences between the specs are, it's difficult to make sense of that information due to the way it's presented.
Anyway, thanks for the research and the video, and keep up the good work! (I've sketched my reading of the sense-pin table below.)
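To make the sense-pin behavior a bit more concrete, here's a sketch of how a card might decode the sideband pins. The wattage mapping reflects my reading of the table shown around 44:52 and should be treated as illustrative, not as an authoritative copy of the spec text.

```python
# Illustrative decode of the 12VHPWR sideband (SENSE0/SENSE1) pins.
# The wattage mapping is my reading of the table shown around 44:52 in the video;
# treat it as illustrative, not as a quote of the spec.

# True = pin pulled to ground by the PSU/cable, False = left open (floating).
SENSE_TABLE = {
    (True,  True):  600,  # both grounded: full 600 W sustained
    (False, True):  450,
    (True,  False): 300,
    (False, False): 150,  # both open (e.g. pins absent or unseated): fall back to 150 W
}

def max_sustained_watts(sense0_grounded: bool, sense1_grounded: bool) -> int:
    """Maximum sustained power the card should allow itself to draw."""
    return SENSE_TABLE[(sense0_grounded, sense1_grounded)]

# A cable that leaves both sense pins floating limits the card to 150 W, which is
# the safe-default behavior the later spec revisions converge on for cards above 150 W.
print(max_sustained_watts(False, False))  # -> 150
```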
Couldn't they use a more server-grade solution with ZIF latches or side connectors - yada yada yada, the thousands of low-cost solutions servers have used for years - instead of stuffing 500 W+ into a classic-Molex-ish connector that sticks out "beautifully" at the center of the card, making cable management a nightmare? 😢
" no, f you, give me money" Jen-Hsun Huang probably
@nemezzyyzz Yeah, probably something like "we don't really care about gamer-grade hardware anymore, just buy this byproduct of our datacenter hardware"
Or just deliver the power thru the slot…
@5urg3x That seems like the most appropriate solution. Even some AIBs made cable-less VGA+MoBo combos; I just don't like that they implemented it in a way that kind of blocks ITX designs (if all GPUs became these cable-less designs with server-like slide-in power connectors). But the power slot could be moved to the second slot, since most cards, even the ITX ones, are dual-slot nowadays. And I'd wish for a modular connector design - not integrating the power connector on the motherboard, but mounting it on the case's motherboard tray instead. (Man, the whole ATX spec is too old. It works, but it's old, and it's not like the oddly over-integrated NUC-alikes with CPU cooling capped at ~100 W are going to save us, lol.)
@@5urg3x Power through the PCIe slot is limited, and unless a complete redesign is done (which is long overdue), we can't draw enough power through the slot, so we have to have additional power connectors.