Bought an Asus motherboard one time and it also showed the PCIe x16 slot in x8 mode. It randomly fixed itself to x16 at one point, then randomly reverted to x8. It wasn't related to having an NVMe drive. I returned the board and bought a different one; the new board worked correctly.
For the x8 PCIe problem: I'm pretty sure the bus downshifts itself to save power when not under load. GPU-Z has a render test option, and running it should make the bus go back up to x16.
Hey there! Nice! What about an M.2 SSD attached to a PCIe slot connected directly to the CPU (the same one the GPU is on)? As far as I remember, it shares the lanes.
A CPU-attached M.2 SSD should not consume PCIe lanes from the first x16 slot on modern boards; it uses separate lanes. It does share lanes when you connect an M.2 SSD via the PCH in PCIe mode, though. At least that's how it works for X570, X670 and Z690. You can check the chipset diagrams as proof.
Doing an OctaneBench run (Octane Render 2021) is one way to test card speed without the PCIe bandwidth restriction, since it loads the card for local compute only. I've had identical scores on a 3090 inside a PC vs. outside via Thunderbolt. Maybe that's one idea for your testing?
I'd strongly suspect that Roman (der8auer) has either OEM or custom power cables able to carry higher current. The Nvidia adapter is only needed if you're using the PCIe power connectors. There's a new ATX spec for PSUs that defines this new 12-pin high-power input for the GPU, and a number of PSU manufacturers either include a dedicated cable or have made one available as an aftermarket part if your PSU predates the standard. Also, making your own power cables isn't terribly difficult: just some wire, the plastic housing, a hand crimping tool, and the correct size pins for the housing. Most people then sleeve the individual wires with plastic sleeving or paracord to make the cable look nicer and match their build's color scheme.
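For anyone doing the cable math on runs like these, a minimal sketch of the per-pin current. The 6-pair layout and the per-pin rating are nominal figures from the connector spec discussions, not measured values; check your connector's datasheet, and remember equal current sharing is itself an assumption that breaks down with a poorly seated plug.

```python
# Rough per-pin current for a 12VHPWR-style connector.
# Assumed: 6 x 12V current-carrying pin pairs, equal sharing.

def per_pin_current(total_watts: float, volts: float = 12.0, pairs: int = 6) -> float:
    """Current through each 12V pin, assuming the load splits evenly."""
    return total_watts / volts / pairs

for watts in (450, 600, 800, 1000):
    print(f"{watts:5d} W -> {per_pin_current(watts):.2f} A per pin")
```

At 1000 W each pin is carrying well over 13 A, which is why headroom in the pin rating (and clean contact resistance) matters so much at these power levels.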
I found that shunt modding didn't really help much. Seems like the BIOS has some sort of Aux power limit thing going on that isn't determined by a big shunt resistor. It was the same on the 3090 where you couldn't get a shunted 3090 above about 650W, and needed the XOC bios to hit 1000W.
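For context on what a shunt mod actually changes, here is the arithmetic in a minimal sketch; the 5 mOhm value is an assumed typical GPU shunt, not the actual part on this card. It also shows why a firmware-side aux limit (as described above) still caps power no matter how far you skew the sense resistance.

```python
# Stacking a second shunt in parallel lowers the sense resistance,
# so the controller sees a proportionally smaller current and
# therefore under-reports power.

def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Equivalent resistance of two shunts in parallel."""
    return r1_mohm * r2_mohm / (r1_mohm + r2_mohm)

def reported_power(actual_w: float, r_orig: float, r_eff: float) -> float:
    """Power the controller believes it sees after the mod."""
    return actual_w * r_eff / r_orig

r_eff = parallel(5.0, 5.0)              # stacking an identical 5 mOhm shunt
print(reported_power(900, 5.0, r_eff))  # 900 W drawn reads as ~450 W
```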
I always wondered what a 1600W power supply was good for. My guess is you forced PCIe 4.0 on your 5.0 slot and it was still a no-go. You may have to wait for a motherboard BIOS update or try a different one.
Speaking about your Strix: EK is producing a dual water block set (front & back, BOTH plates active, so it looks amazing from top and bottom with RGB) for the RTX 4090 Founder's Edition, called the "EK-Quantum Vector² FE RTX 4090 D-RGB ABP Set - Nickel + Plexi". Water blocks reduce the size of the card so it would even fit in a small form factor case with a custom loop, as well as bringing all the other water-cooling benefits: the best possible temperatures, giving the card the thermal headroom to constantly boost to maximum speeds (so long as you also have a beefy PSU) in a way no fans would; silent running; reduced power draw from cooler silicon; greater stability; and greater potential for future overclocking. The latest CPUs also run incredibly hot, so they benefit from water cooling, and the fastest DDR5 RAM benefits from a water-cooling block as well.

Could you please ask ASUS ROG if they will produce a Strix RTX 4090 in partnership with EK, so it can be bought with the same front & back active water blocks, for the best performance while also fixing the "too big for the majority of cases" issue? If they also switched to DisplayPort 2.1 on this card it would sell like hot cakes, because the RTX 4090 not being able to output graphics it can manage to generate makes it a serious joke, with DP 1.4 an unacceptable bottleneck.

I plan on getting a Cooler Master HAF 700 case and installing three 420mm radiators (top & side & bottom) which, paired with the front two 200mm fans & rear two 120mm fans, should let me cool even super-hot gear to near room temperature, I hope.
The RTX 4090 was released before the DP 2.1 standard, only just though. Also, 2.1 seems aimed more at laptop and USB-C standards alignment than at more bandwidth. DP 1.4 is good for 4K @ 144 Hz, which is still beyond what a 4090 can do in AAA games and at the limit of what's available on the monitor market. (Asus's own ROG XG27UQ UHD 144 Hz monitor uses DP 1.4 with DSC.) So no, DP 1.4 is not a bottleneck yet.
@@agentcrm The RTX 4090 can manage considerably better than 4K @ 144 Hz lol. In fact, it can run some games at 8K at very playable refresh rates (see the Linus Tech Tips video on 8K gaming for proof), so yes, DP 1.4 is ALREADY a bottleneck. And since DP 2.1 is a powered connection, it won't be just a BIOS update, as some people have suggested.
@@TimLongson The 4090 requires DLSS to get close to 4K @ 144 in most games, so that's why I say it's beyond its capability. As for 8K, those LG TVs only have HDMI 2.1, so DP isn't going to help there. DP 2.0 would have been an option, but since there are no monitors on the market that support it, there's not much incentive. Not to mention the games LTT showed in 8K didn't even have 8K assets to render. They could have put the powered connection in there for DP 2.0 but not activated it, which I doubt.
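The bandwidth claims in this thread are easy to sanity-check. A minimal sketch, using the nominal HBR3 payload rate and ignoring blanking overhead (so the true requirement is slightly higher than this estimate):

```python
# Uncompressed video bandwidth vs DP 1.4 (HBR3) payload capacity.

def video_gbps(w: int, h: int, hz: int, bits_per_pixel: int = 24) -> float:
    """Active-pixel bandwidth in Gbps, blanking ignored."""
    return w * h * hz * bits_per_pixel / 1e9

HBR3_PAYLOAD_GBPS = 32.4 * 8 / 10  # 4 lanes x 8.1 Gbps, minus 8b/10b coding

need = video_gbps(3840, 2160, 144)
print(f"4K144 (8-bit RGB) needs ~{need:.1f} Gbps; HBR3 carries {HBR3_PAYLOAD_GBPS:.2f} Gbps")
```

Even before blanking, 8-bit 4K144 comes out above the 25.92 Gbps payload limit, which is why DP 1.4 4K144 monitors lean on DSC; both sides of this argument are partly right.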
At that power limit I'd be looking into active cooling for the Nvidia stock adapter - knowing how Nvidia likes to save money on stuff like that I'd just be beyond paranoid of welding the plastic connector ends together after a few benchmarks lol.
Nice. Also, I would def buy that WireView thing. If it didn’t say “WireView” in big white letters on the front of it. Why not engraved and colorless, or just a sticker? And yea, I know WHY it's there. But if fewer ppl buy it bc of that then fewer ppl will see/ask about it anyway.
What's the point? The chip is pretty much at its ideal frequency/voltage scale already. Going even higher brings diminishing returns (a lot more power for little extra performance).
The first 20 posts all are in awe of this video ... need to turn down the setting on the praise bots if you're going to create content to just create content.
How are those power cables not melting past 600 watts (and the plug on the card?) when we're hearing left and right people are having issues even at stock? :o
You tried a different motherboard for the x8 issue, right? I've seen certain NVMe slots cause dropdowns when using a 4.0 x4 NVMe drive. The other thing I've seen is that the CPU's PCIe bus pins are on the edge of the socket, which means re-seating the CPU (more likely to matter if you're using your bracing block) can potentially fix it.
Out of curiosity what is the maximum theoretical power the PCB could handle? Even if you could pump unlimited power through the PCB would the die itself have a limit?
There is no hard limit as such; the limiting factor is the temperature rise of the copper traces. So if you don't mind a toasty PCB, the "limit" is pretty high :)
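The "temperature rise of the copper" point can be put in numbers with the usual IPC-2221 estimate for external traces; the trace geometry below is a made-up example, not the 4090's actual layout.

```python
# IPC-2221 current-capacity estimate for an outer-layer trace:
#   I = k * dT^0.44 * A^0.725
# with A the cross-section in square mils and k = 0.048 for external layers.

def trace_current_limit(width_mil: float, thickness_oz: float, delta_t_c: float,
                        k: float = 0.048) -> float:
    area_sq_mil = width_mil * thickness_oz * 1.378  # 1 oz copper ~ 1.378 mil thick
    return k * (delta_t_c ** 0.44) * (area_sq_mil ** 0.725)

# A hypothetical 200 mil, 2 oz power trace at a mild 20 C rise vs a "toasty" 60 C:
for dt in (20, 60):
    print(f"dT={dt} C -> {trace_current_limit(200, 2, dt):.1f} A")
```

Letting the copper run hotter buys real current headroom, which is exactly the trade-off described above.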
1:52 Have you tried another MB? Could be something as simple as reseating the CPU. Also try forcing PCIe 3.0 in the MB BIOS and see if it's still stuck at x8.
Thank you Roman for the effort you spend on these videos!
You're doing a very important job for the community
Yes, many gamers will do that to their setups 🤪✌️
@@Albert41122
All he did was flash his bios ... yeah very important job for the community
@@112Famine it's about trying what's possible
@@112Famine Well someone’s grumpy… If you don’t like the video then don’t watch it, nobody’s forcing you 🤷✌️
The PCIe link switching to a lower rate is a BIOS bug in the Intel 6xx and 7xx series chipsets. At first it was happening with old hardware, like a 7970 running only PCIe 1.0; then most manufacturers provided an updated BIOS, but the new hardware bugged out.
Exactly
He could also check every PCIe lane pair with a multimeter and see if there are any weird values. Diode mode with the positive probe on ground helps a lot. I've had cards run only x8 due to contacts being slightly dirty as well.
My friend, I have the same issue. Is there any way to fix it? Please and thank you.
It's not a bug if it's an ASUS Z790 Extreme and he has an M.2 drive in the M.2_1 slot. ASUS was being stupid by splitting the Gen5 lanes between the x16 slot and M.2_1 so the M.2 and GPU can both run Gen5, but at the cost of the GPU dropping to x8. You must run the M.2 SSDs in the M.2 PCH slots (2 and 3) to get x16 back on the GPU.
@@puciohenzap891 but I have an MSI board
You have an Asus Maximus Z790 Extreme. If you download the manual and go to page 13, you'll see that with a drive in M.2_1 (CPU-attached, x4), PCIEX16(G5)_1 will run at x8.
So if you have your SSD in the slot above your GPU, the GPU will run at x8. If you use the M.2 slots below your GPU, it will run at x16.
So use slot M.2_2, which supports PCIe 4.0 x4 (CPU-attached),
or M.2_3, which supports PCIe 4.0 x4 (Intel Z790 chipset),
or use the DIMM.2 slot.
He tested it on another rig and still got x8...
@@xite45 It's possible he has the SSD in the wrong slot on that mobo as well; many PCIe Gen 5 capable boards split bandwidth with certain M.2 slots. I'm not sure which slot he's using, and I would assume he knows this, he's clearly very intelligent, but it's still possible.
I have an Asus Maximus Z690 Extreme and nothing in my M.2_1. My BIOS is up to date. My vBIOS (4090 Suprim X) is up to date. I'm still getting x8. I've tried a 4090 FE and a Gigabyte 4090 Gaming OC before this (as well as my EVGA 3090), which all ran at x16.
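For anyone chasing the same x8 mystery on Linux, the negotiated link is visible in sysfs. The bus address below is a placeholder; substitute your own from `lspci | grep -i vga`, and check both idle and under load, since link power management can legitimately drop the width/speed at idle.

```python
# Read the PCIe link state of a device straight from sysfs.
from pathlib import Path

gpu = Path("/sys/bus/pci/devices/0000:01:00.0")  # placeholder bus address
if gpu.exists():
    for name in ("current_link_width", "max_link_width",
                 "current_link_speed", "max_link_speed"):
        # current_* is what was negotiated; max_* is what the device can do.
        print(name, "=", (gpu / name).read_text().strip())
else:
    print("adjust the bus address for your system (see lspci)")
```

If max_link_width says x16 but current_link_width stays x8 under load, the limit is in the slot, the BIOS, or the card edge rather than power management.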
Man these new faster paced videos are so informative and useful. Great job!
After this the "Q" on the BIOS switch went from "quiet" to "quite insane"
You are what I thought Linus would become in time.
Thank you for the passion behind your work, keep loving what you do man, the rest comes with time
Really? Linus never really seemed like the sort to go into extreme overclocking properly.
@@MeakerSE He worked at NCIX; what he could do was very limited, but he was extremely curious and at the time one of the few trying new things: mods, LN2.
Don't get me wrong , he is doing amazing with LMG and I still watch his videos .
But you can tell he lost that passion for PC hardware that nerds like der8auer still have.
Can't blame the dude when he's a millionaire and focusing on bigger things tho
@@rx10 what "bigger" things is he focusing on? lol, I always saw him as an awkward imbecile... but he did get a bit better. His view of technology overall is about as stupid as they come, especially when he doesn't understand something or how a certain product is used in a real professional environment. Something neither he nor any of the people he hired have ever worked in.
@@rx10 Not sure you watch his videos if this is what you honestly think. If he was concerned about money, he would've retired years ago. I've followed Linus since he was at NCIX, and he's always had a broader interest in tech as a whole. PC hardware was a good place to start. If you really think he's lost passion, you should watch more of his videos. Yeah, he doesn't do unboxings anymore, but that's because you find out everything about the PC hardware and then some on the websites now, but if you like the ins and outs of just PC hardware then Gamers Nexus is probably a good place to be alongside our good friend Der8auer.
@@Redman147 uhhh... Sure kid
I suppose as long as you're gaming in winter, and you don't mind using your GPU as part of your central heating this 800W power draw is not that bad 😅
Pro tip: you can also intake air from outside to ease the cooling of your rig
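The central-heating joke checks out; a quick conversion to space-heater units makes the point:

```python
# Every watt the GPU draws ends up as heat in the room.
def watts_to_btu_per_hour(watts: float) -> float:
    return watts * 3.412  # 1 W = 3.412 BTU/h

for w in (450, 800, 1000):
    print(f"{w} W ~= {watts_to_btu_per_hour(w):.0f} BTU/h")
```

At 800 W the card dumps roughly 2700 BTU/h into the room, solidly in small-space-heater territory.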
Hi, It's a problem with MSI Afterburner which also locks out zero fan mode. The solution is to use NVIDIA Cleanup Tool / reinstall driver and all functionality is restored. Worked for 3090 Ti FE. Hope this research helps the community too.😀
Holy sh!t you really do create some amazing content. I am perpetually in awe of your skill/knowledge/competence ... I cant wait to see the results!
All this was is empty content, only thing he does is flash a bios, you come off as a bot! Being in "awe" over a bios being flashed? Do you need to wear a helmet at all times for insurance reasons?
@@112Famine I'm sorry your mommy didn't love you
Holy shit, before the next video you're going to need to take a sedative beforehand so you don't stroke out when he changes a printer cartridge & calls it content. By the way, while you're extreme brown-nosing him, that involves your whole head, would you stick out your tongue & check his prostate? It would save him a trip to the urologist, & if you feel any growths, polyps, boils, or cysts, just chew them off & suck out any pus. And speaking of pus, your mom last night told me to just pick off her scabs to make her extra dripping wet.
@@112Famine wow wow, wow, wow........wow.
Trolling on the internet is tight!
Do you kiss your momma with that mouth?
112Famine >"Yes sir I do"
The effort and the accomplishments made by this channel are incredible . Well done 😊
effort and the accomplishments? Well done? All he did was flash his bios or am I missing something?
@@112Famine he did design and sell many delidding tools which are very helpful for modders
Would be cool to test the stock cooler out to see how well it’ll do with extra power. They’re so over engineered for 500w… I bet it’ll do great at 750w. Not sure about 1k but that’s what testing is for! 🏆🍻
can check that after doing the XOC :D because it's kind of a risk as I said.
The problem is the VRM temps, that's nuts. I'd expect them to be a bit better. I'm wondering what the original TDPs were for the 4090, for them to have these massive heatsinks...
Right now I'm more concerned with wtf is going on with the adapters...
@@SimanSlivar Topping out at 90 degrees while providing 700W+ to the GPU, running naked with no heatsink, is actually quite impressive.
@@utubby3730 cause that’s all u know
edge of seat. I have the same mod block from EKWB and I was too scared to use it without VRM cooling. thanks for doing this.
Insane
You always push the envelope further than anyone else
All he did was flash a bios, not have the waterblock needed, & pet his cat.
I've been installing hacked BIOSes & water cooling for three decades.
I repair graphics cards for a living and I see this all the time. It's most likely a damaged 250nF capacitor on one of the PCIe lanes. You can measure with a standard multimeter and compare the readings against the neighboring PCIe capacitors; they're located next to the PCIe slot, connected to each PCIe lane on the back of the card. Good luck, love the vids, keep up the good work.
This would only be the case if it does it on both AMD and Intel boards..
This is my favorite tech channel. I love how in-depth you get with specific topics and how you aren't afraid to damage some hardware to test something
... this video was nothing more than "empty content", all he did was install a not good enough waterblock, & flash a bios, & pet his cat.
I think you're confusing "aren't afraid" with "crazy man"
Please, when you are allowed, try to boost a 7900XTX to use 450W and compare its performance to the stock 4090.
Then of course see just how far you can take it. I really enjoy these bleeding-edge videos.
To be on the safe side, install a waterblock and benchmark first on stock BIOS to check the mount. Only flash BIOS if you know your mount is good.
Yes! New Der8auer mod let's go!
Loving the cat getting some camera time, he(?)'s absolutely gorgeous.
Truly amazed once more. Thanks for raising the bar above everyone else !
Yup, you're a bot no question about it. Or you're really 'truly amazed' by seeing someone flash a bios. "raising the bar above everyone else" ... again all he did was flash a bios, or am I missing something?! Yeah no one else flash a bios
@@112Famine Nope, you didn't miss anything, especially not all the effort in discovering and then lifting bottleneck after bottleneck in order to truly explore the maximum potential of this card in ways no one else has even attempted. I guess I must be a bot =P
@der8auer, I know you probably know this, but: disable all C-states and power-saving options in the BIOS, including for PCIe, and force the slot to PCIe x16 Gen4; try a different PCIe x16 Gen4 slot on the motherboard if available. Also check the Windows power settings; if you haven't touched them, there can be PCIe-related options left in power-saving mode even on the Balanced or Recommended plan. In the Nvidia Control Panel, set Prefer Maximum Performance. Use tools other than GPU-Z to check how many PCIe lanes are in use, and check only under load; sometimes even after the settings above it will show x8 unless you put load on it. You could also inject voltage on the GPU itself and check whether the lanes are really dead or something else is the problem.
It doesn't make sense for half the PCIe lanes to die without even disassembling the card. I had a problem with PCIe lanes before, but it was because I used too many of them on other peripherals: too many M.2 NVMe SSDs and SATA connections, a 10Gb PCIe LAN card, etc.
More power Mr. Scott!
Adding an M.2 drive can cause lane bifurcation and drop your x16 slot to x8 mode. Check your motherboard manual.
The x8 PCIe link could be a BIOS setting for the NVMe drive config. I got the same issue after a BIOS flash: an NVMe slot was sharing the x16 slot's bandwidth even with no drive installed. Just saying this can happen; it might be possible to get full link speed back.
Thankfully even though shorts is "broken" (read as YT tampered with the content algorithm leading into a US election) my subs work still. Good stuff.
Bro you are insane, I love it
That thumbnail definitely grabbed me.
Try cleaning the contacts with alcohol and check the caps and resistors near the PCIe contacts. Occasionally, when I've had weird problems like your x8 issue where I couldn't find anything obvious, I first try cleaning the whole board with alcohol, and if that doesn't work I spray the board with some flux (because the alcohol will have cleaned the old flux off) and try reflowing the whole board in an oven. Occasionally one of those two fixes the problem.
A reflow in an oven is not the way to fix it, as it can do more damage. The problem could be the BIOS bug that some of the new Intel boards have; it sounds more like a motherboard problem.
@@FilipMunk I specifically listed reflow last because it's a last resort for when you know there's a hardware problem but cannot find it. I did not say it was the best primary go-to fix.
Regarding the bus interface problem:
I think it would probably be nice if you can place the GPU in another motherboard (since I think you don't have another Strix 4090), to check if the problem is with the GPU or with the motherboard itself.
Edit: 3090 -> 4090. My bad.
4090** sorry, had to
Lol, you realize derbauer is one of the world's most knowledgeable overclockers with dozens of motherboards at his disposal right?
@@jlgroovetek yeah but maybe he didn't try another motherboard?
@@jlgroovetek you realize it's impossible for a single person to know everything about something, right?
@@deher9110 you realize that if derbauer didn't try this 4090 on another mobo he'd have no credibility right? It's literally the first thing he would have tried.
Fun tip: If you hold shift and right click in a directory you can open a command line.
Maybe it's a weird motherboard bug that causes the x8; maybe worth a BIOS flash just to test?
1kw 4090, dude, you're Mad :D
My guy is a mad lad.
Bad CPU mount? Linus had the PCIe issue happen on a LGA Threadripper and the problem was solved by reseating the CPU.
This, my RX 570 running at x8 was fixed by reseating my R5 1600.
TL;DR: they're already running an impractical overclock out of the box, and that level of power isn't getting much extra. They were so worried about what RDNA was going to hit.
It will be fascinating to see how the AMD cards scale considering their more sane starting TDP...
They found another way to make things work properly, and I love that. They stepped back from pointless power-forcing with worse power delivery, you know what I mean, but made a huge jump in delivering people technologies and computing power that other companies think we're not worthy of.
(Very fluffy, but that's how it is.)
AMD will be 10 to 20% below the 4090, pretty much. But at a $600 discount and with less power consumption than Nvidia.
It's the resistor on the back side of the motherboard somewhere close to the solder blobs of the pins. I've seen this before and RMA fixed it.
And it seems to happen only with some 4090 cards, while, let's say, a 3080 or 4070 seems to be fine.
Hello Roman, could you add a digital thermometer in the background when testing the 4090? I'm sure it can heat your room easily.
The two times I had PCIe lanes randomly dropping: the 1st time the CPU needed re-seating (the cooler was tightened too much), and the 2nd time the PCIe socket and edge connector just needed cleaning :)
Legend right here , Good video Roman,
From the Mad Scientist
2:05 Same thing happened to me with a 3090 FE; it was stuck running at PCIe 3.0 x4. Tried everything, wasted a ton of time looking for solutions, and it drove me nuts; none worked... RMA'd back to Nvidia. The mobo was perfectly fine, ran x16 with any other card.
Maybe it's the drivers somehow?
With a BIOS mod or driver update?
@@BlueSkyYGO Nope, just happened out of the blue.
@@Miskatonik Even in another PC?
@@BlueSkyYGO Nope, again, the RMA was accepted and got my money back.
En route to 1 gigawatt for the 50 series... I'm having talks with Siemens and GE to have one of those nuclear plants installed in my backyard.
Love these videos. If I may give a personal opinion regarding the x8 issue, maybe just look at the following, as I had a similar issue with one of my cards a while back.
Basics like the MB BIOS settings were messed up once, causing my x16 slot to operate as x8, and something as simple as an NVMe and/or SATA device was somehow interfering with my display card, also causing the x8 issue. What I did in the BIOS was change the slot to x8, apply and reboot, then back to x16, apply and reboot, and it was working again.
Your post here has more info than his video where he flashes a BIOS & pets his cat.
You changed it to x8 before applying and rebooting, only to change it back to x16?
This indication from GPU-Z means the card is in power-saving mode; when you click the render test it should show x16.
Really hope you do a launch-day video for the 7900 XT like you did for the 4090. Curious how much power there is to be saved.
I am so amazed that something as small as a CPU/GPU die, producing more than 500 W of heat, can survive a sudden jump in delta temperature without cracking or breaking. Even the VRM MOSFETs - how the hell can they provide such a small and accurate voltage with massive current flowing through them?
That's why the VRM comes with at least 20 phases... it's well balanced and often over-engineered. Different portions of the GPU use that voltage, so you have multiple points of voltage input and not just one.
Dayam you beat me! Good Job.
Friend: Hey what are you doing? Gaming?
ME: NO! Welding with RTX 4090
5:57 the most important support
Will be interesting to see if a proper block like the upcoming ones from Optimus or Watercool makes a meaningful difference. Probably like 5°C, 10°C at most.
I pre-ordered the EK v2 block, hoping it does well. Hoping for 40-45°C at 100% load.
Have that same universal GPU block, wish that EK would continue support and create updated mounts for it. It's an awesome little block
Waste of stuff. Better to upgrade your SSD instead.
That's insane. I want one!
Bought an Asus motherboard one time and it also showed the PCIe x16 in x8 mode. Randomly fixed itself to be x16 at one point, then randomly reverted to x8. It wasn't related to having NVMe. I returned the board and bought a different one. New board worked correctly.
ok, I have to know, are we getting a Der8auer Cat Tips channel in the future. You know we're really here for them right? 😁
Power saving features affect the GPUz pci-e displayed speeds. If you run a load on the gpu it'll change to x16 4.0.
He ran a load and it was still max x8 4.0
You're a king and asus ROG PC components are as well. Goat
This is the beginning of a new era of GPU OC.
1:00 The last fan on right side of GPU looks scared of overclocking.
If I remember right, I think I found that the PCIe link can be dynamic, as in it can change its settings (speed and width) as needed.
For the x8 PCIe problem, I'm pretty sure that when not under load the bus downclocks itself to save power. In GPU-Z there should be a render test option, and that should make the bus go back up to x16.
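The power-saving behavior described above is visible on Linux in the `LnkSta` line of `lspci -vv` output; a small sketch of reading it (the `parse_link_status` helper is hypothetical, written here just to illustrate the speed-to-generation mapping from the PCIe spec):

```python
import re

def parse_link_status(lnksta: str) -> tuple[int, int]:
    """Parse an lspci-style 'LnkSta' line into (pcie_generation, link_width)."""
    # Per the PCIe spec: 2.5 GT/s = Gen1, 5 GT/s = Gen2, 8 GT/s = Gen3,
    # 16 GT/s = Gen4, 32 GT/s = Gen5.
    speed_to_gen = {"2.5": 1, "5": 2, "8": 3, "16": 4, "32": 5}
    m = re.search(r"Speed\s+([\d.]+)GT/s.*Width\s+x(\d+)", lnksta)
    if not m:
        raise ValueError("unrecognized link status line")
    return speed_to_gen[m.group(1)], int(m.group(2))

# A card idling in a power-saving state may report a slow link:
print(parse_link_status("LnkSta: Speed 2.5GT/s, Width x16"))  # (1, 16)
# Under a render load it should train back up:
print(parse_link_status("LnkSta: Speed 16GT/s, Width x16"))   # (4, 16)
```

The point of the comment above is that only the *loaded* reading tells you whether the slot is genuinely stuck at x8.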
this is wonderfully insane
The strix looks like a boom box on that desk. Lmao
That WireView module is slick! Would you happen to have a link for where to buy it?
Hey there! Nice! What about an M.2 SSD attached to a PCIe slot connected directly to the CPU (the same one the GPU is on)? As far as I remember it shares the lanes.
That's why I tested it on an APEX board with a SATA SSD and a different CPU :D but same thing.
A directly CPU-attached M.2 SSD should not consume PCIe lanes of the first x16 slot on modern boards; they use separate lanes. It does when you connect an M.2 SSD via the PCH in PCIe mode though.
At least that's how it works for X570, X670 and Z690.
You can check the chipset diagrams as proof.
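Several comments here (and the ASUS Z790 Extreme manual mentioned earlier in the thread) describe the same mechanism; a deliberately simplified sketch of it, with entirely illustrative names and not data for any specific board:

```python
def gpu_link_width(m2_1_populated: bool, shares_cpu_lanes: bool) -> int:
    """Electrical width of the primary x16 slot on a hypothetical board.

    On boards that wire the CPU's Gen5 lanes to BOTH the x16 slot and the
    M.2_1 slot, populating M.2_1 bifurcates the link: the GPU drops to x8.
    M.2 slots hanging off the chipset (PCH) use separate lanes, so they
    leave the GPU at x16.
    """
    if shares_cpu_lanes and m2_1_populated:
        return 8   # lanes split between GPU and CPU-attached M.2
    return 16      # full-width link

# SSD in the lane-sharing CPU slot -> GPU at x8:
print(gpu_link_width(m2_1_populated=True, shares_cpu_lanes=True))    # 8
# Same SSD moved to a PCH-attached M.2 slot -> GPU back at x16:
print(gpu_link_width(m2_1_populated=False, shares_cpu_lanes=True))   # 16
```

Which slots share lanes (if any) varies per board, which is exactly why the comment says to check the chipset/board block diagram in the manual.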
Finally letting them eat. An awesome 4090 Ti would probably hit 1000 W, oh man, so crazy.
Doing an OctaneBench run (Octane Render 2021) is one way to test card speed without the PCIe bandwidth restriction, as it loads the card up for local compute only. I've had identical scores on a 3090 inside a PC vs. outside via Thunderbolt. Maybe that's one idea for your testing?
So if you set a 1,000-watt power target, what good does it do when they only supply cables that can handle 600 W max?
I'd strongly suspect that Roman (der8auer) has either OEM or custom power cables able to carry higher current. The Nvidia adapter is only used if you're feeding the card from PCIe power connectors. There's a new ATX spec for PSUs that defines this new 12-pin high-power input for the GPU. A number of PSU manufacturers either include a dedicated cable or have made one available as an aftermarket part if your PSU was made before this was a standard. Also, manufacturing your own power cables isn't terribly difficult: you just need some wire, the plastic housing, a hand crimping tool, and the correct size pins for the housing. Most people will then sleeve the individual cables, either with a plastic sleeving material or with paracord, to make the cable look nicer and match their build's color scheme.
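To put numbers on why cable/pin current matters here, a quick worked example. The 12VHPWR connector carries 12 V over six power-pin pairs, so the per-pin current is simply total current divided by six (the `per_pin_current` helper is just illustrative arithmetic, not a spec calculation):

```python
def per_pin_current(watts: float, volts: float = 12.0, pin_pairs: int = 6) -> float:
    """Average current through each 12 V pin pair of a 12VHPWR connector."""
    total_current = watts / volts     # I = P / V
    return total_current / pin_pairs  # spread across six power pins

print(round(per_pin_current(600), 2))   # 8.33  A per pin at the rated 600 W
print(round(per_pin_current(1000), 2))  # 13.89 A per pin at a 1000 W target
```

So a 1000 W target pushes each pin well past what it sees at the 600 W rating, which is why higher-spec wire, good crimps, and fully seated connectors matter so much at these power levels.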
Can't wait for the 13th gen dedicated bracket :) for Z790 mobos.
On my motherboard the screws are too short to mount the bracket now.
You should be fine... the boards are the same... I had no issues going from an Apex to an Aorus Master.
Try cleaning the PCIe gold pin connectors. It could be thermal compound stopping connectivity. That's what happened with my 3080.
Exciting as hell
RDNA3's more exciting
For most people this is just a YouTube channel; for me it's der8auer - if something is possible, this guy is gonna find it.
I've watched both versions of this video, and I keep thinking the light reflecting off your radiator fans is smoke.....lol
Weren't these pushing 3 GHz at stock power draw? Kind of feel cheated that an additional 300 W only gets another 30 MHz...
I found that shunt modding didn't really help much. Seems like the BIOS has some sort of aux power limit that isn't determined by a big shunt resistor. It was the same on the 3090, where you couldn't get a shunted 3090 above about 650 W and needed the XOC BIOS to hit 1000 W.
To install the waterblock, did you use the G80 adapter? I intend to watercool my future RTX 4090 with a universal waterblock.
I can't wait to get a card for ln2!
I always wondered what a 1600 W power supply was good for. My guess is you forced PCIe 4.0 on your 5.0 slot and it was still a no-go. You may have to wait for a motherboard BIOS update or try a different board.
Unlimited power!
Of course power increases drastically with a tiny voltage increase - the card is pushing over 600 amps there.
1000w of power that's insane
Smart idea, doubles the subscriptions :D And of course not all of us English-speaking noobs know German as well as we could.
You're the best.
Speaking about your Strix: EK are producing a dual water block set (front & back, BOTH plates active, so it looks amazing from top and bottom with RGB) for the RTX 4090 Founders Edition, called the "EK-Quantum Vector² FE RTX 4090 D-RGB ABP Set - Nickel + Plexi". Water blocks reduce the size of the card so that it would even fit in a small-form-factor case with a custom water loop, as well as gaining all the other water-cooling benefits: the best possible temperatures, giving the card the thermal headroom to CONSTANTLY boost to maximum speeds (so long as you also have a beefy PSU) in a way no fans would, silent running, reduced power draw from the cooler-running card, greater stability, and greater potential for future overclocking. The latest CPUs also run incredibly hot, so they benefit from water cooling, with the fastest DDR5 RAM also benefiting from a water-cooling block.
Could you please ask ASUS ROG if they will produce a Strix RTX 4090 in partnership with EK, so that it can be bought with the same front & back active water blocks, for the best performance while also fixing the "too big for the majority of cases" issue? If they also switched to DisplayPort 2.1 on this card it would sell like hot cakes, because the RTX 4090 not being able to output graphics that it can manage to generate makes it a serious joke, with DP 1.4 an unacceptable bottleneck.
I plan on getting a Cooler Master HAF 700 case and installing three 420mm radiators (top & side & bottom) which, when paired with the front two 200mm fans & rear two 120mm fans, should allow me to cool even super-hot gear to near room temperatures, I hope.
The RTX 4090 was released before the DP 2.1 standard, only just though. Also 2.1 seems more aimed at laptop and USB-C standards alignment, rather than more bandwidth.
DP 1.4 is good for 4K @ 144hz, so that's still beyond what a 4090 can do in AAA games and is at the limit of what is available on the market monitor wise. (Asus's own ROG XG27UQ UHD 144hz monitor uses DP 1.4 wise DSC)
So no DP 1.4 is not a bottle neck yet.
@@agentcrm The RTX 4090 can manage considerably better than 4K @ 144 Hz lol. In fact, it can run some games at 8K at very playable refresh rates (see the Linus Tech Tips video on 8K gaming for proof), so yes, DP 1.4 is ALREADY a bottleneck. And as DP 2.1 is a powered connection, it will not be just a BIOS update, as some people have suggested.
@@TimLongson The 4090 requires DLSS to get close to 4K @ 144 in most games, so that is why I say it is beyond its capability.
As for 8K, those LG TVs only have HDMI 2.1, so DP isn't going to help there.
DP 2.0 would have been an option, but since there's no monitors on the market that support it, there's not much in the way of an incentive.
Not to mention the games LTT showed in 8k, didn't even have 8k assets to render.
They could have put the powered connection in there for DP 2.0 but not activated, which I doubt.
Fascinating
Nvidia with the 2 in 1 gpu and space heater combo. Just in time for winter. Thanks Nvidia!
Roman, about the x8: I had the exact same problem with a refurbished EVGA 1080 Ti FTW3. It is 100% a card issue and unfortunately you have to RMA.
Hey @der8auer, any explanation of that nice WireView adapter you used? :) Coming out soon? Would be sweet over the CableMod one.
At that power limit I'd be looking into active cooling for the Nvidia stock adapter - knowing how Nvidia likes to save money on stuff like that I'd just be beyond paranoid of welding the plastic connector ends together after a few benchmarks lol.
Get the Galax HOF so you don't have to worry about going out of spec power-wise.
Nice. Also, I would def buy that WireView thing. If it didn’t say “WireView” in big white letters on the front of it. Why not engraved and colorless, or just a sticker?
And yea, I know WHY it's there. But if fewer ppl buy it bc of that then fewer ppl will see/ask about it anyway.
der8auer please, this is crazy, stop while you can!!!
WE NEED MORE POWER.
Really want a higher power limit BIOS for my 3080 TUF; it's power-limited to 340 watts.
What's the point? The chip is pretty much at its ideal point on the frequency/voltage curve. Going even higher brings diminishing returns (more power, barely more performance).
The first 20 posts are all in awe of this video... need to turn down the settings on the praise bots if you're going to create content just to create content.
I guess if you can justify using it as a room heater then this is a worthwhile thing to do.
How are those power cables not melting past 600 watts (and the plug on the card?) when we're hearing left and right people are having issues even at stock? :o
You tried a different motherboard re: x8 mode, right?
I've seen certain NVMe slots cause drop-downs when using a 4.0 x4 NVMe drive.
The other thing I've seen is that the CPU-driven PCIe bus pins are on the EDGE of the CPU, which means that re-seating the CPU (and if you're using your bracing block this is more likely to be the cause) will potentially fix it.
Maybe your BIOS changed settings due to an issue with an NVMe M.2 disk on the PCIe lanes; then you always get x8.
Out of curiosity what is the maximum theoretical power the PCB could handle? Even if you could pump unlimited power through the PCB would the die itself have a limit?
There is no hard limit as such; the limiting factor is the temperature rise of the copper traces. So if you don't mind a toasty PCB, the "limit" is pretty high :)
@@robertjung8929 So LN2 the whole thing and 4090W 4090.
Would love to see you BIOS-flash a card like the PNY 4090 to see how it handles an increased TDP. It's locked at 450 W stock.
The day your GPU draws as much power as your microwave 🙆‍♂️😂
1:52 Have you tried another MB? Could be something as simple as reseating the CPU.
Also try forcing it to run at PCIe 3.0 in the MB BIOS and see if it's still stuck at x8.
Will you do a review of the GALAX 4090 HOF? It really interests me with that dual 16-pin.