There is a catch: you don't use LN2 to cool the GPU, but the power connector!
BHHahahahahha
lols
There is another catch, you can suffocate with all that nitrogen gas in the room!
New 12pin ln2 mini pot coming?
If you already have the LN2 on hand, why not go even further and replace the 12pin with a superconductor?
I really enjoy watching you overclock, and having the ability to see your validated scores on 3DMark.
thanks!
Thank you so much for actually putting useful information in your titles. It's a dying art judging by other tech YouTubers, keep up the great videos!
@LTT
you didn't even watch the video or you're blind; the card is under LN2 with a freaking fan pointed at the connector!
@@zee-fr5kw reaaaaally bad and sad advice man. yeah let's have the next video be an extremely shocked face of some girl with huge boobs and the title being "Nvidia won't like this" or "I tried some OCing, the results were shocking!" or whatever other lowest common denominator zero self respect clickbait bullshit you can think of.
It's just about the most depressing thing on YouTube and I don't understand how these mouthbreathers keep clicking on this shit... at some point you would think they would catch on....
@@ChrisGR93_TxS What does that mean? "with a freaking fan pointed at the connector!" ?
@@yobson Are you guys saying that @LTT is one of the YouTubers who don't put useful info in their titles?
You can try using nvidia-smi to force max boost clocks (same thing Precision X does for its "boost lock") to prevent that odd downclocking at the beginning. Though, the card running at 2610 would indicate it is already in the P0 state, but worth a try.
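(For anyone who wants to try this, a minimal sketch of the nvidia-smi route, assuming a recent driver where the --lock-gpu-clocks/--reset-gpu-clocks flags are available; the 2610 MHz value is just a placeholder matching the Strix factory boost, and the commands need admin rights.)

```python
import subprocess

# Pin the core clock to a fixed range so the driver can't downclock at the
# start of a run. Requires admin/root and a Turing-or-newer GPU.
subprocess.run(["nvidia-smi", "--lock-gpu-clocks=2610,2610"], check=True)

# ... launch the benchmark here ...

# Hand clock management back to the driver afterwards.
subprocess.run(["nvidia-smi", "--reset-gpu-clocks"], check=True)
```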
The 29k+ PR scores (and even upper 28's) on ambient are mostly just people abusing the memory bug. Gonna have to force ecc enabled for hwbot leaderboards as using the HoF is basically pointless ATM.
what is the memory bug?
@@iMagic16 People jacking up the memory clock until it's unstable and produces errors in the texture rendering. You can watch Gamers Nexus' LN2 video where their last run did this and gave them +1700 points over what they had been getting.
I personally have gotten over 31k on ambient when this happens (I didn't submit the score because it's not right to do so and is a borked run).
I have the same thing happening on my 3080 Ti FE which is undervolted in MSI Afterburner.
It is set to 1830 MHz and that works in most games, but during TimeSpy it will first run at 1815 MHz for a few seconds, then 1845 MHz... Then after some time it will stabilize at 1830 MHz!
The difference isn't big enough for me to worry about it; the only problem is that 1845 MHz can provoke crashes in games, so I have to monitor this
Great video Roman! I can't wait to see you push the score even higher.
And you made us wait for a whole week for this video T_T
I have been waiting for this video so much xDDD
sorry just too many different projects right now :D
I have a strix 4090 too. When you apply an overclock in afterburner and run the pcie bandwidth render test in gpuz, the strix will run at 2610 (its factory boost) rather than the overclock applied in afterburner. Same thing that is happening to you during your first 10 seconds of benching.
The PCIe x8 is a platform issue. Alder Lake/Raptor Lake CPUs only have 16 PCIe 5.0 lanes, so most motherboard manufacturers chose to bifurcate the x16 GPU connection whenever the top M.2 slot is plugged in, in order for the SSD to have a Gen 5 connection available.
yea already been said in the last video, it's running x8 because the nvme isn't in the right slot
@Panurgic well yes and no, I'm pretty sure they can set the top slot to use Gen 4, which would allow the GPU x16
Hope you get that new galax hall of fame 4090! I bet you could crush some leader board numbers with that dual 12vhpwr connector and unlocked bios lol
Maybe try to get the Galax HOF card. The higher number of power stages, and maybe higher component quality, could decrease VRM temps at higher voltages. If it is a good sample you might be able to run colder at higher power consumption. Especially with the double connectors.
9:45 you can lock the clock frequency in afterburner to be constant.
I've been waiting for this! Thanks der8auer!
Pretty cool!
Can I ask a question?
Can I build a bracket for my RTX 3080 Turbo by Gigabyte for AIO cooling?
There is no water block for it, and it's a really hot card.
I was curious to build a bracket for an AIO cooler and then attach it to my GPU.
I'm not sure how I can cool the memory and other parts
Great video to watch while eating lunch thanks!
If you're using the M.2_1 slot, on that board, it behaves as though PCIE_2 is in use and halves your bandwidth to x8.
Even on X570? D:
@@BBWahoo Yeah, it is due to the board layout.
@@erkinalp
Strange, it still reports 16 lanes used in the BIOS. I wonder why?
I have 3 NVMe slots and three PCIe slots, one occupied; the former using 2 NVMe drives + U.2 via an NVMe adapter, and a 1080 Ti. So I assume it only halves it if I used PCIe 4.0?
@@BBWahoo "Even on X570?" - No, specifically on the board that Roman is using.
9:42 in my case this happened without any XOC, due to the Nvidia drivers not respecting my "prefer maximum performance" global preset. By manually setting 3DMark to run in maximum performance mode in the Nvidia control panel and restarting, it locked to the set frequency as soon as 3DMark was launched and stayed there until it closed. I was having similar clock issues in other applications and the same process resolved them (it was running at only 400 MHz for 2D acceleration even though it was 70% short of the frame rate target).
This is exactly what I was thinking, low clocks like that is what we see when the card is in the "Normal" power saving mode.
I'm sure you know this, but if you populate the first NVMe slot it makes the GPU run at x8, not x16
only if you don't have enough PCIe lanes
@@Proto_is_noob on the extreme boards that is what it does. I have the same board
I wonder if a bad BGA contact could create the PCIe x8 issue.
For Port Royal, don't many just OC the memory until they get artifacts, but still pass the test?
Great content as always 👍
Try using EVGA’s precision overclocking tool, you can lock the clock speed of the gpu with it
Use EVGA Precision X1 to lock the card in P1 state and force the full clocks to fix the start of your tests.
Does Precision X1 support RTX4000 series?
@@noxious89123 Yes, Jay (JayzTwoCents) has already been using it
Oh boi been waiting for this
Yeah, the card might be broken. Hey man, seeing as your channel is so high profile, I think Asus should send you a new one.
With your setup I would expect some high placings in Timespy! GJ man, you always go really in depth.
Hi Roman, can you flash back the stock BIOS and see if the PCIe speed will be x16, or if it will stay stuck at x8? I experienced something similar with an RTX 3090 when I flashed the XOC BIOS onto it; my RTX 3090 would be stuck at x8 with the 1000W XOC BIOS
Showed it in the last video already. It's x8 even with stock bios
@@der8auer-en Roman, Z690 mobos have a bug (if you are using NVMe slot 1, the GPU runs at x8); try another slot.
@@der8auer-en Ohh okay Roman, I thought it was down to the BIOS. Does it happen with both BIOSes? I assume the Strix, as always, has dual BIOS
@@andresm5087 That doesn't sound like a bug but a feature.
t. ASUS CH7 with dual NVMes with PCIe x16_1 running at x8
@@andresm5087 he tried other slots, other motherboards etc.....definitely a #### card.
Maybe a silly question, but are you putting your boot SSD in the PCIe 5.0 m.2 slot on these boards? 'Cause on Z690/790 those are always bifurcated off the PEG lanes as the CPU only has 16 5.0 lanes. Make sure your SSD is in a slot explicitly marked 4.0 or 3.0.
That could help, but I'm not sure that would be the problem. Another thing that could possibly be creating the problem is the fact that all the safety features are turned off, as that could be causing the GPU to think there is something wrong with the safety features, therefore pulling the PCIe lanes down from 16 to 8 in order to keep within its safety margins
@@lordi2009k I'm not sure that would be the problem either, hence why I'm asking. But if there is a drive in that slot, then the GPU won't have access to more than 8 lanes no matter what you do. That's just how LGA1700 works, sadly.
As for what you're saying, it's a theoretical possibility, but rather unlikely. The number of lanes active is detected and negotiated during POST, and the GPU or motherboard doesn't normally change this while in operation - they'll step down to a lower PCIe gen for power savings, but I've never seen the bus width actively renegotiated. If there was a safety fallback mode it would likely go to the lowest possible configuration, i.e. x1 PCIe 1.0 or 2.0.
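(If anyone wants to check what actually got negotiated, on Linux the kernel exposes it through standard PCI sysfs attributes - a minimal sketch; the 0000:01:00.0 address is a placeholder for wherever your GPU sits.)

```python
from pathlib import Path

# Placeholder PCI address; find your GPU's with `lspci | grep -i vga`.
dev = Path("/sys/bus/pci/devices/0000:01:00.0")

# Compare negotiated vs. maximum link width ("8" vs "16") and speed.
for attr in ("current_link_width", "max_link_width",
             "current_link_speed", "max_link_speed"):
    print(attr, "=", (dev / attr).read_text().strip())
```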
The motherboard runs in x8 mode because they lock it when there is an SSD installed in the slot closest to the CPU, or M.2_1. I was also bugged by this, and the solution is to move the SSD, which is impossible for me because my board has 3 slots and I have 3 M.2 SSDs.
Did you watch the video? He tried multiple boards and CPUs... it's the card 💀
Move the ssd to a different slot. A lot of boards share gen 5 lanes with the ssd.
My GPU goes pretty hard: it pulls 485W @ 1.096V with a core clock of 2.2GHz @ 74C under gaming load. Asus Strix 3080 Ti OC; Asus just builds things different, I guess. I have loads of headroom too, so I could easily undervolt it and run higher. I've achieved 2.3GHz at 1.069V before, but the temps were at 73C during the Timespy benchmark and I assumed I'd be looking at 80+ when gaming for long periods, so I'm currently running 2.1GHz max, pulling 353W @ 64C at 1.000V
Could the first ten seconds be caused by the x8 PCIe link loading benchmark assets slower?
Jay said the majority of 3DMark results go with ECC turned off, which supposedly are not gonna be counted as valid runs.
yep, but the problem at present is they still are, so it makes any legit runs pointless.
However, there is one thing to be aware of when installing an SSD in the first M.2 slot. Namely, that it takes the PCIe 5.0 lanes from the graphics card interface. It has the disadvantage that the moment you install an SSD in this slot, you will switch to eight GPU lanes, which will only work in PCIe 5.0 x8 or PCIe 4.0 x8 mode
This is great
good work here
Could the 700mhz for the first 10 seconds be caused by the chip actually being too cold? Maybe it’s so damn cold that the temp is “misinterpreted” as an error and the card full throttles for safety. But after about 10 seconds of load the chip warms up enough to be in a range that the card recognizes… Or is that all just crazy talk?
Thx for this great video, loved the edit!
It may be a dumb question, but have you reflashed the bios on the gpu?
yes back and forth like 10 times I think :D
Wouldn't NeverWet prevent the condensation issues?
A lot of the scores on the hall of fame for Port Royal should be invalid. There is a bug where, if you can get your memory to clock really high (over +1500), there is a chance it will artifact, ramping up the fps while still posting a valid score. Jay mentioned this in one of his recent videos, so I wouldn't be put off by most of the scores currently posted; it's pretty easy to tell which are genuine and which are not :)
As for your card, have you reached out to ASUS for any advice on where the missing PCIe lanes have gone? It's a bit past the RMA stage, but maybe they can help or offer some advice...
Keep up the good work, absolutely love your content :D
Highest I got was 27.1k; it seems like without any voltage/vBIOS mod, getting above this is impossible
@KopaZ I can get a genuine (no artifacts) score of just over 28k on my FE when overclocked, but this is pushing it to the limit, as the core will be around 3045 MHz and memory at +1500. I can push the memory higher to +1600 and will have lots of artifacts and post a much higher score, but it's not genuine imo 🙂 28k should be possible though on an overclocked air-cooled card 🙂
Cope.
wondering if the rumored Titan has two connectors or is still running one. one seems enough.
Did you check the pcie capacitors on the back side of the card?
what was the capacity of the power supply you were using for this OC test? I can't believe it didn't blow up or something
Question for you Der8auer:
Do you agree with corsair that the melting is caused by not fully inserting the 12+4 pin connector?
right now it seems like a lot of guesses but somebody has to find a way to reproduce the problem. Otherwise just guesses
New post on Reddit 5 hours ago with a Zotac 4090 burned with the connector fully seated. If 0.5mm of gap is all it takes to burn a connector, it's a faulty design either way. There should be enough contact even with a small gap. Someone like der8auer, for example, should measure pin depth and terminal depth vs how far the adapter is socketed to verify terminal/pin contact. That would easily tell if the "not socketed correctly" explanation is even a thing.
@@Tommyof84 Which sub was the post on?
@@noxious89123 R/nvidia post "9900k 4090 adapter melted"
I think to get more performance it perhaps helps to have a better sample.
love to see the Galax HOF 4090, liquid helium and a Threadripper
What PSU are you using? I just got a Strix 4090 OC and I have an RM850x. I need to get something a little stronger. I'm running a 12900KF. I'd really like to get something that I don't have to worry about upgrading in the next couple of years... so I'm looking at the 1400 to 1600W range.
What you need is back side cooling.
I'm curious.. could these condensation issues be helped by running a de-humidifier in the room to try and reduce the amount of water in the air? Just a thought...
lmao no
@@Sarlyx no, he seems to be right. Condensation is the water in the air cooling and condensing. If there is less water in the air it would take longer for condensation to build up, i.e. there would be less of it. Theoretically if it was 0% you wouldn't get condensation, but that's basically impossible.
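(To put rough numbers on that: condensation starts once a surface drops below the dew point, which you can estimate from room temperature and relative humidity - a sketch using the common Magnus approximation coefficients.)

```python
import math

def dew_point_c(temp_c: float, rh_pct: float) -> float:
    """Approximate dew point in °C via the Magnus formula."""
    b, c = 17.62, 243.12  # Magnus coefficients for water, roughly -45..60 °C
    gamma = math.log(rh_pct / 100.0) + b * temp_c / (c + temp_c)
    return c * gamma / (b - gamma)

# A dehumidifier does lower the dew point (~13.9 °C at 22 °C/60% RH vs
# ~3.6 °C at 30% RH), but an LN2 pot sits near -190 °C, so frost and
# condensation on the cold parts are unavoidable either way.
print(dew_point_c(22, 60), dew_point_c(22, 30))
```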
Sounds like us here in Montana "it's not too cold it's only -40" lol
Just a thought, but are you using the primary and secondary NVMe slots? On some older boards using both will lower the PCIe slot to x8, as there are not enough PCIe lanes available
The PCI-E issue is connected with the motherboard - ASUS Z790 Maximus Hero. I got one and have the same issue.
Hey Roman, will the WireView be fine if installed permanently in the system?
Legit question, no hate just curious
Why does der8auer say "ideaR"?
my english no good :D will try to work on it
@@der8auer-en Your English is waaaayy better than my German. Carry On.
noooo it's not about your English, your English is actually quite good
but I've noticed that many other German friends of mine pronounce it "ideaR" too. I'm just curious where it originates from and why it's so widespread
I used to say "ideaR". I'm a native English speaker. Not sure if it's the 2% German in me (I have some amount, just not sure how much), but I said it like that for a while. It's just a mispronunciation of the word, which can be a bit difficult for some to get right. This is why some go to speech therapy, especially during school and whatnot. It's fairly common, and since @Der8auer EN is a native German speaker, his English is beyond good and very impressive. Sure, it's not perfect, but no one is, we're human, embrace that.
@@der8auer-en As a native English speaker, I'd say your English is way better than you think. I used to do the same thing; it's just a mispronunciation and will get better over time as you naturally improve. If you want, you may read my reply above to someone else. It's absolutely nothing to be ashamed of and completely normal 😊.
Thank you for always going 100% and beyond with your content!
Have you tried EVGA PX1 with the boost lock to force the GPU speed in the beginning of the benchmark?
G'day Roman,
🤔Just wondering if the PCIe x8 problem exists when you do air or water OC too? As you have tested with different CPUs & motherboards, maybe some temp testing to see if there is a point where the PCIe limits because of that. Or, just another thought: as you & BPS Brian both had this on your Intel platforms, do AM4 & AM5 cut the PCIe lanes too?
As NVIDIA is so strict with the partners regarding BIOS & power limits, do you know if they have said anything about Elmor's BIOS for the 4090? 800-900W has gotta be giving them the SHITS BIG TIME.
Have you not seen his past videos?
@@les_railgun not at the moment, I have been trying to find a new home suitable for my disabilities & now am in the middle of moving.
So if Roman has answered these questions in past videos I would be grateful if you could answer them for me & point out which der8auerEN videos, so I can watch them first when settled in my new home.
Set port royal to max power in nvidia driver to avoid drops in clock speed
have you tried using evga precision x1 to lock the gpu core to max? then close it so it doesn't disturb your runs
Reflashing the card's BIOS should probably help if it's a software issue, right?
Suggestion, but also a legitimate question hahaha
No clue as to why the card runs at PCIe x8? What does ASUS say about it? GPU or mobo thing?
With the 40 series, could the thicker PCB be the problem, because it can "spread" the cold to the memory/around the PCB due to the increased copper content?
Massive 14-layer pcb man!!!
Does that motherboard have a dedicated setting for 3DMark like some older mobos?
Why are you using the broken GPU?
Perhaps time to directly LN2 cool the memory then!?
@@bradley3549 That would make it worse
@@Adam-bw4lw Explain?
@@bradley3549 it's the memory controller inside the GPU that likes to be cold; NAND flash itself can perform worse at lower-than-ambient temps
@@vinylSummer But we're not talking about NAND flash here (and I am familiar with the thermal sweet spot NAND seems to have.) We're talking about DRAM - which does seem to respond to cryogenic cooling - though I've never seen it on a GPU.
@@bradley3549 oh, you're right, I mixed things up in my head a little bit. Maybe not direct cooling, but rather some heatpipes going from the base of a ln2 pot to memory chips would be interesting to see
fan pointed at the connector.... X)
1:28 thermal pad…
That's a thermal brick!!
XOC BIOSes are amazing. Shame I can't find one for the 3090 with ReBAR support, so I'm stuck at 370W max 'cause of the 2x8-pin, even though my PSU wouldn't even flinch if I used it as a 1x8-pin daisy chain and my cables will happily do 500W each. I'm stuck at 370W total 'cause of the BIOS
That's insane
Where can I get a 12VHPWR like that with an indicator?
Hi, where can I get that WireView cable connector please? Thanks in advance
Does der8auer have a video showing how he does LN2 overclocking in depth?
Yesterday I saw JayzTwoCents' video where he mentioned you regarding legit scores, then today I saw your legit 3300 boost. What a good coincidence 😂😂
Btw, congrats, even though you have only half of the PCIe lanes
Is the PCI-E x8 lane issue related to an XOC BIOS flash issue?
Nice video!
Any possibility to share bios?
I'd love to get into doing this sorta stuff, how did you get into it?
try locking to P0, might help the delay/ramp-up?
The card isn't broken, you're observing a bug with the 1000W BIOS - x16 is just not possible yet.
Showed it in the last video already. It's x8 even with stock bios
for what it's worth, I'm using an MSI Gaming X Trio 4090 on a Gigabyte B550 Master board with a 5800X3D CPU, and my bus interface reads x8 as well. Stock BIOS, as it would be out of the box.
may I know what thermal paste you used?
This will become a new meme for Nvidia. What's the max POWER usage we can get!!!!! 2.5kW 🥳🥳🥳🥳🥳 yeaaaa
where do you find that WireView device? Tried googling it and I get something by the same name for Ethernet
What is the WireView adapter? I cannot seem to find one online.
that's because you can't buy it; it's an unreleased product that der8auer made with Thermal Grizzly
When did they change Port Royal?
wow as fast as my overclocked pentium e6300, nice
good video
Galax send this man a 4090 HOF please
Could the 8x PCIe lanes be caused by a driver issue? Have you tried completely wiping Nvidia drivers and reinstalling them with DDU or however that tool was called?
It's a motherboard feature. Alder Lake and Raptor Lake CPUs only have 16 PCIe 5.0 lanes, so the motherboard has to bifurcate the x16 connection in order for both the top GPU and the M.2 SSD to have a Gen 5 connection, even when none of them are actually running at Gen 5
What CPU was he using?
Anyone know how to buy these blue paper towels in the EU?
Man, Farming has really changed over the years.
here is a genius idea for the best overclocking adventure ever: bring your CPU/GPU or whatever you wanna overclock and do it outside on the street, but not in your home country - do it in Siberia. no nitrogen needed
How do you get the power connector to not melt!?
The PCB is basically layers of copper sheets, which conduct heat very well. So even when LN2 cooling just the GPU core, everything else, including the VRM, stays reasonably cool. The connector can't melt if it's being cooled!
Try EVGA Precision X1 boost lock for the 4090; Jay said it should work🔥
What disc do you use?
I mean, 46 degrees while gaming doesn't sound too bad right now.
Connector?
you can ask Theodore (KrisFix-Germany on YouTube) about the PCI-E problem; maybe he can fix that
Can you do an episode on why the 4090 takes so much more power with each voltage increase?
It's literally the same as every other semiconductor, CPU or GPU, for the last couple of decades. Increasing frequency increases power draw in a linear fashion. Increasing voltage increases power draw quadratically (power consumption is proportional to the square of the voltage). It's more drastic than we've seen with older cards simply because it's far more power hungry to start with. There's an awful lot of transistors in that GPU!
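(A back-of-the-envelope illustration of that scaling, P ≈ C·V²·f. The effective-capacitance constant below is made up, picked only so the baseline lands near a ~450 W stock-ish 4090 operating point; the ratios are what matter.)

```python
# Dynamic switching power: P = C * V^2 * f (frequency linear, voltage squared).
def power_w(volts: float, freq_mhz: float, c: float = 1.51e-4) -> float:
    return c * volts**2 * freq_mhz * 1e3

print(f"baseline 1.05 V / 2700 MHz: {power_w(1.05, 2700):5.0f} W")
print(f"+10% frequency:             {power_w(1.05, 2970):5.0f} W  (+10%)")
print(f"+10% voltage:               {power_w(1.155, 2700):5.0f} W  (+21%)")
```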
Greetings from faraway China. A quick question regarding the PCIe problem: the SSD for the operating system etc. is not installed in SSD slot 1 on the mainboard, is it? That would explain the graphics card's downgrade to x8.
900W!!
How old is that ThinkPad you're using? And why? Why are you using one so old...? 😜
P14s isn't that old, what the fuck are you on about?
@@Wasmachineman Wow, why do you feel the need to be such an @$$ HOLE? It just looked old to me because I have one that looks very similar and it's very old.
Not everything needs the best and newest thing.
"If you're happy with your system, then there's no reason to upgrade." - Gamers Nexus
@@technologicalelite you see the little 😜 I put after what I wrote? That indicates sarcasm. I was attempting to give him a hard time just for fun. The ThinkPad that I have that looks like that is so old that I haven't used it in 10 years. Also, I thought maybe it was some kind of sleeper that he had built.
I just use it for office and a bit of CAD, so it works fine for me :) upgrading wouldn't give me anything except wasting money
Maybe you can ask Galax for their Hall of Fame card? The PCB looks even better than the Strix