I'm thinking of getting some Viper Steel RGB, 3600 C20; my current RAM is 3200 C16. The reason I'm getting new RAM is that my current kit isn't on the compatibility list for the X570 board I'm planning on getting, and I just want to make sure it will work. So basically the difference between C16 and C20 won't matter all that much, especially if I can get them to C18 through overclocking? They'd essentially be the same?
Thank you, I appreciate your reasoning. You've given me some confidence in choosing memory, even though I will probably hardly ever actually touch the settings.
I only learned about CL timing on RAM about a month before DDR5 came out, so when I saw the timings on DDR5 RAM I was quite confused; they seemed crazy high, but it turns out I was just misinformed. Thank you for your videos, I think I'm going to end up watching a lot of them.
Whenever I watch these kinds of videos I come away thinking: OK, I need to get the lowest latency and the highest bandwidth. But whenever I adjust the settings I hit a wall: I can have either higher bandwidth or lower latency. Which is more important? Would you go for lower latency and sacrifice bandwidth, or higher bandwidth and sacrifice latency? Of the 2, which is most important for gaming tasks?
Latency is better, but it differs from game to game. However, a game doing better with lower latency is much more likely than a game doing better on bandwidth.
I like how you mentioned someone who specifically said they'll wait till there's CL28 DDR5 memory. I've seen people say things like this and I'm always like: why do you feel you'd pick this arbitrary number when you have no reason to think it would be the best number? There's nothing for them to factually base the logic off of 😆
Hello, I have an 8700K with 4x8GB 3200... Should I use 2666 with low latency instead? I read there is a bandwidth limit for each CPU and mine is 64GB/s, so why use 3200 if my CPU can't use it? THANKS
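For what it's worth, a rough sanity check of that 64GB/s figure (just a sketch; it assumes a standard dual-channel, 64-bit-per-channel desktop setup, and the function is purely illustrative):

```python
# Back-of-the-envelope peak bandwidth for a desktop dual-channel setup
# (assumption: 4 sticks still run as 2 channels, 64 bits = 8 bytes per
# channel per transfer). These are theoretical maximums; real throughput
# is lower, and latency still matters either way.
def peak_bandwidth_gbs(mt_per_s, channels=2, bytes_per_transfer=8):
    return mt_per_s * channels * bytes_per_transfer / 1000  # GB/s

print(peak_bandwidth_gbs(3200))  # 51.2 GB/s, already under the quoted 64 GB/s
print(peak_bandwidth_gbs(2666))  # ~42.7 GB/s
```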
I have a Gigabyte B660 Gaming X AX DDR4 + 12400 + Corsair Vengeance RGB Pro 3200 8GB sticks. Whenever I populate more than 2 RAM slots with X.M.P. enabled I can't boot; downclocking to 2800 solves it. Is there a way I can loosen the timings/voltage and stuff to run 3200MHz stable? Can you make a video on that? Nobody talks about this topic and I can't find a proper solution on Google.
Anything to watch for with DDR5 4800 SO-DIMMs? Thinking of getting a fairly priced MSI GS66 laptop, but instead of buying the version that comes with 32GB of RAM, which carries an atrocious price premium compared to the 16GB model, I can get a 2x32GB CL40 4800 set cheaper and then be able to sell the 2x8GB modules that come mounted. Do timings matter a whole lot in a laptop? I'm not even sure the BIOS is able to set advanced memory timings in a laptop.
Can you or someone link me to the video that advises how to choose memory with a good die, since it's not advertised what die is actually used? I was wanting some 6000 CL32 memory for my next build, but of course, if I just get a better die and tune it, that would be better performance, so I'd like to do that.
I really like this video. We have been making buying guidelines for people for years. The goal is not to get the best memory available at the store - not the best performance. The goal is to tie your choice to something as un-random as possible. In order of importance: CPU-supported clock, lower voltage, higher clock, lower latency. We switched the last two when Ryzen came out and we wanted to avoid BFUs running custom profiles just to match the clock of the CPU. The fact is that the manufacturer can make the label say anything they want, and the only balance they have to keep is returns. So you play their game.
I'm getting the G.Skill C14 3200MHz 16GBx2 (32GB) B-die baby to pair with my Ryzen 5 5600X, and getting rid of my Ballistix 3200MHz CL16. Which do you think I should go with?
Is the situation different for ddr3? I want to make the most out of my i7 2600k so should I start tightening tertiary timings first and primaries last?
It's about time to throw it in the bin bro. I changed from my 3770k to a 9900k in 2019 and the jump was amazing. Now we have 2022... soon to be 2023. Get a new computer.
Did I understand correctly that he compares "low CL" with automatic secondary and tertiary subtimings against higher CL but with compressed secondary and tertiary timings?
Have you had a chance to review RAM that shows up in Thaiphoon Burner as CXMT? I bought an inexpensive Silicon Power 3200 MHz DDR4 2x8GB kit and it doesn't seem bad. The XMP profile appears similar to Micron E-die (a little weaker on the sub-timings). I tried tightening up sub-timings but the profile seems to be its limit. IDK if you have had any luck playing with them and finding timings that can be tightened or not. It also seems like these chips get warmer as well. Maybe it's just me, but I can't validate that, as there are no temp readings pulled into HWiNFO.
What I've found is that the general rule is: if two different CAS timings are equal in ratio to MHz, the higher MHz wins out. Other than that, idk, I think CAS latency is important. I'll watch the vid to probably learn something new, but to me it's just about as important as MHz.
Thank you for this PSA! For us hobbyists who are not into overclocking and have less understanding of the whole realm… can you give a hint of what we should be looking for when shopping for DDR5? Is there something simple on a product page, other than simply the speed of the memory? I used to think that CAS latency was the next piece to look at… but I hear you loud and clear :-)
CAS latency will tell you the quality of the dies. A low CAS at a certain speed means that it was a better-binned chip. That means more overclocking and tuning headroom. So you can still look at it when you buy for overclocking, but it won't really tell you the performance level, since XMP profiles will still be made by the vendors and they can be really bad. So in the end, the buying pattern remains the same.
10:22 Hmm I thought it was because real world workloads do random reads 1 cache line at a time and that takes at least tras+cas cycles. Can’t we just minimize that sum?
What is Trc again? To be honest I would love it if memory manufacturers would start producing designs that minimize capacitance while maximizing conductance. I don't care if they need to use low density libraries on a 28nm class process node with high-k for the caps and ultra-low-k signal insulation, but if we just start cooling the chips with something like a low-powered Peltier module, we could easily benefit from
What do you mean "real world"? This is how the memory controller works, the prefetcher can load sequential memory locations before the program requests them, but that only works if the software is optimized to read sequentially.
@@mortlet5180 I think that's a really interesting proposition, but the use case is too niche to spend the design effort on, I guess - given that the only use case I can think of is OC records, considering the power consumption of the Peltier. Even though server CPUs are already pretty power hungry, they also need a lot of memory, and cooling such systems is hard (and loud) enough as it is, so that leaves only the real enthusiast use case, I'd guess.
So for a normal buyer such as myself, who has basic knowledge of RAM and doesn't know how to overclock it: what kits do you recommend for DDR5 and DDR4 that have good XMP profiles? How much in-game performance can you gain by tuning your own RAM? And lastly, if I am buying RAM, what sub-timings should I look for?
Completely irrelevant to the topic of the video but I noticed the date on your system is March 27, 2023, which makes me want to believe that you filmed this footage in the future (still a few weeks away at the time of writing) and sent it back about 8 months to be edited and uploaded in Mid July 2022.
Now that you have successfully disabused us of mistakes regarding CAS - Question: What would be good "[X] to keep in mind" things when buying memory - for overclocking as well as for *not* overclocking (AKA just a daily driver where the XMPs are consistently in the general ballpark of 'good enough')? More explanation below. I've been following/sub'd for a year or so I guess, maybe more now, because I love all this nerd stuff. I keep learning bits from a lot of different YT sources - it's an entertainment thing. However, I don't have the time or talent for research on my own outside of the spoon-feed I take from my general entertainment. (My Google-Fu is terrible. I'm like the Anti-SEO that elderly search engines tell stories about to scare the little web crawler bots around campfires... or whatever electronic version of that analogy exists.) So, as a layman, I have relatively little knowledge to use when purchasing system memory. Help please? P.S.: Sincere thanks btw. Specifically because of your channel I first learned that SK Hynix and Samsung B DDR4 dies are a thing that exists, and that they are great for overclocking (even if I don't have the money or the impetus to get into overclocking). All these rants and tidbits of info you drop are great.
"I could just edit the screenshots" he said, as he was *supposedly* showing us an unedited video... hmmm. I now question the entirety of your channel. /s
Wouldn’t he edit that part out? And what would be the benefit of him showing you fake scores when he’s literally teaching you how to do something that you can then go and do yourself and verify.
I was discovering today that primaries don't really matter. I was too late to hear that from you. I spent days overclocking a stupid SpecTek memory rev E. I will be showing off how a tertiary timing like tRDRD_dg should never go over 4... super important timing.
@actually hardcore Overclocking: So 3600MHz vs 3200MHz is the better option even though 3200MHz runs at a much lower CL (so CL18 for 3600MHz vs CL15 for 3200MHz)? I also heard lots of talk that it does matter for AMD systems but not as much for Intel systems, what is your view on this matter?
Well, it didn't work for my Kingston 6000MHz RAM. Does anyone have suggestions on how to reduce my latency? It won't go under 70ns. My other speeds are fine and the PYPrime score is around 10.6 seconds, which seems low. Will reducing my latency improve my gaming experience or should I just leave everything the way it is?
This won't increase your gaming performance, just your cpu. Gaming is usually limited by your gpu. Even then you're not going to increase your frame rate by more than like one or two FPS which really doesn't matter.
Hey, I really enjoy how informative your videos are, as I myself am a stickler for overclocking and making sure you get the most out of your hardware for bang for buck. I'm starting a new AM5 system and started digging through info on the latest and greatest DDR5 setups, and came across trying to understand timings, speeds, and what exact sweet spots I'm supposed to hit to get the most out of my system. As of now I'm rocking an Asus Prime X670E-Pro WiFi and will be getting an AMD 7950X. I was looking into RAM and trying to figure out the best one to get, stumbled across this, and wanted to get your suggestion for this setup. I will be OC'ing both RAM and CPU, and the CPU will be liquid cooled. What are your thoughts?
Wholeheartedly agree with your assessment. I spent 2 weeks or so messing around and learning about secondary, tertiary and even quaternary timings on my budget DDR4 3200 Samsung B-die 32GB (2x16GB) dual rank kit (oddly enough the set I have is a G.Skill SKU that's not really documented anywhere, seems like it was a very limited batch of budget-binned B-die, perhaps because of demand for 3200 or oversupply of B-die at the time) on a Z370 ITX motherboard with an 8700K... and man, I couldn't believe how bad XMP + auto tertiary/quaternary timings were compared to my hand-dialed timings... and I think my primaries are looser than XMP... but my tertiary/quaternary are a lot, lot tighter than auto as a result... and I have 0 training issues when powering on the system... and my gosh, the performance is a night and day difference in AIDA and benchmarks. I think my latency dropped by like 20-30ns and BW went up by at least 10-20GB/s in AIDA iirc vs XMP+auto. 0 stability problems, 0 training issues (IOTLs and RTLs are never out of spec on power-on)... it amazes me how much of a difference hand tuning can make! And yeah, I came to the same conclusion you speak of... secondary/tertiary and even quaternary timings can have far more positive impact on performance than primaries... when tuned properly. I'll also add that while I was learning about the timings, tweaking them, testing them... it really helped to find guides and even read technical documents about the effects of some of the more granular timings and, in detail, how RAM works. It really helped establish a mental picture/concept of what a timing meant for what the RAM was doing and also how it potentially affected other timings, or at least their relation. The whole learning aspect about the mystery of RAM and its timings... was what really made the whole adventure/endeavor quite exciting and dramatically helped arrive at, what I believe, a pretty damn effective result in the end. I actually love the fact there are so many knobs that can be tweaked with RAM to make it run as optimally as possible, it's a tweaker's paradise really.
Two questions: 1. Why can't RAM be automatically tuned, via some software algorithm? 2. How much does that difference vs XMP + auto timings that you mention affect game performance (when the target is Vsync-locked 60 fps)?
@@bricaaron3978 1. Think of tuning RAM like tuning a car's ECU for maximum engine efficiency, performance and safety. Sure, software can do that to a point, or at least try and sometimes fail miserably (while most likely corrupting the OS it's running from, which is the biggest risk when tuning RAM)... but nothing can replace human intellect when you are pushing the edge of the tune. 2. As far as raw FPS increase (w/o caps), it's not insane or anything, but the lows I believe are what see the biggest gains, at least it seems that way... ultimately it differs with every game and also depends on how good or bad Auto vs XMP results are on your mobo and RAM.
@@RandoTark Regarding FPS, the 1% low is, as you know, what is important when trying to maintain a Vsync-locked framerate. So if that's where the biggest improvement is, it sounds like tuning RAM could be worth it. Do you have any recommendation for a tutorial on learning how to manually tune RAM?
Also, I clicked one song on your Bandcamp and I thought my OC failed and the PC just chopped and was about to die, then I noticed everything was responsive and it was just the audio from the song lol :P I mean I don't want to bully you, I guess it's fine for the people that listen to whatever this genre is called, I just didn't know it exists and it's not my cup of tea :P
Why though? Real world use case is all that matters. Doing a test with literally nothing running to get a small number is cool and all but you're never going to get that number in real life while actually using your computer.
Depends on need. The best way is to look at the timings on a timing chart to get the full picture... imo. But adjusting to the theoretical minimum doesn't work because of many other factors beyond just the silicon (rise time, fall time, power noise... etc). Basically the complete timing set/cycle needs to happen within the 1/freq setting... plus a little for physics ;) Yea, just hunting low CAS isn't a real improvement, and yea, the memory timing lookup in most RAM kit databases is crap... it takes time, but manual all the way... if you're an enthusiast.
@@mauriceelet6376 Is 11th gen Intel bad at DDR4 overclocking? On my 9th gen I was getting 31ns in AIDA64, and with an 11900K I get 45ns. I have Samsung B-die dual rank 32GB 3600MHz but I can't overclock it on 11th gen.
@@Typhon888 Yea, 11th gen has the same crap Ryzen got: gear mode. 10th gen is the last one without it, where you get full "gear 1" up to 5000MHz. I think 11th gen can do 3800-4000MHz with gear 1; everything over that is meaningless. That's why you can hit 5000+ on there so easily, gear 2 no problem, but you will still suffer, as its gear 1 is not the same as what the earlier gens have.
Good video and a great topic.
For those that don't understand why it's true that the CAS timing doesn't have much impact on performance: modern CPUs (by modern I mean since the Pentium 4) don't normally read a single address and then begin another RAS strobe and CAS strobe sequence. They perform a single RAS-CAS sequence and then read from 8 to 256 (or even 1024) consecutive addresses using burst mode. Burst mode means the memory can provide new data at every tick of the clock, and the longer the burst used, the less effect the RAS-to-CAS delay has on the system. The timings that have more impact usually refer to the recovery time required between commands. Note: I have designed around synchronous DRAM, but I don't work with any of the DDRx memories (no need), though I have read data sheets for DDR2, DDR3, and DDR4 memory. Reading ANY data sheet relevant to the type of memory you are working with will give you a much better understanding not only of why certain timings work better, but also a better idea of what you can get away with (and how to get away with it) if you are going for the bleeding edge in memory timings.
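A minimal sketch of that amortization, under a deliberately crude model (assumed here: a read from a closed row costs roughly tRCD + CL memory-clock cycles of setup, then the burst streams two transfers per clock; real controllers overlap and interleave commands far more than this):

```python
# Crude illustration only: the setup cost (tRCD + CL) is paid once per burst,
# so a CAS difference shrinks as a share of the total the longer the burst is.
def clocks_for_burst(trcd, cl, n_words):
    return trcd + cl + n_words / 2  # DDR: two transfers per memory clock

for n in (16, 256):
    fast = clocks_for_burst(trcd=39, cl=28, n_words=n)
    slow = clocks_for_burst(trcd=39, cl=40, n_words=n)
    print(f"{n:>3}-word burst: CL28 {fast:.0f} clk, CL40 {slow:.0f} clk, "
          f"delta {100 * (slow - fast) / fast:.0f}%")
# 16-word burst: the 12-cycle CAS gap is ~16% of the total
# 256-word burst: the same gap is only ~6% of the total
```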
It "doesn't matter" because a CAS value is given in absolute clock cycles and is the number of cycles between sending a command to a column address and the start of the return of said command's execution. Once the row is open and you're asking for a long string of data, it doesn't matter much. tRCD & tRP may be more impactful, but the only value that is absolute (always the same number of cycles) is the CAS value. Everything else is a minimum number of clock cycles, if I am reading the white paper correctly. You can type in 10 tRCD; that doesn't mean it will execute that fast.
@@theodanielwollff if you punch in tRCD 10 the RAM/IMC will try to do it in 10 cycles and crash.
@@ActuallyHardcoreOverclocking Maybe bad example, but you get the point right?
@@theodanielwollff No, I don't. There are a few timings the IMC will just ignore if you punch in impossibly low values, but anything the IMC/RAM is able to try to run will cause a crash if it's too low.
@@ActuallyHardcoreOverclocking Hmmm.. yes, I agree. I guess I'll rephrase my statement a bit. Since CAS is the only absolute value, any other primary value "can" miss its cycle. Say a tRCD is 40, it could execute at 60 (on its own). I agree that if it's too low, it simply will not run/boot. But whatever the "base" value is, that is the lowest cycle count allowed/set, but not guaranteed per x cycles, unlike CAS.
I love how your threat to misunderstandings is “I’ll make a video “. Cracked me up.
Yeah, genuine lol out of me too.
I mean, is there a better reaction?
The silence and anticipation where he was imagining what he could do... Then he just says he'll make another video. Lmao this ia gold.
Kids are acting up. . .
"Don't make me come back there and make a video!"
'The pen is mightier than thy sword'
so what you're saying is that CAS latency matters
I kno so stupid
As I remember, there was a real difference between CL2 and CL3 when dealing with PC100 SDRAM, but the faster RAM gets the shorter each clock cycle becomes and the less one or two cycles matters. I don't think I've given two shits about it in almost twenty years as far as performance was concerned.
That's a 50% increase bro...
@@TheHighborn Decrease. We're talking about latency.
@@nexxusty ?
2x1.5 is 3. CL3 ram has 50% more latency than cl2 no?
He mentions as speed increases, the bigger number means less but still.
What about for gaming? Is there a way to know which timings (primary, secondary, or tertiary) are more important for FPS, or is it all the same?
Your GPU VRam latency is more important than the entire System's main RAM.
Go ahead and try DDR4 3600 MT/s CL18 vs CL12: it makes visually no difference at all in video games.
The only noticeable difference is your GPU's VRam timing latency.
@@ClayWheeler Back in the DDR2 days I actually got a couple more FPS when I tightened latencies. But as noted, RAM has gotten so much faster at a basic level that it doesn't matter as much these days; the steps were much bigger back then in % terms. There were two ways: tighten latency or go for higher MHz. In the end I just went for higher MHz... 667MHz --> 914MHz.
@@ClayWheeler Personally, CL16 3600MHz RAM feels much smoother than CL18 3600MHz.
@@ClayWheeler If your CPU bottlenecks performance (which isn't rare), then no, this is simply untrue. Lowering tRCD, tRRDS/L, tFAW and tRFC does give quite a noticeable boost to performance in those cases, and dialing in the tertiary ones is very helpful, too. If you have all timings locked in properly, then older games and ones that just hog your entire processor will have higher average framerates and stutter much less. The GPU has multiple potential points where bottlenecks can occur as well, so the VRAM speeds and latency being the main issue here is false in this scenario as well (such as the bus width, speed of the core, the *amount* of VRAM installed on the GPU, etc).
I would love to see a video with a detailed description and graphics of what each of the timings exactly does, and which timings actually matter in terms of real performance, no synthetic BS!
Nice video btw.
Yeah tuning timings to get more performance is cool and all but not actually knowing what any of the said timings do in terms of real-world performance is an entirely different matter.
He tried doing that for DDR4. It's a mess. There are many "if this happens, then this happens but only if this happens to be X, otherwise that happens..." and it gets too deep in complexity real fast.
Low tRCD + tRAS has always been more impressive when it comes to buying RAM and overclocking results. If you OC memory you will know how much more difficult it is to have tight tRCD + tRAS compared to CAS.
How do I know if DDR5 sub-timings are finally becoming good if they only specify the main ones?
When do you expect something like this to happen?
It's not a buildzoid video if he doesn't say "this was way longer than it needed to be"
It's nice to see such a clear demonstration of the importance of subtimings.
So if I am not supposed to buy ram based on first word latency, or voltage, or based on MT/s, or based on CAS latency then what should I base my purchasing decision on? Do I just google "What RAM is good for _____ CPU?" I don't want to overclock my RAM. I just want to hit XMP and go.
"36 and 40 are basically the same number"
-Buildzoid, 2022
With an error bar of +/- 11% he is correct, where as "16 and 12 are basically the same number" has an error bar of +/- 33%
@@andersjjensen 11 and 33 are basically the same number
Anyway, buildzoid, I know you actually read your comments, and if you read this I just wanted to say I love your videos and I'm not making fun.
I 100% understood what you meant by this, but it's a funny observation when taken out of context
@@werewolfmoney6602 Well absolutely. Given the right error bars everything but false boolean expressions can be true :P
Buildzoid is right and wrong. There are mathematical equations that serve as proof for RAM timings and MHz.
I love the ultimatum you made at 8:19
"If I see ONE MORE PERSON complaining about CAS latency on DDR5 being loose, I.. just... I'm gonna make another video"
No offense but the way you said it sounded really funny
EDIT: i got wooshed hard, i didnt read the other comments before posting this lol
8:34 Mr. Buildzoid drops the nuke-thread of all nuke-threads: "... then I'm going to make ANOTHER video on how incredibly wrong they are!"
Jesus man. Relax! You could kill someone with that! :P
Joking aside: I hear you loud and clear! I'll research the hell out of DDR5 kits when Zen 4 comes out and I'm good and ready to pull the trigger.
I used to have A LOT of stutters in games that were CPU demanding. After overclocking the RAM, most of the stutter disappeared and now it's crazy stable. It did help the performance of my DDR4 9600K.
I love rants. Keeps me from asking questions that have already been answered. Watch the videos, it is all there.
yup....lolz
Hey man, you seem knowledgeable, but if CAS latency is the time to receive the command and the MHz is the transfers per second, I could see how in normal gaming people wouldn't feel a difference, but how would CAS latency not affect the feel or snappy feeling when sniping in a game like Halo Infinite? Competitively, the time the command is sent and received for a snipe feels different in subtle yet detrimental and annoying ways. How would CAS latency not play a role in this?
I was thinking the same thing. I currently have 32GB of DDR3 with 11-11-11-8 timings somehow. When I play Unreal Tournament at a very low resolution, the response time is spectacular, leading to some nasty headshots that make people wonder if I'm cheating. Now I'm going to build a 13700K system with 32GB of DDR5 6400MT/s but with CL32, and I'm really wondering if I'll ever have that response time again. It really is a head scratcher for me, I'll just have to test and see.
@@afti03 please let me know , when will your build be done ?
@@afti03 Hey man, did you ever go to AM5? Did it affect input lag for your Unreal?
Hello! I've recently upgraded my computer with an Intel Core i7-13700K and 32GB of DDR5 memory from Corsair, running at a speed of 6400MT/s. The motherboard I'm using is a Gigabyte Aorus Z690 Elite AX. With Unreal Engine installed on my new Crucial T500 SSD, I've noticed a significant improvement in gaming performance. The higher CAS Latency of CL32 does not seem to be an issue, as the newer generation of hardware compensates for this and performs better overall.
Upgrading to newer hardware has reduced render latency in games, which enhances the responsiveness and overall experience. For the lowest render latency, I play on low settings with all unnecessary features turned off. I've also switched to using a mouse with a 1000Hz polling rate, a substantial improvement over the standard 125Hz mouse, which feels laggy by comparison. Additionally, to optimize mouse input, I assign the USB Hub host processing to one of the better CPU cores via an affinity mask.
Take it from someone who's dived deep into the world of input lag and render latency optimizations-there's a whole world of upgrades that could make you dizzy with potential and sometimes lead to a 'smashing' time with Windows, if you catch my drift! 😆@@TheOneGhost12
@@afti03 Hey, thank you for the update. I'm not sure if Intel boards are like AMD and have USB going directly to the CPU and not the controller. Also, why not 8kHz polling? Two insane improvements for input lag are disabling HDCP and using bandwidth that will not enable DSC and will be native, so HDMI 2.1, depending on resolution and refresh rates.
2 videos in a row that are a treat.
2 are a treat, the 3rd one was a threat 😂
I was hoping to get through life not understanding sub-timings, because two numbers (three if you count the DDR version) was enough to think about when buying RAM, heh.
It comes down to models and manufacturer rankings + a bit of research, really. I don't think you actually need to pay attention to those factors as much as who is actually making the RAM and how highly the manufacturer rates it.
Hi, found your channel via a YouTube recommendation. You made DDR5 overclocking such a fun, interesting topic; however, as a new joiner of this club, it's a struggle to OC on your own. I would love a full tutorial on the basics and how to do it! Thanks
That's a tall order
He did a nice live-stream on that topic (or multiple?) and by watching some of his videos you will learn more than in a condensed tutorial, I guess... especially since buildzoid likes to get lost in details (which I really enjoy sometimes and it's already a meme 😄). I don't think tutorials are his strong suit but that's what makes it more interesting imo... just watch more stuff :)
RAM OC, to me, is tedious. It's very repetitive and time-consuming. So you want to cut down on the time it takes to test.
Pick a program or two from here: github.com/integralfx/MemTestHelper/blob/oc-guide/DDR4%20OC%20Guide.md#memory-testing-software Try out a couple, see which one you like. Because you are going to run the program(s) thousands of times and you want to know what timings and speed give errors, as fast as possible. So not within an hour, more like within a minute or 5.
I picked Testmem5. Why? Because, once I have selected the config-file to run, to start the program it is only a doubleclick. I do not want to insert RAM amount, or start 12 instances of the program, like HCI Memtest requires (unless you pay). And because Testmem5 kicked out errors faster for me and on top of that, each test is different. So if test 10 kicks out errors, I can open the config file and make sure it runs test 10 first, instead of after like 20 minutes of other tests. So I just saved 20 minutes times a dozen, over and over again. Because timings will react differently to different tests. Next timing might kick out errors on test 16. Then I would run test 16 first, maybe even a couple times in a row. Or in-between the other tests.
But where do you start? I think I got this from BZ. I have bad memory. Set primaries to very loose, add 10 or 15 to each of them, rest on Auto. Add 200 MHz to RAM speed, see if it boots. Keep doing that til it no longer boots. Drop clocks by 66-100 MHz til it boots again. That would be the max speed. If you drop by another 100 MHz, it might be more stable, with lower timings or something. Bleeding edge can be sensitive. Then tighten 1 timing at a time. Maybe start with dropping a timing by 2 to begin with, see if it boots. And eventually move to dropping a timing by 1. When it no longer boots, it's too tight, loosen it by 1. Save to profiles constantly, you will have to load them numerous times when the PC fails to boot and you have to reset CMOS/BIOS. Do a quick stability test with the RAM stability program you chose. What does quick mean? Until you are satisfied that you never have to touch that timing again, in my case. That can be 20-30 minutes in TM5, it could be 1 whole cycle. For me it will also depend on the RAM capacity. I once worked on 48 gigs of RAM. I wasn't testing anything for 1 cycle, that would have taken way too long. 3 times longer than testing 16 gigs. At the end I ran 3 cycles to make sure it was stable, but while dialing it in, absolutely not.
I have worked on my RAM kits for years, I know what timings they can do at this stage so I don't start at zero, at any speed. That saves a lot of time. Knowledge saves a lot of time.
When it comes to voltages, I won't recommend any numbers. I don't have DDR5. Never owned an Intel platform. Check the overclocking subreddit www.reddit.com/r/overclocking, search the web, ask around. At a certain point it will have to be a judgment call by you. Is the performance gained worth possibly destroying hardware over? How high a voltage are you willing to go for? What voltages will the IMC, RAM, SA etc. tolerate and for how long? DDR5 hasn't been out long, so I assume it is very hard to answer. Will DDR5 last when running daily for a year, or 5 years? It hasn't been out long enough. What do we know about long-term, daily voltage settings? What about the PMIC? Things I would look into.
If I am wrong about something, I would gladly be corrected.
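If it helps anyone, here is the flow described above written out as a rough Python-style outline. boot_test() and stability_test() are placeholders for the manual steps (setting values in the BIOS and running TM5 or whatever tester you picked), not real APIs, and the numbers are arbitrary:

```python
def boot_test(speed, timings):
    # Placeholder: stands in for saving a BIOS profile and checking if it POSTs.
    # Here it just pretends anything up to 6600 MT/s with timings of 30+ will boot.
    return speed <= 6600 and all(v >= 30 for v in timings.values())

def stability_test(minutes=20):
    # Placeholder for a quick pass in your memory tester of choice.
    return True

def find_max_speed(speed, loose_timings):
    while boot_test(speed + 200, loose_timings):   # raise speed until it fails
        speed += 200
    while not boot_test(speed, loose_timings):     # back off until it boots again
        speed -= 100
    return speed

def tighten(timings, speed, order):
    for name in order:                             # one timing at a time
        while boot_test(speed, {**timings, name: timings[name] - 1}):
            timings[name] -= 1                     # keep dropping while it still boots
        if not stability_test():                   # quick test; loosen by 1 on errors
            timings[name] += 1
    return timings

loose = {"tCL": 40, "tRCD": 40, "tRP": 40}         # start loose, rest on Auto
max_speed = find_max_speed(6000, loose)
print(max_speed, tighten(dict(loose), max_speed, order=["tCL", "tRCD", "tRP"]))
```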
do things you think is good >confirm you get more performance >post results and ask others for their opinion ♻️
I had a Corsair DDR4 3600 CL14 kit perform worse than a DDR4 3600 CL18 kit (both dual rank 2 stick kits). It was confusing to me at the time but this explains it.
As a customer this is problematic, because most vendor XMP profiles don't include the other timings, nor do they publish them anyway. How can I get a better idea of the performance of these kits before purchase (aside from checking whether they are dual/single rank and seeing what memory chips are used)? I feel like vendors should include rank / memory chips used in addition to providing a performance figure with all the timings used. Right now memory vendors provide very little information for the most part.
If you're on AMD, I don't think sub-timings matter if left on XMP. AGESA auto-adjusts crap based on internal code per stick... at least on ASUS/MSI boards.
For example, Micron B single rank auto-adjusts to 550ns tRFC on MSI boards even though the XMP profile has 350ns written to it. Latency tanks hard, but the AGESA code is modifying it based on density... lol
There are XMP kits that are absolutely horrible for INTEL though. Kingston Renegades with DJR @ 3600 have insanely high tRC... fine for AMD (it auto-adjusts anyway), but horrible on Intel.
Memory vendors are just completely lost regardless IMO.
Correct me if I'm wrong, but it seems to me that for any complex operation the CPU needs to do that hits RAM, it's a cumulative effect of a large number of the timings, so over-focusing on just a single one or two won't help you overall.
I'm curious as to what's going on though, because obviously frequency and timings will give a certain latency. Which timing figure are we missing, since the four-timing sequence went up and even the CR2 too? You can't exactly say nothing matters; it's an algorithm being adjusted, which balanced that last timing test to reduce the latency. I haven't covered timings in detail for 4 years, so I forgot too.
So if CAS doesn't matter, then why do we use it in the formula to determine the ns???
Because we do CAS*2000 / speed
Assuming you're referring to what's called "First word latency", that is basically the ram equivalent of "eFPS"
So how do you explain the benchmark videos showing real-world FPS increases on everything with a lower CAS latency? (Gaming benchmarks, not synthetic tests.)
I love this video so much. I learned a lot!
I learned to look for CAS timings back in the early single core CPU + DDR days, and I'm glad to know that in the multi-core, massive CPU cache, DDR5 era, they aren't the primary metric to check after frequency.
And yet, now I have to find some other way to judge the best memory aside from name brand and frequency. oh no!
What fortuitous timing, I was just deciding between a DDR4 and DDR5 build and the CL freaked me out a little. Thanks.
The latencies are provided unitless, in numbers of clock cycles. You need to divide them by the clock speed (or in other words, multiply by the clock interval) to get the latency in absolute time. A 3200 CL16 (1600 MHz clock speed) CAS latency is (16 / 1600000000) seconds = 10 ns. A 6000 CL36 (3000 MHz clock speed) is (36 / 3000000000) seconds = 12 ns. As you can see, the actual time is not double.
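The same arithmetic as a tiny helper for plugging in your own kit (just a sketch; this CAS-in-ns number is the "first word latency" figure and, per the video, only a small part of the picture):

```python
def cas_ns(mt_per_s, cl):
    # CL cycles divided by the real clock, which is half the MT/s rating
    # (DDR transfers twice per clock). Equivalent to 2000 * CL / MT/s.
    return cl / (mt_per_s / 2) * 1000

print(cas_ns(3200, 16))  # 10.0 ns
print(cas_ns(6000, 36))  # 12.0 ns
print(cas_ns(8000, 28))  # 7.0 ns
```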
I only play FPS games so I need low latency. Which DDR5 will be the best for my build, please?
Do benchmarks in modern games, show the difference between both CAS latency and sub-timings, show the 0.1% and 1% lows, and then tell me it "makes no difference".
3600Mhz and fast timings seem to be more stable in the lows on my am5 system.
@@Syphirioth Of course, that's why it's made.
Just saying that it doesn't matter doesn't make it so. This vid is crap.
Feeling better spending 3 days on tertiary timings and an hour on primaries.
there's also an order of magnitude more tertiary timings to tweak, so there's that. Honestly an hour on primaries is like, 3x more than necessary.
Shame on you people if you don't have the dignity and/or the wisdom to maturely acknowledge this: You are either so misguided as to think that you'll be getting practically perceivable performance gains through all these memory overclocking efforts, or you are just lunatics who actually value the 1% better performance you'll perhaps be able to achieve after hours of work; either way, you have way too much free time on your hands
@@AndyU96 Or maybe, this is a hobby, and we enjoy the tinkering and tweaking, and you shouldn't judge what other people choose to do with their free time.
@@AndyU96 we are the equivalent of car guys, we want peak performance from the hardware we have. I mean if you memory has some free performance on the table why not run with it?
@@AndyU96 Why are you here?
I'm more of a noob but not completely lost; however, I can't seem to find much info on the overall memory latency (NOT CAS, which is what most searches pull up), such as the AIDA memory latency test provides...
I have an EXTREME overall latency (from AIDA) of >110ns, and that's using XMP on 3200MT/s CL16 G.Skill (Hynix), on an ASRock B550 PG ITX/ax board running a 5900X.
Im trying to OC this memory but that latency is very extreme right? Any ideas on how to lower this?
Thx for anyone who attempts to help this noob who wants to not be a noob
Can you compare apples to apples with only CAS latency values changed and see what the difference is?
It would be cool for you to do a video of which timings are the most important
Of course changing the memory timings is irrelevant. You didn't bother to increase the L3 cache speed. The L3 cache on Intel controls the speed of access to the memory controller. Even with a separate I/O die on AMD's chiplet design, the limitation is now the Infinity Fabric speed from chiplet die to I/O die, which holds the memory controller. The only time these timings matter is for cache misses, when the CPU has to go straight out to memory skipping L3, or when you've oversaturated the L3 cache. That's why the 5800X3D doesn't need or benefit from tight timings or high MHz. The bandwidth comes from the L3 cache.
I beg to differ, having a 5800x3d and a ram kit with terrible secondary timings, I saw 10+ Fps gains (mid 60's to now high 70's for minimums) and I haven't even finished tightening the timings yet. That big cache helps but if your memory is terrible it will still impact performance.
@@buck3t_ It just has to be able to maintain a certain speed ratio relative to the L3 cache in the end. I believe 2400 MHz is below that spec, and if I'm not mistaken, primary timings of 20+ are also below it.
So it doesn't matter if I use memory from different brands with different CL (latency)? I want to combine some G.Skill sticks I already have with some from TeamGroup, but the G.Skill are CL20 and the TeamGroup CL19 (the rest of the specifications are the same); I have read different answers everywhere.
If the system doesn't adjust the memory automatically, would doing so manually solve it? Or should I not worry?
I need that wallpaper so badly.
Could you share by any chance?
Damn me too!!!!
Let’s comment and upvote so they get their wish!
actually-hardcore-overclocking.creator-spring.com/listing/ram-oc-desktop-V2?product=953
@@ActuallyHardcoreOverclocking Thank you!
The pyprime WR is on Rocket Lake, right? How does pyprime DDR4 vs DDR5 go on Alder Lake?
6:47 It was me, I was the 8000 MHz CL28 guy, Buildzoid xD If it makes you feel better about this comment, watching this informed me on why I was wrong, and I appreciate the video! xD I figured 4000 CL14 would be the same latency as 8000 CL28, my bad!
Hey! What RAM do you suggest for gaming on a 7600X? The best you can get.
I don't understand... :/ I'm sure it's brilliant, but something organized into 3 points with a conclusion and recommendations would help a lot. So, is this right? --> The first number in the four-number memory timings (which is used to calculate RAM first word latency) doesn't matter as much as something like the average of all four timings listed?
Do you have a video on how to tune sub timings?
Is there some way a layman can pick up these types of things without going so deep into overclocking? I guess I don't adjust my timings other than hitting DOCP, because the performance uplift isn't worth the time and potential reliability hit for me. I think you should do recommendations of specific kits. I will probably go to Intel 13th gen for my next upgrade, and that will be DDR5, and I'd love to get a good set of RAM to minimise bottlenecking in any application, since I like working with snappy PCs.
What's the outcome with 6400 40-40-40-86 on auto sub-timings vs the 6400 C28 with auto sub-timings that you ran initially? Wouldn't that show the difference between the primary timings better?
any chance we could get a list of most important to least important timings
seriously. right? That would be like finding some cl09 1/2 off!
I had this in full screen @4:32 when you rebooted - freaked me out
He was still talking during my reboot 🤯
But I really don't understand why that DDR4 has CL16 and the lowest DDR5 I can find is CL28????????
I'm thinking of getting some Viper Steel RGB; it's 3600 C20, and my current RAM is 3200 C16. The reason I'm getting new RAM is that my current kit isn't on the compatibility list for the X570 board I'm planning to get, and I just want to make sure it will work. So basically, the difference between C16 and C20 won't matter all that much, and especially if I can get them to C18 through overclocking they will essentially be the same?
Thank you, I appreciate your reasoning.
You've given me some confidence in choosing memory, even though I will probably hardly ever actually touch the settings.
Do you know how to get the best result for DDR5 @ 1.1V ?
I am afraid of long term reliability of the CPU IMC running at 1.35V
Now how much would you notice from auto sub timings vs optimized sub timings in gaming
When you are cpu bound @240fps?
You generally notice it more in the 1% lows, which is not exactly an issue at those kinds of FPS.
I only learnt about CL timings on RAM about a month before DDR5 came out, so when I saw the timings on DDR5 RAM I was quite confused; they seemed crazy high, but it turns out I was just misinformed. Thank you for your videos, I think I'm going to end up watching a lot of them.
yeah, no wonder DDR5 is so much faster than DDR4
Whenever I watch these kinds of videos I come away thinking OK, I need to get the lowest latency and the highest bandwidth. Whenever I adjust the settings I hit a wall: either I can have higher bandwidth or lower latency. Which is most important? Would you go for lower latency and sacrifice bandwidth, or higher bandwidth and higher latency? Of the two, which matters more for gaming?
Lower latency is usually better, but it varies from game to game.
However, a game doing better with lower latency is much more likely than a game doing better on bandwidth.
@@TehShiaT11einself Thanks..
So... what DDR5 RAM would you suggest that we buy then? I always have trouble picking the best/right memory... needing at least 64GB of RAM (2x32GB)...
I like how you mentioned someone who specifically said they'll wait till there's CL28 DDR5 memory. I've seen people say things like this and I'm always like, why do you feel like you'd pick that arbitrary number when you have no reason to think it would be the best number? There's nothing to factually base that logic on 😆
28 is faster, so there is your logic...you OK?
@@bryanbowling1857 You're missing the point of what I'm saying and of the video, and the point is: in the real world, it doesn't matter too much.
Hello, I have an 8700K with 4x8GB 3200... Should I use 2666 with low latency instead? I read there is a bandwidth limit for each CPU and mine is 64GB/s, so why use 3200 if my CPU can't use it? THANKS
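For context on where a figure like that comes from: the spec-sheet bandwidth is just the theoretical peak at the officially supported speed (channels × 8 bytes × MT/s), not a hard cap that makes faster RAM pointless. A rough sketch, using only the speeds mentioned in the comment:

```python
# Theoretical peak DRAM bandwidth: each channel is 64 bits (8 bytes) wide and
# completes one transfer per MT/s, so GB/s ≈ channels * 8 * MT/s / 1000.
def peak_bandwidth_gbs(channels: int, transfer_rate_mts: int) -> float:
    return channels * 8 * transfer_rate_mts / 1000

print(peak_bandwidth_gbs(2, 2666))  # ~42.7 GB/s, dual-channel DDR4-2666
print(peak_bandwidth_gbs(2, 3200))  # ~51.2 GB/s, dual-channel DDR4-3200
```

Real workloads rarely hit that peak anyway, and latency still matters, so running 3200 instead of 2666 isn't wasted just because a spec sheet quotes a lower number.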
I have a Gigabyte B660 Gaming X AX DDR4 + 12400 + Corsair Vengeance RGB Pro 3200 8GB sticks.
Whenever I populate more than 2 RAM slots with XMP enabled I can't get it to boot; downclocking to 2800 solves it.
Is there a way I can loosen the timings/voltage and so on to run 3200 MHz stable?
Can you make a video on that?
Since nobody talks about this topic, I can't find a proper solution on Google.
Anything to watch for with DDR5 4800 SODIMMs? Thinking of getting a fairly priced MSI GS66 laptop, but instead of buying the version that comes with 32GB of RAM, which carries an atrocious price premium compared to the 16GB model, I can get a 2x32GB CL40 4800 set cheaper and then be able to sell the 2x8GB modules that come mounted. Do timings matter a whole lot in a laptop? I'm not even sure the BIOS can set advanced memory timings in a laptop.
so will i be fine with DDR5 6400 (PC5 51200)
Timing 36-48-48-104?
Can you or someone link me to the video that advises how to choose memory with a good die, since it's not advertised which die is actually used? I was wanting some 6000 CL32 memory for my next build, but of course, if I can just get a better die and tune it, that would give better performance, so I'd like to do that.
I really like this video. We've been making buying guidelines for people for years. The goal is not to get the best memory available at the store, nor the best performance. The goal is to tie your choices to something less random. In order of importance: CPU-supported clock, lower voltage, higher clock, lower latency. We switched the last two when Ryzen came out because we wanted to avoid BFUs running custom profiles just to match the clock to the CPU. The fact is that the manufacturer can make the label say anything they want, and the only balance they have to keep is returns. So you play their game.
was this supposed to make zero sense? This is gibberish.
I'm getting the G.Skill C14 3200 MHz 2x16GB (32GB) B-die, baby, to pair with my Ryzen 5 5600X, and getting rid of my Ballistix 3200 MHz CL16. Which do you think I should go with?
Is the situation different for ddr3? I want to make the most out of my i7 2600k so should I start tightening tertiary timings first and primaries last?
It's about time to throw it in the bin bro. I changed from my 3770k to a 9900k in 2019 and the jump was amazing. Now we have 2022... soon to be 2023.
Get a new computer.
@@TehShiaT11einself throw your hot garbage in the bin, "bro". I don't need a stronger cpu. The 5700xt at 1440p isn't limited by it
Hell, that was incredibly fun. Man i love your rants
Did I understand correctly that he compares "low CL" with automatic secondary and tertiary sub-timings against a higher CL but with tightened secondary and tertiary timings?
Have you had a chance to review RAM that show up in Thaiphoon Burner as CXMT? I bought an inexpensive Silicon Power 3200 MHz DDR4 2x8GB kit and it doesn't seem bad. The XMP profile appears similar to Micron E-Die (A little weaker on the sub timings). I tried tightening up sub timings but the profile seems to be its limit. IDK if you have had any luck playing with them and finding timings that can be tightened or not. It also seems like these chips get warmer as well. Maybe it's just me, but I can't validate as there are no temp reading pulled into HWinfo.
What I've found is the general rule that if two different CAS values are in equal ratio to their MHz, the higher MHz wins out. Other than that, idk, I think CAS latency is important. I'll watch the vid and probably learn something new, but to me it's just about as important as MHz.
Thank you for this PSA! For us hobbyists, who are not into overclocking and have less understanding of the whole realm… can you give a hint of what we should be looking for when shopping for DDR5? Is there something simple on a product page, other than simply the speed of the memory? I used to think that CAS latency was the next piece to look at… but I hear you loud and clear :-)
CAS latency will tell you the quality of the dies. A low CAS at a certain speed means it was a better-binned chip. That means more overclocking and tuning headroom. So you can still look at it when buying for overclocking, but it won't really tell you the performance level, since XMP profiles are still made by the vendors and they can be really bad. So in the end, the buying pattern remains the same.
I have 8GB of DDR4 3200 MHz RAM at CL22 and I bought a 16GB DDR4 3200 MHz stick, but it's CL16. Does this mean I won't face any issues?
10:22 Hmm I thought it was because real world workloads do random reads 1 cache line at a time and that takes at least tras+cas cycles. Can’t we just minimize that sum?
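For a rough sense of what a single random read to a closed row costs, here's a back-of-the-envelope sketch assuming the usual precharge → activate → read sequence (roughly tRP + tRCD + CL cycles; the DDR5-6400 32-39-39 numbers below are made-up example timings, not figures from the video):

```python
# Approximate DRAM-side cost of a random read that misses the open row:
# precharge the old row (tRP), activate the new one (tRCD), then issue the
# column read (CL). All timings are in memory-clock cycles.
def row_miss_latency_ns(transfer_rate_mts: int, trp: int, trcd: int, cl: int) -> float:
    cycle_ns = 2000 / transfer_rate_mts      # duration of one memory-clock cycle
    return (trp + trcd + cl) * cycle_ns

# Hypothetical DDR5-6400 kit with 32-39-39 primaries.
print(row_miss_latency_ns(6400, trp=39, trcd=39, cl=32))  # ~34 ns total
print(32 * 2000 / 6400)                                   # ~10 ns from CL alone
```

Even in the fully random case, CL is only one slice of the total, and the rest of the trip (caches, controller queues, fabric) adds more on top, which is part of why the video argues against fixating on CAS alone.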
What is tRC again?
To be honest I would love it if memory manufacturers would start producing designs that minimize capacitance while maximizing conductance.
I don't care if they need to use low density libraries on a 28nm class process node with high-k for the caps and ultra-low-k signal insulation, but if we just start cooling the chips with something like a low-powered Peltier module, we could easily benefit from
What do you mean "real world"? This is how the memory controller works, the prefetcher can load sequential memory locations before the program requests them, but that only works if the software is optimized to read sequentially.
@@mortlet5180 I think that's a really interesting proposition, but the use case is probably too niche to spend the design effort on, given that the only use case I can think of is OC records, considering the power consumption of the Peltier... and even though server CPUs are already pretty power hungry, they also need a lot of memory, and cooling such systems is hard (and loud) enough as it is, so that leaves only the real enthusiast use case, I'd guess.
Do you have any DDR5 recommendations now that it's been a few months?
So for a normal buyer like myself, who has basic knowledge of RAM and doesn't know how to overclock it: which kits do you recommend for DDR5 and DDR4 with good XMP profiles, how much in-game performance can you gain by tuning your own RAM, and lastly, if I'm buying RAM, which sub-timings should I look for?
Wait what? A Buildzoid video that's only 13 minutes? What happened?
Hello, I have an ASUS Z590-A with an i7-10700K. Which DDR4 memory do you recommend? Thank you.
ever thought you'd be over 100k subs? I remember when u were around 2k, hope life is good for you man!
Is it OK if I put RAM sticks in my PC that are the same but have a different CAS latency?
So what would be a good DDR5 RAM at 5600mhz or above with good (as in least terrible) subtimings? If that even exists currently.
How much do the primary timings matter in gaming? Say, going from DDR5 6400 CAS 32 to CAS 30.
Completely irrelevant to the topic of the video but I noticed the date on your system is March 27, 2023, which makes me want to believe that you filmed this footage in the future (still a few weeks away at the time of writing) and sent it back about 8 months to be edited and uploaded in Mid July 2022.
Now that you have successfully disabused us of mistakes regarding CAS -
Question: What would be good "[X] to keep in mind" things when buying memory - For Overclocking as well as for *not* overclocking (AKA - just daily driver where the XMPs are consistently in the general ball park of 'good enough' ) More explanation below.
I've been following/subbed for a year or so, I guess, maybe more now, because I love all this nerd stuff; I keep learning bits from a lot of different YT sources — it's an entertainment thing. However, I don't have the time or talent for research on my own outside of the spoon feed I take from my general entertainment. (My Google-fu is terrible. I'm like the anti-SEO that elderly search engines tell stories about to scare the little web-crawler bots around campfires... or whatever electronic version of that analog exists.) So, as a layman, I have relatively little knowledge to use when purchasing system memory. Help please?
P.S.: Sincere thanks, btw. Specifically because of your channel I first learned that SK Hynix and Samsung B-die DDR4 dies are a thing, and that they are great for overclocking (even if I don't have the money or the impetus to get into overclocking). All these rants and tidbits of info you drop are great.
I can't get a stable OC on my 12900K Aorus Master. Need help, C23 crashes.
Don't waste your time on timings. Focus on frequency.
@@jayden974 What BIOS settings should I use? I don't need high heat or voltage, just something for normal use, like 4.9 or 5.0 GHz.
Won't there be any problem if I run two 3200 MHz sticks in dual channel with different CAS latencies, one with 16 and one with 2?
Nope, no problem
"I could just edit the screenshots" he said, as he was *supposedly* showing us an unedited video... hmmm. I now question the entirety of your channel.
/s
Wouldn’t he edit that part out? And what would be the benefit of him showing you fake scores when he’s literally teaching you how to do something that you can then go and do yourself and verify.
I was discovering today that primaries don't really matter. I was too late to hear that from you. I spent days overclocking a stupid SpecTek memory rev. E. I will be showing off how a tertiary timing like tRDRD_dg should never go over 4... super important timing.
What a complete LOL comment.
@actually hardcore Overclocking: So 3600 MHz vs 3200 MHz is the better option, even though the 3200 MHz kit runs at a much lower CL (CL18 for 3600 MHz vs CL15 for 3200 MHz)?
I also heard lots of talk that it does matter for AMD systems but not as much for Intel systems, what is your view on this matter?
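Running the two kits from the 3600 CL18 vs 3200 CL15 question above through the same first-word-latency formula used earlier (just the numbers from the comment, not a recommendation):

```python
# Same CAS-latency-in-ns formula as before: cycles * 2000 / MT/s.
def first_word_latency_ns(transfer_rate_mts: int, cl: int) -> float:
    return cl * 2000 / transfer_rate_mts

print(first_word_latency_ns(3600, 18))  # 10.0 ns
print(first_word_latency_ns(3200, 15))  # ~9.4 ns
# Nearly identical CAS latency, but the 3600 kit has ~12.5% more bandwidth,
# which is usually why the higher-frequency kit ends up being the better pick.
```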
So why don't they advertise the sub-timings instead of the primary timings?
because the subtimings are the same for most memory kits.
@@ActuallyHardcoreOverclocking So why don't manufacturers focus on making memory with better sub-timings if they are so much more important?
Well, it didn't work for my Kingston 6000 MHz RAM. Does anyone have suggestions on how to reduce my latency? It won't go under 70 ns. My other speeds are fine and the PYPrime score is around 10.6 seconds, which seems low. Will reducing my latency improve my gaming experience, or should I just leave everything the way it is?
This won't increase your gaming performance, just your CPU-side numbers. Gaming is usually limited by your GPU. Even then, you're not going to increase your frame rate by more than like one or two FPS, which really doesn't matter.
Hey, I really enjoy how informative your videos are, as I myself am a stickler for overclocking and making sure you get the most out of your hardware for the money. I'm starting a new AM5 system and have been digging into info on the latest and greatest DDR5 setups... and came across timings and speeds while trying to understand what sweet spots I'm supposed to hit to get the most out of the system. As of now I'm rocking an ASUS Prime X670E-Pro WiFi and will be getting an AMD 7950X. I was looking into RAM, trying to figure out the best kit to get, stumbled across this, and wanted your suggestion for this setup. I will be OCing both RAM and CPU, and the CPU will be liquid cooled. What are your thoughts?
Wholeheartedly agree with your assessment. I spent 2 weeks or so messing about with and learning secondary, tertiary and even quaternary timings on my budget DDR4 3200 Samsung B-die 32GB (2x16GB) dual-rank kit (oddly enough, the set I have is a G.Skill SKU that's not really documented anywhere; it seems like a very limited batch of budget-binned B-die, perhaps because of demand for 3200 or an oversupply of B-die at the time) on a Z370 ITX motherboard with an 8700K... and man, I couldn't believe how bad XMP + auto tertiary/quaternary timings were compared to my hand-dialed timings... I think my primaries are actually looser than XMP, but my tertiary/quaternary are a lot, lot tighter than auto as a result... and I have 0 training issues when powering on the system... and my gosh, the performance is a night-and-day difference in AIDA and benchmarks. I think my latency dropped by like 20-30 ns and bandwidth went up by at least 10-20 GB/s in AIDA, iirc, vs XMP + auto. 0 stability problems, 0 training issues (IOLs and RTLs are never out of spec on power-on)... it amazes me how much of a difference hand tuning can make! And yeah, I came to the same conclusion you speak of... secondary/tertiary and even quaternary timings can have a far more positive impact on performance than primaries... when tuned properly.
I'll also add that while I was learning about the timings, tweaking them, and testing them, it really helped to find guides and even read technical documents about the effects of some of the more granular timings and, in detail, how RAM works. It really helped establish a mental picture of what a timing meant for what the RAM was doing, and also how it potentially affected other timings, or at least how they relate. The whole learning aspect about the mystery of RAM and its timings was what made the whole adventure quite exciting, and it dramatically helped me arrive at what I believe is a pretty damn effective result in the end. I actually love the fact that there are so many knobs that can be tweaked with RAM to make it run as optimally as possible; it's a tweaker's paradise, really.
Two questions:
1. Why can't RAM be automatically tuned, via some software algorithm?
2. How much does that difference vs XMP + auto timings that you mention affect game performance (when the target is Vsync-locked 60 fps)?
@@bricaaron3978 1. Think of tuning RAM like tuning a car's ECU for maximum engine efficiency, performance and safety. Sure, software can do that to a point, or at least try and sometimes fail miserably (while most likely corrupting the OS it's running from, which is the biggest risk when tuning RAM)... but nothing can replace human intellect when you are pushing the edge of the tune.
2. As far as raw FPS increases (without caps), it's not insane or anything, but the lows, I believe, are what see the biggest gains, at least it seems that way... ultimately it differs with every game and also depends on how good or bad auto vs XMP results are on your mobo and RAM.
@@RandoTark Regarding FPS, the 1% low is, as you know, what is important when trying to maintain a Vsync-locked framerate. So if that's where the biggest improvement is, it sounds like tuning RAM could be worth it.
Do you have any recommendation for a tutorial on learning how to manually tune RAM?
How did you manage to push a Ryzen 5 3600 to over 5 GHz? I hope with LN2 :D
You know I love mem OC, so of course I'm very happy to see you continue improving this kind of content !
Also, I clicked one song on your Bandcamp and I thought my OC had failed and the PC was chopping up and about to die, then I noticed everything was responsive and it was just the audio from the song lol :P I don't want to bully you, I guess it's fine for the people who listen to whatever this genre is called, I just didn't know it existed and it's not my cup of tea :P
How about CAS timings with 2 kits, 2x16GB CL40 and 2x16GB CL36? Does it matter? ADATA sent me the wrong kits, even with the same SKU... :(
Certain apps affect latency in AIDA64. Steam has the biggest impact, but even less resource-hungry apps hit it, so the less you run, the better the results.
yeah best to run it in safe mode if you can be bothered with that.
Why though? Real world use case is all that matters. Doing a test with literally nothing running to get a small number is cool and all but you're never going to get that number in real life while actually using your computer.
O God I was utterly wrong all the time 😭😭
Which Kit is this actually?
Why does it say that your memory is in quad channel? Does 12900K actually have a quad-channel memory controller?
So all I changed was going from CL34 to CL44, and it would not even boot.
What memory are you using? I haven't seen any 6400 memory with CL28.
is there any chance this is error correction at play? (sorry if i missed you mention it)
Nope, the on-die ECC is only for data retention, not transmission.
The actual (non)influence of the primary timings wasn't shown! CL40+auto might result in >11sec PYPrime and CL28+tight might result in
What is a DDR5 kit with a good XMP profile?
Depends on need. The best way is to understand the timings on a timing chart to get the full picture... imo.
But adjusting to the theoretical values doesn't work, because of many other factors beyond just the silicon (rise time, fall time, power noise... etc).
Basically, the complete timing set/cycle needs to happen within the 1/freq window... plus a little for physics ; )
Yeah, just hunting for low CAS isn't a real improvement, and yeah, the timing data in most RAM kit databases is crap... it takes time, but manual all the way... if you're an enthusiast.
The fun thing with DDR4 Samsung B-die is that sub-timings are relatively uniform for 24/7 settings. 3200 or 4266 MHz... doesn't really matter.
@@mauriceelet6376 Is 11th gen Intel bad at DDR4 overclocking? On 9th gen I was getting 31 ns in AIDA64, and with an 11900K I get 45 ns. I have Samsung B-die dual rank 32GB 3600 MHz, but I can't overclock it on 11th gen.
@@Typhon888 Yeah, 11th gen has the same crap Ryzen got: gear mode. 10th gen is the last without it, where you get full "gear 1" up to 5000 MHz. I think 11th gen can do 3800-4000 MHz in gear 1; everything over that is meaningless. That's why you can hit 5000+ on it so easily: gear 2, no problem. But you will still suffer compared to gear 1, since it's not the same as what the earlier gens have.
Won't leaving sub-timings on auto keep them the same, thus making this pointless? That's artificially tanking the CL28 timings. I don't get your point.