Explaining Server DDR5 RDIMM vs. UDIMM Differences

  • Published: 20 Aug 2024
  • Science

Comments • 127

  • @mikegrok
    @mikegrok 1 year ago +15

    An example where ECC on the module was needed more than on-die ECC.
    I was working at a company that had spent 1.5 man-years tracking down a software bug in production, when it suddenly resolved itself.
    A week later we realized that one of the computers in the high-availability cluster had turned off. I was present when it was opened, and I noticed that the RAM had 8 chips per DIMM.
    After we got new RAM, the computer couldn't complete POST because it had bad memory. In fact, any memory installed in channel 3 was bad.
    We got a replacement motherboard, and that fixed it.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +4

      We had something similar with an early HPE DL385 Gen10

    • @jgurtz
      @jgurtz 1 year ago

      A sign that the app needs better instrumentation. These kinds of bugs happen too often!

    • @johndododoe1411
      @johndododoe1411 1 year ago +3

      @@jgurtz The app wasn't at fault; the hardware was at fault for wasting software dev time by corrupting data.

    • @johndododoe1411
      @johndododoe1411 1 year ago +1

      Such hardware bugs are why I'm skeptical of on-chip RAM ECC. From an architecture perspective, the CPU package should do ECC on entire RAM cache slots independent of the RAM module design. So when the core needs something from a 64-byte (512-bit) cache slot, the memory interface chiplet loads 80 bytes from the DRAM interface and uses the extra 128 bits as ECC to correct up to 12 single-bit errors. Those extra bytes translate to two extra 64-bit memory cycles, and it's up to the ECC format designer to ensure that flaky PCB traces or stuck DRAM data lines are always detected.
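The arithmetic in this comment can be sanity-checked with a quick sketch (illustrative only; the 64-byte line, 80-byte fetch, and 64-bit bus are the commenter's assumptions, not a real controller design):

```python
# Sanity check of the ECC budget described above: fetch 80 bytes per
# 64-byte cache line and use the spare bits as ECC.

def ecc_budget(data_bytes=64, fetch_bytes=80, bus_bits=64):
    spare_bits = (fetch_bytes - data_bytes) * 8         # bits left over for ECC
    data_cycles = data_bytes * 8 // bus_bits            # 64-bit cycles for the data
    extra_cycles = (fetch_bytes - data_bytes) * 8 // bus_bits
    overhead = (fetch_bytes - data_bytes) / data_bytes  # bandwidth cost
    return spare_bits, data_cycles, extra_cycles, overhead

print(ecc_budget())  # (128, 8, 2, 0.25)
```

That matches the comment: 128 spare bits per 512-bit line, at the cost of two extra 64-bit transfers (25% bandwidth overhead).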

    • @mikegrok
      @mikegrok 1 year ago +1

      @@johndododoe1411 The on-die ECC is mostly just there to allow memory manufacturers to use lower-quality parts with higher error rates that would otherwise disqualify them from use.

  • @ellenorbjornsdottir1166
    @ellenorbjornsdottir1166 1 year ago +12

    I'm not moving to D5 until D6 comes out, for my own financial health.

  • @ospis12
    @ospis12 1 year ago +25

    There are many more differences between RDIMMs and UDIMMs:
    1. The number of address lines per sub-channel is 13 for UDIMMs but only 7 for RDIMMs; thanks to the RCD, RDIMM address lines can run DDR, while UDIMM address lines must run SDR.
    2. ECC on UDIMMs according to JEDEC is only 4 bits per sub-channel, while RDIMMs can have 4 or 8.
    3. UDIMMs are restricted to x8/x16 memory dies; RDIMMs can use x4 as well. This lets UDIMM hosts mask writes using the DM_n signal, while RDIMM hosts must send full transfers.
    I think points 1 and 3 are mostly responsible for why UDIMMs and RDIMMs are no longer slot-compatible.
    The supply voltage is different too, but as far as I know all PMICs can run in the 5-15 V range.

  • @kiri101
    @kiri101 1 year ago +12

    You did a great job pacing all the information I needed to keep me up to date with newer memory technologies, thank you.

  • @l3xx000
    @l3xx000 1 year ago +1

    Great video Patrick! This was a topic that was really hot on the forums, with lots of conversation, especially around ECC UDIMMs. Thanks for clearing everything up. Agreed, this is an excellent resource that will be useful to lots of people. Cheers!

  • @nster3
    @nster3 1 year ago +15

    I'd love a quick summary of how this affects ranks, 4 vs 8 bit ECC (9x4 vs 10x4) and maybe latency impacts and what would benefit from different configs.

    • @therealb888
      @therealb888 1 year ago

      +1

    • @Starman97
      @Starman97 1 year ago +2

      It looks like DDR5 RDIMM is inherently Dual Rank.

  • @jgurtz
    @jgurtz 1 year ago +2

    Great dive into the state of PC memory architecture; love keeping up with this stuff! My DDR5 key takeaways: memory channel width is roughly halved, each DIMM provides 2 channels, and on-DIMM power supply & other reliability features support even higher clocking. The ratio of memory bandwidth to clock cycle has not kept up with high core counts, explaining why Top500 systems use 24-48 core CPUs. Related points: unbuffered DIMMs now have on-die ECC to support smaller processes and faster speeds (I'll think of it like Reed-Solomon on an HDD); and there's some interesting future potential for RAM over PCIe.
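A back-of-envelope version of those takeaways, assuming DDR5-4800 (an illustrative speed grade): each DIMM now exposes two independent 32-bit sub-channels instead of one 64-bit channel, so total per-DIMM width and peak bandwidth are unchanged while parallelism doubles.

```python
# Peak bandwidth per DDR5 DIMM: two 32-bit sub-channels vs one 64-bit channel.

def bandwidth_gbs(mt_per_s, width_bits, channels=1):
    return channels * (width_bits // 8) * mt_per_s / 1000  # GB/s

ddr5_dimm = bandwidth_gbs(4800, 32, channels=2)  # two 32-bit sub-channels
old_style = bandwidth_gbs(4800, 64)              # one classic 64-bit channel
print(ddr5_dimm, old_style)  # 38.4 38.4
```

Same total bandwidth per DIMM, but two independent queues instead of one.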

  • @becktronics
    @becktronics 11 months ago

    Hey Patrick, awesome video explaining the differences between DDR4 and DDR5. I loved the various pictures and captions that you'd place as you were explaining the PMIC, RCD, and SPD hub! You have a knack for articulately and concisely explaining device differences and cementing what the myriad of acronyms actually do. Definitely will be coming back for more tech educational content over here. I have a background in chemical engineering and got curious to see the electrical/computer engineering side of semiconductor manufacturing :)

  • @johnknightiii1351
    @johnknightiii1351 1 year ago +5

    Consumer CXL seems pretty exciting. We know at least AMD is working on it. We might be getting it with Zen 5 and PCIe 6.0, which is really exciting.

  • @skaltura
    @skaltura 1 year ago +4

    Awesome! Can't wait till CXL hits sensible pricing too! :)

  • @dvone4124
    @dvone4124 1 year ago +2

    Useful content. Thanks! In a few years when I'm going through the next round of server upgrades, I'll understand better what I'm looking at. (Yes, I expect some updates from you before then, too.)

  • @RickJohnson
    @RickJohnson 1 year ago +3

    Very useful to get those of us in the DDR4 world up to speed!

  • @jedcheng5551
    @jedcheng5551 1 year ago +7

    At the beginning of the DDR5 rollout, quite a lot of DIMMs' PMICs failed during use (including mine).
    It was my first time seeing faulty RAM, but my friend who works in a big tech data centre said that at his company's scale it occurs every day. PMIC quality should also be way better now after 1.5 years, along with the improved PMIC supply situation.

  • @mumar100
    @mumar100 1 year ago +3

    Thanks for the very helpful content for a self "studied" amateur trying to go from desktop to workstation / server.

  • @I4get42
    @I4get42 1 year ago +3

    Hi Patrick! Aw man, this is going to stay useful. Thanks for the work 😀

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +3

      Hopefully this helps folks. Have a great weekend

    • @I4get42
      @I4get42 1 year ago

      @@ServeTheHomeVideo Thanks! I hope you do too 😃

  • @wewillrockyou1986
    @wewillrockyou1986 1 year ago +4

    I would consider the advantage of multiple independent channels to really be a latency advantage: it reduces the chance that a memory access has to be queued behind a previous access, and thus the chance of it being delayed. Back-to-back memory accesses to the same bank are the biggest contributor to higher latency under load, and increasing the number of channels is the best way to increase bank parallelism without adding more devices to each channel.
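The queueing intuition above can be illustrated with a toy birthday-problem model (my own simplification: uniformly random accesses, ignoring the controller's interleaving and bank timing):

```python
# Probability that at least two of k simultaneous random accesses target the
# same independent resource (channel or bank), given n such resources.

def clash_probability(n_resources, k_accesses):
    p_no_clash = 1.0
    for i in range(k_accesses):
        p_no_clash *= (n_resources - i) / n_resources
    return 1.0 - p_no_clash

# Doubling independent channels from 4 to 8 halves the clash chance for 2 accesses:
print(clash_probability(4, 2), clash_probability(8, 2))  # 0.25 0.125
```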

  • @DrivingWithJake
    @DrivingWithJake 1 year ago +2

    Quite interesting.
    Should be fun to see how the server world is changing. Just had 8x 7443P servers arrive today at my house all in just 4u which is quite nice.

  • @__--JY-Moe--__
    @__--JY-Moe--__ 1 year ago +1

    thought I saw smoke rolling out U'r ears once! I hope U find some great trade shows 2 go to!
    great breakdown! OMG!! it's a CXL module in the wild!!! I've waited 5yrs to see where that tech
    was going!

  • @VoSsWithTheSauce
    @VoSsWithTheSauce 1 year ago +3

    I agree on needing ECC server memory; its speeds and capacities are awesome, but I hate that persistent memory is dying. AMD EPYC would be nice with it.

  • @edschaller3727
    @edschaller3727 1 year ago +2

    Thanks for the overview of the differences. It is good to know. A couple of questions for you if you have the time:
    With the support components moving onto the memory module (e.g., the PMIC) and PCIe cards with memory, do you think we are headed towards a high-bandwidth serial protocol between CPU and memory, similarly to how storage interconnects (e.g., ATA=>SATA, parallel SCSI=>SAS) and even expansion buses (e.g., PCI=>PCIe) evolved?
    How does memory on PCIe work with NUMA configurations for the system?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +3

      CXL Type-3 looks like a NUMA node without cores attached in the topology. On the memory attached bit, that is somewhat the promise of CXL 3.x because it will make sense to start using shelves of memory connected via CXL instead of adding more DDR interfaces on a chip.

  • @rem9882
    @rem9882 1 year ago +3

    This really was a great video talking about all the new benefits that DDR5 brings

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +3

      Thank you. Glad you liked it.

    • @rem9882
      @rem9882 1 year ago +1

      Could you make a video about power IC chips and how they're being hit by the shortages and development problems? I'd love to know more about them.

  • @spuchoa
    @spuchoa 1 year ago +2

    Thank you for the DDR5 explanation. Great video!

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +1

      Glad it was helpful! Sadly, the Super Bowl seems to have stopped views on it. Hopefully people share it.

  • @stevekrawcke3937
    @stevekrawcke3937 1 year ago +2

    Great info, now to get a budget for new servers.

  • @BansheeBunny
    @BansheeBunny 1 year ago +26

    DDR5 is forcing people to know the difference between UDIMM and RDIMM, I love it.

  • @krisclem8290
    @krisclem8290 1 year ago +2

    First 3 seconds of this video had me checking my playback speed.

  • @MrMartinSchou
    @MrMartinSchou 1 year ago +5

    If a CXL module with 4 DDR5 modules only gives you the same bandwidth as 2 DDR5 channels, why not use DDR4 modules?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +3

      That is a major area folks are looking at. Imagine a hyperscaler reusing DDR4.

    • @MrMartinSchou
      @MrMartinSchou 1 year ago

      @@ServeTheHomeVideo Oh, I'm surprised.
      I was thinking "smarter folks than me have thought of this and dismissed it; I would like to know why, so I can become smarter".
      I figured it would be a latency issue, or the bandwidth not being enough to saturate the PCIe 5.0 link, or something, because as you said reusing DDR4 is cheaper.
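A rough version of that bandwidth comparison (my own numbers, not from the video: PCIe 5.0 runs 32 GT/s per lane, roughly 4 GB/s per lane per direction if we ignore the small 128b/130b encoding overhead, and a DDR4-3200 channel peaks at 25.6 GB/s):

```python
# Rough comparison: PCIe 5.0 link bandwidth vs DDR4 channel bandwidth.

def pcie5_gbs(lanes):
    return lanes * 32 / 8            # 32 GT/s/lane, ~1 B/transfer (encoding overhead ignored)

def ddr4_channel_gbs(mt_per_s=3200):
    return (64 // 8) * mt_per_s / 1000   # 64-bit channel, GB/s

print(pcie5_gbs(16), ddr4_channel_gbs())  # 64.0 25.6
```

So on paper an x16 Gen5 link can feed roughly two DDR4-3200 channels' worth of bandwidth, which is why reusing DDR4 behind CXL is attractive.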

  • @comrade171
    @comrade171 1 year ago +1

    Great breakdown, thanks!

  • @jannegrey593
    @jannegrey593 1 year ago +25

    That is soooooo expensive ATM. I don't even blame EPYC motherboards for not having 24 DIMMs. Though they should start releasing them right about now; that was the promise last year.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +7

      Showed an 8x DIMM one in the video from ASRock Rack

    • @jannegrey593
      @jannegrey593 1 year ago +1

      @@ServeTheHomeVideo True. I do wonder if there will be 24-DIMM-per-socket (with 2 sockets) motherboards as promised. I don't doubt they could be made, but I'm thinking of how much space that takes. I have to re-watch your earlier videos on Genoa. After the first one, my PC started acting up, and only a couple of days ago did I finally find the root of the problem.

    • @revcrussell
      @revcrussell 1 year ago

      Came here to say the same thing. I am happy to be building with DDR4 right now due to cost.

    • @jannegrey593
      @jannegrey593 1 year ago

      @@revcrussell If past experience is anything to go by (and mine goes back to before DDR), the prices will flip. Though in the case of DDR5 it seems to be taking longer than usual (the average has been around 18 months after introduction), but only a bit. I wouldn't be surprised if DDR5-6000 EXPO kits were cheaper than DDR4-4000 kits before the end of the year. Heck, in some places they are cheaper already, but by the end of 2023 it should be universal. Especially since DDR4 will probably go almost completely out of mass production, with only enough made to support legacy systems. This will depend on how many Zen 3 and older Intel CPUs are unsold. And additionally there was almost a year when DDR5 was only an option, slowing down the change in production.

    • @radekc5325
      @radekc5325 1 year ago +1

      Gigabyte MZ33-AR0 is an example of a mobo with 24 DIMMs per socket. I've never used Gigabyte server mobos, but at least it means more are likely.

  • @therealb888
    @therealb888 1 year ago +3

    If I had $100 every time he says "THIS" I could buy some of THIS set of 32GBx24 DIMMs 😂
    Excellent video, learned a lot!

  • @gowinfanless
    @gowinfanless 1 year ago

    Really cool. We plan to use DDR5 for the next design of the R86S mini PC box + Alder Lake N300 CPU.

  • @flagger2020
    @flagger2020 1 year ago +1

    Nice video. For Top500, most HPL (Linpack) machines use GPUs for the heavy lifting; the bandwidth mostly goes to them. CPU cores are good for other mixed workloads such as HPCG, etc.

  • @BOXabaca
    @BOXabaca 1 year ago +2

    Speaking of ddr5 servers on a tangent, you should check out the M80q gen3 which uses DDR5 SODIMM in a tinyminimicro class device.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +2

      Wow! Just snagged a great deal on one because of this comment. Thank you!

    • @BOXabaca
      @BOXabaca 1 year ago

      @@ServeTheHomeVideo Can't wait for the review!

  • @mrsittingmongoose
    @mrsittingmongoose 1 year ago +1

    We are finally seeing DDR5 be beneficial on the consumer side too. Raptor Lake takes a major hit on DDR4 that Alder Lake did not.

  • @thebyzocker
    @thebyzocker 1 year ago +4

    actually a great video

  • @JasonsLabVideos
    @JasonsLabVideos 1 year ago +1

    DAMN!!!! this is insane !!

  • @ander1482
    @ander1482 1 year ago

    Would be nice to see which workloads scale with more memory bandwidth, as there isn't much info available out there. Thanks Patrick for the video.

  • @jackykoning
    @jackykoning 1 year ago +2

    So does AM5 support ECC UDIMMs? I really don't want to use non-ECC anymore. Most kits are unstable out of the box; when you are gaming 12 hours you are nearly guaranteed to crash.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +3

      Yes. AMD AM5 is a consumer platform. The next use for that ECC UDIMM shown is for an AM5 server platform

    • @jackykoning
      @jackykoning 1 year ago

      @@ServeTheHomeVideo Good to know. So any unbuffered DDR5 will likely work as long as the slot matches, which in theory it always should.

  • @naifaltamimi2885
    @naifaltamimi2885 1 year ago

    Very informative, thank you.

  • @Nightowl_IT
    @Nightowl_IT 1 year ago

    The smiley is on^^
    It flickers a bit but it isn't bad :)

  • @uncrunch398
    @uncrunch398 4 months ago

    Other than on a server built specifically for high-bandwidth apps, with low-bandwidth jobs banned from running on it, I see no point in choosing a lower core count CPU to save bandwidth. This is the same problem as inefficiencies in farmland use to make massive machine use more efficient and faster: it takes many times more land and other resources to feed a person this way. Put low-bandwidth apps on the extra cores to save on the hardware and floor space that they would otherwise take up.

  • @constantinosschinas4503
    @constantinosschinas4503 1 year ago +1

    So why exactly did Micron give you 32x32GB to test?

  • @Veptis
    @Veptis 8 months ago

    I am currently deciding on parts for a workstation, and picking the right combination of DIMM slots, number of sticks, frequency, timings, capacity, and price... is difficult.

  • @keeperofthegood
    @keeperofthegood 1 year ago +2

    ROI is going to be a pita to reach before DDR6 is out

  • @timramich
    @timramich 1 year ago +1

    Are there ever even going to be any E-ATX 2-socket boards for EPYC Genoa (Supermicro H13)? I see a few boards that are for specific cases. It doesn't look like an E-ATX board has room for two of these CPUs. Seems they're going backwards.

    • @revcrussell
      @revcrussell 1 year ago +2

      That is why they need so many cores, you can only get one socket on a board. Just think of the loss of PCIe lanes.

  • @berndeckenfels
    @berndeckenfels 1 year ago +2

    So one RDIMM has 2 channels and CPUs have 12; is that now 6 DIMMs per socket, or can you have multiple?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +2

      12 DIMMs per socket in 1DPC, 24 in 2DPC (when that is available). You are right it is confusing now.

    • @concinnus
      @concinnus 1 year ago +1

      CPU channel counts refer to 64-bit data-width channels, as before. The two channels within a DIMM are best referred to as sub-channels.
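Putting the two replies together with illustrative numbers (DDR5-4800 and the 12-channel count mentioned above are assumptions for the sketch):

```python
# Peak per-socket bandwidth: channels counted as 64-bit data channels,
# each physically carried as two 32-bit sub-channels on one DIMM at 1DPC.

def socket_bandwidth_gbs(channels=12, mt_per_s=4800, channel_bits=64):
    return channels * (channel_bits // 8) * mt_per_s / 1000  # GB/s

print(socket_bandwidth_gbs())  # 460.8
```

So a 12-channel socket at 1DPC (12 DIMMs) peaks around 460.8 GB/s with DDR5-4800.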

  • @johndododoe1411
    @johndododoe1411 1 year ago

    As someone who learned about DRAM when each chip might contain only 8 KiB or less, and studied cache hardware and CPU design later, keeping up with marketing code names such as Death Lake and superclean plus plus mega is a useless game of noise.
    Interesting, though, that the old RAMBUS company is coming back as a maker of standard high-end RAM chips instead of a monopoly.

  • @nfavor
    @nfavor 1 year ago +4

    Wow. RAMBUS is still around.

    • @revcrussell
      @revcrussell 1 year ago +1

      Only as a patent troll.

    • @TheBackyardChemist
      @TheBackyardChemist 1 year ago +1

      @@revcrussell I don't think so; they are actually designing memory/PCIe controller blocks and selling them to CPU/GPU/*PU designers.

    • @revcrussell
      @revcrussell 1 year ago

      @@TheBackyardChemist If they are, I stand corrected, but I read recently they were just making money on patents.

    • @TheBackyardChemist
      @TheBackyardChemist 1 year ago +1

      @@revcrussell I do not remember where I read this, but I seem to remember that out of AMD/Nvidia/IBM, at least one is using a DRAM controller block they have bought from RAMBUS

    • @alext3811
      @alext3811 1 year ago

      @@revcrussell I think they're doing both.

  • @marcello4258
    @marcello4258 1 year ago

    This is why I don't see RISC (like Arm) beating CISC in big servers just yet; it's the same reason CISC held its position for a long time: the memory doesn't keep up.

  • @rafaelmanochio6990
    @rafaelmanochio6990 1 year ago

    Awesome content!

  • @tristankordek
    @tristankordek 1 year ago +1

    👍

  • @user-hj8rn5wp8z
    @user-hj8rn5wp8z 1 year ago +1

    What about timings?
    And how do timings impact server workload scenarios?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +2

      CAS latency increases are mostly offset by higher clock speeds, so the latency in ns is only up ~3%.
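To see where a figure like ~3% can come from, compare hypothetical but typical timings (DDR4-3200 CL22 vs DDR5-4800 CL34; these exact CL values are my assumption for illustration, not from the video):

```python
# CAS latency in nanoseconds: CL is counted in clocks, and one clock covers
# two transfers, so the clock period in ns is 2000 / (MT/s).

def cas_latency_ns(cl, mt_per_s):
    return cl * 2000 / mt_per_s

ddr4 = cas_latency_ns(22, 3200)  # 13.75 ns
ddr5 = cas_latency_ns(34, 4800)  # ~14.17 ns
print(round((ddr5 / ddr4 - 1) * 100, 1))  # ~3 (% increase)
```

The CL number climbs a lot, but the shorter clock period cancels most of it out.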

  • @zerothprinciples
    @zerothprinciples 1 year ago

    How would I build a compute server to maximize RAM (specifically, a single Java address space for AI applications)?
    Four TB on a single motherboard would just be the starting point.

  • @Xiph1980
    @Xiph1980 1 year ago +1

    Ehm, about that graph... it might've been better to put the release year on the x-axis, because the MT/s-vs-Gbps chart is essentially displaying the relationship between inches on X and centimeters on Y. Not really informative. 😉

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago

      Somewhat hard to do that. When was DDR5-4800? When it was delivered for consumers? When Genoa launched? When both Genoa and Intel used it? DDR4-3200 is another good example

  • @therealb888
    @therealb888 1 year ago

    Incompatibility is such an anti consumer jerk move.

  • @minnesnowtan9970
    @minnesnowtan9970 7 months ago +1

    At 41 through 44 seconds, it is UNCLEAR whether you said CAN or CAN'T. So please start saying "cannot" when appropriate, and please STOP using contractions entirely. This is ESPECIALLY true when Brits, South Africans, and Indians (3 examples should be enough) speak. Contractions make you less understandable and more likely to be skipped over, avoided, and certainly not subscribed to.

  • @mr.b5566
    @mr.b5566 1 year ago +2

    Yeah, information overload. I had to play it at 0.75x just to take it all in.

  • @bandit8623
    @bandit8623 1 year ago

    great vid

  • @clausdk6299
    @clausdk6299 1 year ago

    I mean.. 2 of those would be nice

  • @marvintpandroid2213
    @marvintpandroid2213 1 year ago +4

    That looks like a very expensive box

  • @JimFeig
    @JimFeig 1 year ago

    They made it so they can charge a larger premium for server memory. Artificial inflation.

  • @reki353
    @reki353 1 year ago

    Me, who still uses DDR3 FB-DIMMs

    • @akirafan28
      @akirafan28 1 year ago +1

      FB-Dimm? What's that?

    • @reki353
      @reki353 1 year ago +1

      @@akirafan28 fully buffered dimms instead of the regular unbuffered dimms

    • @akirafan28
      @akirafan28 1 year ago +1

      @@reki353 Thanks! 🙂👍

  • @chaosong9628
    @chaosong9628 10 months ago

    What a wonderful video! Just what I needed!

  • @charlievikram4510
    @charlievikram4510 1 year ago

    I am a freelancer from India and would love to design thumbnails for you. How can I contact you?

  • @paxdriver
    @paxdriver 1 year ago

    TL;DR: server RAM has no RGB so it's definitely better lol

  • @christ2290
    @christ2290 1 year ago

    Jesus, Rambus, the patent troll, is *still* around sticking their name on things. Haven't thought of them in years.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago

      Actually they develop a lot of IP that other companies use. I think of patent trolls more as organizations without R&D.

  • @AraCarrano
    @AraCarrano 1 year ago +2

    Smiley face prop light is just a little flickery.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago

      Yes. I am not sure why it is more so in this one than others. That Canon C70 has not had its settings changed.

  • @abritabroadinthephilippines
    @abritabroadinthephilippines 1 year ago

    Why do you say "pretty much"? Either it is or it isn't, m8.

  • @mikebruzzone9570
    @mikebruzzone9570 1 year ago

    mb

  • @kimsmith6066
    @kimsmith6066 1 year ago

    do u let people win 1 who has subscribed h a good 1

  • @sfoyogi8979
    @sfoyogi8979 1 year ago +3

    Sponsored by Micron.