I can't believe AMD trusted me with this...

  • Published: 25 Nov 2024

Comments • 223

  • @marcogenovesi8570
    @marcogenovesi8570 4 months ago +142

    Of course Jeff is using all of this as an excuse to get more free hardware to feed to his infinite "we have VDI at home" video series.

  • @brianwalker7771
    @brianwalker7771 4 months ago +17

    How is this channel still under half a million subscribers? Sweet gear and craft beer. What's not to love?

    • @Sunlight91
      @Sunlight91 4 months ago +1

      Obviously server hardware is quite niche.

  • @DJSammy69.
    @DJSammy69. 4 months ago +62

    Run Crysis of course. Cannot wait to see what this beast can do.

    • @Decenium
      @Decenium 4 months ago +9

      Crysis is not a good game to test this stuff; it's ultimately CPU-limited by design. It was made for a future of single cores running at 9 GHz rather than multiple cores.

    • @BF26595
      @BF26595 4 months ago +2

      If I remember correctly, there is a mod out there that allows CPU rendering of Crysis. I saw Linus test that, I think with an old Threadripper.
      Would be cool to try that experiment here.

    • @CraftComputing
      @CraftComputing  4 months ago +19

      CPU Rendering could be fun :-)

    • @TalpaDK
      @TalpaDK 4 months ago

      Sure... but how many at once?

    • @BF26595
      @BF26595 4 months ago

      @TalpaDK Oh, I would like to see just one. I really don't think more is feasible... you need all the cores working on that task.
      I think Linus was getting in the 10s FPS-wise with that old Threadripper, but I may be wrong. It would just be cool to see a game like Crysis, regarded by the entire community as a nightmare to run properly even on a good PC from a few years ago, being rendered on a CPU of all things.

  • @alexstraz
    @alexstraz 4 months ago +3

    I just built this as well last month, and I ran into all the same problems you did. I appreciate you making these videos. Now I can send them to my friends to better explain what's going on lol

  • @gustersongusterson4120
    @gustersongusterson4120 3 months ago +5

    Going 220v on your server rack is a smart move. Didn't occur to me until you mentioned it.

    • @MelroyvandenBerg
      @MelroyvandenBerg 3 months ago +1

      For some reason we currently have 250V here in the Netherlands.

  • @lmfao-pq3yj
    @lmfao-pq3yj 4 months ago +85

    Damn. The only time I'd be able to afford this is when this is like 10 years old lol.

    • @needfuldoer4531
      @needfuldoer4531 4 months ago +7

      The homelabber's lament. :(

    • @DrDipsh1t
      @DrDipsh1t 4 months ago +6

      I literally just said that out loud when he mentioned the cpu 🤣 "I know what I'm getting in about 10 years"

    • @pokeysplace
      @pokeysplace 3 months ago

      I think I'll come back and finish this in 2034 when I might be able to afford it... but for now off to something in my price range.

    • @sinnwalker
      @sinnwalker 2 months ago

      It's alright, things get more powerful and cheaper with time. By the 2030s we're likely going to have computers orders of magnitude more powerful than anything today, at current midrange prices.

  • @codeman99-dev
    @codeman99-dev 4 months ago +23

    I want to hear the sentence "Through the power of buying three of them" as often as possible.

  • @peyton_uwu
    @peyton_uwu 4 months ago +6

    16:25 I audibly went "holy shit" when you pulled those suckers out.
    Nice job Jeff!!!

  • @M1America
    @M1America 4 months ago +6

    >Dual VDI setup
    >full Docker home lab stack: Traefik, Nextcloud, Home Assistant, databases for each, Grafana, Plex, etc.
    >Ollama with llama3:70B
    >another ZFS pool made out of a bunch of 12TB SAS drives connected to an HBA
    Basically what I'm running on my 5950X desktop, without the VDI. Run the databases on a dataset that's on the SSD raidz1 you got. Have it be a one-unit wonder for a home. Maybe also run PXE off it for a home theater PC, and maybe a Steam cache on the HDDs as well.

    • @ochinchinsama
      @ochinchinsama 3 months ago

      @M1America What GPU are you using?

    • @M1America
      @M1America 3 months ago

      @@ochinchinsama 6600xt

  • @nadtz
    @nadtz 3 months ago +1

    As much as I'd love to have a Genoa box at home, I'll be sticking with Rome until Genoa hits a price I'm comfortable with second-hand. Awesome that you get to play with one, though. Looking forward to seeing what you do with that monster.

  • @DigitsUK
    @DigitsUK 3 months ago

    Nice to see those cases still come with a blue-laser power LED!

  • @eliotrulez
    @eliotrulez 4 months ago +2

    The 360W TDP also includes the I/O die, and the I/O with all PCIe slots populated can consume 100W on its own.

  • @montecorbit8280
    @montecorbit8280 4 months ago +2

    At 10:45
    Input voltage...
    In the United States, voltages at the wall are 120 volts and 240 volts. I believe they were 110 and 220 some 50 or 60 years ago, but we upped the game slightly...

  • @Seimar12
    @Seimar12 3 months ago

    Thank you very much for your VDI work, Jeff.
    I have a couple of VMware ESXi hosts and just bought a DL380 Gen9 server for my first Proxmox experience.
    I am planning to buy a 2080 Ti or RTX A5000 for my VDI.

  • @Freedbot
    @Freedbot 4 months ago

    Don't get me wrong, I'm all for the sneaky-deal chop-shop Chinese specials, but I'm happy for you getting to play with a top-end shiny new rig. May all your bugs be squished before your beers go flat.

  • @TheFrantic5
    @TheFrantic5 4 months ago +7

    He plugged his merch store again! Quick, someone complain!

    • @CraftComputing
      @CraftComputing  4 months ago +5

      DON'T MAKE ME PIN YOU!

    • @tappy8741
      @tappy8741 3 months ago

      Like, I could complain that he does the drinking gimmick to the point that I nearly didn't click on this and probably won't click on the next one, but I won't ;)

  • @jjvanrensburg
    @jjvanrensburg 3 months ago

    Dude! I can't wait to see what this beast can do!! Serious cool hardware!!!

  • @johnpaulbacon8320
    @johnpaulbacon8320 4 months ago +1

    Nice video. I'll be waiting to see how this machine performs.

  • @hxx888
    @hxx888 3 months ago

    Wow, this is the next-level home lab.

  • @GhostieXV
    @GhostieXV 3 months ago

    The Wall-E and plant holder take the spotlight in the B-roll. :D

  • @Vexcenot
    @Vexcenot 4 months ago

    man they really just give this guy everything

  • @KenOttaviano
    @KenOttaviano 4 months ago +1

    This is about as far as homelab goes given our consumer budget restrictions. A 7950X gets about 40,000 in Cinebench and 100GB/s in RAM throughput. I think it would be more cost-effective to just scale out to the number of 7950X, RTX 4090, 2x48GB DDR5-6000 CAS 30 nodes that you need. There are obvious PCIe limitations, but I think the pros would outweigh the cons.

  • @draskuul
    @draskuul 4 months ago +2

    Butterbot finally has meaning to his life!

  • @patrickprafke4894
    @patrickprafke4894 3 months ago

    If you're worried about USB connections: for around $20-30, a Microsoft Internet Pro keyboard has your back. Not only does it have a PS/2 connection, it has USB, as well as 2 USB ports on the back right side. Perfect for your mouse or a slow ISO install, and for the handful of times when I had only one working port: all 3 through 1 port. And it's pretty durable; I've had mine around 20 years as a daily driver.

  • @andrewphi4958
    @andrewphi4958 4 months ago +1

    Thumbs up for going 240V :)

  • @davidtaylor7245
    @davidtaylor7245 4 months ago +3

    Have you researched the SR-IOV features of SSDs? I would be grateful to hear what you think about using that SSD feature for VDI use cases. Long-time viewer. Thanks.

  • @KS-wr8ub
    @KS-wr8ub 4 months ago

    Can't wait until this is a few gens old and I can purchase these on eBay! 😍

  • @jeremybarber2837
    @jeremybarber2837 4 months ago

    That is a super dope build out!

  • @shutdowncnn6086
    @shutdowncnn6086 4 months ago +1

    In 2015 I built three X99 boxes at about $6,000.00 apiece with all the latest and greatest whistles and bells. My wife thought it was a bit overkill; those three builds totaled over $18,000.00. But one AMD CPU at $9,000.00? And BTW, all three X99 boxes are still running flawlessly today in 2024. I put a lot of TLC into those (Linux) boxes, and they have served me well. This VDI build is interesting: ultra fast, large memory. Geeks are becoming virtual machine freaks. I prefer a system that, once down, can be restored in just a few minutes.

    • @Prophes0r
      @Prophes0r 3 months ago

      VMs were "the future" in like... 2005.
      Everything is done on VMs and containers now for a good reason.
      Less waste.
      Also, one of the biggest reasons to use a VM/container is to be able to move them easily, or blow them away and rebuild them with much less effort.
      I don't understand how your bare-hardware setup could be easier/faster to get running again than having it in a VM/container.

    • @shutdowncnn6086
      @shutdowncnn6086 3 months ago

      @@Prophes0r I have built quite a few VMs for the fun of it on Arch Linux and Gentoo systems: OS/2, OpenBSD, NetBSD, FreeBSD, Windows. It's easy to do. With better hardware, bandwidth, a large number of CPU cores, and the large memory needed especially for a ZFS file system, it's like running real metal. As a server with many clients, I say yes: those VMs can make life easy for those who really need it. You're right about setups being easier/faster to get running; it seems to me setups are easier in a VM than on bare-metal hardware. The reasons are obvious: the latest and greatest hardware with many new device drivers isn't there yet. Simply put, VMs are emulators.

  • @polypolyman
    @polypolyman 4 months ago +12

    The 8004 platform is Siena, not Bergamo as stated… Bergamo is the Zen 4c / high core count SP5 chip in the 9004 series.

    • @CraftComputing
      @CraftComputing  4 months ago +9

      You are correct. I misspoke during that section.

    • @hellfiredreadnought4618
      @hellfiredreadnought4618 4 months ago +5

      @@CraftComputing Unforgivable. Your punishment is to try a beer in every video you make from now on.

    • @CraftComputing
      @CraftComputing  4 months ago +5

      It's my burden to bear.

    • @pcislocked
      @pcislocked 4 months ago +5

      @@CraftComputing ... to beer?

    • @madb132
      @madb132 4 months ago

      @@CraftComputing Your punishment should be to "drink 1 litre of Castlemaine XXXX (WARM) live on stream!" I apologise for uttering the name of such utterly foul liquid, but a punishment must be met! The internet has spoken. Hee hee.

  • @charagender
    @charagender 4 months ago +2

    OMG WALL-E TO HOLD THE CPU I LOVE THAT

  • @aquaticbob7595
    @aquaticbob7595 4 months ago

    Funny that this video pops into my feed just as all the parts are arriving for my Genoa build. I went with an Epyc 9354, added a beefy RTX 6000 Ada, and I'm stuffing it inside a Fractal Meshify 2 XL. I didn't do enough research on the motherboard; I probably would have bought this board had this video come out last Saturday. Oh well! Awesome video, and I'll be checking out more videos from your channel.

    • @aquaticbob7595
      @aquaticbob7595 3 months ago

      In an unfortunate turn of events, the mobo I bought was DOA, so I ordered the same mobo as in the video. Fortunately it was on Prime so I didn't have to wait long, and now I finally have a working server.

  • @JPageauBienvenu
    @JPageauBienvenu 3 months ago

    Tripp-lite (Eaton) makes some "Plug-Lock Inserts" for C14 which I found help with the loose cables on a crowded PDU.
    They come in batches of 100, and they seem brittle, but damn are they useful.
    Just watch out, the force needed to get them in/out after the lock is installed ain't a joke.

  • @dervsoh2468
    @dervsoh2468 4 months ago

    As a casual user still running his X79 platform on Ivy Bridge, this feels like a time-lapse to the distant future. Damn, tech has made a leap forward in the last 10 years.

  • @kendric5578
    @kendric5578 3 months ago

    Awesome rig to have at home. Super jealous! Slight nitpick... US is 120V/240V; we switched from 110V/220V like 75 years ago.

  • @victorshane4134
    @victorshane4134 4 months ago

    An All-In-One solution as always. Proxmox + Gaming VMs, NAS, etc.. whole home lab in a single box :)

  • @HM-rz8nv
    @HM-rz8nv 3 months ago

    Hey Jeff! I'm thinking of building an Epyc-based workstation/server at home, but going the Milan route just to save a huge amount of money. Rome and Milan parts are far cheaper than they used to cost at launch, and that goes for everything: the CPUs themselves, the motherboard options, the RAM. My plan is to get a dual socket and upgrade it gradually.
    The sheer power of a 64-core system would be incredible by itself, but having, say, 1TB of memory could allow me to do some pretty crazy things in certain applications.

  • @wannabesq
    @wannabesq 4 months ago

    Slaps case "You can fit so many plex transcodes in this baby"

  • @EricInTheNet
    @EricInTheNet 4 months ago +1

    I think local LLMs will be a large consumer of compute power in the next 4-5 years. Llamafile or Ollama in CPU or GPU mode, measuring Llama 3.1 or Mistral Large 2 performance in tokens/second, would be a great benchmark.
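
    [Editor's note: a minimal sketch of the benchmark suggested above, assuming a local Ollama install with the model already pulled. The model tag and prompt are placeholders; eval_count and eval_duration (in nanoseconds) are the generation-phase statistics Ollama's REST API returns.]

        import requests

        # Ask the local Ollama daemon for one completion, non-streaming,
        # then compute generation throughput from its reported stats.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3.1:70b",
                  "prompt": "Explain ZFS in one paragraph.",
                  "stream": False},
            timeout=600,
        )
        stats = resp.json()
        tokens_per_s = stats["eval_count"] / (stats["eval_duration"] / 1e9)
        print(f"{tokens_per_s:.1f} tokens/s")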

  • @kendokaaa
    @kendokaaa 4 months ago +2

    I love Pit Caribou

  • @Jaabaa_Prime
    @Jaabaa_Prime 4 months ago +7

    Seven x16 slots is insane! Jealous? No, you deserve getting this kind of system. Envious? Also no, I sure don't deserve one either. Do I want one? Oh yes, but I can't justify it in a "home lab"! Awesome server, Jeff! Congratulations!

    • @LordApophis100
      @LordApophis100 4 months ago

      You can get used Zen 3 Epycs with 7 x16 slots for quite reasonable prices now.

    • @LtdJorge
      @LtdJorge 4 months ago

      @@LordApophis100 Those are Gen 4 instead of Gen 5, although there's not much difference today as most devices don't take advantage of 5.0 😂

    • @noth606
      @noth606 4 months ago

      @@LtdJorge A ton of stuff doesn't come close to needing Gen 3, let alone later. Just be sure to check the specs; there are some things that will nerf themselves if the PCIe is not the "right gen" even though they don't come close to needing it in terms of bandwidth. Some SSDs for sure, but some other stuff too that I don't remember now.

    • @morosis82
      @morosis82 4 months ago

      Looking at replacing my Epyc mobo with an H12SSL; that has lots of lovely PCIe 4 slot goodness!
      Sadly, while trying to troubleshoot I dropped a CPU into the pin grid of the socket on my H11SSL-i 😢

    • @LordApophis100
      @LordApophis100 4 months ago

      @@morosis82 I really like the Gigabyte MZ32-AR0 rev 3 because of its OCP 2.0 slot.
      You can add decent networking without using a PCIe slot. The only drawback is that if you want to use the top x16 slot you need to run 4 cables from the connectors below the CPU to the slot, but the other 6 slots are already plenty. Other than that it's almost the perfect Epyc board for me.

  • @harshbarj
    @harshbarj 4 months ago

    And here I am deciding between Epyc 7001 and 7002 (Gen 1 and 2).

  • @chromerims
    @chromerims 3 months ago

    4:42 -- PCIe 4.0/5.0 . . . appreciate the commentary on signalling and the Phison PS7101 (5:15) redrivers.
    More VDI/cloud gaming videos 👍 please. Highly entertaining and educational.
    Kindest regards, friends and neighbours.
    12:53 -- "One of the main struggles I've had with VDI is the intense load on storage for loading the OS or launching a game." Interesting. Looking forward to your storage configs in your testing.

  • @Monarchias
    @Monarchias 4 months ago

    Let's go wild, Jeff! This hardware is so WOW that I thought the whole neighborhood could benefit from it. This is growing bigger than a homelab; let's call it a neighborhood lab! Now imagine... just for fun! We build the cult later. :)

  • @speedkerr
    @speedkerr 4 months ago +1

    With how much 950p's have come down in price, it might be worth picking up a couple for testing, given how loading applications tends to be latency- rather than throughput-bound.

  • @waterflame321
    @waterflame321 4 months ago +8

    What is my purpose?
    You hold CPU
    Oh no....

  • @jhines97
    @jhines97 4 months ago

    I would like to see VDI implementation using the A series cards. I would also like to see some server virtualization at the same time.

  • @ryanw.9828
    @ryanw.9828 4 months ago +1

    Wall-E - immediate like on the video

  • @k01db100d
    @k01db100d 3 months ago

    Perhaps we'll be lucky enough to see... games?

  • @varno
    @varno 3 months ago

    It would be cool to see the performance of a VPP-based routing and packet inspection setup using Slurm.

  • @thirdwheel1985au
    @thirdwheel1985au 4 months ago

    I feel like you're doing the merch sponsor spot to spite the guy who whinged about it last time. If so, by all means continue.

  • @garthkey
    @garthkey 4 months ago

    After the ESC4000 I've moved on to 2U 4-node cluster computing.

  • @chaosfenix
    @chaosfenix 4 months ago

    This is basically the setup I want to upgrade to in a few years after the prices have dropped. I would like to see how much you could potentially consolidate your rack. Could you virtualize and run your entire homelab off this single server? You already have a pretty amazing setup, so obviously you aren't going to be able to throw a petabyte of storage into that chassis, but could you do everything else? How much storage can you reasonably get in it? Would homelabbers be better off buying multiple lower-core systems and basically clustering them, or spending a little more on an older 32-64+ core processor with gobs of PCIe and virtualizing everything? What software would you use, and how would it compare performance-wise to your current setup? Also, what are the power savings, if any, of consolidating like this? Yes, that single chip pulls 274W alone, but if you could replace 3 servers that pull 100W each you are going to be ahead. It would also reduce your need for networking, as you could pull switches out of your rack entirely if you didn't need to connect as many devices.

  • @casperghst42
    @casperghst42 4 months ago

    LLMs (Ollama): it would be interesting to see what that box can do.

  • @DustinShort
    @DustinShort 4 months ago +6

    I would normally say "why didn't you just use a C19 plug-lock insert as intended", but I'm assuming you ran into the exact same problem I did... it's virtually impossible to find those inserts sold individually, and paying over $60 for 100pc seems exceedingly wasteful lol. I just got a 3D printer though, so I'm sure I can find or draw up a model to fix the issue.

    • @whimsicalsociety119
      @whimsicalsociety119 4 months ago +1

      My first thought when I heard Jeff's issue was to grab a hot glue gun and just accept the cable as near-permanent 😅

    • @MelroyvandenBerg
      @MelroyvandenBerg 3 months ago

      I use APC cables with a locking feature on both ends.

  • @Felix-ve9hs
    @Felix-ve9hs 4 months ago +2

    Can you please also test the four NVMe SSDs in a four-drive "RAID 10" ZFS pool (two striped mirror vdevs)?

    • @CraftComputing
      @CraftComputing  4 months ago +4

      RAID-10 could be interesting as well. I'll definitely give it a look.
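
      [Editor's note: for reference, a minimal sketch of the requested pool layout, with hypothetical device names. ZFS stripes across all top-level vdevs, so listing two mirrors yields the "RAID 10" arrangement.]

          import subprocess

          # Two 2-drive mirror vdevs; writes are striped across both mirrors.
          subprocess.run(
              ["zpool", "create", "tank",
               "mirror", "/dev/nvme0n1", "/dev/nvme1n1",
               "mirror", "/dev/nvme2n1", "/dev/nvme3n1"],
              check=True,
          )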

  • @VincentSaelzler
    @VincentSaelzler 3 months ago

    Run some full Ethereum nodes! Raw storage required is around 10-15 TB of flash.

  • @steven44799
    @steven44799 3 months ago

    Bought 3x 9354 32-core Genoa-based servers semi-recently. Their improvement over the Rome boxes they replaced was noticeable just in using the virtual machines running on them. You pay for that improvement in power, though: maxed out with a power-virus load I was seeing 600-700W per system, and these only have 1 CPU, 12 sticks of RAM and 2 SATA SSDs each. It is nice and fast, though. The old Rome servers were never that power hungry; 32c, 8 sticks of RAM and 2 SSDs would use ~300W.

  • @rodrimora
    @rodrimora 4 months ago +1

    DDR5, 12 channels at 4800MT/s... that's some serious bandwidth. Would that be ~900GB/s? That's close to an NVIDIA 3090; running an LLM from RAM might yield good speeds.
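
    [Editor's note: a quick sanity check of that figure. DDR5 moves 8 bytes per channel per transfer, so 12 channels of DDR5-4800 peak at about 461 GB/s per socket: roughly half the ~900GB/s guessed above, and about half of a 3090's ~936 GB/s.]

        # Peak theoretical bandwidth = channels * transfers/s * 8 bytes/transfer
        channels, transfers_per_s, bytes_per_transfer = 12, 4800e6, 8
        peak = channels * transfers_per_s * bytes_per_transfer
        print(f"{peak / 1e9:.1f} GB/s")  # -> 460.8 GB/s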

  • @Viewable11
    @Viewable11 4 months ago

    Cooling the "Genoa" CPUs is very difficult because the largest fan that fits on the mainboard is 92mm. That small size requires the fans to run very fast and very loud.

  • @kienanvella
    @kienanvella 4 months ago

    If you do end up running Proxmox, a 6-drive, two-vdev raidz1 (3 drives per vdev) is a config that performs quite impressively - at least in my use case with older Micron 2300 NVMe drives.
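
    [Editor's note: a minimal sketch of that layout too, again with hypothetical device names; two 3-drive raidz1 vdevs trade some resilvering speed for more usable capacity than the striped mirrors shown earlier.]

        import subprocess

        # Two raidz1 vdevs of 3 drives each; ZFS stripes across both vdevs.
        subprocess.run(
            ["zpool", "create", "tank",
             "raidz1", "/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1",
             "raidz1", "/dev/nvme3n1", "/dev/nvme4n1", "/dev/nvme5n1"],
            check=True,
        )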

  • @halbouma6720
    @halbouma6720 3 months ago

    Jeff bricked his Epyc Quanta motherboard; what's the worst that can happen if we give him another Epyc system? ... Thanks for the video!

  • @faealiciadotsys
    @faealiciadotsys 4 months ago

    Really looking forward to the speed testing, if only to drool all over myself wishing I could afford such a thing.
    Umm, people suggested Crysis CPU rendering, which I'd definitely be interested in seeing, but I also can't help but wonder how many copies of Cities: Skylines 2 you can get running before the framerate tanks to unplayable levels. Given how notoriously CPU-heavy that one is, and the fact that city sim games can usually get by with 30 FPS or lower, it'll be interesting to see whether the dual GPUs or the Epyc ends up being the ultimate bottleneck.

  • @Ginita12
    @Ginita12 4 months ago

    Killer setup.

  • @piterbrown1503
    @piterbrown1503 4 months ago

    Jeff, I struggled a lot to get good performance out of ZFS with NVMe drives. In my case a single drive gets 6 GB/s R/W, but with ZFS the speed drops a lot, to around 2-3 GB/s R/W. I tweaked a lot of settings, like caching metadata only, atime, and ashift, and performance rises a little, maybe 1 GB/s more R/W speed. Something else I noticed: if the Proxmox host is under heavy load, ZFS performance clearly drops too. In my case I'm also using "Epyc Rome" and enterprise Gen4 U.2 SSDs.

  • @claucmgpcstuf5103
    @claucmgpcstuf5103 4 months ago +1

    Well, this is a big one 🐝 ultra interesting, yes... oh, and for the sponsors: AMD, Corsair.
    Me personally, I would have loved to see you running, of course, all the 3DMarks that you can, Cinebench (all of them), a couple of games (5 or 10), the ones that you also use to stress test things (of course not a useless stress test like some other things)... CrystalDiskMark plus normal copy-paste actions, etc. Awesome stuff, man.

  • @igordasunddas3377
    @igordasunddas3377 2 months ago

    I like this! I've just got my hands on a Gigabyte ME03-CE1 (SP6) motherboard with an EPYC 8024P (I don't need much performance, but I do want plenty of fast I/O). After being absolutely disappointed with the ASRock Rack W680D4U-2L2T/G5 (I really can't recommend it: janky as sh!t), I am hoping this will work nicely.
    Got a SuperMicro ATX tower, a Noctua CPU cooler, and 4x 3.84TB U.2 NVMe drives. Only the RAM is so prohibitively expensive that I am coping with just one (soon 2) 16GB DDR5 ECC RDIMM stick(s).
    Still: I'd never ever want a server that doesn't have a BMC with media redirection.
    Also, if you are looking for stress tests, try running an LLM; should be fun, especially with the GPUs you've got.

  • @ewenchan1239
    @ewenchan1239 3 months ago

    Please share your setup for the software side of the cloud gaming VDI.
    I've tried both Moonlight and Sunshine, and my testing shows that the connection can be VERY spotty at times, even on hardwired 1 GbE LAN.
    Beyond that, there isn't a heck of a lot you can throw at your Genoa server that would be a "standard" test.

  • @MorpheusXTRM201
    @MorpheusXTRM201 4 months ago

    Hey Jeff, I was considering doing the same thing if I ever got a house of my own: a dedicated, grounded 220V circuit using a NEMA-30 plug for an APC battery backup. But I was concerned about power managing that UPS to support 120V devices like a couple of SFP+ switches, a cable modem, a controller and a 2U/3U custom server. If you do that upgrade, at your discretion, would you make a video about it, including any challenges you experience?

  • @shaunlavoie6183
    @shaunlavoie6183 4 months ago

    Looking at your whiskey stones, but I want a really big one for an old fashioned big rock. Any chance that is coming?!

  • @JoBo-ug6tf
    @JoBo-ug6tf 4 months ago

    I don't need this, I have absolutely no use case for this, but I want this SO BAD!!

  • @shephusted2714
    @shephusted2714 4 months ago

    You have 12-channel support with this chip but opted for 8 channels. Still good to see this content, as the potential and possibilities are immense; it should be a workhorse for you for years. By going to 220 or 240V you should save 3-5% on power, which is not negligible. You need to do some true open-source AI on this monster: pair it with a really big array and let it run to digest and learn all your data.

    • @notme4526
      @notme4526 4 months ago

      Electrical companies in the US measure usage in kilowatt-hours, so running 120V @ 10 amps or 240V @ 5 amps makes no difference in your power bill; the wattage is exactly the same. Some applications may run more efficiently on 240V, but it's not 3%, and it won't help cost in any noticeable way (I found this out the hard way). It will let you keep the amp load down if you're running low, since most homes have 100-amp service (larger homes 200-amp), so it is still better than 120V for permanent installs.
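
      [Editor's note: the billing arithmetic behind that reply. Energy is billed as power integrated over time, and P = V x I, so halving the current at double the voltage is a wash:]

          # Same wattage either way, and therefore identical kWh on the bill.
          for volts, amps in [(120, 10), (240, 5)]:
              watts = volts * amps
              print(f"{volts}V @ {amps}A = {watts}W -> {watts * 24 / 1000:.1f} kWh/day")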

  • @AIC_onyt
    @AIC_onyt 3 months ago

    I want to see this rig chew through Blender renders.

  • @alexej7790
    @alexej7790 4 months ago

    Maybe too mainstream now, but it could be interesting to see how the two A5000s work for training AI models.

  • @denvera1g1
    @denvera1g1 4 months ago

    AMD is just killing it.
    One of the things I was most excited about from AMD was Epyc 4004. Now don't get me wrong, extra validation is great.
    But I was disappointed by no 16-core X3D option and no 32-core Zen 4c option, or maybe a 2-NUMA-node 8x X3D + 16x Zen 4c part. I was also holding out hope for a redesigned die for the 8700G that gave it 32-48MB of L3 cache instead of 16MB, plus the full 24 lanes of PCIe. The cherry on top of this fantasy would be CXL support for what would equate to a 3rd channel of RAM over 8 lanes of PCIe, but outside of handhelds and small-form-factor gaming systems that would basically be useless. If they had released an Epyc that was this theoretical Ryzen 9 8950G, even without the CXL support, I'd probably pick one up for my file server, which right now is a 3950X on an ASRock Rack X470 motherboard with a BMC, because I don't have enough PCIe lanes for a video card.

    • @nickfarley2268
      @nickfarley2268 4 months ago

      I don't understand the homelab excitement for Epyc 4004. They work the same as regular consumer Ryzen and fit in the same motherboards. The only people who should be excited are corporate sysadmins who need to fill out a ton of certification paperwork for every part in a server.

    • @denvera1g1
      @denvera1g1 4 months ago

      @@nickfarley2268 From what I can tell, you're basically guaranteed functionality, as opposed to my 7950X where I had to hunt around for boards and RAM that would actually work with ECC. Which is why I was disappointed: I've had a working 7950X since launch. If this offered 32 cores at a lower power envelope I would throw down $1000 right now, but it's basically just my 7950X.

  • @andreasborschinger7235
    @andreasborschinger7235 3 months ago

    Perhaps you can set up an Ollama AI solution / train it with PDF documents (programmers' handbooks and system administration along with some network handbooks). Would be nice to see this...
    ;.)

  • @rotors_taker_0h
    @rotors_taker_0h 3 months ago

    Run Llama-405B on that puppy's CPU. Maybe populate all 12 memory channels to fully uncover its potential.

  • @simonpalmer123
    @simonpalmer123 4 months ago

    Ideas: sharing part of the GPU setup with VDI and part with an Ollama VM... will your 2x A5000s fit Llama 3.1 405B? Also: Proxmox VDI with Guacamole and NetBird? Or similar? Maybe securing Sunshine and Moonlight with NetBird or similar, and measuring whole-package latency?
    Thanks, enjoying the videos.
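
    [Editor's note: a rough back-of-the-envelope answer to the 405B question, assuming 4-bit quantization (about half a byte per parameter) and ignoring KV-cache overhead. Even heavily quantized, the weights alone are roughly four times the combined VRAM of two 24GB A5000s, so a 70B-class model is the realistic ceiling for that pair.]

        # Will Llama 3.1 405B fit in 2x RTX A5000 (24 GB each)?
        params = 405e9
        bytes_per_param = 0.5                        # 4-bit quantization
        weights_gb = params * bytes_per_param / 1e9
        vram_gb = 2 * 24
        print(f"weights ~{weights_gb:.0f} GB vs {vram_gb} GB VRAM")  # ~202 GB vs 48 GB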

  • @DissertatingMedieval
    @DissertatingMedieval 4 months ago

    I would love seeing weird things like stereophotogrammetry and the like.

  • @DrMitsos
    @DrMitsos 4 months ago

    Hey Jeff, can you please try not only GPU passthrough under Proxmox, but also a vGPU setup with the RTX A5000? It would also be interesting to see whether the above are possible under TrueNAS SCALE, as they now include the latest Nvidia drivers. Thank you very much!

  • @spotopolis
    @spotopolis 3 months ago

    I have the Rome version of that board. The layout is exactly the same, and I have run into the USB issue as well. I run unRAID on mine, so one of the USB ports is populated by the unRAID boot drive and the other is for communication with my battery backup. If I ever need to connect a mouse/keyboard directly to the board, I have to unplug the battery backup comm connection. Great board otherwise.
    Have you tried running any of the Intel QAT cards in your servers to assist with VPN performance? I'm currently trying to pass one through to a pfSense VM, but I'm having issues with the drivers being blacklisted in vfio.

  • @lavalamp3773
    @lavalamp3773 4 months ago

    I was surprised you didn't already have a 220V circuit in the garage with how many kilowatts you keep in there!
    I'm sure that if and when you do upgrade to 220V out there it'll be a video, but I'd also be interested to see how the total power draw changes. Probably only a couple of percent, but when you have that many watts I bet it's a substantial difference.

    • @CraftComputing
      @CraftComputing  4 months ago

      Shockingly enough, my rack only needs around 900W at max (plus another 600W on a second circuit for the AC). Idle is around 600W.

    • @lavalamp3773
      @lavalamp3773 4 months ago

      @@CraftComputing Wow, that is substantially less than I expected; I thought the AC would be a couple of kW by itself!

    • @CraftComputing
      @CraftComputing  4 months ago

      Yeah, it's really not terrible. I've been as high as 1200W idle in the last few years, but 2023 was all about consolidating down and getting more efficient equipment.

  • @elalemanpaisa
    @elalemanpaisa 4 months ago

    At least it runs RDIMMs... what is crazy expensive is UDIMMs.

  • @gamingthunder6305
    @gamingthunder6305 4 months ago

    Run Stable Diffusion on the CPU. It would be interesting to see how close it can get to a 3090 or 4090.

  • @marcin_karwinski
    @marcin_karwinski 4 months ago

    I wonder how noisy that air cooler is, though perhaps the PSUs are noisier in that setup... Pity there's no somewhat-quiet Arctic 4U update for the increased TDPs and socket sizes of the SP5 generations... Maybe this cooler is not capable of reining in the 360-400W TDP, so the CPU got downclocked a bit, hence the lower power/heat observed while testing. Or maybe you had eco-friendly options in the BIOS restricting the power before the chip could spread its wings?

  • @Ianochez
    @Ianochez 4 months ago

    Gaspésienne can be written as "La Gass-pay-zee-en" to be pronounced properly in English. Thank you and have a good one, all. 17:52

  • @UntouchedWagons
    @UntouchedWagons 4 months ago +1

    I'd like to see you try training your own AI models on those GPUs. I don't know what exactly you'd train but it'd be cool nonetheless.

    • @CraftComputing
      @CraftComputing  4 months ago +1

      I'm sure I'll be doing some AI things. Not sure what yet.

  • @la009895
    @la009895 4 months ago

    Ollama. Load some huge model and see how the entire system performs.

  • @will1122
    @will1122 4 months ago +1

    OK, I don't get the complaint about USB ports. Why buy a high-end server with IPMI and then boot from a USB drive? I'd either PXE boot if there are a lot of them, or just use the IPMI.

    • @stephen1r2
      @stephen1r2 4 months ago

      He was explaining that in a workstation context, using this particular motherboard may be a bad idea due to the lack of connectivity.
      Personally I think losing the x8 slot to a multi-port card wouldn't be much of a sacrifice.

  • @3k3k3
    @3k3k3 4 months ago

    Well Deserved!

  • @AdhityaMohan
    @AdhityaMohan 4 months ago

    Great video! Will you ever try a dual-socket config with the Epyc 9554?

    • @CraftComputing
      @CraftComputing  4 months ago

      I'm hoping to get a dual-socket board soon :-)

  • @CalvinHenderson
    @CalvinHenderson 4 months ago

    How about a web server available to the local network that provides updates on stats and such for gameplay, and other things from the system?
    How much of the CPU will be dedicated to the virtualized systems?

  • @namedoasis1134
    @namedoasis1134 4 months ago +8

    Wall-E with the Epyc is so cute ❤

    • @DBitRun
      @DBitRun 4 months ago

      Where can I get that Wall-E!?

  • @xmine08
    @xmine08 3 months ago

    Run LLMs on this thing! Both on the GPUs and on the CPU, as the 8-channel RAM should really help. Go with Llama 3.1 70B or a similarly sized model; no one would buy this thing to run an 8B lol.
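
    [Editor's note: a common back-of-the-envelope ceiling for the CPU case, assuming 8-bit weights and that token generation is memory-bandwidth-bound, i.e. each generated token streams the full weights from RAM once:]

        # 8 channels of DDR5-4800 at 8 bytes/transfer -> ~307 GB/s peak
        bandwidth_gbs = 8 * 4800e6 * 8 / 1e9
        model_gb = 70            # Llama 3.1 70B at ~1 byte per parameter
        print(f"~{bandwidth_gbs / model_gb:.1f} tokens/s ceiling")  # ~4.4 tokens/s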

  • @TrueHelpTV
    @TrueHelpTV 3 months ago

    16:00 "it's not really realistic that anyone will be running..."
    As I sit here with four Tesla P40s, just eagerly waiting for ANYTHING about configurations so I can finally get the best bang-for-buck CPU/mobo to run them ='D
    It's CRAZY how little documentation there is on the M40 (24GB)/P40.
    Even Nvidia has all but removed ALL documentation on them from their website. I think it's the real reason they're only $75 now: nobody can find enough information to risk the investment.

  • @adamchandler9260
    @adamchandler9260 4 months ago

    Twelve instances of Crysis at a time? Also studio update?

  • @Fahdalrabeayah
    @Fahdalrabeayah 3 months ago

    Windows with Steam GPU encoding enabled + Steam Link.

  • @yramagicman675
    @yramagicman675 4 months ago

    How many simultaneous Plex/Jellyfin streams can you CPU transcode before it chokes? That might be an interesting test.

    • @CraftComputing
      @CraftComputing  4 months ago

      Not really a great test, though. CPU is the least efficient (power- and price-wise) way to encode video. For anything more than a few streams you'd want dedicated hardware, like NVENC on an NVIDIA GPU or QuickSync via an Intel Arc GPU.

    • @yramagicman675
      @yramagicman675 4 months ago

      @@CraftComputing I knew it wouldn't be efficient. I was more trying to think of off-the-wall ways to stress the CPU that might be halfway interesting for a video. Of course dedicated hardware like NVENC is going to beat the pants off CPU encoding.
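
      [Editor's note: a minimal sketch of the stress test being discussed, assuming ffmpeg is installed and sample.mkv is a placeholder source file. It launches N software transcodes in parallel and discards the output; raise N until a batch stops finishing in real time.]

          import subprocess

          N = 8  # number of simultaneous CPU transcodes
          cmd = ["ffmpeg", "-loglevel", "error", "-i", "sample.mkv",
                 "-c:v", "libx264", "-preset", "veryfast", "-f", "null", "-"]
          jobs = [subprocess.Popen(cmd) for _ in range(N)]
          for job in jobs:
              job.wait()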

  • @DeFi-Macrodosing
    @DeFi-Macrodosing 4 months ago

    Of course, the question is: can it run Doom?

  • @Feed9Will
    @Feed9Will 3 months ago

    Damn, that's a beefy system. VDI at home? Lab? It will serve all the residents in a mil-sq-ft+ mansion. I'm game to see more vGPU sharing on Proxmox, but VMware and Horizon are dead to me.
    Video series idea: a Proxmox HCI with Ceph deep dive, to help peeps get off VMware vSAN. I'm sure AMD will understand that Ceph requires a minimum 3-node cluster and send 2 more 9554s right over. :)