Oops! Two corrections. I misread the block diagram (ftp://ftp1.tyan.com/doc/S8030_UG_v1.0b.pdf): the HD Mini-SAS connections (3x) are for SATA only. BUT the Slimline SAS connectors (NOT OCuLink!) are the PCIe x8 interfaces there. My bad, sorry about that.
Please review the "AsRock Rack ROMED8-2T ATX" - looks pretty decent for this same purpose.
Maybe together with the "V-color 32GB DDR4 SDRAM ECC Load Reduced DIMM DDR4 2666MHz" - ECC, CL 17 @ 1.2V! Wondering how far an overclock could go with these.
Anyways, thanks for the great content. Cheers
Also, according to the datasheet it could support 2TB of RAM. Hooray! :D
"Up to 512GB RDIMM/ 1,024GB LRDIMM/ 2,048GB LRDIMM 3DS *Follow latest AMD DDR4 Memory POR"
Another correction, @6:18 - MoBo RGB started on L33T G4M3R Desktops. Still hasn't made its way to servers. Microcomp be lagging.
I bought everything you mentioned in this video.
Then I woke up.
What a dream.
I like how Wendell has just 2 stacks of sex books casually sitting next to him on the set!
the stories of orgies....got me to giggle
sex books
lol
yeah! what the hell hahaha
You could say that it makes a pretty epyc workstation
*ba-dum-tss*
You could. But don't
Bruh
Such a dad joke. Definitely approve! 🤣
Oh hoo 👈👈👁👄👁
I was hoping to see how the M.2 mechanism worked.
6:50 - Subtle Verge ribbing. LOVE IT!
Waiting to see the build process. 👍👍👍
How to build an Epyc server: you need a Swiss Army knife which hopefully has a screwdriver in it, and then you need to screw in with confidence. xD
Don't forget the tweezers.
300k subs! Congrats
I've pretty much been using that setup since May as my new workstation build (ROMED8-2T mobo with a 7702P and 512GB). I use it for simulating infrastructure (Kubernetes clusters, OpenStack, F5 virtual appliances, NetApp simulators and whatnot, all cobbled together).
My main concern (besides CPU power, core count and memory capacity) was to build a silent system under load. Therefore, I paid attention to heatsink orientation and a decent, long-term reliable cooling solution. The planned lifetime of the system is around 4-5 years. I just want to share some details of my build, in case someone can make good use of the hints:
I went for the Noctua NH-U14S you cited as a cooler. The case is a Corsair Carbide Air 540, a cube case which allows for decent airflow over the mobo. I slapped in 5 be quiet! Silent Wings 3 140mm fans: two at the top of the case blowing out, two at the front blowing in, and one at the back blowing out. The heatsink fan blows "upwards" towards the two top fans blowing out. All fans are PWM. The power supply is a 1200W be quiet! Straight Power 11.
All storage is SSD (either NVMe or SATA), so there is no noise there.
The end result is astonishing. Even under full CPU load (Linux, Mersenne-prime CPU torture test), the CPU barely reaches 65 Celsius even after hours, and all fans stay below 1000 rpm. The system remains barely audible while sitting in a 19" cabinet right next to my desk. Even while running the aforementioned test, if you open the cabinet and listen inside, the system is barely audible.
So, if anyone wants to go the same route in terms of workstation build and can accept the expandability limits of the Corsair 540 (2x 5.25", 4x 2.5" internally - the 2x 3.5" bays at the bottom of the case are not usable due to the mobo's size, so I took them out), I strongly recommend this build.
Would like to see this built, in a system, and benched between a full server and a threadripper.
YES, Tyan! Still have my old Tyan Tiger 100 (S1832DL) with dual Slot 1 sockets for PII & PIII from '97 in a case as a footstand under my desk for a rainy day, maxed out and rocking two PIII 500MHz.
Subscribed! Thanks for the information.
I am so happy the old background music is back!!! It's also kind of spooky that I have recently (just last week) been thinking about the possibility of an Epyc workstation PC.
CONGRATULATIONS ON 300K SUBSCRIBERS
Interesting setup if used for a workstation. I looked at TR, TR Pro, Epyc, and Xeon W for a new workstation earlier this year (2023). The TR Pro was crazy priced here in Canada and basically old tech now with PCIe 4/DDR4, and the Epyc didn't have the workstation setup I wanted, since it is aimed towards servers. I settled on an Intel Xeon W7-2495X 24C/48T, Noctua NH-U14S-4677, ASUS WS Pro W790 ACE (max 2TB), 512GB DDR5-4800 ECC RDIMM, ASUS ROG STRIX RTX 3090 OC 24GB, WD Black SN850X 4TB, WD Black SN770, Corsair 5000D Airflow, Corsair HX1200 Platinum PSU, Dell 34-inch UltraSharp wide, Ducky One 2 kbd, Microsoft mouse, Mackie speakers, APC 1500 UPS. The workstation is awesome. I use it for the 3D software I develop, and for Unreal Engine 5 work.
Hey those are not OCuLink, but x8 SlimSAS. OCuLink is SFF-8611 and is PCIe x4 (4.0 on boards released for Rome)
Btw, if you're looking at Rome, the ASUS KRPA-U16 is cheaper than this and has some neat add-on abilities like OCP 2.0 and an HBA card for the built-in miniSAS HD connectors. It's a server board, but no one says you can't put it in a Define XL or O11 XL or something. That and the frequency-optimized 24c part and now we're talking about a heavy-duty system. 6 NVMe drives plus 12 SATA SSDs makes for some sort of tiered storage/caching situation (don't think ZFS would be the way to go with such a solution?), plus dual QSFP28 ports and enough PCIe lanes to spare for some heavy computing. Now that's a system that I would build if I stood in your shoes; you can probably get a second video out of it too, about advanced NVMe storage solutions or whatever insanity you can come up with in such a build. My dream is 3-4 systems like this working both redundantly and as what I call "extreme edge" nodes, which are directly connected to both m/kb and video/audio and load up a VM locally with containers dynamically distributed over the cluster.
Whoops sorry I started dreaming out loud when talking about that Asus motherboard. This is basically my version of sending a video request to an amateur pornstar lol.
Thanks for doing this it’s creative and kinda bonkers!
Hey Wendell!
Rick Beato needs your help with some storage! Dude has decades of music and tracks and albums on his hard drives, and he keeps getting new external drives. :D lol Would be a cool video with a cool dude.
I thought ASRock boards were the only ones that are solid as a rock..
Linus: 7 gamers 1 CPU!!
Wendall: Hold my drink.
@Andrew Crews I was not going to assume with Wendall. He's next level.
@@hotstovejer I think 15 gamers 2 cpus is the max. That's still 8 cores per user with a 8x pcie 4.0 interface.
@@tmi1234567 you mean 2 times more gamers?!?!?! #yayamd
You'll probably catch him drinking a coke or Fanta ;P
@@Level1Techs I knew you were a man of culture.
MacGyver: Built something with a swiss knife.
Wendell: Installing an expensive processor with a swiss knife.
Epyc!!!
The octa-channel memory combined with more cores is in itself more than enough for simulations (like CFD/FEA/astrophysics). Imagine all dat OpenFOAM performance! Also, having the PCIe lanes is amazing for CUDA.
I would love to see a dual 7F72 workstation.
"I want to like Epyc, but there's just not enough cores and threads." - Nobody.
Well, 64 cores / 128 threads is not enough for some, but for most workstations it is enough. As he said, it is overkill for most, but when it isn't, you do need all 128 threads. I can't think of anything but AI that could use 128 threads and would work better on the CPU than on the GPU.
@yumri4 I can use all 128 threads doing AV1 encodes -_- (see the project av1an on GitHub) ...just short of 2 days to encode a movie at 1080p on an 8-core 1700X... and that's not even at the highest quality setting (~4th down, "--cpu-used=3")
Nobody is such an idiot.
lol AMD sucks. Why on earth would I pay like $2500 for 32 cores / 64 threads? I can go on eBay and buy a Xeon Phi for $200 that has 72 cores and 288 threads. And guess what? I can put four of those in one system, so I can have 288 cores and 1152 threads in one system. You can't do that with AMD. Intel had this tech YEARS ago; it's old school now. Intel just did their best quarter ever, they're doing better than ever.
@@lost4468yt Phi cards are co-processors and they run pretty slow. They are also not incredibly compatible with a lot of functions, and they're outmoded by newer tech. Just dropping one (or several) of them into a rig isn't going to actually help anything. The PCIe slot it takes could be used for a GPU that can accelerate renders, act as a fixed-point or floating-point computational aid (breaking encryption, for instance), etc.
Some of the most stable and reliable boards I have ever run were Tyan. The only problem is paying for them :)
This is the kind of motherboard layout I would like to have on each & every motherboard. Unidirectional airflow.
The larger Swiss Army pocket knife even has a Phillips-head screwdriver in it 👍
Dammit, Wendell, now I want to build the ultimate capture rig with this
Be glad that you didn't, Zen2 was awful for anything video. Zen3 on the other hand...
Because we all knew Wendell was gonna have an Epyc workstation. The drool runs down his chin when he sees them. So... congrats I guess? Good stuff.
Isn't Windows 10 Pro for Workstations meant for this type of thing? I thought it was basically the Server stuff in the kernel turned on to be used in the consumer space.
With the benefit of hindsight, which motherboard would you recommend if you were to build an Epyc dual CPU SP3 socket workstation today? Would it be Gigabyte, Super Micro, Tyan, AsRock Rack, or others? Ideally, it should fit into an easily accessible chassis.
It'll primarily be used as an everyday computing workstation, but I'd like to use downtime to mine. Thank you for taking the time to advise.
If you do high-end video editing, compositing or 3D graphics, that 30% lower single-threaded performance will really slow you down, though...
Just overall, a lot of stuff simply can't be threaded due to the math. Stuff like resampling curves or some types of mesh optimizations are just not possible to multithread...
Now, for the work I do, like building custom simulation setups with very specific controls for the artists, I can usually work around most of that, but not all of it. And if you're in Premiere, AE, Resolve or Nuke, it's not that easy to work around...
But people doing this stuff should really use real-time per-core monitoring so they see the single-threaded bottlenecks slowing stuff down in production. I've used that for over a decade - man, it's helpful. 😁👌
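For anyone wondering what that looks like in practice, here's a minimal sketch of per-core monitoring (just one way to do it - assuming Python and the psutil package, since no specific tool is named above). It prints per-logical-core utilization once a second, so a single pegged core sticks out immediately:

```python
# Minimal per-core load monitor - a sketch, assuming the psutil package is installed.
import psutil

def watch_cores(interval: float = 1.0, threshold: float = 90.0) -> None:
    while True:
        # One utilization percentage per logical core, sampled over `interval` seconds.
        loads = psutil.cpu_percent(interval=interval, percpu=True)
        # A single core pinned near 100% while the rest sit idle is the classic
        # single-threaded bottleneck described above.
        pegged = [i for i, pct in enumerate(loads) if pct >= threshold]
        print(" ".join(f"{pct:5.1f}" for pct in loads), "| pegged cores:", pegged or "none")

if __name__ == "__main__":
    watch_cores()
```

On a 128-thread Epyc the line gets long, but the point is the same: if one column sits near 100 while the rest idle, you've found the serial stage.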
The next video needs to be "all about Epyc motherboard ports and connectors". People need to know.
What does this have to do with Captain Picard?
I wonder what you would say about something of an even more epic workstation: reach out to ASRock Rack for their ROMED4ID-2T, pair it with 128GB+ RAM sticks (quad-ranked - with 4 sticks you're at the Epyc memory controller's sweet spot), watercool it, stuff it inside some ITX case, e.g. a CM NR200 or Sama Quzao IM01, use those 6x PCIe 4.0 x8 links to add e.g. 4x 2.5" U.2 NVMe drives, another to add USB controllers (for extra ports at higher speeds, great for WS builds) and an audio card or WiFi card in some low-profile format placed somehow within those smaller cases, add a watercooled GPU, and you could possibly end up with an extremely beefy SFF overkill-per-volume build, possibly even better than this... And truly something different, since you can't get TR in mATX, mITX or mDTX format... Now that would have been a marvel no one could dispute by showing a TR alternative ;)
VGA and a serial port? I love it.
!! Please explain the Value/Need/use-case for Quad Channel (Virtualization, not gaming...)
Have you checked VRM temps on this thing? I don't know the components, but it does look like a small VRM for a 64-core.
I have ZERO interest in the subject and I'd buy completely different stuff.. but I love the video!
@Wendell, running a Supermicro SATADOM on my ESXi host. So far very happy.
#Wendell, quick question if you could help please. What kind of temps was HWiNFO64 reporting on this Tyan mainboard for your Epyc CPU? Specifically the Tctl/Tdie and average Tdie?
Thanks in advance
Do a workstation review of the MZ72-HB0! I'd love to hear your thoughts on watercooling a dual EPYC setup.
Are you building in India? I am also interested in building a workstation for my data science work. What has been your experience in building a workstation in India? Cost? Noise control? Where did you source components? Thanks in anticipation
What's the driver situation with Rome? Can W10Pro for workstation be installed when there is no physical chipset?
Can someone explain to me how on consumer boards there is so much focus on power delivery/chipset temps, to the point where fans are used on top of the standard but ever-growing heatsinks? How on earth does a pro MB like this require NONE of these? What am I missing?
A couple of reasons - this is a server motherboard which plays in a different market segment than consumer land, and different markets have different needs.
High end consumer motherboards are expected to have overclocking support. In order to support the power delivery demands of an overclocked CPU, these motherboards have very large VRM designs with as many power stages as possible to increase peak current delivery and improve transient response under high loading... Basically allow the board to perform well and be stable under high load. The power delivery specification here is unknown and is basically 'as big as possible'. Things like power consumption and board cost are relatively unimportant factors here.
Over in server land, processors cannot be overclocked. This means that maximal power delivery is a known quantity, and much smaller VRMs can be designed. Key factors in this market revolve around lowest possible power consumption and board cost, so VRMs here are designed to be as cheap, efficient, and small as possible without sacrificing reliability.
Another part is marketing. A large chunk of consumer motherboards are sold based on what they look like. Flashy looks simply add cost to server products, making what is basically an industrial computer more expensive for no reason.
Also, in regards to lack of chipset and VRM fans... Motherboard fans are always a board designer's last hope to solve thermal problems. You don't want to use them unless you absolutely have to. As a moving part, a fan is one of the most likely points of failure that you can design into your product. One of the most important things to a server is high uptime and reliability, and relying on little fans attached to the motherboard is a good way to ruin your reputation as the fans fail out in the field.
Another reason that you rarely see chipset or VRM fans on server motherboards is due to chassis design. Consumer motherboards live in unknown chassis of usually questionable airflow and cooling. A motherboard with a marginal VRM design that runs on the hot side could cook itself to death in a poorly cooled desktop PC chassis, so you're more likely to see consumer motherboards with VRM fans slapped on by the board designer just to be safe in a wider variety of desktop cases. Server chassis, on the other hand, almost universally have large volumes of front-to-back air blowing across the entire chassis. This nearly guarantees that the passive VRM heatsink seen on most server motherboards gets plenty of airflow across the fins, which removes the need for fans on even the worst planned and hottest VRM designs.
Hope this helps.
Just placed the order for 7 of these for a brand new deep learning lab. These will be paired with Quadro RTX 8000s and 128GB of ECC memory.
I wish there were motherboards that actually moved the PCIe slots apart to give you room to install 2x GPUs... what's the point of having 128 PCIe lanes if they're all crammed in next to each other? Intel Xeon workstation boards have PCIe slot arrangements that give plenty of space between them for full tower installs.
Like this one: www.supermicro.com/CDS_Image/uploads/imagecache/600px_wide/intel_motherboard_active/x10drg-q-1.jpg SOMEONE make something for epyc ffs
I have wanted to do a Tyan build for years.
Please review the "AsRock Rack ROMED8-2T ATX" - looks pretty decent for this same purpose.
Maybe together with the "V-color 32GB DDR4 SDRAM ECC Load Reduced DDR4 2666MHz" - ECC, CL 17 @ 1.2V! Wondering how far an overclock could go with these.
Anyways, thanks for the great content. Cheers
I am using a WS with an Epyc 7402P now, but with Windows Server 2019 as the OS. I tried various versions of Win10 like Pro and Education, but on none of those could I get the chipset driver installed. CPU-Z even mistakenly recognized my CPU as a TR.
Meanwhile on WS2019, everything just works fine. However, I will consider Ubuntu 20.04 after I get my GPUs home.
What's the benefit of using EPYC for workstation compared to Threadripper?
PCIe lane count, mainly.
Now with TR Pro in the retail market that's not really a thing anymore though.
8:12 I really doubt that two blower style cards will be cooler than two good axial fan cooled cards with the same spacing and in a high airflow case like the Meshify S2. Does anyone have test results for that?
I don't have specific A/B data, but I'm in the folding@home community and it's common knowledge that blower cards are the way to go if you're going to keep the case closed up. The much better axial designed cards just swirl around hot air. It's like an alpha cat; they don't play well with others, but they do great by themselves.
I used to run two 8800GTs, single slot, blower style, and never had any issues.
I run two 5600XTs with good axial coolers. I can only get good temps by leaving both sides of the case open, and even then, my lower card is at 63/65C and my upper card is at 74/85C. (This is running compute 24/7, 27C ambient.)
Ideally, you'd get PCIe risers and do a basic GPU mining setup, and everything will run cool when it's spread out. But if you insist on putting them in the slots on the board in the case, blower cards do run cooler in multi-GPU setups.
Oculink x8 was mentioned but I don't see it in the Tyan S8030 specs?
Tempting to build a server-board-based workstation. I've always used consumer/gaming category parts. Are there any real pitfalls?
Apart from long boot times, lack of overclocking features and a lackluster BIOS, no.
What's with the lego in the one shot?
TYAN!!! Long time no see!!!
I use 2x 7262 as a workstation and I have no issues. It is crunching my data and I can view it as it comes.
It's easier to find high endurance SD than equal or better USB flash storage? *doubt*
What do you think of the ASRock Rack motherboards? I'm thinking specifically of the ROMED8-2T and ROMED6U-2L2T.
Romed8-2t looks baller.
I am guessing this has no audio onboard?
I hope in future we will be able to build a cheapo epyc workstation from retired server hardware.
I would buy this MoBo today if I was sure it would support a 7262 or 7272 with Windows 10 Ultimate and drivers for it. I most of all need the PCIe lanes/slots.
@Level1Techs
Suggestion to start a crowdfunding campaign for such a reduced, bare “interface adapter” Threadripper Pro (and Ryzen) motherboard?
I’d like “nothing” on it, just gimme aaall the PCI lanes from the CPU to various slots. You can use a dedicated PCIe USB controller AIC for keyboard BIOS access if the BIOS is in UEFI boot mode (just make it the CMOS clear default setting).
Will it support the new upcoming 7003-series Milan CPUs?
More I/O would be hard to populate on such a board... We need better connectors than those long PCIe slots. Almost 30 years seems like it should have been enough to make the port more like USB size.
What kind of I/O are you thinking of? A couple of quad-port SAS HBA cards and you can have stupid amounts of hard drives available, for one. Quad-port network cards can make for a serious hardware firewall/router....
Did you get that from Linus when he took down the PC Pro?
Which Kevin Macleod song is that? I like it.
6:49 Installing a $2500 CPU with a Swiss Army knife?? )))
Verge enters the chat...
"he's either really good... Or really bad...." Lol
@@Level1Techs Haha true! To be fair, swiss army knife is perfect for the job, as long as you know what you're doing )) Love the videos, cheers from UA!
This motherboard looks nice, but in my opinion the ASRock Rack "ROMED8-2T" is a better choice for a "workstation" if you use a lot of PCIe cards (not only GPUs).
You will pay 600 USD more than a Ryzen Threadripper 3970X (32-core, 64-thread), just for 8-channel RAM and lower boost? Why? I think it is best to wait and see the price of the Lenovo P620 with Threadripper Pro.
So where's the workstation?
Are there Windows 10 Pro drivers for this motherboard?
Because it seems that from the website, I can only find Win Server 2016 and 2019 drivers.
Linux or GTFO
@@hotstovejer
I do agree that Linux has its merits for certain applications (DL and ML being one good example)
But there will be many applications that only run on Windows that a workstation user will use, so I believe my question is a valid one.
@@SoranPryde I was kinda joking, but you could always use KVM, do PCIe passthrough of an NVMe drive and video card to a Windows VM, and still reap the benefits.
I used G34 on Win7 - Win8.1 with no problems.
Can you confirm SP3 works well on a Win10 desktop OS?
The push-on plastic screw replacements for M.2s are on some of the cheapest ASUS AMD B450 motherboards, Wendell - does it feel like a sturdy implementation on this motherboard? It feels kind of weak and flimsy on those cheap ASUS ones.
lulz the stories of orgies... gett'em Wendell!
Is there any reason why there are no Threadripper motherboards without a chipset, solely using the CPU package's SoC?
AMD doesn't allow it.
tommihommi1 Maybe "we" can start a crowdfunding campaign for such a reduced, bare "interface adapter" Threadripper (and Ryzen) motherboard?
I’d like “nothing” on it, just gimme aaall the PCI lanes from the CPU to various slots. You can use a dedicated PCIe USB controller AIC for keyboard BIOS access if the BIOS is in UEFI boot mode (just make it the CMOS clear default setting).
yussss LOTES socket. does Foxconn even MAKE server sockets? Foxconn was balls on my X399 Designare EX... >_
I counted, and not counting the optional 10gig LAN, I think you need 132 PCIE lanes for everything on the board.
I wanna see how Threadripper Pro performs before I buy anything ;)
Educating folks on the real advantages of big brain blower style coolers. 🧠
That VCORE will need serious cooling.
The fact that server motherboards and not "gaming" motherboards are airflow optimized is a crime.
Motherboards for towers, whether they're "gaming" or workstation boards, ARE optimized. For convection. So putting a server board in a tower case is a crap shoot.
@@timramich Kinda; ATX as a form factor technically isn't, which is why Intel tried BTX. I have some OEM BTX boards that were in Dell machines, and it actually makes sense - PCIe isn't directly below the CPU, etc. Idk if it even is "better" anymore. Idk, that's a whole separate convo I guess lol.
To be fair, gaming motherboards are usually in a case with plenty of room to add fans to overcome suboptimal airflow. Server boards are wedged in the smallest possible space they can occupy, surrounded by a dozen others.
@@morosis82 Doesn't have to be like that.
Never heard of nor seen oculink before.
I wanted to see some Benchmarks. Now we don't know how workstation software runs on Epyc.
It is a shame that Little Devil doesn't make an Epyc-compatible PC case with an integrated phase-change cooler.
can I install win 10 here?
"High-endurance SD cards" I didn't knew those existed. I thought they always use crappy flash and controllers in SD cards. What would be the cost difference? I'm tired of my phone not having storage.
They are marketed for dashcams and surveillance applications; ( www.newegg.com/p/pl?d=high+endurance+sd+card )
@@stephen1r2 Oh, cool. I might pick one up for my phone then. They are not even more expensive.
Anybody have data on how SMP scales on AMD and Intel?
I wonder when I will be able to get one used, extremely cheap.
Yeah, looked into that a year ago, and the answer was no.
Now that Milan has replaced Rome, the answer is *YES*.
Yes, but can it play Crysis?
The P processors are now more expensive than their 1P/2P counterparts :P
The server costs more than my life, and Wendell uses a Swiss Army knife to screw it together.
Can I run Ubuntu on this setup?
Will it support windows 95?
Can I play competetive level Fornite on this?
Does it run quake!
@Andrew Crews LOL
So can you mess with the PBO settings on Epyc CPUs?
I did not fully understand the part about the OS.
Can you install windows 10 on this thing? How?
Tx
Same as any other PC: via USB stick, network deploy, etc. What he meant is that AMD doesn't guarantee that this will run Windows 10 flawlessly, as these kinds of systems are meant to run server OSes. It's more of a cover-their-ass thing really, and if something doesn't work, you're on your own for all intents and purposes.
@@craigmurray4746 OK but basically windows should be able to find drivers for all this, right?
@@jouldalk In theory yes, so long as the vendors have gotten them into Windows Update. There might be some corner cases where drivers are only available for server based systems, but until someone actually installs Win 10 on the board it's hard to say. Things have come a long way since the old days of hardware market segmentation
Some cheap entry-level motherboards from ASUS and Gigabyte also have a clip-on M.2 SSD mount.
Could you daisy-chain a few of these together with OCuLink?