I don't care how slow it is. For me, there is something fun about having two actual CPUs on one board. Great video.
There was a time when dual CPUs were the only way to get true dual cores
You can disable 1 core per module in the BIOS; as a result, the turbo boost will be 3.3 GHz. You can run ParkControl to disable CPU core parking.
@LabRat Knatz if you get the right stuff, you can put an open-source BIOS on them and remove the backdoors.
That 3d mark score... Double nice.
Happy Gaming :D
nice
Nice
Nice
Nice Nice
Google's quantum computer recently went down... I hear someone tried running Crysis on it.
:D
It's like having 32 totally useless people to help you with your tasks. At least half of them would be loitering around doing nothing, and the others wouldn't give much help either.
Sounds about right when you consider that Bulldozer "modules" can't act independently.
Server CPUs are meant to run many tasks on one machine. My server takes full advantage of its cores through virtualization. Instead of having 1 Windows machine, why not 4?
@sajber kurajber
You can't really compare GPU cores to CPU cores. GPU cores are specific to certain tasks, and those tasks here are graphics rendering through APIs like OpenGL, DirectX, Vulkan, etc. Rendering is very heavily multithreaded and can therefore utilize the 2000+ CUDA cores modern GPUs have
I had to look at the video again. Thought you were explaining how the government works.
Lol
6 969 score on 3dmark 😉😏
Nice.
Nice
Nice
Nice
Nice
I'm so glad you make these videos so I can look at all the nonsense I want to fiddle around with but don't have the time or budget for. If you can run dual Opteron 6328s, I'd love to see the performance difference with half the cores but higher base/turbo frequencies. Clearly a setup that makes even less sense, because it looks like people sell those used for a significantly higher price.
They are Bulldozer, so any FX Bulldozer chip will have similar results depending on core count and clock speed.
LOL, this is exactly what I was gonna say.
Lol, not intended for gaming use. But the power usage info is useful.
So thanks a lot Phil.🙄👌
I'd still like this legacy setup 😍👍
@@IsmaelWensder unfortunately I beg to differ. An AM3+ Opteron in a consumer board, despite being only 8 cores, gets noticeably better results. I tested that theory using my current setup downclocked to the same speed as its 16-core cousin on a G34 board. The difference was up to 50% FPS in many games, including Mass Effect Andromeda for instance. In this case it seems more a limitation of the NUMA architecture and/or the server board overall. Granted, on a proper board, this Supermicro for instance, the results are somewhat better.
I have a pair of 6380s running and can confirm that the performance bump is significant, though not really worth it. It was a worthwhile distraction to get it tuned and working properly, but the fact is that the 2S motherboards are more expensive than the Intel alternative, the best on the platform offers only mediocre comparative performance, and it uses at least 20-30% extra power.
I run these and a pair of 2651 v2s, so I have first-hand experience.
6969 in Fire Strike.
Best PC ever, right?
Happy Gaming :D
nicce
I was your 69th liker on this comment.
Thank you, you make really interesting videos. Happy holidays.
I would love more dual CPU content
Heh, if he can find the olde Celerons on an ABIT BP6. Epic
I have a dual cpu pentium 4 server sitting around. I should try that sometime. 😂️
Phill man! I love your videos. You literally do all the things I want to do but lack the funds for. Thanks again!
I approve this setup!! Nice dual-CPU board with a lot of x16 slots. The issue is that it is just not cost-effective. But if you find it free or cheap, it is fun to play with
Victor Bart - RETRO Machines exactly. I stumbled upon a Dell 690 and it was worth the $8 for 16 GB of RAM. Dropping a GTX 1050 Ti in it and adding it to my gaming PC collection (I have five kids), this PC has far more games as options thanks to SSE3+ support
When you have a 32-core processor, but the game only uses 2 cores that run at 2.6 GHz. OOF
I love your videos Phil. Keep up the good work.
You know, the H8DGi-F has an OCNG5 BIOS that allows these processors to overclock. I have made 6276s hit 2.9-3 GHz on a decent cooler, so look into that if you are interested
Will that board run the 6300-series CPUs?
There was a good increase in performance with the newer generation, but not all boards take them.
If they do, though, the 6380 isn't that much more expensive than its 6200-series predecessor.
@@brrebrresen1367 these boards will run 6300-series processors with a BIOS update. The 6300 series with an overclock could hit 3.2-3.3 GHz all-core
Me too, same board, 3 GHz on all cores
Phil, I think you should consider doing benchmarks which include OBS streaming and video rendering. Most people don't care about content production but it's nice to know whether it would be an option if ever the need arose. With that many cores, it should be effortless to have OBS software encode in the background.
When you say most people, are you using a sample size of one?
I guarantee quite a few people are wondering if a 32-core machine is capable of performing properly in content creation workloads.
I definitely wish it was more common to run encoding benchmarks.
Love the videos with used server/oem parts keep it up :)
Another awesome project. I hope you had a good holiday.
Many games still do not scale well with multi-core CPUs, relying instead on high single-thread performance, so it might be worth trying this chip again in 1-2 years.
I mean, sure, it won't compare to the newest Ryzen/Intel chips, but if games become better optimised to use many cores, we might see some surprising results... or not :( .
Thanks Phil for trying out every idea I have ever had! Now, if only you'd get a VIA QuadCore or Zhaoxin system. ^_^ The old QuadCore boards are stupidly expensive for the performance, and I've never seen anything Zhaoxin for sale. Anyways, thanks for the awesome content Phil!
That Supermicro board looks great.
I sure didn’t see a dual-CPU video coming! Very nice! I have a pair of Athlon MP CPUs somewhere in my stash. Doubt I’d ever find a motherboard to try them with.
These dual cpu boards have always interested me. Keep up the great content man👍👍
And how about rendering, editing videos, and using this machine as a workbench?
Could you include a mini section on this in your future videos?
This would be awesome for a gaming room: 3-4 VMs with their own GPU, monitors, and peripherals.
Does this platform even support PCIe passthrough?
Fascinating experiment. Keep up the great videos, very enjoyable to watch.
Thank you Phil for putting the money down for one of those mobos just so we could see what's what. They weren't cheap when I looked at them
I find your videos to be very well done, interesting, and informative. I know that the focus of these videos is gaming on older hardware, but I ask you to please consider adding a Blender Benchmark score to your builds. Blender can use either the GPU OR pure CPU cores/threads to render, and it now has the built-in ability to use multiple networked machines in a render pool; many users build their own local render farms. It would be very helpful (and could add some new viewers) to include the Blender Benchmark for people that build machines to render, color grade, animate, and other uses in addition to gaming.
My current Windows 10 based Blender machine is a Asus B450 Mother Board, Ryzen 2700x, 32GB DDR4 2933 ram, 1TB 3D NVMe PCIe M.2 SSD, 550W power supply, and I repurposed an old GTX 1050 TI video card and old case from a previous system. The total cost was just around $780 USD.
At 4Ghz all core:
Cinebench R15.0 CPU score average of 1,785
CPU-Z multi core benchmark 4,882
Blender benchmark CPU (quick score) of 18:48.68
* As a note, Blender typically renders MUCH faster under Linux. Even just booting my machine into a Linux environment from an old USB2 thumb drive without any optimizing gives me:
Linux Blender benchmark CPU (quick score) of 14:22.63 - So whatever you get under Windows, you can expect far greater rendering speed on Linux.
Instead of building another machine like mine to farm rendering, it ‘appears’ that I could easily build 2 of the systems from this video for less and have a much higher possible rendering capacity. BUT that’s presuming an extrapolation based on the Cinebench and CPU-Z scores. It would be so much better to have a Blender Benchmark score for an ‘apples to apples’ comparison.
You don’t need any familiarity with Blender to use the Blender Benchmark; you just unzip it to a folder and run it. You can get the test from opendata.blender.org/
there is information on the benchmark itself and the database they keep
www.blender.org/news/introducing-blender-benchmark/
Thank you for the time to read and hear me out.
2:00 Oh, that's not a problem, just get a USB 3.0 PCIe adapter
And a PCIe SSD adapter card. You'd still need a SATA boot drive, but all the apps and data can enjoy killer storage performance.
@@geonerdor have the boot partition on the SATA SSD and then have it hand the OS over to the NVMe drive
Love these types of reviews, Phil. Even though these older processors aren't the highest performers, they can be very fun to experiment with and frankly, affordable under the right circumstances.
Hey Phil, shoutout from the States! I really digest and love the videos that you make. I just found an old Socket 754 board in my tech junkyard and I'm trying to find an old Athlon 64 for a decent price lol. I just like tinkering around with older hardware even if it's almost 20 years old. My first build was on Socket 754, a Sempron 2600 "SDA2600AIO2BO". That was well over a decade ago; it's crazy to say that.
At 7:27, was the CPU utilization frozen, or was the game just using constant resources the whole time?
Thank you for your videos. Your channel opened a new world for me. :)
Hmm definitely looks like a bug or glitch :)
I wish I could get my hands on this hardware so I could experiment with Vulkan-wrapping games and forcing thread utilisation to change.
I love these dual socket game benchmarks!
I'm sure many people would appreciate some more production-oriented benchmarks. It's a bit frustrating that you admit it is definitely not made for games, but still focus the bulk of your review and conclusion on them.
Maybe some Blender/Premiere/7-Zip tests?
Crysis is only optimised for up to 3 cores.
I had an i5-4670K and the game console reported only 3 cores were used instead of 4, running fairly smoothly at over 60 FPS with maximum details (paired with an RX 590).
The RivaTuner overlay reported the same.
I like what you do on your channel, thx !
I used to run a workstation with those exact processors doing video editing; it was slow compared to the Intel Xeons at the time. Those Bulldozer chips just weren't up to snuff.
Bulldozer + 2.6GHz in gaming, yeah, that didn't go well!
Who knew 32 shit cores were still going to be most shit! 🤣
My i5-2300 was only 2.8 GHz, with 16 gigs of RAM and a shoddy GTX 750 (not a Ti card), and the only game I could find it could play was Star Citizen (a 40+ gig game, so not surprising lol)
How do the current gen consoles manage it with 8 AMD cores at lower speeds than this? They don't have Zen cores, either.
@@wal81270 different CPUs can do a different amount of computation per clock cycle, and the rate at which data can be transferred to and from the CPU is also very important. This is a gross oversimplification, but that's the gist of it all.
@@notabagel - Are you serious with this? The 8-core CPUs used in the PS4 and Xbox One are Jaguar. This is a microarchitecture from 2013 that has nowhere near the IPC of Intel at the time, nor is it even comparable to AMD Zen chips now.
The original Threadripper.
I would've liked to see how this Opteron would have fared as a video editing PC. Premiere Pro, after all, can pretty much use as many cores as you can throw at it, yet games are usually optimized for up to quad cores only, so all those extra 12 cores / 28 threads won't give you any performance increase in games. Not without unofficial patches, at least.
Newer games these days are reaching for more resources than an old 4c/8t chip can provide. 6c/12t is quickly becoming the new baseline for such games, and 8c/16t+ chips are becoming more and more recommended for those that stream games on the same system they're played on (i.e. those that can't or won't spare the space or cash for a dual-system setup).
Premiere's sweet spot is 10-12 cores. The issue with these Opterons is the 2300 MHz base clock. It won't perform that great
@@victorbart Indeed, Adobe apps are still heavily skewed towards high clocks and high IPC, they're woefully short of having decent threaded optimisations as yet. This is especially true for Photoshop. Premiere is better than it once was, but has a long way to go. AE is a mixed bag, not helped by many plugins still running on old code.
@@ElNeroDiablo People have been saying this since Ryzen CPUs came out, but I still don't see it in benchmarks. The recent GamersNexus video about the 6700K, where it was almost on par with a Ryzen 3600 and the i5-9600K performed way better than both of them, shows that games still prioritize single-core performance over core count.
@@ElNeroDiablo I agree for streaming power. I have an AMD FX-8350 8-core CPU. When I used to stream, I would run the game on only 4 cores (cores 0 to 3) and OBS would run on the other 4 cores to handle the capture, though it used my old GTX 970 for the actual encoding. But I'm talking about just playing games. There may be a handful of games that push for 4+ cores, but the current market favors 4 cores. Give it a few years and I do feel more games will need more than 4 cores.
My point is, as things stand at the moment, having a system with more than 4 cores won't provide any performance boost (any boost will be negligible). The games that run as if in a PowerPoint show will be ones like Cities: Skylines, whose performance is mainly CPU-bound.
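For anyone who wants that game/OBS core split without redoing it in Task Manager every launch, here is a minimal Win32 sketch. It assumes an 8-core FX-style chip where masks 0x0F and 0xF0 mean cores 0-3 and 4-7; the PIDs are passed in by hand, so this is an illustration, not a polished tool.

```c
/* Pin a game process to cores 0-3 and an OBS process to cores 4-7.
 * Masks are assumptions for an 8-core chip; adjust to your topology. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

static int pin(DWORD pid, DWORD_PTR mask)
{
    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (!h)
        return -1;
    BOOL ok = SetProcessAffinityMask(h, mask);
    CloseHandle(h);
    return ok ? 0 : -1;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <game-pid> <obs-pid>\n", argv[0]);
        return 1;
    }
    if (pin((DWORD)atoi(argv[1]), 0x0F) ||   /* game -> cores 0-3 */
        pin((DWORD)atoi(argv[2]), 0xF0)) {   /* OBS  -> cores 4-7 */
        fprintf(stderr, "failed to set affinity (run as admin?)\n");
        return 1;
    }
    puts("game pinned to cores 0-3, OBS to cores 4-7");
    return 0;
}
```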
Great video, great system!! (I'm talking from my hardware collector perspective, of course, not performance wise :-P).
I'm still rocking Xeons from 2012 and I have to admit I'm still really happy with them. They still work fast, and with new boards coming out where you can overclock them, there's lots of potential.
I wish there was some overclocking available on the AMD platform, because I'm sure it's possible.
I mean, I actually do work on my machine, and every once in a while my son comes over and games on it. Even though I have an older Dell OEM T3500, the sucker really kicks ass with a 1070 Founders Edition GPU.
There has to be a killer app for your next test run where you can actually use all those cores to do some real work, something that would be fun around the house or fun to actually do.
Uh oh. Phil is messing with dual socket boards... NO ONE MENTION THE X79 DUAL SOCKET MOBOS FROM CHINA, HE WILL GRAB ONE!
André Pedreiro no, please do mention them, with dual Xeon E5-2687W v2s
@@MarshallSambell I already asked Phil about them. I bought one (a Kllisre model), and I'm currently testing it with two crazy-cheap Xeon 2640s and 64 GB of 1866 RAM. It scores 1426 in CB R15 and 3041 in CB R20 (similar to a Ryzen 1700). I'll test later with the more potent 2650 v2s and 2680 v2s; should be interesting. Tried GTA V for a while last night, it runs very well. Running further benchmarks today.
You're a wicked, evil person. :) I just got a P9X79 and wound up with a spare E5-1620v2 CPU, and was wondering what to do with it... stop giving me ideas! :D
Was going to swap out the i7-4820K for the Xeon so I could use server RAM, but then enough desktop RAM came my way for about the same price and now I can't be arsed to swap it (since the Xeon has the same clockspeed but not quite the performance). So here it lays feeling lonely and forlorn... maybe I should get it a friend...
OOPSIE
@@Reziac either go with the 1650 v2 or the 4930K. Both are six-cores and both like to be overclocked. I'm running 2 X79 boards: the P9X79 with 1600 MHz DDR3 and a 4930K overclocked to 4.6 GHz, and a Lenovo S30 X79 workstation board with 64 GB of 1600 MHz ECC DDR3 and a 2890 Xeon. The consumer board has a GTX 1080, which runs everything at 120 FPS, and the workstation has an RX 580 8 GB, which also runs silky smooth. Just love the X79 boards. I still have another S30 board in a box, and I have an Apple G5 case I still need to mod; I want to put the spare X79 board in there and make a cheap LAN party PC
Thanks for the great video!
I can't stop watching your vids
I wasn't expecting to see this video so soon. You work fast :D
Did you try the modded OC BIOS I sent you? Or will that be another video?
Totally worth it as a home server. I'm planning to get a quad-socket 16-core version. For my projects that will be more than enough!
Did you count the power cost? That's one reason why I'm switching everything to Raspberry Pis.
Do you plan on testing Opteron 6300-series processors, Phil? They should be a good bit faster, as they are at least based on Piledriver and not Bulldozer. The clock speeds are also higher.
I like how you give tips on improving each game’s performance in case anyone goes out and replicates this build. :P
Boards like that also often have only PCIe x8 slots. That will still work fine for a lot of people, but keep it in mind. And some of those boards have very few slots.
It would be nice to see some comparison renders against a more modern machine, to see whether the number of cores is useful in software that can actually use them.
I've been looking for a decent review... looking at doing a similar build for a server/workstation hybrid... need to free up my main PC
It's showing them as 16 cores and 32 logical threads because the Windows 8+ AMD CPU driver was updated to "try" to avoid placing load on the same module, since there is a 50% penalty for doing so (it basically treats the CPU like HT/SMT).
There should be an option in the BIOS to stabilise performance by setting the module-to-core ratio to 1M/1C instead of 1M/2C. This should improve game performance by making sure games/programs only use 1 core per module so resources are not shared (it should stop inconsistent FPS).
Not sure if this dual-socket Bulldozer setup exposes NUMA nodes (in Task Manager's performance tab, around where the CPU count is shown, it may state the number of NUMA nodes). If it does, the other CPU was likely doing nothing, and the dual-socket test is pointless in general: I don't know of any game that is NUMA-aware, so only one socket and one bank of RAM will be used
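If you would rather test that 1-core-per-module idea from software instead of the BIOS, here is a rough sketch. It assumes Windows enumerates the two cores of a Bulldozer module as adjacent logical CPUs (0/1, 2/3, ...), which is an assumption worth verifying on your own board.

```c
/* Build an affinity mask using every other logical CPU, i.e. one core
 * per Bulldozer module, assuming modules map to adjacent CPU pairs. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    DWORD_PTR mask = 0;
    for (DWORD i = 0; i < si.dwNumberOfProcessors; i += 2)
        mask |= (DWORD_PTR)1 << i;   /* take the first CPU of each pair */

    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                GetLastError());
        return 1;
    }
    printf("Running on mask 0x%llx (one logical CPU per module)\n",
           (unsigned long long)mask);
    return 0;
}
```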
Imagine how hot the power delivery components on the motherboard will get.
Watching... hoping for Cinebench numbers... and you delivered. Awesome.
This may have been a decent rig for a web or application server.
I'm really loving your channel. I live in South America, where most parts are so expensive just because they have "Core i" at the beginning lol. No joke, here in Chile a 2nd-gen i7 goes for 100 bucks, and 60 bucks for a 2nd-gen i5
Quality content right there
It would be interesting to see how a couple of other apps run, such as Blender or some compression/decompression software, for a basic workload indication. Cool tests on old hardware, keep up the good work.
From the overlay on the left, during the games it only uses 4 of its 32 cores/threads... So yeah, at such a clock speed it makes no difference to have dual CPUs if the games won't use all the threads.
Such a platform is really made for large amounts of multi-threaded work, like running a cluster of JVMs each doing their garbage collection on 8 threads, etc. Or maybe video rendering or other batch jobs (as already mentioned in other comments), insofar as the rendering engines are designed for that many cores.
You could also VMware the system and run multiple OS copies doing things simultaneously.
I can think of many uses for this setup, but single-station gaming is not one of them.
Server boards are beautiful.
Imagine a dual-socket Threadripper. You would not see anything in the benchmarked games because of all the CPU performance graphs
Gonna take a lot of Haswell 4-core CPUs bolted together for Intel to compete... 14nm++++++++++++++++++++++ FTW! Still, as an Intel premium product, it's gonna cost a motza.
AMD does have Epyc, which is basically server-grade Threadripper. Though that many cores and threads hardly do anything in games right now. The only reasons these Opterons were doing badly are how the architecture works, some games simply not knowing how to use dual CPUs, and of course the latency of communication between the CPUs and RAM. Heck, my 5960X is still more than enough for games. I also have a Ryzen 7 1700 that hardly ever gets pegged in any game workload, though its IPC is a bit lacking. Right now the sweet spot seems to be 6 cores / 12 threads, though I expect (and hope) that to change soon. I still plan on getting the 3900X or 3950X.
Dual 64-core Threadrippers, each on a 480 mm custom loop, and two Titan RTXs on a custom loop... with 128 GB of DDR4
Especially when the 1920x is only $199 now.
@@dfkdiek518 I only have a single Threadripper 1920X, 64 GB RAM, and dual 1080 Tis, all on a custom loop with a 480 mm radiator... but clearly I was thinking along the same lines. It's all in a Thermaltake P90 case.
RPCS3, the PlayStation 3 emulator, uses all the cores you can give it... I wonder how it would run Gran Turismo 6 on this setup.
Those 32 cores must be awesome for an emulator if it utilizes at least 16
What are you doing?!! 😂
For crysis sake, do at least some productivity tests like blender or vray, it might work for some people in that scenario, since you bothered buying and assembling all that stuff 😉
But us, feisty fellas of the unwashed internets. 😂Always troll about de Crying this n that.
So, will it run, CrySiS? 😍👍🙄❤️
For real! I can't believe he set up a server board to test games only.
Yeah, I'd like to always have CPU-Z numbers just to have one that's completely consistent and not dependent on anything but the CPU.
And yes, I think it would be good to have some productivity benchmarks too, because there are an awful lot of us out here who need that kind of performance, but couldn't care less about gaming optimization (often not the same thing).
@@mebibyte9347 yeah, I think it might fare better if he tried putting all those cores, threads, and capacious RAM to the max by building an OS... using the ultimate Linux building script. How long would it take to build one complete distribution along with its supporting repository system?
These work great for distributed computing needs. Not everything is about gaming. And all play with no work makes you a broke-ass bum.
Nice review... I had one of these CPUs back then, but I didn't have the mainboard until now... so I'm curious about the performance.
It's interesting to see the new AAA titles using so many cores; pretty amazing.
Yea. Maybe I'm missing something, but a steady ~60FPS seems fine to me.
Your channel is cool.
It may be a difficult find, but I've always wondered about a dual Tualatin build.
There's actually a channel on YouTube with semi-modern games on dual Tualatin and AGP graphics, like the Radeon 3850.
MrOfforDef is the channel; interesting stuff he has, tens of games tested on dual-CPU boards with AGP graphics.
You can overclock these Supermicro G34 boards with a custom BIOS. I have a quad-socket board with 6378s. Got them up to 3 GHz.
I agree with your assessment 100%, however I still really enjoyed the video. Thank you. 👍🙂👍
Thanks for this, it answered a couple of questions I had myself. It almost seems to be a lot of bottlenecking due to the NUMA architecture after all. There is a performance issue with switching threads across nodes in highly threaded apps. Could you try using "CorePrio" and dissociating the nodes to see if it improves things? It is an issue under Threadripper as well, and I did see some gains with a single 6380.
bitsum.com/portfolio/coreprio/
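A much cruder alternative to CorePrio, if you just want to see whether cross-node thread migration is the problem, is to confine the whole process to one NUMA node. A minimal sketch using the Win32 NUMA calls; node 0 is an arbitrary choice for illustration:

```c
/* Confine the current process to NUMA node 0 so its threads stop
 * migrating across sockets. Crude compared to CorePrio, but useful
 * as a quick A/B test. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    ULONG highest = 0;
    ULONGLONG nodeMask = 0;

    if (!GetNumaHighestNodeNumber(&highest))
        return 1;
    printf("System reports %lu NUMA node(s)\n", highest + 1);

    if (!GetNumaNodeProcessorMask(0, &nodeMask))
        return 1;

    if (!SetProcessAffinityMask(GetCurrentProcess(), (DWORD_PTR)nodeMask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                GetLastError());
        return 1;
    }
    printf("Pinned to node 0, mask 0x%llx\n",
           (unsigned long long)nodeMask);
    return 0;
}
```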
I believe there is a modified BIOS for some Supermicro G34 motherboards that allows you to overclock. Have you ever tried overclocking these Opterons?
Love the CPU usage for Shadow of the Tomb Raider at 8:12, very cool
These systems run just fine (even 3D games) when split in two with the "ASTER Pro" software.
Phil should try the trial version some time.
Perneta, what is that software exactly?
What exactly happens to a system when it is "split in two"?
This board would be awesome for a multi-head virtualized XP retro machine (like LTT's 7 Gamers, 1 CPU).
Hi @PhilsComputerLab, it is not simply down to clock speed, but much more to architecture. If I downclock my Ryzen 5 1600 from 3.6 GHz to 2.1 GHz, I roughly match my FX 8350 (4.2 GHz), which in turn has roughly twice the IPC of this Opteron of yours.
It's sad that most games rely so much on single-core performance, but it's also slightly understandable given how difficult multicore programming was in the past.
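As a back-of-the-envelope reading of that comparison, treating per-core throughput as IPC times clock (my framing, using only the numbers stated above):

```latex
\[
\text{perf} \propto \text{IPC} \times f
\quad\Rightarrow\quad
\frac{\text{IPC}_{\text{Zen}}}{\text{IPC}_{\text{Piledriver}}}
\approx \frac{4.2\ \text{GHz}}{2.1\ \text{GHz}} = 2
\]
```

That is, matching a 4.2 GHz Piledriver at 2.1 GHz implies roughly double the per-clock throughput.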
Hello, since you are visiting dual-CPU setups, you might want to look at the Asus Z8NA-D6 combined with 2 Xeon X5675s.
You can find the mainboard on AliExpress for about US$75, and you might have the Xeons already.
I just found something that overwhelmed my 32-core Opteron build, and it is 4K video rendering; it completely sucks up all available resources on both CPUs.
Do you have an issue where the PCI Express lanes for the graphics card only show as x8 after a power cut? I used to have 2 of this motherboard and both had it.
Could you show us how good it is at video rendering?
@ I can, but it will not be soon. The last time I used that machine, it took around 47 minutes for 32 minutes of 4K30p footage (with some 2560x1600 footage in it) exported from Adobe Premiere Pro CC 2017.
@@sophustranquillitastv4468 thanks for the information, that's what I wanted to know... not gonna invest in a machine like this then
Great vid. I have messed with the older Opteron CPUs in the past and found "Bulldozer" to be not very fast, but slow and reliable. I have a few servers running them with basic stuff. My current gaming/video/streaming rig is a dual Xeon on an Intel board... certain models are great for this kind of combined workload, and they are a bit of fun to play around with. If you search around, you can find them pretty cheap.
Kinda sucks for games, but what about rendering videos and 3D apps? These 2 Opterons would have a shot!
My Ryzen 2 beats this dual CPU setup! So glad I upgraded. My old CPU was a Bulldozer. Would just die trying to play VRChat. Other games seemed better. But now EVERYTHING runs smoothly.
It was really amazing!!
Cool to look at dual socket!! Need more!!! Like LGA 1366!!
You can have this as a quad-socket setup too; try that next :)
The core usage on Apex Legends and TR! If anything, the abundance of cores shows how many games are now using more than 8 threads! (Then look at Strange Brigade only using two...) Very interesting video Phil, thanks. This setup is not useful for most people anymore considering its cost, but at least we can come away knowing, now more than ever, that modern games need more cores. Even 6C/12T is already looking rather mid-range at this point.
Thanks for taking the trouble to do this, Phil.
@5:26 F1 2018? Looks and sounds like F1 2008.
Very good video 👍
Nice video. How did you get all CPU percentages on the screen?
Nice looking board for the age. How's it stack up against a dual socket 2011 Xeon board from the same era?
Another great video, but I question the 130 W idle draw; that seems way too high.
Also you could investigate using Nvidia Frameview to really test the FPS - it gives important frame time data which translates to how smooth the game feels. Also some video encoding benchmarks would be nice for other streamers and such. Perhaps compare to buying a modern encoding system or whatever.
Idk why, but it's so weird seeing you owning modern GPUs
Great vid, very informative.
Though it's a bit weird to see you disassembling the PC as you talk instead of assembling
A very fine demonstration. I really appreciate the objectivity of your analyses. They are a precious guide should we ever feel like trying such a configuration. Thanks to your videos, we get, as always, a real sense of what we could achieve, both in technical results and financially. Happy end-of-year holidays 🎄🎁✨🎈🧨🎃 to you, and keep it up. Thanks from France!
I enjoy the videos you make with using previous generation hardware and see what is good for gaming on a budget, I'm a budget conscious gamer as well. Tho I don't have the extra funds to do what you are doing. However, is there a chance you could add something like League of Legends or Diablo 3 to your testing? Just to give a different perspective for those that don't play the first person oriented titles. Thanks and keep up the great content!
What about quad socket test?
This setup is about 60% as good as my 3600X; it's crazy when you think this was from about 9 years ago and is still capable. It actually matches the R15 score
It's really not even close. The only type of workload this setup would work well with is heavily threaded integer tasks. Performance is halved for any floating-point task, since each CPU has half as many FPUs as integer cores.
As shown in the video, only one game was able to use all 32 threads. Most of the games here did use ~16 threads, but there was no benefit, as shown by the mediocre frame rates. With how cheap older Ryzen parts are now, it doesn't really make sense to build this heat monster. People are dumping 2700X chips by the truckload for the newer 3000-series parts, and I've seen them as low as $150.
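For reference, the module arithmetic behind that FPU point: each Bulldozer/Piledriver module pairs two integer cores with one shared floating-point unit, so for dual 16-core 6380s:

```latex
\[
2\ \text{sockets} \times 8\ \text{modules} = 16\ \text{modules}
\;\Rightarrow\;
32\ \text{integer cores},\quad 16\ \text{shared FPUs}
\]
```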
Really, a 3600X is only running in the 1300s in Cinebench R15? I would be really disappointed with that. My R7 1700 turns in 1840 cb in R15 with a bit of an overclock. I would have expected the 3600X to be north of 1500.
@@mpeugeot Cinebench is a synthetic benchmark representative only of heavily threaded tasks. The numbers it gives you only matter in those types of workloads, which games are not.
Games prefer fewer, higher-clocked, more efficient cores, not tons of slow, inefficient ones. The games run in this video show this: most struggle to hold 60 FPS at 1080p and don't even utilize all the available cores.
@@GGigabiteM yes, I understand that, but his 3600x is still putting up some pretty lackluster numbers. Core for core and clock for clock, the 3600x should be faster on cinebench than my R7 1700. So an R5-1600 overclocked to 4.1 GHz would be expected to get approximately 1360-1380 points on cinebench R15, a 3600 should be well north of 1500, and the 3600x above that.
@@mpeugeot Maybe there's something up with the memory, Ryzen CPUs are super sensitive to memory speed.
Interesting video 👍
For perspective: my Ryzen 3600X (6c/12t) OC'd to 4.2 GHz gets around 1600 in Cinebench R15. I would not have guessed that it was faster. Good video.
Would love to know how it would handle Premiere and After Effects... Thanks for the video!
You absolute legend.
You knew it would suck at gaming, and yet you splurged on another CPU and a ludicrously expensive motherboard!
We truly do not deserve you and your excellence. Your sacrifice is very much appreciated, and anyone who criticises you this time can swizzle on a proverbial!