You can disable 1 core per module in the BIOS; as a result the turbo boost will be 3.3 GHz. You can run ParkControl to disable CPU parking.
@LabRat Knatz if you get the right stuff you can put an open-source BIOS on them and remove the backdoors.
I don't care how slow it is. For me, there is something fun about owning actual dual CPUs on one board. Great video.
There was a time when dual CPUs were the only way to have true dual cores.
Google's Quantum computer recently went down... I hear someone tried running Crysis on it.
:D
It's like having 32 totally useless people to help you with your tasks. At least half of them would be loitering around doing nothing, and the others wouldn't give much help either.
Sounds about right when you consider that Bulldozer "modules" can't act independently.
Server CPUs are meant to run many tasks on one machine. My server takes full advantage of its cores through virtualization. Instead of having 1 Windows machine, why not 4?
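A minimal sketch of that idea with the libvirt Python bindings, assuming a KVM host; the guest names and the 8-cores-per-guest split are hypothetical, not anyone's actual setup:

import libvirt  # pip install libvirt-python

conn = libvirt.open("qemu:///system")
for i in range(4):
    dom = conn.lookupByName(f"win10-{i}")      # hypothetical guest names
    block = set(range(i * 8, (i + 1) * 8))     # cores 0-7, 8-15, 16-23, 24-31
    for vcpu in range(8):
        # cpumap is one boolean per host CPU: True where this vCPU may run
        dom.pinVcpu(vcpu, tuple(cpu in block for cpu in range(32)))

Pinning each guest to its own block keeps the four "machines" from fighting over the same Bulldozer modules.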
@sajber kurajber
You can't really compare GPU cores to CPU cores. GPU cores are specific to certain "tasks", and the "tasks" here are graphics rendering through APIs like OpenGL, DirectX, Vulkan etc. Those workloads are very heavily multithreaded and hence can utilize the 2000+ CUDA cores modern GPUs have.
I had to look at the video again. Thought you were explaining how the government works.
Lol
6969 in Fire Strike.
Best PC ever, right?
Happy Gaming :D
nicce
I was your 69th liker on this comment.
6 969 score on 3dmark 😉😏
Nice.
When you have a 32-core processor, but the game only uses 2 cores that run at 2.6 GHz. OOF
That 3d mark score... Double nice.
Happy Gaming :D
Nice.
Many games still do not scale well with multi-core CPUs, expecting high single-thread performance instead, so it might be worth trying this chip again in 1-2 years.
I mean, sure, it won't compare to the newest Ryzen/Intel chips, but if games get more optimised to use many cores, we might see some surprising results... or not :( .
Bulldozer + 2.6GHz in gaming, yeah, that didn't go well!
Who knew 32 shit cores would still be mostly shit! 🤣
My i5 2300 was only 2.8 GHz with 16 gigs of RAM and a shoddy GTX 750 (not a Ti card), and the only game I found it couldn't play was Star Citizen (for a 40+ gig game, not surprising lol).
How do the current gen consoles manage it with 8 AMD cores at lower speeds than this? They don't have Zen cores, either.
@@wal81270 different CPUs can do a different amount of computation per clock cycle, and the rate at which data can be transferred to and from the CPU is also very important. This is a gross oversimplification, but that's the gist of it all.
@@notabagel - Are you serious with this? The 8-core CPUs used in the PS4 and Xbox One are Jaguar. That is a microarchitecture from 2013 that has nowhere near the IPC of Intel at the time, nor is it even comparable to AMD Zen chips now.
I'm so glad you make these videos so I can look at all the nonsense I want to fiddle around with but don't have the time or budget to do. If you can run dual Opteron 6328s I'd love to see the performance difference at half the cores but at higher base/turbo frequencies. Clearly a set-up that makes even less sense because it looks like people sell those used for a significantly higher price.
They are Bulldozer, so any FX Bulldozer chip will have similar results depending on the number of cores and clock speed.
LOL, this is exactly what I was gonna say.
Lol, not intended for gaming use. But the power usage info is useful.
So thanks a lot Phil.🙄👌
I'd still like this legacy setup 😍👍
@@IsmaelWensder unfortunately I beg to differ. An AM3+ Opteron in a consumer board, despite being only 8 cores, gets fairly better results. I tested that theory using my current setup downclocked to the same speed as its 16-core cousin on a G34 board. It was a difference of up to 50% FPS in many games, including Mass Effect Andromeda for instance. In this case it seems more a limitation of it being a NUMA architecture and/or an overall server board limitation. Granted, on a proper board, this Supermicro for instance, the results are somewhat better.
I have a pair of 6380's running and can confirm that the performance bump is significant - not really worth it though. It was a worthwhile distraction to get it tuned and working properly, but the fact is that the 2S motherboards are more expensive than the Intel alternative, the best on the platform offers only mediocre comparative performance, and it uses at least 20-30% extra power.
I run these and a pair of 2651 v2s, so I have first-hand experience.
You know, the H8DGi-F has an OCNG5 BIOS that allows these processors to overclock. I have made 6276s hit 2.9-3 GHz on a decent cooler, so look into that if you are interested.
Will that board run the 6300 series CPUs?
There was a good increase in performance with the new generation, but not all boards take them.
But if they do, the 6380 isn't that much more expensive than its 6200-series predecessor.
@@brrebrresen1367 these boards will run 6300 series processors with a BIOS update, and the 6300 series with an overclock could hit 3.2-3.3 GHz all-core.
Me too, same board, 3 GHz all cores.
2:00 oh that's not a problem, just get a USB 3.0 PCIe adapter.
And a PCIe SSD adapter card. You'd still need a SATA boot drive, but all the apps and data can enjoy killer storage performance.
It's showing them as 16 cores and 32 logical threads because the Windows 8+ AMD CPU driver was updated to "try" to place load away from the same module, since there is a 50% penalty for sharing one (it basically treats the CPU like HT/SMT).
There should be an option in the BIOS to stabilise performance by setting the module-to-core ratio to 1m/1c instead of 1m/2c. This should improve game performance by making sure games/programs only use 1 core per module so resources are not being shared (it should stop inconsistent FPS) - see the affinity sketch below for a way to test the same idea from within Windows.
Not sure if this Bulldozer dual-socket setup uses NUMA nodes, in which case the other CPU would likely be doing nothing (in Task Manager's performance tab, around where the CPU count is shown, it may state the NUMA node count). If so, a dual-socket test is pointless in general, as I don't think any game is NUMA-aware, so only one socket and only 1 bank of RAM will be used.
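The affinity sketch mentioned above, using Python's psutil (the process name is hypothetical). On this board Windows pairs logical CPUs (0,1), (2,3), ... within one module, so pinning to every second logical CPU approximates the 1m/1c BIOS setting without rebooting:

import psutil  # pip install psutil

# One logical CPU per Bulldozer module: every second logical CPU.
one_per_module = list(range(0, psutil.cpu_count(logical=True), 2))

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "game.exe":    # hypothetical game executable
        proc.cpu_affinity(one_per_module)  # no shared module resources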
Imagine how hot the power-delivery components on the motherboard will get.
Phil, I think you should consider doing benchmarks which include OBS streaming and video rendering. Most people don't care about content production but it's nice to know whether it would be an option if ever the need arose. With that many cores, it should be effortless to have OBS software encode in the background.
When you say most people, are you using a sample size of one?
I guarantee quite a few people are wondering if a 32-core machine is capable of performing properly in content creation workloads.
I definitely wish it was more common to run encoding benchmarks.
And how about rendering, editing videos, and using this machine as a workbench?
Could you include this mini section of this in your future videos?
Crysis is only optimised for up to 3 cores.
I had an i5-4670K and the game console reported only 3 cores were used instead of 4, running fairly smoothly at over 60 FPS with maximum details (paired with an RX 590).
The RivaTuner overlay reports the same.
I would love more dual CPU content
Heh, if he can find the olde Celerons on an ABIT BP6. Epic.
I have a dual cpu pentium 4 server sitting around. I should try that sometime. 😂️
I approve of this setup!! A nice dual-CPU board with a lot of 16x slots. The issue is that it is just not cost-effective. But if you find it free or cheap, it is fun to play with.
Victor Bart - RETRO Machines exactly. I stumbled upon a Dell 690 and it was worth the $8 for 16 GB of RAM; I dropped a GTX 1050 Ti in it and added it to my gaming PC collection (I have five kids), and this PC has far more games as options due to SSE3+ support.
I wish I could get my hands on this hardware so I can experiment with vulkan wrapping games and forcing thread utilisation to change.
I used to run a workstation with those exact processors doing video editing; it was slow compared to the Intel Xeons at the time. Those Bulldozer chips just weren't up to snuff.
This would be awesome for a gaming room: 3-4 VMs with their own GPU, monitors, and peripherals.
Does this platform even support PCIe passthrough?
Thank you, you make really interesting videos. Happy holidays.
That Supermicro board looks great.
Uh oh. Phil is messing with dual socket boards... NO ONE MENTION THE X79 DUAL SOCKET MOBOS FROM CHINA, HE WILL GRAB ONE!
André Pedreiro no, please do mention them with dual Xeon e5 2687w v2
@@MarshallSambell I already asked Phil about them. I bought one (Kllisre model), currently testing with two crazy cheap XEON 2640s and 64GB/1866 RAM. Scores 1426 for CB R15, 3041 for CB R20 (similar to a Ryzen 1700). Testing later with more potent 2650 v2s and 2680 v2s, should be interesting. Tried GTA V for a while last night, runs very well. Running further benchmarks today.
You're a wicked, evil person. :) I just got a P9X79 and wound up with a spare E5-1620v2 CPU, and was wondering what to do with it... stop giving me ideas! :D
Was going to swap out the i7-4820K for the Xeon so I could use server RAM, but then enough desktop RAM came my way for about the same price and now I can't be arsed to swap it (since the Xeon has the same clockspeed but not quite the performance). So here it lays feeling lonely and forlorn... maybe I should get it a friend...
OOPSIE
@@Reziac either go with the 1650 v2 or the 4930K. It's a six-core and both like to be overclocked. I'm running 2 X79 boards: the P9X79 with 1600 MHz DDR3 and a 4930K overclocked to 4.6 GHz, and a Lenovo S30 X79 workstation board with 64 GB of 1600 MHz ECC DDR3 and a 2890 Xeon. The consumer board has a GTX 1080, which runs everything at 120 FPS, and the workstation has an RX 580 8 GB, which also runs silky smooth. Just love the X79 boards. I still have another S30 board in a box, and I have an Apple G5 case I still need to mod; I want to put the spare X79 board in there and make it a cheap LAN party PC.
I just found something that overwhelmed my 32-core Opteron build, and it is 4K video rendering; it sucks up all the resources available on both CPUs completely.
Do you have an issue where the PCI-Express lanes for the graphics card only show as x8 after a power cut? I used to have 2 of this motherboard and both had it.
Could you show us how good it is at video rendering?
@ I can, but it will not be soon. Last time I used that machine, it took around 47 minutes for 32 minutes of 4K30p footage (with some 2560x1600 footage in it) exported from Adobe Premiere Pro CC 2017.
@@sophustranquillitastv4468 thanks for the information, that's what I wanted to know... not gonna invest in a machine like this then.
I would've liked to see how this Opteron would've fared as a video editing PC. Premiere Pro, after all, can pretty much use as many cores as you can throw at it, yet games are usually optimized for up to quad cores only, so all those extra 12 cores / 28 threads won't give you any performance increase in games. Not without unofficial patches at least.
Newer games these days are reaching for more resources than an old 4c/8t chip can provide, with 6c/12t quickly becoming the new baseline for such games, and 8c/16t+ chips becoming more and more recommended for those who stream games on the same system they're played on (i.e. for those who can't or won't find the space or cash for a dual-system setup).
Premiere's sweet spot is 10-12 cores. The issue with these Opterons is the 2300 MHz base clock. It won't perform that great.
@@victorbart Indeed, Adobe apps are still heavily skewed towards high clocks and high IPC, they're woefully short of having decent threaded optimisations as yet. This is especially true for Photoshop. Premiere is better than it once was, but has a long way to go. AE is a mixed bag, not helped by many plugins still running on old code.
@@ElNeroDiablo People have been saying this since Ryzen CPUs came out, but I still don't see it in benchmarks. The recent GamersNexus video about the 6700K, where it was almost on par with a Ryzen 3600 and the i5-9600K performed way better than both of them, shows that games still prioritize single-core performance over number of cores.
@@ElNeroDiablo I agree for streaming power. I have an AMD FX 8350 8-core CPU. When I used to stream, I would run a game only on 4 cores (cores 0 to 3) and OBS would run on the other 4 cores to handle the capture, but I would use my old GTX 970 to handle the actual encoding. But I'm talking about just playing the games. There may be a handful of games that push for 4+ cores, but the current market favors 4 cores. Give it a few years and I do feel more games will need more than 4 cores.
My point is, as we stand at the moment, having a system with more than 4 cores won't provide any performance boost (any boost will be negligible). The games that will run as if in a PowerPoint show will be those like Cities: Skylines, whose performance is mainly CPU-bound.
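For anyone wanting to replicate that core split, a small sketch of the affinity-mask arithmetic; Windows' "start /affinity" takes a hex bitmask, and the executable names here are hypothetical:

# Affinity masks are bitmasks: bit N set = logical CPU N may be used.
game_mask = sum(1 << cpu for cpu in range(0, 4))  # cores 0-3 -> 0xF
obs_mask = sum(1 << cpu for cpu in range(4, 8))   # cores 4-7 -> 0xF0
print(f"start /affinity {game_mask:X} game.exe")  # start /affinity F game.exe
print(f"start /affinity {obs_mask:X} obs64.exe")  # start /affinity F0 obs64.exe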
Phill man! I love your videos. You literally do all the things I want to do but lack the funds for. Thanks again!
boards like that also often have just 8x PCIe. That will still work fine for a lot of people but keep that in mind. Then some of those boards often have very few slots.
From the overlay on the left during the games, it only uses 4 of its 32 cores/threads... So yeah, with such a clock speed it makes no difference to have dual CPUs if the games won't use all the threads.
Such a platform is really made for heavily multithreaded stuff, like running a cluster of JVMs doing all their garbage collection on 8 threads each, etc. Or maybe video rendering or other batch work like that (as already mentioned in other comments), as long as the rendering engines are designed for that many cores.
You could also VMWare the system, and run multiple OS copies doing stuff simultaneously.
I can think of many uses for this set-up but single station gaming is not one of them.
Imagine a dual-socket Threadripper. You would not see anything in the benchmarked games because of all the CPU performance graphs.
Gonna take a lot of Haswell 4-core CPUs bolted together for Intel to compete... 14nm++++++++++++++++++++++ FTW! Still, as an Intel premium product, it's gonna cost a motza.
AMD does have Epyc, basically server-grade Threadripper. Though that many cores and threads hardly do anything in games right now; the only reason these Opterons were doing badly is how the architecture works, some games simply not knowing how to use dual CPUs, and of course the latency of communicating between CPUs and RAM. Heck, my 5960X is still more than enough for games. I also have a Ryzen 7 1700 that hardly ever gets pegged in any game workloads, though IPC is a bit lacking. Right now the sweet spot seems to be 6 cores 12 threads, though I expect that to change soon, I hope. I still plan on getting the 3900X or 3950X.
Dual 64-core Threadrippers, each on a 480 mm custom loop, and two Titan RTXs on a custom loop... with 128 GB DDR4.
Especially when the 1920x is only $199 now.
@@dfkdiek518 I only have a single Threadripper 1920X, 64 GB RAM, dual 1080 Tis, all on a custom loop with a 480 mm radiator... But clearly I was thinking along the same lines. It's all in a Thermaltake P90 case.
Thank you Phil for putting the money down for one of those mobos just so we could see what's what; they aren't cheap from what I saw.
I'm sure many people would appreciate some more production-oriented benchmarks. It's a bit frustrating that you admit it is definitely not made for games, but you still focus the bulk of your review and conclusion on them.
Maybe some Blender/Premiere/7-Zip tests?
RPCS3, the PlayStation 3 emulator, uses all the cores you can give it... I wonder how it would run Gran Turismo 6 on this setup.
Those 32 cores must be awesome for an emulator if it utilizes at least 16.
Thanks Phil for trying out every idea I have ever had! Now, if only you'd get a VIA QuadCore or Zhaoxin system. ^_^ The old QuadCore boards are stupidly expensive for the performance, and I've never seen anything Zhaoxin for sale. Anyways, thanks for the awesome content Phil!
I sure didn’t see a dual CPU video coming! Very nice! I have a pair of AthlonMP CPU’s somewhere in my stash. Doubt I’d ever find a motherboard to try them with.
I find your videos to be very well done, interesting, and informative. I know that the focus of these videos is gaming on older hardware, but I ask you to please consider adding a Blender Benchmark score to your builds. Blender can use either the GPU - OR - pure CPU cores/threads to render, and now has the built-in ability to use multiple networked machines in a render pool; there are many users that build their own local render farms. It would be very helpful (and could add some new viewers) to include the Blender Benchmark for people that build machines to render, color grade, animate and other uses in addition to gaming.
My current Windows 10 based Blender machine is an Asus B450 motherboard, Ryzen 2700X, 32 GB DDR4 2933 RAM, 1 TB 3D NVMe PCIe M.2 SSD, 550 W power supply, and I repurposed an old GTX 1050 Ti video card and old case from a previous system. The total cost was just around $780 USD.
At 4 GHz all core:
Cinebench R15.0 CPU score average of 1,785
CPU-Z multi-core benchmark 4,882
Blender benchmark CPU (quick score) of 18:48.68
* As a note, Blender typically renders MUCH faster under Linux. Even just booting my machine into a Linux environment from an old USB 2 thumb drive without any optimizing gives me:
Linux Blender benchmark CPU (quick score) of 14:22.63 - so whatever you get under Windows, you can expect far greater rendering on Linux.
Instead of building another machine like mine to farm rendering, it 'appears' that I could build 2 of the systems from this video for less and have much higher potential rendering capacity. BUT that's presuming an extrapolation based on the Cinebench and CPU-Z scores. It would be so much better to have a Blender benchmark score for an 'apples to apples' comparison.
You don't need any familiarity with Blender to use the Blender benchmark; you just unzip it to a folder and run it. You can get the test from opendata.blender.org/
There is information on the benchmark itself and the database they keep at
www.blender.org/news/introducing-blender-benchmark/
Thank you for taking the time to read and hear me out.
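For a quick-and-dirty version of the same idea without the official benchmark, a hedged sketch (the .blend path is hypothetical; -b and -f are Blender's standard background-render flags):

import subprocess, time

t0 = time.time()
# Render frame 1 of a scene headlessly; different machines rendering the
# same .blend file give a rough apples-to-apples CPU comparison.
subprocess.run(["blender", "-b", "scene.blend", "-f", "1"], check=True)
print(f"Render time: {time.time() - t0:.1f} s")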
Do you plan on testing opteron 6300 series processors, Phil? They should be a good bit faster as they are at least based on Piledriver and not Bulldozer. The clock speeds are also higher on these.
The original Threadripper.
It would be nice to see some comparison renders against a more modern machine, to see whether the number of cores is useful in software that would actually use them.
Hello, since you are visiting dual-CPU setups, you might wanna look at the ASUS Z8NA-D6 combined with 2 Xeon X5675s.
You can find the mainboard on AliExpress for about US$75, and you might have the Xeons already.
These systems run just fine (even 3D games) when split in two with the "ASTER Pro" software.
Phil should try the trial version some time.
Perneta, what is that software exactly?
What exactly happens to a system when it is "split in two"?
At 7:27, was the CPU utilization frozen, or is this game just using constant resources all the time?
Thank you for your videos. Your channel opened a new world for me. :)
Hmm definitely looks like a bug or glitch :)
I've been looking for a decent review... looking at doing a similar build for a server/workstation hybrid... need to free up my main PC.
I was checking out some Opterons but thought it wouldn't be worth it.
It's not worth any time or money. A 6-core X5670 scores similarly to a single one of these Opterons, but it can be overclocked, so it benefits from higher single-core performance. And 6 cores 12 threads against "16 cores" seems like a joke, but really it's more like 8 cores 16 threads...
@@bleyz3557 it is actually 16 integer ALUs and 8 floating-point units: each core has its own integer ALU but shares a floating-point unit with a second core. For floating-point work it is essentially 8 cores, and games happen to be full of floating-point operations (coordinates and whatnot).
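Purely as a sketch, you can see that asymmetry by scaling an integer-only loop against a float loop across the cores (CPython's interpreter overhead will mute the effect; a compiled language shows it far more starkly):

import time
from multiprocessing import Pool

def int_work(n):
    acc = 0
    for i in range(n):
        acc ^= i + (i << 1)  # integer ALU work only
    return acc

def fp_work(n):
    acc = 0.0
    for i in range(1, n):
        acc += 1.0 / i       # floating-point work, hits the shared FPU
    return acc

def elapsed(fn, workers, n=1_000_000):
    with Pool(workers) as pool:
        t0 = time.perf_counter()
        pool.map(fn, [n] * workers)
        return time.perf_counter() - t0

if __name__ == "__main__":
    for fn in (int_work, fp_work):
        t1, t16 = elapsed(fn, 1), elapsed(fn, 16)
        # ideal scaling is x16; shared FPUs should drag fp_work below that
        print(f"{fn.__name__}: x{16 * t1 / t16:.1f} scaling")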
No old Opterons are really worth it at this point, and most old Xeons aren't any more either with 1st gen Ryzen getting super cheap.
I believe there is a modified BIOS for some Supermicro G34 motherboards that allows you to overclock. Have you ever tried overclocking these Opterons?
You can overclock these Supermicro G34 boards with a custom BIOS. I have a quad-socket with 6378s. Got them up to 3 GHz.
If memory serves, aren't there modified BIOSes available for some of the Supermicro G34 boards to enable overclocking features?
Josiah Moorhouse Yes, and there is always FSB overclocking using apps like SoftFSB, setFSB etc...
@@ikannunaplays Can you overclock with AMD Overdrive or is that functionality disabled?
Kinda sucks for games, but what about rendering videos and 3D apps? These 2 Opterons might have a shot!
Another awesome project. I hope you had a good holiday.
I love your videos Phil. Keep up the good work.
What are you doing?!! 😂
For crysis sake, do at least some productivity tests like Blender or V-Ray; it might work for some people in that scenario, since you bothered buying and assembling all that stuff 😉
But us, feisty fellas of the unwashed internets. 😂Always troll about de Crying this n that.
So, will it run, CrySiS? 😍👍🙄❤️
For real! I can't believe he set up a server board to test games only.
Yeah, I'd like to always have CPU-Z numbers just to have one that's completely consistent and not dependent on anything but the CPU.
And yes, I think it would be good to have some productivity benchmarks too, because there are an awful lot of us out here who need that kind of performance, but couldn't care less about gaming optimization (often not the same thing).
@@mebibyte9347 yeah, I think it might fare better if he tried putting all those cores and threads and that capacious RAM to the max by building an OS... using an ultimate Linux building script... How long does it take to build one complete distribution along with its supporting repository system?
These work great for distributed computing needs. Not everything is about gaming, and play with no work makes you a broke-ass bum.
Hi @PhilsComputerLab, it is not simply down to clock speed, but more heavily down to architecture. If I downclock my Ryzen 5 1600 from 3.6 GHz down to 2.1 GHz, I am roughly at the performance of my FX 8350 (4.2 GHz), which in turn is roughly twice the performance of this Opteron of yours.
Sad that most games rely so much on single-core performance, but also slightly understandable in the past, due to the difficulty of multicore programming.
I'm still rocking some Xeons from 2012 and I have to admit I'm still really happy with them. They still work fast, and with new boards coming out where you can overclock them, there's lots of potential.
I wish there was some overclocking on the platform for AMD, because I'm sure it's possible.
I mean, I actually do work on my machine, and every once in a while my son comes over and he games, and even though I have an older Dell OEM T3500, the sucker really kicks ass with a 1070 Founders Edition GPU.
There has to be a killer app that should be in your next test run where you can actually use all those cores to do some real work that would be fun around the house or fun to actually do.
These processors are great for number crunching, and they're a bit cheaper than the equivalent Intel server equipment. Neither are good for gaming. Probably better suited for video editing, where most software can take advantage of all the cores. You typically won't find fast single-threaded performance in the server/workstation world. Still fun to see how they actually perform for gaming. Maybe in the future they will perform better as game engines become more optimized for high thread counts.
I agree.
It may be a difficult find, but I've always wondered about a dual Tualatin build.
There's actually a channel on youtube with semi-modern games on dual Tualatin and AGP graphics, like radeon 3850.
MrOfforDef is the channel, interesting stuff he has, tens of games tested on dual CPU boards with AGP graphics.
Love the videos with used server/oem parts keep it up :)
5:17 I would disagree here; during the benchmarks the GPU is hitting max usage while the CPUs divvy up the work and occasionally hit 80% on some cores.
Hey Phil, shoutout from the States! I really enjoy and love the videos that you make. I just found an old Socket 754 board in my tech junkyard and I'm trying to find an old Athlon 64 for a decent price lol. I just like tinkering around with older hardware, even if it's almost 20 years old. My first build was on Socket 754, a Sempron 2600 "SDA2600AIO2BO"; that was well over a decade ago, it's crazy to say that.
Totally worth as a home server. I'm planning to get a quad 16-core version. For my projects that will be more than enough!
Did you count the power cost? That's one reason why I'm switching everything to Raspberry Pis.
This board would be awesome for a multi head virtualized XP retro machine (like LTT 7 Gamers, 1 CPU).
Thanks for this, it answered a couple of questions I had myself. It almost seems to be a lot of bottlenecking due to the NUMA architecture after all. There is a performance issue with switching threads across nodes in highly threaded apps. Could you try using "CorePrio" and dissociating the nodes to see if it improves things? It is an issue under Threadripper as well, and I did see some gains with a single 6380.
bitsum.com/portfolio/coreprio/
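Related: you can check how many NUMA nodes Windows actually sees with a few lines of ctypes (each Opteron 6380 is two dies in one package, so a dual-socket board should report 4 nodes, though that's an assumption about this particular BIOS config):

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
highest = wintypes.DWORD()
# GetNumaHighestNodeNumber returns the highest NUMA node index, zero-based.
if kernel32.GetNumaHighestNodeNumber(ctypes.byref(highest)):
    print(f"Windows sees {highest.value + 1} NUMA node(s)")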
Great video, great system!! (I'm talking from my hardware collector perspective, of course, not performance wise :-P).
Would love to know how it would handle Premiere and After Effects... Thanks for the video!
Nice review... I had one of these CPUs back then, but I didn't have the mainboard until now... so I'm curious about the performance.
I'm really loving your channel. I live in South America, where most parts are so expensive just because they have a Core i in the name, lol. No joke, here in Chile a 2nd gen i7 goes for 100 bucks, and 60 bucks for a 2nd gen i5.
Now install Linux on it and setup a nice server with VMs so those processors feel at home.
Love these types of reviews, Phil. Even though these older processors aren't the highest performers, they can be very fun to experiment with and frankly, affordable under the right circumstances.
Never would have seen this one coming. Think the issue here is CPU speed is just too low and I don't see there being an advantage in 2 CPUs over 1 as it really has enough cores with 1 CPU for most needs. Still interesting to see how it performs compared to the single CPU you tested the other week. As you say some may have a better use for it, like actually using it for a server. If you needed to make virtual machines for example, then the cores make more sense.
As a file/NAS server or Plex server it should be an awesome setup. But like you say, for gaming, it's rubbish.
The issue with them was that they have 2 INT units per module but only 1 FPU, so if you're doing integer ops a module acts like 2 CPUs, but for FPU ops it's only 1 CPU.
You will only notice the downgrade if you have two threads running floating-point instructions all the time, and even that is very relative. I ran an N-body simulator (pure float math using every core) single-threaded and with 8 threads on an FX-8370E, and the speedup was over seven times with respect to the single-threaded version.
These dual cpu boards have always interested me. Keep up the great content man👍👍
Fascinating experiment, keep up the great videos very enjoyable to watch.
I love these dual socket game benchmarks!
It is possible to overclock these just fine. With the stock/locked retail chips you can just flash the OCNG BIOS and then increase the reference clock. I bought a set of Opteron 6328s, for example, and they ran refclock 213 just fine. Essentially that gave me an all-core turbo of 3.7 GHz and a single-core turbo of just over 4 GHz.
OCNG will also allow the server board to run the DRAM at XMP timings.
I'm currently running a pair of Opteron 63xx ES chips on my quad-socket Supermicro H8QGi-F. These are unlocked. I run them at an all-core turbo of 4 GHz and a single-core turbo of 4.5 GHz. Even though they have 32 cores between them, I have the downcore mode in the BIOS set to "Compute Unit". This effectively power-gates one of the Piledriver cores in each compute unit, allowing the other Piledriver core full unshared access to the L2 cache, the L1 instruction cache, as well as the decoder, which is good for about 15% extra IPC per core. Downcoring like this gives you 8 cores per 6380, so on my system 16 cores instead of 32.
For the unlocked ES chips, you can control the p-states using TurionPowerControl very easily.
I'm currently experimenting with how high I can get the ref clock on those units. Currently at a stable 205, but I think at least 210 should be possible, for a 5% overclock on the L3 and the memory controllers.
I'll try to post a video once I get the outer limits of these chips figured out and all settings Prime95 and RMMT stable.
PS: if you would like me to send you my spare set of 6328s, let me know. Those run at 3.5/3.8.
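For anyone checking the arithmetic: effective clocks scale linearly with the reference clock (stock 200 MHz). The 6328 multipliers below are assumptions back-derived from the 3.5/3.8 GHz figures quoted above:

STOCK_REFCLK = 200.0  # MHz

def effective_ghz(multiplier, refclk=STOCK_REFCLK):
    return multiplier * refclk / 1000.0

for label, mult in (("all-core turbo", 17.5), ("single-core turbo", 19.0)):
    print(f"{label}: x{mult} at refclock 213 = {effective_ghz(mult, 213):.2f} GHz")
# all-core turbo:    3.73 GHz  (the "3.7 GHz" above)
# single-core turbo: 4.05 GHz  (the "just over 4 GHz" above)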
It's because of your GPU, I believe; it's pinned at 99% usage while the processor is at 30-50% usage, so maybe not the most accurate. I'm no expert, so take that with a grain of salt, but I think that may be why.
Curious when retired 1st gen Epyc chips with motherboards will hit the market at good prices; probably another year or two.
This would be fun to use for home lab since it gets so many cores to play with... but the full setup price that I can buy is around $500... kinda wasteful.
Gaming aside, would this combo make for a home server build worth the money?
The power draw on these old machines is what stops them from being "worth it", even if they are still quick by today's standards. I have an old 16-core Xeon workstation. Nice machine, plenty fast. But it uses two to three times the power of my Ryzen machine.
Yea good point :)
My main driver is a T7500 with dual E5649s and 48 GB of RAM. It's a 1080p gaming beast, and the GPU was always the limit (first a 560 Ti, then an R9 280, and now an RX 580).
watching ... hoping for cinebench numbers... and you did. Awesome.
This may have been decent rig for a web or application server.
This is better suited to being a hypervisor host like ESXi. You can set it up with a bunch of VMs, install Kubernetes on the VMs, and host your own private cloud.
I'm just trying that experiment with a dual Xeon 2620 board from a few years back. Performing like a champ in Win 10 at the moment, good few years of service left in it.
I LOVE seeing the core utilization on high core count systems.
Shadow of the Tomb Raider uses multicore fairly well.
Meanwhile... Fortnite... primarily dual-core utilization...
Just a random thought, anyone try fortnite on a g3258 with an overclock and a low power GPU?
Or maybe g4560?
If it primarily uses 2 cores, maybe it will run well on a dual core CPU with a higher base frequency and a dGPU that performs around rx570 or better.
@@greenprotag dunno. Fortnite kinda stutters on my i5 4460 4 core 4 thread unless I cap the framerate to 144fps instead of letting it go past 200
Huh, odd. Which GPU are you using with the i5? Also, I think that model i5 has a max boost clock of 3.4 GHz... not the best IPC, but what can we ask from a mid-tier Haswell i5?
@@greenprotag rx 570 4gb. The game has got way more cpu demanding lately for some reason and as a result my i5 is now struggling. Planning to put an i7 4770k on my board and overclock it to 4ghz or so. Should get rid of any stutters
I have a i3 4360 with a higher boost clock and a low end GPU... maybe I'll spin up fortnite when I get the chance.
This setup is about 60% as good as my 3600X; it's crazy when you think this is from about 9 years ago and is still capable. It actually matches the R15 score.
It's really not even close. The only type of workload that this setup would work well with is heavily threaded integer tasks. Performance is halved for any floating point tasks since each CPU has half the number of FPUs as integer cores.
As shown in the video, only one game was able to use all 32 threads, Most of the games here did use ~16 threads, but there was no benefit as shown with the mediocre frame rates. With how cheap older Ryzen parts are now, it doesn't really make sense to build this heat monster. People are dumping 2700x chips by the truckload for the newer 3000 series parts, and I've seen them as low as $150.
Really, a 3600x is only running 1300's in Cinebench R15? I would be really disappointed with that. My R7 1700 is turning 1840cb on R15 with a bit of an overclock. I would have expected the 3600x to be north of 1500.
@@mpeugeot Cinebench is a synthetic benchmark only representative of heavily threaded tasks. The numbers it gives you only matter in such types of workloads, which games are not.
Games prefer fewer, higher clocked and more efficient cores, not tons of slow inefficient cores. The games run in this video show this where most struggle to keep 60 fps at 1080p and don't even utilize all available cores.
@@GGigabiteM yes, I understand that, but his 3600x is still putting up some pretty lackluster numbers. Core for core and clock for clock, the 3600x should be faster on cinebench than my R7 1700. So an R5-1600 overclocked to 4.1 GHz would be expected to get approximately 1360-1380 points on cinebench R15, a 3600 should be well north of 1500, and the 3600x above that.
@@mpeugeot Maybe there's something up with the memory, Ryzen CPUs are super sensitive to memory speed.
I hope you are soon able to get a second gen Threadripper or Epyc CPU, just so you could see how things have evolved on the server side of AMD.
You need a way to load balance the CPU cores. NUMA balancing could help as well.
I was interested to see how it stacked up against my Ryzen 1700X.
In CPU-Z I'm getting 411 points on single-core performance and 4334 on multi-thread.
Jeez! I thought the Opteron might do a bit better on multi-thread!
Just shows you how much AMD has improved!
You need to overclock that R7 1700
valid.x86.fr/ewp27s
CPU-Z 473/5190 (I am over 476 now on single core)
you can pick up a Dell R815 equipped with 4 of these processors, on e-bay for around $500 including delivery, if you need a lot of cores for some reason, an R815 is a good buy.
Phil, please give some reference values for the games you use for benchmarks. I would like to know what these CPUs are comparable to in terms of modern setups.
You actually can overclock some Opterons with PhenomMsrTweaker. There is a guy on Reddit who overclocked a dual Opteron system (32 cores) to almost 4.2 GHz!
Edit: there is a thing called OCNG5, which is some sort of firmware mod for Opteron mobos. It's compatible with the board shown in this video.
In fact this platform performs much better now than 9 years ago when it was first released, since software is way more optimized for multicore performance.
I think it is useful as an editing rig; I can see it being a helluva budget platform for people doing 3D rendering. Not so much for video editors, since many people will use Adobe Premiere, which still depends on single-core performance.
Not really, this isn't budget hardware; it's expensive and requires a lot of power. To make things worse, most of these motherboards have proprietary power connectors and lack PCIe slots.
You should do a comparison with the dual 'X79' board on AliExpress and a pair of 2650 v2s; the combo is around $300 US.
Nice video. How did you get all CPU percentages on the screen?
Nice looking board for the age. How's it stack up against a dual socket 2011 Xeon board from the same era?
Well goddamn those are some slow as hell cores. My Ryzen 5 1600 is about on par with them despite having about 16% as many cores.
I was always wondering in these benchmark videos: how do you put that overlay in the top-left corner?
Another great video but I question the 130W idle draw, that seems way too high.
Also you could investigate using Nvidia Frameview to really test the FPS - it gives important frame time data which translates to how smooth the game feels. Also some video encoding benchmarks would be nice for other streamers and such. Perhaps compare to buying a modern encoding system or whatever.
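Seconding the FrameView idea; once you have its CSV, the smoothness numbers are a few lines of Python (the file name is hypothetical; MsBetweenPresents is the per-frame interval column FrameView inherits from PresentMon):

import csv, statistics

with open("FrameView_log.csv") as f:
    frame_ms = sorted(float(row["MsBetweenPresents"]) for row in csv.DictReader(f))

avg_fps = 1000.0 / statistics.mean(frame_ms)
# "1% low": FPS at the 99th-percentile (slowest 1%) frame time
one_percent_low = 1000.0 / frame_ms[int(len(frame_ms) * 0.99)]
print(f"avg {avg_fps:.1f} FPS, 1% low {one_percent_low:.1f} FPS")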
Great vid, very informative.
Though it's a bit weird to see you disassembling the PC as you talk, instead of assembling it.